Fulltext search in archive

Results 1 to 30 of 76:

The Digital Media in Lithuania: Combating Disinformation and Fake News

Aelita Skarzauskiene, Monika Maciuliene, Ornela Ramasauskaite

Acta Informatica Pragensia 2020, 9(2), 74-91 | DOI: 10.18267/j.aip.1348749

The prevalence of so-called “fake news” is a relatively recent social phenomenon that is linked to disinformation, misinformation and other forms of networked manipulation facilitated by the rise of the Internet and online social media. The spread of misinformation is among the most pressing challenges of our time. Sources from which disinformation originates are constantly changing and present an enormous challenge for real-time detection algorithms and more targeted, science-based socio-technical interventions. The primary aim of this paper is to illuminate media users’ practices and interpretations, focusing on three perspectives: general attitudes to fake news, perceived interaction with disinformation and opinions on counteraction with respect to fake news. The innovative character of the research lies in its focus on community solutions to combat disinformation and on the collaboration between media users, media organizations, scientists, communication managers, journalists and other important actors in the media ecosystem. Based on insights from interviews with communication field experts, the paper sheds light on the efforts of Lithuanian society to confront the problem of fake news in the digital media environment. Lithuania is also an interesting case study for fake news due to its status as a former Soviet state now in the EU. Our research indicates that not all media users are prepared and/or have the necessary competencies to combat fake news, such that citizen engagement might actually negatively influence the quality of the counteraction process. Nevertheless, proactive citizens’ organizations and NGOs could be an important catalyst fostering collaboration between stakeholders. The responsibility of governments could be to create the structures, methodologies and supporting educational activities needed to involve stakeholders in collaborative activities combating disinformation.

Multi-Class Text Classification on Khmer News Using Ensemble Method in Machine Learning Algorithms

Raksmey Phann, Chitsutha Soomlek, Pusadee Seresangtakul

Acta Informatica Pragensia 2023, 12(2), 243-259 | DOI: 10.18267/j.aip.2106723

The research herein applies text classification to categorize Khmer news articles. News articles were collected from three online websites through web scraping and grouped into nine categories. After text preprocessing, the dataset was split into training and testing sets. We then evaluated the performance of the ensemble learning method via machine learning classifiers with k-fold validation. Various machine learning classifiers were employed, namely logistic regression, Complement Naive Bayes, Bernoulli Naive Bayes, k-nearest neighbours, perceptron, support vector machines, stochastic gradient descent, AdaBoost, decision tree and random forest. To improve the accuracy of categorizing Khmer news articles, GridSearchCV was used to find the optimal hyperparameters for each machine learning classifier, with TF-IDF and Delta TF-IDF used for feature extraction. The results show that the highest accuracy was achieved by the ensemble learning method with the support vector machine using the optimal hyperparameters (C = 10, kernel = rbf), at 83.47% with TF-IDF and 83.40% with Delta TF-IDF. The model establishes that Khmer news articles can be accurately categorized.
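
To make the described pipeline concrete, the following minimal scikit-learn sketch combines TF-IDF features, a GridSearchCV-tuned SVM and a soft-voting ensemble of several of the classifier families listed above. The toy English documents, labels and parameter grid are invented stand-ins, not the paper's Khmer corpus or actual settings.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import ComplementNB
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Toy corpus standing in for the preprocessed Khmer articles (invented data).
docs = [
    "team wins football final", "coach praises young striker",
    "league match ends in draw", "athlete breaks national record",
    "parliament passes new law", "minister announces reform",
    "parties debate election result", "government approves budget",
    "stock market rises sharply", "bank cuts interest rates",
    "exports grow this quarter", "inflation slows consumer spending",
]
labels = ["sport"] * 4 + ["politics"] * 4 + ["economy"] * 4

# Grid-search the SVM hyperparameters, as the paper does with GridSearchCV.
svm = GridSearchCV(SVC(probability=True),
                   {"C": [1, 10], "kernel": ["rbf", "linear"]}, cv=2)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("ensemble", VotingClassifier(
        estimators=[("svm", svm),
                    ("lr", LogisticRegression(max_iter=1000)),
                    ("cnb", ComplementNB())],
        voting="soft")),
])

pipeline.fit(docs, labels)
print(pipeline.predict(["striker scores in final match"]))  # e.g. ['sport']
```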

Impact of Social Media Application Qualities on Using Them for Daily News

Davod Farhadi, Ali Maroosi

Acta Informatica Pragensia 2022, 11(1), 48-61 | DOI: 10.18267/j.aip.1645213

A model is introduced to investigate the effect of social media application qualities on the use of these applications for daily news. A standard questionnaire was designed and distributed among randomly selected social media users in the city of Neyshabur in Iran. The content of the questionnaire was validated by experts and its reliability was verified using Cronbach's alpha. Random sampling was used to identify participants. The SmartPLS software was used to investigate the research findings, and structural equation modelling was used for data analysis. The results show that the factors of system quality, information quality, service quality and personalization of applications affect their perceived usefulness. System quality and service quality affect the perceived ease of use of applications. However, information quality does not affect perceived ease of use. The results also show that perceived usefulness has a greater effect on attitude (path coefficient of 45%) than perceived ease of use does (path coefficient of 26%). Personalization has the strongest positive impact on perceived usefulness, and service quality has a strong impact on perceived ease of use. Facilitating conditions have a positive impact on the use of social media and on their use for news. Furthermore, the results show that the factors affecting application use are rated more favourably for Telegram than for Viber. These findings help explain why Iranian users migrate from Viber to Telegram as their social media application.

Digital Twins in the Context of Ensuring Sustainable Industrial Development

Yuliia Biliavska, Valentyn Biliavskyi

Acta Informatica Pragensia 2026, 15(1), 198-220 | DOI: 10.18267/j.aip.2911549

Background: Currently, there is a megatrend towards digitalisation and servitisation using digital technologies and digital twins to support the digital transformation of the economy. In the literature, new digital technologies are seen as creating added value, strengthening customer relationships and accelerating the shift from manufacturing towards servitisation. The implementation of such a complex of technologies and business solutions can lead to the adaptation of the product and service life cycle, as well as the entire business model, to full servitisation. Objective: This study reveals the role of digital twins in the context of entrepreneurship in compliance with the Sustainable Development Goals (SDGs). By constructing a thematic map of scientific clusters and SDGs, the relationship between science and practical aspects is established. Methods: The research draws on methods such as scientific abstraction and synthesis, historical analysis, grouping, analogy, structural-logical modelling, tabular and logical generalisation, as well as bibliometric analysis based on VOSviewer software. Results: The study analyses the evolution of the latest technology, which demonstrates the relevance of digital twins as one of the key technologies for digitalisation in many business processes. Special attention is paid to the role of digital twins in the implementation of the SDGs. The results of the bibliometric review indicate scientific interest in researching digital twins in the fields of modelling, information technology, operational management, automation and robotics. The thematic map combining scientific clusters and SDGs highlights the importance of digital twins in entrepreneurship and ensuring sustainable industrial development. Conclusion: This study provides valuable information for managers, as it demonstrates the need to implement digital twins, which enable intelligent manufacturing, serve as the main technology supporting Industry 4.0, can reflect physical information in cyberspace and can manipulate physical objects by studying and researching information models in manufacturing. Therefore, future research should focus on developing reliable mechanisms for applying digital twins in the context of the SDGs in areas such as the economy, social aspects and the biosphere. This will ensure the competitiveness of the industrial sector and the country.

Hateful and Other Negative Communication in Online Commenting Environments: Content, Structure and Targets

Vasja Vehovar, Dejan Jontes

Acta Informatica Pragensia 2021, 10(3), 257-274 | DOI: 10.18267/j.aip.1655676

Information and communication technologies are increasingly interacting with modern societies. One specific manifestation of this interaction concerns hateful and other negative comments in online environments. Various terms are used to denote this communication, from flaming, indecency and intolerance to hate speech. However, there is still no umbrella term that broadly captures this communication. Therefore, this paper introduces the concept of socially unacceptable discourse, which serves as the basis for an empirical study that evaluated online comments scraped from the Facebook pages of the three most-visited Slovenian news outlets. Machine-learning algorithms were used to narrow the focus to topics related to refugees and LGBT rights. Ten thousand comments were manually coded to identify and structure socially unacceptable discourse. The results show that about half of all comments belonged to this type of discourse, with a surprisingly stable level and structure across media (i.e., right-wing versus mainstream) and topics. Most of these comments could also be considered a potential violation of hate speech legislation. In the context of these findings, the political and ideological consequences and implications of mediatised emotions are discussed.

Culturally Sensitive Website Elements and Features: A Cross-National Comparison of Websites from Selected Countries

Radim Cermak

Acta Informatica Pragensia 2020, 9(2), 132-153 | DOI: 10.18267/j.aip.1376766

The goal of this case study is to compare websites from nine countries (Austria, Chile, China, Japan, Latvia, Nigeria, Saudi Arabia, the US and the Czech Republic) and, based on this comparison, to provide the missing link between website elements and cultural dimensions for better cultural adaptation of web content. Hofstede’s cultural dimensions were used for the selection of countries for this study. To examine the influence of culture on websites, countries with extreme values of cultural dimensions were selected. An important benefit is that this study takes into account all of Hofstede's cultural dimensions, including the latest one (indulgence vs restraint). For each country, 50 websites were selected from areas that most closely reflect the culture of the country. The main focus was on the selection of an appropriate representative sample of websites for each state. A total of 450 pages were analyzed. For each website, the 42 web elements determined to be the most important were monitored. Moreover, the presence of various types of social networks and five general characteristics were monitored. The findings show that culture influences website design. The results of this study reveal a connection between website elements and Hofstede’s cultural dimensions. For example, headlines are important for countries with a high value of individualism and uncertainty avoidance and a low value of power distance and indulgence. Newsletters are associated with a high value of indulgence and a low value of long-term orientation, and a search option with a high value of power distance. Overall, about 20 culturally sensitive website elements were identified. The study also provides a comprehensive overview of website characteristics for each of the selected countries. For UX designers, web localization specialists, academicians and web developers, this study provides an original view of culturally sensitive website elements and features.

University Library Information Resources as a Basis for Enhancing Educational and Professional Programmes in Information, Library and Archival Studies

Nadiia Bachynska, Yurii Horban, Tetiana Novalska, Vladyslav Kasian, Nataliya Gaisynuik

Acta Informatica Pragensia 2024, 13(1), 62-84 | DOI: 10.18267/j.aip.2295747

The article aims to explore the role of information resources provided by university libraries in strengthening educational and professional programmes in the field of Information, Library and Archival Studies based on the Scientific Library of Kyiv National University of Culture and Arts. The purpose of this study is to investigate how these resources can contribute to the overall growth and development of students and professionals in the field. Using a descriptive and analytical research methodology, the study examines the diverse range of information resources available in the library, including digital databases, online journals, e-books and other relevant materials. The findings reveal that these resources serve as a solid foundation for enhancing knowledge, skills and competencies required in the field. The practical implications of this research emphasize the importance of utilizing the rich information resources of university libraries to design and implement effective educational and professional programmes. By utilizing these resources, educational institutions and professionals can strive for continuous improvement, staying updated with the latest trends and advancements in the field. This study highlights the critical role of university library information resources in augmenting educational and professional programmes in Information, Library and Archival Studies. The findings underscore the need for collaboration and strategic utilization of these resources to shape well-rounded professionals capable of meeting the evolving demands of the information age.

Systematic Review on Algorithmic Trading

David Jukl, Jan Lansky

Acta Informatica Pragensia 2025, 14(3), 506-534 | DOI: 10.18267/j.aip.2766630

Background: Algorithmic trading systems (ATS) are defined by the use of computational algorithms for automating financial transactions. They have become a critical part of modern financial markets because of their efficiency and ability to carry out complex strategies. Objective: This research involves a systematic review that assesses the market impact, technological advancements, strategic approaches and regulatory challenges related to algorithmic trading. Methods: Following PRISMA 2020 guidelines, this study conducts a systematic literature review by screening 1,567 articles across five academic databases, namely IEEE Xplore, ACM Digital Library, SpringerLink, Web of Science and SSRN. After applying predefined inclusion and exclusion criteria, 208 peer-reviewed journal and conference papers published between 2015 and 2024 are selected. The PICOC framework is used to define the review scope. Data are extracted using structured templates capturing study details, research objectives, artificial intelligence (AI) integration, profitability analysis and limitations. Tools such as Rayyan, NVivo, MS Excel and Zotero support the screening, coding and qualitative synthesis of findings. Results: AI methods, especially machine learning (used in 50% of the studies) and sentiment analysis (20%), significantly improve predictive accuracy and profitability. Most studies focus on equities (35%) and forex (30%), with high-frequency trading being the most examined strategy (30%). Challenges include latency (30%), scalability (25%) and regulatory issues (25%). Conclusion: Future research should prioritize ethical frameworks, regulatory clarity and wider access to AI-driven ATS components. This review provides a robust foundation for academics and practitioners to innovate and optimize algorithmic trading strategies.

Exploring Oral History Archives Using State-of-the-Art Artificial Intelligence Methods

Martin Bulín, Jan Švec, Pavel Ircing, Adam Frémund, Filip Polák

Acta Informatica Pragensia 2025, 14(2), 207-214 | DOI: 10.18267/j.aip.2683543

Background: The preservation and analysis of spoken data in oral history archives, such as Holocaust testimonies, provide a vast and complex knowledge source. These archives pose unique challenges and opportunities for computational methods, particularly in self-supervised learning and information retrieval. Objective: This study explores the application of state-of-the-art artificial intelligence (AI) models, particularly transformer-based architectures, to enhance navigation and engagement with large-scale oral history testimonies. The goal is to improve accessibility while preserving the authenticity and integrity of historical records. Methods: We developed a question-asking framework utilizing a fine-tuned T5 model to generate contextually relevant questions from interview transcripts. To ensure semantic coherence, we introduced a semantic continuity model based on a BERT-like architecture trained with contrastive loss. Results: The system successfully generated contextually relevant questions from oral history testimonies, enhancing user navigation and engagement. Filtering techniques improved question quality by retaining only semantically coherent outputs, ensuring alignment with the testimony content. The approach demonstrated effectiveness in handling spontaneous, unstructured speech, with a significant improvement in question relevance compared to models trained on structured text. Applied to real-world interview transcripts, the framework balanced enrichment of the user experience with preservation of historical authenticity. Conclusion: By integrating generative AI models with robust retrieval techniques, we enhance the accessibility of oral history archives while maintaining their historical integrity. This research demonstrates how AI-driven approaches can facilitate interactive exploration of vast spoken data repositories, benefiting researchers, historians and the general public.
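
A minimal sketch of the question-generation step is shown below, using the Hugging Face transformers library. The paper fine-tunes its own T5 on interview data; here the generic "t5-small" checkpoint and the "generate question:" task prefix are placeholders so the example runs, and the semantic continuity filter is omitted.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Stand-in checkpoint: the paper fine-tunes its own T5 on interview data;
# "t5-small" is only a placeholder so the sketch is runnable.
tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

passage = ("We arrived at the camp in the winter of 1944 and were "
           "separated from our families at the gate.")

# Hypothetical task prefix; the actual fine-tuning prompt is not public.
inputs = tok("generate question: " + passage, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32, num_beams=4,
                     num_return_sequences=4)
for seq in out:
    print(tok.decode(seq, skip_special_tokens=True))
```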

Political Actors in the Age of Generative Artificial Intelligence: The Czech Perspective

Daniel Šárovec

Acta Informatica Pragensia 2025, 14(2), 282-295 | DOI: 10.18267/j.aip.2724115

Background: The phenomenon of artificial intelligence (AI) has been studied for decades. However, only the ascent of tools such as ChatGPT brought AI into a broader public consciousness, as people started using it for a broad spectrum of tasks and questions. Objective: The goal of this overview article is to present a new perspective on AI issues in the context of the social sciences and, more specifically, political science. Indeed, AI tools play an important role in the political process—a fact reflected by governments and other political actors, including political parties. Methods: The qualitatively and interpretively oriented paper seeks to demonstrate existing connotations of the relationship between AI and politics in the Czech context. The text is designed as an overview based on secondary sources. We first focus on AI popularity and use in the general public and public institutions. Then the article focuses on government strategies with implications for international organizations. The final part outlines the relationship between generative AI and Czech political parties. Results: The results indicate that the popularity of AI grew substantially after OpenAI launched its model. Nowadays, generative AI-based tools are commonly used by various public institutions. To date, the Government of the Czech Republic has issued two national strategies on AI issues. Political parties are among the actors using generative AI on a daily basis. Conclusion: The analysis seeks to fill in the blanks in this under-researched area and to demonstrate what kind of interdisciplinary implications of the AI–politics relationship can be examined. Moreover, we view the gradual adoption of AI tools as the next step in the process of adaptation to new digital tools that started years ago.

Measuring the Feasibility of a Question and Answering System for the Sarawak Gazette Using Chatbot Technology

Yasir Lutfan bin Yusuf, Suhaila binti Saee

Acta Informatica Pragensia 2025, 14(3), 365-392 | DOI: 10.18267/j.aip.2635959

Background: The Sarawak Gazette is a critical repository of information pertaining to Sarawak’s history. It has received much attention over the last two decades, with prior studies focusing on digitizing and extracting the gazette’s ontologies to increase the gazette’s accessibility. However, the creation of a question answering system for the Sarawak Gazette, another avenue that could improve accessibility, has been overlooked. Objective: This study created a new system to generate answers for user questions related to the gazette using chatbot technology. Methods: This system sends user queries to a context retrieval system, then generates an answer from the retrieved contexts using a Large Language Model. A question answering dataset was also created using a Large Language Model to evaluate this system, with dataset quality assessed by 10 annotators. Results: The system achieved 55% higher precision and 42% higher recall than previous state-of-the-art historical document question answering systems, while sacrificing only 11% in cosine similarity. The annotators overall rated the dataset 2.9 out of 3. Conclusion: The system can answer the general public’s questions about the Sarawak Gazette in a more direct and friendly manner than traditional information retrieval methods. The methods developed in this study are also applicable to other Malaysian historical texts written in English. All code used in this study has been released on GitHub.
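
The retrieve-then-generate design the abstract describes can be sketched in a few lines. Below, TF-IDF vectors and cosine similarity stand in for the paper's context retrieval system, the gazette passages are invented, and the final LLM call is left as a placeholder prompt since the actual model is not specified here.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for digitized Sarawak Gazette passages (invented text).
passages = [
    "The Rajah visited Kuching in May and opened the new wharf.",
    "A census of Sibu recorded growth in the pepper trade.",
    "Flooding along the Rejang River damaged several longhouses.",
]

def retrieve(query, k=2):
    """Rank passages by cosine similarity and return the top k as context."""
    vec = TfidfVectorizer().fit(passages + [query])
    sims = cosine_similarity(vec.transform([query]),
                             vec.transform(passages))[0]
    return [passages[i] for i in np.argsort(sims)[::-1][:k]]

def answer(query):
    # The paper generates the answer with a Large Language Model; the
    # returned prompt is a placeholder for whichever LLM API is available.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(answer("What happened along the Rejang River?"))
```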

Ethical Application of Artificial Intelligence in the Contemporary Information Society: A Scoping Review

Marija Kuštelega, Renata Mekovec

[Ahead of Print] Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.308136

Background: Artificial intelligence (AI) has become a fundamental part of everyday life, making it crucial to integrate AI into the information society in ways that protect individual rights. Objective: This study explores the perspectives of different stakeholders on the ethical use of AI. The aim of this research is to identify practical measures that can help address ethical challenges associated with AI deployment. Methods: A scoping literature review approach was adopted, focusing on the most relevant articles addressing the ethical aspects of AI usage from the Web of Science Core Collection and Scopus databases. The analysis was performed with a focus on the perspectives of four key stakeholders: policymakers, AI innovators, business leaders and individuals. Results: Findings highlight key measures to promote ethical AI usage: technical, organisational, regulatory and individual measures. In this context: (1) policymakers are responsible for establishing governance and regulations; (2) AI innovators must embed ethics into AI systems; (3) business leaders should establish ethical policies and guidelines; and (4) individuals need to think critically and use AI responsibly. Conclusion: The responsible deployment of AI requires a comprehensive approach that involves the collaboration of all relevant stakeholders. The future development of AI relies on the adoption of ethical guidelines and the assurance of responsible AI system design.

Cloud-Based Large Language Model Deployment: A Comparative Analysis of Serverless and Bring-Your-Own-Container Architectures

Mateusz Ploskonka

[Ahead of Print] Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.31319

Background: Large Language Models (LLMs) have transformed research and industry applications; however, cloud deployment decisions remain complex and poorly documented, particularly for academic researchers operating under budget constraints. Systematic guidance on infrastructure selection for LLM-based research is limited. Objective: This study provides a comprehensive empirical evaluation of cloud-based LLM deployment architectures, examining inference efficiency, serverless platform availability and architectural trade-offs across major cloud providers to deliver actionable guidance for budget-constrained researchers. Methods: The author evaluated 32 open-source LLMs ranging from 0.6 billion to 1 trillion parameters across serverless and Bring Your Own Container (BYOC) deployment configurations. Using the Belebele benchmark, the author analyzed cost–efficiency relationships, serverless platform availability and metrics exposure across Amazon SageMaker, Amazon Bedrock, Azure Serverless and Hugging Face–compatible providers. Results: Model performance follows a logarithmic scaling relationship with parameter count (R²=0.727) and deployment cost (R²=0.639). Models in the 30–50B parameter range achieve 85–90% of maximum accuracy at a fraction of the cost of frontier models. However, serverless availability remains fragmented: only 34.4% of examined models are accessible via serverless endpoints, with minimal cross-platform redundancy (6.2%). Deployment architecture introduces a fundamental trade-off: serverless platforms expose 71% fewer metrics than BYOC approaches while eliminating infrastructure management overhead and idle costs. Conclusion: These findings provide practical guidance for researchers selecting cloud infrastructure under budget constraints. Models in the 7–14B range offer optimal cost efficiency, while the 30–50B range maximizes accuracy per dollar for demanding tasks. The results also challenge the prevailing emphasis on ever-larger models, as diminishing returns become substantial beyond 30B parameters. Persistent gaps in serverless availability and observability highlight the need for greater standardization in cloud platforms.
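
The reported logarithmic scaling relationship can be reproduced in form (not in the actual numbers) with a short NumPy fit of accuracy against log parameter count; the data points below are invented to mimic the shape of the finding, not the study's Belebele measurements.

```python
import numpy as np

# Invented (parameter count in billions, benchmark accuracy) pairs shaped
# like the paper's finding; the real Belebele numbers are not reproduced.
params = np.array([0.6, 3, 7, 14, 32, 70, 180, 1000])
acc    = np.array([0.41, 0.55, 0.62, 0.68, 0.74, 0.78, 0.81, 0.84])

# Fit accuracy = a * ln(params) + b, i.e., a logarithmic scaling law.
a, b = np.polyfit(np.log(params), acc, 1)

pred = a * np.log(params) + b
r2 = 1 - np.sum((acc - pred) ** 2) / np.sum((acc - acc.mean()) ** 2)
print(f"accuracy ~ {a:.3f}*ln(P) + {b:.3f}, R^2 = {r2:.3f}")
```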

Drone Delivery Global Research Landscape: A Bibliometric Analysis

Abdulwahab Funsho Atanda, Daniel Yong Wen Tan, Huong Yong Ting, Wasiu Olakunle Oyenuga, Abdulrauf Uthman Tosho

Acta Informatica Pragensia 2026, 15(1), 221-252 | DOI: 10.18267/j.aip.2961808

Background: Rapid technological advancements have revolutionized research into unmanned aerial vehicles (UAVs), commonly known as drones, particularly in delivery applications. However, despite numerous related publications, there remains a lack of systematic reviews that synthesize challenges, trends and recent advances in drone delivery. To address this gap, the present study conducts a bibliometric analysis to examine evolutionary trends and emerging applications of UAVs between 2015 and 2024. Objective: This study aims to identify established and emerging trends in drone delivery research by analysing articles, journals, authors, institutions, countries and thematic areas. Methods: Previous studies are selected using a systematic approach, followed by bibliometric analysis with tools including VOSviewer, Bibliometrix and ScientoPy, which emphasizes key authors, top journals and countries, collaboration patterns and recurring author keywords. Results: The bibliometric analysis of 1,438 articles from 583 sources authored by 4,333 scholars (2015–2024) reveals a strong interdisciplinary focus in drone delivery research. Military applications largely drove early studies, but recent breakthroughs highlight the integration of artificial intelligence (AI) for autonomous navigation and energy optimization. Emerging themes include the development of drone swarms for scalable applications such as disaster response and agricultural mapping. Geographically, China, the United States and Australia dominate contributions, with extensive international collaborations fostering global innovation. Across journals and authors, the literature reflects a steady evolution from conceptual and technical foundations to applied studies addressing logistics, smart cities and environmental monitoring. Overall, the results suggest that drone delivery research is transitioning from exploratory phases towards the deployment of AI-enabled autonomy. Conclusion: Drone delivery research has evolved from military origins into a global, interdisciplinary field driven by AI. China, the USA and Australia are the leading contributors. Its future hinges on balancing technological innovation—such as autonomous navigation and swarm applications—with ethical, regulatory and societal considerations for sustainable integration.

ResNetMF: Improving Recommendation Accuracy and Speed with Matrix Factorization Enhanced by Residual Networks

Mustafa Payandenick, YinChai Wang, Mohd Kamal Othman, Muhammad Payandenick

Acta Informatica Pragensia 2026, 15(1), 1-21 | DOI: 10.18267/j.aip.2802477

Background: Recommendation systems are essential for personalized user experiences but struggle to balance accuracy and efficiency. Objective: This paper presents ResNetMF, an innovative hybrid framework designed to address these limitations by combining the strengths of matrix factorization (MF) and deep residual networks (ResNet). Matrix factorization excels at capturing explicit linear relationships between users and items, while ResNet is employed to model non-linear residuals. Methods: By focusing on refining the baseline MF output through incremental improvements, ResNetMF minimizes redundant computations and significantly enhances recommendation accuracy. The unique architecture of the framework allows it to capture and represent both linear and non-linear relationships between users and items, ensuring robust and scalable performance. Extensive experiments conducted on the widely used MovieLens dataset demonstrate the superiority of ResNetMF over existing methods. Results: Specifically, it achieves a minimum improvement of 7.95% in root mean square error compared to neural collaborative filtering and outperforms other state-of-the-art techniques in key metrics such as precision, recall and training efficiency. These results highlight the ability of ResNetMF to deliver highly accurate recommendations while maintaining computational efficiency, making it an efficient approach to real-world applications of recommendation systems. Conclusion: By addressing the dual challenges of accuracy and efficiency, ResNetMF offers a balanced and scalable approach to personalized recommendation systems.
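
A minimal PyTorch sketch of the core ResNetMF idea follows: a dot-product matrix factorization provides the linear baseline, and a small residual network adds a learned non-linear correction via a skip connection. The layer sizes, embedding dimension and random interactions are illustrative assumptions, not the paper's architecture or its MovieLens data.

```python
import torch
import torch.nn as nn

class ResNetMF(nn.Module):
    """Sketch of the idea: matrix factorization gives a linear baseline;
    a small residual network learns a non-linear correction on top."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.residual = nn.Sequential(            # layer sizes are invented
            nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, u, i):
        pu, qi = self.user(u), self.item(i)
        mf = (pu * qi).sum(dim=1)                  # linear MF baseline
        corr = self.residual(torch.cat([pu, qi], dim=1)).squeeze(1)
        return mf + corr                           # skip connection

# Tiny smoke test on random interactions (not MovieLens).
model = ResNetMF(n_users=100, n_items=200)
u = torch.randint(0, 100, (8,)); i = torch.randint(0, 200, (8,))
print(model(u, i).shape)  # torch.Size([8])
```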

Enhancing Imperceptibility: Zero-width Character-based Text Steganography for Preserving Message Privacy

Saqib Ishtiaq, Naveed Ejaz, Muhammad Usman Hashmi, Syed Imran Hussain Shah

Acta Informatica Pragensia 2025, 14(3), 445-459 | DOI: 10.18267/j.aip.2714426

Background: Text steganography preserves the privacy of secret messages by hiding them in cover text. However, existing text steganography techniques embed messages by introducing distortions in text, reducing the similarity between the cover and stegotext. Objective: The objective of this study was to design a method that increases the number of embedding choices and locations to hide more secret bits per distortion in the cover text. The goal is to enhance both embedding capacity and imperceptibility. Methods: A text steganography method is proposed that uses eight zero-width characters (ZWCs) to embed secret messages in the cover text. The proposed method also treats every character in the cover text as a potential embedding location. With eight embedding choices and bit encoding based on embedding locations, more bits can be hidden with fewer insertions in the cover text. Results: Experimental results confirm that the proposed method embeds a greater number of bits per insertion of a ZWC in the cover text. It also requires a smaller number of insertions to embed secret messages of comparable length. Consequently, the proposed method achieves higher embedding capacity and better imperceptibility compared to existing text steganography methods. Conclusion: The proposed method presents a substantial improvement in text steganography by increasing embedding capacity per distortion and preserving high similarity between cover and stegotext, thus enabling more secure covert communication.
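
The following self-contained Python sketch illustrates the general mechanism: with eight zero-width characters, each inserted symbol carries three bits. The particular character set and the naive append-at-end placement are assumptions for demonstration; the paper's location-based bit encoding spreads insertions through the cover text.

```python
# Eight zero-width characters give 3 bits per inserted symbol. The exact
# set used in the paper may differ from this illustrative choice.
ZWC = ["\u200b", "\u200c", "\u200d", "\u200e",
       "\u200f", "\u2060", "\u2061", "\ufeff"]

def embed(cover: str, secret: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in secret)
    bits += "0" * (-len(bits) % 3)                 # pad to 3-bit groups
    symbols = [ZWC[int(bits[i:i+3], 2)] for i in range(0, len(bits), 3)]
    return cover + "".join(symbols)               # simplest placement: append

def extract(stego: str, n_bytes: int) -> bytes:
    bits = "".join(f"{ZWC.index(c):03b}" for c in stego if c in ZWC)
    return bytes(int(bits[i:i+8], 2) for i in range(0, n_bytes * 8, 8))

stego = embed("An ordinary looking sentence.", b"hi")
print(extract(stego, 2))  # b'hi'
```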

Fairness-Aware Multimodal Machine Learning for Retail Stock Prediction from Sentiment and Market Data

Sanjay Rastogi, Kamal Upreti, Uma Shankar, Pravin Ramdas Kshirsagar, Tan Kuan Tak, Rituraj Jain, Ganesh Veluswwamy Radhakrishnan

[Ahead of Print] Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.299407

Background: The introduction of retail investors to AI-powered trading platforms, especially in emerging markets, has resulted in a new set of risks linked to algorithmic bias and fairness in financial forecasting. Multimodal strategies combining social media sentiment and structured data have demonstrated potential but frequently lack ethical safeguards. Objective: This work proposes a multimodal model predictive control (MPC) framework grounded in fairness-based forecasting of next-day stock returns, with particular attention to ethical behaviour and model transparency in retail markets. Methods: We combine BERT-based sentiment analysis of Reddit discussions with structured stock market indicators and use XGBoost as the fundamental model. Bias is measured using fairness metrics, including demographic parity difference and equal opportunity difference. Debiasing measures such as reweighting and stratified calibration were used to curb the differences across stock categories. Results: The initial model has an overall accuracy of 72.3%, with the highest accuracy of 83.1% in the case of Tesla, indicating bias in the model. The fairness assessment shows significant disparities (DPD = 0.23, EOD = 0.31), which mitigation decreases to 0.07. However, the large performance change after adjustment raises the issue of overfitting or fairness overcorrection. Conclusion: While the proposed debiased framework successfully reduces algorithmic bias, the trade-off between fairness and generalizability underscores the need for caution. These results hold significant implications for digital trading systems and regulatory frameworks of emerging economies such as India, where the explainability and fairness of AI models are essential for ethical financial engagement.
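
The two fairness metrics named in the abstract are straightforward to compute. The sketch below implements demographic parity difference and equal opportunity difference on invented binary predictions, with the group variable standing in for stock categories.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | g=0) - P(yhat=1 | g=1)| between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Invented predictions: 1 = "next-day return up"; group = stock category.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, group),
      equal_opportunity_diff(y_true, y_pred, group))
```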

Evaluating AI Text Detection Tools for Distinguishing Human-Written from AI-Generated Abstracts in Persian-Language Journals of Library and Information Science

Amrollah Shamsi, Ting Wang, Maryam Amraei, Narayanaswamy Vasantha Raju

Acta Informatica Pragensia 2026, 15(1), 126-134 | DOI: 10.18267/j.aip.2932437

Background: Researchers are using artificial intelligence (AI) tools in academic writing. However, their use may compromise the integrity and originality of the work. Hence, AI text detection tools have emerged to increase transparency. Objective: This study aims to evaluate the accuracy of AI text detection tools in recognizing human-written and AI-written abstracts in library and information science (LIS). Methods: Seven Persian academic journals in LIS were selected. ZeroGPT and GPTZero were used as AI text detectors. AI-generated abstracts were produced by AI chatbots (ChatGPT 4.0, DeepSeek and Qwen). Results: Despite performing strongly in detecting AI-generated text, especially from models such as DeepSeek and Qwen, ZeroGPT and GPTZero struggle to accurately identify human-written content, resulting in high false positive rates and raising concerns about their reliability. Conclusion: The findings highlight the need for culturally and linguistically inclusive AI detection tools, as current systems such as ZeroGPT and GPTZero show limitations in diverse language contexts, underscoring the importance of improved algorithms and human-involved evaluation to ensure fairness and reliability in academic settings.

Artificial Intelligence Applications in Consumer Behaviour Analysis: A Systematic Review, Mapping Trends and Challenges

Adrián No-Pérez, Sandra Castro-González

[Ahead of Print] Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.301349

Background: The vast amounts of data generated by consumers require new forms of processing, in which artificial intelligence (AI) stands out for its ability to analyse them more quickly and deeply. However, although there is abundant literature on AI and consumption, most of it focuses on its impact on consumer behaviour rather than its usefulness in enhancing understanding. Objective: The aim of this study is to conduct a thorough review of the existing literature on the use of AI to understand consumer behaviour. Methods: This study uses the PRISMA protocol for the selection of the studies. Then, it combines bibliometric methods with a TCM-ADO framework to review articles. The Scopus database was used to gather peer-reviewed articles from 2014 to 2024. VOSviewer and R-Studio were utilised for the analysis and visualisation of data. Results: The study provides insights into publication trends, dominant theories, methods, antecedents, decisions and results in the literature about the use of AI to understand consumer behaviour. Furthermore, it identifies potential avenues for future research to advance the development of theory and methodology. Conclusion: Research into the use of AI to understand consumers is still in its infancy. However, everything points to the application of AI in consumer behaviour continuing to expand, and its use for analysing attitudes and behaviour becoming more sophisticated and widespread.

Optimizing Battery Charging in Wireless Sensor Networks: Performance Assessment of MPPT Algorithms in Different Environmental Settings

Abdullah Fadhil Noor Shubbar, Serkan Savaş, Osman Güler

Acta Informatica Pragensia 2025, 14(3), 422-444 | DOI: 10.18267/j.aip.2673903

Background: Photovoltaic (PV)-based energy harvesting systems are crucial for ensuring the sustainability and long-term operation of wireless sensor networks (WSNs), especially in remote or infrastructure-less environments. Given the critical role of battery performance in WSN reliability, efficient energy management through Maximum Power Point Tracking (MPPT) algorithms is essential to adapt to variable environmental conditions such as solar irradiance and ambient temperature. Objective: This study aims to comparatively assess the performance of four widely adopted MPPT algorithms—Perturb and Observe (P&O), Incremental Conductance (IC), Fuzzy Logic (FL), and Particle Swarm Optimization (PSO)—in enhancing battery charging efficiency in PV-powered WSNs under dynamic environmental conditions. Methods: A simulation-based evaluation framework was developed using MATLAB/Simulink to model a PV-powered WSN system. Each MPPT algorithm was implemented and tested using the same simulation conditions, with key performance metrics including voltage and current overshoot, response time, energy transfer efficiency, and adaptability to fluctuating irradiance and temperature profiles. A Proportional-Integral (PI) controller was also used to manage the battery charging process, and environmental profiles were varied across simulation periods to assess algorithm robustness. Results: The PSO algorithm achieved superior performance across all metrics, demonstrating the fastest response time (0.1 s), lowest overshoot (14.8 V, 25 mA), and highest energy transfer efficiency. IC and FL methods showed balanced adaptability and performance, while P&O lagged in both responsiveness and efficiency. The simulation results also confirmed that environmental conditions significantly affect PV panel output and battery State of Charge (SoC), highlighting the necessity for adaptive MPPT solutions. Conclusion: This study provides a unified and realistic comparative analysis of major MPPT algorithms for PV-powered WSNs. The PSO algorithm emerges as the most effective, though its computational complexity may limit its application in low-power systems. IC and FL serve as promising alternatives for scenarios with resource constraints. The findings contribute to the design of environmentally adaptive and energy-efficient WSNs, paving the way for their robust deployment in real-world settings.
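
Of the four algorithms compared, Perturb and Observe is the simplest to illustrate. The sketch below runs one P&O controller against an invented single-peak power-voltage curve; the step size and curve parameters are arbitrary assumptions, not the MATLAB/Simulink configuration used in the study.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: keep perturbing the operating voltage in the
    direction that increased PV power; reverse otherwise."""
    if p - p_prev >= 0:
        return v + step if v - v_prev >= 0 else v - step
    return v - step if v - v_prev >= 0 else v + step

# Toy PV curve with a maximum power point near 17 V (invented numbers).
def pv_power(v):
    return max(0.0, -0.3 * (v - 17.0) ** 2 + 90.0)

v_prev, v = 10.0, 10.5
for _ in range(40):
    p_prev, p = pv_power(v_prev), pv_power(v)
    v_prev, v = v, perturb_and_observe(v, p, v_prev, p_prev)
print(round(v, 1))  # oscillates around the MPP voltage (~17 V)
```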

Generative Artificial Intelligence in Education: Advancing Adaptive and Personalized Learning

Manel Guettala, Samir Bourekkache, Okba Kazar, Saad Harous

Acta Informatica Pragensia 2024, 13(3), 460-489 | DOI: 10.18267/j.aip.23518491

The integration of generative artificial intelligence (AI) into adaptive and personalized learning represents a transformative shift in the educational landscape. This research paper investigates the impact of incorporating generative AI into adaptive and personalized learning environments, with a focus on tracing the evolution from conventional artificial intelligence methods to generative AI and identifying its diverse applications in education. The study begins with a comprehensive review of the evolution of generative AI models and frameworks. A framework of selection criteria is established to curate case studies showcasing the applications of generative AI in education. These case studies are analysed to elucidate the benefits and challenges associated with integrating generative AI into adaptive learning frameworks. Through an in-depth analysis of selected case studies, the study reveals tangible benefits derived from generative AI integration, including increased student engagement, improved test scores and accelerated skill development. Ethical, technical and pedagogical challenges related to generative AI integration are identified, emphasizing the need for careful consideration and collaborative efforts between educators and technologists. The findings underscore the transformative potential of generative AI in revolutionizing education. By addressing ethical concerns, navigating technical challenges and embracing human-centric approaches, educators and technologists can collaboratively harness the power of generative AI to create innovative and inclusive learning environments. Additionally, the study highlights the transition from Education 4.0 to Education 5.0, emphasizing the importance of social-emotional learning and human connection alongside personalization in shaping the future of education.

Current Woes and Pitfalls of Publishing Scientific Journals: Development of Acta Informatica Pragensia and Reflection on Using GenAI Tools

Zdenek Smutny

Acta Informatica Pragensia 2025, 14(3), 296-305 | DOI: 10.18267/j.aip.2742451

The editorial summarises the development of the Acta Informatica Pragensia journal over the last three years and complements the journal statistics for the years 2019–2025. Thanks to its indexing in Web of Science and Scopus, the world's most prestigious scientific citation databases, the journal began to profile itself as international with regional roots and a core community of Editorial Board members from Central Europe. The paper also presents the journal metrics and statistics of submitted and accepted articles for the observed period. Against the background of the current development of tools based on generative artificial intelligence, the perspectives presented in selected articles previously published in Acta Informatica Pragensia are discussed in the context of current and future directions of academic publishing. Finally, unfair practices of authors that I have encountered in our journal as Editor-in-Chief are presented, along with some others that are currently resonating in academic communities.

Electronic Health Record Systems in Limited Resource Settings: A Comprehensive Evaluation of the Impilo Platform

Hamufare Dumisani Mugauri, Memory Chimsimbe

Acta Informatica Pragensia 2025, 14(3), 393-407 | DOI: 10.18267/j.aip.2657433

Background: Zimbabwe has implemented the Impilo electronic health record (EHR) system since 2016 to manage the health system electronically, gather strategic information and reduce the manual documentation burden. Objective: We evaluated the capacity of decentralized structures to effectively use the Impilo EHR platform, identify training needs and challenges and provide recommendations for enhancing its effectiveness and support for integrated people-centred services at the primary healthcare level. Methods: We used a cross-sectional, mixed-method design, applying the COM-B (Capability, Opportunity, Motivation and Behaviour) model of behavioural change. Forty-five purposively selected healthcare workers (nurses, data entry clerks, receptionists, pharmacy staff, laboratory technicians and primary counsellors) from ten healthcare facilities in Harare and Bulawayo were included in this study. Interviews were transcribed, translated and manually coded for thematic analysis using the COM-B constructs. Results: Health workers had satisfactory skills for using the Impilo EHR system but lacked troubleshooting abilities. The capacity building did not equip users with the necessary programme-specific skills. Problems such as internet connectivity, power backup, human resource shortages, interoperability issues and lack of editing rights hindered usage. The EHR system integrated primary health services but struggled with interoperability with other software and lacked data aggregation servers, limiting its effectiveness. Leadership support and user involvement were missed opportunities to enhance performance. Conclusion: This study provided key insights into the implementation of the Impilo EHR system in Zimbabwe. The system empowers healthcare professionals with timely information, improving decision-making and patient care. However, problems such as module issues, knowledge gaps, internet connectivity, interoperability, human resource shortages and power constraints hinder its full potential. We recommend addressing these barriers, enhancing leadership support, integrating EHR usage into performance appraisals and improving system integration with other platforms to enhance accuracy and reliability.

Optimized Ensemble Support Vector Regression Models for Predicting Stock Prices with Multiple Kernels

Subba Reddy Thumu, Geethanjali Nellore

Acta Informatica Pragensia 2024, 13(1), 24-37 | DOI: 10.18267/j.aip.2264976

Stock forecasting is a complicated, daily challenge for investors because of the non-linearity of the market and the high volatility of financial assets such as stocks, bonds and other commodities. There is a need for a powerful and adaptive stock prediction model that handles these complexities and provides accurate predictions. The support vector regression (SVR) model is one of the most prominent machine learning models for forecasting time series data. An ensemble hyperbolic tangent kernel SVR (HTK-SVR-BO) is proposed in this paper, combining tanh and inverse tanh kernels with Bayesian optimization. The ensemble technique combines the strengths of multiple kernels, and optimization then identifies the optimal values for each SVR model to enhance the performance of the ensemble. Our proposed model is compared with an ensemble SVR model (LPR-SVR-BO) that uses well-known SVR kernel types, including linear, polynomial and radial basis function (RBF). We apply the proposed models to Microsoft Corporation (MSFT) stock prices. The mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), R2 score (model accuracy) and mean absolute percentage error (MAPE) are the regression metrics used to compare the effectiveness of each ensemble model. In our comparison, HTK-SVR-BO performs better in terms of regression metrics than LPR-SVR-BO, achieving results of 0.27424, 0.13392, 0.36595, 0.99997 and 5.2331, respectively. According to the analysis, the proposed model is more predictive and may generalize to previously unseen data more effectively, so it can be accurate when forecasting future stock prices.
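
The kernel-ensemble idea can be sketched with scikit-learn, which accepts both the built-in sigmoid (tanh) kernel and a custom callable for an inverse-tanh counterpart. The toy price windows, the exact form of the inverse-tanh kernel and the fixed hyperparameters below are assumptions; the paper tunes its parameters with Bayesian optimization and does not publish this kernel definition in the abstract.

```python
import numpy as np
from sklearn.svm import SVR

# Toy series standing in for MSFT closing prices (invented numbers).
prices = np.array([300, 302, 305, 303, 308, 310, 309, 313, 316, 315, 320.])
X = np.array([prices[i:i+3] for i in range(len(prices) - 3)])  # 3-day window
y = prices[3:]

def inv_tanh_kernel(A, B, gamma=1e-5):
    # arctanh counterpart of the tanh (sigmoid) kernel; dot products are
    # squashed into arctanh's (-1, 1) domain. Gamma is a guess, not the
    # paper's Bayesian-optimized value.
    return np.arctanh(np.clip(gamma * A @ B.T, -0.999, 0.999))

tanh_svr = SVR(kernel="sigmoid", gamma=1e-5, coef0=0.0).fit(X, y)
atanh_svr = SVR(kernel=inv_tanh_kernel).fit(X, y)

# Ensemble by averaging the two kernel models' predictions.
x_next = prices[-3:].reshape(1, -1)
print((tanh_svr.predict(x_next) + atanh_svr.predict(x_next)) / 2)
```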

Innovations in Deep Learning and Intelligent Systems for Healthcare and Engineering Applications

Hakim Bendjenna, Lawrence Chung, Abdallah Meraoumia

Acta Informatica Pragensia 2024, 13(2), 165-167 | DOI: 10.18267/j.aip.2471426

This editorial summarises the special issue entitled “Future Trends of Machine Intelligence in Science and Industry”, which brings together several pieces of research that showcase the transformative impact of deep learning and intelligent systems across various domains, including healthcare, security and communication networks. By exploring advanced methodologies and innovative applications, this collection highlights significant strides in medical imaging, mental health diagnosis, biometric identification, smart grid management and adaptive e-learning. The featured articles delve into topics such as breast cancer detection using UNET architecture, psychodiagnosis prediction with deep learning, and blockchain-secured IoT systems for healthcare. Additionally, the issue covers revolutionary approaches in historical manuscript analysis, and contactless palm-print recognition. Through these comprehensive studies, we aim to inspire further advancements and cross-disciplinary collaborations, pushing the boundaries of what is achievable with modern technology.

Survey on Security and Interoperability of Electronic Health Record Sharing Using Blockchain Technology

Reval Prabhu Puneeth, Govindaswamy Parthasarathy

Acta Informatica Pragensia 2023, 12(1), 160-178 | DOI: 10.18267/j.aip.1876691

Blockchain is regarded as a significant innovation and offers a set of promising features that can address existing issues in real-time applications. Decentralization, greater transparency, improved traceability and a secure architecture can revolutionize healthcare systems. With the help of advancements in computer technologies, most healthcare institutions try to store patient data digitally rather than on paper. Electronic health records are regarded as some of the most important assets in the healthcare system and need to be shared among different hospitals and other organizations to improve diagnosis efficiency. While sharing patients’ details, certain basic standards such as the integrity and confidentiality of the information need to be considered. Blockchain technology meets these standards through its immutability and by granting access to stored information only to authorized users. The review methodology follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, and a systematic search protocol is used to query multiple scientific databases to identify, examine and extract every relevant publication. In this paper, we present a systematic review of the blockchain and healthcare domain to identify the existing challenges and benefits of applying blockchain technology in healthcare systems. More than 150 scientific papers published in the last ten years are surveyed, resulting in the identification and summarization of observations made on the different privacy-preserving approaches, along with an assessment of their performance. We also present significant architectural solutions of blockchain for achieving interoperability. Thereby, we attempt to analyse the ideas of blockchain in the medical domain by assessing the advantages and limitations, subsequently giving guidance to other researchers in the area.

Investigating the Causes of Non-realization of Project Prediction and Proposal of a New Prediction Framework

Radek Doskočil, Branislav Lacko

Acta Informatica Pragensia 2024, 13(3), 418-432 | DOI: 10.18267/j.aip.2504864

The main goal of the paper is to identify the causes of non-realization of project prediction and to propose a new framework for project prediction. A secondary goal is to explain why the approaches to project prediction currently used do not provide satisfactory results. The research was conducted as qualitative research using semi-structured interviews. The findings reveal that the main causes of non-realization of project prediction are as follows: there is no methodology that could be practically used; simplified approaches to project prediction usually have low reliability, for which reason they are generally unusable; and suitable input data and information for project prediction are not available. The main contribution made by the paper is the identification of the causes of non-realization of project prediction and the proposal of a new framework for project prediction that respects changing conditions during the lifecycle of the project and changes in the way of thinking about project prediction. A prerequisite for its application is a functioning system of knowledge management in projects, including the realization of post-project analysis.

Emotion-Based Sentiment Analysis Using Conv-BiLSTM with Frog Leap Algorithms

Sandeep Yelisetti, Nellore Geethanjali

Acta Informatica Pragensia 2023, 12(2), 225-242 | DOI: 10.18267/j.aip.2066526

Social media, blogs, review sites and forums can produce large volumes of data in the form of users’ emotions, views, arguments and opinions about various political events, brands, products and social problems. The user’s sentiment expressed on the web influences readers, politicians and product vendors. These unstructured social media data are analysed to form structured data, and for this reason sentiment analysis has recently received significant research attention. Sentiment analysis is a process of classifying the user’s feelings as positive, negative or both. The major issues in sentiment analysis are insufficient data processing and outcome prediction. Here, deep learning-based approaches are effective due to their autonomous learning ability. Emotion identification from text in natural language processing (NLP) provides many benefits in the fields of e-commerce and business. In this paper, emotion detection-based text classification is used for sentiment analysis. The data collected are pre-processed using tokenization, stop word discarding, stemming and lemmatization. After data pre-processing, the features are identified using term frequency-inverse document frequency (TF-IDF). The filtered features are then turned into word embeddings using the document-to-vector (Doc2Vec) model. For text classification, a deep learning (DL) based model called convolutional bidirectional long short-term memory (CBLSTM) is used to differentiate the sentiments of human expression into positive (good) and negative (bad) emotions. The neural network hyperparameters are optimized with a meta-heuristic algorithm called the frog leap approach (FLA). The proposed CBLSTM with FLA uses four review and Twitter datasets. The experimental results of this study are compared with the conventional approaches LSTM-RNN and LSTM-CNN to prove the efficiency of the proposed model. Compared to LSTM-RNN and LSTM-CNN, the proposed model secures an improved average accuracy of 98.1% for the review datasets and 97.5% for the Twitter datasets.
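
A minimal Keras sketch of a Conv-BiLSTM classifier of the kind described is given below: a convolutional layer extracts local n-gram features and a bidirectional LSTM captures long-range context. The vocabulary size, layer widths and random training data are invented, and the frog leap hyperparameter optimization is omitted.

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative dimensions only; the paper's settings are FLA-optimized.
VOCAB, MAXLEN, EMB = 5000, 100, 64

model = models.Sequential([
    layers.Embedding(VOCAB, EMB),
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(32)),     # long-range context
    layers.Dense(1, activation="sigmoid"),     # positive vs negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Smoke test on random token ids standing in for review/Twitter data.
x = np.random.randint(0, VOCAB, size=(16, MAXLEN))
y = np.random.randint(0, 2, size=(16,))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:2], verbose=0).shape)   # (2, 1)
```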

Beyond Traditional Biometrics: Harnessing Chest X-Ray Features for Robust Person Identification

Farah Hazem, Bennour Akram, Tahar Mekhaznia, Fahad Ghabban, Abdullah Alsaeedi, Bhawna Goyal

Acta Informatica Pragensia 2024, 13(2), 234-250 | DOI: 10.18267/j.aip.2384354

Person identification through chest X-ray radiographs stands as a vanguard in both the healthcare and biometric security domains. In contrast to traditional biometric modalities, such as facial recognition, fingerprints and iris scans, the research orientation towards chest X-ray recognition has been spurred by its remarkable recognition rates. Capturing the intricate anatomical nuances of an individual's rib cage, lungs and heart, chest X-ray images emerge as a focal point for identification, even in scenarios where the human body is otherwise entirely damaged. In the field of deep learning, contemporary architectures have shown promising outcomes in classification and image similarity challenges. However, the training of convolutional neural networks (CNNs) requires copious labelled data and is time-consuming. In this study, we delve into the rich repository of the NIH ChestX-ray14 dataset, comprising 112,120 frontal-view chest radiographs from 30,805 unique patients. Our methodology is nuanced, employing the potency of Siamese neural networks and the triplet loss in conjunction with refined CNN models for feature extraction. The Siamese networks facilitate robust image similarity comparison, while the triplet loss optimizes the embedding space, mitigating intra-class variations and amplifying inter-class distances. A meticulous examination of our experimental results reveals profound insights into our model performance. Noteworthy is the remarkable accuracy achieved by the VGG-19 model, standing at an impressive 97%. This achievement is underpinned by a well-balanced precision of 95.3% and an outstanding recall of 98.4%. Surpassing the other CNN models utilized in our research and outshining existing state-of-the-art models, our approach establishes itself as a vanguard in the pursuit of person identification through chest X-ray images.
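
The triplet loss at the heart of this setup is compact enough to show directly. The PyTorch sketch below uses random vectors in place of the CNN embeddings; in the paper these would be VGG-19 features of chest X-rays, and torch.nn.TripletMarginLoss offers an equivalent built-in.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull embeddings of the same patient together and push different
    patients apart by at least the margin (standard triplet loss)."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Stand-ins for CNN embeddings of chest X-rays (random vectors here).
a, p, n = (torch.randn(4, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```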

Blockchain-Based Framework for Privacy Preservation and Securing EHR with Patient-Centric Access Control

Reval Prabhu Puneeth, Govindaswamy Parthasarathy

Acta Informatica Pragensia 2024, 13(1), 1-23 | DOI: 10.18267/j.aip.2256260

The technological advancements in the field of E-healthcare have resulted in an unprecedented generation of medical data, which increases risks to data security and privacy. Ensuring the privacy of Electronic Health Records (EHR) has become challenging due to the outsourcing of healthcare information to the cloud. This increases the chance of data leakage to unauthorized users and affects the privacy and integrity of user data. It requires a trustworthy central authority to protect sensitive patient information from both internal and external attacks. This paper presents a blockchain-based privacy-preservation framework for securing EHR data. The proposed framework integrates the immutability and decentralized nature of blockchain with advanced cryptographic techniques to ensure the confidentiality, integrity and availability of EHR. The EHR data are stored in the InterPlanetary File System (IPFS) and encrypted using a hybrid cryptographic algorithm. In addition, a novel smart contract-based patient-centric access control is designed in this paper using a blockchain-based SHA-256 hashing algorithm to protect the privacy of patient data. The experimental results show that the proposed framework enables secure sharing of health information between network users with improved data privacy and security. Furthermore, the optimized search process reduces the time and space complexity compared to the traditional search process. Through the utilization of smart contracts, this framework enforces patient-centric access controls and allows patients to manage and authorize access to their medical data.
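
A plain-Python sketch of the patient-centric access-control idea follows: the encrypted record lives off-chain (e.g., in IPFS), while a SHA-256 digest and a patient-managed grant list model what the smart contract would store on-chain. The class and method names are invented for illustration and do not reflect the paper's contract code.

```python
import hashlib

class PatientRecord:
    """Toy stand-in for the on-chain state of one patient's EHR entry."""
    def __init__(self, patient_id, ehr_ciphertext: bytes):
        self.patient_id = patient_id
        # Only the SHA-256 digest would live on-chain; the ciphertext
        # itself is stored off-chain, e.g., in IPFS.
        self.digest = hashlib.sha256(ehr_ciphertext).hexdigest()
        self.granted = set()          # identities authorized by the patient

    def grant(self, doctor_id):
        self.granted.add(doctor_id)   # patient-centric access control

    def verify_and_fetch(self, requester_id, fetched_from_ipfs: bytes):
        if requester_id not in self.granted:
            raise PermissionError("patient has not granted access")
        # Integrity check: the on-chain hash must match the IPFS payload.
        return hashlib.sha256(fetched_from_ipfs).hexdigest() == self.digest

rec = PatientRecord("patient-42", b"<encrypted EHR blob>")
rec.grant("dr-lee")
print(rec.verify_and_fetch("dr-lee", b"<encrypted EHR blob>"))  # True
```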
