Fulltext search in archive
Results 1 to 30 of 76:
The Digital Media in Lithuania: Combating Disinformation and Fake News
Aelita Skarzauskiene, Monika Maciuliene, Ornela Ramasauskaite
Acta Informatica Pragensia 2020, 9(2), 74-91 | DOI: 10.18267/j.aip.134

The prevalence of so-called “fake news” is a relatively recent social phenomenon linked to disinformation, misinformation and other forms of networked manipulation facilitated by the rise of the Internet and online social media. The spread of misinformation is among the most pressing challenges of our time. The sources from which disinformation originates are constantly changing and present an enormous challenge for real-time detection algorithms and for more targeted, science-based socio-technical interventions. The primary aim of this paper is to illuminate media users' practices and interpretations, focusing on three perspectives: general attitudes to fake news, perceived interaction with disinformation and opinions on counteraction with respect to fake news. The innovative character of the research lies in its focus on community solutions to combat disinformation and on collaboration between media users, media organizations, scientists, communication managers, journalists and other important actors in the media ecosystem. Based on insights from interviews with communication field experts, the paper sheds light on the efforts of Lithuanian society to confront the problem of fake news in the digital media environment. Lithuania is also an interesting case study for fake news due to its status as a former Soviet state now in the EU. Our research indicates that not all media users are prepared and/or have the necessary competencies to combat fake news, so citizen engagement might actually negatively influence the quality of the counteraction process. Indeed, proactive citizens' organizations and NGOs could be an important catalyst fostering collaboration between stakeholders. The responsibility of governments could be to create the structures, methodologies and supporting educational activities that involve stakeholders in collaborative activities to combat disinformation.
Multi-Class Text Classification on Khmer News Using Ensemble Method in Machine Learning Algorithms
Raksmey Phann, Chitsutha Soomlek, Pusadee Seresangtakul
Acta Informatica Pragensia 2023, 12(2), 243-259 | DOI: 10.18267/j.aip.210

The research herein applies text classification to categorize Khmer news articles. News articles were collected from three online websites through web scraping and grouped into nine categories. After text preprocessing, the dataset was split into training and testing sets. We then evaluated the performance of the ensemble learning method via machine learning classifiers with k-fold validation. Various machine learning classifiers were employed, namely logistic regression, Complement Naive Bayes, Bernoulli Naive Bayes, k-nearest neighbours, perceptron, support vector machines, stochastic gradient descent, AdaBoost, decision tree and random forest. Accuracy in categorizing Khmer news articles was improved by using Grid Search CV to find the optimal hyperparameters for each machine learning classifier, with TF-IDF and Delta TF-IDF feature extraction. The results show that the highest accuracy was achieved by the ensemble learning method with the support vector machine and the optimal hyperparameters (C = 10, kernel = rbf), reaching 83.47% with TF-IDF and 83.40% with Delta TF-IDF. The model establishes that Khmer news articles can be accurately categorized.
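The TF-IDF weighting underlying the feature extraction described in this abstract can be sketched in a few lines. This is the generic textbook formulation with an invented toy corpus, not the authors' implementation (which also covers Delta TF-IDF, Khmer tokenization and the ensemble classifiers):

```python
import math
from collections import Counter

def tfidf(corpus):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF is the raw term count normalized by document length; IDF is
    log(N / df), where df is the number of documents containing the term.
    """
    n_docs = len(corpus)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in corpus for term in set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        doc_len = len(doc)
        weights.append({
            term: (count / doc_len) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy English corpus standing in for tokenized Khmer news articles.
docs = [
    ["sport", "match", "goal"],
    ["election", "vote", "goal"],
    ["election", "policy", "vote"],
]
w = tfidf(docs)
# "match" appears in only one document, so it outweighs the shared "goal".
assert w[0]["match"] > w[0]["goal"]
```

A term that appears in every document gets an IDF of log(1) = 0, which is why such sparse weights discriminate between news categories better than raw counts.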
Impact of Social Media Application Qualities on Using Them for Daily News
Davod Farhadi, Ali Maroosi
Acta Informatica Pragensia 2022, 11(1), 48-61 | DOI: 10.18267/j.aip.164

A model is introduced to investigate the effect of social media application qualities on the use of these applications for daily news. A standard questionnaire was designed and distributed among randomly selected social media users in the city of Neyshabur in Iran. The content of the questionnaire was validated by experts and its reliability was verified using Cronbach's alpha. Random sampling was used to identify participants. The SmartPLS software was used to investigate the research findings, and structural equation modelling was used for data analysis. The results show that system quality, information quality, service quality and personalization of applications affect their perceived usefulness. System quality and service quality affect the perceived ease of use of applications; however, information quality does not affect perceived ease of use. The results also show that perceived usefulness has a greater effect on attitude (path coefficient of 0.45) than perceived ease of use (path coefficient of 0.26). Personalization has the strongest positive impact on perceived usefulness, and service quality has a strong impact on perceived ease of use. Facilitating conditions have a positive impact on the use of social media and their use for news. Furthermore, the results show that the factors affecting application use score better for Telegram than for Viber. These findings help explain why Iranian people migrate from Viber to Telegram as their social media application.
Digital Twins in the Context of Ensuring Sustainable Industrial Development
Yuliia Biliavska, Valentyn Biliavskyi
Acta Informatica Pragensia 2026, 15(1), 198-220 | DOI: 10.18267/j.aip.291

Background: Currently, there is a megatrend towards digitalisation and servitisation using digital technologies and digital twins to support the digital transformation of the economy. In the literature, new digital technologies are seen as creating added value, strengthening customer relationships and accelerating the process of servitisation in manufacturing. The implementation of such a complex of technologies and business solutions can lead to the adaptation of the product and service life cycle, as well as the entire business model, to full servitisation.
Objective: This study reveals the role of digital twins in the context of entrepreneurship in compliance with the Sustainable Development Goals (SDGs). By constructing a thematic map of scientific clusters and SDGs, the relationship between science and practical aspects is established.
Methods: The study of digital twins draws on research methods such as scientific abstraction and synthesis, historical analysis, grouping, analogy, structural-logical modelling, tabular and logical generalisation, as well as bibliometric analysis based on VOSviewer software.
Results: The study analyses the evolution of the technology, demonstrating the relevance of digital twins as one of the key technologies for digitalisation in many business processes. Special attention is paid to the role of digital twins in the implementation of the SDGs. The results of the bibliometric review indicate scientific interest in digital twins in the fields of modelling, information technology, operational management, automation and robotics. The thematic map combining scientific clusters and SDGs highlights the importance of digital twins in entrepreneurship and in ensuring sustainable industrial development.
Conclusion: This study provides valuable information for managers, as it demonstrates the need to implement digital twins, which enable intelligent manufacturing, serve as a core technology supporting Industry 4.0, can reflect physical information in cyberspace and can manipulate physical objects by studying information models in manufacturing. Future research should therefore focus on developing reliable mechanisms for applying digital twins in the context of the SDGs in areas such as the economy, social aspects and the biosphere. This will ensure the competitiveness of the industrial sector and the country.
Hateful and Other Negative Communication in Online Commenting Environments: Content, Structure and Targets
Vasja Vehovar, Dejan Jontes
Acta Informatica Pragensia 2021, 10(3), 257-274 | DOI: 10.18267/j.aip.165

Information and communication technologies increasingly interact with modern societies. One specific manifestation of this interaction concerns hateful and other negative comments in online environments. Various terms are used to denote this communication, from flaming, indecency and intolerance to hate speech. However, an umbrella term that broadly captures this communication is still lacking. Therefore, this paper introduces the concept of socially unacceptable discourse, which serves as the basis for an empirical study that evaluated online comments scraped from the Facebook pages of the three most-visited Slovenian news outlets. Machine-learning algorithms were used to narrow the focus to topics related to refugees and LGBT rights. Ten thousand comments were manually coded to identify and structure socially unacceptable discourse. The results show that about half of all comments belonged to this type of discourse, with a surprisingly stable level and structure across media (i.e., right-wing versus mainstream) and topics. Most of these comments could also be considered a potential violation of hate speech legislation. In the context of these findings, the political and ideological consequences and implications of mediatised emotions are discussed.
Culturally Sensitive Website Elements and Features: A Cross-National Comparison of Websites from Selected Countries
Radim Cermak
Acta Informatica Pragensia 2020, 9(2), 132-153 | DOI: 10.18267/j.aip.137

The goal of this case study is to compare websites from nine countries (Austria, Chile, China, Japan, Latvia, Nigeria, Saudi Arabia, the US and the Czech Republic) and, based on this comparison, to provide the missing link between website elements and cultural dimensions for better cultural adaptation of web content. Hofstede's cultural dimensions were used to select the countries for this study: to examine the influence of culture on websites, countries with extreme values of the cultural dimensions were selected. An important benefit is that this study takes into account all of Hofstede's cultural dimensions, including the latest one (indulgence vs restraint). For each country, 50 websites were selected from areas that most closely reflect the culture of the country, with the main focus on selecting an appropriate representative sample of websites for each state. A total of 450 pages were analyzed. For each website, the 42 web elements determined to be the most important were monitored, along with the presence of various types of social networks and five general characteristics. The findings show that culture influences website design. The results reveal a connection between website elements and Hofstede's cultural dimensions. For example, headlines are important for countries with high individualism and uncertainty avoidance and low power distance and indulgence. Newsletters are associated with high indulgence and low long-term orientation, and a search option with high power distance. Overall, about 20 culturally sensitive website elements were identified. The study also provides a comprehensive overview of website characteristics for each of the selected countries. For UX designers, web localization specialists, academics and web developers, this study provides an original view of culturally sensitive website elements and features.
University Library Information Resources as a Basis for Enhancing Educational and Professional Programmes in Information, Library and Archival Studies
Nadiia Bachynska, Yurii Horban, Tetiana Novalska, Vladyslav Kasian, Nataliya Gaisynuik
Acta Informatica Pragensia 2024, 13(1), 62-84 | DOI: 10.18267/j.aip.229

The article aims to explore the role of information resources provided by university libraries in strengthening educational and professional programmes in the field of Information, Library and Archival Studies, based on the Scientific Library of Kyiv National University of Culture and Arts. The purpose of this study is to investigate how these resources can contribute to the overall growth and development of students and professionals in the field. Using a descriptive and analytical research methodology, the study examines the diverse range of information resources available in the library, including digital databases, online journals, e-books and other relevant materials. The findings reveal that these resources serve as a solid foundation for enhancing knowledge, skills and competencies required in the field. The practical implications of this research emphasize the importance of utilizing the rich information resources of university libraries to design and implement effective educational and professional programmes. By utilizing these resources, educational institutions and professionals can strive for continuous improvement, staying updated with the latest trends and advancements in the field. This study highlights the critical role of university library information resources in augmenting educational and professional programmes in Information, Library and Archival Studies. The findings underscore the need for collaboration and strategic utilization of these resources to shape well-rounded professionals capable of meeting the evolving demands of the information age.
Systematic Review on Algorithmic Trading
David Jukl, Jan Lansky
Acta Informatica Pragensia 2025, 14(3), 506-534 | DOI: 10.18267/j.aip.276

Background: Algorithmic trading systems (ATS) are defined by the use of computational algorithms for automating financial transactions. They have become a critical part of modern financial markets because of their efficiency and ability to carry out complex strategies.
Objective: This research involves a systematic review that assesses the market impact, technological advancements, strategic approaches and regulatory challenges related to algorithmic trading.
Methods: Following PRISMA 2020 guidelines, this study conducts a systematic literature review by screening 1,567 articles across five academic databases, namely IEEE Xplore, ACM Digital Library, SpringerLink, Web of Science and SSRN. After applying predefined inclusion and exclusion criteria, 208 peer-reviewed journal and conference papers published between 2015 and 2024 are selected. The PICOC framework is used to define the review scope. Data are extracted using structured templates capturing study details, research objectives, artificial intelligence (AI) integration, profitability analysis and limitations. Tools such as Rayyan, NVivo, MS Excel and Zotero support the screening, coding and qualitative synthesis of findings.
Results: AI methods, especially machine learning (used in 50% of the studies) and sentiment analysis (20%), significantly improve predictive accuracy and profitability. Most studies focus on equities (35%) and forex (30%), with high-frequency trading being the most examined strategy (30%). Challenges include latency (30%), scalability (25%) and regulatory issues (25%).
Conclusion: Future research should prioritize ethical frameworks, regulatory clarity and wider access to AI-driven ATS components. This review provides a robust foundation for academics and practitioners to innovate and optimize algorithmic trading strategies.
Exploring Oral History Archives Using State-of-the-Art Artificial Intelligence Methods
Martin Bulín, Jan Švec, Pavel Ircing, Adam Frémund, Filip Polák
Acta Informatica Pragensia 2025, 14(2), 207-214 | DOI: 10.18267/j.aip.268

Background: The preservation and analysis of spoken data in oral history archives, such as Holocaust testimonies, provide a vast and complex knowledge source. These archives pose unique challenges and opportunities for computational methods, particularly in self-supervised learning and information retrieval.
Objective: This study explores the application of state-of-the-art artificial intelligence (AI) models, particularly transformer-based architectures, to enhance navigation and engagement with large-scale oral history testimonies. The goal is to improve accessibility while preserving the authenticity and integrity of historical records.
Methods: We developed an asking-questions framework utilizing a fine-tuned T5 model to generate contextually relevant questions from interview transcripts. To ensure semantic coherence, we introduced a semantic continuity model based on a BERT-like architecture trained with contrastive loss.
Results: The system successfully generated contextually relevant questions from oral history testimonies, enhancing user navigation and engagement. Filtering techniques improved question quality by retaining only semantically coherent outputs, ensuring alignment with the testimony content. The approach demonstrated effectiveness in handling spontaneous, unstructured speech, with a significant improvement in question relevance compared to models trained on structured text. Applied to real-world interview transcripts, the framework balanced enrichment of the user experience with preservation of historical authenticity.
Conclusion: By integrating generative AI models with robust retrieval techniques, we enhance the accessibility of oral history archives while maintaining their historical integrity. This research demonstrates how AI-driven approaches can facilitate interactive exploration of vast spoken data repositories, benefiting researchers, historians and the general public.
Political Actors in the Age of Generative Artificial Intelligence: The Czech Perspective
Daniel Šárovec
Acta Informatica Pragensia 2025, 14(2), 282-295 | DOI: 10.18267/j.aip.272

Background: The phenomenon of artificial intelligence (AI) has been studied for decades. However, only the ascent of tools such as ChatGPT brought AI into a broader public consciousness, as people started using it for a broad spectrum of tasks and questions.
Objective: The goal of this overview article is to present a new perspective on AI issues in the context of the social sciences and, more specifically, political science. Indeed, AI tools play an important role in the political process, a fact reflected by governments and other political actors, including political parties.
Methods: The qualitatively and interpretively oriented paper seeks to demonstrate existing connotations of the relationship between AI and politics in the Czech context. The text is designed as an overview based on secondary sources. We first focus on AI popularity and use in the general public and public institutions. The article then focuses on government strategies with implications for international organizations. The final part outlines the relationship between generative AI and Czech political parties.
Results: The results indicate that the popularity of AI grew substantially after OpenAI launched its model. Nowadays, generative AI-based tools are commonly used by various public institutions. To date, the Government of the Czech Republic has issued two national strategies on AI issues. Political parties are among the actors using generative AI on a daily basis.
Conclusion: The analysis seeks to fill in the blanks in this under-researched area and to demonstrate what kind of interdisciplinary implications of the AI–politics relationship can be examined. Moreover, we view the gradual adoption of AI tools as the next step in the process of adaptation to new digital tools that started years ago.
Measuring the Feasibility of a Question and Answering System for the Sarawak Gazette Using Chatbot Technology
Yasir Lutfan bin Yusuf, Suhaila binti Saee
Acta Informatica Pragensia 2025, 14(3), 365-392 | DOI: 10.18267/j.aip.263

Background: The Sarawak Gazette is a critical repository of information on Sarawak's history. It has received much attention over the last two decades, with prior studies focusing on digitizing the gazette and extracting its ontologies to increase its accessibility. However, the creation of a question answering system for the Sarawak Gazette, another avenue that could improve accessibility, has been overlooked.
Objective: This study created a new system to generate answers to user questions related to the gazette using chatbot technology.
Methods: The system sends user queries to a context retrieval system, then generates an answer from the retrieved contexts using a large language model. A question answering dataset was also created using a large language model to evaluate the system, with dataset quality assessed by 10 annotators.
Results: The system achieved 55% higher precision and 42% higher recall compared to the previous state of the art in historical document question answering, while sacrificing only 11% of cosine similarity. The annotators rated the dataset 2.9 out of 3 overall.
Conclusion: The system can answer the general public's questions about the Sarawak Gazette in a more direct and friendly manner than traditional information retrieval methods. The methods developed in this study are also applicable to other Malaysian historical texts written in English. All code used in this study has been released on GitHub.
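The cosine similarity figure this abstract reports is a standard comparison between embedding vectors of a generated answer and a reference. A minimal sketch follows; the toy vectors are invented stand-ins, not the study's actual sentence embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional embeddings standing in for sentence vectors of a
# generated answer, a reference answer and an unrelated sentence.
generated = [0.9, 0.1, 0.3]
reference = [1.0, 0.0, 0.25]
unrelated = [0.0, 1.0, 0.0]

# A relevant answer scores higher against the reference than an
# unrelated one does.
assert cosine_similarity(generated, reference) > cosine_similarity(generated, unrelated)
```

Because the measure depends only on vector direction, not magnitude, it tolerates answers of different lengths, which is why it is a common choice for scoring generated text against references.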
Ethical Application of Artificial Intelligence in the Contemporary Information Society: A Scoping Review
Marija Kuštelega, Renata Mekovec
[Ahead of Print]
Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.308

Background: Artificial intelligence (AI) has become a fundamental part of everyday life, making it crucial to integrate AI into the information society in ways that protect individual rights.
Objective: This study explores the perspectives of different stakeholders on the ethical use of AI. The aim of this research is to identify practical measures that can help address ethical challenges associated with AI deployment.
Methods: A scoping literature review approach was adopted, focusing on the most relevant articles addressing the ethical aspects of AI usage from the Web of Science Core Collection and Scopus databases. The analysis was performed with a focus on the perspectives of four key stakeholders: policymakers, AI innovators, business leaders and individuals.
Results: Findings highlight key measures to promote ethical AI usage: technical, organisational, regulatory and individual measures. In this context: (1) policymakers are responsible for establishing governance and regulations; (2) AI innovators must embed ethics into AI systems; (3) business leaders should establish ethical policies and guidelines; and (4) individuals need to think critically and use AI responsibly.
Conclusion: The responsible deployment of AI requires a comprehensive approach involving the collaboration of all relevant stakeholders. The future development of AI relies on the adoption of ethical guidelines and the assurance of responsible AI system design.
ResNetMF: Improving Recommendation Accuracy and Speed with Matrix Factorization Enhanced by Residual Networks
Mustafa Payandenick, YinChai Wang, Mohd Kamal Othman, Muhammad Payandenick
Acta Informatica Pragensia 2026, 15(1), 1-21 | DOI: 10.18267/j.aip.280

Background: Recommendation systems are essential for personalized user experiences but struggle to balance accuracy and efficiency.
Objective: This paper presents ResNetMF, an innovative hybrid framework designed to address these limitations by combining the strengths of matrix factorization (MF) and deep residual networks (ResNet). Matrix factorization excels at capturing explicit linear relationships between users and items, while ResNet is employed to model non-linear residuals.
Methods: By focusing on refining the baseline MF output through incremental improvements, ResNetMF minimizes redundant computations and significantly enhances recommendation accuracy. The unique architecture of the framework allows it to capture and represent both linear and non-linear relationships between users and items, ensuring robust and scalable performance. Extensive experiments conducted on the widely used MovieLens dataset demonstrate the superiority of ResNetMF over existing methods.
Results: Specifically, it achieves a minimum improvement of 7.95% in root mean square error compared to neural collaborative filtering and outperforms other state-of-the-art techniques in key metrics such as precision, recall and training efficiency. These results highlight the ability of ResNetMF to deliver highly accurate recommendations while maintaining computational efficiency, making it an efficient approach to real-world application of recommendation systems.
Conclusion: By addressing the dual challenges of accuracy and efficiency, ResNetMF offers a balanced and scalable approach to personalized recommendation systems.
Enhancing Imperceptibility: Zero-width Character-based Text Steganography for Preserving Message Privacy
Saqib Ishtiaq, Naveed Ejaz, Muhammad Usman Hashmi, Syed Imran Hussain Shah
Acta Informatica Pragensia 2025, 14(3), 445-459 | DOI: 10.18267/j.aip.271

Background: Text steganography preserves the privacy of secret messages by hiding them in cover text. However, existing text steganography techniques embed messages by introducing distortions in text, reducing the similarity between the cover and stegotext.
Objective: The objective of this study was to design a method that increases the number of embedding choices and locations to hide more secret bits per distortion in the cover text. The goal is to enhance both embedding capacity and imperceptibility.
Methods: A text steganography method is proposed that uses eight zero-width characters (ZWCs) to embed secret messages in the cover text. The proposed method also treats every character in the cover text as a potential embedding location. With eight embedding choices and bit encoding based on embedding locations, more bits can be hidden with fewer insertions in cover text.
Results: Experimental results confirm that the proposed method embeds a greater number of bits per insertion of ZWC in the cover text. It also requires a smaller number of insertions to embed secret messages of comparable length. Consequently, the proposed method achieves higher embedding capacity and better imperceptibility compared to existing text steganography methods.
Conclusion: The proposed method presents a substantial improvement in text steganography by increasing embedding capacity per distortion and preserving high similarity between cover and stegotext, thus enabling more secure covert communication.
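As a rough illustration of the general idea, and not the authors' exact scheme (which additionally derives bits from the embedding locations themselves), an alphabet of eight zero-width characters lets each insertion carry three bits. The specific code points chosen below are an assumption; any eight invisible characters would work:

```python
# Eight zero-width/invisible Unicode characters; each encodes 3 bits.
# (Assumed alphabet: ZWSP, ZWNJ, ZWJ, word joiner and invisible operators.)
ZWCS = ["\u200b", "\u200c", "\u200d", "\u2060",
        "\u2061", "\u2062", "\u2063", "\u2064"]
DECODE = {c: i for i, c in enumerate(ZWCS)}

def embed(cover, secret_bits):
    """Hide a bit string in cover text, 3 bits per inserted ZWC."""
    # Pad the message to a multiple of 3 bits.
    padded = secret_bits + "0" * (-len(secret_bits) % 3)
    chunks = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    if len(chunks) > len(cover):
        raise ValueError("cover text too short for this message")
    out = []
    for i, ch in enumerate(cover):
        out.append(ch)                         # visible character unchanged
        if i < len(chunks):
            out.append(ZWCS[int(chunks[i], 2)])  # invisible payload
    return "".join(out)

def extract(stego, n_bits):
    """Recover the first n_bits hidden bits from stego text."""
    bits = "".join(format(DECODE[ch], "03b") for ch in stego if ch in DECODE)
    return bits[:n_bits]

stego = embed("The quick brown fox", "101100111")
assert extract(stego, 9) == "101100111"
# Stripping the invisible characters recovers the cover text exactly.
assert "".join(c for c in stego if c not in DECODE) == "The quick brown fox"
```

With a two-character alphabet each insertion would carry one bit, so nine bits would need nine insertions; the eight-character alphabet needs only three, which is the capacity-per-distortion gain the abstract describes.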
Fairness-Aware Multimodal Machine Learning for Retail Stock Prediction from Sentiment and Market Data
Sanjay Rastogi, Kamal Upreti, Uma Shankar, Pravin Ramdas Kshirsagar, Tan Kuan Tak, Rituraj Jain, Ganesh Veluswwamy Radhakrishnan
[Ahead of Print]
Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.299

Background: The introduction of retail investors to AI-powered trading platforms, especially in emerging markets, has brought a new set of risks linked to algorithmic bias and fairness in financial forecasting. Multimodal strategies combining social media sentiment and structured data have demonstrated potential but frequently lack ethical considerations.
Objective: This work proposes a fairness-grounded multimodal model predictive control (MPC) framework for forecasting next-day stock returns, with particular attention to ethical behaviour and model transparency in retail markets.
Methods: We combine BERT-based sentiment analysis of Reddit discussions with structured stock market indicators and use XGBoost as the base model. Bias is measured using fairness metrics, including demographic parity difference and equal opportunity difference. Debiasing measures such as reweighting and stratified calibration were used to curb disparities across stock categories.
Results: The initial model has an overall accuracy of 72.3%, with the highest accuracy of 83.1% in the case of Tesla, indicating bias in the model. Fairness assessment shows significant disparities (DPD = 0.23, EOD = 0.31), which mitigation reduces to 0.07. However, the large performance improvement after adjustment raises the issue of overfitting or fairness overcorrection.
Conclusion: While the proposed debiased framework successfully reduces algorithmic bias, the trade-off between fairness and generalizability underscores the need for caution. These results hold significant implications for digital trading systems and regulatory frameworks in emerging economies such as India, where the explainability and fairness of AI models are essential for ethical financial engagement.
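The demographic parity difference (DPD) and equal opportunity difference (EOD) metrics reported in this abstract have standard definitions that can be computed directly from predictions. The sketch below uses invented toy predictions grouped by a hypothetical stock category, not the study's data:

```python
def demographic_parity_difference(y_pred, groups):
    """|P(pred=1 | group a) - P(pred=1 | group b)| for two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)  # positive prediction rate
    a, b = rates.values()
    return abs(a - b)

def equal_opportunity_difference(y_true, y_pred, groups):
    """Absolute gap in true positive rate between two groups."""
    tprs = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gg in zip(y_true, y_pred, groups) if gg == g]
        # Predictions on the truly positive cases only (each toy group
        # below has at least one positive, so no division by zero).
        positives = [p for t, p in pairs if t == 1]
        tprs[g] = sum(positives) / len(positives)
    a, b = tprs.values()
    return abs(a - b)

# Toy example: "large" vs "small" as a hypothetical stock-category
# attribute; 1 = predicted/actual positive next-day return.
y_true = [1, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0]
groups = ["large", "large", "large", "small", "small", "small"]

dpd = demographic_parity_difference(y_pred, groups)          # 2/3 vs 1/3
eod = equal_opportunity_difference(y_true, y_pred, groups)   # 1.0 vs 0.5
```

A perfectly fair model scores 0 on both metrics; values such as the abstract's DPD = 0.23 and EOD = 0.31 mean the positive-prediction rate and the true positive rate differ across groups by those amounts.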
Evaluating AI Text Detection Tools for Distinguishing Human-Written from AI-Generated Abstracts in Persian-Language Journals of Library and Information Science
Amrollah Shamsi, Ting Wang, Maryam Amraei, Narayanaswamy Vasantha Raju
Acta Informatica Pragensia 2026, 15(1), 126-134 | DOI: 10.18267/j.aip.293

Background: Researchers are using artificial intelligence (AI) tools in academic writing. However, their use may compromise the integrity and originality of the work. Hence, AI text detection tools have emerged to increase transparency.
Objective: This study aims to evaluate the accuracy of AI text detection tools in recognizing human-written and AI-written abstracts in library and information science (LIS).
Methods: Seven Persian academic journals in LIS were selected. ZeroGPT and GPTZero were used as AI text detectors. AI-generated abstracts were produced by AI chatbots (ChatGPT 4.0, DeepSeek and Qwen).
Results: Despite performing strongly in detecting AI-generated text, especially from models such as DeepSeek and Qwen, ZeroGPT and GPTZero struggle to accurately identify human-written content, resulting in high false positive rates and raising concerns about their reliability.
Conclusion: The findings highlight the need for culturally and linguistically inclusive AI detection tools. Current systems such as ZeroGPT and GPTZero show limitations in diverse language contexts, underscoring the importance of improved algorithms and human-involved evaluation to ensure fairness and reliability in academic settings.
Artificial Intelligence Applications in Consumer Behaviour Analysis: A Systematic Review, Mapping Trends and Challenges
Adrián No-Pérez, Sandra Castro-González
[Ahead of Print]
Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.301

Background: The vast amounts of data generated by consumers require new forms of processing, among which artificial intelligence (AI) stands out for its ability to analyse the data more quickly and deeply. However, although there is abundant literature on AI and consumption, most of it focuses on AI's impact on consumer behaviour rather than on its usefulness in enhancing understanding of that behaviour.
Objective: The aim of this study is to conduct a thorough review of the existing literature on the use of AI to understand consumer behaviour.
Methods: This study uses the PRISMA protocol for the selection of studies and then combines bibliometric methods with a TCM-ADO framework to review the articles. The Scopus database was used to gather peer-reviewed articles from 2014 to 2024. VOSviewer and R-Studio were used for the analysis and visualisation of data.
Results: The study provides insights into publication trends, dominant theories, methods, antecedents, decisions and results in the literature on the use of AI to understand consumer behaviour. Furthermore, it identifies potential avenues for future research to advance the development of theory and methodology.
Conclusion: Research into the use of AI to understand consumers is still in its infancy. However, everything points to the application of AI in consumer behaviour continuing to expand, and to its use for analysing attitudes and behaviour becoming more sophisticated and widespread.
Cloud-Based Large Language Model Deployment: A Comparative Analysis of Serverless and Bring-Your-Own-Container Architectures
Mateusz Ploskonka
[Ahead of Print] Acta Informatica Pragensia X:X | DOI: 10.18267/j.aip.313
Background: Large Language Models (LLMs) have transformed research and industry applications; however, cloud deployment decisions remain complex and poorly documented, particularly for academic researchers operating under budget constraints. Systematic guidance on infrastructure selection for LLM-based research is limited. Objective: This study provides a comprehensive empirical evaluation of cloud-based LLM deployment architectures, examining inference efficiency, serverless platform availability, and architectural trade-offs across major cloud providers to deliver actionable guidance for budget-constrained researchers. Methods: The author evaluated 32 open-source LLMs ranging from 0.6 billion to 1 trillion parameters across serverless and Bring Your Own Container (BYOC) deployment configurations. Using the Belebele benchmark, the study analyzed cost–efficiency relationships, serverless platform availability, and metrics exposure across Amazon SageMaker, Amazon Bedrock, Azure Serverless, and Hugging Face–compatible providers. Results: Model performance follows a logarithmic scaling relationship with parameter count (R²=0.727) and deployment cost (R²=0.639). Models in the 30–50B parameter range achieve 85–90% of maximum accuracy at a fraction of the cost of frontier models. However, serverless availability remains fragmented: only 34.4% of the examined models are accessible via serverless endpoints, with minimal cross-platform redundancy (6.2%). Deployment architecture introduces a fundamental trade-off: serverless platforms expose 71% fewer metrics than BYOC approaches while eliminating infrastructure management overhead and idle costs. Conclusion: These findings provide practical guidance for researchers selecting cloud infrastructure under budget constraints. Models in the 7–14B range offer optimal cost efficiency, while the 30–50B range maximizes accuracy per dollar for demanding tasks. The results also challenge the prevailing emphasis on ever-larger models, as diminishing returns become substantial beyond 30B parameters. Persistent gaps in serverless availability and observability highlight the need for greater standardization in cloud platforms. |
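The logarithmic scaling relationship reported above can be illustrated with a small least-squares fit of accuracy = a + b·ln(parameters). The data points below are synthetic values invented to mimic the reported trend; they are not the paper's Belebele measurements.

```python
import math

# Synthetic (parameters in billions, benchmark accuracy) pairs that mimic
# a logarithmic scaling trend; NOT the paper's data.
data = [(0.6, 0.42), (3, 0.58), (7, 0.66), (14, 0.72),
        (32, 0.80), (70, 0.84), (180, 0.87), (1000, 0.90)]

# Least-squares fit of accuracy = a + b * ln(params), done by hand so the
# example needs nothing beyond the standard library.
xs = [math.log(p) for p, _ in data]
ys = [acc for _, acc in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# Coefficient of determination for the fit.
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot

print(f"accuracy ~= {a:.3f} + {b:.3f} * ln(params_B), R^2 = {r2:.3f}")
```

Because the model is logarithmic, each doubling of parameter count adds only about b·ln 2 in accuracy, which is the diminishing-returns effect the abstract describes beyond 30B parameters.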
Drone Delivery Global Research Landscape: A Bibliometric Analysis
Abdulwahab Funsho Atanda, Daniel Yong Wen Tan, Huong Yong Ting, Wasiu Olakunle Oyenuga, Abdulrauf Uthman Tosho
Acta Informatica Pragensia 2026, 15(1), 221-252 | DOI: 10.18267/j.aip.296
Background: Rapid technological advancements have revolutionized research into unmanned aerial vehicles (UAVs), commonly known as drones, particularly in delivery applications. However, despite numerous related publications, there remains a lack of systematic reviews that synthesize challenges, trends and recent advances in drone delivery. To address this gap, the present study conducts a bibliometric analysis to examine evolutionary trends and emerging applications of UAVs between 2015 and 2024. Objective: This study aims to identify established and emerging trends in drone delivery research by analysing articles, journals, authors, institutions, countries and thematic areas. Methods: Previous studies are selected using a systematic approach, followed by bibliometric analysis with tools including VOSviewer, Bibliometrix and ScientoPy, which emphasizes key authors, top journals and countries, collaboration patterns and recurring author keywords. Results: The bibliometric analysis of 1,438 articles from 583 sources authored by 4,333 scholars (2015–2024) reveals a strong interdisciplinary focus in drone delivery research. Military applications largely drove early studies, but recent breakthroughs highlight the integration of artificial intelligence (AI) for autonomous navigation and energy optimization. Emerging themes include the development of drone swarms for scalable applications such as disaster response and agricultural mapping. Geographically, China, the United States and Australia dominate contributions, with extensive international collaborations fostering global innovation. Across journals and authors, the literature reflects a steady evolution from conceptual and technical foundations to applied studies addressing logistics, smart cities and environmental monitoring. Overall, the results suggest that drone delivery research is transitioning from exploratory phases towards AI-enabled autonomy deployment. Conclusion: Drone delivery research has evolved from military origins into a global, interdisciplinary field driven by AI. China, the USA and Australia are the leading contributors. Its future hinges on balancing technological innovation—such as autonomous navigation and swarm applications—with ethical, regulatory and societal considerations for sustainable integration. |
Generative Artificial Intelligence in Education: Advancing Adaptive and Personalized Learning
Manel Guettala, Samir Bourekkache, Okba Kazar, Saad Harous
Acta Informatica Pragensia 2024, 13(3), 460-489 | DOI: 10.18267/j.aip.235
The integration of generative artificial intelligence (AI) into adaptive and personalized learning represents a transformative shift in the educational landscape. This research paper investigates the impact of incorporating generative AI into adaptive and personalized learning environments, with a focus on tracing the evolution from conventional artificial intelligence methods to generative AI and identifying its diverse applications in education. The study begins with a comprehensive review of the evolution of generative AI models and frameworks. A framework of selection criteria is established to curate case studies showcasing the applications of generative AI in education. These case studies are analysed to elucidate the benefits and challenges associated with integrating generative AI into adaptive learning frameworks. Through an in-depth analysis of selected case studies, the study reveals tangible benefits derived from generative AI integration, including increased student engagement, improved test scores and accelerated skill development. Ethical, technical and pedagogical challenges related to generative AI integration are identified, emphasizing the need for careful consideration and collaborative efforts between educators and technologists. The findings underscore the transformative potential of generative AI in revolutionizing education. By addressing ethical concerns, navigating technical challenges and embracing human-centric approaches, educators and technologists can collaboratively harness the power of generative AI to create innovative and inclusive learning environments. Additionally, the study highlights the transition from Education 4.0 to Education 5.0, emphasizing the importance of social-emotional learning and human connection alongside personalization in shaping the future of education. |
Current Woes and Pitfalls of Publishing Scientific Journals: Development of Acta Informatica Pragensia and Reflection on Using GenAI Tools
Zdenek Smutny
Acta Informatica Pragensia 2025, 14(3), 296-305 | DOI: 10.18267/j.aip.274
The editorial summarises the development of the Acta Informatica Pragensia journal over the last three years and complements the journal statistics for the years 2019–2025. Thanks to the journal's indexing in Web of Science and Scopus, the world's most prestigious scientific citation databases, the journal began to profile itself as international with regional roots and a core community of Editorial Board members from Central Europe. The paper also presents the journal metrics and statistics of submitted and accepted articles for the observed period. Against the background of the current development of tools based on generative artificial intelligence, the perspectives presented in selected articles previously published in Acta Informatica Pragensia are discussed in the context of current and future directions of academic publishing. Finally, unfair practices of authors that I have encountered in our journal as Editor-in-Chief are presented, along with others that are currently resonating in academic communities. |
Electronic Health Record Systems in Limited Resource Settings: A Comprehensive Evaluation of the Impilo Platform
Hamufare Dumisani Mugauri, Memory Chimsimbe
Acta Informatica Pragensia 2025, 14(3), 393-407 | DOI: 10.18267/j.aip.265
Background: Zimbabwe has implemented the Impilo electronic health record (EHR) system since 2016 to manage the health system electronically, gather strategic information and reduce the manual documentation burden. Objective: We evaluated the capacity of decentralized structures to effectively use the Impilo EHR platform, identified training needs and challenges and provide recommendations for enhancing its effectiveness and support for integrated people-centred services at the primary healthcare level. Methods: We employed a cross-sectional, mixed-methods design, applying the COM-B (Capability, Opportunity, Motivation and Behaviour) model of behavioural change. Forty-five purposively selected healthcare workers (nurses, data entry clerks, receptionists, pharmacy staff, laboratory technicians and primary counsellors) from ten healthcare facilities in Harare and Bulawayo were included in this study. Interviews were transcribed, translated and manually coded for thematic analysis using the COM-B constructs. Results: Health workers had satisfactory skills for using the Impilo EHR system but lacked troubleshooting abilities. The capacity building did not equip users with the necessary programme-specific skills. Problems such as internet connectivity, power backup, human resource shortages, interoperability issues and lack of editing rights hindered usage. The EHR system integrated primary health services but struggled with interoperability with other software and lacked data aggregation servers, limiting its effectiveness. Leadership support and user involvement were missed opportunities to enhance performance. Conclusion: This study provided key insights into the implementation of the Impilo EHR system in Zimbabwe. The system empowers healthcare professionals with timely information, improving decision-making and patient care. However, problems such as module issues, knowledge gaps, internet connectivity, interoperability, human resource shortages and power constraints hinder its full potential. We recommend addressing these shortcomings, enhancing leadership support, integrating EHR usage into performance appraisals and improving system integration with other platforms to enhance accuracy and reliability. |
Optimizing Battery Charging in Wireless Sensor Networks: Performance Assessment of MPPT Algorithms in Different Environmental Settings
Abdullah Fadhil Noor Shubbar, Serkan Savaş, Osman Güler
Acta Informatica Pragensia 2025, 14(3), 422-444 | DOI: 10.18267/j.aip.267
Background: Photovoltaic (PV)-based energy harvesting systems are crucial for ensuring the sustainability and long-term operation of wireless sensor networks (WSNs), especially in remote or infrastructure-less environments. Given the critical role of battery performance in WSN reliability, efficient energy management through Maximum Power Point Tracking (MPPT) algorithms is essential to adapt to variable environmental conditions such as solar irradiance and ambient temperature. Objective: This study aims to comparatively assess the performance of four widely adopted MPPT algorithms—Perturb and Observe (P&O), Incremental Conductance (IC), Fuzzy Logic (FL), and Particle Swarm Optimization (PSO)—in enhancing battery charging efficiency in PV-powered WSNs under dynamic environmental conditions. Methods: A simulation-based evaluation framework was developed using MATLAB/Simulink to model a PV-powered WSN system. Each MPPT algorithm was implemented and tested using the same simulation conditions, with key performance metrics including voltage and current overshoot, response time, energy transfer efficiency, and adaptability to fluctuating irradiance and temperature profiles. A Proportional-Integral (PI) controller was also used to manage the battery charging process, and environmental profiles were varied across simulation periods to assess algorithm robustness. Results: The PSO algorithm achieved superior performance across all metrics, demonstrating the fastest response time (0.1 s), lowest overshoot (14.8 V, 25 mA), and highest energy transfer efficiency. IC and FL methods showed balanced adaptability and performance, while P&O lagged in both responsiveness and efficiency. The simulation results also confirmed that environmental conditions significantly affect PV panel output and battery State of Charge (SoC), highlighting the necessity for adaptive MPPT solutions. Conclusion: This study provides a unified and realistic comparative analysis of major MPPT algorithms for PV-powered WSNs. The PSO algorithm emerges as the most effective, though its computational complexity may limit its application in low-power systems. IC and FL serve as promising alternatives for scenarios with resource constraints. The findings contribute to the design of environmentally adaptive and energy-efficient WSNs, paving the way for their robust deployment in real-world settings. |
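For context on the algorithms compared above, Perturb and Observe (the simplest of the four) can be sketched in a few lines of Python. The quadratic PV power curve, start voltage and step size below are toy assumptions for illustration; they are not taken from the paper's MATLAB/Simulink model.

```python
def pv_power(v):
    # Toy single-peak PV power curve (watts), with its maximum power
    # point near 17 V; stands in for a real panel model.
    return max(0.0, 3.0 * v - 0.088 * v * v)

def perturb_and_observe(v_start=5.0, step=0.2, iterations=200):
    """Classic P&O hill climbing: perturb the operating voltage, observe
    the power change, and reverse direction whenever power drops."""
    v, direction = v_start, 1.0
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(f"P&O settled near {v_mpp:.1f} V")
```

The sketch also hints at P&O's reported weakness: it never settles exactly at the maximum power point but oscillates around it, and a fixed step size forces a trade-off between response time and overshoot.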
Blockchain-Based Framework for Privacy Preservation and Securing EHR with Patient-Centric Access Control
Reval Prabhu Puneeth, Govindaswamy Parthasarathy
Acta Informatica Pragensia 2024, 13(1), 1-23 | DOI: 10.18267/j.aip.225
The technological advancements in the field of e-healthcare have resulted in unprecedented generation of medical data, which increases the risks to data security and privacy. Ensuring the privacy of Electronic Health Records (EHR) has become challenging due to the outsourcing of healthcare information to the cloud. This increases the chance of data leakage to unauthorized users and affects the privacy and integrity of user data. It requires a trustworthy central authority to protect sensitive patient information from both internal and external attacks. This paper presents a blockchain-based privacy preservation framework for securing EHR data. The proposed framework integrates the immutability and decentralized nature of blockchain with advanced cryptographic techniques to ensure the confidentiality, integrity and availability of EHR. The EHR data are stored in an InterPlanetary File System (IPFS) and encrypted using a hybrid cryptographic algorithm. In addition, a novel smart contract-based patient-centric access control is designed in this paper using a blockchain-based SHA-256 hashing algorithm to protect the privacy of patient data. The experimental results show that the proposed framework enables secure sharing of health information between network users with improved data privacy and security. Furthermore, the optimized search process reduces time and space complexity compared to the traditional search process. Through the use of smart contracts, this framework enforces patient-centric access controls and allows patients to manage and authorize access to their medical data. |
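The tamper-evidence that SHA-256 hash linking provides in such frameworks can be illustrated with a minimal hash chain over EHR entries, using only the Python standard library. This is a generic sketch of the idea, not the paper's proposed framework; the record fields are invented for the example.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """SHA-256 over the serialized record plus the previous block's hash,
    so altering any earlier record invalidates every later hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical EHR entries (field names invented for this sketch).
records = [
    {"patient": "P-001", "event": "admission", "day": 1},
    {"patient": "P-001", "event": "lab result", "day": 2},
    {"patient": "P-001", "event": "discharge", "day": 5},
]

# Build the chain: each block commits to the one before it.
chain = []
prev = "0" * 64  # genesis value
for rec in records:
    prev = block_hash(rec, prev)
    chain.append(prev)

def verify(records, chain) -> bool:
    """Recompute every hash from the genesis value and compare."""
    prev = "0" * 64
    for rec, h in zip(records, chain):
        prev = block_hash(rec, prev)
        if prev != h:
            return False
    return True

print("chain valid:", verify(records, chain))
records[1]["day"] = 3            # tamper with a middle record
print("after tampering:", verify(records, chain))
```

Changing any earlier record changes its hash, which cascades through every later block, so verification fails for the whole chain.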
Information Ethics in Light of Bibliometric Analyses: Discovering a Shift to Ethics of Artificial Intelligence
Jela Steinerová, Miriam Ondrišová
Acta Informatica Pragensia 2024, 13(3), 433-459 | DOI: 10.18267/j.aip.237
The objectives of this study are to analyse the content of publications focused on the area of information ethics and to discover patterns, knowledge and thematic trends. The main research question is: What is the intellectual and topical structure of the field of information ethics? We apply bibliometric analytical methods, including co-citation analysis (the 41 most cited authors out of 9,947), co-word analysis (127 keywords), visualizations (maps) and analysis of time periods in strategic diagrams. These methods are interpreted with the use of previous content analyses and the results of a Delphi study. The dataset covers publications between 1988 and 2023 collected from Web of Science using the search term “information ethics” in titles, keywords and abstracts (469 records). The study presents the research background and objectives, a review of related research, the research methods and the findings. Results are visualized in maps of topics and trends. We investigate the intellectual and thematic structure of information ethics, including numbers of publications, main disciplines and the intellectual structure (authors, topics, trends), and identify four time periods (1988-2005, 2006-2012, 2013-2019, 2020-2023) visualized by strategic diagrams. The study reveals the multidimensionality and multidisciplinary dynamic evolution of information ethics. The main trends are the topics of ethics of artificial intelligence and algorithms, data ethics, ethics of information literacy, informational privacy and dis/misinformation. We find that information ethics studies are embedded in the wider contexts of the information crisis and the design of public digital services. We propose education and information literacy courses related to ethical sensitivity, data ethics and the use of AI tools. The study contributes to bridging the gap between information ethics studies and human information interactions. Our results confirm the increasing interest in the ethics of artificial intelligence. |
Deep Learning Approach for Predicting Psychodiagnosis
Zouaoui Samia, Khamari Chahinez
Acta Informatica Pragensia 2024, 13(2), 288-307 | DOI: 10.18267/j.aip.243
Artificial intelligence methods, especially deep learning, have seen increasing application in analysing personality and occupational data to identify individuals with psychological and neurological disorders. There is currently a great need for effective processing in mental healthcare through the integration of artificial intelligence techniques such as machine learning and deep learning. The paper addresses the pressing need for accurate and efficient methods for diagnosing psychiatric disorders, which are often complex and multifaceted. By exploiting the power of convolutional neural networks (CNN), we propose a novel CNN-based natural language processing method that does not remove stop words for predicting psychiatric diagnoses, capable of accurately classifying individuals based on their psychological data. Our proposal is based on keeping a richer linguistic and semantic context to accurately predict psychiatric diagnosis. The experiment involves two datasets: one gathered from a private clinic and the other from Kaggle, called the Human Stress Dataset. The outcomes from the first dataset demonstrate a remarkable accuracy rate of 98.51% when employing CNN, showcasing its superior performance compared to standard machine learning techniques such as logistic regression, k-nearest neighbours and support vector machines. With the second dataset, our model achieved an impressive area under the receiver operating characteristic curve (AUROC) of 0.87. This result surpasses those achieved by existing state-of-the-art methods, further highlighting the efficacy of our CNN-based approach in discerning subtle nuances within the data and making accurate predictions. Moreover, we compared our model with three other programs on the same dataset, where its accuracy reached 78.52%. These promising results could aid parents and clinicians in identifying affected individuals early and rapidly. |
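The design choice of keeping stop words, emphasized above, is easy to motivate with a short example: common stop-word lists include negations, so removing them can collapse clinically opposite statements into the same text. The stop-word subset and sentences below are illustrative assumptions, not the paper's actual preprocessing.

```python
# A small stop-word list of the kind many NLP pipelines strip by default;
# note that it contains "not" and "no" (illustrative subset only).
STOP_WORDS = {"i", "am", "do", "the", "a", "an", "not", "no", "of"}

def remove_stop_words(text: str) -> str:
    """The standard preprocessing step that the proposed method skips."""
    return " ".join(w for w in text.lower().split() if w not in STOP_WORDS)

positive = "i do not feel anxious"
negative = "i do feel anxious"

# After stop-word removal the two opposite statements become identical,
# which is exactly the semantic loss the abstract's approach avoids.
print(remove_stop_words(positive))  # -> "feel anxious"
print(remove_stop_words(negative))  # -> "feel anxious"
```

Keeping such function words preserves the negation signal that a CNN text classifier can then learn from.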
The Fairness Stitch: A Novel Approach for Neural Network Debiasing
Modar Sulaiman, Kallol Roy
Acta Informatica Pragensia 2024, 13(3), 359-373 | DOI: 10.18267/j.aip.241
The pursuit of fairness in machine learning models has become increasingly crucial across various applications, including bank loan approval and face detection. Despite the widespread use of artificial intelligence algorithms, concerns persist regarding biases and discrimination within these models. This study introduces a novel approach, termed “The Fairness Stitch” (TFS), aimed at enhancing fairness in deep learning models by combining model stitching and joint training while incorporating fairness constraints. We evaluate the effectiveness of TFS through a comprehensive assessment using two established datasets, CelebA and UTKFace. The evaluation involves a systematic comparison with the existing baseline method, fair deep feature reweighting (FDR). Our analysis demonstrates that TFS achieves a better balance between fairness and performance compared to the baseline method (FDR). Specifically, our method shows significant improvements in mitigating biases while maintaining performance levels. These results underscore the promising potential of TFS in addressing bias-related challenges and promoting equitable outcomes in machine learning models. This research challenges conventional wisdom regarding the efficacy of the last layer in deep learning models for debiasing purposes. The findings suggest that integrating fairness constraints into our proposed framework (TFS) can lead to more effective mitigation of biases and contribute to fairer AI systems. |
Impact of Management Support on Business Intelligence Adoption: Illustrative Case Study Testing Different Managerial Strategies
Jakub Andar, Petra Kasparova
Acta Informatica Pragensia 2024, 13(1), 85-99 | DOI: 10.18267/j.aip.230
Business intelligence (BI) is a crucial tool for organizations seeking to gain a competitive advantage in the market. BI encompasses the collection, analysis and utilization of data to enhance decision-making and drive organizational innovation. The success of BI projects is most often influenced by system quality and management support. The present article aims to verify the importance of management support in implementing BI solutions. The research, based on an illustrative case study, was conducted in a global shipping company, where four managerial tactics were tested: a different form of managerial support was applied to each of four newly introduced reports provided by BI tools. The results confirm the importance of management support but also show the impact of other factors influencing user behaviour. Retaining the option of using the original data sources played a significant role; the habit effect thus manifested itself, as it strongly accompanies all efforts at process change. Business intelligence tools are becoming part of operational decision-making processes, but management support remains essential at all job levels. |
Factors Influencing Cloud Computing Adoption by SMEs in the Czech Republic: An Empirical Analysis Using Technology-Organization-Environment Framework
Jiří Homan, Ladislav Beránek
Acta Informatica Pragensia 2023, 12(2), 296-310 | DOI: 10.18267/j.aip.217
Cloud computing technologies have come a long way and are available to virtually any company today. However, which factors lead a company to decide to implement these services? Building on existing research from abroad, we compiled a Technology-Organization-Environment (TOE) framework and proposed questions supporting the individual factors in our model to address this problem. Small and medium-sized enterprises (SMEs) in the Czech Republic actively participated in the research, from which we received 99 valid responses. Our results show a significant influence of four factors. The first factor is relative advantage, and the second is competitive pressure. In our case, companies are convinced that, thanks to cloud computing, they will gain an advantage over competitors, especially in the areas of cost, increased productivity and entry into new industries. At the same time, they are convinced that competitors are implementing cloud computing and taking advantage of it. The third factor is compatibility, which may explain the temporary prevalence of only simple implementations. The fourth factor is industry: companies perceive pressure to implement cloud computing in their business area. To support the further expansion of cloud computing, it is necessary to continue highlighting its cost benefits. At the same time, it makes sense to bring new applications with a simple billing model and simple integration between the most used applications. |
Blockchain-Powered Patient-Centric Access Control with MIDC AES-256 Encryption for Enhanced Healthcare Data Security
Krishna Prasad Narasimha Rao, Selvan Chinnaiyan
Acta Informatica Pragensia 2024, 13(3), 374-394 | DOI: 10.18267/j.aip.242
Patient-centric access control in healthcare data management is paramount for ensuring privacy, confidentiality and security. In this paper, we propose a novel blockchain-powered patient-centric access control system integrated with MIDC AES-256 encryption to enhance healthcare data security. The proposed system prioritizes patient autonomy by granting patients control over access to their detailed health information, while hospitals are authorized to share relevant medical history. The use of blockchain technology ensures decentralization, transparency and immutability of data, while smart contracts and consensus mechanisms enforce accountability and integrity. Additionally, the system employs MIDC AES-256 encryption, which combines multi-input data concatenation (MIDC) with AES-256 encryption, optimizing data integrity and security. The study involves a comparative analysis with existing methods, including ABE, RSA and a hybrid AES algorithm. The results demonstrate the superiority of the proposed system in terms of encryption speed, decryption time and memory usage: it achieves an encryption time of 3.8 seconds and a decryption time of 3.2 seconds, significantly outperforming ABE, RSA and the hybrid AES algorithm. Moreover, the system exhibits lower memory usage (0.146 MB), highlighting its efficiency and scalability. The proposed system is implemented in Python, providing a versatile and accessible solution for enhancing healthcare data security. Through blockchain-powered patient-centric access control and MIDC AES-256 encryption, our system offers a robust framework for securing sensitive healthcare information while prioritizing patient privacy and control. |
