Introduction: Chatbots are increasingly used in education, offering real-time, personalized communication. While research has explored the technical aspects of chatbots, user experience remains under-investigated. This study examines a model for evaluating user experience and satisfaction with chatbots in higher education. Methodology: A four-factor model (information quality, system quality, chatbot experience, user satisfaction) was proposed based on prior research. An alternative two-factor model emerged through exploratory factor analysis, focusing on “Chatbot Response Quality” and “User Experience and Satisfaction with the Chatbot.” Surveys were distributed to students and faculty at a university in Ecuador to collect data, and confirmatory factor analysis was used to evaluate both models. Results: The two-factor model explained a significantly greater proportion of the variance in the data (55.2%) than the four-factor model (46.4%). Conclusion: The findings suggest that a simpler model focused on chatbot response quality and user experience is more effective for evaluating chatbots in education. Future research can explore ways to optimize these factors and improve the learning experience for students.
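Since this abstract centers on exploratory factor analysis and the share of variance each model explains, a brief illustration may help. The following is a minimal sketch of a two-factor exploratory solution in Python with scikit-learn; the survey item names, factor labels, and response data are hypothetical placeholders, not the study's actual instrument or results.

```python
# Minimal sketch of a two-factor exploratory factor analysis on Likert-scale
# survey items. Item names, factor labels, and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical survey responses (1-5 Likert scale), one column per item
responses = pd.DataFrame(
    np.random.randint(1, 6, size=(200, 8)),
    columns=[f"item_{i}" for i in range(1, 9)],
)

# Standardize the items, then fit a two-factor model with varimax rotation
X = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
fa.fit(X)

# Loadings: rows are items, columns are the two (assumed) factors
loadings = pd.DataFrame(
    fa.components_.T,
    index=responses.columns,
    columns=["response_quality", "experience_satisfaction"],
)
print(loadings.round(2))

# Proportion of total standardized variance captured by each factor
# (sum of squared loadings divided by the number of items)
explained = (loadings ** 2).sum() / X.shape[1]
print(explained.round(3))
```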
This study applies machine learning methods, namely Decision Tree (CART) and Random Forest, to classify drought intensity based on meteorological data. The goal was to evaluate the effectiveness of these methods for drought classification and their use in water resource management and agriculture. The two models analyzed temperature and humidity indicators as well as wind speed indicators, and were trained and tested on real meteorological data to assess their accuracy and identify the key factors affecting predictions. Results showed that the Random Forest model achieved the highest accuracy, 94.4%, when analyzing temperature and humidity indicators, while the Decision Tree (CART) achieved 93.2%; on wind speed indicators, the accuracies were 91.3% and 93.0%, respectively. Feature importance analysis revealed that atmospheric pressure, temperature at 2 m, and wind speed are the key factors influencing drought intensity. One limitation of the study was the insufficient amount of data for high drought levels (classes 4 and 5), indicating the need for further data collection. The innovation of this study lies in integrating a range of meteorological parameters to build drought classification models with high prediction accuracy; unlike previous studies, our approach demonstrates that using a wide range of meteorological data can significantly improve drought classification accuracy. Key findings include the need to expand the dataset and integrate additional climatic parameters to improve the models and enhance their reliability.
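For readers unfamiliar with the modeling step described here, the following is a minimal sketch, in Python with scikit-learn, of a CART and Random Forest classification pipeline with feature-importance inspection. The file name, feature columns, and target labels are assumptions for illustration, not the authors' dataset.

```python
# Illustrative sketch of the kind of pipeline described: CART and Random Forest
# classifiers for drought intensity from meteorological features.
# "meteo_data.csv", the feature names, and "drought_class" are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("meteo_data.csv")            # hypothetical file
features = ["pressure", "temperature_2m", "humidity", "wind_speed"]
X, y = data[features], data["drought_class"]    # e.g. intensity classes 0-5

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

cart = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_train, y_train)

print("CART accuracy:         ", accuracy_score(y_test, cart.predict(X_test)))
print("Random Forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))

# Feature importances, as used in the study to identify the key drivers
print(pd.Series(forest.feature_importances_, index=features)
        .sort_values(ascending=False))
```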
Telecommunications markets have a major impact on national economies. One example is the great potential of internet service, which enables growth in areas such as productivity, education, health, and connectivity. Because a few companies dominate telecommunications markets, there is a high risk of market concentration, and the state must therefore establish strong regulation of the sector. Models for measuring competition in telecommunications markets allow the state to monitor concentration in these markets, and predicting competition with artificial intelligence techniques would allow the state to anticipate the controls needed to regulate the market and avoid monopolies and oligopolies. The added value and main objective of this work is to measure the current level of concentration in the Colombian telecommunications market; this enables a competitive analysis from which to propose effective strategies and methodologies for improving future competition among Colombian telecommunications service operators. The main result of the research is evidence of concentration in the Colombian telecommunications market.
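The abstract does not name the concentration measure the authors applied, so as an illustrative assumption only, the sketch below computes the Herfindahl-Hirschman Index (HHI), a standard measure of market concentration, on hypothetical operator market shares rather than Colombian data.

```python
# Hedged sketch: HHI as one common way to quantify market concentration.
# Operator shares below are invented, not actual Colombian market figures.
def hhi(market_shares_percent):
    """HHI = sum of squared market shares, with shares expressed in percent (0-100)."""
    return sum(share ** 2 for share in market_shares_percent)

# Hypothetical operator shares (percent of subscribers or revenue)
shares = [45.0, 30.0, 15.0, 10.0]
index = hhi(shares)
print(index)  # 3250.0; values above 2500 are conventionally read as highly concentrated
```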
Countering cyber extremism is a crucial challenge in the digital age. Social media algorithms, if designed and used properly, can be a powerful tool in this fight and can drive the development of technological solutions that make social networks a safer and healthier space for all users. This study mainly aims to provide a comprehensive view of the role played by social networking site algorithms in countering electronic extremism, to clarify the expected ease of use by programmers in limiting the dissemination of extremist data, and to analyze the intended benefit of controlling and organizing digital content for users from all societal groups. Through a systematic review, a variety of previous literature on the application of algorithms to reducing online radicalization was evaluated. These algorithms use machine learning and the analysis of text and images to detect content that may be harmful, hateful, or calling for violence; posts, comments, photos, and videos are analyzed for any signs of extremism. The algorithms also help amplify content that promotes positive values, tolerance, and understanding between individuals, which reduces the impact of extremist content, and they are updated continuously so they can detect new methods used by extremists to spread their ideas and evade detection. The results indicate that these algorithms can be used to their full potential to enhance electronic security and reduce digital threats.
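As a purely illustrative sketch of the kind of text-classification step the reviewed literature describes (real platform systems are proprietary, multimodal, and far larger), a toy Python example with scikit-learn might look like the following; the texts, labels, and scoring logic are invented for illustration.

```python
# Toy sketch of ML-based screening of posts for potentially harmful content.
# Training texts and labels are placeholders; a real system would use large,
# curated datasets and would route flagged items to human moderators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["example of a neutral post", "example of a post calling for violence"]
train_labels = [0, 1]  # 0 = benign, 1 = potentially extremist (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

new_posts = ["another incoming post to screen"]
flag_scores = model.predict_proba(new_posts)[:, 1]
print(flag_scores)  # higher scores would be prioritized for human review
```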