The objective of this work was to analyze the effect of ChatGPT use on the teaching-learning process of scientific research in engineering. Artificial intelligence (AI) is a topic of great interest in higher education, as it combines hardware, software, and programming languages to implement deep learning procedures. We focused on a specific course on scientific research in engineering, in which we measured competencies, expressed through the indicators of mastery, comprehension, and synthesis capacity, in students who chose whether or not to use ChatGPT to develop and complete their activities. The data were processed with Student's t-test, and box-and-whisker plots were constructed. The results show that students' reliance on ChatGPT limits their engagement in acquiring knowledge related to scientific research. This research presents evidence that engineering students in scientific research courses use ChatGPT as a substitute for their own academic work and, consequently, take a static rather than a dynamic role in the teaching-learning process.
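The group comparison described above can be sketched with a two-sample t-test. The abstract does not report the exact test variant or the data, so the following is a minimal illustration using Welch's form of the statistic (which does not assume equal variances) and hypothetical competency scores; `welch_t` and both score lists are inventions for this sketch, not the study's data.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom.

    Compares the means of two independent groups without assuming
    equal variances (e.g. scores of ChatGPT users vs. non-users).
    """
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = mean(sample_a), mean(sample_b)
    v1, v2 = variance(sample_a), variance(sample_b)  # sample variances
    se2 = v1 / n1 + v2 / n2                          # squared standard error
    t = (m1 - m2) / se2 ** 0.5
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical competency scores for the two groups (illustrative only)
no_gpt = [14, 15, 13, 16, 14, 15]
gpt    = [11, 12, 10, 13, 11, 12]
t, df = welch_t(no_gpt, gpt)  # t > 0 favours the non-ChatGPT group
```

The resulting t value would then be compared against the t-distribution with `df` degrees of freedom to decide significance.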
This study conducts a comparative analysis of various machine learning and deep learning models for predicting order quantities in supply chain tiers. The models employed include XGBoost, Random Forest, CNN-BiLSTM, Linear Regression, Support Vector Regression (SVR), K-Nearest Neighbors (KNN), Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), Bidirectional LSTM (BiLSTM), Bidirectional GRU (BiGRU), Conv1D-BiLSTM, Attention-LSTM, Transformer, and LSTM-CNN hybrid models. Experimental results show that the XGBoost, Random Forest, CNN-BiLSTM, and MLP models exhibit superior predictive performance. In particular, the XGBoost model demonstrates the best results across all performance metrics, attributed to its effective learning of complex data patterns and variable interactions. The KNN model also reports perfect predictions with zero error values, but such a result is suspicious and indicates a need to review the data processing procedures or model validation methods. Conversely, the BiLSTM, BiGRU, and Transformer models exhibit relatively lower performance. Models with moderate performance include Linear Regression, RNN, Conv1D-BiLSTM, Attention-LSTM, and the LSTM-CNN hybrid model, all displaying relatively higher errors and lower coefficients of determination (R²). In summary, tree-based models (XGBoost, Random Forest) and certain deep learning models like CNN-BiLSTM are found to be effective for predicting order quantities in supply chain tiers. In contrast, RNN-based models (BiLSTM, BiGRU) and the Transformer show relatively lower predictive power. Based on these results, we suggest that tree-based models and CNN-based deep learning models should be prioritized when selecting predictive models in practical applications.
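The zero-error KNN result flagged above is the classic symptom of data leakage: if test samples duplicate (or nearly duplicate, after preprocessing) training samples, a nearest-neighbour model memorises them and its error collapses to exactly zero. A minimal sketch of this effect, assuming 1-D features and a toy 1-NN regressor (`knn1_predict` and all data are invented for illustration):

```python
def knn1_predict(train_x, train_y, query):
    """1-nearest-neighbour regression: return the target of the
    closest training point (absolute distance on 1-D features)."""
    best = min(range(len(train_x)), key=lambda i: abs(train_x[i] - query))
    return train_y[best]

def rmse(actual, predicted):
    """Root Mean Squared Error over paired observations."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)) ** 0.5

train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [10.0, 20.0, 30.0, 40.0]

# Leaky evaluation: the "test" points duplicate training points,
# so 1-NN memorises them and the error is exactly zero.
leaky_x, leaky_y = [2.0, 4.0], [20.0, 40.0]
leaky_rmse = rmse(leaky_y, [knn1_predict(train_x, train_y, q) for q in leaky_x])

# Clean evaluation: genuinely unseen points give a non-zero error.
clean_x, clean_y = [2.5, 3.5], [25.0, 35.0]
clean_rmse = rmse(clean_y, [knn1_predict(train_x, train_y, q) for q in clean_x])
```

Checking for train/test overlap before and after preprocessing is therefore a cheap first diagnostic when any model reports implausibly perfect metrics.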
Named Entity Recognition (NER), a core task in Information Extraction (IE) alongside Relation Extraction (RE), identifies and extracts entities like place and person names in various domains. NER has improved business processes in both public and private sectors but remains underutilized in government institutions, especially in developing countries like Indonesia. This study examines which government fields have utilized NER over the past five years, evaluates system performance, identifies common methods, highlights countries with significant adoption, and outlines current challenges. Over 64 international studies from 15 countries were selected using PRISMA 2020 guidelines. The findings are synthesized into a preliminary ontology design for Government NER.
Combining the power of Artificial Intelligence (AI) with surgeons' expertise is producing breakthroughs in surgical care and bringing new hope to patients. Applying deep learning-based computer vision techniques to surgical procedures can enhance the healthcare industry. Laparoscopic surgery holds excellent potential for computer vision because the real-time recordings captured by digital cameras contain a wealth of unexplored information. Furthermore, with computing power becoming increasingly accessible and Machine Learning methods expanding across industries, the potential for AI in healthcare is vast. AI can contribute to laparoscopic surgery in several ways; one is an image guidance system that identifies anatomical structures in real time. However, few studies address intraoperative anatomy recognition in laparoscopic surgery. This study provides a comprehensive review of current state-of-the-art semantic segmentation techniques, which can guide surgeons during laparoscopic procedures by identifying specific anatomical structures for dissection or by flagging hazardous areas to avoid. This review aims to advance research in AI for surgery and to guide innovation towards experiments that can succeed in real-world clinical settings. Such AI contributions could revolutionize laparoscopic surgery and improve patient outcomes.
The telecommunications services market faces substantial challenges in an environment that is increasingly flexible and adaptable to customers. Research has highlighted that monopolization of the spectrum by a single operator reduces competition and negatively impacts users and the overall dynamics of the sector. This article presents a proposal to predict the number of users, the traffic level, and operators' revenues in the telecommunications market using artificial intelligence. Deep Learning (DL) is implemented through a Long Short-Term Memory (LSTM) network as the prediction technique. The database corresponds to the users, revenues, and traffic of 15 network operators, obtained from the Communications Regulation Commission of the Republic of Colombia. The ability of LSTMs to handle temporal sequences, capture long-term dependencies, adapt to changes, and manage complex data makes them an excellent strategy for predicting and forecasting the telecom market. Various works combine LSTM and telecommunications; however, many open questions remain in prediction, and continued research should focus on providing cognitive engines to address further challenges. MATLAB is used for the design and subsequent implementation. The low Root Mean Squared Error (RMSE) values and acceptable Mean Absolute Percentage Error (MAPE) levels, especially in an environment characterized by high variability in the number of users, support the conclusion that the implemented model achieves excellent predictive precision in both open-loop and closed-loop forecasting.
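The RMSE and MAPE metrics used to judge the LSTM forecasts have standard definitions. The study implements everything in MATLAB; the following is an equivalent Python sketch of the two metrics with hypothetical forecast data (the `actual` and `forecast` series are illustrative, not the Colombian operator data):

```python
def rmse(actual, predicted):
    """Root Mean Squared Error: penalises large deviations quadratically."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent.

    Scale-free, so it is comparable across operators of very different
    sizes; actual values must be non-zero.
    """
    n = len(actual)
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n

# Hypothetical monthly user counts (millions) vs. a model's forecasts
actual   = [10.0, 10.5, 11.0, 12.0]
forecast = [ 9.8, 10.6, 11.3, 11.8]
```

MAPE's scale-independence is what makes it the natural choice alongside RMSE when the predicted quantities (users, traffic, revenue) live on very different scales.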
Recognizing the importance of competition analysis in telecommunications markets is essential to improving conditions for users and companies. Several indices in the literature assess competition in these markets, mainly through company concentration. Artificial Intelligence (AI) emerges as an effective way to process large volumes of data and detect patterns that are difficult to identify manually. This article presents an AI model based on the LINDA indicator to predict whether oligopolies exist, with the objective of offering a valuable tool for analysts and professionals in the sector. The model uses the traffic produced, the reported revenues, and the number of users as input variables. Its outputs are the LINDA index computed from the information reported by the operators, the Long Short-Term Memory (LSTM) predictions for the input variables, and finally the LINDA index predicted from the LSTM forecasts. The Mean Absolute Percentage Error (MAPE) levels obtained indicate that the proposed strategy can be an effective tool for forecasting the dynamic fluctuations of the communications market.
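The abstract does not spell out how the LINDA index is computed, so the sketch below assumes the standard textbook formulation: with A_i the cumulative market share of the i largest firms, L_m = (1 / (m (m − 1))) · Σ_{i=1}^{m−1} (A_i / i) / ((A_m − A_i) / (m − i)). The function name and data are illustrative only.

```python
def linda_index(shares, m):
    """LINDA concentration index for the m largest firms.

    shares: market shares of all firms (fractions, each <= 1).
    Assumes the standard formulation: with A_i the cumulative share of
    the i largest firms, each term compares the average share of the
    top i firms against the average share of the remaining m - i.
    """
    top = sorted(shares, reverse=True)[:m]
    cum, total = [], 0.0
    for s in top:                 # cumulative shares A_1 .. A_m
        total += s
        cum.append(total)
    a_m = cum[-1]
    q = sum((cum[i - 1] / i) / ((a_m - cum[i - 1]) / (m - i))
            for i in range(1, m))
    return q / (m * (m - 1))

# Equal shares give the minimum value 1/m; skewed shares raise the index.
equal  = linda_index([0.25, 0.25, 0.25, 0.25], m=4)  # -> 0.25
skewed = linda_index([0.60, 0.20, 0.10, 0.10], m=2)  # -> 1.5
```

Under this formulation, m equally sized firms yield L_m = 1/m, and the index grows as the leading firms pull away from the rest, which is why thresholds on L_m are used to flag oligopolistic structure.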
Copyright © by EnPress Publisher. All rights reserved.