This study conducts a comparative analysis of machine learning and deep learning models for predicting order quantities in supply chain tiers. The models employed include XGBoost, Random Forest, CNN-BiLSTM, Linear Regression, Support Vector Regression (SVR), K-Nearest Neighbors (KNN), Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), Bidirectional LSTM (BiLSTM), Bidirectional GRU (BiGRU), Conv1D-BiLSTM, Attention-LSTM, Transformer, and LSTM-CNN hybrid models. Experimental results show that the XGBoost, Random Forest, CNN-BiLSTM, and MLP models exhibit superior predictive performance. In particular, the XGBoost model achieves the best results across all performance metrics, which we attribute to its effective learning of complex data patterns and variable interactions. The KNN model also reports perfect predictions with zero error, a result that indicates a need for further review of the data processing procedures and model validation methods. Conversely, the BiLSTM, BiGRU, and Transformer models exhibit relatively low performance. Linear Regression, RNN, Conv1D-BiLSTM, Attention-LSTM, and the LSTM-CNN hybrid model perform moderately, with higher errors and lower coefficients of determination (R²) than the leading models. Overall, tree-based models (XGBoost, Random Forest) and certain deep learning models such as CNN-BiLSTM prove effective for predicting order quantities in supply chain tiers, whereas RNN-based models (BiLSTM, BiGRU) and the Transformer show relatively weak predictive power. Based on these results, we suggest prioritizing tree-based models and CNN-based deep learning models when selecting predictive models for practical applications.
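To make the comparison concrete, the sketch below shows how a tree-based regressor such as XGBoost might be fitted and scored for this task. The synthetic features (lagged orders, demand signal, lead time), data shapes, and hyperparameters are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch: fitting XGBoost to order-quantity data and scoring it with
# RMSE and R^2. The synthetic data stands in for real supply-chain records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features for one supply-chain tier: lagged orders, demand, lead time.
X = rng.normal(size=(n, 3))
y = 50 + 10 * X[:, 0] + 5 * X[:, 1] * X[:, 2] + rng.normal(scale=2, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = mean_squared_error(y_test, pred) ** 0.5
print(f"RMSE={rmse:.3f}  R2={r2_score(y_test, pred):.3f}")
```

A sanity check suggested by the KNN result above: errors of exactly zero on held-out data usually point to a validation problem such as leakage, for example duplicated rows shared between the training and test splits.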
Retinal disorders such as diabetic retinopathy, glaucoma, macular edema, and vein occlusions are significant contributors to global vision impairment. These conditions frequently remain symptomless until patients suffer severe vision deterioration, underscoring the critical importance of early diagnosis. Fundus images are a valuable resource for identifying the initial indicators of these diseases, particularly through characteristics of the retinal blood vessels such as their length, width, tortuosity, and branching patterns. Traditionally, healthcare practitioners rely on manual retinal vessel segmentation, a time-consuming and intricate process that demands specialized expertise; its precision and consistency therefore depend heavily on the availability of highly skilled professionals. To address these challenges, there is an urgent need for an automatic and efficient method for retinal vessel segmentation and classification based on computer vision techniques, which underpin biomedical imaging. Numerous researchers have proposed techniques for blood vessel segmentation, broadly categorized into machine learning, filtering-based, and model-based methods. Machine learning methods classify pixels as vessel or non-vessel using classifiers trained on hand-annotated images; these techniques typically extract 7D feature vectors and apply neural network classification, with post-processing steps to bridge gaps and eliminate isolated pixels. Filtering-based approaches employ morphological operators, capitalizing on predefined shapes to separate objects from the background, although they often treat larger blood vessels as single cohesive structures. Model-based methods fit vessel models to identify retinal blood vessels, but they are sensitive to parameter selection, which must be made carefully to detect both thin and large vessels effectively. Our proposed research conducts a thorough empirical evaluation of automated segmentation and classification techniques for identifying eye-related diseases, particularly diabetic retinopathy and glaucoma, using several retinal image datasets, including DRIVE, REVIEW, STARE, HRF, and DRION. The methodologies under consideration encompass machine learning, filtering-based, and model-based approaches, with performance assessed by true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false discovery rate (FDR), Matthews correlation coefficient (MCC), and accuracy (ACC). The primary objective is to scrutinize, assess, and compare the design and performance of supervised and unsupervised segmentation and classification techniques; to this end, we will refine existing techniques and develop new ones, ensuring a more streamlined and computationally efficient approach.
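As an illustration of the filtering-based family described above, the following sketch enhances vessels with a black top-hat morphological operator and evaluates the result with several of the listed metrics. The file paths, structuring-element radius, and Otsu thresholding step are assumptions for demonstration, not a prescribed pipeline.

```python
# Minimal sketch of filtering-based vessel segmentation: a black top-hat
# transform highlights thin dark vessels against the brighter background.
import numpy as np
from skimage import io, morphology, filters

fundus = io.imread("fundus.png")        # e.g., a DRIVE image (hypothetical path)
green = fundus[:, :, 1]                 # vessels have highest contrast in the green channel

# Black top-hat (closing minus original) makes dark elongated structures pop out.
enhanced = morphology.black_tophat(green, morphology.disk(8))

# Binarize the enhanced map; Otsu thresholding is one common automatic choice.
mask = enhanced > filters.threshold_otsu(enhanced)

# Post-processing mentioned above: eliminate isolated pixels and small blobs.
clean = morphology.remove_small_objects(mask, min_size=50)

# Evaluate against a hand-annotated ground truth using the listed metrics.
gt = io.imread("gt.png") > 0            # hypothetical ground-truth mask
tp, tn = float(np.sum(clean & gt)), float(np.sum(~clean & ~gt))
fp, fn = float(np.sum(clean & ~gt)), float(np.sum(~clean & gt))
tpr, tnr, ppv = tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)
acc = (tp + tn) / gt.size
mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(f"TPR={tpr:.3f} TNR={tnr:.3f} PPV={ppv:.3f} ACC={acc:.3f} MCC={mcc:.3f}")
```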
Named Entity Recognition (NER), a core task in Information Extraction (IE) alongside Relation Extraction (RE), identifies and extracts entities like place and person names in various domains. NER has improved business processes in both public and private sectors but remains underutilized in government institutions, especially in developing countries like Indonesia. This study examines which government fields have utilized NER over the past five years, evaluates system performance, identifies common methods, highlights countries with significant adoption, and outlines current challenges. Over 64 international studies from 15 countries were selected using PRISMA 2020 guidelines. The findings are synthesized into a preliminary ontology design for Government NER.
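For readers unfamiliar with the task, a minimal NER example follows. It uses spaCy's pretrained English pipeline purely for illustration; a government deployment, for instance on Indonesian-language documents, would require a domain- and language-specific model.

```python
# Minimal NER sketch: tagging person and place names in text with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # small pretrained English pipeline
doc = nlp("The Ministry of Finance in Jakarta was visited by Sri Mulyani.")

for ent in doc.ents:
    print(ent.text, ent.label_)     # e.g., "Jakarta" -> GPE, "Sri Mulyani" -> PERSON
```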
The power of Artificial Intelligence (AI), combined with surgeons' expertise, leads to breakthroughs in surgical care and brings new hope to patients. Deep learning-based computer vision techniques applied to surgical procedures will enhance the healthcare industry. Laparoscopic surgery holds excellent potential for computer vision because of the abundance of real-time laparoscopic recordings captured by digital cameras, which contain significant unexplored information. Furthermore, with computing resources becoming increasingly accessible and machine learning methods expanding across industries, the potential for AI in healthcare is vast. AI can contribute to laparoscopic surgery in several ways; one is image-guidance systems that identify anatomical structures in real time. However, few studies address intraoperative anatomy recognition in laparoscopic surgery. This study provides a comprehensive review of current state-of-the-art semantic segmentation techniques, which can guide surgeons during laparoscopic procedures by identifying specific anatomical structures to dissect or hazardous areas to avoid. The review aims to strengthen research on AI for surgery and guide innovation toward experiments that can be applied in real-world clinical settings. Such AI contributions could revolutionize laparoscopic surgery and improve patient outcomes.
The telecommunications services market faces essential challenges in an increasingly flexible and customer-adaptable environment. Research has highlighted that monopolization of the spectrum by one operator reduces competition and negatively affects users and the overall dynamics of the sector. This article presents a proposal to predict the number of users, the level of traffic, and operators' income in the telecommunications market using artificial intelligence. Deep Learning (DL) is implemented through a Long Short-Term Memory (LSTM) network as the prediction technique. The dataset comprises the users, revenues, and traffic of 15 network operators, obtained from the Communications Regulation Commission of the Republic of Colombia. The ability of LSTMs to handle temporal sequences, long-term dependencies, changing conditions, and complex data makes them an excellent strategy for predicting and forecasting the telecom market. Various works combine LSTMs and telecommunications, yet many prediction questions remain open; continued research should focus on providing cognitive engines to address further challenges. MATLAB is used for the design and subsequent implementation. The low Root Mean Squared Error (RMSE) values and the acceptable Mean Absolute Percentage Error (MAPE) levels, especially in an environment characterized by high variability in the number of users, support the conclusion that the implemented model delivers excellent predictive precision in both open-loop and closed-loop operation.
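The study itself was implemented in MATLAB; as a language-agnostic illustration of the open-loop versus closed-loop distinction, the following Keras sketch trains an LSTM on a synthetic "monthly users" series and reports RMSE and MAPE. The data, window size, and hyperparameters here are all assumptions, not the article's configuration.

```python
# Minimal LSTM forecasting sketch with both prediction modes:
# open loop uses observed history at every step; closed loop feeds
# each prediction back in to forecast several steps ahead.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = 1000 + 50 * np.sin(np.arange(120) / 6) + rng.normal(0, 5, 120)

# Build (window -> next value) training pairs from the series.
win = 12
X = np.stack([series[i:i + win] for i in range(len(series) - win)])[..., None]
y = series[win:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(win, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

# Open loop: one-step-ahead predictions from observed windows.
open_loop = model.predict(X, verbose=0).ravel()

# Closed loop: recursive multi-step forecast from the last observed window.
window = series[-win:].tolist()
closed_loop = []
for _ in range(6):
    nxt = model.predict(np.array(window[-win:])[None, :, None], verbose=0)[0, 0]
    closed_loop.append(float(nxt))
    window.append(float(nxt))

rmse = float(np.sqrt(np.mean((open_loop - y) ** 2)))
mape = float(np.mean(np.abs((y - open_loop) / y)) * 100)
print(f"open-loop RMSE={rmse:.2f}  MAPE={mape:.2f}%")
```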
Recognizing the importance of competition analysis in telecommunications markets is essential to improving conditions for users and companies. Several indices in the literature assess competition in these markets, mainly through company concentration. Artificial Intelligence (AI) emerges as an effective way to process large volumes of data and detect patterns that are difficult to identify manually. This article presents an AI model based on the LINDA indicator to predict whether oligopolies exist, offering a valuable tool for analysts and professionals in the sector. The model uses the traffic produced, the reported revenues, and the number of users as input variables. Its outputs are the LINDA index computed from the information reported by the operators, Long Short-Term Memory (LSTM) forecasts of the input variables, and the LINDA index predicted from those LSTM forecasts. The Mean Absolute Percentage Error (MAPE) levels obtained indicate that the proposed strategy can be an effective tool for forecasting the dynamic fluctuations of the communications market.
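For reference, one common formulation of the LINDA index compares, for the K largest firms, the average market share of the leading i firms with that of the remaining K − i firms, averaged over i; values well above one are usually read as a sign of oligopolistic concentration. The sketch below implements this formulation on illustrative shares; it is an assumption about the exact variant the article uses, and the example figures are not Colombian operator data.

```python
# Minimal sketch of one common LINDA index formulation:
# L = (1 / (K(K-1))) * sum_{i=1}^{K-1} (A_i / i) / ((A_K - A_i) / (K - i)),
# where A_i is the cumulative share of the top i firms within the K-firm group.
import numpy as np

def linda_index(shares: np.ndarray) -> float:
    """LINDA index for the K firms in `shares` (any positive scale)."""
    s = np.sort(np.asarray(shares, dtype=float))[::-1]
    s = s / s.sum()                  # normalize to shares within the K-firm group
    K = len(s)
    A = np.cumsum(s)                 # A[i-1] = cumulative share of top i firms
    ratios = [(A[i - 1] / i) / ((A[-1] - A[i - 1]) / (K - i)) for i in range(1, K)]
    return sum(ratios) / (K * (K - 1))

# Example: one dominant operator versus three smaller competitors.
print(linda_index(np.array([0.55, 0.20, 0.15, 0.10])))
```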