This study conducts a comparative analysis of machine learning and deep learning models for predicting order quantities in supply chain tiers. The models employed include XGBoost, Random Forest, CNN-BiLSTM, Linear Regression, Support Vector Regression (SVR), K-Nearest Neighbors (KNN), Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), Bidirectional LSTM (BiLSTM), Bidirectional GRU (BiGRU), Conv1D-BiLSTM, Attention-LSTM, Transformer, and LSTM-CNN hybrid models. Experimental results show that the XGBoost, Random Forest, CNN-BiLSTM, and MLP models exhibit superior predictive performance. In particular, the XGBoost model achieves the best results across all performance metrics, attributed to its effective learning of complex data patterns and variable interactions. The KNN model also reports zero error values, but such seemingly perfect predictions warrant a further review of the data processing procedures and model validation methods. Conversely, the BiLSTM, BiGRU, and Transformer models exhibit relatively lower performance. Linear Regression, RNN, Conv1D-BiLSTM, Attention-LSTM, and the LSTM-CNN hybrid model show intermediate performance, with higher errors and lower coefficients of determination (R²) than the leading models. Overall, tree-based models (XGBoost, Random Forest) and certain deep learning models such as CNN-BiLSTM are found to be effective for predicting order quantities in supply chain tiers, whereas RNN-based models (BiLSTM, BiGRU) and the Transformer show relatively lower predictive power. Based on these results, we suggest that tree-based models and CNN-based deep learning models be prioritized when selecting predictive models for practical applications.
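As a rough illustration of the model comparison described above, the following sketch trains XGBoost and Random Forest regressors and reports RMSE and R². It assumes the scikit-learn and xgboost packages are available and uses synthetic data in place of the study's supply chain order data; it is not the authors' pipeline.

```python
# Minimal sketch of the tree-based comparison, on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor  # assumes the xgboost package is installed

# Synthetic stand-in for tier-level order quantities.
X, y = make_regression(n_samples=2000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "XGBoost": XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.1),
    "RandomForest": RandomForestRegressor(n_estimators=300, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.2f}, R2={r2_score(y_test, pred):.3f}")
```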
The telecommunications services market faces significant challenges in an increasingly flexible, customer-driven environment. Research has highlighted that the monopolization of spectrum by a single operator reduces competition and negatively affects users and the overall dynamics of the sector. This article presents a proposal to predict the number of users, traffic levels, and operators' income in the telecommunications market using artificial intelligence. Deep Learning (DL) is implemented through a Long Short-Term Memory (LSTM) network as the prediction technique. The database corresponds to the users, revenues, and traffic of 15 network operators, obtained from the Communications Regulation Commission of the Republic of Colombia. The ability of LSTMs to handle temporal sequences, long-term dependencies, adaptation to change, and complex data makes them an excellent strategy for predicting and forecasting the telecom market. Several prior works combine LSTM and telecommunications; however, many prediction questions remain open, various strategies can still be proposed, and continued research should focus on providing cognitive engines to address further challenges. MATLAB is used for the design and subsequent implementation. The low Root Mean Squared Error (RMSE) values and the acceptable Mean Absolute Percentage Error (MAPE) levels, especially in an environment characterized by high variability in the number of users, support the conclusion that the implemented model achieves excellent prediction accuracy in both open-loop and closed-loop forecasting.
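The study implements its LSTM in MATLAB; purely as an illustration of the open-loop/closed-loop idea, the sketch below builds a comparable forecaster in Keras on a synthetic series. Training uses true past values (open loop), while the closed-loop stage feeds each prediction back as the next input.

```python
# Illustrative Keras sketch of LSTM forecasting (the paper itself uses MATLAB).
# A synthetic series stands in for the user/traffic/revenue data.
import numpy as np
import tensorflow as tf

series = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.random.randn(200)
window = 12  # use the previous 12 points to predict the next one

X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)  # open loop: trained on true past values

# Closed-loop forecast: feed each prediction back as input for the next step.
history = list(series[-window:])
forecast = []
for _ in range(12):
    x = np.array(history[-window:]).reshape(1, window, 1)
    next_val = float(model.predict(x, verbose=0)[0, 0])
    forecast.append(next_val)
    history.append(next_val)
print(forecast)
```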
To achieve sustainable development, detailed planning, control, and management of land cover changes, whether natural or human-induced, are essential. Urban managers and planners need a tool that delivers accurate information quickly and at the right time. In this study, land use changes over three periods (1994-2002, 2002-2009, and 2009-2015) were assessed, and predictions were produced for 2009, 2015, and 2023. The Maximum Likelihood method was used to classify the images; after accuracy assessment, the overall accuracy for the 2013 images was 85.55% and the Kappa coefficient was 80.03%. To predict land use changes, the Markov-CA model was used; its overall accuracy was 82.57% for 2009 and 93.865% for 2015. A web GIS application was then designed with MapServer, loading the shapefiles into the browser through a map file and OpenLayers, and the website's appearance was designed with CSS, HTML, and JavaScript. HTML is responsible for the foundation and overall structure of the webpage, while CSS handles styling and layout.
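For readers unfamiliar with the Markov step of Markov-CA, the following minimal sketch shows how land-use class proportions are projected forward with a transition probability matrix. The class names, matrix, and proportions are illustrative placeholders, not the study's estimated values, and the spatial (cellular automata) allocation step is omitted.

```python
# Minimal sketch of the Markov step in a Markov-CA projection.
import numpy as np

classes = ["urban", "agriculture", "barren"]
# P[i, j] = probability that a cell in class i changes to class j per period.
P = np.array([
    [0.95, 0.03, 0.02],
    [0.10, 0.85, 0.05],
    [0.08, 0.07, 0.85],
])
state = np.array([0.20, 0.50, 0.30])  # current share of each class

for period in range(2):  # project two periods ahead
    state = state @ P
    print(dict(zip(classes, np.round(state, 3))))
```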
The purpose of this paper is to explore and compare the performance of ridge regression and a random forest model improved by a genetic algorithm in predicting the Boston house price dataset. To this end, the data are divided into training and test sets in a 70:30 ratio. The RidgeCV class is used to select the best regularization parameter for the ridge regression model, while the hyperparameters of the random forest model are optimized with a genetic algorithm. The results show that, compared with ridge regression, the random forest model improved by the genetic algorithm performs better on the Boston house price regression problem.
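A minimal sketch of the two approaches is shown below, assuming scikit-learn. Because the Boston dataset has been removed from recent scikit-learn releases, the California housing data is used as a stand-in, and the genetic algorithm is reduced to a toy selection-and-mutation loop over two random forest hyperparameters; the paper's actual GA design may differ.

```python
# Sketch: RidgeCV for the regularization parameter vs. a tiny GA tuning a
# random forest. California housing replaces the (removed) Boston dataset.
import random
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score, train_test_split

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Ridge regression with the regularization strength chosen by RidgeCV.
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)
print("Ridge R2:", ridge.score(X_test, y_test))

# Toy genetic algorithm over (n_estimators, max_depth); fitness is CV R2.
def fitness(genes):
    rf = RandomForestRegressor(n_estimators=genes[0], max_depth=genes[1], random_state=0)
    return cross_val_score(rf, X_train, y_train, cv=3, scoring="r2").mean()

random.seed(0)
population = [(random.randint(50, 200), random.randint(3, 20)) for _ in range(6)]
for generation in range(3):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]  # selection: keep the fittest half
    children = [(max(10, p[0] + random.randint(-30, 30)),
                 max(3, p[1] + random.randint(-2, 2)))
                for p in parents]  # mutation of the parents
    population = parents + children

best = max(population, key=fitness)
rf_best = RandomForestRegressor(n_estimators=best[0], max_depth=best[1], random_state=0)
rf_best.fit(X_train, y_train)
print("GA-tuned random forest R2:", rf_best.score(X_test, y_test))
```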
With the rapid development of artificial intelligence (AI) technology, its application in the field of auditing has gained increasing attention. This paper explores the application of AI technology in audit risk assessment and control (ARAC), aiming to improve audit efficiency and effectiveness. First, the paper introduces the basic concepts of AI technology and its application background in the auditing field. Then, it provides a detailed analysis of the specific applications of AI technology in audit risk assessment and control, including data analysis, risk prediction, automated auditing, continuous monitoring, intelligent decision support, and compliance checks. Finally, the paper discusses the challenges and opportunities of AI technology in audit risk assessment and control, as well as future research directions.
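As one concrete illustration of the risk prediction application listed above (not a method proposed by the paper), the following sketch flags anomalous journal entries with an isolation forest; the features and data are hypothetical.

```python
# Illustrative sketch of AI-assisted risk flagging for audit follow-up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per journal entry: amount, hour posted, days to approval.
normal = rng.normal(loc=[500, 14, 2], scale=[200, 3, 1], size=(1000, 3))
unusual = rng.normal(loc=[25000, 3, 0.1], scale=[5000, 1, 0.05], size=(10, 3))
entries = np.vstack([normal, unusual])

detector = IsolationForest(contamination=0.01, random_state=0).fit(entries)
flags = detector.predict(entries)  # -1 marks entries for auditor follow-up
print("Flagged entries:", np.where(flags == -1)[0])
```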