This study presents a comparative analysis of machine learning and deep learning models for predicting order quantities across supply chain tiers. The models evaluated include XGBoost, Random Forest, CNN-BiLSTM, Linear Regression, Support Vector Regression (SVR), K-Nearest Neighbors (KNN), Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), Bidirectional LSTM (BiLSTM), Bidirectional GRU (BiGRU), Conv1D-BiLSTM, Attention-LSTM, Transformer, and an LSTM-CNN hybrid. Experimental results show that the XGBoost, Random Forest, CNN-BiLSTM, and MLP models achieve superior predictive performance. In particular, XGBoost yields the best results across all performance metrics, which we attribute to its effective learning of complex data patterns and variable interactions. Although the KNN model also reports perfect predictions with zero error, this result warrants further review of the data processing procedures and model validation methods. Conversely, the BiLSTM, BiGRU, and Transformer models exhibit relatively lower performance. Linear Regression, RNN, Conv1D-BiLSTM, Attention-LSTM, and the LSTM-CNN hybrid show moderate performance, with relatively higher errors and lower coefficients of determination (R²). Overall, tree-based models (XGBoost, Random Forest) and certain deep learning models such as CNN-BiLSTM prove effective for predicting order quantities in supply chain tiers, whereas RNN-based models (BiLSTM, BiGRU) and the Transformer show relatively lower predictive power. Based on these results, we suggest prioritizing tree-based models and CNN-based deep learning models when selecting predictive models for practical applications.
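The comparison described above can be sketched in a few lines of scikit-learn. This is a minimal illustration only, not the study's code: it uses synthetic regression data as a stand-in for the order-quantity dataset and covers only a subset of the listed models (XGBoost and the deep architectures are omitted to keep the sketch self-contained), reporting MAE and R² for each.

```python
# Minimal sketch (not the study's code): comparing several of the named
# regressors on synthetic stand-in data for order-quantity prediction.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical stand-in for the order-quantity dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

models = {
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "LinearRegression": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=5),
}

results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    results[name] = (mean_absolute_error(y_test, pred), r2_score(y_test, pred))

for name, (mae, r2) in results.items():
    print(f"{name}: MAE={mae:.2f}, R2={r2:.3f}")
```

On real demand data, the same loop extends naturally to XGBoost and the neural models, with the held-out metrics driving the ranking reported in the abstract.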
The use of artificial intelligence (AI) in the detection and diagnosis of plant diseases has gained significant interest in modern agriculture. The appeal of AI arises from its ability to rapidly and precisely analyze extensive and complex information, allowing farmers and agricultural experts to quickly identify plant diseases. By harnessing AI to identify and diagnose plant diseases, farmers and agricultural experts are expected to be better equipped to tackle the challenges these diseases pose, increasing effectiveness and efficiency and ultimately raising agricultural productivity. Rapid and accurate identification of crop illnesses also allows the prompt adoption of appropriate preventative and corrective actions, thereby reducing losses caused by plant diseases.
Falling is one of the most critical outcomes of loss of consciousness during triage in the emergency department (ED); it is an important sign that requires immediate medical intervention. This paper presents a computer vision-based fall detection model for the ED. We hypothesize that the proposed vision-based triage fall detection model achieves accuracy equal to that of the traditional triage system (TTS) conducted by the nursing team. To build the proposed model, we use MoveNet, a pose estimation model that identifies 17 key points on the joints relevant to falls. To test the hypothesis, we conducted two experiments. In the first, a deep learning (DL) model built with an Artificial Neural Network (ANN) receives the complete feature set of all 17 keypoints. In the second, the Feature-Reduction for Fall (FRF) model applies dimensionality reduction via Random Forest (RF) feature-selection analysis to filter the keypoints passed to the triage fall classifier. We evaluated both models on a dataset of real-world images labeled into two classes, Fall and Not fall, split 80% for training and 20% for validation. The models in these experiments were trained and their results compared with the reference model. To test the effectiveness of the model, a t-test was performed to evaluate the null hypothesis for both experiments. The results show that FRF outperforms the DL model and matches the accuracy of the TTS.
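The RF feature-selection step described above can be illustrated as follows. This is a hedged sketch with assumed names and synthetic data, not the paper's implementation: the 17 MoveNet keypoints become 34 (x, y) coordinates, a Random Forest ranks them by importance, and a classifier is retrained on the top-ranked subset (the cutoff `k` is a free choice here).

```python
# Hedged sketch (synthetic data, assumed names): Random Forest
# feature-importance ranking over 17 keypoints (34 coordinates),
# then retraining a fall classifier on the selected subset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in: 17 keypoints x (x, y) coordinates per frame.
X = rng.normal(size=(400, 34))
# Make a few "keypoints" actually informative for the Fall / Not-fall label.
y = (X[:, 0] + X[:, 5] - X[:, 12] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
# Keep the k most important coordinates (k is an illustrative choice).
k = 10
top_idx = np.argsort(rf.feature_importances_)[::-1][:k]
X_train_sel, X_test_sel = X_train[:, top_idx], X_test[:, top_idx]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    X_train_sel, y_train
)
print("accuracy on selected features:", clf.score(X_test_sel, y_test))
```

In the paper's setting, the reduced keypoint set feeds the FRF classifier, whose accuracy is then compared against the full-feature ANN and the TTS reference via the t-test.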
The adoption of cybersecurity is growing steadily because of the protection it provides: it allows people to safeguard their valuable data. Today, everyone is connected through the internet, which makes it much easier for attackers to steal important data through cyber-attacks. Everyone needs cybersecurity to protect precious personal data and to support sustainable infrastructure development in data science. However, protecting data with existing cybersecurity systems is difficult. Cybersecurity threats take many forms, including phishing, malware, and ransomware. To prevent these attacks, advanced cybersecurity systems are needed. Many software tools help prevent cyber-attacks, but they cannot detect suspicious internet threat exchanges early. This research applies machine learning models to cybersecurity to enhance threat detection, reducing cyberattacks and strengthening data protection so that users can browse the internet securely. A Kaggle dataset was collected to build a system that detects untrustworthy online threat exchanges early. Several pre-processing approaches were applied to improve results and accuracy, and feature engineering was applied to the dataset to improve data quality. Finally, Random Forest, Gradient Boosting, XGBoost, and LightGBM were used to achieve our goal. Random Forest obtained the best accuracy at 96%, a helpful outcome for the social development of cybersecurity systems.
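The pipeline sketched below illustrates the kind of preprocessing-plus-Random-Forest workflow the abstract describes. It is an illustration only: the column names and data are invented stand-ins for the Kaggle threat dataset, and scaling stands in for the feature-engineering step.

```python
# Illustrative sketch (synthetic data, assumed column names): a simple
# preprocessing + Random Forest pipeline for threat classification.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Hypothetical stand-in for the Kaggle threat dataset.
df = pd.DataFrame({
    "packet_rate": rng.normal(size=600),
    "failed_logins": rng.poisson(2, size=600),
    "payload_entropy": rng.normal(size=600),
})
df["malicious"] = ((df["packet_rate"] + df["payload_entropy"]) > 0).astype(int)

X = df.drop(columns="malicious")
y = df["malicious"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1
)

pipe = Pipeline([
    ("scale", StandardScaler()),  # stand-in for the feature-engineering step
    ("rf", RandomForestClassifier(n_estimators=300, random_state=1)),
])
pipe.fit(X_train, y_train)
acc = accuracy_score(y_test, pipe.predict(X_test))
print(f"Random Forest accuracy: {acc:.2f}")
```

Swapping the final estimator for Gradient Boosting, XGBoost, or LightGBM reproduces the model comparison the abstract reports.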
According to a number of recent studies, the cost of diagnostic errors in developed economies has been high and continues to rise. Until now, the common process for performing image diagnostics across a growing number of conditions has been examination by a single human specialist, i.e., a single-channel recognition and classification decision system. Such a system has natural limitations: unmitigated errors that may be detected only much later in the treatment cycle, high resource intensity, and poor ability to scale to rising demand. At the same time, machine intelligence (ML, AI) systems, specifically those based on deep neural networks and large visual-domain models, have made significant progress in general image recognition, in many instances matching the effectiveness of an average human on image recognition tasks and, in a growing number of cases, that of a human specialist. The objectives of the AI in Medicine (AIM) program were set to leverage the opportunities and advantages of rapidly evolving artificial intelligence technology to achieve real and measurable gains in public healthcare in quality, access, public confidence, and cost efficiency. The proposal for a collaborative AI-human image diagnostics system falls directly within the scope of this program.
Money laundering has become a vital issue worldwide, especially in emerging economies, over the last two decades. Developing and emerging countries still face challenges concerning the origins of money laundering and the remedies available through anti-money laundering measures. The objective of this study is to provide a thorough picture of the diverse strands of academic research on money laundering and anti-money laundering activities worldwide. The study explores contemporary issues in anti-money laundering from an academic point of view and, further, renders a portrayal of anti-money laundering activities in an emerging-country context. Publicly available reports, published documents, daily newspapers, case studies, and previous academic research comprised the main sources of data for the study. We find that contemporary money laundering and anti-money laundering academic research can be classified into four broad categories. An emerging country such as Bangladesh has taken little initiative to introduce anti-money laundering measures, implying that successful implementation of anti-money laundering activities requires good governance along with a congenial regulatory framework in an emerging-country context. In addition, machine learning may enhance the quality of money laundering detection in Bangladesh.
Copyright © by EnPress Publisher. All rights reserved.