Fog computing (FC) has been presented as a modern distributed technology that overcomes several of the issues Cloud computing faces and provides many services. It brings computation and data storage closer to data sources such as sensors, cameras, and mobile devices. The fog computing paradigm is instrumental in scenarios where low latency, real-time processing, and high bandwidth are critical, such as smart cities, industrial IoT, and autonomous vehicles. However, the distributed nature of fog computing introduces complexities in managing and predicting the execution time of tasks across heterogeneous devices with varying computational capabilities. Neural network models have demonstrated exceptional capability in prediction tasks because of their capacity to extract insightful patterns from data: by using numerous layers of linked nodes, they can capture non-linear interactions and provide precise predictions in various fields. In addition, choosing the right inputs is essential to forecasting the correct value, since neural network models rely on the data fed into the network to make predictions. Based on the predicted value, the scheduler can choose the appropriate resource and schedule for effective resource usage and a reduced makespan. In this paper, we propose a Neural Network model for predicting task execution time in fog computing, with inputs assessed using the Interpretive Structural Modeling (ISM) technique. The proposed model showed a 23.9% reduction in mean relative error (MRE) compared to state-of-the-art methods.
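The abstract reports its improvement in terms of mean relative error (MRE). As a point of clarification, a minimal sketch of how MRE is conventionally computed over predicted versus actual execution times is given below; the numeric values are illustrative stand-ins, not data from the paper:

```python
def mean_relative_error(actual, predicted):
    """Mean relative error: average of |actual - predicted| / actual."""
    assert len(actual) == len(predicted) and len(actual) > 0
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Illustrative task execution times (seconds); not data from the paper.
actual = [10.0, 20.0, 40.0]
predicted = [9.0, 22.0, 40.0]
print(mean_relative_error(actual, predicted))  # (0.1 + 0.1 + 0.0) / 3 ≈ 0.0667
```

A lower MRE means the scheduler receives execution-time estimates that deviate less, proportionally, from the observed times.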
In this paper, we assess the results of experiments with different machine learning algorithms for data classification on the basis of accuracy, precision, recall, and F1-score metrics. The Neural Network model produced the highest accuracy (0.129526) and the highest F1-score (0.118785), showing a balance of precision and recall that allows it to pick up important patterns in the dataset. Random Forest was close behind with an accuracy of 0.128119 and the highest precision (0.118553), demonstrating a strong ability to handle relations in a large dataset, but with slightly lower recall than the Neural Network. The Decision Tree model ranked third with an accuracy of 0.111792; its recall score showed it predicts true positives better than the Support Vector Machine (SVM), although it frequently over-predicts the positive class. SVM ranked fourth, with an accuracy of 0.095465 and an F1-score of 0.067861, figures that reveal difficulty in separating the associated classes. Finally, the K-Neighbors model took fifth place, with an accuracy of 0.065531 and unsatisfactory precision and recall, indicating this algorithm's problems with the classification task. We found that Neural Networks and Random Forests are the best algorithms for this classification task, while K-Neighbors is far inferior to the other classifiers.
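The four metrics compared above are all derived from confusion-matrix counts. A minimal sketch of how they relate, computed for a single class from true/false positive and negative counts (the counts below are illustrative, not from the experiment):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Illustrative counts, not data from the experiment.
acc, prec, rec, f1 = classification_metrics(tp=40, fp=10, fn=20, tn=30)
print(acc, prec, rec, f1)  # 0.7, 0.8, ~0.667, ~0.727
```

Because F1 is the harmonic mean of precision and recall, a model can only score well on F1 by balancing both, which is why the abstract treats a high F1 as evidence of a balanced precision/recall ratio.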
Breast cancer is a prevalent form of cancer worldwide. Thermography, a method for diagnosing breast cancer, involves recording the thermal patterns of the breast. This article explores the use of a convolutional neural network (CNN) to extract features from a dataset of thermographic images. First, the CNN was used to extract a feature vector from each image; machine learning techniques were then applied for image classification. This study utilizes four classification methods, namely a fully connected neural network (FCnet), support vector machine (SVM), a linear classification model (CLINEAR), and k-nearest neighbors (KNN), to classify breast cancer from thermographic images. The accuracy rates achieved by the FCnet, SVM, CLINEAR, and KNN algorithms were 94.2%, 95.0%, 95.0%, and 94.1%, respectively. Furthermore, the reliability parameters for these classifiers were computed as 92.1%, 97.5%, 96.5%, and 91.2%, while their respective sensitivities were 95.5%, 94.1%, 90.4%, and 93.2%. These findings can assist experts in developing an expert system for breast cancer diagnosis.
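The pipeline described above first reduces each image to a feature vector and then hands that vector to a conventional classifier. A minimal sketch of the second stage using a k-nearest-neighbors vote (one of the four classifiers named); the 2-D vectors and labels below are stand-ins for CNN features, not real thermographic data:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train_feats, train_labels, query, k=1):
    """Classify a feature vector by majority vote among its k nearest neighbors."""
    ranked = sorted(zip(train_feats, train_labels),
                    key=lambda fl: euclidean(fl[0], query))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Stand-in feature vectors (e.g. CNN outputs); 0 = healthy, 1 = suspicious.
feats = [[0.1, 0.2], [0.0, 0.3], [0.9, 0.8], [1.0, 0.7]]
labels = [0, 0, 1, 1]
print(knn_predict(feats, labels, [0.95, 0.75], k=3))  # 1
```

The SVM, CLINEAR, and FCnet variants differ only in this second stage; the CNN feature extraction in front of them is shared.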
Accurate demand forecasting is key for companies to optimize inventory management and satisfy customer demand efficiently. This paper investigates the application of generative AI models in demand forecasting. Two models were used, Long Short-Term Memory (LSTM) networks and a Variational Autoencoder (VAE), and their results were compared to select the optimal model in terms of performance and forecasting accuracy. The small differences between actual and predicted demand values confirm LSTM's ability to identify latent features and underlying trends in the data. Further, part of the work focused on the computational efficiency and scalability of the proposed methods, providing guidelines for companies implementing these techniques in demand forecasting. Based on these results, LSTM networks show promise for enhancing demand forecasting and are consequently helpful for decision-making in inventory control and resource allocation.
Accurate prediction of US Treasury bond yields is crucial for investment strategies and economic policymaking. This paper explores the application of advanced machine learning techniques, specifically Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) models, in forecasting these yields. By integrating key economic indicators and policy changes, our approach seeks to enhance the precision of yield predictions. Our study demonstrates the superiority of LSTM models over traditional RNNs in capturing the temporal dependencies and complexities inherent in financial data. The inclusion of macroeconomic and policy variables significantly improves the models' predictive accuracy. This research represents a pioneering step toward the legacy banking industry's adoption of artificial intelligence (AI) in financial market prediction. In addition to considering the conventional economic indicators that drive fluctuations in the bond market, this paper also optimizes the LSTM to handle situations where rate-hike expectations have already been priced in by market sentiment.
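Both of the preceding abstracts rest on the LSTM's gated cell state, which is what lets it carry temporal dependencies across a series. A minimal single-unit sketch of one LSTM step on a scalar input; the weights are toy values chosen for illustration, not parameters trained on demand or Treasury data:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM step: gates decide what the cell state c remembers and what
    the hidden state h exposes, carrying temporal dependencies forward."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])   # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])   # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])   # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate state
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c

# Toy weights (illustrative only, not trained).
w = dict(wf=0.5, uf=0.1, bf=0.0, wi=0.5, ui=0.1, bi=0.0,
         wo=0.5, uo=0.1, bo=0.0, wg=0.5, ug=0.1, bg=0.0)
h, c = 0.0, 0.0
for x in [0.2, 0.4, 0.1]:  # a short input sequence, e.g. scaled yields
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

The forget gate f is the mechanism a plain RNN lacks: it lets the cell selectively retain earlier observations, which is why LSTMs outperform RNNs on the long-range dependencies these papers describe.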
This research examines three data mining approaches, using cost management datasets from 391 Thai contractor companies, to investigate predictive modeling of construction project failure with nine parameters. Artificial neural networks, naive Bayes, and decision trees with attribute selection were the algorithms explored. Compared to the accuracy rates of the artificial neural network (91.33%) and naive Bayes (70.01%), the decision tree with attribute selection demonstrated greater classification efficiency, registering an accuracy of 98.14%. The nine parameters are: 1) planning according to the current situation; 2) the company's cost management strategy; 3) control and coordination from employees at different levels of the organization to survive under various uncertainties; 4) the importance of labor management factors; 5) the general status of the company, which has a significant effect on project success; 6) the cost of procurement of the field office location; 7) operational constraints and long-term safe work procedures; 8) the piece-by-piece implementation of the construction system using prefabricated parts; 9) dealing with the COVID-19 crisis, which is crucial for preventing project failure. The results show how advanced data mining approaches can improve cost estimation and prevent project failure, and how computational methods can enhance sustainability in the building industry. Although the results are encouraging, they also highlight issues including data asymmetry and the potential for overfitting in the decision tree model, necessitating careful consideration.
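The attribute selection credited with the decision tree's accuracy advantage is conventionally driven by information gain: the reduction in label entropy achieved by splitting on an attribute. A minimal sketch under that assumption (the abstract does not state the exact selection criterion); the toy records and labels are illustrative, not the Thai contractor data:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    counts = {l: labels.count(l) for l in set(labels)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting on one attribute, the criterion
    commonly used for attribute selection in decision-tree induction."""
    n = len(labels)
    gain = entropy(labels)
    for value in set(row[attr] for row in rows):
        subset = [l for row, l in zip(rows, labels) if row[attr] == value]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy records {attribute: value} with 'ok'/'fail' outcomes (illustrative only).
rows = [{"plan": "yes"}, {"plan": "yes"}, {"plan": "no"}, {"plan": "no"}]
labels = ["ok", "ok", "fail", "fail"]
print(information_gain(rows, labels, "plan"))  # 1.0 (a perfectly separating split)
```

Ranking the nine parameters by such a score and keeping only the most informative ones is what lets the pruned tree generalize, though as the abstract notes, the same mechanism can overfit when the data are asymmetric.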
Copyright © by EnPress Publisher. All rights reserved.