Fog computing (FC) has been presented as a modern distributed technology that overcomes several issues faced by cloud computing and provides many services. It brings computation and data storage closer to data sources such as sensors, cameras, and mobile devices. The fog computing paradigm is instrumental in scenarios where low latency, real-time processing, and high bandwidth are critical, such as smart cities, industrial IoT, and autonomous vehicles. However, the distributed nature of fog computing introduces complexity in managing and predicting the execution time of tasks across heterogeneous devices with varying computational capabilities. Neural network models have demonstrated exceptional capability in prediction tasks because of their capacity to extract insightful patterns from data. By using numerous layers of linked nodes, neural networks can capture non-linear interactions and provide precise predictions in various fields. In addition, choosing the right inputs is essential for forecasting the correct value, since neural network models rely on the data fed into the network to make predictions. Based on the predicted value, the scheduler can choose the appropriate resource and schedule tasks for practical resource usage and a decreased makespan. In this paper, we propose a neural network model for predicting task execution time in fog computing, together with an input assessment based on the Interpretive Structural Modeling (ISM) technique. The proposed model showed a 23.9% reduction in MRE compared to other state-of-the-art methods.
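As a hedged illustration of the kind of predictor this abstract describes, the sketch below trains a small feed-forward regressor on synthetic task features and reports the mean relative error (MRE). The feature set (task length, node speed, bandwidth, queue load) and the synthetic ground truth are assumptions for illustration, not the paper's ISM-selected inputs or data.

```python
# Hedged sketch: a small neural-network regressor for task execution time.
# Features and the synthetic ground-truth formula are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
task_mi = rng.uniform(100, 5000, n)      # task length (million instructions)
cpu_mips = rng.uniform(500, 4000, n)     # fog node speed (MIPS)
bandwidth = rng.uniform(1, 100, n)       # link bandwidth (Mbps)
queue_load = rng.uniform(0, 10, n)       # tasks already queued on the node

# Synthetic ground truth: compute time + transfer time + queueing delay + noise
exec_time = task_mi / cpu_mips + 8.0 / bandwidth + 0.3 * queue_load
exec_time += rng.normal(0, 0.02, n)

X = np.column_stack([task_mi, cpu_mips, bandwidth, queue_load])
X_tr, X_te, y_tr, y_te = train_test_split(X, exec_time, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mre = np.mean(np.abs(pred - y_te) / y_te)  # mean relative error
print(f"MRE: {mre:.3f}")
```

In a scheduler, the predicted execution time would then be compared across candidate fog nodes to pick the placement with the smallest expected makespan.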
The expanding adoption of artificial intelligence systems across high-impact sectors has catalyzed concerns regarding inherent biases and discrimination, leading to calls for greater transparency and accountability. Algorithm auditing has emerged as a pivotal method to assess fairness and mitigate risks in applied machine learning models. This systematic literature review comprehensively analyzes contemporary techniques for auditing the biases of black-box AI systems beyond traditional software testing approaches. An extensive search across technology, law, and social sciences publications identified 22 recent studies exemplifying innovations in quantitative benchmarking, model inspections, adversarial evaluations, and participatory engagements situated in applied contexts like clinical predictions, lending decisions, and employment screenings. A rigorous analytical lens spotlighted considerable limitations in current approaches, including predominant technical orientations divorced from lived realities, lack of transparent value deliberations, overwhelming reliance on one-shot assessments, scarce participation of affected communities, and limited corrective actions instituted in response to audits. At the same time, directions like subsidiarity analyses, human-cent
This study comprehensively evaluates system performance through thermodynamic and exergy analyses of hydrogen production by water electrolysis. Energy inputs, hydrogen and oxygen production capacities, the exergy balance, and the losses of the electrolyzer system were examined in detail. The study found that most of the energy losses are due to heat losses and electrochemical conversion processes. It was also observed that increased electrical input increases the production of hydrogen and oxygen, but after a certain point the rate of efficiency increase slows down. According to the exergy analysis, the largest energy input to the system was electricity, hydrogen stood out as the main product, and oxygen and exergy losses were important factors affecting system performance. The results, in line with other studies in the literature, show that the integration of advanced materials, low-resistance electrodes, heat recovery systems, and renewable energy is critical to increasing the efficiency of electrolyzer systems and minimizing energy losses. The modeling results reveal that machine learning tools have significant potential to achieve high accuracy in electrolysis performance estimation and process analysis. This study aims to contribute to the development of hydrogen generation technologies and to shed light on global and regional technological decision-making for sustainable energy policies as these technologies expand.
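As a brief illustration of the exergy bookkeeping referred to in this abstract, a commonly used steady-state exergy balance and exergy efficiency for a water electrolyzer can be written as follows; the symbols are generic conventions, not necessarily the paper's notation.

```latex
% Steady-state exergy balance: electrical exergy plus feed-water exergy equals
% the chemical exergy of the product gases plus the exergy destroyed.
\dot{W}_{\mathrm{el}} + \dot{m}_{\mathrm{H_2O}}\, ex_{\mathrm{H_2O}}
  = \dot{m}_{\mathrm{H_2}}\, ex_{\mathrm{H_2}}
  + \dot{m}_{\mathrm{O_2}}\, ex_{\mathrm{O_2}}
  + \dot{Ex}_{\mathrm{dest}}

% Exergy efficiency: useful chemical exergy of hydrogen over electrical input.
\eta_{ex} = \frac{\dot{m}_{\mathrm{H_2}}\, ex_{\mathrm{H_2}}}{\dot{W}_{\mathrm{el}}}
```

Under such a balance, the dominance of electricity on the input side and of hydrogen on the output side, with the remainder appearing as destruction, is consistent with the trends the abstract reports.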
To study the environment of the Kipushi mining locality (LMK), the evolution of its landscape was observed using Landsat images from 2000 to 2020. The landscape has generally been modified by the unplanned expansion of human settlements and agricultural areas, associated with the increase in firewood collection, carbonization, and exploitation of quarry materials. The problem is that this area has never been the subject of change detection studies, and the LMK area is very heterogeneous. The objective of the study is to evaluate the performance of classification algorithms and apply change detection to highlight the degradation of the LMK. The first approach performed classifications based on the stacking of the analyzed Landsat image bands of 2000 and 2020, while the second performed classifications on neo-images derived from concatenations of the spectral indices: Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-up Index (NDBI), and Normalized Difference Water Index (NDWI). In both cases, the study comparatively examined the performance of five classification algorithms, namely Maximum Likelihood (ML), Minimum Distance (MD), Neural Network (NN), Parallelepiped (Para), and Spectral Angle Mapper (SAM). The results of the supervised classifications on the stacking of Landsat image bands from 2000 and 2020 were less consistent than those obtained with the index-concatenation approach. The Para and MD classification algorithms were the least efficient, with Kappa scores ranging from 0.27 (2000 image) to 0.43 (2020 image) for Para and from 0.64 (2000 image) to 0.84 (2020 image) for MD. The results of the SAM classifier were satisfactory, with Kappa scores of 0.83 (2000) and 0.88 (2020). The ML and NN classifiers were the most suitable for the study area, with Kappa scores ranging from 0.91 (2000 image) to 0.99 (2020 image) for ML and from 0.95 (2000 image) to 0.96 (2020 image) for NN.
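As a hedged sketch of the index-concatenation step, the code below builds a three-band "neo-image" from NDVI, NDBI, and NDWI using their standard definitions; it assumes the relevant Landsat bands have already been read into NumPy arrays and does not reproduce the paper's exact preprocessing.

```python
# Hedged sketch: stack NDVI, NDBI, and NDWI into a neo-image for classification,
# assuming the Landsat reflectance bands are already loaded as float arrays.
import numpy as np

def normalized_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """(a - b) / (a + b), with division by zero mapped to 0."""
    denom = a + b
    return np.where(denom == 0, 0.0, (a - b) / denom)

def index_stack(green, red, nir, swir1):
    ndvi = normalized_difference(nir, red)     # vegetation
    ndwi = normalized_difference(green, nir)   # open water
    ndbi = normalized_difference(swir1, nir)   # built-up / bare surfaces
    # Concatenate the indices into a single multi-band image.
    return np.stack([ndvi, ndbi, ndwi], axis=-1)

# Dummy reflectance data standing in for real Landsat bands.
shape = (512, 512)
rng = np.random.default_rng(1)
green, red, nir, swir1 = (rng.uniform(0.01, 0.6, shape) for _ in range(4))
neo_image = index_stack(green, red, nir, swir1)
print(neo_image.shape)  # (512, 512, 3)
```

The resulting three-band array would then be fed to whichever supervised classifier (ML, MD, NN, Para, or SAM) is being compared.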
Finding the right technique to optimize a complex problem is not an easy task. There are hundreds of methods, especially in the field of metaheuristics, suitable for solving NP-hard problems. Most metaheuristic research focuses on developing a new algorithm for a task or on modifying or improving an existing technique. The overall rate of reuse of metaheuristics is low. Many problems in the field of logistics are complex and NP-hard, so metaheuristics can solve them adequately. The purpose of this paper is to promote more frequent reuse of algorithms in the field of logistics. To this end, a framework is presented in which tasks are analyzed and categorized in a new way, in terms of their variables or the type of task. Particular emphasis is placed on whether the nature of a task is discrete or continuous. Metaheuristics are also analyzed from a new angle: the focus of the study is whether, according to the literature, an algorithm has so far effectively solved mostly discrete or mostly continuous problems. Rather than modifying and adapting an algorithm to a problem, methods that are likely to provide a good solution for a given task type are collected. A kind of reverse optimization is presented, which can support the reuse and industrial application of metaheuristics. The paper also provides evidence of the difficulties in applying metaheuristics. The research difficulties revealed can help improve the quality of the field and, by raising many additional research questions, can improve the real-world application of metaheuristic algorithms to specific problems. The paper supports decision-making in logistics when selecting applied optimization methods. We tested the effectiveness of the selection method on a specific task and showed that the functional structure can help in choosing the appropriate algorithm.
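As a loose illustration of the kind of decision support this framework targets, the toy lookup below pairs a task's variable type with metaheuristics commonly reported as effective for that type in the literature; the concrete algorithm lists are illustrative assumptions, not the paper's catalogue.

```python
# Hedged sketch: a toy "reverse optimization" lookup that suggests candidate
# metaheuristics from a task's variable type. The algorithm lists are
# illustrative assumptions, not the paper's actual catalogue.
from dataclasses import dataclass

CANDIDATES = {
    "discrete":   ["Genetic Algorithm", "Ant Colony Optimization", "Tabu Search"],
    "continuous": ["Particle Swarm Optimization", "Differential Evolution", "CMA-ES"],
    "mixed":      ["Genetic Algorithm", "Simulated Annealing"],
}

@dataclass
class LogisticsTask:
    name: str
    variable_type: str   # "discrete", "continuous", or "mixed"

def suggest_metaheuristics(task: LogisticsTask) -> list[str]:
    """Return metaheuristics reported as effective for this variable type."""
    return CANDIDATES.get(task.variable_type, [])

vrp = LogisticsTask(name="vehicle routing", variable_type="discrete")
print(suggest_metaheuristics(vrp))
# ['Genetic Algorithm', 'Ant Colony Optimization', 'Tabu Search']
```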
The Agriculture Trading Platform (ATP) represents a significant innovation in the realm of agricultural trade in Malaysia. This web-based platform is designed to address the prevalent inefficiencies and lack of transparency in the current agricultural trading environment. By centralizing real-time data on agricultural production, consumption, and pricing, ATP provides a comprehensive dashboard that facilitates data-driven decision-making for all stakeholders in the agricultural supply chain. The platform employs advanced deep learning algorithms, including Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNN), to forecast market trends and consumption patterns. These predictive capabilities enable producers to optimize their market strategies, negotiate better prices, and access broader markets, thereby enhancing the overall efficiency and transparency of agricultural trading in Malaysia. The ATP’s user-friendly interface and robust analytical tools have the potential to revolutionize the agricultural sector by empowering farmers, reducing reliance on intermediaries, and fostering a more equitable trading environment.
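As a hedged sketch of the kind of forecasting component described here (not the ATP's actual model; the window length, feature count, and Keras architecture are assumptions), a minimal LSTM next-step price forecaster could look like this:

```python
# Hedged sketch: a minimal LSTM regressor for next-step price forecasting.
# Window length, layer sizes, and training settings are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW = 30          # days of history fed to the model
N_FEATURES = 1       # e.g., daily price of one commodity

def make_windows(series: np.ndarray, window: int):
    """Slice a 1-D price series into (window, 1) inputs and next-day targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y

# Dummy data standing in for a real price history.
prices = np.sin(np.linspace(0, 20, 500)) + np.random.default_rng(0).normal(0, 0.05, 500)
X, y = make_windows(prices.astype("float32"), WINDOW)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[-1:]).squeeze())  # next-day price estimate
```

A CNN variant would swap the LSTM layer for one-dimensional convolutions over the same windows; either way, the forecasts would feed the dashboard views described above.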