Fog computing (FC) has emerged as a distributed computing paradigm that addresses several limitations of cloud computing and provides a wide range of services. It brings computation and data storage closer to data sources such as sensors, cameras, and mobile devices. The fog computing paradigm is instrumental in scenarios where low latency, real-time processing, and high bandwidth are critical, such as smart cities, industrial IoT, and autonomous vehicles. However, the distributed nature of fog computing introduces complexities in managing and predicting the execution time of tasks across heterogeneous devices with varying computational capabilities. Neural network models have demonstrated exceptional capability in prediction tasks because of their capacity to extract insightful patterns from data. By stacking numerous layers of interconnected nodes, neural networks can capture non-linear interactions and provide precise predictions across many fields. In addition, choosing the right inputs is essential to forecasting the correct value, since neural network models rely on the data fed into the network to make predictions. Based on the predicted value, the scheduler can choose the appropriate resource and schedule, improving resource usage and reducing makespan. In this paper, we propose a neural network model for predicting task execution time in fog computing, together with input selection based on the Interpretive Structural Modeling (ISM) technique. The proposed model achieved a 23.9% reduction in mean relative error (MRE) compared with state-of-the-art methods.
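The reported 23.9% improvement is measured in mean relative error (MRE), the average of the absolute prediction error normalized by the observed value. A minimal sketch of the metric; the function name and the sample execution times are illustrative assumptions, not values from the paper:

```python
def mean_relative_error(actual, predicted):
    """MRE: average of |actual - predicted| / actual over all samples."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Illustrative execution times (ms) for five fog tasks.
actual = [120.0, 80.0, 200.0, 50.0, 150.0]
predicted = [110.0, 88.0, 190.0, 55.0, 150.0]

print(f"MRE = {mean_relative_error(actual, predicted):.4f}")
```

A lower MRE means the scheduler receives execution-time estimates closer to what the devices actually deliver, which is what drives its resource choices.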
The fast-growing field of nanotheranostics is revolutionizing cancer treatment by allowing for precise diagnosis and targeted therapy at the cellular and molecular levels. These nanoscale platforms provide considerable benefits in oncology, including improved diagnostic and therapeutic specificity, lower systemic toxicity, and real-time monitoring of therapeutic outcomes. However, the complicated interactions of nanoparticles with biological systems, notably the immune system, present significant obstacles to clinical translation. While certain nanoparticles can elicit favorable anti-tumor immune responses, others cause immunotoxicity, including complement activation-related pseudoallergy (CARPA), cytokine storms, chronic inflammation, and organ damage. Traditional toxicity evaluation approaches are frequently time-consuming, expensive, and insufficient to capture these intricate nanoparticle-biological interactions. Artificial intelligence (AI) and machine learning (ML) have emerged as transformational solutions to these problems. This paper summarizes current achievements in nanotheranostics for cancer, delves into the causes of nanoparticle-induced immunotoxicity, and demonstrates how AI/ML may help anticipate immunotoxicity and guide the design of safer nanoparticles. Integrating AI/ML with modern computational approaches allows for the detection of potentially dangerous nanoparticle qualities, guides the optimization of physicochemical features, and speeds up the development of immune-compatible nanotheranostics suited to individual patients. The combination of nanotechnology with AI/ML has the potential to fully realize the therapeutic promise of nanotheranostics while assuring patient safety in the age of precision medicine.
Landslides are a destructive geohazard that produces significant economic and environmental damage and social effects. State-of-the-art advances in landslide detection and monitoring are made possible by integrating advanced Earth Observation (EO) technologies and Deep Learning (DL) methods with traditional mapping methods. This review examines the integration of EO and DL for landslide detection, synthesizing knowledge from more than 500 scholarly works. The research examines studies that combined satellite remote sensing data, including Synthetic Aperture Radar (SAR) and multispectral imagery, with up-to-date Deep Learning models, particularly Convolutional Neural Networks (CNNs) and their U-Net variants. The examined studies are categorized by methodological development, spatial extent, and validation technique. EO data extend real-time monitoring capabilities, while DL models automate feature recognition, which enhances accuracy in detection tasks. The research faces three critical problems: the shortage of training data for building stable models, the need to better understand the models' predictions, and the models' limited capacity to generalize across diverse geographical landscapes. We introduce a combined approach that uses multi-source EO data alongside DL models incorporating physical laws to improve evaluation and transferability between different platforms. Incorporating explainable AI (XAI) technology and active learning methods reduces the uninterpretable aspects of deep learning models, thereby improving the trustworthiness of automated landslide maps. The review highlights the need for a common agreement on datasets, benchmark standards, and interdisciplinary team efforts to advance the research topic.
Future research must combine semi-supervised learning approaches with synthetic data generation and real-time hazard prediction to optimize the deployment of EO-DL frameworks for landslide risk management. This study integrates EO and AI analysis methods to inform future landslide surveillance systems that help reduce disasters amid accelerating climate change.
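Pixel-wise detection maps such as those produced by CNN/U-Net models are commonly validated with intersection-over-union (IoU) against a reference inventory. A minimal pure-Python sketch on flattened binary masks; the function name and sample masks are illustrative assumptions, not from any study reviewed here:

```python
def iou(pred, truth):
    """Intersection-over-union for two flat binary masks (lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # both masks empty -> perfect match

# Illustrative 3x3 landslide masks, flattened row by row.
pred  = [0, 1, 1, 0, 1, 0, 0, 0, 1]
truth = [0, 1, 1, 0, 1, 1, 0, 0, 0]

print(f"IoU = {iou(pred, truth):.3f}")
```

Because landslide pixels are rare relative to background, area-overlap metrics like IoU are more informative than overall pixel accuracy when benchmarking detection models.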
This study comprehensively evaluates system performance through thermodynamic and exergy analysis of hydrogen production by water electrolysis. Energy inputs, hydrogen and oxygen production capacities, exergy balance, and losses of the electrolyzer system were examined in detail. The study found that most of the energy losses stem from heat losses and electrochemical conversion processes. It was also observed that increased electrical input increases the production of hydrogen and oxygen, but beyond a certain point the rate of efficiency gain slows. The exergy analysis determined that electricity was the largest energy input of the system, hydrogen stood out as the main product, and oxygen and exergy losses were important factors affecting system performance. The results, in line with other studies in the literature, show that the integration of advanced materials, low-resistance electrodes, heat recovery systems, and renewable energy is critical to increasing the efficiency of electrolyzer systems and minimizing energy losses. The modeling results reveal that machine learning approaches have significant potential to achieve high accuracy in electrolysis performance estimation and process monitoring. This study aims to contribute to the development of next-generation hydrogen production technologies and to inform global and regional technological decision-making for sustainable energy policies.
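The energy balance described above can be illustrated with a back-of-the-envelope efficiency calculation: the chemical energy carried by the produced hydrogen divided by the electrical energy supplied. The figures below (hydrogen HHV of about 39.4 kWh/kg, and a hypothetical 55 kWh electrical input per kg of H2) are illustrative assumptions, not values from the study:

```python
HHV_H2 = 39.4  # higher heating value of hydrogen, kWh per kg (approximate)

def electrolyzer_efficiency(electric_input_kwh, h2_output_kg):
    """Energy efficiency: chemical energy in hydrogen / electrical energy in."""
    return (h2_output_kg * HHV_H2) / electric_input_kwh

# Hypothetical operating point: 55 kWh of electricity per kg of hydrogen.
eta = electrolyzer_efficiency(55.0, 1.0)
print(f"efficiency = {eta:.1%}")
```

The gap between this ratio and 100% is exactly the loss budget the abstract attributes mainly to heat and electrochemical conversion, which is why heat recovery and low-resistance electrodes raise overall efficiency.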
Creating a crop type map is an important yet complicated task. This study aims to determine the best model to identify the wheat crop in the Haridwar district, Uttarakhand, India, by presenting a novel approach using machine learning techniques on time series data derived from the Sentinel-2 satellite, spanning mid-November to April. The proposed methodology combines the Normalized Difference Vegetation Index (NDVI), satellite bands such as red, green, blue, and NIR, feature extraction, and classification algorithms to effectively capture the temporal dynamics of crop growth. Three models, Random Forest, Convolutional Neural Networks, and Support Vector Machine, were compared to obtain the start of season (SOS), validated and evaluated using performance metrics. Random Forest stood out as the best model statistically and spatially for phenology parameter extraction, with the lowest RMSE of 19 days. CNN and Random Forest models were then used to classify wheat crops by combining SOS, the blue, green, red, and NIR bands, and NDVI. Random Forest produced the more accurate wheat map, with an accuracy of 69% and a MeanIoU of 0.5, while CNN was unable to distinguish between wheat and other crops. The results revealed that incorporating Sentinel-2 satellite data, with its high spatial and temporal resolution, with supervised machine-learning models and crop phenology metrics can empower the crop type classification process.
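NDVI, the core index in the pipeline above, is computed per pixel from the red and near-infrared reflectances. A minimal sketch; the reflectance values are illustrative, not taken from the study's Sentinel-2 scenes:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# Illustrative surface reflectances for a healthy vegetated pixel:
# vegetation reflects strongly in NIR and absorbs red light.
print(f"NDVI = {ndvi(0.45, 0.08):.3f}")
```

Tracking this value across the mid-November to April time series is what lets the models locate the start of season: NDVI rises as the wheat canopy develops and falls again toward harvest.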
Brain tumors are a primary factor causing cancer-related deaths globally, and their classification remains a significant research challenge due to the variability in tumor intensity, size, and shape, as well as the similar appearances of different tumor types. These factors make accurate differentiation difficult even with advanced imaging techniques such as magnetic resonance imaging (MRI). Recent techniques in artificial intelligence (AI), in particular deep learning (DL), have improved the speed and accuracy of medical image analysis, but they still face challenges such as overfitting and the need for large annotated datasets. This study addresses these challenges by presenting two approaches for brain tumor classification using MRI images. The first approach involves fine-tuning cutting-edge transfer-learning models, including SEResNet, ConvNeXtBase, and ResNet101V2, with global average pooling 2D and dropout layers to minimize overfitting and reduce the need for extensive preprocessing. The second approach leverages the Vision Transformer (ViT), optimized with the AdamW optimizer and extensive data augmentation. Experiments on the BT-Large-4C dataset demonstrate that SEResNet achieves the highest accuracy of 97.96%, surpassing ViT's 95.4%. These results suggest that fine-tuned transfer-learning models are more effective at addressing the challenges of overfitting and dataset limitations, ultimately outperforming the Vision Transformer and existing state-of-the-art techniques in brain tumor classification.
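Global average pooling, used in the first approach in place of a large flattened dense layer, collapses each H x W feature map to a single mean value, which sharply cuts the parameter count and helps curb overfitting. A minimal pure-Python sketch; the feature-map values are illustrative, not from the trained models:

```python
def global_average_pooling_2d(feature_maps):
    """Collapse each H x W feature map to one scalar by averaging all cells."""
    return [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]

# Two illustrative 2x2 feature maps -> a 2-element feature vector.
maps = [[[1.0, 3.0], [5.0, 7.0]],
        [[2.0, 2.0], [2.0, 2.0]]]
print(global_average_pooling_2d(maps))  # -> [4.0, 2.0]
```

In the fine-tuned networks this pooled vector, followed by dropout, feeds the final classification layer, so the head adds only one weight per feature channel per class rather than one per pixel.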
Copyright © by EnPress Publisher. All rights reserved.