Fog computing (FC) has emerged as a distributed computing paradigm that addresses several limitations of Cloud computing by bringing computation and data storage closer to data sources such as sensors, cameras, and mobile devices. The paradigm is instrumental in scenarios where low latency, real-time processing, and high bandwidth are critical, such as smart cities, industrial IoT, and autonomous vehicles. However, its distributed nature introduces complexities in managing and predicting the execution time of tasks across heterogeneous devices with varying computational capabilities. Neural network models have demonstrated exceptional capability in prediction tasks because of their capacity to extract insightful patterns from data: by stacking multiple layers of connected nodes, they capture non-linear interactions and deliver precise predictions across many fields. Choosing the right inputs is therefore essential, since a neural network's predictions depend entirely on the data fed into it. Based on the predicted execution time, the scheduler can select the appropriate resource and build a schedule that improves resource utilization and reduces makespan. In this paper, we propose a Neural Network model for predicting task execution time in fog computing, with its inputs assessed using the Interpretive Structural Modeling (ISM) technique. The proposed model achieved a 23.9% reduction in MRE compared with state-of-the-art methods.
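To make the prediction pipeline concrete, the sketch below trains a small feed-forward network to estimate task execution time and evaluates it with the mean relative error (MRE). It is only an illustration: the feature set (task length, node CPU rate, input size, link bandwidth), the layer sizes, and the synthetic data are assumptions, not the model or dataset used in the paper.

```python
# Minimal sketch: feed-forward regressor for fog task execution time,
# evaluated with mean relative error (MRE). Features and data are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Assumed inputs: task length (MI), node CPU rate (MIPS), input size (MB), link bandwidth (Mbps)
X = rng.uniform([1e3, 500, 1, 10], [1e5, 5000, 100, 1000], size=(n, 4))
# Synthetic ground truth: compute time + transfer time, plus noise
y = X[:, 0] / X[:, 1] + 8 * X[:, 2] / X[:, 3] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)

pred = model.predict(scaler.transform(X_te))
mre = np.mean(np.abs(pred - y_te) / y_te)   # mean relative error
print(f"MRE: {mre:.3f}")
```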
The fast-growing field of nanotheranostics is revolutionizing cancer treatment by enabling precise diagnosis and targeted therapy at the cellular and molecular levels. These nanoscale platforms offer considerable benefits in oncology, including improved diagnostic and therapeutic specificity, lower systemic toxicity, and real-time monitoring of therapeutic outcomes. However, the complicated interactions of nanoparticles with biological systems, notably the immune system, present significant obstacles to clinical translation. While certain nanoparticles can elicit favorable anti-tumor immune responses, others cause immunotoxicity, including complement activation-related pseudoallergy (CARPA), cytokine storms, chronic inflammation, and organ damage. Traditional toxicity evaluation approaches are frequently time-consuming, expensive, and insufficient to capture these intricate nanoparticle-biological interactions. Artificial intelligence (AI) and machine learning (ML) have emerged as transformative solutions to these problems. This paper summarizes recent achievements in nanotheranostics for cancer, examines the causes of nanoparticle-induced immunotoxicity, and demonstrates how AI/ML can help anticipate immunotoxic responses and guide the design of safer nanoparticles. Integrating AI/ML with modern computational approaches allows the identification of potentially hazardous nanoparticle properties, guides the optimization of physicochemical features, and speeds up the development of immune-compatible nanotheranostics tailored to individual patients. The combination of nanotechnology with AI/ML has the potential to fully realize the therapeutic promise of nanotheranostics while ensuring patient safety in the age of precision medicine.
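As an illustration of the kind of AI/ML pipeline discussed here, the sketch below trains a classifier to flag potentially immunotoxic formulations from physicochemical descriptors. The descriptor names, the input file, and the model choice are hypothetical and are not taken from the surveyed studies.

```python
# Minimal sketch: classifying nanoparticle formulations as immunotoxic or not
# from assumed physicochemical descriptors. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed tabular dataset: one row per formulation, with measured descriptors
# and an experimentally assigned immunotoxicity label (1 = immunotoxic).
df = pd.read_csv("nanoparticle_descriptors.csv")  # hypothetical file
features = ["size_nm", "zeta_potential_mV", "surface_coating", "dose_mg_kg"]
X = pd.get_dummies(df[features])   # one-hot encode the categorical coating type
y = df["immunotoxic"]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Feature importances hint at which physicochemical properties drive the risk,
# guiding optimization toward immune-compatible designs.
clf.fit(X, y)
for name, imp in sorted(zip(X.columns, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")
```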
Landslides are destructive geohazards that cause significant economic, environmental, and social damage. State-of-the-art advances in landslide detection and monitoring have become possible by integrating modern Earth Observation (EO) technologies and Deep Learning (DL) methods with traditional mapping approaches. This review examines the combination of EO and DL for landslide detection, synthesizing knowledge from more than 500 scholarly works. It covers studies that combine satellite remote sensing data, including Synthetic Aperture Radar (SAR) and multispectral imagery, with recent Deep Learning models, particularly Convolutional Neural Networks (CNNs) and their U-Net variants. The examined studies are categorized by methodological development, spatial extent, and validation technique. EO data extend real-time monitoring capabilities, while DL models automate feature recognition, which improves detection accuracy. The field faces three critical problems: insufficient training data for building stable models, limited interpretability of AI predictions, and poor transferability across diverse geographical landscapes. We introduce a combined approach that uses multi-source EO data alongside physics-informed DL models to improve evaluation and transferability across platforms. Incorporating explainable AI (XAI) technology and active learning methods reduces the opacity of deep learning models, thereby improving the trustworthiness of automated landslide maps. The review highlights the need for shared datasets, benchmark standards, and interdisciplinary collaboration to advance the research topic. Future research should combine semi-supervised learning, synthetic data generation, and real-time hazard prediction to optimize the deployment of EO-DL frameworks for landslide risk management. This study integrates EO and AI analysis methods to inform future landslide surveillance systems that help reduce disasters amid accelerating climate change.
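For readers unfamiliar with the U-Net variants discussed above, the following is a minimal sketch of a small U-Net-style network that maps multispectral patches to per-pixel landslide probabilities. Patch size, band count, and filter widths are illustrative assumptions rather than settings from any reviewed study.

```python
# Minimal sketch: tiny U-Net-style binary segmentation model for landslide
# mapping from multispectral patches. All sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(height=128, width=128, bands=4):
    inputs = layers.Input((height, width, bands))
    # Encoder
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(c2)
    p2 = layers.MaxPooling2D()(c2)
    # Bottleneck
    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)
    # Decoder with skip connections
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    u2 = layers.concatenate([u2, c2])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    u1 = layers.concatenate([u1, c1])
    c4 = layers.Conv2D(16, 3, padding="same", activation="relu")(u1)
    # Per-pixel landslide probability
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    model = Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```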
Mangrove forests are vital to coastal protection, biodiversity support, and climate regulation. In the Niger Delta, these ecosystems are increasingly threatened by oil spill incidents linked to intensive petroleum activities. This study investigates the extent of mangrove degradation between 1986 and 2022 in the lower Niger Delta, specifically the region between the San Bartolomeo and Imo Rivers, using remote sensing and machine learning. Landsat 5 TM (1986) and Landsat 8 OLI (2022) imagery were classified using the Support Vector Machine (SVM) algorithm. Classification accuracy was high, with overall accuracies of 98% (1986) and 99% (2022) and Kappa coefficients of 0.97 and 0.98. Healthy mangrove cover declined from 2804.37 km² (58%) to 2509.18 km² (52%), while degraded mangroves increased from 72.03 km² (1%) to 327.35 km² (7%), reflecting a 354.46% rise. Water bodies expanded by 101.17 km² (5.61%), potentially due to dredging, erosion, and sea-level rise. Built-up areas declined from 131.85 km² to 61.14 km², possibly reflecting socio-environmental displacement. Statistical analyses, including Chi-square (χ² = 1091.33, p < 0.001) and Kendall’s Tau (τ = 1, p < 0.001), showed strong correlations between oil spills and mangrove degradation. From 2012 to 2022, over 21,914 barrels of oil were spilled, with only 38% recovered. Although paired t-tests and ANOVA results indicated no statistically significant changes at broad scales, localized ecological shifts remain severe. These findings highlight the urgent need for integrated environmental policies and restoration efforts to mitigate mangrove loss and enhance sustainability in the Niger Delta.
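A minimal sketch of the classification step follows, assuming pixel spectra and reference labels (healthy mangrove, degraded mangrove, water, built-up) have already been extracted from the Landsat scenes; the file names and hyperparameters are illustrative, while the RBF-kernel SVM and the accuracy/kappa evaluation mirror the workflow described above.

```python
# Minimal sketch: SVM classification of Landsat pixel spectra with overall
# accuracy and Kappa coefficient. Input arrays are assumed to be pre-extracted.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# X: (n_pixels, n_bands) surface reflectance values; y: integer class labels
X = np.load("landsat_pixel_spectra.npy")   # hypothetical file of training samples
y = np.load("landsat_pixel_labels.npy")    # hypothetical reference labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

svm = SVC(kernel="rbf", C=10, gamma="scale")
svm.fit(X_tr, y_tr)

pred = svm.predict(X_te)
print("Overall accuracy:", accuracy_score(y_te, pred))
print("Kappa coefficient:", cohen_kappa_score(y_te, pred))
```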
The expanding adoption of artificial intelligence systems across high-impact sectors has catalyzed concerns regarding inherent biases and discrimination, leading to calls for greater transparency and accountability. Algorithm auditing has emerged as a pivotal method to assess fairness and mitigate risks in applied machine learning models. This systematic literature review comprehensively analyzes contemporary techniques for auditing the biases of black-box AI systems beyond traditional software testing approaches. An extensive search across technology, law, and social sciences publications identified 22 recent studies exemplifying innovations in quantitative benchmarking, model inspections, adversarial evaluations, and participatory engagements situated in applied contexts like clinical predictions, lending decisions, and employment screenings. A rigorous analytical lens spotlighted considerable limitations in current approaches, including predominant technical orientations divorced from lived realities, lack of transparent value deliberations, overwhelming reliance on one-shot assessments, scarce participation of affected communities, and limited corrective actions instituted in response to audits. At the same time, directions like subsidiarity analyses, human-cent
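As one concrete example of the quantitative benchmarking the reviewed audits rely on, the sketch below measures the selection-rate disparity (demographic parity difference) of a black-box model's decisions across demographic groups; the arrays and group labels are illustrative and not drawn from any of the 22 studies.

```python
# Minimal sketch: a single fairness benchmark, the demographic parity difference,
# computed from a black-box model's observed decisions. Data are illustrative.
import numpy as np

def demographic_parity_difference(decisions, groups):
    """Max difference in positive-decision rate between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# decisions: 1 = favorable outcome (e.g., loan approved), 0 = unfavorable
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(demographic_parity_difference(decisions, groups))  # 0.6 - 0.4 = 0.2
```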
Soil salinization poses a serious challenge to agricultural productivity and environmental sustainability, particularly in arid and semi-arid coastal regions. This study investigates the spatial variability of soil electrical conductivity (EC) and its relationship with key cations and anions (Na⁺, K⁺, Ca²⁺, Mg²⁺, Cl⁻, CO₃²⁻, HCO₃⁻, SO₄²⁻) along the southeastern coast of the Caspian Sea in Iran. Using a combination of field-based soil sampling, laboratory analyses, and Landsat 8 spectral data, linear (Multiple Linear Regression, MLR; Partial Least Squares Regression, PLSR) and nonlinear (Artificial Neural Network, ANN; Support Vector Machine, SVM) modeling approaches were employed to estimate and map soil EC. Results identified Na⁺ and Cl⁻ as the primary contributors to salinity (r = 0.78 and r = 0.88, respectively), with NaCl salts dominating the region’s soil salinity dynamics. Secondary contributions from potassium chloride (KCl) and magnesium chloride (MgCl₂) were also observed. Coastal landforms such as lagoon relicts and coastal plains exhibited the highest salinity levels, attributed to geomorphic processes and anthropogenic activities. Among the predictive models, the SVM algorithm outperformed the others, achieving higher R² values and lower RMSE (27.35 on the test set and 24.62 on the training set), underscoring its effectiveness in capturing complex soil-environment interactions. This study highlights the utility of digital soil mapping (DSM) for assessing soil salinity and provides actionable insights for sustainable land management, particularly for mitigating salinity and enhancing agricultural practices in vulnerable coastal systems.
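A minimal sketch of the best-performing step follows: an SVM regression that predicts soil EC from Landsat 8 predictors and reports RMSE and R² on training and test splits; the input files, kernel choice, and hyperparameters are assumptions for illustration.

```python
# Minimal sketch: SVM regression (SVR) predicting soil EC from spectral predictors,
# evaluated with RMSE and R² on train and test splits. Inputs are assumed.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

X = np.load("landsat8_predictors.npy")   # hypothetical per-sample spectral predictors
y = np.load("soil_ec_dS_per_m.npy")      # hypothetical laboratory-measured EC values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=0.1))
svr.fit(X_tr, y_tr)

for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    pred = svr.predict(Xs)
    rmse = np.sqrt(mean_squared_error(ys, pred))
    print(f"{name}: RMSE={rmse:.2f}, R2={r2_score(ys, pred):.2f}")
```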