Fog computing (FC) has emerged as a distributed computing paradigm that addresses several limitations of cloud computing by bringing computation and data storage closer to data sources such as sensors, cameras, and mobile devices. Fog computing is instrumental in scenarios where low latency, real-time processing, and high bandwidth are critical, such as smart cities, industrial IoT, and autonomous vehicles. However, its distributed nature introduces complexities in managing and predicting the execution time of tasks across heterogeneous devices with varying computational capabilities. Neural network models have demonstrated exceptional capability in prediction tasks because of their capacity to extract insightful patterns from data: by using numerous layers of linked nodes, they can capture non-linear interactions and provide precise predictions across many fields. In addition, choosing the right inputs is essential for forecasting the correct value, since neural network models rely on the data fed into the network to make predictions. Based on the predicted value, the scheduler can select the appropriate resource and schedule tasks for effective resource usage and a reduced makespan. In this paper, we propose a neural network model for predicting task execution time in fog computing, together with an input assessment based on the Interpretive Structural Modeling (ISM) technique. The proposed model achieved a 23.9% reduction in Mean Relative Error (MRE) compared to state-of-the-art methods.
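The prediction task described above can be illustrated with a minimal sketch: a small feed-forward network regressing task execution time from node and task features, evaluated with the MRE metric the abstract reports. This is not the paper's model; the feature set (task size, node CPU rate, link bandwidth), the synthetic ground-truth formula, and the network size are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): a feed-forward network that
# predicts fog-task execution time from hypothetical node/task features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical inputs: task size (MI), node CPU rate (MIPS), uplink bandwidth (Mbps)
n = 1000
task_size = rng.uniform(100, 10_000, n)
cpu_rate = rng.uniform(500, 5_000, n)
bandwidth = rng.uniform(10, 100, n)
X = np.column_stack([task_size, cpu_rate, bandwidth])

# Synthetic ground truth: compute time + transfer time + fixed overhead, plus noise
y = task_size / cpu_rate + task_size / (bandwidth * 1000) + 0.5 + rng.normal(0, 0.02, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(
    StandardScaler(),  # scale features so the network trains stably
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)

# Mean Relative Error (MRE), the metric reported in the paper
pred = model.predict(X_te)
mre = np.mean(np.abs(pred - y_te) / y_te)
print(f"MRE = {mre:.3f}")
```

In this sketch the scheduler would rank candidate fog nodes by the predicted time and assign each task to the fastest one; the ISM-based input assessment would decide which features enter `X` in the first place.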
Landslides are destructive geohazards that cause significant economic and environmental damage and social harm. State-of-the-art advances in landslide detection and monitoring have been achieved by integrating expanding Earth Observation (EO) technologies and Deep Learning (DL) methods with traditional mapping approaches. This review examines the combination of EO and DL for landslide detection, synthesizing knowledge from more than 500 scholarly works. It covers studies that combine satellite remote sensing data, including Synthetic Aperture Radar (SAR) and multispectral imagery, with modern DL models, particularly Convolutional Neural Networks (CNNs) and their U-Net variants. The reviewed studies are categorized by methodological development, spatial extent, and validation technique. EO data extend real-time monitoring capabilities, while DL models automate feature recognition, improving detection accuracy. The field faces three critical problems: a shortage of training data for building stable models, the need for better interpretability of AI predictions, and limited model generalization across diverse geographical landscapes. We introduce a combined approach that uses multi-source EO data alongside physics-informed DL models to improve evaluation and transferability across platforms. Incorporating explainable AI (XAI) techniques and active learning reduces the opacity of deep learning models, thereby improving the trustworthiness of automated landslide maps. The review highlights the need for shared datasets, benchmark standards, and interdisciplinary collaboration to advance the field.
Future research must combine semi-supervised learning approaches with synthetic data creation and real-time hazard prediction to optimise the deployment of EO-DL frameworks for landslide risk management. This study integrates EO and AI analysis methods to inform future landslide surveillance systems that help reduce disasters amid accelerating climate change.
Brain tumors are a primary factor in cancer-related deaths globally, and their classification remains a significant research challenge due to the variability in tumor intensity, size, and shape, as well as the similar appearances of different tumor types. These factors further complicate accurate differentiation, making diagnosis difficult even with advanced imaging techniques such as magnetic resonance imaging (MRI). Recent techniques in artificial intelligence (AI), in particular deep learning (DL), have improved the speed and accuracy of medical image analysis, but they still face challenges such as overfitting and the need for large annotated datasets. This study addresses these challenges by presenting two approaches for brain tumor classification using MRI images. The first approach fine-tunes cutting-edge pretrained models via transfer learning, including SEResNet, ConvNeXtBase, and ResNet101V2, with global average pooling 2D and dropout layers to minimize overfitting and reduce the need for extensive preprocessing. The second approach leverages the Vision Transformer (ViT), optimized with the AdamW optimizer and extensive data augmentation. Experiments on the BT-Large-4C dataset demonstrate that SEResNet achieves the highest accuracy of 97.96%, surpassing ViT's 95.4%. These results suggest that fine-tuned transfer learning models are more effective at addressing overfitting and dataset limitations, ultimately outperforming the Vision Transformer and existing state-of-the-art techniques in brain tumor classification.
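The classification head described in the first approach (global average pooling 2D followed by dropout before the final dense layer) can be sketched in plain NumPy. This is a didactic illustration of the two operations, not the paper's implementation; the feature-map shape (a 7x7x2048 backbone output) and four tumor classes are assumptions.

```python
# Sketch of a GAP-2D + dropout classification head on top of a CNN backbone.
# Pure-NumPy illustration of the two layers named in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def global_average_pool_2d(feature_maps):
    # (batch, height, width, channels) -> (batch, channels):
    # each feature map collapses to its spatial mean
    return feature_maps.mean(axis=(1, 2))

def dropout(x, rate, training=True):
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)  # inverted dropout preserves expected activation

# Assumed backbone output for a batch of 4 MRI slices: 7x7 maps, 2048 channels
features = rng.standard_normal((4, 7, 7, 2048))
pooled = global_average_pool_2d(features)   # shape (4, 2048)
regularized = dropout(pooled, rate=0.5)

# Final dense layer over 4 hypothetical tumor classes, then softmax
W = rng.standard_normal((2048, 4)) * 0.01
logits = regularized @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(probs.shape)
```

GAP replaces a large flatten-and-dense stage with a single mean per channel, which is one reason this head needs fewer parameters and overfits less.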
Breast cancer is a prevalent form of cancer worldwide. Thermography, a method for diagnosing breast cancer, involves recording the thermal patterns of the breast. This article explores the use of a convolutional neural network (CNN) algorithm to extract features from a dataset of thermographic images. Initially, the CNN was used to extract a feature vector from the images. Subsequently, machine learning techniques were used for image classification. This study utilizes four classification methods, namely a fully connected neural network (FCnet), a support vector machine (SVM), a classification linear model (CLINEAR), and k-nearest neighbors (KNN), to classify breast cancer from thermographic images. The accuracy rates achieved by the FCnet, SVM, CLINEAR, and KNN algorithms were 94.2%, 95.0%, 95.0%, and 94.1%, respectively. Furthermore, the reliability parameters for these classifiers were computed as 92.1%, 97.5%, 96.5%, and 91.2%, while their respective sensitivities were calculated as 95.5%, 94.1%, 90.4%, and 93.2%. These findings can assist experts in developing an expert system for breast cancer diagnosis.
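The two-stage pipeline above (feature extraction, then a classical classifier) can be sketched as follows. Since no pretrained CNN is bundled here, a stand-in statistical feature extractor and synthetic "thermograms" are used purely for illustration; in practice `extract_features` would return the CNN's penultimate-layer activations, and the SVM stage matches one of the four classifiers the study compares.

```python
# Sketch of the extract-then-classify pipeline: stand-in features + SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_features(images):
    # Stand-in for CNN feature vectors: simple per-image intensity statistics.
    flat = images.reshape(len(images), -1)
    return np.column_stack([flat.mean(1), flat.std(1), flat.max(1), flat.min(1)])

# Synthetic thermograms: the "suspect" class is slightly warmer on average
healthy = rng.normal(0.4, 0.1, (100, 32, 32))
suspect = rng.normal(0.6, 0.1, (100, 32, 32))
X = extract_features(np.concatenate([healthy, suspect]))
y = np.array([0] * 100 + [1] * 100)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f}")
```

Swapping `SVC` for `KNeighborsClassifier` or a small dense network reproduces the study's comparison of classifiers over one shared feature space.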
Named Entity Recognition (NER), a core task in Information Extraction (IE) alongside Relation Extraction (RE), identifies and extracts entities like place and person names in various domains. NER has improved business processes in both public and private sectors but remains underutilized in government institutions, especially in developing countries like Indonesia. This study examines which government fields have utilized NER over the past five years, evaluates system performance, identifies common methods, highlights countries with significant adoption, and outlines current challenges. Over 64 international studies from 15 countries were selected using PRISMA 2020 guidelines. The findings are synthesized into a preliminary ontology design for Government NER.
Monitoring marine biodiversity is a challenge in vulnerable and difficult-to-access habitats such as underwater caves. Underwater caves are biodiversity hotspots, concentrating a large number of species. However, most of the sessile species that live on the rocky walls are very vulnerable and are often threatened by various pressures; the use of these spaces as a destination for recreational divers can impact the benthic habitat in several ways. In this work, we propose a methodology based on video recordings of cave walls and image analysis with deep learning algorithms to estimate the spatial density of structuring species in a study area. We combine automatic frame-overlap detection, estimation of the actual extent of surface cover, and semantic segmentation of the 10 main species of corals and sponges to obtain species density maps. These maps can serve as the data source for monitoring biodiversity over time. In this paper, we analyzed the performance of three different semantic segmentation algorithms and backbones for this task and found that the Mask R-CNN model with the Xception101 backbone achieves the best accuracy, with an average segmentation accuracy of 82%.
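The final step of the pipeline, turning per-frame segmentation masks into species cover estimates, can be sketched as below. The class IDs and species names are hypothetical, and this sketch omits the frame-overlap correction and surface-extent estimation that the full methodology includes.

```python
# Sketch: per-species cover fraction from a single segmentation mask.
import numpy as np

rng = np.random.default_rng(0)

SPECIES = {1: "coral_A", 2: "coral_B", 3: "sponge_A"}  # hypothetical class IDs

def species_cover(mask):
    """Fraction of the surveyed wall surface covered by each species."""
    total = mask.size
    return {name: np.count_nonzero(mask == cid) / total
            for cid, name in SPECIES.items()}

# Fake per-frame mask: 0 = background, 1..3 = species labels
mask = rng.integers(0, 4, size=(480, 640))
print(species_cover(mask))
```

Aggregating these per-frame fractions over georeferenced, overlap-corrected frames would yield the density maps the abstract describes.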
Copyright © by EnPress Publisher. All rights reserved.