Landslides are a destructive geohazard that causes significant economic and environmental damage as well as social disruption. State-of-the-art advances in landslide detection and monitoring have been achieved by integrating expanding Earth Observation (EO) technologies and Deep Learning (DL) methods with traditional mapping approaches. This review examines the combination of EO and DL for landslide detection, synthesizing knowledge from more than 500 scholarly works. It covers studies that pair satellite remote sensing data, including Synthetic Aperture Radar (SAR) and multispectral imagery, with modern Deep Learning models, particularly Convolutional Neural Networks (CNNs) and their U-Net variants. The examined studies are categorized by methodological development, spatial extent, and validation technique. EO data extend real-time monitoring capabilities, while DL models automate feature recognition and thereby enhance detection accuracy. The field faces three critical problems: the scarcity of training data needed to build stable models, the limited interpretability of AI predictions, and the difficulty of generalizing models across diverse geographical landscapes. We introduce a combined approach that uses multi-source EO data alongside physics-informed DL models to improve evaluation and transferability across platforms. Incorporating explainable AI (XAI) techniques and active learning reduces the opacity of deep learning models, thereby improving the trustworthiness of automated landslide maps. The review highlights the need for agreed-upon datasets, benchmark standards, and interdisciplinary collaboration to advance the field.
Future research must combine semi-supervised learning, synthetic data generation, and real-time hazard prediction to optimise the deployment of EO-DL frameworks for landslide risk management. This study integrates EO and AI analysis methods to inform future landslide surveillance systems that help reduce disasters amid accelerating climate change.
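As an illustrative aside (not drawn from any of the reviewed studies), the preprocessing step that typically feeds EO scenes to a CNN or U-Net is tiling the raster into fixed-size patches. The following is a minimal sketch of that step; the function name, patch size, and the assumption of a NumPy array with shape (bands, H, W) are ours, not the review's:

```python
import numpy as np

def extract_patches(raster, patch_size=64, stride=64):
    """Tile a (bands, H, W) raster into CNN-ready patches.

    Returns an array of shape (n_patches, bands, patch_size, patch_size).
    Border pixels that do not fill a complete patch are dropped.
    """
    bands, h, w = raster.shape
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(raster[:, top:top + patch_size,
                                  left:left + patch_size])
    return np.stack(patches)

# Example: a synthetic 4-band scene (e.g. R, G, B, NIR) of 256x256 pixels
# yields a 4x4 grid of non-overlapping 64x64 patches.
scene = np.random.default_rng(0).random((4, 256, 256))
patches = extract_patches(scene)
print(patches.shape)  # (16, 4, 64, 64)
```

With a stride smaller than the patch size, the same routine produces overlapping patches, a common way to enlarge scarce landslide training sets.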
This research explores the advancement of Artificial Intelligence (AI) in Occupational Health and Safety (OHS) across high-risk industries, highlighting its pivotal role in mitigating the global incidence of occupational incidents and diseases, which result in approximately 2.3 million fatalities annually. Traditional OHS practices often fall short in completely preventing workplace incidents, primarily due to limitations in human-operated risk assessments and management. The integration of AI technologies has been instrumental in automating hazardous tasks, enhancing real-time monitoring, and improving decision-making through comprehensive data analysis. Specific AI applications discussed include drones and robots for risky operations, computer vision for environmental monitoring, and predictive analytics to pre-empt potential hazards. Additionally, AI-driven simulations are enhancing training protocols, significantly improving both the safety and efficiency of workers. Various studies supporting the effectiveness of these AI applications indicate marked improvements in risk management and incident prevention. By transitioning from reactive to proactive safety measures, the implementation of AI in OHS represents a transformative approach, aiming to substantially reduce the global burden of occupational injuries and fatalities in high-risk sectors.
Brain tumors are a leading cause of cancer-related deaths globally, and their classification remains a significant research challenge due to the variability in tumor intensity, size, and shape, as well as the similar appearances of different tumor types. These factors make accurate differentiation difficult even with advanced imaging techniques such as magnetic resonance imaging (MRI). Recent artificial intelligence (AI) techniques, in particular deep learning (DL), have improved the speed and accuracy of medical image analysis, but they still face challenges such as overfitting and the need for large annotated datasets. This study addresses these challenges by presenting two approaches for brain tumor classification using MRI images. The first approach fine-tunes cutting-edge pre-trained models via transfer learning, including SEResNet, ConvNeXtBase, and ResNet101V2, with global average pooling 2D and dropout layers to minimize overfitting and reduce the need for extensive preprocessing. The second approach leverages the Vision Transformer (ViT), optimized with the AdamW optimizer and extensive data augmentation. Experiments on the BT-Large-4C dataset demonstrate that SEResNet achieves the highest accuracy of 97.96%, surpassing ViT’s 95.4%. These results suggest that fine-tuned transfer learning models are more effective at addressing the challenges of overfitting and dataset limitations, ultimately outperforming the Vision Transformer and existing state-of-the-art techniques in brain tumor classification.
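As a brief illustration of the global average pooling 2D layer the first approach places between the backbone and its dropout/dense head (a minimal NumPy sketch, not the study's code; the tensor sizes are assumed typical for a ResNet-style backbone):

```python
import numpy as np

def global_average_pool_2d(feature_maps):
    """Collapse (H, W, C) feature maps to a length-C vector by averaging
    each channel over its spatial grid. Replacing a flatten layer with
    this pooling removes most of the head's parameters, which is why it
    helps curb overfitting on small annotated datasets."""
    return feature_maps.mean(axis=(0, 1))

# A toy 7x7x2048 activation tensor, the usual output of a ResNet-style
# backbone on a 224x224 input image.
acts = np.ones((7, 7, 2048), dtype=np.float32)
vec = global_average_pool_2d(acts)
print(vec.shape)  # (2048,)
```

A flatten layer would instead hand 7 × 7 × 2048 = 100,352 inputs to the dense classifier; pooling reduces that to 2,048.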
Named Entity Recognition (NER), a core task in Information Extraction (IE) alongside Relation Extraction (RE), identifies and extracts entities like place and person names in various domains. NER has improved business processes in both public and private sectors but remains underutilized in government institutions, especially in developing countries like Indonesia. This study examines which government fields have utilized NER over the past five years, evaluates system performance, identifies common methods, highlights countries with significant adoption, and outlines current challenges. Over 64 international studies from 15 countries were selected using PRISMA 2020 guidelines. The findings are synthesized into a preliminary ontology design for Government NER.
Breast cancer is a prevalent form of cancer worldwide. Thermography, a method for diagnosing breast cancer, involves recording the thermal patterns of the breast. This article explores the use of a convolutional neural network (CNN) to extract features from a dataset of thermographic images. Initially, the CNN was used to extract a feature vector from each image. Machine learning techniques were then applied for image classification. This study utilizes four classification methods, namely a fully connected neural network (FCnet), a support vector machine (SVM), a classification linear model (CLINEAR), and k-nearest neighbors (KNN), to classify breast cancer from thermographic images. The accuracy rates achieved by the FCnet, SVM, CLINEAR, and KNN algorithms were 94.2%, 95.0%, 95.0%, and 94.1%, respectively. Furthermore, the reliability parameters for these classifiers were computed as 92.1%, 97.5%, 96.5%, and 91.2%, while their respective sensitivities were 95.5%, 94.1%, 90.4%, and 93.2%. These findings can assist experts in developing an expert system for breast cancer diagnosis.
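To illustrate the second stage of this pipeline, the sketch below classifies CNN-style feature vectors with a plain k-nearest-neighbors vote, one of the four classifiers used. The synthetic Gaussian features, class labels, and k value are our assumptions for demonstration only; the study's real feature vectors come from thermographic images:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Label each test feature vector by majority vote among its k
    nearest training vectors under Euclidean distance."""
    preds = []
    for x in test_X:
        dists = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Synthetic stand-ins for 128-dimensional CNN feature vectors of two
# well-separated classes (e.g. "normal" vs "suspicious" thermograms).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 128)),
               rng.normal(4.0, 1.0, (50, 128))])
y = np.array([0] * 50 + [1] * 50)
test_X = np.vstack([rng.normal(0.0, 1.0, (10, 128)),
                    rng.normal(4.0, 1.0, (10, 128))])
test_y = np.array([0] * 10 + [1] * 10)
acc = (knn_predict(X, y, test_X) == test_y).mean()
print(acc)
```

Real thermogram features overlap far more than these synthetic clusters, which is why the reported accuracies sit in the 94–95% range rather than near 100%.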
The power of Artificial Intelligence (AI) combined with the surgeons’ expertise leads to breakthroughs in surgical care, bringing new hope to patients. Utilizing deep learning-based computer vision techniques in surgical procedures will enhance the healthcare industry. Laparoscopic surgery holds excellent potential for computer vision due to the abundance of real-time laparoscopic recordings, captured by digital cameras, that contain significant unexplored information. Furthermore, with computing power becoming increasingly accessible and Machine Learning methods expanding across various industries, the potential for AI in healthcare is vast. AI can contribute to laparoscopic surgery in several ways; one objective is an image guidance system that identifies anatomical structures in real time. However, few studies are concerned with intraoperative anatomy recognition in laparoscopic surgery. This study provides a comprehensive review of current state-of-the-art semantic segmentation techniques, which can guide surgeons during laparoscopic procedures by identifying specific anatomical structures for dissection or flagging hazardous areas to avoid. This review aims to advance research on AI for surgery, guiding innovation towards experiments that can be applied in real-world clinical settings. Such AI contributions could revolutionize the field of laparoscopic surgery and improve patient outcomes.
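As a small aside on how the semantic segmentation models surveyed here are typically scored (the metric is standard in the field, though the toy masks below are our own example, not data from any reviewed study), Intersection-over-Union measures how well a predicted anatomical-structure mask overlaps the annotated ground truth:

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection-over-Union between two boolean masks: the area where
    prediction and ground truth agree, divided by the area either covers.
    1.0 is a perfect match; 0.0 means no overlap at all."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks: the prediction covers the left half of the frame,
# the ground-truth structure the top half.
pred = np.zeros((4, 4), dtype=bool); pred[:, :2] = True
true = np.zeros((4, 4), dtype=bool); true[:2, :] = True
print(iou(pred, true))  # 4 overlapping pixels / 12 in the union = 0.333...
```

Mean IoU across structure classes is the usual headline number when comparing segmentation architectures, so it is a natural axis along which to read the studies in this review.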
Copyright © by EnPress Publisher. All rights reserved.