Credit risk assessment is one of the most important aspects of financial decision-making. This study presents a systematic review of the literature on the application of Artificial Intelligence (AI) and Machine Learning (ML) techniques in credit risk assessment, offering insights into methodologies, outcomes, and prevalent analysis techniques. Covering studies from diverse regions and countries, the review focuses on AI/ML-based credit risk assessment from both consumer and corporate perspectives. Employing the PRISMA framework, the Antecedents, Decisions, and Outcomes (ADO) framework, and stringent inclusion criteria, the review analyses geographic focus, methodologies, results, and analytical techniques. It examines a wide array of datasets and approaches, from traditional statistical methods to advanced AI/ML and deep learning techniques, emphasizing their impact on improving lending practices and ensuring fairness for borrowers. The discussion section critically evaluates the contributions and limitations of existing research papers. This review highlights the international scope of research in this field, with contributions from various countries providing diverse perspectives, and it enhances understanding of the evolving landscape of credit risk assessment, offering valuable insights into the application, challenges, and opportunities of AI and ML in this critical financial domain. By comparing its findings with those of existing survey papers, the review identifies novel insights and contributions, making it a valuable resource for researchers, practitioners, and policymakers in the financial industry.
Photovoltaic systems have attracted significant attention in energy research, driven by recent machine learning approaches to addressing technical failures and the energy crisis. Precise power-production analysis is used for fault identification and detection. Detecting faults in photovoltaic systems poses a considerable challenge, as the fault type and location must be determined rapidly and economically while ensuring continuous system operation. An effective fault detection system is therefore necessary to mitigate damage caused by faulty photovoltaic devices and protect the system against possible losses. The contribution of this study is twofold: first, the paper surveys the categories of photovoltaic system faults reported in the literature, including line-to-line, degradation, partial shading, open/closed-circuit, and bypass diode faults, and explores fault detection approaches with particular emphasis on detecting intricate faults left unexplored by earlier work; second, VOSviewer software is used to assess and review the utilization of machine learning within the solar photovoltaic sector. To this end, 2258 articles retrieved from Scopus, Google Scholar, and ScienceDirect were examined across different machine learning and energy-related keywords, covering publications from 1990 to the most recent research papers as of 14 January 2025. The results emphasise the efficiency of the established methods, which attain fault detection accuracies of over 98%. It is also observed that, considering their simplicity and accuracy, artificial neural networks are the most promising technique for photovoltaic system fault detection. An extensive application of machine learning to solar photovoltaic systems could thus offer a faster route to sustainable energy production.
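The artificial neural networks highlighted in the abstract above can be illustrated with a minimal sketch: a tiny one-hidden-layer network, written from scratch in Python, trained to separate synthetic "normal" and "line-to-line fault" operating points described by two illustrative features (normalized string voltage and current). The data, features, and architecture here are assumptions chosen for illustration only, not the model or dataset of any study covered by the review.

```python
import math
import random

random.seed(0)

def make_samples(n, v_mu, i_mu, label):
    # Draw n noisy (voltage, current) points around a nominal operating point.
    return [((random.gauss(v_mu, 0.03), random.gauss(i_mu, 0.03)), label)
            for _ in range(n)]

# Normal operation: V ~ 1.0, I ~ 1.0; line-to-line fault: voltage sags, current spikes.
data = make_samples(20, 1.0, 1.0, 0) + make_samples(20, 0.6, 1.4, 1)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

# One hidden layer with 3 sigmoid units, small random initial weights.
H = 3
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

# Plain stochastic gradient descent on binary cross-entropy.
lr = 0.5
for _ in range(300):
    for x, t in data:
        h, y = forward(x)
        d_out = y - t  # gradient of the loss w.r.t. the output pre-activation
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            for k in range(2):
                w1[j][k] -= lr * d_h * x[k]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

def predict(x):
    return 1 if forward(x)[1] > 0.5 else 0

accuracy = sum(predict(x) == t for x, t in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

A real deployment would of course use measured I-V data and richer features (irradiance, temperature, string-level mismatches), but the sketch shows the core of the technique the surveyed studies rely on: a small feed-forward network learning a non-linear boundary between fault signatures.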
The fast-growing field of nanotheranostics is revolutionizing cancer treatment by allowing for precise diagnosis and targeted therapy at the cellular and molecular levels. These nanoscale platforms provide considerable benefits in oncology, including improved diagnostic and therapeutic specificity, lower systemic toxicity, and real-time monitoring of therapeutic outcomes. However, nanoparticles' complicated interactions with biological systems, notably the immune system, present significant obstacles to clinical translation. While certain nanoparticles can elicit favorable anti-tumor immune responses, others cause immunotoxicity, including complement activation-related pseudoallergy (CARPA), cytokine storms, chronic inflammation, and organ damage. Traditional toxicity evaluation approaches are frequently time-consuming, expensive, and insufficient to capture these intricate nanoparticle-biological interactions. Artificial intelligence (AI) and machine learning (ML) have emerged as transformational solutions to these problems. This paper summarizes current achievements in nanotheranostics for cancer, delves into the causes of nanoparticle-induced immunotoxicity, and demonstrates how AI/ML may help anticipate immunotoxicity and guide the design of safer nanoparticles. Integrating AI/ML with modern computational approaches allows for the detection of potentially dangerous nanoparticle qualities, guides the optimization of physicochemical features, and speeds up the development of immune-compatible nanotheranostics suited to individual patients. The combination of nanotechnology with AI/ML has the potential to fully realize the therapeutic promise of nanotheranostics while assuring patient safety in the age of precision medicine.
Fog computing (FC) has emerged as a modern distributed technology that overcomes several of the issues cloud computing faces and provides many services. It brings computation and data storage closer to data sources such as sensors, cameras, and mobile devices. The fog computing paradigm is instrumental in scenarios where low latency, real-time processing, and high bandwidth are critical, such as smart cities, industrial IoT, and autonomous vehicles. However, the distributed nature of fog computing introduces complexities in managing and predicting the execution time of tasks across heterogeneous devices with varying computational capabilities. Neural network models have demonstrated exceptional capability in prediction tasks because of their capacity to extract insightful patterns from data. By using multiple layers of linked nodes, neural networks can capture non-linear interactions and provide precise predictions in various fields. In addition, choosing the right inputs is essential for forecasting the correct value, since neural network models rely on the data fed into the network to make predictions. Based on the predicted value, the scheduler can choose the appropriate resource and schedule for practical resource usage and a decreased makespan. In this paper, we propose a neural network model for predicting task execution time in fog computing, with candidate inputs assessed using the Interpretive Structural Modeling (ISM) technique. The proposed model achieved a 23.9% reduction in mean relative error (MRE) compared to state-of-the-art methods.
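The evaluation metric cited in the abstract above, mean relative error (MRE), is a standard measure and can be sketched in a few lines. The predicted and actual execution times below are made-up values for illustration, not results from the paper.

```python
def mean_relative_error(actual, predicted):
    """MRE = (1/n) * sum(|predicted_i - actual_i| / actual_i)."""
    if len(actual) != len(predicted) or not actual:
        raise ValueError("sequences must be non-empty and of equal length")
    return sum(abs(p - a) / a for a, p in zip(actual, predicted)) / len(actual)

# Illustrative task execution times (e.g., in milliseconds).
actual_times = [10.0, 20.0, 40.0]
predicted_times = [11.0, 18.0, 44.0]

mre = mean_relative_error(actual_times, predicted_times)
print(f"MRE = {mre:.3f}")  # each prediction is off by 10%, so MRE = 0.100
```

A 23.9% reduction in MRE, as reported, would mean the proposed model's MRE is 0.761 times that of the baseline it is compared against.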
Copyright © by EnPress Publisher. All rights reserved.