The fast-growing field of nanotheranostics is revolutionizing cancer treatment by allowing for precise diagnosis and targeted therapy at the cellular and molecular levels. These nanoscale platforms provide considerable benefits in oncology, including improved diagnostic and therapeutic specificity, lower systemic toxicity, and real-time monitoring of therapeutic outcomes. However, the complex interactions of nanoparticles with biological systems, notably the immune system, present significant obstacles to clinical translation. While certain nanoparticles can elicit favorable anti-tumor immune responses, others cause immunotoxicity, including complement activation-related pseudoallergy (CARPA), cytokine storms, chronic inflammation, and organ damage. Traditional toxicity evaluation approaches are frequently time-consuming, expensive, and insufficient to capture these intricate nanoparticle-biological interactions. Artificial intelligence (AI) and machine learning (ML) have emerged as transformative solutions to these problems. This paper summarizes current achievements in nanotheranostics for cancer, delves into the causes of nanoparticle-induced immunotoxicity, and demonstrates how AI/ML may help predict immunotoxicity and guide the design of safer nanoparticles. Integrating AI/ML with modern computational approaches allows for the detection of potentially hazardous nanoparticle properties, guides the optimization of physicochemical features, and speeds up the development of immune-compatible nanotheranostics suited to individual patients. The combination of nanotechnology with AI/ML has the potential to fully realize the therapeutic promise of nanotheranostics while assuring patient safety in the age of precision medicine.
The expanding adoption of artificial intelligence systems across high-impact sectors has catalyzed concerns regarding inherent biases and discrimination, leading to calls for greater transparency and accountability. Algorithm auditing has emerged as a pivotal method to assess fairness and mitigate risks in applied machine learning models. This systematic literature review comprehensively analyzes contemporary techniques for auditing the biases of black-box AI systems beyond traditional software testing approaches. An extensive search across technology, law, and social sciences publications identified 22 recent studies exemplifying innovations in quantitative benchmarking, model inspections, adversarial evaluations, and participatory engagements situated in applied contexts like clinical predictions, lending decisions, and employment screenings. A rigorous analytical lens spotlighted considerable limitations in current approaches, including predominant technical orientations divorced from lived realities, lack of transparent value deliberations, overwhelming reliance on one-shot assessments, scarce participation of affected communities, and limited corrective actions instituted in response to audits. At the same time, directions like subsidiarity analyses, human-cent
In this paper, we assess the results of experiments with different machine learning algorithms for data classification on the basis of accuracy, precision, recall, and F1-score metrics. The Neural Network model produced both the highest accuracy (0.129526) and the highest F1-score (0.118785), indicating a balance of precision and recall that captures important patterns in the dataset. Random Forest followed closely, with an accuracy of 0.128119 and the highest precision (0.118553), demonstrating a strong ability to handle relationships in a large dataset, though with slightly lower recall than the Neural Network. The Decision Tree model ranked third with an accuracy of 0.111792; its recall score showed that it predicts true positives better than the Support Vector Machine (SVM), although it frequently overpredicts the positive class. SVM ranked fourth, with an accuracy of 0.095465 and an F1-score of 0.067861, figures reflecting difficulty in separating related classes. Finally, the K-Neighbors model ranked last, with an accuracy of 0.065531 and unsatisfactory precision and recall, indicating this algorithm's problems with the classification task. We conclude that Neural Networks and Random Forests are the best algorithms for this classification task, while K-Neighbors is far inferior to the other classifiers.
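The four metrics compared in the abstract above are all derived from a confusion matrix. A minimal sketch of how they relate, using a hypothetical binary confusion matrix (the paper's multi-class scores would be averaged analogues of these):

```python
# Hedged sketch: accuracy, precision, recall, and F1 from a toy binary
# confusion matrix. The counts below are illustrative, not study data.
def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=8, fp=2, fn=4, tn=6)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
# 0.7 0.8 0.667 0.727
```

F1 is the harmonic mean of precision and recall, which is why the Neural Network's joint lead in accuracy and F1 signals a balanced classifier rather than one that trades recall for precision.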
This study aims to identify the causes of delays in public construction projects in Thailand, a developing country. Increasing construction durations lead to higher costs, making it essential to pinpoint the causes of these delays. The research analyzed 30 public construction projects that encountered delays. Delay causes were categorized into four groups: contractor-related, client-related, supervisor-related, and external factors. A questionnaire was used to survey these causes, and the Relative Importance Index (RII) method was employed to prioritize them. The findings revealed that the primary cause of delays was contractor-related financial issues, such as cash flow problems, with an RII of 0.777 and a weighted value of 84.44%. The second most significant cause was labor issues, such as a shortage of workers during the harvest season or festivals, with an RII of 0.773. Additionally, the RII rankings were compared with four machine learning methods: Decision Tree (DT), Deep Learning, Neural Network, and Naïve Bayes. The Deep Learning model proved to be the most effective baseline model, achieving a 90.79% accuracy rate in identifying contractor-related financial issues as a cause of construction delays. This was followed by the Neural Network model, which had an accuracy rate of 90.26%, and the Decision Tree model, with an accuracy rate of 85.26%. The RII values ranged from 68.68% for the Naïve Bayes model to 77.70% for the highest RII model. The research results indicate that contractor financial liquidity and costs significantly impact construction operations, which public agencies must consider. Additionally, the availability of contractor labor is crucial for the continuity of projects. The accuracy and reliability of the data obtained using advanced data mining techniques demonstrate the effectiveness of this approach.
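The Relative Importance Index used above to rank delay causes is conventionally computed as RII = ΣW / (A × N), where W are respondents' Likert ratings for a cause, A is the highest possible rating, and N is the number of respondents. A minimal sketch with hypothetical survey responses:

```python
# Hedged sketch of the Relative Importance Index (RII) ranking method:
# RII = sum(W) / (A * N). The ratings below are illustrative responses
# for a single delay cause, not the study's survey data.
def rii(ratings, highest=5):
    """Relative Importance Index on a 1..highest Likert scale."""
    return sum(ratings) / (highest * len(ratings))

sample = [5, 4, 4, 3, 5, 4]      # hypothetical ratings from 6 respondents
print(round(rii(sample), 3))     # 25 / 30 -> 0.833
```

RII lies in (0, 1], so causes can be ranked directly by their index, as done for the 0.777 (financial issues) and 0.773 (labor issues) results above.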
This can be efficiently utilized by stakeholders involved in construction projects in Thailand to enhance construction project management.
This study comprehensively evaluates system performance through thermodynamic and exergy analysis of hydrogen production by water electrolysis. Energy inputs, hydrogen and oxygen production capacities, exergy balance, and losses of the electrolyzer system were examined in detail. The study found that most of the energy losses are due to heat losses and electrochemical conversion processes. It was also observed that increased electrical input increases the production of hydrogen and oxygen, but after a certain point, the rate of efficiency increase slows down. According to the exergy analysis, the largest energy input of the system was electricity, hydrogen stood out as the main product, and oxygen and exergy losses were important factors affecting system performance. The results, in line with other studies in the literature, show that the integration of advanced materials, low-resistance electrodes, heat recovery systems, and renewable energy is critical to increasing the efficiency of electrolyzer systems and minimizing energy losses. The modeling results reveal that machine learning programs have significant potential to achieve high accuracy in electrolysis performance estimation and process monitoring. This study aims to contribute to the development of hydrogen generation technologies and to shed light on global and regional technological decision-making for sustainable energy policies.
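The exergy-balance perspective above can be illustrated with a simple efficiency estimate: the exergy delivered in the hydrogen product divided by the electrical exergy input. A hedged sketch, assuming the standard chemical exergy of H₂ (approximately 236.1 kJ/mol) and illustrative operating figures that are not taken from the study:

```python
# Hedged sketch: exergetic efficiency of an electrolyzer,
# eta_ex = (H2 molar flow * H2 chemical exergy) / electrical input.
# Since electricity is pure exergy, the denominator is just the input power.
H2_CHEM_EXERGY_KJ_PER_MOL = 236.1   # standard chemical exergy of hydrogen

def exergy_efficiency(h2_mol_per_s, electric_kw):
    exergy_out_kw = h2_mol_per_s * H2_CHEM_EXERGY_KJ_PER_MOL  # kJ/s = kW
    return exergy_out_kw / electric_kw

# e.g. a hypothetical 0.02 mol/s of H2 produced from 6 kW of electricity
print(round(exergy_efficiency(0.02, 6.0), 3))  # -> 0.787
```

The gap between this ratio and 1.0 is the exergy destroyed in heat losses and electrochemical irreversibilities, which is exactly the loss channel the analysis identifies as dominant.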
Mangrove forests are vital to coastal protection, biodiversity support, and climate regulation. In the Niger Delta, these ecosystems are increasingly threatened by oil spill incidents linked to intensive petroleum activities. This study investigates the extent of mangrove degradation between 1986 and 2022 in the lower Niger Delta, specifically the region between the San Bartolomeo and Imo Rivers, using remote sensing and machine learning. Landsat 5 TM (1986) and Landsat 8 OLI (2022) imagery were classified using the Support Vector Machine (SVM) algorithm. Classification accuracy was high, with overall accuracies of 98% (1986) and 99% (2022) and Kappa coefficients of 0.97 and 0.98. Healthy mangrove cover declined from 2804.37 km² (58%) to 2509.18 km² (52%), while degraded mangroves increased from 72.03 km² (1%) to 327.35 km² (7%), reflecting a 354.46% rise. Water bodies expanded by 101.17 km² (5.61%), potentially due to dredging, erosion, and sea-level rise. Built-up areas declined from 131.85 km² to 61.14 km², possibly reflecting socio-environmental displacement. Statistical analyses, including Chi-square (χ² = 1091.33, p < 0.001) and Kendall’s Tau (τ = 1, p < 0.001), showed strong correlations between oil spills and mangrove degradation. From 2012 to 2022, over 21,914 barrels of oil were spilled, with only 38% recovered. Although paired t-tests and ANOVA results indicated no statistically significant changes at broad scales, localized ecological shifts remain severe. These findings highlight the urgent need for integrated environmental policies and restoration efforts to mitigate mangrove loss and enhance sustainability in the Niger Delta.
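The Kappa coefficients reported above (0.97 and 0.98) measure classification agreement beyond chance. A minimal sketch of Cohen's kappa computed from a hypothetical confusion matrix (reference vs. predicted land-cover classes; the counts are illustrative, not the study's accuracy-assessment data):

```python
# Hedged sketch of Cohen's kappa: kappa = (p_o - p_e) / (1 - p_e),
# where p_o is observed agreement and p_e is chance agreement derived
# from the row and column marginals of the confusion matrix.
def cohens_kappa(matrix):
    total = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(len(matrix))) / total
    p_e = sum(
        (sum(matrix[i]) / total) * (sum(row[i] for row in matrix) / total)
        for i in range(len(matrix))
    )
    return (p_o - p_e) / (1 - p_e)

m = [[90, 5], [5, 100]]  # hypothetical reference-vs-predicted counts
print(round(cohens_kappa(m), 3))
```

Because kappa discounts agreement expected by chance, values near 0.97-0.98 indicate near-perfect classification rather than a lucky class balance, which supports using the classified maps for the change-detection statistics that follow.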
Copyright © by EnPress Publisher. All rights reserved.