Accurate drug-drug interaction (DDI) prediction is essential to prevent adverse effects, especially with the increased use of multiple medications during the COVID-19 pandemic. Traditional machine learning methods often miss the complex relationships necessary for effective DDI prediction. This study introduces a deep learning-based classification framework to assess adverse effects from interactions between Fluvoxamine and Curcumin. Our model integrates a wide range of drug-related data (e.g., molecular structures, targets, side effects) and synthesizes them into high-level features through a specialized deep neural network (DNN). This approach significantly outperforms traditional classifiers in accuracy, precision, recall, and F1-score. Additionally, our framework enables real-time DDI monitoring, which is particularly valuable in COVID-19 patient care. The model’s success in accurately predicting adverse effects demonstrates the potential of deep learning to enhance drug safety and support personalized medicine, paving the way for safer, data-driven treatment strategies.
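The pipeline described above — combining heterogeneous drug features and passing them through a deep neural network classifier — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the feature sizes, network shape, and labels are all assumptions.

```python
# Sketch of a DDI adverse-effect classifier, assuming each drug pair is
# represented by a concatenated feature vector (e.g. molecular fingerprints,
# target profiles, side-effect flags). All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
n_pairs, n_features = 400, 64                  # hypothetical sizes
X = rng.normal(size=(n_pairs, n_features))
# Synthetic label: adverse interaction driven by a few latent features
y = (X[:, :4].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
dnn.fit(X_tr, y_tr)
pred = dnn.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred)
print(f"accuracy={acc:.2f} f1={f1:.2f}")
```

In practice the feature vector would come from real molecular descriptors and side-effect databases rather than random draws, and the same split would also report precision and recall.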
This study aimed to determine the socio-economic poverty status of rural households using survey data on household expenditure and income. Machine learning-based classification and clustering models were shown to reveal similarities in poverty characteristics. Efforts to address poverty typically involve comprehensive strategies aimed at improving socio-economic conditions in the affected areas. This research focuses on the combined application of machine learning classification and clustering techniques to analyze poverty, investigating whether integrating the two types of algorithms can improve the accuracy of poverty analysis by identifying distinct poverty classes or clusters from multidimensional indicators. The results demonstrated the strength of machine learning in mapping rural poverty, making the approach suitable for adoption in both the private sector and government. Applying these techniques effectively requires access to relevant and reliable data; sources may include household surveys, census data, administrative records, satellite imagery, and other socio-economic indicators. Classification and clustering analyses serve as a decision-support tool for understanding the poverty data of each village and for profiling poverty clusters in the community in terms of the significant socio-economic indicators present in the data. Based on the analysis of existing poverty indicators, village clusters are grouped into high, moderate, and low poverty levels. Machine learning can thus be a valuable tool for analyzing and understanding poverty, classifying individuals or households into poverty categories and identifying patterns and clusters of poverty.
These insights can inform targeted interventions, policy decisions, and resource allocation for poverty reduction programs.
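The clustering step described above — grouping villages into high, moderate, and low poverty levels from multidimensional indicators — can be sketched with k-means. The indicators and values below are synthetic stand-ins, not the study's survey data.

```python
# Minimal sketch: cluster villages into three poverty levels, then name the
# clusters by ranking their mean income. All indicator values are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical indicators: income, expenditure, dependency ratio
villages = np.vstack([
    rng.normal([2.0, 1.8, 0.6], 0.2, size=(30, 3)),   # better-off villages
    rng.normal([1.2, 1.1, 1.0], 0.2, size=(30, 3)),   # middle group
    rng.normal([0.5, 0.4, 1.5], 0.2, size=(30, 3)),   # worst-off villages
])
X = StandardScaler().fit_transform(villages)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Rank clusters by mean income so labels read as poverty levels:
# lowest income -> "high" poverty, highest income -> "low" poverty
order = np.argsort(km.cluster_centers_[:, 0])          # ascending income
names = dict(zip(order, ["high", "moderate", "low"]))
labels = [names[c] for c in km.labels_]
print(labels[0], labels[-1])
```

A classification model trained on the resulting cluster labels could then assign new villages to a poverty level, which is the combined classification-plus-clustering workflow the abstract describes.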
Falling is one of the most critical outcomes of loss of consciousness during triage in the emergency department (ED); it is an important sign that requires immediate medical intervention. This paper presents a computer vision-based fall detection model for the ED. In this study, we hypothesize that the proposed vision-based triage fall detection model provides accuracy equal to that of the traditional triage system (TTS) conducted by the nursing team. To build the proposed model, we use MoveNet, a pose estimation model that identifies 17 keypoints, including joints relevant to falls. To test the hypothesis, we conducted two experiments: in the deep learning (DL) model, the complete feature set of all 17 keypoints was passed to a triage fall detection model built with an Artificial Neural Network (ANN); in the second model, the Feature-Reduction for Fall (FRF) model, Random Forest (RF) feature-selection analysis filters the keypoints before they reach the triage fall classifier. We tested the performance of the two models on a dataset of real-world images labeled into two classes, Fall and Not fall, split 80% for training and 20% for validation. The models in these experiments were trained and their results compared with the reference model. To test the effectiveness of the model, a t-test was performed to evaluate the null hypothesis for both experiments. The results show that FRF outperforms the DL model and matches the accuracy of the TTS.
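The FRF pipeline described above — Random Forest feature selection over the 17 MoveNet keypoints followed by an ANN fall classifier — can be sketched as follows. The keypoint data here is synthetic (17 keypoints as x, y pairs, giving 34 features), and the number of retained features is an assumption.

```python
# Hedged sketch of the FRF pipeline: RF importances pick a subset of the
# 34 keypoint coordinates, which then feed a small ANN fall classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 34))                 # 17 keypoints * (x, y)
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # fall depends on a few joints

# 80/20 split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[-8:]     # keep 8 strongest features

ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=1)
ann.fit(X_tr[:, top], y_tr)
acc = ann.score(X_te[:, top], y_te)
print(f"FRF accuracy={acc:.2f}")
```

In the real system the 34 features would come from MoveNet inference on ED camera frames, and the fall/not-fall labels from annotated triage footage.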
Credit policies for clean and renewable energy businesses play a crucial role in supporting carbon neutrality efforts to combat climate change. Clustering the credit capacity of these companies to prioritize lending is essential given the limited capital available. Support Vector Machine (SVM) and Artificial Neural Network (ANN) are two robust machine learning algorithms for addressing complex clustering problems. Additionally, hyperparameter selection within these models is effectively enhanced by a robust heuristic optimization algorithm, Particle Swarm Optimization (PSO). To leverage the strength of these advanced machine learning techniques, this paper develops SVM and ANN models, optimized with PSO, for clustering green credit capacity in the renewable energy industry. The results show low Mean Square Error (MSE) values for both models, indicating high clustering accuracy. The credit capabilities of wind energy, clean fuel, and biomass pellet companies are illustrated in quadrant charts, providing stakeholders with a clear view to adjust their credit strategies. This helps ensure the efficient operation of banking green credit policies.
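The PSO-driven hyperparameter selection described above can be sketched as a particle swarm searching over an SVM's C and gamma to minimize cross-validated MSE. This is an illustrative toy, assuming a search in log10 space with generic PSO settings; the data, objective, and bounds are not the paper's.

```python
# Illustrative PSO tuning of SVR hyperparameters (log10 C, log10 gamma),
# minimizing cross-validated MSE on synthetic data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(120, 4))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.05, 120)

def mse(params):                         # objective: 3-fold CV mean squared error
    C, gamma = 10 ** params[0], 10 ** params[1]
    return -cross_val_score(SVR(C=C, gamma=gamma), X, y, cv=3,
                            scoring="neg_mean_squared_error").mean()

# Basic particle swarm over the 2-D hyperparameter space
n, iters, w, c1, c2 = 12, 15, 0.7, 1.5, 1.5
lo, hi = np.array([-1, -3]), np.array([3, 1])        # log10 bounds
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

best_mse = pbest_f.min()
print(f"best (log10 C, log10 gamma) = {gbest}, MSE = {best_mse:.4f}")
```

The same swarm loop applies unchanged to an ANN's hyperparameters (e.g. layer sizes, regularization) by swapping the model inside the objective function.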
The goal of this work was to create and assess machine-learning models for estimating the risk of budget overruns in development projects. Finding the best model for risk forecasting required evaluating the performance of several models on a dataset of 177 projects, taking into account variables such as environmental risks, employee skill level, safety incidents, and project complexity. In our experiments, we analyzed the application of different machine learning models to assess risk for the management decision policies of development organizations. The performance of the chosen model, a Neural Network (MLP), improved after the tuning process, which increased the Test R2 from −0.37686 before tuning to 0.195637 after tuning. The Support Vector Machine (SVM), Ridge Regression, Lasso Regression, and Random Forest (Tuned) models did not improve when their Test R2 values are compared across experiments. No changes in Test R2 were observed for GBM and XGBoost, which retained the same Test R2 across different tuning attempts. A Stacking Regressor was used only during the hyperparameter tuning phase and achieved a Test R2 of 0.022219. The Decision Tree was the worst model throughout the experiments, with no sign of improvement in its Test R2, which remained −1.4669 in all experiments. These results indicate that although models such as the Neural Network (MLP) benefit from hyperparameter tuning, most models show only minimal improvements. This work highlights weaknesses in specific types of models and identifies areas where additional work can be expected to deliver incremental benefits to the structured process of risk assessment in organizational policies.
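The before/after tuning comparison described above — a baseline MLP regressor versus a tuned one, judged by held-out Test R² — can be sketched as follows. The project-risk features, dataset values, and search grid are synthetic assumptions; only the 177-project sample size is taken from the abstract.

```python
# Sketch: compare a baseline MLP regressor with a grid-tuned one on Test R^2.
# Features stand in for risk indicators (skill level, incidents, complexity...).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split, GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(177, 5))                  # synthetic risk indicators
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.1]) + rng.normal(0, 0.3, 177)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)

# Untuned baseline: deliberately few iterations, as a "before" reference
base = MLPRegressor(hidden_layer_sizes=(8,), max_iter=50, random_state=3)
base.fit(X_tr, y_tr)
r2_before = base.score(X_te, y_te)

# Tuned model: grid search over architecture and regularization
grid = GridSearchCV(
    MLPRegressor(max_iter=2000, random_state=3),
    {"hidden_layer_sizes": [(8,), (32, 16)], "alpha": [1e-4, 1e-2]},
    cv=3, scoring="r2",
)
grid.fit(X_tr, y_tr)
r2_after = grid.best_estimator_.score(X_te, y_te)
print(f"Test R2 before={r2_before:.3f} after={r2_after:.3f}")
```

Repeating this loop per model family (SVM, Ridge, Lasso, tree ensembles) and tabulating the Test R² values gives the kind of cross-model comparison the abstract reports.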
Copyright © by EnPress Publisher. All rights reserved.