Falling is one of the most critical consequences of loss of consciousness during triage in the emergency department (ED): it is an important sign that requires immediate medical intervention. This paper presents a computer vision-based fall detection model for ED triage. We hypothesize that the proposed vision-based triage fall detection model achieves accuracy equal to that of the traditional triage system (TTS) conducted by the nursing team. To build the proposed model, we use MoveNet, a pose estimation model that identifies 17 body keypoints related to falls. To test the hypothesis, we conducted two experiments: in the deep learning (DL) model, the complete feature set of 17 keypoints was passed to a triage fall detection model built with an artificial neural network (ANN); in the second, the Feature-Reduction for Fall (FRF) model, Random Forest (RF) feature-selection analysis was used to filter the keypoints fed to the triage fall classifier. We evaluated both models on a dataset of real-world images labeled into two classes, Fall and Not fall, split 80% for training and 20% for validation. The models were trained and their results compared with the reference model. To assess effectiveness, a t-test was performed to evaluate the null hypothesis in both experiments. The results show that FRF outperforms the DL model and matches the accuracy of the TTS.
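The FRF step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the keypoint data here is synthetic (MoveNet would normally supply the 17 (x, y) keypoints per frame), and the number of retained features is an assumed choice.

```python
# Hypothetical sketch of RF-based feature selection over pose keypoints.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_keypoints = 400, 17
X = rng.normal(size=(n_samples, n_keypoints * 2))   # 17 keypoints, (x, y) each
y = rng.integers(0, 2, size=n_samples)              # 0 = Not fall, 1 = Fall
# Make a few coordinates genuinely predictive (e.g. hip/knee height drops in a fall)
X[:, 0] += 2.0 * y
X[:, 5] -= 1.5 * y

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:8]  # keep the 8 strongest features
X_reduced = X[:, top]                                # reduced input for the fall classifier
print(X_reduced.shape)
```

The reduced matrix would then be passed to the downstream triage fall classifier in place of the full 34-dimensional keypoint vector.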
Breast cancer is a prevalent form of cancer worldwide. Thermography, a method for diagnosing breast cancer, involves recording the thermal patterns of the breast. This article explores the use of a convolutional neural network (CNN) to extract features from a dataset of thermographic images. First, the CNN was used to extract a feature vector from each image; machine learning techniques were then applied for image classification. This study uses four classification methods, namely a fully connected neural network (FCnet), support vector machine (SVM), linear classification model (CLINEAR), and k-nearest neighbors (KNN), to classify breast cancer from thermographic images. The accuracy rates achieved by the FCnet, SVM, CLINEAR, and KNN algorithms were 94.2%, 95.0%, 95.0%, and 94.1%, respectively. The reliability parameters for these classifiers were 92.1%, 97.5%, 96.5%, and 91.2%, and their sensitivities were 95.5%, 94.1%, 90.4%, and 93.2%, respectively. These findings can assist experts in developing an expert system for breast cancer diagnosis.
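The two-stage pipeline described above (CNN feature extraction followed by a conventional classifier) can be sketched as below. The feature vectors here are synthetic stand-ins with an assumed dimensionality of 128; in the actual study a trained CNN would produce them from the thermograms.

```python
# Minimal sketch: classify CNN-style feature vectors with SVM and KNN.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, d = 300, 128                        # 128-dim feature vectors (assumed size)
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)         # 0 = healthy, 1 = suspicious thermogram
X[y == 1, :4] += 2.5                   # make the synthetic classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
svm = SVC(kernel="linear").fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"SVM acc: {svm.score(X_te, y_te):.2f}, KNN acc: {knn.score(X_te, y_te):.2f}")
```

The same extracted features can be fed to each classifier in turn, which is what makes the head-to-head accuracy comparison in the abstract possible.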
In this study, we utilized a convolutional neural network (CNN) trained on microscopic images of the SARS-CoV-2 virus, the protozoan parasite Plasmodium falciparum (which causes malaria in humans), the bacterium Vibrio cholerae (which causes cholera), and non-infected samples (healthy persons) to classify pathogens and predict epidemics. The findings showed promising results in both classification and prediction tasks. We quantitatively compared the CNN results with those obtained using a support vector machine. Notably, prediction accuracy reached 97.5% with the convolutional neural network.
Recognizing the discipline category of an abstract is of great significance for automatic text recommendation and knowledge mining. This study therefore collected abstracts of social science and natural science articles from the Web of Science (2010-2020) and constructed discipline classification models using the machine learning model SVM and the deep learning models TextCNN and SCI-BERT. The SCI-BERT model performed best: its precision, recall, and F1 were 86.54%, 86.89%, and 86.71%, respectively, and its F1 was 6.61% and 4.05% higher than those of SVM and TextCNN. This model can effectively identify the discipline categories of abstracts and provide effective support for automatic subject indexing.
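A toy version of the SVM baseline from this comparison is sketched below (SCI-BERT itself requires the transformers library and pretrained weights, which are not reproduced here). The abstract snippets and labels are invented for illustration only.

```python
# Hypothetical TF-IDF + linear SVM baseline for discipline classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

abstracts = [
    "survey data on household income inequality and social mobility",
    "ethnographic study of urban migration and community identity",
    "policy analysis of education reform and labor market outcomes",
    "quantum entanglement measurements in superconducting qubits",
    "protein folding dynamics simulated with molecular models",
    "spectroscopic observation of exoplanet atmospheres",
]
labels = ["social", "social", "social", "natural", "natural", "natural"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(abstracts, labels)
print(clf.predict(["molecular simulation of protein structures"]))
```

A BERT-style model replaces the TF-IDF bag-of-words with contextual embeddings, which is the main source of the F1 gain reported above.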
In agriculture, crop yield and quality are critical for the global food supply and human survival. Challenges such as plant leaf diseases call for a fast, automatic, economical, and accurate identification method. This paper employs deep learning, transfer learning, and dedicated feature learning modules (CBAM, Inception-ResNet), chosen for their strong performance in image processing and classification. A ResNet model pretrained on ImageNet serves as the backbone, into which the feature learning modules are introduced to form our IRCResNet model. Experimental results show that the model achieves an average prediction accuracy of 96.8574% on public datasets, validating our approach and significantly enhancing plant leaf disease identification.
Copyright © by EnPress Publisher. All rights reserved.