Retinal disorders such as diabetic retinopathy, glaucoma, macular edema, and vein occlusions are significant contributors to global vision impairment. These conditions frequently remain symptomless until patients suffer severe vision deterioration, underscoring the critical importance of early diagnosis. Fundus images are a valuable resource for identifying the initial indicators of these ailments, particularly through examination of retinal blood-vessel characteristics such as length, width, tortuosity, and branching patterns. Traditionally, healthcare practitioners rely on manual retinal vessel segmentation, a process that is time-consuming, intricate, and demanding of specialized expertise; its precision and consistency therefore depend heavily on the availability of highly skilled professionals. To overcome these challenges, there is an urgent need for an automatic and efficient method for retinal vessel segmentation and classification based on computer vision techniques, which underpin biomedical imaging. Numerous researchers have proposed techniques for blood vessel segmentation, broadly categorized into machine learning, filtering-based, and model-based methods. Machine learning methods classify pixels as vessel or non-vessel using classifiers trained on hand-annotated images; these techniques extract 7-D feature vectors and apply neural network classification, with additional post-processing steps to bridge gaps and eliminate isolated pixels. Filtering-based approaches employ morphological operators, capitalizing on predefined shapes to separate objects from the background; however, they often treat larger blood vessels as cohesive structures. Model-based methods leverage vessel models to identify retinal blood vessels but are sensitive to parameter selection, requiring careful choices to detect thin and large vessels simultaneously. Our proposed research conducts a thorough empirical evaluation of automated segmentation and classification techniques for identifying eye-related diseases, particularly diabetic retinopathy and glaucoma, using several retinal image datasets, including DRIVE, REVIEW, STARE, HRF, and DRION. The methodologies under consideration encompass machine learning, filtering-based, and model-based approaches, with performance assessed using true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false discovery rate (FDR), Matthews correlation coefficient (MCC), and accuracy (ACC). The primary objective of this research is to scrutinize, assess, and compare the design and performance of different segmentation and classification techniques, encompassing both supervised and unsupervised learning methods. To attain this objective, we will refine existing techniques and develop new ones, ensuring a more streamlined and computationally efficient approach.
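For concreteness, the sketch below (not the study's evaluation code) shows how the listed metrics can be computed from a binary ground-truth vessel mask and a binary prediction; the toy masks are placeholders.

```python
# Illustrative sketch: computing TPR, TNR, PPV, NPV, FDR, MCC, and ACC
# from a binary ground-truth vessel mask and a binary prediction.
import numpy as np

def segmentation_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Confusion-matrix-based metrics for binary segmentation masks."""
    y_true = y_true.astype(bool).ravel()
    y_pred = y_pred.astype(bool).ravel()
    tp = float(np.sum(y_true & y_pred))
    tn = float(np.sum(~y_true & ~y_pred))
    fp = float(np.sum(~y_true & y_pred))
    fn = float(np.sum(y_true & ~y_pred))
    eps = 1e-12  # guard against division by zero
    mcc_den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5 + eps
    return {
        "TPR": tp / (tp + fn + eps),          # sensitivity / recall
        "TNR": tn / (tn + fp + eps),          # specificity
        "PPV": tp / (tp + fp + eps),          # precision
        "NPV": tn / (tn + fn + eps),
        "FDR": fp / (fp + tp + eps),
        "MCC": (tp * tn - fp * fn) / mcc_den,
        "ACC": (tp + tn) / (tp + tn + fp + fn + eps),
    }

# Toy 4x4 ground-truth vessel mask and an imperfect prediction.
gt = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 0, 0]])
pred = np.array([[0, 1, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 0, 0]])
print(segmentation_metrics(gt, pred))
```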
Falling is one of the most critical outcomes of loss of consciousness during triage in the emergency department (ED). It is an important sign that requires immediate medical intervention. This paper presents a computer vision-based fall detection model for the ED. In this study, we hypothesize that the proposed vision-based triage fall detection model provides accuracy equal to that of the traditional triage system (TTS) conducted by the nursing team. To build the proposed model, we use MoveNet, a pose estimation model that identifies 17 keypoints, including the joints relevant to falls. To test the hypothesis, we conducted two experiments. In the first, a deep learning (DL) model, the complete feature set of 17 keypoints was passed to a triage fall detection classifier built using an artificial neural network (ANN). In the second, the Feature-Reduction for Fall (FRF) model, we applied dimensionality reduction through Random Forest (RF) feature-selection analysis to filter the keypoints fed to the triage fall classifier. We tested the performance of the two models on a dataset of real-world images classified into two classes, Fall and Not fall, split into 80% for training and 20% for validation. Both models were trained and their results compared with the reference model. To test the effectiveness of the models, a t-test was performed to evaluate the null hypothesis for both experiments. The results show that FRF outperforms the DL model and that FRF achieves the same accuracy as TTS.
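As an illustration of the two experimental pipelines, the sketch below assumes MoveNet keypoints have already been extracted into 17 × (x, y, score) feature vectors; the synthetic data, network sizes, and number of retained features are stand-ins, not the paper's actual configuration.

```python
# Minimal sketch of the DL and FRF experiments on pre-extracted keypoint features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 17 * 3))   # placeholder MoveNet features: 17 keypoints x (x, y, score)
y = rng.integers(0, 2, size=1000)     # 1 = Fall, 0 = Not fall (synthetic labels)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

# Experiment 1 (DL): ANN trained on the complete 17-keypoint feature set.
ann_full = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
ann_full.fit(X_tr, y_tr)
print("DL (all keypoints) accuracy:", ann_full.score(X_va, y_va))

# Experiment 2 (FRF): Random Forest feature-importance ranking, then an ANN
# trained only on the top-ranked keypoint features.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top_idx = np.argsort(rf.feature_importances_)[::-1][:20]  # keep 20 strongest features (assumed cutoff)
ann_frf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
ann_frf.fit(X_tr[:, top_idx], y_tr)
print("FRF (reduced keypoints) accuracy:", ann_frf.score(X_va[:, top_idx], y_va))
```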
Cartography comprises two major tasks, map making and map application, both of which are inextricably linked to artificial intelligence technology. Having passed through the intelligent expression of symbolism in cartographic expert systems and the spatial optimization decision-making of behaviorism, cartography now faces the integration of deep learning under connectionism to raise its level of intelligence. This paper discusses three questions raised by the proposition of “deep learning + cartography”. The first is the consistency between deep learning methods and map-space problem-solving strategies: their shared reliance on gradient descent, local correlation, feature reduction, and non-linearity answers the feasibility of the “deep learning + cartography” combination. The second analyzes the challenges this combination faces, arising from cartography's unique disciplinary characteristics and technical environment: the non-standard organization of map data, the professional expertise required to establish samples, the integration of geometric and geographical features, and the inherent spatial scale of maps. The third discusses, respectively, the entry points and specific methods for integrating map making and map application with deep learning.
Recognizing the discipline category of an abstract is of great significance for automatic text recommendation and knowledge mining. This study therefore collected abstracts of social science and natural science publications from the Web of Science (2010-2020) and constructed discipline classification models using the machine learning model SVM and the deep learning models TextCNN and SCI-BERT. The SCI-BERT model performed best, with precision, recall, and F1 of 86.54%, 86.89%, and 86.71%, respectively; its F1 was 6.61% and 4.05% higher than that of SVM and TextCNN, respectively. The constructed model can effectively identify the discipline categories of abstracts and provide effective support for automatic subject indexing.
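A rough sketch of the SVM baseline is shown below (the TextCNN and SCI-BERT models would additionally require a deep learning framework and pretrained weights); the toy abstracts and labels are placeholders, not the Web of Science corpus used in the study.

```python
# Sketch of a TF-IDF + linear SVM discipline classifier with precision/recall/F1 reporting.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

abstracts = [
    "We study the electronic band structure of a novel semiconductor material.",
    "Protein folding dynamics were simulated with molecular dynamics methods.",
    "This survey examines household income inequality across urban regions.",
    "We analyze voting behavior and political attitudes in recent elections.",
] * 25                                   # repeat to obtain a workable toy corpus
labels = (["natural science", "natural science",
           "social science", "social science"] * 25)

X_tr, X_te, y_tr, y_te = train_test_split(abstracts, labels, test_size=0.2,
                                          random_state=0, stratify=labels)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1), LinearSVC())
clf.fit(X_tr, y_tr)
p, r, f1, _ = precision_recall_fscore_support(y_te, clf.predict(X_te),
                                              average="macro", zero_division=0)
print(f"precision={p:.4f} recall={r:.4f} F1={f1:.4f}")
```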
To save patients’ lives, early diagnosis of intracranial hemorrhage (ICH) is essential. The most widely used method for diagnosing ICH is non-contrast computed tomography (NCCT), owing to its fast acquisition and availability in emergency medical facilities. Estimating the volume of an intracranial hemorrhage is important for predicting hematoma progression and mortality. Radiologists can manually delineate the ICH region to estimate hematoma volume, but this process is time-consuming and subject to inter-rater variability. In this paper, we develop and discuss a two-stage approach consisting of a coarse and a fine model for intracranial hemorrhage segmentation. In the first stage, a 2D DenseNet is trained for coarse segmentation, and its coarse segmentation mask is cascaded into the fine segmentation model along with the input training samples. In the second, fine stage, an nnUNet model is trained using the coarse model's segmentation labels together with the true labels for intracranial hemorrhage segmentation. The cascaded approach yields a well-performing solution for intracranial hemorrhage segmentation.
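The cascade can be illustrated as follows, assuming PyTorch; the tiny convolutional networks stand in for the 2D DenseNet (coarse) and nnUNet (fine) models, and the point is only how the coarse mask is concatenated to the CT input as an extra channel before the fine stage.

```python
# Sketch of the coarse-to-fine cascade with placeholder networks.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Placeholder segmentation network: in_ch input channels, 1 probability map out."""
    def __init__(self, in_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

coarse_model = TinySegNet(in_ch=1)      # stands in for the 2D DenseNet
fine_model = TinySegNet(in_ch=2)        # stands in for nnUNet: CT slice + coarse mask

ct_slice = torch.randn(1, 1, 128, 128)  # one NCCT slice (batch, channel, H, W)
with torch.no_grad():
    coarse_mask = coarse_model(ct_slice)                     # stage 1: coarse ICH mask
    fine_input = torch.cat([ct_slice, coarse_mask], dim=1)   # cascade the coarse mask
    fine_mask = fine_model(fine_input)                       # stage 2: refined ICH mask
print(fine_mask.shape)                  # torch.Size([1, 1, 128, 128])
```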
Monitoring marine biodiversity is a challenge in some vulnerable and difficult-to-access habitats, such as underwater caves. Underwater caves are biodiversity hotspots, concentrating a large number of species in their environment. However, most of the sessile species living on the rocky walls are very vulnerable and are often threatened by different pressures; the use of these spaces as a destination for recreational divers can impact the benthic habitat. In this work, we propose a methodology based on video recordings of cave walls and image analysis with deep learning algorithms to estimate the spatial density of structuring species in a study area. We combine automatic frame-overlap detection, estimation of the actual extent of surface cover, and semantic segmentation of the 10 main coral and sponge species to obtain species density maps, which can serve as the data source for monitoring biodiversity over time. In this paper, we analyzed the performance of three semantic segmentation algorithms and backbones for this task and found that the Mask R-CNN model with the Xception101 backbone achieves the best results, with an average segmentation accuracy of 82%.
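The density-map step can be sketched as follows (this is not the authors' code): given a per-pixel species mask for a cave-wall mosaic and the real-world area covered by each pixel, coverage per species is accumulated; the class labels and the area-per-pixel factor are placeholder assumptions.

```python
# Illustrative coverage/density estimation from a semantic segmentation mask.
import numpy as np

SPECIES = {1: "coral_sp_A", 2: "sponge_sp_B", 3: "coral_sp_C"}  # hypothetical class labels
CM2_PER_PIXEL = 0.04  # assumed real-world area covered by one pixel

# Placeholder per-pixel class mask (0 = background) standing in for the model output.
rng = np.random.default_rng(0)
mask = rng.integers(0, 4, size=(480, 640))

total_area_cm2 = mask.size * CM2_PER_PIXEL
for class_id, name in SPECIES.items():
    covered_cm2 = np.sum(mask == class_id) * CM2_PER_PIXEL
    print(f"{name}: {covered_cm2:.1f} cm^2 "
          f"({100 * covered_cm2 / total_area_cm2:.1f}% of surveyed surface)")
```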