Cartography comprises two major tasks, map making and map application, both of which are inextricably linked to artificial intelligence. Having passed through the symbolist stage of intelligent representation embodied in cartographic expert systems, and the behaviorist stage of spatial optimization and decision making, cartography now faces the connectionist stage: combining with deep learning to raise its level of intelligence. This paper discusses three questions raised by the proposition of "deep learning + cartography". The first is the consistency between deep learning methods and map-space problem-solving strategies: the properties of gradient descent, local correlation, feature reduction, and non-linearity answer the feasibility of the combination. The second is an analysis, from cartography's distinctive disciplinary characteristics and technical environment, of the challenges the combination faces: the non-standard organization of map data, the professional expertise required to build training samples, the integration of geometric and geographical features, and the map's inherent spatial scale. The third is a discussion of the entry points and concrete methods for bringing map making and map application, respectively, into deep learning.
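The gradient-descent property appealed to in the first question can be illustrated with a minimal sketch. Everything below (the linear model, data, learning rate, and iteration count) is invented for illustration and does not come from the paper; it only shows the iterative update that deep learning methods share:

```python
import numpy as np

# Minimal gradient-descent sketch: fit y = w*x + b to noisy points
# by repeatedly stepping against the gradient of the mean squared error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)  # true w = 3.0, b = 0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)   # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)         # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b
# w and b now approximate the true slope 3.0 and intercept 0.5
```

The same update rule, applied layer by layer through backpropagation, is what trains the deep networks discussed throughout this collection.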
Retinal disorders such as diabetic retinopathy, glaucoma, macular edema, and vein occlusions are significant contributors to global vision impairment. These conditions frequently remain symptomless until patients suffer severe vision deterioration, underscoring the critical importance of early diagnosis. Fundus images are a valuable resource for identifying the initial indicators of these ailments, particularly through examination of retinal blood vessel characteristics such as length, width, tortuosity, and branching patterns. Traditionally, healthcare practitioners have relied on manual retinal vessel segmentation, a time-consuming and intricate process that demands specialized expertise and whose precision and consistency depend heavily on the availability of highly skilled professionals. To surmount these challenges, there is an urgent demand for an automatic and efficient method for retinal vessel segmentation and classification using computer vision techniques, which form the foundation of biomedical imaging. Numerous researchers have proposed techniques for blood vessel segmentation, broadly categorized into machine learning, filtering-based, and model-based methods. Machine learning methods classify pixels as vessel or non-vessel using classifiers trained on hand-annotated images; these techniques typically extract a 7-D feature vector per pixel, apply a neural network classifier, and use post-processing steps to bridge gaps and eliminate isolated pixels. Filtering-based approaches, in contrast, employ morphological operators, capitalizing on predefined structuring elements to separate objects from the background; however, this technique often treats larger blood vessels as cohesive structures.
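The morphological filtering idea above can be sketched in a few lines. The toy white top-hat below (image minus its morphological opening) keeps bright structures thinner than a 3×3 structuring element, which is the basic mechanism vessel-enhancement filters build on; the erosion/dilation implementation and the one-pixel "vessel" are illustrative assumptions, not the method of any specific paper:

```python
import numpy as np

def _neighborhood(img):
    # All nine 3x3 shifts of the edge-padded image, stacked.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def erode(img):
    # Grey erosion: minimum over the 3x3 neighborhood.
    return np.min(_neighborhood(img), axis=0)

def dilate(img):
    # Grey dilation: maximum over the 3x3 neighborhood.
    return np.max(_neighborhood(img), axis=0)

def white_tophat(img):
    # Top-hat = image minus its opening (erosion then dilation);
    # structures thinner than the 3x3 element survive, background is removed.
    return img - dilate(erode(img))

# Toy "vessel": a one-pixel-wide bright line on a dark background.
img = np.zeros((9, 9))
img[4, :] = 1.0
th = white_tophat(img)  # the thin line is preserved, background stays zero
```

Real pipelines apply the dual black top-hat (or invert the green channel first), since vessels in fundus images are darker than the background, and use larger, oriented structuring elements.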
Model-based methods leverage vessel models to identify retinal blood vessels, but they are sensitive to parameter selection, requiring careful choices to detect thin and large vessels simultaneously. Our proposed research conducts a thorough empirical evaluation of the effectiveness of automated segmentation and classification techniques for identifying eye-related diseases, particularly diabetic retinopathy and glaucoma. The evaluation involves several retinal image datasets, including DRIVE, REVIEW, STARE, HRF, and DRION. The methodologies under consideration encompass machine learning, filtering-based, and model-based approaches, with performance assessed on a range of metrics: true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false discovery rate (FDR), Matthews correlation coefficient (MCC), and accuracy (ACC). The primary objective of this research is to scrutinize, assess, and compare the design and performance of different segmentation and classification techniques, encompassing both supervised and unsupervised learning methods. To attain this objective, we will refine existing techniques and develop new ones, ensuring a more streamlined and computationally efficient approach.
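All of the evaluation metrics listed above derive from the four confusion-matrix counts. The sketch below gives their standard definitions; the example counts are invented for illustration:

```python
import math

def segmentation_metrics(tp, fp, tn, fn):
    # Standard confusion-matrix metrics for pixel-level vessel segmentation.
    tpr = tp / (tp + fn)                      # sensitivity / recall
    tnr = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                      # precision
    npv = tn / (tn + fn)
    fdr = fp / (fp + tp)                      # equals 1 - PPV
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(TPR=tpr, TNR=tnr, PPV=ppv, NPV=npv,
                FDR=fdr, ACC=acc, MCC=mcc)

# Hypothetical counts for one segmented image (vessels are the rare class).
m = segmentation_metrics(tp=80, fp=20, tn=880, fn=20)
```

MCC is especially informative here because vessel pixels are a small minority of each image, so a high ACC alone can mask poor vessel detection.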
Abrupt changes in environmental temperature, wind, and humidity can pose serious threats to human life. The Gansu marathon disaster in China highlights the importance of early warning of hypothermia caused by extremely low apparent temperature (AT). Here, a deep convolutional neural network model combined with a statistical downscaling framework is developed to forecast environmental factors 1 to 12 h in advance and to evaluate the effectiveness of deep learning for AT prediction at 1 km resolution. The experiments use temperature, wind speed, and relative humidity data from the ERA5 reanalysis, and the results show that the developed deep learning model can predict the upcoming extreme low-AT event in the Gansu marathon region several hours in advance, with better accuracy than climatological and persistence forecasting methods. The hypothermia onset time estimated by the deep learning method combined with a heat-loss model agrees well with the observation-based estimate at a 3-hour lead time. The developed deep learning forecasting method is therefore effective for short-term AT prediction and local hypothermia warnings.
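The three forecast variables combine into apparent temperature. The abstract does not state which AT formulation the study uses; one widely used variant is Steadman's formula in the form adopted by the Australian Bureau of Meteorology, sketched below as an assumption:

```python
import math

def apparent_temperature(ta_c, rh_pct, wind_ms):
    """Steadman-style apparent temperature in deg C.

    ta_c: air temperature (deg C), rh_pct: relative humidity (%),
    wind_ms: wind speed (m/s). This is one common AT formulation;
    the paper's exact formula may differ.
    """
    # Water vapour pressure (hPa) from temperature and relative humidity.
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))
    return ta_c + 0.33 * e - 0.70 * wind_ms - 4.00

# Cold, windy, humid conditions feel much colder than the air temperature,
# which is the hypothermia scenario of the Gansu marathon event.
at = apparent_temperature(ta_c=5.0, rh_pct=80.0, wind_ms=10.0)
```

The wind term dominates at high wind speeds, which is why a modest drop in air temperature plus a sudden gale can push AT far below freezing-risk thresholds.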
This study conducts a comparative analysis of various machine learning and deep learning models for predicting order quantities across supply chain tiers. The models employed include XGBoost, Random Forest, CNN-BiLSTM, Linear Regression, Support Vector Regression (SVR), K-Nearest Neighbors (KNN), Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), Bidirectional LSTM (BiLSTM), Bidirectional GRU (BiGRU), Conv1D-BiLSTM, Attention-LSTM, Transformer, and LSTM-CNN hybrid models. Experimental results show that the XGBoost, Random Forest, CNN-BiLSTM, and MLP models exhibit superior predictive performance. In particular, the XGBoost model achieves the best results across all performance metrics, attributed to its effective learning of complex data patterns and variable interactions. The KNN model also reports perfect predictions with zero error; however, such a result indicates that the data processing procedures or model validation methods require further review. Conversely, the BiLSTM, BiGRU, and Transformer models exhibit relatively lower performance. Models with moderate performance include Linear Regression, RNN, Conv1D-BiLSTM, Attention-LSTM, and the LSTM-CNN hybrid, all displaying relatively higher errors and lower coefficients of determination (R²). Overall, tree-based models (XGBoost, Random Forest) and certain deep learning models such as CNN-BiLSTM prove effective for predicting order quantities in supply chain tiers, whereas RNN-based models (BiLSTM, BiGRU) and the Transformer show relatively lower predictive power. Based on these results, we suggest prioritizing tree-based models and CNN-based deep learning models when selecting predictive models in practical applications.
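The comparison above ranks models by error and by the coefficient of determination. A minimal sketch of those regression metrics follows, with hypothetical order quantities as input; note that an exact R² of 1 with zero error on held-out data, as reported for KNN, is typically a symptom of data leakage rather than genuine skill:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    # MAE, RMSE, and coefficient of determination (R^2).
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    ss_res = np.sum(err ** 2)                          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

# Hypothetical order quantities for one supply chain tier and a forecast.
orders = [100, 120, 90, 110, 130]
forecast = [105, 115, 95, 108, 128]
mae, rmse, r2 = regression_metrics(orders, forecast)
```

Computing the metrics only on a chronologically held-out test window (never on rows the model saw during training) is the guard that would expose a spurious "perfect" score.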
Copyright © by EnPress Publisher. All rights reserved.