Land use, as a core human–environment interaction, has sharply and continuously reshaped the global land surface. Farmland abandonment is an extreme form of marginal land use that exerts both positive and negative impacts on the living environment. To map the extent of farmland abandonment in Zhejiang Province, we used a geospatial big-data analysis platform to preprocess massive multi-source land use and land cover data and to map the extent of farmland abandonment in the study area. We then performed landscape pattern analysis and spatial autocorrelation (Moran's I) analysis using Fragstats and ArcGIS. We found that farmland accounts for about 16.32% of all land use types, an area of 1.89 × 10^4 km2, while the total area of FA is 1.72 × 10^8 m2, giving a farmland abandonment ratio of 1.65%; the area of AF is about 1.95 × 10^9 m2, with a continuous cultivation ratio of 18.69%. The landscape fragmentation, landscape aggregation, and landscape diversity of FA, AF, and FL differ, and the spatial autocorrelation of FA and AF is dominated by high aggregation and low dispersion. Finally, we compared our results with existing research, which demonstrates that our analysis is scientifically convincing. We also discuss future prospects and the limitations of this research, and draw out policy implications: building proper land use management regulation and reducing farmland abandonment under suitable land use policies.
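As a concrete illustration of the spatial autocorrelation analysis mentioned above, here is a minimal sketch of the global Moran's I statistic on a toy set of cells with binary adjacency weights; the values and weight matrix are hypothetical, not the study's data:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for values x with spatial weight matrix W.
    I = n * (z' W z) / (sum(W) * z' z), where z = x - mean(x)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()
    return n * (z @ w @ z) / (w.sum() * (z @ z))

# toy 4-cell example with binary adjacency weights (hypothetical data)
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
print(round(morans_i([1, 2, 3, 4], W), 3))
```

Positive values of I indicate spatial clustering of similar values (the "high aggregation" pattern reported for FA and AF), while negative values indicate dispersion.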
Retinal disorders, such as diabetic retinopathy, glaucoma, macular edema, and vein occlusions, are significant contributors to global vision impairment. These conditions frequently remain symptomless until patients suffer severe vision deterioration, underscoring the critical importance of early diagnosis. Fundus images serve as a valuable resource for identifying the initial indicators of these ailments, particularly by examining various characteristics of retinal blood vessels, such as their length, width, tortuosity, and branching patterns. Traditionally, healthcare practitioners often rely on manual retinal vessel segmentation, a process that is both time-consuming and intricate, demanding specialized expertise. However, this approach poses a notable challenge since its precision and consistency heavily rely on the availability of highly skilled professionals. To surmount these challenges, there is an urgent demand for an automatic and efficient method for retinal vessel segmentation and classification employing computer vision techniques, which form the foundation of biomedical imaging. Numerous researchers have put forth techniques for blood vessel segmentation, broadly categorized into machine learning, filtering-based, and model-based methods. Machine learning methods categorize pixels as either vessels or non-vessels, employing classifiers trained on hand-annotated images. Subsequently, these techniques extract features using 7D feature vectors and apply neural network classification. Additional post-processing steps are used to bridge gaps and eliminate isolated pixels. On the other hand, filtering-based approaches employ morphological operators within morphological image processing, capitalizing on predefined shapes to filter out objects from the background. However, this technique often treats larger blood vessels as cohesive structures. 
Model-based methods leverage vessel models to identify retinal blood vessels, but they are sensitive to parameter selection, necessitating careful choices to detect thin and large vessels simultaneously and effectively. Our proposed research endeavors to conduct a thorough and empirical evaluation of the effectiveness of automated segmentation and classification techniques for identifying eye-related diseases, particularly diabetic retinopathy and glaucoma. This evaluation will involve various retinal image datasets, including DRIVE, REVIEW, STARE, HRF, and DRION. The methodologies under consideration encompass machine learning, filtering-based, and model-based approaches, with performance assessed on a range of metrics, including true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false discovery rate (FDR), Matthews correlation coefficient (MCC), and accuracy (ACC). The primary objective of this research is to scrutinize, assess, and compare the design and performance of different segmentation and classification techniques, encompassing both supervised and unsupervised learning methods. To attain this objective, we will refine existing techniques and develop new ones, ensuring a more streamlined and computationally efficient approach.
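The evaluation metrics listed above all derive from the pixel-level binary confusion matrix; here is a minimal sketch using hypothetical pixel counts, not results from any of the cited datasets:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Pixel-level metrics from a binary (vessel / non-vessel) confusion matrix."""
    tpr = tp / (tp + fn)                      # sensitivity / recall
    tnr = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                      # precision
    npv = tn / (tn + fn)
    fdr = fp / (fp + tp)                      # = 1 - PPV
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc = ((tp * tn - fp * fn) /
           ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    return dict(TPR=tpr, TNR=tnr, PPV=ppv, NPV=npv, FDR=fdr, MCC=mcc, ACC=acc)

# hypothetical pixel counts for one segmented fundus image
m = segmentation_metrics(tp=800, fp=100, tn=9000, fn=200)
print({k: round(v, 3) for k, v in m.items()})
```

MCC is often preferred alongside ACC for vessel segmentation because vessel pixels are a small minority of the image, and accuracy alone is inflated by the large non-vessel background.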
With the wide application of the Internet and smart systems, data centers (DCs) have become a hot spot of global concern, and energy saving in data centers lies at the core of related work. The thermal performance of a data center directly affects its total energy consumption, as cooling accounts for nearly 50% of total energy consumption. Optimized power distribution is a reliable way to improve the thermal performance of DCs; therefore, analyzing the effects of different power distributions on thermal performance is a key challenge for DCs. This paper analyzes the thermal performance of DCs with different power distributions numerically and experimentally. First, taking a cloud computing room as the research object, it uses Fluent to simulate the temperature distribution and flow field distribution in the room. Then, based on the numerical and experimental analysis, it derives a formula describing the computing power distribution over a certain range. Finally, it calculates an optimal cooling power by analyzing the cooling power distribution. The results show that the maximum difference among the highest cabinet temperatures is reduced from 5–7 K to within 1.2 K. In addition, cooling energy consumption is reduced by more than 5%.
The widespread adoption of digital technologies in tourism has transformed the data privacy landscape, necessitating stronger safeguards. This study examines the evolving research environment of digital privacy in tourism management, focusing on publication trends, collaborative networks, and social contract theory. A mixed-methods approach was employed, combining bibliometric analysis, social contract theory, and qualitative content analysis. Data from 2004 to 2023 were analyzed using network visualization tools to identify key researchers and trends. The study highlights a significant increase in academic attention after 2015, reflecting the industry's growing recognition of digital privacy as crucial. Social contract theory provided a framework emphasizing transparency, consent, and accountability. The study also examined high-impact articles and the role of publishers like Elsevier and Wiley. The findings offer practical insights for policymakers, industry leaders, and researchers, advocating for ongoing collaboration to address privacy challenges in tourism.
By reviewing US state-level panel data on infrastructure spending and per capita income inequality from 1950 to 2010, this paper tests whether an empirical link exists between infrastructure and inequality. Panel regressions with fixed effects show that an increase in the growth rate of spending on highways and higher education in a given decade correlates negatively with Gini indices at the end of the decade, suggesting a causal effect from growth in infrastructure spending to a reduction in inequality through better access to education and employment opportunities. Notably, this relationship is more pronounced for inequality in the bottom 40 percent of the income distribution. In addition, infrastructure expenditures on highways are shown to be more effective at reducing inequality. A counterfactual experiment shows that those US states with a significantly higher bottom Gini coefficient in 2010 had underinvested in infrastructure during the previous decade. From a policy-making perspective, innovations in infrastructure finance are proposed for the US, other industrially advanced countries, and developing economies.
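As a reference point for the inequality measure used above, here is a minimal sketch of the Gini coefficient computed from the mean absolute difference of sorted incomes; the income vectors are toy data, not the paper's panel dataset:

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, (n-1)/n = maximum inequality.
    Uses the identity sum_{i,j} |x_i - x_j| = 2 * sum_i (2i - n - 1) * x_i
    for x sorted ascending with 1-based index i."""
    x = sorted(incomes)
    n = len(x)
    total = sum(x)
    cum = sum((2 * (i + 1) - n - 1) * xi for i, xi in enumerate(x))
    return cum / (n * total)

print(gini([1, 1, 1, 1]))   # perfect equality -> 0.0
print(gini([0, 0, 0, 1]))   # one recipient holds everything -> 0.75
```

A "bottom Gini" of the kind the paper emphasizes would apply the same computation restricted to the bottom 40 percent of the sorted income distribution.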
Identifying homogeneous forest units, delineating them, and ultimately planning separately for each unit is considered the most principled way to manage forests, and producing reliable maps of forest types plays an important role in making optimal decisions for managing forest ecosystems over wide areas. Field methods of forest inventory and parcel surveys to determine forest type are costly and time-consuming. In recent years, producing these maps through digital classification of remote sensing data has attracted attention. An important consideration in creating these units is map scale: more accurate management requires larger-scale, more accurate maps. The purpose of this research is to compare supervised classification methods for recognizing and determining forest type using the MODIS Land Cover product with 1 km resolution and images from the OLI sensor of the Landsat satellite with 30 m resolution, using vegetation indices as well as PCA, in order to produce larger-scale, more accurate maps of homogeneous forest units. Finally, through validation, the best classification method was identified for the forests of Golestan Province in the north-east of the country.
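The vegetation indices and PCA mentioned above can be sketched as follows; the band values are hypothetical, and the NDVI and covariance-eigendecomposition PCA shown here are illustrative standard formulations, not the study's actual processing chain:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

def pca(bands, n_components=2):
    """PCA over a (pixels x bands) matrix via eigendecomposition of the covariance."""
    X = np.asarray(bands, dtype=float)
    Xc = X - X.mean(axis=0)                    # center each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1]          # sort by descending variance
    return Xc @ eigvecs[:, order[:n_components]]

# hypothetical 4-pixel scene with red and NIR reflectances
red = [0.10, 0.20, 0.05, 0.30]
nir = [0.50, 0.40, 0.45, 0.30]
print(np.round(ndvi(nir, red), 3))
```

In a classification workflow of this kind, NDVI and the leading principal components would be stacked as extra feature bands before training the classifier.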
Copyright © by EnPress Publisher. All rights reserved.