The implementation of data interoperability in healthcare relies heavily on policy frameworks. However, many hospitals across South Africa struggle to achieve data interoperability between systems due to insufficient policy frameworks. There is a notable awareness that existing policies do not provide clear, actionable direction for implementing interoperability in hospitals. This study aims to develop a policy framework for integrating data interoperability in public hospitals in Gauteng Province, South Africa. The study employed a conceptual framework grounded in institutional theory, which provided a lens for understanding policies for interoperability, together with a convergent mixed-methods research design. Data were collected through an online questionnaire and semi-structured interviews from 144 clinical and administrative personnel and 16 managers and were analyzed through descriptive and thematic analysis. The results show evidence of coercive isomorphism and indicate that public hospitals lack cohesive policies that facilitate data interoperability. Key barriers to establishing a policy framework include inadequate funding, ambiguous guidelines, weak governance, and conflicting interests among stakeholders. The study developed a policy framework to facilitate the integration of data interoperability in hospitals. It underscores the critical need for the South African government, legislators, practitioners, and policymakers to consult and involve external stakeholders in policy-making processes.
With the advent of the big data era, the volume of data of all types is growing exponentially. Technologies such as big data, cloud computing, and artificial intelligence have developed at unprecedented speed, and countries, regions, and many sectors have included big data technology in their key development strategies. Big data technology has been widely applied across society and has achieved significant results; letting data speak, and using it to analyze, manage, make decisions, and innovate, has become the direction of development in many fields. Taxation is the main form of China’s fiscal revenue: it plays an important role in improving the national economic structure and regulating income distribution, and is the fundamental guarantee for promoting social development. Re-examining the work of tax authorities in the context of big data can enable the efficient and reasonable application of big data technology in tax administration and better serve tax collection and management. Big data technology is characterized by scale, diversity, and speed, and the effect of tax big data on tax collection and management is becoming increasingly prominent, gradually forming a new tax collection and management system driven by tax big data. The key research questions of this article are how to combine big data technology organically with tax management, how to fully leverage the advantages of big data, and how to address the insufficient application of big data technology, the lack of data security safeguards, and the shortage of big data talent that tax authorities face when applying big data to tax administration.
Developing “New Quality Productive Forces” (NQPFs) has been accepted as a new theory for accelerating high-quality development in China. At present, China’s high-quality development relies mainly on the traction of the digital economy. Developing NQPFs in China’s digital economy sector therefore requires locating and removing several obstacles, such as the insufficient utilization of data, inadequate algorithm regulation, the mismatch between the supply and demand of regional computing power, and an immature market environment. As a solution, it is necessary to allocate data property rights in a market-oriented way, establish a user-centered algorithm governance system, accelerate the building of a national integrated computing network, and maintain fair competition to optimize the market environment.
Retinal disorders, such as diabetic retinopathy, glaucoma, macular edema, and vein occlusions, are significant contributors to global vision impairment. These conditions frequently remain symptomless until patients suffer severe vision deterioration, underscoring the critical importance of early diagnosis. Fundus images serve as a valuable resource for identifying the initial indicators of these ailments, particularly by examining various characteristics of retinal blood vessels, such as their length, width, tortuosity, and branching patterns. Traditionally, healthcare practitioners rely on manual retinal vessel segmentation, a process that is both time-consuming and intricate, demanding specialized expertise. This approach poses a notable challenge, since its precision and consistency depend heavily on the availability of highly skilled professionals. To surmount these challenges, there is an urgent demand for an automatic and efficient method for retinal vessel segmentation and classification employing computer vision techniques, which form the foundation of biomedical imaging. Numerous researchers have put forth techniques for blood vessel segmentation, broadly categorized into machine learning, filtering-based, and model-based methods. Machine learning methods categorize pixels as either vessels or non-vessels, employing classifiers trained on hand-annotated images; these techniques typically extract 7D feature vectors and apply neural network classification, with additional post-processing steps to bridge gaps and eliminate isolated pixels. Filtering-based approaches, on the other hand, employ morphological operators within morphological image processing, capitalizing on predefined shapes to filter out objects from the background; however, they often treat larger blood vessels as single cohesive structures. Model-based methods leverage vessel models to identify retinal blood vessels, but they are sensitive to parameter selection, necessitating careful choices to detect thin and large vessels simultaneously and effectively. Our proposed research endeavors to conduct a thorough and empirical evaluation of the effectiveness of automated segmentation and classification techniques for identifying eye-related diseases, particularly diabetic retinopathy and glaucoma. This evaluation will involve various retinal image datasets, including DRIVE, REVIEW, STARE, HRF, and DRION. The methodologies under consideration encompass machine learning, filtering-based, and model-based approaches, with performance assessed on a range of metrics, including true positive rate (TPR), true negative rate (TNR), positive predictive value (PPV), negative predictive value (NPV), false discovery rate (FDR), Matthews correlation coefficient (MCC), and accuracy (ACC). The primary objective of this research is to scrutinize, assess, and compare the design and performance of different segmentation and classification techniques, encompassing both supervised and unsupervised learning methods. To attain this objective, we will refine existing techniques and develop new ones, ensuring a more streamlined and computationally efficient approach.
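All of the evaluation metrics listed above can be derived from the pixel-level confusion matrix of a predicted vessel mask against a hand-annotated ground truth. The following Python sketch is purely illustrative, using small synthetic masks rather than DRIVE, STARE, or any of the other cited datasets, and shows one straightforward way such metrics might be computed.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-level metrics for a binary vessel segmentation.

    pred, truth: boolean arrays of the same shape (True = vessel pixel).
    Returns a dict with TPR, TNR, PPV, NPV, FDR, MCC, and ACC.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    tp = np.sum(pred & truth)       # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)     # background correctly rejected
    fp = np.sum(pred & ~truth)      # background marked as vessel
    fn = np.sum(~pred & truth)      # vessel pixels missed

    tpr = tp / (tp + fn)            # sensitivity / recall
    tnr = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)            # precision
    npv = tn / (tn + fn)
    fdr = fp / (fp + tp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    )
    return {"TPR": tpr, "TNR": tnr, "PPV": ppv, "NPV": npv,
            "FDR": fdr, "MCC": mcc, "ACC": acc}

# Example with small synthetic masks (not real fundus data).
rng = np.random.default_rng(0)
truth = rng.random((64, 64)) < 0.15            # ~15% of pixels are "vessel"
pred = truth ^ (rng.random((64, 64)) < 0.05)   # flip ~5% of pixels as noise
print(segmentation_metrics(pred, truth))
```

In practice the same computation would typically be applied image by image, restricted to the field-of-view mask, and then averaged across a dataset.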
The explosion of information technology, besides its positive aspects, has raised many issues related to personal information and personal data in the network environment. Because children are vulnerable to abuse, fraud, and exploitation, protecting children’s personal information and personal data is always a concern of many countries. An examination of the concepts and characteristics of children’s personal information and personal data in Europe, the United States, and Vietnam shows that protecting them is essential in every country today. This research focuses on the age at which a person is considered a child, the child’s consent, and his or her parents’ consent when children’s personal information or personal data are provided and processed under the laws of the EU, the US, and Vietnam. On that basis, the article proposes some recommendations related to the child’s consent and his or her parents’ consent in protecting children’s personal data in Vietnam.
Central Sulawesi has been grappling with significant challenges in human development, as indicated by its Human Development Index (HDI). Despite recent improvements, the region still lags behind the national average. Key issues such as high poverty rates and malnutrition among children, particularly underweight prevalence, pose substantial barriers to enhancing the HDI. This study aims to analyze the impact of poverty, malnutrition, and household per capita income on the HDI in Central Sulawesi. By employing panel data regression analysis over the period from 2018 to 2022, the research seeks to identify significant determinants of HDI and provide evidence-based recommendations for policy interventions. Using a Fixed Effect Model (FEM), the study reveals that poverty negatively influences HDI, while underweight prevalence is not statistically significant. In contrast, household per capita income significantly affects HDI, with lower income levels leading to declines in HDI. The findings emphasize the need for comprehensive policy interventions in nutrition, healthcare, and economic support to enhance human development in the region. These interventions are crucial for addressing the root causes of underweight prevalence and poverty, ultimately leading to improved HDI and overall well-being. The originality of this research lies in its focus on a specific region of Indonesia, providing localized insights and recommendations that are critical for targeted policy making.
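A Fixed Effect Model of this kind is usually estimated by absorbing time-invariant district effects. The sketch below is a minimal illustration only: the district-level variable names and the data are hypothetical stand-ins rather than the study's actual dataset, and the estimation uses the Python linearmodels package.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Synthetic panel: hypothetical districts observed annually, 2018-2022.
rng = np.random.default_rng(1)
rows = []
for d in [f"district_{i}" for i in range(13)]:
    base = rng.normal(68, 3)              # district-specific HDI level
    for year in range(2018, 2023):
        poverty = rng.uniform(8, 20)      # poverty rate (%)
        underweight = rng.uniform(10, 25) # underweight prevalence (%)
        income = rng.uniform(8, 15)       # per-capita income (illustrative units)
        hdi = base - 0.3 * poverty + 0.5 * income + rng.normal(0, 0.5)
        rows.append((d, year, hdi, poverty, underweight, income))

df = pd.DataFrame(
    rows, columns=["district", "year", "hdi", "poverty", "underweight", "income"]
).set_index(["district", "year"])

# Fixed Effect Model: entity (district) effects absorb time-invariant heterogeneity.
model = PanelOLS.from_formula(
    "hdi ~ 1 + poverty + underweight + income + EntityEffects", data=df
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result)
```

Clustering the standard errors by district, as above, is a common choice for short panels of this kind, though the study's own specification may differ.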