This study explores the complex dynamics of managing augmented reality (AR) data in higher education in the United Arab Emirates (UAE). Although interest in incorporating AR to enrich learning experiences is growing, efficiently managing the data produced by these applications remains a challenge. Building on earlier research that identified these problems, the study examines how the investigated variables, namely faculty readiness, technological limitations, financial constraints, and student engagement, relate to data management in UAE higher education institutions. The research analyzes financial constraints, technological infrastructure, and faculty preparation to understand their impact on AR data management. Detailed empirical data on AR data management in UAE higher education settings were collected through a quantitative survey approach, chosen for its cost-effectiveness, its flexibility in questionnaire design, and the anonymity and confidentiality it affords respondents. The results are expected to enrich academic discourse by highlighting both the obstacles to, and remedies for, more efficient management of AR technology data at higher education institutions. The findings are also expected to inform institutional decision-making on maximizing AR technology’s benefits for improved learning outcomes.
Catastrophes such as earthquakes cause sudden and severe damage, including fatalities, injuries, and property loss, and often trigger a rapid surge in insurance claims. These claims can span several product types: life insurance claims for deaths, health insurance claims for injuries, and general insurance claims for property damage. For insurers offering multiple types of coverage, such a surge can pose a risk of financial losses or bankruptcy. One option for insurers is to transfer part of this risk to reinsurance companies, which assess the potential losses from a catastrophe event and then issue catastrophe reinsurance contracts to insurers. This study aims to construct a valuation model for catastrophe reinsurance contracts that cover claim losses arising from two types of insurance products. Valuation follows the Fundamental Theorem of Asset Pricing: the contract value is the expected present value of the claims occurring during the reinsurance coverage period. The number of catastrophe events during that period is assumed to follow a Poisson process. The impacts of each catastrophe event, such as the numbers of fatalities and injuries that give rise to claims, are treated as random variables and modeled using the Peaks Over Threshold (POT) approach. Clayton, Gumbel, and Frank copulas are used to describe different dependence structures between these random variables. The POT and copula parameters are estimated with the Inference Functions for Margins (IFM) method, after which Monte Carlo simulations are performed to obtain numerical estimates of the expected reinsurance value under the Fundamental Theorem of Asset Pricing. Based on Monte Carlo simulations using Indonesian earthquake data from 1979–2021, the expected reinsurance value is Rp 10,296,819,838.
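To make the pricing pipeline concrete, the following is a minimal Monte Carlo sketch: a Poisson event count, GPD (POT) severities for two claim types, and a Clayton copula linking them, discounted as an expected present value per the Fundamental Theorem of Asset Pricing. Every numeric parameter (the intensity, copula theta, GPD triples, and discount rate) is a hypothetical placeholder, not an estimate fitted to the Indonesian earthquake data.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hypothetical placeholder parameters (NOT the paper's fitted values) ---
lam = 5.0                  # Poisson intensity: expected catastrophe events per year
T = 1.0                    # reinsurance coverage period in years
r = 0.05                   # risk-free rate used for discounting
theta = 2.0                # Clayton copula dependence parameter (theta > 0)
gpd1 = (10.0, 5.0, 0.3)    # (threshold, scale, shape) for claim type 1, e.g. fatalities
gpd2 = (20.0, 8.0, 0.2)    # (threshold, scale, shape) for claim type 2, e.g. injuries

def gpd_quantile(p, thresh, scale, shape):
    """Inverse CDF of the generalized Pareto distribution (POT severities)."""
    return thresh + scale / shape * ((1.0 - p) ** (-shape) - 1.0)

def clayton_pairs(n):
    """Draw n dependent uniform pairs via the Clayton conditional inverse."""
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (theta + 1.0)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

def reinsurance_value(n_paths=20_000):
    """Discounted expected aggregate claims over the coverage period (FTAP)."""
    totals = np.zeros(n_paths)
    for i, k in enumerate(rng.poisson(lam * T, size=n_paths)):
        if k == 0:
            continue                                 # no catastrophe on this path
        u, v = clayton_pairs(k)                      # dependence between claim types
        totals[i] = np.sum(gpd_quantile(u, *gpd1) + gpd_quantile(v, *gpd2))
    return np.exp(-r * T) * totals.mean()

print(f"Monte Carlo estimate of contract value: {reinsurance_value():,.2f}")
```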
Surveys are one of the most important means of obtaining valuable information. A central problem is how data about many different persons can be processed to yield good information about their environment. Modelling environments with Artificial Neural Networks (ANNs) is very common because ANNs excel at modelling predictable environments from a set of data. ANNs cope well with noisy data, but they are, fundamentally, deterministic mathematical functions: they cannot produce different outputs for the same input. So if an ANN is trained on data in which samples with the same input configuration have different outputs, as can be the case with survey data, successfully modelling the environment becomes a major problem. The environment used to demonstrate the study is a strategic one, used to predict the impact of applied strategies on an organization’s financial result, but the conclusions are not limited to this type of environment. It is therefore necessary to adjust the data and to eliminate invalid and inconsistent records, which maximizes the probability of success and the precision of the modelled environment. This study demonstrates, describes, and evaluates each step of a process that prepares the data for use, improving the performance and precision of the ANNs used to obtain the model, that is, improving model quality. As a result of the studied process, a significant improvement is observed both in the feasibility of building a model and in its accuracy.
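As an illustration of the kind of conflict-resolution step such a process involves, the sketch below averages duplicate input configurations whose outputs roughly agree and drops those that conflict too strongly. The column names, threshold, and aggregation rule are illustrative assumptions, not the paper's actual procedure.

```python
import pandas as pd

def resolve_conflicts(df, feature_cols, target_col, max_std=0.5):
    """Collapse rows that share an input configuration but disagree on output.

    Groups with low disagreement are averaged; groups whose outputs vary
    too much (std above max_std) are dropped as inconsistent.
    """
    stats = df.groupby(feature_cols)[target_col].agg(["mean", "std"])
    stats = stats.fillna({"std": 0.0})               # singleton groups have std NaN
    consistent = stats[stats["std"] <= max_std]
    return consistent["mean"].rename(target_col).reset_index()

# Example: two respondents agree on one strategy profile, two conflict badly.
data = pd.DataFrame({
    "strategy_a": [1, 1, 0, 0],
    "strategy_b": [0, 0, 1, 1],
    "financial_result": [0.8, 0.9, -0.5, 0.7],
})
clean = resolve_conflicts(data, ["strategy_a", "strategy_b"], "financial_result")
print(clean)  # the conflicting (0, 1) group is removed; (1, 0) becomes 0.85
```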
While the notion of the smart city has grown in popularity, the backlash against smart urban infrastructure in the context of changing state-public relations has seldom been examined. This article draws on the case of Hong Kong’s smart lampposts to analyse the emergence of networked dissent against smart urban infrastructure during a period of unrest. Deriving insights from critical data studies, dissentworks theory, and relevant work on networked activism, the article illustrates how a smart urban infrastructure was turned into both a source and a target of popular dissent through digital mediation and politicisation. Drawing on an interpretive analysis of qualitative data collected from multiple digital platforms, the analysis explicates the citizen curation of socio-technical counter-imaginaries that constituted a consensus of dissent in the digital realm, and the creation and diffusion of networked action repertoires in response to a changing political opportunity structure. In addition to explicating the words and deeds employed in this networked dissent, this article also discusses the technopolitical repercussions of this dissent for the city’s later attempts at data-based urban governance, which have unfolded at the intersections of urban techno-politics and local contentious politics. Moving beyond the common focus on neoliberal governmentality and its limits, this article reveals the underexplored pitfalls of smart urban infrastructure vis-à-vis the shifting socio-political landscape of Hong Kong, particularly in the digital age.
Urban infrastructures and services—such as public transportation, innovation bodies and environmental services—are important drivers of the sustainable development of our society. How effectively citizens, institutions and enterprises interact, how quickly technological innovations are implemented and how carefully new policies are pursued together determine development. In this work, data on urban infrastructure features such as patents and recycled waste, referring to 106 province areas in Italy, are investigated over a period of twenty years (2001–2020). Scaling laws, with exponents characterizing the above-mentioned features, are observed and used to scrutinize whether and how multiple interactions within a population amplify recycling and innovation performance. The study shows a multiplicative effect of population size on the innovation performance of territories, meaning that the dynamic interactions among the elements of a territory’s innovation ecosystem increase its innovation performance. We discuss how to use such an approach and the related indexes for understanding metropolitan development policy.
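As a brief illustration of the scaling-law analysis, the sketch below fits the exponent of Y = Y0 · N^β by least squares in log-log coordinates. The data are synthetic stand-ins for the province-level figures, and the assumed exponent is illustrative; a fitted β > 1 would indicate the superlinear amplification effect described above.

```python
import numpy as np

# Synthetic stand-ins for the province-level data; values are illustrative only.
rng = np.random.default_rng(0)
population = rng.uniform(1e5, 4e6, size=106)     # populations N of 106 provinces
beta_true, y0 = 1.15, 2e-3                       # assumed superlinear exponent
patents = y0 * population ** beta_true * rng.lognormal(0.0, 0.2, size=106)

# Fit Y = Y0 * N**beta by ordinary least squares in log-log space.
beta_hat, log_y0_hat = np.polyfit(np.log(population), np.log(patents), 1)
print(f"estimated scaling exponent beta = {beta_hat:.3f}")   # beta > 1: amplification
```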
Data literacy is an important skill for students studying physics. With data literacy, students can collect, analyze and interpret data, as well as construct data-based scientific explanations and reasoning. However, students’ data literacy is still not satisfactory. At the same time, various learning strategies leave room to design learning models that target data literacy skills more directly. For this reason, this research developed a physics learning model oriented towards physics objects represented in various modes, called the Object-Oriented Physics Learning (OOPL) model. The model was developed through several stages, and the validity analysis shows that the OOPL model falls into the valid category, fulfilling the elements of both content validity and construct validity. The validity of the OOPL model and its implications are discussed in detail in the discussion.