As artificial intelligence (AI) technology gradually penetrates every aspect of society, the traditional language education industry is also being profoundly affected [1]. AI technology has had a positive impact on college English teaching, but it also presents challenges and negative effects. On the positive side, AI technology can provide personalized learning experiences, real-time feedback, and opportunities for autonomous learning. However, it may also reduce human-to-human communication, leading to a decline in students' interpersonal skills, foster over-dependence on online learning resources, and pose risks to the privacy and security of student data. To address these challenges, teachers can adopt the following countermeasures: improving their skills in integrating AI technology into the classroom, offering personalized instruction to reduce students' dependence on AI tools, and emphasizing the cultivation of students' humanistic literacy and interpersonal communication ability. Additionally, colleges and technology providers should strengthen data security and privacy protection to ensure the safety and confidentiality of student data. By implementing such comprehensive measures, we can maximize the advantages of AI technology in college English teaching while overcoming its potential issues and challenges.
With the rapid development of artificial intelligence (AI) technology, its application in the field of auditing has gained increasing attention. This paper explores the application of AI technology in audit risk assessment and control (ARAC), aiming to improve audit efficiency and effectiveness. First, the paper introduces the basic concepts of AI technology and the background of its application in the auditing field. Then, it provides a detailed analysis of the specific applications of AI technology in ARAC, including data analysis, risk prediction, automated auditing, continuous monitoring, intelligent decision support, and compliance checks. Finally, the paper discusses the challenges and opportunities of AI technology in ARAC, as well as future research directions.
Purpose: This research examines the intricate interplay between Business Intelligence (BI), Big Data Analytics (BDA), and Artificial Intelligence (AI) within the realm of Supply Chain Management (SCM). While the integration of these technologies promises improved operational efficiency and decision-making capabilities, concerns about complexity and potential overreliance on technology persist. The study aims to provide insights into achieving a balance between data-driven insights and qualitative factors in SCM for sustained competitiveness. Design/methodology/approach: The research conducted interviews with ten Arab Gulf-based consulting firms whose ability to successfully complete BI projects is well recognized. Findings: By examining the interplay of human judgement and data-driven strategies, addressing integration challenges, and clarifying the risks of excessive reliance on data, the research enhances comprehension of the modern SCM landscape. It underscores BI's foundational role, the necessity of balanced human input, and the significance of customer-centric strategies for lasting competitive advantage and relationships. Practical implications: The research provides guidance for organizations seeking to navigate effectively the complexities of integrating data-driven technologies in SCM, and it serves as a foundation for future studies that delve deeper into quantitative measurement methodologies and effective data security strategies in the SCM context. Originality: The research highlights the value of integrating BI, BDA, and AI in SCM for improved efficiency, cost reduction, and customer satisfaction, emphasizing the need for a balanced approach that combines data-driven insights, human judgement, and customer-centric strategies to maintain competitiveness.
The objective of this work was to analyze the effect of the use of ChatGPT on the teaching-learning process of scientific research in engineering. Artificial intelligence (AI) is a topic of great interest in higher education, as it combines hardware, software, and programming languages to implement deep learning procedures. We focused on a specific course on scientific research in engineering, in which we measured competencies, expressed in terms of the indicators mastery, comprehension, and synthesis capacity, in students who chose either to use or not to use ChatGPT for the development and completion of their activities. The data were processed with Student's t-test, and box-and-whisker plots were constructed. The results show that students' reliance on ChatGPT limits their engagement in acquiring knowledge related to scientific research. This research presents evidence that engineering students in scientific research courses rely on ChatGPT as a substitute for their academic work and, consequently, do not participate dynamically in the teaching-learning process, assuming a static role instead.
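The comparison described in this abstract (Student's t-test plus box-and-whisker plots for students who used ChatGPT versus those who did not) can be sketched as follows. This is a minimal illustration on fabricated scores; the group names, sample sizes, and values are assumptions, not the study's actual data.

```python
# Minimal sketch of the kind of analysis described in the abstract:
# Student's t-test and a box plot comparing two student groups.
# The scores below are fabricated for illustration, NOT the study's data.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
chatgpt_users = rng.normal(loc=12.0, scale=2.0, size=30)  # hypothetical competency scores
non_users = rng.normal(loc=14.0, scale=2.0, size=30)      # hypothetical competency scores

# Independent two-sample Student's t-test on the competency indicator
t_stat, p_value = stats.ttest_ind(chatgpt_users, non_users, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Box-and-whisker plot comparing the two groups
plt.boxplot([chatgpt_users, non_users])
plt.xticks([1, 2], ["ChatGPT users", "Non-users"])
plt.ylabel("Competency score (arbitrary units)")
plt.show()
```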
Resistance to the adoption of medical artificial intelligence (AI) is widespread, and it is suggested here that this opposition can be overcome by jointly addressing AI awareness, AI risks, and responsibility displacement. The United Kingdom's National Health Service has adopted chatbots to provide medical advice, while heart disease diagnoses can be supported by IBM's Watson; such systems have the potential to improve healthcare by increasing accuracy, efficiency, and patient outcomes. Nevertheless, resistance persists, driven by concerns about job losses, anxieties about misdiagnosis or medical mistakes, and the awareness that AI systems shift responsibility away from medical professionals. Although AI is revolutionising healthcare and its uses are pervasive, both healthcare professionals and the general public remain hesitant about its deployment. Participants' awareness of AI in healthcare, AI risk, resistance to AI, responsibility displacement, and ethical considerations were gathered through questionnaires. Descriptive statistics, chi-square tests, and correlation analyses were used to establish the relationship between resistance and medical AI. The study's first objective is to collect primary data on public AI awareness, perceptions of risk, and the feelings of displacement that professionals have regarding medical AI. Some of these concerns can be resolved when AI awareness is effectively integrated and patients, healthcare providers, and the general public are well informed about AI's potential advantages; trust is built when AI-related issues such as bias, transparency, and data privacy are critically addressed. A further objective is to develop a seamless integration of risk management, communication, and awareness of AI, and, lastly, to assess how this comprehensive approach has affected hospital settings' ambitions to use medical AI. Fusing AI awareness, risk management, and effective communication can thus serve as a comprehensive strategy to promote the application of medical AI in hospital settings. Chen et al. argue that providing training in AI can improve adoption intentions while lowering perceived complexity through awareness of AI.
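The statistical steps named in this abstract (descriptive statistics, chi-square tests, and correlations between resistance and the other constructs) could be run along the following lines. The column names and Likert responses below are invented for illustration only; they are not the study's instrument or data.

```python
# Illustrative sketch only: a fabricated questionnaire table with assumed
# column names (not the study's instrument or data).
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "ai_awareness":   [4, 2, 5, 3, 1, 4, 2, 5, 3, 4],   # 1-5 Likert, hypothetical
    "perceived_risk": [2, 4, 1, 3, 5, 2, 4, 1, 3, 2],   # 1-5 Likert, hypothetical
    "resistance":     [1, 5, 1, 3, 5, 2, 4, 1, 2, 2],   # 1-5 Likert, hypothetical
})

# Descriptive statistics for each item
print(df.describe())

# Chi-square test of independence between (binned) awareness and resistance
high_awareness = df["ai_awareness"] >= 4
high_resistance = df["resistance"] >= 4
contingency = pd.crosstab(high_awareness, high_resistance)
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")

# Rank correlations between resistance and the other constructs
for col in ["ai_awareness", "perceived_risk"]:
    rho, p_corr = stats.spearmanr(df[col], df["resistance"])
    print(f"resistance vs {col}: rho = {rho:.3f}, p = {p_corr:.4f}")
```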