In this study, the authors propose a method that combines CNN and LSTM networks to recognize facial expressions. To handle illumination changes while preserving edge information, the method applies two different preprocessing techniques to each image. The preprocessed images are fed into two independent CNN branches for feature extraction, and the extracted features are fused by an LSTM layer that captures the temporal dynamics of facial expressions. The method is evaluated on the FER2013 dataset, which contains over 35,000 facial images labeled with seven different expressions; a mixing matrix is generated to ensure a balanced distribution of expressions across the training and testing sets. The proposed model achieves an accuracy of 73.72% on the FER2013 dataset. The use of focal loss, a variant of cross-entropy loss, further improves performance, especially in handling class imbalance. Overall, the proposed method demonstrates strong generalization ability and robustness to variations in illumination and facial expression, and it has potential applications in emotion recognition for virtual assistants, driver monitoring systems, and mental health diagnosis.
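To make the pipeline concrete, the following is a minimal PyTorch sketch of a dual-branch CNN feeding an LSTM, trained with a focal loss of the standard form FL(p_t) = -α(1 - p_t)^γ log(p_t). All layer sizes, hyperparameters, and the single-channel input are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """One CNN branch operating on one preprocessed image stream."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class DualCNNLSTM(nn.Module):
    """Two CNN branches (one per preprocessing technique) whose features are
    fused by an LSTM that models temporal dynamics over a frame sequence."""
    def __init__(self, num_classes=7, feat_dim=128):
        super().__init__()
        self.branch_a = SmallCNN(feat_dim)  # e.g. illumination-normalized input
        self.branch_b = SmallCNN(feat_dim)  # e.g. edge-preserving input
        self.lstm = nn.LSTM(2 * feat_dim, 256, batch_first=True)
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, xa, xb):
        # xa, xb: (batch, time, 1, H, W) sequences of the two preprocessed streams
        b, t = xa.shape[:2]
        fa = self.branch_a(xa.flatten(0, 1)).view(b, t, -1)
        fb = self.branch_b(xb.flatten(0, 1)).view(b, t, -1)
        fused, _ = self.lstm(torch.cat([fa, fb], dim=-1))
        return self.classifier(fused[:, -1])  # classify from the last time step

def focal_loss(logits, targets, gamma=2.0, alpha=1.0):
    """Focal loss: cross-entropy down-weighted for well-classified examples,
    mitigating class imbalance (gamma and alpha values are assumptions)."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)  # probability assigned to the true class
    return (alpha * (1 - pt) ** gamma * ce).mean()
```

In this sketch the two input streams correspond to the two preprocessing techniques; in practice each branch would be sized to the 48×48 grayscale FER2013 images.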
In agriculture, crop yield and quality are critical for the global food supply and human survival. Plant leaf diseases threaten both, necessitating a fast, automatic, economical, and accurate identification method. This paper combines deep learning, transfer learning, and dedicated feature learning modules (CBAM and Inception-ResNet), chosen for their strong performance in image processing and classification. A ResNet model pretrained on ImageNet serves as the backbone, into which the feature learning modules are introduced to form our IRCResNet model. Experimental results show that the model achieves an average prediction accuracy of 96.8574% on public datasets, thoroughly validating our approach and significantly enhancing plant leaf disease identification.
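The paper's abstract gives no implementation details; as an illustration of the CBAM component and the transfer-learning setup it describes, a minimal PyTorch sketch might look as follows. The ResNet-50 variant, the module placement, and the class count are assumptions, and the Inception-ResNet blocks that IRCResNet also integrates are omitted here:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by
    spatial attention (reduction ratio and kernel size are common defaults)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise mean and max maps
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))

# Transfer learning: start from an ImageNet-pretrained ResNet and insert the
# attention module before pooling (this placement is an illustrative choice).
backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
backbone.avgpool = nn.Sequential(CBAM(2048), backbone.avgpool)
backbone.fc = nn.Linear(2048, 38)  # class count is dataset-dependent (assumption)
```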
Given the heavy workload faced by teachers, automatic speaking scoring systems provide essential support. This study aims to consolidate the technological configurations of automatic scoring systems for spontaneous L2 English, drawing on literature published between 2014 and 2024. The focus is on the architectures of the automatic speech recognition model and the scoring model, as well as on the features used to evaluate phonological competence, linguistic proficiency, and task completion. By synthesizing these elements, the study seeks to identify potential research directions and to provide a foundation for future research and practical applications in software engineering.
This study investigates the impact of human resource management (HRM) practices on employee retention and job satisfaction within Malaysia’s IT industry. The research centered on middle-management executives from the top 10 IT companies in the Greater Klang Valley and Penang. Using a self-administered questionnaire whose design drew on established literature and validated measurement scales, the study gathered data on demographic characteristics, HRM practices, and employee retention. Structural relationships were analyzed with the PLS 4.0 method, and several hypotheses linking HRM practices to employee retention were tested. Key findings revealed that work-life balance did not significantly impact employee retention, whereas job security had a positive influence. Notably, rewards, recognition, and training and development were insignificant predictors of employee retention. The study also examined the mediating role of job satisfaction but found that it mediated neither the relationship between work-life balance and employee retention nor that between job security and employee retention. Overall, the research highlights that HRM practices have diverse effects on employee retention in Malaysia’s IT sector. Acknowledging limitations such as sample size and research design, the study calls for further research to deepen understanding in this area.