This review provides an overview of the importance of nanoparticles in various fields of science, their classification, synthesis, reinforcements, and applications in numerous areas of interest. Nanoparticles are typically particles with a size of 100 nm or less and fall within the larger category of nanomaterials. Generally, these materials are 0-D, 1-D, 2-D, or 3-D, and they are classified into groups based on their composition (organic or inorganic), shape, and size. These nanomaterials are synthesized by top-down and bottom-up methods. Plant-based synthesis, i.e., synthesis using plant extracts, is non-toxic, making plants an excellent choice for producing nanoparticles. Several physicochemical characterization techniques are available to investigate nanomaterials, such as ultraviolet spectrophotometry, Fourier transform infrared spectroscopy, atomic force microscopy, scanning electron microscopy, vibrating sample magnetometry, superconducting quantum interference device (SQUID) magnetometry, energy dispersive X-ray spectrometry, and X-ray photoelectron spectroscopy. Meanwhile, some challenges associated with the use of nanoparticles remain to be addressed for a sustainable environment.
This study evaluated the performance of several machine learning classifiers (Decision Tree, Random Forest, Logistic Regression, Gradient Boosting, SVM, KNN, and Naive Bayes) for adaptability classification in online and onsite learning environments. Decision Tree and Random Forest models achieved the highest accuracy of 0.833, with balanced precision, recall, and F1-scores, indicating strong overall performance. In contrast, Naive Bayes, while having the lowest accuracy (0.625), exhibited high recall, making it potentially useful for identifying adaptable students despite lower precision. SHAP (SHapley Additive exPlanations) analysis further identified the features most influential in adaptability classification. IT Resources at the University emerged as the primary factor affecting adaptability, followed by Digital Tools Exposure and Class Scheduling Flexibility. Additionally, Psychological Readiness for Change and Technical Support Availability were impactful, underscoring their importance in engaging students in online learning. These findings illustrate the significance of IT infrastructure and flexible scheduling in fostering adaptability, with implications for enhancing online learning experiences.
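As a minimal, hedged sketch of the kind of classifier comparison and SHAP feature-importance analysis described in this abstract (not the study's actual code), the following Python example uses synthetic stand-in data; the feature column names are illustrative assumptions based on the factors named above:

```python
# Illustrative sketch only: compare a few classifiers, then use SHAP to rank
# feature influence. Synthetic data and feature names stand in for the survey.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

features = ["it_resources", "digital_tools_exposure", "scheduling_flexibility",
            "psychological_readiness", "technical_support"]
X_arr, y = make_classification(n_samples=400, n_features=5, n_informative=3,
                               random_state=0)
X = pd.DataFrame(X_arr, columns=features)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))

# Mean absolute SHAP value per feature: larger values mean more influence on
# the adaptability prediction (handles both list- and array-shaped SHAP output).
explainer = shap.TreeExplainer(models["Random Forest"])
sv = explainer.shap_values(X_te)
sv = np.stack(sv, axis=-1) if isinstance(sv, list) else np.atleast_3d(sv)
importance = np.abs(sv).mean(axis=(0, 2))
for feat, imp in sorted(zip(features, importance), key=lambda p: -p[1]):
    print(feat, round(float(imp), 4))
```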
In this paper, we assess the results of experiments with different machine learning algorithms for data classification on the basis of accuracy, precision, recall, and F1-score. The Neural Network model produced the highest accuracy (0.129526) and the highest F1-score (0.118785), showing a good balance of precision and recall and an ability to pick up important patterns in the dataset. Random Forest was not far behind, with an accuracy of 0.128119 and the highest precision (0.118553), indicating a strong ability to handle relations in a large dataset, but with slightly lower recall than the Neural Network. The Decision Tree model ranked third with an accuracy of 0.111792, and its recall showed that it predicts true positives better than the Support Vector Machine (SVM), although it frequently over-predicts the positive class. SVM ranked fourth, with an accuracy of 0.095465 and an F1-score of 0.067861, reflecting difficulty in separating the associated classes. Finally, the K-Neighbors model ranked last, with an accuracy of 0.065531 and unsatisfactory precision and recall, indicating the limitations of this algorithm for this task. We conclude that Neural Networks and Random Forests are the best algorithms for this classification task, while K-Neighbors is markedly inferior to the other classifiers.
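A minimal sketch of how the accuracy, precision, recall, and F1-score comparison above can be computed; the synthetic multi-class dataset and default hyperparameters are assumptions, not the paper's data or settings:

```python
# Illustrative sketch only: evaluate several classifiers on the four metrics
# reported above, using synthetic multi-class data as a stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

models = {
    "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "K-Neighbors": KNeighborsClassifier(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Macro-averaged metrics treat all classes equally, as in multi-class reports.
    print(f"{name}: "
          f"acc={accuracy_score(y_test, pred):.4f} "
          f"prec={precision_score(y_test, pred, average='macro', zero_division=0):.4f} "
          f"rec={recall_score(y_test, pred, average='macro', zero_division=0):.4f} "
          f"f1={f1_score(y_test, pred, average='macro'):.4f}")
```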
Preserving roads requires regular evaluation, as a matter of government policy, through advanced assessments using vehicles with specialized capabilities and high-resolution scanning technology. However, the cost is often unaffordable on a limited budget, so road surface surveys are expected to use low-cost tools and methods that can still be carried out comprehensively. This research aims to create a road damage detection application system that precisely identifies and classifies the type of damage that occurs, using a single CNN to detect objects in real time. For potholes in particular, further analysis measures the volume and dimensions of the hole with a LiDAR-equipped smartphone. The study area consists of representative areas of Indonesia's 38 provinces. This research resulted in iRodd (intelligent road damage detection), which detects and classifies each type of road damage through real-time object detection. For pothole damage in particular, further analysis is carried out to obtain a damage-volume calculation model and 3D visualization. The resulting iRodd model contributes in terms of completeness (analyzing the parameters relevant to the road damage detection process), accuracy (precision), reliability (high precision while remaining cost-effective), correct prediction (identifying four-fifths of all positive objects that should be identified), and efficiency (the object detection model strikes a good balance between recognizing objects with high precision and capturing most of the objects that should be detected, i.e., high sensitivity). For the pothole volume calculation, the precision level is established from the volume error: comparing the derived data to the reference data gives an average error of 5.35% and an RMSE of 6.47 mm. The advanced iRodd model with LiDAR-equipped smartphone devices can visualize and efficiently and precisely calculate the volume of asphalt damage (potholes).
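Assuming the pothole evaluation compares LiDAR-derived measurements against reference (ground-truth) measurements, a minimal sketch of how the reported average percentage error and RMSE can be computed; the numeric arrays are placeholder values, not the study's data:

```python
# Illustrative only: average percentage error and RMSE of derived pothole
# measurements against reference measurements. Placeholder values below.
import numpy as np

derived = np.array([120.0, 95.5, 210.3, 60.2])    # e.g. LiDAR-derived values
reference = np.array([115.0, 99.0, 200.0, 63.0])  # e.g. reference measurements

# Average relative error, expressed as a percentage of the reference values.
avg_error_pct = np.mean(np.abs(derived - reference) / reference) * 100

# Root-mean-square error, in the same unit as the measurements (e.g. mm).
rmse = np.sqrt(np.mean((derived - reference) ** 2))

print(f"average error = {avg_error_pct:.2f}%, RMSE = {rmse:.2f}")
```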
This study applies machine learning methods, namely Decision Tree (CART) and Random Forest, to classify drought intensity based on meteorological data. The goal of the study was to evaluate the effectiveness of these methods for drought classification and their use in water resource management and agriculture. The methodology involved using two machine learning models that analyzed temperature and humidity indicators, as well as wind speed indicators. The models were trained and tested on real meteorological data to assess their accuracy and identify key factors affecting predictions. Results showed that the Random Forest model achieved the highest accuracy of 94.4% when analyzing temperature and humidity indicators, while the Decision Tree (CART) achieved an accuracy of 93.2%. When analyzing wind speed indicators, the models’ accuracies were 91.3% and 93.0%, respectively. Feature importance analysis revealed that atmospheric pressure, temperature at 2 m, and wind speed are key factors influencing drought intensity. One of the study’s limitations was the insufficient amount of data for high drought levels (classes 4 and 5), indicating the need for further data collection. The innovation of this study lies in the integration of various meteorological parameters to build drought classification models, achieving high prediction accuracy. Unlike previous studies, our approach demonstrates that using a wide range of meteorological data can significantly improve drought classification accuracy. Significant findings include the necessity to expand the dataset and integrate additional climatic parameters to improve the models and enhance their reliability.
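A minimal sketch of the CART and Random Forest drought-intensity classification with feature importances described above; the synthetic meteorological values, column names, and the way the drought class is derived here are stand-in assumptions, not the study's data:

```python
# Illustrative sketch only: CART vs. Random Forest drought-class prediction
# from meteorological features, plus feature importances from the forest.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "temperature_2m": rng.normal(25, 8, n),   # degC (stand-in values)
    "humidity": rng.uniform(10, 90, n),       # %
    "wind_speed": rng.uniform(0, 15, n),      # m/s
    "pressure": rng.normal(1010, 8, n),       # hPa
})
# Stand-in drought intensity classes 1..5 (in the study these come from
# observed drought levels, with classes 4-5 under-represented).
df["drought_class"] = pd.qcut(df["temperature_2m"] - 0.2 * df["humidity"],
                              q=5, labels=False) + 1

features = ["temperature_2m", "humidity", "wind_speed", "pressure"]
X_tr, X_te, y_tr, y_te = train_test_split(df[features], df["drought_class"],
                                          stratify=df["drought_class"],
                                          random_state=0)

cart = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("CART accuracy:", accuracy_score(y_te, cart.predict(X_te)))
print("Random Forest accuracy:", accuracy_score(y_te, rf.predict(X_te)))

# Feature importances indicate which meteorological factors drive predictions.
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda p: -p[1]):
    print(name, round(float(imp), 3))
```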
This research paper aims to benchmark the characteristics of financial systems for 102 countries worldwide over the period 2005 to 2017. The financial systems database encompasses four main dimensions, each comprising several variables per indicator: (a) financial depth, (b) financial efficiency, (c) financial access, and (d) financial stability. The objective is to closely analyse the different factors that contribute to the attractiveness of financial and economic systems globally. Furthermore, this paper combines a literature review with empirical modelling and classification of financial systems worldwide to assess their attractiveness. The modelling process utilizes two statistical analysis methods: principal component analysis (PCA) and neural network analysis. By doing so, this research paper aims to identify the most appropriate measures to strengthen these systems and economies. The main conclusions of the research are a ranking of the world’s best-performing countries and the validation of the hypothesis that macroeconomic conditions are effective determinants of the classification dimensions of financial systems.
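As a hedged sketch only (not the paper's actual model), the following example shows a PCA-plus-neural-network classification pipeline of the kind described above; the indicator matrix and class labels are random stand-ins for the real country-level data:

```python
# Illustrative only: reduce financial-system indicators with PCA, then classify
# countries with a small neural network. Random stand-in data; real inputs would
# be the depth/efficiency/access/stability variables per country.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(102, 20))        # 102 countries x 20 indicators (stand-in)
y = rng.integers(0, 3, size=102)      # stand-in attractiveness class (0-2)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=0.95),   # keep 95% of the variance
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```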