Surveys are one of the most important instruments for gathering valuable information. A central problem is how data collected from many different respondents can be processed to yield useful information about their environment. Modelling environments with Artificial Neural Networks (ANNs) is common because ANNs are well suited to modelling predictable environments from a set of data. ANNs tolerate some noise in the data, but they are deterministic mathematical functions and cannot produce different outputs for the same input. Consequently, if an ANN is trained on data in which samples with the same input configuration have different outputs, as is often the case with survey data, modelling the environment can fail. The environment used to demonstrate the study is a strategic one, in which the impact of applied strategies on an organization's financial results is predicted, but the conclusions are not limited to this type of environment. It is therefore necessary to adjust the data and to eliminate invalid and inconsistent records, which maximizes the probability of success and the precision of the resulting model. This study describes, demonstrates, and evaluates each step of a data-preparation process intended to improve the performance and precision of the ANNs used to obtain the model, that is, to improve model quality. As a result of the studied process, a significant improvement is observed both in the feasibility of building a model and in its accuracy.
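As a minimal sketch of one such preparation step (not the authors' exact procedure), the snippet below groups survey samples that share the same input configuration and resolves conflicting outputs so that the training set presents the network with a single target per input. The column names and the averaging rule are assumptions for illustration.

```python
import pandas as pd

# Hypothetical survey data: identical input configurations carry different outcomes.
df = pd.DataFrame({
    "strategy_a": [1, 1, 0, 1, 0],
    "strategy_b": [0, 0, 1, 0, 1],
    "financial_result": [0.8, 0.6, 0.3, 0.7, 0.3],
})

input_cols = ["strategy_a", "strategy_b"]

# Resolve conflicts: keep one row per input configuration, using the mean outcome
# as the single target (an assumed rule; dropping inconsistent groups is another option).
cleaned = df.groupby(input_cols, as_index=False)["financial_result"].mean()
print(cleaned)
```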
The technological development and growth of the telecommunications industry have had a great positive impact on the education, health, and economic sectors, among others. However, they have also intensified rivalry among companies competing to retain existing customers and acquire new ones. A lower level of market concentration corresponds to a higher level of competition in the sector, which in turn drives a country's socioeconomic development. To guarantee and improve this level of competition, governments need to monitor concentration in the telecommunications market in order to plan and develop appropriate strategies. With this in mind, the present work analyzes the prediction of concentration in the telecommunications market using recurrent neural networks and the Herfindahl-Hirschman index. The results show a slight gradual increase in competition in terms of traffic and access, while the concentration level in revenues remains more stable.
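For reference, the Herfindahl-Hirschman index is the sum of the squared market shares of the firms in the market. A minimal computation is sketched below; the example share values are assumptions, not data from the study.

```python
def herfindahl_hirschman(shares_percent):
    """HHI on market shares expressed as percentages (0-100); ranges from near 0 up to 10,000."""
    return sum(s ** 2 for s in shares_percent)

# Hypothetical operator shares of traffic for one period.
shares = [40.0, 30.0, 20.0, 10.0]
print(herfindahl_hirschman(shares))  # 3000.0 -> a concentrated market by common thresholds
```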
In this paper, we compare the results of experiments with different machine learning algorithms for data classification on the basis of accuracy, precision, recall, and F1-score. The Neural Network model produced the highest accuracy (0.129526) and the highest F1-score (0.118785), indicating a good balance between precision and recall and an ability to capture important patterns in the dataset. Random Forest was close behind, with an accuracy of 0.128119 and the highest precision (0.118553), showing a strong ability to handle relations in a large dataset but with slightly lower recall than the Neural Network. The Decision Tree model ranked third with an accuracy of 0.111792; its recall shows that it identifies true positives better than the Support Vector Machine (SVM), although it frequently predicts more positives than are actually present. SVM ranked fourth, with an accuracy of 0.095465 and an F1-score of 0.067861, figures that reflect its difficulty in separating the associated classes. Finally, the K-Neighbors model ranked last, with an accuracy of 0.065531 and unsatisfactory precision and recall, indicating this algorithm's problems with the classification task. We conclude that Neural Networks and Random Forests are the best algorithms for this classification task, while K-Neighbors performs far worse than the other classifiers.
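A minimal sketch of how such a metric comparison is typically produced with scikit-learn is shown below; the synthetic data, model settings, and macro averaging are assumptions and not the paper's exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic multi-class data standing in for the paper's dataset.
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Neural Network": MLPClassifier(max_iter=500, random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "K-Neighbors": KNeighborsClassifier(),
}

for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "acc", accuracy_score(y_te, y_pred),
          "prec", precision_score(y_te, y_pred, average="macro"),
          "rec", recall_score(y_te, y_pred, average="macro"),
          "f1", f1_score(y_te, y_pred, average="macro"))
```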
Accurate drug-drug interaction (DDI) prediction is essential to prevent adverse effects, especially with the increased use of multiple medications during the COVID-19 pandemic. Traditional machine learning methods often miss the complex relationships necessary for effective DDI prediction. This study introduces a deep learning-based classification framework to assess adverse effects from interactions between Fluvoxamine and Curcumin. Our model integrates a wide range of drug-related data (e.g., molecular structures, targets, side effects) and synthesizes them into high-level features through a specialized deep neural network (DNN). This approach significantly outperforms traditional classifiers in accuracy, precision, recall, and F1-score. Additionally, our framework enables real-time DDI monitoring, which is particularly valuable in COVID-19 patient care. The model’s success in accurately predicting adverse effects demonstrates the potential of deep learning to enhance drug safety and support personalized medicine, paving the way for safer, data-driven treatment strategies.
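As an illustrative sketch only (the study's actual architecture and feature set are not specified here), a small feed-forward DNN for classifying adverse effects from a concatenated drug-pair feature vector might look like the following in PyTorch; the feature dimension and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class DDIClassifier(nn.Module):
    """Feed-forward DNN mapping a concatenated drug-pair feature vector to an adverse-effect logit."""
    def __init__(self, n_features: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for "adverse interaction"
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical batch of features derived from molecular structures, targets, and side-effect profiles.
model = DDIClassifier()
x = torch.randn(8, 256)
probs = torch.sigmoid(model(x))
print(probs.shape)  # torch.Size([8, 1])
```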
The purpose of a Vehicular Ad Hoc Network (VANET) is to provide users with better information services through effective communication. To this end, IEEE 802.11p defines a protocol standard based on enhanced distributed channel access (EDCA) contention. In this standard, the backoff algorithm draws a random value from the contention window (CW), whose lower bound is always fixed at zero. Under severe network congestion, the backoff process may therefore start from a small value, which increases collisions and worsens congestion. To address this unbalanced backoff-interval problem under vehicle saturation, this paper proposes a deep-neural-network Q-learning-based channel access algorithm (DQL-CSCA), which adjusts the backoff according to vehicle density. Network simulations are conducted in NS3, and the proposed algorithm is compared with the CSCA algorithm. The results show that DQL-CSCA reduces EDCA collisions more effectively.
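The core idea, selecting a contention-window lower bound as a function of observed vehicle density, can be illustrated with a tabular Q-learning toy. This is a deliberate simplification: DQL-CSCA uses a deep Q-network, and the states, actions, reward model, and transition below are assumptions for illustration only.

```python
import random

# States: coarse vehicle-density levels; actions: candidate lower bounds of the contention window.
densities = ["low", "medium", "high"]
cw_lower_bounds = [0, 8, 16, 32]

q = {(s, a): 0.0 for s in densities for a in cw_lower_bounds}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def simulated_reward(density, cw_low):
    """Toy reward: fewer collisions when denser traffic starts backoff from a larger lower bound (assumed model)."""
    target = {"low": 0, "medium": 8, "high": 32}[density]
    return -abs(cw_low - target)

for episode in range(5000):
    s = random.choice(densities)
    # epsilon-greedy action selection over CW lower bounds
    if random.random() < epsilon:
        a = random.choice(cw_lower_bounds)
    else:
        a = max(cw_lower_bounds, key=lambda x: q[(s, x)])
    r = simulated_reward(s, a)
    s_next = random.choice(densities)  # assumed random density transition
    best_next = max(q[(s_next, x)] for x in cw_lower_bounds)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

# Learned policy: which CW lower bound to use at each density level.
for s in densities:
    best = max(cw_lower_bounds, key=lambda x: q[(s, x)])
    print(s, "-> CW lower bound", best)
```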