Objective: Standardizing image acquisition protocols and image quality across cameras is an important need in imaging, in particular in multi-center clinical trials and the use of image analysis and machine learning algorithms. The objective of this study was to examine the effect of ordered subset expectation maximization (OSEM) reconstruction parameters on the quantitative image quality of cardiac perfusion SPECT images in different typical SPECT cameras and therefore assess the need to change the parameter values across cameras. Methods: The analysis was carried out by comparing the defect contrast-to-noise ratio (CNR) at 12 OSEM subset-iteration combinations. Eight frames were reconstructed using the SIMIND Monte Carlo simulation package. An activity of 370 MBq (10 mCi) and a projection acquisition interval of 20 seconds per projection were used. Attenuation correction (AC) and scatter correction (SC) were performed for all images in this study. Results: The 16-2 subset-iteration combination yielded the highest CNR and defect contrast values for both cameras. The difference between CNR values for the two cameras was found to be close to 5%. Conclusions: Monte Carlo simulations can be useful to investigate how quantitative image quality behaves with respect to reconstruction parameters and correction algorithms in a controlled environment. In this study, the use of different camera brands did not seem to significantly affect lesion detectability. Further simulations with a more extended range of parameters and camera brands may be conducted in the future to further quantify the variability between different brands of cameras.
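The defect CNR compared above can be computed from mean counts in defect and background regions of interest (ROIs). A minimal sketch follows; the specific ROI values and the exact contrast and noise definitions (relative count deficit over relative background standard deviation) are assumptions for illustration, as CNR conventions vary between studies.

```python
import numpy as np

def defect_cnr(defect_roi, background_roi):
    """Contrast-to-noise ratio of a perfusion defect.

    Contrast is taken as the relative count deficit in the defect ROI;
    noise as the relative standard deviation of the background ROI.
    Both ROIs are arrays of reconstructed voxel (or pixel) counts.
    """
    mean_defect = np.mean(defect_roi)
    mean_bg = np.mean(background_roi)
    contrast = (mean_bg - mean_defect) / mean_bg   # fractional defect depth
    noise = np.std(background_roi) / mean_bg       # fractional background noise
    return contrast / noise
```

In a study like the one above, this metric would be evaluated for each subset-iteration combination to locate the setting that maximizes lesion detectability.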
Heat transfer fluids (HTFs) are critical in numerous industrial processes (e.g., the chemical industry, oil and gas, and renewable energy), enabling efficient heat exchange and precise temperature control. HTF degradation, primarily due to thermal cracking and oxidation, negatively impacts system performance, reduces fluid lifespan, and increases operational costs associated with correcting resulting issues. Regular monitoring and testing of fluid properties can help mitigate these effects and provide insights into the health of both the fluid and the system. To date, there is no extensive literature published on this topic, and the current narrative review was designed to address this gap. This review outlines the typical operating temperature ranges for industrial heat transfer fluids (i.e., steam, organic, synthetic, and molten salts) and then focuses specifically on organic and synthetic fluids used in industrial applications. It also outlines the mechanisms of fluid degradation and the impact of fluid type and condition. Other topics covered include the importance of fluid sampling and analysis, the parameters used to assess the extent of thermal degradation, and the management strategies that can be considered to help sustain fluid and system health. Operating temperature, system design, and fluid health play a significant role in the extent of thermal degradation, and regular monitoring of fluid properties, such as viscosity, acidity, and flash point, is crucial in detecting changes in condition (both early and ongoing) and providing a basis for decisions and interventions needed to mitigate or even reverse these effects. This includes, for example, selecting the right HTF for the specific application and operating temperature. 
This article concludes that by understanding the mechanisms of thermal degradation and implementing appropriate management strategies, it is possible to sustain the lifespan of thermal fluids and systems, ensure safe operation, and help minimise operational expenditure.
Open-source software (OSS) has emerged as a transformative tool whose implementation has the potential to modernise many libraries around the world in the digital age. OSS is a type of software which permits its users to inspect, share, modify, and enhance it through its freely accessible source code. The accessibility and openness of the source code permit users to manipulate, change, and improve the way in which a piece of software, program, or application works. OSS solutions therefore provide cost-effective alternatives that enable libraries to enhance their technological infrastructure without being constrained by proprietary systems. Hence, many countries have initiated and formulated policies and legislative frameworks to support the implementation and use of OSS library solutions such as DSpace, Alfresco, and Greenstone. The purpose of the study reported on was to investigate the leveraging of OSS to modernise public libraries in South Africa. Content analysis was adopted as the research methodology for this qualitative study, which was based on a literature review integrating insights from the researchers’ experiences with the use of OSS in libraries. The findings of the study reveal that the use of OSS has the potential to modernise public libraries, especially those located outside cities or urban areas. These libraries are often less well equipped with the necessary technology infrastructure to meet the demands of the digital age, such as online books and open access materials. The study culminated in an OSS framework that may be implemented to modernise public libraries. This framework may help public libraries to integrate OSS solutions and further allow users access to digital services.
Instant and accurate evaluation of drug resistance in tumors before and during chemotherapy is important for patients with advanced colon cancer and is beneficial for prolonging their progression-free survival time. Here, possible biomarkers that reflect the drug resistance of colon cancer were investigated using proton magnetic resonance spectroscopy (1H-MRS) in vivo. SW480 [5-fluorouracil (5-FU)-responsive] and SW480/5-FU (5-FU-resistant) xenograft models were generated and subjected to in vivo 1H-MRS examinations when the maximum tumor diameter reached 1–1.5 cm. The areas under the peaks for metabolites, including choline (Cho), lactate (Lac), glutamine/glutamate (Glx), and myoinositol (Ins)/creatine (Cr) in the tumors, were compared between the two groups. The resistance-related protein expression, cell morphology, necrosis, apoptosis, and cell survival of these tumor specimens were assessed. The content of tCho, Lac, Glx, and Ins/Cr in the tumors of the SW480 group was significantly lower than that of the SW480/5-FU group (p < 0.05). While there was no significant difference in the degree of necrosis and apoptosis rate of tumor cells between the two groups (p > 0.05), the tumor cells of the SW480/5-FU group showed a higher cell density and larger nuclei. The expression levels of resistance-related proteins (P-gp, MPR1, PKC) in the SW480 group were lower than those in the SW480/5-FU group (p < 0.01). The survival rate of 5-FU-resistant colon cancer cells was significantly higher than that of 5-FU-responsive ones at 5-FU concentrations greater than 2.5 μg/mL (p < 0.05). These results suggest that alterations in tCho, Lac, Glx1, Glx2, and Ins/Cr detected by 1H-MRS may be used for monitoring tumor resistance to 5-FU in vivo.
Brain tumors are a primary factor causing cancer-related deaths globally, and their classification remains a significant research challenge due to the variability in tumor intensity, size, and shape, as well as the similar appearances of different tumor types. Accurate differentiation is further complicated by these factors, making diagnosis difficult even with advanced imaging techniques such as magnetic resonance imaging (MRI). Recent techniques in artificial intelligence (AI), in particular deep learning (DL), have improved the speed and accuracy of medical image analysis, but they still face challenges like overfitting and the need for large annotated datasets. This study addresses these challenges by presenting two approaches for brain tumor classification using MRI images. The first approach involves fine-tuning cutting-edge transfer learning models, including SEResNet, ConvNeXtBase, and ResNet101V2, with global average pooling 2D and dropout layers to minimize overfitting and reduce the need for extensive preprocessing. The second approach leverages the Vision Transformer (ViT), optimized with the AdamW optimizer and extensive data augmentation. Experiments on the BT-Large-4C dataset demonstrate that SEResNet achieves the highest accuracy of 97.96%, surpassing ViT’s 95.4%. These results suggest that fine-tuned transfer learning models are more effective at addressing the challenges of overfitting and dataset limitations, ultimately outperforming the Vision Transformer and existing state-of-the-art techniques in brain tumor classification.
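The first approach described above, attaching global average pooling and dropout to a pretrained backbone, can be sketched generically in PyTorch. This is a minimal illustration, not the authors' implementation: the class name, the frozen-backbone choice, and the dropout rate are assumptions; only the four-class output follows from the BT-Large-4C dataset named in the abstract.

```python
import torch
import torch.nn as nn

class TumorClassifier(nn.Module):
    """Transfer-learning head: frozen pretrained backbone, global average
    pooling, and dropout to curb overfitting, followed by a 4-class
    linear layer (hypothetical sketch of the approach described above)."""

    def __init__(self, backbone, feat_channels, num_classes=4, p_drop=0.5):
        super().__init__()
        self.backbone = backbone
        for param in self.backbone.parameters():
            param.requires_grad = False            # freeze pretrained weights
        self.pool = nn.AdaptiveAvgPool2d(1)        # global average pooling 2D
        self.dropout = nn.Dropout(p_drop)          # regularization layer
        self.head = nn.Linear(feat_channels, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                   # (N, C, H, W) feature maps
        pooled = self.pool(feats).flatten(1)       # (N, C) pooled descriptor
        return self.head(self.dropout(pooled))     # (N, num_classes) logits
```

In practice the backbone would be a pretrained network such as SEResNet or ResNet101V2 with its original classification head removed; global average pooling replaces large fully connected layers, which is one reason this design needs little preprocessing and resists overfitting on small datasets.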
This study conducts a systematic review to explore the applications of Artificial Intelligence (AI) in mobile learning to support indigenous communities in Malaysia. It also examines the AI techniques used more broadly in education. The main objectives of this research are to investigate the role of AI in supporting mobile learning and education, and to provide a taxonomy that shows the stages of the process used in this research and presents the main AI applications used in mobile learning and education. To identify relevant studies, four reputable databases—ScienceDirect, Web of Science, IEEE Xplore, and Scopus—were systematically searched using predetermined inclusion/exclusion criteria. This screening process resulted in 50 studies, which were further classified into groups: AI Technologies (19 studies), Machine Learning (11), Deep Learning (8), Chatbots/ChatGPT/WeChat (4), and Other (8). The results were analyzed taxonomically to provide a structured framework for understanding the diverse applications of AI in mobile learning and education. This review summarizes current research and organizes it into a taxonomy that reveals trends and techniques in using AI to support mobile learning, particularly for indigenous groups in Malaysia.
Copyright © by EnPress Publisher. All rights reserved.