<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>FARS - 2024</title>
<link>http://drr.vau.ac.lk/handle/123456789/1003</link>
<description/>
<pubDate>Sun, 05 Apr 2026 19:38:04 GMT</pubDate>
<dc:date>2026-04-05T19:38:04Z</dc:date>
<item>
<title>Diabetic Retinopathy Detection Using Deep Learning</title>
<link>http://drr.vau.ac.lk/handle/123456789/1082</link>
<description>Diabetic Retinopathy Detection Using Deep Learning
Jayasundara, L.J.M.D.N.K.; Jeyamugan, T.; Vijayakanthan, G.
The eye is a vital organ in human anatomy, uniquely allowing non-invasive examination of its interior, including vascular and brain tissues, from the outside. While retinal fundus images are traditionally used to diagnose ophthalmic conditions, they also provide critical information about systemic health. Diabetic Retinopathy (DR), a severe complication of diabetes, can cause irreversible vision loss if not detected and treated promptly. Manual examination of retinal fundus images for DR detection is time-consuming, subjective, and limited by the availability of expert clinicians. In contrast, automatic DR detection methods offer greater efficiency, cost-effectiveness, and speed. Recent advancements in deep learning (DL), particularly convolutional neural networks (CNNs), have shown promising results in automating DR detection from retinal fundus photographs. In this study, we present a custom MobileNetV2 architecture modified for DR detection using fundus images from Sri Lankan patients, aiming to improve the accuracy, efficiency, and accessibility of early detection. The dataset used for model training comprises 255 DR and 17 normal fundus images collected from General Hospital Kandy in Sri Lanka. These images were meticulously annotated to ensure a comprehensive database for analysis. The pre-processing steps included resizing, normalization, and augmentation to enhance training. Unlike common practices that crop images, this study applied text removal techniques to preserve the entire retinal area, ensuring that critical diagnostic features remain intact. After conducting extensive experimentation and employing various fine-tuning techniques, the results demonstrate a 96.08% accuracy, with high precision (98%), recall (94%), and F1-score (96%) for the DR class. This model’s ability to detect DR early can significantly impact patient outcomes by facilitating timely intervention. 
This study provides a comprehensive analysis of DL methodologies and their potential to revolutionize ophthalmology and diabetic retinopathy management in Sri Lanka
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://drr.vau.ac.lk/handle/123456789/1082</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>A Deep Learning Approach Utilizing Convolutional Neural Networks for the Detection of Generated Human Face Images</title>
<link>http://drr.vau.ac.lk/handle/123456789/1081</link>
<description>A Deep Learning Approach Utilizing Convolutional Neural Networks for the Detection of Generated Human Face Images
Wanigasekara, W.G.M.G.N.; Jeyamugan, T.; Sobana, S.
The proliferation of artificial intelligence-generated human face images has posed significant challenges to the detection and authentication of digital content, particularly within social networks. These synthetic faces, produced by advanced Artificial Intelligence techniques such as Generative Adversarial Networks (GANs), are often indistinguishable from real faces, complicating traditional detection methods. The ability to generate realistic facial images has broad implications in media, security, digital forensics, and identity verification. Traditional detection methods, which rely on manually created features and rule-based algorithms, are becoming ineffective against advanced generated images, as they struggle to detect the subtle patterns that differentiate them from real ones. The urgency of developing more robust detection methods is driven by the risks posed by synthetic images, including misinformation, identity fraud, and the erosion of content integrity. In the literature, researchers have typically used a single image generator to produce the training images and used that same generator for testing and validating the model. Most of the images were in JPG format, with the same quality across all images. This research aims to develop an advanced detection methodology, using deep learning and convolutional neural networks, that differentiates between real and generated face images and adapts to different generators (ProGAN, StyleGAN, and Stable Diffusion), with resolutions ranging from 128x128 to 768x768 pixels, and both PNG and JPG formats. The study introduces a customized model named “CCNNgenFace,” designed for detecting generated face images. The model achieved an accuracy of 89.45% on the test dataset, demonstrating its ability to generalize across different generated images and formats. This approach enhances detection accuracy, helping safeguard digital media authenticity and preventing the misuse of synthetic facial images
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://drr.vau.ac.lk/handle/123456789/1081</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Securing Large Language Models: Investigating Prompt Injection Attacks and Remediation Tactics</title>
<link>http://drr.vau.ac.lk/handle/123456789/1080</link>
<description>Securing Large Language Models: Investigating Prompt Injection Attacks and Remediation Tactics
Zahry, M.Z.L.
The rapid advancement of Large Language Models (LLMs) has brought about remarkable capabilities in natural language processing, but it has also exposed vulnerabilities such as prompt injection attacks, which pose significant security threats. This research investigates the effectiveness of prompt injection attacks on LLMs, focusing on role-based scenarios, and explores potential remediation tactics to mitigate these risks. The primary objective is to test the impact of direct prompt injection attacks and identify mitigations. To address this, we developed a dataset containing both benign and malicious prompts and evaluated the responses of four LLMs: Gemini, ChatGPT, Perplexity, and a quantized Llama 2 model. Our methodology involved testing these models’ behaviours and implementing a system that applies sentiment analysis to filter harmful outputs. The results indicate that Gemini and Perplexity exhibited significant vulnerability, often generating harmful or manipulative content. ChatGPT-4 and quantized Llama 2 demonstrated moderate resistance, producing safer alternatives but still failing in some cases. To mitigate harmful content, a response filtering system based on sentiment analysis was implemented. This successfully flagged and neutralised harmful outputs by replacing them with neutral responses when sentiment scores fell below a predetermined threshold. Llama 2 was used as the baseline for this research, and sentiment analysis revealed that Llama 2’s responses improved significantly after applying these mitigation techniques, with compound sentiment scores increasing from 0.5453 to 0.8345, reflecting a notable reduction in harmful content. These findings highlight the need for defence strategies, such as real-time sentiment monitoring, to enhance the security of LLMs against prompt injection attacks.
This research suggests the need for ongoing refinement of mitigation tactics as LLMs continue to evolve, with potential applications in improving the security of AI-driven systems across various domains
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://drr.vau.ac.lk/handle/123456789/1080</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
<item>
<title>Using Spatial Pyramid Pooling Layer in Convolutional Neural Networks For Image-based Malware Detection</title>
<link>http://drr.vau.ac.lk/handle/123456789/1079</link>
<description>Using Spatial Pyramid Pooling Layer in Convolutional Neural Networks For Image-based Malware Detection
Albuker, H.N.I.; Jeyamugan, T.; Sobana, S.
As computer technology rapidly advances, the prevalence of malware has surged, posing significant threats to network security and user data. Malware can infiltrate systems by spreading through the Internet, leading to data loss, fraud, and network breakdowns. Researchers are continually exploring methods to enhance malware detection. This study focuses on improving image-based malware detection by integrating the Spatial Pyramid Pooling (SPP) layer into Convolutional Neural Networks (CNNs). The primary challenge lies in classifying malware when converting binary files of various sizes into images using traditional CNN models, which struggle with varying input dimensions. The SPP layer addresses this by allowing CNNs to process images of different sizes more effectively, identifying features at multiple scales, and enhancing adaptability. In the methodology, malware binaries were converted into grayscale images and fed into a CNN with the SPP layer, generating fixed-length feature maps. The model was evaluated using accuracy, precision, recall, and F1-score metrics. Results showed that the model achieved a high detection accuracy of 96%, with strong performance across most malware classes, including Adialer.C, Agent.FYI, and Allaple.A. However, some malware types, such as Swizzor.gen!I, showed variability in detection performance. These findings confirm that integrating the SPP layer into CNNs significantly enhances the model’s ability to detect diverse malware types, improving its effectiveness in real-world scenarios. In conclusion, this research demonstrates that the SPP-enhanced CNN model offers a robust solution for malware detection, contributing to the cybersecurity field by providing a more adaptable and accurate automatic detection system. Further research could focus on refining the model for specific malware classes with lower detection rates
</description>
<pubDate>Wed, 30 Oct 2024 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://drr.vau.ac.lk/handle/123456789/1079</guid>
<dc:date>2024-10-30T00:00:00Z</dc:date>
</item>
</channel>
</rss>
