<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>FARS - 2021</title>
<link href="http://drr.vau.ac.lk/handle/123456789/240" rel="alternate"/>
<subtitle/>
<id>http://drr.vau.ac.lk/handle/123456789/240</id>
<updated>2026-04-05T19:42:50Z</updated>
<dc:date>2026-04-05T19:42:50Z</dc:date>
<entry>
<title>Character recognition focusing partially disfigured name boards</title>
<link href="http://drr.vau.ac.lk/handle/123456789/315" rel="alternate"/>
<author>
<name>Lathagini, S.</name>
</author>
<author>
<name>Suthaharan, S.</name>
</author>
<id>http://drr.vau.ac.lk/handle/123456789/315</id>
<updated>2025-03-14T05:30:15Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">Character recognition focusing partially disfigured name boards
Lathagini, S.; Suthaharan, S.
Name boards are the most common visual aids on roadways, helping with location identification. Name boards can be defaced by damage in various ways, and when they are severely damaged it can be difficult for a visitor to read them. This research aims to identify, locate, and recognize characters on name boards across various font styles, including partially visible letters. Because no dataset of name-board images with partially visible letters exists, images of name boards in three languages (Tamil, English, and Sinhala) were collected around Jaffna; only the English text on the name boards is used to predict partial and whole characters in this work. First, the image is preprocessed with a grayscale transformation and thresholding, followed by morphological transformations to localize the text regions. Connected component analysis then scans the binarized image and groups its pixels into components based on pixel connectivity, assigning each pixel a label according to its component. Next, the text lines are segmented using a skeleton analysis method, and the segmented character images are converted into strings using the coefficient of correlation and structural similarity index methods. The character recognizer is built and trained on visual data samples of upper- and lower-case letters. As a result, disfigured characters were identified by matching them to the most similar character image. Whereas some existing strategies, such as the automated text detection algorithm and the maximally stable extremal regions feature detector, failed to predict missing characters on name boards, the proposed method achieved an accuracy of 81.7%.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Morphological changes in the major white matter fibre bundles: A 3D curve analysis</title>
<link href="http://drr.vau.ac.lk/handle/123456789/314" rel="alternate"/>
<author>
<name>Lakshika, G.</name>
</author>
<id>http://drr.vau.ac.lk/handle/123456789/314</id>
<updated>2025-03-27T06:12:51Z</updated>
<published>2021-09-15T00:00:00Z</published>
<summary type="text">Morphological changes in the major white matter fibre bundles: A 3D curve analysis
Lakshika, G.
Diffusion magnetic resonance imaging (MRI) provides an exclusive window on brain anatomy that allows the brain’s structural connectivity to be explored in vivo and non-invasively. Fibre tractography using diffusion MRI is a promising method for reconstructing the 3D fibre (curve) architecture of human brain white matter. Alzheimer’s disease is a disconnection syndrome in which brain regions become physically and functionally detached one after another as the disease progresses. Early identification of Alzheimer’s disease, particularly in the pre-symptomatic period, is critical for slowing or preventing disease progression. Structural and functional biomarkers of Alzheimer’s disease have been developed with advanced neuroimaging techniques such as positron emission tomography, structural MRI, diffusion MRI, and functional MRI. This research investigates morphological changes in the Alzheimer’s brain and develops a robust method for Alzheimer’s detection using 3D curve analysis of the major white matter bundles. To the best of our knowledge, this is the first attempt to analyse 3D curves for Alzheimer’s disease prediction. We created six major fibre bundles from the diffusion MRI of normal subjects and Alzheimer’s subjects through a series of medical image processing steps: pre-processing, registration, whole-brain fibre tracking, and segmentation of the major bundles. We show, both visually and quantitatively, that the major bundles differ in several features of the curves. For the quantitative analysis, the Delaunay triangulation method and average curves with geometrical features were used to compare normal and Alzheimer’s subjects statistically. The results show significant differences in each of the major bundles, consistent with previous findings. Future work should investigate the biological basis of these differences.
</summary>
<dc:date>2021-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated classification of white blood cells from microscopic images</title>
<link href="http://drr.vau.ac.lk/handle/123456789/313" rel="alternate"/>
<author>
<name>Sharcky, M.H.H.</name>
</author>
<id>http://drr.vau.ac.lk/handle/123456789/313</id>
<updated>2025-03-11T11:23:32Z</updated>
<published>2021-09-15T00:00:00Z</published>
<summary type="text">Automated classification of white blood cells from microscopic images
Sharcky, M.H.H.
The application of computational techniques is growing rapidly, improving the quality of laboratory analysis and diagnosis. Visual analysis of peripheral blood samples is an important test in the procedures for diagnosing leukaemia. Acute lymphocytic leukaemia is fatal if left untreated because of its rapid spread into the bloodstream and other vital organs, so early diagnosis is crucial for patient recovery, especially in children. The current practice of reading medical images is labour-intensive, time-consuming, costly, and error-prone; a computer-aided system that can automatically make diagnosis and treatment recommendations would be preferable. This paper presents a set of preprocessing and segmentation algorithms and features that can recognize and classify different types of normal white blood cells in digital microscope images. We created a multi-step procedure that extracts a region of interest from a larger image around thresholded cell nuclei, segments that image into cell and non-cell regions using Canny edge detection and a circle identification algorithm, extracts a feature set based on cell colour, size, and nuclear morphology, and applies a classifier. Several classifiers were evaluated on 101 images of white blood cells; the instance-based classifier performed well, with 91% classification accuracy. The results of these analyses can serve as a reference for medical teams evaluating patients.
</summary>
<dc:date>2021-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Static Sinhala sign language recognition using MobileNet convolutional neural network</title>
<link href="http://drr.vau.ac.lk/handle/123456789/312" rel="alternate"/>
<author>
<name>Jagodage, J.P.T.</name>
</author>
<author>
<name>Umesh, E.R.</name>
</author>
<id>http://drr.vau.ac.lk/handle/123456789/312</id>
<updated>2025-03-25T09:30:35Z</updated>
<published>2021-09-15T00:00:00Z</published>
<summary type="text">Static Sinhala sign language recognition using MobileNet convolutional neural network
Jagodage, J.P.T.; Umesh, E.R.
Deaf and mute people cannot exchange thoughts and ideas using spoken words and therefore face many difficulties in their day-to-day lives. They communicate with others using sign language, but there is no standard sign language, since it differs from place to place. A simple technology is therefore needed to help deaf or mute people communicate with others without barriers. In recent years, researchers have been interested in recognizing sign language and converting it into text or voice using vision-based and sensor-based approaches. However, a significant performance gap remains in these approaches because of hand movement at different speeds, real-time recognition requirements, and background and illumination changes. In particular, there is no accurate and complete research on Sinhala sign language recognition. This research aims to recognize static Sinhala sign language gestures and translate them into natural language using deep learning algorithms. An efficient convolutional neural network (CNN) architecture is presented that extracts high-quality features from 2D static hand gesture images to classify 12 different gestures. MobileNets, a class of lightweight deep convolutional neural networks, was used to classify the hand gesture images, and the system was implemented with the Keras deep learning library on a TensorFlow backend. The proposed method efficiently discriminates static hand gesture images and offers an accurate sign language detection classifier. The proposed MobileNet CNN model achieves 96% prediction accuracy on 240 images of the 12 static gestures, indicating that our prediction model is more robust and accurate than previous approaches. This work explores the benefits of deep learning by recognizing a subset of Sinhala sign language signs, helping to bridge the communication gap between deaf or mute people and others.
</summary>
<dc:date>2021-09-15T00:00:00Z</dc:date>
</entry>
</feed>
