I am thrilled to announce that we have been given the opportunity to present our work in two papers at this year's annual conference of the Royal Geographical Society :-)
This year's conference will take place as a hybrid event in London, from the 29th of August to the 1st of September.
Exploring Ornithological Identification with Deep Learning is co-authored with Edwin Ong and Shang Yu Liow. Its abstract reads:
Only 4.5 percent of Singapore's land area is set aside as nature reserve. Despite this, Singapore is an important stopover point for more than forty species of migratory birds along both the Central Asian Flyway and the East Asian-Australasian Flyway, which are drawn to the island nation's wetlands and mudflats. This paper reports an independent research project conducted by a pair of high school students between April 2022 and March 2023, under the mentorship of a senior research scientist at the National Institute of Education in Singapore. It documents the selection and evaluation of deep learning models for ornithological identification. Specifically, two neural network architectures, VGG16 and InceptionV3, were chosen for their suitability for building image classification models. The models were trained on publicly available datasets comprising more than 70,000 images representing more than 400 species. By the end of the project, the final iteration of the model achieved a validation accuracy of 73.04% on forty locally sighted species. As an example of misidentification, an image of a juvenile Barred Eagle-Owl was classified as a White-headed Fish Eagle owing to their similar white colouration. Such predictions can be affected by training data that does not represent a species at all stages of its life and across both sexes. A web-based service linked to the external Singapore Birds Project was subsequently designed as a proof of concept for future scaling.
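For readers curious about the approach, the usual way to build such a classifier is transfer learning: a pretrained backbone such as VGG16 is frozen and a small classification head is trained on top. The sketch below is my own illustration, not the students' actual code; the head layers, input resolution, and the choice of `weights=None` (to avoid a download here; `"imagenet"` would be used in practice) are all assumptions.

```python
import numpy as np
import tensorflow as tf

NUM_SPECIES = 40  # forty locally sighted species, per the abstract

# VGG16 backbone without its original classification head
base = tf.keras.applications.VGG16(
    weights=None,  # in practice: weights="imagenet" for transfer learning
    include_top=False,
    input_shape=(224, 224, 3),
)
base.trainable = False  # freeze the convolutional layers

# small trainable head producing one probability per species
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# one dummy batch just to confirm the shapes line up
probs = model.predict(np.random.rand(2, 224, 224, 3), verbose=0)
print(probs.shape)  # (2, 40)
```

With real data, `model.fit` would then be called on the labelled images, and validation accuracy tracked per epoch.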
Designing an Autonomous Platform using Hobbyist Robotics Embedded Computing for Performing Geospatial 3D Mapping is co-authored with Christopher Cheng and Yi Xin Ong. Its abstract reads:
Typical use cases of geospatial 3D mapping involve outdoor environments, where virtual navigation platforms are generated from satellite and aerial imagery and from photogrammetry. Much less is established about mapping indoor environments, where data may not be readily accessible over long periods of time. This paper describes an independent research project conducted by a pair of high school students between April 2022 and March 2023, under the mentorship of a senior research scientist at the National Institute of Education in Singapore. The project investigated remote sensing with Light Detection and Ranging (LiDAR) sensors, as well as the computer vision capabilities of the NVIDIA Jetson Nano, a compact microcomputer well suited to machine learning applications. More specifically, using open-source hardware and software, the project aimed to design a mapping device that pairs autonomous robot navigation with map plotting and depth maps. The device organised and interpreted data sourced from Raspberry Pi cameras and a 360° LiDAR fitted onto a mobile platform. Depth perception models and mapping algorithms were explored and compared for accuracy in data collection and efficacy in map generation. LiDAR systems, particularly those deployed in large-scale outdoor scenarios, are costly to maintain: a LiDAR system with greater available optical power is more accurate, but also more expensive to operate. Nevertheless, autonomous platforms surpass manual labour in 3D mapping. By scheduling timely deployments of the mapping device, up-to-date information and visual representations of a mapped region can be obtained more easily.
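To give a flavour of the map-plotting step, a 360° LiDAR scan is a list of (angle, range) pairs, which can be rasterised into a 2D occupancy grid centred on the sensor. This is a minimal sketch of that idea, not the project's actual pipeline; the grid size, cell resolution, and helper name are my own assumptions.

```python
import numpy as np

def scan_to_grid(angles_deg, ranges_m, grid_size=50, cell_m=0.1):
    """Mark the cells struck by a 360-degree LiDAR scan on a square
    occupancy grid centred on the sensor (1 = obstacle seen)."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    theta = np.radians(angles_deg)
    # polar (angle, range) -> Cartesian coordinates in metres
    x = ranges_m * np.cos(theta)
    y = ranges_m * np.sin(theta)
    # metres -> grid indices, with the sensor at the grid centre
    centre = grid_size // 2
    ix = (x / cell_m).astype(int) + centre
    iy = (y / cell_m).astype(int) + centre
    ok = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    grid[iy[ok], ix[ok]] = 1
    return grid

# toy scan: a circular wall one metre away in every direction
angles = np.arange(0.0, 360.0, 1.0)
grid = scan_to_grid(angles, np.full_like(angles, 1.0))
print(grid.shape)  # (50, 50)
```

Repeating this on a schedule, as the abstract suggests, would let successive grids be compared to keep the map of a region up to date.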