Leon Davies

  • PhD research student
Date of start of studies: 01 July 2022
Supervisors: Professor Qinggang Meng, Professor Baihua Li and Dr Mohamad Saada
Research areas: AI & Robotics, SLAM, Large Language Models for Robotics, Vision Language Models

Biography

Leon is a PhD candidate in the university's Artificial Intelligence & Robotics Department, where he previously completed an MSc in Artificial Intelligence. He is deeply passionate about advancing the frontiers of technology and is currently focused on an industrial project with partners at WTW, automating fire risk assessments for insurance risk management through AI and robotics. Committed to addressing real-world challenges and putting AI and robotics to practical use, Leon is actively involved in the Robotics, Machine Learning, and Embedded Systems Programming modules at the university. He fosters collaboration and knowledge-sharing with academic institutions and industry partners, striving to put innovative AI and robotics tools into the hands of professionals.

Research

Leon benefits from an industry partnership with the insurance brokerage WTW, which provides a real-world context for much of his research and enables him to experiment with algorithms and robotic systems on practical problems. He is part of the TECH-NGI-CDT programme at the university.

Ongoing Research Projects:

  • Automating Fire Asset Detection for Insurance Risk Management (PhD): Utilises semantic SLAM, language modelling, and vision language modelling to enable robots to generate documentation required for insurance and fire safety inspection visits.
  • Semantic SLAM for Automating 2-dimensional Fire Infrastructure Maps: Combines LiDAR and object detection algorithms to construct 2-dimensional semantic SLAM maps for fire inspection robots. Fire safety infrastructure is searched for, detected and labelled at its precise location within a generated 2D map, streamlining the process of assessing fire infrastructure (a small sketch of this map-annotation step appears after this list).
  • LLM Operating System for Robotics (Leo Rover): Integration of a local language model and a vision language model to control and operate a UGV; simple spoken instructions are translated into chain-of-thought reasoning for decision-making and actuation, allowing the rover to act as a helper during fire safety inspections.
  • Semantic SLAM (Leo Rover): Project to build a semantic SLAM system and integrate it into the Leo Rover UGV, using 3D LiDAR and RGB cameras.
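
The map-annotation step described in the fire-infrastructure project above can be pictured with a short Python sketch. This is not the project's actual code: it assumes detections already arrive as class labels with map-frame coordinates and that the occupancy grid uses the common origin-plus-resolution convention; the asset classes and helper names are purely illustrative.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str   # e.g. "fire_extinguisher", "call_point" (illustrative classes)
        x: float     # detected object position in the map frame, metres
        y: float

    def world_to_grid(x, y, origin_xy, resolution):
        """Convert a map-frame coordinate to (row, col) occupancy-grid indices."""
        col = int((x - origin_xy[0]) / resolution)
        row = int((y - origin_xy[1]) / resolution)
        return row, col

    def annotate_map(occupancy, detections, origin_xy=(0.0, 0.0), resolution=0.05):
        """Attach semantic labels to cells of a 2D occupancy grid.

        The grid comes from the SLAM back-end; the return value maps grid cells
        to the asset labels detected there, ready to be drawn onto the map for a
        fire-infrastructure report.
        """
        annotations = {}
        rows, cols = occupancy.shape
        for det in detections:
            cell = world_to_grid(det.x, det.y, origin_xy, resolution)
            if 0 <= cell[0] < rows and 0 <= cell[1] < cols:
                annotations.setdefault(cell, []).append(det.label)
        return annotations

    # A 10 m x 10 m map at 5 cm resolution with two detected assets.
    grid = np.zeros((200, 200), dtype=np.int8)
    detections = [Detection("fire_extinguisher", 1.2, 3.4),
                  Detection("fire_alarm_call_point", 7.8, 0.6)]
    print(annotate_map(grid, detections))

Rendering the returned cell-to-label mapping on top of the occupancy grid gives a 2D plan with fire-safety assets marked at their detected positions.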

Previous Research Projects:

  • Semantic SLAM (UGV): Development of a semantic SLAM algorithm to build 3-dimensional semantic SLAM maps, focusing on unmanned ground vehicles (UGVs) within the Isaac simulator environment. 3D point clouds of detected objects are imposed onto a 2D SLAM map and used as landmarks to improve SLAM accuracy.
  • SLAM for a Team of Underwater Robots (UUV): Contributed to the development of multi-agent SLAM systems for a fleet of underwater robots (UUVs), enhancing their capabilities for navigation and mapping tasks in large and unknown underwater environments.
  • LLMs for Robotics: Exploration of the use of Large Language Models (LLMs) to control lab robots in real time, responding to basic written and spoken instructions. The LLM responds with pure Python code to control the robot, which is executed in real time (the general shape of this pattern is sketched after this list).
  • LLM Nvidia Isaac Manipulation: Use of prompt-engineered LLMs to control robots (UAVs/UGVs) within the Isaac simulator, enabling them to execute tasks based on basic English instructions; used to evaluate how well an LLM's chain-of-thought reasoning can drive actuation decisions across a variety of robotic tasks.
  • GAN-Based 2D SLAM Error Correction and Artefact Removal: Deep-learning-enabled error correction and artefact removal, achieved by training a GAN model to identify and resolve common SLAM errors (a toy training-step sketch also follows this list).
  • DRL Data Generation: Large-scale automated occupancy grid mapping using deep reinforcement learning to mass-produce realistic examples of occupancy grid maps.
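
The "LLMs for Robotics" entry above follows a prompt-then-execute pattern: the model is asked to reply only with code against a small robot API, and that code is then run. The Python sketch below shows the general shape of this pattern only; the system prompt, the query_llm stub, and the RobotStub driver are illustrative placeholders, not the lab's actual model or robot interface.

    SYSTEM_PROMPT = (
        "You control a mobile robot. Reply with pure Python code only, using the "
        "functions move_forward(metres), turn(degrees) and stop()."
    )

    class RobotStub:
        """Stand-in for the real robot driver (e.g. a ROS action client)."""
        def move_forward(self, metres):
            print(f"moving forward {metres} m")
        def turn(self, degrees):
            print(f"turning {degrees} degrees")
        def stop(self):
            print("stopping")

    def query_llm(system_prompt, user_instruction):
        """Placeholder for a call to a local or hosted language model."""
        # A real implementation would send both prompts to the model and return
        # the text of its reply; here we return a canned response.
        return "move_forward(1.0)\nturn(90)\nstop()"

    def run_instruction(instruction, robot):
        """Ask the LLM for code and execute it against the robot's control API."""
        code = query_llm(SYSTEM_PROMPT, instruction)
        allowed = {
            "move_forward": robot.move_forward,
            "turn": robot.turn,
            "stop": robot.stop,
        }
        # Limit the execution namespace so the generated code can only reach the
        # whitelisted robot functions.
        exec(code, {"__builtins__": {}}, allowed)

    run_instruction("Drive one metre forward, then face the doorway on your right.",
                    RobotStub())

Restricting the execution namespace to the robot's own control functions, as above, is one simple way to keep the generated code from reaching anything outside the intended API.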
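
For the GAN-based map clean-up entry, the toy PyTorch sketch below shows one way an image-to-image adversarial training step could be set up for occupancy grids; the 64x64 grid size, network sizes, and loss weighting are assumptions for illustration rather than the project's actual architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Generator(nn.Module):
        """Maps a corrupted 64x64 occupancy grid to a cleaned one (image-to-image)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
            )
        def forward(self, x):
            return self.net(x)

    class Discriminator(nn.Module):
        """Scores how much a grid looks like a clean ground-truth map."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Flatten(), nn.Linear(32 * 16 * 16, 1),
            )
        def forward(self, x):
            return self.net(x)

    def train_step(gen, disc, noisy, clean, opt_g, opt_d, bce=nn.BCEWithLogitsLoss()):
        """One adversarial update: clean maps are 'real', generator output is 'fake'."""
        fake = gen(noisy)
        real_lbl = torch.ones(clean.size(0), 1)
        fake_lbl = torch.zeros(clean.size(0), 1)
        # Discriminator update on real vs. generated maps.
        d_loss = bce(disc(clean), real_lbl) + bce(disc(fake.detach()), fake_lbl)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator update: fool the discriminator while staying close to the clean map.
        g_loss = bce(disc(fake), real_lbl) + F.l1_loss(fake, clean)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    noisy = torch.rand(4, 1, 64, 64)   # corrupted SLAM maps (toy data)
    clean = torch.rand(4, 1, 64, 64)   # corresponding ground-truth maps (toy data)
    print(train_step(gen, disc, noisy, clean, opt_g, opt_d))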