
Welcome to UiTM RoboCup Team 2024

Copyright © UiTM Robopioneers

Our Architecture Overview


[Architecture diagram: Operating System, Perception]
Main Contribution

I. System

The UiTM Robopioneers' wheelchair project aims to assist elderly and disabled individuals in their daily lives at home. The wheelchair runs on the Robot Operating System (ROS) with a Raspberry Pi as its main computer. The hardware comprises a Raspberry Pi 4 Model B (4 GB RAM), an Arduino Mega, rotary encoders (LPD3806-400BM), voltage regulators (LM2596 or XL4015), a motor driver (MD30C), DC motors (MY1052), an RPLiDAR A1M8, a Kinect camera (version 1), and a 12 V battery. Together these components form an intelligent, user-friendly wheelchair system: ROS provides the control framework, while the Raspberry Pi supplies the processing power. The LiDAR and Kinect camera let the wheelchair perceive its surroundings, providing a safer and more adaptive solution for people with mobility challenges.

Figure 1: Schematic diagram
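The rotary encoders listed above can be turned into wheel odometry with a standard differential-drive dead-reckoning update. The sketch below assumes the LPD3806-400BM's 400 pulses per revolution; the wheel radius and track width are hypothetical values, not the wheelchair's actual dimensions.

```python
import math

# Assumed parameters -- only TICKS_PER_REV comes from the encoder spec:
TICKS_PER_REV = 400      # LPD3806-400BM pulses per revolution
WHEEL_RADIUS_M = 0.15    # hypothetical drive-wheel radius
TRACK_WIDTH_M = 0.55     # hypothetical distance between drive wheels

def ticks_to_distance(ticks):
    """Convert encoder ticks to linear wheel travel in metres."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * 2.0 * math.pi * WHEEL_RADIUS_M

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Differential-drive dead-reckoning: advance the pose by the
    distance each wheel travelled since the last update."""
    d_left = ticks_to_distance(left_ticks)
    d_right = ticks_to_distance(right_ticks)
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / TRACK_WIDTH_M
    # Integrate at the midpoint heading for a slightly better estimate.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

In the real system this update would run on the Arduino or in a ROS node publishing `nav_msgs/Odometry`; the sketch only shows the arithmetic.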

II. Navigation

The UiTM Robopioneers' navigation pipeline starts by simulating the environment in Gazebo, using LiDAR and other sensors to build a detailed map with SLAM techniques. The LiDAR, a crucial sensor, provides precise point-cloud data for mapping, and OctoMap adds a 3D occupancy layer that improves localization accuracy. To strengthen indoor localization, IMU and GPS data are fused with the LiDAR data. Before the robot starts moving, we measure the surrounding parameters to establish a safe radius, ensuring it avoids collisions with obstacles. The ROS Navigation Stack is then used, featuring a dynamic local planner that adapts the robot's path in real time based on LiDAR and sensor data, enabling it to traverse the mapped environment while avoiding dynamic obstacles. Combining LiDAR mapping with explicit safety parameters reflects our commitment to effective, adaptive robotic navigation across a range of scenarios.
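The pre-motion safety check described above can be sketched as a simple filter over a LiDAR scan; the footprint radius and clearance margin here are illustrative values, not the team's actual configuration.

```python
# Illustrative values -- not the wheelchair's tuned configuration:
FOOTPRINT_RADIUS_M = 0.45   # hypothetical wheelchair footprint radius
SAFETY_MARGIN_M = 0.20      # hypothetical clearance margin

def is_clear_to_move(scan_ranges):
    """Return True when no LiDAR return falls inside the safe radius.

    scan_ranges: iterable of range readings in metres. Near-zero
    readings are treated as invalid returns and skipped.
    """
    safe_radius = FOOTPRINT_RADIUS_M + SAFETY_MARGIN_M
    for r in scan_ranges:
        if 0.05 < r < safe_radius:   # ignore spurious near-zero returns
            return False
    return True
```

In the full stack the same idea is handled by the costmap's inflation radius; this standalone check only illustrates the "establish a safe radius before moving" step.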




III. Human-Robot Interaction

UiTM Robopioneers' human-robot interaction blends navigation with hand-gesture control. Users can request directions verbally, prompting the robot to ask them to follow, lead them to the desired location, and confirm arrival with a response such as "You have arrived at your destination." At the same time, the robot interprets the user's hand gestures through its sensors, responding to a set of pre-defined gestures. This integrated approach combines verbal and non-verbal communication, giving users a versatile and engaging interaction experience that showcases the adaptability of the UiTM Robopioneers' robotic system.
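The verbal-direction flow above can be sketched as a phrase-to-goal lookup; the room names, map coordinates, and helper names below are invented for illustration, not taken from the project.

```python
# Hypothetical destinations: room name -> (x, y) goal on the map.
DESTINATIONS = {
    "kitchen": (3.2, 1.5),
    "bedroom": (-1.0, 4.2),
    "living room": (0.5, -2.8),
}

def handle_voice_command(utterance):
    """Match a spoken request against known rooms.

    Returns a dict with the navigation goal and the spoken reply,
    or None when no known destination is mentioned.
    """
    text = utterance.lower()
    for room, goal in DESTINATIONS.items():
        if room in text:
            return {"goal": goal,
                    "say": "Please follow me to the " + room + "."}
    return None

def on_arrival(room):
    """Confirmation phrase spoken once the goal is reached."""
    return "You have arrived at your destination."
```

A real pipeline would feed the matched goal to the navigation stack and the reply to a text-to-speech node; the sketch only shows the mapping step.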

IV. Perception

4.1 Gesture Recognition

The UiTM Robopioneers project adds a layer of interactivity to the autonomous electric wheelchair by incorporating gesture recognition. Using MediaPipe, the wheelchair is designed to detect and respond to human gestures in real time. The focal point is recognition of specific gestures, such as a raised hand, which signals the wheelchair to adjust its trajectory and move towards the person. This feature makes user-machine communication more efficient and gives individuals with varying mobility needs a more intuitive, user-friendly experience. By harnessing MediaPipe for precise and reliable gesture recognition, the wheelchair offers a more responsive and personalized mobility solution.
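A minimal sketch of the raised-hand test on MediaPipe Hands output: the model reports 21 landmarks in normalized image coordinates (y increases downward), with the wrist at index 0 and the middle fingertip at index 12. The decision margin is an assumed threshold, not the project's tuned value.

```python
# MediaPipe Hands landmark indices (fixed by the model):
WRIST = 0
MIDDLE_FINGER_TIP = 12

def is_hand_raised(landmarks, margin=0.15):
    """Decide whether a detected hand is raised.

    landmarks: sequence of 21 (x, y) pairs in normalized image
    coordinates, indexed per MediaPipe's hand landmark model.
    A hand counts as raised when the middle fingertip sits clearly
    above the wrist in the image; `margin` is an assumed threshold.
    """
    wrist_y = landmarks[WRIST][1]
    tip_y = landmarks[MIDDLE_FINGER_TIP][1]
    return (wrist_y - tip_y) > margin
```

In the live system the landmark list would come from `results.multi_hand_landmarks` after calling `hands.process(frame)`, and a positive result would trigger the approach behavior.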






4.2 People Recognition

Through machine learning trained on our own custom dataset, the wheelchair can detect and distinguish both caretakers and patients in its vicinity. This bespoke training data enables the system to learn the visual features that separate caregivers from those receiving assistance. The resulting people recognition improves the wheelchair's situational awareness and lays the foundation for personalized, adaptive functionality tailored to the specific needs of both caretakers and patients.

V. Object Detection

Our project enhances the wheelchair's awareness of dynamic indoor environments through deep-learning object detection based on the YOLOv3 algorithm. The YOLOv3 model has been fine-tuned on a diverse dataset covering objects such as chairs, persons, tables, and vases, giving it a broad understanding of the wheelchair's surroundings. As a result, the wheelchair can autonomously detect and respond to obstacles in real time, combining state-of-the-art detection with mobility assistance for a more inclusive and adaptive user experience.
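The post-processing step of a YOLO-style detector, keeping detections above a confidence threshold and suppressing overlapping boxes, can be sketched as follows; the thresholds are common YOLOv3 defaults, not necessarily the team's tuning.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def filter_detections(dets, conf_thresh=0.5, nms_thresh=0.4):
    """Greedy per-class non-maximum suppression.

    dets: list of (box, confidence, class_name) tuples, where box is
    (x1, y1, x2, y2). Returns the surviving detections, highest
    confidence first.
    """
    kept = []
    candidates = sorted((d for d in dets if d[1] >= conf_thresh),
                        key=lambda d: d[1], reverse=True)
    for det in candidates:
        # Keep a box unless a stronger box of the same class overlaps it.
        if all(det[2] != k[2] or iou(det[0], k[0]) < nms_thresh
               for k in kept):
            kept.append(det)
    return kept
```

In practice this step is often delegated to `cv2.dnn.NMSBoxes` after running the network; the pure-Python version just makes the logic explicit.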
