
DIY LiDAR SLAM Robot from Scratch

Showcase & Documentary


1. Deep Learning Based Calibration
2. ICP Based Odometry & Positioning (Manual)
3. Path-Finding using A* Algorithm & Traversing (Manual)
4. Automatic Navigation & Path-Finding
5. Complete Autonomous Navigation With Feedback Loop

Download Vagabond: https://github.com/Kawai-Senpai/Vagabond/

Welcome to this exciting exploration into the world of SLAM—Simultaneous Localization and Mapping. Most people working on SLAM typically rely on Robot Operating System (ROS) and predefined libraries, using simulations instead of real-world scenarios. But we decided to take a different route. We believed that if we wanted to truly master SLAM, we needed to do it the right way—from scratch.
Rather than using pre-built templates and libraries, which can be picked up at any time, we challenged ourselves to build a SLAM system manually. Now, we didn’t have the budget to buy a professional LiDAR sensor, which can be quite expensive, so we decided to create our own using a servo motor and a Time-of-Flight laser distance sensor. Along with that, we built a robotic car, calibrated the entire system, and coded the entire SLAM architecture by hand in pure Python. We didn’t stop there; we also used OpenCV to display the results.
Why go through all this trouble? Because real-world data is messy—it's noisy, error-prone, and unpredictable. By learning how to make a DIY sensor work effectively, we prepared ourselves to handle the complexities of real-world SLAM. We implemented particle filters, noise reduction techniques, temporal filtering, and probabilistic methods. To refine our data, we also used AI to calibrate and adjust the incoming data, providing us with a cleaner, more accurate experience. On top of that, we applied advanced techniques like erosion to process and refine probabilistic maps, leading to highly accurate results.
This project required a deep dive into mathematics and algorithms, and the only way to fully grasp these concepts was to recreate the entire system from scratch using Python—coding every line by hand. Although many people use ROS, and we did too eventually, our journey began with Python. By the end of this video, you’ll see how we ported and transferred our Python-based system into ROS on a Linux platform, displayed the results using RViz, and how the entire system functions with nodes, topics, listeners, and publishers in the ROS environment.
Before we get into the ROS part, let's break down how we developed the system in Python. We divided the program into four key parts, each serving a crucial role (a rough sketch of how they fit together follows the list):
1. Driver Node: This bridges the gap between the program running on the computer and the robot itself. It handles all the functionalities required to communicate with the robot, such as receiving and calibrating data to ensure accuracy. It also sends commands back to the robot when we want it to move.
2. Update Grids Node: This part is responsible for converting all measurements into Cartesian coordinates, updating occupancy maps, and plotting probabilistic grids that represent free space. It also pinpoints the robot's pose and gives us the grids we need to visualize the environment.
3. Path Finding: This node ensures that we always have the latest path available. Whenever the robot’s position, the occupancy grid, or the goal position changes, this node updates the path accordingly. It also listens for mouse clicks in RViz within ROS, allowing us to update the robot’s path to any clicked goal position.
4. Navigation: This final part calculates the robot’s next steps—how much it should turn, whether it has reached the goal, and in which direction it should move. It uses models that we’ll discuss later to send the appropriate commands to the robot with the right timing and precision.
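To make the architecture concrete, here is a rough sketch of how these four parts could be wired together in a single loop. The object and method names are hypothetical stand-ins for illustration, not our actual code:

```python
# Hypothetical sketch of the four-part pipeline; every name here is an
# illustrative stand-in, not the actual project code.
def slam_loop(driver, grids, planner, navigator, goal):
    while not navigator.reached(goal):
        scan = driver.read_scan()                   # 1. driver: calibrated distances
        pose, occupancy = grids.update(scan)        # 2. grids: pose + probabilistic map
        path = planner.plan(pose, occupancy, goal)  # 3. path finding: latest A* path
        command = navigator.step(pose, path)        # 4. navigation: next move or turn
        driver.send(command)                        # command goes back to the robot
```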
Additionally, we've published a Python library called "Vagabond," which you can use for pathfinding in your own projects. We’ll provide more details on this later in the video.
So, stay tuned as we dive deeper into the technical aspects of SLAM, explore our DIY solutions, and reveal how you can apply these techniques to your own robotics projects!

Before diving into the project, let me walk you through how we built the robot itself. To work effectively with SLAM, it's crucial to understand how the incoming data behaves, and for that, we needed a reliable distance sensor. Initially, we opted for an ultrasonic sensor, mainly because of our previous project, SARDA, which also used ultrasonic technology. That project involved navigation, but it wasn’t as advanced as what we’re tackling here. Back then, we were just beginning to explore robotics, focusing on basic navigation and mapping, but now we’re ready to take it to the next level.
We already knew how to use ultrasonic sensors, transmit their data from the robot to a computer, and drive the mobile robot using motor driver modules. So, we began by gathering all the necessary components: DC motors, wheels, battery holders, ultrasonic sensors, and of course, breadboards.
Initially, our plan was to incorporate two distance sensors—specifically, ultrasonic sensors—mounted on a servo to rotate and scan the front and back of the map. However, we later upgraded to a single time-of-flight (TOF) laser distance sensor for better accuracy. The mobile robot is powered by an ESP32 microcontroller, which communicates with the servo, distance sensors, and motor driver. We also included a logic level shifter to translate signals between the 5V and 3.3V logic levels. The robot is equipped with three 3.7V Li-ion batteries and a power switch to easily turn the system on and off.

This time around, we decided to move away from the laminated fiber materials used in our previous robots. Instead, we chose fiberglass for its transparency and increased thickness. The new design included a large chassis to house all the components, with wheels attached, and a longitudinal piece that would hold the ultrasonic sensors on either side.
We began by working on the ultrasonic sensors, which were mounted on a very thin perforated prototyping board (perfboard). We carefully soldered the two ultrasonic sensors onto the board, ensuring they were securely fixed. This setup was then attached to the longitudinal piece of fiberglass. Additionally, we mounted a servo on this longitudinal piece, allowing it to rotate and scan the surroundings. The servo could pivot between 0 and 100 degrees at both the front and the back, providing comprehensive scanning capabilities for the robot.

The ultrasonic sensors proved to be a poor choice for our project: they were highly unpredictable, noisy, and inaccurate, though they did manage to get the job done for a time. We soon realized that a time-of-flight laser distance sensor would be a better option. At that moment, we didn’t have that upgrade yet, so we worked with what we had, focusing on optimizing and reducing noise to make the ultrasonic sensors function as well as possible.
Once the mechanical assembly was complete, we physically tested the servo’s rotation from left to right, ensuring it could scan effectively. The data from the ultrasonic sensors was sent to the ESP32 microcontroller, which then transmitted it to the computer via UDP in JSON format for processing.
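To give you an idea of the computer side of this link, here is a minimal sketch of a receiver that reads one of these packets and converts the polar reading to Cartesian coordinates. The port number and JSON field names are assumptions for illustration, not our exact packet format:

```python
import json
import math
import socket

# Minimal receiver sketch; the port and the JSON field names below are
# assumptions for illustration, not the exact format our firmware sends.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 4210))

while True:
    packet, _ = sock.recvfrom(1024)
    reading = json.loads(packet)
    angle = math.radians(reading["angle"])   # servo angle in degrees
    distance = reading["distance"]           # measured distance in cm
    # Convert the polar reading to Cartesian coordinates in the robot frame.
    x = distance * math.cos(angle)
    y = distance * math.sin(angle)
    print(f"obstacle at ({x:.1f}, {y:.1f}) cm")
```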
We faced significant challenges in converting this data into Cartesian coordinates, plotting the points accurately, and accounting for various error states. Testing was particularly difficult due to the cluttered environment at home, filled with furniture and small objects that interfered with accurate sensor readings. To address this, we needed a compact, controlled environment where we could effectively evaluate the sensor’s performance, especially given its limited range. This led us to build a small DIY testing setup to validate our ultrasonic sensor’s measurements.

To effectively test the ultrasonic sensors, we needed a controlled environment where our robot could move around freely. We repurposed an old TV box to create a makeshift playground. After laying the box flat on the ground, we cut one side to form a small arena with walls for the robot to navigate and scan.
The original walls were too short, so we extended their height by mounting them on pillars, ensuring the ultrasonic sensors could adequately detect and measure distances. To make this setup more functional, we used tape and markers to create measurement guides and ensured the walls were as straight as possible. We also covered the corners with pieces of cardboard to smooth out the edges, preventing the sound waves from getting trapped or deflected.
The result was a well-constructed mini-arena where our robot could test its navigation and measurement capabilities. We even added small blocks to simulate obstacles and walls, giving us a comprehensive environment to evaluate how well the sensors performed in detecting and mapping the surroundings.

We ran into a significant issue with the ultrasonic sensors. When we placed them in the arena, their performance was inconsistent. The sensors struggled, especially when scanning corners, where the sound waves seemed to get trapped or reflected unpredictably. Additionally, when we tested them against various surfaces, like a marble wall, the sensors often failed to detect the surface accurately, leading to unreliable data.
Recognizing the limitations of the ultrasonic sensors, we decided to switch to a time-of-flight laser distance sensor from a previous project. This sensor promised more accurate and stable measurements compared to the noisy and erratic ultrasonic sensors. We carefully disassembled our old robot, removed the time-of-flight sensor, and mounted it on a servo in place of the ultrasonic sensors. This upgrade significantly improved the robot's performance.
With the time-of-flight laser distance sensor now in place, the robot’s scanning and measurement capabilities were greatly enhanced. But before we dive into the impressive results of this upgrade, let’s take a closer look at how we built the robot chassis itself.

Building on our previous experience with SARDA, our latest robot chassis was crafted from thick, transparent fiberglass. This time, we added some modularity to our design for added flexibility. We secured DC motors on either side of the robot and installed a freely rotating wheel at the front and back to ensure smooth movement. The setup also included battery holders, a motor driver, and other essential components.
To make the system modular, we designed it so that key components could be easily swapped or upgraded. For instance, we soldered header pins onto a laminated perforated board, allowing us to plug and unplug the ESP32 microcontroller as needed. This 'plug-and-play' approach meant that if we needed to replace the ESP32 or repurpose it for another project, we could simply disconnect it and reconnect it later.
Similarly, for the motor driver, we avoided permanent soldering by using header pins and plug-in connectors. This way, we could easily detach the motor driver if necessary. All the wiring was meticulously routed, secured with zip ties, and organized for a clean, efficient setup. Finally, we mounted the servo and the compact time-of-flight laser distance sensor on top of the chassis, completing the assembly with a system that was both practical and innovative.

Coding the entire system was quite a challenge, especially with the ESP32's limited number of pins. When using Wi-Fi to broadcast data, many of the available pins were occupied, leaving us with only half of the pins to manage all the functionalities. We had to carefully allocate these pins and write efficient, high-speed code to ensure that the scanning and data transmission processes were smooth and lag-free. We'll dive into the coding details later, but first, let's address another major issue we faced.
The robot’s turning mechanism initially posed a significant problem. The wheel mounted at the back of the robot would rotate erratically and often get stuck, making it difficult to drive the robot straight or turn accurately. Despite both forward motors working in unison, the robot would veer off course. To resolve this, we replaced the problematic wheel with two additional motors at the back, giving the robot a total of four powered wheels. This adjustment, powered by a single motor driver, provided much better stability and control.
With this upgrade, the robot could now turn 360 degrees smoothly and maintain a more stable movement compared to its previous two-wheel setup with a non-powered third wheel. With the mechanical issues sorted, we were ready to dive into implementing the SLAM system.

Our first major change was moving away from traditional binary occupancy grids and adopting a probabilistic approach. This shift significantly reduced noise and provided a more accurate representation of the environment. We implemented probabilistic occupancy grids and free-space grids to better handle the inherent noise in our system.
To further enhance accuracy, we employed various noise reduction techniques, including temporal filters and probabilistic filtering. We fine-tuned raw Cartesian coordinate points calculated from the probabilistic grids, addressing minor errors with erosion techniques to eliminate minute inaccuracies. We also developed programs to mitigate clamping artifacts, ensuring that our grid representations aligned more closely with real-world scenarios.
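To give a flavor of the probabilistic grid itself, here is a generic sketch of a log-odds occupancy update with erosion-based cleanup. The constants are illustrative choices, not our tuned values:

```python
import numpy as np
import cv2

# Generic log-odds occupancy update; all constants are illustrative.
L_OCC, L_FREE, L_MIN, L_MAX = 0.85, -0.4, -4.0, 4.0
log_odds = np.zeros((200, 200))  # 0 log-odds == 0.5 probability (unknown)

def update_cell(r, c, hit):
    """Nudge a cell towards occupied (hit) or free (miss), with clamping."""
    log_odds[r, c] = np.clip(log_odds[r, c] + (L_OCC if hit else L_FREE),
                             L_MIN, L_MAX)

def probability_map():
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

def cleaned_obstacles(threshold=0.65):
    """Threshold the map, then erode away single-cell speckle noise."""
    obstacles = (probability_map() > threshold).astype(np.uint8)
    return cv2.erode(obstacles, np.ones((3, 3), np.uint8))
```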
There were minor imperfections in our setup, such as the servo not rotating precisely and occasional calibration issues with the LiDAR sensor. To address these, we used deep learning. We designed a specialized model with layers that processed each value independently, reflecting the real-world scenario where data points are largely independent.
Training this model required considerable effort. We manually measured distances by positioning the robot in front of a straight wall and taking multiple observations. This data was then used to synthesize a diverse dataset, which was fed into the model for training. Once trained, the lightweight model was saved and could be easily loaded for real-time use, without significantly impacting performance during operations.
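For illustration, layers that process each value independently can be expressed with 1x1 convolutions across the scan. This is a hedged PyTorch sketch of the idea, not our exact architecture:

```python
import torch
import torch.nn as nn

# Sketch of a calibration network whose layers touch each scan value
# independently (kernel size 1); the layer sizes are illustrative guesses.
class ScanCalibrator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=1),  # per-value feature expansion
            nn.ReLU(),
            nn.Conv1d(8, 1, kernel_size=1),  # per-value corrected distance
        )

    def forward(self, scan):                 # scan: (batch, 1, n_readings)
        return self.net(scan)

model = ScanCalibrator()
raw = torch.rand(1, 1, 100) * 200            # fake raw distances in cm
calibrated = model(raw)                      # same shape, corrected values
```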

Let’s keep the details of the SLAM implementation brief. We used historical data and various filtering techniques to achieve a clean and noise-free map for our algorithms. Initially, we employed different types of particle filters to pinpoint the robot's location within the map. The particle filtering was so precise that it could update the robot’s position even without odometry information when the robot was moved manually. However, for complex motions and actions, odometry was essential.
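To give a flavor of the particle filtering step, here is a bare-bones sketch over (x, y, theta) poses. The scan-matching score is abstracted into a score_pose function, which is an assumption for illustration:

```python
import numpy as np

# Bare-bones particle filter sketch; score_pose() stands in for whatever
# scan-matching fitness is used and is assumed to return positive scores.
N = 500
particles = np.random.uniform([0, 0, -np.pi], [200, 200, np.pi], (N, 3))

def filter_step(scan, score_pose, motion_noise=(2.0, 2.0, 0.05)):
    global particles
    # Predict: jitter every particle (covers manual moves without odometry).
    particles += np.random.normal(0, motion_noise, particles.shape)
    # Weight: how well does the scan fit the map from each particle's pose?
    weights = np.array([score_pose(p, scan) for p in particles])
    weights /= weights.sum()
    # Resample: keep likely poses, drop unlikely ones.
    idx = np.random.choice(N, N, p=weights)
    particles = particles[idx]
    return particles.mean(axis=0)  # crude pose estimate
```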
To integrate odometry, we created a model based on simple equations to estimate the robot's movement. Although our robot lacked wheel encoders and gyroscopes, we could estimate distance and rotation by controlling motor speed and duration. We used these estimates to command the robot to move specific distances or angles, adjusting commands based on the model's calculations. This model allowed us to update the robot's position in real time on the occupancy grid.
In summary, our robot was controlled not by velocity commands but by specific distance or angle commands. We calculated the required parameters for accurate movement based on various measurements and adjustments, which were then applied through the trained model. The particle filters accounted for any small discrepancies, ensuring precise positioning and orientation.
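The dead-reckoning arithmetic behind this is simple. Here is a sketch under the assumption of a differential-drive robot with calibrated speeds; the constants are illustrative, not our measured values:

```python
import math

# Dead-reckoning command model sketch; the speeds are illustrative
# constants that calibration would provide, since we had no encoders.
LINEAR_SPEED = 15.0    # cm per second at the chosen motor power
ANGULAR_SPEED = 90.0   # degrees per second when spinning in place

def drive_time(distance_cm):
    """How long to run both motors forward to cover distance_cm."""
    return distance_cm / LINEAR_SPEED

def turn_time(angle_deg):
    """How long to run the motors in opposite directions to turn angle_deg."""
    return abs(angle_deg) / ANGULAR_SPEED

def integrate_pose(x, y, theta, distance_cm=0.0, angle_deg=0.0):
    """Update the pose estimate after a commanded move or turn."""
    theta += math.radians(angle_deg)
    x += distance_cm * math.cos(theta)
    y += distance_cm * math.sin(theta)
    return x, y, theta
```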


Next, we focused on pathfinding. To make our SLAM project effective, we needed a robust pathfinding algorithm to navigate the robot to its goal position. Implementing this was challenging. We chose the A* algorithm for pathfinding and leveraged our own library, Vagabond, which we developed years ago. Vagabond allowed us to perform pathfinding using the A* algorithm efficiently. We integrated functions to find neighbors, calculate costs, and apply heuristics. We ensured that the cost calculations minimized unnecessary deviations, directing the robot towards the goal smoothly.
We also included a path simplification algorithm to streamline the path, making it easier for the robot to navigate. This simplified path was then used by the navigation module to determine the robot’s movements. The navigation system calculated the robot's current position and orientation, determining the necessary turns and adjustments to approach the goal accurately. This process involved iterative adjustments with appropriate weights and feedback control to ensure the robot reached the target position. Additionally, the pre-trained models were used to generate precise commands for the robot.
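One common way to simplify a grid path is line-of-sight pruning, where intermediate waypoints are dropped whenever a straight shortcut is collision-free. This generic sketch illustrates the idea; it is not necessarily the exact algorithm we used:

```python
import numpy as np

def line_is_free(grid, a, b):
    """Sample along the segment a->b and check that no cell is occupied."""
    steps = int(max(abs(b[0] - a[0]), abs(b[1] - a[1]))) + 1
    for t in np.linspace(0.0, 1.0, steps):
        r = int(round(a[0] + t * (b[0] - a[0])))
        c = int(round(a[1] + t * (b[1] - a[1])))
        if grid[r, c]:            # an occupied cell blocks the shortcut
            return False
    return True

def simplify(path, grid):
    """Drop intermediate waypoints wherever a straight shortcut is clear."""
    out, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_is_free(grid, path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out
```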
With these systems in place, the robot could autonomously navigate and follow the path effectively, performing multiple maneuvers to reach the goal position with a specific tolerance.

One of the standout features of this project is the Vagabond library, which simplifies pathfinding tasks. We’re excited to release Vagabond to the public, allowing you to easily integrate pathfinding capabilities into your own projects. You can install Vagabond via pip or check out the GitHub link in the video description.
Vagabond offers an intuitive way to handle pathfinding problems. The library requires you to define a function that identifies the neighbors of a given node in your graph. You can assign different costs to these neighbors, which helps the algorithm determine the most efficient path. With a specific node object designed for pathfinding, Vagabond automates the process of finding the optimal route.
To get started, install the library using: pip install py-vagabond
Here’s a quick overview of how to use Vagabond for pathfinding. You begin by defining your environment, such as a grid representing free and occupied spaces. Next, specify the starting and ending points for your path.
You then create a function to determine the neighbors of a node based on its position in the grid. For each neighbor, you can set costs that reflect the difficulty of moving to that neighbor. This cost helps guide the algorithm in selecting the most efficient path.
The library provides methods to execute the A* pathfinding algorithm, which involves calculating the shortest path from the start to the end node while considering the defined costs. Once the path is computed, you can visualize it using plotting tools, showing the optimal route on a grid.
To illustrate, Vagabond can handle tasks like navigating a robot through a grid by calculating distances and adjusting movements to reach the goal efficiently. This example demonstrates how the library can be applied to a grid with different cost values, calculating and visualizing the shortest path between two points.
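Vagabond's actual API is documented in the GitHub repository linked above. To illustrate the neighbor-and-cost interface described here without guessing its exact names, the following is a generic A* in the same spirit:

```python
import heapq
import itertools

def a_star(start, goal, neighbors, heuristic):
    """Generic A*; neighbors(node) yields (next_node, step_cost) pairs."""
    tie = itertools.count()                     # avoids comparing nodes on ties
    frontier = [(heuristic(start, goal), next(tie), 0.0, start, None)]
    came_from, best = {}, {start: 0.0}
    while frontier:
        _, _, cost, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:                        # walk parents back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt, step in neighbors(node):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt, goal),
                                          next(tie), new_cost, nxt, node))
    return None                                 # goal unreachable

# Toy usage: a 4-connected grid where 1-cells are walls.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]

def grid_neighbors(node):
    r, c = node
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and not grid[nr][nc]:
            yield (nr, nc), 1.0

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(a_star((0, 0), (2, 0), grid_neighbors, manhattan))
```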
Give Vagabond a try in your projects. Just define the necessary functions, install the library, and let it handle the complex pathfinding tasks for you.

At the final stage of the project, we integrated everything into a ROS2-driven program. This required setting up the necessary nodes, topics, publishers, and listeners to ensure smooth operation within the ROS2 environment. To streamline the process, we transitioned from using OpenCV for plotting to utilizing RViz for visualization.
To achieve this, we made sure that all data being broadcasted adhered to the specific ROS2 datatypes, ensuring compatibility and proper communication across the system.
We developed four distinct nodes to handle various aspects of the system:
1. Communication Node: Manages the interface between the robot and the driver, ensuring accurate updates and calculations.
2. Grid Management Node: Handles the probabilistic grids, particle filtering, and conversion of distance information into Cartesian coordinates to track the robot’s position.
3. Pathfinding Node: Ensures that a viable path is continuously available and updated whenever changes occur in the environment.
4. Navigation Node: Directs the robot’s movements, including how far and in what direction to move, while providing feedback control to adjust the robot’s orientation.
After the conversion to ROS2, we meticulously refined the system to ensure everything functioned seamlessly. The final result is showcased in RViz, where you can see the map being plotted and updated in real time.
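For reference, publishing a probabilistic grid in a form RViz can render looks roughly like this minimal rclpy sketch. The topic name, frame id, resolution, and grid size are illustrative, not our exact configuration:

```python
import numpy as np
import rclpy
from rclpy.node import Node
from nav_msgs.msg import OccupancyGrid

class GridPublisher(Node):
    """Minimal sketch of a map-publishing node; all values illustrative."""
    def __init__(self):
        super().__init__('grid_manager')
        self.pub = self.create_publisher(OccupancyGrid, 'map', 10)
        self.timer = self.create_timer(1.0, self.publish_grid)
        self.grid = np.full((200, 200), 0.5)  # 0.5 probability == unknown

    def publish_grid(self):
        msg = OccupancyGrid()
        msg.header.frame_id = 'map'
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.info.resolution = 0.01            # metres per cell (illustrative)
        msg.info.width = self.grid.shape[1]
        msg.info.height = self.grid.shape[0]
        # RViz expects occupancy as integers in 0..100 (-1 for unknown).
        msg.data = (self.grid * 100).astype(np.int8).flatten().tolist()
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(GridPublisher())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```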

So, that’s a wrap! We've successfully built a SLAM system from scratch, demonstrating not only our ability to handle complex algorithms but also our flexibility in integrating with ROS2. From creating and fine-tuning probabilistic occupancy grids and particle filtering to implementing efficient pathfinding and navigation, every step has been a journey of innovation and learning.
We’ve showcased how our system can function seamlessly within the ROS environment, leveraging RViz for real-time visualization and ensuring precise robot control and positioning. This project highlights our commitment to pushing the boundaries of robotics and automation, and we’re excited about the potential applications and improvements that lie ahead.
Thank you for following along with our development process. If you have any questions or want to dive deeper into the details, feel free to reach out. Stay tuned for more updates and projects as we continue to explore and innovate in the field of robotics!

 


