We have named it 'Cookie Bot' 🍪
Cookie Bot is a multipurpose mini desktop assistant robot that you can control with your voice when needed, but it also has a mind of its own! It uses a combination of AI, machine learning, and speech recognition to decide its movements and actions based on different situations and surroundings.
Users can interact with the robot through voice commands, and its voice-assistant AI system allows it to detect and understand commands in different natural languages, including English and Hindi!
Cookie Bot acts lively, moving around on its own using a pathfinding algorithm to navigate and avoid obstacles while responding to commands from the user. The Cookie Bot uses a LiDAR sensor to detect objects in its path and an ultrasonic sensor to sense objects in its surrounding environment.
Here's a video of the first working prototype:
Prototype 1: December 15th, 2022
Prototype 1: Project Report
Prototype 1: Robot View
Here is how we made it
Right after we completed our first mobile robot project, we felt confident in our understanding of building robots—from integrating sensors and designing the chassis to coding efficiently and mastering the overall flow of a robot's functions. We also learned how to manually cut and shape the chassis using laminated fiber materials. With these new skills in hand, we thought, "Why not take on a bigger challenge?"
That's when the idea for a mobile, portable desktop assistant robot was born. The challenge would be to accommodate all the sensors and components into a small, compact chassis—our most ambitious project yet. This robot wouldn't just sit on a desk; it would move around, operate on minimal power, and respond to voice commands. Inspired by the commercially available robots with their cute, animated eyes, we decided our robot would have its own unique personality, with crafted animations and behaviors. It would feature touch controls, voice activation, obstacle detection, and—perhaps most impressively—we planned to build our own multithreading system on a single Arduino Nano, which isn’t exactly known for its processing power. And that’s how the idea for the Cookie Bot was born, setting us off on this exciting journey.
Let’s dive into the various functionalities that make the Cookie Bot so unique. First, it features a large OLED screen that can display a wide range of information. This screen brings the robot to life with expressive eyes, but it doesn’t stop there—it can also show news, weather updates, temperature, humidity, and more. The potential is limitless.
One of the standout features is the DIY touch sensors we created using aluminum foil. These sensors add an interactive layer to the robot, allowing it to respond to physical touch—a feature we’ll explore in more detail later on. For navigation, the robot is equipped with a VL53L0X ToF laser distance sensor, which helps it detect and avoid obstacles. The robot’s head moves smoothly thanks to the MG90S 9g metal gear servo, adding to its lifelike presence.
To make this desktop assistant truly useful, we packed it with several essential sensors. The DHT11 temperature sensor, for example, can also measure humidity, providing real-time environmental data. But how does the Cookie Bot listen to us? That’s where the voice recognition module comes in, featuring a microphone strategically placed to capture voice commands clearly.
The robot’s design also includes practical elements, like ventilation holes at the front to dissipate heat, essential for keeping the motor driver cool. This motor driver powers two DC motors connected to the wheels, enabling the robot to move. The entire system runs on two 3.7-volt lithium-ion batteries, located at the lower back like a backpack, giving the robot its mobility and endurance.
At the back, you’ll find a serial port that allows us to update the robot’s programming at any time. This feature acts as a sandbox, letting us reprogram and refine the robot’s behavior even years down the line without any need for physical modifications. Also located at the back is the on/off switch, giving us control over when the robot is active.
Last but not least, we have the 5-volt buzzer, hidden inside the robot. It lets the Cookie Bot make sounds, enhancing its lifelike qualities and making it feel more animated and alive.
Battery
DC Motor
Heat Dissipation Mesh
Heat Dissipation Mesh (Front View)
Microphone
OLED Screen
ON / OFF Switch
Servo
Signal Bus
Temperature Sensor
ToF Sensor (LiDAR)
DIY Touch Sensor
Upload Port
With this ambitious vision in mind, we finally began developing the Cookie Bot. Before diving in, I should mention that, as usual, I used laminated fiber to build the robot. To work with this material, I had all the necessary tools to cut, perforate, and assemble the parts using hot glue, bolts, and Dendrite adhesive.
We started with the head, the most expressive and crucial part of the robot. The head needed to house the OLED screen and the time-of-flight (ToF) sensor within an enclosure that would be mounted on a servo, allowing it to rotate left and right. My first task was to design and construct this enclosure. I carefully cut a rectangular hole in the fiber to fit the OLED screen, ensuring it would sit securely.
The ToF sensor, or LIDAR sensor, presented a unique challenge. Since the Cookie Bot was designed to operate on a desktop or tabletop, it needed to avoid both obstacles and potential falls from edges. We didn’t have the luxury of using two separate sensors—one for the ground and one for the front—due to space and power constraints. Instead, we opted for a clever solution: we mounted the sensor at a negative 40-degree angle. This positioning allowed the sensor to scan both the ground and the area in front of the robot simultaneously.
This angle worked perfectly because we already knew the robot's height. If the sensor returned a measurement close to the expected value, it indicated an obstacle at a specific distance. If the reading was significantly greater, it meant the robot was approaching the edge of the table and needed to stop to avoid falling.
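To make the idea concrete, here is a minimal sketch of that decision logic, assuming the Adafruit VL53L0X library and made-up threshold values (the real distances depend on the robot's exact height and mounting angle):

```cpp
#include <Adafruit_VL53L0X.h>

// Hypothetical thresholds: with the sensor tilted ~40 degrees downward, a clear
// tabletop returns roughly EXPECTED_FLOOR_MM. Much shorter means an obstacle,
// much longer means the table edge is ahead.
const uint16_t EXPECTED_FLOOR_MM  = 120;  // assumed value for this height/angle
const uint16_t OBSTACLE_MARGIN_MM = 40;
const uint16_t EDGE_MARGIN_MM     = 60;

Adafruit_VL53L0X lox;

void setup() {
  Serial.begin(9600);
  if (!lox.begin()) {
    Serial.println(F("VL53L0X not found"));
    while (true) {}
  }
}

void loop() {
  VL53L0X_RangingMeasurementData_t measure;
  lox.rangingTest(&measure, false);        // single blocking measurement
  if (measure.RangeStatus != 4) {          // 4 = out of range / invalid
    uint16_t d = measure.RangeMilliMeter;
    if (d + OBSTACLE_MARGIN_MM < EXPECTED_FLOOR_MM) {
      Serial.println(F("Obstacle ahead - turn"));
    } else if (d > EXPECTED_FLOOR_MM + EDGE_MARGIN_MM) {
      Serial.println(F("Table edge - stop!"));
    }
  }
  delay(50);
}
```

In practice, both margins would be tuned by logging raw readings from the mounted sensor on a real tabletop.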
At this stage, we didn’t completely finish developing the head. We had the screen and LIDAR sensor in place, and the front enclosure was taking shape. But for now, we decided to shift focus to the body. We needed to develop the brain of the robot and the main control units so we could test out the head, screen, and other sensors effectively.
Next, we focused on building the body, which required a larger enclosure. We needed to create four walls to encase the body, along with a top and bottom surface—essentially forming a box. The bottom surface would need to house a non-powered support wheel, allowing the robot to move freely. This meant the robot would be supported by three wheels: a small, non-powered wheel at the back, attached to the bottom surface, and two larger wheels at the front connected to DC motors on either side of the chassis.
We started by constructing the bottom surface, then moved on to the rear wall of the robot. This wall was particularly important because it would hold the battery holders and expose the Arduino Nano's port for easy code uploads. Additionally, the power switch for the entire robot would be mounted here. This made it essential to get this wall right from the start.
We temporarily placed a breadboard where the Arduino Nano would be housed. This setup allowed us to power the robot, access the port for coding, and have the on/off switch in place. We could also connect wires from the Arduino Nano to various sensors to test their functionality. Finally, we attached the small support wheel at the back using hot glue and bolts, ensuring the robot would remain stable and not tip over.
Once the robot's brain—the microcontroller—was securely in place, with the breadboard properly set up and everything aligned, it was time to work on the sides. I began by cutting the laminated fiberboard, first measuring the length and width to ensure a perfect fit. These side panels, which enclose the robot from the left and right, are rectangular in shape. After cutting and checking the fit, I attached the motors.
To secure the motors, I used hot glue, but I also reinforced them with zip ties and screws to ensure they stayed firmly in place. I repeated this process for the other side, making sure both motors were mounted at the same height to prevent any imbalance or errors.
Once the motors were attached and secured, I routed the wires inside the robot through holes drilled in the chassis. These wires, connected to the DC motors on both sides, were then linked to the motor driver, which is housed in the central unit of the robot, just below the Arduino Nano. The motor driver, which receives power from the battery, is also connected to the Arduino Nano to receive instructions and process information.
Thanks to our previous robotics projects, we had already learned how to properly connect wires and understood how motor drivers worked. We also had predefined libraries, data sheets, and diagrams from those projects, so this part of the process went smoothly, allowing us to move forward with confidence.
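For illustration, a stripped-down version of that motor control might look like the following; the driver type (a generic dual H-bridge) and the pin assignments are assumptions, not the Cookie Bot's actual wiring:

```cpp
// Minimal sketch of driving two DC motors through a dual H-bridge motor driver.
// Pin numbers are assumptions chosen from the Nano's PWM-capable pins.
const uint8_t LEFT_IN1 = 5,  LEFT_IN2 = 6;    // left motor inputs
const uint8_t RIGHT_IN1 = 9, RIGHT_IN2 = 10;  // right motor inputs

void setMotor(uint8_t inA, uint8_t inB, int speed) {
  // speed: -255 (full reverse) .. 255 (full forward)
  if (speed >= 0) { analogWrite(inA, speed); analogWrite(inB, 0); }
  else            { analogWrite(inA, 0);     analogWrite(inB, -speed); }
}

void driveForward(int speed) { setMotor(LEFT_IN1, LEFT_IN2, speed);  setMotor(RIGHT_IN1, RIGHT_IN2, speed); }
void turnLeft(int speed)     { setMotor(LEFT_IN1, LEFT_IN2, -speed); setMotor(RIGHT_IN1, RIGHT_IN2, speed); }
void stopMotors()            { driveForward(0); }

void setup() {
  pinMode(LEFT_IN1, OUTPUT);  pinMode(LEFT_IN2, OUTPUT);
  pinMode(RIGHT_IN1, OUTPUT); pinMode(RIGHT_IN2, OUTPUT);
}

void loop() {
  driveForward(180);  // cruise ahead
  delay(1000);
  turnLeft(160);      // pivot in place
  delay(500);
  stopMotors();
  delay(1000);
}
```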
Once the sides were prepared, we didn’t attach them right away using hot glue or fix them in place, as there were still several tasks to complete. The next major challenge was giving the robot the ability to listen to us. This could be achieved with a voice recognition unit—a module that could be trained to recognize our voices and then provide corresponding signals.
We decided to offload the voice recognition and signal processing tasks to this external module to relieve the Arduino Nano from handling these functions. This decision was crucial because, later on, we planned to implement a way to multitask on the Arduino Nano, allowing it to handle multiple operations simultaneously. This would make the robot responsive and lifelike with minimal latency.
So, we attached the voice recognition module to the robot's body and connected all the necessary wires to the Arduino Nano, enabling data exchange between the two. The voice recognition module was linked to a microphone, which we needed to expose on the robot's exterior. We cut a square-shaped hole in one side of the chassis to expose the microphone, ensuring that when someone spoke to the robot, it could clearly hear and recognize the voice. If the microphone had been placed inside the body, the sound would have been muffled, making voice recognition ineffective. With the microphone exposed and connected to the voice recognition module housed internally, the robot was equipped to listen and respond to commands.
Once these tasks were completed, we moved on to installing the temperature sensor and the buzzer. The temperature sensor was straightforward to integrate. We cut a square-shaped hole on one side of the chassis to expose the perforated sensor head, ensuring it could accurately sense the external temperature rather than the internal heat generated by the robot. Similarly, we placed the buzzer on another side and cut a small hole to allow sound to escape clearly without being muffled.
Both the temperature sensor, which also doubles as a humidity sensor due to its internal humidity module, and the buzzer were directly connected to the Arduino Nano. Securing these components in place required careful application of hot glue, ensuring they were positioned without interfering with other modules housed inside the robot. The sensors were meticulously arranged to fit snugly within the compact body while maintaining a clean external appearance.
We took every necessary step to avoid forcing any sensor into place, ensuring that there was ample space for each component. Remarkably, the design allowed for even more compactness if we had chosen to pursue it.
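As a rough idea of how those environmental readings are pulled in, here is a small DHT11 test sketch using the common Adafruit DHT library; the data pin is an assumption:

```cpp
#include <DHT.h>

// Quick check of the DHT11, which reports both temperature and humidity.
#define DHT_PIN  4       // assumed data pin
#define DHT_TYPE DHT11

DHT dht(DHT_PIN, DHT_TYPE);

void setup() {
  Serial.begin(9600);
  dht.begin();
}

void loop() {
  float t = dht.readTemperature();   // degrees Celsius
  float h = dht.readHumidity();      // percent relative humidity
  if (isnan(t) || isnan(h)) {
    Serial.println(F("DHT11 read failed"));
  } else {
    Serial.print(F("Temp: "));   Serial.print(t);
    Serial.print(F(" C  Hum: ")); Serial.print(h); Serial.println(F(" %"));
  }
  delay(2000);                       // the DHT11 needs a couple of seconds between reads
}
```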
Once the temperature sensor and buzzer were in place, we moved on to securing the side panels of the chassis. These side pieces were crucial because they housed many of the robot's essential modules. We carefully attached each component, ensuring everything was firmly secured to prevent movement, disconnections, or damage under pressure.
To keep the connections stable, we soldered the wires to the DC motors, ensuring they operated flawlessly. Additionally, we enclosed all wires and sensitive connections with non-conductive electrical tape, providing safety and a clean, polished appearance by covering up any unsightly parts.
While we were working on the mechanical assembly and electrical connections, we also tested each component in parallel. As we cut, attached, and connected the physical parts, we simultaneously uploaded code, programmed the robot, and checked if everything was functioning as intended. We spent considerable time experimenting with the OLED screen to ensure it worked responsively with minimal latency.
However, this presented a significant challenge—the OLED screen required substantial program memory to store individual frames. As our code grew, we faced memory constraints on the Arduino Nano. To overcome this, we had to get creative: optimizing libraries, reducing code size, and finding innovative ways to manage memory efficiently.
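One memory trick of this kind is keeping each eye frame in flash (PROGMEM) rather than RAM. The snippet below sketches that approach with the Adafruit SSD1306/GFX libraries and a placeholder 16x16 frame, not the robot's real artwork:

```cpp
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);

// Placeholder 16x16 "eye" bitmap stored in flash, so it never occupies RAM.
const uint8_t PROGMEM eyeFrame[] = {
  0x07, 0xE0, 0x1F, 0xF8, 0x3F, 0xFC, 0x7F, 0xFE,
  0x7F, 0xFE, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
  0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x7F, 0xFE,
  0x7F, 0xFE, 0x3F, 0xFC, 0x1F, 0xF8, 0x07, 0xE0
};

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);   // 0x3C is the usual I2C address
  display.clearDisplay();
  // drawBitmap reads the frame straight out of flash
  display.drawBitmap(32, 24, eyeFrame, 16, 16, SSD1306_WHITE);
  display.drawBitmap(80, 24, eyeFrame, 16, 16, SSD1306_WHITE);
  display.display();
}

void loop() {}
```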
During this process, we accidentally shorted the OLED screen, rendering it completely unusable. The screen was dead, and we had no choice but to dismantle the head assembly and start over. This setback forced us to pause work on the head while we ordered a new screen online, and when the replacement eventually arrived after a frustrating delay, we used the reassembly as an opportunity to modify the structure, making it more efficient, reliable, and foolproof.
While we waited for the new OLED screen to arrive, we focused on assembling the body of the robot. We meticulously connected all data lines and ensured that each module received the necessary power and voltage. Every connection was checked to avoid loose wires or interruptions. Although we were close to finishing, we left the body unglued and open for now, as we had other tasks to complete.
Once the new screen arrived, we dismantled the old, defective one and installed the new OLED screen. We carefully secured all the wires from both the screen and the lidar sensor, using zip ties and hot glue to prevent any loose connections. This was crucial because the head's movements could potentially cause disconnections or interference, which might affect the robot's performance.
With the new screen in place, we proceeded to test the integration of the OLED display and the lidar sensor. After extensive research and fine-tuning, we successfully managed to read data from the lidar sensor and display it on the OLED screen. This verification was a significant milestone, confirming that both components were functioning correctly and displaying the sensor data as intended.
But our project wasn’t quite finished yet. We wanted to add one more feature to our robot: the ability for it to respond to touch. We envisioned a cute desktop assistant with expressive eyes, sounds, and voice activation. To make it feel interactive, we needed it to recognize when someone touched it, whether on the sides or the head.
Given our budget constraints, we couldn’t afford high-end touch sensors, so we had to get creative. Reflecting on our physics lessons from school, we remembered that every object has a property called capacitance. Capacitance is the ability of an object to hold an electrical charge. While large capacitors in electronics can hold a lot of charge, even everyday objects can hold some charge, albeit in much smaller amounts.
We came up with an idea: why not use aluminum foil, which is conductive, to act as a touch sensor? By applying a potential difference (voltage) to the aluminum foil, we could create a makeshift capacitor. When someone touches the foil, it changes its capacitance because the touch alters the charge distribution on the foil.
We tested this concept and found that it worked perfectly! We bought a pack of thin aluminum foil and cut it into the shapes we needed. We placed pieces of aluminum foil on the sides and head of the robot. These pieces were connected to wires, which in turn connected to the Arduino Nano. The Arduino applies a small charge to the foil and measures the capacitance. When someone touches the foil, the capacitance changes, and the Arduino detects this change.
We secured the aluminum foil with tape to ensure a stable connection and prevent interference. With this setup, our robot could now recognize and respond to touch, adding a delightful interactive element to its features.
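For anyone curious how a bare Arduino pin can "measure" capacitance, the sketch below shows one common charge-time approach: charge the foil through a high-value resistor and count how long the pin takes to read HIGH. The pins, resistor value, and threshold here are assumptions used to illustrate the idea, not our exact values:

```cpp
// Rough capacitive-touch reading with nothing but a resistor and a foil pad.
// SEND_PIN charges the foil through a high-value resistor (~1 megohm assumed);
// a finger on the foil adds capacitance, so the charge time goes up.
const uint8_t SEND_PIN = 7;            // drives the charge through the resistor
const uint8_t FOIL_PIN = 8;            // wired directly to the aluminum foil
const uint16_t TOUCH_THRESHOLD = 50;   // tune by watching the serial monitor

uint16_t readFoil() {
  // Discharge the foil first
  pinMode(FOIL_PIN, OUTPUT);
  digitalWrite(FOIL_PIN, LOW);
  delayMicroseconds(10);
  pinMode(FOIL_PIN, INPUT);

  // Start charging and count loop iterations until the pin reads HIGH
  digitalWrite(SEND_PIN, HIGH);
  uint16_t count = 0;
  while (digitalRead(FOIL_PIN) == LOW && count < 1000) count++;
  digitalWrite(SEND_PIN, LOW);
  return count;
}

void setup() {
  pinMode(SEND_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (readFoil() > TOUCH_THRESHOLD) Serial.println(F("Touched!"));
  delay(100);
}
```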
Once we completed the touch controls with aluminum foil, it was time to focus on finishing the robot’s head. The next step involved attaching the servo to complete the head’s enclosure.
The head of the robot was designed with six walls: a front wall with cutouts for the OLED screen and the time-of-flight sensor, a top wall, two side walls with openings for air passage, and a rear wall with holes for the voice recognition module. The wiring had already been set up, connecting the head to the body’s power supply and Arduino Nano.
To finalize the head, we needed to secure it properly. The bottom wall of the head, made from laminated fiber, was where we mounted the servo. We used screws to fix the servo and hot glue to secure the servo armature, which is the part that moves. This setup allows the head to rotate freely as the servo armature turns.
Attaching the servo was a bit challenging. We had to ensure that the wires were positioned correctly so that the servo could rotate without obstruction. We carefully cut and placed the laminated fibers, making precise connections and adjustments to ensure that everything fit together properly.
This careful planning and execution were crucial to make sure the head operated smoothly and the servo had the freedom to rotate without interfering with the wires.
With the head complete, the next crucial step was to assemble the robot by attaching the sides to the main body. At this point, all the components, including the DC motors and various sensors, were installed in separate pieces but had not yet been attached to the robot's main chassis.
To start the final assembly, we first focused on closing off the body chassis. This involved joining two side pieces, each housing important components like the DC motors, temperature sensor, and microphone. We used hot glue to securely bond these side pieces to the main body, ensuring a solid and stable structure.
We paid careful attention to the wiring during this process. It was essential to manage the wires properly to prevent any interference or disconnections. By securing the sides with hot glue and neatly routing the wires, we ensured that everything was in place and functioning correctly.
With the sides securely attached, the next step was to close off the robot’s body. We needed to fasten the front panel to complete the assembly. To ensure functionality, we drilled holes in the front panel for the indicator lights. These lights would provide visual feedback, letting us know if the robot was operating correctly. Additionally, these holes served a practical purpose by allowing heat from the motor driver to dissipate, preventing overheating.
Once the body was closed off, the robot was structurally secure, but we still had to ensure everything was firmly in place. Given that this was a handmade project with laminated fibers that couldn't be easily screwed or bent, hot glue became our go-to solution. We meticulously applied hot glue to all joints and edges, carefully securing each part while maintaining the robot’s rectangular shape.
By the end of this phase, we had successfully attached all the wires, connected the motor driver, and fixed the servo and head in place. With the front panel closed, the robot was now a solid, integrated structure with no loose pieces. This marked the completion of the robot’s physical assembly, ready for the final testing and adjustments.
With the structural assembly complete, it was time to tackle the cosmetic finishing. The extensive use of hot glue, while functional, left the robot looking less than sleek. As a handmade DIY project rather than a 3D-printed piece, the aesthetics needed a bit of refinement. The hot glue was messy and detracted from the overall appearance, so we needed a solution to conceal it while adding some visual appeal.
Our approach was to use electrical non-conductive tape to cover the hot glue and joints. This not only masked the unsightly glue but also reinforced the joints, adding extra strength and stability to the structure. By applying the tape carefully over all the glued areas, we managed to enhance both the robot's appearance and its durability.
The robot, built from laminated fibers with a distinctive brown finish, took on a unique and refined look with this final touch. The tape effectively concealed the glue while providing a clean, finished appearance. This step of decorating and securing the structure was both quick and transformative, giving the robot a polished and cohesive look.
Even after securing everything with electrical tape, the robot still didn’t achieve the polished look we were aiming for. The visible holes and the overall DIY appearance needed refinement. Drawing on my childhood passion for origami and cardboard crafts, I decided to create a custom cover for the robot. I had a lot of experience making everything from short films to intricate armor using paper and cardboard, so I felt confident in this approach.
My idea was to design a sleek, single-piece armor to wrap around the robot. Instead of using multiple pieces and lots of glue, I planned to cut and fold a single sheet of cardboard to form a seamless cover. This would minimize the need for additional adhesives and ensure a clean, professional finish.
I started by measuring the robot’s dimensions and accounting for any protrusions caused by the hot glue. With careful planning and prototyping using pencil and paper, I designed the armor to fit snugly. The result was a futuristic, sci-fi-inspired piece with strategically placed openings for ventilation.
The cardboard armor was designed to be a one-piece wrap, enveloping the robot as if it were its outer shell. If we had access to 3D printing technology, we might have used it for a more precise cover, but given the DIY nature of this project, handcrafting the armor was a satisfying and practical solution. I’m excited about the possibilities for future projects with advanced technology, but for now, this hand-made approach was both functional and aesthetically pleasing.
After confirming that the cardboard armor fit the robot perfectly, we tackled the final aesthetic enhancement. Initially, we used white cardboard, but it didn’t achieve the sleek, professional look we desired. To enhance the appearance and ensure a polished finish, we decided to switch to black cardboard for the cover.
Since some parts of the robot, like the wheels and the back portion, were intentionally exposed, we needed a solution to hide these elements and create a cohesive, professional look. To accomplish this, we chose to spray paint the entire exterior. Spray painting would effectively cover any imperfections and unify the robot’s appearance.
We selected a high-quality spray paint with a fast-drying formula to achieve a smooth, even finish. Our approach involved carefully applying the paint to the outer cardboard armor, ensuring there were no rough patches or uneven spots. As a first-time attempt at spray painting, some areas required extra attention to detail.
Additionally, we spray-painted the wheels. Originally red, the wheels were made of plastic and purchased from a hobby store. By painting them black, we achieved a more refined look that matched the robot’s sleek design. This final touch not only concealed the red plastic but also integrated the wheels seamlessly into the overall aesthetic.
With the spray painting complete, the robot now boasted a clean, professional appearance, with all components blending harmoniously. The transformation from DIY to a polished final product was a gratifying success.
Once the spray painting was complete, it was time to wrap the robot with the newly designed cardboard cover. Thanks to precise measurements and careful preparation, the wrapping process went smoothly. The cover was crafted by folding a single flat piece of cardboard in specific ways to ensure it fit perfectly around the robot’s six sides.
The design included extended portions that were specifically made to be glued together at the extremities. This meticulous approach allowed the entire cover to be assembled from one continuous piece of cardboard, avoiding the need for multiple seams or joints. The only gluing required was at the points where the extended edges met, ensuring a seamless and cohesive look.
The result was a sleek, unified exterior that beautifully concealed the robot's internal components and enhanced its overall appearance. The magic of using a single piece of cardboard was evident in the smooth, professional finish that wrapped effortlessly around the robot.
With the exterior finally complete, our robot was ready for action. Supported by three wheels and adorned with a sleek, black exterior, the robot exuded a professional, high-tech appearance. The black paint on the wheels and cover ensured that the OLED screen seamlessly blended with the rest of the design. It was amazing to see how everything fit perfectly into such a compact robot, especially considering that this project dates back to a time when we had barely learned to solder.
Reflecting on how far we’ve come, we now use custom PCBs, advanced soldering techniques, and state-of-the-art equipment. Despite the simplicity of the tools available back then, we managed to create a robust and functional robot that could withstand rigorous handling without losing any connections.
But the project wasn't finished yet. While the physical construction was complete, we needed to program the robot to bring it to life. With the code upload port conveniently located at the back, we could easily update and modify the robot’s behavior. Our next task was to upload the main programming code, enabling the robot to exhibit its full range of functionalities: from animating its eyes on the OLED screen to responding to voice commands and interacting with users through speech recognition.
Our journey into coding the robot was a crucial step, especially given the limitations of the Arduino Nano. We quickly discovered that the Nano, with its modest processing power and limited program memory, struggled to handle simultaneous tasks—like controlling a display, moving the robot, generating sounds, and processing speech recognition—all at once.
The Nano's single-core nature made it clear that traditional approaches wouldn't suffice. We needed a solution to manage multiple tasks efficiently without causing lag or unresponsiveness. Thus, we set out to develop a lightweight multi-threading engine specifically for the Nano. This was no small feat; it involved creating a system from scratch that could handle tasks concurrently on a microcontroller with limited resources.
After days of development, we successfully created a multi-threading library in C++. This library allowed us to assign tasks and hooks that could be executed in parallel, despite the Nano’s constraints. Integrating this system into the Arduino IDE was straightforward, and the results were promising. The robot could now move, make sounds, and display information on the screen simultaneously without any significant performance issues.
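The library itself is too long to reproduce here, but a minimal cooperative scheduler in the same spirit looks roughly like this; the task names and intervals are placeholders:

```cpp
// A minimal cooperative "thread" scheduler: each task is a function plus an
// interval, and loop() runs whichever tasks are due. This is a sketch of the
// idea, not the actual Cookie Bot library.
struct Task {
  void (*run)();           // the task body (must return quickly)
  unsigned long interval;  // how often to run, in milliseconds
  unsigned long lastRun;   // last time it ran
};

void updateEyes()  { /* draw the next eye frame on the OLED */ }
void pollSensors() { /* read ToF, DHT11, and touch pads */ }
void driveMotors() { /* advance the current movement command */ }

Task tasks[] = {
  { updateEyes,  40,  0 },   // ~25 fps animation
  { pollSensors, 100, 0 },
  { driveMotors, 20,  0 },
};
const uint8_t TASK_COUNT = sizeof(tasks) / sizeof(tasks[0]);

void setup() {}

void loop() {
  unsigned long now = millis();
  for (uint8_t i = 0; i < TASK_COUNT; i++) {
    if (now - tasks[i].lastRun >= tasks[i].interval) {
      tasks[i].lastRun = now;
      tasks[i].run();        // each task "yields" simply by returning
    }
  }
}
```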
This breakthrough was a significant achievement, as it enabled the robot to perform its functions smoothly and efficiently, showcasing the power of creative problem-solving even with limited hardware.
With the multi-threading engine successfully implemented, we turned our attention to bringing the robot to life through its behaviors and animations. The eyes of the robot were a key feature, essential for its cuteness and lifelike interaction. Designed to serve as a helpful and stress-relieving desktop assistant, the robot's eyes would play a significant role in expressing emotions and enhancing user engagement.
We began by focusing on the eyes, which were crucial for conveying different emotional states. Given the Arduino Nano’s limited processing power and storage, we opted for a frame-based approach rather than complex animations. This meant we would use different frames to represent various emotions and transitions.
To manage these transitions smoothly, we developed a mini-engine within the Arduino Nano. This engine handled the display of different eye poses and emotions. For instance, if the robot needed to change from an angry to a happy expression, it wouldn’t jump directly between these states. Instead, it would pass through an idle pose, creating a more fluid and natural transition. Similarly, when the robot needed to shift its gaze from left to right, it would first move to an idle pose to ensure a smoother movement.
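In code, that transition rule can be captured by a tiny state machine like the sketch below; the emotion names and the drawEyes() helper are illustrative stand-ins for the real engine:

```cpp
// The eyes never jump straight from one emotion to another; they always pass
// through IDLE first, which is what makes the motion feel fluid.
enum Emotion { IDLE, HAPPY, ANGRY, SAD, SLEEPING };

Emotion current = SLEEPING;
Emotion target  = IDLE;

void drawEyes(Emotion e) { /* push the matching frame set to the OLED */ }

void setEmotion(Emotion e) { target = e; }

void updateEyeEngine() {
  if (current == target) return;       // nothing to do
  if (current != IDLE && target != IDLE) {
    current = IDLE;                    // detour through the idle pose first
  } else {
    current = target;                  // then settle on the requested emotion
  }
  drawEyes(current);
}

void setup() { drawEyes(current); }

void loop() {
  updateEyeEngine();
  delay(100);                          // one transition step per tick
}
```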
Before diving into the eye animations, we designed a startup sequence for the robot. Upon powering on, the robot would display a custom logo and play a startup sound, signaling that it was operational. Following this, the robot's eyes would "wake up" from sleep mode, adding a touch of personality to its startup routine.
This careful attention to detail helped us create a robot that was not only functional but also engaging and interactive, embodying both cuteness and functionality.
Creating a blinking animation for the robot’s eyes using OpenToonz for visualization on an OLED screen has been an exciting and challenging journey. Initially, I faced difficulties in curving the eye shape to ensure it was cute and capable of conveying a range of expressions effectively. This required meticulous attention to detail and a thorough understanding of OpenToonz’s vector and bitmap drawing tools.
I designed distinct eye shapes to represent various emotions: an activated or wakeful state, sadness, happiness, surprise, and a deactivated state. Ensuring that each expression was both accurate and appealing demanded precise adjustments to every corner of the eye shapes.
OpenToonz, an open-source 2D animation program published by DWANGO and based on the Toonz software used by Studio Ghibli, proved to be an invaluable tool for this project. Its comprehensive set of features, including advanced digital drawing capabilities, vector and bitmap tools, and indexed color palettes, enabled me to create intricate eye shapes with precision. The software’s automatic inbetweening features facilitated smooth transitions between different expressions, greatly enhancing the overall animation quality.
Despite the initial challenges, working with OpenToonz has been an educational and rewarding experience. Its robust features, such as motion tracking, frame-by-frame animation, and scripting capabilities, provided a versatile platform for refining my animation project. The ability to automate routine tasks and integrate special effects seamlessly has made OpenToonz an indispensable part of my animation workflow, significantly improving my skills in digital animation.
When we moved on to developing the sound system for the robot, we encountered a unique challenge. The robot's sound system needed its own behavior structure or behavior tree, much like the other units. The challenge arose from using a digital buzzer module, which only has two states: on and off. This limitation initially posed a problem because we wanted the buzzer to produce different sounds.
To overcome this, we employed a technique called Manual Pulse Width Modulation (PWM). Pulse Width Modulation is a method used to control the amount of power delivered to a component by varying the width of the pulses in a signal. In simpler terms, it involves rapidly turning a signal on and off, creating a series of pulses. The ratio of on-time to off-time, known as the duty cycle, determines how much power the component receives.
Manual PWM involves manually coding the timing of these pulses rather than relying on automated hardware functions. While Manual PWM can sometimes result in a less smooth signal compared to hardware PWM, this characteristic actually worked to our advantage. The slight irregularities in the pulse signal caused the buzzer to produce distinctive, bubbling sounds. By adjusting the frequency (how quickly the pulses repeat) and duty cycle (the proportion of time the signal is on versus off), we were able to create various sounds.
This unique approach allowed us to achieve different sound effects with a simple buzzer. For example, the buzzer could produce different noises based on the robot’s actions, such as turning left or right, or when it successfully recognized a voice command. These sounds were designed to make the robot more lifelike, similar to how a pet cat meows or purrs to express itself. By creatively utilizing the limitations of the digital buzzer, we enhanced the robot's interactivity and charm.
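A bare-bones version of that manual PWM beep routine might look like this; the buzzer pin and the specific frequencies and duty cycles are assumptions used only to show the technique:

```cpp
// Manual PWM on a plain digital buzzer: bit-bang the pin on and off to set
// both the pitch (period) and the texture (duty cycle) of the sound.
const uint8_t BUZZER_PIN = 3;   // assumed pin

void beep(unsigned int freqHz, unsigned int durationMs, uint8_t dutyPercent) {
  unsigned long periodUs = 1000000UL / freqHz;
  unsigned long onUs  = periodUs * dutyPercent / 100;
  unsigned long offUs = periodUs - onUs;
  unsigned long cycles = (unsigned long)durationMs * 1000UL / periodUs;
  for (unsigned long i = 0; i < cycles; i++) {
    digitalWrite(BUZZER_PIN, HIGH);
    delayMicroseconds(onUs);
    digitalWrite(BUZZER_PIN, LOW);
    delayMicroseconds(offUs);
  }
}

void setup() { pinMode(BUZZER_PIN, OUTPUT); }

void loop() {
  beep(880, 120, 50);   // short chirp, e.g. "command recognized"
  delay(200);
  beep(440, 200, 20);   // lower, softer tone, e.g. "turning left"
  delay(2000);
}
```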
With the sound system completed and the movement behavior tree in place, we turned our attention to finalizing the voice recognition system. As previously mentioned, we utilized an external voice recognition module for this task. This module is capable of training and storing specific voice commands, which are crucial for interacting with the robot. Given that the robot lacked a touchscreen or physical buttons—except for capacitive touch sensors installed on its sides—voice recognition became the primary mode of communication.
To debug and showcase the robot's basic capabilities, we initially programmed a set of fundamental commands such as "move forward," "turn left," and "turn right." When the robot hears its name, "Cookie," it enters command mode. In this mode, it stops its current activity and listens for further voice instructions. This setup allows the robot to respond to commands with actions, making it interactive and user-friendly.
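As a rough illustration of that flow, the sketch below assumes the voice module reports each recognized phrase as a single byte over a serial link; that protocol, the pins, and the command indices are assumptions rather than the module's documented interface:

```cpp
#include <SoftwareSerial.h>

SoftwareSerial vr(11, 12);        // RX, TX to the voice recognition module (assumed pins)

const uint8_t CMD_WAKE    = 1;    // "Cookie"
const uint8_t CMD_FORWARD = 2;    // "move forward"
const uint8_t CMD_LEFT    = 3;    // "turn left"
const uint8_t CMD_RIGHT   = 4;    // "turn right"

bool commandMode = false;

void setup() {
  Serial.begin(9600);
  vr.begin(9600);
}

void loop() {
  if (!vr.available()) return;
  uint8_t cmd = vr.read();

  if (cmd == CMD_WAKE) {          // robot hears its name: stop and listen
    commandMode = true;
    Serial.println(F("Listening..."));
    return;
  }
  if (!commandMode) return;       // ignore commands until woken

  switch (cmd) {
    case CMD_FORWARD: Serial.println(F("Moving forward")); break;
    case CMD_LEFT:    Serial.println(F("Turning left"));   break;
    case CMD_RIGHT:   Serial.println(F("Turning right"));  break;
  }
  commandMode = false;            // drop back to autonomous behavior
}
```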
Our goal was to expand the robot’s functionality beyond basic movements. For instance, integrating a temperature sensor meant that the robot could report the current temperature when asked. This information would be displayed on its screen, providing useful feedback. Similarly, implementing a voice-activated calculator was straightforward. Users could verbally input numbers and mathematical operators, and the robot would compute the result and display it on the screen.
We spent time training the voice recognition module with various commands and ensuring it could accurately interpret and respond to user input. After this extensive training process, the voice recognition system was finally ready. The robot was now equipped to handle a variety of commands and tasks, enhancing its versatility and functionality.
With the eyes secured, we moved on to enhancing the robot's lifelike qualities by developing a behavior tree. This behavior tree is a crucial component for defining and managing the various movements and animations of the robot. It includes different types of animations and actions, such as eye animations for different emotions and movement protocols.
For the robot’s movements, the behavior tree encompasses a range of actions. These include moving left, turning right, moving forward or backward by specified distances, and rotating its head. The head's rotation can range from 0 to 180 degrees, allowing for a wide range of motion. Additionally, the robot can use its time-of-flight sensor to measure distances and detect obstacles. It can also produce sounds through the internal buzzer during these actions, adding another layer of interactivity.
The behavior tree is designed to handle the robot's responses to various stimuli. For instance, if the robot's capacitive touch sensors detect touch, the behavior tree dictates how the robot should react. This includes the type of movement, sound, and emotional display on the screen. Similarly, when the robot receives voice commands, the behavior tree determines the appropriate response, including what actions to perform and how to interact with the user.
Once all the different stacks, protocols, and response scenarios were established, we began developing and coding the behavior tree. This programming ensures that the robot can understand and respond to inputs quickly and interactively, providing a lifelike experience. The behavior tree integrates all the components and functionalities of the robot, allowing for seamless interaction and a more engaging user experience.
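A heavily simplified version of that stimulus-to-response mapping could look like the sketch below; every name in it is an illustrative stand-in for the real behavior tree:

```cpp
// Map each stimulus (touch, table edge, voice command) to a bundle of
// movement + sound + eye emotion.
enum Stimulus { STIM_NONE, STIM_TOUCH_HEAD, STIM_TOUCH_SIDE, STIM_EDGE, STIM_VOICE_STOP };

struct Response {
  Stimulus trigger;
  void (*act)();       // movement, sound, and eye animation for this stimulus
};

void purrAndLookUp()    { /* happy eyes, soft chirp, raise head servo */ }
void wiggleSideways()   { /* surprised eyes, short wiggle of both motors */ }
void backAwayFromEdge() { /* alarmed eyes, reverse, beep twice */ }
void stopEverything()   { /* idle eyes, motors off */ }

Response behaviorTable[] = {
  { STIM_TOUCH_HEAD, purrAndLookUp },
  { STIM_TOUCH_SIDE, wiggleSideways },
  { STIM_EDGE,       backAwayFromEdge },
  { STIM_VOICE_STOP, stopEverything },
};

Stimulus readStimulus() { return STIM_NONE; /* poll touch pads, ToF, voice module */ }

void setup() {}

void loop() {
  Stimulus s = readStimulus();
  for (auto &r : behaviorTable) {
    if (r.trigger == s) { r.act(); break; }
  }
}
```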
And there you have it—our robot, affectionately named "Cookie," is finally complete and ready to bring joy and utility to your life! This little explorer will rove around your desktop or tabletop, carefully navigating obstacles and avoiding falls with its vigilant eyes and adjustable head. Cookie's movements are akin to a curious pet, always eager to explore and discover its surroundings.
When not in use, Cookie conserves energy, extending battery life and ensuring long-lasting enjoyment. If Cookie ventures into your personal space, you can easily command it to stop, move left, move right, or move away. Beyond its exploratory abilities, Cookie is designed with numerous functionalities. We plan to incorporate fun tricks, like spinning around, to entertain you.
As we mentioned at the start of this journey, Cookie's primary purpose is to provide comfort and companionship, serving as a charming alternative for those who can't have pets of their own. It also doubles as a utility bot, equipped with a voice-activated calculator and the ability to display temperature and humidity.
After months of dedicated work and development, Cookie is more than just a project; it feels like a beloved member of the family—a cute, interactive companion ready to brighten your day.