Basic Examples
Before You Begin... Make sure you've completed the following setup steps:
- Set up your robot by following the instructions here.
- Update the firmware using the instructions here.
- Install the required software using the instructions here.
Once everything is set up, you're good to go! The following programs will run on the Raspberry Pi 5 inside your robot. You can connect via SSH, VNC, or even link VS Code to your Pi over SSH for development.
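For example, you can open a shell on the Pi from another machine on your network; the username and hostname below are placeholders, so use whatever you configured during setup:
ssh <user>@<your-pi-hostname-or-ip>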
Movement
Moving the base with command velocity
from hackerbot import Hackerbot
bot = Hackerbot()
bot.base.drive(0, 65) # Turn around
bot.base.drive(200, 0) # Move forward
bot.base.destroy(auto_dock=True) # Destroy instance and dock to charger
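The drive() call takes a linear and an angular velocity, as in the comments above. As a rough sketch, you can chain several calls with short pauses to alternate between driving forward and turning; this assumes each drive() command stays in effect until the next one is sent, so check the API reference and tune the values and timings on your own robot:
import time
from hackerbot import Hackerbot

bot = Hackerbot()
for _ in range(4):
    bot.base.drive(200, 0)  # Move forward (same linear value as above)
    time.sleep(2)
    bot.base.drive(0, 65)   # Rotate in place (same angular value as above)
    time.sleep(2)
bot.base.destroy(auto_dock=True)  # Destroy instance and dock to charger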
Navigation with SLAM
from hackerbot import Hackerbot
bot = Hackerbot()
bot.base.maps.goto(1.0, 1.0, 0.0, 0.1)
bot.base.destroy(auto_dock=True) # Destroy instance and dock to charger
The robot will first try to localize itself in the map and then navigate to the destination.
To better understand the map and positions, check out the command center.
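Building on the goto() call above, here is a minimal sketch that visits several map positions in sequence. It assumes the arguments are (x, y, heading, speed) as in the example, that the heading is in degrees, and that goto() returns once the move finishes; verify all of this against the API reference before relying on it:
from hackerbot import Hackerbot

# Waypoints as (x, y, heading, speed); replace these placeholders with positions from your own map
waypoints = [
    (1.0, 1.0, 0.0, 0.1),
    (2.0, 1.0, 90.0, 0.1),
    (2.0, 2.0, 180.0, 0.1),
]

bot = Hackerbot()
for x, y, heading, speed in waypoints:
    bot.base.maps.goto(x, y, heading, speed)
bot.base.destroy(auto_dock=True)  # Destroy instance and dock to charger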
Voice
Text to speech (TTS)
The Hackerbot Python package uses Piper TTS for speech synthesis.
Use the Piper TTS tool to find the voice you want for your Hackerbot.
Browse the available voices and select the model you want to use.
Copy the name of your model into your script and use the speak functionality:
bot.base.speak(model_src="en_GB-semaine-medium", text="Didi dadodoooo", speaker_id=None)
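Put together with the same setup pattern as the movement examples, a complete sketch might look like this (the voice name is the one from above; swap in whichever model you picked):
from hackerbot import Hackerbot

bot = Hackerbot()
# model_src is the Piper voice you copied; speaker_id selects a speaker in multi-speaker models
bot.base.speak(model_src="en_GB-semaine-medium", text="Hello, I am your Hackerbot!", speaker_id=None)
bot.destroy()  # Destroy instance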
Speech to text (STT)
Currently, speech to text isn't officially supported. However, there are many speech-to-text options you can try, e.g. OpenAI, Google Speech-to-Text, or the widely used SpeechRecognition package.
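For example, here is a minimal sketch using the SpeechRecognition package (install it with pip install SpeechRecognition; microphone input also needs PyAudio, and the Google recognizer used below requires an internet connection):
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # Calibrate for background noise
    print("Say something...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio)  # Free web API, fine for light experimentation
    print(f"You said: {text}")
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as e:
    print(f"Recognition request failed: {e}")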
Vision
Before trying out some of the coolest examples, make sure you have the dependencies installed:
cd ~/hackerbot/hackerbot-tutorials/vision
pip install --no-cache-dir -r requirements.txt # This install will take a while
Image Recognition with YOLO
cd ~/hackerbot/hackerbot-tutorials/vision/image_rec
python3 yolo.py

Tap the "q" key on your keyboard to quit.
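If you are curious what a script like this roughly does, here is a minimal sketch using the ultralytics package and OpenCV. It is an illustration, not the contents of yolo.py; the model name and camera index are assumptions:
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # Small pretrained model, downloaded on first run
cap = cv2.VideoCapture(0)   # Default camera; adjust the index for your setup

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)                 # Run detection on the current frame
    annotated = results[0].plot()          # Draw boxes and labels on a copy of the frame
    cv2.imshow("YOLO", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # Tap "q" to quit
        break

cap.release()
cv2.destroyAllWindows()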
Image Recognition with the AI Kit
Check out our tutorial on getting started with the AI Kit here.
Face Recognition
Navigate to the directory:
cd ~/hackerbot/hackerbot-tutorials/vision/face_rec
Take some headshots; the script captures the requested number of pictures, pausing for the given delay between shots.
python3 headshots_picam.py --name Bobby --num_photos 10 --delay 2
Then train the model by running:
python3 train_model.py
Then use it to recognize your face by running:
python3 facial_req.py
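Under the hood, pipelines like this typically rely on the face_recognition package: compute an encoding for each known face once, then compare new frames against those encodings. Here is a minimal sketch of the idea (an illustration, not the contents of train_model.py or facial_req.py; the file paths are placeholders):
import face_recognition

# "Training" is essentially computing an encoding for each known face
known_image = face_recognition.load_image_file("dataset/Bobby/headshot_1.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# At recognition time, encode the new image and compare against the known encodings
unknown_image = face_recognition.load_image_file("test_frame.jpg")
for encoding in face_recognition.face_encodings(unknown_image):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Bobby" if match else "Unknown face")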

Head Movement
Now you can use the camera to recognize objects and people. Try looking around to scan for faces!
cd ~/hackerbot/hackerbot-tutorials/vision/face_rec
python3 look_around.py --name Bobby
Arm & Gripper Manipulation
More cool examples coming soon! In the meantime, just smile and wave!
import time
from hackerbot import Hackerbot
bot = Hackerbot()
bot.arm.move_joints(90, 50, 0, -60, -90, 0, 100) # Move to right
time.sleep(1.5)
bot.arm.move_joints(90, -20, 0, 0, -90, 0, 100) # Move to left
time.sleep(1.5)
bot.arm.move_joints(0, 0, 0, 0, 0, 0, 100) # Center to straight position
time.sleep(1.5)
bot.destroy() # Destroy instance
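The six leading arguments to move_joints() appear to be joint angles (in degrees) and the final one a speed value, though that is an assumption worth confirming in the API reference. Reusing only the calls shown above, a simple repeated wave might look like this:
import time
from hackerbot import Hackerbot

bot = Hackerbot()
for _ in range(3):
    bot.arm.move_joints(90, 50, 0, -60, -90, 0, 100)  # Swing to the right
    time.sleep(1.5)
    bot.arm.move_joints(90, -20, 0, 0, -90, 0, 100)   # Swing to the left
    time.sleep(1.5)
bot.arm.move_joints(0, 0, 0, 0, 0, 0, 100)  # Return to the straight position
bot.destroy()  # Destroy instance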