Basic Examples

Before You Begin... Make sure you’ve completed the following setup steps:

  • βœ… Set up your robot by following the instructions here.

  • βœ… Update the firmware using the instructions here.

  • βœ… Install the required software using the instructions here.

Once everything is set up, you’re good to go! The following programs will run on the Raspberry Pi 5 inside your robot. You can connect via SSH, VNC, or even link VS Code to your Pi over SSH for development.


πŸ‘Ÿ Movement

Model supports: All versions

  • Moving the base with command velocity

from hackerbot import Hackerbot

bot = Hackerbot()

bot.base.drive(0, 65) # Turn around
bot.base.drive(200, 0) # Move forward

bot.base.destroy(auto_dock=True) # Destroy instance and dock to charger
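As a sketch of how these calls can be sequenced, here's a hypothetical timed loop that drives a rough square. It assumes drive() is non-blocking and takes a linear velocity followed by an angular velocity (as the example above suggests); the velocities and sleep durations are guesses you'd tune on your own robot.

import time
from hackerbot import Hackerbot

bot = Hackerbot()

# Drive a rough square: four edges, four in-place turns.
# Velocities and sleep durations are illustrative guesses.
for _ in range(4):
    bot.base.drive(200, 0)  # edge: move forward
    time.sleep(2)
    bot.base.drive(0, 65)   # corner: rotate in place
    time.sleep(1.4)         # tune until the turn is about 90 degrees

bot.base.drive(0, 0)        # stop
bot.base.destroy(auto_dock=True)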

  • Navigation with SLAM

from hackerbot import Hackerbot

bot = Hackerbot()

bot.base.maps.goto(1.0, 1.0, 0.0, 0.1)

bot.base.destroy(auto_dock=True) # Destroy instance and dock to charger

The robot will first try to localize itself in the map, then navigate to the destination.

To understand the map & positions better, check out the command center.
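Building on the example above, a hypothetical multi-waypoint patrol might look like the sketch below. It assumes goto() takes an x, y position, a heading, and a speed in map coordinates (mirroring the call above) and blocks until the goal is reached — both are assumptions, not confirmed API behavior.

from hackerbot import Hackerbot

bot = Hackerbot()

# Visit a few map positions in order. The (x, y, heading, speed)
# interpretation mirrors the example above and is an assumption.
waypoints = [
    (1.0, 1.0, 0.0, 0.1),
    (2.0, 0.5, 90.0, 0.1),
    (0.0, 0.0, 0.0, 0.1),
]

for x, y, heading, speed in waypoints:
    bot.base.maps.goto(x, y, heading, speed)

bot.base.destroy(auto_dock=True)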


πŸ—£οΈ Voice

Model supports: AI, AI PRO, AI ELITE versions

  • Text to speech (TTS)

The Hackerbot Python package uses Piper TTS for speech synthesis.

  1. Use the Piper TTS tool to find the voice you want for your Hackerbot.

  2. Navigate to the πŸ€— Hugging Face voices directory and select the model you want to use.

  3. Copy the name of your model into your script, and use the speak function:

bot.base.speak(model_src="en_GB-semaine-medium", text="Didi dadodoooo", speaker_id=None)
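Put together, a complete script might look like this sketch. The bot setup and teardown mirror the movement examples; the speak() call is taken from the line above, and speaker_id selects a voice inside multi-speaker Piper models (None uses the default).

from hackerbot import Hackerbot

bot = Hackerbot()

# Speak a phrase with the Piper voice chosen above. speaker_id picks a
# voice within multi-speaker models; None uses the default.
bot.base.speak(
    model_src="en_GB-semaine-medium",
    text="Hello, I am your Hackerbot!",
    speaker_id=None,
)

bot.base.destroy(auto_dock=True)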

  • Speech to text (STT)

Currently, speech to text isn't officially supported. However, there are many speech-to-text services and libraries you can try, e.g. OpenAI's Whisper, Google Speech-to-Text, or the widely used SpeechRecognition package.
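As a starting point, here's a minimal sketch using the SpeechRecognition package. It assumes a working microphone attached to the Pi and an internet connection (recognize_google calls a free web API), and you'd need to install the dependencies first (pip install SpeechRecognition pyaudio).

import speech_recognition as sr

# Minimal speech-to-text sketch using the SpeechRecognition package.
# Assumes a working microphone and an internet connection.
recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Say something...")
    audio = recognizer.listen(source)

try:
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio.")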


πŸ‘οΈ Vision

Model supports: AI, AI PRO, AI ELITE versions

Before trying out some of the coolest examples, make sure you have the dependencies installed:

cd ~/hackerbot/hackerbot-tutorials/vision
pip install --no-cache-dir -r requirements.txt  # This install will take a while

To view the OpenCV window, make sure you run the following examples in a VNC session (or another graphical desktop).

  • Image Recognition with YOLO

cd ~/hackerbot/hackerbot-tutorials/vision/image_rec
python3 yolo.py

YOLOv11 recognizing everything on my desk!

Tap the "q" key on your keyboard to quit.

  • Image Recognition with the AI Kit

Check out our tutorial on getting started with the AI Kit here.

  • Face Recognition

Navigate to the directory:

cd ~/hackerbot/hackerbot-tutorials/vision/face_rec

Take some headshots: the script captures the given number of photos, pausing for the specified delay between shots.

python3 headshots_picam.py --name Bobby --num_photos 10 --delay 2

Then train the model by running:

python3 train_model.py 

Then use it to recognize your face by running:

python3 facial_req.py

Your Hackerbot now recognizes you!
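Under the hood, pipelines like this usually boil down to comparing face encodings. The sketch below shows that core idea using the face_recognition package; it's an assumption about how the tutorial scripts work (the file paths are hypothetical), not their actual code.

import face_recognition

# Encode one known face (path is hypothetical) and compare every face
# found in a test image against it.
known = face_recognition.load_image_file("dataset/Bobby/headshot_0.jpg")
known_encoding = face_recognition.face_encodings(known)[0]

unknown = face_recognition.load_image_file("test.jpg")
for encoding in face_recognition.face_encodings(unknown):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Hi Bobby!" if match else "Unknown face")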

πŸ€– Head Movement

Model supports: AI PRO, AI ELITE versions

Now you can use the camera to recognize objects and people. Try looking around to scan for faces!

cd ~/hackerbot/hackerbot-tutorials/vision/face_rec
python3 look_around.py --name Bobby

🦾 Arm & Gripper Manipulation

Model supports: AI ELITE versions

More cool examples coming soon! In the meantime, just smile and wave!

import time
from hackerbot import Hackerbot

bot = Hackerbot()

# move_joints: six joint angles followed by a speed value
bot.arm.move_joints(90, 50, 0, -60, -90, 0, 100) # Wave to the right
time.sleep(1.5)
bot.arm.move_joints(90, -20, 0, 0, -90, 0, 100) # Wave to the left
time.sleep(1.5)
bot.arm.move_joints(0, 0, 0, 0, 0, 0, 100) # Return to the straight, centered position
time.sleep(1.5)

bot.destroy() # Destroy instance
