Documentation | Hackerbot Industries

Deploy an AI Agent

This guide explains how to deploy and integrate a Large Language Model (LLM) with your Hackerbot system. We'll use Google's Gemini API in this example, but the steps apply similarly to other LLM APIs.



Prerequisites

  • Latest version of the Hackerbot Python package installed

  • Python virtual environment configured

  • Microphone and speaker connected to your robot

  • Hackerbot AI+ recommended

  • Access to the Google Gemini API (or an equivalent LLM provider)


Setup

Move into the Hackerbot tutorials directory where the LLM scripts are located:

cd ~/hackerbot/hackerbot-tutorials/AI

Install the required Python libraries:

pip install -r requirements.txt

Install the required system packages:

sudo apt-get install flac espeak-ng

Obtain an API Key from Gemini

  1. Go to Google AI Studio and sign in with your Google account.

  2. Create a new API Key.

  3. Copy the generated key — you’ll need it for the next step.


Set Up the .env File

Create a .env file in the directory ~/hackerbot/hackerbot-tutorials/ if it does not exist:

touch .env

Open .env and add your Gemini API key:

GOOGLE_API_KEY=your-api-key-here
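The tutorial scripts read this key at startup. As a sketch of what that loading step looks like, here is a hypothetical stdlib-only helper (`load_env_key` is illustrative and not part of the tutorial code, which may use the python-dotenv package instead):

```python
import os

# Hypothetical helper: read a key from a .env file without extra dependencies.
# Falls back to the process environment if the variable is already exported.
def load_env_key(path=".env", name="GOOGLE_API_KEY"):
    if name in os.environ:
        return os.environ[name]
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith(name + "="):
                    return line.split("=", 1)[1]
    except FileNotFoundError:
        pass
    return None
```

If this returns `None`, the assistant cannot authenticate, so it is worth failing early with a clear error message.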

Customize the AI Agent

The behavior and response format of your Hackerbot AI agent can be customized inside speak_w_gemini.py (or your main script).

Here are the important sections:

1. Configure the Agent’s Personality

You can set the tone or role of the AI when initializing the chat history:

chat = model.start_chat(history=[
    {
        "role": "user",
        "parts": [{"text": 
            "You are a colleague named Robby, and you are experiencing Monday blues.\n"
            ...
        }]
    },
    ...
])

Tip: You can modify the personality to make the robot more cheerful, formal, or specialized (e.g., teacher, tour guide, etc.).

Example alternatives:

  • "You are an enthusiastic personal trainer motivating someone to exercise."

  • "You are a formal assistant robot trained to help users navigate a warehouse."


2. Configure the Agent's Voice

In the speak function inside actions.py, make sure you load the appropriate Piper TTS model. For details on how to do this, check the Hackerbot documentation.

3. Configure the Response Format

The Gemini agent is instructed to only respond with raw JSON. This allows the robot to parse actions reliably without extra text.

Example of the prompt instructions:

Respond ONLY with JSON in one of the following formats:
- {"action": "action_name"}
- {"action": "speak", "parameters": {"text": "your text"}}
- or a list of such objects if you want the robot to perform multiple actions.

DO NOT add explanations.
DO NOT use markdown formatting (like triple backticks).

This strict format ensures the robot can easily extract and execute actions from the AI’s response.
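A parser for this protocol can be sketched in a few lines. The helper below (`parse_actions` is an illustrative name, not necessarily what the tutorial code uses) accepts either a single action object or a list of them, and normalizes to a list:

```python
import json

# Parse a strict-JSON reply into a list of action objects.
# Raises ValueError if any object is missing the required "action" key.
def parse_actions(reply: str):
    data = json.loads(reply)
    actions = data if isinstance(data, list) else [data]
    for a in actions:
        if not isinstance(a, dict) or "action" not in a:
            raise ValueError(f"Malformed action object: {a!r}")
    return actions
```

For example, `parse_actions('{"action": "nod_head"}')` yields a one-element list, so the rest of the pipeline can always iterate over actions uniformly.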


4. Add New Supported Actions

In the original example, head movement actions are included. If your robot doesn't have a head, feel free to exclude those actions from the supported list.

Supported actions are listed in the same prompt:

Supported actions are: shake_head, nod_head, look_left, look_right, look_up, look_down, spin_right, spin_left, spin_around, and speak.

If you want to add a new action, you must:

  1. Define the function in actions.py:

    def wave_hand():
        print("Waving hand!")
        # Add your robot command here
  2. Update the execute_robot_action function in utils.py:

    "wave_hand": lambda: wave_hand(),
  3. Update the Gemini prompt to include the new action name:

    Supported actions are: shake_head, nod_head, wave_hand, look_left, look_right, ...

This tells Gemini it can now trigger the new action.
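The dispatch pattern in execute_robot_action can be sketched as follows (the function bodies here are illustrative stand-ins; the tutorial's real handlers call robot commands and may take a bot object):

```python
# Sketch of a name -> handler dispatch table, as used to route parsed actions.
def wave_hand():
    print("Waving hand!")  # stand-in for the real robot command

def execute_robot_action(action_name, parameters=None):
    actions = {
        "wave_hand": lambda: wave_hand(),
        "speak": lambda: print((parameters or {}).get("text", "")),
    }
    handler = actions.get(action_name)
    if handler is None:
        print(f"Unsupported action: {action_name}")
        return False
    handler()
    return True
```

Looking handlers up in a dict keeps unsupported action names from crashing the assistant: an unknown name simply logs a message instead of raising.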


Run the Robot Assistant

After everything is configured, start the assistant:

python3 speak_w_gemini.py

The robot will:

  • Listen for your voice commands

  • Send them to Gemini

  • Parse the response

  • Execute the requested action(s)
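The loop above can be sketched like this. The `listen()` and `ask_gemini()` stubs stand in for the real microphone capture and Gemini SDK calls in speak_w_gemini.py, so this is an outline of the control flow rather than the actual implementation:

```python
import json

def listen():
    # Stub: the real script records and transcribes a voice command.
    return "please nod"

def ask_gemini(text):
    # Stub: the real script sends the command to the Gemini chat session.
    return '{"action": "nod_head"}'

def run_once():
    # One pass of the loop: listen -> query Gemini -> parse -> execute.
    reply = ask_gemini(listen())
    data = json.loads(reply)
    actions = data if isinstance(data, list) else [data]
    executed = []
    for action in actions:
        # The real script dispatches via execute_robot_action(...) here.
        executed.append(action["action"])
    return executed
```

The real assistant wraps this body in a `while` loop so the robot keeps listening for the next command.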


Troubleshooting

  • Authentication Error: Make sure .env is correctly set with your API key.

  • Speech Recognition Error: Ensure your microphone is accessible and configured, and espeak or espeak-ng is installed.

  • Action Not Triggering: Confirm the action function exists in actions.py and the action name matches the prompt.

  • Gemini Response Invalid: If Gemini returns invalid JSON, double-check your prompt to enforce strict JSON responses.
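One common cause of invalid responses is the model wrapping its JSON in markdown code fences despite the prompt's instructions. A defensive cleanup step like the one below (an assumption on our part, not something the tutorial code is confirmed to do) can be applied before parsing:

```python
# Strip a leading/trailing markdown code fence (``` or ```json) from a reply,
# leaving the inner JSON text untouched.
def strip_fences(reply: str) -> str:
    text = reply.strip()
    if text.startswith("```"):
        # Drop the opening fence line (which may carry a language tag).
        text = text.split("\n", 1)[1] if "\n" in text else ""
        if text.rstrip().endswith("```"):
            text = text.rstrip()[:-3]
    return text.strip()
```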


Summary

By following these steps, you can successfully deploy an LLM-powered interaction system on Hackerbot. You can expand functionality further by adding new actions, switching to other LLM APIs, or enhancing the user input handling.
