
ChatGPT Voice Assistant on Raspberry Pi Using Custom Data, ChatGPT, Whisper API, Speech Recognition and Pyttsx3



A Raspberry Pi can be used to build a voice assistant with ChatGPT and the Whisper API, as shown here. This article shows how to customize that voice assistant application with your own data, e.g., PDF documents, financial data, etc.




To train and create an AI chatbot based on a custom knowledge base, we first need an API key from OpenAI. The API key allows you to use OpenAI's model as the LLM that studies your custom data and draws inferences.



The OpenAI library needs to be configured with an account's secret key, which is available on the OpenAI website. Set the key in the OPENAI_API_KEY environment variable:

    os.environ["OPENAI_API_KEY"] = 'YOUR API KEY'
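Rather than hard-coding the secret in source, the key can also be read and validated from the environment at startup. A minimal sketch (the helper name is illustrative, not part of the OpenAI library):

```python
import os

def get_api_key(env_var="OPENAI_API_KEY"):
    """Return the API key from the environment, failing early if it is unset."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before running the assistant")
    return key
```

Failing early with a clear message avoids a confusing authentication error deep inside the first API call.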


Copy the custom data documents into a specific directory on the Raspberry Pi, e.g., /home/pi/Documents.
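Before indexing, it can help to confirm that the directory actually contains the documents you expect. A small check (the function name is illustrative; the path matches the example above):

```python
import os

def list_documents(doc_dir="/home/pi/Documents"):
    """Return the sorted filenames that will be indexed,
    or [] if the directory does not exist."""
    if not os.path.isdir(doc_dir):
        return []
    return sorted(os.listdir(doc_dir))
```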


Use the Recognizer class from the SpeechRecognition library to recognize spoken words and phrases.
if __name__ == "__main__":
    # create a recognizer and a microphone
    recognizer = sr.Recognizer()
    microphone = sr.Microphone()
    # start the bot
    voice_bot(microphone, recognizer)

SpeechRecognition captures audio from the microphone: the recognizer first adjusts for ambient noise, then listens for a phrase. The captured audio can later be exported in WAV format.
    while True:
        with microphone as source:
            recognizer.adjust_for_ambient_noise(source)
            print("Say something!")
            audio = recognizer.listen(source)


            try:
                # convert audio to text using Whisper API
                whisperresponse: str = getWhisperResponse(audio)
                # check for wake up word
                if "hello" in whisperresponse.lower():

Save the resulting audio file in a folder, e.g., /home/pi/Downloads. 

def getWhisperResponse(audio):
    # save the captured audio as a WAV file
    with open("/home/pi/Downloads/microphone.wav", "wb") as f:
        f.write(audio.get_wav_data())
    # send the file to the Whisper API for transcription
    with open("/home/pi/Downloads/microphone.wav", "rb") as file:
        response = openai.Audio.transcribe(model="whisper-1", file=file)
    os.remove("/home/pi/Downloads/microphone.wav")
    return response.text
The saved WAV file is used as input to the Whisper API, which converts the audio into a text response.

                # convert audio to text using Whisper API
                whisperresponse: str = getWhisperResponse(audio)


LlamaIndex converts your document data into a vectorized index for efficient querying. This index is used to find the most relevant content for a given query.

max_input_size = 4096
num_outputs = 256
max_chunk_overlap = 20
chunk_size_limit = 600

prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs))


The information retrieved from LlamaIndex is sent to the GPT prompt along with the question transcribed by the Whisper API, giving GPT the context it needs to provide a response.


documents = SimpleDirectoryReader("/home/pi/Documents").load_data()
index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper)
index.save_to_disk('index.json')

query_engine = GPTSimpleVectorIndex.load_from_disk('index.json')
chatgpt_response = query_engine.query(whisperresponse, response_mode="compact")



Here is the complete code:

import os
import speech_recognition as sr
import requests
import pyttsx3
import openai
from gpt_index import SimpleDirectoryReader, GPTListIndex, GPTSimpleVectorIndex, LLMPredictor, PromptHelper
from langchain.chat_models import ChatOpenAI
import sys

os.environ["OPENAI_API_KEY"] = 'YOUR API KEY'


def getWhisperResponse(audio):
    # save the captured audio as a WAV file
    with open("/home/pi/Downloads/microphone.wav", "wb") as f:
        f.write(audio.get_wav_data())
    # send the file to the Whisper API for transcription
    with open("/home/pi/Downloads/microphone.wav", "rb") as file:
        response = openai.Audio.transcribe(model="whisper-1", file=file)
    os.remove("/home/pi/Downloads/microphone.wav")
    return response.text

def voice_bot(microphone: sr.Microphone,recognizer: sr.Recognizer):
   

    OPENAI_API_KEY = 'YOUR API KEY'
    openai.api_key = OPENAI_API_KEY

    # instantiate speaker and set speaking rate
    engine = pyttsx3.init()
    engine.setProperty('rate', 150)
  
    # set gender-based voice
    voices = engine.getProperty('voices')
    engine.setProperty('voice', 'english+f4')
  
    # start a loop for input
    while True:
        with microphone as source:
            recognizer.adjust_for_ambient_noise(source, duration=0.5)
            print("Say something!")
            audio = recognizer.listen(source)

            try:
                # convert audio to text using Whisper API
                whisperresponse: str = getWhisperResponse(audio)

                # check for wake up word
                if "hello" in whisperresponse.lower():
                    # create user
                    engine.say("Hi, Welcome")
                    max_input_size = 4096
                    num_outputs = 256
                    max_chunk_overlap = 20
                    chunk_size_limit = 600

                    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
                    llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo", max_tokens=num_outputs))

                    documents = SimpleDirectoryReader("/home/pi/Documents").load_data()
                    index = GPTSimpleVectorIndex(documents, llm_predictor=llm_predictor, prompt_helper=prompt_helper)
                    index.save_to_disk('index.json')

                    query_engine = GPTSimpleVectorIndex.load_from_disk('index.json')
                    chatgpt_response = query_engine.query(whisperresponse, response_mode="compact")

                    engine.say(chatgpt_response)
                    engine.runAndWait()
            except sr.UnknownValueError:
                print("Recognizer unknown error")
            except sr.RequestError as e:
                print(f"Request Error Speech Recognizer {e}")

if __name__ == "__main__":
    # create a recognizer and a microphone
    recognizer = sr.Recognizer()
    microphone = sr.Microphone()
    # start the bot
    voice_bot(microphone, recognizer)
