This tutorial shows you how to run a simple pipeline (ASR + NLU) in which Whisper transcribes your speech and feeds the transcript into the NLU component for inference.

Steps:

Since the NLU component is not yet available on PyPI, for now you need to clone the repository and install it locally.

  1. Clone the SIC repo

    1. git clone https://github.com/Social-AI-VU/social-interaction-cloud.git

  2. Switch to the nlu_component branch (inside the cloned repository):

    1. cd social-interaction-cloud
    2. git checkout nlu_component

  3. Create and activate a virtual environment:

    1. If you are using a plain Python virtual environment

      python -m venv venv_sic
      source venv_sic/bin/activate
    2. If you are using an Anaconda environment

      conda create -n venv_sic python=3.12
      conda activate venv_sic
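    Either way, before installing anything you can quickly confirm that the environment is active (the python on your PATH should now point into venv_sic):

      which python
      python --version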
  4. Install SIC, NLU, and Whisper dependencies from the local repo (you should still be inside the social-interaction-cloud directory):

    pip install ."[whisper-speech-to-text,nlu]"
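    To verify that the local install picked up the extras, a quick sanity check (assuming the distribution is named social-interaction-cloud and the top-level package is sic_framework, as in the repo):

      pip show social-interaction-cloud
      python -c "import sic_framework"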
  5. Run the NLU and Whisper components in separate terminals. Don't forget to start a Redis server first; see the details here: https://socialrobotics.atlassian.net/wiki/spaces/CBSR/pages/2180415493/Getting+started#Step-1%3A-starting-Redis-on-your-laptop

    # Start the Redis server
    redis-server conf/redis/redis.conf
    
    # Start the NLU and Whisper components separately
    run-whisper
    run-nlu
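    If you are unsure whether Redis is actually up, you can check from another terminal before starting the components (assuming redis-cli was installed alongside redis-server):

      # A running server replies with PONG
      redis-cli ping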
  6. Open a new terminal and activate the same virtual environment you created earlier

  7. Clone the sic_applications repo

    1. git clone https://github.com/Social-AI-VU/sic_applications.git

  8. Add the (trained) NLU model and ontology to the configuration folder sic_applications/conf/nlu. The default file names are "model_checkpoint.pt" and "ontology.json".

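    For example, if your trained model and ontology are stored elsewhere, copying them into place could look like this (the source paths are placeholders):

      mkdir -p sic_applications/conf/nlu
      cp /path/to/your/model_checkpoint.pt sic_applications/conf/nlu/model_checkpoint.pt
      cp /path/to/your/ontology.json sic_applications/conf/nlu/ontology.json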

  9. Run the demo

    cd sic_applications/demos/desktop
    python demo_desktop_asr_nlu.py
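
For reference, the ASR half of the demo follows the same pattern as the existing SIC desktop Whisper demos. The sketch below only illustrates that pattern: the module paths are taken from those demos, and the NLU call is left as a hypothetical placeholder, so check demo_desktop_asr_nlu.py for the actual connector and message names.

    # Minimal sketch of the ASR side of the pipeline, based on the existing SIC
    # desktop Whisper demos; adjust the imports if they differ in your checkout.
    from sic_framework.devices.desktop import Desktop
    from sic_framework.services.openai_whisper_speech_to_text.whisper_speech_to_text import (
        GetTranscript,
        SICWhisper,
    )

    desktop = Desktop()

    # Connect Whisper to the desktop microphone and request a single transcript
    whisper = SICWhisper()
    whisper.connect(desktop.mic)
    transcript = whisper.request(GetTranscript(timeout=10, phrase_time_limit=30))
    print("Transcript:", transcript.transcript)

    # Hypothetical placeholder: pass the transcript text on to the NLU connector
    # for inference; see demo_desktop_asr_nlu.py for the real request class.
    # nlu_result = nlu.request(NLURequest(transcript.transcript))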
