Sentiment Analysis

Introduction

The sentiment_analysis service applies sentiment analysis to any text input. Used together with the Dialogflow service, it can also run sentiment analysis on a user's audio input (i.e., speech). The service uses a Naive Bayes classifier from the Natural Language Toolkit (NLTK) to classify a sentence as either positive or negative.
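
As an illustrative sketch of the technique, not the service's actual training code (the real model is trained on tweets and its feature extraction may differ), an NLTK Naive Bayes sentiment classifier can be built like this:

    # Illustrative NLTK Naive Bayes sentiment classifier. The feature
    # extraction and the tiny training set are placeholders; the real
    # service is trained on a tweet corpus.
    from nltk.classify import NaiveBayesClassifier

    def word_features(sentence):
        # Simple bag-of-words features: every token present maps to True.
        return {word.lower(): True for word in sentence.split()}

    train_data = [
        (word_features('I love this'), 'Positive'),
        (word_features('what a great day'), 'Positive'),
        (word_features('I hate this'), 'Negative'),
        (word_features('this is terrible'), 'Negative'),
    ]

    classifier = NaiveBayesClassifier.train(train_data)
    print(classifier.classify(word_features('I really love it')))  # 'Positive'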

Docker name: sentiment_analysis

Input

  • sensors: Microphone

  • actuators: Speaker

  • services: stream_audio, dialogflow

  • parameters:

    • Dialogflow keyfile path: str

    • Dialogflow project ID: str

    • sentiment: bool = True

      • enables the sentiment_analysis service in the BasicSICConnector

    • All parameters are set when the BasicSICConnector is instantiated

Output

  • sentiment: str

    • the outcome of the sentiment analysis

    • either Positive or Negative (see the sketch below)
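
Because the output is a plain string, a consuming script typically just branches on the two values. A minimal sketch (the handler name is hypothetical, not a framework callback):

    # Illustrative consumer of the service's output; 'on_sentiment' is
    # a hypothetical name, not part of the framework API.
    def on_sentiment(sentiment: str) -> None:
        if sentiment == 'Positive':
            print('The user sounded positive.')
        else:  # 'Negative' is the only other value
            print('The user sounded negative.')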

Initialisation

Ensure:

  1. All required services are running: stream_audio, dialogflow, and sentiment_analysis, as well as the microphone and speaker drivers (when running locally on a laptop/computer for debugging).

  2. Your Dialogflow agent is set up correctly:

    Dialogflow Intents Setup
    Dialogflow Entities Setup

  3. Pass your local IP address, Dialogflow key file path, and Dialogflow agent ID, and set sentiment = True when creating an instance of BasicSICConnector (see the sketch after these steps).

  4. Run the script.
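
A minimal sketch of steps 3 and 4, assuming the import path and keyword-argument names below (they are derived from the parameter list above; verify them against the BasicSICConnector signature in your version of the framework):

    # Sketch of steps 3-4. The keyword names are assumptions based on
    # the parameter list in the Input section, not a verified signature.
    from social_interaction_cloud.basic_connector import BasicSICConnector

    sic = BasicSICConnector(
        server_ip='127.0.0.1',                       # your local IP address
        dialogflow_key_file='path/to/keyfile.json',  # Dialogflow key file path
        dialogflow_agent_id='your-project-id',       # Dialogflow agent (project) ID
        sentiment=True,                              # enable the sentiment_analysis service
    )
    sic.start()
    # ... interact with the user here (speech in, sentiment out) ...
    sic.stop()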

Example

Refer to sentiment_analysis_example for a practical implementation of the steps described in the Initialisation section.

Events

  • onAudioIntent

    • raised when a new intent is detected

  • IntentDetectionDone

    • raised when intent detection has finished

  • onAudioLanguage

    • raised when the audio language has been changed

  • LoadAudioDone

    • raised when an audio file, if one is used, has finished loading

  • onTextSentiment

    • raised when sentiment analysis on a text has completed; the result is Positive or Negative (see the sketch below)
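
As a hypothetical illustration of reacting to onTextSentiment, reusing the sic connector from the Initialisation sketch (register_event_listener is an assumed name, not a confirmed BasicSICConnector method; see sentiment_analysis_example for the real wiring):

    # Hypothetical event wiring; 'register_event_listener' is an
    # illustrative name only.
    def on_text_sentiment(sentiment: str) -> None:
        # Fired once sentiment analysis completes; the payload is the
        # string 'Positive' or 'Negative'.
        print('Detected sentiment:', sentiment)

    sic.register_event_listener('onTextSentiment', on_text_sentiment)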

Known Issues

The model was trained on 10,000 tweets from Twitter. It is therefore not trained specifically on conversational data, and in some cases it may not perform well. It outputs only Positive or Negative.