Getting Started
Natural Language Understanding (NLU) is a core component of conversational AI systems, enabling machines to interpret and act on user input in natural language. This Intent and Slot Classifier project is designed to help students understand the pipeline involved in building an NLU model that performs intent classification and slot filling. These tasks allow AI models to classify a user's goal (intent) and extract key information (slots) from their input.
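To make the two tasks concrete, here is a toy illustration of what an NLU model produces for a single utterance. The intent and slot names below are invented for illustration and are not taken from this project's ontology.

```python
# Hypothetical illustration of intent classification and slot filling.
# The label names here are examples only, not this project's ontology.
utterance = "Find me a Japanese recipe for lunch."

# Intent classification: assign the whole utterance one goal label.
intent = "recipe_search"

# Slot filling: extract key spans from the utterance and label them.
slots = {
    "cuisine": "Japanese",
    "meal": "lunch",
}

print(intent, slots)
```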
...
To complete these goals, work through all subpages in this section, starting with Ontology.
...
Repository Overview
> **Note:** Find the Intent and Slot Classifier files under /
The repository is structured as follows:

```
utils/
├── checkpoints/          # Directory for saving trained model checkpoints
├── data/                 # Directory containing data files (ontology, train/test datasets)
│   ├── ontology.json     # Ontology file containing intents, slots, and synonyms
│   ├── train.json        # Training dataset
│   ├── test.json         # Test dataset
│   └── synonyms.json     # Synonyms for slot normalization
├── data_processing.py    # Utilities for additional data preprocessing (if needed)
├── dataset.py            # Dataset preparation and preprocessing module
├── evaluation.py         # Model evaluation and metrics generation
├── run_train_test.py     # Main script to run training, evaluation, and inference
├── model.py              # Defines the BERT-based model architecture
├── predict.py            # Inference module for predicting intents and slots
├── requirements.txt      # Python dependencies for the project
├── train.py              # Training module for the intent-slot classifier
└── utils.py              # Helper functions for argument parsing, slot normalization, and synonym resolution
```
...
| Argument | Type | Default | Description |
|---|---|---|---|
|  |  |  | Path to the ontology JSON file. |
|  |  |  | Path to the training dataset. |
|  |  |  | Path to the test dataset. |
|  |  |  | Path to save/load the trained model weights. |
| `--train_model` |  |  | Train the model when this flag is set. |
| `--evaluate` |  |  | Evaluate the model on the test dataset when this flag is set. |
|  |  |  | Number of epochs for training. |
|  |  |  | Batch size for training. |
|  |  |  | Learning rate for the optimizer. |
|  |  |  | Maximum sequence length for tokenization. |
|  |  |  | Random seed for reproducibility. |
| `--inference_text` |  |  | Text input for running inference. |
| `--show_dist` |  |  | Show the intent and slot distribution in the dataset. |
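A CLI like the one tabulated above is typically built with `argparse`. The sketch below declares only the flags confirmed by the usage examples on this page; the remaining arguments (file paths, epochs, batch size, learning rate, sequence length, seed) would be declared the same way, but their exact names are not shown here, so they are omitted.

```python
import argparse

# Minimal sketch of the script's CLI, limited to flags confirmed by the
# usage examples on this page. Other arguments are omitted because their
# exact names are not documented here.
parser = argparse.ArgumentParser(description="Intent and slot classifier")
parser.add_argument("--train_model", action="store_true",
                    help="Train the model when this flag is set.")
parser.add_argument("--evaluate", action="store_true",
                    help="Evaluate the model on the test dataset.")
parser.add_argument("--show_dist", action="store_true",
                    help="Show the intent and slot distribution.")
parser.add_argument("--inference_text", type=str, default=None,
                    help="Text input for running inference.")

# Parse a sample command line (normally argparse reads sys.argv).
args = parser.parse_args(["--show_dist"])
```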
...
Use
...
> **Info:** Because the intent and slot classifier is part of the social-interaction-cloud Python package, whenever you make changes you need to reinstall the social-interaction-cloud package for them to take effect.
1. Viewing Dataset Distribution
...
```
python run_train_test.py --show_dist
```
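Under the hood, a distribution report of this kind usually amounts to counting labels across the dataset. The sketch below shows one way to do that with `collections.Counter`; the record format is illustrative and may not match the project's `train.json` schema exactly.

```python
from collections import Counter

# Sketch of what a --show_dist report might compute: label frequencies.
# The record format below is illustrative, not the project's exact schema.
dataset = [
    {"intent": "recipe_search", "slots": {"cuisine": "Japanese"}},
    {"intent": "recipe_search", "slots": {"meal": "lunch"}},
    {"intent": "greeting", "slots": {}},
]

intent_dist = Counter(ex["intent"] for ex in dataset)
slot_dist = Counter(s for ex in dataset for s in ex["slots"])

print(intent_dist)  # frequency of each intent label
print(slot_dist)    # frequency of each slot type
```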
...
2. Training the Model
To train the model using the training dataset:
```
python run_train_test.py --train_model
```
...
3. Evaluating the Model
To evaluate a pre-trained model on the test dataset:
```
python run_train_test.py --evaluate
```
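Evaluation of joint intent-slot models typically reports intent accuracy and slot F1. The project's `evaluation.py` may compute its metrics differently; the functions below are a self-contained sketch of the standard definitions.

```python
# Sketch of two standard NLU metrics; evaluation.py may differ in detail.
def intent_accuracy(gold, pred):
    """Fraction of utterances whose predicted intent matches the gold intent."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def slot_f1(gold_slots, pred_slots):
    """Micro-averaged F1 over (slot, value) pairs per utterance."""
    tp = fp = fn = 0
    for g, p in zip(gold_slots, pred_slots):
        g, p = set(g), set(p)
        tp += len(g & p)   # correctly predicted slot-value pairs
        fp += len(p - g)   # predicted pairs not in the gold annotation
        fn += len(g - p)   # gold pairs the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: one of two intents correct, all slots correct.
acc = intent_accuracy(["recipe_search", "greeting"], ["recipe_search", "recipe_search"])
f1 = slot_f1([[("cuisine", "Japanese")]], [[("cuisine", "Japanese")]])
```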
...
4. Running Inference
To predict the intent and slots for a given input text:
```
python run_train_test.py --inference_text "Find me a Japanese recipe for lunch."
```
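After prediction, raw slot values are often normalized using the synonyms table (`data/synonyms.json`, resolved by helpers in `utils.py`). The sketch below shows the general idea; the mapping and function name here are invented for illustration and do not reflect the project's actual code.

```python
# Sketch of slot normalization via a synonyms table, in the spirit of
# synonyms.json / utils.py. The mapping and helper below are hypothetical.
synonyms = {
    "jp": "Japanese",
    "japanese": "Japanese",
    "noon meal": "lunch",
}

def normalize(value):
    """Map a raw slot value to its canonical form, if a synonym exists."""
    return synonyms.get(value.lower(), value)

print(normalize("jp"))  # canonicalized slot value
```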
5. Run with ASR
When you have completed this section, read https://socialrobotics.atlassian.net/wiki/spaces/PCA2/pages/2709488567/Run+your+Conversational+Agent#Run-your-Intent-and-Slot-Classifier-with-WHISPER to connect your intent and slot classifier to WHISPER.