Installation & Getting started
You can find the instructions on how to install the framework here: Getting started
To include a robot in the mix, read Getting started with a robot
API
Running a hello world on a NAOv6 robot is as simple as:
```python
from sic_framework.devices import Nao
from sic_framework.devices.common_naoqi.naoqi_text_to_speech import NaoqiTextToSpeechRequest

nao = Nao(ip='192.168.0.151')  # change to the IP address of your Nao robot

nao.tts.request(NaoqiTextToSpeechRequest("Hello world!"))
```
Components
Quickly link pre-trained models and cloud solutions together using components such as face recognition, text-to-speech, and Dialogflow.
There are plenty of demos showing how to use the various components in https://github.com/Social-AI-VU/sic_applications, so check them out!
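The general pattern, which all examples below follow, is to start a component, connect it to a data stream (for example a robot camera or microphone), and consume its output either through a callback or through a blocking request. Here is a minimal sketch of that pattern, using the FaceRecognition component from the use cases further down; as in those examples the imports are omitted, and the callback body and `some_image` variable are placeholders:

```python
# A component runs as a service; connect it to a device's data stream.
nao = Nao(ip="192.168.0.151")                        # your robot's IP address
face_recognition = FaceRecognition(ip="127.0.0.1")   # component running locally

# Streaming style: the component processes every camera frame and
# delivers its output to a callback.
def on_faces(bounding_boxes):
    print(bounding_boxes)  # placeholder: handle detected faces here

face_recognition.connect(nao.top_camera)
face_recognition.register_callback(on_faces)

# Request style: send a single input and block until the result is ready.
# (some_image is a placeholder for an image you provide.)
result = face_recognition.request(FaceRecognitionRequest(image=some_image))
```

The use cases below show complete versions of both styles.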
Overview of framework structure
To illustrate how the framework is structured, here are three use cases and the code a student would have to write for each:
1. A student wants to display face recognition on their laptop.
   - 1 input, 1 output to the student's device
2. A student wants to send an image to face recognition and save the result.
   - 1 input, 1 output that must be tied to the input
3. A student wants to send robot audio to Dialogflow and wave when it detects “Hello”.
   - 2 asynchronous inputs (audio, command), 2 asynchronous outputs (transcript, intent)
Display face recognition
Do face recognition on a Nao’s camera stream
```python
image = None
def set_image_variable(img):
    global image
    image = img

bbox = None
def set_bbox_variable(box):
    global bbox
    bbox = box

nao = Nao(ip="192.168.0.181")  # change to the IP address of your Nao robot
face_recognition = FaceRecognition(ip="127.0.0.1")

# Stream the robot's top camera into the face recognition component
nao.top_camera.register_callback(set_image_variable)
face_recognition.connect(nao.top_camera)
face_recognition.register_callback(set_bbox_variable)

# Continuously draw the latest bounding box on the latest camera image
while True:
    if image is not None and bbox is not None:
        image.draw(bbox)
        display(image)
```
Do a single face recognition
Recognize the faces in a picture from the student’s laptop
```python
face_recognition = FaceRecognition()

# Load an image from the laptop and send a single recognition request
image = load_from_disk("picture.jpg")
image_request = FaceRecognitionRequest(image=image)
bboxes = face_recognition.request(image_request)

# Draw the first detected bounding box and show the result
draw_bbox_on_image(image.image, bboxes.bboxes[0])
display(image.image)
```
Dialogflow hello detection
A demo that makes the robot wave whenever someone says “hello” and prints the detected transcript on the laptop.
```python
import json

pepper = Pepper(ip="192.168.0.123")  # change to the IP address of your Pepper robot

# Load the Google Cloud keyfile and configure the Dialogflow component
keyfile_json = json.load(open("sail-380610-0dea39e1a452.json"))
conf = DialogflowConf(keyfile_json=keyfile_json, sample_rate_hertz=16000)
dialogflow = Dialogflow(conf=conf)

# Stream the robot's microphone into Dialogflow and print every transcript
dialogflow.connect(pepper.microphone)
dialogflow.register_callback(print_transcript)

# Wave whenever the "hello" intent is detected
while True:
    intent = dialogflow.request(ListenForIntentRequest())
    if intent.name == "hello":
        wave_req = PepperMotionRequest("wave")
        pepper.motion.request(wave_req)
```