The Social Interaction Cloud (SIC) is a light-weight, easy-to-use framework for developing socially interactive systems. It has been developed with the aim to facilitate social interaction with physical devices. The framework can also be used to easily create data pipelines, and it scales to support more advanced architectures using cloud-based computing, for example.

Installation & Getting started

You can find the instructions on how to install the framework here: Getting started

To include a robot in the mix, read Getting started with a robot

API

Running a hello world on a NAO v6 robot is as simple as:

Code Block
languagepy
from sic_framework.devices import Nao
from sic_framework.devices.common_naoqi.naoqi_text_to_speech import NaoqiTextToSpeechRequest

# Connect to the robot; replace the IP address with that of your Nao robot.
nao = Nao(ip='192.168.0.151')

# Send a text-to-speech request so the robot says "Hello world!".
nao.tts.request(NaoqiTextToSpeechRequest("Hello world!"))
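
Besides one-off requests, devices also stream data that you can subscribe to with a callback. The snippet below is a minimal sketch of that pattern using the same Nao's top camera; the handler body is only illustrative and just reports that a frame arrived.

Code Block
languagepy
from sic_framework.devices import Nao

def on_image(image_message):
  # Called for every frame streamed from the robot's top camera;
  # the body here is only illustrative.
  print("Received a camera frame")

nao = Nao(ip='192.168.0.151')  # replace with the IP address of your Nao robot
nao.top_camera.register_callback(on_image)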

Components

Quickly link pre-trained models and cloud solutions together using components, such as:

  • OpenAI Whisper speech-to-text: https://bitbucket.org/socialroboticshub/framework/src/master/sic_framework/services/openai_whisper_speech_to_text/
  • OpenAI GPT: https://bitbucket.org/socialroboticshub/framework/src/master/sic_framework/services/openai_gpt/
  • Dialogflow: https://bitbucket.org/socialroboticshub/framework/src/master/sic_framework/services/dialogflow/
  • Face detection (DNN): https://bitbucket.org/socialroboticshub/framework/src/master/sic_framework/services/face_detection_dnn/
  • Text-to-speech: https://bitbucket.org/socialroboticshub/framework/src/master/sic_framework/services/text2speech/
  • PyTorch

There are plenty of demos of how to use the various components in https://github.com/Social-AI-VU/sic_applications, so check them out!
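
To give a feel for how these components are combined, here is a minimal sketch in the style of the examples further down this page: a hypothetical transcription component is connected to a Pepper's microphone and then queried for a transcript. The SpeechToText and GetTranscriptRequest names are placeholders rather than the framework's real classes, and the Pepper import is assumed to mirror the Nao import above; check the service folders and the demo repository for the actual imports.

Code Block
languagepy
# Minimal sketch of wiring a component into a pipeline. Names marked as
# hypothetical are placeholders, not the framework's real classes.
from sic_framework.devices import Pepper  # assumed to mirror the Nao import above

pepper = Pepper(ip="192.168.0.123")            # replace with the IP address of your Pepper robot
speech_to_text = SpeechToText(ip="127.0.0.1")  # hypothetical transcription component

# Stream the robot's microphone into the component.
speech_to_text.connect(pepper.microphone)

# Block until the component returns the next transcript (hypothetical request type).
transcript = speech_to_text.request(GetTranscriptRequest())
print(transcript)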


Overview of framework structure

To give an example of how the framework is structured, here are three use cases and the code a student would have to write for each:

  1. A student wants to display face recognition on their laptop.
    1 input, 1 output to student device

  2. A student wants to send an image to face recognition and save the result
    1 input, 1 output that must be tied to the input

  3. A student wants to send robot audio to dialogflow and wave when it detects “Hello”
    2 asynchronous inputs (audio, command), 2 asynchronous outputs (transcript, intent)

Display face recognition

Do face recognition on a Nao's camera stream

Code Block
languagepy
image = None
def set_image_variable(img):
  # Store the most recent camera frame in the module-level variable.
  global image
  image = img

bbox = None
def set_bbox_variable(box):
  # Store the most recent face bounding box in the module-level variable.
  global bbox
  bbox = box

nao = Nao(ip="192.168.0.181")  # replace with the IP address of your Nao robot
face_recognition = FaceRecognition(ip="127.0.0.1")

# Stream the robot's top camera to this laptop.
nao.top_camera.register_callback(set_image_variable)

# Feed the same camera stream into the face recognition component and
# receive the resulting bounding boxes via a callback.
face_recognition.connect(nao.top_camera)
face_recognition.register_callback(set_bbox_variable)

while True:
  # Draw the latest bounding box on the latest frame once both have arrived.
  if image is not None and bbox is not None:
    image.draw(bbox)
    display(image)

Do a single face recognition

Recognize the faces in a picture from the student’s laptop

Code Block
languagepy
face_recognition = FaceRecognition()

# Load a picture from the student's laptop.
image = load_from_disk("picture.jpg")

# Send a single request and block until the bounding boxes come back,
# so the result is tied directly to this input image.
image_request = FaceRecognitionRequest(image=image)
bboxes = face_recognition.request(image_request)

# Draw the first detected face on the image and show it.
draw_bbox_on_image(image, bboxes.bboxes[0])

display(image)

Dialogflow hello detection

A demo that has a robot wave whenever someone says “hello” and prints the detected transcript on the laptop.

Code Block
languagepy
import json

pepper = Pepper(ip="192.168.0.123")  # replace with the IP address of your Pepper robot

def print_transcript(message):
  # Print each transcript message Dialogflow produces while listening.
  print(message)

# Configure Dialogflow with a Google keyfile and the robot's audio sample rate.
keyfile_json = json.load(open("sail-380610-0dea39e1a452.json"))
conf = DialogflowConf(keyfile_json=keyfile_json,
                      sample_rate_hertz=16000)
dialogflow = Dialogflow(conf=conf)

# Stream the robot's microphone to Dialogflow and print transcripts as they arrive.
dialogflow.connect(pepper.microphone)
dialogflow.register_callback(print_transcript)

while True:
  # Wait for the next detected intent; wave when it is "hello".
  intent = dialogflow.request(ListenForIntentRequest())
  if intent.name == "hello":
    wave_req = PepperMotionRequest("wave")
    pepper.motion.request(wave_req)