The purpose of this page is to lay out exactly which elements of SIC are run, where, and why, to give more insight into the design of the framework. It is not to analyze how face detection works.
I will break it down into levels of increasing detail, so as not to be overwhelming all at once. In this simple example, a desktop computer uses its camera to detect faces and display bounding boxes around them.
Level 1
FaceDetection service is run in a terminal, creating an associated ComponentManager
run-face-detection
This runs the main() function of face_detection.py, which starts a ComponentManager enveloping the FaceDetectionComponent. As of now, this script is run on the same computer as the rest of the application, although it may also be run on another computer.
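To make that pattern concrete, here is a minimal, self-contained sketch of a main() that hands a component class to a manager. The class bodies are stand-ins written for this page (only the names ComponentManager and FaceDetectionComponent come from the step above); the real face_detection.py uses SIC's own implementations.

class FaceDetectionComponent:
    """Stand-in for the component that does the actual face detection."""
    def on_message(self, image):
        return []  # the real component would return bounding boxes here

class ComponentManager:
    """Stand-in for the manager that hosts one or more components."""
    def __init__(self, component_classes):
        self.components = [cls() for cls in component_classes]
        print("Hosting:", [type(c).__name__ for c in self.components])

def main():
    # Hand the component class to a manager; in SIC the manager would then
    # keep serving requests from Connectors on the network.
    ComponentManager([FaceDetectionComponent])

if __name__ == "__main__":
    main()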
Desktop device is instantiated
# Connect to the services
desktop = Desktop(camera_conf=conf)
Rather than running a ComponentManager in a separate terminal, the Desktop class (which inherits from Device) creates a new thread in which its ComponentManager runs.
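As a rough illustration of that idea in generic Python (this is not SIC's actual implementation), a Device-like class can start its manager loop on a daemon thread so the main script keeps running:

import threading
import time

class ToyDevice:
    """Illustrative stand-in for a Device that hosts its own ComponentManager."""
    def __init__(self):
        # daemon=True so the background thread does not keep the program alive on exit
        self._manager_thread = threading.Thread(target=self._run_manager, daemon=True)
        self._manager_thread.start()

    def _run_manager(self):
        # Stand-in for the ComponentManager's serve loop
        while True:
            time.sleep(1)

device = ToyDevice()  # the "manager" now runs alongside the rest of the script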
A connection to FaceDetection is made
face_rec = FaceDetection(ip="")
This instantiates a Connector, which tries to connect to the actual FaceDetection component running at the specified IP.
Desktop camera feed is connected to FaceDetection
face_rec.connect(desktop.camera)
This connects the output channel of the camera to the input channel of the FaceDetection component. desktop.camera is also a Connector.
Callback functions are registered for Desktop camera and FaceDetection component
imgs_buffer = queue.Queue(maxsize=1)
faces_buffer = queue.Queue(maxsize=1)

def on_image(image_message: CompressedImageMessage):
    imgs_buffer.put(image_message.image)

def on_faces(message: BoundingBoxesMessage):
    faces_buffer.put(message.bboxes)

desktop.camera.register_callback(on_image)
face_rec.register_callback(on_faces)
For every message the DesktopCamera or FaceDetection component publishes on its output channel, the registered callback also puts the message contents into the corresponding buffer.
Buffers are continuously read from, bounding boxes are drawn on the image, and the result is displayed
while True:
    img = imgs_buffer.get()
    faces = faces_buffer.get()
    for face in faces:
        utils_cv2.draw_bbox_on_image(face, img)
    cv2.imshow("", img)
    cv2.waitKey(1)
This assumes that the first item in the faces buffer (which contains bounding boxes) corresponds to the first image in the images buffer.
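Because both queues have maxsize=1, put() blocks whenever a buffer still holds an unread item, which keeps images and detections roughly paired. Below is a hypothetical, slightly more defensive variation written for this page (not the demo's code): it reuses the last known bounding boxes whenever a fresh image arrives before a fresh detection.

import queue

imgs_buffer = queue.Queue(maxsize=1)
faces_buffer = queue.Queue(maxsize=1)
last_faces = []  # most recently seen bounding boxes

def next_frame():
    """Return the next image together with the freshest detections available."""
    global last_faces
    img = imgs_buffer.get()                     # always wait for a new image
    try:
        last_faces = faces_buffer.get_nowait()  # take new detections if present
    except queue.Empty:
        pass                                    # otherwise keep the previous boxes
    return img, last_faces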
Takeaways:
Components like FaceDetection do not simply run by themselves. They consist of a Connector, which is basically a remote control for the actual component, a ComponentManager, which runs them, and then the component itself.
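The toy sketch below (illustrative stand-ins written for this page, not SIC source code) puts the three roles side by side: the component does the work, the manager runs it, and the connector is the handle the application script actually holds.

class ToyComponent:
    """Does the actual work, e.g. finding faces in an image."""
    def handle(self, message):
        return f"processed {message}"

class ToyComponentManager:
    """Owns and runs components; in SIC this lives in its own terminal or thread."""
    def __init__(self, component_classes):
        self.components = [cls() for cls in component_classes]

    def dispatch(self, message):
        return [c.handle(message) for c in self.components]

class ToyConnector:
    """What the application script holds; it forwards requests to the manager."""
    def __init__(self, manager):
        self.manager = manager

    def send(self, message):
        return self.manager.dispatch(message)

manager = ToyComponentManager([ToyComponent])
connector = ToyConnector(manager)
print(connector.send("camera frame"))  # -> ['processed camera frame']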