# Perception Messages

Output types for machine learning and computer vision pipelines — detections, segmentation masks, tracked objects, and landmarks.

```python
# simplified
from horus import Detection, Detection3D, BoundingBox2D, BoundingBox3D, SegmentationMask
```
## Detection

2D object detection — flat constructor with class, confidence, and bounding box fields.

```python
# simplified
import horus

det = horus.Detection(
    class_name="person",
    confidence=0.95,
    x=100.0, y=50.0,
    width=100.0, height=250.0,
    class_id=0,
    instance_id=0,
)
```
| Field | Type | Default | Description |
|---|---|---|---|
| class_name | str | "" | Detected class name |
| confidence | float | 0.0 | Detection confidence |
| x, y | float | 0.0 | Bounding box top-left (px) |
| width, height | float | 0.0 | Bounding box size (px) |
| class_id | int | 0 | Numeric class identifier |
| instance_id | int | 0 | Instance identifier |
| bbox | BoundingBox2D | — | Bounding box as a BoundingBox2D object |
| Method | Returns | Description |
|---|---|---|
| is_confident(threshold) | bool | True if confidence exceeds threshold |
| with_class_id(class_id) | Detection | Return a copy with class_id set |
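As a usage illustration, `is_confident` supports the common confidence-filtering pattern. The sketch below uses a hypothetical `FakeDetection` stand-in (so it runs without the horus bindings) that mirrors the documented fields and the documented `is_confident` semantics:

```python
from dataclasses import dataclass

@dataclass
class FakeDetection:
    # Hypothetical stand-in mirroring the documented Detection fields.
    class_name: str = ""
    confidence: float = 0.0

    def is_confident(self, threshold: float) -> bool:
        # Documented semantics: True if confidence exceeds the threshold.
        return self.confidence > threshold

dets = [FakeDetection("person", 0.95), FakeDetection("dog", 0.30)]
confident = [d for d in dets if d.is_confident(0.5)]
print([d.class_name for d in confident])  # → ['person']
```

The same list comprehension works with real `Detection` objects, since the method signature is identical.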
## Detection3D

3D object detection — flat constructor with center, dimensions, and yaw.

```python
# simplified
det3d = horus.Detection3D(
    class_name="car",
    confidence=0.87,
    cx=5.0, cy=2.0, cz=0.8,
    length=4.5, width=1.8, height=1.5,
    yaw=0.0,
)
```
| Field | Type | Default | Description |
|---|---|---|---|
| class_name | str | "" | Detected class name |
| confidence | float | 0.0 | Detection confidence |
| cx, cy, cz | float | 0.0 | Bounding box center (m) |
| length, width, height | float | 0.0 | Bounding box dimensions (m) |
| yaw | float | 0.0 | Heading angle (rad) |
| bbox | BoundingBox3D | — | Bounding box as a BoundingBox3D object |
| velocity_x, velocity_y, velocity_z | float | 0.0 | Object velocity |
| Method | Returns | Description |
|---|---|---|
| with_velocity(vx, vy, vz) | Detection3D | Return a copy with velocity set |
## BoundingBox2D

Axis-aligned 2D bounding box in pixel coordinates.

```python
# simplified
bbox = horus.BoundingBox2D(x=100.0, y=50.0, width=100.0, height=250.0)
```
| Field | Type | Default | Description |
|---|---|---|---|
| x, y | float | 0.0 | Top-left corner (px) |
| width, height | float | 0.0 | Box dimensions (px) |
| center_x | float | — | Box center X (getter only) |
| center_y | float | — | Box center Y (getter only) |
| area | float | — | Box area in pixels (getter only) |
Static Methods:

| Method | Returns | Description |
|---|---|---|
| BoundingBox2D.from_center(cx, cy, width, height) | BoundingBox2D | Create from center point |

Methods:

| Method | Returns | Description |
|---|---|---|
| iou(other) | float | Intersection over Union with another BoundingBox2D |
| as_tuple() | tuple | Returns (x, y, width, height) |
| as_xyxy() | tuple | Returns (x1, y1, x2, y2) format |
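For reference, `iou()` computes the standard intersection-over-union ratio. Below is a standalone sketch of that formula on plain `(x, y, width, height)` tuples, not the library's actual implementation:

```python
def iou_xywh(a, b):
    """Intersection over Union of two (x, y, width, height) boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    # Intersection rectangle; width/height clamp to zero when boxes don't overlap.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

print(iou_xywh((0, 0, 10, 10), (5, 0, 10, 10)))  # → 0.3333...
```

Two identical boxes give 1.0 and disjoint boxes give 0.0, the usual bounds for an IoU score.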
## BoundingBox3D

3D bounding box in world coordinates.

```python
# simplified
bbox3d = horus.BoundingBox3D(
    cx=5.0, cy=2.0, cz=0.8,
    length=4.5, width=1.8, height=1.5,
    yaw=0.0,
)
```
| Field | Type | Default | Description |
|---|---|---|---|
| cx, cy, cz | float | 0.0 | Box center (m) |
| length, width, height | float | 0.0 | Box dimensions (m) |
| yaw | float | 0.0 | Heading angle (rad) |
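The yaw field orients the box in the ground plane. As an illustration of the geometry (assuming yaw is a counter-clockwise rotation about the vertical axis, which the table does not state), the four ground-plane corners can be recovered from the documented fields like this:

```python
import math

def ground_corners(cx, cy, length, width, yaw):
    """Four ground-plane corners of a yaw-rotated box, as (x, y) pairs."""
    c, s = math.cos(yaw), math.sin(yaw)
    half = [( length / 2,  width / 2), ( length / 2, -width / 2),
            (-length / 2, -width / 2), (-length / 2,  width / 2)]
    # Rotate each local offset by yaw, then translate to the box center.
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c) for dx, dy in half]

# With yaw=0 the corners are axis-aligned around the center (5.0, 2.0).
corners = ground_corners(cx=5.0, cy=2.0, length=4.5, width=1.8, yaw=0.0)
```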
## SegmentationMask

Per-pixel class labels.

```python
# simplified
mask = horus.SegmentationMask(width=640, height=480, mask_type=0, num_classes=21)
```
| Field | Type | Default | Description |
|---|---|---|---|
| width, height | int | 0 | Image dimensions (px) |
| mask_type | int | 0 | Segmentation mask type (getter only) |
| num_classes | int | 0 | Number of semantic classes |
| frame_id | str | — | Frame identifier (getter only) |
| timestamp_ns | int | 0 | Timestamp (ns) |
| seq | int | 0 | Sequence number |
Static Methods:

| Method | Returns | Description |
|---|---|---|
| SegmentationMask.semantic(width, height, num_classes) | SegmentationMask | Create a semantic segmentation mask |
| SegmentationMask.instance(width, height) | SegmentationMask | Create an instance segmentation mask |
| SegmentationMask.panoptic(width, height, num_classes) | SegmentationMask | Create a panoptic segmentation mask |

Methods:

| Method | Returns | Description |
|---|---|---|
| is_semantic() | bool | True if semantic segmentation |
| is_instance() | bool | True if instance segmentation |
| is_panoptic() | bool | True if panoptic segmentation |
| data_size() | int | Size of mask data buffer in bytes |
| data_size_u16() | int | Size of mask data buffer in u16 elements |
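The two size accessors differ only in units. Assuming the buffer stores one u16 label element per pixel (an assumption; the element layout is not documented here), the arithmetic would be:

```python
def data_size_u16(width, height):
    # One label element per pixel (assumed layout).
    return width * height

def data_size(width, height):
    # Bytes, assuming 2-byte (u16) elements.
    return data_size_u16(width, height) * 2

print(data_size_u16(640, 480))  # → 307200
print(data_size(640, 480))      # → 614400
```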
## TrackedObject

Object with persistent tracking ID across frames.

```python
# simplified
tracked = horus.TrackedObject(
    track_id=42,
    x=100.0, y=50.0,
    width=100.0, height=250.0,
    class_id=0,
    confidence=0.9,
)
```
| Field | Type | Default | Description |
|---|---|---|---|
| track_id | int | 0 | Persistent tracking ID |
| x, y | float | 0.0 | Bounding box top-left (px) |
| width, height | float | 0.0 | Bounding box size (px) |
| class_id | int | 0 | Numeric class identifier |
| confidence | float | 0.0 | Detection confidence |
| class_name | str | — | Class name |
| bbox | BoundingBox2D | — | Current bounding box (getter only) |
| predicted_bbox | BoundingBox2D | — | Predicted bounding box (getter only) |
| velocity_x, velocity_y | float | — | Estimated velocity (getter only) |
| velocity | tuple | — | Velocity as (vx, vy) tuple (getter only) |
| accel_x, accel_y | float | — | Estimated acceleration (getter only) |
| age | int | — | Track age in frames (getter only) |
| hits | int | — | Number of detection hits (getter only) |
| time_since_update | int | — | Frames since last update (getter only) |
| state | int | — | Track state code (getter only) |
Methods:

| Method | Returns | Description |
|---|---|---|
| speed() | float | Estimated speed (magnitude of velocity) |
| heading() | float | Estimated heading angle (radians) |
| is_tentative() | bool | True if track is tentative (not yet confirmed) |
| is_confirmed() | bool | True if track is confirmed |
| is_deleted() | bool | True if track is marked for deletion |
| confirm() | None | Confirm the track |
| delete() | None | Mark the track for deletion |
| mark_missed() | None | Mark a missed detection (no match this frame) |
| update(bbox, confidence) | None | Update with new detection |
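`speed()` and `heading()` follow from the velocity estimate. A pure-Python sketch of the likely math, assuming heading is the angle of the velocity vector (the table does not state this explicitly):

```python
import math

def speed(vx, vy):
    # Magnitude of the planar velocity vector.
    return math.hypot(vx, vy)

def heading(vx, vy):
    # Angle of the velocity vector in radians, in (-pi, pi].
    return math.atan2(vy, vx)

print(speed(3.0, 4.0))    # → 5.0
print(heading(0.0, 1.0))  # → 1.5707... (pi/2, moving straight along +y)
```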
## Landmark / Landmark3D

Visual landmarks for SLAM and localization.

```python
# simplified
lm = horus.Landmark(x=1.5, y=2.3, visibility=0.95, index=7)
lm3d = horus.Landmark3D(x=1.5, y=2.3, z=0.8, visibility=0.95, index=7)
```
### Landmark

| Field | Type | Default | Description |
|---|---|---|---|
| x, y | float | 0.0 | Position (px or m) |
| visibility | float | 1.0 | Visibility score (0.0-1.0) |
| index | int | 0 | Landmark index |

| Static Method | Returns | Description |
|---|---|---|
| Landmark.visible(x, y, index) | Landmark | Create a visible landmark (visibility=1.0) |

| Method | Returns | Description |
|---|---|---|
| is_visible(threshold) | bool | True if visibility exceeds threshold |
| distance_to(other) | float | Euclidean distance to another Landmark |
### Landmark3D

| Field | Type | Default | Description |
|---|---|---|---|
| x, y, z | float | 0.0 | 3D position (m) |
| visibility | float | 1.0 | Visibility score (0.0-1.0) |
| index | int | 0 | Landmark index |

| Static Method | Returns | Description |
|---|---|---|
| Landmark3D.visible(x, y, z, index) | Landmark3D | Create a visible 3D landmark |

| Method | Returns | Description |
|---|---|---|
| is_visible(threshold) | bool | True if visibility exceeds threshold |
| distance_to(other) | float | Euclidean distance to another Landmark3D |
| to_2d() | Landmark | Project to 2D (drops z coordinate) |
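The distance and projection helpers have simple semantics. A pure-Python sketch of what `distance_to` and `to_2d` compute, per the tables above:

```python
import math

def distance_3d(a, b):
    """Euclidean distance between two (x, y, z) landmark positions."""
    return math.dist(a, b)

def to_2d(p):
    """Project a 3D landmark position to 2D by dropping z."""
    return (p[0], p[1])

print(distance_3d((0.0, 0.0, 0.0), (1.0, 2.0, 2.0)))  # → 3.0
print(to_2d((1.5, 2.3, 0.8)))                          # → (1.5, 2.3)
```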
## Example: YOLO Detection Pipeline

```python
# simplified — `model` is assumed to be a detection model loaded elsewhere
import horus

def detect_tick(node):
    img = node.recv("camera.rgb")
    if img is None:
        return
    frame = img.to_numpy()  # Zero-copy
    results = model.predict(frame)
    for r in results:
        det = horus.Detection(
            class_name=r.class_name,
            confidence=float(r.confidence),
            x=r.x, y=r.y,
            width=r.w, height=r.h,
            class_id=r.class_id,
        )
        node.send("detections", det)

detector = horus.Node(
    name="yolo",
    subs=[horus.Image],
    pubs=[horus.Detection],
    tick=detect_tick,
    rate=30,
    compute=True,
    on_miss="skip",
)
horus.run(detector)
```
## See Also