Perception Messages

HORUS provides message types for 3D perception tasks. These include:

  • Pool-backed RAII types: PointCloud, DepthImage — shared-memory backed with zero-copy access, allocated from the global tensor pool
  • Serde types: PointField, PlaneDetection — flexible, support serialization
  • Pod (zero-copy) types: PointXYZ, Landmark, TrackedObject, SegmentationMask — fixed-size, suitable for shared-memory transport with zero serialization overhead

PointCloud

Pool-backed RAII point cloud type with zero-copy shared memory transport. PointCloud allocates from a global tensor pool.

use horus::prelude::*;

// Create XYZ point cloud (10000 points, 3 floats per point)
let mut cloud = PointCloud::new(10000, 3, TensorDtype::F32)?;

// Copy point data
let point_data: Vec<u8> = vec![0; 10000 * 3 * 4]; // 10000 points * 3 floats * 4 bytes
cloud.copy_from(&point_data);

// Set metadata (method chaining)
cloud.set_frame_id("lidar_front").set_timestamp_ns(1234567890);

// Access properties
println!("Points: {}", cloud.point_count());
println!("Fields per point: {}", cloud.fields_per_point());
println!("Is XYZ: {}", cloud.is_xyz());

// Zero-copy data access
let data: &[u8] = cloud.data();

// Extract XYZ coordinates as Vec<[f32; 3]>
if let Some(points) = cloud.extract_xyz() {
    for p in &points[..3] {
        println!("({:.2}, {:.2}, {:.2})", p[0], p[1], p[2]);
    }
}

// Access individual point
if let Some(point_bytes) = cloud.point_at(0) {
    println!("Point 0: {} bytes", point_bytes.len());
}

PointCloud methods:

PointCloud is an RAII type — fields are private, accessed through methods. Mutation methods return &mut Self for chaining.

| Method | Returns | Description |
|---|---|---|
| new(num_points, fields_per_point, dtype) | Result<PointCloud> | Create point cloud (allocates from global pool) |
| data() | &[u8] | Zero-copy access to raw point data |
| data_mut() | &mut [u8] | Mutable access to point data |
| copy_from(src) | &mut Self | Copy data into point cloud |
| point_at(idx) | Option<&[u8]> | Get bytes for the i-th point |
| extract_xyz() | Option<Vec<[f32; 3]>> | Extract XYZ as float arrays (F32 only) |
| point_count() | u64 | Number of points |
| fields_per_point() | u32 | Floats per point (3=XYZ, 4=XYZI, 6=XYZRGB) |
| dtype() | TensorDtype | Data type of point components |
| is_xyz() | bool | Whether this is a plain XYZ cloud |
| has_intensity() | bool | Whether cloud has intensity field |
| set_frame_id(id) | &mut Self | Set sensor frame identifier |
| set_timestamp_ns(ts) | &mut Self | Set timestamp in nanoseconds |
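
The buffer layout these methods assume is simple: `num_points` contiguous points, each holding `fields_per_point` little-endian f32 values (4 bytes each). The sketch below mirrors what `point_at` and `extract_xyz` do over such a buffer; `point_bytes_at` and `extract_xyz` here are local illustrative helpers, not the HORUS API itself.

```rust
// Local helpers illustrating the point-buffer layout (F32 components assumed).
fn point_bytes_at(data: &[u8], idx: usize, fields_per_point: usize) -> Option<&[u8]> {
    let stride = fields_per_point * 4; // bytes per point (f32 components)
    let start = idx * stride;
    data.get(start..start + stride)
}

fn extract_xyz(data: &[u8], fields_per_point: usize) -> Vec<[f32; 3]> {
    // First three floats of each point are x, y, z; extra fields are skipped.
    data.chunks_exact(fields_per_point * 4)
        .map(|p| {
            let f = |i: usize| f32::from_le_bytes([p[i * 4], p[i * 4 + 1], p[i * 4 + 2], p[i * 4 + 3]]);
            [f(0), f(1), f(2)]
        })
        .collect()
}

fn main() {
    // Two XYZI points (4 floats per point)
    let pts: Vec<f32> = vec![1.0, 2.0, 3.0, 0.5, 4.0, 5.0, 6.0, 0.9];
    let bytes: Vec<u8> = pts.iter().flat_map(|v| v.to_le_bytes()).collect();

    assert_eq!(point_bytes_at(&bytes, 1, 4).unwrap().len(), 16);
    assert_eq!(extract_xyz(&bytes, 4)[1], [4.0, 5.0, 6.0]);
}
```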

PointField

Describes a field within point cloud data (serde type, used for custom point formats):

use horus::prelude::*;

// Create custom field
let intensity = PointField::new("intensity", 12, TensorDtype::F32, 1);
println!("Field: {}, size: {} bytes", intensity.name_str(), intensity.field_size());

PointField uses TensorDtype (available via prelude) for its datatype field. See TensorDtype values for the full list.

Pod Point Types (Zero-Copy)

Fixed-size point types suitable for zero-copy shared memory transport via PodTopic.

PointXYZ

Basic 3D point (12 bytes, #[repr(C, packed)]).

use horus::prelude::*;

let point = PointXYZ::new(1.0, 2.0, 3.0);

println!("Distance from origin: {:.2}m", point.distance());

let other = PointXYZ::new(4.0, 6.0, 3.0);
println!("Distance between: {:.2}m", point.distance_to(&other));

| Field | Type | Description |
|---|---|---|
| x | f32 | X coordinate (meters) |
| y | f32 | Y coordinate (meters) |
| z | f32 | Z coordinate (meters) |

PointXYZRGB

3D point with RGB color (16 bytes, #[repr(C)]). Common for RGB-D cameras like Intel RealSense.

use horus::prelude::*;

let point = PointXYZRGB::new(1.0, 2.0, 3.0, 255, 0, 0); // Red point

// Convert from PointXYZ (defaults to white)
let xyz = PointXYZ::new(1.0, 2.0, 3.0);
let colored = PointXYZRGB::from_xyz(xyz);

// Get packed RGB as u32 (0xRRGGBBAA)
let packed = point.rgb_packed();

// Convert back to PointXYZ (drop color)
let xyz_only = point.xyz();

| Field | Type | Description |
|---|---|---|
| x, y, z | f32 | Coordinates (meters) |
| r, g, b | u8 | Color components (0-255) |
| a | u8 | Alpha/padding (255 default) |
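
An 0xRRGGBBAA packing like the one `rgb_packed()` returns can be built from the r/g/b/a byte fields by shifting each component into place. This is a local sketch; the exact HORUS implementation may differ.

```rust
// Pack four color bytes into a single u32 in 0xRRGGBBAA order (illustrative).
fn pack_rgba(r: u8, g: u8, b: u8, a: u8) -> u32 {
    ((r as u32) << 24) | ((g as u32) << 16) | ((b as u32) << 8) | (a as u32)
}

fn main() {
    // A red point with the default alpha of 255
    assert_eq!(pack_rgba(255, 0, 0, 255), 0xFF0000FF);
    assert_eq!(pack_rgba(0, 255, 0, 255), 0x00FF00FF);
}
```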

PointXYZI

3D point with intensity (16 bytes, #[repr(C)]). Common for LiDAR sensors (Velodyne, Ouster, Livox).

use horus::prelude::*;

let point = PointXYZI::new(1.0, 2.0, 3.0, 128.0);

// Convert from PointXYZ (zero intensity)
let xyz = PointXYZ::new(1.0, 2.0, 3.0);
let with_intensity = PointXYZI::from_xyz(xyz);

| Field | Type | Description |
|---|---|---|
| x, y, z | f32 | Coordinates (meters) |
| intensity | f32 | Reflectance (typically 0-255) |

PointCloudHeader

Header for array transmission of point clouds (64 bytes, #[repr(C)]). Sent before the point data array via IPC.

use horus::prelude::*;

// Create header for different point types
let timestamp_ns = 1234567890; // e.g. current time in nanoseconds
let header = PointCloudHeader::xyz(10000)
    .with_frame_id("lidar_front")
    .with_timestamp(timestamp_ns);

// Or for colored points
let header = PointCloudHeader::xyzrgb(5000);

// Calculate total data size
println!("Data size: {} bytes", header.data_size());

| Field | Type | Description |
|---|---|---|
| num_points | u64 | Number of points |
| point_type | u32 | 0=XYZ, 1=XYZRGB, 2=XYZI |
| point_stride | u32 | Bytes per point |
| timestamp_ns | u64 | Nanoseconds since epoch |
| seq | u64 | Sequence number |
| frame_id | [u8; 32] | Sensor/coordinate frame |
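
The size accounting behind `data_size()` follows from the header fields: the payload is num_points * point_stride, with the stride fixed by the point type (XYZ is 12 bytes, XYZRGB and XYZI are 16). A standalone sketch of that arithmetic, assuming the strides given by the point type definitions above:

```rust
// Bytes per point for each point_type value (0=XYZ, 1=XYZRGB, 2=XYZI).
fn stride_for(point_type: u32) -> u32 {
    match point_type {
        0 => 12,     // PointXYZ: 3 * f32
        1 | 2 => 16, // PointXYZRGB / PointXYZI: 16-byte layouts
        _ => 0,
    }
}

// Total payload size that follows the header.
fn data_size(num_points: u64, point_type: u32) -> u64 {
    num_points * stride_for(point_type) as u64
}

fn main() {
    assert_eq!(data_size(10_000, 0), 120_000); // XYZ cloud
    assert_eq!(data_size(5_000, 1), 80_000);   // XYZRGB cloud
}
```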

DepthImage

Pool-backed RAII depth image with zero-copy shared memory transport. Supports both F32 (meters) and U16 (millimeters) formats.

use horus::prelude::*;

// Create F32 depth image (meters)
let mut depth = DepthImage::new(640, 480, TensorDtype::F32)?;

// Or U16 depth image (millimeters)
let mut depth_u16 = DepthImage::new(640, 480, TensorDtype::U16)?;

// Set metadata (method chaining)
depth.set_frame_id("depth_camera").set_timestamp_ns(1234567890);

// Access depth at pixel (always returns meters as f32)
if let Some(d) = depth.get_depth(320, 240) {
    println!("Depth at center: {:.3}m", d);
}

// Set depth at pixel (value in meters)
depth.set_depth(100, 100, 1.5);

// Get raw U16 value for millimeter-format images
if let Some(mm) = depth_u16.get_depth_u16(320, 240) {
    println!("Raw depth: {}mm", mm);
}

// Get statistics (min, max, mean in meters)
if let Some((min, max, mean)) = depth.depth_statistics() {
    println!("Depth range: {:.2}-{:.2}m, mean: {:.2}m", min, max, mean);
}

// Zero-copy data access
let data: &[u8] = depth.data();
println!("Image: {}x{}, {} bytes", depth.width(), depth.height(), data.len());

DepthImage methods:

DepthImage is an RAII type — fields are private, accessed through methods. Mutation methods return &mut Self for chaining.

| Method | Returns | Description |
|---|---|---|
| new(width, height, dtype) | Result<DepthImage> | Create depth image (F32 or U16) |
| data() | &[u8] | Zero-copy access to raw depth data |
| data_mut() | &mut [u8] | Mutable access to depth data |
| get_depth(x, y) | Option<f32> | Get depth in meters at pixel |
| set_depth(x, y, value) | &mut Self | Set depth in meters at pixel |
| get_depth_u16(x, y) | Option<u16> | Get raw U16 value (millimeter format only) |
| depth_statistics() | Option<(f32, f32, f32)> | Get (min, max, mean) of valid depths in meters |
| width() | u32 | Image width in pixels |
| height() | u32 | Image height in pixels |
| set_frame_id(id) | &mut Self | Set camera frame identifier |
| set_timestamp_ns(ts) | &mut Self | Set timestamp in nanoseconds |
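
The unit handling these accessors imply is worth spelling out: U16 images store millimeters, while `get_depth` always reports meters. The sketch below shows that conversion with local helpers (not the HORUS API), assuming a raw value of 0 marks an invalid pixel, which is a common depth-camera convention.

```rust
// Convert a raw U16 millimeter reading to meters; 0 is assumed invalid.
fn u16_depth_to_meters(raw_mm: u16) -> Option<f32> {
    if raw_mm == 0 { None } else { Some(raw_mm as f32 / 1000.0) }
}

// Convert a depth in meters back to the U16 millimeter representation.
fn meters_to_u16_depth(m: f32) -> u16 {
    (m * 1000.0).round().clamp(0.0, u16::MAX as f32) as u16
}

fn main() {
    assert_eq!(u16_depth_to_meters(1500), Some(1.5));
    assert_eq!(u16_depth_to_meters(0), None); // no measurement at this pixel
    assert_eq!(meters_to_u16_depth(1.5), 1500);
}
```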

PlaneDetection

A detected planar surface, described by its plane equation, center, and normal (serde type).

use horus::prelude::*;

// Create plane detection (floor plane)
let coefficients = [0.0, 0.0, 1.0, 0.0];  // ax + by + cz + d = 0
let center = Point3::new(0.0, 0.0, 0.0);
let normal = Vector3::new(0.0, 0.0, 1.0);

let plane = PlaneDetection::new(coefficients, center, normal)
    .with_type("floor");

// Check distance from point to plane
let test_point = Point3::new(1.0, 2.0, 0.1);
let distance = plane.distance_to_point(&test_point);

// Check if point is on plane (within tolerance)
if plane.contains_point(&test_point, 0.05) {
    println!("Point is on the plane");
}

println!("Plane type: {}", plane.plane_type_str());

Fields:

| Field | Type | Description |
|---|---|---|
| coefficients | [f64; 4] | Plane equation [a, b, c, d] |
| center | Point3 | Plane center point |
| normal | Vector3 | Plane normal vector |
| size | [f64; 2] | Plane bounds (width, height) |
| inlier_count | u32 | Number of inlier points |
| confidence | f32 | Detection confidence (0-1) |
| plane_type | [u8; 16] | Type label ("floor", "wall", etc.) |
| timestamp_ns | u64 | Nanoseconds since epoch |
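
The math behind `distance_to_point` and `contains_point` is the standard point-plane distance: for a plane ax + by + cz + d = 0, the distance from point p is |a·px + b·py + c·pz + d| / sqrt(a² + b² + c²). A standalone sketch of that formula:

```rust
// Point-to-plane distance for coefficients [a, b, c, d] of ax + by + cz + d = 0.
fn distance_to_plane(coeff: [f64; 4], p: [f64; 3]) -> f64 {
    let [a, b, c, d] = coeff;
    (a * p[0] + b * p[1] + c * p[2] + d).abs() / (a * a + b * b + c * c).sqrt()
}

fn main() {
    let floor = [0.0, 0.0, 1.0, 0.0]; // the plane z = 0
    let dist = distance_to_plane(floor, [1.0, 2.0, 0.1]);
    assert!((dist - 0.1).abs() < 1e-9);
    // A 5 cm tolerance check, as in the contains_point example above:
    assert!(!(dist < 0.05));
}
```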

PlaneArray

Array of detected planes (max 16).

| Field | Type | Description |
|---|---|---|
| planes | [PlaneDetection; 16] | Array of plane detections |
| count | u8 | Number of valid planes |
| frame_id | [u8; 32] | Source sensor frame |
| algorithm | [u8; 32] | Detection algorithm used |
| timestamp_ns | u64 | Nanoseconds since epoch |

Landmark (Zero-Copy)

2D landmark/keypoint for pose estimation, facial landmarks, hand tracking. Pod type (16 bytes).

use horus::prelude::*;

// Create landmark with visibility
let nose = Landmark::new(320.0, 240.0, 0.95, 0);  // (x, y, visibility, index)

// Create visible landmark (visibility = 1.0)
let eye = Landmark::visible(300.0, 220.0, 1);  // left_eye

// Check visibility
if nose.is_visible(0.5) {
    println!("Nose detected at ({}, {})", nose.x, nose.y);
}

// Distance between landmarks
let distance = nose.distance_to(&eye);

Fields (16 bytes, #[repr(C)]):

| Field | Type | Description |
|---|---|---|
| x | f32 | X coordinate (pixels or normalized 0-1) |
| y | f32 | Y coordinate (pixels or normalized 0-1) |
| visibility | f32 | Visibility/confidence (0.0-1.0) |
| index | u32 | Landmark index (joint ID) |

Landmark3D

3D landmark for 3D pose estimation, MediaPipe-style landmarks. Pod type (20 bytes, packed).

use horus::prelude::*;

let landmark = Landmark3D::new(0.5, 0.3, 0.8, 0.95, 0);

// Project to 2D (drops Z)
let landmark_2d = landmark.to_2d();

// 3D distance
let other = Landmark3D::visible(0.6, 0.4, 0.9, 1);
let dist = landmark.distance_to(&other);

| Field | Type | Description |
|---|---|---|
| x, y, z | f32 | Coordinates (meters or normalized) |
| visibility | f32 | Visibility/confidence (0.0-1.0) |
| index | u32 | Landmark index |

LandmarkArray

Header for landmark array transmission. Pod type (40 bytes). Includes presets for common formats.

use horus::prelude::*;

// Standard pose estimation formats
let coco = LandmarkArray::coco_pose();           // 17 2D landmarks
let mp_pose = LandmarkArray::mediapipe_pose();    // 33 3D landmarks
let mp_hand = LandmarkArray::mediapipe_hand();    // 21 3D landmarks
let mp_face = LandmarkArray::mediapipe_face();    // 468 3D landmarks

// Custom array with metadata
let header = LandmarkArray::new_2d(17)
    .with_confidence(0.92)
    .with_bbox(100.0, 50.0, 200.0, 300.0);

println!("Data size: {} bytes", header.data_size());

COCO Pose landmark indices (available as constants in landmark::coco):

| Index | Landmark | Index | Landmark |
|---|---|---|---|
| 0 | Nose | 9 | Left wrist |
| 1 | Left eye | 10 | Right wrist |
| 2 | Right eye | 11 | Left hip |
| 3 | Left ear | 12 | Right hip |
| 4 | Right ear | 13 | Left knee |
| 5 | Left shoulder | 14 | Right knee |
| 6 | Right shoulder | 15 | Left ankle |
| 7 | Left elbow | 16 | Right ankle |
| 8 | Right elbow | | |
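
The size accounting behind `LandmarkArray::data_size()` can be sketched from the Pod sizes given above: the payload following the header is the landmark count times the per-landmark size (Landmark is 16 bytes, Landmark3D is 20 bytes packed). `landmark_data_size` below is a local illustrative helper.

```rust
// Payload size for a landmark array: count * per-landmark Pod size.
fn landmark_data_size(num: u32, is_3d: bool) -> usize {
    let per = if is_3d { 20 } else { 16 }; // Landmark3D vs. Landmark
    num as usize * per
}

fn main() {
    assert_eq!(landmark_data_size(17, false), 272); // COCO pose, 2D
    assert_eq!(landmark_data_size(33, true), 660);  // MediaPipe pose, 3D
}
```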

TrackedObject (Zero-Copy)

Multi-object tracking result with Kalman filter prediction. Pod type (96 bytes).

use horus::prelude::*;

// Create tracked object
let bbox = TrackingBBox::new(100.0, 100.0, 50.0, 50.0);
let mut track = TrackedObject::new(1, bbox, 0, 0.95);
track.set_class_name("person");

// Track lifecycle
assert!(track.is_tentative());  // New tracks start tentative

track.confirm();                 // Promote to confirmed
assert!(track.is_confirmed());

// Update with new detection
track.update(TrackingBBox::new(110.0, 105.0, 50.0, 50.0), 0.93);
println!("Velocity: ({}, {})", track.velocity_x, track.velocity_y);
println!("Speed: {:.1}", track.speed());
println!("Heading: {:.2} rad", track.heading());

// Handle missed detection
track.mark_missed();
println!("Predicted position: ({}, {})",
    track.predicted_bbox.x, track.predicted_bbox.y);

TrackedObject fields (96 bytes, #[repr(C)]):

| Field | Type | Description |
|---|---|---|
| bbox | TrackingBBox | Current bounding box |
| predicted_bbox | TrackingBBox | Predicted next position (Kalman) |
| track_id | u64 | Persistent tracking ID |
| confidence | f32 | Detection confidence (0.0-1.0) |
| class_id | u32 | Class ID |
| velocity_x, velocity_y | f32 | Velocity (pixels/frame or m/s) |
| accel_x, accel_y | f32 | Acceleration |
| age | u32 | Frames since first detection |
| hits | u32 | Frames with detection |
| time_since_update | u32 | Consecutive frames without detection |
| state | u32 | 0=tentative, 1=confirmed, 2=deleted |
| class_name | [u8; 16] | Class name (max 15 chars) |

TrackingBBox fields (16 bytes):

| Field | Type | Description |
|---|---|---|
| x, y | f32 | Top-left corner (pixels) |
| width, height | f32 | Dimensions (pixels) |
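
How velocity, `speed()`, and `heading()` relate can be sketched without the tracker: velocity is the frame-to-frame displacement of the box position, speed is its magnitude, and heading is its atan2 angle. The helper below is a local illustration of that relationship, not the HORUS update logic.

```rust
// Per-frame velocity from two consecutive box positions (illustrative).
fn velocity(prev: (f32, f32), curr: (f32, f32)) -> (f32, f32) {
    (curr.0 - prev.0, curr.1 - prev.1)
}

fn main() {
    // Matches the update in the example above: (100,100) -> (110,105)
    let (vx, vy) = velocity((100.0, 100.0), (110.0, 105.0));
    let speed = (vx * vx + vy * vy).sqrt();
    let heading = vy.atan2(vx); // radians, measured from +x axis

    assert_eq!((vx, vy), (10.0, 5.0));
    assert!((speed - 125.0_f32.sqrt()).abs() < 1e-4);
    assert!(heading > 0.0 && heading < std::f32::consts::FRAC_PI_2);
}
```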

TrackingHeader fields (32 bytes):

| Field | Type | Description |
|---|---|---|
| num_tracks | u32 | Number of tracked objects |
| frame_id | u32 | Frame number |
| timestamp_ns | u64 | Nanoseconds since epoch |
| total_tracks | u64 | Total tracks created |
| active_tracks | u32 | Active confirmed tracks |
active_tracksu32Active confirmed tracks

SegmentationMask (Zero-Copy)

Header for segmentation masks. Pod type (64 bytes). The mask pixel data follows the header as a raw byte array.

use horus::prelude::*;

// Semantic segmentation (class ID per pixel)
let timestamp_ns = 1234567890; // e.g. current time in nanoseconds
let mask = SegmentationMask::semantic(1920, 1080, 80)
    .with_frame_id("camera_front")
    .with_timestamp(timestamp_ns);

println!("Data size: {} bytes", mask.data_size());  // 1920*1080

// Instance segmentation
let mask = SegmentationMask::instance(640, 480);

// Panoptic segmentation
let mask = SegmentationMask::panoptic(640, 480, 80);

// Check type
if mask.is_panoptic() {
    println!("Panoptic mask, u16 data: {} bytes", mask.data_size_u16());
}

Fields (64 bytes, #[repr(C)]):

| Field | Type | Description |
|---|---|---|
| width | u32 | Image width |
| height | u32 | Image height |
| num_classes | u32 | Number of classes (semantic/panoptic) |
| mask_type | u32 | 0=semantic, 1=instance, 2=panoptic |
| timestamp_ns | u64 | Nanoseconds since epoch |
| seq | u64 | Sequence number |
| frame_id | [u8; 32] | Camera/coordinate frame |
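
The mask payload sizes follow directly from these fields: semantic and instance masks use one byte per pixel, while panoptic masks use a u16 per pixel to encode class and instance together, which is what `data_size()` versus `data_size_u16()` reflects. A standalone sketch of that arithmetic:

```rust
// Mask payload size: width * height * bytes per pixel (illustrative helper).
fn mask_data_size(width: u32, height: u32, bytes_per_pixel: u32) -> usize {
    (width * height * bytes_per_pixel) as usize
}

fn main() {
    assert_eq!(mask_data_size(1920, 1080, 1), 2_073_600); // semantic, u8 per pixel
    assert_eq!(mask_data_size(640, 480, 2), 614_400);     // panoptic, u16 per pixel
}
```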

Common COCO class IDs (available as constants in segmentation::classes): BACKGROUND(0), PERSON(1), BICYCLE(2), CAR(3), MOTORCYCLE(4), BUS(6), TRAIN(7), TRUCK(8)

Perception Pipeline Example

use horus::prelude::*;

struct PerceptionNode {
    depth_sub: Topic<DepthImage>,
    cloud_pub: Topic<PointCloud>,
    fx: f32,
    fy: f32,
    cx: f32,
    cy: f32,
}

impl Node for PerceptionNode {
    fn name(&self) -> &str { "PerceptionNode" }

    fn tick(&mut self) {
        if let Some(depth) = self.depth_sub.recv() {
            // Read depth values and build point cloud
            let w = depth.width();
            let h = depth.height();
            let num_points = w * h;

            if let Ok(mut cloud) = PointCloud::new(num_points, 3, TensorDtype::F32) {
                // Depth-to-pointcloud conversion would fill cloud data here
                // using camera intrinsics (fx, fy, cx, cy)
                self.cloud_pub.send(cloud);
            }
        }
    }
}
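
The depth-to-pointcloud step the example leaves as a comment is the standard pinhole back-projection: a pixel (u, v) at depth d maps to x = (u - cx) · d / fx, y = (v - cy) · d / fy, z = d. A standalone sketch of that math, independent of the HORUS types:

```rust
// Back-project one pixel to a 3D point using pinhole intrinsics.
fn backproject(u: f32, v: f32, d: f32, fx: f32, fy: f32, cx: f32, cy: f32) -> [f32; 3] {
    [(u - cx) * d / fx, (v - cy) * d / fy, d]
}

fn main() {
    // With the principal point at the image center, the center pixel
    // lands on the optical axis. Intrinsics here are example values.
    let (fx, fy, cx, cy) = (525.0, 525.0, 320.0, 240.0);

    let p = backproject(320.0, 240.0, 2.0, fx, fy, cx, cy);
    assert_eq!(p, [0.0, 0.0, 2.0]);

    let q = backproject(425.0, 240.0, 2.0, fx, fy, cx, cy);
    assert!((q[0] - 0.4).abs() < 1e-6); // (425 - 320) * 2 / 525 = 0.4
}
```

In the node above, this function would run once per valid depth pixel, with the resulting floats written into the cloud's buffer via `data_mut()` or `copy_from`.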

See Also