Vision Messages

HORUS provides message types for cameras, images, camera calibration, and visual detection systems.

Image

Pool-backed RAII image type with zero-copy shared memory transport. Image allocates from a global tensor pool — you don't manage memory directly.

use horus::prelude::*;

// Create an RGB image (width, height, encoding) — allocates from global pool
let mut image = Image::new(640, 480, ImageEncoding::Rgb8)?;

// Copy pixel data into the image
let pixels: Vec<u8> = vec![128; 480 * 640 * 3];
image.copy_from(&pixels);

// Set metadata (method chaining)
image.set_frame_id("camera_front").set_timestamp_ns(1234567890);

// Access image properties
println!("Image: {}x{}, {:?}", image.width(), image.height(), image.encoding());
println!("Data size: {} bytes", image.data().len());

// Access individual pixel (x, y)
if let Some(pixel) = image.pixel(0, 0) {
    println!("Pixel[0,0]: R={}, G={}, B={}", pixel[0], pixel[1], pixel[2]);
}

// Set a pixel value
image.set_pixel(0, 0, &[255, 0, 0]);

// Extract region of interest (returns raw bytes, not Image)
if let Some(roi_data) = image.roi(0, 0, 100, 100) {
    println!("ROI data: {} bytes", roi_data.len());
}

// Fill entire image with a color
image.fill(&[0, 0, 0]); // Black

ImageEncoding values:

| Encoding | Channels | Bytes/Pixel | Description |
|----------|----------|-------------|-------------|
| Mono8 | 1 | 1 | 8-bit monochrome |
| Mono16 | 1 | 2 | 16-bit monochrome |
| Rgb8 | 3 | 3 | 8-bit RGB (default) |
| Bgr8 | 3 | 3 | 8-bit BGR (OpenCV) |
| Rgba8 | 4 | 4 | 8-bit RGBA |
| Bgra8 | 4 | 4 | 8-bit BGRA |
| Yuv422 | 2 | 2 | YUV 4:2:2 |
| Mono32F | 1 | 4 | 32-bit float mono |
| Rgb32F | 3 | 12 | 32-bit float RGB |
| BayerRggb8 | 1 | 1 | Bayer pattern (raw) |
| Depth16 | 1 | 2 | 16-bit depth (mm) |

ImageEncoding methods:

| Method | Returns | Description |
|--------|---------|-------------|
| bytes_per_pixel() | u32 | Bytes per pixel for this encoding |
| is_color() | bool | Whether encoding has color information |
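
bytes_per_pixel() is what you need to size a raw buffer before calling copy_from: a packed image occupies width * height * bytes_per_pixel bytes. A standalone sketch of that arithmetic (the helper names here are illustrative, not HORUS APIs):

```rust
// Sizing a packed pixel buffer: width * height * bytes_per_pixel.
// These helpers are illustrative; in HORUS you would call
// ImageEncoding::bytes_per_pixel() on the encoding itself.
fn bytes_per_pixel(channels: u32, bytes_per_channel: u32) -> u32 {
    channels * bytes_per_channel
}

fn buffer_len(width: u32, height: u32, bpp: u32) -> usize {
    width as usize * height as usize * bpp as usize
}

fn main() {
    let rgb8 = bytes_per_pixel(3, 1);   // Rgb8: 3 channels x 1 byte, per the table
    let mono16 = bytes_per_pixel(1, 2); // Mono16: 1 channel x 2 bytes
    println!("640x480 Rgb8:   {} bytes", buffer_len(640, 480, rgb8));
    println!("640x480 Mono16: {} bytes", buffer_len(640, 480, mono16));
}
```

This matches the `vec![128; 480 * 640 * 3]` buffer in the Image example above.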

Image methods:

Image is an RAII type — fields are private, accessed through methods. Mutation methods return &mut Self for chaining.

| Method | Returns | Description |
|--------|---------|-------------|
| new(width, height, encoding) | Result<Image> | Create image (allocates from global pool) |
| width() | u32 | Image width in pixels |
| height() | u32 | Image height in pixels |
| encoding() | ImageEncoding | Pixel encoding format |
| data() | &[u8] | Zero-copy access to pixel data |
| data_mut() | &mut [u8] | Mutable access to pixel data |
| copy_from(src) | &mut Self | Copy pixel data into image |
| pixel(x, y) | Option<&[u8]> | Get pixel bytes at coordinates |
| set_pixel(x, y, value) | &mut Self | Set pixel value at coordinates |
| fill(value) | &mut Self | Fill entire image with a value |
| roi(x, y, w, h) | Option<Vec<u8>> | Extract raw bytes for a region |
| set_frame_id(id) | &mut Self | Set camera frame identifier |
| set_timestamp_ns(ts) | &mut Self | Set timestamp in nanoseconds |
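
pixel() and set_pixel() imply packed, row-major addressing: assuming that layout (an assumption about the implementation, consistent with data() being a flat byte slice), the byte offset of pixel (x, y) is (y * width + x) * bytes_per_pixel. A standalone sketch with an illustrative helper name:

```rust
// Row-major pixel addressing sketch. pixel_offset is NOT a HORUS API;
// it illustrates how pixel()/set_pixel() index into the flat data() slice
// under the assumed packed, row-major layout.
fn pixel_offset(x: u32, y: u32, width: u32, bpp: u32) -> usize {
    ((y * width + x) * bpp) as usize
}

fn main() {
    // Rgb8 image, 640 px wide: pixel (10, 2) starts 2 full rows + 10 pixels in.
    let off = pixel_offset(10, 2, 640, 3);
    println!("offset: {}", off); // (2 * 640 + 10) * 3 = 3870
}
```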

CompressedImage

Compressed image data (JPEG, PNG, etc.).

use horus::prelude::*;

// Create compressed image from JPEG data
let jpeg_data = std::fs::read("image.jpg").unwrap();
let compressed = CompressedImage::new("jpeg", jpeg_data);

println!("Format: {}", compressed.format_str());
println!("Compressed size: {} bytes", compressed.data.len());

// Optional: set original dimensions if known
let mut img = compressed;
img.width = 640;
img.height = 480;

Fields:

| Field | Type | Description |
|-------|------|-------------|
| format | [u8; 8] | Compression format string (null-padded) |
| data | Vec<u8> | Compressed data |
| width | u32 | Original width (0 if unknown) |
| height | u32 | Original height (0 if unknown) |
| frame_id | [u8; 32] | Camera identifier |
| timestamp_ns | u64 | Nanoseconds since epoch |

Methods:

| Method | Returns | Description |
|--------|---------|-------------|
| new(format, data) | CompressedImage | Create from format string and data (auto-sets timestamp) |
| format_str() | String | Get format as string |

CameraInfo

Camera calibration information.

use horus::prelude::*;

// Create camera info with intrinsics
let camera = CameraInfo::new(
    640, 480,       // width, height
    525.0, 525.0,   // fx, fy
    320.0, 240.0    // cx, cy (principal point)
).with_distortion_model("plumb_bob");

// Access intrinsics
let (fx, fy) = camera.focal_lengths();
let (cx, cy) = camera.principal_point();

println!("Focal length: ({:.1}, {:.1})", fx, fy);
println!("Principal point: ({:.1}, {:.1})", cx, cy);

// Set distortion coefficients
let mut camera = camera;
camera.distortion_coefficients = [
    -0.25,  // k1
    0.12,   // k2
    0.001,  // p1
    -0.001, // p2
    0.0, 0.0, 0.0, 0.0  // k3-k6
];

Camera Matrix (3x3):

[fx,  0,  cx]
[ 0, fy,  cy]
[ 0,  0,   1]

Projection Matrix (3x4):

[fx',  0, cx', Tx]
[ 0,  fy', cy', Ty]
[ 0,   0,   1,  0]
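
The camera matrix above is the standard pinhole model: a 3D point (X, Y, Z) in the camera frame projects to pixel (u, v) = (fx·X/Z + cx, fy·Y/Z + cy). A standalone sketch of that projection (project() is an illustrative helper, not a CameraInfo method):

```rust
// Pinhole projection with the intrinsics layout shown above:
// u = fx * X / Z + cx,  v = fy * Y / Z + cy.
// project() is illustrative only, not part of the HORUS API.
fn project(fx: f64, fy: f64, cx: f64, cy: f64, p: [f64; 3]) -> (f64, f64) {
    let (x, y, z) = (p[0], p[1], p[2]);
    (fx * x / z + cx, fy * y / z + cy)
}

fn main() {
    // Intrinsics from the CameraInfo example; a point 2 m ahead, 0.5 m right.
    let (u, v) = project(525.0, 525.0, 320.0, 240.0, [0.5, 0.0, 2.0]);
    println!("pixel: ({:.2}, {:.2})", u, v); // right of center, on the principal row
}
```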

Fields:

| Field | Type | Description |
|-------|------|-------------|
| width | u32 | Image width in pixels |
| height | u32 | Image height in pixels |
| distortion_model | [u8; 16] | Distortion model name (null-padded) |
| distortion_coefficients | [f64; 8] | [k1, k2, p1, p2, k3, k4, k5, k6] |
| camera_matrix | [f64; 9] | 3x3 intrinsic matrix (row-major) |
| rectification_matrix | [f64; 9] | 3x3 rectification matrix (identity by default) |
| projection_matrix | [f64; 12] | 3x4 projection matrix (row-major) |
| frame_id | [u8; 32] | Camera identifier |
| timestamp_ns | u64 | Nanoseconds since epoch |

Methods:

| Method | Returns | Description |
|--------|---------|-------------|
| new(width, height, fx, fy, cx, cy) | CameraInfo | Create with intrinsics (auto-sets camera/projection matrices) |
| with_distortion_model(model) | CameraInfo | Set distortion model name (builder) |
| focal_lengths() | (f64, f64) | Get (fx, fy) from camera matrix |
| principal_point() | (f64, f64) | Get (cx, cy) from camera matrix |

RegionOfInterest

Region of interest (bounding box) in an image.

use horus::prelude::*;

// Create ROI
let roi = RegionOfInterest::new(100, 50, 200, 150);

// Check if point is inside ROI
if roi.contains(150, 100) {
    println!("Point is inside ROI");
}

// Get area
println!("ROI area: {} pixels", roi.area());

// Access properties
println!("ROI: ({}, {}) -> {}x{}",
    roi.x_offset, roi.y_offset, roi.width, roi.height);

Fields:

| Field | Type | Description |
|-------|------|-------------|
| x_offset | u32 | X offset of region |
| y_offset | u32 | Y offset of region |
| width | u32 | Region width |
| height | u32 | Region height |
| do_rectify | bool | Apply rectification |

Methods:

| Method | Returns | Description |
|--------|---------|-------------|
| new(x, y, width, height) | RegionOfInterest | Create a new ROI |
| contains(x, y) | bool | Check if point is inside ROI |
| area() | u32 | Get area in pixels |

StereoInfo

Stereo camera pair information.

use horus_library::messages::vision::StereoInfo;
use horus::prelude::*;

// Create stereo configuration
let left = CameraInfo::new(640, 480, 525.0, 525.0, 320.0, 240.0);
let right = CameraInfo::new(640, 480, 525.0, 525.0, 320.0, 240.0);

let stereo = StereoInfo {
    left_camera: left,
    right_camera: right,
    baseline: 0.12,  // 12cm between cameras
    depth_scale: 1.0,
};

// Calculate depth from disparity
let disparity = 64.0_f32;  // pixels
let depth = stereo.depth_from_disparity(disparity);
println!("Disparity {} -> depth {:.2}m", disparity, depth);

// Calculate disparity from depth
let depth = 2.0_f32;  // meters
let disparity = stereo.disparity_from_depth(depth);
println!("Depth {}m -> disparity {:.1}px", depth, disparity);

Note: StereoInfo is not included in the convenience re-exports. Import it directly from horus_library::messages::vision::StereoInfo.

Fields:

| Field | Type | Description |
|-------|------|-------------|
| left_camera | CameraInfo | Left camera calibration |
| right_camera | CameraInfo | Right camera calibration |
| baseline | f64 | Camera distance (meters) |
| depth_scale | f64 | Disparity-to-depth factor |

Methods:

| Method | Returns | Description |
|--------|---------|-------------|
| depth_from_disparity(disparity: f32) | f32 | Calculate depth from pixel disparity (returns INFINITY if disparity <= 0) |
| disparity_from_depth(depth: f32) | f32 | Calculate disparity from depth (returns 0 if depth <= 0) |
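
The geometry behind these methods is the standard pinhole-stereo relation: depth = fx · baseline / disparity. A standalone sketch of that formula (an illustration of the math, not the HORUS source; the library's method may additionally apply depth_scale):

```rust
// Standard stereo relation: depth = fx * baseline / disparity.
// Illustrative sketch only; use StereoInfo::depth_from_disparity in real code.
fn depth_from_disparity(fx: f32, baseline_m: f32, disparity_px: f32) -> f32 {
    if disparity_px <= 0.0 {
        return f32::INFINITY; // matches the documented edge case
    }
    fx * baseline_m / disparity_px
}

fn main() {
    // fx = 525 px, 12 cm baseline, 64 px disparity -> just under 1 m.
    let d = depth_from_disparity(525.0, 0.12, 64.0);
    println!("depth: {:.3} m", d);
}
```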

BoundingBox2D

2D bounding box for object detection. This is a Pod (Plain Old Data) type suitable for zero-copy shared memory transport.

use horus::prelude::*;

// Create from top-left corner
let bbox = BoundingBox2D::new(100.0, 50.0, 200.0, 150.0);

// Create from center (YOLO format)
let bbox = BoundingBox2D::from_center(200.0, 125.0, 200.0, 150.0);

// Get properties
println!("Center: ({}, {})", bbox.center_x(), bbox.center_y());
println!("Area: {} px²", bbox.area());

// Calculate IoU between two boxes
let other = BoundingBox2D::new(150.0, 75.0, 200.0, 150.0);
println!("IoU: {:.3}", bbox.iou(&other));

Fields (16 bytes, #[repr(C)]):

| Field | Type | Description |
|-------|------|-------------|
| x | f32 | X of top-left corner (pixels) |
| y | f32 | Y of top-left corner (pixels) |
| width | f32 | Width (pixels) |
| height | f32 | Height (pixels) |

Methods:

| Method | Returns | Description |
|--------|---------|-------------|
| new(x, y, width, height) | BoundingBox2D | Create from top-left corner |
| from_center(cx, cy, width, height) | BoundingBox2D | Create from center (YOLO format) |
| center_x() | f32 | Get center X coordinate |
| center_y() | f32 | Get center Y coordinate |
| area() | f32 | Get area |
| iou(other) | f32 | Intersection over Union with another box |
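
IoU is the intersection area divided by the union area. A standalone sketch of the computation on a plain struct with the same top-left + size layout (Box2 and this iou() are illustrative; use BoundingBox2D::iou in real code):

```rust
// Standalone IoU sketch. Illustrative only; mirrors the bbox layout above.
#[derive(Clone, Copy)]
struct Box2 {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
}

fn iou(a: Box2, b: Box2) -> f32 {
    // Overlap extents along each axis (non-positive means no overlap).
    let ix = (a.x + a.w).min(b.x + b.w) - a.x.max(b.x);
    let iy = (a.y + a.h).min(b.y + b.h) - a.y.max(b.y);
    if ix <= 0.0 || iy <= 0.0 {
        return 0.0;
    }
    let inter = ix * iy;
    inter / (a.w * a.h + b.w * b.h - inter) // intersection / union
}

fn main() {
    // The two boxes from the example above: 150x125 px of overlap.
    let a = Box2 { x: 100.0, y: 50.0, w: 200.0, h: 150.0 };
    let b = Box2 { x: 150.0, y: 75.0, w: 200.0, h: 150.0 };
    println!("IoU: {:.3}", iou(a, b));
}
```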

Detection

2D object detection result. This is a Pod type (72 bytes) suitable for zero-copy shared memory transport.

use horus::prelude::*;

// Create detection with class name, confidence, and bounding box coordinates
let det = Detection::new("person", 0.95, 100.0, 50.0, 200.0, 300.0);

println!("Detected: {} ({:.1}% confidence)",
    det.class_name(),
    det.confidence * 100.0);

println!("BBox: ({}, {}) {}x{}",
    det.bbox.x, det.bbox.y, det.bbox.width, det.bbox.height);

// Check confidence threshold
if det.is_confident(0.9) {
    println!("High confidence detection!");
}

// Create with class ID instead of name
let bbox = BoundingBox2D::new(100.0, 50.0, 200.0, 300.0);
let det = Detection::with_class_id(1, 0.88, bbox);

Fields (72 bytes, #[repr(C)]):

| Field | Type | Description |
|-------|------|-------------|
| bbox | BoundingBox2D | Bounding box (x, y, width, height) |
| confidence | f32 | Detection confidence (0.0-1.0) |
| class_id | u32 | Numeric class identifier |
| class_name | [u8; 32] | Class name string (null-padded, max 31 chars) |
| instance_id | u32 | Instance ID (for instance segmentation) |

Methods:

| Method | Returns | Description |
|--------|---------|-------------|
| new(class_name, confidence, x, y, width, height) | Detection | Create with name and bbox coordinates |
| with_class_id(class_id, confidence, bbox) | Detection | Create with numeric class ID |
| set_class_name(name) | () | Set class name (truncates to 31 chars) |
| class_name() | &str | Get class name as string |
| is_confident(threshold) | bool | Check if confidence >= threshold |
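
A common post-processing step for Detection streams is greedy non-maximum suppression: keep the highest-confidence box, drop any box that overlaps it above an IoU threshold, repeat. A standalone sketch over plain (confidence, [x, y, w, h]) tuples (illustrative only; in HORUS code you would read det.confidence and call BoundingBox2D::iou instead of the local helper):

```rust
// IoU of two boxes given as [x, y, w, h] (top-left corner + size).
fn iou(a: [f32; 4], b: [f32; 4]) -> f32 {
    let ix = (a[0] + a[2]).min(b[0] + b[2]) - a[0].max(b[0]);
    let iy = (a[1] + a[3]).min(b[1] + b[3]) - a[1].max(b[1]);
    if ix <= 0.0 || iy <= 0.0 {
        return 0.0; // no overlap
    }
    let inter = ix * iy;
    inter / (a[2] * a[3] + b[2] * b[3] - inter)
}

// Greedy NMS: keep the best box, suppress overlapping duplicates, repeat.
fn nms(mut dets: Vec<(f32, [f32; 4])>, iou_thresh: f32) -> Vec<(f32, [f32; 4])> {
    dets.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap()); // highest confidence first
    let mut kept: Vec<(f32, [f32; 4])> = Vec::new();
    for d in dets {
        if kept.iter().all(|k| iou(k.1, d.1) < iou_thresh) {
            kept.push(d);
        }
    }
    kept
}

fn main() {
    let dets = vec![
        (0.95, [100.0, 50.0, 200.0, 150.0]), // duplicate pair...
        (0.90, [110.0, 60.0, 200.0, 150.0]), // ...heavily overlaps the first
        (0.80, [400.0, 300.0, 50.0, 50.0]),  // separate object
    ];
    let kept = nms(dets, 0.5);
    println!("kept {} of 3 boxes", kept.len()); // the 0.90 duplicate is suppressed
}
```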

Detection3D

3D object detection from point clouds or depth-aware models. Pod type (104 bytes) with velocity tracking.

use horus::prelude::*;

// Create 3D bounding box (center, dimensions, yaw)
let bbox = BoundingBox3D::new(
    5.0, 2.0, 0.5,    // center (x, y, z) in meters
    4.5, 2.0, 1.5,    // dimensions (length, width, height)
    0.1               // yaw rotation in radians
);

// Create 3D detection with velocity
let det = Detection3D::new("car", 0.92, bbox)
    .with_velocity(10.0, 5.0, 0.0);  // m/s

println!("Detected: {} at ({}, {}, {})",
    det.class_name(),
    det.bbox.cx, det.bbox.cy, det.bbox.cz);
println!("Volume: {:.1} m³", det.bbox.volume());

BoundingBox3D fields (48 bytes, #[repr(C)]):

| Field | Type | Description |
|-------|------|-------------|
| cx, cy, cz | f32 | Center coordinates (meters) |
| length, width, height | f32 | Dimensions (meters) |
| roll, pitch, yaw | f32 | Euler angles (radians) |

Detection3D fields (104 bytes, #[repr(C)]):

| Field | Type | Description |
|-------|------|-------------|
| bbox | BoundingBox3D | 3D bounding box |
| confidence | f32 | Confidence score (0.0-1.0) |
| class_id | u32 | Numeric class identifier |
| class_name | [u8; 32] | Class name (null-padded, max 31 chars) |
| velocity_x, velocity_y, velocity_z | f32 | Velocity in m/s |
| instance_id | u32 | Instance/tracking ID |
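
The velocity fields support constant-velocity prediction between frames: the expected center after dt seconds is center + velocity · dt. A standalone sketch (predict() is an illustrative helper, not a Detection3D method):

```rust
// Constant-velocity extrapolation of a tracked detection's center.
// Illustrative only; in HORUS code the inputs would come from
// det.bbox.{cx, cy, cz} and det.velocity_{x, y, z}.
fn predict(center: [f32; 3], velocity_mps: [f32; 3], dt_s: f32) -> [f32; 3] {
    [
        center[0] + velocity_mps[0] * dt_s,
        center[1] + velocity_mps[1] * dt_s,
        center[2] + velocity_mps[2] * dt_s,
    ]
}

fn main() {
    // The car from the example: at (5, 2, 0.5) m, moving (10, 5, 0) m/s.
    let p = predict([5.0, 2.0, 0.5], [10.0, 5.0, 0.0], 0.1);
    println!("predicted center: {:?}", p);
}
```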

Vision Processing Node Example

use horus::prelude::*;

struct VisionNode {
    image_sub: Topic<Image>,
    camera_info_sub: Topic<CameraInfo>,
    detection_pub: Topic<Detection>,
    camera_info: Option<CameraInfo>,
}

impl Node for VisionNode {
    fn name(&self) -> &str { "VisionNode" }

    fn tick(&mut self) {
        // Update camera calibration
        if let Some(info) = self.camera_info_sub.recv() {
            self.camera_info = Some(info);
        }

        // Process images
        if let Some(image) = self.image_sub.recv() {
            // Run detection on `image` (your ML model here)
            let detection = Detection::new(
                "person", 0.95,
                100.0, 50.0, 200.0, 300.0
            );

            self.detection_pub.send(detection);
        }
    }
}

See Also