Vision Messages
HORUS provides message types for cameras, images, camera calibration, and visual detection systems.
Image
Pool-backed RAII image type with zero-copy shared memory transport. Image allocates from a global tensor pool — you don't manage memory directly.
```rust
use horus::prelude::*;

// Create an RGB image (width, height, encoding) — allocates from the global pool
let mut image = Image::new(640, 480, ImageEncoding::Rgb8)?;

// Copy pixel data into the image
let pixels: Vec<u8> = vec![128; 640 * 480 * 3];
image.copy_from(&pixels);

// Set metadata (method chaining)
image.set_frame_id("camera_front").set_timestamp_ns(1234567890);

// Access image properties
println!("Image: {}x{}, {:?}", image.width(), image.height(), image.encoding());
println!("Data size: {} bytes", image.data().len());

// Access an individual pixel (x, y)
if let Some(pixel) = image.pixel(0, 0) {
    println!("Pixel[0,0]: R={}, G={}, B={}", pixel[0], pixel[1], pixel[2]);
}

// Set a pixel value
image.set_pixel(0, 0, &[255, 0, 0]);

// Extract a region of interest (returns raw bytes, not an Image)
if let Some(roi_data) = image.roi(0, 0, 100, 100) {
    println!("ROI data: {} bytes", roi_data.len());
}

// Fill the entire image with a color
image.fill(&[0, 0, 0]); // Black
```
ImageEncoding values:

| Encoding | Channels | Bytes/Pixel | Description |
|---|---|---|---|
| Mono8 | 1 | 1 | 8-bit monochrome |
| Mono16 | 1 | 2 | 16-bit monochrome |
| Rgb8 | 3 | 3 | 8-bit RGB (default) |
| Bgr8 | 3 | 3 | 8-bit BGR (OpenCV) |
| Rgba8 | 4 | 4 | 8-bit RGBA |
| Bgra8 | 4 | 4 | 8-bit BGRA |
| Yuv422 | 2 | 2 | YUV 4:2:2 |
| Mono32F | 1 | 4 | 32-bit float mono |
| Rgb32F | 3 | 12 | 32-bit float RGB |
| BayerRggb8 | 1 | 1 | Bayer pattern (raw) |
| Depth16 | 1 | 2 | 16-bit depth (mm) |
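As a sanity check on the table, the raw buffer size of an image is simply width * height * bytes-per-pixel. A minimal sketch of that arithmetic (plain Rust, not a HORUS API):

```rust
// Expected raw buffer size for a given encoding. The bytes-per-pixel
// values come from the ImageEncoding table above (3 for Rgb8, 2 for Depth16).
fn buffer_size(width: u32, height: u32, bytes_per_pixel: u32) -> usize {
    (width as usize) * (height as usize) * (bytes_per_pixel as usize)
}

fn main() {
    assert_eq!(buffer_size(640, 480, 3), 921_600); // Rgb8
    assert_eq!(buffer_size(640, 480, 2), 614_400); // Depth16
}
```

This is the length `image.data()` should report after allocation.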
ImageEncoding methods:

| Method | Returns | Description |
|---|---|---|
| bytes_per_pixel() | u32 | Bytes per pixel for this encoding |
| is_color() | bool | Whether the encoding carries color information |
Image methods:

Image is an RAII type — fields are private and accessed through methods. Mutation methods return &mut Self for chaining.

| Method | Returns | Description |
|---|---|---|
| new(width, height, encoding) | Result<Image> | Create an image (allocates from the global pool) |
| width() | u32 | Image width in pixels |
| height() | u32 | Image height in pixels |
| encoding() | ImageEncoding | Pixel encoding format |
| data() | &[u8] | Zero-copy access to pixel data |
| data_mut() | &mut [u8] | Mutable access to pixel data |
| copy_from(src) | &mut Self | Copy pixel data into the image |
| pixel(x, y) | Option<&[u8]> | Get pixel bytes at coordinates |
| set_pixel(x, y, value) | &mut Self | Set pixel value at coordinates |
| fill(value) | &mut Self | Fill the entire image with a value |
| roi(x, y, w, h) | Option<Vec<u8>> | Extract raw bytes for a region |
| set_frame_id(id) | &mut Self | Set camera frame identifier |
| set_timestamp_ns(ts) | &mut Self | Set timestamp in nanoseconds |
CompressedImage
Compressed image data (JPEG, PNG, etc.).
```rust
use horus::prelude::*;

// Create a compressed image from JPEG data
let jpeg_data = std::fs::read("image.jpg").unwrap();
let compressed = CompressedImage::new("jpeg", jpeg_data);

println!("Format: {}", compressed.format_str());
println!("Compressed size: {} bytes", compressed.data.len());

// Optional: set the original dimensions if known
let mut img = compressed;
img.width = 640;
img.height = 480;
```
Fields:

| Field | Type | Description |
|---|---|---|
| format | [u8; 8] | Compression format string (null-padded) |
| data | Vec<u8> | Compressed data |
| width | u32 | Original width (0 if unknown) |
| height | u32 | Original height (0 if unknown) |
| frame_id | [u8; 32] | Camera identifier |
| timestamp_ns | u64 | Nanoseconds since epoch |
Methods:

| Method | Returns | Description |
|---|---|---|
| new(format, data) | CompressedImage | Create from a format string and data (auto-sets timestamp) |
| format_str() | String | Get the format as a string |
CameraInfo
Camera calibration information.
```rust
use horus::prelude::*;

// Create camera info with intrinsics
let camera = CameraInfo::new(
    640, 480,     // width, height
    525.0, 525.0, // fx, fy
    320.0, 240.0, // cx, cy (principal point)
)
.with_distortion_model("plumb_bob");

// Access intrinsics
let (fx, fy) = camera.focal_lengths();
let (cx, cy) = camera.principal_point();
println!("Focal length: ({:.1}, {:.1})", fx, fy);
println!("Principal point: ({:.1}, {:.1})", cx, cy);

// Set distortion coefficients
let mut camera = camera;
camera.distortion_coefficients = [
    -0.25,  // k1
    0.12,   // k2
    0.001,  // p1
    -0.001, // p2
    0.0, 0.0, 0.0, 0.0, // k3-k6
];
```
Camera Matrix (3x3):

```
[fx,  0, cx]
[ 0, fy, cy]
[ 0,  0,  1]
```

Projection Matrix (3x4):

```
[fx',  0, cx', Tx]
[  0, fy', cy', Ty]
[  0,   0,   1,  0]
```
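The camera matrix above maps a 3D point in the camera frame to pixel coordinates via u = fx * X/Z + cx and v = fy * Y/Z + cy. A standalone sketch of that projection (illustrative math only, not a HORUS API):

```rust
// Pinhole projection with intrinsics (fx, fy, cx, cy): a camera-frame
// point (X, Y, Z), Z > 0, lands at pixel (fx * X/Z + cx, fy * Y/Z + cy).
fn project(fx: f64, fy: f64, cx: f64, cy: f64, p: [f64; 3]) -> (f64, f64) {
    let [x, y, z] = p;
    (fx * x / z + cx, fy * y / z + cy)
}

fn main() {
    // Using the example intrinsics: a point 2 m ahead and 0.4 m to the
    // right projects 105 px right of the principal point.
    let (u, v) = project(525.0, 525.0, 320.0, 240.0, [0.4, 0.0, 2.0]);
    assert!((u - 425.0).abs() < 1e-9);
    assert!((v - 240.0).abs() < 1e-9);
}
```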
Fields:

| Field | Type | Description |
|---|---|---|
| width | u32 | Image width in pixels |
| height | u32 | Image height in pixels |
| distortion_model | [u8; 16] | Distortion model name (null-padded) |
| distortion_coefficients | [f64; 8] | [k1, k2, p1, p2, k3, k4, k5, k6] |
| camera_matrix | [f64; 9] | 3x3 intrinsic matrix (row-major) |
| rectification_matrix | [f64; 9] | 3x3 rectification matrix (identity by default) |
| projection_matrix | [f64; 12] | 3x4 projection matrix (row-major) |
| frame_id | [u8; 32] | Camera identifier |
| timestamp_ns | u64 | Nanoseconds since epoch |
Methods:

| Method | Returns | Description |
|---|---|---|
| new(width, height, fx, fy, cx, cy) | CameraInfo | Create with intrinsics (auto-sets camera/projection matrices) |
| with_distortion_model(model) | CameraInfo | Set distortion model name (builder) |
| focal_lengths() | (f64, f64) | Get (fx, fy) from the camera matrix |
| principal_point() | (f64, f64) | Get (cx, cy) from the camera matrix |
RegionOfInterest
Region of interest (bounding box) in an image.
```rust
use horus::prelude::*;

// Create an ROI
let roi = RegionOfInterest::new(100, 50, 200, 150);

// Check whether a point lies inside the ROI
if roi.contains(150, 100) {
    println!("Point is inside ROI");
}

// Get the area
println!("ROI area: {} pixels", roi.area());

// Access properties
println!("ROI: ({}, {}) -> {}x{}",
    roi.x_offset, roi.y_offset, roi.width, roi.height);
```
Fields:

| Field | Type | Description |
|---|---|---|
| x_offset | u32 | X offset of region |
| y_offset | u32 | Y offset of region |
| width | u32 | Region width |
| height | u32 | Region height |
| do_rectify | bool | Apply rectification |
Methods:

| Method | Returns | Description |
|---|---|---|
| new(x, y, width, height) | RegionOfInterest | Create a new ROI |
| contains(x, y) | bool | Check whether a point lies inside the ROI |
| area() | u32 | Get the area in pixels |
StereoInfo
Stereo camera pair information.
```rust
use horus::prelude::*;
use horus_library::messages::vision::StereoInfo;

// Create a stereo configuration
let left = CameraInfo::new(640, 480, 525.0, 525.0, 320.0, 240.0);
let right = CameraInfo::new(640, 480, 525.0, 525.0, 320.0, 240.0);
let stereo = StereoInfo {
    left_camera: left,
    right_camera: right,
    baseline: 0.12, // 12 cm between cameras
    depth_scale: 1.0,
};

// Calculate depth from disparity
let disparity = 64.0_f32; // pixels
let depth = stereo.depth_from_disparity(disparity);
println!("Disparity {} -> depth {:.2}m", disparity, depth);

// Calculate disparity from depth
let depth = 2.0_f32; // meters
let disparity = stereo.disparity_from_depth(depth);
println!("Depth {}m -> disparity {:.1}px", depth, disparity);
```
Note:
StereoInfo is not included in the convenience re-exports. Import it directly from horus_library::messages::vision::StereoInfo.
Fields:

| Field | Type | Description |
|---|---|---|
| left_camera | CameraInfo | Left camera calibration |
| right_camera | CameraInfo | Right camera calibration |
| baseline | f64 | Camera distance (meters) |
| depth_scale | f64 | Disparity-to-depth factor |
Methods:

| Method | Returns | Description |
|---|---|---|
| depth_from_disparity(disparity: f32) | f32 | Calculate depth from pixel disparity (returns INFINITY if disparity <= 0) |
| disparity_from_depth(depth: f32) | f32 | Calculate disparity from depth (returns 0 if depth <= 0) |
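For a rectified pair, the usual pinhole relation is depth = fx * baseline / disparity: with fx = 525 px and a 0.12 m baseline, 64 px of disparity is roughly 1 m. A standalone sketch of that relation (illustrative; how depth_scale enters HORUS's actual implementation is not shown here):

```rust
// Standard stereo depth relation for a rectified camera pair:
// depth [m] = focal length [px] * baseline [m] / disparity [px].
fn depth_from_disparity(fx: f32, baseline: f32, disparity: f32) -> f32 {
    if disparity <= 0.0 {
        f32::INFINITY // matches the documented edge case
    } else {
        fx * baseline / disparity
    }
}

fn main() {
    // fx = 525 px, baseline = 0.12 m, disparity = 64 px -> ~0.98 m
    let d = depth_from_disparity(525.0, 0.12, 64.0);
    assert!((d - 0.984_375).abs() < 1e-5);
    assert!(depth_from_disparity(525.0, 0.12, 0.0).is_infinite());
}
```

Note how depth resolution degrades as disparity shrinks: distant objects span only a few pixels of disparity.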
BoundingBox2D
2D bounding box for object detection. This is a Pod (Plain Old Data) type suitable for zero-copy shared memory transport.
```rust
use horus::prelude::*;

// Create from the top-left corner
let bbox = BoundingBox2D::new(100.0, 50.0, 200.0, 150.0);

// Create from the center (YOLO format)
let bbox = BoundingBox2D::from_center(200.0, 125.0, 200.0, 150.0);

// Get properties
println!("Center: ({}, {})", bbox.center_x(), bbox.center_y());
println!("Area: {} px²", bbox.area());

// Calculate IoU between two boxes
let other = BoundingBox2D::new(150.0, 75.0, 200.0, 150.0);
println!("IoU: {:.3}", bbox.iou(&other));
```
Fields (16 bytes, #[repr(C)]):

| Field | Type | Description |
|---|---|---|
| x | f32 | X of top-left corner (pixels) |
| y | f32 | Y of top-left corner (pixels) |
| width | f32 | Width (pixels) |
| height | f32 | Height (pixels) |
Methods:

| Method | Returns | Description |
|---|---|---|
| new(x, y, width, height) | BoundingBox2D | Create from the top-left corner |
| from_center(cx, cy, width, height) | BoundingBox2D | Create from the center (YOLO format) |
| center_x() | f32 | Get center X coordinate |
| center_y() | f32 | Get center Y coordinate |
| area() | f32 | Get area |
| iou(other) | f32 | Intersection over Union with another box |
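IoU for axis-aligned boxes is the intersection area divided by the union area. A standalone sketch of the conventional computation (illustrative math, not necessarily HORUS's exact code path):

```rust
// IoU of two axis-aligned boxes given as (x, y, width, height) tuples:
// intersection extent on each axis, clamped at zero, then inter / union.
fn iou(a: (f32, f32, f32, f32), b: (f32, f32, f32, f32)) -> f32 {
    let ix = (a.0 + a.2).min(b.0 + b.2) - a.0.max(b.0);
    let iy = (a.1 + a.3).min(b.1 + b.3) - a.1.max(b.1);
    if ix <= 0.0 || iy <= 0.0 {
        return 0.0; // boxes do not overlap
    }
    let inter = ix * iy;
    let union = a.2 * a.3 + b.2 * b.3 - inter;
    inter / union
}

fn main() {
    // Identical boxes overlap completely.
    assert_eq!(iou((0.0, 0.0, 10.0, 10.0), (0.0, 0.0, 10.0, 10.0)), 1.0);
    // Disjoint boxes share nothing.
    assert_eq!(iou((0.0, 0.0, 10.0, 10.0), (20.0, 0.0, 5.0, 5.0)), 0.0);
}
```

IoU is the usual score for non-maximum suppression and for matching predictions to ground truth.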
Detection
2D object detection result. This is a Pod type (72 bytes) suitable for zero-copy shared memory transport.
```rust
use horus::prelude::*;

// Create a detection with class name, confidence, and bounding box coordinates
let det = Detection::new("person", 0.95, 100.0, 50.0, 200.0, 300.0);

println!("Detected: {} ({:.1}% confidence)",
    det.class_name(),
    det.confidence * 100.0);
println!("BBox: ({}, {}) {}x{}",
    det.bbox.x, det.bbox.y, det.bbox.width, det.bbox.height);

// Check against a confidence threshold
if det.is_confident(0.9) {
    println!("High confidence detection!");
}

// Create with a class ID instead of a name
let bbox = BoundingBox2D::new(100.0, 50.0, 200.0, 300.0);
let det = Detection::with_class_id(1, 0.88, bbox);
```
Fields (72 bytes, #[repr(C)]):

| Field | Type | Description |
|---|---|---|
| bbox | BoundingBox2D | Bounding box (x, y, width, height) |
| confidence | f32 | Detection confidence (0.0-1.0) |
| class_id | u32 | Numeric class identifier |
| class_name | [u8; 32] | Class name string (null-padded, max 31 chars) |
| instance_id | u32 | Instance ID (for instance segmentation) |
Methods:

| Method | Returns | Description |
|---|---|---|
| new(class_name, confidence, x, y, width, height) | Detection | Create with name and bbox coordinates |
| with_class_id(class_id, confidence, bbox) | Detection | Create with a numeric class ID |
| set_class_name(name) | () | Set class name (truncates to 31 chars) |
| class_name() | &str | Get class name as a string |
| is_confident(threshold) | bool | Check whether confidence >= threshold |
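The class_name field stores the string null-padded in a fixed [u8; 32] buffer, which is what keeps Detection a fixed-size Pod type. A standalone sketch of how such a buffer round-trips to &str (the helper name is ours, not a HORUS API):

```rust
// Read a null-padded fixed-size buffer back as a &str: take bytes up to
// the first null (or the whole buffer if none), then decode as UTF-8.
fn read_name(buf: &[u8; 32]) -> &str {
    let end = buf.iter().position(|&b| b == 0).unwrap_or(buf.len());
    std::str::from_utf8(&buf[..end]).unwrap_or("")
}

fn main() {
    let mut buf = [0u8; 32];
    buf[..6].copy_from_slice(b"person");
    assert_eq!(read_name(&buf), "person");
    assert_eq!(read_name(&[0u8; 32]), ""); // empty name
}
```

The 31-character limit follows from reserving at least one null byte as the terminator.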
Detection3D
3D object detection from point clouds or depth-aware models. Pod type (104 bytes) with velocity tracking.
```rust
use horus::prelude::*;

// Create a 3D bounding box (center, dimensions, yaw)
let bbox = BoundingBox3D::new(
    5.0, 2.0, 0.5, // center (x, y, z) in meters
    4.5, 2.0, 1.5, // dimensions (length, width, height)
    0.1,           // yaw rotation in radians
);

// Create a 3D detection with velocity
let det = Detection3D::new("car", 0.92, bbox)
    .with_velocity(10.0, 5.0, 0.0); // m/s

println!("Detected: {} at ({}, {}, {})",
    det.class_name(),
    det.bbox.cx, det.bbox.cy, det.bbox.cz);
println!("Volume: {:.1} m³", det.bbox.volume());
```
BoundingBox3D fields (48 bytes, #[repr(C)]):

| Field | Type | Description |
|---|---|---|
| cx, cy, cz | f32 | Center coordinates (meters) |
| length, width, height | f32 | Dimensions (meters) |
| roll, pitch, yaw | f32 | Euler angles (radians) |
Detection3D fields (104 bytes, #[repr(C)]):

| Field | Type | Description |
|---|---|---|
| bbox | BoundingBox3D | 3D bounding box |
| confidence | f32 | Confidence score (0.0-1.0) |
| class_id | u32 | Numeric class identifier |
| class_name | [u8; 32] | Class name (null-padded, max 31 chars) |
| velocity_x, velocity_y, velocity_z | f32 | Velocity in m/s |
| instance_id | u32 | Instance/tracking ID |
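The velocity fields support simple constant-velocity extrapolation of a tracked object between frames. A standalone sketch of that prediction step (illustrative; the helper is ours, not a HORUS API):

```rust
// Constant-velocity motion model: predict where a detection's center
// will be after dt seconds, given its velocity in m/s.
fn predict_center(center: [f32; 3], velocity: [f32; 3], dt: f32) -> [f32; 3] {
    [
        center[0] + velocity[0] * dt,
        center[1] + velocity[1] * dt,
        center[2] + velocity[2] * dt,
    ]
}

fn main() {
    // The car from the example, moving at (10, 5, 0) m/s, 0.5 s later:
    let p = predict_center([5.0, 2.0, 0.5], [10.0, 5.0, 0.0], 0.5);
    assert_eq!(p, [10.0, 4.5, 0.5]);
}
```

Paired with instance_id, this is the basis for associating detections across frames in a tracker.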
Vision Processing Node Example
```rust
use horus::prelude::*;

struct VisionNode {
    image_sub: Topic<Image>,
    camera_info_sub: Topic<CameraInfo>,
    detection_pub: Topic<Detection>,
    camera_info: Option<CameraInfo>,
}

impl Node for VisionNode {
    fn name(&self) -> &str { "VisionNode" }

    fn tick(&mut self) {
        // Update camera calibration when a new message arrives
        if let Some(info) = self.camera_info_sub.recv() {
            self.camera_info = Some(info);
        }

        // Process incoming images
        if let Some(_image) = self.image_sub.recv() {
            // Run detection on the image (your ML model here)
            let detection = Detection::new(
                "person", 0.95,
                100.0, 50.0, 200.0, 300.0,
            );
            self.detection_pub.send(detection);
        }
    }
}
```
See Also
- Perception Messages - PointCloud, DepthImage
- Message Types - Standard message type overview