What is HORUS?
A surgical robot performs 200 corrections per second. Each correction reads force sensors, computes inverse kinematics, and commands six motors — all within 5 milliseconds. A single late message means the scalpel moves too far. A single dropped message means it doesn't move at all.
Traditional robotics middleware serializes every message, copies it through a kernel socket, and deserializes it on the other side. That round trip costs 50–100 microseconds — eating 1–2% of the budget for every message. In a system with 20 inter-node messages per cycle, middleware alone consumes 20–40% of the time budget before any application code runs.
HORUS eliminates this overhead. Publisher and subscriber share the same memory region: no serialization, no copies, no kernel transitions. A CmdVel message reaches the motor controller in ~85 nanoseconds; even the 20-message cycle above consumes under 2 microseconds, leaving more than 99.9% of the time budget for actual computation. The framework is written in Rust, so the safety guarantees that protect against data races and use-after-free bugs come from the compiler, not from runtime checks.
How It Works
HORUS applications are built from three primitives:
Nodes
A Node is a component with a single responsibility. It implements the Node trait — only tick() is required:
// simplified
use horus::prelude::*;

struct MotorController {
    cmd_sub: Topic<CmdVel>,
}

impl Node for MotorController {
    fn name(&self) -> &str { "MotorController" }

    fn tick(&mut self) {
        if let Some(cmd) = self.cmd_sub.recv() {
            // Drive motors based on velocity command
        }
    }
}
The scheduler calls init() once at startup, tick() every cycle, and shutdown() on Ctrl+C. Nodes are isolated — a crash in one doesn't bring down the system.
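Only tick() is mandatory; init() and shutdown() can be overridden when a node needs setup or cleanup. A minimal sketch (the init/shutdown signatures here are assumed to mirror tick()):

// simplified sketch: init()/shutdown() signatures assumed to mirror tick()
impl Node for MotorController {
    fn name(&self) -> &str { "MotorController" }

    fn init(&mut self) {
        // Called once at startup: open motor driver handles, zero outputs
    }

    fn tick(&mut self) {
        if let Some(cmd) = self.cmd_sub.recv() {
            // Drive motors based on velocity command
        }
    }

    fn shutdown(&mut self) {
        // Called on Ctrl+C: command motors to a safe stop
    }
}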
Topics
A Topic is a named shared-memory channel. Multiple publishers can send, multiple subscribers can receive:
// simplified
let publisher: Topic<f32> = Topic::new("sensor.temperature")?;
publisher.send(25.0);

let subscriber: Topic<f32> = Topic::new("sensor.temperature")?;
if let Some(temp) = subscriber.recv() {
    println!("Got: {temp}");
}
Topic names use dots for hierarchy: "sensor.imu.accel", "motor.left_wheel". The type parameter (<f32>) ensures compile-time type safety.
Use dots (not slashes) in topic names. Slashes work on Linux but fail on macOS due to shm_open limitations.
Scheduler
The Scheduler runs nodes in priority order each tick cycle:
// simplified
let mut scheduler = Scheduler::new();
scheduler.add(SensorNode::new()?).order(0).build()?; // runs first
scheduler.add(ProcessNode::new()?).order(1).build()?; // runs second
scheduler.add(MotorNode::new()?).order(2).build()?; // runs third
scheduler.run()?; // loops until Ctrl+C
Order controls data flow: sensors publish before processors consume, processors publish before actuators consume. The scheduler also provides deadline monitoring, a graduated watchdog, and coordinated shutdown.
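As a sketch of how that might look in code (the deadline() builder method and SafetyNode are hypothetical, used only to illustrate the idea):

// hypothetical sketch: `deadline()` is an illustrative builder method,
// not confirmed HORUS API; SafetyNode is a made-up node type
use std::time::Duration;

let mut scheduler = Scheduler::new();
scheduler.add(SafetyNode::new()?)
    .order(0)                             // safety checks run before actuation
    .deadline(Duration::from_micros(500)) // watchdog flags a tick that overruns
    .build()?;
scheduler.add(MotorNode::new()?)
    .order(1)                             // consumes what the safety node published
    .build()?;
scheduler.run()?;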
Python and Rust nodes communicate through the same shared-memory Topics — mix languages freely within the same application.
Key Concepts
| Concept | What it does | Learn more |
|---|---|---|
| Node | Self-contained component with tick() lifecycle | Nodes |
| Topic | Named shared-memory pub/sub channel | Topic |
| Scheduler | Priority-based execution with deadline monitoring | Scheduler |
| Execution Classes | 5 executor types (RT, Compute, Event, AsyncIo, BestEffort) | Execution Classes |
| node! Macro | Eliminates boilerplate for common node patterns | node! Macro |
| Messages | Typed structs (CmdVel, Imu, LaserScan, etc.) for zero-copy IPC; see the sketch below | Message Types |
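A message struct travels through a Topic exactly like the f32 example earlier. A minimal sketch, assuming CmdVel exposes linear and angular fields (the field names and topic name here are illustrative):

// simplified sketch: CmdVel field names (linear, angular) are assumed
let cmd_pub: Topic<CmdVel> = Topic::new("robot.cmd_vel")?;
cmd_pub.send(CmdVel { linear: 0.5, angular: 0.0 }); // 0.5 m/s forward, no rotation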
When to Use HORUS
Good fit:
- Multi-component systems where components communicate at high frequency
- Real-time control loops (motor control, flight control, haptics)
- Single-machine distributed systems (Raspberry Pi, Jetson, edge devices)
- Mixed-language applications (Rust performance + Python prototyping)
Not a good fit:
- Simple single-file scripts (HORUS adds unnecessary structure)
- Internet-scale distributed systems (use gRPC, Kafka, or message queues)
- CRUD web applications (use Axum, Actix, Django, Flask)
- Bare-metal embedded without an OS (use RTIC or Embassy)
Performance
| Metric | Value |
|---|---|
| Same-thread latency | ~3 ns |
| Same-process latency | ~18–36 ns |
| Cross-process latency | ~50–167 ns |
| Small message throughput (<1 KB) | 2M+ msgs/sec |
| Framework memory overhead | ~2 MB |
For context, ROS2 DDS achieves ~50–100 microseconds for intra-machine messages, so HORUS is roughly 300–30,000x faster (50 µs / 167 ns ≈ 300; 100 µs / 3 ns ≈ 33,000) because data never leaves RAM.
Design Decisions
Why shared memory instead of network IPC? Shared memory eliminates serialization, copying, and kernel transitions. A network-based framework like ROS2 pays ~50–100 µs per message for DDS serialization + UDP transport. HORUS pays ~3–167 ns because the data never leaves RAM: the publisher writes and the subscriber reads from the same memory region.
Why Rust? Robotics code controls physical actuators. A null pointer dereference in a motor controller can damage hardware or injure people. Rust's ownership system prevents entire categories of bugs (use-after-free, data races, null pointers) at compile time. Python is supported via bindings for ease of use, but the runtime is Rust.
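A plain-Rust illustration (nothing HORUS-specific): code that would race on shared state is rejected before it ever runs.

// Plain Rust, no HORUS: a data race is a compile error, not a runtime crash.
use std::thread;

fn main() {
    let mut readings = vec![0.0_f32; 8];
    let writer = thread::spawn(move || {
        readings[0] = 1.0; // `readings` is moved into this thread
    });
    // readings[1] = 2.0; // uncommenting this fails to compile:
    //                    // error[E0382]: borrow of moved value `readings`
    writer.join().unwrap();
}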
Why a scheduler instead of independent processes? A scheduler enables deterministic execution order (safety node always runs before motor node), deadline monitoring (detect if a node is too slow), and coordinated shutdown (all actuators go to safe state). Independent processes cannot guarantee these properties without complex external coordination.
Trade-offs
| Gain | Cost |
|---|---|
| Sub-microsecond IPC — no serialization overhead | Single-machine only — no built-in cross-network communication |
| Compile-time memory safety from Rust | Steeper learning curve for developers new to Rust |
| Deterministic execution order via scheduler | All nodes must run in the same scheduler (or use multi-process with shared topics) |
| Zero-copy communication for large payloads (images, point clouds) | Shared memory requires careful lifecycle management (handled by the framework) |
| Same API across Rust and Python | Python nodes pay a small overhead for the Rust FFI bridge |
See Also
- Quick Start — Build your first HORUS application
- Why HORUS — Detailed motivation and design goals
- vs ROS2 — Side-by-side comparison with ROS2
- Architecture — System architecture and internals