Quick Start
Prefer Python? See Quick Start (Python).
Prerequisites
- HORUS installed, with `horus --help` working
- A terminal and text editor
What You'll Build
A temperature monitoring system with two nodes:
- Sensor — generates temperature readings and publishes them
- Monitor — subscribes to the readings and displays them
The nodes communicate through HORUS's shared-memory Topics — no sockets, no serialization, no configuration.
Time estimate: ~10 minutes
Step 1: Create a New Project
```bash
horus new temperature-monitor
```
Select options in the interactive prompt:
- Language: Rust
- Use macros: No (we'll learn the basics first)
```bash
cd temperature-monitor
```
You should see three items in the project directory:
- `src/main.rs` — your application code
- `horus.toml` — project configuration (name, version, dependencies)
- `.horus/` — generated build files (managed automatically)
Step 2: Write the Sensor Node
Replace the contents of src/main.rs with the following:
```rust
use horus::prelude::*;
use std::time::Duration;

// ── Sensor Node ─────────────────────────────────────────────
// Publishes a simulated temperature reading every second.
struct TemperatureSensor {
    publisher: Topic<f32>,
    temperature: f32,
}

impl TemperatureSensor {
    fn new() -> Result<Self> {
        Ok(Self {
            publisher: Topic::new("temperature")?,
            temperature: 20.0,
        })
    }
}

impl Node for TemperatureSensor {
    fn name(&self) -> &str {
        "TemperatureSensor"
    }

    fn tick(&mut self) {
        self.temperature += 0.1;
        self.publisher.send(self.temperature);
        // WARNING: sleep() in tick() blocks the scheduler thread.
        // In production, use .rate(1_u64.hz()) on the node builder instead.
        std::thread::sleep(Duration::from_secs(1));
    }

    fn shutdown(&mut self) -> Result<()> {
        eprintln!("Sensor shutting down. Last reading: {:.1}°C", self.temperature);
        Ok(())
    }
}

// ── Monitor Node ────────────────────────────────────────────
// Subscribes to temperature readings and prints them.
struct TemperatureMonitor {
    subscriber: Topic<f32>,
}

impl TemperatureMonitor {
    fn new() -> Result<Self> {
        Ok(Self {
            subscriber: Topic::new("temperature")?,
        })
    }
}

impl Node for TemperatureMonitor {
    fn name(&self) -> &str {
        "TemperatureMonitor"
    }

    fn tick(&mut self) {
        if let Some(temp) = self.subscriber.recv() {
            println!("Temperature: {:.1}°C", temp);
        }
    }

    fn shutdown(&mut self) -> Result<()> {
        eprintln!("Monitor shutting down.");
        Ok(())
    }
}

// ── Main ────────────────────────────────────────────────────
fn main() -> Result<()> {
    eprintln!("Starting temperature monitoring system...\n");

    let mut scheduler = Scheduler::new();

    // The scheduler auto-detects order from topics:
    // sensor publishes "temperature", monitor subscribes → sensor runs first
    scheduler.add(TemperatureSensor::new()?).build()?;
    scheduler.add(TemperatureMonitor::new()?).build()?;

    // Run until Ctrl+C
    scheduler.run()?;
    Ok(())
}
```
Step 3: Run It
```bash
horus run --release
```
You should see output like:
```
Starting temperature monitoring system...

Temperature: 20.1°C
Temperature: 20.2°C
Temperature: 20.3°C
Temperature: 20.4°C
...
```
Press Ctrl+C to stop. You should see the shutdown messages from both nodes.
Debug vs Release:
`horus run` without `--release` uses debug mode (~60-200μs per tick). With `--release`, tick times drop to ~1-3μs. If you're thinking "HORUS is slow", you're probably running in debug mode. Always use `--release` for performance testing.
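You can see the debug-vs-release gap for yourself with a plain-Rust timing harness. This sketch uses only the standard library (no HORUS); the workload and iteration counts are arbitrary, chosen just to make the difference visible when you build with and without `--release`:

```rust
use std::time::Instant;

// A small arithmetic workload standing in for a node's tick().
// Run this once with `cargo run` and once with `cargo run --release`
// to see how much optimization changes the per-call time.
fn workload() -> f32 {
    let mut temperature = 20.0f32;
    for _ in 0..1_000 {
        temperature += 0.1;
    }
    temperature
}

fn main() {
    let iterations: u32 = 1_000;
    let start = Instant::now();
    let mut last = 0.0;
    for _ in 0..iterations {
        last = workload();
    }
    // Duration divides by u32, giving the average time per call.
    let avg = start.elapsed() / iterations;
    println!("avg per call: {:?} (last = {:.1})", avg, last);
}
```

The absolute numbers depend on your machine; the point is the ratio between the two builds.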
Step 3.5: Introspect While Running
While your app is running, open a second terminal and peek inside:
```bash
# See all active topics
horus topic list
# Output:
#   temperature (f32) — 1 publisher, 1 subscriber

# Watch messages in real-time
horus topic echo temperature
# Output:
#   20.1
#   20.2
#   20.3
#   ...

# Check the publishing rate
horus topic hz temperature
# Output: average rate: 1.0 Hz

# See running nodes
horus node list
# Output:
#   NAME                 RATE   STATUS
#   TemperatureSensor    1Hz    Running
#   TemperatureMonitor   1Hz    Running
```
This works because HORUS topics live in shared memory — any process on the machine can inspect them, including the CLI tools. This is your primary debugging tool.
Step 3.6: What Just Happened
When you ran `horus run --release`, HORUS:
- Parsed `horus.toml` and compiled your Rust code via Cargo
- Created shared memory — a ring buffer at `/dev/shm/horus_<namespace>/topics/horus_temperature` for the "temperature" topic
- Initialized nodes — called `new()` on both structs, which opened the same SHM region via `Topic::new("temperature")`
- Started the tick loop — the scheduler calls `TemperatureSensor::tick()` then `TemperatureMonitor::tick()` in order, every cycle
- On Ctrl+C — sent SIGTERM, called `shutdown()` on both nodes, cleaned up SHM
The data flow: TemperatureSensor::tick() writes an f32 directly into the ring buffer (zero-copy, ~3ns). TemperatureMonitor::tick() reads it out. No serialization, no network, no kernel involvement.
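The ring-buffer handoff can be sketched in plain Rust. This is an illustrative single-threaded stand-in, not HORUS's actual buffer layout — the real one lives in `/dev/shm`, is shared across processes, and handles concurrent access and overwrite policy:

```rust
// Minimal ring-buffer sketch: a writer stores f32 readings into a fixed
// array and a reader drains them. Writes and reads are plain memory
// copies plus an index bump — no serialization step anywhere.
const CAPACITY: usize = 8;

struct RingBuffer {
    slots: [f32; CAPACITY],
    head: usize, // next write position
    tail: usize, // next read position
}

impl RingBuffer {
    fn new() -> Self {
        Self { slots: [0.0; CAPACITY], head: 0, tail: 0 }
    }

    // "send" is just a store into the next slot.
    // (Illustrative: a real buffer must handle the full/overwrite case.)
    fn send(&mut self, value: f32) {
        self.slots[self.head % CAPACITY] = value;
        self.head += 1;
    }

    // "recv" returns None when the buffer is empty, like Topic::recv().
    fn recv(&mut self) -> Option<f32> {
        if self.tail == self.head {
            return None;
        }
        let value = self.slots[self.tail % CAPACITY];
        self.tail += 1;
        Some(value)
    }
}

fn main() {
    let mut topic = RingBuffer::new();
    topic.send(20.1);
    topic.send(20.2);
    assert_eq!(topic.recv(), Some(20.1));
    assert_eq!(topic.recv(), Some(20.2));
    assert_eq!(topic.recv(), None);
    println!("ring buffer ok");
}
```

Because the payload is a fixed-size `f32`, moving it is a single store and a single load; that is the core reason the transfer needs no serialization.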
Step 4: Understand the Key Patterns
You just used three core HORUS concepts:
Topic — Communication Channel
```rust
// Both nodes create a Topic with the same name — simplified
publisher: Topic::new("temperature")?,   // returns HorusResult<Topic<f32>>
subscriber: Topic::new("temperature")?,
// ...
```
Topic::new() returns a HorusResult — the ? propagates errors if shared memory allocation fails. Any number of nodes can publish or subscribe to the same topic name.
Node Trait — Component Lifecycle
Every HORUS component implements the Node trait. Only tick() is required — all other methods have defaults:
```rust
// simplified
impl Node for MyNode {
    fn name(&self) -> &str { "MyNode" }             // identity
    fn tick(&mut self) { /* runs every cycle */ }   // required
    fn shutdown(&mut self) -> Result<()> { Ok(()) } // cleanup on Ctrl+C
}
```
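The "only `tick()` is required" behavior comes from Rust's default trait methods. A stripped-down stand-in trait (not the real HORUS trait — the names and defaults here are illustrative) shows the mechanism:

```rust
// Stand-in Node trait: tick() is the only required method; name() and
// shutdown() fall back to defaults if a node doesn't override them.
trait Node {
    fn tick(&mut self); // required

    fn name(&self) -> &str {
        "unnamed" // default identity
    }

    fn shutdown(&mut self) -> Result<(), String> {
        Ok(()) // default: nothing to clean up
    }
}

struct Counter {
    count: u32,
}

// Only tick() is implemented; the trait defaults cover the rest.
impl Node for Counter {
    fn tick(&mut self) {
        self.count += 1;
    }
}

fn main() {
    let mut node = Counter { count: 0 };
    node.tick();
    node.tick();
    assert_eq!(node.count, 2);
    assert_eq!(node.name(), "unnamed"); // came from the default method
    assert!(node.shutdown().is_ok());
    println!("defaults ok");
}
```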
Scheduler — Orchestration
The scheduler runs nodes in priority order each tick cycle:
```rust
// simplified
let mut scheduler = Scheduler::new();

// Scheduler auto-detects order from topic dependencies
scheduler.add(sensor).build()?;
scheduler.add(monitor).build()?;

scheduler.run()?; // loop until Ctrl+C
```
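The core of the loop can be sketched with trait objects ticked in order each cycle. The `Scheduler`, `Tracer`, and `run_cycle` names below are invented for this sketch; the real HORUS scheduler adds topic-based ordering, rates, and signal handling on top of this shape:

```rust
use std::cell::RefCell;
use std::rc::Rc;

trait Node {
    fn tick(&mut self);
}

// Stand-in scheduler: owns nodes as trait objects, ticks them in
// registration order once per cycle.
struct Scheduler {
    nodes: Vec<Box<dyn Node>>,
}

impl Scheduler {
    fn new() -> Self {
        Self { nodes: Vec::new() }
    }

    fn add(&mut self, node: Box<dyn Node>) {
        self.nodes.push(node);
    }

    // One cycle: every node gets exactly one tick, in order.
    fn run_cycle(&mut self) {
        for node in &mut self.nodes {
            node.tick();
        }
    }
}

// A node that records when it ran, so we can observe the ordering.
struct Tracer {
    label: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Node for Tracer {
    fn tick(&mut self) {
        self.log.borrow_mut().push(self.label);
    }
}

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    let mut scheduler = Scheduler::new();
    scheduler.add(Box::new(Tracer { label: "sensor", log: Rc::clone(&log) }));
    scheduler.add(Box::new(Tracer { label: "monitor", log: Rc::clone(&log) }));

    scheduler.run_cycle();
    scheduler.run_cycle();

    // Sensor always runs before monitor within each cycle.
    assert_eq!(*log.borrow(), vec!["sensor", "monitor", "sensor", "monitor"]);
    println!("scheduler ok");
}
```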
Step 5: Try the node! Macro (Optional)
The same two nodes can be written with the node! macro, which eliminates boilerplate:
```rust
use horus::prelude::*;

node! {
    TemperatureSensor {
        pub { publisher: f32 -> "temperature" }
        data { temperature: f32 = 20.0 }
        tick {
            self.temperature += 0.1;
            self.publisher.send(self.temperature);
            std::thread::sleep(std::time::Duration::from_secs(1));
        }
    }
}

node! {
    TemperatureMonitor {
        sub { subscriber: f32 -> "temperature" }
        tick {
            if let Some(temp) = self.subscriber.recv() {
                println!("Temperature: {:.1}°C", temp);
            }
        }
    }
}
```
The macro generates the struct, constructor, and Node trait implementation. Both approaches produce identical runtime behavior. See the node! Macro Guide for the full syntax.
Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| `Failed to create Topic` | Stale shared memory from a previous run | Usually auto-cleaned; if it persists, run `horus clean --shm` |
| Nothing prints | Monitor added but sensor missing | Ensure both nodes are added to the scheduler |
| `horus run` fails to build | Missing system dependencies | See Installation Step 1 |
| Output looks slow | Running in debug mode | Use `horus run --release` |
| Topic not in `topic list` | CLI is in a different SHM namespace | Run the CLI in the same terminal, or set a matching `HORUS_NAMESPACE` |
| Permission denied on shared memory | SHM directory permissions | Run `horus doctor` to diagnose |
| `Address already in use` | Another HORUS process still running | Check for running processes; if none, run `horus clean --shm` |
| `.build()` returns an error | Duplicate node name or invalid config | Check that each node has a unique name |
Python Equivalent
The same system in Python:
```python
import horus

temperature = 20.0

def sensor_tick(node):
    global temperature
    temperature += 0.1
    node.send("temperature", temperature)

def monitor_tick(node):
    temp = node.recv("temperature")
    if temp is not None:
        print(f"Temperature: {temp:.1f}°C")

sensor = horus.Node(name="sensor", pubs=["temperature"], tick=sensor_tick, rate=1, order=0)
monitor = horus.Node(name="monitor", subs=["temperature"], tick=monitor_tick, rate=1, order=1)

horus.run(sensor, monitor)
```
See Quick Start (Python) for a full walkthrough.
Graceful Shutdown
When you press Ctrl+C, HORUS:
- Catches the SIGTERM signal
- Calls `shutdown()` on each node in registration order
- Cleans up shared memory regions owned by this process
- Prints a timing report (total ticks, average tick duration)
Always zero actuators in shutdown() — if your node controls motors, send a stop command before exiting.
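That rule can be made concrete with a stub motor driver. The `MotorCommand` and `MotorDriver` types below are hypothetical (plain Rust, not HORUS API); the point is only the shutdown pattern:

```rust
// Hypothetical actuator command: velocity 0.0 means "stop".
struct MotorCommand {
    velocity: f32,
}

// Hypothetical driver that remembers the last command it sent,
// standing in for a node that publishes to a motor topic.
struct MotorDriver {
    last_command: MotorCommand,
}

impl MotorDriver {
    fn send(&mut self, cmd: MotorCommand) {
        self.last_command = cmd;
    }

    // On shutdown, explicitly command zero velocity so the motor
    // doesn't keep executing the last command after the process exits.
    fn shutdown(&mut self) {
        self.send(MotorCommand { velocity: 0.0 });
    }
}

fn main() {
    let mut driver = MotorDriver {
        last_command: MotorCommand { velocity: 0.0 },
    };
    driver.send(MotorCommand { velocity: 1.5 }); // node was driving
    driver.shutdown();                           // Ctrl+C path
    assert_eq!(driver.last_command.velocity, 0.0);
    println!("motor zeroed");
}
```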
Key Takeaways
- Nodes implement the `Node` trait — `tick()` runs every scheduler cycle
- Topics are named shared-memory channels — `send()` to publish, `recv()` to subscribe
- Scheduler orchestrates nodes in priority order with `.order(n)`
- `Topic::new("name")?` returns a `HorusResult` — always handle the error
- Multi-process communication works automatically: same topic name = same channel
Next Steps
- Quick Start (Python) — Build the same system in Python
- Second Application — Build a 3-node pipeline with filtering, monitoring, and watchdog
- Common Mistakes — Avoid the pitfalls that trip up every beginner
Beyond the Basics
You've seen Nodes, Topics, and the Scheduler. HORUS has much more — here's what to explore next:
| Feature | What it does | Guide |
|---|---|---|
| Execution classes | Run nodes as RT, compute-bound, event-driven, or async I/O | Execution Classes |
| Watchdog & safety | Detect frozen nodes, enforce deadlines, graduated degradation | Safety Monitor |
| BlackBox | Flight recorder for post-mortem crash analysis | BlackBox |
| Deterministic mode | Reproducible execution for simulation and CI | Deterministic Mode |
| Record & Replay | Tick-perfect replay for reproducing field bugs | Record & Replay |
| Fault tolerance | Per-node failure policies (restart, skip, fatal) | Circuit Breaker |
| Framework clock | horus::now(), dt(), budget_remaining() | Real-Time Systems |
| Progressive config | From prototype to production in 5 levels | Choosing Your Configuration |
See Also
- Nodes (Concept) — How nodes work under the hood
- Topic (Concept) — Shared memory architecture
- Scheduler (Concept) — Execution model and priority
- Examples — More sample applications