Quick Start

Prefer Python? See Quick Start (Python).

Prerequisites

  • HORUS installed — see the Installation guide

What You'll Build

A temperature monitoring system with two nodes:

  1. Sensor — generates temperature readings and publishes them
  2. Monitor — subscribes to the readings and displays them

The nodes communicate through HORUS's shared-memory Topics — no sockets, no serialization, no configuration.

Time estimate: ~10 minutes

Step 1: Create a New Project

horus new temperature-monitor

Select options in the interactive prompt:

  • Language: Rust
  • Use macros: No (we'll learn the basics first)

Then enter the project directory:

cd temperature-monitor

You should see three items in the project directory:

  • src/main.rs — your application code
  • horus.toml — project configuration (name, version, dependencies)
  • .horus/ — generated build files (managed automatically)

Step 2: Write the Sensor Node

Replace the contents of src/main.rs with the following:

use horus::prelude::*;
use std::time::Duration;

// ── Sensor Node ─────────────────────────────────────────────
// Publishes a simulated temperature reading every second.

struct TemperatureSensor {
    publisher: Topic<f32>,
    temperature: f32,
}

impl TemperatureSensor {
    fn new() -> Result<Self> {
        Ok(Self {
            publisher: Topic::new("temperature")?,
            temperature: 20.0,
        })
    }
}

impl Node for TemperatureSensor {
    fn name(&self) -> &str {
        "TemperatureSensor"
    }

    fn tick(&mut self) {
        self.temperature += 0.1;
        self.publisher.send(self.temperature);

        // WARNING: sleep() in tick() blocks the scheduler thread.
        // In production, use .rate(1_u64.hz()) on the node builder instead.
        std::thread::sleep(Duration::from_secs(1));
    }

    fn shutdown(&mut self) -> Result<()> {
        eprintln!("Sensor shutting down. Last reading: {:.1}°C", self.temperature);
        Ok(())
    }
}

// ── Monitor Node ────────────────────────────────────────────
// Subscribes to temperature readings and prints them.

struct TemperatureMonitor {
    subscriber: Topic<f32>,
}

impl TemperatureMonitor {
    fn new() -> Result<Self> {
        Ok(Self {
            subscriber: Topic::new("temperature")?,
        })
    }
}

impl Node for TemperatureMonitor {
    fn name(&self) -> &str {
        "TemperatureMonitor"
    }

    fn tick(&mut self) {
        if let Some(temp) = self.subscriber.recv() {
            println!("Temperature: {:.1}°C", temp);
        }
    }

    fn shutdown(&mut self) -> Result<()> {
        eprintln!("Monitor shutting down.");
        Ok(())
    }
}

// ── Main ────────────────────────────────────────────────────

fn main() -> Result<()> {
    eprintln!("Starting temperature monitoring system...\n");

    let mut scheduler = Scheduler::new();

    // The scheduler auto-detects order from topics:
    // sensor publishes "temperature", monitor subscribes → sensor runs first
    scheduler.add(TemperatureSensor::new()?).build()?;
    scheduler.add(TemperatureMonitor::new()?).build()?;

    // Run until Ctrl+C
    scheduler.run()?;

    Ok(())
}

Step 3: Run It

horus run --release

You should see output like:

Starting temperature monitoring system...

Temperature: 20.1°C
Temperature: 20.2°C
Temperature: 20.3°C
Temperature: 20.4°C
...

Press Ctrl+C to stop. You should see the shutdown messages from both nodes.

Debug vs Release: horus run without --release uses debug mode (~60-200μs per tick). With --release, tick times drop to ~1-3μs. If you're thinking "HORUS is slow" — you're probably running in debug mode. Always use --release for performance testing.
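If you want to sanity-check tick cost yourself, std::time::Instant is enough. This plain-Rust sketch times a dummy workload standing in for a tick body (no HORUS involved — compare the numbers you get from a debug vs a --release build):

```rust
use std::time::Instant;

fn main() {
    // Stand-in for a tick body: some cheap arithmetic work.
    let tick_body = || {
        let mut acc = 0u64;
        for i in 0..10_000u64 {
            acc = acc.wrapping_add(i);
        }
        acc
    };

    // Time one "tick" — run this under both debug and --release
    // builds and compare.
    let start = Instant::now();
    let result = tick_body();
    let elapsed = start.elapsed();
    println!("tick took {:?} (result {})", elapsed, result);
}
```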

Step 3.5: Introspect While Running

While your app is running, open a second terminal and peek inside:

# See all active topics
horus topic list
# Output:
# temperature (f32) — 1 publisher, 1 subscriber

# Watch messages in real-time
horus topic echo temperature
# Output:
# 20.1
# 20.2
# 20.3
# ...

# Check the publishing rate
horus topic hz temperature
# Output: average rate: 1.0 Hz

# See running nodes
horus node list
# Output:
# NAME                  RATE   STATUS
# TemperatureSensor     1Hz    Running
# TemperatureMonitor    1Hz    Running

This works because HORUS topics live in shared memory — any process on the machine can inspect them, including the CLI tools. This is your primary debugging tool.

Step 3.6: What Just Happened

When you ran horus run --release, HORUS:

  1. Parsed horus.toml and compiled your Rust code via Cargo
  2. Created shared memory — a ring buffer at /dev/shm/horus_<namespace>/topics/horus_temperature for the "temperature" topic
  3. Initialized nodes — called new() on both structs, which opened the same SHM region via Topic::new("temperature")
  4. Started the tick loop — the scheduler calls TemperatureSensor::tick() then TemperatureMonitor::tick() in order, every cycle
  5. On Ctrl+C — sent SIGTERM, called shutdown() on both nodes, cleaned up SHM

The data flow: TemperatureSensor::tick() writes an f32 directly into the ring buffer (zero-copy, ~3ns). TemperatureMonitor::tick() reads it out. No serialization, no network, no kernel involvement.
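The mechanism can be illustrated without HORUS: a fixed-capacity ring buffer where the writer advances a head index and the reader follows a tail index, with values written directly into slots. This single-threaded plain-Rust sketch is ours for illustration — not the actual HORUS shared-memory internals:

```rust
// Minimal ring-buffer sketch (illustrative only — the real HORUS
// buffer lives in shared memory and is accessed by multiple processes).
struct Ring {
    buf: [f32; 8],
    head: usize, // next slot to write
    tail: usize, // next slot to read
}

impl Ring {
    fn new() -> Self {
        Ring { buf: [0.0; 8], head: 0, tail: 0 }
    }

    fn send(&mut self, value: f32) {
        // Write the value directly into a slot — no serialization step.
        self.buf[self.head % 8] = value;
        self.head += 1;
    }

    fn recv(&mut self) -> Option<f32> {
        if self.tail == self.head {
            return None; // nothing new published yet
        }
        let value = self.buf[self.tail % 8];
        self.tail += 1;
        Some(value)
    }
}

fn main() {
    let mut ring = Ring::new();
    ring.send(20.1);
    assert_eq!(ring.recv(), Some(20.1));
    assert_eq!(ring.recv(), None); // buffer drained
    println!("ring buffer ok");
}
```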

Step 4: Understand the Key Patterns

You just used three core HORUS concepts:

Topic — Communication Channel

// Both nodes create a Topic with the same name — simplified
publisher: Topic::new("temperature")?    // returns HorusResult<Topic<f32>>
subscriber: Topic::new("temperature")?
// ...

Topic::new() returns a HorusResult — the ? propagates errors if shared memory allocation fails. Any number of nodes can publish or subscribe to the same topic name.
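The ? here is ordinary Rust Result propagation — if construction fails, the error bubbles up to the caller instead of panicking. A plain-Rust illustration (open_channel below is a stand-in function of ours, not a HORUS API):

```rust
// Stand-in for a fallible constructor like Topic::new().
fn open_channel(name: &str) -> Result<String, String> {
    if name.is_empty() {
        return Err("channel name must not be empty".to_string());
    }
    Ok(format!("channel:{name}"))
}

fn setup() -> Result<String, String> {
    // `?` returns early from setup() with the error if open_channel fails.
    let ch = open_channel("temperature")?;
    Ok(ch)
}

fn main() {
    assert_eq!(setup().unwrap(), "channel:temperature");
    assert!(open_channel("").is_err()); // the failure case propagates
    println!("ok");
}
```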

Node Trait — Component Lifecycle

Every HORUS component implements the Node trait. Only tick() is required — all other methods have defaults:

// simplified
impl Node for MyNode {
    fn name(&self) -> &str { "MyNode" }      // identity
    fn tick(&mut self) { /* runs every cycle */ }  // required
    fn shutdown(&mut self) -> Result<()> { Ok(()) }  // cleanup on Ctrl+C
}
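The "only tick() is required" design is ordinary Rust default trait methods. This plain-Rust sketch uses a trait of our own to show the pattern — it is not the real HORUS Node definition:

```rust
// A trait where one method is required and the rest have defaults.
trait Node {
    fn tick(&mut self); // required

    fn name(&self) -> &str {
        "unnamed" // default identity
    }

    fn shutdown(&mut self) -> Result<(), String> {
        Ok(()) // default no-op cleanup
    }
}

struct Counter {
    count: u32,
}

// Only tick() is written out; name() and shutdown() fall back to
// the trait's default implementations.
impl Node for Counter {
    fn tick(&mut self) {
        self.count += 1;
    }
}

fn main() {
    let mut c = Counter { count: 0 };
    c.tick();
    assert_eq!(c.count, 1);
    assert_eq!(c.name(), "unnamed");
    assert!(c.shutdown().is_ok());
    println!("ok");
}
```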

Scheduler — Orchestration

The scheduler runs nodes in priority order each tick cycle:

// simplified
let mut scheduler = Scheduler::new();

// Scheduler auto-detects order from topic dependencies
scheduler.add(sensor).build()?;
scheduler.add(monitor).build()?;

scheduler.run()?;   // loop until Ctrl+C
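Conceptually, the tick loop is just iteration over registered nodes in order, every cycle. A plain-Rust sketch of that idea (our own minimal trait and loop, not the real scheduler):

```rust
// Minimal "scheduler" sketch: nodes ticked in registration order.
trait Node {
    fn name(&self) -> &'static str;
    fn tick(&mut self, log: &mut Vec<&'static str>) {
        log.push(self.name()); // record who ran, and when
    }
}

struct Sensor;
struct Monitor;

impl Node for Sensor {
    fn name(&self) -> &'static str { "sensor" }
}
impl Node for Monitor {
    fn name(&self) -> &'static str { "monitor" }
}

fn main() {
    // Registration order doubles as run order in this sketch.
    let mut nodes: Vec<Box<dyn Node>> = vec![Box::new(Sensor), Box::new(Monitor)];
    let mut log = Vec::new();
    for _cycle in 0..2 {
        for node in nodes.iter_mut() {
            node.tick(&mut log);
        }
    }
    // The sensor always runs before the monitor within a cycle.
    assert_eq!(log, ["sensor", "monitor", "sensor", "monitor"]);
    println!("tick order: {:?}", log);
}
```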

Step 5: Try the node! Macro (Optional)

The same two nodes can be written with the node! macro, which eliminates boilerplate:

use horus::prelude::*;

node! {
    TemperatureSensor {
        pub { publisher: f32 -> "temperature" }
        data { temperature: f32 = 20.0 }
        tick {
            self.temperature += 0.1;
            self.publisher.send(self.temperature);
            std::thread::sleep(std::time::Duration::from_secs(1));
        }
    }
}

node! {
    TemperatureMonitor {
        sub { subscriber: f32 -> "temperature" }
        tick {
            if let Some(temp) = self.subscriber.recv() {
                println!("Temperature: {:.1}°C", temp);
            }
        }
    }
}

The macro generates the struct, constructor, and Node trait implementation. Both approaches produce identical runtime behavior. See the node! Macro Guide for the full syntax.

Troubleshooting

| Symptom | Cause | Fix |
|---|---|---|
| Failed to create Topic | Stale shared memory from a previous run | Usually auto-cleaned — if it persists, run horus clean --shm |
| Nothing prints | Monitor added but sensor missing | Ensure both nodes are added to the scheduler |
| horus run fails to build | Missing system dependencies | See Installation Step 1 |
| Output looks slow | Running in debug mode | Use horus run --release |
| Topic not in topic list | CLI is in a different SHM namespace | Run the CLI in the same terminal, or set a matching HORUS_NAMESPACE |
| Permission denied on shared memory | SHM directory permissions | Run horus doctor to diagnose |
| Address already in use | Another HORUS process still running | Check for running processes; if none, horus clean --shm |
| .build() returns error | Duplicate node name or invalid config | Check that each node has a unique name |

Python Equivalent

The same system in Python:

import horus

temperature = 20.0

def sensor_tick(node):
    global temperature
    temperature += 0.1
    node.send("temperature", temperature)

def monitor_tick(node):
    temp = node.recv("temperature")
    if temp is not None:
        print(f"Temperature: {temp:.1f}°C")

sensor = horus.Node(name="sensor", pubs=["temperature"], tick=sensor_tick, rate=1, order=0)
monitor = horus.Node(name="monitor", subs=["temperature"], tick=monitor_tick, rate=1, order=1)

horus.run(sensor, monitor)

See Quick Start (Python) for a full walkthrough.

Graceful Shutdown

When you press Ctrl+C, HORUS:

  1. Catches the SIGTERM signal
  2. Calls shutdown() on each node in registration order
  3. Cleans up shared memory regions owned by this process
  4. Prints a timing report (total ticks, average tick duration)

Always zero actuators in shutdown() — if your node controls motors, send a stop command before exiting.
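Independent of HORUS, the pattern is: the last message a motor-driving node sends is an explicit zero. A plain-Rust sketch with a stand-in publisher (MockPublisher and MotorNode are ours, not HORUS types):

```rust
// Stand-in for a topic publisher that records what was sent.
struct MockPublisher {
    sent: Vec<f32>,
}

impl MockPublisher {
    fn send(&mut self, v: f32) {
        self.sent.push(v);
    }
}

struct MotorNode {
    publisher: MockPublisher,
}

impl MotorNode {
    fn shutdown(&mut self) -> Result<(), String> {
        // Zero the actuator before exiting — never leave a motor running.
        self.publisher.send(0.0);
        Ok(())
    }
}

fn main() {
    let mut node = MotorNode {
        publisher: MockPublisher { sent: vec![0.5, 0.7] }, // prior commands
    };
    node.shutdown().unwrap();
    // The final command on the wire is the stop command.
    assert_eq!(node.publisher.sent.last(), Some(&0.0));
    println!("motor zeroed on shutdown");
}
```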

Key Takeaways

  • Nodes implement the Node trait — tick() runs every scheduler cycle
  • Topics are named shared-memory channels — send() to publish, recv() to subscribe
  • Scheduler orchestrates nodes each cycle — order is auto-detected from topic dependencies, or set explicitly with .order(n)
  • Topic::new("name")? returns a HorusResult — always handle the error
  • Multi-process communication works automatically: same topic name = same channel

Next Steps

Beyond the Basics

You've seen Nodes, Topics, and the Scheduler. HORUS has much more — here's what to explore next:

| Feature | What it does | Guide |
|---|---|---|
| Execution classes | Run nodes as RT, compute-bound, event-driven, or async I/O | Execution Classes |
| Watchdog & safety | Detect frozen nodes, enforce deadlines, graduated degradation | Safety Monitor |
| BlackBox | Flight recorder for post-mortem crash analysis | BlackBox |
| Deterministic mode | Reproducible execution for simulation and CI | Deterministic Mode |
| Record & Replay | Tick-perfect replay for reproducing field bugs | Record & Replay |
| Fault tolerance | Per-node failure policies (restart, skip, fatal) | Circuit Breaker |
| Framework clock | horus::now(), dt(), budget_remaining() | Real-Time Systems |
| Progressive config | From prototype to production in 5 levels | Choosing Your Configuration |

See Also