# Coming from ROS2

If you have experience with ROS2, you already know most of the concepts in HORUS. This guide maps what you know to how HORUS does it, highlights the architectural differences, and shows code side-by-side.

## Concept Mapping

| ROS2 | HORUS | Notes |
|------|-------|-------|
| Node | Node trait | Same concept; implement tick() instead of callbacks |
| Publisher / Subscriber | Topic (send/recv) | Named channels, zero-copy via SHM |
| Service | Service | Request/response, same pattern |
| Action | Action | Long-running tasks with feedback |
| tf2 | TransformFrame | tf / tf_static topics, tree lookups |
| Parameter Server | RuntimeParams | Per-node typed parameters |
| Launch file | Scheduler | Single process, all nodes in one scheduler |
| rqt / Foxglove | Monitor | Built-in web dashboard + TUI |
| rosbag | Record / Replay | Topic recording and playback |
| QoS profiles | | Not yet available |
| Lifecycle node | Node trait | init() / shutdown() methods on every node |
| DDS middleware | SHM IPC | No middleware layer, sub-microsecond latency |
| colcon build | horus build | Single manifest (horus.toml), no CMake |
| ros2 topic echo | horus topic echo | Same idea, different CLI |

## Architecture Differences

### ROS2: Multi-Process, Callback-Based

In ROS2, each node is typically its own OS process. Nodes communicate over DDS (a network middleware), and you write callbacks that fire when messages arrive. Launch files coordinate which processes to start.

*Diagram: each ROS2 node is a separate process, communicating over DDS middleware.*

### HORUS: Single-Process, Tick-Based

In HORUS, all nodes live in one process. The scheduler calls each node's tick() in a deterministic order every cycle. Nodes communicate through shared-memory topics with zero-copy reads.

*Diagram: all HORUS nodes in one process, deterministic tick order, zero-copy SHM.*

### Why Tick-Based Matters for Real-Time

| Property | ROS2 Callbacks | HORUS Ticks |
|----------|----------------|-------------|
| Execution order | Non-deterministic | Deterministic (.order()) |
| Timing jitter | Depends on DDS, OS scheduling | Bounded by scheduler budget |
| Deadline enforcement | Manual (timers) | Built-in (.deadline(), .on_miss()) |
| Thread safety | You manage mutexes | Single-threaded tick, no locks needed |
| Latency | Microseconds to milliseconds (DDS) | Sub-microsecond (SHM) |

### Cross-Process Communication

HORUS nodes can still talk across processes. SHM topics are visible to any process on the same machine. You simply run two schedulers that share the same topic names — no DDS required.

## Code Comparison

Here is the same motor controller node in ROS2 C++ and HORUS Rust.

### ROS2 C++

```cpp
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/imu.hpp>
#include <geometry_msgs/msg/twist.hpp>

using namespace std::chrono_literals; // needed for the 10ms literal below

class MotorNode : public rclcpp::Node {
public:
  MotorNode() : Node("motor") {
    sub_ = create_subscription<sensor_msgs::msg::Imu>(
      "imu", 10, [this](sensor_msgs::msg::Imu::SharedPtr msg) {
        last_imu_ = *msg;
      });
    pub_ = create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);
    timer_ = create_wall_timer(10ms, [this]() { tick(); });
  }

private:
  void tick() {
    geometry_msgs::msg::Twist cmd;
    cmd.linear.x = compute_speed(last_imu_);
    pub_->publish(cmd);
  }

  rclcpp::Subscription<sensor_msgs::msg::Imu>::SharedPtr sub_;
  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
  sensor_msgs::msg::Imu last_imu_;
};

int main(int argc, char** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<MotorNode>());
  rclcpp::shutdown();
  return 0;
}
```

### HORUS Rust

```rust
use horus::prelude::*;

struct MotorNode {
    imu_sub: Topic<Imu>,
    cmd_pub: Topic<Twist>,
}

impl MotorNode {
    fn new() -> Result<Self> {
        Ok(Self {
            imu_sub: Topic::new("imu")?,
            cmd_pub: Topic::new("cmd_vel")?,
        })
    }
}

impl Node for MotorNode {
    fn name(&self) -> &str { "motor_node" }

    fn tick(&mut self) {
        if let Some(imu) = self.imu_sub.recv() {
            let cmd = Twist::default(); // compute from IMU
            self.cmd_pub.send(cmd);
        }
    }
}

fn main() -> Result<()> {
    let mut scheduler = Scheduler::new();
    scheduler.add(MotorNode::new()?)
        .order(0)
        .rate(100_u64.hz())
        .build()?;
    scheduler.run()
}
```

Key differences:

- No callback boilerplate — tick() reads and writes directly
- Rate is set on the scheduler, not via a timer
- No SharedPtr, no mutex — the scheduler guarantees single-threaded access
- Scheduler::run() replaces rclcpp::spin()

## Message Type Mapping

| ROS2 Message | HORUS Type | Module |
|--------------|------------|--------|
| sensor_msgs/Imu | Imu | horus::messages |
| sensor_msgs/LaserScan | LaserScan | horus::messages |
| sensor_msgs/Image | Image | horus::memory |
| sensor_msgs/JointState | JointState | horus::messages |
| sensor_msgs/PointCloud2 | PointCloud | horus::memory |
| geometry_msgs/Twist | Twist | horus::messages |
| geometry_msgs/Pose | Pose3D | horus::messages |
| geometry_msgs/Transform | TFMessage | horus::transform_frame |
| nav_msgs/Odometry | Odometry | horus::messages |
| std_msgs/String | String | Rust stdlib |
| std_msgs/Bool | bool | Rust stdlib |
| std_msgs/Float64 | f64 | Rust stdlib |

## What HORUS Adds Over ROS2

**Zero-copy SHM.** Topics use shared memory by default. Readers get a direct pointer to the data — no serialization, no copy. This gives sub-microsecond publish-to-read latency.

**Deterministic mode.** The scheduler can run in lockstep with a simulation clock. Every tick produces identical results given the same inputs. This is critical for sim-to-real transfer.

**Built-in safety monitor.** Every node has a watchdog. If a node exceeds its deadline, the scheduler can warn, skip the node, reduce its rate, or trigger a safe-state shutdown — all configured per-node via .on_miss().

**Auto-RT detection.** Set .rate() or .budget() on a node and HORUS automatically classifies it as real-time. No need to manually configure thread priorities or scheduling policies.

**Single-file configuration.** One horus.toml replaces package.xml, CMakeLists.txt, setup.py, and launch files. Dependencies, scripts, and node configuration all live in one place.
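
A hypothetical horus.toml might look like the sketch below; the actual key names and sections may differ:

```toml
# Hypothetical horus.toml sketch; real key names and layout may differ.
[package]
name = "motor_demo"
version = "0.1.0"

[dependencies]
horus = "0.1"

# Per-node configuration that a launch file would otherwise carry.
[nodes.motor]
order = 0
rate_hz = 100
```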

## What HORUS Doesn't Have Yet

**Multi-machine networking.** HORUS currently runs on a single machine. SHM topics do not cross network boundaries. For multi-machine setups, you would need a custom bridge.

**Visualization (rviz equivalent).** There is no 3D visualization tool like rviz. The Monitor provides metrics dashboards but not scene rendering.

**Bag file format.** Record/Replay works but uses an internal format. There is no equivalent to the rosbag2 format or interoperability with ROS2 bags.

**QoS profiles.** There is no quality-of-service configuration for topics (reliability, durability, history depth). Topics are currently best-effort with configurable buffer sizes.

**Ecosystem breadth.** ROS2 has thousands of community packages. HORUS is younger and has a smaller library of pre-built drivers and algorithms. Check the HORUS Registry for available packages.

## Migration Checklist

If you are porting a ROS2 project to HORUS:

1. **Map your nodes.** Each ROS2 node becomes a struct implementing the Node trait.
2. **Replace callbacks with tick().** Read all inputs at the top of tick(), compute, then publish outputs.
3. **Convert message types.** Use the mapping table above. Custom messages become Rust structs.
4. **Replace launch files.** Build your scheduler in main() with .add() calls.
5. **Replace package.xml + CMakeLists.txt.** Write one horus.toml.
6. **Replace tf2 with TransformFrame.** Same tree semantics; publish to the tf / tf_static topics.
7. **Test with tick_once().** HORUS supports single-tick execution for deterministic unit tests.