EmbodiedAgents

A production-grade framework for deploying Physical AI on real-world robots.

Create interactive, physical agents that do not just chat, but understand, move, manipulate, and adapt to their environment.

Production Ready - Designed for autonomous systems in dynamic environments. Provides an orchestration layer for Adaptive Intelligence, making Physical AI simple to deploy.

Self-Referential - Create agents that can start, stop, or reconfigure their components based on internal or external events. Trivially switch from cloud to local ML, or swap planners based on location or vision input. Make agents self-referential Gödel machines.
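
For example, falling back from a cloud model to an on-robot model can be expressed as an event hook on a component. The sketch below is illustrative only: the imports mirror the project's documented layout, but `GenericHTTPClient`, `Action`, `on_fail`, and `reconfigure` as used here are assumptions, not confirmed API, and model/client names may differ across versions.

```python
# Illustrative sketch only: `on_fail`, `Action`, and `reconfigure` are
# assumed names for the framework's fallback mechanism, not confirmed API.
from agents.components import MLLM
from agents.clients import GenericHTTPClient, OllamaClient
from agents.models import Llava
from agents.ros import Action

cloud = GenericHTTPClient(Llava(name="llava"))  # hosted model client (assumed)
local = OllamaClient(Llava(name="llava"))       # on-robot model client (assumed)

# Topic wiring omitted for brevity.
vqa = MLLM(model_client=cloud, component_name="vqa")

# If the cloud endpoint fails (e.g. network loss), reconfigure the running
# component to use the local client instead of taking the agent down.
vqa.on_fail(Action(vqa.reconfigure, model_client=local))
```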

Spatio-Temporal Memory - Provides embodiment primitives like a hierarchical spatio-temporal memory and semantic routing to build arbitrarily complex graphs for agentic information flow. No need to run bloated “GenAI” frameworks on your robot.
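
As an illustration of semantic routing, incoming text can be dispatched to different components based on embedding similarity against sample phrases. A minimal sketch, assuming `SemanticRouter` and `Route` in the style of the project's docs (exact signatures may differ, and a vector-DB client argument may also be required):

```python
# Sketch of semantic routing; names follow the project's documented style,
# but exact signatures are assumptions.
from agents.components import SemanticRouter
from agents.ros import Route, Topic

user_text = Topic(name="text_in", msg_type="String")
nav_in = Topic(name="goto_in", msg_type="String")
vqa_in = Topic(name="vqa_in", msg_type="String")

# Each route carries sample utterances; inputs are routed to whichever
# route's samples they are most similar to in embedding space.
nav_route = Route(routes_to=nav_in,
                  samples=["Go to the kitchen", "Navigate to the door"])
vqa_route = Route(routes_to=vqa_in,
                  samples=["What do you see?", "Describe the scene"])

router = SemanticRouter(
    inputs=[user_text],
    routes=[nav_route, vqa_route],
    default_route=vqa_route,   # fallback when nothing matches well
    component_name="router",
)
```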

Pure Python, Native ROS2 - Define complex asynchronous graphs in standard Python without touching XML launch files. Yet underneath it is pure ROS2, compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
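
A minimal end-to-end example, adapted from the style of the project's quickstart (exact imports and signatures may differ across versions), shows what such a graph looks like: plain Python, launched directly as ROS2 nodes.

```python
# Minimal agent graph in plain Python; adapted from the style of the
# project's quickstart, so exact names may differ across versions.
from agents.clients.ollama import OllamaClient
from agents.components import MLLM
from agents.models import Llava
from agents.ros import Launcher, Topic

# Topics map directly onto ROS2 topics with standard message types.
text0 = Topic(name="text0", msg_type="String")
image0 = Topic(name="image_raw", msg_type="Image")
text1 = Topic(name="text1", msg_type="String")

# A multimodal-LLM component: triggered by incoming text, it consumes the
# latest camera frame and publishes the model's answer.
vqa = MLLM(
    inputs=[text0, image0],
    outputs=[text1],
    model_client=OllamaClient(Llava(name="llava")),
    trigger=[text0],
    component_name="vqa",
)

# Bring the graph up as ROS2 nodes; no XML launch files involved.
launcher = Launcher()
launcher.add_pkg(components=[vqa])
launcher.bringup()
```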

Get Started

Installation - Set up EmbodiedAgents on your system.

Quickstart - Launch your first embodied agent in minutes.

Basic Concepts - Learn the core building blocks of the framework (see Components).

Foundation Recipes - Explore basic agent recipes and get introduced to system components (see Foundation Recipes Overview).

Planning and Control - Learn to use task-specific VLMs for planning and VLAs for manipulation control (see Embodied Planning & Control Overview).

AI-Assisted Coding - Get the llms.txt for your coding agent and let it write the recipes for you.

Contributions

EmbodiedAgents is developed as a collaboration between Automatika Robotics and Inria. Contributions from the community are most welcome.