EmbodiedAgents
A production-grade framework for deploying Physical AI on real-world robots.
Create interactive physical agents that do not just chat but understand, move, manipulate, and adapt to their environment.
Production Ready - Designed for autonomous systems operating in dynamic environments. It provides an orchestration layer for Adaptive Intelligence, making Physical AI simple to deploy.
Self-Referential - Create agents that can start, stop, or reconfigure their components based on internal or external events (see the event/action sketch below). Trivially switch from cloud to local ML, or switch planners based on location or vision input. Make agents self-referential Gödel machines.
Spatio-Temporal Memory - Provides embodiment primitives, like a hierarchical spatio-temporal memory and semantic routing, to build arbitrarily complex graphs for agentic information flow. No need for bloated “GenAI” frameworks on your robot.
Pure Python, Native ROS2 - Define complex asynchronous graphs in standard Python without touching XML launch files (see the quickstart sketch below). Yet underneath it is pure ROS2, compatible with the entire ecosystem of hardware drivers, simulation tools, and visualization suites.
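To make the Python-first claim concrete, here is a minimal quickstart-style sketch of a single-component graph: a multimodal LLM answering questions about camera images. It is loosely modeled on the project's quickstart, so treat the exact names and signatures (Topic, OllamaModel, OllamaClient, MLLM, Launcher) as assumptions to verify against the current docs.

```python
from agents.clients.ollama import OllamaClient
from agents.components import MLLM
from agents.models import OllamaModel
from agents.ros import Launcher, Topic

# Input and output topics; these are plain ROS2 topics under the hood.
text_in = Topic(name="text0", msg_type="String")
image_in = Topic(name="image0", msg_type="Image")
text_out = Topic(name="text1", msg_type="String")

# A model client; here a locally served multimodal LLM via Ollama.
llava = OllamaModel(name="llava", checkpoint="llava")
llava_client = OllamaClient(llava)

# A component wraps one capability of the agent as a ROS2 node.
vqa = MLLM(
    inputs=[text_in, image_in],
    outputs=[text_out],
    model_client=llava_client,
    trigger=[text_in],
    component_name="vqa",
)

# Bring up the graph: no XML launch files involved.
launcher = Launcher()
launcher.add_pkg(components=[vqa])
launcher.bringup()
```

Because each component runs as an ordinary ROS2 node, the usual ROS2 tooling can introspect and visualize the running graph.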
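The self-referential switching described above is expressed through events and actions, a layer EmbodiedAgents inherits from the underlying ROS Sugar package. The following event/action sketch is illustrative only: OnEqual, Action, the events_actions wiring, and the cloud_llm/local_llm components are hypothetical stand-ins, not verbatim API.

```python
from agents.ros import Launcher, Topic
# Assumption: event/action classes are importable like this; the real
# import path lives in the ROS Sugar event/action layer.
from agents.ros import Action, OnEqual

# cloud_llm and local_llm: two MLLM components (one backed by a cloud
# client, one by a local client), constructed as in the sketch above.

# Hypothetical event: fires when a connectivity monitor publishes 0.
went_offline = OnEqual(
    "went_offline",                              # event name
    Topic(name="connectivity", msg_type="Int"),  # topic to watch
    0,                                           # value that triggers the event
    "data",                                      # message attribute to compare
)

# Hypothetical actions: stop the cloud-backed component, start the local one.
stop_cloud = Action(method=cloud_llm.stop)
start_local = Action(method=local_llm.start)

launcher = Launcher()
launcher.add_pkg(
    components=[cloud_llm, local_llm],
    events_actions={went_offline: [stop_cloud, start_local]},
)
launcher.bringup()
```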
Get Started
- Set up EmbodiedAgents on your system
- Launch your first embodied agent in minutes
- Learn the core building blocks of the framework
- Explore basic agent recipes and get introduced to system components
- Learn to use task-specific VLMs for planning and VLAs for manipulation control
- Get the llms.txt for your coding agent and let it write the recipes for you
Contributions
EmbodiedAgents has been developed as a collaboration between Automatika Robotics and Inria. Contributions from the community are most welcome.