This is amazing! As someone who works with industrial robots, which normally operate under strict environmental constraints and control, seeing this kind of real-world robotics progress truly excites me about the future!
By the way, they’ve open-sourced their π0 model (code and model weights). More information can be found here: https://github.com/Physical-Intelligence/openpi
I'm genuinely asking (not trying to be snarky)... Why are these robots so slow?
Is it a throughput constraint given too much data from the environment sensors?
Is it processing the data?
I'm curious about where the bottleneck is.
Amazing! On a fun note, I believe if a human kid were cleaning up the spill and threw the sponge into the sink like that, the kid would be in trouble. XD
These variable-length arrays are getting quite advanced
Is the robot platform they're using something they've developed themselves? The paper doesn't seem to mention any details outside of sensors and actuators.
VLA = vision-language-action, a kind of machine learning model
I'm just a layman, but I can't see this design scaling. It's way too slow and "hard" for fine motor tasks like cleaning up a kitchen or being anywhere around humans, really.
I think the future is in a "softer" type of robot that can sense whether its fingers are pushing a cabinet door (or facing resistance) and adjust accordingly (a toy sketch of that idea follows below). A quick Google search turns up this example (animated render), which is closer to what I imagine the ultimate solution will be: https://compliance-robotics.com/compliance-industry/
Human flesh is way too squishy for us to allow hard tools to interface with it, unless the human is in control. The difference between a blunt weapon and the robot from TFA is that the latter is very slow and on wheels.
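For what it's worth, the "sense resistance and back off" behavior described above is roughly what compliance/admittance control does. Here's a toy 1-D sketch of the idea; the door-as-spring model, constants, and function names are all made up for illustration, not any particular robot's API:

```python
# Toy 1-D simulation: a gripper finger moves toward a cabinet door that
# behaves like a stiff spring once contact is made at x = 0.10 m.
DOOR_POSITION = 0.10    # m: where contact with the door begins
DOOR_STIFFNESS = 800.0  # N/m: how hard the door pushes back
FORCE_LIMIT = 5.0       # N: yield once contact force exceeds this
NOMINAL_SPEED = 0.05    # m/s: speed in free space
GAIN = 0.01             # (m/s)/N: how strongly measured force slows the finger
DT = 0.001              # s: 1 kHz control loop

def contact_force(x: float) -> float:
    """Spring-like resistance from the door (stands in for a force sensor)."""
    return max(0.0, x - DOOR_POSITION) * DOOR_STIFFNESS

def step(x: float) -> float:
    """One admittance-control step: commanded velocity drops as force builds."""
    f = contact_force(x)
    v = NOMINAL_SPEED - GAIN * f   # comply: more resistance -> slower approach
    if f > FORCE_LIMIT:
        v = min(v, 0.0)            # never keep pushing past the force limit
    return x + v * DT

x = 0.0
for _ in range(5000):
    x = step(x)
print(f"settled at x = {x:.4f} m, contact force = {contact_force(x):.2f} N")
```

The point of the loop is that the finger settles at a small, bounded contact force instead of driving through the obstacle, which is the basic property you'd want around squishy humans.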
> Investors
>
> We are grateful for the support of Bond, Jeff Bezos, Khosla Ventures, Lux Capital, OpenAI, Redpoint Ventures, Sequoia Capital, and Thrive Capital.
Do the general laws of demos apply here? That any automation shown is the extent of the capabilities, not the start?
Most of it is open source. Their VLAs are built on Gemma models plus vision encoders, plus their own action experts. You can download and play around with or fine-tune their π0 VLAs directly from their servers (JAX format) or from the Hugging Face LeRobot safetensors port. They also have notebooks and code in their repo to get you started with fine-tuning. Inference runs on a single RTX 4090 and is streamed over WiFi to the robot.
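To make the "download and run inference" part concrete, here's a rough sketch of loading a pretrained π0 checkpoint and querying it with the openpi code. The module paths, config name, and checkpoint URL follow my memory of their README and may not match the current repo exactly, and the observation keys depend on which robot config you pick, so treat this as approximate rather than the literal API:

```python
import numpy as np

# Imports follow the structure of the openpi repo's example code;
# exact module paths may differ slightly from the current README.
from openpi.training import config as openpi_config
from openpi.policies import policy_config
from openpi.shared import download

# Pick one of the released configs (e.g. a DROID-finetuned pi0 variant)
# and pull the matching checkpoint from their asset server.
cfg = openpi_config.get_config("pi0_fast_droid")
checkpoint_dir = download.maybe_download("gs://openpi-assets/checkpoints/pi0_fast_droid")

# Wrap the checkpoint in a ready-to-query policy object.
policy = policy_config.create_trained_policy(cfg, checkpoint_dir)

# Build a dummy observation. The real key names and shapes come from the
# chosen robot config; these are illustrative placeholders only.
example = {
    "observation/exterior_image_1_left": np.zeros((224, 224, 3), dtype=np.uint8),
    "observation/wrist_image_left": np.zeros((224, 224, 3), dtype=np.uint8),
    "observation/joint_position": np.zeros(7, dtype=np.float32),
    "observation/gripper_position": np.zeros(1, dtype=np.float32),
    "prompt": "wipe up the spill with the sponge",
}

# The policy returns a chunk of future actions (an action horizon), which the
# robot-side client executes while the next chunk is being computed.
action_chunk = policy.infer(example)["actions"]
print(action_chunk.shape)
```

In their setup this policy process runs on the GPU machine and the robot talks to it over the network, which is the "streamed over WiFi" part.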