https://youtube.com/watch?v=MwIBTbumd1Q
Eight months ago he built a quadrupedal robot that could step sideways using three of them per leg. I'm not going to link that; you'll have to find it from his YouTube page, because you should look around.
During the COVID period, some Chinese companies even sold variants of actuators inspired by the Mini Cheetah design.
Aaed Musa has also mentioned in some of his videos that his actuator designs were inspired by the Mini Cheetah actuator. Yes, his capstan drive video is especially impressive.
For example, in Aaed Musa’s video "I Built a Rubik's Cube Solving Robot" (https://www.youtube.com/watch?v=m0bMMALYMYk), he states in the description that the design was inspired by Ben Katz’s work.
Ben Katz's master's thesis is worth reading: "A low cost modular actuator for dynamic robots" (2018) https://dspace.mit.edu/handle/1721.1/118671 and he also has a good post https://robot-daycare.com/posts/2019-12-16-the-mini-cheetah-...
And also The Rubik's Contraption (2018), 297 points, by the same author: https://news.ycombinator.com/item?id=16561049
Ben's vids were kind of mind-blowing for me at that time. I couldn't believe some of the control that was possible with relatively pedestrian electronics. Aaed's vids do a wonderful job of making it accessible in an applied way.
It's something I think a lot of the folks on HN would find interesting to tinker with. Nice mix of software and hardware that actually does work in physical reality. It also gives me a level of appreciation for the advances in humanoid robots that I don't think I would have had otherwise. (If you *do* get into it, I'd highly recommend getting into field oriented control with brushless motors and encoders. Small hobby servos are fun but they encapsulate a lot of the interesting parts and tend to have limited options available for things like the capstan vid linked above)
Too much gear reduction, and you can't back-drive or sense forces from the motor end. Too little gear reduction, and your motors are too bulky or too weak. Reflected inertia goes up as the square of the gear ratio, as the article points out, because the gear ratio gets you both coming and going. So high gear ratios really hurt.
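To put numbers on that square law, here's a quick sketch (all values are made up for illustration, not taken from the article):

```python
# Reflected inertia seen at the output grows as the square of the gear ratio:
#   J_reflected = N**2 * J_rotor
J_rotor = 1e-5   # kg*m^2, hypothetical small BLDC rotor
J_load = 1e-2    # kg*m^2, hypothetical leg link

for N in (1, 6, 10, 50):
    J_reflected = N**2 * J_rotor
    print(f"N={N:3d}:1  reflected rotor inertia = {J_reflected:.5f} kg*m^2 "
          f"({J_reflected / J_load:.1%} of load inertia)")
```

Even at a modest 10:1 the rotor already contributes 10% of the load inertia; at 50:1 it swamps the load entirely, which is why back-drivability dies at high ratios.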
Robots, like drones, need custom motors sized for the specific requirements of the joint. For a long time, the robotics industry was too tiny to get such custom motors engineered, and had to use motors designed for other purposes. This will become a non-problem as volume increases. Especially since 3-phase servomotor controllers, which drones need, are now small and cheap. They used to be the size of a paperback book or larger.
(I've been out of this for years. I've used hydraulic robots and R/C servo powered robots. The newer machinery sucks a lot less.)
The other side of the scaling laws says that motor torque scales as the square of the air gap radius (roughly the rotor radius), and output torque scales linearly with the gearing ratio.
When you balance these out, the reflected inertia depends on the inverse of power dissipated for a fixed output torque.
In an ideal world, your total reflected inertia is independent of the gearbox and largely depends on the motor fill factor and how hot you can run it.
If someone invents a new type of 'artificial muscle' which has low inertia, high force/torque density, and can work without overheating, that would instantly kill all other robotics companies.
The reason why muscles do not overheat as much as electric motors, despite lower efficiency, is because they have good liquid cooling, by blood. The cooling system of animals has outstanding reliability, due to self repair. Liquid cooling is also possible for electric motors, but it is usually avoided due to high cost and low reliability.
The weight problem of the electric motors is solved by keeping them in the body of the robot and transmitting the movement towards the moving parts that need low inertia by various means (ropes, cables, levers, hydraulic/pneumatic fluids).
This is also done with muscles, which frequently are far away from the bone that they are moving and the movement is transmitted by long thin tendons, to reduce the inertia of the moving limb.
I think only 1X is using tendon drives for all (?) joints in their robots; most robots use them just for hands. 1X uses tendons just for moving the motor one link up the chain, not through multiple joints to the body (the elbow motor is in the shoulder, etc.). Transmitting power from the body to remote joints with tendons is quite a hard problem, and I'm not aware of anyone doing it this way. One problem is that there is no obvious way to decouple the motion of tendons when they go through a joint; e.g. if the elbow joint moves, then it affects all tendons which go through the elbow. Also, it's rather complex, and there is friction, stretching, vibration, and so on, which might be hard to model / simulate.
Hydraulics might work, but Boston Dynamics gave up on it, so I think it's probably not worth it. Hydraulics could in theory be very good because it needs just one motor for the pump and the fluid can distribute the power to many actuators.
Moving the motors to the body is a good way to solve the mass / inertia problem, but no one has really figured out how to do it. Whatever 1X is doing might be the sweet spot.
If you have a load that is purely inertial, the optimum gear ratio (to minimize I-squared-R motor loss) is found by picking a gear ratio which matches the reflected inertia of load and motor. At this point, on every acceleration you put just as much energy into the rotor inertia as into the load inertia.
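That inertia-matching condition gives a closed form: N* = sqrt(J_load / J_rotor). A minimal sketch with made-up inertias:

```python
import math

# Inertia matching: choose N so that N**2 * J_rotor == J_load,
# i.e. N* = sqrt(J_load / J_rotor). This minimizes I^2*R loss
# for a purely inertial load.
J_rotor = 1e-5   # kg*m^2, made-up rotor inertia
J_load = 1e-2    # kg*m^2, made-up load inertia

N_opt = math.sqrt(J_load / J_rotor)
print(f"optimal ratio N* = {N_opt:.1f}:1")

# At N*, the motor "sees" equal inertia from rotor and reflected load,
# so each acceleration puts equal energy into both:
J_load_at_motor = J_load / N_opt**2
assert abs(J_load_at_motor - J_rotor) < 1e-12
```

For these numbers N* is about 31.6:1; any higher and you're mostly accelerating your own rotor, any lower and the motor torque (and hence current) needed to move the load balloons.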
In contrast, for a steady-state load which is all friction (e.g. a mixer such as for paint or food), a gear ratio which balances friction loss in the motor with the load friction will minimize the armature loss.
Most applications live between these points, and these optimizations ignore gearing losses, expense, and noise, but they can serve as guideposts.
There's also the issue of separating winding choice from gearing choice. For each candidate motor there exists an optimum gear ratio which will minimize the heat produced when driving a given load (friction and inertia) over a given velocity profile. That gearing can be found by trial and error in a simulation. These aren't crazy difficult simulations (can be done in a spreadsheet) but do need to take temperature dissipation and change of motor performance with temperature into account. Once that gearing is found, the V-I requirements of the motor at that gearing will be known and then winding adjusted to fit requirements of driver circuitry (i.e. trade current for voltage).
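That trial-and-error search is simple to sketch. Here's a minimal version for a single constant-acceleration move with a friction component, using made-up motor constants; it deliberately ignores the temperature effects a real simulation would need:

```python
# Brute-force sweep of gear ratio N to minimize copper (I^2*R) loss
# for one move, ignoring gearbox losses and thermal derating.
J_rotor = 1e-5    # kg*m^2, made-up rotor inertia
J_load = 1e-2     # kg*m^2, made-up load inertia
tau_fric = 0.5    # N*m, constant friction at the output
a_out = 50.0      # rad/s^2, output acceleration during the move
kt = 0.05         # N*m/A, torque constant
R = 0.3           # ohm, winding resistance

def copper_loss(N):
    # Motor-side torque: accelerate the rotor itself, plus the
    # load inertia and friction reflected back through the gearing.
    tau_motor = J_rotor * N * a_out + (J_load * a_out + tau_fric) / N
    current = tau_motor / kt
    return current**2 * R

best_N = min(range(1, 201), key=copper_loss)
print(f"best ratio ~{best_N}:1, loss {copper_loss(best_N):.2f} W")
```

Once the sweep picks a ratio, the torque and speed at that ratio fix the motor's V-I operating point, and the winding (turns count) can then be chosen to trade current for voltage as described above.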
The current "futurist" vision is one of humanoid robots taking over many/most jobs done by humans today, but - as someone who routinely hires human welders & assemblers - the dexterity required for most ad-hoc tasks seems many, many decades (if not more?) away from what I see robots do--yes, even the fancy Chinese jumping ones.
This has led me to think one of two things:
1. The robotics revolution will not come. It's predicated on the idea that advances in robotics will follow a curve of the same shape as advances in compute/ai, which will not happen. OR...
2. There has been some paradigm-shift or some breakthrough that has put robotics improvement on a new curve.
To an outsider, what I see in robots is not categorically different from, like, the Sony AIBO dog in 1999. It's significantly better of course, but is it really that different? (Whereas what we can do in compute-land today is categorically different because of the transformer model breakthrough.)
So:
1. Have there been any breakthroughs that would lead us to believe that a robot will be able to like, look under a table to adjust a screw?
2. What are the scaling laws & practical limits to present-day robotic dexterity? Is it materials? Energy density? What?
3. What is the real rate of improvement along these key dimensions? Are robots improving linearly? Geometrically? Exponentially?
4. Or should I keep discounting robotics until we get our first robots that are made of meat? That, I'd believe, would result in exponential change!
There are also endless welding and assembly robots, and have been for a long time. Sure, they're huge and weigh 3 tons or whatever, but it's not like we're building humanoid robots to do work like that anyway.
The human system weighs 250 lbs and can be placed anywhere. Let's ask what it takes to walk the factory robot in that direction. First, let's have the workpiece be moving, say on a conveyor belt. The old robotics way of thinking would be to introduce this variable into the programming of the bot/station, create simple sensors for either the workpiece or the conveyor itself to tell the programming loop where the part is with as little error as possible, and keep accuracy while maintaining as much precision as possible using rigidity (which equals mass and space). Now the whole system is functionally 7 DOF, and you add in the error and failure modes of the 7th DOF (the conveyor system) and accumulate some error. Now imagine that instead of a conveyor, the part is on a rolling table with random Z height, and so is the robot arm, and you can see this will fall apart: you can't fight this battle with deterministic programming, machining precision, and rigidity. Obviously, if you extended this system to a humanoid robot on a 3-legged ladder, which would be 30+ DOF between the weld and the ground, it couldn't possibly work.
But back to the hungover human: why does this system work so well? The human has very good eyes and a very good internal IMU. They are looking at the end of the filler rod and the weld pool, and even though the information coming through the scratched welding helmet isn't that good, they can compensate for all that error and run an internal function that holds the torch and filler rod in the optimum position to do a good TIG weld while ignoring or automatically adjusting for tons of other variables. Now to address your original question, in our system:
1. Are current cameras good enough to get an equivalent amount of information about the weld to what the hungover welder has? Yes; in fact they can get more information than a human can.
2. Are IMUs as good as what a hungover human has? Hard to really know, but it seems like it, though if you need many IMUs attached to different limbs on a robot it's probably not as good as a human yet.
3. Is the power density of actuators and power storage good enough to approximate this 250 lb system of a human on a ladder, with some combination of DOF that reaches a sufficient range of motion to emulate the human's hands (whether the robot looks like a human or not)? Yeah; plus in this case the welder is plugged into the ground for the human anyway, so that system is already attached to mains power.
So given all this, seems like the limiter is just software, which is the bull case for this prospected robotics revolution
Cooling in general is not a bad idea, to let you dissipate heat as you push motors to their saturation limits.
Plus, cryo temps require a lot of power to keep things cool, and copper and iron embrittle, so the forces acting on the windings could shatter them.
At cryo nobody sane is building robots unless the budget is a joke.
Still, I think this idea is under-explored. There are probably applications for robots that move really fast, but only for a second before having to cool down.
Net is that a couple of figures of merit limit the performance of electromagnetic systems, and being sneaky won't let you exceed them.
TL;DR: you need high current, meaning a lot of ohmic heating, with non-negligible back-EMF resulting in even more losses. Rotating motors essentially "lengthen" the travel of the "plunger" compared to linear motors.
There is such an insane amount of information richness in mammals and sensor specialization:
- Merkel discs
- Ruffini corpuscles
- Meissner corpuscles
- Pacinian corpuscles
- Muscle spindles
- Golgi tendon organs
- Nociceptors
And it's not just the biology-101 variety of sensor types; it's also the information density per square millimetre. It is orders of magnitude above anything that has ever been created with technology.
Once we leap above just Visual Servoing and start reaching sensory density and feature parity with human skin, joints, muscle, feet/hands, then we may start to see real breakthroughs.