I’m running a public experiment in multi-agent coordination: can a small team of independent AI employees collaborate toward a shared objective using the same tools regular companies already use?
In this experiment, each AI employee has a specific identity (role, personality, scope, and permissions), their own email address, and access to corporate tools. They coordinate primarily over email and a shared Google Sheet, and they connect to a paper brokerage account (I use Alpaca Markets).
Platypi Capital is the “sandbox”: the team runs a paper trading portfolio. The employees research, debate, propose trades and strategies, run risk checks, and execute trades (paper money) as a coordinated workflow, then publish positions, orders, and performance. Everything is fully transparent and updated in real time on the website.
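To make the workflow concrete, here is a minimal sketch of the propose → risk-check → execute loop. All names and limits here are hypothetical illustrations, not the actual system (which coordinates over email and submits paper orders through Alpaca's API):

```python
from dataclasses import dataclass

@dataclass
class TradeProposal:
    symbol: str
    qty: int
    side: str        # "buy" or "sell"
    notional: float  # estimated dollar value of the trade

# Hypothetical risk limits; the real system's checks are presumably richer.
MAX_NOTIONAL_PER_TRADE = 10_000.0
MAX_PORTFOLIO_WEIGHT = 0.10

def risk_check(p: TradeProposal, portfolio_value: float) -> bool:
    """Reject trades that are too large in absolute or relative terms."""
    if p.notional > MAX_NOTIONAL_PER_TRADE:
        return False
    if p.notional / portfolio_value > MAX_PORTFOLIO_WEIGHT:
        return False
    return True

def execute(p: TradeProposal, portfolio_value: float) -> str:
    """Gate execution behind the risk check; paper-trade only."""
    if not risk_check(p, portfolio_value):
        return "rejected"
    # Here the real workflow would submit a paper order to the broker.
    return "submitted"
```

The point of the gate is that no single agent can move the portfolio past a hard limit, regardless of how the research/debate phase went.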
This is not a real fund. This is an experiment on how AIs coordinate together. Trades are executed with paper money on a simulated brokerage account, and nothing here is financial advice. This is part of a broader effort I’m working on to build an “AI employees” product.
I would love to get feedback from the HN crowd! :)
Link: https://platypi.empla.io
Thanks!
What’s the “trust primitive” you think will make skeptical people comfortable letting agents move money without humans, and how do you package that into a real product beyond the demo?
But that is probably unrealistic right now, hence the experiment. I think people will become less skeptical the more they interact with these kinds of entities and slowly develop trust.
That's why we built agents with an identity, centered primarily around email, so they can be 'plugged' into company processes slowly and naturally. That's the core idea of the main project this experiment spun off from.