Bridgewater Had Believability Issues - https://news.ycombinator.com/item?id=38181360 - Nov 2023 (2 comments)
Everyone is encouraged to rate anyone else in a variety of categories, as often as possible. Every rating is public. You know who is rating you, and how they did it. Those ratings are put together to get a score in every category, and can be seen by anyone. It's your Baseball Card.
The problem is that not everyone is equally 'credible' in their rating. If I am bad at underwater basketweaving, my opinions on the matter are useless. But if I become good, suddenly my opinion is very important. You can imagine how, as one accumulates ratings, the system becomes unstable: my high credibility lowers someone else's credibility, which changes the rankings again. How many iterations do we run before we consider the results stable? Maybe there are two groups of people that massively disagree on a topic. One will end up with high credibility and the other with low, and that determines the final scores. Maybe the opinion of one random employee massively changes everyone else's scores.
So the first thing is that we judge whether an iteration is good by whether certain key people are rated highly, because any result that, say, says Ray is bad at critical thinking is obviously faulty. So ultimately, winners and losers on anything contentious are determined by fiat.
So then we have someone who is highly rated, and who is 100% aware of who is rating them badly. Do you really think it's safe to do that? I don't. Therefore, if you don't have very significant clout, your rating of people should be simple: go look at that person's baseball card, and rate something very similar in that category. Anything else is asking for trouble. You are supposed to be honest... but honestly, it's better to just agree with those who are credible.
So what you get is a system where, if you are smart, you mostly agree with the bosses... but not too much, so as to show that you are an independent thinker. And you'd better disagree only in places that don't matter too much.
If there's anything surprising, it's that the people involved in the initiative stayed on board as long as they did, because it's clear that the stated goals and the actual goals are completely detached from each other. It's unsurprising that it's not a great place to work, although the pay is very good.
Tangential to your point, but this is actually a solved problem, and you only have to run it once. You might even recognize the solution [1] [2].
[1]: https://en.wikipedia.org/wiki/PageRank [2]: https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
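A minimal sketch of that solution, assuming ratings form a non-negative matrix. The fixed point the parent comment worries about ("my credibility changes your credibility, which changes mine") is just eigenvector centrality, computed by power iteration, with PageRank-style damping so it converges. Illustrative code only, not anything Bridgewater actually ran:

```python
# Credibility as the stationary point of "your score = the ratings you
# receive, weighted by the raters' own scores" -- i.e. eigenvector
# centrality with PageRank-style damping.
import numpy as np

def credibility(ratings, damping=0.85, tol=1e-9, max_iter=1000):
    """ratings[i][j] = how highly person i rates person j (>= 0)."""
    R = np.asarray(ratings, dtype=float)
    n = R.shape[0]
    # Normalize each rater's row so everyone hands out one "vote" total.
    row_sums = R.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # non-raters keep a harmless row
    M = R / row_sums
    c = np.full(n, 1.0 / n)  # start with everyone equally credible
    for _ in range(max_iter):
        c_next = damping * (c @ M) + (1 - damping) / n
        if np.abs(c_next - c).sum() < tol:
            return c_next
        c = c_next
    return c

# Persons 0 and 1 both rate person 2 highly; person 2 rates person 0
# highly; person 1 gets little support -- so person 2 should come out
# on top, with person 0 second.
scores = credibility([[0, 1, 5],
                      [0, 0, 5],
                      [5, 1, 0]])
```

The damping term is what kills the instability the parent describes: without it, power iteration on an arbitrary ratings matrix can oscillate between "iterations" forever.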
It looks like Ray's fixation on "radical transparency" ignored the basics of human nature.
https://en.wikipedia.org/wiki/App_Development_and_Condiments
#SixSeasonsAndAMovie!
A) underfunded and understaffed
B) have no time and energy for niceties
All those in more comfy positions might be inclined to vote against them, while they probably don't vote at all because they've got no time for that shit.
As you said, if you let a pigeon vote on how good your chess moves were, you will get pigeon shit on the chessboard. A costly form of bureaucratic performance art, sans the art.
Most folks just didn't participate and it faded away pretty fast.
It would be so cool if someone could apply game theory, in terms mortals can digest, to these systems and show how to make them more "fair". It may be an impossible task, but it seems worthy of exploration.
I consider this a hard, but clearly not impossible, task. What I consider nigh impossible is getting a good definition of what we actually want to achieve by creating these systems. That is where, in my opinion, the true difficulties lie.
Just to give some shallow examples:
Do we want this purpose to be quite fixed (say, for decades) and create a system around it? Then it will be quite hard to change the ship's direction if some event changes the economic environment a lot.
On the other hand: do you want the system's purpose to be very flexible, so that the system can react to such circumstances? You can bet that this will be gamed toward the political whims of the people inside the system.
If, instead, you want a glimpse at system designs that did "work out", look at long-existing religions, or entrepreneurial dynasties with histories spanning centuries. It should be obvious that these examples are far from a "democratic", "participative" spirit (which perhaps should be a lesson for anybody designing such systems).
You'd think someone with as much experience as Ray Dalio would be vigilant against overfitting a model, and that's exactly what this is.
Just goes to show you that even the most successful are susceptible to ego-driven blind spots.
Anecdotally, it seems to be used as a stepping stone into leadership, or as a way to bank savings.
Several tech leaders I know are out of BW.
An SRE I knew there was at ~$600K TC and got a retention offer at ~$900K when they left to do a passion job.
A bad actor could use this to craft a "malicious review".
“Inside James Comey’s Bizarre $7M Job as a Top Hedge Fund’s In-House Inquisitor” https://www.vanityfair.com/news/2023/11/james-comey-dalio-br...
It’s every bit as strange as it sounds.
The section with the multiple investigations of the employee who didn't bring in the bagels really brings that home lol
Match made in heaven.
Nope. Much, much stranger.
It's really more of a continuum.
Or you could broadly tranche into 4 categories - 1) pure hunches, 2) pure quant ML stuff (I don't know what this signal even means, but computer say number go up), 3) hunches that you validate with data (kind of like the scientific process), and 4) the reverse - quant ML/data that you validate with logic/hunches (computer says number go up, but why would this pattern exist, who is on other side of this, what are the tail risks, etc).
People say the same thing about fundamental analysis versus technical. It's not a binary; everyone is both, and they vary only in how much they lean to one side.
I definitely do not remember anything about Simons making trades based on 'hunches'.
Page 79: "In 1978, Simons left academia to start his own investment firm focusing on currency trading."
Page 99: "In the following days, Simons emerged from his funk, more determined than ever to build a high-tech trading system guided by algorithms, or step-by-step computer instructions, rather than human judgment. Until then, Simons and Baum had relied on crude trading models, as well as their own instincts, an approach that had left Simons in crisis."
Page 101: "The regulators somehow missed the humor in Simons's misadventure. They closed out his potato positions, costing Simons and his investors millions of dollars. Soon, he and Baum had lost confidence in their system. They could see the Piggy Basket's trades and were aware when it made and lost money, but Simons and Baum weren't sure why the model was making its trading decisions. [...] In 1980, Hullender quit..."
Page 102: "With Hullender gone and the Piggy Basket malfunctioning, Simons and Baum drifted from predictive mathematical models to a more traditional trading style."
Page 105: "Their traditional trading approach was going so well that, when the boutique next door closed, Simons rented the space... Simons came to see himself as a venture capitalist as much as a trader."
[Various things happen. Renaissance makes and then loses a bunch of money. It is now 1985]
Page 114: "Simons wondered if the technology was yet available to trade using mathematical models and preset algorithms, to avoid the emotional ups and downs that come with betting on markets with only intelligence and intuition."
Page 201: "By 1990, Simons had high hopes Frey and Kepler might find success with their stock trades. He was even more enthused about his own Medallion fund and its quantitative-trading strategies... Competition was building, however, with some rivals embracing similar trading strategies. Simons's biggest competition figured to come from David Shaw."
So my timeline was a little off: Renaissance was largely a traditional fund from 1978 until AxCom in 1985, with D.E. Shaw only emerging as competition in 1990.
The mythology focusing on math/quant I think is a feature, for them.
Apologies for not finding the original; my $SE appears to be dominated by a recent tax deal instead.
Have they just resigned themselves to the uphill battle?
I love the idea of strong feedback. I've always wanted work to feel like I'm lifting weights with my buddies: We constantly critique each other. We all want to get better and any advice or critique is welcome.
In general, withholding criticism is a sign that either:
1. The person needing advice is more concerned with their ego than actually getting better. They get mad about criticism or find it hurtful.
2. The person who is withholding advice either has nefarious purposes, has a low opinion of you (they assume you fall into category 1), or simply doesn't care about helping you.
Criticism is the respectful, professional thing to do. It assumes the best in people - that they're trying their best and want to get better.
You shouldn't be an asshole by the way. If you phrase your criticism in a way where someone is likely to get defensive then it's less likely to be effective. Empathy is an important skill in teaching.
After years spent working with companies in SF - where criticism is generally avoided at all costs - Bridgewater piqued my interest.
I tried to discern if Bridgewater shared my outlook but, ultimately, I just couldn't tell. I asked extremely pointed questions - uncomfortable things I wouldn't normally ask in an interview. But I figured they wanted radical candor, right?
I asked every interviewer things like "How important is making sure people actually hear this criticism?", or "Couldn't people just use this as an excuse to be a jerk?". All the answers were wishy-washy.
Plus, they force constant feedback. More feedback is good. But constant? It felt like it'd be, at best, distracting. And at worst, like it would lead to a lot of false criticism. If you _have_ to criticize, even if you don't have an opinion, then are you really making people better?
In the end I got the impression that they value criticism for the sake of criticism. And it generally seemed like giving criticism was more highly valued than ensuring people actually heard what you were saying (communication skills and empathy weren't emphasized). They'd mistaken the trees for the forest.
And the issue I have with feedback is that there is a big difference between feedback to improve something where the person is deficient (e.g. you should learn about net present value), and feedback that is just about the person's style (e.g. you ask too many questions).
People are different. There is more than one way to skin a cat. For example, some people develop a solution through an analytical approach - defining a goal, looking at all options, coming up with some objective way to rank them, then filtering down.
Other people do it in a very "messy" way (or what some perceive as messy). They immediately come up with a solution, then refine it as they go based on what they learn and what other people think (e.g. questions, opinions, etc.).
Is one way better than the other? No. But if someone kept giving me feedback on my particular style, I'd start to get annoyed. If it's not wrong, then there shouldn't be feedback. You actually want people with different styles; it can prevent groupthink, and personally, I learn a lot by working with people who approach problems differently.
I get the sense Bridgewater feedback is just used to shape behaviors in the model of some ideal form (i.e. the New Soviet Man), not feedback that improves people whatever their style is. But I might be wrong.
If you want to get continuous feedback, all you need to do is to get married.
A long time ago I was friends with a facilities person at Apple. She told of a peculiar case where she was called in to take care of a problem in one of the conference rooms. Apparently someone had not thought a meeting had gone well. That evening they had returned to the conference room and taken a dump on the table. Message received.
Maybe someone was trying to send Ray a message with a puddle of pee :)
If these articles are right, the picture I'm coming away with is that Dalio had some good strategies that gave him an edge decades ago but now his competitors have caught up and pulled ahead. Apparently Bridgewater no longer has any quantitative edge and it's really just Dalio and a few close advisors making calls from the gut. Plus all this weird nonsense about "principles".
They don’t retire, they just graduate to “thought leaders” because what else are you going to do with that much free time and ego. The money no longer matters, all that is left is to chase status.
https://www.google.com/search?q=bill+gross+lucky
https://www.cnbc.com/video/2013/04/03/great-investors-more-l...
https://www.reuters.com/article/us-janus-henderson-billgross...
https://www.sec.gov/news/press-release/2016-252 (ETF pricing violations mentioned in the book)
https://us.macmillan.com/books/9781250120854/thebondking
(highly recommend the book "The Bond King" for deep background on Gross' professional career and PIMCO)
So some guys who are early, big and right can attract a halo that long outlives its reality.
To build some piece of tech you need to have some sort of process, and that process is probably replicable when building another piece of tech. Investing, though: it is entirely possible to just get lucky (right place, right time) and make a killing for a good run before your lack of process catches up with you.
Half the "Big Short" guys fall under the same category as Dalio & Gross.
But hey, if you make your number and then can graduate into "thought leadership", it's not such a bad gig either.
There are a lot of oft-recommended books whenever the topic comes up, but they always border on too theoretical or outright "pop."
I would imagine simply looking at a syllabus from something like Baruch and going through the recommended readings would be a starting point.
The way they sold it sounded a bit culty to me, and the potential for abusive behaviour seemed high. Didn't make me want to work for them.
On the engineering side, ideally this is how things are done anyway.
I've never had a problem digging into the ideas presented by engineers more senior than me, and I've always encouraged junior engineers to question any ideas I present.
Everyone, independent of seniority, is blind to giant gaping flaws in their own ideas. People also don't know what they don't know. If I have a gap in my knowledge, any plans I come up with are not going to take what I don't know into consideration if I don't know that I don't know something.
What put me off about this company was how they sold this as the central idea of their company. It made me wonder what made them think this needed to be pushed so strongly. I tend to be suspicious of overriding ideologies.
- being presented under-cooked ideas so you do the thinking for them.
- too many ideas (which can lead to the first) with an inability to focus
There is a feedback loop for the first where people don't dive too deeply into their idea because they know they'll get a lot of criticism.
The second can be bad because the idea or its implementation is never polished.
Treating all ideas as valid is fine if that means all ideas get evaluated. For hierarchy, you need somebody who can be the decision maker when a consensus cannot be reached; otherwise the most stubborn, opinionated people become the de facto decision makers. I experienced that situation before, and it resulted in some very bad implementations and massive tech debt, while the people who stuck us with those bad decisions got to avoid responsibility and left others cleaning up the mess they created.
It sounded like the kind of idea that would either be awesome if done well, or sheer hell if people did what people often do with otherwise good ideas...
Major Animal Farm vibes.
Is it all a front or a Madoff situation?
They are generally making millions and millions of smaller trades. So sure, they have $100B AUM, maybe 5x levered to $500B. But across a million positions that's... an average $500K position size.
Few if any of these funds ever have like $10B sitting in a single trade.
And even if they did, consider the result of a quick google search "Average daily trading volume of U.S. treasury securities 2000-2018. In 2018, the average total volume of treasury securities traded per day was over 547 billion U.S. dollars."
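A quick sanity check of that arithmetic, using the round figures from the comments above (illustrative numbers only):

```python
# Back-of-envelope check of the position sizes discussed above.
aum = 100e9            # $100B assets under management
gross = aum * 5        # 5x leverage -> $500B gross exposure
positions = 1_000_000  # on the order of a million positions
avg_position = gross / positions
# -> $500K per position on average, tiny next to the ~$547B traded
#    daily in US Treasuries alone.
```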
From the article's description, CEOs like to flaunt their corruption and lack of merit these days.
One on one he was a nice enough guy, and he was super smart. In meetings or as a manager he was a nightmare and left a dumpster fire of problems in his wake including driving off nearly his whole team.
The necessary precondition, private individuals sidestepping democratic control via their unchecked, powerful organizations, is already established. Gotta hope for the whistleblowers...
Anyway, that's the optimistic take :P
"How goes our plans to <do something morally reprehensible>?"
"Oh, we are still having trouble with the requirements. It's all blocked up in committee. Not much progress to report."
Chances are a CEO type will be the first to get their hands on AGI or something close enough.
…
> Staffers were given iPads and directed to rank one another on a one-to-ten scale on their performance dozens of times per day in categories derived from Dalio’s Principles
…
> The goal of all the data, Dalio would say, was to sort everyone at Bridgewater on a single scale.
———
Truly they’ve stumbled on the heart of human connection and meaning.
A book being "well-received" doesn't generally mean "everyone should follow all of the ideas in here."
This article really buried the lede here
So many questions. So, so many.
To be fair, if his claim is "out of control crime" and you interpret it as "out of control crime rates", you're talking about different things. There's enough anecdotal evidence to suggest that in some big cities, the occurrence of certain kinds of crime (like cars being broken into) is up, yet reporting is down, leading to a lower crime rate that doesn't reflect the street-level reality accurately.
Of course, this isn't a problem limited to this domain.
That's too easy to just type, and therefore means nothing. If you think you have an argument, make a claim about what the rates are and show evidence for it. Show evidence that published rates are unusually low.
In the Internet, post-truth era, anecdotal evidence - if you even have that - is less reliable than ever; we swim in oceans of misinformation and disinformation. People post their takes, repeat others', etc. endlessly.
My anecdotal evidence: based on much time spent in cities that are depicted (by right-wing media, let's be honest) as crime-ridden, IME they are almost wonderful; there almost couldn't be a better time to be in them. Crime is low, the cities are clean, the streets are busy with people and energy, and people go about their days without much of a care. Is that true in every neighborhood? No; but that's not a realistic standard.
I assume Dalio has been having a field day with ChatGPT/LLMs.
I don’t think you know what you’re talking about in this regard and/or didn’t serve, candidly.
NCOERs/OERs aren't great, I agree. They're almost as bad and useless as command climate surveys.
But the up-or-out system used for officers in the US military is notorious for being primarily political after like O3 and historically almost comically incapable of rejecting unqualified but politically sophisticated/connected officers.
If you predict that it will rain on Friday, and I predict an alien invasion before Christmas, and both actually happen… Does that mean I was smarter because my event was vastly more unlikely? How would you score these two successful predictions and what possible value could those scores have? Where was intelligence applied here?
It's very simple and probably wouldn't pass muster with an HR department, but it was effective at getting rid of turds.