I strongly suspect that a better model is that people, rather than optimizing their outcomes, optimize the ease of decision making while still arriving at an acceptable course of action. Most of our biases serve either to let us make decisions more quickly or to minimize the odds of catastrophically bad outcomes, which fits nicely with this model. The fact is that indecision is often worse than a bad decision, and the evolutionary forces that shaped our brains are stochastic in nature and thus don't dock points for missed opportunities.
This is a very profound insight that I completely agree with. I've noticed that exact phenomenon in my own life and in my peer groups: basically disengaging, not looking for new local maxima (in fairness, because they are hard to detect as they are happening) because the current situation is good enough to keep coasting on.
This might explain some behavior, but how does this model explain why many people choose to hurt others out of spite even when it means hurting themselves? Those choices are neither easy, nor optimal, nor ultimately acceptable; many people who do stupid things like that end up regretting them. It seems to me, and to most of historical humanity, that something is fundamentally broken in us beyond merely missing the optimal outcome through stochastic acceptability. Sometimes we deliberately choose to do something very difficult that we know is wrong because we desire the bad outcome. That is messed up.
Now a rational actor would carefully evaluate the consequences of possible responses to come up with an appropriate option, and if the cost of their feud were greater than the likely reward, they'd simply let it go. While it leads to better outcomes, this is a slow and draining process.
On the other hand, a simple "eye for an eye" response will often lead to suboptimal results, particularly when the perceived slight is very different from the actual transgression, but people will still be hesitant to mess with you all the same. While in our modern era of functional justice systems this approach is generally unnecessary, the overwhelming majority of our evolutionary history did not contain such a luxury.
I’m rationally irrational.
I also don't want to take every opportunity I get; that would be pretty exhausting. I could save some taxes if I invested a few hours in tax law. Certainly an opportunity, and a pretty productive one. But I just don't want to, because I hate doing taxes.
Sure, these models do not apply to individuals (although this fact is often neglected). Also, a model is always a simplification; intrinsic to that is that it will, by definition, only ever be approximate. It neglects parts of reality, hopefully the less important ones, but you cannot be sure about the correctness and extent of the approximation.
For example, if I know a behavioral scientist I just don't like for whatever reason, and he suggests I should exercise more, I might go eat an extra pot of ice cream. This would render "nudging" quite ineffective, or worse, give it the opposite of the intended effect.
I think it is more constructive to accept the limitations of a model. It can help with prognosis and diagnostics. Why is it, for example, that people exercise less? Probably workload, or distractions from entertainment, or whatever other reason. I think the field should concentrate on trying to answer such questions.
Psychology is interesting and much of the content that cannot be replicated is probably still true under certain circumstances. But for generalization these circumstances need to be known.
Which is a pretty good null hypothesis, actually.
That's entirely consistent with people frequently optimising for ease of decision making, it's just not consistent with slavish adherence to a particular specified decision making function an economist has designed policy around exploiting. The canonical example in macroeconomics being that if a government announced its intention to increase inflation, it would be unreasonable to assume that people weren't rational enough to consider asking for a pay rise.
Epicycles were abandoned because we had a more parsimonious default model, not because we wanted to have a more complex idea of reality and handwaved about maybe being more multidisciplinary.
On the last point, evolution doesn't dock points for missed opportunities... provided someone else didn't miss them.
If you've never looked at cognitive biases through the lens of performance optimization you should try it. What seems like an arbitrary list from the bias perspective becomes clever approximative techniques in the performance optimization perspective.
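To see the flavor of that reframing, here is a minimal sketch (the names and the toy utility function are all invented for illustration) contrasting exhaustive optimization with satisficing, i.e. taking the first option that clears an acceptability threshold:

```python
import random

random.seed(0)

def evaluate(option):
    """Stand-in for an expensive evaluation of an option's utility."""
    return option  # here utility is just the value itself, for illustration

def optimize(options):
    """Full optimization: evaluate every option, keep the best."""
    best, cost = None, 0
    for o in options:
        cost += 1
        if best is None or evaluate(o) > evaluate(best):
            best = o
    return best, cost

def satisfice(options, threshold):
    """Satisficing: stop at the first option that is good enough."""
    cost = 0
    for o in options:
        cost += 1
        if evaluate(o) >= threshold:
            return o, cost
    return None, cost  # indecision: nothing acceptable was found

options = [random.random() for _ in range(10_000)]
best, full_cost = optimize(options)
good_enough, cheap_cost = satisfice(options, threshold=0.9)

print(full_cost, cheap_cost)  # satisficing typically inspects a tiny fraction
```

Viewed as a bias, satisficing "irrationally" leaves value on the table; viewed as performance optimization, it trades a bounded amount of utility for orders of magnitude less deliberation.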
I often think about why this isn't more commonly known among people who call themselves rationalists and tend to spend a lot of time discussing cognitive bias. They seem to be trending toward a belief that general superintelligence is of infinite power, doubling down on their fallacious and hubristic appreciation for the power of intelligence.
I say this because when you apply the algorithms that don't have these biases - the behavioral economist view wouldn't find them irrational, since they stick to the math and follow things like the coherence principles for how we ought to work with probabilities, as seen in the works of Jaynes, de Finetti, and so on - they either don't terminate, or, if you force them to do so... well... they lose to humans; even humans who aren't very good at the task.
Because most of these people do nothing but write blogs about rationalism. Same reason university tests are sometimes so removed from practicality compared to evaluation criteria in business: the people who make them do nothing but write these tests.
I suspect if you put some rationalists into the trenches in the Donbass for a week they'd quickly have a more balanced view of what's needed to solve a problem besides rational contemplation.
In your learning problem, where things were made tractable by differentiation, you have something like an elevation map that you follow; but in the multi-stage decision problem you have something more like a fractal elevation map. When you want to know the value of a particular point on the map, you have to look for the highest or lowest point on the elevation map you get by zooming in on that area, which is the result of your having chosen a particular policy.
The problem is that since this is a multi-agent environment, the other agents can react to your policy choice. They can, for example, choose to give you a high value only if you have entered the correct password on a form. That elevation map is designed to be flat everywhere, with another fractal zoom corresponding to a high utility or a low error term only at the point where you enter the right password.
Choose a random point and you aren't going to have any information about what the password was. The optimization process won't help you. So you have to search. One way to do that is to do a random search; if you do that you eventually find a differing elevation - assuming one exists. But what if there were two passwords - one takes you to a low elevation fractal world that corresponds with a low reward because it is a honeypot. The other takes you to the fractal zoom where the elevation map is conditioned on you having root access to the system.
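A toy sketch of that flat "password" landscape (all constants here are invented for illustration) shows why local search gets no traction on it, while blind sampling succeeds only by luck:

```python
import random

random.seed(1)

PASSWORD = 734_281          # the hidden point; the landscape is flat elsewhere
SPACE = 1_000_000

def reward(guess):
    """Flat 'elevation map': zero everywhere except at the password."""
    return 1.0 if guess == PASSWORD else 0.0

def hill_climb(start, steps=10_000):
    """Local search is useless here: neighbouring points carry no information."""
    x = start
    for _ in range(steps):
        candidate = x + random.choice([-1, 1])
        if reward(candidate) >= reward(x):  # every comparison is 0.0 >= 0.0
            x = candidate
    return reward(x)

def random_search(samples=10_000):
    """Blind sampling: expected cost to find the password is ~SPACE/2."""
    return max(reward(random.randrange(SPACE)) for _ in range(samples))

print(hill_climb(start=0), random_search())
```

Hill climbing from 0 can never reach the password in 10,000 unit steps, and nothing in the intermediate rewards tells it which direction to go; that is the "no information from the optimization process" point above.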
This argument shows that we would actually need to search over every point to get the best possible answer. Yet to do that we have to search over the entire continuous distribution for our policy. Since by definition there are an infinite number of states, even a computer with infinite search speed can't enumerate them; there is another infinite fractal under every policy choice that also needs full enumeration. We get non-termination, by a diagonalization argument, even for a computer of infinite speed.
Now observe that in our reality passwords exist. Less extreme - notice that reacting to policy choice in general, for example, moving out of the way of a car that drives toward you but not changing the way you would walk if it doesn't, isn't actually an unusual property in decision problems. It is normal.
Could you give an example of this?
The algorithms have this tendency: they use counterfactual reasoning and assume, when making their decisions, that their opponent is a Nash player much like themselves. Sometimes they don't have a Nash opponent, but they persist in the assumption anyway. In the cognitive-bias framing, this tendency is an error. In the game-theoretic framing, it corresponds to minimizing the degree to which you can be exploited. You can find times when the algorithm plays against something that isn't Nash, so it was operating according to a flawed model, and you can call it biased for assuming others operated according to that model. But from a complexity perspective, this assumption lets you drop an infinite number of continuous strategy distributions from consideration - with strong theoretical backing for why it won't hurt you to do so, since Nash is optimal according to some important metrics.
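A minimal sketch of that assumption (not any particular published algorithm; the toy game tree is made up) is plain minimax, which backs up values as if the opponent always plays its best reply:

```python
def minimax(node, maximizing):
    """Backward induction assuming the opponent plays its best (Nash-style)
    response -- even if the actual opponent won't."""
    if isinstance(node, (int, float)):   # leaf: payoff to the maximizer
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A tiny game: our move picks a subtree, then the opponent moves.
# Against a perfect opponent the left subtree is worth 1 and the right 2,
# so minimax picks the right subtree -- it never "hopes" the opponent errs,
# even though the left subtree contains the payoff 10.
tree = [[1, 10], [2, 3]]
print(minimax(tree, maximizing=True))  # 2
```

Against a weak opponent who would actually blunder into the 10, this looks like a bias; against an unknown opponent, it is the guarantee that you can't be exploited.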
- Attentional bias
The tendency to pay attention to some things and not others. One place we do this is alpha-beta pruning: you can find sacrificial moves that demonstrate the existence of this bias. The conceit in the cognitive-bias framing is that it is stupid, because some of the ignored things might be important. The justification is that some things are more promising than others and we have a limited computational budget; better to stop exploring the unpromising branches and direct effort to the promising ones. Likewise, the cognitive-bias model would turn something like an upper confidence bound tree search - balancing the explore/exploit dynamic as part of approximating the Nash equilibrium - into erroneous reasoning, because it doesn't choose to explore everything and it weights action values from promising rollouts more highly: a lesser form of anchoring as it relates to attentional bias.
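A compact, self-contained sketch of alpha-beta pruning (the toy tree is invented) makes the "attentional" part concrete: one branch is never examined at all, because it provably cannot change the decision:

```python
import math

def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning: branches that provably cannot
    affect the final choice never receive any attention."""
    if isinstance(node, (int, float)):   # leaf: payoff to the maximizer
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break        # prune: the minimizer will never allow this line
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break            # prune: the maximizer already has a better line
    return value

tree = [[3, 5], [2, 9]]      # after seeing the 2, the 9 is never examined
print(alphabeta(tree, -math.inf, math.inf, True))  # 3
```

The pruned 9 is exactly the "thing not attended to"; the algorithm is correct anyway, and the inattention is what makes it fast.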
- Apophenia
Hashing techniques are used to reduce dimensionality. There is an error term here, but you gain reasoning speed. This is seen in blueprint abstraction - the poker example I gave - since we're hashing down by similarity to bucket similar things together. This gives rise to things like selective attention (another bias, related to this general category).
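A toy sketch of the idea (a raw hash stands in here for the similarity-based clustering a real blueprint abstraction would use; the "hand descriptions" are invented):

```python
def bucket(features, n_buckets=16):
    """Coarse abstraction: hash a detailed situation down to one of a few
    buckets, so 'similar enough' situations share one strategy slot.
    Distinct situations can collide -- that collision is the error term."""
    return hash(tuple(features)) % n_buckets

# Hypothetical hand descriptions: (rank, suited). A real abstraction buckets
# by strategic similarity, not a raw hash -- this only shows the reduction.
situations = [(rank, suited) for rank in range(13) for suited in (False, True)]
buckets = {bucket(s) for s in situations}
print(len(situations), len(buckets))  # many situations map into few buckets
```

Seeing patterns that aren't really there (apophenia) is the failure mode of exactly this kind of lossy compression: two genuinely different situations land in the same bucket and get treated as the same thing.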
Jumping ahead to something like confirmation bias: the heuristics all these algorithms use are flawed in various ways. They see that they are flawed after a node expansion and update their beliefs, but they don't update the heuristic. In fact, if a flawed heuristic worked well enough to win, we would have greater confidence in the bias rather than less.
---
Putting all that aside, I would caution against specificity in understanding my point. Approaching it in this direction - through very specific examples - is a bad idea because it directs attention to the wrong things. When you look at specific examples you are always in a more specific situation, and a more specific situation is more computationally tractable than the general situation the algorithm was handling. So focusing on examples will give you weird inversions, where the rules that applied in general don't apply to the specific case.
You need to come at it from the opposite direction - from the problem descriptions to the necessary constraints on your solution. Then the error in reasoning falls out as a natural result of trying to do well.
An obvious class of problems is where determining the optimum takes more time than the lifetime of the problem. Say you need to write an algorithm at work that does X, and you need X by tomorrow. If it would take you a week to find the theoretical optimum, then the optimum in a "global" sense is to deliver the best you can within the constraints, not the abstract theoretical optimum. The time to produce the solution is part of the total cost. An imprudent person would either say it's not possible, or never deliver the solution in time.
The key shift is to move the utility function from evaluating a future state of the world to evaluating the utility of an opportunity for attention in the present moment.
All the "cognitive errors" that we humans make are with respect to predicting the future. But we all know what we find appealing in the present moment.
And when we look at economics from this new perspective of the present, we get an economics of attention. We can measure, and model, for the first time, how we choose how to allocate the scarce resource of the internet age: human attention.
I dropped out of academia as soon as I finished this work, and never publicized it broadly within academia, but I still believe it has great potential impact for economics, and it would be great to get the word out.
I like to say that most human problems are a result of the conflict between short and long term goals. This is true at all levels, from individuals to small groups, companies, and states. Many, many "failures" can be framed this way. I would say it's not even a problem of predicting the future (though that is an issue) but of failure to prioritize the future over the present.
Epicycle-based models were far superior in practice, for example at predicting planetary conjunctions. Heliocentric models did not really catch up until Newton invented gravity and calculus.
And the centre of mass of the solar system (the barycenter, in Newtonian physics) is outside the Sun, so heliocentric models technically never gave solid predictions! Stellar parallax (the main prediction of Copernicus's theory) was not confirmed until the 19th century! Heliocentrism is mainly a philosophical concept!
I will stick with my primitive old thinking and biases, thank you! If I get mugged a few times in a neighbourhood, I will assume it is not safe. There is no need to overthink it!
In this case I’m not so sure. As a plebeian normie, it seems like the “rational actor” model of economics has a lot of problems.
Now I do believe that all people are, all of the time, trying to achieve their goals and meet their needs as best they can in the given situation and in the way they best know how.
But this includes a junkie digging through trash for things to sell, a housewife poisoning her abusive husband, and a schizophrenic blowing up mailboxes to stop an international plot against her. It includes a recent widower staying in bed for two weeks. It certainly includes your exclusion of an entire neighborhood and its thousands of inhabitants from your care due to some harrowing experiences.
As I understand it, most economists, and certainly the ones who influence policy, are not really thinking of these things as "rational". To them rational means "increasing your own wealth or exchanging your money in the most efficient and expedient way possible". And that works well enough, because it is how corporations, and rich people who hire others to manage their money, effectively operate. But it doesn't really work for normal people in normal situations. Our lack of information about our surroundings and our incredibly wide array of emotional states doesn't leave a lot of room for rationality.
I won’t really expound on it because this is already so long, but having a single definition of rationality also excludes any possibility of having an informed multicultural viewpoint.
I believe it isn't. Actually I think it's in a much worse position for the following reasons:
1) A Government is made of people (usually elected directly or indirectly by the majority based on feelings and all the same irrationality), which in turn will likely be "irrational", or have the wrong incentives (be elected again).
2) A Government is made of few people compared to all the people in the country. They can't possibly know all the details of the economy and the situations people are in, nor could they process it all if they did.
3) Government policies can affect the entire economy. An error there can have bigger repercussions than, for instance, a company making a mistake.
But you cannot approximate a complex system like the human brain with a couple of variables. There are not hundreds but millions of biases.
Advanced epicycle models had dozens of moving parts. JPL planetary ephemerides (the modern equivalent, expressed in polynomials) have several million parameters and terabytes of equations.
Gravity - some mystical force that attracts masses together - turns out to be a completely fictional thing. Mass curves spacetime, objects actually move in straight lines, and the fact that you can explain the results of that as an 'attractive force' turns out to be just a convenient invention. Summing up how all that works as a simple inverse-square force is an ingenious human observation and invention.
Like, say, the invention of the number 0. https://en.wikipedia.org/wiki/0
you might as well contest that he invented calculus
The author then lauds some impressive, hard-to-conduct, large-scale interventions, which are formidable but don't really teach us about economic theory; in fact, neither was published in an economics journal. Maybe the field should move in that direction, I am agnostic on the point, but the author's argument wasn't coherent in my opinion.
[1] https://rajchetty.com/wp-content/uploads/2021/04/behavioral_...
[2] https://www.hbs.edu/ris/Publication%20Files/Selective%20Atte...
Example from the article:
> Many costly signals are inherently wasteful. Money, time, or other resources are burnt. And wasteful acts are the types of things that we often call irrational. A fancy car may be a logical choice if you are seeking to signal wealth, despite the harm it does to your retirement savings. Do you need help to overcome your error in not saving for retirement, or an alternative way to signal your wealth to your intended audience? You can only understand this if you understand the objective.
If you are a lawyer with a good practice, you are expected to drive a nice large car. If you drove a battered old economy-class car, your clients might see it as a sign that something is wrong with you (there are several plausible ideas) and shun dealing with you. There go fat fees and investment savings.
It is astonishing to see what we've accomplished despite these rather large shortcomings.
If you disagree, how do you know this emotion isn't triggered by what you would like to be real?
When convinced of anything one grows a bias blind spot of biblical volume. It is a tremendous struggle to look around it.
The "homo economicus" model has become somewhat of a straw man for behavioural economics to disprove. Realistically the model was never claimed to apply in the types of domains where it's being disproved.
Consumers of the modern world are bombarded with choices and attempts to influence these choices. That's not a world, IMO, that can be "modelled" in the same way that a medieval village can be modelled.
If the discipline must be scientific, maybe the better model is "engineering science" where you have to try and build the thing in order to study it. Computation may exist, in various forms, in nature. But, the way to do computer science isn't observing nature. It's building computers, at least on paper.
The efficient market hypothesis ("homo economicus") model is a prime example. Of course it is wrong. It is a model.
That doesn't mean it is not useful.
Howard Marks (highly successful investor over many decades) has this to say about it
> if you ignore the efficient market hypothesis, you’re going to be very disappointed, because you’re going to find out that very few of your active investment decisions work. But if you swallow it whole, you won’t be an investor, and you’ll give up on active success. So the truth, if there is one, has to lie somewhere in between, and that’s what I believe.
https://www.oaktreecapital.com/insights/memo/conversation-at...
> Evolution is ruthlessly rational.
Yes, but that doesn't mean its products or their behaviour are mostly or even partly rational, just that whatever strategy they adopted has worked to ensure survival.
If this is your definition of what's 'rational', that seems a very low bar to set.
That said I enjoyed the article. Frankly it's long seemed to me that economic models based around the idea of the human as a rational actor are fundamentally wrong, because people are not rational (and I include myself in this, more's the pity). Look out at the world, see all the self-sabotage, the unfounded hatred, the violence, the retribution, the temporarily embarrassed millionaires who vote to keep themselves in the gutter.
Can we stretch the idea of "rational" behaviour by looking at evolutionary impulses and saying "well, at X point in the past, this impulse may have helped tribal cohesion and increased the chances of group survival, even at the cost of blah blah blah". Sure we can, sure. But that behaviour is not then "rational" in the society we live in today.
So yeah, modelling of populations based on rational self interest really does look like it has a foundational error. An economics based around a much more chaotic model of human behaviour, with some rationalities built in, is probably needed.
And what would be the expense of figuring out what the optimal choice is?
I got into this way of thinking this morning starting with what is the definition of "Technical Debt". I would say it is the cost of moving from the current implementation to the optimal implementation.
But we don't know what the optimal implementation would be. That means we can't even begin to estimate how much it would cost to refactor the current implementation into the optimal one.
Therefore I conclude that "Technical Debt" is consultant-speak. Something you can sell without clearly explaining what you are selling.
In short, I'm still mildly skeptical that we've really disproven the rational actor model.
Maybe the rational actor model is the heliocentrism we're looking for. People want it to be false, in order to justify paternalism.
I once read that in the century after Newton, the French Academy offered a prize for evidence that disproved Newtonian mechanics, and they awarded it several times before finally giving up. The disproofs were all flawed.
I think something similar could happen here. We come up with a more accurate model that requires fewer exceptions, but is harder to apply/work with day-to-day.
Sorry to make everything about AI and machine learning, but for a moment there, I thought this was precisely about AI and machine learning.
- Delivering value to our customers.
- Investing in our employees.
- Dealing fairly and ethically with our suppliers.
- Supporting the communities in which we work.
- Generating long-term value for shareholders.
https://www.businessroundtable.org/business-roundtable-redef...

Biological proof of work, heh
I realized about myself that I became better at decision-making the more easily I could switch to the "appropriate model" for a given situation. Not even physics can get a unified model, and the primitives in the social sciences (humans, memories, desires, education) are all as fuzzy as can be.
The article does mention that "rational-actor" might actually be the best we can come up with, but that's only if we always have to work with the same model.
We could have a newtonian/relativistic style pair of models for people, based on urgency or marginal utility threshold (different rules apply to your last dollar), but that threshold has to be subjective, and we seem to have lost all hope in anything subjective.
The simplest way to disprove it:
If we had not biases but merely wrong models, then fixing the models would make us unbiased.
But that works very rarely in real life.