This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
Apparently "top executive" median pay is $105,350 per year: https://www.bls.gov/ooh/management/top-executives.htm
Now just think of the comp levels in sectors like government, education, etc.
Chief Executives is actually a specific sub-category of it and is, obviously, much smaller.
Can you elaborate?
I think AI outcomes distribute to the contexts where it is used, and produce a change in how we work and what work we take on. Competition takes care of capturing those surpluses and investing them in new structure, which becomes load-bearing so that we can't do without it anymore.
In the end it looks like we are treading water, just as when computers got a million times faster in a couple of decades, yet we felt very little improvement in earnings or reduction in work.
Surplus becomes structure, and the changed structure is something you can't function without. Like the cell and the mitochondrion: after they merged they can't be apart; they can't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
For a business, the question is whether you can make more money by doing more ambitious things.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
3. The stock market is rooted in reality.
This was already obvious, the more important question is what are we (collectively, society & our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty information cycle is like an Osborne effect for improvements: like the semi-annual Popular Mechanics flying-car covers, there is an enticing future perpetually nearly here that, at the same time, disappointingly never materializes.
Agriculture is a good example of that: http://www.johnhearfield.com/History/Breadt.htm
This time the jobs most in the crosshairs of AI are the ones that constitute the paper-pushing overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred or multi-thousand-dollar increments, but in practice it means that when you have to engage an accountant or an engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional-labor component.
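The price effect described above can be sketched with toy numbers. This is only an illustration: the labor share, the AI cost ratio, and the engagement price below are all assumptions, not sourced data.

```python
# Toy sketch of the "cheaper services" claim: only the professional-labor
# slice of a service's price gets the "$2 human -> $1 AI" discount,
# while the non-labor overhead stays the same.
labor_share = 0.7     # assumed fraction of the price that is professional labor
ai_cost_ratio = 0.5   # "$1 AI services replacing $2 human services"

old_price = 2000.0    # a hypothetical accountant engagement (illustrative)
new_price = old_price * (1 - labor_share) \
    + old_price * labor_share * ai_cost_ratio

print(new_price)  # 600 overhead + 700 discounted labor = 1300.0
```

So under these assumptions the service gets roughly a third cheaper, not half, because only the labor component is discounted.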
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
Does the work you do provide more or less value to the company than your salary? Where does the difference go? If your killer feature closes a $5M deal, who gets that money?
We live as capitalist serfs. Someone else gets all the value you create, and you should be grateful for the peanuts they toss back to you.
The 1%'s pockets. That's where the vast majority of the extra productivity that computers/the internet/automation brought has gone for the last 50 years: https://www.epi.org/productivity-pay-gap/
1) The salaries of corporate employees 2) Shareholders and capital owners
Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
(don't forget to "allow pasting" in [chrome] console first)
1) Avoid contrasting red/green and blue/yellow, as these are common colorblind pairs.
2) Pick shades that still look different when shown in grayscale.
3) All bar charts should have 0 at one end.
4) Please no 3-D pie charts.
To find good color palettes, check out https://colorbrewer2.org
https://www.vischeck.com/run.html
It twiddles colors in a physiologically-aware manner to improve legibility for colorblind observers:
https://github.com/wadelab/VischeckTinyeyes/blob/main/websit...
BLS forward-looking guidance means nothing when technology revolutionizes the nature of work.
No one can predict everything perfectly. This is just the guidance based on the data that was reported. AI is advancing faster than anyone can imagine and no one knows the impact - good or bad.
Putting aside the slop facade placed atop the data... why would we trust the data?
Yay!
>Computer Programmers: -6%
Oh no
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer Programmers median pay according to BLS: $98,670 per year
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Software developers typically do the following:
- Analyze users’ needs and then design and develop software to meet those needs
- Recommend software upgrades for customers’ existing programs and systems
- Design each piece of an application or system and plan how the pieces will work together
- Create a variety of models and diagrams showing programmers the software code needed for an application
- Ensure that a program continues to function normally through software maintenance and testing
- Document every aspect of an application or system as a reference for future maintenance and upgrades
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer programmers typically do the following:
- Write programs in a variety of computer languages, such as C++ and Java
- Update and expand existing programs
- Test programs for errors and fix the faulty lines of computer code
- Create, modify, and test code or scripts in software that simplifies development
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Reason for hope
They're saying that Programmers will be declining while Developers, and crucially, Testers and QA people will be increasing. That testers and QA become more important sounds plausible to me in a hypothetical future world of ubiquitous AI.
All of that doesn't necessarily imply that the Developer class of employees will grow at the same rate as the Tester and QA classes of employees.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Edit: Another possible reason is that Master's degrees were less common in the past, so the Bachelor's pay statistics skew toward people with more work experience in their higher-earning years, whereas the Master's pay statistics skew toward younger people with less work experience.
Congress/the president should pause H1B visas or hike the fee to $200-500K so that only truly exceptional talent is allowed in. Right now it's just a giveaway to corporations that are laying people off by the tens of thousands.
1) How many of these people leave the country isn't captured in this analysis.
2) OPTs will likely get H1Bs/L1s or leave the country, and are being counted distinctly.
3) Not all H1Bs/L1s/OPTs are for tech. The majority, for sure, but there's a conversion factor.
Especially in the current situation, where green cards are much harder to obtain and many OPTs don't find a job, I expect 1 to be much larger than in the past.
As a more general observation, this line of reasoning does fit the lump of labour fallacy: https://en.wikipedia.org/wiki/Lump_of_labour_fallacy
Since the fee went up to $100k, I’m not aware of any companies still sponsoring hires who need a new H1B
https://apnews.com/article/teacher-jobs-h1b-j1-visa-online-s...
Like many school systems facing teacher shortages, South Carolina’s Allendale County has looked overseas for help. A quarter of the teachers in the rural, high-poverty district come from other countries.
The superintendent praises the international educators — mostly from Jamaica and the Philippines — for their skill and dedication, but she is preparing to lose some of them as the Trump administration reshapes visa programs.
Facing higher visa sponsorship costs and uncertain immigration policies, Superintendent Vallerie Cave said it feels too risky to extend some international teachers whose contracts are up or bring on others.

So healthcare industries turn to H1Bs to hire for specialty positions in underserved/rural areas. The alternative is to shut these facilities down, which has other negative effects on communities.
I find this argument extremely funny, because when immigrants are taking the white collar jobs, you guys turn anti-immigrant and tighten the visa rules, but when blue collar and low-level jobs are taken by illegal workers you turn a blind eye and it's all "no one is illegal on stolen land" logic.
I 100% agree that H1B has been extremely abused by folks from a specific country running body-shop tech consultancies, but the solution is not to hike the fees to $200k-500k.
The $100k fee from the Trump admin is already showing effects in the job market. Most companies are not readily sponsoring H1B visas anymore; getting a big tech job as an intl student is already tough, and only exceptional ones are getting such jobs.
There's lies, damned lies, and then: there's statistics.
You have to weigh the growth in jobs against how many new people there are to take them, where those jobs are located, and, somewhat weirdly, other jobs.
Plenty of people feel so dejected at the current state of things that they leave computer work entirely, creating "openings" where there isn't actually any growth.
Like all things you try to understand: a single data point, when averaged, is like trying to calculate the heat of the sun by looking through a telescope at Jupiter. It gives you one far-out, tiny facet of data that only makes sense when coalesced with a hundred others.
On second thought, client service folks might do extremely well here!
What you mention here is exactly why my earlier relationship went bust: I didn't have any of these, and then the children arrived :-X
In the AI maximalist world where humans are obsolete and cannot contribute to the economy in any meaningful way, there is actually no reason for public education to exist beyond being a free day care for non-rich people. Why learn algebra/calculus at all if the AIs can do it? Why should the US invest billions of dollars into public education instead of data centers?
I hope the US and AI leaders are still "speciesist" in that they put humans first. I hope AI will cure all illnesses, unlock space travel, and lead to a flourishing of humanity, not just a flourishing of datacenters. It's also possible that AI just cleaves societies in half and we are all worse off for it.
Stand in front with a gun while mobs come to burn down the data center that took their jobs.
(I think I'm half joking).
I doubt it'd be an old-fashioned hit-and-run; more like small-scale Ukraine, with drones filled with explosives.
Apple, a very successful company, makes ~$300B/y in revenue (ish).
~10% is all you need to be Apple.
And it can work either by taking 10% of the jobs and collecting those whole salaries (the AI "employee" -- a dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If "part" being automated is >10%, we all win in the long run, every company gets productivity growth without cost growth, etc etc.
If you add in data center costs, and multiple competing AI companies, and then expand the TAM to all white-collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of work for 20% of the cost" model. Again, how you distribute that 20% remains to be seen (20% new unemployment, or 0% new unemployment with "tools").
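The arithmetic behind the "employee vs. tool" framing can be made explicit. This is a back-of-the-envelope sketch only: the white-collar wage base below is an assumed illustrative figure, not sourced data.

```python
# Back-of-the-envelope version of the comment's reasoning.
APPLE_REVENUE = 300e9   # ~Apple's annual revenue, per the comment
WAGE_BASE = 3e12        # assumed total white-collar salaries (illustrative)

# Route A (the AI "employee"): fully replace 10% of jobs and
# collect those whole salaries.
route_a = WAGE_BASE * 0.10

# Route B (the AI "tool"): automate part of everyone's job and
# capture 10% of every salary.
route_b = WAGE_BASE * 0.10

# Either route yields the same revenue pool, at Apple's scale.
print(route_a == route_b, route_a >= APPLE_REVENUE)  # True True
```

The distributional outcomes differ wildly, but the revenue math is identical, which is the comment's point.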
I formalized my thoughts here: https://jodavaho.io/posts/ai-jobpocolypse.html
Potable water is far more important than AI or iPads ever will be, but the world's most valuable water company only does about 5B/year in revenue: https://en.wikipedia.org/wiki/American_Water_Works
It's also understated, because the real value of AI is not in replacing work, but making new products possible either because it's finally cheap enough to make them, or because -- AI.
So no, little or none of the AI productivity gains will go to workers, barring significant changes in public policy like universal basic income and the massive tax increases necessary to implement it.
Given the state of AI (LLMs), they still need a very skilled human driver to operate.
Frequently seen as a big fun number in pitch decks. "The TAM for our new Coca-Cola killer is $1.6T: all humans who imbibe liquids on a regular basis. You simply MUST invest."
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM to score things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
Now I'm not sure this is actually an LLM-only thing, because I think people probably do something similar when you ask them to put a number on things without providing a concrete grading rubric...
Started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the US tech ship was finally growing into something really great. All gone now.
When I worked in devops I always worried that my job was automating away other engineers. It definitely had a "when will this come for me" feeling, because it really was. Now the dev and the ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess lets talk about kubernetes or something...
Ask a colorblind person to explain how to win at candy crush and you'll be surprised (hint: we do not use colors, we use the shapes).
If you turn on the color filters in accessibility settings in macOS you can see what the contrast could look like to a colorblind person.
The general trick is you can rely on differences in color lightness, patterns, text and icons, but not differences in color hue. The page should be usable in grayscale.
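The "usable in grayscale" rule can be checked mechanically. Here is a minimal sketch using the standard sRGB relative-luminance formula; the 0.2 lightness-gap threshold is an arbitrary assumption for illustration, not an accessibility standard.

```python
# Check whether two colors remain distinguishable in grayscale by
# comparing their relative luminance (sRGB / WCAG coefficients).
def luminance(rgb):
    """Relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def grayscale_distinct(c1, c2, min_gap=0.2):
    """True if the colors differ enough in lightness alone (gap assumed)."""
    return abs(luminance(c1) - luminance(c2)) >= min_gap

# Pure red vs. mid green have similar lightness -> a bad pair;
# dark blue vs. light orange differ in lightness -> a safer pair.
print(grayscale_distinct((255, 0, 0), (0, 128, 0)))      # False
print(grayscale_distinct((0, 0, 128), (255, 200, 100)))  # True
```

Note that red/green fails here even though the hues look very different to non-colorblind viewers, which is exactly why hue alone is a trap.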
What's the outlook like?
Thank you!
Needs:
- [utility] Add filter by keyword / substring match; e.g. the majority of visualized reports are unlabeled, requiring hovering with a mouse pointer
- [improve discovery] Add sort by demographic / population impact; e.g. the largest block is 7M ('Hand laborers and movers') and is sorted to the bottom-left by default
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category that is propping up the middle of the economy with spending.
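A toy illustration of that weighting, with made-up employment and wage numbers (illustrative assumptions, not BLS figures):

```python
# Same -4% decline, very different wage-bill impact.
occupations = {
    # name: (employment, median annual wage) -- illustrative numbers
    "cashiers": (3_300_000, 30_000),
    "lawyers":  (860_000, 150_000),
}

decline = 0.04
lost = {name: jobs * decline * wage
        for name, (jobs, wage) in occupations.items()}

for name, dollars in lost.items():
    print(f"{name}: ~${dollars / 1e9:.2f}B in annual wages lost")
# Far fewer lawyers are affected, yet the dollar impact is larger.
```

With these numbers the lawyers' ~$5.2B in lost wages exceeds the cashiers' ~$4.0B despite roughly a quarter of the headcount, which is the point about who props up middle-class spending.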
Deep down you all know something is just going to randomly get released one week in the near future that makes you go "well, pack it up boys", or you just haven't been paying attention.
To clarify, just like the site says, I don't think those jobs are going away; maybe entry level will have the same issues some industries are encountering, but ideas of relative immunity are completely wrong.
I'd like to use this on my website and also see if I can create variations for some of the major EU markets.
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
...word?
https://apnews.com/article/trump-jobs-firing-f00e9bf96d01105...
All the "research" on the site comes from a single LLM prompt.
There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs, companies are squeezing every bit of dev slack time to produce more stuff with AI.
Is that notion supported by this content? The BLS outlook for most software engineering jobs is mostly in the "much faster than average" growth range.
I guess that was to be expected...
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.
In addition, little work is done to separate the classes. He has probation officers in the same node as teachers, completely separate from law enforcement.
Here are some much better examples:
- https://www.washingtonpost.com/nation/2022/05/04/abortion-nu...
- https://flowingdata.com/2015/04/02/how-we-spend-our-money-a-...
It's not great for them, but it's a definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Maybe it was linked from a comment somewhere on HN but just today I saw a post saying “Microwaves are the future of all food: if you don’t think so, you better get out of the kitchen”
Microwaves have already won. There will be a microwave in every home over the next few years.
It’s time to start microwave cooking or drown
It's annoying that the dishes still have some pooled water in them when the cycle finishes; it doesn't always get everything perfectly clean; I have to know not to put the knives or the wooden stuff or anything fancy in it. But in spite of all of that, I use it every day, it's a huge productivity boost, and I'd hate to be without it.
You think "there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term" is wrong?
No, AI has not "already" won. And phrasing it as you do, "It's taking over. It might be a year or two, or five, or ten" is an admission of that.
People may indeed not pause, but there's never any guarantee that the next step of progress is possible; whatever we reach may be all we can do, and we'll only find out when we get there. Or it might go hyperbolic and give us everything.
I'm not certain, but I suspect Jevons paradox is probably the wrong thing to bring up here, that's about cheaper stuff revealing more latent demand, and sure, that's possible and it may reveal a latent demand for everyone to build their own 1:1 scale model of the USS Enterprise (any of them) as a personal home, but we may also find that AI ends the economic incentives for consumerism which in turn remove a big driver to constantly have more stuff and demand goes down to something closer to a home being a living yurt made out of genetically modified photovoltaic vines that also give us unlimited free food.
(I mean, if we're talking about the AI future, why not push it?)
What I do think is worth bringing up is comparative advantage. Again, this is just an "I think", I'm absolutely not certain here, but if AI can supply all demand at unlimited volumes*, I think the assumptions behind comparative advantage break.
> It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Yes, and I think they've also not even managed to figure out the internet yet.
* and AI may well be able to, even if all models collectively "only" reach the equivalent of a fully-rounded human of IQ 115; and yes I know IQ tests are dodgy, but we all know what they approximate, by "fully rounded" I mean that thing their steel-man form tries to approach, not test passing itself which would have the AI already beat that IQ score despite struggling with handling plates in a dishwasher.
Ah, the classic, forever-untestable "it's just around the corner" hypothesis.
I've lived through multiple "it's gonna be over in 12-18 months" arguments since November 2022. It's a truism for any technology to say that it's going to get better over time. But if you're convinced that "AI has already won", why not make a specific prediction? What jobs are going to be obsolete by when?
Jevons paradox was never relevant to cognitive surplus. That isn't what it's about.
Cognitive surplus only strengthens Jevons paradox. Humans are a competitive advantage for businesses in a world dominated by human needs.
1. Brick and mortar is dead.
2. The internet will die.
3. What is the business model? (this one still seems to exist to this day to some extent, lol)
Reality fell between 1 and 2.
Just because it was wrong once doesn't mean it's always wrong. And was it really that wrong? The internet is great, but would it be the worst thing in the world if we didn't live our lives around it?
You'd probably put me into that bucket, although I'd disagree. I'm not at all against using AI to do something like: type up a high level summary of a product featureset for an executive that doesn't require deep technical accuracy.
What I AM against is: "summarize these million datapoints into an output I can consume".
Why? Because the number of times I've already witnessed in the last year someone using AI to build out their QBR deck or financial forecast, only to find out the AI completely hallucinated the numbers, makes my brain break. If I can't trust it to build an accurate graph of hard numbers without literally double-checking all of its work, why would I bother in the first place?
In the same way, if you tell me you've got this amazing dataset that AI has built for you, my first thought is: I trust that about as much as the Iraqi Information Minister, because I've seen first hand the garbage output from supposedly the best AI platforms in the world.
*And to be clear: I absolutely think businesses across the board are replacing people with AI, and they can do so. And I also think it'll take 18+ months for someone to start asking questions only for them to figure out they've been directing the future of their company on garbage numbers that don't reflect reality.
I think LLMs are the equivalent of someone with a PhD in English literature and a few other things: they can be very intelligent and literate without being particularly good with numbers.
On the other hand you have plenty of machine learning models that are absolute beasts at everything number-related. I'm assuming you wouldn't put George RR Martin in charge of building your datasets.
I think AI is not going anywhere.
I also don't think the future will play out as you envision. AI is a very poor replacement for humans.
And I say this as a misanthrope who doesn't have a particular beef against AI.
Published AI generated code is a mild negative signal for quality, but certainly not a fatal one.
Published AI generated English writing is worthless and should be automatically ignored.
I could understand if all the naysayers doing old fashioned stuff like work all of a sudden have no more work to do. But the AI Embracers will have what, in comparison? Five years of experience manipulating large language models that are smarter than them by a thousand fold?
Could you elaborate on this? Is it just a claim, or is there some consensus out there based on something that it doesn't/shouldn't apply?
So... What exactly are you talking about?
a. "Has already won"
b. "Might be a year or two, or five, or ten"
I use AI every day as part of my work, and it's very unclear to me where it's going; we have no idea if we're on an exponential or an S-curve. Now, normally people talk with conviction because they have more data. But one of the breakthroughs of crypto was this social convention of just having very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this: it's all about how I need to get on board, AI has already won, and you've got an advantage over me because you realise this.
Go back and look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
This cuts both ways...
> there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
What work do you think AI is going to replace? There are whole categories of people who are going to drown in the hubris of "AI being able to do the job" when it can't.
The moment one stops pretending that it's going to be AI, that we're getting AGI, and views it as another tool, the perspective changes. Strip away the hype and there is a LOT there. The walls of the garden are gonna get ripped down (agents force the web open, and create security issues). They end lots of dark patterns: you can't make your crappy service hard to cancel, because an agent is more persistent than that. One-size-fits-all software is going to face a reckoning (how many things are jammed into Salesforce sideways... that don't have to be?). These things are existential threats to how our industry is TODAY, and no one seems to be talking about the impact on existing business models when the overhead of building software gets cut in half (and how it leads to more software, not less).
However I was completely unimpressed with this tool when I saw it this weekend for two reasons:
The first is directly related to how this is built:
> These are rough LLM estimates, not rigorous predictions.
This visualization is neat (well except for reason number two), but it's pretty much just AI slop repackaged. There's no substance behind any of these predictions. Now I'm perfectly open to the critique that normal BLS predictions are also potentially slop, but I don't see how this is particularly valuable.
And the second: like 8% of the male population, I'm colorblind, so I can't read this chart.
For the record, I do agentic coding pretty much everyday, have shipped AI products, done work in AI research, etc.
Ironically, it's comments like yours that keep me the most skeptical. The fact that an attack on a strawman is the top comment really makes me feel like there is some sort of true mania here that I might even be a bit caught up in.
Over the past year (where Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2]
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
AI is great for searching, I'll give you that. And that itself is a big deal. In software development, there is also real value provided by AI if you use it for code reviews. But I'm not sure how worthwhile it would be if you have to retrain a model with new information just to give better search results and code reviews...
Maybe that will be subsidized by all the people like you who want everything to be done by AI, while the rest of us use it as a better search tool and for quick reviews... who knows!
brainbroken by chatbots lmao
Man.. I suggest you touch some grass. You are living in a bubble.
Companies Are Laying Off Workers Because of AI’s Potential - Not Its Performance - https://news.ycombinator.com/item?id=47401368 - March 2026
> Some companies that announced large headcount reductions because of AI have since revised their talent strategies or have faced public criticism. Klarna, for example, the Swedish fintech that offers “buy now, pay later” e-commerce loans, reduced its human workforce by 40% between December 2022 and December 2024 as it invested in AI. (The company used a hiring freeze and natural attrition, not layoffs to achieve this cut.) But in 2025 the company’s CEO told Bloomberg that Klarna was reinvesting in human support, explaining that prioritizing lower costs had also led to “lower quality.” A spokesman told HBR that the company has hired about 20 people to deal with customer service cases the AI assistant can’t handle, and that the use of AI “changes the profile of the human agents you need in the customer support role.” The language-learning company Duolingo announced that AI would be used to replace many human contractors, and it faced considerable criticism on social media.
> For one, AI typically performs specific tasks and not entire jobs. As an example, Nobel laureate Geoffrey Hinton stated in 2016 that it was “completely obvious” that AI would outperform human radiologists within five years. A decade later, there is no evidence that a single radiologist has lost a job to AI—in part because radiologists perform many tasks other than reading scan images. Indeed, there is a substantial shortage of them.
The 'AI-Washing' of Job Cuts Is Corrosive and Confusing - https://news.ycombinator.com/item?id=47401499 - March 2026
* Companies are "AI washing" layoffs, blaming artificial intelligence for workforce reductions they would have made anyway, according to OpenAI CEO Sam Altman.
* A Resume.org survey found that 59% of hiring managers say they emphasize AI's role in layoffs because it "is viewed more favorably by stakeholders than saying layoffs or hiring freezes are driven by financial constraints".
* The stated reason for the layoff matters more than the fact of the layoff, and framing cuts as proactive restructuring around AI can result in a valuation boost, even if the technology doesn't actually work.
> The AI premium isn’t even reliable. By late 2025, Goldman Sachs group Inc. found that investors were actually punishing AI-attributed layoffs, with shares falling an average of 2%. The analysts concluded that investors simply didn’t believe the companies. But Block’s surge shows the incentive hasn’t vanished. It’s just a lottery instead of a sure thing. And executives keep buying tickets.
> The broader data confirms the gap between narrative and reality. A National Bureau of Economic Research study published in February surveyed thousands of C-suite executives across the US, UK, Germany and Australia. Almost 90% said AI had zero impact on employment over the past three years. Challenger, Gray & Christmas tracked 1.2 million layoffs in 2025, and AI was cited in fewer than 55,000 of them. That’s 4.5%. Plain old “market and economic conditions” accounted for four times as many.
So! Sophisticated capital market participants don't believe this; why do people here?
AI is making CEOs delusional [video] - https://www.youtube.com/watch?v=Q6nem-F8AG8
It's been several years and nothing has changed except the AI grift is crumbling as we get out of the post-covid slump.