Reading through the challenge, there's a lot of data modelling and test harness writing and ideating that an LLM could knock out fairly quickly, but would take even a competitive coder some time to write (even if just limited by typing speed).
That'd give the human more time to experiment with different approaches and test incremental improvements.
And apparently it's not against the rules to use LLMs in the competition (https://atcoder.jp/posts/1495). I'd be curious what other competitors used.
> All competitors, including OpenAI, were limited to identical hardware provided by AtCoder, ensuring a level playing field between human and AI contestants.
I read that and assumed it meant a pretty restricted (and LLM-free) environment. I think their policy is pretty pragmatic.
The LLM was probably getting nowhere trying to improve after the first few minutes.
How did you come to that conclusion from the contents of the article?
The final scores are all relatively close. How could that happen if the AI was floundering the whole time? Just a good initial guess?
Yes, that and marginal improvements over it.
I would assume the LLM is trying an inhuman number of solutions and the best one was #2 in this contest.
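The strategy described above is essentially best-of-N sampling: generate a huge number of candidate solutions, score each, and keep the winner. A toy sketch of that idea (the `toy_score` function and the bitstring problem are hypothetical stand-ins for a real contest scorer, not anything we know OpenAI's system actually did):

```python
import random

def toy_score(candidate):
    # Hypothetical stand-in for a contest scoring function:
    # reward candidates whose adjacent bits alternate.
    return sum(1 for a, b in zip(candidate, candidate[1:]) if a != b)

def best_of_n(n_candidates, length, seed=0):
    """Generate n_candidates random bitstrings and keep the best-scoring one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_candidates):
        candidate = [rng.randint(0, 1) for _ in range(length)]
        s = toy_score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score
```

With enough candidates, even blind sampling lands near the optimum on a small problem, which is why raw compute alone can place surprisingly high in a heuristic contest.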
Impressive by the human winner but good luck on that in 2026.
The power of the chosen hardware will very much determine how well the AI performs. Giving everyone the same computer does not make the competition inherently fair.
It’s likely that human competitors would outperform the AI on hardware that is even a few years old.
Are the submissions available online without needing to become a member of AtCoder?
I want to see what these 'heuristic' solutions look like.
Is it just that the AI precomputed more states and shoved their solutions in as the 'heuristic', or did it come up with novel, broader heuristics? Did the human and AI solutions have overlapping heuristics?
First, there's a world coding championship?! Of course there is. There's a competition for anything these days.
Why is he exhausted?
> The 10-hour marathon left him "completely exhausted."
> ... noting he had little sleep while competing in several competitions across three days. "I'm completely exhausted. ... I'm barely alive."
oh! That's a lot.
> beating an advanced AI model from OpenAI ...
> On Wednesday, programmer Przemysław Dębiak (known as "Psyho"), a former OpenAI employee,
Interesting that he used to work there.
> Dębiak won 500,000 yen
JPY 500,000 -> USD 3367.20 -> EUR 2889.35
I'm guessing it's more about the clout than the payment, because that's not a lot of money for the effort spent.
to be fair he also said
> "Honestly, the hype feels kind of bizarre," Dębiak said on X. "Never expected so many people would be interested in programming contests."
Yeah I'm not in tech but I've seen his handle like 3 times today already, so he's definitely got recognition.
I think people need to realize that just because an AI model fails at one point, or a certain architecture has common failure modes, it doesn't mean things stay that way: billions of dollars are being poured into correcting those failures and improving in every economically viable domain. Two years ago AI video looked like a garbled 140p nightmare; now it's higher-quality video than all but professional production studios could make.
AI agents don't get tired. They don't need to sleep. They don't require sick days, parental leave, or PTO. They don't file lawsuits, share company secrets, disparage, deliberately sandbag to get extra free time, whine, burn out, or go AWOL. The best AI model/employee is infinitely replicable, can share its knowledge with other agents perfectly, and can clone itself arbitrarily many times. It doesn't have a clash of egos working with copies of itself; it just optimizes and is refit to accomplish whatever task it's given.
All this means is that gradually the relative advantage of humans in any economically viable domain will predictably trend towards zero. We have to figure out now what that will mean for general human welfare, freedom and happiness, because barring extremely restrictive measures on AI development or voluntary cessation by all AI companies, AGI will arrive.
Imagine a software company without a single software engineer. What kind of software would it produce? How would a product manager or some other stakeholder work with "AI agents"? How do the humans decide that the agent is finished with the job?
Software engineering changes with the tools. Programming via text editors will be less important, that much is clear. But "AI" is a tool. A compressed database of all languages, essentially. You can use that tool to become more efficient, in some cases vastly more efficient, but you still need to be a software engineer.
Given that understanding, consider another question: when has a company you worked for ever said, "That's enough software, the backlog is empty. We're done with software development for the quarter"?
Currently, AI failure modes (consistency over long context lengths, multi-modal consistency, hallucinations) make it untenable as a "full-replacement" software engineer, but effective as a short-term task agent overseen by an engineer who can review code and quickly determine what's good and what's bad. This lets a 5x engineer become a 7x engineer, a 10x become a 13x, etc., which allows the same amount of work to be done with fewer coders, effectively replacing the least productive engineers in aggregate.
However, as those failure modes become less and less frequent, we will gradually see "replacement". It will come in the form of senior engineers using AI tools noticing that a PR of a certain complexity is coded correctly 99% of the time by a given AI model, so they will start assigning longer, more complex tasks to it and stop overseeing the smaller ones. The length of tasks it can reliably complete gets longer and longer, until all a suite of agents needs is a spec, API endpoints, and the ability to serve testing deployments to PMs. At first it does only what a small, poorly run team could accomplish, but month after month it gets better, until companies start offloading entire teams to AI models and simply require a higher-up team to check and reconfigure them once in a while and manage the token budget.
This process will continue as long as AI models grow more capable and less hallucinatory over long context horizons, and as agentic/scaffolding systems become more robust and better designed to mitigate the issues that still affect the underlying models. It won't be easy or straightforward, but the potential economic gains are so enormous that it makes sense that billions are being poured into any AI agent startup that can snatch a few IOI medalists and a coworking space in SF.
This does not follow. Your argument, set in the 1950s, would be that cars keep getting faster, therefore they will reach light speed.
The speed equivalent of AGI is way below light speed: the compute required for silicon to replicate the synaptic complexity of the human brain is far below the maximum compute human civilization can achieve as allowed by physics.
The more important question is whether the progress we've seen in AI is putting us on reliable track to hit AGI in the near future. My opinion is that we are, and not just because Demis, Sam, Elon and Dario say so, though they have very good reasons for believing so (yes, besides mere hype and speculation.)
On a related note, many people also assume that just because something has been trending exponential that it will _continue_ to do so...
I'm bullish on specific areas improving (I'm sure you could selectively train an LLM on the latest Angular version to replace the majority of front-end devs given enough time and money, it's a limited problem space and a strongly opinionated framework after all), but for the most part enshittification is already starting to happen with the general models.
Nowadays even ChatGPT doesn't bother to refer to the original question posed after a few responses, so you're left summarising the conversation and starting a new context to get anywhere.
So, yeah, I think we're very much into finding the equilibrium now. Cost vs scale. Exponential improvements won't be in the general LLMs.
Happy to be wrong on this one..
Whatever model is cheap enough to serve for free is irrelevant when it comes to discussing SOTA AI capabilities and their impact. The state of the art has been improving markedly and reliably over the past three years: o3, Claude Opus 4, and Gemini 2.5 all surpass their predecessors on every benchmark and indicate that improvement isn't slowing down.
If GPT-5 comes out and it's somehow worse, then I'll concede your point. But so far the claim that the latest models are getting worse is mere speculation, and it makes little sense given that most labs are already aware of the potential for data contamination and the like, and have taken measures to ensure high data quality for models they're spending hundreds of millions to train.