You don't say! I might have been an LLM all along without even knowing it, since I too prefer single-file implementations.
Back in the old VB5/VB6 days, Visual Studio had a mode that showed the different functions in a file almost as if they were separate files. You could not scroll beyond a function's end, but you could easily switch between that mode and the full-file view. I always found that a nice way of working (though admittedly the world was a lot simpler back then).
Also, my preference for fewer but longer files only applies when I write the code myself. When working with AI, I think smaller files are beneficial for a quicker turnaround between human and machine.
E.g. the doc for ffmpeg -- which I checked by downloading the Docker image they provide to the model -- is a README that basically just says "this is ffmpeg" and that the docs can be found online. They do not allow models to get online.
So the model is supposed to reverse-engineer a black box using only a limited number of tries. I'm not sure even an ASI could do this under these constraints (without memorizing the ffmpeg code base, obviously).
In one of the posts, one of the authors mentions "usage docs". Obviously they had a command-line tool like `grep` in mind -- where a man page sort of specifies the program's behavior. But then they added sqlite, ffmpeg, php, etc. -- where a usage doc is maybe one millionth of the information you need to implement ffmpeg.
And, of course, there's no human baseline. I'd guess making such a baseline would cost billions of dollars.
> Our 200 tasks range from compact CLI tools to widely used software such as FFmpeg, SQLite, and the PHP interpreter. We evaluate 9 LMs and find that none fully resolve any task
Fwiw, this is very different from what we find in MirrorCode:
> Opus 4.6 successfully reimplements almost every program up to gotree’s size in our benchmark.
https://epoch.ai/blog/mirrorcode-preliminary-results
I don't have time right now to dig in to what could explain the difference (I'm working hard on getting the full MirrorCode out as soon as possible). But I suspect that the ProgramBench authors are either under-eliciting the AIs, or their tasks are unfair/impossible given the constraints, or both.
I hope to look more into it after releasing MirrorCode, and write up my conclusions.
Same with SWE-bench and others.
Interesting anyway. It will be nice to see these comparisons with open-weight models and how those fare.
https://epoch.ai/blog/mirrorcode-preliminary-results#appendi...
E.g. cal is totally routine. I would expect most sophomores to be able to write a perfectly good cal. In fact, the only program you tested that has anywhere close to the complexity of SQLite or FFmpeg is Pkl, and it looks like Opus 4.6 totally failed on it.
I think your results are consistent; you're just measuring different things. Your benchmark mostly tests LLMs' ability to write technically routine programs of moderate length -- yes, the bioinformatics package involves specialized domain knowledge, but not specialized Go engineering. ProgramBench is harder.
For Pkl, the preliminary results only went up to 1bn total tokens (costing $550, which would be cheap if LLMs could do the task). It might very well be solved at higher token budgets; see the report for more discussion of this.
The preliminary results are just on 4 targets. We have several Pkl-level and harder tasks in the full set which we're releasing soon.
In the following quote multiple things are not quite right:
> mostly involving higher-level languages, whereas ProgramBench are all very complex C programs (and much older programs with much more comprehensive test cases).
First, as I said above I think you're confusing the top-end of ProgramBench difficulty with the average. The quote in the OP is pretty clear that FFmpeg, SQLite, and PHP are the 3 hardest out of 200 in ProgramBench, and the bottom end is "compact CLI tools".
Second, I don't see the relevance of C vs. higher-level languages -- how does this make ProgramBench harder?
Third, for the test cases, I think you might be labouring under a misapprehension about how MirrorCode works? MirrorCode uses end-to-end tests from a variety of sources (the original program’s test suites, real-world data, and LLM-assisted generation). End-to-end means the stdout/stderr has to match exactly for each test case.
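To make "exact match" concrete, here is a minimal sketch of what such an end-to-end check could look like. This is a hypothetical harness, not MirrorCode's actual code, and a Python one-liner stands in for the program under test:

```python
import subprocess
import sys

def run_case(cmd, stdin_data, expected_stdout, expected_stderr=""):
    """Run one end-to-end test case: the candidate program passes only if
    its stdout and stderr match the expected output exactly."""
    result = subprocess.run(
        cmd, input=stdin_data, capture_output=True, text=True, timeout=30
    )
    return result.stdout == expected_stdout and result.stderr == expected_stderr

# Stand-in "program under test": any reimplementation must match byte for byte.
ok = run_case([sys.executable, "-c", "print('hello')"], "", "hello\n")
```

The point is that there is no partial credit at the level of a single case: a trailing space or a differently formatted error message fails it.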
This is incidental to the main disagreement, but btw I also doubt this.
Let's try to make the claim more precise. E.g., are you saying the average university undergraduate studying CS could reimplement cal from scratch (stdlib only), matching the output perfectly for all 1365 MirrorCode test cases, in (say) 3 days of full-time work (without AI assistance, obviously)? I'd bet against it!
Here is the manual for the cal that we use: https://media.githubusercontent.com/media/epoch-research/Mir...
You can also look at a full transcript of an LLM solving the task: https://epochai-public-eval-logs-manual.s3.amazonaws.com/eva...
The data is here: https://github.com/epoch-research/MirrorCode-data/
> Models favor monolithic, single-file implementations that diverge sharply from human-written code.
Well, all of our code is monolithic, with some files close to 20K lines of code, and we do use coding agents - not for the original code, but as of late. I've always had a hunch that splitting everything into tiny files does not improve AI coding-agent performance, although that feels counterintuitive given model context constraints.
To me, the important parts of a program should be clustered together so the implementation is obvious. Scattering the implementation across various files all over the source tree does not help much with building a mental model.
That also closely matches how software used to be written in the past.
If you treat the source tree seriously, you can communicate a lot with how it is structured
You can learn some information from a company's org chart, but it does not really tell you much about how the company works.
Arguably, a coding agent is less concerned with where the files are than with the code itself.
Though, it was some time ago, so things might have improved?
https://htmx.org/essays/locality-of-behaviour/ is a good fight back as exemplified in many stacks, eg https://harcstack.org
Yeah, that happens where I work and I hate it. A combination of lint rules and AI-reviewer prompts complains about long files and long functions. This means something that could be a 300-line self-contained function, readable linearly, gets split up into 6 functions across 6 files.
It's the illusion of "clean code". If you're casually skimming the code, you feel good. But as soon as you go beyond the surface level it becomes annoying.
This isn't the case if models are prompted to actually plan the file architecture beforehand, it's only the case if they're given a dumb monolithic "code this thing" prompt.
Therefore:
> blocking internet access entirely is the appropriate default for ProgramBench
The fact that your Anthropic coding assistant has a tendency to search the Internet for code to insert into your program may count as an additional copyright violation (besides the possibility of it reproducing recognizable fragments of its training data).
(I do not agree that copyright, at least in its current form, should apply to computer programs. But it is weird that the same companies that try to enforce copyright against others also insist on the use of coding assistants that work around copyright law - which is the main reason they can increase programming productivity: they may cut and paste code that you are not allowed to copy yourself.)
I would be interested to see if there’s a significant quantifiable difference.
Whenever something impacts a ton of people you will get some who gain a lot from it and some who don't, and they're generally unable to relate to the other side.
Maybe the thing works in some domain and not the other. Maybe the two groups are doing different things. Maybe the context around it is different. Maybe they have a different definition of "better".
I think it helps to keep an open mind and not grow attached to either position, but rather inquire, "well we did X with outcome Y, what did you do instead?"
Meaning: the model has no idea, no access to examples, no previous codebase trained on, nothing, for language X. But it knows English, it knows how to program in general (training data does contain other programming languages), and everything we expect from LLMs today. It just doesn't know jack about language X.
My software-as-a-contract-of-behaviors works like ProgramBench (I even cross-tested buildouts). I made an entire corpus layout for multi-agent, multi-platform builds to be compared, and even ran 50 contracts as an example. It honestly showed improvable areas, and distinct differences between model code.
{contract_name}/
└── submissions/
    └── {date}_{os}_{agent}_{model}_{stack}/
        ├── {contract}.osc.md
        ├── osc.osc.md
        └── results/
            └── {contract}.snapshot.json
That's it: compare against the same contract, or find a new contract to compare with. Lots of signed/hash-pinned files are all you need to reproduce software from nothing, with an LLM.
ProgramBench is close to that (they have a nice paper/article here), but I don't like the wording used: having software to start with is not a benchmark of making code but of reverse engineering.
github/s1ugh34d/osc
imo the benchmark should be named Can_It_Pull_a_CharDet_Bench
We have a lint that caps source code files at 650 LOC and it works really well.
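A cap like that is trivial to enforce in CI. The following is an illustrative stand-in, not the poster's actual lint; the 650 threshold just mirrors the comment:

```python
from pathlib import Path

MAX_LOC = 650  # cap from the comment above; purely illustrative

def files_over_cap(root, pattern="*.py", cap=MAX_LOC):
    """Return (path, line_count) for every matching file exceeding the cap."""
    offenders = []
    for path in sorted(Path(root).rglob(pattern)):
        with path.open(encoding="utf-8", errors="ignore") as f:
            loc = sum(1 for _ in f)
        if loc > cap:
            offenders.append((str(path), loc))
    return offenders

# In CI, exit non-zero whenever files_over_cap("src") is non-empty.
```

Counting raw lines rather than logical statements keeps the rule cheap and argument-free, which is arguably why such caps stick.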
Tomorrow I'm launching a benchmark where I check if an LLM can build a Airbus A320 from scratch without internet. (Spoiler: no LLM succeeds)
Can American corporate desires finally kill community based open source once and for all?
I mean, it seems clear to me, companies hate the GPL, and they're willing to play these games to try to get that code into their hands under the MIT license and they're happy to use these thinly disguised methods to get it. I see all these absurd ideas as part and parcel of this larger strategy.
I find the current state of affairs disgusting.
Think about it, all these compilers, tooling, what a waste!
I imagine a future where chipset makers will provide a model you can just prompt to "act upon that chipset" and voila, "You're absolutely right! Here is your binary."
We won't be developers, we won't be devops, we'll be rollmops! /s
>We won't be developers, we won't be devops, we'll be modelops! /s
I can still see this happening with higher-level langs. The thing is, the compiler is not replaced in the training data; more likely, LLMs will give rise to semideterministic layers on top of compilers.
I could see nvidia achieving this first with how nice the devex is with CUDA
But even if it's putatively implementing the same algorithm, LLMs certainly do not output basically the same finance Python as they would mechanical engineering Python. The style will be a little different. Sometimes the performance/clarity tradeoffs will be different. Sometimes it'll be fairly fancy and object-oriented, other times it'll be more low-level "objects are just dicts."
It's way more than a higher abstraction layer: LLM codegen involves a nontechnical tangling of concerns that doesn't exist with even the hoitiest-toitiest proof-checking compilers. It's a complete sea change. I find it incredibly disconcerting... for the same reason, by the way, that assembly programmers found Fortran and C disconcerting, and continued to reliably find employment for a good 40 years after higher-level languages were invented :) Actually even today. The assembly programmers who got hosed by C tended to be electricians who learned on the job - it's kind of cool to read old manuals from the 70s, carefully (and correctly!) explaining to electricians that a computer program is essentially an ephemeral circuit.
But I think there are specific skills around scientific thinking (learned at a formal college) and engineering carefulness (learned via hard knocks) that aren't going anywhere.