// wavelength of used primaries, according to preetham
const vec3 lambda = vec3( 680E-9, 550E-9, 450E-9 );
// this pre-calcuation replaces older TotalRayleigh(vec3 lambda) function:
// (8.0 * pow(pi, 3.0) * pow(pow(n, 2.0) - 1.0, 2.0) * (6.0 + 3.0 * pn)) / (3.0 * N * pow(lambda, vec3(4.0)) * (6.0 - 7.0 * pn))
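For reference, the pre-calculated constant the comment describes can be reproduced directly from that formula. The constants `n`, `N`, and `pn` below do not appear in the snippet; they are the refractive index of air, molecular number density, and depolarization factor commonly used in Preetham-style sky implementations, so treat them as assumptions:

```javascript
// Sketch: reproduce the precomputed total Rayleigh scattering coefficients
// from the formula in the shader comment above.
// n, N, pn are typical values from Preetham-style implementations (assumed).
const lambda = [680e-9, 550e-9, 450e-9]; // red, green, blue wavelengths (m)
const n = 1.0003;    // refractive index of air
const N = 2.545e25;  // molecular number density (molecules / m^3)
const pn = 0.035;    // depolarization factor

function totalRayleigh(lambda) {
  return lambda.map((l) =>
    (8 * Math.PI ** 3 * (n * n - 1) ** 2 * (6 + 3 * pn)) /
    (3 * N * l ** 4 * (6 - 7 * pn))
  );
}

console.log(totalRayleigh(lambda));
// The blue coefficient comes out far larger than the red one,
// which is the lambda^-4 dependence that makes the sky blue.
```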
Who's Preetham? Probably one of the copyright holders on this code. https://tommyhinks.com/2009/02/10/preetham-sky-model/
https://tommyhinks.com/wp-content/uploads/2012/02/1999_a_practical_analytic_model_for_daylight.pdf
Rather than stolen from Mr. Preetham, it's much more likely this fragment was generated from the large number of Preetham algorithm implementations out there; I know at least Blender and Unreal implement it, and probably plenty of others do as well. Nobody is going to sue you for using their implementation of a skybox algorithm from 1999, give us a break. It's so generic you can probably only write it in a couple of different ways.
If you're worried about it, you can always spend a day with Claude, ChatGPT, and yourself looking for license infringements and clean up your code.
For personal use maybe not, but that's not the point; the point is that it's spitting out licensed code without even letting you know. Now if you're a business that hires exclusively "vibe" coders with zero experience with enterprise software, you're on the hook and will most likely be sued.
How would you know? Do you have another AI scan for copyright violations? In terms of false negatives, how are disputes resolved?
Seems like a massive attack surface for copyright trolls.
Seems fine given the project already uses three.js and so has to include license info for it anyway.
Codex: https://github.com/stopachka/cscodex Gemini: https://github.com/stopachka/csgemini Claude: https://github.com/stopachka/csclaude
Edit:
https://github.com/vorg/pragmatic-pbr/blob/master/local_modu...
https://github.com/vorg/pragmatic-pbr/blob/master/local_modu...
This looks like where the source code was stolen from: that repository is unlicensed, and copying from it is copyright infringement as a result.
Unlike your results, which aren't an exact match (or likely even a close enough match to be copyright infringement if the LLM was merely inspired by them; consider that copyright doesn't protect functional elements), an exact match of the code is here (and I assume from the comment I linked above that this is a dependency of three.js, though I didn't track that down myself): https://github.com/GPUOpen-LibrariesAndSDKs/Cauldron/blob/b9...
Edit: Actually, on further thought, the date on the copyright header vs. the git dates suggests the file in that repo was copied from somewhere else... anyway, I think we can be reasonably confident that a version of this file is in the dependency. Again, I didn't look at the three.js code myself to track down how it's included.
If there's any copyright infringement here, it would be because bog-standard web tools fail to comply with the licenses of their dependencies and include a copy of the license, not because of LLMs. I think that is actually the case for many of them? I didn't investigate the network traffic to check whether licenses are included.
Sure. It's a problem that corporations run by more or less insane people are the ones monetizing and controlling access to these tools. But the solution to that can't be even more extensive private, monopolistic property claims to thought-stuff. Such claims are usually how those crazy people got where they are.
Do you think a world where Elsevier didn't just own the papers, but also rights to a share in everything learned from them, would be better for you?
E.g. the latest Anno game (117) received a lot of hate for using AI-generated loading screen backgrounds, while I have never heard of a single person caring about the code, which was probably heavily AI-generated.
I remember when CS Promod was being made during the transition from CS 1.6 to Source; the 1.6 community didn't want to move over to Source, back before GO/CS2 came around.
Cool to see what's basically Quake 1/Doom style, but this is a far cry from Counter-Strike. Although if the netcode could be designed and implemented, I don't see why making a lower-tier Counter-Strike wouldn't be doable. I'd play it if it were a Quake-style, old-graphics version of CS that allowed for skill gaps.
Great article, love the nostalgic feeling.
I’d also love a BattleBit-style CS version. (BattleBit was a fun low-poly Battlefield spoof.)
As for “it compiles”: that is nothing new. I have written code that I go back to later and wonder how it ever compiled. My process now is often to let the agent prototype something and see if it works, then go back and re-engineer it properly. Does doing it twice save time? Yeah, because who's to say my first take on the problem would have been correct? Now I have something concrete to judge as definitely wrong or right when considering how to rebuild it for long-term use.
1. More people that wanted to make games can.
Thanks to Unreal Engine, you don't need to be a Tim Sweeney-level expert to make compelling games. I see LLMs as another abstraction in the same spirit.
2. You get more leverage
The more abstractions you have, the more you can do with less. This means less bureaucracy and more of a chance to make _exactly_ what you wanted.
I understand how the craft changes underneath you, and that can feel depressing, but if we see it as tools, I think there's lots of good ahead.
I could be wrong, of course, and it may be true that your work will change very soon. Maybe someone else has better examples to propose?
It's the same when I hear people complain about how complex new UI frameworks are. The web still runs perfectly well on simple HTML, CSS, and JavaScript. There is no federal police force that will arrest you for not using React.
Yes, I can do it. In my free time. But that part of my job that was enjoyable? Poof. Not anymore. Can't compete, get with the times, be more productive.
I spend 40% of my waking time at work. It's a massive downgrade.
Furthermore, if you have it sandboxed, you can also ask it to install any necessary dependencies or toolchains, which is really nice.
Now show us the cost, the time it took, and how much babysit... sorry, "human supervision" was necessary.
https://www.youtube.com/watch?v=fm-OoCWQlmc
The only time I spent outside of the video was to deploy to Vercel. I sped the video up in a bunch of places, but didn't cut anything. The total time was about 2 hours.
I mentioned it in the post, but there was definitely some hand-holding towards the end, where I don't think a non-programmer would have succeeded.
> Codex, Opus, Gemini try to build Counter Strike
Even though the prompt mentions Counter-Strike, it actually asks for the basics of a generic FPS, and after a few iterations ends up with some sort of Minecraft-looking generic FPS with code that would never make it to prod anywhere sane.
It's technically impressive. But functionally very dubious (and not at all anything remotely close to Counter-Strike besides "being an FPS").
Fitting.
the code and output is literal slop
it's not known how much editing and debugging was done by the team either
you could have done this in 2022 too, without that much debugging
https://github.com/instantdb/instant/pull/2010
Once this lands lightbox should be up. Thank you!
In other words, this is slop. We know these new models can generate slop images, text, videos, and code. Sometimes slop can be useful; maybe you can shape it into something useful, maybe you can slop a slopper. But we're learning it's not economical--this is some of the costliest slop we've ever made.
AI coding needs someone behind it to steer it to do better, and in some cases it does do better. But it still hasn't left the junior phase, and until it does, you still need a good developer to deliver good results.
They're not thinking or reasoning or understanding. It's just amazing autocomplete. Humans not being able to keep themselves from extrapolating or gold rushing doesn't change that.
They are. I know a lot of people don't want to admit this, but they are. They're getting better with each release.
> But we're learning it's not economical--this is some of the costliest slop we've ever made.
Huh? How on earth would you know whether my usage of LLMs has been worth it or not?
> Sometimes slop can be useful; maybe you can shape it into something useful
Man, I just spent the last 2 weeks with a CEO who got a Bolt.new subscription to generate some high-level mock-ups for me to work from, which just saved us months of back and forth.
You know what's the best part? Those same mockups can be used to gather user feedback with a functioning UI without me having to spend weeks building it and it ending up wrong anyway.
Sometimes it irks me, but now I've sorta come to embrace devs like you. You're guaranteeing I have a job because you refuse to acknowledge the very obvious thing that's happening.
I’ll take that bet.
"Now let's make shots work. When I shoot, send the shot as a topic, and make it affect the target's HP. When the target HP goes to zero, they should die and respawn."
This is not how shooting is implemented in a competitive first-person shooter.
If you don't understand how a multiplayer FPS works, how can you tell if the AI has actually created one for you or not?
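For what it's worth, competitive shooters generally make the server authoritative: the client reports only "I fired at tick T aiming here," and the server rewinds the target's recorded position to that tick and runs the hit test itself (lag compensation), rather than letting the client announce hits. A toy 1D sketch of that server-side check; the tick counts, radii, and data shapes are all invented for illustration:

```javascript
// Hypothetical server-side hit validation with lag compensation.
// Positions are 1D for brevity; a real engine traces a ray in 3D.
const HISTORY_TICKS = 32;

// Ring buffer of recent positions per player: playerId -> [{tick, x}, ...]
const history = new Map();

function recordPosition(playerId, tick, x) {
  const h = history.get(playerId) ?? [];
  h.push({ tick, x });
  if (h.length > HISTORY_TICKS) h.shift(); // drop snapshots older than the window
  history.set(playerId, h);
}

// The client never says "I hit them"; it says "I fired at tick T at aimX".
// The server rewinds the target to tick T and decides whether the shot landed.
function validateShot(targetId, tick, aimX, hitRadius = 0.5) {
  const h = history.get(targetId) ?? [];
  const snapshot = h.find((s) => s.tick === tick);
  if (!snapshot) return false; // tick too old or unknown: reject the shot
  return Math.abs(snapshot.x - aimX) <= hitRadius;
}

// Usage: target was at x=10 on tick 100, then moved to x=12.
recordPosition("target", 100, 10);
recordPosition("target", 101, 12);
console.log(validateShot("target", 100, 10.2)); // true: hit where they *were*
console.log(validateShot("target", 100, 12));   // false at the rewound position
```

Contrast that with the prompt's "send the shot as a topic, and make it affect the target's HP," which trusts whatever the shooting client claims.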
Also, Unreal's source code will be the very last thing LLMs understand. It's some of the most complex software ever written.
There’s an algorithm called Nanite for automatically reducing the triangle count of geometry that’s far from the camera. As in, there are no manually made separate level-of-detail models; the algorithm can modify the models themselves, reducing quality as they get farther away.
This one algorithm is a tiny piece of the engine, yet has a 1,000-page white paper.
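For contrast, the traditional approach Nanite replaces is discrete LOD: artists author a few versions of each model and the engine picks one by camera distance. A toy sketch of that baseline (the thresholds and mesh names are made up); Nanite instead builds a cluster hierarchy from the source mesh and refines it continuously, which is where all the complexity lives:

```javascript
// Classic discrete level-of-detail selection: pick one of a few
// hand-authored meshes based on distance from the camera.
// Thresholds and mesh names below are illustrative only.
const lods = [
  { maxDistance: 20, mesh: "soldier_lod0" },       // full detail, close up
  { maxDistance: 60, mesh: "soldier_lod1" },       // reduced triangle count
  { maxDistance: Infinity, mesh: "soldier_lod2" }, // coarsest fallback
];

function selectLod(distance) {
  // lods is ordered nearest-first, so the first match is the right one.
  return lods.find((l) => distance <= l.maxDistance).mesh;
}

console.log(selectLod(5));   // "soldier_lod0"
console.log(selectLod(45));  // "soldier_lod1"
console.log(selectLod(300)); // "soldier_lod2"
```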
Also, even when I don’t know how something works algorithmically, usually I at least have some intuition about where to start. I haven’t the slightest idea how to approach this problem.
No way. Take baby steps. Write an operating system first. Write a compiler first.
[1]: https://jms55.github.io/posts/2024-06-09-virtual-geometry-be...
Edit: I don't mean to sound disparaging; it's some genuinely cool algorithms. It's just that Epic is incentivized to hype it up, so you get a huge paper and multiple talks designed to make it seem even more impressive.