I'm also looking into simplifying it a bit more with environment maps, which I shared on my Bsky: https://bsky.app/profile/dannyspencer.bsky.social/post/3mecu...
How did you get the actual idea to do this in the first place?
It's fuzzy, but I think it was because I was learning GB assembly while working on shaders in Houdini or something (I'm a tech artist). The two worlds collided in my head: I saw that there's no native multiplication on the GB and figured it'd be a fun problem.
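For context, the Game Boy's CPU has no multiply instruction, so software multiplies are typically built from the classic shift-and-add loop. A minimal Python sketch of that idea (illustrative only, not the shader's actual code):

```python
def mul8(a: int, b: int) -> int:
    """Shift-and-add multiply: the usual software substitute on CPUs
    without a MUL instruction. Keeps a 16-bit result, as two 8-bit
    inputs can't overflow 16 bits."""
    result = 0
    while b:
        if b & 1:                      # low bit of b set: add shifted a
            result = (result + a) & 0xFFFF
        a = (a << 1) & 0xFFFF          # shift a left each iteration
        b >>= 1                        # consume one bit of b
    return result

print(mul8(13, 11))  # 143
```

Real GB routines often unroll this or use quarter-square lookup tables instead, trading ROM space for speed.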
This is impressive and cool but I don’t understand the bitterness here.
Here are the frames: https://github.com/nukep/gbshader/tree/main/sequences/gbspin...
The way I look at it, if the input and the math in the shader work with 3D vectors, it's a 3D shader. Whether there is also a 3D rasterizer is a separate question.
Modern 3D games exploit this in many different ways. Prerendering a 3D model from multiple views might sound like cheating, but impostors are a real technique used by proper 3D engines.
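For what it's worth, the impostor idea boils down to snapping the current view angle to the nearest prerendered frame. A rough sketch, with a hypothetical 64-frame set (matching the frame count mentioned below for the GBDK demo):

```python
import math

N_FRAMES = 64  # assumed number of prerendered frames per full turn

def frame_for_yaw(yaw_radians: float) -> int:
    """Map a camera yaw angle to the index of the nearest
    prerendered impostor frame (0..N_FRAMES-1)."""
    step = 2 * math.pi / N_FRAMES
    return round(yaw_radians / step) % N_FRAMES

print(frame_for_yaw(0.0))      # 0
print(frame_for_yaw(math.pi))  # 32 (half turn)
```

At runtime the engine just blits the selected frame; no per-pixel 3D math happens on the console at all, which is exactly why it scales badly as the number of view angles grows.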
https://github.com/gbdk-2020/gbdk-2020/tree/develop/gbdk-lib...
Unfortunately, the 2D imposter mode has pretty significant difficulties with arbitrarily rotated 3D. The GBDK imposter rotation demo needs a 256k cart just to handle 64 rotation frames in a circle for a single object. Expanding that out to fully 3D views and rotations gets quite prohibitive.
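To put rough numbers on that (back-of-envelope, with assumed angular steps rather than anything measured from the demo):

```python
# The GBDK rotation demo reportedly uses a 256k cart for 64 in-plane
# rotation frames of one object, i.e. roughly 4 KiB per frame.
CART_BYTES = 256 * 1024
FRAMES_IN_CIRCLE = 64
bytes_per_frame = CART_BYTES // FRAMES_IN_CIRCLE  # 4096

# Hypothetical extension to arbitrary 3D views at a similar angular
# step: 64 yaw steps x 32 pitch steps, with roll covered by the
# existing 64 in-plane frames.
yaw_steps, pitch_steps = 64, 32
frames_3d = yaw_steps * pitch_steps * FRAMES_IN_CIRCLE
total = frames_3d * bytes_per_frame
print(f"{frames_3d} frames, ~{total / 2**20:.0f} MiB")
# orders of magnitude beyond any real GB cart
```

Even generous deduplication or compression wouldn't close a gap that large, which is the "quite prohibitive" part.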
I haven't tried downloading RGBDS to compile this yet, but I suspect the final file is similarly large, pushing the upper limits on GB cart sizes.
I think they’re correct in calling this a 3D shader.
> I believe in disclosing all attempts or actual uses of generative AI output, because I think it's unethical to deceive people about the process of your work. Not doing so undermines trust, and amounts to disinformation or plagiarism. Disclosure also invites people who have disagreements to engage with the work, which they should be able to. I'm open to feedback, btw.
Thank you for your honesty! Also tremendous project.
I just wanted to document the process for this type of project. shrug
Still, even though they suck at specific artifacts or copy, I've had success asking an LLM to poke for holes in my documentation. Things that need concrete examples, knowledge assumptions I didn't realize I was making, that sort of thing.
Sweet Gameboy shader!
If it’s the norm to use LLMs, which I honestly believe is the case now or at least will be very soon, why disclose the obvious? I’d do it the other way around: if you made it by hand, disclose that it was entirely handmade, without any AI or Stack Overflow or anything, and we can treat it with respect and ooh and ahh accordingly. But otherwise it’s totally reasonable to assume LLM usage. At the end of the day the developer is still responsible for the final result and how it functions, just like a company is responsible for its products even if it contracted out their development, or how a filmmaker is responsible for how a scene looks even if they used Adobe After Effects to content-aware remove an object.
// Source: https://stackoverflow.com/q/11828270
With any AI code I use, I adopted this style (at least for now):

// Note: This was generated by Claude 4.5 Sonnet (AI).
// Prompt: Do something real cool.

Since you're already doing what's essentially demoscene-grade hacking, have you thought about putting together a short demo and entering it at a demoparty? There's a list of events at demoparty.net - this kind of thing would absolutely shine there.
I've still got my Gameboy collection, but rarely use it. It's just so much easier to fire up an emulator these days.
https://www.analogue.co/pocket
I went with getting a GBA SP and replacing the screen with a more modern panel. The kids love it.
By modifying the instruction operand!
2A ld a, [hl+]
D6 08  sub a, 8

Probably could have been written that way though, since it is spinning the camera view rather than the object.
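Since `sub a, 8` encodes its operand as an immediate byte (the `08` after opcode `D6`), patching that second byte in RAM changes the subtraction amount on the next execution: classic self-modifying code. A tiny Python simulation of the idea (hypothetical helper, not the project's actual code):

```python
# `sub a, n` assembles to opcode 0xD6 followed by the operand n.
# When the routine lives in writable memory, overwriting that
# operand byte retargets the instruction without rewriting code.
code = bytearray([0xD6, 0x08])  # sub a, 8

def run_sub(a: int, code: bytearray) -> int:
    """Interpret just the `sub a, n` instruction (illustrative)."""
    assert code[0] == 0xD6
    return (a - code[1]) & 0xFF  # 8-bit wraparound, like register A

print(run_sub(0x20, code))  # prints 24 (0x20 - 8 = 0x18)
code[1] = 0x10              # patch the operand: now `sub a, 16`
print(run_sub(0x20, code))  # prints 16 (0x20 - 16 = 0x10)
```

On real hardware this only works for code executing from RAM (cart ROM isn't writable), which is presumably why the routine is copied there first.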