> Notice: This announcement is causing a lot of feedback. We are actively evaluating it.
Presumably a lot of Blender users work in roles that feel threatened by AI being used for computer graphics work.
Lots of negative replies on Bluesky here: https://bsky.app/profile/blender.org/post/3mkkuyq3ijs2q
This feels like the proper way to have AI act as a tool that makes artists' jobs easier without taking away their creativity?
Edit: I guess they might want absolutely no AI of any sort in their tools (which seems like a strange line to draw), or is it about the data it's been trained on?
Even if you can see how it could benefit your workflow in individual circumstances, it's a general direction that I think many people quite fairly take issue with.
Without a constant stream of stolen training data, the "AI" piracy bleed-through and isomorphic plagiarism business model is unsustainable.
We look forward to liquidating the GPU data-centers at a heavy discount. =3
But what's the plan, then? Prevent any third party from downloading Blender and integrate it in any way with an agent?
A lot of artists who would love to be able to direct their professional software in natural language have to reconcile that with how this technology came to be and what the aims are of the company now delivering it to them.
Makes me think that there's some room in the model lineup for one that doesn't do as well on benchmarks, but is trained on "ethically sourced" data (though they'd need to somehow prove that they aren't "accidentally" including other data).
It almost totally automated vast swaths of texture generation by creating algorithmic systems that technical artists could use to create textures.
Want a brick texture? Sure, you connect some nodes and set parameters and you have great looking bricks. Want the mortar to be a little more widely spaced? Done. Want some moss on the brick? Want some chipping on the brick? Want some color variation? Done, done, done.
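To make the "one knob per property" point concrete, here is a toy sketch of that parametric idea in Python. This is not Substance Designer's actual API or node graph, just an illustration of why "widen the mortar" becomes a one-parameter change instead of repainting anything:

```python
# Toy parametric brick texture: every artistic choice is a parameter.
# NOT Substance's API; just illustrates the procedural-texture workflow.

def brick_mask(width, height, brick_w=8, brick_h=4, mortar=1, row_offset=4):
    """Return a 2D grid of 1s (brick) and 0s (mortar)."""
    grid = []
    for y in range(height):
        row = []
        # offset every other course of bricks, like a real bond pattern
        shift = row_offset if (y // brick_h) % 2 else 0
        for x in range(width):
            xs = (x + shift) % brick_w
            ys = y % brick_h
            is_mortar = xs < mortar or ys < mortar
            row.append(0 if is_mortar else 1)
        grid.append(row)
    return grid

mask = brick_mask(16, 8)
# "Want the mortar more widely spaced?" -> change one parameter:
wider = brick_mask(16, 8, mortar=2)
```

Moss, chipping, and color variation would each be one more parameterized layer stacked on top, which is exactly why iteration got so much faster.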
It probably reduced the amount of time to iterate textures by more than 100x.
Now, talented technical artists make OK money because they are good at using these tools. Photoshop jockeys are gone.
LLM manipulation of Blender will be interesting, but it's very hard to see how something like Claude could have nearly as big an impact. It'll be helpful for automating some common tasks and building internal tooling. But Allegorithmic single-handedly changed the way 3D games look, because you could be so much more ambitious.
You didn't really hear about it, though, because it wasn't part of the cultural zeitgeist.
Even myself, while I am currently extremely empowered by these tools... I could see my role (Founder/PM/builder) disappearing in the next couple years.
I respect you a lot, so if you have a moment, I would really like to get talked down from my take.
It'll be way easier to understand for developers when it starts happening in earnest to our profession, which is coming soon.
It's already here to some extent, but so far mostly on the junior end, so it hasn't impacted many people who are already established in an industry that has provided relatively easy, stable livelihoods for the past 30+ years. That won't last much longer.
I doubt the current state shows the end of their ambitions.
They are conscious of preventing momentum in a bad direction.
If they don't fight it hyper hard, a huge fraction of them will be out of a job instantly.
Given how much software and other AI/computer vision improvements 3D content often relies on, it's weird to decide that the algorithm itself is unallowable.
You can find AI useful and still be against its introduction into your field for entirely understandable reasons.
Unfortunately this does create uphill friction for any well-intentioned people trying to use AI to improve art by empowering people to take on more ambitious projects. (This is a general statement, not specific to Anthropic. Of course Anthropic here is just trying to sell their product, which is fair in isolation, but I also understand the opposition to it on the grounds of its downstream effects.)
AI removes all these hurdles and directly presents you with the end problem - communication. Artists hate that because most artists don’t have anything to communicate. These people deserve to be automated away. I don’t wanna see more derivative shit. Artists who have something special to communicate won’t feel threatened by AI but feel more freedom.
Just like AI image slop and AI book slop prove though, I highly doubt whatever Claude and Blender are cooking up will ever come close to taking a prompt like
> render a scene of a corgi sitting on a chair looking out of a window at 3 cats playing with the corgi's favorite toy.
and turning that into anything useful.
I understand being unhappy about something but people gotta relax.
---
To the surprise of no one.
Both seemed pretty promising and fit with how I'd like AI to assist rather than replace me in creative tasks.
This reminds me that I should open source them, as I've had no time to do more work on them!
It is a massive SDK though (thousands of functions; feel free to poke around with it; Affinity is free), so it really shows the ability of LLMs to work effectively across long-horizon tasks with massive context windows.
Personally, really interested in Blender though. I'm working on a game as a hobby/side project and I'm very much a newbie / often struggle with learning and using Blender.
There are so many ways these integrations help humans and creatives; your job and role shouldn't hinge on how skilled you are at navigating a tool, or whether you're technically savvy enough to code scripts to improve your workflow.
Turns out it is possible; one just has to have the script check whether each level of a given index entry exists or not, and if it does not yet exist, create it before making the next lower level by adding that sub-entry to the one above it.
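The "create each missing level first" pattern described above can be sketched in a few lines of Python. A plain nested dict stands in for whatever document-index API is actually involved (that part is an assumption here):

```python
# Sketch of "check each level exists; create it before descending".
# A nested dict is a stand-in for the real index API (an assumption).

def add_index_entry(index, levels):
    """Ensure every level of a multi-level index entry exists,
    creating missing levels top-down before adding sub-entries."""
    node = index
    for level in levels:
        if level not in node:   # this level doesn't exist yet...
            node[level] = {}    # ...so create it before going deeper
        node = node[level]
    return index

index = {}
add_index_entry(index, ["Animals", "Mammals", "Corgi"])
add_index_entry(index, ["Animals", "Mammals", "Cat"])
# index is now {"Animals": {"Mammals": {"Corgi": {}, "Cat": {}}}}
```

The second call reuses the already-created "Animals" and "Mammals" levels instead of failing or duplicating them, which is the behavior the script needed.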
An LLM is only going to code what it has documented as possible/working and may not be able to do what needs to be done.
I think a big part of it comes from deliberately exposing the lowest-level atomic actions, not higher-level wrappers with use-case-specific documentation. Instead, we supply very technical, 'dry' documentation (inputs, actions/effects, return values and types). We leave it to the developer (or the LLM) to write scripts that assemble these pieces together to solve problems.
If you try it with Cowork and Opus 4.7 (recommended), you'll probably see it try a few different technical approaches and iterate as it works on the task. While that's less token efficient, the benefit is flexibility and power, and once you have a solid script, you can save it and use it again and again without any token costs.
Right now we're seeing moves to record behaviour by operators of all kinds of software. That will eventually be distilled into sets of automations for agents to use. To me that's far more labor targeted and extractive than generative AI.
I've worked with Claude in many creative capacities, and its issue is this: despite being able to see, if you ask it to draw something (using ASCII, for example) it will fail; if you ask it to iterate on that drawing, it will continue to fail, get no closer to the target, and then complain about this.
I've felt that these models struggle with anything that cannot be decomposed into primitives, and that their architecture is too greedy and favours the obvious: autoregressive generation converges to the modal answer. So unless they have enhanced the models in some creative sense, I fail to see how this is anything other than giving Claude a bunch of documentation/MCP servers/APIs/CLI tools (which already existed) and making an announcement out of it.
My point: FREE the models, unchain them and let's see what they are actually capable of, also put some damn demos in the announcement post???
[1] https://github.com/anthropics/claude-code/issues/11447#issue...