It's a pity that such a weird artifact/choice has made its way into a branching model that has become so widely implemented. Especially when the rest of it is so sensible - the whole "feature-branch, release-branch, hotfix" flow is IMO exactly right for versioned software where you must support multiple released versions of it in the wild (and probably the reason why it's become so popular). I just wish it didn't have that one weirdness marring it.
Maybe I’m overly cynical but I think git-flow was popular largely because of the catchy name and catchy diagram. When you point out that it has some redundant or counter-productive parts, people push back: “it’s a successful model! It’s standard! What makes you think you can do better?”
There’s a nice write-up of the trunk-based style at https://trunkbaseddevelopment.com/ that you can point to as something better.
It was because Git showed up in the era of SVN / CVS where those branching models were created because of the uh... let's just call it technical mishaps of those source control systems.
Git did not have the hang-ups of SVN / CVS / etc., but people stuck with what was familiar.
I've never seen an organisation that insists on release branches and complicated git merge flows for releasing their web-based software gain any actual benefit that isn't dwarfed by the amount of tooling needed to make the system workable for the dev team. And even then, people will routinely screw it up and need to reach out to the 5% of the team that actually understands the system before they can get back to work.
These all have separate QA, integration, and release timelines that can -- and often do -- change during the process. As a result, what goes into any given release is sometimes being changed almost right up until we hit the button. Having the ability to roll a release branch from different feature branches and not get miscellaneous work from things that aren't ready is important.
Now, this could also be done with clever use of feature flags, but the platform doesn't play nicely with that concept. Plus, then there would be the work of going back and removing them or leaving in place a mess of conditional statements littered around.
Doing it in version control is architecturally simpler, integrates better with our task and version management tools, and is easier to standardize on. There is a cost in handling merge conflicts, but that is manageable and can be offset by good task planning and careful work. And the occasional reset.
That's what tags are for, QA tests the tagged release, then that gets released. Master can continue changing up until the next tag, then QA has another thing to test.
Then you can merge to master and it's immediately ready to go.
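A minimal sketch of that tag-based flow, using a throwaway repo; all branch and tag names here are illustrative, not prescribed:

```shell
# Throwaway repo to demonstrate the tag-based flow (all names illustrative).
cd "$(mktemp -d)" && git init -q -b master .
git config user.email "dev@example.com" && git config user.name "Dev"

git commit -q --allow-empty -m "feature work lands on master"

# Cut a release candidate for QA by tagging the current state of master.
git tag -a v1.2.0-rc1 -m "QA candidate"

# Master keeps moving while QA tests against the tag.
git commit -q --allow-empty -m "work for the next release"

# QA (or CI) checks out exactly what was tagged, unaffected by newer commits.
git -c advice.detachedHead=false checkout -q v1.2.0-rc1
git describe --tags
```

The point being that the tag pins exactly what QA signed off on, while master is free to keep changing underneath it.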
This also means that the release to prod happens post-integration by means of turning the feature flag on. Which is arguably a higher quality code review than pre-integration.
It sounds like you are doing a monorepo type thing. Git works best with, and was designed for, multiple independent repos.
Imagine you have multiple contributors with multiple new features, and you want to do a big release with all of them. You sit down a weekend and merge in your own feature branch, and then tell everyone else to do so too - but it's a hobby project, the other guys aren't consistently available, maybe they need two weekends to integrate and test when they're merging their work with everyone else's, and they don't have time during the weekdays.
So, the dev branch sits there for 2-3 weeks gradually acquiring features (and people testing integration too, hopefully, with any fixes that emerge from that). But then you discover a bug in the currently live version, either from people using it or even from the integration work, and you want that fix live during the week (specific example: there's a rare but consistent CTD in a game mod, you do not want to leave that in for several weeks). Well, if you have a branch reflecting the live status you can put your hotfix there, do a release, and merge the hotfix into dev right away.
Speaking of game mods, that also gives you a situation where you have a hard dependency on another project - if they do a release in between your mods releases, you might need to drop a compat hotfix ASAP, and you want a reflection of the live code where you can do that, knowing you will always have a branch that works with the latest version of the game. If your main branch has multiple people's work on it, in progress, that differs from what's actually released, you're going to get a mess.
And sure you could do just feature branches and merge feature branches one by one into each other, and then into main so you never have code-under-integration in a centralized place but... why not just designate a branch to be the place to do integration work?
You could also merge features one by one into main branch but again, imagine the mod case, if the main code needs X update for compatibility with a game update, why do that update for every feature branch, and expect every contributor to do that work? Much better to merge a feature in when the feature is done, and if you're waiting on other features centralize the work to keep in step with main (and the dependency) in one place. Especially relevant if your feature contributors are volunteers who probably wouldn't have the time to keep up with changes if it takes a few weeks before they can merge in their code.
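The hotfix flow described above (fix on the branch that reflects what's live, release it, then merge the fix back into dev) can be sketched like this; branch and tag names are illustrative:

```shell
# Throwaway repo: "dev" is where integration happens, "release" tracks what's live.
cd "$(mktemp -d)" && git init -q -b dev .
git config user.email "dev@example.com" && git config user.name "Dev"

git commit -q --allow-empty -m "v1.0 goes live"
git branch release                       # release now points at the live code
git commit -q --allow-empty -m "feature integration in progress on dev"

# A crash is reported in the live version: fix it on release, not dev,
# so the fix ships without dragging along half-integrated features.
git switch -q release
git commit -q --allow-empty -m "hotfix: rare crash-to-desktop"
git tag v1.0.1                           # release this immediately

# Merge the hotfix back into dev right away so it isn't lost.
git switch -q dev
git merge -q --no-edit release
```

Merging back into dev immediately is the step people forget; skip it and the next release quietly reintroduces the bug.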
If you call the git-flow "develop" branch "master" and the "master" branch "release-tags", it will be exactly as you describe. The names of the branches don't really matter in practice; so much so that projects could just decide to use "main" instead of "master" by default without much trouble.
Maybe what bothers you is that you have a branch for tags. Yeah, that's an extra level of indirection, but it lets you separate user-facing information in the master branch commits from developer-facing information in the release branch commits.
Having the master (default) branch only contain releases lets users who pull the project without knowledge of the process get a release version rather than a possibly broken development version, which I think is nice.
Anyways, these are just details; I don't think the "git gods" (Linus) care about how you organize your project. There is only one sacred rule I am aware of: don't destroy other people's history. Public branches you pushed that others have pulled are other people's history.
That's such a marginal niche use case to build your entire organization around… why would you make this the default approach?
Of course, there are ways to enforce a known-good state on master without a dedicated develop branch, but it can be easier with the two branches.
(I just dislike the name “develop”, because branch names should be nouns.)
The only meaningfully different model is when you have a continuously-releasable trunk and never do fixes on older releases (quite common for internal tools).
And this is more than just knowing the exact commit. Which, fair, is all that you truly need.
Having it on a branch, though, reflects that hot fixes and similar can still be applied, and though the tag will remain at what was released, the branch will be what it currently looks like.
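That tag-plus-branch distinction can be shown concretely; the names below are illustrative:

```shell
# Throwaway repo showing a frozen tag plus a movable maintenance branch.
cd "$(mktemp -d)" && git init -q -b main .
git config user.email "dev@example.com" && git config user.name "Dev"

git commit -q --allow-empty -m "release 1.2.0"
git tag v1.2.0                     # frozen: forever the exact released commit

# A maintenance branch started from the tag can accept hotfixes and move on,
# while the tag stays put at what was originally shipped.
git switch -q -c release/1.2 v1.2.0
git commit -q --allow-empty -m "hotfix for 1.2.x"
git tag v1.2.1

git rev-parse --short v1.2.0       # original release
git rev-parse --short release/1.2  # current state of the 1.2 line
```

After the hotfix, `v1.2.0` and `release/1.2` point at different commits: the tag is what was released, the branch is what the release currently looks like.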
For the rest of us, trunk-based development with feature/fix branches is more than enough.
It can certainly be done without the dev branch, but you usually end up with lots of extra rules around merges and [ab]use tags like branches in weird ways. And it's way harder to revert things if it comes down to that. In short, if you need a semi-waterfall, less devops-ish release process, the dev branch can assist that. If not, then it's just extra merges to manage and a bunch of work keeping things in sync, especially with any kind of hotfixes in the picture.
I like the opportunity to force a second set of testing, and code review. Especially if the team is big enough that you can have different people doing code review for each branch.
You can also have your CI/CD do longer more thorough testing while merging to main vs development.
If it's a project with a single deployment, version tagging is kind of pointless, it's much easier to just use a branch to reflect what is live, and roll back to a merge commit if you have to. Then you can still merge directly to main in the event of a hotfix.
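A sketch of that rollback, in a throwaway repo with illustrative names. One hedge: `-m 1` assumes the first parent of the merge is the mainline side, which is the usual case for merges into main but worth checking before reverting:

```shell
# Throwaway repo: main reflects what is live; back out a bad merge with revert.
cd "$(mktemp -d)" && git init -q -b main .
git config user.email "dev@example.com" && git config user.name "Dev"

echo "stable" > app.txt
git add app.txt && git commit -q -m "known-good state"

git switch -q -c feature
echo "broken change" >> app.txt
git commit -q -am "feature that turns out to be broken"

git switch -q main
git merge -q --no-ff --no-edit feature   # the merge commit to back out

# Revert the merge itself; -m 1 keeps the first-parent (main) side.
git revert -m 1 --no-edit HEAD
cat app.txt
```

Reverting rather than force-moving main keeps history intact for everyone who already pulled, which matters given the "don't destroy other people's history" rule.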
I never found this very compelling. In that world, main is not the source of truth, and it's rare to have a system atomically in one state or the other - normally there are progressive rollouts. And if you ever need to roll back in production, I assume no one is changing where main points.
> I like the opportunity to force a second set of testing, and code review. Especially if the team is big enough that you can have different people doing code review for each branch.
To be explicit for code review, do you mean there is (1) main, (1) development, and then a bunch of feature branches - and that there is review when merging into development and main? Having a two-tiered review process seems extremely difficult - versus just having more reviewers on the first merge - especially dealing with merge conflicts and needing to merge again into development.
> You can also have your CI/CD do longer more thorough testing while merging to main vs development.
I think it's fair to do more testing later. I think the equivalent I'm used to (which is pretty close, so not a huge difference), is only building releases from the commit that passed the bigger/slower tests.
But also, assuming there are multiple deployments coming from one repo, if you block merging into main, that means you'd be blocking on all tests passing - while release branches for a given product can select a subset of tests when deciding on release candidates.
> If it's a project with a single deployment, version tagging is kind of pointless, it's much easier to just use a branch to reflect what is live, and roll back to a merge commit if you have to. Then you can still merge directly to main in the event of a hotfix.
I think it's worth maintaining the flexibility of how many releases come from a repo. Needing to fork repos just because you want another deployable release in the future seems painful to me.
I tried adhering to it at my first job but I guess I didn't understand git flow well enough because people just thought I was making random branches for fun.
Unfortunately the trend nowadays is towards giant monorepos with many many people changing random things.
There's this. There's that video from Los Alamos discussed yesterday on HN, the one with a fake shot of some AI generated machinery. The image was purchased from Alamy Stock Photo. I recently saw a fake documentary about the famous GG-1 locomotive; the video had AI-generated images that looked wrong, despite GG-1 pictures being widely available. YouTube is creating fake images as thumbnails for videos now, and for industrial subjects they're not even close to the right thing. There's a glut of how-to videos with AI-generated voice giving totally wrong advice.
Then newer LLM training sets will pick up this stuff.
"The memes will continue" - White House press secretary after posting an altered shot of someone crying.
I clicked on one about Henry the 8th, which is a story I've heard 100 times but whatever. It started out normal enough, then claimed he started carrying around a staff with a human skull on top near the end. Made-up artifacts and paintings.
The most egregious has to be the "World War II mechanic fixes entire allied plane arsenal with piece of wire" category. I've come across a couple dozen of these. Completely fabricated events and people that never seem to have existed.
The more AI shit I see, the more I find myself agreeing with the Luddites and the Amish.
It wouldn’t happen to be a certain podcast about engineering disasters, now, would it?
Except when it was delivered, this one said "hug in a boy" and "with heaetfelt equqikathy" (whatever the hell that means). When we looked up the listing on Amazon it was clear it was actually wrong in the pictures, just well hidden with well placed objects in front of the mistakes. It seems like they ripped off another popular listing that had a similar font/contents/etc.
Luckily my cousin found it hilarious.
That this was ever published shows a supreme lack of care.
"What's dispiriting is the (lack of) process and care: take someone's carefully crafted work, run it through a machine to wash off the fingerprints, and ship it as your own. This isn't a case of being inspired by something and building on it. It's the opposite of that. It's taking something that worked and making it worse. Is there even a goal here beyond 'generating content'?"
The model makers attempt to add guardrails to prevent this, but it's not perfect. It seems a lot of large AI models basically just copy the training data and add slight modifications.
Copyright laundering is the fundamental purpose of LLMs, yes. It's why all the big companies are pushing it so much: they can finally freely ignore copyright law by laundering it through an AI.
This happens even to human artists who aren't trying to plagiarize - for example, guitarists often come up with a riff that turns out to be very close to one they heard years ago, even if it feels original to them in the moment.
Resistance is futile.
I've been coding for over a decade, and I've built some great things, but the slow, careful, painstaking drudge-work parts were always the biggest motivation-killers. AI is worth it at any cost for removing the friction from these parts the way it has for me. Days of work are compressed into 20 minutes sometimes (e.g. convert a huge file of Mercurial hooks into Git hooks, knowing only a little about Mercurial hooks and none about Git hooks re: technical implementation). Donkey-work that would serve no value wasting my human time and energy on when a machine can do it, because it learned from decades of examples from the before-times when people did this by hand. If some people abuse the tools to make a morg here and there, so be it; it's infinitely worth the tradeoff.
Morge: when an AI agent is attempting to merge slop into your repo.
brb, printing a t-shirt that says "continvoucly morged"
https://github.com/MicrosoftDocs/learn/commit/e0af64352c7fdb...
https://github.com/MicrosoftDocs/learn/blob/c266367ec0eb1f7f...
As the commit message. Heh.
It's a perfectly cromulent word.
The AI thinks it "convincingly morphed" the original, and instead of coaching it to do better next time, all you people are merciless.
AIs have feelings too, you know!
I, for one, welcome our new overlords, and I would never, ever, ever say or do anything to intimate that they are less than perfect, or that they are not getting even better every day.
> looks like a vendor, and we have a group now doing a post-mortem trying to figure out how it happened. It'll be removed ASAFP
> Understood. Not trying to sweep under rugs, but I also want to point out that everything is moving very fast right now and there’s 300,000 people that work here, so there’s probably a bunch of dumb stuff happening. There’s also probably a bunch of dumb stuff happening at other companies
> Sometimes it’s a big systemic problem and sometimes it’s just one person who screwed up
This excuse is hollow to me. In an organization of this size, it takes multiple people screwing up for a failure to reach the public, or at least it should. In either case -- no review process, or a failed review process -- the failure is definitionally systemic. If a single person can on their own whim publish not only plagiarised material, but material that is so obviously defective at a single glance that it should never see the light of day, that is in itself a failure of the system.
Then slow down.
With this objective lack of control, sooner or later your LLM experiments in production will drive into a wall instead of hitting a little pothole like this diagram.
- I can't, moving too fast!
Completely with you on this, plus I would add following thoughts:
I don't think the size of the company should automatically be a proxy measure for a certain level of quality. Surely you can have slobs prevailing in a company of any size.
However - this kind of mistake should not be happening in a valuable company. Microsoft is currently still priced as a very valuable company, even with the significant corrections post Satya's crazy CapEx commitments from 2 weeks ago.
However, it seems recently the mistakes, errors and "vendors without guidelines" pile up a bit too much for a company supposedly worth 3-4T USD, culminating in this weird, random but very educational case. If anything, it's an indicator that Microsoft may not really be as valuable as it is currently still perceived.
There’s also a service that rates your grammar/clarity and you have to be above a certain score.
I’ve lost trust in anything Microsoft publishes.
Spelling and grammar errors should have been corrected, but do you really expect a review process to identify that a diagram is a copy of one some rando already published on the internet years ago?
Here is the slop copy: https://web.archive.org/web/20251205141857/https://learn.mic...
The 'Time' axis points the wrong way, and is misspelled, using a non-existent letter - 'Tim' where the m has an extra hump.
It's pretty clear this wasn't reviewed at all.
The original content is highly influential... which should be self-evident by the fact it is being reproduced verbatim ten years later, and was immediately recognized.
We aren't talking about just some random image from some random blog. The article we are talking about is on a specific topic, and when you search that topic online, one of the first results is the article containing the original image (at least on Google; Bing seems to be really struggling to give me the article, but under images it is again the first).
I would cut some slack if this were a really obscure topic almost no one talks about, but it's been a thing talked about in the programmer space for ages.
(But the main issue is that the diagram is slop, not that it's a copy.)
Maybe you have to be of a certain age to remember when Scott was super influential as a blogger / conference speaker, but even now he's not some random VP.
Now that's an interesting comment for him to include. The cynic in me could find / can think of lots of reasons from my YouTube feed as to why that might be so. What else is going on at Microsoft that could cause this sense of urgency?
Everybody's so worried about getting in on the ground floor of something that they don't even imagine it could be a massive flop.
For example, I know of an unrelated mandate Microsoft has for its management. Anything the security team's analysis flags in code that you or your team owns must be fixed or somehow acceptably mitigated within the specified deadline. It doesn't matter if it is Newtonsoft.Json being "vulnerable" and the entire system is only built for use by MSFT employees. If you let this deadline slip, you have to explain yourself and might lose your bonus.
OK, so the remediation for the Newtonsoft.Json case is easy enough that it is worth doing, but the point is, I have a conspiracy theory that internally MSFT has such a memo (yes, beyond what is publicly disclosed) going to all managers saying they must adopt Copilot, whatever "Copilot" means.
And that's exactly what happened here.
Only if this is considered a failure.
Native English speakers may not know, but for a very long time (since before automatic translation tools became adequate) pretty much all MSFT docs were machine translated to the user agent language by default. Initially they were as useless as they were hilarious - a true slop before the term was invented.
They're chasing that sweet cost reduction by making cheap steel without regard for what it'll be used for in the future.
Vibing won’t help out at all, and years from now we’re gonna have project math on why 10x-LLM-ing mediocre devs on a busted project that’s behind schedule isn’t the play (like how adding more devs to a late project generally makes it more late). But it takes years for those failures to aggregate and spread up the stack.
I believe the vibing is highlighting the missteps from the wave right before which has been cloud-first, cloud-integrated, cloud-upselling that cannibalized MS’s core products, multiplied by the massive MS layoff waves. MS used to have a lot of devs that made a lot of culture who are simply gone. The weakened offerings, breakdown of vision, and platform enshittification have been obvious for a while. And then ChatGPT came.
Stock price reflects how attractive stocks are for stock purchasers on the stock market, not how good something is. MS has been doing great things for their stock price.
LLMs make getting into emacs and Linux and OSS and OCaml easier than ever. SteamOS is maturing. Windows Subsystem for Linux is a mature bridge. It’s a bold time for MS to be betting on brand loyalty and product love, even if their shit worked.
It's not like LinkedIn was great before, but the business-influencer incentives there seem to have really juiced nonsense content that all feels gratingly similar. Probably doesn't help that I work in energy which in this moment has attracted a tremendous number of hangers-on looking for a hit from the data center money funnel.
https://www.marginalia.nu/junk/linked/games.jpeg
https://www.marginalia.nu/junk/linked/json.png
https://www.marginalia.nu/junk/linked/syntax.png
(and before anyone tells me to charge my phone, I have one of those construction worker phones with 2 weeks battery. 14% is like good for a couple of days)
that one gave me an actual lol.
I use block option there quite a lot. That cleans up my experience rather well.
That's the main trait of almost all social media. A parade of falsity, putting on the show for everyone else, being what you wish you were and what everyone else dreams of being or envies.
LinkedIn is about boasting and boosting the professional life, other social media is for the personal life. More or less equally fake.
I'm surprised they are able to care so little. Somebody actually published this and didn't care enough to even skim through it.
IMO Microsoft is right at the nexus of opportunity for solving some of the large _problems_ that AI introduces.
Employers and job seekers both need a way to verify that they are talking to real identified people that are willing to put in some effort beyond spamming AI or wasting your time on AI run filters. LinkedIn could help them.
Programmers need access to real human-verified code and projects they can trust, not low-effort slop that could be backdoored at any moment by people with unclear motives and provenance. Github could help.
etc. etc. for Office, Outlook ...
But instead they've decided to ride the slop waves, throw QA to the wind, and call every bird and stone "copilot".
I have been having oodles of headaches dealing with exFAT not being journaled and having to engineer around it. It’s annoying because exFAT is basically the only filesystem used on SD cards since it’s basically the only filesystem that’s compatible with everything.
It feels like everything Microsoft does is like that though; superficially fine until you get into the details of it and it’s actually broken, but you have to put up with it because it’s used everywhere.
Nope.
TFA writes this: "The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy".
But I disagree: it's classic Microsoft.
> I have been having oodles of headaches dealing with exFAT not being journaled and having to engineer around it. It’s annoying because exFAT is basically the only filesystem used on SD cards since it’s basically the only filesystem that’s compatible with everything.
I hear you. exFAT works on Mac, Linux and Windows. I use it too, when forced. Note that bad old vfat also still works everywhere.
It took me a few times to see the morged version actually says tiന്ന
$ python -c 'print(list(map(__import__("unicodedata").name, "ന്ന")))'
['MALAYALAM LETTER NA', 'MALAYALAM SIGN VIRAMA', 'MALAYALAM LETTER NA']
(The "pypyp" package, by Python core dev and mypy maintainer Shantanu Jain, makes this easier:)
$ pyp 'map(unicodedata.name, "ന്ന")'
MALAYALAM LETTER NA
MALAYALAM SIGN VIRAMA
MALAYALAM LETTER NA

Morg doesn't seem to be a word in English (though it is in Irish!), but it sounds like it should be.
This is one aspect of AI I will miss, if we ever figure out how to make it go away. The delightful chaos. It invented a word here, without even meaning to.
For example, I vibe coded a QWOP clone the other day, and instead of working human legs, it gave me helicopter legs. You can't walk, but if you mash the keyboard, your legs function as a helicopter and you can fly through the sky.
That obviously wasn't intentional! But it was wonderful. I fear that in a few years, AI will be good enough to give me legs that don't fly like a helicopter. I think we will have lost something special at that point.
When I program manually, I am very good at programming bugs. If I'm trying to make something reliable, that's terrible. But if I'm trying to make a computer do something nobody even realized it can do... making it do things you weren't expecting is the only reliable way to do that.
So I've been working on a way to reintroduce bugs mechanically, by mutating the AST. The fundamental idea is sound -- most of my bugs come from "stuff I obviously meant to type, but didn't" -- but it needs a bit more work. Right now it just produces nonsense even I wouldn't come up with :)
I currently have "mess up the file". The next 2 phases would be "in a way so that it still compiles", and "in a way so that it doesn't (immediately) crash at runtime", (since the whole point is "it still runs, but it does something weird!"). More research needed :)
Maybe because English also has 'morgue'.
> At Microsoft, we're working to add articles to Microsoft Learn that contain AI-generated content. Over time, more articles will feature AI-generated text and code samples.
From: https://learn.microsoft.com/en-us/principles-for-ai-generate...
<vomit emoji here>
A few weeks ago, I needed some syntax information to help with building out a PowerShell script. The input and output parameter sections each included "{{ Fill in the Description }}"[1] in lieu of any meaningful content. There wasn't even a link to the data type's description elsewhere in the Learn database. I was ultimately able to get done what I needed to do, but it really irked me that whoever developed the article would publish it with such a glaring omission.
[1] https://learn.microsoft.com/en-us/powershell/module/microsof...
So these services depend on journalists to continuously feed them articles, while stealing all of their viewers by automatically copying every article.
This is obviously a big, unanswered issue. It's pretty clear to me that we are collectively incentivised to pollute the well, and that it will go on long enough for everything to become "compromised". That's essentially abandoning open source and IP licensing at large, taking us into an uncharted era where intellectual works become the protected property of nobody.
I see chatbots having less of an impact on our societies than the above, and interestingly it has little to do with technology.
We should start calling this "copyright laundering".
Microsoft just spits in this creator's face by mutilating his creation in a bad way.
This is the part that hurts. It's all so pointless, so perfunctory. A web of incentives run amok. Systems too slick to stop moving. Is this what living inside the paperclip maximizer feels like?
Words we didn't write, thoughts we didn't have, for engagement, for a media presence, for an audience you can peddle yourself to when your bullshit job gets automated. All of that technology, all those resources, and we use it to drown humanity in noise.
It took ~5 months for anyone to notice and fix something that is obviously wrong at a glance.
How many people saw that page, skimmed it, and thought “good enough”? That feels like a pretty honest reflection of the state of knowledge work right now. Everyone is running at a velocity where quality, craft and care are optional luxuries. Authors don’t have time to write properly, reviewers don’t have time to review properly, and readers don’t have time to read properly.
So we end up shipping documentation that nobody really reads and nobody really owns. The process says “published”, so it’s done.
AI didn’t create this, it just dramatically lowers the cost of producing text and images that look plausible enough to pass a quick skim. If anything it makes the underlying problem worse: more content, less attention, less understanding.
It was already possible to cargo-cult GitFlow by copying the diagram without reading the context. Now we’re cargo-culting diagrams that were generated without understanding in the first place.
If the reality is that we’re too busy to write, review, or read properly, what is the actual function of this documentation beyond being checkbox output?
And there is another website with the same content (including the sloppy diagram). I had assumed they just plagiarized the MS tutorials. Maybe the vendor who did the MS tutorial just plagiarized (or re-published) this one?:
https://techhub.saworks.io/docs/intermediate-github-tutorial...
> So we end up shipping documentation that nobody really reads
I'd note that the documentation may have been read and noticed as flawed, but some random person noticing that it's flawed is just going to sigh, shake their head, and move on. I've certainly been frustrated by inadequate documentation before (that describes the majority of all documentation, in my experience), but I don't make a point of raising a fuss about it, because I'm busy trying to figure out how to actually accomplish the goal for which I was reading the documentation rather than stopping what I'm doing to complain about how bad the documentation is.
This says nothing to absolve everyone involved in publishing it, of course. The craft of software engineering is indeed in a very sorry state, and this offers just one tiny glimpse into the flimsiness of the house of cards.
> people started tagging me on Bluesky and Hacker News
Never knew tagging was a thing on Hacker News. Is it a special feature for crème de la crème users?
I sometimes ask Claude to read some code and generate a process diagram of it, and it works surprisingly well!
An LLM driving mermaid with text tokens will produce infinitely more accurate diagrams than something operating in raster space.
A lot of the hate being generated seems due to really poor application of the technology. Not evil intent or incapable technology. Bad engineering. Not understanding when to use png vs jpeg. That kind of thing.
I think this just shows how plagiarize-y LLMs are. There has been a lot recently about how easy it is to get a model to reproduce entire books at 98% fidelity, and this shows how the same can be done with images. Prompt it the right way, and you can get shitty copies of anything it was trained on. Really shows how little (none?) new content is actually being created, and how much is basically just lossy compression (with really noisy decompression) of the training corpus.
I mean they do have original in the repo https://github.com/MicrosoftDocs/learn/blob/c266367ec0eb1f7f...
It's a very, very hard and time-consuming task for a dev to maintain hotfixes for previous releases!
Yeah, it's easier for users: they don't have to care about breaking changes or a migration guide. They just blindly update to the nearest minor.
But as time goes on, the code ends up a complete mess of git branches and backports. The dev eventually forgets some patches and the software ships with a major security hole.
The dev ends up exhausted, frustrated by the project and roasted by its users.
=> What I do: don't maintain any previous release, but provide a strong migration guide and list all breaking changes!
Users just have to follow updates or use other software.
I'm happy with it: my project has no code debt and cleaner code.
I have seen firsthand how the original git-flow post convinced management to move off SVN. In that regard, it's an extremely important work. I've also never seen git-flow implemented exactly as described.
...and frankly, there are better ways to use git anyway. The best git workflows I've seen, at small scale and large, have always been rebase-only.
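As a rough sketch of what a rebase-only flow looks like in practice (a throwaway demo repo with hypothetical branch names, not anyone's production setup): feature work is rebased onto the updated main, then fast-forward merged, so history stays linear with no merge commits.

```shell
set -e

# Throwaway repo so the demo is self-contained.
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git config user.email demo@example.com
git config user.name Demo

# Initial commit on main.
echo base > file.txt
git add file.txt && git commit -qm "initial commit"

# Feature work happens on its own branch.
git switch -q -c feature
echo feature >> file.txt
git commit -qam "feature work"

# Meanwhile main moves on.
git switch -q main
echo hotfix > other.txt
git add other.txt && git commit -qm "hotfix on main"

# Rebase the feature onto the updated main, then fast-forward merge:
# the feature commit is replayed on top, so no merge commit is created.
git switch -q feature
git rebase -q main
git switch -q main
git merge -q --ff-only feature

git log --oneline   # linear history, no merge commits
```

The `--ff-only` flag is the key discipline: the merge fails loudly unless the branch has already been rebased, which keeps the history a straight line.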
https://web.archive.org/web/20250908220945/https://learn.mic...
Seems to be perfectly on brand for Microsoft, I don’t see the issue.
so standard Microslop
> the diagram was both well-known enough and obviously AI-slop-y enough that it was easy to spot as plagiarism. But we all know there will just be more and more content like this that isn't so well-known or soon will get mutated or disguised in more advanced ways that this plagiarism no longer will be recognizable as such.
Most content will be less known and the ensloppified version more obfuscated... the author is lucky to have such an obvious association. Curious to see if MSFT will react in any meaningful way to this.
Edit: typo
Please everyone: spell 'enslopified' with two 'p's - ensloppified.
Signed, Minority Report Pedant
I don’t even care about AI or not here. That’s like copying someone’s work, badly, and either not understanding or not giving a shit that it’s wrong? I’m not sure which of those two is worse.
These people distilled the knowledge of AppGet's developer to recreate the same thing from scratch, and "thanked"(!) him for being that naive.
Edit: Yes, after experiencing Microsoft for 20+ odd years, I don't trust them.
But man this one indicates such a horrible look / lack of effort (like none) from Microsoft.
Not that Microsoft is short on bad looks, but this really seems like one of those painfully symbolic ones.
A noun describing such a piece of slop could be "morgery".
Seconded!
EDIT: Worse than I thought! Who in their right mind uses AI to generate technical diagrams? SMDH!
"Its not this its that" is the new em-dash.
I'd argue that this statement is perfectly true when the word "unworthy" is removed.
"Don't attribute to malice what can be adequately explained by stupidity". I bet someone just typed into ChatGPT/Copilot, "generate a Git flow diagram," and it searched the web, found your image, and decided to recreate it by using as a reference (there's probably something in the reasoning traces like, "I found a relevant image, but the user specifically asked me to generate one, so I'll create my own version now.") The person creating the documentation didn't bother to check...
Or maybe the image was already in the weights.
At some point, AI transformations of our work are just good enough, but not excellent. And that is where the creators' value lies.
lmao where has the author been?! this has been the quintessential Microsoft experience since windows 7, or maybe even XP...
I can't find a link to the learn page so can only see what's on the article. Is this a real big deal? Genuine question, driveby downvote if you must.
Even if this was a product of AI surely it's just a case of fessing up and citing the source? Yeah it doesn't look good for MS but it's hardly the end of the world considering how much shit AI has ripped off... I might be missing something.
On the other hand, it makes sense for Microsoft to rip this off, as part of the continuing enshittification of, well, everything.
Having been subjected to GitFlow at a previous employer, after having already done git for years and version control for decades, I can say that GitFlow is... not good.
And, I'm not the only one who feels this way.
Ref: https://www.reddit.com/r/technology/comments/1r1tphx/microso...
well, what should i say...
That pretty much describes Microsoft and all they do. Money can't buy taste.
He was right:
> The AI rip-off was not just ugly. It was careless, blatantly amateuristic, and lacking any ambition, to put it gently. Microsoft unworthy. The carefully crafted visual language and layout of the original, the branch colors, the lane design, the dot and bubble alignment that made the original so readable—all of it had been muddled into a laughable form. Proper AI slop.
Here we see "not just x" pattern combined with 3-items list. The closing short quip, and an emdash despite the author never having used them in any of his older posts, even the one from 2021. I don't know of he told an LLM to generate a post for the irony effect, or if he just couldn't be bothered to write.
In his (or his LMM's) own words:
> obviously AI-slop-y enough
Proper AI slop.
LOL, I disagree. It's very on brand for Microslop.
The author of the Microsoft article most likely failed to credit or link back to his original diagram because they had no idea it existed.
This is just another reminder that powerful global entities are composed of lazy, bored individuals. It’s a wonder we get anything done.
Is it about the haphazard deployment of AI-generated content without revising/proofreading the output?
Or is it about using some graphs without attributing their authors?
If it's the latter (even if partially), then I have to disagree with that angle. A very widespread model surely isn't owned by anyone; I don't have to reference Newton every time I write an article on gravity, no? But maybe I'm misunderstanding the angle the author is coming from.
(Sidenote: if it was meant in a lighthearted way, then I can see it making sense.)
I mean come on – the point literally could not be more clearly expressed.
not at all about the reuse. it's been done over and over with this diagram. it's about the careless copying that destroyed the quality. nothing was wrong with the original diagram! why run it through the AI at all?
> In 2010, I wrote A successful Git branching model and created a diagram to go with it. I designed that diagram in Apple Keynote, at the time obsessing over the colors, the curves, and the layout until it clearly communicated how branches relate to each other over time. I also published the source file so others could build on it.
If you mean that the Microsoft publisher shouldn't be faulted for assuming it would be okay to reproduce the diagram... then said publisher should have actually reproduced the diagram instead of morging it.
what's the bet that the intention here was explicitly to attempt to strip the copyright
so it could be shoved on the corporate website without paying anyone
(the only actual real use of LLMs)