They're also the ideal place to try out new AI tools that your professional work might not let you experiment with.
(The headline of this piece doesn't really do it justice - it misuses "vibe coded" and fails to communicate that the substance of the post is about visual design traits common with AI-generated frontends, which is a much more interesting conversation to be having. UPDATE: the headline changed, it's now much better - "Show HN submissions tripled and now mostly have the same vibe-coded look" - it was previously "Show HN submissions tripled and are now mostly vibe-coded")
The number of dark‑mode sites I’ve seen where the text (and subtext) are various shades of dark brown or beige is just awful. For reference, WCAG AA calls for a contrast ratio between text and background of at least 4.5:1 for normal-size text (3:1 for large text) to be on the safe side.
This isn't even that hard to fix - hell you can add the Web Content Accessibility Guidelines to a skill.
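As a minimal sketch of what that check looks like: the relative-luminance and contrast-ratio formulas below are the ones WCAG defines; the hex colors are made-up examples of the "dark brown on near-black" problem.

```python
# WCAG contrast check: relative luminance of sRGB colors, then
# contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05).

def _linearize(c: float) -> float:
    """Linearize one sRGB channel value in the range 0..1."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    """Relative luminance per the WCAG definition, from a '#rrggbb' string."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio between two colors; WCAG AA wants >= 4.5 for body text."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
# Dark brown text on a near-black background comes out well under 4.5.
print(round(contrast_ratio("#5c4033", "#1a1a1a"), 2))
```

Twenty lines of this in a lint step (or, as suggested above, the WCAG rules dropped into a skill) would catch most of the unreadable dark-mode palettes.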
Besides, the idea of paying $200/month for the privilege of using AI in my side projects… it just seems stupid to me
I don't think this is overwhelmingly the reason though - I think many are just all AI, but if the project is technically interesting it might be sufficient to get me to grimace through it.
AI might (might not, but often does!) also save you from doing original thinking in the domain, which in a "show my side project" post is exactly what people are interested in
Before, it was like:
"Oh, X idea is really cool, let me try it!" ... (loses interest before idea validated)
Now: "Oh, X idea is really cool, let me try it!" ... with AI, I get to actually validate that it works (ideally), or reformulate the idea if it doesn't.
This.
Coding assistants handle a great deal of the drudge work involved in refactoring. I find myself doing far more deep refactoring work as quick proofs of concept than before. It's also quite convenient to have coding assistants handle troubleshooting steps for you.
- Exploration: I am "vibe coding" to explore a domain, add many features, refactor the app over and over, as a real time exploration of the domain to see what works and what doesn't
- Specific Execution: I have a full design, a full idea, I've thought about architecture, we're making a plan and we're executing this extremely coherent vision
I've enjoyed using AI for both cases.
The trick is to deliberately use it in a way that helps you learn.
I'm primarily a backend developer. Most of my work has been in serving json or occasionally xml. Spring Shell in Java is something that I'm closer to working with than a GUI. When I've done web work, the most complimentary thing that was said about my design is "spartan".
So, if I was to have a web facing personal project... would black text on a white background with the default font and clunky <form> elements be ok? I know we are ok with it on the HN Settings page. They work... but they don't meet what I perceive other people have as minimum standards for web facing interfaces today.
And so... if I was to have some web facing project that I wanted to show to others, I'd probably work with some AI tooling to help create a gui, and it would very likely have the visual design traits that other AI generated front ends have.
(maybe what this post calls "Icon-topped feature card grid." ...that might be the official design pattern term)
https://news.ycombinator.com/showlim (<-- this is what many accounts without much HN history now see, and it's responsible for the downtick to the right on OP's chart)
Ask HN: Please restrict new accounts from posting - https://news.ycombinator.com/item?id=47300329 - March 2026 (515 comments)
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (425 comments)
In 2016, if I saw 10,000 lines of code, that carried a certain proof-of-work with it. They probably couldn't help but give the code some testing as they were working up to that point. We know there has to have been a certain amount of thought in it. They've been living with it for some months, guaranteed.
In 2026, 10,000 lines of code means they spent a minimum amount of money on tokens. 10,000 lines can be generated pretty quickly in a single task, if it's something like "turn this big OpenAPI spec into an API in my language". It's entirely possible 90%+ of the project hasn't actually been tested, except by the unit tests the AI wrote itself, which is a great start, but not more than that for code that hasn't ever actually run in any real scenario from the real world.
Nothing about any of that is intrinsically wrong. But the standards have to shift. While the bar for a "Show HN" should perhaps not be high, it should probably be higher than "I typed a few things into a text box". And that's not because that's necessarily "bad" either, but because of the mismatch between the value of human attention and the cheapness of making a claim on it.
It's kind of a bummer in some sense... but then again, honestly, the space of things that can be built with an idea and a few prompts to an AI was frankly fairly well covered even before AI coding tools. Already I had a list of "projects we've already seen a lot of so don't expect the community to shower you with adulation" for any language community I've spent any significant time in. AI has grown the list of "projects I've seen too many times" a bit, but a lot of what I've seen is that we're getting an even larger torrent of the same projects we already had too many of before.
That's basically the entire AI landscape atm.
I keep seeing people do things like spend a weekend building a product then charging ridiculous prices for it with the justification that it's what those products would've cost a few years ago.
For some reason, it doesn't click for them that those prices were a reflection of the effort it took to get to that point, and that the situation has changed.
When the surface dwellers have become crazed by disease and war, and their lands contaminated with the detritus of broken promises of innovation and heavy metals, we must build a new Eden.
As much as I adore Gemini as a concept, I yearn to express myself in the visual medium. Dillo might honestly be enough to render something beautiful within its constraints. With Wireguard meshes as the transport, and invitations offered and withdrawn by personal trust, perhaps we can have a place where our ideas could once again flourish without being amplified and distilled into mediocrity by the great monoliths looming like thunderous currents on the horizon.
We can hope the LLMs hallucinate slightly different CSS once in a while now...
There's always a trend in software, and everyone follows it. Now it's AI. Let's not pretend cutting corners is anything new in our industry.
I guess you can always gloat about your artisan code, but people who use software for business never cared about that to begin with.
Plus, wasn't the entire philosophy of CS that "everyone can code"? Opposing licensing requirements, etc.? Well… there you have it: code is a commodity now and the barrier to entry is next to none.
The other issue of HN being inundated with AI bots is related, but a kind of different problem.
Likewise, the issue is often that many of these projects show no evidence of long term maintenance. That might be the new signal we watch for?
There also used to be a sense in the tech community of "if you build it they will come" and that has been basically completely lost at this point. Between the discussion earlier this week of people's fraudulent GH stars, and this topic, and the wave of submissions I see on e.g. r/rust, it's just hard to imagine how -- as a pure "tech nerd" -- to get eyes or assistance on projects these days.
I have projects I've held off on "Show HN" for years because I felt I wasn't ready for the flood of users or questions and criticisms. Maybe the joke's on me. (Of course, like everyone else these days, I've used AI to work on them, but much of them predate agentic tools.)
I find that I just don't learn anything new from Show HN vibe-coded side projects, and I can often replicate them for a couple hundred dollars, so why bother looking at them? And why bother sharing one in the first place, since it doesn't really show any personal prowess, and doesn't bring value to the community given how easy it is to replicate?
There's a lot of ways things can be of interest. The problem being solved, how it's being solved, the UI, UX, etc.
THAT it is vibe coded may or may not be interesting to some, but finding it uninteresting just because it's vibe coded is no better than finding it interesting just because it is.
http://www.catb.org/jargon/html/S/September-that-never-ended... https://en.wikipedia.org/wiki/Eternal_September
The advantage of having so many ideas being tried and published is we are exploring the space of possibility faster, and so there's more to learn from. The disadvantage is that signal to noise is way down. Also, because the system is self-reflective and dynamic, there's a natural downward spiral as the common spaces get overrun and we cannot coordinate signal. The Tragedy of the Commons.
I guess I spent 10 years worrying about this in my MeatballWiki era in my 20s, and now I'm in my midlife crisis era and prefer to just have fun with the world that I have.
I've noticed a crazy amount of clearly AI coded projects that do a small subset of an already existing and very trusted open source project. Comments usually point this out, and the OP never responds. I'm not sure what the end goal is, but the whole thing feels like a waste of time for everybody involved.
(Still plenty of scary stuff, but I should feel like you at least some of the time, healthy balance.)
Also, would be good to show trends over time rather than just a one-time pie chart showing breakdown into arbitrary categories.
It seems many have not updated their understanding to match today’s capabilities.
I am vibe coding.
That does not mean I am incompetent or that the product will be bad. I have 10 years of experience.
Using agentic AI to implement, iterate, and debug issues is now the workflow most teams are targeting.
While last year chances were slim for the agent to debug tricky issues, I feel that now it can figure out a lot once you have it instrument the app and provide logs.
It sometimes feels like some commenters stick with last year’s mindset and feel entitled to yell about ‘AI slop’ at the first sign of an issue in a product and denigrate the author’s competence.
so, n=1 plus Baader-Meinhof? (https://en.wikipedia.org/wiki/Frequency_illusion)
I signed up for a Mobbin account to find inspiration only to find every app and website looks the same. I came to the same conclusion, “this isn’t bad but it’s certainly uninspired”
Great job to everyone who has created something
Models have their own archetypes. Since early this year almost every vibecoded website is Opus, which has its own style. It has different characteristics from a website by GPT. Yet again different from one by Gemini. Each one has its own set of traits. Opus 4.5/4.6 traits are markedly different from earlier versions. Mixing them all into one and then using it to "identify AI coded websites" doesn't work.
But good thing is, it will now include those accessibility items, too. Personally I have misokinesia and migraines so I get it.
Here's what it found if you want to see: https://www.perplexity.ai/search/given-these-how-can-we-crea...
At least in the field I work in (ecommerce/retail), design is often what separates one brand from another when presenting their products. Maybe it won't happen on the web as much in the future, but I suspect it will still be important when it comes to visually communicating to consumers
I use LLMs in my side projects like this guy uses them. So many times I spent days and weeks on a side project just to make sure it was perfect, only to have 0 interest from anyone else after sharing.
Why? Let me guess: because these patterns were frequently seen in human-made sites too, but that won't fit the narrative.
Remember, several AI detectors claimed the Declaration of Independence was AI-generated[0]. Keep this in mind when someone (like the author of this article) proudly shows you their home-made AI detector.
[0]: https://dallasexpress.com/state/zerogpt-flags-1836-texas-dec...
at my workplace the phrase in status/report-out meetings "I built" now means "I asked claude to build"
All of a sudden managers, architects (who haven't written code in a decade), and directors are all building tools
so now we're debugging the tools "they built" and why our product isn't working with them.
The UI of Electric Minds Reborn (Amsterdam Web Communities System) was not AI-generated. At most, it was AI translated, as I used Claude to help turn old clunky 2006-era HTML into modern styling with Tailwind CSS. See also https://erbosoft.com/blog/2026/04/07/to-ai-or-not-to-ai/.
This has been killing me recently. Apparently I need slightly higher contrast than some people, and these vibe coded UIs are basically unreadable to my eyes
Nooo please don't ruin great fonts by associating them with low effort vibecoding
They may be somewhat overused but they are popular for a reason
maybe i'm an LLM too
It’s entirely possible a Show HN I posted is included and I’d love to know how it scored.
The more interesting question the post raises, at least for me, is that distribution platforms like Show HN, Product Hunt, etc. were designed for an era when launching something was costly enough to be a signal. When a weekend project can ship a production-looking landing page, upvotes on these platforms start selecting for whatever catches the eye fastest, not whatever actually solves a problem. The signal degrades.
I've been thinking about this a lot because I'm building a directory where you have to rank 5 other projects before you can post your own — trying to see if forced engagement produces better signal than one-click upvotes. Too early to say if it works, but I do think "how do we find the good stuff under the slop" is the real problem and it probably isn't solved by detecting AI design patterns.
Comment sections on paid substacks tend to be much better than free ones. And on Hackernews (and Reddit, a decade ago), the old-school, text-heavy approach (complete with voting) help ensure that quality content rises.
I find the balance fascinating — exactly how much friction do you need to create a healthy online community? And what are the best ways of doing that without making people pay?
Heavy slop (5+ patterns) · 105 sites · 21%
Mild (2–4) · 230 sites · 46%
Clean (0–1) · 165 sites · 33%
Can we have a list of the "clean" ones please? Actually, if you give me a list of the IDs for all 3 categories, I'll make URLs for each that people can browse. If the community feels that the division is useful, then we can maybe take you up on your offer to open-source the project, and perhaps find a way to use it on HN itself.
That said, the AI slop problem is real. Most of it has very little depth. I'd love a sidebar tool that rates submissions on engineering rigor so projects with real technical depth don't get overlooked, and there's a clear differentiator between pure vibe-coding and engineering-backed work.
Shadcn works for Vercel, but is actually a human being (I think?).
The UI framework is called shadcn/ui.
Are we going to call 'AI slop' everything that doesn't reinvent design from zero for a marketing page?
- all designs are going to be AI generated and look the same
- well unless you ask your agent to make it look different
Before, you could get away doing business with a basic 1-pager, which is about the same as what everyone else had, but these days looks lazy/incompetent.
You don’t have any more time to throw it together than you did before so… yeah I guess slop it is. Probably not going to be humans reading it past the front page anyway. If you want to engage humans, use LinkedIn or TikTok or something.
In a sense it shows that the creator didn’t care enough to make their UI/presentation unique which causes some like me to question exactly how much effort they bothered to put in at all.
As part of our code security review we have a “sloppification” score. Higher numbers have been reliably usable by people like me as indicators of what to focus my pentesting efforts on.
Before the usual suspects get snarky: Does that mean AI only generates slop? No. But it is an indicator of effort and oversights.
I'm much more critical of closed-source, subscription, wrappers over open source software of simple prompts.
Let’s take the opposite case, where someone handcrafted a website but the actual project/product was just a vibecoded mess? Is that not infinitely worse? Imo, what matters is what they actually made with the thing.
I get that these LLMs are pumping out ugly websites. But unless the product is a design system or website builder, it’s not my main concern.
There's a long-term phenomenon where quite a lot of the pages presented here no longer exist 12 months later... This was already the case before the whole AI slop flooded in, but since then the rate has grown massively.
It's particularly annoying when there's an actually useful service or app, you sign up, and after a couple of months it's all gone...
Then the question becomes, do we need to go back to hand-picking every single css element to avoid being suspected of vibe coding? Why is it ok for someone to generate a css template on the fly using shadcn, but not ok to generate styles using claude code? Will someone using shadcn be judged the same as someone using claude code for styles?
> The site is built with Astro. Design inspired by Paul Stamatiou.
Personally what I think I'm seeing is a breaking down of walls. Now ideas that once would have gone back to the imagination vault finally have a pathway to reality.
Kind of off-topic - but why is there always so much focus among AI-bros on whether or not LLMs are good at building UI? My shallow assumption was that it's because that's exactly what LLMs are particularly bad at.
But lately I've kind of gotten the sense that a lot of people seem to mostly be building UI stuff with LLMs. Weird.
In a climate where it seems like VC are woefully bereft of the same skills, there's an impetus to just slop garbage up for any vague idea, without taking the care or time to polish it into something which has that intangibly human sense of greatness and clarity.
I see, you've done something -- but why? If you continue to ask this question, you will arrive at good science ... but many submissions are not aimed at that level of communication or stop far ahead of the point at which the question becomes interesting.
There's that phrase: "better to remain silent and be thought a fool than to speak and remove all doubt", which strikes me as poignant, except it seems like the audience today are also fools ... the inmates are running the asylum.