Prompt engineering as a specific skill got blown out of proportion on LinkedIn and podcasts. The core idea that you need to write decent prompts if you want decent output is true, but the idea that it was an expert-level skill that only some people could master was always a lie. Most of it is common sense about having to put your content into the prompt and not expecting the LLM to read your mind.
A harness isn’t really a skill you learn. It’s how you get the LLM to interact with something. It’s also not as hard as the LinkedIn posts imply.
Mixture of Experts isn’t a skill you learn at all. It’s a model architecture, not something you do. At most it’s worth understanding if you’re picking models to run on your own hardware but for everything else you don’t even need to think about this phrase.
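To make "it's an architecture, not something you do" concrete, here's a minimal toy sketch of the routing idea behind MoE. All the names and sizes are made up for illustration, and real experts are full feed-forward networks rather than the bias vectors used here:

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # hypothetical count; real MoE models often use many more
TOP_K = 2         # each token is routed to only a couple of experts
DIM = 4           # toy hidden dimension

# Toy "experts": each is just a vector here, standing in for a full FFN.
experts = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
# Router: one weight vector per expert, scoring how well a token matches it.
router = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token):
    # Score the token against every expert, then keep only the top-k.
    scores = [sum(w * t for w, t in zip(router[e], token))
              for e in range(NUM_EXPERTS)]
    top = sorted(range(NUM_EXPERTS), key=lambda e: scores[e],
                 reverse=True)[:TOP_K]
    gates = softmax([scores[e] for e in top])
    # Output is the gate-weighted sum of only the chosen experts;
    # the other experts' parameters are never touched for this token.
    out = [0.0] * DIM
    for g, e in zip(gates, top):
        for i in range(DIM):
            out[i] += g * experts[e][i]
    return top, out

chosen, _ = moe_layer([1.0, -0.5, 0.2, 0.7])
print(f"token routed to experts {chosen} out of {NUM_EXPERTS}")
```

The point of the sketch: sparsity is why MoE matters for local hardware (only TOP_K of NUM_EXPERTS experts run per token, though all must fit in memory), and it's entirely inside the model; there's nothing for a user to "do" with it.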
I think all of this influencer and podcast hype is giving the wrong impression about how hard and complicated LLMs are. The people doing the best with them aren’t studying all of these “skills”, they’re just using the tools and learning what they’re capable of.
Keeping in mind the LinkedIn posters/audience (marketers/recruiters), it probably was quite hard for most of them.
OTOH, tfa specifically said:
> I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I'm utterly content to wait until their hype has been realised.
So, it's not like he's being deliberately ignorant; rather, he's simply deliberately slow-walking his journey.
If you test specific features of those solutions over time you see very inconsistent results, lots of lies, and seemingly stable solutions that one-shot well but suddenly experience behaviour changes due to tweaks on the backend. Tuesday's awesome agent stack that finally works is behaving totally differently on Thursday, and debugging is “oh, sorry, it’s better now” even when it isn’t. Compression, lies, and external hosting are a bad combo.
Sometimes I imagine a world where computers executed programs the same way each time. You could write some code once and run it a whole calendar month later with a predictable outcome. What a dream, we can hope I guess.
For a single dev team, vibe coding is great. Write specs, write plans, write code. I know what the project wants and needs because I'm the target market.
At work, I haven't written more than a few lines of code since December. But I work with other people vibe coding this same project. Lots of changing requirements and rapid iteration. Lots of mistakes were made by everyone involved. Lots of tech debt. Sure, we built something in 2 mos that would have otherwise taken us 6 mos, but now I'm fixing the mess that we caused.
I think the critical difference is the attitude towards our situation. My boss said to fix the AI harness so we can vibe code more confidently and freely. But other bosses might cut their losses and ban vibe coding. Who's right? I dunno. In both cases I'd just do what my boss wants me to do. But it's not that I don't want to be left behind. I don't want to lose my job. There's a difference.
Kind of weird that these tools also incorporate the UX design of addictive gambling games. They're literally allowing you to multiply your output: 3x, 4x, 5x (run it 5 times for a better shot at a working prompt). You're being played by billionaires who are selling you a slot machine as a thinking machine.
Yes, it's hard to see how, at this moment in time, "Anybody can write code with an LLM" is so different from "Anybody can make money in the stock market."
The underlying mechanisms are completely different, of course, and the putative goal of the LLM purveyors is to make it where anybody really can write code with an LLM.
I'm typically a nay-sayer and a perfectionist, but many not-so-great things become and stay popular, and this may fall into that category.
> Kind of weird that these tools also incorporate the UX design of addictive gambling games.
It's unclear whether it started out this way, but since it's obviously heading this way, it is certainly prudent to ask if some of this is by design. It would presumably be more worrisome if there were only a single vendor, but even with multiple vendors, it might be lucrative for them to design things so that "true insider knowledge" of how to write good prompts is a sought-after skill.
Why? Because all the folks involved have created a technology in search of a problem to solve. That never, ever works. Steve Jobs of all people left this piece of wisdom behind. It's amazing how few actually apply it.
The internet was never this - its origins go back to the need to be able to transmit data (DARPA). And this is what we still do now...
Next thing I'm waiting on is building a new server for a powerful locally hosted LLM in 5 years. No need to go through the headaches and cost of doing it now with models that may not be powerful enough.