It feels like ML/AI might be the beginning of the end for a large class of things (if I wanted to be alarmist I'd say "everything") -- and the fact that Fabrice Bellard has jumped in and done the absolutely obvious rising-tide thing (building an API that abstracts the technologies) speaks volumes.
Releasing something like this fits Fabrice's pattern of work -- he built QEMU, and it served as a similar enabling fabric for people to run virtual machines. QuickJS quietly powers some JS-on-another-platform functionality.
Simon was right. The Stable Diffusion moment[0] is already here. It's going to accelerate. It was already moving at a speed that was hard to follow, and it's about to get even faster.
There are too many world-changing things moving forward at the same time, and I'm only looking at such a small cut of the tech sphere. I don't know what to do with myself, I feel so thoroughly unprepared.
My whole life, my whole personality is architected around making things by hand for other people. My ideal world is a hipster stereotype where we all sit around using a small number of artisanal products to make other artisanal products for each other.
The arc of my programming career has gone lower and lower down the stack because when I create, I enjoy it most when it feels concrete, deliberate, and long-lasting. I get no joy out of duct taping a few libraries together (though I respect others who do).
While I spend a lot of my day doing code review and think it's a valuable, important part of the process, it's not my favorite task. I like making stuff, not just socially interacting with others to loosely guide them towards making stuff. The idea of AI-assisted software development to me just sounds like taking the one part of the job I like most—writing code—and turning it into even more code review, except now I'm reviewing code vomited out by a machine.
And I completely dread the long-term societal implications of a world where most people spend most of their day consuming media auto-generated by a machine. Where lonely men and women hide from their social anxiety by cultivating simulated romantic relationships with chatbots. Where teens have their expectations of sex set by watching synthesized porn starring virtual actors doing things that are physically impossible. Where people watch auto-generated videos of impossibly idyllic vistas instead of actually leaving the house and going for a hike. Where our beliefs about the world are formed largely by synthesized news articles that may or may not accurately reflect it. Where children learn to speak, read, and write from AI tutors and pick up all the grammatical and stylistic quirks of the AI model, such that those quirks become actual parts of human language.
And, of course, where almost all of the massive profit generated by all of that flows to an increasingly small number of huge corporations.
None of that sounds like a world I want to live in.
I totally get the value of AI for things like classification and understanding. But generative AI feels like a pandora's box to me.
I don't think I should despair too much and simply concede triumph to a statistical steamroller.
The value is we'll still be making stuff, or yearning for it, and AI becomes a part of our toolkit (or never). Perhaps in your case, we will see Prompt Engineering Patterns someday.
I intend to plop down a tiny fortune enabling my children to have obscene hardware for wherever their personal projects take them, and server-grade CPUs, multiple GPUs, and fiber will be a given.
Just know you're standing at a height somewhere above, and as a giant to me I hope you can see farther ahead :)
Maybe we'll retreat more into our personally-named server, managing a handful of like-minded users, crafting rooms in a MUD no one will read. We'll publish stats for our packet filter, read stories typed by hand, and make little games.
There are still lots of problems in many domains that we would be completely new to, and people suffering injustice -- are those not things we might be interested in as well?
For the sake of learning things, we are still satisfied, even if AI were to generate it in an instant. For the sake of satisficing our home labs and side projects, we find a tiny reprieve.
Not sure if there's a conflict here. We still use compilers, right? Unless we write in machine instructions, there's always some kind of program generation somewhere.
> I get no joy out of duct taping a few libraries together (though I respect others who do).
You can still work on stuff that requires human ingenuity. If you look at the credits section of the GPT-4 paper, you will see many people there because they are masters of "low-level" optimization. For instance, the author of Reformer is a lead in the Long Context section, apparently because he knows how to reduce the O(N^2) cost of self-attention to O(N log N), to say the least.
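To make that scaling gap concrete, here is a toy comparison of comparison counts -- a sketch only; real Reformer-style LSH attention has bucket sizes and constant factors this ignores:

```python
import math

def full_attention_ops(n: int) -> int:
    # Dense self-attention compares every token with every other token: O(N^2)
    return n * n

def lsh_attention_ops(n: int) -> int:
    # Reformer-style LSH attention only compares tokens that hash into the
    # same bucket, giving roughly O(N log N) comparisons (constants ignored)
    return int(n * math.log2(n))

for n in (1_024, 16_384, 131_072):
    ratio = full_attention_ops(n) // lsh_attention_ops(n)
    print(f"N={n}: dense does ~{ratio}x more comparisons")
```

At a 128K-token context the dense version does thousands of times more comparisons, which is why long-context work leans so heavily on approximations like this.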
We live in a world where digital tech is fantastic for people who have the skill and strategy to place healthy limits on their own use of it, and the worst possible world for those who cannot. And it's all about to get much more extreme.
All of your fears will likely come to pass, but I think there are also much more positive framings and there are fantastic opportunities to build solutions to some of these problems.
- Loneliness. Today, lonely men and women already suffer. A simulated relationship is a legitimate improvement over no human contact at all. Could a bot use the embeddings of your interactions to match you with a compatible partner, saving you the anxiety of having to navigate the dating market?
- Auto-generated videos. Frankly, most movies and TV shows suck today, probably because they come out of a broken Hollywood. I can't wait until the barrier is so low that a random genius college student can make a feature film with almost no resources. Maybe there will be more new movies that are actually good!
- AI tutors. Playing with GPT4 the past few days I've already experienced visceral joy from its use as a teaching tool. It is like having a pretty smart co-worker who knows something about literally everything. I'm honestly so excited about being able to learn new things without dealing with the fundamental barrier of finding good sources of knowledge.
I think the general trend is that actual useful applications are emerging only from enormous models trained and owned by billion-dollar companies. Even projects that aim to run models on private consumer hardware depend on commercial orgs to produce them. It doesn't seem like that is likely to change.
I don't think there are many positions/jobs/roles for people doing integral, foundational work that requires a deep understanding of ML. Becoming a world expert in ML will probably only open up opportunities at a dozen companies.
A very shallow surface-level understanding of ML already puts you leagues ahead of the general population. In terms of job security, figuring out how to use an ML API will get you hired faster than knowing how to advance the field.
We don't actually need that many Fabrice Bellards.
One way to think about it: Today's LLMs require incredible outlays of capital and processor power (and crews of folks with doctorates), such as billion dollar companies can provide. But how is that different from what Intel brought to commodity CPUs in the '90s/'00s, or what Nvidia brought to GPUs in the '00s/'10s? Or even what Cisco and folks brought to networks?
Though we may never design an artisanal CPU/GPU/router, we get to work with them every day to make things, and to communicate. These LLMs can be that for us at this moment. Let's go out and enjoy them, and see what we can make within their (vast) domain-specific capabilities.
[takes off rose-tinted glasses]
This is the biggest revolution I've seen in my lifetime, and it's going to make the PC, internet and smartphone look like a kid's toy.
It might be a blind-leading-the-blind situation, but I wonder if retiring would protect you. Could you imagine having gone into retirement in the early days of the internet? Wouldn't you be really confused about how to use things today? Or maybe you would just learn most useful things passively as a consumer.
It feels like the gap between the people who can make and the people who consume is widening.
> This is the biggest revolution I've seen in my lifetime, and it's going to make the PC, internet and smartphone look like a kid's toy.
This coupled with everything else that's happening is all too much. Getting closer and closer to cheap/free energy (batteries, solar, fusion, wind, etc), AI/ML, Robotics...
Is it really so hard to just "go with the flow"?
I'm not much of a control freak -- it's more that when you see a credible possible extinction event in the distance, don't you want to act?
And even if you didn't view it that grimly, isn't the ML stuff just SUPER cool? Trainable prediction engines are really amazing and genuinely useful (it feels like the first iteration was being able to recognize things in images, which felt like magic).
Going with the flow is awesome, but one thing I've found about life is that if you're not in the right flow, it's completely different. It's not like you have to be in the perfect stream, but you need one with fast moving water.
Imagine being 10 years late to computers or the last one in your area to get a typewriter.
I personally feel like I have to immerse myself in stuff to get it, and the lack of more than surface level understanding of ML is worrying with how big it could potentially be.
I have to disagree. The combination of being closed-source and dynamically linked makes a program a hassle to run on Linux. Even if it isn't at the moment of release, it soon becomes one. While ts_server is better than most, it already requires an old version of libjpeg-turbo not available in my distribution's repositories. I had to run it in a Rocky Linux container:
  docker run \
    --rm \
    --mount type=bind,source="$(pwd)",target=/app/ \
    --publish 127.0.0.1:8080:8080 \
    rockylinux:9 \
    sh -c 'dnf install -y libjpeg libmicrohttpd && cd /app/ && ./ts_server ts_server.cfg'
The solutions to this problem that I am aware of that do not involve releasing the source code are: 1) static linking; 2) containers; 3) shipping a Windows binary :-) ("Win32 is the only stable ABI on Linux" -- https://blog.hiler.eu/win32-the-only-stable-abi/).

Of course -- the stable ABI is there to allow your bundled DLLs to keep functioning. (Check out https://news.ycombinator.com/item?id=32471624 for an extensive discussion of the link.)
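For what it's worth, once the container above is up, the server can be driven over plain HTTP. A minimal sketch -- the /v1/engines/&lt;name&gt;/completions path follows the public TextSynth API, and the engine name here is a placeholder that would need to match what ts_server.cfg actually serves:

```python
import json
import urllib.request

def completion_request(prompt: str, max_tokens: int = 64,
                       engine: str = "gpt2_345M",
                       host: str = "127.0.0.1:8080") -> urllib.request.Request:
    # Build a TextSynth-style completion request against the local ts_server;
    # "gpt2_345M" is an assumed engine name, not taken from the release notes.
    payload = json.dumps({"prompt": prompt, "max_tokens": max_tokens})
    return urllib.request.Request(
        f"http://{host}/v1/engines/{engine}/completions",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = completion_request("The capital of France is")
print(req.full_url)
# Sending it requires the container to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["text"])
```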
I'm curious how soon someone uses these models to effectively ruin the ability to use binary-only releases as an obfuscation method.
I tried just using the Stanford Alpaca fine-tuned version of the LLaMA 7B weights that work with llama.cpp with textsynth, but it didn't like that (ggml-alpaca-7b-q4.bin: invalid file header). Having a textsynth HTTP API would save me a lot of hassle. I'm currently wrapping the stdin/stdout of a modified llama.cpp binary, and that's extremely messy.
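In the meantime, the stdin/stdout wrapping can at least be contained in a small helper -- a sketch only, with a placeholder binary path and a made-up --prompt flag standing in for whatever the modified llama.cpp binary actually accepts:

```python
import subprocess

def ask(binary: str, prompt: str, timeout: float = 120.0) -> str:
    # Run the binary once per prompt and capture everything it prints.
    # One process per request is wasteful (the model reloads every time),
    # which is part of why this approach is so messy.
    proc = subprocess.run(
        [binary, "--prompt", prompt],
        capture_output=True, text=True, timeout=timeout, check=True,
    )
    return proc.stdout.strip()

# With a real binary: print(ask("./main", "Why is the sky blue?"))
```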
I feel like people are often referring to 'coding' when they express these worries. You know, actually writing code, having been given a spec to do so, and perhaps also participating in code review, writing tests, all the usual engineer stuff.
My question is, amongst the HN crowd, what kinds of roles or areas do we think might be somewhat immune to this effect? The first thing that occurs to me are security, infrastructure & ops, networking. And of course the requirements gathering stage of software development. It is already the case that a lot of senior devs probably don't write much code and spend more time on communication between different stakeholders and overseeing whoever (or whatever) is writing the code.
Anyone else been thinking about this? What tech roles might thrive in the face of AI?
The way I personally see it, AI such as ChatGPT is another tool in our arsenal, and we as developers will have to figure out how it fits into our workflow. I think long term it will help us write better code and in general be more productive. For example, less time trying to find answers hidden deep in Stack Overflow, as we'll be able to get that information directly from ChatGPT.
I can completely see that some smaller places might not require a developer and instead use ChatGPT to write code, but it still has to be verified, and all the other processes around making that code "live", etc., still apply.
If anything I'd be more worried if I were a copywriter, as I think it's an underappreciated skill and companies may think they can get away with ChatGPT and a quick glance over the copy.
Either way, I'm positive and looking for new ways to help me become a more productive and well-rounded programmer.
No one wants to pay for code reviews. They don't show up on the agile board and we all collectively pretend they are "freebies" the devs give out to the company. Just like unit tests and all the other shit that we just expect devs to do in their spare time between 3 hour meetings and feature work.
I can see ChatGPT being an augmentation. But that only works because you have a human dev that does the merging and can take the blame if their code goes wrong. Remove that human and you're staring into the abyss.
In the context of software engineering I am tending to see it as another layer of abstraction. Once upon a time there was perhaps not much above machine code / assembly, but now you can have quite a few layers providing abstractions over that ending up perhaps at Javascript, or maybe low-code tools. For me, AI sits somewhere vaguely in that category (though with a much higher level of sophistication).
On what time-frame?
Permanently immune? You have to postulate that there's something a human can do that a computer system just can't. To me, given the progress we have already seen, I see no strong reason to imagine that there is such a thing. More precisely, it becomes a metaphysical question about what it means to be human, "What are people for?".
- - - -
In the medium term (and this may only be a few years) the role that will come to the fore is that of the human-computer psychologist, so to speak. We already talk about "prompt engineering". There are two questions: goal and context. What do you want the computer to do, and how do you know when it's doing it successfully? -and- What are the side-effects, the "ecology", of the selected solutions? Especially, otherwise unforeseen side-effects.
Frankly, I have not seen a more impressive portfolio of programming output.
Llama specific:
https://github.com/qwopqwop200/GPTQ-for-LLaMa
> According to the GPTQ paper, as the size of the model increases, the difference in performance between FP16 and GPTQ decreases.
https://nolanoorg.substack.com/p/int-4-llama-is-not-enough-i...
https://docs.google.com/document/d/1wZ0g9rHI-6s7ctNlykuK4W5T...
Expect to get away with a factor of 4-5 reduction in memory usage for a minimal loss of quality. :)
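Back-of-envelope numbers for why 4-bit quantization helps -- weights only, ignoring activations, the KV cache, and GPTQ's small per-group scale/zero-point overhead:

```python
# Rough weights-only memory for a 7B-parameter model at two precisions.
PARAMS = 7_000_000_000

fp16_bytes = PARAMS * 2      # 16 bits per parameter
int4_bytes = PARAMS // 2     # 4 bits per parameter

print(f"FP16: {fp16_bytes / 2**30:.1f} GiB")   # ~13.0 GiB
print(f"INT4: {int4_bytes / 2**30:.1f} GiB")   # ~3.3 GiB
print(f"ratio: {fp16_bytes // int4_bytes}x")   # 4x before overhead
```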
This gives off the surreal sci-fi vibe that the binary is the source. And who knows... true wizards work in mysterious ways.
Edit: nevermind, the models are all there, just some of the links aren't.
Please update.
https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
Shame.
Edit: licensing
Good for him to commercialize it, and at least he is not pretending to be a non-profit accepting VC money.
I do believe one of the failures of GPL was not being AGPL from the start.