Except that it never works like that, does it? The further we go, the harder it becomes. It's impressive how fast they went from nothing to almost-fully-autonomous cars, but actually-fully-autonomous cars may never happen, who knows?
As a developer, I honestly feel more threatened by the coming energy crisis (the end of fossil fuels within the next couple of decades) than by AI replacing my job.
The fact that many people do not know exactly what they are doing can be seen in the results. The people whose goal is to write software as robust and efficient as possible still have to know and control the details. It's like driving a car: you do not have to be an engineer to drive one, but the more you want to push the limits of performance, the more you need to know about the details. And as far as AI is concerned, despite the predictions and grandiose promises, we are obviously still a long way from replacing humans as drivers. I see no reason why software development should be any different. There are so many very complex issues involved that are not mentioned in the article. Just understanding the requirements of software will stretch the capabilities of AI for a few more decades.
This is only because we as a society have an extremely low tolerance for errors in automated driving and essentially require by default superhuman performance (a self-driving car with an error rate of the median human would never be allowed to be set loose by itself). In scenarios where a 0.1% error rate, 1% error rate, or maybe even 10% error rate are acceptable, AI is making huge strides.
> Just understanding the requirements of software will stretch the capabilities of AI for a few more decades.
I hope so. I'm not sure. And for a variety of reasons that's scary. What gives you a timeline of a few more decades?
Now, what tends to be forgotten in the AI-average vs. human-average comparison is that humans can also drive, e.g., in Turin or Paris at rush hour, on mountain roads in the snow, or on Cornwall's roads in torrential rain.
It's not that I believe self-driving AI will never progress to this level, but let us be honest when comparing: they still drive themselves into fully visible obstacles in broad daylight or run over cyclists at night.
I'm not sure the median human driver can do all that.
This is a potential argument for why users don't have to know the arcane details of how a CPU's internals work, but on the other hand it's a good argument for why programmers should have a quite good knowledge of exactly that.
If you just want to do the same thing you currently do but faster, AI will handle that. But modelling a business process properly and making it explicit will still bring huge value to those who care to put the effort in.
Also, in mathematics, many proofs contain a decision procedure or a parametric algorithm for constructing a counterexample.
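A concrete instance of that point (my own example, not from the thread): Euclid's proof that there are infinitely many primes is constructive, and it doubles as a procedure for producing a prime outside any finite list.

```python
# Euclid's proof as an algorithm: given any finite list of primes,
# compute a prime that is not in the list. Any prime divisor of
# (product of the list) + 1 leaves remainder 1 when divided by each
# listed prime, so it cannot be one of them.
def next_prime_outside(primes):
    n = 1
    for p in primes:
        n *= p
    n += 1
    # Trial division: the smallest divisor > 1 of n is itself prime.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(next_prime_outside([2, 3, 5]))  # 31, since 2*3*5 + 1 = 31
```

The proof doesn't just assert existence; it hands you the construction, which is exactly the "proof contains an algorithm" phenomenon mentioned above.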
Okay, so this is an advertisement.
I was sceptical about this until I started playing with GPT-3 and had it not only write code for me, but also "explain" code to me. Sure, it's kind of limited right now, but it can only be a matter of time before this all radically improves.
Maybe I should focus on system design and translating the messy real world into systems. That's the hardest bit of my job currently. I was also thinking of moving down the stack and getting deeply into security engineering or something like that (not that this is immune from AI either!!).
I don't see any reason to believe the current approaches can extend to something that actually changes programming. They're not based on understanding code; they're based on generating text that matches what they would expect to see given the context. They have no model of what code means, so they can't model why code is sometimes subtly different when there are no local contextual cues. And when there are, your prompt would need to reproduce those contextual cues for the model to key off of. In other words, you as the programmer are still directing the generation of the code. You're just doing it via an undocumented and somewhat unpredictable autocomplete.
This doesn't remove the need to have someone who knows what they're doing in the loop. Best case is that it reduces the amount of time you spend typing by a little bit. As long as your job is to know what you're doing rather than to generate text, the current systems are no threat to it.
It's quite possible that this avenue doesn't scale to anything more broadly useful. We shouldn't mistake solving 20% of a problem to being on the right path. Maybe this remains as auto-complete on steroids and it's a dead end. I was honestly just surprised by GPT 3's apparent abilities, smoke and mirrors though they may be!
It's possible our minds are also huge and complicated neural networks, though I suspect that description is incomplete at best.
But the point is that current tools are trained on text generation. Something that would change programming would have to train on the meaning of programs. It's a rather different task, as it's no longer statistical. Doing it properly requires metacognition as well, to avoid falling into the trivial inconsistencies in most programming languages. And connecting that with real-world tasks that it hasn't seen before would require an understanding of the real world.
I'd call something with all of those capabilities AGI. I honestly don't think any system short of that will ever be more than an autocomplete, because it can only ever fit things together based on some statistics.
I am rather skeptical about the idea that AI is going to do away with programming soon. Yes, ML has shown some impressive results and will definitely improve in the coming decades, but I think it will still take some time before the efficiency of electronics-based ML systems surpasses that of organic ones.
Please note that this blog post is from a start-up that aims to build ML systems intended to replace programming. So this blog post is, in a sense, also a kind of job advertisement and/or investor pitch.
My undergraduate education was in the early 90s, and at no point in my life have I ever had much of a clue regarding the physics underlying transistor design.
EDIT: also, while at one time I probably did have a reasonably solid grasp of how CPUs work, there's been an awful lot of advancement in the field over the decades, and I wouldn't describe my current understanding as anything more than a cartoon model.
SQL and compilers changed the goal of many programmers from writing custom extractors, or serving as human-to-machine translators, to writing useful statements of intent for extraction and actions. You still need to know WHY you’re extracting or adding your code in both cases, and you often need to understand the substructure enough to dive in and debug when results are unexpected or “not optimal enough”. And ultimately, much of the work eliminated by the gained efficiency was boilerplate; once it was gone, programmers could take on more ambitious project scope than they did previously, because they weren’t spending their time rewriting a new data storage system or translating actions into machine language for the umpteenth time.
I feel we’ll see a similar progression: these code generators (very optimistically assuming a world where they work deterministically “well enough” to be trusted with even core business logic) will be treated as black-box generators of valid actions. But in a world where action generation is free, under-specification or incorrect specification still costs you, and we still have something curiously resembling programming: the art of programming becomes one of chaining assemblages of black boxes into cohesive, maintainable superstructures.
I suspect we’ll say the same thing about code of simple and safe enough structure that we could trust it to black-box generators that we currently say about SQL:
> Thank god I don’t have to redo all that work every time I start a new project
And, as with SQL, the project requirement boundaries will move to match your increased output capacity.
Much like the old saying “What Andy giveth, Bill taketh away”, perhaps modernized as “What Copilot giveth, your PM taketh away”.
However, this bit reads as needlessly hyperbolic to me:
> The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and them some), ready to be given any task required of the machine.
I mean okay, sure, eventually. But people were predicting hand-wavy everything-solutions like this sixty years ago in Star Trek. It's not very imaginative. Not to mention, this four-quintillion-parameter model will be hugely inefficient for simple tasks. I think it'll be a long time before we care that little about efficiency.
But here's a much more near-term scenario I'm imagining:
You need to stand up a new microservice. You have an off-the-shelf "learned microservice" web framework that you reach for. You write a small training set of example request/response JSON, not unlike unit-tests. Maybe the training set includes DB mutations too. You start testing out the service by hand, find corner-cases it doesn't handle correctly, add more training examples until it does everything you need.
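As a purely illustrative sketch of what that training set might look like (the shape of these examples is invented here; no such framework exists as far as I know), it could read a lot like unit-test fixtures:

```python
# Hypothetical training set for a "learned microservice":
# request/response pairs written much like unit-test fixtures.
# The field names and structure are invented for illustration.
examples = [
    {"request":  {"path": "/balance", "user_id": 42},
     "response": {"status": 200, "body": {"balance": 100.0}}},
    {"request":  {"path": "/balance", "user_id": 999},
     "response": {"status": 404, "body": {"error": "unknown user"}}},
    # Corner case discovered by hand-testing, added as a new example:
    {"request":  {"path": "/balance"},  # missing user_id
     "response": {"status": 400, "body": {"error": "user_id required"}}},
]

# A conventional harness could replay the examples against the trained
# service and flag mismatches, exactly like a regression suite.
def check(service, examples):
    return [ex for ex in examples if service(ex["request"]) != ex["response"]]
```

The "add more training examples until it does everything you need" loop above would then just be: run `check`, look at the failures, append examples, retrain.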
Now, in addition to saved effort vs hand-coding (which may or may not be the case, depending on how simple the logic is), you've got what I've started to think of as a "squishy" system component.
Maybe, because AI is fuzzy, this service can handle malformed data. Maybe it won't choke on a missing JSON key, or a weird status code. Maybe it can be taught to detect fishy or malicious requests and reject them. Maybe it can tolerate typos. Maybe it can do reasonable things with unexpected upstream errors (log vs hard-fail vs etc).
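Today you'd have to hand-code that squishiness as defensive defaults; a minimal sketch of the behavior described above (my own hand-written approximation, not a learned component) might look like:

```python
import json

# Hand-coded approximation of a "squishy" handler: tolerate malformed
# JSON and missing keys instead of blowing up. The hope expressed above
# is that a learned component would exhibit this tolerance for free.
def handle(raw_body: str) -> dict:
    try:
        data = json.loads(raw_body)
    except json.JSONDecodeError:
        # Malformed body: degrade gracefully rather than hard-fail.
        return {"status": 400, "note": "could not parse body, logged for review"}
    if not isinstance(data, dict):
        data = {}
    # Missing keys get reasonable defaults rather than a KeyError.
    name = data.get("name", "anonymous")
    count = data.get("count", 1)
    return {"status": 200, "greeting": f"hello {name} x{count}"}

print(handle('{"name": "ada"}'))   # missing "count" is fine
print(handle('not json at all'))   # malformed body degrades gracefully
```

Every `get(..., default)` and `try/except` here is a decision someone had to anticipate and write; the appeal of the fuzzy version is that the tolerance wouldn't need to be enumerated case by case.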
This is the really compelling thing for me: so much of what makes software hard is its fragility. Things have to be just so or everything blows up. What if instead, the component pieces of our digital world were a little squishy? Like human beings?
Yeah, fair. Maybe we'll have to change the level of abstraction to where the neural net determines a math equation, instead of openly translating to and from different numbers?
> enforce GDPR compliance
I actually think this would be an excellent use-case because it's such a sprawling problem (and because law is already "squishy" in this sense, because it has to be, because it's all about the messy real-world). Imagine watchdogs being able to hand companies a neural-net that continuously audits them for compliance, which works across different companies and systems (you'd probably have a human make the final ruling once a company is flagged, but still)
The cold-fusion-powered, fully-automated car.
Ten years ago, you needed a team of PhD propeller-heads to do anything with AI. These days, what you need is a lot of data engineers capable of moving data around efficiently via scripts, plus people who can use the off-the-shelf stuff coming out of a handful of AI companies. It's like database technology: you don't have to have a deep understanding of databases in order to use them. I can get productive with this stuff pretty quickly. And I need a working knowledge of what's out there in order to lead others to do this stuff.
The consequences of a general AI, or even something close enough to that, coming online would be that, pretty soon after, we'd put that to use to do things currently done by really smart humans. Including programming things. The analogy is maybe that as an executive of a large tech company, you don't necessarily have to be a hard core techie yourself. You can delegate that stuff to "human resources". Adding AI resources to the mix is going to naturally happen. But it will be a while before that's cheap and good enough to replace everybody. For the foreseeable future, we'll have a growing number of AI resources but it will be relatively expensive to use them and we'll use them sparingly until that changes.
Look at ANY large enough project: no matter if it's a kernel or a GUI desktop application, at a certain point ALL of them try to integrate this, that, and the other thing, becoming monsters. The original desktops were designed as a single OS-application-framework where "applications" were just "code added to the core image". That's the missing level of integration we can't achieve in modern systems, and that's why all complex software becomes a monster, trying to circumvent the lack of integration by adding features directly.
Unix at first succeeded over the classic systems by claiming they were too complex and expensive, and that separating "the system" from "users" was the best cheap and quick solution. Then they backpedaled, violating Unix KISS logic with X11 and GUIs, libraries, frameworks, etc., because the KISS principle does not scale. Widget-based GUIs were born and succeeded over document-oriented UIs by claiming those were too complex and expensive. The modern web proves they were wrong. In another ten years I think we will come back to Xerox...
And other things the author likes to tell themselves.. or perhaps they enjoy building clout for saying outrageous things.. yawn
Sure, there will be obsolete concepts, algorithms, and plenty of AI assistance, but programming is building a state machine, or a house in an abstract space, that powers machinery to accomplish tasks of value using a general-purpose computation device.. computer science informs and is informed by a craft (programming), and that craft can only be replaced by another craft (whatever that is rests in the imagination of the author). You’re still doing creative work, and you will only be as effective as your abilities in the practice of applying theory.. the theory will not be “use AI” or “don’t learn computer architecture lmao what a nerd”.
Imho that is.
First off, the only one of those three sentences that a 2002 researcher would be stumped by is the first, and that solely due to the unfamiliar nouns. The other two sentences are perfectly classical, and the only difficulty one of the ancients would have is putting their eyes back in after they popped out on seeing the model sizes.
Second, isn't that good? It means the field has advanced, and there are new concepts being used, which I'd have thought is exactly what we want.
Third, how different is this from the past? Would a time traveller from 1982 be equally stymied by a paper from 2002? How about 1962 to 1982?
In 1943, Warren McCulloch and Walter Pitts created a computational model for neural networks. The field then split into one approach focused on biological processes and another focused on applying neural networks to artificial intelligence.
The author stated that a three-sentence passage would not have been comprehensible 20 years ago, but that's not true. Anyone could understand what was being said from the context.
Code that writes code is essentially what he is talking about, and heck, I wrote one when I was a freshman back in 1980. Because code that writes code is what it all boils down to. Using fancy words might get someone a promotion for using the current buzzwords, but they're still just buzzwords.
Color me not impressed.
At best, you'll reduce the time spent writing code at the cost of greatly increasing the time spent writing tests to give you sufficient confidence that your autogenerated code actually does what it is supposed to.
And if it doesn't, good luck fixing it.
"I'm not a classical style developer and that's fine because programming is over."
"I'm a classical style developer and I'm safe because AI can never do this."
At the moment, either might be true and either might be false, and both might be merely some percent true and false at the same time.
The only real mistake is probably jumping to either assumption at this time.
Well I'm sure an argument can be made about fence sitting too so whatever.
Of course, I'm assuming that we are writing programs to specify what and how a system should work. It could be that AI (but not AGI) is so advanced that specifying a system can be compressed into training a model.
I think actual programming requires something more concrete; the 'atoms' of a program are not text letters or pixels, but something more abstract, more exact. I think once deep learning incorporates a symbolic or logic system of some kind, that might be a solution, but then that will apply not only to programming. All IMHO.
What happens when you have a NN that understands how to integrate new physical input and render usable actions for creating outputs without human intervention? That's where we get machines building machines.
What happens when we start using AI to find the best recreational drugs? How about recreational drugs designed for specific kind of Overdoses - like crumple zones on a car? Or using AI to find the best cocktail of psychedelics that allow us all to work stoned and to maximum benefit all day long without diminishing returns?
Finally, what happens when these AI can layer themselves together through transfer protocols and problem solving distribution without us telling them to? A self-analyzing, self-correcting and self-improving system can be considered a kind of life.
We really are very close now.
You're going to have to define "understands" and explain how to get there from where the technology is right now, because a model is a statistical artifact and doesn't "understand" anything, including its inputs and outputs.
> How about recreational drugs designed for specific kind of Overdoses - like crumple zones on a car?
Why would anyone want an overdose?
> Or using AI to find the best cocktail of psychedelics that allow us all to work stoned and to maximum benefit all day long without diminishing returns?
Who benefits from this? Because it doesn't sound like it would benefit the people doing the work.
> We really are very close now.
Just a few more puffs and I'm sure you'll have the solution.
Why would anyone want a car accident? Not quite following me huh? Seems like maybe you just want to argue.
Who benefits from humans working on stimulants and psychedelic drugs? How about the entire 20th/21st century? Are you seriously pretending everything from caffeine to opium hasn't been a driving force of production?
The sensing machines we are building parallel the functional parts of our brains in design and utility. We've networked them all together already, and all we need is a simple spark to set it all in motion.
Cognition rhymes with ignition for a reason.
exhales