I don't think this author understands what a productivity boost is. OSS is a development model; it didn't produce any "productivity boost" beyond what the general technology level (which is mostly proprietary) already offered.
>Programmers will command armies of software agents to build increasingly complex software in insane record times. Non-programmers will also be able to use these agents to get software tasks done. Everyone in the world will be at least John Carmack-level software capable.
/rolls eyes
>At Replit, we're building an AI pair programmer that uses the IDE like a human does and has full access to all the tooling, open-source software, and the internet.
Ah, OK, this is building up commercial hype. Makes sense now.
Like what couldn't one build with proprietary software? The main benefit of OSS is that you're doing it with free (as in beer) software, whereas before you might not have been able to afford a proprietary solution. But that's not a productivity increase; it's a budget decrease.
>And what will happen with this technology if all the source code leaks somehow, don't you believe it will become 10x-100x better by some measures?
No.
And the marketplace still isn't interested in fixing bugs over "oooh, shiny", so my concerns might never be addressed.
lol
I'm sure that for simple tasks, AI-based pair programming will offer some level of acceleration, but until it can understand the semantics of the code it's generating, and how it fits into the broader _system_, it will not be able to be trusted. I do not look forward to a world where I have to spend my time debugging AI-generated code.
As far as I understand it, though, these explanations are based on similar code contained in the training dataset, rather than any kind of ability to reason about what's actually going on, no?
I think tools like this can be great for generating skeletons and draft implementations for simple CRUD-like things. For example I asked it "write an Android layout XML for a login screen with username, password and a login spinner using components from the material library" and it did exactly that. I followed up with "write the corresponding activity in Kotlin" and it did. It generated a correct implementation, including a few paragraphs explaining how it worked and that it mocked the login method with an artificial delay for demo purposes.
Another thread that convinced me was when I gave it a Kotlin interface for a CRUD TaskRepository and asked it to write the implementation. It wrote a correct implementation backed by a Map. With some follow-up prompts it was able to write save/load methods to store state in a JSON file, and to emit events to a Flow whenever a task was created, updated or deleted.
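For concreteness, here's a rough reconstruction of the kind of Map-backed implementation described above. The `Task` and `TaskRepository` names and shapes are my guesses, since the actual interface given to the model isn't quoted in the thread, and I've left out the JSON persistence and Flow parts:

```kotlin
// Hypothetical reconstruction of the CRUD interface described above;
// the exact interface used in the prompt is not shown in the thread.
data class Task(val id: String, val title: String, val done: Boolean = false)

interface TaskRepository {
    fun create(task: Task)
    fun get(id: String): Task?
    fun update(task: Task)
    fun delete(id: String)
    fun all(): List<Task>
}

// In-memory implementation backed by a Map, along the lines of what
// the model produced.
class InMemoryTaskRepository : TaskRepository {
    private val tasks = mutableMapOf<String, Task>()

    override fun create(task: Task) { tasks[task.id] = task }
    override fun get(id: String): Task? = tasks[id]
    override fun update(task: Task) {
        require(task.id in tasks) { "unknown task ${task.id}" }
        tasks[task.id] = task
    }
    override fun delete(id: String) { tasks.remove(id) }
    override fun all(): List<Task> = tasks.values.toList()
}
```

The point is that this is exactly the kind of mechanical, low-risk boilerplate the model handled well.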
Another one: I asked it how I could debug why a GStreamer pipeline had a refcount of 2 after the pipeline stopped running, and it pointed me to a number of debug tools and environment variables I could set to trace refs in the pipeline.
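(The thread doesn't quote the exact suggestions, but for anyone curious, GStreamer's standard debugging knobs for this are environment variables along these lines; `./my-gst-app` is a placeholder for your own binary:)

```shell
# Run the app with GStreamer's "leaks" tracer and verbose tracer logging,
# to report objects still alive at shutdown.
GST_TRACERS="leaks" GST_DEBUG="GST_TRACER:7" ./my-gst-app

# Or log every ref/unref via the refcounting debug category to see
# who still holds a reference.
GST_DEBUG="GST_REFCOUNTING:9" ./my-gst-app 2> refs.log
```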
However, if there are any courses, videos, or detailed documentation about this new way of doing software development, I'd be interested in looking at them.
At the same time, testing has not really improved in the last 30 years.
I do not agree with this statement. There has been no real progress in AI since the crypto winter. There are just too many people with always-online smartphones, so governments decided this field was too big to be left outside their control. And that has led to a 100x increase in no-brain programming jobs where all that's required of those programmers is to fight against users.
The author is right that big changes are coming, but not the changes he is writing about.
> Simple tools built using the models of today could easily double the productivity of an artist or writer,
This comment would be more useful if you gave some searchable names of such pieces of AI software.