- Low-code tooling will see a growth spurt, not driven by "citizen developers" but by increased developer adoption of technologies that offer a good DX and a capability to integrate seamlessly into "normal" dev workflows. Tech like Retool, Plasmic, Builder.io, PowerPlatform, new-gen ETLs, etc.
- The "citizen developer" paradigm will remain largely unsolved. The vision holds all its initial potential but it requires a multi-pronged, universal, push involving corporate politics, compliance, security, training, etc. that cannot be solved by a single company. We might see interesting initiatives or companies built around this in 2023, but a year won't be enough time to solve it.
- There will be a massive proliferation of GPT-based apps. Many will be shite, a few will be useful.
- Rising interest rates will push investment mass away from startups and into the other end of the spectrum: good old bonds and other fixed-rate assets.
- There will be a rising movement of software dev methodologies and training on how to use AI-assisted coding (Copilot, ChatGPT) effectively.
- ChatGPT will not replace coders, at least not in 2023, and the bitter non-tech people expressing uninformed opinions about its code-gen capabilities from the sidelines, waiting for some sort of comeuppance, are not going to get it.
- React will continue to be a standard but coding a form will be as difficult as it was back in fucking 2013. I love the tech otherwise, but c'mon.
ChatGPT is the smaller model, more aimed at chat; the davinci model is already better ;)
- Business analysis / product management skills
- Software architecture skills
- Ability to read the code output, and edit it as needed
- Ability to run the code and deploy it
- Ability to debug it
- A combination of all of the above to prioritize and implement changes as business requirements evolve
Does Davinci directly address any of this?
I'm sure there is at least one NPM package that solves this issue.
Twitter is going to be interesting to watch. I think it'll either crash and burn, or become huge doing something very different from what it is now.
EDIT: I'll add one more. Meta will keep declining and fail to deliver anything meaningful wrt. the Metaverse.
For example, we had a case where we had thousands of technical medical descriptions and we wanted to simplify them into "average person" descriptions. In the past, that meant someone going through each one and re-wording them as best as they could. But an engineer tried pasting them all in to ChatGPT and asking it to reword them and it worked surprisingly well.
In another example, my wife had to send an email to a volunteer group asking them to participate in an upcoming activity. She was able to draft a base email to edit in about 1 second with ChatGPT. Then she could edit it as needed.
I've also heard from folks in marketing getting a lot of use out of these tools.
So maybe none of these use cases are individually transformative, but I think it will at least be a tool on par with spellcheck that is super handy and widely used.
1- A significant portion of mass-market retail becomes automated. Labor is getting too expensive and entitled. Everyone needs to hire, no one wants to work, and businesses can't make $20 minimum wages profitable.
2- Drone military spending goes up by 10-100x, fueled again by enlistment shortages, arms proliferation, and the need for deterrence.
3- AR/VR becomes the most hyped consumer technology (even more than now). People are thirsting for something new, and a 100-megapixel camera in a phone doesn't move the needle.
4- More and more companies announce migration away from Chinese manufacturing. Vietnam and India become primary beneficiaries. Global undeclared economic war wages on.
5- Google and FB will see persistent losses in active usage. They are too addicted to the ad money, and the user experience is becoming borderline hostile.
6- Still won't be able to buy a PS5 if you want one. Nintendo will continue to milk the Switch without a meaningful update. That won't stop unless sales dry up.
Specific Prediction: Google services are banned by a number of countries, and separately their search business is disrupted by AI generated content to the point that their algorithms stop working as intended.
* Bing grows in popularity as it incorporates OpenAI technologies.
* Web3 starts its decline into irrelevance. VCs stop investing.
* Crypto contagion continues but remains relevant due to CBDCs.
* XR becomes the new hype technology courtesy of the Apple XR launch.
* Uber will decline as Cruise self-driving robo-taxis expand across the US.
https://news.ycombinator.com/item?id=29746655
Another one, even more accurate given the energy crisis we are having right now in Europe.
> I will go with most 'outrageous' predictions I can think of:
> - 'Green course' becomes less popular due to high energy prices
> - Russia invades a country without admitting that
Turns out these weren't so outrageous after all.
Who wouldn't want to reverse aging and fix most of the diseases, and the costs associated with them, in their population? Who, on an individual level, wouldn't want to look 20 again?
Right now, the problem seems to be that, in general, we do not believe such a thing is possible.
This will change as soon as we have a robust rejuvenation event, at least in mice. After that, rejuvenation is going to be the new field where all the investor money goes.
On a more modest perspective, stuff related to health and longevity will also see growth. New sensors, new ways to gradually improve health using tech.
There are also some interesting applications for statistical models like ChatGPT and beyond. I would say that text generation is not the most interesting application for such models.
For example, what kinds of properties could we infer from our genome using network models?
- Mostly a lot of the same stagnation we've seen in the tech industry for the past 5 years.
- Some sprouts of innovation around generative language, probably around fact checking and detecting and removing "hallucinations" from models. But probably not anything significant enough to shift things for everyday users (we'll probably have to wait until 2025 for that).
- Crypto folks will keep believing that they're building the future, although everything indicates that Big Tech is finding a way to become safer, more private, without having to become fully decentralized.
Privacy is not having a camera in your bedroom. Promises to protect the data collected from the camera are meaningless.
* Copilot continues growing among developers, and eventually reaches a point where it's as essential as autocomplete or syntax highlighting.
* Venture capital remains dry for half of the year.
* Someone - with bets on figma - ships a feature that auto generates react code from a design. This process is ripe for automation.
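The core of design-to-code really is mechanical: a design file is a tree of nodes, and emitting JSX from it is a tree walk. A toy sketch (the node shape here is invented for illustration, not Figma's real API):

```python
# Toy design-to-code generation: walk a hypothetical design tree and emit
# a JSX-like string. Node fields ("component", "style", "text", "children")
# are assumptions, not any real design tool's schema.

def node_to_jsx(node: dict, indent: int = 0) -> str:
    """Recursively convert a design node into a JSX-like string."""
    pad = "  " * indent
    tag = node.get("component", "div")
    style = node.get("style", {})
    props = f" style={{{style}}}" if style else ""
    children = node.get("children", [])
    text = node.get("text", "")
    if not children and not text:
        return f"{pad}<{tag}{props} />"  # self-closing leaf
    inner = "\n".join(node_to_jsx(c, indent + 1) for c in children)
    body = f"{pad}  {text}" if text else inner
    return f"{pad}<{tag}{props}>\n{body}\n{pad}</{tag}>"

design = {
    "component": "form",
    "children": [
        {"component": "input", "style": {"width": "100%"}},
        {"component": "button", "text": "Submit"},
    ],
}
print(node_to_jsx(design))
```

The hard part, of course, is not the emission but inferring sensible component boundaries, names, and state from a freeform canvas, which is where the ML would come in.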
* TikTok continues to face greater pressure from US regulators and politicians until a US-hosted spinoff is forced into creation.
A lot of times you also have a page template framework in your app with a reusable header, footer, etc, and are really just designing the content of a page. It's difficult for generated code to slip cleanly in to an existing page framework in a way that doesn't take more time to rework than to just write yourself.
I'm not saying these are unsolvable problems, but it just makes it hard for a code generation product like this to find a single big enough addressable market to be exciting.
As people realize the drawbacks of silos and the benefit of more intentionally crafted data, Linked Data and other network relative data schemes will become more popular, in addition to their uses for AI systems (which should be the biggest "site" of them all, and something LD is created for).
I would characterize AI as 80% accurate so far. But getting one in five things wrong is not good enough for many tasks. Human/machine oriented data formats like Linked Data will help close this gap, as contributed by projects like Wikidata and increasingly smaller scale apps through better defined SEO (schema.org), for example. Breakthroughs in easily working with Linked Data at day to day levels would be helpful here, right now libraries even for specific domains are very nuts and bolts compared to ORM libraries. For common querying, perhaps GraphQL with network schemas will start to gain mainstream popularity.
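Concretely, the schema.org markup mentioned above is just JSON-LD embedded in a page. A minimal sketch, with invented article details, showing that the "linked" semantics ride on plain JSON:

```python
import json

# A minimal schema.org JSON-LD document of the kind search engines consume;
# the headline, author, and date are invented for illustration.
doc = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "2023 tech predictions",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2023-01-01",
}

# Serialized, this is what would sit inside a
# <script type="application/ld+json"> tag in the page's HTML.
payload = json.dumps(doc, indent=2)
print(payload)

# Round-trip it: underneath the vocabulary, it's ordinary JSON.
parsed = json.loads(payload)
```

The appeal for both SEO and AI consumers is the same: `@context` and `@type` give the keys machine-resolvable meaning without changing the serialization format.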
We should also see breakthroughs in open standards data carrier formats, like decentralized wallets and credentials. These will have significant impact because they are essentially like free-floating sites that interact with any site.
I think 2023 is the year AGI becomes a serious and mainstream topic. There will be new versions of OpenAI’s work, that will incorporate layers for correctness checking and reinforcement learning. Versions of itself that start to improve off each other.
As technical limitation after technical limitation is solved or lifted, HN and engineers at large start having discussions about what it means to be an AGI in no true Scotsman style (“no, if it can’t exactly be taught how to create a startup, fundraise for it and have a 7bn dollar exit then it’s not REAL AGI”), until at some point in the next years we stop caring about that discussion as we moved on from “are we there yet” to “what now”.
This ordeal will take longer than a year. I predict we will look back at 2023 as when it all started. Even though, as we all know, it didn't (GPT had been around for a while, building on the shoulders of giants), for the mainstream it absolutely hasn't started yet.
- Legal issues for AI coding help.
- Commercial support/add-ons for the "fediverse", resulting in a lot of de-federation and thus at least two sub-spheres.
- AAA gaming will tackle the new rise of AI. Better bots, for one (your computer will now insult your mom, too). Prompting tech journalists to coin horrible new pseudo-acronyms like "AIAIAI" or "AAAAI".
- More people moving away from VSC due to better LSP integration in existing and new editors (Helix, neovim etc.)
- You'll read a lot more about "permacomputing", with no definite products.
- Metaverse will continue to fail, possibly with a new gloating book by Jaron Lanier.
Edit: What would be cool is if ChatGPT et al. could take an implementation as input and generate all the test descriptions in natural language (the source too, why not). That would expose unintended, hidden, or mistaken assumptions in the implementation.
1) More Chinese apps will conquer the market, as TikTok is doing.
2) A new spam filter will be desperately needed to fight email and blog junk generated by ChatGPT-like models.
3) Students might be able to learn coding faster assisted by AI, since instead of searching for solutions they can get instant responses.
the only interesting spaces will be anonymous and invite-only
the internet becomes balkanized as international cyberattacks increase in frequency
For example, there have always been ideas floating around about creating a blockchain based repository of scientific knowledge or social media or whatever. The main blockers for ideas like that were always content moderation at scale: how do you know that what someone is uploading is legitimate or follows the rules? How do you moderate content without an overlord? There need to be systems in place to:
- filter out garbage and toxicity,
- allow heterodox submissions (no political censorship),
- and do it all without some kind of admin/moderator
Now GPT-3 can be used to pre-validate content. It can tell you whether a submission follows some kind of logical reasoning, isn't spam, and isn't anti-social. It's not 100% accurate, but it's good enough, and it can be tweaked.
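A minimal sketch of what such pre-validation could look like. The label set and prompt wording are assumptions, and `call_model` is a stand-in for a real API call (e.g. to OpenAI), so only the surrounding pipeline is shown:

```python
# Sketch of GPT-based content pre-validation. The labels and prompt text
# are assumptions; call_model is whatever function hits the real model.

LABELS = ("ok", "spam", "toxic", "incoherent")

def build_prompt(submission: str) -> str:
    return (
        "Classify the following submission as exactly one of "
        f"{', '.join(LABELS)}.\n\nSubmission:\n{submission}\n\nLabel:"
    )

def parse_label(model_output: str) -> str:
    """Map a raw model completion onto one of the known labels."""
    word = model_output.strip().lower().split()[0].strip(".,")
    return word if word in LABELS else "incoherent"  # fail closed

def pre_validate(submission: str, call_model) -> bool:
    """Return True if the model judges the submission acceptable."""
    return parse_label(call_model(build_prompt(submission))) == "ok"

# With a stubbed model, the moderation plumbing itself is testable:
stub = lambda prompt: " ok."
print(pre_validate("A thoughtful on-topic comment.", stub))  # True
```

Failing closed on unrecognized output matters here: a decentralized platform can't have a human moderator catch the cases where the model rambles instead of answering.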
Beyond that, GPT-3 can be used for all sorts of tools like automated codebase documentation (no more writing for developers, yay!), news trend-spotting (for traders/finance), generating landing page copy, etc.
Which means that, since those models cannot verify the correctness of facts, they will accept anything created by... those models.
I find these kinds of descriptions confusing. What does "garbage and toxicity" mean? You might as well have written "filter out bad things", it would be equally vague and subjective.
For example, if we go the human nature route, then any comment - no matter how inflammatory - is fair game as long as the goal of the comment is to "help the tribe" in some sense. That's kind of like political speech. So, you could faithfully argue ideas like "nazis are good, and we should be like them", as long as you aren't using toxic language and attacks on other people while doing it.
EDIT: Actually fascist/communist ideas contain beliefs that are anti-social, so wouldn't be considered pro-social according to human nature, and that's where the conversation would stop. It still stands that heterodox ideas that don't work, or people don't like, can be technically pro-social.
If we go the cultural route, then identify the mainstream beliefs and rules of discussion and enforce them. This is like the "no swearing" rule, or "no nazis" rule.
You could just codify a set of rules for content that the AI adheres to.
That wouldn't be possible on a blockchain-based social platform because the algorithm would be public, provably in use, and there would likely be voting mechanisms for changes to the algorithm.