This is just the next incarnation of trying to shift the output of someone else's algorithm in your favor. Be wary of building a career on top of that. It's very easy for the algorithm owner to change things up and obviate any value you used to provide.
>https://sites.google.com/view/automatic-prompt-engineer
Not exactly a "toaster go brrrr" job, but it could be obsolete one day
WaPo does need to chill though. There are barely any prompt engineer jobs
Edit: If anyone's curious, I've been following this for prompt stuff: https://github.com/dair-ai/Prompt-Engineering-Guide
You've got a huge blind spot if you think prompt engineer isn't already a thing.
It may be a "thing", because generating BS is a viable business model and ChatGPT makes it more efficient.
...but I submit, as a working hypothesis, that it is completely impossible to gain knowledge you do not already possess from a language model, no matter how clever your prompting.
I'm very interested in counter-examples, but the few I have seen so far turned out to be fake.
People are graduating with watered-down educations, earning inflated salaries under inflated titles. It all helps people believe they're higher status: they have a university degree and are a manager earning $80k, so surely they're getting close to the top of the totem pole. But they have a worse standard of living and an education equivalent to high school in the '60s.
[1] https://news.ycombinator.com/item?id=34641549
[1] https://www.cbsnews.com/news/salary-manager-jobs-fake-titles...
-mlsu: https://news.ycombinator.com/item?id=34884683
At this point the word "engineer" has lost its original meaning. Until there's a formal theory of how we can interact with LLMs and you make use of that in a systematic fashion, "prompt engineering" is really closer to "prompt artist."
Interesting angle. Are you saying there are hardly any "software engineers" out there, that they are all merely "software artists"? 'Cause none of them uses a formal theory for their craft. If they did, all those highly opinionated discussions of whether to use goto in C, or what the greatest flaws of node.js are, just would not exist.
I don't see this in prompt engineering. In my limited experience (I played a few hours with Stable Diffusion and more with the OAI davinci-003 model), you can get good at it within a few days.
Well, you do you. That's old world thinking for a field that's going to dramatically morph into something that barely resembles what we have today.
I'm hiring a contract prompt engineer for my startup.
If you want to help us achieve better "TV replacement" results, send me an email (see profile).
https://fakeyou.com/news (early demo, more coming soon!)
Lawyers have a bad reputation, sure, but there's a lot of education about the interpretation of our law, and the absurdly large corpus of legal documentation that must be read just to become a lawyer is far beyond anything you describe.
Crazy.
https://arxiv.org/abs/2302.06541
That is not to say that integrating LLMs won't create a lot of jobs. Think of it as systems engineering. Knowing how computers work as well as a software engineer does will always be useful.
(This was before JS, before CSS, etc. Mostly just the original HTML's simplified LaTeX article.cls-style elements, plus `A` and `IMG`, and maybe a `FONT`.)
HTML was easier to use than many word processors, but because it was new and unfamiliar, yet looked like it might be huge... for a brief period, practically anyone who could spell "HTML" or "WWW" could posture as a whiz kid, and make big bucks.
I'd guess that "prompt engineer" will evolve into real careers soon, but the nature of the technology and the role will be very different than it is this quarter.
I've been playing around with generating stories with ChatGPT for a while and...English (or any natural language) is really bad at being specific. I've made progress by learning some specific words to describe the type of scene I want and how much of it I want ChatGPT to generate (such as a scene for just that evening versus a few paragraphs describing weeks of traveling). I've also started getting some intuition for when I've given ChatGPT too much info (it'll cram all the facts in in weird ways) and too little info (it'll get really random and start inserting new characters and stuff).
Having a way to manage the meta aspects of story generation would be a big help.
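For what it's worth, that kind of meta-management can be sketched as a reusable prompt template. This is just an illustrative sketch, not any real library or API: the helper name, the scope labels, and the fact cap are all hypothetical choices standing in for the "scope words" and "right amount of info" intuitions described above.

```python
# Sketch: build story prompts with an explicit scope hint, and cap the
# fact list to avoid the "cram every fact in" failure mode.
# All names and labels here are illustrative, not a real API.

SCOPE_HINTS = {
    "scene": "Write a single scene covering just this evening.",
    "montage": "Write a few paragraphs summarizing weeks of traveling.",
}

def build_prompt(scope: str, facts: list[str]) -> str:
    """Combine a scope hint with a deliberately short fact list.

    Too many facts makes the model cram them in awkwardly; an empty
    list invites it to invent random new characters, so keep a few.
    """
    if scope not in SCOPE_HINTS:
        raise ValueError(f"unknown scope: {scope}")
    lines = [SCOPE_HINTS[scope], "Facts to incorporate naturally:"]
    lines += [f"- {fact}" for fact in facts[:5]]  # cap detail
    return "\n".join(lines)

prompt = build_prompt("scene", ["It is raining", "The inn is crowded"])
```

The template itself is trivial; the point is that the scope vocabulary and the fact budget live in one place, so you can tune them instead of rediscovering them per conversation.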
Edit: maybe I should have kept reading.
> Anthropic, founded by former OpenAI employees and the maker of a language-AI system called Claude, recently listed a job opening for a “prompt engineer and librarian” in San Francisco with a salary ranging up to $335,000. (Must “have a creative hacker spirit and love solving puzzles,” the listing states.)
In the end what we all value is what solves problems. Those who embrace AI tech and learn to use the tool and work around its flaws will solve more problems than those who don't. This includes coming up with a system to validate the work. Those who use the tool recklessly will create more problems than they solve.
What side are we on here? I've been in the industry for over two decades and I for one cannot wait to command the computer in complex ways in my natural language. I am not threatened by other people being able to do the same. The tool is just a tool. What you build with it is what will separate the "professionals" from the "hobbyists".
Maybe they realized that too many jumped on the blockchain BS train.
We will see.
As mind viruses operating on human brains, they do not seem completely different technologies.
Did crypto crash?
Last I looked BTC was at 20K USD a pop ... strange definition of a crash for something that used to trade below a dollar.