This is just the next incarnation of trying to shift the output of someone else's algorithm in your favor. Be wary of building a career on top of that. It's very easy for the algorithm owner to change things up and obviate any value you used to provide.
>https://sites.google.com/view/automatic-prompt-engineer
Not exactly a "toaster go brrrr" job, but it could be obsolete one day
WaPo does need to chill though. There are barely any prompt-engineer jobs out there.
Edit: If anyone's curious, I've been following this for prompt stuff: https://github.com/dair-ai/Prompt-Engineering-Guide
You've got a huge blind spot if you think prompt engineer isn't already a thing.
It may be a "thing", because generating BS is a viable business model and ChatGPT makes it more efficient.
...but I submit, as a working hypothesis, that it is completely impossible to gain knowledge you do not already possess from a language model, no matter how clever your prompting.
I'm very interested in counter-examples, but I have seen a few that turn out to be fake already.
Not true. Emergent abilities are an active research area in LLMs [0]. They even have pretty graphs on the topic.
[0] https://ai.googleblog.com/2022/11/characterizing-emergent-ph...
Is Art Director just a "BS" job? I don't get it.
Checking some of the facts it gives me against other sites, they're all correct, but better organized and more accessible. There's your counter-example. This works for basically any well-documented process.
People are graduating with watered-down educations, earning inflated cash under inflated titles. It all helps people believe they're higher status: they have a university degree and are a manager earning $80k, so surely they're getting close to the top of the totem pole. But they have a worse standard of living and an education equivalent to high school in the '60s.
[1] https://news.ycombinator.com/item?id=34641549
[1] https://www.cbsnews.com/news/salary-manager-jobs-fake-titles...
-mlsu: https://news.ycombinator.com/item?id=34884683
At this point the word "engineer" has lost its original meaning. Until there's a formal theory of how we can interact with LLMs and you make use of that in a systematic fashion, "prompt engineering" is really closer to "prompt artist."
Interesting angle. Are you saying there are hardly any "software engineers" out there, that they are all merely "software artists"? Because none of them uses a formal theory for their craft. If they did, all those highly opinionated discussions about whether to use goto in C, or what the greatest flaws of node.js are, just wouldn't exist.
There are other, narrower senses of "software engineer", such as "person who optimizes code", and to me those qualify more as engineering, because we not only have a decent theoretical background (see Agner Fog's work) but can also verify things experimentally. On the other hand, it's a lot harder to say quantitatively whether one design is better than another.
I think there's also some rigorous work on modeling concurrent/distributed systems (Lamport's TLA+) which I'd like to see more of.
I don't see this in prompt engineering. In my limited experience (I played a few hours with Stable Diffusion and more with the OAI davinci-003 model), you can get good at it within a few days.
I'd imagine that being a "prompt engineer" entails discovering and mapping the structures that give you the desired result. Think of it as a novice user of search engines vs. an expert user of search engines.
Well, you do you. That's old world thinking for a field that's going to dramatically morph into something that barely resembles what we have today.
I'm hiring a contract prompt engineer for my startup.
If you want to help us achieve better "TV replacement" results, send me an email (see profile).
https://fakeyou.com/news (early demo, more coming soon!)
Oh like hell it requires no skill.
You tell me how you'll generate better photos, improve dialogue coherence across multiple speakers, and control camera direction and movement (something we're using LLMs for too as we experiment with special-purpose models).
All of this is not known a priori, by the way. And I won't accept building a database or lookup table as an answer.
I also want to know how you'll test, benchmark, and refine.
You also need to budget for inference complexity.
I'm waiting :)
I can do this myself, but it is a full-time job. I am so busy with all the other aspects of my business that I'm looking for people to bring on board.
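To make the "test, benchmark, and refine" part concrete, here is a minimal sketch of a prompt benchmarking loop. Everything here is hypothetical: `generate` is a deterministic stub standing in for whatever model backend you actually use, and the keyword rubric in `score` is just one crude way to grade outputs.

```python
# Minimal sketch of benchmarking prompt variants against test cases.
# `generate` is a stub so the example runs without any API access;
# swap in a real inference call in practice.

def generate(prompt: str) -> str:
    # Hypothetical model call; here it just echoes the prompt uppercased.
    return prompt.upper()

def score(output: str, expected_keywords: list[str]) -> float:
    # Crude rubric: fraction of required keywords present in the output.
    hits = sum(1 for kw in expected_keywords if kw.upper() in output.upper())
    return hits / len(expected_keywords)

def benchmark(prompt_variants: list[str], cases: list[dict]) -> dict:
    # Average each variant's rubric score over all test cases.
    results = {}
    for prompt in prompt_variants:
        total = 0.0
        for case in cases:
            output = generate(prompt.format(**case["inputs"]))
            total += score(output, case["expected_keywords"])
        results[prompt] = total / len(cases)
    return results

variants = [
    "Describe the scene: {scene}",
    "Write one cinematic paragraph about {scene}, naming each speaker.",
]
cases = [
    {"inputs": {"scene": "a rainy rooftop"},
     "expected_keywords": ["rainy", "rooftop"]},
]
scores = benchmark(variants, cases)
best = max(scores, key=scores.get)
```

The refine step is then just a loop over this: mutate the best variant, re-run the benchmark, and keep whichever version scores higher.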
Lawyers have a bad reputation, sure, but there's a lot of education about the interpretation of our law, and the absurdly large corpus of legal documentation that must be read in order to even become a lawyer is far beyond anything you describe.
Crazy.
https://arxiv.org/abs/2302.06541
That is not to say that integrating LLMs won't create a lot of jobs. Think of it as systems engineering: knowing how computers work, as well as a software engineer does, will always be useful.
(This was before JS, before CSS, etc. Mostly just your original HTML simplified LaTeX article.cls elements, plus `A` and `IMG`, and maybe a `FONT`.)
HTML was easier to use than many word processors, but because it was new and unfamiliar, yet looked like it might be huge... for a brief period, practically anyone who could spell "HTML" or "WWW" could posture as a whiz kid, and make big bucks.
I'd guess that "prompt engineer" will evolve into real careers soon, but the nature of the technology and the role will be very different than it is this quarter.
I've been playing around with generating stories with ChatGPT for a while, and... English (or any natural language) is really bad at being specific. I've made progress by learning specific words to describe the type of scene I want and how much of it I want ChatGPT to generate (such as a scene covering just one evening versus a few paragraphs describing weeks of traveling). I've also started developing some intuition for when I've given ChatGPT too much info (it'll cram all the facts in, in weird ways) and too little (it'll get really random and start inserting new characters and such).
Having a way to manage the meta aspects of story generation would be a big help.
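One way to manage those meta aspects is to make scope and fact budget explicit template parameters instead of rediscovering them by trial and error each session. The sketch below is purely illustrative: `build_story_prompt`, `SCOPE_HINTS`, and the cap on facts are all made-up names and heuristics, not any ChatGPT feature.

```python
# Hedged sketch: encode the "meta" knobs of story prompting
# (scene scope, how many facts to include) as template parameters.

SCOPE_HINTS = {
    "scene": "Write a single scene covering one evening, in real time.",
    "montage": "Summarize several weeks of travel in a few paragraphs.",
}

def build_story_prompt(premise: str, scope: str,
                       facts: list[str], max_facts: int = 3) -> str:
    # Too many facts tends to make the model cram them in awkwardly,
    # so cap how many go into any one prompt.
    kept = facts[:max_facts]
    fact_lines = "\n".join(f"- {f}" for f in kept)
    return (
        f"{SCOPE_HINTS[scope]}\n"
        f"Premise: {premise}\n"
        f"Work in these details naturally:\n{fact_lines}\n"
        "Do not introduce new named characters."
    )

prompt = build_story_prompt(
    "Two rivals share a campfire",
    scope="scene",
    facts=["it is raining", "one of them is injured",
           "the map is lost", "extra fact"],
)
```

Switching `scope` to `"montage"` swaps the pacing instruction without touching the rest of the prompt, which is exactly the kind of knob that's tedious to re-derive in free-form chat.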
Edit: maybe I should have kept reading.
> Anthropic, founded by former OpenAI employees and the maker of a language-AI system called Claude, recently listed a job opening for a “prompt engineer and librarian” in San Francisco with a salary ranging up to $335,000. (Must “have a creative hacker spirit and love solving puzzles,” the listing states.)
In the end what we all value is what solves problems. Those who embrace AI tech and learn to use the tool and work around its flaws will solve more problems than those who don't. This includes coming up with a system to validate the work. Those who use the tool recklessly will create more problems than they solve.
What side are we on here? I've been in the industry for over two decades and I for one cannot wait to command the computer in complex ways in my natural language. I am not threatened by other people being able to do the same. The tool is just a tool. What you build with it is what will separate the "professionals" from the "hobbyists".
Maybe they realized that too many jumped on the blockchain BS train.
We will see.
As mind viruses operating on human brains, they do not seem like completely different technologies.
Did crypto crash?
Last I looked BTC was at 20K USD a pop ... strange definition of a crash for something that used to trade below a dollar.
Same thing it's been doing every year since 2011.
Yet BTC is still trading at 20k USD ... I'm not sure we have the same definition of the word "crash".
But whatever floats your boat, man.
If the dollar (or any other currency) lost 70% of its value in less than a year, we would certainly say it crashed.
At this point, BTC is very much known for its extremely high volatility (source: look at the price history since inception).
There hasn't been a single year since it launched where it hasn't displayed outrageously wild swings: at this point, it's pretty clear that the wild volatility is an intrinsic attribute of this particular asset class.
Therefore: not a crash, just Bitcoin's business as usual.
People are training and releasing custom models that can replace entire workflows, producing in one step an image that would normally require many disparate steps.
There was a video, or maybe an article, where a guy set things up so he could describe the edits he wanted in natural language and the model would make them.