"Google-fu" sounds like a fun skill to learn and acquire, where "prompt engineering" sounds either like something well out of reach or like pretentious nonsense depending on the audience.
More likely, "prompt engineering" is a marketing term made up by AI marketroids (cf. androids) hoping to make developers feel better about their reduced roles in this "grand new AI age".
I don't think so.
I mean, clearly calling it "engineering" threw some people off, in the same way that some gatekeepers cringe at calling train drivers "railroad engineers". But that's puerile gatekeeping that misses the whole reason there is a vast need to know how to "engineer prompts".
The truth of the matter is that the focus of "prompt engineering" is being able to put together inputs that solve business problems in professional settings. You need full control over the generative process to integrate its output into a business setting, and that requires specialized knowledge way beyond naive requests expressed in natural language.
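To make that concrete, here's a minimal sketch of the kind of controlled prompt you end up writing when the output has to feed a downstream system rather than a human reader (assuming the openai Python client; the invoice-triage task and the field names are invented for illustration):

    # Sketch only: assumes the openai Python package (v1.x) and an OPENAI_API_KEY
    # in the environment; the task and the JSON field names are made up.
    import json
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = (
        "You are a back-office assistant. Reply with a single JSON object "
        "with the keys 'vendor', 'amount', and 'due_date'. "
        "Use null for anything you cannot find. No prose, no markdown."
    )

    def extract_invoice_fields(invoice_text: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",                      # any chat model will do
            temperature=0,                            # cut run-to-run variance
            response_format={"type": "json_object"},  # force parseable output
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": invoice_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

The interesting part isn't the API call; it's that the instructions, the sampling parameters, and the output contract all have to be nailed down before anything downstream can trust the result.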
Complaining about "prompt engineering" because that only focuses on specifying queries and operating a specific service makes as much sense as complaining about SQL/database/postgres engineering because that only focuses on specifying queries and operating a specific service.
Before trying to dismiss "prompt engineering" with gatekeeping logic, you first need to justify why there is no need to know what you're doing to get the right outputs by feeding in the right inputs. Even subreddits dedicated to using generative AI to create images and videos have started to outright ban posts where the content is shared without the prompts used to create it.
That's just not an interesting or rewarding way to interact with a computer, and the last thing I want to do is add long wait times and nickel-and-dime costs to the process. Layer on using different LLMs for different tasks, or trying them out against each other and cross-checking output, and it's a mind-numbingly indirect way to get anything accomplished, one that in the end teaches me nothing and develops no useful skill that I enjoy practicing.
If it works for you, great, but even the most honest and genuine fans make it sound like a nightmare to me.
I think of it as similar to Googling in the early days. What started as a skill I had to pick up became second nature and I could find things faster than my family without even really thinking about what I was doing. It just became natural.
Most of my colleagues communicate with ChatGPT in broken English, or they ask a question while leaving out crucial details about their problem. They're always surprised when I am able to get a useful response from ChatGPT when they couldn't. It's comical sometimes.
I 100% hear you on the "not a fun way to interact" though. To each their own. I personally enjoy it; it's like a rubber duck that can actually talk back. :) Not for everyone, though.
The problem is that GenAI is a complete black box with nondeterministic outputs. I can write code and I know with a very high degree of confidence what I expect it to do. Asking an LLM or a generative image program for something, I have no idea what it'll give me. It gives no feedback other than results, which may or may not be what I want. If not, I have to reverse engineer what I think it might want me to say in order to get desired results. And the same query placed another time might give a completely different answer. I don't deny that it can do some impressive things given the correct inputs, but I am not inclined to spend my time searching for the magic words.
You're showing a fundamental misunderstanding (or ignorance) of the whole problem domain.
For starters, you place an awful lot of emphasis on what you frame as "carefully crafting English language prompts". That makes as much sense as characterizing the job of a database engineer as "carefully crafting quasi-English language prompts". The language used is completely irrelevant, and being able, in some circumstances, to use something resembling natural language to build up context does not change that.
Any remotely honest and objective analysis of the topic would start from similar activities, beginning with the areas of work where LLMs are already being used. For image/video generation you need to look at graphic design, video editing, video production, illustration, etc. These activities are, by their very nature, iterative and exploratory. For text you have the work of copywriters and editors, and even writers and essayists; that work is fundamentally iterative and exploratory too. Then you have work like exploratory data analysis/statistics/data mining, where every aspect of the work is iterative, even the reporting.
So yes, the actual question for software engineering would be how to get AI to produce and iterate on an OS. Hallucinations aren't the only problem there; the lack of predictability in the answers is the biggest issue.
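And the levers that do exist only go so far. A rough sketch of the usual ones (again assuming the openai Python client; the seed parameter is documented as best-effort, not a guarantee):

    # Sketch only: sampling parameters are the main handle on output variance.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            temperature=0,   # near-greedy decoding: far less run-to-run drift
            seed=1234,       # best-effort reproducibility, not a hard promise
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

Even with both pinned, a model update or a one-word change in the prompt can shift the answer, and that is exactly the kind of drift you cannot engineer an OS on top of.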
Cosign prompt engineering. My startup is, tl;dr, "what if I made an on-every-platform app that can sync and lets you choose whatever AI provider, and you pay at cost, and then gives you a simple UI for piecing together steps like dialogue / AI chat / search / retrieve / use files".
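If the "piecing together steps" part is unclear, here's a hypothetical sketch of what one of those pipelines might look like as plain data (the step names and fields are invented for illustration, not my actual schema):

    # Hypothetical illustration of a pieced-together pipeline; not a real format.
    pipeline = [
        {"step": "search",   "query": "{user_question}"},
        {"step": "retrieve", "source": "files", "top_k": 5},
        {"step": "chat",
         "provider": "user_choice",  # whichever AI provider the user configured
         "prompt": "Answer {user_question} using only the retrieved passages."},
    ]

The point is that each step is cheap to rearrange, and the provider is just a swappable setting rather than a lock-in.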
Seems to me the big players are completely off the mark. Let's cede the idea that there's an omniscient AI available. Literally right now.
Cool.
It still has no idea how you work.
You could see 42, in The Hitchhiker's Guide to the Galaxy, as a deep parody of this category error.