Essentially, this role involves handling large language models.
Early prompt engineers will probably be drawn from "data science" communities and will be similarly high status, well paid (though not as well), and require less mathematical knowledge.
I'm personally expecting an "Alignment Engineer" role to emerge, focused on monitoring AI systems for unwanted behavior.
This will be structurally similar to current cybersecurity roles, but mostly recruited from Machine Learning communities and embedded in a broader ML ecosystem.