Many, many AI scientists are very clearly aware of this problem. AI safety researchers have been predicting exactly the failure modes we see in the various incarnations of GPT for years. Issues like alignment and overfitting are incredibly hard problems to solve, and there's still a lot that isn't known, but they are well recognised in the field. The problem is that business leaders, investors and governments are utterly clueless about any of this, hence the Google Bard and Microsoft Sydney debacles when those systems were rushed into public view.
There is also the risk that the recent massive success of LLMs will bring a lot of excited bandwagon-jumpers into the field who don't have, or don't appreciate, the background in AI safety.