You're right in the edit: the hash needs to use a secret salt that is unavailable to any potential attacker in order not to be PII.
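As a sketch of why (Python, with an email address as a hypothetical identifier): a plain, unkeyed hash can be re-identified by hashing a candidate list, while a keyed hash (HMAC) with a secret key blocks that attack:

```python
import hashlib
import hmac

def plain_pseudonym(email: str) -> str:
    # Unkeyed hash: anyone can recompute this from a guessed email,
    # so the output remains linkable to the person (still PII).
    return hashlib.sha256(email.encode()).hexdigest()

def keyed_pseudonym(email: str, secret_key: bytes) -> str:
    # HMAC with a secret key: without the key, an attacker cannot
    # confirm a guess by recomputing the value.
    return hmac.new(secret_key, email.encode(), hashlib.sha256).hexdigest()

# Enumeration attack against the unkeyed version: hash a candidate
# list and compare. This is why a plain hash of an identifier is
# generally still considered PII.
target = plain_pseudonym("alice@example.com")
candidates = ["alice@example.com", "bob@example.com"]
reidentified = [c for c in candidates if plain_pseudonym(c) == target]
print(reidentified)  # the attacker recovers "alice@example.com"
```

The same enumeration attack fails against `keyed_pseudonym` as long as the key stays out of the attacker's hands, which is exactly the "secret salt unavailable to any potential attacker" condition.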
You're mixing up the terms pseudonymization and anonymization, though. If something is provably not PII, it is considered anonymous. Pseudonymization specifically means keeping the data as PII, but reducing the risk of misuse by making identification hard.
In practical terms, pseudonymous data is data that someone like a data scientist will only be able to link to a person by making a deliberate effort to do so, which will almost certainly mean that she KNOWS she is breaking some law. It may also mean that the link between the person and the pseudonym is stored in a locked-down database that most data scientists (or others who might be interested in doing the linking) do not even have access to.
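A minimal sketch of that pattern (hypothetical names, Python): analysts only ever see random tokens, and the token-to-person mapping lives in a separate store where re-linking is an explicit, gated operation:

```python
import secrets

class PseudonymVault:
    """Hypothetical locked-down mapping store. In practice this would be
    a separate database with its own access controls and audit logging."""

    def __init__(self):
        self._mapping = {}  # pseudonym token -> real identifier

    def pseudonymize(self, identifier: str) -> str:
        # A random token carries no information about the person at all;
        # only the vault can link it back.
        token = secrets.token_hex(16)
        self._mapping[token] = identifier
        return token

    def reidentify(self, token: str, authorized: bool) -> str:
        # Re-linking requires explicit authorization: the deliberate,
        # auditable step described above.
        if not authorized:
            raise PermissionError("re-identification not permitted")
        return self._mapping[token]

vault = PseudonymVault()
token = vault.pseudonymize("alice@example.com")
# The analytics dataset only ever contains `token`, never the email.
```

This is only a sketch; real systems would put the vault behind a service boundary rather than in-process, but the shape (random pseudonyms plus a restricted mapping) is the same.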
The GDPR does promote pseudonymization as a layer of protection. If a business keeps some PII around, properly categorizes it as such (in compliance with Article 30 of the GDPR, with a defined "Legal Ground" for each processing activity), AND properly protects the data through both "Security by Design" and "Privacy by Design" (of which pseudonymization is an important element), its legal exposure can be completely negated, or at least radically reduced, if the "Legal Ground" is challenged.
Overall, though, fully understanding the GDPR is terribly difficult, as it requires significant understanding of Law (international AND local within each country covered by the GDPR), Computer Science (development AND IT security), AND Data Science.
I rarely meet people with enough understanding of all 3 to assess practices that are in the gray zone.
Lawyers (and most DPOs) tend to have little understanding of the IT or Data Science aspects, but tend to be good at stretching a "Legal Ground" to cover whatever the business needs to remain profitable.
Data Scientists tend to know how to de-pseudonymize data, and may even be taught "Privacy by Design" (this usually has to be forced on them, though, as it makes their job harder). Most data scientists struggle with IT security aspects, though, and would in many cases happily download all data to their laptops if they could.
Developers/engineers may understand concepts such as hashing, and may even know the difference between hashed and encrypted data. However, as they live in a boolean world of True vs False, using judgment to evaluate the risk impact of some practice on data subjects tends to be alien to them. In a black-and-white world, this group tends to think that every bad practice is equally bad, instead of going for the "lesser wrong" or "good enough", especially if the measures needed to be "good enough" make the coding harder or the system slower.
Finally, IT security (the experts, not the drones) MAY have a better understanding of degrees of risk than developers, but tend to know/care less about the actual data than any other group.
And each group tends to hold the other groups to a higher standard than its own. The lawyers tend to assume that all aspects of development and infrastructure are properly hardened. Data Scientists tend to interpret the "Legal Ground" to cover whatever they want to use the data for. Developers tend to think that the infrastructure running their systems is fully secured by shell protection, and may even store "secrets" in more or less open git repos (and even if they delete them later, they don't clean up the git history or rotate the secrets). And networking people often don't care about anything at the "Application Level" or higher of the networking stack.
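To illustrate the git point with a throwaway sketch (shell, hypothetical file and key names): "removing" a committed secret in a later commit does not remove it from history; it stays one `git log -p` away:

```shell
# Set up a disposable repo for the demo.
mkdir leak-demo && cd leak-demo
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Commit a secret, then "remove" it in a follow-up commit.
echo "API_KEY=supersecret123" > config.env
git add config.env && git commit -qm "add config"
git rm -q config.env && git commit -qm "remove secret"

# The secret is still fully recoverable from history:
git log -p --all -- config.env | grep supersecret123
```

Actually getting rid of it requires rewriting history (with a history-rewriting tool) and, crucially, rotating the secret, since anyone who already cloned or forked the repo still has the old history.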
So in practice, any large corporation will have a huge number of vulnerabilities. The only way any sensitive asset (from a privacy, intellectual property or operational stability perspective) can be considered properly protected is to have multiple layers of protection, all or most of which must fail for major incidents to happen.