Because being able to draw "a painting of Joe Biden as King Kong on top of a skyscraper in the style of Monet" was something that, until very recently, was thought to require intelligence. Of course, now it is not so impressive anymore because it is all mathematics and digital logic. But that is the problem with defining artificial intelligence: any time a task is implemented on a computer, you can point to that implementation as evidence that the task didn't require intelligence after all. Decades ago, researchers thought that playing chess at a high level required intelligence, then go, then poker, then composing music, then driving a car, and so on. Nowadays researchers are more cautious and no longer claim that "solving task X implies intelligence". Intelligence thus becomes a moving target, and a computer can never prove itself intelligent.
The Turing test in its original formulation has already been soundly defeated. People now hedge their bets and require that an AI fool leading AI researchers to pass the test. But the original test assumed an "average" interrogator, and fooling 99.99% of the world's population should surely be good enough. Either way, as LaMDA demonstrates, it is only a matter of time before even the strongest imaginable version of the Turing test is also defeated.