There are some really good bullshitters who have led smart people into deep trouble. These bullshitters behaved just like ducks, but they weren't ducks. The duck test just isn't good enough.
The -1 day is where people say that because LLMs behave like humans, humans must be based on the same tech. I just wonder if these people have ever debugged a complex system, only to discover that their initial model of how it worked was way off.