It seems extremely clear to me that we are attacking the problem from two directions - neuroscience, trying to understand how the only known example of general intelligence works, and machine learning, trying to engineer our way from solving specific problems to building a generalized problem solver. Both directions are producing some results, but slowly, and for now with no real cross-pollination (ML takes no meaningful inspiration from actual biological neural networks, despite the naming, and there is no insight from ML that can be applied to formulating hypotheses about living brains).
So I can't imagine how anyone really believes that we are close to AGI. The only way I can see that happening is if the problem turns out to be much, much simpler than we believe - if it turns out that a simple mathematical model can actually work more or less as well as the entire human brain.
I wouldn't hold my breath for that, since evolution took almost a billion years to arrive at complex brains, while basic computation has been around since the first unicellular organisms (even organelles inside the cell, and the nucleus itself, implement simple algorithms for digestion and reproduction, and even unicellular organisms tend to show some amount of directed movement and environmental awareness).
And all of this is without mentioning that we currently have no way of teaching an AI the vast amount of human common-sense knowledge that is likely baked into our genes, and it's hard to tell how big an obstacle that will be for true AGI.
And even then, we shouldn't forget that there is no obvious path from approximately human-level AGI to the kinds of sci-fi super-super-human AGIs that some AI catastrophists imagine. There isn't even any fundamental reason to assume it is possible to be significantly more intelligent than a human in a general sense (though there is also no reason to assume you can't be!).