Computer software doesn't play chess. It doesn't understand positions. It has a set of algorithms and processes: it takes a board position, tries each candidate move, and turns each resulting position into some sort of number through an evaluation function. It goes down a tree of such moves until it either reaches a definite conclusion, or reaches a depth where going any deeper would take far too much time.
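The loop described above can be sketched as a plain minimax search, with the evaluation function as the fallback at the depth limit. The tree structure, the move names, and the numbers below are invented for illustration; a real engine generates moves from an actual position.

```python
# Minimal minimax sketch of the process described above. Each node carries
# a static evaluation (the engine's "guess") and, unless the position is
# decided, a dict of candidate moves leading to child nodes.

def search(node, depth, maximizing):
    moves = node.get("moves")
    if not moves or depth == 0:
        # Definite conclusion (no moves) or depth limit reached:
        # fall back to the evaluation function's guess.
        return node["eval"]
    scores = [search(child, depth - 1, not maximizing)
              for child in moves.values()]
    return max(scores) if maximizing else min(scores)

# A two-ply toy tree: the static guess at the inner node (+0.5) differs
# from what deeper search reveals (the opponent can steer to +0.1).
tree = {"eval": 0.0, "moves": {
    "a": {"eval": 0.5, "moves": {
        "x": {"eval": 0.1},
        "y": {"eval": 0.9},
    }},
    "b": {"eval": -0.2},
}}

print(search(tree, depth=1, maximizing=True))  # 0.5: trusts the shallow guess
print(search(tree, depth=2, maximizing=True))  # 0.1: opponent picks the worst reply
```

Note how the answer changes with depth: the deeper the search, the less the final number depends on the quality of the guess.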
All of this is an artificial simulation of how chess is played. And it is, long-term, at the mercy of the accuracy of the evaluation function, because right at the end of its search depth it has to evaluate that position: effectively "a guess".
The most accurate way of evaluating a position is to try each move and go down a search tree of best moves until you reach a definite conclusion. The evaluation function is used because, at some point, going any deeper down the move tree becomes computationally too expensive. It's a fudge on top of brute-force analysis - there aren't enough computational resources available to go any deeper, so the computer must guess. This is the horizon effect: computers can't see past it.
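The horizon effect can be shown on a toy tree: at depth 1 a "capture" looks like it wins material, and only one ply deeper does the refutation appear. The move names and scores here are hypothetical, not taken from any real game.

```python
# Toy demonstration of the horizon effect. A shallow search stops at a
# position where a queen has just grabbed a pawn and guesses +1.0;
# searching one ply deeper reveals the recapture that loses the queen.

def search(node, depth, maximizing):
    moves = node.get("moves")
    if not moves or depth == 0:
        return node["eval"]  # at the horizon the engine must guess
    scores = [search(c, depth - 1, not maximizing) for c in moves.values()]
    return max(scores) if maximizing else min(scores)

tree = {"eval": 0.0, "moves": {
    # "QxP" wins a pawn on the surface, but the queen hangs one ply deeper.
    "QxP": {"eval": 1.0, "moves": {"RxQ": {"eval": -8.0}}},
    "Nf3": {"eval": 0.1},
}}

print(search(tree, depth=1, maximizing=True))  # 1.0: the refutation is past the horizon
print(search(tree, depth=2, maximizing=True))  # 0.1: deeper search sees RxQ
```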
That evaluation function is a human-written piece of code that weighs various on-board factors: piece placement, pawn structure, strong and weak squares, king safety, piece activity, central control. In effect, the human is trying to program intuition into the machine. Humans barely understand intuition themselves, let alone how to program a computer to have it.
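In its simplest form, such a function is a weighted sum of those factors. The sketch below is a hedged illustration: the input format, the factor names, and every weight are assumptions made up for this example, not any real engine's values.

```python
# Illustrative hand-written evaluation: a weighted sum of board factors.
# All names and weights here are invented for demonstration.

PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.1, "R": 5.0, "Q": 9.0}

def evaluate(position):
    """Score a position from White's point of view (positive = better for
    White). `position` is a hypothetical dict of pre-computed factors,
    each already expressed as a White-minus-Black difference."""
    material = (sum(PIECE_VALUES[p] for p in position["white_pieces"])
              - sum(PIECE_VALUES[p] for p in position["black_pieces"]))
    return (1.00 * material
          + 0.10 * position["mobility"]        # piece activity
          + 0.20 * position["centre_control"]  # central control
          + 0.15 * position["pawn_structure"]  # doubled/isolated/passed pawns
          - 0.30 * position["king_exposure"])  # king safety

sample = {
    "white_pieces": ["Q", "R", "N", "P", "P", "P"],
    "black_pieces": ["Q", "R", "B", "P", "P"],
    "mobility": 4,         # White has four more legal moves
    "centre_control": 1,
    "pawn_structure": -1,  # White's pawns are slightly weaker
    "king_exposure": 2,    # White's king is more exposed
}
print(round(evaluate(sample), 2))  # 0.75
```

Every one of those weights is a human judgement call, which is exactly where the "programming intuition" problem bites: there is no principled way to know that king exposure should cost 0.30 rather than 0.25.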
The chess strength of software is determined substantially by the number and speed of CPUs, memory capacity, and the evaluation function. The evaluation function is the weakest link in that chain, so considerable effort goes into postponing its use for as long as possible.
I think only the Rybka developers spent more time trying to teach the engine how to play chess, and for a few years that proved more successful than the "more power, quicker" approach the rest of the industry took.
The evaluation function is the most difficult part of a chess engine. It is the classic AI problem, and humans can only take it so far. But how do you find a human who can comprehend how a grandmaster thinks about a chess position and determines the right move, and who is still capable of emulating that process in software? At least Rybka's developers were International Masters.
Also, humans have the strength to adapt and refine - to patch their own chess-playing abilities. Look at Carlsen's style: it's an ever more refined Karpov style, which itself was a more refined Capablanca style. Carlsen excels in the kinds of positions computers don't manage well - deep, long-range strategic plans, well executed. A lot of Carlsen's chess strength is intuition and feel, backed by impeccable analysis to confirm his hypotheses. And he's one of the post-ChessBase crowd, having grown up learning chess with computers. That is an opponent a computer should fear, if it could ever comprehend the notion.
So yes, in terms of chess playing strength, humans still play chess better than computers.
... but humans can't beat top computers any more? You're not making any sense here. The fact is no human has beaten a top computer in almost 10 years. Not even once! Claiming that humans are still better is just nonsense.