It still relies on Monte Carlo Tree Search to reach the level where it can beat human pro players.
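For readers unfamiliar with the technique: MCTS builds a search tree by repeatedly selecting promising moves (balancing exploitation and exploration), expanding a new node, simulating a random playout, and backpropagating the result. Here is a minimal sketch on a hypothetical two-move toy game (the payoff table and game are invented for illustration, not anything from AlphaGo):

```python
# Minimal UCT-style Monte Carlo Tree Search on a toy two-move game.
# Each state is a tuple of moves (0 or 1); after two moves the game
# ends with a fixed payoff. The search should learn that the first
# move 1 is best, because it leads to the 0.9 payoff.
import math
import random

PAYOFF = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.9}
ACTIONS = (0, 1)
DEPTH = 2

class Node:
    def __init__(self, state):
        self.state = state    # moves played so far
        self.children = {}    # action -> Node
        self.visits = 0
        self.value = 0.0      # sum of rollout rewards

def rollout(state):
    """Play random moves until the game ends, return the payoff."""
    while len(state) < DEPTH:
        state = state + (random.choice(ACTIONS),)
    return PAYOFF[state]

def select_child(node, c=1.4):
    """UCB1: trade off mean reward against exploration bonus."""
    return max(
        node.children.values(),
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def mcts(root, iterations=2000):
    for _ in range(iterations):
        node, path = root, [root]
        # 1. Selection: descend while the node is fully expanded.
        while len(node.state) < DEPTH and len(node.children) == len(ACTIONS):
            node = select_child(node)
            path.append(node)
        # 2. Expansion: add one unexplored child.
        if len(node.state) < DEPTH:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[a] = Node(node.state + (a,))
            node = node.children[a]
            path.append(node)
        # 3. Simulation: random rollout to a terminal state.
        reward = rollout(node.state)
        # 4. Backpropagation: update statistics along the path.
        for n in path:
            n.visits += 1
            n.value += reward
    # Best first move = most-visited child of the root.
    return max(root.children, key=lambda a: root.children[a].visits)

random.seed(0)
print(mcts(Node(())))
```

AlphaGo's contribution was combining this search with learned policy and value networks to prune it; the point stands that the tree search is still doing a lot of the work.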
>Starcraft is real-time, there are tons of different actions you can take, the actions are not independent (pressing attack does something different depending on whether you have a unit selected), the game state is not fully known, and a given state can mean different things depending on what preceded it.
And yet StarCraft is extremely primitive as far as strategy games go. Most of the stuff you can do in the game simply doesn't matter, and the stuff that does matter could be modeled at a much coarser level than what people see on the screen. Knowing how this stuff works, I'm willing to bet this is exactly how DeepMind will approach the problem: they will try many different sets of hand-engineered features and game representations, then not mention any of the failed attempts in their press releases and research papers.
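To make the "coarser level" claim concrete, here is a sketch of what such a hand-engineered representation might look like. The feature set and field names are entirely hypothetical, invented for illustration; the idea is just that pixel-level detail gets thrown away and only strategy-relevant aggregates remain:

```python
# Hypothetical coarse feature vector for an RTS game state -- the kind
# of hand-engineered abstraction described above, not anything DeepMind
# has published. The raw state (here a dict) is reduced to a handful of
# strategy-level aggregates.
def coarse_features(state):
    """Reduce a raw game state to a small numeric vector."""
    return [
        state["minerals"],                            # economy
        state["supply_used"] / state["supply_cap"],   # supply pressure
        len(state["army_units"]),                     # army size
        len(state["workers"]),                        # worker count
        sum(state["enemy_seen"].values()),            # scouted enemy strength
    ]

raw = {
    "minerals": 350,
    "supply_used": 42,
    "supply_cap": 60,
    "army_units": ["marine"] * 12,
    "workers": ["scv"] * 18,
    "enemy_seen": {"zergling": 8, "roach": 2},
}
print(coarse_features(raw))  # -> [350, 0.7, 12, 18, 10]
```

A planner operating on five numbers like these faces a vastly smaller search space than one operating on raw screen pixels.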
The choice of StarCraft as their next target reeks of a PR stunt. Sure, there may be no AIs that play at pro level now, but there hasn't been any serious effort or incentive to build one either, and now Google will throw millions of dollars and a data center's worth of hardware at the problem.
As far as I'm concerned, real AI research right now isn't about surpassing human performance at tasks where computers are already doing okay. It's about achieving a reasonable level of performance in domains where computers are doing extremely badly. But that won't get you a lot of coverage from the clueless tech press, I guess.