Consider: if someone offered to pay you $2 every time a fair coin toss came up heads, and to take $0.50 every time it came up tails, you'd be foolish not to take that bet as many times as you could, because you know the coin has exactly a 50% chance of coming up heads.
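To make that concrete, here's a quick sketch of the arithmetic: at a 50% heads probability the bet pays $2 half the time and costs $0.50 the other half, so each toss is worth $0.75 on average. The simulation below is just an illustration of that average emerging over many tosses.

```python
import random

def expected_value(p_heads, win=2.0, loss=0.5):
    """Average profit per toss: win on heads, lose on tails."""
    return p_heads * win - (1 - p_heads) * loss

# With a fair coin, each toss is worth $0.75 on average.
print(expected_value(0.5))  # 0.75

# Simulate many tosses; the realized average converges toward $0.75.
random.seed(1)
n = 100_000
profit = sum(2.0 if random.random() < 0.5 else -0.5 for _ in range(n))
print(profit / n)
```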
However, if it were an unfair coin, you'd want to know the degree to which it was unfair, and you'd have to measure it. How much do you trust those measurements? You might say that you're 90% sure the coin has a 40-60% chance of coming up heads, or give a 2% probability that a $1.04-to-$0.96 wager would be profitable while a $1.03-to-$0.97 wager would be unprofitable.
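Why those two wagers sit so close together: a bet that wins `win` on heads and loses `loss` on tails breaks even when the heads probability equals `loss / (win + loss)`. The sketch below (payoffs taken from the example above) shows the break-even points are only half a percentage point apart, which is why distinguishing them demands such precise measurement.

```python
def break_even(win, loss):
    """Heads probability at which win * p - loss * (1 - p) = 0."""
    return loss / (win + loss)

# $1.04-to-$0.96 wager breaks even at 48% heads;
# $1.03-to-$0.97 breaks even at 48.5% heads.
print(break_even(1.04, 0.96))
print(break_even(1.03, 0.97))
```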
Hillary had a 95% chance to win the election. But on top of the fact that, if that really was the probability, she'd lose that election 1 time in 20, the 95% number was itself uncertain because the underlying measurements were hard to pin down: maybe she'd have lost 1 time in 40, or maybe 1 time in 5. All we know now is that she lost, and that many of the assumptions and measurements the pollsters had to make concerning factors like voter turnout, nationalism, corruption, foreign interference, debate results, and fundraising turned out to be inaccurate.
With unfair-coin measurements, you can get accurate numbers just by running enough tosses. When predicting election results or World Cup games, you're much less likely to make an accurate estimate. The confidence is an estimate of how likely that estimate is to be accurate.
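One way to see how tosses buy accuracy is a standard confidence interval around the measured heads rate (this uses the common normal approximation, which is my choice of illustration, not something prescribed above; the bias value is hypothetical):

```python
import math
import random

def estimate_bias(flips, z=1.645):
    """Measured heads probability with a ~90% normal-approximation interval."""
    n = len(flips)
    p = sum(flips) / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Hypothetical unfair coin: true heads probability of 45%.
random.seed(7)
true_p = 0.45
for n in (50, 500, 5000):
    flips = [random.random() < true_p for _ in range(n)]
    p, lo, hi = estimate_bias(flips)
    print(f"{n:>5} tosses: {p:.3f} in [{lo:.3f}, {hi:.3f}]")
```

The interval shrinks roughly with the square root of the number of tosses, which is exactly the luxury election forecasters don't have: you can't rerun an election 5,000 times.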