Quantum computers don't have any space advantage, I thought?
You can represent any unit vector in a vector space with 2^n basis elements using only n qubits.
Constructing an arbitrary vector might not be more efficient than doing it on a classical computer, but constructing certain vectors that would be very large and/or unwieldy on a classical computer is possible. For example, it's easy to represent any single standard basis vector in a vector space of size 2^64 on a classical computer: just store a pair (a 64-bit word saying which basis vector you mean, plus a weight). Now try to use that same representation after a Fourier transform and you're out of luck, because the result has a non-zero amplitude on all 2^64 basis states. But this is no problem on a quantum computer: you end up with a 64-qubit system holding all of those 2^64 amplitudes using only 64 qubits, and you can construct the circuit for the (approximate) quantum Fourier transform with only ~64·log(64) gates.
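To make the sparse-representation point concrete, here's a minimal sketch (in plain Python, at a toy size of n = 4 rather than 64) showing that the discrete Fourier transform of a standard basis vector is dense: the (index, weight) trick stops working after the transform, because every one of the 2^n entries becomes nonzero.

```python
import cmath

n = 4                      # toy size; the comment above uses n = 64
dim = 2 ** n

# Sparse classical representation of a standard basis vector: (index, weight).
basis_vector = (5, 1.0)    # e_5 in a 2^4 = 16-dimensional space

def dft_of_basis_vector(k, dim):
    """DFT of e_k: entry j is exp(-2*pi*i*j*k/dim) / sqrt(dim)."""
    return [cmath.exp(-2j * cmath.pi * j * k / dim) / dim ** 0.5
            for j in range(dim)]

k, _weight = basis_vector
transformed = dft_of_basis_vector(k, dim)

# Every one of the 2^n entries is nonzero, each with magnitude 1/sqrt(2^n),
# so the sparse (index, weight) representation no longer applies.
nonzero = sum(1 for a in transformed if abs(a) > 1e-12)
print(nonzero)   # 16
```

At n = 64 the dense output would need 2^64 complex numbers classically, while the quantum Fourier transform leaves it encoded in 64 qubits.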
So for this problem, there’s definitely a space advantage. The trouble is that most of these advantages don’t generalize in any obvious way. Maybe there’s even a representation on classical computers that’s equally efficient and we just haven’t found it yet.
And as you mention below, this isn’t useful for arbitrary linear operations (many of them seem to be just as expensive on a quantum computer as on a classical computer) but there are certain known-to-be-useful linear operations that are extremely cheap to represent compared to what we know how to do on a classical computer.
A quantum computer with a 4-qubit register (say, 0101) can be put into a superposition over all 2^4 configurations of that register simultaneously. Now scale that up to 50, 100, or 1000 qubits and you get the idea: the quantum computer has a ridiculous advantage in terms of holding state spaces.
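The exponential blow-up that a classical simulation faces here is easy to see in a tiny amplitude-vector simulation (a sketch, not a real simulator): each additional qubit put into superposition doubles the list of amplitudes a classical machine has to store, while the quantum register itself only grows by one qubit.

```python
n = 10
state = [1.0]                    # amplitude vector of a 0-qubit register

for _ in range(n):
    # Tensoring in one more qubit and applying a Hadamard to it doubles
    # the classical amplitude list; the quantum machine just adds 1 qubit.
    s = 1 / 2 ** 0.5
    state = [a * s for a in state] + [a * s for a in state]

print(len(state))                # 1024 amplitudes for a 10-qubit register
```

After 10 steps the classical representation holds 2^10 = 1024 amplitudes (all equal, summing to probability 1); at 50 to 1000 qubits the same loop is hopeless classically.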
The point of the space argument I was making was that, if you need N bits of space to represent the input/output on a classical computer, you should still need (I'm not sure if it's the case, but I don't see why it's not) Ω(N) qubits to do the same representation on a quantum computer.
The quantum bullshit fallacy is that, since an entangled N-qubit state is represented as a 2^N-dimensional vector of complex amplitudes, a quantum computer is really doing a computation on 2^N elements in O(1) time. The first sign that this is fallacious is that the free parameters here don't number 2^N but 2^N - 1 (normalization removes one). More importantly, we're still only capable of reading N bits of information out of an N-qubit register; the fact that classical simulation requires a much larger state space to track the probability amplitudes doesn't mean that there's an inherent ability to freely vary through all of those states to represent the entire state space.
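The "you only get N bits out" point can be sketched as follows (a toy measurement simulation, not real quantum hardware): even if the state holds 2^N amplitudes, a measurement samples a single basis state according to |amplitude|^2 and returns just N classical bits; the amplitudes themselves are never directly readable.

```python
import random

n = 3
dim = 2 ** n

# Uniform superposition over 2^3 basis states (each amplitude 1/sqrt(8)).
amplitudes = [1 / dim ** 0.5] * dim

# Measuring collapses the state: one shot yields just n classical bits,
# drawn with probability |amplitude|^2 for each basis state.
probs = [abs(a) ** 2 for a in amplitudes]
outcome = random.choices(range(dim), weights=probs)[0]
print(format(outcome, f"0{n}b"))   # prints n random bits, e.g. '101'
```

Recovering the full amplitude vector classically would take exponentially many such shots, which is exactly why "2^N amplitudes" does not translate into "2^N bits of readable information".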