There are three limitations (which apply to many papers about P=NP).
1. The market could still be efficient, because the situations which must arise to cause P vs NP problems are very complicated. In particular they require very expensive indivisible things to buy, whereas in most situations we can treat things like shares as continuous with only a small error.
2. Markets could be efficient if P=NP, we know how to solve NP problems in P, and we actually do it. The title makes it sound like the market will already be efficient if P=NP, which isn't true.
3. Even if P=NP, the polynomial could still be big enough that the market can't be efficient. Similarly, P could not equal NP but the exponential could be small enough that markets can still be efficient in reality.
Also, to further hammer home the point: due to the phrasing of the EMH, although no one may currently be exploiting P=NP, markets would still have the efficiency property now even if no one is exploiting it. Perhaps this sort of vacuously true statement rubs you the wrong way (it does me, a bit), given the strength of the "if and only if" the author used. But if you read "markets are efficient" as the EMH, then it is still a valid literal formulation.
On point three: sure, that's great for reality. But for the formulation of markets being efficient as an inherent property (again, the EMH) of markets, the size of the market could be held as effectively infinite (or at least extremely large) and the property should still hold. At some point the size of the theoretical market will blow up the polynomial, and for the EMH to hold, P=NP must be true.
I.e., it is almost as good.
So sure, maybe the market isn't 100% efficient. Maybe it is instead 99.99999% efficient, and that's good enough.
Or in other words, the author of the paper is trying to be clever, and in the process he kinda misses the point of why the efficient market hypothesis is important to begin with.
Also, I once heard an economist explain the EMH as “true if enough people operate under the assumption that it is not true”
Edit: After waking up a little more, I’m not entirely sure of my statement that Amazon, Wells Fargo, and Verizon serve as examples against the EMH. But the latter quote I heard from someone with 30+ years of research experience.
While you're right in your points here, the CS aspect of this paper leans on CS theory. When a computer scientist says "we cannot do this efficiently", it means "we might be able to do this efficiently for some instances of the problem, but not for all instances". And when the scientist says "we can do this efficiently", it means "we know how to solve this in a way that when we double the input size, the difficulty grows by a fixed factor. But it might still be infeasible to solve any instance of the problem whatsoever."
Personally I think it is very unlikely that the markets are close to the best approximation we can afford. The current market-making agents use limited intelligence on limited data. It is an efficient system in the sense that it beats randomness and it beats a central (human) intelligence with (allegedly) superior access to information.
The project was very short-lived, but (IIRC) the primary designer was Stafford Beer (https://en.wikipedia.org/wiki/Stafford_Beer) who based the work on his so-called Viable System Model (https://en.wikipedia.org/wiki/Viable_system_model). For the fundamental theories of the time you'd want to read Beer's works as well as those of his contemporaries, who espoused competing theories and methodologies. (I say "at the time" but the state of the art never really changed. Project Cybersyn is a fascinating chapter in the long history of AI, and if there's any dominant thread to AI it's one of diminishing expectations and migration to ever more circumscribed problem domains.)
As I mention in my Amazon review the author, Eden Medina, seems to have compiled a ridiculous amount of material but only a fraction of it bleeds through into her book. The book is fascinating but if you're interested in the theory and history more generally then the book's bibliography is priceless.
It's been a while since I read the book but here's one lasting impression: one of the biggest problems with Project Cybersyn was communication between producers and consumers. Much of the budget and time was actually spent on telecommunications infrastructure and then figuring out how to get people to use it properly. Which hints at one of the most important functions of a market: price signaling. Regardless of whether a market is efficient, given the dynamic nature of a complex economy any system you set up that tries to centralize price signaling (capturing pricing information is a prerequisite for processing it and generating optimal allocations) seems like it'd very quickly become antiquated and a hindrance. Markets may be inefficient but at scale not only are they remarkably powerful distributed computation engines, they co-evolve with the economy. But that doesn't mean there isn't room for applying these techniques in sub-domains (e.g. trading engines, city governance, etc.), improving overall efficiency.
- "Red Plenty" by Francis Spufford, a mix of fiction and non-fiction about planning experience in the USSR. It includes a rich bibliography and references to papers published over the past 70 years around this issue.
- There are a few papers by Chinese economists, most notably this one: https://boingboing.net/2017/09/14/platform-socialism.html (you have to mess around with Sci-Hub mirrors to get a free copy).
- There are a few papers and books by Michael Ellman, e.g.: https://www.amazon.com/Planning-Problems-USSR-Contribution-M...
- I also have a few primers on linear programming in my to-do list, e.g.: https://www.amazon.com/gp/product/0486654915/
- Somewhat tangential, but "the greatest American capitalist" ripping into EMH is a fun read too: https://www8.gsb.columbia.edu/articles/columbia-business/sup...
Still, none of these really talk about the software, but I would imagine a combination of:
- existing supply-chain systems already in place at Amazon, Walmart, etc.,
- something along the lines of a non-monetary Kickstarter to gauge the popularity of ideas from the ground up and encourage innovation
- strong democratic institutions
- still allowing a free market at low levels, like individual entrepreneurs that don't employ anybody (once you employ someone, it must be a co-op).
Dunno, these are just random ideas in my head. :)
https://content.iospress.com/articles/algorithmic-finance/af...
If you want a full critique, read Steve Keen's Debunking Economics, it has a chapter on EMH.
Oh, and by the way, there are quite a few people who believe that P=NP. Most famously Donald Knuth. I recently became convinced of that as well.
Amusingly enough, the first thing that a rational person would do upon discovering a relatively efficient algorithm to solve NP problems would be to cash in all the Bitcoins. Thus proving in practice that no, markets are not really efficient. :-)
Frankly, everybody with a bit of economic common sense knows
Said every hot-headed teenager with an attention span too short to think through scientific arguments. Citation? No True Scotsman fallacy?
I am pretty sure that most economics Nobel prize winners do not believe in the EMH; off the top of my head, Akerlof and Kahneman.
Isn't it also ironic that you criticize the book as being pop-sci by perusing Wikipedia? You know, in both cases, being pop-sci doesn't imply being wrong.
It's not even binary, and it is not a static property; just like our laws are only a proxy for the question of what is ethical or moral, and the truth of their purpose is entirely based on the historical context.
Note that I said "people invest in markets", and not index funds, above. There is a subtle difference: modern index funds are not true market indexes. There are fund managers who trade actively -- however, they trade much less often than traditional actively managed funds and with different rules. The managers do watch earnings reports and sell bad stocks, but the managers are also in for the long term and are willing to wait out a downturn. The managers also know they are judged against the index, so bad stocks do not count against them the way they would for actively managed funds. Thus an index fund trades much less often, but it is not really a buy-the-index-as-published fund.
I would argue that belief in the EMH is supported by the existence of index funds. But index funds are also supported by laws that require a certain level of openness and information sharing (such as laws against insider trading). When this information sharing is not enforced by law, it is a major source of market inefficiencies.
Someone correct me if I'm wrong, but if:
1. Nash Equilibrium ⊂ FNP [1]
2. "Markets are efficient" => FNP ⊂ FP
How is this different from "FP = FNP if and only if P = NP" [2], which is a result already found?
[1] https://en.wikipedia.org/wiki/PPAD_(complexity) [2] https://en.wikipedia.org/wiki/FNP_(complexity)
If a strategy is a sequence of BUY-HOLD-SELL decisions, then just because there are O(3^n) strategies doesn't mean you need to evaluate them all. It seems pretty easy (if a strategy is only defined in retrospect) to define a greedy algorithm that finds the optimal strategy. (See page 16.)
The author goes on to compare this to the Knapsack problem (pg 19). The thing that makes the Knapsack question (and NP-complete problems generally) hard is that greedy algorithms don't work (as far as we know), whereas it seems like a greedy algorithm works for the problem the author has laid out.
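For illustration, here's a minimal sketch of that greedy idea (my own hypothetical code, assuming a single asset, hindsight access to the price changes, and no position constraints): the optimal retrospective strategy is just to be long before every UP and short before every DOWN, found in linear time rather than by enumerating the O(3^n) strategies.

```python
# Hypothetical sketch: hindsight-optimal strategy for one asset.
# price_changes: list of +1 (UP) or -1 (DOWN), as in the paper's setup.
def optimal_hindsight_strategy(price_changes):
    # Greedy choice at each step: long (+1) before an up move,
    # short (-1) before a down move.
    positions = [1 if c > 0 else -1 for c in price_changes]
    profit = sum(p * c for p, c in zip(positions, price_changes))
    return positions, profit

print(optimal_hindsight_strategy([1, -1, -1, 1]))  # ([1, -1, -1, 1], 4)
```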
> The basic argument is as follows. For simplicity but without loss of generality, assume there are n past price changes, each of which is either UP (1) or DOWN (0). How many possible trading strategies are there? Suppose we allow a strategy to either be long, short, or neutral to the market. Then the list of all possible strategies includes the strategy which would have been always neutral, the ones that would have been always neutral but for the last day when it would have been either long or short, and so on. In other words, there are three possibilities for each of the n past price changes; there are 3^n possible strategies.
There are 2^n possible histories of length n. If a strategy maps each history to one of three positions, there are 3^(2^n) strategies that consider n bits of history.
"Trading is ultimately a partial information, sequential game with an unknown number of participants whose payoff functions (and utilities) are also unknown."
Markets in general are essentially an infinitely-armed bandit problem where everyone plays whether they know it or not, and resources are allocated over time in a manner not unlike evolution.
Edit: "Does P=NP imply efficiency? Not at all." This isn't even a claim of the paper. The paper is claiming the opposite, that market efficiency is true only if P=NP.
> So what should the market do? If it is truly efficient, and there exists some way to execute all of those separate OCO orders in such a way that an overall profit is guaranteed, including taking into account the larger transactions costs from executing more orders, then the market, by its assumed efficiency, ought to be able to find a way to do so. In other words, the market allows us to compute in polynomial time the solution to an arbitrary 3-SAT problem.
In reality, most financial markets are pretty efficient but none are perfectly efficient -- if they were perfectly efficient, it would imply not only perfectly efficient trading systems and an inability to get an 'edge' on the market, but also perfectly efficient market systems, which are limited by technology, conventions, and regulations (for example, significant inefficiencies arise in US securities from not being open all the time, with very little liquidity still available in the 'after hours' markets). To achieve even 'pretty good' efficiency requires significant energy, and I'm not sure I understand how the author can imply that the energy used in the past to calculate the current price is equivalent to the energy to verify the current price. As a trader, I can tell you that most market participants do not care to verify past calculations of the current price; they only care about the future price, and will generate an action from the differential between the predicted price and the current price.
That's just what P = NP means: the cost to verify a solution is the same as the cost of finding one.
But in the article, there is an interesting section on that:
> If P = NP, even with a high exponent on the polynomial, that means that checking strategies from the past becomes only polynomially harder as time passes and data is aggregated. But so long as the combined computational power of financial market participants continues to grow exponentially, either through population growth or technological advances, then there will always come a time when all past strategies can be quickly backtested by the then-prevailing amount of computational power. In short, so long as P = NP, the markets will ultimately be efficient, regardless of how high the exponent on the polynomial is; the only question is when, and the answer depends on the available computational power.
So this section, if I understand it correctly, says that problems in P are easy because the computational power in the world grows exponentially, so we can assume that any polynomial-time problem will eventually become feasible to solve.
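A rough numeric sketch of that reading (all growth rates and figures here are assumptions for illustration): the number of hardware doublings needed to afford n^k operations is about k*log2(n), so any fixed polynomial cost is overtaken after polynomially many doublings.

```python
# Assumed figures for illustration only: 1e9 ops/sec today,
# compute doubling every 2 years.
import math

def years_until_feasible(n, k, ops_per_sec=1e9, doubling_years=2.0):
    needed = n ** k  # cost of an O(n^k) algorithm at input size n
    doublings = max(0.0, math.log2(needed / ops_per_sec))
    return doublings * doubling_years

# ~179 years for an n^6 algorithm at n = 1e6, under these assumptions.
print(years_until_feasible(10**6, 6))
```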
That's an interesting way of looking at it. Is this really the reason why we consider polynomial problems much easier than NP-hard ones?
The importance of the difference between P and NP appears to be much greater in domains that are starkly discrete, such as cryptography, than in those that are approximately continuous, like finance, where there are often approximate methods in P that are very effective and efficient, at least for most real-world cases.
[1] Only in the last section does the author look at all at real market data, and there only to look for evidence in support of a weak prediction from the thesis. This is so riddled with uncontrolled factors that it tells us nothing about the quantitative consequences of P != NP on market efficiency.
We give importance to this question because it is open, it is big, it relates to many other parts of computer science, it has real-world implications, etc. The fact that you don't think this question is important likely means that you don't have the complexity-theoretic background required to appreciate how deep the question really is. This isn't bad, but it's myopic.
The fact that the exponents tend to be small in the algorithms that our puny brains were able to produce does not convince me one bit that all problems in P have algorithms with small exponents.
Why are you so sure that P vs NP is important? Does it really relate to so many parts of computer science?
I do agree that complexity analysis is a very (perhaps the most) important part of CS. But that does not necessarily mean that P vs NP is a fundamental question.
RSA wouldn't break even if P = NP. Even if we came up with a polynomial algorithm for integer factorisation, it could have a much higher exponent than the multiplication required to verify the result.
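A back-of-the-envelope sketch of this point (the exponent 100 is a made-up assumption): even a "polynomial" factoring algorithm can be astronomically more expensive than the single multiplication needed to verify the factorization.

```python
# Hypothetical costs in bit operations for an n-bit RSA modulus.
n = 2048
verify_cost = n ** 2    # one schoolbook multiplication to check p * q == N
solve_cost = n ** 100   # an imagined polynomial-time factoring algorithm
print(solve_cost // verify_cost)  # ~10^324: polynomial, yet hopelessly infeasible
```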
This is what bothers me the most, people saying that P = NP implies that solving a problem is as difficult as verifying a solution. That's simply not true: P = NP implies that solving a problem is as difficult as verifying a solution (modulo polynomial reductions).
The part about ignoring polynomial reductions seems very important to me. Otherwise I could also say that: "all even numbers are the same", when I actually mean: "all even numbers are the same (modulo 2)".
Would you care to convince me why we chose to ignore polynomial reductions and not some other complexity class?
The best explanation I see is the fact that different Turing machine formulations (single-tape, multi-tape, one-way infinite, two-way infinite, ...) and also the random access machines have polynomial reductions between them. But even this does not justify the importance we give to P vs NP in my eyes.
However, even in an exponential world, I am reminded of a quote:
"exponential algorithms make polynomially slow progress, while polynomial algorithms advance exponentially fast".
So, not only do you have the resource problem highlighted in sibling posts but also a recursion problem:
Viz., when the problem becomes solvable, the optimisation must add the market transactions for the sale of resources used to solve the optimisation; so now one needs to optimise that larger problem ... which again involves transactions, which shift the original optimisation.
If you can predict the optimisational perturbations needed to solve the enlarged system, then you might be able to break the cycle ... but won't you need to transact the processing of the perturbation, the optimisations of which will shift the inner optimisation?
The solution itself would be marketable, perturbing the market and negating the optimisation?
My instinct may be wrong, but it suggests the market can't be [perfectly] optimised (though in practice that probably doesn't matter).
Optimising markets might be the end goal of capitalism, but it's not the end goal of the rich (in general), who would rather remain rich than have perfect allocation of resources.
> The results of this paper should not be interpreted as support for government intervention into the market; on the contrary, the fact that market efficiency and computational efficiency are linked suggests that government should no more intervene in the market or regulate market participants than it should intervene in computations or regulate computer algorithms.
And if a government sponsored and modified such an algorithm in an attempt to optimize for equality (second only to efficiency of usage), such a system could be an effective socialism. It is at least an interesting avenue to consider if mathematical and computational parallels could be constructed.
How so? It implies simply that markets are not efficient. It does not imply that state control would be more efficient and it does not imply that markets are not the most efficient way of determining the value of a resource.
What it definitely says however, is that a government committee could not, in any way, successfully determine the value of all goods and fix prices based on that determination.
The real question is how efficient they are.
2. If you're serious about
> if a government sponsored and modified such an algorithm in an attempt to optimize for equality (second only to efficiency of usage), such a system could be an effective socialism.
The big problems you have to overcome are probably the Economic Calculation Problem [0] and the Principal Agent Problem [1]. I have yet to see any reasonable solutions proposed.
[0] https://en.wikipedia.org/wiki/Economic_calculation_problem [1] https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...
Even if all the information was true, there is still the problem that humans have very poor reasoning abilities combined with herd mentality which almost always overrides the reasoning part.
To predict the market, you don't need to understand the market, you need to understand people's distorted view of the market.
And even looking at the highly capitalized stock market, I know at least one story of a major player that got its start by noticing a violation of weak-form efficiency and became very rich by fixing the violation.
There's no reason to think that actual markets are now finally efficient, and in fact there are some violations that are publicly known, but which only bring a small return and require decades of investment to fix, so nobody is interested in trying.
That's the case here. This paper posits a definition of efficiency, but does not explain why that definition is correct or how it relates to other efficiency measures.
A better proof of arbitrage opportunities in markets is Wah 2016, which identifies actual arbitrage opportunities in actual markets.
Separately, what does "Since P probably does not equal NP" mean as a probabilistic statement?
And what is the correct way to concisely and precisely write: "most people familiar with the P = NP problem believe with varying degrees of confidence that P is not equal to NP, but so far no proof exists."
It actually does have a formal meaning! This is one of the most interesting results (in my opinion) on the P vs NP problem. Given a random oracle R, Pr(P^R != NP^R) = 1. So given a random universe, it is almost certainly true that your version of P != your version of NP. At least that's how my theory prof liked to explain it.
Here's a link with the proof: http://theory.stanford.edu/~trevisan/cs254-14/lecture04.pdf
Your excerpted sentence is already quite concise compared to serious surveys like https://www.scottaaronson.com/papers/pnp.pdf
I don't see that as an argument against the EMH. I think it is inherent in the EMH that the market has more computing power than any individual.
I would say that it is the core of the EMH: that more people make better predictions, because they have more computing power.
I always found it strange that the EMH is often defined as the market price including all available data. As if the data can simply be added without interpretation.
In the sense that it takes a huge amount of time and images to train the thing to become smart (in other words, to go through the entire network of neurons one by one and assign better weight values, etc.), but when it comes to actually verifying whether our network is smart, all it takes is to run an image straight through the network?
And the issue of trying to prove P=NP is essentially trying to prove that there isn't a magical way to train a neural network with just one image of training data?
NP-complete problems (the hardest problems in NP) are essentially problems where the following two properties hold:
1. No better algorithm is known than brute force -- generating every possible result and checking if it's a valid result.
2. Checking that a given result is valid is doable in polynomial time or better. (This is less critical to understanding the hardness, but essentially your `isValidSolution(input, solution)` function needs to take `O(n)`, `O(n^2)`, `O(n^3)`, (etc.) time or better; property 2 on its own is what defines NP.)
Essentially, an NP-complete problem is a problem where you have to brute-force the solution, but it's easy to know when you've found the correct solution.
If P = NP, that means there are no problems where this is true -- it would mean that any problem where it's easy to know if you've found the solution also has an algorithm for finding it more efficiently than brute force.
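A small sketch of that asymmetry using subset sum, a standard NP-complete problem (illustrative code of mine, not from the thread): checking a proposed subset takes linear time, while the obvious way to find one tries all 2^n subsets.

```python
from itertools import combinations

def is_valid_solution(numbers, target, subset):
    # Verification: polynomial (linear) time, assuming subset is drawn from numbers.
    return sum(subset) == target

def find_solution(numbers, target):
    # Search: brute force over all 2^n subsets.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if is_valid_solution(numbers, target, subset):
                return subset
    return None

print(find_solution([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
```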
Your neural network example doesn't apply because training a neural network doesn't require brute forcing the solution space of the neural network weights. That would be crazy. So it's a problem in P.
A non-deterministic Turing machine can interpret multiple instructions (write 5 to tape and go to state 2 OR write 3 to tape and go to state 19) and (depending on interpretation) use the instruction that gives the result fastest, or explore all instruction branches at once.
Note that the "OR" in there is not a classical if-else, it means literally the machine can pick one. There is no memory value that tells it which is correct.
Basically, problems that are polynomial on an NDTM will likely not be polynomial on a normal machine (with some exceptions depending on the problem).
Re 2: As above, verifying is actually easy: you can simply follow the execution branch the NDTM picked on a normal deterministic Turing machine.
At least, this above is what I learned in my second-semester CS course.
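To make the "pick one" idea concrete, here's a toy sketch (my own, not a full Turing machine): a nondeterministic machine guesses one bit per step for free and accepts if any branch reaches the goal, while a deterministic simulation has to explore both branches, taking O(2^n) time.

```python
# Toy model: n binary "guesses"; accept if some branch's guessed bits sum to goal.
def nd_accepts(n, acc, goal):
    if n == 0:
        return acc == goal  # accepting-state check
    # The NDTM takes one of these two transitions "for free";
    # a deterministic simulation must try both.
    return nd_accepts(n - 1, acc, goal) or nd_accepts(n - 1, acc + 1, goal)

print(nd_accepts(5, 0, 3))  # True: some branch of 5 guesses sums to 3
```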
We consider a 2-layer, 3-node, n-input neural network whose nodes compute linear threshold functions of their inputs. We show that it is NP-complete to decide whether there exist weights and thresholds for the three nodes of this network so that it will produce output consistent with a given set of training examples.
https://papers.nips.cc/paper/125-training-a-3-node-neural-ne...
I get the reasoning why people aren't sure if P = NP.
Here is another question.
What defines time? And what about a solution?
Consider the fact that in the theory of general relativity we have the twin paradox.
What if I sent a computer on a rocket at close to the speed of light, and on Earth I had the same machine running the calculation that takes a million years, and then when the other computer gets back from its interstellar vacation, I just have one computer ask the other what its part of the solution is.
According to the computer that flew away, the amount of time it takes to solve the problem can be considered ~P, in a way.
My intuition is telling me that this is probably the wrong way to think about time complexity though.
> And the issue of trying to prove P=NP is essentially trying to prove that there isn't a magical way to train a neural network with just one image of training data
No it's not. You can't do reasoning on bullshit inaccurate analogies.
Without knowing too much about neural networks myself:
Theory stuff:
P, NP, NP-hard and NP-complete only refer to complexity classes, i.e. ways to categorize decision problems (by currently known solutions or mathematical proofs of lower and upper bounds on complexity).
P contains any problem for which there is an algorithm that can decide it on a deterministic Turing machine (read: an abstract definition of a computer) in an amount of time that grows polynomially with the size (n) of its input.
Example: Checking if a list contains a specific item can be achieved in O(n) time - just iterate through the list. Searching a pre-sorted list (binary search) can be achieved in O(log(n)) time. This means that if your list is longer by a large factor, both algorithms will take accordingly more time, but the amount of computation necessary in the pre-sorted case will grow much less than in the generic case. Both problems are within P, though.
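A quick illustration of those two cases (example code of mine, not from the original comment):

```python
import bisect

def contains_linear(items, x):
    # O(n): scan every element.
    return any(item == x for item in items)

def contains_sorted(sorted_items, x):
    # O(log n): binary search via bisect on a pre-sorted list.
    i = bisect.bisect_left(sorted_items, x)
    return i < len(sorted_items) and sorted_items[i] == x

print(contains_linear([7, 3, 9, 1], 9), contains_sorted([1, 3, 7, 9], 9))  # True True
```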
NP contains all problems that can be solved by a nondeterministic Turing machine in polynomial time. If you think of your problem as an automaton with states, this machine could deal with ambiguous state transitions in an efficient manner, "magically" only picking those options that succeed. (This is often compared to the promise of a quantum computer, though the analogy is loose: a quantum computer is not actually a nondeterministic Turing machine.) P is thus a subset of NP, since the nondeterministic Turing machine can do anything a deterministic one could do too.
You can still solve those problems on a normal computer, but (as far as we know) not always in polynomial time (P=NP has been neither proven nor disproven to date).
Then there is the idea that you can solve problems using solutions for other problems merely by transforming the input in polynomial time.
NP-hard problems are those which you could use to solve any other problem within NP this way. Those don't necessarily have to be in NP themselves - their complexity could be even worse. There is an intersection with NP, though: problems within NP that you can use to solve all other problems in NP. Those are called NP-complete.
If you now found a way to transform the input of an NP-complete problem into input for some problem in P in the same way (remember: in polynomial time on a DTM), then you would have effectively proven that P=NP.
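As a concrete (textbook, not from this comment) example of such an input transformation: INDEPENDENT-SET reduces to CLIQUE in polynomial time by complementing the graph's edges.

```python
# G has an independent set of size k  <=>  the complement of G has a clique of size k.
def complement_edges(n, edges):
    # Polynomial-time input transformation: O(n^2) work on an n-vertex graph.
    all_pairs = {(u, v) for u in range(n) for v in range(u + 1, n)}
    return all_pairs - {tuple(sorted(e)) for e in edges}

print(complement_edges(4, [(0, 1), (2, 3)]))  # {(0,2), (0,3), (1,2), (1,3)} in some order
```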
On the NN comment:
I've always understood AI in general and especially trained algorithms to be heuristics. If you talk about P and NP, you're talking about mathematically proven solutions, so the comparison doesn't work out in the strict sense. On the same note, NNs are often used to solve problems that don't have strictly "correct" solutions. If your problem is "does image X depict a goat?", you won't get far with traditional means. A NN, on the other hand, trades accuracy (false positives and false negatives will happen) and up-front work (training with a dataset not provided by the actual problem instance) for speed.
It is related to the discussion though, since a generic training algorithm for neural networks actually will have non-polynomial complexity as far as I understand (at least Google tells me so ;-) ). Note however that training a NN is also not a decision problem: the result is not true or false, but the resulting network.
Edit: Replaced a reference to sorting with searching for an item, since sorting is not a decision problem.
Aren't all proofs flawed in this way?
Like for example, 2 = fish. Prove that there isn't an undiscovered mathematical axiom that makes this not nonsense?
Knuth is a notable exception, although to my knowledge he has never really vigorously advocated P=NP, but merely suggested the possibility that P=NP and that the algorithms to transform NP problems into P problems could be very, very complex but still in P. Seems unlikely, though.
The original Fama paper distinguishes between three forms of the EMH: weak, semi-strong and strong.
The weak form only alleges that historical price info is fully incorporated in the current price. You can still make money by working hard and figuring things out from outside the price history.
The weak form applies to purely technical trading that extracts value from price history, like momentum and the simple moving-average crossover studies you see.
If you buy a call option, the market-maker buys stock to hedge it. That moves the underlier, and raises the price of the call option he just sold. The reverse is true if you sell a call (or buy a put).
Not only that, but there is a cost for all the people and computers that your order touches as it gets executed.
Without price impact from transactions, markets can't be efficient, because new information has to get priced into the market somehow.
From the paper: "if markets are weak-form efficient, meaning current prices fully reflect all information available in past prices, then P = NP, meaning every computational problem whose solution can be verified in polynomial time can also be solved in polynomial time. I also prove the converse by showing how we can "program" the market to solve NP-complete problems.
However, there has been no proof that there does not exist some algorithm that can determine satisfiability in polynomial time. If such an algorithm were found, then we would have that P = NP. If a proof is discovered that no such algorithm exists, then we would have that P ≠ NP. Just as most people in the field of finance believe markets are at least weak-form efficient, most computer scientists believe P ≠ NP. Gasarch (2002) reports that of 100 respondents to his poll of various theorists who are “people whose opinions can be taken seriously,” the majority thought the ultimate resolution to the question would be that P ≠ NP; only 9 percent thought the ultimate resolution would be that P = NP. The majority of financial academics believe in weak form efficiency and the majority of computer scientists believe that P ≠ NP. The result of this paper is that they cannot both be right: either P = NP and the markets are weakly efficient, or P ≠ NP and the markets are not weakly efficient. "
We have used mathematical abstractions to model real-world mechanisms every day for millennia.
Not only that, but there are all kinds of mathematically defined limits that no real-world mechanism can bypass, from a purely logical perspective.
If you only have 10 dollars and I give you 20 dollars, you'll have 30 dollars, not 500 -- that's a mathematical truth that absolutely holds in the real world too.
Computational complexity and markets have more in common than most think. Mechanisms which rely on algorithms, such as auctions, have runtimes.
Hence why we have a field called algorithmic game theory [1].
Um, am I wrong in thinking that P and NP refer to non-parallel algorithms?
(Apologies in advance if I'm having a brain fart.)
My understanding of the central argument in the paper is the following:
Definition: Let N be a positive integer denoting the length of the history, M be a positive integer denoting the number of assets. A market realization is an element of {-1, 1}^(NxM), i.e. M vectors of length N where all entries are either +1 or -1.
Definition: We call a function f: {-1, 1}^(TxM) -> {0, 1}^M satisfying f \circ s = s \circ f for every s in the symmetric group S_M a technical strategy with lookback T. The symmetric group condition just states that permuting the length-T vectors among the M assets is the same as permuting the output of the strategy, i.e. that the strategy has no inherent preference among the assets.
The payoff of a technical strategy s on a market realization h is given by payoff(s, h) = \sum_{i=T}^{N} s(h_{i-T}, ..., h_{i-1}) \cdot h_i, where the indexation is in the time dimension, i.e. h_i denotes a length-M vector. The budget of a technical strategy is budget(s) = max_{v \in {-1, 1}^(TxM)} s(v) \cdot (1, ..., 1), that is, the maximal number of assets it wants to hold in any given state of the world.
Given a market realization h, positive integers B and K we say that h is (B, K) EMH-inconsistent if there exists a technical strategy s such that budget(s) <= B and payoff(s, h) >= K. If a market realization h is not (B, K) EMH-inconsistent we call it (B, K) EMH-consistent.
Claim (presented as a theorem in the paper): The problem of determining whether a market realization is (B, K) EMH-consistent is in P if and only if the knapsack problem is in P.
Claim: The weak efficient market hypothesis is true if and only if EMH-consistency is in P.
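To make the definitions above concrete, here's a tiny brute-force sketch (my own illustrative code; the permutation-equivariance condition is omitted for brevity, and all names are assumptions): it enumerates every strategy with budget <= B, which already shows why deciding (B, K) EMH-consistency naively is hopeless for realistic T and M.

```python
import itertools

def payoff(strategy, h, T):
    # h: list of length-M tuples with entries in {-1, +1}; strategy maps each
    # T-step window (tuple of tuples) to a position vector in {0, 1}^M.
    return sum(
        sum(p * x for p, x in zip(strategy[tuple(h[i - T:i])], h[i]))
        for i in range(T, len(h))
    )

def emh_inconsistent(h, T, M, B, K):
    windows = list(itertools.product(itertools.product((-1, 1), repeat=M), repeat=T))
    positions = [p for p in itertools.product((0, 1), repeat=M) if sum(p) <= B]
    # Brute force over all strategies: |positions|^|windows| candidates.
    for choice in itertools.product(positions, repeat=len(windows)):
        strategy = dict(zip(windows, choice))
        if payoff(strategy, h, T) >= K:
            return True
    return False

# True: the strategy "hold the asset only after a down move" earns 1 here.
print(emh_inconsistent([(1,), (1,), (-1,), (1,)], T=1, M=1, B=1, K=1))
```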
In the second part of the paper he sketches a model of an order book where he wants to encode 3-SAT as combinations of market orders. I do not understand how this is intended to work. That is: if all information is available and incorporated into the market, the information-generating process is stopped, and I have bid-offer spreads because of transaction costs, and I irrationally (remember, I am not interested in buying or selling the security, I am just interested in solving a 3-SAT problem, so my actions should not influence the price-generation process of an efficient market) enter an OCO-3 order to buy A, B or C at mid -- why should this result in a transaction? In the case (a or a or a) and (!a or !a or !a) I make one trade with myself; in the case (a or a or a) and (b or b or b) and (!a or b or b) I make one trade with myself. But one of the problems is satisfiable and the other is not.

Now it seems obvious that by inventing new order types we can get order book rules that allow for complex computation to resolve clearing, but this is a problem with the proposed order types, not with the efficient market hypothesis. A (to me) equivalent avenue of investigation would be to imagine order types such that deciding the clearing of an order book involves solving an undecidable problem -- i.e., what are the most reasonable order types and order book rules such that we can encode the halting problem?
This paper is entertaining but it depends on the assumption that computing power grows exponentially forever and will eventually be able to solve all P problems at negligible cost. That assumption is clearly not justified.
And the belief that any market perfectly incorporates data, not a single point off in any stock, is basically a strawman anyway.
You know, this is the kind of thing that makes me wonder about how much quantum computation is going to change the game.
Where we aren't just calculating ups and downs but superpositions between the two.
Either way, I suspect that the superposition between the change in value and the change in time, in relation to all of the other superpositions in value/time changes, is the true computational power of quantum technology.
https://arxiv.org/abs/1011.0423
(I didn't set the date, so it's wrong in the document; this was published in 2010.)
It doesn't depend on P = NP, it's simply a rigorous proof that EMH is false.
Let's switch gears a second. Here's a famous elementary proof[1] that there are infinite primes. Suppose there are just finite primes, up to some largest. Multiply them together and add one. No prime divides the new number (because "every" prime leaves a remainder 1), so you've just produced a new prime. This new prime is larger than the largest prime in your finite set because you multiplied that by the rest of them and added one to get it. So this is a contradiction, you couldn't have had finite primes up to some largest.
Anyway if you think there might be a largest prime after what you just read, it just means you don't understand the proof. If you believe EMH might be true it just means you don't understand the proof that it is false.
Of course, nobody ever even hypothesized that academia was efficient :)
[1] https://en.wikipedia.org/wiki/Euclid%27s_theorem#Euclid's_pr...
--
EDIT: no mistake in my comment
"Multiply them together and add one. No prime divides the new number (because "every" prime leaves a remainder 1), so you've just produced a new prime. This new prime is larger than the largest prime in your finite set because you multiplied that by the rest of them and added one to get it."
Instead you have created a number that may or may not be prime but definitely requires a new prime number (not in your set) to factorise it. Counterexample: take your set of prime numbers to be {2,3,5,7,11,13}; then
(2·3·5·7·11·13) + 1 = 30031
30031 factorises into 59·509, so you have found two prime numbers that are not in your original set.
EDIT: Responding to the edit above. The problem is that you claim you make a new prime number by multiplying them all together and adding one. You didn't multiply all the numbers and add one to get a prime number; you multiplied all the numbers and added one to get a number (POSSIBLY NOT PRIME) whose FACTORS are prime numbers not in your original 'supposed' finite set. Your proof essentially lacks the step: IF new_number is prime: proof finished ELSE: factor new_number and show that at least one of the factors is not in your finite set.
EDIT 2: Counterexample number 2. Suppose your finite set of primes is {2,7}. Then (2·7) + 1 = 15, so you have found 3 and 5 as primes that are not in your original set and are SMALLER than the largest prime in your original set. This is now a second mistake in your proof. Whether you are trolling or just too arrogant to see the mistake, I do not know.
Edit: I've thought about this a bit more, and you can save the parent post by prepending a result like "Every number larger than one is either prime, or divisible by a prime smaller than itself". In this case you can then assert that your constructed number is prime. However, this result requires its own proof and was not mentioned by the parent post.
It is possible that the OP did not make the mistake that you are attributing to them, although best case is that the exposition was simply unclear.
This little mental exercise proves that 2 and 7 can't be the only primes in existence. It doesn't really matter what 15 really is or isn't. No need to add noise about it. We only care about the supposition, which you cut out of your quote.
Hope this explains why I didn't deal with the true status of new_number! It just doesn't matter.
However, I'm disappointed in you, in the other respondent, and in the moderation here, and I'm glad I made this alternate account.
But it seems to me that even in the example from your note, the stock price will tend to reflect the correct price. As you release new information, the factorization becomes more feasible, and once it's feasible enough, the stock price will reach the correct value. Maybe we should amend the EMH to say that asset prices fully reflect all the facts that are feasible to compute from the available information.
Anyway, the reason I'm writing this comment is to say that you should not be so sure of your proofs, especially because your proof of the infinitude of primes is wrong. This proof does not show that the new number (the product of the primes + 1) is prime. It shows that none of the primes are factors of this new number, and from that it follows that there must be more primes, but it does not follow that the new number is necessarily prime (it might be a product of another unknown prime and the others).
For the prime proof, the new number is prime as long as it has no prime factors less than itself, which we assumed at the start. This guarantees it's prime under our assumption, and this falsifies the assumption. It doesn't matter if it's really prime.
I think if we were to approach the EMH from a constructive or computability approach, we'd have to make that definition much crisper. That is, the feasibility of computing the factorization based on limited information determines whether the information is actually "available" in the sense of the EMH.
In the real world we can see things like drug trials that will determine the success of a pharmaceutical company. The information that the drug works or doesn't work, or has side effects, would be "apparent" to a hypothetical "ideal" expert in human biochemistry -- so simply knowing the chemical formula for a new drug, the information about its working is "available" in that sense.
I like your drug trial example but a "hypothetical ideal expert" isn't assured to exist. Do you have an existence proof?
On the other hand, unique prime factorization is assured, we know that given infinite time anyone can factor any large number.
Stop worrying about karma and what some random dudes think about you/your comments on HN or whatever online board.
If you think it is wrong simply don't do it.
You know, just how people used to tell homosexuals to not be gay when other people thought they were freaks because of it.