It should go without saying that Solomonoff induction is totally useless for practical applications, if interesting theoretically. Brute forcing the space of all programs is ridiculously mindbogglingly universe-crushingly expensive. (Actually, even if the process was magically tractable, there would still be limitations and dangers [3].)
1: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.108...
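A toy illustration of that blow-up (my own sketch, not from the article): merely counting the binary programs up to a given length, before running a single one of them, already grows exponentially.

```python
# Count all binary "programs" of length 1..max_len.
# There are 2**k programs of length k, so the total is 2**(max_len+1) - 2.
def num_programs(max_len):
    return sum(2**k for k in range(1, max_len + 1))

print(num_programs(10))    # 2046 -- fine
print(num_programs(100))   # ~2.5e30 -- already hopeless
print(num_programs(1000))  # astronomically beyond any physical computer
```

And Solomonoff induction doesn't just count them; it has to run every one of them, including the ones that never halt.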
Indeed, just like Turing machines and the lambda calculus. Still an important theoretical step though!
I've always thought this kind of concept might define the limits of standard science, as currently practiced. Science requires reproducibility. But by whom? Well, other scientists of course. If every scientist tries your experiment and gets the same result, then you have a validated scientific theory.
But suppose that you manage to set up an experiment where the perception of which measurement resulted depends upon who is perceiving the result (I can think of a few ways that this scenario might arise if we could ever figure out a way to generate macroscopic, human-scale, superpositions [which is unlikely, I'll add]). That would really throw a wrench in things. In that case, you would have to have each scientist convincingly prove to each other scientist that they all see something different, in which case perhaps the result could still be universally accepted. But there may be a limit on how much consensus we can ultimately get.
Actually, the reproducibility applies to the hypothesis (conclusion), not to the experiment. It's only that, since the observer is irrelevant, people simplify and call it "reproducing the experiment".
If your experiment depends on the observer, and you come to know that, the dependence becomes part of the conclusion, and the people reproducing your experiment will expect to see the result your hypothesis says they should see, not the same one you got. If you have a correct predictive model, everybody will conclude it's correct.
One way that I think you could prove it to others is to have a way to predict the measurement, depending on who's perceiving it, that is correct at least 60% of the time. That would constitute some form of proof to others, at least.
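As a sketch of why 60% would count as proof (my own illustration, with a made-up trial count): a one-sided binomial test shows that 60 correct predictions out of 100 is quite unlikely if the predictions were pure guesses.

```python
from math import comb

def p_value(n, k, p=0.5):
    # One-sided binomial tail: probability of k or more successes out of
    # n trials if each prediction were a coin flip (p = 0.5).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 60 correct predictions out of 100 trials:
print(p_value(100, 60))  # ~0.028 -- below the usual 0.05 threshold
```

More trials at the same 60% hit rate would drive the p-value down rapidly, making the claim harder and harder to dismiss as luck.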
I don't think that joining everyone's brains together would make everyone feel like a small body part of a single, greater being, because our brain architecture really wouldn't support that. You might be able to handle some shared "input" with another person or a small group of people (like the conjoined twins who share a thalamus[0]), but you're going to run into bandwidth issues pretty quickly given that there are only ~1-2 million nerve fibers in each optic nerve[1]. If you're trying to split that 7 billion ways, you're going to have a difficult time getting coherent information through, let alone processing it.
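The bandwidth point takes one line of arithmetic (rounded figures of my own: ~1 million fibers per optic nerve, 7 billion people):

```python
optic_nerve_fibers = 1_000_000   # roughly 1 million axons per optic nerve
population = 7_000_000_000

fibers_per_person = optic_nerve_fibers / population
print(fibers_per_person)  # ~0.00014 -- far less than one fiber per person
```

Even before any processing, there simply aren't enough input channels to give everyone a meaningful share.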
The second major limitation to joining brains together is the speed of light--once we're able to open up communication between brains to allow "communicating via thoughts", we'll be communicating at the speed of our thoughts, which is much faster than physical speech. Connecting your brain with the brain of someone on the other side of the world might be a pretty disappointing experience because they wouldn't be nearly as responsive as someone physically nearby. Uploading brains and running them at higher "clock speeds" than biological hardware permits would make this limitation more significant, because you might subjectively experience a communication time lag that would feel like hours, days, or longer when connected to someone far away. In other words, there would be a limiting radius in physical reality for effective brain-connecting communication that varied depending on the speed of your subjective experience.
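A rough back-of-envelope for that limiting radius (all numbers here are my own assumptions: antipodal distance along Earth's surface, signals at light speed, and an uploaded mind running 1000x biological speed):

```python
c = 300_000          # km/s, speed of light (rounded)
distance = 20_000    # km, roughly half Earth's circumference
speedup = 1000       # assumed subjective clock-speed multiplier

round_trip_s = 2 * distance / c          # ~0.13 s objective round trip
subjective_s = round_trip_s * speedup    # ~133 s subjective
print(round_trip_s, subjective_s)
```

At a 1000x speedup, a single round trip to the other side of the planet would feel like a couple of minutes of dead air; at higher speedups, proportionally worse.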
Those limitations aside, sign me up! Brain-AI merging and brain-brain communication are going to be the bee's knees.
[0]: https://en.wikipedia.org/wiki/Krista_and_Tatiana_Hogan#Progr...
> Is it possible to define a process that Solomonoff induction cannot predict? The short answer is yes, but the kinds of computers needed to simulate these processes don’t exist in the real world, and it’s unlikely that we’ll ever be able to build them.
Doesn't "pick a truly random number" qualify? And we can build those today.
But you're right that random numbers are inherently unpredictable; maybe I should add another footnote explaining what I meant there. (Edit: I added a clarification to the paragraph you quoted.)
[1] http://twistedoakstudios.com/blog/Post5623_solomonoffs-mad-s... in the "Thinking with Programs: Random Data" section
If you're feeding in unbiased independent random bits, any and every process is already an optimal predictor, since nothing can do better than 50% accuracy.
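A quick simulation of that point (my own sketch): on unbiased independent bits, arbitrary predictors all land at roughly 50% accuracy, so none beats any other.

```python
import random

random.seed(0)
n = 10_000
bits = [random.getrandbits(1) for _ in range(n)]

# Three arbitrary "predictors" of bit i, given the previous bit.
predictors = {
    "always_zero": lambda i, prev: 0,
    "alternate":   lambda i, prev: i % 2,
    "copy_prev":   lambda i, prev: prev,
}

for name, predict in predictors.items():
    hits = sum(predict(i, bits[i - 1] if i else 0) == b
               for i, b in enumerate(bits))
    print(name, hits / n)  # each hovers around 0.5
```

Since each bit is independent of everything before it, no function of the history can shift those hit rates away from 1/2.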
At the beginning of the blog post, it claims that it explains 2 things:
1. where exactly you might be able to use learning algorithms where you can't just use existing physics theories instead

2. a hands-on guide to applying learning algorithms in these situations
This is physicist code for "this blog post claims that it solves a major unsolved problem in physics." Let me explain.
Currently, we have the standard model and general relativity, which have been experimentally verified to extreme precision but are fundamentally incompatible with each other. So people have proposed theories of everything such as string theory, loop quantum gravity, and information/digital physics (which I'm obviously a fan of) to resolve these incompatibilities.
One of the biggest problems in fundamental physics right now is that the standard model and general relativity have been verified to such precision that it's hard to think of a practical experiment to show how they are wrong. The conventional wisdom is that this is only possible if we do things like measure the Planck scale or what happens inside a black hole, which are completely impractical on human timescales.
What this post proposes is that you actually don't need to measure the Planck scale or what happens in a black hole in order to test the proposed theories of everything, and instead you can do it with a sufficiently powerful computer simulation and a sufficiently good brain-computer interface. If our technology keeps improving exponentially, this may be possible in the next several decades.
So yeah, I told a bit of a white lie when I framed this post as a summary of recent research in information physics. I can back up almost everything in the post with the sources I linked to, but the part about the 0 or 1 experiment and predicting its outcome using Solomonoff induction is original research on my part, and I suspect it would be a very big deal if this works the way I think it does.
So here are the possible outcomes for this blog post:
1. The problem in physics I just described is actually already solved.

2. The blog post is fundamentally flawed, and/or it actually doesn't solve the problem that I'm claiming it solves.

3. The blog post actually does solve a major unsolved problem in physics, and this is a huge deal.
This is why I am so surprised at the comments I'm getting so far, since this proposal for experimentally testing theories of everything seems to be passing the internet commenter test. So if no one on HN finds anything seriously wrong with the blog post, can we get people of the caliber of Scott Aaronson, John Baez, Juergen Schmidhuber, or Stephen Hawking to look at it, so we can get a more definitive answer on whether this actually solves an unsolved problem in physics?
Also, kudos to Xcelerate's comment, which is the closest to the point I was trying to get at with the blog post.
There may exist an alternate form of causality that isn't time bound, which may be exposed here over short periods of time. I would hesitate to judge it "computationally infeasible" until we know more. :)
With regard to the cloning phase of your argument, is this not effectively the same as what the many worlds interpretation of quantum mechanics says happens all the time? FWIW, while I don't feel as if those other versions of me are me (assuming many-worlds is true), I realize that I am in no position to assert that I am the 'real' me, and in fact that it is beside the point to ask which one is.