I don't think it takes a math genius to see why this is a bad idea. In the same way that trading algorithms can get into feedback loops that crash markets, these "computational contracts" can cause cascading failures that hurt society as a whole. This is why human intelligence is so critical in running society: it can question whether its "programming" is correct and is having the intended effects, and adjust accordingly. Computational contracts have no such introspection, by definition. They resolve because the rules are satisfied, for better or worse.
And all of that isn't even considering the attack surface this exposes to malicious actors.
While AGI might be far off, I can certainly imagine computers running larger and larger sub-worlds. E.g., if all cars were self-driving, I am reasonably sure we could design traffic to be more efficient, with all cars coordinating with each other instead of humans trying to do so.
I can give you a partial answer, which is that human intelligence is somewhat slower, which mitigates the ability to crash the entire economy in 15 seconds. That's why a stock market can crash in 15 seconds nowadays, but "the housing market" can't. This gives people some time to respond with some degree of thought, rather than all the agents in the system suddenly being forced to act in whatever manner they can afford to act with 15 milliseconds of computation to decide.
We don't have good programming abstractions for "This doesn't make sense." You want something between proceeding and aborting.
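A rough sketch of what such an abstraction might look like (Python; all the names here are hypothetical, not from any real contract system): instead of forcing every outcome into proceed-or-abort, a contract could return a third result that explicitly defers to a human.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# Hypothetical three-way outcome: proceed, abort, or hand off to a human.
@dataclass
class Proceed:
    result: str

@dataclass
class Abort:
    reason: str

@dataclass
class Escalate:               # "this doesn't make sense" -- defer to a person
    context: str

Outcome = Union[Proceed, Abort, Escalate]

def settle_contract(sensor_reading: float, expected_range: Tuple[float, float]) -> Outcome:
    lo, hi = expected_range
    if lo <= sensor_reading <= hi:
        return Proceed(result="settle")
    if sensor_reading < 0:
        return Abort(reason="physically impossible reading")
    # Out of range but not impossible: neither settle nor void blindly.
    return Escalate(context=f"reading {sensor_reading} outside {expected_range}")

print(settle_contract(42.0, (0.0, 100.0)))    # Proceed
print(settle_contract(250.0, (0.0, 100.0)))   # Escalate -- a human should look
```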
Had the system been a mathematical contract, we would all be dead. Instead, a determined hero sat pondering whether he should do it, and then realized the sensors had to have been defective, as he wasn't dead yet.
There's no doubt that more things will be managed by machines and artificial intelligence, but let's not make human intelligence a second-class citizen.
This is the reason why few systems of significant complexity and risk exist today without humans in the loop.
I agree; it has long been a key trope of sci-fi:
>We've crashed her gates disguised as an audit and three subpoenas, but her defenses are specifically geared to cope with that kind of official intrusion. Her most sophisticated ice is structured to fend off warrants, writs, subpoenas. When we breached the first gate, the bulk of her data vanished behind core-command ice, these walls we see as leagues of corridor, mazes of shadow. Five separate landlines spurted May Day signals to law firms, but the virus had already taken over the parameter ice. The glitch systems gobble the distress calls as our mimetic subprograms scan anything that hasn't been blanked by core command.
>The Russian program lifts a Tokyo number from the unscreened data, choosing it for frequency of calls, average length of calls, the speed with which Chrome returned those calls.
>"Okay," says Bobby, "we're an incoming scrambler call from a pal of hers in Japan. That should help."
from Burning Chrome, by William Gibson.
You could even argue that it is one of science fiction's founding tropes, as it can be traced back to the stories regarding the golem.
In other words, if I could easily use the Mathematica library from Clojure, I wouldn't give the Wolfram Language a second glance. I can't think of even a single language aspect that would tempt me to use the Wolfram Language [context: I've been a Mathematica user since 1993] over Clojure. I have (much) better data structures, a consistent library for dealing with them, transducers, core.async, good support for parallel computation, pattern matching through core.match, and finally a decent programming interface with a REPL, which I can use from an editor that does paren-matching for me (ever tried to debug a complex Mathematica expression with mismatched parens/brackets/braces?).
This is why the man is a contradiction: his thoughts are undeniably interesting, but his focus and anchoring in the "Wolfram Language" is jarring.
Newton, Maxwell and Einstein didn't need to waste any of their time thinking about how to use a computer to solve the problems they worked on.
If I ask Google for the 2018 Wimbledon champ, it tells me it has found 4,700,000 results in 0.90 seconds. Take a step back and think about this.
They have their own knowledge graph. They have Wikipedia access. They have the ATP site cached. They have the Wimbledon site cached. But they aren't able to tell that the problem being solved doesn't need 4.7 million results.
This is the kind of mindlessness that happens when the focus is not on the actual problem, but what the computer can do.
Actually, a lost bit of maths history is that computational ability was really important; it was just really manual. Think about Gauss doing least squares to find the orbital parameters of Ceres by hand! Then the stories about him being able to multiply large numbers in his head start to make a bit more sense, not just as a parlor trick.
The cursor for the slide rule was invented by Newton.
This has nothing to do with "The Evans Field Equation"
In the official documentation (as well as his blog posts), the term “Wolfram Language” is used to refer to the combination of the two: both the syntax and the huge standard library, because you almost always encounter one using the other. This seems pretty common to me — I’ve said things like “I used the Ruby language” when I’m also using its built-in modules.
I agree with you in saying that the language part of the Language is nothing particularly special, though.
On the other hand, I feel that his theory of cellular automata as some fundamental underpinning to mathematics is misguided.
Also, I never managed to achieve any reasonable degree of code reuse in Mathematica. Most of my notebooks are single-purpose.
He is a bona fide genius. But it's difficult to tell if his jarring references to the 'Wolfram Language' and 'Wolfram Alpha' are simple, cynical selling, or if his vanity has blinded him into thinking the 'Wolfram Language' is a notable accomplishment on par with the other useful work he's done with physics, cellular automata and Mathematica.
In general, I am a conflicted fan. By many accounts he's an unpleasant person, and having read _A New Kind of Science_, his Principle of Computational Equivalence is really hand-wavy and not terribly rigorous.
And yet whenever he's mentioned in an HN story I always need to read it.
Being an expert at one thing doesn't make you an expert at everything. But it's hard to realize that when you're king of your world.
A different perspective. Highly creative (smart) people are those who make mental connections that others do not see but which exist. Crazy people are those who make mental connections that others do not see and which don't exist. Perhaps he sees the Wolfram Language as superior in some way that doesn't actually exist, or perhaps we're not seeing something that does.
The highest pursuit is the search for truth. This includes the challenging act of discarding things that we'd really like to be true, but are false. The success of science can be attributed to extreme, selfless intellectual honesty. The more ego gets in the way, the more the truth is compromised. Intelligence can be used in service of finding truth or in feeding our delusions. My view on the distinction made here between genius and madness is that it corresponds to the degree to which intelligence serves truth or delusion. Therefore I'd expect the most outstanding scientists in history were also very humble (perhaps someone with better knowledge of the personalities of great scientists can shed more light on this).
And to keep things consistent - I might be wrong and thus welcome challenges to this theory :)
Therefore it's a form of arrogance to dismiss (generally) arrogant people as always wrong. We have to be selective or we might miss something important they've seen.
Wolfram is a combination of irritating ego and inspiration.
I eventually learned this was just the 13-year-old's version of turning "why's the sky blue?" into "but why is Rayleigh scattering a thing?", and that there are limits to both human understanding of science and to computation in theory -- a computer with sufficient accuracy to model the world would, by definition, need as much memory as the world, and would have to model itself in it. I moved on from that idea shortly after learning this.
Is Stephen Wolfram just an overgrown child? Maybe unironically that's what being a genius is about.
But what if we had certain "compression" abilities that allowed us to simplify the world? Similar to how an audio codec massively reduces how much storage we need for music?
https://blog.stephenwolfram.com/2016/05/solomon-golomb-19322...
Am I also the only one who’s very skeptical of AI? I see no correlation between what we call “biological thinking” and computation.
Even though I don’t know much of the theory behind AI, to me it’s similar to saying that since we have lots of simple calculators, we can arrange them together in some specific way and emergent intelligence will arise.
Sure, yes, but you can say that about anything: arrange a bunch of forks together and intelligence might emerge.
And actually, from a math point of view, you could get lucky arranging some forks together and end up with intelligence, since intelligence seems to emerge from an arrangement of atoms.
I don't see why computation gets a go at intelligence while nothing else does. What's so unique about computation?
I don't know of any impressive results with arranging forks. If there were, you could model the fork behavior on computers and probably run it 1000000x faster.
It's certainly possible that some other elements than the simple calculators we use today will lead to the big breakthrough in AI. Perhaps quantum computation is needed. But right now, arrangements of simple calculators seem like the most promising.
I view that as biological intelligence figuring out all sorts of clever ways to make calculating tools produce impressive results.
> Perhaps quantum computation is needed.
But why invoke physics if we're comparing biological intelligence to our computing tools? Isn't it enough to note the big differences? And here I'm not talking so much about brain function as about being a living, social animal that has to survive within a vast human culture.
This is the difference between looking at computers as stepping stones to artificial replacements, as opposed to enhancements of our abilities. The AI stuff captures the imagination, promises fully automated utopias, scares us with apocalyptic scenarios, and is the stuff of lots of SciFi. Meanwhile the reality is that computers have always been tools aiding human intelligence.
For some reason we view these tools as one day being like Pinocchio. The mythos of the movie AI is based on that vision, where the robot boy becomes obsessed with the blue fairy turning him into a real boy so his adoptive biological mother will love him and take him back.
But maybe, like in the movie, someday the sentient robots will show up and give our robot boy his wish.
Humans (via intelligence) can also follow these rules, and produce the same output for a given input. So if some arrangement of forks gave rise to intelligence, then that arrangement would necessarily be a computer.
The central question is whether intelligence is something over and above computation. Coming down on this would require either: (a) an example of something intelligence can do that a computer (with the right program) cannot, or (b) some proof as to why no such example can exist.
I'm not confident we have the right philosophical tools to even approach this question right now. What does it mean for human intelligence to do a thing that a computer cannot? Say you do X -- I might just ask you to write down how you did it. If you are precise enough in how you did it, you've just given me a program.
Alternatively, even if you can't write it down, it seems plausible we could (in theory) write down the rules of physics. Then, given as input the state of your brain, we might iterate those rules forward and produce the output.
Going a bit off track here, but it seems to me that the only way intelligence could not be computed would be if intelligence were in some way non-physical -- and more than that, causal (so, not just epiphenomenal); it would actually have to be useful.
You can select for an intelligent swarm of fruit-flies, but it’s going to take you a lot longer to get your result than if you could simulate the whole process. Now, if it’s simulated, will you be able to walk your final result back to meat-world DNA? Not unless you included that level of detail in your simulation, which would be extraordinarily expensive to do and fraught with problems due to the scale. But even then, the simulation gives you some advantages: affect the flow of time, backtrack in time to explore other paths under alternate simulated conditions, etc. In the real world, you can’t have that kind of perfect reproducibility.
(I wrote universal computation to make a distinction from common physical processes like what forks usually do - we may obviously say that a glass of water computes, in a second, where the water molecules of that water will be in a second, but that is too direct and misses the abstraction level that my mind has (sensing and modelling my environment) or that a universal Turing machine has (manipulating symbols which may stand for, i.e. model, something).)
> Am I also the only one who’s very skeptical of AI? I see no correlation between what we call “biological thinking” and computation.
I think you're confusing several things together here, possibly (partly?) because:
> Even though I don’t know much of the theory behind AI,
I'll try to give you (to the best of my understanding, which may be faulty, so be aware of that) a "10000 foot understanding" - and hopefully (not likely) I won't drone on and bore you to death.
So we have here three things: something we call "AI", something called "biological thinking" (I'm not exactly sure what you mean by this - but I'll assume for now you mean the process of "thought" humans do with their brains), and "computation".
Of the three, computation is probably the simplest to define: It's what happens when a program - a set of rules - is run and followed. Usually there is some "input" or data that flows into these sets of rules, and at the end of the running of the program or process, there is an output. It is possible that the program may not "halt" (ie, stop); it may loop forever, and output something forever. Or it might hit a condition that does cause it to stop. Regardless, that set of rules being run (whether those rules are those of law, those of mathematics, a recipe to bake a cake, etc) is what causes computation.
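A minimal illustration of that bare-bones sense of computation (Python; the example is mine, not from the article): a fixed rule applied to an input, where it isn't even obvious whether the process always halts.

```python
# A computation in the bare-bones sense: a fixed set of rules applied to an
# input until (maybe) a stopping condition is reached. Whether this halts for
# every input is famously unknown (the Collatz conjecture).
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:                              # rule: stop when we reach 1
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))   # -> 111 steps for input 27
```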
I won't go into it deeply, but what Wolfram believes (whether he is right or not is for others to decide) is that there are simple programs - or sets of rules - throughout nature, and that these programs are executed due to natural processes, and that from these programs running, and thus performing computation, their output is what we see in our world. Furthermore, according to Wolfram, the computation being done is equivalent throughout nature; whether it is done on the substrate of silicon (ie, a computer), or the substrate of physics in a stream computing eddies and whirlpools...
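To make "simple programs" concrete, the canonical example from Wolfram's own work is an elementary cellular automaton such as Rule 30. Here's a minimal sketch in plain Python (no Wolfram tooling assumed): a three-cell rule, applied over and over, producing a surprisingly complex pattern.

```python
# Elementary cellular automaton, Rule 30: each cell's next state depends only
# on itself and its two neighbors, yet the output looks strikingly complex.
RULE = 30

def step(cells):
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2)      # left neighbor
                  | (cells[i] << 1)              # the cell itself
                  | cells[(i + 1) % n])) & 1     # right neighbor
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1                                    # single black cell to start
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```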
Ok - enough of that; let's move on. Biological Thinking...
As I noted, I think you are simply referring to the process by which humans (and perhaps other creatures) use their brains to take in information via their senses, do some "processing" on it, and then output something (perhaps speech, or moving around, etc). Or maybe output nothing at all; just "thinking" or "ruminating" about something.
Do we understand how this works? No. We are studying it, and making fits and starts of progress, but it's been very slow going. We don't know how to define certain things, or even if they are definable, or if they are simply fictions of the brain; some things are - for instance, there have been studies showing that you make a decision to do something before you are consciously aware of making the decision, and your brain makes up a story after the fact to explain it. There are many, many other weird things like this that our brains do. We don't know why. We also don't know how they play into or alter our perception of "free will" - or if that is even a real thing.
One thing we do know - to a certain level - is how brains actually "work". I won't go into details - but yes, neurons, action potentials, spiking, etc. We think certain other chemical things may be at play, among other theories and ideas - we're not completely sure it's all "electrical" (and actually at the synapse level, it's chemical - well, ion exchange and such - which is kinda electrical, but kinda not, too). But we know electricity plays a large role in how our brains work, how neurons work, and how they communicate with each other.
We also know they work together in networks, seemingly jumbled and disorganized at the cellular level, but not completely, and less so at higher levels.
So in short, we believe that "biological thinking" occurs due to networks of neurons communicating using mostly electrical activity, coupled with various chemical and other activity.
It would be akin to looking at a factory and only being able to understand how the individual parts of each machine work together, and maybe certain subsystems of those machines, while still having no understanding of how the factory produces what it does - only a belief that it must, in large part, be because of those individual parts working together.
Forests and trees, as you can certainly tell...
...part 2 follows...
So lastly, AI. Something to know off hand is that AI encompasses many things historically, but if we are talking about today, then AI is basically focused on two areas:
* Machine Learning
* Neural Networks (Deep Learning)
Machine Learning can be thought of more as "applied statistics" - although that is a gross simplification. But it is somewhat accurate, in that statistics has various known methods and algorithms that can be "trained" using a set of data, and once that training is complete, these algorithms can decide on an output based on completely new data fed into them. Most of these algorithms can only be fed a few data points, and their output is either a "continuous" value (if you will) - usually a floating point number representing some needed output (e.g., if the method were trained on two data points of current temperature and current humidity, it might output a value representing the speed of a fan) - or one of a small number of "categories" indicating that the input means particular things (taking our earlier example, maybe instead of fan speed it would output whether to turn a fan on or off, or whether to open or close a window).
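A toy version of that temperature/humidity example, written from scratch in Python so no particular ML library is assumed; the data and the decision rule are made up purely to show the train-then-predict workflow of classical ML.

```python
import math, random

# Toy "classical ML" sketch: learn whether to turn a fan on (1) or off (0)
# from two inputs, temperature and humidity, with logistic regression
# trained by stochastic gradient descent. Data and labels are fabricated.
random.seed(0)

def features(t, h):
    return (t - 25.0, h - 50.0)          # center the inputs so the bias stays small

data = [(features(t, h), 1 if (t - 25) + 0.5 * (h - 50) > 0 else 0)
        for t in range(15, 36) for h in range(20, 81, 5)]

w = [0.0, 0.0]
b = 0.0
lr = 0.01

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))    # probability the fan should be on

for _ in range(5000):
    x, y = random.choice(data)
    err = predict(x) - y                  # gradient of the logistic loss
    w[0] -= lr * err * x[0]
    w[1] -= lr * err * x[1]
    b    -= lr * err

print(predict(features(32, 70)))          # hot and humid  -> near 1 (fan on)
print(predict(features(18, 30)))          # cool and dry   -> near 0 (fan off)
```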
Neural networks, on the other hand, work closer to how "biological thinking" (well, more like the neurons and the networks they form) actually works. Basically, you have a bunch of different nodes, arranged in layers; think of a simple three-layer network...
The first layer would be considered the "input layer" - it could consist of a few nodes, up to hundreds of thousands or more, depending on what is being input into the network. The middle layer is usually much, much bigger than the input layer. It is also termed "hidden", in that it isn't directly interacted with. Each individual "input node" in the first layer is connected to every individual neuron in the hidden layer: input node 0 is connected to hidden-layer neurons 0...n, input node 1 is connected to hidden-layer neurons 0...n, and in general input node i is connected to hidden-layer neurons 0...n.
As you can see, it's a very tangled, but organized web that forms between those first two layers.
The third layer is similar, except it is formed of a scant few neurons; it's known as the output layer (a brief note here - it is possible to have more than one hidden layer of neurons, but mathematically, from what I understand, multiple layers without a nonlinearity between them are no different than one large single middle layer - that's just my understanding).
The output layer could have only a single neuron; its value could again be that "continuous function" I mentioned earlier. Or it could be multiple neurons, each representing a "classification" of some sort; thus, the layer could have anything from one neuron to potentially hundreds, depending on what you are training the neural network for.
Let's say you're training a network to take a simplified image of a road and transform it into an output to be fed into the steering system of a vehicle. Your input layer might consist of, say, 10000 nodes (for a 100 x 100 b/w pixel image). Your middle layer might consist of 10x that number of neurons, and your output layer could consist of a single neuron (outputting a continuous value representing the steering wheel angle) or a set of discrete neurons representing a class of steering wheel positions (hard left, soft left, left, straight ahead, right, soft right, hard right).
You'd train this network on a variety of images, and it would (hopefully) output the correct answer for driving the vehicle around, based on the images and what was done in response. Essentially, the training data would consist of individual black and white images of a roadway, paired with what should happen at that point (keep going straight, or turn in some manner). If, during training, the network does the wrong thing, an error amount is calculated, and that is used to update numbers within the neurons that make up each layer (output and hidden - there are no neurons in the first layer), to make their calculations more accurate the next time. This process is called "back-propagation".
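Here's a stripped-down sketch of that training loop in NumPy - tiny made-up dimensions and fabricated data instead of 100 x 100 road images, but the forward pass, error calculation, and back-propagation updates are the same mechanics described above.

```python
import numpy as np

# Minimal 3-layer network (input -> hidden -> output) trained with
# back-propagation. Stand-in for the road-image example: 16 "pixels" in,
# one steering value out. The data is random, purely to show the mechanics.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 16, 64, 1
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out))
b2 = np.zeros(n_out)

X = rng.random((200, n_in))              # 200 fake "images"
y = X.mean(axis=1, keepdims=True)        # fake "steering angle" target

lr = 0.1
for epoch in range(500):
    # forward pass
    h = np.tanh(X @ W1 + b1)             # hidden layer activations
    out = h @ W2 + b2                    # linear output (the steering value)

    # error and back-propagation
    err = out - y                        # how wrong we were
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # chain rule back through tanh
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)

    # update the "numbers within the neurons"
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print(float(((out - y) ** 2).mean()))    # mean squared error after training
```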
Interestingly, at a very base level, the computations being performed by a neural network aren't much different from those of "classical machine learning" algorithms, but because their inner workings are extremely complex (and for the most part fairly opaque to study), they allow for things the classical algorithms couldn't touch, namely the ability to feed in very large numbers of inputs and get back very large numbers of outputs.
Given enough "labeled" training examples, this process works extremely well, but for it to be useful across a variety of tasks, such networks need to be composed of hundreds of thousands to millions of neurons, each connected to each layer above and below, and it takes an immense amount of computational hardware (in the form of GPUs, usually) and power to do so. It also needs a boatload of training examples. All of these needs are why neural networks, despite being played with in various ways for well over 60 years, didn't really take off with promising and useful results until very recently, when the amount of data to be had, and the computational ability to process that vast amount of data, became available (again, GPUs). Basically a perfect storm. This is all known as "deep learning".
Now, I'm going to leave you with something to ponder about:
We are doing something wrong. For one thing, as far as anyone has been able to determine, the idea and workings of "back-propagation" have no biological analog. Back-propagation is something that only happens in the realm of these artificial neural networks, and does not occur (as far as we know) in a natural neural network.
It is also very, very computationally intense - ie, it sucks a lot of power. We haven't even scratched the surface of building a very large scale artificial neural network, and already what we currently have takes a ton of power, well beyond the very meager power consumption of a single human brain.
It should be noted, if it wasn't apparent earlier, that the model of the neurons used in an ANN (artificial neural network) is grossly simplified compared to the actual workings of real neurons. There are studies and systems out there that seek to implement and study ANNs using models which more closely represent how actual neurons work (ie - spike trains, things that happen at the synapse level, etc) - but they take even greater amounts of power to run, and can't be used for much more than research.
You can take all this and still be "skeptical of AI", but that would also dismiss the vast strides we have made in the last decade or two in the AI/ML field. I also hope I've been able to show, or at least allude to, where and how there is a correlation between AI (well, the ANN part of it) and "biological thinking".
I hope this helps in some manner to explain things and to lessen the confusion as to what everything is and what it means...
I also think the word computation is used a bit too grandiosely by Wolfram. Which is evidenced in the writing here.
I do admire Wolfram for even advertising his CEOing meetings, though. He goes into detail to a level you would not necessarily expect from a CEO. Credit where credit is due; he is not shy. Many CEOs would not bother.
I mean, technically, everything is a computation, but we reserve that term for genuinely complex things, not everything. To use it willy-nilly deflates the word's impact.
Maybe there is something I am missing, though?
there are people who write code until it works, and people who rewrite until it doesn't. room enough for both.
These sessions are vastly interesting. I think many commenters here should listen-in sometime.
The Wolfram Language is hard because mapping knowledge is hard.
What many people here call "ego" is one guy, and his talented team, tackling an enormous problem nobody has licked. So far.
The worst part is that he misses the detail that it is possible to describe all of math as computation (so they are perfectly equivalent), and instead creates entire books with keen observations about how much of math one can describe with computers.
computer - 19x
computation* - 96x
computational - 63x
Make everything computational!
computational intelligence - 2x
computational contracts - 7x
computational universe - 7x
computational language - 18x
computational thinking - 2x
computational fact - 1x
computational acts - 1x
computational equivalence - 8x
computational irreducibility - 6x
computational system - 1x
computational process - 1x
computational work - 1x
computational essays - 2x
computational law - 1x
Throwing all these words around may sound smart, but they lose whatever meaning or relevance they were supposed to deliver when overused in such a larger-than-life manner. </rant>
At this point Wolfram is lost to us. "What the hell are you talking about, Steve?" I was just fantasizing about resurrecting Richard Feynman and having him ask Wolfram this question, but it turns out I don't have to:
In a letter from Richard Feynman to Stephen Wolfram:
> You don’t understand "ordinary people." To you they are "stupid fools" - so you will not tolerate them or treat their foibles with tolerance or patience - but will drive yourself wild (or they will drive you wild) trying to deal with them in an effective way.
> Find a way to do your research with as little contact with non-technical people as possible, with one exception, fall madly in love! That is my advice, my friend.
But he returns saying:
"I went to Munich. I hired a bunch of guys. I told them exactly what I wanted them to do... and the problem was... they did it. No pushback from Roger. None of your rewrites. None of his funny looks. I need you. And you need me."
For another analogy, just because you frame a house with one material, e.g. wood, doesn't mean that it's structurally unsound.
You've added nothing of value.
Wolfram is brilliant, but I'd rather read a book from him that showed him solving all sorts of neat problems.
:/
If, however, he's talking about general computational systems vis a vis creating programs to run on today's microprocessors, he has obviously drunk too much of his own kool-aid.
I mean, Python, MATLAB, Julia, Octave, Sage, and Maple would all fit that definition, I think. I do think Mathematica's CAS is the best in the business, but not the only player for sure.
I'm sure Stephen has something in mind that sets the Wolfram Language apart, just not sure what that is.