I would say we have a demonstrated ability to see the big picture, and a pretty good track record of making it work.
An alternative explanation, offered for the sake of argument:
We have a terrible ability to see the big picture, but have come up with some ingenious constructions where the small picture of each component in the system is correctly calibrated so that the big-picture outcome is successful. As you yourself pointed out, the supply chains are so complex that nobody involved understands all of them.
Now, how would we go about determining which of these interpretations is correct?
A thought experiment goes like this: suppose the big picture requires that some actors in the system do not receive satisfactory treatment in their local context, and that the only benefits those actors receive will be indirect, accruing to other actors in the system, but not to adjacent ones. Will those actors still agree to participate or not?
Hence I said:
> without a lot of time
An AI spending a lot of time doing effectively the same thing humans have been doing (read: propagating immense amounts of suffering) is not really something I'd want to see repeated. It seems rather obvious that these conclusions are very difficult and slow to arrive at, at any proper scale, so no AI will have them by default. They'll be aggressive by default, just like your average animal in evolution. The fact that, given their own millions of years (sped up), they may eventually arrive at the rudimentary level of cooperation that humans possess does not instill a lot of hope in me.
> I would say we have a demonstrated ability of seeing the big picture, and a pretty good track record of making it work.
I'm talking about this: http://slatestarcodex.com/2014/07/30/meditations-on-moloch/
A good example that will be hard to ignore is the coming climate change, caused by humans catastrophically failing to see the big picture and focusing on smaller gains within their sub-groups. It really doesn't have much to do with complexity, but it has everything to do with the very same behavior you're seeing the AI execute here.
Why?
Frankly, I don't feel it's productive or rational to attach the name of a biblical villain to new technology.
That reply, while informative, continues to lean on the loaded religious terminology. It might find a better reception here if couched differently.
> Frankly i don't feel it's productive or rational to attach the name of a biblical villain to new technology.
Well, frankly, I disagree. Humans have an inherent blind spot when it comes to complex systemic forces. We tend to imagine them as weak and irrelevant. Reframing them as villains seems to be necessary to understand their power and reach.
And, by the way, I had not considered the Moloch article as a direct re-framing of a problem until you put it that way. I must say, thinking about it in that light, I find humanizing `complex systemic forces` a rather novel transformation, and quite useful. Even having read the article a few times, I hadn't thought to describe it as such. But morphing a problem from a fairly inscrutable set of phenomena into a villain allows us to use a different set of mental tools to tackle understanding it.
Typically I had thought more restrictively about such transformations: for example, viewing a sound's waveform graphically can be illuminating in a certain sense (transforming the audio-temporal into the visual-spatial). The biggest issue with the to-Moloch transform is that the conversion process is obviously going to be significantly noisier and give the author copious amounts of wiggle room to steer the reader toward their own conclusions. But just expressing the facets of the problem and making its existence more widely known has a lot of value. Anyhow, thanks for helping me see an article I have gotten quite a bit of insight out of in another way.