In the early days of There, Inc., I was recruited by them, and was in on some early design discussions. They wanted to build a planet-sized 3D virtual world with physical simulation, with all the scaling problems that implies. My comment was "Now let us all join hands around the world", which produced groans. (Doing physical simulation across a cell boundary is hard enough, but if the users can force you to have to do it across N cells, you have a real problem.)
For that system (and for Second Life) the assumption was that all the servers were in the same data center, with low inter-server lag. Trying to do this with home servers and lag in the 100ms and up range is going to suck. However, a distributed system where all sites are hosted in data centers on the same continent with low-latency connections might work.
Now they have to deal with the security problems, the delay problems, etc. They also need a really big MMORPG to exercise the system. Something like a planet-sized version of Minecraft would be a good test. The rules are simple and users create the content. A tougher test would be to create a simple, but huge, driving game, using OpenStreetMap, "seamless.gov", and topo data. Then load it up with automated traffic and let people drive in it.
The problem is, how the fuck do you make it fun?
Not only can I already visit Times Square in real life, I choose not to. Ditto for the more branded, corporatized, ad-infested portions of the internet (i.e., the "downtown internet" approved by official, polite society).
Many sci-fi novels (insofar as Snow Crash can even be called "fiction" rather than "prediction" these days :-() are written to poke fun at wacky technologists and our just-somewhat-off-target ideas, you know.
Now, to end on a zog note rather than zig or zag, here's an idea: someone figure out the massive-game or VR equivalent of 4chan. There's something people will actually want to use. (For all the official disapproval from all sectors, 4chan and 2chan are massively popular and influential. They are offering something users want: anonymity, obscurity, simplicity, and freedom!) EDIT: http://kazerad.tumblr.com/post/96020280368/faceless-together
But I think one reason 3D hasn't done much is that we're looking at it through a little 2D window. When VR really hits the market I think 3D worlds will be a lot more compelling.
If it's about simulation/training for doctors/firefighters/etc then I can see the uses because you're wanting to simulate a complex situation in the real world. But what purely virtual tasks will be better accomplished with VR than without? Is it more simulation of reality or something totally new? My guess is for whatever is new it mostly will be about navigating something inherently 3D such as DNA. If that's true, then do we really need a decentralized metaverse to populate it, or can whatever random DNA VR software be enough?
This seems to be an attempt to build a... protocol? engine? both? for moving programmable entities between volumes of space that may be controlled by different hosts.
The "operating system" / Metaverse verbiage is a little grandiose, but I really like the idea. Or maybe I just miss Adobe Atmosphere :)
The major problems faced in doing what they propose are fundamental issues with computer networking and real-world limitations in the latency of sending lots of state across the wires.
And internet infrastructure (in the US, anyway) isn't significantly better today than it was in years past. Arguably it has gotten worse in a lot of ways when it comes to latency, particularly on the last hop to the user/player over consumer ISPs.
(No affiliation other than having tried it a couple months ago and written a little bit of code for it.)
I strongly recommend deleting the requirement about seamless visualization across servers. Of all the items on the list, that strikes me as adding the most difficulty for the least value.
And no, I haven't read Snow Crash. I shouldn't have to read a novel to get excited about a project...
Everyone should read Snow Crash.
It's an astonishing novel, given the time it was written. It's really a very good novel. No idea about the project, but the name generates a lot of hype for sure.
This technology is pretty impressive, and I get the marketing speak that will appeal to certain folks, but I'd rather call it what it is --- an impressive distributed rendering engine.
IDK, maybe I'm thinking about this all wrong.
William Gibson's "Matrix" from Neuromancer was more of a visualization of data as it flew about a virtual world, doing whatever it is data does when you watch it. Certain servers are protected by layers of "ice," which you have to "drill" through in order to gain access, etc.
There are a few working groups that appear to also still be working on standards for the Metaverse: https://en.wikipedia.org/wiki/Metaverse
Their Facebook page header has what looks like some F# on it.
Maybe the protocol doesn't have to be the most advanced open-source 3D networked game engine out there.
Maybe we can start by building off of Named Data Networking (NDN) http://named-data.net/doc/NDN-TLV/current/
Would be nice to have everything including content represented in Manchester OWL syntax but that's probably not realistic.
So perhaps use the NDN protocol with its TLV encoding, name format/URI scheme, and name ordering. Use Interest packets for subscribing to rooms in the metaverse, then exchange NDN Data packets with updates to object positions, etc.
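To make the NDN idea concrete, here's a minimal sketch of NDN-TLV encoding for an Interest in a metaverse room. The TLV type codes (Interest=0x05, Name=0x07, GenericNameComponent=0x08) and the variable-length number encoding come from the NDN packet format spec linked above; the room name is the hypothetical one used later in this comment, and a real Interest would also carry a Nonce and other fields that are omitted here.

```python
# Minimal NDN-TLV encoding sketch. Types per the NDN packet format spec:
# Interest=0x05, Name=0x07, GenericNameComponent=0x08.

def encode_varnum(n: int) -> bytes:
    """NDN variable-length number: 1, 3, 5, or 9 bytes."""
    if n < 253:
        return bytes([n])
    if n <= 0xFFFF:
        return b"\xfd" + n.to_bytes(2, "big")
    if n <= 0xFFFFFFFF:
        return b"\xfe" + n.to_bytes(4, "big")
    return b"\xff" + n.to_bytes(8, "big")

def tlv(type_: int, value: bytes) -> bytes:
    """One Type-Length-Value element."""
    return encode_varnum(type_) + encode_varnum(len(value)) + value

def encode_name(uri: str) -> bytes:
    """Encode 'a/b/c' as a Name TLV of GenericNameComponents."""
    comps = b"".join(tlv(0x08, c.encode()) for c in uri.split("/") if c)
    return tlv(0x07, comps)

def encode_interest(uri: str) -> bytes:
    """Interest packet expressing interest in (subscribing to) a name.
    Simplified: a real Interest also needs a Nonce, etc."""
    return tlv(0x05, encode_name(uri))

pkt = encode_interest("name/smith/john50/homeworld")
```

A Data packet answering this Interest would carry the YAML world description below as its Content TLV.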
Maybe for the data contents use YAML, for a change from XML.
name/smith/john50/homeworld

base: org/sim
libs: [dim/1, objs/1, phys/1]
dimension: real
- terrain:
    heightmap: maps/hills.png
- objects:
    - house:
        windows: 5
        type: simple
        bedrooms: 3
    - avatar:
        model: models/man.dae
        name: name/smith/john50/me
        pose: walk 2
        position: [50, 50, 2]
        velocity: [0.5, 1.0, 0.1]
        owner: name/smith/john50
    - portal:
        world: name/wallace/liz10/homeworld
        type: simplecircle
        position: [123, 435, 123]
        dimension: real
    - webview:
        name: John's Smart TV
        url: http://en.wikipedia.org/wiki/Metaverse
org/sim/objs/1

base: org/sim
dimension: real
- procedures:
    room:
      - for: [1, walls]
      - makegeom: [cube, 1.0, 0.5, 1.0, 0.0, 0.0, 0.0]
    house:
      - for: [b, 1, bedrooms]
      - room:
          walls: 4
      - for: [b, 1, windows]
      - cutgeom: [quad, 0.25, 0.25, 0.25, b, 0, 0]
name/smith/john50/update102

timestamp: xxxxxxxxxxxx
which: name/smith/john50/me
update:
  operation: impulse
  vector: [0.25, 1.0, 0.0]
  startstate:
    velocity: [0.5, 0.0, 0.0]
The idea with the "procedures:" thing is to take advantage of YAML's flexibility to create a language-independent representation of an algorithm for doing simple procedural generation. It would be designed in such a way as to make it easy to generate code from the data tree in different languages like C, C++, Nim(rod), Java, Python, whatever. That would be compiled on the fly and loaded as a sort of plugin. Then, rather than doing simple state updates, it uses operational transformations, like saying "at time A force B was applied to this object, and its velocity before that was C".
So my idea combines: an emerging standard (NDN) for efficiently distributing information and updates about the metaverse; an easy-to-read YAML format for describing the details; portals linking between worlds (you gotta have portals, they are one of the coolest things about the metaverse); embedded webviews (too much awesome stuff on the web to leave that out of the metaverse browser); a YAML format for encoding algorithms that can be easily translated to common languages and compiled on the fly; and operational transformations for informing other clients about operations on the world state.
It allows (theoretically) for us to be sharing the same space, but I see a modern office while you see a Japanese dojo. Effectively, Spatial Style Sheets, right?
The big problem (one we still run into everywhere else) is that people are going to want to be guaranteed that the dragon dildo they proudly display in their 'verse is seen the same by every visitor, and that's only accomplished with low-level data such as triangle meshes, UVs, and images.
Still, a very intriguing thought.
base: org/sim
dimension: real
procedures:
  room:
    - for:
        range: [w, 1, walls]
        block:
          - makegeom: [cube, 1.0, 0.5, 1.0, 0.0, 0.0, 0.0]
  house:
    - for:
        range: [b, 1, bedrooms]
        block:
          - room:
              walls: 4
    - for:
        range: [b, 1, windows]
        block:
          - cutgeom: [quad, 0.25, 0.25, 0.25, b, 0, 0]

It does look quite cool though.
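One nice property of this for/range/block shape is that it's trivial to walk: once a YAML parser has turned it into lists and dicts, a few lines can either interpret it directly or emit code from it. Here's a toy interpreter under those assumptions; makegeom/cutgeom are the hypothetical primitives from the example and just get collected into an op list so the control flow is visible ("walls" is bound to 4 inline, since parameter passing isn't specified).

```python
# Toy interpreter for the hypothetical for/range/block procedure format
# above, operating on the parsed YAML (plain lists/dicts). Unknown ops
# like makegeom are treated as primitives and appended to an op list.

def run_block(block, env, ops):
    for step in block:
        (op, arg), = step.items()        # each step is a one-key dict
        if op == "for":
            var, lo, hi = arg["range"]
            for i in range(lo, hi + 1):  # inclusive range, as in the YAML
                env[var] = i
                run_block(arg["block"], env, ops)
        else:
            # Substitute loop variables into positional argument lists.
            if isinstance(arg, list):
                arg = [env.get(a, a) for a in arg]
            ops.append((op, arg))

# The "room" procedure from the example, with walls bound to 4 inline.
room = [{"for": {
    "range": ["w", 1, 4],
    "block": [{"makegeom": ["cube", 1.0, 0.5, 1.0, 0.0, 0.0, 0.0]}]}}]

ops = []
run_block(room, {}, ops)
# ops now holds four makegeom calls, one per wall
```

Translating the same tree to C or Nim source instead of interpreting it is the same walk, just emitting text at each node.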
Why!!!???