I just close my eyes for a minute and think (or try to) about what it would be like for the people who are finally able to reach, say, Vega (I know it's not the closest). Sure, this is not a big deal in sci-fi, but in reality it's pretty mind-blowing. This is 100% why I seriously want to live for a few hundred years: to have the opportunity to see the first time we actually go to the nearest star.
In the meantime, I guess this will have to suffice.
I also love this image, which isn't interactive like this, but is still mind-blowing: http://en.wikipedia.org/wiki/File:Earths_Location_in_the_Uni...
Same.
If you don't already own a pair, I'd recommend getting a basic pair of binoculars and doing some backyard astronomy. You'd be amazed how much more you can see with even a basic 10x50 pair, even in thoroughly light-polluted skies.
Also SpaceRip [1] collects hundreds of interesting, easily digestible and pretty timely videos.
If you've got the spare cash, get image stabilized ones. I could clearly see the moons around Jupiter with my Canon 12x36 IS binos the other night despite my hand tremors. The real party trick is handing them to a friend and telling them to look at the moon. Blows them away every time - to most people it's just a yellowish glowing thing in the sky, rather than a scarred rocky globe.
For starters, there's this assertion: "The far-fetched version is to use black holes as power sources [1] as this is, as far as I've read anyway, the only remotely viable method of providing propulsion without reaction mass to speak of and reaction mass is the death of any form of interstellar propulsion."
Not true. We can definitely build something with today's technology that allows for propulsion without reaction mass: light-sails pushed by lasers[1]. I can address some of his other points but it's not necessary. If you crunch the numbers, it should be doable to travel to another star in about 150 years.
[1] See Humble's canonical text on space propulsion design: http://www.amazon.com/Propulsion-Analysis-Design-Ronald-Humb...
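As a rough sanity check on that 150-year figure (my own back-of-the-envelope numbers, not anything from the book above): Alpha Centauri is about 4.37 light years away, so if a laser-pushed sail can be brought up to a few percent of c, the trip takes on the order of a century and a half:

    # Hedged sketch: crude travel-time estimate for a laser-pushed light sail.
    # The 3% of c cruise speed is an assumption, not a figure from the cited book.
    distance_ly = 4.37            # Alpha Centauri, the nearest star system
    cruise_fraction_of_c = 0.03   # assumed sustained cruise speed

    travel_years = distance_ly / cruise_fraction_of_c   # ly / (fraction of c) = years
    print(round(travel_years))    # ~146 years, ignoring acceleration and deceleration

Whether a laser array can actually push a useful payload to that speed is the hard part, of course.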
But it seems a lot more likely that we'll just send some strong AI robots to send us back the data until then.
EDIT: I suggest checking out Space Engine. It seems to cover more than one galaxy, and being able to change the viewing/moving speed feels like being in a Star Trek starship. Going through space like that feels a bit unsettling.
For those people unlucky enough to not be able to load this app (it took me quite a while), here is a particularly fantastic image I took (without asking or any right to, of course) - http://shanearmstrong.co.uk/content/cdn/the_beauty_of_the_co... - I apologize for any slow load times.
Being a video it is not interactive, but definitely does strike something in me. It's almost the Total Perspective Vortex.
This really puts into perspective the brevity of human life and how little we have achieved so far, from leaving the primordial soup, to firing Glee across our television network to entertain teenage girls, to travelling to and from the moon.
We are irrelevantly small and unimportant, and yet, as far as we know for sure, we have already done the hardest thing: of the 8.5 million species on this planet, we are the only constructively intelligent one present. There are 400 known satellites in our solar system; assume that every solar system in the Milky Way, spread across roughly 300 billion stars, has a similar number of planetoids on which some minor form of life could have grown. I'm going to take a complete guess that only 1 in 10,000 of those contain a similar amount of life, which could be light years off, or could be spot on, or could even be far, far less than the actual number - we simply don't know yet.
The maths is breathtakingly overwhelming:
(1 / ((8500000 * 400) * 300000000000)) * 10000 = 0.0000000000000000098
Our significance in the Milky Way is 0.0000000000000000098.
We account for only 0.00000000000000098% of potential life in this galaxy.
But we survived. We made it this far. From here the only way is up, or down, or left or under (depending on the location of the camera when we finally make it far enough off this rock to consider it interstellar travel.)
Disclaimer: This maths was about the best I could do at 5.30am and is the product of a Google search of the accumulated human knowledge of the last few thousand years.
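For what it's worth, here is the same 5.30am sum spelled out (the inputs are the guesses from the comment above, not measured values; only the arithmetic is checked):

    # The commenter's estimates, reproduced as-is; none of these are measured values.
    species_on_earth = 8500000
    known_satellites = 400
    stars_in_milky_way = 300000000000
    guess_factor = 10000                 # the "1 in 10,000" guess

    potential_life_slots = species_on_earth * known_satellites * stars_in_milky_way
    significance = (1 / potential_life_slots) * guess_factor

    print(significance)          # ~9.8e-18
    print(significance * 100)    # ~9.8e-16, i.e. 0.00000000000000098 %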
They could at least try to put Alpha Cassiopeiae and Beta Cassiopeiae in the same "general" direction from the Sun. It would fool more people.
Distances seem to be correct but the coordinates aren't true at all.
Regardless, it looks beautiful. But it would be more beautiful if you could actually see their true locations. Without that, it is just a game UI demo.
Edit: and my apologies if I have wasted your afternoon.
Could someone explain how this is built or give an overview of how it works? In the 'about' page http://www.chromeexperiments.com/detail/100000-stars/ it says WebGL and CSS3D, but I'm wondering how they fit together and what does what.
Is there a better way to view the source than just 'view source' in chrome?
I know a number of programming languages and I'd like to learn more about how this project works. [Saw the link to a book on graphics programming in other comments below, http://www.arcsynthesis.org/gltut/index.html, but how does one "take apart and study" this project?] Kudos to anyone who can point me in the right direction. Thanks!
Here is another one showing an animation of asteroids discovered in our solar system from 1980 to 2011. It starts off pretty tame, and by the end gets scary! https://www.youtube.com/watch?v=ONUSP23cmAE
Firefox on the same machine works flawlessly.
(Shameless plug: I used both to implement the Common Lisp sky rendering engine for my startup, http://greaterskies.com, which makes pretty personalized posters out of thousands of stars.)
The page is so beautiful! Until now I've never felt the need to say this: wish I could upvote it more :-)
I would really love to see a search box that would allow me to jump to a specific star.
67:3 [And] who created seven heavens in layers. You do not see in the creation of the Most Merciful any inconsistency. So return [your] vision [to the sky]; do you see any breaks?
67:4 Then return [your] vision twice again. [Your] vision will return to you humbled while it is fatigued.
edit: loaded it in chrome instead, even better (should have been obvious given it's located on chromeexperiments.com)
http://www.arcsynthesis.org/gltut/index.html will give you a good general background, though.
To rotate a point cloud, you multiply each point by a rotation matrix to get the rotated point. A rotation matrix that rotates around the X-axis looks like
[[1  0  0]
 [0  c  s]
 [0 -s  c]]
where s and c are the sin and cos of the angle you want to rotate. Then you can do an orthographic projection by just dropping the Z coordinate, leaving just X and Y coordinates (which you may need to scale to your screen), or a perspective projection by dividing X and Y by Z. (Be wary of division by zero.)

The usual approach is to maintain the original points unrotated and make a rotated copy of them for every frame, instead of overwriting them with a rotated version every frame, so that numerical errors don't accumulate and you can get away with single-precision floating-point. Also, conventionally, positive Z coordinates are in front of the camera and negative Z coordinates are behind it.
If the above isn't sufficiently clear, there's some code I wrote to generate an ASCII-art animation of a perspective-projected point cloud (the corners of a cube) at http://lists.canonical.org/pipermail/kragen-hacks/2012-April.... It's 15 lines of code, and the only things it depends on are Python's functions to sleep for a fraction of a second, output stuff to stdout, and round to integer.
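Since that link is truncated here, below is a minimal sketch of the same idea (not that code, just an illustration): keep the original points, rotate a copy each frame with the matrix above, then perspective-project by dividing by Z. The screen size, scale, and viewer distance are arbitrary choices.

    import math, time

    # Corners of a cube; we keep these unrotated and rotate a fresh copy each frame,
    # so numerical errors never accumulate in the stored points.
    points = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

    def rotate_x(p, angle):
        # Same matrix as above: [[1 0 0] [0 c s] [0 -s c]]
        c, s = math.cos(angle), math.sin(angle)
        x, y, z = p
        return (x, y * c + z * s, -y * s + z * c)

    def project(p, viewer_distance=4.0, scale=12.0):
        # Perspective projection: divide X and Y by Z (kept positive, so no /0).
        x, y, z = p
        z += viewer_distance
        return int(x / z * scale) + 20, int(y / z * scale) + 10

    for frame in range(100):
        screen = [[' '] * 40 for _ in range(20)]
        for p in points:
            col, row = project(rotate_x(p, frame * 0.1))
            if 0 <= row < 20 and 0 <= col < 40:
                screen[row][col] = '*'
        print('\n'.join(''.join(row) for row in screen), '\n')
        time.sleep(0.05)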
EXTRAS:
DISTANCE: For things that aren't points, you might be interested in how far away they are from the camera, too, like to scale them or figure out which ones are in front. That's the Z-coordinate after you rotate into camera space.
TRANSFORM COMPOSITION: If you want to rotate around two axes, it's probably better to multiply the two rotation matrices together, then multiply each point by the resulting transformation matrix, rather than doing two matrix multiplies for each point. You can also scale camera space to screen coordinates this way.
TRANSLATION: If you want to move the camera, you probably want to translate your points so the camera is at the origin before rotating them. If you represent your transformations as 4x4 matrices, with a possibly implicit fourth element in each point vector that is 1, you can represent translation in your transformation matrices too.
MULTIPLE SEPARATELY MOVING OBJECTS: A point cloud is a single rigid object. But whether you're drawing point clouds or something more complicated, it's often interesting to be able to move multiple objects separately. The usual way is to go from two coordinate systems, camera and world, to N: camera, world, and one for each object. Each object has a transformation matrix that maps its object space into world space. You move the object by changing its transformation matrix.
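A sketch of the composition, translation, and per-object ideas using 4x4 homogeneous matrices (NumPy only for the matrix multiplies; the object layout and camera position are made-up examples):

    import numpy as np

    def rotation_x(angle):
        # Homogeneous version of the rotation matrix above.
        c, s = np.cos(angle), np.sin(angle)
        return np.array([[1, 0, 0, 0],
                         [0, c, s, 0],
                         [0, -s, c, 0],
                         [0, 0, 0, 1]], dtype=float)

    def translation(tx, ty, tz):
        m = np.eye(4)
        m[:3, 3] = [tx, ty, tz]
        return m

    # One matrix per object maps object space into world space (MULTIPLE OBJECTS);
    # moving an object just means changing its matrix.
    object_to_world = translation(5, 0, 0) @ rotation_x(0.3)

    # The camera sits at world (0, 0, -10), so world points get translated by +10 in Z.
    world_to_camera = translation(0, 0, 10)

    # TRANSFORM COMPOSITION: multiply the matrices once, then reuse for every point.
    object_to_camera = world_to_camera @ object_to_world

    point = np.array([1, 2, 3, 1])         # the trailing 1 is what enables translation
    x, y, z, _ = object_to_camera @ point
    print(x, y, z)                          # z is the DISTANCE from the camera plane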
POLYGONS: If you're drawing polygons, straight lines are still straight lines when you rotate them, and in either perspective or orthographic projections, so you can just rotate and project the corners of the polygons into your canvas space, and then connect them with 2-D straight lines (or fill the resulting 2-D triangle).
FLAT SHADING: The color resulting from ordinary illumination ("diffuse reflection") is the underlying color of the polygon, multiplied by the cosine of the angle between the normal (perpendicular) to the surface and the direction of illumination; it's easiest to compute that cosine by taking a dot-product between two unit vectors, and to compute the normal by normalizing a cross-product between two of the sides. If you have more than one lighting source, add together the colors generated by each lighting source. You probably want to treat negative cosines as zero, or you'll get negative lighting when faces are illuminated from behind.
BACKFACE REMOVAL: if you're drawing a single convex object made of polygons, you can do correct hidden surface removal just by not drawing polygons whose normal points away from the camera (has a positive Z component). This is a useful optimization even if your object is more complicated, because it halves the load on the heavier-weight algorithms below.
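A toy version of the normal, diffuse-lighting, and backface steps above, with hand-rolled vector helpers (the triangle, light direction, and base color are arbitrary examples):

    import math

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def normalize(v):
        n = math.sqrt(dot(v, v))
        return tuple(x / n for x in v)

    # A triangle in camera space; +Z is in front of the camera, as above.
    tri = [(0.0, 0.0, 5.0), (0.0, 1.0, 5.0), (1.0, 0.0, 5.0)]
    normal = normalize(cross(sub(tri[1], tri[0]), sub(tri[2], tri[0])))

    if normal[2] > 0:
        print("backface: the normal points away from the camera, skip drawing it")
    else:
        light_dir = normalize((0.3, 0.5, -1.0))        # direction toward the light
        brightness = max(0.0, dot(normal, light_dir))  # clamp negative cosines to zero
        base_color = (200, 60, 60)
        print(tuple(int(c * brightness) for c in base_color))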
HIDDEN SURFACE REMOVAL: If your polygons don't intersect, or only intersect at their edges, you can use the "painter's algorithm" to get correctly displayed hidden surfaces by just drawing them in order from the furthest to the closest; if they do intersect, you can either cut them up so they don't intersect any more, or you can use a "Z buffer" which tells you which object is closest to the camera at each pixel --- as you draw your things, you check the Z buffer to see what's the currently closest Z coordinate at each pixel you're drawing, and if the relevant point on that object has a lower Z coordinate, you update that pixel in both the Z buffer and the canvas.
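The painter's-algorithm half of that is literally just a sort. A sketch, where a polygon is a list of camera-space corners and draw_polygon is whatever 2-D fill you already have:

    def average_depth(polygon):
        # One reasonable depth measure: the mean Z of the polygon's corners.
        return sum(z for (x, y, z) in polygon) / len(polygon)

    def paint(polygons, draw_polygon):
        # Furthest first: with +Z in front of the camera, larger Z means further away,
        # so closer polygons get drawn later and cover the ones behind them.
        for poly in sorted(polygons, key=average_depth, reverse=True):
            draw_polygon(poly)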
SMOOTH SHADING: you can get apparently smooth surfaces out of quite rough polygon grids by storing a separate surface normal at each vertex, and then instead of coloring the whole polygon a single flat color, interpolate. You can either compute the colors at the corners of the polygons and interpolate the colors at each point you draw (Gouraud shading) or you can interpolate the normals and redo the lighting calculation for each point (Phong shading), which gives you dramatically better results if you have specular highlights.
SPECULAR HIGHLIGHTS: The diffuse-illumination calculation explained in "FLAT SHADING" above is sufficient for things that aren't shiny at all. For things that are somewhat shiny, you want "specular highlights", and the usual way to do those is to do the lighting calculation a second time, but instead of directly using the cosine of the angle between the light source direction and the surface normal, you take that cosine to some power (called the "shininess" or "Phong exponent") first. The 5th power is pretty shiny.
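Per point, the lighting described in FLAT SHADING plus the specular term above boils down to something like this (blending the specular term in as white light is my own choice, not the only option):

    def dot(a, b): return sum(x * y for x, y in zip(a, b))

    def shade(normal, light_dir, base_color, shininess=5):
        # Both vectors are assumed to be unit length (e.g. the interpolated normal
        # in Phong shading). Negative cosines are clamped to zero, as above.
        cosine = max(0.0, dot(normal, light_dir))
        diffuse = cosine
        specular = cosine ** shininess       # "pretty shiny" with the 5th power
        return tuple(min(255, int(c * diffuse + 255 * specular)) for c in base_color)

    print(shade((0.0, 0.0, 1.0), (0.0, 0.6, 0.8), (180, 40, 40)))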
FOG: Faraway things fade exponentially. That is, you take the density of the fog (a fraction slightly less than 1) to the power of the Z coordinate of the point on the object, and multiply that by the color of the object.
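In code, with a made-up density value:

    def apply_fog(color, z, fog_density=0.97):
        # Exponential fade with distance: density**Z, with Z the distance from the camera.
        fade = fog_density ** z
        return tuple(int(c * fade) for c in color)

    print(apply_fog((200, 200, 255), 50))   # ~22% of the original brightness at Z = 50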
TEXTURE MAPPING: If you want your surfaces not to be a single solid color, you can use a raster image (called a "texture") to map colors onto the surface. You just figure out where you are on the surface (by doing a matrix multiply from your surface point into "texture space") and figure out which texture pixel ("texel") you're at, or which ones you should interpolate between. (You can also use some other function to generate the color, rather than having an explicitly stored texture. The important thing is that it maps a 3-D point in object space to a color.) This is the start of the whole universe of "shaders", which represents a big part of current 3-D work. Another application of shaders is bump mapping:
BUMP MAPPING: If you're doing Phong shading, you can get apparent texture (in the usual sense: something you could feel if you could touch the object) on your surfaces without having to transform more points by simply perturbing the interpolated surface normals you're using to do your shading calculations. It's helpful if you perturb them in a deterministic way so that the texture moves with the surface.
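A sketch of both ideas in their simplest form: a procedural "texture" is just a function from surface coordinates to a color, and a bump map just perturbs the interpolated normal deterministically (the sine-based wobble here is an arbitrary choice):

    import math

    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)

    def checker_texture(u, v, squares=8):
        # TEXTURE MAPPING: map a point on the surface (u, v in [0, 1]) to a color.
        bright = (int(u * squares) + int(v * squares)) % 2 == 0
        return (230, 230, 230) if bright else (40, 40, 40)

    def bumped_normal(normal, u, v, strength=0.1):
        # BUMP MAPPING: perturb the shading normal as a function of surface position,
        # so the apparent bumps move with the surface.
        nx, ny, nz = normal
        return normalize((nx + strength * math.sin(40 * u),
                          ny + strength * math.sin(40 * v),
                          nz))

    print(checker_texture(0.3, 0.7), bumped_normal((0.0, 0.0, 1.0), 0.3, 0.7))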
Distances are closer in Globular Clusters. Alas, those are "metal poor", so the only planets are gas giants like Jupiter.
Distances are closer in Open Galactic Clusters like the Pleiades. Unfortunately those clusters tend to disperse. By the time a space faring civilization has evolved, the stars are no longer close.
Distances are closer in the Galactic Core. Unfortunately that is a high-radiation environment due to Sagittarius A* (the supermassive black hole at the center of the galaxy) and all the nebulae the hole is dragging in.
Short answer: places where the distance between stars is closer are unlikely to have space faring civilizations.
Of course there is always Zeta Reticuli A and B.
In any case, great visualisation. Would be a perfect use for a 3D monitor.
Not sure if that's what you're looking for, but it should contain the info for what is being used. I took a quick glance and it looked like it was at minimum using the following 3 svg filters:
feGaussianBlur, feOffset, feMerge
I always wondered how scientists determine the position of Earth in our galaxy and the center of the galaxy. Can somebody throw some light on this?
Works fine in Safari.
b. If you haven't already, toggle the spectral index... so sick
Awesome
Edit: I'm sorry for my misleading and value-free comment. Allow me to clarify: THIS EXPERIMENT LOOKS LIKE HOT BROKEN GARBAGE ON CHROME FOR MAC, A FACT WHICH MAY BE OF INTEREST TO PERHAPS HALF OF THE HACKER NEWS READERSHIP WHO WILL MOST LIKELY EXPERIENCE THE SAME VISUAL CORRUPTION AT VARIOUS VIEW LEVELS. SOME MAY VIEW THIS AS UNFORTUNATE, AS THE INTENDED EXPERIENCE IS A WORTHY ONE THAT EXERCISES A NUMBER OF CUTTING EDGE WEB PRESENTATION TECHNIQUES THAT ARE LIKELY TO GAIN SIGNIFICANT TRACTION IN THE NEAR FUTURE.
Ever at your service! -JPXXX
There is a security issue in the OS X drivers for certain chipsets, and AA is broken. Google Chrome has taken the step of disabling AA on these systems to prevent an exploit of the OS.
The stars appear properly placed, but instead of a point there's an effect-ruining translucent square around each one. The single-pixel distance lines also look wrong during transitions.
Since this mystery ends in a Radar ticket about GPU drivers, I won't hold my breath for a fix. As far as Apple is concerned, it ain't broke until Final Cut is broke.
As surmised below, this appears to be an OpenGL driver issue with NVIDIA and Intel GPUs.