Explain please?
In practice you can issue render commands from only one thread. And there is no way to save a bunch of commands anymore, as display lists were deprecated. It's also still very much a state machine, so you have to make a lot of individual calls to set everything up for the actual draw call.
I personally love OpenGL and use it at work. However, this is currently one of the biggest drawbacks of OpenGL.
OpenGL has context sharing, which means several contexts in different threads sharing the same objects. You can issue commands as long as you synchronise access to objects yourself. In practice, that means filling buffers, rendering to an off-screen framebuffer, etc. from other threads.
If it works on Windows, then I wonder if it really matters. I've seen a lot of games (e.g. Team Fortress 2) still offer multithreaded rendering as an option in the UI, so there are already two code paths there.
In D3D it works like OpenGL display lists would, except with arbitrary commands. So you have multiple threads composing the scene, and then a single thread just issues a few commands to replay the command buffers created by the other threads.
It would be rather simple to implement the same in OpenGL, as we already have the display list concept, even if it was originally made to reduce the number of glVertex3f calls.
"complete lack" was maybe bit of an exaggeration, however in practice it is true.
This leads to developers pursuing various less optimal solutions that all involve more startup time for users and less predictable performance and robustness for developers (at least when compared to the solution D3D has offered for more than 10 years). So when people say OpenGL is years behind D3D this is one of the things they mean. D3D isn't perfect here either. There is a fair amount of configuration-specific recompilation going on, but the formats are more compact than the optimized/minimized GLSL source formats people are pursuing on OpenGL and while the startup time (shader create time) is still too long, it is still much better than OpenGL. Shader robustness is generally more predictable and better on D3D but it's hard to disentangle shader pipeline issues from driver quality.
To be fair multicore is also an issue for OpenGL, but D3D isn't great at that either. The current spec for D3D11 includes a multicore rendering feature called "deferred contexts" but performance scaling using that feature has been disappointing so it isn't a clear win for D3D. Other APIs (e.g. hardware-specific console graphics APIs) expose more of the GPU command buffer and reducing abstraction there allows for a real solution to the multicore rendering problem. There should be a vendor neutral solution here, but so far neither of the APIs has delivered one that is close to the hardware-specific solutions in performance scaling.
Which is kind of understandable - I use Mesa as my "assumed" OpenGL level, and it just hit 3.3. So you absolutely can't assume anything from the 4.x line.
Though I still couldn't assume geometry shaders, because as recently as Sandy Bridge, Intel GPUs didn't support them. On top of that, if you want to port to mobile, you need to refactor for GLES 2.