This is because alternating current is 50Hz in PAL countries (eg Europe) and 60Hz in NTSC countries (eg America). Analogue TVs' vertical refresh rate was synced to the AC frequency for a bunch of practical reasons, which meant gaming consoles had to send signals to the television at either 50Hz or 60Hz.
Result? Many PAL console games actually ran at 5/6ths of NTSC speed. Most notoriously Sonic the Hedgehog, which ran more sluggishly (and whose soundtrack was less vivacious) for a large fraction of the world. More information in this video: https://www.youtube.com/watch?v=cWSIhf8q9Ao
From the perspective of the grid operator, however, 50 or 60Hz is not always 50 or 60Hz. A sudden load or a generator tripping offline (to preserve itself) results in a transient slowdown of the frequency of the entire grid. I spent a summer in high school helping out with the analysis of these kinds of disturbances, and there's a distinct pattern to the fluctuation of grid frequency. There are also slight longer term errors in grid frequency, although operators are held to strict standards.
Getting back to clocks, integrating these transient frequency errors over time results in clocks that shift forward and backward relative to real time. This integrated time error is often displayed in grid control rooms, and it is something they deliberately manage to ensure that the 'grid time' is accurate. In practical terms, this means a period of ever so slightly less than nominal frequency is likely to be followed by a period of deliberately induced slightly higher than normal frequency, so that the overall integrated error tends to zero.
More details on the time control aspect on page 13 here: http://www.nerc.com/docs/oc/rs/NERC%20Balancing%20and%20Freq...
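The integration described above can be sketched in a few lines. This is a toy model with numbers I've picked for illustration (not NERC's): a synchronous clock counts AC cycles, so its drift is just the time integral of the fractional frequency error.

```python
# Toy sketch: integrating grid frequency error into clock error.
# A mains-synchronous clock counts AC cycles, so its drift is the
# integral of (f_actual - f_nominal) / f_nominal over time.

F_NOMINAL = 60.0  # Hz (a 60Hz interconnection; 50.0 for PAL-land grids)

def integrated_time_error(freq_samples, dt):
    """Clock error in seconds given frequency samples taken every dt seconds."""
    error = 0.0
    for f in freq_samples:
        error += (f - F_NOMINAL) / F_NOMINAL * dt
    return error  # positive: synchronous clocks run fast; negative: slow

# An hour at 59.98 Hz leaves synchronous clocks about 1.2 s slow...
slow_hour = integrated_time_error([59.98] * 3600, 1.0)
# ...which the operator can cancel with an hour at 60.02 Hz,
# driving the accumulated "grid time" error back toward zero.
fast_hour = integrated_time_error([60.02] * 3600, 1.0)
print(f"{slow_hour:+.2f} s")  # → -1.20 s
```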
Basically it makes the TV cheaper to build. You have a natural frequency there to use, and you don't have to come up with additional hardware to smooth out the existing frequency and generate a new one.
I think there was a flip-side to this ... though NTSC had faster refresh, PAL had a higher resolution (more lines). I'm not sure, but I think this may have been a tradeoff.
At 50Hz, even if the vertical sync wasn't actually locked to the mains frequency, any mains distortion would roll much, much more slowly, and be less off-putting.
My (basic) understanding is that having the beam move at roughly the same frequency as the A/C current is done to mitigate noise and distortion caused by other appliances, which earlier tubes were much more susceptible to.
So in many cases you end up with games that run at 5/6th the nominal speed and have black bars at the top and bottom of the screen. PAL gaming was pretty crap, but of course at the time I didn't know any better and I didn't understand English anyway, so it's not like I had a choice...
I'm not an expert, but I read that that was the original idea; it was never implemented, though, and TVs used independent oscillators.
https://www.youtube.com/watch?v=niKblgZupOc
He shows all the interesting artifacting that makes these extra colours possible.
People would try many things to render as many Bobs as possible.
https://www.youtube.com/watch?v=eDQTxbudvDg
At some point the infinite bob demo appeared (I couldn't find a video for this). Seemingly endless sprites rendered on the screen - all moving, with no slowdown. As a 15 year old programmer learning to code demos in assembly I was very confused: how was it done?
I did eventually (after many hours spent in Devpac disassembling code) work it out. They were not rendering infinite bobs: they were rendering one!
You create three screen buffers. On the first buffer draw the bob at position (x,y); then switch to the second buffer and draw at (x+delta, y+delta); then switch to the third buffer and draw at (x+2*delta, y+2*delta); then repeat, cycling through the buffers as you move the bob.
If you switch between the buffers on the vsync (50/60Hz), then the difference between where you drew the bobs appears as movement. You can trace out infinitely complex patterns and make it look like you're rendering millions of bobs!
Happy days.
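A toy model of the trick, for anyone who wants to see it concretely (buffer size and deltas here are invented; a real Amiga demo does this with the blitter and hardware page flipping):

```python
# Toy model of the "infinite bobs" trick described above. Only ONE bob
# is drawn per frame, but because three buffers are cycled on vsync and
# never cleared, every bob ever drawn stays visible - the pattern grows.

W, H = 8, 8
buffers = [[[0] * W for _ in range(H)] for _ in range(3)]

def draw_bob(buf, x, y):
    buf[y % H][x % W] = 1  # draw and never erase

x = y = 0
for frame in range(12):
    buf = buffers[frame % 3]  # "flip" to the next buffer on vsync
    draw_bob(buf, x, y)
    x += 1                    # the per-frame delta
    y += 1

total = sum(cell for buf in buffers for row in buf for cell in row)
print(total)  # → 12: twelve bobs on screen, yet only one drawn per frame
```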
There is no scanline striping; that is strictly an artifact of poor NTSC capture devices (like most HDTVs) that naively assume the incoming signal is a 480i image with odd and even fields.
And nonetheless a CRT from the time would show these as different scanlines.
(Well... it's been a long time since I've seen a C64 plugged into a TV, but I don't see how anything else could happen! You definitely don't get an alternating effect from a BBC Micro.)
Apart from the fact that they would have advertised the increased resolution (albeit flickery). On the Amiga, which did do interlaced modes, you could tell on a screen showing a single colour whether it was interlaced or not.
The monitor doesn't "know", in that case. The interlacing is simply a physical phenomenon. Later digital capture devices must recognize this situation and handle it appropriately. Many assume it's always happening, as progressive-scan 262-line NTSC is out of spec anyway.
TVs were designed in the 1930s, when electronics were extremely primitive and expensive. You wanted the consumer device to be as simple as possible so it could be within the consumer price range. So TVs were little more than a radio receiver hooked up to a cathode ray tube (CRT).
To drive a CRT you need 3 signals: X position, Y position and brightness. The dumbest possible design is to have 3 radio receivers and transmit all 3 signals over the air. But the extra receivers are expensive and besides the X and Y signals are very repetitive, which would be a waste of bandwidth.
So two flyback transformers were added to the design of the TV, which generate a sawtooth pattern: starting at 0%, they steadily increase power until 100% before rapidly snapping back to 0. One would run at the vertical refresh rate to drive the CRT's Y signal and the other would run at the horizontal refresh rate to drive the CRT's X signal. The brightness would come from the radio receiver.
With this design, you just need a way of synchronizing the TV studio's cameras and all the TVs in the area to the same horizontal and vertical refresh rates. You might think: Easy, we just use the mains power frequency for vertical and then divide it by 525 to get the horizontal frequency.
But a divide-by-525 frequency divider was way too expensive to put in every TV. Instead, they only put one divider in the studio to calculate the horizontal refresh rate, and embedded a synchronization pulse into the brightness signal. A simple circuit in the TV detects the synchronization pulse and nudges the flyback transformer to match. A second, longer synchronization pulse is transmitted between every field for the TV to synchronize the vertical flyback transformer.
So a basic Black and White TV is just a radio receiver, two flyback transformers and two synchronization detectors hooked up to a CRT. It doesn't know anything about interlacing or even how many lines there should be in every frame. Back then, a TV studio could theoretically start transmitting at 61 Hz or with a few extra lines per frame and every TV would follow along, right up until the point where the horizontal or vertical refresh rates went out of the spec of the shittest flyback transformers in consumer TVs.
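As a sketch, the two sawtooth signals can be modelled as plain phase ramps (rates here are NTSC-ish; real sets generated these with analogue circuits, of course, not arithmetic):

```python
# Sketch of the two sawtooth ("flyback") deflection signals described
# above, modelled as phase ramps that climb steadily and snap back to 0.

V_RATE, H_RATE = 60.0, 15_750.0  # vertical and horizontal rates, Hz

def saw(t, freq):
    """Beam deflection (0..1) at time t seconds for a sawtooth at freq Hz."""
    return (t * freq) % 1.0  # ramp up steadily, snap back to zero

t = 0.01                # some instant in time, in seconds
x = saw(t, H_RATE)      # beam X position along the current line
y = saw(t, V_RATE)      # beam Y position down the screen
```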
Interlacing is a brilliant hack that is 100% done at the studio end. All they do is pick a horizontal and vertical refresh rate that don't divide into each other a whole number of times: 15.75 kHz divided by 60Hz is 262.5 lines. This means that when the TV's vertical flyback transformer reverts to zero (putting the CRT's Y position back to zero), every second frame, the X position of the CRT will be halfway along the screen.
One thing you might have noticed is that the Y position is constantly incrementing; it doesn't step down by one line's worth of Y position at the end of each line. This means that the TV signal is actually rotated slightly, with the end of each line having almost the same Y position as the start of the next line.
Which means if the field starts halfway through a line, the start of the first full line on that field (and every line after that) will be half a line lower than it was on the previous field.
Other interlacing schemes are theoretically possible, just by picking appropriate horizontal and vertical refresh rates. You could have triple or quadruple interlacing (though I doubt either would be pleasing to look at). But most early game consoles and computers pick a horizontal and vertical refresh rate which divide into each other with a whole number of lines, resulting in a progressive display.
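The half-line arithmetic is easy to sanity-check (NTSC rates taken from the explanation above; the 262-line progressive count is a typical choice, not the only one):

```python
# The arithmetic behind interlacing: a fractional number of lines per
# field is what offsets alternate fields by half a scanline.

H_RATE = 15_750.0  # horizontal (line) rate, Hz
V_RATE = 60.0      # vertical (field) rate, Hz

lines_per_field = H_RATE / V_RATE
print(lines_per_field)  # → 262.5: the extra half line interleaves the fields

# A console wanting a progressive display instead picks a whole number
# of lines per field, which nudges the vertical rate slightly off 60Hz:
print(H_RATE / 262)  # a bit above 60Hz - well within what TVs tolerated
```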
I wonder how much better the modern version of this effect (as seen in the article) would be if they implemented this.
The bullet-point resolution for standard VGA was 320x200, but by hitting the hardware registers and paying the price of a rather peculiar pixel-addressing mechanism you could get a lot more (and double buffering to boot).
320x240 was the most common tweaked mode, because it gave you square pixels and page flipping.
At the edge of what monitors could handle there was a 400x300 mode which ran at 87Hz. Flipping two images with this mechanism gives you a 43Hz shimmer, which is almost impossible to pick up on if the two components are of similar luminance.
I never saw this get used for anything, but it would have made an excellent paint program for standard VGA.
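The page-count arithmetic behind these tweaked modes checks out, assuming VGA's 64 KiB per bit plane across 4 planes (the mode numbers are from the comments above; everything else here is just division):

```python
# Why the tweaked ("unchained") VGA modes allow page flipping: VGA has
# 64 KiB per bit plane, and each unchained page needs w*h/4 bytes per plane.

PLANE_BYTES = 64 * 1024

def pages(width, height):
    """How many full screens fit in video memory in an unchained mode."""
    return PLANE_BYTES // (width * height // 4)  # 4 planes share the pixels

print(pages(320, 200))  # → 4 pages
print(pages(320, 240))  # → 3: square pixels AND page flipping
print(pages(400, 300))  # → 2: exactly the two images flipped at 87Hz

# Flipping two images halves the rate each one is actually shown at:
print(87 / 2)  # → 43.5, the ~43Hz shimmer mentioned above
```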
Of course, it meant you had to spend 90% of the frame updating the palette registers and only had the vertical blank time to draw everything into the frame buffer. But the focus of the demo was the idea that high-color was at all possible on VGA hardware.
The C64 missed out on some of this programmability, but had other goodies to compensate(more sophisticated sprite hardware, a really solid default palette).
You could actually change them mid scanline too, but it was tricky to get the timing right, so the exact pixel of the change would be a bit random.
Sheesh, when was that? Probably 1985.
Watch this demo for state-of-the-art in that kind of programming on the C64.
I guess people grow to like the look of the machines they have fond memories of. (Apple II, Coco, Sinclair)
The Amiga was the first machine to really impress me with its palette.
But if you compare the C64 palette with other 8-bit computers of the time, it looks better to me. And as he explains in his article, it is easier to generate realistic-looking pictures with it than with a more vibrant colour palette.
There are too many layers, half of which don't synchronize correctly, and if even one is out of alignment you'll get tearing/stuttering.
The Spectrum was capable of all sorts of odd things that were never intended. http://tarjan.uw.hu/zx_gfx_modes_en.htm
The Amstrad CPC probably had the most vibrant palette of all of the 8-bits.
A C64 artist even tried to prove otherwise, but IMHO scored an own goal: http://www.indieretronews.com/2016/02/is-c64-palette-far-sup...
The fact that we are having a conversation about whether it looks dull and washed out shows how much this varied from set to set. It also helps to explain why there are so many palettes available in most C64 emulators - people tried to match their memories, or a screen grab, of however their particular C64's display was adjusted.
I'd adjusted mine; if I recall correctly, it came with the intensity all the way up. I turned it down, but not to the point most people's seem to have been set at, judging from most modern emulated screenshots.
Also, having two separate ditherings of each sprite would use twice the memory per sprite, and that dragon looks to be composed of multiple sprites and possibly multiple frames of animation per sprite. Most games were memory limited, which means the devs had to make several compromises just to get the game to fit at all.
I wonder if the demoscene has some nice examples where they are more perfectionist about this.