When you access hardware on, say, a PCI bus (which you would in this scenario), your call to the PCI bus does not take place WHEN you ask for it to take place. You call the kernel, which calls the scheduler, which calls the hardware manager, which calls the driver, which finally processes your request.
Once your request is handed off, all of this is swapped out and something else runs while the processor waits to hear back from the PCI bus, because the response takes ages in processor time.
Finally an interrupt arrives and is made sense of, the appropriate driver is called, it hands your information back to your process, and you're back on your merry way.
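You can watch this round trip (and its jitter) from user space. The sketch below (Python, assuming a Unix-like system with `/dev/zero`) times thousands of identical one-byte reads; every one goes through the syscall-and-driver path just described, and the gap between the median and the worst case is exactly the nondeterminism at issue.

```python
import os, time

# Time many identical one-byte reads from /dev/zero. Each read is a
# full trip through the kernel (syscall -> VFS -> driver and back),
# so the spread between median and worst case is the scheduling and
# dispatch jitter described above.
fd = os.open("/dev/zero", os.O_RDONLY)
samples = []
for _ in range(10_000):
    t0 = time.perf_counter_ns()
    os.read(fd, 1)
    samples.append(time.perf_counter_ns() - t0)
os.close(fd)

samples.sort()
median, worst = samples[len(samples) // 2], samples[-1]
print(f"median: {median} ns, worst: {worst} ns")
```

On a loaded machine the worst case can be many times the median, and nothing in user space controls when it happens.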
:.:.:
The problem is that when you called the OS to start this long chain of events, the real world didn't stop. Your real-time module is still counting its 32,600,000 pulses per second.
This is where you'll get errors, because it's easy to think things in a computer happen instantly, or so blindingly fast that you don't care about their order or speed.
The situation you described is what originally gave me 0.1% error. Eventually I switched to a more aggressive tack: polling asynchronously in a separate thread and, when a process asked for the time, responding with the latest received time.
That got me down to 0.07% error. Still not acceptable.
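That polling scheme looks roughly like this (a minimal Python sketch; `read_hw_time` and the class name are stand-ins for illustration, not any real API):

```python
import threading, time

class LatestTimePoller:
    """Poll a slow time source in a background thread; readers always
    get the most recently received sample without blocking on IO."""

    def __init__(self, read_hw_time):
        self._read = read_hw_time      # stand-in for the real PCI read
        self._latest = read_hw_time()  # prime with one synchronous read
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._poll, daemon=True)
        self._thread.start()

    def _poll(self):
        while not self._stop.is_set():
            t = self._read()           # the slow, nondeterministic call
            with self._lock:
                self._latest = t

    def now(self):
        # Returns immediately, but may be one poll interval stale.
        with self._lock:
            return self._latest

    def stop(self):
        self._stop.set()
        self._thread.join()
```

Callers get an answer from `now()` immediately, but that answer can be up to one poll interval old, which is where the remaining error lives.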
:.:.:
It's nice to be starry-eyed and think there is no reason to give up security. But sometimes secure software can't do what you need it to do. The more crap you put between you and the metal, the more time it'll take to execute.
This is logically provable.
Take processes A and B. Both are the optimal way to do something; there is no faster way to do this task. Therefore one can assume A's and B's execution times are equal.
Yet B operates in a secure, sandboxed environment under a time-sharing OS, so B's true execution time is B + C + D, where C and D are the sandboxing and time-sharing overheads.
We know A = B, but for A = (B + C + D) to hold, C and D must be zero, which they can never be in the real world.
Or you run a real-time OS, which may have its own problems, because those are developed with IO timing in mind, not security.
It's a fundamental flaw of time-shared OSes.
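The C + D term is easy to demonstrate. In this sketch (Python, for illustration), `a` never leaves user space, while `b` does the same nothing plus one trip through the kernel (a zero-byte write to `/dev/null`); on any time-sharing OS the kernel path comes out measurably slower:

```python
import os, time

def bench(fn, n=50_000):
    """Average wall-clock nanoseconds per call of fn."""
    t0 = time.perf_counter_ns()
    for _ in range(n):
        fn()
    return (time.perf_counter_ns() - t0) / n

def a():                    # "process A": never leaves user space
    pass

fd = os.open("/dev/null", os.O_WRONLY)

def b():                    # "process B": same work plus one kernel trip
    os.write(fd, b"")       # zero-byte write: pure syscall overhead

direct = bench(a)
through_os = bench(b)
os.close(fd)

print(f"user space only: {direct:7.1f} ns/call")
print(f"through kernel : {through_os:7.1f} ns/call")
```

The difference between the two numbers is the C + D the argument above says can never be zero.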
:.:.:
Second, security works in a simple way: cost to secure vs. money lost.
Lab equipment is expensive. The loss of an entire calibration bench could run into the $250,000-to-$1 million range and beyond.
But redeveloping an entire OS for this? You're talking about spending 20 to 100x MORE on security than your losses. That's idiotic at best.
> The only real time component of the software stack is
> the kernel. If you want another real time module your
> screwed because you need to run it in kernel space, but
> if you have a kernel you can't.
I think you're confused about what RyanZAG is saying. If I'm reading correctly, he's saying "don't run the real-time stuff on the CPU." Have that stuff run on a much simpler piece of hardware that is real-time and runs the real-time code, then have the non-real-time userland on the CPU talk to it in non-real time. To take your example:
> When you access hardware on, say, a PCI bus (which you would in
> this scenario), your call to the PCI bus does not take place
> WHEN you ask for it to take place. You call the kernel, which
> calls the scheduler, which calls the hardware manager, which
> calls the driver, which finally processes your request.
Design the PCI card to have its own, smaller CPU (or FPGA, or whatever) that does the real-time interaction with the "32,600,000 pulses per second." Don't have the real-time bits depend in any way on the code running on the CPU. Have it buffer the data. Then, when the PCI card is accessed by the userland program on the CPU, it dumps the buffer onto the PCI bus. The userland would obviously have to be fast enough that the buffer doesn't fill up, but that speed is much less demanding than "real time." You can then work with the data in the userland, running in non-real time.
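A sketch of that buffering split, in Python purely for illustration (the class and names are hypothetical; on real hardware the producer side would be the FPGA and the buffer a hardware FIFO):

```python
import collections, threading

class PulseBuffer:
    """Card-side design sketch: the real-time side appends pulse counts
    to a bounded buffer; the non-real-time userland drains the buffer
    whenever it happens to get scheduled."""

    def __init__(self, capacity=4096):
        # maxlen models a full hardware FIFO silently dropping the
        # oldest samples if the userland falls too far behind.
        self._buf = collections.deque(maxlen=capacity)
        self._lock = threading.Lock()

    def push(self, count):    # called from the "real-time" side
        with self._lock:
            self._buf.append(count)

    def drain(self):          # called by the userland program
        with self._lock:
            out = list(self._buf)
            self._buf.clear()
            return out
```

As long as the userland calls `drain()` before `capacity` pulses accumulate, nothing is lost, and none of the real-time work depends on when the OS gets around to scheduling it.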