There was an alternate microcode load for it, which implemented an instruction set similar to that of a PDP-11 and could run an ancient version of Unix (maybe not so ancient back in 1985, but definitely pre-BSD). We used one or two of those as our development machines, and it was my job to write software tools on them, using C with 20-bit words and 10-bit bytes.
Man, it was a pain in the ass.
Here's the C/30 Programmer's Reference for the "Native Mode Firmware System". That was the software that ran in the native 20-bit mode rather than emulating the 16-bit Honeywell mini.
https://walden-family.com/impcode/c30-nmfs-programmers-refer...
They're fun to work with; some versions have FRAM instead of Flash memory.
To add to this: the reason the RAM was available with 9 bits in the first place is so it could be used to build systems with ECC. You just didn't have to use that 9th bit for error correction; you could use it for extra data, if you designed the system to use it that way.
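To see the mechanics, here's a minimal Python sketch of the simplest use of that 9th bit: a per-byte even-parity check. (Plain parity only detects single-bit errors; real ECC spreads check bits across a wider word, but the extra-bit bookkeeping is the same idea.)

```python
def add_parity(byte8: int) -> int:
    """Pack an 8-bit value into a 9-bit word with an even-parity 9th bit."""
    parity = bin(byte8 & 0xFF).count("1") & 1  # 1 if the data has an odd popcount
    return (parity << 8) | (byte8 & 0xFF)

def check_parity(word9: int) -> bool:
    """True if the 9-bit word's parity bit still matches its data bits."""
    data = word9 & 0xFF
    return (word9 >> 8) == (bin(data).count("1") & 1)
```

Any single bit flip in the stored word makes `check_parity` return False.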
The idea being that an intelligent race with a different balance of motivations might not do only the minimum economical thing, and might instead be willing to trade X% of memory bits, X% of instructions per second, or X% more chips for other purposes. For example, extra tag bits might encode where the data came from or its datatype, or additional clock cycles between instructions might run a reliability check on the program as it executes, etc.
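As an illustration of the tag-bit idea, a toy sketch where the extra 9th bit carries a one-bit type tag; the field layout and tag names here are invented for the example, not from any real machine:

```python
# Hypothetical: spend the 9th bit on a type/provenance tag instead of parity.
TAG_DATA, TAG_POINTER = 0, 1

def make_word(value: int, tag: int) -> int:
    """Pack an 8-bit payload plus a 1-bit tag into a 9-bit word."""
    return ((tag & 1) << 8) | (value & 0xFF)

def tag_of(word9: int) -> int:
    return word9 >> 8

def value_of(word9: int) -> int:
    return word9 & 0xFF
```

Hardware could then, say, trap any arithmetic attempted on a word tagged `TAG_POINTER`.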
It would be thematically even better if you used ternary logic, but I'm not sure an FPGA can handle more than two voltage levels.
There is a high-Z (high-impedance) state you can set I/O pins to for a third state, but no way to detect that high-impedance state from the FPGA; it's just used to let more than one pin share an output line. You could make a peripheral that could detect the three states, though, with a voltage divider and an analog input.
How did you do that? I couldn't even remove the battery to replace it.
> So I quickly gave up on creating something that could only exist on my FPGA board
I've been doing some FPGA stuff and I think that's the wrong way to look at it. Yes, FPGAs are often useful when you need raw speed, but that's not their only advantage over CPUs. You also get extremely low latency and direct control of I/O pins. With software you are limited to the existing hardware peripherals, but with an FPGA you can make your own!
I had a brief stint in hardware design, and an FPGA is almost always going to be worse than dedicated hardware for a task, but it's extraordinarily flexible.
In most workflows I saw, you design the hardware on the FPGA (hugely useful for quick testing and prototyping), then you outsource and actually build a custom chip if you really want speed.
It's also a great polyfill tool - since it can take the place of a lot of other hardware peripherals at a moment's notice.
This is definitely the wrong way to ask but in another thread about the PineNote, you mentioned you have a flow to sync your RM2 to Bookstack -- are your scripts on github or any other place? If not, would you consider making them public or posting on Reddit /r/selfhosted/ and/or /r/RemarkableTablet/ ?
I'm new to HN (as a poster, long-time lurker) and I didn't find a way to directly send you a message.
https://2017.notmalware.ru/89dc90a0ffc5dd90ea68a7aece686544/... (link from https://blog.legitbs.net/2017/07/the-clemency-architecture.h...)
Embedded Linux is great, but if you’re trying to do something like read from a high-speed ADC then the only way to do it is with an FPGA. The FPGA reads from the ADC at precise intervals and buffers the data. The embedded Linux system can then periodically read the buffer with all of the jitter and latencies that come with using Linux.
Virtually every Linux-based software-defined radio, oscilloscope, and logic analyzer works on this architecture. For lower speeds you can get away with a microcontroller running bare-metal code to do the buffering, but the high-speed stuff enters the domain of FPGAs.
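As a toy model of that split, the class below stands in for the FPGA-side FIFO: the sampler pushes one sample per ADC clock at a precise rate, and the Linux side drains whatever has accumulated whenever it gets scheduled. The class, its name, and its depth are invented for illustration:

```python
from collections import deque

class SampleBuffer:
    """Toy model of an FPGA-side sample FIFO feeding a jittery Linux reader."""

    def __init__(self, depth: int):
        # Bounded like real block RAM: oldest samples drop if the reader stalls.
        self.fifo = deque(maxlen=depth)

    def push(self, sample: int) -> None:
        """Called once per ADC clock, at a precise interval."""
        self.fifo.append(sample)

    def drain(self) -> list:
        """Called at Linux's leisure; returns everything buffered so far."""
        out = list(self.fifo)
        self.fifo.clear()
        return out
```

The point is that `push` happens on a rigid schedule while `drain` can be arbitrarily late (up to the buffer depth) without losing timing accuracy of the samples themselves.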
You just have the peripheral DMA the data and raise a flag/interrupt when done. If you need an "immediate" reaction you use a DSP. There are only so many useful calculations you can do with a single input stream, and a DSP can handle them.
With an FPGA and external DDS chips, this is difficult to do just because of mismatches in PCB trace lengths and/or small temperature fluctuations. With a microcontroller, it is nearly impossible to do even when using DMA because of memory bus contention.
So it's not always raw speed, per se, but anything that's sensitive to timing. Linux on a Pi can be busy doing something else and miss a critical time to output (or read) something. An FPGA-based solution works with known loop/I/O/etc. times that don't change.
https://intellijel.com/shop/eurorack/cylonix-rainmaker/
In this module, the FPGA's ability to do LOTS of computations in parallel is used to produce 16 taps of pitch-shiftable delay along with a 64-tap comb filter.
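For a sense of what one of those taps computes, here's a single feedforward comb tap in Python. The module runs many such taps in parallel in hardware; this function is just an illustrative sketch of the filter structure, not the module's actual DSP:

```python
def comb_filter(x, delay, gain):
    """Feedforward comb: y[n] = x[n] + gain * x[n - delay].
    A multi-tap comb sums several of these with different delays/gains."""
    y = []
    for n, sample in enumerate(x):
        delayed = x[n - delay] if n >= delay else 0.0
        y.append(sample + gain * delayed)
    return y
```

On an FPGA each tap is its own adder and delay line, so 64 taps cost 64x the hardware but no extra time, which is exactly the kind of parallelism a CPU can't match per clock.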
At normal speeds, an image can take 10-15 ms to clock out of the sensor. At that point, there's little reason not to run your image processing on a $3 CPU rather than a $$$ FPGA: what's another < 30 ms on top of that, and what would need a reaction that quick anyway?
It had a 36-bit word length, resulting in a 9-bit 'byte'.
Which is why we have UTF-9 and UTF-18, as defined in RFC 4042.
https://datatracker.ietf.org/doc/html/rfc4042
(Spoiler: It's an April Fool's joke.)
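Joke RFC or not, the encoding it describes is well defined and easy to sketch. Assuming the nonet scheme in RFC 4042 (the codepoint is split into big-endian octets, each placed in the low 8 bits of a 9-bit "nonet", with the high bit set as a continuation flag on every nonet except the last), an encoder looks like this:

```python
def utf9_encode(codepoint: int) -> list:
    """Encode one codepoint as a list of 9-bit UTF-9 nonets (RFC 4042)."""
    octets = []
    while True:
        octets.insert(0, codepoint & 0xFF)  # big-endian octet split
        codepoint >>= 8
        if codepoint == 0:
            break
    # Continuation bit (0x100) on all nonets but the last.
    return [0x100 | o for o in octets[:-1]] + [octets[-1]]
```

So ASCII fits in one nonet, and even U+10FFFF needs only three, which is the whole (tongue-in-cheek) selling point on a 9-bit machine.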
I went ahead and tried to design a machine language for that computer. There are three registers: a three-byte accumulator, a two-byte stack pointer, and a 6-bit-wide flag register; and these addressing modes: accumulator, immediate, absolute, relative, and stack.
It's possible that I will try to implement this system with the help of an FPGA.
Just out of curiosity. As a hobby.
Somehow I was sure that sentence was going to end with "... in MineCraft!"
https://github.com/hneemann/Digital
You can export your project as a Verilog file that can be used in the various FPGA tools.