+ DRAM
+ I2C
+ GPIO (with stuff like 3.3 V tolerance, tristate outputs, pull-up/down, etc.)
+ USB2 host/device
+ SD/MMC
And that's just at the very basic level. Once you get into the consumer world you need to start talking about video output, camera input, video decode and encode acceleration, programmable GPUs, and so on. Really, the CPU is, in some sense, the most solved problem from the perspective of open source. The designs themselves may be closed IP, but the instruction sets are meticulously documented and their behavior is very standard across many vendors and ISAs.
GPIO, I2C, and SD interfaces (in approximately increasing order of complexity from my point of view) are one-person jobs for the right person. I've been in charge of all the GPIO for complex mixed-signal chips several times in the past, and I could crank these out in no time. But someone who's never designed for ESD and latchup, beyond-the-rail inputs, etc., would probably find these pretty nasty.
Forget using fab-supplied GPIO; those designs are almost always much (3-5x or more) larger than they need to be, and in advanced nodes, they tend to add additional processing cost. Specifically, most fabs want to stick with per-pad snapback clamps, which are (1) big and (2) require (in processes newer than about 0.13u; you can get away without it in earlier nodes) an additional "ESD" layer to fix doping gradients so that the devices don't destroy themselves as soon as they snap back.

A much better solution, for many reasons, is transient rail clamps and steering diodes. First, because diodes can handle insane amounts of current per micron of perimeter (think: easily 50 mA/um of perimeter for a diode, vs single-digit mA/um for snapback devices), their layouts are more compact, and they don't require ballasting to prevent current crowding. Second, and more importantly, clamps and diodes can be simulated (and the simulations, if not correct, are at least predictable in the way they fail depending on the models the fab gives you). Snapback is effectively voodoo: design what looks like it should work, test it, and hope that some circuit you accidentally put too close to the pad doesn't change the behavior enough to cause failures.
DRAM controllers are another step up in complexity. Depending on what standard you're going after, this is going to take a reasonable amount of work.
USB2 is, in a word, hideous. A team starting fresh is looking at several person-years (or more) for a well-designed physical interface, control logic, etc.
One wonders if they can convince someone to donate designs. Come to think of it, I'd do their GPIO/ESD/latchup design for them just for the fun of it; my current employer certainly wouldn't object.
But absent some random hackery on opencores.org, no one seems to have really put effort into doing it on an open part in a serious way.
I won't mind if it's 10x slower, though the thousands-of-times slowdown I'd get with a software simulation would likely be too slow to be practical.
What? You say that the chips I currently use have closed and secret designs and are not available for modification? But I thought you said that the CPU is the most solved problem from the perspective of open source??
I guess it's good that people are working on actually open CPUs so that things like http://www.cl.cam.ac.uk/research/security/ctsrd/cheri/ can be built.
In other words, the characteristics of SoCs using this core will likely be very similar to the many out there using MIPS: cheap and simple, with performance that's acceptable for applications like routers and other embedded devices.
Even the smallest changes to MIPS to clean up things like branch delay slots mean it's a new ISA anyway, so you get zero benefit from keeping it "mostly MIPS". You can read a bit more about this in the "history" section in the back of the user-level ISA manual.
They also avoided the patented-instruction issue completely by removing all alignment restrictions from the regular loads and stores; probably a good idea, with memory bandwidth being the bottleneck now and buses growing wider. The extra hardware is also negligible: basically a barrel shifter and logic to do an extra bus cycle if needed.
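To make the "barrel shifter plus an extra bus cycle" point concrete, here is a toy Python model of servicing a misaligned 32-bit load over a 32-bit word bus (this is my own illustrative sketch, not anything from the RISC-V spec or a real implementation): the controller issues one aligned access, and if the address straddles a word boundary, a second one, then merges the bytes with a shift.

```python
def load32_unaligned(mem, addr):
    """Load a little-endian 32-bit value at a possibly misaligned byte address.

    mem is a list of 32-bit words; each list index covers 4 byte addresses.
    """
    word_idx, offset = addr // 4, addr % 4
    lo = mem[word_idx]                    # first (aligned) bus cycle
    if offset == 0:
        return lo & 0xFFFFFFFF            # aligned case: done in one cycle
    hi = mem[word_idx + 1]                # extra bus cycle for the straddle
    shift = 8 * offset                    # the "barrel shifter" merge
    return ((lo >> shift) | (hi << (32 - shift))) & 0xFFFFFFFF

# Bytes 0x11..0x88 stored little-endian across two consecutive words:
mem = [0x44332211, 0x88776655]
assert load32_unaligned(mem, 0) == 0x44332211   # aligned
assert load32_unaligned(mem, 1) == 0x55443322   # straddles the boundary
```

In hardware the equivalent is a mux/shifter network and a state-machine hiccup for the second cycle, which is why the cost is considered negligible.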
MIPS also has MIPS16.
Have a CPU that is a CISC (but internally a microcoded TTA), with a large chunk of the microcode user-writable: you have push-inst and pop-inst, where push-inst pushes new microcode for an instruction into the microcode store and copies the old microcode onto a stack, and pop-inst does the opposite. This keeps the advantages of fixed-width instructions while, depending on how the microcode is encoded, potentially giving significant memory savings.
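The push-inst/pop-inst mechanism above can be sketched in a few lines of Python. This is purely my interpretation of the proposal (the class, opcode names, and accumulator machine are all invented for illustration): the microcode store is a dictionary mapping opcodes to routines, and push-inst/pop-inst save and restore definitions via a stack.

```python
class MicrocodedCPU:
    """Toy model: opcodes dispatch through a user-writable microcode store."""

    def __init__(self):
        self.acc = 0
        # Default microcode: INC bumps the accumulator by 1.
        self.ucode = {"INC": lambda cpu: setattr(cpu, "acc", cpu.acc + 1)}
        self.ustack = []  # stack of (opcode, previous microcode routine)

    def push_inst(self, opcode, routine):
        """Install new microcode for opcode, saving the old on the stack."""
        self.ustack.append((opcode, self.ucode.get(opcode)))
        self.ucode[opcode] = routine

    def pop_inst(self):
        """Restore the previously saved microcode definition."""
        opcode, old = self.ustack.pop()
        self.ucode[opcode] = old

    def execute(self, opcode):
        self.ucode[opcode](self)

cpu = MicrocodedCPU()
cpu.execute("INC")                                          # acc: 0 -> 1
cpu.push_inst("INC", lambda c: setattr(c, "acc", c.acc + 10))
cpu.execute("INC")                                          # acc: 1 -> 11
cpu.pop_inst()
cpu.execute("INC")                                          # acc: 11 -> 12
```

The interesting design question, glossed over here, is how a real machine would bound microcode-store capacity and keep the stack coherent across interrupts.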
Regarding Sodor, the cores are designed to be instructional (we use them with our undergrads at Berkeley) and open to anybody with a C++ compiler, so they can learn about Chisel and RISC-V. I pushed them through synthesis once just for kicks, but I didn't work on making them FPGA-ready. Chisel will give you the Verilog of the core, but you'd still need to write a test harness that's specific to your FPGA.
The RISC-V user manual lists some of our existing RISC-V silicon implementations (8 so far, listed in Section 19.2), whose RTL isn't (yet) open source.
I highly doubt it. But if it's true, that would be the missing link for all open source hardware design.
It would also be nice if they gave some idea of the kind of performance or implementation they are considering.
It looks like the privileged part of the RISC-V ISA is not finished yet. This is a great project, but it seems a long way off.
We are far from the first to contemplate an open ISA design suitable for hardware implementation. We also considered other existing open ISA designs, of which the closest to our goals was the OpenRISC architecture. We decided against adopting the OpenRISC ISA for several technical reasons:
-- OpenRISC has condition codes and branch delay slots, which complicate higher performance implementations.
-- OpenRISC uses a fixed 32-bit encoding and 16-bit immediates, which precludes a denser instruction encoding and limits space for later expansion of the ISA.
-- OpenRISC does not support the 2008 revision to the IEEE 754 floating-point standard.
-- The OpenRISC 64-bit design had not been completed when we began.
[1] http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-54...
I wish the page were a little clearer on what the intentions are, but seeing RV64 in silicon would be immensely exciting. Producing a simple in-order machine, even with the usual set of peripherals, isn't very hard at all, nor that expensive on an older process node, but there's a world of difference if we start talking superscalar out-of-order multi-core SMP. Seeing OpenRISC on the Advisory Board, I suspect it's more the former than the latter.