It's quite an interesting architecture. From an initial perusal, I found these features noteworthy:
* Explicit access to PC as a register makes "computed goto" trivial and very natural (see the sketch after this list).
* Because of the above, there is no JMP instruction.
* Treating operations on registers as "values" makes the instruction set very orthogonal.
* No instructions for bit-manipulation.
* Lots of possible NOP instructions, like "SET PC,PC" (which I think should work; it depends a bit on exactly when PC is incremented, but it looks safe).
* Word-addressed memory will make string processing interesting to implement. Maybe in space, everyone uses UTF-16.
* A single 64K address space will put quite a limit on the amount of code; I guess the game will show how much code is needed to do anything.
* No I/O instructions means I/O must be memory-mapped, further reducing the space available for code. Maybe there will be bank-switching?
* No interrupts.
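To make the computed-goto point concrete, here's a minimal dispatch sketch. The label names are mine, and DAT is the data directive most DCPU-16 assemblers provide, not anything in the spec itself:

        SET A, 1            ; pick case 1 for this example
        SET PC, [table+A]   ; computed goto: PC is just another register
:table  DAT case0, case1
:case0  SET B, 0xAAAA
        SET PC, end
:case1  SET B, 0xBBBB
:end    SET PC, end         ; spin to halt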
What? Plenty of opcodes for that. SHL, SHR, AND, BOR, XOR are all bitwise operators.
Unless you mean bitset and bitclear macros, but no self-respecting embedded programmer uses those. I've disagreed with almost everything else Notch has done, but from a simple pedagogical standpoint, he's doing the right thing by leaving those out.
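For instance, single-bit set/clear/toggle fall straight out of the existing ops; bit 3 here is arbitrary:

BOR A, 0x0008   ; set bit 3
XOR A, 0x0008   ; toggle bit 3
AND A, 0xFFF7   ; clear bit 3 (AND with the complement mask)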
It has completely demystified the whole low-level world of computers for me.
If you can't get the book, a number of exercises and chapters are available for free at this website[2].
[1]: http://www1.idc.ac.il/tecs/
[2]: http://diycomputerscience.com/courses/course/the-elements-of...
But who can say. The second Fortran compiler I used ran on an IBM 1800 (equivalent to the IBM 1130, but for process control); it had something like 29 phases, and ran on a machine with a total of 4K 16-bit words.
A variant like Cython that compiles to this assembly? Maybe...
http://en.wikipedia.org/wiki/Program_counter
http://en.wikipedia.org/wiki/Word_(computer_architecture)
http://en.wikipedia.org/wiki/JMP_(x86_instruction)
http://en.wikipedia.org/wiki/NOP
http://en.wikipedia.org/wiki/Orthogonal#Computer_science
http://en.wikipedia.org/wiki/Address_space
http://en.wikipedia.org/wiki/Memory-mapped_I/O
http://en.wikipedia.org/wiki/Interrupt
And no, don't expect Python anytime soon. Expect a C compiler, FORTH, possibly some sort of Pascal, but a highly dynamic language is unlikely. The system is too resource-constrained to make it practical. A static language that looks kind of like Python isn't out of the question, but it won't do a lot of the things you expect from Python, Ruby, PHP, or Perl.
Lua, perhaps? There was an article on how to reduce its binary size, for an older version of the language (Lua 4.0): http://www.lua.org/notes/ltn002.html
With no reductions, and for x86, it started at ~64KB, so not very useful here; but after dropping the standard libraries and the parser, they got down to ~24KB. Some further questions arise that I'm not sure about:
- would such a virtual machine, stripped of its parser, be any more useful than the underlying system alone?
- how much bigger would the compiled code be for the "DCPU-16" instruction set than for x86?
In a tweet, Notch said: "It's going to be dynamic later on, but right now [the keyboard is] a cyclic 16 letter buffer at 0x9000. 0=no char yet. Set to 0 after read." http://pastebin.com/raw.php?i=aJSkRMyC
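That description is enough to poll against already. A sketch of a blocking read -- using I as the ring index and the AND wrap-around are my own reading of "cyclic":

:poll   SET A, [0x9000+I]   ; read the current slot
        IFE A, 0            ; 0 = no char yet
        SET PC, poll        ; so keep polling
        SET [0x9000+I], 0   ; "set to 0 after read"
        ADD I, 1            ; advance around the ring
        AND I, 0x000F       ; wrap at 16 entries

After this runs, A holds the character.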
> IFE, IFN, IFG, IFB take 2 cycles, plus the cost of a and b, plus 1 if the test fails
It is a long time since I worked in assembly, but I don't remember comparison functions having different timings depending on results when I were a lad.
(FYI, most of the assembler I played with was for 6502s (in the old beebs) with a little for the Z80 family and early x86)
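For what it's worth, the extra cycle makes sense if you read the DCPU-16 conditionals as skip instructions: a failed test spends one cycle stepping over the instruction that follows. Something like this (labels are placeholders):

IFE A, 10       ; 2 cycles, plus 1 more if A != 10
SET PC, equal   ; executed only when A == 10
SET PC, differ  ; a failed test skips to here instead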
The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines so that the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline the higher the need for a good branch predictor.
http://en.wikipedia.org/wiki/Branch_predictor
EDIT: the inclusion of this is somewhat interesting, as there's not much point in simulating a pipelined processor unless you care about hardware details. My best guess is they're adding this "feature" to make compiling to assembly MORE difficult and increase the advantage of hand-written assembly. Branch prediction is a tricky thing to get right in compilers.
Am I right in thinking ARM uses this model? I haven't worked on them since ARM2, but I have a hazy recollection...
http://www.wss.co.uk/pinknoise/ARMinstrs/ARMinstrs.html#Bran...
(Thinks of a typical SWI handler...)
Everyone in Minecraft, too -- almost. The string encoding in Minecraft's protocol spec is UCS-2, just a sneeze away from UTF-16. It seems Notch has a soft spot for large encodings. It makes sense from a calculation and lookup perspective, but I wonder whether the increased bandwidth and storage of 16-bit blocks has a measurable impact.
If I were designing the game, I would favor internationalization over space efficiency in the virtual computer. That way Russian kids would get to have fun learning to write silly programs too.
The reason we use "RAM" as we generally think of it is basically that other forms of storage are obscenely slow, right? In this case, however, our "mass storage" device would actually be... RAM. Just not directly-addressable RAM. Loading code and data from "disk" as-needed could be relatively fast.
Edit:
> No interrupts.
Good catch, that's probably something we'll see added. I don't think Notch will like emulating a thousand polling CPUs...
Edit 2:
Just spotted this in Notch's twitter feed: https://twitter.com/#!/notch/status/187454444571598848
"An emulator is coming Eventually(tm). I want to sort out some more IO design first, not release too early."
So, I/O is still up in the air.
"Question: can we trade the programs we create? How will you stop malicious viruses etc?"
"yes. And I won't stop viruses, the players will have to do that themselves."
I can imagine nothing pissing off a noob more than getting a virus within 10 minutes of play and having no idea how to stop his ship from self-destructing.
Perhaps this will need some sort of firewall system built in where ships cannot communicate unless ports have been explicitly opened. Perhaps some sort of communications proxy that can serve for safe communications.
It could have a system similar to EVE, where high-security systems have infrastructure to mitigate risk, whereas low-sec systems lack this but contain the best rewards.
I can imagine how an economy of reputation could arise where some people are trusted to distribute malware-free code.
Plasma Shield Generator requests the following permissions:
* Read/write to the ship's log
* Draw power from the core
* Use the red alert system
Plasma Shield Generator will not:
* Access communication protocols
Here is an analogy: what you are talking about is secure walls with a lockable door; what you have in this chip is some wood, a saw, some nails, and a hammer.
You can't think about this in modern terms. It's not a modern computer. It's a 1980s computer, and it's supposed to run a spaceship. Think microcontrollers, not smartphones or set-top boxes.
So not everybody needs to actually know how to program; they can just buy stuff from NPCs, or from other players on the market. But obviously, the ones who do know the system inside out will have an advantage.
A road location would be rwxr--r-- while a road texture could be rwxrwxrwx.
Your player's character look could be rwxr--rwx.
A signpost could be rwxrwxr--.
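If anything like this made it into the game, a whole triplet fits in one word. The packing below is pure invention on my part:

SET A, 0x01E4   ; 0b111100100 = rwxr--r--, the road location above
AND B, A        ; B holds the requested-access bits
IFE B, 0        ; no permission bits in common?
SET PC, denied  ; refuse (label is a placeholder)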
I hope this is the opposite of EVE in practically every way... except the high-level concept. Let's not give Notch any ideas.
Examples would be: the economy, the ship configuration system, the art style, the security model, the unforgivingness (it made you think carefully before you acted).
The problem was just that the combat always felt a bit bland and favored players with the biggest ISK supply and XP. So in reality you needed to be part of a huge corp and wait months for skill training to be competitive.
It has 64K 16-bit words. What exactly are you expecting the default software to do that opens it up to viruses?
Obligatory (but illustrative of my point): http://www.pbm.com/~lindahl/mel.html
Not to mention the usual social engineering stuff. "Hey install this great program, it'll make your guns 200% more powerful"
Question: Will we need to program an OS? And does it run Linux?
This would be a tremendous way to do that.
Programming on normal computers is always kind of disconnected from the real world. So, your program can add two and two. Great. Now what?
Microcontrollers alleviate that issue to some extent. Now your program can walk and talk, and push real-world objects. But you kind of need to learn a bit of electronics too.
Notch's Universe solves this issue neatly. It's a mock real world, with programmable computers. Now your software can do some interesting mock-real-world stuff, like fly spaceships. That's pretty neat.
This is why most people advocate learning Python or Ruby: you don't have to deal with the underlying manipulation of the computer until you've decided you actually like programming.
It will not run Linux, because Linux requires a 32-bit CPU with megabytes of RAM. Someone might write a Linux-like system for it though.
Well, actually that sounds like a recurring theme, now only taken to a more complex extent.
For those who are puzzled (as I was) as to what this CPU is for, I found this:
> Notch's next game, 0x10c, "will allow players to create their own spaceship in the far, far future of an alternate universe ... More exciting - especially for those versed in programming languages - is the ship's computer. It's a fully-functional 16-Bit CPU that controls the entire vessel..." http://www.dealspwn.com/notchs-space-trading-game-real-calle...
Also: http://www.theverge.com/2012/4/4/2924594/minecraft-creator-n...
The verbatim search returns a very good match in #1 and about 80% of the results seem appropriate. It seems the recent synonyms fixes are not nearly enough.
And, for the interested: http://blog.steveklabnik.com/posts/2009-12-28-the-little-cod...
For what it's worth, _why's article is the third result for me, and a couple of other articles on it are in the top 10.
I started thinking that if I was going to get people interested in coding, I'd start with something more approachable - maybe something like a subset of Python, Java, Ruby or even JavaScript (it's where so many people write little programs anyway now). I heard Lua mentioned as a good scripting language too.
Then I wondered - is this CPU a reasonable target for those kinds of little languages? Is this a way to be language independent and have them all? If so, why not something more like the JVM to make it easy?
It seems to me that coding for this CPU is more like a mini-game in the main game; the fact that it's a challenge is part of the game.
That said, I can see a nice friendly language becoming popular for non-time-critical tasks where ease of experimentation is most important. I just hope it's not BASIC. :)
I previously wrote some assembler routines for x86, which is very complex; working with Notch's design is a breeze and actually enjoyable.
Does he mention anywhere whether the code is loaded into RAM? That would make self-modifying code possible.
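The single 64K address space in the spec suggests it is, in which case self-modifying code is trivial. A toy sketch, assuming the v1.1 encoding (where 0x8401 and 0x8801 are the one-word encodings of SET A, 1 and SET A, 2):

        SET [target], 0x8801   ; overwrite the instruction below in place
:target SET A, 1               ; assembled as 0x8401, but now executes as SET A, 2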
FORTH ?KNOW IF HONK ELSE FORTH LEARN THEN
1 2 + >r
becomes something like:
SET PUSH, 0x1   ; literal 1 onto the data stack
SET PUSH, 0x2   ; literal 2
SET A, POP      ; + pops both operands...
ADD A, POP      ; ...and adds them
SET X, SP       ; back up data stack pointer
SET SP, Y       ; switch to return stack
SET PUSH, A     ; >r pushes the sum there
SET Y, SP       ; remember the new return stack top
SET SP, X       ; and restore the data stack
Thoughts?
The first thing to do is to write a DCPU-16 assembler in Forth, and use that to write the primitives. That's pretty simple -- just look at the 6502 assembler: http://www.forth.org/fd/FD-V03N5.pdf
Some Forth systems have metacompilers so they can retarget themselves to different architectures. http://www.ultratechnology.com/meta.html
Using a metacompiler with a Forth DCPU-16 assembler would be the best way to go. Then you could easily experiment with different threading schemes, stack architectures, etc.
Edit: Just to clarify the interpreter point: I'd expect something like "1 2 + >r" to be represented at run time as four words in memory, with an interpreter, written in machine code, reading the code-word sequence and jumping to small bits of machine code that either do a built-in operation or push the call stack.
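The inner loop of such an interpreter can be tiny. A sketch of the threaded-code NEXT, where reserving I as the Forth instruction pointer is my own choice:

:next   SET A, [I]   ; fetch the address of the next word's machine code
        ADD I, 1     ; advance the instruction pointer
        SET PC, A    ; jump to it

Each primitive would then end with SET PC, next to fall back into the loop.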
Now I know I was dead wrong.
So, er, on a PC if I write data to I/O port 0x378 it'll appear on LPT1. What's the equivalent on DCPU-16? If I write data to a certain address, will it appear on the missile bay ports?
Or is there a level of abstraction?
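Nothing is pinned down yet -- per the tweet above, I/O design is exactly what Notch is still sorting out -- but memory-mapped I/O would presumably look like the keyboard buffer does: a magic address. An entirely made-up example:

SET [0x8000], 1   ; hypothetical write-one-to-fire missile-bay register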
I haven't tested it much, but it seems to work pretty well.
Also in this game I feel like I'll be playing the reverse of the role I play in real life. IRL I generally advocate using things like Python and open-sourcing everything. In this game I'm definitely going to pull a Steve Jobs: hire a few friends of mine to program my ASM apps and sell them in some kind of market at insanely marked-up prices, telling people that my Orange(TM) software is so much better than Macrosoft's(TM)! (And it will be!)
LLVM specifies types with explicit bitwidths, and i32 is by far the most commonly used, meaning a backend would have to either emulate 32-bit operations, forbid frontends from generating them, or silently truncate them all to 16 bits.
That said, an LLVM backend is overkill for this, and I'm sure the simple interpreter Notch has rigged up will be just fine for its purpose.
Conditional branching is strange.
JSR POP
Does the argument POP (or [SP++]) get evaluated before, or after, the "[--SP] <- PC" implicit in JSR?
Spec says 'a is always handled by the processor before b, and is the lower six bits.'
For Non-basic opcodes, 'a' is actually the opcode, and b is the argument. This would imply JSR is evaluated before POP.
What we want JSR POP to mean, of course, is 'jump to the last item on the stack, then push PC+1 to stack'. So I would guess that's how it actually works, and the spec needs a revision.
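Spelled out without JSR, that desired behavior looks like the following; pushing an explicit label as the return address sidesteps the separate question of exactly when PC is incremented:

        SET A, POP       ; evaluate POP first: the target comes off the stack
        SET PUSH, after  ; then push the return address
        SET PC, A        ; and jump
:after  SET B, A         ; execution resumes here when the callee returns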
"For Non-basic opcodes, 'a' is actually the opcode"
Actually it's the single argument -- 'o' is the opcode.
And what about creating this CPU?
SET A,B
Does A have to represent a register, or can it be a memory location -- or is it both, with the registers just being locations in RAM?
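Per the value table in the spec, registers and RAM are distinct value types -- the registers are not memory-mapped -- so both forms exist but are written differently:

SET A, B        ; register to register
SET [0x1000], B ; register into RAM
SET A, [B]      ; RAM, addressed through B, into a register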
https://github.com/swetland/dcpu16
(Found on Reddit. I haven't tried it.)
The PDP-8 was designed to be inexpensive to implement with the hardware available at the time. It would not be an ideal choice for this purpose.