Love the quick compile times and go-like single binary compilation.
Case-insensitivity seems very cool for interoperability. Total underappreciated gem.
My only hangup is that the syntax feels wordy, but it's leagues better than Rust or Go in that regard. A bit concerned about the performance cost of using macros to cut down on the wordiness (I would do this).
Otherwise great to see a 2.0 and always keeping a close eye on the Nim ecosystem as I would definitely consider adopting it.
The whole point is that they happen at compile time.
EDIT: FWIW, the compile-time impact mostly just depends on what he wants to do. The macro evaluator is not that slow a virtual machine. Simple substitution macros tend to run quite fast. But it's also easy (with loops!) to generate a large pile of Nim code that generates a ginormous pile of C code that the backend might have to chew on for quite a while.
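To make that concrete, here's a minimal sketch (the `unrollSum` macro is hypothetical) of a macro whose compile-time loop runs quickly on the macro VM but whose output grows with `n`, so the generated code is what the backend has to chew on:

```nim
import std/macros

# A macro that builds the expression 0 + 1 + 2 + ... + n at compile
# time. The loop itself runs on the macro VM; the expanded expression
# is what ends up in the generated C code.
macro unrollSum(n: static int): untyped =
  result = newLit(0)
  for i in 1 .. n:
    result = infix(result, "+", newLit(i))

# Expands to a literal chain of additions, evaluated as ordinary code.
doAssert unrollSum(100) == 5050
echo unrollSum(100)
```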
There is also a really nice-looking binding for Unreal Engine 5 that is pretty far along
Contrasting that with my experience with Nim, I could get going almost immediately and thus didn't get held up much with the basics, which gave me more time to dive into the more interesting concepts it provides, like templates, compile-time evaluation (new for me as a Pythonista back then), macros, etc. I still got stuck here and there, mind you, but that was for more complex questions than "How do I get a config from place X without the compiler yelling at me", and the Discord was very helpful for that.
Of course. Compare, at the extremes, languages like APL or Brainfuck to something like Python or Scratch. Syntax is a huge factor in people wanting to learn to code or not.
Q: "Why haven't you implemented feature X -- python has it, so nim should too"
A: "Nim is not python, it is a different language, not all things implemented there are good, so we don't implement everything"
So this is not "against python"; it is against a specific attitude towards nim where people think it should be a python clone instead of a separate language
The fact that nim doesn't have exactly the same stdlib defined as python (though there's a package for that)? That Araq has said that nim isn't a variation of python?
I've since switched to Rust. The syntax isn't quite as nice but it's outweighed by the fact that the community is great and the memory safety is a game changer!
Heap allocation is a very significant cost compared to the stack. Nim's designed for different use cases. Rust statically checks memory usage, and provides Arc for use cases that can only be modeled dynamically.
In Nim using ARC/ORC the compiler statically checks the memory usage, possibly helped by lent and sink annotations (similar to borrow and lifetime annotations in Rust, but not as powerful yet) and for what can't be proved statically it adds reference counting. And even under ARC you will probably be using stack allocations a lot in Nim.
I think the main difference between Rust and Nim approaches to memory management is opt-in vs opt-out. In Rust you are forced from the start to think about it and be more explicit about it, while in Nim the lifetime annotations are more like optional optimizations that may be added gradually to help the compiler, that might otherwise be adding reference counting and copies behind your back to maintain correctness.
For some use cases, being forced from the start to think at a lower level of abstraction is helpful and makes the performance more predictable and transparent. People still use C partially because of that. But I like Nim's approach, where I can use it as productively as a scripting language, avoiding premature optimization, but can easily delve as low-level as I want to write a keyboard firmware or a GBA game.
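A minimal sketch of those opt-in annotations (the `Config` type and proc names here are hypothetical, just for illustration):

```nim
type Config = object
  name: string

# `sink` moves ownership of the argument into the proc, letting the
# compiler elide the copy it would otherwise insert to stay correct.
proc store(configs: var seq[Config]; c: sink Config) =
  configs.add c

# `lent` returns a borrowed view into the sequence instead of a copy.
proc first(configs: seq[Config]): lent Config =
  configs[0]

var cs: seq[Config]
cs.store(Config(name: "dev"))
doAssert cs.first().name == "dev"
```

Leaving the annotations off still compiles and runs correctly; the compiler just falls back to copies or reference counting where it can't prove a move is safe.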
For stack-based values, Nim's 'var' and 'openArray[T]' are roughly equivalent to simple implicit lifetimes. I don't know the proper parlance vs more complicated lifetime situations like "capturing" a lifetime.
It's not too different from C++ references either. Sounds like D will also check for similar "lifetime" violations soon as well.
The real difference is heap memory. There, Rust's lifetimes allow you to give ownership away. Nim's var parameters can't do that, but instead it gives you fast and cheap non-atomic ref counting. Rust's way does encourage slightly faster code on average, at the expense of more programmer effort.
Whether Rust or Nim, allocations are expensive. Though in my experience Nim's allocator is amazing. It can sometimes be faster to use refs than stack values.
[1] https://forum.nim-lang.org/t/9132
[2] https://gist.github.com/j-james/08cd36b7475d461d6291416381ce...
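The 'openArray[T]' point above can be sketched like this: the parameter is a borrowed view over any contiguous buffer, which the callee can read but not keep alive past the call, roughly an implicit lifetime.

```nim
# openArray[int] accepts both a fixed-size array and a seq by
# borrowed view; no copy is made at the call site.
proc total(xs: openArray[int]): int =
  for x in xs:
    result += x

let fixed = [1, 2, 3]       # stack-allocated array
let dynamic = @[4, 5, 6]    # heap-backed seq
doAssert total(fixed) == 6
doAssert total(dynamic) == 15
```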
* Remove backwards-compatible switches / deprecated features
* Remove every memory option except orc, arc, none, go
* Shrink stdlib
* Comprehensive stdlib cleanup
* Support default values in object construction
* Language features designed for parallelism
* Lock in --threads:on and --mm:arc/--mm:orc
* Official support for overloadable enum names
* Robust concurrency/multithreading
* Fix bugs!
I feel like the memory-option removals remain more "planned/maybe revisable/maybe not really agreed upon as the right direction", with only a change of AMM defaults happening in 2.0. (I'm also not sure what the value of removal really is, except less to learn about/more inflexibility.) The rest of the list seems represented, though.
It is like TypeScript to JS, in the C/C++ world.
Very clever to stand on the shoulders of giants this way, but the number of moving parts is staggering and horrifying.
So Nim's approach has some challenges, but surprisingly fewer moving parts than you'd think. I'd be confident I could get the current Nim compiler up and running in 5 or 10 years with minimal effort.
However, there was a lot of effort to get things like pointers and strings in the stdlib to play nicely across the NimVM, JS, and C backends. There are some gross details there. But in the end it's beautiful.
[1] https://forum.nim-lang.org/t/8677 [2] https://musl.libc.org/ [3] https://github.com/kaushalmodi/hello_musl/blob/master/config...
Or C++ and Objective-C in their early days, or the P2C Pascal compiler from 40 years ago.
the program is nothing more than a sequence of plain characters, stored in a text file. This abstraction is a complete mystery for the computer, which understands only instructions written in machine language. Thus, if we want to execute this program, the first thing we must do is parse the string of characters of which the high-level code is made, uncover its semantics—figure out what the program seeks to do—and then generate low-level code that reexpresses this semantics using the machine language of the target computer. The result of this elaborate translation process, known as compilation, will be an executable sequence of machine language instructions.
Of course, machine language is also an abstraction—an agreed upon set of binary codes. To make this abstraction concrete, it must be realized by some hardware architecture. And this architecture, in turn, is implemented by a certain set of chips—registers, memory units, adders, and so on. Now, every one of these hardware devices is constructed from lower-level, elementary logic gates. And these gates, in turn, can be built from primitive gates like Nand and Nor. These primitive gates are very low in the hierarchy, but they, too, are made of several switching devices, typically implemented by transistors. And each transistor is made of—Well, we won’t go further than that, because that’s where computer science ends and physics starts.
Excerpt from https://www.nand2tetris.org/book

Even the C code (on many platforms) is going to be translated to LLVM IR
If I understood correctly, like the Vala language: https://vala.dev/ (Note: Vala is strongly integrated with GObject).
The syntax is very pleasant to write and read, even if I have some small issues with it. No unnecessary noise like braces and semicolons, no misuse of the less-than and greater-than signs as brackets.
Also, it uses terminology more correctly than other languages. Procedures are procedures, not “functions”. Resizable arrays are called sequences, not “vectors”. Immutable variables are immutable variables, not “constants”.
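A minimal snippet showing that terminology in code (the names here are just illustrative):

```nim
proc double(x: int): int = x * 2  # a procedure
var xs: seq[int] = @[1, 2, 3]     # a resizable sequence
xs.add 4
let y = double(21)                # an immutable variable
const z = 42                      # a compile-time constant
doAssert xs.len == 4 and y == z
```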
The standard library is quite good (though it could use improvements) and extensive enough that you don't get hundreds of dependencies per project like in JavaScript or Rust.
With Python, I used repl all the time. There's bpython, ptpython, ipython, and probably a couple more great repls because repl is really important for Python development.
Nim's INim is no match for those. But here's the great part: with Nim you don't have to test code snippets all the time. I get all my error messages before they happen. This felt liberating after Python.
That left only Go, which I heard about around the same time as Nim. I started out with Nim and I liked it enough that I stuck with it.
Why do I use it over other languages? I feel like I can express myself in English-readable sentences (which gives it the Python vibe for me), and at the same time I have static typing, a really nice type system in general, and very little chance of crap like null pointers occurring.
My first aha moment was pretty much 3 hours in, when I noticed I could already write simple stuff and only needed to consult the stdlib docs here and there. The second was when I reimplemented a webserver backend that I previously had in Django (not for practical reasons, just to see how fast it could go with even very little optimization) and found it performing roughly 2-5 times faster (measured by looking at request response time), despite having optimized Django's ORM down to as few queries as physically possible to get the data I needed. (For reference: that's quite surprising, given that a decent chunk of that time is literally just network latency.)
In no particular order, things I like about Nim:
- It has most of the benefits of a scripting language, without most of the tradeoffs. Hello World is a one-liner, there isn't much syntactic noise, and it's very easy to write short, simple, useful programs that look like pseudocode, live in a single file without any dependencies, and can be kicked off with a shebang line. However, unlike scripting languages, it scales very well to large projects.
- Progressive disclosure. You can use Nim effectively while knowing very few of its features, but there's a lot of functionality available when you're ready.
- The "if it compiles, it works" factor is quite high. Not as high as Rust or Haskell, but higher than Java or Go, in my experience.
- The "it compiles on the first try" factor is also quite high. Higher than any language I've tried. "I wonder if this will work..." code usually does, the advanced features of the language mostly stay out of the way until you need them, and I rarely find myself working just to make the compiler happy. Just as an example, unless you add specific constraints, generics are "duck typed". If I pass a type to a generic proc the compiler will verify that it has the properties and functions the proc needs, but I don't have to define a specific interface up front.
- Similar to the above, the productivity vs safety balance seems right, at least for me. Code is fairly safe by default, but it's easy to work around the compiler if you need to do something it doesn't want you to do. It's also pretty easy to enforce additional safety when you need it, like ensuring a function can't throw exceptions or have side-effects.
- It's very good at building abstractions and eliminating boilerplate. Nim templates and generics are easy to use and quite powerful, and macros are there if you need something more advanced. Many features that need explicit compiler support in other languages, like async/await and string interpolation, are implemented in the Nim stdlib with macros, not the compiler.
- Nim produces standalone binaries that are both small and fast.
- The compiler is faster than most.
- The compiler can be imported as a library, making it pretty easy to write tools that understand Nim code.
- Nim programs are usually compiled, but there's also an interpreter. The compiler uses this for macro evaluation and for running build scripts, and it's easy to embed if you want your program to be scriptable.
- Nim can run on pretty much anything.
- Nim can be used to build almost anything. You can use it for systems programming, webdev (frontend and backend), games, ML, scripting, scientific computing, and basically anything else. There are definitely some domains where library support is lacking currently, but the language itself is suitable for any type of program.
- It's very flexible. Most Nim code is imperative, but it's easy to write functional, declarative, or OO style code if that's your thing.
If you prefer languages like Go that favor an abstraction-free style, where everyone's code looks more or less the same, you probably won't like Nim. However, if you want something more expressive like Ruby or Lisp, but don't want to sacrifice performance or safety, Nim is definitely worth a look.
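The "duck typed" generics point above can be sketched like this (the types and proc are hypothetical): no interface is declared up front, and the compiler checks at instantiation that each type has what the proc needs.

```nim
# Two unrelated types that both happen to have a `name` field.
type
  Dog = object
    name: string
  Robot = object
    name: string

# No constraint on T: the only requirement, checked per instantiation,
# is that `x.name` exists and is a string.
proc greet[T](x: T): string =
  "hello, " & x.name

doAssert greet(Dog(name: "Rex")) == "hello, Rex"
doAssert greet(Robot(name: "R2")) == "hello, R2"
```

Passing a type without a `name` field would be rejected at compile time, at the call site.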
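As a sketch of the boilerplate-elimination point: a hypothetical `timed` template that splices its block argument in at compile time, so there is no closure or function-pointer overhead.

```nim
import std/times

# The `body` block is pasted into the expansion; `start` is hygienic,
# so it cannot collide with names in the caller's scope.
template timed(label: string; body: untyped) =
  let start = cpuTime()
  body
  echo label, " took ", cpuTime() - start, "s"

var total = 0
timed "sum":
  for i in 1 .. 1_000_000:
    total += i
doAssert total == 500_000_500_000
```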
The simple answer is this: anything I want to do, I can do in Nim easier than other languages, while also having direct access to C/C++/JS ecosystems as well.
Productivity:
1. I write pseudocode, it compiles fast and runs at C speeds. Programming is fun again!
2. No `::!>><>^<<@{};` cruft everywhere. Write as you want to read, even spanning multiple lines is clean and without separators.
3. Procedural: only data types and code. No need for OOP/Trait abstractions to be forced into everything (it's there if you must).
4. UFCS and overloading by parameter types make everything naturally extendable: `len("string")` can also be written `"string".len` or `len "string"` - you don't have to remember which way round it goes, and 99% of the organisational benefit of OOP emerges from this syntax rule. Defining the same name with the same parameter types is a compile-time error, so there's no ambiguity.
5. Sensible defaults everywhere: type inference, stack-allocated and pre-zeroed variables by default, extend with GC/manual management by type (`type HeapInt = ref int`), GC is deterministic with scope-based destructors and move semantics as an optimisation rather than a straitjacket, detailed opt-in control down to assembly as you wish.
6. Arguably the best compile time support of any language.
7. AST procedural metaprogramming is a core language feature. It's hard to express how powerful and well integrated this is. You can just chuck code around and recombine it effortlessly. Whether it's simple DRY replacements, automating serialisation from types, custom DSLs at your convenience, or even generating entire frameworks at compile time from data, you effectively have another dimension to programming. I can't go back to flatland, now.
8. Flexible static typing that's as strict (`type specialId = distinct int`) or generic as you want, with concepts matching any statement against a type. You can also calculate or compose types from static values which is really nice.
9. Low overhead and high control makes it great for embedded: https://github.com/EmbeddedNim
10. Fantastic FFI that can even use C++ templates, along with loads of converters/wrappers like c2nim, futhark, pas2nim that add even more sugar to FFI interop.
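Point 4 (UFCS) in practice, with a hypothetical proc: one definition, several equivalent spellings of the call.

```nim
proc shout(s: string): string = s & "!"

doAssert shout("hi") == "hi!"   # classic call
doAssert "hi".shout == "hi!"    # method-style, via UFCS
doAssert "hi".shout() == "hi!"  # with parentheses
let loud = shout "hi"           # command syntax
doAssert loud == "hi!"
```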
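Point 5 (defaults and per-type heap management) can be sketched as follows; `HeapInt` is the example type from the list above.

```nim
# Variables are zero-initialized by default.
var n: int          # lives on the stack, starts at 0
doAssert n == 0

# Heap management is opted into per type with `ref`.
type HeapInt = ref int
var h: HeapInt
new(h)              # heap allocation, freed deterministically by ARC/ORC
h[] = 42
doAssert h[] == 42
```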
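Point 8 (`distinct` types) in a minimal sketch, with hypothetical ID types: the new type shares int's representation but has no implicit compatibility, so the two kinds of ID can't be mixed up.

```nim
type
  UserId = distinct int
  GroupId = distinct int

proc lookupUser(id: UserId): string =
  "user#" & $id.int   # explicit conversion back to int for display

doAssert lookupUser(UserId(7)) == "user#7"
# lookupUser(GroupId(7))  # would be a compile-time type mismatch
```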
Portability and glue:
- Single portable executable output.
- Compiles to C89/C99, which covers basically every piece of hardware.
- Compiles to C++ so you have C++ ABI compatibility.
- Compiles to JavaScript.
- Compiles to ObjC.
- Compiles to LLVM.
- Excellent Python interop (see: Nimpy).
- Libraries for interfacing with C# and .Net.
Similarly, case insensitivity, of course, means `foo == fOO == fOo == foO`. Reducing these to one identifier means less ambiguity for the programmer, and encourages using sensible names that can't easily be confused, and/or unambiguous mechanisms like the type system, which is designed for this purpose.
Say you have a constant for 'light enabled'. Instead of naming it `lEn`, it's better to use `type LightState = enum disabled, enabled` and write `light.set enabled`, for so many reasons. Same goes for pretty much everything. It's worth noting as well that the language is very good at resolving overloads by type, so if you have your own object and create a `len` for it, it's not going to get confused with the string `len` or even a `lEn` constant. Even if you have a `lEn` and a `len` that are used in the same context with the same types (why, though?), you can qualify it anyway with `module1.lEn` or `module2.len`.
The language has so many easy tools to make things more explicit and easier to read at the same time. Ultimately, case sensitivity only really ends up encouraging bad naming practice without adding anything back.
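For reference, Nim's rule is partial style insensitivity: identifiers that differ only in case or underscores after the first letter resolve to the same symbol. A minimal sketch (the proc is hypothetical):

```nim
proc makeFile(name: string): string =
  "created " & name

# All three spellings resolve to the same proc.
doAssert make_file("a.txt") == "created a.txt"
doAssert makefile("b.txt") == "created b.txt"
```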
https://twitter.com/ID_AA_Carmack/status/1592560938208710659...
Why would having two functions, named makeFile and make_file in the same program ever be a good idea?
Think of it as less of a language syntax feature and more of a code style enforcement paradigm.
Good lsp support makes it mostly fine imo.
That's the real problem
The way that modern languages force a single style is great, and it's a big strike against nim, which is an otherwise nice language.
That's your take? I can see a couple of actual nim users complaining, the rest are some randos that tried to derail any productive discussion with "I have an old C library that uses three different styles for init, how would I bind it to Nim?" and although a bunch of workarounds exist, they were too loud and stubborn. For me it showed that most nimmers prefer style insensitivity as an option, although they don't use it in the same project.
https://nim-lang.org/docs/manual.html#lexical-analysis-ident...
I do agree that this is a red flag decision.
Honestly though, I'm a bit let down in that it doesn't seem to cover what I find the most interesting aspect of Nim: its ability to compile to multiple intermediate programming languages like C, C++ or JavaScript and use their libraries. I was hoping to find out how to write VSCode extensions entirely in Nim (I know it's possible because the Nim vscode extension itself is now 100% Nim, but there seems to be no tutorial for how to do it online)
I had started reading "Nim in Action" and I might finish that first since it does cover FFI, but it is rather old (it was released before version 1.0)
You can find Nim in Action on the Manning website: https://www.manning.com/books/nim-in-action
I tend to use Java and C for multithreaded problems but having the availability of Nim which looks similar to Python is really promising.
Go uses an M:N scheduler, with M kernel threads and N lightweight threads.
I wrote a 1:M:N lightweight scheduler which preempts hot loops. I don't know when Golang preempts goroutines outside of a channel send; I believe it's on stack growth, or in other words when a method is called.
My userspace scheduler preempts `while true` and `for` loops by setting the looping variable to the limit.
I wrote it in Java, C and Rust.
https://GitHub.com/samsquire/preemptible-thread
Does anybody know if Nim loop variables can be easily mutated from another thread?
It looks like they cannot
I think Elixir's syntax and idioms are better than Go's, but that's personal preference.
It might be premature to judge Nim's popularity, respective timelines considered. At the same time, the language options available are probably nicer in 2022 than in 2010, so it will be harder to take off.
like gophers for go, crabs for rust, dinosaurs for zig
though I'd imagine the number will remain small as Nim's too artisanal for enterprise to grok.
https://github.com/orgs/status-im/repositories?q=&type=all&l...