I think the language is really solidly designed, and gives you ridiculously more power AND productivity than python for a whole range of workloads. There are of course issues, but even in the short time I've been following & using the language these are being rapidly addressed. In particular: generally less rich system of libraries (but some Julia libraries are state of the art across all languages, mainly due to easy metaprogramming and multiple dispatch) + generally slow compile times (but this is improving rapidly with caching etc). I would also note that you often don't really need as many "libraries" as you do in python or R, since you can typically just write down the code you want to write, rather than being forced to find a library that wraps a C/C++ implementation like in python/r.
I don't think this is really a feature. It's nice that you can write more performant code in Julia directly and don't need to wrap lower level languages, without question, but the lack of libraries or library features is not a good thing. It's always better to use a general purpose library that's been battle tested than to write your own numerical mathematics code (because bugs in numerical code can take a long time to get noticed)
For specialized scientific computing applications, which would normally be written in C/C++, I would absolutely look into using Julia instead (though I'm not sure what the OpenMP/MPI support is like). But I would also recommend against rolling your own numerical software unless you need to.
You are much less likely to reinvent the wheel if you can add your one critical niche feature / bugfix to an existing library. In python, learning C and C build systems and python's C API are gigantic barriers to doing that.
More importantly, if every fast data manipulation needs to be written in C, a few of them can be profitably shared, but you need more than a few of them. Pretty soon you wind up with a giant dumping ground of undiscoverable API bloat. See: pandas.
The format for the code samples goes like (code chunk —> output/plots —> bullet points explaining the code line-by-line). This creates a bit of a readability issue. The reader will likely follow a pattern like: (Skim past the code chunk to the explanation —> Read first bullet, referencing line X —> Go back to code to find line X, keeping the explanation in mental memory —> Read second bullet point —> ...). In other words, too much switching/scrolling between sections that can be pages apart. Look at the example on pages 185-187 to see what I mean.
I’m not sure what the optimal solution is. Adding comments in the code chunks themselves adds clutter and is probably worse (not to mention creates formatting nightmares). I think my favorite format is two columns, with the code on the left side and the explanations on the right.
Here’s what I have in mind (doesn’t work on mobile): https://allennlp.org/tutorials. Does anyone know of a solution for formatting something like this?
Happy for more feedback (Yoni Nazarathy).
[1] https://github.com/JuliaLang/julia/milestone/30
[2] https://discourse.julialang.org/t/julia-v1-2-0-rc2-is-now-av...
Take, for example, a simple program that creates a line plot (https://docs.juliaplots.org/latest/tutorial/):
using Plots
x = 1:10
y = rand(10)
plot(x, y)
After installing the package, the first run has to precompile(?), and subsequent runs use the package cache. But ~25 s to create a simple plot is incredibly slow and frustrating to work with.

$ julia --version
julia version 1.1.1
$ time julia plot.jl
julia plot.jl 73.71s user 4.45s system 110% cpu 1:11.04 total
$ time julia plot.jl
julia plot.jl 24.41s user 0.39s system 100% cpu 24.633 total
$ time julia plot.jl
julia plot.jl 23.38s user 0.36s system 100% cpu 23.519 total

$ julia --compile=min -e '@time (using GR; plot(rand(20)))'
0.375836 seconds (368.83 k allocations: 20.190 MiB, 1.65% gc time)
$ julia --compile=min -e '@time (using Plots; plot(rand(20)))'
4.302867 seconds (6.41 M allocations: 371.485 MiB, 5.07% gc time)

Of course, we continue to work on improving compile times. About half of the time is spent in LLVM compilation, which has actually become slower over time.
$ time julia -e "using PyPlot;x=1:10;y=rand(10);plot(x,y);"
real 0m5.676s

The next day I just ended up using C++/Eigen with a simple matplotlib binding [1]. The code is nearly indistinguishable from Python/Julia (except for having more verbose types where it makes sense, using "auto" otherwise), and the entire compile+run cycle takes less time for some short runs than it takes Julia to print "Hello World".
That being said, I'm not advocating for people to use C++. I would love to use Julia, and applaud the developers for their hard work and contribution to scientific computing, but as it stands right now, it doesn't seem to be the right tool for me, since I'm relying on fast editing/execution cycles.
I’d say for most people, there’s so much great progress and improvements happening that the breakages are well worth it.
using PyCall
np = pyimport("numpy")
np.fft.fft(rand(ComplexF64, 10))
That's it. You call it with a native Julia array, and the result comes back as a native Julia array as well.
Same with C++:
https://github.com/JuliaInterop/Cxx.jl
Or MATLAB:
https://github.com/JuliaInterop/MATLAB.JL
It's legit magic
The “Ju” in Jupyter is for Julia, so it’s designed to be used as an interactive notebook language also. The Juno IDE is modeled after RStudio.
I'd like to offer a counterpoint, or add to this.
It's quirky, but it has many packages backed by expert statisticians.
I hope Julia gets to be successful in this regard too.
Also the macro system allows one to define powerful DSLs (see Gen.jl for AI).
Although JuliaBox has been provided for free by Julia Computing, there has been discussion that this may not be possible in the future. However, Julia Computing does provide a distribution of Julia, the Juno IDE, and supported packages known as JuliaPro for free.
For new users, would the free JuliaPro distribution be a good alternative to JuliaBox and/or downloading the REPL and kernel from julialang.org?
JuliaBox (and https://nextjournal.com/) are cloud services, but if you have a real computer and want to do this for more than a few minutes, just install it. (There's also no need for virtualenv etc.)
[1] https://github.com/JuliaPlots/Makie.jl
I remain skeptical that this solves a lot of real-world problems (I know a lot of users of R/Python who never need to resort to writing their own C/C++ code), but that's the sales pitch.
Where Julia shines is when your workloads can't be phrased by stringing together the limited set of vectorised verbs that Python/R libraries give you: anything stateful and loopy, like reinforcement learning, systematic trading, Monte Carlo simulations, etc. It's also useful if you really care about performance and are doing "vanilla" computations at a truly large scale. If you want to avoid the memory copies that vectorised operations imply, or want to tightly optimise or fuse some numerical operations, it's great.
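To make the "stateful and loopy" point concrete, here's a toy sketch (my own hypothetical example, not from any library): a random walk whose step size depends on the current position. Because each draw feeds back into the next, there's no single vectorised NumPy expression for it; in pure Python you're stuck with a slow interpreted loop, which is exactly the case where Julia's compiled loops pay off.

```python
import random

def random_walk_peak(n_steps, seed=0):
    """Stateful simulation: each step depends on the current position,
    so the loop can't be collapsed into one vectorised operation."""
    rng = random.Random(seed)
    pos = 0.0
    peak = 0.0
    for _ in range(n_steps):
        # step size shrinks as we drift from the origin:
        # the state feeds back into the next random draw
        step = rng.gauss(0, 1) / (1 + abs(pos))
        pos += step
        peak = max(peak, pos)
    return peak

print(random_walk_peak(10_000))
```

In Python you'd have to push this loop into C/Cython/Numba to make it fast; in Julia you'd just write the same loop and it compiles to native code.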
The other issue with Python/R wrapping C++ libraries is that different libraries will generally not play well together (without dropping back into Python/R space and doing a lot of copying/allocation). This tends to encourage large monolithic C/C++ codebases like numpy and pandas, which are pretty impenetrable and difficult to extend or modify.
I tried to recreate something like AlphaGo in Python using Keras, I never got the learning to work (probably because I was impatient and training on a laptop CPU), but a lot of the CPU time was simply being spent on manipulating the board state.
So I ported my "Board" object to Rust, and it was a lot faster. Things like counting liberties or removing dead stones were a lot faster, which was important.
Then I rewrote the whole thing in Julia and it was just as fast as my Python / Rust combo.
So I saw for myself that Julia does solve the two language problem. It is as pleasant to write as Python (and I like it better actually), and performed as well as Rust, based on my informal benchmarks.
1. Indices by default start with 1. This honestly makes a ton of sense, and off-by-one errors are less likely to happen. You have nice symmetry between the length of a collection and the index of its last element, and in general have to do fewer "+ 1" or "- 1" adjustments in your code.
2. Native syntax for creation of matrices. Nicer and easier to use than ndarray in Python.
3. Easy one-line mathematical function definitions: f(x) = 2*x. Also being able to omit the multiplication sign (f(x) = 2x) is super nice and makes things more readable.
4. Real and powerful macros à la Lisp.
5. Optional static typing. Sometimes when doing data science work static typing can get in your way (more so than for other kinds of programs), but it's useful to use most of the time.
6. A simple and easy to understand polymorphism system. Might not be structured enough for big programs, but more than suitable for Julia's niche.
Really the only thing I don't like about the language is the begin/end block syntax, but I've mentioned that before on HN and don't need to get into it again.
Besides Dijkstra's classic paper[1] showing why 0-based indexing is superior, in practice I find myself grateful for 0-based indexing in Python because of how slices and things just work out without needing +1/-1.
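What "just work out" means, concretely (a few standard Python slicing identities, nothing exotic): with 0-based, half-open slices, lengths and boundaries compose with no ±1 corrections.

```python
arr = list(range(10))        # [0, 1, ..., 9]

# the length of a slice is simply stop - start
assert len(arr[2:7]) == 7 - 2

# adjacent slices share a boundary with no +1/-1 adjustment
k = 4
assert arr[:k] + arr[k:] == arr

# a slice of length n starting at index i is arr[i:i+n]
i, n = 3, 4
assert len(arr[i:i + n]) == n
```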
I'd like to understand. Could you give an example of when 1-based indexing works out better than 0-based?
Also, I find it elegant that with 1-indexing the start and end values of a slice are both inclusive, instead of the first being inclusive and the last exclusive.
Also, isn’t it just weird that the index of an element is one less than its “standard” index? Like, if I take the first n elements of a list, it would stand to reason that the nth element should be the last element, right?
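That off-by-one shows up directly in Python (a quick illustration of the complaint above):

```python
arr = ['a', 'b', 'c', 'd', 'e']
n = 3
first_n = arr[:n]            # the first n elements: ['a', 'b', 'c']

# ...but the "nth" element lives at index n - 1, not n
assert first_n[-1] == arr[n - 1]
```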
The reason for zero indexing is historical, related to pointer offsets. I don’t think anyone chose them to be easier for people. They just made them that way because it maps closer to how contiguous values in arrays are accessed.
Also, with 1-indexing I can multiply numbers by indices and get reasonable offsets: 3 × 1 is 3, so I get the third element of the list. But with 0-indexing, 3 × 0 is still 0, which gives me the same element back, clearly inconsistent.
There are some good reasons for 0-indexing and I have been using it in every language for my entire career. The amount of code I’ve written in Julia is marginal compared to my 0-indexing experience, so I might be missing something.
One nice thing about 0-indexing is that I can slice a list in half with the same midpoint. For example, a Python array with 10 elements:
fst, snd = arr[0:5], arr[5:10]
A little nicer than:
fst, snd = arr[1:5], arr[6:10]
You could have inclusive slices with 0-indexing, but it would be inconvenient and suffer from the same problem as 1-indexing.
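To spell that out: here's a hypothetical inclusive-slice helper (`incl` is my own name for illustration, not a real API). Even with 0-based indices, inclusive endpoints reintroduce the asymmetric midpoint that 1-indexing has.

```python
def incl(seq, i, j):
    """Hypothetical inclusive slice: elements i..j, both ends included."""
    return seq[i:j + 1]

arr = list(range(10))

# half-open (Python's default): both halves use the same midpoint
assert arr[0:5] + arr[5:10] == arr

# inclusive endpoints: the midpoint splits into 4 and 5, as with 1-indexing
assert incl(arr, 0, 4) + incl(arr, 5, 9) == arr
```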
For Python teaching this is almost a whole chapter, with people sharing cheat sheets and building graphics to show how slicing works and whatnot. You don't see these things in R teaching materials.
I'm sure that for the implementation of algorithms, things might be easier with zero indexing, but for a user asking for elements 4, 5 and 6, 1-indexing is much, much easier on the user.
for (size_t i = v.size() - 1; i >= 0; --i) {
    std::cout << i << ": " << v[i] << std::endl;
}
To fix the infinite loop (size_t is unsigned, so i >= 0 is always true and i wraps around below zero), you could write:
for (size_t i = v.size(); i > 0; --i) {
    std::cout << i - 1 << ": " << v[i - 1] << std::endl;
}
Neither is great. Switching to signed integers might make your compiler throw warnings at you.

However, 1-based indexing does not work out well with modular arithmetic:
# 1 based
v[1 + (i - 1) % v.size()]
# 0 based
v[i % v.size()]
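The same comparison in runnable form (a small sketch of my own; `get1` is a hypothetical 1-based accessor, not a real API): with 0-based indices, wrap-around is a single modulo, while emulating 1-based access needs the shift-down/wrap/shift-up dance.

```python
buf = ['a', 'b', 'c', 'd']

# 0-based: wrapping an index around a ring buffer is one modulo
assert buf[5 % len(buf)] == 'b'

def get1(seq, i):
    """Hypothetical 1-based accessor with wrap-around:
    shift to 0-based, take the modulo, index."""
    return seq[(i - 1) % len(seq)]

# 1-based: element 6 wraps around to element 2
assert get1(buf, 6) == 'b'
```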
There's pros and cons with both schemes.

The goals of Python were quite different from the goals of Julia.
Python programmers seem content implementing the same things over and over again. Like, for example, flattening a list/monad.
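Case in point, the list-flattening idiom everyone rediscovers. The standard library does have a building block (itertools.chain.from_iterable), but there's no obvious single `flatten`, so the same three spellings get reinvented constantly:

```python
from itertools import chain

nested = [[1, 2], [3], [4, 5, 6]]

# the three spellings people keep rediscovering:
flat_comprehension = [x for sub in nested for x in sub]
flat_chain = list(chain.from_iterable(nested))
flat_sum = sum(nested, [])   # works, but is quadratic for long lists

assert flat_comprehension == flat_chain == flat_sum == [1, 2, 3, 4, 5, 6]
```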
List of things python doesn’t have but should: pattern matching, multi-line lambdas, more data structures (look at Scala for an example of what kind of data structures a standard library should provide), real threading, options, monads, futures, better performance, and more.
I wouldn’t go that far and say Python is suitable for large programs. It’s clearly not. Working on a large python code base is hell.