Installing PyTorch with Poetry is next to impossible. Flux got this right by bundling the GPU drivers. Their installation is also standardized and doesn't require the weird `pip -f` flag for CPU-only installations.
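For context, this is the kind of incantation meant above. The exact version pins are illustrative; the last line just verifies the flags exist without downloading anything:

```shell
# CPU-only PyTorch, current style (points pip at the CPU wheel index):
#   pip install torch --index-url https://download.pytorch.org/whl/cpu
# Older style, using the -f/--find-links flag mentioned above:
#   pip install torch==1.13.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
# Confirm pip actually supports these flags:
pip install --help | grep -- '--index-url'
```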
It had. It's now at rough parity with PyTorch.
And no, it wasn't about a usability tradeoff.
It was about being more general: a more general compiler, more general code, more composable code.
Since then, the team has been optimizing that and adding compiler optimizations to the language that benefit all code. ML-type code stresses the compiler in a particular way; PyTorch handles ML's array-heavy workloads as a special case.
Julia will be doing the same, but it's laying the groundwork for domain-specific optimizations to be done in package and user space. A different sort of philosophy.
It was about being more greedy and setting the groundwork for a more powerful tool in general, at some short-term cost.
They could have just written a framework that baked in fp32/64/16 with CUDA kernels, tracing, and operator-overloading computational graphs, and gotten speedups over PyTorch with better usability (in fact, Avalon.jl takes that approach).
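To make "tracing and operator-overloading computational graphs" concrete, here is a toy Python sketch of the technique. This is not any framework's actual API, just the bare idea: arithmetic on wrapped values records a graph as a side effect of computing.

```python
# Minimal sketch of an operator-overloading tracer: each arithmetic op
# records a node in a computational graph while also computing eagerly.
class Node:
    def __init__(self, op, value, parents=()):
        self.op = op            # name of the operation that produced this node
        self.value = value      # concrete value, computed eagerly
        self.parents = parents  # upstream nodes, forming the graph

    def __add__(self, other):
        return Node("add", self.value + other.value, (self, other))

    def __mul__(self, other):
        return Node("mul", self.value * other.value, (self, other))

def trace(f, *inputs):
    """Run f on wrapped inputs; the result carries the recorded graph."""
    return f(*[Node("input", v) for v in inputs])

out = trace(lambda x, y: x * y + x, 3.0, 4.0)
print(out.value)                             # 15.0
print(out.op, [p.op for p in out.parents])   # add ['mul', 'input']
```

A framework that bakes this in gets a graph it can compile to CUDA kernels, which is exactly the fast-but-narrow path described above.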
But they didn't, and now there's a burgeoning ecosystem that does things no other framework can. The marginal benefit for current vanilla ML is smaller, because vanilla ML is stuck in a local optimum, but I think that is going to change: https://www.stochasticlifestyle.com/useful-algorithms-that-a...
In the meantime, places like MIT, Moderna, NASA, etc. are reaping the benefits.
1. Better compile-time memory management (https://github.com/aviatesk/EscapeAnalysis.jl)
2. Linear algebra passes built on a generic, composable compiler ecosystem: https://youtu.be/IlFVwabDh6Q?t=818
3. Metatheory.jl: e-graph-based symbolic optimization interleaved with the abstract interpreter: https://github.com/0x0f0f0f/Metatheory.jl
4. Partial evaluation via mixed concrete and abstract interpretation
5. Compiler-based auto-parallelism with Dagger.jl
6. New compiler-integrated AD (as a package) that isn't based on an accidental lispy compiler hack like Zygote: https://github.com/JuliaDiff/Diffractor.jl
7. Changes to array semantics that will include generic immutability/ownership concepts.
And many more. The key is that all the initial groundwork that traded specific speed for fundamental flexibility will then feed back into making the ML use case faster than if it had been the focus from the start. People can do all kinds of crazy yet composable things, in pure Julia, without modifying the base compiler.
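For a feel of the baseline that item 6 is moving beyond: the simplest "AD as a library" technique is forward-mode dual numbers via operator overloading. A toy Python sketch (Diffractor.jl itself works at the compiler IR level instead; this is just the contrast case):

```python
# Toy forward-mode AD with dual numbers: each value carries its derivative.
# This is "AD as a library" in its simplest form, for contrast with
# compiler-integrated approaches that transform IR directly.
class Dual:
    def __init__(self, val, grad=0.0):
        self.val, self.grad = val, grad

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.grad + other.grad)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (fg)' = f'g + fg'
        return Dual(self.val * other.val,
                    self.grad * other.val + self.val * other.grad)

def derivative(f, x):
    """d/dx f(x), by seeding the dual part of x with 1."""
    return f(Dual(x, 1.0)).grad

# d/dx (x^2 + 3x) at x=2 is 2x + 3 = 7
print(derivative(lambda x: x * x + x * 3, 2.0))  # 7.0
```

The library version only differentiates code it can intercept with overloads; a compiler-integrated version sees all of it.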
Bonus: being able to modify the type lattice to track custom program properties. This means you aren't stuck with the global tradeoffs of a static type system and can do things like opt-in, per-module tracking of array shapes at compile time: https://twitter.com/KenoFischer/status/1407810981338796035 Other packages, e.g. for quantum computing, are planning to do their own analyses. It's generic, and the use cases and compositions aren't frozen at the outset (unlike, for example, the Swift "tensors fitting perfectly" proposal).
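To show what shape tracking buys, here's a deliberately dumbed-down Python sketch that checks matmul shapes eagerly at runtime. The Julia mechanism linked above does the equivalent propagation at compile time through the abstract interpreter; the class and method names here are made up for illustration:

```python
# Runtime sketch of "array shapes as part of the type": every operation
# checks and propagates shapes. The compile-time version catches the same
# mismatch before the code ever runs.
class ShapedArray:
    def __init__(self, shape):
        self.shape = shape  # (rows, cols)

    def matmul(self, other):
        m, k = self.shape
        k2, n = other.shape
        if k != k2:
            raise TypeError(f"shape mismatch: {self.shape} @ {other.shape}")
        return ShapedArray((m, n))

a, b = ShapedArray((2, 3)), ShapedArray((3, 4))
print(a.matmul(b).shape)  # (2, 4)
```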
Can you elaborate more? MIT is well known, but it would be interesting to know how Moderna and NASA are using Flux.
NASA: https://www.youtube.com/watch?v=tQpqsmwlfY0
Moderna: https://pumas.ai/ https://discourse.julialang.org/t/has-moderna-used-pumas-ai-...
There are many, many more. These unique and sought-after capabilities are what got Julia Computing its $24M Series A (https://twitter.com/Viral_B_Shah/status/1417128416206376960)
In some cases, it is much faster.
Consider Neural Stochastic Differential Equations: Flux is literally over 70,000x faster than Google's PyTorch-based implementation:
https://gist.github.com/ChrisRackauckas/6a03e7b151c86b32d74b...
GPU drivers are kernel-land, and I don't think we can actually install GPU drivers as part of a `pip install`. I'll look into what Flux is doing, but I doubt they ship GPU drivers.
Separately, thanks for flagging the Poetry issue, we might prioritize it, especially if the fix is easy.