Yes, there's a kernel component that needs to be installed, but these days that part is usually easy. The situation is typically one of the following:
1) You're using a container-ish environment where the host kernel has the CUDA drivers installed anyway (but your base container image probably doesn't have the userspace libraries)
2) The kernel driver comes with your OS distribution, but the userspace libraries are outdated (these include things like JIT compilers, which have lots of bugs and need frequent updates) or are missing some of the optional components that have restrictive redistribution clauses.
3) Your sysadmin installed everything, but then helpfully moved the CUDA libraries into some obscure system-specific directory where no software can find them.
4) You need to install the kernel driver yourself, so you find it on the NVIDIA website, but don't realize there are another five separate installers you need for all the optional libraries.
5) Maybe you have the NVIDIA-provided libraries, but then you need to figure out how to install the third-party libraries that depend on them. Given the variety of ways CUDA can be installed, this is a hard problem for other ecosystems to solve.
In Julia, as long as you have the kernel driver, everything else gets set up and installed for you automatically. As a result, people are usually up and running with GPUs in a few minutes.
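A minimal sketch of that workflow with CUDA.jl (assuming only the kernel driver is present; on first use, CUDA.jl downloads a toolkit build matching your driver, so the vector-add below requires an actual GPU to run):

```julia
# Assumes only the NVIDIA kernel driver is installed; on first `using CUDA`,
# the package downloads matching userspace libraries (toolkit, JIT compiler, etc.)
using Pkg
Pkg.add("CUDA")           # fetch the package itself

using CUDA
CUDA.versioninfo()        # prints driver, runtime, and library versions
CUDA.functional() || error("no usable GPU found")

# Quick sanity check: add two vectors on the GPU
a = CUDA.fill(1.0f0, 1024)
b = CUDA.fill(2.0f0, 1024)
c = a .+ b                # broadcast compiles to and runs as a GPU kernel
@assert all(Array(c) .== 3.0f0)
```

No CUDA_PATH, no ldconfig, no hunting for the sysadmin's library directory: the package manager treats the toolkit like any other artifact.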