> Magic Lantern is a free software add-on that runs from the SD/CF card and adds a host of new features to Canon EOS cameras that weren't included from the factory by Canon.
It also backports new features to old Canon cameras that aren't supported anymore, and is generally just a really impressive feat of both (1) reverse engineering and (2) keeping old hardware relevant and useful.
Today, cameras like Blackmagic and editing platforms like DaVinci handle RAW seamlessly, but it wasn't like this even a few years ago.
I think possibly someone thought it sounded a bit like firmware?
I'm the current lead dev, so please ask questions.
Got a Canon DSLR or mirrorless and like a bit of software reverse engineering? Consider joining in; it's quite an approachable hardware target. No code obfuscation, just classic reversing. You can pick up a well supported cam for a little less than $100. Cams range from ARMv5te up to AArch64.
Is this situation still the same? (Apologies for the hazy details -- this was 5 years ago!)
Anyway, I can happily talk you through how to do it. Our discord is probably easiest, or you can ask on the forum. Discord is linked from the forum: https://www.magiclantern.fm/forum/
Whatever code you had back then won't build without some updates. 4000D is a good target for ML, lots of features that could be added.
> I'm the current lead dev, so please ask questions.
Well, you asked for it!
One question I've always wondered about the project is: what is the difference between a model that you can support, and a model you currently can't? Is there a hard line where future ML compatibility hits a brick wall? Are there models where something about the hardware / firmware makes you go 'ooh, that's a good candidate! I bet we can get that one working next'?
Also, as someone from the outside looking in who would be down to spend $100 to see if this is something I can do or am interested in, which (cheap) model would be the easiest to grab and load up as a dev environment (or in a configuration that mimics what someone might do to work on a feature), and where can I find documentation on how to do that? Is there a compendium of knowledge about how these cameras work from a reverse-engineering angle, or does everyone cut their teeth on forum posts and official Canon technical docs?
edit: Found the RE guide on the website, gonna take a look at this later tonight
Re what we can support - it's a reverse engineering project, we can support anything with enough time ;) The very newest cams have software changes that make enabling ML slightly harder for normal users, but they don't make much difference from a developer perspective. I don't see any signs of Canon trying to lock out reverse engineers. Gaining access and doing a basic port (ML GUI, but no features) is not hard once you have experience.
What we choose to support: I work on the cams that I have. And the cams that I have are whatever I find for cheap, so it's pretty random. Other devs have whatever priorities they have :)
The first cam I ported to was 200D, unsupported at the time. This took me a few months to get ML GUI working (with no features enabled), and I had significant help. Now I can get a new cam to that standard in a few days in most cases. All the cams are fairly similar for the core OS. It's the peripherals that change the most as hardware improves, so this takes the most time. And the newer the camera, the more the hw and sw has diverged from the best supported cams.
The cheapest way for you to get started is to use your 5D3 - which you can do in our fork of qemu. You can dump the roms (using software, no disassembly required), then emulate a full Canon and ML GUI, which can run your custom ML changes. There are limitations, mostly around emulation of peripherals. It's still very useful if you want to improve / customise the UI.
https://github.com/reticulatedpines/qemu-eos/tree/qemu-eos-v...
Re docs - they're not in a great shape. It's scattered over a few different wikis, a forum, and commit messages in multiple repos. Quick discussion happens on Discord. We're very responsive there, it's the best place for dev questions. The forum is the best single source for reference knowledge. From a developer perspective, I have made some efforts on a Dev Guide, but it's far from complete, e.g.:
https://github.com/reticulatedpines/magiclantern_simplified/...
If you want physical hardware to play with (it is more fun after all), you might be able to find a 650d or 700d for about $100. Anything that's Digic 5 green here is a capable target:
https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...
Digic 4 stuff is also easy to support, and will be cheaper, but it's less capable and will be showing its age generally - depends if that bothers you.
Thanks for your work keeping it going, and for those that have worked on it before.
Could this be a conflict with long exposures? Conceivably AF, too. The intervalometer will attempt to trigger capture every 5s wall time. If the combined time to AF seek, expose, and finish saving to card (etc) is >5s, you will skip a shot.
When the time comes, compare the price of a used 5d3 vs a shutter replacement on the 5d2, maybe you'll get a "free" upgrade :) Thanks for the kind words!
I am a hobbyist nature photographer and it helped me capture some incredible moments. Though I have a Canon R7, the Canon 5d3 is my favorite camera because I prefer the feel of DSLR optical viewfinders when viewing wildlife subjects, and I prefer certain Canon EF lenses.
More here:
https://amontalenti.com/photos
When I hang out with programmer friends and demo Magic Lantern to them, they are always blown away.
Please recruit your programmer friends to the cause :) The R7 is a target cam, but nobody has started work on it yet. There is some early work on the R5 and R6. I don't remember for the R7, but from the age and tier, this may be one of the new gen quad core AArch64.
I expect these modern cams to be powerful enough to run YOLO on cam, perhaps with sub 1s latency. Could be some fun things to do there.
It’s been a huge blessing!
I am a compiler dev with decent low level skills, anything in particular I should look at that would be good for the project as well as my ‘new’ 6D? (No experience with video unfortunately)
I have a newer R62 as well, but would rather not try anything with it yet.
I've had a fun idea knocking around for a while for astro. These cams have a fairly accessible serial port, hidden under the thumb grip rubber. I think the 6D may have one in the battery grip pins, too. We can sample LV data at any time, and do some tricks to boost exposure for "night vision". Soooo, you could turn the cam itself into a star tracker, which controlled a mount over serial. While doing the photo sequence. I bet you could do some very cool tricks with that. Bit involved for a first time project though :D
The 6D is a fairly well understood and supported cam, and your compiler background should really help you - so really the question is what would you like to add? I can then give a decent guess about how hard various things might be. I believe the 6D has integrated Wifi. We understand the network stack (surprisingly standard!) and a few demo things have been written, but nothing very useful so far. Maybe an auto image upload service? Would be cool to support something like OAuth, integrate with imgur etc?
It's slow work, but hopefully you don't mind that too much, compilers have a similar reputation.
(I literally only want a raw histogram)
(I also have a 1Dx2 but that's probably a harder port)
Thank you and the magic lantern team!
Heh, a little like saying "the main thing you need is to be able to play the violin, which is a small instrument with good tutorials".
Maaaaybe I'm hiding a tradeoff around complexity vs built-in features, but volunteers can work that out themselves later on.
You honestly don't need much knowledge of C to get started in some areas. The ML GUI is easy to modify if you stay within the lines. Other areas, e.g., porting a complex feature to a new camera, are much harder. But that's the life of a reverse engineer.
Very impressive! Thankless work. A reminder to myself to chase down some warnings in projects I am a part of...
I have an xcconfig file[0], that I add to all my projects, that turns on treat-warnings-as-errors and enables all warnings. In C, I used to compile with -Wall.
I also use SwiftLint[1].
But these days, I almost never trigger any warnings, because I’ve developed the habit of good coding.
Since Magic Lantern is firmware, I’m surprised that this was not already the case. Firmware needs to be as close to perfect as possible (I used to write firmware. It’s one of the reasons I’m so anal about Quality).
[0] https://github.com/RiftValleySoftware/RVS_Checkbox/blob/main... (I need to switch the header to MIT license, to match the rest of the project. It's been a long time since I used GPL, but I've been using this file forever.)
We build with: -Wall -Wextra -Werror-implicit-function-declaration -Wdouble-promotion -Winline -Wundef -Wno-unused-parameter -Wno-unused-function -Wno-format
Warnings are treated as errors for release builds.
By the way, rift valley software? I'm writing to you from Kenya, one of the homes of the great rift valley. It is truly remarkable to drive down the escarpment just North of Nairobi!
The photography world is mired in proprietary software/formats and locked-down hardware; and while it has always been true that a digital camera is “just” a computer, now more than ever it is painful just how limited and archaic on-board camera software is when compared to what we’ve grown accustomed to in the mobile phone era.
If I compare photography to another creative discipline I am somewhat familiar with, music production - the latter has way more open software/hardware initiatives, and freedom of not having to tether yourself to large, slow, user-abusing companies when choosing gear to work with.
Long live Magic Lantern!
cries in .x3f & Sigma Photo Pro
> git clone https://github.com/reticulatedpines/magiclantern_simplified
*No judgement, maintaining a niche and complex reverse-engineering project must be a thankless task
One of those projects I wanted to take on but always back logged. Wild that they've been on a 5 year hiatus -- https://www.newsshooter.com/2025/06/21/the-genie-is-out-of-t... -- that's the not-so-happy side of cool free wares.
It is actually easier to get started now, as I spent several months updating the dev infrastructure so it all works on modern platforms with modern tooling.
Plus Ghidra exists now, which was a massive help for us.
We didn't really go on hiatus - the prior lead dev left the project, and the target hardware changed significantly. So everything slowed down. Now we are back to a more normal speed. Of course, we still need more devs; currently we have 3.
Because a lot of features that cost a lot of money are just software limitations. On many of the cheaper cameras, the max shutter speed and video capabilities are limited in software to widen the gap to the more expensive models. So they do sell hardware - but opening up the software would make their higher-end offerings less compelling.
Camera manufacturers live and die on their reputation for making tools that deliver for the professional users of those tools. On a modern camera, the firmware and software needs to 100% Just Work and completely get out of the photographer's way, and a photographer needs to be able to grab a (camera) body out of the locker and know exactly what it's going to do for given settings.
The more cameras out there running customized firmware, the more likely someone misses a shot because "shutter priority is different on this specific 5d4" or similar.
I'm sure Canon is quietly pleased that Magic Lantern has kept up the resale value of their older bodies. I'm happy that Magic Lantern exists-- I no longer need an external intervalometer! It does make sense, though, that camera manufacturers don't deliberately ship cameras as openly-programmable computational photography tools.
Also another thing, Magic Lantern adds optional features which are arbitrarily(?) not present on some models. Perhaps Canon doesn't think you're "pro enough" (e.g. spent enough money) so they don't switch on focus peeking or whatever on your model.
Same here. I used to live in a fairly tall building in Manhattan, so found my way to the roof, found an outlet, and would set it up to do timelapses of sunsets over the Hudson.
The camera lens was pretty dirty, so they weren't great, but I enjoyed them: https://www.youtube.com/watch?v=OVpOgP-8c9A
However, a lot of the features exposed are more video oriented. The Canon bodies were primarily photo cameras that could shoot video in a cumbersome way. ML brings features a video shooter would need, like audio metering, without diving into the menus. The older bodies also have hardware limitations on write speed, so people use the HDMI out to external recorders to record a larger frame size/bitrate/codec than natively possible. Also, that feed normally has the camera UI overlay, which prevents clean recordings. ML allows turning that off.
There are just too many features that ML unlocks. You'd really just need to find the camera body you are interested in using on their site, and see what it does for that body. Different bodies have different features. So some effort is required on your part to know exactly what it can do for you.
Frankly: I once tried to maintain a help file and browsed through a lot of lesser known features. Took me days and I didn't even test RAW/MLV recording.
In fact, make this apply to all devices with firmware: printers, streamers, etc.
But forcing it is never the right thing.
Extending this to enable software access by 3rd parties doesn't feel controversial to me. The core intent of copyright and patent seems to be "when the time limit expires, everyone should be able to use the IP". But in practice you often can't, where hardware with software is concerned.
I was pleasantly surprised to find out this was something very different.
https://en.wikipedia.org/wiki/Magic_Leap
> As of December 2024, the Magic Leap One is no longer supported or working, becoming end of life and abruptly losing functionality when cloud access was ended. This happened whilst encouraging users to buy a newer model.
Ah, that’s about how I thought that would end up.
It's not firmware, which is a nice bonus: no risk of a bad ROM flash damaging your camera (only our software!).
We load as normal software from the SD card. The cam is running a variant of uITRON: https://en.wikipedia.org/wiki/ITRON_project
We're a normal program, running on their OS, DryOS, a variant of uITRON.
This has the benefit that we never flash the OS, removing a source of risk.
Firmwares should be open-source by law. Especially when products are discontinued.
The high end cams need ML less, they have more features stock, plus devs need access to the cam to make a good port. So higher end cams tend to be less attractive to developers.
200D is much newer, but less well supported by ML. I own this cam and am actively working on improving it. 200D has DPAF, which means considerably improved auto-focus, especially for video. Also it can run Doom.
Are there any ML features in particular you're interested in?
So ideally I'd imagine getting a second-hand 600D or 200D and having a similar setup. We did have a setup (previously) where a GoPro or mini-HDMI camera feed is captured and then processed by a Raspberry Pi 2/3/4, but this seems like overkill compared to the DroidCam setup.
And, of course, the optics on the 600D/200D are expected to be much more correct than those on an iPhone or similar phone/mobile device.
Thanks for your kind attention.
But this is actually really cool because, as it turns out, I've got an old Canon EOS DSLR that I haven't used for a long time, and I didn't know this thing existed before.
Around 2020, our old lead dev, a1ex, after years of hard work, left the project. The documentation was fragmentary. Nobody understood the build system. A very small number of volunteers kept things alive, but nothing worked well. Nobody had deep knowledge of Magic Lantern code.
Sounds like a bit of a dick move. Part of being a lead dev is making sure you can get hit by a bus and the project continues. That means documentation, a simple and standard build system (it's C, after all), etc. As a lead dev you should ensure the people on the project get familiar with parts beyond their own niche, so that one of them can succeed you. It doesn't take much work to not leave a gigantic pile of trash behind you.
If anything, it's even more of a self-responsible thing to do in the OSS world, as there isn't a chain of command, like in the corporate world, enforcing this.
It's selfish to engage in a group effort with other people, building something together, without a conscious decision about continuity.
A job worth doing is a job worth doing well. Maybe I'm just a gray beard with unrealistic expectations, or maybe I care about quality.