There are entire forums for this, e.g. osdev: http://wiki.osdev.org/Expanded_Main_Page
There are entire books for this: http://www.amazon.com/s/ref=sr_ex_n_1?rh=n%3A283155%2Cn%3A5%... Amazon lists about 57,000 results for operating systems books.
You will get out of LFS what you put into it. It tells you how to download the source, extract and install it. When you have the source, nothing is preventing you from looking at it.
I also take issue with "how can we hope to continue developing and innovating on the OS level when it’s practically impossible to figure out what anything does or how it works."
There are general concepts that you will learn from getting a book on OS development (including the kernel), and writing a minimal kernel yourself will give you insights into the work that goes into a full kernel such as the Linux one. The concepts, not the specifics, are what you need to learn, and you do this through theory; you pick up the rest from implementation (though starting with an existing kernel, half of the OS is already developed for you).
http://www.linuxfromscratch.org/hints/downloads/files/packag...
which explains the reasoning behind the package manager and how to implement one yourself (in a much less copy-compile-execute manner than LFS itself).
Implementation is a great way to learn but generally people will survey the literature first to understand why things are implemented the way that they are. Understanding what all package managers have in common will allow you to distill it down to the simplest factor and implement that.
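To make that "simplest factor" concrete, here is a toy sketch in Go (the language the author says elsewhere in this thread he plans to use). The names (`Manager`, `Install`, `Remove`) are invented for illustration; this is not the LFS hint's actual design. The core bookkeeping nearly every package manager shares is: record which files each package owns, so removal is just deleting the recorded files.

```go
package main

import "fmt"

// Manager tracks which files each installed package owns --
// the manifest bookkeeping common to essentially all package managers.
type Manager struct {
	manifests map[string][]string // package name -> owned files
}

func NewManager() *Manager {
	return &Manager{manifests: make(map[string][]string)}
}

// Install records the package's file list. A real tool would also
// extract an archive under a prefix and check for file conflicts
// with already-installed packages.
func (m *Manager) Install(pkg string, files []string) error {
	if _, ok := m.manifests[pkg]; ok {
		return fmt.Errorf("%s is already installed", pkg)
	}
	m.manifests[pkg] = files
	return nil
}

// Remove forgets the package and returns the files to delete.
func (m *Manager) Remove(pkg string) ([]string, error) {
	files, ok := m.manifests[pkg]
	if !ok {
		return nil, fmt.Errorf("%s is not installed", pkg)
	}
	delete(m.manifests, pkg)
	return files, nil
}

func main() {
	m := NewManager()
	m.Install("hello-2.12", []string{"/usr/bin/hello", "/usr/share/man/man1/hello.1"})
	files, _ := m.Remove("hello-2.12")
	fmt.Println(files) // the manifest tells us exactly what to delete
}
```

Everything else a real package manager does (dependency resolution, versioning, downloads) layers on top of this manifest idea, which is roughly the point of the hint above.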
A lot of the core utilities and why they are expected should be covered in the standard (POSIX in this case).
I guess what I am trying to say is that implementation is a useful way to learn, but it requires you to know the details, whereas understanding the literature first will give you a better idea of what to implement.
Good luck with your project.
For as long as I can remember, I’ve wanted to create my very own operating system, one which I have complete and total control over. Because I have neither the time, nor the skill to write my own kernel, this means that I’d eventually land up creating a OS based on the Linux Kernel.
Non sequitur. If you want complete and total control, you're going to have to learn the ins and outs of a kernel. I applaud your decision to go forth with this, but is it really worthy of an HN self-post given that Kirk does not exist and is just a blog post at this point?
Don't fret, we've all been there and done this ourselves :). Some people catch the bug and create stuff like Gentoo and Arch.
If you can afford it and have a week to spare, I'd really recommend taking the Linux Foundation's one week crash course on the kernel. It talks about how the scheduler works, gives you insight into modules, how processes and threads work and hooks the kernel libraries give you for semaphores, mutexes, etc. It's expensive, but it's well worth the money.
http://training.linuxfoundation.org/linux-courses/developmen...
And the blog post covers the same ground that was relevant 10 years ago and sparked the creation of LFS. Nothing new to see here.
I see the benefit of documenting each step. The result will be a pretty great tutorial, or at least a very interesting series of blog posts. But what is closed about the development of the current distros?
I hope that Ian is hoping to build a very different kind of distro, breaking the mold. NEXTSTEP was fundamentally a Unix, but everything about it was synthesized to create a totally new organism. Same with OS X. This kind of experimentation hasn't been attempted very much with Linux distros, which mostly amount to different installers and package managers. Etoile comes to mind, and apparently CoreOS is discarding much of the mold, but a completely new, holistically built system would be quite interesting to see. I suspect it hasn't been attempted much because the sheer amount of work is overwhelming. Writing a new shell is a lot of work. Writing a different init, shell, filesystem layout, and window manager, all unified around some new concepts, would be a lot of work. If that's what he's aiming to do, the result will be really interesting to use and learn from.
OS X is BSD with a fresh coat of paint. It really depends on whether you want to consider the whole experience (which I consider less stellar than most people would care to admit) or the internals (which are the same old Unix stuff).
Reality: Another Debian clone out there.
It would be a good idea to host it on GitHub.
I'm speaking from experience when I say that trying to provide a userland sucks. You've said elsewhere on this page that you want to write your own shell and such; the problem is that people don't want IanSH, they want bash or zsh or whatever. If you write IanMACS, they'll ask how to run emacs. And I hope the libraries you write can compile Firefox.
Or if you just want to write init replacements, modify how configuration is managed... remember that Ubuntu (and Gentoo, but nobody cares about Gentoo anymore) did a ton of this and everyone I've talked to hates it.
So here are my concerns:
- If you intend to write an entirely new userland, you're going to spend a lot of time replicating existing work, only to have people complain about it.
- If you intend to do a new init, improve configuration management, change how logging works, whatever, your life will be consumed with modifying every program to use your stuff.
If you want to learn about operating systems, I'll echo the others and say write your own kernel and a minimal userland.
Right now the standard language for building Linux software is C, which is great for performance but not for development speed. There are ports of things like GTK+ to other languages, but ports are less supported and usually don't take full advantage of their language's unique flavor.
To maximize impact per unit of time spent developing open source software, and also perhaps to entice more people into contributing to open source software, I believe it would be a good idea to build a desktop environment/widget toolkit with an opinionated API in Ruby. Anyone who has experience writing Ruby on Rails code with an automated red-green-refactor cycle likely could testify that it is pleasant and fast. Why not bring that sort of experience to building desktop software?
I can't believe I left this out of the post, but all/most new code will be written in Go. I'll explain why more clearly in my next post.
I mean, even distros like Ubuntu are highly configurable. You can kill off as many packages as you want and even compile your own kernel if you really want to.
Here are my slight annoyances:
Arch Linux: got more opinionated (see the /usr/bin merge and python3 in recent history); breaks a lot for a distro
Linux Mint: broke Firefox by forcing Google custom search, then DDG
Ubuntu: Unity, Mir
Debian: slow moving, long package freezes
Fedora: I don't need SELinux on my desktop box; font rendering (don't like Infinality)
openSUSE: need to use OBS to get packages that probably should be in a main repo
That being said, I've never been annoyed enough with any of these distributions to actually go and build one.
I used Arch Linux for many years, but after the python3 change broke everything I left for Ubuntu 10.04. After that release went away I tried Unity; I didn't really like it, but I stuck with it until Ubuntu GNOME became an official thing.
Right now I run Debian Wheezy on my desktop and Ubuntu GNOME on my laptop, and both seem to work well enough. I've lost my desire to customize my OS so much since leaving Arch Linux, and it becomes a little harder when you don't have something as easy to use as the Arch Build System.
I'll definitely follow your posts about this.
I don't get why there are so many negative comments here. There have been many times when I really wanted to understand and configure my whole machine just for the sake of it.
Understanding and learning: it's that simple. That's the main reason I use Arch, but even with it I don't usually dig deep enough to understand what program calls what, what the function of program foobar is, why it is really needed, or how many switches and how much power it can give me.
This is great :).
Shut up and code.