I wonder how many non-developers understand this. I, along with the rest of my team, am trained in PSP (http://www.sei.cmu.edu/library/abstracts/reports/00tr022.cfm) and TSP (http://www.sei.cmu.edu/tsp/) and we use it in our day-to-day development.
It definitely helps us keep our defect rate below one bug/kLOC but it's an expensive process that results in very low LOC/day productivity. If very low shipped bug counts are very important to your organization, great. But most businesses these days seem to care more about having a usable product than they do a perfect (or close to it) product. Especially if it's on the Web where you can do multiple releases per day.
As an industry, we really need to bear in mind that different business domains need radically different approaches to software engineering.
I looked into it about a year ago and thought it was a ridiculous amount of overhead, and the blurbs about the initial data Humphrey used to create it were not persuasive. The "take our class" ads were not encouraging either.
So if it's working for you in actual development, I'd LOVE to hear more about what it does/doesn't do for you.
The good: PSP encourages a high level of developer responsibility for quality. So you use a checklist to review your code before running the unit tests. You record every defect you find and if applicable, use that information to make a better checklist. Every team has a TSP-trained Coach to guide the process, answer questions, and keep the team on track. The metrics generated from the process are analyzed weekly to see if the team is on target, if quality is where the predictions say it should be and if there are any roadblocks.
The bad: it can be a major change to how you are used to working. The data collection, while as automated as possible, is annoying. The constant emphasis on tracking time on various stages of fixing a bug/adding a feature adds a noticeable amount of friction to your workflow. While it's not Waterfall, TSP is definitely not Agile. Its entire focus is on predictability of output. It's an attempt to take what works well for Manufacturing and apply some of that to software dev.
In short, TSP/PSP is a good idea at heart for those types of development where initial product quality is critical or where you may never have a chance to fix a defect. That is not the case for most modern software projects.
Part of it is that he really does spend 8+ hours per day coding, every weekday, and has done so for 20 years. You'd think his experience level there is about as high as you can get, so it's always cool to hear him talk about the new things he's still learning at his work. I have to wonder if there's anyone else in the world that has both his raw ability and all those man-years of programming experience. It seems like most successful technical people end up doing management and business.
There are a couple of things people probably don't know about Carmack. For one, he can talk intelligently on a lot of different topics: a lot of nitty-gritty aerospace engineering, for example, as well as the history of the space program and NASA. He's also up to speed on the latest across a wide range of technology, including things like cleantech.
Second, he has a pretty good sense of humor and can be quite funny. Which is surprising, I think, just because he spends so little time (effectively zero) out being traditionally social, which you'd think would be necessary for getting good at making people laugh. But in conversation he has a pretty good sense of comedy and timing.
An example from his twitter feed that I clipped a while back:
https://twitter.com/ID_AA_Carmack/status/167739644853747712 "Adding film grain, chromatic aberration, and rendering at 24 hz for film look is like putting horse shit in a car for the buggy experience."
But I think this really misses the point. In our industry it is really easy to disguise oneself as a professional (or even just as someone who knows what they are doing) without really knowing much of anything. Meaning, our focus as an industry has been on making the simple things as simple as possible (e.g. scripting/dynamic languages, code generation, frameworks).
But what I see happening in the Haskell space for example (and even further in languages such as Agda) are attempts to distill things down to their elements. To find the true semantics behind a problem. This not only helps by producing cleaner and more readable code, but it also helps with communication.
I really do believe software is a scientific (and mathematical) exercise. The problem is most of industry does not treat it as such, and hence we end up in the mess we are in.
In the same way, monads, arrows, and recursion are great ways of describing many classes of programs. Additionally, they help with communication when your problem really is a monad or an arrow. However, certain classes of problems are better described by other paradigms than forced into the functional one.
This comes back to Carmack's point. It's important to know Haskell, since it has distilled computation down to a set of elements that are useful for describing a large class of problems. Being able to communicate these solutions is important. However, other paradigms are less error prone and communicate solutions more clearly for other classes of problems.
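To make that concrete, here's a tiny sketch (the maps and names are my own made-up example, not from any project mentioned here) of what "your problem is a monad" buys you. Maybe is the distilled element for computations that can fail, so the failure plumbing drops out of the code:

    import qualified Data.Map as M

    -- Chase two fallible lookups: name -> user id -> email address.
    userEmail :: M.Map String String -> M.Map String String -> String -> Maybe String
    userEmail userIds emails name = do
      uid <- M.lookup name userIds   -- a Nothing here short-circuits the rest
      M.lookup uid emails

    -- The same logic without the monad is a nest of case expressions,
    -- and every extra lookup adds another level of nesting.
    userEmail' :: M.Map String String -> M.Map String String -> String -> Maybe String
    userEmail' userIds emails name =
      case M.lookup name userIds of
        Nothing  -> Nothing
        Just uid -> M.lookup uid emails

Both versions compute the same thing, but the first one says it in the vocabulary the problem actually has.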
Rather, what I am saying is that I see a trend in the Haskell community where developers strive to find the best semantics for a problem and don't just stop at the first solution they arrive at because it works.
See all the conversations on pipes vs. conduit if you want an example.
These statements suggest a view of science and math I find particularly insidious -- please correct me if that's not your view.
It's the view that software engineering should really be computational physics. All we need to do is figure out the laws, set up the equations for a specific problem, pick the best algorithms for the job, and hit Enter.
It's not unlike how the ultimate watchmaker created the universe. And best of all, there's no "monkey programming" (how I hate that term) in computational physics.
Truly romantic, and hey, I think I've just persuaded myself it's not so insidious after all!
> I would like to be able to enable even more restrictive subsets of languages and restrict programmers even more because we make mistakes constantly.
Which I think is perfectly in line with the rigid Haskell type system and the philosophy of "making the compiler do 90% of the work". The constructs that Haskell provides are very down-to-earth ways of composing programs; they ensure that side effects aren't implicit, which leads to more correct composition semantics.
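As a minimal illustration (my own toy example, not from the post): in Haskell the types alone tell you which functions are allowed to touch the outside world, so an accidental side effect simply doesn't type-check.

    import Data.Char (toUpper)

    -- Pure: the type promises this function cannot do I/O or read hidden state.
    shout :: String -> String
    shout = map toUpper

    -- Effectful: the IO in the type is the only licence to print anything.
    greet :: String -> IO ()
    greet name = putStrLn (shout ("hello, " ++ name))

    main :: IO ()
    main = greet "world"

Trying to call putStrLn inside shout is a compile error, which is exactly the compiler doing that "90% of the work".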
To link that back to the post, this is the type of constriction he's talking about to make better programmers. Cocoa and Objective-C restricted me into writing at least halfway decent code. With PHP, because of its flexibility, you're free to get things done quickly, but in a terrible way. Sure, with PHP you can do things right too, but it takes a lot more self-discipline and also a priori knowledge.
Sorry to post yet another rag on PHP.
if(obj == null && obj.isValid())
I'd have approximately 960 dollars per project.
Why would you encounter this (often)? It seems to me that this code would never evaluate to true.
I think it had a lasting effect on my personal coding habits. But every once in a while, I will use the tools on my new code and they still find things. I would probably benefit from being more persistent in using these tools.
[1]: This assumes that the issues caught by your static analysis tool are valid concerns, which in my experience they tend to be.
[2]: Some static analysis tools that I've used with Ruby are reek, roodi, flay, and flog. Reek and roodi report code smells. Flay reports structural similarities (opportunities for refactoring). And flog estimates the complexity of your methods.
DNA is also code, and it's full of bugs. That code lives for hundreds of thousands of years, if not millions.
Biological processes offer the suggestion that your system can be functional in the face of constant failures and random variations in behavior.
Biology can even offer a very high reliability rate. While we get sick all the time, and people are born with all sorts of genetically disadvantageous traits, many key processes are mind-bogglingly reliable. (No sight vs. no sense of touch: compare the rates of blindness to the rates of congenital analgesia type 2.)
While the math behind CS offers tantalizing guarantees of reliability, the reality of software development and developers delivers far lower reliability.
I think it is a fascinating thought experiment to imagine a development process where, instead of writing any code, all you write is tests (or feature descriptions) and you let the code adapt to the environment you've defined.
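For flavor, here's a deliberately naive sketch of that thought experiment in Haskell (everything in it is hypothetical, and real genetic-programming or program-synthesis systems are far more sophisticated): the "environment" is nothing but a list of test cases, and the "code" is whatever small expression survives them.

    import Data.List (find)

    -- Candidate "genomes": tiny arithmetic programs over one integer input.
    data Expr = X | Lit Int | Add Expr Expr | Mul Expr Expr
      deriving Show

    eval :: Expr -> Int -> Int
    eval X         x = x
    eval (Lit n)   _ = n
    eval (Add a b) x = eval a x + eval b x
    eval (Mul a b) x = eval a x * eval b x

    -- Enumerate candidates, smallest first (duplicates included, which is fine here).
    candidates :: [Expr]
    candidates = concat (iterate grow atoms)
      where
        atoms   = [X, Lit 0, Lit 1, Lit 2]
        grow es = es ++ [f a b | f <- [Add, Mul], a <- es, b <- es]

    -- The environment: nothing but (input, expected output) pairs.
    tests :: [(Int, Int)]
    tests = [(0, 1), (1, 3), (2, 5)]   -- i.e. we want some f with f x = 2*x + 1

    passes :: Expr -> Bool
    passes e = all (\(i, o) -> eval e i == o) tests

    -- "Let the code adapt": keep the first candidate that satisfies every test.
    synthesize :: Maybe Expr
    synthesize = find passes (take 100000 candidates)

    main :: IO ()
    main = print synthesize

No human writes the expression; the tests alone decide what counts as a working program.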
Agreed, and I think it's easy to observe that fact with nothing more than your DNA example. In biology, perfection will always be outcompeted by "good enough."
"We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations. The whole idea of a four-vector, in fact, is an improvement in notation so that the transformations can be remembered easily."
What he said about mathematics, I think it applies even more to programming.
[1] The Feynman Lectures on Physics, Volume 1, Chapter 17
At PARC we had a slogan: "Point of view is worth 80 IQ points." It was based on a few things from the past like how smart you had to be in Roman times to multiply two numbers together; only geniuses did it. We haven't gotten any smarter, we've just changed our representation system. We think better generally by inventing better representations; that's something that we as computer scientists recognize as one of the main things that we try to do.
Alan Kay, http://billkerr2.blogspot.com.au/2006/12/point-of-view-is-wo...

If you haven't yet run across this book I highly recommend you check it out. At least for me it really meshed with my own quest to delve further into the mix of social and technical issues around software development. For more info on the book, besides Amazon reviews etc., I also wrote up a blog entry last year that goes into more depth: http://benjamin-meyer.blogspot.com/2011/02/book-review-makin...
That would be a great book though.
The thing I like about John Carmack is that he really appears to live this through and through. I can't say I know him or have worked with him or anything, but whenever I read something like this of his, I enjoy the fact that it is virtually free of ego and posture. He doesn't proclaim or state - he tries things, explores things, and talks about what he found, his successes and failures, and the next hill he wants to climb. He openly admits when things are more challenging than he thought they would be or if something he worked on didn't turn out how he wanted it to.
An example: in grade school you learn how to mix colors, primary colors with paints, etc. Then a little later, in middle school, you learn no, for light-based color it's not the same kind of mixing. Then a little later you learn there are, like, infinitely many possible primary colors. Then a little later you learn color is actually a frequency or combination of frequencies, and you start asking yourself if your friend detects these colors the same way you do. Does red to you really look blue to your friend?
It just gets weird.
Also, the smarter you are the more you tend to doubt yourself. Whereas less intelligent people tend to have more confidence in what they're doing.
Bites me in the ass all the time.
Also, this article's kind of silly... there's almost no discussion? Just watch the video.
EDIT: I guess it's not so much an article as a sharing of info. Still. Watch the video.
"Other interesting sort of PC-ish platforms, we have... the Mac still remains a viable platform for us. The Mac has never required any charity from id, all of those ports have carried their own weight there; they've been viable business platforms.
I actually think that the Mac is going to become a little bit more important for us. Interestingly, we have a ton of people that use, like Macbooks at the office, but we don't have any really rabid, OS X fanboys at the company that drive us to go ahead and get the native ports out early.
But, one of my pushes on the greater use of static analysis and verification technologies, is I pretty strongly suspect that the Clang LLVM sort of ecosystem that's living on OS X is going to be, I hope, fertile ground for a whole lot of analysis tools and we'll wind up benefiting by moving more of our platform continuously onto OS X just for that ability to take advantage of additional tools there.
Linux is an issue that's taken a lot more currency with Valve announcing Steam for Linux, and that does change, factor, you know, changes things a bit, but we've made two forays into the Linux commercial market, most recently with Quake Live client, and, you know, that platform just hasn't carried its weight compared to the Mac on there. It's great that people are enthusiastic about it, but there's just not nearly as many people that are interested in paying for a game on the platform, and that just seems to be the reality. Valve will probably pull a bunch more people there. I know absolutely nothing about any Valve plans for console, Steam-box stuff on there; I can speculate without violating anything.
One thing that also speaks to the favor of Linux and potential open source things is that the integrated graphics cards are getting better and better, and they really are good enough now. Intel's latest integrated graphics cards are good. The drivers still have issues. They're still certainly not going to blow away somebody's top of the line SLI system, but they are completely competent parts that are delivering pretty good performance.
And one of the wonderful things is that Intel has been completely supportive of open source driver efforts, that they have chipset docs out there, and they work openly with the community to develop that, and that's pretty wonderful. I mean, anybody that's a graphics guy, if you program to a graphics API, use D3D or OpenGL, you owe it to yourself at some point to go download the Intel chipset docs. There's hundreds of pages of them, but you really should read through and see what happens at the hardware level. It's not the same architecture that Nvidia and AMD have on there, but there's a lot of commonalities there. You'll grow as a graphics developer to know what happens down at the bit level.
Another one of those things, if I had more time, if I could go ahead and clone myself a few times, I would love to be involved in working on optimizing the Intel open source drivers there.
So, it's enticing, the thought there that you might have a well-supported, completely open platform that you could deliver content through the Steam ecosystem there. It's a tough sell on there, but Valve gets huge kudos for having the vision for what they did with Steam, sticking through all of it. It's funny talking about Doom 3, where we can remember back in the days when they're like, 'Well, should you ship Doom 3 on Steam, go out there, make a splash?' ... I'm like, 'You're kidding, right?' That made no sense at all at that time, but you know Valve stuck with it and they're in a really enviable position from all of that now.
It still seems, probably crazy to me that they would be doing anything like that, you know, but, it's something that's not technically impossible, but would be really difficult from a market, sort of ecosystems standpoint."
Standard implementations/algorithms/patterns are commodities and are purposely so.