Human factors are very well studied and standardized, and there is a well-established discipline, "Human Factors Engineering", which also provides mature test and evaluation methods. Human Factors research is considered solid because it has been built on rigorous experimental psychology and engineering principles developed over more than a century, with systematic methodology and empirical validation. Even if much of it is unknown to or ignored by computer science and programming language design, there are many disciplines where Human Factors Engineering is critical (see e.g. ANSI/AAMI HE75).
Usability is therefore neither ill-defined nor hard to measure. Several ISO 9241 series standards address textual and command-based interaction directly relevant to programming language assessment. ETSI EG 202 116 provides extensive human factors guidelines for command language style. ITU-T Recommendation Z.361 provides guidelines for Human-Computer Interfaces in telecommunications management, incorporating the ISO 9241-10 dialogue principles. ISO/IEC TR 24772 addresses programming language vulnerabilities specifically from a human factors perspective.
E.g. Ada did have substantial human factors considerations documented in its design rationale, directly contradicting the notion that such principles don't apply to professional developers or programming languages. It's rather that computer science seems to continue ignoring established fields ("Not invented here"). Human factors in software development have been overlooked by software engineering researchers, despite their obvious relevance. So what is lacking is primarily interest or willingness, not the fundamentals and means.
Ada was designed with the goal of improving software reliability, potentially at the cost of other factors, like programming speed. Real-life projects using it demonstrated that the error rate of large Ada projects was around half that of equivalent C or FORTRAN projects.
However, popularity is determined by other factors, such as personal productivity and accessibility for novices. These other factors are often in direct opposition to long-term maintainability. It is good for productivity to be able to do things in whatever way is convenient, but that flexibility is a burden for the maintenance programmer. Likewise, novices frequently create useful code that becomes hard to maintain down the line. (Excel spreadsheets are a classic example of this.)
The result is that Ada was a good fit if you had a large project, time to market was not your top issue, and reliability and maintainability were top concerns. Many defense projects have exactly these characteristics, which is why Ada was developed and used there.
But consider a startup. Projects are small. The top concern is time to market. Maintainability will only become an issue if your product launches, gets to market, and succeeds. That's a future problem that can take care of itself.
That's why, when you look at startups, you find a high density of scripting languages. Which language that is has changed over time: Amazon used Perl, Facebook used PHP, and Instagram used Python. But all scripting languages share the same characteristics of fast initial development and poor long-term maintainability. (Yes, even Python. Internal data showing exactly that is why Google began walking away from Python around 15 years ago.)
Also Alan Kay and the Xerox PARC team designed Smalltalk (as Papert did before with Logo) with profound human-centered considerations, and they even "tested" their early concepts with children.
Also some other languages explicitly state human-centered design goals (e.g. Python, Eiffel), but as with Pascal or Ada, the approach was based more on expert judgment, formal analysis, and established principles than on practical studies.
They may not have used standards such as those the GP comment mentions, but they definitely considered human factors a lot.
E.g. TIMTOWTDI - There Is More Than One Way To Do It.
But that's not the only area in which they applied it.
A lot of my early expertise in performance analysis was heavily informed by my SIGPLAN membership. Many of the improvements showing up in compilers and interpreters would show up in some form there, and of course those developers were either involved in the papers or had access to the same research. So when some new version came out with a big explanation of what it did, I already had a reasonably good notion of how it worked.
It was a dark day when they got rid of the paper proceedings.
Because we can. Because a compiler is nothing more than a fancy text translator.
Outside affine types, all the praise for Rust's type system traces back to ML, developed in the mid-1970s and later standardized as Standard ML.
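To put that heritage in concrete terms, here is a tiny sketch (mine, purely illustrative) of the ML lineage as it shows up in Rust: algebraic data types, exhaustive pattern matching, and type inference, all of which ML had decades earlier.

```rust
// ML heritage in miniature: algebraic data types, exhaustive pattern
// matching, and type inference.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The compiler checks that every variant is handled.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    // The element type of `shapes` is inferred; no annotation needed.
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { w: 2.0, h: 3.0 },
    ];
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area = {total}");
}
```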
The heresy of doing systems programming in GC-enabled programming languages (GC in the CS sense, including RC) goes back to research at Xerox PARC, DEC, and ETH Zurich in the late 1970s and early 1980s.
Other things that we know of, like dependent types, effects, formal proofs, capabilities, and linear and affine type systems, are equally a few decades old, dating from the 1980s and early 1990s.
Unfortunately, while we have progressed, it seems easier to sell stuff like ChatGPT than better ways to approach software development.
As it became easier to abstract away from pure assembly, people did just that. Some ideas stuck, some converged, and the ones with the most tenacious, adaptable and attractive communities remained.
Before I learned to code, no programming language was even remotely readable to me. But the more I learned, the more I could shed the notion that this was purely my fault, and accept that sometimes things are a certain way because someone found it interesting or useful. Applies to natural languages and mathematics, too.
Not to promote FP, but imperative/stateful vs. closure/function-oriented code is quite a strong example of that.
A different paradigm can really be a massive intellectual tool.
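To make the contrast concrete, here is a minimal sketch (mine, purely illustrative, in Rust): the same computation written first with mutable state and then as a pipeline of closures.

```rust
// Sum of the squares of the even numbers, written twice.

// Imperative and stateful: a mutable accumulator threaded through a loop.
fn sum_even_squares_imperative(xs: &[i64]) -> i64 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

// Closure/function-oriented: the same result as a pipeline of pure steps.
fn sum_even_squares_functional(xs: &[i64]) -> i64 {
    xs.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}

fn main() {
    let xs = [1, 2, 3, 4, 5, 6];
    assert_eq!(
        sum_even_squares_imperative(&xs),
        sum_even_squares_functional(&xs)
    );
    println!("{}", sum_even_squares_functional(&xs)); // prints 56
}
```

The second version has no mutable accumulator to reason about, which is the kind of shift in thinking the comment is pointing at.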
IMHO, it's not that the ideas were bad, but that the execution of them was. The ideas were too difficult/unfinished/not battle-tested at the time. There was a desire for premature optimisation without a full understanding of the problem space. The problem is that most programmers are beginners, many teachers are intermediate programmers at best, and managers don't understand what programmers actually do. Skill issues abound. "Drive a nail with a screwdriver" indeed.
Nowadays, Round-Trip Engineering might be ready for a new try.
I wish even half the OOP world actually understood it as described above.
The main answer is that we have only a limited ability to modernize existing programming languages. For example, most languages are not null safe, because most languages are old and we can't make them null safe without breaking backward compatibility with most existing code. And we can't break backward compatibility for practical reasons. So Java will never be null safe, PHP will never be strongly or statically typed, etc.
So for fundamental language features, replacing older languages is the only way to achieve progress. Unfortunately that's a very slow process. Python, the currently most popular language, is already over 30 years old.
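To make the null-safety point concrete, here is a minimal sketch (mine, not the commenter's, using Rust purely as an illustration) of what designed-in null safety looks like: absence is an explicit Option the compiler forces you to handle, which is exactly the guarantee you can't retrofit onto code that already passes null around.

```rust
use std::collections::HashMap;

// "Might be absent" is part of the type: the lookup returns Option<&String>.
fn lookup<'a>(users: &'a HashMap<u32, String>, id: u32) -> Option<&'a String> {
    users.get(&id)
}

fn main() {
    let mut users = HashMap::new();
    users.insert(1u32, String::from("ada"));

    // The compiler will not let the value be used without handling the None
    // case, so there is no runtime equivalent of a NullPointerException.
    match lookup(&users, 2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```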
But that turns into the trap of short-term thinking - eventually you reach the point where you would have been better off throwing it away and starting over. You don't reach that in the year you throw it away, though, nor in the year after.
The catch is you will also have a bit of old code that cannot reasonably be modernized, and new code has to somehow interoperate with it. That means languages can't break anything, because the thing they break might be the one thing you can't figure out how to stop using, even though you know better and would do it differently if you started today. Worse, the problem is often an early design decision, so the bad practice is everywhere and you can't get rid of it in any one place because everything depends on it.
This reminds me of recreational math & gamedev, you simply do whatever you feel is fun and design it exactly as you'd like it to be.
When I was learning Rust, I started out just associating patterns with library types. Need to dynamically hold items? Vec. Need a global mutex? Install lazy_static.
This is fine if you're beginning, but at some point you need to read about why people choose this. 9/10 times there's a more elegant option you didn't know about because you just did what everyone else does. This separates programmers from coders.
The only reason I learned this was because my current company has programmers, not coders. I learned a ton from them.
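For what it's worth, here is a concrete instance of that "more elegant option": the global-mutex pattern that lazy_static is usually reached for can be done with std::sync::OnceLock from the standard library, with no extra crate (a minimal sketch, assuming Rust 1.70+).

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// A lazily initialized global mutex; no lazy_static crate needed since Rust 1.70.
static CACHE: OnceLock<Mutex<HashMap<String, u64>>> = OnceLock::new();

fn cache() -> &'static Mutex<HashMap<String, u64>> {
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

fn main() {
    cache().lock().unwrap().insert("hits".to_string(), 1);
    println!("{:?}", cache().lock().unwrap().get("hits")); // prints Some(1)
}
```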
It's super important because those concepts get measured and absorbed into existing languages (as best they can), but that wouldn't have happened without the new languages.
New concepts like Rust's "ownership model", Smalltalk's "Object Orientation", Lisp's "Functional programming", Haskell's "Lazy evaluation", Java's "Green threads".
Rust's "ownership model", is a simplification of Cyclone, AT&T's research on a better C, based on mix of affine and linear type systems.
https://en.wikipedia.org/wiki/Cyclone_(programming_language)
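A minimal sketch (mine, just to illustrate the term) of what the affine part means in practice: an owned value may be used at most once, so moving it consumes it and the compiler rejects any later use.

```rust
// Affine types in practice: an owned value can be used at most once.
fn consume(s: String) -> usize {
    s.len() // ownership of `s` ends here; it is dropped when the function returns
}

fn main() {
    let msg = String::from("hello");
    let n = consume(msg); // `msg` is moved into `consume`
    // println!("{msg}"); // error[E0382]: borrow of moved value: `msg`
    println!("{n}");
}
```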
Haskell's "Lazy evaluation" was present in Miranda, before all related researchers came up with Haskell as common playground.
https://en.wikipedia.org/wiki/Miranda_(programming_language)
"History of Haskell"
https://www.microsoft.com/en-us/research/wp-content/uploads/...
Java's "Green threads" go back to systems like Concurrent Pascal.
To avoid the feature bloat of unconnected pieces of ad-hoc syntax, Java does the right thing and focuses on expressing easily composable language building blocks.
As a last-adopter language, Java has the luxury of cherry-picking good features from other languages, but I think their line of thinking should be the reference moving forward.
>we create programming languages to experience new ideas; ideas that would have remained inaccessible had we stayed with the old languages.
I sincerely want to ask if this was an ironic comment given the topic? Because obviously none of these core concepts were really new to the languages you ascribe them to.
However, as a trench-line coder, I enjoy dabbling in languages to learn different techniques for achieving a similar set of goals without sacrificing pragmatism. In that sense, I rarely have the luxury to explore purely for exploration’s sake. So I wouldn’t describe abstraction, performance, or usability as “aesthetics,” nor would I spend time on a frivolous language that I know won’t gain much traction outside academia.
I like reading the perspectives of academics just to see how wildly different they are from those of the people I work with in the industry. This is probably a good thing.
That just is not true at all. These are all legitimate engineering tradeoffs, which any serious project has to balance. Calling this "aesthetics" is completely dishonest. These aren't arbitrary categories, these are meaningful distinctions engineers use when evaluating tools to write software. I think the students better understand what programming languages are than the teacher.
If you accept that a programming language is a tool and not just an academic game of terms, then all these questions have clear answers.
Agree, and we actually have both the standards and the established methods to conduct representative tradeoff studies. But this knowledge is mostly ignored by CS and programming language design. Even for Ada, there was little empirical evidence for specific design decisions. A systematic survey found only 22 randomized controlled trials of textual programming-language features conducted from the early 1950s through 2012, across six decades. This staggering scarcity explains why language designers rely heavily on intuition rather than evidence (see e.g. https://www.cs.cmu.edu/~NatProg/programminglanguageusability...).
Been there, done that: https://esolangs.org/wiki/Ziim
I think it is more interesting to see which languages are still used today and how popular they are, because this is also tied to the human user/developer.
For instance, I used BASIC when I was young - not as a professional but as a hobbyist. I liked it too. I wouldn't use BASIC today because it would be entirely useless and inefficient.
I started with BASIC too. Also enjoyed BlitzBasic2 for a long time on the Amiga. That's where I learned programming… back then when programming was still fun.
One thing I've learned over the years is that the language is almost irrelevant, the tooling and 3rd-party library support are much more important.
So to me the study of languages was interesting from this DSL perspective.
“This class is about the study of programming languages.”
Where is that class?
> I encourage everyone to create the most absurd, implausible, and impractical languages. Chasing the measurable is often useful, expressing the expressible is insightful, but never forget the true goal of language design: to explore and create what isn’t.
Sorry, but this sounds more like an art class to me. Don't get me wrong, there was a point in time when exploration of the unknown was the only way to move forward. But these days we would need greater insights into higher-level language semantics and inherent tradeoffs to guide language-design and language evolution.
There is plenty to choose from, and one can already learn so much just by reading the Java-EG mailing lists. Brian Goetz has a true academic mindset, and I frequently feel inspired when I read his reasoning, which is both highly structured and accessible.
Otherwise we would just be left with another compiler class. Compiler basics really aren't that difficult.
Indeed, it is, and that's the point! Being interfaces to computers for humans, programming languages sit at the intersection of computer science and humanities. Lots of people like to treat programming languages like they're math class, but that's only half the picture. The other half is usability, ergonomics, learnability, and especially community. Not to mention the form of the language is all about aesthetics. How many times has someone on Hacker News called a language "beautiful" or "ugly" referring to the way it looks? When people praise Python they talk about how easy it is to read and how pleasant it is to look at compared to C++. Or look at what people say about Elm error messages versus C++ template errors. Actually a lot of what's wrong with C++ could have been averted if the designers had paid more attention in art class.
> But these days we would need greater insights into higher-level language semantics and inherent tradeoffs to guide language-design and language evolution.
Here's a talk that argues there's much more fertile ground for language ideas outside of the "programming languages are math" area, which has been thoroughly strip-mined for decades:
https://medium.com/bits-and-behavior/my-splash-2016-keynote-...
This author takes the perspective that programming languages are much greater than the sum of syntax + semantics + toolchain + libraries, and that treating them as merely that sum limits their potential.
No, actually. Why is that important? I don't quite see why that is relevant. Could you elaborate?
Firstly, when using them to create software it's pretty obvious that experienced devs and people who understand theory have a greater ability to guide, curate and control them.
Secondly, as they improve in ability we can see a paradigm change for people using them at least as significant as the jump from assembly to high level languages. Most programmers would have no need to study assembly these days.
Either way, their omission (while appropriate for the year, if somewhat lacking in foresight) is a significant one that renders it somewhat dated already.
Edit: I assume this comment gets downvoted because people don't like where we are heading, not because they really think LLM programming capabilities won't continue to improve at a staggering pace.
The error rate of models makes language design, tooling, testing methodology, and human review more important than ever before. This demands language evolution. You could get far with lax testing and language tooling given enough caution and skill, but when LLMs enter the picture, that no longer flies.
We need tooling, static analysis, testing paradigms, and language design that restrict how dangerously the LLM is allowed to act.
Natural language is far too fuzzy to replace programming (system specification is already famously impossible to do right). If you think it truly will replace code, I highly suspect you work in web design, where testing and reliability were always a secondary concern.
And even then, I think we're already on the convergence plateau of LLM code. The companies are raising prices amid diminishing improvements and ballooning compute costs.
There is no reason to study programming languages in 2025, other than as a historical curiosity - the same way one may study languages as pitiable as, e.g., COBOL, Lisp, or MIPS assembly.
Well LLMs finally offer that, and what they are proving is what programmers have known for decades -- natural language is a terrible way to specify a program to a computer. So what is happening in the LLM world is they are reinventing programming languages and software engineering. They're just calling it "prompt engineering" and "context engineering".
What this tells us is that natural languages are not only insufficient for the task of programming; to make them sufficient you need to bring back all the properties you lost by ditching the programming language. Things like reliability, reproducibility, determinism, and unambiguity are thrown away when you use an LLM, and context engineering / prompt engineering are ways of trying to get them back. They won't work well. What you really want is a programming language.
Downthread there is an example of an ICPC problem statement [0], given as natural language (modulo some inequalities and example program inputs/outputs), which was sufficient for Gemini to program and implement the correct solution where no human team could.
[0] https://worldfinals.icpc.global/problems/2025/finals/problem...
Wouldn't the LLM need to directly output machine code for that to be true?
[0] https://modelcontextprotocol.io/specification/2025-06-18/ser...
Why? Consider Gemini's recent performance at the International Collegiate Programming Contest [0], in which it solved a problem that no human team was able to solve.
Wetware intelligence is itself obsolete, at least as concerns the domain of computing.
[0] https://deepmind.google/discover/blog/gemini-achieves-gold-l...
I call it, "wetware LLM prompt engineering".