No one has ever argued that dynamic languages are more expressive than static languages. This is impossible, as long as we're considering Turing-complete languages; witness the fact that the Harmony/EcmaScript 5 reference compiler was implemented in OCaml. However, it is equally futile to argue that they are less expressive[1].
"you are imposing a serious bit of runtime overhead to represent the class itself [...] and to check [...] the class [...] on the value each time it is used."
Users of dynamic languages simply trade runtime efficiency for compile-time efficiency. What's wrong with that?
The point of having multiple languages is that different languages make different things simple and easily expressible. Sure, you can do dynamic programming in Haskell or OCaml, but the language is going to work against you in some way, requiring you to specify your intention in a particularly lengthy and awkward way (think Java, except Java makes you do that for any kind of programming).
[1] Ironically, the author fails to point out the one way in which static languages are more "expressive" than dynamic languages: overloading on a function's return type. I have yet to see that in a dynamically typed language.
tl;dr: All Turing-complete languages are equally expressive, but different languages make different things simple.
>> No one has ever argued that dynamic languages are more expressive than static languages. This is impossible, as long as we're considering Turing-complete languages.
As I think you note in your last sentence, "expressive" here is not meant to imply whether something can be expressed in the language at all. It's rather about being able to express a concept in a way that is no more and no less complex than the concept itself, and that expresses no more and no less than the concept itself. So the Turing-completeness argument does not really apply. (John McCarthy in fact made the same point about Turing machines themselves: while they help us understand the limits of any machine, for AI programs they are at too low a level to help real humans build new insights into AI beyond understanding those limits.)
>> Users of dynamic languages simply trade runtime efficiency for compile-time efficiency. What's wrong with that?
1. A program is run more times than it is compiled (hopefully). So the goal is really to make the effective combination of the two more efficient in a given usage scenario. On the other hand, with a dynamically typed language, if N seconds are shaved off compilation, at least N seconds will be added to run-time.
2. The bigger issue I have is that dynamic languages often turn compile-time "errors" into run-time errors. Run-time errors take longer to detect and correct, and the process of doing so is bound to involve less automation. This is not to say, of course, that dynamic typing is always a problem.
You're not counting development time, or rather, cost.
Even if I eventually need the efficiency of a statically typed language, dynamic languages let me save the time that I would have spent satisfying the type checker on code that didn't make it into that version.
> On the other hand, with a dynamically typed language, if N seconds are shaved off compilation, at least N seconds will be added to run-time.
It's unclear that that's true. In fact, the cost of compile-time type checking is typically more than the cost of run-time type checking during much of development.
> Run-time errors take longer to detect and correct, and the process of doing so is bound to involve less automation.
Run-time errors aren't detected until run-time, but since folks using dynamic languages get to run-time faster, they're often detected earlier.
As to "less automation", I don't see it. Do you have an automated system that corrects type errors?
The difference between a static and a dynamic language would be when this dispatch is made (compile-time versus runtime).
I use the term "dynamic language" to refer to a well-recognized (though not perfectly agreed upon) set of languages. I didn't choose the term "dynamic," and I've never tried to argue that since they're dynamic, they're better/more fun/more expressive than non-dynamic languages. Hence the straw man.
I use the term simply to distinguish between, say, the set {Java, C, C++} and the set {Python, JavaScript, Ruby}. Most people are aware of this usage and immediately understand the distinction. You could call the latter set "cranberry languages" instead of "dynamic languages" and it would be okay. As long as everyone understands the usage, it's a functional (no programming language pun intended) term.
Plus it's rather meaningless. Bald is a hair color. Clear is a paint color. Silence is a syllable.
Computation is about expressing the type system that is inherent in the computation. Modern computers not having types are an artifact of their implementation, but not of computation per se.
The interesting bit is that this argument is one of the sides of an equivalence. Dig up the old archives of LtU,
http://lambda-the-ultimate.org/classic/lambda-archive.html
and search for the big threads. This direction of the argument (dynamic languages are static languages) was made most forcefully, in my recollection, by Frank Atanassow.
However, the other direction _also_ applies: static languages are dynamic languages. This was first pointed out in LtU, to the best of my memory, by Kevin Millikin, here:
http://lambda-the-ultimate.org/node/100#comment-1197
Someone ought to write up the old days of LtU holy wars somewhere. They taught me more about programming languages than everything I ever read anywhere else.
(edit: minor wording)
Fact: I know of only one statically typed language that is usable for real-world work and that gets operator overloading right, and that is Haskell, a language whose author list contains many PhDs.
Another fact: it is perfectly within reach of a talented sophomore student to implement a dynamically typed language with support for multimethods, which would get operator overloading right, in the course of a single semester.
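To make the multimethod point concrete, here is a minimal sketch of type-based multiple dispatch in Python. The `multimethod` decorator and registry are a toy illustration (not a library API): dispatch considers the runtime types of *all* operands, which is exactly what gets binary operator overloading right.

```python
# Toy multimethod registry: maps a function name plus a tuple of
# argument types to a concrete implementation.
registry = {}

class MultiMethod:
    def __init__(self, name):
        self.name = name
        self.typemap = {}  # (type, type, ...) -> implementation

    def register(self, types, fn):
        self.typemap[types] = fn

    def __call__(self, *args):
        # Dispatch on the runtime types of every argument.
        types = tuple(type(arg) for arg in args)
        fn = self.typemap.get(types)
        if fn is None:
            raise TypeError("no method of %s for %r" % (self.name, types))
        return fn(*args)

def multimethod(*types):
    def wrapper(fn):
        mm = registry.setdefault(fn.__name__, MultiMethod(fn.__name__))
        mm.register(types, fn)
        return mm  # the dispatcher replaces the plain function
    return wrapper

@multimethod(int, int)
def mul(a, b):
    return a * b

@multimethod(str, int)
def mul(s, n):
    return s * n  # string repetition

print(mul(3, 4))     # -> 12
print(mul("ab", 2))  # -> "abab"
```

Operator overloading would then route `x * y` through such a dispatcher instead of asking only the left operand, which is where single-dispatch designs go wrong.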
And yet another fact: dynamic versus static is really runtime versus compile-time, and many languages we refer to as "static" or "dynamic" are in fact somewhere in between.
That's because real language designers ship.
And it would be perfect if the compiler could tell you all kinds of things about your algorithm at compile time, but you have to draw the line somewhere and start making compromises. And did I mention the halting problem? Yes, that's a problem at compile time too.
I really appreciate a well-thought out static language like Haskell, but it still has some way to go for the common programmer. For now, I don't think that the practical outcome of this article is for programmers to abandon dynamic languages.
Most decent OO, statically typed compilers allow for runtime type information. Delphi (native) and the .NET compilers allow you to query all sorts of information about a given object instance, including its class name, properties, methods, etc. Here are some links on Delphi's native compiler RTTI and attributes:
http://stackoverflow.com/questions/2217068/why-should-i-care...
http://robstechcorner.blogspot.com/2009/09/so-what-is-rtti-r...
Object instances in environments like .NET and Delphi are inherently dynamically-typed.
Statically-typed languages provide the safety of checking the "stupid stuff" during compilation without sacrificing the flexibility of using dynamic types in the form of classes. And many do so without any major compilation overhead (Delphi and .NET compilers are very fast).
Most of the points he makes do not touch any practical problems of this divide. For example: how do I get my xUnit tests to run if one of the functions/methods/whatever in a file does not compile because the software has changed? Ruby really excels at that: it fails at runtime. Java? Not so; it will break my whole test file at compile time. Any number of similar examples can be found. Yes, it's a problem that a language marrying both of these properties has not been found yet. But Haskell certainly isn't the one.
Also, the much respected Erik Meijer made a similar point long ago in a much better fashion: http://lambda-the-ultimate.org/node/834
That's actually the reason why I know more than one Java and C shop that uses Ruby/Python/similar for their test suite.
Or you can use a tracing JIT: monitor the actual values taken by the variables at runtime and produce type-specialized code.
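A toy sketch of that idea in plain Python, with hand-written "specializations" standing in for JIT-compiled code (a real tracing JIT would record a hot trace and emit machine code specialized to the observed types; here the specialization table is filled by hand, purely for illustration):

```python
def add_ints(a, b):
    # Fast path a JIT might compile once it observes (int, int).
    return a + b

def add_generic(a, b):
    # Fallback for any type combination not yet specialized.
    return a + b

# Specialization table keyed by the observed argument types.
_specialized = {(int, int): add_ints}

def add(a, b):
    # "Guard": look at the runtime types, then route execution to the
    # variant specialized for exactly those types, if one exists.
    key = (type(a), type(b))
    return _specialized.get(key, add_generic)(a, b)

print(add(2, 3))      # hits the int-specialized path -> 5
print(add("a", "b"))  # falls back to the generic path -> "ab"
```

The point is that the type checks move from compile time into cheap runtime guards, and the specialized paths amortize their cost across many executions.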
Yes, that's right, but you're overlooking the upside of doing things this way. What this gives us is the ability to define new types -- to extend the universal type, if you want to put it that way -- at runtime. No longer do we need this strict separation between compilation time and runtime; no longer do we need the compiler to bless the entire program as being type-correct before we can run any of it. This is what gives us incremental compilation, which (as I just argued elsewhere, http://news.ycombinator.com/item?id=2345424) is a wonderful thing for productivity.
"[...] you are depriving yourself of the ability to state and enforce the invariant that the value at a particular program point must be an integer."
This is just false. Common Lisp implementations of the CMUCL family interpret type declarations as assertions, and under some circumstances will warn at compile time when they can't be shown to hold. Granted, not every CL implementation does this, and the ones that do don't necessarily do it as well as one would like; plus, the type system is very simple (no parametric polymorphism). Nonetheless, we have an existence proof that it's possible at least some of the time (of course, it's uncomputable in general).
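The CMUCL behavior has no direct Python equivalent, but the "declaration as assertion" idea can be sketched with a hypothetical `declare` decorator (illustrative only, not a real library and not how CMUCL works internally): the declared types are checked at each call, just as a CMUCL-family Lisp checks `declare` forms at high safety settings.

```python
import inspect

def declare(**types):
    """Treat type declarations as runtime assertions on named parameters."""
    def wrap(fn):
        sig = inspect.signature(fn)
        def checked(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for name, expected in types.items():
                value = bound.arguments[name]
                assert isinstance(value, expected), (
                    "%s declared %s, got %s"
                    % (name, expected.__name__, type(value).__name__))
            return fn(*args, **kwargs)
        return checked
    return wrap

@declare(x=int)
def square(x):
    return x * x

print(square(7))  # -> 49
# square("7") would raise AssertionError at the call site.
```

What CMUCL adds on top of this, and what the sketch cannot show, is the compile-time half: when the compiler can prove a declaration violated (or unprovable), it warns before the program ever runs.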
"[...] you are imposing a serious bit of run-time overhead to represent the class itself (a tag of some sort) and to check and remove and apply the class tag on the value each time it is used."
For many kinds of programming, the price -- which is not as high as you suggest, anyway -- is well worth paying.
In particular, dynamicity is necessary whenever data live longer than the code manipulating them. If you want to be able to change the program arbitrarily while not losing the data you're working with, you need dynamicity. In dynamic languages, the data can remain live in the program's address space while you modify and recompile the code. With static languages, what you have to do is write the data into files, change your program, and read them back in. Ah, but when you read them in, you have to check that their contents are of the correct type: you've pushed the dynamicity to the edges of your program, but it's still there.
For this reason, database systems -- the prototypical case of long-lived data -- have to be dynamic environments, in which types (relational schemata, e.g.) can be modified without destroying the existing data.
So to argue -- rather arrogantly, I might add -- that dynamic languages are really static languages is to overlook an operational difference that is a commonplace to anyone who uses both.
Dynamic languages give you a middle ground: the stuff in memory is more structured than "raw seething bits", but less structured than data in a statically typed program. This is often very handy, as it's much more convenient to operate on data in memory; the slight performance cost relative to fully statically typed data is often no big deal.
2) I don't get the database analogy. When I add a column to a table, the size of the table on disk does indeed change. Not to mention the schema can be changed statically. If you add/remove/modify a Field in a FieldList, the type doesn't change. So that really isn't a static vs dynamic issue.
Look at it this way. Programs have lots of important properties. Some of these we have figured out how to encode as types so that we can verify them statically. But (although research in this area is ongoing) there are still a lot of important properties we require of our programs that can't be statically verified. The upshot is, we have to test them. Yes, testing is necessarily imperfect, but we have to do it anyway. In my experience, in the course of testing a program in a dynamic language, the kinds of errors that would have been caught by static typing are relatively easy to find by testing; usually, the errors that are hard to find by testing are also well beyond the scope of static typing. Maybe that will change eventually, though I'm skeptical, but it's certainly the state of type systems in common use at the moment.
So the parachute analogy is grossly exaggerated; it's more like leaving your shoelaces untied.
When Doel says "you're depriving yourself of the ability to state and enforce the invariant...", it's not clear whether he means "always" or "at least sometimes". I concede that he could have meant the latter, in which case you're right, my counterargument fails. But as I've just argued, there are lots of other invariants that we can't statically enforce anyway, and they tend to be the more important ones.
2) A table is a bag (multiset) of tuples of some specific type (schema). When you add a column, the table now has a different type, because it's now a bag of tuples of a different type.
Imagine that instead of using a database you keep all your data in a running process of a program written in a static language. (Let's ignore the possibility that the program or the machine might crash.) How are you going to structure the data? To make use of static typing, you need to use collections of records (or "structs" or whatever you like to call them). How then would you add a field to one of these record types? There's no way to do it without recompiling your program, and there's no way to do that without either losing your data or dumping them to files and reading them back in to the new version of the program.
Now, you could store your data differently, by explicitly modelling rows as maps from keys to values. But what's the range type of your map? Why, it's the union of the types of the possible data values you might want to store; you've lost static typing for the data values, and will have to check at runtime whether each value you read out of one of these row maps is of the correct type.
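That map-based pattern looks like the following in Python (field names and values are illustrative): the "column" can be added at runtime, but every read must re-establish the value's type dynamically.

```python
# A "row" modelled as a map from field name to value. The range type is
# effectively the union of everything we might store, so static typing
# of the individual values is lost.
row = {"name": "Ada", "age": 36}

# Adding a field needs no recompilation and loses no data...
row["email"] = "ada@example.com"

# ...but each read needs a runtime check before the value can be used
# at a specific type.
age = row["age"]
if not isinstance(age, int):
    raise TypeError("age must be an int")
print(age + 1)  # -> 37
```

This is the sense in which the dynamicity hasn't been eliminated, only relocated: the check that a static type system would have done once, globally, is now done at each read.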
Not completely true. While I don't know of many use cases for doing what you're talking about, there is at least one that I know of: debugging.
In the case of debugging, you can change code in static languages almost arbitrarily while the data stays live in the program's address space. One thing that is not possible with current implementations, though, is changing a type definition. But otherwise changes can be made to code with the data never leaving main memory (I probably do this 10 times per day).
Now it is theoretically possible to also change type definitions, but that would require a notion of a constructor that builds the new type from the old type. In some cases this can effectively just be an identity function; then it's really just a no-op.
I don't know anyone who prefers dynamic languages who actually thinks this way. I program in both statically- and dynamically-typed languages, and I prefer dynamically-typed languages because it's less code I have to write.
I do like the direction C# is going with the dynamic type. I want a statically typed language, but I want the ability to have dynamically extensible type classification -- when I want it.
Thanks
The C(/#/++)/Java languages had a lot of people convinced that being statically typed required manifest typing. Including me. I thought I was against static typing; what I was actually against was manifest typing.
[1]: http://en.wikipedia.org/wiki/Manifest_typing
[2]: http://en.wikipedia.org/wiki/Type_inference
[3]: http://en.wikipedia.org/wiki/Type_inference#Hindley.E2.80.93...
I think it's because "ultra-strict" and "dynamic" are both oriented toward the programmer, while "non-strict" is oriented toward the compiler.
Ultra-strictness and dynamic typing help the programmer in different ways; non-strictness is just a pain without benefit.
Example:
"There are ill-defined languages, and there are well-defined languages. Well-defined languages are statically typed, and languages with rich static type systems subsume dynamic languages as a corner case of narrow, but significant, interest."
I would like to see justification for the claim that there does not exist a single well-defined language with runtime type enforcement.
This tone runs through the entire article, which is mostly begging the question rather than supporting the premise. I'd love to see the author's point illustrated by contrasting Haskell/ML and Python/Scheme/Ruby code listings or something similar.
Instead the article merely restates its premise in an attempt to make an impression, rather than to inform. Disappointing.
Who markets the dynamic languages? Please show me this person/firm; I'll try to hire them.