https://www.cs.uoregon.edu/research/summerschool/summer15/cu...
Very clean and easy-to-follow video lectures on the relation between types, programs, and logical proofs. One does not need functors and monoids to appreciate the beauty of functional type systems. (And to see why such type systems are indeed discovered rather than invented.)
If anything, I think the conceptual father of type theory is Gottlob Frege, and Alonzo Church was the first to apply it concretely.
Frege certainly articulated a clear notion of what a function is, which is significant.
[1] https://gilbert.ghost.io/type-systems-for-beginners-an-intro...
> program3 fails because runFunction can only run first-order functions and runFunction is a second-order function – a function that takes a first-order function as a parameter.
Here I had no idea that JavaScript implicitly typed `runFunction` that way. That's cool.
Also, I never thought of "higher-order functions" as breaking into a countable hierarchy of nth-order functions, which is an interesting thought. In Haskell the hierarchy would start off as
zerothOrderFunction :: a
firstOrderFunction :: a -> b
secondOrderFunction :: a -> (b -> c)
secondOrderFunction' :: (a -> b) -> c
In general, an nth-order function is a function which either:

1. Takes an (n-1)th-order function as an input and returns a simple type, or
2. Takes a simple type as an input and returns an (n-1)th-order function as an output.
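To make the first couple of rungs concrete, here's the distinction at the value level in untyped JavaScript (the names here are made up purely for illustration):

```javascript
// First-order: plain value in, plain value out (a -> b).
const firstOrder = x => x + 1;

// Second-order: takes a first-order function as its input ((a -> b) -> c).
const secondOrder = f => f(41);

const result = secondOrder(firstOrder); // f(41) with f = firstOrder, i.e. 42
```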
The upshot is that typed programming languages not only catch bugs but also prevent you from legally expressing many nonsensical expressions (analogous to the set-theory paradoxes). For example, consider the expression
(\x -> x x) (\x -> x x)
If you expand this expression, it reduces to itself: (\x -> x x) (\x -> x x) == (\x -> x x) (\x -> x x)
In a purely untyped language, this expression would be legal. But what would be its meaning? Arguably, it is nonsense and should be excluded from the set of legally expressible expressions. The way to do this is through a type system. And indeed, if you typed this statement into a Haskell REPL you would get a type error; this statement can actually be proven to be untypable (IIRC).

On the other hand, this means that typed systems are in some sense strictly less expressive than their untyped counterparts. It would therefore be interesting if somebody found an expression which was both (i) meaningful and (ii) only expressible in an untyped language. You would then have an argument for untyped languages :-)
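For what it's worth, the untyped side of this is easy to demonstrate in JavaScript, where the self-application term is perfectly legal to write; evaluating it just reduces to itself until the stack gives out (a sketch; in V8 the overflow surfaces as a RangeError):

```javascript
// omega = (\x -> x x) (\x -> x x), transcribed into untyped JavaScript.
const selfApply = x => x(x);

let result;
try {
  selfApply(selfApply); // reduces to itself forever
} catch (e) {
  result = e; // stack overflow (RangeError in V8), not a type error
}
console.log(result instanceof RangeError);
```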
In case it's not clear, it's just that it eventually ends up with a runtime error (the talk about runFunction being a second-order function is just an explanation of why you should expect it to end up with a runtime error). The evaluation is
program3()
== runFunction(runFunction)
== runFunction(1)
== 1(1)
== Error: func (with value 1) is not a function
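Since the article's source isn't quoted here, this is a hypothetical reconstruction of runFunction/program3 (the function bodies are assumptions) that produces exactly the trace above:

```javascript
// Assumed reconstruction: runFunction applies its argument to 1.
function runFunction(func) {
  return func(1);
}

function program3() {
  // runFunction(runFunction) => runFunction(1) => 1(1) => TypeError
  return runFunction(runFunction);
}

let error;
try {
  program3();
} catch (e) {
  error = e; // TypeError: func is not a function (wording is engine-dependent)
}
```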
Just because the common idea of application is that it only applies to functions does not mean that application applied to other types is nonsensical and should end up with a runtime error.
If your language allows application to be defined for different types, then the type system should be capable of determining the type that returns from application.
A practical example of a language in which application is valid for integers is Unicon/Icon. Both functions and integers have specific semantics with regard to application (an integer i applied to a list of arguments yields its i-th argument, so 1(1) yields 1). And failure is an option.
foo :: a -> (b -> c)
Not saying it's wrong, but "foo is a second-order function" is a distinctly minority opinion.

To a useful first approximation, Haskell works as a cartesian closed category, which basically means we have tuples and that tuples harmonize with functions, so that foo behaves just like
uncurry foo :: (a,b) -> c
which is a first-order function.

So the majority opinion calls foo a first-order arity-2 function.
Here's a bar function that's truly second-order:
bar :: (a -> b) -> c
In general, the order of a function is the count of parens nesting the leftmost innermost arrow. It is invariant under currying/uncurrying as well as permutation of arguments.

The same counting principle applies to other invariants such as ranks and kinds.
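A quick operational sketch of the currying invariance, in JavaScript (foo is taken to be addition purely for illustration):

```javascript
// foo written curried: a -> (b -> c)
const foo = a => b => a + b;

// uncurry turns a -> (b -> c) into (a, b) -> c without changing the order
const uncurry = f => (a, b) => f(a)(b);

const curriedResult = foo(2)(3);          // 5
const uncurriedResult = uncurry(foo)(2, 3); // 5 -- same arity-2 behaviour
```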
"How high can the order of a function go?" is the question explored in this blog post [0]. Note how the author's definition of order -- essentially the same as the one given here -- got left and right mixed up.
[0] http://math.andrej.com/2006/03/21/interesting-higher-order-f...
Here's an example: The W combinator `\f x -> f x x` can be expressed in Haskell but it doesn't work everywhere it should, unlike in an untyped universe:
http://us5.campaign-archive1.com/?u=4937a9a2eb9eee0a26e1e0a2...
In other words, types can sometimes make you lose reuse.
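For contrast, here's the W combinator in untyped JavaScript, where it applies uniformly to anything you throw at it (a sketch):

```javascript
// W combinator: \f x -> f x x -- duplicates its argument.
const W = f => x => f(x)(x);

const sum = W(a => b => a + b)(3);     // 3 + 3 = 6
const pair = W(a => b => [a, b])("x"); // ["x", "x"] -- any "type" works here
```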
That's amply compensated by types removing massive amounts of junk in the untyped world. If you don't believe me, try hacking in an untyped calculus sometime.
That's an example in one type system, not a feature of all type systems. There's a good argument that static typing is more expressive than dynamic typing: you can always add a dynamic type to your type system and recover all dynamically typed language features, but you can't go the other way around and recover all the features of a statically typed language. For instance, the performance benefits.
Any type system will reject some correct program. As a trivial example, consider:
if( f() ):
    1 + "a"
else:
    1
This is a perfectly fine expression if and only if the result of f() is always false.

Usually one would have:
zerothOrderFunction :: a
firstOrderFunction :: a -> b
firstOrderFunction' :: a -> (b -> c)
secondOrderFunction :: (a -> b) -> c
After all, the functions firstOrderFunction and firstOrderFunction' are the same up to currying.

The standard definition of order is given here (at the end of the section):
https://en.wikipedia.org/wiki/Simply_typed_lambda_calculus#T...
This happens quite often. Take data structures. It is often very useful to group different types of data together in a common structure. Now in most cases, you don't know in advance which type will go where in the data structure, as the data might only be known or derived at runtime.
In fact I'd say dynamic behavior is the major expressiveness loss. Granted, I'm not sure how runtime vs. static type systems relate to type theory. Can anyone educate me?
"Very useful" often just means: coding a properly typed, easily-understandable solution takes longer than doing it untyped.
I argue that is the case only when using a plain-text editor. When you have a good IDE for a typed language at hand that can refactor, complete, analyze, and follow code at the press of a button, you lose this "advantage" of untyped languages.
It’s just that historically people have found it markedly more useful to have that information available statically, because it lets you prove things about your code that hold “for all” cases (universals), not just “for some” that are executed (existentials). And static information can be used for optimisations because of those guarantees—if you have proof, you don’t need a fallback.
I wish the author would expand on this piece with either more installments or even a short book.
I find myself interested in type systems as they relate to programming language design, but I haven't found much middle ground between the basic types described in introductory texts about a language and the opposite extreme of heavy academic texts such as the ones the author is breaking down in this article.
Can anyone recommend any other such middle ground resources on type systems and type system theory?
I really enjoyed Pierce's Types and Programming Languages. As I recall, he starts with the simple untyped lambda calculus, and builds a motivation for a type system, as well as the type system itself. It then switches to ML (or perhaps OCaml) and shows how features can be built-in to a type system as they're described.
I think it's a good fit - I know you said not heavy academic texts, but I don't think it's too heavy, it's at the level of an introductory undergraduate course. (Beware Advanced Topics in ~ by the same author which probably is on the heavier end, I don't know, I haven't braved it yet!)
But I guess what I was referring to as "middle ground" was any resources for learning about type systems written in a similarly approachable tone to this article.
This was another article I read recently that I thought was similarly accessible on the subject of type systems:
https://medium.com/@thejameskyle/type-systems-structural-vs-...
So I guess I'm wondering if there exists such a book or series that might allow one to further their knowledge of type systems without requiring university study.
In Boyer-Moore theory, all functions are total - you can apply any function to any object. Types are predicates. Here's a definition of ORDERED for a list of numbers:
(DEFN ORDERED (L)
(IF (LISTP L)
(IF (LISTP (CDR L))
(IF (LESSP (CADR L)
(CAR L))
F
(ORDERED (CDR L)))
T)
T))
If L is not a list, it is considered to be ordered. This makes it a total function, runnable on any input, even though the result for the "wrong" type is not useful. This provides the benefits of types without requiring a theory of types. It's a very clean way to look at the foundations of mathematics. It's simpler than Russell and Whitehead.

When you prove things using definitions like this, there's a lot of case analysis. This gets worse combinatorially; a theorem with four variables with type constraints will generate at least 2⁴ cases, only one of which is interesting. Most of the cases, such as when L is not a list, are trivial, but have to be analyzed. Computers are great at this, and the Boyer-Moore prover deals with those cases without any trouble. But it went against a long tradition in mathematics of avoiding case analysis. That made this approach unpopular with the generation of pre-computer mathematicians. Today, it would be more acceptable.
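For anyone who doesn't read the Lisp fluently, here's a JavaScript transcription of ORDERED as a total predicate (a sketch; arrays stand in for lists):

```javascript
// Total predicate: anything that isn't a list counts as ordered.
function ordered(l) {
  if (!Array.isArray(l)) return true;  // (LISTP L) fails: vacuously ordered
  if (l.length < 2) return true;       // (LISTP (CDR L)) fails
  if (l[1] < l[0]) return false;       // (LESSP (CADR L) (CAR L))
  return ordered(l.slice(1));          // (ORDERED (CDR L))
}

ordered([1, 2, 3]); // true
ordered([2, 1]);    // false
ordered(42);        // true -- not a list, so "ordered" by convention
```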
(It's fun to run the Boyer-Moore prover today. In the 1980s, it took 45 minutes to grind through the basic theory of numbers. Now it takes a few seconds.)
[1] https://www.amazon.com/Computational-Logic-Robert-S-Boyer/dp... [2] https://github.com/John-Nagle/nqthm
What happens with OOP? Every class has to understand how to "bark", not only the dog class?
If any class can somehow "bark" without throwing an exception, that may not be in alignment with the programmer's intent, or promote the furtherance of his or her goals in any way.
For instance, the intent may be that the programmer wanted to ask the local variable dog to "bark", but misspelled it as ndog, and the integer which counts the number of dogs was asked to bark instead.
There is much value in identifying the problem that an integer doesn't bark.
Shell Definition.
Add the shell ADD1 of one argument
with bottom object (ZERO),
recognizer NUMBERP,
accessor SUB1,
type restriction (NUMBERP X1),
default value (ZERO), and
well-founded relation SUB1P.
It's not intended that you run programs in Boyer-Moore theory, although you can. It's a proof tool.

First incompleteness theorem
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.
Second incompleteness theorem
For any consistent system F within which a certain amount of elementary arithmetic can be carried out, the consistency of F cannot be proved in F itself.
Russell stumbled onto Russell's paradox (among others), and it shook mathematicians' confidence that everything in math was built on top of a perfectly consistent and stable foundation. If you can define a set that is "the set of sets that don't contain themselves", then what other kind of crazy talk can you say in math? How do you know proven things are true and false things can't be proven in the face of weirdness like that?
Russell tried to solve the problem by inventing type theory. Types stratify the universe of values such that "the set of sets that don't contain themselves" is no longer a valid statement to make.
Meanwhile, Gödel went and proved that, sorry, no, math cannot be both consistent and complete. There are statements that are true but which cannot be proven.
[1]: https://en.wikipedia.org/wiki/Foundations_of_mathematics#Fou...
> Similarly, type theory wasn’t enough to describe new foundations for mathematics from which all mathematical truths could in principle be proven using symbolic logic. It wasn’t enough because this goal in its full extent is unattainable.
For the average programmer you may as well be spouting gibberish because the average programmer will have no way to evaluate the claims (if any) you are making. Note, I am saying that you may as well be and not that you are. Please do not misunderstand me.
Type systems certainly are formal theoretical systems, but I personally have come to believe that the majority of coders are ill-served by the mathematical leanings of type theorists.
I'm not sure I can explain myself better than that at the moment.
Now I believe there's an artificial split between math-leaning people and pragmatists; the former end up as PhDs, the latter in IT or thereabouts. But in reality the average coder could understand and even enjoy the land of abstractions; it's just that the river he swims in isn't flowing there, so one has to swim against the flow.
Not to say that ideals are the only true way.
My reading is that it was invented by Scotus as haecceity! This was required by Catholic Christianity because of the difficulty that the Creed introduces about the identity of God - there are three entities which represent God, the Trinity - how to account for this? Well, the thisness of God is joined with the thisness of man, the thisness of the creator, and the thisness of the thing which is motion (I have never understood the Holy Spirit). You can think of this as multiple inheritance! Theologians then had to account for "why three", as you can carry on making aspects of God with this mechanism infinitely: God the mother, God the lover, God the hunter, and so on. But there are three - why? The answer was provided by Scotus's student Occam: entities should not multiply beyond necessity, and hence there are three aspects of God because it is necessary for creation that there are.
The fun bit is that this procession of thought is somewhat guessed at, because writing things like this down or debating them publicly was a quick route to the afterlife via a bonfire!
Theatre and Philosophy have always been able to have a lively chat with one another
Any more articles in this vein?
This is a great point, and I think it is absolutely worthwhile to put time into researching better, more powerful type systems.
Tony Hoare said[1] that his research into formal systems, and his hope that the programming world would embrace these new innovations that increase safety and reliability, was futile, but I think what we need is a new approach, with particular care given to practicality and adoptability.
Then there’s the opinions that users of languages with more elaborate, expressive type systems have, like how some people really enjoy Haskell or Elm because they feel that the type system helps them express their ideas clearly, avoid errors, and aids refactoring and maintenance.
If you’re worried about this one, don’t I guess? If you can use dynamic languages to achieve your goals, and you like them, then that’s fine! There are plenty of languages you can play with if you want to get a feel for programming with types. Even Java 8 and C++11 are decent at this point (I’m sure a Haskell programmer is fuming right now).
Then there’s like, a few thousand people in the world who have well-informed opinions on research into the theory of programming languages, the Curry-Howard correspondence between types and proofs. Also a lot of Hacker News posters who have heard these words. Some of them pretend like they know what they’re talking about.
This isn't true. There are no types in Lisp or the untyped lambda calculus.
http://internethalloffame.org/about/advisory-board/cl-liu
Easily my second favorite instructor, possibly my favorite. I felt pretty prepared to deal with analysis and design in statically typed languages just from that grounding in set theory and logic. Fond of saying things like, "We have a box. What's inside that box? Another box. What's inside that box? We don't care."
Now retired, he was an early proponent of distance learning, so surely some of his stuff is accessible still.
There's nothing wrong with not caring about type systems if they don't make your life or your job better or worse.
As long as you enjoy what you are doing, everything else is optional.
I didn't start caring about type systems until I started running into cases where I really wished for a static one (when I was working on a large system in Python) and later when I was prototyping things where I had to make a lot of guesses in C#. Both situations frustrated me, and then I got to start really caring a lot about type systems.
To a certain extent, I think it's human nature that we often don't really start caring about things that much until we experience real, personal frustration with them. Then we start caring a lot.
The reason you see so many people weighing in on this here on HN is that many regulars are the kind of person to start feeling pain very, very soon and over small inconveniences, where other people will just sort of deal with the minor inconvenience and focus on other aspects.
One attitude is not better than the other, nor is one more ideologically pure or a marker of a better programmer or engineer. The only thing it implies is different pain thresholds.
Depending on your area of focus as a developer a low or a high threshold could be either a benefit or a drawback. A language designer needs to have a very low threshold. A front-end developer/designer can afford to have a very high threshold and focus on things besides being provably correct.
One of the things I like the most about software engineering is that there are opportunities for joy and discovery for everyone. And as careers progress, you can easily find yourself caring about different things at different times, and there's nothing wrong with that.
There's nothing to be ashamed of any more than you should be ashamed of preferring strawberry to chocolate ice cream. (Although, in keeping with tradition here on HN, if you say that you prefer strawberry ice cream, you are dead to me and practically Hitler. :).
Is this actually true? Of course Haskell, Idris, etc. leverage type theory, but how much type theory underlies the type systems of widespread practical languages like C# or Java? Can something like C++'s SFINAE be grounded in type theory, or is it just a hack?
Are there any approachable resources on the subject?
Not even 1/10th of HN has that. Tons of business types, lowly JS programmers, designers, sys-admin types, old-school programmers in C/C++, etc around.
For me, types are sets of possible values, plus sets of valid operations on those values. I don't much care where they come from. As far as I am concerned, they are an engineering construct to make programming easier and safer, and are interesting only to the degree that they accomplish those goals. Any connection to pure math is completely incidental; if there were no such connection, it would not make (programming) types any less useful.
Now, math often gives deeper insight into what's going on, and enables you to create more powerful (useful) abstractions. But if the useful abstractions don't correspond perfectly to types as used by mathematics, I don't care.
mh, given Curry–Howard correspondence, aren't those the same? so the goal is indeed not having logical contradictions?
I may write another article about this.
It's actually just a belief. Nothing suggests that type systems and type theories can be improved to be practical at preventing bugs. I'd say it's the opposite: even with as much understanding about the nature of bugs as we have today, they don't look very promising, unlikely to make it even into the top ten of different approaches.
If you allotted me a finite amount of effort to (a) put a codebase into a strong type system, (b) festoon it with lots of pre- and post-conditions and asserts, (c) run it through every static checker under the sun, (d) build a huge suite of tests, (e) find a way to formally verify critical algorithms in it with (say) Z3, (f) carry out fuzz testing, ... I'd probably say "do the easy stuff from most of these categories" rather than "pick the One True Path and figure that that will save you from all your bugs".
So saying that they aren't practical at solving bugs is a little disingenuous.
Why aren't they used? Probably existing code bases, inertia, and ignorance play a role. I suspect, though, that at least part of the problem is that most programmers find those type systems too hard to use. In that sense, the type systems aren't practical.
And if you're going to blame the programmers for that, well, if your plan for making programming better requires different programmers than the ones we have, your plan isn't very practical, either.
Seems you don't know much about types, then. I suggest you look up theorem provers, CompCert, and the TyPiCal language, as but a few examples.
We've done testing since the start. Still many bugs. We've done ad hoc modeling (behavior driven) for years with some improvement. Formal methods aren't popular but are successful at least at some (small to the low end of medium) scales or within portions of large scale systems. Type systems can and have been used for codifying concepts from all of these into the program semantics. So I'm not sure how it is that they don't help.
Exactly. As the OP clearly doesn't understand, but every researcher in programming languages does: tests can only prove the presence of bugs, they cannot prove their absence. Types can.
Type systems are a necessary part of computing. One can even define computing in terms of how types are transformed. Types are an extremely primitive and fundamental description of how computation happens, and without them, it'd be hard to imagine how anything could be computed at all, let alone correctly.
> Why there’s so much research around types if perfectly applying them to programming languages is impractical?
Somehow Haskell does this perfectly. Whaddya say to that?
So no, they never get into this stuff.
The whole of programming, never mind types (perhaps the most mathematical part of modern programming), arises from mathematics. There's some good history here, but the early paragraphs in particular are a display of ignorance, if not arrogance.
The author quotes Newton, the very chap who is said to have said he merely stood on the shoulders of giants (to "see" such insight). Any programmer in the 21st century stands on the shoulders of the mathematicians and computer scientists of the 20th, who were in turn standing on the shoulders of the mathematicians of the 19th.
I thought it was intended to speak to people who might be intimidated or feel obtuse when they encounter really dense academic texts while trying to learn more about type systems as they relate to programming. As such I really appreciated it.
I didn't think the author was grinding any axes at all, quite the contrary.
Among whom I cannot count myself, for whatever it's worth.
When we say that some field is inaccessible, we don't blame the reader trying to understand. At the same time we're not saying that the field is wrong either, but that communication could be improved.