app : Vect n a -> Vect m a -> Vect (n + m) a
That is pretty amazing if you ask me. I look forward to the day when we all use languages which save programmers from themselves. The whole line is the type declaration for a function (app) that takes a Vector of length n and a Vector of length m, and produces a Vector of length (n + m). Which is to say, the compiler knows at compile time what length the resulting Vector will be, and will fail to compile if the function you've written doesn't - provably - always meet that requirement.
From a Haskeller's perspective it's amazing because container types are famous loopholes for code that produces runtime errors. For instance, if you call the "head" function (returns the first element) on an empty list it will halt the program at run time with an error, despite the fact that it passed the type check (because an empty list and a full one have the same type).
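To make the loophole concrete, here is a small Haskell sketch: `head` typechecks on any list, including the empty one, while a total alternative (`safeHead` is an illustrative name, not a Prelude function) moves the possibility of failure into the type:

```haskell
-- `head []` typechecks but crashes at run time with
-- "Prelude.head: empty list", because empty and non-empty
-- lists share the same type.

-- A total alternative makes the possibility of failure
-- visible in the type instead:
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x
```

The caller of `safeHead` is forced by the type checker to handle the `Nothing` case, so the "empty list" bug can't slip through to run time.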
As someone who learned Haskell and has subsequently been writing a lot of Python, I keep a mental tally of how many of my bugs (some of which took ages to track down) would have been caught immediately by a type system like Haskell's or Idris'. I'd say it's well over half.
In Haskell those kinds of errors are drastically reduced, but a few still slip through. Languages like Idris can catch even more of them, in a clever way, which is awesome.
Contrast that with a language with method signatures that can only return types like "Vector" or "List", without any information that is dependent on the incoming parameters.
So then, the logic is checked: if the method returns a vector whose size is anything other than the sum of the sizes of the two incoming parameter vectors, it won't compile.
In other words, even more behavior that would normally be a runtime bug is checked at compile time, which is a good thing if you are working in a domain where correctness is important and want to avoid runtime bugs.
In reading Haskell (and apparently Idris) types, a name that starts with a lower-case letter is an unbound type variable - sort of like a template parameter in C++. The `a` in each of the `Vect`s must be the same type, but it can be any type (picked at the call site, for any given call).
You can have number systems modulo n, which are only compatible with other numbers modulo the same n because they are distinct types.
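One way to sketch that in Haskell, using type-level naturals (the names `Mod`, `mkMod`, and `addMod` are made up for illustration):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import GHC.TypeLits (Nat, KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- An integer tagged with its modulus n at the type level.
newtype Mod (n :: Nat) = Mod Integer deriving Show

-- Smart constructor: reduce the value modulo n, where n is
-- read off the type via natVal.
mkMod :: forall n. KnownNat n => Integer -> Mod n
mkMod x = Mod (x `mod` natVal (Proxy :: Proxy n))

-- Addition only combines numbers with the SAME modulus.
addMod :: KnownNat n => Mod n -> Mod n -> Mod n
addMod (Mod a) (Mod b) = mkMod (a + b)

-- addMod (mkMod 3 :: Mod 5) (mkMod 4 :: Mod 5) is fine, but
-- mixing a Mod 5 with a Mod 7 is rejected at compile time.
```

Because the modulus lives in the type, "added a mod-5 number to a mod-7 number" is a compile error rather than a silent arithmetic bug.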
It's just another level of abstraction that pretty much nothing else has.
app : Vect n a -> Vect m a -> Vect (n + m) a
is a type signature for a function, `app`, which should append a vector (`Vect`) of length `n` to a vector of length `m`. What's cool about Idris and other dependently-typed languages is that the length of the resulting vector, `n + m`, can be tracked in the type. This can prevent a whole slew of errors.
That's the Haskell type notation (Idris and Haskell are similar). Here's an example:
plus5 :: Int -> Int
plus5 n = n + 5
The first line is the type declaration of the `plus5` function. You can read it as "plus5 takes an Int as argument and returns another Int". The second line is the function definition. The equivalent in Python would be:
def plus5(n):
    return n + 5
The syntax is the same when functions have more arguments:
add :: Int -> Int -> Int
add a b = a + b
This means that add takes two integers as arguments and returns another integer. This syntax might look strange (you could be asking yourself "How do I distinguish between the argument types and the return type?") but that's because in Haskell functions are curried[1].
Haskell (and Idris as well) also has parametric polymorphism, so you can define functions which operate on generic types. For example, the identity function:
id :: a -> a
id x = x
You can read this as "`id` is a function that takes an argument of type a and returns a value of type a (so the return type and argument type must be the same)". This means the id function will work for any type of argument (String, Float, Int, etc.). Note that this is not dynamic typing: Haskell and Idris are statically typed, so all the types are checked at compile time.
Now, let's move on to Idris. One of the reasons there's a lot of excitement over Idris is that it has dependent types, i.e., you can declare types that depend on values, and these are checked at compile time.
The example GP gave was this:
app : Vect n a -> Vect m a -> Vect (n + m) a
`app` here stands for append. This is the function that appends two vectors (lists of static length) together. In Idris, `Vect x y` is a vector of size x holding elements of type y, so `Vect 5 Int` is a vector of 5 integers. So, essentially, `app` is a function that takes two vectors as arguments, `Vect n a` and `Vect m a` (this means they can be of different sizes, but their contents must be of the same type). It returns a `Vect (n + m) a`.
Think about it for a minute. That is amazing! We can be sure, at compile time, just by looking at the function signature, that our append function, given two vectors of size n and m, will _always_ return a vector of size (n + m).
If we accidentally implement a function that does not adhere to the type specification (for example, one that appends the first argument to itself, returning a Vect of size (n + n) instead of (n + m)), the code will not compile and the compiler will tell us where the bug is.
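In Idris the implementation is just two lines of recursion (the standard library provides it as `++` on `Data.Vect`). For the curious, here is a rough Haskell translation of the same idea using GADTs and Peano naturals; the names `Nat`, `Add`, `Vect`, and `app` are defined locally for the sketch:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Peano naturals, promoted to the type level by DataKinds.
data Nat = Z | S Nat

-- Type-level addition, mirroring the usual term-level definition.
type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- A list indexed by its length.
data Vect (n :: Nat) a where
  Nil  :: Vect 'Z a
  Cons :: a -> Vect n a -> Vect ('S n) a

-- The compiler checks that the result really has length n + m.
app :: Vect n a -> Vect m a -> Vect (Add n m) a
app Nil         ys = ys
app (Cons x xs) ys = Cons x (app xs ys)
```

If you changed the last line to `Cons x (app xs ys `app` ys)` style mistakes, say appending `xs` to itself, GHC would reject it because the computed length wouldn't match `Add n m`.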
[1]: http://learnyouahaskell.com/higher-order-functions#curried-f...
template<typename T, std::size_t X, std::size_t Y>
std::array<T, X + Y> app(std::array<T, X>, std::array<T, Y>);
There's also a huge difference between untyped templates and strongly typed generics, but in this particular case the difference is somewhat subtle.
My understanding was that concepts began as an attempt to fix this "problem".
To understand the complexity of the task, try proving that insertion sort really is a sorting algorithm. You write more code, and writing it takes more time than it would in a 'normal' typed language.
I'd highly recommend reading Chapter 1, Section 1.3 from Type-Driven Development. The first chapter is free.
https://www.manning.com/books/type-driven-development-with-i...
I'll see if I can contrive an example. Suppose you have a list L1 of vectors of length 5. You also have a vector V1 with user input, of length x (unknown at compile time). You then make a new list L2 by taking each vector in L1 and appending V1 to it. L2 is now a list of vectors of length (5 + x). Even though x is unknown at compile time, the compiler still knows that all vectors in L2 have the same length, and it can enforce restrictions based on that fact.
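A rough Haskell sketch of that situation (all names here are invented for illustration; the runtime-length vector is wrapped in an existential, and opening it once lets the compiler see that every result shares the same unknown length):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies,
             ExistentialQuantification, RankNTypes #-}

data Nat = Z | S Nat

type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

data Vect (n :: Nat) a where
  Nil  :: Vect 'Z a
  Cons :: a -> Vect n a -> Vect ('S n) a

app :: Vect n a -> Vect m a -> Vect (Add n m) a
app Nil         ys = ys
app (Cons x xs) ys = Cons x (app xs ys)

type Five = 'S ('S ('S ('S ('S 'Z))))

-- A vector whose length is only known at run time (e.g. user input).
data SomeVect a = forall x. SomeVect (Vect x a)

-- Opening the existential once means the compiler knows every
-- vector in the result has the same (statically unknown) length 5 + x.
withAppended :: SomeVect a -> [Vect Five a]
             -> (forall x. [Vect (Add Five x) a] -> r) -> r
withAppended (SomeVect v) l1 k = k (map (\u -> app u v) l1)
```

Inside the continuation `k`, any code that requires its inputs to be equal-length vectors will typecheck, even though nobody knows the concrete length at compile time.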
It seemed strange to me when I first heard this concept - I thought the compiler would need to consider infinitely many possibilities - but apparently similar things are possible even in Haskell. For instance, you can define a list as a recursive type: a value is either an empty constructor (a distinct value, not a null reference) or an element together with another list. The compiler can still reason about this.
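In Haskell that recursive definition is just a couple of lines (using illustrative names rather than the built-in list type):

```haskell
-- A list is either empty (Nil) or an element followed by another list.
data List a = Nil | Cons a (List a)

-- The compiler reasons about it by structural recursion:
len :: List a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs
```

The built-in `[a]` type is the same shape, just with `[]` and `(:)` as the constructor names.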
However, these are things I've heard. Hopefully someone with more experience can chime in and let me know if I'm right here.
Github: https://github.com/idris-hackers
It's pretty exciting!
[1]: https://ghc.haskell.org/trac/ghc/wiki/DependentHaskell/Phase...
What's more, it seems to me that Haskell syntax is the lingua franca when discussing anything related to data types and functional programming these days. Those `->`'s are everywhere; it's useful to know what they stand for.
Just skim a few chapters of learnyouahaskell.com; you won't regret it.
For example, I got into Ruby via Rails, because Rails lets you quickly prototype simple web apps. So I could go from "I wish I had an app that does X" to actually building it, deploying it and sharing it with others. What would a similar "learning flow" look like in Haskell? (doesn't have to be web-based)
Put another way, when I come across a problem, how do I recognize it as the type of problem that is best solved using Haskell, vs. an imperative language?
func train_model(X: [n d], y: [n 1])
So many lines of code are spent verifying that the sizes of two function arguments are compatible.
function *{T, M, N, R}(a::Mat{M, N, T}, b::Mat{N, R, T})
function train_model(X::Array{Float64,d}, y::Array{Float64,1})
It would be fantastic if I could casually throw in a proof while debugging a hard problem.
More importantly, can we possibly implement Church numerals in Haskell? /rhetorical
Yes.
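For anyone curious, a quick sketch of the standard Church encoding in Haskell (a Church numeral n is "apply a function n times"):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral applies a function n times to an argument.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ x = x

-- Successor: apply f one more time.
suc :: Church -> Church
suc n f x = f (n f x)

-- Addition: apply f m times, then n times.
add :: Church -> Church -> Church
add m n f x = m f (n f x)

-- Convert back to an ordinary Int for inspection.
toInt :: Church -> Int
toInt n = n (+ 1) 0

-- toInt (add (suc zero) (suc (suc zero))) evaluates to 3
```

The `RankNTypes` extension is needed because `Church` is itself a polymorphic type.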
I wonder how Idris is going to be affected when dependent types come to Haskell as well (announced at the last ICFP).
Not to mention, there is hope of giving a semantics to Idris since it is based on a fairly routine variant of type theory, whereas I don't think there is any hope at all of understanding "which" type theory the Haskell folks shall have implemented.
If you go all the way with Agda, you have significantly constrained recursion, to the point that the language isn't Turing-complete anymore; but in return you get termination checking and other nice things. As it turns out, most of the code we write doesn't need unbounded recursion. Seriously, can you think of the last time you wrote something like that?
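Most everyday recursion is structural - each call works on a strictly smaller piece of the input - which is exactly the pattern a termination checker accepts. A trivial Haskell example:

```haskell
-- Structurally recursive: each call peels one constructor off the
-- list, so a termination checker can verify this mechanically.
sumList :: [Int] -> Int
sumList []       = 0
sumList (x : xs) = x + sumList xs
```

It's genuinely rare to need recursion where no argument gets structurally smaller (e.g. an interpreter loop or a server's event loop), and those cases can usually be modeled separately as corecursion.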
And I think that'll be a hard pill for people to swallow, much like how it was really hard to sell memory safety to C/C++ guys back in the day.
It looks like it was released in 2012 on hackage but I don't know how far back 'Foldable' was envisioned.
[^1]: Ivor the Proof Engine is here, although apparently not actively updated any more: https://github.com/edwinb/Ivor
http://docs.idris-lang.org/en/latest/faq/faq.html
but could not infer why a dragon from Ivor the Engine had been chosen. That the author had previously written a proof engine makes complete sense now. :)
:(
in my opinion, one of the worst ideas to plague many new languages
brackets are really, seriously, honestly a better visual cue for grouping
For example, what's the editing experience like compared to e.g. OCaml with emacs/Merlin? Is there anyone in a position to compare 1ML's approach to unifying type/value languages vs using dependent types?
Personally, I find 1ML's approach more pragmatic, or, at least, easier to digest for most programmers - including Haskell programmers. Dependent types are more powerful, but they aren't free from complications. For instance, unifying the type and value languages willy-nilly can cause trouble if you want your value language to be Turing-complete, because now type-level computation can diverge too! In a language based on System F-omega, type-level computation is guaranteed not to diverge, because its type level is the simply typed lambda calculus. Some use cases might warrant providing more powerful type-level computation facilities (e.g. calculating the shapes of multidimensional matrices), but it isn't clear to me that the full power of a Martin-Loef-style dependent type theory is needed.
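For the matrix-shape use case, even GHC's restricted type-level facilities go a fair way without full dependent types. A hedged sketch (the `Matrix`/`matMul` names are invented, and note the payload list is not actually checked against the phantom dimensions - a fuller version would use length-indexed vectors):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import GHC.TypeLits (Nat)

-- Dimensions tracked as phantom type-level naturals.
newtype Matrix (m :: Nat) (n :: Nat) = Matrix [[Double]]

-- Multiplying an (m x n) matrix by an (n x p) matrix yields (m x p);
-- a mismatched inner dimension is a compile-time type error.
matMul :: Matrix m n -> Matrix n p -> Matrix m p
matMul (Matrix a) (Matrix b) =
  Matrix [ [ sum (zipWith (*) row col) | col <- cols ] | row <- a ]
  where
    -- Transpose b so we can pair each row of a with each column of b.
    cols = foldr (zipWith (:)) (repeat []) b
```

This kind of type-level bookkeeping stays within a terminating fragment, which is the trade-off the comment above is describing.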
These are functional languages, and there isn't a lot of code to chunk together with brackets. Functions are often just one line, and it's a bit cumbersome to make functions that require multiple lines. Do you believe brackets add to the readability of a one line function?
int func()
{
while dosomething()
{
dosomething()
dosomething()
doanotherthing()
}
dosomething()
}
def func():
    while dosomething():
        dosomething()
        dosomething()
        doanotherthing()
    dosomething()
A machine can look at the braces and re-indent the code, so that you don't have to look at the braces. All while you rest assured that the meaning didn't change:
I just popped it into Vim, selected all and hit =:
int func()
{
        while dosomething()
        {
                dosomething()
                dosomething()
                doanotherthing()
        }
        dosomething()
}
Some semicolons are expected so things are a little off.
There is no objective best answer here. I vastly prefer indentation-based grouping, but I respect that you don't.