It makes me think of Elm [1] and (functional) reactive programming. Reactive programming is fantastic. It's kind of like how a spreadsheet program works: if a variable changes, all variables that depend on it change as well. Given "a = b + c", if c increments by 1, so does a.
It has many advantages over event-based systems like JavaScript's. Reactive programs don't need callbacks; the changing values propagate the "event" through the system.
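The spreadsheet analogy can be sketched in a few lines of Python (a toy illustration, not any particular FRP library; the Cell/Formula names are made up):

```python
class Cell:
    """A spreadsheet-style reactive value."""
    def __init__(self, value=None):
        self.value = value
        self._dependents = []          # formulas that read this cell

    def set(self, value):
        self.value = value
        for dep in self._dependents:   # the change propagates itself;
            dep.recompute()            # no user-registered callbacks

class Formula(Cell):
    """A cell defined in terms of other cells, like a = b + c."""
    def __init__(self, fn, *inputs):
        super().__init__()
        self.fn, self.inputs = fn, inputs
        for cell in inputs:
            cell._dependents.append(self)
        self.recompute()

    def recompute(self):
        self.set(self.fn(*(c.value for c in self.inputs)))

b, c = Cell(1), Cell(2)
a = Formula(lambda x, y: x + y, b, c)  # a = b + c
print(a.value)  # 3
c.set(3)        # increment c by 1...
print(a.value)  # ...and a follows: 4
```

A real system also has to handle glitches and update ordering, which is where the complexity (and the FRP literature) comes in.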
I'd love to hear what you guys think about this direction of programming. It seems very natural to me.
Edit: I also see reactive programming as the golden way of having changing state in functional languages. Functional languages have no problem with data or state; they have a problem with change of state. The reactive paradigm solves that problem: all change is implicit, and code can be exactly as functional as before.
EDIT: re FRP, you might find this Lambda The Ultimate post insightful: http://lambda-the-ultimate.org/node/4900
FRP has issues with openness and isn't really great at dealing with collections. It also forces you to express things somewhat unnaturally (e.g. instead of "click this and increment x", you say "the counter is the count of all click events"). There are other methods of managing time, like Glitch [1] and Bloom [2], that seem more promising :)
[1]: http://lambda-the-ultimate.org/node/4910 [2]: http://boom.cs.berkeley.edu/
I suppose it's personal preference, but I actually think that the latter of the two statements is much more understandable. This is why I like functional programming in general; a single value can be expressed as a declarative transformation of other pieces of data, making it clear how the value is derived. Using the imperative model, it's up to variable names and documentation to make it clear what 'x' is. It's like looking at one of those pictures that contains multiple shapes simultaneously; suddenly you see the one you haven't been seeing, and then you wonder how you didn't see it all along.
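The two phrasings of the counter can be put side by side in Python (a toy comparison; the event list stands in for a real event stream):

```python
# Imperative: "click this and increment x"
x = 0
def on_click(_event):
    global x
    x += 1

# Declarative: "the counter is the count of all click events"
clicks = []                    # the ever-growing stream of click events
def counter():
    return len(clicks)         # derived from the stream, never assigned

for event in ("click", "click", "click"):
    on_click(event)            # imperative world mutates x
    clicks.append(event)       # declarative world just records the event

print(x, counter())  # 3 3
```

In the second version it's clear what the counter is; in the first, you have to read every place that touches x.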
Finally, events are often manipulated in very tricky ways. Take the problem of implementing a drag adapter: you have to remember to capture the mouse on the down event and release your capture on the up event, but you also need to capture widget and mouse positions on the down event so you can compute deltas that avoid sampling-resolution problems on the move events. Rx always gets these two points wrong, which is very annoying, but they have to fudge it, because otherwise event-stream processing can't win in elegance.
Would you say Aurora and these other languages are "reactive"? Or do you have a better term for it?
BTW: I'm reading "Toward a better programming" right now :)
As a concrete example, learning pure functional programming gives me another perspective on the OO code I write; i.e. that methods are functions where the first argument is the implicit "this" and the body of the function contains a top-level pattern-match on the class tag. This perspective has helped me avoid over-complicating my code at times.
Learning Verilog was an eye opening experience for me. It reminded me of the time I switched from unstructured BASIC to C when I was a kid. At first it seems complex and weird then suddenly it clicks and it all starts making sense.
1) Synthesis/simulation mismatch: Your design might work in simulation but not in hardware, and vice versa. Often, this is due to X-value (representing unknown/invalid values) problems.
2) Signed datatypes: If you mix signed and unsigned values in an expression, the result is unsigned. So if a is unsigned with the value 4, and b is signed with the value -3, then

(a * b) < 5

will return 0 (false), while

($signed(a) * b) < 5

will correctly return 1 (true). That's because signed datatypes are an afterthought: they weren't officially introduced until 2001, more than 15 years after the language's inception.

3) If an undeclared net is used in a module instantiation, the compiler silently creates a 1-bit net with that name. No error message, no warning. You can turn off that stupid behavior with a compiler directive, but you have to turn it back on at the end of your file, because otherwise its effects remain active in all files compiled afterwards, and many third-party sources don't work correctly with this setting.
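The signed/unsigned pitfall in 2) can be mimicked in Python to see the actual numbers involved (a sketch assuming a 32-bit evaluation width; Verilog's real self-determined width rules are more involved):

```python
WIDTH = 32
MASK = (1 << WIDTH) - 1

def to_unsigned(x, width=WIDTH):
    # Reinterpret an int as a width-bit bit vector (what Verilog does to b)
    return x & ((1 << width) - 1)

def to_signed(x, width=WIDTH):
    # Reinterpret a bit vector as two's complement
    return x - (1 << width) if x >> (width - 1) else x

a = 4    # unsigned
b = -3   # signed

# Mixed expression: Verilog evaluates the whole thing as unsigned,
# so b becomes 4294967293 and the product wraps to 4294967284.
mixed = (to_unsigned(a) * to_unsigned(b)) & MASK
print(mixed < 5)  # False

# With $signed(a), the expression is evaluated as signed: the product is -12.
signed_result = to_signed((to_unsigned(a) * to_unsigned(b)) & MASK)
print(signed_result < 5)  # True
```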
VHDL is also a bad language, but for very different reasons. And yet, alternative HDLs have a hard time getting traction, for various reasons and non-reasons.
I found Verilog very similar to C in a way: if you're not familiar with all the quirks of the language, it's very easy to write code that looks completely benign and straightforward yet ends up blowing up in your face because of an implicit cast or undefined behaviour...
On the other hand there's not much of a "hacker scene" for driving up the innovation for HDLs, unfortunately. And I don't expect the big companies making ASICs and FPGAs to come up with a radical new solution any time soon.
So basically, if one of you HNers wants to try their hand at writing a new programming language, consider designing a modern Verilog instead of yet another JavaScript transpiler :)
The same is true, maybe even worse, for our MEs.
(For testbenches, everything goes..)
Value set is ('U','X','0','1','Z','W','L','H','-')
TYPE std_logic IS ( 'U', -- Uninitialized
                    'X', -- Forcing unknown
                    '0', -- Forcing 0
                    '1', -- Forcing 1
                    'Z', -- High impedance
                    'W', -- Weak unknown
                    'L', -- Weak 0
                    'H', -- Weak 1
                    '-'); -- Don't care
SIGNAL stdLogicName : STD_LOGIC := '0';
stdLogicName <= 'Z';

signal x, y : std_logic;
...
x <= 'X';
...
y <= '0' when x = '0' else '1';
will result in y getting the value '1' in simulation, but you don't know what it'll be after synthesis. So if you're unlucky, it'll simulate correctly but fail in hardware, which is exactly what simulation should protect you from. And even if you assign only '0' and '1', you might get an 'X' from third-party IP cores.
I'm a CSE major and I took a hardware design class last fall, where we programmed a Xilinx FPGA using VHDL.
It was a huge change from everything I had learned before; I had no idea what a latch was or why it was bad to imply one. The thing that bothered me the most was the simulation/synthesis disconnect. The only efficient way to debug a design is with the simulator, because debugging on the board takes a very long time with each code change. But even after you've debugged it in simulation, it might not synthesize, which is an even harder "bug" to track down.
Exactly. VHDL and Verilog make hardware design twice as hard as necessary: first you have to think about what structures you need, and then you have to figure out how to describe them with the inappropriate tools your HDL gives you (in such a way that the synthesis software correctly recognizes your intent).
With hardware design, you have to keep in mind that you are modeling hardware. A language that did it all for you might be nice, but you still have to know what framework you are working in. The simulation/synthesis disconnect is inherent to that. You can think of a ton of structures that seem fine at a code level but just don't have a workable analog in silicon. Synthesis in that sense is just another debug tool.
1) (At least on the X point:) Don't build designs that rely on X behavior or could result in X values. Where I work, X's are completely unacceptable and a sign that something is broken. In general, if writing Verilog to model hardware, don't write non-synthesizable constructs.
2) Signed datatypes don't make any sense when designing hardware. Verilog treats everything as bit vectors. While 2's complement can implement signed behavior, hardware doesn't know or care about signed types. Again, if modeling hardware, write code that models hardware.
3) This is annoying, but another issue that is stepped around with design practices and not a major problem.
Overall, when talking about Verilog, you have to keep in mind its a language built to model hardware. Most features of modern languages don't make much sense in that context.
Although I agree, you can't completely avoid 'X' values, because third-party cores might return them in output signals.
2) > Signed datatypes don't make any sense when designing hardware.
Now that is just plain and utterly wrong. Of course signed datatypes make sense when designing hardware: You can (and have to) perform arithmetic in hardware, and signed arithmetic is a part of arithmetic.
> Verilog treats everything as bit vectors.
By your argumentation, unsigned datatypes also wouldn't make sense when designing hardware. And yet you need it all the time.
> hardware doesn't know or care about signed types
But the language should care about it. "It's all 0s and 1s in the end" is not an argument for not providing proper abstractions.
3) > This is annoying, but another issue that is stepped around with design practices and not a major problem.
That's like saying "you can avoid bugs if you're really careful". Great. A "feature" which doesn't offer any advantage (because you're supposed to avoid it anyway) and which makes it so easy to make mistakes which won't be seen until simulation, should have never made it into the language.
> Overall, when talking about Verilog, you have to keep in mind its a language built to model hardware.
Exactly. Verilog and VHDL were intended to model hardware, not to design it. Nowadays, they are mostly used to design it.
I don't mean to attack you, but attitudes like yours are the main reason why we're stuck with such crappy, ill-designed, inappropriate languages for hardware design. You're so used to the problems that you don't even see them anymore.
Can a dependent type system catch all type errors at compile time? For example, suppose I write the following (in pseudo-code):
// Variable x is an integer greater than or equal to 0 and less than 256.
int x (>=0, <256)
x = 128 // This is valid.
x = x * 3 // This violates the type.
I can imagine how a compiler could catch that kind of error. But that's trivial. What happens in programs like this?

int x (>= 0, <=10)
x = parseInt(getKeyboardInput)
Now the compiler can't know for sure whether the type has been violated, because the value of getKeyboardInput could be anything. To take a page from Haskell, you could do something like this (which is still pseudocode, not valid Haskell):

// x is a value that is either 1) an int from 0 to 10, or 2) nothing at all.
maybe (int (>= 0, <=10)) x
// applyConstraint recognizes that parseInt may return a value violating x's constraints.
// Thus it transforms the return type of parseInt from int to maybe (int (>= 0, <=10)).
x = applyConstraint(parseInt(getKeyboardInput))

Or perhaps applyConstraint wouldn't have to be called explicitly, but would be implicitly added by the compiler as needed; I'm not sure which is better stylistically. Either way, applyConstraint would be required any time a computation could return an invalid value. That would get tricky, because the compiler would have to track the constraints on every variable, even where those constraints aren't declared. For example:
int w (>= 0, <= 10)
int x (>= 0, <= 2)
int y
int z (>= 0, <= 20)
y = w * x
z = y
Here, the compiler would have to infer from the assignment "y = w * x" that y is always between 0 and 20. Do any languages currently take the idea this far (or farther)?
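The inference being asked about is essentially interval arithmetic, which a checker could implement directly. A sketch in Python (ranges as (lo, hi) pairs; the rule for * takes the min/max over the corner products):

```python
def mul_interval(a, b):
    # For multiplication the extremes always occur at the corners
    (alo, ahi), (blo, bhi) = a, b
    corners = [alo * blo, alo * bhi, ahi * blo, ahi * bhi]
    return (min(corners), max(corners))

def fits(value_range, declared_range):
    # An assignment type-checks iff the inferred range lies inside the declared one
    return declared_range[0] <= value_range[0] and value_range[1] <= declared_range[1]

w = (0, 10)      # int w (>= 0, <= 10)
x = (0, 2)       # int x (>= 0, <= 2)
z_decl = (0, 20) # int z (>= 0, <= 20)

y = mul_interval(w, x)   # the checker infers y's range from "y = w * x"
print(y)                 # (0, 20)
print(fits(y, z_decl))   # True: "z = y" is accepted
```

The corner rule is also an answer to what Int(a<=..<=b) * Int(c<=..<=d) should produce.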
In a DT language, when you express such a type the compiler will demand that you use a type like Maybe, which can represent failure at runtime, and demand that you handle that failure.
To the compiler, `int` and `int (>= 0, <256)` are different types which require an explicit conversion (though the particular choice of conversion may be implicitly defined, viz. Idris' Coerce typeclass). In order to do multiplication like you suggested you would have to provide enough information to the compiler (either at compile time via constants or via a possibly failing mechanism which generates proof requirements at runtime) that your multiplication will not violate the constraints of the type.
To your final point about tracking constraints, even those defined implicitly, I think the answer is no-ish. Most frequently, DT languages have such a richness of types that there's no way to infer them; you're forced to write out all of the top-level types. But the idea of passing around constraints like you're suggesting here is not impossible.
You might imagine a pair type like (in no particular notation)
(x, x => [Constraint])
where the first is a number and the second indicates a type-level function taking that particular number to a set of proofs that certain constraints are satisfied. We could indicate all of this at the type level and demand that the compiler ensure the constraints are not violated (i.e. we end up coercing between constrained types like this, and only some coercions, those that coerce to supersets of the current constraints, are valid), and we could also define a loose kind of (+), (*), (-) etc. on this pair type such that the result passes along the proper constraints.

Their latest research language is "Liquid Haskell"; I think it could deal with jarrett's example out of the box (with the caveat that yes, you still need to write the test explicitly). They used to have an online demo where you could just type programs into a web form, but that doesn't seem to work right now...
(::) : a -> Vector a n -> Vector a (S n)
This says that consing an element onto a vector of any size results in a vector that is guaranteed to be non-empty. This means that you could safely write a function to retrieve the head of a list by restricting the type to lists that are guaranteed to be non-empty:

head : Vector a (S n) -> a
Something like the following pseudo-code in a REPL (I don't actually know a thing about Idris):

let nums : Vector Int n = []
head nums
...would fail to compile, since the second type param to nums is simply `n` rather than `(S n)`, as required by our `head` function. All of the things you would expect from Haskell, like the power of ADTs and pattern-matching, apply here, so you get even more compile-time type-soundness.

In case anyone's wondering, only the first of these (the element type) is a type parameter, since it has a constant value in all sub-expressions (i.e. the cons cells all contain elements of the same type).
The second is a 'type index', which is a more general term (type parameters are a special case of type indices), since its value changes in the sub-expressions (the cons cells have different lengths).
The difference is that we don't need to pattern-match parameters on every recursive call, since they're guaranteed to be constant (although we can if we like).
I didn't know this until I asked http://cs.stackexchange.com/questions/20100/what-are-the-dif...
EDIT: It occurs to me that this is well summarised by the apparently facetious remark "Ask not what your compiler can do for you; ask what you can do for your compiler."
subtype my_type is Integer range 0 .. 10;
x : my_type := 9;
y : my_type := 10;
a : my_type := x + y;
So, obviously there is enough information to deduce at compile time that the sum operation yields an incorrect value. In practice, as many of the replies point out, you can determine no such thing (my_type x = user_input()).
In that case Ada will throw a runtime exception. All this checking is expensive, so it ends up much like C/C++, where you have debug and release builds, and the release builds do not do the checking.
int x (>= 0, <=10)
x = parseInt(getKeyboardInput)
Dependent type systems don't have to know what the possible keyboard input is. They have to know what type the getKeyboardInput function returns (and likewise parseInt), and whether it's possible for that type to be out of the range of your constraint. If so, it throws an error. At that point you simply have to compose your types and functions so they match the constraint, and you know you are safe.
If I'm understanding you correctly, that would seem to take a lot of the power out of dependent type systems. Almost all values will eventually depend on input from the outside world, so it seems you'd rarely have a chance to use your dependent type system.
Haskell (which doesn't have dependent types baked in) has a good approach to possibly-invalid data via the Maybe type. E.g. Maybe Int means "either an integer, or nothing." So it forces you to explicitly handle the possible error cases.[1]
To spell it out in Haskell terms: in the example of getting an integer from the keyboard, x is a Maybe Int. If the input is invalid, the value is Nothing, a unique constant. If it's valid, the value is Just n, where n is the parsed Int.
> At that point you simply have to compose your types and functions so they match the constraint, and you know you are safe.
But how do you get from untrusted, potentially invalid data into the world of trusted, guaranteed-valid data? How do you bridge the gap? I was suggesting some combination of an applyConstraint function and a Maybe type to achieve that. Are there other ways?
[1] Actually, it will let you bypass the usual compile-time protections with the fromJust function. But if you want compile-time safety, either refrain from using fromJust, or use it only where you're absolutely sure the value won't be Nothing.
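The applyConstraint idea can be sketched at runtime in Python, with Optional playing the role of Maybe (applyConstraint and the bounds are the hypothetical names from the pseudocode above, not any real API):

```python
from typing import Optional

LO, HI = 0, 10

def parse_int(s: str) -> Optional[int]:
    # parseInt that returns None instead of raising on bad input
    try:
        return int(s)
    except ValueError:
        return None

def apply_constraint(n: Optional[int]) -> Optional[int]:
    # The validation boundary: only in-range ints survive
    if n is not None and LO <= n <= HI:
        return n
    return None

x = apply_constraint(parse_int("128"))  # out of range
print(x)   # None
x = apply_constraint(parse_int("7"))
print(x)   # 7
```

Everything downstream of apply_constraint can then check for None once and treat the payload as trusted.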
The main point is that you can write provably correct validation functions. Then your program can branch on the result of that function, and in the "success" branch you can process data further, as usual, assuming (justifiably) the validity of the data. It doesn't make any difference whether some particular piece of data was validated runtime or compile time, so far as the validation does what you want it to do and the types reflect the result in sufficient detail. Verified programming is mostly not about trying to check everything compile time, but rather checking that when our program eventually runs, it does the right thing. Take the following pseudo-Idris-Agda function:
boundsCheck : Nat -> (bound : Nat) -> Maybe (n : Nat ** n < bound)
boundsCheck n b with n <? b
| yes proof = Just (n , proof)
| no disproof = Nothing
This either returns the input natural number (if the input is within bounds) along with a proof of being in bound, or returns failure. The type of the function makes it impossible to write a function body such that the returned number is not within the supplied bounds. Any subsequent code in the program may work with the output of this function and be utterly sure that the "n" in "Just (n, p)" observes the bound, and it may also safely omit any further checks for the same bound or greater bounds. Or here's a standard Idris function:

filter : (a -> Bool) -> Vect n a -> (n : Nat ** Vect n a)
filter p [] = (_ ** [])
filter p (x :: xs) with (filter p xs)
| (_ ** xs') = if p x then (_ ** x :: xs') else (_ ** xs')
This is a plain old filter, except that it works on a length-indexed list type "Vect". We have no idea (at compile time) how long the resulting filtered Vect will be, so we return a dependent pair containing some Nat and a Vect whose length is equal to that Nat.

You might think that this is rather boring, since we could have used a list type not indexed by its length, and then just computed the length whenever we needed it at runtime. Except there is a big difference. It is impossible to return a value of type
(n : Nat ** Vect n a)
from any function, such that in the value the Nat index does not equal the length of the vector! You could slap together a plain list and a Nat in a tuple, and then say: "This is a list coupled with its length; let's try to preserve this invariant by writing code carefully." You could also try hiding the possibly unsafe internals behind some OOP class firewall, but even that doesn't protect you from writing the internals incorrectly. Dependent types can protect you even then; they only fail to prevent you from writing the wrong types in the first place.

head . sort . f
where f is a function that you know returns a non-empty list, perhaps because that's just the way it operates or because you checked it further up. You know that head won't crash since any reasonable implementation of sort returns a list which is just as long as the one that was given to it. Similarly:
tail . sort . f
won't crash. Moreover, the list will still be sorted, since you just removed the head of the list.
This is all known to you, but you can't necessarily easily show it to the compiler.
I don't know the motivation for wanting to know the input before it is given. Putting it in a Maybe is the only sensible option. Some things are just not known/unknowable before running a program. You wouldn't even need to run the program if you could prove absolutely anything about the properties of a program.
Will the user input bleed through to all corners of the program? Well, why would it? You could have a sanitizing loop that refuses to pass "wrong" data to the pure (and dependently typed!) parts of the program, just looping over and over until the correct input is given.
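That sanitizing loop is easy to make concrete (Python sketch; the list of inputs stands in for repeated keyboard reads):

```python
def read_bounded(inputs, lo=0, hi=10):
    """Refuse to return until an int in [lo, hi] arrives."""
    for raw in inputs:
        try:
            n = int(raw)
        except ValueError:
            continue          # not even an int: ask again
        if lo <= n <= hi:
            return n          # only validated data escapes the loop
    raise EOFError("input exhausted without a valid value")

print(read_bounded(["abc", "99", "7"]))  # 7
```

The dependently typed parts of the program only ever see the return value, so they can assume the bound unconditionally.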
I'm probably not clever enough, but someone could probably prove that'd be equivalent to solving the halting problem. It seems impossible.
> because the value of getKeyboardInput could be anything
That makes my brain hurt. I think implicitly narrowing the type is stylistically better. Maybe just adding a dependent type declaration:
x = (parseInt(getKey) :: Int(>=0, <=10))
Then you type-annotate everything you can, marking every non-annotated expression as wild:

w = ... :: Int(>=0, <=10)
x = ... :: Int(>=0, <=2)
z :: Int(>=0, <=20)
y = w * x -- :: Wild
z = y -- compiler should Scream!
Asking the compiler to infer arbitrary type declarations seems hard. I think y probably shouldn't be inferred as :: Int(0<=..<=20) unless there is a rule somewhere that Int(a<=..<=b) * Int(c<=..<=d) = .. :: Int(f<=..<=g).

The work still needs to be done (no magic bullet here), but we should be able to use the computer to generate well-defined type combinations.
IIRC dependently-typed languages dodge this bullet by not being completely Turing-complete (as 'twere).
Think about the following Java code:
public int getMyInt() {
return getSomeOtherInt();
}
How hard does Java have to work to figure out whether getMyInt is well-typed? Does it have to solve the halting problem? No. It just checks the information that you have given it. If you wrote that getSomeOtherInt has return type "int", then getMyInt will type-check. If you gave it some other type, it won't type-check. If you didn't give it a type, it will complain about a syntax error. At no point will Java try to write your program for you (which would hit against the halting problem). The same is true in dependently-typed languages, except the types happen to be much stronger. You still have to write them down, write down values of those types, write conversion functions when you need to combine different types, etc.

Incidentally, if a language is Turing-complete, it actually becomes really easy to get a program to type-check: we just write infinite loops everywhere, which leads to logical fallacies ;) That's why many dependently-typed languages aren't Turing-complete (although many are; e.g. those which distinguish compile-time terms from run-time terms, like ATS).
Guessing you're plenty clever...just didn't think about it enough :)
Simply transform a program into one which halts on a type error rather than whatever else it might be doing. Done.
Provably not, if you want a type checker guaranteed to finish.
(This bit of pure-mathematics wonkery ignores the substance of jarrett's question, though; one doesn't need the full Gödelian power to observe that a priori theorems that must be verified at compile time can never account for data that are provided only after compilation.)
I'm not sure I agree with "theorems that must be verified at compile time can never account for data that are provided only after compilation". At compile time, you prove the assertion "for every x, the program outputs correct answer for x". Now, you don't know that the user will enter say x=5 at runtime, but since you proved a theorem about _every_ x, this includes x=5. You cannot predict the future (the path that will be taken by the program), but once you prepare for all possible futures, you're safe.
It seems to me they could. Suppose my program depends on external input D. That input could be a file it loads, keyboard input, network input, or anything else. Could I declare that I expect D to have properties P? And then, could the language force me to implement a code path for cases where D does not satisfy P? An example of such a code path would be to print an error message and then terminate the program.
They don't prove theorems. You do this by writing a program.
Or do these unprovable-but-true facts mostly elude them, since they are operating in a formal system that 'hides' them?
Will be a compile-time error, because getKeyboardInput returns integers that are between 0 and 256(?), not 0 to 10 as 'x' requires.
int x (>= 0, <=10)
x = parseInt(getKeyboardInput)

...was meant as an example of what won't work.

There's an interesting article [1] on how Jonny Greenwood of Radiohead uses it extensively; in it you can see some examples of how it works, modules wired together visually.
I think there is a lot of potential for a really nice mix between text-based programming and graphical programming to work for general programming too.
[0]:http://cycling74.com/products/max/ [1]:http://thekingofgear.com/post/25443456600/max-msp
There are many solved problems in text-based programming that would need to be solved again for a graphical programming language to be as useful.
How would one post a "snippet" to StackOverflow? How would diffs work? Consequently, how would source code management work?
To take a diff for example, not only do you need to illustrate the diff version of (I drew a new line from here to here and made this other box green), you would also need to illustrate patches to textual details and how they related to the graphical code.
Maybe I can imagine a solution that could do that (at great expense). I cannot imagine the language being friendly enough that third-party compilers, IDEs, or static-analysis tools would be feasible.
For anyone who wants to play around with a very similar language, PureData is an open-source application that is a relative of Max/MSP (they share a creator, and they are quite similar). Not wanting to purchase the software myself just to do the homework, I used it quite a bit.
- parallel and concurrent are 2 different things
- the 'symbolic languages' definition seems off. Wikipedia puts it right:
> symbolic programming is computer programming in which the program can manipulate formulas and program components as data
So it's not "using graphs & such to program"
The symbolic paradigm isn't necessary for this, but it makes it easier. Implementation-wise these things require less special-casing than you would think, because the symbolic paradigm in essence encourages your representations to be naked. Thus images, plots, etc. are not "black box" items. They're structured data represented in the same language, which the frontend happens to display in visual form.
(Code itself is also data, except that if a symbol in the data has an associated transformation rule, that transformation rule is applied. This mechanism is how you make data do 'variables' and 'functions'. These variables and functions work more or less the same as they would in a common programming language, but the underlying mechanism is nothing but data transformations.)
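That mechanism (data everywhere, plus rewrite rules attached to symbols) fits in a few lines of Python. This is a toy evaluator, not how any real symbolic system is implemented; 'square' and 'x' are made-up bindings:

```python
rules = {}   # symbol -> a value or a rewriting function

def define(name, rule):
    rules[name] = rule

def evaluate(expr):
    # Expressions are plain data: ('head', args...) tuples, strings as symbols
    if isinstance(expr, tuple):
        head, *args = expr
        args = [evaluate(a) for a in args]
        if head in rules and callable(rules[head]):
            return evaluate(rules[head](*args))  # apply the transformation rule
        return (head, *args)                     # no rule: stays symbolic data
    if isinstance(expr, str) and expr in rules:
        return evaluate(rules[expr])             # a bound 'variable'
    return expr                                  # an atom

define("square", lambda n: n * n if isinstance(n, int) else ("*", n, n))
define("x", 5)

print(evaluate(("square", "x")))  # 25: 'x' rewrites to 5, then the rule fires
print(evaluate(("square", "y")))  # ('*', 'y', 'y'): unbound, stays as data
```

Note how the same expression either computes or remains inert data depending only on what rules exist; that is the "code is data" mechanism in miniature.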
[1] http://reference.wolfram.com/mathematica/ref/EdgeDetect.html
Mathematica is a commercial piece of desktop software that uses the Wolfram Language. Just like how RStudio uses R.
There are other product platforms (coming soon) that employ the Wolfram Language, both in cloud and desktop incarnations: http://www.wolframcloud.com/
for some valid critique.
That should be an implementation detail as far as the programmer is concerned. And languages should strive for parallelism anyway, not merely concurrency.
http://existentialtype.wordpress.com/2011/03/17/parallelism-... http://existentialtype.wordpress.com/2014/04/09/parallelism-...
[1] http://en.wikipedia.org/wiki/HP-48_series [2] http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
1: factorcode.org
I rarely use it because organization is such a pain, but its "data-flow" paradigm does simplify a lot of logic.
Organization gets much, much easier once you get the big architecture concepts: producer/consumer patterns, message queues, event structures, the actor framework. Yes, it can be painful, but the newer functionalities (clean-up, ridiculously easy sub-VI creation, ...) subtly improve it.
Btw: If you are using LabVIEW and have never used Quickdrop, try it. It is the most underutilized and amazing feature.
I think that any future the declarative paradigm has within general-purpose languages is the kind applied by compilers. For example, x = (a + b) - a can be reduced to x = b, or even eliminated altogether, with subsequent in-scope references to x being replaced with b. Another example is dead-code elimination. These forms of declarative optimization let you use an imperative or functional language immediately, but gently introduce you to declarative benefits without having to deal with all the mind-bending that is necessary to optimize pure declarative code.
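A compiler pass like that is just pattern matching on the expression tree. A minimal Python sketch (expressions as ('op', lhs, rhs) tuples; only the one rewrite from the example):

```python
def simplify(expr):
    # Rewrite ('sub', ('add', A, B), A) -> B  (and the symmetric case)
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    args = [simplify(a) for a in args]      # simplify bottom-up first
    if op == "sub" and isinstance(args[0], tuple) and args[0][0] == "add":
        _, lhs, rhs = args[0]
        if lhs == args[1]:
            return rhs                      # (a + b) - a  =>  b
        if rhs == args[1]:
            return lhs                      # (a + b) - b  =>  a
    return (op, *args)

print(simplify(("sub", ("add", "a", "b"), "a")))  # 'b'
```

Real compilers do this on an IR with care for side effects; for wrapping integer arithmetic this particular rewrite is safe even under overflow.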
That being said I love having logical variables and I would love having them in other situations.
http://fogbeam.blogspot.com/2013/05/prolog-im-going-to-learn...
I might be wrong, but I think where the logic community failed is in taking advantage of multiple processors. Goals can be broken down and shared across multiple cores/processors, so without changing code, the same program can be parallelized. It just seems no one has implemented it yet.
This is not immediately obvious to me, here's why (from the same wikipedia article [1]):
While functional languages typically do appear to specify "how", a compiler for a purely functional programming language is free to extensively rewrite the operational behavior of a function, so long as the same result is returned for the same inputs. This can be used to, for example, make a function compute its result in parallel, or to perform substantial optimizations (such as deforestation) that a compiler may not be able to safely apply to a language with side effects.
But I still think that functional programming goes beyond simply specifying "what you want". Take the problem of sorting an array, for example. A declarative specification of the sorting problem would be: given an array T, compute a permutation of T such that for all i, it's true that T[i] <= T[i+1] (some languages support this type of specification, e.g. [1]).
Clearly, here you don't describe "how to do it". In functional programming you can't do this; you have to explain "how", by implementing quicksort, for example.
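The difference is runnable. Here is the declarative spec taken literally in Python: a predicate, plus a faithful-but-hopeless "solver" that just searches permutations for one satisfying it:

```python
from itertools import permutations

def is_sorted(t):
    # The spec: for all i, T[i] <= T[i+1]
    return all(t[i] <= t[i + 1] for i in range(len(t) - 1))

def sort_by_spec(t):
    # 'What' with no 'how': search permutations of T for one meeting the spec.
    # Correct, and O(n!) slow; a logic language's solver would search smarter.
    return next(list(p) for p in permutations(t) if is_sorted(p))

print(sort_by_spec([3, 1, 2]))  # [1, 2, 3]
```

A functional quicksort, by contrast, still spells out the divide-and-recurse strategy; it's only the mutation that disappears.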
> This is not your grandma's "functional programming will change the world!" blog post
Presumably because a lot of people write blog posts about how FP will change your approach. That's true but, while I wouldn't say FP is mainstream, it's well-known in circles such as ours; whereas the topics he mentions are much less run-of-the-mill. It makes for a more interesting and original read.
I notice he didn't include array-based languages, like APL and J. They were flavour of the week here a couple of weeks ago, but you still don't get much written about them and they would fit nicely in this article.
Forth (concatenative) used to be pretty big, back then software was simpler and you did not need a Qt wrapper, XML parser etc. Forth was very practical for the things it was originally used for. Like that embedded control software for a telescope where the "user interface" consisted of a few buttons, as in physical buttons, no screen, no mouse, no harddisk, no network connection to worry about. In that environment Forth made a lot of sense, on a modern computer, as a tool to write a modern GUI application it makes no sense at all if you ask me.
And "declarative".. Prolog is old, back then it was usually called "logic-based programming", though.
Really, Forth and Prolog, every older guy who cares about programming knows those fellas.
And Prolog did not change the way I think about programming at all. I considered it breathtakingly clean and elegant (compared to C and Pascal).. for the few things it was good at (e.g. expert systems).. but utterly unsuitable as a general-purpose language / paradigm. I felt the same way about Standard ML (it was the dominant FP language before Haskell became big). "Wow, writing a simple input/output transformer (e.g. academic compiler) or doing some basic math is so clean and elegant with this! Everything else.. not so much."
In contrast, Forth actually changed my thinking. According to Charles Moore, Forth's creator, proper software development with Forth meant finding just the right set of words, the right vocabulary, for the problem domain. Once that was accomplished, writing down the program logic became trivial. Note that in Forth functions are called "words", are usually very short, and indeed word-like. So you end up with code like "ARM 90 DEGREES ROTATE".
As I said, my approach to programming was changed by the exposure to Forth. I aim to discover/define an efficient "vocabulary" for the problem domain in every language. That is not the same as designing a "domain-specific language", by the way. DSLs usually go much further, making up a syntax of their own. I do not like DSLs at all, mostly because of the learning curve forced upon you by every new DSL and the usually poor tool support.
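A rough Python sketch of that vocabulary-first style outside Forth (everything here is invented for illustration): tiny, word-like functions named after the domain, so the top-level logic reads almost like the Forth phrase.

```python
# Toy domain vocabulary for a motorized arm; each "word" is tiny and
# named after the domain, not after implementation details.
state = {"arm_angle": 0}

def rotate(part, degrees):
    """Rotate the named part by the given number of degrees."""
    state[f"{part}_angle"] = (state[f"{part}_angle"] + degrees) % 360

rotate("arm", 90)          # cf. Forth's "ARM 90 DEGREES ROTATE"
print(state["arm_angle"])  # 90
```

The point isn't the mechanics; it's that once the vocabulary fits the domain, the program logic at the top becomes a near-trivial sentence.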
EDIT: Removed needlessly trollish part
(Aside: I was somewhat disappointed by the blogger calling out the "Wolfram Language" as something special or to be admired. It's a ridiculous ad-hoc hodgepodge of a language. I stress the word language.)
Both Promela (Spin) and TLA+ have active communities and have found fairly wide use in industry. They are generally used for model checking, model extraction by guided abstraction, and development by refinement, but can be used in a much more ad hoc way to just experiment with parallel ideas.
Is this line of evolution in languages considered dead?
It may not be a language feature, but the things in the article seem rather academic by comparison.
[1] https://code.google.com/p/anic/wiki/Tutorial [2] http://lampwww.epfl.ch/funnel/
A true Code Search would work like this:
1. Type your search term into your code as a comment line, e.g. "Bubble sort StampArray by name".
2. Google/Bing/StackOverflow searches for your string, replacing your terms with generics: it searches for "Bubble sort [an array of objects] by [string variable]".
3. It takes the code results and shows them to you, replacing all instances of [string variable] with getName() and all instances of [Object[]] with StampArray.
4. You pick your favorite.
5. Your IDE adds the code to a "code search module" which you can edit.
6. Your edits get added to the search database.
The best part? You could even put your Declarative Programming engine -inside- of the Search just by populating initial search results. What about better code coming to exist in the future, you say? Well, you don't necessarily have to keep the same result forever. If it's been deprecated, you can re-do the search.
"It feels like I am sitting at the controls of a quantum computer. I've got all my qubits (terms) all wired together in some complicated expression and when power is applied every qubit will instantly collapse out of superposition and crystallize into a perfect answer."
(From something I've been working on, http://kmkeen.com/sat/ )
If you want to experiment with them you don't need an FPGA, you can just start with a simulator such as Icarus Verilog[1] and a waveform viewer like gtkwave[2] and get a feel of the language. There are a bunch of tutorials on the net.
[1] http://iverilog.icarus.com/ [2] http://gtkwave.sourceforge.net/
Also, debugging Verilog using only $display doesn't sound very fun...
It's a pretty cool language with a lot of paradigms one can easily try out.
[1] http://en.wikipedia.org/wiki/Oz_%28programming_language%29#D...
http://hackage.haskell.org/package/monad-par-0.3.4.6/docs/Co...
Dealing with process coordination using the resolution of logical variables gave me a refreshing new perspective. The finite domain constraint system design in Oz is an awesome example of this in action.
An interesting tidbit - the Mozart/Oz team invented "pickling" before it caught on with Python.
I might be confusing this with custom literals.
You also could create a class that wrapped an integer, that would have a range, and would maintain that range as a class invariant. It could either clip or throw an exception when an attempt was made to go out of the range. But that's just regular class stuff, so I don't think it's what you had in mind...
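A minimal sketch of such a wrapper in Python (the class name and the clip-vs-throw policy flag are my own choices):

```python
class RangedInt:
    """Wraps an int and maintains lo <= value <= hi as a class invariant."""

    def __init__(self, value, lo, hi, clip=False):
        self.lo, self.hi, self.clip = lo, hi, clip
        self.value = value  # routed through the setter below

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, v):
        if v < self.lo or v > self.hi:
            if self.clip:
                v = max(self.lo, min(v, self.hi))  # clip into range
            else:
                raise ValueError(f"{v} outside [{self.lo}, {self.hi}]")
        self._value = v

x = RangedInt(12, 0, 10, clip=True)
print(x.value)  # 10 (clipped)
try:
    RangedInt(12, 0, 10)  # no clipping: invariant enforced by exception
except ValueError as e:
    print(e)
```

As the comment says, this is just regular class machinery: the invariant lives in the setter, not in the type, so nothing checks it at compile time.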
[1,2,3] : Vector 3 Int
we can write `append`, which combines Vectors:
append : (Vector n a, Vector m a) -> Vector (n + m) a
which works as you expect, because the expression in the type (n + m) is written in the value language. Here's another (slightly pathological) dependent type. Normally `if` statements require that the then and the else branches result in the same type, but in a DT language you can write
if' : (cond : Bool) -> a -> b -> (if cond then a else b)
In other words, `if'` has a type where you first provide it a boolean, which we'll call `cond`, and then two differently typed continuation parameters. The type of the result is an expression: "if cond is true, then this is the type of the first continuation; otherwise this is the type of the second continuation". Both of these examples would be proud to call themselves trivial. The DT rabbit hole goes very deep.
Longer answer: sort of. An array has an element type and a length associated with it (and in ATS at least, the type-level length isn't kept at run-time); that length is set when the array is created, and modified if elements are added or removed. However, functions can require parameter types or provide return values of "array T n | n > 0 && n < m" --- the array cannot be empty and has to be smaller than some limit.
There's a good deal more to dependent types, but they are much more flexible than simply attaching the length of the array to its type.
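Python can't check any of this at compile time, but a runtime sketch (all names invented) shows the shape of the idea: the length index travels with the value, and `append`'s result length is n + m by construction.

```python
class Vector:
    """Runtime stand-in for a length-indexed vector: the length
    index is carried alongside the data and checked against it."""

    def __init__(self, items, n):
        assert len(items) == n, "length index must match the data"
        self.items, self.n = list(items), n

def append(v, w):
    # cf. append : (Vector n a, Vector m a) -> Vector (n + m) a
    return Vector(v.items + w.items, v.n + w.n)

v = append(Vector([1, 2, 3], 3), Vector([4, 5], 2))
print(v.n)  # 5
```

In a real dependently typed language the assert disappears: a mismatched index is a type error, rejected before the program ever runs.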
I would add object oriented programming (eg: smalltalk), macro programming (eg: lisp) and functional programming (eg: Haskell) to the list of things that change your thinking, but since most of the article's prospective audience has already heard of those I was happy that he stuck with paradigms that are LESS well known.
http://www.reddit.com/r/programming/comments/22nhb2/six_prog...
This is so wrong I don't know where to begin.
http://en.wikipedia.org/wiki/Declarative_programming
"Wikipedia is mistaken" is totally a defensible position, but it should be defended, so please find a place to begin.
To me, in a declarative programming language you tell the computer what you want without exactly telling it how to achieve it. It's the compiler's job to figure that out. That classic example of a declarative programming language is Prolog.
In databases there are two traditional approaches to expressing queries: the relational algebra and the relational calculus. One of these (the relational calculus) is very clearly declarative: you essentially say "give me all students whose ID numbers are less than 1000 and who took a class from someone who is no longer a member of the faculty." The interpreter figures out what relational operations should be performed to do this.
The other option is the relational algebra, which to me is very obviously not declarative: you are telling the system exactly what to do, and giving it an implicit ordering (though, just like in any procedural language, it can change the ordering if it thinks that's a good idea). Thus in the relational algebra you'd say: "Get the table of students. Reduce it to those whose ID numbers are less than 1000. Join it with those students who took a class. Take the list of faculty. Reduce it to those who are no longer teaching. Join that with the previously generated students relation. Project out just the student names."
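The contrast can be sketched in Python (toy data, all names invented): a calculus-style comprehension that only states the membership condition, versus an algebra-style pipeline of explicitly ordered select/project steps.

```python
students = [{"id": 42,   "name": "Ada",   "teacher": "Boole"},
            {"id": 1001, "name": "Cliff", "teacher": "Boole"},
            {"id": 7,    "name": "Grace", "teacher": "Turing"}]
faculty = {"Turing"}  # Boole is no longer on the faculty

# Calculus style: state the condition membership must satisfy, nothing more.
calc = [s["name"] for s in students
        if s["id"] < 1000 and s["teacher"] not in faculty]

# Algebra style: an explicit ordering of operations.
step1 = [s for s in students if s["id"] < 1000]            # selection on id
step2 = [s for s in step1 if s["teacher"] not in faculty]  # selection vs. faculty
alg = [s["name"] for s in step2]                           # projection

print(calc)  # ['Ada']
print(alg)   # ['Ada']
```

Both produce the same answer; the difference is that the algebra version commits to an evaluation order, which the system may then reorder as an optimization.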
The primary language for the declarative relational calculus is Datalog.
The primary language for the (non-declarative) relational algebra is SQL. Though SQL has a few extra gizmos added to compensate for the fact that it's less expressive than the relational calculus.
Let's also remember that many companies stuck with big RDBMs systems have entire teams of people whose job includes turning declarative statements into extremely procedural ones.
So in practice, writing a SQL statement often has little to do with making a query readable, and a lot to do with abusing knowledge of internals to make it run fast. So declarative? Not so much.
Has anyone made an RDBM that let you write SQL and separately control how it is evaluated?
For example, I consider the sublanguage in printf to be more declarative than the rest of C as well.
SQL as it stands, especially in contrast to what the existing models were at the time it was introduced, such as CODASYL.
SQL is the one language we use that somewhat operates at the level of thought.