Good job Facebook.
I can certainly see being stuck with Javascript (just like we're stuck with the x86 instruction set even if simpler alternatives exist), but I'm not sure it's something I rejoice about. Javascript is like anti-Batman: a language we all deserve, but not one we need.
While it's no Haskell, it certainly isn't much more verbose than other dynamic languages anymore. And a type system like TypeScript or Flow pretty much eliminates the rest of the gotchas.
Off the top of my head, there are two embarrassing holes: bigger integers and parallelism. Can't think of anything else at the moment (macros maybe, but they're a double-edged sword wrt tooling). Wonder if anything else is missing?
I think we all agree. JS has very ugly things, but it isn't going anywhere for the foreseeable future.
If we're going to use it for at least a few more years, I'd applaud anyone making better tools and rejoice when I see better frameworks and easier-to-use libraries.
Now, by the time alternatives to JavaScript become viable, we might have made JavaScript into something far better than what we have now. It could survive a long, long time, and with people actively making it evolve, it could be very enjoyable. Who knows.
If you factor in all these improvements, and the fact that it runs brilliantly on the server, it's a vastly different situation than just a few years ago.
You're looking at it from a technical perspective, and from a technical perspective JS is an awful language.
It may be a turd. But it's the only sandboxed-by-default, zero-install, reasonably-fast, free, preinstalled-on-every-machine turd that we have.
For example, yes callbacks were very messy but very soon we will have generators. And, yes, the scoping was nasty but soon we will have the 'let' keyword. I cannot remember where I read it, but I do also remember seeing a talk about some proposals to extend "use strict" to allow people to fix some of the type-casting behaviours made infamous by wat, too.
My point isn't that everything is fixed and we can stop complaining about the bad parts. My point is that it is very impressive how those in charge are handling the evolution of the language.
I think it's worth taking a bet on a language which improves so much every year.
I would consider C++ to be way more verbose than Javascript. Especially with ES6 coming (and things like Flow / Typescript allowing you to use ES6 today).
Weird scoping? What are you talking about here? It's not the same as in other languages, but that doesn't make it weird. When you understand how it works, it's not a problem. Use it to your advantage.
Not nice to optimize for? Javascript is fast enough for most tasks, provided you use best practices. You can even build AAA games with it nowadays, through ASM.js. I'd like to learn more about what you mean exactly when you say it's not nice to optimize for, if you have the time.
I get that Javascript has its quirks. But so do most languages. What's awesome is that JS is easy to get started with, but can be used to build complex apps (especially with things like Flow / TS). And it works everywhere. And it has an amazing ecosystem of client-side and server-side libraries.
- Rule #1: C-like syntax
- Rule #2: Dynamic typing with optional static types
- Rule #3: Performance
- Rule #4: Tools
- Rule #5: Kitchen Sink
- Rule #6: Multi-Platform
This tool provides #2 for Javascript, the NBL.
JS the language of the future? Why? It has probably the worst gotchas of any language I've coded in.
You have conflated quality and popularity. I'm interested in who the driving force is behind this open-source change at Facebook; I don't recall Facebook behaving this way 4 years ago.
Can anyone find anything on a policy change that happened? They really turned around.
From my point of view, most of what has changed is resources and the immediacy of our survival-level concerns. Four years ago Google had declared nuclear war on us, we had far fewer users, we were not profitable, there were constant fires to put out with basic production operations stuff we've gotten better at, and we were enormously more under-staffed. I was working on HHVM already, but it was in a million little pieces spread across Drew's, Jason's, and my desks. The tools we're open sourcing over the last two years mostly did not yet exist, and if they existed, it was in some primordial form. We also have gotten much, much better imho at being good stewards of our open source projects; HHVM's predecessor system, the HipHop compiler, was also open source, but people were spread way too thin to be able to respond to bug reports, pull requests, get FB's latest code into public hands, build binary packages for popular distros, etc. on a timely basis. Huge props are due to all of the technical people on our open source teams.
I don't think there's been any policy change. I think the policy has always been that we're open to open sourcing stuff, so long as it is useful to others (i.e., not just a code drop that nobody can use), someone signs up for the work, and there isn't something important being dropped in the process.
The difference I think is people, energy, and momentum.
We've been able to find people (many already at the company) genuinely interested in some of the less glamorous parts of building a scalable program to open source things - things like sync processes and pull request management and UIs for ACLs and CLAs and so forth.
We've had people who, often due to the availability of these tools, have developed an internal energy to want to put in the extra effort to make their project ready to be open sourced (like making sure all dependencies are available already, or scoping out or stubbing out things that are Facebook-specific (our asset management flow, for example) while still keeping the software useful, and so forth).
The momentum has also made the idea of open sourcing code more top-of-mind to people, which helps to get people to rewrite their changes to accommodate a nascent open sourcing effort on a piece of code, or to get more discretionary time to investigate or work on making something open source. Or even just moral support from your team and colleagues.
Not to say that Google's OSS hasn't done much of the same, but AFAIK many of their projects fail on at least a couple of these points.
My guess is that Facebook was just much smaller 4 years ago; over that period its staff grew very quickly. And even today it has an order of magnitude fewer employees than, say, Google, Apple, Microsoft, etc.
Larger, more established companies have more opportunity to open source things on this scale.
"...on the web language of the past, which we are unfortunately stuck with for the foreseeable future: JavaScript" might have been a better summation of the current (rather dismal) state of affairs with regard to web scripting.
That said, I am curious what problems this solves that aren't already solved by enforcing good code coverage. Full disclaimer: the largest JS projects I've worked on were in the tens of thousands of lines, not hundreds of thousands, but type checking just seemed completely unnecessary provided a good coding guide and test coverage were maintained and enforced.
It solves not having to write a bunch of tests for errors that can trivially be caught by a program, and never having to update or maintain those tests. It hits every code path automatically; you don't have to think up cases to try to hit edge cases.
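A hypothetical sketch of the kind of bug this means (the names here are made up for illustration): a checker flags the typo statically, while a test suite only catches it if some test happens to exercise this exact call site.

```javascript
// Hypothetical example: a property typo that a type checker would flag
// without running anything, but tests only catch if they hit this path.
function greet(user) {
  return 'Hello, ' + user.name;
}

// Typo: the property is 'nmae', so user.name is undefined at runtime.
const result = greet({ nmae: 'Ada' });
console.log(result); // "Hello, undefined" -- no error is ever thrown
```

No exception, no failing assertion unless you wrote one — exactly the class of silent bug static checking removes for free.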
- It type checks JSX

- It supports some ES6 features others don't (like destructuring) as part of the build step

- It has union types (TS will get those soon, already in master)

- It does a lot more inference and makes a lot more assumptions. It assumes you won't multiply a string by a number, for example (although technically '10' * 5 is legal in JS). So it's opinionated in that it enforces checks that rule out code that is legal but likely not intended in JS (an opinion I agree with).
Yes - it is possible to write tests and get good coverage and not use these tools - however, static analysis can be very valuable in finding errors early. While I'm not sure I'd bother with explicit typing in smaller projects, from the examples and unit tests it looks like it could find errors people would not easily notice otherwise. (Think JSHint, on steroids.)
That said, only time will tell how good the implementation will really get in understanding your code.
you're not wrong, but out of curiosity, I just genuinely wonder how often modern programmers really hit type errors on smaller projects (obviously not FB size). I don't think I've ever had a type cast bug in my js code, provided we don't include accidental nulls in that statement. So in my experience, if I were ever told that I now had to always use annotations, I would feel like I was losing flexibility in the language for little gain. And I feel like the reason I never hit type issues is that I write full test coverage, which in turn makes it painfully obvious what is and is not expected in each method.
For the reasons above, I love the idea of this tool dynamically checking all my code paths, and looking for things that are likely mistakes or result in null exceptions.
But I'm afraid to recommend this to my boss, for fear that from now on, everything must be maximum static... everything annotated, no union types, etc.
Just my 2 cents.
The (supposed) need to have "good code coverage" is itself a problem I'd like solved.
This tool claims it can do advanced inference. If it were possible to hook it up to an editor or IDE to analyse existing untyped js in this way it could be invaluable.
Typescript (from MS) sounds a lot like what you describe - it doesn't change Javascript, it just adds types to it (and makes some ES6 features available), and it puts a lot of work into playing nice with the wider JS ecosystem (e.g. via the definitelytyped project, which integrates type definitions for most popular "third-party" javascript libraries).
> That said, I am curious what problems this solves that aren't already solved by enforcing good code coverage. Full disclaimer: the largest JS projects I've worked on were in the tens of thousands of lines, not hundreds of thousands, but type checking just seemed completely unnecessary provided a good coding guide and test coverage were maintained and enforced.
Perfect testing can do everything a type system can. But a type system can do it with less programmer effort, much lower maintenance overhead, and in a standard form that makes it easier to maintain. So you can do the same thing but cheaper - or, more realistically for how software is developed in most companies, you can get better reliability for the same engineering budget.
The Google Closure Compiler (https://developers.google.com/closure/compiler/) has been around for years and years.
It does everything you mentioned (100% optional type annotations, type inference), plus more (dead code removal, inlining, compiler-time constants).
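For anyone who hasn't used it: Closure's annotations live entirely in JSDoc comments, so an annotated file is still plain JavaScript that any engine runs unchanged (minimal sketch):

```javascript
// Closure-style annotations are just JSDoc comments: ordinary JS engines
// ignore them, while the Closure Compiler uses them for type checking.

/**
 * @param {number} x
 * @return {number}
 */
function double(x) {
  return x * 2;
}

console.log(double(21)); // 42
```

That comment-only approach is why Closure never needed a compile-to-JS step just to get types, unlike TypeScript or Flow's inline annotation syntax.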
Plus it's not incremental, hence unusably slow, and it requires Java. No, Closure is not the same as Flow, even from a brief look at both.
> code intelligence, which aids code maintenance, navigation, transformation, and optimization
I suspect that for facebook, this could be just as important, if not more so, than code quality benefits.
I specifically didn't send this tool to the team I work on, because my team lead is a sql / java / c# guy who loves static languages, and only touches the front end with a stick if he has to, and then only some basic jQuery or angular.
I've sold him on jasmine and requiring front end test coverage, recently. But right before I hit the send button I realized that if I sent him this tool, I'd never be allowed to use dynamic typing or non annotated functions/arguments in js ever again.
Hence my question : /
(ref this absolutely fascinating paper
http://bibliography.selflanguage.org/_static/implementation....
and this piece of V8 dox quoting the aforementioned paper
https://developers.google.com/v8/design)
It seems that adding a type system to a dynamic language has little real drawbacks compared to designing language and type system at the same time, for both performance and type safety considerations.
This all seem extremely cool.
I went straight from hacking Scala and Haskell as a hobbyist to doing (mostly) front-end JS job, and I've always found that my code, and a lot of good libraries I read, naturally emulate something close to Hindley-Milner typing, by using objects as tuples/records and arrays as (hopefully well-typed) lists, as well as the natural flexibility of objects as a poor substitute for Either types.
I'm definitely pleased to see that the designers of this library have also realized that strongly-typed javascript was just a few annotations and a type inference algorithm away.
I'm just wondering why nullable types are implemented as such and not as a natural consequence of full sum types, which are inexplicably absent.
Promise<A,E> -> ((A -> (Promise<B,F> | B)),(E -> (Promise<C,G> | C))) -> Promise<B|C,E|G|C>
That is - a promise's then - takes the promise (as this) and executes either a `.then` fulfillment handler or a catch handler.
If the `fulfill` handler executes the value is unwrapped and either a new value, or a Promise over a new value and its own type of error is returned.
Now, if the `reject` handler is executed the error is unwrapped and either a new value, or a promise over a new value or a new error is returned.
This is quite simple and easy to use because it behaves like try/catch in the dynamic type system of JS with recursive unwrapping - however it is challenging to reason about when you're starting to type code and you want to actually have correct type information with promises.
Static languages generally approach these problems with pattern matching on the type - in JS that's not common, nor is it feasible at runtime - you just expect a value of a certain type. When I implemented promises in another language (Swift) this was a lot of fun to work through and not very trivial - if their compiler can do this I'd be very impressed.
Promises are just one example.
Anyway - this looks cool. I definitely agree that full sum types would've made more sense - having explicit nullables is usually a smell (like in C#).
Let's say a promise is a thing which can either succeed or fail eventually. If it fails, it gives a type e; if it succeeds, a type a:

    Promise e a
Now, `then` operates on the successful result, transforming it into a new promise of a different kind. The result is a total promise of the new kind:

    then :: Promise e a -> (a -> Promise e b) -> Promise e b
I'll contend now that this is sufficient. Here's what you appear to lose:

1. No differentiation of error types

2. No explicit annotation of the ability to return constant/non-promise values

3. No tied-in error handling
That's fine, though. First, for (1), we'll note that it ought to be easy to provide an error-mapping function. This is just a continuation which gets applied to errors upon generation (if they occur):

    mapError :: (e -> e') -> Promise e a -> Promise e' a
For (2) we'll note that it's always possible to turn a non-promised value into a promise by returning it immediately:

    pure :: a -> Promise e a
Then for (3), we can build in error-catching continuations:

    catch :: Promise e a -> (e -> Promise e' a) -> Promise e' a
We appear to lose the ability to change the result type of the promise upon catching an error, but we can regain that by pre-composition with `then`. So, each of these smaller types is now very nice to work with. They are equivalent in power to the fully-loaded `then` you gave, but their use is much more compartmentalized. This is how you avoid frightful types.
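A quick synchronous sketch of those three operations, using a plain tagged result object instead of a real async promise (purely to show the shapes; the names `pure`, `then_`, `mapError`, `catch_` are mine):

```javascript
// Synchronous stand-in for a promise: { ok: true, value } on success,
// { ok: false, error } on failure. Illustrative only.
const pure = (a) => ({ ok: true, value: a });   // pure :: a -> Promise e a
const fail = (e) => ({ ok: false, error: e });

// then :: Promise e a -> (a -> Promise e b) -> Promise e b
const then_ = (p, f) => (p.ok ? f(p.value) : p);

// mapError :: (e -> e') -> Promise e a -> Promise e' a
const mapError = (f, p) => (p.ok ? p : fail(f(p.error)));

// catch :: Promise e a -> (e -> Promise e' a) -> Promise e' a
const catch_ = (p, f) => (p.ok ? p : f(p.error));

console.log(then_(pure(2), (x) => pure(x + 1)).value);   // 3
console.log(catch_(fail('boom'), (e) => pure(0)).value); // 0
```

Each operation touches exactly one concern, which is the compartmentalization argument above made concrete.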
I don't care about the type signature of promises. I actually don't care about the type of anything which has a complicated type; I think it's just incredibly winning that I will now be able to describe the shape of the raw data circulating in my code.
I've always found it really annoying, in Haskell, to see incredibly complex type idioms emerging to allow stuff that doesn't really deserve it. And yes, I'm talking about monad transformers.
Don't get me wrong, I think that Haskell and the typing techniques and idioms it has fostered are a tremendous achievement, but right now, I'm more focused on bringing my web code, which right now is unfortunately a jungle of implicitly-typed garbage, closer to a safe and predictable better-typed form.
I tried to do that in Haskell on the back-end, but every time I tried, I lost mind-boggling amounts of time dealing with the monadic stack of the framework I tried.
As the saying goes, I'm not clever enough to use dynamic typing, and bugs happen. Unfortunately, I'm also not clever enough to use real strong typing, and nothing compiles, let alone gets done.
Hence why I'm immensely thankful to see facebook embracing gradual typing in a way that lets me leverage my knowledge of algebraic typing.
Promise A E -> (A -> A') -> (E -> E') -> Promise A' E'
That would eliminate some strange corner cases and make it easier to explain the function. The special cases could just get their own functions. Real algebraic data types would probably eliminate the need for E entirely and make it even simpler.
    case mx of
      Just x -> f(x)

vs

    if (x != null) { f(x) }
What you can do in a Javascript-like language is use union and intersection types. However, they can get a bit complicated (especially if you allow unions of non-primitive types) and the extra flexibility can confuse the type inference a bit, so I can understand them restricting things to the common case of handling null.

    function nothing() {
      return {
        match: function(cases) { return cases.nothing(); }
      };
    }

    function just(x) {
      return {
        match: function(cases) { return cases.just(x); }
      };
    }

    just(1).match({
      just: function(x) { return x + 1; },
      nothing: function() { return 0; }
    });

Haskell-style sum types are a generic type with multiple values describing the alternatives.
Since the goal of Flow is to typecheck existing JS semantics, the addition of wrapper types and objects to support such sum types makes very little sense while the addition of "anonymous" union types makes a lot of sense. Dialyzer took the exact same path (except even more so as its type unions can contain values) as it tried to encode Erlang's existing semantics.
I missed the part about sum types!
My god this is perfect
It has static type checking with optional type annotations and type inference.
It doesn't have compiler-time constants, dead code removal, inlining, or other optimizations.
But....still really cool.
- more powerful type system (union types, hurray)
- support for JSX
- no windows binaries
- supports more of ES6 stuff
- ...but has no support for modules yet
- no generics (??)
How about performance? And workflow? I didn't find this yet: does it use a normal "write then compile" model like TS, or something like Hack (which, if I'm not mistaken, has a daemon running in the background, checking the code as you write it)?
Wonder why FB decided to roll their own instead of using TS.
- As other comments have said: TypeScript is getting (already in the master branch) union types.
- Support for JSX isn't really a huge deal.
- Windows support will probably come - it's an open source library. I hope it's not blocked by the OCaml tooling.
You make some great points about generics and using their own type system instead of TS, especially since TS has investments from both Microsoft and Google (with AtScript which supersets it).
They state they use a model like Hack - and the repo also looks this way but I'm also curious, it looks like a very peculiar choice.
http://flowtype.org/docs/classes.html#polymorphic-classes
http://flowtype.org/docs/functions.html#polymorphic-function...
For workflow, check out: http://flowtype.org/docs/getting-started.html#_
I see some mentions of import in the tests and the grammar: https://github.com/facebook/flow/search?utf8=%E2%9C%93&q=imp...
Then perhaps you are understating the importance of software correctness. While type systems can be an almost religious topic, the benefits of type checking are real -- a whole class of bugs to disregard, less testing code, and a more maintainable codebase for other developers. Moreover, for languages with ADTs and exhaustive type-checking, you are forced to reason about boundary and error conditions. All of which leads to higher quality software, at the minimal cost of up-front work when designing your types and fixing what the compiler/checker says.
> type related errors are usually easy to find and fix and have rarely if ever been the root cause of our most difficult problems.
Type related errors may be easy to diagnose and fix -- although by definition, in the absence of type-checking, type errors are only so after the fact, eg after causing a crash. Hence the utility of type checkers, especially if it means fewer avoidable errors appearing in production.
> Dynamic type checking and implicit conversion is one of the more powerful features of JavaScript and certainly no less prone to error or counter-productive than type-casting, making variadic functions or class templates are in other languages.
Type checking doesn't diminish the usefulness/convenience of dynamic languages, but IMO lends more weight to the benefits of strongly static languages -- benefits which are negated by the abuse of "features" such as typecasting or variadic functions.
There are other issues as well, JavaScript doesn't check function arity on function calls (the arguments not supplied just get the "undefined" value), control structures do not introduce a new lexical scope etc.
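For example (plain JS, easy to verify in any engine) — a call with missing arguments doesn't fail at the call site; the error surfaces later as NaN, or never:

```javascript
// JavaScript never checks arity: missing arguments silently become
// undefined, so the mistake propagates instead of failing fast.
function area(width, height) {
  return width * height;
}

console.log(area(3, 4)); // 12
console.log(area(3));    // NaN -- height is undefined, no error thrown
```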
We've had plenty of other bugs in the past months that took hours to track down but would have been caught in seconds in a sensible language. A lot of them won't be caught by Flow either, unfortunately. What I really want is for operations like key lookup to fail instead of just returning me gibberish.
Typecasting is at least explicit. In js every single operation is a timebomb.
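A few of the classics, for anyone who hasn't been bitten yet — near-identical expressions that diverge silently thanks to implicit conversion:

```javascript
// Implicit conversions in action: same-looking operations, wildly
// different results, and never an error.
console.log('10' * 5);  // 50     -- string coerced to number
console.log('10' + 5);  // "105"  -- number coerced to string
console.log([] + {});   // "[object Object]" -- both coerced to strings
console.log(1 < 2 < 3); // true  -- but only by accident: (1 < 2) is true, coerced to 1
console.log(3 > 2 > 1); // false -- (3 > 2) is true, coerced to 1, and 1 > 1 fails
```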
In a language without a strong type system you rarely appreciate quite how many of your invariants could be lifted into the type system. When I wrote Python most of my errors didn't seem like type errors (e.g. I remember forgetting to close a connection and so leaking connections), but now that I write Scala I can see how I'd structure my program using types so that that would be a type error (I'd use monads to compose the idea of an operation that uses a connection, and then execute them in one place).
(Python now has an ad-hoc fix for this specific problem in the form of the "with" statement, just as 3.4 adds an ad-hoc fix for the proliferation of different ways of doing async calls. But a good type system is a general solution to both these problems and more).
> Dynamic type checking and implicit conversion is one of the more powerful features of JavaScript
In a strongly typed language you can do this in a controlled way; it can be part of the language, and can apply equally well to user-defined structures. Whereas with Javascript you're stuck with those implicit conversions built into the language, and if you want to convert e.g. an address datatype, you're out of luck.
> certainly no less prone to error or counter-productive than type-casting, making variadic functions or class templates are in other languages.
Then use a language that doesn't have those problems either. There are good languages out there - if Scala isn't for you then how about OCaml or Haskell?
Then again their rationale is very clear and pretty good: You need builds if you're using Facebook's stack anyway (for JSX) so this should not interfere with your current build - which you have to do anyway.
    flow examples/01_HelloWorld/hello.js

outputs:

    hello.js:7:5,19: string
    This type is incompatible with
      hello.js:4:10,13: number

    Found 1 error

while

    tsc --noImplicitAny hello.ts

will result in:

    hello.ts(2,14): error TS7006: Parameter 'x' implicitly has an 'any' type.

I do not see much difference.

[1]: http://flowtype.org
    function length(x) {
      return x.length;
    }

    length(null);

can never be a compile-time error in TypeScript.

tl;dr: they want to support both of those features; the question is what syntax to use and how to introduce those features into the existing ecosystem.
TypeScript doesn't have that concept, although it's been suggested by the community more than once.
Besides perhaps the extra 'compile' time added to do both translation with TypeScript and then static analysis with Flow. Both tools have their advantages and disadvantages.
May as well throw in a linting tool as well.
Tern.js actually detects types, and maybe it would be possible for ESLint to incorporate it somehow to detect invalid use of types.
One big ternjs plus for me is the fact that tern.js knows about require.js modules and can look in other require'd files.
Thanks to everyone at Facebook who worked on this. You guys are awesome.
Also: The fact that this is written primarily in OCaml (as opposed to JS) is an excellent example of people choosing the right tool for the job.
JavaScript has a weird ecosystem where it is extremely helpful to have all of your tools in the same language. browser-based IDEs, Node, portability, etc, and just one fewer runtime to juggle.
Same reasons why Closure is awkward as a Java program.
And on the side note, I bet Facebook did this just to make nerds install OCaml and show them the light :)
I think the key is that these projects actually provide value - they make things faster, more reliable, more scalable - whether that's the code's execution or the people writing the code (or debugging issues, or whatever).
They generally aren't solutions seeking problems - they are responses to problems that exist.
Engineers generally don't build things like this on the weekend - unless they like to structure their time like that, I guess. It may or may not be a full-time job, but the job, whatever it is, isn't some search for abstract perfection; it is, again, to solve real problems encountered by others in the company. Often it is a part-time component built as part of trying to solve some more direct goal - like fighting spam, or serving bits, or whatever.
Often it is something the engineers just do - it makes sense to break things up into libraries, or services, or whatever, and they do that, and then that library or service is usable/useful elsewhere, and that's it. Other times they may suggest and motivate it as a goal-in-itself in a team goal setting situation.
I doubt that's a particularly useful answer, but maybe with further questions I can make it more useful to you?
It does raise the interesting question of whether Facebook employees are doing this work just to avoid the work that is the "core competency" of the company. Especially given the fact that they don't gain a competitive advantage from releasing the work that Facebook paid for into the wild. By this I mean the company benefit would seem to be attracting other talent. And the personal benefit for the devs is to get their name out there on something cool and interesting. Certainly, working on the best way to advertise to users is a lot less exciting/sexy than working on static type checking for JavaScript.
It seems like the vast majority of companies are not far enough out in front of their production issues and requests from the business side that engineers could do this sort of thing. So I guess it's impressive that Facebook (and probably Google) are in that position.
(It's a different matter that we would have done this on our free time anyway, because it's so much fun!)
What mechanism ensures this becomes valid javascript? does the code need to be compiled?
The code has a build step - yes. You will not be able to run this code without a build step. It would be awesome if they allowed annotations in comments like other tools do - there is a GH issue on that, and it looks like it should be possible: https://github.com/facebook/flow/blob/master/src/typing/comm...
http://flowtype.org/docs/existing.html#_ http://flowtype.org/docs/running.html#_
The caveats I heard about transpilers often boil down to difficulty of debugging and lack of libraries. But with the amazing browser dev tools we have, debugging potential issues is not that painful. Every language compiling to js provides FFI and/or some escape hatch so you can write javascript manually, for performance tuning or for using 3rd party libs.
Even if you do write "raw" javascript, some sort of compile step is unavoidable, for running jshint, concatenating, minifying, etc. Why not walk the extra mile and use a better language?
BTW, I'm not saying a tool like this is not super-useful, specially if you already have thousands of lines of js code that you can't get rid of. Congrats to the Facebook team for the release!
I think Dart seems to be a great language and IE, Firefox and Safari should have implemented it years ago, but they didn't. Now I think TypeScript is a great addition to javascript and I hope they build it into the browsers but I suppose they won't (maybe EcmaScript 7 will have some parts of TypeScript in it, or parts from AtScript from Google).
By the way, you probably still want minification and concatenated files when you create JS from other languages. That stops me from using them; I would have many levels of tools between my source code and the production code.
People will tell me that it is a good language if you know how to use it, comparing javascript mastery to C mastery in a sense. I think there lies the problem.
It helps that companies like Google and Facebook have invested a significant amount of research power into designing frameworks and tooling around it. Just from these two companies alone, we have tools like React, Angular, Karma, JSX, Jest, and now Flow. Tooling that involves the browser more includes Polymer and Traceur (ES6 to ES5 transpiler).
To contrast this, I have been doing development with Cordova the past week & writing Cordova plugins to fill in missing functionality - the plugin ecosystem with Cordova is horrid, and the documentation is often awful. To compound it, Android developers don't seem to believe in documenting their libraries well.
I will take the JS ecosystem any day when confronted with a choice like that.
My personal preference is to have annotations because it helps future readers and maintainers understand the code better. Instead of looking through the function to see that the variable is in fact a number, I'd rather just read "@param x {number}". And at that point, one may as well use Closure.
Also, some techniques from Typed Racket (occurrence typing): http://www.ccs.neu.edu/racket/pubs/popl08-thf.pdf
Many other papers have influenced the design in some way or the other. For example, Abadi/Cardelli's theory of objects.
Looks really handy.
[1] - http://flowtype.org/
The React.js team has been explicit about the library being GCC advanced mode compatible, so they certainly have awareness of its capabilities. Whether they are using them together internally or they use a different solution for tree shaking et al is another question.
Flow starts with JS and adds a static type system (with attempts to infer types directly from the code). With the exception of the type annotations, Flow is JavaScript. Dart is not JavaScript.
I wonder which will have the most impact: code quality or types as documentation (esp for tooling)?
They are adapting to common idioms, rather than designing it from the ground up. This ad hoc approach is a great way to build useful tools (and startups), but it's also usually a mess. Like NN4. But, they seem to be type experts - plus they're using ocaml. Maybe ad hoc by experts is the way to get these ideas adopted?
Also, if your code is implicitly statically typed (as checked by Flow) you will likely hit all the right optimizations in the underlying JavaScript VM.
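As an illustration (assuming a V8-style engine with hidden classes and inline caches — the function and values here are made up), code that a checker like Flow would accept tends to keep call sites monomorphic, which is exactly what those optimizations reward:

```javascript
// Hypothetical illustration: every call passes objects with the same
// shape ({ x, y } with numbers), so this call site stays monomorphic
// and an inline cache can serve it on the fast path.
function magnitude(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

console.log(magnitude({ x: 3, y: 4 })); // 5
console.log(magnitude({ x: 6, y: 8 })); // 10

// Mixing shapes (extra fields, different property order, strings for
// numbers) is precisely what a static check rules out -- and what
// would make the site polymorphic and slower.
```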
The existing available definitions from the DefinitelyTyped project are a huge productivity booster. Apparently Flow also has similar .d.flow files, but it will probably be a while until they exist for common projects.
After investigating this thought, it looks like this already at the top of the list for future plans for Flow:
(1) zero-work: works instantly with existing code and esp third party libraries; and
(2) instant-benefit: provides some compelling benefit in that zero-work case above (of course, it's OK if it provides more benefit if you do more work, adding type annotations etc).
[1] http://flowtype.org/docs/react-example.html#general-annotati...
It'd be quite possible to add the ability to output these annotations to the CoffeeScript compiler itself though. Could be an interesting fork.
You can probably hack it using the backtick operator though.
    `function foo(x: string, y: number): string {`
      x.length * y
    `}`

I'm surprised I didn't hear more about this before, since it was apparently unveiled at the "Flow" conference. I wasn't at the conference, and somehow I missed any prior mention of it.
The problem was that there were two identical comments, the software killed one as a dupe, and avik deleted the other one. The software tries to fix this very scenario—it normally would have automatically unkilled the remaining member of the pair. But there are some corner cases where that doesn't work, and avik seems to have outsmarted it.
We'll take a look and try to fix the fix.
Edit: I assume it just checks bool/number/string and doesn't care about prototypes?
If I am not mistaken, this tech could be used to build IDEs roughly similar to what's available for Java, couldn't it?
I'm surprised that they don't seem to have launched with a public editor plugin and that the documentation doesn't seem to mention it.
If CoffeeScript is to support these annotations one day - it would require the CoffeeScript compiler to support them itself in order to generate correct annotated JavaScript for Flow.
http://en.wikipedia.org/wiki/Type_system#Static_type-checkin...
Even if that wasn't the case, their design just avoids the issue by not trying to do any typecast, ever.
Things like multi-threading which aren't possible in javascript aren't supported, but a lot of the JRE which is used in most code does come out of the box. It's a huge improvement over regular javascript, anyway. And client + server code can be shared.
Direct link
I mean, people could not get this saarcaaasm!
This makes me sad!