And certainly not bidirectional type inference; the author of this post's definition of the concept isn't even right (bidirectional typing refers to having a distinction between typing judgements which are used to infer types and those which are used to check types, not moving bidirectionally between parent nodes & child nodes). I don't know if the mistake comes from the post author or Chris Lattner, and I don't know if the word "bidirectional" is relevant to Swift's typing; I don't know whether Swift has a formal description of its type system, or whether that formal description is bidirectional or not.
EDIT: watching the video the Chris Lattner quote comes from, it appears the mistake about the word "bidirectional" is his. Bidirectional type systems are an improvement over ordinary type systems in exactly the direction he desires: they distinguish which direction typing judgements can be used (to infer or check types), whereas normal formal descriptions of type systems don't make this distinction, causing the problems he describes. "Bottom up" type checking is just a specific pattern of bidirectional type checking.
Regardless, the problem with Swift is that a literal can have any of an unbounded number of types which implement a certain protocol, all of which have supertypes and subtypes, and the set of possibilities grows combinatorially because of these language features.
But you can absolutely construct slow expressions just with functions that are overloaded on both sides (i.e. parameters and return type). Generics and Closures also drive up the complexity a lot, though.
print("a" + "b" + "c" + 11 + "d")
...if you reduce it even further to:

print("a" + "b" + "c" + 11)

...the error message is:

<stdin>:3:23: error: binary operator '+' cannot be applied to operands of type 'String' and 'Int'
print("a" + "b" + "c" + 11)
~~~~~~~~~~~~~~~ ^ ~~
<stdin>:3:23: note: overloads for '+' exist with these partially matching parameter lists: (Int, Int), (String, String)
...it still falls down with the "reasonable time" error on this tiny example, even if you fully specify the types:

let a:String = "a"
let b:String = "b"
let c:String = "c"
let d:String = "d"
let e:String = "e"
let n:Int = 11
print(a + b + c + n + d + e)
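An aside, not from the thread: if you want that last expression to actually compile, converting the Int at the call site removes the String/Int mismatch, and with no untyped literals left there is far less for the checker to try. A sketch:

print(a + b + c + String(n) + d + e)   // explicit conversion; every operand is now a String
print(a + b + c + "\(n)" + d + e)      // or the same via string interpolation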
...is that pointing to a problem with type-checking or overload resolution more than type inference? Is there a way in Swift to annotate the types of sub-expressions in-line, so you could try something like:

print((a + b + c):String + n + d + e)

(I’m not sure of the precedence, that might need another pair of parens)
let a:String = "a" as String
let b:String = "b" as String
let c:String = "c" as String
let d:String = "d" as String
let e:String = "e" as String
let n:Int = 11 as Int
print((a + b + c) + n + d + e)
...has the long timeout, while:

let a:String = "a" as String
let b:String = "b" as String
let c:String = "c" as String
let d:String = "d" as String
let e:String = "e" as String
let n:Int = 11 as Int
print((a + b + c) as String + n + d + e)
...fails pretty much instantaneously.

Programmers don't manually write those types however.
error[E0277]: cannot add `i64` to `i32`
1i32 + 2i64;
^ no implementation for `i32 + i64`
It bothered me at first; there are a lot of explicit annotations for conversions when dealing with mixed-precision stuff. But I now feel that it was exactly the correct choice, and not just because it makes inference easier.

Go is the language that forces your vector classes to have syntax like v1.VecMult(v2) and v1.ScalarMult(s) because there's no operator overloading at all (even though there's a largely useless baked-in complex number class).
“Conversions between integer and floating-point numeric types must be made explicit:
let three = 3
let pointOneFourOneFiveNine = 0.14159
let pi = Double(three) + pointOneFourOneFiveNine
// pi equals 3.14159, and is inferred to be of type Double”
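The flip side, which the quoted example doesn't show (my sketch, not from the book), is what the compiler rejects without the conversion:

let three = 3                          // inferred as Int
let pointOneFourOneFiveNine = 0.14159  // inferred as Double
// let pi = three + pointOneFourOneFiveNine
// ^ does not compile: Swift has no implicit Int-to-Double conversion,
//   hence the explicit Double(three) above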
If you've ever worked on a project with a 40-minute build (me) you can appreciate a language like Go that puts compilation speed ahead of everything else. Lately I've been blown away by the "uv" package manager for Python, which not only seems to be the first correct one but is also so fast I can be left wondering if it really did anything.
On the other hand, there's a less popular argument that the focus on speed is a reason why we can't have nice things and that, for people working on smaller systems, languages should be focused on other affordances, so we could have things like the following.
One area I've thought about a lot is the design of parsers: for instance there is a drumbeat you hear about Lisp being "homoiconic" but if you had composable parsers and your language exposed its own parser, and if every parser also worked as an unparser, you could do magical metaprogramming with ease similar to LISP. Python almost went there with PEG but stopped short of it being a real revolution because of... speed.
As for the kind of problem he's worried about (algorithms that don't scale) one answer is compilation units and careful caching.
In the case of Rust it's more of a cultural choice. Early people involved in the language pragmatically put everything else (correctness, ability to ship, maintainability, etc.) before compilation speed. Eventually the people attracted to contribute to the language weren't the sort that prioritized compilation speed. Many of the early library authors reflected that mindset as well. That compounds and eventually it's very difficult to crawl out from under.
I suspect the same is true for other languages as well. It's not strictly a bad thing. It's a tradeoff but my point is that it's less of an inevitability than people think.
But ideally I want both.
And on the beginner end, even simple things like "distributing a simple CLI program" or "running a simple HTTP service" are complicated. In the former case you have to make sure your target environment has the right version of Python installed and the dependencies and the source files (this can be mitigated with something like shiv or better yet an OS package, but those are yet another thing to understand). In the latter case you have to choose between async (better take care not to call any sync I/O anywhere in your endpoints!) or an external webserver like uwsgi. With Go in both cases you just have to `go build` and send the resulting static, native binary to your target and you're good to go.
And in the middle of the experience spectrum, there's a bunch of stuff like "how to make my program fast", or "how do I ensure that my builds are reproducible", or "what happens if I call a sync function in an async http endpoint?". In particular, knowing why "just write the slow parts in multiprocessing/C/Rust/Pandas" may make programs _slower_. With Go, builds are reproducible by default, naively written programs run about 2-3 orders of magnitude faster than in Python, and you can optimize allocations and use shared memory multithreading to parallelize (no need to worry if marshaling costs are going to eat all of your parallelism gains).
"Python is easy" has _never_ been true as far as I can tell. It just looks easy in toy examples because it uses `and` instead of `&&` and `or` instead of `||` and so on.
Those languages show one can have both expressive type systems and fast compilation turnarounds, when the authors aren't caught up in the anti-"PhD-level language" sentiment.
I've worked on plenty of C++ codebases that had a 2-day build time!
If you were lucky, incremental builds only took a few hours.
Yes, but I don't think that compile speed has really been pushed aggressively enough to properly weigh this tradeoff. For me, compilation speed is the #1 most important priority. Static type checking is #2, significantly below #1, and everything else I consider low priority.
Nothing breaks my flow like waiting for compilation. With a sufficiently fast compiler (and Go is not fast enough for me), you can run it on every keystroke and get realtime feedback on your code. Now that I have had this experience for a while, I have completely lost interest in any language that cannot provide it no matter how nice their other features are.
When you recompile your program, usually a tiny portion of the lines of code have actually changed. So almost all the work the compiler does is identical to the previous time it compiled. But, we write compilers and linkers as batch programs that redo all the compilation work from scratch every time.
This is quite silly. Surely it’s possible to make a compiler that takes time proportional to how much of my code has changed, not how large the program is in total. “Oh I see you changed these 3 functions. We’ll recompile them and patch the binary by swapping those functions out with the new versions.” “Oh this struct layout changed - these 20 other places need to be updated”. But the whole rest of my program is left as it was.
I don’t mind if the binary is larger and less efficient while developing, so long as I can later switch to release mode and build the program for .. well, releasing. With a properly incremental compiler, we should be able to compile small changes into our software more or less instantly. Even in complex languages like Rust.
I already get fast feedback on my code inlined in my editor, and for most languages it only takes 1-2 seconds after I finish typing to update (much longer for Rust, of course). I've never personally found that those 1-2 seconds are a barrier, since I type way faster than I can think anyway.
By the time I've finished typing and am ready to evaluate what I've written, the error highlighting has already popped up letting me know what's wrong.
I understand the benefits of super fast iteration if you're tweaking a GUI layout or something, but for the most part I'd prioritize many many other features first.
What language are you using?
A lot of the issues that Swift is currently facing are the same issues that C# has, but C# had the benefit of Mono and Xamarin, and in general more time. Plus you have things like JetBrains Rider to fill in for Visual Studio. Maybe in a few years Swift will get there, but I'm just wary because Apple really doesn't have any incentive to support it.
Funnily enough, the biggest proponent of cross-platform Swift has been Miguel de Icaza, GNOME creator and cofounder of Mono, the cross-platform C# implementation that predates .NET Core. His SwiftGodot project even got a shout-out from Apple recently
If Foundation were genuinely cross-platform and open source, that description would become more plausible for at least some subset of engineers.* (For non-Apple devs, Foundation ~= Apple's stdlib: things like date formatting.)
I don't mean to be argumentative, I'm genuinely curious what it looks like through someone else's eyes and the only way to start that conversation is taking an opposing position.
I am familiar with the argument that it's better than Rust, but I'd be very curious to understand if "better than" is "easier to pick up" or "better at the things people use Rust for": i.e. I bet it is easier to read & write, but AFAIK it's missing a whole lot of what I'll call "necessary footguns for performance" that Rust offers.
* IIRC there is an open source Foundation intended for Linux? But it was sort of just thrown at the community to build.
This seems to be a (bad) pattern with Apple, one that Google used to (and still does) get a lot of flak for: this habit of not investing in things and then letting them die slow, painful deaths.
E.g. I remember this criticism being leveled at Safari a lot.
But, for better or worse, Apple is not really a technology company; it's a design company. They focus on their cash cow (iPhone) and main dev tools (MacBook) and nearly everything else is irrelevant. Even their ARM laptops aren't really about being a great silicon competitor, I suspect. Their aim is to simplify their development model across phone/laptop/tablet and design seamless things, not make technically great or good things.
The reason(s) they haven't turned (as much) to enshittification probably are that a) it goes against their general design principles, b) they have enough to do improving what they have and releasing new stuff, and c) they aren't in a dominant/monopolistic market position where they can suddenly create utter trash and get away with it because there's nothing else.
And yes, they exhibit monopolistic behaviors within their "walled garden", but if they make a product bad enough, people can and will flee for e.g. Android (or possibly even something microsoft-ish). They can't afford to make a terrible product, but they can afford to abandon anything that doesn't directly benefit their bottom line.
Which is why I suppose I generally stopped caring about most things Apple.
It isn't a particularly good protocol specification, but at least it got adoption thanks to Microsoft offering a reference client implementation as a library for VS Code extensions, and other editors like Neovim adding support in some form.
Meanwhile, Swift has a long way to go to reach at least the state of Kotlin Multiplatform, which is still mostly in beta and lacks libraries that can work outside of Android.
This is very true, Apple sees compiler jobs as a cost center.
I think you're painting with too heavy a brush. Apple clearly is dedicating resources to long-tail issues. We just saw numerous examples two days ago at WWDC24.
On top of that, the big thing we didn't see announced this year was anything at all related to addressing the massive hit to compile times that using macros causes.
I know it's not helpful to judge in hindsight, lots of smart people, etc.
But why on earth would you make this decision for a language aimed at app developers? How is this not a design failure?
If I read this article correctly, it would have been an unacceptable decision to make users write setThreatLevel(ThreatLevel.midnight) in order to have great compile times and error messages.
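For anyone who hasn't used Swift, the convenience being weighed there is the leading-dot shorthand, which leans on inference from context; a minimal sketch (names made up):

enum ThreatLevel { case midnight }
func setThreatLevel(_ level: ThreatLevel) {}

setThreatLevel(ThreatLevel.midnight) // spelled out: nothing for the checker to infer
setThreatLevel(.midnight)            // shorthand: the enum type is inferred from the parameter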
Can someone shed some light on this to make it appear less stupid? Because I'm sure there must be something less stupid going on.
I'm a native Swift app developer, for Apple platforms, so I assume that I'm the target audience.
Apps aren't major-league toolsets. My projects tend to be fairly big, for apps, but the compile time is pretty much irrelevant, to me. The linking and deployment times seem to be bigger than the compile times, especially in debug mode, which is where I spend most of my time.
When it comes time to ship, I just do an optimized archive, and get myself a cup of coffee. It doesn't happen that often, and is not unbearable.
If I was writing a full-fat server or toolset, with hundreds of files, and tens of thousands of lines of code, I might have a different outlook, but I really appreciate the language, so it's worth it, for me.
Of course, I'm one of those oldtimers that used to have to start the machine, by clocking in the bootloader, so there's that...
That was pretty horrifying. I’ve never seen a compiler that errors nondeterministically based on how fast your cpu is. Whatever design choices in the compiler team led to that moment were terrible.
The tools team probably had ultra-fast Macs, and never encountered that.
It definitely sounds like a bug in the toolset. I hope that it was reported.
But where Zig breaks down is on any more complicated inference. It's common to end up needing code like `@as(f32, 0)` because Zig just can't work it out.
In awkward cases you can have chains of several @ casts just to keep the compiler in the loop about what type to use in a statement.
I like Zig, but it has its own costs too
So maybe the decision comes down to not wanting to trade off that smooth, “natural” feel when writing it.
import foo.SomeEnum.MIDNIGHT
setThreatLevel(MIDNIGHT)
In practice, you write out ThreatLevel.MIDNIGHT, let the IDE import it for you, and then use an IDE hotkey to do the static import and eliminate the prefix.

Calling it “a language aimed at app developers” is reductive.
He doesn't claim it's not a design failure.
He doesn't say they sat down and said "You know what? Lets do beautiful minimal syntax but have awful error messages & really bad compile times"
The light here is recursive. As you lay out, it is extremely s̶t̶u̶p̶i̶d̶ unlikely that choice was made, actively.
Left with an unlikely scenario, we take a step back and question if we have any assumptions: and our assumption is they made the choice actively.
Surely, early in the development someone noticed compile times were very slow for certain simple but realistic examples. (Alternatives: they didn't have users? They didn't provide a way to get their feedback? They didn't measure compile times?)
Then, surely they sat down considered whether they could improve compile times and at what cost, and determined that any improvement would come at the cost of requiring more explicit type annotations. (Alternatives: they couldn't do the analysis the author did? The author is wrong? They found other improvements, but never implemented them?)
Then, surely they made a decision that the philosophy of this project is to prioritize other aspects of the developer experience ahead of compile times, and memorialized that somewhere. (Alternatives: they made the opposite decision, but didn't act on it? They made that decision, but didn't record it and left it to each future developer to infer?)
The only path here that reflects well on the Swift team decision makers is the happy path. I mean, say what you like about the tenets of Swift, dude, at least it's an ethos.
Correct, it is well known that they kept Swift a bizarre secret internally. It seems no one thought it would be a good idea to consult with the vast swathes of engineers that had been using the language this was intended to replace for the last 30 or so years, nor to consult with the maintainers of the frameworks this language was supposedly going to help write, etc. As you can imagine, this led to many problems beyond just not getting a large enough surface area of compiler performance use cases.
Of course, after it was released, when they seemed very willing to make backward-incompatible changes for 5 years, and in theory they then had plenty of people running into this, they apparently still decided to not prioritize it.
Broad note: there's something off with the approach, in general. Ex. we're not trying to find the interpretation that's most favorable to them, just a likely one. Ex. it assumes perfect future knowledge to allow objectively correct decisions on sequencing at any point in the project lifecycle. Ex. it's entirely possible they had automated testing on this but it turns out the #s go deep red anytime anyone adds operator overloading anyway in Apple-bundled frameworks.
Simple note: As a burned-out ex-bigco: someone got wedded to operator overloading and it was an attractive CS problem where "I can fix it... or at least, I can fix it in enough cases" was a silent thought in a lot of ICs' heads
That's a guess, but somewhat informed in that this was "fixed"/"addressed" and a recognized issue several years ago, and I watched two big drives at it with two different Apple people taking lead on patching/commenting on it publicly
It's just as stupid to insist on that being the case.
If that's not convincing to you on its merits, consider another aspect: you were expressly inviting conversation on why that wasn't the case
1. They wrote it to replace C++ instead of Objective-C. This is obvious from hearing Lattner speak, he always compares it to C++. Which makes sense, he dealt with C++ every day, since he is a compiler writer. This language does not actually address the problems of Objective-C from a user-perspective. They designed it to address the problems of C++ from a user-perspective, and the problems of Objective-C from a compiler's perspective. The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).
2. They designed the language in complete isolation, to the point that most people at Apple heard of its existence the same day as the rest of us. They gave Swift the iPad treatment. Instead of leaning on the largest collection of Objective-C experts and dogfooding this for things like ergonomics, they just announced one day publicly that this was Apple's new language. Then proceeded to make backwards-incompatible changes for 5 years.
3. They took the opposite approach of Objective-C, designing a language around "abstract principles" vs. practical app decisions. This meant that the second they actually started working on a UI framework for Swift (the theoretical point of an Objective-C successor), 5 years after Swift was announced, they immediately had to add huge language features (view builders), since the language was not actually designed for this use case.
4. They ignored the existing community's culture (dynamic dispatch, focus on frameworks vs. language features, etc.) and just said "we are a type obsessed community now". You could tell a year in that the conversation had shifted from how to make interesting animations to how to make JSON parsers type-check correctly. In the process they created a situation where they spent years working on silly things like renaming all the Foundation framework methods to be more "Swifty" instead of...
5. Actually addressing the clearly lacking parts of Objective-C with simple iterative improvements which could have dramatically simplified and improved AppKit and UIKit. 9 years ago I was wishing they'd just add async/await to ObjC so that we could get modern async versions of animation functions in AppKit and UIKit instead of the incredibly error-prone chained didFinish:completionHandler: versions of animation methods. Instead, this was delayed until 2021 while we futzed about with half a dozen other academic concerns. The vast majority of bugs I find in apps from a user perspective are from improper reasoning about async/await, not null dereferences. Instead the entire ecosystem was changed to prevent nil from existing and under the false promise of some sort of incredible performance enhancement, despite the fact that all the frameworks were still written in ObjC, so even if your entire app was written in Swift it wouldn't really make that much of a difference in your performance.
6. They were initially obsessed with "taking over the world" instead of being a great replacement for the actual language they were replacing. You can see this from the early marketing and interviews. They literally billed it as "everything from scripting to systems programming," which generally speaking should always be a red flag, but makes a lot of sense given that the authors did not have a lot of experience with anything other than systems programming and thus figured "everything else" was probably simple. This is not an assumption, he even mentions in his ATP interview that he believes that once they added string interpolation they'd probably convert the "script writers".
The list goes on and on. The reality is that this was a failure in management, not language design though. The restraint should have come from above, a clear mission statement of what the point of this huge time-sink of a transition was for. Instead there was some vague general notion that "our ecosystem is old", and then zero responsibility or care was taken under the understanding that you are more or less going to force people to switch. This isn't some open source group releasing a new language and it competing fairly in the market (like, say, Rust for example). No, this was the platform vendor declaring this is the future, which IMO raises the bar on the care that should be taken.
I suppose the ironic thing is that the vast majority of apps are just written in UnityScript or C++ or whatever, since most of the App Store is actually games and not utility apps written in the official platform language/frameworks, so perhaps at the end of the day ObjC vs. Swift doesn't even matter.
I wanted to push back on this a bit:
> The "Objective-C problems" they fixed were things that made Objective-C annoying to optimize, not annoying to write (except if you are a big hater of square brackets I suppose).
From an outsider's perspective, this was the point of Swift: Objective-C was and is hard to optimize. Optimal code means programs which do more and drain your battery less. That was Swift's pitch: the old Apple inherited Objective-C from NeXT, and built the Mac around it, back when a Mac was plugged into the wall and burning 500 watts to browse the Internet. The new Apple's priority was a language which wasn't such a hog, for computers that fit in your pocket.
Do you think it would have been possible to keep the good dynamic Smalltalk parts of Objective C, and also make a language which is more efficient? For that matter, do you think that Swift even succeeded in being that more efficient language?
All this to say, it is hard to answer this question in one comment, but to try to sum up my position on this, I believe the performance benefits of Swift were and remain overblown. It’s a micro benchmark based approach, which as we’ll see in a second is particularly misguided for Swift's theoretically intended use case as an app language. I think increasingly people agree with this as they haven't really found Swift to deliver on some amazing performance that wouldn’t have been possible in Objective-C. This is for a number of reasons:
1. As mentioned above, the most important flaw with a performance based Swift argument is that the vast majority of the stack is still written in Objective-C/C/etc.. So even if Swift was dramatically better, it’s only usually affecting your app’s code. Oftentimes the vast majority of the time is spent in framework code. Think of it this way: pretend that all of iOS and UIKit were written in JavaScript, but then in order to “improve performance” you write your app code in C. Would it be faster? I guess, but you can imagine why it may not actually end up having that much of an effect. This was ironically the bizarre position we found ourselves in with Swift: your app code was in a super strict typed language, but the underlying frameworks were written in a loosey-goosey dynamic language. This is exact opposite of how you'd want to design a stack. Just look at games, where performance is often the absolute top priority: the actual game engine is usually written in something like C++, but then the game logic is often written in a scripting language like Lua. Swift iOS apps are the reverse of this. Now, I'm sure someone will argue that the real goal is for the entire stack to eventually be in Swift, at which point this won't be an issue anymore, but now we're talking about a 20-year plan, where it seems weird to prioritize my Calculator app's code as the critical first step.
2. As it turns out, Objective-C was already really fast! Especially since, due to its ability to trivially interface with C and C++, a lot of existing apps in the wild had already probably topped out on performance. This wasn't like you were taking an install base of Python apps and getting them all to move over to C. This was an already low-level language, where many of the developers were already comfortable with the "performance kings" of the C-family of languages. Languages, which, for the record, have decades of really really good tooling specifically to make things performant, and decades of engineering experience by their users to make things performant. And so, in practice, for existing apps, this often felt more like a lateral move. I actually remember feeling confused when after the announcement of Swift people started talking about Objective-C as if it was some slow language or something. Like, literally the year before, Objective-C was considered the low-level performance beast compared to, say, Android's use of Java. Objective-C just wasn't that slow of a comparison point to improve that much on. The two languages even share the same memory management model (something that ends up having a big effect on their performance characteristics). Dynamic dispatch (objc_msgSend) just does not really end up dominating your performance graph when you profile your app.
3. But perhaps most importantly, I think there is a mirror misguided focus on language over frameworks as with the developer ergonomics issues I pointed out above. If you look at where the actual performance gains have come from in apps, I’d argue that it’s overwhelmingly been from conceptual framework improvements, not tiny language wins. A great example of this is CoreAnimation. Making hardware accelerated graphics accessible through a nice declarative API, such that we can move as much animation off the CPU and onto the GPU as possible, is one of the key reasons everything feels so great on iOS. I promise no language change will make anywhere near as big of a dent as Apple's investment in CoreAnimation did. I’d argue that if we had invested development time in, e.g., async/await in Objective-C, rather than basically delaying that work for a decade in Swift, we’d very possibly be in a much more performant world today.
Anyways, these are just a few of my thoughts on the performance side of things. Unfortunately, as time moves on, now a decade into this transition, while I find more people agreeing with me than, say, when Swift was first announced, it also becomes more academic since it's not like Apple is going to go back and try to make Objective-C 3 or something now. That being said, I do think it is still useful to look back and analyze these decisions, to avoid making similar mistakes in the future. I think the Python 2 to 3 transition provided an important lesson to other languages; I hope someday we look at the Swift introduction as a similar cautionary tale of programming language design and community/ecosystem stewardship and management.
1. https://forums.swift.org/t/standard-vapor-website-drops-1-5-...
as a SwiftUI app dev user I feel like this (and the OP's post) lines up with my experience but I've never tried it for e.g. writing an API server or CLI tool.
{-# LANGUAGE OverloadedStrings #-} -- Let strings turn into any type defining IsString
{-# LANGUAGE GeneralizedNewtypeDeriving #-} -- simplify/automate defining IsString
import Data.String (IsString)
main = do
  -- Each of these expressions might be a String or one of the 30 Foo types below
  let address = "127.0.0.1"
  let username = "steve"
  let password = "1234"
  let channel = "11"
  let url = "http://" <> username
              <> ":" <> password
              <> "@" <> address
              <> "/api/" <> channel
              <> "/picture"
  print url
newtype Foo01 = Foo01 String deriving (IsString, Show, Semigroup)
newtype Foo02 = Foo02 String deriving (IsString, Show, Semigroup)
-- ... eliding 27 other type definitions for the comment
newtype Foo30 = Foo30 String deriving (IsString, Show, Semigroup)
Do we think I've captured the combinatorics well enough?

The url expression is 9 adjoining expressions, where each expression (and pair of expressions, and triplet of expressions ...) could be 1 of at least 31 types.
$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 9.0.2
$ time ghc -fforce-recomp foo.hs
[1 of 1] Compiling Main             ( foo.hs, foo.o )
Linking foo ...
real 0m0.544s
user 0m0.418s
sys 0m0.118s
Feels more sluggish than usual, but bad combinatorics shouldn't just make it slightly slower. I tried compiling the simplest possible program and that took `real 0m0.332s` so who knows what's going on with my setup...
Specifically, `channel = 11`, an integer.
If it was a string then it parses very quickly.
In my code channel is not a string, it's one type of the 31-set of (String, Foo01, Foo02, .., Foo30). So it needs to be inferred via HM.
> If it was a string then it parses very quickly.
"Parses"? I don't think that's the issue. Did you try it?
----- EDIT ------
I made it an Int
let channel = 11 :: Int
instance IsString Int where
fromString = undefined
instance Semigroup Int where
(<>) = undefined
real 0m0.543s
user 0m0.396s
sys 0m0.148s

The reason this causes issues with the type checker is it has to consider all the possible combinations of the `+` operator against all the possible types that can be represented by an inferred integer literal.
This is what's causing the type checker to try every possible combination of types implementing the `+` operator, types implementing `ExpressibleByIntegerLiteral`, and types implementing `ExpressibleByStringLiteral` in the standard library. That combination produces 59k+ permutations without even looking at non-standard-library types.
If any of the types in the expression had an explicit type then it would be type checked basically instantly. It's the fact that none of the values in the expression have explicit types that is causing the type checker to consider so many different combinations.
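To make that concrete, here's a rough sketch (the `Tag` type is made up, not anything in the standard library): every additional literal-expressible type with a matching `+` is one more candidate the solver has to entertain for each untyped literal, so the search space multiplies rather than adds. (59,049 happens to be 3^10, for what that's worth.)

// Made-up extra string-like type; nothing special about it.
struct Tag: ExpressibleByStringLiteral {
    let value: String
    init(stringLiteral value: String) { self.value = value }
}

// A '+' over Tags, so Tag is now a live candidate wherever '+' joins string literals.
func + (lhs: Tag, rhs: Tag) -> Tag { Tag(stringLiteral: lhs.value + rhs.value) }

let slow = "a" + "b" + "c" + "d"         // each literal could still be String, Tag, ...
let fast: String = "a" + "b" + "c" + "d" // one annotation pins the whole expression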
So far it's working quite nicely. Every now and then you take a look and notice that your modules are now at the top, so you quickly fix them, passing the honour to the next victim.
It seems like there's a combinatorial explosion of possible overloads in Swift, whereas if you implement a function with the same ergonomics in Haskell (e.g. a printf-like function), the only thing the compiler has to do is ask "Does type X have an implementation for typeclass Show? Yes? Done."
Essentially Haskell solved this overload inference problem in the same way that iterators solve the M*N problem for basic algorithms: convert all these disparate types to a single type, and run your algorithm on that.
let f0 = fun x -> (x, x) in
let f1 = fun y -> f0(f0 y) in
let f2 = fun y -> f1(f1 y) in
let f3 = fun y -> f2(f2 y) in
let f4 = fun y -> f3(f3 y) in
let f5 = fun y -> f4(f4 y) in
f5 (fun z -> z)
Lifted from https://dl.acm.org/doi/pdf/10.1145/96709.96748 via Pierce, Types and Programming Languages.

It's still very fast for "normal size" types. That reduced version compiles in 151 milliseconds.
I almost never bother putting types in Haskell, unless I want to guarantee some constraint, in which case I typically use typeclasses. Maybe I'm just weird but I don't think so. One of the very few things I actually like about Haskell is how good the type inference is.
HM isn't bidirectional in the special case, so it's probably the features they added on top of the basic top-level-universal-quantifier types that have the pathological time complexity.
One of the features of Rational was that it would distribute precompiled headers around. Whoever changed the header files had to pay to recompile them, but everyone else just got the results instead.
https://static.aminer.org/pdf/20170130/pdfs/popl/o8rbwxmj6h2...
Another thing I would add to Swift as a flag is to make imports based on specific files vs. an abstract "module"; there is a lot of repeated work that happens because of that, last time I looked.
I'm not so sure the problem is intractable because it's so well-structured. Someone would have to look at it and check that there aren't any low-hanging fruits. The challenge might be that anyone who could fix this could make much more impactful contributions to the compiler. But it's hard to know without trying.
In particular there are no guarantees that as I am writing a new method that I have the brackets balanced to make the code around it be seen as valid code. I'm so tired of not being able to use autocomplete in the middle of a complex edit unless I ritualistically write the code in an order that is unnatural to me.
Similarly, if I'm iteratively trying to fix a type error, the previous edit was not sane, and is of no help at all. You may have to go several edits back.
I kind of feel like that for more advanced languages this sort of back-and-forth between the type checker and the language has some potential to it.
Edit: I just remembered my favorite one: I had a view where the compile times doubled with every alert I added to it.
The compiler is open-source, and discussed on open forums. Readers would love some summary/investigation into slow-down causes and prospects for fixes.
(I maintain s4nnc and a fork of PythonKit.)
What a ridiculous statement. I’m willing to bet everything I have in life that this is never going to happen.
And without this type system, Swift is just Objective-C in a prettier syntax, so Apple has to bite the bullet and bear with it.
I'm convinced operator overloading is an anti-feature. It serves two purposes:
1) to make a small set of math operations easier to read (not write), in the case where there are no mistakes and all readers perfectly understand the role of each operator[1]; and,
2) to make library developers feel clever.
Operator-named functions are strictly worse than properly named functions for all other uses. Yes, yes, person reading this comment, I know you like them because they make you feel smart when you write them, and you're going to reply with that one time in university that you really needed to solve a linear algebra problem in C++ for some reason. But they really are terrible for everyone who has to use that code after you. They're just badly named functions, they're un-searchable, they make error messages unreadable, and they are the cause of the naming conflict that is at the root of the linked blog post. It's time to ditch operator overloading.
[1] Or because they look like the same symbol used in some entirely other context, god, please strike down everyone who has ever written an operator-/ to combine filesystem paths.
If you discard that, we are back to Objective C.
I am reminded of this classic CodeGolf.SE challenge of "P = NP" https://codegolf.stackexchange.com/a/24419
The problem was set as:
> Your task is to write a program for SAT that executes in polynomial time, but may not solve all cases.
To which Eric Lippert wrote:
> "Appears" is unnecessary. I can write a program that really does execute in polynomial time to solve SAT problems. This is quite straightforward in fact.
And has a spoiler that starts out as:
> You said polynomial runtime. You said nothing about polynomial compile time. This program forces the C# compiler to try all possible type combinations for x1, x2 and x3, and choose the unique one that exhibits no type errors. The compiler does all the work, so the runtime doesn't have to. ...
Unfortunately the blog post which it was linked to that went into greater detail at https://devblogs.microsoft.com/ericlippert/lambda-expression... is no longer available (even via wayback).
These days, when you have a SAT-like problem, you're often done because you can throw a SAT solver at it, and it will give you an answer in a reasonable time. Particularly for such small problems like this one here. We routinely solve much larger and less structured SAT instances, e.g. when running package managers.
I don't know. All my experiences with SAT solvers have been bad. Sometimes adding constraints (and making the problem overconstrained) makes them orders of magnitude faster, sometimes it makes them orders of magnitude slower. Same goes for changing variables from integer to real or vice versa. And it varies from SAT solver to SAT solver, even on the same problem. I know enough about SAT solvers to know why this might happen, but they're complete black boxes (as far as I can tell) and I can't predict their performance at all, nor can I predict if a change I attempt will make them behave better or worse. I can't even tell if the bad performance is my fault or just the problem being legitimately too hard. And when it works I'm never sure if it will always work or if there are some inputs we have that will slip through the cracks of the heuristics it's using and hit us with a running time measured in days. If I could get away with never using a SAT solver again, I would.
The reason we don't do this in production is that the solvers take an unpredictable amount of time. A small problem can take forever and a large problem can be instant. You can't have that in a compiler.
There are enough people at Apple who know about SAT solvers. :)
It's infer everything and make sure there's only one possible interpretation, with certain exceptions.
Isn't it pretty evident that implicit conversions should only go from integer to floating point?
> precision loss by implicit conversion
That's a reasonable worry, but "Int" in general is only safe to store 32 bits, and 32 bit integers will losslessly convert to doubles.
E.g. `someDouble * 1` is valid, without needing to write `1.0` or `1f`.
This is because `Double` conforms to the `ExpressibleByIntegerLiteral` protocol. There are other similar protocols for other literal types, so, e.g., you could write:
let s: Set = [1, 2, 3]
Where it would have defaulted to being an Array without the annotation.

If this isn't valid why are we even talking about it? The compiler should report a syntax error or something
Guesses:
1. Successfully compiles
2. Reports an error
3. Never halts
4. Nobody knows
This really seems like a design flaw. If there are 59,049 overloads for string concatenation, surely either
- one of them should be expressive enough to allow concatenation with an integer, which we can do after all in some other languages (a sketch of what that overload could look like is below)
- or, the type system should have some way to express that no type reachable by concatenating subtypes of String can ever get concatenated to an integer.
Is this unreasonable? Probably there's some theorem about why I'm wrong.
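On that first point: Swift does let you write such an overload yourself; the standard library just doesn't ship one. A sketch (user code, purely illustrative):

// User-defined overload, not in the standard library: String + Int.
func + (lhs: String, rhs: Int) -> String { lhs + String(rhs) }

print("a" + "b" + "c" + 11)   // now resolves, printing "abc11"

Of course, this adds yet another `+` candidate to exactly the overload search the thread is complaining about.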