I generally don't like a random file impacting several other files. Extension methods are ... tolerated and ... "fine," but I still feel uneasy using them.
File-scoped namespaces seem like someone got really, really tired of having nested blocks, and the feature seems actively unnecessary.
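For context, a minimal sketch of the feature (the namespace and type names here are my own, purely illustrative):

```csharp
// C# 10 file-scoped namespace: one declaration covers the whole file,
// removing the braces and one level of indentation that a block-scoped
// "namespace MyApp.Services { ... }" would require.
namespace MyApp.Services;

public class Greeter
{
    public string Greet(string name) => $"Hello, {name}!";
}
```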
I like natural lambda types
Good update on parameterless structs. I assumed that's how they worked already. I haven't used C# in two years, but you could do that with classes back then, so I assumed it would be the same with structs.
Constant interpolated strings are nice.
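A quick sketch of what that enables (constant names are illustrative):

```csharp
// C# 10 allows an interpolated string as a constant, provided every
// placeholder is itself a constant string (numeric constants are not
// allowed, since their formatting can be culture-dependent).
const string Name = "C#";
const string Version = "10";
const string Banner = $"{Name} {Version}";  // legal in C# 10, an error before
```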
Extended property patterns are fine, just probably not for me.
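For anyone unfamiliar, a minimal sketch (the record types here are my own, not from the article):

```csharp
var alice = new Person("Alice", new Address("Paris"));

// C# 9 required nesting the pattern:
//   alice is { Address: { City: "Paris" } }
// C# 10's extended property patterns let you dot into nested members:
bool inParis = alice is { Address.City: "Paris" };

// Illustrative types for the example above.
record Address(string City);
record Person(string Name, Address Address);
```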
It's been a feature of Visual Basic .NET for a very long time (from recall: at least 2008); I'd hope Microsoft heavily queried user feedback before implementing it.
https://docs.microsoft.com/en-us/visualstudio/ide/how-to-add...
With respect to parameterless struct constructors: the reason why C# didn't have that historically is because there are many corner cases where those aren't invoked in CLR. Basically any place where you can't do "new" directly on the struct itself - e.g. when you create an array of structs, its elements do not have the constructor run for them. So C# designers originally decided that it would be less confusing overall if structs were always default-init, in all contexts - which means no parameterless constructors. I'm not sure what prompted the change of mind.
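The corner case described above can be sketched like this (the struct is illustrative):

```csharp
// "new" runs the parameterless constructor...
var p = new Point();        // p.X == 42

// ...but default initialization bypasses it entirely, which is exactly
// the kind of context the original design was worried about:
Point d = default;          // d.X == 0, constructor never runs
var arr = new Point[3];     // arr[0].X == 0, constructor never runs

struct Point
{
    public int X;
    public Point() { X = 42; }   // legal as of C# 10
}
```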
I'm of two minds:
- I think they're an anti-pattern, because they create a global scope that can get messy/annoying.
- It makes a ton of sense for the implicit usings functionality, and I'm tired of needing to add the basic SDK usings to every source file.
So the ideal is to enable .NET's implicit usings, then use an analyzer to "ban" adding more global usings directly from your solution/project. Best of both worlds that way. Alternatively, just make them a no-pass item for code reviews.
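The opt-in half of that is a one-line project setting; a sketch of the relevant `.csproj` fragment:

```xml
<!-- Enables the SDK-provided set of global usings (System, etc.).
     The analyzer "ban" on hand-written global usings would be a
     separate team policy; there is no built-in switch for it. -->
<PropertyGroup>
  <ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>
```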
[0] https://docs.microsoft.com/en-us/dotnet/core/compatibility/s...
```csharp
using System;
using System.Collections.Generic;
```
and a few more lines like that. Think of this part of the BCL as the prelude in Haskell.
I like it. This goes way back. They give the example of a globalusings file.
That's a cheap / easy way to do a config file (at least one use case).
Here's where the mess starts.
It seems far too easy to sneak a global import statement into random files and then have the entire codebase polluted by it.
I feel like this is too foot-gunny, myself.
I would consider this the same: Project should have a `GlobalUsings.cs` file.
Does C# have any linters available that could enforce such a convention?
I'd love to see a C#/.NET release that was dedicated to getting rid of things. It may be less exciting in the short term, but it is well past due.
That's basically F#. One can program just using OOP in F#, and it is much more clean and concise.
The great thing about C#/Kotlin throwing the kitchen sink at stuff is that the good stuff eventually makes its way into Java.
e.g. see Java's take on concurrency by means of Project Loom: no need for async/await, and no separate APIs for sync vs. async operations. It has records, sealed types, and pattern matching, and is getting destructuring soon.
Can a language not become "feature complete", while still improving over time?
"Here's a feature." Now, will it make code bases better? Will it make them worse? Do we even have a way to quantify better or worse?
What seems to be happening, to me, is that general-purpose languages are all slowly migrating to look a lot like ML with some sort of existential mechanism: a static type system with generics, lambdas, algebraic data types, and pattern matching. The existential part is typically expressed with interfaces, but it looks like there are a few options floating around.
Meanwhile, low-level programming language designers are all going crazy trying to find a way to replace C/C++: Rust, Odin, Zig, Jai (if it ever actually gets released), etc. That probably won't look like ML, or at least it will need some other stuff to handle the domain without driving developers crazy.
I'm sure other domains will slowly figure out that they can cheat the triumvirate of engineering (fast, cheap, good) by developing languages that suit their domain.
But I suspect we're looking at 50-100 years before we really start to see any progress that lets us have "feature complete" languages.
Every year, new words are added to official dictionaries ... while old words continually fall out of favor/use.
And concepts in one language (e.g. "English" or "Rust") then get adopted/imported into another language (e.g. "French" or "Go").
If you want OO in Scheme, you can do it (and various models of OO at that). If you want a concurrent model, you can do it. If you want a relational programming model, you can have it.
Try doing the same with, for example, C. You can accomplish it, but you have to jump through hoops or rely on OS libraries or other things. And it will rarely, if ever, feel "natural" within the language.
That said, I think I'd prefer the addition of structural inheritance to the existing nominal inheritance. Also, algebraic types.
Using the example of Console.Read from the article, in C# 9, you could do `Func<int> read = Console.Read;`. Now, if someone adds an overload for the Read method to Console, that C# 9 code will break.
In C# 10, that doesn't change. What changes is that we don't have to specify `Func<int>`. We can just use `var`.
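A sketch of the before/after, using the article's `Console.Read` example (the `square` lambda is my own addition):

```csharp
using System;

// C# 9: the delegate type had to be written out explicitly.
Func<int> read9 = Console.Read;

// C# 10: the compiler infers a "natural type" for method groups
// (when there is exactly one overload) and for lambdas.
var read10 = Console.Read;        // inferred as Func<int>
var square = (int x) => x * x;    // inferred as Func<int, int>
```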
```csharp
void Foo(double x) { ... }

Foo(123);
```

This works, but now I add an overload:

```csharp
void Foo(decimal x) { ... }
```

and the above call is now ambiguous. Note that this example goes all the way back to C# 1.0! Method overloading (and how it interacts with other language features) is probably the single most complicated part of C# today, for good reasons.
In any popular language, if some method's single argument is being implicitly upcast by a caller, and you then add a more specific overload along that argument's inheritance hierarchy, the caller will now be calling the new method.
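A minimal C# sketch of that hazard (the `Logger` type and its methods are illustrative, not from any real library):

```csharp
var logger = new Logger();

// Before Log(string) existed, this call upcast "hi" to object and
// bound to Log(object). Once the more specific overload is added and
// the caller recompiles, the same source silently binds to Log(string).
var result = logger.Log("hi");

class Logger
{
    public string Log(object value) => $"object: {value}";

    // Added later: overload resolution now prefers this for string callers.
    public string Log(string value) => $"string: {value}";
}
```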