Other languages have unions with named choices, where each name selects a type and the types are not necessarily all different. Rust, Haskell, Lean4, and even plain C unions are in this category (although plain C unions are not discriminated at all, so they’re not nearly as convenient).
I personally much prefer the latter design.
If you're interested, i can point you to specs i'm writing that address the area you care about :)
Just wanted to add just another opinion on a few things.
I think many people already mentioned it, but I also don't feel too good about non-boxed unions not being the default. I'd personally like the path of least resistance to lead to not boxing. Having to opt in, the way the current preview shows it, looks like a PITA that I'd quickly become tired of.
Also, ad-hoc union types could be really nice. At least in Python those are really nice, stuff like `def foo(x: str | int)` is just very nice. If I had to first give this union type a name I'd enjoy it way less.
But I'm aware that you are trying your best to find a good trade-off and I'm sure I don't know all the implications of the things I wish you'd do. But I just wanted to mention those to have another data point you can weigh into your decision process.
My belief is that we will have a `union struct` just like we have `record` and `record struct`. Where you can simply say you want value-type, non-boxing, behavior by adding the `struct` keyword. This feels very nice to me, and in line with how we've treated other similar types. You'll only have to state this on the decl point, so for each union type it's one and done.
The problem is that the only safe way for the compiler to generate non-boxed unions would require non-overlapping fields for most value types.
Specifically the CLR has a hard rule that it must know with certainty where all managed pointers are at all times, so that the GC can update them if it moves the referenced object. This means you can only overlap value types if the locations of all managed pointers line up perfectly. So sure, you can safely overlap "unmanaged" structs (those that recursively don't contain any managed pointers), but even for those, you need to know the size of the largest one.
The big problem with the compiler making any attempt to overlap value types is that the value types as defined at compile time may not match the definitions at runtime, especially for types defined in another assembly. A new library version can add more fields. This may mean one unmanaged struct has become too big to fit in the field, or that two types that were previously overlap-compatible no longer are.
Making the C# compiler jump through a bunch of hoops to try to determine if overlapping is safe, and even then leaving room for an updated library at runtime to crash the whole thing, means that the compiler will probably never even try. I guess the primitive numeric types could be special-cased, as their size is known and will never change.
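The size concern above can be sketched in Rust, where an untagged `union` must reserve space for its largest field. The type names here are illustrative, not from any real library:

```rust
use std::mem::size_of;

// Two "library" value types; imagine a new library version growing one of them.
#[derive(Clone, Copy)]
struct Small { a: u32 }          // 4 bytes
#[derive(Clone, Copy)]
struct Big { a: u64, b: u64 }    // 16 bytes

// An untagged overlap of the two: storage must fit the largest member.
union Overlap {
    small: Small,
    big: Big,
}

fn main() {
    // The union is exactly as large as its largest field (plus any padding).
    assert_eq!(size_of::<Overlap>(), size_of::<Big>());

    // If a newer version of the library grew `Small` past 16 bytes, any
    // layout the compiler baked in at compile time would be wrong at
    // runtime -- the hazard described in the comment above.
    let o = Overlap { small: Small { a: 3 } };
    unsafe { assert_eq!(o.small.a, 3); }
}
```

Rust sidesteps the GC half of the problem entirely by forbidding non-`Copy` (i.e. droppable) fields in unions; the CLR has no such escape hatch for managed pointers.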
Again, very rough. We go round and round on things. Loving to decompose, debate and determine how we want to tackle all these large and interesting areas :)
An example that may show the difference: if you have a language with nullable types, then you basically have a language with union types like String|Null, where the Null type has a single value called `null` and String can not be null.
Now if you pass this around a function that itself may return `null`, then your type coalesces to String|Null still (you still get a nullable string, there is no doubly nullable). This is not true for Maybe/Option whatever you call types, where Some(None) (or Optional.of(Optional.empty())) is different from None only.
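The non-flattening side of that contrast is easy to demonstrate in Rust, where `Some(None)` and `None` are distinct values, and collapsing the layers is an explicit operation:

```rust
fn main() {
    // With Option, "absent" and "present but inner-absent" are distinct:
    let flat: Option<String> = None;
    let nested: Option<Option<String>> = Some(None);

    assert!(flat.is_none());
    assert!(nested.is_some());          // the outer layer is present
    assert_eq!(nested.flatten(), None); // collapsing must be asked for

    // A nullable-union language (String|Null) has no way to even write
    // the `Some(None)` state: the layers coalesce automatically, as the
    // comment above describes.
}
```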
Rich Hickey once made a case that sort of became controversial in some FP circles, that the former can sometimes be preferred (e.g. at public API surfaces), as in for a parameter you take a non-nullable String but for returns you return a String|Null. In this case you can have an API-compatible change widening the input parameters' type, or restricting the return type - meanwhile with sum types you would have to do a refactor because Maybe String is not API compatible with String.
This is a really good point. I'd love to be able to have a sum type of two strings ("escaped" and "unescaped"); or any two kinds of the same type really, to model two kinds of the same type where one has already passed some sort of validation and the other one hasn't.
Edit to add: I figure what I want is for enums to be extended such that different branches are able to carry different properties.
Edit again (I should learn to think things through before posting. sorry): I suppose it can be faked using a union of different wrapper types, and in fact it might be the best way to do it so then methods can take just one of the types in arguments and maybe even provide different overloads.
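That wrapper-types idea might look like the following Rust sketch (all names here are made up for illustration): newtypes distinguish the two kinds of string, and a sum over the wrappers lets methods take exactly one kind, or dispatch on the tag:

```rust
// Newtype wrappers distinguish two kinds of the same underlying type.
struct Escaped(String);
struct Unescaped(String);

// A union of the two wrapper types -- the "faked" enum-with-payloads.
enum AnyString {
    Escaped(Escaped),
    Unescaped(Unescaped),
}

// Methods can now accept exactly one of the kinds...
fn render(s: &Escaped) -> &str { &s.0 }

// ...or dispatch on the tag and normalize. (Toy escaping: only '<'.)
fn ensure_escaped(s: AnyString) -> Escaped {
    match s {
        AnyString::Escaped(e) => e,
        AnyString::Unescaped(u) => Escaped(u.0.replace('<', "&lt;")),
    }
}

fn main() {
    let e = ensure_escaped(AnyString::Unescaped(Unescaped("<b>".into())));
    assert_eq!(render(&e), "&lt;b>");
}
```

The payoff is exactly what the edit suggests: a value that has passed validation carries that fact in its type, so it can't be confused with one that hasn't.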
I've seen exactly one Rust type which is actually a union, and it's a pretty good justification for the existence of this feature, but one isn't really enough. That type is MaybeUninit<T> which is a union of a T and the empty tuple. Very, very, clever and valuable, but I didn't run into any similarly good uses outside that.
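A minimal sketch of why that type is clever: the "maybe nothing" state costs zero bytes and carries no tag, unlike `Option<T>`:

```rust
use std::mem::{size_of, MaybeUninit};

fn main() {
    // MaybeUninit<T> is conceptually a union of T and the empty tuple:
    // either a valid T, or nothing at all -- with no tag and no overhead.
    let mut slot: MaybeUninit<u64> = MaybeUninit::uninit();

    // Writing makes the T "variant" active...
    slot.write(42);

    // ...and only then is it sound to read it out.
    let value = unsafe { slot.assume_init() };
    assert_eq!(value, 42);

    // The "maybe" costs zero bytes, unlike Option<u64>:
    assert_eq!(size_of::<MaybeUninit<u64>>(), size_of::<u64>());
}
```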
That is essentially the motivation, primarily in the context of FFI where matching C's union behaviour using transmute is tricky and error-prone.
Also you can manually tag them and get something more like other high-level languages. It will just look ugly.
As for C, it is a sharp blade on its own.
I can't even think of any prominent C FFI problems where I'd reach for the union's C representation. Too many languages can't handle that so it seems less useful at an FFI edge.
ColdString is easier to explain; the whole trick here is "maybe this isn't a pointer?". A ColdString might be a single raw pointer onto your heap, with the rest of the data structure at the far end of the pointer. That case is expensive, because nothing about the text lives inline. But in the other case, your entire text is hidden in the pointer itself: on modern hardware that's 8 bytes of text, at no overhead. Awesome.
CompactString is more like a drop-in replacement, it's much bigger, the same size as String, so 24 bytes on modern hardware, but that's all SSO, so text like "This will all fit nicely" fits inline, yet the out-of-line case has the usual affordances such as capacity and length in the data structure. This isn't doing the "Maybe this isn't a pointer?" trick but is instead relying on knowing that the last byte of a UTF-8 string can't have certain values by definition.
Sure, but the comparable Rust feature is enum, not union.
I suppose an LLM could do a pretty good job at this.
- enum: essentially just a typed uint tag collection
- union: the plain data structure that can contain any of several types
- tagged union: combines enums and unions, so you can dispatch on its tag to get one of the union types
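The three concepts in the list line up neatly in Rust, which (a bit confusingly) spells the tagged-union case `enum`:

```rust
// 1. A bare tag collection (C-style enum): just named values, no payload.
#[derive(PartialEq)]
enum Tag { Int, Float }

// 2. An untagged union: raw overlapping storage; no way to ask what's inside.
union Raw { i: i32, f: f32 }

// 3. A tagged union (Rust `enum` with payloads): tag + payload, so you can
//    dispatch safely instead of having to already know the answer.
enum Value { Int(i32), Float(f32) }

fn describe(v: &Value) -> String {
    match v {
        Value::Int(i) => format!("int {i}"),
        Value::Float(f) => format!("float {f}"),
    }
}

fn main() {
    assert!(Tag::Int == Tag::Int);
    let r = Raw { i: 3 };
    unsafe { assert_eq!(r.i, 3) } // caller must already know the active field
    assert_eq!(describe(&Value::Int(3)), "int 3");
}
```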
Read from this section and they appear in order:
It's trying to generalize: we might have exactly one T, fine, or a collection of T, and that's more T... except no, the collection might hold zero of them, not at least one, and so our type is really "OneOrMoreOrNone" and wow, that's just maybe some T.
https://hackage.haskell.org/package/oneormore or scala: https://typelevel.org/cats/datatypes/nel.html
it's for type purists, because sometimes you want the first element of the list, but if you do that you will get T?, which is stupid if you know that the list always holds an element, because now you need an unnecessary assertion to "fix" the type.
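The point about the first element can be shown with a tiny hand-rolled non-empty list (the real libraries linked above are richer; this `NonEmpty` is just a sketch): because the head lives outside the `Vec`, `first` can return `&T` instead of `Option<&T>`, so no assertion is ever needed.

```rust
// A minimal non-empty list: the head is stored outside the Vec, so every
// value of this type holds at least one element by construction.
struct NonEmpty<T> {
    head: T,
    tail: Vec<T>,
}

impl<T> NonEmpty<T> {
    fn new(head: T) -> Self {
        NonEmpty { head, tail: Vec::new() }
    }
    // Total: returns &T, not Option<&T> -- no unwrap, no "fix the type".
    fn first(&self) -> &T {
        &self.head
    }
    fn len(&self) -> usize {
        1 + self.tail.len()
    }
}

fn main() {
    let mut xs = NonEmpty::new("a");
    xs.tail.push("b");
    assert_eq!(*xs.first(), "a");
    assert_eq!(xs.len(), 2);
}
```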
They could’ve done the Either type which would’ve been more correct or maybe EitherT (if the latter is even possible)
You are free to call it `public union Some<T>(T, IEnumerable<T>)`
If I understand correctly, it’s actually OneOrOneOrMoreOrNone. Because you have two different distinguishable representations of “one”.
The only reason to use this would be if you typically have exactly one, and you want to avoid the overhead of an enumeration in that typical case. In other words, AnyNumberButOftenJustOne<T>.
And you’re exactly right.
It’s not “one or more.”
It’s “one or not one.”
Need two or not two.
So IEnumerable<T> ? What's up with wrapping everything into fancy types just to arrive at the exact same place.
The problem with ad-hoc unions is that without discipline, it invariably ends in a mess that is very, very hard to wrap your head around and often requires digging through several layers to understand the source types.
In TS codebases with heavy usage of utility types like `Pick`, `Omit`, or ad-hoc return types, it is often exceedingly difficult to know how to correctly work with a shape once you get closer to the boundary of the application (e.g. API or database interface since shapes must "materialize" at these layers). Where does this property come from? How do I get this value? I end up having to trace through several layers to understand how the shape I'm holding came to be because there's no discrete type to jump to.
This tends to lead to another behavior which is lack of documentation because there's no discrete type to attach documentation to; there's a "behavioral slop trigger" that happens with ad-hoc types, in my experience. The more it gets used, the more it gets abused, the harder it is to understand the intent of the data structures because much of the intent is now ad-hoc and lacking in forethought because (by its nature) it removes the requirement of forethought.
"I am here. I need this additional field or this additional type. I'll just add it."
This creates a kind of "type spaghetti" that makes code reuse very difficult. So even when I write TS and have the option of using ad-hoc types and utility types, I almost always explicitly define the type. Same with types for props in React, Vue, etc.; it is almost always better to just explicitly define the type, IME. You will thank yourself later; other devs will thank you.
Are you sure? This is a feature of OCaml but not F#, IIRC
Edit: https://github.com/fsharp/fslang-suggestions/issues/538
Yes and no. C# unions aren’t sealed types, that’s a separate feature. But they are strictly nominal - they must be formally declared:
union Foo(Bar, Baz);
Which isn’t at all the same as saying: Bar | Baz
It is the same as the night and day difference between tuples and nominal records.

We're very interested in this space. And we're referring to it as, unsurprisingly, 'anonymous unions' (since the ones we're delivering in C#15 are 'nominal' ones).
An unfortunate aspect of lang design is that if you do something in one version, and not another, people think you don't want the other (not saying you think that! but some do :)). That's definitely not the case. We just like to break things up over many versions so we can take the time to see how people feel about things and where our limited resources can be spent best next. We have wanted to explore the entire space of unions for a long time. Nominal unions. Anonymous unions. Discriminated unions. It's all of interest to us :)
=> named sum type implicitly tagged by its variant types
but not "sealed", as in there are no artificial constraints requiring that the variant types be defined in the "same place" or "as variant types"; they can be arbitrary nameable types
> unions enable designs that traditional hierarchies can’t express, composing any combination of existing types into a single, compiler-verified contract.
To me that "compiler-verified" maps to "sealed", not "on the fly". Probably.
Their example is:
public union Pet(Cat, Dog, Bird);
Pet pet = new Cat("Whiskers");
- the union type is declared upfront, as is usually the case in C#. And the types that it contains are a fixed set in that declaration. Meaning "sealed"?
As the language designer notes in the comments, these are named unions, as opposed to anonymous ones, but they are also working on the latter.
"Sealed" is probably not the correct word to use here, as it would be sealed in both cases (it doesn't really make sense to "add" a type to the A | B union). The difference is that you have to add a definition and name it.
Here is visual layout if anyone is interested - https://vectree.io/c/memory-layout-tagging-and-payload-overl...
F# offers actual field sharing for value-type (struct) unions by explicitly sharing field names across cases, which is as far as you can push it on the CLR without extra runtime support.
* https://devblogs.microsoft.com/dotnet/csharp-15-union-types/...
We have been pushing toward higher performance for years, and this is a performance pitfall for unions, which are often thought of as being lighter weight than inheritance hierarchies.
F# just stores a field-per-case, with the optimization that cases with the same type are unified which is still type safe.
I expect the same will hold here. But given that the former group is multiple orders of magnitude larger than the latter, we tend to design the language in that order accordingly.
Trust me, we're very interested in the low-overhead space as well. But it will be for more advanced users who can accept the tradeoffs involved.
And, in the meantime, we're designing it in C#15 so that you can always roll the perfect implementation for your use case, and still have it be thought of as a union by the language.
To be clear, this requires explicitly using the same field name as well.
This is essentially "sealed interface" from Java/Kotlin or "enum" in Rust
One thing I miss here (and admittedly I only skimmed through the post so if I missed this, please do correct me) is "ad hoc" unions.
It would be great to be able to do something like
public Union<TypeA, TypeB> GetThing()...
Without having to declare the union first. Basically OneOf<...> but built-in and more integrated union Union<T1, T2>(T1, T2);
Add a bunch of overloads and you'd replicate, for T1|T2 syntax, the equivalent mess to (Value)Tuple<...> that eventually became the backing for actual tuple syntax.

What a missed opportunity. I really think F#, if you combine all of its features and what it left out, was the way. Pulling them all into C# just makes C# seem like a big bag of stuff, with no direction.
F#'s features, and also what it did not include, gave it a style and 'terseness' that still can't really be done in C#.
I don't really get it. Was a functional approach really so 'difficult'? That it didn't continue to grow and take over.
So far everything that was added to C# very much reduces the amount of dead boilerplate code other languages struggle with.
Really give it an honest try before you judge it based on the summation of headlines.
given that most of the things added seem more inspired by other languages than "moved over" from F#, the "other languages struggle with" part doesn't make that much sense
like some languages that had been ahead of C# and made union types an "expected general purpose" feature of "some kind":
- Java: sealed interfaces (on a high level the same as this C# feature; details differ)
- Rust: its enum type (better at reducing boilerplate, since you don't need to define a separate type per variant, but you can do so if you need to)
- TypeScript: untagged sum types + literal types => tagged sum types
- C++: std::variant (let's ignore raw union usage, which is more a landmine than a feature)
either way, great to have it; it's really convenient to represent a `TYPE is either of TYPES` relationship. These are conceptually very common, and working around them without proper type system support is annoying (but very viable).
I would also say that while it is often associated with functional programming, it has become generally expected even if your language isn't functional. Comparable to e.g. having some limited closure support.
I honestly have no idea where you would get this idea from. C# is a pretty opinionated language and its worst faults all come from version 1.0, when it was mostly a clone of Java. They've been very carefully undoing that for years now.
It's a far more comfortable and strict language now than before.
The only things I wish for are Rust's borrow checker and memory management. And I wish the AOT story were more natural.
Besides that, for me, it is the general purpose language.
Yes, C# is a jack of all trades and can be used for many things: web, desktop, mobile, microservices, CLI, embedded software, games. It's probably not fit for writing operating system kernels due to the GC, but most areas can be tackled with C#.
Just look at this feature: https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/cs...
Was this needed? Was this necessary? It's reusing an existing keyword, fine. It's not hard to understand. But it adds a new syntax to a language that's already filled to the brim, just to save a few keystrokes?
Try teaching someone C# nowadays. Completely impossible. Really, I wish they would've given F# just a tenth of the love that C# got over the years. It has issues but it could've been so much more.
For you it may be fine to write:
List<string> strs = new List<string>();
And sure if you have been using C# for years you know all the things going on here.
But it shouldn’t be an argument that:
List<string> strs = [];
Is substantially easier to grasp.
And that has been the theme of all changes.
The example you point out is the advanced case, one that someone only needs in a very specific situation. It does not have a lot to do with learning the language.
The language design team is really making sure that the features work well throughout and I think that does deserve some credit.
I agree that = [] is perfectly fine syntax. But I would definitely argue that:
[with(capacity: values.Length * 2), ..
is non-intuitive and unnecessary. What other language is there that has this syntax? Alternatively, is this a natural way of writing this? I wouldn't say so.
My main language in my free time is Rust, a few years ago it was F#. So, I'm absolutely open to other syntax ideas. But I feel that there has to be a direction, things have to work together to make a language feel coherent.
Another example would be Clojure, which I started learning a few months ago (before we all got swept up in AI FOMO :D). Clojure as a language feels very coherent, very logical. I'm still a beginner, but every time I learn something about it, it just makes sense. It feels as if I could have guessed that it works this way. I don't get that feeling at all in many of the new features of C#.
> The example you point out is the advanced case, someone only needs in a very specific case. It does not have a lot to do with learning the language.
I disagree. When learning the language, you're going to have to read other people's code and understand it. It's the same basic principle, but, I'd argue, much worse in C++. Yes, in theory, you don't have to understand SFINAE and template metaprogramming and (now) concepts and all those things. You could just work in a subset of C++ that doesn't use those things. But in practice, you're always going to have issues if you don't.
Dictionary<string, List<Tuple<string, string>>> foo = new Dictionary<string, List<Tuple<string, string>>>();
Or things like: Dictionary<string, List<Tuple<string, string>>> foo = DoSomeWork();
And you'd have to get the type right, even though the compiler knew the type, because it'd tell you off for getting it wrong. Sometimes it was easiest to just grab the type from the compiler error. (This example is of course a bit OTT, and it's a bit of a code smell to be exposing that detail of typing to consumers.)

No-one wants to go back to that, and anyone who says C# is over-complicated I think is forgetting how rough it was in the earliest versions.
While the introduction of auto-typing through "var" helped a lot with that, you'd still regularly have to fight it if you wanted to properly initialise arrays with values, because the syntax was just not always obvious.
Collection literals are amazing, and now the ability to pass things into the constructor means they can be used when you need constructor parameters too, that's just a good thing as you say.
This is exactly how C++ landed where it is now. Every time it's "you only need to know that syntax if..." well it ends up everyone has to know that syntax because someone will use it and if you're a responsible programmer you'll end up reading a lot code written from other people.
I work on multiple applications with different versions of C# and/or Dotnet. I find it quite annoying to have to remember what syntax sugar is allowed in which versions.
If C# did not want verbose syntax, then Java was a poor choice to imitate.
Do you actually have a datapoint of someone failing to understand C#, or are you just hyperbolically saying it's a big language? The tooling, the ecosystem, the linting, the frameworks. It's a very easy language to get into...
So I'm happy to discuss the thinking here. It's not about saving keystrokes. It's about our decision that users shouldn't have 7 (yes 7) different ways of creating collections. They should just be able to target at least 99% of all cases where a collection is needed, with one simple and uniform syntax across all those cases.
When we created and introduced collection expressions, we were able to get close to that goal. But there were still cases left out, leaving people in the unenviable position of having to keep their code inconsistent.
This feature was tiny, and is really intended for those few percent of cases where you were stuck having to do things the much more complex way (see things like immutable builders as an example), just to do something simple, like adding an `IEqualityComparer<>`. This was also something that would become even more relevant as we add `k:v` support to our collections to be able to make dictionaries.
That isn't a reasonable take. Failing to teach a language by enumerating all its features is an indictment of the instructor and not the language.
You're right, it's not impossible and in general it's not among the hardest languages to teach. But I would argue, it is heading that way.
There are already so many ways to do things in C#. For example, try explaining the difference between fields and properties; sounds easy, but making it really stick is quite a challenge. And that's one of the simplest cases (and a feature I'm 100% in favor of).
And you will have to explain it at some point, because real codebases contain these features so at some point, it'll need to be taught. Learning a language doesn't stop when you can write a simple application, it continues up until at least you're comfortable with most of its features and their practical use. The quicker one can get people to that point, the easier the language is to teach, I'd argue.
One might also argue that learning never really stops, but that's beside the point :)
In general, my issue isn't any specific feature. C# has many features that are non-trivial to learn but still great: value types, generics, expression trees. Source generators are relatively new and I like them! I like most of the things they're doing in the standard library or the runtime. Spans everywhere is a nice improvement, most new APIs are sensible and nice to use and the runtime just keeps getting faster every release. Great. It's more the pure C# language side I have an issue with.
But every language has a budget of innovation and cognitive load that you can expect people to deal with, and C# is not using its budget very wisely in my opinion.
If they actually put effort in F#, it would have reached "unteachable" state already :)
I would've loved an F# that found a way to improve on the performance issues, especially when using computation expressions. That and, either, a deeper integration of .NETs native OOP subtyping, or some form of OCaml-like module system, would have been enough to make it an almost perfect language for my tastes.
Obviously, these are big, and maybe impossible, issues. But Microsoft as a whole never really dedicated enough resources to find out. I feel for the people still working on it, their work is definitely appreciated :)
Better than a new language for each task, like you have with Go (microservices) and Dart (GUI).
I'm using F# on a personal project and while it is a great language, I think the syntax can be less readable than that of C#. C# code can contain a few too many boilerplate keywords, but it has a clear structure. The lack of parentheses in F# makes it harder to grasp the structure of the code at a glance.
also called a general-purpose, general-style language
> that still can't really be done in C#
I would think about it more as C# including features that other more general-purpose languages with a "general" style have adopted, than as "migrating F# features into C#". As you have mentioned, there are major differences between how C# and F# do discriminated sum types.
I.e. it looks more like it got inspired by its competition, e.g. Java (via sealed interfaces), Rust (via enums), TypeScript (via structural typing & literal types), etc.
> Was a functional approach really so 'difficult'?
it was never difficult to use
but it was very different in most aspects
which makes it difficult to push, sell, adopt, etc.
that the maybe most widely used functional language (Haskell) has a very bad reputation for being unnecessarily complicated and obscure to use, with a lot of CS terminology and pseudo-elitist gatekeeping, doesn't exactly help. (Also, to be clear, I'm not saying it has these properties, but it has the reputation, or at least had that reputation for a long time.)
Probably more this than any technical reason. More about culture and installed viewpoints.
I don't want to get into the objects/functions wars, but I do think pretty much every technical problem can be solved better with functions. BUT it would take entire industries re-tooling. So I think it was more about inertia.
Inertia won.
> I don't really get it
To me it makes sense because C# is a very general purpose language that has many audiences: desktop GUI apps, web APIs, a scripting engine for gaming SDKs, console apps. It does each reasonably well (with web APIs being where I think they truly shine).
> Was a functional approach really so 'difficult'
It is surprisingly difficult for folks to grasp functional techniques, even writing code that uses `Func`, `Action`, and delegates. Devs have no problem consuming such code, but writing it is a different matter altogether; there is just very little training for devs to think functionally. Even after explaining why devs might want to write such code (e.g. it makes testing much easier), it happens very, very rarely in our codebase.

But I do agree. C# is heading to a weird place. At first glance C# looks like a very explicit language, but then you have all the hidden magical tricks: you can't even tell if `(x) => x` will be a Func or an Expression[0], or if `$"{x}"`[1] will actually be evaluated, without looking at the callee's signature.
[0]: https://learn.microsoft.com/en-us/dotnet/csharp/advanced-top...
[1]: https://learn.microsoft.com/en-us/dotnet/csharp/advanced-top...
In the places where that is a thing, I've never needed to care. (Which is kind of the point)
Note that most of its development is still done by the open source community, and its tooling is an outsider in Visual Studio, where everything else is shared between Visual Basic and C#.
With the official deprecation of VB, and C++/CLI, even though the community keeps going with F#, CLR has changed meaning to C# Language Runtime, for all practical purposes.
Also UWP never officially supported F#, although you could get it running with some hacks.
Similarly with ongoing Native AOT, there are some F# features that break under AOT and might never be rewritten.
A lost opportunity indeed.
Not adding functional features to C# doesn't mean F# would have gained more usage. And if someone wants to use F#, no one is stopping him or her.
But that would take schools changing, companies changing, everything. So it was really the installed base that won, not what was better.
We'd have to go back in time, and have some ML Language win over C++ .
For example, C# chose not to go down the route of type erasure for the sake of generics and because of that you don't get the same sort of runtime type issues that Java might have.
Agreed. Java is on the same trail.
I work with C# a lot these days and wish it had as cohesive a syntax story. It often feels like "islands of special syntax that make you fall off a cliff".
For performance-sensitive scenarios where case types include value types, libraries can also implement the non-boxing access pattern by adding a HasValue property and TryGetValue methods. This lets the compiler implement pattern matching without boxing.

Even outside of C#, this trend seems to show up everywhere: trying to reduce boilerplate while keeping things safe.
The only thing I wish now is for someone to build a functional Web framework for C#.
Active patterns, computation expressions, structural typing, statically resolved type parameters, explicit inlining, function composition, structural equality, custom operators and much richer generators.
Not sure I would want the last thing in C#, I think having boundaries at the function signature for that.
https://github.com/manifold-systems/manifold/tree/master/man...
In the article, the example with the switch works because it switches on the class of the instance.
The fact that they flatten a union of optional types into an optional union of the corresponding non-optional types is indeed a weird feature, which I do not like. I think a union must preserve the structural hierarchy of the united types: a union of unions must be different from a union of all the types included in the component unions, and the same goes for a union of optional types, where an optional type is equivalent to a union between the void/null type and the non-optional type. Still, this C# behavior does not make C# unions anything other than discriminated unions, even if with a peculiar feature.
You can see in the examples how "switch" uses the implicit discriminant of the union to select the code branch that must be executed, depending on the current type of the value stored in a union.
The syntax of the "switch" seems acceptable, without too many superfluous elements. It could have been made slightly better by not needing dummy variables for selecting structure members (or for use in whatever other expression is put on the right-hand side); the repetition of the dummy variable names could have been avoided if the name of the union variable could be reused with a different meaning within a "switch", i.e. as a variable of the selected type.
I do not see what else you want. Perhaps Rust has reused the traditional term "discriminated union" (which had been in use for many decades before Rust, and which means the same thing as the more recent terms "tagged union" or "sum type") with a non-standard meaning, and you have that in mind.
> Union types — exhaustive matching over a closed set of types
> Closed hierarchies — exhaustive matching over a sealed class hierarchy
> Closed enums — exhaustive matching over a fixed set of enum values
I believe the last one would be sum types (disc. unions). This one allows overlapping types.
This is in contrast with what is called "union" in the C language, where you must know a priori the type of a value in order to use it correctly.
Moreover, if you can discriminate at run time which is the actual type, that means that you can discriminate between values that happen to have the same representation, but which come from distinct types, e.g. if you had a union between "signed integer" and "unsigned integer", you could discriminate between a signed "3" and an unsigned "3".
This property ensures that the number of possible values for a discriminated union type is the sum of the numbers of possible values for the component types, hence the alternative name "sum type", unlike for a non-discriminated union, where the number of possible values is smaller, because from the sum you must subtract the number of possible values of the intersection sets.
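The signed-vs-unsigned "3" example above is directly expressible as a Rust enum (the `Int` type here is just an illustration):

```rust
// A discriminated union of "signed" and "unsigned" integers: the tag lets
// us tell a signed 3 from an unsigned 3 even though the payload bits match.
#[derive(Debug, PartialEq)]
enum Int {
    Signed(i8),
    Unsigned(u8),
}

fn main() {
    let a = Int::Signed(3);
    let b = Int::Unsigned(3);

    // Same payload representation, still distinguishable values:
    assert_ne!(a, b);

    // Hence "sum type": i8 has 256 values, u8 has 256, and Int has
    // 256 + 256 = 512 possibilities. An undiscriminated 8-bit overlap
    // would only have 256, because the intersecting bit patterns collapse.
    assert_eq!(a, Int::Signed(3));
}
```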
The C# unions described in the article are discriminated unions, except that their component types are not the types literally listed in their definition. If some of the component types are optional, then the true union components are the corresponding non-optional types, together with the null (a.k.a. void) type, which I find a rather strange choice.
Or is it becoming a ball-of-mud/bad language compared to its contemporaries?
(Honest questions. I have never used .NET much. I'm curious)
Are there particular things about the ecosystem that you worry about (or have heard about)? Biggest complaint I would have is that it seems like many popular open source libraries in the .NET ecosystem decide to go closed source and commercial once they get popular enough.
(The amount of times I hear "the standard lib is great!" seems more to attempt to defend the plethora of commercial libraries, more than anything)
The community feels rather insular too? The 9-5 dayjob types with employers who don't understand or embrace open source? At my age I can respect that though
And is Postgresql a 2nd-class citizen? If so, your boss will tell you to use SQL Server surely?
I guess it's hard to get a grasp on the state/health of .NET, as to me it seems 99.99999% of the code is in companies' private repos, since it's not a popular choice for open source projects. Which itself seems like a proxy signal, though
> And is Postgresql a 2nd-class citizen?
No, it is not. Microsoft maintains the Npgsql project[0], and I'd say it is a very capable, feature-rich adapter.
I have not used C# with SQL Server in almost a decade.
It is what it is, but I wouldn't say it's actually the fault of the language, especially now.
It serves many audiences so it can feel like the language is a jack of all trades and master of none (because it is) and because it is largely backwards compatible over its 20+ years of existence.
That said, I think people make a mountain out of a molehill with respect to keyword sprawl. Depending on what you're building, you really only need to focus on the slice of the language and platform you're working with. If you don't want to use certain language features...just don't use them?
I think it excels in a few areas: web APIs and EF Core being possibly the best ORM out there. For me, it is "just right". Excellent platform tooling, very stable platform, very good performance, hot reload (good, but not perfect), easy to pick up the language if you already know TypeScript[1]; there are many reasons it is a good language and platform.
[0] https://learn.microsoft.com/en-us/dotnet/csharp/advanced-top...
VB.NET's Object was created to make interfacing with COM easier. VB.NET's one key niche versus C# for many early years was COM interop through Object.
C#'s dynamic keyword was more directly added as part of the DLR (Dynamic Language Runtime, aka System.Dynamic) effort spurred by IronPython. It had the side benefit of making COM interop easier in C#, but the original purpose was better interop with IronPython, IronRuby, and any other DLR language. That's also why, under the hood, C#'s dynamic keyword supports a lot of DLR complexity/power. You can do a lot of really interesting things with `System.Dynamic.IDynamicMetaObjectProvider` [1]. The DLR's dependency on `System.Linq.Expressions` also points to how much further along the DLR was compared to VB.NET's Object, which was "just" the VB7 rename of VB6's Variant originally (it did also pick up DLR support).
The DLR hasn't been invested into in a while, but it was really cool and a bit of an interesting "alternate universe" to still explore.
[0] https://learn.microsoft.com/en-us/dotnet/api/system.dynamic....
That's why I like it so much. And now, I can write mostly functional code.
>I think it excels in a few areas: web APIs and EF Core being possibly the best ORM out there
It's awesome for web stuff and microservices.
> It's awesome for web stuff and microservices.
The gRPC platform support is top notch and seamless, and Aspire is just :chefs_kiss:

Is it good at the wrong thing? E.g. compare to strongly-typed query generators
It has a bad rep because Microsoft could Microsoft as they do.
.NET is a fantastic ecosystem. Has a decent build and dependency system (NuGet, dotnet run/build, declarative builds in XML). Massive standard library, with a consistent and wide focus on correctness, ergonomics, and performance across the board.
You can write everything in many languages, all on the same runtime: business logic in C#; hot paths interfacing with native libraries in C++/CLI; shell wrappers in PowerShell, document attachments with VB, data pipelines in F#.
I feel more people should use it, or at least try it, but sadly it is saddled with the perception that it is Windows-only, which hasn't been true for a decade (also, IMO, not necessarily a negative, because Windows is a decent OS, sue me).
In my experience it does not work very well outside of the sanctioned Linux distributions. Quirky heisenbugs and nonsensical crashes made it virtually unusable for me on Void. I doubt that's changed in the years that have since passed.
> not necessarily a negative, because Windows is a decent OS
Is a language runtime worth an operating system? I think that's a paradigm we left behind in the 1970s, when the two were effectively inseparable (and interwoven with hardware!). I wouldn't expect someone to swap to using a Unix system because they really want a better Haskell experience.
I just don't see any actual interesting or meaningful reasons to care about .NET, I effectively feel the same way about it that I do about Go. Just not something that solves any problem I have, and doesn't have anything that interests me. Although effectively I did try it, so it's a moot point considering that's one of the outcomes you're wishing for.
It's open source. Did you follow the spirit of Linux and file a bug report with as much detail about the crashes as you could gather? Most OSS only supports as many distros as people are willing to test and file accurate bug reports for (and/or scratch the itch themselves and solve it). It seems a bit unfair to expect .NET to magically have a test matrix including every possible distro when almost nothing else does. (It's what keeps distro maintainers employed: testing other people's apps, too.)
It probably has gotten better since then, for what it is worth. .NET has gotten a lot of hardening on Linux and a lot of companies are relying on Linux servers for .NET apps now.
At the very least there are very tiny Alpine-based containers that run .NET considerably well and are very well tested, so Docker is always a strong option for .NET today no matter what Linux distro you want on the "bare metal" running Docker.
The build and dependency systems are an abysmal, esoteric, poorly documented mix of csproj files, sln files, random scattered json files, etc.
The standard library in my experience sucks and has all sorts of issues, especially around Uris, DateTimes, etc.
And the ecosystem itself has such a low quality bar, ironically _especially_ with anything made by microsoft. For every nuget package that's well-designed, well-documented, and easy-to-use, there's five which have bugs and undocumented exceptions and poorly-designed APIs.
As much as I hate Microsoft, I admit they are doing great things for C#.