- "The setup is noise and boilerplate heavy." Actually the signals example looks just as noisy and boilerplate heavy to me. And it introduces new boilerplate concepts which are hard for beginners to understand.
- "If the counter changes but parity does not (e.g. counter goes from 2 to 4), then we do unnecessary computation of the parity and unnecessary rendering." - Sounds like they want premature memoization.
- "What if another part of our UI just wants to render when the counter updates?" Then I agree the strawman example is probably not what you want. At that point you might want to handle the state using signals, event handling, central state store (e.g. redux-like tools), or some other method. I think this is also what they meant by "The counter state is tightly coupled to the rendering system."? Some of this document feels a little repetitive.
- "What if another part of our UI is dependent on isEven or parity alone?" Sure, you could change your entire approach because of this if that's a really central part of your app, but most often it's not. And "The render function, which is only dependent on parity must instead "know" that it actually needs to subscribe to counter." is often not an unreasonable obligation. I mean, that's one of the nice things about pure computed functions: it's easy to spot their inputs.
I think standardizing signals, a concept that is increasingly used in UI development, is a laudable effort. I don't want to get into the nitty gritty about what is too much boilerplate and whether you should build an event system or not, but since signals are something that is used in a variety of frameworks, there might be a good reason for it? And why not make an effort and standardize them over time?
For a desktop app developer that's a pretty funny statement, given that the Qt framework introduced signals and slots in the mid 90s.
I am curious how many web devs think that signals are a new concept. (I don't necessarily mean the parent poster.)
You're basically saying you want this thing, but you don't want to have to justify it
...common usage is not really a justification for putting it into the language standard though. Glancing over the readme I'm not seeing anything that would require changes to the language syntax and can't be implemented in a regular 3rd-party library.
In a couple of years, another fancy technique will make the rounds and make signals look stupid, and then we are left with more legacy baggage in the language that can't be removed because of backwards compatibility (let C++ be a warning).
https://preactjs.com/guide/v10/signals
"In Preact, when a signal is passed down through a tree as props or context, we're only passing around references to the signal. The signal can be updated without re-rendering any components, since components see the signal and not its value. This lets us skip all of the expensive rendering work and jump immediately to any components in the tree that actually access the signal's .value property."
"Signals have a second important characteristic, which is that they track when their value is accessed and when it is updated. In Preact, accessing a signal's .value property from within a component automatically re-renders the component when that signal's value changes."
I think it makes a lot more sense in a context like that.
I have found that passing props makes React-like applications very complex and messy, and props are to be avoided as much as practical.
The mechanism for avoiding props is Custom events.
It concerns me to see the concept of signals being passed as props when surely signals/events should be removing the need for props?
In my experience, the big benefit is the ability to make reactive state modular. In an imperative style, additional state is needed to track changes. Modularity is achieved using abstraction. Only use when needed.
> Sounds like they want premature memoization
It's a balance to present a simple example that is applicable. Cases where reactivity has a clear benefit tend to be more complex examples, which are more difficult to demonstrate than a simple, less applicable one.
You can do it that way, but… why? When you could just not?
Even if that were true for this example, the signal-based model grows linearly in complexity and overhead with the number of derived nodes. The callback-based version is super-linear in complexity because you have an undefined/unpredictable evaluation order for callbacks, producing a combinatorial explosion of possible side effect traces. It also scales less efficiently because you could potentially run side effects and updates multiple times, where the signal version makes additional guarantees that can prevent this.
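The double-run problem described above can be demonstrated with a tiny hand-rolled pub/sub bus (a sketch, not any particular library): in a diamond dependency, the final side effect runs once per incoming edge, and one of those runs observes a stale intermediate value.

```javascript
// A tiny pub/sub bus: listeners fire synchronously, in subscription order.
const listeners = {};
const on = (topic, fn) => (listeners[topic] ??= []).push(fn);
const emit = (topic) => (listeners[topic] ?? []).forEach((fn) => fn());

// Diamond dependency: b and c both derive from a; d derives from b and c.
let a = 1, b = a + 1, c = a * 2;
let dRuns = 0;

on('a', () => { b = a + 1; emit('b'); });
on('a', () => { c = a * 2; emit('c'); });
on('b', () => { dRuns++; }); // d's side effect...
on('c', () => { dRuns++; }); // ...runs once per incoming edge

a = 5;
emit('a');
// dRuns === 2: d's side effect ran twice for one logical update of `a`,
// and the first run saw the new `b` next to a stale `c`.
```

A signal graph with batched, topologically ordered evaluation would run d's computation exactly once here, which is the "additional guarantees" point.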
counter.set(counter.get() + 1)
One would think that proper integration into the language also means getting rid of those "noisy" setter/getter calls.

In practice, I can count on two hands the number of times I’ve written `new Promise`. What did happen, though, is I started to write `.then` a whole lot more, especially when working with third party libraries.
In the end, the actual day-to-day effect of the Promise addition to JavaScript was it gave me a fairly simple, usually solid, and mostly universal interface to a wide variety of special behaviors and capabilities provided by third party libraries. Whether it’s a file read or an api request or a build step output, I know I can write `.then(res => …)` and I’m already 50% of the way to something workable.
If this Signal proposal can do something similar for me when it comes to the Cambrian explosion of reactive UI frameworks, I am in favor! What’s more, maybe it will even help take reactivity beyond UI; I’ve often daydreamed about some kind of incrementally re-computed state tree for things other than UI state.
While .then in initial promises was a great improvement over nested delegates and is fine for simple chained promises, once you start conditionally chaining different promises or need different error handling for particular chains in the promise, or wanting to do an early return, the code can become much harder to read and work with.
With async/await though you just write the call essentially as if it’s not a promise and can easily put try/catch around particular promise calls, easily have early returns, etc.
However you do quite often need to use Promise.all, even when using async/await.
Thinking that the current crop of JS UI libraries designed their signals in such a good way that it needs to become a part of the language is hubris. Signals have many possible implementations with different tradeoffs, and none of them deserve to have a special place in the JavaScript spec.
Before these libraries used signals, they or their predecessors used virtual DOM. Luckily, that didn't become part of JS, but how are signals any different? They aren't. The argument for making them standard is even worse than for virtual DOM.
Are we just going pile every fad into a runtime that basically has no way to dispose of no-longer-wanted features without breaking the web? That is quite short sighted.
Reactive UI won. The main thing stopping me from using vanilla JS is the absolute explosion in complexity managing state for even small sized applications. To me, any reactive framework is better than vanilla, so perhaps there is a construct missing? Now that it’s been a decade or so, we should start thinking about possible cut-points for standardization. Like with promises, this could bring down complexity for extremely common use-cases, if done right.
I think a better way to evaluate it would be: “would this proposal be used by existing reactive frameworks?”. If not, why? What’s missing? What’s superfluous? What about lessons from UI and reactivity from other languages? There’s a lot of fragmented experience to distill, but it’s a worthwhile endeavor imo.
But that's not what you're doing. You're gunning for becoming the standard from the start – you are trying to convince people to use your draft implementation based on its status as a proposed standard, instead of them using it on its own technical merits.
Ditch the status, and see if anyone still wants it.
--
Yes, reactive UI won – just fine, without having signals in the JS standard. Because not having signals in the language was never an actual problem holding back reactive UI development in JS.
Your proposal does not "bring down complexity". It simply moves the complexity from UI libraries into the JS standard. In doing that, you forcefully marry the ecosystem to that particular style of complexity. Every browser vendor will need to implement it and support it... for how many decades?
And to what end? Unlike e.g. promises that are useful on their own, your proposal isn't nearly ergonomic enough to allow building reactive UIs in vanilla JS. Users will still need to use libraries for that, just like they do today. You're just moving one piece of such libraries into the standard, without building the case for why it's needed there.
--
Your proposal spends pages selling the readers on signals, but that is not what you need to sell. We already have many implementations of signals. You need to sell why your (or any) signals implementation needs to be in the JavaScript standard.
You have one tiny "Benefits of a standard library" subsection under "Secondary benefits" but it's just ridiculous. You're basically saying that we should add signals to JS because we've added (much simpler or more needed) things to JS before – is that really your best argument?
And... "saving bundle size"? You want to bless one implementation of a complex problem to save what, 5KB of this: https://cdn.jsdelivr.net/npm/s-js
Sorry, just – nothing about this makes sense to me.
a = () => 42
b = () => a() - 1
c = () => a() + b() * 2
It isn't a bigger nightmare than debugging pure functions. The source for `c` is `a` and `b`. All signal values (as proposed) will be lexically available in the body of a dependent signal, so there's no hidden registry to navigate anyway. If in-browser IDEs want to record a call tree for an activation record, they can do that without a standard.

You can argue about the need for this, but if we're going to extend the standard lib, then looking at what is popular is a good approach IMO.
> Before these libraries used signals, they or their predecessors used virtual DOM
Signals are not a replacement for virtual DOM.
And there's the new one which seems to be getting implemented in node right now: https://github.com/WICG/observable
window.dispatchEvent(new Event('counterChange'));
And every part of the application that wants to react to it can subscribe via:

window.addEventListener('counterChange', () => {
  // ... do something ...
});

Anything wrong with that?

Event handling gets messy very easily. If you want to get deeper into it, have a look at event bubbling and propagation.
Large applications need robust event handling. This is nowadays the hidden benefit of frameworks like Angular, Vue etc.
Believe me, you don’t want to use the standard event handling API without a framework. Adding, deleting, cloning, firing, removing, fire once etc on many elements can have serious unwanted side effects.
In this example, the event is on the Window. There is no bubbling. It is already at the top level.
>Believe me, you don’t want to use the standard event handling API without a framework. Adding, deleting, cloning, firing, removing, fire once etc on many elements can have serious unwanted side effects.
I don't know what this means. The frameworks do not have much to do with this topic.
The difference with signals is that the resulting value is only ever calculated when the end consumer reads the value, so you schedule render updates asynchronously from the actual writes to the signal, and whatever chain of computations the watchers perform is done just the one time during the render.
Interim values sent to the signal will get lost, so you really can't do too much interesting work in them. It's really just a fancy abstraction layer to coordinate a rendering cycle.
It can also be more performant, eg, say you have a computation that depends on 2 values:
`result = a ? b : 0`
Then if a is falsy, we don't need to recompute if b changes. This is achieved automatically with signals, but would require quite some code with classic pub/sub.
And it's hard to ensure that all listeners don't cause that trigger cascade.
It has all the downsides of the pub/sub architecture highlighted in the proposal.
I might not be describing that well, because once you go down that road it really becomes a whole overall approach that infects the whole program (like functional reactive programming), and so it's really about how the whole flow fits together from top to bottom, and that can be very elegant.
I don't think that's the right fit for everything, i.e. in gamedev it might make more sense to just update some object's position imperatively, but for UI it tends to work pretty well.
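The `result = a ? b : 0` point above can be sketched with a minimal auto-tracking signal (hand-rolled here, NOT the proposal's implementation; real implementations also clear stale subscriptions, which this sketch omits): a computation re-collects its dependencies on every run, so a branch that never reads `b` never subscribes to it.

```javascript
// The computation currently running, so reads can register themselves.
let activeComputation = null;

function state(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() {
      if (activeComputation) subscribers.add(activeComputation);
      return value;
    },
    set(next) {
      value = next;
      [...subscribers].forEach((run) => run()); // copy: re-runs may resubscribe
    },
  };
}

function effect(fn) {
  const run = () => {
    const prev = activeComputation;
    activeComputation = run;
    try { fn(); } finally { activeComputation = prev; }
  };
  run();
}

const a = state(false);
const b = state(10);
let result, recomputes = 0;

effect(() => {
  recomputes++;
  result = a.get() ? b.get() : 0; // `b` is only read when `a` is truthy
});

b.set(20);   // effect never read `b`, so nothing re-runs (recomputes stays 1)
a.set(true); // re-runs; this time `b` is read and becomes a dependency
b.set(30);   // now `b` updates do re-run the effect
// recomputes === 3, result === 30
```

With classic pub/sub you'd have to hand-code this "only listen to b while a is truthy" logic on every subscription.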
window.addEventListener('counterChange', () => {
element.innerHTML = components.map(c => c.renderHTML(newCounterValue)).join('');
});
or, you have to check the components on a case by case basis: a. does the component need to be updated or not, and b. what is the most efficient way to update that component. And in TFA, if counter is changed from odd to odd, the label doesn't need to be updated.

Also, multiplying the number of events by the number of components can make the application go out of hand very quickly.
An alternative proposal would be some kind of auto-removed listener when it goes out of context.
Well, don't events only bubble upwards? You need to know the exact element if it is not on a lower level in the DOM tree.
Events were too messy, so I wrote a small pub/sub message queue type of thing. Anyone anywhere in the DOM can subscribe to messages based on subject regexes.
Makes things a lot easier, especially when I added web components to wrap existing elements so that publishing and subscribing is done with attributes, not js.
It only works in browser environments.
See, for instance, https://www.electronjs.org/docs/latest/api/ipc-renderer
Unfortunately, modern frameworks really want to use their own event channels, which makes hooking a pain.
Welcome to Node.js v21.6.2.
Type ".help" for more information.
> window.dispatchEvent(new Event('counterChange'))
Uncaught ReferenceError: window is not defined

Sure, it requires a bit of discipline, but it’s vastly simpler to me than whatever solution comes up every few years (Backbone, Knockout, Angular, React, modifying the language itself, etc). There must be something profoundly different with the way I think.
It even expresses itself in the function naming. They call updating innerText “render”. You’re not rendering anything. At most, the browser is, but so is everything else it does related to painting. It feels like a desperate attempt to complicate what is one of the simplest DOM functions. It really baffles me.
Once it gets more complex, it is not easy.
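For what it's worth, the `window is not defined` REPL error above is about the global, not the pattern: `EventTarget` and `Event` are plain globals in Node.js (v15+), so the same dispatch/subscribe idea works on any EventTarget instance without `window`. A sketch:

```javascript
// Any EventTarget works as an event bus; no `window` required.
const bus = new EventTarget();

let seen = 0;
bus.addEventListener('counterChange', () => { seen++; });

// dispatchEvent runs listeners synchronously.
bus.dispatchEvent(new Event('counterChange'));
bus.dispatchEvent(new Event('counterChange'));
// seen === 2
```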
I strongly suspect that in ages past you would have been an ASM programmer shaking your fist at those crazy portable C programmers, and then a C programmer shaking his fist angrily at those crazy memory safe Java programmers.
Progress in programming can be marked by eliminating the need for ceremonial kinds of strict discipline needed to achieve good results.
Which isn't to say React is some kind of next evolution, but signals certainly are a step in the right direction.
Keeping the DOM in sync with a data state isn’t too difficult but doing so in a highly performant (60 fps) way _is_ tremendously difficult. Especially when it comes to creating APIs that aren’t leaky but also not too cumbersome.
It would frankly be easier to just paint pixels to a canvas game-style, than translating changes to a living DOM tree.
> The current draft is based on design input from the authors/maintainers of Angular, Bubble, Ember, FAST, MobX, Preact, Qwik, RxJS, Solid, Starbeam, Svelte, Vue, Wiz, and more…
Would be interested what existing library authors think of this proposal. Interesting that React is not in that list
Signals are a bit like channels, except they're broadcast instead of single receiver. It'd be neat if this could somehow be leveraged to allow web workers to communicate with channels instead of onMessage callbacks. Specifically, being able to `select` over signals/channels/promises like in Go would offer a syntactic benefit over having to manage multiple concurrent messaging mechanisms with callbacks (maybe by allowing signals to be included in `Promise.any`).
Hard disagree.
`x instanceof Promise` simply doesn't work. If my library has a then method that accepts a catch callback and yours doesn't, they're silently non-interoperable, and there's no way to detect it. When does `finally` run? What expectations can you have around how async the callbacks are? Without a standard, every single library that uses promises needs to bring its own polyfill because you can't trust what's there. And you can't actually consume any other library's promises, because you can't trust that they behave in the way you expect them to.
And I'm not just speculating, this was reality for many years and a hell that many of us had to endure.
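The interoperability hell described above is easy to reproduce: a homegrown thenable isn't detectable as a promise, and nothing forces its callbacks to behave like the standard requires. A minimal illustration:

```javascript
// A homegrown "promise": a thenable whose callback fires synchronously,
// which spec-compliant promises are forbidden from doing.
const thenable = {
  then(onFulfilled) { onFulfilled(42); }
};

// Not detectable as a Promise:
const looksLikePromise = thenable instanceof Promise; // false

// Its callback runs before subsequent synchronous code:
const order = [];
thenable.then(() => order.push('thenable'));
order.push('sync code');
// order: ['thenable', 'sync code']

// Promise.resolve assimilates any thenable into a real, spec-behaved Promise:
const nowAPromise = Promise.resolve(thenable) instanceof Promise; // true
```

That assimilation step is exactly the universal interface the standard bought us, and it's the kind of guarantee a signals standard would be aiming for.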
One benefit of standardisation that's not tied to async/await is that JavaScript engines have been able to do performance optimisations not otherwise possible, which benefit Promise-heavy applications.
Signals are not a part of the core React API, unlike Preact.
My vague gut feeling is that signals are too much like a generalized useEffect() and would only introduce further confusion into React by muddling what happens during the render cycle. For better and worse, React takes a different tack to updates than signals do. But maybe I’m wrong about their applicability.
This looks a lot like ember data binding which becomes an imperative nightmare. Its default state is “foot gun” with tons of cognitive overhead and meta patterns to keep it from getting that way.
https://nodejs.org/en/learn/asynchronous-work/the-nodejs-eve...
// A library or framework defines effects based on other Signal primitives
declare function effect(cb: () => void): (() => void);
What library? What framework? I'm lost here. What's effect?

effect(() => element.innerText = parity.get());
How does effect know that it needs to call this lambda whenever parity gets changed? Will it call this lambda on any signal change? Why this talk about caching then? Probably not.

Anyway, I think that the signal idea is sound, if I understood correctly what the authors tried to convey. My main issue with those decoupling architectures is that once your application is complex enough, you will get lost trying to figure out why this particular event is being emitted. Ideally signals should fix this by modifying the stacktrace, so when my callback is being called, it'd already contain a stacktrace of the code which triggered that signal in the first place.
There are various libraries that export a function called effect which allows you to run arbitrary code in response to a signal update. The Preact docs have a great primer on signals and effects: https://preactjs.com/guide/v10/signals#effectfn
As I understand it, these effect functions run the callback once initially to see which signals were accessed while the callback was executing, and then call the callback again whenever the signals it depends on update. As long as signal access is synchronous and single-threaded, you know that if a signal was accessed during the callback's execution that the callback should be subscribed to those signals.
> How does effect knows that it needs to call this lambda whenever parity gets changed? Will it call this lambda on any signal change?
You can do this with getters [1], where the effect function tracks which properties of the signal were accessed in a getter method (I believe Vue historically did this in version 2), but you can also track object access using proxies [2]. The example from the proposal simply has a 'get' method that is called to access the value of the signal, and executing this method allows dependencies to be tracked.
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
[2] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
The call `parity.get()` will register a dependency on the function that is passed to `effect()`. When `parity` is updated, the function is called.
> Will it call this lambda on any signal change?
Only when a dependent signal changes.
In this case, `parity` depends on `isEven` and `isEven` depends on `counter`. So when `counter` is updated, that whole dependency chain is invalidated leading to `parity` getting invalidated, and that callback re-running.
I'm guessing this is why they don't propose to add this function to the standard: That fact makes it not very pretty.
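The counter → isEven → parity chain with its equality cut-off (the "counter goes from 2 to 4" case from the top of the thread) can be sketched like this. This is NOT the proposal's implementation; for brevity, dependencies here are listed explicitly rather than auto-tracked, and a computed re-runs only when one of its inputs actually changed value:

```javascript
function state(initial) {
  let value = initial;
  return { get: () => value, set: (next) => { value = next; } };
}

// Lazy computed: recompute on read, and only if an input changed.
function computed(fn, sources) {
  let cached, lastInputs = null, runs = 0;
  return {
    get() {
      const inputs = sources.map((s) => s.get());
      if (!lastInputs || inputs.some((v, i) => v !== lastInputs[i])) {
        lastInputs = inputs;
        cached = fn(...inputs);
        runs++;
      }
      return cached;
    },
    get runs() { return runs; },
  };
}

const counter = state(2);
const isEven = computed((c) => (c & 1) === 0, [counter]);
const parity = computed((even) => (even ? 'even' : 'odd'), [isEven]);

const first = parity.get();  // 'even' — both computeds run once
counter.set(4);
const second = parity.get(); // isEven re-runs (2 → 4) but is still true,
                             // so parity's input is unchanged: no re-run
```

The real proposal does this with graph invalidation and auto-tracked edges instead of explicit source lists, but the cut-off behavior is the same: parity's callback does not re-fire for 2 → 4.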
Basically all implementations of signals (by whatever name) build a dynamic dependency graph, where edges are established by reading nodes. In a tracking context like this hypothetical `effect`, the read also establishes an edge between the signal’s state node and the effect’s computation node—effectively subscribing the latter to subsequent writes to the former, in order to determine when to rerun the computation.
2. with signals the dependency tracking mechanism knows what values need to be recalculated and as a result the system knows which functions to call again
I love signals. I prefer them when making UIs over any other primitive (besides, perhaps, the cassowary constraint algorithm). I try to replicate them in every language I use, just for fun.
I also don't believe they belong in the Javascript language whatsoever. Let the language be for a while, people already struggle to keep up with it. TC-39 is already scaring away people from the language.
Here is the mobx version:
import { observable, computed, autorun } from 'mobx';
const counter = observable.box(0);
const isEven = computed(() => (counter.get() & 1) === 0);
const parity = computed(() => isEven.get() ? "even" : "odd");
autorun(() => {
element.innerText = parity.get();
});
// Simulate external updates to counter...
setInterval(() => counter.set(counter.get() + 1), 1000);

It's a bit like tattooing your girlfriend's name onto yourself.
It's baking a building block into the standard library, that most frameworks have converged on using.
Promises became widely used, then they got included in the standard library. This is like that.
There's a fundamental question of why modern sites/apps reach for patterns like signals, memoization, hybrid rendering patterns, etc. I wouldn't begin to claim I have all the answers, but clearly there are gaps in the platform with regards to the patterns people want to implement, and I'm not sure that jumping to signals as a standard helps better understand whether it's the platform or our mental models that need updating to get back in sync.
Personally I've found code much easier to maintain when the frontend is only responsible for state that truly is temporary and doesn't live on the back end at all. For me any persisted state belongs on the server, as does any rendering that depends on it. This largely makes signals unnecessary, very few apps have such complex temporary state that I need a complicated setup to manage it.
This feels arguably cleaner: something = "else";
Than: setSomething("else");
As for why some libraries choose the `[thing, setThing] = signal()` API (like solid.js) that's often referred to as read write segregation. Which essentially encourages the practice of passing read only values by default, and opting in to allowing consumer writes on a value by explicitly passing it's setter function. This is something that was popularized by React hooks.
Either way this proposal isn't limiting the API choice of libraries since you can create whatever kind of wrappers around the primitive that you want.
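The read/write segregation idea is small enough to sketch directly (a hypothetical `createSignal` in the shape of solid.js's API, not its actual implementation):

```javascript
// Returns a read function and a separate write function.
function createSignal(initial) {
  let value = initial;
  const read = () => value;
  const write = (next) => { value = next; };
  return [read, write];
}

const [count, setCount] = createSignal(0);

// Hand `count` alone to consumers that should only read; a consumer only
// gains write access if you explicitly pass it `setCount`.
setCount(count() + 1);
// count() === 1
```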
something = "else";
That is not observable for the primitive JS types that aren't objects and have no methods or properties or getters/setters (string, number, boolean, undefined, symbol, null).

some.thing = "else";

The `some` can be proxied and observed. Most frameworks are setting up the `some` container/proxy/object so everything can be accessed and observed consistently. Whether the framework exposes the object, a function, or hides it away in a DSL depends on the implementation.

Adding these features in-browser would seriously slow down DOM and JS and thus all websites for real. So instead we load megabytes of JS abstraction wrappers and run them in a browser to only simulate the effect.
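A minimal sketch of that container-observation idea: a bare primitive assignment can't be intercepted, but wrapping the container in a Proxy lets every property write be observed.

```javascript
const changes = [];
const some = new Proxy({ thing: 'initial' }, {
  set(target, prop, value) {
    changes.push([prop, value]); // a framework would notify subscribers here
    target[prop] = value;
    return true;
  },
});

some.thing = 'else'; // goes through the `set` trap
// changes: [['thing', 'else']]
```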
I've very much enjoyed this kind of consistency wherever is found (having a common prefix for common behaviors, in this case, setters)
Glad that didn't happen. And I think everybody else is, too. Maybe we should keep that in mind when standardizing features of frameworks.
Vue reactivity isn't compatible with Svelte, nor Angular.
As a counter example to your question, what if we all had competing implementations of the object primitive. Libraries would barely work with one another and would need an interop layer between them (just as reactivity layers do today!)
As to your counterexample, I agree that with current JavaScript that would be a problem, but with good language support it would certainly be possible. For example, Rust (and C++?) have competing implementations of the global allocator, and most users will never notice.
I struggle to see what more one could want. Do you have any specific features you think would help a massive ecosystem of packages be able to work together, when for example different packages have different Signal implementations?
Other frameworks usually struggle a lot with this. With workarounds like having to override equality functions in useMemo() or call .set after a mutation even if you pass in the same instance as before.
The hard part of signals is similar to over complicating events.
It’s hard to debug what is going on and what to fix when things go wrong.
If this is to be a base for VueJS, it should handle deep changes. They have a note about supporting Map and Set, but being able to control depth is nice in VueJS. (I'd say watch() should be deep by default, since non-deep is an optimization that can lead to accidental inconsistent state.)
Streams. Generally, I find RxJS backwards. Usually you just need "state", so that should be the easiest thing to implement. But I can't deny that the programming model is beautiful. Standardizing "state" without also considering streams seems odd to me. The "computed" creates a pipeline of updates, very similar to one you'd do with a map over a stream. If RxJS didn't already exist, I probably wouldn't have cared about this duality.
Async. Sure, signals can be synchronous, but Computed should definitely play well with async functions. This is a big shortcoming in VueJS (that people work around on their own.) That also implies handling "pending computation" gracefully for debuggability. I see there's a "computing" state, but this would have to be surfaced to be able to debug stuck promises.
Exceptions. I like the idea of .get() rethrowing exceptions from Computed. VueJS is a bit vague on that front, and just stops.
these can be implemented in userland via proxy -- and I think probably should, as is proven by this collection of utils: https://twitter.com/nullvoxpopuli/status/1772669749991739788
If we were to try implementing everything as reactive versions, there'd be no end, and implementations couldn't keep up -- by pushing reactive Map/Set/etc to userland/library land, we can implement what we need when we need it incrementally, built on the solid foundation of the signal primitives.
> since non-deep is an optimization that can lead to accidental inconsistent state.
conversely, deep-all-the-time is a performance hit that we don't want to be default. Svelte and Ember take this approach of opt-in-deep reactivity.
I don't think this needs to be a language feature, rather abstraction of existing features. In other words, can't this be a library?
I know that SolidJS is able to figure out dependent signals, but probably doing so on the first execution.
Which does make me question the mention of Svelte in the proposal, and makes me wonder what the Svelte developers think of it - because IIUC they indeed don't need this (at runtime), if I'm not mistaken.
You answered your own question:
> I know that SolidJS is able to
It already is, obviously. But how is SolidJS supposed to work with other non-SolidJS code? It can't. Unless every library builds support for every other library, they can't possibly interoperate.
Who actually writes code like this? People use some signal graph library for application code typically, I’ve never seen anyone mixing SolidJs with MobX in application code or as a consequence of a library dep.
E.g. in Rust it's much more common to rely on existing "building blocks" like futures, tokio, syn, serde even just for basic bindings, favoring interoperability.
It is explicitly mentioned in the proposal. The problem with 3rd party libraries is their interoperability and diversity of implementations, which might be unnecessary for this kind of things.
React boilerplate for this case looks so much better in my opinion, take a look
```
function Component() {
  const [counter, tick] = useReducer(st => st + 1, 0)
  useEffect(() => { const id = setInterval(tick, 1000); return () => clearInterval(id) }, [])
  return counter % 2 ? 'odd' : 'even'
}
```
Three lines, declarative, functional, noice.
> Within JS frameworks and libraries, there has been a large amount of experimentation across different ways to represent this binding, and experience has shown the power of one-way data flow in conjunction with a first-class data type representing a cell of state or computation derived from other data, now often called "Signals".
This could be a much more powerful feature if signal-dependencies were discovered statically, rather than after use in runtime.
If you’ve reached the point where you agree that the library should be standardized, why not take it even further to integrate it even more?
These are basically the only handful of consumers of this stdlib.
> JavaScript has had a fairly minimal standard library, but a trend in TC39 has been to make JS more of a "batteries-included" language, with a high-quality, built-in set of functionality available
I think the description "minimal" is fairer than "no" wrt the standard library.
JS projects exist in npm hell because people have been taught to use a library to save typing 10 characters. No standard library is going to fix that, because someone can just call in a new lib that just curries something in the standard lib with minor improvements like caching.
That's a complaint I've heard since ES3 if not earlier and the trend ever since ES2015 has been to address that so I'm genuinely wondering what you think the JS "standard library" (aka built-ins) is lacking right now.
Given the different environments JS runs in these days, I'm also okay with extending the definition of "standard library" to "things browsers should have as built-ins" or "things Node.js should offer as a built-in module".
I’m aware of much conversation around dismissing macros, often in the context of bad dev experience — but this sounds like a shallow dismissal to me.
At the end of the day, we have some of the results of macros in the JavaScript ecosystem, but rather than being supported by the language they are kicked out to transpilers and compilers.
Can anyone point me to authoritative sources discussing macros in JavaScript? I have a hard time finding deep and earnest discussion around macros by searching myself.
But more importantly, do you really want script tags on webpages defining macros that globally affect how other files are parsed/interpreted? What if the macro references an identifier that's not global? What if I define a macro in a script that loads after some other JavaScript has already run? Do macros affect eval() and the output of Function.prototype.toString?
Sure, you could scope macros to one script/module to prevent code from blowing up left and right, but now you need to repeat your macro definitions in every file you create. You could avoid that by bundling your js into one file, but now you're back to using a compiler, which makes the whole thing moot.
I don’t know why macros are approached with apprehension. As I briefly get at in my first comment, I’m aware of a lot of dismissals of macros as a tool, but those dismissals don’t make sense to me in context. I’m missing some backstory or critical mind-share tipping points in the history of the concept.
What could be a good set of sources to understand the background perspective with which TC39 members approach the concept of macros?
Smalltalk, and Lisp derived ones did just fine without them.
Modern JavaScript already has them as well, no need for what is basically a closure with a listeners list.
Im inclined to say no. Signals seem like they are a UI concept, primarily. Just use any signal lib you want. Utility libraries dont really need to understand signals.
They solve a neat subset of problems in front-end developments. But they don't solve all of them.
Adding it to JavaScript as language construct is unnecessary.
So is abstracted crap only ok if it's already in the JS standard library?
Seriously the JavaScript ecosystem is so strange..
Signals predate both and originate in KnockoutJS at least. They were popularized in recent years by SolidJS. And then adopted into Preact, Vue and others. Angular is a very late newcomer to the signals game.
Edit: and this is literally in the introduction section:
--- start quote ---
This first-class reactive value approach seems to have made its first popular appearance in open-source JavaScript web frameworks with Knockout in 2010. In the years since, many variations and implementations have been created. Within the last 3-4 years, the Signal primitive and related approaches have gained further traction, with nearly every modern JavaScript library or framework having something similar, under one name or another.
--- end quote ---
The list of libraries in the README is alphabetized, and doesn't reflect the evolution of signals in the frameworks and libraries.
No fucking thank you.
We keep adding things to the language, and never subtract anything, which means learning it as a language is getting harder and harder.
Given that almost every single framework under the sun (except React) has converged on signals, it makes sense to move that into the browser. This... this is how the web is supposed to work.
`??` is my favourite.
I also want set functions and possibly a match statement thingy.
- They introduce what is effectively a black-box system
- The new system is expected to handle all application state
- They try to push it for frontend folks while also remarking that it would be useful for build systems
This has the same red flags the xz saga had.
Have we learnt nothing.
Lots of ”users” here vouching for the pattern and hoping it gets adopted. I bet this gets some nice damage control replies because there’s social engineering going on here right now and most seem to not be aware of it.