- "The setup is noise and boilerplate heavy." Actually the signals example looks just as noisy and boilerplate heavy to me. And it introduces new boilerplate concepts which are hard for beginners to understand.
- "If the counter changes but parity does not (e.g. counter goes from 2 to 4), then we do unnecessary computation of the parity and unnecessary rendering." - Sounds like they want premature memoization.
- "What if another part of our UI just wants to render when the counter updates?" Then I agree the strawman example is probably not what you want. At that point you might want to handle the state using signals, event handling, central state store (e.g. redux-like tools), or some other method. I think this is also what they meant by "The counter state is tightly coupled to the rendering system."? Some of this document feels a little repetitive.
- "What if another part of our UI is dependent on isEven or parity alone?" Sure, you could change your entire approach because of this if that's a really central part of your app, but most often it's not. And "The render function, which is only dependent on parity must instead "know" that it actually needs to subscribe to counter." is often not an unreasonable obligation. I mean, that's one of the nice things about pure computed functions- it's easy to spot their inputs.
I think an effort to standardize signals, a concept that is increasingly used in UI development, is laudable. I don't want to get into the nitty gritty about what is too much boilerplate and whether you should build an event system or not, but since signals are something that is used in a variety of frameworks, there might be a good reason for it? And why not make an effort and standardize them over time?
For a desktop app developer that's a pretty funny statement, given that the Qt framework introduced signals and slots in the mid 90s.
I am curious how many web devs think that signals are a new concept. (I don't necessarily mean the parent poster.)
The main one is that Qt signals are, as far as I understand, a fairly static construct - as you construct the various components of the application, you also construct the reactive graph. This graph might be updated over time, but usually only when components are mounted and unmounted. JS signals, however, are built fresh every time they are executed, which makes them much more dynamic.
In addition, dependencies in JS signals are automatic rather than needing to be explicitly defined. There's no need to call a function like connect, addEventListener, or subscribe; you just call the original signal within the context of a computation, and the computation will subscribe to that signal.
Thirdly, in JS signals, you don't necessarily need to have a signal object to be able to subscribe to that signal. You can build an abstraction that doesn't necessarily expose the signal value itself, and instead provides getter functions that may call the underlying signal getter. And this same abstraction can be used both inside and outside of other reactive computations.
So on the one hand, yes, JS signals are just another reactivity tool and therefore will share features with many existing tools like signals and slots, observables, event emitters, and so on. But within that space, they also represent a meaningful difference in how that reactivity occurs and is used.
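The automatic subscription described above can be sketched with a toy implementation (illustrative only; the actual proposal's API and real libraries handle re-entrancy, cleanup, and batching that this skips):

```javascript
// Toy sketch of automatic dependency tracking. A global "current
// computation" is recorded while an effect runs; any signal read during
// that run subscribes the computation -- no connect(),
// addEventListener(), or subscribe() call needed.
let currentComputation = null;

function createSignal(initial) {
  let value = initial;
  const subscribers = new Set();
  return {
    get() {
      if (currentComputation) subscribers.add(currentComputation);
      return value;
    },
    set(next) {
      value = next;
      for (const fn of [...subscribers]) fn();
    },
  };
}

function effect(fn) {
  const run = () => {
    currentComputation = run;
    try { fn(); } finally { currentComputation = null; }
  };
  run();
}

// Reading counter inside the effect is what establishes the subscription.
const counter = createSignal(0);
const log = [];
effect(() => log.push(counter.get()));
counter.set(1);
console.log(log); // [0, 1]
```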
From my reading I understood that Qt signals & slots (and Qt events) are much more closely related to JavaScript events (native and custom).
In both you can explicitly emit, handle, listen to events/signals. JavaScript events seem to combine both Qt signals & slots and Qt events. Of course without the type safety.
For example, taken from https://doc.qt.io/qt-6/signalsandslots.html
"Signals are emitted by objects when they change their state in a way that may be interesting to other objects."
However what I think they are proposing in the article is a much more complex abstraction: they want to automate it so that whenever any part of a complex graph of states changes, every piece of code depending on that specific state gets notified, without the programmer explicitly writing code to notify other pieces of code, or doing connect() or addEventListener() etc.
What are your thoughts on that? I'd be interested to hear since I'm sure you have more experience than me.
You're basically saying you want this thing, but you don't want to have to justify it
...common usage is not really a justification for putting it into the language standard though. Glancing over the readme I'm not seeing anything that would require changes to the language syntax and can't be implemented in a regular 3rd-party library.
In a couple of years, another fancy technique will make the rounds and make signals look stupid, and then we are left with more legacy baggage in the language that can't be removed because of backwards compatibility (let C++ be a warning).
https://preactjs.com/guide/v10/signals
"In Preact, when a signal is passed down through a tree as props or context, we're only passing around references to the signal. The signal can be updated without re-rendering any components, since components see the signal and not its value. This lets us skip all of the expensive rendering work and jump immediately to any components in the tree that actually access the signal's .value property."
"Signals have a second important characteristic, which is that they track when their value is accessed and when it is updated. In Preact, accessing a signal's .value property from within a component automatically re-renders the component when that signal's value changes."
I think it makes a lot more sense in a context like that.
I have found that passing props makes React-like applications very complex and messy, and props are to be avoided as much as practical.
The mechanism for avoiding props is Custom events.
It concerns me to see the concept of signals being passed as props when surely signals/events should be removing the need for props?
React signals have become my go to state management tool. So easy to use and very flexible.
In my experience, the big benefit is the ability to make reactive state modular. In an imperative style, additional state is needed to track changes. Modularity is achieved using abstraction. Only use when needed.
> Sounds like they want premature memoization
It's a balance to present a simple example that is still applicable. Cases where reactivity has a clear benefit tend to be more complex, which makes them harder to demonstrate than a simple, less applicable example.
You can do it that way, but… why? When you could just not?
Even if that were true for this example, the signal-based model grows linearly in complexity and overhead with the number of derived nodes. The callback-based version is superlinear in complexity because you have an undefined/unpredictable evaluation order for callbacks, producing a combinatorial explosion of possible side effect traces. It also scales less efficiently because you could potentially run side effects and updates multiple times, where the signal version makes additional guarantees that can prevent this.
counter.set(counter.get() + 1)
One would think that proper integration into the language also means getting rid of those "noisy" setter/getter calls.

In practice, I can count on two hands the number of times I’ve written `new Promise`. What did happen, though, is I started to write `.then` a whole lot more, especially when working with third party libraries.
In the end, the actual day-to-day effect of the Promise addition to JavaScript was it gave me a fairly simple, usually solid, and mostly universal interface to a wide variety of special behaviors and capabilities provided by third party libraries. Whether it’s a file read or an api request or a build step output, I know I can write `.then(res => …)` and I’m already 50% of the way to something workable.
If this Signal proposal can do something similar for me when it comes to the Cambrian explosion of reactive UI frameworks, I am in favor! What’s more, maybe it will even help take reactivity beyond UI; I’ve often daydreamed about some kind of incrementally re-computed state tree for things other than UI state.
While .then in early promises was a great improvement over nested delegates and is fine for simple chained promises, once you start conditionally chaining different promises, or need different error handling for particular chains in the promise, or want to do an early return, the code can become much harder to read and work with.
With async/await though you just write the call essentially as if it’s not a promise and can easily put try/catch around particular promise calls, easily have early returns, etc.
However you do quite often need to use Promise.all, even when using async/await.
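A small sketch of that pattern (the endpoint names and `fetchJson` helper here are hypothetical stand-ins for real requests):

```javascript
// Hypothetical stand-in for a real network call.
function fetchJson(url) {
  return new Promise((resolve) => setTimeout(() => resolve({ url }), 10));
}

async function loadDashboard() {
  // try/catch reads like synchronous error handling, while Promise.all
  // keeps the two requests concurrent and rejects fast if either fails.
  try {
    const [user, posts] = await Promise.all([
      fetchJson('/api/user'),
      fetchJson('/api/posts'),
    ]);
    return { user, posts };
  } catch (err) {
    return { error: String(err) };
  }
}
```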
function sleep(ms) {
return new Promise((resolve) => setTimeout(resolve, ms));
}
await sleep(100);
There are some use cases for the Promise class still, and it’s great to have that kind of control facility at hand when you need it.

Thinking that the current crop of JS UI libraries designed their signals in such a good way that it needs to become a part of the language is hubris. Signals have many possible implementations with different tradeoffs, and none of them deserve to have a special place in the JavaScript spec.
Before these libraries used signals, they or their predecessors used virtual DOM. Luckily, that didn't become part of JS, but how are signals any different? They aren't. The argument for making them standard is even worse than for virtual DOM.
Are we just going to pile every fad into a runtime that basically has no way to dispose of no-longer-wanted features without breaking the web? That is quite short sighted.
Reactive UI won. The main thing stopping me from using vanilla JS is the absolute explosion in complexity managing state for even small sized applications. To me, any reactive framework is better than vanilla, so perhaps there is a construct missing? Now that it’s been a decade or so, we should start thinking about possible cut-points for standardization. Like with promises, this could bring down complexity for extremely common use-cases, if done right.
I think a better way to evaluate it would be: “would this proposal be used by existing reactive frameworks?”. If not, why? What’s missing? What’s superfluous? What about lessons from UI and reactivity in other languages? There’s a lot of fragmented experience to distill, but it’s a worthwhile endeavor imo.
But that's not what you're doing. You're gunning for becoming the standard from the start – you are trying to convince people to use your draft implementation based on its status as a proposed standard, instead of them using it on its own technical merits.
Ditch the status, and see if anyone still wants it.
--
Yes, reactive UI won – just fine, without having signals in the JS standard. Because not having signals in the language was never an actual problem holding back reactive UI development in JS.
Your proposal does not "bring down complexity". It simply moves the complexity from UI libraries into the JS standard. In doing that, you forcefully marry the ecosystem to that particular style of complexity. Every browser vendor will need to implement it and support it... for how many decades?
And to what end? Unlike e.g. promises that are useful on their own, your proposal isn't nearly ergonomic enough to allow building reactive UIs in vanilla JS. Users will still need to use libraries for that, just like they do today. You're just moving one piece of such libraries into the standard, without building the case for why it's needed there.
--
Your proposal spends pages selling the readers on signals, but that is not what you need to sell. We already have many implementations of signals. You need to sell why your (or any) signals implementation needs to be in the JavaScript standard.
You have one tiny "Benefits of a standard library" subsection under "Secondary benefits" but it's just ridiculous. You're basically saying that we should add signals to JS because we've added (much simpler or more needed) things to JS before – is that really your best argument?
And... "saving bundle size"? You want to bless one implementation of a complex problem to save what, 5KB of this: https://cdn.jsdelivr.net/npm/s-js
Sorry, just – nothing about this makes sense to me.
The main benefit is interop. Same with promises. You can implement all of promises with custom callbacks - in fact it’s trivial. But competing implementations don’t typically land on API compatibility simply because they’re solving the same problem. That causes a fractured ecosystem. Maybe interop could be important with signals? I think they should argue that, if so!
> Users will still need to use libraries for that, just like they do today.
Yes? But you reduce the lifting done by the libs - ideally enabling a class of vanilla use-cases which can be demonstrably improved. You could say querySelector was unnecessary because you can do it in a lib. Or filter, or map. Standardization can cover std-lib-like features too, no?
Doesn’t mean I am in favor. I think you should always default to no unless strong and consistent proven benefits. But why not have good faith arguments for what problems this will or won’t solve. For instance, if hypothetically let’s say react or svelte has a different model that cannot possibly use these signals, then that’s probably a sign it’s not good. My philosophy with proposals is balancing the curiosity and honest inquiry with a grumpy defensive inquisition before saying aye. Flaming though is really not helpful.
> You're basically saying that we should add signals to JS because we've added (much simpler or more needed) things to JS before
> saving bundle size
Yes, I agree these are weak arguments.
I think that the Promise API was not the actual thing people directly wanted (and on its own had no compelling reason to be added to the base library), but a standardised Promise API was needed in order to add the hugely useful async/await keywords which are unachievable without changes to the language.
I am however a big fan of a really good base library (it’s one of the things I love about working with .NET), but they should be focussing on functionality with the broadest reach (as in most encountered by average JS devs working on day to day tasks), e.g. things like better tools for working with dates and times.
That is almost literally exactly what happened: most major JS frameworks (except React) converged on Signals.
> Because not having signals in the language was never an actual problem holding back reactive UI development in JS.
Oh, but it did hold back reactive development. There are many limitations on current implementations of signals precisely because there's no proper support for many things in the language.
> You need to sell why your (or any) signals implementation needs to be in the JavaScript standard.
That is why it is:
- a proposal that
- calls for input from implementers, users, library developers etc.
> Unlike e.g. promises that are useful on their own,
But the exact same thing happened with promises: everyone had their implementation, there was no need for a proposal to add that specific API to the standard library. Deferred (the precursor to Promises) existed for several years before Promises. Here's the full history: https://samsaccone.com/posts/history-of-promises.html
And yet, 15 years later here we are
a = () => 42
b = () => a() - 1
c = () => a() + b() * 2
It isn't a bigger nightmare than debugging pure functions. The source for `c` is `a` and `b`. All signal values (as proposed) will be lexically available in the body of a dependent signal, so there's no hidden registry to navigate anyway. If in-browser IDEs want to record a call tree for an activation record, they can do that without a standard.

When watch fires out of control, you need debugging tools to understand why your render() function is being invoked more often than it should.
This type of problem happens all the time in react and you need to trace upwards to find that 7 components up the chain someone accidentally included new Date() in the state that propagates down through props and re-renders everything.
You can argue about the need for this, but if we're going to extend the standard lib, then looking at what is popular is a good approach IMO.
> Before these libraries used signals, they or their predecessors used virtual DOM
Signals are not a replacement for virtual DOM.
And there's the new one which seems to be getting implemented in node right now: https://github.com/WICG/observable
window.dispatchEvent(new Event('counterChange'));
And every part of the application that wants to react to it can subscribe via window.addEventListener('counterChange', () => {
... do something ...
});
Anything wrong with that?

Event handling gets messy very easily. If you want to get deeper into it, have a look at event bubbling and propagation.
Large applications need robust event handling. This is the nowadays-hidden benefit of frameworks like Angular, Vue etc.
Believe me, you don’t want to use the standard event handling API without a framework. Adding, deleting, cloning, firing, removing, fire once etc on many elements can have serious unwanted side effects.
In this example, the event is on the Window. There is no bubbling. It is already at the top level.
>Believe me, you don’t want to use the standard event handling API without a framework. Adding, deleting, cloning, firing, removing, fire once etc on many elements can have serious unwanted side effects.
I don't know what this means. The frameworks do not have much to do with this topic.
Frameworks batch changes so that updates are efficient, and in many cases figure out the "correct" order of doing things. If you do all of that yourself in a large and complex UI, very likely you are updating the DOM less efficiently than what frameworks are doing, and very likely you have introduced some subtle bugs. Speaking from my first-hand experience.
The difference with signals is that the resulting value is only ever calculated when the end consumer reads the value, so you schedule render updates asynchronously from the actual writes to the signal, and whatever chain of computations the watchers perform is done just the one time during the render.
Interim values sent to the signal will get lost, so you really can't do too much interesting work in them. It's really just a fancy abstraction layer to coordinate a rendering cycle.
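That pull-based behavior can be sketched with a toy dirty-flag implementation (illustrative names, not the proposal's API): writes only mark dependents dirty, and the computation runs once when a consumer actually reads.

```javascript
// Toy pull-based signal. Writes only mark dependents dirty -- the
// computation runs when a consumer reads, so interim values are never
// computed at all.
function state(initial) {
  let value = initial;
  const dirtyDependents = [];
  return {
    dirtyDependents,
    get: () => value,
    set(next) {
      value = next;
      for (const d of dirtyDependents) d.dirty = true;
    },
  };
}

function computed(fn, sources) {
  const node = {
    dirty: true,
    cached: undefined,
    get() {
      if (this.dirty) {
        this.cached = fn();
        this.dirty = false;
      }
      return this.cached;
    },
  };
  for (const s of sources) s.dirtyDependents.push(node);
  return node;
}

const counter = state(0);
let runs = 0;
const parity = computed(() => {
  runs++;
  return counter.get() % 2 ? 'odd' : 'even';
}, [counter]);

counter.set(1);
counter.set(2);
counter.set(3);            // three writes, zero computations so far
console.log(parity.get()); // 'odd' -- computed once; interim parities were skipped
console.log(runs);         // 1
```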
It can also be more performant, e.g., say you have a computation that depends on 2 values:
`result = a ? b : 0`
Then if a is falsy, we don't need to recompute if b changes. This is achieved automatically with signals, but would require quite some code with classic pub/sub.
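A toy version of that dynamic-dependency behavior (hand-rolled for illustration; real signal libraries are more careful): dependencies are re-collected on every run, so while `a` is falsy the effect never reads `b` and therefore never subscribes to it.

```javascript
// Toy sketch of dynamic dependency tracking.
let active = null;

function signal(initial) {
  let value = initial;
  const subs = new Set();
  return {
    get() {
      if (active) {
        subs.add(active);
        active.deps.add(subs); // remember for later cleanup
      }
      return value;
    },
    set(next) {
      value = next;
      for (const fn of [...subs]) fn();
    },
  };
}

function effect(fn) {
  const run = () => {
    // Drop stale subscriptions before re-collecting dependencies.
    for (const subs of run.deps) subs.delete(run);
    run.deps.clear();
    active = run;
    try { fn(); } finally { active = null; }
  };
  run.deps = new Set();
  run();
}

const a = signal(1);
const b = signal(10);
let computations = 0;
let result;
effect(() => {
  computations++;
  result = a.get() ? b.get() : 0;
});

b.set(20); // a is truthy, so b is a dependency: recomputes
a.set(0);  // recomputes; b is no longer read this run
b.set(30); // no recomputation -- b was not a dependency last run
console.log(computations); // 3
```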
And it's hard to ensure that all listeners don't cause that trigger cascade.
In fact, the GoF spends an entire paragraph on the problem of complex update semantics in "Design Patterns" (1995), ch. Observer, p. 299.
So, while it is a real problem, it's one that has been solved (for at least 29 years)
It has all the downsides of the pub/sub architecture highlighted in the proposal.
I might not be describing that well, because once you go down that road it really becomes a whole overall approach that infects the whole program (like functional reactive programming), and so it's really about how the whole flow fits together from top to bottom, and that can be very elegant.
I don't think that's the right fit for everything, i.e. in gamedev it might make more sense to just update some object's position imperatively, but for UI it tends to work pretty well.
window.addEventListener('counterChange', () => {
element.innerHTML = components.map((c) => c.renderHTML(newCounterValue)).join('');
});
or, you have to check the components on a case by case basis for a. whether the component needs to be updated or not, and b. what the most efficient way to update that component is. And in TFA, if counter changes from odd to odd, the label doesn't need to be updated.

Also, multiplying the number of events by the number of components can make the application get out of hand very quickly.
An alternative proposal would be some kind of listener that is automatically removed when it goes out of context.
Best case scenario, it just slows down garbage collection a little bit, as you're holding onto a lot of references that aren't going anywhere.
On the other hand, I recall a bug in a particular version of AngularJS where component DOM nodes wouldn't get cleaned up when navigating with the router unless you manually set all of the scope values their templates used to null.
We had a data dense application with tables and what not, and you could clearly watch memory jump tens or more megabytes flipping between pages in the chrome dev tooling.
Eventually (this was a SPA intended to be used for upwards of hours at a time) the page would crash.
More often than not that’s due to a memory leak.
Well, don't events only bubble upwards? You need to know the exact element if it is not on a lower level in the DOM tree.
Events were too messy, so I wrote a small pub/sub message queue type of thing. Anyone anywhere in the DOM can subscribe to messages based on subject regexes.
Makes things a lot easier, especially when I added web components to wrap existing elements so that publishing and subscribing is done with attributes, not js.
Only if you have a reference to the object.
The reason I made up my own message queue pub/sub is because events required a lot of complexity in acquiring the correct reference to the correct object or subtree in the DOM.
With a pub/sub type message queue, any element can emit a message with subject "POST /getnextpage", and the function listening for (subscribed to) POST messages emits a RESPONSE message when the response comes back. This lets a third element listen (subscribe) for a "RESPONSE FROM /getnextpage" subject message, and handle it.
None of the 3 parties in the above need to have a reference to each other, nor do they even have to know about each other, and I can inject some nice observability tools because I can just add a subscriber for specific message types, which makes debugging a breeze.
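A toy version of such a bus (the API names and subject strings are made up; the commenter's actual implementation isn't shown):

```javascript
// Toy regex-subject message bus.
function createBus() {
  const subscribers = [];
  return {
    subscribe(pattern, handler) {
      subscribers.push({ pattern, handler });
    },
    publish(subject, payload) {
      for (const { pattern, handler } of subscribers) {
        if (pattern.test(subject)) handler(subject, payload);
      }
    },
  };
}

// Three parties coordinate without holding references to each other.
const bus = createBus();
const seen = [];
bus.subscribe(/^POST /, () => {
  // Pretend the request completed and answer on the bus.
  bus.publish('RESPONSE FROM /getnextpage', { rows: 3 });
});
bus.subscribe(/^RESPONSE FROM \/getnextpage$/, (subject, payload) => {
  seen.push(payload.rows);
});
bus.publish('POST /getnextpage');
console.log(seen); // [3]
```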
It only works in browser environments.
See, for instance, https://www.electronjs.org/docs/latest/api/ipc-renderer
Unfortunately, modern frameworks really want to use their own event channels, which makes hooking a pain.
Welcome to Node.js v21.6.2.
Type ".help" for more information.
> window.dispatchEvent(new Event('counterChange'))
Uncaught ReferenceError: window is not defined

Sure, it requires a bit of discipline, but it’s vastly simpler to me than whatever solution comes up every few years (Backbone, Knockout, Angular, React, modifying the language itself, etc). There must be something profoundly different with the way I think.
It even expresses itself in the function naming. They call updating innerText “render”. You’re not rendering anything. At most, the browser is, but so is everything else it does related to painting. It feels like a desperate attempt to complicate what is one of the simplest DOM functions. It really baffles me.
Once it gets more complex, it is not easy.
I know some guys who over the years wrote their own framework. It works great… for them.
If you are working on your own, or only create small web apps, sure, you can avoid frameworks in some cases.
What do you mean by "web apps" here? My memory of the web in 1999 was that the only rich web UX was in Java applets
But I don't think React necessary for _every_ app and it really depends on what kind of apps you are making.
Certainly you can do the original style of app where the templating is on the server and any js is just to hook up already existing nodes. The js community has more or less moved away from that "rails" style of app years ago…
I strongly suspect that in ages past you would have been an ASM programmer shaking your fist at those crazy portable C programmers, and then a C programmer shaking his fist angrily at those crazy memory safe Java programmers.
Progress in programming can be marked by eliminating the need for ceremonial kinds of strict discipline needed to achieve good results.
Which isn't to say React is some kind of next evolution, but signals certainly are a step in the right direction.
Keeping the DOM in sync with a data state isn’t too difficult but doing so in a highly performant (60 fps) way _is_ tremendously difficult. Especially when it comes to creating APIs that aren’t leaky but also not too cumbersome.
It would frankly be easier to just paint pixels to a canvas game-style, than translating changes to a living DOM tree.
The point of the original comment is that DOM updates should be fast, efficient and avoid visible delays to user's eye, which is not easy. If you are not careful, small changes in a large application could lead to too many DOM updates -- that is where frameworks shine.
And canvas is not the solution to everything and has its own problems. To begin with, accessibility.
> The current draft is based on design input from the authors/maintainers of Angular, Bubble, Ember, FAST, MobX, Preact, Qwik, RxJS, Solid, Starbeam, Svelte, Vue, Wiz, and more…
Would be interested what existing library authors think of this proposal. Interesting that React is not in that list
Signals are a bit like channels, except they're broadcast instead of single-receiver. It'd be neat if this could somehow be leveraged to allow web workers to communicate with channels instead of onMessage callbacks. Specifically, being able to `select` over signals/channels/promises like in Go would offer a syntactic benefit over having to manage multiple concurrent messaging mechanisms with callbacks (maybe by allowing signals to be included in `Promise.any`).
Hard disagree.
`x instanceof Promise` simply doesn't work. If my library has a then method that accepts a catch callback and yours doesn't, they're silently non-interoperable, and there's no way to detect it. When does `finally` run? What expectations can you have around how async the callbacks are? Without a standard, every single library that uses promises needs to bring its own polyfill because you can't trust what's there. And you can't actually consume any other library's promises, because you can't trust that they behave in the way you expect them to.
And I'm not just speculating, this was reality for many years and a hell that many of us had to endure.
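A contrived sketch of that hazard, with two hypothetical pre-standard "promise" libraries whose objects look alike but schedule callbacks differently:

```javascript
// Two hypothetical "promise" libraries with incompatible timing models.
const libA = {
  resolved(v) { return { then(onOk) { onOk(v); } }; },                   // synchronous
};
const libB = {
  resolved(v) { return { then(onOk) { setTimeout(() => onOk(v)); } }; }, // always async
};

// Neither object is `instanceof Promise`, and code written against one
// timing model silently misbehaves with the other.
const order = [];
libA.resolved(1).then(() => order.push('A'));
libB.resolved(2).then(() => order.push('B'));
order.push('sync');
console.log(order); // ['A', 'sync'] -- 'B' only arrives on a later tick
```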
One benefit of standardisation that's not tied to async/await is that the JavaScript engines have been able to do performance optimisations not otherwise possible, which benefit Promise-heavy applications.
Signals are not a part of the core React API, unlike Preact.
My vague gut feeling is that signals are too much like a generalized useEffect() and would only introduce further confusion into React by muddling what happens during the render cycle. For better and worse, React takes a different tack to updates than signals do. But maybe I’m wrong about their applicability.
function One(props) {
const doubleCount = props.count * 2;
return <div>Count: {doubleCount}</div>;
}
function Two(props) {
return <div>Count: {props.count * 2}</div>;
}
It honestly made me wonder whether the article was dated April 1 and I’d been had.

More generously, JS framework design is hard. If you’re ambitious at all, you end up fighting the language, and your runtime paradigms will hang like ill-fitting clothes on its syntax. The One/Two example above shows how easily expectations break in this world of extensions to extensions. There’s no way to know what an apparently simple piece of code will actually do without knowing the specifics of a given framework.
This looks a lot like ember data binding which becomes an imperative nightmare. Its default state is “foot gun” with tons of cognitive overhead and meta patterns to keep it from getting that way.
https://nodejs.org/en/learn/asynchronous-work/the-nodejs-eve...
// A library or framework defines effects based on other Signal primitives
declare function effect(cb: () => void): (() => void);
What library? What framework? I'm lost here. What's effect?

effect(() => element.innerText = parity.get());
How does effect know that it needs to call this lambda whenever parity gets changed? Will it call this lambda on any signal change? Why all this talk about caching then? Probably not.

Anyway, I think the signal idea is sound, if I understood correctly what the authors tried to convey. My main issue with these decoupling architectures is that once your application is complex enough, you will get lost trying to figure out why this particular event is being emitted. Ideally signals should fix this by modifying the stacktrace, so when my callback is called, it would already contain a stacktrace of the code which triggered that signal in the first place.
There are various libraries that export a function called effect which allows you to run arbitrary code in response to a signal update. The Preact docs have a great primer on signals and effects: https://preactjs.com/guide/v10/signals#effectfn
As I understand it, these effect functions run the callback once initially to see which signals were accessed while the callback was executing, and then call the callback again whenever the signals it depends on update. As long as signal access is synchronous and single-threaded, you know that if a signal was accessed during the callback's execution that the callback should be subscribed to those signals.
> How does effect knows that it needs to call this lambda whenever parity gets changed? Will it call this lambda on any signal change?
You can do this with getters [1], where the effect function tracks which properties of the signal were accessed in a getter method (I believe Vue historically did this in version 2), but you can also track object access using proxies [2]. The example from the proposal simply has a 'get' method that is called to access the value of the signal, and executing this method allows dependencies to be tracked.
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
[2] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
The call `parity.get()` will register a dependency on the function that is passed to `effect()`. When `parity` is updated, the function is called.
> Will it call this lambda on any signal change?
Only when a dependent signal changes.
In this case, `parity` depends on `isEven` and `isEven` depends on `counter`. So when `counter` is updated, that whole dependency chain is invalidated leading to `parity` getting invalidated, and that callback re-running.
I'm guessing this is why they don't propose to add this function to the standard: That fact makes it not very pretty.
Basically all implementations of signals (by whatever name) build a dynamic dependency graph, where edges are established by reading nodes. In a tracking context like this hypothetical `effect`, the read also establishes an edge between the signal’s state node and the effect’s computation node—effectively subscribing the latter to subsequent writes to the former, in order to determine when to rerun the computation.
2. with signals the dependency tracking mechanism knows what values need to be recalculated and as a result the system knows which functions to call again
I love signals. I prefer them when making UIs over any other primitive (besides, perhaps, the cassowary constraint algorithm). I try to replicate them in every language I use, just for fun.
I also don't believe they belong in the Javascript language whatsoever. Let the language be for a while, people already struggle to keep up with it. TC-39 is already scaring away people from the language.
Here is the mobx version:

```
import { observable, computed, autorun } from 'mobx';

const counter = observable.box(0);
const isEven = computed(() => (counter.get() & 1) === 0);
const parity = computed(() => isEven.get() ? "even" : "odd");

autorun(() => {
  element.innerText = parity.get();
});

// Simulate external updates to counter...
setInterval(() => counter.set(counter.get() + 1), 1000);
```

It's a bit like tattooing your girlfriend's name onto yourself.
It's baking a building block into the standard library, that most frameworks have converged on using.
Promises became widely used, then they got included in the standard library. This is like that.
There's a fundamental question of why modern sites/apps reach for patterns like signals, memoization, hybrid rendering patterns, etc. I wouldn't begin to claim I have all the answers, but clearly there are gaps in the platform with regard to the patterns people want to implement, and I'm not sure that jumping to signals as a standard helps us understand whether it's the platform or our mental models that need updating to get back in sync.
Personally I've found code much easier to maintain when the frontend is only responsible for state that truly is temporary and doesn't live on the back end at all. For me any persisted state belongs on the server, as does any rendering that depends on it. This largely makes signals unnecessary, very few apps have such complex temporary state that I need a complicated setup to manage it.
This feels arguably cleaner: something = "else";
Than: setSomething("else");
As for why some libraries choose the `[thing, setThing] = signal()` API (like Solid.js): that's often referred to as read/write segregation, which essentially encourages the practice of passing read-only values by default, and opting in to allowing consumer writes on a value by explicitly passing its setter function. This is something that was popularized by React hooks.
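A tiny sketch of what that API shape buys you (names are illustrative; there's no reactivity here, just the read/write split):

```
// Hypothetical sketch: wrapping a single mutable box into a Solid-style
// [read, write] pair, so the reader can be passed around freely without
// granting write access.
function createSignal(initial) {
  let value = initial;
  const read = () => value;                  // read-only accessor
  const write = (next) => { value = next; }; // explicit, separate setter
  return [read, write];
}

const [thing, setThing] = createSignal("something");
setThing("else");
console.log(thing()); // "else"
```

A component that only receives `thing` simply cannot write to the state; handing out `setThing` is an explicit, visible decision.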
Either way this proposal isn't limiting the API choice of libraries since you can create whatever kind of wrappers around the primitive that you want.
something = "else";
That is not observable for the primitive JS types that aren't objects and have no methods, properties, or getters/setters (string, number, boolean, undefined, symbol, null).

some.thing = "else";

The `some` can be proxied and observed. Most frameworks set up the `some` container/proxy/object so everything can be accessed and observed consistently. Whether the framework exposes the object, a function, or hides it away in a DSL depends on the implementation.

Adding these features in-browser would seriously slow down DOM and JS and thus all websites for real. So instead we load megabytes of JS abstraction wrappers and run them in a browser to only simulate the effect.
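For illustration, here is the container approach with a plain `Proxy`: the bare assignment `something = "else"` cannot be intercepted, but a write through a container object can.

```
// Sketch: observing property writes with a Proxy `set` trap.
const changes = [];

const some = new Proxy({ thing: "something" }, {
  set(target, prop, value, receiver) {
    changes.push(`${String(prop)} = ${value}`);     // notify observers
    return Reflect.set(target, prop, value, receiver); // apply the write
  },
});

some.thing = "else";     // the trap fires before the write lands
console.log(some.thing); // "else"
console.log(changes);    // ["thing = else"]
```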
I've very much enjoyed this kind of consistency wherever it is found (having a common prefix for common behaviors, in this case setters).
Glad that didn't happen. And I think everybody else is, too. Maybe we should keep that in mind when standardizing features of frameworks.
Vue reactivity isn't compatible with Svelte, nor Angular.
As a counterexample to your question, what if we all had competing implementations of the object primitive? Libraries would barely work with one another and would need an interop layer between them (just as reactivity layers do today!)
As to your counterexample, I agree that this would be a problem with current JavaScript, but with good language support it would certainly be possible. For example, Rust (and C++?) have competing implementations of the global allocator, and most users will never notice.
As for polymorphism, even the current class syntax largely operates in the same way as the original prototypal inheritance mechanism, with a few exceptions in constructor behavior to support subclassing certain built-in objects.
You can pretty easily create run-time traits: like functions with prototypes, the class construct is an expression whose resulting value can be passed around, mutated, etc.
For example, you can write a function that takes a class as an argument, and returns a new class that extends it.
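That pattern is usually called a mixin; a short sketch (the `Serializable` name is just for illustration):

```
// A function that takes a class and returns a new class extending it.
const Serializable = (Base) => class extends Base {
  toJSON() {
    return JSON.stringify({ ...this }); // serialize own enumerable fields
  }
};

class Point {
  constructor(x, y) { this.x = x; this.y = y; }
}

const SerializablePoint = Serializable(Point);
const p = new SerializablePoint(1, 2);
console.log(p.toJSON()); // '{"x":1,"y":2}'
```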
I struggle to see what more one could want. Do you have any specific features you think would help a massive ecosystem of packages be able to work together, when for example different packages have different Signal implementations?
Other frameworks usually struggle a lot with this. With workarounds like having to override equality functions in useMemo() or call .set after a mutation even if you pass in the same instance as before.
The hard part of signals is similar to overcomplicating events: it's hard to debug what is going on and what to fix when things go wrong.
If this is to be a base for VueJS, it should handle deep changes. They have a note about supporting Map and Set, but being able to control depth is nice in VueJS. (I'd say watch() should be deep by default, since non-deep is an optimization that can lead to accidental inconsistent state.)
Streams. Generally, I find RxJS backwards. Usually you just need "state", so that should be the easiest thing to implement. But I can't deny that the programming model is beautiful. Standardizing "state" without also considering streams seems odd to me. The "computed" creates a pipeline of updates, very similar to one you'd do with a map over a stream. If RxJS didn't already exist, I probably wouldn't have cared about this duality.
Async. Sure, signals can be synchronous, but Computed should definitely play well with async functions. This is a big shortcoming in VueJS (that people work around on their own.) That also implies handling "pending computation" gracefully for debuggability. I see there's a "computing" state, but this would have to be surfaced to be able to debug stuck promises.
Exceptions. I like the idea of .get() rethrowing exceptions from Computed. VueJS is a bit vague on that front, and just stops.
these can be implemented in userland via proxy -- and I think probably should, as is proven by this collection of utils: https://twitter.com/nullvoxpopuli/status/1772669749991739788
If we were to try implementing everything as reactive versions, there'd be no end, and implementations couldn't keep up -- by pushing reactive Map/Set/etc to userland/library land, we can implement what we need when we need it, incrementally, built on the solid foundation of the signal primitives.
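One way such a userland reactive Map can be built with a Proxy (names here are illustrative, not from any library; note the binding of method calls back to the real Map, which a naive Proxy over a Map gets wrong):

```
// Wrap a Map so mutating methods fire a change callback.
function reactiveMap(map, onChange) {
  const mutators = new Set(["set", "delete", "clear"]);
  return new Proxy(map, {
    get(target, prop) {
      const value = Reflect.get(target, prop, target); // receiver = real Map
      if (typeof value !== "function") return value;   // e.g. .size
      return (...args) => {
        const result = value.apply(target, args); // call with Map as `this`
        if (mutators.has(prop)) onChange(prop);   // notify on mutation only
        return result;
      };
    },
  });
}

let notified = 0;
const m = reactiveMap(new Map(), () => notified++);
m.set("a", 1); // triggers onChange
m.get("a");    // reads do not
console.log(notified, m.get("a")); // 1 1
```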
> since non-deep is an optimization that can lead to accidental inconsistent state.
conversely, deep-all-the-time is a performance hit that we don't want to be the default. Svelte and Ember take this approach of opt-in deep reactivity.
I don't think this needs to be a language feature, rather abstraction of existing features. In other words, can't this be a library?
I know that SolidJS is able to figure out dependent signals, but probably doing so on the first execution.
Which does make me question the mention of Svelte in the proposal, and makes me wonder what the Svelte developers think of it - because IIUC they indeed don't need this (at runtime), if I'm not mistaken.
You answered your own question:
> I know that SolidJS is able to
It already is, obviously. But how is SolidJS supposed to work with other non-SolidJS code? It can't. Unless every library builds support for every other library, they can't possibly interoperate.
Who actually writes code like this? People use some signal graph library for application code typically, I’ve never seen anyone mixing SolidJs with MobX in application code or as a consequence of a library dep.
All of that goes away when the plumbing for handling state is standardized by the runtime.
https://github.com/preactjs/preact/blob/757746a915d186a90954...
E.g. in Rust it's much more common to rely on existing "building blocks" like futures, tokio, syn, serde even just for basic bindings, favoring interoperability.
Rust will have the same thing happen in another decade or so. Its prevalence will grow the community, which will grow the ecosystem, and the positive feedback loop will mean that there's a huge amount of choice. That's a good thing, but it will have downsides, like decreased interoperability (unless the language evolves). JavaScript was cursed for a very long time by leadership that was essentially asleep at the wheel.
It is explicitly mentioned in the proposal. The problem with 3rd-party libraries is their interoperability and diversity of implementations, which might be unnecessary for this kind of thing.
React boilerplate for this case looks so much better in my opinion, take a look
```
function Component() {
  const [counter, tick] = useReducer(st => st + 1, 0)
  useEffect(() => {
    const id = setInterval(tick, 1000)
    return () => clearInterval(id) // clean up so intervals don't pile up
  }, [])
  return counter % 2 ? 'odd' : 'even'
}
```
A few lines, declarative, functional, noice.
> Within JS frameworks and libraries, there has been a large amount of experimentation across different ways to represent this binding, and experience has shown the power of one-way data flow in conjunction with a first-class data type representing a cell of state or computation derived from other data, now often called "Signals".
This could be a much more powerful feature if signal-dependencies were discovered statically, rather than after use in runtime.
If you’ve reached the point where you agree that the library should be standardized, why not take it even further to integrate it even more?
This is primarily the only handful of consumers of this stdlib.
> JavaScript has had a fairly minimal standard library, but a trend in TC39 has been to make JS more of a "batteries-included" language, with a high-quality, built-in set of functionality available
I think the description "minimal" is fairer than "no" wrt the standard library.
JS projects exist in npm hell because people have been taught to use a library to save typing 10 characters. No standard library is going to fix that, because someone can just call in a new lib that curries something in the standard lib with minor improvements like caching.
That's a complaint I've heard since ES3 if not earlier and the trend ever since ES2015 has been to address that so I'm genuinely wondering what you think the JS "standard library" (aka built-ins) is lacking right now.
Given the different environments JS runs in these days, I'm also okay with extending the definition of "standard library" to "things browsers should have as built-ins" or "things Node.js should offer as a built-in module".
I’m aware of much conversation around dismissing macros, often in the context of bad dev experience — but this sounds like a shallow dismissal to me.
At the end of the day, we have some of the results of macros in the JavaScript ecosystem, but rather than being supported by the language they are kicked out to transpilers and compilers.
Can anyone point me to authoritative sources discussing macros in JavaScript? I have a hard time finding deep and earnest discussion around macros by searching myself.
But more importantly, do you really want script tags on webpages defining macros that globally affect how other files are parsed/interpreted? What if the macro references an identifier that's not global? What if I define a macro in a script that loads after some other JavaScript has already run? Do macros affect eval() and the output of Function.prototype.toString?
Sure, you could scope macros to one script/module to prevent code from blowing up left and right, but now you need to repeat your macro definitions in every file you create. You could avoid that by bundling your js into one file, but now you're back to using a compiler, which makes the whole thing moot.
I don’t know why macros are approached with apprehension. As I briefly get at in my first comment, I’m aware of a lot of dismissals of macros as a tool, but those dismissals don’t make sense to me in context. I’m missing some backstory or critical mind-share tipping points in the history of the concept.
What could be a good set of sources to understand the background perspective with which TC39 members approach the concept of macros?
Smalltalk and Lisp-derived ones did just fine without them.
Modern JavaScript already has them as well, no need for what is basically a closure with a listeners list.
I'm inclined to say no. Signals seem like they are primarily a UI concept. Just use any signal lib you want. Utility libraries don't really need to understand signals.
They solve a neat subset of problems in front-end developments. But they don't solve all of them.
Adding it to JavaScript as language construct is unnecessary.
So is abstracted crap only ok if it's already in the JS standard library?
Seriously the JavaScript ecosystem is so strange..
Signals predate both and originate in KnockoutJS at least. They were popularized in recent years by SolidJS. And then adopted into Preact, Vue and others. Angular is a very late newcomer to the signals game.
Edit: and this is literally in the introduction section:
--- start quote ---
This first-class reactive value approach seems to have made its first popular appearance in open-source JavaScript web frameworks with Knockout in 2010. In the years since, many variations and implementations have been created. Within the last 3-4 years, the Signal primitive and related approaches have gained further traction, with nearly every modern JavaScript library or framework having something similar, under one name or another.
--- end quote ---
The list of libraries in the README is alphabetized, and doesn't reflect the evolution of signals in the frameworks and libraries.
No fucking thank you.
We keep adding things to the language, and never subtract anything, which means learning it as a language is getting harder and harder.
Given that almost every single framework under the sun (except React) has converged on signals, it makes sense to move that into the browser. This... this is how the web is supposed to work.
`??` is my favourite.
I also want set functions and possibly a match statement thingy.
- They introduce what is effectively a black-box system
- The new system is expected to handle all application state
- They try to push it for frontend folks while also remarking that it would be useful for build systems
This has the same red flags the xz saga had.
Have we learnt nothing.
Lots of ”users” here vouching for the pattern and hoping it gets adopted. I bet this gets some nice damage control replies because there’s social engineering going on here right now and most seem to not be aware of it.