Some thoughts off the top of my head: Proxies are presented as a possible candidate for improving performance, but in conversations w/ some vdom library authors, I learned that proxy performance is far too poor to make them a viable option for high-performance vdom engines (in addition to their abysmal cross-browser support today)
Another issue is that the observable overhead must be offset by savings in the number of DOM operations for change propagation to be worth it. For example, a `reverse` operation would benefit little, if at all, since it requires touching almost all DOM nodes in the list, and it would incur the worst-case tracking overhead on top of that.
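To make the `reverse` point concrete, here's a minimal sketch (all names are illustrative, not any particular library's API) of fine-grained change tracking on a list: since every item ends up at a new index, the observer fires once per item, so there are no DOM operations to save and the notification bookkeeping is pure overhead.

```javascript
// Hypothetical observed list: `onMove` is called for each index whose
// content changed. A reverse moves (nearly) every item, so it fires ~n
// times -- the change-tracking machinery can't skip any DOM work here.
function observe(items, onMove) {
  return {
    reverse() {
      const n = items.length;
      items.reverse();
      // for n > 1, every element lands at a new index
      for (let i = 0; i < n; i++) onMove(i);
    },
    get: (i) => items[i],
  };
}

let notifications = 0;
const list = observe([1, 2, 3, 4, 5], () => notifications++);
list.reverse();
console.log(notifications); // 5 -- one per item, same as touching every node
```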
While naive vdom can lose in needle-in-haystack scenarios, vdom libraries often provide other mechanisms (thunks, shouldComponentUpdate, per-component redraws, etc) to cope w/ those scenarios.
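For readers unfamiliar with those mechanisms, a thunk is roughly a memoized render function: the subtree is only re-rendered when its inputs change, which is exactly the needle-in-haystack case. A hedged plain-JS sketch (illustrative names, not a specific library's API):

```javascript
// Sketch of a vdom "thunk": re-run the render function only when its
// arguments differ (by reference), otherwise reuse the previous vnode
// so the differ can skip the whole subtree.
function thunk(renderFn) {
  let lastArgs = null;
  let lastVnode = null;
  let calls = 0;
  const wrapped = (...args) => {
    const same =
      lastArgs !== null &&
      args.length === lastArgs.length &&
      args.every((a, i) => a === lastArgs[i]);
    if (!same) {
      lastVnode = renderFn(...args); // only render on changed inputs
      lastArgs = args;
      calls++;
    }
    return lastVnode;
  };
  wrapped.calls = () => calls;
  return wrapped;
}

const row = thunk((item) => ({ tag: "li", text: item.label }));
const a = { label: "x" };
row(a);
row(a);              // same reference: render skipped
row({ label: "x" }); // new object: re-rendered
console.log(row.calls()); // 2
```

`shouldComponentUpdate` achieves the same effect declaratively: the comparison lives on the component rather than at the call site.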
In addition, the field of vdom performance has very strong traction currently. Authors of vdom libraries often share knowledge and implementation ideas, and there are now libraries that can perform faster than naive vanilla js in some cases by employing techniques like DOM recycling, cloning and static tree diff shortcircuiting, as well as libraries w/ a strong focus on granular localized updates.
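As one example of static tree diff shortcircuiting, if a static subtree is hoisted so the exact same vnode object is passed on every render (whether by a compiler or by hand), the differ can skip it with a single reference check. A minimal sketch, with illustrative names:

```javascript
// Toy recursive differ: identical references mean the subtree is
// static, so it is skipped without visiting any of its children.
let nodesDiffed = 0;
function diff(oldV, newV) {
  if (oldV === newV) return; // static subtree: short-circuit
  nodesDiffed++;
  const oldKids = oldV.children || [];
  const newKids = newV.children || [];
  for (let i = 0; i < Math.max(oldKids.length, newKids.length); i++) {
    diff(oldKids[i], newKids[i]);
  }
}

const staticHeader = { tag: "h1", children: [] }; // hoisted once, reused
const tree1 = { tag: "div", children: [staticHeader, { tag: "p", children: [] }] };
const tree2 = { tag: "div", children: [staticHeader, { tag: "p", children: [] }] };
diff(tree1, tree2);
console.log(nodesDiffed); // 2 -- the div and the p; the static header was skipped
```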
I'm not sure how those would deal with the scenarios focused on in the article. The only one I'm familiar with is per-component redraws which wouldn't apply.
Ultimately, there are a vast number of different scenarios, some of which are likely to remain open problems for the foreseeable future (e.g. a 100,000 item reverse), and some of which can be worked around via application space "escape hatches" such as the use of granular update APIs and techniques like occlusion culling.
As I said, it's very interesting to see work that tackles the problem from an algorithmic complexity angle, and it's always healthy to explore different performance characteristics. But I think it's also important to keep in perspective the performance profile of the solutions currently in the market, because their theoretical algorithmic complexity alone doesn't tell the whole story.