The biggest problem with Browsix today stems from Spectre mitigations: SharedArrayBuffers aren't enabled by default in browsers, and static hosting sites like GitHub Pages don't let you set the COOP/COEP headers ( https://web.dev/coop-coep/ ) needed to enable them, AFAICT.
Additionally, Browsix hasn't been updated in a while, although I still believe the idea is sound. I don't have a lot of time for Browsix these days, but it would be straightforward to update it with WASI support, which would free us from having to use a modified Emscripten toolchain (and instantly enable running Rust binaries under Browsix).
[edit]: for additional context: we use SharedArrayBuffers to give the kernel and program a shared view of a process's address space, enabling fast system calls between Web Workers and the kernel (which runs in the main browser thread/context). Without them, performance is unacceptably slow, as things like read/write system calls (a) require several memcpys and (b) create a bunch of JS garbage. Additionally, without them there isn't a good way to "block" while handling a syscall in a Web Worker (which we need to do for non-event-loop C/C++/Rust programs).
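The blocking pattern described above can be sketched with `Atomics.wait`/`Atomics.notify` on a `SharedArrayBuffer`. This is an illustrative toy, not Browsix's actual ABI: the buffer layout, state flags, and function names are made up, and for demonstrability both sides run in one thread (so the `Atomics.wait` returns immediately because the "kernel" has already answered). In the real setup the process side runs in a Web Worker and genuinely blocks until the main-thread kernel notifies it.

```javascript
// ctrl[0] holds a state flag, ctrl[1] holds the syscall return value.
const sab = new SharedArrayBuffer(64);
const ctrl = new Int32Array(sab);

const WAITING = 0, DONE = 1;

// "Kernel" side: handle a syscall, write the result, wake the blocked process.
// In Browsix this would run on the main browser thread.
function kernelHandle() {
  Atomics.store(ctrl, 1, 42);          // pretend result (e.g. bytes written)
  Atomics.store(ctrl, 0, DONE);
  Atomics.notify(ctrl, 0);             // wake anyone blocked in Atomics.wait
}

// "Process" side: normally runs in a Web Worker and blocks on the wait.
function syscall() {
  Atomics.store(ctrl, 0, WAITING);
  kernelHandle();                      // single-threaded stand-in for postMessage
  // Blocks while ctrl[0] === WAITING; here it returns "not-equal" right away
  // because kernelHandle already flipped the flag to DONE.
  Atomics.wait(ctrl, 0, WAITING);
  return Atomics.load(ctrl, 1);
}

console.log(syscall()); // 42
```

Note that `Atomics.wait` is exactly what requires cross-origin isolation (the COOP/COEP headers mentioned above) in browsers, and it is only permitted inside workers, never on the main thread.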
I've been working on a modified Emscripten runtime that treats threads as separate "programs" with their own memory, etc. Not an optimal solution at all but sometimes you need to hack your way around a problem.
0 - https://dev.to/stefnotch/enabling-coop-coep-without-touching...
Chrome stable is at 97 currently.
I was stuck at my parents' for Xmas and picked up Tanenbaum's "Distributed Systems" and "Modern Operating Systems", which gave me the idea of running a "kernel" in a browser. It was more of an academic exercise than anything else, but my intention was to have the following:
Being able to unload and reload JavaScript. The initial idea was to write the website from inside the website, but at its core that requires something akin to process isolation for JavaScript. It also requires the DOM to be isolated.
Implementing 9P2000 and sharing resources across browsers. I've been reading about the ideas behind Plan 9, and I would like to implement something that lets me connect point-to-point to other browsers and mount their FS into mine so we can share resources.
One of the cool results I got was that, since the DOM is not changed directly (each process/worker has its own partial DOM, and every time it changes a delta is sent back to the main thread for sync), the JavaScript can run somewhere else entirely (another browser, a backend server) and be synced back (much like Vaadin, but more agnostic).
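The partial-DOM-plus-deltas idea can be sketched in a few lines. This is a toy illustration, not the project's actual code: it uses a flat map of virtual nodes instead of a real DOM tree, and the names (`makeDelta`, `applyDelta`, etc.) are made up. The point is the shape of the protocol: the worker computes a minimal delta instead of touching the DOM, and the main thread (or another browser entirely) applies it.

```javascript
// Compute only the properties that actually changed.
function diffProps(oldProps, newProps) {
  const delta = {};
  for (const key of Object.keys(newProps)) {
    if (oldProps[key] !== newProps[key]) delta[key] = newProps[key];
  }
  return delta;
}

// Worker side: emit a delta describing the change, never mutate the DOM.
function makeDelta(nodeId, oldProps, newProps) {
  return { nodeId, changes: diffProps(oldProps, newProps) };
}

// Main-thread side: apply the delta to the authoritative tree.
// Because deltas are plain data, they can just as easily be sent over
// a network to a backend or a peer browser, which is the property noted above.
function applyDelta(tree, delta) {
  Object.assign(tree[delta.nodeId].props, delta.changes);
}

const tree = { n1: { props: { text: "hello", class: "a" } } };
const delta = makeDelta("n1", tree.n1.props, { text: "world", class: "a" });
applyDelta(tree, delta);
console.log(tree.n1.props.text); // "world"
console.log(delta.changes);      // { text: "world" } -- class was unchanged
```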
Most of the code was inspired by the Linux kernel (which gave me a reason to go learn its internals). It's kinda nasty in places, but it's written in TypeScript, as some of you have already mentioned. Someone might find it interesting, even if just for its educational value.
Join a plan 9 community. Tons of resources and people doing what you want to do. Check my profile for links and channels.
We failed at making network protocols, so every protocol improvement now goes over HTTPS. We failed at making universal virtual machine-based applications, so every new app is in the browser. We failed to bridge the gaps between client-based, server-based and p2p computing, so we build all 3 into one interface. None of academic computer science seems to reflect this, and we still write most of our code by hand like it's the 1970s. We use fixed-width text-based 80-character terminals embedded in 8K OLED displays. Our telephones have as much processing power, memory and storage as our desktop computers and use batteries that last for 2 days (and are 1/10th the size), but we haven't yet standardized on one way to create new lines in a text file.
We're farmers from the 17th century working at a biotech startup.
Run C, C++, Go and Node.js programs as processes in browsers - https://news.ycombinator.com/item?id=13108027 - Dec 2016 (1 comment)
Browsix: Unix in the browser tab - https://news.ycombinator.com/item?id=13063232 - Nov 2016 (5 comments)
Guacamole is a good open source one.
If I understood you correctly.
There's also a relatively new service called Mighty App, which streams just a browser, with the idea that you can manage hundreds of tabs while all the browser processing happens on the remote server.
[edit] Oh, I guess this isn't really what you were looking for. Whoops!
Wish I could get jslinux to work for what I want with this!
I really prefer it being it local though.
[1] https://en.wikipedia.org/wiki/OnLive
EDIT: Technically Gaikai was founded a year before OnLive, but at the time OnLive was more accessible, as Gaikai remained in limited release until Sony bought them to use for PlayStation Now.
Browsix addresses this by letting you run Unix programs directly in the browser, taking advantage of the significant compute power on people's laptops (and even phones nowadays!), eliminating a whole class of security issues (software explicitly has access only to the current user's data in the browser tab) and scaling concerns.
(I'm not super familiar with this ecosystem so I'm probably missing something obvious like libraries or something)
So it's not about running a JS implementation on top of a JS implementation per se, but about being able to run JS code that uses things like CommonJS require(), Node builtin modules, etc.
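What a browser-side CommonJS loader has to provide is small but essential: a module cache plus a require() that wraps each module body in a function receiving `(module, exports, require)`, the same way Node's module wrapper does. A minimal sketch, with module sources living in an in-memory map purely for illustration (the names `sources` and `fakeRequire` are made up, and a real loader would fetch sources and resolve paths):

```javascript
// Pretend module registry; a real loader would fetch these over the network.
const sources = {
  "./math": "exports.add = (a, b) => a + b;",
};

const cache = {};

function fakeRequire(id) {
  if (cache[id]) return cache[id].exports;   // each module evaluates once
  const module = { exports: {} };
  cache[id] = module;
  // Wrap the source like Node does, so top-level vars stay module-private.
  const wrapper = new Function("module", "exports", "require", sources[id]);
  wrapper(module, module.exports, fakeRequire);
  return module.exports;
}

console.log(fakeRequire("./math").add(2, 3)); // 5
```

Node builtins (fs, net, child_process, ...) are the harder half: each one needs a browser implementation backed by something like Browsix's kernel, which is exactly what the comment above is getting at.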
The LaTeX editor example is the one that best demonstrates the concept: a standard JS frontend invoking an unmodified pdflatex for rendering. In every other 'OS in the browser' project I'm aware of, it would have to be a desktop application that just happens to be rendered in the browser. With plain Emscripten or WebAssembly, you would need to modify pdflatex to accept jobs and return results without relying on any OS-level features.
0 - https://wasi.dev
I’m imagining all the ways I could use this for offensive security tools.
If I had a way to import a JS library that enabled running web servers, invoking OS commands, or running a reverse HTTP proxy, I'd be able to do so much damage to any target client.
> Sockets include support for TCP socket servers and clients, making it possible to run applications like databases and HTTP servers *together with their clients in the browser*.
Emphasis is mine. You need to run the server and the client within Browsix. Furthermore the "OS commands" are commands within the Browsix environment.
But nonetheless, this is useful technology for a malicious actor.
For example, a functioning HTTP server would enable an HTTP proxy that could intercept/modify requests made from the client, no?
Now I can add headers to requests made by an HTML form submit. This might allow for more potent CSRF attacks, or circumvention of controls like the HttpOnly cookie flag.
Can I use a victims browser as a c2 server now? I bet with some brainstorming we could come up with some creative offensive capabilities using this technology.
* why?
* security?
A big use case for this, I would imagine, is games.
Why? Why not? (Just don't put it in production lol)
> (Just don't put it in production lol)
Why not? If you were to use multiple "processes" via Browsix in your app, it might actually be _more_ secure, since Web Workers don't share state with each other or with the main thread. (EDIT: Though personally I would at best take inspiration from this rather than use it in a real app.)