WebAssembly: Escaping the Browser
WebAssembly was announced as a way to run native code in the browser. Most early coverage treated it as a novelty — C++ games without a plugin. That framing is not entirely wrong — Wasm is already used in the browser for exactly that kind of heavy compute: game engines, code editors, image processors, things where you need near-native performance and JS cannot get you there. But it undersells what Wasm actually is.
Strip away the browser. What you have is a compact binary format that any compliant runtime can execute. The security model is capability-based by design. The module cannot touch the filesystem, open a network socket, or call a host function unless explicitly permitted. That sandbox is not bolted on. It is the architecture.
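To make that capability model concrete, here is a sketch in plain JavaScript using a minimal hand-assembled module. The module declares a single import, `env.dbl` (both names invented for this example), and exports `run`, which forwards its argument to that import. The host decides at instantiation time whether to supply the import; withhold it and the module cannot run at all.

```javascript
// A minimal hand-assembled Wasm module: it imports one host function
// ("env.dbl") and exports "run", which forwards its argument to dbl.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f, // type 0: (i32) -> i32
  0x02, 0x0b, 0x01, 0x03, 0x65, 0x6e, 0x76, 0x03, // import "env" ...
  0x64, 0x62, 0x6c, 0x00, 0x00,                   // ... "dbl": func, type 0
  0x03, 0x02, 0x01, 0x00,                         // one local function, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run" = func 1
  0x0a, 0x08, 0x01, 0x06, 0x00, 0x20, 0x00, 0x10, 0x00, 0x0b, // run(x) = dbl(x)
]);
const module = new WebAssembly.Module(bytes);

// The module gets exactly the capabilities the host hands it, nothing more.
const instance = new WebAssembly.Instance(module, {
  env: { dbl: (x) => x * 2 }, // the ONE host function this module may call
});
console.log(instance.exports.run(21)); // 42

// Withhold the import and instantiation is refused: no ambient authority.
try {
  new WebAssembly.Instance(module, {});
} catch (err) {
  console.log("instantiation refused:", err.constructor.name);
}
```

The point is that the permission check is structural, not policy: there is no API the module could reach for that the host did not pass in through that import object.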
I used this in Quelle, a CLI library manager for web novels. I needed a plugin system where users could install third-party scrapers without me having to trust what those scrapers actually did. Wasm solved that cleanly — each scraper runs in an isolated module with exactly the host functions I expose at the boundary, nothing else. That experience made me think seriously about where this runtime primitive could go.
What I actually want it to become
Here is the hopeful version: Wasm becomes the compute layer of the web. Not an alternative to JavaScript — a replacement for it, or at least a target for it. JavaScript compiles to Wasm. You write JS or TypeScript, the toolchain emits a .wasm module, and the browser executes that. The language stays. The interpreter goes.
The architecture already half-supports this thinking. Wasm is designed to be a compilation target. Rust, C, Go, and others already compile to it. The component model, once it matures, gives you a standard way to compose modules regardless of what language produced them. The runtime is fast, sandboxed, and portable. If you squint, it looks like an obvious foundation.
And JavaScript as a compiled language is not a stretch — it is roughly what already happens inside V8. The engine parses JS to bytecode and compiles the hot paths to native machine code at runtime. Wasm would just move that step earlier, to build time, and make the output portable across any compliant runtime instead of tied to a specific engine’s JIT.
Why it probably won’t
The existing web use cases tell you something important about where Wasm actually fits today. Games and editors work well because they are largely self-contained — the compute happens inside the module, and the results get pushed to a canvas or a custom renderer. The moment you need to interact with the DOM frequently, the picture changes.
Every DOM interaction from Wasm crosses a serialization boundary. Data has to be marshalled out of the Wasm linear memory, handed to a JS bridge, and then acted on. For occasional interactions that cost is negligible. For anything that touches the DOM in a tight loop — which is most of what a typical web UI does — it adds up fast and erodes the performance advantage you were reaching for in the first place. You end up with Wasm calling JS calling the DOM, which is not simpler than what we have now — it is just a different layer of indirection. Wasm does not replace JS for UI work today; it sits beside it and handles the parts JS cannot do efficiently.
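To make the marshalling concrete, here is a sketch of what the JS side of that bridge does for a single string. A standalone `WebAssembly.Memory` stands in for a real module's linear memory, and `readWasmString` is a hypothetical helper written for this example, not any particular toolchain's API — though generated glue code does something very similar.

```javascript
// A standalone WebAssembly.Memory stands in for a module's linear memory.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

// Pretend the module wrote a UTF-8 string into linear memory and handed
// JS a (ptr, len) pair — the only things that cross the boundary directly.
const ptr = 16;
const payload = new TextEncoder().encode("hello from wasm");
new Uint8Array(memory.buffer).set(payload, ptr);

// The JS glue has to locate the bytes and decode them into a JS string
// before anything DOM-shaped can happen. The decode is a copy, every call.
function readWasmString(mem, p, len) {
  return new TextDecoder().decode(new Uint8Array(mem.buffer, p, len));
}

const text = readWasmString(memory, ptr, payload.length);
console.log(text); // "hello from wasm"
// In a browser, only now could you touch the DOM:
// element.textContent = text;
```

One string per frame is nothing. Thousands of attribute updates per frame, each paying an encode or decode plus a JS call, is where the boundary starts to dominate.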
This is a solvable problem. There are proposals in progress. But “proposals in progress” is where Wasm features tend to live for a long time. The standards process is slow by design, and browser vendors move at their own pace on top of that. Features that seem obviously necessary spend years working through the W3C before they land anywhere users can rely on.
File sizes are the other friction point. Wasm modules are compact relative to native binaries, but they are not small. A Rust crate compiled to Wasm with even a modest set of dependencies produces something meaningfully larger than an equivalent JS bundle. Tree-shaking and compression help, but the web has spent a decade optimising for minimal initial payload. Wasm does not slot cleanly into that culture yet.
I want JavaScript to compile to Wasm the way CoffeeScript compiled to JavaScript. I think the trajectory points there. I also think it is at least a decade away, if it happens at all.
What it is good for right now
Plugin systems where the extension author cannot be trusted. Portable compute that needs to run at the edge, on mobile, and in a CLI without a separate host-layer rewrite per platform. Environments where you need a hard capability boundary and cannot afford the overhead of a subprocess or container.
That is a narrower use case than the hopeful version. But it is a real one, and the architecture it enables is genuinely clean in a way that other approaches are not. The Quelle extension system would have been much messier — and much less safe — without it.
The hopeful version might still happen. The trajectory is there. I just try not to mistake a good architecture for an inevitable one.