What does HackerNews think of workerd?

The JavaScript / Wasm runtime that powers Cloudflare Workers

Language: C++

I think this is for people who want to run their own Cloudflare Workers (sort of), and since nobody wants to run a full Node for that, they want a small runtime that just executes JS/Wasm in an isolated way. But I wonder why they don't tell me how I can be sure that this is safe, or how it's safe. Surely I can't just trust them, and it explicitly mentions that it still has file IO, so clearly there is still work I need to do to customize the isolation further. The reason they don't show more info about this is probably that they don't really want you to run it on your own; they are selling you on running things on their edge platform, "Wasmer Edge".

So that's probably why this is so light on information: the motivation isn't to get you to use this yourself, just to use it via their hosted edge platform. But then I wonder why I wouldn't just use https://github.com/cloudflare/workerd which is also open source. Surely that is fast enough? If not, it should show some benchmarks. I suppose the performance claim is here: https://wasmer.io/products/edge#:~:text=Cloud-,Cold%20Startu...
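For reference, the kind of thing workerd executes is just a standard Workers-style fetch handler; something roughly like this (a sketch in TypeScript using the Workers module syntax, not code from the repo):

```typescript
// worker.ts: a minimal Workers-style module worker.
// This is a sketch of the sort of isolated JS workload workerd runs;
// the handler shape (export default { fetch }) is the standard module syntax.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    return new Response(`Hello from an isolate, you asked for ${url.pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};
```

You then point workerd at it with a small Cap'n Proto config and run something like `workerd serve config.capnp` (or `npx workerd serve config.capnp` with the npm package), if I'm remembering the README correctly. The isolation story is V8 isolates inside the process, plus whatever OS-level sandboxing you wrap around it.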

Cloudflare Workers is C++. I would assume that a lot of the related things (KV, Durable Objects, etc.) are too.

https://github.com/cloudflare/workerd

The equivalent here to Kubernetes is workerd, which is open source: https://github.com/cloudflare/workerd/

The platform that the VPSes Kubernetes uses run on, though: all the big clouds have a proprietary one of those.

First major adopter of the new Sandbox API?

Some very good tech comments happening in the last thread on Kuasar. https://news.ycombinator.com/item?id=35649189

It was a bit surprising to me that the model is a forking one, where each VM is its own process. I got a great reply on that:

> It is very insightful to point out that forking a wasm runtime may not be the best choice. Actually this is because we chose WasmEdge as our first supported wasm runtime, and it does not support redirecting the standard IO fds, so we can only fork a new process and redirect stdin/stdout/stderr to the named pipe. We have already submitted an issue to the WasmEdge community, and we may change from forking to starting directly. For Wasmtime (which we will support later), there is no such constraint, and we will start the runtime directly.

There's also good discussion of how much people appreciate getting away from the much higher-overhead per-container Shim v2 model!

It strikes me that this could be an excellent starting place for a Cloudflare workerd JS/Wasm provider too. That's been a pet desire of mine, and this could drastically slash the effort required to experiment. https://github.com/cloudflare/workerd

It's awesome to see the container model maturing. Much better runtime behavior here. And the ability to nicely go beyond mini-Linux-containers into new frontiers is hella exciting.

We're not afraid of helping the competition. Example: we open sourced the entire runtime for Cloudflare Workers! https://github.com/cloudflare/workerd

Open sourcing something has a cost, and we mostly only open source things like stand-alone libraries. We also mostly open source things that are fairly mature, because it's hard to manage software that's rapidly changing internally while handling PRs from outside.

It's more like Cloudflare forked nginx a long time ago, and is meanwhile in the very slow (like, decade-long) process of replacing it entirely.

The Cloudflare Workers Runtime∗, for instance, is built directly around V8; it does not use nginx or any other existing web server stack. Many new features of Cloudflare are in turn built on Workers, and much of the old stack built on nginx is gradually being migrated to Workers. https://workers.dev https://github.com/cloudflare/workerd

In another part of the stack, there is Pingora, another built-from-scratch web server focused on high-performance proxying and caching: https://blog.cloudflare.com/how-we-built-pingora-the-proxy-t...

Even when using nginx, Cloudflare has rewritten or added big chunks of code, such as implementing HTTP/3 (https://github.com/cloudflare/quiche). And of course there is a ton of business logic written in Lua on top of that nginx base.

Though arguably, Cloudflare's biggest piece of magic is the layer 3 network. It's so magical that people don't even think about it, it just works. Seamlessly balancing traffic across hundreds of locations without even varying IP addresses is, well, not easy.

I could go on... automatic SSL certificate provisioning? DDoS protection? etc. These aren't nginx features.

So while Cloudflare may have gotten started as more-or-less nginx-as-a-service, I don't think you can really call it that anymore.

∗ I'm the tech lead for Cloudflare Workers.

The engine is open source: https://github.com/cloudflare/workerd

We did not create our own engine to create "lock-in". On the contrary, it would be a huge win for us if we could simply run FastCGI or Node or whatever applications unmodified. We'd be able to onboard a lot more customers more quickly that way! Our product is our physical network (with hundreds of locations around the world), not our specific development environment.

But none of that tech scales well enough that we could run every app in every one of our locations at an affordable price. Meeting that goal required new tech. The best we could do is base it on open standard APIs (browser APIs, mostly), so a lot of code designed for browsers ports over nicely.
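To make that "ports over nicely" point concrete, here is a hedged sketch of a handler written only against web-standard APIs (fetch types, URL, Web Crypto); it is illustrative, not taken from Cloudflare's docs, but nothing in it is Workers-specific:

```typescript
// A handler built only on web-standard APIs (Request/Response, URL, Web Crypto).
// Because nothing here is Workers-specific, the same logic could run in a
// browser service worker or on the Workers runtime with little or no change.
async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Hash the request path with the standard Web Crypto API.
  const data = new TextEncoder().encode(url.pathname);
  const digest = await crypto.subtle.digest("SHA-256", data);
  const hex = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  return new Response(JSON.stringify({ path: url.pathname, sha256: hex }), {
    headers: { "content-type": "application/json" },
  });
}

// Standard Workers module syntax; in a browser service worker you would
// instead call handle() from a "fetch" event listener.
export default { fetch: handle };
```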

(I'm the lead engineer for Workers. These were my design choices, originally.)

Because V8's low-overhead tenant isolation is fundamental to how Cloudflare Workers works[1]; without it, it would be a completely different product with worse performance. This also means it can only support JavaScript and WebAssembly.
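As a rough sketch of what "only JavaScript and WebAssembly" looks like in practice, a Worker can drive a Wasm module inside the same isolate; the module name and its exported function below are made up for illustration:

```typescript
// Sketch: using a Wasm module from a Worker. In the Workers module syntax,
// importing a .wasm file yields a compiled WebAssembly.Module. Everything
// else (threads, raw syscalls, native addons) is off the table, because the
// sandbox is a V8 isolate rather than a container or VM.
// "add.wasm" and its exported `add` function are hypothetical.
import addModule from "./add.wasm";

export default {
  async fetch(request: Request): Promise<Response> {
    const instance = await WebAssembly.instantiate(addModule);
    const add = instance.exports.add as (a: number, b: number) => number;
    return new Response(`2 + 3 = ${add(2, 3)}`);
  },
};
```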

You can run it locally[2], and efforts are underway to standardize much of it so that the same code can run on Workers, Node.js, and Deno[3]. The Workers runtime is also open source[4], although in practice this is likely to be less useful for most use cases than just using one of the other JavaScript runtimes if you're self-hosting.

[1] https://developers.cloudflare.com/workers/learning/how-worke...

[2] https://developers.cloudflare.com/pages/platform/functions#d...

[3] https://wintercg.org/

[4] https://github.com/cloudflare/workerd

For Cloudflare Workers, it's no longer a joke either: https://github.com/cloudflare/workerd

This is great. They mentioned a few months ago their intention to open source it [1], and now it's finally a reality, so props to Kenton and the Workers team for achieving it!

I've dug a bit into the OSS repo [2], and here are some of my findings:

    1. They are using a forked version of V8 with a very small/clean set of changes (the diff is really minimal; not sure if this makes sense, but it might be interesting to see whether the changes could be upstreamed).
    2. It's coded mainly in C++, with Bazel for the builds and a mix of Cap'n Proto and Protocol Buffers for schemas.
    3. They worked deeply on the developer experience, so as a developer you can start using `workerd` with pre-built binaries via NPM/NPX (`npx workerd`). There are still a few rough edges, but things are looking promising [3].
    4. I couldn't find the implementation of the KV store anywhere. Is it possible to use it? How will it work?
Open sourcing the engine, combined with their recent announcement of a $1.25B investment fund for startups built on Workers [4], will, I think, give a big boost to the Wasm ecosystem... which I'm quite excited about!

[1] https://twitter.com/KentonVarda/status/1523666343412654081

[2] https://github.com/cloudflare/workerd

[3] https://www.npmjs.com/package/workerd

[4] https://blog.cloudflare.com/workers-launchpad/