What does HackerNews think of HVM?

A massively parallel, optimal functional runtime in Rust

Language: Rust

I'm pretty excited by what Val is doing. I feel like this is the next step in languages: finding a way to combine the techniques of the last decade (Pony's unique references, Rust's borrow checking, etc.) in new and interesting ways, to get closer to the ultimate goal: a language that has speed, safety, and simplicity.

The simplicity is particularly important. We've almost certainly surpassed the complexity limit in languages like C++, Rust, and Haskell. Each of these can become easier with time, but even past the learning curve they impose a lot of artificial complexity.

It's also interesting that a lot of languages are adopting subsets of Rust's model:

* Val is using unique references for its data, and not including shared references.

* HVM is using shared reference borrowing and implicit .clone()s for its data, and not including unique references. [0]

* Vale is using the borrow checker at a higher "region" level to make it opt-in, and using generational references, which were inspired by Rust's slotmap / generational indices. [1] (disclaimer: am Vale lead)

* Lobster is using implicit borrow checking for the simpler cases, and falling back to reference counting everywhere else. [2]

* Cone is using borrow checking on top of garbage collection, reference counting, single ownership, and any other user-defined memory strategy. [3]
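For readers who haven't seen generational indices (the slotmap pattern that inspired Vale's generational references), here's a minimal sketch in Rust. The names `Arena` and `Handle` are mine, not from slotmap or Vale: each handle carries the generation it was issued under, and a lookup whose generation no longer matches the slot's returns nothing instead of touching freed data.

```rust
struct Slot<T> {
    generation: u64,
    value: Option<T>,
}

pub struct Arena<T> {
    slots: Vec<Slot<T>>,
}

#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Handle {
    index: usize,
    generation: u64,
}

impl<T> Arena<T> {
    pub fn new() -> Self {
        Arena { slots: Vec::new() }
    }

    // Insert a value and hand back a handle stamped with the slot's generation.
    pub fn insert(&mut self, value: T) -> Handle {
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Handle { index: self.slots.len() - 1, generation: 0 }
    }

    // Free the slot and bump its generation, invalidating every old handle.
    pub fn remove(&mut self, h: Handle) {
        if let Some(slot) = self.slots.get_mut(h.index) {
            if slot.generation == h.generation {
                slot.value = None;
                slot.generation += 1;
            }
        }
    }

    // A stale handle (generation mismatch) safely yields None rather than
    // dereferencing memory that has been reused.
    pub fn get(&self, h: Handle) -> Option<&T> {
        self.slots
            .get(h.index)
            .filter(|s| s.generation == h.generation)
            .and_then(|s| s.value.as_ref())
    }
}
```

The generation check turns a use-after-free into an observable `None`, which is the core trick: memory safety without a borrow checker or a GC.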

Just as Haskell (and FP languages before it) showed us how far we can take immutability and what interesting patterns emerge, I think Rust shows us what happens when you try to apply aliasability-xor-mutability to every problem. And just as many languages have adopted the good parts of Haskell, a lot of languages are adopting the good parts of Rust.
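Aliasability-xor-mutability boils down to one rule, visible in a tiny Rust sketch (function names here are my own, for illustration): at any moment a value has either many shared readers or exactly one exclusive writer, never both.

```rust
// Shared (&) borrows: any number of simultaneous readers, no writers.
fn total(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

// Exclusive (&mut) borrow: exactly one writer, no readers while it lives.
fn append(xs: &mut Vec<i32>, x: i32) {
    xs.push(x);
}

fn demo() -> Vec<i32> {
    let mut scores = vec![1, 2, 3];
    let a = &scores;
    let b = &scores;               // two shared borrows coexist fine
    let _sum = total(a) + total(b);
    append(&mut scores, 4);        // legal: a and b are no longer in use
    // let r = &scores; append(&mut scores, 5); // overlapping reader and
    //                                          // writer: would not compile
    scores
}
```

Everything each of these languages borrows from Rust is some relaxation or repackaging of that one rule.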

I hope Val goes far!

Also, I love their name. I tend to like languages that have the V and L sounds in them, though I might be uniquely biased ;)

[0] https://github.com/Kindelia/HVM

[1] https://verdagon.dev/blog/generational-references

[2] https://aardappel.github.io/lobster/memory_management.html

[3] https://cone.jondgoodwin.com/

> Nice balanced writeup! However, I am not convinced one can make something solving all the things Rust solves which is substantially simpler as a language.

I used to think this too, but my opinions have evolved a bit.

Recall the transition from goto-heavy assembly code to C; it solved all the things assembly solved, and was a substantially simpler language. One of the keys to designing a good language (or any good abstraction, really) is to identify the patterns that already informally exist.

So if we want to see a simpler Rust, we need to look at the informal patterns that exist in Rust.

* Zig did this with Rust's async/await, by making it colorblind. [0]

* Vale did this with Rust's generational indices, by making generational references, a way to get memory-safe single ownership without a borrow checker. [1] It then adds a hardened FFI boundary to get more solid safety guarantees than Rust could. [2]

* HVM did this for Haskell by looking at Rust's particular blend of borrowing and cloning, and making it happen automatically. [3]

* Lobster did something similar for an imperative mutable language, driving RC costs lower than ever before. [4]
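HVM applies "borrow where possible, clone only when forced" automatically at the runtime level. A loose user-level analogy in Rust (my sketch, not HVM's actual machinery) is `std::borrow::Cow`, which stays zero-copy on the read path and allocates only at the moment mutation requires ownership:

```rust
use std::borrow::Cow;

// Return the input unchanged (borrowed) unless a fix-up forces a clone.
fn normalize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_")) // the clone happens only here
    } else {
        Cow::Borrowed(input) // zero-copy path: no allocation at all
    }
}
```

Callers that pass already-clean strings never pay for a copy; the clone is deferred to the exact call that needs it, which is the pattern these runtimes automate.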

I think Rust is an amazing first attempt at an interesting new problem space, and a huge step forward for the field. Now, language designers are starting to notice that the building blocks Rust trailblazed can be combined in new ways to make interesting new languages.

It's an exciting time for programming languages!

[0] https://kristoff.it/blog/zig-colorblind-async-await/

[1] https://verdagon.dev/blog/generational-references

[2] https://vale.dev/memory-safe

[3] https://github.com/Kindelia/HVM

[4] https://www.strlen.com/lobster/

Check out HVM, an experiment to make copying immutable data really fast: https://github.com/Kindelia/HVM

In my experience, people believe that programming languages are a solved space, and we should stick with what we have. It's an unfortunate view.

Languages are actually very polarized today. I think there's a lot of room for a mainstream language that is safe, fast, and, most importantly, easy. Today's languages generally offer only two of the three.

Luckily, a lot of languages are exploring that space!

* Vale is blending generational references with regions, to have memory-safe single ownership without garbage collection or a borrow checker. [0]

* Cone is adding a borrow checker on top of GC, RC, single ownership, and even custom user allocators. [1]

* Lobster found a way to add borrow-checker-like static analysis to reference counting. [2]

* HVM is using borrowing and cloning under the hood to make pure functional programming ridiculously fast. [3]

* Ante is using lifetime inference and algebraic effects to make programs faster and more flexible. [4]

* D is adding a borrow checker! [5]

[0] https://vale.dev/

[1] https://cone.jondgoodwin.com/

[2] https://www.strlen.com/lobster/

[3] https://github.com/Kindelia/HVM

[4] https://antelang.org/

[5] https://dlang.org/blog/2022/06/21/dip1000-memory-safety-in-a...

It's amazing what strides languages are making in the realms of memory safety. I think D's got the right idea, to have pockets of zero-cost memory safety, on top of an overall architecture that embraces shared mutability's benefits.

A lot of languages are going this direction too:

* Cone is putting a full borrow checker on top of GC, RC, and single ownership: https://cone.jondgoodwin.com/

* Vale is putting opt-in "region borrow checking" on top of single ownership with generational references: https://verdagon.dev/blog/zero-cost-refs-regions

* Lobster is employing an automatic-borrow-checker-esque algorithm on top of RC for some brilliant speedups: https://www.strlen.com/lobster/

* Ante has some designs for lifetime inference, which I think will work really well: https://antelang.org/

* HVM is a runtime for Haskell that uses borrow-checker semantics and falls back to cloning. It's not shared mutability, but Haskell without GC is pretty cool! https://github.com/Kindelia/HVM

I particularly like D's approach because it's opt-in; we can use it in the areas of the program where we need performance the most, and prioritize flexibility and development velocity everywhere else, which I think is the right balance for most programs developed today.

It also means one can gradually ease into learning static analysis and its more complex constructs (like return scope, mentioned in the article). One can become proficient in the basic language quickly, then improve their craft gradually. In modern software engineering, that's a big win, in my opinion. Props to D for going this direction!
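The return scope check is the same kind of escape analysis Rust applies everywhere by default (D just makes it opt-in). A rough Rust illustration, with a hypothetical function name of my own choosing:

```rust
// Rejected by the compiler: the returned reference would outlive the local
// it points into, so the escape analysis refuses it.
//
// fn escapes() -> &'static str {
//     let s = String::from("local");
//     &s
// }

// The accepted shape: transfer ownership out instead of borrowing.
fn escapes_fixed() -> String {
    String::from("local")
}
```

D's version lets you turn this check on only in the hot, performance-critical pockets of a program, which is exactly the opt-in balance described above.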

I develop languages full time, and it's clear to me that RC will make massive strides forward in the next decade, for a few reasons:

1. First-class regions allow us to skip a surprising amount of RC overhead. [0]

2. There are entire new fields of static analysis coming out, such as in Lobster which does something like an automatic borrow checker. [1]

3. For immutable data, a lot of options open up, such as the static lazy cloning done by HVM [2] and the in-place updates in Perceus/Koka. [3]
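Perceus's "update in place when the reference count is one" idea has a small user-level analogue in Rust's `Rc::make_mut` (a loose sketch of the principle, not Koka's actual mechanism): it mutates the existing allocation when the value is uniquely owned, and clones only when the value is shared.

```rust
use std::rc::Rc;

// Append to a reference-counted vector.
// Uniquely owned: mutates the existing allocation in place, no copy.
// Shared: clones first, then mutates the fresh copy, leaving aliases intact.
fn push_one(list: &mut Rc<Vec<i32>>) {
    Rc::make_mut(list).push(1);
}
```

When the count is one, the "functional update" is really an in-place write, which is where the RC speedups in point 3 come from.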

Buckle up, y'all, it's gonna be a wild ride!

[0] https://verdagon.dev/blog/zero-cost-refs-regions

[1] https://aardappel.github.io/lobster/memory_management.html

[2] https://github.com/Kindelia/HVM

[3] https://www.microsoft.com/en-us/research/publication/perceus...