My biggest observation from switching from Haskell to Rust is the change in how you think about program composition.

In Rust you really have to make careful design choices in your producer/library code depending on the expected use, and almost all of them are at the operational level rather than the semantic one. So many internal choices about refs/Arcs/Pins/mut/ownership leak out.
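
A minimal sketch of what that leakage looks like (the `Config`/`SharedConfig` types and method names here are made up): three ways to expose the same string, each committing callers to a different operational contract.

```rust
use std::sync::Arc;

struct Config {
    name: String,
}

impl Config {
    // Borrow: zero-cost, but ties the returned &str to &self's lifetime.
    fn name_ref(&self) -> &str {
        &self.name
    }

    // Clone: caller gets an owned String free of lifetimes, at the cost of a copy.
    fn name_owned(&self) -> String {
        self.name.clone()
    }
}

// Shared ownership: if callers need the value to outlive the struct,
// the *field type itself* has to change to Arc<str>.
struct SharedConfig {
    name: Arc<str>,
}

impl SharedConfig {
    fn name_shared(&self) -> Arc<str> {
        Arc::clone(&self.name)
    }
}

fn main() {
    let c = Config { name: "app".to_string() };
    let borrowed = c.name_ref();  // valid only while `c` lives
    let owned = c.name_owned();   // independent copy
    println!("{borrowed} {owned}");

    let s = SharedConfig { name: Arc::from("app") };
    let handle = s.name_shared();
    drop(s);                      // the Arc keeps the data alive
    println!("{handle}");
}
```

All three "mean" the same thing semantically, but picking one is an API commitment the library author has to make up front.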

In Haskell you don’t really do that. If your library internals are neatly written combinators with the right types and semantics, you can just expose them and you’re done. As the article explains, laziness is a big part of this.
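
For contrast, the closest thing Rust has to that kind of free composability is its lazy iterator adapters: nothing runs until a consumer forces the chain, much like lazy evaluation. But even here the operational details surface in the signature — this sketch (a made-up helper) already commits to borrowing a slice rather than owning or sharing it.

```rust
// Lazy composition in Rust: filter and map build up a pipeline,
// and no work happens until `collect` forces it.
fn evens_squared(xs: &[i64]) -> Vec<i64> {
    xs.iter()
        .filter(|&&x| x % 2 == 0) // lazy: no work yet
        .map(|&x| x * x)          // still lazy
        .collect()                // forces evaluation
}

fn main() {
    println!("{:?}", evens_squared(&[1, 2, 3, 4])); // [4, 16]
}
```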

I understand where that difference comes from, but it’s really a very different way of thinking.

I really wish there were a GC-ed but strictly-evaluating Haskell. A Rust-like syntax would probably help with adoption, but I don't have a strong preference there.

> I really wish there were a GC-ed but strictly-evaluating Haskell.

To answer this literally: there are PureScript (compiles to JavaScript), Idris (dependently typed), OCaml, and Standard ML (as discarded1023 pointed out).

But I think the wish for a strictly-evaluating Haskell is sometimes really a wish for a Haskell with more predictable performance (especially in memory usage). If so, the linear types (and linear arrows) in recent GHC may fit the bill [1].

My wish is for Haskell to get a better runtime [2] based on optimal reduction (with more predictable performance and without GC) [3], which could deliver Rust-like performance without lifetimes while staying lazy.

[1]: https://www.tweag.io/blog/2017-03-13-linear-types/

[2]: https://discourse.haskell.org/t/high-order-virtual-machine-h...

[3]: https://github.com/HigherOrderCO/HVM