You could use EC2, but GeForce Now has left beta and announced pricing, and at $5/month it is much cheaper.

Doing this on-premise is also pretty tantalizing - I recently watched a Linus Tech Tips video where he built a 64-core Threadripper and used virtual machines to replace four physical computers in his home, including two people gaming simultaneously.

https://www.youtube.com/watch?v=jvzeZCZluJ0

>Doing this on-premise is also pretty tantalizing - I recently watched a Linus Tech Tips video where he built a 64-core Threadripper and used virtual machines to replace four physical computers in his home, including two people gaming simultaneously.

So basically a mainframe? I can't imagine it's economically viable, though. A 64-core Threadripper costs more than eight Ryzen 3700Xs and clocks lower.

If you live with 2 or 3 other SWEs this is an attractive option. You have enough PCIe lanes to pack in 4 GPUs, and enough spare horsepower to also host basic home services like a file server, a GitLab instance, home automation stuff, a VPN, etc.

Personally, though, I think I'd opt for building four or five separate machines and managing them as a cluster.

Why would this be attractive to SWEs? Seems like something that would be more relevant to gamers.

Half of the SWEs I work with don’t game.

My workstation is set up like this, with one giant LVM pool for storage and two GPUs. It gives a few advantages:

* Can run two OSes at one time, each with half the resources
* Can run one with full resources if needed
* Can have multiple Linux and Windows installs
* Can have snapshots of installs
* Takes around 5-10 seconds to swap one of the running installs for a different install (rough sketch below)
* Can run headless VMs on a core or two while doing all the above, e.g. a test runner or similar service if needed
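The quick swap is really just stopping one libvirt domain and starting another once the passed-through devices are freed. A minimal sketch, assuming libvirt/QEMU with the Python libvirt bindings; the domain names are placeholders, not from my actual setup:

    # Swap one running install for another on the same GPU.
    # Assumes libvirt/QEMU and the python "libvirt" bindings;
    # "win10-gaming" and "fedora-dev" are made-up domain names.
    import time
    import libvirt

    def swap_install(stop_name, start_name):
        conn = libvirt.open("qemu:///system")   # local system hypervisor
        try:
            old = conn.lookupByName(stop_name)
            new = conn.lookupByName(start_name)
            if old.isActive():
                old.shutdown()                  # clean ACPI shutdown of the guest
                while old.isActive():
                    time.sleep(1)               # wait until the GPU/USB devices are released
            new.create()                        # boot the other install on the freed hardware
        finally:
            conn.close()

    swap_install("win10-gaming", "fedora-dev")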

I use a 49" ultrawide with PBP, have one GPU connected to each side, so the booted installs automatically appear side to side, and Synergy to make mouse movement seamless, etc, etc

It took a little work to set up, but I've worked this way for ~3 years now and never had to think about the setup after the initial time investment, aside from during upgrades. Highly recommend it.

I definitely can see the advantage for a small team of having a single large machine with multiple GPUs: everyone sits at a thin workstation and "checks out" whatever install they want to use, with however much CPU power and RAM they need. They can clone and duplicate installs to get a copy with their personal files in home, ready to go, and check out larger slices of the machine when they have more CPU/GPU-intensive tasks. That's probably my ideal office machine, after using my solo workstation for a while.

Any guide on this? I would love macOS and Windows side by side.

The Arch Linux wiki page [1] is a good place to start (note you don't have to use Arch for the host to get value out of the page; I use NixOS), and/or the macOS repo [2] (or maybe this newer one [3]).

The only real hurdles hardware-wise are your IOMMU groups and your CPU compatibility; if you have a moderately modern system it shouldn't be a problem.
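If you want to eyeball your groups before committing, here's a small helper (my own sketch, not from the guides) that walks /sys/kernel/iommu_groups on a Linux host with the IOMMU enabled (intel_iommu=on or amd_iommu=on on the kernel command line) and labels each device with lspci:

    # Rough Python equivalent of the Arch wiki's IOMMU-group shell one-liner.
    # A device you pass through should be alone in its group, or share it only
    # with devices you can also hand over to the guest.
    from pathlib import Path
    import subprocess

    groups = Path("/sys/kernel/iommu_groups")
    if not groups.is_dir():
        raise SystemExit("No IOMMU groups found - is the IOMMU enabled in BIOS/kernel?")

    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            # lspci -nn shows vendor:device IDs, handy for binding vfio-pci by ID
            out = subprocess.run(["lspci", "-nns", dev.name],
                                 capture_output=True, text=True).stdout.strip()
            print("   ", out or dev.name)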

I also have a couple of inexpensive PCIe USB cards that I pass through to the guests for direct USB access; highly recommended.

The guides will use qcow2 images or pass through a block device. As I mentioned, I have a giant LVM pool, so I just create LVs for each VM, pass the volume through to the VM as a disk, and let the VM handle it. On the host you can use kpartx to bind and mount the partitions if you ever need to open them up.
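For illustration, the LV-per-VM flow looks roughly like this, driven from Python via subprocess; the volume group, LV, and domain names are made up, and the whole thing needs root:

    # Sketch of: carve an LV out of the pool, hand it to a guest as a raw disk,
    # and later map its partitions on the host with kpartx.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    VG, LV, DOMAIN = "vg_workstation", "win10", "win10-gaming"

    # 1. create a logical volume for the new install
    run("lvcreate", "-L", "200G", "-n", LV, VG)

    # 2. attach the whole LV to the guest as a disk; the guest partitions it
    run("virsh", "attach-disk", DOMAIN, f"/dev/{VG}/{LV}", "vdb",
        "--targetbus", "virtio", "--persistent")

    # 3. later, on the host, map the guest's partitions if you need to look inside
    run("kpartx", "-av", f"/dev/{VG}/{LV}")   # creates partition mappings under /dev/mapper/
    # ...mount the mapped partition, then remove the mappings with: kpartx -d /dev/VG/LV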

[1] https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVM...

[2] https://github.com/kholia/OSX-KVM

[3] https://github.com/yoonsikp/macos-kvm-pci-passthrough