I'm slightly surprised that GitHub is still basically storing git repos on a regular filesystem and driving them with the Git CLI. I would have expected the repos to be broken up into individual objects and stored in an object store. That should make pushes much faster, since you get basically infinitely scalable writes. It does make pulls more difficult, but packfiles could still be computed asynchronously, and with some attention to data locality it should be workable.
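For concreteness, here's a minimal sketch of what I mean, assuming an S3-style store (the bucket name is hypothetical; the boto3 calls are standard). Keys are content-addressed the same way git names loose objects:

    import hashlib
    import zlib

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "git-objects"  # hypothetical bucket name

    def put_git_object(obj_type: str, payload: bytes) -> str:
        # Git hashes "<type> <size>\0<payload>", so the key is derived
        # purely from content -- writes never contend with each other.
        body = b"%s %d\0%s" % (obj_type.encode(), len(payload), payload)
        oid = hashlib.sha1(body).hexdigest()
        # Shard keys the way git shards loose objects: first two hex chars.
        key = f"{oid[:2]}/{oid[2:]}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=zlib.compress(body))
        return oid

Since every write is an independent put of an immutable, content-addressed blob, that's where the "infinitely scalable writes" argument comes from.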

This would be a huge rewrite of some internals but seems like it would be a lot easier to manage. It would also provide some benefits: objects could be shared between repos (although some care would probably be necessary around hash collisions), and it would remove some of the oddness about forks (as IIUC they effectively share the repo with the "parent" repo).
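For what it's worth, plain git already has a filesystem-level sharing mechanism, "alternates", which is commonly described as how fork networks share objects with the parent repo today. Roughly (paths hypothetical):

    import os

    # A repo whose objects/info/alternates file points at another object
    # directory will fall through to it on lookups, so the fork only
    # stores objects the parent doesn't have.
    fork_git_dir = "/repos/fork/.git"             # hypothetical path
    parent_objects = "/repos/parent/.git/objects" # hypothetical path

    alternates = os.path.join(fork_git_dir, "objects", "info", "alternates")
    os.makedirs(os.path.dirname(alternates), exist_ok=True)
    with open(alternates, "a") as f:
        f.write(parent_objects + "\n")

A content-addressed object store would generalize that sharing across the whole fleet rather than just within a fork network.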

I would love to know if something like this has been considered and why they decided against it.

I am not a GitHub employee, but here are my 2 cents.

An object store lacks an index, which your typical FS provides with a relatively high degree of efficiency. Filesystems can also be distributed to arbitrary write velocity given an appropriately distributed block storage solution (which provides the k/v API of an object store that you're looking for), and distributed filesystems are conveniently compatible with most POSIX operations rather than requiring bespoke integration. Most object stores are optimized for largish objects, and they lack the ability to condense multiple records into an individual write (via the block API) or to pre-emptively fetch the most likely next set of requested blocks.
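To make the index point concrete, compare enumerating loose objects on a filesystem (the directory tree *is* the index) with doing the same against an S3-style store, where the only primitive is a paginated key listing. A sketch, assuming the sharded layout from the earlier example (bucket name hypothetical):

    import os

    import boto3

    # Filesystem: one cheap, locally cached readdir per two-hex-char shard.
    def list_loose_objects_fs(objects_dir: str):
        for shard in os.listdir(objects_dir):
            if len(shard) == 2:
                for rest in os.listdir(os.path.join(objects_dir, shard)):
                    yield shard + rest

    # Object store: paginated LIST calls, each one a network round trip,
    # with no equivalent of the kernel's dentry/inode caches in front.
    def list_loose_objects_s3(bucket: str, prefix: str = ""):
        s3 = boto3.client("s3")
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                yield obj["Key"]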

In GitHub's case, diverging from the Git CLI/filesystem storage APIs could lead to long-term support issues and an implicit "GitHub" flavor of git, rather than improvements flowing back into the core git toolchain.

Object stores are great, but if you need some form of index on top of them, things get slow and painful really fast.
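A sketch of the pain: suppose you bolt a refs index onto an S3-style store as a single JSON object (all names hypothetical). The read-modify-write below works, but without some compare-and-swap primitive two concurrent pushes can silently drop each other's update:

    import json

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "git-objects"  # hypothetical bucket name

    def update_ref(ref: str, new_oid: str) -> None:
        # Fetch the whole index, mutate one entry, write it all back.
        # Plain puts are last-writer-wins, so this races under load --
        # exactly the kind of bespoke-index pain described above.
        try:
            raw = s3.get_object(Bucket=BUCKET, Key="refs.json")["Body"].read()
            refs = json.loads(raw)
        except s3.exceptions.NoSuchKey:
            refs = {}
        refs[ref] = new_oid
        s3.put_object(Bucket=BUCKET, Key="refs.json",
                      Body=json.dumps(refs).encode())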

https://github.com/juicedata/juicefs has an index implementation and is backed by object storage.