Be warned that their code quality is pretty bad. There was a bug I was dealing with last year where it failed to delete objects yet returned the HTTP response code indicating success. This was widespread, not just some edge case I encountered. Their test suite is broken too: it doesn't actually verify that the object on disk changed. I tried to engage them about it, but they blew me off.
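To make the point concrete, here is a minimal sketch (not MinIO's actual code; `delete_object` and the local-file layout are hypothetical stand-ins) of what a durability-aware delete test looks like. The second assertion, checking the filesystem rather than just the status code, is exactly the check the broken test suite skips:

```python
import os
import tempfile

def delete_object(path: str) -> int:
    """Hypothetical stand-in for the server's delete path:
    removes the backing file and returns an HTTP-style status code."""
    try:
        os.remove(path)
    except FileNotFoundError:
        return 404
    return 204

# A proper test asserts on the on-disk state, not only on the response.
with tempfile.TemporaryDirectory() as bucket:
    obj = os.path.join(bucket, "key")
    with open(obj, "wb") as f:
        f.write(b"payload")

    status = delete_object(obj)
    assert status == 204            # the response claims success...
    assert not os.path.exists(obj)  # ...and the object really is gone
```

A test that only checks `status == 204` would pass even if `delete_object` never touched the disk.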

Minio isn't durable. An S3 write may not be on disk even after the operation has returned success. They had an environment variable, MINIO_DRIVE_SYNC, for a while that fixed some cases. Looking at the current code, this setting is called MINIO_FS_OSYNC now (for some reason): https://github.com/minio/minio/pull/9581/commits/ce63c75575a... (but I wouldn't trust that... are they fsyncing directories correctly? Making sure object metadata gets deleted with the data in one transaction, etc.?). Totally undocumented, too.
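For reference, the standard POSIX recipe for a crash-safe write, which is roughly the behavior a sync flag like that should provide, is: write to a temp file, fsync the file, rename it into place, then fsync the parent directory so the rename itself survives a crash. A minimal sketch (generic technique, not MinIO's implementation):

```python
import os
import tempfile

def durable_write(path: str, data: bytes) -> None:
    """Crash-safe write on POSIX: temp file + fsync + atomic rename
    + fsync of the containing directory."""
    dirpath = os.path.dirname(os.path.abspath(path))

    # Write to a temp file in the same directory so the rename is atomic.
    fd, tmp = tempfile.mkstemp(dir=dirpath)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file data and metadata to stable storage
    finally:
        os.close(fd)

    os.rename(tmp, path)  # atomic replacement on POSIX

    # Without this, the new directory entry can be lost on power failure.
    dirfd = os.open(dirpath, os.O_DIRECTORY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)
```

Skipping the directory fsync is the classic mistake: the data blocks are on disk, but after a crash the file may simply not be there.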

I guess this makes minio "fast". But it might eat your data. Please use something like Ceph+RadosGW instead. MinIO might be okay for running tests, where durability isn't a requirement.

That had me curious, so I searched a bit in their issues.

Their attitude about it isn't great: https://github.com/minio/minio/issues/3536

That's too bad, as it seems well thought out in other areas, like clustering.

The MinIO team cares about an issue only if you are a paying customer, not if you use the open-source version. Indeed, MinIO is not even fully S3-compatible, with many edge cases, and they close the related issues by saying it's not a priority.

You might want to look at other options as well, like SeaweedFS [0], a POSIX-compliant, S3-compatible distributed file system.

[0] https://github.com/chrislusf/seaweedfs