> Why not upload those files separately, or in ZIP format?
Doing S3 PUT requests for 260M files every week would cost around $1,300 USD/week ($0.005 per 1,000 PUTs × 260M), which was too much for our budget.
> or in ZIP format?
We looked at ZIPs, but due to the way the header (well, the central file directory) is laid out, finding a specific file inside the ZIP would require the system to download most of the CFD.
The ZIP CFD is basically a list of variable-length header entries, each 46 bytes plus the file name (and any extra field or comment data); to find a specific file you have to iterate the CFD until you find the one you want.
Assuming you have a smallish archive (~1 million files), the CFD for the ZIP would be somewhere on the order of 50MB+ (depending on file name lengths).
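To make that concrete, here's a minimal sketch of the linear scan (Python), assuming the whole CFD has already been downloaded into memory. The 46-byte fixed layout is the ZIP spec's central directory file header; everything else about the function is illustrative:

    import struct

    # Fixed 46-byte portion of a ZIP central directory file header.
    CDFH = struct.Struct("<4s6H3I5H2I")

    def find_local_header_offset(cfd: bytes, target: str) -> int | None:
        """Linear scan of the central file directory for `target`.

        Entries are variable length (46 bytes + name + extra + comment),
        so there is no way to jump to an entry -- you must walk them all.
        """
        want = target.encode()
        pos = 0
        while pos + CDFH.size <= len(cfd):
            fields = CDFH.unpack_from(cfd, pos)
            if fields[0] != b"PK\x01\x02":      # ran past the last entry
                return None
            name_len, extra_len, comment_len = fields[10:13]
            if cfd[pos + CDFH.size : pos + CDFH.size + name_len] == want:
                return fields[16]               # offset of the local file header
            pos += CDFH.size + name_len + extra_len + comment_len
        return None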
With a hash index you know exactly where in the index to look for the header entry, so you can load just that entry with a range request:
    offset = hash(file_name) % slot_count
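A rough sketch of that lookup, for contrast with the ZIP scan. The layout constants (INDEX_OFFSET, SLOT_SIZE, SLOT_COUNT) and the choice of SHA-1 are made up for illustration since the real format isn't spelled out here, but the shape is the point: one computed offset, one HTTP range request.

    import hashlib
    import urllib.request

    # Hypothetical layout -- the real format isn't specified here,
    # so these constants are illustrative only.
    INDEX_OFFSET = 4096      # where the slot array starts in the archive
    SLOT_SIZE = 64           # fixed-size slots make offsets computable
    SLOT_COUNT = 2_000_003

    def slot_index(file_name: str) -> int:
        # offset = hash(file_name) % slot_count
        digest = hashlib.sha1(file_name.encode()).digest()
        return int.from_bytes(digest[:8], "little") % SLOT_COUNT

    def fetch_header_entry(url: str, file_name: str) -> bytes:
        # One range request pulls back exactly the slot we need,
        # instead of downloading a 50MB+ directory.
        start = INDEX_OFFSET + slot_index(file_name) * SLOT_SIZE
        req = urllib.request.Request(url, headers={
            "Range": f"bytes={start}-{start + SLOT_SIZE - 1}",
        })
        with urllib.request.urlopen(req) as resp:  # expect 206 Partial Content
            return resp.read()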
Another file format that has been gaining popularity recently is PMTiles[1], which uses a tree index; however, it is specifically for tiled geospatial data.