I think there is really no doubt that the storage prices themselves (for all tiers) are pretty amazing. But let's face it: traffic costs are the huge elephant in the room. At 12ct and more per GB, traffic easily becomes your biggest expense and makes the storage price reduction from 2.6ct to 2ct per GB almost forgettable.

For me this is in no way acceptable, and it seems like a vicious attempt to sneak in some extra profit without the customer noticing it upfront. Sure, these harsh words seem like a big exaggeration, but I literally never hear or read anything about traffic costs in Google's fancy blog posts, and I'd assume only a fraction of the HN community is aware of this. 120 bucks for a measly TB of traffic is just way too high.
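To make the imbalance concrete, here's a quick back-of-the-envelope check using the rates quoted above (the 2ct storage and 12ct egress figures are the ones discussed in this thread; real GCS pricing is tiered by volume and region):

```python
# Storage vs. egress cost for 1 TB, at the per-GB rates quoted in this thread.
storage_price_per_gb = 0.02   # $/GB/month, the "2ct" storage figure
egress_price_per_gb = 0.12    # $/GB, the "12ct" traffic figure

gb = 1000  # 1 TB of data

storage_cost = gb * storage_price_per_gb  # cost to store it for a month
egress_cost = gb * egress_price_per_gb    # cost to serve it out once

print(f"storage: ${storage_cost:.0f}/month, egress: ${egress_cost:.0f} per TB served")
print(f"serving the data once costs {egress_cost / storage_cost:.0f}x a month of storage")
```

So a single full read of your dataset per month costs six times as much as keeping it stored, and most serving workloads read far more than once.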

This x 1000. Even when you throw your own CDN on top of this, the bandwidth markup is nasty. Neocities would be unsustainable if we used GCS for our hosting.

I'm paying about $0.01/GB right now, and I've seen market rates at half that. And that's not even buying directly from IP transit providers. You can get an unmetered gigabit for $450-1000/mo, which you can shove a theoretical 324TB through every month. The difference in the numbers is so staggering I sometimes wonder if I'm even doing the math right.
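The math does check out, for what it's worth, assuming a fully saturated link, decimal units, and a 30-day month:

```python
# Sanity-check the 324 TB/month figure for a saturated 1 Gbit/s unmetered link.
# Assumes decimal units (1 TB = 10^12 bytes) and a 30-day month.
link_gbps = 1
seconds_per_month = 30 * 24 * 3600              # 2,592,000 s

bytes_per_month = link_gbps * 1e9 / 8 * seconds_per_month
tb_per_month = bytes_per_month / 1e12           # 324.0 TB

print(f"{tb_per_month:.0f} TB/month")

# Effective cost per GB at the quoted $450-1000/mo range:
for monthly_cost in (450, 1000):
    per_gb = monthly_cost / (tb_per_month * 1000)
    print(f"${monthly_cost}/mo -> ${per_gb:.4f}/GB")
```

That works out to fractions of a cent per GB at full utilization, which is where the "am I doing the math right" feeling comes from when you compare it to $0.12.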

Perhaps their bandwidth is better somehow (prove it), but 12-18x better it is probably not, and having truffle shavings added to your IP transit really adds up when you're hauling a lot of traffic. If you're doing something with heavy BW usage and low margins, be careful with stuff like this. It quickly becomes much more expensive than doing it yourself.

I'd love to be wrong here. I'm sitting next to 60 pounds of storage servers I'm setting up for a data center, they're taking up my entire living room. I would love to get out of the data persistence business forever. But at these BW rates, it's never going to happen.

Out of curiosity, what storage do you use?

Any experience using Backblaze B2 as hot storage, with Glacier or Google Coldline as backup?

I'm planning to build a CephFS cluster at this point. I've given up on finding a cloud storage provider that will work well for us. It requires a fairly high fixed monthly cost for space and transit at a datacenter, but past ~10-30TB of monthly BW you start to save a lot of money.
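The ~10-30TB crossover is easy to sanity-check: divide the fixed monthly cost by the per-GB cloud egress rate. The $2,400/mo figure below is purely an illustrative assumption (colo space + transit + amortized hardware), not actual costs from this thread:

```python
# Break-even sketch: fixed-cost colo vs. per-GB cloud egress.
# The fixed monthly cost here is an assumed, illustrative number.
cloud_egress_per_gb = 0.12     # $/GB, the rate discussed in the thread
fixed_monthly_cost = 2400.0    # $ - assumed colo + transit + amortized hardware

breakeven_gb = fixed_monthly_cost / cloud_egress_per_gb
print(f"break-even at {breakeven_gb / 1000:.0f} TB/month of egress")
```

Any plausible fixed cost in the low thousands per month lands the break-even in the 10-30TB range, which matches the claim above; beyond that, every additional TB is nearly free on the colo side and $120 on the cloud side.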

There are trade-offs. Aside from the higher upfront costs and a fixed monthly bill, Ceph is ridiculously complicated. Interface-wise, it needs some much better abstractions.

B2 was a strong candidate; their BW rates are still a little high but approaching reasonable. The one issue is that they seem to have inconsistent latency. Not a problem for most use cases, but I need nearly all requests to come back in <100ms consistently, since I'm using this for web hosting. Their use case seems more focused on "hot standby backup" than on high availability ATM. I'm strongly considering them as the backup provider for my storage cluster.

FWIW, GCS is not winning any best-of-show awards for latency either. S3 was doing a better job at the time I ran tests.

What makes Ceph complicated? The underlying idea and how Ceph works is fairly simple.

This is the "quick installation": http://docs.ceph.com/docs/master/start/

It really shouldn't be this complex. I would love to just be able to boot an executable with a simple config file and be done with it. SeaweedFS shines a light on how this could be improved: https://github.com/chrislusf/seaweedfs