Does anyone have experience with how this works at scale?

Let’s say I have a directory tree with 100 million files in a nested structure, where the average file sits 4+ directories deep. When I `ls` the top few directories, is it fast? How long until I see updates made by other writers?

Reading the docs, it looks like it’s using this API for traversal [0]?
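
For context (my read of the JSON API, not specific to any one client): GCS has a flat namespace, so every "directory" listing is a single Objects: list request with prefix and delimiter=/ set. A sketch with a hypothetical bucket and path:

    # Hypothetical bucket/prefix; lists one "directory" level.
    # Results page at 1,000 entries, continued via pageToken.
    curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://storage.googleapis.com/storage/v1/b/my-bucket/o?prefix=a/b/c/&delimiter=/&maxResults=1000"
    # "prefixes" in the response become subdirectories, "items" become files.

So `ls` on any single directory should be one fast request; it's a full recursive walk over 100 million objects that gets expensive, since you pay one request per directory (or per 1,000 objects).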

What about metadata like creation times, permissions, owner, and group?
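
Not an answer, but one thing worth knowing: a GCS object natively carries timeCreated, updated, size, content type, and arbitrary custom key-value metadata, and nothing POSIX-shaped (no mode, uid, or gid), so any FUSE layer has to synthesize permissions, owner, and group, usually from mount options. You can see what's actually stored with a metadata GET (hypothetical names again):

    # Object names are URL-encoded, so "/" becomes %2F.
    curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://storage.googleapis.com/storage/v1/b/my-bucket/o/a%2Fb%2Fc%2Ffile.txt"
    # Returns timeCreated, updated, size, contentType, and custom
    # "metadata" -- but no POSIX mode, owner, or group.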

Any consistency concerns?
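
GCS itself is strongly consistent for reads, listings, and metadata these days, so any staleness you notice comes from client-side caching in the FUSE layer. As a hedged example (these flag names are from older gcsfuse releases and may have changed; check gcsfuse --help), you can trade latency for freshness by shrinking the cache TTLs:

    # Assumption: legacy gcsfuse flags; newer releases use
    # --metadata-cache-ttl-secs instead. A TTL of 0 disables the cache.
    gcsfuse --stat-cache-ttl 0s --type-cache-ttl 0s \
            --implicit-dirs my-bucket /mnt/gcs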

[0] https://cloud.google.com/storage/docs/json_api/v1/objects/li...

If you want a real file-system experience on top of GCS, please try JuiceFS [1], which scales well to 10 billion files with TiKV or FoundationDB as the metadata engine.
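
A minimal sketch of what that looks like, assuming three TiKV PD endpoints and a GCS bucket (all names are placeholders; check the JuiceFS docs for exact options):

    # Create a volume: TiKV holds metadata, GCS holds the data blocks.
    juicefs format --storage gs --bucket gs://my-jfs-bucket \
        tikv://pd1:2379,pd2:2379,pd3:2379/myjfs myjfs
    # Mount it in the background.
    juicefs mount -d tikv://pd1:2379,pd2:2379,pd3:2379/myjfs /mnt/jfs

Since ls/stat are served by the metadata engine and never touch GCS, deep directory listings stay fast regardless of tree size.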

PS: I'm the founder of JuiceFS.

[1] https://github.com/juicedata/juicefs