What request rate is acceptable for a bot sending GET requests to HN's servers? I looked for an answer to this but couldn't find one.

In the past, my server's IP was banned from accessing HN for sending too many requests in too short a time span. I found the unban interface you provide, lowered my crawler's request rate, and tried again, but I was still sending too many requests in a limited amount of time and got my server's IP banned again. If I recall correctly, I got it unbanned a third time and lowered the request rate even further, but then got banned again, and at that point I think automatic unbanning was no longer allowed.

I don't remember whether I gave up at that point, sent an email about it, or just waited some amount of time, if there was a statement about how long I would have to wait before being able to use the unban interface for my server's IP again.

Anyway, an official answer about the acceptable request rate would be nice. Perhaps put it in the FAQ?

Also, suppose people doing automated GET requests created a unique UA string for their scrapers that includes a way for HN staff to get in touch, for example (but with the actual names of the bot and site):

    Examplebot/0.95 (+http://www.example.com/bot.html)

where the page at that URL lists an e-mail address for getting in touch, along with a statement on how to verify that a given crawler belongs to that service. Would that help avoid having the server IP banned automatically?
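For what it's worth, attaching such a UA string to every request is a one-liner in most HTTP clients. A minimal sketch in Python's standard library, using the hypothetical Examplebot identity from the example above:

```python
import urllib.request

# Hypothetical bot identity; the name and URL mirror the example above.
USER_AGENT = "Examplebot/0.95 (+http://www.example.com/bot.html)"

def build_request(url: str) -> urllib.request.Request:
    """Return a Request that carries the identifying User-Agent header."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})

req = build_request("https://news.ycombinator.com/item?id=1")
```

The request can then be passed to urllib.request.urlopen as usual; the point is just that the header makes the operator reachable from the server logs.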
Once per 30 seconds. That's in our robots.txt: https://news.ycombinator.com/robots.txt. We've been working for a long time on (edit: what we expect to be) some serious performance improvements that might allow us to relax that limit. For now, though, HN's process still runs on a single core and we don't have much performance to spare.
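For crawler authors, honoring that limit amounts to enforcing a minimum interval between requests. A minimal throttle sketch (the class name and structure are illustrative, not any official client):

```python
import time

class Throttle:
    """Enforce a minimum interval between requests.

    HN's robots.txt asks for one request per 30 seconds, hence the default.
    """

    def __init__(self, min_interval: float = 30.0):
        self.min_interval = min_interval
        self._last = None  # monotonic timestamp of the previous request

    def wait(self) -> None:
        """Sleep just long enough that min_interval has passed since the last call."""
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()
```

A crawl loop would then call throttle.wait() before each GET; the first call returns immediately and every later call blocks until the 30-second window has elapsed.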

If you need more than that, you should use the Firebase-based API (https://github.com/HackerNews/API). The public dataset is also available as a Google BigQuery table: https://bigquery.cloud.google.com/dataset/bigquery-public-da....

Edit: since this subthread is not really on topic I detached it from https://news.ycombinator.com/item?id=21617478.