You can safely go to a billion rows. Use indexes. Don't do silly things like count(*) or select *; select the small data sets you need using your indexes. If you need a full table count for some reason, use a primary key that auto-increments and select the max, or use information_schema for an estimate. Nobody will sweat the slight inaccuracy on a table that size.
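For the counting part, here's roughly what that looks like on MySQL/InnoDB (the `orders` table, `id` column, and `mydb` schema are made-up names for illustration):

    -- cheap approximate count via the auto-increment primary key
    -- (off by however many rows were ever deleted or skipped, which rarely matters here)
    SELECT MAX(id) FROM orders;

    -- or the optimizer's estimate from table statistics
    SELECT table_rows
    FROM information_schema.tables
    WHERE table_schema = 'mydb' AND table_name = 'orders';

    -- the thing to avoid: a full scan just to get a number
    -- SELECT COUNT(*) FROM orders;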

Yeah, I was also under the 20mil impression initially, until I built a system that worked with a 500mil row table without any problems (just a beefy RDS instance, nothing crazy or distributed). Schema changes became a pain at that point, so I would probably split the table if I were to do it again, but reads and writes never gave us much trouble.

check out gh-ost migrations: https://github.com/github/gh-ost
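A run looks something like this (host, credentials, table, and ALTER are made up; check the gh-ost docs for the flags your replication topology actually needs, e.g. whether you point it at a replica or the master):

    gh-ost \
      --host=replica.example.com \
      --user=ghost --password=... \
      --database=mydb --table=orders \
      --alter="ADD COLUMN note VARCHAR(255) NULL" \
      --chunk-size=1000 \
      --verbose \
      --execute

It copies rows into a ghost table in small chunks while tailing the binlog for ongoing changes, then swaps the tables at cut-over, so the original table stays available for reads and writes the whole time.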