ITT: people dramatically underestimating the risk to their accounts from credential stuffing and dramatically overestimating the security benefit of not running JS.

They're probably right that not running JS is privacy accretive, but only if you count their individual privacy and ignore the net privacy gain for all users when accounts can be defended against cred stuffing using JS. The privacy loss from one account being popped is likely far greater than the privacy loss from thousands of users' browsing patterns being correlated.

tl;dr: Good luck detecting and preventing automation of sign-in pages at scale without robust JS-based defenses. I think there's a shortsightedness and self-centeredness to a lot of these comments.

> Good luck detecting and preventing automation of sign in pages at scale without robust JS based defenses

Why is it not sufficient simply to throttle logins at the server?

Modern cred stuffing is done by botnets. When I see a cred stuffing attack, it's maybe 1-3 attempts per IP address spread over 100-500k IP addresses. Often you'll have a family of legitimate users behind an IP address that's cred stuffing you at the same time.

Throttling by IP address may have worked 10 years ago; unfortunately, it's no longer an effective measure.

Modern cred stuffing countermeasures include a wide variety of exotic fingerprinting, behavioral analysis, and other de-anonymization tech - not because anyone wants to destroy user privacy, but because the threat is that significant and has evolved so much in the past few years.

To be entirely honest, I'm kind of surprised Google didn't already require JavaScript to log in.

Any advice on where to read more about these modern cred stuffing countermeasures? I'd love to learn more.

Unfortunately I don't have much reading material to provide. It's a bit of an arms race, so the latest and greatest countermeasures are typically kept secret or protected by NDA. The rabbit hole can go very deep and can differ from company to company.

The most drastic example I can think of was an unverified rumor that a certain company would "fake" log users in when presented with valid credentials from a client they considered suspicious. They would then monitor what the client did - from the client's point of view it had successfully logged in and would begin normal operation. If the server observed the device acting "correctly" with the fake login token, it would fully log it in. If the client deviated from expected behavior, the server would feed it false data and ban it based on a bunch of fancy fingerprinting.
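Since this is an unverified rumor, the details are anyone's guess, but the flow described above amounts to a simple probation state machine. A hypothetical sketch (all names invented, not any real vendor's implementation):

```python
from enum import Enum, auto

class SessionState(Enum):
    PROBATION = auto()  # valid creds, but the client looked suspicious,
                        # so it was handed a "fake" login token
    TRUSTED = auto()    # behaved normally; quietly promoted to a real session
    BANNED = auto()     # deviated from expected behavior; fed false data

def evaluate(state: SessionState, behavior_ok: bool) -> SessionState:
    """Promote or ban a probationary session based on observed behavior."""
    if state is SessionState.PROBATION:
        return SessionState.TRUSTED if behavior_ok else SessionState.BANNED
    return state  # trusted/banned sessions stay where they are
```

The appeal of this design is that the attacker can't easily tell whether a "successful" login was real, which makes it much harder to validate stolen credential lists.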

Every once in a while, someone will publish their methods/software; Salesforce and their JA3 TLS client fingerprinting tool come to mind: https://github.com/salesforce/ja3