Wow, this is much worse than I assumed:

> In fact, we were able to circumvent the issue just by changing the name of the Chrome app on a Windows desktop. It seems that Microsoft threw up the roadblock specifically for Chrome, the main competitor to its Edge browser.

From the beginning of the article, I was led to believe Microsoft had changed an API around checking/setting the default browser to show Microsoft's dialog instead. Changing an API without warning doesn't seem great, but maybe you can make an argument that it ensures a "neutral" choice rather than apps pushing the choice on users.

But this shows it's specifically against Chrome. Regardless of whether it's legal or not, it's unforgivably anti-competitive behavior. It's a truly shameful tactic.

Given that Chrome is aggressively pursuing Privacy Sandbox, which is roundly rejected by the web privacy community, is aggressively user hostile, and is designed solely as a means to leverage Chrome's (current) dominance to support Google's ad business, it's fair to start treating Chrome as malware.

Honest question, why is the privacy sandbox user hostile? I assume it's because Google is using it to collect your information but blocking everyone else from collecting it?

> Honest question, why is the privacy sandbox user hostile?

From the horse's mouth, "privacy sandbox" is explicitly designed "to build thriving digital businesses."

https://privacysandbox.com/intl/en_us/

While at face value they claim it's designed to eliminate tracking techniques like fingerprinting, it's actually a system explicitly designed to collect users' private information. From the horse's mouth:

"To provide this free resource without relying on intrusive tracking, publishers and developers need privacy-preserving alternatives for their key business needs, including serving relevant content and ads."

So, it's tracking just not intrusive tracking?

It sounds like it's plain old tracking, but pushed and owned by Google through its control over Chrome.

No, it's objectively less intrusive.

The proposals involve reducing UA data, IP tracking, etc.

But it still allows for some amount of targeting. From my understanding, instead of being an identifiable individual via fingerprinting, the aim is to make you "probably one in [large group] of technology people".

I'm not saying I think it's a good thing, but on the surface it does appear _better_.

Does Privacy Sandbox prevent fingerprinting completely (for example, canvas fingerprinting, WebGL fingerprinting, audio fingerprinting)? Or would advertisers be able to use both fingerprinting and the newly provided data?

I don't understand why we need to trade here. Just block fingerprinting and do not provide any alternatives for advertisers. That would be best for users.

You can't block fingerprinting completely without breaking a ton of useful features. But the sandbox has a concept called the privacy budget which tries to determine if a site is collecting too much information. It should allow sites that actually use some of these features to continue to work.

The idea is that if sites query fonts, use the canvas, read the user agent information, etc., they are likely trying to build a fingerprint, so the browser will start to return generic data.

Presumably - hopefully - it would allow users to set their own privacy budgets. Even better if it supports granular per-site control, which may be needed for certain specialized websites.

https://github.com/mikewest/privacy-budget