It has been around for quite a while; I assumed anyone who had tried to reverse engineer an APK already knew about it.
Everyone's experiences are different! It's helpful to share things we may consider "old news" because it may just end up benefiting those around us.
I am one of those lucky 10,000 today.
I've been puttering around for years but never really paid much attention to APKs because I hate working on my phone. I'm going to play around with this to figure out why some Play Store apps tend to demand pretty extensive privileges for fairly benign use cases. I'd always figured it would be a huge pain in the butt and never even looked for a decompiler.
I just shoveled it off onto my never-ending list of things I'll get to once the secret to immortality is found.
IIUC, the permissions apps request ultimately just enable access to certain APIs; they don't do anything on their own: https://stackoverflow.com/questions/24858462/how-to-check-if... (see the 1st comment). So apps like https://play.google.com/store/apps/details?id=sk.styk.martin... and https://play.google.com/store/apps/details?id=com.ubqsoft.se... are basically reasoning about the ceiling of everything an app might ever use across its lifetime. That's arguably a tad misrepresentative, in the same way a wall of text looks scary from a distance.
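For what it's worth, that ceiling is literally just the list in the manifest; here's a minimal Kotlin sketch of pulling it out via the package manager (the function name is mine, not anything those apps actually ship):

    import android.content.Context
    import android.content.pm.PackageManager

    // Minimal sketch: list every permission a package *declares* in its manifest,
    // i.e. the ceiling of what it could ever be granted, not what it actually uses.
    fun declaredPermissions(context: Context, packageName: String): List<String> {
        val info = context.packageManager.getPackageInfo(
            packageName,
            PackageManager.GET_PERMISSIONS
        )
        return info.requestedPermissions?.toList() ?: emptyList()
    }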
Furthermore, the way permissions are categorized is almost orthogonal to the internal API architecture, which is woefully unintuitive at best and makes it next to impossible to form a good summary judgement of what a given app is actually trying to do. For example, a game might want access to your "cell ID information" only because its analytics SDK is overly invasive (the game itself never needs the info), while a smart-device controller app might request "real-time location information" (the ACCESS_FINE_LOCATION permission) just because the smart device happens to use BTLE (Bluetooth LE): BTLE's fast connect/disconnect characteristics made it ripe for abuse by indoor location-tracking systems in shops and whatnot, and since it's impossible to determine programmatically what a given BTLE connection is being used for, Android had to gate BTLE scanning behind the location permission.
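To make the BTLE example concrete, here's a hedged Kotlin sketch of the check an app has to pass before a BLE scan will return anything (the helper name is mine):

    import android.Manifest
    import android.content.Context
    import android.content.pm.PackageManager
    import androidx.core.content.ContextCompat

    // On Android 6-11 a BLE scan silently returns nothing unless the *location*
    // permission has been granted, even if the app never touches GPS.
    // (Android 12+ finally splits this out into a separate BLUETOOTH_SCAN permission.)
    fun canScanForBleDevices(context: Context): Boolean =
        ContextCompat.checkSelfPermission(
            context,
            Manifest.permission.ACCESS_FINE_LOCATION
        ) == PackageManager.PERMISSION_GRANTED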
So not only is the mapping from policy to implementation a pile of arrows all pointing at each other, the current model has a really big "wait, what?" problem because it's based on granting access to APIs ahead of time so they can be used whenever needed. Android has been moving toward a just-in-time model, where an app requests, say, storage access at the moment it's needed (these are called runtime permissions); that contextualizes and thus justifies the request, allowing for more informed consent.
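From the app side the runtime model looks roughly like this (a Kotlin sketch; the activity, button handler and "export" feature are hypothetical):

    import android.Manifest
    import androidx.activity.result.contract.ActivityResultContracts
    import androidx.appcompat.app.AppCompatActivity

    // Sketch of the "ask when needed" model: the prompt appears at the moment
    // the feature is used, so the user sees the request in context.
    class ExportActivity : AppCompatActivity() {

        private val requestStorage =
            registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
                if (granted) startExport() else explainWhyExportIsUnavailable()
            }

        // Called from a hypothetical "export" button; nothing was requested at install time.
        private fun onExportClicked() {
            requestStorage.launch(Manifest.permission.WRITE_EXTERNAL_STORAGE)
        }

        private fun startExport() { /* placeholder: actually write the file */ }
        private fun explainWhyExportIsUnavailable() { /* placeholder: show a rationale */ }
    }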
With the ahead-of-time way things work today, though... I'd be a bit skeptical that APKTool on its own would get you far. You're basically in the same situation as wondering why a given Chrome extension asks for a certain permission, downloading the CRX, unzipping it, and finding everything minified. Intractable? Check. "Now what?" Check. Suspicious? Good question :(
In practice, a fair number of Chrome extensions aren't minified at all and ship perfectly readable source, sometimes even with comments (which is great for figuring out how other developers have solved certain gnarly integration problems ;D). The bytecode-based nature of the Java runtime, however, means you're always working with some level of information loss: control flow comes out somewhat permuted, in much the same way pseudo-decompiled C never quite looks like the original. If a given app isn't using obfuscation, you might at least get some symbol names back.
Android Studio wires the ProGuard obfuscator (which ships for free with 'Studio) into the build scripts of every new project, but leaves it switched off by default to keep builds fast. Flip a couple of build settings to "true" and obfuscation Just Works™ with no additional steps. Given how commoditized that is, whether an app's symbols survive is always an open question.
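If memory serves, the toggle is roughly this in the module-level build.gradle.kts with a recent Android Gradle plugin (the Groovy DSL drops the "is" prefixes):

    // The new-project template already wires in the rules file; flipping these
    // flags on for release builds is all it takes.
    android {
        buildTypes {
            release {
                isMinifyEnabled = true       // obfuscate + strip unused code
                isShrinkResources = true     // also drop unused resources
                proguardFiles(
                    getDefaultProguardFile("proguard-android-optimize.txt"),
                    "proguard-rules.pro"
                )
            }
        }
    }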
JADX (https://github.com/skylot/jadx) is the tool most people use to fight their way through this status quo. As with IDA, you generally need a very clear idea of exactly what you're looking for (particularly when symbols are unavailable); "find out why this wants all these permissions" is, sadly, a very open-ended question from that low-level perspective. :(
FWIW, there are "interestinger" obfuscators than ProGuard out there; I once wondered how a random Chinese smart-device companion app worked internally and found that it shipped with a .so (shared library!)-based obfuscation/protection runtime. Frida (https://frida.re/) proved particularly awesome there: it turned out that for all the obfuscation and insanity the runtime brought to the table, its whole job was to hide the application's original .dex files, which it briefly wrote to a temporary location on launch - so it was just a matter of winning a race condition in an Android emulator. Frida is honestly a pretty amazing tool, and it picks up almost exactly where JADX and similar decompilers/static analysers leave off. You either run it in the background on a rooted device or bundle it into the app itself (using JADX to inject the shared library and to patch the app's bytecode to launch Frida), and it then exposes V8 (yes, the JavaScript engine) over ADB so you can poke at the app remotely - hook Java threads, make them run whatever code you want, etc. - in a dedicated thread the app can't block. Yes, it's that crazy, and yes, that sort of flexibility lets you do almost anything (once you figure out how to express it...). lol
TL;DR: You are sadly fundamentally correct in your gut assumption that this is a generally intractable question to straightforwardly answer. :(
I think one of the most realistic goals in the pursuit of privacy is to run all traffic through a captive proxy and install CA certificates on (at least) all your phones so you can MITM all the TLS traffic. I've seen the occasional comment on here from people who have done exactly that; they simply uninstall whatever doesn't cooperate (certificate pinning and so on). I've wanted to do this myself for quite a while but don't yet have the hardware to pull it off effectively/seamlessly. FWIW, device policy controller apps can install CA certificates and start VPNs without any persistent notifications cluttering up the screen (:D) - and they're surprisingly easy to write.
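The CA half of that looks roughly like this from a device-owner (device policy controller) app; a hedged Kotlin sketch where the admin component and certificate bytes are placeholders:

    import android.app.admin.DevicePolicyManager
    import android.content.ComponentName
    import android.content.Context

    // A device-owner app can silently add a CA certificate to the user credential
    // store, so a local proxy's forged TLS certificates are trusted. `admin` is
    // this app's DeviceAdminReceiver; `caCertDer` is the proxy's CA certificate
    // in DER form - both placeholders here.
    fun trustMitmProxyCa(context: Context, admin: ComponentName, caCertDer: ByteArray): Boolean {
        val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE) as DevicePolicyManager
        return dpm.installCaCert(admin, caCertDer)
    }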
If there's one specific angle or takeaway I'd like to focus on here, it's that the ecosystem has organically evolved into a headdesk-inducingly awkward but still incredibly interesting status quo - one that, sadly, requires a bit of attention-span buy-in to get past the initial "...!!! *run away*". But in much the same way that learning Slackware taught me a tonne about Linux (and sed, incidentally, because it was one of the few things that weren't corrupted on the install CD I used, haha) that I'd never have picked up if I hadn't taken everything apart and gone "ok, now maintain this mess", this is a great hands-on opportunity to learn about network security (it's kind of amazing how the pieces of the current status quo fit together - and then disappear!). I'm looking forward to playing around more when I get the chance.