My anecdote on how we do it:

- We have AWS Org

- Each account has no root IAM users in use, and cost/pricing rolls up through the root AWS Org account

- You move between accounts with AWS SSO (now IAM Identity Center) - no more passwords per account

- AWS SSO standardizes boundaries across accounts with IAM policies, like eu-central-1 only for the dev IAM role, etc.

- Inside Account more granular access with IAM Assume Roles

- Each account ships CloudTrail logs to a central S3 bucket for audit (same-region pricing works across accounts!)

- Each account is `project.env` like micro_service1.dev or company1_k8s+s3.prod

- That way, we get per-project pricing built in, picking up where tag support ends (and it does have limits)
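To make the region boundary bullet concrete: the usual mechanism is a Service Control Policy attached to the dev OU or accounts. A minimal sketch following AWS's well-known region-deny pattern - the region and the exempted global services are illustrative, not our exact policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideEuCentral1",
      "Effect": "Deny",
      "NotAction": [
        "iam:*",
        "organizations:*",
        "route53:*",
        "support:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": "eu-central-1"
        }
      }
    }
  ]
}
```

The `NotAction` list exists because some AWS services are global and would otherwise be blocked entirely.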

We arrived at this structure after seeing Azure Resource Groups and Google Cloud Projects. We also noticed the soft limit of 5 VPCs per region that AWS has, and think it's kind of a clue from Amazon: "psst... these account soft/hard limits are sized for one project/deployment."

The only problems come from automating things, like Route53 records that automatically point to load balancers in that account, or, as mentioned in the article, VPC-to-VPC connectivity. Since we mostly use serverless stuff like Lambda and S3, we experience less of that.

But as time went on we realized, like the VPC example in the article, that those problems forced us to structure things in a more SOLID way, which became a feature for us. Just like moving from docker-compose to k8s doesn't create app-mesh and ops problems, it REVEALS them. So the solution for the Route53 example is to have a separate subdomain zone in each account for automation, and to ADD another account, `main_route53.prod`, with a root zone delegating to them.
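The Route53 layout above is just standard DNS delegation: the root zone in `main_route53.prod` holds NS records pointing at each account's subdomain zone. In zone-file terms (example.com and the nameserver hostnames are placeholders):

```
; root zone example.com, hosted in the main_route53.prod account
dev.example.com.    NS    ns-0001.awsdns-00.org.    ; the NS set of the zone
dev.example.com.    NS    ns-0002.awsdns-01.com.    ; owned by the dev account
prod.example.com.   NS    ns-0003.awsdns-02.net.
prod.example.com.   NS    ns-0004.awsdns-03.co.uk.
```

Each account can then freely automate records inside its own subdomain zone without ever touching the root zone.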

Hope this helps for the curious.

> You move between accounts with AWS SSO (now IAM Identity Center) - no more passwords per account

The only thing I really hate about this is that it is tied to your browser. If you switch browsers or use an incognito window, you have to go through the whole dance of setting up your account switching again. Imagine you're in multiple orgs that are set up this way...

Firefox Containers are a great way to handle this. I usually only need to log into 3 or 4 accounts, max, at the same time. I have AWS containers 1-4 set up for just that.

I do the same.

Pro tip time!

You can use this extension and its accompanying CLI tool to launch URLs in a container.

https://github.com/honsiorovskyi/open-url-in-container

Couple that with aws-vault's console login URL generator.

Then pipe thru fzf.

Now you have a quick script to open an aws profile in a new container. Containers don’t even need to exist and can be created on the fly.

I just craft the name like “aws-$profile”.
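Putting those pieces together, a minimal sketch of such a script. It assumes aws-vault, fzf, jq, Firefox, and the extension above are installed, and that the `ext+container:` URL scheme is the one that extension registers - check its README for the exact format:

```shell
#!/usr/bin/env sh

# Build the container name from the profile, e.g. "aws-dev".
# The extension creates the container on the fly if it doesn't exist.
container_name() {
    echo "aws-$1"
}

# Pick a profile with fzf, get a console login URL from aws-vault,
# and open it in the matching Firefox container.
open_aws_console() {
    profile=$(aws-vault list --profiles | fzf) || return 1
    url=$(aws-vault login "$profile" --stdout) || return 1
    # The console URL must be percent-encoded before being embedded
    # as a query parameter; jq's @uri filter handles that.
    encoded=$(printf '%s' "$url" | jq -sRr @uri)
    firefox "ext+container:name=$(container_name "$profile")&url=$encoded"
}
```

Bind `open_aws_console` to a hotkey or alias and you get a one-keystroke profile picker.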