GitHub Actions are a fantastic experience for serverless applications. I am working on a serverless project where we use GitHub Actions exclusively for CI/CD as well as for running automated tests. We rely heavily on Lambda, S3, and DynamoDB, and our client app is static JS files served over CloudFront. GitHub Actions makes our pipelines accessible to any developer on the team. Since we only pay for what we use with our serverless infrastructure, we can even deploy each pull request to the cloud rather inexpensively, and we leverage GitHub's environments to help manage the cleanup for us. This lets our team members review and test changes in their browser before we merge them into our development branch. We can additionally run Playwright E2E tests to verify that none of our critical user workflows have been broken by the PR changes. I love this development experience and would have a hard time going back to anything else.
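A minimal sketch of what such a PR-preview workflow could look like. The `deploy.sh`/`destroy.sh` scripts, environment naming, and job layout here are all illustrative assumptions, not details from the original post:

```yaml
# .github/workflows/pr-preview.yml -- illustrative sketch only
name: PR preview environment

on:
  pull_request:
    types: [opened, synchronize, reopened, closed]

permissions:
  contents: read

jobs:
  deploy:
    # Deploy (or redeploy) a temporary stack while the PR is open.
    if: github.event.action != 'closed'
    runs-on: ubuntu-latest
    environment:
      name: pr-${{ github.event.number }}
    steps:
      - uses: actions/checkout@v4
      - name: Deploy ephemeral stack
        run: ./deploy.sh "pr-${{ github.event.number }}"   # hypothetical script

  cleanup:
    # Tear the stack down when the PR is closed or merged.
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Destroy ephemeral stack
        run: ./destroy.sh "pr-${{ github.event.number }}"  # hypothetical script
```

Keying the environment name to `github.event.number` is what lets each open PR get its own isolated stack.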

Thank you for sharing -- as I was reading through, I tried to understand what about the workflow was specific to GitHub Actions vs. other CI automation.

> GitHub Actions makes our pipelines accessible to any developer on the team

Do you reckon this accessibility is a combination of (i) storing the pipeline definitions in the application's source repo, where application developers can find them easily rather than hidden or scattered across other repos or behind management UIs, and (ii) a relatively simple, well-documented pipeline syntax?

The first tool I can think of that supported this workflow was Travis CI, around 2011-2012. AppVeyor offered similar capabilities quite early as well, and the same workflow can be done with GitLab CI and Google Cloud Build.

> we can even deploy each pull request to the cloud rather inexpensively and leverage GitHub's environments to help manage the cleanup for us. This allows our team members to review and test changes in their browser before we pull them into our development branch

Yeah, this kind of workflow is great. Another way to achieve it is to build simple command-line tools that developers can use to create and destroy temporary test environments running their speculative changes. For rapid experimentation, it can be great to spin up N temporary environments in parallel with different changes without tying them to pull requests. But I can see that tying the temporary environment's lifecycle to the lifecycle of a PR might make it easier to share demos of proposed changes with reviewers.
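One testable piece of such a command-line tool is deriving a deterministic, infrastructure-safe environment name from a branch name, so N parallel environments never collide. The function names and the provisioning backends mentioned in comments are hypothetical:

```python
import re

def stack_name(branch: str, prefix: str = "tmp") -> str:
    """Derive a deterministic, infrastructure-safe environment name
    from a git branch name (e.g. usable as a CloudFormation stack name)."""
    # Collapse any run of non-alphanumeric characters to a single hyphen.
    slug = re.sub(r"[^a-zA-Z0-9]+", "-", branch).strip("-").lower()
    # Many cloud resource names cap out around 64 characters.
    return f"{prefix}-{slug}"[:64]

def create_env(branch: str) -> str:
    """Hypothetical CLI entry point: provision a temp environment."""
    name = stack_name(branch)
    # ...call your provisioning backend here (CloudFormation, Terraform, CDK)...
    return name

def destroy_env(branch: str) -> str:
    """Hypothetical CLI entry point: tear the temp environment down."""
    name = stack_name(branch)
    # ...destroy the stack via the same backend...
    return name
```

Because the name is a pure function of the branch, `destroy_env` can always find the stack that `create_env` made, with no shared state between the two commands.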

Out of curiosity, how reliable do you find the environment cleanup? I remember building a similar create-temp-environment / destroy-temp-environment workflow for ephemeral databases running in AWS RDS, driven by Jenkins pipelines. It took a few months of tweaking to ensure the RDS databases got torn down correctly and not "leaked", even if the Jenkins master or workers failed midway through pipeline execution. From memory, we had a bunch of exception handling in a Groovy scripted pipeline running on the Jenkins master to attempt cleanup, and even that wouldn't work all of the time, so we had a second cron-scheduled cleanup job to detect and kill leaked resources.
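The leak-detector half of that can be sketched as pure filtering logic. The tag name, age threshold, and instance shape below are assumptions; a real cron job would feed it the instance metadata returned by the cloud API and then delete each hit:

```python
from datetime import datetime, timedelta, timezone

# Assumed policy: any tagged ephemeral database older than this is a leak.
MAX_AGE = timedelta(hours=6)

def find_leaked(instances, now=None):
    """Return IDs of ephemeral instances that outlived their expected lifetime.

    Each instance is a dict like:
      {"id": "db-123", "tags": {"ephemeral": "true"}, "created": <datetime>}
    """
    now = now or datetime.now(timezone.utc)
    return [
        inst["id"]
        for inst in instances
        if inst["tags"].get("ephemeral") == "true"
        and now - inst["created"] > MAX_AGE
    ]
```

Separating "detect" from "destroy" like this also makes it easy to run the detector in a dry-run mode that only reports leaks before you trust it to delete anything.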

> Do you reckon this accessibility is a combination of

Yes, exactly. All of our build pipelines for a repository live in the .github/workflows folder at the root of the repo. That makes it easier for team members to feel comfortable making changes and submitting a PR for them. You can also set up act to test GitHub Actions changes locally, in a container, before pushing them (see https://github.com/nektos/act ).

> Out of curiosity, how reliable do you find the environment cleanup?

So far, environment cleanup has been reliable, though once in a blue moon I have seen it fail to clean up some provisioned resources. I blame this more on our code than on GitHub Actions. I periodically review our sandbox environments to make sure we didn't miss deleting anything.