FWIW, there's nektos/act[^1], which aims to duplicate GHA behavior locally, but I haven't tried it yet.
There are many pipelines you can't run locally (because they're production, for example), but there's no reason we can't capture those workflows and run them locally at less critical stages of development. Garden offers portable pipelines and then adds caching across your entire web of dependencies. Some of our customers see 80% or higher reductions in run times, and with our Garden Workflows devs get immediate feedback on which tests fail or pass without pushing to git first.
We're OSS. [2]
In the past I used Jenkins with webhooks to run builds, which worked OK. Actually, I think we just polled GitHub for changes instead of using the webhook, but the effect is similar.
- https://gitea.io (repos)
- https://discourse.org (forums)
- https://github.com/nektos/act (CI)
- https://www.goatcounter.com (analytics)
- https://bestpractical.com/request-tracker (support)
- https://couchdb.apache.org (a replica db to back up https://rxdb.info [client db])
- deps: nginx, redis, postgres, mqtt
# Life
- https://matrix.org (comms)
- https://www.teamspeak.com (p2p voip for gaming)
- https://nextcloud.com (files, dav, etc.)
- https://jellyfin.org (+ the sync & swarm shit, radarr, etc.)
- https://mopidy.com (audio)
- https://photoprism.app (photos)
- https://actualbudget.com (finance)
- http://tileserver.org (map tiles)
- https://github.com/FreeTAKTeam/FreeTakServer (hiking nav)
...and more (reply to initiate detail sequence)
Conda > Adding packages > Running unit tests: https://conda-forge.org/docs/maintainer/adding_pkgs.html#run...
From https://github.com/thonny/thonny/issues/2181 :
> * https://conda-forge.org/docs/maintainer/updating_pkgs.html
> Pushing to regro-cf-autotick-bot branch: When a new version of a package is released on PyPI/CRAN/.., we have a bot that automatically creates version updates for the feedstock. In most cases you can simply merge this PR and it should include all changes. When certain things have changed upstream, e.g. the dependencies, you will still have to do changes to the created PR. As feedstock maintainer, you don't have to create a new PR for that but can simply push to the branch the bot created. There are two alternatives […]
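For concreteness, pushing to the bot's branch can look roughly like this sketch (the remote and branch names are placeholders; the real fork and branch are shown on the PR page):

```sh
# Sketch only: the bot opens its PR from a fork, so add that fork as a remote.
# <feedstock> and <bot-branch> are placeholders; copy the real values from the PR.
git remote add bot https://github.com/regro-cf-autotick-bot/<feedstock>.git
git fetch bot
git checkout -b fix-bot-pr bot/<bot-branch>
# ...edit recipe/meta.yaml as needed (e.g. dependency changes)...
git push bot HEAD:<bot-branch>    # the bot's PR picks up the new commits
```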
nektos/act is one way to run a github-actions.yml build definition locally, without a CI service (e.g. GitLab Runner, which requires roughly `--privileged` access to the Docker/Podman socket), to check whether you get exactly the same build artifacts as the CI build farm: https://github.com/nektos/act
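A few representative act invocations (the workflow path and job name are examples):

```sh
act push                                      # run jobs for a simulated push event
act -W .github/workflows/build.yml -j build   # run one job from one workflow file
act --artifact-server-path /tmp/artifacts     # keep uploaded artifacts around for inspection
```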
A multi-stage Dockerfile has multiple FROM instructions: you can 1) build a container for running the build, with build essentials like a compiler (GCC, LLVM) and packaging tools and keys; and then 2) COPY the build artifact (probably one or more signed software packages) `--from` the build-stage container into a production container that appropriately lacks a compiler. https://www.google.com/search?q=multi+stage+Dockerfile
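A minimal sketch of the pattern, with illustrative image names and a toy C program standing in for the real build:

```sh
# Stage 1 builds with a full toolchain; stage 2 ships only the artifact.
printf 'int main(void){return 0;}\n' > hello.c
cat > Dockerfile <<'EOF'
FROM gcc:13 AS build
WORKDIR /src
COPY hello.c .
RUN gcc -static -o hello hello.c

FROM scratch
COPY --from=build /src/hello /hello
ENTRYPOINT ["/hello"]
EOF
docker build -t hello:prod .
```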
Are there guidelines for excluding entropy like the commit hash and build time, so that the artifact hashes are exactly the same, i.e. reproducible on my machine too?
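One widely used convention is `SOURCE_DATE_EPOCH` from the Reproducible Builds project; a sketch, assuming your build tooling honors it (the `make build` step is a placeholder):

```sh
# Pin the embedded "build time" to the last commit's timestamp instead of
# the wall clock, so rebuilding the same commit yields the same bytes.
# Whether it is honored depends on your build tools.
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
make build
```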
There do appear to be some solutions on the marketplace that will let you SSH into a runner, but I haven't had a need for them as of yet.
I recently used act[0] to test my GitHub Actions pipelines locally, and it worked okay. Being able to interact with the Dagger API via a Python SDK could be even more convenient; I will definitely try it!
But yes, your observation is my whole complaint: it is not _reasonable_ to ask a GLCI developer to run a local copy of GL, complete with any shared GLCI template repos, in a local docker container just to have local execution. Maybe I wouldn't complain about it so much had I not started with circleci so long ago and had such a "wow, this is amazing" followed by gitlab-runner's :troll_face: -- to say nothing of GitHub Actions just straight up ignoring that whole demographic and hoping https://github.com/nektos/act emulates enough to have people not notice the massive feature gap
> What does mise in place tell me? Separate the tasks. Find a way to prep just the action file, then just the credentials, then the idiosyncrasies of my projects. After that I can integrate the pipeline with the documentation proper.
The problem I find is that unless you've done something very similar before (and recently, because services and APIs are always in flux), what can happen is you complete a few tasks, then on a later task you find some gotcha that means your original approach isn't going to work, so you come up with a workaround which requires you to redo/change the earlier tasks. Development whack-a-mole.
So what I tend to do is take the riskiest task first, do just enough to convince myself it can be finished, then move on to the next task etc. and then iterate back around filling in the gaps until I'm done.
By the way, I recently did some GitHub Actions automation and it was super frustrating. There's no official way to run/test this locally (there's https://github.com/nektos/act which doesn't replicate a lot of features), so you end up editing-saving-pushing-waiting-diagnosing (e.g. even just for basic syntax errors) in a painfully slow feedback loop until it eventually works, and then you hope it doesn't break later. Because of this, I'd rather just stick to the simplest single script I can come up with than use GitHub Actions features.
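For what it's worth, that "simplest single script" approach can be as small as this sketch: all logic in one locally runnable script (the file name is hypothetical), with the workflow reduced to a single `run:` step:

```sh
#!/usr/bin/env bash
# ci/build.sh (hypothetical): all the real logic lives here, so the exact
# same commands run locally and in CI. The workflow file then needs only:
#   - run: ./ci/build.sh
set -euo pipefail
pip install -r requirements.txt
pytest
```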
There is act[0], which aims to let you run GitHub Actions locally via Docker. It isn't perfect, but it does a decent job, and for the most part your pipeline can be run locally.
After MS bought GH, I had hopes that they would build a tool to run actions locally, but nothing yet.
I don't think GitHub is trying to create lock-in; rather, I think they were trying to make it easy to share actions (I'm not sure any other CI system is designed around an ecosystem of publicly shared actions). The actions are public, and therefore it's easy to make something that interprets them.
I can only guess that at some point there will be a push for CIs to converge on some "actions" standard, maybe?
We install enough utilities with `brew` to get pyenv working, then use that to build all Python versions. Then, IIRC, `brew install pipx` (or maybe it's `pip3 install --user pipx`). Anyway, that's the only Python library binary installed outside a venv.
Pipx installs isort, black, dvc, and pre-commit.
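Roughly, the bootstrap described above (version numbers are illustrative, not pinned by the comment):

```sh
brew install pyenv pipx            # or: pip3 install --user pipx
pyenv install 3.11.9               # repeat per supported Python version
pipx install isort                 # each tool gets its own isolated venv
pipx install black
pipx install dvc
pipx install pre-commit
```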
Every repo has a Makefile. This drives all the common operations. pyproject.toml (or eslint.json?) sets the config for isort and black (or eslint). `make format` runs isort and black on Python, eslint on JS. `make lint` just verifies.
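A minimal sketch of what those Makefile targets might contain (the exact flags are assumptions; written via heredoc because make recipes require literal tab indentation):

```sh
cat > Makefile <<'EOF'
format:
	isort .
	black .
	npx eslint --fix .

lint:
	isort --check-only .
	black --check .
	npx eslint .
EOF
```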
Pre-commit only runs the lint; it doesn't format. It also runs some scripts to ensure you aren't accidentally committing large files. Pre-commit also runs several DVC actions (the default dvc hooks) on commit, push, and checkout. These run in a venv managed by pre-commit. We just pin the version.
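A rough sketch of such a .pre-commit-config.yaml; the revs are placeholders, the dvc hook ids follow DVC's documented pre-commit integration, and `additional_dependencies` is the "special line" needed for extras like dvc[s3]:

```sh
cat > .pre-commit-config.yaml <<'EOF'
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: check-added-large-files
  - repo: https://github.com/iterative/dvc
    rev: 3.50.0
    hooks:
      - id: dvc-pre-commit
        additional_dependencies: ["dvc[s3]"]
        stages: [commit]
      - id: dvc-pre-push
        additional_dependencies: ["dvc[s3]"]
        stages: [push]
      - id: dvc-post-checkout
        additional_dependencies: ["dvc[s3]"]
        stages: [post-checkout]
        always_run: true
EOF
```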
GitHub Actions has a dedicated lint.yaml which runs a Python linter action. The black version pinned there defines which black pipx installs. We use `act` when we want to see how an action runs without pushing a commit just to trigger jobs.
As an aside, I'm still fiddling with the dvc `pre-commit` post-checkout hooks. They don't always pull the files when they ought to.
Most of the actual unit/integration tests run in containers, but they can run in a venv with the same logic, thanks to the Makefile. We use a DVC action to sync files in CI.
So yeah, there are technically two copies of black and dvc, but we just use pinning. In practice, we've only had one issue with a discrepancy in behavior locally vs. CI: local black not catching a rule to avoid ''' for docstrings; using """ fixed it. On the whole, pre-commit saves us from a lot of annoying goofs, but the CI system is law, so we largely harmonize against that.
IMHO, this is the least egregious "double accounting" we have in local vs. staging CI vs. production CI (I lost that battle; the manager would rather keep staging.yaml and production.yaml than parameterize. Shrug.gif).
Other knowledge nuggets:
- pre-commit manages its own dependencies. This leads to surprising behavior if you aren't expecting it, e.g. you need a special line to specify dvc[s3].
- black has yet to release a non-beta semver, which messes with solvers. This is super annoying. They might as well use 0ver if they don't want to commit to stability. Don't expect any kind of stability of formatting between versions. Hope they settle down soon.
- git-lfs is a nightmare. Two projects at $lastco used it. It's more trouble than it's worth. Just use DVC for yucky files. I have no affiliation with dvc, other than a few bug reports.
- makefiles are great and IMHO underrated. But they have their limits. More complex logic should be broken out into scripts.
- python dependency management is still a kafkaesque nightmare. I say this with over a decade of python experience and it's my favorite language despite this.
- Suggestions welcome!
Technologies referenced:
https://github.com/iterative/setup-dvc
- Have a sandbox repo or two just for trying things out and playing around
- Use Act to test your workflows locally. It doesn't support all the GH Actions functionality just yet, but it's got all the necessities and most niceties, and saves you having to constantly commit and push just to try things.
Regarding general testability with GitHub Actions, I'd recommend checking out the tmate action[3], which lets you debug your CI run with SSH.
[1] https://github.com/nektos/act
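The tmate step is typically dropped into an existing job; something like this sketch (indentation assumes it lands inside a job's `steps:` list, and @v3 is assumed current):

```sh
cat >> .github/workflows/ci.yml <<'EOF'
      - name: Debug over SSH on failure
        if: ${{ failure() }}
        uses: mxschmitt/action-tmate@v3
EOF
```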
By the way, just discovered a nice library to test GitHub Actions locally: https://github.com/nektos/act
Yes, exactly. All of our build pipelines for a repository are included in the .github folder in the root of the repo. It makes it easier for team members to feel comfortable making changes and submitting a PR for them. You can set up an act container to test GitHub Actions changes locally before pushing them, too (see https://github.com/nektos/act ).
> Out of curiosity, how reliable do you find the environment cleanup is?
So far, environment cleanup has been reliable, though I have noticed it fail to clean up some provisioned resources once in a blue moon. I blame this more on our code than on GitHub Actions. I periodically review our sandbox environments to ensure we didn't miss deleting anything.
You might also want to take a look at act, which lets you run GitHub Actions locally; this is typically how I do actions development:
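(A sketch of that loop; the event and job names are illustrative.)

```sh
act -l                    # list the jobs a push event would trigger
act push -j test          # run just the `test` job locally in Docker
act pull_request          # simulate a full pull_request event
```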
I'm curious what prevents you from writing your own actions in typescript now?
I'm confused as to the difference between https://github.com/nektos/act and https://github.com/actions/runner
Is it not possible to use 'runner' to run github actions locally? The docs for this seem like they are extremely sparse - probably because this competes with the paid option?
[1]: https://github.com/mxschmitt/action-tmate
[2]: https://github.com/nektos/act
I did experience a bad outcome using act[0] for a super weird matrix-y workflow, but I'd say for the most part it behaved sanely; did you report your complaints to their issue tracker?
* https://github.com/nektos/act
* https://github.com/phishy/wflow