January 20th, 2025

I'll think twice before using GitHub Actions again

The author criticizes GitHub Actions for its limitations in monorepo setups, including complex merging processes, inadequate local support, and maintenance challenges, suggesting alternatives like GitLab or Jenkins.

The author expresses dissatisfaction with GitHub Actions, particularly in the context of a monorepo setup used by their team of 15 engineers. They highlight several limitations, including challenges with pull requests and required checks, where checks only run for specific folders, complicating the merging process. The author notes that workarounds are difficult and costly, as they require running additional pipelines. They also criticize the complexity of managing growing pipelines, which often necessitate numerous conditional statements and lead to duplicated code. The lack of local development support for GitHub Actions is another significant drawback, with existing tools proving inadequate. The author feels that GitHub has shown little interest in addressing these long-standing issues, leading to frustration within the community. As a result, they suggest considering alternative CI/CD solutions like GitLab, Jenkins, or TeamCity, which they believe may offer better services.

- GitHub Actions may not be suitable for all development environments, particularly in monorepo setups.

- The merging process can be complicated due to required checks that only run for specific folders.

- Managing complex pipelines in GitHub Actions can lead to increased maintenance and duplicated code.

- There is a lack of effective local development support for GitHub Actions.

- The author recommends exploring alternative CI/CD tools due to perceived neglect from GitHub regarding user concerns.

AI: What people are saying
The comments reflect a range of opinions on GitHub Actions, particularly regarding its limitations and alternatives.
  • Many users express frustration with GitHub Actions' inability to effectively handle monorepos, leading to complex workflows and maintenance issues.
  • Several commenters suggest that relying on external CI tools like Jenkins or GitLab may provide better support and flexibility.
  • There is a consensus that local testing capabilities are inadequate, with users advocating for better local execution options.
  • Some users recommend structuring CI logic in scripts rather than within GitHub Actions to improve portability and ease of testing.
  • Concerns about security and performance issues with GitHub Actions are frequently mentioned, highlighting the need for better documentation and reliability.
57 comments
By @arghwhat - 16 days
> no way of running actions locally

My policy is to never let pipeline DSLs contain any actual logic outside orchestration for the task, relying solely on one-liner build or test commands. If the task is more complicated than a one-liner, make a script for it in the repo to make it a one-liner. Doesn't matter if it's GitHub Actions, Jenkins, Azure DevOps (which has super cursed yaml), etc.

This in turn means that you can do what the pipeline does with a one-liner too, whether manually, from a vscode launch command, a git hook, etc.

This same approach can fix the mess of path-specific validation too - write a regular script (shell, python, JS, whatever you fancy) that checks what has changed and calls the appropriate validation script. The GitHub action is only used to run the script on PR and to prepare the CI container for whatever the script needs, and the same pipeline will always run.
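
A minimal sketch of that kind of dispatcher, assuming a hypothetical layout where each top-level folder has a `ci/validate-<folder>.sh` script and the base ref is passed in by the workflow:

    #!/usr/bin/env bash
    # ci/validate-changed.sh - run validation only for top-level folders touched by this change.
    set -euo pipefail

    base="${BASE_REF:-origin/main}"   # the workflow would pass the PR's base branch here

    # Top-level folders with changes between the base branch and HEAD.
    changed_dirs=$(git diff --name-only "${base}...HEAD" | cut -d/ -f1 | sort -u)

    for dir in $changed_dirs; do
      script="ci/validate-${dir}.sh"
      if [[ -x "$script" ]]; then
        echo "Changes detected in ${dir}, running ${script}"
        "$script"
      fi
    done

The GitHub Actions job then shrinks to a checkout (with enough history to diff against the base branch) and a single `run: ./ci/validate-changed.sh` step, and the same script works from a git hook or a plain terminal.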

By @benrutter - 16 days
Oh boy, there's a special kind of hell I enter into every time I set up new GitHub Actions. I wrote a blog post a few months ago about my pain[0], but one of the main things I've found over the years is that you can massively reduce how horrible writing GitHub Actions is by avoiding prebuilt actions and just using it as a handy shell runner.

If you write behaviour in python/ruby/bash/hell-rust-if-you-really-want and leave your GitHub Action at `run: python some/script.py`, then you'll have something that's much easier to test locally, and you'll save yourself a lot of pain, even if you wind up with slightly more boilerplate.

[0] https://benrutter.github.io/posts/github-actions/
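
Taken to its logical end, the workflow file is almost entirely boilerplate. A hedged sketch, reusing the script path from the comment above (Python version and action versions are illustrative):

    name: checks
    on: [pull_request]
    jobs:
      checks:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.12"
          # All real behaviour lives in the repo, so the same command runs locally.
          - run: python some/script.py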

By @spooneybarger - 15 days
A lot of folks in this thread are focusing on the monorepo aspect of things. The "Pull request and required checks" problem exists regardless of monorepo or not.

GitHub Actions allows you to only run checks if certain conditions are met, like "only lint markdown if the PR contains *.md files". The moment you decide to use such rules, you have the "Pull request and required checks" problem. No "monorepo" required.

GitHub required checks at this time can be used with external services, where GitHub has no idea what might run. For this reason, required checks HAVE to pass; there's no "if it runs" step. A required check on an external service might never run, or it might be delayed. Therefore, if GH doesn't have an affirmation that it passed, you can't merge.

It would be wonderful if, for jobs that run on GH where GH can know whether the action is supposed to run, required checks could mean "require all of these checks if they will be triggered".

I have encountered this problem on every non-trivial project I use with GitHub actions; monorepo or not.
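
For concreteness, a hedged sketch of the kind of path-filtered workflow that creates the problem: the moment the `lint-markdown` job below is also listed as a required check, a PR that touches no Markdown files never gets a status for it and cannot be merged.

    # lint-markdown.yml (illustrative)
    on:
      pull_request:
        paths:
          - "**/*.md"
    jobs:
      lint-markdown:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./ci/lint-markdown.sh   # hypothetical lint script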

By @p1necone - 15 days
There's a workaround for the 'pull request and required check' issue. You create an alternative 'no op' version of each required check workflow that just does nothing and exits with code 0, triggered by the inverse of the trigger for the "real" one.

The required check configuration on GitHub is just based off of the job name, so either the trigger condition is true and the real one has to succeed, or the trigger condition is false and the no-op one satisfies the PR completion rules instead.

It seems crazy to me that such basic functionality needs such a hacky workaround, but there it is.
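
A hedged sketch of that workaround, continuing the Markdown example from the previous comment (file names illustrative); both workflows define a job with the same name, so whichever one triggers turns the required check green:

    # lint-markdown-noop.yml - the inverse trigger of the real workflow
    on:
      pull_request:
        paths-ignore:
          - "**/*.md"
    jobs:
      # Same job name as in lint-markdown.yml, so branch protection is
      # satisfied even when the real workflow never runs.
      lint-markdown:
        runs-on: ubuntu-latest
        steps:
          - run: echo "No Markdown changes, nothing to lint."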

By @nunez - 16 days
Posts like this make me miss Travis. Travis CI was incredible, especially for testing CI locally. (I agree with the author that act is a well done hack. I've stopped using it because of how often I'd have something pass in act and fail in GHA.)

> GitHub doesn't care

My take: GitHub only built Actions to compete against GitLab CI, as built-in CI was taking large chunks of market share from them in the enterprise.

By @ryanisnan - 15 days
One really interesting omission from this post is how the architecture of GitHub Actions encourages (or at the very least makes deceptively easy) making bad security decisions.

Common examples are secrets. Organization or repository secrets are very convenient, but they are also massive security holes just waiting for unsuspecting victims to fall into.

Repository environments have the ability to have distinct secrets, but you have to ensure that the right workflows can only access the right environments. It's a real pain to manage at scale.

Being able to `inherit` secrets also is a massive footgun, just waiting to leak credentials to a shared action. Search for and leak `AWS_ACCESS_KEY_ID` anyone?

Cross-repository workflow triggering is also a disaster, and in some circumstances you can abuse the differences in configuration to do things the source repository didn't intend.

Other misc. things about GHA are cool in theory but fall down in practice. One example is the wait-timer concept of environments. If you have a multi-job workflow using the same environment, wait-timer applies to EACH JOB in the environment. So if you have a build-and-test workflow with 2 jobs, one for build and one for test, each job will wait `wait-timer` before it executes. This makes the feature impossible to use for things like multi-environment deployment pipelines, unless you refactor your workflows.

Overall, I'd recommend against using GHA and looking elsewhere.

By @ripped_britches - 15 days
My man/woman - you gotta try Buildkite. It’s a bit of extra setup since you have to interface with another company, more API keys, etc. But when you outgrow GH Actions, this is the way. Have used Buildkite in my last two jobs (big US tech companies) and it has been the only pleasant part of CI.
By @bramblerose - 16 days
In the end, this is the age-old "I built my thing on top of a 3rd-party platform, it doesn't quite match my use case (anymore) and now I'm stuck".

Would GitLab have been better? Maybe. But chances are that there is another edge case that is not handled well there. You're in a PaaS world, don't expect the platform to adjust to your workflow; adjust your workflow to the platform.

You could of course choose to "step down" (PaaS to IaaS) by just having a "ci" script in your repo that is called by GA/other CI tooling. That gives you immense flexibility, but you also lose specific features (e.g. pipeline display).

By @tevon - 16 days
I call writing GitHub Actions "Search and Deploy": constantly pushing to a branch to get an action to run is a terrible pattern...

You'd think, especially with the deep VS Code integration, they'd have at least a basic sanity-check locally, even if not running the full pipeline.

By @hinkley - 15 days
Re: monorepo

> In GitHub you can specify a "required check", the name of the step in your pipeline that always has to be green before a pull request is merged. As an example, I can say that web-app1 - Unit tests are required to pass. The problem is that this step will only run when I change something in the web-app1 folder. So if my pull request only made changes in api1 I will never be able to merge my pull request!

Continuous Integration is not continuous integration if we don’t test that a change has no deleterious side effects on the rest of the system. That’s what integration is. So if you aren’t running all of the tests because they’re slow, then you’re engaging in false economy. Make your tests run faster. Modern hardware with reasonable test runners should be able to whack out 10k unit tests in under a minute. The time to run the tests goes up by a factor of ~7-10, depending on framework, as you climb each step in the testing pyramid. And while it takes more tests to cover the same ground, with a little care you can still almost halve the run time by replacing one test with a handful of tests that check the same requirement one layer down, or cut it by about 70% by moving down two layers.

One thing that’s been missing from most of the recent CI pipelines I’ve used is being able to see that a build is going to fail before the tests finish. The earlier the reporting of the failure the better the ergonomics for the person who triggered the build. That’s why the testing pyramid even exists.

By @androa - 16 days
GitHub (Actions) is simply not built to support monorepos. Square peg in a round hole and all that. We've opted for using `meta` to simulate monorepos, while being able to use GitHub Actions without too many downsides.
By @keybored - 15 days
Why is this so difficult?

1. We apparently don’t even have a name for it. We just call it “CI” because that’s the adjacent practice. “Oh no the CI failed”

2. It’s conceptually a program that reports failure if whatever it is running fails and... that’s it

3. The long-standing principle of running “the CI” after merging is so backwards that that-other Hoare disparagingly named the correct way (guard “main” with a bot) The Not Rocket Science Principle or something. And that smug blog title is still used to this day (or “what bors does”)

4. It’s supposed to be configured declaratively but in the most gross way that “declarative” has ever seen

5. In the true spirit of centralization “value add”: the local option of (2) (report failure if failed) has to be hard or at the very least inconvenient to set up

I’m not outraged when someone doesn’t “run CI”.

By @bhaney - 16 days
Article title: "[Common thing] doesn't work very well!"

Article body: "So we use a monorepo and-"

Tale as old as time

By @jcarrano - 15 days
The general philosophy of these CI systems is flawed. Instead of CI running your code, your code should run the CI. In other words, the CI should present an API such that one can have arbitrary code which informs the system of what is going on. E.g. "I'm starting jobs A,B,C", "Job A done successfully", "This file is an artifact for job B".

Information should only flow from the user scripts to the CI, and communication should be done by creating files in a specific format and location. This way the system can run and produce the same results anywhere, provided it has the right environment/container.

By @posix86 - 15 days
One thing that sounds very nice about GitHub is merge queues: once your PR is ready, rather than merging, you submit it to the merge queue, which will rebase it on the last PR also in the queue. It then runs the CI on each PR and finally merges them automatically once successful. If CI fails, the PR doesn't get merged, and the next PR in the chain skips yours.

Still a lot of computation & some wait time, but you can just click & forget. You can also parallelize it; since branches are rebased on each other, you can run CI in advance and, assuming your predecessor is also successful, reuse the result from yours.

Only available for enterprise orgs though.
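
One practical detail: workflows only run for queued PRs if they subscribe to the dedicated event. A minimal hedged sketch (the test entry point is hypothetical):

    on:
      pull_request:
      # Fired for the temporary branches the merge queue creates;
      # without it the queued CI run never starts.
      merge_group:
    jobs:
      ci:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: ./ci/run-tests.sh   # hypothetical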

By @jpgvm - 15 days
Use Bazel.

GHA/Gitlab CI/Buildkite/whatever else floats your boat then just builds a bunch of Bazel targets, naively, in-order etc. Just lean on Bazel fine-grained caching until that isn't enough anymore and stir in remote build execution for more parallelism when you need it.

This works up until ~10M+ lines of code or ~10ish reasonably large services. After that you need to do a bit more work to only build the graph of targets that have been changed by the diff. That will get you far enough that you will have a whole team that works on these problems.

Allowing the CI tools to do any orchestration or dictate how your projects are built is insanity. Expressing dependencies etc. in YAML is the path to darkness and is only really justifiable for very small projects.
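
Under that division of labour the workflow stays tiny; a hedged sketch, assuming Bazel (or Bazelisk) is already available on the runner image or installed by a setup action:

    jobs:
      bazel:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Build and test everything; Bazel's own caching decides what
          # actually needs to run. Remote cache / remote execution flags
          # would normally live in .bazelrc, not in the workflow.
          - run: bazel test //...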

By @flohofwoe - 16 days
IMHO the main problem with GH Actions is that the runners are so slow. Feels like running your build on a frigging C64 sometimes ;)
By @zxor - 15 days
> The problem is that this step will only run when I change something in the web-app1 folder. So if my pull request only made changes in api1 I will never be able to merge my pull request!

This just seems like a bad implementation to me?

There are definitely ways to set up your actions so that they run all of the unit tests without changes if you'd like, or so that api1's unit tests are not required for a web-app1 related PR to be merged.

By @SamuelAdams - 15 days
> Our code sits in a monorepo which is further divided into folders. Every folder is independent of each other and can be tested, built, and deployed separately.

If this is true, and you still have problems running specific Actions, why not break this into separate repositories?

By @OptionOfT - 16 days
So the way I've solved the multiple folders with independent checks is like this:

    all-done:
      name: All done
      # this is the job that should be marked as required on GitHub. It's the only one that'll reliably trigger.
      # when any upstream fails: failure
      # when all upstream skip: pass
      # when all upstream succeed: success
      # combination of upstream skip and success: success
      runs-on: ubuntu-latest
      needs:
        - calculate-version
        - cargo-build
        - cargo-fmt
        - cargo-clippy-and-report
        - cargo-test-and-report
        - docker-build
        - docker-publish
      if: |
        always()
      steps:
        - name: Fail!
          shell: bash
          if: |
            contains(needs.*.result, 'failure') ||
            contains(needs.*.result, 'cancelled')
          run: |
            echo "One / more upstream failed or was cancelled. Failing job..."
  
            exit 1
  
        - name: Success!
          shell: bash
          run: |
            echo "Great success!"

That way it is resilient against checks not running because they're not needed, but it still fails when any upstream actually fails.

Now, I did end up running the tests of the front-end and back-end because they upload coverage, and if my coverage tool doesn't get both, it'll consider it as a drop in coverage and fail its check.

But in general, I agree with the writer of the post that it all feels like it's not getting enough love.

For example, there is no support for yaml anchors, which really hampers reusability on things that cannot be extracted to separate flows (not to mention separate flows can only be nested 4 deep).

There is also the issue that any commit made by GitHub Actions doesn't trigger another build. This is understandable, as you want to avoid endless builds, but sometimes it's needed, and then you need to do the ugly workaround with a PAT (and I believe it can't even be a fine-grained one). Combine that with policies that set a maximum lifetime on tokens, and your build becomes brittle, as now you need to chase down the person with admin access.
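
The usual shape of that workaround is to check out with a PAT instead of the default token, since pushes made with `GITHUB_TOKEN` deliberately don't trigger new workflow runs; a hedged sketch where `BOT_PAT` is a hypothetical secret:

    steps:
      - uses: actions/checkout@v4
        with:
          # Commits pushed with this PAT will trigger workflows again,
          # unlike commits pushed with the default GITHUB_TOKEN.
          token: ${{ secrets.BOT_PAT }}   # hypothetical PAT secret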

Then there is the issue of Docker actions. They tell you to pin the action to a SHA to prevent replacements, except the action itself points to a replaceable tag.

Lastly, there is a bug where when you create a report for your action, you cannot specify the parent it belongs to. So your ESLint report could be made a child of your coverage report.

By @cjk - 15 days
I have never used a CI system more flaky and slow than GitHub Actions. The one and only positive thing about it is that you get some Actions usage for free.

The Azure machines GitHub uses for the runners by default have terrible performance in almost every regard (network, disk, CPU). Presumably it would be more reliable when using your own runners, but even the Actions control plane is flaky and doesn't always schedule jobs correctly.

We switched to Buildkite at $DAYJOB and haven't looked back.

By @ironfootnz - 15 days
I’ve seen many teams get stuck when they rely too heavily on GitHub Actions’ magic. The key issue is how tightly your build logic and config become tied to one CI tool. If the declarative YAML gets too big and tries to handle complex branching or monorepos, it devolves into a maintenance headache—especially when you can’t test it locally and must push blind changes just to see what happens.

A healthier workflow is to keep all the logic (build, test, deploy) in portable scripts and let the CI only orchestrate each script as a single step. It’s easier to troubleshoot, possible to run everything on a dev machine, and simpler if you ever migrate away from GitHub.

For monorepos, required checks are maddening. This should be a first-class feature where CI can dynamically mark which checks apply on a PR, then require only those. Otherwise, you do hacky “no-op” jobs or you force your entire pipeline to run every time.

In short, GitHub Actions can be powerful for smaller codebases or straightforward pipelines, but if your repo is big and you want advanced control, it starts to feel like you’re fighting the tool. If there’s no sign that GitHub wants to address these issues, it’s totally reasonable to look elsewhere or build your own thin orchestration on top of more flexible CI runners.

By @ashishb - 15 days
There are a lot of subtle pitfalls as well. Like no default timeouts, excess permissions etc.
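
Both of those particular pitfalls take only a couple of lines per workflow to address; a minimal hedged sketch (script path hypothetical):

    # Scope the GITHUB_TOKEN down and cap runtime; the default job
    # timeout is 360 minutes.
    permissions:
      contents: read
    jobs:
      test:
        runs-on: ubuntu-latest
        timeout-minutes: 15
        steps:
          - uses: actions/checkout@v4
          - run: ./scripts/test.sh   # hypothetical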

I wrote about it in detail at https://ashishb.net/tech/common-pitfalls-of-github-actions/ and even created a tool to generate good configs: http://github.com/ashishb/gabo

By @rednafi - 16 days
You can't run AWS Lambda or DynamoDB locally either (well, you can, but it's a hassle). So by that logic, we shouldn't use them at all. I don't like working with CI either, but I'll take GitHub Actions over Jenkins/CircleCI/TravisCI any day.
By @xinayder - 16 days
I tried to use GitHub Actions on Forgejo and... It's so much worse than using an actual CI pipeline.

With Woodpecker/Jenkins you know exactly what your pipeline is doing. With GitHub actions, not even the developers of the actions themselves know what the runner does.

By @baobun - 15 days
GitHub Actions supporting yaml anchors would resolve one of the gripes, which I share.

https://github.com/actions/runner/issues/1182

By @verdverm - 15 days
Monorepos come with a different set of tradeoffs from polyrepos. Both have their pains. We have a similar setup with Jenkins, and have used CUE to tame a number of these issues. We did so by creating (1) a software catalog and (2) per-branch config for versions and CI switches.

Similarly, we are adopting Dagger, more as part of a larger "containerize all of our CI steps" which works great for bringing parity to CI & local dev work. There are a number of secondary benefits and the TUI / WUI logs are awesome.

Between the two, I have removed much of the yaml engineering in my work

By @kjuulh - 16 days
I am biased because I built the rust SDK for dagger. But I think it is a real step forward for CI. Is it perfect? Nope. But it allows fixing a lot of the shortcomings the author mentions.

Pros:

- pipeline as code, write it as golang, python, typescript or a mix of the above.

- Really fast once cached

- Use your languages library for code sharing, versioning and testing

- Runs everywhere: local, CI, etc. Easy to change from GitHub Actions to something else.

Cons:

- Slow on the first run. Lots of pulling of docker images

- The DSL and modules can feel foreign initially.

- Modules are definitely a framework, I prefer just building a binary I can ship (which is why the rust SDK doesn't support modules yet).

- Doesn't handle large monorepos well; it relies heavily on caching and currently runs on a single node. It can work if you don't have 100s of services, especially if the builder is a large machine.

Just the fact that you can actually write CI pipelines that can be tested, packaged, versioned etc. allows us to ship our pipelines as products, which is quite nice and something we've come to rely on heavily.

By @zzo38computer - 15 days
I do not use GitHub Actions for these purposes, and if I did, I would want to ensure that it is a file that can run locally or whatever else just as well. I don't use GitHub Actions to prevent pull requests from being merged (I will always manage them manually), and do not use GitHub Actions to manage writing the program, for testing the program (it would be possible to do this, but I would insist on doing it in a way that is not vendor-locked to GitHub, and by putting most of the stuff outside of the GitHub Actions file itself), etc.

I do have a GitHub Actions file for a purpose which is not related to the program itself; specifically, for auto-assignment of issues. In this case, it is clearly not intended to run locally (although in this case you could do so anyways if you could install the "gh" program on your computer and run the command mentioned there locally, but it is not necessary since GitHub will do it automatically on their computer).

  on:
    issues:
      types:
        - opened
    pull_request:
      types:
        - opened
  permissions:
    contents: read
    issues: write
    pull-requests: write
  jobs:
    default:
      runs-on: ubuntu-latest
      steps:
        - run: gh issue edit ${{ github.event.issue.number }} --add-assignee ${{ github.repository_owner }}
          env:
            GH_TOKEN: ${{ github.token }}
            GH_REPO: ${{ github.repository }}
By @habosa - 15 days
Shameless plug but I built GitGuard (https://gitguard.dev) to solve the "Pull request and required checks" problem mentioned here (and other problems).

Basically: you set GitGuard as your required check and then write a simple GitGuard workflow like this:

    if anymatch(pull_files,"src/backend/.*") {
      assert(checkpassed("backend-tests"))
    }
Email in my bio for anyone interested.
By @fuzzy2 - 15 days
I hate GitHub Actions, and I hate Azure Pipelines, which are basically the same. I especially hate that GitHub Actions has the worst documentation.

However, I’ve come full circle on this topic. My current position is that you must go all-in with a given CI platform, or forego the benefits it offers. So all my pipelines use all features, to offer a great experience for devs relying on them: Fast, reproducible, steps that are easy to reason about, useful parameters for runs, ...

By @angoragoats - 16 days
Why is this team sticking multiple directories that are “independent of each other” into a single repository? This sounds like a clear case of doing version control wrong. Monorepos come with their own set of challenges, and I don’t think there are many situations where they’re actually warranted. They certainly don’t help for completely independent projects.
By @forty - 15 days
For the first point, some monorepo orchestrators (I'm thinking of at least pnpm) have a way to run, for example, all the tests for all the packages that have changed from the master branch, plus all packages that transitively depend on those packages.

It's very convenient and avoids having to mess with the CI limitations on the matter.
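
With pnpm that selection is a single filter expression; a hedged sketch of what the CI step could run (branch name is illustrative, and the runner needs enough git history to diff against it):

    # Test every package changed since origin/master, plus every package
    # that transitively depends on one of them ("..." adds dependents).
    pnpm --filter "...[origin/master]" test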

By @aa-jv - 16 days
I use Github Actions as a fertile testing playground to work out how to do things locally.

For example, if you've ever had to wade into the codesigning/notarization quagmire, observing the methods projects use with GitHub Actions to do it can teach you a lot about how to do things locally.

By @theknarf - 15 days
Just have GitHub Actions run a monorepo tool like Turborepo. You're just trying to do too much in a yaml file... The solution for all build pipeline tools is always to do most of your build logic in a bash-script/makefile/monorepo-tool.
By @joshdavham - 16 days
> It's a known thing that there is no way of running GitHub Actions locally. There is a tool called act but in my experience it's subpar.

I really hope there will be a nice, official tool to run gh actions locally in the future. That would be incredible.

By @the_gipsy - 15 days
In my newest hobby project, I decided to bite the bullet and use the flake.nix as the single source of truth. And it's surprisingly fast! I used cargo-crane to cache Rust deps. This also works locally by just running "nix flake check". Much better than dealing with GitHub Actions, caches, and whatnot.

Apart from the nix gh action that just runs "nix flake check", the only other actions are for making a github release on certain tags, and uploading release artifacts - which is something that should be built-in IMO.
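
A hedged sketch of what that check workflow can look like; the install action and its version are just one common choice, not necessarily what the author uses:

    name: checks
    on: [push, pull_request]
    jobs:
      flake-check:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: cachix/install-nix-action@v27   # one common way to get Nix on the runner
          # Exactly the same command developers run locally.
          - run: nix flake check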

By @melezhik - 14 days
Many if not all of the mentioned issues derive from the fact that nowadays pipelines are, most of the time, YML-based, which is a terrible choice for programming. You might want to take a look at Sparky, which is a 100% Raku CI/CD system that does not have many of the mentioned pitfalls and is super flexible…

Disclaimer I am the tool author - https://github.com/melezhik/sparky

By @aswerty - 15 days
I once used Team City and Octopus Deploy in a company. And ever since then, dealing with Gitlab Pipelines and Github Actions, I find them so much poorer as a toolkit.

We are very much in the part of the platform cycle where best-in-breed is losing out to all-in-one. Hopefully we see things swing in the other direction in the next few years where composable best-in-breed solutions recapture the hearts and minds of the community.

By @robertritz - 15 days
It's simple. Don't believe that the company purchased by Microsoft wants anything other than for you to use more compute.
By @rickette - 16 days
Every CI system has its flaws, but GitHub Actions in my opinion is pretty nice, especially in terms of productivity: easy to set up, tons of prebuilt actions, lots of examples, etc.

I've used Tekton, Jenkins, Travis, Hudson, StarTeam, Rational Jazz, Continuum and a host of other CI systems over the years but GitHub Actions ain't bad.

By @dalton_zk - 15 days
One option is to create your own CI; I think the other tools all have their pros/cons.

This month I started creating our own CI tool for my team. I'm using Go and created a webhook that calls my API and applies whatever is needed.

I'm saying this because you can build the CI with exactly the features you need.

By @manx - 15 days
I recommend trying Earthfiles: https://earthly.dev/earthfile

This basically brings Docker layer caching to CI. Only things that changed are rebuilt and tested.

By @makingstuffs - 16 days
Not sure if I am missing something but you can definitely run (some?) GH actions locally with act: https://github.com/nektos/act

Seen a couple posts on here say otherwise.

By @spzb - 15 days
My recent experience with Github Actions is that it will randomly fail running a pipeline that hasn't changed with an incomprehensible error message. I re-run the action a few hours later and it works perfectly.
By @Sparkyte - 15 days
Blindly using automation or implementing it without validation will always bite a person in the butt. Been there, done that. It is good, but it should always be event-driven with a point of user validation.
By @TigerC10 - 15 days
Google made Release-Please to make monorepo development easier, there is a GitHub Action for it in the marketplace. Would probably make things a lot cleaner for this situation.
By @esafak - 15 days
Where did the terrible idea of pipelines as config come from anyway?
By @webprofusion - 15 days
GitHub Actions has supported your own locally hosted runners for years, so I presume "there is no way of running GitHub Actions locally" is referring to something else.
By @vrnvu - 16 days
> GitHub doesn't care

GitHub cares. GitHub cares about active users on their platform. Whether it's managing PRs, doing code reviews, or checking the logs of another failed action.

By @bitliner2 - 16 days
Welcome to the jungle.

https://medium.com/@bitliner/why-gitlab-can-be-a-pain-ae1aa6...

I think it’s not only GitHub.

Ideally we should handle it like any other code, that is: write tests, handle multiple environments including the local environment, do lint/build-time error detection, etc.

By @kazinator - 15 days
> My team consists of about 15 engineers

If it's not open source, I have no idea why you'd use GitHub at all. (And even then.)

Keep your eggs in your own nest.

By @yunusefendi52 - 15 days
I think devbox.sh would solve some of the issues, especially local development. You can also run devbox in CI
By @mgaunard - 15 days
Assuming that every folder is independent sounds like bad design.

If they're really independent, put them in separate repos.

By @pshirshov - 16 days
> Jenkins, TeamCity

Yeah-yeah, but it's not like they allow you to run your build definitions locally, nor do they address some other concerns. With GHA you may use nix-quick-install in a declarative manner, nixify your builds, and then easily run them locally and under GHA. In the case of Jenkins/TC you would have to jump through many more hoops.

By @alkonaut - 16 days
That GH Actions and Azure Pipelines both settled for this cursed Yaml is hard to understand. Just make a real programming language do it! And ffs make a local test env so I can run the thing.