June 21st, 2024

The plan-execute pattern

The plan-execute pattern in software engineering involves planning decisions in a data structure before execution, aiding testing and debugging. Practical examples and benefits are discussed, emphasizing improved code quality and complexity management.

The article discusses the plan-execute pattern, a universal technique often overlooked in software engineering discussions. The pattern involves two stages: planning, where decisions are encapsulated into a data structure, and execution, where the plan is realized. By separating decision-making from action, the pattern allows for comprehensive testing and better debugging capabilities. The text provides a practical example of a build system design to illustrate how the pattern works in real-world scenarios. Additionally, it mentions instances where the pattern is applied, such as in query planning in RDBMS and interpreter patterns. The conclusion emphasizes the benefits of the plan-execute pattern in managing complexity and improving code quality compared to the more intertwined "just do it" approach. Overall, the article highlights the importance of considering alternative implementation strategies to enhance software development practices.
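
As a rough illustration of the two stages described above, here is a minimal sketch in Python. The step types (MakeDir, CopyFile) and the plan_install function are hypothetical, not taken from the article; the point is only that the planner returns plain data and the executor is the only place where side effects happen.

```python
from dataclasses import dataclass
from pathlib import Path

# --- Plan: plain data describing what should happen; no side effects here ---

@dataclass(frozen=True)
class MakeDir:
    path: str

@dataclass(frozen=True)
class CopyFile:
    src: str
    dst: str

Step = MakeDir | CopyFile          # Python 3.10+
Plan = list[Step]

def plan_install(files: dict[str, str], dest_dir: str) -> Plan:
    """Decide what to do. Pure function: testable by inspecting its result."""
    steps: Plan = [MakeDir(dest_dir)]
    for name, src in sorted(files.items()):
        steps.append(CopyFile(src=src, dst=f"{dest_dir}/{name}"))
    return steps

# --- Execute: walk the plan and perform the side effects ---

def execute(plan: Plan) -> None:
    for step in plan:
        match step:
            case MakeDir(path):
                Path(path).mkdir(parents=True, exist_ok=True)
            case CopyFile(src, dst):
                Path(dst).write_bytes(Path(src).read_bytes())

if __name__ == "__main__":
    plan = plan_install({"app.cfg": "build/app.cfg"}, "/tmp/demo")
    for step in plan:    # the plan can be printed, diffed, or approved first
        print(step)
    # execute(plan)      # only at this point would anything actually happen
```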

Related

Software design gets worse before it gets better

The "Trough of Despair" in software design signifies a phase where design worsens before improving. Designers must manage expectations, make strategic decisions, and take incremental steps to navigate this phase successfully.

Formal methods: Just good engineering practice?

Formal methods in software engineering, highlighted by Marc Brooker of Amazon Web Services, save time and money by letting teams explore designs before implementation. They lead to faster development, reduced risk, and more optimal systems, proving especially valuable when requirements are well understood.

The software world is destroying itself (2018)

The software development industry faces sustainability challenges like application size growth and performance issues. Emphasizing efficient coding, it urges reevaluation of practices for quality improvement and environmental impact reduction.

Software Engineering Practices (2022)

Gergely Orosz sparked a Twitter discussion on software engineering practices. Simon Willison elaborated on key practices in a blog post, emphasizing documentation, test data creation, database migrations, templates, code formatting, environment setup automation, and preview environments. Willison highlights the productivity and quality benefits of investing in these practices and recommends tools like Docker, Gitpod, and Codespaces for implementation.

Optimizing the Roc parser/compiler with data-oriented design

The blog post explores optimizing a parser/compiler with data-oriented design (DoD), comparing Array of Structs and Struct of Arrays for improved performance through memory efficiency and cache utilization. Restructuring data in the Roc compiler showcases enhanced efficiency and performance gains.

18 comments
By @evmar - 5 months
[ninja author] I like how their example is a build system, given Ninja constructs an object literally called "Plan"[1].

However, when I later revisited the design of Ninja I found that removing the separate planning phase better reflected the dynamic nature of how builds end up working out, where you discover information during the build that impacts what work you have planned. See the "Single pass" discussion in my design notes[2], which includes some negatives of the change.

If the author reads this, in [2] I made a Rust-based Ninja rewrite that has some similar data structures to those in the blog post, you might find it interesting!

[1] https://github.com/ninja-build/ninja/blob/dcefb838534a56b262... [2] https://neugierig.org/software/blog/2022/03/n2.html

By @JohnMakin - 5 months
This is a similar pattern to Terraform, correct? (Plan/Apply.) I like this pattern; however, when implementing it in automation it presents a conundrum. Often, code will be checked into a repository with a "plan" attached to it. Great. Merge after the plan is approved, then automation executes the plan. However, lots of providers will only show errors in the "execute" phase (e.g., "resource already exists" errors on apply), leading to a ridiculously slow feedback loop of having to open new PRs, go through all the approval chains again, merge, and execute the plan again. Debugging a simple issue like a misnamed variable can take days this way.

I have yet to see a graceful workflow here in a production environment and would love to hear about one. (No, I will not use Terraform Cloud.)

By @skrebbel - 5 months
I love this pattern. I agree with the author that design patterns that only exist because, say, your programming language doesn't have function types aren't worth writing a book on.

This pattern seems like a specific application of the general idea of favoring data structures over logic. If you manage to make a clear data structure that encapsulates all the different ways in which your data/users/etc. might behave, the implementation of the logic is often very straightforward. Conversely, if your data structure is either too generic or too specific, you end up having to code a lot of special cases, exceptions, mutable state, and so on in the logic to deal with that.

By @BoppreH - 5 months
This pattern has several other advantages that are worth mentioning:

- You can attach the plan to your Change Management, both for documentation and approval.

- You can manually edit the plan.

- The plan can be processed by other tools (linting, optimizations).

- The "complexity" of the pair planner+executor is often lower than a monolithic algorithm (above a certain size).

- Having a side-effect-free planner makes testing so much easier.

The big downside, of course, is that you're building a VM. You'll have to design operations, parameters, encoding. And stacktraces will make much less sense.

My rule of thumb is to use plan-execute for scripts that combine business logic with destructive operations or long runtime.
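
To make the testing point concrete, here is a minimal, hypothetical sketch: because the planner performs no I/O, a test can simply assert on the plan it returns, with no mocks for the filesystem or network.

```python
# Hypothetical pure planner: decides which files to delete, touches nothing.
def plan_cleanup(files: list[str], keep: set[str]) -> list[tuple[str, str]]:
    return [("delete", f) for f in files if f not in keep]

def test_plan_cleanup_keeps_protected_files():
    plan = plan_cleanup(["a.log", "b.log", "config.yml"], keep={"config.yml"})
    assert plan == [("delete", "a.log"), ("delete", "b.log")]

test_plan_cleanup_keeps_protected_files()   # passes; no mocks, no side effects
```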

By @advael - 5 months
I spent the first bit of this article going "Wait, isn't that just what a finite state machine is?" and was then super satisfied by the next bit, where they say that this is indeed a really good way to handle the general case.

Reducing program logic to a small state machine, plus business logic that interacts with it in predefined ways, is a pattern I learned while doing asynchronous netcode for a game project, but I gradually realized it was applicable in a ton of places. Framing it as a robust version of a "plan-execute" pattern is really intuitive but also really powerful. Great article.
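
As a generic sketch of that idea (not the poster's netcode): the allowed transitions live in a small data table, and the rest of the code can only move through it via one function. The states and events below are invented for illustration.

```python
# Hypothetical request lifecycle: states and events are illustrative only.
TRANSITIONS = {
    ("idle", "send"): "waiting",
    ("waiting", "ack"): "done",
    ("waiting", "timeout"): "idle",
}

def advance(state: str, event: str) -> str:
    """The only way business logic is allowed to change the state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state!r}")

state = "idle"
for event in ("send", "ack"):
    state = advance(state, event)
print(state)  # -> done
```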

By @bradleybuda - 5 months
I use this pattern a lot (and I often find myself wishing I used it more). Another way to think about this is "noun-ification" (aka the Command pattern) - instead of invoking a function directly, capture all of its arguments, serialize them to some "store", then provide the ability to rehydrate them and actually run it.
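
A minimal sketch of that noun-ification, with a hypothetical command registry (the command name and store are invented): the serialized call can sit in a queue or a database column until someone decides to run it.

```python
import json

REGISTRY = {}  # maps command names to the functions that perform them

def command(fn):
    REGISTRY[fn.__name__] = fn
    return fn

@command
def send_email(to: str, subject: str) -> None:
    print(f"sending {subject!r} to {to}")

def capture(name: str, **kwargs) -> str:
    """Record the invocation as data instead of performing it."""
    return json.dumps({"command": name, "args": kwargs})

def rehydrate_and_run(stored: str) -> None:
    record = json.loads(stored)
    REGISTRY[record["command"]](**record["args"])

stored = capture("send_email", to="ops@example.com", subject="nightly report")
# ... later, possibly in another process, after review or queuing ...
rehydrate_and_run(stored)
```
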
By @onetimeuse92304 - 5 months
I've used something very similar many times in the past without knowing it was formalised as a pattern.

For example, one application of this was a long migration project where a large collection of files (some petabytes of data) was to be migrated from an on-prem NAS to a cloud filesystem. The files on the NAS were managed with an additional asset management solution, which stored metadata (actual filenames, etc.) in PostgreSQL.

The application I wrote was composed of a series of small tools. One tool would contact all of the sources of information and create a file with a series of commands (copy file from location A to location B, create a folder, set metadata on a file, etc.)

Other tools could take that large file and run operations on it. Split it into smaller pieces, prioritise specific folders, filter out modified files by date, calculate fingerprints from actual file data for deduplication, etc. These tools would just operate on this common file format without actually doing any operations on files.

And finally, there was a tool that could be instantiated somewhere and execute a plan.

I designed it all this way because it made for a much more resilient process. I could have many different processes running at the same time, and I had a common format (a file with a collection of directives) as a way to communicate between all these different tools.
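
A sketch of what such a directive file might look like (the field names here are invented, not the poster's actual format): one JSON object per line, so each small tool can split, filter, and reorder the plan with ordinary line-based processing.

```python
import json

plan_lines = [
    '{"op": "mkdir", "path": "projects/2019"}',
    '{"op": "copy", "src": "/nas/projects/2019/a.mov", "dst": "projects/2019/a.mov"}',
    '{"op": "set_meta", "path": "projects/2019/a.mov", "key": "asset_id", "value": "A-1042"}',
]

def filter_ops(lines: list[str], op: str) -> list[str]:
    """One of many small tools: keep only directives of a given kind."""
    return [line for line in lines if json.loads(line)["op"] == op]

print(filter_ops(plan_lines, "copy"))   # the executor never needs to know how
                                        # the plan was produced or filtered
```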

By @tomberek - 5 months
This is an excellent pattern. Nix uses it to let one create a full transitive build graph, serialize that plan, move it around, and execute it in a distributed and reproducible fashion.

There are always temptations to loosen the constraints this imposes, but they nearly always come at a cost of undermining the value of the pattern.

By @krick - 5 months
I don't see how that is a pattern. Rather, it's a new name for an old thing: writing a virtual machine or a JIT compiler. And no, the interpreter is not a "special case of the plan-execute pattern": by definition, if a data structure can be run, it is a program for some virtual machine. And "a plan" is a lousy definition of "something that can be run", i.e. a program. So it's not a "special case"; it's synonymous.

That also explains why it isn't a good and useful "pattern": inventing new DSLs with custom compilers all the time is not a trivial thing to pull off without shooting yourself in the foot, so if you are writing a VM (e.g., an RDBMS query planner) you should already be perfectly aware that you are writing a fucking VM with its own programming language (e.g., SQL). And if you are not, inventing a data structure that encapsulates a whole program is probably a bigger task than the problem you are actually trying to solve.

By @ijustlovemath - 5 months
I do this for a code generation tool we use internally! Plans contain all the metadata to actually do a thing, but allow transformation, filtering etc before execution. It also forces you to abstract your code in a cleaner way, and minimize the surface of your plan generation/execution functions, which makes them infinitely more testable.

The way I think about when to use a plan is whenever you're doing batches of I/O of some kind, or anything that you might want to make idempotent.
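
One hypothetical way the plan helps with idempotency: because each step is plain data, the executor can record completed steps and skip them on a re-run. The field names and logging scheme below are invented for illustration.

```python
import json

def execute_idempotently(plan: list[dict], done_log: set[str]) -> None:
    """Skip any step whose serialized form has already been executed."""
    for step in plan:
        key = json.dumps(step, sort_keys=True)
        if key in done_log:
            continue                      # already done on a previous run
        print("executing", step)          # stand-in for the real side effect
        done_log.add(key)                 # persist this in real code

done: set[str] = set()
plan = [{"op": "upload", "path": "a.csv"}, {"op": "upload", "path": "b.csv"}]
execute_idempotently(plan, done)
execute_idempotently(plan, done)          # second run does nothing
```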

By @CGamesPlay - 5 months
A key motivator of this pattern is identifying invalid configurations early, and in particular preventing execution if a known invalid configuration is requested (e.g. fail-fast). For example, if you delete a resource from a configuration, and it ends up being one that is depended upon by another, a "plan-execute" pattern should identify this at the start and prevent any execution from happening.
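
A minimal sketch of that fail-fast check, with invented resource and field names: validate the whole plan up front and refuse to execute anything if a deleted resource still has dependents.

```python
def validate(plan: list[dict]) -> list[str]:
    """Return every broken dependency before a single step is executed."""
    deleted = {step["name"] for step in plan if step["action"] == "delete"}
    errors = []
    for step in plan:
        if step["action"] == "delete":
            continue
        for dep in step.get("depends_on", []):
            if dep in deleted:
                errors.append(f"{step['name']} depends on {dep}, which is being deleted")
    return errors

plan = [
    {"action": "delete", "name": "subnet-a"},
    {"action": "keep", "name": "vm-1", "depends_on": ["subnet-a"]},
]
problems = validate(plan)
if problems:
    # fail fast: nothing has been touched yet
    raise SystemExit("refusing to execute:\n" + "\n".join(problems))
```
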
By @pshirshov - 5 months
The correct name is "staged generative programming".

The best form for plans is a DAG without control flow.

You may run perception->planning->execution in a loop.
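
A small sketch of a plan held as a DAG with no control flow (task names invented): each node is a task, edges mean "depends on", and execution is just a topological walk.

```python
from graphlib import TopologicalSorter  # Python 3.9+

dag = {
    "package": {"compile"},        # package depends on compile
    "compile": {"fetch_sources"},  # compile depends on fetch_sources
    "fetch_sources": set(),
}

for task in TopologicalSorter(dag).static_order():
    print("executing", task)       # fetch_sources, compile, package
```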

By @jkaptur - 5 months
> I feel uneasy about design patterns. ... they solve problems that a choice of programming language or paradigm creates.

I don't understand the ambivalence. I have to make some choice of language and paradigm, and then I have to solve problems using it.

Great article and description of the plan-execute pattern, though!

By @WhyNotHugo - 5 months
I really like this pattern and have been using it in the rewrite for vdirsyncer.

One nice advantage is that an application can show the whole plan for me to review before executing it. This is basically what Terraform does too.

This pattern also makes coding and testing much much easier.

By @gfaure - 5 months
I've also successfully used this in production — another side effect is that you can inspect the exact information that each step is using to compute its own output, if you ensure that the output plan is a pure function of the input plan.
By @Vadim_samokhin - 5 months
I think Postgres really excels at this pattern, as do other databases that implement a query planner.
By @rmckayfleming - 5 months
"I feel uneasy about design patterns. On the one hand, my university class on design patterns revived my interest in programming. On the other hand, I find most patterns in the Gang of Four book to be irrelevant to my daily work; they solve problems that a choice of programming language or paradigm creates."

My relationship with design patterns changed when I stopped viewing them as prescriptive and started viewing them as descriptive.

By @PaulHoule - 5 months
There is a wide range of "patterns" derived from compilers that are the higher teachings behind functional programming and OO patterns. People don't usually call them patterns. (Scheme isn't special because it has closures; it is special because you can write functions that write functions.)

Reminds me of a thread a few days back when someone was talking about state machines as if they were obscure. At a systems programming level they aren’t obscure at all (how do you write non-blocking network code without async/await? how does regex and lexers work?). Application programmers are usually “parsing” with regex.match() or index() or split() and frequently run into brick walls when it gets complex (say parsing email headers that can span several lines) but state machines make it easy.