Documentation Driven Development (2022)
Documentation Driven Development (DDD) is proposed as a more effective approach than Test-Driven Development (TDD). DDD starts with documentation to iron out implementation details before coding, surfacing API shifts and scope misunderstandings early. By documenting requirements and likely future API changes, developers can plan and refine their code and avoid costly refactors later. DDD treats many artifacts as documentation, including design mockups, API references, and tests, all of which communicate thinking and guide development. The method creates a feedback loop on APIs and work scope, and it aligns with Behavior-Driven Development (BDD) and Acceptance Test-Driven Development (ATDD) in emphasizing user-behavior validation and strong communication practices. Incorporating DDD into a workflow can improve collaboration, goal refinement, and code quality.
Related
Software design gets worse before it gets better
The "Trough of Despair" in software design signifies a phase where design worsens before improving. Designers must manage expectations, make strategic decisions, and take incremental steps to navigate this phase successfully.
Documenting Software Architectures
Documenting software architectures is crucial for guiding developers, facilitating communication, and capturing decisions effectively. The arc42 template and C4 model offer structured approaches to achieve this, balancing detail and clarity.
Formal methods: Just good engineering practice?
Formal methods in software engineering, highlighted by Marc Brooker from Amazon Web Services, optimize time and money by exploring designs effectively before implementation. They lead to faster development, reduced risk, and more optimal systems, proving valuable in well-understood requirements.
Software Engineering Practices (2022)
Gergely Orosz sparked a Twitter discussion on software engineering practices. Simon Willison elaborated on key practices in a blog post, emphasizing documentation, test data creation, database migrations, templates, code formatting, environment setup automation, and preview environments. Willison highlights the productivity and quality benefits of investing in these practices and recommends tools like Docker, Gitpod, and Codespaces for implementation.
Optimizing the Roc parser/compiler with data-oriented design
The blog post explores optimizing a parser/compiler with data-oriented design (DoD), comparing Array of Structs and Struct of Arrays for improved performance through memory efficiency and cache utilization. Restructuring data in the Roc compiler showcases enhanced efficiency and performance gains.
The danger is that either you fail to update the documentation to account for the changes in the system that emerge during development, or you only update parts of the documentation as you go, which causes the documentation to become inconsistent and unreliable.
A middle ground is to write the documentation up front and then rewrite it after the system is done. The initial draft helps guide the design, and the final version captures the full and complete essence of the finished program, which is nearly impossible to do up front.
Certainly one should think about the design before implementing, but I think writing acts to force active reflection on the design - be it a plan, or documentation, or a test.
At this point my approach is to plan out my current ticket into TODOs before I make a branch. I do this in phases until I get down to the level of what functions need to be modified or made. If I know there is a function that is going to be doing a bunch of branching or is otherwise complex, then I know it will need a good bit of unit testing, and I might opt for TDD - though it's rare. I'm used enough to planning my tickets now that it is usually sufficient to get to a good design.
I would recommend trying to do some sort of planning of your work. It can be hard and time-consuming at first, but it can be a superpower if you get into the habit of it.
The gist is that you start by writing example client code demonstrating small but useful interactions with the API, including explanatory comments. They won't work yet, because the API hasn't been implemented. But you're already in a position to talk them over with your team and start thinking about user experience issues. And they give you a nucleus around which you can start building high-quality user documentation.
And they're your automated end-to-end acceptance tests.
Once things look good (not perfect), then you can start implementing the API. For this portion I don't necessarily do TDD, because I remain unconvinced that writing unit tests first is inherently better than writing unit tests afterward. But I might. Depends on my mood. But I do keep running the walking skeleton tests, which serve as my primary feedback mechanism for evaluating implementation progress and success.
The more important thing is that I can start stepping through those walking scenario tests and seeing how well the interaction actually works in practice. And I can use those observations to help inform an ongoing conversation about the API's design and functionality. I can also get quick feedback on whether some aspect of the high-level design isn't going to work as well as we had hoped. Ongoing updates to the design are memorialized and communicated by updating the walking skeleton tests.
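As a sketch of that flow (every name below is invented for illustration, not taken from the comment), the initial "client code" for a hypothetical task-queue API can be written as a runnable scenario first, with the thinnest possible implementation added afterward just to let the skeleton walk:

```python
# Walking-skeleton scenario: example client code written before the API
# exists. It doubles as documentation, a design probe, and an
# end-to-end acceptance test.
def test_enqueue_and_complete_task():
    queue = TaskQueue(name="deploys")
    task = queue.enqueue("ship release 1.2")
    assert task.status == "pending"

    task.complete()
    assert task.status == "done"
    assert queue.pending_count() == 0


# The thinnest implementation that lets the skeleton walk. It grows as
# real features do, while the scenario above remains the contract.
class Task:
    def __init__(self, queue, description):
        self._queue = queue
        self.description = description
        self.status = "pending"

    def complete(self):
        self.status = "done"
        self._queue._pending.remove(self)


class TaskQueue:
    def __init__(self, name):
        self.name = name
        self._pending = []

    def enqueue(self, description):
        task = Task(self, description)
        self._pending.append(task)
        return task

    def pending_count(self):
        return len(self._pending)


test_enqueue_and_complete_task()
print("walking skeleton passes")
```

Until the classes exist, the scenario simply fails to run, which is exactly the feedback described above: it tells you what the implementation still owes the design.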
It also reminds me of clitest[1] (very similar name to the author's lib, but it is another thing).
You can write examples with prose in markdown, then let clitest verify them for you as if they were tests. Best of both worlds.
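As a rough sketch of what such a checker does (a Python toy, not the real clitest), the core loop finds `$ command` lines embedded in prose, runs each command, and diffs the real output against the lines that follow it:

```python
import subprocess


def check_examples(text: str) -> list[str]:
    """Toy sketch of the clitest idea (NOT the real tool): scan prose
    for lines starting with '$ ', run each command in a shell, and
    compare actual output to the lines that follow the command.
    Returns failure messages; an empty list means all examples hold."""
    failures = []
    lines = text.splitlines()
    i = 0
    while i < len(lines):
        stripped = lines[i].strip()
        if stripped.startswith("$ "):
            cmd = stripped[2:]
            expected = []
            i += 1
            # Expected output: non-blank lines until the next command.
            while i < len(lines):
                nxt = lines[i].strip()
                if not nxt or nxt.startswith("$ "):
                    break
                expected.append(nxt)
                i += 1
            result = subprocess.run(cmd, shell=True,
                                    capture_output=True, text=True)
            actual = [l.strip() for l in result.stdout.strip().splitlines()]
            if actual != expected:
                failures.append(f"{cmd!r}: expected {expected}, got {actual}")
        else:
            i += 1
    return failures


doc = """
The greeting command prints a fixed string:

    $ echo hello
    hello
"""
print(check_examples(doc))  # an empty list means the examples still work
```

The payoff is the one the comment describes: the prose and the examples cannot silently drift apart, because the examples are executed.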
There is a set of artifacts, designated the validation artifacts, which includes a user requirements specification, a functional design specification, a technical design specification, and a configuration specification.
Before any release, the team will start by identifying which of these documents need to be updated and which sections need to be updated. This is codified in a formal, signed document.
Then work starts and an updated version of each document is created. Prior to releasing the software, the validation process kicks in and verifies the correctness of each of those documents in reverse order: the configuration specification -> installation qualification, the functional design specification -> operational qualification, the user requirements specification -> performance qualification (there is no explicit qualification of the technical design). The qualification phase produces a mountain of testing evidence.
The result of this rigor is: 1) the products tend to have very low defect rates because every change is accounted for, 2) the release cycle can be as long as the build cycle, 3) it's very hard to fix bugs "on the fly" as these then need to be documented as a deviation and accounted for, 4) anything that breaks is easily accounted for, 5) it's very, very hard for startups to succeed in this space because of how much rigor is required.
I initially hated it because I wanted to move fast. But after leading a few release cycles, I could see a lot of the benefits as we rarely "built the wrong thing", we rarely had major defects, and work-life balance was generally pretty good because every release cycle was meticulously scoped since we had to start by describing the planned changes starting from the documentation. There is less scope creep once work on the release starts since the start of the release has already identified what will change and adding scope means adding more paperwork after the fact.
Maybe a paradigm to consider more is:
Literate programming (LP) tools are used to obtain two representations from a source file: one understandable by a compiler or interpreter, the "tangled" code, and another for viewing as formatted documentation, which is said to be "woven" from the literate source.
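A toy sketch of the tangling half of that idea (assuming tilde-fenced code blocks, which CommonMark also allows; this is an illustration, not a real LP tool like WEB or noweb):

```python
def tangle(literate_source: str) -> str:
    """Extract fenced code blocks (the 'tangled' program) from a
    literate document; the surrounding prose is what gets 'woven'
    into formatted documentation."""
    code, in_block = [], False
    for line in literate_source.splitlines():
        if line.strip().startswith("~~~"):
            in_block = not in_block  # toggle on every fence line
        elif in_block:
            code.append(line)
    return "\n".join(code)


doc = """\
We greet the user by name:

~~~
def greet(name):
    return f"hello, {name}"
~~~

The prose explains intent; the block is the program.
"""
print(tangle(doc))  # prints only the two extracted code lines
```

A real LP tool also handles named chunks, reordering, and weaving to typeset output, but the split between a machine-readable and a human-readable view is the essence.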
The idea of writing documentation before the code is interesting; it's thinking out loud.
The source has baggage, and the intent of every single function call is not always evident. Writing documentation up front can help direct the source, but this is a tug-of-war environment: each affects the other in its own way.
And for that reason, documentation driven development can be a real drag. You start writing documentation with the best intentions, everything works great for this first release. But 2 months down the road you need to modify something and it has a ripple effect on many of the things you documented. It's a non-negligible cost.
I've been working on an open-source tool (https://github.com/pier-oliviert/sequencer) and I've spent a lot of time on the documentation. And what I described above happened: I wanted to make a not-too-big change, and it required me to rewrite 30% of the documentation. I still love the documentation aspect of it, but it definitely has a higher cost than tests, in my experience.
Rational Rose UML, RequisitePro, DOORS. Or just plain prototype something through many iterations until it's good, then document. Doxygen.
All of the above except the prototyping are a waste of resources, more or less.
If I had my software shop (which I don't), I would expect a software module to come with a README.md with a brief explanation of what it is and no more than a 3-step instruction for a reader to see the software module running and doing something useful.
- https://course.ccs.neu.edu/cs2500f18/design_recipe.html
- https://cs110.students.cs.ubc.ca/reference/design-recipes.ht...
- https://docs.racket-lang.org/htdf/index.html
Between a signature, a purpose statement, and examples, you've declared most of what documentation provides, short of a longer contextual statement of the function's role in a codebase.
For larger modules there is How to Design Worlds.
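Following that recipe, a signature, purpose statement, and examples can live together in the code itself; a small sketch (the function and its numbers are invented for illustration):

```python
from math import pi


def area_of_ring(outer: float, inner: float) -> float:
    """Signature: float, float -> float
    Purpose:   compute the area of a ring (annulus) given its outer
               and inner radii, design-recipe style.
    Examples:  area_of_ring(5.0, 3.0) == pi * (5.0**2 - 3.0**2)
               area_of_ring(1.0, 0.0) == pi   (a full disk)
    """
    return pi * (outer ** 2 - inner ** 2)
```

The examples also make natural doctests or unit tests, which is the overlap between documentation and testing that this thread keeps circling.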
A high-level design is best not written in code, but in a separate document. Writing documentation in tickets is just plain silly, IMNSHO.
Writing design documentation used to be commonplace. If it helps to put a name on this, then that's great.
Isn't this kind of the point of TDD? TDD makes it harder to write tests based on assumptions about implementation details, making it less likely that you'll make such assumptions. You end up with less unnecessary coupling between your tests and the implementation details of the code they test, which means fewer false failures (tests that fail because the structure changes, rather than the behavior), and a more robust test suite.