Co-Dfns v5.7.0
The Co-dfns Compiler project enhances Dyalog dfns with task parallelism, synchronization, and determinism for optimized code and improved reliability. Contact arcfide@sacrideo.us for inquiries. Contributions and support are welcomed.
The Co-dfns Compiler project focuses on developing a high-performance and reliable compiler for a parallel extension of the Dyalog dfns programming language. Co-dfns introduces explicit task parallelism with synchronization and determinism structures to enhance program analysis, compiler optimization, and programmer productivity. The language aims to facilitate formal program analysis for improved code reliability. For inquiries about APL or Co-dfns, contact the project author at arcfide@sacrideo.us. Contributions and support from users are encouraged, including code contributions, feedback, and funding through Patreon. Related projects like Mystika and apixlib leverage Co-dfns technology. Detailed information can be found in the repository's documentation. Publications and presentations on Co-dfns offer additional insights for those interested in exploring further.
Related
Cognate: Readable and concise concatenative programming
Cognate is a concise, readable concatenative programming language emphasizing simplicity and flexibility. It supports operators as functions, stack evaluation, control flow statements, list manipulation, recursion, and mutable variables through boxes.
Optimizing the Roc parser/compiler with data-oriented design
The blog post explores optimizing a parser/compiler with data-oriented design (DoD), comparing Array of Structs and Struct of Arrays for improved performance through memory efficiency and cache utilization. Restructuring data in the Roc compiler showcases enhanced efficiency and performance gains.
Deriving Dependently-Typed OOP from First Principles
The paper delves into the expression problem in programming, comparing extensibility in functional and object-oriented paradigms. It introduces dependently-typed object-oriented programming, emphasizing duality and showcasing transformations. Additional appendices are included for OOPSLA 2024.
Hardware FPGA DPS-8M Mainframe and FNP Project
A new project led by Dean S. Anderson aims to implement the DPS‑8/M mainframe architecture using FPGAs to run Multics OS. Progress includes FNP component implementation and transitioning software gradually. Ongoing development updates available.
The APL Forge
The APL Forge competition promotes APL development. Participants create APL projects for a chance to win £2,500 and present at the Dyalog user meeting. Submissions for the 2025 round close on June 23, 2025.
In the compiler, it's working with dense adjacency matrix representations, so this will be at best O(n^2) in whatever the relevant sort of node is (expressions? functions?). "at present is not optimized" seems a bit of an understatement here: I've never heard anything about these sorts of graph algorithms being possible with good asymptotics in an array style. In practice, I think any value of Co-dfns would be more in the fact that it emits GPU code than that it does it quickly, but this calls into question the value of writing it in APL, I would say (for what it's worth, I believe Co-dfns is still not able to compile itself).
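To make the asymptotic point above concrete, here is a minimal sketch (plain Python, not Co-dfns or APL) of transitive closure over a dense boolean adjacency matrix via Warshall's algorithm. The matrix form is the natural array-style representation, but it pays for every one of the n² entries on every pass, no matter how sparse the graph actually is:

```python
def warshall(adj):
    """Transitive closure over a dense n-by-n boolean adjacency matrix.

    Three nested loops over n nodes: Theta(n^3) work (and Theta(n^2)
    space) regardless of sparsity -- the dense-representation cost
    discussed above. A worklist algorithm over adjacency lists would
    instead be proportional to the number of actual edges.
    """
    n = len(adj)
    reach = [row[:] for row in adj]  # copy so the input is untouched
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# A 4-node chain 0 -> 1 -> 2 -> 3: only three edges, but the dense
# representation still scans all 16 entries on every pass over k.
chain = [[False] * 4 for _ in range(4)]
for i in range(3):
    chain[i][i + 1] = True
closure = warshall(chain)
```

The inner two loops flatten nicely into whole-matrix boolean operations in array languages, which is presumably why the dense form is attractive in APL, but vectorizing does not change the O(n³) bound.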
The phrase "allocate all intermediate arrays" seems fairly confusing: what's actually being allocated must be space for the pointers to these arrays and other metadata, not the data of the array itself. As the data is variable-size, it can't be fully planned out at compile time, and I'm fairly sure the register allocation is done when there's not enough shape or type information to even make an attempt at data allocation. This change can only improve constant overhead for primitive calls, and won't be relevant to computations where most of the work is on large arrays. I think it's helpful for Co-dfns in a practical sense, but not too exciting as it only helps with bad cases: if this is important for a user then they'd probably end up with better performance by switching to a statically-typed language when it comes time to optimize.
It does mention that "For most small array values, we will use static allocation types, which prevents the need for allocating additional space for an array above and beyond its header", so small arrays are taken care of by register allocation. There's a tradeoff there between being able to fit larger arrays in this space and wasting space for values that don't use the full header.
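The tradeoff in that quote can be sketched roughly as a small-buffer optimization. This is a toy model in Python, and the field names and the slot count are made up for illustration, not Co-dfns's actual header layout:

```python
INLINE_SLOTS = 2  # hypothetical: the header reserves room for 2 elements

class Array:
    """Toy array header. Values small enough to fit in the header are
    stored inline ("static allocation"), so no second allocation is
    needed for the data. Larger values spill to a separate buffer.
    The cost is that every header carries INLINE_SLOTS words even when
    the value is large and the inline space goes unused.
    """
    def __init__(self, values):
        if len(values) <= INLINE_SLOTS:
            self.inline = list(values)   # lives in the header itself
            self.data = None             # no extra allocation
        else:
            self.inline = None           # inline space wasted
            self.data = list(values)     # separate allocation beyond header

    def values(self):
        return self.inline if self.data is None else self.data

small = Array([1, 2])      # fits inline: one allocation total
big = Array([1, 2, 3, 4])  # needs a second, out-of-header buffer
```

Raising the hypothetical `INLINE_SLOTS` lets more values avoid the second allocation, at the price of a fatter header for every array, which is exactly the tension described above.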
Constant lifting within the compiler is pretty cool, I'll have to look into that.
because i had no clue what co-dfns is