Exploring biphasic programming: a new approach in language design
Biphasic programming is an emerging trend in language design, exemplified by Zig's "comptime" compile-time execution, React Server Components' flexible rendering, and Winglang's phase-specific code for cloud applications.
Biphasic programming is a new trend in language design where code can run in two distinct phases, such as build time versus runtime or server-side versus client-side. Zig, a systems programming language, introduces "comptime" for compile-time execution of functions without adding a new domain-specific language. React Server Components (RSC) allow developers to choose where components are rendered, optimizing performance by rendering on the server or client as needed. Winglang focuses on cloud applications, using preflight code for defining infrastructure and inflight code for runtime interactions, enforcing phase-related invariants. These examples showcase how biphasic programming can address various challenges, from metaprogramming in Zig to optimizing frontend apps with RSC and modeling distributed programs in Wing. The distinction between phases in these languages offers unique capabilities, with potential for further exploration on how biphasic solutions overlap or differ, and whether existing languages can achieve similar functionality without dedicated features. Overall, biphasic programming presents a versatile approach to solving diverse programming problems across different domains.
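The two-phase split the article describes can be gestured at even in a single-phase language. Below is a minimal, hedged Python sketch of the idea (it only approximates Zig's comptime, which runs inside the compiler proper): one function executes once at import ("build") time and its result is baked in, while another executes per call at runtime.

```python
# Sketch of the two-phase idea in plain Python. Names are illustrative.
def build_time_table(n):
    # Phase 1: executed once, when the module is loaded ("build time").
    return [i * i for i in range(n)]

# The "comptime" result is computed up front and baked into the module.
SQUARES = build_time_table(10)

def square(i):
    # Phase 2: runtime calls are just lookups against the precomputed table.
    return SQUARES[i]
```

In Zig the first phase would literally run in the compiler; here the phase boundary is merely module load versus call time, which is the weakest form of the same separation.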
Related
Understanding React Compiler
React's core architecture simplifies app development but can lead to performance issues. The React team introduced React Compiler to automate performance tuning by rewriting code using AST, memoization, and hook storage for optimization.
Understanding React Compiler
React's core architecture simplifies development but can lead to performance issues. The React team introduced the React Compiler to automate performance tuning by rewriting code. Transpilers like Babel convert JSX for efficiency. Compilers, transpilers, and optimizers analyze and produce equivalent code. React Compiler enhances functionality using Abstract Syntax Trees, memoization, and hook storage for optimized performance.
Zig-style generics are not well-suited for most languages
Zig-style generics, inspired by C++, are critiqued for limited universality. Zig's simplicity contrasts with Rust and Go's constraints. Metaprogramming praised for accessibility, but error messages and compiler support pose challenges. Limited type inference compared to Swift and Rust.
I Probably Hate Writing Code in Your Favorite Language
The author critiques popular programming languages like Python and Java, favoring Elixir and Haskell for immutability and functional programming benefits. They emphasize personal language preferences for hobby projects, not sparking conflict.
Improving Your Zig Language Server Experience
Enhance Zig Language Server (ZLS) by configuring it to run build scripts on save for immediate error display. Zig project progresses include faster builds, incremental compilation, and code intelligence. Support via Zig Software Foundation donations.
https://en.wikipedia.org/wiki/Multi-stage_programming
https://okmij.org/ftp/meta-programming/index.html
Comment from 2019 about it, which mentions Zig, Terra/Lua, Scala LMS, etc.:
https://news.ycombinator.com/item?id=19013437
We should also mention big data frameworks and ML frameworks like TensorFlow / Pytorch.
The "eager mode" that Chris Lattner wanted in Swift for ML and Mojo is actually to get rid of the separation between the stage of creating a graph of operators (in Python, serially) and then evaluating the graph (on GPUs, in parallel).
And also CMake/Make and even autoconf/make and Bazel have stages -- "programming" a graph, and then executing it in parallel:
Language Design: Staged Execution Models - https://www.oilshell.org/blog/2021/04/build-ci-comments.html...
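The graph-then-evaluate staging described above can be sketched in a few lines of Python: stage 1 builds a graph of operator nodes without computing anything, and stage 2 walks the graph and evaluates it, the way TF1-style frameworks or build systems do. All names here are illustrative.

```python
# Stage 1: "programming" a graph of operators (no computation happens yet).
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs
    def __add__(self, other):
        return Node("add", self, other)
    def __mul__(self, other):
        return Node("mul", self, other)

def const(v):
    return Node("const", v)

# Stage 2: a separate evaluator walks the graph built in stage 1.
def evaluate(node):
    if node.op == "const":
        return node.inputs[0]
    a, b = (evaluate(i) for i in node.inputs)
    return a + b if node.op == "add" else a * b

graph = const(2) * const(3) + const(4)  # stage 1: build only
result = evaluate(graph)                # stage 2: run the graph
```

"Eager mode" collapses the two stages: each operator would compute its result immediately instead of recording a node.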
> And compared to Lisps like Scheme and Racket which support hygienic macros, well, Zig doesn’t require everything to be a list.
This comment is a bit ignorant. Racket has the most advanced staging system of any language that I'm aware of. You can build languages in Racket with conventional yet extensible syntax: https://docs.racket-lang.org/rhombus/index.html Zig's metaprogramming facilities are very simple in comparison.
I think staging could be extremely useful in many applications, and I wish it were better supported in mainstream languages.
* dynamic typing vs static typing, a continuum that JIT-ing and compiling attack from either end -- in some sense dynamically typed programs are ALSO statically typed -- with all function types being dependent function types and all value types being sum types. After all, a term of a dependent sum, a dependent pair, is just a boxed value.
* monomorphisation vs polymorphism-via-vtables/interfaces/protocols, which trade roughly speaking instruction cache density for data cache density
* RC vs GC vs heap allocation via compiler-assisted proof of the memory-ownership relationships that govern how this is supposed to happen
* privileging the stack and instruction pointer rather than making this kind of transient program state a first-class data structure like any other, to enable implementing your own co-routines and whatever else. an analogous situation: Zig deciding that memory allocation should NOT be so privileged as to be an "invisible facility" one assumes is global.
* privileging pointers themselves as a global type constructor rather than as typeclasses. we could have pointer-using functions that transparently monomorphize in more efficient ways when you happen to know how many items you need and how they can be accessed, owned, allocated, and de-allocated. global heap pointers waste so much space.
Instead, one would have code for which it makes more or less sense to spend time optimizing in ways that privilege memory usage, execution efficiency, instruction density, clarity of denotational semantics, etc, etc, etc.
Currently, we have these weird siloed ways of doing certain kinds of privileging in certain languages with rather arbitrary boundaries for how far you can go. I hope one day we have languages that just dissolve all of this decision making and engineering into universal facilities in which the language can be anything you need it to be -- it's just a neutral substrate for expressing computation and how you want to produce machine artifacts that can be run in various ways.
Presumably a future language like this, if it ever exists, would descend from one of today's proof assistants.
// Import some libraries.
bring s3;
If the keyword were the usual "import" there would be no need to explain what "bring" is. Or, if "bring" is so good, why not "// Bring some libraries."?
What I recently realized is that while compilers in the standard perspective process a language into an AST, do some transformations, and then output some kind of executable, from another perspective they are really no different than interpreters for a DSL.
There tends to be this big divide between what we call a compiler and what we call an interpreter. And we classify languages as being either interpreted or compiled.
But what I realized, as I'm sure many others have before me, is that that distinction is very thin.
What I mean is this: from a certain perspective, a compiler is really just an interpreter for the meta-language that encodes and hosts the compiled language. The meta-language directs the compiler, generally via statements, to synthesize blocks of code, create classes with particular shapes, and eventually write out certain files. These meta-languages don't support functions, control flow, or variables; in fact they are entirely declarative. And yet they are interpreted just like the normal language being compiled.
To a certain degree I think the biphasic model captures this distinction well. Our execution/compilation models for languages don't tend to capture and differentiate interpreter+script from os+compiled-binary very well. Or, where they do, they tend to make metaprogramming very difficult. I think finding a way to unify those notions will help languages if and when they add support for metaprogramming.
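The compiler-as-interpreter view above can be made concrete with a toy sketch: a declarative "meta-language" (here just a data structure, all names hypothetical) is interpreted to synthesize a class, much as a compiler interprets declarations to synthesize machine artifacts.

```python
# The "program" in the meta-language: pure data, no functions or control flow.
spec = {
    "name": "Point",
    "fields": ["x", "y"],
}

def interpret(spec):
    """Interpret the declarative spec by synthesizing a Python class."""
    def __init__(self, **kwargs):
        for f in spec["fields"]:
            setattr(self, f, kwargs.get(f))
    return type(spec["name"], (), {"__init__": __init__})

Point = interpret(spec)   # "compilation" = interpreting the meta-program
p = Point(x=1, y=2)
```

The interesting part is that `interpret` is unambiguously an interpreter, yet its job — turning declarations into executable structure — is exactly what we normally call compilation.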
https://docs.raku.org/language/phasers
It has many more than 2 phases.
Phasers are one of the ideas Raku takes as pretty core and really runs with. So in addition to compile-time programming, it has phasers for run-time events like catching exceptions and one that's equivalent to the defer keyword in several languages.
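For readers unfamiliar with Raku, the run-time phasers mentioned above can be loosely approximated with Python stdlib tools: `atexit` plays the role of an END phaser, and `try/finally` plays the role of a defer-style LEAVE phaser that always runs when a scope is left. This is only an analogy; Raku's phasers are first-class language constructs.

```python
import atexit

shutdown_log = []
# ~ END phaser: runs once, at interpreter shutdown.
atexit.register(lambda: shutdown_log.append("END"))

def work():
    log = ["ENTER"]          # ~ ENTER phaser: start of the block
    try:
        log.append("body")
    finally:
        log.append("LEAVE")  # ~ LEAVE phaser / defer: always runs on exit
    return log
```

Raku additionally has phasers for compile time (BEGIN, CHECK), loop events (FIRST, NEXT, LAST), and exception handling (CATCH), which have no direct stdlib analogue here.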
The block inside of a class or module definition is executed first, and then the application can work on the resulting structure generated after that pass. Sorbet (a Ruby static typing library) uses this first-pass to generate its type metadata, without running application code. (I think by stubbing the class and module classes themselves?)
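Python has the same property as Ruby here: a class body is ordinary code executed at definition time, and a metaclass can observe that first pass and record metadata without running any application code. A minimal sketch (names are illustrative, and this only gestures at what Sorbet does):

```python
# A metaclass that records what a class body declared, during the
# definition-time "first pass" -- before any application code runs.
class Collecting(type):
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        # ns is the namespace produced by executing the class body.
        cls.declared = [k for k in ns if not k.startswith("__")]
        return cls

class User(metaclass=Collecting):
    name = str
    age = int
```

After the class body executes, `User.declared` holds the declared attribute names in order — metadata harvested purely from the definition pass.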
- Documentation generated from inline code comments (Knuth's literate programming)
- Test code
We could expand to
- security (beyond perl taint)
- O(n) runtime and memory analysis
- parallelism or clustering
- latency budgets
And for those academically inclined, formal language semantics like https://en.wikipedia.org/wiki/Denotational_semantics versus operational and others..
Very excited for multi-stage - especially its potential to provide very good LSP/diagnostics for library users (and authors). It's hard to provide good error messages from libraries for static errors that are hard to represent in the type system, so sometimes a library user sees vague/unrelated errors.
[1] https://github.com/gsuuon/kita/blob/d741c0519914369da9c89241...
it's interesting to read this biphasic programming article in the context of pg's tendentious reading of programming language history
> Over time, the default language, embodied in a succession of popular languages, has gradually evolved toward Lisp. 1-5 are now widespread. 6 is starting to appear in the mainstream. Python has a form of 7, though there doesn't seem to be any syntax for it. 8, which (with 9) is what makes Lisp macros possible, is so far still unique to Lisp, perhaps because (a) it requires those parens, or something just as bad, and (b) if you add that final increment of power, you can no longer claim to have invented a new language, but only to have designed a new dialect of Lisp ;-)
it of course isn't absolutely unique to lisp; forth also has it
i think the academic concept of 'staged programming' https://scholar.google.com/scholar?cites=2747410401001453059... is a generalization of this, and partial evaluation is a very general way to blur the lines between compile time and run time
The Metamine language allowed for a magic equals := if I recall correctly, which had the effect of always updating the result anytime the assigned value changed for the rest of the life of the program. Mixing it with normal assignments and code made for some interesting capabilities.
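The reactive `:=` idea can be roughly approximated in Python: bound values are re-derived whenever one of their inputs changes. This is only a sketch of the concept, not Metamine's actual semantics, and all names here are hypothetical.

```python
# A rough approximation of reactive assignment: "bound" values are
# recomputed every time a plain value is set.
class Reactive:
    def __init__(self):
        self._values = {}
        self._rules = {}  # name -> zero-arg function re-run on any change

    def set(self, name, value):          # normal assignment: y = ...
        self._values[name] = value
        for target, fn in self._rules.items():
            self._values[target] = fn()  # keep bound values up to date

    def bind(self, name, fn):            # reactive assignment: y := ...
        self._rules[name] = fn
        self._values[name] = fn()

    def get(self, name):
        return self._values[name]

env = Reactive()
env.set("x", 2)
env.bind("y", lambda: env.get("x") * 10)  # y := x * 10
env.set("x", 5)                           # y is now re-derived from x
```

A real implementation would track dependencies and only recompute affected bindings; recomputing every rule on every change is the bluntest possible version.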
[0] https://guix.gnu.org/manual/en/html_node/G_002dExpressions.h... [1] https://guix.gnu.org/manual/en/html_node/Build-Phases.html#i... [2] https://guix.gnu.org/manual/en/html_node/Shepherd-Services.h...
The implementation shall be JIT compiled with a separate linter running in the editor for that is the right thing.
We aren't there yet but I believe it's where we'll end up.
I think the actual problem is the glacial pace of applying it, and the lack of support in trait impls (e.g. i32.min) and syntax. If it were applied to every pure fn+syntax it would probably cover a great deal of what Zig is doing.
Multi-stage programming and distribution with the same syntax between clients and servers has been _the_ key feature of Opa (opalang.org) 15 years back. Funny because Opa was a key inspiration for React and its JSX syntax but it took a lot of time to match the rest of the features.
I've been rendering the same React components on the server and browser side for close to a decade and I've come across some really good patterns that I don't really see anywhere else.
Here's the architectural pattern that I use for my own personal projects. For fun I've started writing it in F# and using Fable to compile to JS:
A foundational element is a port of express to the browser, aptly named browser express:
https://github.com/williamcotton/browser-express
With this you write not only biphasic UI components but also biphasic route handlers. In my opinion, and after lots of experience with other React frameworks, this is far superior to the approaches taken by the mainstream frameworks and even to how the React developers expect their tool to be used. One great side effect is that the site works the same with JavaScript disabled. This also means the time to interaction is immediate.
It keeps a focus on the request itself with a mock HTTP request created from click and form post events in the browser. It properly architects around middleware that processes an incoming request and outgoing response, with parallel middleware for either the browser or server runtime. It uses web and browser native concepts like links and forms to handle user input instead of doubling the state handling of the browser with controlled forms in React. I can't help but notice that React is starting to move away from controlled forms. They have finally realized that this design was a mistake.
Because the code is written in this biphasic manner and the runtime context is injected it avoids any sort of conditionals around browser or server runtime. In my opinion it is a leaky abstraction to mark a file as "use client" or "use server".
Anyways, I enjoyed the article and I plan on using this term in practice!
Linq. Have a set of collection manipulation methods that can be run in C# or transformed into SQL.
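LINQ's trick — one query description, two interpretations — can be sketched in a few lines of Python: the same filter list either executes in memory or is translated to SQL text. This is a toy with hypothetical names, not LINQ's expression-tree machinery.

```python
# One query description, two "phases": run in-process, or compile to SQL.
class Query:
    def __init__(self, table):
        self.table = table
        self.filters = []  # (field, op, value) triples

    def where(self, field, op, value):
        self.filters.append((field, op, value))
        return self

    def to_sql(self):
        # Phase A: translate the description to a parameterized SQL string.
        clauses = " AND ".join(f"{f} {op} ?" for f, op, _ in self.filters)
        return f"SELECT * FROM {self.table}" + (f" WHERE {clauses}" if clauses else "")

    def run(self, rows):
        # Phase B: interpret the same description against in-memory rows.
        ops = {"=": lambda a, b: a == b, ">": lambda a, b: a > b}
        return [r for r in rows if all(ops[op](r[f], v) for f, op, v in self.filters)]
```

C#'s `IQueryable` does the real version of this by reifying lambdas as expression trees, so arbitrary predicates can be translated rather than just field/op/value triples.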
Blazor. Have components that can run on the server, or in the browser, or several other rendering tactics.
Suggesting that the macros of C and Rust may be the same is an insane failure.
BTW: meta-programming means "code which generates code" and not "code which runs earlier than other code".