Quote-unquote "macros"
The blog post examines macros in the Janet programming language, using memoization as a running example to show what macros can do beyond ordinary functions, particularly compile-time storage allocation, and the significance of quoting and unquoting in macro development.
The blog post discusses the concept of macros in programming, particularly in the context of the Janet language. It begins with a familiar example of memoization in Python and JavaScript, illustrating how decorators and higher-order functions can be implemented. The author then shifts focus to the unique capabilities of macros, emphasizing that they provide more than just syntactic transformations. The post explores a memoization scheme in which memoization happens at the call site rather than in the callee, which is difficult to implement in JavaScript. The author argues that macros can allocate storage for results at compile time, enabling behaviors that are not possible with ordinary functions. The discussion includes a detailed examination of how quoting and unquoting work in Janet, demonstrating how to write a macro that retains a reference to a mutable value allocated at compile time. The author concludes by reflecting on the utility of the quote-unquote pattern in writing macros, noting its relevance in more complex scenarios such as compiler development.
- The blog explores the concept of macros in programming, particularly in Janet.
- It contrasts traditional memoization techniques with the potential of macros for more complex behaviors.
- The author emphasizes the importance of quoting and unquoting in macro development.
- The post illustrates how macros can allocate storage at compile time for dynamic behavior.
- The author reflects on the practical applications of the quote-unquote pattern in macro writing.
Related
Tail Recursion for Macros in C
The blog discusses tail recursion for macros in C, introducing __VA_TAIL__() to enable real recursion, overcoming inefficiencies of traditional recursive macro calls. It showcases how tail recursion improves processing efficiency in macros.
Tonsky: Don't go crazy with Clojure unless it makes you happy
The author shares their journey using Clojure macros for Humble UI's component library, emphasizing the language's flexibility in enabling unconventional solutions like reading source files and fetching code seamlessly.
"Maxwell's equations of software" examined
Ken Shirriff's blog post analyzes a historic Lisp code snippet, showcasing Lisp's core principles. It highlights code-data interchangeability and the essence of Lisp programming, referencing Alan Kay's "Maxwell's Equations of Software."
Clojure macros continue to surprise me
The author shares their journey using Clojure macros for Humble UI's component library, creating a macro to display running code alongside its source. Despite challenges, they find Clojure's flexibility rewarding for experimentation.
Ask HN: Why do people say "Lisp has no syntax"? It has infinite syntax
The author discusses Lisp's syntax, highlighting its list-based structure and challenges with constructs like `cond`. They conclude that Lisp's complexity resembles other languages, despite its unique features.
- Some commenters express confusion about Lisp macros and their syntax, indicating a steep learning curve.
- Several users share their own experiences with memoization in different programming languages, such as JavaScript and Rust.
- There is a discussion about the complexities and potential pitfalls of using macros, with some labeling them as unmaintainable.
- Some commenters appreciate the article's insights while others critique the example of Fibonacci recursion as overly complicated.
- There are mentions of alternative approaches to macro functionality, such as John Shutt's Vau calculus, suggesting ongoing exploration in the field.
(defun compute-hash (key hash f)
  "Get the value for KEY in HASH or compute it with F, enter into HASH and return."
  (multiple-value-bind (val win)
      (gethash key hash)
    (if win
        val
        (setf (gethash key hash) (funcall f key)))))

(defun memoized (f)
  (let ((cache (make-hash-table)))
    (flet ((memo (x g) (compute-hash x cache g)))
      (lambda (&rest args) (apply f #'memo args)))))

(defun fib (n)
  (if (<= n 1)
      n
      (+ (fib (1- n)) (fib (- n 2)))))

;; MEMO is a function that takes a key and a computing function.
;; If a key has been memoized, it returns the cached value, otherwise it calls
;; the computing function with the key and caches the result.
(let ((example (memoized (lambda (memo x)
                           (format t "X: ~a~%" x)
                           (let ((result (funcall memo x #'fib)))
                             (format t "~a~%" (* 2 result)))))))
  (trace fib)
  (funcall example 5)
  (funcall example 5)
  (funcall example 5)
  (untrace fib))
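For comparison with the JavaScript discussion below, the same caller-side pattern translates directly. This is a sketch, not from the original comment; the `countingFib` wrapper is added purely to make the caching observable:

```javascript
// computeHash: return the cached value for key, or compute, store, and return it.
function computeHash(key, cache, f) {
  if (cache.has(key)) return cache.get(key);
  const val = f(key);
  cache.set(key, val);
  return val;
}

// memoized(f) hands f a private `memo` helper bound to a fresh cache.
function memoized(f) {
  const cache = new Map();
  const memo = (x, g) => computeHash(x, cache, g);
  return (...args) => f(memo, ...args);
}

function fib(n) {
  return n <= 1 ? n : fib(n - 1) + fib(n - 2);
}

// Count how often the computing function actually runs.
let fibCalls = 0;
const countingFib = (n) => { fibCalls++; return fib(n); };

const example = memoized((memo, x) => 2 * memo(x, countingFib));

const a = example(5);
const b = example(5);
const c = example(5);
// a, b, and c are all 10; countingFib ran only once across the three calls.
```

As in the Lisp version, the cache lives in a closure created per `memoized` call, so two `memoized(f)` instances do not share results.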
> I think that you basically can’t, in JavaScript. Or, more accurately: I can’t think of a way to do it.[1]
Oh, this is a case for WeakMaps right?
const MemoCache = new WeakMap();

function memoize(f, x) {
  const cache = MemoCache.get(f) || new Map();
  MemoCache.set(f, cache);
  if (!cache.has(x)) {
    cache.set(x, f(x));
  }
  return cache.get(x);
}
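As a quick check of the semantics the article's footnote objects to, the following self-contained sketch (the `invocations` counter and the `square` stand-in for an expensive function are added for illustration) shows that this global version shares results across textually separate uses:

```javascript
const MemoCache = new WeakMap();

function memoize(f, x) {
  const cache = MemoCache.get(f) || new Map();
  MemoCache.set(f, cache);
  if (!cache.has(x)) {
    cache.set(x, f(x));
  }
  return cache.get(x);
}

// Hypothetical "expensive" function, instrumented to count real invocations.
let invocations = 0;
const square = (x) => { invocations++; return x * x; };

// Two uses of the same (f, x) pair hit one shared, global cache:
const total = memoize(square, 3) + memoize(square, 3);
// total is 18, but square ran only once -- exactly the sharing being objected to.
```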
Oh wait:

> 1. You could create a global memoization map keyed on the function that you're calling, but this would actually have different semantics than I'm imagining. If I said `memoize(f, 1) + memoize(f, 1)` I would expect those to each invoke `f`, because instances of `memoize` shouldn't share results. Why not? Because this is a fake example, and a global memoization is a different (easier!) thing than per-call-site memoization.
Like I get what you're saying but you could just cache the call site too?
const MemoCache2 = new WeakMap();

function memoize2(f, x) {
  const callsite = new Error().stack;
  const macro_cache = MemoCache2.get(f) || {};
  const micro_cache = macro_cache[callsite] || new Map();
  macro_cache[callsite] = micro_cache;
  MemoCache2.set(f, macro_cache);
  if (!micro_cache.has(x)) {
    micro_cache.set(x, f(x));
  }
  return micro_cache.get(x);
}
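Run from a loop, the call-site keying becomes visible: every iteration produces the same `Error().stack`, so repeated calls from one source location share a cache. A self-contained sketch (repeating `memoize2`, with an illustrative instrumented `square` as the expensive function):

```javascript
const MemoCache2 = new WeakMap();

function memoize2(f, x) {
  const callsite = new Error().stack;       // identifies the caller's source location
  const macro_cache = MemoCache2.get(f) || {};
  const micro_cache = macro_cache[callsite] || new Map();
  macro_cache[callsite] = micro_cache;
  MemoCache2.set(f, macro_cache);
  if (!micro_cache.has(x)) {
    micro_cache.set(x, f(x));
  }
  return micro_cache.get(x);
}

let invocations = 0;
const square = (x) => { invocations++; return x * x; };

const results = [];
for (let i = 0; i < 3; i++) {
  // Same call site on every iteration, so the same micro_cache is reused:
  results.push(memoize2(square, 3));
}
// results is [9, 9, 9], but square ran only once.
```

Calls from a different source line would get a different stack string and therefore a fresh cache, which is the per-call-site behavior the footnote asks for.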
I admit that this is something of a trickery though, but it's trickery specifically to work around the fact that this person doesn't want to write `const my_f1 = memoize(f), my_f2 = memoize(f)` in some location on the screen. Precisely because people who write JavaScript are not accustomed to macros, they are not expecting `memoize(f, 1) + memoize(f, 1)` to be a proper memoization expression, and they aren't expecting weird stuff with WeakMaps and inspecting stack traces to identify call sites and all that.

how to fibonacci            space complexity   time complexity
-------------------------   ----------------   ---------------
insane recursion            exponential        exponential
memoized insane recursion   linear             linear
The space complexity of "insane recursion" without memoization is the maximum stack depth; the worst-case stack is:

    fib(n)
    fib(n-1)
    fib(n-2)
    ...
    fib(1)

which is n stack frames (and the stack frames are of constant size); the space complexity of the whole thing is thus linear in n. While the call tree is itself exponential in size, the memory required is only the depth of that tree, since we can't call fib(n-1) and fib(n-2) simultaneously[2]. (The time complexity is, of course, exponential, and I agree with the "insane" moniker. I also like your comment elsewhere in this thread about people hyperfocusing on the example and missing the larger point of the article … and I'm so sorry but I've been sniped by this.)
[1]: https://ianthehenry.com/posts/fibonacci/
[2]: the little demons in my mind are now trying to scheme up an insaner recursion that attempts this. Threads maybe?
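The depth argument above is easy to check mechanically. A small instrumented sketch (the counters are additions for illustration) shows exponentially many calls but only n stack frames:

```javascript
let depth = 0;     // current recursion depth
let maxDepth = 0;  // deepest stack observed
let calls = 0;     // total invocations

function fib(n) {
  calls++;
  depth++;
  maxDepth = Math.max(maxDepth, depth);
  const result = n <= 1 ? n : fib(n - 1) + fib(n - 2);
  depth--;
  return result;
}

const value = fib(10);
// value is 55; calls is 177 (the exponential call tree),
// but maxDepth is only 10 -- linear in n.
```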
I've wanted the ability to reference the callsite in functions, and lobbied the V8 team for something like arguments.callsite, but was (probably rightly) politely told no.
But if you're willing to abuse tagged template literal syntax, they're really just function calls, so you can do something like:
const dumbExample = (x) => {
  while (someComplicatedStuffHappens()) {
    pretendLikeThisFunctionIsBig();
  }
  const result = memoize(doSomethingVeryExpensive, x)``;
  doMoreInterestingWork();
};
memoize() must return a template tag function, which will be invoked with a TemplateStringsArray (because of the ``) that can act like a call-site identifier, which can in turn be a key into a WeakMap of memoization dictionaries. It's mostly a curiosity, because who wants that syntax? But it's interesting that JavaScript does have this special power hidden behind one feature.
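A minimal sketch of that idea (the `memoize` shape and the instrumented `square` helper are illustrative, not from the original comment). It relies on the fact that the frozen TemplateStringsArray handed to a tag is the same object on every evaluation of a given call site, so it can key a WeakMap:

```javascript
const memoDicts = new WeakMap();

// memoize(f, x) returns a template tag; tagging it with `` passes a
// TemplateStringsArray that is identical across evaluations of one call site.
function memoize(f, x) {
  return (strings) => {
    let cache = memoDicts.get(strings);
    if (!cache) {
      cache = new Map();
      memoDicts.set(strings, cache);
    }
    if (!cache.has(x)) {
      cache.set(x, f(x));
    }
    return cache.get(x);
  };
}

let invocations = 0;
const square = (x) => { invocations++; return x * x; };

const out = [];
for (let i = 0; i < 3; i++) {
  out.push(memoize(square, 4)``); // same template site on each iteration
}
// out is [16, 16, 16]; square ran only once.
```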
Is it really absurd? If the compiler can turn it into iteration, then it's a big boy compiler. If not, then meh?