August 12th, 2024

Quote-unquote "macros"

The blog post examines macros in the Janet programming language, using memoization as a running example to highlight what macros offer beyond ordinary functions: compile-time storage allocation and the significance of quoting and unquoting in macro development.

The blog post discusses the concept of macros in programming, particularly in the context of the Janet language. It begins with a familiar example of memoization in Python and JavaScript, illustrating how decorators and higher-order functions can be implemented. The author then shifts focus to the unique capabilities of macros, emphasizing their ability to provide more than just syntactic transformations. The post explores a specific memoization scheme that allows memoization to occur at the caller side rather than the callee side, which is challenging to implement in JavaScript. The author argues that macros can allocate storage for results at compile time, enabling more complex behaviors that are not possible with standard functions. The discussion includes a detailed examination of how quoting and unquoting work in Janet, demonstrating how to create a macro that retains a reference to a mutable value allocated at compile time. The author concludes by reflecting on the utility of the quote-unquote pattern in writing macros, noting its relevance in more complex scenarios, such as compiler development.
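
For concreteness, the "familiar example" the post opens with is callee-side memoization via a higher-order function. A minimal JavaScript sketch of that idea (my own illustration, not the post's code):

    // Callee-side memoization: wrap the function once, and every caller
    // shares one cache. Single-argument sketch, no eviction.
    function memoize(f) {
        const cache = new Map();
        return (x) => {
            if (!cache.has(x)) {
                cache.set(x, f(x));
            }
            return cache.get(x);
        };
    }

    // Because `fib` names the wrapped version, the recursive calls hit the cache too.
    const fib = memoize((n) => (n <= 1 ? n : fib(n - 1) + fib(n - 2)));

The harder variant the post builds toward, memoizing at the call site rather than at the definition, needs somewhere to keep per-call-site storage, which is what a Janet macro can allocate at compile time.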

- The blog explores the concept of macros in programming, particularly in Janet.

- It contrasts traditional memoization techniques with the potential of macros for more complex behaviors.

- The author emphasizes the importance of quoting and unquoting in macro development.

- The post illustrates how macros can allocate storage at compile time for dynamic behavior.

- The author reflects on the practical applications of the quote-unquote pattern in macro writing.

AI: What people are saying
The comments reflect a mix of reactions to the blog post about macros in the Janet programming language and the memoization example used to motivate them.
  • Some commenters express confusion about Lisp macros and their syntax, indicating a steep learning curve.
  • Several users share their own experiences with memoization in different programming languages, such as JavaScript and Rust.
  • There is a discussion about the complexities and potential pitfalls of using macros, with some labeling them as unmaintainable.
  • Some commenters appreciate the article's insights while others critique the example of Fibonacci recursion as overly complicated.
  • There are mentions of alternative approaches to macro functionality, such as John Shutt's Vau calculus, suggesting ongoing exploration in the field.
13 comments
By @tyg13 - 8 months
Every time I see a post from Lisp fans about macros, I want to be amazed, but I always just walk away confused. I can tell there's something interesting in there, but the quote-unquote-quasiquote syntax is just so dense that my brain is incapable of comprehending it.
By @homedirectory - 8 months
You can achieve memoization of an expression inside a function without any global state or macros:

    (defun compute-hash (key hash f)
      "Get the value for KEY in HASH or compute it with F, enter into HASH and return."
      (multiple-value-bind (val win)
          (gethash key hash)
        (if win
            val
            (setf (gethash key hash) (funcall f key)))))

    (defun memoized (f)
      (let ((cache (make-hash-table)))
        (flet ((memo (x g) (compute-hash x cache g)))
          (lambda (&rest args) (apply f #'memo args)))))

    (defun fib (n)
      (if (<= n 1)
          n
          (+ (fib (1- n)) (fib (- n 2)))))

    ;; MEMO is a function that takes a key and a computing function.
    ;; If a key has been memoized, it returns the cached value, otherwise it calls the computing
    ;; function with the key and caches the result.
    (let ((example (memoized (lambda (memo x)
                               (format t "X: ~a~%" x)
                               (let ((result (funcall memo x #'fib)))
                                 (format t "~a~%" (* 2 result)))))))
      (trace fib)
      (funcall example 5)
      (funcall example 5)
      (funcall example 5)
      (untrace fib))
By @anonymoushn - 8 months
Rust macros are sort of sufficient to do the kind of rewriting mentioned, but it's maybe cheating because you have to annotate the function with the macro, which allows the macro to mangle the whole function body.
By @crdrost - 8 months
> How do you implement `memoize`?

> I think that you basically can’t, in JavaScript. Or, more accurately: I can’t think of a way to do it.[1]

Oh, this is a case for WeakMaps right?

    const MemoCache = new WeakMap();
    function memoize(f, x) {
        const cache = MemoCache.get(f) || new Map()
        MemoCache.set(f, cache)
        if (!cache.has(x)) {
            cache.set(x, f(x))
        }
        return cache.get(x);
    }
Oh wait:

> 1. You could create a global memoization map keyed on the function that you’re calling, but this would actually have different semantics than I’m imagining. If I said `memoize(f, 1) + memoize(f, 1)` I would expect those to each invoke `f`, because instances of `memoize` shouldn’t share results. Why not? Because this is a fake example, and a global memoization is a different (easier!) thing than per-call-site memoization.

Like I get what you're saying but you could just cache the call site too?

    const MemoCache2 = new WeakMap();
    function memoize2(f, x) {
        const callsite = new Error().stack
        const macro_cache = MemoCache2.get(f) || {};
        const micro_cache = macro_cache[callsite] || new Map();
        macro_cache[callsite] = micro_cache;
        MemoCache2.set(f, macro_cache)

        if (!micro_cache.has(x)) {
            micro_cache.set(x, f(x))
        }
        return micro_cache.get(x);
    }
I admit this is something of a trick, though; it's trickery specifically to work around the fact that this person doesn't want to write `const my_f1 = memoize(f), my_f2 = memoize(f)` in some location on the screen. Precisely because people who write JavaScript are not accustomed to macros, they are not expecting `memoize(f, 1) + memoize(f, 1)` to be a proper memoization expression, and they aren't expecting weird stuff with WeakMaps and inspecting stack traces to identify call sites and all that.
By @deathanatos - 8 months
The associated article, linked at "Leaving aside the absurdity of computing Fibonacci numbers recursively,"[1] (which, yes, I agree), lists the various algorithms as (roughly):

  how to fibonacci           space complexity  time complexity
  -------------------------  ----------------  ---------------
  insane recursion           exponential       exponential
  memoized insane recursion  linear            linear
The space complexity of "insane recursion" without memoization is the maximum stack depth; the worst case stack is:

  fib(n)
  fib(n-1)
  fib(n-2)
  ...
  fib(1)
Which is n stack frames (and the stack frames are of constant size); the space complexity of the whole thing is thus linear in n. (While the call tree is itself exponential in size, the memory required is only the depth of that tree, since we can't call fib(n-1) & fib(n-2) simultaneously[2].)
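
A throwaway way to see this concretely (my own sketch, not from the comment): instrument the naive recursion and record the deepest live call chain.

    // The number of calls grows exponentially, but the number of frames
    // alive at any one moment (and hence the stack space) is only n.
    let depth = 0;
    let maxDepth = 0;

    function fib(n) {
        depth += 1;
        maxDepth = Math.max(maxDepth, depth);
        const result = n <= 1 ? n : fib(n - 1) + fib(n - 2);
        depth -= 1;
        return result;
    }

    fib(20);
    console.log(maxDepth); // prints 20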

(The time complexity is, of course, exponential, and I agree with the "insane" moniker. I also like your comment elsewhere in this thread about people hyperfocusing on the example and missing the larger point of the article … and I'm so sorry but I've been sniped by this.)

[1]: https://ianthehenry.com/posts/fibonacci/

[2]: the little demons in my mind are now trying to scheme up an insaner recursion that attempts this. Threads maybe?

By @spankalee - 8 months
In JavaScript, tagged template literals have the ability to identify the callsite. This is really powerful and is used to create template identity in lit-html.

I've wanted the ability to reference the callsite in functions, and lobbied the V8 team for something like arguments.callsite, but was (probably rightly) politely told no.

But if you're willing to abuse tagged template literal syntax, they're really just function calls, so you can do something like:

    const dumbExample = (x) => {
        while (someComplicatedStuffHappens()) {
            pretendLikeThisFunctionIsBig();
        }

        const result = memoize(doSomethingVeryExpensive, x)``;

        doMoreInterestingWork();
    }
memoize() must return a template tag function, which will be invoked with a TemplateStringsArray (because of the ``); that array can act as a callsite identifier and serve as the key into a WeakMap of memoization dictionaries.
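
A rough sketch of what that memoize could look like (my own illustration, not the commenter's code), leaning on the fact that the strings array for a given `` literal is the same object every time that line of source runs:

    // Hypothetical per-callsite memoization keyed on TemplateStringsArray identity.
    const callSiteCaches = new WeakMap();

    function memoize(f, x) {
        // The returned tag receives the identity-stable strings array for its ``.
        return (strings) => {
            let cache = callSiteCaches.get(strings);
            if (!cache) {
                cache = new Map();
                callSiteCaches.set(strings, cache);
            }
            if (!cache.has(x)) {
                cache.set(x, f(x));
            }
            return cache.get(x);
        };
    }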

It's mostly a curiosity because who wants that syntax, but it's interesting that JavaScript does have the special power hidden behind one feature.

By @pxc - 8 months
Wow. I haven't really played with Lisp since college. But I just started reading The Little Schemer with some friends, and hope to move on to SICP some time this year or next. This blog post made me a little dizzy, but also a little excited about what I'm hoping to explore with these lessons.
By @artemonster - 8 months
Isn't this something that John Shutt solved with his Vau calculus? Basically, each "macro" (actually kinda like an fexpr) invocation creates its own static environment, which neatly solves all the hygiene problems and the problems outlined in this article?
By @markovs_gun - 8 months
I am going to be honest I didn't really understand what an eigenvalue was until reading this. I'd read the definition but like I didn't really understand why you'd care about that. This was a great article
By @MathMonkeyMan - 8 months
Programmer uses lisp macro to invent new keyword. It's a beautiful thing.
By @cryptonector - 8 months
> Leaving aside the absurdity of computing Fibonacci numbers recursively

Is it really absurd? If the compiler can turn it into iteration, then it's a big boy compiler. If not, then meh?

By @dools - 8 months
""macros""
By @29athrowaway - 8 months
Macros are an unmaintainable mess.