July 28th, 2024

LeanDojo: Theorem Proving in Lean Using LLMs

LeanDojo enhances theorem proving in Lean with retrieval-augmented language models. It introduces Lean Copilot for tactic suggestions and proof search, and includes benchmarks for training machine learning models on theorem proving.


LeanDojo is a framework designed to enhance theorem proving in Lean using retrieval-augmented large language models (LLMs). It introduces Lean Copilot, which assists users by suggesting tactics and searching for proofs, and supports both local and cloud-based models. LeanDojo extracts proofs from Lean into datasets for training machine learning models, and lets those models interact with Lean's proof environment to prove theorems.

The framework includes two benchmarks, LeanDojo Benchmark and LeanDojo Benchmark 4, consisting of extensive collections of theorems, proofs, tactics, and premises from math libraries. A key feature is the fine-grained annotation of premises, which aids in premise selection, a critical aspect of theorem proving. LeanDojo also guards against performance overestimation by using a challenging data split that requires models to generalize to new premises. The environment allows interactive observation of proof states and feedback, which is essential for training and evaluating provers.

Experiments show that the ReProver model, trained on LeanDojo Benchmark, outperforms existing proof automation tactics. ReProver has also discovered new proofs and identified formalization bugs in theorem statements. A ChatGPT plugin for LeanDojo enables informal interaction with theorem proving, though it struggles to find correct proofs compared to specialized models. The team behind LeanDojo includes researchers from several institutions.
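As a rough illustration of what "suggesting tactics and searching for proofs" looks like in practice, here is a minimal sketch based on the tactic names documented in the Lean Copilot repository (`suggest_tactics` and `search_proof`); the project setup (adding LeanCopilot as a Lake dependency) is assumed and omitted:

```lean
import LeanCopilot

-- Ask Lean Copilot for tactic suggestions at the current goal;
-- candidates appear in the infoview and can be clicked to insert.
example (a b : Nat) : a + b = b + a := by
  suggest_tactics

-- Or let it search for a complete proof of the goal automatically.
example (a b : Nat) : a + b = b + a := by
  search_proof
```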

8 comments
By @worldsayshi - 6 months
Victor Taelin is doing some semi-related stuff with Claude and their home-built proof language Kind2:

https://x.com/VictorTaelin/status/1811167900780175423

Can recommend taking a look at their recorded Twitch stream to see it in action.

By @maxwells-daemon - 6 months
Second author here. Happy to answer any questions about the work!
By @simonw - 6 months
Useful context: https://en.m.wikipedia.org/wiki/Lean_(proof_assistant) - "Lean is a proof assistant and a functional programming language. It is based on the calculus of constructions with inductive types. "
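To make that quoted definition concrete: in Lean, the same inductive types drive both computation and proof, so a data type, a function over it, and a theorem about that function all live in one language. A minimal sketch (standard Lean 4 syntax; `MyNat` is a toy stand-in for the built-in `Nat`):

```lean
-- An inductive type, in the style of the calculus of constructions
-- with inductive types that the Wikipedia quote mentions.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

-- A function defined by pattern matching (the programming-language side).
def add : MyNat → MyNat → MyNat
  | MyNat.zero,   n => n
  | MyNat.succ m, n => MyNat.succ (add m n)

-- A theorem about that function (the proof-assistant side);
-- `rfl` works because `add MyNat.zero n` reduces to `n` by definition.
theorem add_zero_left (n : MyNat) : add MyNat.zero n = n := rfl
```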
By @fsndz - 6 months
What's a real-life use case of theorem proving? I really want to learn more about it, but it always feels like an abstract thing that people do because they like solving puzzles. Does it help with the reliability challenge of current LLMs (https://www.lycee.ai/blog/ai-reliability-challenge)?
By @brotchie - 6 months
How good is Lean at assisting the analytical solution to PDEs?

10+ years out from a Finance PhD where I ended up using numerical methods because I really didn't have the math skills to prove closed-form solutions.

Would love to know, starting with a stochastic differential equation, how far you can go re: applying Ito's lemma and working through the math to get to a closed-form solution (using Lean).

Is the main advantage of Lean (ignoring LLM assistance) that you build up declarative code that, as you incrementally work on the proof, guarantees that the proof-so-far is correct?

So you're still "working through the math" but rather than un-executable math notation written on a pad, you have "by induction" a guaranteed valid argument up to the point you are at in the proof?

Just trying to build a mental model of Lean > pen and pad.
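That mental model is roughly right: each intermediate step is checked by Lean's kernel as you write it, and `sorry` marks the obligations still owed, so everything before it is already machine-verified. A minimal sketch of that incremental workflow (toy goal chosen for illustration):

```lean
-- Each `have` is checked immediately against the kernel;
-- `sorry` marks the unfinished part, so the proof-so-far is verified.
example (a b : Nat) (h : a = b) : a + a = b + b := by
  have h1 : a + a = a + b := by rw [h]
  have h2 : a + b = b + b := by rw [h]
  sorry -- remaining step: chain h1 and h2, e.g. `rw [h1, h2]`
```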

By @thomasahle - 6 months
I wonder if they could integrate the reinforcement learning approach from AlphaProof (this week). Having an IMO-silver-level proof copilot would be pretty neat!
By @wolfspider - 6 months
I was recently using Low* with ChatGPT and amazed it could actually explain it to me so I’m looking forward to using this.
By @altkjg - 6 months
Glad to see that major pieces of work like Lean or Wolfram Alpha are getting attention because LLMs utilize them.

Still not convinced that LLMs do anything other than rearrange other people's work.

Effects can already be seen: The Washington Post used to display articles when found via Google, now you get a paywall. And I can no longer criticize them for it.