July 6th, 2024

Markdown: An effective tool for LLM interaction

Introducing 'Mark', a Markdown CLI tool designed for seamless interaction with GPT-4o models. Mark lets users build in-document threaded conversations with GPT, incorporating responses back into the document itself. It supports image tags for visual context and link references for RAG-style retrieval, and it is extensible: users can pipe content through standard input to get GPT responses. Because Markdown plays well with version control systems and modern IDEs, it makes a natural interface to LLMs; its simplicity and structure let users emphasize the key parts of a prompt and provide context for better responses.

Mark's key features include in-document thread building, GPT Vision via image tags, RAG using links, and custom system prompts. Installation requires an OpenAI API key and Python 3.10+. With use cases ranging from extracting data from screenshots to getting personalized coding help, Mark streamlines interactions with GPT-4o, offering a robust and efficient solution for developers and writers.
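
To make the workflow concrete, here is a minimal sketch of the core idea: append an LLM reply beneath the Markdown prompt it was given. This is my own illustration, not taken from Mark's source; the response heading and the internals are assumptions. It uses the official openai Python package and expects OPENAI_API_KEY in the environment.

# Minimal sketch of the "append the reply into the document" idea.
# Hypothetical illustration: Mark's real delimiters and internals may differ.
import sys

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY

def respond_in_document(path: str) -> None:
    with open(path, "r", encoding="utf-8") as f:
        document = f.read()

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": document}],
    )

    # Append the response under a heading so the file stays a readable thread.
    with open(path, "a", encoding="utf-8") as f:
        f.write("\n\n# GPT Response\n\n")
        f.write(reply.choices[0].message.content or "")

if __name__ == "__main__":
    respond_in_document(sys.argv[1])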

7 comments
By @mickmcq - 5 months
A couple of notes on the blog post: the example

echo "Please explain this code: $(cat some_class.py)" | mark

needs a dash at the end to work correctly. Also, it doesn't output pandoc-flavored markdown (blank lines before headings and code chunks) unless I specifically ask it to, as in:

echo "Please explain this code, using pandoc-flavored markdown, leaving a blank line before headings and code chunks: $(cat some_class.py)" | mark -

By @mark_l_watson - 5 months
This looks very good; I was just reading the code on GitHub.

I mostly use local models. I might modify 'mark' myself, or wait a while and see if anyone does a pull request.

A little off topic, but I run ollama at the command line using:

echo "what is 1 + 3?" | ollama run llama3:latest

By @lioeters - 5 months
I wonder if the CLI could have a "watch mode" where it watches a file or directory and automatically appends the response as you edit and save a Markdown file. I'm not sure how well it would work in practice, but it seems like it could be an interesting alternative to the "chat" format.
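
A rough sketch of such a watch mode (an illustration with assumptions, not part of Mark): it relies on the third-party watchdog package and the "mark -" stdin form noted in the first comment. A real version would need proper debouncing so it reliably ignores its own appends.

# Sketch of a watch mode around mark, assuming the watchdog package
# (pip install watchdog) and that "mark -" reads the prompt from stdin.
import subprocess
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class MarkdownHandler(FileSystemEventHandler):
    def __init__(self):
        super().__init__()
        self._busy = set()  # naive guard against reacting to our own append

    def on_modified(self, event):
        path = event.src_path
        if event.is_directory or not path.endswith(".md") or path in self._busy:
            return
        self._busy.add(path)
        try:
            with open(path, "r", encoding="utf-8") as f:
                doc = f.read()
            # Pipe the saved document through mark and append the reply.
            result = subprocess.run(["mark", "-"], input=doc,
                                    capture_output=True, text=True)
            with open(path, "a", encoding="utf-8") as f:
                f.write("\n\n" + result.stdout)
        finally:
            self._busy.discard(path)

observer = Observer()
observer.schedule(MarkdownHandler(), path=".", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()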
By @nickfixit - 5 months
Fabric and this!!! This is promising to build on.

danielmiessler.com/p/fabric-origin-story

Together with Obsidian, this is the setup I'm trying to build now. I'm using Obsidian to plan the vector and metadata to pull and reference with the assistants, and I'm building function tools to query them.

By @oakpond - 5 months
A similar tool for llama.cpp: https://tildegit.org/unworriedsafari/mill.py
By @tmaier - 5 months
This looks cool. It would be great if this got integrated into an Obsidian plugin.
By @eterps - 5 months
See also: https://news.ycombinator.com/item?id=40866228

I think Ryan Elston's blog post is more effective in explaining the advantages of markdown for LLM interaction.