June 25th, 2024

LLMs on the Command Line

Simon Willison presented a Python command-line utility for accessing Large Language Models (LLMs) efficiently, supporting OpenAI models and plugins for various providers. The tool enables running prompts, managing conversations, accessing specific models like Claude 3, and logging interactions to a SQLite database. Willison highlighted using LLM for tasks like summarizing discussions and emphasized the importance of embeddings for semantic search, showcasing LLM's support for content similarity queries and extensibility through plugins and OpenAI API compatibility.

Simon Willison gave a talk at the Mastering LLMs conference about accessing Large Language Models (LLMs) from the command-line. He introduced his LLM Python command-line utility for exploring and utilizing LLMs efficiently. The tool supports OpenAI models and plugins for various providers, enabling users to run prompts and manage conversations easily. Additionally, Simon demonstrated using plugins like llm-claude-3 to access models such as Claude 3 Opus and Claude 3 Haiku. The LLM tool logs all prompts and responses to a SQLite database, allowing for easy browsing and analysis. Simon also discussed running local models, using plugins like llm-gpt4all and llm-ollama, and highlighted llamafile for bundling models and software into a single executable file. Furthermore, he shared insights on using LLM for tasks like summarizing Hacker News discussions and scraping websites. Simon emphasized the importance of embeddings for semantic search and showcased how LLM supports embedding content for similarity queries. Lastly, he mentioned the extensibility of LLM through plugins and its compatibility with OpenAI API endpoints.
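The workflow described above maps onto a handful of commands. This is a hedged sketch: the plugin and model names match what was shown in the talk, but they may have changed since, and it assumes the tool is installed and API keys are configured.

```shell
# Install the tool and store an OpenAI API key
pipx install llm
llm keys set openai

# Run a prompt against the default OpenAI model
llm 'Summarize the plot of Hamlet in one sentence'

# Plugins add other providers, e.g. Anthropic's Claude 3 models
llm install llm-claude-3
llm -m claude-3-opus 'Write a haiku about the command line'

# Every prompt/response pair is logged to a SQLite database
llm logs -n 3

# Embeddings: embed files into a collection, then run similarity queries
llm embed-multi docs --files . '*.md'
llm similar docs -c 'semantic search'
```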

Related

Testing Generative AI for Circuit Board Design

A study tested Large Language Models (LLMs) like GPT-4o, Claude 3 Opus, and Gemini 1.5 for circuit board design tasks. Results showed varied performance, with Claude 3 Opus excelling in specific questions, while others struggled with complexity. Gemini 1.5 showed promise in parsing datasheet information accurately. The study emphasized the potential and limitations of using AI models in circuit board design.

GitHub – Karpathy/LLM101n: LLM101n: Let's Build a Storyteller

The GitHub repository "LLM101n: Let's build a Storyteller" offers a course on creating a Storyteller AI Large Language Model using Python, C, and CUDA. It caters to beginners, covering language modeling, deployment, programming, data types, deep learning, and neural nets. Additional chapters and appendices are available for further exploration.

Researchers describe how to tell if ChatGPT is confabulating

Researchers at the University of Oxford devised a method to detect confabulation in large language models like ChatGPT. By assessing semantic equivalence, they aim to reduce false answers and enhance model accuracy.

How to run an LLM on your PC, not in the cloud, in less than 10 minutes

You can easily set up and run large language models (LLMs) on your PC using tools like Ollama, LM Studio, and Llama.cpp. Ollama supports AMD GPUs and AVX2-compatible CPUs, with straightforward installation across different systems. It offers commands for managing models and now supports select AMD Radeon cards.
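The Ollama flow mentioned above is only a few commands. A sketch (model names change frequently; `llama3` here is an assumption):

```shell
ollama pull llama3                        # download the model weights
ollama run llama3 'Why is the sky blue?'  # one-shot prompt; omit the prompt for a REPL
ollama list                               # show locally installed models
```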

Delving into ChatGPT usage in academic writing through excess vocabulary

A study by Dmitry Kobak et al. examines ChatGPT's impact on academic writing, finding increased usage in PubMed abstracts. Concerns arise over accuracy and bias despite advanced text generation capabilities.

10 comments
By @simonw - 4 months
This was a workshop I gave on my https://llm.datasette.io/ CLI tool.

What other CLI tools are people using to work with LLMs in the terminal?

There's one comment here about https://github.com/paul-gauthier/aider and Ollama is probably the most widely used CLI tool at the moment: https://github.com/ollama/ollama/blob/main/README.md#quickst...

By @dvt - 4 months
Fantastic work here! I'm working on a local tool, affectionately called Descartes, which does something similar—but with a spotlight-like UX for the non-hackers out there.

I do think that LLMs have the potential to fundamentally change the way we interact with our computers. There are a lot of edge cases (especially when combining it with the inexact science of screen readers), but it's pretty mind-blowing when it works. I'm working on a blog post, but here's my little proof of concept working on both Windows in a web browser [1] and macOS in the Finder [2].

[1] https://vimeo.com/931907811

[2] https://dvt.name/wp-content/uploads/2024/04/image-11.png

By @bagels - 4 months
I wrote one to help with creating command-line commands. It just hits the OpenAI API with a prompt asking for a bare code block to run in bash, plus whatever is passed in, and then prints the command out. I wrote it because I can never remember all the weird command args for all the tools.

$ bashy find large files over 10 gb

find / -type f -size +10G
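A minimal sketch of a wrapper like this. The function name and model are hypothetical; it assumes `OPENAI_API_KEY` is set and that `curl` and `jq` are installed.

```shell
# bashy: ask the OpenAI chat completions API for a single bash command.
# Hypothetical sketch, not the commenter's actual implementation.
bashy() {
  local payload response
  # Build the request body safely with jq (handles quoting in the question)
  payload=$(jq -n --arg q "$*" '{
    model: "gpt-4o-mini",
    messages: [
      {role: "system",
       content: "Reply with a single bash command only. No prose, no code fences."},
      {role: "user", content: $q}
    ]
  }')
  response=$(curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$payload")
  # Extract just the command text from the first choice
  printf '%s\n' "$response" | jq -r '.choices[0].message.content'
}
```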

By @bearjaws - 4 months
Aider continues to be the best way to interact with LLMs while coding, and it's a command-line tool.

Copilot is pretty good, but the change > commit > QA process that Aider forces you through is really powerful.

By @jillesvangurp - 4 months
> We have implemented basic RAG—Retrieval Augmented Generation, where search results are used to answer a question—using a terminal script that scrapes search results from Google and pipes them into an LLM.

I love this. Simple and effective. RAG is just search leveled up with LLMs. Such an obvious thing to do. We know how to do search and can use it to unlock vast amounts of knowledge. Instead of letting LLMs dream up facts by compressing all knowledge into them, a better use is letting them summarize and reason about the facts they find. IMHO the art is actually going to be in letting them come up with the right query as well. Or queries. They could be a lot more exhaustive in their searches than we could be.
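The pipeline in the quote is roughly the following. This is a sketch: `search-scrape` is a hypothetical stand-in for the scraping step, and the `llm` CLI from the talk does the answering.

```shell
QUESTION='What is shot-scraper?'

# 1. Scrape search results for the question (hypothetical helper).
# 2. Pipe them into the LLM as context, with a system prompt that
#    constrains the answer to the retrieved text.
search-scrape "$QUESTION" \
  | llm -s 'Answer the question using only the provided context.' \
        "Question: $QUESTION"
```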

By @liamYC - 4 months
This is awesome, thanks for sharing. I find this kind of tool really useful, Aider in particular. I made my own CLI tool for interacting with GPT. It's really useful with the -c flag for generating code, especially bash commands I've forgotten.

https://github.com/ljsimpkin/chat-gpt-cli-tool

By @jgalt212 - 4 months
Every time I see an LLM demo, I'm blown away. Every time I use one for myself, I feel like a fool.

I say this because the scraper demo bit looks very neat, but I've been down this path before and I don't want to waste my time getting bad or deceptively incorrect results.

By @behnamoh - 4 months
I wish llm were more stable, but unfortunately things just kept breaking out of the blue without me touching any settings of the program. I often had to reinstall the package, but finally I gave up and implemented my own.

By @qiakai - 4 months
> What other CLI tools are people using to work with LLMs in the terminal?

I personally love using x-cmd. Small size (1.1MB), open source, interactive operation

[1] https://www.x-cmd.com/

[2] https://www.x-cmd.com/mod/openai