July 18th, 2024

Show HN: Llm2sh – Translate plain-language requests into shell commands

The `llm2sh` utility translates plain-language requests into shell commands using LLMs such as OpenAI's GPT models and Anthropic's Claude. It offers customizable configuration, a YOLO mode, and extensibility. Installation via `pip` is simple, and user privacy is prioritized. Contributions to the GPLv3-licensed project are welcome, and users should review generated commands before execution. See the GitHub repository for details.


The `llm2sh` command-line utility translates plain-language requests into shell commands using various Large Language Models (LLMs), such as OpenAI's GPT models and Anthropic's Claude. It offers customizable configuration, a YOLO mode for quick command execution, and extensibility to new LLMs and prompts. Installation is as simple as `pip install llm2sh`, and commands are run with `llm2sh [options] <request>`. The tool emphasizes user privacy by not storing data itself, although the LLM APIs it calls may retain requests. `llm2sh` is experimental, and users are advised to review generated commands before executing them. Contributions to the GPLv3-licensed project are encouraged, and more information is available on the project's GitHub repository.
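Based on the summary above, a typical session might look like the following. This is a sketch: the request string and the generated command are illustrative, not taken from the project's README.

```shell
# Install from PyPI
pip install llm2sh

# General form: llm2sh [options] <request>
# By default, llm2sh lists the generated command(s) and asks for
# confirmation before running anything.
llm2sh "list the five largest files in my home directory"
```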

AI: What people are saying
The `llm2sh` utility drew a mix of feedback and suggestions from commenters.
  • Some users find the tool promising but question its efficiency compared to traditional command syntax.
  • There are suggestions for sandboxing and making the tool more secure, such as using Docker containers.
  • Several users appreciate the GPLv3 license, easy installation via `pip`, and good documentation.
  • Questions arise about the tool's ability to handle specific commands and its comparison to other similar tools.
  • Some users express interest in extending the tool to support custom or local APIs like llama.cpp.
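The sandboxing idea raised in the comments could be sketched as follows. This is illustrative only; the image, mount, and flag choices are assumptions, not part of `llm2sh` itself.

```shell
# Run a generated command inside a throwaway Docker container:
# --network=none blocks network access, and mounting the project
# directory read-only prevents the container from modifying host files.
GENERATED_CMD='ls -la /work'
docker run --rm \
  --network=none \
  -v "$PWD":/work:ro \
  -w /work \
  ubuntu:24.04 \
  sh -c "$GENERATED_CMD"
```

The `--rm` flag discards the container afterward, which is close to the "rewindable" property one commenter describes, though true rollback of side effects would need snapshotting or an overlay filesystem.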
11 comments
By @padolsey - 3 months
Cool! I’m experimenting with something like this that uses docker containers to ensure it’s sandboxed. And, crucially, rewindable. And then I can just let it do ~whatever it wants without having to verify commands myself. Obviously it’s still risky to let it touch network resources but there’s workarounds for that.
By @yjftsjthsd-h - 3 months
Some really nice things:

+ GPLv3

+ Defaults to listing commands and asking for confirmation

+ Install is just "pip install"

+ Good docs with examples

Is there a way to point at an arbitrary API endpoint? IIRC llama.cpp can serve an OpenAI-compatible API, so it should be a drop-in?

By @conkeisterdoor - 3 months
This looks great! I would use this if you had a dispatcher for using a custom/local OpenAI-compatible API like eg llama.cpp server. If I can make some time I'll take a stab at writing one and submit a PR :)
By @causal - 3 months
This looks good.

I created something similar using blade a while back, but I found that using English to express what I want was actually really inefficient. It turns out that for most commands, the command syntax is already a pretty expressive format.

So nowadays I'm back to using a chat UI (Claude) for the scenarios where I need help figuring out the right command. Being able to iterate is essential in those scenarios.

By @MuffinFlavored - 3 months
How much time does this gain you from the perspective of "you have to double check its output and hope it didn't make a mistake"?
By @llagerlof - 3 months
Nice tool. I am using ai-shell for that purpose.

https://github.com/BuilderIO/ai-shell

By @Y_Y - 3 months
There are plenty of variations on this tool around; it would be nice to see a comparison.
By @amelius - 3 months
Does it understand commands such as "get this Nvidia driver to work"?
By @francisduvivier - 3 months
Wonder how this compares to open interpreter.
By @fire_lake - 3 months
Would you consider rewriting this in a language that is more portable? Ideally this would be a single binary, excluding the models!