Basic ReAct webapp using FastHTML and LangGraph
The "curiosity" GitHub repository experiments with ReAct chatbots using LangGraph and FastHTML, integrating Tavily for search, facing challenges with WebSocket and SQLite, and requiring specific setup steps.
The GitHub repository titled "curiosity" is a project dedicated to experimenting with ReAct chatbots, focusing on technologies such as LangGraph and FastHTML to create a user experience akin to Perplexity. It employs a ReAct agent that integrates with Tavily for enhanced search capabilities and uses OpenAI's GPT-4o-mini alongside a locally hosted llama3.1 model for text generation. The project has encountered challenges, particularly with WebSocket connections and SQLite persistence, while streaming tokens from the language model to the frontend. The frontend, built with FastHTML, delivers a fast user experience but has also been difficult to debug. To set up the project, users need to clone the repository, ensure they have a recent Python 3 interpreter, create a virtual environment, install the required packages, configure an .env file with the necessary API keys, and run the application. The repository also features a visual representation of the project in its README.
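As a rough illustration of the stack described above, here is a minimal sketch of a LangGraph ReAct agent wired to Tavily search and OpenAI's gpt-4o-mini, with API keys loaded from an .env file. It uses LangGraph's prebuilt `create_react_agent` and an in-memory checkpointer; the repository's actual graph, prompts, and persistence layer may differ.

```python
# Minimal sketch (not the repo's actual code): a LangGraph ReAct agent with
# Tavily search and gpt-4o-mini, keys loaded from a local .env file.
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver

load_dotenv()  # expects OPENAI_API_KEY and TAVILY_API_KEY in .env

llm = ChatOpenAI(model="gpt-4o-mini", streaming=True)
search = TavilySearchResults(max_results=3)  # Tavily web-search tool

# In-memory checkpointer for illustration; the project experiments with SQLite persistence.
agent = create_react_agent(llm, tools=[search], checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo"}}
result = agent.invoke(
    {"messages": [("user", "What is FastHTML and who built it?")]},
    config,
)
print(result["messages"][-1].content)
```

Swapping the in-memory checkpointer for a SQLite-backed one is roughly where the summary notes the project ran into persistence trouble.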
- The project explores ReAct chatbots using LangGraph and FastHTML.
- It integrates with Tavily for improved search capabilities.
- Challenges include WebSocket connections and SQLite persistence; a sketch of the streaming piece follows this list.
- Setup requires Python 3, virtual environment, and specific API keys.
- A visual representation of the project is included in the README.
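The WebSocket streaming challenge called out above can be sketched as follows: a FastHTML app that opens a WebSocket, streams tokens from the model as they arrive, and appends them to the page via htmx out-of-band swaps. The route names, element ids, and the direct call to the model (rather than the full agent graph) are illustrative assumptions, not the repository's actual code.

```python
# Hedged sketch of token streaming over a FastHTML WebSocket (illustrative only).
from fasthtml.common import *
from langchain_openai import ChatOpenAI

app, rt = fast_app(exts='ws')  # enable the htmx WebSocket extension (older FastHTML versions use ws_hdr=True)
llm = ChatOpenAI(model="gpt-4o-mini", streaming=True)

@rt('/')
def get():
    # A chat log plus an input form wired to the WebSocket endpoint below.
    return Titled("Curiosity-style chat",
        Div(id="chatlog"),
        Form(Input(id="msg", name="msg", placeholder="Ask something..."),
             ws_send=True),
        hx_ext="ws", ws_connect="/ws")

@app.ws('/ws')
async def ws(msg: str, send):
    # Echo the user's message, then stream model tokens as they arrive.
    # Each send appends to #chatlog via an htmx out-of-band "beforeend" swap.
    await send(Div(Strong(f"You: {msg}"), id="chatlog", hx_swap_oob="beforeend"))
    async for chunk in llm.astream(msg):
        if chunk.content:
            await send(Div(Span(chunk.content), id="chatlog", hx_swap_oob="beforeend"))

serve()
```

Streaming token-by-token like this, while also checkpointing conversation state, is presumably where the WebSocket and SQLite difficulties mentioned in the summary arise.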
Related
Show HN: Chrome extension that brings Claude Artifacts for ChatGPT
The GitHub URL provides details on "Artifacts for ChatGPT," covering functionality, inspiration, and future plans. Installation guidance is available, with additional support offered upon request.
Show HN: a Rust lib to trigger actions based on your screen activity (with LLMs)
The GitHub project "Screen Pipe" uses Large Language Models to convert screen content into actions. Implemented in Rust + WASM, inspired by `adept.ai`, `rewind.ai`, and `Apple Shortcut`. Open source under MIT license.
Vercel AI SDK: RAG Guide
Retrieval-augmented generation (RAG) chatbots enhance Large Language Models (LLMs) by accessing external information for accurate responses. The process involves embedding queries, retrieving relevant material, and setting up projects with various tools.
PyTorch – Torchchat: Chat with LLMs Everywhere
The torchchat GitHub repository enables execution of large language models using PyTorch on multiple platforms, supporting models like Llama 3 and Mistral, with features for chatting, text generation, and evaluation.
Show HN: Engine Core – open-source LLM chat management and tool call framework
Engine Core is a GitHub repository that enables Large Language Models to use dynamic prompts and tool functions. It supports various LLM integrations and encourages user contributions under the Apache 2.0 License.