June 26th, 2024

Show HN: R2R V2 – An open source RAG engine with prod features

The R2R GitHub repository offers an open-source RAG answer engine for scalable systems, featuring multimodal support, hybrid search, and a RESTful API. It includes installation guides, a dashboard, and community support. Developers benefit from configurable functionalities and resources for integration. Full documentation is available on the repository for exploration and contribution.


The R2R GitHub repository hosts an open-source RAG answer engine aimed at facilitating the transition from local LLM experimentation to scalable RAG systems. It offers a robust RAG system with features like multimodal support, hybrid search, and graph RAG, all accessible through a RESTful API. The repository contains installation instructions, quickstart guides, a user-friendly dashboard, and community support details. Developers can benefit from configurable and extensible functionalities, along with client-server support for seamless integration. Additionally, the repository provides resources such as documentation, cookbooks, and examples to aid in utilizing various R2R features and integrations. For those interested in exploring further or contributing, the full documentation is available on the R2R GitHub Repository.
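For orientation, the client-server flow described above roughly reduces to "ingest documents, then search or run RAG over them." A minimal sketch, assuming the r2r package exposes an R2RClient with ingest/search/RAG methods as the docs describe (exact method names, arguments, and the server port may differ by version):

```python
# Rough sketch of the client-server flow: ingest, search, and full RAG.
# The R2RClient method names and arguments here are assumptions based on the
# quickstart docs and may differ between versions.
from r2r import R2RClient

client = R2RClient("http://localhost:8000")  # assumed address of a running R2R server

# Ingest a couple of local files into the engine.
client.ingest_files(file_paths=["report.pdf", "notes.md"])

# Plain semantic / hybrid search over the ingested documents.
results = client.search(query="What were the key findings?")
print(results)

# Full RAG: retrieve relevant chunks and generate an answer with the configured LLM.
answer = client.rag(query="Summarize the key findings in two sentences.")
print(answer)
```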

20 comments
By @hubraumhugo - 4 months
Do you also see the ingestion process as the key challenge for many RAG systems to avoid "garbage in, garbage out"? How does R2R handle accurate data extraction for complex and diverse document types?

We have a customer who has hundreds of thousands of unstructured and diverse PDFs (containing tables, forms, checkmarks, images, etc.), and they need to accurately convert these PDFs into markdown for RAG usage.

Traditional OCR approaches fall short in many of these cases, so we've started using a combined multimodal LLM + OCR approach that has led to promising accuracy and consistency at scale (ping me if you want to give this a try). The RAG system itself is not a big pain point for them, but the accurate and efficient extraction and structuring of the data is.
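For illustration, one way such a hybrid OCR + multimodal-LLM pass might look for a single PDF page, assuming pdf2image/pytesseract for the OCR layer and an OpenAI-style vision model for reconciliation (the model name and prompt are illustrative, not a description of the commenter's actual pipeline):

```python
# Sketch of a combined OCR + multimodal-LLM extraction pass for one PDF page.
# Assumes pdf2image (needs poppler), pytesseract, and the openai SDK are installed.
import base64
import io

import pytesseract
from pdf2image import convert_from_path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

pages = convert_from_path("report.pdf", dpi=200)
page = pages[0]

# 1) Traditional OCR gives a rough text layer.
ocr_text = pytesseract.image_to_string(page)

# 2) The multimodal model sees both the page image and the OCR text and is
#    asked to reconcile them into clean markdown (tables, forms, checkmarks).
buf = io.BytesIO()
page.save(buf, format="PNG")
image_b64 = base64.b64encode(buf.getvalue()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Convert this page to markdown. Use the OCR text below as a "
                     "hint, but trust the image for tables, forms, and checkmarks.\n\n"
                     f"OCR text:\n{ocr_text}"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
markdown = response.choices[0].message.content
print(markdown)
```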

By @jonathan-adly - 4 months
This is excellent. I have been running a very similar stack for 2 years, and you've got all the tricks of the trade: pgvector, HyDE, web search + document search. Good dashboard with logs and analytics.

I am leaving my position, and I recommended this to basically replace me with a junior dev who can just hit the API endpoints.

By @vanillax - 4 months
The quick start is definitely not quick. You really should provide a batteries-included docker compose with the Postgres image (docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0).

If I want to use the dashboard I have to clone another repo? 'git clone git@github.com:SciPhi-AI/R2R-Dashboard.git'? Why not make it available as a Docker container, so that if I'm only interested in RAG I can just plug the dashboard container in?

This project feels like a collection of a lot of things that isn't really providing any extra ease to development. It feels more like joining a new company and trying to find all the repos and set everything up.

This really looks cool, but I'm struggling to figure out if it's an SDK or a suite of apps, or both. In the latter case the suite of apps is really confusing: if I still have to write all the Python, then it feels more like an SDK?

Perhaps provide a better "1 click" install experience to preview/showcase all the features, and then let devs leverage R2R later...

By @ldjkfkdsjnv - 4 months
This looks great, will be giving it a shot today. Not to throw cold water on the release, but I have been looking at different RAG platforms. Anyone have any insight into which is the flagship?

It really seems like document chunking is not a problem that can be solved well generically. And RAG really hinges on which documents get retrieved/the correct metadata.

Current approaches around this seem to use a reranker, where we fetch a ton of information and prune it down. But still, document splitting is tough, especially when you start to add transcripts of videos that can be a few hours long.
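The fetch-a-lot-then-prune pattern mentioned here usually comes down to over-retrieving candidates and letting a cross-encoder score each (query, passage) pair. A small sketch with sentence-transformers (the model choice and example passages are just placeholders):

```python
# Retrieve-then-rerank sketch: over-fetch candidates, then let a cross-encoder
# score each (query, passage) pair and keep only the best few.
from sentence_transformers import CrossEncoder

query = "What analytics events fire on checkout?"
candidates = [
    "The checkout button triggers the purchase_completed event.",
    "Our logo was redesigned in 2021.",
    "Clicking submit also sends a cart_value metric to the warehouse.",
    # ... typically 50-100 candidates from the first-stage retriever
]

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, passage) for passage in candidates])

# Keep the top-k passages for the prompt.
top_k = sorted(zip(scores, candidates), reverse=True)[:2]
for score, passage in top_k:
    print(f"{score:.3f}  {passage}")
```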

By @SubiculumCode - 4 months
I've been interested in building a RAG system for my documents, but as an academic project I do not have the funds to spend on the costly APIs that a lot of RAG projects out there depend on, not just for the LLM part, but for the reranking, chunking, etc., like those from Cohere.

Can R2R be built with all processing steps using local "open" models?
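Not an answer for R2R's own provider configuration (that is in its docs), but for context, the embedding and retrieval side of such a stack can run entirely on local open models. A minimal sketch with sentence-transformers, no hosted API involved:

```python
# Fully local embedding + retrieval sketch using an open model. Generation
# could similarly be served locally (e.g. via Ollama), which is a separate step.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open embedding model

docs = [
    "R2R exposes a RESTful API for ingestion and search.",
    "Hybrid search combines keyword and vector retrieval.",
    "The dashboard shows logs and analytics for each request.",
]
doc_embeddings = model.encode(docs, convert_to_tensor=True)

query = "How do I search my documents?"
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = scores.argmax().item()
print(docs[best], float(scores[best]))
```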

By @davedx - 4 months
I’ve checked out quite a few RAG projects now and what I haven’t seen really solved is ingestion, it’s usually like “this is an endpoint or some connectors, have fun!”.

How do I do a bulk/batch ingest of, say, 10k HTML documents into this system?
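One common answer, whatever the engine: walk the directory and push batches at the ingestion endpoint concurrently. A rough sketch against a hypothetical /ingest_files route (the actual R2R route and payload shape are documented in the repository):

```python
# Bulk-ingestion sketch: batch 10k HTML files and POST them concurrently.
# The endpoint path and payload are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

import requests

BASE_URL = "http://localhost:8000"   # assumed server address
BATCH_SIZE = 50

files = sorted(Path("corpus/").glob("**/*.html"))
batches = [files[i:i + BATCH_SIZE] for i in range(0, len(files), BATCH_SIZE)]

def ingest_batch(batch):
    # Send one batch of files as a multipart upload.
    handles = [("files", (p.name, p.open("rb"), "text/html")) for p in batch]
    try:
        resp = requests.post(f"{BASE_URL}/ingest_files", files=handles, timeout=120)
        resp.raise_for_status()
        return len(batch)
    finally:
        for _, (_, fh, _) in handles:
            fh.close()

with ThreadPoolExecutor(max_workers=4) as pool:
    for count in pool.map(ingest_batch, batches):
        print(f"ingested {count} files")
```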

By @p1esk - 4 months
"What were the UK's top exports in 2023?"

"List all YC founders that worked at Google and now have an AI startup."

How to check the accuracy of the answers? Is there some kind of a detailed trace of how the answer was generated?
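Absent a built-in trace, a common workaround is a small eval harness that keeps whatever sources the engine reports next to the generated answer and checks each answer against a hand-curated expected fact. A sketch, where the response field names are assumptions rather than the actual R2R schema:

```python
# Minimal spot-check harness: run questions with known expected facts, keep the
# reported sources, and flag answers that don't mention the expected fact.
# The /rag route and response field names are assumptions.
import requests

BASE_URL = "http://localhost:8000"  # assumed server address

eval_set = [
    {"question": "What were the UK's top exports in 2023?",
     "expected_substring": "machinery"},   # gold fact curated by hand
]

for case in eval_set:
    resp = requests.post(f"{BASE_URL}/rag", json={"query": case["question"]}).json()
    answer = resp.get("answer", "")
    sources = resp.get("search_results", [])  # assumed field with retrieved chunks

    ok = case["expected_substring"].lower() in answer.lower()
    print(f"{'PASS' if ok else 'FAIL'}: {case['question']}")
    print(f"  answer:  {answer[:120]}")
    print(f"  sources: {len(sources)} retrieved chunks")
```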

By @sandeepnmenon - 4 months
Could you provide more details on the multimodal data ingestion process? What types of data can R2R currently handle, and how are non-text data types embedded? Can the ingestion be streaming from logs?
By @Kluless - 4 months
Interesting. Can you talk a bit about how the process is faster/better optimized for the dev teams? Sounds like there's a big potential to accelerate time to MVP.
By @FriendlyMike - 4 months
Is there a way to work with source code? I've been looking for a RAG solution that can understand the graph of code. For example, "what analytics events get called when I click submit?"
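That kind of question usually needs a call graph rather than plain chunk retrieval. A toy illustration of the first step, extracting "which functions invoke an analytics call" with Python's ast module (the analytics.track name and the snippet are made up):

```python
# Toy static pass: find every function that directly calls analytics.track,
# as a first step toward "what events fire when I click submit".
import ast

source = """
def on_submit():
    analytics.track("submit_clicked")
    save_form()

def save_form():
    analytics.track("form_saved")
"""

tree = ast.parse(source)
for func in ast.walk(tree):
    if isinstance(func, ast.FunctionDef):
        for node in ast.walk(func):
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Attribute)
                    and node.func.attr == "track"):
                event = node.args[0].value if node.args else "?"
                print(f"{func.name} -> analytics event {event!r}")
```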
By @causal - 4 months
Have you integrated with any popular chat front-ends, e.g. OpenWebUI?
By @jhoechtl - 4 months
Get Neo4j out and count me in. No need for that resource hog.
By @vintagedave - 4 months
> R2R is a lightweight repository that you can install locally with `pip install r2r`, or run with Docker

Lightweight is good, and running it without having to deal with Docker is excellent.

But your quickstart guide is still huge! It feels very much not "quick". How do you:

* Install via Python

* Throw a folder of documents at it

* Have it sit there providing a REST API to get results?

E.g. suppose I have an AI service already, so I throw up a private Railway instance of this as a Python app. There's a DB somewhere. As simple as possible. I can mimic it at home just running a local Python server. How do I do that? _That's_ the real quickstart.
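The shape of that minimal path is roughly the sketch below, using the same assumed client as earlier in the thread (pip install r2r, point it at a running server; exact method names may differ by version):

```python
# "Folder in, REST API out" sketch: ingest a directory, then query the server.
# R2RClient method names are assumptions based on the quickstart docs.
from pathlib import Path
from r2r import R2RClient

client = R2RClient("http://localhost:8000")  # assumed address of the R2R server

# Throw a folder of documents at it.
docs = [str(p) for p in Path("my_docs/").glob("**/*") if p.is_file()]
client.ingest_files(file_paths=docs)

# From here the server itself is the REST API: any service (the Railway app,
# a local script, etc.) can query it over HTTP.
print(client.rag(query="What do these documents say about pricing?"))
```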

By @GTP - 4 months
How does this compare with Google's NotebookLM?
By @mentos - 4 months
Seems like there is an opportunity to make this as easy to use as Dropbox.
By @hdjsvdjue7 - 4 months
I can't wait to try it after work. How would one link it to Ollama?
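One generic route, separate from R2R's own provider configuration (see its docs): Ollama serves an OpenAI-compatible API on localhost:11434, so anything that lets you override the LLM base URL can point there. A sketch of that pattern:

```python
# Pointing an OpenAI-style client at a local Ollama server, which exposes an
# OpenAI-compatible API under /v1. Any stack that lets you override the LLM
# base URL can use the same trick.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="ollama",                      # required by the SDK, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3",  # any model you've pulled with `ollama pull`
    messages=[{"role": "user", "content": "Answer using only the provided context: ..."}],
)
print(response.choices[0].message.content)
```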
By @wmays - 4 months
What’s the benefit over langchain? Or other bigger platforms?
By @revskill - 4 months
As long as it does not require OpenAI, it is good.
By @haolez - 4 months
On a side note, is there an open source RAG library that's not bound to a rising AI startup? I couldn't find one and I have a simple in-house implementation that I'd like to replace with something more people use.
By @taylorbuley - 4 months
I could see myself considering this. And not just because it's got a great project name.