We no longer use LangChain for building our AI agents
Octomind switched from LangChain due to its inflexibility and excessive abstractions, opting for modular building blocks instead. This change simplified their codebase, increased productivity, and emphasized the importance of well-designed abstractions in AI development.
The article discusses why Octomind stopped using the LangChain framework for building their AI agents. Initially, LangChain seemed promising with its high-level abstractions, but as their requirements grew more complex, LangChain became a source of friction. The article highlights the challenges faced with LangChain's inflexibility and the difficulties in writing lower-level code due to excessive abstractions. It emphasizes the importance of crafting well-designed abstractions, especially in rapidly evolving fields like AI. Octomind found that replacing LangChain's rigid abstractions with modular building blocks simplified their codebase and increased productivity. The article suggests that using frameworks for AI applications may not always be necessary, advocating for a building blocks approach with carefully selected external packages instead. By moving away from frameworks like LangChain, Octomind was able to develop more quickly and efficiently. The shift to modular building blocks without heavy abstractions allowed the team to focus on coding rather than translating requirements into framework-specific solutions.
Damn, I built a RAG agent over the past three and a half months for my internship, and literally everyone in my company was asking why I wasn't using LangChain or LlamaIndex, like I was a lunatic. Everyone else who built a RAG system in my company used LangChain; one even went into prod.
I kept telling them that it works well if you have a standard use case, but the second you need to do something a little original, you have to go through 5 layers of abstraction just to change a minute detail. Furthermore, you won't really understand every step in the process, so if any issue arises or you need to improve the process, you're back at square one.
This is honestly such a boost of confidence.
By @geuis - 5 months
I built my first commercial LLM agent back in October/November last year. As a newcomer to the LLM space, every tutorial and YouTube video was about using LangChain. But something about the project had that "bad code" smell about it.
I was fortunate in that the person I was building the project for was able to introduce me to a couple of other people more experienced with the nascent LLM agent field, and both of them strongly steered me away from LangChain.
Avoiding that minefield-ridden path really helped me out early on; instead, I focused on learning how to build agents more or less "from scratch". That gave me a much better handle on how to interact with agents, and it has led me into learning how to run the various models independently of the API providers and get more productive results.
By @hwchase17 - 5 months
Hi HN, Harrison (CEO/co-founder of LangChain) here - wanted to chime in briefly.
I appreciate Fabian and the Octomind team sharing their experience in a level-headed and precise way; I don't think this is trying to be click-baity at all, which I appreciate. I want to share a bit about how we are thinking about things, because I think it aligns with some of the points here (although this may be worth a longer post).
> But frameworks are typically designed for enforcing structure based on well-established patterns of usage - something LLM-powered applications don’t yet have.
I think this is the key point. I agree with their sentiment that frameworks are useful when there are clear patterns. I also agree that this is a super early and super fast-moving field.
The initial version of LangChain was pretty high level and absolutely abstracted away too much. We're moving more and more to low level abstractions, while also trying to figure out what some of these high level patterns are.
For moving to lower-level abstractions - we're investing a lot in LangGraph (and hearing very good feedback). It's a very low-level, controllable framework for building agentic applications. All nodes/edges are just Python functions, and you can use it with or without LangChain. It's intended to replace the LangChain AgentExecutor (which, as they noted, was opaque).
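For readers who haven't seen it, a minimal LangGraph graph looks roughly like this (a sketch; the node names and state fields are illustrative, not from the comment):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    question: str
    answer: str

def research(state: AgentState) -> dict:
    # Plain Python; call an LLM or a tool here.
    return {"answer": f"draft answer to: {state['question']}"}

def review(state: AgentState) -> dict:
    return {"answer": state["answer"] + " (reviewed)"}

graph = StateGraph(AgentState)
graph.add_node("research", research)
graph.add_node("review", review)
graph.set_entry_point("research")
graph.add_edge("research", "review")
graph.add_edge("review", END)

app = graph.compile()
print(app.invoke({"question": "Why replace AgentExecutor?", "answer": ""}))
```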
I think there are a few patterns that are emerging, and we're trying to invest heavily there. Generating structured output and tool calling are two of those, and we're trying to standardize our interfaces there.
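As a rough illustration, the standardized structured-output interface in recent LangChain versions is `with_structured_output` (a sketch assuming the langchain-openai package; the Pydantic model here is made up for the example):

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Verdict(BaseModel):
    label: str = Field(description="one of: positive, negative, neutral")
    confidence: float

model = ChatOpenAI(model="gpt-4o-mini")
structured = model.with_structured_output(Verdict)

# Returns a validated Verdict object rather than raw text.
result = structured.invoke("The checkout flow is fast but confusing.")
print(result.label, result.confidence)
```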
Again, this is probably a longer discussion but I just wanted to share some of the directions we're taking to address some of the valid criticisms here. Happy to answer any questions!
By @CharlieDigital - 5 months
Bigger problem might be using agents in the first place.
We did some testing with agents for content generation (e.g. "authoring" agent, "researcher" agent, "editor" agent) and found that it was easier to just write it as 3 sequential prompts with an explicit control loop.
It's easier to debug, monitor, and control the output flow this way.
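Roughly, the approach looks like this (a sketch using the OpenAI SDK; the prompts and model name are illustrative):

```python
from openai import OpenAI

client = OpenAI()

def run(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

topic = "why teams outgrow LLM frameworks"
draft = run(f"Write a short article about {topic}.")          # "authoring"
notes = run(f"List factual gaps or dubious claims in:\n{draft}")  # "researcher"
final = run(f"Edit this draft, addressing the notes.\n\nDraft:\n{draft}\n\nNotes:\n{notes}")  # "editor"

# Each step is a plain function call, so logging, retries, and branching
# are ordinary Python rather than framework configuration.
```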
But we still use Semantic Kernel[0] because the lowest-level abstractions it provides are still very useful in reducing the code we have to roll ourselves, and they also make some parts of the API very flexible. These are things we'd end up writing ourselves anyway, so why not just use the framework primitives instead?
Similarly to this post, I think that the "good" abstractions handle application logic (telemetry, state management, common complexity), and the "bad" abstractions abstract away tasks that you really need insight into.
This has been a big part of our philosophy on Burr (https://github.com/dagworks-inc/burr), and basically everything we build -- we never want to dictate how people should interact with LLMs, but rather solve the common problems. We're still learning about what makes a good/bad abstraction in this space -- people really quickly reach for something like LangChain, then get sick of the abstractions right after that and build their own stuff.
By @muzani - 5 months
Langchain was released in October 2022. ChatGPT was released in November 2022.
Langchain was before chat models were invented. It let us turn these one-shot APIs into Markov chains. ChatGPT came in and made us realize we didn't want Markov chains; a conversational structure worked just as well.
After ChatGPT and GPT-3.5, there were no more non-chat models in the LLM world. Chat models worked great for everything, including what we had used instruct and completion models for. LangChain doing chat models is completely redundant with its original purpose.
By @fforflo - 5 months
LLM frameworks like LangChain are causing a Java-fication of Python.
Do you want a banana? You should first create the universe and the jungle and use dependency injection to provide every tree one at a time, then create the monkey that will grab and eat the banana.
By @empiko - 5 months
This echoes our experience with LangChain, although we abandoned it before putting it into production. We found that for simple use cases it's too complex (as mentioned in the blog), and for complex use cases it's too difficult to adapt. We were not able to identify the sweet spot where using it is worth it. We felt we could easily code most of its functionality ourselves, very quickly and in a way that fits our requirements.
By @captaincaveman - 5 months
I think LangChain basically tried to do a land grab and insert itself between developers and LLMs.
But it didn't add significant value, and it seemed to dress this up by adding abstractions that didn't really make sense.
It was that abstraction gobbledygook smell that made me cautious.
By @altdataseller - 5 months
LangChain reminds me of GraphQL: a technology that a lot of people seem to hype, something that sounds like you should use it because all the cool kids do, but that at the end of the day just makes things unnecessarily complicated.
By @isaacphi - 5 months
I had the same impression after working through the LangChain tutorials.
The one thing I'd like to ask about is Observability. LangChain has some tools around observability that seem genuinely useful to me, and specific to working with LLMs. Are there ways to use only these tools, or alternative observability tools you recommend for working with LLMs?
By @wg0 - 5 months
Sorry, noob question - where can I read more about this "agents" paradigm? Is one agent's output directly calling/invoking another agent? Or is there a fixed graph of information flow, with each agent given some prompt presets/templates (like "you are an expert in this, only respond with that")?
Also, how much success have people had with automating the E2E tests for their various apps by stringing such agents together themselves?
EDIT: Typos
By @etse - 5 months
My reading of the article is that because LangChain is abstracted poorly, frameworks should not be used, but that seems a bit far.
My experience is that Python has a frustrating developer experience for production services. So I would prefer a framework with better abstractions in a solid production language (performance and safety) over no framework and Python (if those were the options).
By @deckar01 - 5 months
I recently unwrapped linktransformer to get access to some intermediate calculations and realized it was a pretty thin wrapper around SentenceTransformer and DBSCAN. It would have taken me much longer to get similar results without copying their defaults and IO flow. It's easy to take for granted code you didn't have to develop from scratch. It would be interesting if there were a tool that inlined dependency calls and shook out unvisited branches automatically.
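For context, the unwrapped core of such a pipeline is roughly this (a sketch assuming sentence-transformers and scikit-learn; the model name and eps value are illustrative, not linktransformer's actual defaults):

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import DBSCAN

texts = ["acme corp", "ACME Corporation", "Globex", "globex inc"]
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(texts, normalize_embeddings=True)

# Cosine distance clustering; a label of -1 means "noise" (unmatched).
labels = DBSCAN(eps=0.3, min_samples=2, metric="cosine").fit_predict(embeddings)
print(dict(zip(texts, labels)))
```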
By @elbear - 5 months
It would have been great if the article provided a more realistic example.
The example they use is indeed more complex than the OpenAI equivalent, but LangChain allows you to use several models from several providers.
Also, it's true that the override of the pipe character is unexpected. But it should make sense if you're familiar with Linux/Unix, and I find it shows more clearly that you are constructing a pipeline:
prompt | model | parser
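Filled out into a runnable example (a sketch assuming the langchain-core and langchain-openai packages):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI(model="gpt-4o-mini")
parser = StrOutputParser()

# The overloaded | composes the three stages into one runnable pipeline.
chain = prompt | model | parser
print(chain.invoke({"topic": "frameworks"}))
```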
By @bastawhiz - 5 months
Genuine question: can someone point me to a use case where langchain makes the problem easier to solve than using the openai/anthropic/ollama SDKs directly? I've gotten a lot of advice to use langchain, but the docs haven't really shown me how it simplifies the task, or at least not more than using an SDK directly.
I really want to at least understand when to use this as a tool but so far I've been failing to figure it out. Some of the things that I tried applying it for:
- Doing a kind of function calling (or at least, implementing the schema validation) for non-gpt models
- parsing out code snippets from responses (and ignoring the rest of the output)
- Having the output of a prompt return as a simple enum without hallucinations
- process a piece of information in multiple steps, like a decision tree, to create structured output about some text (is this a directory listing or a document with content? What category is it? Is it NSFW? What is the reason for it being NSFW?)
Any resources are appreciated
By @zby - 5 months
I am always suspicious of frameworks, for two reasons. First, because of the inversion of control, they are more rigid than libraries. This is quite fundamental - but there are cases where the trade-off is totally worth it. The second reason is how they are created - it often starts with an application, which is then gradually made generic. This is good for advertising - you can always show how useful the framework is with an application that uses it. But this "making it generic" is a very tricky process that often fails. It is top-down: the authors need to imagine possible uses and then enable them in the framework, while with libraries the users have much more freedom to discover uses in a bottom-up process. Users always have surprising ideas.
There are now libraries that cover some of the features of LangChain. There are Instructor and my own LLMEasyTools for function calling, and there is LiteLLM for API unification.
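For example, function calling with Instructor boils down to roughly this (a sketch; the Pydantic model is made up for the example):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

# Patch the OpenAI client so responses are parsed and validated.
client = instructor.from_openai(OpenAI())
user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)
print(user.name, user.age)  # a validated Pydantic object, not raw JSON
```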
By @nosefrog - 5 months
Anyone who has read LangChain's code would know better than to depend on it.
By @danielmarkbruce - 5 months
Yup. The problem with frameworks is they assume (historically mostly but not always correctly) that layers of abstraction mean one can forget about the layers below. This just doesn't work with LLMs. The systems are closer to biology or something.
By @Kydlaw - 5 months
IMO LangChain provides very high level abstractions that are very useful for prototyping. It allows you to abstract away components while you dig deeper on some parts that will deliver actual value.
But aside from that, I don't think I would run it in production. If something breaks, I feel like we would be in a world of pain to get things back up and running. I am glad they shared their experience on that, this is an interesting data point.
By @andrewfromx - 5 months
"When abstractions do more harm than good" I'll take this for $2000 please and if i get the daily double, bet it all.
By @iknownthing - 5 months
I tried LangChain a while ago for a RAG project. I liked how I could just plug into different vector stores to try them out. But I didn't understand the need for the abstractions around the API calls. It's not that hard to just call these APIs directly, and it's not that hard to create whatever prompt you'd like.
By @Turskarama - 5 months
This is so common I think it could just about be a lemma:
Any tool that helps you get up and running quicker by abstracting away boilerplate will eventually get in the way as your project's complexity increases.
By @fragebogen - 5 months
I'd challenge some of these criticisms and give my 2c on this.
I've spent the last 6 months working on a rather complex chat system with routes, agents, bells and whistles. Initially, time to POC was short, so I picked LangChain to get up and running quickly. Eventually, I thought, the codebase isn't enormous - I can easily rewrite it - but I'd like to see what people mean by these "abstraction limiting progress" kinds of statements. I've now kept building this project for another 6 months, and I must say, the more I work with it, the more I understand its philosophy.
It's not that complicated. The philosophy is just different from many other Python projects. The LCEL pipes, for example, are a really nice way to think about modularity. Want to switch out one model for another? Just import another model and replace the old one. Want to parse more strictly? Exchange the parser. The fact that everything is an instance of `RunnableSerializable` is a really convenient way of making things truly modular. Want to test your pipe synchronously? Easy - just use `.stream()` instead of `.astream()` and get on with it.
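To illustrate the kind of swap being described (a sketch assuming the langchain-openai and langchain-anthropic packages; the prompt and model names are illustrative):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

prompt = ChatPromptTemplate.from_template("Summarize: {text}")
parser = StrOutputParser()

chain = prompt | ChatOpenAI(model="gpt-4o-mini") | parser
# Swapping providers is a one-line change:
chain = prompt | ChatAnthropic(model="claude-3-5-sonnet-20240620") | parser

# Sync streaming; the async twin is `async for ... in chain.astream(...)`.
for chunk in chain.stream({"text": "LCEL modularity in practice"}):
    print(chunk, end="")
```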
I think my biggest hurdle was understanding how to debug and pipe components, but once I got familiarized with it, I must say it made me grow as a python dev and appreciate the structure and thought behind it.
Where complexity arises is when you have a multi-step setup, some steps sync and some async. I've had to break some of these steps up in code, but otherwise it gives me tons of flexibility to pick and choose components.
My only real complaint would be the lack of documentation and outdated documentation. I'm hardly the only one to say so, but it really is frustrating sometimes trying to understand what some niche module can and cannot do.
By @maximilianburke - 5 months
I just pulled out LangChain from our AI agents; we now have much smaller docker images and the code is a lot easier to understand.
By @infecto - 5 months
LangChain itself blows my mind as one of the most useless libraries to exist. I hope this does not come off the wrong way, but so many people told me they were using it because it made it easy to move between models. I just did not understand it - these are simple API calls that feel like Web Dev 101 when starting a new product. Maybe it's that so many new people were coming into the field through LLMs, but it surprised me that even people I thought were experienced were struggling. It's like LLMs brought out the confusion in people.
It was interesting at the very beginning, as a way to see how people were thinking about patterns, but it's pretty useless in production.
By @Treesrule14 - 5 months
Has anyone else found a good way to swap out models between companies? LangChain has made it very easy for us to swap between OpenAI/Anthropic, etc.
By @whitej125 - 5 months
I used LangChain early on in its life. People crap on its documentation, but at least at that point in time I had no problem with it. I like reading source code, so I'd find myself reading the code for further comprehension anyway. In my case, I'm a seasoned engineer who was discovering LLMs, and LangChain suited that way of learning pretty well.
When it came to building anything real beyond toy examples, I quickly outgrew it and haven't looked back. We don't use any LC in production. So while LC does get a lot of hate from time to time (as you see in a lot of peer posts here), I do owe them some credit for helping bridge my learning of this domain.
By @monarchwadia - 5 months
I'm the author of Ragged, a lightweight connector that makes it easy to connect to and work with language models. Think of it like an ORM for LLMs --- a unified interface designed to make them easy to work with. Just wanted to plug my framework in case people are looking for an alternative to building their own connector components.
The LangChain approach struck me as interesting, but I never really saw much inherent utility in it. For our production code we went with direct use of LLM runtime libraries, and it was more than enough.
By @czechdeveloper - 5 months
I used LangChain in one project, and I regret choosing it over just writing everything against the direct API. I feel their pain.
It had the advantage of a standardized API, so I could switch a local LLM to OpenAI and compare results in a heartbeat, but when I wanted anything out of the ordinary (e.g. getting logprobs), there was just no way.
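For comparison, getting logprobs directly from the OpenAI SDK is straightforward (a sketch assuming openai>=1.0; the prompt is illustrative):

```python
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Is the sky blue? Answer yes or no."}],
    logprobs=True,
    top_logprobs=3,
)
# Per-token log probabilities for the sampled completion.
for token in resp.choices[0].logprobs.content:
    print(token.token, token.logprob)
```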
By @cyanydeez - 5 months
In some sense, this could be retitled "We no longer use training wheels on our bikes"
By @codelion - 5 months
Many such cases. It is very hard to balance composition and abstraction in such frameworks and libraries, and with LLMs being so new, it has taken several iterations to get the right patterns and architecture for building LLM-based apps. With patchwork (https://github.com/patched-codes/patchwork), an open-source framework for automating development workflows, we try hard to avoid this by not abstracting unless we see some client usage. As a result, some workflows do appear longer, with many steps, but it makes them easier to compose.
By @d4rkp4ttern - 5 months
Frustration with LangChain is what led us (ex-CMU/UW-Madison researchers) to start building Langroid[1], a multi-agent LLM framework. We have been thoughtful about designing the right primitives and abstractions to enable a simple developer experience while supporting sophisticated workflows using single or multiple agents. There is an underlying loop-based orchestration mechanism that handles user interaction, tool handling and inter-agent handoff/communication.
Everyone in my office is talking about AI agents as a magic bullet; it's driving me crazy.
By @StrauXX - 5 months
I don't like LangChain that much either. It's not as bad as LlamaIndex and Haystack when it comes to extreme overengineering and overabstracting, but it's still bad.
The reason I still use LangChain is that I often need to be able to swap out LLM service providers, embedding models, and so on for clients. That's really the only part of LangChain that works well.
Btw, you don't have to actually chain LangChain entities - you can use all of them directly. That makes the magic-framework-code issue much more tolerable, as LangChain turns from a framework into a library.
By @dmezzetti - 5 months
An alternative is using txtai (https://github.com/neuml/txtai). It's lightweight and works with both local and remote LLMs.
We initially had problems diagnosing issues inside LangChain and were hitting weird issues with some elements of function calling, so we experimented with a manual reconstruction of exactly what we needed and it was faster, more resilient and easier to maintain.
I can see how switching models might be easier using LangChain as an abstraction layer, but that doesn't justify making everything else harder.
By @Oras - 5 months
The comments are a good example that hype > quality.
99% of docs mention LangChain or show a code example with LangChain. Wherever you look - tutorials or YouTube videos - you will see LangChain.
They get the credit for being the first framework to abstract LLM calls and other features, such as reading data from multiple sources (before function calling was a thing).
LangChain was first and got popular, and hence newcomers think it's the way - until they use it.
By @jostmey - 5 months
Learning LangChain takes effort, but not as much as truly understanding deep learning, so you learn LangChain and it feels like progress, when it may not be.
By @gravenate - 5 months
Hard agree. Semantic Kernel, on the other hand, seems to actually be a value-add on top of the simple API calls. Have you guys tried it?
By @andix - 5 months
Are there better abstractions? I wanted to look into Microsoft's Semantic Kernel, which seems to be a direct competitor of LangChain. Are there any other options?
LangChain has always been open source, and it has always sucked. I'm shocked anyone still uses it when you can see the code for yourself.
By @resource_waste - 5 months
LangChain tutorials be like:
Go to foo_website and put in your credit card to get their API key. Then go to bar_website and get their API key. Then go to yayeee_website and get their API key. Then go to...
But unironically.
I actually counted 4 APIs in some 'how to' article. I ended up DIYing that with 0 APIs.
Whoever got into LangChain planted their APIs. That is why it sucks.
By @sabrina_ramonov - 5 months
You used LangChain as a simple replacement for OpenAI API calls — of course it will increase complexity for no benefit.
The benefits of langchain are:
(1) unified abstraction across multiple different models and
(2) being able to plug this coherently into one architecture.
If you’re just calling some OpenAI endpoints, then why use it in the first place?
By @zackproser - 5 months
Here's a real-world example of a custom RAG pipeline built with LangChain.
I did a full tutorial, with source code, that's linked at the top of that page ^
FWIW, I think it's a good idea to build both with and without LangChain for deeper understanding.
By @sandGorgon - 5 months
Shameless plug - I built a JS/TS framework that tries to solve the abstraction problem. We use a JSON variant called Jsonnet (created at Google; expressive enough for Kubernetes).
P.S. We also built a WebAssembly compiler that compiles this down to Wasm to deploy on hardware.
By @createaccount99 - 5 months
A lot of competition in the field, and just about all of them (LlamaIndex/AutoGPT/LangChain/others?) appear to be "build SDK, build SaaS on top" types of products.
Curious thing, but I'd rather not partake myself.
By @greo - 5 months
I am not a fan of LangChain. And I would never use it for any of my projects.
LLM is already a probabilistic component that is tricky to integrate into a solid deterministic system. An abstraction wrapper that bloats the already fuzzy component just increases the complexity for no apparent benefit.
By @te_chris - 5 months
The thing that blows my mind is that this wasn't obvious to them when they first looked at LangChain.
By @spullara - 5 months
Every good developer I know who started using LangChain stopped after realizing that they needed more control than it provides. If you actually look at what's going on under the hood by inspecting the requests, you would probably stop using it as well.
By @nprateem - 5 months
Wasn't it obviously pointless from the outset? Posts like this raise questions about the technical decisions of the company more than anything else IMO. Strange they'd want to publicise making such poor decisions.
By @jsemrau - 5 months
LCEL is such a weird paradigm that I never got the hang of.
Why | use | pipes?
By @dcole2929 - 5 months
I've seen a lot of stuff recently about how LangChain and other frameworks for AI/LLM are terrible and we shouldn't use them, and I can't help but think that people are missing the point. If you need strong customization or flexibility, frameworks of any kind are almost always the wrong choice, whether you're building a website or an AI agent. That's kind of the whole point of a framework: opinionated workflows that enable a specific kind of application. Ideally, the goal is to cover 80% of the cases and provide escape hatches to handle the other 20% until you can successfully cover those too.
As someone new to the space, I have zero opinion on whether LangChain is better than writing it all yourself, but I can certainly say that I, at least, appreciate having a prescribed way of doing things, and I'm okay with the idea that I may get to a place where it no longer serves my needs. It's also worth noting that the benefit of LangChain is the ability to "chain" together these various AI links. Is there a better, easier way to do that? Probably, but LangChain removes that overhead.
By @seany62 - 5 months
Glad to see I'm not the only one experiencing this. The agent framework I use is moving very fast, and it's not uncommon for even minor versions to break my current setup.
By @djohnston - 5 months
Idk, the dude spends the post whining about writing a multi-agent architecture and doesn't mention LangGraph once. Reads like a lead who failed to read the docs.
By @cyounkins - 5 months
Is there a lighter-weight solution that abstracts the interfaces so I can swap GPT-4 with Claude, including function calling?
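One commonly mentioned option is LiteLLM (see the mention elsewhere in this thread). Roughly, the swap is a model-string change, with tools passed in the OpenAI format - a sketch (model names and the tool schema are illustrative; assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set):

```python
from litellm import completion

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Same call shape for both providers; LiteLLM translates under the hood.
for model in ["gpt-4o", "claude-3-5-sonnet-20240620"]:
    resp = completion(
        model=model,
        messages=[{"role": "user", "content": "Weather in Berlin?"}],
        tools=tools,
    )
    print(model, resp.choices[0].message.tool_calls)
```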
By @mark_l_watson - 5 months
I was an early enthusiast of both LangChain and LlamaIndex (and I wrote a book using both frameworks, free to read online [1]), but I had some second thoughts when I started writing framework-free LLM examples for my Common Lisp and Racket books, even writing simple vector data stores from scratch. This was, frankly, more fun.
For my personal LLM hacking in Python, I am starting down the same path: writing simple vector data stores in NumPy, writing my own prompting tools and LLM wrappers, etc.
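A from-scratch NumPy store really is tiny - a sketch (the class and method names are mine; plug in whatever embedding function you already use):

```python
import numpy as np

class TinyVectorStore:
    def __init__(self):
        self.vectors = np.zeros((0, 0))
        self.texts: list[str] = []

    def add(self, embeddings: np.ndarray, texts: list[str]) -> None:
        # Normalize so dot product equals cosine similarity.
        embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        self.vectors = embeddings if not self.texts else np.vstack([self.vectors, embeddings])
        self.texts.extend(texts)

    def search(self, query: np.ndarray, k: int = 3) -> list[str]:
        query = query / np.linalg.norm(query)
        scores = self.vectors @ query  # cosine similarity on unit vectors
        return [self.texts[i] for i in np.argsort(scores)[::-1][:k]]
```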
I still think that for many developers LangChain and LlamaIndex are very useful (and I try to keep my book up to date), but I usually write about things of most interest to me and I have been thinking of rewriting a new book on framework-free LLM development.
There was a Reddit thread in the LangChain sub a while back basically saying exactly this (plus the same comments as here).
By @ZiiS - 5 months
The "good abstraction" has a bug; slightly undermines the argument.
By @_pdp_ - 5 months
We also built our own system that caters for our customers' needs.
By @hcks - 5 months
LangChain is a critical thinking test and orgs using it are ngmi
By @JSDevOps - 5 months
The dude on that blog is trying way too hard to look like Sam Altman which is fucking weird.
By @gexaha - 5 months
that's a nice AI image with octopi
By @xyst - 5 months
Never been a fan of ORM for databases. So why would that change with AI/LLM “prompt engineering”? Author confirms my point.
By @ricklamers - 5 months
FWIW, I think LangChain has evolved a lot and is a nice time saver once you figure out the patterns it uses. The LangSmith observability is frankly fantastic for quickly getting a sense of how your intended LLM flow actually works out in practice. There's so much FUD here, unwarranted IMO. Don't forget, reading code is harder than writing it; that doesn't warrant throwing out the baby with the bathwater. Don't fall for NIH :) I haven't had issues running it in prod recently either, since they've matured their packaging into core/community/partner packages, etc. For agentic use cases, look at LangGraph for a cleaner set of primitives that give you the amount of control needed there.