September 3rd, 2024

Engineering over AI

The article emphasizes the importance of engineering in code generation with large language models, highlighting the skepticism created by hype, the need for a structural understanding of codebases, and the value of a solid technical foundation.


The article discusses the current state of code-generating large language model (LLM) agents, emphasizing the need for a focus on engineering rather than just AI hype. The author notes that many teams are distracted by flashy interfaces and unrealistic demonstrations, leading to inflated valuations and skepticism from investors and users regarding the actual value of LLM applications. The concept of "engineering over AI" is introduced, which prioritizes solving genuine engineering challenges and using AI as a supportive tool rather than merely optimizing prompts. The author highlights the importance of context in code generation, arguing that understanding the structural nature of codebases—both at the file and logic levels—is crucial for effective code generation. Current methods relying on embeddings fail to capture these structural relationships, which hampers the functionality of generated code. The article concludes that a solid engineering foundation is essential for developing practical and effective code generation applications.
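
To make the structural point concrete, here is a minimal illustrative sketch (not from the article) that extracts caller/callee relationships with Python's ast module; links like these are exactly what chunk-level embedding similarity has no explicit representation of.

```python
import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict[str, set[str]]:
    """Map each function in the source to the names it calls."""
    tree = ast.parse(source)
    graph: dict[str, set[str]] = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for call in ast.walk(node):
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    graph[node.name].add(call.func.id)
    return dict(graph)

example = """
def parse(path):
    return load(path)

def load(path):
    with open(path) as f:
        return f.read()
"""

# The structural link parse -> load is explicit here, but invisible to
# similarity search over independently embedded chunks.
print(build_call_graph(example))  # {'parse': {'load'}, 'load': {'open'}}
```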

- The focus on engineering is essential for successful code generation using AI.

- Current LLM applications face skepticism due to overhyped demonstrations and inflated valuations.

- Understanding the structural nature of codebases is critical for effective code generation.

- Relying solely on embeddings for context in code generation is insufficient.

- A solid technical foundation is necessary for building functional AI applications.

9 comments
By @jokethrowaway - 6 months
Very true!

I do RAG for other types of structured data, and this is fundamental to getting relevant objects into your context.

My approach for code would be to create a graph structure with relationships between the different code paths and expose a retrieval API through tools/function calling, so that the LLM can query the codebase structure on top of semantic embedding similarity search and text similarity search.
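
A minimal sketch of that idea, assuming an OpenAI-style function-calling interface; the toy graph, the symbol names, and the query_code_graph helper are made up for illustration, and a real index would be built per language from ASTs or a language server.

```python
import json
from typing import Optional

# Toy code graph: keys are symbols, values are (relation, target) edges.
CODE_GRAPH: dict[str, list[tuple[str, str]]] = {
    "app.handlers.create_user": [
        ("calls", "app.db.insert_user"),
        ("uses_type", "app.models.User"),
    ],
    "app.db.insert_user": [("calls", "app.db.get_connection")],
}

def query_code_graph(symbol: str, relation: Optional[str] = None) -> list[dict]:
    """Retrieval endpoint the LLM can call to walk structural relationships."""
    edges = CODE_GRAPH.get(symbol, [])
    return [
        {"relation": kind, "target": target}
        for kind, target in edges
        if relation is None or kind == relation
    ]

# Tool schema in the OpenAI-style function-calling format, so the model can
# query codebase structure on top of embedding / text similarity search.
GRAPH_TOOL = {
    "type": "function",
    "function": {
        "name": "query_code_graph",
        "description": "Return structural neighbours (calls, imports, uses_type) of a symbol.",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {"type": "string"},
                "relation": {"type": "string", "enum": ["calls", "imports", "uses_type"]},
            },
            "required": ["symbol"],
        },
    },
}

print(json.dumps(query_code_graph("app.handlers.create_user"), indent=2))
```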

You could also run a graph search for elements related to each result returned by the other search pipelines, to increase the chance of having all the pieces of the puzzle in the context before asking the LLM to solve the problem.
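
A sketch of that expansion step, with hypothetical retrieval hits and the same toy graph shape as above:

```python
def expand_with_graph(hits: list[str], graph: dict[str, list[tuple[str, str]]]) -> list[str]:
    """Append the structural neighbours of each retrieved symbol to the context set."""
    expanded = list(hits)
    for symbol in hits:
        for _, target in graph.get(symbol, []):
            if target not in expanded:
                expanded.append(target)
    return expanded

# Hypothetical hits from the embedding / text-similarity pipelines.
graph = {"app.handlers.create_user": [("calls", "app.db.insert_user")]}
print(expand_with_graph(["app.handlers.create_user"], graph))
# ['app.handlers.create_user', 'app.db.insert_user']
```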

The other crucial thing to do would be to inspect dependencies (and their types, when possible) and maybe download documentation to offer tips that are accurate and not hallucinated.
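
For a Python project, that dependency and documentation inspection might look roughly like the following sketch; the helper names are made up, and other ecosystems would need their own tooling.

```python
import importlib
import inspect
from importlib import metadata
from typing import Optional

def installed_dependencies() -> dict[str, str]:
    """Map installed distribution names to versions, so generated code can be
    checked against what is actually available rather than guessed."""
    return {dist.metadata["Name"]: dist.version for dist in metadata.distributions()}

def api_doc(module_name: str, symbol: str) -> Optional[str]:
    """Pull the real signature and docstring for a symbol instead of letting
    the model hallucinate the API."""
    module = importlib.import_module(module_name)
    obj = getattr(module, symbol, None)
    if obj is None:
        return None
    try:
        sig = str(inspect.signature(obj))
    except (TypeError, ValueError):
        sig = ""
    return f"{symbol}{sig}\n{inspect.getdoc(obj) or ''}"

print(installed_dependencies().get("requests", "not installed"))
print(api_doc("json", "dumps")[:120])
```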

Nowadays I get hallucinations for code generation as soon as things get hard, making LLM coding useful only for trivial code writing.

Analysing the code structure and dependencies would require plenty of work for each specific language, so it won't be an easy win like "just throwing RAG at it" - which is what the current players are doing to raise money - with mediocre results.

By @davidt84 - 6 months
I feel like I just read the introduction to an interesting blog post.
By @airstrike - 6 months
I feel like this is very right, but it's also the million-dollar question?

I don't think others necessarily quote-unquote "lost focus" on this problem, but it's not exactly easy to solve correctly, so in the meantime it's easier to create something with the next best approximation.

By @downWidOutaFite - 6 months
I think this is the idea behind Sourcegraph's Cody: taking their expertise in understanding codebases and ASTs and using it to guide the LLM.
By @dimgl - 6 months
Where's the rest of the article?
By @katdork - 6 months
Typo of "codebase" as "codebae": "This is also the reason why the higher context window doesn’t matter. Even if you could feed your whole codebase into an LLM, you’d still face the same problem of missing the structural relationships of the codebae."
By @ratedgene - 6 months
You would need to have smaller agents negotiate on behalf of their functional units.