LangChain vs Custom
Should I use LangChain (or similar frameworks) or build custom?
The framework vs. build-it-yourself decision. Frameworks speed up prototyping but add layers of abstraction; custom code is more transparent and easier to reason about, but requires more upfront work.
Approaches
LangChain / LlamaIndex / Similar
Complexity: moderate
Use an AI application framework that provides abstractions for chains, agents, retrieval, memory, and tool calling.
Pros
- Fast to prototype — build an agent in hours
- Pre-built integrations (vector DBs, APIs, tools)
- Community plugins and extensions
- Handles common patterns (RAG, chains, agents)
- Good for learning concepts quickly
Cons
- Heavy abstraction can obscure what's happening
- Debugging is harder when things break
- Lock-in to framework's mental model
- Often generates verbose, inefficient prompts
- Breaking changes between versions
- Harder to customize edge cases
Use When
- Prototyping and exploring ideas
- Standard use cases (RAG, basic agents)
- Small team without deep AI/LLM expertise
- Integrating with many external services
- Learning AI concepts quickly
Avoid When
- Production systems with strict requirements
- You need full control over prompts and behavior
- Token efficiency is critical (cost or context limits)
- You have experienced AI engineers
- Debugging transparency is important
Code Example
import { ChatOpenAI } from '@langchain/openai';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { TavilySearchResults } from '@langchain/community/tools/tavily_search';
const tools = [new TavilySearchResults()];
const model = new ChatOpenAI({ model: 'gpt-4o' });
const agent = createReactAgent({ llm: model, tools });
const result = await agent.invoke({
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }]
});
// Magic! But what prompts were sent? What happened internally?
Custom Implementation
Complexity: moderate
Build directly on the LLM API (OpenAI, Anthropic, etc.) without a framework. Write your own prompts, tool handling, and orchestration.
Pros
- Full control over prompts and behavior
- Easier to debug — you wrote all the code
- Optimal token efficiency
- No framework lock-in or breaking changes
- Deeper understanding of how things work
- Simpler dependency tree
Cons
- Slower to start — build everything yourself
- More code to maintain
- Need to understand LLM APIs deeply
- No pre-built integrations
- Risk of reinventing common patterns poorly
Use When
- Production systems with specific requirements
- You have experienced AI/LLM engineers
- Token efficiency matters (cost or context limits)
- You need full debugging transparency
- Your use case doesn't fit standard patterns
Avoid When
- Rapid prototyping phase
- Standard use cases that frameworks handle well
- Small team without deep LLM expertise
- You need many external integrations quickly
Code Example
import OpenAI from 'openai';
const openai = new OpenAI();
const tools = [{
  type: 'function',
  function: {
    name: 'search_web',
    description: 'Search the web for current information',
    parameters: { /* ... */ }
  }
}];
async function agent(query: string) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: query }],
    tools
  });
  if (response.choices[0].message.tool_calls) {
    // Handle tool call, continue conversation
    // You control exactly what happens
  }
  return response.choices[0].message.content;
}
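For reference, here is roughly what the elided "handle tool call, continue conversation" step looks like written out. This is a minimal sketch, not the only way to do it: agentWithTools and searchWeb are hypothetical names, and it assumes you pass in the tools array defined above. The point is that every message in the loop is code you wrote and can inspect.
import OpenAI from 'openai';
const openai = new OpenAI();
// Hypothetical stand-in for whatever search backend you actually call
async function searchWeb(query: string): Promise<string> {
  return `results for: ${query}`;
}
async function agentWithTools(
  query: string,
  tools: OpenAI.Chat.Completions.ChatCompletionTool[]
) {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    { role: 'user', content: query }
  ];
  const first = await openai.chat.completions.create({ model: 'gpt-4o', messages, tools });
  const reply = first.choices[0].message;
  if (!reply.tool_calls) return reply.content;
  // Echo the assistant's tool request back, then append one result per call
  messages.push(reply);
  for (const call of reply.tool_calls) {
    if (call.type !== 'function') continue;
    // With a single tool we can dispatch directly; more tools would switch on call.function.name
    const args = JSON.parse(call.function.arguments);
    const result = await searchWeb(args.query);
    messages.push({ role: 'tool', tool_call_id: call.id, content: result });
  }
  // Second round trip: the model answers using the tool results
  const second = await openai.chat.completions.create({ model: 'gpt-4o', messages, tools });
  return second.choices[0].message.content;
}
This round trip is essentially the loop agent frameworks run on your behalf; written out, it is a handful of lines you fully control.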
// Clear, debuggable, efficient
Lightweight SDK (Vercel AI SDK, Anthropic SDK)
Complexity: simple
Use minimal SDKs that provide convenience without heavy abstraction. Thin wrappers around the provider APIs.
Pros
- Best of both worlds: convenience + control
- Minimal abstraction — easy to understand
- Type-safe APIs
- Streaming support out of the box (see the sketch after the code example)
- No framework lock-in
Cons
- Fewer pre-built integrations than LangChain
- Still need to build complex patterns yourself
- May need to combine with other libraries
Use When
- You want convenience without heavy abstraction
- Building for production from the start
- Using a specific model provider (OpenAI, Anthropic)
- You want type safety
Avoid When
- You need many pre-built integrations
- Building a complex multi-model orchestration
Code Example
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather in Tokyo?',
  tools: {
    weather: tool({
      description: 'Get weather for a city',
      parameters: z.object({ city: z.string() }),
      // getWeather is your own lookup function, defined elsewhere
      execute: async ({ city }) => getWeather(city)
    })
  },
  // Allow a follow-up step so the model can answer from the tool result
  maxSteps: 2
});
console.log(result.text);
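The "streaming support out of the box" pro is worth a concrete look, since incremental output is usually the first production requirement. A minimal sketch using the SDK's streamText; in recent SDK versions the call returns immediately and you iterate the text stream, while older versions require awaiting the call itself.
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Summarize the weather in Tokyo in one sentence.'
});
// Tokens print as they arrive instead of after the full response
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}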
// Clean, typed, minimal abstraction
Decision Factors
| Factor | LangChain / LlamaIndex / Similar | Custom Implementation | Lightweight SDK (Vercel AI SDK, Anthropic SDK) |
|---|---|---|---|
| Project phase: prototyping or production? | Prototyping — explore ideas quickly | Production — full control and debugging | Either — good for both phases |
| Team expertise: how experienced is the team with LLM APIs? | Low — frameworks abstract complexity | High — leverage direct API knowledge | Medium — some familiarity helps |
| Complexity of use case: standard pattern or unique requirements? | Standard RAG, basic agents | Unique orchestration, strict requirements | Standard patterns with customization |
| Debugging requirements: how important is tracing what happens? | Low — trust the abstractions | High — need to trace every step | Medium — transparent but convenient |
| Token efficiency: are costs or context limits a concern? | Not critical — frameworks can be verbose | Critical — optimize every prompt | Important — minimal overhead |
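The token-efficiency factor above is measurable rather than a matter of opinion: chat completion responses include a usage object, so you can compare what a hand-written prompt costs against what a framework-driven run reports for the same task (via the framework's own usage metadata or your provider dashboard). A minimal sketch against the OpenAI API; the model and prompt are placeholders.
import OpenAI from 'openai';
const openai = new OpenAI();
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }]
});
// Reported on every non-streaming completion
console.log(response.usage);
// { prompt_tokens: ..., completion_tokens: ..., total_tokens: ... }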
Real-World Scenarios
Building a demo for a hackathon or internal POC
Speed is critical. LangChain gets you a working prototype in hours. Polish and efficiency don't matter yet.
Production customer support bot with strict SLAs
Go custom: you need debugging transparency, optimal performance, and no surprises from framework updates.
Internal tool for developers with moderate complexity
A lightweight SDK gives a good balance of convenience and control, and the team can understand and maintain it.
Multi-agent workflow with complex orchestration
Complex orchestration often doesn't fit framework patterns. Custom code gives flexibility.