
LangChain vs Custom

Should I use LangChain (or similar frameworks) or build custom?

The framework vs. build-it-yourself decision. Frameworks speed up prototypes but add complexity; custom code is simpler but requires more upfront work.

Intermediate · 15 min
Sources verified Dec 22

Approaches

LangChain / LlamaIndex / Similar

Complexity: moderate

Use an AI application framework that provides abstractions for chains, agents, retrieval, memory, and tool calling.

Latency: Variable (framework overhead)
Cost: Same API pricing as custom, but verbose default prompts often consume more tokens

Pros

  • Fast to prototype — build an agent in hours
  • Pre-built integrations (vector DBs, APIs, tools)
  • Community plugins and extensions
  • Handles common patterns (RAG, chains, agents)
  • Good for learning concepts quickly

Cons

  • Heavy abstraction can obscure what's happening
  • Debugging is harder when things break
  • Lock-in to framework's mental model
  • Often generates verbose, inefficient prompts
  • Breaking changes between versions
  • Harder to customize edge cases

Use When

  • Prototyping and exploring ideas
  • Standard use cases (RAG, basic agents)
  • Small team without deep AI/LLM expertise
  • Integrating with many external services
  • Learning AI concepts quickly

Avoid When

  • Production systems with strict requirements
  • You need full control over prompts and behavior
  • Token efficiency is critical (cost or context limits)
  • You have experienced AI engineers
  • Debugging transparency is important
langchain_example.ts
import { ChatOpenAI } from '@langchain/openai';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { TavilySearchResults } from '@langchain/community/tools/tavily_search';

const tools = [new TavilySearchResults()];
const model = new ChatOpenAI({ model: 'gpt-4o' });

const agent = createReactAgent({ llm: model, tools });

const result = await agent.invoke({
  messages: [{ role: 'user', content: 'What is the weather in Tokyo?' }]
});

// Magic! But what prompts were sent? What happened internally?

Custom Implementation

Complexity: moderate

Build directly on the LLM API (OpenAI, Anthropic, etc.) without a framework. Write your own prompts, tool handling, and orchestration.

Latency: Optimal (no framework overhead)
Cost: Lower (efficient prompts, no overhead)

Pros

  • Full control over prompts and behavior
  • Easier to debug — you wrote all the code
  • Optimal token efficiency
  • No framework lock-in or breaking changes
  • Deeper understanding of how things work
  • Simpler dependency tree

Cons

  • Slower to start — build everything yourself
  • More code to maintain
  • Need to understand LLM APIs deeply
  • No pre-built integrations
  • Risk of reinventing common patterns poorly

Use When

  • Production systems with specific requirements
  • You have experienced AI/LLM engineers
  • Token efficiency matters (cost or context limits)
  • You need full debugging transparency
  • Your use case doesn't fit standard patterns

Avoid When

  • Rapid prototyping phase
  • Standard use cases that frameworks handle well
  • Small team without deep LLM expertise
  • You need many external integrations quickly
custom_example.ts
import OpenAI from 'openai';

const openai = new OpenAI();

const tools = [{
  type: 'function',
  function: {
    name: 'search_web',
    description: 'Search the web for current information',
    // Illustrative JSON Schema; shape this to your tool's actual inputs
    parameters: {
      type: 'object',
      properties: {
        query: { type: 'string', description: 'The search query' }
      },
      required: ['query']
    }
  }
}];

async function agent(query: string) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: query }],
    tools
  });

  if (response.choices[0].message.tool_calls) {
    // Handle tool call, continue conversation
    // You control exactly what happens
  }

  return response.choices[0].message.content;
}

// Clear, debuggable, efficient
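
The "handle tool call, continue conversation" step above is the heart of a custom agent. A minimal sketch of that loop, with the model call injected as a function so the control flow stays visible and testable (the `agentLoop`, `runTool`, and `Model` names are illustrative, not part of any SDK):

```typescript
// Minimal agent loop: call the model, execute any requested tools,
// feed the results back as 'tool' messages, repeat until the model
// answers in plain text (or we hit a turn limit).
type ToolCall = { id: string; name: string; args: string };
type ModelReply = { content: string | null; toolCalls: ToolCall[] };
type Model = (messages: any[]) => Promise<ModelReply>;

async function agentLoop(
  model: Model,
  runTool: (name: string, args: string) => Promise<string>,
  userQuery: string,
  maxTurns = 5
): Promise<string> {
  const messages: any[] = [{ role: 'user', content: userQuery }];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = await model(messages);
    // No tool calls means the model produced its final answer.
    if (reply.toolCalls.length === 0) return reply.content ?? '';
    // Record the assistant turn, then append one result per tool call.
    messages.push({ role: 'assistant', tool_calls: reply.toolCalls });
    for (const call of reply.toolCalls) {
      const result = await runTool(call.name, call.args);
      messages.push({ role: 'tool', tool_call_id: call.id, content: result });
    }
  }
  throw new Error('Agent exceeded max turns');
}
```

Because the model is injected, you can unit-test the loop with a fake model, something no heavy framework makes this easy.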

Lightweight SDK (Vercel AI SDK, Anthropic SDK)

Complexity: simple

Use minimal SDKs that provide convenience without heavy abstraction. Thin wrappers around APIs.

Latency: Optimal
Cost: Lowest (efficient + no overhead)

Pros

  • Best of both worlds: convenience + control
  • Minimal abstraction — easy to understand
  • Type-safe APIs
  • Streaming support out of the box
  • No framework lock-in

Cons

  • Fewer pre-built integrations than LangChain
  • Still need to build complex patterns yourself
  • May need to combine with other libraries

Use When

  • You want convenience without heavy abstraction
  • Building for production from the start
  • Using a specific model provider (OpenAI, Anthropic)
  • You want type safety

Avoid When

  • You need many pre-built integrations
  • Building a complex multi-model orchestration
vercel_ai_example.ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather in Tokyo?',
  tools: {
    weather: tool({
      description: 'Get weather for a city',
      parameters: z.object({ city: z.string() }),
      // getWeather is your own lookup function
      execute: async ({ city }) => getWeather(city)
    })
  }
});

console.log(result.text);
// Clean, typed, minimal abstraction

Decision Factors

Project phase — are you prototyping or building for production?

  • LangChain / LlamaIndex: Prototyping — explore ideas quickly
  • Custom implementation: Production — full control and debugging
  • Lightweight SDK: Either — good for both phases

Team expertise — how experienced is your team with LLM APIs?

  • LangChain / LlamaIndex: Low — frameworks abstract complexity
  • Custom implementation: High — leverage direct API knowledge
  • Lightweight SDK: Medium — some familiarity helps

Complexity of use case — standard pattern or unique requirements?

  • LangChain / LlamaIndex: Standard RAG, basic agents
  • Custom implementation: Unique orchestration, strict requirements
  • Lightweight SDK: Standard patterns with customization

Debugging requirements — how important is understanding what's happening?

  • LangChain / LlamaIndex: Low — trust the abstractions
  • Custom implementation: High — need to trace every step
  • Lightweight SDK: Medium — transparent but convenient

Token efficiency — are costs or context limits a concern?

  • LangChain / LlamaIndex: Not critical — frameworks can be verbose
  • Custom implementation: Critical — optimize every prompt
  • Lightweight SDK: Important — minimal overhead

Real-World Scenarios

Building a demo for a hackathon or internal POC

Recommended: framework

Speed is critical. LangChain gets you a working prototype in hours. Polish and efficiency don't matter yet.

Production customer support bot with strict SLAs

Recommended: custom

Need debugging transparency, optimal performance, and no surprises from framework updates.

Internal tool for developers with moderate complexity

Recommended: lightweight

Good balance of convenience and control. Team can understand and maintain it.

Multi-agent workflow with complex orchestration

Recommended: custom

Complex orchestration often doesn't fit framework patterns. Custom code gives flexibility.

Common Misconceptions

Myth: LangChain is necessary for building AI applications
Reality: The underlying APIs are simple. LangChain adds convenience but isn't required.
Myth: Custom implementation is too hard
Reality: Tool calling and RAG are ~100 lines of code each. The APIs are well-documented.
Myth: Frameworks are always slower
Reality: Runtime overhead is minimal. The cost is in abstraction complexity and verbose prompts.
Myth: You must choose one approach forever
Reality: Many teams prototype with LangChain, then rewrite critical paths in custom code.
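
The "~100 lines" claim above rests on the fact that the core of RAG retrieval is just similarity search over embeddings. A minimal sketch, assuming chunk embeddings are already computed by your provider of choice (the `Chunk` type and `topK` helper here are illustrative, not a library API):

```typescript
// Core of a custom RAG retriever: rank stored chunks by cosine
// similarity to the query embedding and keep the top k.
type Chunk = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(chunks: Chunk[], queryEmbedding: number[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) =>
      cosine(y.embedding, queryEmbedding) - cosine(x.embedding, queryEmbedding))
    .slice(0, k);
}
```

Embed your documents once, embed each query, call `topK`, and paste the winning chunks into the prompt. A vector database replaces the in-memory array when the corpus outgrows RAM, but the logic is the same.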

Sources

Tempered AI — Forged Through Practice, Not Hype
