Getting Started with LangChain and LLMs

Large Language Models (LLMs) have fundamentally changed the way we build applications. From chatbots to knowledge retrieval systems, from code generators to creative writing assistants — LLMs are everywhere. But building production-ready applications with them requires more than just calling an API. That's where LangChain comes in.

What is LangChain?

LangChain is an open-source framework designed to simplify the development of applications powered by LLMs. It provides modular components for prompt management, chains that compose model calls, retrieval over your own documents, agents that use tools, and conversational memory.

Setting Up Your Environment

First, install the required packages:

pip install langchain openai chromadb tiktoken duckduckgo-search

Create a .env file with your OpenAI API key:

OPENAI_API_KEY=sk-your-api-key-here
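LangChain's OpenAI integrations read the key from the OPENAI_API_KEY environment variable, so the .env file has to actually be loaded before you create a model. The usual choice is the python-dotenv package (pip install python-dotenv, then call load_dotenv() at startup); if you'd rather avoid the dependency, a minimal stdlib loader looks roughly like this sketch:

```python
import os
from pathlib import Path

def load_env(path: str = ".env") -> None:
    # Parse KEY=VALUE lines, skipping blanks and comments. Existing
    # environment variables win over values from the file.
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call load_env() once at the top of your script, before instantiating any LangChain models.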

Your First Chain

Let's build a simple chain that takes a topic and generates a blog outline:

from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(model="gpt-4", temperature=0.7)

prompt = ChatPromptTemplate.from_template(
    "Create a detailed blog outline for the topic: {topic}"
)

chain = LLMChain(llm=llm, prompt=prompt)
result = chain.run(topic="Building AI apps with LangChain")
print(result)

Adding RAG (Retrieval-Augmented Generation)

The real power of LangChain shows when you combine LLMs with your own data using Retrieval-Augmented Generation (RAG).

RAG allows your LLM to answer questions grounded in specific documents, reducing hallucinations and increasing accuracy.

Here's a simplified workflow:

  1. Load your documents (PDFs, web pages, text files)
  2. Split them into chunks using a text splitter
  3. Create embeddings and store them in a vector database (Pinecone, ChromaDB)
  4. At query time, retrieve relevant chunks and pass them to the LLM
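Before reaching for real libraries, the whole pipeline above can be sketched with toy stand-ins: a fixed-size splitter for step 2, and bag-of-words "embeddings" with cosine similarity for steps 3 and 4. Real systems use learned dense vectors, but the retrieval idea is identical:

```python
import math
import re
from collections import Counter

def split_text(text: str, chunk_size: int = 80) -> list[str]:
    # Step 2 (toy): fixed-size character chunks. Real splitters
    # respect paragraph and sentence boundaries.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed(text: str) -> Counter:
    # Step 3 (toy): bag-of-words counts instead of learned embeddings.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc = ("LangChain is an open-source framework for building LLM applications. "
       "ChromaDB is a vector database that stores embeddings. "
       "Bananas are rich in potassium.")
index = [(chunk, embed(chunk)) for chunk in split_text(doc)]   # steps 2-3

query = embed("What is LangChain?")                            # step 4
best_chunk, _ = max(index, key=lambda item: cosine(query, item[1]))
print(best_chunk)
```

The chunk mentioning LangChain scores highest because it shares the most query terms; dense embeddings generalize this beyond exact word overlap.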

Example with ChromaDB

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Load and split
loader = TextLoader("my_notes.txt")
docs = loader.load()
splitter = RecursiveCharacterTextSplitter(chunk_size=500)
chunks = splitter.split_documents(docs)

# Create vector store
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(chunks, embeddings)

# Query
results = vectorstore.similarity_search("What is LangChain?")
print(results[0].page_content)
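Note that similarity_search only returns chunks; step 4 of the workflow still needs those chunks passed to the LLM. LangChain bundles this as a RetrievalQA chain (RetrievalQA.from_chain_type(llm=llm, retriever=vectorstore.as_retriever())), which conceptually just stuffs the retrieved text into a prompt. Here's a stubbed sketch of that pattern — the fake_* names are stand-ins so it runs offline, not real APIs:

```python
def retrieval_qa(question, retrieve, call_llm, k=3):
    # Retrieve relevant chunks, stuff them into the prompt, ask the LLM.
    chunks = retrieve(question, k)
    context = "\n\n".join(chunks)
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Question: {question}")
    return call_llm(prompt)

# Stubs so the sketch runs without API calls; in the real chain,
# retrieve is vectorstore.similarity_search and call_llm is the chat model.
fake_retrieve = lambda q, k: ["LangChain is an open-source LLM framework."]
fake_llm = lambda prompt: f"Based on the notes: {prompt.splitlines()[1]}"

answer = retrieval_qa("What is LangChain?", fake_retrieve, fake_llm)
print(answer)
```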

Building an Agent

Agents are LLMs that can decide which tools to use, when to use them, and how to combine their outputs. They're like giving your AI a toolkit.

from langchain.agents import initialize_agent, Tool
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
tools = [
    Tool(name="Search", func=search.run,
         description="Search the web for current information")
]

agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True
)
agent.run("What are the latest trends in AI for 2026?")
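Under the hood, a zero-shot ReAct agent runs a loop: the LLM emits Thought/Action/Action Input text, the framework executes the named tool, appends the Observation, and calls the LLM again until it produces a Final Answer. This toy loop (with a scripted stand-in for the LLM — it is not LangChain's actual implementation, just the shape of it) shows the idea:

```python
import re

def react_agent(question, call_llm, tools, max_steps=5):
    # Minimal ReAct loop: parse Action/Action Input from the LLM's
    # reply, run the tool, feed the Observation back in.
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_llm(transcript)
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        m = re.search(r"Action: (\w+)\nAction Input: (.+)", reply)
        if m:
            observation = tools[m.group(1)](m.group(2))
            transcript += f"Observation: {observation}\n"
    return "Gave up."

# Scripted stand-in for the LLM: first asks to search, then answers.
replies = iter([
    "Thought: I should search.\nAction: Search\nAction Input: AI trends",
    "Thought: I have what I need.\nFinal Answer: Agents and RAG.",
])
fake_llm = lambda prompt: next(replies)
tools = {"Search": lambda q: "Top results mention agents and RAG."}

result = react_agent("What are the latest AI trends?", fake_llm, tools)
print(result)  # Agents and RAG.
```

The verbose=True flag on initialize_agent prints exactly this kind of Thought/Action/Observation transcript, which is invaluable for debugging.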

Key Takeaways

  1. Chains compose prompts and models into reusable pipelines
  2. RAG grounds LLM answers in your own documents, reducing hallucinations
  3. Agents let the model decide which tools to use and when

The LLM ecosystem is evolving rapidly. LangChain stays up to date with the latest models and integrations, making it an excellent choice for building your next AI-powered project. Happy building!