LangChain Tutorial: Building LLM Applications

📅 December 05, 2025 🏷️ Generative AI

LangChain is a framework for developing applications powered by large language models. It provides composable building blocks for prompts, chains, memory, document loading, vector search, and agents, so you can assemble sophisticated AI apps with relatively little code.
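The examples below use the classic LangChain import paths (`langchain.llms`, `langchain.chains`, and so on). In LangChain 0.1 and later, many of these integrations moved into the `langchain-community` and `langchain-openai` packages, so either pin an older release or adjust the imports accordingly. You will also need an OpenAI API key exported as `OPENAI_API_KEY`.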

Basic LangChain Setup


```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Initialize LLM
llm = OpenAI(temperature=0.7)

# Create prompt template
template = "Write a {adjective} story about {topic}"
prompt = PromptTemplate(
    input_variables=["adjective", "topic"],
    template=template
)

# Generate
result = llm(prompt.format(adjective="funny", topic="robots"))
print(result)
```
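On newer LangChain releases the direct-call style above is deprecated in favor of `invoke` and the pipe operator. A minimal sketch of the same example in that style, assuming the `langchain-openai` package is installed:

```python
from langchain_openai import OpenAI
from langchain_core.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)
prompt = PromptTemplate.from_template("Write a {adjective} story about {topic}")

# Pipe the prompt into the model and invoke with the template variables
chain = prompt | llm
print(chain.invoke({"adjective": "funny", "topic": "robots"}))
```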

Chains for Multi-Step Tasks


```python
from langchain.chains import LLMChain, SimpleSequentialChain

# First chain: Generate topic
topic_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["interest"],
        template="Suggest a blog topic about {interest}"
    )
)

# Second chain: Write outline
outline_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["topic"],
        template="Create an outline for: {topic}"
    )
)

# Combine chains
overall_chain = SimpleSequentialChain(
    chains=[topic_chain, outline_chain]
)

result = overall_chain.run("artificial intelligence")
```
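`SimpleSequentialChain` assumes each step has exactly one input and one output: the topic string produced by `topic_chain` is fed straight in as the `{topic}` variable of `outline_chain`. When chains need multiple named inputs or outputs, LangChain's `SequentialChain` handles the variable mapping instead.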

Memory for Conversations


```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

# Add memory
memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

# Chat with memory
conversation.predict(input="Hi, my name is Alice")
conversation.predict(input="What's my name?")  # Remembers "Alice"
```
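To verify what the conversation memory actually holds, you can dump its stored history; a quick check on the `memory` object created above:

```python
# Inspect the accumulated conversation history
print(memory.load_memory_variables({}))
# e.g. {'history': "Human: Hi, my name is Alice\nAI: ..."}
```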

Document Loaders and Splitting


```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Load document
loader = TextLoader('document.txt')
documents = loader.load()

# Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200
)
chunks = text_splitter.split_documents(documents)
```
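Each chunk is a `Document` carrying `page_content` and `metadata`, which makes it easy to sanity-check the split before indexing it:

```python
# Quick sanity check on the split
print(f"{len(chunks)} chunks")
print(chunks[0].page_content[:200])  # first 200 characters of the first chunk
print(chunks[0].metadata)            # e.g. {'source': 'document.txt'}
```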

Vector Stores for Semantic Search


```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Create embeddings
embeddings = OpenAIEmbeddings()

# Build vector store
vectorstore = FAISS.from_documents(chunks, embeddings)

# Similarity search
query = "What is machine learning?"
docs = vectorstore.similarity_search(query, k=3)
```
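The vector store becomes more useful once it is wired into a retrieval chain, so the LLM answers questions grounded in the retrieved chunks. A minimal sketch using the classic `RetrievalQA` chain, reusing the `llm` and `vectorstore` objects from above:

```python
from langchain.chains import RetrievalQA

# Retrieval-augmented QA: fetch relevant chunks, then let the LLM answer
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3})
)
print(qa_chain.run("What is machine learning?"))
```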

Agents for Tool Use


```python
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType

# Load tools
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Create agent
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

# Use agent
agent.run("What is the population of Tokyo multiplied by 2?")
```
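Note that the `serpapi` tool calls an external search service, so it needs a SerpAPI key available in the environment (typically `SERPAPI_API_KEY`) in addition to your OpenAI key.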

LangChain simplifies building complex LLM applications. Start creating today!

🏷️ Tags:
langchain llm ai development chatgpt python generative ai
