How to Use PostgreSQL for LangGraph Memory and Checkpointing with FastAPI

Why LangGraph Works in Scripts but Breaks in FastAPI — and the Correct Way to Manage Postgres Lifecycles

TermTrix
Dec 31, 2025 · 4 min read

I Built a LangGraph + FastAPI Agent… and Spent Days Fighting Postgres

Here’s what actually went wrong — and how I fixed it.

I had a LangGraph-based AI agent working perfectly.

  • Memory worked
  • Checkpointing worked
  • Postgres worked

Everything ran flawlessly in a standalone Python script.

Then I moved the exact same logic into FastAPI.

And everything broke.

This post explains why that happened, the concrete errors I hit, and the one mental model shift that finally fixed everything.


The Goal

The setup sounded straightforward:

  • Use LangGraph for stateful conversations
  • Store memory and checkpoints in Postgres
  • Run everything inside FastAPI for production

In a standalone script, the core logic looked like this (simplified):

async with (
    AsyncPostgresStore.from_conn_string(DB_URI) as store,
    AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer,
):
    graph = builder.compile(store=store, checkpointer=checkpointer)
    await graph.ainvoke(...)

It worked beautifully.

So I reused the same idea inside FastAPI.

That’s where the trouble started.


Error #1: “the connection is closed”

The first failure looked like a classic database problem:

psycopg.OperationalError: the connection is closed

Naturally, I suspected:

  • Connection pooling issues
  • Event loop conflicts
  • Postgres limits

None of those were the problem.


The Real Issue

Inside FastAPI, I was creating the graph during startup using an async with block.

That meant:

  • Postgres connections opened
  • Graph compiled
  • async with exited
  • Connections closed ❌

Later, incoming requests reused the already-compiled graph.

That graph was holding references to closed database connections.
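The failure mode is easy to reproduce with the standard library alone. The sketch below uses a hypothetical FakeConnection stand-in for the real psycopg connection — nothing LangGraph-specific — to show how exiting async with during startup leaves a later request holding a dead handle:

```python
import asyncio
from contextlib import asynccontextmanager

class FakeConnection:
    """Stand-in for a psycopg connection (hypothetical, for illustration)."""
    def __init__(self):
        self.closed = False

    def query(self):
        if self.closed:
            raise RuntimeError("the connection is closed")
        return "row"

@asynccontextmanager
async def from_conn_string():
    conn = FakeConnection()
    try:
        yield conn
    finally:
        conn.closed = True  # leaving the block closes the connection

async def broken_startup():
    # Mirrors compiling the graph inside `async with` during startup:
    async with from_conn_string() as conn:
        graph = {"conn": conn}  # the "compiled graph" keeps a reference
    return graph  # block has exited -> connection is already closed

async def handle_request(graph):
    return graph["conn"].query()  # raises: the connection is closed

graph = asyncio.run(broken_startup())
```

Startup "succeeds", but the first call to handle_request raises exactly the closed-connection error above.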


Error #2: _AsyncGeneratorContextManager Has No Attribute get_next_version

After refactoring, things actually got worse:

AttributeError: '_AsyncGeneratorContextManager' object has no attribute 'get_next_version'

This one was confusing — until I printed the types:

print(type(store))
print(type(checkpointer))

Output:

<class 'contextlib._AsyncGeneratorContextManager'>
<class 'contextlib._AsyncGeneratorContextManager'>

The Key Realization

These calls:

AsyncPostgresStore.from_conn_string(...)
AsyncPostgresSaver.from_conn_string(...)

do not return usable store objects.

They return async context managers.

In a script, this works because you immediately enter them.

In FastAPI, if you don’t explicitly enter those context managers, LangGraph receives context managers instead of real store and checkpointer objects.

LangGraph expects methods like:

  • checkpointer.get_next_version()

Context managers do not have those methods.
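You can see the same wrapping with nothing but contextlib. In this sketch, Saver is a hypothetical stand-in for AsyncPostgresSaver; the point is that calling the factory only builds the context manager — the real object exists only inside async with:

```python
import asyncio
from contextlib import asynccontextmanager

class Saver:
    """Stand-in for AsyncPostgresSaver (hypothetical)."""
    def get_next_version(self):
        return 1

@asynccontextmanager
async def from_conn_string(uri):
    yield Saver()

cm = from_conn_string("postgresql://...")
print(type(cm).__name__)                # _AsyncGeneratorContextManager
print(hasattr(cm, "get_next_version"))  # False — the wrapper, not the saver

async def main():
    async with from_conn_string("postgresql://...") as saver:
        print(hasattr(saver, "get_next_version"))  # True — the real object

asyncio.run(main())
```

Passing cm straight into builder.compile() is precisely how the AttributeError above is produced.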


Error #3: relation "checkpoints" does not exist

Once the lifecycle issues were fixed, I hit this:

psycopg.errors.UndefinedTable: relation "checkpoints" does not exist

This one was expected.

LangGraph does not auto-create database tables.

I had commented out the required setup calls:

await store.setup()
await checkpointer.setup()

Without them, Postgres had no schema.


The One Mental Model That Fixed Everything

This realization unlocked the solution:

Scripts own their lifecycle. Servers do not.

Scripts

  • Open resources
  • Use them once
  • Close everything
  • Exit

FastAPI

  • Runs for hours
  • Handles many requests
  • Must keep shared resources alive
  • Must close them only on shutdown

The fix wasn’t changing LangGraph code.

The fix was moving lifecycle ownership into FastAPI’s lifespan.


The Correct Production Pattern

1. Enter Postgres context managers once (at startup)

from contextlib import asynccontextmanager

from fastapi import FastAPI
from langgraph.checkpoint.postgres.aio import AsyncPostgresSaver
from langgraph.store.postgres.aio import AsyncPostgresStore

@asynccontextmanager
async def lifespan(app: FastAPI):
    async with (
        AsyncPostgresStore.from_conn_string(DB_URI) as store,
        AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer,
    ):
        await store.setup()
        await checkpointer.setup()

        app.state.graph = await create_graph(store, checkpointer)
        yield
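If nesting everything under one async with feels awkward, contextlib.AsyncExitStack does the same job: enter each context manager once at startup, and they all close together (in reverse order) when the stack exits on shutdown. A stdlib-only sketch with hypothetical stand-in resources:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

opened, closed = [], []

@asynccontextmanager
async def resource(name):
    """Stand-in for from_conn_string(...) — hypothetical, for illustration."""
    opened.append(name)
    try:
        yield name
    finally:
        closed.append(name)

async def lifespan_demo():
    async with AsyncExitStack() as stack:
        # Enter both context managers once; they stay open for the
        # lifetime of the stack (i.e. the server's lifetime).
        store = await stack.enter_async_context(resource("store"))
        checkpointer = await stack.enter_async_context(resource("checkpointer"))
        assert closed == []  # still open while "serving requests"
        return store, checkpointer

result = asyncio.run(lifespan_demo())
```

In a real lifespan you would call stack.enter_async_context on the actual store and checkpointer context managers, keep the stack open across the yield, and let shutdown unwind it.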

2. Make graph creation pure

No database lifecycle logic inside.

async def create_graph(store, checkpointer):
    builder = StateGraph(MessagesState)
    builder.add_node("call_model", call_model)
    builder.add_edge(START, "call_model")

    return builder.compile(
        store=store,
        checkpointer=checkpointer,
    )

3. Reuse the graph safely per request

@app.post("/chat")
async def chat(req: Request):
    graph = req.app.state.graph
    return await graph.ainvoke(...)
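One detail the handler glosses over: checkpointing is keyed by a thread_id passed via config={"configurable": {"thread_id": ...}} — that part is LangGraph's real convention. The sketch below fakes the graph itself (Graph is a hypothetical stand-in, not a compiled LangGraph graph) just to show why one shared, long-lived graph can still serve many independent conversations:

```python
import asyncio
from types import SimpleNamespace

class Graph:
    """Hypothetical stand-in for a compiled LangGraph graph."""
    def __init__(self):
        self.threads = {}  # thread_id -> message history ("checkpoints")

    async def ainvoke(self, payload, config):
        tid = config["configurable"]["thread_id"]
        history = self.threads.setdefault(tid, [])
        history.append(payload["message"])
        return {"thread_id": tid, "turns": len(history)}

# One long-lived graph on app state, shared by every request:
app = SimpleNamespace(state=SimpleNamespace(graph=Graph()))

async def chat(message: str, thread_id: str):
    graph = app.state.graph
    return await graph.ainvoke(
        {"message": message},
        config={"configurable": {"thread_id": thread_id}},
    )

print(asyncio.run(chat("hi", "user-1")))     # turns: 1
print(asyncio.run(chat("again", "user-1")))  # turns: 2 — state survives
print(asyncio.run(chat("hello", "user-2")))  # turns: 1 — isolated thread
```

The graph is shared; the conversation state is not — isolation comes from the thread_id, not from creating a graph per request.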

Why the Script Worked but FastAPI Didn’t

Because the script implicitly did this:

open DB → run graph → close DB → exit

FastAPI requires this instead:

open DB → serve many requests → close DB on shutdown

Once the lifecycle aligned with the framework, every error disappeared.


Lessons Learned (Save Yourself the Pain)

  • Never use async with for shared resources inside request logic
  • If from_conn_string() returns a context manager, you must enter it
  • LangGraph will not create Postgres tables automatically
  • If you see _AsyncGeneratorContextManager, your lifecycle is wrong
  • Scripts and servers require different mental models

Final Thoughts

This wasn’t a LangGraph bug. This wasn’t a FastAPI bug. This wasn’t a Postgres bug.

It was a lifecycle mismatch.

Once that clicked, the solution was clean, simple, and production-ready.

If you’re building stateful AI agents or long-running assistants with LangGraph, get the lifecycle right first.

Everything else becomes easy after that.

#LangGraph #FastAPI #Python #Async #PostgreSQL #AIAgents #BackendEngineering #SystemDesign #ProductionDebugging
