I Built a LangGraph + FastAPI Agent… and Spent Days Fighting Postgres
Here’s what actually went wrong — and how I fixed it.
I had a LangGraph-based AI agent working perfectly.
- Memory worked
- Checkpointing worked
- Postgres worked
Everything ran flawlessly in a standalone Python script.
Then I moved the exact same logic into FastAPI.
And everything broke.
This post explains why that happened, the concrete errors I hit, and the one mental model shift that finally fixed everything.
The Goal
The setup sounded straightforward:
- Use LangGraph for stateful conversations
- Store memory and checkpoints in Postgres
- Run everything inside FastAPI for production
In a standalone script, the core logic looked like this (simplified):
async with (
    AsyncPostgresStore.from_conn_string(DB_URI) as store,
    AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer,
):
    graph = builder.compile(store=store, checkpointer=checkpointer)
    await graph.ainvoke(...)
It worked beautifully.
So I reused the same idea inside FastAPI.
That’s where the trouble started.
Error #1: “the connection is closed”
The first failure looked like a classic database problem:
psycopg.OperationalError: the connection is closed
Naturally, I suspected:
- Connection pooling issues
- Event loop conflicts
- Postgres limits
None of those were the problem.
The Real Issue
Inside FastAPI, I was creating the graph during startup using an async with block.
That meant:
- Postgres connections opened
- Graph compiled
- async with exited
- Connections closed ❌
Later, incoming requests reused the already-compiled graph.
That graph was holding references to closed database connections.
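The bug can be reproduced with nothing but the standard library. In this sketch, a hypothetical Conn class stands in for a psycopg connection, and a dict stands in for the compiled graph; the names are illustrative, not from LangGraph:

```python
import asyncio
from contextlib import asynccontextmanager

class Conn:
    """Hypothetical stand-in for a psycopg connection."""
    def __init__(self):
        self.closed = False

    def query(self):
        if self.closed:
            raise RuntimeError("the connection is closed")
        return "ok"

@asynccontextmanager
async def open_conn():
    conn = Conn()
    try:
        yield conn
    finally:
        conn.closed = True  # leaving the block closes the connection

async def startup():
    # "Startup": enter the context manager, build the graph, then exit
    async with open_conn() as conn:
        return {"conn": conn}  # graph keeps a reference to conn

graph = asyncio.run(startup())

# "Request": the graph object survived, but its connection did not
try:
    result = graph["conn"].query()
except RuntimeError as e:
    result = str(e)
print(result)  # the connection is closed
```

The graph outlives the async with block, but the connection it captured does not; every later request trips over the same dead handle.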
Error #2: _AsyncGeneratorContextManager Has No Attribute get_next_version
After refactoring, things actually got worse:
AttributeError: '_AsyncGeneratorContextManager' object has no attribute 'get_next_version'
This one was confusing — until I printed the types:
print(type(store))
print(type(checkpointer))
Output:
<class 'contextlib._AsyncGeneratorContextManager'>
<class 'contextlib._AsyncGeneratorContextManager'>
The Key Realization
These calls:
AsyncPostgresStore.from_conn_string(...)
AsyncPostgresSaver.from_conn_string(...)
do not return usable store objects.
They return async context managers.
In a script, this works because you immediately enter them.
In FastAPI, if you don’t explicitly enter those context managers, LangGraph receives context managers instead of real store and checkpointer objects.
LangGraph expects methods like:
checkpointer.get_next_version()
Context managers do not have those methods.
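You can see why with a stdlib sketch. A factory decorated with @asynccontextmanager (which is roughly how these from_conn_string helpers behave) returns a context manager when called, not the object it will eventually yield. FakeSaver here is a hypothetical stand-in for AsyncPostgresSaver:

```python
from contextlib import asynccontextmanager

class FakeSaver:
    """Hypothetical stand-in for AsyncPostgresSaver."""
    def get_next_version(self):
        return 1

@asynccontextmanager
async def from_conn_string(uri):
    # The real object only becomes available once you enter the block
    yield FakeSaver()

checkpointer = from_conn_string("postgres://...")
print(type(checkpointer).__name__)                # _AsyncGeneratorContextManager
print(hasattr(checkpointer, "get_next_version"))  # False
```

Until you write async with from_conn_string(...) as checkpointer, you are holding the wrapper, and the wrapper has none of the saver's methods.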
Error #3: relation "checkpoints" does not exist
Once the lifecycle issues were fixed, I hit this:
psycopg.errors.UndefinedTable: relation "checkpoints" does not exist
This one was expected.
LangGraph does not auto-create database tables.
I had commented out the required setup calls:
await store.setup()
await checkpointer.setup()
Without them, Postgres had no schema.
The One Mental Model That Fixed Everything
This realization unlocked the solution:
Scripts own their lifecycle. Servers do not.
Scripts
- Open resources
- Use them once
- Close everything
- Exit
FastAPI
- Runs for hours
- Handles many requests
- Must keep shared resources alive
- Must close them only on shutdown
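The server side of that model can be sketched with the standard library alone: enter the context managers once via contextlib.AsyncExitStack, keep the handles alive across many "requests", and close everything in one place at shutdown. The resource factory here is hypothetical, not a LangGraph API:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

@asynccontextmanager
async def open_resource(name):
    """Hypothetical long-lived resource, e.g. a DB connection pool."""
    try:
        yield f"<{name}>"
    finally:
        pass  # real code would close the pool here

async def server():
    stack = AsyncExitStack()
    # Startup: enter once, keep the entered objects
    store = await stack.enter_async_context(open_resource("store"))
    saver = await stack.enter_async_context(open_resource("saver"))

    # Serve many requests against the same live resources
    served = [(i, store, saver) for i in range(3)]

    # Shutdown: unwind every entered context manager, once
    await stack.aclose()
    return served

results = asyncio.run(server())
```

AsyncExitStack is one way to decouple "enter" from "exit" when they cannot live in the same async with block; FastAPI's lifespan (below) achieves the same thing by parking the block around a yield.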
The fix wasn’t changing LangGraph code.
The fix was moving lifecycle ownership into FastAPI’s lifespan.
The Correct Production Pattern
1. Enter Postgres context managers once (at startup)
@asynccontextmanager
async def lifespan(app: FastAPI):
    async with (
        AsyncPostgresStore.from_conn_string(DB_URI) as store,
        AsyncPostgresSaver.from_conn_string(DB_URI) as checkpointer,
    ):
        await store.setup()
        await checkpointer.setup()
        app.state.graph = await create_graph(store, checkpointer)
        yield  # connections stay open until shutdown

app = FastAPI(lifespan=lifespan)
2. Make graph creation pure
No database lifecycle logic inside.
async def create_graph(store, checkpointer):
    builder = StateGraph(MessagesState)
    builder.add_node("call_model", call_model)
    builder.add_edge(START, "call_model")
    return builder.compile(
        store=store,
        checkpointer=checkpointer,
    )
3. Reuse the graph safely per request
@app.post("/chat")
async def chat(req: Request):
    graph = req.app.state.graph
    return await graph.ainvoke(...)
Why the Script Worked but FastAPI Didn’t
Because the script implicitly did this:
open DB → run graph → close DB → exit
FastAPI requires this instead:
open DB → serve many requests → close DB on shutdown
Once the lifecycle aligned with the framework, every error disappeared.
Lessons Learned (Save Yourself the Pain)
- Never use async with for shared resources inside request logic
- If from_conn_string() returns a context manager, you must enter it
- LangGraph will not create Postgres tables automatically
- If you see _AsyncGeneratorContextManager, your lifecycle is wrong
- Scripts and servers require different mental models
Final Thoughts
This wasn’t a LangGraph bug. This wasn’t a FastAPI bug. This wasn’t a Postgres bug.
It was a lifecycle mismatch.
Once that clicked, the solution was clean, simple, and production-ready.
If you’re building stateful AI agents or long-running assistants with LangGraph, get the lifecycle right first.
Everything else becomes easy after that.