In most of my Python projects, I realized I was spending nearly 20% of my development time just updating database schemas and keeping models in sync across multiple environments like dev, staging, and prod.
Every time I added a new field or changed a table, I had to jump through the usual hoops: update the model classes, generate the migration files with Alembic, double-check the SQL, then go back and clean up any mismatches. I also had to make sure the migration would actually run without errors, but I could never guarantee that until someone ran it manually against a target database or a CI/CD pipeline caught the failure. And of course, I still had to commit the changes, open a PR, and write documentation so teammates knew what I had changed.
It was slow, repetitive, and broke my focus. So I built an agent to do all of that for me. Think of it as your “database engineer”: a little AI-powered assistant that understands exactly what you want, makes the change, updates the code, and even submits a pull request to your GitHub repository. You don’t need to open your SQL console or touch your ORM (object-relational mapping) migrations. You just tell it what you want, in plain, human language, and it handles the rest.
From Schema Prompt to GitHub PR
The idea is simple. Let’s say I’m working on a travel agency project, and I want to add an `address` column to the `user` table. Instead of writing SQL or modifying Python classes manually, I can just say:
“Add an address field to the travel user table as a string.”
That’s it. The agent reads my request, updates the schema, regenerates the corresponding Python model (using either SQLAlchemy or Pydantic), and creates a GitHub pull request in my repo with all the necessary changes. It even documents the diff and includes ER diagrams if needed.
You can see it in action here: GitHub PR Example
How It Works Behind the Scenes
I used Agno to build the agent. The Agno agent orchestrates the actions and connects to multiple MCP servers: GibsonAI MCP handles the database operations, and GitHub MCP manages version control and PR creation. These two services run behind the scenes, connected through the agent’s logic. The frontend dashboard is a lightweight Python app powered by Streamlit.
When you type a prompt into the interface, the agent routes your request to the GibsonAI MCP Server, which applies the schema change to a real (serverless) database and keeps everything in sync with your project. It then regenerates the model classes and hands them off to the GitHub MCP Server, which commits the new files and opens a pull request.
This AI Agent workflow goes much further with:
- Smart migration ordering: Understands all dependencies and applies changes in the correct order (e.g., creating tables before adding constraints).
- Schema validation: Applies and tests the changes in a real dev database environment and verifies that the migration runs cleanly.
- PR Comments with Schema Diff & ERD Diagrams: Highlights changes by showing what’s being added/removed at the schema level.
It is also possible to build this automated PR-creation AI Assistant workflow using any AI-enabled editor like Cursor, Windsurf, or GitHub Copilot in VS Code; just follow the setup in this guide.
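For orientation, an MCP configuration for such an editor might look roughly like the snippet below. The exact file name and top-level fields depend on the editor; the two commands mirror the ones the agent itself launches:

```json
{
  "mcpServers": {
    "gibson": {
      "command": "uvx",
      "args": ["--from", "gibson-cli@latest", "gibson", "mcp", "run"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```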
About the DB Schema Change to PR Agent Project Structure
The `agent.py` script is where the brain of the agent lives. It handles the full schema-to-PR workflow: prompt instructions for the agent, connecting to the MCP servers, interpreting schema change requests, applying changes to the database via GibsonAI MCP, generating the correct Python models based on the updated schema, and then using GitHub MCP tools to commit those files and create a pull request.
# agent.py
import asyncio
import os
import traceback
from textwrap import dedent

from agno.agent import Agent, RunResponse
from agno.storage.sqlite import SqliteStorage
from agno.tools.mcp import MultiMCPTools
from agno.utils.log import logger

...

async def run_schema_to_pr_agent(
    message: str, model_id: str | None = None, session_id: str | None = None
) -> RunResponse:
    # Validate GitHub configuration
    if not GITHUB_TOKEN:
        raise ValueError(
            "GitHub configuration incomplete. Please set GITHUB_PERSONAL_ACCESS_TOKEN environment variable."
        )

    # Set up SQLite storage for session persistence
    storage = SqliteStorage(
        table_name="schema_pr_agent_sessions", db_file="tmp/schema_pr_agent.db"
    )

    # Set up environment for MCP servers
    env = {
        **os.environ,
        "GITHUB_PERSONAL_ACCESS_TOKEN": GITHUB_TOKEN,
    }

    try:
        # Connect to both MCP servers using MultiMCPTools
        async with MultiMCPTools(
            [
                "uvx --from gibson-cli@latest gibson mcp run",
                "npx -y @modelcontextprotocol/server-github",
            ],
            env=env,
            timeout_seconds=300,  # Increase timeout to 5 minutes
        ) as mcp_tools:
            agent = Agent(
                name="Schema-to-PR Agent",
                model=get_model(MODEL_ID, MODEL_API_KEY),
                tools=[mcp_tools],
                instructions=INSTRUCTIONS,
                storage=storage,
                session_id=session_id,
                add_datetime_to_instructions=True,
                add_history_to_messages=True,
                num_history_runs=3,  # Include last 3 conversation turns
                show_tool_calls=True,
            )
            response = await agent.arun(message)
            return response
    except TimeoutError as te:
        raise RuntimeError(f"MCP server timeout: {te}") from te
    except Exception as e:
        print("=== FULL ERROR TRACEBACK ===")
        traceback.print_exc()
        print("============================")
        raise RuntimeError(
            f"Error connecting to MCP servers or running agent: {e}"
        ) from e
On the other hand, `app.py` provides the interface using Streamlit. It allows you to enter schema changes, configure settings like the repository and model directory, and view the history of your interactions. The app handles session management, error handling, and visual feedback on the success or failure of requests. It’s the frontend layer that brings the agent to life and makes working with schema changes feel conversational.
What You Need to Get Started
Before you can run the agent, you’ll need a few things set up:
- Python 3.10+
- UV package manager
- A GibsonAI account and CLI installed
- A GitHub personal access token
- An API key for OpenAI or Groq
You can clone a sample schema or create your own via a prompt in the GibsonAI app. The agent requires your GibsonAI Project ID to connect and apply schema changes; you will find instructions for locating it in the README file.
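Pulling the prerequisites together, the environment configuration might look like the sketch below. All values are placeholders, the authoritative key names live in the repo’s `.env.example`, and `GIBSONAI_PROJECT_ID` is a hypothetical name for wherever the Project ID goes:

```
# Placeholder values; check .env.example for the real key names
GITHUB_PERSONAL_ACCESS_TOKEN=ghp_xxxxxxxxxxxx   # GitHub token from the list above
OPENAI_API_KEY=sk-xxxxxxxxxxxx                  # or a Groq key, depending on your model
GIBSONAI_PROJECT_ID=your-project-id             # hypothetical variable name
```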
How to Install the Agent
First, clone the repo and install the dependencies using:
git clone https://github.com/GibsonAI/awesome-gibson.git
cd awesome-gibson/database-schema-to-pr-agent
Install project dependencies with UV:
uv sync
Then copy `.env.example` to `.env` and update it with your configuration. Once everything’s in place, you can start the Streamlit UI with:
streamlit run app.py
This will launch the interface at http://localhost:8501.
You can fork and use the following sample repo with Python models for the travel agency database; it is already synced with the GibsonAI project sample schema. Start describing your database schema changes in natural language and watch the agent do its magic.
Final Thoughts
This project started as a personal experiment to save myself time and reduce friction during schema changes. But it quickly turned into something that could benefit every developer working with relational databases.
If you’ve ever been annoyed by migration scripts, model drift, or PRs without context, this agent is for you. Give it a try and let me know what you think; I’d love to see how it can help others too.