Transform your SQL schemas into production-ready databases with automatically generated APIs, Pydantic models, and SQLAlchemy integration. Go from schema to production in minutes, not days.
How it works
Import your SQL schema file, and GibsonAI automatically creates a fully functional MySQL database with RESTful APIs, Pydantic schemas, and SQLAlchemy models. Perfect for rapid prototyping and production deployments.
Key Features
SQL Schema Import
- SQL Schema Files: Direct import from .sql files with DDL statements
- MySQL Support: Full support for MySQL databases and syntax
- Relationship Detection: Automatically detects foreign key relationships
- Data Type Mapping: Intelligent mapping of SQL data types to Python types
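To give a feel for the data-type mapping, here is an illustrative sketch of how common MySQL column types correspond to Python types in the generated code. This table is an assumption for illustration, not GibsonAI's actual internal mapping:

```python
from datetime import datetime, date
from decimal import Decimal

# Illustrative mapping of common MySQL column types to Python types
# (a sketch; the real mapping is internal to GibsonAI)
SQL_TO_PYTHON = {
    "INT": int,
    "BIGINT": int,
    "DECIMAL": Decimal,
    "FLOAT": float,
    "VARCHAR": str,
    "TEXT": str,
    "DATE": date,
    "DATETIME": datetime,
    "TIMESTAMP": datetime,
    "TINYINT(1)": bool,  # commonly treated as a boolean flag
}

print(SQL_TO_PYTHON["TIMESTAMP"])  # <class 'datetime.datetime'>
```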
Automatic Code Generation
- Pydantic Schemas: Generate validation schemas for all your tables
- SQLAlchemy Models: Create ORM models for database interactions
- API Documentation: Automatically generated OpenAPI specifications
- Python Integration: Ready-to-use Python code for your applications
Text-to-SQL Analysis
- Natural Language Queries: Ask questions about your imported data
- Gibson Studio: Run generated SQL queries in the intuitive data management UI
- Data Exploration: Discover patterns and insights in your imported schema
- Query Generation: Automatically generate SQL from natural language

Step-by-step guide
1. Import your SQL schema
# Import your schema (from a .sql file or an existing MySQL database)
gibson import mysql
2. Generate Python models
# Generate Pydantic schemas
gibson code schemas
# Generate SQLAlchemy models
gibson code models
# Generate all Python code
gibson code base
3. Explore with text-to-SQL
Use Gibson Studio to analyze your imported schema:
- "Show me all tables and their relationships"
- "Which tables have the most foreign key constraints?"
- "Find any tables without primary keys"
- "What's the structure of the users table?"
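As an illustration, a question like "Find any tables without primary keys" might translate into SQL along the following lines, which you could then send to the `/v1/-/query` endpoint shown below. The SQL here is a hypothetical sketch of what Gibson Studio could generate, not its actual output:

```python
# Hypothetical SQL for "Find any tables without primary keys"
# (a sketch against MySQL's information_schema; actual generated SQL may differ)
NO_PK_SQL = """
SELECT t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_schema = DATABASE()
  AND c.constraint_name IS NULL
"""

def run_query(sql: str):
    """Send a SQL statement to the project's query endpoint (see below)."""
    import requests  # third-party: pip install requests
    resp = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": sql})
    resp.raise_for_status()
    return resp.json()
```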
4. Deploy and access your database
Your database is automatically deployed with:
- RESTful APIs: Base URL https://api.gibsonai.com
- SQL queries: /v1/-/query
- Table operations: /v1/-/[table-name-in-kebab-case]
- OpenAPI Spec: Available in your project settings
- Direct Connection: Connection string available in the UI
- API Documentation: Available in the data API section
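Since table endpoints use kebab-case, a small helper can build the URL for a given table name. The conversion rule sketched here (snake_case or CamelCase to kebab-case) is an assumption for illustration:

```python
import re

BASE_URL = "https://api.gibsonai.com"

def table_endpoint(table_name: str) -> str:
    """Build the table-operations URL for a table name.

    Assumes kebab-case conversion from snake_case or CamelCase;
    check your project's generated API docs for the exact naming.
    """
    kebab = re.sub(r"(?<!^)(?=[A-Z])", "-", table_name).replace("_", "-").lower()
    return f"{BASE_URL}/v1/-/{kebab}"

print(table_endpoint("user_posts"))  # https://api.gibsonai.com/v1/-/user-posts
```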
Example SQL import
Sample schema file
-- users.sql
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE posts (
    id INT AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(255) NOT NULL,
    content TEXT,
    user_id INT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(id)
);
Generated Pydantic schemas
from pydantic import BaseModel
from datetime import datetime
from typing import Optional

class User(BaseModel):
    id: Optional[int] = None
    email: str
    name: str
    created_at: Optional[datetime] = None

class Post(BaseModel):
    id: Optional[int] = None
    title: str
    content: Optional[str] = None
    user_id: int
    created_at: Optional[datetime] = None
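The generated schemas can be used for request validation right away. A quick sketch using the User schema (Pydantic v2 syntax; on v1 use `.dict()` instead of `.model_dump()`):

```python
from datetime import datetime
from typing import Optional
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    id: Optional[int] = None
    email: str
    name: str
    created_at: Optional[datetime] = None

# Valid payload: optional fields fall back to their defaults
user = User(email="jane@example.com", name="Jane Doe")
print(user.model_dump())  # use .dict() on Pydantic v1

# Invalid payload: the required "name" field is missing
try:
    User(email="jane@example.com")
except ValidationError as exc:
    print(exc.errors()[0]["loc"])  # ('name',)
```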
Generated SQLAlchemy models
from datetime import datetime
from sqlalchemy import Column, Integer, String, Text, DateTime, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True, autoincrement=True)
    email = Column(String(255), unique=True, nullable=False)
    name = Column(String(100), nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)

    posts = relationship("Post", back_populates="user")

class Post(Base):
    __tablename__ = 'posts'

    id = Column(Integer, primary_key=True, autoincrement=True)
    title = Column(String(255), nullable=False)
    content = Column(Text)
    user_id = Column(Integer, ForeignKey('users.id'), nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)

    user = relationship("User", back_populates="posts")
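To try the generated models locally before deploying, you can point them at an in-memory SQLite database. This is a sketch for experimentation only; in production you would use the MySQL connection string from the GibsonAI UI:

```python
from datetime import datetime
from sqlalchemy import create_engine, Column, Integer, String, Text, DateTime, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True, autoincrement=True)
    email = Column(String(255), unique=True, nullable=False)
    name = Column(String(100), nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)
    posts = relationship("Post", back_populates="user")

class Post(Base):
    __tablename__ = "posts"
    id = Column(Integer, primary_key=True, autoincrement=True)
    title = Column(String(255), nullable=False)
    content = Column(Text)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)
    user = relationship("User", back_populates="posts")

# In-memory SQLite for a quick local test;
# in production, use the connection string from the GibsonAI UI
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

# The foreign key relationship lets us attach posts directly to a user
user = User(email="jane@example.com", name="Jane Doe")
user.posts.append(Post(title="Hello", content="First post"))
session.add(user)
session.commit()

print(session.query(Post).first().user.name)  # Jane Doe
```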
Integration examples
Using the RESTful API
import requests

# Get all users
response = requests.get("https://api.gibsonai.com/v1/-/users")
users = response.json()

# Create a new user
user_data = {
    "email": "john@example.com",
    "name": "John Doe"
}
response = requests.post("https://api.gibsonai.com/v1/-/users", json=user_data)
Using direct SQL queries
import requests
# Run a raw SQL query against your database
query = "SELECT * FROM users WHERE created_at > '2024-01-01'"
response = requests.post("https://api.gibsonai.com/v1/-/query", json={"query": query})
results = response.json()
Using SQLAlchemy models
from datetime import datetime
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Use connection string from GibsonAI UI
engine = create_engine("your-connection-string-from-ui")
Session = sessionmaker(bind=engine)
session = Session()

# Query using the generated SQLAlchemy models (e.g. the User model above)
users = session.query(User).filter(User.created_at > datetime(2024, 1, 1)).all()
Use cases
RAG schema generation
Generate optimized schemas for RAG applications with vector storage
Unified API layer
Create consistent data access with automatically generated APIs
Schema updates and migrations
Manage schema changes with automatic zero-downtime migrations
What's next?
Try it on GibsonAI!
GibsonAI is an AI-powered developer platform that lets you design, build, and deploy production-grade serverless databases in minutes — using just natural language prompts.