Manage database schema changes alongside feature development using GibsonAI's development and production environments with automatic Python model generation. Deploy schema updates safely and coordinate database changes with application features.
How it works
GibsonAI provides separate development and production environments, allowing you to develop and test schema changes safely before deploying to production. The system automatically generates Pydantic schemas and SQLAlchemy models for seamless Python integration with your feature development workflow.
Environment management
Separate development and production environments for safe schema changes
Python models
Automatic Pydantic schemas and SQLAlchemy models for feature development
Safe deployments
Zero-downtime migrations with automatic validation and rollback
Text-to-SQL
Query your feature data using natural language with Gibson Studio
Key Features
Environment-Based Development
- Development Environment: Safe testing ground for new schema changes
- Production Environment: Zero-downtime deployments to production
- Environment Isolation: Complete separation between development and production
- Schema Synchronization: Automatic promotion from development to production
Python Integration
- Pydantic Schemas: Type-safe models for feature validation
- SQLAlchemy Models: ORM models for database operations
- Automatic Updates: Models updated automatically with schema changes
- Feature-Specific Models: Models tailored for your feature requirements
Text-to-SQL Analysis
- Feature Analysis: Ask questions about your feature data
- Gibson Studio: Run generated SQL queries to analyze feature usage
- Schema Validation: Validate schema changes before deployment
- Performance Monitoring: Monitor feature performance with natural language queries
Step-by-step guide
1. Develop schema changes for your feature
```shell
# Working in the development environment:
# create schema changes for your feature
gibson modify users "Add a preferences column for user settings and a feature_flags column to track enabled features"
```
2. Generate Python models for your feature
```shell
# Generate Pydantic schemas for validation
gibson code schemas

# Generate SQLAlchemy models for database operations
gibson code models

# Generate all Python code
gibson code base
```
3. Test your feature with text-to-SQL
Use Gibson Studio to analyze your feature data:
- "Show me users who have enabled the new feature"
- "What percentage of users have customized their preferences?"
- "Find any users with empty feature_flags"
- "Show feature adoption rates over time"
4. Deploy to production
```shell
# Working in the production environment:
# deploy your validated schema changes
gibson deploy
```
Example feature development workflow
Feature schema changes
```shell
# Add schema for a new notification feature
gibson modify notifications "Create a notifications table with user_id, message, type, read status, and created_at timestamp"
```
Generated database schema
```sql
-- Generated for the notification feature
CREATE TABLE notifications (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    message TEXT NOT NULL,
    type ENUM('info', 'warning', 'success', 'error') DEFAULT 'info',
    read_status BOOLEAN DEFAULT FALSE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
);
```
Generated Pydantic schema
```python
from pydantic import BaseModel
from datetime import datetime
from typing import Optional
from enum import Enum


class NotificationType(str, Enum):
    INFO = "info"
    WARNING = "warning"
    SUCCESS = "success"
    ERROR = "error"


class Notification(BaseModel):
    id: Optional[int] = None
    user_id: int
    message: str
    type: NotificationType = NotificationType.INFO
    read_status: bool = False
    created_at: Optional[datetime] = None
```
Generated SQLAlchemy model
```python
from sqlalchemy import Column, Integer, Text, Boolean, DateTime, ForeignKey, Enum
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from datetime import datetime
import enum

Base = declarative_base()


class NotificationType(enum.Enum):
    INFO = "info"
    WARNING = "warning"
    SUCCESS = "success"
    ERROR = "error"


class Notification(Base):
    __tablename__ = 'notifications'

    id = Column(Integer, primary_key=True, autoincrement=True)
    user_id = Column(Integer, ForeignKey('users.id'), nullable=False)
    message = Column(Text, nullable=False)
    type = Column(Enum(NotificationType), default=NotificationType.INFO)
    read_status = Column(Boolean, default=False)
    created_at = Column(DateTime, default=datetime.utcnow)

    user = relationship("User", back_populates="notifications")
```
Feature integration examples
Using Pydantic for feature validation
```python
from pydantic import ValidationError

# Validate notification data
try:
    notification_data = {
        "user_id": 1,
        "message": "Welcome to our new feature!",
        "type": "success",
    }
    notification = Notification(**notification_data)
    print(f"Valid notification: {notification.message}")
except ValidationError as e:
    print(f"Validation error: {e}")
```
Using SQLAlchemy for feature operations
```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Use the connection string from the GibsonAI UI
engine = create_engine("your-connection-string-from-ui")
Session = sessionmaker(bind=engine)
session = Session()

# Create a notification for the feature rollout
new_notification = Notification(
    user_id=1,
    message="New feature is now available!",
    type=NotificationType.SUCCESS,
)
session.add(new_notification)
session.commit()

# Query unread notifications
unread_notifications = session.query(Notification).filter(
    Notification.read_status == False
).all()
```
Using RESTful APIs for feature integration
```python
import requests

# Create a notification via the API
notification_data = {
    "user_id": 1,
    "message": "Feature successfully enabled!",
    "type": "success",
}
response = requests.post("https://api.gibsonai.com/v1/-/notifications", json=notification_data)
new_notification = response.json()

# Get unread notifications
response = requests.get("https://api.gibsonai.com/v1/-/notifications?read_status=false")
unread = response.json()
```
Feature rollout strategies
Gradual rollout monitoring
Use text-to-SQL to monitor feature adoption:
```sql
-- Generated from: "Show feature adoption rate by day"
SELECT
    DATE(created_at) AS rollout_date,
    COUNT(*) AS notifications_sent,
    COUNT(CASE WHEN read_status = TRUE THEN 1 END) AS read_notifications
FROM notifications
WHERE type = 'success'
  AND message LIKE '%feature%'
GROUP BY DATE(created_at)
ORDER BY rollout_date;
```
Feature performance analysis
```sql
-- Generated from: "Find users most engaged with notifications"
SELECT
    u.id,
    u.username,
    COUNT(n.id) AS total_notifications,
    COUNT(CASE WHEN n.read_status = TRUE THEN 1 END) AS read_notifications,
    ROUND(COUNT(CASE WHEN n.read_status = TRUE THEN 1 END) / COUNT(n.id) * 100, 2) AS read_percentage
FROM users u
LEFT JOIN notifications n ON u.id = n.user_id
GROUP BY u.id, u.username
ORDER BY read_percentage DESC;
```
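The read-percentage metric in the query above is easy to spot-check in application code. A stdlib-only sketch over hypothetical sample rows, mirroring the SQL's `ROUND(read_count / total * 100, 2)`:

```python
from typing import Optional


def read_percentage(notifications: list) -> Optional[float]:
    """Percentage of notifications read, matching the SQL expression
    ROUND(read / total * 100, 2). Returns None when a user has no
    notifications (where the SQL would yield NULL)."""
    total = len(notifications)
    if total == 0:
        return None
    read = sum(1 for n in notifications if n["read_status"])
    return round(read / total * 100, 2)


sample = [
    {"read_status": True},
    {"read_status": True},
    {"read_status": False},
]
print(read_percentage(sample))  # 66.67
```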
Access your feature data
Integration options:
- RESTful APIs: Base URL `https://api.gibsonai.com`
- SQL queries: `/v1/-/query`
- Table operations: `/v1/-/[table-name-in-kebab-case]`
- OpenAPI Spec: Available in your project settings
- Direct Connection: Connection string available in the UI
- API Documentation: Available in the data API section
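Since table endpoints use the table name in kebab-case, a small helper can derive the path from a table or model name. A stdlib-only sketch following the URL pattern listed above (confirm exact paths against your project's OpenAPI spec):

```python
import re

API_BASE = "https://api.gibsonai.com"


def table_endpoint(table_name: str) -> str:
    """Convert a table or model name to its kebab-case REST path,
    e.g. 'FeatureFlags' or 'feature_flags' -> /v1/-/feature-flags."""
    # Insert hyphens at CamelCase boundaries, then normalize underscores
    kebab = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", table_name)
    kebab = kebab.replace("_", "-").lower()
    return f"{API_BASE}/v1/-/{kebab}"


print(table_endpoint("notifications"))  # https://api.gibsonai.com/v1/-/notifications
print(table_endpoint("feature_flags"))  # https://api.gibsonai.com/v1/-/feature-flags
```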
Use cases
Schema versioning
Manage schema evolution across development and production environments
Unified API layer
Create consistent data access with automatically generated APIs
Schema migrations
Manage schema changes with automatic zero-downtime migrations
What's next?
Try it on GibsonAI!
GibsonAI is an AI-powered developer platform that lets you design, build, and deploy production-grade serverless databases in minutes — using just natural language prompts.