Verify real binary exists: ls -la /root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex
Set environment variables: export CODEX_HOME=/opt/codex && export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
Check authentication status: /root/.nvm/versions/node/v24.10.0/bin/codex login status
If not authenticated, run /root/.nvm/versions/node/v24.10.0/bin/codex login (OAuth), or pipe an API key: echo "sk-your-api-key" | /root/.nvm/versions/node/v24.10.0/bin/codex login --with-api-key
Test authentication: /root/.nvm/versions/node/v24.10.0/bin/codex whoami
Start the service: sudo systemctl start codex-wrapper-api3
# Start the service
sudo systemctl start codex-wrapper-api3
# Check status
sudo systemctl status codex-wrapper-api3
# Enable on boot
sudo systemctl enable codex-wrapper-api3
Using Docker (Optional)
Quick Start with Docker Compose
cd /var/www/api3
docker-compose up -d
View Logs
docker-compose logs -f codex-api
Stop Service
docker-compose down
API Usage Workflow
1. List Available Models
GET http://localhost:5203/v1/models
Check which models are configured and available
2. Make a Chat Request
POST http://localhost:5203/v1/chat/completions
Send requests with proper authentication
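For example, using the proxy key configured for this deployment:
# 1. List models
curl -H "Authorization: Bearer eric" http://localhost:5203/v1/models
# 2. Send a chat request
curl -X POST http://localhost:5203/v1/chat/completions \
  -H "Authorization: Bearer eric" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.1-codex", "messages": [{"role": "user", "content": "Hello"}]}'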
Available Models & Reasoning Control
✅ Working Base Models (5 Models)
🔥 Premium Codex Models
gpt-5.1-codex-max - Flagship, exceptional quality
gpt-5.1-codex - Recommended default, balanced
gpt-5.1-codex-mini - Fast, cost-effective
🤖 General Purpose Models
gpt-5.1 - Enhanced with high reasoning
gpt-5 - Previous generation, compatible
⚙️ Dynamic Reasoning Control
🎛️ Use the x_codex Parameter for Reasoning Control
Instead of using separate model variants, control reasoning effort dynamically (see the request sketch after the table):
| Reasoning Level | API Parameter | Speed | Best For |
| --- | --- | --- | --- |
| Low | "reasoning_effort": "low" | Fast (1-2s) | Quick fixes, simple scripts |
| Medium | "reasoning_effort": "medium" | Balanced (2-3s) | Production code, daily tasks |
| High | "reasoning_effort": "high" | Slow (3-5s) | Complex algorithms, research |
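A request sketch assuming the wrapper reads reasoning_effort from an x_codex object in the request body (the exact field placement is not documented here; adjust to the wrapper's actual schema):
# reasoning_effort via x_codex -- field placement is an assumption
curl -X POST http://localhost:5203/v1/chat/completions \
  -H "Authorization: Bearer eric" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.1-codex", "x_codex": {"reasoning_effort": "high"}, "messages": [{"role": "user", "content": "Design a cache eviction policy"}]}'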
⚠️ Currently Unavailable
Extra High reasoning level (returns 400 errors)
Specialized model variants (e.g., gpt-5.1-codex-max-low, etc.)
Separate thinking levels (not supported with current account)
Check rate limit usage: Monitor headers in responses
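A quick way to inspect those headers (the exact header names depend on the wrapper, so match case-insensitively):
curl -si -H "Authorization: Bearer eric" http://localhost:5203/v1/models | grep -i ratelimit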
⚠️ Important Daily Tasks:
Rotate API keys if using key-based authentication
Check for Codex CLI updates: npm update -g @openai/codex (verify binary path doesn't change after update)
Monitor quota usage with your provider
Backup configuration files
Comprehensive Authentication Setup Guide
Overview of Authentication Layers
The Codex API uses a two-layer authentication system:
Proxy API Key: Authenticates clients to this API wrapper service
Codex CLI Authentication: Authenticates the wrapper service to the underlying Codex CLI
Both layers must be properly configured for the system to work.
Part 1: Detailed Codex CLI Authorization Setup
⚠️ Before You Begin:
The systemd service runs as root user (not www-data, not ubuntu)
Ensure you have administrative access to this system
Choose your authentication method (OAuth or API Key)
For production, use API Key authentication
Step 1: Verify Codex CLI Installation
⚠️ VERIFYING THE EXACT INSTALLATION ON THIS SERVER:
# Check if NPM wrapper exists
ls -la /root/.nvm/versions/node/v24.10.0/bin/codex
# Check if real binary exists
ls -la /root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex
# Verify version using NPM wrapper
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
/root/.nvm/versions/node/v24.10.0/bin/codex --version
# Expected output:
# codex version 0.63.0
# Check Node.js version
node --version
# Expected: v24.10.0
Step 2: OAuth Authentication Method (Interactive)
For local development or when you can open a browser:
# Set environment variables
export CODEX_HOME=/opt/codex
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
# Check current authentication status
/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex login status
# If not authenticated, run interactive OAuth login
/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex login
# Expected output:
# Opening browser to: https://chat.openai.com/auth/login
# Waiting for authentication...
# Success! You are now logged in as user@example.com
# Credentials saved to: /opt/codex/auth.json
# Alternative: Use NPM wrapper (recommended)
/root/.nvm/versions/node/v24.10.0/bin/codex login
What happens in detail:
Codex CLI starts a local web server on port 1455
Your browser opens to OpenAI's authentication page
You log in with your ChatGPT credentials
OpenAI redirects back to localhost:1455
The CLI receives and stores the authentication token
Token is saved to /opt/codex/auth.json
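If the login flow hangs, a quick sanity check is to confirm nothing else is already bound to the callback port:
ss -ltnp | grep ':1455' || echo "port 1455 is free"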
Step 3: Remote/Headless Setup
For servers without GUI access:
# Method 1: SSH Port Forwarding
ssh -L 1455:localhost:1455 root@your-server
# Then on the server:
/root/.nvm/versions/node/v24.10.0/bin/codex login
# Method 2: Manual Token Transfer
# On a machine with browser:
/root/.nvm/versions/node/v24.10.0/bin/codex login --export-token > token.txt
# Transfer token.txt to server:
scp token.txt root@server:/tmp/
# On server:
/root/.nvm/versions/node/v24.10.0/bin/codex login --import-token /tmp/token.txt
rm /tmp/token.txt
Common Issues with OAuth:
Port 1455 blocked: Use port forwarding or different port
Browser blocked: Use manual token transfer
Corporate firewall: May need to allow chat.openai.com
Step 4: API Key Authentication Method (Production)
Setup with OpenAI API Key
Best for production environments, automation, and headless servers:
# Set up environment variables
export CODEX_HOME=/opt/codex
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
# Method 1: Direct login with API key (using real binary)
/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex login --with-api-key
# Then paste your API key when prompted, OR pipe it:
echo "sk-your-api-key-here" | /root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex login --with-api-key
# Method 2: Using NPM wrapper (recommended)
/root/.nvm/versions/node/v24.10.0/bin/codex login --api-key "sk-...your-api-key-here"
# Method 3: Environment variable (recommended for containers)
export OPENAI_API_KEY="sk-...your-api-key-here"
/root/.nvm/versions/node/v24.10.0/bin/codex login
# Method 4: Permanent configuration
mkdir -p /opt/codex
cat > /opt/codex/config.toml << EOF
[auth]
api_key = "sk-...your-api-key-here"
EOF
API Key Requirements:
Must have access to the Responses API (not just Chat API)
Should be scoped to your production environment
Create multiple keys for different services if needed
Set appropriate usage limits and quotas
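Before wiring a key into Codex, it can help to confirm it authenticates against the OpenAI API at all (this verifies basic auth, not Responses API access specifically):
curl -s https://api.openai.com/v1/models -H "Authorization: Bearer sk-your-api-key-here" | head -c 300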
Step 5: Verify Codex CLI Authentication
Test authentication immediately after setup:
# Set environment variables
export CODEX_HOME=/opt/codex
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
# Check current authentication status (using real binary)
/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex login status
# Or using NPM wrapper
/root/.nvm/versions/node/v24.10.0/bin/codex whoami
# Expected output for OAuth:
# You are logged in as: user@example.com
# Authentication method: OAuth
# Token expires: 2024-12-31
# Expected output for API key:
# You are logged in with API key
# Key ID: sk-...xxxx
# Authentication method: API Key
# If not authenticated, you'll see:
# Error: Not authenticated. Please run 'codex login' first.
# List available models (tests actual API access)
/root/.nvm/versions/node/v24.10.0/bin/codex models list
# Expected output:
# Available models:
# - gpt-5.1-codex-max
# - gpt-5.1-codex
# - gpt-5.1-codex-mini
# - gpt-5.1
# - gpt-5
# - codex-cli
# - gpt-5.1-codex low
# - gpt-5.1-codex medium
# - gpt-5.1-codex high
# - gpt-5.1-codex extra high
# (All 10 models are available)
If verification fails:
Error: Not authenticated - Run codex login again
Error: Invalid API key - Check key format and permissions
Error: No models available - Check account limits
Permission denied - Check file permissions on /opt/codex/
🔧 Part 2: Service Integration and Configuration
Step 6: Configure File Permissions and Ownership
Ensure the root user can access authentication files:
# Check current permissions
ls -la /opt/codex/
# Set proper ownership (already root:root)
sudo chown -R root:root /opt/codex/
# Set secure permissions
sudo chmod 700 /opt/codex/
sudo chmod 600 /opt/codex/auth.json
sudo chmod 600 /opt/codex/config.toml
# Verify
sudo -u root ls -la /opt/codex/
Permission Matrix:
/opt/codex/ - 700 (rwx------) - Owner only
auth.json - 600 (rw-------) - Owner read/write
config.toml - 600 (rw-------) - Owner read/write
NEVER use 777 or world-readable permissions
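A one-line audit of the matrix above (stat -c is GNU coreutils):
stat -c '%a %U:%G %n' /opt/codex /opt/codex/auth.json /opt/codex/config.toml
# Expected: 700 root:root for the directory, 600 root:root for both files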
Step 7: Configure Environment Variables
Update your .env file with required variables:
# API Wrapper Configuration
# This is the key your API clients will use
PROXY_API_KEY=eric
# Codex CLI Configuration - EXACT PATHS FOR THIS SERVER
CODEX_HOME=/opt/codex
CODEX_WORKDIR=/workspace/codex-api3
# EXACT PATHS - DO NOT CHANGE
CODEX_PATH=/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex
CODEX_BINARY_PATH=/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex
# Path to Codex CLI NPM wrapper
PATH=/root/.nvm/versions/node/v24.10.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NODE_PATH=/root/.nvm/versions/node/v24.10.0/lib/node_modules
# API Configuration
PORT=5203
HOST=0.0.0.0
# Rate Limiting
RATE_LIMIT_PER_MINUTE=120
# Optional: Default model
DEFAULT_MODEL=gpt-5.1-codex
# Optional: Maximum parallel requests
CODEX_MAX_PARALLEL_REQUESTS=4
# Optional: Request timeout in seconds
CODEX_TIMEOUT=120
# Security Settings
CODEX_SANDBOX_MODE=workspace-write
CODEX_ALLOW_DANGER_FULL_ACCESS=0
Important Notes:
PROXY_API_KEY is NOT your OpenAI API key
The exact API key for this deployment is: eric
Store .env securely and never commit to version control
Use different keys for dev/staging/prod environments
Step 8: Test the Complete Integration
Perform end-to-end testing:
# 1. Start the API service
cd /var/www/api3
sudo systemctl start codex-wrapper-api3
# 2. Check service status
sudo systemctl status codex-wrapper-api3
# 3. Test health endpoint
curl http://localhost:5203/health
# 4. Test with invalid proxy key (should fail)
curl -X POST http://localhost:5203/v1/chat/completions \
-H "Authorization: Bearer invalid-key" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-5.1-codex", "messages": [{"role": "user", "content": "test"}]}'
# Expected: {"detail": "Invalid proxy API key"}
# 5. Test with valid proxy key
curl -X POST http://localhost:5203/v1/chat/completions \
-H "Authorization: Bearer eric" \
-H "Content-Type: application/json" \
-d '{"model": "gpt-5.1-codex", "messages": [{"role": "user", "content": "Hello"}]}'
# Expected: Successful response with message content
Step 9: Secure Sensitive Files
# Set file permissions
sudo chmod 600 /opt/codex/auth.json
sudo chmod 600 /opt/codex/config.toml
sudo chmod 600 /var/www/api3/.env
# Add to .gitignore
echo -e ".env\n/opt/codex/\n*.key\n*.pem" >> /var/www/api3/.gitignore
# Use system keyring (alternative)
# Store sensitive data in system keychain/keyring
# Reference via environment variables at runtime
Rotation Procedures
Regular rotation schedule:
Proxy API Key: Rotate every 90 days
OpenAI API Key: Rotate every 180 days
OAuth Tokens: Auto-rotate by OpenAI (~30 days)
Rotation process:
# Rotate proxy key
# 1. Generate new key
NEW_KEY=$(openssl rand -hex 32)
# 2. Update .env file
sudo sed -i "s/PROXY_API_KEY=.*/PROXY_API_KEY=$NEW_KEY/" /var/www/api3/.env
# 3. Restart service
sudo systemctl restart codex-wrapper-api3
# 4. Update all client applications with new key
# Rotate OpenAI API key
# 1. Generate new key in OpenAI dashboard
# 2. Re-login with new key
sudo su - root
cd /var/www/api3
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
/root/.nvm/versions/node/v24.10.0/bin/codex login --api-key "sk-new-key-here"
# 3. Verify
/root/.nvm/versions/node/v24.10.0/bin/codex whoami
/root/.nvm/versions/node/v24.10.0/bin/codex models list
Troubleshooting
Error: codex command not found
Fix:
# Check npm global path
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
# Reinstall if needed
npm install -g @openai/codex
Error: Not authenticated
Cause: no valid auth token
Fix:
sudo su - root
cd /var/www/api3
/root/.nvm/versions/node/v24.10.0/bin/codex login
# For API key:
/root/.nvm/versions/node/v24.10.0/bin/codex login --api-key "sk-..."
# Check if ports are blocked
# For OAuth port
nc -zv localhost 1455
# For API port
nc -zv localhost 5203
# For API access
nc -zv api.openai.com 443
# If using corporate proxy
export https_proxy=http://proxy.company.com:8080
export http_proxy=http://proxy.company.com:8080
# Check DNS
nslookup api.openai.com
dig api.openai.com
# Test connectivity
ping -c 3 api.openai.com
curl -I https://api.openai.com
✅ Verification Complete Checklist
Codex NPM wrapper exists at /root/.nvm/versions/node/v24.10.0/bin/codex
Codex real binary exists at /root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex
Codex version 0.63.0 confirmed with Node.js v24.10.0
Authentication configured (OAuth or API Key) for root user
/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex login status shows authenticated
codex whoami shows successful authentication
codex models list shows available models
CODEX_PATH and CODEX_BINARY_PATH correctly set in /var/www/api3/.env
File permissions set correctly on /opt/codex/ (700)
PROXY_API_KEY configured in /var/www/api3/.env
API service starts on port 5203 without errors
Health endpoint responds correctly at http://localhost:5203/health
Chat completions work with proxy key
Service logs show no authentication errors
⚠️ Important Security Reminders
Never share your PROXY_API_KEY with external users
Keep OpenAI API keys confidential and rotate regularly
Monitor API usage for unauthorized access
Use different credentials for different environments
Enable audit logging for production deployments
Consider using a secrets management system for production
Common Administrative Tasks
Rate Limiting Configuration
Setting Rate Limits
Add to /var/www/api3/.env:
# Requests per minute per client (configured for this server)
RATE_LIMIT_PER_MINUTE=120
# Burst capacity (optional)
RATE_LIMIT_BURST=10
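To confirm the limit is enforced, one rough check is to exceed it and tally the status codes (this assumes the wrapper answers HTTP 429 once the limit is hit, which is conventional but not confirmed here):
for i in $(seq 1 130); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer eric" http://localhost:5203/v1/models
done | sort | uniq -c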
Checking Available Models
# Via API
curl -H "Authorization: Bearer eric" \
http://localhost:5203/v1/models
# Via Codex CLI (using NPM wrapper)
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
/root/.nvm/versions/node/v24.10.0/bin/codex models list
# Or using real binary directly
/root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex models list
Configuring Model Providers
Edit /opt/codex/config.toml:
[model_providers.openai]
type = "openai"
api_key = "${OPENAI_API_KEY}"
base_url = "https://api.openai.com/v1"
# For local models
[model_providers.local]
type = "openai"
base_url = "http://localhost:8080/v1"
Reasoning Effort Control
Four reasoning effort levels available:
low - Fast responses, minimal overhead
medium - Balanced reasoning (default)
high - Deep analysis for complex tasks
extra high - Maximum reasoning for the most challenging tasks (currently returns 400 errors on this account; see Currently Unavailable above)
Method 1: Environment Variable
# Add to /var/www/api3/.env
CODEX_REASONING_EFFORT=high # Options: low, medium, high, extra high
Method 2: Model Variants with Built-in Reasoning Levels
# Choose a model with built-in reasoning level:
"model": "gpt-5.1-codex low" # Fast
"model": "gpt-5.1-codex medium" # Balanced
"model": "gpt-5.1-codex high" # Deep
"model": "gpt-5.1-codex extra high" # Maximum
Sandbox Configuration
Available Modes
read-only - No file access
workspace-write - Write to workspace only
danger-full-access - Full system access
Configuration
# In /var/www/api3/.env
CODEX_SANDBOX_MODE=workspace-write
# Allow danger mode (use with caution)
CODEX_ALLOW_DANGER_FULL_ACCESS=1
Performance Tuning
Parallel Request Handling (Latest Updates)
✅ Session 5 Enhancements Applied (2025-12-02)
Connection pooling and timeout issues have been resolved with the following improvements:
# Current Production Configuration (/var/www/api3/.env)
# Maximum parallel Codex processes - INCREASED
CODEX_MAX_PARALLEL_REQUESTS=5 # Updated: Default was 2, now 5 (+150% capacity)
# Request timeout (seconds) - OPTIMIZED
CODEX_TIMEOUT=120 # Stable timeout with proper cleanup
# Service environment paths - FIXED
PATH=/usr/bin:/usr/local/bin:/home/eric/.nvm/versions/node/v25.2.1/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin
CODEX_PATH=/home/eric/.nvm/versions/node/v25.2.1/bin/codex
CODEX_HOME=/opt/codex
Secrets Management
# Use a secrets manager (example with HashiCorp Vault)
export PROXY_API_KEY=$(vault kv get -field=api-key secret/codex/prod)
# Or use Docker secrets (in docker-compose.yml)
services:
  codex-api:
    environment:
      # This sets the variable to the secret's mount path;
      # the service must read the actual key from that file
      PROXY_API_KEY: /run/secrets/proxy_api_key
    secrets:
      - proxy_api_key
secrets:
  proxy_api_key:
    file: ./secrets/proxy_api_key.txt  # example path
Network Security
Firewall Configuration
Restrict access to the API endpoint on port 5203:
# UFW example
sudo ufw allow from 10.0.0.0/8 to any port 5203
sudo ufw deny 5203
# iptables example
sudo iptables -A INPUT -p tcp --dport 5203 -s 10.0.0.0/8 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 5203 -j DROP
Sandbox Hardening
# Recommended for production
CODEX_SANDBOX_MODE=read-only
CODEX_ALLOW_DANGER_FULL_ACCESS=0
CODEX_LOCAL_ONLY=1
Docker Security
# In the Dockerfile: run as root (as configured for this deployment)
USER root
# docker run hardening flags:
# Read-only filesystem
--read-only --tmpfs /tmp
# Drop all capabilities
--cap-drop ALL
Access Control
IP Whitelisting
# Middleware example for IP whitelisting
import ipaddress

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
ALLOWED_IPS = ["10.0.0.0/8", "192.168.0.0/16"]

@app.middleware("http")
async def ip_whitelist(request: Request, call_next):
    # Reject clients whose source address is outside the allowed networks
    client_ip = ipaddress.ip_address(request.client.host)
    if not any(client_ip in ipaddress.ip_network(allowed) for allowed in ALLOWED_IPS):
        return JSONResponse(status_code=403, content={"detail": "IP not allowed"})
    return await call_next(request)
Monitoring and Auditing
Security Monitoring Checklist:
Log all API requests with timestamps and IP addresses
Monitor for unusual request patterns
Set up alerts for authentication failures
Regular security audits of access logs
Monitor rate limit violations
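One lightweight way to cover the authentication-failure item above is to tail the service journal (the grep pattern is a guess at the relevant log lines; adjust to the app's actual wording):
sudo journalctl -u codex-wrapper-api3 -f | grep -iE '401|403|auth'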
Monitoring the Service
Health Check Endpoints
Basic Health Check
GET http://localhost:5203/health
curl http://localhost:5203/health
Returns: {"status": "healthy"}
Detailed Health Check
GET http://localhost:5203/health/detailed
curl http://localhost:5203/health/detailed
Returns service status, Codex CLI status, and system info
Prometheus Metrics (Optional)
# Add to requirements.txt
prometheus-client
# Metrics endpoint
GET http://localhost:5203/metrics
# Example metrics
codex_api_requests_total{method="POST",status="200"}
codex_api_request_duration_seconds_bucket{le="1.0"}
codex_api_active_connections
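Once the endpoint is exposed, a quick scrape confirms the counters above are being emitted:
curl -s http://localhost:5203/metrics | grep '^codex_api'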
Monitoring Dashboard
Grafana Dashboard Components:
API request rate and response times
Error rate breakdown
Codex CLI execution metrics
System resource usage
Rate limit utilization (120/min limit)
Active model distribution
Alerting Setup
Configure alerts for:
Service downtime (health check failures)
High error rate (>5%)
Elevated response times (P99 > 10s)
Codex CLI authentication failures
Rate limit exhaustion (approaching 120/min)
Disk space > 80% used
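A minimal cron-able probe for the downtime item (curl -f makes any non-2xx response fail the command; logger writes to syslog):
curl -sf http://localhost:5203/health > /dev/null \
  || echo "codex-api health check failed at $(date)" | logger -t codex-alert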
🔧 Maintenance Tasks
Service Management
Restarting the Service
# Docker Compose
docker-compose restart codex-api
# Systemd service (configured for this server)
sudo systemctl restart codex-wrapper-api3
# Check status
sudo systemctl status codex-wrapper-api3
# Direct process
pkill -f "uvicorn app.main:app"
cd /var/www/api3
nohup python -m uvicorn app.main:app --host 0.0.0.0 --port 5203 > /var/log/codex-api3.log 2>&1 &
Zero-Downtime Deployment
# Using blue-green deployment
docker-compose up -d --scale codex-api=2
# Wait for new container to be healthy
docker-compose up -d --scale codex-api=1 --no-deps codex-api
Updating Components
Update Codex CLI
# Check current version (NPM wrapper)
/root/.nvm/versions/node/v24.10.0/bin/codex --version
# Current expected version: 0.63.0
# Update to latest
npm update -g @openai/codex
# Verify update didn't change binary path
ls -la /root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex
# If binary path changed after update, update .env file
# Edit /var/www/api3/.env and update CODEX_PATH and CODEX_BINARY_PATH
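A scripted version of that .env fix-up, mirroring the sed pattern used for key rotation above (the glob assumes the standard nvm layout; verify the resolved path before restarting):
NEW_BIN=$(ls /root/.nvm/versions/node/*/lib/node_modules/@openai/codex/vendor/*/codex/codex | head -n 1)
echo "Using: $NEW_BIN"
sudo sed -i "s|^CODEX_PATH=.*|CODEX_PATH=$NEW_BIN|; s|^CODEX_BINARY_PATH=.*|CODEX_BINARY_PATH=$NEW_BIN|" /var/www/api3/.env
sudo systemctl restart codex-wrapper-api3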
Update Python Dependencies
cd /var/www/api3
# Update all packages
pip install -r requirements.txt --upgrade
# Check for outdated packages
pip list --outdated
"codex: command not found"
Fix:
# Add NPM wrapper to PATH
export PATH="/root/.nvm/versions/node/v24.10.0/bin:$PATH"
# Verify wrapper exists
ls -la /root/.nvm/versions/node/v24.10.0/bin/codex
# Verify real binary exists
ls -la /root/.nvm/versions/node/v24.10.0/lib/node_modules/@openai/codex/vendor/x86_64-unknown-linux-musl/codex/codex
# Use full path to NPM wrapper
/root/.nvm/versions/node/v24.10.0/bin/codex --version
"Model not found"
Fix:
# List available models
curl http://localhost:5203/v1/models
# Check config.toml
sudo cat /opt/codex/config.toml
Frequently Asked Questions
Q: What's the difference between CODEX_LOCAL_ONLY and regular mode?
A: When CODEX_LOCAL_ONLY=1, the API will only accept requests for local model providers. This prevents the API from making requests to remote APIs like OpenAI, keeping all processing local. Set to 0 to allow remote providers.
Q: How can I increase the request timeout?
A: Set the CODEX_TIMEOUT environment variable in /var/www/api3/.env:
# In /var/www/api3/.env file
CODEX_TIMEOUT=300 # 5 minutes
Q: Can I use custom system prompts or instructions?
A: Yes! Create an AGENTS.md file in your workspace directory:
# Create AGENTS.md in workspace
mkdir -p /workspace/codex-api3
echo "You are a Python programming expert..." > /workspace/codex-api3/AGENTS.md
Codex will automatically merge these instructions with each request.
Q: How do I handle rate limiting?
A: Configure rate limiting in your /var/www/api3/.env file:
# 120 requests per minute per client (configured for this server)
RATE_LIMIT_PER_MINUTE=120
# Disable rate limiting (0 = unlimited)
RATE_LIMIT_PER_MINUTE=0
The API returns rate limit headers in each response to monitor usage.
Q: What's the maximum number of parallel requests?
A: Configure with CODEX_MAX_PARALLEL_REQUESTS in /var/www/api3/.env:
# Default is 2, increase for better throughput
CODEX_MAX_PARALLEL_REQUESTS=4
# Set to 1 for serial processing
CODEX_MAX_PARALLEL_REQUESTS=1
Q: How do I enable debug logging?
A: Set the log level in /var/www/api3/.env:
# In /var/www/api3/.env
LOG_LEVEL=DEBUG
# Or as environment variable
export LOG_LEVEL=DEBUG
Q: Can I run this behind a reverse proxy?
A: Yes! Here's a sample nginx configuration for port 5203:
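# Minimal sketch: server_name is a placeholder and TLS termination is omitted
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:5203;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Streaming/SSE friendliness: no buffering, long read timeout
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}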
Asynchronous Job Execution
For long-running operations that may exceed timeout limits, the API now supports asynchronous job execution. Jobs run in the background while you poll for results or stream progress in real time.
Benefits
No Timeouts: Jobs run in background, HTTP connection closes immediately
Resume on Disconnect: Results retrievable even if connection drops
Progress Monitoring: Stream real-time progress via Server-Sent Events
Better Error Handling: Failed jobs tracked with detailed error messages
Configuration
Add these variables to your .env file:
# Job storage backend: memory, sqlite, or auto
JOB_STORAGE_TYPE=sqlite
# Job lifetime in seconds (default: 1 hour)
JOB_DEFAULT_TTL=3600
# SQLite database path
JOB_SQLITE_PATH=/var/www/api3/data/codex_jobs.db
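The job endpoints themselves are not documented in this section, so the following is a purely hypothetical sketch of the submit-then-poll pattern (the async flag, job_id field, and /v1/jobs path are placeholders, not confirmed API; requires jq):
# Hypothetical flow -- "async", "job_id", and /v1/jobs/<id> are placeholders
JOB_ID=$(curl -s -X POST http://localhost:5203/v1/chat/completions \
  -H "Authorization: Bearer eric" -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.1-codex", "async": true, "messages": [{"role": "user", "content": "Long task"}]}' \
  | jq -r '.job_id')
# Poll for the result once the job completes
curl -s -H "Authorization: Bearer eric" "http://localhost:5203/v1/jobs/$JOB_ID"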