OpenAI SDK Compatibility
Use the OpenAI SDK with AI Magicx as a drop-in replacement
AI Magicx's chat completions endpoint is fully compatible with the OpenAI SDK, making it easy to migrate existing applications or use familiar tooling.
The OpenAI SDK works with AI Magicx's chat completions endpoint. For other features (images, video, audio, music), use our REST API directly.
Quick Start
Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk_your_aimagicx_api_key",
    base_url="https://www.aimagicx.com/api/v1"
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)
```

TypeScript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "sk_your_aimagicx_api_key",
  baseURL: "https://www.aimagicx.com/api/v1",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);
```

cURL

```bash
curl https://www.aimagicx.com/api/v1/chat/completions \
  -H "Authorization: Bearer sk_your_aimagicx_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Migrating from OpenAI
If you're currently using the OpenAI API, migration requires only three small changes: your API key, the base URL, and the provider-prefixed model name.

Before (OpenAI)

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-openai-key"  # OpenAI key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # OpenAI model name
    messages=[{"role": "user", "content": "Hello!"}]
)
```

After (AI Magicx)

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk_your_aimagicx_key",             # 1. Change to AI Magicx key
    base_url="https://www.aimagicx.com/api/v1"  # 2. Add base URL
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # 3. Prefix with provider
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Supported Features
| Feature | Supported | Notes |
|---|---|---|
| Chat completions | Yes | Full support |
| Streaming | Yes | SSE streaming |
| System messages | Yes | Full support |
| Multi-turn conversations | Yes | Full support |
| Temperature/max_tokens | Yes | Standard parameters |
| Vision (image analysis) | Yes | Via images array |
| Function calling | Partial | Model-dependent |
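Several of the supported features above combine in a single request: a system message, a multi-turn history, and the standard temperature/max_tokens parameters. A minimal sketch of such a request body (the conversation content here is purely illustrative) that you would pass to `client.chat.completions.create`:

```python
# Request body combining a system message, a multi-turn history, and the
# standard OpenAI sampling parameters. Pass it with:
#   response = client.chat.completions.create(**request)
request = {
    "model": "openai/gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        # Multi-turn: the full history is resent with every request.
        {"role": "user", "content": "Name a sorting algorithm."},
        {"role": "assistant", "content": "Quicksort."},
        {"role": "user", "content": "What is its average-case complexity?"},
    ],
    "temperature": 0.2,  # lower = more deterministic
    "max_tokens": 100,   # caps the length of the reply
}
```

Because the endpoint is OpenAI-compatible, this body is identical to what you would send to OpenAI directly, apart from the provider-prefixed model name.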
Model Name Mapping
AI Magicx uses provider-prefixed model names:
| OpenAI Model | AI Magicx Model |
|---|---|
| gpt-4o | openai/gpt-4o |
| gpt-4o-mini | openai/gpt-4o-mini |
| gpt-4-turbo | openai/gpt-4-turbo |
| gpt-3.5-turbo | openai/gpt-3.5-turbo |
You can also use models from other providers:
| Provider | Example Model |
|---|---|
| Anthropic | anthropic/claude-3.5-sonnet |
| Google | google/gemini-pro |
| Meta | meta-llama/llama-3-70b |
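Because only the model string changes between providers, switching is a one-line edit. A small sketch (model names taken from the tables above; which models your account can access may vary):

```python
# Provider-prefixed model names as AI Magicx expects them.
MODELS = {
    "openai": "openai/gpt-4o-mini",
    "anthropic": "anthropic/claude-3.5-sonnet",
    "google": "google/gemini-pro",
    "meta": "meta-llama/llama-3-70b",
}

def pick_model(provider: str) -> str:
    """Return the provider-prefixed model name for a given provider."""
    try:
        return MODELS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")

# Everything else in the request stays the same:
#   client.chat.completions.create(model=pick_model("anthropic"), messages=...)
```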
Streaming Example
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk_your_aimagicx_key",
    base_url="https://www.aimagicx.com/api/v1"
)

stream = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about coding"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

Environment Variables
For cleaner code, use environment variables:
```bash
export OPENAI_API_KEY="sk_your_aimagicx_key"
export OPENAI_BASE_URL="https://www.aimagicx.com/api/v1"
```

Then your code doesn't need explicit configuration:
```python
from openai import OpenAI

client = OpenAI()  # Uses environment variables

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Limitations
The following OpenAI SDK features are not supported through AI Magicx:
- client.images.generate() - Use our REST API for images
- client.audio.speech.create() - Use our REST API for TTS
- client.audio.transcriptions.create() - Use our REST API for STT
- Assistants API
- Embeddings API
- Fine-tuning API
For features not covered by OpenAI SDK compatibility, use our REST API directly.