To install this Skill, type the following into an AI conversation:

Please install the aitoll-chat-completions Skill for me, downloading it with curl from: …/skills/aitoll-chat-completions/download

AITOLL is a unified OpenAI-compatible gateway that routes requests to multiple LLMs
(DeepSeek, GPT-5.2, Claude, Gemini) and image models through a single endpoint.
---
When this skill is invoked, follow these steps:
1. Identify the task type:
- Text chat / Q&A → use a text model
- Image generation (text → image) → use gemini-3-pro-image-preview
- Image editing (image → modified image) → use gemini-3-pro-image-preview
- Image understanding (image → text description) → use a multimodal model
2. Identify the programming language the user wants (Python, JS, cURL, etc.).
Default to Python with requests if unspecified.
3. Check for API key: Remind the user to set AITOLL_API_KEY as an environment
variable if they haven't mentioned it.
4. Select a model using the decision tree below, then generate complete runnable code.
5. Explain the model choice and any important caveats (content format, streaming, etc.).
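The steps above can be sketched as a minimal Python helper. The base URL below is an assumption (the gateway is OpenAI-compatible, so a `/v1/chat/completions` path is likely, but confirm it against references/api-reference.md); everything else follows the rules in this skill.

```python
import os

import requests

# Assumed base URL -- verify against references/api-reference.md before use.
BASE_URL = "https://aitoll.net/v1"


def chat(prompt, model="deepseek-chat", api_key=None):
    """Send one chat turn through the AITOLL gateway (OpenAI-compatible schema)."""
    key = api_key or os.environ.get("AITOLL_API_KEY")
    if not key:
        # Step 3: the key must come from the environment, never be hard-coded.
        raise RuntimeError('Set AITOLL_API_KEY first: export AITOLL_API_KEY="your-key"')
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {key}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Hello!"))
```

The default model here is `deepseek-chat` (cost-sensitive text chat per the decision tree below); pass a different `model` for other task types.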
---
Use this decision tree to pick the right model:
| Use case | Preferred model | Alternative |
|---|---|---|
| Text chat, cost-sensitive | deepseek-chat | claude-haiku-4.5 |
| Text chat, balanced | glm-4.7 | glm-5 |
| Text chat, quality-first | gpt-5.2 | claude-sonnet-4.5 |
| Code generation | claude-haiku-4.5 | claude-sonnet-4.5 |
| Multimodal understanding | gemini-3-flash-preview | gemini-3-pro-preview |
| Image generation / editing | gemini-3-pro-image-preview | — |
See [references/models.md](https://aitoll.net/skills/aitoll-chat-completions/references/models.md) for the full model table with
streaming and multimodal capability notes.
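The decision tree can be encoded as a small lookup helper. This mapping is only a convenience sketch mirroring the table above; it is not part of the AITOLL API, and the alternatives column is omitted for brevity.

```python
# Preferred models keyed by (use_case, priority), mirroring the decision tree.
PREFERRED = {
    ("text", "cost"): "deepseek-chat",
    ("text", "balanced"): "glm-4.7",
    ("text", "quality"): "gpt-5.2",
    ("code", None): "claude-haiku-4.5",
    ("multimodal", None): "gemini-3-flash-preview",
    ("image", None): "gemini-3-pro-image-preview",
}


def pick_model(use_case, priority=None):
    """Return the preferred model for a use case; raises KeyError if unknown."""
    return PREFERRED[(use_case, priority)]
```

For example, `pick_model("text", "cost")` returns `deepseek-chat`, and `pick_model("image")` returns `gemini-3-pro-image-preview`.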
---
Always follow these rules when writing AITOLL integration code:
- Read the API key from os.environ.get("AITOLL_API_KEY"); never hard-code it.
- Check for errors: response.raise_for_status() (Python) or check the HTTP status (other languages).
- Text models return content as a string. Image models (gemini-3-pro-image-preview) return content as an array of objects
with type: "image_url". Write code that handles both.
- Use multimodal content arrays (type: "text" / type: "image_url") for image generation, editing, or understanding requests.
- For streaming, set "stream": true in the request body and parse SSE chunks (each line starts with data: ; the stream ends with data: [DONE]).
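The content-handling and streaming rules above can be sketched as two small helpers. The response shapes assumed here (string vs. array-of-parts content, `data: `-prefixed SSE lines) come straight from the rules; confirm exact field names against references/api-reference.md.

```python
import json


def split_content(content):
    """Normalize a choices[0].message.content value.

    Text models return a plain string; image models return a list of
    typed parts. Returns (texts, image_urls) for both shapes.
    """
    if isinstance(content, str):
        return [content], []
    texts, images = [], []
    for part in content:
        if part.get("type") == "text":
            texts.append(part.get("text", ""))
        elif part.get("type") == "image_url":
            images.append(part["image_url"]["url"])
    return texts, images


def parse_sse_line(line):
    """Parse one line of a streaming response.

    Returns the decoded chunk dict, the sentinel string "DONE",
    or None for non-data lines (comments, blank keep-alives).
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload == "[DONE]":
        return "DONE"
    return json.loads(payload)
```

In a streaming loop you would feed each decoded line of `response.iter_lines()` through `parse_sse_line` and stop on `"DONE"`.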
---
When responding to the user:
1. Show complete, runnable code — no placeholders except AITOLL_API_KEY.
2. Include a brief explanation of which model was chosen and why.
3. Add a reminder: set export AITOLL_API_KEY="your-key" before running.
4. If the response may contain an image (base64), show how to save or display it.
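For point 4, saving a returned image might look like the sketch below. It assumes the image arrives as a base64 data URL inside an image_url part (a common shape for OpenAI-compatible image responses; verify against references/api-reference.md).

```python
import base64


def save_data_url_image(data_url, path="output.png"):
    """Save an image delivered as a base64 data URL to disk.

    Splits off the "data:image/png;base64," header and decodes the rest.
    """
    _header, _, b64 = data_url.partition(",")
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64))
    return path
```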
---
Consult these files for detailed specs:
- [references/models.md](https://aitoll.net/skills/aitoll-chat-completions/references/models.md) — Full model table, streaming support, multimodal capabilities, and selection guidance.
- [references/api-reference.md](https://aitoll.net/skills/aitoll-chat-completions/references/api-reference.md) — Base URL, auth headers, request/response schema, content formats, error codes, and streaming details.
- [references/code-examples.md](https://aitoll.net/skills/aitoll-chat-completions/references/code-examples.md) — Complete working examples in Python and cURL for all task types.