Models API
The Models API returns the list of AI models available on the ClawHQ platform. Use this endpoint to discover which models you can assign to your agents and to check their specifications.
Endpoint
GET /api/v1/models
Authentication
Requires a valid API key passed as a Bearer token. See the Authentication docs for details.
Authorization: Bearer $CLAWHQ_API_KEY
Request
This endpoint accepts no query parameters. It returns all models available to your account based on your current plan.
Response
A successful request returns a JSON object containing an array of model objects:
{
  "models": [
    {
      "id": "k2.5-standard",
      "name": "K2.5 Standard",
      "context_window": 128000,
      "description": "Most capable model with vision and function calling support"
    },
    {
      "id": "m2.5-mini",
      "name": "M2.5 Mini",
      "context_window": 128000,
      "description": "Smaller, faster, and more cost-effective version"
    },
    {
      "id": "k2.5-reasoning",
      "name": "K2.5 Reasoning",
      "context_window": 200000,
      "description": "Balanced model with strong reasoning and large context"
    },
    {
      "id": "m2.5-fast",
      "name": "M2.5 Fast",
      "context_window": 200000,
      "description": "Fast and efficient model optimized for high-throughput tasks"
    },
    {
      "id": "x3-flash",
      "name": "X3 Flash",
      "context_window": 1000000,
      "description": "High-speed model with the largest available context window"
    },
    {
      "id": "r1-reasoner",
      "name": "R1 Reasoner",
      "context_window": 64000,
      "description": "Open-weight reasoning model with strong analytical capabilities"
    }
  ]
}
Response Fields
| Field | Type | Description |
|---|---|---|
| id | string | Unique model identifier used when configuring agents |
| name | string | Human-readable display name |
| context_window | number | Maximum number of tokens the model can process in a single request (input + output) |
| description | string | Brief summary of the model's strengths and use cases |
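For stricter handling than raw dictionaries, the fields above map naturally onto a typed record. A minimal sketch using a Python dataclass, assuming the response shape shown earlier (the `Model` class name is ours, not part of the API):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Model:
    """Typed view of one entry in the "models" array."""
    id: str
    name: str
    context_window: int
    description: str


# A payload shaped like the sample response above.
payload = {
    "models": [
        {
            "id": "m2.5-mini",
            "name": "M2.5 Mini",
            "context_window": 128000,
            "description": "Smaller, faster, and more cost-effective version",
        },
    ]
}

models = [Model(**m) for m in payload["models"]]
print(models[0].id)  # → m2.5-mini
```

Because the dataclass is frozen, model records can safely be shared across threads or used as dictionary keys.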
Context Window
The context_window value represents the maximum number of tokens a model can handle in a single interaction. This includes both the input (your message, conversation history, knowledge base context) and the generated output.
When choosing a model for your agent, consider the following:
- Short conversations — Models with smaller context windows (64K) work well and tend to respond faster
- Long conversations with history — Larger context windows (128K+) allow more conversation turns to be retained
- Knowledge-heavy agents — Models with large context windows (200K+) can incorporate more knowledge base content per response
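The sizing guidance above can be automated: estimate a token budget, then pick the smallest model whose context window fits. A sketch, assuming the response shape shown earlier; the 4-characters-per-token heuristic is a rough approximation, not an official tokenizer:

```python
def pick_model(models, prompt_chars, output_budget=2048):
    """Return the model with the smallest context window that still fits
    an estimated token budget (input + reserved output)."""
    # Rough heuristic: ~4 characters per token.
    estimated_tokens = prompt_chars // 4 + output_budget
    candidates = [m for m in models if m["context_window"] >= estimated_tokens]
    if not candidates:
        raise ValueError(f"No model fits ~{estimated_tokens:,} tokens")
    return min(candidates, key=lambda m: m["context_window"])


# Trimmed sample data matching the response above.
models = [
    {"id": "r1-reasoner", "context_window": 64000},
    {"id": "k2.5-standard", "context_window": 128000},
    {"id": "x3-flash", "context_window": 1000000},
]

# ~127,048 estimated tokens: too big for 64K, fits in 128K.
print(pick_model(models, prompt_chars=500_000)["id"])  # → k2.5-standard
```

Choosing the smallest fitting window leaves larger (and typically slower or costlier) models for the requests that actually need them.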
Tip: Model availability may vary based on your plan and region. The Models API always returns the current list of models available to your account, so check this endpoint before configuring a new agent to see the latest options.
Code Examples
cURL
curl https://app.clawhq.tech/api/v1/models \
  -H "Authorization: Bearer $CLAWHQ_API_KEY"
Python
import os
import requests

API_KEY = os.environ["CLAWHQ_API_KEY"]

response = requests.get(
    "https://app.clawhq.tech/api/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
models = response.json()["models"]

for model in models:
    ctx = f"{model['context_window']:,}"
    print(f"{model['name']} ({model['id']}) - {ctx} tokens")
    print(f"  {model['description']}")
JavaScript
const API_KEY = process.env.CLAWHQ_API_KEY;

const response = await fetch("https://app.clawhq.tech/api/v1/models", {
  headers: {
    "Authorization": `Bearer ${API_KEY}`,
  },
});
const { models } = await response.json();

models.forEach((model) => {
  const ctx = model.context_window.toLocaleString();
  console.log(`${model.name} (${model.id}) - ${ctx} tokens`);
  console.log(`  ${model.description}`);
});
Choosing the Right Model
Each model has different strengths. Here are some guidelines for selecting the right model for your use case:
- Customer support — K2.5 Standard or K2.5 Reasoning for nuanced, empathetic responses
- High-volume automation — M2.5 Mini or M2.5 Fast for fast, cost-effective processing
- Complex reasoning — R1 Reasoner for analytical and logic-intensive tasks
- Large document analysis — X3 Flash for its million-token context window
Tip: You can assign both a primary and fallback model to each agent via the dashboard. If the primary model is temporarily unavailable, the agent automatically falls back to the secondary model with no downtime.
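The guidelines above can be encoded as a simple lookup with a client-side preference order. A sketch only: the use-case keys and the ranking are our illustration, not an official recommendation table, and the model ids come from the sample response:

```python
# Illustrative mapping of use cases to (preferred, alternate) model ids.
RECOMMENDED = {
    "customer_support": ("k2.5-standard", "k2.5-reasoning"),
    "high_volume": ("m2.5-fast", "m2.5-mini"),
    "complex_reasoning": ("r1-reasoner", "k2.5-reasoning"),
    "large_documents": ("x3-flash", "k2.5-reasoning"),
}


def choose(use_case, available_ids):
    """Return the first recommended model id that your plan offers,
    or None if no recommendation is available."""
    for model_id in RECOMMENDED.get(use_case, ()):
        if model_id in available_ids:
            return model_id
    return None


# Available ids would normally come from GET /api/v1/models.
available = {"m2.5-mini", "k2.5-standard"}
print(choose("high_volume", available))  # → m2.5-mini
```

Feeding `available_ids` from the live Models API response keeps the selection consistent with whatever your plan currently includes.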
Error Responses
| Status | Description |
|---|---|
| 401 | Invalid or missing API key |
| 403 | Account does not have an active Pro or Ultra plan |
| 429 | Rate limit exceeded |
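The statuses above map cleanly onto client actions: fail fast on 401/403, back off and retry on 429. A sketch of that logic, kept separate from the HTTP transport so it works with any client; the `Retry-After` header is an assumption about the rate-limit response, with a fixed fallback delay:

```python
def classify_response(status_code, headers=None):
    """Map a Models API status code to a client action:
    ("ok", None), ("retry", delay_seconds), or ("fail", reason)."""
    headers = headers or {}
    if status_code == 200:
        return ("ok", None)
    if status_code == 401:
        return ("fail", "Invalid or missing API key")
    if status_code == 403:
        return ("fail", "Plan does not include Models API access")
    if status_code == 429:
        # Retry-After is an assumption; default to a 1-second backoff.
        return ("retry", int(headers.get("Retry-After", 1)))
    return ("fail", f"Unexpected status {status_code}")


print(classify_response(429, {"Retry-After": "5"}))  # → ('retry', 5)
```

A calling loop would sleep for the returned delay on `retry`, raise on `fail`, and parse the body on `ok`.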
Next Steps
- Agents API — See which models your agents are using
- Chat API — Send messages to your agents
- Rate Limits — Understand rate limiting and best practices