Qwen API Guide
Model: qwen3.6-plus

Quick Start

Use this endpoint with OpenAI-compatible SDKs and tools. You only need two values: your API key and the base URL. Contact support on Telegram to get access.

Base URL: https://qwenapi.sbs/v1
Endpoint: /chat/completions
Model: qwen3.6-plus

1) Requirements

Active API key from Telegram support.
Base URL: https://qwenapi.sbs/v1
Auth header: Authorization: Bearer YOUR_API_KEY
Model: qwen3.6-plus
For token-by-token streaming of the final answer text, set enable_thinking=false.

2) Client .env

OPENAI_API_KEY=YOUR_API_KEY
OPENAI_BASE_URL=https://qwenapi.sbs/v1
OPENAI_MODEL=qwen3.6-plus
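Apps commonly load these values with python-dotenv; as an illustration, a minimal stdlib-only loader could look like this sketch (parse_env_line and load_env are hypothetical helpers, not part of any SDK):

```python
import os

def parse_env_line(line: str):
    """Parse one KEY=VALUE line from a .env file; returns (key, value) or None."""
    line = line.strip()
    if not line or line.startswith("#") or "=" not in line:
        return None
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

def load_env(path: str = ".env") -> None:
    """Load .env entries into os.environ without overwriting existing values."""
    with open(path) as fh:
        for raw in fh:
            parsed = parse_env_line(raw)
            if parsed and parsed[0] not in os.environ:
                os.environ[parsed[0]] = parsed[1]
```

Call load_env() once at startup, before constructing the client.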

PowerShell cURL

curl https://qwenapi.sbs/v1/chat/completions ^
  -H "Authorization: Bearer YOUR_API_KEY" ^
  -H "Content-Type: application/json" ^
  --data-binary "{\"model\":\"qwen3.6-plus\",\"messages\":[{\"role\":\"user\",\"content\":\"Say OK\"}],\"stream\":false}"

Linux/macOS cURL

curl https://qwenapi.sbs/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  --data '{"model":"qwen3.6-plus","messages":[{"role":"user","content":"Say OK"}],"stream":false}'

3) Integration Notes

This API is OpenAI-compatible for chat completions.
Use server-side calls for production apps. Do not expose keys in browser code.
You can keep temperature low for more deterministic answers.
Reasoning control: set enable_thinking=false (or reasoning_effort="none") to disable reasoning output.
If the request succeeds, you receive a standard OpenAI-style response object.
If you get 401/403, recheck or rotate your API key first.
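To keep keys server-side, build the upstream request on the server so the key never reaches browser code. This sketch (build_upstream_request is a hypothetical helper) reads the key from the server environment:

```python
import json
import os

def build_upstream_request(messages, model="qwen3.6-plus"):
    """Build headers and body for a server-side call.

    Assumes OPENAI_API_KEY is set in the server environment; the browser only
    ever sends messages to your server, never the key itself.
    """
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages, "stream": False})
    return headers, body
```

Your server route would forward headers and body to https://qwenapi.sbs/v1/chat/completions and relay the response to the browser.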

4) Python

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=os.environ.get("OPENAI_BASE_URL", "https://qwenapi.sbs/v1"),
)

resp = client.chat.completions.create(
    model=os.environ.get("OPENAI_MODEL", "qwen3.6-plus"),
    messages=[
        {"role": "system", "content": "You are concise."},
        {"role": "user", "content": "Give me three Rust tips."}
    ],
    temperature=0.3,
)

print(resp.choices[0].message.content)

Python Streaming

stream = client.chat.completions.create(
    model=os.environ.get("OPENAI_MODEL", "qwen3.6-plus"),
    messages=[{"role": "user", "content": "Stream a short answer."}],
    extra_body={"enable_thinking": False},  # SDK rejects unknown kwargs; pass custom params via extra_body
    stream=True,
)

for chunk in stream:
    # some chunks (e.g. a final usage chunk) may carry no choices
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content or ""
    if delta:
        print(delta, end="", flush=True)

5) JavaScript

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: process.env.OPENAI_BASE_URL || "https://qwenapi.sbs/v1",
});

const response = await client.chat.completions.create({
  model: process.env.OPENAI_MODEL || "qwen3.6-plus",
  messages: [
    { role: "system", content: "You are concise." },
    { role: "user", content: "Give me three Rust tips." }
  ],
  temperature: 0.3
});

console.log(response.choices[0].message.content);

JavaScript Streaming

const stream = await client.chat.completions.create({
  model: process.env.OPENAI_MODEL || "qwen3.6-plus",
  messages: [{ role: "user", content: "Stream a short answer." }],
  enable_thinking: false, // non-standard param; TypeScript users may need a cast or @ts-expect-error
  stream: true
});

for await (const part of stream) {
  const delta = part.choices?.[0]?.delta?.content || "";
  if (delta) process.stdout.write(delta);
}

6) Raw SSE Streaming

curl -N https://qwenapi.sbs/v1/chat/completions ^
  -H "Authorization: Bearer YOUR_API_KEY" ^
  -H "Content-Type: application/json" ^
  --data-binary "{\"model\":\"qwen3.6-plus\",\"messages\":[{\"role\":\"user\",\"content\":\"Stream now\"}],\"stream\":true}"

7) Reasoning Configuration

Reasoning ON (default): richer internal thinking; the stream may include reasoning_content before the final answer text.
Reasoning OFF: add enable_thinking=false for clean token-by-token answer streaming.

Reasoning OFF JSON Example

{
  "model": "qwen3.6-plus",
  "messages": [{"role":"user","content":"Stream plain answer"}],
  "enable_thinking": false,
  "stream": true
}

8) Troubleshooting

401 / 403: invalid or disabled API key.
429: rate limit reached. Retry with backoff.
502: temporary upstream/service issue. Retry after a short delay.
Model error: send the model name qwen3.6-plus exactly.
No stream output: the client must support SSE and flush stdout; if needed, set enable_thinking=false to stream answer tokens directly.
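The 429/502 advice above can be wrapped in a small retry helper. This is a sketch (with_backoff is a hypothetical helper) assuming your HTTP client raises errors that carry a status_code attribute; adapt the exception check to your client's error type:

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=0.5, retryable=(429, 502)):
    """Call `call()`; retry on retryable status codes with exponential backoff + jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status not in retryable or attempt == max_attempts - 1:
                raise  # non-retryable error, or out of attempts
            # 0.5s, 1s, 2s, ... plus a little jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Usage: result = with_backoff(lambda: client.chat.completions.create(...)).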

9) Go-Live Checklist

API key is stored in server secrets only.
Base URL set to https://qwenapi.sbs/v1.
Model fixed to qwen3.6-plus.
Retry + timeout are configured in your client.
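The checklist can be encoded as a quick pre-deploy validation. validate_go_live below is a hypothetical sketch covering exactly the four items above; the config dict keys are assumptions, not an SDK contract:

```python
def validate_go_live(cfg: dict):
    """Return a list of checklist problems; an empty list means ready to ship."""
    problems = []
    if not cfg.get("api_key"):
        problems.append("API key missing (store in server secrets only)")
    if cfg.get("base_url") != "https://qwenapi.sbs/v1":
        problems.append("base_url must be https://qwenapi.sbs/v1")
    if cfg.get("model") != "qwen3.6-plus":
        problems.append("model must be qwen3.6-plus")
    if not cfg.get("timeout") or cfg.get("max_retries") is None:
        problems.append("configure timeout and retries in your client")
    return problems
```

Run it in CI or at startup and fail fast if the returned list is non-empty.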