# Chat Completions

Create a chat completion using the specified model.

## Create Chat Completion

`POST https://api.3xcoder.com/v1/chat/completions`

### Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| `model` | string | Yes | Model ID (e.g., `gpt-4o`, `claude-sonnet-4-20250514`, `gemini-2.5-pro`) |
| `messages` | array | Yes | Array of message objects |
| `temperature` | number | No | Sampling temperature (0-2). Default: `1` |
| `top_p` | number | No | Nucleus sampling threshold. Default: `1` |
| `max_tokens` | integer | No | Maximum number of tokens in the response |
| `stream` | boolean | No | Enable streaming (SSE). Default: `false` |
| `stop` | string/array | No | Stop sequences |
| `presence_penalty` | number | No | Presence penalty (-2 to 2). Default: `0` |
| `frequency_penalty` | number | No | Frequency penalty (-2 to 2). Default: `0` |
| `tools` | array | No | List of tools (functions) the model can call |
| `tool_choice` | string/object | No | Controls tool usage (`auto`, `none`, `required`, or a specific tool) |
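For instance, a request body combining several of the optional parameters above might look like this (all values illustrative):

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "Summarize this paragraph."}
  ],
  "temperature": 0.7,
  "max_tokens": 256,
  "stop": ["\n\n"],
  "stream": false
}
```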
### Message Object

Each message in the `messages` array has a `role` and `content`:

| Role | Description |
|---|---|
| `system` | Sets the behavior/context for the assistant |
| `user` | The user's message |
| `assistant` | A previous assistant response (for multi-turn conversations) |
| `tool` | The result of a tool/function call |
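For example, a multi-turn conversation passes the earlier assistant reply back in the `messages` array:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"}
  ]
}
```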
#### Text Message

```json
{
  "role": "user",
  "content": "Explain quantum computing in simple terms."
}
```

#### Multimodal Message (Image)

```json
{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "What's in this image?"
    },
    {
      "type": "image_url",
      "image_url": {
        "url": "https://example.com/image.png"
      }
    }
  ]
}
```

### Response

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing uses quantum bits (qubits)..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 50,
    "total_tokens": 62
  }
}
```

### Examples

```bash
curl https://api.3xcoder.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

```javascript
const response = await fetch('https://api.3xcoder.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello!' }
    ]
  })
})
const data = await response.json()
console.log(data.choices[0].message.content)
```

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.3xcoder.com/v1"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	apiKey := os.Getenv("OPENAI_API_KEY")
	body, _ := json.Marshal(map[string]interface{}{
		"model": "gpt-4o",
		"messages": []map[string]string{
			{"role": "system", "content": "You are a helpful assistant."},
			{"role": "user", "content": "Hello!"},
		},
	})
	req, _ := http.NewRequest("POST", "https://api.3xcoder.com/v1/chat/completions",
		bytes.NewBuffer(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+apiKey)
	resp, _ := http.DefaultClient.Do(req)
	defer resp.Body.Close()
	result, _ := io.ReadAll(resp.Body)
	fmt.Println(string(result))
}
```

```java
import java.net.http.*;
import java.net.URI;

public class ChatCompletionExample {
    public static void main(String[] args) throws Exception {
        String apiKey = System.getenv("OPENAI_API_KEY");
        HttpClient client = HttpClient.newHttpClient();
        String json = """
            {
              "model": "gpt-4o",
              "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Hello!"}
              ]
            }
            """;
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.3xcoder.com/v1/chat/completions"))
            .header("Content-Type", "application/json")
            .header("Authorization", "Bearer " + apiKey)
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();
        HttpResponse<String> response = client.send(request,
            HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

```csharp
using System;
using System.Net.Http;
using System.Text;

var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Authorization", $"Bearer {apiKey}");
var json = """
    {
      "model": "gpt-4o",
      "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
      ]
    }
    """;
var content = new StringContent(json, Encoding.UTF8, "application/json");
var response = await client.PostAsync(
    "https://api.3xcoder.com/v1/chat/completions", content);
var result = await response.Content.ReadAsStringAsync();
Console.WriteLine(result);
```

## Streaming
Set `stream: true` to receive the response as Server-Sent Events (SSE).

### Example

```bash
curl https://api.3xcoder.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write a haiku about coding."}
    ]
  }'
```

### Stream Response Format
Each SSE event contains a JSON chunk:

```
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" world"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
```

### Python Streaming Example

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="https://api.3xcoder.com/v1"
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about coding."}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

## Function Calling (Tools)
You can define tools that the model can invoke:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "What's the weather in Tokyo?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City name"
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```

When the model calls a tool, the response includes a `tool_calls` array. You then send the result back as a `tool` role message to get the final response.
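To illustrate, a follow-up request might echo the assistant's `tool_calls` message and append the tool result, following the usual OpenAI-compatible shape (the `id` and weather payload here are illustrative):

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "What's the weather in Tokyo?"},
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        {
          "id": "call_abc123",
          "type": "function",
          "function": {
            "name": "get_weather",
            "arguments": "{\"location\": \"Tokyo\"}"
          }
        }
      ]
    },
    {
      "role": "tool",
      "tool_call_id": "call_abc123",
      "content": "{\"temperature_c\": 18, \"condition\": \"sunny\"}"
    }
  ]
}
```

The `tool_call_id` in the `tool` message must match the `id` of the corresponding entry in the assistant's `tool_calls` array.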
