# Nitro Chat Request Object (vs OpenAI)
| Feature | Nitro | OpenAI |
|-----------------------|-------------------------------------------------------|------------------------------------------------------------------------|
| API Endpoint | `http://localhost:3982/v1/chat/completions` | `https://api.openai.com/v1/chat/completions` |
| Request Body Type | `application/json` | `application/json` |
| `messages` | Array, contains input data or prompts for the model | Array, required, list of messages in the conversation |
| `model` | String, specifies the model for tasks | String, required, ID of the model to use |
| `max_tokens` | Number, default: 2048 | Integer or null, defaults to infinity |
| `stop` | Array, tokens/phrases to stop output | String/array/null, up to 4 sequences to stop token generation |
| `frequency_penalty` | Number, default: 0 | Number or null, optional, defaults to 0, range -2.0 to 2.0 |
| `presence_penalty` | Number, default: 0 | Number or null, optional, defaults to 0, range -2.0 to 2.0 |
| `temperature` | Number, default: 0.7 | Number or null, optional, defaults to 1, range 0 to 2 |
| `stream` | Boolean, default: true | Boolean or null, optional, defaults to false |
| `logit_bias` | Not available | Map, optional, modifies token likelihood |
| `n` | Not available | Integer or null, optional, defaults to 1 |
| `response_format` | Not available | Object, optional, specifies output format |
| `seed` | Not available | Integer or null, optional, in Beta |
| `top_p` | Number, supported with the same semantics as OpenAI | Number or null, optional, defaults to 1 |
| `tools` | Not available | Array, optional, list of tools the model can use |
| `tool_choice` | Not available | String or object, optional, controls tool use |
| `user` | Not available | String, optional, unique identifier for end-user |
| `function_call` | Not available | String or object, optional, deprecated in favor of `tool_choice` |
| `functions` | Not available | Array, optional, deprecated in favor of `tools` |
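For reference, a minimal request against the local Nitro endpoint could be sketched as below (illustrative Python using only the standard library; the URL and the defaults come from the table above, and the model name is a placeholder):

```python
import json
import urllib.request

# Build a chat/completions payload using the Nitro defaults from the table:
# max_tokens 2048, temperature 0.7 (stream defaults to true in Nitro).
payload = {
    "model": "llama-2-7b",  # hypothetical model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 2048,
    "temperature": 0.7,
    "stream": False,  # disable streaming to get a single JSON response
}

req = urllib.request.Request(
    "http://localhost:3982/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # requires a running Nitro server
```

The actual call is left commented out since it needs a Nitro server listening on port 3982.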
## One-by-one breakdown
### Discussion Points
- Persisting chat/completion responses
- in `messages`?
- Threads
- Decision: move messages to `jsonl`
- Messages
- `chat/completions` -> Persists to Messages
- ERD between Threads, Messages
```
/threads-id
    message.jsonl
    thread.json

// thread.json
{
  "assistant": [
    { "model1": { "var1": "new_value" } },
    { "model2": { "var1": "new_value" } }
  ]
}
```
### Roadmap
- (Nitro) logit_bias:
  - This is actually within Nitro's scope, since it depends on the inference engine. However, it is a very advanced feature that provides no significant value for a normal user, so I'm postponing it for now (we can create an issue to track it eventually).
- (Nitro) n:
  - This is an old, nearly deprecated parameter from OpenAI: it requests more than one choice per call (n > 1), which llama.cpp does not support today. Neither Jan nor Nitro should spend time on it, since almost no one uses it.
- (Jan) response_format:
  - This is a feature to support the Python runtime: it returns data as Python object binaries, which also suggests OpenAI uses a Python runtime, at least to generate this return data. It should be implemented in Jan given Jan's multi-runtime nature.
- (Nitro) seed:
  - This just passes a specific seed to the inference engine; currently Nitro generates one randomly. It is minor, but worth adding later for user control (create an issue).
- (Jan) tools:
  - Chain- and runtime-related; should be handled in Jan.
- (Jan) tool_choice:
  - Same as above.
- (Jan??) user:
  - This supports OpenAI's stateful way of tracking inference per end-user. As a stateless inference engine, Nitro cannot support it, but I don't see many people using it in practice, so we can treat it as a dummy value if needed.
- (Jan) function_call:
  - Chain- and runtime-related; should be handled in Jan.
- (Jan) functions:
  - Same as above.
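Taken together, the items above amount to a compatibility shim in Jan that strips (or stubs) the OpenAI-only parameters before a request reaches Nitro. A rough sketch, with the parameter list taken from the comparison table at the top:

```python
# Parameters Nitro does not accept, per the comparison table above.
UNSUPPORTED_BY_NITRO = {
    "logit_bias", "n", "response_format", "seed",
    "tools", "tool_choice", "user", "function_call", "functions",
}

def to_nitro_request(openai_request: dict) -> dict:
    """Drop OpenAI-only fields so the request is safe to forward to Nitro."""
    return {k: v for k, v in openai_request.items()
            if k not in UNSUPPORTED_BY_NITRO}
```

As the roadmap items land in Nitro (e.g. `seed`, `logit_bias`), their names would simply be removed from the drop set.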