# Ollama

Run large language models locally or in the cloud with Ollama.

- **Category:** AI models
- **Auth:** API_KEY
- **Composio Managed App Available?** N/A
- **Tools:** 8
- **Triggers:** 0
- **Slug:** `OLLAMA`
- **Version:** 00000000_00

## Tools

### Chat with Ollama model

**Slug:** `OLLAMA_CHAT`

Tool to send a chat message with conversation history to Ollama. Use when you need to have a multi-turn conversation with an LLM model.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | Model name to use for generating responses |
| `think` | string | No | Enables thinking output; accepts true/false or 'high'/'medium'/'low' |
| `tools` | array | No | Optional function tools the model may call during chat |
| `format` | string | No | Format for response: 'json' or JSON schema object |
| `stream` | boolean | No | When true, returns streamed partial responses; defaults to false |
| `options` | object | No | Runtime options controlling text generation (ModelOptions) |
| `logprobs` | boolean | No | Returns log probabilities of output tokens |
| `messages` | array | Yes | Chat history as message objects with role and content |
| `keep_alive` | string | No | Model keep-alive duration (e.g., '5m' or 0 to unload immediately) |
| `top_logprobs` | integer | No | Number of most likely tokens per position when logprobs enabled |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
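
The parameters above map onto Ollama's native `/api/chat` endpoint. A minimal sketch of the request body, assuming a local Ollama server on its default port (11434); the model name `llama3.2` is illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local Ollama address (assumption)

def build_chat_payload(model, messages, stream=False, options=None):
    """Assemble an OLLAMA_CHAT-style request body, omitting unset optional fields."""
    payload = {"model": model, "messages": messages, "stream": stream}
    if options:
        payload["options"] = options  # runtime generation options, e.g. temperature
    return payload

payload = build_chat_payload(
    "llama3.2",  # illustrative model name; use any model you have pulled
    [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Why is the sky blue?"},
    ],
    options={"temperature": 0.2},
)

def send_chat(payload):
    """POST the body to /api/chat; requires a running Ollama server."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # non-streaming: a single JSON object

# send_chat(payload)  # uncomment with a local Ollama server running
```

With `stream=False` the response is one JSON object whose `message.content` holds the assistant's reply; append that message to `messages` to continue the multi-turn conversation.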

### Generate Text with Ollama

**Slug:** `OLLAMA_GENERATE`

Tool to generate text responses from Ollama models with optional raw mode. Use raw=true to bypass prompt templating when you need full control over the prompt for debugging or custom processing. Note that raw mode will not return a context.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `raw` | boolean | No | When true, bypasses prompt templating and returns the raw response from the model. Use this for debugging or when you need full control over the prompt. Note: raw mode will not return a context. |
| `model` | string | Yes | Model name to use for generation. Examples: 'llama2', 'mistral', 'gemma3:4b'. |
| `think` | string | No | Enables thinking output. Accepts true/false or 'high'/'medium'/'low' for verbosity level. |
| `format` | string | No | Structured output format. Use 'json' for JSON output or provide a JSON schema object. |
| `images` | array | No | Array of base64-encoded images for multimodal models. |
| `prompt` | string | No | Text for the model to generate a response from. Required unless using raw mode with a complete prompt. |
| `stream` | boolean | No | When true, returns partial responses as stream. Default is false for non-streaming responses. |
| `suffix` | string | No | Text that appears after the user prompt. Used for fill-in-the-middle models. |
| `system` | string | No | System prompt for the model to generate a response from. |
| `options` | object | No | Runtime options for model generation behavior. |
| `logprobs` | boolean | No | Whether to return log probabilities of output tokens. |
| `keep_alive` | string | No | Model keep-alive duration. Examples: '5m' for 5 minutes, 0 to unload immediately. |
| `top_logprobs` | integer | No | Number of most likely tokens to return at each position. Requires logprobs=true. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
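
These parameters correspond to Ollama's `/api/generate` endpoint. A sketch contrasting templated and raw-mode bodies, assuming a local server on the default port; model names and the raw prompt's special tokens are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local Ollama address (assumption)

def build_generate_payload(model, prompt, raw=False, system=None, stream=False):
    """Assemble an OLLAMA_GENERATE-style body for POST /api/generate."""
    payload = {"model": model, "prompt": prompt, "stream": stream}
    if raw:
        payload["raw"] = True  # bypass prompt templating; response carries no context
    if system is not None:
        payload["system"] = system
    return payload

# Templated generation with a system prompt:
templated = build_generate_payload(
    "llama3.2", "List three prime numbers.", system="Answer in one line."
)

# Raw mode: the prompt must already include any special tokens the model expects.
raw_body = build_generate_payload("llama3.2", "[INST] Hello [/INST]", raw=True)

def generate(payload):
    """Send the request to /api/generate; requires a running Ollama server."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In raw mode the `system` and template machinery is skipped entirely, which is why the tool warns that no context is returned.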

### List Models

**Slug:** `OLLAMA_LIST_MODELS`

Tool to list all available Ollama models and their details. Use when you need to fetch installed models with metadata including name, size, last modified timestamp, digest, and format information.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
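
The underlying request is a simple GET against Ollama's native `/api/tags` endpoint. A sketch, assuming a local server on the default port:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local Ollama address (assumption)
LIST_ENDPOINT = f"{OLLAMA_URL}/api/tags"  # native endpoint backing this tool

def list_models():
    """Return installed models; entries include name, size, digest, and modified_at."""
    with urllib.request.urlopen(LIST_ENDPOINT) as resp:
        return json.load(resp)["models"]

# for m in list_models():          # requires a running server
#     print(m["name"], m["size"])  # model name and size in bytes
```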

### OpenAI-Compatible Chat Completion

**Slug:** `OLLAMA_OPEN_AI_CHAT_COMPLETIONS`

Tool to create OpenAI-compatible chat completions using Ollama models. Use when you need conversational AI responses with OpenAI API format compatibility.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `n` | integer | No | Number of chat completion choices to generate. Default: 1 |
| `seed` | integer | No | Seed for reproducible outputs. Same seed with same parameters produces same results |
| `stop` | string | No | Stop sequences where the API will stop generating further tokens. Can be a string or array of strings |
| `user` | string | No | Unique identifier for the end-user for monitoring and abuse detection |
| `model` | string | Yes | Model identifier to use for completion (e.g., 'llama2', 'mistral', 'gemma3:4b') |
| `tools` | array | No | Array of tool/function definitions available for the model to call |
| `top_p` | number | No | Nucleus sampling parameter. Alternative to temperature. Default: 1.0 |
| `stream` | boolean | No | Enable streaming responses. If true, tokens are sent as server-sent events. Default: false |
| `messages` | array | Yes | Array of message objects representing the conversation history |
| `logit_bias` | object | No | Modify likelihood of specified tokens appearing in the completion. Map of token IDs to bias values (-100 to 100) |
| `max_tokens` | integer | No | Maximum number of tokens to generate in the completion |
| `temperature` | number | No | Sampling temperature for randomness (0-2). Higher values make output more random. Default: 1.0 |
| `tool_choice` | string | No | Controls which tool the model should use ('none', 'auto', or specific tool name) |
| `stream_options` | object | No | Additional streaming options. |
| `response_format` | object | No | Response format specification for structured outputs. |
| `presence_penalty` | number | No | Penalize tokens that have appeared to encourage new topics (-2.0 to 2.0). Default: 0 |
| `frequency_penalty` | number | No | Penalize frequent tokens to reduce repetition (-2.0 to 2.0). Default: 0 |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
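
These parameters follow OpenAI's `chat.completions` schema, served by Ollama at `/v1/chat/completions`. A sketch assuming a local server on the default port; the model name is illustrative, and the bearer token is a placeholder since a local Ollama server does not validate it:

```python
import json
import urllib.request

OPENAI_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible base URL (assumption)

def build_chat_completion_body(model, messages, temperature=1.0, max_tokens=None):
    """Assemble an OpenAI-style chat completion request body."""
    body = {"model": model, "messages": messages, "temperature": temperature}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return body

body = build_chat_completion_body(
    "llama3.2",  # illustrative model name
    [{"role": "user", "content": "Say hello in five words."}],
    temperature=0.0,  # greedy-leaning sampling for repeatable output
    max_tokens=32,
)

def chat_completion(body):
    """POST to /v1/chat/completions; requires a running Ollama server."""
    req = urllib.request.Request(
        f"{OPENAI_BASE}/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer ollama"},  # placeholder key, not checked locally
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request and response shapes match OpenAI's, existing OpenAI client libraries can be pointed at this base URL unchanged.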

### OpenAI-Compatible Text Completion

**Slug:** `OLLAMA_OPEN_AI_COMPLETIONS`

Tool to create OpenAI-compatible text completions using Ollama models. Use when you need text generation with OpenAI API format compatibility beyond chat-based interactions.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `n` | integer | No | Number of completions to generate for the prompt. Default: 1 |
| `echo` | boolean | No | Returns the prompt embedded within the completion response. Default: false |
| `seed` | integer | No | Random seed for reproducible outputs. Same seed with same parameters should yield same result |
| `stop` | string | No | Sequences where the model should stop generating. Can be a single string or array of strings |
| `user` | string | No | Unique identifier representing the end-user for tracking and abuse monitoring |
| `model` | string | Yes | Model identifier to use for completion (e.g., 'llama2', 'mistral', 'deepseek-v3.2') |
| `top_p` | number | No | Nucleus sampling parameter controlling diversity (0-1). Recommended to alter either temperature or top_p, not both. Default: 1.0 |
| `prompt` | string | Yes | Text prompt for completion generation. Note: Currently only accepts a string, not arrays |
| `stream` | boolean | No | Enable streaming responses as Server-Sent Events. Default: false |
| `suffix` | string | No | Text that comes after the completion. Used for insertion mode |
| `best_of` | integer | No | Generates multiple completions server-side and returns the best one. Default: 1 |
| `logprobs` | integer | No | Number of log probabilities to return (up to 5) |
| `logit_bias` | object | No | Modify likelihood of specified tokens appearing. Maps token IDs to bias values (-100 to 100) |
| `max_tokens` | integer | No | Maximum number of tokens to generate in the completion. Total of prompt + max_tokens cannot exceed model's context length |
| `temperature` | number | No | Controls randomness in output (0-2). Lower values make output more focused, higher values more random. Default: 1.0 |
| `stream_options` | object | No | Additional streaming options. |
| `presence_penalty` | number | No | Penalizes tokens based on whether they appear in text so far (-2.0 to 2.0). Positive values encourage new topics. Default: 0 |
| `frequency_penalty` | number | No | Penalizes tokens based on frequency in text so far (-2.0 to 2.0). Positive values decrease repetition. Default: 0 |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
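
This tool mirrors OpenAI's legacy `/v1/completions` endpoint. A sketch of the request body, assuming a local Ollama server on the default port; the model name and prompt are illustrative:

```python
import json
import urllib.request

OPENAI_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible base URL (assumption)

def build_completion_body(model, prompt, max_tokens=64, echo=False, stop=None):
    """Assemble an OpenAI-style /v1/completions request body."""
    body = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    if echo:
        body["echo"] = True  # embed the prompt in the returned completion
    if stop is not None:
        body["stop"] = stop  # a string or a list of stop sequences
    return body

body = build_completion_body("llama3.2", "Once upon a time", max_tokens=16, stop=["\n"])

def complete(body):
    """POST to /v1/completions; requires a running Ollama server."""
    req = urllib.request.Request(
        f"{OPENAI_BASE}/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```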

### List Models (OpenAI Compatible)

**Slug:** `OLLAMA_OPEN_AI_LIST_MODELS`

Tool to list available models using OpenAI-compatible API format. Use when you need to retrieve locally available Ollama models with metadata following OpenAI's model list format.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
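
The underlying request is a GET against Ollama's OpenAI-compatible `/v1/models` endpoint. A sketch, assuming a local server on the default port:

```python
import json
import urllib.request

MODELS_ENDPOINT = "http://localhost:11434/v1/models"  # OpenAI-style list endpoint (assumption)

def list_models_openai():
    """Return model entries in OpenAI's list format ({"object": "list", "data": [...]})."""
    with urllib.request.urlopen(MODELS_ENDPOINT) as resp:
        return json.load(resp)["data"]

# Each entry resembles {"id": "llama3.2:latest", "object": "model", ...}
# (requires a running server to fetch).
```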

### Show Model Information

**Slug:** `OLLAMA_SHOW`

Tool to show comprehensive information about an Ollama model. Use when you need to retrieve model details, parameters, template, license, or system prompt.

#### Input Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | The name of the model to retrieve information about. |
| `verbose` | boolean | No | When enabled, includes large verbose fields in the response. |

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
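
The two parameters map onto a POST to Ollama's `/api/show` endpoint. A sketch assuming a local server on the default port; the model name is illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local Ollama address (assumption)

def build_show_payload(model, verbose=False):
    """Assemble the /api/show request body."""
    payload = {"model": model}
    if verbose:
        payload["verbose"] = True  # include large fields in the response
    return payload

payload = build_show_payload("llama3.2", verbose=False)

def show_model(payload):
    """POST to /api/show; returns details such as parameters, template, and license."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/show",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# info = show_model(payload)      # requires a running server
# print(info.get("template"))     # the model's prompt template, if present
```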

### Get Ollama Version

**Slug:** `OLLAMA_VERSION`

Tool to get the version of Ollama running locally. Use when you need to check which version of Ollama is currently installed.

#### Output

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `data` | string | Yes | Data from the action execution |
| `error` | string | No | Error if any occurred during the execution of the action |
| `successful` | boolean | Yes | Whether the action execution was successful |
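
The underlying request is a GET against Ollama's `/api/version` endpoint. A sketch, assuming a local server on the default port:

```python
import json
import urllib.request

VERSION_ENDPOINT = "http://localhost:11434/api/version"  # default local address (assumption)

def ollama_version():
    """Return the running server's version string from {"version": "..."}."""
    with urllib.request.urlopen(VERSION_ENDPOINT) as resp:
        return json.load(resp)["version"]

# print(ollama_version())  # requires a running server
```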
