LLM

Manage AI model interactions within your workspace.

Active Tasks

| Field | Description |
|---|---|
| Task ID | Unique identifier for the task |
| Model | Which AI model is running |
| Status | `Running`, `Queued`, `Completed`, or `Failed` |
| Tokens | Input and output token counts |
| Duration | How long the task has been running |
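The task fields above could be modeled as a simple record. This is a hypothetical sketch; the class and field names are assumptions, not the actual data model:

```python
from dataclasses import dataclass
from enum import Enum

class TaskStatus(Enum):
    RUNNING = "Running"
    QUEUED = "Queued"
    COMPLETED = "Completed"
    FAILED = "Failed"

@dataclass
class Task:
    task_id: str        # unique identifier
    model: str          # which AI model is running
    status: TaskStatus
    input_tokens: int   # tokens consumed so far
    output_tokens: int  # tokens generated so far
    duration_s: float   # how long the task has run, in seconds

# Example task entry
task = Task("task-001", "example-model", TaskStatus.RUNNING, 1200, 350, 14.2)
```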

Task Controls

| Action | Effect |
|---|---|
| Pause | Suspend the running task |
| Resume | Continue a paused task |
| Cancel | Stop the task and discard its output |
| View Output | See the content generated so far |
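The controls above map naturally onto a small state machine. A minimal sketch, assuming a controller class and state names that are illustrative rather than the actual API:

```python
class TaskController:
    """Hypothetical sketch of the task controls described above."""

    def __init__(self):
        self.state = "Running"
        self.output = []

    def pause(self):
        if self.state == "Running":
            self.state = "Paused"   # suspend the running task

    def resume(self):
        if self.state == "Paused":
            self.state = "Running"  # continue a paused task

    def cancel(self):
        self.state = "Cancelled"    # stop the task...
        self.output.clear()         # ...and discard its output

    def view_output(self):
        return list(self.output)    # see the content generated so far
```

Pause and Resume are inverses, while Cancel is terminal: once a task is cancelled, neither control can restart it.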

Rate Limiting

| Limit | Options |
|---|---|
| Max concurrent tasks | 1–10 |
| Tokens per minute | Custom threshold |
| Auto-cancel timeout | Time to wait before killing stuck tasks |
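The limits above could be captured in a small config object with a concurrency check. A sketch under assumed names and default values (the defaults here are illustrative, not documented):

```python
from dataclasses import dataclass

@dataclass
class RateLimitConfig:
    max_concurrent_tasks: int = 5     # allowed range is 1-10
    tokens_per_minute: int = 10_000   # custom threshold
    auto_cancel_timeout_s: int = 300  # kill stuck tasks after this long

    def __post_init__(self):
        if not 1 <= self.max_concurrent_tasks <= 10:
            raise ValueError("max_concurrent_tasks must be between 1 and 10")

def can_start(running_tasks: int, cfg: RateLimitConfig) -> bool:
    """Start a new task only when a concurrency slot is free."""
    return running_tasks < cfg.max_concurrent_tasks
```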
