AI models and parameters

Projects and agents use LLM configuration to control model behavior.

Common parameters

temperature

  • Low (e.g. 0–0.3): more consistent, near-deterministic answers; suited to support replies and policy text.
  • Mid (e.g. 0.5–0.8): a balance of consistency and creativity.
  • High (e.g. 0.9+): more varied phrasing, at the cost of a higher hallucination risk.
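Conceptually, temperature divides the model's logits before the softmax: low values sharpen the distribution toward the top token, high values flatten it. A minimal sketch of that math (the logit values here are illustrative, not from any real model):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # top token dominates
high = softmax_with_temperature(logits, 1.5)  # probability spreads out
print(round(low[0], 3), round(high[0], 3))
```

At temperature 0.2 the top logit takes nearly all the probability mass; at 1.5 the alternatives stay plausible, which is where the extra variety (and hallucination risk) comes from.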

maxTokens

  • Caps response length and cost; very low values can truncate answers.
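The truncation behavior can be sketched as a hard cut on the output token stream (function and token list are hypothetical, for illustration only):

```python
def truncate_at_max_tokens(tokens, max_tokens):
    """Cut generation off after max_tokens tokens.
    Returns the (possibly truncated) output and a flag indicating
    whether the answer was cut short."""
    truncated = len(tokens) > max_tokens
    return tokens[:max_tokens], truncated

out, cut = truncate_at_max_tokens(["The", "answer", "is", "42", "."], 3)
print(out, cut)  # the answer is cut off mid-sentence
```

This is why a very low cap can end responses mid-sentence: the limit is enforced on token count, not on sentence boundaries.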

topP

  • Nucleus sampling: restricts sampling to the smallest set of tokens whose cumulative probability reaches topP. It interacts with temperature to control diversity, so it is usually best to tune one of the two at a time.
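The nucleus cutoff can be sketched as follows (a simplified illustration of the standard technique, not Cere Insight's internal implementation; the probability values are made up):

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p (the 'nucleus'), then renormalize the survivors."""
    indexed = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for i, p in indexed:
        kept.append((i, p))
        cumulative += p
        if cumulative >= top_p:
            break  # nucleus reached; drop the remaining tail
    total = sum(p for _, p in kept)
    return {i: p / total for i, p in kept}

# With top_p = 0.9, the 0.05 tail token is excluded from sampling.
print(top_p_filter([0.5, 0.3, 0.15, 0.05], 0.9))
```

Lowering topP trims the low-probability tail, which reduces diversity in a different way than lowering temperature does: temperature reshapes the whole distribution, while topP removes unlikely tokens outright.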

Choosing a model

  • Consider latency, cost, multilingual support, and tool-calling (function calling) compatibility.
  • The exact model list depends on your backend / provider configuration.
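Putting the parameters together, a configuration might look like the sketch below. All field names and values here are illustrative assumptions, not Cere Insight's actual schema; check your backend / provider documentation for the real keys and available models.

```python
# Hypothetical LLM configuration; field names and values are
# assumptions for illustration, not the actual Cere Insight schema.
llm_config = {
    "provider": "example-provider",  # which backend serves the model list
    "model": "example-model",        # availability depends on the provider
    "temperature": 0.3,              # low: consistent answers for support text
    "maxTokens": 512,                # caps response length and cost
    "topP": 1.0,                     # leave at 1.0 while tuning temperature
}

# Basic sanity checks a backend might apply before accepting the config.
assert 0.0 <= llm_config["temperature"] <= 2.0
assert llm_config["maxTokens"] > 0
assert 0.0 < llm_config["topP"] <= 1.0
```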

Cere Insight 2.0 documentation