LLM Topics

Freeform prompt design guide


Last updated 9 months ago


Implementation Background

LLM Topics rely on structured output from OpenAI's gpt-4o-mini model, obtained via its function calling feature.

We define two parameters, reason and verdict. verdict is a boolean that is true if the LLM believes the message matches the topic, and false otherwise. reason is a chain-of-thought explanation of why the LLM chose that verdict.

When a new message is analyzed, the LLM topic's prompt is sent as the system message, and the ingested conversation message is sent as the user message. We instruct OpenAI via the tools parameter to always respond with a function call.
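As a sketch, a request of this shape can be assembled as follows. The function name categorise_message and the prompt strings are illustrative assumptions, not Context.ai's actual values; only the overall structure mirrors the description above.

```python
def build_topic_request(topic_prompt: str, conversation_message: str) -> dict:
    """Assemble an OpenAI chat completion request that forces a function call.

    The function name "categorise_message" is a placeholder for illustration.
    """
    function_schema = {
        "name": "categorise_message",
        "description": "Record whether the message matches the topic",
        "parameters": {
            "type": "object",
            "properties": {
                "reason": {
                    "type": "string",
                    "description": "The reason for the verdict",
                },
                "verdict": {
                    "type": "boolean",
                    "description": "Does the message fulfill the requirements of the given task",
                },
            },
            "required": ["reason", "verdict"],
        },
    }
    return {
        "model": "gpt-4o-mini",
        "messages": [
            # The LLM topic's prompt goes in as the system message.
            {"role": "system", "content": topic_prompt},
            # The ingested conversation message goes in as the user message.
            {"role": "user", "content": conversation_message},
        ],
        "tools": [{"type": "function", "function": function_schema}],
        # Naming a specific tool here forces the model to respond with a
        # function call rather than free-form text.
        "tool_choice": {
            "type": "function",
            "function": {"name": "categorise_message"},
        },
    }
```

Forcing a single named tool is what guarantees the response always contains the reason/verdict structure instead of a plain-text answer.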

The parameters are recorded below for reference.

parameters: {
  type: "object",
  properties: {
    reason: {
      type: "string",
      description: "The reason for the verdict"
    },
    verdict: {
      type: "boolean",
      description: "Does the message fulfill the requirements of the given task"
    }
  },
  required: ["reason", "verdict"]
}
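On the response side, the function-call arguments arrive as a JSON-encoded string that decodes into the two fields. A minimal sketch, assuming a response shaped like OpenAI's chat completions format (the sample payload below is invented for illustration):

```python
import json


def parse_verdict(response: dict) -> tuple[bool, str]:
    """Extract (verdict, reason) from a completion containing a forced tool call."""
    tool_call = response["choices"][0]["message"]["tool_calls"][0]
    # The model returns its arguments as a JSON string, so decode them first.
    args = json.loads(tool_call["function"]["arguments"])
    return args["verdict"], args["reason"]


# Invented sample payload, shaped like an OpenAI chat completion response:
sample = {
    "choices": [{
        "message": {
            "tool_calls": [{
                "function": {
                    "arguments": json.dumps({
                        "verdict": True,
                        "reason": "The message asks about a billing topic.",
                    })
                }
            }]
        }
    }]
}

verdict, reason = parse_verdict(sample)
```

Because reason and verdict are both listed as required in the schema, both fields can be assumed present once the arguments string parses successfully.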
LLM Message Diagram