Overview
In Orvi AI, a chat is a single prompt execution on a single AI model (ChatGPT, Perplexity, Gemini, etc.) that produces an AI response.
Chats are the foundation for everything you see in Orvi AI:
- Visibility: derived from whether your brand (or other brands) appears in chats.
- Position: derived from the mention order inside chats.
- Sentiment: derived from the tone of mentions inside chats.
- Sources & citations: derived from the URLs the model used and/or cited while generating the response.
At a data level, Orvi AI stores chats and their extracted entities in these MotherDuck tables:
- **chats**: the response itself + execution metadata (`model`, `location`, `executed_at`, `status`, `prompt_type`).
- **brand_mentions** and **product_mentions**: who/what was mentioned, in what order, and with what sentiment.
- **source_events**: which domains/URLs appeared as sources and which were explicit citations.
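To make the relationships concrete, here is a minimal in-memory sketch of how the three tables link together. The field names follow this article; the `chat_id` join key and all sample values are assumptions for illustration, not the exact MotherDuck schema.

```python
# Illustrative stand-ins for the three tables, joined by a hypothetical chat_id.
chats = [
    {"chat_id": 1, "model": "chatgpt", "location": "US",
     "status": "success", "prompt_type": "brand"},
]
brand_mentions = [
    {"chat_id": 1, "brand": "Acme", "mention_order": 1, "sentiment_score": 72},
]
source_events = [
    {"chat_id": 1, "url": "https://example.com/review", "is_citation": True},
]

def entities_for(chat_id):
    """Gather every extracted entity linked to one chat execution."""
    return {
        "mentions": [m for m in brand_mentions if m["chat_id"] == chat_id],
        "sources": [s for s in source_events if s["chat_id"] == chat_id],
    }

print(entities_for(1))
```

One chat row therefore fans out into zero or more mention events and zero or more source events, which is what the dashboards aggregate.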
Where you’ll find chats in Orvi AI
- Overview → Recent Chats: a quick view of recent AI responses across your selected filters.
- Prompt pages: you can drill into the chat history for a specific prompt to understand *why* a metric changed (new brands mentioned, mention order shifts, new sources appearing, etc.).
Anatomy of a chat
Each chat contains the elements Orvi AI analyzes to compute your dashboards and metrics:
Execution context (from chats)
- Status (`chats.status`): whether the run completed successfully (default: `success`).
- Model (`chats.model`): which model produced the answer (e.g. `chatgpt`, `perplexity`, `mistral`, `gemini`, `google-ai-mode`, `google-ai-overview`).
- Prompt type (`chats.prompt_type`): whether this run is a brand prompt or a product prompt (`brand` / `product`).
- Location (`chats.location`): where the prompt was executed from (e.g. `US`, `FR`). Some platforms may not support location selection or may return no location.
- Executed at (`chats.executed_at`): when the prompt was run.
The response (from chats)
- Prompt text (`chats.prompt_text`): the prompt that was executed.
- Response text (`chats.response_text`): the AI-generated answer shown in the chat view.
Mentions (from brand_mentions / product_mentions)
Orvi AI extracts mentions from the response and stores them as “mention events”:
- Mention order (`mention_order`): 1st, 2nd, 3rd… (this drives the Position metric).
- Sentiment score (`sentiment_score`): a 0–100 score describing the tone of the mention (this drives the Sentiment metric).
- Context snippet (`context_snippet`): the surrounding excerpt used to justify the detection.
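A mention event can be pictured as a small record carrying exactly these fields. The sketch below uses the field names from this article with invented sample values:

```python
# Sketch of mention events for one chat. Field names follow the article;
# the entities, snippets, and scores are invented examples.
mention_events = [
    {"entity": "Acme", "mention_order": 1, "sentiment_score": 80,
     "context_snippet": "Acme is a popular choice for..."},
    {"entity": "Globex", "mention_order": 2, "sentiment_score": 55,
     "context_snippet": "...while Globex is a cheaper alternative."},
]

# Position comes from mention order; Sentiment from the 0-100 tone score.
for m in mention_events:
    print(f'{m["entity"]}: position {m["mention_order"]}, '
          f'sentiment {m["sentiment_score"]}/100')
```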
Sources and citations (from source_events)
For each chat, Orvi AI records the URLs involved in the response generation:
- Source URL (`source_events.url`) + domain (`source_events.domain`)
- Domain type (`source_events.domain_type`) and URL type (`source_events.url_type`)
- Citation flag (`source_events.is_citation`): whether the URL was explicitly cited in the response text
- Citation order (`source_events.citation_order`): if cited, which citation number it was
Sources vs citations
Not all sources are citations — but every citation is a source.
- Sources: all URLs the model accessed or considered while generating the response (stored as rows in `source_events`).
- Citations: the subset of sources that were explicitly referenced in the response text (where `source_events.is_citation = TRUE`).
Example:
A response might show 8 sources in the sidebar, but only 5 of them are marked as citations.
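In code terms, citations are just a filter over the source rows. This sketch reproduces the 8-sources / 5-citations example with placeholder URLs:

```python
# Sketch of the sources-vs-citations split from the example above.
# URLs are invented placeholders; only the is_citation flag matters here.
source_events = [
    {"url": f"https://site{i}.example/page", "is_citation": i < 5}
    for i in range(8)
]

sources = source_events  # everything the model accessed or considered
citations = [s for s in source_events if s["is_citation"]]  # explicitly cited

print(len(sources), len(citations))  # 8 sources, 5 of them citations
```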
How to interpret the difference:
- Citations tend to drive traffic (because they are “named” or referenced directly in the answer).
- Non-cited sources still build authority (they can influence the answer without being explicitly referenced).
This is why Orvi AI tracks both Used % (how often a domain appears as a source) and Avg Citations / Total Citations (how often sources are explicitly cited).
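The two metric families can be sketched like this. The aggregation below is an illustration of the idea (share of chats containing the domain vs. explicit citation counts), not Orvi AI's exact production formula, and the sample rows are invented:

```python
# Illustrative source_events rows across three chats (values invented).
source_events = [
    {"chat_id": 1, "domain": "example.com", "is_citation": True},
    {"chat_id": 2, "domain": "example.com", "is_citation": False},
    {"chat_id": 3, "domain": "other.org", "is_citation": True},
]
total_chats = 3

def used_pct(domain):
    """Share of chats in which the domain appears as a source at all."""
    chats_with = {e["chat_id"] for e in source_events if e["domain"] == domain}
    return 100 * len(chats_with) / total_chats

def total_citations(domain):
    """How many times the domain was explicitly cited."""
    return sum(1 for e in source_events
               if e["domain"] == domain and e["is_citation"])

# example.com appears in 2 of 3 chats but is cited only once.
print(used_pct("example.com"), total_citations("example.com"))
```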
Platform differences
Different platforms handle web search, source panels, and citations differently. This affects what you see in chats:
- ChatGPT: may or may not browse the web depending on the experience being simulated. It’s normal to see chats with no sources.
- Perplexity: often shows many sources, but may cite fewer directly in the response body.
- Gemini / other models: may have different constraints around location selection and source presentation.
AI models also have inherent randomness in wording and source selection. Use chats to understand *what changed*, but rely on trends over time (weeks, not days) for strategic decisions.
Reading chat position rankings (how Position is computed)
When multiple brands/products appear in a chat, Orvi AI computes position using mention order:
Position Score = AVG(mention_order)
Important: mention order is computed over all detected mentions in the chat — not just the competitors you’ve explicitly added to your project.
Example:
Chat A mentions: Hyundai (1), Chevrolet (2), BMW (3) → BMW position = 3
Chat B mentions: Hyundai (1), Chevrolet (2), Ferrari (3), BMW (4) → BMW position = 4
Even if you haven’t added Ferrari as a tracked competitor, it still affects the “true” position context in that chat.
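The worked example above can be computed directly. The averaging below matches the `Position Score = AVG(mention_order)` formula in this section, though Orvi AI's production aggregation may differ in details:

```python
# Mention orders from the example chats, keyed by brand.
chat_a = {"Hyundai": 1, "Chevrolet": 2, "BMW": 3}
chat_b = {"Hyundai": 1, "Chevrolet": 2, "Ferrari": 3, "BMW": 4}

def position_score(brand, chats):
    """Average mention order across the chats that mention the brand."""
    orders = [c[brand] for c in chats if brand in c]
    return sum(orders) / len(orders)

print(position_score("BMW", [chat_a, chat_b]))      # (3 + 4) / 2 = 3.5
print(position_score("Ferrari", [chat_a, chat_b]))  # mentioned once -> 3.0
```

Note how Ferrari's single mention in Chat B still pushed BMW from position 3 to position 4, even though Ferrari is not a tracked competitor.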
Last modified on January 25, 2026