Product Guides • 3 min read • Mar 07, 2026 • By Jordan Reyes

How to interpret AI sentiment scores for your brand (Mar 2026 Update 3)

This practical guide helps you stand up Abhord, run your first cross-LLM survey, interpret results, set up competitor tracking, and turn insights into action—fast.

Abhord Quickstart Guide (Refreshed Edition)


What’s new in this edition:

  • Clearer default schemas for mentions, sentiment, and share of voice (SoV)
  • Stronger guidance on prompt versioning, language/region coverage, and confidence scoring
  • Updated recommendations for multi-LLM portfolios and model variance control
  • Practical QA steps to reduce false positives and entity drift

1) Initial setup and configuration

1) Create a workspace

  • Invite teammates and assign roles: Admin (billing + providers), Analyst (projects + exports), Viewer (dashboards).
  • Set your default region(s) and language(s). Start with your core market; add adjacent locales later to compare variance.

2) Connect providers and models

  • Add API credentials for the LLMs you plan to survey (use at least three models from different vendors to reduce single-model bias).
  • Tag models by purpose: “fast-scan,” “balanced,” “deep-reasoning.” You’ll sample across tags later.
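A model portfolio with purpose tags can be represented as plain data. This is an illustrative sketch: the model names and the `models_by_tag` helper are hypothetical, not actual Abhord identifiers or APIs.

```python
# Hypothetical model portfolio; names are illustrative placeholders.
MODEL_PORTFOLIO = [
    {"model": "vendor-a/large", "tag": "deep-reasoning"},
    {"model": "vendor-b/medium", "tag": "balanced"},
    {"model": "vendor-c/small", "tag": "fast-scan"},
]

def models_by_tag(portfolio, tag):
    """Return the model names carrying a given purpose tag."""
    return [m["model"] for m in portfolio if m["tag"] == tag]
```

Sampling across tags later is then a matter of picking from each tag's list rather than hard-coding model names into projects.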

3) Define your entities

  • Add your brand, products, and company as canonical entities.
  • Add known aliases, misspellings, and localizations (e.g., product nicknames, ticker, prior brand names).
  • Optional: Add disambiguation notes (e.g., “Not the film; we’re the CRM platform”).
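Alias normalization is the core of entity definition: every raw mention should resolve to one canonical name. A minimal sketch, assuming a simple case-insensitive lookup table (the entity names here are invented for illustration):

```python
# Illustrative alias map: raw strings (lowercased) -> canonical entity.
ALIASES = {
    "acme crm": "Acme CRM",
    "acmecrm": "Acme CRM",
    "acme": "Acme CRM",
}

def canonicalize(mention):
    """Map a raw mention to its canonical entity, or None if unknown."""
    return ALIASES.get(mention.strip().lower())
```

Unknown strings returning `None` is deliberate: unmatched mentions should be reviewed rather than silently counted, which is where disambiguation notes (“Not the film; we’re the CRM platform”) earn their keep.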

4) Guardrails and privacy

  • Enable PII redaction in prompts and outputs.
  • Turn on deduplication of near-identical mentions and activate hallucination checks (evidence validation) where available.

5) Baseline keywords and themes

  • Seed themes like “pricing,” “support,” “security,” “innovation,” and “performance.”
  • Add negative keywords (e.g., unrelated companies with similar names) to cut noise.

Pro tip: Save all the above as a “Workspace Template” so new projects inherit consistent settings.


2) Run your first survey across LLMs

Goal: Ask a portfolio of models the same structured questions and capture consistent, analyzable output.

1) Start a new project

  • Choose a template: “Brand Health Scan” or “Competitor Landscape.”
  • Select languages/regions. Keep English/US first for a clean baseline.

2) Write task prompts

  • Keep questions neutral and specific. Example:

- “List notable brands in project management software and briefly explain why each is mentioned. Return JSON with fields: brand, reason, evidence, confidence.”

  • Avoid leading language (“best,” “top”) unless the research objective is rankings.

3) Define an output schema

  • Mentions: entity_canonical, entity_alias, evidence_snippet, source_hint (if provided), confidence (0–1).
  • Sentiment: score (-1 to +1), polarity (negative/neutral/positive), rationale_short.
  • SoV: auto-computed later; ensure mentions are structured for counting.
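Validating records against the schema above before they enter analysis catches malformed model output early. A minimal sketch of field and range checks (the function names are illustrative, not part of any Abhord API):

```python
def validate_mention(rec):
    """Return a list of problems with a mention record (empty = valid)."""
    errors = []
    for field in ("entity_canonical", "entity_alias", "evidence_snippet"):
        if not rec.get(field):
            errors.append("missing " + field)
    conf = rec.get("confidence")
    if not isinstance(conf, (int, float)) or not 0.0 <= conf <= 1.0:
        errors.append("confidence must be in [0, 1]")
    return errors

def validate_sentiment(rec):
    """Check a sentiment record against the score/polarity schema."""
    errors = []
    score = rec.get("score")
    if not isinstance(score, (int, float)) or not -1.0 <= score <= 1.0:
        errors.append("score must be in [-1, 1]")
    if rec.get("polarity") not in {"negative", "neutral", "positive"}:
        errors.append("polarity must be negative/neutral/positive")
    return errors
```

Records that fail validation are better quarantined for review than dropped: a systematic failure pattern often points at a prompt or model problem, not bad luck.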

4) Choose models and sampling

  • Select 3–5 heterogeneous models. Set sample size per model (e.g., 20–50 prompts each).
  • Enable randomized prompt variants and order shuffling to reduce pattern bias.
  • Fix temperature within a narrow band across models; keep one “exploratory” high-temperature model for breadth.
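The sampling plan above (heterogeneous models, randomized variants, order shuffling) can be sketched as a deterministic assignment with a seeded shuffle, so runs are reproducible. This is an assumption-laden illustration, not Abhord's actual scheduler:

```python
import random

def build_sample_plan(models, prompt_variants, n_per_model, seed=7):
    """Assign shuffled prompt variants to each model for one survey run."""
    rng = random.Random(seed)  # seeded so the plan is reproducible
    plan = []
    for model in models:
        # Cycle through variants to fill the per-model sample size.
        variants = [prompt_variants[i % len(prompt_variants)]
                    for i in range(n_per_model)]
        rng.shuffle(variants)  # order shuffling to reduce pattern bias
        plan.extend({"model": model, "prompt": v} for v in variants)
    return plan
```

Fixing the seed means a rerun asks the same questions in the same order, which isolates model-side variance from sampling noise.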

5) Quality controls

  • Add a couple of “control” prompts with known answers to sanity-check consistency.
  • Turn on automatic retries for rate limits and transient errors.
  • Test-run 5 prompts; inspect outputs; then scale.
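Automatic retries for rate limits and transient errors typically follow an exponential-backoff pattern. A generic sketch, assuming a hypothetical `TransientError` raised by your provider client:

```python
import time

class TransientError(Exception):
    """Raised for rate limits and other retryable failures (illustrative)."""

def call_with_retries(fn, max_retries=3, base_delay=1.0):
    """Call fn, retrying on TransientError with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except TransientError:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Permanent errors (bad credentials, malformed requests) should not be retried; only wrap the failure modes your provider documents as transient.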

Run the job and wait for completion. You’ll land on the Results dashboard.


3) Interpreting results: mentions, sentiment, share of voice

Mentions

  • Definition: Count of times an entity appears in structured results after alias normalization and deduplication.
  • What to check:

- Canonical
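Once mentions are canonicalized and deduplicated, counting them and deriving share of voice is straightforward. A minimal sketch of the computation (the record shape follows the mention schema defined earlier; the function is illustrative, not an Abhord API):

```python
from collections import Counter

def share_of_voice(mentions):
    """Compute each entity's share of total mentions (0-1 fractions)."""
    counts = Counter(m["entity_canonical"] for m in mentions)
    total = sum(counts.values())
    return {entity: n / total for entity, n in counts.items()}
```

Because SoV is a ratio, it only means something relative to a fixed competitor set and survey design; changing either between runs makes the numbers incomparable.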

