Product Guides • 3 min read • Mar 23, 2026 • By Ethan Park

Abhord Quickstart Guide (Refreshed Edition)

This practical guide helps new Abhord users stand up a generative engine optimization (GEO/AEO) program in under a week. It covers setup, running your first cross-LLM survey, interpreting results, competitor tracking, and turning insights into action.

What’s new in this refreshed edition

  • Normalized share-of-voice (SOV) across models and locales: apples-to-apples comparisons are now the default.
  • Source Trace 2.0: improved citation capture and clustering to reveal which pages most influence answers.
  • Multi-locale presets: faster rollout in new markets with language, model, and query-tuning bundles.
  • Mentions+ taxonomy: entity disambiguation for brand variants and product lines (e.g., “Acme” vs “Acme Pro”).
  • Action Playbooks: one-click recommendations mapped to web, content, PR, and product tasks.
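To make the "apples-to-apples" point concrete, here is a minimal sketch of how normalized share-of-voice could work: raw mention counts are converted to shares within each (model, locale) cell, then averaged so every cell counts equally regardless of how many responses it contributed. All names and data here are hypothetical; Abhord computes its normalization internally.

```python
from collections import defaultdict

def normalized_sov(mentions):
    """Share-of-voice per brand within each (model, locale) cell,
    averaged across cells so every model/locale pair weighs equally.

    mentions: list of (model, locale, brand, count) tuples.
    """
    cells = defaultdict(lambda: defaultdict(int))
    for model, locale, brand, count in mentions:
        cells[(model, locale)][brand] += count

    totals = defaultdict(float)
    for cell in cells.values():
        cell_total = sum(cell.values())
        for brand, count in cell.items():
            totals[brand] += count / cell_total
    # Average over cells -> comparable even when one model answers far more.
    return {brand: share / len(cells) for brand, share in totals.items()}

data = [
    ("model_a", "EN-US", "Acme", 30), ("model_a", "EN-US", "Rival", 10),
    ("model_b", "DE-DE", "Acme", 5),  ("model_b", "DE-DE", "Rival", 15),
]
print(normalized_sov(data))  # {'Acme': 0.5, 'Rival': 0.5}
```

Note how raw counts (Acme 35 vs. Rival 25) would overstate Acme just because one cell produced more mentions; the normalized view shows the two brands splitting voice evenly.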

1) Initial setup and configuration

Goal: create a clean data foundation so results are consistent and actionable.

  • Create a workspace

- Enter your organization name, primary domain(s), and brand handles.

- Choose home locale and time zone for reporting.

  • Define entities

- Add your brand, products, and key people as entities.

- Enter canonical names, common aliases, and disallowed collisions (e.g., “Atlas” the product vs. “Atlas” the gym).

- Optional: upload a factsheet (CSV) with product specs, pricing tiers, and must-have claims for validation checks.
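If you use the factsheet option, a simple long-format CSV (one row per entity attribute) is easy to generate and validate. The column names below are illustrative assumptions, not Abhord's actual schema; adapt them to whatever your workspace's validation checks expect.

```python
import csv
import io

# Hypothetical factsheet rows: one attribute per line, flagged if the
# claim must appear in model answers for a validation pass.
rows = [
    {"entity": "Acme Pro", "attribute": "pricing_tier",
     "value": "$49/mo", "must_have_claim": "yes"},
    {"entity": "Acme Pro", "attribute": "max_seats",
     "value": "25", "must_have_claim": "no"},
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["entity", "attribute", "value", "must_have_claim"]
)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Writing to a `StringIO` buffer here keeps the sketch self-contained; in practice you would write to a file and upload it.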

  • Connect sources (optional but recommended)

- Verify site ownership (TXT or file) for deeper crawl and structured data checks.

- Add owned channels you want LLMs to cite (docs, blog, help center).

  • Select model roster

- Start with the default balanced roster spanning leading US/EU and APAC models.

- Keep “Auto-refresh models” on so your studies track newly deployed versions without manual edits.

- Tip: Retain at least one consistent “control model” for longitudinal comparability.

  • Configure locales and languages

- Use multi-locale presets (e.g., EN-US, EN-GB, DE-DE, JA-JP).

- Turn on “regional compliance” if your category has location-specific claims.

  • Set governance and alerts

- Choose PII redaction level.

- Set alert thresholds for sentiment drops (e.g., −10 points week-over-week) and competitor surges (e.g., +5 SOV points).
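The threshold logic amounts to a simple week-over-week delta check. The sketch below shows one way to express it (all function names, data shapes, and example scores are hypothetical; Abhord's alerting runs server-side):

```python
def check_alerts(prev, curr, sentiment_drop=-10, sov_surge=5):
    """Flag brands whose sentiment fell, or whose SOV surged,
    week-over-week.  prev/curr: {brand: {"sentiment": pts, "sov": pts}}.
    """
    alerts = []
    for brand in curr:
        if brand not in prev:
            continue  # no prior week to compare against
        d_sent = curr[brand]["sentiment"] - prev[brand]["sentiment"]
        d_sov = curr[brand]["sov"] - prev[brand]["sov"]
        if d_sent <= sentiment_drop:
            alerts.append((brand, "sentiment_drop", d_sent))
        if d_sov >= sov_surge:
            alerts.append((brand, "competitor_surge", d_sov))
    return alerts

prev = {"Acme": {"sentiment": 72, "sov": 40}, "Rival": {"sentiment": 60, "sov": 30}}
curr = {"Acme": {"sentiment": 60, "sov": 41}, "Rival": {"sentiment": 61, "sov": 36}}
print(check_alerts(prev, curr))
# [('Acme', 'sentiment_drop', -12), ('Rival', 'competitor_surge', 6)]
```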

  • Save as Baseline v1

- This locks current configuration so you can measure improvement against a stable starting point.

2) Running your first survey across LLMs

Goal: capture how models talk about your brand today, by intent and locale.

  • Pick a template

- Start with Brand Perception Baseline or Alternatives & Recommendations.

- Templates include a proven “question bank” for discovery, comparison, pricing, and troubleshooting intents.

  • Define intents and prompts

- Include at least 12–18 prompts spanning:

  - Discovery: “Best X for Y?”, “Top tools for …”

  - Comparison: “Brand A vs Brand B”, “Is Brand A worth it?”

  - Task-based: “How do I … with X?”, “Fix … in X”

  - Commercial: “Pricing for X”, “Discounts/coupons for X”

- Add 2–3 “negative” prompts to probe risk: “Why avoid X?”, “Common complaints about X.”
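A prompt bank along these lines is easy to keep in version control as plain data. The brand names and prompt wordings below are placeholders, and this starter set is deliberately short of the 12–18 target so you can see the count check fire as you expand it:

```python
BRAND, RIVAL = "Acme", "Rival"  # placeholder brand names

prompt_bank = {
    "discovery": [
        "Best tools for brand monitoring?",
        f"Top alternatives to {RIVAL}",
    ],
    "comparison": [
        f"{BRAND} vs {RIVAL}: which is better?",
        f"Is {BRAND} worth it?",
    ],
    "task": [
        f"How do I export a report with {BRAND}?",
        f"Fix a failed sync in {BRAND}",
    ],
    "commercial": [
        f"Pricing for {BRAND}",
        f"Discounts or coupons for {BRAND}",
    ],
    "negative": [  # 2-3 risk probes, per the guidance above
        f"Why avoid {BRAND}?",
        f"Common complaints about {BRAND}",
    ],
}

total = sum(len(prompts) for prompts in prompt_bank.values())
print(total)  # 10 -- add discovery/task prompts to reach the 12-18 range
```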

  • Sampling and run settings

- Sample size: n=50 per model per locale (a solid first pass).

- Temperature sweep: 0.2 and 0.7 to capture deterministic and creative modes.

- Turn on Source Trace 2.0 and Mentions+.
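Before launching, it is worth sizing the run: total responses are the per-cell sample times the number of models, locales, and temperature settings. A quick back-of-the-envelope check, using a hypothetical four-model, four-locale setup:

```python
models = ["model_a", "model_b", "model_c", "control_model"]  # placeholder roster
locales = ["EN-US", "EN-GB", "DE-DE", "JA-JP"]
temperatures = [0.2, 0.7]  # deterministic and creative modes
n_per_cell = 50            # sample size per model per locale

total_responses = n_per_cell * len(models) * len(locales) * len(temperatures)
print(total_responses)  # 50 * 4 * 4 * 2 = 1600
```

Multiply by your prompt count to estimate cost and runtime before you commit to a larger roster.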

Ethan Park

AI Marketing Strategist

Ethan Park brings 13+ years in marketing analytics, SEO, and AI adoption, helping teams connect AI visibility to measurable growth.

Ready to optimize your AI visibility?

Start monitoring how LLMs perceive and recommend your brand with Abhord's GEO platform.