Abhord Product Guide (2026 Refresh)
This refreshed edition includes new guidance for multi-model volatility, region-aware tracking, deduplicated mentions, and confidence-weighted share of voice. If you used the prior guide, note the additions: a quick-start preset, improved model roster management, sentiment 2.0 with target-based scoring, and alerting by materiality thresholds.
1) Initial setup and configuration
- Create your workspace
  - Add your brand, products, and key people as entities. Include common names, acronyms, and misspellings to boost recall.
  - Define disambiguation rules (e.g., “Sage” the product vs. “sage” the herb) to reduce false positives; see the sketch below.
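To make the disambiguation idea concrete, here is a minimal sketch that counts an ambiguous alias only when supporting context appears in the answer. The rule schema (`require_any`/`reject_any`) and the context words are assumptions for illustration, not Abhord's actual rule format.

```python
import re

# Hypothetical rule schema: an ambiguous alias counts as a mention only
# when supporting context words appear and conflicting ones do not.
RULES = {
    "Sage": {
        "require_any": ["software", "accounting", "pricing", "integration"],
        "reject_any": ["herb", "plant", "recipe", "tea"],
    },
}

def is_entity_mention(alias: str, answer: str) -> bool:
    text = answer.lower()
    if not re.search(rf"\b{re.escape(alias.lower())}\b", text):
        return False                      # alias not present at all
    rule = RULES.get(alias)
    if rule is None:
        return True                       # unambiguous alias, no rule needed
    if any(word in text for word in rule["reject_any"]):
        return False                      # herb-style context: likely false positive
    return any(word in text for word in rule["require_any"])
```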
- Connect sources and choose model coverage
  - Select LLMs to survey (e.g., ChatGPT-style assistants, Claude-family, Gemini-family, Bing Copilot-style, Perplexity-style, and Llama-based assistants).
  - Enable region variants (US, EU, APAC) where available. Models now localize recommendations more aggressively, so regional coverage improves accuracy.
- Configure the model roster
  - Use the Auto-Refresh roster so new model versions are included without rework. Lock specific models if you need longitudinal comparability.
  - Enable session isolation by default to avoid personalization bleed. A roster sketch follows this list.
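As a rough picture of these options, the settings below show one way a roster could be expressed. The key names and model identifiers are assumptions for illustration; check your workspace settings for the real option names.

```python
# Hypothetical roster settings; key names are illustrative.
roster = {
    "auto_refresh": True,             # pull in new model versions automatically
    "locked_models": [                # pinned for longitudinal comparability
        "assistant-a-2025-06",
        "assistant-b-2025-09",
    ],
    "regions": ["US", "EU", "APAC"],  # region variants where available
    "session_isolation": True,        # fresh session per prompt, no personalization bleed
}
```

Locking trades freshness for comparability: locked entries keep their version even as Auto-Refresh adds newer ones to the rest of the roster.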
- Build your entity dictionary
  - Map each entity to: canonical URL, short description, category, and preferred spelling. This improves clustering, sentiment precision, and deduping; a sample record follows below.
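A sample record might look like the following. The brand and field names are hypothetical, chosen only to show the mapping the dictionary expects.

```python
# Hypothetical entity-dictionary record for an invented brand.
entity = {
    "preferred_spelling": "Acme Analytics",
    "aliases": ["Acme", "ACME Analytics", "Acme Anlaytics"],  # include common misspellings
    "canonical_url": "https://www.example.com",
    "description": "Product analytics platform for B2B teams",
    "category": "Analytics software",
}
```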
- Access and governance
  - Invite marketing, product, PR, and SEO owners; assign roles (viewer, analyst, admin).
  - Set data retention and export policies per workspace.
- Quick-start preset (new)
  - From Templates, pick “Brand Health Essentials” to auto-load a baseline query set, 3-region coverage, and a weekly cadence. The sketch below shows roughly what the preset expands to.
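The exact values come from the template itself; this sketch only shows the shape, and the key names are assumptions.

```python
# Approximate shape of the "Brand Health Essentials" preset (illustrative).
preset = {
    "query_set": "baseline",          # starter prompts across the intent groups in section 2
    "regions": ["US", "EU", "APAC"],  # 3-region coverage
    "cadence": "weekly",
}
```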
2) Running your first survey across LLMs
- Define the questions users actually ask
  - Cover the main intent groups:
    - Discovery: “What is [Brand]?”, “Best [category] tools for [job]?”
    - Comparison: “Is [Brand] better than [Competitor]?”, “[Brand] vs [Competitor] price/features.”
    - Transactional: “How to buy [Product]”, “Discounts for [Brand].”
    - Support/trust: “Is [Brand] legit?”, “Does [Product] integrate with [X]?”
- Create your survey
  - Pick 12–20 high-impact prompts from the above, write them in natural user language, and add 2–3 paraphrases per prompt to reduce phrasing bias.
  - Sampling: start with 30–50 responses per prompt per model per region. For volatile models, run smaller but more frequent batches (e.g., 10 per day for 3 days). A survey-config sketch follows this list.
  - Timing: schedule across different hours; some assistants show time-of-day variability when sourcing newsy content.
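A survey definition along these lines might look like the sketch below. The field names are assumptions, and the prompt shown is just one of the 12–20 you would add.

```python
# Hypothetical survey definition; field names are illustrative.
survey = {
    "prompts": [
        {
            "text": "Best [category] tools for [job]?",
            "paraphrases": [                      # 2-3 per prompt to reduce phrasing bias
                "Which [category] tools would you recommend for [job]?",
                "Top [category] software for [job]?",
            ],
            "intent": "discovery",
        },
        # ...add the remaining prompts across all four intent groups
    ],
    "models": "roster",                # survey every model on the active roster
    "regions": ["US", "EU", "APAC"],
    "responses_per_prompt": 30,        # per model, per region
    "schedule": "spread_hours",        # distribute runs across the day
}
```

Note the volume these defaults imply: 12 prompts × 30 responses × 3 regions on a 5-model roster is already 5,400 responses per cycle, which is why volatile models get smaller, more frequent batches instead.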
- Reduce bias and noise
  - Enable randomization of prompt order and model rotation; the sketch below shows the idea.
  - Keep session length to 1 turn per prompt for clean retrieval-style answers; use 2–3 turns only when measuring follow-up behavior.
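The few lines below illustrate both controls at once. This is a sketch of the technique, not Abhord's internal scheduler.

```python
import random

# Randomize prompt order and rotate models so no model always sees
# prompts in the same position or sequence.
def build_run_order(prompts: list[str], models: list[str], seed: int = 0) -> list[tuple[str, str]]:
    rng = random.Random(seed)                       # seeded for reproducible audits
    pairs = [(p, m) for p in prompts for m in models]
    rng.shuffle(pairs)
    return pairs
```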
- Test pass, then production
  - Run a sandbox survey on 2 prompts to validate captures and entity matching.
  - Promote to production and freeze the question set for the first 2–4 weeks to establish a baseline.
3) Interpreting results: mentions, sentiment, share of voice
- Mentions (now deduplicated)
  - Direct mentions: your entity is explicitly named.
  - Indirect/co-mentions: your entity is implied or referenced via synonyms.
  - New: Abhord clusters near-duplicates across paraphrased answers so the same idea isn’t double-counted. A dedup sketch follows this list.
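To illustrate the deduplication step, the sketch below greedily clusters answers whose word sets overlap heavily (Jaccard similarity). Abhord's actual clustering method isn't specified in this guide; the threshold and tokenization here are assumptions showing the general idea.

```python
# Greedy near-duplicate clustering by Jaccard similarity over word sets.
def dedup_clusters(answers: list[str], threshold: float = 0.8) -> list[list[str]]:
    clusters: list[tuple[set[str], list[str]]] = []
    for text in answers:
        words = set(text.lower().split())
        for rep_words, members in clusters:
            union = words | rep_words
            if union and len(words & rep_words) / len(union) >= threshold:
                members.append(text)          # near-duplicate: fold into cluster
                break
        else:
            clusters.append((words, [text]))  # no close match: start a new cluster
    return [members for _, members in clusters]
```

Each cluster then counts as one mention, so ten paraphrased answers repeating the same claim do not inflate the total.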
- Sentiment 2.0 (target-based)
  - Polarity is scored at the entity level inside multi-entity answers, so your brand can read positive even if a competitor is negative in the same reply.
  - Mixed sentiment handling: see the split (%) and the confidence score; prioritize high-confidence negatives first.
  - Tip: Filter by “Evidence Present” to focus on sentiment scores backed by quoted text from the answer. A scoring sketch follows this list.
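As a sketch of how target-based, confidence-weighted scoring can combine, the function below averages per-answer judgments per entity, weighting each by its confidence. The judgment format is an assumption for illustration.

```python
from collections import defaultdict

# Each judgment targets one entity in one answer, e.g.:
# {"entity": "Acme", "polarity": 1, "confidence": 0.9}   (polarity in {-1, 0, 1})
def entity_sentiment(judgments: list[dict]) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    weights: dict[str, float] = defaultdict(float)
    for j in judgments:
        totals[j["entity"]] += j["polarity"] * j["confidence"]
        weights[j["entity"]] += j["confidence"]
    # Weighted mean in [-1, 1]; low-confidence judgments barely move the score.
    return {entity: totals[entity] / weights[entity] for entity in totals}
```

Because scoring is per entity, a reply that praises your brand while criticizing a competitor contributes a positive judgment for you and a negative one for them.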