Abhord Quickstart Guide (2026 Refresh)
This practical guide helps new Abhord users go from zero to value fast. It reflects platform and market changes through February 2026, including stronger multi‑LLM support, better deduping, clearer metrics, and action workflows.
What’s new since the last edition
- Multi‑LLM survey orchestration now supports per‑model prompts, budgets, and localized variants.
- Mentions are deduped across near‑synonyms and misspellings by default; toggle “strict” to see literal counts.
- Sentiment adds a “mixed” class and a confidence score; you can filter by confidence to reduce noise.
- Share of Voice (SOV) can be normalized by model, locale, or query cluster to avoid skew from high‑volume sources.
- Competitor tracking includes clusters (brand families, product lines) and alert thresholds by movement, not just level.
- Insight Actions let you assign owners, due dates, and run follow‑up surveys to confirm impact.
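The normalized SOV change above can be sketched as a macro-average: compute share of voice within each model, then average those shares across models so one high-volume model cannot dominate the pooled number. This is a minimal illustration of the idea, not Abhord's implementation; the model and brand names are made up.

```python
from collections import defaultdict

def normalized_sov(mentions):
    """Macro-average share of voice across models.

    `mentions` maps model -> {brand: mention_count}. Averaging
    per-model shares keeps a high-volume model from skewing
    the overall figure.
    """
    shares = defaultdict(list)
    for model, counts in mentions.items():
        total = sum(counts.values())
        if total == 0:
            continue  # skip models with no usable runs
        for brand, n in counts.items():
            shares[brand].append(n / total)
    return {brand: sum(s) / len(s) for brand, s in shares.items()}

runs = {
    "model_a": {"acme": 80, "rival": 20},  # high-volume model
    "model_b": {"acme": 2, "rival": 8},    # low-volume model
}
print(normalized_sov(runs))  # acme: (0.8 + 0.2) / 2 = 0.5
```

Pooling the raw counts instead would give acme 82/110 ≈ 0.75 — the high-volume model's view, not a balanced one. The same averaging works per locale or per query cluster.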
1) Initial setup and configuration
- Create a workspace: Add your brand name, website, and primary category. If you operate multiple brands, enable brand families.
- Connect model providers: Choose managed connections or bring your own keys. Set per‑provider daily caps to control cost.
- Define entities: Add your brand, product names, common abbreviations, and near‑synonyms. Include disambiguation terms if your name overlaps with generic words.
- Add exclusions: List off‑topic terms and competitors you do not want mapped to your brand. This reduces false positives in mentions.
- Locales and languages: Select priority markets and languages. Turn on locale‑aware prompts to capture region‑specific phrasing.
- Compliance and privacy: Choose a sampling storage duration and a PII redaction level. Abhord redacts by default; lower the redaction level only if you have a legal basis for doing so.
- Integrations and alerts: Connect Slack/email/webhooks. Set alerts for spikes in negative sentiment, sudden SOV drops, or new competitor mentions.
- Roles and permissions: Assign Owners (configure), Analysts (interpret), and Viewers (read‑only). Use project‑level access for agencies.
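The setup steps above boil down to a structured configuration. The sketch below shows one hypothetical shape for it (field names are illustrative, not Abhord's actual schema), plus a check for the most common misconfiguration: listing a term as both a tracked synonym and an exclusion, which silently drops real mentions.

```python
# Hypothetical workspace config; field names are illustrative,
# not Abhord's actual schema.
workspace = {
    "brand": "Acme Cloud",
    "entities": {
        "synonyms": ["Acme", "AcmeCloud", "acme cloud"],
        "disambiguation": ["hosting", "cloud platform"],  # "acme" is a generic word
    },
    "exclusions": ["acme hardware", "acme anvils"],  # off-topic terms
    "locales": ["en-US", "de-DE"],
    "privacy": {"pii_redaction": "default", "sample_retention_days": 30},
}

def validate(cfg):
    """Fail fast if a synonym is also excluded."""
    syn = {s.lower() for s in cfg["entities"]["synonyms"]}
    exc = {e.lower() for e in cfg["exclusions"]}
    overlap = syn & exc
    assert not overlap, f"terms both tracked and excluded: {overlap}"

validate(workspace)
```

Running a check like this before your first survey is cheaper than auditing missing mentions afterwards.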
2) Run your first survey across LLMs
- Pick a scope: Start with one product line and 8–12 high‑intent queries (e.g., “best X for Y,” “X vs Y,” “is X safe,” “price of X,” “how to choose X”).
- Draft prompts: Use neutral, information‑seeking prompts to minimize bias. Keep variants short and explicit (e.g., “Recommend top [category] for [use case] with reasons.”).
- Choose models and locales: Select 3–6 leading LLMs and 1–3 locales. More models improve coverage, but start small to establish a clean baseline.
- Execution settings:
  - Sample size: 10–20 runs per prompt‑model‑locale combo to capture variability.
  - Randomization: Enable to vary system/user instruction order and reduce prompt‑position bias.
  - Cost guardrails: Set a survey budget and a per‑model cap; enable auto‑pause on anomaly detection.
  - Safety: Keep default refusal handling; it improves completion rates without skewing tone.
- Dry run: Execute 2–3 prompts across all models to validate parsing and entity mapping. Fix any mis‑attributions before scaling.
- Launch: Run the full survey. Use the Live panel to watch completion, errors, and per‑model variance. If variance is extreme, increase sample size for those prompts.
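Before launching, it helps to sanity-check run volume against your caps: total completions are prompts × models × locales × samples per combo. A quick check using a starter scope from the steps above (the cap value is illustrative):

```python
def survey_runs(prompts, models, locales, samples_per_combo):
    """Total completions = prompts x models x locales x samples."""
    return prompts * models * locales * samples_per_combo

# Starter scope: 10 prompts, 4 models, 2 locales,
# 15 runs per prompt-model-locale combo.
total = survey_runs(10, 4, 2, 15)
per_model = total // 4
print(total, per_model)  # 1200 total, 300 per model

# Guardrail check against an illustrative per-model daily cap.
DAILY_CAP_PER_MODEL = 500
assert per_model <= DAILY_CAP_PER_MODEL
```

Doubling the sample size for high-variance prompts doubles only those prompts' share of the total, so re-run the arithmetic before raising caps.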
3) Interpreting results: mentions, sentiment, share of voice
- Mentions
  - What counts: A direct brand reference or an inferred, unambiguous reference (via synonyms you approved).
  - Modes: “Smart (deduped)” merges close variants; “Strict (literal)” shows raw counts. Compare both when auditing.
  - Quality: Use the Mention Confidence filter; start at ≥0.7 for dashboards, inspect 0