Abhord Quickstart: Practical Product Guide (2026 Refresh)
This refreshed edition (March 2026) helps new Abhord users go from zero to insight fast. It reflects recent platform changes, clearer defaults, and field-tested recommendations.
What’s new in the 2026 refresh
- Workspace-level cost caps and spend alerts
- Improved entity disambiguation (Mentions v2) and sentiment calibration
- Share of Voice weighting by model reliability
- Zero-retention and PII redaction modes for sensitive runs
- Multi-model orchestration presets and stratified sampling
- Drift watchlists and outlier-model flags
1) Initial setup and configuration
- Create your workspace
- Invite teammates and assign roles: Admin (billing and governance), Analyst (create and run), Viewer (read-only).
- Set cost guardrails: monthly cap, per-run limit, and soft alerts at 50/80/100%.
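If you want to mirror the guardrails in your own tooling, here is a minimal Python sketch of the soft-alert logic, assuming you can read current spend programmatically. The thresholds match the 50/80/100% defaults above; all names are illustrative, not Abhord's SDK.

```python
# Minimal sketch of soft spend alerts; thresholds mirror the 50/80/100%
# defaults above. Names are illustrative, not Abhord's actual SDK.
ALERT_THRESHOLDS = (0.50, 0.80, 1.00)

def triggered_alerts(spend_usd: float, monthly_cap_usd: float) -> list[str]:
    """Return a message for each threshold the current spend has crossed."""
    if monthly_cap_usd <= 0:
        raise ValueError("monthly cap must be positive")
    ratio = spend_usd / monthly_cap_usd
    return [f"spend at {int(t * 100)}% of cap" for t in ALERT_THRESHOLDS if ratio >= t]

print(triggered_alerts(850.0, 1000.0))
# ['spend at 50% of cap', 'spend at 80% of cap']
```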
- Connect LLM providers
- Add API keys for the providers you use. Enable at least one “frontier” model, one “balanced-cost” model, and one “open-source” option to avoid single-model bias (a sample mix is sketched below).
- Turn on zero-retention mode if using external providers for sensitive prompts.
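For reference, a three-tier mix might look like the following; provider and model names are placeholders to swap for whatever your accounts expose, not real identifiers.

```python
# Illustrative three-tier provider mix; provider and model names are
# placeholders, not real identifiers.
PROVIDER_MIX = {
    "frontier":      {"provider": "provider_a", "model": "frontier-large"},
    "balanced_cost": {"provider": "provider_b", "model": "mid-tier"},
    "open_source":   {"provider": "self_hosted", "model": "open-weights-8b"},
}
```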
- Define entities and keywords
- Add your brand, products, and common aliases. Include misspellings, acronyms, and regional names.
- Add competitors and map each to product lines, markets, and languages.
- Tip: Start narrow (your flagship product + 2 primary competitors) to keep early SoV signals clean.
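Conceptually, alias coverage amounts to mapping surface variants (misspellings, spacing, hyphenation) to one canonical entity. The plain-Python sketch below shows the idea; the entity names and the regex approach are illustrative, not Abhord's implementation.

```python
import re

# Illustrative alias table: canonical entity -> surface variants, including
# misspellings and spacing differences. Entity names are placeholders.
ALIASES = {
    "Acme Fizz":  ["acme fizz", "acmefizz", "acme-fizz"],
    "Rival Cola": ["rival cola", "rivalcola", "rival kola"],
}

# One case-insensitive pattern per canonical entity.
PATTERNS = {
    canonical: re.compile(r"\b(" + "|".join(map(re.escape, variants)) + r")\b", re.I)
    for canonical, variants in ALIASES.items()
}

def find_mentions(text: str) -> list[str]:
    """Return canonical entities whose aliases appear in the text."""
    return [c for c, pat in PATTERNS.items() if pat.search(text)]

print(find_mentions("I switched from RivalCola to Acme Fizz last month."))
# ['Acme Fizz', 'Rival Cola']
```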
- Configure compliance and privacy
- Enable PII redaction for inputs and outputs.
- Set data retention windows (e.g., 30 or 90 days) and export policies.
- Notifications and exports
- Connect Slack, Teams, or email for spike alerts.
- Enable scheduled CSV/JSON exports to your BI store.
2) Running your first survey across LLMs
Goal: get a directional read on how leading models talk about your brand vs. two competitors.
- Create a survey
- Objective: “Baseline brand perception and purchase drivers.”
- Audience language: start with English; add Spanish or French if those markets matter.
- Timeframe: single-run baseline today; schedule weekly recurrence.
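Put together, the choices above could be captured in a spec like the one below; the field names are hypothetical, not Abhord's actual schema.

```python
# Hypothetical survey spec mirroring the choices above; field names are
# illustrative, not the product's actual schema.
survey = {
    "objective": "Baseline brand perception and purchase drivers",
    "languages": ["en"],          # add "es" or "fr" if those markets matter
    "baseline_run": True,         # single run today
    "schedule": {"type": "recurring", "interval": "weekly"},
}
```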
- Choose your model mix (stratified)
- Frontier (accuracy-oriented)
- Balanced-cost (scalable volume)
- Open-source (transparency and customization)
- Set a minimum of 100 samples per model for stable early estimates; 300+ recommended for launches.
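A simple allocation rule splits a total sample budget evenly across the strata while enforcing the per-model floor; the sketch below shows the arithmetic with placeholder model names.

```python
# Sketch of a stratified sample plan: even split across models with a floor.
# Model names are placeholders carried over from the provider-mix example.
MIN_PER_MODEL = 100  # floor for stable early estimates; use 300+ for launches

def allocate(total_samples: int, models: list[str]) -> dict[str, int]:
    """Split the budget evenly, never going below the per-model floor."""
    per_model = max(total_samples // len(models), MIN_PER_MODEL)
    return {m: per_model for m in models}

print(allocate(900, ["frontier-large", "mid-tier", "open-weights-8b"]))
# {'frontier-large': 300, 'mid-tier': 300, 'open-weights-8b': 300}
```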
- Author prompts using templates
- Perception: “In 3–4 sentences, how do LLMs currently describe [Brand] in [Category]?”
- Purchase drivers: “List top 5 reasons to choose [Brand] vs. [Competitor].”
- Risks: “What concerns do users have about [Brand]?”
- Randomize brand order to reduce position bias. Keep system prompts identical across models.
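Randomizing brand order is easy to do yourself when generating prompts from a template. Here is a minimal, seeded sketch; the brand names are placeholders.

```python
import random

# Sketch: fill the purchase-driver template with brand order shuffled per
# sample to reduce position bias. Seeded so runs are reproducible.
TEMPLATE = "List top 5 reasons to choose {first} vs. {second}."

def build_prompts(brand: str, competitor: str, n: int, seed: int = 42) -> list[str]:
    rng = random.Random(seed)
    prompts = []
    for _ in range(n):
        pair = [brand, competitor]
        rng.shuffle(pair)  # either brand may appear first
        prompts.append(TEMPLATE.format(first=pair[0], second=pair[1]))
    return prompts

for p in build_prompts("Acme Fizz", "Rival Cola", 3):
    print(p)
```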
- Controls and reproducibility
- Use Stable Evaluation mode: fixed seed, temperature 0.2–0.4, and matched top_p.
- Turn on deduplication and near-duplicate clustering to avoid overcounting repeated phrasing (see the sketch after this list).
- Preview cost estimate; confirm spend guardrails before launch.
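The near-duplicate clustering toggle is conceptually similar to the greedy shingle-overlap check sketched below; the 0.8 threshold and 3-word shingles are illustrative defaults, not the platform's actual parameters.

```python
# Sketch of near-duplicate collapsing: responses whose word-shingle sets
# overlap heavily (high Jaccard similarity) are counted once. The threshold
# and shingle size are illustrative, not the platform's parameters.

def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def dedupe(responses: list[str], threshold: float = 0.8) -> list[str]:
    """Greedily keep one representative per near-duplicate cluster."""
    kept: list[str] = []
    for r in responses:
        if all(jaccard(shingles(r), shingles(k)) < threshold for k in kept):
            kept.append(r)
    return kept

print(len(dedupe(["Brand X is reliable and cheap."] * 5
                 + ["Brand X support is slow to respond."])))
# 2 -- the five repeats collapse into one; the distinct response survives
```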
- Run and monitor
- Watch live quotas. If one model throttles, enable smart rebalancing so volume shifts without skewing your sample mix.
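One reasonable implementation of rebalancing shifts a throttled model's unmet quota to the remaining models in proportion to their allocations, preserving their relative mix. The sketch below shows that arithmetic with placeholder names; it is an illustration, not the platform's exact algorithm.

```python
# Sketch: redistribute a throttled model's quota proportionally so the
# relative mix of the remaining models is preserved. Names are placeholders.

def rebalance(quotas: dict[str, int], throttled: str) -> dict[str, int]:
    remaining = {m: q for m, q in quotas.items() if m != throttled}
    total = sum(remaining.values())
    return {m: q + round(quotas[throttled] * q / total) for m, q in remaining.items()}

print(rebalance({"frontier-large": 300, "mid-tier": 300, "open-weights-8b": 300},
                throttled="mid-tier"))
# {'frontier-large': 450, 'open-weights-8b': 450}
```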
3) Interpreting results: mentions, sentiment, share of voice
- Mentions (Mentions v2)
- Definition: occurrences of an entity (explicit or implicit) mapped to its canonical form.
- What’s improved: better alias handling and context windows that disambiguate “Apple” (fruit vs. company).
- Practical check: sample 20 random positives and 20 near-misses to confirm alias coverage; add missing variants.
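The spot check is easy to script against an export of matches. A minimal sketch, where `positives` and `near_misses` stand in for your exported match lists:

```python
import random

# Sketch of the spot check above: pull 20 flagged positives and 20 near-misses
# for manual review. `positives` and `near_misses` stand in for exported lists.

def qa_sample(positives: list[str], near_misses: list[str],
              n: int = 20, seed: int = 7) -> tuple[list[str], list[str]]:
    rng = random.Random(seed)
    return (rng.sample(positives, min(n, len(positives))),
            rng.sample(near_misses, min(n, len(near_misses))))
```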
- Sentiment
- Scale: typically -1 (negative) to +1 (positive) with a confidence score.
- Calibrated ensemble: we aggregate multiple scorers and clamp extremes to reduce sarcasm false-positives.
- Read it right: look at sentiment by topic cluster. A neutral overall score may hide strongly positive “value” mentions and negative “support” mentions.
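In spirit, a calibrated ensemble is a confidence-weighted average with clamped extremes; the sketch below shows that shape. The clamp range and example scores are illustrative, not the production calibration.

```python
# Sketch of a confidence-weighted sentiment ensemble with clamped extremes.
# The clamp range is illustrative, not the platform's actual calibration.

def ensemble_sentiment(scores: list[tuple[float, float]], clamp: float = 0.9) -> float:
    """scores: (sentiment in [-1, 1], confidence in [0, 1]) per scorer."""
    total_conf = sum(c for _, c in scores)
    if total_conf == 0:
        return 0.0
    mean = sum(s * c for s, c in scores) / total_conf
    return max(-clamp, min(clamp, mean))  # clamp to damp sarcasm outliers

print(ensemble_sentiment([(0.8, 0.9), (0.95, 0.6), (-0.2, 0.3)]))  # ~0.68
```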
- Share of Voice (SoV)