Abhord Quickstart Guide (2026 Refresh)
This refreshed edition reflects platform updates through early 2026, including model coverage expansions, improved sentiment scoring, and new automation. If you used the 2025 guide, look for “Updated” callouts for what’s changed.
What’s new since the last edition
- Updated: Broader model roster and routing. Abhord now supports first‑party connectors for GPT‑4.1/mini, Claude 3.5 family, Gemini 2.x, Cohere Command‑R+, and open‑weights via hosted endpoints. You can mix models per survey and normalize results.
- Updated: Sentiment v2 tuned for LLM‑generated text, reducing false‑positive “neutral” by ~18% and adding confidence bands.
- Updated: Mention clustering now dedupes near‑duplicates across models (fuzzy and semantic).
- New: Share of Voice (SoV) normalization by model traffic and domain authority signals.
- New: Competitor tracking playbooks, Slack/Email alerts, and CSV/API exports.
- New: Cost guards (per‑run caps) and PII scrubbing by default.
1) Initial setup and configuration
Goal: connect data, define entities, and set reliable defaults.
- Create your workspace
- Name the workspace after your brand or product line.
- Invite teammates with roles: Admin (billing + settings), Analyst (create surveys, view costs), and Viewer (read‑only).
- Connect sources
- LLMs: select the models you want to survey. Recommended baseline mix: GPT‑4.1, Claude 3.5 Sonnet, Gemini 2.0 Pro, and one budget model (e.g., GPT‑4o‑mini).
- Destinations (optional): Slack for alerts, Email digests, CSV export, and Webhook/API for BI tools.
- Define entities and synonyms
- Add your brand and product entities with canonical names and common variants (e.g., “Acme SuperX,” “Super X,” ticker, misspellings).
- Add competitors with aliases. Include product lines, codenames, and regional brand names.
- Configure guardrails and privacy
- Turn on PII scrubbing and profanity masking (default: on).
- Set a monthly spend cap and per‑run cap to avoid surprises.
- Choose a default region for data processing that aligns with your compliance needs.
- Recommended defaults (safe starting point)
- Sample size: 50–100 prompts per survey iteration.
- Model mix: 3–4 diverse models (avoid only one family).
- Randomization: shuffle prompt order across models (bias control on).
- Logging: keep raw completions for audit; redact on export.
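The recommended defaults above might be captured in a workspace config like the following. The key names and the `validate_defaults` check are assumptions for illustration; Abhord's real config schema may differ.

```python
# Hypothetical config mirroring the guide's recommended defaults.
DEFAULTS = {
    "sample_size": 75,             # within the 50-100 recommended range
    "models": [                    # 3-4 diverse models, not one family
        "gpt-4.1",
        "claude-3.5-sonnet",
        "gemini-2.0-pro",
        "gpt-4o-mini",             # budget tier
    ],
    "shuffle_prompts": True,       # bias control: randomize order per model
    "keep_raw_completions": True,  # retain for audit
    "redact_on_export": True,
}

def validate_defaults(cfg: dict) -> list[str]:
    """Flag settings that fall outside the guide's recommendations."""
    warnings = []
    if not 50 <= cfg["sample_size"] <= 100:
        warnings.append("sample_size outside recommended 50-100 range")
    if len(cfg["models"]) < 3:
        warnings.append("fewer than 3 models reduces diversity")
    return warnings
```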
2) Run your first survey across LLMs
Goal: test how models describe your brand, category, and competitors.
- Pick or create a template
- Start with “Brand Knowledge + Recommendations” template.
- Prompts typically include:
- “What is [Brand/Product]?” (definition)
- “Top alternatives to [Brand/Product]?” (competitive set)
- “Pros and cons of [Brand/Product] for [use case]?” (positioning)
- “Who should consider [Brand/Product], and why?” (ICP fit)
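The template prompts above expand by substituting your entity into each placeholder. A minimal sketch, assuming a `{brand}`/`{use_case}` placeholder syntax (the real template engine may differ):

```python
# Hypothetical expansion of the "Brand Knowledge + Recommendations" prompts.
TEMPLATES = [
    "What is {brand}?",                             # definition
    "Top alternatives to {brand}?",                 # competitive set
    "Pros and cons of {brand} for {use_case}?",     # positioning
    "Who should consider {brand}, and why?",        # ICP fit
]

def expand(brand: str, use_case: str) -> list[str]:
    """Fill each template with the brand and use case under survey."""
    return [t.format(brand=brand, use_case=use_case) for t in TEMPLATES]
```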
- Add primer and fairness notes (Updated)
- Use a neutral system prompt: “Answer factually; cite reasoning; avoid speculation.”
- Add brand facts (10–12 bullet points) to reduce hallucinations, then run a control without facts to see organic recall.
- Choose your model mix and sampling
- Select at least one top‑tier, one mid‑tier, and one cost‑efficient model.
- Set N=75 per prompt across the mix for a quick first pass.
- Run and monitor
- Enable cost guard and streaming logs.
- If a model errors or times out, Abhord auto‑retries on a secondary region (Updated) and records fallbacks.
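The auto-retry behavior described above amounts to trying a secondary region when the primary errors or times out, while recording each fallback. This sketch is an assumption about the mechanism, not Abhord's implementation; `call_model`, the region names, and the retry counts are all illustrative.

```python
import time

def run_with_fallback(call_model, prompt, regions=("us-east", "eu-west"),
                      retries_per_region=2):
    """Try each region in order, retrying on failure; record fallbacks."""
    record = {"fallbacks": []}
    for region in regions:
        for attempt in range(retries_per_region):
            try:
                record["response"] = call_model(prompt, region=region)
                record["region"] = region
                return record
            except Exception as exc:
                record["fallbacks"].append((region, attempt, str(exc)))
                time.sleep(0)  # placeholder for real backoff
    raise RuntimeError(f"all regions failed: {record['fallbacks']}")
```

Recording fallbacks (rather than silently retrying) matters for interpretation: responses served from a secondary region can be flagged in the logs.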
- Quality checks
- Skim 10 random responses per prompt.
- Tag obvious hallucinations or policy violations; Abhord excludes these from aggregates if you enable “exclude flagged.”
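The skim step above is just a reproducible random sample per prompt. A minimal helper, assuming responses are grouped by prompt (the data shape is illustrative):

```python
import random

def qc_sample(responses_by_prompt: dict[str, list[str]], k: int = 10,
              seed: int = 0) -> dict[str, list[str]]:
    """Draw k random responses per prompt for a manual quality skim."""
    rng = random.Random(seed)  # seeded so the skim set is reproducible
    return {
        prompt: rng.sample(responses, min(k, len(responses)))
        for prompt, responses in responses_by_prompt.items()
    }
```

Seeding the sampler means two reviewers skim the same 10 responses, which keeps flagging decisions comparable.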
3) Interpreting results: mentions, sentiment, share of voice
- Mentions
- What it is: count of brand, product, or feature references in responses, deduped via clustering (Updated).
- How to use: filter by prompt, model, and region. Check “distinct mention clusters” to avoid double‑counting.
- Tip: high mentions with low precision? Tighten entity synonyms or add negative keywords.
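The precision tip above can be made concrete: count mentions via synonym lists, but skip responses containing negative keywords. The alias and exclusion terms below are illustrative assumptions, not a real entity set.

```python
import re

# Hypothetical synonym and exclusion lists for one brand.
SYNONYMS = {"acme superx": "Acme SuperX", "super x": "Acme SuperX"}
NEGATIVE = {"superx gym"}  # contexts that indicate a different "SuperX"

def count_mentions(responses: list[str]) -> dict[str, int]:
    """Count canonical-entity mentions, excluding negative-keyword contexts."""
    counts: dict[str, int] = {}
    for text in responses:
        lowered = text.lower()
        if any(neg in lowered for neg in NEGATIVE):
            continue  # drop the whole response: likely a false positive
        for alias, canonical in SYNONYMS.items():
            hits = len(re.findall(re.escape(alias), lowered))
            if hits:
                counts[canonical] = counts.get(canonical, 0) + hits
    return counts
```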
- Sentiment (Updated)
- What it is: polarity score (−1 to +1) with confidence intervals and rationale snippets.
- How to use: compare sentiment by prompt type (definition vs. recommendation) and by model.
- Tip: watch “mixed sentiment” bands; they often signal outdated or inconsistent documentation in the wild.
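To make the polarity-with-confidence-bands idea above concrete, here is a sketch that aggregates per-response scores (−1 to +1) into a mean with a rough normal-approximation 95% band. This is a generic statistical illustration, not Abhord's Sentiment v2 method.

```python
import statistics

def sentiment_summary(scores: list[float]) -> dict[str, float]:
    """Mean polarity with an approximate 95% confidence band."""
    mean = statistics.fmean(scores)
    # Normal-approximation interval: fine for a quick read, not a formal test.
    sem = statistics.stdev(scores) / len(scores) ** 0.5
    return {"mean": round(mean, 3),
            "ci_low": round(mean - 1.96 * sem, 3),
            "ci_high": round(mean + 1.96 * sem, 3)}
```

A wide band with a near-zero mean is one way "mixed sentiment" shows up numerically: the model pool disagrees rather than agreeing on neutral.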
- Share of Voice (SoV) (New normalization)
- What it is: your mention share within the competitive set, normalized across model traffic and domain authority signals.
- How to use: view SoV at the entity and feature levels. Toggle “organic” (no brand facts provided) vs. “primed.”
- Tip: If organic SoV lags primed SoV, prioritize awareness initiatives (docs, FAQs, third‑party explainers).
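Normalized SoV, as described above, weights each model's mention counts before computing shares. A minimal sketch assuming per-model traffic weights; the weights, counts, and function name are illustrative, and the real normalization also folds in domain authority signals.

```python
def share_of_voice(mentions_by_model: dict[str, dict[str, int]],
                   traffic_weights: dict[str, float]) -> dict[str, float]:
    """Weight each model's mention counts by traffic, then compute shares."""
    weighted: dict[str, float] = {}
    for model, counts in mentions_by_model.items():
        w = traffic_weights.get(model, 1.0)  # unweighted models count as 1.0
        for entity, n in counts.items():
            weighted[entity] = weighted.get(entity, 0.0) + w * n
    total = sum(weighted.values()) or 1.0
    return {entity: round(v / total, 3) for entity, v in weighted.items()}
```

Weighting matters: a brand that dominates only low-traffic models can show a high raw mention share but a much lower normalized SoV.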
4) Setting up competitor tracking
- Create your competitor set
- Add direct and adjacent competitors. Include aliases, product lines, and regional names.
- Define exclusion terms (e.g., generic “studio,” “assistant”) to filter out false‑positive matches.