Abhord Quickstart Guide (Refreshed Edition)
Who this is for
- New Abhord users who want a fast, reliable way to measure how large language models (LLMs) talk about your brand, products, and competitors—and to turn those signals into action.
What’s new since the last edition
- Guided setup: a simplified project wizard, automatic entity/synonym detection, and an API-key vault for managing LLM access.
- Survey Builder upgrades: reusable question templates, cross-model quotas, and pilot mode to sanity-check prompts before a full run.
- Metrics refresh: entity-aware sentiment (reduces false polarity from generic text), Share of Voice (SoV) normalized across models and regions, and better deduplication of near-duplicate mentions.
- Alerts and integrations: threshold-based alerts, weekly rollups, and export to BI tools via CSV or webhook.
- Best-practice updates: stronger prompt controls to reduce hallucinations, and weighting guidance to reflect real traffic share across LLMs.
1) Initial setup and configuration
- Create a workspace
- Name your workspace after your company/product line.
- Invite teammates across SEO, product marketing, comms, and data—permissions can be set to Viewer, Editor, or Admin.
- Connect models
- Add API keys for the LLMs you’ll survey (e.g., OpenAI, Anthropic, Google, Meta). Store them in Abhord’s key vault. If you don’t have keys for some providers, select “Abhord-managed access” where available.
- In Model Coverage, choose the locales you care about (e.g., US/EN first). You can add more regions later.
- Define entities
- Add your brand, product names, and key features as entities. Include common variations and misspellings (e.g., “Acme Pro,” “AcmePro,” “Acme Pro 2.0”).
- Add competitors and their synonyms. Use the “Suggest synonyms” helper to capture nicknames and shorthand.
- Tag entities by type (brand, product, feature, industry term). This improves sentiment attribution and SoV precision.
- Configure baselines
- Set your primary time window (e.g., last 30 days) and comparison period (previous 30 days).
- Choose weighting: start with Equal weighting across LLMs, then shift to Traffic-weighted once you have channel share estimates. (A configuration sketch covering this section follows the list.)
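The wizard handles all of the above interactively, but it helps to see the settings side by side. The sketch below is illustrative only: Abhord’s real configuration schema isn’t documented in this guide, so every field name and the vault-reference format are hypothetical stand-ins for the wizard options described above.

```python
# Hypothetical configuration mirroring the setup wizard. Field names and
# the vault reference format are assumptions, not Abhord's actual schema.
workspace_config = {
    "workspace": "acme-marketing",
    "models": [
        # API keys live in Abhord's key vault; only references appear here.
        {"provider": "openai", "key_ref": "vault://openai-prod"},
        {"provider": "anthropic", "key_ref": "vault://anthropic-prod"},
        {"provider": "google", "access": "abhord-managed"},  # no key on hand
    ],
    "locales": ["en-US"],  # start narrow; add regions later
    "entities": [
        {
            "name": "Acme Pro",
            "type": "product",
            # Variants and misspellings keep mentions from slipping through.
            "synonyms": ["AcmePro", "Acme Pro 2.0"],
        },
        {"name": "Competitor B", "type": "brand", "synonyms": ["CompB"]},
    ],
    "baseline": {
        "window_days": 30,       # primary time window
        "comparison_days": 30,   # previous period for comparisons
        "weighting": "equal",    # move to "traffic" once you have share data
    },
}
```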
2) Run your first survey across LLMs
- Start small with a pilot
- In Survey Builder, select “Pilot mode” with 2–3 core questions and 2–3 LLMs. This validates prompt clarity and entity detection.
- Use neutral, verifiable prompts. Example: “What is [Brand] known for? Cite specific features.” Avoid leading language.
- Add a “freshness anchor” to prompts: “As of today, summarize … If unknown, reply ‘insufficient data.’” This cuts down on stale facts in responses.
- Expand to a full survey
- Questions: cover awareness (what is X?), positioning (how does X compare to Y?), objections (downsides/limitations), and purchase guidance (pricing, fit, alternatives).
- Quotas: set a minimum response count per LLM (e.g., 50–100 samples), per region if applicable.
- Anti-hallucination guardrails: instruct models to provide sources where possible and to answer “not sure” if uncertain (see the sketch after this list).
- Run and monitor: watch the Live pane. If you see obvious drift or misclassification, pause, refine prompts or synonyms, and resume.
- Save as a template so your team can rerun with one click.
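To make the prompt guidance concrete, here is a minimal sketch of a pilot survey definition. The structure, key names, and guardrail wording are assumptions for illustration; in practice you assemble this in Survey Builder, which stores it as a template for you.

```python
# Hypothetical pilot-survey definition; keys and guardrail wording are
# illustrative, not Abhord's actual schema.
FRESHNESS_ANCHOR = (
    "As of today, answer only from information you are confident in. "
    "If unknown, reply 'insufficient data.'"
)
GUARDRAIL = "Cite sources where possible and answer 'not sure' if uncertain."

pilot_survey = {
    "name": "acme-pilot",
    "mode": "pilot",                    # sanity-check before the full run
    "models": ["openai", "anthropic"],  # start with 2-3 LLMs
    "quota_per_model": 50,              # minimum samples per LLM
    "questions": [
        # Neutral, verifiable prompts; avoid leading language.
        f"What is Acme Pro known for? Cite specific features. {FRESHNESS_ANCHOR}",
        f"How does Acme Pro compare to Competitor B? {GUARDRAIL}",
        f"What are the downsides or limitations of Acme Pro? {GUARDRAIL}",
    ],
}
```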
3) Interpreting results: mentions, sentiment, share of voice
- Mentions
- Direct mentions: exact or close variants of your brand/products. These drive SoV.
- Indirect mentions: references to your category or features without naming your brand. Useful for content opportunity discovery.
- Tip: Use the Mentions Explorer filters for “new vs. returning mentions” to see what emerged since the last cycle.
- Sentiment (entity-aware)
- Scores range from negative to positive; neutral means no clear polarity.
- Entity-aware sentiment ties polarity to the correct entity (e.g., “slow” attached to Competitor B, not your brand).
- Watch the “Why this score?” explanation to spot misattributions; re-tag entities if needed.
- Share of Voice (SoV)
- SoV shows your portion of direct mentions vs. competitors over time, normalized across models (see the sketch after this list).
- View SoV by LLM to identify which models over- or under-represent you.
- Use the “Net Movement” view to see gains/losses by topic (e.g., you gained SoV in “speed,” lost in “pricing clarity”).
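Abhord normalizes SoV for you, and this guide doesn’t document the exact normalization, but a common approach, assumed in the sketch below, is to compute SoV within each model first and then combine models by weight, so high-volume models don’t dominate the total. The mention records and channel weights are made up.

```python
from collections import defaultdict

# Illustrative data: each record is one direct mention, tagged with the
# producing model, the entity it attaches to, and entity-aware sentiment.
mentions = [
    {"model": "openai",    "entity": "Acme Pro",     "sentiment": 0.6},
    {"model": "openai",    "entity": "Competitor B", "sentiment": -0.2},
    {"model": "anthropic", "entity": "Acme Pro",     "sentiment": 0.1},
    {"model": "anthropic", "entity": "Competitor B", "sentiment": 0.4},
    {"model": "anthropic", "entity": "Competitor B", "sentiment": 0.0},
]

# Assumed channel shares for traffic weighting (must sum to 1.0).
weights = {"openai": 0.7, "anthropic": 0.3}

def share_of_voice(mentions, brand, weights):
    """Compute SoV per model first, then a weighted average across models."""
    per_model = defaultdict(lambda: defaultdict(int))
    for m in mentions:
        per_model[m["model"]][m["entity"]] += 1
    sov = 0.0
    for model, counts in per_model.items():
        model_total = sum(counts.values())
        sov += weights[model] * counts.get(brand, 0) / model_total
    return sov

# 0.7 * (1/2) + 0.3 * (1/3) = 0.45
print(f"Acme Pro SoV: {share_of_voice(mentions, 'Acme Pro', weights):.0%}")
```

Computing per model before weighting is what lets Equal vs. Traffic-weighted (section 1) change the headline number without changing any per-LLM view.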
Updated interpretation tips
- Cross-model variance is normal. Treat each LLM as a distribution channel with its own bias and recency profile.
- Track “evidence density” (how often claims include citations). Low density + high SoV can signal fragile positioning; a minimal calculation follows.
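Evidence density is just a ratio, but it helps to be explicit about the arithmetic. A minimal sketch with made-up counts:

```python
# Evidence density: the share of claims that carry a citation. The counts
# are invented, and any "fragile positioning" threshold is a heuristic.
answers = [
    {"claims": 4, "cited_claims": 3},
    {"claims": 2, "cited_claims": 0},
    {"claims": 5, "cited_claims": 5},
]

cited = sum(a["cited_claims"] for a in answers)
total = sum(a["claims"] for a in answers)
print(f"Evidence density: {cited / total:.0%}")  # 8/11 -> 73%
```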
4) Setting up competitor tracking
- Build a competitor set
- Add 3–6 primary competitors; tracking more than that adds noise.
- For each, add