Product Guides • 3 min read • Mar 04, 2026 • By Maya Patel

Abhord Quickstart Guide (Refreshed March 2026)

This practical guide helps new Abhord users stand up a reliable LLM survey program, read the metrics that matter, and turn insights into action.

What’s new in this 2026 refresh

  • Model mix guidance: pair at least one frontier model with one open or cost‑efficient model to reduce bias and improve coverage.
  • Structured outputs by default: use JSON extraction prompts for stable mentions and sentiment; avoid free‑form output when you plan to trend results over time.
  • Position‑aware share of voice (SoV): give more weight to answers that appear earlier or in “single best” responses when making decisions.
  • Cadence and size updates: for directional reads, 50–100 answers per model is often enough; use Wilson intervals to judge significance (see the sketch after this list).
  • Alerting thresholds: act on ≥5 percentage‑point SoV moves sustained across two runs, or ≥10pp single‑run spikes.
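Two of the updates above are easy to make concrete. Below is a minimal Python sketch of both a position‑weighted SoV, where earlier mentions count more, and a Wilson score interval for judging whether a mention rate is significant. The 1/rank weighting and the toy answers are illustrative assumptions, not Abhord’s internal scoring.

```python
import math

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion k/n."""
    if n == 0:
        return (0.0, 0.0)
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (max(0.0, center - half), min(1.0, center + half))

def position_weighted_sov(answers: list[list[str]], brand: str) -> float:
    """SoV where earlier mentions count more; 1/rank weights are illustrative."""
    brand_w = total_w = 0.0
    for mentions in answers:
        for rank, name in enumerate(mentions, start=1):
            w = 1.0 / rank                  # rank 1 gets full weight
            total_w += w
            if name == brand:
                brand_w += w
    return brand_w / total_w if total_w else 0.0

# Each answer is the ordered list of brands a model mentioned (first = rank 1).
answers = [["Acme", "Rival"], ["Rival"], ["Acme"]]

print(f"weighted SoV: {position_weighted_sov(answers, 'Acme'):.2f}")  # 0.57
hits = sum("Acme" in a for a in answers)   # answers mentioning the brand at all
low, high = wilson_interval(hits, len(answers))
print(f"mention-rate CI: [{low:.2f}, {high:.2f}]")  # wide at n=3 -> sample more
```

Note how wide the interval is at tiny sample sizes; this is why 50–100 answers per model is the floor for a stable read.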

1) Initial setup and configuration

1) Create your workspace

  • Projects: One per brand, market, or product line.
  • Roles: Assign at least one Admin (settings), one Analyst (surveys/dashboards), and one Editor (prompts/entities).

2) Connect model providers

  • Bring-your-own keys for the models you plan to test (e.g., frontier + cost‑efficient/open models).
  • Set per‑provider rate limits and concurrency to avoid throttling during runs (a minimal sketch follows).
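If you drive runs from your own scripts, a semaphore per provider is a simple way to cap concurrency. This is a minimal asyncio sketch; the provider names and caps are hypothetical, and the sleep stands in for a real API request.

```python
import asyncio

# Hypothetical per-provider concurrency caps; tune to each provider's limits.
LIMITS = {"frontier": 4, "cost_efficient": 8}
SEMAPHORES = {name: asyncio.Semaphore(cap) for name, cap in LIMITS.items()}

async def call_model(provider: str, prompt: str) -> str:
    async with SEMAPHORES[provider]:      # never exceed this provider's cap
        await asyncio.sleep(0.1)          # stand-in for the real API request
        return f"{provider}: answer to {prompt!r}"

async def main() -> None:
    prompts = [f"prompt {i}" for i in range(10)]
    tasks = [call_model(p, q) for p in LIMITS for q in prompts]
    results = await asyncio.gather(*tasks)
    print(f"collected {len(results)} answers")

asyncio.run(main())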

3) Define entities and synonyms

  • Brand entity: canonical name plus common variants (e.g., “Acme”, “Acme Co.”, ticker, product names).
  • Competitors: list each rival and their known aliases; add exclusions (e.g., “Acme Brick” if unrelated).
  • Save to your global dictionary so extraction is consistent across surveys (example below).
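As a concrete example, a dictionary entry might look like this sketch; the field names are illustrative assumptions, not Abhord’s schema.

```python
# Illustrative entity dictionary; field names are assumptions, not Abhord's schema.
ENTITIES = {
    "brand": {
        "canonical": "Acme",
        "aliases": ["Acme Co.", "ACME", "Acme Analytics"],  # common variants
        "exclusions": ["Acme Brick"],                       # unrelated namesakes
    },
    "competitors": [
        {"canonical": "Rival", "aliases": ["Rival Inc.", "RivalAI"], "exclusions": []},
    ],
}
```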

4) Configure defaults

  • Output mode: JSON or structured spans for extraction tasks.
  • Sampling: temperature 0.2–0.4 for extraction; 0.6–0.8 for generative discovery. Keep top_p at 1.0 initially.
  • Token limits: cap max_tokens to prevent truncation; 512–1024 is safe for short QA.
  • Random seeds: fix a seed for reproducibility when supported (see the example after this list).
  • Redaction: enable PII redaction if you’ll paste proprietary queries.
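As one concrete example of these defaults, here is how they map onto a direct provider call using the OpenAI Python SDK; the model name and prompt are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",          # placeholder; use whichever model you test
    messages=[{"role": "user", "content": "Top alternatives to Acme?"}],
    temperature=0.3,              # 0.2–0.4 for extraction tasks
    top_p=1.0,                    # leave at 1.0 initially
    max_tokens=768,               # 512–1024 is safe for short QA
    seed=42,                      # fixed seed for reproducibility where supported
)
print(resp.choices[0].message.content)
```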

Pro tip: Add a staging project to test prompts and extraction rules before deploying to your main dashboards.


2) Run your first survey across LLMs

1) Pick a use case

  • Brand QA: “Who makes the best X?”, “Is [Brand] trustworthy?”, “Top alternatives to [Brand]?”
  • Navigational/transactional: “Where to buy…”, “Pricing for…”, “Compare [Brand] vs [Competitor].”
  • Informational: “What is [Category], and who are the leading providers?”

2) Create a survey

  • Query set: 10–25 high‑intent prompts that reflect how real users ask. Include exact, comparative, and “best of” forms (see the example after this list).
  • Models: choose 2–4 (at least one frontier + one cost‑efficient/open).
  • Samples: aim for 50–100 answers per model to get a stable first read.
  • Geography/language: specify locale to match your market.
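A first query set can be a short list of templates covering those forms; the brand, competitor, and sample count below are placeholders.

```python
BRAND, COMPETITOR = "Acme", "Rival"

QUERY_SET = [
    f"Is {BRAND} trustworthy?",                       # exact / brand QA
    f"Compare {BRAND} vs {COMPETITOR}.",              # comparative
    f"What are the top alternatives to {BRAND}?",     # "best of" / alternatives
    "Who makes the best survey analytics platform?",  # category-level, unprompted
]

# Aim for 50–100 answers per model: repeat each query to reach the target.
SAMPLES_PER_MODEL = 80
REPEATS = SAMPLES_PER_MODEL // len(QUERY_SET)         # 20 runs of each query
```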

3) Prompt design

  • Use neutral, user‑style phrasing; avoid leading questions that steer the model toward your brand.
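Per the structured-output guidance in the refresh notes, one way to stabilize extraction is to append a JSON instruction to every survey prompt. The schema below is an illustrative sketch, not a required Abhord format.

```python
# Illustrative structured-output instruction; the JSON schema is an assumption,
# not an Abhord-mandated format.
EXTRACTION_SUFFIX = """
Answer the question, then return ONLY a JSON object shaped like:
{
  "mentions": [{"brand": "<name>", "rank": <1-based position in the answer>}],
  "sentiment": {"<brand>": "positive|neutral|negative"}
}
"""

def build_prompt(question: str) -> str:
    """Combine a user-style question with the structured-output instruction."""
    return question.strip() + "\n" + EXTRACTION_SUFFIX
```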

Maya Patel

Director of AI Search Strategy

Maya Patel has 12+ years in SEO and AI-driven marketing, leading enterprise programs in search visibility, content strategy, and GEO optimization.

Ready to optimize your AI visibility?

Start monitoring how LLMs perceive and recommend your brand with Abhord's GEO platform.