Abhord Quick-Start Guide (Refreshed March 2026)
What’s new in this edition
- Unified setup wizard: set up your workspace, sources, and model connectors in one faster flow.
- Multi-model panels: run the same survey across several LLMs with automatic cost and rate-limit management.
- Better normalization: upgraded deduping, entity linking, and sarcasm-aware sentiment for cleaner counts.
- Confidence bands and model agreement: see how stable each metric is and where models disagree.
- Competitor watchlists and alerts: one-time setup with baselines and anomaly detection.
- Action routing: push insights to Slack, Jira, or email with saved playbooks.
1) Initial setup and configuration
Goal: stand up a trustworthy workspace in under 30 minutes.
- Create your workspace
- Name your workspace and set your primary time zone and default language.
- Invite teammates with roles: Admin (settings + billing), Analyst (create/analyze), Viewer (read-only).
- Connect model providers
- Add one or more LLM providers (e.g., your preferred API endpoints). Store API keys in the built-in secrets vault.
- Set guardrails: max tokens per run, monthly spend caps, and retry policy for rate limits/timeouts.
- Recommendation (new): create a “Panel” with at least 3 diverse models for cross-checking sentiment and entity extraction.
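The retry policy for rate limits and timeouts can be sketched as exponential backoff with jitter. This is an illustrative pattern, not Abhord's built-in implementation; `send` stands in for whatever provider call you wrap, and the exception types are placeholders for your provider SDK's errors.

```python
import random
import time

def call_with_retries(send, payload, max_retries=4, base_delay=1.0):
    """Retry a provider call on timeout/connection errors with
    exponential backoff and jitter. `send` is any callable; the
    caught exception types are placeholders for your SDK's errors."""
    for attempt in range(max_retries + 1):
        try:
            return send(payload)
        except (TimeoutError, ConnectionError):
            if attempt == max_retries:
                raise
            # Backoff doubles each attempt; jitter spreads out retries
            # so a burst of rate-limited calls doesn't retry in lockstep.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            time.sleep(delay)
```

Pair this with the monthly spend cap: a retry that would exceed the cap should fail fast instead of backing off.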
- Add data sources
- Start with public web + news, then layer in social/forums and your owned sources (support tickets, NPS verbatims, app reviews).
- For each source: set crawl frequency, language coverage, and region. Toggle PII redaction on.
- Tip: define exclusion rules (e.g., job listings, boilerplate footer text) to reduce noise.
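Exclusion rules like the ones above amount to a keep/drop filter over incoming items. A minimal sketch, assuming regex-based rules (the two patterns below are hypothetical examples for job listings and footer boilerplate):

```python
import re

# Hypothetical exclusion rules: job listings and boilerplate footers.
EXCLUSION_PATTERNS = [
    re.compile(r"\bwe('| a)re hiring\b", re.I),
    re.compile(r"all rights reserved", re.I),
]

def passes_exclusions(text):
    """Return True if the item should be kept (no rule matches)."""
    return not any(p.search(text) for p in EXCLUSION_PATTERNS)
```

Start with a handful of high-precision rules and review what they drop before adding more; over-broad patterns silently shrink your mention counts.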
- Define entities and rules
- Create canonical entities for your brand, products, and competitors. Add aliases, tickers, hashtags, and common misspellings.
- Add negative keywords (e.g., “apple fruit” if you’re Apple Inc.) to prevent off-topic matches.
- New recommendation: enable “strict linking” for your main brand to collapse near-duplicates and co-reference across languages.
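The alias-plus-negative-keyword logic can be illustrated with a crude matcher. Real entity linking uses tokenization and context (and "strict linking" adds co-reference resolution), but the rule ordering below is the key idea: negative keywords veto a match before aliases are checked.

```python
def matches_entity(text, aliases, negative_keywords):
    """Crude substring-based entity matcher, for illustration only.
    A text counts as a mention if any alias appears and no
    negative keyword does."""
    lowered = text.lower()
    # Negative keywords take priority: one hit disqualifies the item.
    if any(neg.lower() in lowered for neg in negative_keywords):
        return False
    return any(alias.lower() in lowered for alias in aliases)
```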
2) Running your first survey across LLMs
Goal: collect an initial pulse on brand perception and topic themes within 24–48 hours.
- Start a new project > choose “Brand Perception Pulse” (or a blank template if you prefer)
- Time window: last 30 days is a good baseline.
- Geography/language: begin where you have the most volume; add more once you validate signal quality.
- Configure your questionnaire
- Core questions (edit as needed):
- Is this a mention of [Brand/Product]? Which entity?
- Sentiment (negative/neutral/positive) and intensity (low/med/high).
- What is the main theme? (select or suggest)
- Does the content express purchase, churn, or recommendation intent?
- New recommendation: ask for “rationale snippets” so models cite the exact text driving the label.
- Select your LLM panel
- Choose 3–5 models with different strengths (fast + cheap; accurate + slower).
- Sampling plan: set a per-model sample size (e.g., 1,000 items each) or use adaptive sampling that stops once confidence bands narrow.
- Run a small pilot
- Process 100–200 items per model first.
- Inspect disagreements: refine prompts or add examples to your instructions.
- Lock the survey, then scale to your full sample or continuous monitoring.
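Inspecting disagreements during the pilot boils down to a majority vote plus an agreement score per item. A minimal sketch, assuming each item carries one label per panel model:

```python
from collections import Counter

def majority_and_agreement(labels):
    """Majority label across the panel and the fraction of models
    that voted for it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

def flag_disagreements(panel_labels, threshold=0.67):
    """Return item ids where panel agreement falls below `threshold`.
    `panel_labels` maps item_id -> list of per-model labels."""
    flagged = []
    for item_id, labels in panel_labels.items():
        _, agreement = majority_and_agreement(labels)
        if agreement < threshold:
            flagged.append(item_id)
    return flagged
```

Review the flagged items first: they are where a prompt tweak or an added few-shot example pays off most before you lock the survey.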
- Cost and performance tips
- Cap max tokens and enable summarization-only mode on long threads.
- Use language-specific models to avoid translation bias when you have enough volume in that language.
3) Interpreting results: mentions, sentiment, share of voice
Goal: read metrics with appropriate skepticism and act only on stable signals.
- Mentions
- “Mentions” is the count of items that the models link to your entity after deduping and spam filtering.
- Use unique-author and unique-domain views to avoid over-counting serial posters.
- New: toggle “strict linking” to see the impact of tighter entity resolution.
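The raw, unique-author, and unique-domain views differ only in what they count as "one mention." A sketch of the three counts, assuming each item records its author and source domain (illustrative field names):

```python
def mention_counts(items):
    """Raw vs. unique-author vs. unique-domain mention counts.
    `items` is a list of dicts with 'author' and 'domain' keys
    (placeholder field names)."""
    return {
        "raw": len(items),
        "unique_authors": len({i["author"] for i in items}),
        "unique_domains": len({i["domain"] for i in items}),
    }
```

A large gap between raw and unique-author counts is the signature of serial posters inflating the metric.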
- Sentiment
- Default view shows Positive/Neutral/Negative with intensity. Drill down by source, topic, and geography.
- Pay attention to confidence bands: wide bands mean the estimate is unstable, so treat movement inside the band as noise rather than a real shift.
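One way to gauge a sentiment metric's stability is a bootstrap band over the labeled sample: resample the labels with replacement and take the 2.5th and 97.5th percentiles of the resulting shares. This is a generic statistical sketch, not Abhord's internal method.

```python
import random

def bootstrap_band(labels, target="positive", n_boot=2000, seed=0):
    """Approximate 95% confidence band for the share of `target`
    labels via bootstrap resampling. A wide band signals an
    unstable metric that needs more data before acting."""
    rng = random.Random(seed)
    n = len(labels)
    shares = []
    for _ in range(n_boot):
        sample = [labels[rng.randrange(n)] for _ in range(n)]
        shares.append(sum(1 for s in sample if s == target) / n)
    shares.sort()
    return shares[int(0.025 * n_boot)], shares[int(0.975 * n_boot)]
```

With only 100 labeled items and a 70% positive share, the band spans roughly ±9 points; quadrupling the sample halves it.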