Your Dashboard

📱 Understanding Your Dashboard

AICarma organizes your data into key objects. Each one gives you a different lens on your AI visibility — and each one tells you exactly what to do next.

🏢 Brands

Your home base. Each brand is an entity you're tracking across all AI models.

What to do:
  • Add brands you want to monitor — your company, product lines, or client brands if you're an agency
  • Check weekly scores at a glance — visibility, sentiment, and position aggregated across all models
  • Spot trends using the directional indicators showing whether your visibility is improving or declining week-over-week
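
Under the hood, this aggregation is conceptually simple. Here is a minimal Python sketch, assuming per-model score records and a plain unweighted mean; the field names, values, and weighting are illustrative assumptions, not AICarma's actual formula:

```python
from statistics import mean

# Hypothetical per-model weekly scores for one brand (illustrative values).
weekly_scores = {
    "chatgpt":    {"visibility": 78, "sentiment": 0.62, "position": 1.4},
    "perplexity": {"visibility": 65, "sentiment": 0.55, "position": 2.1},
    "claude":     {"visibility": 71, "sentiment": 0.70, "position": 1.8},
}

# Aggregate each metric across all models (simple unweighted mean; assumed).
aggregated = {
    metric: round(mean(scores[metric] for scores in weekly_scores.values()), 2)
    for metric in ("visibility", "sentiment", "position")
}

# Week-over-week direction: compare against last week's aggregate.
last_week_visibility = 68.0  # illustrative
trend = "improving" if aggregated["visibility"] > last_week_visibility else "declining"
print(aggregated, trend)
```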

🤖 Models

Break down your brand's performance by AI model. Each model is its own data point.

What to do:
  • Identify underperforming models — your brand might rank #1 on ChatGPT but be absent from DeepSeek (see the sketch after this list)
  • Compare Instruct vs. Thinking — instruct models show what AI says; thinking models reveal why
  • Drill into daily views for any model to see day-to-day fluctuations and correlate with your content changes
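
To make the first item concrete, here is a rough sketch of flagging models where your brand underperforms or is missing entirely; the data shape and cutoff are illustrative assumptions:

```python
# Hypothetical per-model visibility for one brand (0 = never mentioned).
model_visibility = {
    "chatgpt": 92, "perplexity": 74, "gemini": 61,
    "claude": 58, "deepseek": 0, "llama": 12,
}

THRESHOLD = 20  # assumed cutoff for "underperforming"

underperforming = [m for m, v in model_visibility.items() if v < THRESHOLD]
absent = [m for m, v in model_visibility.items() if v == 0]
print("Underperforming:", underperforming)  # ['deepseek', 'llama']
print("Absent entirely:", absent)           # ['deepseek']
```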

💬 Prompts

The questions your customers ask AI. Each prompt is tracked daily across all models.

What to do:
  • Review auto-suggested prompts — AICarma generates these based on your brand and competitors
  • Add industry-specific prompts — the exact questions your ideal customers type into ChatGPT or Perplexity
  • Analyze per-prompt performance — find which prompts give you the best or worst visibility and focus your content strategy
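
Per-prompt analysis boils down to ranking. A minimal sketch, with made-up prompts and scores:

```python
# Hypothetical per-prompt visibility, averaged across all models.
prompt_visibility = {
    "best crm for startups": 81,
    "top project management tools": 47,
    "alternatives to [competitor]": 9,
}

ranked = sorted(prompt_visibility.items(), key=lambda kv: kv[1], reverse=True)
best, worst = ranked[0], ranked[-1]
print(f"Strongest prompt: {best[0]!r} ({best[1]})")
print(f"Weakest prompt: {worst[0]!r} ({worst[1]}) -> focus content here")
```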

🎯 Competitors

Every metric tracked for your brand is also tracked for your competitors — on the same prompts, across the same models.

What to do:
  • Review AI-discovered competitors — our two-stage discovery finds competitors you might not have considered
  • Compare visibility scores side-by-side — see who's winning and losing across each model
  • Spot competitive gaps — when a competitor surges on a specific prompt, investigate what content they published
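
Surge detection is a week-over-week comparison. A simplified sketch, assuming per-prompt, per-brand visibility snapshots; the values and threshold are illustrative:

```python
# Hypothetical visibility snapshots for one prompt (illustrative values).
last_week = {"best crm for startups": {"you": 60, "rival": 35}}
this_week = {"best crm for startups": {"you": 58, "rival": 72}}

SURGE = 20  # assumed: flag any week-over-week jump of 20+ points

for prompt, scores in this_week.items():
    for brand, score in scores.items():
        delta = score - last_week[prompt][brand]
        if brand != "you" and delta >= SURGE:
            print(f"{brand} surged +{delta} on {prompt!r}: check their new content")
```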

🔗 Sources

When AI models cite web sources in their responses, AICarma captures them. This is especially powerful with Perplexity.

What to do:
  • Discover content gaps — if competitors are cited but you're not, you know exactly which content to create
  • Find authority signals — see which of your pages AI models trust enough to cite
  • Identify backlink opportunities — discover third-party sources AI trusts in your industry
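
All three checks reduce to set operations over cited domains. A minimal sketch with hypothetical domains:

```python
# Hypothetical: domains cited by AI models across your tracked prompts.
cited_domains = {"rival.com", "industry-blog.com", "review-site.com", "yourbrand.com"}
competitor_domains = {"rival.com", "otherco.com"}
your_domain = "yourbrand.com"

authority = your_domain in cited_domains                   # AI already trusts your pages
competitor_citations = cited_domains & competitor_domains  # content gaps to counter
backlink_targets = cited_domains - competitor_domains - {your_domain}

print("Your pages cited:", authority)
print("Competitor citations to counter:", sorted(competitor_citations))
print("Backlink opportunities:", sorted(backlink_targets))
```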

Reading AI Responses

The five views above give you aggregated scores and trends. But the real depth lives one level deeper — inside individual AI responses. This is where you see exactly what each model says about your brand, word-for-word.

Anatomy of a Response

When you open any individual response, you'll see:

  • Prompt — the exact question that was asked
  • Model — which AI model generated this answer (e.g. GPT-5-nano, Claude Haiku 4.5, Sonar)
  • Full AI answer — the complete text the model returned, unedited
  • Scores — visibility, sentiment, and position calculated for this specific response
  • Source URLs — every web page the model cited or referenced (see Source Tracking for the full breakdown)

💡 This is raw intelligence. While dashboards show you averages and trends, individual responses show you the exact language models use to describe your brand. Read them — you'll often discover things no aggregated score can reveal.
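
In code terms, each response is a small record. Here is a hypothetical shape that mirrors the anatomy above; the class and field names are assumptions for illustration, not a documented AICarma schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIResponse:
    """One tracked response (hypothetical shape; names assumed)."""
    prompt: str                   # the exact question asked
    model: str                    # e.g. "gpt-5-nano", "sonar"
    answer: str                   # full, unedited model output
    visibility: float             # scores for this specific response
    sentiment: float
    position: int | None          # None if the brand wasn't ranked
    source_urls: list[str] = field(default_factory=list)  # may be empty

response = AIResponse(
    prompt="best crm for startups",
    model="sonar",
    answer="For startups, strong options include ...",
    visibility=74.0, sentiment=0.6, position=2,
    source_urls=["https://review-site.com/best-crm"],
)
```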

How Each Model Behaves Differently

Not all AI models work the same way. Understanding their behavior helps you interpret your data correctly:

🟢 ChatGPT (OpenAI)

Sometimes performs web searches, sometimes doesn't. When ChatGPT responds from its training data alone, you'll see responses with no sources listed — this is normal, not a data issue. Web-grounded responses will include source URLs.
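
One practical consequence: you can separate web-grounded responses from training-data responses by checking whether any sources were captured. A tiny sketch (the helper is hypothetical):

```python
def is_web_grounded(source_urls: list[str]) -> bool:
    """No captured sources usually means the model answered from training data."""
    return len(source_urls) > 0

# Normal for ChatGPT: an empty source list is not a data issue.
print(is_web_grounded([]))                              # False
print(is_web_grounded(["https://example.com/review"]))  # True
```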

🔵 Perplexity

Always searches the web. Returns many source URLs but may cite only some of them directly in the response text, so expect high source counts alongside lower inline-citation numbers. Best model for source analysis.

🟣 Claude (Anthropic)

Relies more heavily on training data. Tends to give nuanced, detailed brand descriptions. Source behavior varies by model tier: Haiku is faster and lighter, while Sonnet produces deeper reasoning.

🔴 Gemini (Google)

Integrates with Google Search when grounding is enabled. Strong at referencing recent information. Flash models are fast for daily monitoring; Pro models provide deeper analysis.

⚡ Open-Source Models (DeepSeek, Llama, Qwen)

These models often respond differently from commercial ones — and that matters. Enterprise customers running these models behind firewalls make purchasing decisions based on what their AI says. See all 14 models →

The Multi-Model Advantage

This is where AICarma's 14-model coverage becomes uniquely powerful. For the same prompt, you can compare responses across all models side-by-side:

  • Spot model-specific gaps — your brand might be #1 on ChatGPT but completely absent from DeepSeek. That's an enterprise customer you're invisible to.
  • Compare reasoning — Thinking models (GPT-5.2, Claude Sonnet 4.5, Gemini 3 Pro) explain why they rank brands the way they do. Instruct models just give you the answer. Read both.
  • Detect inconsistencies — if one model praises your brand and another ignores it, that inconsistency tells you where your content strategy has gaps.
  • Track model updates — when a model updates its training data or search behavior, your visibility can shift overnight. Daily monitoring across 14 models catches these changes immediately.
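
Side-by-side comparison amounts to pivoting one prompt's responses by model. A simplified sketch with invented scores; the inconsistency threshold is an assumption:

```python
# Hypothetical: today's responses to one prompt, keyed by model.
responses = {
    "chatgpt":  {"mentions_brand": True,  "sentiment": 0.8},
    "claude":   {"mentions_brand": True,  "sentiment": 0.2},
    "deepseek": {"mentions_brand": False, "sentiment": None},
}

# Model-specific gaps: models where the brand never appears.
gaps = [m for m, r in responses.items() if not r["mentions_brand"]]

# Inconsistencies: a wide sentiment spread across models (0.5 is assumed).
sentiments = [r["sentiment"] for r in responses.values() if r["sentiment"] is not None]
inconsistent = max(sentiments) - min(sentiments) > 0.5

print("Invisible on:", gaps)                   # ['deepseek']
print("Sentiment spread flagged:", inconsistent)
```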

Most tools can't do this

Typical competitors track 2–3 models and charge extra for each additional one. With AICarma, comparing 14 models on the same prompt is built-in — no upsells, no limits. This isn't just more data; it's a fundamentally different level of insight into how your brand is perceived across the entire AI ecosystem.

Brand Visibility vs. Source Visibility

There's an important distinction between being mentioned and being cited:

🗣️ Brand Visibility

  • Your brand is explicitly named in the AI's response
  • Measured by your Visibility Score
  • Indicates the AI associates your brand with the topic

🔗 Source Visibility

  • Your content was used or cited as a source — even if your brand name wasn't mentioned
  • Measured through Source Tracking
  • Indicates the AI trusts your content as reference material

What the Gaps Tell You

  • Cited often but never mentioned? Your content has authority, but your brand lacks name recognition. The AI trusts your pages but doesn't associate them with your brand name. Action: Strengthen branding in your content — make sure your brand name appears prominently alongside the expertise.
  • Mentioned often but never cited? The AI knows your brand but doesn't trust your content as a reference. You have awareness but not authority. Action: Create comprehensive, citable content — detailed guides, data-driven research, and original analysis that models will want to reference.
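
The two signals and their remedies map cleanly onto a small decision table. A simplified sketch; the brand name, domain, and naive substring matching are all illustrative assumptions:

```python
BRAND = "YourBrand"          # hypothetical brand name
DOMAIN = "yourbrand.com"     # hypothetical domain

def diagnose(answer: str, sources: list[str]) -> str:
    """Classify one response by the mention/citation combination above."""
    mentioned = BRAND.lower() in answer.lower()      # brand visibility
    cited = any(DOMAIN in url for url in sources)    # source visibility
    if cited and not mentioned:
        return "Cited, not mentioned: strengthen branding in your content"
    if mentioned and not cited:
        return "Mentioned, not cited: create citable reference content"
    if mentioned and cited:
        return "Mentioned and cited: strong position, maintain it"
    return "Neither: build both awareness and authority"

print(diagnose("Top picks include YourBrand and ...", []))
```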