How to Build an AI Tool Intelligence Report Using the Vibe Data API

A step-by-step guide to querying 5 data sources and building a composite Developer Mindshare Score

5 data sources. 11 tools ranked. 1 composite score. This tutorial walks through every line of code needed to query the Vibe Data API for AI developer tool metrics, normalize the signals, and produce a competitive intelligence report you can hand to a product manager, share in a team Slack, or automate as a weekly cron job.

What You'll Build

  • A "Developer Mindshare Score" that combines Reddit mentions, NPM downloads, GitHub stars, HackerNews activity, and Stack Overflow questions into a single 0-100 ranking
  • Normalized scoring across sources with wildly different scales (millions of downloads vs. dozens of SO questions)
  • A formatted terminal report comparing the top AI coding tools side-by-side
  • Customizable weights so you can tune what signals matter most for your use case

1 Set Up the API Client

All data is available through the Vibe Data REST API. You authenticate with an API key passed in the Authorization header; the VIBE_DATA_API_KEY environment variable holds your key.

require('dotenv').config();

const API_KEY = process.env.VIBE_DATA_API_KEY;
const BASE_URL = 'https://vibe-data.com';

async function apiGet(endpoint) {
  const res = await fetch(`${BASE_URL}${endpoint}`, {
    headers: { 'Authorization': `Bearer ${API_KEY}` }
  });
  if (!res.ok) throw new Error(`API ${res.status}: ${await res.text()}`);
  const json = await res.json();
  return json.data;
}

async function buildReport() {
  console.log('Vibe Data API client ready');

  // 7-day rolling window
  const endDate = new Date().toISOString().split('T')[0];
  const startDate = new Date(Date.now() - 7 * 86400000)
    .toISOString().split('T')[0];
  const cutoff = new Date(startDate);

  console.log(`Report period: ${startDate} to ${endDate}`);

  // ... Steps 2-6 go inside this function ...
}

buildReport().catch(console.error);

Node.js 18+ includes fetch natively — the only dependency is npm install dotenv. Create a .env file with your VIBE_DATA_API_KEY. All subsequent code runs inside the buildReport() function.
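Later steps fan out 11 parallel requests at a time, which can trip rate limits. If you see 429s, a retry wrapper around the client helps. This is a sketch, not part of the documented API — it assumes the server signals throttling with standard 429 status codes, and redefines the Step 1 constants so it runs standalone (drop those two lines if you paste it into the same file):

```javascript
// Sketch: retry wrapper for transient failures (429 / 5xx), with exponential back-off.
// Reuses the Step 1 constants, redefined here so the snippet runs standalone.
const API_KEY = process.env.VIBE_DATA_API_KEY;
const BASE_URL = 'https://vibe-data.com';

async function apiGetWithRetry(endpoint, retries = 3, baseDelayMs = 500) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch(`${BASE_URL}${endpoint}`, {
      headers: { 'Authorization': `Bearer ${API_KEY}` }
    });
    if (res.ok) return (await res.json()).data;
    // Retry only rate limits and server errors; everything else fails fast
    if ((res.status === 429 || res.status >= 500) && attempt < retries) {
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** attempt));
      continue;
    }
    throw new Error(`API ${res.status}: ${await res.text()}`);
  }
}
```

Swap it in for apiGet in the fan-out steps if throttling becomes an issue.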

2 Query Reddit Mentions

Reddit is the broadest signal of what developers are actively discussing. The /api/reddit endpoint returns individual mentions with engagement metrics. We fetch each tool's mentions in parallel, filter to the last 7 days, and aggregate client-side.

// Step 2: Reddit developer discussion volume
const TOOLS = [
  'bolt', 'chatgpt', 'cursor', 'claude', 'v0',
  'openai', 'lovable', 'anthropic', 'aider',
  'github-copilot', 'windsurf', 'replit'
];

const redditPromises = TOOLS.map(async (tool) => {
  const mentions = await apiGet(
    `/api/reddit?tool_name=${tool}&limit=500`
  );
  const recent = mentions.filter(m =>
    new Date(m.created_utc * 1000) >= cutoff
  );
  return {
    tool_name: tool,
    mention_count: recent.length,
    total_score: recent.reduce((s, m) => s + (m.score || 0), 0),
    total_comments: recent.reduce((s, m) =>
      s + (m.num_comments || 0), 0)
  };
});
const redditAgg = (await Promise.all(redditPromises))
  .sort((a, b) => b.mention_count - a.mention_count);

console.log('\n--- Reddit Mentions (Last 7 Days) ---');
redditAgg.forEach((r, i) => {
  const ratio = r.mention_count
    ? (r.total_score / r.mention_count).toFixed(1)
    : '0.0';  // guard against divide-by-zero for tools with no mentions
  console.log(
    `${i+1}. ${r.tool_name}: ${r.mention_count} mentions, ` +
    `${r.total_score} upvotes (${ratio}x engagement)`
  );
});
Example Output
--- Reddit Mentions (Last 7 Days) ---
1. bolt: 1316 mentions, 3953 upvotes (3.0x engagement)
2. chatgpt: 1256 mentions, 2104 upvotes (1.7x engagement)
3. cursor: 1240 mentions, 3116 upvotes (2.5x engagement)
4. claude: 1231 mentions, 2342 upvotes (1.9x engagement)
5. v0: 1163 mentions, 8899 upvotes (7.7x engagement)
6. openai: 1125 mentions, 6396 upvotes (5.7x engagement)
7. lovable: 909 mentions, 7627 upvotes (8.4x engagement)
8. anthropic: 866 mentions, 5651 upvotes (6.5x engagement)
9. aider: 572 mentions, 1557 upvotes (2.7x engagement)
10. github-copilot: 440 mentions, 1288 upvotes (2.9x engagement)

Notice the "engagement ratio" (upvotes per mention). Bolt leads in raw volume at 1,316 mentions, but v0 has a 7.7x engagement ratio — each mention sparks far more interest. This distinction between volume and intensity is exactly why a multi-signal approach matters.

3 Pull NPM Download Data

NPM downloads measure what developers actually install — not just what they discuss. The /api/npm endpoint returns packages sorted by downloads, with weekly and monthly totals already computed.

// Step 3: NPM package adoption (latest snapshot)
const npmData = await apiGet('/api/npm?limit=15');

console.log('\n--- NPM Downloads (Latest) ---');
npmData.slice(0, 10).forEach((p, i) => {
  const weekly = (p.weekly_downloads / 1e6).toFixed(2);
  const monthly = (p.monthly_downloads / 1e6).toFixed(2);
  console.log(`${i+1}. ${p.package_name}: ${weekly}M/week (${monthly}M/month)`);
});
Example Output
--- NPM Downloads (Latest) ---
1. openai: 11.80M/week (50.49M/month)
2. ai: 7.44M/week (31.13M/month)
3. @anthropic-ai/sdk: 5.60M/week (22.66M/month)
4. @langchain/core: 2.73M/week (11.80M/month)
5. @google/generative-ai: 1.92M/week (8.69M/month)
6. langchain: 1.58M/week (7.06M/month)
7. @ai-sdk/xai: 1.05M/week (4.45M/month)
8. ollama: 0.57M/week (2.47M/month)
9. @ai-sdk/perplexity: 0.38M/week (1.40M/month)
10. cohere-ai: 0.35M/week (1.64M/month)

The openai package leads with 11.8M weekly downloads; @anthropic-ai/sdk is the clear second among model SDKs at 5.6M. Note the mapping challenge: NPM package names don't match Reddit tool names. openai maps to both "ChatGPT" and "OpenAI" in Reddit discussions. We'll handle this cross-source mapping in Step 5.

4 Add GitHub Stars, HackerNews Mentions, and Stack Overflow Questions

GitHub stars signal long-term open source traction. HackerNews mentions capture a selective technical audience. Stack Overflow questions signal real-world adoption friction — developers asking SO questions are using the tool in production and hitting real issues.

GitHub Stars

// Step 4a: GitHub stars (open source traction)
const githubData = await apiGet('/api/github?limit=20');

console.log('\n--- GitHub Stars (Top AI Tool Repos) ---');
githubData.slice(0, 10).forEach((r, i) => {
  console.log(
    `${i+1}. ${r.repo_full_name}: ` +
    `${Number(r.stars).toLocaleString()} stars, ` +
    `${Number(r.forks).toLocaleString()} forks`
  );
});
Example Output
--- GitHub Stars (Top AI Tool Repos) ---
1. Aider-AI/aider: 40,733 stars, 3,898 forks
2. cursor/cursor: 32,273 stars, 2,201 forks
3. openai/openai-python: 30,013 stars, 4,563 forks
4. github/copilot-docs: 23,243 stars, 2,386 forks
5. stackblitz/bolt.new: 16,201 stars, 14,547 forks
6. anthropics/anthropic-sdk-python: 2,782 stars, 456 forks

HackerNews Mentions

// Step 4b: HackerNews mentions (technical community)
const hnPromises = TOOLS.map(async (tool) => {
  const mentions = await apiGet(
    `/api/hackernews?tool_name=${tool}&limit=200`
  );
  const recent = mentions.filter(m =>
    new Date(m.time * 1000) >= cutoff
  );
  return {
    tool_name: tool,
    mention_count: recent.length
  };
});
const hnAgg = (await Promise.all(hnPromises))
  .sort((a, b) => b.mention_count - a.mention_count);

console.log('\n--- HackerNews Mentions (Last 7 Days) ---');
hnAgg.filter(r => r.mention_count > 0).forEach((r, i) => {
  console.log(`${i+1}. ${r.tool_name}: ${r.mention_count} mentions`);
});

Stack Overflow Questions

// Step 4c: Stack Overflow questions (adoption friction)
const soPromises = TOOLS.map(async (tool) => {
  const questions = await apiGet(
    `/api/stackoverflow?tool_name=${tool}&limit=200`
  );
  const recent = questions.filter(q =>
    new Date(q.creation_date) >= cutoff
  );
  return {
    tool_name: tool,
    question_count: recent.length,
    total_views: recent.reduce((s, q) =>
      s + (q.view_count || 0), 0),
    total_answers: recent.reduce((s, q) =>
      s + (q.answer_count || 0), 0)
  };
});
const soAgg = (await Promise.all(soPromises))
  .sort((a, b) => b.question_count - a.question_count);

console.log('\n--- Stack Overflow Questions (Last 7 Days) ---');
soAgg.filter(r => r.question_count > 0).forEach((r, i) => {
  console.log(
    `${i+1}. ${r.tool_name}: ${r.question_count} questions, ` +
    `${r.total_views} views`
  );
});
Example Output
--- Stack Overflow Questions (Last 7 Days) ---
1. cursor: 14 questions, 393 views
2. chatgpt: 12 questions, 216 views
3. github-copilot: 6 questions, 183 views
4. claude: 5 questions, 102 views
5. openai: 5 questions, 104 views
6. v0: 2 questions, 41 views
7. bolt: 1 questions, 11 views

Cursor leads SO questions at 14 per week — not because it's buggy, but because it has enough production users hitting edge cases to ask about them. Tools with zero SO questions (Windsurf, Replit, Lovable) may have smaller production footprints, or their support channels live elsewhere (Discord, GitHub issues).

5 Build the Composite Developer Mindshare Score

Here's where it gets interesting. Each data source operates at a completely different scale: NPM downloads are in millions, Reddit mentions in thousands, SO questions in single digits. To combine them into a single score, we normalize each dimension to 0–100 (where 100 = the leader in that category), then apply weights.

// Normalize values to 0-100 scale (max = 100)
function normalize(values) {
  const max = Math.max(...values);
  if (max === 0) return values.map(() => 0);
  return values.map(v => (v / max) * 100);
}

// Map tool identifiers across API responses
// Reddit/HN/SO use tool_name, NPM uses package_name,
// GitHub uses repo_full_name
const toolMap = {
  cursor:     { npm: [],                    github: 'cursor/cursor' },
  claude:     { npm: ['@anthropic-ai/sdk'], github: 'anthropics/anthropic-sdk-python' },
  chatgpt:    { npm: ['openai'],            github: 'openai/openai-python' },
  v0:         { npm: ['ai'],                github: null },
  bolt:       { npm: [],                    github: 'stackblitz/bolt.new' },
  lovable:    { npm: [],                    github: null },
  windsurf:   { npm: [],                    github: null },
  replit:     { npm: [],                    github: null },
  openai:     { npm: ['openai'],            github: 'openai/openai-python' },
  aider:      { npm: [],                    github: 'Aider-AI/aider' },
  'github-copilot': { npm: [],             github: 'github/copilot-docs' },
};

// Signal weights — tune these to your priorities
const WEIGHTS = {
  reddit:        0.30,  // Broadest developer discussion signal
  npm:           0.25,  // Actual package adoption
  github:        0.20,  // Open source credibility
  hackernews:    0.15,  // Technical community filter
  stackoverflow: 0.10   // Production adoption friction
};
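Before wiring these pieces together, a quick standalone check of the normalizer (the function is repeated here so the snippet runs on its own; the input values are illustrative):

```javascript
// The Step 5 normalizer, repeated so this check runs standalone
function normalize(values) {
  const max = Math.max(...values);
  if (max === 0) return values.map(() => 0);
  return values.map(v => (v / max) * 100);
}

console.log(normalize([1316, 658, 0]));  // → [ 100, 50, 0 ]
console.log(normalize([0, 0]));          // → [ 0, 0 ] (zero-max guard avoids NaN)
```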

Why These Weights?

  • Reddit (30%): The broadest signal — captures casual discussion, recommendations, and complaints across hundreds of subreddits
  • NPM (25%): Measures what developers actually install in their projects, not just what they talk about
  • GitHub (20%): Stars represent long-term open source community investment — harder to game than discussion
  • HackerNews (15%): A more selective technical audience — making the HN front page takes stronger signal
  • Stack Overflow (10%): A lagging indicator of production adoption — questions appear after real-world usage
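The composite only stays on a 0-100 scale when the weights sum to 1.0, so it's worth asserting that before scoring — especially once you start tuning. A small guard (the helper name is ours, not part of the API):

```javascript
// Guard: weights must sum to 1.0 or composite scores drift off the 0-100 scale
function assertWeightsValid(weights) {
  const total = Object.values(weights).reduce((sum, w) => sum + w, 0);
  if (Math.abs(total - 1.0) > 1e-9) {
    throw new Error(`Weights sum to ${total}, expected 1.0`);
  }
  return true;
}

// Passes for the default weights from above
assertWeightsValid({
  reddit: 0.30, npm: 0.25, github: 0.20,
  hackernews: 0.15, stackoverflow: 0.10
});
```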

Now gather the raw values, normalize, and compute the weighted composite:

const tools = Object.keys(toolMap);

// Gather raw values per tool per source
const redditScores = tools.map(t => {
  const row = redditAgg.find(r => r.tool_name === t);
  return row ? row.mention_count : 0;
});

const hnScores = tools.map(t => {
  const row = hnAgg.find(r => r.tool_name === t);
  return row ? row.mention_count : 0;
});

const npmScores = tools.map(t => {
  const pkgs = toolMap[t].npm;
  if (!pkgs.length) return 0;
  let total = 0;
  for (const pkg of pkgs) {
    const row = npmData.find(r => r.package_name === pkg);
    if (row) total += parseInt(row.weekly_downloads);
  }
  return total;
});

const githubScores = tools.map(t => {
  const repo = toolMap[t].github;
  if (!repo) return 0;
  const row = githubData.find(r => r.repo_full_name === repo);
  return row ? parseInt(row.stars) : 0;
});

const soScores = tools.map(t => {
  const row = soAgg.find(r => r.tool_name === t);
  return row ? row.question_count : 0;
});

// Normalize each dimension to 0-100
const normReddit = normalize(redditScores);
const normNpm    = normalize(npmScores);
const normGithub = normalize(githubScores);
const normHN     = normalize(hnScores);
const normSO     = normalize(soScores);

// Compute weighted composite score
const composite = tools.map((tool, i) => ({
  tool,
  reddit:        normReddit[i],
  npm:           normNpm[i],
  github:        normGithub[i],
  hackernews:    normHN[i],
  stackoverflow: normSO[i],
  score: (
    normReddit[i] * WEIGHTS.reddit +
    normNpm[i]    * WEIGHTS.npm +
    normGithub[i] * WEIGHTS.github +
    normHN[i]     * WEIGHTS.hackernews +
    normSO[i]     * WEIGHTS.stackoverflow
  )
}));

composite.sort((a, b) => b.score - a.score);

6 Produce the Formatted Report

The final step: print a clean, formatted report that's instantly readable. This is the deliverable — the "product" of the analysis.

// Column widths: rank, tool, score, then one per signal
const COLS = [4, 15, 5, 6, 6, 6, 6, 5];
const INNER = COLS.reduce((sum, w) => sum + w + 2, 0) + COLS.length - 1;

const solid = (left, right) => left + '═'.repeat(INNER) + right;
const textRow = (text) => '║' + text.padEnd(INNER) + '║';

console.log('');
console.log(solid('╔', '╗'));
console.log(textRow(' AI TOOL DEVELOPER MINDSHARE REPORT'));
console.log(textRow(` Data: ${startDate} to ${endDate} | vibe-data.com`));
console.log(solid('╠', '╣'));

const headers = ['Rank', 'Tool', 'Score', 'Reddit', 'NPM', 'GitHub', 'HN', 'SO'];
console.log('║ ' + headers.map((h, i) => h.padEnd(COLS[i])).join(' │ ') + ' ║');
console.log('╠' + COLS.map(w => '═'.repeat(w + 2)).join('╪') + '╣');

composite.forEach((t, i) => {
  const cells = [
    String(i + 1).padStart(COLS[0]),
    t.tool.padEnd(COLS[1]),
    t.score.toFixed(1).padStart(COLS[2]),
    t.reddit.toFixed(1).padStart(COLS[3]),
    t.npm.toFixed(1).padStart(COLS[4]),
    t.github.toFixed(1).padStart(COLS[5]),
    t.hackernews.toFixed(1).padStart(COLS[6]),
    t.stackoverflow.toFixed(1).padStart(COLS[7])
  ];
  console.log('║ ' + cells.join(' │ ') + ' ║');
});

console.log(solid('╠', '╣'));
console.log(textRow(' Weights: Reddit 30% │ NPM 25% │ GitHub 20% │ HN 15% │ SO 10%'));
console.log(solid('╚', '╝'));

Here's what the report looks like with real data:

Report Output
╔════════════════════════════════════════════════════════════════════════════╗
║ AI TOOL DEVELOPER MINDSHARE REPORT                                         ║
║ Data: 2026-02-12 to 2026-02-19 | vibe-data.com                             ║
╠════════════════════════════════════════════════════════════════════════════╣
║ Rank │ Tool            │ Score │ Reddit │ NPM    │ GitHub │ HN     │ SO    ║
╠══════╪═════════════════╪═══════╪════════╪════════╪════════╪════════╪═══════╣
║    1 │ chatgpt         │  76.9 │   95.4 │  100.0 │   73.7 │    0.0 │  85.7 ║
║    2 │ openai          │  74.0 │   85.5 │  100.0 │   73.7 │   33.3 │  35.7 ║
║    3 │ claude          │  59.9 │   93.5 │   47.4 │    6.8 │  100.0 │  35.7 ║
║    4 │ cursor          │  54.1 │   94.2 │    0.0 │   79.2 │    0.0 │ 100.0 ║
║    5 │ v0              │  43.7 │   88.4 │   63.1 │    0.0 │    0.0 │  14.3 ║
║    6 │ bolt            │  38.7 │  100.0 │    0.0 │   39.8 │    0.0 │   7.1 ║
║    7 │ aider           │  33.0 │   43.5 │    0.0 │  100.0 │    0.0 │   0.0 ║
║    8 │ github-copilot  │  25.7 │   33.4 │    0.0 │   57.1 │    0.0 │  42.9 ║
║    9 │ lovable         │  20.7 │   69.1 │    0.0 │    0.0 │    0.0 │   0.0 ║
║   10 │ replit          │   7.6 │   25.3 │    0.0 │    0.0 │    0.0 │   0.0 ║
║   11 │ windsurf        │   4.3 │   14.2 │    0.0 │    0.0 │    0.0 │   0.0 ║
╠════════════════════════════════════════════════════════════════════════════╣
║ Weights: Reddit 30% │ NPM 25% │ GitHub 20% │ HN 15% │ SO 10%               ║
╚════════════════════════════════════════════════════════════════════════════╝
The same tool can rank #1 in one signal and #7 in another. That's the whole point of a composite score — it reveals which tools have broad ecosystem traction vs. narrow spikes.

What This Report Tells You

The report above isn't just a leaderboard — the per-signal columns reveal the shape of each tool's adoption pattern:

  • High Reddit + High NPM: broad discussion AND real adoption, the strongest signal (ChatGPT: 95.4 + 100.0)
  • High Reddit + Zero NPM: lots of buzz but no SDK; may be a hosted product rather than a developer tool (Bolt: 100.0 + 0.0)
  • Low Reddit + High GitHub: quiet community discussion but a strong open source following, a "builder's tool" (Aider: 43.5 + 100.0)
  • High SO + Low Everything Else: production users hitting real issues; adoption is ahead of hype (Cursor: 100.0 SO, moderate elsewhere)
  • High HN + Moderate Reddit: technical community champion, respected by builders and growing mainstream (Claude: 100.0 HN + 93.5 Reddit)
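These patterns can be auto-tagged from the normalized columns. A sketch, with illustrative thresholds (60 counts as "high"; the thresholds and the adoptionPattern helper are ours, and the sample rows reuse values from the report above):

```javascript
// Tag each tool's adoption shape from its normalized 0-100 signals.
// Thresholds are illustrative, not derived from the API.
function adoptionPattern(t) {
  if (t.reddit >= 60 && t.npm >= 60) return 'broad traction: discussion + installs';
  if (t.reddit >= 60 && t.npm === 0) return 'buzz without an SDK: likely a hosted product';
  if (t.reddit < 60 && t.github >= 60) return "builder's tool: quiet but heavily starred";
  return 'emerging or narrow signal';
}

// Sample rows taken from the report above
const sample = [
  { tool: 'chatgpt', reddit: 95.4,  npm: 100.0, github: 73.7 },
  { tool: 'bolt',    reddit: 100.0, npm: 0.0,   github: 39.8 },
  { tool: 'aider',   reddit: 43.5,  npm: 0.0,   github: 100.0 }
];

sample.forEach(t => console.log(`${t.tool}: ${adoptionPattern(t)}`));
```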

Customizing the Weights

The weights are the most opinionated part of this analysis, and they should change based on who's reading the report:

// VC-focused weights (adoption signals)
const VC_WEIGHTS = {
  reddit: 0.15, npm: 0.35, github: 0.30,
  hackernews: 0.10, stackoverflow: 0.10
};

// DevRel-focused weights (community signals)
const DEVREL_WEIGHTS = {
  reddit: 0.35, npm: 0.10, github: 0.10,
  hackernews: 0.30, stackoverflow: 0.15
};

// Balanced weights
const BALANCED_WEIGHTS = {
  reddit: 0.20, npm: 0.20, github: 0.20,
  hackernews: 0.20, stackoverflow: 0.20
};
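To see how a different audience reshuffles the ranking, wrap the weighted sum in a helper and re-score the normalized rows from Step 5. A sketch (scoreWith is our helper, not part of the tutorial's pipeline; the two sample rows reuse normalized values from the report above):

```javascript
// Re-score normalized rows under any weight set that covers the same signal keys
function scoreWith(rows, weights) {
  return rows
    .map(r => ({
      tool: r.tool,
      score: Object.keys(weights).reduce((s, k) => s + r[k] * weights[k], 0)
    }))
    .sort((a, b) => b.score - a.score);
}

const VC_WEIGHTS = {
  reddit: 0.15, npm: 0.35, github: 0.30,
  hackernews: 0.10, stackoverflow: 0.10
};

// Normalized signals for two tools, taken from the report above
const rows = [
  { tool: 'chatgpt', reddit: 95.4, npm: 100.0, github: 73.7, hackernews: 0.0,   stackoverflow: 85.7 },
  { tool: 'claude',  reddit: 93.5, npm: 47.4,  github: 6.8,  hackernews: 100.0, stackoverflow: 35.7 }
];

scoreWith(rows, VC_WEIGHTS).forEach(t =>
  console.log(`${t.tool}: ${t.score.toFixed(1)}`));
```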

Making It a Recurring Report

The real power of this approach is running it weekly and tracking changes. Save the composite scores to a JSON file each week, and you can compute week-over-week momentum:

const fs = require('fs');
fs.mkdirSync('./reports', { recursive: true });  // ensure the directory exists
const outputPath = `./reports/mindshare-${endDate}.json`;
fs.writeFileSync(outputPath, JSON.stringify(composite, null, 2));
console.log(`Report saved to ${outputPath}`);

Automate it with a cron job, a GitHub Action, or a simple node-cron scheduler. Compare score values across weeks to spot tools gaining or losing momentum before the mainstream narrative catches up.
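Once two weekly files exist, the momentum computation is a simple diff on scores. A sketch with inlined sample arrays (the previous-week scores are illustrative, not real data); in practice, load each array with JSON.parse(fs.readFileSync(path, 'utf8')):

```javascript
// Week-over-week momentum: score delta per tool between two saved reports
function computeMomentum(prev, curr) {
  const prevScores = Object.fromEntries(prev.map(t => [t.tool, t.score]));
  return curr
    .map(t => ({ tool: t.tool, delta: t.score - (prevScores[t.tool] ?? 0) }))
    .sort((a, b) => b.delta - a.delta);  // biggest gainers first
}

// Illustrative snapshots; replace with JSON.parse(fs.readFileSync(...)) of saved reports
const prev = [{ tool: 'cursor', score: 49.8 }, { tool: 'claude', score: 61.2 }];
const curr = [{ tool: 'cursor', score: 54.1 }, { tool: 'claude', score: 59.9 }];

computeMomentum(prev, curr).forEach(m =>
  console.log(`${m.tool}: ${m.delta >= 0 ? '+' : ''}${m.delta.toFixed(1)}`));
```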

Explore the Live Data

The same data that powers this tutorial is updated daily. View the live dashboard or read our methodology for collection details.
