5 data sources. 11 tools ranked. 1 composite score. This tutorial walks through every line of code needed to query the Vibe Data API for AI developer tool metrics, normalize the signals, and produce a competitive intelligence report you can hand to a product manager, share in a team Slack, or automate as a weekly cron job.
What You'll Build
- A "Developer Mindshare Score" that combines Reddit mentions, NPM downloads, GitHub stars, HackerNews activity, and Stack Overflow questions into a single 0-100 ranking
- Normalized scoring across sources with wildly different scales (millions of downloads vs. dozens of SO questions)
- A formatted terminal report comparing the top AI coding tools side-by-side
- Customizable weights so you can tune what signals matter most for your use case
1 Set Up the API Client
All data is available through the Vibe Data REST API. You authenticate with an API key passed in the Authorization header; store your key in the VIBE_DATA_API_KEY environment variable (keys are issued at vibe-data.com).
require('dotenv').config();
const API_KEY = process.env.VIBE_DATA_API_KEY;
const BASE_URL = 'https://vibe-data.com';
async function apiGet(endpoint) {
  const res = await fetch(`${BASE_URL}${endpoint}`, {
    headers: { 'Authorization': `Bearer ${API_KEY}` }
  });
  if (!res.ok) throw new Error(`API ${res.status}: ${await res.text()}`);
  const json = await res.json();
  return json.data;
}
async function buildReport() {
  console.log('Vibe Data API client ready');

  // 7-day rolling window
  const endDate = new Date().toISOString().split('T')[0];
  const startDate = new Date(Date.now() - 7 * 86400000)
    .toISOString().split('T')[0];
  const cutoff = new Date(startDate);
  console.log(`Report period: ${startDate} to ${endDate}`);

  // ... Steps 2-6 go inside this function ...
}

buildReport().catch(console.error);
Node.js 18+ includes fetch natively — the only dependency is npm install dotenv. Create a .env file with your VIBE_DATA_API_KEY. All subsequent code runs inside the buildReport() function.
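For reference, the .env file is a single line (the value shown is a placeholder — substitute your own key):

```
VIBE_DATA_API_KEY=your-key-here
```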
2 Query Reddit Mentions
Reddit is the broadest signal of what developers are actively discussing. The /api/reddit endpoint returns individual mentions with engagement metrics. We fetch each tool's mentions in parallel, filter to the last 7 days, and aggregate client-side.
// Step 2: Reddit developer discussion volume
const TOOLS = [
  'bolt', 'chatgpt', 'cursor', 'claude', 'v0',
  'openai', 'lovable', 'replit', 'aider',
  'github-copilot', 'windsurf'
];

const redditPromises = TOOLS.map(async (tool) => {
  const mentions = await apiGet(
    `/api/reddit?tool_name=${tool}&limit=500`
  );
  const recent = mentions.filter(m =>
    new Date(m.created_utc * 1000) >= cutoff
  );
  return {
    tool_name: tool,
    mention_count: recent.length,
    total_score: recent.reduce((s, m) => s + (m.score || 0), 0),
    total_comments: recent.reduce((s, m) =>
      s + (m.num_comments || 0), 0)
  };
});

const redditAgg = (await Promise.all(redditPromises))
  .sort((a, b) => b.mention_count - a.mention_count);
console.log('\n--- Reddit Mentions (Last 7 Days) ---');
redditAgg.forEach((r, i) => {
  // Guard against division by zero for tools with no mentions this week
  const ratio = r.mention_count
    ? (r.total_score / r.mention_count).toFixed(1)
    : '0.0';
  console.log(
    `${i+1}. ${r.tool_name}: ${r.mention_count} mentions, ` +
    `${r.total_score} upvotes (${ratio}x engagement)`
  );
});
Notice the "engagement ratio" (upvotes per mention). Bolt leads in raw volume at 1,316 mentions, but v0 has a 7.7x engagement ratio — each mention sparks far more interest. This distinction between volume and intensity is exactly why a multi-signal approach matters.
3 Pull NPM Download Data
NPM downloads measure what developers actually install — not just what they discuss. The /api/npm endpoint returns packages sorted by downloads, with weekly and monthly totals already computed.
// Step 3: NPM package adoption (latest snapshot)
const npmData = await apiGet('/api/npm?limit=15');
console.log('\n--- NPM Downloads (Latest) ---');
npmData.slice(0, 10).forEach((p, i) => {
  const weekly = (p.weekly_downloads / 1e6).toFixed(2);
  const monthly = (p.monthly_downloads / 1e6).toFixed(2);
  console.log(`${i+1}. ${p.package_name}: ${weekly}M/week (${monthly}M/month)`);
});
The openai package leads with 11.8M weekly downloads, but @anthropic-ai/sdk at 5.6M is growing faster in relative terms. Note the mapping challenge: NPM package names don't match Reddit tool names. openai maps to both "ChatGPT" and "OpenAI" in Reddit discussions. We'll handle this cross-source mapping in Step 5.
4 Add GitHub Stars, HackerNews Mentions, and Stack Overflow Questions
GitHub stars signal long-term open source traction. HackerNews mentions capture a selective technical audience. Stack Overflow questions signal real-world adoption friction — developers asking SO questions are using the tool in production and hitting real issues.
GitHub Stars
// Step 4a: GitHub stars (open source traction)
const githubData = await apiGet('/api/github?limit=20');
console.log('\n--- GitHub Stars (Top AI Tool Repos) ---');
githubData.slice(0, 10).forEach((r, i) => {
  console.log(
    `${i+1}. ${r.repo_full_name}: ` +
    `${Number(r.stars).toLocaleString()} stars, ` +
    `${Number(r.forks).toLocaleString()} forks`
  );
});
HackerNews Mentions
// Step 4b: HackerNews mentions (technical community)
const hnPromises = TOOLS.map(async (tool) => {
  const mentions = await apiGet(
    `/api/hackernews?tool_name=${tool}&limit=200`
  );
  const recent = mentions.filter(m =>
    new Date(m.time * 1000) >= cutoff
  );
  return {
    tool_name: tool,
    mention_count: recent.length
  };
});

const hnAgg = (await Promise.all(hnPromises))
  .sort((a, b) => b.mention_count - a.mention_count);

console.log('\n--- HackerNews Mentions (Last 7 Days) ---');
hnAgg.filter(r => r.mention_count > 0).forEach((r, i) => {
  console.log(`${i+1}. ${r.tool_name}: ${r.mention_count} mentions`);
});
Stack Overflow Questions
// Step 4c: Stack Overflow questions (adoption friction)
const soPromises = TOOLS.map(async (tool) => {
  const questions = await apiGet(
    `/api/stackoverflow?tool_name=${tool}&limit=200`
  );
  const recent = questions.filter(q =>
    new Date(q.creation_date) >= cutoff
  );
  return {
    tool_name: tool,
    question_count: recent.length,
    total_views: recent.reduce((s, q) =>
      s + (q.view_count || 0), 0),
    total_answers: recent.reduce((s, q) =>
      s + (q.answer_count || 0), 0)
  };
});

const soAgg = (await Promise.all(soPromises))
  .sort((a, b) => b.question_count - a.question_count);

console.log('\n--- Stack Overflow Questions (Last 7 Days) ---');
soAgg.filter(r => r.question_count > 0).forEach((r, i) => {
  console.log(
    `${i+1}. ${r.tool_name}: ${r.question_count} questions, ` +
    `${r.total_views} views`
  );
});
Cursor leads SO questions at 14 per week — not because it's buggy, but because it has enough production users hitting edge cases to ask about them. Tools with zero SO questions (Windsurf, Replit, Lovable) may have smaller production footprints, or their support channels live elsewhere (Discord, GitHub issues).
5 Build the Composite Developer Mindshare Score
Here's where it gets interesting. Each data source operates at a completely different scale: NPM downloads are in millions, Reddit mentions in thousands, SO questions in single digits. To combine them into a single score, we normalize each dimension to 0–100 (where 100 = the leader in that category), then apply weights.
// Normalize values to 0-100 scale (max = 100)
function normalize(values) {
  const max = Math.max(...values);
  if (max === 0) return values.map(() => 0);
  return values.map(v => (v / max) * 100);
}
// Map tool identifiers across API responses
// Reddit/HN/SO use tool_name, NPM uses package_name,
// GitHub uses repo_full_name
const toolMap = {
  cursor: { npm: [], github: 'cursor/cursor' },
  claude: { npm: ['@anthropic-ai/sdk'], github: 'anthropics/anthropic-sdk-python' },
  chatgpt: { npm: ['openai'], github: 'openai/openai-python' },
  v0: { npm: ['ai'], github: null },
  bolt: { npm: [], github: 'stackblitz/bolt.new' },
  lovable: { npm: [], github: null },
  windsurf: { npm: [], github: null },
  replit: { npm: [], github: null },
  openai: { npm: ['openai'], github: 'openai/openai-python' },
  aider: { npm: [], github: 'Aider-AI/aider' },
  'github-copilot': { npm: [], github: 'github/copilot-docs' },
};
// Signal weights — tune these to your priorities
const WEIGHTS = {
  reddit: 0.30,        // Broadest developer discussion signal
  npm: 0.25,           // Actual package adoption
  github: 0.20,        // Open source credibility
  hackernews: 0.15,    // Technical community filter
  stackoverflow: 0.10  // Production adoption friction
};
Why These Weights?
- Reddit (30%): The broadest signal — captures casual discussion, recommendations, and complaints across hundreds of subreddits
- NPM (25%): Measures what developers actually install in their projects, not just what they talk about
- GitHub (20%): Stars represent long-term open source community investment — harder to game than discussion
- HackerNews (15%): A more selective technical audience — making the HN front page takes stronger signal
- Stack Overflow (10%): A lagging indicator of production adoption — questions appear after real-world usage
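One easy mistake when tuning weights: a set that no longer sums to 1.0 silently rescales the composite off the 0-100 range. A small guard catches this — it isn't part of the original script, just a suggested check (WEIGHTS is redeclared here so the snippet runs standalone):

```javascript
// Redeclared for a standalone snippet; in the script, reuse the
// WEIGHTS object defined above.
const WEIGHTS = {
  reddit: 0.30, npm: 0.25, github: 0.20,
  hackernews: 0.15, stackoverflow: 0.10
};

// Floating-point sums can drift, so compare with a small tolerance
const totalWeight = Object.values(WEIGHTS).reduce((s, w) => s + w, 0);
if (Math.abs(totalWeight - 1.0) > 1e-9) {
  throw new Error(`Weights sum to ${totalWeight}, expected 1.0`);
}
```

Run it once after any edit to the weights; the throw fails fast instead of producing a subtly wrong leaderboard.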
Now gather the raw values, normalize, and compute the weighted composite:
const tools = Object.keys(toolMap);
// Gather raw values per tool per source
const redditScores = tools.map(t => {
  const row = redditAgg.find(r => r.tool_name === t);
  return row ? row.mention_count : 0;
});

const hnScores = tools.map(t => {
  const row = hnAgg.find(r => r.tool_name === t);
  return row ? row.mention_count : 0;
});

const npmScores = tools.map(t => {
  const pkgs = toolMap[t].npm;
  if (!pkgs.length) return 0;
  let total = 0;
  for (const pkg of pkgs) {
    const row = npmData.find(r => r.package_name === pkg);
    if (row) total += parseInt(row.weekly_downloads);
  }
  return total;
});

const githubScores = tools.map(t => {
  const repo = toolMap[t].github;
  if (!repo) return 0;
  const row = githubData.find(r => r.repo_full_name === repo);
  return row ? parseInt(row.stars) : 0;
});

const soScores = tools.map(t => {
  const row = soAgg.find(r => r.tool_name === t);
  return row ? row.question_count : 0;
});
// Normalize each dimension to 0-100
const normReddit = normalize(redditScores);
const normNpm = normalize(npmScores);
const normGithub = normalize(githubScores);
const normHN = normalize(hnScores);
const normSO = normalize(soScores);
// Compute weighted composite score
const composite = tools.map((tool, i) => ({
  tool,
  reddit: normReddit[i],
  npm: normNpm[i],
  github: normGithub[i],
  hackernews: normHN[i],
  stackoverflow: normSO[i],
  score: (
    normReddit[i] * WEIGHTS.reddit +
    normNpm[i] * WEIGHTS.npm +
    normGithub[i] * WEIGHTS.github +
    normHN[i] * WEIGHTS.hackernews +
    normSO[i] * WEIGHTS.stackoverflow
  )
}));
composite.sort((a, b) => b.score - a.score);
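If you want to sanity-check the scoring logic in isolation, normalize is easy to exercise with hand-picked values (a standalone sketch that redeclares the function from above; the sample numbers are illustrative):

```javascript
function normalize(values) {
  const max = Math.max(...values);
  if (max === 0) return values.map(() => 0);
  return values.map(v => (v / max) * 100);
}

// The category leader maps to 100; the rest scale proportionally
console.log(normalize([1316, 658, 0])); // [ 100, 50, 0 ]

// An all-zero column (e.g. no SO questions for any tool) stays all zero
console.log(normalize([0, 0, 0])); // [ 0, 0, 0 ]
```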
6 Produce the Formatted Report
The final step: print a clean, formatted report that's instantly readable. This is the deliverable — the "product" of the analysis.
console.log('');
console.log('╔════════════════════════════════════════════════════════════════════════════╗');
console.log('║ AI TOOL DEVELOPER MINDSHARE REPORT ║');
console.log('║ Data: ' + startDate + ' to ' + endDate + ' | vibe-data.com ║');
console.log('╠════════════════════════════════════════════════════════════════════════════╣');
console.log('║ Rank │ Tool │ Score │ Reddit │ NPM │ GitHub │ HN │ SO ║');
console.log('╠══════╪═════════════════╪═══════╪════════╪════════╪════════╪════════╪══════╣');
composite.forEach((t, i) => {
  const rank = String(i + 1).padStart(4);
  const name = t.tool.padEnd(15);
  const score = t.score.toFixed(1).padStart(5);
  const rd = t.reddit.toFixed(1).padStart(6);
  const np = t.npm.toFixed(1).padStart(6);
  const gh = t.github.toFixed(1).padStart(6);
  const hn = t.hackernews.toFixed(1).padStart(6);
  const so = t.stackoverflow.toFixed(1).padStart(4);
  console.log(`║ ${rank} │ ${name} │ ${score} │ ${rd} │ ${np} │ ${gh} │ ${hn} │ ${so} ║`);
});
console.log('╠════════════════════════════════════════════════════════════════════════════╣');
console.log('║ Weights: Reddit 30% │ NPM 25% │ GitHub 20% │ HN 15% │ SO 10% ║');
console.log('╚════════════════════════════════════════════════════════════════════════════╝');
What This Report Tells You
The report above isn't just a leaderboard — the per-signal columns reveal the shape of each tool's adoption pattern:
| Pattern | What It Means | Example |
|---|---|---|
| High Reddit + High NPM | Broad discussion AND real adoption — the strongest signal | ChatGPT (95.4 + 100.0) |
| High Reddit + Zero NPM | Lots of buzz but no SDK — may be a hosted product, not a developer tool | Bolt (100.0 + 0.0) |
| Low Reddit + High GitHub | Quiet community discussion but strong open source following — a "builder's tool" | Aider (43.5 + 100.0) |
| High SO + Low Everything Else | Production users hitting real issues — adoption is ahead of hype | Cursor (100.0 SO, moderate others) |
| High HN + Moderate Reddit | Technical community champion — respected by builders, growing mainstream | Claude (100.0 HN + 93.5 Reddit) |
Customizing the Weights
The weights are the most opinionated part of this analysis, and they should change based on who's reading the report:
- For a VC evaluating a tool company: Weight GitHub stars and NPM downloads higher — these are harder to fake and indicate real traction
- For a DevRel team: Weight Reddit and HackerNews higher — that's where the conversion opportunity lives
- For a hiring manager: Weight Stack Overflow higher — SO activity correlates with production adoption, and that's where you find experienced users to hire
- For a product manager: Weight all sources equally (0.20 each) for a balanced view of the competitive landscape
// VC-focused weights (adoption signals)
const VC_WEIGHTS = {
  reddit: 0.15, npm: 0.35, github: 0.30,
  hackernews: 0.10, stackoverflow: 0.10
};

// DevRel-focused weights (community signals)
const DEVREL_WEIGHTS = {
  reddit: 0.35, npm: 0.10, github: 0.10,
  hackernews: 0.30, stackoverflow: 0.15
};

// Balanced weights
const BALANCED_WEIGHTS = {
  reddit: 0.20, npm: 0.20, github: 0.20,
  hackernews: 0.20, stackoverflow: 0.20
};
Making It a Recurring Report
The real power of this approach is running it weekly and tracking changes. Save the composite scores to a JSON file each week, and you can compute week-over-week momentum:
const fs = require('fs');

// Ensure the output directory exists before writing
fs.mkdirSync('./reports', { recursive: true });

const outputPath = `./reports/mindshare-${endDate}.json`;
fs.writeFileSync(outputPath, JSON.stringify(composite, null, 2));
console.log(`Report saved to ${outputPath}`);
Automate it with a cron job, a GitHub Action, or a simple node-cron scheduler. Compare score values across weeks to spot tools gaining or losing momentum before the mainstream narrative catches up.
Explore the Live Data
The same data that powers this tutorial is updated daily. View the live dashboard or read our methodology for collection details.