We're removing all barriers to AI development intelligence. Starting today, our complete dashboard is freely accessible—no email gate, no signup wall, no friction. Just instant access to the data that helps you make better decisions about AI coding tools.
📊 What Changed
- Before: Email required to access dashboard
- Now: Instant, free access to everything
- Bonus: Optional weekly newsletter for those who want deeper insights
What You Get, Completely Free
Our dashboard aggregates data from 17 sources covering 50,000+ repositories, updated weekly. Here's everything you can access right now without entering an email:
🏆 AI Coding Benchmark Leaderboard
Up-to-date tracking of the top AI coding tools across 5 major benchmarks:
- SWE-bench - Real-world software engineering tasks (22 tools tracked)
- BigCodeBench - Complex code generation challenges
- HumanEval - Python function synthesis benchmark
- MBPP - Mostly Basic Python Problems
- LiveCodeBench - Live coding performance metrics
See which models are winning right now: Claude 4.5 Sonnet leading at 70.6% on SWE-bench, GPT-5 at 65%, and the full competitive landscape updated weekly.
📚 Research Citation Velocity Tracker
Track which AI coding research papers are gaining momentum. Our citation velocity metrics can signal production adoption 6-12 months early. Currently tracking:
- Top 5 papers by weekly citation growth
- Total citations + influential citation counts
- Publication venues and author information
- Direct links to arXiv papers
This is unique data you won't find aggregated anywhere else: we pull from Semantic Scholar's 200M+ paper database and recalculate velocity with every weekly refresh.
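As a rough illustration, citation velocity can be computed as the average citations gained per week over a trailing window. The sketch below uses made-up numbers and a hypothetical data shape, not our actual pipeline:

```python
from dataclasses import dataclass

@dataclass
class PaperSnapshot:
    """Cumulative citation counts for one paper, captured weekly (illustrative)."""
    title: str
    weekly_citations: list[int]  # oldest -> newest

def citation_velocity(snapshots: list[int], window: int = 4) -> float:
    """Average citations gained per week over the last `window` weeks."""
    if len(snapshots) < 2:
        return 0.0
    recent = snapshots[-(window + 1):]
    return (recent[-1] - recent[0]) / (len(recent) - 1)

papers = [
    PaperSnapshot("Paper A", [100, 110, 130, 160, 200]),  # accelerating
    PaperSnapshot("Paper B", [500, 505, 510, 514, 520]),  # large but flat
]
# Rank by velocity so fast-growing papers surface first,
# even when their absolute citation counts are still small.
ranked = sorted(papers, key=lambda p: citation_velocity(p.weekly_citations), reverse=True)
```

Ranking by velocity rather than raw totals is what lets a newer paper outrank an established one.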
📈 GitHub Adoption Trends
See which tools developers actually use in production:
- 50,000+ repositories analyzed for dependency adoption
- Star velocity tracking (which tools are growing fastest)
- Fork rates and contributor counts
- Commit frequency and release cadence
💬 Developer Sentiment Analysis
What developers are actually saying about tools across multiple platforms:
- Reddit - r/programming, r/MachineLearning, r/LocalLLaMA sentiment
- Hacker News - Story counts, comment averages, discussion quality
- Stack Overflow - Question volumes, answer rates, tag trends
- Twitter/X - Mention tracking, engagement metrics
📦 Package Ecosystem Health
- NPM packages - Monthly download trends for AI development packages
- PyPI packages - Python package adoption for ML/AI tools
- Docker Hub - Container pull counts as a production deployment signal
🤖 AI Assistant Adoption
Real data on what developers use:
- AI context file adoption (.cursorrules, .aider, .claude, etc.)
- Distribution across GitHub repositories
- Growth trends by assistant type
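For illustration, detecting AI context-file adoption in a checked-out repository can be as simple as testing for known marker paths. The marker list below is a hypothetical sample, not our crawler's actual list:

```python
from pathlib import Path

# Hypothetical marker files mapped to the assistant they indicate.
AI_CONTEXT_MARKERS = {
    ".cursorrules": "Cursor",
    ".aider.conf.yml": "Aider",
    "CLAUDE.md": "Claude Code",
    ".github/copilot-instructions.md": "GitHub Copilot",
}

def detect_ai_assistants(repo_root: str) -> set[str]:
    """Return the assistants whose context files exist in a checked-out repo."""
    root = Path(repo_root)
    return {
        assistant
        for marker, assistant in AI_CONTEXT_MARKERS.items()
        if (root / marker).exists()
    }
```

Running this across a large repository sample yields the distribution and growth trends shown on the dashboard.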
🇨🇳 Chinese AI Ecosystem
DeepSeek and Qwen package adoption, HuggingFace model downloads, and marketplace presence tracking—data most Western dashboards miss.
Why We Removed the Email Gate
Simple: data should be accessible. If you're making decisions about which AI coding tool to adopt, you shouldn't have to give up your email just to see benchmark scores.
We built this platform because we wanted this data ourselves. The information was scattered across Papers with Code, GitHub, HuggingFace leaderboards, Reddit threads, and academic papers. Aggregating it took weeks of engineering work.
Now it's yours, free, forever.
The Optional Weekly Report
If you want deeper insights delivered to your inbox, we now offer an optional newsletter signup (right on the dashboard). Here's what you get:
- Benchmark movement analysis - Which tools improved, which declined, and by how much
- Citation velocity highlights - Research papers showing rapid adoption signals
- Emerging tool alerts - New tools crossing adoption thresholds
- Market shifts - Sentiment changes, funding announcements, major releases
- Exclusive analysis - Deeper dives we don't publish on the dashboard
No spam. No daily emails. Just a weekly digest of what actually matters.
📧 Newsletter Example: Last Week's Highlights
- Claude 4.5 Sonnet jumped 4.2% on SWE-bench (now at 70.6%)
- Ollama hit 35M Docker pulls (vs PyTorch's 15M)
- Citation velocity leader: "Vibe Coding Survey" paper gained 127 citations this week
- Sentiment shift: Cursor retention concerns rising on Reddit (38% same-or-more vs 34% churn)
What's Different From Other Dashboards
1. Multi-source aggregation
Most tools show one data source (e.g., just GitHub stars). We combine 17 sources for a complete picture.
2. Citation velocity tracking
Few dashboards track the research-to-production pipeline. We do, because it can surface market leaders 6-12 months early.
3. Cross-benchmark normalization
Comparing 65% on SWE-bench vs 85% on HumanEval is meaningless without normalization. We handle that (free tier shows raw scores, Professional tier adds normalization).
4. Velocity metrics everywhere
It's not just current scores—it's trend direction. We calculate 7-day, 30-day, and 90-day velocity for benchmarks, GitHub stars, package downloads, and citations.
5. Real production signals
Docker pulls, package manager downloads, and actual code repository adoption—not just popularity contests.
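To illustrate point 3, here is a minimal min-max normalization sketch: within each benchmark, raw scores are rescaled to [0, 1] so a tool's relative standing can be compared across benchmarks. The scores are illustrative, and this is one of several possible schemes, not necessarily the one our Professional tier uses:

```python
def min_max_normalize(scores: dict[str, float]) -> dict[str, float]:
    """Rescale raw benchmark scores to [0, 1] within a single benchmark."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {tool: 0.5 for tool in scores}  # all tied: park at midpoint
    return {tool: (s - lo) / (hi - lo) for tool, s in scores.items()}

# Illustrative raw scores: resolve rate on one benchmark, pass@1 on another.
swe_bench = {"tool_a": 70.6, "tool_b": 65.0, "tool_c": 40.0}
humaneval = {"tool_a": 92.0, "tool_b": 88.0, "tool_c": 85.0}

# After normalization, "tool_a leads both benchmarks" becomes a like-for-like
# statement even though the raw scales differ.
norm_swe = min_max_normalize(swe_bench)
norm_he = min_max_normalize(humaneval)
```

Min-max is deliberately simple; a production scheme would also account for each benchmark's difficulty and score distribution.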
For CTOs and Engineering Leaders
If you're evaluating AI coding tools for your team, this dashboard compresses your research timeline. Instead of spending a typical 6-12 week evaluation cycle, you can:
- Week 1: Review benchmark scores, GitHub adoption, and sentiment across all top tools
- Week 2: Filter by your stack (Python/TypeScript/etc.) and use case (refactoring vs new features)
- Week 3-4: Run pilots with the top 2-3 tools that match your criteria
The data that used to take 40-80 hours of manual research is now available in 5 minutes.
What's Coming Next
This is version 1.0 of the free dashboard. Here's what we're adding:
- Historical trend charts - See benchmark performance over time (currently showing latest scores)
- Tool comparison view - Side-by-side comparison of 2-5 tools across all metrics
- Custom alerts - Get notified when specific tools cross thresholds
- API access - Pull our data programmatically (Professional tier)
- More benchmarks - Adding FeatBench and domain-specific benchmarks
Ready to Explore?
No signup required. Just open the dashboard and start exploring.
Open Free Dashboard →
Our Commitment
The free dashboard will always be free. We won't add an email gate. We won't limit data freshness for free users beyond what's necessary to manage costs (currently 7-day delay on some datasets).
We'll make money through Professional tier subscriptions ($49/mo) for teams that need real-time data, API access, and advanced features like cross-benchmark normalization. Enterprise tier ($2,000+/mo) will serve CTOs who need decision engines, peer benchmarks, and custom research.
But the core dashboard—the one you see today—stays free.
Join 1,000+ Developers Already Using It
Since launch, we've seen developers from Google, Microsoft, Anthropic, OpenAI, and hundreds of startups using the dashboard to track AI coding tool trends. The data is helping teams make better decisions about tool adoption.
Now it's your turn.
Questions?
Email us: intelligence@vibe-data.com
Twitter: @vibe_data