I officially open-sourced ai-aggregator, the dashboard I use every day to keep up with new AI model releases across the sources that actually matter: Artificial Analysis leaderboards, OpenRouter, Replicate, and fal.ai. You can find the repository at github.com/adamholter/ai-aggregator.
I built this because new models show up constantly in different places. If you want to compare them and track what is new, you either spend your day checking five tabs or you automate it. I chose automation. It is a unified interface to track and compare models across providers without the friction of separate logins or fragmented leaderboards. This tool has been part of my personal workflow for months, and I still use it every single day to stay current.
The dashboard aggregates data from six primary streams for real-time model tracking.
What ai-aggregator Does
The core function is aggregation. It automatically pulls new models appearing on fal.ai, OpenRouter, Replicate, or the Artificial Analysis leaderboards. It also includes live monitoring from AI News on X and Discord. This is important because model availability and model announcements are often two separate timelines. A leaderboard might update hours after a provider adds something, and X might mention something days before it shows up in a UI. Having these signals in one place is the only way I have found to reliably track the field without it becoming a full-time job.
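The core pattern is simple: fetch current model listings from each source, normalize them into one schema, and diff against what has already been seen. Here is a minimal sketch of that loop in Python. The fetchers and model IDs below are hypothetical stand-ins, not the repo's actual code; in practice each fetcher would hit a provider's API (OpenRouter, for instance, exposes a public models endpoint).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    provider: str
    model_id: str

# Hypothetical fetchers: the real tool would call each provider's API.
# Static data stands in here so the diffing logic is visible.
def fetch_openrouter():
    return [{"id": "meta-llama/llama-3-70b"}, {"id": "new-lab/new-model"}]

def fetch_replicate():
    return [{"owner": "stability-ai", "name": "sdxl"}]

def normalize(provider, raw):
    """Collapse each provider's response shape into one Model schema."""
    if provider == "openrouter":
        return [Model("openrouter", m["id"]) for m in raw]
    if provider == "replicate":
        return [Model("replicate", f'{m["owner"]}/{m["name"]}') for m in raw]
    return []

def diff_new(current, seen):
    """Return models present now that were not previously seen."""
    return [m for m in current if m not in seen]

# Models already tracked from a previous polling run.
seen = {
    Model("openrouter", "meta-llama/llama-3-70b"),
    Model("replicate", "stability-ai/sdxl"),
}

current = (normalize("openrouter", fetch_openrouter())
           + normalize("replicate", fetch_replicate()))

new_models = diff_new(current, seen)
for m in new_models:
    print(f"NEW: {m.provider}/{m.model_id}")
```

Run on a schedule, this surfaces only the delta since the last check, which is the part that saves you from re-reading five tabs of mostly unchanged listings.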
This falls into the category of developer-focused tools that prioritize unified access. Much like how I track the AI Chatbot Market Share, this tool is about seeing the full picture rather than just one silo. It handles the manual labor of checking for updates so I can focus on testing the actual models.
Development and Known Issues
I have been building this for a few months. Interestingly, Opus 4.5 sped up the development significantly. As I noted in my thoughts on Claude Code and Opus 4.5, when the model finally grows into the harness, you can ship functional tools much faster. However, that speed comes with a trade‑off. If you are a real developer, there is a lot in this codebase that might trigger your sensibilities. I focused on making it work first. It is what I call vibe‑coded slop in some areas, but it is functional slop that I rely on daily.
There are some known issues that I am being transparent about. First, the agent UI is rough in places. It is not a polished consumer product; it is a utility. Second, the agent sometimes fails on certain kinds of prompts. This is part of why I am open‑sourcing it now. I have reached the point where I would rather have the community help fix the edge cases and improve the aesthetics than keep it private and pretend I am going to find the time to make it pretty myself.
The Open Source Angle
Open source for me is mostly about privacy and driving down costs. By making this tool public, anyone can fork it, change the providers they track, or refine the prompting logic. In the back‑and‑forth between open and closed source, open tools allow for this kind of customization that proprietary dashboards usually block. If you want to see how this fits into the broader 2026 landscape, my 2026 AI Predictions cover why these types of external tracking systems are becoming necessary as models become more specialized.
The current state of the repository is a functional baseline. It is exactly what I use. If you find it helpful for your own workflow, go ahead and give it a star on GitHub. If you are interested in making it less ugly or more robust, please contribute. I am not protective of the code; I just want a tool that works. Now that it is open source, anyone can help make it look as good as it functions.