This image is not AI generated; it is a screenshot of the AI dashboard.

AI Dashboard Update: A Central Hub for Artificial Analysis, OpenRouter, fal and More

A single place to follow the models and chatter that matter

Do you like Artificial Analysis? How about OpenRouter, Inc? Or fal? So do I. The problem for anyone who actually follows model releases, pricing changes, and community testing is that the information is spread across a dozen different sources. One site has benchmarks, another has API changes, Discord threads hide useful test results, and social feeds are noisy. I built a dashboard to collect the data I want into a single, usable interface.

Who this is for

This is a tool for developers, model researchers, and power users who need quick access to data that affects decisions. If you pick an API for production, you care about uptime and response quality. If you pick a model to test, you want to see recent benchmarks and community notes. If you follow pricing, you want changes surfaced quickly. The dashboard brings together model performance signals, community chatter, and news so you can act without hopping between tabs.

What the dashboard collects

The main sources I pull in are model aggregators like Artificial Analysis, provider pages such as OpenRouter, provider queues and APIs like fal, plus social and community sources that often show real-world behavior faster than a press release. It also includes my X feed, MattVidPro’s Discord, and Testing Catalog News. The idea is not to replace deep research but to make the first pass of discovery and triage much faster.

New updates have shipped, and the dashboard should be a lot better now. If you follow AI work, I recommend trying it.

Release notes

  • Moved several tabs out of experimental mode, including the monitor, the testing catalog, the hype feed, and the latest tab
  • Fixes to the background agent that fetches and normalizes data
  • General user interface improvements to make scanning and filtering faster
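To give a sense of what "fetches and normalizes data" means in practice, here is a simplified sketch of the normalization step: each source returns its own payload shape, and the agent maps them onto one shared record. The field names and `FeedItem` type here are illustrative, not the dashboard's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class FeedItem:
    """One normalized record, regardless of which source produced it."""
    source: str        # e.g. "artificial-analysis", "openrouter", "fal"
    title: str
    url: str
    published: datetime


def normalize(source: str, raw: dict) -> FeedItem:
    """Map a source-specific payload onto the shared record.

    A real agent would have one mapping per source; this assumes a
    hypothetical payload with "title", "url", and a Unix "ts" field.
    """
    return FeedItem(
        source=source,
        title=raw.get("title", "").strip(),
        url=raw.get("url", "").strip(),
        published=datetime.fromtimestamp(raw.get("ts", 0), tz=timezone.utc),
    )
```

Once every source funnels through a function like this, the rest of the pipeline (deduplication, filtering, rendering) only ever sees one shape.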

Roadmap

  • AI filtering, so you can build focused feeds for the exact signals you care about, such as open-source models with long context windows or pricing alerts for specific providers
  • Make the monitor closer to real time. It currently updates once per day; the next step is to surface status and performance changes as they happen, so production teams can react faster

Why this approach

There are two practical goals guiding the dashboard design. First, reduce the time it takes to spot meaningful changes. A pricing tweak, a queue issue, or a community test that shows a regression should not be buried. Second, present enough context so you do not have to follow every link to understand the impact. The feed items link back to original sources when deeper investigation is needed.

The monitor gives a rolling view of model status and simple performance metrics. The testing catalog is a place to find recent benchmarks and community test results. The hype feed surfaces what people are talking about on X and Discord without treating every mention like a headline. The latest tab is a chronological stream that helps you see recent activity at a glance.

I made stability and usability the focus of this release. That meant making the data agent more reliable, improving how items are deduplicated, and cleaning up the UI so you can find relevant items with fewer clicks. I still consider this an iteration rather than a final product, but the current changes make it more useful for anyone who spends time evaluating models and providers.
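Deduplication across sources mostly comes down to canonicalizing URLs before comparing them, since the same announcement arrives via X, Discord, and RSS with different tracking parameters attached. A minimal sketch of that idea, assuming items are dicts with a "url" field and using an illustrative (not exhaustive) list of tracking parameters:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Hypothetical set of query parameters to strip before comparing URLs.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "ref"}


def canonical_url(url: str) -> str:
    """Lowercase the scheme and host, drop tracking params and
    trailing slashes, so near-duplicate links compare equal."""
    parts = urlsplit(url)
    query = urlencode(
        [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    )
    return urlunsplit(
        (parts.scheme.lower(), parts.netloc.lower(), parts.path.rstrip("/"), query, "")
    )


def dedupe(items: list[dict]) -> list[dict]:
    """Keep the first item seen for each canonical URL, preserving order."""
    seen, unique = set(), []
    for item in items:
        key = canonical_url(item["url"])
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique
```

Keeping the first occurrence (rather than the last) means the earliest source to report something wins, which matches how a chronological feed is read.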

Try the dashboard and tell me what matters most for your workflow. The site is here: https://ai-dashboard-c8bp.onrender.com/