
How I Stay Informed and Prepared for AI Interviews: My Custom Dashboard and Gemini 3.0 Pro Prep

The day before Thanksgiving, I had an interview with the BBC. I finally got the actual recording, which is much better than the video of the TV screen I had before. The way I secured it is a little interesting: I asked the BBC for the footage, but they never got back to me. Meanwhile, someone on X didn't believe the interview was real; they thought I was perhaps the 'real' Adam Holter's nephew.

I wanted to detail two things that are crucial for operating in this space: first, the system I use to stay current, and second, how I prepared for a high-stakes media interview using AI.

How I Stay Informed in the AI Space: The Custom Dashboard

Staying informed in AI is essentially a full-time job. The speed of releases, the cost changes, and the constant flow of new research mean that traditional methods like just reading a few newsletters or following a handful of accounts simply won't cut it anymore. I use newsletters and X, but the backbone of my information flow is a tool I developed myself: a custom AI dashboard.

This tool automatically aggregates and organizes data from various sources across the web. It's designed to give me a real-time, objective view of the AI model landscape, moving past marketing claims and straight to the metrics that matter.

Data Sources and Aggregation

The dashboard pulls in data from multiple specialized platforms, each serving a specific purpose for monitoring different aspects of the AI ecosystem:

  • LLM Performance and Cost: I pull information from OpenRouter and Artificial Analysis. These sources provide the latest data on Large Language Models, including benchmark scores, running costs, and other relevant usage data. All of this is aggregated into dedicated tabs for quick comparison.
  • Media Generation Models: The dashboard collects information from Fal.ai and Replicate. These platforms generally host media generation models (images, video, audio). Any new model that appears in this space is usually added to these platforms quickly, meaning I find it on the dashboard almost immediately.
  • Leaks and Early Releases: I also collect data from Testing Catalog. They are very good at gathering leaks and new release information about models, often before they have been officially released. This gives me a significant head start on understanding what's coming.
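To give a sense of the aggregation step, here's a minimal sketch of how records from different sources can be normalized into one schema before landing in a comparison tab. The field names loosely follow OpenRouter's public /api/v1/models response and Replicate's model listing, but treat the exact keys as assumptions rather than my production code:

```python
# Sketch of the aggregation step: flatten records from different sources
# into one schema so they can share a comparison tab. Key names here are
# assumptions for illustration.

def normalize_openrouter(record: dict) -> dict:
    """Flatten an OpenRouter-style LLM record into the dashboard schema."""
    pricing = record.get("pricing", {})
    return {
        "name": record.get("id", "unknown"),
        "source": "openrouter",
        "prompt_cost_per_token": float(pricing.get("prompt", 0)),
        "completion_cost_per_token": float(pricing.get("completion", 0)),
    }

def normalize_replicate(record: dict) -> dict:
    """Flatten a Replicate-style media-model record into the same schema."""
    return {
        "name": f"{record.get('owner', '?')}/{record.get('name', '?')}",
        "source": "replicate",
        # Media models are billed per run or per second, not per token.
        "prompt_cost_per_token": None,
        "completion_cost_per_token": None,
    }

def aggregate(openrouter_rows: list, replicate_rows: list) -> list:
    """Merge both feeds into one list the comparison tabs can sort."""
    rows = [normalize_openrouter(r) for r in openrouter_rows]
    rows += [normalize_replicate(r) for r in replicate_rows]
    return rows
```

Once everything shares a schema, sorting by cost or filtering by source is trivial, which is what makes the side-by-side tabs possible.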

Real-Time Monitoring and Filtering

Beyond static data pulls, the tool has a rigorous monitoring system:

  • It uses a monitor feed configured to pull information from MatVidPro's Discord, which I also follow, and my X feed, focusing on a few key figures for real-time news monitoring.
  • It runs a daily sweep every morning for important news.
  • A constant monitoring system checks every three hours for new posts from certain X accounts. These accounts usually cover every major release instantly, ensuring the dashboard is always up to date even if the information isn't caught by the general aggregation methods.
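At heart, the three-hour check is just a polling loop. A rough sketch (not my actual implementation; `fetch_new_posts` and `handle` are stand-ins for the X fetcher and the dashboard's storage layer):

```python
import time

POLL_INTERVAL_S = 3 * 60 * 60  # three hours between checks

def poll_once(fetch_new_posts, handle) -> int:
    """One pass: pull anything new and hand each post to the dashboard."""
    posts = fetch_new_posts()
    for post in posts:
        handle(post)
    return len(posts)

def poll_forever(fetch_new_posts, handle, interval_s: int = POLL_INTERVAL_S):
    """Run the check on a fixed cadence, forever."""
    while True:
        poll_once(fetch_new_posts, handle)
        time.sleep(interval_s)
```

Separating `poll_once` from the loop keeps the fetch-and-store pass easy to test and to trigger manually when something big drops between cycles.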

The most useful feature is the Latest tab. It filters all the incoming data and only shows you what has happened in the last 24 hours. You can adjust that filter to a week, or enable the Hype tab, which pulls information from what's currently popular on GitHub, Reddit, and Replicate.
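The Latest filter itself is a simple cutoff over timestamped items. A minimal sketch, assuming each item carries a timezone-aware `published_at` field (that field name is my illustration, not the dashboard's actual schema):

```python
from datetime import datetime, timedelta, timezone

def latest(items: list, window_hours: int = 24, now: datetime = None) -> list:
    """Keep only items published inside the window (the 'Latest' filter).

    Assumes each item is a dict with a timezone-aware 'published_at'
    datetime; widening window_hours to 168 gives the one-week view.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(hours=window_hours)
    return [item for item in items if item["published_at"] >= cutoff]
```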

I currently host and maintain the dashboard at this URL: https://ai-dashboard-c8bp.onrender.com/. It also supports an API with a /docs page for developers who want to integrate this data into their own systems.

| Dashboard Source | Primary Data Focus | Benefit |
| --- | --- | --- |
| OpenRouter / Artificial Analysis | LLM benchmarks, usage costs | Objective performance and financial metrics |
| Fal.ai / Replicate | Media generation model availability | Quick access to new creative models |
| Testing Catalog | Leaks and pre-release information | Anticipating major announcements |
| MatVidPro Discord / X feeds | Real-time news and commentary | Instant, human-vetted updates |

Other Information Channels

While the dashboard handles data, I still rely on human analysis and commentary. YouTube is a primary source. I'm subscribed to a lot of channels, including Theo Brown, AICodeKing, AI Search, Matt Wolfe, MatVidPro, Sam Witteveen, IndieDevDan, the official Anthropic and OpenAI accounts, Theoretically Media, Developers Digest, Two Minute Papers, AI Explained, Caleb Writes Code, The Feature Crew, and Fireship. Simon Willison's blog at simonwillison.net is also highly recommended, even if his YouTube presence is minimal. I need diverse perspectives to avoid falling into echo chambers. On that note, I need to say this again: a lot of people are subscribed to Matthew Berman. He is a grifter. Do not listen to him.

How I Prepped for the BBC Interview

When I had the BBC interview scheduled, I decided to treat the preparation like a coding project. I used AI Studio, which is a great place to build single-purpose tools you probably won't be sharing with anyone else. It gives you access to a bunch of APIs without needing to go get API keys, and it's totally free to use with Gemini 3.0 Pro, one of the best models available.

The AI Interview Simulator

My prep tool used Gemini 3.0 Pro in a few different roles:

  1. I used the Gemini Live AI to simulate the interviewer, feeding it the email back-and-forth about what the interview should cover so it had context on the expected focus.
  2. I also used a separate Gemini 3.0 Pro instance as a real-time critique and training partner, seeded with a GPT-5 thread where I had previously brainstormed prep and curveball questions.
  3. After each simulated round, the critique instance would analyze my performance, pointing out weak arguments, unclear language, or areas where I lacked depth.
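For anyone who wants to build something similar, the critique role is roughly this pattern, sketched here with the google-genai Python SDK. The rubric wording and the model id are illustrative assumptions, not a copy of my actual setup:

```python
# The critique rubric and model id below are illustrative assumptions.
CRITIQUE_RUBRIC = (
    "You are an interview coach. Given my answer to an interviewer's "
    "question, point out weak arguments, unclear language, and places "
    "where I lack depth. Give concrete, ranked feedback."
)

def build_critique_prompt(question: str, answer: str) -> str:
    """Assemble the prompt the critique instance sees after each round."""
    return (
        f"{CRITIQUE_RUBRIC}\n\n"
        f"Interviewer question:\n{question}\n\n"
        f"My answer:\n{answer}"
    )

def critique(question: str, answer: str,
             model: str = "gemini-3-pro-preview") -> str:
    """Send one practice round to Gemini for feedback.

    Requires `pip install google-genai` and a GEMINI_API_KEY in the
    environment; the model id is an assumption, so check the current
    list in AI Studio.
    """
    from google import genai  # imported here so the sketch loads without the SDK
    client = genai.Client()
    resp = client.models.generate_content(
        model=model,
        contents=build_critique_prompt(question, answer),
    )
    return resp.text
```

Keeping the prompt assembly separate from the API call makes it easy to tweak the rubric between rounds without touching the rest of the loop.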

I practiced this cycle several times until I felt comfortable with the pace and the technical details. The live voice AI is good enough now that practicing out loud with it is actually useful for simulating pressure and timing.

Expectations vs. Reality

This preparation was critical because the actual interview took a pretty different angle from what I had expected. My prep focused heavily on Gemini 3.0 Pro and Nano Banana Pro, which were mentioned in the preliminary emails. However, the interview ended up focusing more on the NVIDIA stock dip and Google's TPUs. Because I had used the AI to simulate curveball questions and had generally researched the broader hardware market (a topic I've covered before, like in my post on NVIDIA's dominance), I was able to adjust quickly.

The Value of AI Studio

I highly recommend AI Studio for this kind of bespoke, personal-use automation. The value proposition is simple: you get access to top-tier models like Gemini 3.0 Pro, the ability to chain them together in custom workflows, and you don't have to deal with API key management or billing for small, self-contained projects. For anyone “vibe-coding,” AI Studio is a practical, free solution.

Final Takeaway

To succeed in the AI field, you need two things: a disciplined system for information retrieval and an innovative approach to preparation. My custom dashboard ensures I don't miss crucial data points on LLMs and media models, and using AI Studio for interview simulation ensures I'm ready for the questions I don't expect. The tools are here; the only requirement is to build the systems around them.