Cover image: "GPT Image 1.5" in black sans-serif text on a white background.

ChatGPT Images v1.5 Is Here: Better Editing, Still Not the Model That Beats Nano Banana Pro

OpenAI just shipped a new version of ChatGPT Images powered by a new flagship image model: GPT Image 1.5.

The first thing to understand is the naming: this is 1.5, not an “Images v2” moment. A lot of people expected a clean reset and a big generational jump. Instead, OpenAI is shipping an upgrade that’s mostly about making image generation and editing inside ChatGPT feel more dependable, with a corresponding API model called gpt-image-1.5.

And if you’re the type of person benchmarking against the current top image models, this launch is not screaming “we’re here to win the quality-per-dollar war.” It reads like OpenAI trying to make images a mainstream feature in the ChatGPT product, not a niche tool for people who spend their nights comparing models.

What OpenAI says improved in ChatGPT Images

OpenAI’s product claims for the new ChatGPT Images experience are pretty clear and, to their credit, focused on practical failure modes people complain about:

  • More reliable, precise edits that change only what you ask for while keeping lighting, composition, and likeness intact.
  • Faster generation, with OpenAI calling out “up to 4x faster.”
  • Better instruction following for detailed prompts and multi-step changes.
  • Improved dense text rendering for signs, posters, UI mockups, and covers.
  • Better handling of lots of small faces in one image, with more natural results.

They also describe editing in a set of verbs that matches how normal people talk: add, subtract, combine, blend, transpose. That’s a useful framing because it tells you what they’re optimizing for: iterative editing where the model does not drift every time you make a small request.

That last part is the heart of this release. If the model can preserve what you already like and only touch what you asked to change, that’s a major usability improvement, even if the first draft is not the prettiest output on the market.

The UI change matters more than people want to admit

OpenAI added a dedicated Images creation space in ChatGPT, including:

  • Preset filters and prompts to start quickly.
  • Trending prompts that update regularly.
  • A one-time likeness upload flow you can reuse across creations.

Most users are not “model shoppers.” They’re not comparing Nano Banana Pro vs Seedream vs OpenAI’s latest. They’re sitting inside ChatGPT and they want a result that looks reasonable without learning a new workflow.

So yes, the UI changes are a big part of what OpenAI shipped. If you care about mainstream adoption, "a better place to make images" usually beats "a slightly better benchmark score."

API details: where GPT Image 1.5 fits, and where it does not yet

From an API perspective, there are two separate stories:

  • Image API: the classic endpoints for generations and edits. This is where gpt-image-1.5 is available today.
  • Responses API: conversational, multi-step flows where a text model can call an image tool. This currently supports gpt-image-1 and gpt-image-1-mini as image tools, and the docs say support for gpt-image-1.5 is “in progress.”

If you’re building agent-style systems where the text model plans and calls tools across steps, that “in progress” line matters. You can still use 1.5 by calling the Image API directly, but it’s not yet the cleanest plug‑in for tool‑based flows.
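For the direct route, a minimal sketch of calling the Image API with the new model looks like the following. This assumes gpt-image-1.5 accepts the same parameters as gpt-image-1 in the official `openai` Python SDK (model name per this post; the prompt and output path are placeholders):

```python
# Sketch: one generation via the Image API with gpt-image-1.5.
# Assumes the model takes the same parameters as gpt-image-1;
# requires OPENAI_API_KEY in the environment when actually called.
import base64


def build_request(prompt: str) -> dict:
    """Request parameters for one high-quality, 16:9-ish output."""
    return {
        "model": "gpt-image-1.5",
        "prompt": prompt,
        "size": "1536x1024",  # closest official preset to 16:9
        "quality": "high",
    }


def generate(prompt: str, out_path: str = "out.png") -> None:
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY
    result = client.images.generate(**build_request(prompt))
    # gpt-image models return base64-encoded image data, not a URL
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))


# e.g. generate("A neon sign reading 'OPEN LATE' over a rainy street")
```

If you later need the same generation inside a Responses API tool loop, the plan would be to swap this direct call for the image tool once 1.5 support lands there.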

OpenAI is also making the platform direction explicit: DALL·E 2 and DALL·E 3 are deprecated, and OpenAI says they will stop being supported on May 12, 2026. If you have legacy DALL·E usage, you should plan a migration. Waiting until the last minute is how you end up with a weekend emergency that never needed to happen.

The question everyone asks: what does one high‑res 16:9‑ish output cost?

OpenAI prices image generation by image tokens. The docs provide example output token counts by resolution and quality. For a single high quality output, the closest official preset to a 16:9 frame is 1536×1024.

The example output token count for high quality at 1536×1024 is:

  • 6208 output image tokens

That number is only output tokens. Your real cost includes:

  • Input text tokens
  • Input image tokens if you provide reference images
  • Potentially higher input image tokens if you set input_fidelity=high

So if you’re asking “what is the price in dollars,” the honest answer is: it depends on your account’s per‑token image rates for that model. The token count is the useful constant. For one high quality 1536×1024 output, start with 6208 output image tokens, then add inputs; at current rates that works out to roughly 20 cents.
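The arithmetic behind that estimate is straightforward. The rate below is an assumption on my part: GPT Image 1’s commonly cited $40 per 1M output image tokens, discounted 20% per OpenAI’s claim, gives $32 per 1M, which reproduces the ~20 cent figure; check your account’s current pricing page before budgeting:

```python
# Back-of-envelope cost for one high-quality 1536x1024 output.
# ASSUMED rate: $32 per 1M output image tokens (20% below GPT Image 1's
# $40/1M); verify against your account's current pricing.
OUTPUT_IMAGE_TOKENS = 6208   # docs example: high quality, 1536x1024
RATE_PER_MILLION = 32.00     # USD, assumed


def output_cost(tokens: int, rate_per_million: float) -> float:
    """Dollar cost of the output image tokens alone (inputs excluded)."""
    return tokens * rate_per_million / 1_000_000


cost = output_cost(OUTPUT_IMAGE_TOKENS, RATE_PER_MILLION)
print(f"~${cost:.2f} before input tokens")  # prints "~$0.20 before input tokens"
```

Input text tokens, reference-image tokens, and `input_fidelity=high` all add to this, so treat it as a floor, not a quote.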

OpenAI also notes that complex prompts can take up to 2 minutes. That is not a theoretical detail. Latency decides whether a workflow is pleasant or annoying, especially if you expect people to do several iterations.

Bar chart: example output image token counts by size and quality, from the OpenAI docs. High quality at 1536×1024 is 6208 output tokens, before inputs.

OpenAI says it is cheaper than GPT Image 1, but that is not the comparison power users care about

OpenAI says GPT Image 1.5 is 20% cheaper than GPT Image 1 for image inputs and outputs. That is a real improvement if you were already paying for GPT Image 1.

But the comparison most people are making right now is not “new OpenAI image model vs old OpenAI image model.” It’s “new OpenAI image model vs the best model I can buy today.” And in that comparison, my take is pretty blunt:

  • It looks worse than Nano Banana Pro on raw quality, speed, and cost at comparable resolutions.
  • It does not go up to 4K the way Nano Banana Pro can.
  • Instruction following can be a bright spot, and in a few cases it may follow a specific instruction that Nano Banana Pro misses.

I’m not claiming OpenAI is trying to beat Nano Banana Pro head‑on. But if that was the unspoken expectation from the audience, then GPT Image 1.5 is not that product.

On cost, the story is also messy. I keep seeing “cheaper” repeated, but relative to the models people are benchmarking against, I’m not seeing it. If anything, based on the comparisons I’ve been able to make so far, GPT Image 1.5 is about 40% more expensive than Nano Banana Pro for the same resolution, while also being limited on max resolution.

So I’d call this a weak launch if it was meant to win the model‑nerd leaderboard. But it is still a solid upgrade for the average ChatGPT user, which is where OpenAI has the biggest distribution advantage.

If you want broader context on how close competitors are getting on text and consistency, I wrote about it here: Seedream 4.5 vs. Nano Banana Pro: ByteDance’s Model Gets Closer on Text and Consistency.

Why better editing is the real product here

There’s a reason OpenAI keeps talking about “edits that only change what you asked for.” If you use image models for anything besides posting pretty pictures, you spend most of your time doing revisions.

Teams want things like:

  • Change the copy on the sign without changing the scene.
  • Swap the product color but keep the reflections and shadows consistent.
  • Move an object to the left, keep the framing, keep the vibe, keep the person’s face the same.

That is what “multi‑turn editing consistency” buys you. It’s not glamorous, but it’s what makes image generation usable for repeatable work.
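Requests like the ones above map to the Image API’s edits endpoint. A minimal sketch, assuming gpt-image-1.5 accepts the same edit parameters as gpt-image-1 (including `input_fidelity`, which the docs describe for preserving faces and logos at the cost of more input tokens; file paths and prompt are placeholders):

```python
# Sketch: a "change only what I asked" edit via the Image API.
# Assumes gpt-image-1.5 takes the same edit parameters as gpt-image-1.
import base64


def build_edit_request(prompt: str) -> dict:
    return {
        "model": "gpt-image-1.5",
        "prompt": prompt,
        # Better preservation of faces/logos; billed as extra input tokens.
        "input_fidelity": "high",
    }


def edit_image(src_path: str, prompt: str, out_path: str) -> None:
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY
    with open(src_path, "rb") as src:
        result = client.images.edit(image=src, **build_edit_request(prompt))
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))


# e.g. edit_image(
#     "storefront.png",
#     "Change the sign text to 'OPEN LATE'; keep everything else identical",
#     "storefront_v2.png",
# )
```

The prompt pattern matters as much as the parameters: state the one change you want, then explicitly say what must stay the same.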

OpenAI is also pushing the “one‑time likeness upload” concept inside ChatGPT. If you’re a mainstream user who just wants a consistent face across a few images, that’s a big friction reducer. Power users can argue about the best model. Most people just want fewer steps.

Extra signals: GPT‑5.2 Mini leak, and possible Flash updates

Outside of the image model itself, there are two things people are watching right now.

  • A leaked screenshot showing GPT‑5.2 Mini on an OpenAI surface. That could mean a launch soon. For related background, here’s my earlier coverage: OpenAI GPT‑5.2 Launching December 9th Under Code Red: Strong on Reasoning, Weak on Design Taste.
  • Logan Kilpatrick posting three lightning bolts, which often points to a Flash release approaching. If that ends up meaning Gemini 3 Flash, and if it pairs with something like Nano Banana 2 Flash, then OpenAI’s cost and speed pressure gets worse immediately.

The release volume over the last few months has been absurd, and it’s not slowing down with two weeks left in the year.

My take

GPT Image 1.5 makes ChatGPT Images better: more consistent editing, better text rendering, a cleaner creation space, and a smoother “make tweaks until it’s right” loop. That is a meaningful product improvement for the giant group of people who will only ever use images through ChatGPT.

But if your bar is “beat Nano Banana Pro on speed, cost, and raw output quality,” this is not that. It’s a step forward for OpenAI’s default user. It’s not the model I would pick if the goal is the best image output per dollar.