Perplexity has launched its new AI assistant for Android, right alongside OpenAI’s Operator announcement. The timing wasn’t ideal for me, since I wasn’t home to test both immediately, but the capabilities of Perplexity’s offering deserve attention.
The Perplexity Assistant moves beyond being just another search tool: it’s a fully integrated system that can actually do things. It calls other apps, maintains context across different actions, and handles multi-step tasks. Want to find a restaurant and book a table? The assistant will research options and help make the reservation. Need a reminder about an upcoming event? It will find the details and set it up.
One particularly interesting aspect is the multimodal functionality. You can point your camera at something and ask about what you’re seeing, or have it analyze what’s on your screen. This moves us closer to having AI that understands and interacts with our physical world.
The assistant also supports 15 languages at launch, which is impressive. That kind of global reach puts pressure on established players like Siri and Alexa to expand their own capabilities.
I discussed similar developments in AI assistants in my previous post about [OpenAI’s subscription challenges](https://adam.holter.com/openais-subscription-problem-a-case-for-pay-as-you-go-pricing/), and this launch from Perplexity further reinforces the rapid advancement in this space.
Even though the release alongside OpenAI’s Operator left me no chance for immediate testing, the feature set looks promising. I’ll be diving deeper into both systems soon to compare their real-world performance and usefulness.