Key Takeaways
  1. Perplexity’s new “Computer” orchestrates 19 AI models in parallel from a single conversation, planning work, delegating subtasks, and assembling results.
  2. The enterprise version’s connectors (Slack, Snowflake, Salesforce, HubSpot) target the hardest part of enterprise AI: reaching and using real business data, not just having a smart model.
  3. Multi-model orchestration can improve outcomes by routing each subtask to the best-fit model for quality or cost, avoiding the one-architecture-fits-all constraint of single-model products.
  4. The main trade-off shifts dependency from a single model vendor to the orchestrator, where poor routing decisions can degrade results even if the underlying models are strong.
  5. Perplexity is betting models will commoditize and value will move to orchestration, but that only holds if routing reliably exploits meaningful differences between models rather than treating them as interchangeable.


Perplexity launched what it calls "Computer" this week, a workspace that orchestrates 19 AI models in parallel from a single conversation. You describe a project. The system plans it, delegates subtasks to whichever model is best suited for each one, and assembles the results.

The enterprise version connects to Slack, Snowflake, Salesforce, and HubSpot. The consumer version, branded "Personal Computer," runs continuously on a dedicated local device like a Mac Mini, accessing your files and applications autonomously.

This is a different product category from the ones OpenAI, Anthropic, and Google are building. Those companies sell individual models. Perplexity is selling orchestration across everyone else's models.

Why 19 models matters

Single-model products have a fundamental constraint: every task gets processed by the same architecture, regardless of whether that architecture is the best fit. Your customer support summary and your financial analysis go through the same model with the same strengths and weaknesses.

Multi-model orchestration breaks that constraint. A coding task routes to the model that benchmarks highest on code. A summarization task routes to the model that's cheapest per token while maintaining acceptable quality. A reasoning task routes to the model with the strongest chain-of-thought capability.
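The routing idea described above can be sketched in a few lines. Everything here is an assumption for illustration: the task types, model names, and routing table are invented placeholders, not Perplexity's actual logic or a claim about which vendor wins each category.

```python
from dataclasses import dataclass

# Hypothetical routing table: task type -> (model, reason for the choice).
# Model names are placeholders, not real vendors or benchmark results.
ROUTING_TABLE = {
    "code": ("code-specialist", "highest coding benchmark score"),
    "summarize": ("cheap-fast", "lowest cost per token at acceptable quality"),
    "reason": ("strong-cot", "strongest chain-of-thought performance"),
}
DEFAULT = ("generalist", "fallback for unclassified tasks")

@dataclass
class RoutedTask:
    task_type: str
    model: str
    reason: str

def route(task_type: str) -> RoutedTask:
    """Map a classified subtask to the model assumed to fit it best."""
    model, reason = ROUTING_TABLE.get(task_type, DEFAULT)
    return RoutedTask(task_type, model, reason)

print(route("code").model)     # code-specialist
print(route("poetry").model)   # generalist (no table entry, falls back)
```

The hard part an orchestrator actually sells is hidden in the first argument: classifying a freeform request into a task type. The lookup itself is trivial; the classification is where routing goes right or wrong.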

| Approach | What you get | What you give up |
| --- | --- | --- |
| Single model (ChatGPT, Claude) | Consistent interface, one vendor relationship | Best-fit capability per task |
| Multi-model orchestration (Perplexity) | Best model for each subtask, cost optimization | Vendor lock-in to the orchestrator |

The trade-off is real. You're no longer dependent on one model provider. You're dependent on the orchestration layer instead. If Perplexity's routing makes poor choices about which model handles which task, the output quality suffers regardless of how good the individual models are.

The enterprise connector story

The Slack, Snowflake, Salesforce, and HubSpot integrations are where this gets commercially interesting. Most enterprise AI deployments struggle with the same problem: the model is smart but it can't reach the data it needs. Connecting to the systems where business data actually lives is the hard part, not the model itself.

Perplexity is positioning Computer as the layer that sits between your business systems and multiple AI models simultaneously. Your CRM data flows to whichever model handles relationship analysis best. Your data warehouse queries route to whichever model processes structured data most efficiently.

Whether this works depends entirely on the quality of the routing logic. If the orchestrator consistently picks the right model for the right task, it's genuinely more capable than any single model. If it picks wrong, you get a worse result than you'd have gotten by just picking one good model and sticking with it.
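One way to make "picks wrong" concrete is a routing-regret measure: the average quality the router leaves on the table versus an oracle that always picks the best model per task. The scores and model names below are invented for illustration; this is a back-of-envelope sketch, not an evaluation of any real system.

```python
# Invented quality scores: task -> {model: score}. Illustrative only.
SCORES = {
    "t1": {"m1": 0.9, "m2": 0.7},
    "t2": {"m1": 0.6, "m2": 0.8},
    "t3": {"m1": 0.5, "m2": 0.9},
}

def regret(routing: dict) -> float:
    """Average gap between the routed model's score and the best available."""
    gaps = []
    for task, scores in SCORES.items():
        best = max(scores.values())
        gaps.append(best - scores[routing[task]])
    return sum(gaps) / len(gaps)

perfect = {"t1": "m1", "t2": "m2", "t3": "m2"}   # oracle routing
poor = {"t1": "m2", "t2": "m1", "t3": "m1"}      # consistently wrong routing
single_m2 = {"t1": "m2", "t2": "m2", "t3": "m2"} # just always use one model

print(regret(perfect))    # 0.0
print(regret(poor))       # ≈ 0.27
print(regret(single_m2))  # ≈ 0.07
```

On these made-up numbers the bad router loses to simply picking one decent model and sticking with it, which is exactly the failure mode the paragraph describes.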

What this means for the market

Perplexity is betting that the model layer commoditizes. If individual models become interchangeable for most tasks, the value moves to whoever routes between them most effectively. It's the same bet that cloud providers made about servers: the hardware doesn't matter, the orchestration layer does.

The counterargument is that models aren't commodities yet. Claude handles long-context tasks differently from GPT; Gemini processes multimodal inputs differently from either. These differences matter for specific use cases, and a routing layer that treats models as interchangeable might miss the nuances that make each one valuable for particular tasks.

The market will answer this question within the year. If Perplexity Computer delivers better results than single-model products for enterprise workflows, the orchestration-layer approach wins. If the routing introduces more errors than it prevents, single-model simplicity wins. There's no middle ground on this one.

What This Means For You

If you’re evaluating AI tools for work, test whether orchestration actually improves your specific workflows: run the same tasks through a single strong model and through a routed multi-model setup, then compare accuracy, cost, and consistency. Pay special attention to data access, since connectors to your CRM, warehouse, and chat systems may matter more than marginal model quality. As you pilot, treat routing quality as the key risk: monitor failure modes, require transparency about which model handled what, and be ready to fall back to a simpler single-model approach if the orchestrator adds errors.
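The head-to-head test described above can be sketched as a small harness. `call_single` and `call_routed` are hypothetical stubs standing in for real API calls, and the `Result` shape and metric names are assumptions, not any vendor's schema.

```python
from dataclasses import dataclass

@dataclass
class Result:
    correct: bool     # did the output pass your accuracy check?
    cost_usd: float   # total API spend for the task

def call_single(task: str) -> Result:
    ...  # plug in one strong single-model API call here

def call_routed(task: str) -> Result:
    ...  # plug in a call through the multi-model orchestrator here

def evaluate(tasks, run):
    """Run every task through one pipeline; return (accuracy, total cost)."""
    results = [run(t) for t in tasks]
    accuracy = sum(r.correct for r in results) / len(results)
    return accuracy, sum(r.cost_usd for r in results)

# Dry run with a stub scorer, just to show the comparison shape:
stub = lambda t: Result(correct=True, cost_usd=0.01)
print(evaluate(["q1", "q2"], stub))  # (1.0, 0.02)
```

Running `evaluate` once with `call_single` and once with `call_routed` over the same task list gives you the accuracy and cost numbers to compare; for consistency, repeat the run and check the variance of the scores.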