How does "Perplexity Computer" work?
Perplexity’s “Computer” (the desktop app with Computer mode and MCP support) is an app that runs locally on your machine, exposes local tools to the AI, and routes inference to Perplexity’s own models and, behind the scenes, to external models (OpenAI, Google, xAI, etc.). It does not “plug into” Opus/Gemini/Grok/Veo the way you would manually wire separate APIs together; instead, Perplexity abstracts those models behind a single interface and lets you add your own tools and data via MCP and Connectors.
What “Computer” actually is
On macOS, Perplexity exposes Local MCP support inside the desktop app. This lets the AI talk to your local environment (files, apps, terminals, calendars, etc.) using the Model Context Protocol.
This is orthogonal to whether the underlying LLM is Perplexity’s own, GPT‑4/4.1, or any other third‑party model; the MCP layer sits on top as a tool interface.
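That separation can be made concrete with a sketch. MCP messages are JSON-RPC 2.0; a client asking a local MCP server to invoke a tool looks roughly like the message below, and the server-side handler never needs to know which LLM produced the request. The tool name (`read_file`) and its arguments are invented for illustration, not a real Perplexity tool.

```python
import json

# Illustrative MCP-style tool call (JSON-RPC 2.0). The "read_file" tool and
# its arguments are assumptions for the sketch.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",                      # assumed tool name
        "arguments": {"path": "/tmp/notes.txt"},  # assumed arguments
    },
}

def handle(message: dict) -> dict:
    """Minimal server-side dispatcher, independent of which LLM sent the call."""
    if message.get("method") == "tools/call":
        name = message["params"]["name"]
        return {"jsonrpc": "2.0", "id": message["id"],
                "result": {"content": [{"type": "text",
                                        "text": f"ran tool {name}"}]}}
    return {"jsonrpc": "2.0", "id": message.get("id"),
            "error": {"code": -32601, "message": "method not found"}}

print(json.dumps(handle(tool_call)))
```

The point of the sketch is the shape of the boundary: the model layer only emits and consumes these messages, so swapping the underlying LLM changes nothing on the tool side.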
How external LLMs are integrated
Perplexity runs a model router over multiple first‑party and third‑party models and chooses (or lets you choose) which model answers a given query. The public “Third‑Party Models & Terms” page confirms that Perplexity integrates a set of external models from different providers; which models are available depends on your plan and settings.
Conceptually, Opus/Gemini/Grok/etc. would fit into this model pool: Perplexity’s router can call out to them as providers when licensed/available, but you as a user don’t wire them in directly with API keys inside Computer; they are abstracted behind the Perplexity platform.
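A minimal sketch of that routing idea, with all provider names and the selection rule invented for illustration (Perplexity's actual routing logic is not public): the platform, not the user, picks a provider per query, falling back to a first‑party model when no licensed external model applies.

```python
from typing import Callable

# Hypothetical provider pool; the callables stand in for real API clients.
PROVIDERS: dict[str, Callable[[str], str]] = {
    "perplexity-sonar": lambda q: f"[sonar] {q}",
    "external-model-a": lambda q: f"[model-a] {q}",
    "external-model-b": lambda q: f"[model-b] {q}",
}

def route(query: str, plan_allows: set[str]) -> str:
    """Pick the first licensed/available external provider; else first-party."""
    for name in ("external-model-a", "external-model-b"):
        if name in plan_allows:
            return PROVIDERS[name](query)
    return PROVIDERS["perplexity-sonar"](query)

print(route("summarise this page", plan_allows={"external-model-b"}))
```

Note that the caller never holds API keys for the external providers; entitlement (`plan_allows`) and dispatch both live on the platform side, which is the abstraction the text describes.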
Where MCP and third‑party models meet
MCP is about tools and data access, not which LLM is used.
So “integration with Opus, Gemini, Grok, Veo, etc.” is not done at the MCP/Computer configuration level; it happens at the platform level where Perplexity routes your request to the best available model and then optionally augments it with MCP and Connectors. You configure tools (MCP, Connectors) and preferences (e.g., image model); Perplexity manages the actual external LLM integrations behind the scenes.
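The two configuration surfaces described above can be sketched as separate inputs: tools are user‑configured, while model choice is an opaque platform decision. Every name here is illustrative, not Perplexity's real API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Request:
    query: str
    # User-configured tool layer (e.g. MCP servers, Connectors).
    tools: list[str] = field(default_factory=list)

def answer(req: Request, pick_model: Callable[[str], str]) -> str:
    """Model selection (pick_model) is platform-side and opaque to the user."""
    model = pick_model(req.query)
    tool_note = f" using tools {req.tools}" if req.tools else ""
    return f"{model} answers '{req.query}'{tool_note}"

print(answer(Request("what's on my calendar?", tools=["local-mcp:calendar"]),
             pick_model=lambda q: "router-chosen-model"))
```

The design point: the `tools` field and the `pick_model` hook never touch each other, which is why “integrating Opus or Gemini” is not something you do in the MCP/Computer configuration.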
If you tell me which specific model you care about (e.g., Gemini 2.0 vs Claude Opus vs “Grok 3” APIs), I can outline how you’d normally use that separately via its own API, and how MCP/Computer would sit alongside it in your workflow rather than directly “inside” it.
See their introduction for yourself.
AI hype, AI hype with a scammy, flawed, error-ridden system. It’s not a new computer, just a computer layout. They play these games all the time.
Peter Scheffer: So for accessing the language models there is a cost, not just subscription-based. I have been eagerly wanting my GPTs to access my data. I suspect that MCP means you have to buy API access.