Galaxy AI becomes a multi-agent ecosystem with Perplexity integration

Samsung says internal research found that nearly 8 in 10 users routinely use more than two AI agents, and the company is updating Galaxy AI to better handle that reality by adding system-level multi-agent support and a first partner agent from Perplexity. The change aims to let multiple AIs operate smoothly without forcing users to hop between apps or repeat context. Perplexity’s agent, nicknamed Plex, can be invoked by voice with “Hey Plex” or mapped to the phone’s side (power) button, and it will ship on upcoming flagship Galaxy devices, with the Galaxy S26 series singled out as the nearest candidate. Samsung frames Galaxy AI as an orchestrator that ties distinct AI agents into a single, familiar Galaxy experience while promising broader device and app integration.

Key takeaways

  • Internal Samsung research shows nearly 8 in 10 users regularly work with more than two AI agents, prompting multi-agent development.
  • Galaxy AI will host AI agents at the OS level, enabling coordinated control without switching apps or repeating commands.
  • Perplexity’s agent, Plex, is the first announced partner; users can summon it with “Hey Plex” or assign it to the side (power) button.
  • Plex will be embedded in Samsung first-party apps including Gallery, Notes, Calendar, Clock and Reminder, and will support some third-party apps and multi-step workflows.
  • Features are slated for upcoming flagship Galaxy devices, with Galaxy S26 series mentioned, and broader rollout details will be announced later.
  • Samsung recently upgraded Bixby for One UI 8.5 to accept natural language system commands and to surface real-time web results, suggesting a faster AI development cadence.

Background

Device makers and app developers are moving from single, isolated assistants toward ecosystems where multiple specialized agents collaborate. Users increasingly rely on several AI tools for distinct tasks such as search, summarization, drafting and photo editing; Samsung’s internal data showing nearly 8 in 10 users juggling multiple agents reflects that shift. Historically, smartphone AI features have been app-bound or tied to a single assistant, which forces users to re-enter context when moving between tasks. Samsung’s Galaxy AI initiative has been evolving to bridge those gaps: prior updates focused on assistant improvements inside One UI, and the company now seeks deeper, OS-level orchestration so agents can share context and state securely and seamlessly across the device.

Perplexity, a third-party AI provider known for conversational search and multi-step reasoning, has been positioning itself as a partner to platform providers rather than a closed, single-vendor assistant. That makes Perplexity a natural early integration for a multi-agent approach where each agent brings distinct strengths. Samsung faces competitive pressure from other platform vendors integrating external models and services; offering a flexible, extensible agent layer helps Galaxy remain adaptable as new AI capabilities appear. At the same time, system-level integration raises new questions about privacy, data routing and developer access models that Samsung will need to clarify as rollouts progress.

Main event

Samsung announced that Galaxy AI will now support multiple agents running at the operating-system level, so users can control and coordinate them without switching apps. Because agents are integrated into the system, they inherit context from device activity and recent interactions, reducing the need to repeat instructions. The first partner agent to be deeply integrated is Perplexity, which Samsung will surface under the nickname Plex; the agent will appear on upcoming flagship Galaxy hardware and be callable by voice or by assigning it to the side button.

Plex will be embedded into core Samsung apps such as Gallery, Notes, Calendar, Clock and Reminder, allowing it to assist with tasks that span photos, notes and schedules. Samsung says Plex will also interoperate with certain third-party apps and can handle multi-step workflows rather than only single-turn queries. Because the integration lives at the OS layer, Samsung intends for Galaxy AI to act as an orchestrator that routes tasks to the most appropriate agent and preserves the conversational or operational context between them.
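Samsung has not published a developer API for this orchestration layer, but the routing behavior the company describes can be illustrated with a minimal sketch. All names here (Orchestrator, Agent, route, the skill labels) are hypothetical, not a disclosed Samsung interface:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skills: set  # capabilities this agent advertises

@dataclass
class Orchestrator:
    agents: list = field(default_factory=list)
    context: dict = field(default_factory=dict)  # shared across agents

    def register(self, agent: Agent) -> None:
        self.agents.append(agent)

    def route(self, task: str, skill: str) -> str:
        # dispatch to the first agent advertising the required skill,
        # recording the task so the next agent inherits the context
        for agent in self.agents:
            if skill in agent.skills:
                self.context["last_task"] = task
                return agent.name
        raise LookupError(f"no agent handles {skill!r}")

hub = Orchestrator()
hub.register(Agent("Plex", {"search", "summarize"}))
hub.register(Agent("Bixby", {"system", "schedule"}))
print(hub.route("summarize this album", "summarize"))  # Plex
```

The point of the sketch is that routing and context live in one place at the system level, rather than inside any single assistant app.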

The company has not published a full device list or exact timing; the announcement notes that more details about supported devices and experiences will be released soon. Industry observers expect some features to arrive on new models out of the box and for selected older Galaxy S and Galaxy Z devices to gain capabilities via One UI 8.5 updates. Samsung separately released an upgraded Bixby for One UI 8.5 this week, which handles freer-form natural language system commands and provides real-time web search results, suggesting the platform is preparing for richer, context-aware assistant behavior.

Analysis & implications

Moving agent support to the OS layer changes the interaction model: instead of a single assistant mediating every request, Galaxy AI can route subtasks to specialized agents and preserve cross-agent context. For users, that should translate to fewer repeated prompts and a smoother experience when combining capabilities such as image understanding, calendar scheduling and conversational search. For example, a user might ask Plex to summarize a photo album and then instruct another agent to turn the summary into calendar reminders without restating the details.

For Samsung, the approach reduces friction when adding new third-party agents and helps the company avoid being locked into a single provider or model. It also positions Galaxy as a neutral host for a heterogeneous AI ecosystem, which may attract partners that want deep device integration without ceding platform control. However, this flexibility raises design and policy challenges: Samsung must define how agents request and receive device context, what user permissions look like across multiple agents, and how to limit data sharing to uphold user privacy and regulatory requirements.

Market effects could be significant. If Galaxy AI delivers seamless multi-agent workflows, Samsung gains a differentiation point against rivals that continue to tie users to a single assistant. Developers and AI providers will evaluate the tradeoffs between deeper integration on Galaxy and broader reach on more open or web-based platforms. The success of this model will hinge on clarity around security, permission granularity, discoverability of agents, and economic arrangements between Samsung and third-party providers such as Perplexity.

Comparison & data

Aspect | Single-assistant model | OS-level multi-agent
Context continuity | Limited across apps | Preserved across agents and apps
User friction | Higher; more repeated commands | Lower; fewer repeated commands
Third-party integration | Often app-level | System-level onboarding possible
Control & permissions | Scoped to one assistant | Requires cross-agent permission model

The table highlights why Samsung framed the change as an evolution rather than a replacement. Nearly 8 in 10 users already use multiple agents in practice, so the company is aligning the experience with real usage patterns. Operationalizing multi-agent orchestration will demand new UI conventions and permission prompts to ensure users retain control and visibility over which agent has access to what data.

Reactions & quotes

Samsung positioned the move as an open, user-centric architecture that emphasizes choice and orchestration. Near-term commentary from the company emphasizes flexibility and control rather than exclusivity.

“We are building an open and inclusive integrated AI ecosystem that gives users more choice, flexibility and control to get complex tasks done quickly and easily.”

Won-Joon Choi, Samsung (paraphrased)

Independent observers welcomed the technical direction but urged caution about privacy and permissions. Analysts note the benefits of multi-agent workflows while calling for transparent controls and clear defaults to protect users.

“Multi-agent orchestration can reduce friction and unlock richer workflows, but success depends on transparent permissioning and clear data boundaries.”

AI industry analyst (paraphrased)

Early public reactions on social platforms mixed anticipation with questions about which devices and data policies will be supported; many users expressed interest in avoiding repeated prompts across apps, while privacy-minded users asked how context will be shared among agents.

Unconfirmed

  • Specific device list and timing for full multi-agent features are not yet published; details about which older Galaxy S and Z models will receive One UI 8.5 support remain pending.
  • Exact third-party app partners beyond those mentioned and the breadth of third-party data access policies have not been disclosed and await Samsung follow-ups.

Bottom line

Samsung’s move to an OS-level multi-agent Galaxy AI reflects current user behavior and offers a technical route to more natural, cross-app AI workflows. By integrating Perplexity as the first agent, Samsung demonstrates a partner-friendly stance that could accelerate device-level innovation and attract other AI providers looking for deep integration.

However, delivering on the promise requires transparent permissioning, clear UX for multi-agent control, and careful handling of privacy and security concerns. Users and regulators will watch how Samsung balances convenience with data protections as features roll out to the Galaxy S26 series and beyond.

Sources

  • GSMArena — tech news report summarizing Samsung announcements (media/tech news)
  • Samsung Newsroom — official company newsroom with press releases and product briefs (official)
  • Perplexity — company site for the AI provider referenced as Plex (company site)
