
Why the search for a next-gen AI device paradigm will fail – and how to achieve the Unified Cognitive Mesh

Over the last few years, as LLM technology emerged, we’ve seen an explosion of attempts to define a new paradigm for AI computing. Everyone seems to be searching for “the next iPhone moment” – the breakout device form factor that brings ambient AI into our lives.

Most notable is Humane’s failed AI Pin. But it wasn’t alone. Rabbit R1, Meta’s smart glasses, Rewind’s pendant, OpenAI’s rumored puck designed by Jony Ive – the list keeps growing. New hardware startups are racing to cram microphones, cameras, and LLMs into tiny objects you wear, carry, or stick to your lapel. Many are beautifully designed. Some are intriguing. Others are straight-up dystopian.

And while some of these products might sell a few million units, I believe all of them will fail in their ultimate goal: creating the next dominant computing platform.

Why? Because none of them replace the smartphone or computer. They’re all accessories – supplements that offer novelty, but not necessity.

What We Learned from Smartwatches

Take smartwatches. The Apple Watch is the most successful wearable of all time. Yet it hasn’t displaced the phone and never will. It fills a niche: fitness, notifications, quick replies. But it’s not a standalone computing platform for the majority of people. It didn’t become the new center of our digital lives.

Now imagine the uphill battle for even narrower form factors. Smart glasses that constantly record video and audio. Pendants that sit around your neck, eavesdropping. Pucks you’re expected to carry around and plop next to your phone on the table. These aren’t mass-market replacements; they’re niche novelties. At best, they’ll achieve sub-smartwatch adoption. At worst, they’ll fade out like Google Glass or Snap Spectacles.

And yet, the problem isn’t just the form factor. It’s what these devices assume about where intelligence should live – and how it should behave.

The Cloud-by-Default Fallacy

The most high-profile entrants – Humane, OpenAI, Meta – have a glaring weakness: they rely on the cloud for everything. Your voice, your video, your context, all streamed to the cloud, parsed by proprietary models, then fed back to you as answers.

There’s no real push for local processing. No real respect for privacy. No serious attempt to build something distributed, trusted, and user-owned.

Even more concerning, these devices replicate the sins of the past: siloed apps, proprietary clouds, data hoarding. They’re not the antidote to legacy computing – they’re just another flavor of it.

Meanwhile, Legacy Platforms Struggle with AI

At the same time, Apple, Google, and Microsoft are racing to bolt AI onto their aging platforms. You can feel the seams.

Apple Intelligence is a perfect example, grafted onto iOS and macOS like a second brain, disconnected from core system functions and scattered across multiple layers of UI. Google’s AI integrations are fragmented, often tied to specific Android or Pixel generations, with little consistency across the broader ecosystem. Windows bakes in Copilot like Clippy 2.0, floating awkwardly in a legacy desktop environment.

These companies aren’t building a new paradigm. They’re layering AI onto operating systems designed for files, folders, and touchscreens – not intent, conversation, and context.

Today’s Paradigm: Redundant State Computing

Let’s look at a typical tech-savvy user – me!

Here’s my current setup:

  • iPhone
  • MacBook Air
  • iMac desktop
  • Mac mini running Plex
  • iPad mini
  • Oura ring
  • AirPods
  • Powerbeats Pro
  • Apple TVs connected to various TVs
  • Pixel phone running GrapheneOS

Most of these devices have similar hardware: ARM processors, multiple gigabytes of RAM, and 64GB+ of flash storage. Each runs a modern OS. Each is powerful. Each is constantly syncing.

Now here’s the problem: almost every one of these devices is duplicating work. My iPhone and iPad are syncing photos, apps, messages, and more. My Mac is syncing with iCloud. My Mac mini is running media indexing and file syncing. Each device is storing the same data. Each one is chewing up CPU and RAM just to stay in sync.

This is the world of Redundant State Computing. Each device is its own island, maintaining its own state, its own file system, its own app data. We try to bridge them with sync, but that’s a brittle abstraction, constantly breaking, endlessly reconciling.

It’s wasteful, but more importantly, it’s hostile to true intelligence. Ask Siri to find a file and it’ll only look on that device. Ask ChatGPT in Safari to summarize your last email and it won’t know what you’re talking about. The idea of stateful AI – an assistant that knows you, follows you, and understands your context across space and time – is impossible in a redundant state world.
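
To make that concrete, here is a minimal sketch – hypothetical types, not any real sync protocol – of what Redundant State Computing boils down to: every device holds its own full copy of state, and “sync” is an endless last-writer-wins reconciliation that can silently discard data.

    use std::collections::HashMap;

    // One full copy of state per device -- the Redundant State model.
    #[derive(Clone)]
    struct Item {
        value: String,
        modified_at: u64, // wall-clock timestamp, the usual tiebreaker
    }

    #[derive(Default)]
    struct DeviceState {
        items: HashMap<String, Item>,
    }

    // "Sync" is pairwise reconciliation, here last-writer-wins:
    // concurrent edits silently drop one side's change, and every
    // device re-runs this loop forever just to stay consistent.
    fn reconcile(a: &mut DeviceState, b: &mut DeviceState) {
        for (id, theirs) in &b.items {
            let keep_ours = a
                .items
                .get(id)
                .map_or(false, |ours| ours.modified_at >= theirs.modified_at);
            if !keep_ours {
                a.items.insert(id.clone(), theirs.clone());
            }
        }
        b.items = a.items.clone(); // mirror the winners back
    }

    fn main() {
        let mut phone = DeviceState::default();
        let mut laptop = DeviceState::default();
        phone.items.insert("note".into(), Item { value: "draft A".into(), modified_at: 100 });
        laptop.items.insert("note".into(), Item { value: "draft B".into(), modified_at: 99 });
        reconcile(&mut phone, &mut laptop);
        // "draft B" is now gone everywhere, even if it was the edit you wanted.
        assert_eq!(phone.items["note"].value, "draft A");
    }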

User Interface Challenges of Redundant State Computing

There are two primary user interfaces associated with today’s computing paradigm: the desktop and the grid of apps.

Both were designed for a world before AI. The desktop metaphor dates back to the 1980s. The app grid, pioneered by the iPhone, became the dominant mobile interface in the 2000s. But neither is equipped to handle fluid, context-aware, multi-modal interactions. They were built for files and apps, not conversations, sensors, and intent.

So we get kludges. AI assistants buried inside apps. Widgets glued onto desktops. Cloud services like ChatGPT that live in a browser tab, disconnected from the rest of the OS. Meanwhile, “AI integration” often means little more than piping local inputs to a remote LLM via an API and rendering the response.

This leaves us with a fragmented experience. Ask Siri something on your iPhone and it fails to find the context you created yesterday on your Mac. Use a ChatGPT plugin to make a restaurant reservation, but your calendar app has no idea. Start a document on your desktop and reference it in a voice conversation from your glasses – good luck.

The modern user interface is not designed for distributed cognition. It’s designed for siloed apps, each competing for your attention and your data. AI, by contrast, is inherently integrative; it wants the full picture. But today’s devices can’t give it that picture, because they each have their own local, redundant state.

This is not just a UI problem. It’s an architectural problem. And it demands a new solution.

The Next Paradigm: Unified Cognitive Mesh

In the Ender’s Game sci-fi series, the superintelligent AI named Jane doesn’t reside in a single device. She emerges from the ansible network – aware, adaptive, and woven into the fabric of communication itself. Likewise, I believe the next generation of computing will not be embodied by any one piece of hardware, but will emerge from the interplay between our devices.

Intelligence will live across the mesh, not inside a pendant, a puck, or a phone, but in the space between them. It will see what you see, hear what you hear, know your past conversations, your calendar, your current tasks, your goals. Not because it's in the cloud watching you, but because it lives across your personal, private computing mesh network.

This is the vision I believe in.

To accomplish this, we need a new kind of operating system. One designed from first principles to enable distributed, contextual, privacy-respecting AI computing. A system that:

  • Lives across your devices – not in one
  • Maintains a shared state – not redundant states
  • Understands context – not just inputs
  • Lets you control which tasks run where – and what stays local vs what’s outsourced to the cloud
  • Enables real-time collaboration between devices, not post-hoc sync

This OS can coordinate compute tasks between your phone, your earbuds, your laptop, your home server. It can keep state coherent without relying on cloud sync. It can create a persistent memory of your activities across time and device boundaries, with full user control and local-first defaults. And it can power next-gen user interfaces – like voice, vision, haptics – without tying those interfaces to form factor fads like pendants or pucks.
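
No such OS exists yet, but the shape of its API is worth sketching. Here is one possible interface in Rust – every name here is hypothetical, a sketch of the idea rather than a real system – with shared state, user-controlled task placement, and mesh-wide scheduling as first-class concepts:

    #![allow(dead_code)]

    /// Where a task is allowed to run. The user, not the vendor, decides.
    enum Placement {
        LocalOnly,        // never leaves the device it started on
        AnyTrustedDevice, // may run anywhere in the personal mesh
        CloudAllowed,     // may be explicitly outsourced to a public model
    }

    /// A unit of work the mesh can schedule: transcription, indexing,
    /// an inference pass, a summarization job.
    struct Task {
        name: String,
        placement: Placement,
        input: Vec<u8>,
    }

    /// The coordination layer every device in the mesh exposes.
    trait MeshNode {
        /// Reads and writes go against one shared, replicated state --
        /// not a per-device copy that needs post-hoc sync.
        fn get(&self, key: &str) -> Option<Vec<u8>>;
        fn put(&mut self, key: &str, value: Vec<u8>);

        /// Hand a task to the mesh; a scheduler picks a device that
        /// satisfies the placement policy and has spare compute.
        fn submit(&mut self, task: Task) -> Result<Vec<u8>, String>;
    }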

I call this paradigm Unified Cognitive Mesh, and it requires a rethink from the kernel up. The future of computing isn’t another smart device. It’s a cohesive platform – a beam of intelligence that lives across your personal mesh network, not inside a single gadget.

A Fluid Interface Across the Mesh

In a Unified Cognitive Mesh, the user interface is no longer tied to a single screen, window, or app. Instead, it becomes a persistent, adaptive layer that flows seamlessly across your devices. Your laptop, phone, earbuds, TV, wall display, even your car – they each become portals into the same shared intelligence. The interface simply renders wherever you are.

You don’t “open” an app. You engage with an ongoing thread of thought, action, or conversation, one that persists regardless of device or modality. You might start reviewing a project while walking with earbuds in, then sit at your desktop and see your AI assistant has already pulled up relevant files, summarized yesterday’s notes, and queued a message draft. Later, a wall display in your home might surface a reminder or continuation of that task, without any manual sync.

The mesh UI is situational. It adapts in real time – to your current focus, your recent activity, your preferred input method. One moment it’s audio-first. The next, it’s visual, tactile, or ambient. Think of it as a translucent layer of cognition that wraps around your environment, rather than a static interface you tap into.

Rather than a home screen or desktop, your system presents an evolving space of intent: what you’re working on, thinking about, deciding, or deferring. And instead of traditional apps, you interact with small, composable capabilities that collaborate – surfacing when relevant, stepping aside when not.

That doesn’t mean traditional apps go away. Native UIs like Gmail, Notion, GitHub, Figma, and Slack still exist and remain accessible. But instead of competing for your attention, they become background utilities. Their full UI can appear when needed – to do focused editing, for example – but otherwise their functions, data, and affordances are pulled into your cognitive surface by your assistant. You can draft an email without opening Gmail, tweak a Figma file inline without switching contexts, or respond to a GitHub issue without ever touching a browser tab.

Messaging is where this becomes most transformative. Today, our conversations are scattered across iMessage, SMS, WhatsApp, Signal, Telegram, Discord, Slack, comments in Figma, issues in GitHub, threads in Linear, and endless emails. Each has its own app, its own inbox, its own interface paradigm. Managing this today requires juggling notifications and mentally context-switching every few minutes.

In a Unified Cognitive Mesh, all of that chaos is absorbed. Your assistant abstracts messages across all platforms into a unified layer of conversation. You can simply ask, “Did Max respond about the design?” and the system understands that the reply came in Slack, or in Figma comments, or via Signal. You can reply directly from the same interface – no more bouncing between apps like a caffeinated pinball. You can follow threads, summarize long chains, prioritize across inboxes, or even mute entire categories of chatter, all in one place.
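
Under the hood, this implies a set of adapters that normalize every silo into one message model. Here is a rough sketch of that layer in Rust – all names hypothetical, and real adapters would of course need each platform’s API and authentication:

    #![allow(dead_code)]

    /// One normalized message type, whatever silo it came from.
    #[derive(Clone)]
    struct Message {
        source: String, // "slack", "signal", "figma", "email", ...
        thread: String,
        sender: String,
        body: String,
        timestamp: u64,
    }

    /// Every platform (Slack, Signal, Figma comments, email...) is
    /// wrapped by an adapter that speaks this common interface.
    trait MessageSource {
        fn fetch(&self, since: u64) -> Vec<Message>;
        fn send(&mut self, thread: &str, body: &str);
    }

    /// The assistant queries one unified layer instead of ten inboxes.
    struct UnifiedInbox {
        sources: Vec<Box<dyn MessageSource>>,
    }

    impl UnifiedInbox {
        /// "Did Max respond about the design?" becomes one query
        /// across every connected platform, sorted into a timeline.
        fn replies_from(&self, sender: &str, since: u64) -> Vec<Message> {
            let mut replies: Vec<Message> = self
                .sources
                .iter()
                .flat_map(|s| s.fetch(since))
                .filter(|m| m.sender == sender)
                .collect();
            replies.sort_by_key(|m| m.timestamp);
            replies
        }
    }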

The result is a system that is not only more intelligent, but dramatically more humane. It clears the clutter, respects your attention, and helps you maintain flow. A UX that thinks like you and works the way your mind already does.

The Microkernel Foundation

At Foundation, we originally set out to build a secure, next-generation operating system to power our personal security platform, Passport Prime. That effort led us to Xous, a Rust-based microkernel – something that, at the time, simply felt like the right foundation for privacy, modularity, and fine-grained control.

Now that we’ve built on it, we realize we’ve stumbled into something far bigger.

Our KeyOS is designed from the start with security, composability, and extensibility in mind. Unlike traditional monolithic operating systems, KeyOS separates core functions into isolated processes, enabling precise control over what runs where – and how each process interacts with hardware and user data.

This architecture would be ideal for a Unified Cognitive Mesh. It would allow us to assign specific tasks to specific devices, enforce fine-grained permissions between components, and ensure that user data remains local unless explicitly shared. AI modules running on one device could communicate securely with others over encrypted channels – forming the foundation of a distributed, privacy-respecting personal mesh network.
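
KeyOS internals aren’t spelled out here, but the microkernel pattern itself is easy to illustrate. In this sketch – hypothetical names, not actual KeyOS code – services share no memory, and every message is routed through a kernel check against the sender’s granted capabilities:

    use std::collections::HashSet;

    /// Capabilities a process must hold before the kernel will route
    /// its messages to the corresponding service.
    #[allow(dead_code)]
    #[derive(Hash, PartialEq, Eq)]
    enum Capability {
        ReadContacts,
        UseCamera,
        NetworkAccess,
    }

    /// An isolated process: its own address space, its own grants.
    struct Process {
        name: String,
        granted: HashSet<Capability>,
    }

    /// In a microkernel there is no shared memory between services --
    /// only messages, and the kernel checks permissions on every send.
    fn route_message(sender: &Process, needs: Capability, payload: &[u8]) -> Result<(), String> {
        if sender.granted.contains(&needs) {
            // Deliver to the target service's message queue (elided).
            println!("{}: delivered {} bytes", sender.name, payload.len());
            Ok(())
        } else {
            Err(format!("{}: capability denied", sender.name))
        }
    }

    fn main() {
        let mut granted = HashSet::new();
        granted.insert(Capability::ReadContacts);
        let assistant = Process { name: "assistant".into(), granted };

        // Allowed: the assistant was granted contact access.
        route_message(&assistant, Capability::ReadContacts, b"query").unwrap();
        // Denied: it was never granted the camera.
        assert!(route_message(&assistant, Capability::UseCamera, b"frame").is_err());
    }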

Most importantly, KeyOS isn’t just a wallet OS or embedded firmware. It’s a flexible, modern platform that could support everything from real-time context sharing to dynamic compute allocation across personal devices.

This mesh could be connected through a secure, post-quantum encrypted protocol we call QuantumLink – a low-latency communication layer that would allow devices to discover each other locally, share relevant context, and route AI tasks or queries without exposing data to the public cloud.

Originally built for secure Bluetooth communication between Passport Prime and a smartphone, QuantumLink enables end-to-end encrypted pairing and message passing. We believe this protocol could be extended far beyond the wallet use case to support a wide range of secure communication pathways, including local device-to-device links, high-trust mesh coordination, and securely outsourcing work to public AIs.
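
QuantumLink’s wire format isn’t public, so the following is only the shape of the interface such a layer implies – a sketch with hypothetical names, not the actual API. The design point worth noting is hybrid key agreement: combining a classical exchange with a post-quantum KEM, so sessions stay secret even if one primitive is eventually broken.

    #![allow(dead_code)]

    /// A peer's long-term identity (signing key, device name -- elided).
    struct DeviceIdentity;

    /// A symmetric session key derived from the hybrid handshake.
    struct SessionKey([u8; 32]);

    trait QuantumLinkTransport {
        /// Advertise and listen on local transports (BLE, Wi-Fi);
        /// discovery never touches a public server.
        fn discover(&self) -> Vec<DeviceIdentity>;

        /// Mutually authenticated key agreement with a discovered peer,
        /// combining classical ECDH with a post-quantum KEM.
        fn pair(&mut self, peer: &DeviceIdentity) -> Result<SessionKey, String>;

        /// Everything after pairing is end-to-end encrypted: shared
        /// context, routed AI tasks, or a minimized query relayed to a
        /// public model on the user's explicit say-so.
        fn send(&mut self, key: &SessionKey, payload: &[u8]) -> Result<(), String>;
    }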


Humane failed not because of bad industrial design or imperfect software. It failed because it misunderstood the problem. The future isn’t a single AI device. It’s an ecosystem built on a unified foundation.

Pendants, glasses, pucks, and earbuds may come and go. But the next real leap will happen when we break free from the legacy of Redundant State Computing and finally build the Unified Cognitive Mesh that AI, and users, truly need.

And for that, we believe a new microkernel OS is ideal – enabling modularity, secure isolation, and the fine-grained control required for distributed, local-first intelligence.