Your apps, your AI, your space -- in any browser, on any device.

For decades, your most-used interface -- the home screen -- has been locked to hardware.

Your phone.

Your laptop.

Your tablet.

Different storage. Different apps. Different assistants. When you switch devices, you start over.

Open Aya OS removes the device as the center of gravity.

Your workspace lives in the cloud.

Your AI remembers you.

Your environment follows you anywhere a browser opens.

Open Aya OS is a browser-native operating system.

No install.

No app store.

No device lock-in.

Open a URL and your full workspace appears:

  • 37 integrated applications
  • A voice-native AI assistant with structured cognition
  • Infinite cloud-backed storage
  • A real OS shell (windows, taskbar, Spotlight, CLI)

It behaves like an operating system. It runs like a website.

[Screenshot: OpenAya cloud workspace dashboard with unified home screen]
[Screenshot: OpenAya modular productivity apps in shared workspace]
[Screenshot: OpenAya files and cloud storage interface]

Aya Is a Cognitive System, Not a Chatbox

Most assistants are stateless wrappers around a model. Aya runs every message through a structured pipeline:

  • Cognitive Spine -- 8-step reasoning
  • Context Layers -- persistent user model + daily state
  • Strategy Auction -- 6 agents bid on every task
  • Behavioral Learning -- 16 continuous RL parameters
  • Memory System -- episodic, semantic, procedural

She doesn't just respond. She routes, evaluates, adapts, and remembers. This is architecture -- not prompt stacking.
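To make the Strategy Auction concrete, here is a hypothetical sketch of how agents might bid on a task. The agent names and scoring heuristics are illustrative assumptions, not OpenAya's actual implementation:

```typescript
// Hypothetical strategy auction: each agent scores ("bids" on) an incoming
// task, and the highest bidder's strategy is selected. Agent names and
// bidding heuristics are illustrative assumptions.

interface Agent {
  name: string;
  // 0..1 confidence that this agent should handle the task
  bid: (task: string) => number;
}

const agents: Agent[] = [
  { name: "planner", bid: (t) => (/plan|schedule|organize/i.test(t) ? 0.9 : 0.2) },
  { name: "coder",   bid: (t) => (/code|bug|function/i.test(t) ? 0.9 : 0.1) },
  { name: "recall",  bid: (t) => (/remember|last time|again/i.test(t) ? 0.8 : 0.3) },
];

function runAuction(task: string): { winner: string; bid: number } {
  let winner = "";
  let best = -Infinity;
  for (const agent of agents) {
    const bid = agent.bid(task);
    if (bid > best) {
      best = bid;
      winner = agent.name;
    }
  }
  return { winner, bid: best };
}
```

For example, `runAuction("fix this bug in my function")` selects the coder agent, while `runAuction("plan my week")` selects the planner.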

Voice-Native by Design

120+ voice commands are embedded at the OS level. Not "voice added later." Voice is a first-class input. Custom enunciation preprocessing prevents robotic clipping and mispronunciation. All speech flows through a centralized engine.

Infinite Space

Open Aya OS replaces device storage limits with cloud persistence. No "storage full." No syncing conflicts. No app silos. Your workspace expands with you. Mobile feels like a native app grid. Desktop feels like a real OS.

One Platform, 37 Apps

Productivity. Learning. Creative. Developer. System. All built in the same shell. All searchable via Spotlight (Cmd+K). All voice-accessible. All designed to share context with Aya. Not 37 disconnected apps -- one unified environment.

Inspectable Intelligence

Type /inspect -- see what Aya is thinking. /audit -- see which agent won the Strategy Auction. /status -- see what signals influenced behavior. No black box.
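One way introspection commands like these could surface internal state (the metadata fields and sample values below are assumptions for illustration, not OpenAya's actual schema):

```typescript
// Hypothetical introspection: the last response's cognitive metadata is kept
// and rendered on demand by /inspect-style commands. Fields are illustrative.

interface CognitiveMetadata {
  reasoningSteps: string[];        // filled by the Cognitive Spine
  winningAgent: string;            // result of the Strategy Auction
  signals: Record<string, number>; // RL signals that steered behavior
}

const lastTurn: CognitiveMetadata = {
  reasoningSteps: ["classify intent", "retrieve memory", "plan answer"],
  winningAgent: "planner",
  signals: { verbosity: 0.4, formality: 0.7 },
};

function inspect(command: string): string {
  if (command === "/inspect") return lastTurn.reasoningSteps.join(" -> ");
  if (command === "/audit") return `auction winner: ${lastTurn.winningAgent}`;
  if (command === "/status") {
    return Object.entries(lastTurn.signals)
      .map(([k, v]) => `${k}=${v}`)
      .join(", ");
  }
  return "unknown command";
}
```

The design point is that every response carries its own metadata, so inspection is a cheap lookup rather than a re-run of the pipeline.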

  • 193 TypeScript files
  • ~52K lines of code
  • 37 applications
  • 15 intelligence modules
  • 120+ voice commands
  • 26 database tables
  • 10 API endpoints
  • 8 reasoning steps per message

What’s Real Now

  • 37 apps live in an OS shell
  • 120+ voice commands integrated
  • Intelligence pipeline built and running
  • Local-first persistence works today

What Ships Next

  • Replace mock Supabase with production client
  • Auth + cross-device persistence
  • Unified global library + search across docs/apps

Every message -- typed or spoken -- passes through this pipeline before the AI model generates a response.

  1. CLI Check -- slash commands handled locally
  2. SIG Heartbeat -- signal emitted to all subsystems
  3. Context Layers -- SOUL/USER/MEMORY/DAILY document built
  4. Strategy Auction -- 6 agents bid, best strategy selected
  5. TinyAdapter -- RL signals extracted, steering computed
  6. Memory Retrieval -- relevant memories + user model loaded
  7. Cognitive Spine -- 8-step reasoning decomposition
  8. Agent Routing -- intent classified, agent assigned
  9. Prompt Assembly -- context + reasoning + memory merged
  10. AI Generation -- GPT-4o-mini with tuned parameters
  11. Response -- rendered with cognitive metadata
  12. Feedback Loop -- memory, context, adapter all updated
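The staged flow can be sketched as a simple sequential pipeline. The payload shape, stage stubs, and early-exit behavior here are illustrative assumptions, not the actual implementation:

```typescript
// Illustrative sketch of a staged message pipeline: each stage transforms a
// shared context object before any model call. Stage names mirror the list
// above; the payload shape is an assumption.

interface PipelineContext {
  message: string;
  handledLocally: boolean; // set by the CLI check for slash commands
  trace: string[];         // which stages ran, for /inspect-style auditing
}

type Stage = { name: string; run: (ctx: PipelineContext) => PipelineContext };

const stages: Stage[] = [
  { name: "CLI Check", run: (ctx) => ({ ...ctx, handledLocally: ctx.message.startsWith("/") }) },
  { name: "Context Layers", run: (ctx) => ctx },    // stub: build SOUL/USER/MEMORY/DAILY doc
  { name: "Strategy Auction", run: (ctx) => ctx },  // stub: agents bid on the task
  { name: "Prompt Assembly", run: (ctx) => ctx },   // stub: merge context + reasoning + memory
];

function process(message: string): PipelineContext {
  let ctx: PipelineContext = { message, handledLocally: false, trace: [] };
  for (const stage of stages) {
    ctx = stage.run(ctx);
    ctx.trace.push(stage.name);
    if (ctx.handledLocally) break; // slash commands never reach the model
  }
  return ctx;
}
```

Keeping a `trace` of which stages ran is what makes commands like /inspect cheap: the audit trail is built as a side effect of normal processing.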


Productivity (8 apps): Talk, Notes, Word Processor, Calendar, Calculator, Code Lab, Timer, Work OS

Learning (8 apps): Study Helper, FokusRead, Learning Paths, AI Curriculum, Study OS, Knowledge Map, Knowledge Forge, Typing Game

Creative (6 apps): AI Canvas, AI Studio, Image Studio, Media Hub, Music Player, Beam Transfer

System & Developer (9 apps): Spatial Files, Settings, Admin Dashboard, Developer Hub, Web Browser, Aya Workspace, Aya Memory Viewer, Aya Dashboard, Workflow Composer

120+ Voice Commands

"Open notes." "Search for machine learning." "Set a timer for 25 minutes." The entire OS responds to voice. A custom enunciation engine prevents the robotic clipping typical of browser TTS.
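A minimal sketch of how spoken phrases like these could map to OS actions. The command patterns and action names are assumptions, not the actual registry:

```typescript
// Hypothetical voice-command registry: recognized speech is normalized and
// matched against patterns with capture slots. Patterns and actions are
// illustrative assumptions.

type VoiceCommand = { pattern: RegExp; action: (...args: string[]) => string };

const commands: VoiceCommand[] = [
  { pattern: /^open (\w+)$/i, action: (app) => `launch:${app.toLowerCase()}` },
  { pattern: /^search for (.+)$/i, action: (q) => `spotlight:${q}` },
  { pattern: /^set a timer for (\d+) minutes?$/i, action: (m) => `timer:${m}m` },
];

function handleUtterance(text: string): string | null {
  const spoken = text.trim().replace(/[.?!]+$/, ""); // strip trailing punctuation
  for (const { pattern, action } of commands) {
    const match = spoken.match(pattern);
    if (match) return action(...match.slice(1));
  }
  return null; // no match: fall through to Aya's conversational pipeline
}
```

For example, `handleUtterance("Open notes.")` yields `"launch:notes"`, while unmatched speech returns `null` and is treated as conversation.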

14 CLI Commands

/remember, /forget, /teach, /mode, /depth, /status, /inspect, /audit, /export, /persona, /health, /reset, /help. Tab completion. Power users control Aya's behavior directly.
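One way such slash commands could be parsed and dispatched ahead of the AI pipeline. The handlers below are stubs and the dispatch table is an assumption:

```typescript
// Hypothetical slash-command dispatcher: parses "/command args" strings and
// routes them to local handlers before any model call. Handlers are stubs.

const handlers: Record<string, (args: string) => string> = {
  remember: (args) => `stored: ${args}`,
  status: () => "ok",
  help: () => Object.keys(handlers).sort().join(", "),
};

function dispatch(input: string): string | null {
  if (!input.startsWith("/")) return null; // not a CLI command
  const parts = input.slice(1).split(/\s+/);
  const name = (parts[0] ?? "").toLowerCase();
  const handler = handlers[name];
  return handler ? handler(parts.slice(1).join(" ")) : `unknown command: /${name}`;
}
```

Handling these locally means power-user commands resolve instantly, with no model round-trip.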

Runtime: Next.js
Language: TypeScript
Styling: Tailwind CSS v4
Database: Supabase (26 tables)
AI: GPT-4o-mini
Voice: Web Speech API

OpenAya is the cognitive interface platform of the Serpens stack. It informs UI patterns across all Serpens products, drives voice and ambient computing research, and provides the workspace layer that other platforms (SerpenSky, Global Health IQ) connect to.

OpenAya began with a vision for reimagining how we interact with our workspaces:

The Future Work Space -- As Convention Would Have It
by Richie Adomako

[Concept art: The Future Work Space -- ambient computing pods with holographic displays]

Open Aya OS is live in private preview.

The differentiated core -- intelligence + environment -- is built. What ships next is production persistence and auth. The schema exists. Row-level security (RLS) exists. Tables exist. This is plumbing -- not invention.

Try OpenAya · Contact Us · Investor Overview