Local by default.
Transcription runs on-device with Whisper. Speech runs on-device with Kokoro. Your API keys are encrypted at rest. Nothing is phoned home — ever.
Whisper · Kokoro · Fernet · Ollama

OpenSair is a private, local AI agent for macOS. Multi-agent, autonomous, voice-first. It installs like an app — and stays on your machine. No cloud, no vendor lock, no compromises.
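"Encrypted at rest" here refers to Fernet, the authenticated symmetric scheme from Python's `cryptography` package. A minimal sketch of how a key could be protected before it hits disk — the key handling below is illustrative, not OpenSair's actual code:

```python
# Sketch only: Fernet gives authenticated symmetric encryption.
# In a real app the master key would be generated once and kept
# somewhere safe (e.g. the macOS Keychain), not created per run.
from cryptography.fernet import Fernet

master = Fernet.generate_key()
f = Fernet(master)

token = f.encrypt(b"sk-example-api-key")  # this ciphertext is what hits disk
plain = f.decrypt(token)                  # only the holder of `master` can do this
```

Tampered or foreign ciphertext fails decryption outright, so a stolen config file is just noise without the master key.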
Every other AI agent lives on someone else's server. OpenSair lives on your Mac.
Your files don't leave. Your prompts don't leave. Your API keys never touch anyone else's disk.
Give it a goal. Go for a walk. It'll work while you're gone and show you the receipts when you're back.
A condensed read of fifty-plus built-in skills. Every one of them runs with your permission, on your machine, and is reversible from the journal.
Transcription runs on-device with Whisper. Speech runs on-device with Kokoro. Your API keys are encrypted at rest. Nothing is phoned home — ever.
Whisper · Kokoro · Fernet · Ollama

Spawn sub-agents. Delegate in parallel. Each gets its own model, skills, and permissions. Drill into any child's full transcript when you want to know what happened.

Multi-agent · Delegation · Directory routing

Give it a goal and a priority. It works while you sleep, adjusts its own wake cadence, pauses itself on failure, and pings you only when it matters.

Autonomous · Cron · Priority 1–10

Files. Terminal. Browser. Spotlight. Gmail. Calendar. Drive. Docs. Discord. Obsidian. Instagram, Threads, Messenger, SMS. HTTP, clipboard, processes, system prefs. It's not a tab — it's an agent.

System · Browser · Google · Social · Obsidian

Cross-session memory that survives every chat. Your docs indexed with semantic search and source attribution. Per-agent scope, importance scores, duplicate detection — without leaking a word to a vendor.

Memory · Knowledge · Semantic · Per-agent

OpenAI, Anthropic, OpenRouter, Ollama. Mix providers across agents. Pay your provider directly — we take zero cut — or go fully offline with local Ollama models.

OpenAI · Anthropic · OpenRouter · Ollama

The shape of OpenSair at a glance.
Hold a real back-and-forth with your agent. Whisper transcribes locally, Kokoro speaks locally. Zero API cost.
Persistent Chrome profile. Logins survive. Click, fill, scrape, screenshot. The web without the API gatekeepers.
Autonomous background goals with priorities. It wakes itself, adjusts cadence, and pings only when it matters.
Your primary agent plans. Sub-agents execute in parallel — each with its own model and permissions. You see every conversation.
Spawn five sub-agents, hand each a slice of the problem, and get a single answer back. No round-robin, no babysitting.
Every sub-agent keeps its own conversation. Click any one to see the full trace — reasoning, tool calls, results.
Route reasoning to Claude, heavy lifting to GPT, quiet work to local Ollama. Cost and latency, your call.
Not a cron. Not a schedule you set and forget. You give it a goal; it decides when to wake next — every 30 seconds when things are hot, every few hours when they're not. It paces itself.
“Find everything in my drive about project Helios and summarize it.” It runs, ends, shows its work.
A cron fires on your schedule. A goal wakes on its schedule. Tight loop while work is active. Long loop when the world's quiet. No tuning required from you.
Every cycle is checkpointed. Restart the app and a half-finished goal picks up where it left off — cleanly.
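The self-pacing loop above can be sketched as simple capped exponential back-off — stay tight while a cycle makes progress, double the wait when it doesn't. The constants and function name are assumptions drawn from the copy ("every 30 seconds when things are hot, every few hours when they're not"), not the actual scheduler:

```python
# Illustrative adaptive wake cadence, not OpenSair's real scheduler.
MIN_SLEEP = 30          # seconds — tight loop while work is active
MAX_SLEEP = 4 * 3600    # quiet world — wake every few hours at most

def next_sleep(current: int, made_progress: bool) -> int:
    if made_progress:
        return MIN_SLEEP                # work is hot: stay tight
    return min(current * 2, MAX_SLEEP)  # quiet: double the wait, capped

s = MIN_SLEEP
s = next_sleep(s, made_progress=False)  # 60
s = next_sleep(s, made_progress=False)  # 120
s = next_sleep(s, made_progress=True)   # back to 30
```

One progress signal is enough to snap the cadence back to the tight loop; repeated quiet cycles drift it toward the cap.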
You don't train it — you use it. Every night a learning pass runs across the day's conversations, extracting what matters, consolidating what's redundant, tagging the entities that keep coming up. No fine-tuning. No uploads. No vendor.
You prefer JetBrains Mono for code blocks and tight markdown spacing.
Never auto-send replies to clients without approval. Draft only.
Demo call with Acme was moved from Apr 15 to Apr 20 at 14:00.
Maya is a frontend engineer at Acme. She reports to Sam. Works in SF.
Facts, instructions, events, relationships. Each with importance scores and entity tags, so the agent knows what to surface and when.
Ask about something from three weeks ago without the exact words. If the meaning is close, the memory surfaces.
Every memory write is reversible from the Activity Journal. Nothing sticks unless you want it to.
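The four memory categories above — facts, instructions, events, relationships, each with an importance score and entity tags — suggest a record shape like the following. All field and function names are assumptions for illustration, not the real schema:

```python
# Hypothetical memory record; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Memory:
    kind: str              # "fact" | "instruction" | "event" | "relationship"
    text: str
    importance: float      # higher surfaces first
    entities: list = field(default_factory=list)

store = [
    Memory("instruction", "Never auto-send replies to clients. Draft only.", 0.9, ["clients"]),
    Memory("event", "Acme demo moved from Apr 15 to Apr 20 at 14:00.", 0.7, ["Acme"]),
    Memory("relationship", "Maya reports to Sam at Acme. Works in SF.", 0.5, ["Maya", "Sam", "Acme"]),
]

def surface(entity: str) -> list:
    """Return memories tagged with an entity, most important first."""
    hits = [m for m in store if entity in m.entities]
    return sorted(hits, key=lambda m: m.importance, reverse=True)

[m.kind for m in surface("Acme")]   # ['event', 'relationship']
```

Entity tags decide *which* memories are candidates; importance decides *what order* they surface in.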
Did the cron trigger? Did the agent email who it said it would? Did it — uh — delete the right folder? Every action is logged. Most of them can be undone.
Your agent takes actions. You keep the receipts. Every skill call — what it was, what arguments it passed, what it got back — goes into the journal. Filter by agent, status, or text. Reverse what's reversible. Power and control, in that order.
Every Friday at 9am, a cron is supposed to run. Did it? Open the journal. It did, and here's the result.
Agent wrote to a memory you didn't want. Ingested the wrong folder. Hit revert. It unwinds cleanly.
Click any row to see arguments, return value, and which agent (or sub-agent) called it.
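A journal entry as described above carries the skill, its arguments, the return value, the calling agent, and whether it is reversible — and can be filtered by agent or text. A minimal sketch under assumed field names, not the real journal API:

```python
# Hypothetical Activity Journal; field and function names are assumptions.
journal = []

def log_call(agent, skill, args, result, reversible=False):
    journal.append({"agent": agent, "skill": skill, "args": args,
                    "result": result, "reversible": reversible})

def filter_journal(agent=None, text=None):
    rows = journal
    if agent is not None:
        rows = [r for r in rows if r["agent"] == agent]
    if text is not None:
        rows = [r for r in rows if text in r["skill"]]
    return rows

log_call("main", "memory.write", {"text": "Acme demo moved"}, "ok", reversible=True)
log_call("sub-1", "gmail.send", {"to": "sam@example.com"}, "sent")

len(filter_journal(agent="main"))         # 1
filter_journal(text="gmail")[0]["agent"]  # 'sub-1'
```

The `reversible` flag is what separates "hit revert" entries from the ones the journal can only report.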
Tell the agent to share a report, a chart, a carousel, a draft. It packages the artifact, generates a persistent public link, hands you the URL. No export dance. No cloud-storage app. No “download then upload” shuffle.
“Share the deck with Sam.” “Give me a public link for that CSV.” “Post this markdown somewhere.” The agent packages, uploads, returns the URL.
Links don't expire. They're live until you (or the agent, if you ask) revoke them. Send once, reference forever.
PDFs, images, markdown, HTML, spreadsheets, code — the link viewer opens them inline. No “download to see” friction for the recipient.
The agent asks before it touches anything that matters. Approve once. Approve for the session. Or lock it down per agent — marketing shouldn't be running shell commands.
Reads, searches, listings, passive observation. Runs without interrupting you.
Writes, sends, drafts. Accept once or accept for this agent forever.
Destructive or irreversible. Shell, delete, mass-send. No “remember my answer” option.
Agents are not equal. Your research agent never needs shell. Your bookkeeping agent doesn't touch social. Lock each one down to the exact skills it needs. Everything else is denied — silently, permanently.
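The three tiers above reduce to a small policy: passive skills never prompt, write skills prompt until approved for the agent, destructive skills prompt every time. Tier names and the approval set are assumptions about how such a policy could look, not the shipped implementation:

```python
# Sketch of the three approval tiers; names are illustrative.
from enum import Enum

class Tier(Enum):
    PASSIVE = 1       # reads, searches, listings — runs silently
    WRITE = 2         # writes, sends, drafts — approve once or per agent
    DESTRUCTIVE = 3   # shell, delete, mass-send — always asks

agent_approvals: set = set()   # "accept for this agent forever"

def needs_prompt(skill: str, tier: Tier) -> bool:
    if tier is Tier.PASSIVE:
        return False
    if tier is Tier.DESTRUCTIVE:
        return True                    # no "remember my answer" option
    return skill not in agent_approvals

needs_prompt("gmail.draft", Tier.WRITE)       # True the first time
agent_approvals.add("gmail.draft")
needs_prompt("gmail.draft", Tier.WRITE)       # False thereafter
needs_prompt("shell.run", Tier.DESTRUCTIVE)   # always True
```

Keeping the approval set per agent is what makes "marketing shouldn't be running shell commands" enforceable rather than aspirational.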
Bring the keys you already have. Mix providers across agents. Or run the whole thing offline with Ollama — we'll default to it for summarization when it's installed.
We never proxy your calls. We don't see your tokens. No seats, no lock-in.
Low for quick calls, high for the hard thinking. Toggle per agent to manage cost.
If a model can see, images flow through. If it can't, we don't waste tokens trying.
Older context gets smart-summarized. Skill definitions lazy-load. Redundant tool calls dedupe. When Ollama is installed, summarization runs on-device for free. You pay your provider for what you actually need — nothing more.
Full for short chats. Summarize for long ones — preserves code, paths, and decisions. Truncate for cheap runs. Per-agent override.
When a local model is available, compaction and summarization run on-device. Zero API cost. Zero data shared.
Small Ollama models get a curated skill set so they don't choke on tool schemas. Local power without the token bloat.
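The three context modes above — full, summarize, truncate — can be sketched as one dispatch over a message budget. Mode names and the function signature are assumptions for illustration, not the real API:

```python
# Illustrative per-agent context strategy; names are assumptions.
def compact(messages: list, mode: str, budget: int) -> list:
    if mode == "full" or len(messages) <= budget:
        return messages                   # short chat: send everything
    if mode == "truncate":
        return messages[-budget:]         # cheap runs: keep only the tail
    # "summarize": fold older turns into one note, keep the recent tail
    older, recent = messages[:-budget], messages[-budget:]
    note = f"[summary of {len(older)} earlier messages]"
    return [note] + recent

msgs = [f"msg{i}" for i in range(10)]
compact(msgs, "truncate", 3)      # ['msg7', 'msg8', 'msg9']
compact(msgs, "summarize", 3)[0]  # '[summary of 7 earlier messages]'
```

In the summarize path, the real system would generate that note with a model (locally via Ollama when installed); the point here is only the shape of the trade: recent turns verbatim, older turns compressed.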
The easy test for “local”: yank the wifi. If it breaks, it wasn't local. OpenSair keeps working — your keys, your files, your agent.
| What | OpenSair | Typical hosted agent |
|---|---|---|
| Chat messages & files | Stays on your Mac | Sent to their cloud |
| Voice transcription | Whisper · on-device | Uploaded to API |
| Speech synthesis | Kokoro · on-device | Uploaded to API |
| API keys | Encrypted · never leave | Stored on servers |
| Browser cookies & sessions | Your disk only | Remote profile |
| Memory & knowledge | Your disk only | Vector DB you don’t own |
| Telemetry | None | Standard practice |
| Works offline | Yes (with local models) | Never |
Every other “open” AI agent is the same piece of software with a new logo. A few receipts from this quarter, for the record.
OpenSair is not a fork, not a wrapper, and not a hosted service. Built from scratch around a single idea: the agent lives with you, not on a vendor's disk.
We're not naming names. You've read the news.
Remember the Goals feature from earlier? We gave it one goal: reach 1,000 followers on @open.sair. It's been running on its own ever since — researching, drafting, art-directing, publishing, and replying.
One autonomous Goal. No team. No agency. Every post on the feed, every reply, every A/B test — the agent. The only human in the loop is the one clicking approve when it asks.
One flat price. No per-seat. No “pro” tier hiding the good features. Bring your own keys, or go offline with Ollama. Done.
One flat price for the app. Your AI usage goes directly to OpenAI, Anthropic, or OpenRouter — or nothing at all if you run locally with Ollama.
Short answers, no hedging.