Cogitae Tutorials — Learn Core Mac AI Workflows
Start here to learn the core workflows in Cogitae — from connecting a provider and working with files to branching conversations, building agents, conducting research, automating your Mac, and controlling everything from your iPhone.
Get Started
Add Another AI Provider
Cogitae works with any number of AI providers simultaneously — Anthropic Claude, OpenAI, Google Gemini, xAI Grok, Mistral, Perplexity, Groq, DeepSeek, Ollama, and more. You bring your own API keys, which means your conversations never pass through a middleman and you pay provider rates directly — no markup.
What You’ll Need
An API key from at least one provider. If you don’t have one yet, here are the fastest options to get started:
- Anthropic Claude — console.anthropic.com — Claude 3.5 Haiku is fast and cheap; Claude 3.7 Sonnet is the best all-rounder
- OpenAI — platform.openai.com — GPT-4o is capable and widely supported
- Google Gemini — aistudio.google.com — Gemini Flash is extremely fast and inexpensive; has the best prompt caching rates
Steps
1. Open Preferences
Press ⌘, or go to Cogitae → Preferences in the menu bar.
2. Select the AI tab
Click the AI tab in the preferences toolbar.
3. Add a Provider
Click the + button in the Providers section. A provider editor opens on the right.
4. Choose Your Provider
Select your provider from the dropdown (e.g., Anthropic). The form updates to show the fields that provider needs.
5. Enter Your API Key
Paste your API key into the API Key field. Cogitae stores it securely in the macOS Keychain — it never leaves your machine in plain text.
6. Choose a Default Model
Select a default model from the dropdown.
7. Save
Click Save. Your provider appears in the list and is immediately available in all conversations.
Setting a Summarization Provider
Back in the AI preferences, you’ll see a Summarization Provider dropdown. This is the provider Cogitae uses for lightweight background tasks — generating conversation summaries, creating references for bookmarked content, and similar housekeeping. Set it to your fastest, cheapest model (Gemini Flash or Claude Haiku are good choices) to keep background costs minimal.
You’re Ready
Open a new conversation (⌘N), select your provider from the toolbar, and start talking. Once you have a provider connected, Cogitae becomes fully self-documenting — just ask it how to use any feature and it will tell you, or do it for you.
Work With Files in Chat
One of the fastest ways to understand Cogitae is to use it on real files. Give it access to a folder, ask it to inspect what’s there, and let it help you search, summarize, and reason over the contents.
What You’ll Do
Grant Cogitae access to a folder, then use chat to inspect files and answer questions about them.
Steps
1. Pick a folder you care about
Choose a project folder, notes folder, or any small directory you want to explore.
2. Add the folder as a workspace
In Cogitae, add that folder as a workspace so the app can access it.
3. Start a new conversation
Press ⌘N and begin a new chat.
4. Ask Cogitae what’s in the folder
Try prompts like:
“Show me the top-level files in this workspace.”
“What kind of project is this?”
“Find the files most relevant to authentication.”
5. Ask a follow-up question that requires reading
For example:
“Summarize the README and tell me how to run this project.”
“Search for TODO comments and group them by file.”
“Explain how configuration is handled in this codebase.”
Why This Matters
This is where Cogitae starts to feel different from a browser chat. Instead of pasting snippets manually, you can work directly with the files that matter to the task.
Branching Conversations: Never Lose a Good Idea
Most AI clients give you a linear conversation. Cogitae gives you a tree. Every AI response, every note, every system message can be branched — you can explore a different direction without losing where you were.
Why Branching Changes Everything
In a linear chat, if you don’t like an answer and ask a follow-up, you’ve permanently altered the conversation. The original response is still there visually, but the context has moved on.
In Cogitae, you can branch from any AI response and take the conversation in a completely different direction — while the original branch stays intact and navigable. You end up with a map of how your thinking evolved, not just where it ended up.
Creating a Branch
- Hover over any AI response and click the branch icon
- Type your new message in the input field
- Submit — a new branch is created
You’ll see branch spinners (arrows) on the message, showing you how many siblings exist. Click them to move between branches.
The Conversation Graph
Open the Table of Contents with ⌘L and switch to Graph View. You’ll see your entire conversation as a visual tree:
- Blue nodes: System messages
- Green nodes: Your messages
- Purple nodes: AI responses
- Orange nodes: Notes
Pan, zoom, and right-click any node to navigate to it or make a different branch active. For complex research or planning sessions, this view reveals structure you can’t see in a linear scroll.
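Under the hood, a branching conversation is just a tree of messages. The sketch below is illustrative only (it is not Cogitae’s actual data model); it shows how branching falls out naturally once every message keeps a list of children:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    """One node in a branching conversation tree."""
    role: str                       # "system", "user", "assistant", or "note"
    text: str
    parent: "Message | None" = None
    children: list["Message"] = field(default_factory=list)

    def branch(self, role: str, text: str) -> "Message":
        """Branching = adding another child under the same parent."""
        child = Message(role, text, parent=self)
        self.children.append(child)
        return child

    def siblings(self) -> list["Message"]:
        """All branches that share this message's parent."""
        return self.parent.children if self.parent else [self]

# One AI answer, two user branches exploring different directions.
root = Message("system", "You are helpful.")
answer = root.branch("assistant", "Here are three approaches...")
branch_a = answer.branch("user", "Expand on approach A.")
branch_b = answer.branch("user", "Expand on approach B.")
```

Both branches remain reachable: `branch_a` and `branch_b` are siblings, and the original `answer` node is the shared fork point.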
Practical Uses
Exploring alternatives: Ask for three different approaches to a problem, then branch from each one to explore them independently.
Safe editing: Edit a user message to rephrase a question — Cogitae automatically creates a branch, so your original question and its response are preserved.
Parallel research: Branch at a system message to run the same conversation with two different AI providers and compare responses side-by-side.
Capturing tangents: If a conversation goes somewhere interesting but unrelated to your main thread, branch there and explore it without derailing the original.
Creating Projects: Branch any number of conversations off the initial system prompt. Each aspect of your project lives on its own branch, all grouped under a parent conversation, and each branch can have branches of its own.
Notes as Anchors
Notes are non-AI messages you can drop anywhere in the conversation tree — they don’t get sent to the AI, but they can be branched from and attached to queries as context. Notes have the same full markdown rendering as AI messages. Use them to annotate branches, mark decisions, or leave yourself reminders mid-investigation.
Automate Your Mac
Build a File Monitor Agent in 60 Seconds
Cogitae’s event-driven agent system lets you create autonomous agents that watch for things and act on them — no code required. This tutorial walks you through creating an agent that monitors a folder and summarizes what changed.
What You’ll Build
An agent that watches your Downloads folder, detects new files, and sends you a desktop notification with a plain-English summary of what arrived.
Steps
1. Open Agent Preferences
Go to Preferences → Agents and click + to create a new agent.
2. Name Your Agent
Give it a name like Downloads Monitor.
3. Choose the Event Source
Set the event source to FileSystem and select your ~/Downloads folder as the path to watch.
4. Write the Instruction
In the instruction prompt field, write something like:
A new file has appeared in my Downloads folder. Briefly describe what it is based on its filename and extension, and whether I likely need to do anything with it.
5. Set the Action Policy
Choose Act and Notify — the agent will run the AI analysis and send you a notification with the result.
6. Enable and Save
Toggle the agent on. From now on, every time a file lands in Downloads, Cogitae silently analyzes it and tells you what it is.
What’s Happening Under the Hood
Cogitae uses FSEvents (the same kernel-level file watching macOS uses for Spotlight) to detect changes. When a change is detected, your agent wakes up, runs a headless AI session with full tool access, and delivers the result — all without you touching the app.
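Cogitae gets these change events from the kernel, but the underlying idea (detect what appeared since the last look) can be sketched with a simple snapshot diff. The Python below polls instead of using FSEvents, so treat it as a conceptual stand-in only, not how Cogitae works:

```python
import os

def snapshot(path: str) -> set[str]:
    """Record which entries currently exist in the watched folder."""
    return set(os.listdir(path))

def new_entries(path: str, before: set[str]) -> set[str]:
    """Entries that have appeared since the last snapshot."""
    return snapshot(path) - before
```

An agent loop would take a snapshot, wait for a change notification, then run the instruction prompt once per entry returned by `new_entries`.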
Taking It Further
- Watch a project folder and get notified when build artifacts change
- Monitor a shared Dropbox folder and summarize new documents
- Watch a log directory and alert you when errors appear
- Ask Cogitae to write and install the agent for you: “Create an agent that watches my Downloads folder and tells me when something new arrives”
Control Your Mac from Your iPhone
Cogitae has an iOS companion app that lets you start conversations, continue them, and — most powerfully — send instructions to the macOS app from your phone. Your Mac runs the task while you’re away from it.
What You Can Do
- Browse and continue any conversation from your Mac, on your iPhone
- Start new conversations that sync back to macOS
- Send natural language commands that execute on your Mac: “Read the file I was working on and summarize the last 50 lines”
- Trigger agents, change settings, run tools — all from iOS
Setup
On your Mac:
Cogitae uses Bonjour for local network discovery — no configuration needed if your iPhone and Mac are on the same Wi-Fi network.
On your iPhone:
- Open the Cogitae iOS app
- Tap the machine picker — your Mac appears automatically
- Select it to connect
Starting and Continuing Conversations
The iOS app shows your full conversation history, synced via iCloud. Tap any conversation to open it, read it, and continue it. Responses from the AI appear on both devices.
Sending Remote Commands
This is where it gets interesting. When connected to your Mac, messages you send from iOS are relayed to the macOS app and executed with full tool access — the same tools available in a desktop conversation.
From your phone, you can say:
“Check if my server is running and tell me the last 20 lines of the log”
“What files did I add to my project today?”
“Change the code style to dark theme”
“Run my daily summary agent now”
Your Mac does the work. The result comes back to your phone.
A Practical Workflow
You’re in a meeting and remember you need to check something in your codebase. Pull out your phone, open Cogitae, connect to your Mac, and ask:
“Search my project for all uses of the deprecated API and list the files”
By the time you’re back at your desk, the answer is waiting.
Advanced Features
Socratic Reasoning: Getting Better Answers Through Debate
Cogitae includes a structured investigation system called Aristotle that uses a second, independent AI model (Socrates) to challenge its own conclusions. The result is analysis that’s been stress-tested before it reaches you — closer to how good human thinking works.
Why This Matters
Single-model responses are confident by default. The model generates a plausible answer and stops. It doesn’t backtrack, consider alternatives, or catch its own overreach.
Cogitae’s Aristotle forces that process to happen: one model investigates, builds a case, and reaches a conclusion — then a second model (Socrates) tears it apart looking for gaps, unsupported assumptions, logical fallacies, and unexplored alternatives. The cycle repeats until both models agree the conclusion is solid.
When Cogitae Uses It
When you ask a diagnostic question — “Why is this happening?”, “What’s wrong with this approach?”, “What am I missing?” — Cogitae evaluates whether the problem is complex enough to warrant Socratic reasoning. If it is, it will ask your permission before launching the full investigation (since it uses a high-intelligence model and costs more tokens).
How to Trigger It Manually
Ask anything that starts with why, what’s wrong, or help me think through:
“Help me think through the architecture for this feature.”
“Why does this algorithm behave differently at scale?”
“What are the weaknesses in this business plan?”
You can also ask Cogitae to investigate explicitly:
“Investigate why my build times have gotten slower.”
“Explore the tradeoffs between these two approaches.”
What You Get Back
A structured report with:
- Conclusions with confidence levels
- Evidence gathered during investigation
- Eliminated alternatives — what was considered and ruled out
- Residual uncertainty — what’s still unknown
- Recommended next step
Practical Example: Planning a Feature
Instead of asking “How should I implement X?” (which gets a confident single-pass answer), try:
“I want to add real-time collaboration to my app. Investigate the tradeoffs between CRDTs, operational transforms, and a simple lock-based approach for my use case.”
Cogitae will research each approach, have Socrates challenge its analysis, and return a structured comparison that’s been genuinely stress-tested.
Context Optimizer: Use Only the Tools You Need
Cogitae supports a large and growing number of tools — file access, web search, code execution, shell commands, and more. Every enabled tool adds its full definition to the system prompt, which means more tokens per query. The Context Optimizer solves this by letting the AI discover and activate tools on demand, rather than loading everything upfront.
How It Works
When the Context Optimizer is enabled, Cogitae starts each conversation with a minimal toolset. Instead of injecting every tool definition into the system prompt, it provides a single catalog tool that the AI can call to:
- List available tools — see what’s available, grouped by category, with descriptions
- Enable specific tools — activate only the ones relevant to the current task
The AI reads your message, decides which tools it needs, enables them, and proceeds — all transparently. You don’t need to do anything differently.
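The catalog pattern can be sketched in a few lines. The tool names, fields, and grouping below are hypothetical, purely to show the shape of the idea:

```python
# Hypothetical catalog: only definitions for *enabled* tools reach the prompt.
CATALOG = {
    "web_search": {"category": "web",    "description": "Search the web."},
    "read_file":  {"category": "files",  "description": "Read a workspace file."},
    "run_shell":  {"category": "system", "description": "Execute a shell command."},
}

def list_tools() -> dict[str, list[str]]:
    """What the single catalog tool returns: tool names grouped by category."""
    grouped: dict[str, list[str]] = {}
    for name, info in CATALOG.items():
        grouped.setdefault(info["category"], []).append(name)
    return grouped

def enable_tools(names: list[str], enabled: set[str]) -> set[str]:
    """Activate only the tools the model asked for."""
    return enabled | {n for n in names if n in CATALOG}

enabled: set[str] = set()                         # conversation starts minimal
enabled = enable_tools(["read_file"], enabled)    # model enables what it needs
```

Only the one enabled definition would then be injected into the system prompt; the other entries stay out of the context window until requested.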
Turning It On
Go to Preferences → Tool Settings and enable Tool Context Optimizer. It applies to new conversations from that point forward.
When to Use It
The optimizer is most useful when you have many tools enabled — especially MCP tools or native plugins — but most conversations only use a handful of them. It reduces token usage by keeping unused tool definitions out of the context window.
The trade-off is a small overhead: the AI makes an extra tool call at the start of each session to discover and enable what it needs. For conversations with only a few tools enabled, the savings are minimal and you can leave it off.
Tips
Enable a wide variety of tools. The optimizer makes it practical to leave many tools enabled without paying the token cost for all of them in every conversation. Install what you might need and let the AI pick the right tools for the job.
Trust the AI’s tool selection. The optimizer shows the AI a catalog with descriptions. It will enable the tools relevant to your query — you don’t need to tell it which tools to use.
Spend Less, Think More: Automatic Cost Optimization
Cogitae is built around the idea that you shouldn’t have to think about AI costs — the app should handle that for you. It does this in three ways: automatic model selection, prompt caching, and live cost tracking.
Automatic Model Selection
When Cogitae spawns sub-agents (for Aristotle, Socrates, parallel research tasks, and more), it doesn’t just use your default model for everything. It scores each task against two dimensions:
- Intelligence Index — how complex is the reasoning required?
- Coding Index — does this task involve writing or analyzing code?
For a simple task like “search this file for a pattern,” it picks the cheapest model that can do it reliably. For a complex architectural analysis, it selects a frontier model. You get the right tool for each job automatically, without configuring anything.
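The selection logic amounts to a constrained minimization: pick the cheapest model whose indices meet the task’s requirements. Here is a sketch with made-up model names, index values, and prices:

```python
# Hypothetical model table: indices and $ per 1M input tokens are placeholders.
MODELS = {
    "fast-small": {"intelligence": 40, "coding": 35, "price": 0.25},
    "mid-tier":   {"intelligence": 70, "coding": 72, "price": 3.00},
    "frontier":   {"intelligence": 95, "coding": 93, "price": 15.00},
}

def pick_model(required_intelligence: int, required_coding: int) -> str:
    """Cheapest model whose indices meet both task requirements."""
    candidates = [
        (info["price"], name)
        for name, info in MODELS.items()
        if info["intelligence"] >= required_intelligence
        and info["coding"] >= required_coding
    ]
    return min(candidates)[1]

pick_model(30, 20)   # simple search task: cheapest capable model wins
pick_model(90, 90)   # architectural analysis: only a frontier model qualifies
```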
Prompt Caching
Anthropic and Google both support prompt caching — when the same large context (your system prompt, conversation history, attached files) is sent repeatedly, the provider caches it and charges a fraction of the normal input price for subsequent requests.
Cogitae takes full advantage of this. Long conversations with Anthropic Claude or Google Gemini get significantly cheaper as the session progresses, because the context is being served from cache rather than re-processed each time.
You can see the cache hit ratio for any conversation:
“What’s my token usage?”
Cogitae reports both best-case cost (with full caching) and worst-case cost (no cache), so you know exactly where your money is going.
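The best-case/worst-case arithmetic is straightforward to illustrate. The input price and cache discount below are placeholders, not actual provider rates:

```python
def turn_cost(context_tokens: int, cached_tokens: int,
              input_price: float, cache_read_discount: float = 0.1) -> float:
    """Cost of one request's input, with cached tokens billed at a discount.

    input_price is $ per 1M tokens; a cache_read_discount of 0.1 means cached
    tokens cost 10% of the normal rate (both figures illustrative only).
    """
    fresh = context_tokens - cached_tokens
    return (fresh + cached_tokens * cache_read_discount) * input_price / 1_000_000

worst = turn_cost(100_000, 0, input_price=3.0)         # no cache: full price
best  = turn_cost(100_000, 100_000, input_price=3.0)   # fully cached context
```

With a 100k-token context at these rates, the fully cached turn costs a tenth of the uncached one, which is why long cached sessions get cheaper as they progress.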
Live Cost Tracking
The cost tool gives you per-session, per-message, and per-provider breakdowns:
“How much has this conversation cost so far?”
“Compare the cost of my last 10 messages across providers”
Cogitae pulls live pricing from its cloud API (refreshed every 12 hours) so the numbers are always current — no stale hard-coded rates.
Practical Tips for Keeping Costs Down
Install a wide selection of models. Don’t just add the latest and greatest: include cheaper, older, and faster models from many different providers so Cogitae has a broad pool from which to pick the best model for each job.
Use the cheapest model that works. For simple Q&A, drafting, or summarization, a fast cheap model (Claude Haiku, Gemini Flash, GPT-4o-mini) is often indistinguishable from the expensive ones. Reserve frontier models for genuine complexity.
Keep long-running conversations going rather than starting fresh. In a cached session, your accumulated context costs a fraction of what it would in a fresh conversation.
Let Cogitae pick the model. When using multi-agent features like Aristotle or parallel research, trust the automatic model selection. It’s optimized to minimize cost while maintaining quality.
Research a Topic Like a PhD Student
Cogitae’s Aristotle agent, academic search, and memory system combine into something that feels less like a chatbot and more like a research assistant with a library card. This tutorial walks you through a structured investigation of a real question — the kind that would normally take hours of reading and cross-referencing.
What You’ll Do
Investigate a complex question using Aristotle’s hypothesis-driven reasoning and academic search, then save your findings to memory so future conversations can build on them.
Pick a Question Worth Investigating
Choose something with genuine complexity — a question where a quick Google search gives you conflicting answers or superficial takes. Good examples:
“Is there solid evidence that microplastics affect human endocrine function?”
“What’s the current consensus on whether code review actually reduces defect rates?”
“Do standing desks measurably improve health outcomes, or is it mostly marketing?”
The key is picking a question where you actually want a nuanced answer, not just a yes or no.
Steps
1. Start a new conversation
Press ⌘N. Make sure you have a capable model selected (Claude Sonnet or Opus, GPT-4o, or Gemini Pro) — this is going to be a multi-step investigation that benefits from strong reasoning.
2. Enable the right tools
In the instruction message, enable: aristotle, academic_search, memory, and web. Aristotle will use academic search to find papers, web to check sources, and memory to store findings.
3. Ask your question with investigative framing
Don’t just ask the question directly. Frame it as an investigation:
“Investigate whether there is solid clinical evidence that intermittent fasting improves metabolic health markers in non-obese adults. Search the academic literature, evaluate the quality of the studies, and tell me what’s well-supported versus what’s still speculative.”
Cogitae recognizes this as a complex diagnostic question and offers to launch Aristotle.
4. Let Aristotle work
Accept the Aristotle investigation. You’ll see the Agent Panel (⌘[) light up. Aristotle will:
- Form initial hypotheses based on your question
- Search academic databases for relevant papers
- Evaluate source quality and methodology
- Build a case for and against each hypothesis
- Hand the analysis to Socrates, who attacks it looking for gaps
- Iterate until both models agree the conclusions hold up
This takes a few minutes. The Agent Panel shows progress in real time.
5. Read the structured report
When Aristotle finishes, you get a report with:
- Conclusions ranked by confidence level
- Evidence with citations to specific studies
- Eliminated alternatives — what was considered and ruled out
- Residual uncertainty — what the evidence doesn’t cover yet
- Recommended next steps for further investigation
6. Save key findings to memory
Tell Cogitae to remember what matters:
“Store the key findings from this investigation to memory — particularly the high-confidence conclusions and the studies that support them.”
Now any future conversation can recall this research. Next week, when you’re discussing the same topic, Cogitae already knows what the evidence says.
Why This Matters
A single-model response to “does intermittent fasting work?” gives you a confident paragraph that reads well but may cherry-pick evidence. Aristotle forces the analysis through adversarial review — Socrates actively tries to break the conclusions before they reach you. The result is closer to what a careful researcher would produce after reading the same papers.
Taking It Further
- Branch from the report to explore a specific sub-question in depth
- Use Newton instead of Aristotle when you want knowledge synthesis rather than hypothesis testing
- Chain investigations: use the memory from one study as context for the next
- Ask Cogitae to export the report as markdown for sharing
Let Cogitae Run Your Morning Briefing
Imagine opening your Mac and finding a summary already waiting: today’s meetings, gaps you could use for focus time, and a heads-up on anything that needs attention. This tutorial shows you how to build a timer-based agent that delivers a daily briefing without you lifting a finger.
What You’ll Build
An agent that fires every morning, checks your calendar, and sends you a desktop notification with a plain-English summary of your day.
Steps
1. Open Agent Preferences
Go to Preferences → Agents and click + to create a new agent.
2. Name Your Agent
Call it something like Morning Briefing.
3. Choose the Event Source
Set the event source to Timer and configure the schedule: weekdays at 08:00 (or whatever time you want your briefing).
4. Write the Instruction
In the instruction prompt field, write:
Prepare my morning briefing. Check my calendar for today’s events and summarize them — include meeting times, who’s involved, and anything I should prepare for. If there’s a gap of 2+ hours, mention it as potential focus time. Keep the summary concise and actionable.
5. Enable the Calendar Tool
In the agent’s tool selection, enable calendar. If you want the agent to also check files (like a project folder for recent changes), enable read and search and add the folder as a workspace.
6. Set the Action Policy
Choose Act and Notify — the agent runs the AI and sends you a desktop notification with the result.
7. Enable and Save
Toggle the agent on. Tomorrow morning at 8:00, your briefing arrives automatically.
Extending the Briefing
Once the basic calendar briefing is working, you can make it richer:
- Add workspace monitoring: Include a project folder and ask the agent to note any files that changed overnight
- Add web search: Enable the web tool and ask it to check weather or relevant news
- Chain agents: Create a second agent triggered by AgentLifecycle that fires when Morning Briefing completes — it could, for example, open the first relevant document or create a conversation with your day’s plan
A Practical Example
Your instruction might evolve to something like:
Morning briefing. Check today’s calendar events. For any meeting with more than 3 attendees, note the topic so I can prepare. Flag any back-to-back meetings with no break. If my “Current Sprint” workspace has files modified since yesterday, list them. End with one sentence: what’s the single most important thing I should focus on today?
The agent runs every morning while you’re making coffee. By the time you sit down, your day is summarized.
Debug a Codebase You’ve Never Seen
You’ve just been handed a repo you’ve never touched. Something is broken. You don’t know the architecture, the conventions, or where to start. This is exactly where Sherlock — Cogitae’s deep investigation agent — earns its name.
What You’ll Do
Point Sherlock at an unfamiliar codebase and ask it to find the root cause of a bug, without you having to understand the project structure first.
Steps
1. Add the project as a workspace
Drag the project folder onto the instruction message, or use the Attachments button to add it. Cogitae now has read access to the entire directory tree.
2. Enable the right tools
Sherlock needs filesystem access to explore. Make sure these are enabled in the instruction message: file, fs, search, and sherlock. If the project has a build system or test suite, also enable exec and approve the relevant executables (like npm, pytest, cargo, etc.).
3. Describe the symptom
Be specific about what’s broken, but don’t guess at the cause:
“This project crashes when processing files larger than 50MB. The error is ‘allocation failed’ but the machine has 32GB of RAM. I’ve never worked in this codebase. Find the root cause.”
Or for something subtler:
“Users report that search results are sometimes missing items that definitely exist in the database. It’s intermittent. I just inherited this project. Figure out why.”
4. Let Sherlock investigate
Cogitae launches Sherlock, who works like a detective:
- Reads the project structure to understand the architecture
- Identifies the relevant subsystems based on the symptom
- Traces the code paths involved
- Forms hypotheses about possible causes
- Tests each hypothesis by reading the actual code
- Hands findings to Socrates for adversarial review
- Reports back with a structured diagnosis
Watch the Agent Panel — Sherlock’s progress is visible as it explores the codebase.
5. Read the diagnosis
Sherlock returns a structured report: what’s causing the issue, where in the code it happens, why it happens, and what to do about it. The report includes file paths and line numbers so you can verify the findings yourself.
When to Use Sherlock vs. Just Asking
For simple questions about a codebase — “where is authentication handled?” or “what does this function do?” — you don’t need Sherlock. Just ask directly with file access enabled.
Sherlock is for problems that require investigation: tracing execution paths, correlating symptoms with causes, ruling out red herrings. The kind of debugging where a human would need to read a lot of code before forming a theory.
Taking It Further
- Ask Sherlock to investigate performance issues: “This API endpoint is 10x slower than it should be. Profile the code path and find the bottleneck.”
- Use Sherlock on your own code when something breaks in a part of the project you haven’t touched in months
- Branch from Sherlock’s report to explore the fix — ask Cogitae to implement it
Turn a Messy Idea Into a Project Plan
You have a vague idea and a blank page. Maybe it’s a feature you want to build, a side project you’ve been thinking about, or a problem at work that needs a structured approach. This tutorial chains three specialized agents — Da Vinci for creative ideation, Hippodamus for structured planning, and Patton for stress-testing — to turn that mess into something actionable.
What You’ll Do
Start with a rough idea, have Cogitae expand it creatively, organize it into a concrete plan, then pressure-test it for weaknesses — all in one conversation.
Steps
1. Start with the messy version
Open a new conversation and just describe the idea as it exists in your head. Don’t worry about structure:
“I want to build a tool that helps people manage their reading list. Not just bookmarks — something that tracks what you’ve read, what you thought about it, and suggests what to read next based on patterns in what you liked. Maybe it connects to Kindle highlights somehow. I don’t know the architecture yet.”
2. Call Da Vinci for creative expansion
Ask Cogitae to explore the idea:
“Use Da Vinci to explore this idea. What are the most interesting directions this could go? What adjacent problems could it solve? What would make this genuinely different from existing reading trackers?”
Da Vinci thinks across domains — it might connect your reading tracker to spaced repetition research, or suggest that the “what you thought about it” piece is the real innovation, not the tracking.
3. Branch and call Hippodamus for structure
Branch from Da Vinci’s response (so you keep the creative exploration intact) and ask for a plan:
“Based on Da Vinci’s ideas, use Hippodamus to create a structured project plan. Break this into phases with concrete deliverables. What do I build first? What can wait?”
Hippodamus produces a structured plan with phases, dependencies, and milestones. It can save the plan as a markdown file in your workspace.
4. Branch again and call Patton for stress-testing
Branch from the plan and bring in the adversarial reviewer:
“Use Patton to attack this plan. What are the weakest assumptions? Where will I get stuck? What’s most likely to fail? Be harsh.”
Patton looks for gaps: unrealistic scope, missing technical unknowns, dependency risks, things you’re underestimating. It’s the voice in the room that says “have you actually thought about…”
5. Synthesize
You now have three branches from one conversation:
- Da Vinci: Creative possibilities and differentiation
- Hippodamus: Structured plan with phases
- Patton: Weaknesses and risks
Navigate between them using the conversation graph (⌘]) to see the full picture. Create a new branch to write the final plan that incorporates the best of all three.
Why Three Agents?
Each agent has a distinct cognitive mode. Da Vinci explores without constraints. Hippodamus organizes without sentiment. Patton attacks without mercy. Using all three gives you the creative-structured-critical loop that good teams do naturally — but available on demand, in one conversation, in a few minutes.
Taking It Further
- Save the final plan to memory so future conversations can reference it
- Ask Hippodamus to export the plan as a markdown file with checklists
- Use this workflow for anything that needs structured thinking: conference talks, blog posts, system designs, hiring plans
Teach Cogitae How You Work
Every time you start a new conversation, Cogitae starts fresh. Unless you teach it. The memory system lets you store preferences, facts, conventions, and lessons that automatically inject into future conversations. The more you teach it, the less you repeat yourself.
What You’ll Do
Build up a set of memories that make Cogitae dramatically more useful over time — from coding style preferences to project context to personal workflow patterns.
Step 1: Store Your Preferences
Tell Cogitae how you like to work:
“Remember that I prefer TypeScript over JavaScript, use Tailwind for styling, and always want strict null checks enabled. When writing code for me, use functional components with hooks, not class components.”
Cogitae stores this as a preference memory. Next time you ask it to write code — even in a completely new conversation — it remembers.
More examples:
“Remember that I use vim keybindings and prefer terminal-based workflows over GUI tools.”
“Remember that when I ask for explanations, I want them concise — no preambles, no hedging, just the answer.”
“Remember that I manage three projects: the API (Go), the frontend (React), and the data pipeline (Python). When I say ’the API’ I mean the Go service.”
Step 2: Store Project Context
Give Cogitae context that isn’t obvious from the code:
“Remember that our API uses the repository pattern, and all database access goes through repository interfaces. Direct SQL in service code is considered a bug.”
“Remember that the legacy auth system is being replaced by the new OAuth flow. Any code touching authentication should use the new system, not the old one.”
“Remember that the deployment pipeline runs on GitHub Actions and deploys to AWS ECS. The staging environment is at staging.example.com.”
Step 3: Store Lessons
When something goes wrong — or right — capture the lesson:
“Remember that the Postgres connection pool needs to be at least 20 for our load. We had an outage when it was set to 5.”
“Remember that the PDF export feature is sensitive to font loading order. Always load fonts before rendering.”
How Memory Injection Works
At the start of each conversation turn, Cogitae searches your memory store for entries relevant to the current context. Matching memories are injected into the system prompt — the AI sees them as context before it responds. You don’t have to ask it to remember; it happens automatically.
Profile memories (things about you personally) are always injected. Project and preference memories are injected when they’re relevant to the topic.
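As a mental model, the relevance matching can be sketched as simple keyword overlap. Cogitae’s actual matching is richer, so treat the function below as an illustration of the injection rules (profile memories always in, others only when relevant), not the real implementation:

```python
def relevant_memories(memories, message, limit=5):
    """Toy relevance scoring by keyword overlap (illustrative only)."""
    words = set(message.lower().split())
    scored = []
    for m in memories:
        if m["kind"] == "profile":
            # Profile memories are always injected, regardless of topic.
            scored.append((float("inf"), m))
            continue
        overlap = len(words & set(m["text"].lower().split()))
        if overlap:
            scored.append((overlap, m))
    # Highest-scoring memories first; cap how many get injected.
    scored.sort(key=lambda s: -s[0])
    return [m for _, m in scored[:limit]]
```

A real system would use embeddings rather than word overlap, but the shape is the same: score, rank, cap, inject.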
Managing Your Memories
You can browse, search, and clean up memories at any time:
“Show me all my stored memories.”
“Search my memories for anything about deployment.”
“Delete the memory about the old auth system — we finished the migration.”
You can also manage memories in Preferences → Memory, which shows a searchable table with access counts, injection frequency, and health status.
The Compound Effect
One memory isn’t transformative. Twenty are. After a few weeks of casual use, Cogitae knows your stack, your conventions, your project’s quirks, and your personal preferences. Conversations that used to require three messages of setup context now just work from the first prompt.
Automate File Organization With Plain English
Your Downloads folder is a mess. Invoices mixed with screenshots mixed with PDFs from six months ago. This tutorial builds an agent that watches a folder, classifies new files, and moves them into organized subdirectories — all described in natural language.
What You’ll Build
An event-driven agent that watches your Downloads folder (or any folder), analyzes each new file, and sorts it into subdirectories based on what it is.
Steps
1. Create the directory structure
Before the agent can sort files, it needs somewhere to put them. Create a few subdirectories in a target location:
~/Documents/Sorted/
├── invoices/
├── screenshots/
├── documents/
├── images/
├── code/
└── other/
Or let the agent create them — it can do that too.
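If you’d rather create the folders yourself, a few lines of Python will do it. The `SORTED_DIR` environment variable here is just a convenience for pointing the script somewhere other than the default:

```python
import os
from pathlib import Path

# Base folder for sorted files; defaults to ~/Documents/Sorted.
base = Path(os.environ.get("SORTED_DIR", Path.home() / "Documents" / "Sorted"))

for sub in ("invoices", "screenshots", "documents", "images", "code", "other"):
    # parents=True creates Sorted/ itself; exist_ok makes reruns harmless.
    (base / sub).mkdir(parents=True, exist_ok=True)
```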
2. Create the agent
Go to Preferences → Agents and click +.
- Name: File Organizer
- Event source: FileSystem, watching ~/Downloads
3. Write the instruction
A new file has appeared in my Downloads folder. Analyze its filename, extension, and if possible its contents to determine what category it belongs to. Then move it to the appropriate subdirectory under ~/Documents/Sorted/:
- PDF invoices, receipts, and bills → invoices/
- Screenshots (PNG/JPG with “Screenshot” or “Screen Shot” in the name) → screenshots/
- Documents (PDF, DOCX, TXT that aren’t invoices) → documents/
- Images (JPG, PNG, WEBP, GIF that aren’t screenshots) → images/
- Code files (source code, configs, scripts) → code/
- Everything else → other/
After moving, rename the file to include today’s date prefix if it doesn’t already have one (e.g., 2026-03-24-filename.pdf). Send a notification with what you moved and where.
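For intuition, the filename-based part of these English rules corresponds to roughly this deterministic sketch (the agent can additionally inspect file contents, which plain pattern matching can’t):

```python
import re
from datetime import date
from pathlib import Path

def classify(filename: str) -> str:
    """Map a filename to a Sorted/ subdirectory name."""
    name = filename.lower()
    ext = Path(name).suffix
    # Screenshots take priority over the generic image rule.
    if ext in {".png", ".jpg", ".jpeg"} and ("screenshot" in name or "screen shot" in name):
        return "screenshots"
    if ext == ".pdf" and any(w in name for w in ("invoice", "receipt", "bill")):
        return "invoices"
    if ext in {".pdf", ".docx", ".txt"}:
        return "documents"
    if ext in {".jpg", ".jpeg", ".png", ".webp", ".gif"}:
        return "images"
    if ext in {".py", ".ts", ".go", ".sh", ".json", ".yaml", ".toml"}:
        return "code"
    return "other"

def with_date_prefix(filename: str) -> str:
    # Leave the name alone if it already starts with YYYY-MM-DD.
    if re.match(r"\d{4}-\d{2}-\d{2}-", filename):
        return filename
    return f"{date.today():%Y-%m-%d}-{filename}"
```

Note the rule ordering: the screenshot check must come before the general image check, exactly as the English instruction implies with its “that aren’t screenshots” carve-out.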
4. Enable the right tools
Give the agent access to: file, fs, search, and notification. Add both ~/Downloads and ~/Documents/Sorted/ as workspace paths.
5. Set the Action Policy
Choose Act and Notify so the agent sorts files and tells you what it did.
6. Set rate limits
Under agent settings, set a reasonable cooldown (30 seconds) so rapid-fire downloads don’t overwhelm it. Set max activations per hour to 30.
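Conceptually, the cooldown and the hourly cap combine like this toy gate (illustrative only; Cogitae enforces these limits internally):

```python
class ActivationGate:
    """Sketch of cooldown + hourly-cap semantics for an event-driven agent."""

    def __init__(self, cooldown_s: float = 30.0, max_per_hour: int = 30):
        self.cooldown_s = cooldown_s
        self.max_per_hour = max_per_hour
        self.history: list[float] = []  # timestamps of allowed activations

    def allow(self, now: float) -> bool:
        # Forget activations older than one hour.
        self.history = [t for t in self.history if now - t < 3600]
        if self.history and now - self.history[-1] < self.cooldown_s:
            return False  # still cooling down from the last activation
        if len(self.history) >= self.max_per_hour:
            return False  # hourly cap reached
        self.history.append(now)
        return True
```

With a 30-second cooldown, ten files landing in Downloads at once trigger one activation, not ten.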
7. Enable and test
Toggle the agent on. Download a file — any file. Within seconds, you should get a notification telling you it was classified and moved.
Refining the Rules
The beauty of this approach is that the rules are just English. Want to add a new category? Edit the instruction:
- Kindle highlights (TXT files with “Kindle” or “notebook” in the name) → reading/
Want smarter classification? Give it more context:
If the file is a PDF, read the first page. If it mentions an amount due, a total, or an invoice number, it’s an invoice. If it’s a research paper (has an abstract section), move it to research/.
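That content-based refinement boils down to keyword checks on extracted text. A sketch, assuming the first page’s text has already been extracted by some upstream step:

```python
def refine_pdf_category(first_page_text: str) -> str:
    """Classify a PDF from its first page of text (extraction assumed done upstream)."""
    text = first_page_text.lower()
    # Billing language wins first, mirroring the English rule above.
    if any(hint in text for hint in ("amount due", "total", "invoice number")):
        return "invoices"
    # A paper with an abstract section goes to research/.
    if "abstract" in text:
        return "research"
    return "documents"
```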
Taking It Further
- Add a second agent that runs weekly to clean up the “other” folder — reviewing files that didn’t match any category
- Watch your Desktop folder too, with the same rules
- Create project-specific agents that watch a project’s dist/ or build/ directory and archive old artifacts
Security Review Your Own Code
You’ve written the code. It works. But is it safe? Cogitae’s Patton agent is an adversarial reviewer that looks at your project like an attacker would — probing for vulnerabilities, misconfigurations, and security gaps you didn’t think about.
What You’ll Do
Run Patton against a project workspace and get a structured security assessment with specific findings, severity levels, and remediation guidance.
Steps
1. Add the project as a workspace
Drag your project folder onto the instruction message to grant file access.
2. Enable the right tools
Patton needs to read code and explore the project. Enable: patton, file, fs, search, and web (for checking known vulnerability databases).
3. Launch the review
Ask Cogitae to run a security review:
“Use Patton to perform a security review of this project. Focus on: authentication and authorization logic, input validation and injection risks, secret handling and credential storage, API security, and dependency vulnerabilities. Report findings by severity.”
4. Wait for the assessment
Patton methodically works through the codebase:
- Maps the attack surface (entry points, APIs, auth boundaries)
- Reads authentication and authorization code
- Searches for common vulnerability patterns (SQL concatenation, unsanitized user input, hardcoded secrets)
- Checks how secrets and credentials are handled
- Reviews dependency files for known-vulnerable packages
- Hands findings to Socrates for validation (to reduce false positives)
5. Read the security report
Patton returns findings organized by severity:
- Critical: Issues that could lead to data breach or system compromise
- High: Significant vulnerabilities that should be fixed before production
- Medium: Issues that increase attack surface or reduce defense depth
- Low: Best-practice deviations and hardening opportunities
Each finding includes: what the problem is, where it is in the code (file and line), why it matters, and how to fix it.
What Patton Catches
Real examples of what Patton finds in typical projects:
- SQL injection: String concatenation in database queries instead of parameterized queries
- XSS: User input rendered without sanitization in HTML templates
- Hardcoded secrets: API keys or database passwords committed in config files
- Missing auth checks: API endpoints that don’t verify the caller’s identity
- Insecure defaults: Debug mode enabled, CORS set to wildcard, verbose error messages in production
- Dependency issues: Known CVEs in installed packages
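The first item is worth seeing concretely. This self-contained Python/sqlite3 snippet shows the concatenation pattern a reviewer flags, and the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes always-true and every row leaks.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
leaked = conn.execute(vulnerable).fetchall()

# Fixed: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
```

Here `leaked` contains the admin row while `safe` is empty, because no user is literally named `alice' OR '1'='1`.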
When to Run It
- Before every production deployment
- After adding a new API endpoint or authentication change
- When onboarding to a new codebase (combine with Sherlock for a full overview)
- Periodically on long-running projects — new vulnerabilities are discovered constantly
Taking It Further
- Branch from Patton’s report and ask Cogitae to fix the findings one by one
- Create a template with Patton’s tools pre-enabled and a security-focused instruction prompt — run it as a one-click review
- Set up a timer agent that runs Patton weekly against your main project and notifies you of new findings
Demo
Watch Patton catch a real vulnerability — the March 2026 LiteLLM exploit — in action:
Run a Literature Review With Academic Search
You need to understand what the research says about a topic — not a blog post summary, but an actual review of the primary literature. Cogitae’s academic search and Newton agent can do this in minutes instead of days.
What You’ll Do
Search the academic literature on a topic, evaluate the quality of the sources, and produce a structured review with citations — the kind of thing that normally requires a university library and a weekend.
Steps
1. Start a new conversation
Press ⌘N and select a strong reasoning model (Claude Sonnet/Opus or GPT-4o).
2. Enable tools
Enable: newton, academic_search, web, and memory. Newton uses academic search to find papers and web to check sources and access full texts when available.
3. Define the scope
Be specific about what you want reviewed and how deep to go:
“Use Newton to conduct a literature review on the effectiveness of spaced repetition for long-term knowledge retention. Focus on peer-reviewed studies from the last 10 years. Evaluate the methodology of each study. I want to know: what’s well-established, what’s debated, and where the research gaps are.”
Or for a technical topic:
“Review the academic literature on CRDT performance in collaborative editing systems. Focus on operational benchmarks and scalability results, not just theoretical papers. Compare the major CRDT families (counter, register, sequence) for my use case: a real-time collaborative text editor.”
4. Let Newton research
Newton works differently from Aristotle. Where Aristotle tests hypotheses, Newton synthesizes knowledge:
- Searches multiple academic databases for relevant papers
- Evaluates each source for quality, methodology, and relevance
- Identifies the key findings and how they relate to each other
- Spots contradictions and debates in the literature
- Hands the synthesis to Socrates for critical review
- Produces a structured overview of the field
5. Read the review
Newton returns a structured literature review:
- Key findings organized by theme, with citations
- Source quality assessment — which studies are strongest and why
- Points of consensus — what the field agrees on
- Active debates — where researchers disagree and why
- Research gaps — important questions that haven’t been adequately studied
- Recommended reading — the 3–5 most important papers to read in full
6. Save to memory
Store the review findings for future reference:
“Save the key conclusions from this review to memory, especially the consensus findings and the strongest citations.”
When to Use Newton vs. Aristotle
Newton is for “what does the literature say?” — knowledge synthesis and source evaluation. Use it when you want to understand a field.
Aristotle is for “what’s actually true here?” — hypothesis testing and adversarial reasoning. Use it when you have a specific question that needs stress-testing.
For a thorough deep-dive, use Newton first to survey the landscape, then branch and use Aristotle to investigate the most contested claims.
Taking It Further
- Export the review as markdown and use it as the foundation for a blog post, paper, or presentation
- Branch from individual citations to investigate a specific paper’s claims with Aristotle
- Build a reading list: ask Cogitae to bookmark the recommended papers and summarize each one
Control Your Mac From the Couch
The iPhone tutorial showed you the setup. This one is about the workflows — the real things you can do from your phone that make you wonder why you ever stood up.
Quick Setup Reminder
If you haven’t set up the iOS app yet: open Cogitae on your iPhone, tap the machine picker, and select your Mac. If both devices are on the same Wi-Fi, Bonjour handles the rest. For access outside your network, set up the relay in Preferences → Remote Access on the Mac.
Lazy Sunday Workflows
Here are actual things you can do from your phone while your Mac does the work:
Check on running processes:
“Is my backup script still running? What’s the CPU usage like?”
Find that file you downloaded:
“Search my Downloads folder for a PDF I got this week about kitchen remodeling.”
Preview tomorrow’s schedule:
“What’s on my calendar for tomorrow? Anything before 10am?”
Check your project status:
“In my web project workspace, show me which files changed since Friday.”
Manage your agents:
“Run my daily summary agent now.”
“Disable the file monitor agent until I’m back at my desk.”
Quick research:
“Search the web for the best-rated espresso grinder under $200 and summarize the top 3 options.”
Dictate notes into a conversation:
Start a new conversation from your phone and dictate thoughts, ideas, or to-dos. They’re synced back to your Mac via iCloud and waiting for you when you sit down.
From the Coffee Shop
When you’re out of the house but still want access:
“Read the error log from my project’s last test run and tell me what failed.”
“Search my notes workspace for everything about the Henderson proposal.”
“What was the last thing we discussed about the API redesign?” (Cogitae searches your conversation history)
Your Mac runs the tools, processes the files, executes the commands. Your phone is just the remote.
A Real Scenario
You’re at dinner and remember you forgot to check if the deploy went through:
- Open Cogitae on your iPhone
- Connect to your Mac
- Type: “Check if there were any errors in today’s deployment logs”
- Your Mac reads the log files and summarizes
- Response: “Deployment completed at 4:32 PM with no errors. All health checks passed.”
Back to dinner.
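Under the hood, a check like this amounts to scanning the log for failure markers before summarizing. A minimal sketch (the keywords and the path handling are illustrative):

```python
from pathlib import Path

def scan_log(path: str, keywords=("error", "failed", "traceback")):
    """Return (line number, line) pairs that look like failures."""
    hits = []
    for n, line in enumerate(Path(path).read_text().splitlines(), start=1):
        if any(k in line.lower() for k in keywords):
            hits.append((n, line.strip()))
    return hits
```

An empty result is the “no errors, all good” answer; anything else gets surfaced with line numbers for context.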
Tips
- Keep it conversational. You don’t need to use special syntax — just say what you need in plain English.
- Use existing conversations. Tap into a conversation you had at your desk to continue where you left off — the full context is there.
- Trigger agents, don’t rebuild. If you have an agent that does something useful, trigger it from your phone instead of typing the whole instruction again.
Compare Three AI Models on the Same Problem
Every model has a personality. Claude is careful. GPT is confident. Gemini is fast. But which one is best for your problem? Cogitae’s branching and multi-provider support let you run the same question through multiple models side-by-side and compare the results.
What You’ll Do
Ask a single question, branch it to three different AI providers, and compare the responses to see which model handles your specific task best.
Steps
1. Set up your providers
Make sure you have at least two or three providers configured in Preferences → AI. The more variety, the more interesting the comparison. Good combinations:
- Claude Sonnet + GPT-4o + Gemini Pro (three different “flavors” of frontier reasoning)
- Claude Opus + Claude Haiku (same family, different capability/cost tradeoff)
- A cloud model + a local Ollama model (compare quality vs. privacy)
2. Start with your first model
Open a new conversation, select your first provider (say, Claude Sonnet), and ask your question:
“Design a database schema for a multi-tenant SaaS application that supports per-tenant custom fields, audit logging, and soft deletes. Explain your design decisions.”
Or try something more subjective:
“Write a commit message for a change that refactors the authentication middleware to use JWT tokens instead of session cookies, adds rate limiting, and fixes a bug where expired sessions weren’t being cleaned up.”
3. Branch for the second model
Click the branch icon on the instruction message. In the new branch, switch the provider to your second model (say, GPT-4o). The same conversation context is preserved — only the model changes.
Submit the same question. (Tip: if your question is in the instruction message, just press submit. If it’s a user message, the branch starts fresh from the instruction.)
4. Branch again for the third model
Create another branch from the instruction, switch to your third provider (Gemini Pro), and submit.
5. Compare side-by-side
Now you have three branches, each with a response from a different model. Use the branch spinners to flip between them, or open the conversation graph (⌘]) to see all three branches visually.
Compare:
- Accuracy: Which model got the details right?
- Depth: Which one explored edge cases?
- Practicality: Which response could you actually use?
- Tone: Which one communicates the way you think?
- Length: Which one was appropriately concise vs. thorough?
The Shortcut: Compare Providers
For quick comparisons, Cogitae has a built-in Siri Shortcut called Compare Providers that sends the same question to multiple providers automatically and shows the results together. But the branching approach gives you more control — you can adjust the question per branch, follow up differently, and keep the full conversation tree for reference.
When This Is Useful
Choosing a default model. Run your most common type of task through several models. The one that consistently gives the best results for your workflow is your default.
High-stakes questions. When accuracy matters — a technical decision, legal interpretation, or medical question — getting multiple perspectives reduces the risk of a single model’s blind spot.
Exploring style. Writing tasks, documentation, and creative work vary dramatically between models. Branch and compare to find the voice that matches what you need.
Cost vs. quality. Compare a frontier model (Claude Opus, GPT-4o) against a cheaper one (Claude Haiku, GPT-4o-mini). If the cheap model’s response is good enough for your task, switch and save.
Taking It Further
- Use this technique during Aristotle investigations — branch at the instruction and run the same investigation with different models to see how reasoning quality varies
- Build a template with comparison pre-configured: instruction text, multiple provider branches ready to go
- Store your findings to memory: “Remember that for database schema design, Claude gives the most practical answers. For creative writing, GPT-4o has a better voice.”