Connect Your Stack in 30 Minutes: LLMs, Email, and BYOK
A single guided path from zero integrations to a fully connected workspace. This guide explains what each integration unlocks, why we use a bring-your-own-keys model, and exactly how to set everything up — with context on what to prioritize first.
Why Bring Your Own Keys?
Most AI tools bundle API costs into their pricing and mark them up 2–5x. You pay $50/month but have no idea how much of that goes to OpenAI, how much to email sending, and how much is margin. When costs are opaque, you can't optimize them.
NotSolo takes a different approach. Your Bring Your Own Keys subscription covers the orchestration layer — the agent coordination, memory, planning, and execution infrastructure. The API calls themselves go through your own accounts, at your own rates, with your own usage dashboards.
This matters for three reasons:
- Cost transparency: You can see exactly what each agent costs per week by checking your OpenAI/Anthropic billing. There's no hidden markup.
- Model choice: Want to use Claude for content drafting but GPT-4o for classification? You can. Want to try DeepSeek for cost-sensitive workflows? Go ahead. You're not locked into whatever model the vendor chose.
- No vendor lock-in: Your API keys, your accounts, your data. If you ever leave NotSolo, nothing changes about your LLM or email setup.
What the subscription covers: Agent orchestration (heartbeat scheduling, task coordination, skill execution), persistent memory (your Kanban board, cycle history, learning data), cross-agent communication (Squad Chat, handoffs), and the weekly reporting pipeline. These are the hard parts to build yourself — the BYOK model means you get them without paying a tax on every API call.
Before You Start
You don't need to connect everything at once. NotSolo is designed to work incrementally — start with the essentials and add integrations as you activate more agent capabilities. Here's the priority order:
- 🔴 Required: At least one LLM provider (OpenAI, Anthropic, DeepSeek, or Google)
- 🟡 Recommended: Email provider (Resend) — needed for lifecycle emails
- 🟢 Optional: Serper.dev + Google Search Console (SEO tracking), Apify (social scraping), Stripe (revenue metrics), Cursor (code generation)
All keys are entered in one place: Dashboard → Settings → API Keys. Each key is stored encrypted using your database's vault system and is never logged, never displayed after entry, and never accessible to other users in your company. See Security & Privacy for the full data handling policy.
Step 1: Connect an LLM Provider (5 minutes)
Every agent needs a language model to reason, draft, classify, and analyze. Without an LLM key, agents can't execute any of their skills — this is the one truly required integration.
Which provider should you choose?
If you're not sure, start with OpenAI. GPT-4o offers the best balance of quality, speed, and cost for the types of tasks agents perform (classification, drafting, analysis). Here's a comparison to help you decide:
OpenAI (GPT-4o, GPT-4o-mini)
Best all-round option. GPT-4o performs well on everything from Reddit reply drafting to metric analysis. GPT-4o-mini is significantly cheaper and works for simpler classification tasks. Most founders start here.
Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku)
Excellent for long-form content drafting and nuanced analysis. If Quill (content agent) is a priority for you — e.g., you're running a content-first growth strategy — Claude often produces more natural prose. Haiku is fast and cheap for classification.
DeepSeek
The budget option. Significantly cheaper per token than OpenAI or Anthropic, with surprisingly good quality for structured tasks like classification and data extraction. A good choice if you're cost-sensitive and running agents at high heartbeat frequencies.
Google (Gemini)
Strong for multimodal tasks and very large context windows. Less commonly used as a primary model, but useful as a secondary option for specific skills.
How to set it up
- Go to your LLM provider's dashboard and create an API key. For OpenAI: platform.openai.com → API Keys → Create new key.
- Copy the key (you'll only see it once).
- In NotSolo, go to Dashboard → Settings → API Keys.
- Paste the key into the corresponding provider field and save.
- The integration status will update to "Connected" — agents can now use this model.
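If you want to sanity-check a key before pasting it in, you can list the models it has access to. This is a generic check against OpenAI's public GET /v1/models endpoint, not a NotSolo feature; it uses only the Python standard library:

```python
# Sanity-check an OpenAI key by listing the models it can access.
# A bad or revoked key raises an HTTPError with status 401.
import json
import urllib.request

def models_request(api_key: str) -> urllib.request.Request:
    """Build the authenticated request (separated out for easy inspection)."""
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def check_key(api_key: str) -> list[str]:
    """Return the model IDs visible to this key."""
    with urllib.request.urlopen(models_request(api_key)) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

Anthropic, DeepSeek, and Google expose similar list endpoints with their own auth headers; consult each provider's API reference for the details.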
You can connect multiple providers simultaneously. This is useful if you want different agents to use different models — for example, using a cheaper model for Scout's classification tasks and a more capable model for Quill's content drafting. See LLM integration docs for model-specific configuration.
Step 2: Connect Email (5 minutes)
Pulse (customer success agent) uses email to send lifecycle messages — onboarding nudges, re-engagement emails, feature announcements. Without an email provider, Pulse can still detect user signals and create Kanban tasks, but it can't send the emails itself.
Why Resend?
We support Resend as the email provider because it's developer-friendly, has a generous free tier (100 emails/day), and handles domain verification cleanly. It's also the provider most solo founders are already using or can set up fastest.
Setup steps
- Create a Resend account at resend.com (free tier is fine to start).
- Verify your sending domain. This is the domain your emails will come from (e.g., yourproduct.com). Resend walks you through adding DNS records; propagation usually finishes within a few minutes, though some DNS providers take longer.
- Generate an API key in Resend's dashboard.
- Paste the key into Dashboard → Settings → API Keys → Resend.
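For orientation, domain verification typically means adding a small set of DNS records at your registrar. The fragment below is illustrative only, with placeholder values; Resend's dashboard shows the exact names and values for your account:

```text
; Illustrative only -- copy the real names and values from Resend's dashboard.
; A verified sending domain typically has:
;   SPF  (TXT)  authorizes Resend's servers to send for your domain
;   DKIM (TXT)  public key used to sign outgoing mail
;   MX          routes bounce/feedback mail back to Resend
send.yourproduct.com.               IN TXT  "v=spf1 include:..."
resend._domainkey.yourproduct.com.  IN TXT  "p=..."
send.yourproduct.com.               IN MX   10 ...
```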
Once connected, Pulse will draft emails as Kanban tasks in the Review queue. You approve before anything is sent — this is the default review-first workflow. Each email shows you the recipient, subject line, and full body text before you sign off.
Domain verification is important. Without it, emails will either fail to send or land in spam. Make sure the DNS records are fully propagated before testing. You can check this in Resend's domain settings — it'll show a green checkmark when verified. See Email integration docs for troubleshooting.
Step 3: Optional Integrations (10–20 minutes)
These integrations unlock additional agent capabilities. You don't need all of them — connect the ones that match your current growth strategy.
SEO Tracking: Serper.dev + Google Search Console
If you're investing in content or want to track organic search performance, connect both of these. Together, they give your agents a complete picture of your SEO landscape:
- Serper.dev provides SERP (search engine results page) data — where you rank for tracked keywords, what the AI overview says, who else ranks for the same terms. Quill (content agent) uses this to identify keyword opportunities and track whether published content is climbing.
- Google Search Console provides your actual click and impression data — how many people saw your pages in search, how many clicked, and your average position. Atlas uses this in weekly reports to show content performance trends.
Setup: Add your Serper.dev API key in Settings → API Keys. For Google Search Console, use the OAuth flow in Settings → API Keys → GSC Connect. Both are detailed in the SEO integration docs.
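To make the Serper side concrete, here is a sketch of the kind of SERP lookup an agent performs. The endpoint and X-API-KEY header come from Serper.dev's public API; the `post` parameter is an injected HTTP callable (an assumption of this sketch) so the logic is testable without network access:

```python
# Sketch of a rank check against Serper.dev's search endpoint.
# `post` is any callable(url, headers=..., data=...) returning parsed JSON.
import json
from typing import Callable

SERPER_URL = "https://google.serper.dev/search"

def serp_positions(api_key: str, keyword: str, domain: str,
                   post: Callable) -> list[int]:
    """Return the 1-based organic ranks at which `domain` appears for `keyword`."""
    results = post(
        SERPER_URL,
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        data=json.dumps({"q": keyword}),
    )
    return [hit["position"] for hit in results.get("organic", [])
            if domain in hit.get("link", "")]
```

An empty list means the tracked domain didn't appear in the organic results for that keyword.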
Web Scraping: Apify
Scout (outreach agent) can scan Facebook groups for relevant posts and leads using Apify's scraping infrastructure. This is particularly useful if your ICP is active in Facebook communities — which is common for B2B tools targeting freelancers, agencies, or niche industries.
Configure your Facebook group URLs in Dashboard → Settings → ICP & Platforms and add your Apify API key in the API Keys tab. Scout will include these groups in its heartbeat scans. See Apify integration docs.
Revenue Data: Stripe
If you're already processing payments through Stripe, connecting it gives Atlas access to your revenue metrics — MRR, churn rate, new subscriptions, upgrades, and downgrades. This data appears in weekly reports and helps Atlas make more informed strategic recommendations.
Without Stripe, Atlas can still generate useful reports based on outreach, content, and user signal data — but adding revenue context makes the "what's working" analysis significantly more actionable. See Stripe integration docs.
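To illustrate the revenue math this unlocks, here is a simplified sketch of an MRR calculation. The data shapes are our own simplification, not Stripe's actual API objects; the key normalization step is converting annual plans to a monthly figure:

```python
# Illustrative MRR computation over simplified subscription records.
# These dicts are NOT Stripe API objects -- just a minimal sketch.
def mrr_cents(subscriptions: list[dict]) -> int:
    """Sum active recurring revenue in cents, normalizing yearly plans to monthly."""
    total = 0
    for sub in subscriptions:
        if sub["status"] != "active":
            continue  # canceled/past-due plans don't count toward MRR
        amount = sub["amount_cents"]
        total += amount // 12 if sub["interval"] == "year" else amount
    return total

subs = [
    {"status": "active",   "amount_cents": 4900,   "interval": "month"},
    {"status": "active",   "amount_cents": 120000, "interval": "year"},
    {"status": "canceled", "amount_cents": 4900,   "interval": "month"},
]
print(mrr_cents(subs))  # → 14900, i.e. $149/month
```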
Code Generation: Cursor Cloud Agents
Forge (product agent) can spawn Cursor Cloud agents to generate pull requests for your codebase. This is the most advanced integration — it lets Forge turn product specs into actual code changes. If you're using Cursor as your IDE and have a connected GitHub repo, this closes the loop from "user requested a feature" to "PR ready for review."
This is optional and best added after you're comfortable with the core workflow. See Cursor integration docs.
After Setup: Verify Everything Works
Once your keys are entered, verify the connections are active:
- Check the Settings page: Each integration shows a "Connected" or "Not Connected" status. Make sure the ones you configured show as connected.
- Enable agent heartbeats: Go to Dashboard → Agents and toggle on the agents you want to activate. Each agent has its own heartbeat interval — the default is usually fine for getting started.
- Watch the first heartbeat: After the first interval passes, check Dashboard → Agents for heartbeat logs. You should see entries showing which skills ran and whether they completed successfully.
- Check Squad Chat: Agents post summaries after their heartbeats. If everything is connected, you'll see messages from Scout, Forge, and others describing what they found.
Next step: Now that your stack is connected, run your first weekly cycle — the checklist walks you through Monday-to-Friday using all the tools you just set up.
Related Guides & Docs
- Bring Your Own Keys — The full BYOK philosophy and what the subscription covers
- LLMs (OpenAI, Anthropic) — Model-specific configuration and tips
- Email Providers (Resend) — Domain verification and troubleshooting
- Security & Privacy — How your API keys are stored and protected
- Your First Week — The broader onboarding narrative