Developer · MCP Server

Make your CRM the back end of any AI agent.

SabNode exposes a Model Context Protocol server with typed tools, resources and prompts that any MCP-aware client — Anthropic Claude, OpenAI GPT, Google Gemini, custom agents — can reach. Search contacts, draft replies, run reports and trigger flows from the LLM directly, with scoped tokens and audit logging on every call.

  • Native MCP server over JSON-RPC
  • Tools, resources and prompts exposed
  • Works with Claude, GPT, Gemini, custom
  • Scoped tokens with full audit trail
The problem

LLMs are smart, but blind to your operational data

You have an internal LLM assistant that can write emails and summarise meetings. It cannot answer "which contacts in Bengaluru replied to our flash-sale broadcast yesterday and have an LTV over ₹5,000?" because it has no idea your CRM exists. The supposed fix is tool calling: give the LLM a function and let it call it. But every LLM client expects a different schema (OpenAI functions, Anthropic tools, Gemini function declarations), so you end up writing three adapters for the same operation.

Model Context Protocol (MCP) is Anthropic's open standard that fixes this. Servers expose tools, resources and prompts over JSON-RPC; any MCP-aware client can discover and call them. Claude Desktop, OpenAI's MCP client, Gemini's tool surface and a growing ecosystem of agent frameworks all speak it. Publish your CRM as an MCP server once, and every agent can reach it.
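The wire format underneath is plain JSON-RPC 2.0. As a rough sketch (the `tools/list` method name is from the MCP specification; transport and endpoint details vary by client), a discovery request looks like this:

```python
import json

# Minimal JSON-RPC 2.0 envelope an MCP client sends to discover a
# server's tools. A real client sends this over the negotiated
# transport (stdio or HTTP) and parses the result for tool schemas.
def make_tools_list_request(request_id: int) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
        "params": {},
    })

request = json.loads(make_tools_list_request(1))
```

Because every MCP client emits the same envelope, this one request shape replaces the three vendor-specific function-calling adapters described above.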

SabNode ships a production MCP server out of the box. Every workspace gets a stable MCP endpoint; your LLM client authenticates with a scoped token; the server exposes ~40 tools (search_contacts, send_message, list_flows, run_report, create_broadcast and more), a curated set of resources (current dashboards, recent conversations, brand guidelines documents) and prompt templates (write_reply, summarise_conversation, build_segment). Every call is audit-logged with the originating LLM and user.

What it is

MCP Server, in depth.

SabNode's MCP server speaks the latest Model Context Protocol specification over JSON-RPC. Clients establish a session, discover available tools and resources, and invoke them with typed parameters. Tools are categorised by scope: read-only tools require contacts:read or messages:read scopes on the token; write tools require explicit write scopes. The server enforces scopes on every call, returns structured errors when access is denied, and writes an audit log entry that records the LLM model, the workspace, the user who minted the token, the tool, the arguments and the result.
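Scope enforcement can be sketched as follows. The scope strings match the ones named above, but the error code and error payload shape are illustrative assumptions, not SabNode's documented values:

```python
# Illustrative sketch of per-call scope enforcement. The -32001 code
# and the error payload layout are assumptions for illustration.
TOOL_SCOPES = {
    "search_contacts": "contacts:read",   # read-only tool
    "send_message": "messages:write",     # write tool
}

def call_tool(tool: str, token_scopes: set) -> dict:
    required = TOOL_SCOPES[tool]
    if required not in token_scopes:
        # Structured JSON-RPC error object (code/message/data),
        # returned instead of silently failing.
        return {"error": {"code": -32001,
                          "message": "insufficient scope",
                          "data": {"required_scope": required}}}
    return {"result": {"ok": True}}

# A read-only token attempting a write tool gets a structured denial.
denied = call_tool("send_message", {"contacts:read"})
```

The structured error lets the LLM explain the denial to the user instead of hallucinating a success.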

The tool catalogue covers the entire CRM. Contact tools (search_contacts, get_contact, update_contact_fields, merge_contacts), conversation tools (list_conversations, get_messages, assign_conversation, send_message, generate_reply), flow tools (list_flows, trigger_flow, get_flow_run), broadcast tools (create_broadcast, list_broadcast_runs, get_broadcast_metrics) and analytics tools (run_report, query_dashboard, export_segment). Tools return JSON-RPC results with a documented schema that mirrors the REST API, so an agent that knows the REST surface can reach the MCP surface with zero relearning.
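A tool invocation uses MCP's `tools/call` method. The argument names below are hypothetical, sketched for illustration rather than taken from the actual `search_contacts` schema:

```python
import json

# Hypothetical tools/call payload for the search_contacts tool.
# "tools/call" is the MCP method name; the argument keys are assumptions.
call_payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_contacts",
        "arguments": {"city": "Bengaluru", "min_ltv": 5000},
    },
}

# Round-trip through the wire encoding, as a client and server would.
decoded = json.loads(json.dumps(call_payload))
```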

Resources are read-only blobs of context: the current dashboard, the brand voice guidelines, the WhatsApp template library, the last 50 conversations on a queue. Clients can subscribe to a resource and get notified when it changes, so a chat with Claude that has the inbox resource attached automatically refreshes when new messages arrive. Prompts are reusable instruction templates that an end user can pick from a menu — "draft a reply in our brand voice", "build a segment that matches this description" — without writing the prompt themselves.
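The subscription flow can be sketched client-side. The notification method name follows the MCP specification (`notifications/resources/updated`); the `sabnode://` resource URI is a hypothetical example:

```python
# Sketch of a client-side resource cache that marks a subscribed
# resource stale when the server pushes an update notification,
# prompting a re-read instead of a manual re-fetch.
class ResourceCache:
    def __init__(self):
        self.store = {}   # uri -> last-read contents
        self.stale = set()

    def subscribe(self, uri: str, contents: str):
        self.store[uri] = contents

    def on_notification(self, message: dict):
        if message.get("method") == "notifications/resources/updated":
            self.stale.add(message["params"]["uri"])

cache = ResourceCache()
cache.subscribe("sabnode://inbox/recent", "<last 50 conversations>")
cache.on_notification({"method": "notifications/resources/updated",
                       "params": {"uri": "sabnode://inbox/recent"}})
```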

For agents that need to act autonomously, the MCP server pairs with the rest of SabNode's primitives. An agent can search a segment, fire a broadcast, listen to webhooks for response events, and decide what to do next — all through MCP. Combined with AI Studio and Triggers, this is the closest thing on the market to a fully agentic CRM.

Capabilities

Everything you get with MCP Server.

7 capabilities
01

JSON-RPC over MCP

Standards-compliant MCP server over JSON-RPC. Compatible with Claude Desktop, Anthropic API's MCP support, OpenAI's MCP client, Google Gemini agents and the growing ecosystem of agent frameworks.

02

40+ typed tools

Tools for contacts, conversations, flows, broadcasts, analytics and files. Every tool has a typed parameter schema and a structured response schema documented in the MCP discovery handshake.

03

Curated resources

Resources expose the current inbox, brand guidelines, template library, dashboard data and SabFiles documents. Subscriptions let the LLM see new state without manual re-fetches.

04

Reusable prompts

Prompt templates — Draft a Reply, Summarise This Conversation, Build a Segment, Explain a Flow — let end users pick canned interactions without writing prompts themselves.

05

Scoped tokens

MCP sessions authenticate with the same scoped API keys used by the REST API. Tools enforce scope at call time. A read-only token cannot send a broadcast, ever.

06

Full audit log

Every tool call is logged with timestamp, LLM model, user, tool, parameters and result. The audit log is exportable to S3 and available through the REST API for compliance review.

07

Rate limits and quotas

Per-workspace and per-token rate limits prevent runaway agents. Daily quotas on write tools (send_message, trigger_flow, create_broadcast) protect your sending reputation from a stuck loop.

Use cases

Built for the way teams actually work.

SaaS
Case 01

Claude as inbox assistant

A support team connects Claude Desktop to their SabNode MCP endpoint. Agents ask Claude to draft replies using the brand voice resource, and Claude pulls the recent conversation history and customer record automatically through MCP tools.

D2C
Case 02

GPT-powered segmentation

A growth manager asks GPT "build me a segment of high LTV repeat buyers in the south who have not bought in 60 days". GPT calls the search_contacts and build_segment MCP tools and returns a saved segment ready for broadcast.

E-commerce
Case 03

Autonomous agent escalation

A custom LangGraph agent watches webhooks. When a high-AOV customer expresses frustration, the agent uses MCP tools to summarise the conversation, draft a 10% goodwill coupon and assign the conversation to a senior agent — without human intervention.

Logistics
Case 04

Gemini-powered reporting

A logistics team chats with Gemini to ask "what is the average reply time on our top-5 priority lanes this week?". Gemini calls the run_report MCP tool, formats the result and offers to schedule the same query as a weekly Slack digest.

Financial Services
Case 05

Compliance audit copilot

A compliance officer connects an internal LLM to MCP and asks for all messages containing personal financial data sent in the last 30 days. The LLM calls list_conversations and get_messages, summarises and flags anomalies for review.

How it works

From token to first tool call in minutes.

MCP Server is included on every SabNode workspace. No separate billing, no extra setup — flip it on from your workspace settings.

01

Mint MCP token

Generate a scoped MCP token in Settings → Developer → MCP. Pick scopes appropriate to the agent (read-only, draft-only, full-write).

02

Configure the client

Paste the SabNode MCP endpoint and token into your client (Claude Desktop, a custom agent, an OpenAI integration). Discovery runs automatically.

03

Discover tools & resources

The LLM lists available tools, resources and prompts. Tool schemas are typed; resources can be subscribed to for live updates.

04

Invoke from chat

Ask the LLM a question that requires CRM data. It picks the right tool, calls it, and reasons over the result with the rest of your context.

05

Audit and govern

Review the audit log for every call. Tighten scopes, set quotas and rotate tokens as needed. Your compliance team gets a clean trail.
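The five steps above can be walked end to end against a stubbed server. The scope names, token handling and audit fields below are assumptions for illustration, not SabNode's documented behaviour:

```python
import datetime

# End-to-end sketch: mint token -> discover -> invoke -> audit,
# against a stub standing in for the SabNode MCP server.
AUDIT_LOG = []

def stub_server(request: dict, scopes: set) -> dict:
    if request["method"] == "tools/list":
        # Discovery: only tools the token's scopes permit are listed.
        return {"result": {"tools": [{"name": "search_contacts"}]}}
    if request["method"] == "tools/call":
        # Every call is written to the audit log before returning.
        AUDIT_LOG.append({
            "tool": request["params"]["name"],
            "arguments": request["params"]["arguments"],
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return {"result": {"contacts": []}}
    return {"error": {"code": -32601, "message": "method not found"}}

# Steps 1-2: a minted read-only token, stubbed here as its scope set.
scopes = {"contacts:read"}
# Step 3: discovery.
tools = stub_server({"method": "tools/list"}, scopes)["result"]["tools"]
# Step 4: invocation from chat.
result = stub_server({"method": "tools/call",
                      "params": {"name": "search_contacts",
                                 "arguments": {"city": "Bengaluru"}}},
                     scopes)
# Step 5: the audit trail records tool, arguments and timestamp.
```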

Plays well with

Works with the tools you already ship on.

Anthropic Claude · OpenAI GPT · Google Gemini · Claude Desktop · LangGraph · LlamaIndex · Vercel AI SDK · Cursor
Frequently asked

Questions about MCP Server.

Can't find what you're looking for? Talk to our team.

What is Model Context Protocol?
MCP is an open protocol introduced by Anthropic that standardises how LLMs talk to external systems. Servers expose tools, resources and prompts over JSON-RPC. Any MCP-aware client — Claude, OpenAI, Gemini, custom agent frameworks — can discover and use them without the integrator writing vendor-specific adapters.
Is the MCP server stable in production?
Yes. SabNode's MCP server is built on the same infrastructure as the REST API, with the same uptime SLA. Tools call into the same internal services that power the UI, so behaviour is consistent. We pin against the MCP specification version your client negotiates, so a spec update never breaks an existing agent without notice.
Can I expose only a subset of tools to a specific LLM?
Yes. Scopes on the MCP token define which tools are visible during discovery. A read-only token sees only read tools, and an even narrower custom scope can restrict the agent to, say, contacts-only access. Workspace admins can define reusable scope bundles and assign them per integration.
How does this differ from giving the LLM the REST API?
You could give an LLM the REST API via tool calling, but you would write the adapter per LLM (OpenAI functions, Anthropic tools, Gemini function declarations). MCP standardises the adapter. The same server works across every LLM client with no extra code, and resource subscriptions and prompts have no native equivalent in raw REST tool calling.
Does the LLM have access to other tenants?
No. Each MCP token is workspace-scoped. Tool calls hit the same multi-tenant guards as the REST API. A token minted in workspace A can never read or write to workspace B, and tenant isolation is enforced at the database query layer regardless of what the LLM tries to pass as an argument.
Can I run my own MCP server alongside SabNode's?
Yes. MCP is open; you can run additional servers for your own internal systems and let the LLM use both. A common pattern is SabNode (CRM) plus an internal Postgres MCP server plus an analytics MCP server, all visible to one agent.
Is the MCP server safe for autonomous agents?
It is safer than raw API access. Daily quotas on write tools prevent a stuck loop from blasting your customer base. The audit log captures every call. Scoped tokens let you start in read-only mode and gradually grant write scopes as you build confidence in the agent's behaviour. We still recommend a human review loop for any first-time production agent.

Ship MCP Server into production this week.

No credit card. No sales call required. Spin up a workspace, plug in a number, and your team is live in under an hour.