MCP over JSON-RPC
Standards-compliant MCP server over JSON-RPC. Compatible with Claude Desktop, Anthropic API's MCP support, OpenAI's MCP client, Google Gemini agents and the growing ecosystem of agent frameworks.
SabNode exposes a Model Context Protocol server with typed tools, resources and prompts that any MCP-aware client — Anthropic Claude, OpenAI GPT, Google Gemini, custom agents — can reach. Search contacts, draft replies, run reports and trigger flows from the LLM directly, with scoped tokens and audit logging on every call.
You have an internal LLM assistant that can write emails and summarise meetings. It cannot answer "which contacts in Bengaluru replied to our flash-sale broadcast yesterday and have an LTV over ₹5,000?" because it has no idea your CRM exists. The fix is supposed to be tool calling — give the LLM a function and let it call it. But every LLM client expects a different schema (OpenAI functions, Anthropic tools, Gemini function declarations), and you end up writing three adapters for the same operation.
Model Context Protocol (MCP) is Anthropic's open standard that fixes this. Servers expose tools, resources and prompts over JSON-RPC; any MCP-aware client can discover and call them. Claude Desktop, OpenAI's MCP client, Gemini's tool surface and a growing ecosystem of agent frameworks all speak it. Publish your CRM as an MCP server once, and every agent can use it.
SabNode ships a production MCP server out of the box. Every workspace gets a stable MCP endpoint; your LLM client authenticates with a scoped token; the server exposes ~40 tools (search_contacts, send_message, list_flows, run_report, create_broadcast and more), a curated set of resources (current dashboards, recent conversations, brand guidelines documents) and prompt templates (write_reply, summarise_conversation, build_segment). Every call is audit-logged with the originating LLM and user.
SabNode's MCP server speaks the latest Model Context Protocol specification over JSON-RPC. Clients establish a session, discover available tools and resources, and invoke them with typed parameters. Tools are categorised by scope: read-only tools require contacts:read or messages:read scopes on the token; write tools require explicit write scopes. The server enforces scopes on every call, returns structured errors when access is denied, and writes an audit log entry that records the LLM model, the workspace, the user who minted the token, the tool, the arguments and the result.
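At the wire level, a tool invocation is an ordinary JSON-RPC 2.0 request using MCP's `tools/call` method. The sketch below builds one for the `search_contacts` tool from the catalogue; the argument names (`city`, `min_ltv`) are illustrative, not SabNode's documented schema.

```python
import json

def jsonrpc_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope, as MCP messages are framed."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Invoke the search_contacts tool. MCP's tools/call takes the tool name
# plus an "arguments" object matching the tool's declared input schema.
# The specific argument names here are assumptions for illustration.
call = jsonrpc_request(1, "tools/call", {
    "name": "search_contacts",
    "arguments": {"city": "Bengaluru", "min_ltv": 5000},
})

print(json.dumps(call, indent=2))
```

The server's reply arrives as a JSON-RPC response with the same `id`, carrying the tool's structured result — or a structured error if the token lacks the required scope.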
The tool catalogue covers the entire CRM. Contact tools (search_contacts, get_contact, update_contact_fields, merge_contacts), conversation tools (list_conversations, get_messages, assign_conversation, send_message, generate_reply), flow tools (list_flows, trigger_flow, get_flow_run), broadcast tools (create_broadcast, list_broadcast_runs, get_broadcast_metrics) and analytics tools (run_report, query_dashboard, export_segment). Tools return JSON-RPC results with a documented schema that mirrors the REST API, so an agent that knows the REST surface can reach the MCP surface with zero relearning.
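During discovery, `tools/list` returns each tool's name, description and a JSON Schema for its parameters. A sketch of what one entry might look like — the property names and descriptions are assumptions, but the three-field shape (`name`, `description`, `inputSchema`) is the MCP discovery format:

```python
# Illustrative tools/list entry. Field names inside inputSchema are
# hypothetical; the envelope shape follows the MCP tool-listing format.
search_contacts_tool = {
    "name": "search_contacts",
    "description": "Search CRM contacts by field filters.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "min_ltv": {"type": "number",
                        "description": "Minimum lifetime value, in workspace currency"},
            "query": {"type": "string",
                      "description": "Free-text search across name, email, phone"},
        },
        "required": [],
    },
}
```

Because the schema is machine-readable, the LLM client can validate arguments before calling and surface parameter docs to the model without any hand-written adapter.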
Resources are read-only blobs of context: the current dashboard, the brand voice guidelines, the WhatsApp template library, the last 50 conversations on a queue. Clients can subscribe to a resource and get notified when it changes, so a chat with Claude that has the inbox resource attached automatically refreshes when new messages arrive. Prompts are reusable instruction templates that an end user can pick from a menu — "draft a reply in our brand voice", "build a segment that matches this description" — without writing the prompt themselves.
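The subscription flow above uses MCP's `resources/subscribe` request and the `notifications/resources/updated` notification. The resource URI scheme below (`sabnode://…`) is hypothetical — MCP leaves URI design to the server — but the method names and the id-less notification shape come from the protocol:

```python
# Client asks to watch a resource. The URI is an illustrative placeholder.
subscribe = {
    "jsonrpc": "2.0", "id": 2,
    "method": "resources/subscribe",
    "params": {"uri": "sabnode://inbox/recent"},
}

# When the resource changes, the server pushes a JSON-RPC notification.
# Notifications carry no "id" because no response is expected.
updated = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "sabnode://inbox/recent"},
}
```

On receiving the notification, the client re-reads the resource with `resources/read`, which is how an attached inbox stays current without polling.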
For agents that need to act autonomously, the MCP server pairs with the rest of SabNode's primitives. An agent can search a segment, fire a broadcast, listen to webhooks for response events, and decide what to do next — all through MCP. Combined with AI Studio and Triggers, this is the closest thing on the market to a fully agentic CRM.
Tools for contacts, conversations, flows, broadcasts, analytics and files. Every tool has a typed parameter schema and a structured response schema documented in the MCP discovery handshake.
Resources expose the current inbox, brand guidelines, template library, dashboard data and SabFiles documents. Subscriptions let the LLM see new state without manual re-fetches.
Prompt templates — Draft a Reply, Summarise This Conversation, Build a Segment, Explain a Flow — let end users pick canned interactions without writing prompts themselves.
MCP sessions authenticate with the same scoped API keys used by the REST API. Tools enforce scope at call time. A read-only token cannot send a broadcast, ever.
Every tool call is logged with timestamp, LLM model, user, tool, parameters and result. The audit log is exportable to S3 and available through the REST API for compliance review.
Per-workspace and per-token rate limits prevent runaway agents. Daily quotas on write tools (send_message, trigger_flow, create_broadcast) protect your sending reputation from a stuck loop.
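A scope denial comes back as a structured JSON-RPC error rather than a silent failure, so the agent can explain to the user why it stopped. The error code and `data` fields below are illustrative — JSON-RPC reserves -32600 to -32603 for protocol errors and leaves application codes to the server:

```python
# Sketch of a scope-denied response. Code, message and data layout are
# assumptions; only the JSON-RPC error-object shape is standard.
denied = {
    "jsonrpc": "2.0", "id": 3,
    "error": {
        "code": -32001,
        "message": "insufficient scope: send_message requires messages:write",
        "data": {
            "required_scope": "messages:write",
            "token_scopes": ["contacts:read", "messages:read"],
        },
    },
}
```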
A support team connects Claude Desktop to their SabNode MCP endpoint. Agents ask Claude to draft replies using the brand voice resource, and Claude pulls the recent conversation history and customer record automatically through MCP tools.
A growth manager asks GPT "build me a segment of high LTV repeat buyers in the south who have not bought in 60 days". GPT calls the search_contacts and build_segment MCP tools and returns a saved segment ready for broadcast.
A custom LangGraph agent watches webhooks. When a high-AOV customer expresses frustration, the agent uses MCP tools to summarise the conversation, draft a 10% goodwill coupon and assign the conversation to a senior agent — without human intervention.
A logistics team chats with Gemini to ask "what is the average reply time on our top-5 priority lanes this week?". Gemini calls the run_report MCP tool, formats the result and offers to schedule the same query as a weekly Slack digest.
A compliance officer connects an internal LLM to MCP and asks for all messages containing personal financial data sent in the last 30 days. The LLM calls list_conversations and get_messages, summarises and flags anomalies for review.
MCP Server is included on every SabNode workspace. No separate billing, no extra setup — flip it on from your workspace settings.
Generate a scoped MCP token in Settings → Developer → MCP. Pick scopes appropriate to the agent (read-only, draft-only, full-write).
Paste the SabNode MCP endpoint and token into your client (Claude Desktop, custom agent, OpenAI integration). Discovery runs automatically.
The LLM lists available tools, resources and prompts. Tool schemas are typed; resources can be subscribed for live updates.
Ask the LLM a question that requires CRM data. It picks the right tool, calls it, and reasons over the result with the rest of your context.
Review the audit log for every call. Tighten scopes, set quotas, rotate tokens when needed. Compliance team has a clean trail.
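For the connection step, a Claude Desktop setup might look like the fragment below. The `mcpServers` block is Claude Desktop's real config format for stdio servers; the endpoint URL is a placeholder, and bridging a remote HTTP endpoint through a stdio adapter such as `mcp-remote` is one common pattern, not SabNode's documented procedure — check your workspace settings for the exact endpoint and recommended client setup.

```json
{
  "mcpServers": {
    "sabnode": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.example-sabnode-endpoint.com/workspace"],
      "env": { "SABNODE_MCP_TOKEN": "<scoped token from Settings → Developer → MCP>" }
    }
  }
}
```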
No credit card. No sales call required. Spin up a workspace, plug in a number, and your team is live in under an hour.