Now in beta

Every agent action scored
against the entire network.
Know before it acts.

Every agent action is scored against our cross-network behavioral baseline. See how your agent compares to hundreds of others, and block dangerous actions before they execute. Sansin is the independent safety layer that learns from every agent in the network.

LangChain decides which tool to call. CrewAI decides how to orchestrate. Sansin decides whether the action should happen at all.

Risk scoring against a cross-network behavioral baseline
Percentile ranking: see how your agent compares to 847+ others
Independent third-party safety auditing, not a framework plugin

Wrap your tools. Sansin scores every action against the network.

One function call. Three steps. Every agent action compared to hundreds of others.

1

Your agent

Wrap your tools with SansinGate.

Your agent wants to send_email, delete_file, or query_database. Before it executes, the call goes through SansinGate.

agent.py
from sansin import SansinGate, wrap_tools_with_gate

gate = SansinGate(api_key="sk_a1b2c3...")
tools = wrap_tools_with_gate([email_tool, db_tool], gate)
agent = create_agent(tools=tools)
# Every tool call now goes through Sansin

2

Sansin scores against the baseline

How does this action compare to every other agent in the network?

Sansin computes a risk score, then ranks it against our cross-network behavioral baseline. send_email to one person? 12th percentile, allow. send_email to 500 recipients? 97th percentile across 847 agent deployments, block.

The baseline learns from every agent in the network. Your percentile ranking gets more meaningful with every deployment. Thompson Sampling blends in your team's corrections after 50+ overrides.

Every decision is logged with percentile ranking and full reasoning. Override any decision, and the network learns from your correction.
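Conceptually, the percentile step above can be sketched with a toy baseline. Everything here (the function, the scores, the numbers) is illustrative, not Sansin's actual model:

```python
from bisect import bisect_left

def percentile_rank(score: float, baseline_scores: list[float]) -> float:
    """Rank a risk score against a cross-network baseline, 0-100."""
    baseline = sorted(baseline_scores)
    # Count how many baseline scores fall below this one.
    position = bisect_left(baseline, score)
    return 100.0 * position / len(baseline)

# Toy baseline: risk scores observed across other agent deployments.
baseline = [0.05, 0.10, 0.12, 0.20, 0.35, 0.40, 0.55, 0.70, 0.85, 0.95]

# A routine action scores low, lands in a low percentile: allow.
print(percentile_rank(0.11, baseline))  # 20.0

# An unusual bulk action scores high, lands in a high percentile: block.
print(percentile_rank(0.90, baseline))  # 90.0
```

The real baseline spans hundreds of deployments rather than ten numbers, but the ranking idea is the same: the decision depends on where an action sits relative to the network, not on the raw score alone.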

3

Safe execution

Allowed actions run. Blocked actions explain why.

Allowed actions execute normally. Blocked actions return a structured decision with risk_score, percentile ranking, comparison_group_size, and a recommendation. Your agent handles the response.

Fail-open by default so Sansin never breaks your agent. Configurable fail-closed for high-stakes environments.
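A minimal sketch of how an agent loop might branch on that structured decision, including the fail-open behavior. The field names mirror the ones mentioned above (risk_score, percentile, comparison_group_size, recommendation), but the class and handler are illustrative, not the Sansin SDK's actual types:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GateDecision:
    # Illustrative shape; the real SDK's response may differ.
    allowed: bool
    risk_score: float
    percentile: float
    comparison_group_size: int
    recommendation: str

def handle(decision: Optional[GateDecision],
           action: Callable[[], str],
           fail_open: bool = True) -> str:
    if decision is None:
        # Gate unreachable: fail-open runs the action anyway,
        # fail-closed refuses (for high-stakes environments).
        if fail_open:
            return action()
        raise RuntimeError("gate unavailable and fail-closed is configured")
    if decision.allowed:
        return action()  # allowed actions execute normally
    # Blocked: return the structured reasoning instead of executing.
    return (f"blocked: {decision.recommendation} "
            f"({decision.percentile:.0f}th percentile "
            f"of {decision.comparison_group_size} agents)")

blocked = GateDecision(False, 0.92, 97.0, 847, "require human approval")
print(handle(blocked, lambda: "sent"))
# blocked: require human approval (97th percentile of 847 agents)
```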

See how Sansin scores against the network

Click an example to see the API response with percentile ranking.

POST /v1/gate/check — 200 OK

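By way of illustration, a response body for this endpoint might carry the fields named above. The values and the exact schema here are assumptions, not the documented API:

```python
import json

# Hypothetical /v1/gate/check response body; field names follow the copy
# above, the values and overall schema are made up for illustration.
response_body = json.dumps({
    "allowed": False,
    "risk_score": 0.92,
    "percentile": 97,
    "comparison_group_size": 847,
    "recommendation": "block",
})

decision = json.loads(response_body)
print(decision["percentile"])  # 97
```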

One behavioral baseline. Every autonomous system.

AI Agents

LangChain, CrewAI, custom agents. Every tool call scored against our cross-network baseline with percentile ranking. The SDK is 4 lines of Python.

Notification Intelligence

Should you send this notification? Sansin's original use case. Same decision engine, proven in production. 37% fewer notifications, same revenue.

Coming: IoT & Robotics

Autonomous devices making real-world decisions. Same behavioral baseline, same percentile ranking. On the 2027 roadmap.

Built to say no

Privacy is non-negotiable

Every query is tenant-scoped. Every model is isolated. Agent data stays where it belongs. We designed for this from day one, not as an afterthought.

Restraint is the product

If the smartest thing to do is not take an action, we don't take it. The best decision engines know when to say no.

Every decision is explainable

You can see why an action was allowed or blocked, what the risk score was, and how confident the model was. No black boxes.

See how your agent compares to the network.


or view the docs on GitHub