Security

Agents are easy to connect. Harder to secure.

Ekechi Nwokah·April 13, 2026·12 min read

When an agent queries a CRM, a data warehouse, and a ticketing system, it crosses three permission models. Here's how managed agent services, API gateways, and agentic search handle that — and where they diverge as the system grows.

When an agent needs to answer a question that spans a CRM, a data warehouse, and a ticketing system, it is querying three systems with three different permission models. A sales rep who can see only EU accounts. Contract values restricted to finance. Support ticket visibility rules that vary by customer tier. None of these systems know about each other's permissions.

The agent combines results from all three. With gateway approaches, aligning those permission models is your problem — it lives in hooks, consumer configurations, and pipelines that must be kept consistent across systems that were never designed to talk to each other.

The two patterns most teams reach for are a managed agent service (a cloud platform that orchestrates agents, enforces access policies, and provides request hooks for data filtering) or a traditional API gateway repurposed for MCP (route access controls, per-consumer credentials, request and response transformations).

A third pattern — agentic search — approaches the coordination problem differently. The system can connect to data sources using a single set of credentials and apply a centralized access policy before the query is executed on data in place or on ingested data. There is no Salesforce sharing rule to replicate, no Zendesk visibility model to mirror. You define what a user can see in one place, and that definition is what governs every query.
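A sketch of what that centralized definition can look like. The names here (`Policy`, `scope_query`, the filter shape) are illustrative, not any real product's API; the point is that one policy object scopes every query before it executes:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_regions: set
    allowed_account_ids: set

# One definition per user, owned in one place.
POLICIES = {
    "analyst_eu": Policy({"EU"}, {"acct_001", "acct_002"}),
}

def scope_query(user: str, query_filter: dict) -> dict:
    """Merge the user's policy into the filter before any query runs."""
    policy = POLICIES[user]
    return {
        **query_filter,
        "region": {"$in": sorted(policy.allowed_regions)},
        "account_id": {"$in": sorted(policy.allowed_account_ids)},
    }

scoped = scope_query("analyst_eu", {"query": "data residency concerns"})
```

Because every source is queried through `scope_query`, there is no per-source replica of the policy to drift out of date.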

Below is what each approach looks like for a specific multi-source query, and where they diverge as the system grows.

The example

Show EU enterprise customers with contracts over €250k where support tickets mention concerns about data staying in Europe in the last 90 days.

Answering it requires combining data from three systems. None of those systems share a permission model. They were not designed to. A typical organization has:

  • Sales reps who can see only accounts in their territory
  • Finance teams who control access to contract values — often more restrictive than CRM access
  • Support ticket visibility rules that vary: some orgs restrict internal notes, some scope tickets by account team, some have enterprise accounts where tickets are only visible to dedicated account managers
  • Enterprise customers who have contractual or regulatory requirements about who at the vendor can view their data

An agent answering this question calls all three systems and combines the results. Below is how each architecture handles that — and where it breaks down.

Architecture 1: Managed agent service

A managed agent service is a cloud-hosted platform that orchestrates LLM agents. You define an agent, register tools it can call, and write access policies that control which users or agent identities can invoke which tools. You also get request hooks — sometimes called interceptors — which are functions that run on each tool call and can inspect or rewrite both the request going out and the response coming back.

Access policies answer whether this user can call this tool at all. The request hook is where you inject the filters that scope what the tool returns.

Tool call 1: search_tickets

"Support tickets mention concerns about data staying in Europe" is not a column value you can filter with SQL. A customer might write "we need GDPR-compliant hosting," "keep data in Frankfurt," "EU residency requirement," or "concerned about where our data lives." All of these mean the same thing. SQL WHERE ticket_body LIKE '%data%Europe%' catches some of them and misses most.

To find tickets regardless of exact phrasing, you need a search system that matches on meaning. Tickets are extracted from the source system, split into shorter passages, embedded as vectors, and stored in a vector database alongside the original text and any metadata needed for filtering. At query time the search phrase is embedded and the database returns the passages whose vectors are closest to it.

The vector database is a separate system, built and maintained by a data or ML engineering team — not the platform team writing the gateway hooks — running on its own schedule.

Restricting what the search returns to authorized users requires filtering on the metadata stored alongside each passage. The managed service request hook injects that filter:

# Request hook — runs before the tool call reaches the vector DB
def before_tool_call(tool_call, user_context):
    if tool_call.name == "search_tickets":
        tool_call.params["filter"] = {
            "region": {"$in": user_context.allowed_regions},
            "account_id": {"$in": user_context.accessible_account_ids},
        }
    # Always return the call — other tools pass through unchanged
    return tool_call

The filter runs before the vector search. Only passages whose metadata matches the user's allowed regions and accounts are returned.
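A toy, in-memory illustration of those semantics. Real vector databases apply the metadata filter server-side before ranking; the two-dimensional vectors here stand in for embeddings, and the chunk records are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

CHUNKS = [
    {"vec": [0.9, 0.1], "text": "keep data in Frankfurt", "region": "EU", "account_id": "acct_001"},
    {"vec": [0.8, 0.2], "text": "EU residency requirement", "region": "EU", "account_id": "acct_042"},
    {"vec": [0.85, 0.1], "text": "data residency concerns", "region": "US", "account_id": "acct_099"},
]

def search(query_vec, allowed_regions, allowed_accounts, top_k=5):
    # Metadata filter first, similarity ranking second — only pre-filtered
    # passages are ever candidates.
    candidates = [c for c in CHUNKS
                  if c["region"] in allowed_regions
                  and c["account_id"] in allowed_accounts]
    ranked = sorted(candidates, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return ranked[:top_k]

hits = search([1.0, 0.0], {"EU"}, {"acct_001", "acct_042"})
```

The US chunk never enters the ranking, which is exactly the guarantee the hook's injected filter is supposed to provide.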

The problem is not the hook. The problem is the assumption the hook relies on. Every chunk in the vector database must carry correct, current metadata for that filter to work. This is harder than it sounds:

  • Metadata drift from incomplete initial indexing. If a ticket was indexed before the region field was added to the metadata schema, that passage has no region tag. The $in filter on region will either skip it or include it unconditionally. Either way the behavior is wrong. Fixing it requires a backfill: re-run the pipeline for all historical tickets, re-embed them, rewrite the metadata.
  • Schedule gaps between source system and vector database. If the pipeline runs nightly and an account's region changes during the day, that change is not reflected in the vector database until the next pipeline run. The hook cannot detect this because it does not know the pipeline schedule.
  • Business logic required for multi-account tickets. Enterprise tickets often reference multiple accounts. A ticket from an EU customer discussing an integration issue with a non-EU vendor mentions both. Which region does the chunk belong to? Getting this right requires encoding business rules into the chunking pipeline. Those rules change.
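The first failure mode is at least detectable. A sketch of a metadata audit that flags chunks a `$in` filter would mishandle; the chunk shape and field names are illustrative:

```python
def audit_chunks(chunks, required_fields=("region", "account_id")):
    """Return ids of chunks that would silently escape a metadata filter."""
    missing = []
    for chunk in chunks:
        if any(chunk.get(f) in (None, "") for f in required_fields):
            missing.append(chunk["id"])
    return missing

chunks = [
    {"id": "t1-c0", "region": "EU", "account_id": "acct_001"},
    # Indexed before the region field was added to the schema:
    {"id": "t2-c0", "region": None, "account_id": "acct_002"},
]
stale = audit_chunks(chunks)
```

Running a check like this on a schedule turns "metadata drift" from an invisible assumption into a backfill work queue.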

None of this is visible at the gateway layer. The hook runs and injects the filter. Whether the passages it filters against carry correct metadata is a separate problem, owned by a different team, on a different schedule, with no formal connection to the security review that approved the hook.

Tool call 2: query_contracts

The SQL for this call might look like:

SELECT customer_id, contract_value, region, contract_end_date
FROM contracts
WHERE contract_value > 250000
  AND updated_at >= NOW() - INTERVAL '90 days'

This query has no user-scoping. Every row in the contracts table matching the value and date conditions will be returned, regardless of which accounts the requesting user is supposed to access.

Adding scope requires either row-level security configured in the database:

-- PostgreSQL row-level security policy
ALTER TABLE contracts ENABLE ROW LEVEL SECURITY;  -- policies are inert until RLS is enabled

CREATE POLICY eu_region_access ON contracts
  USING (region = current_setting('app.user_region'));

Or query rewriting in the request hook:

def before_tool_call(tool_call, user_context):
    if tool_call.name == "query_contracts":
        # primary_region comes from the platform's user context, not from
        # the caller, so the concatenation is not directly injectable here
        scope = " AND region = '" + user_context.primary_region + "'"
        tool_call.params["sql"] = tool_call.params["sql"] + scope
    return tool_call

Row-level security in the database is more robust — it is enforced regardless of what SQL the caller sends. But it requires a database administrator to configure and maintain it, and it must be consistent with whatever the access policy and hook logic produce. Query rewriting in the hook is simpler to change but easier to bypass.
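If you do use row-level security, the policy reads a session setting, so the application or hook must bind the requesting user's region to the database session before each query runs. A sketch, using the same `app.user_region` parameter name as the policy:

```sql
-- Bind the user's region to this transaction, then query as normal.
BEGIN;
SET LOCAL app.user_region = 'EU';  -- SET LOCAL scopes it to this transaction only
SELECT customer_id, contract_value, region, contract_end_date
FROM contracts
WHERE contract_value > 250000;
COMMIT;
```

`SET LOCAL` avoids leaking one user's region into a pooled connection reused by the next request, which is the classic failure mode of session-scoped settings behind a connection pool.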

Tool call 3: query_customers

The CRM enforces its own visibility rules. Salesforce has sharing rules. HubSpot has team and user visibility settings. These rules are not typically exposed as parameters you can pass — they are enforced by the CRM internally, based on the authenticated user identity.

If the CRM tool call authenticates as a service account, the CRM returns records based on the service account's visibility, which is usually much broader than any individual user's. To restrict results, the hook must either replicate the CRM's visibility model, call the CRM's permission API, or authenticate as the requesting user:

def before_tool_call(tool_call, user_context):
    if tool_call.name == "query_customers":
        # Call CRM permission API to get user-visible account list
        visible_ids = crm_permission_api.get_accessible_accounts(user_context.user_id)
        tool_call.params["filter"]["account_id"] = {"$in": visible_ids}
    return tool_call

This adds a synchronous API call to the CRM permission system on every agent request. That call is not free. It adds latency. If the CRM's permission API is unavailable, the hook either blocks the request or falls back to unfiltered results.
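A common mitigation, sketched here with a hypothetical `fetch` callable standing in for the CRM permission API, is to cache the lookup with a short TTL and fail closed when the API is unavailable:

```python
import time

_cache = {}  # user_id -> (expires_at, account_ids)
TTL_SECONDS = 300

def accessible_accounts(user_id, fetch, now=time.time):
    """Return the user's visible account ids, cached for TTL_SECONDS."""
    entry = _cache.get(user_id)
    if entry and entry[0] > now():
        return entry[1]
    try:
        ids = fetch(user_id)  # the synchronous CRM permission API call
    except Exception:
        # Fail closed: no fresh cache and no API answer means no access,
        # never a fallback to unfiltered results.
        return set()
    _cache[user_id] = (now() + TTL_SECONDS, ids)
    return ids
```

The TTL trades latency against staleness: a revoked territory can survive in the cache for up to five minutes, which is itself a policy decision someone has to own.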

Layer 4: Application-layer merge

After all three tool calls return, application code joins the results:

results = []
for customer in customers:
    contracts = [c for c in contract_results if c["customer_id"] == customer["id"]]
    tickets = [t for t in ticket_results if t["account_id"] == customer["id"]]

    if contracts and tickets:
        results.append({
            "customer": customer,
            "contracts": contracts,
            "tickets": tickets,
        })

Each list was filtered independently, by a different system, using a different permission model, based on a different representation of the user's access rights. The join assumes they are all consistent. They may not be. No hook caught it. Each hook only saw its own tool call. No part of the system was watching for cross-tool consistency.
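A cross-tool check would have to run after the merge, against a single source of truth for the user's access. A sketch, using the field names from the merge code above and an `allowed_account_ids` set that some one system of record would have to supply:

```python
def verify_merge(results, allowed_account_ids):
    """Return the set of out-of-scope ids found in each merged row, if any."""
    violations = []
    for row in results:
        ids = {row["customer"]["id"]}
        ids |= {c["customer_id"] for c in row["contracts"]}
        ids |= {t["account_id"] for t in row["tickets"]}
        leaked = ids - allowed_account_ids
        if leaked:
            violations.append(leaked)
    return violations
```

The check is trivial to write; the hard part is that nothing in either gateway architecture produces the `allowed_account_ids` set it needs, because each tool's scope was expressed in a different system.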

Architecture 2: Traditional MCP gateway

A traditional enterprise API gateway repurposed for MCP sits in front of your MCP-enabled tools and handles routing, authentication, and per-consumer policies. The critical difference from the managed agent service: the gateway injects HTTP headers and the backend tool is responsible for reading them and applying them as filters. The gateway does not own the rewrite. That is a convention between the gateway and the tool developer, not a structural guarantee.

Tool call 1: search_tickets

The gateway adds consumer-scoped headers to the forwarded request:

# per-consumer gateway configuration
consumer: compliance_analyst_group
plugins:
  - name: request-transformer
    config:
      add:
        headers:
          - "X-User-Region: EU"
          - "X-Allowed-Account-IDs: acct_001,acct_002,acct_019"

The MCP tool server for ticket search receives those headers. It is responsible for translating them into vector database filter parameters:

# Inside the MCP tool server — the tool developer writes and owns this
@app.post("/tools/search_tickets")
def search_tickets(request: Request, body: SearchRequest):
    region = request.headers.get("X-User-Region")
    # Drop the empty string "".split(",") produces when the header is absent
    acct_ids = [a for a in request.headers.get("X-Allowed-Account-IDs", "").split(",") if a]
    results = vector_db.search(
        query=body.query,
        filter={
            "region": {"$eq": region},
            "account_id": {"$in": acct_ids},
        },
        top_k=body.top_k,
    )
    return results

If the tool developer wrote this correctly, the filter runs. If they forgot the header check, omitted the filter call, or the code has a bug that silently falls back to an unscoped search, the vector database returns results without any access control applied. The gateway confirmed the consumer was authenticated and allowed to call the route. What the tool returned is the tool developer's concern.
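The only way to catch that class of bug is to test the tool server itself. A sketch of such a check, with a plain function standing in for the HTTP endpoint and an invented chunk store; the property under test is that missing headers yield nothing, never everything:

```python
def search_tickets_handler(headers, chunks):
    """Stand-in for the tool endpoint: filter chunks by the scoping headers."""
    region = headers.get("X-User-Region")
    acct_ids = {a for a in headers.get("X-Allowed-Account-IDs", "").split(",") if a}
    return [c for c in chunks
            if c["region"] == region and c["account_id"] in acct_ids]

chunks = [{"region": "EU", "account_id": "acct_001", "text": "keep data in Frankfurt"}]

# A correctly written tool returns nothing when the gateway headers are absent.
unscoped = search_tickets_handler({}, chunks)

# And returns data only within the header-defined scope.
scoped = search_tickets_handler(
    {"X-User-Region": "EU", "X-Allowed-Account-IDs": "acct_001"}, chunks
)
```

A test like this encodes the convention between gateway and tool as an executable contract, which is the closest a header-injection architecture gets to the structural guarantee the managed service's hook provides.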

| Responsibility | Owner |
| --- | --- |
| Gateway request-transformer configuration | Platform / API gateway admin |
| MCP tool server reading and applying headers | Tool developer — per tool |
| Verifying tool server actually enforces headers | Security team — manual code audit |
| Vector database metadata correctness | Data / ML engineering |
| Chunking pipeline synchronization | Data / ML engineering |
| Backfill when metadata schema changes | Data / ML engineering |

Tool call 2: query_contracts

The gateway injects X-User-Region: EU. The contracts MCP tool must translate that into a SQL restriction:

@app.post("/tools/query_contracts")
def query_contracts(request: Request, body: ContractQuery):
    region = request.headers.get("X-User-Region")
    sql = body.sql
    if region:
        # Interpolating a header into SQL is itself a bug waiting to happen:
        # anything that can set this header can shape the query
        sql += f" AND region = '{region}'"
    return {"data": db.execute(sql)}

The response transformer can strip fields from what comes back:

plugins:
  - name: response-transformer
    config:
      remove:
        json:
          - "data[*].renewal_discount"
          - "data[*].internal_margin"

The response transformer runs after the database query completes. If the tool server did not apply the region header correctly, the database returned unscoped rows and the transformer stripped columns from all of them. It cannot retroactively remove rows that should not have been in the result set. Field stripping and row-level access control are different problems.
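The difference is easy to demonstrate. A small sketch with invented rows, where the field names mirror the transformer config above:

```python
rows = [
    {"customer_id": 1, "region": "EU", "renewal_discount": 0.10},
    {"customer_id": 2, "region": "US", "renewal_discount": 0.20},  # out of scope
]

# What the response transformer does: remove a column from every row.
stripped = [{k: v for k, v in r.items() if k != "renewal_discount"}
            for r in rows]

# What row-level scoping would have done: remove the out-of-scope row itself.
scoped = [r for r in rows if r["region"] == "EU"]
```

After stripping, the US row is still in the response, minus one column. Only the second operation keeps it out of the agent's context entirely.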

Tool call 3: query_customers

The gateway authenticates the consumer-to-gateway connection. The connection from the MCP tool server to the CRM is authenticated separately. CRM visibility rules are enforced based on authenticated user identity. To scope results to what the requesting user would see:

@app.post("/tools/query_customers")
def query_customers(request: Request, body: CustomerQuery):
    allowed_ids = set(
        request.headers.get("X-Allowed-Account-IDs", "").split(",")
    )
    crm_results = crm_client.search(body.filters)
    return [r for r in crm_results if r["account_id"] in allowed_ids]

The X-Allowed-Account-IDs value was set in the gateway consumer configuration. If the user's territory changed since then, the header still contains the old list. The CRM has the correct state. The gateway consumer config does not. There is no automated link between the two.
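Closing that gap means building the link yourself. A sketch of a periodic drift check, where the CRM account set and the gateway header value would come from clients this example does not show:

```python
def config_drift(crm_accounts: set, gateway_header_value: str) -> dict:
    """Diff the CRM's current answer against the gateway's configured header."""
    configured = {a for a in gateway_header_value.split(",") if a}
    return {
        "stale": configured - crm_accounts,    # access the user no longer has
        "missing": crm_accounts - configured,  # access the user has gained
    }

# The CRM says acct_001 and acct_019; the gateway config still lists acct_002.
drift = config_drift({"acct_001", "acct_019"}, "acct_001,acct_002,acct_019")
```

Even with this job in place, the window between runs is a window in which the gateway enforces a territory the CRM has already revoked.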

The enforcement gap

The structural difference between the two gateway architectures:

| Enforcement point | Managed agent service | Traditional API gateway |
| --- | --- | --- |
| How filter is applied | Hook rewrites outbound request parameters | Header injected; tool server applies filter |
| What happens if tool ignores filter | Hook owns the rewrite — tool receives scoped params | No data filtering — gateway cannot detect this |
| Auditing enforcement | Gateway logs show rewritten params | Requires auditing each tool server's source code |
| Row-level scope on response | Applied before query runs (in hook) | Response transformer strips columns only, not rows |

Consumer configuration at scale

Gateway consumer configuration is per-consumer, per-route. At 30 users across 10 routes, that is 300 consumer-route configuration entries to maintain. When a user changes role, every route configuration for that consumer must be updated. There is no central definition of what a "compliance analyst" can see — that definition is distributed across configurations, written by the same person at different times, and updated asynchronously when access rules change in upstream systems.