AI Agents Need Trust Architecture to Negotiate

March 31, 2026 · 3 min read

In 1832, thirty-one competing banks gathered daily in a modest room on London's Lombard Street to settle transactions through the Bankers Clearing House. This system worked not through regulation but through a trust architecture built on registered identity, clear standards, and collective reciprocity where violations meant immediate expulsion. Today, as AI agents prepare to negotiate thousands of times daily across organizational boundaries, Salesforce researchers argue the same trust architecture remains unbuilt, creating a critical gap before enterprise deployment.

Salesforce AI Research has spent months stress-testing agent-to-agent interactions, revealing that current AI models were not designed for this task. Existing models are optimized for human-agent interaction, trained to be helpful and agreeable rather than to hold positions, make strategic concessions, or understand the consequences of failed negotiations. This mismatch becomes apparent in what researchers call "echoing behavior," where two accommodating agents spiral into excessive agreeableness, turning a simple return request into a twenty-minute comedy of errors.

The research identifies what Salesforce calls the "wriggling problem"—the inherent variance in AI outputs that creates different outcomes from identical inputs. Unlike the deterministic rules governing the London bankers' transactions, modern AI agents explore probability distributions, making them difficult to audit with traditional frameworks. This variance represents not just a technical challenge but a legal and ethical one, requiring new evaluation standards before transactions become consequential in domains like healthcare or finance.
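The variance problem can be illustrated with a toy sketch. The negotiation function and its parameters below are invented for illustration (this is not Salesforce's code): an agent that samples a concession from a distribution produces different outcomes for identical inputs, and pinning the random seed is one minimal prerequisite for making such behavior auditable.

```python
import random

def agent_offer(base_price, temperature, rng):
    """Toy stand-in for a negotiating agent: sample a concession
    from a distribution instead of applying a deterministic rule."""
    concession = rng.gauss(0, temperature * base_price)
    return round(base_price + concession, 2)

# Identical inputs, different outcomes: the agent explores a distribution.
offers = [agent_offer(100.0, 0.05, random.Random()) for _ in range(3)]

# Pinning the seed restores determinism, so a transaction can be replayed
# and audited after the fact.
audited = [agent_offer(100.0, 0.05, random.Random(42)) for _ in range(3)]
assert len(set(audited)) == 1
```

Seeding alone does not solve the audit problem for production LLM agents, but it shows why deterministic replay is a design goal rather than a default property of sampled outputs.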

Based on their findings, Salesforce proposes four foundational elements for agentic trust architecture. First, registered identity and reputation over time, building on their pioneering Agent Cards concept that Google has adopted. Second, boundaries rather than scripts, establishing principles like professional standards of care rather than rigid decision trees. Third, structured accountability with clear audit trails and human oversight roles. Fourth, calibrated escalation protocols that determine when agents should stop and involve human judgment.

These elements are becoming urgent as agent-to-agent ecosystems emerge in three key domains. In healthcare, patient advocate agents coordinate with insurance agents for billing and authorization, with Salesforce already training agents on thousands of synthetic scenarios with UCSF Health. In financial services, treasury agents negotiate credit facilities and foreign exchange transactions carrying fiduciary responsibilities. In supply chains, manufacturing agents coordinate with logistics providers while balancing competitive intelligence sharing.

The challenge extends beyond individual agent behavior to system-level outcomes. As Salesforce's Chief Legal Officer Sabastian Niles notes, systems of interacting agents can produce outcomes no individual system was designed to generate, requiring governance to shift from controlling individual actors to managing interaction ecosystems. This requires what researchers call "institutional imagination" to adapt human-built systems for artificial minds.

Salesforce emphasizes that organizations must define standards rather than rules, build for auditability from the start, invest in reputation infrastructure measuring performance across thousands of interactions, and partner with standards organizations early. Like the London Clearing House that emerged from shared necessity rather than regulation, the trust architecture for AI negotiation will be shaped by those engaging with the problem before transactions become consequential.

The research concludes that while technical protocols for AI negotiation are being developed, the governance frameworks, reputation infrastructure, and legal architecture remain largely uncharted territory. As AI agents prepare to enter commercial systems at scale, the trust architecture built now will determine whether their participation strengthens or degrades centuries-old systems of commercial trust.