AI Agents Need Two Types of Authorization
March 23, 2026 · 4 min read
As AI agents become more integrated into workplace tools like Slack and Notion, a critical question emerges: who should these agents authenticate as when accessing sensitive data? The answer, according to a recent development from LangSmith Fleet, is not one-size-fits-all, but rather depends on whether the agent operates on behalf of individual users or with its own fixed identity. This distinction is fundamental to ensuring privacy and security in collaborative environments, where agents might handle personal information or perform actions with potential consequences. The introduction of two agent types, Assistants and Claws, directly addresses this need, offering a structured approach to authorization that balances flexibility with control.
In traditional setups, agents were primarily thought to operate on-behalf-of users, meaning they would use the credentials of the person interacting with them. For example, an onboarding agent with access to Notion and Rippling should allow Alice to see her own information and pages, but prevent her from accessing Bob's private data. This requires identifying the user, such as through a Slack user ID, and mapping that to appropriate credentials passed to tools at runtime. However, the emergence of OpenClaw introduced a different model, where an agent created by Alice could be exposed to others through channels like email or Twitter, using authorization granted by Alice rather than the end user's credentials. This shift highlighted the limitations of a single authorization model, as using Alice's own credentials might grant excessive access to private documents.
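The on-behalf-of flow described above can be sketched as a simple lookup from a channel-specific user ID to per-user credentials. This is a minimal illustration, not the actual LangSmith Fleet API; the names `CREDENTIAL_STORE` and `resolve_credentials`, and the token values, are all hypothetical.

```python
# Hypothetical on-behalf-of credential resolution. A channel-specific
# user ID (here, a Slack user ID) is mapped to the credentials that
# the agent passes to tools at runtime.
CREDENTIAL_STORE = {
    "U123ALICE": {"notion_token": "alice-notion", "rippling_token": "alice-rippling"},
    "U456BOB": {"notion_token": "bob-notion", "rippling_token": "bob-rippling"},
}


def resolve_credentials(slack_user_id: str) -> dict:
    """Return the per-user credentials for an on-behalf-of agent.

    Raises PermissionError if no mapping exists, so the agent never
    silently falls back to another user's access.
    """
    creds = CREDENTIAL_STORE.get(slack_user_id)
    if creds is None:
        raise PermissionError(f"no credential mapping for {slack_user_id}")
    return creds
```

Failing closed on an unknown user ID is the important design choice here: Alice's request runs with Alice's tokens, and an unmapped user gets nothing rather than someone else's access.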
To implement these two authorization types, LangSmith Fleet developed Assistants and Claws, each tailored to specific use cases. Assistants are designed for scenarios where agents should use the end user's credentials, enabling personalized access in channels like Slack. This requires mapping user IDs from those channels to LangSmith IDs, a process currently supported in a subset of channels. Claws, on the other hand, operate with a fixed set of credentials, often using dedicated accounts in tools like Notion to control access independently of who interacts with them. The system also incorporates channels such as Slack, Gmail, Outlook, and Teams, along with agent sharing capabilities, to facilitate deployment across various platforms. This taxonomy ensures that authorization aligns with the agent's intended role, whether it's assisting individuals or serving a broader audience.
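The distinction between the two agent types comes down to one branch at credential-resolution time: an Assistant forwards the end user's credentials, while a Claw always uses its own fixed set. The sketch below is an assumption about how such a dispatch might look, not LangSmith Fleet's implementation; the `Agent` class and `credentials_for` helper are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Agent:
    """Illustrative agent record: 'assistant' or 'claw'."""
    name: str
    kind: str                          # "assistant" | "claw"
    fixed_creds: Optional[dict] = None  # only set for Claws


def credentials_for(agent: Agent, end_user_creds: dict) -> dict:
    """Pick the credentials a tool call should run with."""
    if agent.kind == "assistant":
        # On-behalf-of: the caller's own credentials.
        return end_user_creds
    if agent.kind == "claw":
        # Fixed identity: same credentials regardless of who asks.
        return agent.fixed_creds
    raise ValueError(f"unknown agent kind: {agent.kind}")
```

A dedicated Notion account for a Claw then just means populating `fixed_creds` with that account's token, so access is controlled by what the agent's account can see, not by who is talking to it.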
Both authorization types are illustrated through real-world agent examples deployed in LangSmith Fleet. An onboarding agent, classified as an Assistant, accesses Slack and Notion using the end user's credentials, allowing personalized interactions within Slack. An email agent, a Claw, responds to incoming emails by checking a calendar for meeting availability and attempting to respond on behalf of the creator, with actions like sending emails gated by human-in-the-loop guardrails. A product agent, also a Claw, monitors competitors and assists with product questions using its own Notion account, exposed via a custom Slack bot. These cases demonstrate how the two authorization types function in practice, with Assistants enabling user-specific access and Claws providing controlled, shared functionality with built-in safeguards.
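The human-in-the-loop guardrail on the email Claw can be pictured as a gate in front of sensitive actions: anything on a sensitive list is held until a human approves it. This is a hedged sketch of the pattern, not the guardrail system LangSmith Fleet ships; the action names and the `approve` callback are assumptions.

```python
from typing import Callable

# Hypothetical list of actions that require human sign-off.
SENSITIVE_ACTIONS = {"send_email"}


def run_action(action: dict, approve: Callable[[dict], bool]) -> str:
    """Execute an agent action, gating sensitive ones on human approval.

    `approve` stands in for the human-in-the-loop step: it receives the
    proposed action and returns True only if a reviewer signs off.
    """
    if action["type"] in SENSITIVE_ACTIONS and not approve(action):
        return "blocked"
    # Non-sensitive actions (or approved sensitive ones) proceed.
    return "executed"
```

So a calendar lookup runs straight through, while a drafted reply waits in a review queue until its creator clicks approve.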
The significance of this development lies in its direct response to user demands for both authorization models, as observed during the launch of LangSmith Fleet. By offering Assistants and Claws, the system provides a flexible framework that can adapt to diverse needs, from personal assistants to team-wide tools. This is particularly important as agents are exposed through various channels, increasing the potential for misuse or sensitive actions. The integration of human-in-the-loop guardrails for Claws further underscores the emphasis on safety, ensuring that dangerous or sensitive operations are reviewed before execution. This structured approach not only enhances security but also supports scalable agent deployment in enterprise settings.
However, the post acknowledges limitations and areas for future development. Currently, Assistants are only available in channels where user ID mapping is supported, restricting their deployment in some contexts. Additionally, the authors note that this work represents just the start of agent authorization, with plans to introduce more granular memory permissions. For instance, memory handling may differ between Assistants and Claws to prevent sensitive information about one user from being used in chats with another. While current access permissions manage this by controlling who can edit an agent's memory, future updates aim to implement user-specific memory systems. These limitations highlight the evolving nature of agent authorization and the need for ongoing refinement to address complex privacy and functionality requirements.
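One way to picture the planned user-specific memory is a store partitioned by user ID, so facts learned from one user never surface in another user's chat. The class below is purely illustrative of that partitioning idea; it is not a LangSmith Fleet feature, and the names are invented.

```python
class UserScopedMemory:
    """Hypothetical per-user memory partition for an Assistant.

    Each user's facts live under their own key, so recall for one
    user can never return another user's information.
    """

    def __init__(self) -> None:
        self._store: dict[str, list[str]] = {}

    def remember(self, user_id: str, fact: str) -> None:
        """Record a fact in the requesting user's partition only."""
        self._store.setdefault(user_id, []).append(fact)

    def recall(self, user_id: str) -> list[str]:
        """Return only the facts stored for this user."""
        return list(self._store.get(user_id, []))
```

A Claw, by contrast, might keep a single shared partition, which is exactly why the post suggests memory handling may differ between the two agent types.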