ServiceNow Builds AI Agents to Automate Customer Journeys

November 17, 2025 · 2 min read

ServiceNow is developing an intelligent multi-agent system to transform its internal sales and customer success operations. The digital workflow platform is leveraging LangSmith and LangGraph from LangChain to orchestrate the entire customer journey, from initial lead identification through post-sales adoption and expansion.

Previously, ServiceNow's agents were fragmented across different platform components with no unified coordination, which made it difficult to manage complex workflows spanning the complete customer lifecycle. The company decided to build a comprehensive system capable of handling lead qualification, deal closure, adoption tracking, renewal processes, and customer advocacy.

The architecture employs a supervisor agent for orchestration, with specialized subagents handling specific tasks. Different triggers activate appropriate agents based on customer signals and lifecycle stage, enabling intelligent workflow automation. For example, during the adoption phase, agents monitor application usage and proactively identify opportunities for increased customer value.
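This supervisor-and-subagents pattern can be sketched in plain Python. The agent names, signals, and routing table below are illustrative assumptions, not ServiceNow's actual implementation:

```python
# Minimal sketch of a supervisor dispatching customer signals to
# specialized subagents by lifecycle stage. All names are hypothetical.
def lead_agent(signal):
    return f"lead: qualifying {signal['account']}"

def adoption_agent(signal):
    return f"adoption: reviewing usage for {signal['account']}"

def renewal_agent(signal):
    return f"renewal: preparing renewal for {signal['account']}"

# Routing table: lifecycle stage -> subagent
ROUTES = {
    "lead": lead_agent,
    "adoption": adoption_agent,
    "renewal": renewal_agent,
}

def supervisor(signal):
    """Dispatch a customer signal to the subagent for its lifecycle stage."""
    agent = ROUTES.get(signal["stage"])
    if agent is None:
        raise ValueError(f"no agent for stage {signal['stage']!r}")
    return agent(signal)

result = supervisor({"account": "Acme", "stage": "adoption"})
```

In a real deployment the routing decision would itself be model-driven; a static table just makes the control flow visible.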

LangGraph provided the low-level primitives and abstractions needed for sophisticated multi-agent coordination. ServiceNow's team made extensive use of map-reduce-style graphs with the Send API and subgraph calling throughout their system. This modular approach let engineers build smaller subgraphs first, then compose larger graphs that call the original components as modules.
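The fan-out/fan-in shape that the Send API enables can be illustrated without the library as a simple map-reduce, with the "subgraph" packaged as a callable that a larger graph could invoke as a module (this is an illustrative stand-in, not LangGraph's real API):

```python
# Map-reduce sketch of the fan-out/fan-in pattern: "map" each work item
# to its own worker invocation, then "reduce" the results in one
# aggregation step. Scoring by name length is a placeholder.
def score_lead(lead):
    """Worker ("map" step): one invocation per lead."""
    return {"name": lead["name"], "score": len(lead["name"])}

def rank(results):
    """Aggregator ("reduce" step): combine all worker outputs."""
    return sorted(results, key=lambda r: r["score"], reverse=True)

def lead_subgraph(leads):
    """A smaller 'subgraph' that a larger graph can call as a module."""
    return rank([score_lead(lead) for lead in leads])

ranked = lead_subgraph([{"name": "Acme"}, {"name": "Initech"}, {"name": "Hooli"}])
```

In LangGraph proper, each `score_lead` call would be a separate node invocation dispatched in parallel, and `lead_subgraph` would be a compiled graph invoked from a parent graph.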

Human-in-the-loop capabilities proved particularly valuable during development. Engineers can pause execution for testing, approve or rewind agent actions, and restart specific steps with different inputs without waiting for complete re-runs. This significantly reduced development friction, which matters given the latency of model responses during testing.
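The checkpoint-and-rewind idea behind this workflow can be sketched as follows. LangGraph provides it through its checkpointer and interrupt mechanisms; this toy runner only illustrates the principle:

```python
# Sketch of checkpoint-and-rewind: snapshot state before each step so a
# run can be restarted from any step with different inputs, instead of
# re-running the whole pipeline from scratch.
def run(steps, state, start=0):
    checkpoints = []
    for i in range(start, len(steps)):
        checkpoints.append(dict(state))  # snapshot before the step
        state = steps[i](state)
    return state, checkpoints

steps = [
    lambda s: {**s, "qualified": s["score"] > 50},
    lambda s: {**s, "action": "close" if s["qualified"] else "nurture"},
]
final, cps = run(steps, {"score": 72})

# Rewind: restart from step 1 with an edited snapshot, skipping step 0.
rewound, _ = run(steps, {**cps[1], "qualified": False}, start=1)
```

Here an engineer "approves or rewinds" by editing the saved snapshot (`cps[1]`) and resuming mid-pipeline, which is cheap compared with re-running every model call from the start.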

ServiceNow implemented a sophisticated evaluation framework in LangSmith tailored to their multi-agent system. Rather than using one-size-fits-all metrics, they define custom scorers based on each agent's specific task. The company leverages LLM-as-a-judge evaluators to assess agent responses, with different thresholds for various output types.
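A per-agent scorer with type-specific pass thresholds might look like the sketch below. The `judge` function stands in for an LLM-as-a-judge call, and the thresholds and output types are assumptions for illustration:

```python
# Sketch of custom scorers with per-output-type pass thresholds.
# judge() is a crude stand-in for an LLM-as-a-judge evaluator.
THRESHOLDS = {"summary": 0.7, "recommendation": 0.9}

def judge(output, reference):
    """Stand-in for an LLM judge: token overlap with a reference, in [0, 1]."""
    out = set(output.lower().split())
    ref = set(reference.lower().split())
    return len(out & ref) / max(len(ref), 1)

def score(agent_output, reference, output_type):
    """Score one agent output and apply its type-specific threshold."""
    s = judge(agent_output, reference)
    return {"score": s, "passed": s >= THRESHOLDS[output_type]}

result = score("usage is trending up", "usage is trending up sharply", "summary")
```

In LangSmith, the judge would be a model call with a grading prompt rather than token overlap; the key idea is that each agent's task gets its own scorer and its own bar.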

The platform is currently in testing with QA engineers evaluating agent performance. ServiceNow plans to continuously collect real user data and use LangSmith to monitor live agent performance. When production runs pass established thresholds, those prompts will automatically become part of the golden dataset for ongoing quality assurance.
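The promotion step, where passing production runs feed the golden dataset, reduces to a simple filter. The threshold value and run fields here are illustrative assumptions:

```python
# Sketch of promoting passing production runs into a golden dataset:
# any run whose evaluation score clears the bar is appended as a new
# reference example for future regression checks.
GOLDEN_THRESHOLD = 0.85  # assumed value for illustration

def promote_passing(runs, golden):
    """Append prompt/output pairs from runs that clear the threshold."""
    for run in runs:
        if run["score"] >= GOLDEN_THRESHOLD:
            golden.append({"prompt": run["prompt"], "output": run["output"]})
    return golden

runs = [
    {"prompt": "summarize usage", "output": "usage up 12%", "score": 0.91},
    {"prompt": "draft renewal note", "output": "draft pending", "score": 0.62},
]
golden = promote_passing(runs, [])
```

The effect is a dataset that grows only from traffic the evaluators already trust, so regressions surface against real, vetted examples.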

As a next step, ServiceNow will implement multi-turn evaluation, a recently launched LangSmith feature that evaluates agent performance across end-to-end user interactions. This approach uses the context of entire conversation threads rather than single exchanges, providing more comprehensive assessment of system performance.
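The difference between single-turn and multi-turn evaluation can be made concrete with a small sketch. A thread-level evaluator can check properties, such as whether the user's goal was eventually met, that no single exchange reveals (the pass criterion here is a hypothetical placeholder):

```python
# Sketch contrasting single-exchange vs. thread-level evaluation.
# The "resolved" keyword check is a toy stand-in for a real judge.
def single_turn_eval(turn):
    """Sees only one exchange; cannot judge the conversation's outcome."""
    return 1.0 if turn["assistant"] else 0.0

def multi_turn_eval(thread):
    """Sees the whole thread; can check end-to-end goal completion."""
    resolved = any("resolved" in t["assistant"].lower() for t in thread)
    return {"turns": len(thread), "goal_met": resolved}

thread = [
    {"user": "our renewal quote looks wrong", "assistant": "Let me check."},
    {"user": "any update?", "assistant": "Fixed and resolved the quote."},
]
report = multi_turn_eval(thread)
```

The first exchange alone would score as a perfectly reasonable reply, yet only the thread-level view shows whether the issue was actually closed out.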