
Inside Citi’s AI Adoption: From Pilot to 175K People

  • Writer: Ram Srinivasan
  • Jan 23
  • 4 min read

Updated: Feb 19


The number one question I get from Fortune 500 executives is: "Ram, how do I get my people to actually use AI when they're scared it'll replace them?"


Citi just answered it at massive scale.


The substrate: Banks are the hardest place to deploy AI (legacy systems, heavy regulation, zero risk tolerance). Citi just proved the unlock isn't better technology; it's better human dynamics.

1. Banking AI is hard

Before we get to what Citi did, understand why this matters.


Banks run on a fragile mix of legacy mainframes and modern stacks, under intense regulatory scrutiny and zero tolerance for compliance mistakes. Citi has to integrate AI into systems that move trillions, satisfy global regulators, and protect highly confidential financial data.


In that environment, a single bad workflow or hallucinated output is not just embarrassing; it can be a regulatory or risk event.


If you can make AI work in a bank, you can make it work anywhere.


2. How Citi Scaled to 175K People

Across 2025, Citi moved from experimentation to institution‑level deployment:

  • It launched an AI agent pilot with around 5,000 employees, using an internal platform called Citi Stylus Workspaces, to embed “agentic” tools into real workflows instead of demos on the side.

  • Citi then mandated AI and prompt training for roughly 175,000 employees, framing it as reskilling and responsible use.

  • By late 2025, leadership reported that nearly 180,000 employees across 83 countries were using the bank’s proprietary AI tools, and that AI was freeing up about 100,000 developer hours per week through capabilities like code review and automation.

  • Alongside this, Citi built an internal network of roughly 4,000 AI-focused volunteers/advocates to help business teams adopt AI in their day‑to‑day work.


This was not a “turn on a chatbot and hope” effort. It was a designed system.


3. The Real Problem Citi Solved

Most companies treat AI adoption as a tooling problem. Citi treated it as a behavior and risk‑management problem, across three dimensions:

  • Integration: Can AI touch real systems and data with guardrails that satisfy risk, legal, and regulators? Citi’s agents were embedded into actual workflows and governed like any other critical system change.

  • Competence: Can you raise the floor of AI skills fast enough that prompt quality doesn’t create operational or compliance risk? Mandatory training for the full workforce tackled this directly.

  • Trust: Can you convince employees that AI is there to augment them, not quietly map their jobs for replacement? Citi’s framing around productivity, reskilling, and support from peers (not just top‑down mandates) was central.


Ignore any of these three, and adoption stalls in the pilot phase. Address them systematically, and usage compounds.


4. The Adoption Architect's Playbook

What Citi built is less a “tools rollout” and more an adoption architecture. Translated into a repeatable playbook, it looks like this:

  • Prove integration in real workflows: Run a pilot that plugs AI into actual high‑value processes with full risk controls, not just a sandbox demo. The goal is to convince risk, compliance, and line leaders that AI can live inside the real system safely.

  • Standardize inputs through training: Use short, mandatory training to reduce variance in how people prompt and interpret AI outputs. Training is not just about quality control; it is the antidote to fear. When people feel competent, they are far more likely to experiment and far less likely to see AI as a threat.

  • Distribute trust through peers: Build a volunteer or champion network embedded in business units. People don’t change behavior because of a global email; they change because someone they trust, in their context, shows them a useful, safe use case and helps them over the first few hurdles.

  • Create live feedback loops: Let that peer network and real usage data shape how the tools evolve. When employees see their feedback turning into product changes, the system becomes self‑reinforcing instead of feeling like a top‑down imposition.


5. The AI Adoption Flywheel

Most leaders will ship AI tools and hope for adoption. A smaller group will become adoption architects: they will design the integration, training, and trust systems that make new behavior almost inevitable.


Citi’s experience shows that in a highly regulated, legacy‑heavy environment, the constraint is no longer just the technology. The constraint is whether leadership is willing to:

  • Integrate AI into real workflows with real guardrails.

  • Mandate and invest in competence at scale.

  • Engineer trust through peers, not just memos.


Phase one of AI adoption is getting safe activation at scale; phase two is deepening usage until AI becomes the default way of working. Citi is now positioned for that second wave.


So when your CEO asks how to get people to actually use AI at scale, you now have a concrete, real‑world answer: replace fear with competence, and design an adoption flywheel rather than just deploying a tool.


Until next time,

Ram


— 

Ram Srinivasan

MIT Alum | Author, The Conscious Machine | Global Future of Work and AI Adoption Leader published in Business Insider, Fortune, Harvard Business Review, MIT Executive Viewpoints and more.


A Message From Ram:

My mission is to illuminate the path toward humanity's exponential future. If you're a leader, innovator, or changemaker passionate about leveraging breakthrough technologies to create unprecedented positive impact, you're in the right place. If you know others who share this vision, please share these insights. Together, we can accelerate the trajectory of human progress.


Disclaimer:

Ram Srinivasan currently serves as an Innovation Strategist and Transformation Leader, authoring groundbreaking works including "The Conscious Machine" and the upcoming "The Exponential Human."


All views expressed on "Substrate" and across all digital channels and social media platforms are strictly personal opinions and do not represent the official positions of any organizations or entities I am affiliated with, past or present. The content shared is for informational and inspirational purposes only. These perspectives are my own and should not be construed as professional, legal, financial, technical, or strategic advice. Any decisions made based on this information are solely the responsibility of the reader.


While I strive to ensure accuracy and timeliness in all communications, the rapid pace of technological change means that some information may become outdated. I encourage readers to conduct their own due diligence and seek appropriate professional advice for their specific circumstances.

 
 