
How the City of Los Angeles Built a Culture of Strong AI Governance

March 18, 2026 · Reading time: 8 mins
Mark Stone
Senior Technical Writer

When most organizations discuss AI governance, they begin with restrictions and proceed from there.

The City of Los Angeles took the opposite approach. Instead of banning AI tools, Chief Information Officer Ted Ross and his team chose education, experimentation, and empowerment as the key elements of their governance model. 

The result of governance done right: 27,500 employees learning to use AI safely, confidently, and responsibly.

So, how can your organization do it right? 

Governance Through Empowerment

Ross’s philosophy is simple: you cannot govern what you do not understand. Before the city could decide what tools to allow or restrict, employees needed firsthand experience.

“We ask everyone to try AI before they tell me they don’t want to use it,” Ross said. “You would not decide you dislike a food without tasting it first.”

Rather than a blanket policy of restriction, the city invested in large-scale training and structured adoption. Staff are encouraged to explore GenAI tools, such as Google Gemini, within a safe, governed environment backed by contractual controls that prevent city data from being used to train the model or exposed externally.

Ross explained that curiosity is not the enemy of governance; ignorance is. “If someone says, ‘I don’t want to use AI,’ I respect that,” he said. “But I also want them to know what they’re saying no to. Learn what it can do, learn what it shouldn’t do, and then make an informed decision.”

Training as the First Line of Defense

The city’s strategy rests on the belief that educated users are the strongest security control. It bears repeating: educated users are the strongest security control.

To that end, the city’s Information Technology Agency (ITA) has hosted citywide training sessions attracting between 3,000 and 5,000 participants at a time. Sessions cover foundational topics such as “What is AI?” and “AI, Privacy, and Security,” alongside role-specific workshops for accountants, HR teams, developers, and designers.

To reinforce learning, the City of Los Angeles also runs specialized programs for “super users,” employees who go through eight weeks of in-depth training to become internal experts and advocates. For Ross, it’s very similar to the city’s approach to cybersecurity awareness. “We do not stop phishing emails from reaching people; we train them to recognize and report them,” he said. “The same logic applies to AI. If we hide the tools, no one learns how to use them safely.”

He noted that the goal is to help employees think about their work through a new lens without overwhelming them with jargon. “If you are in HR, I want you to ask, ‘What does AI mean for recruiting and onboarding?’ If you are in accounting, think, ‘How can AI speed up reconciliation or audits?’ We are not trying to create AI engineers out of everyone — we are trying to create AI-aware employees.”

Safe Environments for Exploration

Governance for the City of Los Angeles Information Technology Agency starts with trusted environments. Every employee now has access to Google Gemini, a sanctioned GenAI tool protected by legal agreements that prohibit Google from training on city data. Technical controls also block logins to unapproved AI systems and prevent employees from connecting their city accounts to unsanctioned tools.

“We do not want to play whack-a-mole,” Ross said. “Our goal is to build up the capabilities of our users so that our first line of defense is our people, not our last.” By offering legitimate, governed AI options, Ross is taking a huge step in curbing “shadow AI,” the unsanctioned use of generative tools that happens when employees feel restricted or unsupported.

“If they find value using AI at home,” he added, “and we block them from using it at work, we are excluding that same value from our organization. We saw this with shadow IT for decades — AI just accelerates it.”

He also emphasized that policies and controls should never be static. “We use technology to make sure the right guardrails are in place, but we also rely on our people to stay smart about what they are doing,” Ross said. “Governance is a partnership; it’s not about catching people doing something wrong. It’s about helping them do it right.”

Learning Loops and Continuous Governance

Adoption is not a one-time rollout. The city views governance as an evolving practice, a “virtuous curve” in which training leads to informed use, and informed use reveals new governance needs. “It has taken us two years to get to where we are,” Ross said, “and we still have a long way to go. It is about building, refining, adding, and changing.”

This continuous improvement loop enables the city to adapt to both new AI capabilities and employee feedback. Ross and his team have drawn valuable lessons from previous technology shifts, such as cloud adoption, mobile rollout, and social media governance. Those lessons now guide how the city integrates AI responsibly and safely across departments.

Ross refers to AI as “a tool with nuclear energy,” capable of both great progress and great harm. “You can do some incredible things with AI, and you can also ruin someone’s life with AI. That’s why we use it carefully — the right tool, the right purpose, the right boundaries.”

How Empowerment Beats Fear

Ross’s most important insight may also be the simplest: fear slows progress. “AI is not a silver bullet,” he said. “But there are tremendous things it can do, and we want to use it for good, not for harm.” His emphasis on inclusion, giving everyone a chance to learn and participate, transforms governance from a compliance exercise into a shared mission.

He believes adoption and oversight must evolve together. “If we roll out tools faster than people can understand them, we fail,” he said. “And if we make people afraid to use those tools, we fail too.”

For Los Angeles government employees, that balance between curiosity and caution is practiced daily. Employees learn, experiment, and build confidence within guardrails. Governance becomes muscle memory, not a manual to follow and then forget.

The Concentric AI Takeaway

The City of Los Angeles’ journey mirrors what every enterprise faces today: balancing innovation with control. 

Concentric AI’s Semantic Intelligence platform brings the same philosophy to data governance.

Through AI-driven discovery, classification, and risk analysis, organizations can:

  • Identify and protect sensitive information flowing into GenAI tools.
  • Govern GenAI use without halting innovation.
  • Enforce data policies automatically across structured and unstructured systems.


Just as the City of Los Angeles built up a culture of responsible adoption, Concentric AI helps enterprises create a foundation where visibility, trust, and security grow together.
