The Human Side of AI Governance in Washington's Public Sector
At the Washington IT Leadership Forum, the most important conversation about AI governance did not happen during the formal session. It happened over lunch, and it was about people, not policy.
At the recent Washington IT Leadership Forum, the session on AI governance covered the topics you would expect: policy frameworks, risk management, and procurement guardrails. What stayed with me, though, was a conversation over lunch.
Someone at my table asked a simple question: "What has been your biggest disappointment with AI?"
I had to pause. My answer was the speed of the hype. The relentless wave of headlines and hot takes has left many people feeling overwhelmed, even scared, and in some cases left behind. That matters, because the leaders in that room were not chasing hype. They were focused on something harder: measured adoption, thoughtful governance, and a genuine commitment to educating their staff so that no one is left behind.
It was encouraging to hear the same concern echoed during the governance session itself. The panel, moderated by Sean McSpaden and featuring Angela Kleis (Deputy CIO and Chief Data Privacy Officer, Washington State Department of Commerce), Lisa Qian (City AI Officer, City of Seattle), and James Galvin (AI and Emerging Technology Program Lead, WaTech), did not just talk about frameworks and compliance. The panelists stressed educating staff, easing fears about job security, and making sure AI adoption does not leave people behind. It was a good reminder that the best governance programs start with people, not policy documents.
What does AI governance look like once teams start using AI in real work?
On paper, AI governance covers policies, risk frameworks, procurement standards, and documentation requirements. Most agencies have some version of this in place or in progress. But the moment AI tools enter daily workflows, governance has to meet reality.
That means understanding where these tools actually create value, where human review is still essential, and how to maintain transparency so that oversight is not just a principle but a practice. Washington state policy reinforces this by requiring risk management protocols, ongoing monitoring of AI outputs, and role-based training. This is not a one-time compliance exercise. It is a continuous effort, and it only works when the people using the tools understand their role in it.
That shift from policy to practice is where the human side of governance starts to matter most.
Where is AI already creating real value in public-sector work?
The most useful AI adoption is not happening through sweeping transformation initiatives. It is happening quietly, in the day-to-day. People are using AI to summarize lengthy documents, turn rough notes into structured briefing materials, draft first versions of constituent correspondence, organize large collections of policy research, and prepare intake information for human review.
None of this is flashy. All of it saves time and reduces friction in work that was already getting done, just slower. When staff can hand off the repetitive, time-consuming parts of their job to AI and redirect their energy toward higher-value work, that is a real return. And recognizing these contributions honestly, without overstating them, is what builds a credible foundation for broader adoption.
How can leaders tell which AI use is worth encouraging and which needs attention?
Governance is often framed as setting boundaries. That matters, but it only tells half the story. The other half is understanding what productive AI use actually looks like inside your organization.
This is where monitoring and evaluation tools become important. With the right instrumentation, leaders can identify which workflows are producing consistent, high-quality results and which are generating errors or risky outputs. They can see which teams are thriving and which need targeted coaching. At Autessa, this is something we think about constantly: how do you give leaders visibility into what AI is actually doing across their teams, so governance becomes a tool for guidance rather than just compliance?
That kind of practical intelligence turns oversight into something useful. It also makes training programs more responsive. Instead of running the same generic session for every team, you can focus on the gaps that actually exist.
But visibility alone is not enough. The tools themselves need to be designed with the human in the lead. That means intuitive interfaces that staff can actually work with, not just tolerate. It means giving teams the ability to tailor AI workflows to how they actually operate: adjusting prompts, refining outputs, and shaping the tool to fit their process rather than the other way around. And when those improvements happen continuously, driven by the people using the tools every day, you get something powerful: adoption that grows organically because the experience keeps getting better.
This is also where cost matters. Public-sector agencies should not need to pay for expensive vendor-driven customization every time they want to adjust how an AI tool works. When the tools are designed to be tailored by the people who use them, agencies gain control over their own improvement cycle. That changes the economics of AI adoption entirely.
Where must human judgment stay in the lead?
Human oversight is a foundational requirement of public-sector AI governance, and for good reason. Washington's statewide AI framework is clear on this: agencies must verify AI-generated content for accuracy, review outputs that affect public communications or decisions, monitor results based on risk level, and provide role-based training that reinforces accountability.
But "human in the loop" cannot just mean someone clicks "approve" at the end of an automated process. It means the human stays in the lead. The person doing the work should understand what the AI did, why it produced a particular output, and what to look for when something is off. That requires tools that are transparent, workflows that surface the reasoning behind AI outputs, and training that builds genuine competence rather than just compliance awareness.
This is also where job security concerns need to be addressed head-on. When staff see AI tools that are designed to support their judgment, not replace it, and when they are given real agency to shape how those tools work, the fear starts to fade. People are not afraid of tools that make their work better. They are afraid of tools they do not understand, cannot control, and suspect might be designed to make them unnecessary.
The leaders who get this right will be the ones who treat their people as the most important part of the system, because they are. The public trusts government agencies to get these decisions right, and that trust runs through the people who carry out the work every day.
The leaders I spoke with at the forum are not afraid of AI. They are working to get it right, thoughtfully, with their teams and their communities in mind. That combination of innovation and responsibility is what will define the next chapter of AI in Washington's public sector. And it will not happen through policy alone. It will happen through the people who carry that policy forward every day.