ChatGPT ain’t ‘good enough for government work.’

Jun 3, 2024 | GovSet AI

A Realistic Approach to Implementing AI in State and Municipal Government

The title is a bit misleading, but I stand by the claim: ChatGPT is not good enough for government work. Well, at least not straight out of the box. While the technology developed by OpenAI is downright awe-inspiring, it isn’t meant to be deployed as an enterprise tool within complex organizations like government agencies.

So, if the answer to the question “What is our AI strategy?” isn’t as simple as buying ChatGPT Plus or Microsoft Copilot licenses for your entire staff, then what is it?

Your First Implementation

Effective and responsible AI adoption in the public sector hinges on two key initiatives:

  1. Staff Training: Educating employees on best practices while making them aware of the benefits and risks associated with AI usage.
  2. Data Governance: Ensuring that the records and histories used to train AI models are accurate and exhibit minimal bias.

So, how should local government integrate AI?

Basic AI use cases aim to shorten the time needed to complete tasks without eliminating human involvement. Examples include generating reports, responding to emails, and creating press releases. Leveraging tools like ChatGPT or Microsoft Copilot can lead to immediate productivity enhancements for tasks that do not require the AI to generate referenced or researched information.

Although there is a slight learning curve, generating administrative records with an LLM is relatively straightforward out of the box. The resulting document must still be proofread for accuracy and every ‘fact’ validated: LLMs tend to “hallucinate,” fabricating seemingly plausible events, facts, and figures.
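To make that concrete, here is roughly what the workflow looks like in code. This is a minimal sketch assuming OpenAI’s Python SDK; the model name, the prompt, and the [CHECK] flagging convention are illustrative choices on my part, not a prescription. Note where the human comes back in at the end.

```python
# Minimal sketch: drafting an administrative record with the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment. Model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whatever model your license covers
    messages=[
        {
            "role": "system",
            "content": "You are a drafting assistant for a town clerk. "
                       "Flag any statement you cannot verify with [CHECK].",
        },
        {
            "role": "user",
            "content": "Draft a one-page press release announcing the "
                       "reopening of Main Street after repaving.",
        },
    ],
)

draft = response.choices[0].message.content
print(draft)  # a human reviewer must still proofread and validate every claim
```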

While administrative record creation and report writing are the natural first use cases, AI adoption will progress to a logical endpoint: a future where AI functions as a semi-independent extension of government staff. The true value lies in a deeply integrated agentic AI trained on your agency’s legislative and administrative files.

Agentic AI will be able to independently do things like:

  • Write reports with fact-checked claims, figures, and timelines.
  • Respond to written communications from residents and stakeholders.
  • Complete regulatory forms like grant applications and HR filings.
  • Provide instant answers to questions involving organizational history.

Before deeply integrating AI into your organization, it is essential to implement a data governance and standardization framework. Concentrate your efforts on creating a single source of truth to be ready for agentic AI use cases in the next 2-3 years.
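What would a single source of truth actually look like? As a purely hypothetical illustration (every field name below is invented for the example; your agency’s standard will differ), a portable record can be as simple as a small, self-describing structure serialized to plain JSON:

```python
# Hypothetical sketch of a portable, machine-readable record for a single
# source of truth. All field names are invented for illustration.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgencyRecord:
    record_id: str          # stable identifier, independent of any vendor
    record_type: str        # e.g. "ordinance", "meeting-minutes", "permit"
    title: str
    body_text: str          # full extracted text, suitable for LLM ingestion
    date_issued: str        # ISO 8601 keeps dates sortable and unambiguous
    entities: list[str] = field(default_factory=list)  # named entities in the text
    source_file: str = ""   # provenance: where the original document lives

record = AgencyRecord(
    record_id="2024-ORD-0041",
    record_type="ordinance",
    title="Amendment to Zoning Ordinance, Article IV",
    body_text="...",
    date_issued="2024-05-14",
    entities=["Town Council", "Article IV"],
    source_file="s3://agency-records/ordinances/2024-ORD-0041.pdf",
)

# Plain JSON on disk is readable by any third-party tool, which is the point.
print(json.dumps(asdict(record), indent=2))
```

Plain JSON and stable identifiers are boring on purpose: boring formats are the ones that survive vendor churn.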

Risk of Early Adoption

In 2024, the risk for government lies in how rapidly the market is evolving. Numerous AI “solutions” exist, but few have widespread adoption, and there are no clear market leaders. As a result, you must depend on your existing professional service provider or IT integrator, who also lacks a crystal ball. No one can predict which companies, software, platforms, or models will persist over the next decade.

Governments can’t afford to be locked into vendors that haven’t been around for a full budget cycle. It is crucial to ensure your AI investments are directed toward platform- and ecosystem-agnostic efforts.

Modeling a Platform-Agnostic Source-of-Truth

The design and format of your source-of-truth should align with the specific needs and goals of your agency. For some, this will entail a fully custom solution managed by a team of software developers and data scientists. For most, a cloud-based commercial off-the-shelf tool will suffice. Enterprising technologists with the time may consider exploring infrastructure-level tools like AWS Textract + Comprehend and OpenAI’s developer API.
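For a sense of scale, here is a sketch of that infrastructure route using boto3, the AWS SDK for Python. The bucket and file names are placeholders, and a production pipeline would use Textract’s asynchronous APIs for multi-page documents, but the two service calls shown are real:

```python
# Sketch of an infrastructure-level pipeline: AWS Textract for OCR and
# AWS Comprehend for entity detection, via boto3. Bucket and file names
# are placeholders.
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

# 1. OCR a scanned record stored in S3 (synchronous API; single-page documents).
ocr = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "agency-records", "Name": "minutes/2024-05-14.png"}}
)
text = " ".join(
    block["Text"] for block in ocr["Blocks"] if block["BlockType"] == "LINE"
)

# 2. Detect named entities (people, places, dates, organizations) in the text.
entities = comprehend.detect_entities(Text=text[:5000], LanguageCode="en")
for ent in entities["Entities"]:
    print(ent["Type"], ent["Text"], round(ent["Score"], 2))
```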

Regardless of the approach, how you organize your data matters less than ensuring the records are portable and accessible to third-party applications. Portability is what keeps you from being locked into a single vendor.

The adoption of AI across the public sector will invariably be a marathon, not a sprint. Depending on your agency’s unique needs and goals, it may be more advantageous to focus on developing a robust data framework as a foundation for future AI use cases rather than deploying AI for staff and citizens immediately. Regardless, protect your agency’s investment by prioritizing platform-agnostic data standards and indexes, despite any objections from your current software providers.

Pick the Right Data Management and Structuring System

We’re building GovSet to provide data warehousing and structuring tools designed for state and local government. Built on top of our open-source data standard, GovSet interprets, classifies, and structures public sector records into a machine-readable format, making them suitable for use in LLMs, search directories, and other applications.

We are currently piloting our software with a local government in Virginia. Using their records from the past five years, we inventoried their data sources, performed optical character recognition (OCR) on their records, detected named entities, and converted the entire record set to a machine-readable JSON standard.

If you’d like to participate in our pilot program for public sector data structuring, contact us here.

About the Author

Nathan Simpson is a government technologist and elected Town Council member for the Town of Appomattox, VA. With experience at all levels of government—including federal and state agencies, small towns, and independent commissions—he has developed communications and technology solutions for the Commonwealth of Virginia, U.S. Army, U.S. Marine Corps, and the U.S. Department of the Interior. Nathan supports public sector agencies in adopting data governance systems and AI tools through his company, The Morningside Group.