AI in the Public Sector: Guidance, Responsibility and Impact

Tyler Tech Podcast Episode 109, Transcript

The Tyler Tech Podcast explores a wide range of complex, timely, and important issues facing communities and the public sector. Expect approachable tech talk mixed with insights from subject matter experts and a bit of fun. Each episode highlights the people, places, and technology making a difference. Give the podcast a listen today and subscribe.

Show Notes

In this episode of the Tyler Tech Podcast, we explore the evolving role of artificial intelligence (AI) in the public sector and the critical considerations shaping its development and implementation. AI presents transformative opportunities for government agencies, from streamlining operations and automating routine tasks to enhancing accessibility and building trust with communities. However, these advancements come with unique responsibilities around ethics, transparency, and regulatory compliance.

Kristine Lim, product manager in Tyler’s Data & Insights Division, shares how AI can empower public sector employees by alleviating repetitive tasks, fostering equity, and improving service delivery. She emphasizes the importance of embedding ethical principles, usability, and adaptability into AI solutions to meet the diverse and changing needs of communities.

John Wright, general counsel, corporate at Tyler, delves into the regulatory landscape, discussing how governments can navigate evolving data privacy and security standards while adopting AI responsibly. He offers practical guidance on ensuring accountability, mitigating risks, and building trust in AI-driven decision-making.

This episode also highlights Tyler Connect 2025, our annual conference designed to bring public sector professionals together to empower, collaborate, and imagine. Join us in San Antonio, Texas, from May 11–14, 2025, for product training, networking, and inspiration to help drive your organization forward. Early registration opens December 10—visit tylertech.com/connect to secure your spot!

And learn more about the topics discussed in this episode with these resources:

Listen to other episodes of the podcast.

Let us know what you think about the Tyler Tech Podcast in this survey!

Transcript

Kristine Lim: You have to approach AI with a sense of responsibility. Like, foundationally, that has to be baked into any development vision that you have with AI solutions.

Because the benefits are huge, but there are obviously risks that we cannot ignore.

So, trust and transparency really should be baked into those designing principles.

Josh Henderson: From Tyler Technologies, it’s the Tyler Tech Podcast, where we explore the trends, technologies, and people shaping public sector innovation today.

I’m your host, Josh Henderson, part of the corporate marketing team here at Tyler. We’re glad to have you with us. Each episode, we bring you thought provoking conversations on the tools and strategies driving our communities forward.

If you enjoy our podcast, please consider subscribing, giving us a five star rating, and sharing the show with others.

In today’s episode, we’re diving into a critical topic, artificial intelligence (AI) in the public sector.

AI has the potential to drive transformation, but its development and application come with unique responsibilities, especially around ethics, transparency, and regulation.

Joining us today are two experts from Tyler. First, we’ll hear from Kristine Lim, a product manager in Tyler’s data and insights division, who will discuss the guiding principles and goals shaping AI’s development for public sector needs.

Kristine brings a unique perspective on how AI can improve service delivery, foster transparency, and build trust within communities.

Then we’ll shift to the regulatory side with John Wright, general counsel, corporate at Tyler.

John will help us navigate the evolving legal landscape surrounding AI, covering essential topics like data privacy, security, and accountability.

Key considerations for any public sector organization implementing AI solutions.

Whether you’re a government official interested in the future of AI or just curious about the impact of emerging technology on the public sector, this episode explores the opportunities and responsibilities AI brings to the field. So, let’s get started with Kristine Lim. We hope you enjoy the conversation.

Kristine, welcome back to the Tyler Tech Podcast.

Kristine Lim: Thank you for having me. It’s great to see you again, Josh.

Josh Henderson: Of course. Good to see you again as well.

Let’s just jump right in, and let’s start with sort of the big picture of AI’s role, in the public sector. So how would you describe the guiding principles that are shaping the development of AI tools for the public sector?

Kristine Lim: Well, I would say there’s a few principles, that really shape how we are developing AI for the public sector. The first one really being this thought of empowerment through automation. So, it’s the idea of using AI to handle those really repetitive tasks that, like, may take up too much time and can bog down an organization and especially in that public sector where resources are already so slim.

So, for example, we talk to customers where data entry or document processing takes forever. Those are things that can be automated so that employees can focus on the harder stuff. And then we really believe that this would have, like, a huge impact, not only on efficiency, but also on job satisfaction.

So, if you were an employee of the public sector, being able to focus on more of the fulfilling parts of your job, because AI is taking care of the more mundane stuff, we assume that you’d probably be happier, and then that would lead to more engaged employees and therefore better service for the public. So that’s, like, the one of the big ones, empowerment through automation. We also think a lot about ethical responsibility because it’s great to automate things, but how do we make sure it’s fair and, like, prevent bias from keeping into our algorithms? So especially in government, those decisions, they have such a big impact on people’s lives, and it’s not enough to just automate. So, we really are making sure that we are including ethics from the start.

And then that last, I guess, one of our last guiding principles that we really think about is usability. So, of the solutions that we’re building, we have these interviews with residents who just come to us complaining, like, this government website is too hard to use, or it feels impossible to figure out this form, like, how frustrating it is.

But AI has the potential to change that and making it more understandable and accessible.

I think that would be a huge step forward towards public trust and engagement because it can process huge amounts of data, but also make it more understandable. And so, therefore, like, making it work for the people that it’s meant to serve. You should be building things that focus on continuous adaptation and scalability.

The public sector is always changing, and needs are shifting. Populations are changing, so AI tools can’t be these static solutions.

They have to be able to adapt to those changing needs of the sector that we serve. So, to do that, we need to build AI systems that have long term frameworks in mind. Like, you don’t want to invest in something that’s going to be obsolete in a couple of years, and it’s got to be able to evolve and scale along with the changes in the public sector.

Josh Henderson: In your view, what values and goals should drive AI innovation, especially when considering or creating tools meant to serve communities?

Kristine Lim: So, we’ve thought about this a lot as a team.

We are really focused on I mentioned this earlier, on equity, transparency, and empowerment again. But this time, I think more focused not only on empowering public sector employees, but also communities, and then adaptability and innovation. So, obviously, public sector organizations are held to a higher standard, and they should be. They’re designed to serve communities. And, like, from that perspective, the values then should layer into the principles when it comes to designing those AI systems.

If I could summarize it, I would say, like, taking a really human centered approach to AI in the public sector, is really what should drive AI innovation so that we are serving communities correctly.

Taking a human-centered approach to AI in the public sector is what should drive AI innovation so that we are serving communities correctly.

Kristine Lim

Product Manager, Data & Insights Division, Tyler Technologies

Josh Henderson: Yeah. And not only are there those types of considerations for the public sector for public sec sector organizations, but there are unique challenges as well for the public sector.

As, you know, in Tyler, we work specifically with the public sector, so we’re very well aware of those challenges. But what potential does AI have to improve the efficiency, the responsiveness, or accessibility, of public services?

Kristine Lim: Yeah. I love this question because I think everyone in the public sector should really be thinking about this.

We have an incredible opportunity with AI, but we need to be able to be careful and plan carefully. So, AI can help improve responsiveness by answering those frequently asked questions or help people navigate those complicated government processes even after business hours. So, we’ve built something called Tyler’s resident assistant that does things like this. And in talking to our customers, we’ve heard stories of, like, how they’re able to, like, make game changing decisions because instead of showing up Monday morning, having to, like, go through all of these emails and calls, the resident assistant was able to handle over the weekend those conversations where people couldn’t call during business hours or who struggled with using the websites.

So, it really decreased the amount of work on their plate to also make them more responsive to the other things that they needed to address.

I’d say AI really simplifies access to government data too. So, if you can imagine, like, open data portals being easier to use and data visualizations being clearer and more engaging, AI, I think, really helps people find and understand the information that they’re looking for. So, this is something that we think is obviously a game changer for, thinking about transparency in government and empowers those citizens and to hold their government accountable.

And then I think the last piece is efficiency. So, the idea of streamlining operations, obviously, to make things run more smoothly. I mentioned before this idea of automating data processing or data analysis so that public sector organizations, can deal with the increased need and demand for government services while they’re also dealing with, unfortunately, budget cuts or staff shortages.

So that way AI frees up the resources and helps agencies to do more, I’d say, with less.

Josh Henderson: That’s really great. And we’ve you talked about this a little bit earlier about ethics or ethical concerns being part of the process of implementation and things like that. But ethics are really a core part of any technology used in the public sector. But I wanted to ask you, how should ethical consideration shape the development of AI or AI solutions?

Kristine Lim: You have to approach AI with a sense of responsibility. Like, foundationally, that has to be baked into any development vision that you have with AI solutions.

Because the benefits are huge, but they’re obviously risks that we cannot ignore.

So, trust and transparency really should be baked into those designing principles.

And when I talk about ethical considerations, some of the things that we consider are how we can prevent bias. Like, how are we ensuring that we’re designing AI systems to treat all individuals equitably?

We evaluate guardrails that help mitigate those bias. We understand, like, what type of questions might be potentially malicious or seen as, subjective rather than objective here at the facts.

When it comes to, like, transparency as well, we’ve been very intentional to develop, solutions that allow the users to actually check the work of the bot. So, like, being able to show your work, the resident assistant might say, here’s how you can renew your driver’s license, but it’ll also link where it got the information so that the user can actually click into it and decide if this for themselves is what they needed.

And then finally, AI is so exciting, but it would be unethical to create AI solutions without human oversight. It’s not yet at a place where it can just run on its own.

AI tools honestly should really augment human decision making and not replace it entirely. So, making sure that you have human oversight in the critical areas, to maintain the balance between, like, automation and nuance understanding that only a human can provide. So those are a few of the ethical considerations that we consider in designing.

Josh Henderson: And you touched on transparency and trust being, you know, vital parts of this, especially with residents, being involved in in the whole equation. How can public sector organizations build that trust around AI’s role in decision making and service delivery for residents?

Kristine Lim: I’d say the first part is just being transparent and, like, proactive in communication.

So public sector organizations that get ahead of it, they’re able to be very open about what their plans are first for AI or how they want to use it. Because if you can be clear in your communication and your process, you’re almost, like, demystifying the kind of fear around AI sometimes, around hallucinations and, like, what are they going to do with my data? But that helps build trust among stakeholders. I think also including accountability mechanisms will really help public sector organizations build trust.

So that could mean, like, a framework that ensures there’s oversight over AI driven decisions. We work with really, really engaged partners who are creating AI principles that all, all of their government has to abide by before even shipping any of the solutions that might touch a user. So that’s really something that having those types of frameworks or protocols in place, for human review or some type of accountability audit is going to be a really great way to increase trust.

I talked before about transparency and communication. This is somewhat related, but I think community engagement. So, if you are able to, like, open up the conversation to the community about AI implementation, that would help foster a sense of inclusion and trust. And I know, like, right now, it feels like everyone is moving so quickly, and all of a sudden, it’s like AI, AI, AI.

So, for a lot of people, it’s like, woah. What just happened? And being able to create a pause or a moment for people to be able to have those conversations, ask those questions, address those concerns really helps build that trust. And then besides, like, obviously, defining your ethical guidelines and your compliance rules, being able to publish that so that people know, like, this is specifically what we are building towards and why and how.

Yeah. And I think that’s it. It would be helpful to once people get to a place where they can demonstrate the positive impact that they’re seeing with AI. So being able to take that data and highlighting real life examples of AI, improving efficiency, accessibility, or, like, outcomes that reinforced its value, to the public.

Josh Henderson: That’s excellent guidance. Kristine, thank you for that.

Lastly, I just kind of wanted to look ahead and, you know, we any conversation surrounding AI is always about how fast it’s moving, how fast it’s evolving.

But how do you envision the continued evolution of AI in the public sector, and the tools being used in the public sector? And what does that mean for the industry?

Kristine Lim: It’s so interesting to be in the position, and I feel really grateful, because it’s exciting to be in these conversations where all of a sudden people are at this part of the hype cycle where they’re like, I need this. How do I do this? What’s happening? But I think we’re getting to a place where people are now understanding what AI will look like in this industry. And so, for us, our hypothesis is that AI is just going to become even more integrated into government services, almost like, an intelligent infrastructure. So, I mentioned previously, automating mundane tasks. So, like, collecting forms instead of them having to type that in themselves. The, you know, resident assistant or someone could do that.

Being able to, like, create those higher value opportunities for employees to focus on that work, basically, instead of doing the mundane stuff.

Because AI like, the evolution of AI is not to replace people. It’s about helping them become better at their jobs, especially in the public sector. We want to enable a more human centric government. And to do that, taking away some of these mundane tasks helps public sector employees, in my opinion, focus on, like, those more personalized communications, those ones that need more of that nuance.

Speaking of personalization, I think for the future, the industry is going to really be seeing a lot of personalization and adaptable AI needs. So, imagine, AI systems that look at more specific community data points to understand that specific community’s challenges and recommending a set of solutions.

Right now, we see a lot of, like, kind of one size fits all models, but being able to take that and personalize that to localities or whatever it may be, would really help to guide the needs, and it’s something that we’re seeing a lot. And so, for the industry, for public sector organizations, I think what this means is that innovation is required.

There are a lot of people who kind of want to sprinkle AI on things and think like, oh, we checked the box, But you really need some type of dedication to understand, like, how you can be adaptable and understanding those use cases so that you cannot only, like, ride along the wave, but make sure you’re riding it appropriately and correctly and doing it in a way that serves your communities.

So that’s where I’m seeing the industry going.

Josh Henderson: Yeah. And we’re lucky to have you on the team to kind of walk us through and analyze this stuff, as the developments continue to take place. And I can’t wait to have you back on the show. Thank you, Kristine, so much for joining me today.

Kristine Lim: Thanks, Josh. Great seeing you.

Josh Henderson: Stay tuned. We’ll be right back with more of the Tyler Tech Podcast.

Hey there, Tyler Tech Podcast listeners. Have you heard the buzz? Our annual user conference, Tyler Connect 2025, is officially on the horizon. And I’m here with my colleague, Jade Champion, with some exciting news to share.

Jade Champion: That’s right. Early registration opens on Tuesday, December tenth. And trust us, you don’t want to miss this. Mark your calendars for May eleventh through May 14, 2025, because we’re heading back to sunny San Antonio, Texas.

Josh Henderson: I can picture it already. The unique charm of Historic Market Square, the vibrant Riverwalk, and, of course, the iconic Alamo. But let’s be honest. It’s not just the location that makes Tyler Connect a must attend event.

Jade Champion: Absolutely. Tyler Connect is one of the largest gatherings of public sector professionals, and it’s your chance to collaborate with peers who are solving challenges with innovative solutions. It’s packed with product training, networking opportunities, and inspiration to help you and your team thrive.

Josh Henderson: Whether you’re a first timer or a Connect veteran, this conference always delivers. And let’s not forget, you’ll leave with new knowledge, practical tools, and meaningful connections to apply in your work and share with your teams.

Jade Champion: From December tenth, head to tylertech.com/connect to take advantage of our early registration pricing, which secures your spot for an unforgettable time in San Antonio.

Josh Henderson: We can’t wait to see you at Tyler Connect 2025, a place to empower, collaborate, and imagine what’s possible for the public sector, all in the heart of Texas. Now let’s get back to the Tyler Tech Podcast.

Now that we’ve explored the guiding principles and goals shaping AI development from a product perspective, let’s shift to the regulatory side of the conversation.

John Wright, general counsel, corporate at Tyler, is here to help us unpack the evolving legal landscape of AI in the public sector.

As AI technology rapidly advances, regulations are constantly changing to keep up. So, let’s dive into what public sector leaders need to consider as they explore AI solutions.

Alright, John. Thanks for joining us on the podcast today.

John Wright: Happy to be here. Thanks for having me.

Josh Henderson: Of course. Of course. Let’s just jump right in. The regulatory landscape around AI is constantly changing.

What risks should public sector organizations consider as they explore AI solutions?

John Wright: Well, before we dive in, I just want to clarify that what I share today is not intended as my legal advice and doesn’t establish an attorney client relationship with anyone.

My opinions are not the same as Tyler Technologies.

I’m just sharing my perspective as a lawyer at a technology company, in this day and age of AI.

But getting back to your question on public sector organizations and what type of risks to keep in mind when exploring AI solutions, the first thing I would say is don’t be distracted by the hype that exists today around AI, and really take a step back and ask what makes the solution that you’re considering AI. Because there are things that we called AI five or ten years ago that don’t compare technologically to what is considered AI today. You could actually get in trouble for calling something AI when it’s not and advertising it that way.

So that would be my first, point of warning is around the language, really understanding what AI means today, and what that means for the technology that you’re considering and the tool, you’re using to solve a particular problem. The next thing I would say is, you know, really consider whether the tool you’re considering is ready for use in the public sector.

There are some really amazing pieces of technology that have been released for consumer use and use in the private sector.

That doesn’t mean that those tools are ready for prime time in the public sector.

Really consider whether or not it is appropriate to use some of these AI tools in a public sector setting, for the delivery of government services.

And then finally, if you decide that that tool is ready for use in the public sector, ask yourself as an organization, are you ready? Is your organization really prepared to take on, that type of technology, or are there other, technology upgrades and modernizations that need to take place first in order to properly implement one of these new AI solutions that have captured everybody’s attention?

Josh Henderson: A lot of important things to consider there. And, obviously, with these types of solutions, they’re evolving day to day, so rapidly, but also, at the same time, regulations are developing and evolving day to day. So how can technology providers address evolving regulations when developing AI solutions to sort of minimize the impact on public sector organizations?

John Wright: Technology providers need to have a perspective where they take into consideration the new regulations that are being developed at the local, state, federal, and even international level, while at the same time maintaining a focus on compliance with established regulations when it comes to data privacy and security and the protection of confidential information, reducing bias and discrimination.

Those regulations are already established and apply to the use of new technologies. And so, technology providers need to not lose sight of that. And keep in mind that there are existing regulatory frameworks, that apply and not get lost in the fact that there is a flurry of new regulation being established all over the world.

I think that technology providers also need to have, deep industry knowledge of the public sector in order to meet those needs of public sector organizations and need to use teams with diverse backgrounds to vet AI solutions before they’re deployed.

So, I mean by that that it can’t just be technologists and developers that build a tool from the ground up and deploy it. There need to be other people at the table who can talk about, compliance and talk about messaging and then talk about legal compliance and compliance with regulatory frameworks.

And then once a technology provider has vetted a tool internally, the work doesn’t stop there. And the technology provider really needs to work hand in hand with the public sector organization on implementing that AI tool and maintaining it over time, testing it, and keeping it in compliance as the regulatory landscape evolves because it’s going to continue to evolve.

Josh Henderson: Yeah. Some really, really important information to keep top of mind, right there. So, thank you for that, John. And now, obviously, data privacy, security are they’re priorities. They might be top priorities in in a lot of ways, but, for public sector technology. But how can public sector organizations safeguard data privacy while still leveraging AI insights?

John Wright: Well, they should be top priorities, and use of AI technology shouldn’t come at the expense of data privacy and security.

So, I would say that good data privacy and security practices come first.

Public sector organizations have long been stewards of public data, and it’s widely known that developers of leading AI technologies need more good data to train AI tools on so that they can perform better. So, the public sector is in a unique place right now, as the organizations that maintain such large troves of good data, and the protection of data is, really paramount, it should be considered first. And I’m not saying that public sector organizations need to solve all their data security problems because we all know that there’s no such thing as perfect security.

But there are baseline good practices when it comes to data hygiene and conservative collection of data, and, obviously, data security practices. And all of those need to be in place so that a public sector organization can confidently go out and use an AI tool after first establishing good data security and privacy practices.

Josh Henderson: And I think sort of a throughline for all of this, both Kristine and your conversation now is accountability. Accountability seems to be a very big part of using AI responsibly.

What steps can public sector organizations take to validate and monitor an AI solution after implementation?

John Wright: Well, before implementation, I would suggest testing, testing, and more testing of these tools, by the provider, by the end user.

There has to be cooperation between the provider and the user on the testing.

And after implementation, the testing doesn’t end. There has to be a scenario where the government user and the technology provider continue to work to test and benchmark and make sure that the tool is operating in compliance with expectations and in compliance with current regulations.

Josh Henderson: Lots of important considerations.

John, I think this has been incredibly valuable, and a and a good, counterpart to some of the things that Kristine, was offering listeners early on in the episode. But wanted to leave you with one final question here.

So, looking ahead, what do you think are some of the measures public sector organizations can take to mitigate the risk of new AI regulations sort of hindering their use of AI solutions? So, it’s kind of a little bit more seamless moving forward.

John Wright: It’s a good question. It’s safe to assume that this new generation of AI technology is here to stay. And what I would say is that it’s not a race to implement any particular tool in the public sector. What I would say is to mitigate the risk of AI regulations hindering use, is to take a step back and take a very thoughtful approach at how your organization is handling compliance with established regulations first around the things we talked about, data privacy, security, protection of confidential and proprietary information, avoiding bias and discrimination. See how your organization is doing on those fronts first before you jump into using the latest AI technology in compliance with the latest AI regulation.

Take a very thoughtful and measured approach to the solutions, that you’re considering procuring and who the people are at the government organization that are going to use those tools and who the people are, in the public who are going to access tools perhaps and receive government services using those tools.

And really put those people first and take a human centered approach to the procurement of AI tools. But at the end of the day, this regulatory landscape is going to be, fast moving and evolving constantly.

So, it’s important to partner, with trusted providers of technology on this journey because I believe that this new generation of AI technology is here to stay. So don’t rush into any one particular solution. Take a very thoughtful and measured approach.

Josh Henderson: That’s great. And we’re glad to have you on the team so we can kind of keep ourselves updated on all those, evolving regulations as AI continues to rapidly evolve. So, John, thank you so much for joining me today on the podcast. I know we’ll have you back on again soon. Can’t wait to have you back.

John Wright: Thank you so much. I really appreciate it.

I hope you enjoyed these conversations with Kristine Lim and John Wright. If you’d like to learn more about the guidelines, regulations, and ethical considerations around AI in the public sector, be sure to check out our show notes for additional resources. Adopting AI in the public sector isn’t just about keeping up with technology. It’s about creating opportunities to improve efficiency, transparency, and service delivery for the communities we serve. At Tyler, we’re committed to partnering with public sector organizations to harness AI responsibly and effectively.

The possibilities of AI in the public sector are vast, and we’ll continue exploring them in upcoming episodes. If you’d like to dive deeper into any of the topics we covered today, feel free to reach out to us at podcast@tylertech.com. Our subject matter experts would be happy to connect with you on AI, data solutions, or any other area impacting the public sector. We’d also love to hear your thoughts on how we can make the Tyler Tech Podcast even better. Please take a moment to fill out our audience survey linked in the show notes. And don’t forget to subscribe, rate, and review the podcast.

This marks our final episode of 2024, but we’ll be back in the new year with more great conversations, diving into the trends and innovations driving a modern public sector.

Until then, all of us at Tyler wish you a joyful holiday season and a happy new year. For Tyler Technologies, I’m Josh Henderson. Thanks for tuning in to the Tyler Tech Podcast.

Related Content