ChatGPT was introduced by OpenAI in late 2022 and, within a short time span, became incredibly popular, drawing over 100 million users and making it the fastest-growing internet service ever. ChatGPT relies on OpenAI’s GPT algorithm and falls under the category of generative artificial intelligence (AI). This type of AI generates fresh content, including text, images, audio, or video, based on prompts fed to an algorithm that has been trained on trillions of data points. Microsoft has invested billions in the start-up OpenAI, and Google recently unveiled its own conversational AI rival, Bard.
There is growing anticipation among experts that such generative AI tools will significantly change the way we work. There is also widespread debate over the opportunities, benefits, and risks that generative AI tools present to local governments all over the world.
For instance, generative AI can assist individuals in writing briefings, identifying coding problems, or designing visual communication campaigns. Additionally, councils and similar organizations will find innovative ways to incorporate generative AI into their business processes, service offerings, and existing software. As an example, they could utilize generative AI to summarize transcripts of call logs or public meetings.
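To make the transcript-summarization use case concrete, here is a minimal sketch of how an agency might prepare a long public-meeting transcript for a chat model. The `call_llm` step is deliberately omitted: which model API a council uses (OpenAI, Azure OpenAI, or another provider) is an assumption, so the sketch only shows the chunking and prompt-building that would precede any such call.

```python
# Hedged sketch: preparing a public-meeting transcript for summarization.
# Long transcripts rarely fit in one model request, so a common pattern is
# to split them into chunks, summarize each, then combine the summaries.

def chunk_transcript(transcript: str, max_chars: int = 4000) -> list[str]:
    """Split a transcript into paragraph-aligned chunks under max_chars."""
    paragraphs = transcript.split("\n\n")
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def build_prompt(chunk: str) -> str:
    """Wrap one chunk in a summarization instruction."""
    return ("Summarize the key decisions and action items from this "
            "portion of a public meeting transcript:\n\n" + chunk)

transcript = ("Call to order at 6 pm.\n\n"
              "Motion to approve the parks budget passed 4-1.")
# In production, each prompt would be sent to the model and the partial
# summaries merged; here we just show the prompts being prepared.
for prompt in map(build_prompt, chunk_transcript(transcript)):
    print(prompt[:60])
```

The paragraph-aligned split is a design choice: cutting mid-sentence tends to produce worse summaries than cutting at natural boundaries.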
State and local governments have been utilizing chatbots for several years as a search engine to assist residents in finding the right information on various topics such as DMV services, unemployment claims, and rental assistance. During the early days of the COVID-19 pandemic, many states rushed to deploy chatbots to hasten emergency benefit applications and debunk misinformation. However, the significant advantage of ChatGPT and other large language models is their ability to generate comprehensive responses that simulate human conversations rather than providing clinical directions to a website. These bots scan billions of data points and are programmed to correspond to specific behaviors or subject areas, allowing them to predict the appropriate words to follow when presented with a string of text.
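The "predict the appropriate words to follow" idea can be illustrated with a toy model. The sketch below is a simple bigram word-count table, not how GPT works internally (GPT uses neural networks over tokens at vastly greater scale), but it shows the core mechanic: learn which words tend to follow which, then predict a continuation.

```python
from collections import defaultdict

# Toy illustration of next-word prediction: count which word follows which
# in a tiny corpus, then predict the most frequent continuation. Large
# language models do this statistically at enormous scale, not with tables.

corpus = (
    "residents can renew a license online . "
    "residents can apply for rental assistance online . "
    "residents can file an unemployment claim online ."
).split()

# Count word -> next-word frequencies.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

print(predict_next("residents"))  # -> "can"
print(predict_next("online"))     # -> "."
```

Even this toy version shows why outputs mirror their training data: the model can only continue text in ways it has seen before, which is also the root of the bias and accuracy concerns discussed later.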
Some of the other common possible applications of ChatGPT include:
- Task automation: handling repetitive tasks such as data entry and analysis, freeing government employees to focus on more complex and important work.
- Knowledge management: making information easier to access and process, leading to more efficient processes.
Though it’s still very early days for these emerging technologies, with many ethical, technical and practical issues still to be played out, some cities and countries are moving early and beginning to experiment.
In Singapore, 90,000 civil servants will soon be able to tap the power of ChatGPT to conduct research and draft reports and speeches through Pair, a tool built directly into Microsoft Office.
A few weeks ago, the Portuguese government announced a plan to roll out a “Practical Guide to Access to Justice”: an AI model that Microsoft helped develop which uses the underlying tech of ChatGPT to help citizens with basic legal questions. Officials say it will give citizens information on court proceedings, answering questions about documents required for a marriage license or Portuguese citizenship, for example.
Some other governments are taking a more cautious approach to chat-based advancements — for now. Some experts feel that when a government uses an AI tool, it should be held to a different standard than when a private company debuts one.
In the UK, multiple government departments have reportedly sought clarification on whether they are allowed to use ChatGPT to automate repetitive policy work such as drafting emails or letters. Civil servants have also reportedly been cautioned that chatbots sometimes produce inaccurate information, though use of the tool has not been ruled out entirely.
So what are the concerns? What are the risks and challenges?
Despite the potential of ChatGPT and other AI-enabled workforce applications for local governments, risks and challenges exist and will continue to emerge. Beyond the business and cultural implications, the following must be considered:
- Lack of data diversity: One limitation of ChatGPT is the absence of diverse data in its training. Since the model relies heavily on a large dataset of text, the absence of diversity could result in errors and biases when it comes to understanding and responding to diverse perspectives and cultures. The outputs of ChatGPT are mainly based on historical inputs, which may not necessarily be accurate or appropriate in all cases.
- Lack of interpretability: ChatGPT’s decision-making process can be difficult to understand, justify, or explain.
- Safety concerns: ChatGPT could have unintended consequences, such as spreading misinformation or automating decision-making with negative effects on people.
- Lack of regulation: Regulations around this technology are lacking, causing confusion and uncertainty about how it should be used and by whom.
- Data security: data entered into these tools can become part of the training data set, effectively placing it in the public domain. Past cases have shown that governmental use of AI can raise privacy concerns.
It only added to mounting privacy worries when OpenAI, the company behind ChatGPT, disclosed it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines from other users’ chat history. Following this breach, regulators in Italy issued a temporary ban on ChatGPT in their country, citing privacy concerns.
OpenAI says in its privacy policy that it collects all kinds of personal information from the people who use its services. It says it may use this information to improve or analyze its services, conduct research, communicate with users, and develop new programs and services, among other things. The privacy policy states it may provide personal information to third parties without further notice to the user unless required by law. If the more than 2,000-word privacy policy seems a little opaque, that’s likely because this has become the industry norm in the internet age. OpenAI also has a separate Terms of Use document, which puts most of the onus on the user to take appropriate measures when engaging with its tools.
Google’s privacy policy, which includes its Bard tool, is similarly long-winded, and it has additional terms of service for its generative AI users. The company states that to help improve Bard while protecting users’ privacy, “we select a subset of conversations and use automated tools to help remove personally identifiable information.” “These sample conversations are reviewable by trained reviewers and kept for up to 3 years, separately from your Google Account,” the company states in a separate FAQ for Bard. The company also warns: “Do not include info that can be used to identify you or others in your Bard conversations.” The FAQ also states that Bard conversations are not being used for advertising purposes, and “we will clearly communicate any changes to this approach in the future.”
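For a sense of what "automated tools to help remove personally identifiable information" can mean in practice, here is a minimal sketch of pattern-based redaction. This is a hypothetical illustration, not Google's actual pipeline: real systems use far more sophisticated detectors, and the regex patterns below catch only simple email and US-style phone formats.

```python
import re

# Hedged sketch of automated PII scrubbing: replace detected emails and
# phone numbers with typed placeholders before a conversation is stored
# or reviewed. Patterns are illustrative only and will miss many formats
# (and non-pattern PII such as names).

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with placeholders like [EMAIL] or [PHONE]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```

Note that the name "Jane" survives redaction, which is exactly why pattern-based scrubbing alone is not a privacy guarantee and why the guidance below warns against entering identifying information in the first place.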
As a result, some experts caution against adding anything to these tools that one would not want to assume will be shared with others.
As with any AI-based technology, it is essential for government agencies to use ChatGPT and other similar tools responsibly and protect customer data. It is crucial for individuals and teams to have a thorough understanding of how ChatGPT and other AI-enabled workforce capabilities work before incorporating them into their operations. They must also be aware of the areas where these tools excel and where their limitations or potential risks lie. These challenges are common issues that arise with the implementation of AI in any industry.
Like many innovations, ChatGPT straddles the line between scary and exciting. The public sector has experimented with automation and algorithms in the past, and seen mixed results. ChatGPT’s ease of use and potential to solve organizational problems, however, is rapidly advancing the widespread adoption of AI.
The impact of ChatGPT and similar AI technologies on work culture will be substantial. Getting ahead of their widespread use will help local governments keep pace with residents' needs. At the same time, it is important to ensure that ethical guardrails and proper oversight are in place so that these new public-facing systems are not abused and do not fall vulnerable to bias.
Written by Sandeep Mehta, CTO, Planeteria Media/Digital Deployment. Follow him on LinkedIn or contact him at smehta@planeteria.com.