The federal government recently introduced a new set of guidelines for public servants around how artificial intelligence tools, like ChatGPT, can be used in their workplaces. While an expert in the field calls that a good “first step,” he also said there is a need for the government to push further, developing more resources to ensure public servants use those tools safely and effectively.
Released on Sept. 6, the guidelines describe generative AI as a type of artificial intelligence that produces content "such as text, audio, code, videos and images" based on information the user inputs, typically short instructional prompts.
What are the uses of generative AI in the public service?
Issued to support federal institutions in the responsible use of generative AI, the guide shares principles to help users assess the risks associated with the technology and to use it responsibly, Treasury Board spokesperson Martin Potvin said.
Generative AI, according to Potvin, can be used by public servants for various tasks, including writing and editing documents and emails, coding, summarizing information, brainstorming, conducting research, translating, learning, and providing support to clients, such as answering questions and troubleshooting.
“As public servants and federal institutions consider the use of these tools, they must be cautious and evaluate the risks prior to use,” Potvin said. “Public servants are responsible for the content generated by these tools, including making sure that it is factual, legal and ethical.”
The guidelines warn that caution should be used when generative AI is employed to develop content for public communications, such as web and social-media posts. They also warn users to be cautious about using the technology to automate "assessments, recommendations or decisions" about clients, noting that employees should be able to understand how the tool produces its outputs, find the data it relied on, and ensure the content is factual and free of "undue bias" toward clients.
“When you generate content, you are responsible for making sure it is accurate, clear, non-partisan and unbiased,” the guide states. “You are also responsible for making sure permissions to reproduce, adapt, translate or publish third-party material have been secured and that the content does not violate intellectual property laws.”
Expert cites need for more resources, training
On top of the initial roadmap, more resources and training are needed to ensure employees feel comfortable using different types of AI tools, said Hossein Bonakdari, a University of Ottawa civil engineering professor with expertise in AI.
“This is not everything,” Bonakdari said. “This is the first step. We need to go forward with that.”
He said additional guides should be created for each different type of generative AI, including models used for text, audio, code, video and image creation.
“Training of the employees in the government is really important because they need to understand, for example, how to write the prompts when we are talking about privacy and (handling) classified information,” Bonakdari said, adding that different departments should develop case studies to teach staff about their responsibilities while using the technology. “AI is a significant driver of the current era of rapid technological advancement. We should be prepared. We need to benefit from this advantage.”
Is generative AI a threat to jobs?
The federal government has promised that the goal of adopting AI is not to eliminate public-service jobs.
“Generative AI tools are just that: tools,” Potvin said. “The tools should be evaluated for their potential to help increase the efficiency and productivity of public servants, not for their potential to substitute for a high-performing workforce.”
Bonakdari said that, if the government plays its cards right, AI can actually create jobs.
“We need to focus on enhancing human-AI collaboration,” he said, adding that new jobs related to the technology include AI trainers, policy analysts and AI reviewers, a role he said is important given that AI tools like ChatGPT are trained on information only up to 2021. “In my opinion, there is nothing to be worried about if there’s good planning.”
Jennifer Carr, president of the Professional Institute of the Public Service of Canada, echoed those remarks, noting it’s crucial that the government works to “upskill” workers in order to prevent layoffs, given that the technology will take over some jobs.
“An IT worker who used to be a programmer could move into cloud security with a little bit of investment,” she said, adding the union has been doing research on how jobs will change in the coming years. “If they invest and wisely look at their resources, they should be fine.”
AI and security
When using tools like ChatGPT, the guide indicates, employees should be transparent about their use of generative AI, identifying content that has been produced with the technology.
It also states employees should not input personal or sensitive information. Potvin said employees are required to comply with codes of values and ethics as well as several federal laws and policies, including the Privacy Act, the Directive on Automated Decision-Making and the Directive on Security Management. Those policies, Potvin said, include consequences for non-compliance.
Treasury Board President Anita Anand said managers are also responsible for overseeing employees’ use of AI.
“The purpose of these guidelines is to ensure that there’s a framework for the use of AI going forward,” Anand said. “That framework is evergreen, meaning that, should we see that there is an issue with the use of AI, for example, undue bias or undue discrimination, that could be examined and the framework be updated.”
When asked about the possibility of creating training modules or releasing additional in-depth guidelines on AI, Anand said the government was in the process of “socializing the guidelines” and making sure that employees were aware of things like the need to adhere to existing legal obligations, to maintain confidentiality in government documents and to have conversations with their managers about the use of AI.
“We will then, as I said, evaluate and reevaluate and update as necessary,” Anand said.
Potvin said that, while unions were not consulted before the release of the preliminary guide, they will be engaged in ongoing work.
“Of note, public service unions were engaged in the most recent review of the Directive on Automated Decision-Making, one of the key policies that underpins these Guidelines,” he said of the document that was amended on April 25 to outline rules for use of AI by the government.
Carr said the guide needed to more specifically outline when the technology should not be used in the workplace.
“A guide is a guide,” Carr said, adding there should be a recourse mechanism for taxpayers if they think AI was used wrongfully. “If something is not followed, there’s no way for us to intervene to say that, ‘Hey, this use of AI in this particular area is not okay.’”
Chris Aylward, national president of the Public Service Alliance of Canada, said in an email that unions and workers should be consulted when AI tools or systems were being implemented.
While the union is concerned about AI’s capability to produce content that is discriminatory, not representative or includes biases, PSAC is glad to see the government has come up with best practices to mitigate that risk, he said.
“The government must ensure these best practices are upheld, including providing ongoing training to users so they can identify biased or discriminatory content generated by AI,” Aylward said.