
The good, the bad and the potential

Artificial intelligence tools are already being used by government. We look at the challenges and opportunities they offer. 

When ChatGPT launched in November 2022, it gained 100 million users in just four weeks. That unleashed a flurry of excitement among some who could see the potential and imagined just how useful it could be. It also unleashed panic among others, who feared the worst, imagining their favourite sci-fi thrillers becoming reality or, worse, their jobs being made redundant by highly intelligent and adaptable bots (internet robots).

Similar reactions no doubt took place within the federal government, but amid the hype created by ChatGPT, people failed to recognize that artificial intelligence was being used in many facets of government already.

“It’s not in the future, it’s been under way for years,” says Michael Wernick, who served as 23rd clerk of the Privy Council from 2016 to 2019 after many years as a federal deputy minister and is now the Jarislowsky Chair in public sector management at the University of Ottawa. “Facial recognition at border crossings is already there [for example], so it’s more a question of pace of adoption.” 

He says the challenge with pace of adoption is that middle managers — those who would usually be the ones implementing this kind of change — are often risk-averse, and the media doesn’t help by finding sources who are upset about AI and making them central to stories about its adoption.

“The feedback loop is all negative so the behavioural conditioning of middle managers is to be cautious,” Wernick says. “It’s not a character flaw, it’s a rational response to the incentive structure in which they work. Anytime someone takes a risk, they don’t get a lot of credit if it works. No one would know that weather forecasting is twice as accurate as it was 10 years ago because no one ever tells [us] that. But if there’s a glitch or problem or somebody’s file goes to the wrong address, the feedback is instantaneous. 

"That’s fine because that’s how you learn to do things better, but the behavioural conditioning is to make people exceedingly cautious.” 

He says you can get positive feedback if you improve public-facing service, but there’s no comparable reinforcement for improving internal services.

In spite of that, AI has arrived in government and Wernick sees lots of possibilities for it in the future. 
 

Where AI could help

“The public sector includes provincial, municipal and territorial [governments], and some of the most interesting stuff will be in health care, education, universities, courts and policing,” says Wernick, adding that how AI will be used is such a hot topic he could attend a conference on it every week.

He warns that one thing to consider is the risk of bias entering algorithms.

“If you’re training the engines with a lot of prior input you’re going to import a lot of racism and misogyny, and all you need is one example for all hell to break loose,” he says. “The public sector will have to show some transparency around the algorithms. It’s inevitable and perfection is impossible, so the question is: Does the government have the learning software to show how it will fix [such problems]?” 
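To make that transparency point concrete, here is a minimal, hypothetical sketch (not drawn from the article) of the kind of disparity check an algorithm audit might run over a decision system’s outputs. The decision records, group labels and 80-per-cent threshold are all invented for illustration.

```python
# Hypothetical audit sketch: compare approval rates across groups.
# Records and the four-fifths threshold are invented for illustration.
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, was_approved in decisions:
    total[group] += 1
    approved[group] += was_approved  # True counts as 1

rates = {g: approved[g] / total[g] for g in total}
best = max(rates.values())
for group, rate in rates.items():
    # "Four-fifths rule": flag any group approved at under 80% of the top rate.
    flag = "FLAG" if rate < 0.8 * best else "ok"
    print(f"{group}: approval rate {rate:.0%} [{flag}]")
```

A real audit would be far more involved, but even a simple rate comparison like this is the sort of artifact a government could publish to show how a flagged problem would be found and fixed.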

Randy Goebel, a computer science professor at the University of Alberta and a co-founder of the Alberta Machine Intelligence Institute, also sees real potential for AI in the health-care sector and in the legal field.

“Canada was always a leader in deploying computer science techniques in managing and organizing legal documents, which are still, for the most part in Canada, public documents that are open to scrutiny and therefore open to use by AI systems to build predictive models,” Goebel says. “In the extreme, some people think that it’s only a matter of time before all judicial decisions are made by machines. I think that that’s wrong for several reasons, but it’s clear that the average person can accelerate their access to justice by automating simple things. For example, you’re starting to see companies pop up and say, ‘Got a traffic ticket? Send it to us and we’ll fight it for you.’”
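As a purely illustrative sketch of what building predictive models from public legal documents can look like, the toy example below trains a simple text classifier to guess case outcomes. The sample documents, labels and choice of scikit-learn are assumptions for illustration, not anything the article describes.

```python
# Toy illustration of an outcome-prediction model over public legal texts.
# The example documents and labels are invented; requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "radar evidence confirmed speed over the posted limit",
    "officer failed to appear and no evidence was entered",
    "calibration records for the device were not produced",
    "driver admitted the infraction at the roadside",
]
outcomes = ["upheld", "dismissed", "dismissed", "upheld"]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, outcomes)

# Predict the likely outcome of a new (hypothetical) traffic-ticket dispute.
print(model.predict(["no calibration records were produced at trial"]))
```

Production systems trained on full case law are vastly larger, but the principle is the same: open documents become training data, and the model ranks likely outcomes.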

He says the health system offers at least as much opportunity — and some challenges. He points to an example: in 2021, the Canadian College of Physicians and Surgeons conducted interviews with AI experts across the country, and its final report offered this aphorism: “In the near term, physicians will not be replaced by AI. In the long term, physicians who do not use AI will be replaced.”
 

Regulatory situation 

Canada has an AI bill, C-27, that’s still in the drafting stage but is designed to reform federal private-sector privacy law and regulate the design, development and use of AI systems. While Canada redrafts, Europe has already legislated, and the U.S., according to Goebel, is “still the Wild West, and listening too much to companies like Google, Amazon and Microsoft.”

Goebel says the chief concerns with government use of AI are data security and the ethical use of data.

“In the ’80s, Canada and Singapore led the world in the automation and delivery of government services,” Goebel says. “Now I think we’re in the top 50, not the top five.”
 

What’s worrisome

Joanna Redden, an associate professor at the University of Western Ontario, has created a database of 303 automated tools already being used by government.

“One of the major problems I see is that there’s so little information available about how these systems actually work in practice,” Redden says. “In addition to mapping where and how systems are being used, which led to the register we’ve published, I’ve also been doing case study investigations of how systems work in practice. And I think that’s a really key piece that we don’t know enough about. There’s not enough information publicly available about the strengths and limitations of the new kinds of information systems that are being implemented, or piloted and used.”

She says quite often there aren’t studies on the impact of these new systems — how they’re changing decision-making practices, resource allocation and service delivery. She strongly recommends having interdisciplinary teams doing this kind of research alongside government as it implements these tools.


Challenges for AI 

The federal government’s Guide on the Use of Generative Artificial Intelligence (Gen-AI) lists a number of challenges and opportunities:

  • Gen-AI tools can improve productivity through increased efficiency and quality of outputs in analytical and writing tasks. 
  • Gen-AI content can amplify biases and violate intellectual-property, privacy and other laws.
  • There are environmental costs because the servers consume vast amounts of power. 
  • There are potential ethical labour implications if the data labelling and annotation required are outsourced to countries with very low wages. 
  • Training data could be outdated, biased or lack a diversity of views. 
  • Gen-AI could pose risks to the integrity and security of federal institutions due to its use by bad actors. 

About the author

Jennifer Campbell is the editor of Sage magazine and Sage60.