Shaping the Future of AI Before It Reshapes Us

Anthropologist Genevieve Bell wants technology creators to ask big questions before deciding what to build and how to build it.

By Paul Boutin

April 23, 2020

AI is already an essential component of many industries, where decisions made by sophisticated software algorithms control physical machinery in the real world. Traffic lights and elevators, for instance, have already become more self-controlling than many of the people who use them realize.

In the coming years, such “cyber-physical systems” will accelerate their reach into our lives, our societies and the physical world. That’s why anthropologist Genevieve Bell is working to change the way many people think about AI and automation, beginning with how they design and plan for the long term.

Bell spent 20 years as a researcher at Intel, where she was the first female senior fellow, a role of distinction that she still holds. She is a futurist and AI practitioner grounded in anthropology, which means her approach is not to dictate answers, but to ask questions.

“We need to take AI safely to scale by thinking about the big questions that cyber-physical systems raise, and how we go about answering them,” said Bell in a call from the Australian National University in Canberra, where she launched an ambitious new program called the 3A Institute (Autonomy, Agency and Assurance). 3Ai launched a master’s program in early 2019.

[Related story: Coronavirus Gives AI and Big Data Chance to Shine]

The distinguished professor is taking a Silicon Valley approach to the 3Ai curriculum: start teaching now and adapt as you learn during the academic year. Her main aim, and that of the institute, is to build a new branch of engineering to bring AI responsibly, safely and sustainably to scale.

Her time in Silicon Valley strengthened her belief in the need to insert the importance of people, and the diversity of our lived experiences, into conversations about technology and the future.

“Since returning home to Australia in 2017 and establishing the 3Ai, I have been increasingly struck by the complicated dance of being human in this world I was helping make digital, and what we could and should be doing differently,” she said.

Bell listed several big-picture questions AI creators should ask before deciding what sort of AI to build and how to judge whether it is succeeding.

How autonomous should our AI systems be? How much agency should they have? These are fundamental questions. She suggests that AI creators deeply consider what, and even whether, to build before they begin thinking about implementation.

“The concept of autonomy in cyber-physical systems is different from autonomy in humans,” she said, and different teams may have different ideas of how to approach autonomy.

“Google’s autonomous vehicles are different from Tesla’s, are different from Volvo’s,” she said.

A favorite fictional example of Bell’s is the sci-fi comedy The Hitchhiker’s Guide to the Galaxy, in which a sentient elevator resentful of its lack of agency spends its time sulking in the basement and trying to persuade riders to let it choose their destination. The joke works for Bell because it’s easy to visualize engineers who didn’t consider that giving their creation too much intelligence might backfire.

People expect a smart elevator to be only so smart, she said.

“It can go up and down, but no one is going to imagine that the elevator is going to walk out of the building and go down to the pub for a beer.”    

Assure Systems Are Safe and Successful

“We think about risk, liability, ethics, standards, policy and regulatory settings,” Bell said. But these issues must be considered at scale, and across disparate regulatory regions where government agencies will dictate different answers to the same questions, much as the European Union set its own rigid GDPR standards for how organizations around the world handle data about EU users.

[Related story: GDPR Enters Year 2]

What does success look like, and how can it be measured? These are big-picture questions that force designers to understand the impact on humanity and the environment.

“I’m particularly concerned about questions around energy consumption and safety versus productivity,” Bell said.

“There are questions about whole-of-society engagement.”


Elevators are a perfect everyday example, requiring autonomous controllers to choose between conserving energy and getting people to their destinations quickly, either of which could be deemed a success. Engineers need to consider ahead of time what can and should be measured to evaluate their systems’ performance.
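To make that tradeoff concrete, consider a minimal sketch of how a controller might score its options. This is purely illustrative, not code from Bell’s institute or any real elevator system; the action names, weights and numbers are invented assumptions. It shows how identical controller logic reaches different decisions depending on which definition of success the designers encode.

```python
# Hypothetical sketch of an elevator controller weighing energy use
# against passenger wait time. All names, weights and numbers are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_wait_s: float  # average passenger wait this action implies
    energy_kwh: float       # energy this action is expected to consume

def cost(action: Action, wait_weight: float, energy_weight: float) -> float:
    # Lower is better: a weighted sum of the two competing metrics.
    # Choosing these weights is the "what counts as success" decision
    # Bell argues must be made before the system is built.
    return wait_weight * action.expected_wait_s + energy_weight * action.energy_kwh

candidates = [
    Action("dispatch nearest idle car", expected_wait_s=12.0, energy_kwh=0.05),
    Action("hold and batch riders", expected_wait_s=45.0, energy_kwh=0.02),
]

# The same candidates, judged under two different definitions of success.
for label, wait_w, energy_w in [("people-first", 1.0, 10.0),
                                ("energy-first", 0.01, 100.0)]:
    best = min(candidates, key=lambda a: cost(a, wait_w, energy_w))
    print(f"{label}: choose '{best.name}'")
```

Under the people-first weighting the controller dispatches a car immediately; under the energy-first weighting it holds and batches riders. Same system, different success.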

Design for Human Interfaces and Diversity

“We have an embodied memory about how these things should work,” Bell said. “We have muscle memory. It would be very weird to get into an elevator without buttons. You go in expecting to push them.”

[Related story: 17 Exciting Facts about AI in Canada]

Any newly automated system that doesn’t behave as people have come to expect requires careful thought. Bell cited a sign at an urban train crossing: “Traffic signals may vary.” Drivers accustomed to mentally timing lights may be caught off guard by the light’s irregular cycles through red and green. They may even be confused by the sign.

Bell is also concerned by a common response to such potentially dangerous autonomy.

“You often hear, ‘We’re going to keep a human in the loop as a failsafe,’” she said. “I’m struck by two things about that: Which human do we think this is, and which loop do we think that is?”

Human intervention may not always be available when and where it’s needed, and in many cases handing off to a human invites even worse outcomes. A trained expert monitoring a radar screen for potential problems in the sky has trouble staying focused beyond 20 minutes, she said.

Bell said a diversity of ideas must be designed into new systems.

“We want as many different voices and opinions and thoughts as we can find,” she said.

“The world that’s coming is not a homogenous one. In addition to the faculty, we have playwrights, artists, politicians in legislatures and people building trucks.”

While these are foundational questions, more are sure to arise as the use cases for autonomous machines expand and our relationship with them evolves.

“We are educating each other,” Bell said.

 


Paul Boutin is a contributing writer. Find him on Twitter @paulboutin.
