The ultimate value of artificial intelligence is in saving people from useless, repetitive, stupid tasks.
For example, one application of AI that I would find to be really useful and cool – although it is currently very costly – is embedding AI into the walls of conference rooms. This would create a conference room that listens the way your smartphone does. It could transcribe notes of the meeting so participants can focus on the conversation without writing everything down. It could also recognize someone saying, “Let’s take a 15-minute break” and then notify the robotic barista to deliver five coffees. Alternatively, it could monitor the CO2 levels in the room and suggest a short break to get fresh air, which would make people more productive when they return.
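To make the CO2 idea concrete, here is a minimal sketch of the kind of rule such a room might run. The threshold, window size, and function names are my own illustrative assumptions, not a description of any real product:

```python
# Hypothetical rule for an ambient meeting assistant: suggest a short
# break when CO2 readings stay elevated. The threshold and window size
# are illustrative assumptions, not a real sensor API.

CO2_THRESHOLD_PPM = 1000   # indoor air quality guidance often cites ~1000 ppm
WINDOW = 3                 # require several consecutive readings to avoid noise

def should_suggest_break(readings_ppm):
    """Return True if the last WINDOW readings all exceed the threshold."""
    recent = readings_ppm[-WINDOW:]
    return len(recent) == WINDOW and all(r > CO2_THRESHOLD_PPM for r in recent)

# Example: CO2 creeping up during a long meeting
samples = [650, 720, 980, 1040, 1100, 1180]
print(should_suggest_break(samples))  # True: the last three readings exceed 1000 ppm
```

Requiring a run of consecutive high readings, rather than reacting to a single spike, keeps the room from nagging occupants every time someone breathes near the sensor.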
When you look at all the things that happen in corporate meeting rooms, this is an area with exciting potential. Anyone who has messed around with cables to connect laptops and turn on conferencing cameras can see the appeal of an artificial intelligence that shifts from “plug and play” to “sense and play.”
If it takes us 10 minutes to get the projector to work, that is 10 minutes we’re not doing something useful. This is an area where AI can provide a tangible improvement in the workplace.
However, this type of smart conference room is also a perfect example of how AI can go wrong. While it is great to have a conference room listening to record notes of a work meeting, you would not want that same AI listening in on a highly sensitive HR meeting or when you step into the conference room for a personal phone call.
You definitely want the ability to turn things on or off.
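One way to make that on/off ability concrete is to require unanimous opt-in before the room listens at all, with a single “stop listening” request acting as a hard off switch. The sketch below is purely illustrative; the class and method names are my own assumptions, not any real product’s API:

```python
# Hypothetical sketch of the "on/off" agency described above: the room
# assistant only transcribes once every expected participant has opted
# in, and any single participant can shut it off immediately.

class RoomAssistant:
    def __init__(self):
        self.listening = False
        self.consents = set()

    def opt_in(self, participant, expected_participants):
        """Record consent; start listening only with unanimous agreement."""
        self.consents.add(participant)
        self.listening = self.consents >= set(expected_participants)

    def stop_listening(self):
        """Any single participant can revoke consent: a hard off switch."""
        self.listening = False
        self.consents.clear()

room = RoomAssistant()
for person in ["Ana", "Ben"]:
    room.opt_in(person, ["Ana", "Ben", "Caro"])
print(room.listening)  # False: Caro has not opted in yet
room.opt_in("Caro", ["Ana", "Ben", "Caro"])
print(room.listening)  # True: unanimous consent
room.stop_listening()
print(room.listening)  # False: hard off switch
```

The asymmetry is deliberate: turning the room on takes everyone, turning it off takes anyone. That default keeps the agency with the occupants rather than the building.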
I recently published an article in the Spring 2019 edition of Work & Place titled, “The ambiance of ambience: How AI changes occupant experience.” In this article, I mention writings by Philip Brey on the topic of whether AI will enhance human autonomy or diminish it:
Brey identified three key ways in which ambient intelligence can foster greater human agency:
- By making environments more responsive to voluntary actions, thus helping people to more easily reach particular goals (like asking a conference room to take notes)
- By supplying people with detailed and personalized information about their surroundings, giving them the ability to interact more successfully with their environment (like having the conference room alert occupants when CO2 levels get too high)
- By allowing the environment to respond to human needs without explicit effort, thus freeing people from “tedious routine tasks” (like automatically connecting your laptop to present in a conference room)
As the flip side to this coin, Brey identified three key ways in which ambient intelligence could take away human control:
- By taking actions that do not correspond to the needs or intentions of users (like recording a confidential conversation)
- By, in effect, telling us how to behave due to making incorrect inferences about a situation (like automatically extending or cancelling a reservation based on misinterpreted statements)
- By not simply representing the needs of the user but also incorporating interests of a third party, such as a corporation (like encouraging less small talk by sending transcripts of meetings to managers)
I think that sometimes we forget to retain our own agency. When we talk about smart equipment in the workplace, we need to be very cognizant of this issue, and be willing to put in the thought and effort to determine when, where, and how to give up or retain agency.
If I weren’t aware of the new technology and suddenly got an email from my conference room with the meeting notes, I would not be happy about it.