New AI tools like Cogito are designed to remind us to be “human,” issuing real-time prompts and warnings that nudge us toward empathy and compassion.
If your all-day job is to be empathetic, burnout is only human.
Few people know this better than customer service representatives, who have to approach every conversation with energy and compassion – whether it's their first call of the day or their 60th. It's their job to make even the most difficult callers feel heard, understood, and respected, all while providing them with accurate information. That is often a tall order, and it leads to frustration on both sides of the call.
In recent years, however, an unlikely helper has emerged: artificial intelligence tools designed to help people exercise and maintain “human” traits such as empathy and compassion.
One of these tools is a platform called Cogito, named after Descartes's famous proposition Cogito, ergo sum (“I think, therefore I am”). It is an AI platform that monitors sales and service calls for large companies (including MetLife and Humana) and gives employees real-time feedback on their customer interactions.
During a call, an employee may see Cogito pop-up prompts on their screen encouraging them to show more empathy, raise their vocal energy, speak more slowly, or respond more quickly. Interactions are captured and tracked on in-house dashboards, and managers can see at a glance what different members of their team may need to work on.
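Cogito's actual models are proprietary, but the kind of real-time prompting described above can be sketched as simple threshold rules over per-window speech features. Everything here – the feature names, thresholds, and cue wording – is invented for illustration:

```python
# Hypothetical sketch, NOT Cogito's real algorithm: map simple speech
# features measured over a short audio window to coaching cues.
from dataclasses import dataclass

@dataclass
class SpeechWindow:
    words_per_minute: float   # speaking rate in the last window
    response_delay_s: float   # silence before the agent replied
    energy_db: float          # vocal energy relative to the agent's baseline

def cues_for(window: SpeechWindow) -> list[str]:
    """Return the cues that would pop up for this audio window."""
    cues = []
    if window.words_per_minute > 180:
        cues.append("slow down")
    if window.response_delay_s > 4.0:
        cues.append("respond sooner")
    if window.energy_db < -12.0:
        cues.append("raise vocal energy")
    return cues

# A rushed, delayed, low-energy window triggers all three cues.
print(cues_for(SpeechWindow(words_per_minute=200,
                            response_delay_s=5.5,
                            energy_db=-15.0)))
```

A production system would derive these features from streaming audio and learn the thresholds from data rather than hard-coding them.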
Conor Sprouls is a call center representative in MetLife's Disability Insurance Department and uses Cogito all the time. On a typical day, he answers between 30 and 50 calls. Each lasts between five and 45 minutes, depending on the complexity of the problem.
Sprouls's first caller on the morning of September 12, 2019 was someone with an anxiety disorder, and Cogito pinged him once with an empathy cue and a few times for responding too slowly (not unusual when looking up someone's claim, Sprouls says).
When Cogito first launched, some employees worried about constant oversight and notification overload. The empathy cue appeared too often, for instance, and at one point the tool thought a representative and a customer were talking over each other when they were actually laughing together. But Sprouls says the system gets more intuitive with every call. As for monitoring, call center calls have always been recorded and reviewed by supervisors, so that isn't a big change.
According to Sprouls, Cogito may even offer a more accurate picture of performance. “A manager can't be expected to listen to every call from every one of their employees. So if we only sample random calls, it comes down to luck: one employee might be monitored on an easy call and another on a hard one,” he says. “Cogito gives you the bottom line: who needs to work on what. I think many of us see Cogito as a personal job coach.”
MetLife has been using Cogito for about two years, although it was first introduced as a pilot project.
Emily Baker, a MetLife supervisor with a team of around 17 employees, says her staff all benefited from Cogito's cues during the pilot. One employee's favorite, she says, was the energy cue. Toward the end of the day he would start to slump in his seat, and the posture meant he wasn't projecting his voice as much. When the energy cue (a coffee cup icon) appeared, he would straighten up and speak more vigorously, becoming more engaged with the call.
“I like the fact that in my supervisor dashboard I can see, as a whole, how we're developing as a team and whether there are trends,” says Baker. “Is everyone talking over the caller? Are they having problems with dead air? You can drill down into any individual, and it's really useful for one-on-one coaching.”
Now MetLife is in the process of rolling Cogito out to even more of its customer-facing departments – claims, direct sales, customer growth. The company also plans to more than double the number of employees using the platform, from 1,200 to over 3,000.
“It's kind of a strange dynamic,” says Kristine Poznanski, head of global customer solutions at MetLife. “We're using technology and artificial intelligence to help our employees demonstrate more human behavior. It's not something you'd intuitively expect.”
A growing trend
Josh Feast, co-founder and CEO of Cogito, learned while consulting for New Zealand's Department of Children and Family that social workers can burn out in as little as three to five years. He was struck by the irony: a profession built on caring for people was not conducive to caring for the people in it.
An idea began to take shape, and it sharpened after a course at the MIT Media Lab, where Feast gained a key insight: large companies understand data. If he wanted to help people inside a large company, he would have to present his idea in a language the corporate team could understand. “It was almost like a lightning bolt,” he says.
And so Cogito was born. During the R&D phase, Feast and his co-founder worked with DARPA, the U.S. government's Defense Advanced Research Projects Agency. With soldiers struggling with PTSD in mind, DARPA gave the Cogito team resources to research mental health support. Feast began by studying how nurses engaged with patients.
“There was a real ‘aha’ moment when we discovered that if you could use this technology to understand the conversation and measure the dance between the nurse and the patient, you could read the level of empathy and compassion they showed … and the patient's resulting attitude toward the interaction,” says Feast.
He built dashboards showing levels of compassion and empathy and found something remarkable: when people received real-time feedback while talking with someone, their compassion and empathy improved over the course of the conversation. That finding was the key to Cogito's future.
Cogito is not the only AI-based tool designed to help us cultivate our humanity, however.
Then there is Butterfly, an AI tool designed to help managers empathize with their employees and boost job satisfaction. Embedded in a workplace messaging system, it functions as a chatbot, giving managers real-time advice based on employee surveys and feedback. Butterfly analyzes that feedback to gauge stress, collaboration, conflict, purpose, creativity, and the like. Managers then receive action prompts and reading material to address problems on their team. A manager with an overworked team, for example, might receive an article on creating a more compassionate work environment.
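Butterfly's analysis is proprietary, but the survey-to-alert step it describes can be illustrated with a toy aggregator: average pulse-survey scores per dimension and flag the dimensions that fall below a threshold. The dimension names, scale, and threshold below are invented for illustration:

```python
# Illustrative sketch only, not Butterfly's real analysis: aggregate
# pulse-survey scores (1-5, higher = better) per dimension and flag
# dimensions whose team average falls below a threshold.
from statistics import mean

def flag_dimensions(responses: dict[str, list[int]],
                    threshold: float = 3.0) -> dict[str, float]:
    """Return dimensions whose average score is below threshold."""
    return {dim: round(mean(scores), 2)
            for dim, scores in responses.items()
            if mean(scores) < threshold}

team = {
    "stress_management": [2, 3, 2, 1],
    "collaboration":     [4, 4, 5, 3],
    "purpose":           [2, 2, 3, 2],
}
# Only the low-scoring dimensions come back, ready to pair with
# suggested reading or action prompts for the manager.
print(flag_dimensions(team))
```

A real system would also weight recency, anonymize small samples, and track trends over time rather than single snapshots.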
“In short, Butterfly was designed to help managers stay up to date on their team's engagement and overall satisfaction,” says David Mendlewicz, co-founder and CEO. “Think of it as an AI-driven happiness assistant or an AI-driven leadership coach.”
Another AI-based empathy tool is Supportiv, a peer counseling platform that uses natural language processing to address everyday psychological struggles such as work stress, anxiety, loneliness, and family conflict. Seconds after a user answers Supportiv's central question – “What's your struggle?” – they are placed in an anonymous, topic-specific peer support group.
Each group has a trained live moderator (who can also surface specialized or emergency resources if needed). An AI algorithm scans conversations to identify mood, tone, engagement, and interaction patterns, and prompts appear on the moderator's side – user X hasn't contributed to the conversation in a while, or user Y shared a thought above that went unaddressed. Co-founder Helena Plater-Zyberk's vision for the next iteration of Supportiv: further AI advances that could identify isolated users in chats and alert moderators with suggestions on how to engage them more empathically.
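The routing step – turning a free-text answer to “What's your struggle?” into a topic-specific group – can be sketched as keyword overlap. Supportiv's actual NLP is certainly more sophisticated; the topic names and keyword lists here are invented for illustration:

```python
# Hypothetical sketch of topic-based peer matching, loosely modeled on
# the routing described in the article. Not Supportiv's real pipeline.

TOPIC_KEYWORDS = {
    "work-stress": {"work", "job", "boss", "deadline", "burnout"},
    "loneliness":  {"lonely", "alone", "isolated", "friends"},
    "family":      {"family", "parents", "marriage", "divorce"},
}

def match_group(struggle: str) -> str:
    """Pick the support group whose keywords overlap the answer most."""
    words = set(struggle.lower().split())
    best = max(TOPIC_KEYWORDS,
               key=lambda topic: len(words & TOPIC_KEYWORDS[topic]))
    # Fall back to a general group when nothing matches at all.
    return best if words & TOPIC_KEYWORDS[best] else "general"

print(match_group("My boss keeps piling on work and I feel burnout coming"))
```

A production matcher would use embeddings or a trained classifier instead of exact word overlap, so that "overwhelmed at the office" still lands in the work-stress group.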
The goal, says Plater-Zyberk, is to create “superhuman moderators” – people with the compassion, empathy, and hyper-vigilance to facilitate a group chat better than any ordinary person could.
IBM Project Debater
When it comes to the “I think, therefore I am” idea, IBM's Project Debater fits right in. The AI system, introduced by the technology giant in January, is the first capable of debating complex ideas with people. At its core, Debater is about rational thinking and empathy – considering opposing viewpoints and understanding an opponent well enough to address their arguments point by point and ultimately win them over.
Dr. Aya Soffer, vice president of AI technology at IBM Research, envisions a variety of practical applications for Debater – say, a policymaker who wants to understand the implications of a law under consideration. What are the precedents, the pros and cons, the arguments on both sides of banning phones from schools (a law the French government passed in 2018)? A financial analyst or investment advisor could use Debater to make smarter predictions about what a new technology may or may not mean for the market.
We usually marshal arguments to convince ourselves or someone else of something. But Soffer says engaging with counterarguments can be even more effective, whether to change an opinion or to strengthen an existing one. This kind of higher-level empathy and reasoning is what IBM hopes Debater can help with.
Pitfalls and privacy
As with any new technology, this kind comes with pitfalls.
First, the data used to train an algorithm may contain systemic biases. Training a system predominantly on how white men express empathy, for example, could produce one that scores women and minorities lower. And a call center agent with an illness may show less energy than the perceived norm while doing their best to compensate in other ways.
For this reason, individuals should receive this data before it is shared with their superiors, says Rosalind Picard, founder and director of the Affective Computing research group at the MIT Media Lab. She believes it is unethical for data about an employee's interactions – their compassion, empathy, energy – to go to a manager first.
And then there is the temptation for this type of technology to drift beyond its intended use case – a helpful reminder to make a genuine connection – and instead drive insincere interactions fueled by fear. Similar tools, after all, underpin social rating systems (think of the Black Mirror episode “Nosedive”). In 2020, China plans to debut publicly available social credit scores for every citizen. The score helps determine a person's eligibility for an apartment, the travel deals they are offered, the schools their children can attend, and even whether they can see a hospital doctor without paying in advance.
Experts predict that over the next five years we will make great strides in sentiment analysis – technology that identifies human emotions by analyzing facial expressions, body language, or text responses.
However, Noah Goodman, associate professor at Stanford University's Computation and Cognition Lab, sees a moral dilemma: what is the right thing to do with what these systems learn? Should they have goals – prompting us to adapt our environment, or serving us tools meant to make us happier, more compassionate, more empathetic? What should technology do with data about how we feel about others, or about how we perform in a given interaction? And who should it share that information with? “This is a place where the creepy line is always close,” says Goodman.
Another problem? AI simply cannot replicate or fully understand human emotions. Take Cogito. Say you're a customer service representative who has been on the phone all day, and you get an alert that you sound low-energy and tired rather than energetic and attentive. That doesn't mean you actually feel tired, Picard says – and that's an important distinction.
“It doesn't know how I feel,” says Picard. “It has no awareness. It simply means that, according to the data this system has collected by listening to your voice quality – compared with your usual voice quality, and with other people on the phone at this company – you may sound a certain way … It is not saying that you are that way.”
There is a misconception that we have already reached the point where AI actually understands human emotions, rather than merely analyzing data and recognizing patterns in it. The term “artificial intelligence” itself may spread that misconception, Picard says. To avoid stoking public fear about the future of AI, she recommends simply calling it software instead.
“As soon as we call the software ‘AI,’ many people think it's doing more than it is,” she says. “When we say the machine ‘learns,’ and that it ‘learned’ something, we mean that we've trained a lot of math to take a set of inputs and produce a mathematical function that generates a set of outputs from them. It cannot ‘learn’ or ‘know’ or ‘feel’ or ‘think’ the way any of us can. It isn't alive.”
Implications and regulations
Some experts believe the day will come when technology can understand and replicate “uniquely human” traits. The idea falls under the computational theory of mind – that the brain is essentially an information-processing machine, and that even complex emotions such as compassion and empathy can be captured as data. But even if that's true, there is a difference between experiencing emotions and understanding them – and, according to Goodman, it may one day be possible to build AI systems that understand people's emotions well without experiencing any emotions themselves.
There is also the fact that, throughout history, the introduction of new technologies has often been accompanied by fear. “We're always afraid when something new comes out, especially if it has a large technological component,” says Mendlewicz. “Exactly the same fear arose when the first telegraph appeared … and when the telegraph was replaced by the phone, people expressed fear too … that [it] would make us less human – that we'd have to communicate with a machine.”
One of the most important questions to ask is: how do we avoid using these tools to alienate people, or to create more distance between them?
A prime example is social media platforms, which were introduced to improve human connection but paradoxically ended up as tools of polarization. “What we learned from that is that human connection, and the humanity of technology, shouldn't be assumed. It has to be cultivated,” says Rumman Chowdhury, who leads Accenture's responsible AI initiative. “Instead of figuring out how we fit the technology, we have to figure out how the technology fits us.”
That also means watching for red flags, including the tech-“solutionism” fallacy – the idea that technology can solve every human problem. It can't, but it can point to the things we need to focus on as we work toward more comprehensive solutions.
“We humans have to be willing to do the hard work,” says Chowdhury. “Empathy doesn't just happen because an AI told you to be more empathetic … [Let's say] I create an AI to read your emails and tell you whether they sound nice enough, and if not, it fixes them for you so they do. That doesn't make you a nicer person. It doesn't make you more empathetic … Building one of these AIs around improving people has to be planned very carefully, so that people still do the work.”
Some of this work involves developing systems to regulate this type of AI before it spreads. Experts have already begun floating ideas.
For every AI tool, Chris Sciacca, communications manager at IBM Research, would like to see an “AI fact sheet” that works like the nutrition label on a loaf of bread, listing who trained the algorithm, when, and with what data. That way you can look under the hood – or even into the black box – of an AI tool, understand why a certain result may have occurred, and remember to take the results with a grain of salt. He says IBM is working to standardize and promote such a practice.
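What might such a nutrition-label-style fact sheet look like in practice? Here is one hypothetical rendering as structured data. Every field and value is invented for illustration; IBM's actual FactSheets proposal defines its own set of questions:

```python
# A hypothetical "AI fact sheet" in the nutrition-label spirit Sciacca
# describes. All fields and values below are invented examples.
import json

fact_sheet = {
    "model_name": "call-empathy-cues-v1",   # invented example model
    "intended_use": "real-time coaching cues for service calls",
    "trained_by": "vendor data science team",
    "training_data": "consented recorded service calls",
    "last_updated": "2019-06-01",
    "known_limitations": [
        "voice-energy norms may not fit speakers with illnesses",
        "trained mostly on one dialect of English",
    ],
}

# Serialized like this, the label travels with the model and lets a
# buyer "look under the hood" before trusting its outputs.
print(json.dumps(fact_sheet, indent=2))
```

The useful property is less the exact fields than the habit: shipping provenance and known limitations alongside the model, the way ingredients ship with food.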
Picard proposes regulations comparable to those governing lie detector tests, such as the 1988 federal law protecting workers from polygraphs. Under a similar law, most employers would simply not be allowed to use AI communication-monitoring tools – and even the few exceptions could not monitor anyone without informing them about the technology and their rights.
Spencer Gerrol, CEO of Spark Neuro – a neuroanalytics company that aims to measure emotion and attention for advertisers – says the potential impact of this kind of emotion-sensing AI keeps him up at night. Facebook created “amazing” technology, but it also helped enable interference in the US election. And when it comes to devices that can read emotions from your brain activity, the consequences could be worse, especially since so much emotion is unconscious. One day, a device may be more “aware” of your emotions than you are. “The ethics get complex,” says Gerrol, especially when advertisers try to get individuals to act based on what is known about their emotions.
As for the founder of Cogito himself? Feast expects AI tools to fall into two categories over the next five to ten years:
Virtual agents that do tasks for us.
Smart augmentation, or services that aim to strengthen or extend our own human capabilities.
Feast imagines a fusion of human and machine: tools we come to consider necessary for achieving the performance we want in certain settings. These kinds of tools, he says, will “broaden and strengthen our humanity.”