

Emerging Artificial Intelligence (AI) Leaders: Rana el Kaliouby, Affectiva


“Without our emotions, we can’t make smart decisions,” says Rana el Kaliouby. In the field of artificial intelligence, this is sheer heresy. Isn’t the goal of AI to create a machine with human-level intelligence but without the human “baggage” of emotions, biases, and intuitions that only get in the way of smart decisions?

As the co-founder and CEO of Affectiva, el Kaliouby is on a mission to expand what we mean by “artificial intelligence” and create intelligent machines that understand our emotions. Surveying the evolution of how we have interacted with computers, she asks “what’s the next more natural interface?” and answers “conversational and perceptual.”

Inducting el Kaliouby into its 2017 class of Young Global Leaders, the World Economic Forum called her “a pioneer of industry who's spearheading digital applications of emotion-sensing and facial coding.” The new AI category she and her team at Affectiva are creating is “Emotion AI,” defining a new market by pursuing two goals: allowing machines to adapt to human emotions in real time, and providing insights and analytics so organizations can understand how people engage emotionally in the digital world.

Founded in 2009 and backed by $34 million raised to date from Kleiner Perkins Caufield & Byers and Fenox Venture Capital, among other investors, Affectiva has worked with more than 1,400 brands, helping them evaluate and improve the effectiveness of their marketing and advertising activities (e.g., CBS and Mars). In addition, Affectiva has provided its software to developers so they can add emotion sensing to their apps and devices (e.g., GIPHY, tagging GIFs with emotions and reactions; shelfPoint, capturing customer engagement and sentiment data in the retail store; Mabu, a personal healthcare robot assistant; and Brain Power, helping teach social and cognitive skills to children and adults with autism).

It all started with Rosalind Picard. El Kaliouby can pinpoint the exact moment when her eyes opened to the possibilities and potential of empowering computers with empathy. After graduating at the top of her class at American University in Cairo and earning a merit scholarship to pursue graduate studies, she was looking for a computer science area to specialize in. Then she read Picard’s Affective Computing, published in 1997, and became “super-fascinated by the idea that a computer can read people’s emotions. It’s something very very human.”

When she arrived at the University of Cambridge in 2001 to start her PhD studies, she had already decided to focus on facial expressions. While she found that the field of human-computer interaction (HCI) “wasn’t focused on emotions,” she also discovered that Cambridge had a prominent autism research center that had developed a large dataset of labeled emotions (performed on video by actors) to help teach autistic kids how to read facial expressions. El Kaliouby realized she could use this dataset to train her algorithm to do the same. She also realized that “this is not about human-computer interaction, this is about human to human communications.” In other words, she flipped HCI on its head: from making computers less computer-like (easier for humans to use) to making computers more human-like (understanding human emotions).

For her dissertation, el Kaliouby used the autism research center’s data to train a computer model to recognize complex mental states accurately and in real time, with “an accuracy and speed that are comparable to that of human recognition.” Having a practical bent, she also published a concept paper proposing a device that would give kids real-time feedback on the emotional state of the person they are interacting with. That became the basis for an NSF grant that sponsored her work with Picard and the Affective Computing research group at MIT’s Media Lab over the next three years.

El Kaliouby’s goal was to pursue an academic career—“my entire path was optimized to do that”—but the real world beckoned. When she demonstrated her research to the Media Lab sponsors, corporations such as Toyota and Fox News, they kept bringing up practical applications such as testing TV programming and embedding the software in cars. In 2009, when el Kaliouby and Picard asked Frank Moss, the lab director at the time, for more research resources to develop these applications, he told them “this is not a research problem anymore. You ought to start a company.”

And they did. El Kaliouby says she was becoming frustrated with academia, with the cycle of building a prototype, testing it with a small sample, writing a paper, presenting at a conference, and moving on to the next thing. “It’s not going to change anyone’s life,” she felt. Picard and el Kaliouby launched Affectiva as “a sustainable organization that actually brings something to the market. We will succeed if we manage to change the way people make decisions, incorporating emotional data.”

Moving from the controlled lab environment to the real world resulted in the collection of real-world data, the kind that could be used to train algorithms to recognize facial expressions in less-than-perfect conditions. That has turned out to be a competitive advantage for Affectiva, which has built a unique database of more than 5 million videos shot in 75 countries, or about 2 billion facial frames of real people expressing real emotions.

The massive amount of data has recently allowed Affectiva to take advantage of deep learning, the latest approach to machine learning, resulting in improved accuracy. For el Kaliouby and her team, the move from a research environment to the business world has not reduced the pressure to “push the state of the art.” And just like other participants in the race to improve AI who left academia for the business world, they continue to participate in the academic race to publish or perish.

Sharing the results of their work is not limited, however, to advancements in AI. Given the breadth of their data collection, they can quantify the impact of culture, gender, age, and even context (e.g., people relaxing at home or driving a car) on the way people express their emotions. For example, they found that in more collectivist cultures, people dampen their emotions in group settings but are very expressive when they are at home alone. In more individualistic cultures, such as those of North America and Europe, it’s the opposite: people are more expressive in group settings than when they are by themselves. Practically, it means that “we have to build a very specific benchmark for each geographical area,” says el Kaliouby.

While continuing to build its expertise in collecting, coding, and analyzing facial expressions (currently at 20 facial expressions representing seven emotions), Affectiva has started analyzing speech, an additional indicator of human emotion. It’s an important step towards the conversational future, a more natural mode of human-computer communications, where we have a dialog with our devices rather than just commanding them, says el Kaliouby.

It’s a future where technology is enriched with emotion: Smartphones that react to your mood; cars that sense fatigue, distraction and driver/passenger mood; social robots and IoT devices that respond with empathy.

It’s a future in which we may better understand what constitutes our “intelligence” and how to make computers even better helpers than they are today.

“Expression in itself, or the language of emotions… is certainly of importance to the welfare of mankind,” wrote Charles Darwin in The Expression of the Emotions in Man and Animals. But when computers were invented as tools for improving our welfare, their first successful application as calculating devices led many in the AI community (and beyond) to focus on logic and see emotions as obstacles to rational decision making. Intelligence became equated with calculation.

To which Rosalind Picard responded in Affective Computing: “A healthy balance of emotions is integral to intelligence, and to creative and flexible problem solving… If we want computers to be genuinely intelligent, to adapt to us, and to interact naturally with us, then they will need the ability to recognize and express emotions…”

If el Kaliouby and other researchers and practitioners expanding AI today succeed in their endeavors to teach computers the language of emotions, we may also have a future in which AI finally abandons the distracting goal of creating “human-level intelligence” and is simply focused on making computers better tools for improving our welfare. A future in which AI stands for Augmented (human) Intelligence.

See also:

Emerging Artificial Intelligence (AI) Leaders: Richard Socher, Salesforce

Artificial Intelligence Pioneers: Peter Norvig, Google
