AI HAS SOME EXPLAINING TO DO

By Michael Hardy | Spring 2019

SILICON VALLEY STARTUP KYNDI IS BETTING ON A MORE USER-FRIENDLY VERSION OF AI.

As the founder and CEO of Kyndi, one of Silicon Valley’s hottest artificial intelligence startups, Ryan Welsh (MBA ’13) fields plenty of invitations to speak at tech conferences. Recently, he’s taken to starting his presentations by displaying a PowerPoint slide featuring a single, cryptic sentence: “For AI to thrive, it needs to be explainable.” Then he walks off the stage.

After a few moments of confusion, Welsh returns to the stage. “Oh, you want me to explain why?” he asks the now-laughing audience.

To Welsh, the gimmick underscores the fatal flaw in the conventional understanding of artificial intelligence (AI). When you perform an internet search for, say, “Should I buy Apple stock,” Google’s proprietary algorithm will return links to hundreds of thousands of financial websites offering opinions about whether Apple shares are undervalued. It’s up to you to read through those websites and arrive at your own conclusion.

But what if Google could read those websites for you and provide its own analysis? What if you could ask Google why you should (or shouldn’t) invest in Apple?

That’s what Kyndi’s AI software does. Welsh recently demonstrated his system to a prospective client by feeding it hundreds of Apple earnings reports and financial analyses, then asking it to explain why he should invest in the company. “It came back with a bunch of reasons — Apple has good cash flow, it’s only trading at 12 times earnings, etc.,” Welsh recalled. “As opposed to surfacing a bunch of blue links that you have to read through and analyze yourself.”

Advocates of so-called “deep learning,” which is currently the dominant approach to AI, believe they can teach computers to think by feeding them massive quantities of data and asking them to look for patterns. That approach works for many problems, such as image recognition, but it’s far less successful at reading and understanding natural languages like English.

Kyndi takes a hybrid approach by programming its software with basic reading comprehension skills before unleashing it on earnings reports, scientific papers or other research data. “Imagine taking a baby and teaching it to complete a task like graduating from high school, versus teaching a 17-year-old to graduate from high school,” Welsh explained. “With deep learning, you’re constantly starting from birth.”

Welsh’s interest in AI dates to the 2008 financial crisis. At the time, he was working as a quantitative analyst at a law firm that represented several major investment banks. “During the financial boom the banks were creating these esoteric financial products that their own lawyers didn’t understand, so they outsourced that work to other law firms,” Welsh said.

All of those financial products generated paperwork — lots of paperwork. On the weekend that Lehman Brothers declared bankruptcy in September 2008, Welsh remembered having to read around three years’ worth of information in three days.

“That’s where the seed for Kyndi was planted,” he said. “As humans, we can only read at a fixed rate, probably about a page a minute. Yet the amount of information that we need to read is increasing exponentially. It’s one of the biggest bottlenecks in the current production process. So what if we could create machines that allow people to do that?”

READING COMPREHENSION

The problem is that as powerful as computers have become, they’re still terrible at reading comprehension. Take Google Translate, which is good at translating individual sentences but struggles with longer passages. “The system doesn’t actually know what it’s translating, so when you go to several sentences, you start to have context and meaning that the computer can’t understand,” Welsh said. “It’s more abstract and conceptual. A lot of what we do with language, machines don’t understand.”

Kyndi uses proprietary natural language processing software to give its system a leg up over more traditional AI programs. To build that software, Welsh and his colleagues took a page from the past by doing much of their coding in Prolog, a programming language invented in the 1970s that Welsh described as “more logic-driven and more powerful in its abstract reasoning” than modern programming languages such as C++ or Java. Kyndi combines its natural language processor with elements of deep learning AI to create a system capable of critically analyzing thousands of documents.
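
For readers who have never seen Prolog, the sketch below illustrates what “logic-driven” means in practice. The facts and rules are invented for this article, loosely echoing Welsh’s Apple example; they are not Kyndi’s actual code.

    % Facts a system might extract from earnings documents (hypothetical).
    pe_ratio(apple, 12).
    cash_flow(apple, strong).

    % Rules encoding simple investment reasoning.
    reasonably_valued(Company) :- pe_ratio(Company, PE), PE < 15.
    worth_considering(Company) :- reasonably_valued(Company),
                                  cash_flow(Company, strong).

    % The query ?- worth_considering(apple). succeeds, and the chain of
    % rules that fired is itself the explanation: a P/E below 15 plus
    % strong cash flow.

Because conclusions are derived from explicit rules rather than from millions of opaque numeric weights, a logic program can always report why it answered the way it did, which is the property Welsh’s pitch turns on.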

That raises a familiar theoretical question about AI: Does Kyndi’s software actually understand the documents it reads?

Welsh expresses skepticism. “I always push back on anyone who claims their system understands language. That said, if we were to measure understanding by whether or not the user felt understood, I would say our system is significantly better than any other system out there. If a bunch of people ask Siri or Alexa something, I bet 80 or 90 percent of them wouldn’t feel like they’ve been understood.”

With Kyndi, though, users can ask questions in English about any document the system has analyzed and receive back, also in English, relevant replies that show advanced understanding of the subject. After reading dozens of scientific papers about a new technology, Kyndi can answer a question like, “Has this technology been demonstrated in a laboratory setting?” One Fortune 500 pharmaceutical company hired Kyndi to analyze the thousands of audits it conducts each quarter on its production facilities around the world. Previously, a team of 60 people had to spend the next quarter reading all those audits; now, Kyndi’s system does the job in 90 minutes.

‘EXPLAINABLE AI’

Welsh founded Kyndi in San Francisco in 2014 with Arun Majumdar, a protégé of AI pioneers Marvin Minsky and John McCarthy, whom Welsh calls “probably one of the best inventors of the past 50 years. He gave me a crash course in machine learning. I feel like I’ve been in the field for 25 years, whereas I’ve only been in it for five.” Having earned his MBA from Notre Dame, Welsh tapped into the Irish alumni network to get Kyndi off the ground.

“My whole company is a Notre Dame company,” he said. “My first investor was a Notre Dame alum; my first major institutional investor is a Notre Dame alum; my COO, Amy Guarino, is a Notre Dame alum; and probably a third of my engineering team is from Notre Dame.”

With its Series B financing round about to close, Kyndi plans to expand from 30 to 50 employees by the end of the year. Clients have included companies in the health care, defense, intelligence and financial services sectors. Kyndi has been the subject of laudatory stories in Harvard Business Review and The New York Times. If everything goes according to plan, Welsh hopes to take the company public within the next five years. “We’re just going gangbusters,” he said. “We’re starting to catch the wave.”

For Welsh, “Explainable AI” is both a slogan (the company is applying to trademark the phrase) and a business strategy. Kyndi’s first clients have all come from heavily regulated sectors such as health care, finance and government. These are precisely the sectors that have been slowest to adopt AI systems, in part because of the “black box” problem. With most AI algorithms, you feed in massive quantities of data and get back a set of results. But if you ask how the algorithm arrived at those results, you likely won’t understand the answer. The algorithm itself is a black box, opaque to everyone except the programmers who built it.

That becomes a problem when, say, a bank rejects your loan application because its computer algorithm determines you’re a bad credit risk. If you sue the bank, it has to be able to justify its decision in court — not with a mathematical formula but with a set of reasons. “For industries, lack of explainability is the No. 1 hurdle for the adoption of machine learning,” Welsh said. “These companies are in the business of taking calculated risks. But if you can’t calculate the risks, how can you take them?”

An algorithm’s inability to explain itself also has philosophical implications, Welsh argued. “As human beings, our first response to anything is usually, ‘Why?’ That’s literally what makes us human, our ability and desire to understand why things are a certain way.” AI programs that aren’t accountable to human curiosity will never be accepted by most users, he predicted. “We build products for people. ... If we don’t build products that adhere to how we work as people, they won’t be adopted.”

For an example of how not to design products, Welsh cites AI researcher Andrew Ng’s argument that, to avoid collisions with self-driving cars, pedestrians should stop jaywalking and only cross the street at marked crosswalks.

“That’s ridiculous,” Welsh said, laughing. “That’s crazy. We build products to live in our world and make our world better. Not the other way around.”
