TWO years ago IBM attracted a lot of admiring publicity when its “Watson” program beat two human champions at "Jeopardy!", an American general-knowledge quiz. It was a remarkable performance. Computers have long excelled at games like chess: in 1997 Deep Blue, another of the computer giant's creations, famously beat the reigning world champion Garry Kasparov. But "Jeopardy!" relies on the ability to correlate a vast store of general knowledge with often-punny, indirect clues. Making things harder still, the clues themselves are, famously, phrased as answers, to which contestants must supply an appropriate question.
Yet IBM has always had bigger plans for its artificial know-it-all than beating humans at quiz shows. On February 8th it announced the first of them. Together with the Memorial Sloan-Kettering Cancer Centre and Wellpoint, a health company, it plans to adapt the system for oncologists, with trials due to begin in two clinics. The idea is to use the machine as a sort of prosthetic brain for doctors, by delegating to it the task of keeping up with medical literature.
What is really impressive about Watson is not so much that it thrashes humans, but how it does so. The machine extracts “meaning” from vast quantities of what computer scientists call unstructured data, which essentially means anything designed to be consumed by humans rather than computers. To prepare for its "Jeopardy!" appearances, the program was fed (among other things) dictionaries, archives of newspaper articles, lexical databases of English and the whole of Wikipedia. From these it was able to extract relationships between concepts and become deft enough with metaphors, similes or puns that it could cope with the show’s elliptical clues.
It is this ability to process human-oriented information that IBM hopes will be useful for doctors. The volume of medical research is huge and growing. According to one estimate, to keep up with the state of the art, a doctor would have to devote 160 hours a week to perusing papers, leaving eight hours for sleep, work and, well, everything else in life. Fortunately, Watson doesn't need any sleep.
IBM's ultimate goal is for Watson—or a small computer running the front-end, since the processing itself will take place on an internet-connected supercomputer—to compare patient notes with the information harvested from medical journals, treatment guidelines, etc. It would then suggest several treatment regimens, ranked by how effective it thinks they are likely to be. Watson may even suggest clinical trials that the patient could be enrolled in.
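To make the ranking idea concrete, here is a minimal sketch in Python of how candidate treatments might be scored against evidence drawn from the literature and ordered for the doctor. It is not IBM's actual method; the regimen names, study counts, response rates and weighting are invented purely for illustration.

```python
# Minimal sketch (not IBM's method): rank candidate treatments by a crude
# evidence-weighted effectiveness score. All numbers are invented.
from dataclasses import dataclass


@dataclass
class Candidate:
    regimen: str
    supporting_studies: int    # hypothetical count of studies matching the patient profile
    mean_response_rate: float  # hypothetical pooled response rate from those studies


def rank_treatments(candidates):
    """Order candidates by response rate, discounted when evidence is thin."""
    def score(c):
        evidence_weight = min(c.supporting_studies / 10.0, 1.0)
        return c.mean_response_rate * evidence_weight
    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    options = [
        Candidate("Regimen A", supporting_studies=12, mean_response_rate=0.55),
        Candidate("Regimen B", supporting_studies=3, mean_response_rate=0.70),
        Candidate("Regimen C", supporting_studies=25, mean_response_rate=0.40),
    ]
    for c in rank_treatments(options):
        print(c.regimen)
```

The real system would, of course, derive such scores from the literature itself and justify each suggestion; the point of the toy scoring rule is only to show what "ranked by how effective it thinks they are likely to be" could look like in practice.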
Every one of its recommendations will be based on data in the medical literature, and the human doctor can ask the computer to show how it arrived at a conclusion, linking back to the original data. If the doctor disagrees, or wishes to add any constraints, he can tell the program by speaking into a microphone. For now, though, the plan is to use the machine for "utilisation management", American health-care jargon for deciding whether a treatment is appropriate given the state of medical knowledge—and, therefore, whether it will be paid for by an insurance company.
Ensuring that doctors can stay current with the latest developments in their field sounds like an excellent idea. And it should be easily extendable: a computer that can crunch large amounts of natural-language data has obvious uses in law, politics and academia. But having computers decide whether to grant insurance coverage already grates with some patients, a reaction that is unlikely to improve simply because IBM's new machine excels at the job. How doctors will react to the presence of an electronic clever-clogs on their desks also remains to be seen. For although no one is claiming that Watson can replace human medics, it is another instance (alongside legal work and journalism) of computers encroaching into the sorts of white-collar jobs previously thought to be the preserve of two-legged biological computers.
AI Interview for Online Appointments
IT’S NOT HARD to see the appeal of an online doctor’s appointment. You don’t have to risk catching germs from other patients. And you don’t have to commute to the doctor’s office, or sit around the waiting room flipping through lousy magazines while listening to screaming kids.
But for doctors, these appointments take just as much time as in-person visits, says physician Ray Costantini. In fact, he says that while he was running the telehealth program at Providence Health and Services, he noticed that online appointments could take even longer. If that's true, it undermines one of the key arguments for online consults: if the visits take just as long, they don't cost any less.
It’s also why Costantini co-founded Bright.md, a Portland, Oregon-based telehealth startup that aims to automate as much of an online doctor’s visit as possible, cutting the total time a doctor spends on each appointment from about 20 minutes to as little as 90 seconds. Costantini hopes that in the process, it could cut the cost of everyday health care dramatically. The company announced today that it has raised a $3.5 million round of funding to help it meet that goal.
Before you actually talk to your doctor, Bright.md’s app will guide you through a “smart exam” to gather basic data. The app dynamically adapts the questions according to your answers—not unlike online dating sites of the OkCupid variety. Using a proprietary artificial intelligence system, it will give your doctor a preliminary diagnosis and treatment plan.
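One crude way to picture a "dynamically adapting" exam is as a branching question tree, where each answer decides which question comes next. The sketch below is a toy illustration in Python, not Bright.md's software; the questions, answers and branching rules are all invented.

```python
# Toy branching questionnaire (not Bright.md's actual exam logic).
# Each node holds a question and a map from answer to the next node;
# None ends the interview.
EXAM = {
    "start": ("Do you have a fever?", {"yes": "fever_duration", "no": "cough"}),
    "fever_duration": ("Has the fever lasted more than three days?", {"yes": None, "no": "cough"}),
    "cough": ("Do you have a cough?", {"yes": None, "no": None}),
}


def run_exam(answers):
    """Walk the question tree using pre-supplied answers; return the path taken."""
    node, path = "start", []
    while node is not None:
        question, branches = EXAM[node]
        answer = answers.get(node, "no")
        path.append((question, answer))
        node = branches[answer]
    return path


if __name__ == "__main__":
    for question, answer in run_exam({"start": "yes", "fever_duration": "no", "cough": "yes"}):
        print(f"{question} -> {answer}")
```

A production system would draw on a far larger question bank and clinical logic, but the branching pattern is the same: later questions depend on earlier answers, so patients are not marched through an identical fixed form.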
Costantini stresses the smart exam is only the first part of the online appointment. After responding to the software’s questions, patients will always speak with their doctor. “Patients want to get care from their doctor, not from a computer,” he says. In some cases, the smart exam may determine that an online consult isn’t adequate and that the patient should make an in-person appointment.
Bright.md isn’t the only company trying to give doctors AI assistance. The startup Enlitic is using cutting-edge AI technology to help doctors diagnose patients, while cancer researchers are using IBM’s Watson platform to find new treatments. But while an artificially intelligent diagnosis system is the catchy part of Bright.md’s pitch, its biggest value to doctors might come from its other features. In addition to the “smart exam,” the company’s software automatically generates chart notes and helps manage other paperwork, such as insurance coding. In short, it automates the most repetitive parts of a physician’s job, freeing doctors to focus on treating patients.
Watson Augmented
IBM says that Watson, its artificial-intelligence technology, can use advanced computer vision to process huge volumes of medical images. Now Watson has its sights set on using this ability to help doctors diagnose diseases faster and more accurately.
Last week IBM announced it would buy Merge Healthcare for a billion dollars. If the deal is finalized, this would be the third health-care data company IBM has bought this year (see “Meet the Health-Care Company IBM Needed to Make Watson More Insightful”). Merge specializes in handling all kinds of medical images, and its service is used by more than 7,500 hospitals and clinics in the United States, as well as clinical research organizations and pharmaceutical companies. Shahram Ebadollahi, vice president of innovation and chief science officer for IBM’s Watson Health Group, says the acquisition is part of an effort to draw on many different data sources, including anonymized, text-based medical records, to help physicians make treatment decisions.
Merge’s data set contains some 30 billion images, which is crucial to IBM because its plans for Watson rely on a technology, called deep learning, that trains a computer by feeding it large amounts of data.
Watson won Jeopardy! by using advanced natural-language processing and statistical analysis to interpret questions and provide the correct answers. Deep learning was added to Watson’s skill set more recently (see “IBM Pushes Deep Learning with a Watson Upgrade”). This new approach to artificial intelligence involves teaching computers to spot patterns in data by processing it in ways inspired by networks of neurons in the brain (see “Breakthrough Technologies 2013: Deep Learning”). The technology has already produced very impressive results in speech recognition (see “Microsoft Brings Star Trek’s Voice Translator to Life”) and image recognition (see “Facebook Creates Software That Matches Faces Almost as Well as You Do”).
IBM’s researchers think medical image processing could be next. Images are estimated to make up as much as 90 percent of all medical data today, but it can be difficult for physicians to glean important information from them, says John Smith, senior manager for intelligent information systems at IBM Research.
One of the most promising near-term applications of automated image processing, says Smith, is in detecting melanoma, a type of skin cancer. Diagnosing melanoma can be difficult, in part because there is so much variation in the way it appears in individual patients. By feeding a computer many images of melanoma, it is possible to teach the system to recognize very subtle but important features associated with the disease. The technology IBM envisions might be able to compare a new image from a patient with many others in a database and then rapidly give the doctor important information, gleaned from the images as well as from text-based records, about the diagnosis and potential treatments.
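As a rough illustration of the "compare a new image with many others in a database" idea (and not IBM's actual system), the sketch below finds the stored images most similar to a new one by nearest-neighbour search over feature vectors. The random vectors and labels stand in for features that a trained network would extract from real, labelled dermatology images.

```python
# Rough sketch of image lookup by similarity (not IBM's system).
# In practice the feature vectors would come from a deep network trained on
# labelled dermatology images; random vectors stand in for them here.
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))            # 1,000 stored image feature vectors
labels = rng.choice(["benign", "melanoma"], 1000)  # hypothetical label for each stored image


def most_similar(query, k=5):
    """Return the labels of the k stored images closest to the query vector."""
    distances = np.linalg.norm(database - query, axis=1)
    nearest = np.argsort(distances)[:k]
    return labels[nearest]


if __name__ == "__main__":
    new_image_features = rng.normal(size=128)  # features of the patient's new image
    print(most_similar(new_image_features))
```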
Finding cancer in lung CT scans is another good example of how such technology could help diagnosis, says Jeremy Howard, CEO of Enlitic, a one-year-old startup that is also using deep learning for medical image processing (see “A Startup Hopes to Teach Computers to Spot Tumors in Medical Scans”). “You have to scroll through hundreds and hundreds of slices looking for a few little glowing pixels that appear and disappear, and that takes a long time, and it is very easy to make a mistake,” he says. Howard says his company has already created an algorithm capable of identifying relevant characteristics of lung tumors more accurately than radiologists can.
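To give a flavour of what hunting for "a few little glowing pixels" means computationally (a simplified stand-in, not Enlitic's algorithm), the sketch below thresholds a 3-D scan volume and groups bright voxels into connected candidate regions for a human to review; the synthetic array takes the place of a real CT scan.

```python
# Simplified stand-in for nodule candidate detection (not Enlitic's algorithm).
# A real pipeline would use a trained model; this just thresholds brightness
# and groups adjacent bright voxels into candidate regions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
volume = rng.normal(size=(200, 128, 128))  # synthetic stack of 200 slices
volume[100:103, 60:63, 60:63] += 10.0      # plant one small bright blob

bright = volume > 6.0                   # keep only unusually bright voxels
regions, count = ndimage.label(bright)  # group adjacent bright voxels into candidates
sizes = ndimage.sum(bright, regions, index=range(1, count + 1))

print(f"{count} candidate region(s) found, sizes: {sizes}")
```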
Howard says the biggest barrier to using deep learning in medical diagnostics is that so much of the data necessary for training the systems remains isolated in individual institutions, and government regulations can make it difficult to share that information. IBM’s acquisition of Merge, with its billions of medical images, could help address that problem.