It’s Stardate 47025.4, in the 24th century. Starfleet’s star android, Lt. Commander Data, has been enlisted by his renegade android “brother” Lore to join a rebellion against humankind — much to the consternation of Jean-Luc Picard, captain of the USS Enterprise. “The reign of biological life-forms is coming to an end,” Lore tells Picard. “You, Picard, and those like you, are obsolete.”

That’s Star Trek for you — so optimistic that machines won’t dethrone humans for at least three more centuries. But that’s fiction. In real life, the era of smart machines has already arrived. They haven’t completely taken over the world yet, but they’re off to a good start.

“Machine learning” — a sort of concrete subfield within the more nebulous quest for artificial intelligence — has invaded numerous fields of human endeavor, from medical diagnosis to searching for new subatomic particles. Thanks to its most powerful incarnation — known as deep learning — machine learning’s repertoire of skills now includes recognizing speech, translating languages, identifying images, driving cars, designing new materials and predicting trends in the stock market, among applications in many other arenas.

“Because computers can effortlessly sift through data at scales far beyond human capabilities, deep learning is not only about to transform modern society, but also about to revolutionize science — crossing major disciplines from particle physics and organic chemistry to biological research and biomedical applications,” computational neuroscientist Thomas Serre wrote in the 2019 Annual Review of Vision Science.

A proliferation of new papers on machine learning, deep learning and artificial intelligence has flooded the scientific literature in recent years. Reviews of this new research have covered such topics as health care and epidemiology, materials science, fundamental physics, quantum computing, simulations of molecular interactions, fluid mechanics, clinical psychology, economics, vision science and drug discovery.

These reviews spotlight machine learning’s major accomplishments so far and foretell even more substantial achievements to come. But most such reviews also remark on intelligent machines’ limitations. Some impressive successes, for instance, reflect “shortcut” learning that gets the right answer without true understanding. Consequently, apparently smart machines can be easily tricked into error. And much of today’s so-called machine intelligence is narrowly focused skill, effective for a specific task, but without the flexibility of the general cognitive abilities possessed by people. A computer that can beat grandmasters at chess would be mediocre at poker, for example.

“In stark contrast with humans, most ‘learning’ in current-day artificial intelligence is not transferable between related tasks,” writes computer scientist Melanie Mitchell in her 2019 book Artificial Intelligence: A Guide for Thinking Humans.

As Mitchell explains, many barriers impede the quest for true artificial intelligence — machines that can think and reason about the world in a general way as (at least some) humans can.

“We humans tend to overestimate artificial intelligence advances and underestimate the complexity of our own intelligence,” Mitchell writes. Fears of superintelligent machines taking over the world are therefore misplaced, she suggests, citing comments by the economist and behavioral scientist Sendhil Mullainathan: “We should be afraid,” he wrote. “Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence.”

Computer scientist and scholar Melanie Mitchell’s 2019 book explains the capabilities and the limits of current artificial intelligence.

CREDIT: PHOTO BY KENDALL SPRINGER

Machine learning’s swift progress

To be fair, computer scientists have developed some pretty powerful strategies for teaching machines how to learn. Typically such learning relies on some variant of computing systems known as neural networks. In a crude way, those networks emulate the human brain, with processing units based on the brain’s nerve cells, or neurons. In a traditional neural network, a layer of artificial neurons receives inputs that modify the strength of the connections to the neurons in another layer, where patterns in the input can be identified and reported to an output layer. Such an artificial neural network can “learn” how to classify input data as, say, an image of a cat.

In the last decade or so, the dominant machine learning strategy has relied on artificial neural networks with multiple layers, a method known as deep learning. A deep learning machine can detect patterns within patterns, enabling it to classify inputs more precisely than even expert humans can. A well-trained deep learning system can detect a signal of cancer in a CT scan that would elude a human radiologist’s eyes.
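To make that layered structure concrete, here is a minimal sketch of a small deep network, written in PyTorch as an illustrative choice; the layer sizes and the image-classification task are invented for the example, not drawn from any of the systems described above.

```python
# A minimal sketch of a small "deep" neural network: several layers of
# artificial neurons, each transforming the output of the layer before it.
# The layer sizes and the task (sorting 28x28 grayscale images into 10
# categories) are invented purely for illustration.
import torch
from torch import nn

model = nn.Sequential(
    nn.Flatten(),          # turn a 28x28 image into a 784-element input vector
    nn.Linear(784, 128),   # first layer of artificial neurons
    nn.ReLU(),
    nn.Linear(128, 64),    # a deeper layer can detect patterns within patterns
    nn.ReLU(),
    nn.Linear(64, 10),     # output layer: one score per category
)

# A single forward pass on a fake batch of 32 "images" made of random noise.
fake_images = torch.randn(32, 1, 28, 28)
scores = model(fake_images)
print(scores.shape)        # torch.Size([32, 10])
```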

In some systems, the learning is “supervised,” meaning the machine is trained on labeled data. With unsupervised learning, machines are trained on large datasets without being told what the input represents; the computer itself learns to identify patterns that define categories or behaviors. In another approach, called reinforcement learning, a machine learns to respond to input with actions that are “rewarded” (perhaps by adding numbers to a memory file) if they help achieve a goal, such as winning a game. Reinforcement learning demonstrated its power by producing the machine that beat the human champion in the game of Go.
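As a rough illustration of the first two approaches, the sketch below uses scikit-learn on made-up data (the dataset and model choices are assumptions for the example): a supervised classifier is trained on labeled points, while an unsupervised algorithm must find groupings in the same points with no labels at all. Reinforcement learning, which involves an agent interacting with an environment over many rounds, is harder to compress into a few lines and is omitted here.

```python
# A rough sketch contrasting supervised and unsupervised learning,
# using scikit-learn on synthetic data invented for the example.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic 2-D points drawn from three groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the machine is trained on labeled data (X paired with y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: the machine sees only X and must find the categories itself.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```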

But success at Go, while worthy of headlines, is not nearly as notable as machine learning’s more practical successes in such realms as medicine, industry and science.

Professional Go player Lee Sedol, right, faced Google’s artificial intelligence program AlphaGo in a series of Go games in 2016. Google DeepMind’s lead programmer Aja Huang, left, placed the first stone during the final match. AlphaGo won many, but not all, games against its human competitors.

CREDIT: AP PHOTO / LEE JIN-MAN

In medicine, machine learning has helped researchers cope with weaknesses in standard tests for treatment effectiveness. Medical trials testing disease treatments typically rely on average results to determine effectiveness, and can therefore miss possible benefits for small subgroups of patients. One trial, for instance, found that a weight-loss program did not reduce heart problems among people with diabetes. But a machine learning algorithm identified a subset of patients for which weight loss did reduce heart problems, as infectious disease expert Timothy Wiemken and computer scientist Robert Kelley noted in the 2020 Annual Review of Public Health.
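The review does not spell out the algorithm behind that result, but subgroup searches of this kind are often illustrated with tree-based models. The following is a purely hypothetical sketch, with invented patients and features, of how a decision tree might surface a subgroup whose outcomes differ from the trial-wide average.

```python
# Hypothetical sketch: using a decision tree to look for a patient subgroup
# in which a treatment appears beneficial even though the trial-wide average
# shows little effect. All data and feature names are invented for
# illustration; the review cited in the text does not describe this method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(40, 80, n)
hba1c = rng.normal(7.5, 1.0, n)          # an invented blood-sugar measure
treated = rng.integers(0, 2, n)          # 1 = received the intervention

# Invented outcome: the treatment only helps patients with lower HbA1c.
benefit = (hba1c < 7.0) & (treated == 1)
heart_event = (rng.random(n) < np.where(benefit, 0.10, 0.20)).astype(int)

X = np.column_stack([age, hba1c, treated])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, heart_event)
print(export_text(tree, feature_names=["age", "hba1c", "treated"]))
```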

Machine learning has also assisted in finding new drugs to test. “Deep learning has been widely applied to drug discovery approaches,” chemist Hao Zhu writes in the latest Annual Review of Pharmacology and Toxicology. “The current progress of artificial intelligence supported by deep learning has shown great promise in rational drug discovery in this era of big data.”

As with discovering new drugs for medical purposes, machine learning has proved productive in discovering new materials for industrial uses. Searching for “superhard” materials resistant to wear and tear can be streamlined with machine learning algorithms, as in a case study described in the 2020 Annual Review of Materials Research. “This case study … is an excellent example of the powerful role that machine learning can play in the identification of new structural materials,” materials scientist Taylor Sparks and colleagues wrote in that review.
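The review describes a screening workflow rather than code, but the general pattern, training a model on materials with known properties and then ranking unmeasured candidates, can be sketched roughly as follows; the descriptors and hardness values are invented for illustration and are not taken from the case study.

```python
# Rough sketch of machine-learning screening for hard materials: train a
# regressor on materials with "measured" hardness, then rank new candidates
# by predicted hardness. All numbers here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Invented descriptors for 500 known materials (say, density, bond strength,
# average electronegativity) and their invented hardness values.
known_features = rng.random((500, 3))
known_hardness = 40 * known_features[:, 1] + 10 * rng.random(500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(known_features, known_hardness)

# Predict hardness for 10,000 hypothetical candidates and keep the top few.
candidates = rng.random((10000, 3))
predicted = model.predict(candidates)
top = np.argsort(predicted)[::-1][:5]
print("most promising candidates:", top, predicted[top])
```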

“I am far more afraid of machine stupidity than of machine intelligence.”

Sendhil Mullainathan

While practical uses get the most attention, machine learning also offers advantages for basic scientific research. In high-energy particle accelerators, such as the Large Hadron Collider near Geneva, protons smashing together produce complex streams of debris containing other subatomic particles (such as the famous Higgs boson, discovered at the LHC in 2012). With bunches containing billions of protons colliding millions of times per second, physicists must wisely choose which events are worth studying. It’s kind of like deciding which molecules to swallow while drinking from a firehose. Machine learning can help distinguish important events from background noise. Other machine learning algorithms can help identify particles produced in the collision debris.
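At bottom this is a classification problem: given measured features of a collision event, label it as interesting signal or routine background. A schematic sketch of such a classifier, trained on simulated stand-in “events” rather than real collider data, might look like this.

```python
# Schematic sketch of signal-versus-background classification, the kind of
# event selection described in the text. The "events" here are simulated
# stand-ins with two invented features, not real collider data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
# Two invented event features (say, total energy and number of particle jets).
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
signal = rng.normal(loc=[1.5, 1.0], scale=1.0, size=(n, 2))

X = np.vstack([background, signal])
y = np.array([0] * n + [1] * n)          # 0 = background, 1 = signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("fraction of test events classified correctly:", clf.score(X_test, y_test))
```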

“Deep learning has already influenced data analysis at the LHC and sparked a new wave of collaboration between the machine learning and particle physics communities,” physicist Dan Guest and colleagues wrote in the 2018 Annual Review of Nuclear and Particle Science.

Machine learning methods have been applied to data processing not only in particle physics but also in cosmology, quantum computing and other realms of fundamental physics, quantum physicist Giuseppe Carleo and colleagues point out in another recent review.

“In parallel to the rise of machine learning techniques in industrial applications, scientists have increasingly become interested in the potential of machine learning for fundamental research,” Carleo and coauthors wrote last year in Reviews of Modern Physics.

Limits on learning

As Carleo and many other reviewers have emphasized, machine learning has its downsides. Its successes should not blind scientists to its faults.

“A healthy and critical engagement with the potential power and limitations of machine learning includes an analysis of where these methods break and what they are distinctly not good at,” Carleo and coauthors wrote.

For one thing, a machine’s “intelligence” is limited by the nature of the data it learns from. Machines trained to screen job applicants by analyzing human hiring decisions, for example, can learn various biases that historically have discriminated against certain groups.

Even when machines perform well, they are not always as smart as they seem. Reports of skill in recognizing images, for instance, should be tempered by the fact that a machine’s reported accuracy often refers to its top five “guesses” — if any of the five is correct, the machine gets credit for a correct identification.
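For concreteness, here is a small sketch of how that top-five scoring works, with made-up scores and labels: a prediction counts as correct if the true label appears anywhere among the model’s five highest-scoring guesses, so top-five accuracy is always at least as high as strict top-one accuracy.

```python
# Sketch of top-5 scoring: a prediction counts as correct if the true label
# is anywhere among the model's five highest-scoring guesses.
# The scores and labels below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((8, 1000))      # 8 images, 1,000 candidate categories
true_labels = rng.integers(0, 1000, 8)

top5 = np.argsort(scores, axis=1)[:, -5:]              # five best guesses per image
top5_hits = np.any(top5 == true_labels[:, None], axis=1)
top1_hits = scores.argmax(axis=1) == true_labels

print("top-1 accuracy:", top1_hits.mean())
print("top-5 accuracy:", top5_hits.mean())   # never lower than top-1
```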

Often a seemingly smart machine performs accurately but not because it learns or understands a task in anything like the way a human does. Rather the machine finds a shortcut that frequently produces a correct answer. “A deep neural network may appear to classify cows perfectly well — but fails when tested on pictures where cows appear outside the typical grass landscape,” Robert Geirhos and collaborators wrote in a recent paper online at arXiv.org. In that case, “grass” is the system’s shortcut indicator for “cow.”

Sometimes machines rely on texture rather than shape as a shortcut for identifying objects. If a picture of a cat is converted by Photoshop to look like an embossed image in various shades of gray, a machine might think it’s an elephant.

Such shortcuts may be one reason machines can be easily fooled by adversarial efforts at deception.

“It is surprisingly easy for humans to surreptitiously trick deep neural networks into making errors,” Mitchell commented in her book. Slight alterations in a medical X-ray — imperceptible to the human eye — can change a machine’s diagnosis from “99 percent confidence that the image shows no cancer to 99 percent confidence that cancer is present.”
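One widely used recipe for building such alterations is the fast gradient sign method: nudge every pixel a tiny amount in whichever direction most increases the model’s error. The bare-bones sketch below shows only the mechanics, using an untrained stand-in model and a random “image”; with a real trained network and a carefully tuned perturbation size, the prediction can flip even though a human sees no difference.

```python
# Bare-bones sketch of the fast gradient sign method (FGSM), one common way
# to build adversarial perturbations: shift each pixel slightly in the
# direction that increases the model's loss. The model here is untrained and
# the "image" is random noise, purely to show the mechanics; the prediction
# may or may not change in this toy setting.
import torch
from torch import nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.01                                   # perturbation too small to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```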

It’s hard to understand why machines fail in these ways, Mitchell observes, because humans don’t know how the machines make their decisions. Processing within the multiple layers of a deep learning neural network is like the inner workings of a black hole — invisible to human perception — making it difficult to determine how deep learning works.

“The reasons for decisions made by deep neural networks are often hard to understand, which makes their failures hard to predict or fix,” Mitchell notes.

Nevertheless it’s clear that however the machines are learning, it’s not the same way humans learn. But it’s not just machine intelligence that’s hard to understand — so is human intelligence. Scientists will have to understand human intelligence much more completely before they can devise a more powerful artificial substitute. And that’s why there’s little danger that the nonfiction equivalent of Data’s brother Lore will soon enslave the human race.

Mitchell, for one, shares the sentiment expressed by software entrepreneur Mitchell Kapor: “Human intelligence is a marvelous, subtle, and poorly understood phenomenon. There is no danger of duplicating it anytime soon.” And maybe not even by the 24th century.