Depending on whom you talk to, artificial intelligence—AI—is either the god that will deliver us from all of our miseries or the devil that will extinguish our last faint spark of humanity.
Perhaps both, perhaps neither, but the June 13 issue of The Economist had an illuminating special report on AI. As the section’s subhead puts it, “An understanding of AI’s limitations is starting to sink in.”
Evidently AI, like bell bottoms, mullets, and tattoos, is sometimes in vogue, sometimes not. This doesn’t have to do with fashion, but with a lurch between overheated expectations (“We should stop training more radiologists,” urged one AI guru in 2016, on the grounds that computers can do the same things faster and better) and more temperate realities.
The failure of Starsky Robotics, a self-driving-truck startup, recapitulates an old story in the AI world, one that accounts for the field’s ups and downs: as the promised “breakthroughs failed to appear, the downpour of investor interest became a drizzle,” writes Starsky cofounder Stefan Seltz-Axmacher. “The biggest [reason], however, is that supervised machine learning doesn’t live up to the hype. It isn’t actual artificial intelligence akin to C-3PO, it’s a sophisticated pattern-matching tool.”
“We also saw that . . . our heavy investment into safety didn’t translate for investors,” Seltz-Axmacher adds.
This fact is especially dismaying. With the grotesque example of Boeing in everyone’s mind, one might expect investors to understand that safety is the worst possible area for shortcuts and cheapskating. Such, apparently, is not the case. As Seltz-Axmacher puts it, “No one really likes safety, they like features.”
In short, we may be ten years away from a self-driving truck. “No one should be betting a business on safe AI decision makers,” writes Seltz-Axmacher. “The current companies who are will continue to drain momentum over the next two years, followed by a few years with nearly no investment in the space, and (hopefully) another unmanned highway test for five years.”
As The Economist points out, the difficulty has to do with “edge cases”—uncommon things, like planes landing on highways (which has happened), to which iterative AI training is poorly suited.
Another issue has gone largely unaddressed: public opinion. Are American drivers going to be comfortable sharing congested highways with enormous, fast-moving vehicles that are not guided by human intelligence?
In a survey of a small sample of the American population (one person = me), 100 percent of the respondents said they had grave reservations about this possibility.
Everyone has had narrow, gut-slamming misses with trucks: sometimes it’s your fault, and sometimes it’s the trucker’s. In those moments, it’s your capacity (and the other driver’s) to handle exactly these edge cases that saves you.
Richard Smoley, editor for Blue Book Services, Inc., has more than 40 years of experience in magazine writing and editing, and is the former managing editor of California Farmer magazine. A graduate of Harvard and Oxford universities, he has published 11 books.