"Although it is coming on 18 years since Deep Blue beat Kasparov, humans are still barely fending off computers at shogi, while we retain some breathing room at Go. ... Ten years ago, each doubling of speed was thought to add 50 Elo points to strength. Now the estimate is closer to 30. Under the double-in-2-years version of Moore's Law, using an average of 50 Elo gained per doubling since Kasparov was beaten, one gets 450 Elo over 18 years, which again checks out. To be sure, the gains in computer chess have come from better algorithms, not just speed, and include nonlinear jumps, so Go should not count on a cushion of (25 – 14)*9 = 99 years."
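The quote's back-of-the-envelope arithmetic can be reproduced in a few lines. This is just a sketch of its own figures (50 Elo per hardware doubling, a doubling every 2 years, and the quote's own "(25 − 14)*9" cushion expression), not an independent estimate:

```python
# Reproduce the quote's arithmetic: Elo gained from hardware doublings
# under a double-every-2-years version of Moore's Law.
years = 18              # time since Deep Blue beat Kasparov (1997)
years_per_doubling = 2  # "double-in-2-years" Moore's Law
elo_per_doubling = 50   # the quote's average historical estimate

doublings = years // years_per_doubling   # 18 / 2 = 9 doublings
elo_gain = doublings * elo_per_doubling   # 9 * 50 = 450 Elo

# The quote's Go "cushion" expression, reproduced verbatim:
cushion_years = (25 - 14) * 9             # = 99 years

print(elo_gain, cushion_years)  # prints: 450 99
```

As the quote cautions, this linear extrapolation ignores algorithmic improvements and nonlinear jumps, which is exactly why Go "should not count on" the 99-year cushion.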
This follows the recent Slashdot discussion of "Economists Say Newest AI Technology Destroys More Jobs Than It Creates," citing a NY Times article, and other previous discussions like Humans Need Not Apply. What is most interesting to me about this HBR article is not the article itself so much as the fact that concerns about the economic implications of robotics, AI, and automation are now making it into the Harvard Business Review. These issues have otherwise been discussed by alternative economists for decades, such as in the Triple Revolution Memorandum from 1964, even as those projections have been slow to play out: automation's initial effect has been more to hold down wages and concentrate wealth than to displace most workers. However, we may be reaching the point where these effects are hard to deny, despite their running against mainstream theory, which assumes infinite demand and broad distribution of purchasing power via wages.
As to possible solutions, the HBR article mentions government planning, such as creating public works through infrastructure investment, to help address the issue. There is no mention of expanding the "basic income" of Social Security, currently received only by older people in the U.S., expanding the gift economy as represented by GNU/Linux, or improving local subsistence production using, say, 3D printing and gardening robots like Dewey of "Silent Running." So it seems the mainstream economics profession is starting to accept the emerging reality of this increasingly urgent issue, but is still struggling to think outside an exchange-oriented box for socioeconomic solutions. A few years ago, I collected dozens of possible good and bad solutions related to this issue. Like Davidow and Malone, I'd agree that the particular mix we end up with will be a reflection of our culture. Personally, I feel that if we are heading for a technological "singularity" of some sort, we would be better off improving various aspects of our society first, since our trajectory coming out of any singularity may have a lot to do with our trajectory going into it.
When the University of Chicago asked a panel of leading economists about automation, 76 percent agreed that it had not historically decreased employment. But when asked about the more recent past, they were less sanguine. About 33 percent said technology was a central reason that median wages had been stagnant over the past decade, 20 percent said it was not, and 29 percent were unsure. Perhaps the most worrisome development is how poorly the job market is already functioning for many workers. More than 16 percent of men between the ages of 25 and 54 are not working, up from 5 percent in the late 1960s; 30 percent of women in this age group are not working, up from 25 percent in the late 1990s. For those who are working, wage growth has been weak, while corporate profits have surged. "We're going to enter a world in which there's more wealth and less need to work," says Erik Brynjolfsson. "That should be good news. But if we just put it on autopilot, there's no guarantee this will work out."
Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics: a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids, and the cosmos. The new work, by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called "renormalization," which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables artificial neural networks to categorize data as, say, "a cat," regardless of its color, size, or posture in a given video.
"They actually wrote down on paper, with exact proofs, something that people only dreamed existed," said Ilya Nemenman, a biophysicist at Emory University.
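To give a flavor of what renormalization does, here is a toy illustration (emphatically not the Mehta-Schwab construction, which maps restricted Boltzmann machines onto the renormalization group): block-spin coarse-graining of a 1D chain of ±1 "spins" by majority vote, which discards fine-grained fluctuations while preserving large-scale structure, loosely analogous to how successive network layers keep coarser features of their input:

```python
# Toy illustration (NOT the actual Mehta-Schwab mapping): block-spin
# renormalization of a 1D chain of +/-1 spins. Each block of 3 spins is
# replaced by the sign of its sum, so small fluctuations are averaged
# away while large "domains" of aligned spins survive coarse-graining.

def block_spin(spins, block=3):
    """Coarse-grain a list of +/-1 spins by majority vote per block."""
    out = []
    # Drop any incomplete trailing block for simplicity.
    for i in range(0, len(spins) - len(spins) % block, block):
        s = sum(spins[i:i + block])
        out.append(1 if s > 0 else -1)  # odd block size, so no ties
    return out

# A chain with an "up" domain on the left and a "down" domain on the
# right, plus single-spin fluctuations that coarse-graining erases.
chain = [1, 1, -1, 1, 1, 1, -1, -1, 1, -1, -1, -1]
coarse = block_spin(chain)
print(coarse)  # prints: [1, 1, -1, -1]
```

Iterating `block_spin` on its own output gives the successive "zoomed-out" descriptions characteristic of the renormalization group, which is the structural parallel to stacked layers that the paper makes precise.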