A particular phrase has risen in frequency in the last decade or so. It pops up in various permutations in tech talks aimed at the general public, in pop-science publications, and in other TEDx-parallel venues. It starts:
“Humans are bad at …”
The speaker then almost invariably goes on to describe some area where machines or machine learning can (or, more often, should) outperform humans, such as driving.
But the overall impression given by pieces featuring this phrase is that they are trying to browbeat humans (not those in the writer’s or speaker’s direct audience, but the other humans they all know) into accepting their inferiority in some domain and ceding control of the sphere in question to some other entity (in the case of self-driving cars, to robots).
But humans are not bad at most tasks; they simply execute according to human priorities. The Humans Are Bad At camp wants humans to cede their individual autonomy to either:
- A cabal of other humans, who have their own biases and different (usually elite) priorities. In addition to reducing modal and aggregate agency, this shift to oligarchy attenuates the market benefits of mass individual decision-making.
- Some algorithm designed by humans with their implicit biases (see the first point above), with unpredictable failure modes, and which may or may not remain aligned with human goals.
In many cases, the proposed transfer of control to an ostensibly rational and impartial algorithm is merely cover for a takeover by a different—usually elite and tech-savvy—group of humans. There is no AI. So if you cede your agency in some domain to an external actor, that actor will be another human. Who is also probably “Bad At”.
Pictured here: bad at driving. (Image: Nicole Ottawa & Oliver Meckes / Eye of Science)
The purpose of human activity is to serve human ends, as defined by the actor. Before assessing what humans are “bad” at, we must determine their incentives and goals, and develop adequate benchmarks. Driving is more than getting from point A to point B.
Mother Brain does not exist. The question is: “How do we reach human-to-human goal alignment?”