Current tech fear-mongering seems to be centered on two concerns:

  • The AI singularity will occur and super-intelligent robots will enslave us.1

  • There will be mass unemployment due to automation.

Certain jobs should disappear due to automation. These are the jobs that should have been done by a computer in the first place, and we’d all be better off for it.

Nassim Nicholas Taleb’s recent book, Skin in the Game, claims that modern salaried life is essentially slavery. Economic definitions of slavery aside, many folks will probably agree with his assessment on pure intuition.

He uses the analogy2 of the Dog and the Wolf from Aesop’s fables:

A gaunt Wolf was almost dead with hunger when he happened to meet a House-dog who was passing by. “Ah, Cousin,” said the Dog. “I knew how it would be; your irregular life will soon be the ruin of you. Why do you not work steadily as I do, and get your food regularly given to you?”

“I would have no objection,” said the Wolf, “if I could only get a place.”

“I will easily arrange that for you,” said the Dog; “come with me to my master and you shall share my work.”

So the Wolf and the Dog went towards the town together. On the way there the Wolf noticed that the hair on a certain part of the Dog’s neck was very much worn away, so he asked him how that had come about.

“Oh, it is nothing,” said the Dog. “That is only the place where the collar is put on at night to keep me chained up; it chafes a bit, but one soon gets used to it.”

“Is that all?” said the Wolf. “Then good-bye to you, Master Dog.”

Does modern “slavery” in the form of salaried employment harm anyone other than the individual who volunteers for it? Though it has deprived itself of the wolf’s dignity, the dog seems happy. Is it okay if done by choice?

Taleb doesn’t seem to think so. His reasoning seems obvious in retrospect:

“People whose survival depends on qualitative ‘job assessments’ by someone of higher rank in an organization cannot be trusted for critical decisions.”

Why? He explains:

“The employee has a very simple objective function: fulfill the tasks that his or her supervisor deems necessary, or satisfy some gameable metric.”

Metrics are simplifications; simplifications are necessarily gameable. Instead of optimizing for producing value, the employee will optimize for themselves. Taleb argues that the only wolf-like employees belong to the small category of people who are assessed on metrics equivalent to the desired outcome (e.g. a trader’s P&L or a salesperson’s numbers). Successful ones have wolf-like leverage, and they don’t hesitate to use it. Companies are terrified of losing them.
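This is Goodhart’s law in miniature, and a toy sketch may make it concrete. Everything below - the Task class, the numbers, the “tasks per hour” proxy - is invented for illustration, not taken from Taleb: a worker judged on throughput is rationally pushed toward trivial ticket-churn and away from the hard problem that actually creates value.

```python
# Toy illustration of a gameable metric. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Task:
    true_value: float  # value actually delivered (unobservable by the boss)
    effort: float      # hours the task takes (observable)

def tasks_per_hour(chosen: list[Task]) -> float:
    """The supervisor's proxy metric: throughput, blind to true value."""
    hours = sum(t.effort for t in chosen)
    return len(chosen) / hours if hours else 0.0

def value_per_hour(chosen: list[Task]) -> float:
    """What the company actually wants, but cannot measure directly."""
    hours = sum(t.effort for t in chosen)
    return sum(t.true_value for t in chosen) / hours if hours else 0.0

backlog = [
    Task(true_value=10.0, effort=8.0),  # the hard, genuinely valuable problem
    Task(true_value=0.5, effort=0.5),   # trivial ticket-churn
    Task(true_value=0.5, effort=0.5),
    Task(true_value=0.5, effort=0.5),
]

diligent = backlog[:1]                               # tackle the hard problem
gamed = sorted(backlog, key=lambda t: t.effort)[:3]  # churn the easy tickets

print(f"proxy metric - diligent: {tasks_per_hour(diligent):.2f}, "
      f"gamed: {tasks_per_hour(gamed):.2f}")
print(f"true value   - diligent: {value_per_hour(diligent):.2f}, "
      f"gamed: {value_per_hour(gamed):.2f}")
```

Under these made-up numbers the proxy metric scores the ticket-churner at 2.00 tasks/hour against the diligent worker’s 0.12, while true value per hour says the opposite (1.00 vs. 1.25). The metric rewards exactly the behaviour it was meant to prevent.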

What about fulfilling tasks that the supervisor deems necessary? This manifests in two types of jobs.

One: a supervisor relies on the employee to provide valuable skills or decision-making prowess that cannot be obtained elsewhere. Like the skilled salesperson or trader, such an employee also has leverage. They can always find another job, or offer their skills directly to the market in the form of a business. In other words, the individual functions as a productive unit in their own right - they are not simply an extension of their employer’s will. Most working people aspire to be like this.

Two: the employee is essentially a program, carrying out commands. They are not required to exercise critical judgement in their role, and may even be actively discouraged from doing so. Professional and financial services companies are rife with such roles, many of them high-paying. Programs can be expensive and sophisticated, but they are programs nonetheless.

Humans are not computers, and such “programmatic” roles combine the worst traits of both to make a crappy deal for everyone involved. The company gets a disengaged worker who optimizes for doing the minimum necessary without getting fired, which usually creates politics and bullshit. The employee gets to hate the slog of waking up on Mondays. Society is cheated of a better world, because instead of innovating solutions to hard problems, people are loafing, playing political games, and looking at pictures of cats to avoid stabbing their own eyes out at work. Nobody is better off. Enron, the IRS, the banks that collapsed in 2008 - no wonder most large bureaucracies look and feel like hell.

As automation pushes people out of jobs, there will no doubt be carnage in the short term. Some folks may not recover within one lifetime without assistance. Government support of skills education may be necessary to prevent too grotesque a disparity from forming. But this problem is hardly unique to our century: we handled steam, oil, and electricity, and we will handle whatever comes next.

Gunsmiths do not make butter, and dairy farmers do not make guns: they make their respective products and trade with each other. Gains from specialization are good. Automation will lead to an increased proportion of human brainpower being focused onto problems that only a human brain can solve. Computers will take care of the rest, humming away with no complaints or quashed ambitions. We will all be better off for it.



1 I don’t understand AI well enough to speak confidently, but it seems the majority of singularity doomsday predictions are at least several decades too early, if not completely absurd.

It seems wrong to say, for example, that the recent Google Assistant demo passed the Turing test. The demo was very impressive, but the conversations occurred with busy shop owners who wanted to exchange the necessary info and get the customer off the phone as quickly as possible. They had no reason to scrutinize the person on the other end for strange intonations or speaking tics, which are quite evident in the recording. The insertion of speech disfluencies was a clever piece of aesthetic trickery, but hardly a meaningful step towards “the singularity”, so to speak.

And watch this Tesla on Autopilot mistake a highway divider for a lane. Yes, this is most likely fixable with situation-specific training data (or strong signals like colour-coded lane marker paint)…but guess who still has to identify the product gap, implement the solution, and monitor the training progress in real life? That’s right. Humans. We still have a long way to go.

2 Taleb mentions that large bureaucracies select for predictable, controllable individuals. Tangential to his dog vs. wolf analogy, there is scientific evidence that dogs are less intelligent, more docile versions of wolves. Breeding may have selected for the same genetic deletion in dogs that causes Williams syndrome in humans.