Let's Get It Started
June 20, 2016

Humans and the Code They Create

It’s elementary: you can get the code out of the human, but you can’t get the human out of the code.

If you ask some, the software sky is falling. According to the naysayers, the Bay Area's seven-plus-year boom is winding down, and the supposedly meritocratic technologies that don't discriminate and that claim to solve the world's problems, more neatly and better than before, will no longer anchor the local economy.

Many Americans, that is, those who don't live in the tech-hub bubbles scattered across the country, are both delighted by and fearful of technology. When they think of tech, it evokes not only images of the fancy smartphone in their pocket but also nightmares of nefarious hackers and flame-throwing drones. Even as they rely on that pocket computer for most things in their lives, they are wary of national data collection, facial-scanning technology, and autonomous vehicles.

Some are wary of technology's impact on humans themselves. Recently, The Atlantic examined the human fear of total knowledge, suggesting that academics and others are ignoring the possibility that the "Googlefication" of books and archives will destroy human memory. After all, if every bit of information is stored anywhere and everywhere, who needs to remember anything?

But others remain optimistic. They see technology as Sherlock Holmes sees logic and deduction: a tool that helps people rise above human fallibility and order the world neatly enough to get at the truth. And while I no longer believe that software will eat the world, I do know that technology touches every moment of my day.

Some claim that technology is neutral and agnostic. That may be true. But I've worked closely with the people who build these programs. I know that they're human, that they show up to work every day with their own biases, and that where they work and everything they create is in some way a reflection of themselves. What they code, and how they code it, is a choice.

This is a long way of saying that I'm never surprised to learn that software, and the companies that create it, are not 100 percent free of human error, or of human judgment.

I've worked in online content, and I know the human effort that goes into parsing the results of a Google search. So I was shocked that anyone was surprised to learn that humans work at Facebook and have a non-algorithmic impact on what appears in the Trending section.

I understand how machine-learning programs that purport to neutrally predict a person's predilection to commit crime are trained on data from an already biased system, and therefore reproduce those same structural biases in their output. I don't like it, but it makes sense to me that some Airbnb hosts discriminate when they can discern the race of a potential renter, because humans discriminate.
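To make that first point concrete, here is a minimal sketch in Python. Everything in it is made up for illustration: the groups, the rates, and the deliberately toy "model" are hypothetical, not drawn from any real system. The point is only the mechanism: two groups behave identically, but because one is over-policed in the historical labels, anything that learns from those labels scores it as riskier.

```python
# A hypothetical illustration: a "risk score" learned from biased
# historical labels simply reproduces the bias. All numbers are invented.
import random

random.seed(0)

# Two groups with IDENTICAL true reoffense rates (30%).
TRUE_RATE = 0.30

def true_reoffends():
    return random.random() < TRUE_RATE

# Historical policing is biased: non-reoffenders in group B are wrongly
# labeled "high risk" far more often than non-reoffenders in group A.
def biased_label(group, reoffended):
    if reoffended:
        return 1
    false_positive_rate = 0.05 if group == "A" else 0.25  # the structural bias
    return 1 if random.random() < false_positive_rate else 0

# Build a biased training set: 10,000 labeled examples per group.
train = [(g, biased_label(g, true_reoffends()))
         for g in ["A", "B"] for _ in range(10_000)]

# "Train" the simplest possible model: the per-group base rate of the label.
def learned_risk(group):
    labels = [y for g, y in train if g == group]
    return sum(labels) / len(labels)

print(f"learned risk, group A: {learned_risk('A'):.2f}")  # ~0.33
print(f"learned risk, group B: {learned_risk('B'):.2f}")  # ~0.47
# Same underlying behavior, different scores: the model codified the bias.
```

The "model" here is nothing but a per-group average, but a fancier classifier trained on the same labels would learn the same gap, because the bias lives in the data, not the math.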

And I understand why coders and designers who work in offices where the executive assistants are mostly women would give their AI personal helpers female names, unless it's a lawyer-bot. Because they see female assistants, and male lawyers.

In The Sign of the Four, Sherlock famously says, "when you have eliminated the impossible, whatever remains, however improbable, must be the truth." But you know what's impossible? People. People are messy, impossible, difficult beings who act against their own best interests, self-select their friend groups and sources of information, lash out emotionally, and vote for Donald Trump for president.

Sherlock and Moriarty had the same tools, the same resources, the same intellect. And yet one used his tools for good, the other for mayhem. Likewise, technology is only as good as the humans who use it.

People are impossible, and people are the truth, and no algorithm can fix being a person for you. Which is why it’s important to approach technology as a human, and understand that the 1s and 0s can only take us so far.
