Welcome to the machine

Today’s link roundup is a single article, one that hits on a point that’s been bouncing around my head constantly. For decades, we studied human intelligence in the hope that we might mimic our own thought mechanisms in computer code. Instead, as computing power becomes ubiquitous, meshing into a cheap, unified platform, we are discovering modes of thinking in that platform that do not seem to exist in our biology.

Over at Backchannel, David Weinberger posted a piece yesterday on the impact of what he calls “alien knowledge.”

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.

Weinberger goes on to describe these models, such as the AI engines that mastered the game Go. Unlike previous iterations of “smart computing,” it isn’t possible for a human to describe how Google’s AlphaGo program evaluated its moves. Iterative AI engines are programmed to create their own thought models based on the conditions they encounter. What makes this interesting from a political perspective is the potential for AI engines to outclass our own reasoning. The old “Turing Test” goal for intelligent computing may be the wrong way to think of AI. Perhaps computers don’t need to mimic us in order to render our biological reasoning moot. Just look at the results of the last election.

If you’ve spent any time on Twitter you may have encountered users who engage in slightly odd behavior. It isn’t always easy to identify a “bot,” a programmed engine for disseminating information (or more often, disinformation) on the platform, but they are ubiquitous. Phony information spread via automated techniques played a powerful role in the last election.

Those bots wouldn’t pass the Turing Test, yet they soundly defeated the human institutions they were programmed to target. We do not possess the computing power necessary to constantly filter a barrage of carefully crafted disinformation. AI engines are already outclassing us in ways that threaten the viability of key institutions.

Since we first started carving notches in sticks, we have used things in the world to help us to know that world. But never before have we relied on things that did not mirror human patterns of reasoning — we knew what each notch represented — and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?

As we look for ways to understand what happened in Election 2016 and prepare for what looms ahead, we should perhaps be thinking less about questions of policy and more about the impact of data overload on our minds. We may be bumping against biological limits in our capabilities, limits that require us to develop new social and technological adaptations to help us cope. A rationalist model of how human beings should best process information may be approaching the end of its evolutionary utility.

Comments

  1. I think what made the misinformation work was that a large segment of voters already had a negative opinion of Hillary. It was easy to get them to believe even the most outrageous BS that the bots tossed out. Reason be damned, they bought into it.
    Put Bernie in her place, and the same stuff would not fly.

    1. You are probably correct about Bernie, but I have to say I’ve been really disappointed in his actions since the election. That promise he made that he would become a Democrat? Now he says he’s remaining an Independent. The comments he’s made about Dems who are currently seeking office (Ossoff for one) – hardly helpful. I get the impression that Sanders is tied much more to ego than to principle.

  2. The current problem isn’t exactly the one you’re focusing on, which is that AI could run out of our control and start working on goals we don’t understand or even know it has. The current problem is that some of these AI platforms have been successfully used by some people to subvert social decision-making by the rest of us. They are currently still working toward their masters’ goals.

    1. FA – Broad general goals are composed of smaller intermediates. These are self-generated. This is the essence of strategy. While it’s true that general goal-setting by machines has not yet been attempted or observed, there appears to be no particular reason it won’t happen.

  3. EJ

    Off-topic:
    Bill O’Reilly’s gone. Twitter is ablaze with the dismay of his followers. They’re calling Fox News a network of cucks and SJWs.

    Setting aside my schadenfreude for the moment, I wonder if this is the American Right passing into a new life stage like a caterpillar becoming a butterfly. The “To Those Who Lied To My Father” stage is no longer needed, and the “Proud Boys MAGA” stage is beginning. Thoughts?

    1. I think you’re right, though those “proud boys” are a singularly pathetic lot. That Mad Men swagger had a certain appeal in a certain era. These guys are the Walmart version of the alpha male. They’re just trolls.

      At their worst, they might devolve into some ragtag terrorist group, but they aren’t going to fill the shoes of the old O’Reilly, Giuliani, Daley, Christie, etc.

  4. EJ

    On-topic:
    I’d be surprised if Twitter aren’t seriously looking into a general-case solution for the bot problem. This threatens the long-term viability of their platform. People go to Twitter to hear other humans speak; the more crowded it gets with bots and adverts, the less enticing it becomes.

    It’s interesting to speculate on what the people of the future might say when they look back at this. Are we at the dawn of a new age, or in the brief window between the technology existing and it being banned?

    1. Keep in mind, Twitter was just the most visible tip of this phenomenon. Twitter will probably correct this. It won’t be that hard. But Election 2016 is the first example I can think of where AI-based technology was used to distort our perception in such a comprehensive and consequential way. This has disturbing implications that go way beyond Trump. I don’t know how we stop it – not that I think we can’t – it just isn’t clear how.

    2. EJ – No useful technology has ever been effectively banned. Ever.
      National borders are of no consequence. We are going to have to learn to live with it, or evolve with it. ‘It’ is here to stay. I don’t think we can use one of the three wishes to coax it back into the bottle.

    1. Gerrymandering and big money control representation. Add to this powerful mix voter suppression and voter apathy. Too few Americans vote. Ossoff lost by a little over 3K votes… with 44% voter turnout in GA – in one of the most expensive special election primaries in the country. VOTE.

  5. Until around 50 years ago, we sought so-called closed-form solutions to problems. Equations were derived that explained (or modelled) a system. Plug in the variables, chug, and out came the solution.

    Such an approach is not of much use anymore. Most problems, ranging from aerodynamics to climate models to economics, don’t lend themselves to it. The underlying phenomena are so complex that a closed-form solution is practically impossible. The advent of fast computers has allowed ‘brute force’ approaches to problem solving. Essentially, it works like this: break a problem down into tiny elements whose interactions with the external world and each other can be described by simple mathematical relationships, run all of those little models, run them again, and again, and again, and see what happens. This brute-force approach, called finite element analysis, is used to design bridges, airplanes, sewers, and about everything else you can think of. You can model economic systems that way, too. A toy sketch contrasting the two approaches appears below.

    But it’s fundamentally ‘not human’. Human thought evolved to solve problems with limited computational resources. Our brains, while magnificent in their elegance, are slower than snails. Enter the machines.

    The term “alien” is pretty much perfect in describing how nonhuman intelligence thinks. And therein lies the problem. We’ve evolved to cope with each other and with higher animals. They are predictable in context. Machines, as the linked article illustrates, aren’t necessarily. Unpredictability, baby – that’s the problem.
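
    Here’s that toy sketch, in Python. The problem (Newton’s law of cooling), the constants, and the step size are all made up purely for illustration – nothing here comes from the linked article:

      import math

      # Toy problem: Newton's law of cooling, dT/dt = -k * (T - T_env).
      # Every constant here is arbitrary, chosen only for illustration.
      K, T_ENV, T0, T_TOTAL = 0.3, 20.0, 90.0, 10.0

      # 1) The closed-form route: derive an equation, plug in the
      #    variables, chug, and out comes the answer.
      closed = T_ENV + (T0 - T_ENV) * math.exp(-K * T_TOTAL)

      # 2) The brute-force route: chop time into tiny elements and apply
      #    one simple relationship over and over and over.
      dt = 0.001
      temp = T0
      for _ in range(int(T_TOTAL / dt)):
          temp += -K * (temp - T_ENV) * dt

      print(f"closed form: {closed:.3f}   brute force: {temp:.3f}")
      # Both print roughly 23.5; the point is that the brute-force route
      # keeps working even when no closed-form equation can be derived.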

      1. I doubt it. Ethics and morality derive from our evolution as a social species. The importance of the flourishing of humans and other creatures that we suspect share our ability to suffer as well as to experience joy is writ in our DNA. Machines don’t share that history or kinship. To such an entity, the very concepts of morality and ethics would be alien.

    1. Infinite Elephants!
      Yes, an incredibly powerful technique – but while scientists may find this approach novel and unusual, engineers have ALWAYS worked that way.
      Scientists try to develop logical models – engineers work on working models, something that approximates the truth well enough to fix the problem.

  6. Even as we lament mankind’s dependence upon technology, we appreciate and expand its application in every phase of our lives. Someone who has experienced the joys and frustrations of technology has put together a site to enable us to more easily access the vast storehouse of information that computers offer. Steve Ballmer unveiled this “best new thing” for people who are data-driven. The article suggests, “Moral values without facts are baseless, but facts without values are meaningless.” Nice, Mr. Ballmer. Let us hope that there are still large numbers of people who think facts matter.

    https://www.theatlantic.com/business/archive/2017/04/can-a-beautiful-website-of-facts-change-anybodys-mind/523563/?utm_source=nl-atlantic-daily-041917
