Jaron Lanier: There Is No AI

Quotes below are from Jaron Lanier’s article “There Is No AI” (The New Yorker, 20 Apr 2023) and an UnHerd interview by Flo Read, “Jaron Lanier: How humanity can defeat AI” (YouTube, 2023).

Data Dignity

Data dignity means tracing the human contributions behind AI mashups. It is a philosophical argument that needs a technological implementation; a minimal sketch of what that bookkeeping might look like follows the quotes below.

  • “Digital stuff would typically be connected with the humans who want to be known for having made it. In some versions of the idea, people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do.”
  • “At some point in the past, a real person created an illustration that was input as data into the model, and, in combination with contributions from other people, this was transformed into a fresh image.”
  • “A data-dignity approach would trace the most unique and influential contributors when a big model provides a valuable output. For instance, if you ask a model for ‘an animated movie of my kids in an oil-painting world of talking cats on an adventure,’ then certain key oil painters, cat portraitists, voice actors, and writers—or their estates—might be calculated to have been uniquely essential to the creation of the new masterpiece. They would be acknowledged and motivated. They might even get paid.”
  • “How detailed an accounting should data dignity attempt? Not everyone agrees. The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models—those who have added to a model’s simulated competence with grammar, for example. At first, data dignity might attend only to the small number of special contributors who emerge in a given situation. Over time, though, more people might be included, as intermediate rights organizations—unions, guilds, professional groups, and so on—start to play a role. People in the data-dignity community sometimes call these anticipated groups mediators of individual data (mids) or data trusts. People need collective-bargaining power to have value in an online world—especially when they might get lost in a giant A.I. model.”
  • “Consider what might happen if A.I.-driven tree-trimming robots … allow for a new type of indirect landscaping artistry…. With data dignity, the models might create new sources of income, distributed through collective organizations. Tree trimming would become more multifunctional and interesting over time; there would be a community motivated to remain valuable. Each new successful introduction of an A.I. or robotic application could involve the inauguration of a new kind of creative work. In ways large and small, this could help ease the transition to an economy into which models are integrated.”
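
The quotes above describe a mechanism as much as a philosophy: trace contributions, score the most influential ones, then acknowledge and pay their authors. The Python sketch below is one hypothetical shape for that bookkeeping, not Lanier’s design; the Contribution and DignityLedger names are invented, and the influence scores are assumed to arrive from some separate attribution method.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Contribution:
    contributor: str  # person (or estate) to credit
    work_id: str      # provenance pointer back to the original work


class DignityLedger:
    """Toy ledger crediting the top contributors behind one model output."""

    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def settle(self, influences: dict[Contribution, float],
               fee: float, top_k: int = 4) -> dict[str, float]:
        # `influences` maps traced contributions to scores from some
        # attribution method; this sketch assumes those scores exist.
        top = sorted(influences.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
        total = sum(score for _, score in top)
        if total == 0:
            return {}
        credited: dict[str, float] = {}
        for contribution, score in top:
            share = fee * score / total
            credited[contribution.contributor] = (
                credited.get(contribution.contributor, 0.0) + share)
            self.balances[contribution.contributor] = (
                self.balances.get(contribution.contributor, 0.0) + share)
        return credited  # the acknowledged, "uniquely essential" contributors


ledger = DignityLedger()
print(ledger.settle(
    {Contribution("oil painter", "painting-001"): 0.9,
     Contribution("cat portraitist", "portrait-042"): 0.7,
     Contribution("voice actor", "recording-007"): 0.4},
    fee=10.00,
))
```

In practice the shares might route through the “mids” or data trusts the last two quotes anticipate rather than straight to individuals, and everything hard lives in the influence scores this sketch takes as given.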

Human Social Collaboration

AI is a human construct that enables giant mashups of human expression, most visibly in today’s large language models (LLMs) such as ChatGPT and Google Bard.

  • “The new programs mash up work done by human minds. What’s innovative is that the mashup process has become guided and constrained, so that the results are usable and often striking. This is a significant achievement and worth celebrating—but it can be thought of as illuminating previously hidden concordances between human creations, rather than as the invention of a new mind.”
  • “After all, what is civilization but social collaboration? Seeing A.I. as a way of working together, rather than as a technology for creating independent, intelligent beings, may make it less mysterious—less like HAL 9000 or Commander Data. But that’s good, because mystery only makes mismanagement more likely.”
  • “We can now imagine a Web site that reformulates itself on the fly for someone who is color-blind, say, or a site that tailors itself to someone’s particular cognitive abilities and styles. A humanist like me wants people to have more control, rather than be overly influenced or guided by technology. Flexibility may give us back some agency.”
  • “Big-model A.I. is made of people—and the way to open the black box is to reveal them.”
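
The third quote above, about a site that reformulates itself for a color-blind reader, is concrete enough to sketch. Below is a hypothetical Python fragment: the profile fields and the stylesheet_for helper are invented, and the substituted hex values are drawn from the Okabe-Ito colorblind-safe palette. The point is Lanier’s: the profile belongs to the user, so the restyling is agency rather than influence.

```python
# Hypothetical per-user restyling: the user's declared profile, not the
# platform, decides how the page renders.
PALETTES = {
    "default":      {"accent": "#d62728", "ok": "#2ca02c", "warn": "#ff7f0e"},
    # Red/green contrasts swapped for blue/orange (Okabe-Ito values),
    # which stay distinguishable under deuteranopia.
    "deuteranopia": {"accent": "#0072b2", "ok": "#56b4e9", "warn": "#e69f00"},
}


def stylesheet_for(profile: dict) -> str:
    """Render CSS custom properties for one user's declared needs."""
    palette = PALETTES.get(profile.get("color_vision", "default"),
                           PALETTES["default"])
    scale = profile.get("font_scale", 1.0)
    rules = [f"  --{name}: {value};" for name, value in palette.items()]
    rules.append(f"  --base-font-size: {16 * scale:.0f}px;")
    return ":root {\n" + "\n".join(rules) + "\n}"


print(stylesheet_for({"color_vision": "deuteranopia", "font_scale": 1.25}))
```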

Big picture

  • “What kind of answer do we want when we ask why? Revealing the indispensable antecedent examples from which the bot learned its behavior would provide an explanation. We could react to that output differently, and adjust the inputs of the model to improve it.”
  • “The initial proposals for digital-network architecture, put forward by the monumental scientist Vannevar Bush in 1945 and the computer scientist Ted Nelson in 1960, preserved provenance. Now A.I. is revealing the true costs of ignoring this approach.”
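
The first quote here treats explanation as retrieval: reveal the antecedent examples behind an output. Real influence estimation over a large model is an open problem, so the sketch below substitutes a crude bag-of-words cosine similarity; the antecedents function and the corpus it searches are invented for illustration.

```python
import math
from collections import Counter


def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def antecedents(output_text: str, corpus: dict[str, str], top_k: int = 3):
    """Rank training items by similarity to a model output.

    Word overlap is only a stand-in for real influence estimation, but
    the interface (output in, ranked sources out) is the point."""
    out_vec = Counter(output_text.lower().split())
    scored = [(doc_id, _cosine(out_vec, Counter(text.lower().split())))
              for doc_id, text in corpus.items()]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)[:top_k]


print(antecedents(
    "a talking cat in an oil painting",
    {"painting-001": "an oil painting of a cat",
     "essay-007": "notes on network provenance"},
))
```

Swapping in embeddings or influence functions changes the scoring, not the interface; what Bush’s and Nelson’s architectures would have preserved is the provenance that makes the lookup possible at all.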

Interview with Flo Read of UnHerd

  • On bad internet behavior: “The antidote is universal clarity, context, transparency, which can only come about by revealing people, since revealing ideas is impossible because we don’t know what an idea is.”
  • On Spotify and similar music hub services: “Everything should be a mashup and we don’t need to know who the musician was and they don’t need to have bargaining power in a financial transaction …. I think that was a gigantic wrong turn that we can’t afford to repeat with AI because it gets amplified so much that it could really destroy technology…. The availability of music to move through the internet was not the problem. It’s the surrounding material…. What really screwed musicians was the idea that you build a business model demoting the musician, demoting the person and instead elevating the platform.”
  • “You might be able to learn new options for consilience between different points of view that way, which could be extraordinary. Many people have been looking at … could we actually do this to help us understand potential for cooperation in policy that we might not see if we seem to have irreconcilable differences about how to handle something like land use. Is there some possibility that this mashup thing might uncover some strains of potential cooperation [that] is hard to see otherwise?”
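
As a toy version of that last idea (all data invented; word-overlap cosine similarity standing in for whatever a large model would actually surface), one could rank statement pairs from opposing camps by how much they already agree:

```python
import math
from collections import Counter


def _vec(text: str) -> Counter:
    return Counter(text.lower().split())


def _cos(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def common_ground(camp_a: list[str], camp_b: list[str],
                  threshold: float = 0.4):
    """Return cross-camp statement pairs that overlap strongly,
    ranked by similarity: candidate 'strains of potential cooperation'."""
    pairs = [(_cos(_vec(sa), _vec(sb)), sa, sb)
             for sa in camp_a for sb in camp_b]
    return sorted((p for p in pairs if p[0] >= threshold), reverse=True)


print(common_ground(
    ["keep green space when we build housing"],
    ["we should keep green space near new housing"],
))
```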

References

  • Lanier, Jaron. “There Is No AI.” The New Yorker, 20 Apr 2023.
  • Read, Flo. “Jaron Lanier: How humanity can defeat AI.” UnHerd (YouTube), 2023.

Updated on February 1, 2024

Written on September 21, 2023