Technological Singularity Apocalypse

It’s not just venture capitalists and billionaires making important predictions about A.I. In a new, elaborate 232-page report, the Pew Research Center canvassed more than 300 experts across industries about the changes they predict technological developments will bring by 2035.

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization.

According to the most popular version of the singularity hypothesis, I. J. Good’s 1965 intelligence-explosion model, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles.

Each successive, more intelligent generation would appear more and more rapidly, causing a rapid increase (an “explosion”) in intelligence that would ultimately result in a powerful superintelligence qualitatively far surpassing all human intelligence.
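To make the feedback-loop intuition concrete, here is a minimal toy sketch, not taken from Good or the Pew report: the one-number notion of “capability,” the growth rule, and every parameter are illustrative assumptions. The point is only that when returns on self-improvement are super-linear, each generation arrives with a bigger jump than the last and the numbers run away.

```python
# Toy model of I. J. Good's intelligence-explosion feedback loop.
# Everything here is an illustrative assumption: "capability" is a single
# number, and each generation improves its successor in proportion to
# capability ** alpha. With alpha > 1 (super-linear returns on
# self-improvement), each step is larger than the last and growth runs away.

def run_feedback_loop(capability: float, alpha: float, rate: float, steps: int) -> list[float]:
    """Iterate capability_{n+1} = capability_n + rate * capability_n ** alpha."""
    history = [capability]
    for _ in range(steps):
        capability += rate * capability ** alpha
        history.append(capability)
    return history

if __name__ == "__main__":
    # Super-linear regime: watch the per-generation gains accelerate.
    for generation, c in enumerate(run_feedback_loop(1.0, alpha=1.5, rate=0.1, steps=25)):
        print(f"generation {generation:2d}: capability {c:10.2f}")
```

Run over 25 generations, the per-step gains grow from fractions to thousands; the continuous analogue of this recurrence actually diverges in finite time, which is the mathematical sense behind the word “explosion.”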

The report compiles the views of researchers, developers and business leaders in global organizations, technology companies and research labs to organize a wealth of perspectives on current technological trends.

From industry experts like Alon Halevy, the director of Meta’s Reality Labs, to Eileen Donahoe, the executive director of the Stanford Global Digital Policy Incubator, the combined predictions of these artists, authors, innovators, developers, business and policy leaders, researchers, and academics sketch a look into the future.

While there’s an overarching message gleaned from the report, there’s also a variety of personal, philosophical, religious, and political ideologies at play in the individual responses.

So what does the future hold for humanity? Well, roughly four out of 10 experts said they were just as concerned as they were excited about the changes in the humans-plus-tech evolution, according to the report.

Wary voices form a clear majority overall, comprising the 42% of experts who said they are equally excited and concerned and the 37% who said they are more concerned than excited about the changes they expect in the humans-plus-tech evolution by 2035.

The report synthesizes a warning: Our future depends on the good or ill intent of the next generation as they build the knowledge ecosystem, either to serve the public good or to serve the current, highly extractive iteration of the web.

The commonly discussed fears are present — plutocracy and dictatorial reign, social collapse, a mental health crisis due to isolation, losing a sense of truth and scientific accuracy.

And, of course, the Skynet future of total domination through autonomous warfare of the nuclear and cyber variety.

This is my greatest fear. Since the technological singularity was first proposed, the marriage of man and machine has proceeded at a pace that worries even the boosters of artificial general intelligence (AGI).

It might come as no surprise that one of the more pervasive themes in the report is the fear of profit and power-driven incentives in economics and politics.

“Censorship, social credit and around-the-clock surveillance will become ubiquitous worldwide; there is nowhere to hide from global dictatorship,” Fenton writes.

Human knowledge will wane, and a growing idiocracy will take hold, due to the public’s digital brainwashing and the snowballing of unreliable, misleading, and false information.

The report also brings up a world of specific—perhaps more frightening—hypotheticals that in some ways are already playing out.

A good way to think about a proposed technology is to ask: What would 4chan do with it?

“Connecting computational biology to wet-lab synthesizers is just a matter of money and expertise. What will 4chan do with LLM tools?” Rheingold writes, referring to the online message board that’s home to many hackers.

It’s safe to say that even the most damning voices are predicated on the belief that it’s not already too late. With proper regulation and effort, the future can still be what humanity dreams; even further down the road, it may not be too late to change course.

When we awake from this trans-humanist fever dream of human perfection that bears little resemblance to the actual world we’ve managed to create, I think steady efforts at preserving the core values of the humanities will have proved prescient.

This massive and imposed technological infusion will be seen as a chimera. Perhaps we’ll even learn how to use some of it wisely.

The Hungarian-American mathematician John von Neumann (1903-1957) became the first known person to use the concept of a singularity in the technological context.

Stanislaw Ulam reported in 1958 an earlier discussion with von Neumann centered on the accelerating progress of technology and changes in human life, which gave “the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Subsequent authors have echoed this viewpoint.

The concept and the term “singularity” were popularized by Vernor Vinge, first in a 1983 article claiming that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to the knotted space-time at the center of a black hole, and later in his 1993 essay “The Coming Technological Singularity,” in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.

He wrote that he would be surprised if it occurred before 2005 or after 2030. Another significant contributor to the notion’s wider circulation was Ray Kurzweil’s 2005 book The Singularity Is Near, which predicted the singularity by 2045.

Some scientists, including Stephen Hawking, have expressed concern that artificial super-intelligence (ASI) could result in human extinction.

The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated.

Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, and Gordon Moore.

One claim is that artificial intelligence growth is likely to run into diminishing returns instead of accelerating ones, as has been observed in previously developed human technologies.
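To see what this objection means in miniature, here is a variation on the toy loop sketched earlier. It is again a sketch under assumed parameters, not a model drawn from the critics named above: making the returns on self-improvement sub-linear removes the explosion entirely, leaving only steady, mundane growth.

```python
# Counterpoint sketch: the same self-improvement recurrence as before,
# but with sub-linear returns (alpha < 1), loosely matching the critics'
# claim that technologies hit diminishing returns. Parameters are
# illustrative assumptions; only the qualitative contrast matters.

def self_improve(capability: float, alpha: float, rate: float, steps: int) -> float:
    """Apply capability += rate * capability ** alpha for `steps` generations."""
    for _ in range(steps):
        capability += rate * capability ** alpha
    return capability

if __name__ == "__main__":
    explosive = self_improve(1.0, alpha=1.5, rate=0.1, steps=25)  # accelerating returns
    mundane = self_improve(1.0, alpha=0.5, rate=0.1, steps=25)    # diminishing returns
    # With alpha < 1 the loop still grows, but only at a polynomial crawl
    # (roughly quadratic here), with no finite-time blow-up.
    print(f"after 25 generations: accelerating -> {explosive:,.1f}, diminishing -> {mundane:.2f}")
```

Both runs use the identical loop; the only difference is the exponent, which is the whole substance of the disagreement between the explosion hypothesis and its skeptics.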

Fortune.com / ABC Flash Point News 2024.
