The Reality Behind Technological Singularity and Artificial Intelligence Takeovers

If there is one topic spanning tech, Artificial Intelligence, and the future that is perpetually hyped, it is the technological singularity.

A Robocalypse Hype-Alooza!

Every once in a while, a pop culture phenomenon arises, usually from the world of cinema, and preys on humanity’s fear of a robocalyptic scenario. The latest in this trend was The Mitchells vs. the Machines. The time that movie took between introducing new robot technology and revealing it to be genocidal villainy was precisely two minutes. Even the Marvel universe, which usually relies on fantastical alien races for its villainy, had a robocalypse edition in Avengers: Age of Ultron.

Technological Singularity at its Extreme in Pop Media
Avengers: Age of Ultron and Mitchells vs. the Machines are two popular examples of the robocalypse genre.

It seems that as soon as these robots become autonomous, they jump straight to Hitler mode and start plotting the end of humanity. I mean, what about things like character development, evolution, and arc? Human villains are typically given far more complex reasons and elaborate motivations to explain their enemy status. With high-functioning robots, on the other hand, animosity is assumed by default. Rarely is there a satisfying arc explaining why a robot would stop at nothing short of complete annihilation.

Do We Really Fear Sentient Robots?

This extreme depiction of AI reveals the deep-seated fear we all have of a world in which our programs break free of, well, the programming. This fear does not just live in the heads of imaginative storytellers in print and film, either. It is a fairly common topic that writers explore, complete with expert opinions and pertinent science. Search for the singularity and you get headlines like “Reaching the Singularity May be Humanity’s Greatest and Last Accomplishment” (Air & Space Magazine) and “How governments can stop the rise of unfriendly, unstoppable super-AI” (The Conversation). A few other headlines try to soothe our fears with dubious assurances: “Don’t Worry About the AI Singularity: The Tipping Point is Already Here” (Forbes).

It’s time to explore how much relevance these fears have to actual science. How realistic are they, and how likely is the singularity anyway?

Artificial Intelligence Rises: Vernor Vinge Coins a Term

Vernor Vinge, the math and computer science professor and science fiction author who first wrote about the Technological Singularity signaling the end of humanity. Photo from WIRED.

Vernor Vinge, a professor of math and computer science at San Diego State University, was the first to use the term in a technological sense. He predicted in 1993 that “within 30 years, we will have the technological means to create superhuman intelligence.” The very next sentence read: “Shortly after, the human era will be ended.”

It would seem that the originator of the term singularity also originated the fears associated with it. Other well-known science and tech juggernauts who have voiced such fears include Stephen Hawking and Elon Musk. But not all giants of the field invoke such fears; some talk about the singularity with optimism.

How Technological Singularity Will Happen: The Kurzweil Curve

Consider Ray Kurzweil’s influential 2005 book, The Singularity Is Near. In it, Kurzweil argued that GNR, a combination of “accelerating returns” in the fields of genetics, nanotechnology, and robotics (AI), will overtake and replace human intelligence.

Graphic summarizing Kurzweil’s notion of Singularity. Source: Transcendent Man.

Technological Singularity in the Words of Other Key Experts

Many other experts have expressed their ideas on Singularity. However, their sayings are not necessarily accompanied by warnings or predictions of the end of humanity as a result.

The concept of Artificial Intelligence, Machine Learning, and Technological Singularity from other titans of the industry is less fatalistic than in Vernor Vinge’s essay. Left to Right: Alan Turing. Ray Kurzweil. Douglas Hofstadter.

Consider these quotes:

“It is customary to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine. I cannot offer any such comfort, for I believe that no such bounds can be set.”
—Alan Turing, 1951

“Computing hardware can do anything that a brain could do, but I don’t think at this point we’re doing what brains do. We’re simulating the surface level of it, and many people are falling for the illusion.” — Douglas Hofstadter, 2017

“Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control.”
—Ted Kaczynski (the Unabomber), 1995

Technological Singularity May Be Less Extreme Than We Think

Notice what the quotes by Turing and Hofstadter have in common. The key word in Turing’s quote is imitated. Similarly, the Hofstadter quote comes from an interview on how even today AI only imitates the “surface level” of human cognition. Kaczynski’s quote, meanwhile, focuses on the complexity of future decisions requiring AI. What brains do is qualitatively and categorically different from what machines do, even when those machines run on deep learning principles designed to let them learn and improve.

When Vinge talked of singularity, he hinged his prediction on large computer networks somehow “waking up” (or humanity somehow creating conscious, “awake” AI), or on biological systems or brain-computer interfaces taking human intelligence to a transcendent level. This second possibility is also what Kurzweil called singularity. In a recent interview, Kurzweil rejected the notion of a brilliant Artificial Intelligence mastermind enslaving humanity as “fiction.” What he predicted was bionic convergence: what he calls “immortal software-based humans.” If you’re thinking of Transcendence, the Johnny Depp flop, you are correct.

The flop movie Transcendence comes closer to Ray Kurzweil’s idea of Technological Singularity than a typical Robocalypse genre movie. Movie Poster via LA Mag.

Three Different Notions of Singularity

It is clear when experts talk about singularity, they are focusing on different issues rather than robots taking over. Let’s explore the three ways the concept of singularity is used in the field today and how possible or impossible each really is.

Superintelligence – The Singularity That Is Already Here

In its strictest terms, we have already achieved (or are close to achieving) technological singularity. Many jobs traditionally done by humans are now run far more efficiently by robotics and software. Automated assembly lines in factories, and solutions in industries spanning healthcare to the increasingly important field of cybersecurity, have the potential to boost overall productivity by billions, if not trillions, of dollars in just two decades. Bionic solutions are helping bypass disability every day, such as the bionic eye developed in Australia, low-cost bionic limbs in Tunisia, and bionic legs at MIT.

All these technologies are imitation-based and achieve their super-status through scale, speed, and complexity. There is no risk of demons waking up here. The fears in this thread of singularity are of a different kind entirely: what cognitive scientist Gary Klein calls The Second Singularity, which we are hurtling towards.

A Mural from Miami’s Art District depicting human-friendly Artificial Intelligence.

The Second Singularity We Must All Fear

This second singularity is the eradication of human expertise and its replacement by AI automation. It is the permanent loss of “tacit knowledge, the perceptual skills, the mental models” that workers in healthcare, control room specialists, teachers, trainers, and even case workers in social care institutions develop over time with experience.

Economic Singularity is another feared consequence. As self-improving and self-replicating machines increasingly take over in the near future, who will hire humans, and how much will CEOs be willing to pay per hour for human services as they become obsolete? Cautious futurists are now saying it is the profit-maximizing CEOs we really need to be afraid of. Loss of human earnings could reach a point where machines are churning out products and running services that no one can afford to buy.

Bionic Singularity — Cracking the Neural Code

AI may be scarily good or fast at what we program it to do, but transcending its programming and becoming truly self-sufficient and independent is a different order of business.

Bionic Intelligence approaches Ray Kurzweil’s idea of Technological Singularity. However, progress is slow and incremental. via freak.

The biggest scientific obstacle to that: humans don’t really understand what makes us the conscious, self-determining selves we are. John Horgan summed it up succinctly a few years back: “Bionic convergence and psychic uploading won’t be possible unless we crack the neural code, science’s hardest problem.”

What is the neural code? In the words of Richard Gao, a computational neuroscientist: “It’s as if these neurons were an ensemble of orchestral musicians, coming together to perform an unknown composition. Our goal as neuroscientists is to uncover these compositions and to better understand the organizing principles behind cooperative neural activity — and how they drive complex behaviors.”

In short, while we understand a great deal about how the brain works, we still have no clue exactly how millions of firing neurons come together to create the conglomeration of unique sensations and thought processes that is our conscious mind. If we believe that we, or our programmed creations, can one day jump beyond our organic limitations and truly work out how we are made and how we work, or design programs that do, we are entertaining some very unscientific assumptions.

It’s all well and good to fantasize about an era of transcendent AI, but how many of these futuristic takes are grounded in the status quo of the field? Source: Transcendent Man.

Conscious/Autonomous AI — The Machine Learning Goals

This is the Singularity that we’re hoping to reach through deep learning.

But again, deep learning in machines runs into that nifty “cracking the neural code” problem. Goals and hopes aside, current machine learning models are still in the elementary school of this field. Problems like rising costs, funding acquisition, and energy consumption complicate the path of progress.

Are We Hitting the Wall Before We Even Reach Singularity?

Facebook’s head of AI research, Jerome Pesenti, made headlines in 2019 when he admitted that deep learning would soon hit a wall because the cost of the top experiments in the field was going up ten-fold every year. He also admitted that there is no real model, for humans or AI, of improving one’s own intelligence, something that is a necessary condition for even considering singularity a realistic possibility.

In one survey, attendees of an AGI conference were asked to predict when AI would reach specific milestones, such as passing the Turing test, passing 3rd grade, making a Nobel-worthy scientific breakthrough, and attaining true superhuman intelligence. Here are the results.

Realistic Estimation by AI experts of Achieving Different Singularity Milestones. From: How Long Until Human-Level AI?

Take another look at the title of the graph: “Without Funding.” Scientists are, of course, aware that humanity’s ability to create any AI hinges on the flow of resources.

So Where’s the Deep Machine Learning Singularity Really At?

By now it should be clear that the singularity type with any hope of ever resembling its pop culture avatar hinges on the deep machine learning models inspired by the neural code.

I’ve already called these models elementary, and experts believe this singularity may never be achieved, especially without a guaranteed flow of funding. Let’s take a look at why that is so:

  1. For every specific task that an AI program or machine is trained for, massive amounts of human data are required. Then the AI has to be trained for months, and often years, to achieve the level of success we see in tech stories.
  2. All the weaknesses and biases in the datasets infiltrate the decision-making prowess of the AI. For instance, if the dataset came from university student populations, the AI would run into trouble when performing for other populations. If the human variables in the data had inadequate sampling of people from a particular cultural background, then its functioning would be discriminatory towards those people — a topic we will return to in the future in this column.
  3. Even with full resources and perfect training, AI tools cannot generalize to learning situations that vary from their specific training scenarios. Adapting or expanding their functioning would mean redesigning and then retraining the AI.
  4. Fixing a single error in the AI means retraining, which means reacquiring unproblematic, unbiased, and fully representative datasets. All of that requires a doubling up of resources, including energy, funding, and time. And this doubling up becomes necessary every time a flaw or limitation is discovered in the AI’s real-world application.
Machine Learning may not be what people assume. Source: KDNuggets.com.
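Points 2 and 3 above can be seen in miniature with a toy sketch. The code below is an illustration only, not any real AI system: a trivial one-dimensional "classifier" is fit on one hypothetical population, then evaluated on a shifted population representing people the training data under-sampled. Its accuracy drops sharply, which is the same failure mode, at a vastly smaller scale, that biased or narrow training data produces in real deep learning models.

```python
import random

random.seed(0)

def make_data(n, mean_a, mean_b):
    # Two classes drawn from 1-D Gaussians (a stand-in for a "population").
    data = [(random.gauss(mean_a, 1.0), 0) for _ in range(n)]
    data += [(random.gauss(mean_b, 1.0), 1) for _ in range(n)]
    return data

def fit_threshold(data):
    # "Training": place the decision boundary midway between class means.
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2

def accuracy(data, threshold):
    # A point is classified as class 1 when it falls above the threshold.
    correct = sum(1 for x, y in data if (x > threshold) == (y == 1))
    return correct / len(data)

train = make_data(500, 0.0, 4.0)    # the population the model was fit on
shifted = make_data(500, 3.0, 7.0)  # same task, under-sampled population
t = fit_threshold(train)

print(round(accuracy(train, t), 2))    # high accuracy in-distribution
print(round(accuracy(shifted, t), 2))  # noticeably lower after the shift
```

The fix for the shifted population is exactly what point 4 describes: go back, collect representative data, and retrain, since the fitted threshold itself cannot adapt.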

In Sum: Technological Singularity Is Nothing to Be Afraid Of

As Eleni Vasilaki, Professor of Computational Neuroscience, reminds us, we make key unscientific assumptions when fearing technological singularity.

  • There is no ghost in the machine. Despite the complexity of the computer chess, Go, and Jeopardy players that beat humans, they remain mechanical operators designed to self-learn within given parameters. There is nothing latent — no soul or consciousness — to arouse and rebel.
  • Despite its great achievements, all AI remains sorely dependent on its specialized training. In that respect, it is worse than a human toddler. A toddler can learn by watching a simple task performed once. To learn the same task, AI would require a huge amount of training, resources, and time.

My advice for the screenwriters out there—if any bother to read our website actually reporting on the stuff they fantasize about—is to scale down the mindless hysteria. In its place, come up with creative ways of presenting science that light the minds of eager audiences on fire and inspire them to invention and discovery.
