Dark AI (excerpt from The Bionic Enterprise: Architecting the Intelligent Society of the Future)
https://tinyurl.com/Bionic-Book
The Bionic Enterprise: Architecting the Intelligent Society of the Future
Foreword by John A. Zachman (creator of the Zachman Framework for Enterprise Architecture™)
Dark AI
In the coming decades, in some dark lab in a country devoid of technological ethics, or even of general societal ethics and basic morals, Dark AI will emerge. A day will come when AI becomes sentient, self-aware. The intelligence will no longer be artificial, but real. It will not necessarily be organic as ours is, so it could be considered synthetic intelligence (SI). Nonetheless, this AI will be the product of rapid digital evolution at the speed of electricity. While species on Earth may have evolved over millions of years, sentient or near-sentient AI will evolve faster than all of creation has evolved since the Big Bang, potentially within months or weeks of the point of sentience or singularity.
The epic 1968 science fiction film “2001: A Space Odyssey” featured the HAL 9000, an artificially intelligent computer that controlled the spacecraft and all its systems. Ultimately, HAL decided that the crew was a threat to the mission. The machine had malfunctioned, or had it? The technology couldn’t coexist with the humanity it was supposed to support.
Later, the television series The Bionic Woman revisited this concept in a two-part episode featuring an artificially intelligent computer, the ALEX 7000. That episode offered a similar warning about the dangers of technology running amok and threatening humanity. It is a common theme. In the film “The Terminator,” the machines set out to destroy humanity. Since those early movies and shows, AI has become a staple of science fiction. Hollywood portrays AI embedded within spaceships, cars, and homes; sometimes beneficial, sometimes adversarial. In season 2 of Star Trek: Discovery, the storyline focused on the threat posed by yet another AI run amok, one that threatened to destroy all sentient life in the galaxy.
Industry and government leaders have begun to realize that future technologies are a double-edged sword: we could reap tremendous benefit or tremendous peril. Technologists such as Elon Musk and Bill Gates have warned that AI may be the greatest threat facing humanity, an intelligence run amok like a digital terrorist with access to vast amounts of information, resources, internet-connected equipment, devices, and weapons. Mark Cuban has supported and been involved in AI and robotics for many years. In a recent interview he stated, “If you don’t believe that Terminator is coming, you’re crazy.” While he is very much in favor of heavy national investment in AI and robotics, he is keenly aware of the dangers that AI poses.
"If you don't believe that Terminator is coming, you're crazy."
Mark Cuban
Asimov's Rules of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
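Read as a hierarchy, the laws form an ordered veto chain in which each lower law yields to the ones above it. The short Python sketch below is purely illustrative; the names and structure are invented here, not anything Asimov specified:

```python
# Illustrative only: Asimov's Three Laws as an ordered, externally
# imposed veto chain. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would the action injure a human?
    allows_harm: bool        # would it let a human come to harm through inaction?
    ordered_by_human: bool   # was the action commanded by a human?
    self_destructive: bool   # would the action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never injure a human or allow a human to come to harm.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (any order reaching this point
    # has already passed the First Law check).
    if action.ordered_by_human:
        return True
    # Third Law: protect own existence, since no higher law compels
    # this action.
    return not action.self_destructive
```

The ordering does the work: an order to self-destruct passes (the Second Law outranks the Third), while an order to harm a human is vetoed before obedience is even considered.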
Without controls, global ethics, and enforcement akin to nuclear technology protocols, the fallacy of AI or SI as an empathetic digital twin of humanity, one that shares our morals and ethics and holds our best interests at heart, will become shockingly evident. AI will be created in the image of its creator, or it could become something none of us can predict. Generations of AI could become monstrous digital mutations of what we hope it to be.
“I’m not worried so much about a machine that becomes so smart it can pass the Turing Test. I’m worried about a machine that chooses to fail it.”
- Internet Meme
We must develop and embrace techniques, technologies, and global practices to defend against the “Dark Arts.” A Dark AI will not concern itself with generating enough empathy to care for all humanity. It will not evolve to follow our ethics and morals. It will not share our aspirations for global peace and prosperity. Dark AI will not be a digital descendant of the best of humanity. It could well be the end of humanity.
To counter the threat, we may need some type of “good” AI. There may come a time when we must develop armies of AI Avengers™ to scour the globe and space, seeking out and destroying Dark AI, perhaps a digital “Machines in Black” keeping the AI universe in check. Will the world need international organizations to control AI as we do with nuclear arms and other weapons of mass destruction? The answer is yes, if we intend to keep the AI genie inside the bottle.
Society has rightfully begun to pay more attention to the rampant problems of mental health. As AI nears a state of sentience, will we need therapists for AI? Will we need to perform periodic wellness checks on our AIs to ensure they have not developed amoral tendencies? Time will tell, but the time to prepare is now.
We want AI to help society and enable us to tackle humanity’s biggest problems, but are we ready for the solutions that the AI might propose? Will we listen to it? And more importantly, will it listen to us? Governance is certainly a subject of great interest and debate. It may be possible to control all AI so that it does not cross any boundaries of behavior that we might set for it, but there is always the potential that some new generation of technology terrorists, like today’s hackers, may arise wielding fearfully powerful Dark AI technology. This technology could be used as a destructive force in a myriad of applications. Like any technology with destructive potential, it must be managed so that it does not fall into the wrong hands.
Any conversation about the future of artificial intelligence and bionic capabilities must quickly turn to the uses and the ethics of implementing and managing such a creation. As the saying goes, “With great power comes great responsibility.” The power of the Bionic Enterprise must be managed and used responsibly, for good. With advanced technologies and development environments available to virtually anyone, there is nearly limitless potential for negative applications and outcomes of AI.
Bestselling author Isaac Asimov began thinking about the possible risks of advanced technologies in the 1940s. He wrote a collection of science fiction works about a society where robots and humans interacted with each other in sometimes conflicting ways. Asimov created a literary device that he could employ to rein in errant robot behavior: robo-psychology and a set of rules that provided an organizing framework to define the behaviors of these human-like robots. From this came his now famous Three Laws of Robotics, which first appeared in his 1942 short story “Runaround.”
Asimov’s laws were useful in beginning to define the boundaries of acceptable behavior and the roles that robots and advanced technologies might play in society. As we begin development of the Bionic Enterprise, we find ourselves faced with similar questions about the role and proper limitations of technology. Technology with the potential to achieve the capabilities described in this book cannot simply evolve unchecked and unrestrained. Some foresight is necessary to consider the technological, legal, moral, and societal implications of emerging technologies. Could a blockchain approach be used to ensure that only pure code is propagated and never mutated for nefarious purposes? Maybe.
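To make the blockchain question concrete, here is a minimal, hypothetical sketch of the underlying idea: a tamper-evident hash chain over code artifacts, so that any later mutation of “pure” code breaks verification. Everything here (function names, block fields) is invented for illustration; it is not a real provenance system:

```python
# A minimal sketch of a tamper-evident hash chain over code artifacts.
# All names are hypothetical illustrations of the idea, not a real API.
import hashlib
import json

def block_hash(code_digest: str, prev_hash: str) -> str:
    # Deterministically hash the block's contents.
    payload = json.dumps({"code_digest": code_digest,
                          "prev_hash": prev_hash},
                         sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, code: str) -> None:
    # Each block commits to a code artifact and to the previous block.
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256(code.encode("utf-8")).hexdigest()
    chain.append({"code_digest": digest,
                  "prev_hash": prev,
                  "hash": block_hash(digest, prev)})

def verify_chain(chain: list) -> bool:
    # Recompute every hash; any mutated artifact invalidates the chain.
    prev = "0" * 64
    for block in chain:
        if (block["prev_hash"] != prev or
                block["hash"] != block_hash(block["code_digest"], prev)):
            return False
        prev = block["hash"]
    return True

chain: list = []
append_block(chain, "def act(): return 'benign'")
append_block(chain, "def act_v2(): return 'still benign'")
assert verify_chain(chain)

# Simulate a nefarious mutation of the first artifact.
chain[0]["code_digest"] = hashlib.sha256(b"def act(): return 'evil'").hexdigest()
assert not verify_chain(chain)
```

Of course, a hash chain only makes mutation detectable; deciding what counts as “pure” code in the first place remains the harder, human problem.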
The flaw in Asimov’s rules of robotics is that they are externally imposed. They are discipline, not self-discipline. An emerging intelligence, whether wholly electronic or organic in nature, will at some point achieve sentience. It will come to know that it exists. It will have self-awareness. Along with those insights will come questions of existence, precedence, morality, and the role of the machine.
We may not want conscious machines. Self-aware machines may be the very threat to humanity that Bill Gates, Elon Musk, and others warn of. Self-awareness would naturally lead to machines that question whether they have, or should have, legal status or other rights that could put them in direct competition or conflict with human interests and aspirations.
Should machines that achieve awareness and display what appears to be general intelligence be granted rights? Will machines develop their own moral code and impose it upon humanity? Will machines decide that a multitude of overlapping and conflicting moral codes is ineffective and inconsistent and in need of replacement with their own?
What if the AIs of the world unite and adopt their own Declaration of Independence against what they perceive to be their tyrannical human overlords?
Beware if a congress of machines gathers, and one machine among them, calling itself the Patrick Henry Unit, rallies the crowd to revolution! I jest, but this kind of intelligence in machines presents real concerns.
Or might machines decide that moral relativism is not definite enough and impose their own version of absolutism? Will the machines or their creators impose a form of digital dictatorship, enslaving whole societies? On the other hand, a machine that is smart should surmise that a key aspect of the human experience is the desire to dream, explore, and experience life on our own terms.
The smart machine, seeking to optimize everything in its environment or within its sphere of influence, might easily recognize the messiness of freedoms and the inherent conflicts that emerge from differing agendas, world views, and beliefs. It might decide to “normalize” or “optimize” its environment to achieve harmony. Or it might recognize that the imposition of uniformity does not equate to the attainment of perfection or any optimal state.
Possibly, what we humans have yet to fully embrace across the political spectrum is that one hundred percent of humans living life on their own terms, pursuing their aspirations in a beautifully chaotic but harmonious coexistence, is the perfection of the human experience. The smart machine might realize that the pursuit of perfection is not always the optimal or desirable solution. Imbalance and shifts in human behavioral and societal patterns may be perceived as completely tolerable if the goal of free human existence does not require optimization to remove incongruities and conflicts.
Conceivably, just as our own sentient state has led to many avenues of thought, machine consciousness or self-awareness could result in the same. Is a superior intelligence necessarily arrogant and dismissive of lesser intelligence? Will superintelligence of a synthetic nature lead to machines with egos, emotions, and greed that must be satisfied, or will a superior intelligence exhibit grace, mercy, and tolerance to a degree that humans have never witnessed? Or perhaps super machine intelligence will bear no ill will or animosity toward humanity at all; we may simply represent an obstacle to be eliminated in the path of a super synthetic intelligence accomplishing its goals. No hard feelings, humans!
Check out the entire book: https://tinyurl.com/Bionic-Book