by Ryan Lovelace
November 19, 2024
from The Washington Times Website
Former U.S. Secretary of State Henry Kissinger
attends a luncheon with French President Emmanuel Macron,
Vice President Kamala Harris and Secretary of State Antony Blinken,
Thursday, Dec. 1, 2022, at the State Department in Washington.
The former secretary of state exerted uncommon influence
on global affairs under Presidents Richard Nixon and Gerald Ford,
earning both vilification and the Nobel Peace Prize.
Died Nov. 29, 2023.
(AP Photo/Jacquelyn Martin, File)
Trilateral Commission members Eric Schmidt and the late Henry Kissinger express the endgame of fellow member Zbigniew Brzezinski’s Technetronic Era: those who will control the world will be the superhumans who are genetically hacked or otherwise merged with advanced technology like AI.
Their book, ‘Genesis’, talks about taking intelligent design out of God’s hands and giving it over to posthuman designers of co-evolution.
My book, The Genesis of Modern Globalization, details how the Trilateral Commission harnessed Technocracy and Transhumanism in its drive to take over the world.
Fifty years later, nothing has changed.
However, both Kissinger and Brzezinski are now in the grave and would tell us another story if they could.
Humanity must begin preparations to no longer be in charge of Earth because of artificial intelligence, according to a new book from the late statesman Henry Kissinger and a pair of the country’s leading technologists.
The rise of AI creating “superhuman” people is a major topic of concern in “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” published by Little, Brown and Company.
It’s the “last book” from Kissinger, according to the publisher’s parent company Hachette. Kissinger was a longtime U.S. diplomat and strategist who died last year at age 100.
Kissinger’s co-authors, former Google CEO Eric Schmidt and longtime Microsoft senior executive Craig Mundie, finished the combined work after Kissinger’s death.
Mr. Schmidt and Mr. Mundie wrote that they were among the last people to speak with Kissinger and sought to honor his dying request to finish the manuscript.
The authors offer a bracing message, warning that AI tools have already started outpacing human capabilities, so people might need to consider biologically engineering themselves to ensure they are not rendered inferior or wiped out by advanced machines.
In Chapter 8, in a section titled “Coevolution – Artificial Humans,” the three authors encourage people to think now about “trying to navigate our role when we will no longer be the only or even the principal actors on our planet.”
“Biological engineering efforts designed for tighter human fusion with machines are already underway,” they add.
Current efforts to integrate humans with machines include brain-computer interfaces, a technology that the U.S. military identified last year as being of the utmost importance.
Such interfaces allow for a direct link between the brain’s electrical signals and a device that processes them to accomplish a given task, such as controlling a battleship.
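To make the idea concrete, here is a minimal sketch in Python of the pipeline such an interface implies, with synthetic random numbers standing in for recorded brain activity. The channel count, the threshold, and the command names are all hypothetical, not taken from any real device:

```python
import numpy as np

# Toy brain-computer interface pipeline: raw electrical activity ->
# feature extraction -> decoded command. All values are illustrative.

rng = np.random.default_rng(seed=0)
signal = rng.normal(0.0, 1.0, size=(8, 256))  # 8 channels, 256 samples

# Feature extraction: mean signal power on each channel.
power = (signal ** 2).mean(axis=1)

# Decoding: a real interface trains a classifier on labeled recordings;
# a crude threshold on the strongest channel stands in for one here.
command = "engage" if power.max() > 1.2 else "idle"
print(command)
```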
The authors also raise the prospect of a society that chooses to create a hereditary genetic line of people specifically designed to work better with forthcoming AI tools.
The authors describe such redesigning as undesirable, with the potential to cause “the human race to split into multiple lines, some infinitely more powerful than others.”
“Altering the genetic code of some humans to become superhuman carries with it other moral and evolutionary risks,” the authors write.
“If AI is responsible for the augmentation of human mental capacity, it could create in humanity a simultaneous biological and psychological reliance on ‘foreign’ intelligence.”
Such a physical and intellectual dependence may create new challenges in separating man from machine, the authors warn.
As a result, designers and engineers should try to make the machines more human, rather than make humans more like machines.
But that raises a new problem: choosing which humans the machines should follow in a diverse and divided world.
“No single culture should expect to dictate to another the morality of the intellects on which it would be relying,” the authors wrote.
“So, for each country, machines would have to learn different rules, formal and informal, moral, legal, and religious, as well as, ideally, different rules for each user and, within baseline constraints, for every conceivable inquiry, task, situation, and context.”
The authors say society can expect technical difficulties, but those difficulties will pale in comparison with designing machines to follow a moral code, as the authors said they do not believe good and evil are self-evident concepts.
Kissinger, Mr. Schmidt and Mr. Mundie urged greater attention to aligning machines with human values.
The trio said they would prefer that no artificial general intelligence surpassing humanity’s intellect is allowed to emerge unless it is properly aligned with the human species.
The authors said they are rooting for humanity’s survival and hope people will figure it out, but that the task will not be easy.
“We wish success to our species’ gigantic project, but just as we cannot count on tactical human control in the longer-term project of co-evolution, we also cannot rely solely on the supposition that machines will tame themselves,” the authors wrote.
“Training an AI to understand us and then sitting back and hoping that it respects us is not a strategy that seems either safe or likely to succeed”…
by Makia Freeman
December 8, 2017
from Freedom-Articles Website
We have reached the stage of AI Building AI.
Our AI robots/machines are creating child AI robots/machines.
Have we already lost control?
AI Building AI is the next phase humanity appears to be going through in its technological evolution.
We are at the point where corporations are designing Artificial Intelligence (AI) machines, robots and programs to make child AI machines, robots and programs – in other words, we have AI building AI.
While some praise this development and point out the benefits (the fact that AI is now smarter than humanity in some areas, and thus can supposedly better design AI than humans), there is a serious consequence to all this: humanity is becoming further removed from the design process – and therefore has less control.
We have now reached a watershed moment with AI building AI better than humans can.
If AI builds a child AI which outperforms, outsmarts and overpowers humanity, what happens if we want to modify it or shut it down – but can’t?
After all, we didn’t design it, so how can we be 100% sure there won’t be unintended consequences? How can we be sure we can 100% directly control it?
AI building AI: AutoML built NASNet.
Image credit: Google Research
AI Building AI – Child AI Outperforms All Other Computer Systems in Task
Google Brain researchers announced in May 2017 that they had created AutoML, an AI which can build child AIs.
The “ML” in AutoML stands for Machine Learning. As the article Google’s AI Built its Own AI that Outperforms Any Made by Humans reveals, AutoML created a child AI called NASNet which outperformed all other computer systems at its task of object recognition:
“The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task.
For this particular child AI, which the researchers called NASNet, the task was recognising objects – people, cars, traffic lights, handbags, backpacks, etc. – in a video in real-time.
AutoML would evaluate NASNet’s performance and use that information to improve its child AI, repeating the process thousands of times.
When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems.
According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set.
This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).”
With AutoML, Google is building algorithms that analyze the development of other algorithms, to learn which methods are successful and which are not.
This kind of machine learning, a significant trend in AI research, is akin to “learning to learn,” or “meta-learning.”
We are entering a future where computers will invent algorithms to solve problems faster than we can, and humanity will be further and further removed from the whole process.
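The controller/child loop the researchers describe can be sketched in a few lines. Google’s AutoML uses a reinforcement-learning controller network; in this toy version, plain random search over a made-up search space stands in for it, and a random number stands in for the child’s measured accuracy. Every name and value below is illustrative:

```python
import random

# Toy neural architecture search: a "controller" proposes child
# architectures, each child is scored, and the best proposal is kept.

SEARCH_SPACE = {
    "layers": [2, 4, 8],
    "filters": [16, 32, 64],
    "kernel": [3, 5],
}

def propose_child():
    """Controller step: sample a candidate child architecture."""
    return {name: random.choice(opts) for name, opts in SEARCH_SPACE.items()}

def evaluate(child):
    """Stand-in for training the child and measuring its accuracy
    on a task such as object recognition."""
    return random.random()  # placeholder reward

best_child, best_score = None, float("-inf")
for _ in range(1000):  # AutoML repeats this thousands of times
    child = propose_child()
    score = evaluate(child)
    if score > best_score:  # feed the result back into the search
        best_child, best_score = child, score

print(best_child, round(best_score, 3))
```

Notice what even this sketch makes plain: the human specifies only the outer search space and the reward, while the architectures themselves are proposed and refined by the program.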
AI building AI: how will humanity control child AIs when humans didn’t create them?
AI Building AI – Programmed Parameters vs. Autonomous and Adaptable Systems
The issue at stake is how much “freedom” we give AI.
By that I mean this: those pushing the technological agenda boast that AI is qualitatively different from any machine of the past, because AI is autonomous and adaptable, meaning it can “think” for itself, learn from its mistakes and alter its behavior accordingly.
This makes AI more formidable and at the same time far more dangerous, because then we lose the ability to predict how it will act. It begins to write its own algorithms in ways we don’t comprehend based on its supposed “self-corrective” ability, and pretty soon we have no way to know what it will do.
Now, what if such an autonomous and adaptable AI is given the leeway to create a child AI which has the same parameters? Humanity is then one step further removed from the creation.
Yes, we can program the first AI to only design child AIs within certain parameters, but can we ultimately control that process and ensure the biases are not handed down, given that we are programming AI in the first place to be more human-like and to learn from its mistakes?
In his article The US and the Global Artificial Intelligence Arms Race, Ulson Gunnar writes:
“OpenAI’s Dr. Dario Amodei would point out that research conducted into machine learning often resulted in unintended solutions developed by AI.
He and other researchers noted that often the decision making process of AI systems is not entirely understood and many results are often difficult to predict.
The danger lies not necessarily in first training AI platforms in labs and then releasing a trained system onto a factory floor, on public roads or even into combat with predetermined and predictable capabilities, but in autonomous AI systems being released with the capacity to continue learning and adapting in unpredictable, undesirable and potentially dangerous ways.
Dr. Kathleen Fisher would reiterate this concern, noting that autonomous, self-adapting cyber weapons could potentially create unpredictable collateral damage.
Dr. Fisher would also point out that humans would be unable to defend against AI agents.”
Hal 9000, the evil computer/machine from 2001: A Space Odyssey.
AI Building AI: Can We Ever Be 100% Sure We Are Protected Against AI?
Power and strength without wisdom and kindness is a dangerous thing, and that’s exactly what we are creating with AI.
We can’t ever teach it to be wise or kind, since those qualities spring from having consciousness, emotion and empathy. The best we can do is impose very tight ethical parameters; however, there are no guarantees here.
The average person has no way of knowing what code was created to limit AI’s behavior. Even if all the AI programmers in the world wanted to ensure adequate ethical limitations, what if someone, somewhere, makes a mistake?
What if AutoML creates systems so quickly that society can’t keep up in terms of understanding and regulating them?
NASNet could easily be employed in automated surveillance systems due to its excellent object recognition. Do you think the NWO controllers would hesitate even for a moment to deploy AI against the public in order to protect their power and destroy their opposition?
The article Google’s AI Built Its Own AI that Outperforms Any Made by Humans tries to reassure us with its conclusion:
“Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future.
Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organisation focused on the responsible development of AI.
The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI.
Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.”
However, I am anything but reassured. We can set up all the ethics committees we want. The fact remains that it is theoretically impossible to ever protect ourselves 100% from AI.
The article Containing a Superintelligent AI is Theoretically Impossible explains:
“…according to some new work from researchers at the Universidad Autónoma de Madrid, as well as other schools in Spain, the US, and Australia, once an AI becomes “super intelligent”… it will be impossible to contain it.
Well, the researchers use the word “incomputable” in their paper, posted on the ArXiv preprint server, which in the world of theoretical computer science is perhaps even more damning.
The crux of the matter is the “halting problem” devised by Alan Turing, which holds that no algorithm is able to correctly predict whether another algorithm will run forever or whether it will eventually halt – that is, stop running.
Imagine a super-intelligent AI with a program that contains every other program in existence.
The researchers provided a logical proof that if such an AI could be contained, then the halting problem would by definition be solved. To contain that AI, the argument is that you’d have to simulate it first, but it already simulates everything else, and so we arrive at a paradox.
It would not be feasible to make sure that [an AI] won’t ever cause harm to humans.”
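The diagonalization behind the halting problem, which the proof leans on, can itself be sketched in Python. The `halts` oracle below is assumed to exist only for the sake of contradiction; the point of the argument is that no such function can ever be written:

```python
def halts(program, argument):
    """Hypothetical perfect oracle: True iff program(argument) halts.
    Assumed to exist only so we can derive a contradiction."""
    raise NotImplementedError("no such oracle can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:  # oracle said "halts", so loop forever
            pass
    return           # oracle said "loops forever", so halt at once

# paradox(paradox) halts if and only if halts(paradox, paradox) is
# False, i.e. if and only if paradox(paradox) does NOT halt: a
# contradiction, so `halts` cannot be implemented. Containing a
# superintelligent AI, the researchers argue, reduces to this problem.
```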
Meanwhile, it appears there are too many lures and promises of profit, convenience and control for humanity to slow down.
AI is starting to take everything over. Facebook just deployed a new AI which scans users’ posts for “troubling” or “suicidal” comments and then reports them to the police!
This article states:
“Facebook admits that they have asked the police to conduct more than ONE HUNDRED wellness checks on people.
‘Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts.
This is in addition to reports we received from people in the Facebook community.’”
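For illustration only, a scan of this kind can be imagined as a filter over posts. Facebook’s actual system is a trained classifier whose internals are not public; the keyword list and approach below are purely hypothetical stand-ins:

```python
# Hypothetical "proactive detection" filter; the phrases and the
# keyword approach are assumptions, not Facebook's real method.

FLAG_TERMS = {"hurt myself", "end it all", "goodbye forever"}

def flag_post(text: str) -> bool:
    """Return True if a post contains a watch-listed phrase."""
    lowered = text.lower()
    return any(term in lowered for term in FLAG_TERMS)

posts = ["Had a great day!", "I just want to end it all."]
flagged = [post for post in posts if flag_post(post)]
print(flagged)  # posts that would be escalated for a wellness check
```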
Final Thoughts
With AI building AI, we are taking another key step forward into a future where we are allowing power to flow out of our hands. This is another watershed moment in the evolution of AI.
What is going to happen?
Sources
- https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
- http://www.sciencealert.com/google-s-ai-built-it-s-own-ai-that-outperforms-any-made-by-humans
- https://www.activistpost.com/2017/12/us-global-artificial-intelligence-arms-race.html
- http://www.networks.imdea.org/whats-new/news/2016/containing-superintelligent-ai-theoretically-impossible
- https://www.activistpost.com/2017/11/facebooks-new-suicide-detection-put-innocent-people-behind-bars.html