Weaponizing AI


March 31, 2023

from RT Website


Welcome screen for the OpenAI “ChatGPT” app
© Getty Images / Leon Neal

The country’s data protection authority has demanded that the chatbot’s creator OpenAI take action or face hefty fines…


Italy’s data protection watchdog has banned access to OpenAI’s ChatGPT chatbot due to alleged privacy violations.

The decision came after a data breach on March 20 that exposed users’ conversations and payment information.

ChatGPT, which was launched in November 2022, has become popular for its ability to write in different styles and languages, create poems, and even write computer code.

However, the Italian National Authority for Personal Data Protection criticized the chatbot for not providing an information notice to users whose data is collected by OpenAI.

The watchdog also took issue with the “lack of a legal basis” that would justify the collection and mass storage of personal data intended to “train” the algorithms that run the platform.

Although the chatbot is intended for people over the age of 13, the Italian authorities also blasted OpenAI for failing to install any filters to verify user age, which they claim can lead to minors being presented with responses “absolutely not in accordance with their level of development.”

The watchdog is now demanding that OpenAI “communicate within 20 days the measures undertaken” to remedy this situation or face a fine of up to 4% of its annual worldwide turnover.

The decision to block the chatbot and temporarily limit OpenAI’s processing of Italian users’ data has taken “immediate effect,” the organization added.


Meanwhile, over 1,100 AI researchers and prominent tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter demanding a six-month moratorium on “giant AI experiments.”

The signatories claim that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and that the rapidly advancing technology should be “planned for and managed with commensurate care and resources.”

The group has strongly cautioned against allowing an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter states that if the AI developers can’t govern themselves, governments must step in, creating regulatory bodies capable of reining in runaway systems, funding safety research, and softening the economic blow when super-intelligent systems start taking over human jobs.




by Steven Metz
June 10, 2016

from WorldPoliticsReview Website






Navy Rear Adm. Mat Winter, left, and Navy Adm. Jonathan Greenert with the Navy-sponsored Shipboard Autonomous Firefighting Robot, Washington, Feb. 4, 2015 (Department of Defense photo)

“Fifteen years after a drone first fired missiles in combat,” journalist Josh Smith recently wrote from Afghanistan, “the U.S. military’s drone program has expanded far beyond specific strikes to become an everyday part of the war machine.”

Important as this is, it is only a first step in a much bigger process.

As a report co-authored in January 2014 by Robert Work and Shawn Brimley put it, “a move to an entirely new war-fighting regime in which unmanned and autonomous systems play central roles” has begun.

Where this ultimately will lead is unclear.

Work, who went on to become the deputy secretary of defense in May 2014, and Brimley represent one school of thought about robotic war. Drawing from a body of ideas about military revolutions from the 1990s, they contend that roboticization is inevitable, largely because it will be driven by advances in the private sector.

Hence the United States military must embrace and master it rather than risk having enemies do so and gain an advantage.

On the other side of the issue are activists who want to stop the development of military robots. For instance, the United Nations Human Rights Council has called for a moratorium on lethal autonomous systems.

Nongovernmental organizations have created what they call the Campaign to Stop Killer Robots, which is modeled on recent efforts to ban land mines and cluster munitions.

Other groups and organizations share this perspective.

Undoubtedly the political battle between advocates and opponents of military robots will continue. However, regardless of the outcome of that battle, developments in the next decade will already set the trajectory for the future and have cascading effects.

At several points, autonomous systems will cross a metaphorical Rubicon from which there is no turning back.
 

  • One such Rubicon is when some nation deploys a robot that can decide to kill a human based on programmed instructions and an algorithm rather than a direct instruction from an operator. In military parlance, these would be robots without “a human in the loop.”

    In a sense, this would not be entirely new: Booby traps and mines have killed without a human pulling the trigger for millennia. But the idea that a machine would make something akin to a decision rather than simply killing any human that comes close to it adds greater ethical complexity than a booby trap or mine, where the human who places it has already taken the ethical decision to kill.
     In Isaac Asimov’s science fiction collection “I, Robot,” which was one of the earliest attempts to grapple with the ethics of autonomous systems, an ironclad rule programmed into all such machines was that “a robot may not injure a human being.” Clearly that is an unrealistic boundary, but as an important 2008 report sponsored by the U.S. Navy argued, “Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal.”

     Among the challenges to meeting this goal, the report’s authors wrote, “creating a robot that can properly discriminate among targets is one of the most urgent.” In other words, the key is not the technology for killing, but the programmed instructions and algorithms. But that also makes control extraordinarily difficult, since programmed instructions can be changed remotely and in the blink of an eye, instantly transforming a benign robot into a killer.

     
  • A second Rubicon will be crossed when non-state entities field military robots. Since most of the technology for military robots will arise from the private sector, anyone with the money and expertise will be able to acquire and operate them. That includes:
    • corporations
    • vigilantes
    • privateers
    • criminal organizations
    • violent extremist movements, as well as contractors working on their behalf
    Even if international treaties to control the use of robots by state militaries are successful, there would be little to constrain non-state entities from using them. Nations constrained by treaties could be at a disadvantage when facing non-state enemies that are not.

     
  • A third Rubicon will be crossed when autonomous systems are no longer restricted to being temporary mobile presences that enter a conflict zone, linger for a time, then leave, but become an enduring presence on the ground and in the water, as well as in the air, for the duration of an operation.

    Pushing this idea even further, some experts believe that military robots will not be large, complex autonomous systems, but swarms of small, simple machines networked for a common purpose. Like an insect swarm, this type of robot could function even if many of its constituent components were destroyed or broke down. Swarming autonomous networks would represent one of the most profound changes in the history of armed conflict.

    In his seminal 2009 book “Wired for War,” Peter Singer wrote, “Robots may not be poised to revolt, but robotic technologies and the ethical questions they raise are all too real.” This makes it vital to understand the points of no return. Even that is only a start: knowing that a Rubicon has been crossed does not alone tell us what will come next.

When Caesar and his legion crossed the Rubicon River in 49 B.C., everyone knew that some sort of conflict was inevitable.

But no one could predict Caesar’s victory, much less his later assassination and all that it brought. Although the parameters of choice had been bounded, much remained to be determined.

Similarly, Rubicon crossings by military robots are inevitable, but their long-term outcomes will remain unknown.

It is therefore vital for the global strategic community, including governments and militaries as well as scholars, policy experts, ethicists, technologists, nongovernmental organizations and international organizations, to undertake a collaborative campaign of learning and public education.

Political leaders must engage the public on this issue without hysteria or hyperbole, identifying all the alternative scenarios for who might use military robots, where they might use them, and what they might use them for.

With such a roadmap, it might be possible for political leaders and military officials to push roboticization in a way that limits the dangers, rather than amplifying them.




by J.D. Heyes
October 12, 2015
from NaturalNews Website



The Pentagon’s secretive futuristic weapons and capabilities research institution, DARPA, is at it again, this time pursuing the development of synthetic “living organisms” that are bound to have a major impact on all aspects of humanity and the surrounding environment.

The Washington Post reports that public sector agencies and private sector investors are putting millions into the development of synthetic biology, which is leading to a rash of new innovations that are having an impact on agriculture, energy and health, among other sectors.

Citing the latest “U.S. Trends in Synthetic Biology Research Funding” report from the Wilson Center’s Synthetic Biology Project in the nation’s capital, The Washington Post noted that the U.S. government funded north of $820 million in synthetic biology research programs between 2008 and 2014.

The Washington Post further reported:

In the public sector, the role of innovation giant DARPA in funding synthetic biology projects has exploded, eclipsing the role of other prominent U.S. government agencies that fund synthetic biology programs, such as the National Science Foundation (NSF), National Institutes of Health (NIH), and the USDA.

In 2014 alone, DARPA funded $100 million in programs, more than three times the amount funded by the NSF, marking a fast ramp-up from a level of zero in 2010.


Worrisome military applications?


Because DARPA has been involved in the development of a number of scientific firsts, it’s worth keeping an eye on the defense research agency regarding its work in the field of synthetic biology, the paper noted.

Through initiatives such as DARPA’s Living Foundries program, the agency is attempting to create or facilitate the creation of an actual manufacturing platform for living organisms.

To this end, DARPA awarded the Broad Institute Foundry, an MIT synthetic research lab, $32 million to figure out how to design and then manufacture DNA.

“Living Foundries seeks to transform biology into an engineering practice by developing the tools, technologies, methodologies, and infrastructure to speed the biological design-build-test-learn cycle and expand the complexity of systems that can be engineered,” says the Living Foundries web page.

“The tools and infrastructure developed as part of this program are expected to enable the rapid and scalable development of transformative products and systems that are currently too complex to access.”

DARPA now represents nearly 60 percent of all public funding in the field of synthetic biology, Todd Kuiken, the senior program researcher at the Wilson Center who authored the trends report, told the Post.

When all of the Department of Defense spending is added in, he said, about two-thirds of all synthetic biology funding from Uncle Sam is slanted toward the defense sector.

But to what end…?

That’s the worrying part when you begin to consider the implications of utilizing synthetic organisms, including the potential to create a biological apocalypse in nations that are not friendly to the U.S.

As the Post notes, a number of Pentagon programs are classified and hard-and-fast figures are difficult to get, so there really is no way to know exactly what the military might be working on in this field at the moment.
 


Society relies on many products


Kuiken told the Post that a number of military programs appear to focus on dual-use technologies, such as bacteria able to remove barnacles from the hulls of U.S. Navy warships.

One Army program is aimed at developing “biologically-inspired power generation,” and that could have major applications in the consumer sector as long as people are okay with powering devices using biological, living organisms rather than traditional batteries.

MIT biological engineering professor Christopher Voigt, who started the institution’s foundry, says the research is vital to the development of a myriad of products and treatments.

“Society relies on many products from the natural world that have intricate material and chemical structures, from chemicals such as antibiotics to materials like wood,” Voigt said, according to a statement on the MIT foundry’s web site.

“We’ve been limited in our ability to program living cells to redesign these products – for example, to program living cells to create materials as intricate as wood or seashells – but with new properties,” he continued.

“Rather, products from synthetic biology have been limited to small, simple organic molecules. I want to change the scale of genetic engineering to access anything biology can do.”


