AI – What is it

First, the Gatekeepers



by Andy Lee Roth
March 15, 2021

from TheMarkaz Website





“The Gatekeepers” by Ali Banisadr (b. Tehran 1976, lives and works in New York), oil on linen, 72 x 108 inches (2010). Courtesy of the artist.




Algorithms – artificial intelligence programs controlled by Big Tech companies including Google, Facebook, and Twitter, corporations with no commitment to ethical journalism – are the new gatekeepers.

More and more, proprietary algorithms rather than newsroom editors determine which news stories circulate widely, raising serious concerns about transparency and accountability in determinations of newsworthiness.

The rise of what is best understood as algorithmic censorship makes newly relevant the old concept of “gatekeeping” in ways that directly address previous critiques of how we get our news.

To illustrate the power of algorithms to control the flow of information, consider the example of what happened to the digital record of an academic conference that I attended last year.

 


YouTube and the Critical Media Literacy Conference of the Americas

In October 2020 I participated in an academic conference focused on media literacy education.
 

The event brought together the field’s leading figures for two days of scholarly panels and discussions.

Many of the participants, including those in a session I moderated, raised questions about the impact of Big Tech companies such as Google and Facebook on the future of journalism, and criticized how corporate news media – including not only Fox News and MSNBC but also the New York Times and Washington Post – often impose narrow definitions of newsworthiness.

In other words, the conference was like many others I’ve attended, except that due to the pandemic we met virtually via Zoom. 

After the conference concluded, its organizers uploaded video recordings of the keynote session and more than twenty additional hours of conference presentations to a YouTube channel created to make those sessions available to a wider public.

Several weeks later, YouTube removed all of the conference videos, without any notification or explanation to the conference organizers.

As MintPress News reported, an academic conference at which many participants raised warnings about “the dangers of media censorship” had, ironically, “been censored by YouTube.”

Despite the organizers’ subsequent formal appeals, YouTube refused to restore any of the deleted content; it even declined to acknowledge that the content had ever been posted in the first place.

Through my work with Project Censored, a nonprofit news watchdog with a global reputation for opposing news censorship and championing press freedoms, I was familiar with online content filtering.

Thinking about YouTube’s power to delete the public video record of an academic conference, without explanation, initially reminded me of the “memory holes” in George Orwell‘s Nineteen Eighty-Four.

In Orwell’s dystopian novel, memory holes efficiently whisk away for destruction any evidence that might conflict with or undermine the government’s interests, as determined by the Ministry of Truth.

But I also found myself recalling a theory of news production and distribution that enjoyed popularity in the 1950s but has since fallen from favor.

I’ve come to understand YouTube’s removal of the conference videos as (a new form of) gatekeeping, the concept developed by David Manning White and Walter Gieber in the 1950s to explain how newspaper editors determined what stories to publish as news.


The original gatekeeping model

White studied the decisions of a wire editor at a small midwestern newspaper, examining the reasons that the editor, whom White called “Mr. Gates,” gave for selecting or rejecting specific stories for publication.

Mr. Gates rejected some stories for practical reasons:

“too vague,” “dull writing,” or “too late – no space”…

But in 18 of the 423 decisions that White examined, Mr. Gates rejected stories for political reasons, rejecting stories as “pure propaganda” or “too red,” for example. 

White concluded his 1950 article by emphasizing,

“how highly subjective, how based on the gatekeeper’s own set of experiences, attitudes and expectations the communication of ‘news’ really is.”

In 1956, Walter Gieber conducted a similar study, this time examining the decisions of 16 different wire editors.

Gieber’s findings refuted White’s conclusion that gatekeeping was subjective. Instead, Gieber found that, independently of one another, the editors made much the same decisions.

Gatekeeping was real, but the editors treated story selection as a rote task, and they were most concerned with what Gieber described as “goals of production” and “bureaucratic routine” – not, in other words, with advancing any particular political agenda.

More recent studies have reinforced and refined Gieber’s conclusion that professional assessments of “newsworthiness,” not political partisanship, guide news workers’ decisions about what stories to cover.

The gatekeeping model fell out of favor as newer theoretical models – including “framing” and “agenda setting” – seemed to explain more of the news production process.

In an influential 1989 article, sociologist Michael Schudson described gatekeeping as,

“a handy, if not altogether appropriate, metaphor.”

The gatekeeping model was problematic, he wrote, because,

“it leaves ‘information’ sociologically untouched, a pristine material that comes to the gate already prepared.”

In that flawed view “news” is preformed, and the gatekeeper,

“simply decides which pieces of prefabricated news will be allowed through the gate.”

Although White and others had noted that “gatekeeping” occurs at multiple stages in the news production process, Schudson’s critique stuck.

With the advent of the Internet, some scholars attempted to revive the gatekeeping model.

New studies showed how audiences increasingly act as gatekeepers, deciding which news items to pass along via their own social media accounts.

But, overall, gatekeeping seemed even more dated:

“The Internet defies the whole notion of a ‘gate’ and challenges the idea that journalists (or anyone else) can or should limit what passes through it,” Jane B. Singer wrote in 2006.


Algorithmic news filtering

Fast forward to the present and Singer’s optimistic assessment appears more dated than gatekeeping theory itself.

Instead, the Internet, and social media in particular, encompass numerous limiting “gates,” fewer and fewer of which are operated by news organizations or journalists themselves. 

Incidents such as YouTube’s wholesale removal of the media literacy conference videos are not isolated.

In fact, they are increasingly common as privately-owned companies and their media platforms wield ever more power to regulate who speaks online and the types of speech that are permissible.

Independent news outlets have documented many similar instances of online content removal and suppression.

Some Big Tech companies’ decisions have made headline news.

After the 2020 presidential election, for example, Google, Facebook, YouTube, Twitter, and Instagram restricted the online communications of Donald Trump and his supporters:

after the January 6 assault on the Capitol, Google, Apple, and Amazon suspended Parler, the social media platform favored by many of Trump’s supporters.

But decisions to deplatform Donald Trump and suspend Parler differ in two fundamental ways from most other cases of online content regulation by Big Tech companies.

  • First, the instances involving Trump and Parler received widespread news coverage; those decisions became public issues and were debated as such. 
  • Second, as that news coverage tacitly conveyed, the decisions to restrict Trump’s online voice and Parler’s networked reach were made by leaders at Google, Facebook, Apple, and Amazon. They were human decisions.

“Thought Police” by Ali Banisadr, oil on linen, 82 x 120 inches (2019). Courtesy of the artist.

This last point was not a focus of the resulting news coverage, but it matters a great deal for understanding the stakes in other cases, where the decisions to filter content – in effect, to silence voices and throttle conversations – were made by algorithms, rather than humans.

Increasingly, the news we encounter is the product not only of the daily routines and professional judgments of journalists, editors, and other news professionals, but also of assessments of relevance and appropriateness made by artificial intelligence programs. Those programs are developed and controlled by private, for-profit corporations that do not see themselves as media companies, much less as organizations engaged in journalism.

When I search for news about “rabbits gone wild” or the Equality Act on Google News, an algorithm employs a variety of confidential criteria to determine what news stories and news sources appear in response to my query.

Google News does not produce any news stories of its own but, like Facebook and other platforms that function as news aggregators, it plays an enormous – and poorly understood – role in determining what news stories many people see.
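Google’s actual ranking criteria are confidential, so any concrete example must be invented. The toy Python sketch below, with entirely made-up criteria and weights, only illustrates the general mechanism the paragraph describes: an opaque weighted score decides the ordering readers see, while the weights themselves stay hidden.

```python
# Hypothetical illustration: an aggregator ranks stories by a hidden
# weighted score. These criteria and weights are invented -- real systems
# such as Google News do not disclose theirs.
HIDDEN_WEIGHTS = {"recency": 0.4, "source_authority": 0.4, "engagement": 0.2}

def score(story):
    """Combine the opaque criteria into a single ranking score."""
    return sum(HIDDEN_WEIGHTS[k] * story[k] for k in HIDDEN_WEIGHTS)

stories = [
    {"title": "Independent outlet investigation", "recency": 0.9,
     "source_authority": 0.3, "engagement": 0.8},
    {"title": "Legacy wire story", "recency": 0.7,
     "source_authority": 0.9, "engagement": 0.5},
]

# Readers only ever see the ordering, never the weights that produced it.
ranked = sorted(stories, key=score, reverse=True)
for s in ranked:
    print(s["title"])
```

Here a hidden preference for “source authority” quietly buries the independent outlet’s story, even though no editor made that call; that is the shape of the problem, not a claim about any real platform’s weights.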


The new algorithmic gatekeeping

Recall that Schudson criticized the gatekeeping model for,

“leaving ‘information’ sociologically untouched.”

Because news was constructed, not prefabricated, the gatekeeping model failed to address the complexity of the news production process, Schudson contended.

That critique, however, no longer applies to the increasingly common circumstances in which corporations such as Google and Facebook, which do not practice journalism themselves, determine what news stories members of the public are most likely to see – and what news topics or news outlets those audiences are unlikely to ever come across, unless they actively seek them out.

In these cases, Google, Facebook, and other social media companies have no hand – or interest – in the production of the stories that their algorithms either promote or bury.

Without regard for the basic principles of ethical journalism as recommended by the Society of Professional Journalists,

  • to seek the truth and report it
  • to minimize harm
  • to act independently
  • to be accountable and transparent

the new gatekeepers claim content neutrality while promoting news stories that often fail glaringly to fulfill even one of the SPJ’s ethical guidelines.

This problem is compounded by the reality that it is impossible for a contemporary version of David Manning White or Walter Gieber to study gatekeeping processes at Google or Facebook:

The algorithms engaged in the new gatekeeping are protected from public scrutiny as proprietary intellectual property.

As April Anderson and I have previously reported, a class action suit filed against YouTube in August 2019 by LGBT content creators could,

“force Google to make its powerful algorithms available for scrutiny.”

Google and YouTube have sought to dismiss the case on the grounds that their distribution algorithms are “not content-based.”


Algorithms, human agency, and inequalities

“Trust in the Future” by Ali Banisadr, oil on linen, 82 x 120 inches (2017). Courtesy of the artist.

To be accountable and transparent is one of the guiding principles of ethical journalism, as advocated by the Society of Professional Journalists.

News gatekeeping conducted by proprietary algorithms runs directly counter to this ethical guideline, posing grave threats to the integrity of journalism and to the prospect of a well-informed public.

Most often when Google, Facebook, and other Big Tech companies are considered in relation to journalism and the conditions necessary for it to fulfill its fundamental role as the “Fourth Estate” – holding the powerful accountable and informing the public – the focus is on how Big Tech has thoroughly appropriated the advertising revenues on which most legacy media outlets depend to stay in business.

The rise of algorithmic news gatekeeping should be just as great a concern. Technologies driven by artificial intelligence (AI) reduce the role of human agency in decision making.

This is often touted, by advocates of AI, as a selling point:

Algorithms replace human subjectivity and fallibility with “objective” determinations.

Critical studies of algorithmic bias, including,

  • Safiya Umoja Noble’s Algorithms of Oppression
  • Virginia Eubanks’s Automating Inequality
  • Cathy O’Neil’s Weapons of Math Destruction,

…advise us to be wary of how easy it is to build longstanding human prejudices into “viewpoint neutral” algorithms that, in turn, add new layers to deeply-sedimented structural inequalities.
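The mechanism these authors describe can be made concrete with a deliberately artificial sketch: a scoring rule that contains no explicit prejudice, yet reproduces a biased history because it is built from that history. All group names and numbers below are invented for illustration.

```python
# Hypothetical sketch: a "viewpoint neutral" rule that weights candidates
# by the historical hire rate of their group contains no explicit
# prejudice, yet reproduces the bias already present in that history.
# All numbers are invented.
historical_hire_rate = {"group_a": 0.70, "group_b": 0.30}  # a biased past

def neutral_score(group, qualification):
    # The rule itself never mentions prejudice...
    return qualification * historical_hire_rate[group]

# ...yet equally qualified candidates receive unequal scores.
a = neutral_score("group_a", qualification=0.9)
b = neutral_score("group_b", qualification=0.9)
print(a, b)  # group_a outranks group_b despite identical qualifications
```

The point, as Noble, Eubanks, and O’Neil each show at far greater depth, is that sedimented inequality enters through the data and assumptions, not through any line of code that announces itself as biased.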

With the new algorithmic gatekeeping of news developing more quickly than public understanding of it, journalists and those concerned with the role of journalism in democracy face multiple threats.

We must exert all possible pressure to force corporations such as Google and Facebook to make their algorithms available for third-party scrutiny; at the same time, we must do more to educate the public about this new and subtle wrinkle in the news production process.

Journalists are well positioned to tell this story from first-hand experience, and governmental regulation or pending lawsuits may eventually force Big Tech companies to make their algorithms available for third-party scrutiny.

But the stakes are too high to wait on the sidelines for others to solve the problem.

So what can we do now in response to algorithmic gatekeeping?

I recommend four proactive responses, presented in increasing order of engagement:

  • Avoid using “Google” as a verb, a common habit that tacitly identifies a generic online activity with the brand name of a corporation that has done as much as any to multiply epistemic inequality. The concept of epistemic inequality was developed by Shoshana Zuboff, author of The Age of Surveillance Capitalism, to describe a form of power based on the difference between what we can know and what can be known about us.
  • Remember that search engines and social media feeds are not neutral information sources. The algorithms that drive them often serve to reproduce existing inequalities in subtle but powerful ways. Investigate for yourself: select a topic of interest to you and compare search results from Google and DuckDuckGo.
  • Connect directly to news organizations that display firm commitments to ethical journalism, rather than relying on your social media feed for news. Go to the outlet’s website, sign up for its email list or RSS feed, or subscribe to the outlet’s print version if there is one. The direct connection removes the social media platform, or search engine, as an unnecessary and potentially biased intermediary.
  • Call out algorithmic bias when you encounter it: call it out directly to the entity responsible for it, and call it out publicly by letting others know about it.

Fortunately, our human brains can employ new information in ways that algorithms cannot.

Understanding the influential roles of algorithms on our lives – including how they operate as gatekeepers of the news stories we are most likely to see – allows us to take greater control of our individual online experiences.

Based on greater individual awareness and control, we can begin to organize collectively to expose and oppose algorithmic bias and censorship…

 

by Jonathan Chadwick
May 18, 2022

from DailyMail Website



 

Google’s hype around DeepMind exceeds the reality of its progress toward Artificial General Intelligence (AGI).

According to Tristan Greene of ‘TheNextWeb’,

“It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.”

What is certain is Google’s ability to ‘make it so’ and fool a public that cannot distinguish between ‘magic’ and ‘reality’…


  • DeepMind expert suggests the hardest tasks to create a human-like AI are solved
     
  • The London firm wants to build an ‘AGI‘ that has the same intelligence as humans
     
  • This week DeepMind unveiled a program capable of achieving over 600 tasks


‘The Game is Over!’ Google’s DeepMind says it is close to achieving ‘human-level’ artificial intelligence, but it still needs to be scaled up...
 



DeepMind, a British company owned by Google, may be on the verge of achieving human-level artificial intelligence (AI).

Nando de Freitas, a research scientist at DeepMind and machine learning professor at Oxford University, has said ‘the game is over’ in regards to solving the hardest challenges in the race to achieve artificial general intelligence (AGI).

AGI refers to a machine or program that has the ability to understand or learn any intellectual task that a human being can, and do so without training.

According to De Freitas, the task for scientists is now scaling up AI programs, for example with more data and computing power, to create an AGI.

Earlier this week, DeepMind unveiled a new AI ‘agent’ called Gato that can complete 604 different tasks,

‘across a wide range of environments’…

Gato uses a single neural network – a computing system with interconnected nodes that works like nerve cells in the human brain.

It can chat, caption images, stack blocks with a real robot arm and even play games on the 1980s Atari home video game console, DeepMind claims.
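DeepMind’s paper describes serializing every task, whether text, image patches, or robot joint angles, into a common token sequence handled by one network. The toy Python sketch below is a loose illustration of that ‘one model, many tasks’ idea, not DeepMind’s actual code; every function and token format here is invented.

```python
# Loose illustration of the "one network, many tasks" idea: every task
# is flattened into one token stream and handed to a single model.
# This toy "model" only echoes structure; it is not DeepMind's code.
def serialize(task, observation):
    """Flatten any task into a common token sequence (toy version)."""
    return [f"<task:{task}>"] + [str(x) for x in observation]

def single_network(tokens):
    # A real Gato-style transformer would predict the next token; here
    # we just show that one function handles every serialized task.
    return f"prediction for {tokens[0]}"

for task, obs in [("chat", ["hello"]),
                  ("caption", [0.1, 0.5]),        # image patch values
                  ("stack_blocks", [0.0, 1.2])]:  # robot joint angles
    print(single_network(serialize(task, obs)))
```

The critics quoted below are questioning exactly this design: whether one shared sequence model genuinely generalizes, or merely stores many narrow skills side by side.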


De Freitas’ comments came in response to an opinion piece published on The Next Web arguing that humans alive today will never see AGI.

De Freitas tweeted:

‘It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster…’ 

However, he admitted that humanity is still far from creating an AI that can pass the Turing test – a test of a machine’s ability to exhibit intelligent behavior equivalent to or indistinguishable from that of a human. 

After DeepMind’s announcement of Gato, The Next Web article said it demonstrates AGI no more than virtual assistants such as Amazon’s Alexa and Apple’s Siri, which are already on the market and in people’s homes. 

‘Gato’s ability to perform multiple tasks is more like a video game console that can store 600 different games, than it’s like a game you can play 600 different ways,’ said The Next Web contributor Tristan Greene.

‘It’s not a general AI, it’s a bunch of pre-trained, narrow models bundled neatly.’ 

Gato has been built to achieve a variety of hundreds of tasks, but this ability may compromise the quality of each task, according to other commentators. 


In another opinion piece, ZDNet columnist Tiernan Ray wrote that the agent,

‘is actually not so great on several tasks’. 

‘On the one hand, the program is able to do better than a dedicated machine learning program at controlling a robotic Sawyer arm that stacks blocks,’ Ray said.

‘On the other hand, it produces captions for images that in many cases are quite poor. 

‘Its ability at standard chat dialogue with a human interlocutor is similarly mediocre, sometimes eliciting contradictory and nonsensical utterances.’

For example, when functioning as a chatbot, Gato initially said, mistakenly, that Marseille is the capital of France.

Also, a caption created by Gato to accompany a photo read ‘man holding up a banana to take a picture of it’, even though the man was actually holding a piece of bread.

DeepMind details Gato in a new research paper, entitled ‘A Generalist Agent,’ which has been posted on the arXiv preprint server.

The company’s authors have said such an agent will show ‘significant performance improvement’ when it is scaled up.

AGI has already been identified as a future threat that could wipe out humanity, either deliberately or by accident.


Dr Stuart Armstrong at Oxford University’s Future of Humanity Institute previously said AGI will eventually make humans redundant and wipe us out. 

He believes,

machines will work at speeds inconceivable to the human brain and will skip communicating with humans to take control of the economy and financial markets, transport, healthcare and more…

Dr Armstrong said a simple instruction to an AGI to ‘prevent human suffering’ could be interpreted by a supercomputer as ‘kill all humans’, due to human language being easily misinterpreted.

Before his death, Professor Stephen Hawking told the BBC:

‘The development of full artificial intelligence could spell the end of the human race.’ 


In a 2016 paper, DeepMind researchers acknowledged the need for a ‘big red button’ to prevent a machine from completing,

‘a harmful sequence of actions’…

DeepMind, which was founded in London in 2010 before being acquired by Google in 2014, is known for creating AlphaGo, an AI program that beat the professional Go player Lee Sedol, the world champion, in a five-game match in 2016.

In 2020, the firm announced it had solved a 50-year-old problem in biology, known as the ‘protein folding problem’ – predicting how a protein’s amino acid sequence dictates its 3D structure.

DeepMind claimed to have solved the problem with 92 per cent accuracy by training a neural network with 170,000 known protein sequences and their different structures. 


WHAT IS GOOGLE’S DEEPMIND AI PROJECT?
DeepMind was founded in London in 2010 and was acquired by Google in 2014.

It now has additional research centers in Edmonton and Montreal, Canada, and a DeepMind Applied team in Mountain View, California.

DeepMind is on a mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.

If successful, the firm believes this will be one of the most important and widely beneficial scientific advances ever made.

The company has hit the headlines for a number of its creations, including software that taught itself how to play and win at 49 completely different Atari titles, with just raw pixels as input.

In a world first, its AlphaGo program took on the world’s best player at Go, one of the most complex and intuitive games ever devised, with more positions than there are atoms in the universe – and won.



by Steven Metz
June 10, 2016

from WorldPoliticsReview Website






Navy Rear Adm. Mat Winter, left, and Navy Adm. Jonathan Greenert

with the Navy-sponsored Shipboard Autonomous Firefighting Robot,

Washington, Feb. 4, 2015

(Department of Defense photo).
 

“Fifteen years after a drone first fired missiles in combat,” journalist Josh Smith recently wrote from Afghanistan, “the U.S. military’s drone program has expanded far beyond specific strikes to become an everyday part of the war machine.”

Important as this is, it is only a first step in a much bigger process.

As a report co-authored in January 2014 by Robert Work and Shawn Brimley put it,

“a move to an entirely new war-fighting regime in which unmanned and autonomous systems play central roles” has begun.

Where this ultimately will lead is unclear.

Work, who went on to become the deputy secretary of defense in May 2014, and Brimley represent one school of thought about robotic war. Drawing on a body of ideas about military revolutions from the 1990s, they contend that roboticization is inevitable, largely because it will be driven by advances in the private sector.

Hence the United States military must embrace and master it rather than risk having enemies do so and gain an advantage.

On the other side of the issue are activists who want to stop the development of military robots. For instance the United Nations Human Rights Council has called for a moratorium on lethal autonomous systems.

Nongovernmental organizations have created what they call the Campaign to Stop Killer Robots, which is modeled on recent efforts to ban land mines and cluster munitions.

Other groups and organizations share this perspective.

Undoubtedly the political battle between advocates and opponents of military robots will continue. However, regardless of the outcome of that battle, developments in the next decade will already set the trajectory for the future and have cascading effects.

At several points, autonomous systems will cross a metaphorical Rubicon from which there is no turning back.
 

  • One such Rubicon will be crossed when some nation deploys a robot that can decide to kill a human based on programmed instructions and an algorithm, rather than a direct instruction from an operator. In military parlance, these would be robots without “a human in the loop.”

    In a sense, this would not be entirely new: booby traps and mines have killed without a human pulling the trigger for millennia. But the idea that a machine would make something akin to a decision, rather than simply killing any human that comes close to it, adds greater ethical complexity than a booby trap or mine, where the human who places it has already taken the ethical decision to kill.
    In Isaac Asimov’s science fiction collection “I, Robot,” one of the earliest attempts to grapple with the ethics of autonomous systems, an ironclad rule programmed into all such machines was that “a robot may not injure a human being.” Clearly that is an unrealistic boundary, but as an important 2008 report sponsored by the U.S. Navy argued, “Creating autonomous military robots that can act at least as ethically as human soldiers appears to be a sensible goal.” Among the challenges to meeting this goal, the report’s authors identified “creating a robot that can properly discriminate among targets” as one of the most urgent.

    In other words, the key is not the technology for killing, but the programmed instructions and algorithms. But that also makes control extraordinarily difficult, since programmed instructions can be changed remotely and in the blink of an eye, instantly transforming a benign robot into a killer.

     
  • A second Rubicon will be crossed when non-state entities field military robots. Since most of the technology for military robots will arise from the private sector, anyone with the money and expertise to operate them will be able to do so. That includes,
    • corporations
    • vigilantes
    • privateers
    • criminal organizations
    • violent extremist movements, as well as contractors working on their behalf
    Even if efforts to control the use of robots by state militaries in the form of international treaties are successful, there would be little to constrain non-state entities from using them. Nations constrained by treaties could be at a disadvantage when facing non-state enemies that are not.

     
  • A third Rubicon will be crossed when autonomous systems are no longer restricted to being temporary mobile presences that enter a conflict zone, linger for a time, then leave, but are instead an enduring presence on the ground and in the water, as well as in the air, for the duration of an operation. Pushing this idea even further, some experts believe that military robots will not be large, complex autonomous systems, but swarms of small, simple machines networked for a common purpose. Like an insect swarm, this type of robot could function even if many of its constituent components were destroyed or broke down. Swarming autonomous networks would represent one of the most profound changes in the history of armed conflict. In his seminal 2009 book “Wired for War,” Peter Singer wrote, “Robots may not be poised to revolt, but robotic technologies and the ethical questions they raise are all too real.” This makes it vital to understand the points of no return. Even that is only a start: knowing that the Rubicon has been crossed does not alone tell us what will come next.
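The point made above, that control lives in the programmed instructions rather than in the hardware, can be illustrated with a deliberately toy Python sketch. Everything here is invented for illustration; no real weapons system works this way, and the rule names are hypothetical.

```python
# Toy sketch of the article's point: a single remote rule change, not a
# hardware change, removes the human from the loop. Entirely invented.
rules = {"require_human_confirmation": True}

def may_engage(target_is_hostile, human_confirmed):
    """Decide whether an autonomous system may engage a target."""
    if rules["require_human_confirmation"]:
        return target_is_hostile and human_confirmed  # human in the loop
    return target_is_hostile                          # no human in the loop

# With the rule in place, the system refuses to act on its own.
assert may_engage(True, human_confirmed=False) is False

# A remote rule update -- "in the blink of an eye" -- changes the
# system's behavior without touching the machine at all.
rules["require_human_confirmation"] = False
assert may_engage(True, human_confirmed=False) is True
```

The hardware in this sketch never changes; only the rule set does, which is why the article argues that governing the algorithms, not the robots, is the hard problem.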

When Caesar and his legion crossed the Rubicon River in 49 B.C., everyone knew that some sort of conflict was inevitable.

But no one could predict Caesar’s victory, much less his later assassination and all that it brought. Although the parameters of choice had been bounded, much remained to be determined.

Similarly, Rubicon crossings by military robots are inevitable, but their long-term outcomes will remain unknown.

It is therefore vital for the global strategic community, including governments and militaries as well as scholars, policy experts, ethicists, technologists, nongovernmental organizations and international organizations to undertake a collaborative campaign of learning and public education.

Political leaders must engage the public on this issue without hysteria or hyperbole, identifying all the alternative scenarios for who might use military robots, where they might use them, and what they might use them for.

With such a roadmap, it might be possible for political leaders and military officials to push roboticization in a way that limits the dangers, rather than amplifying them.
