Fark.com

   Human rights group wants a ban on all robots that are relentlessly pursuing Sarah Connor

20 Nov 2012 04:33 AM   |   4702 clicks   |   Globe and Mail
Sgygus    [TotalFark]  
calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.

19 Nov 2012 09:29 PM
MaudlinMutantMollusk    [TotalFark]  
Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It can't be banned.

19 Nov 2012 09:37 PM
RedPhoenix122    [TotalFark]  
[www.slashcastpodcast.com image]


He'll cut you if you try to ban him.

20 Nov 2012 01:43 AM
namatad    [TotalFark]  
yawnnnnnnnnnnnnnnnnnnn
bunch of worthless dumbasses say what?

20 Nov 2012 02:20 AM
wildsnowllama     

Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.


Capable? Sure. Often? Meh.

20 Nov 2012 04:40 AM
untaken_name     
Oh, a HUMAN rights group is against robots. Big farkin' surprise there. Bigots.

20 Nov 2012 04:45 AM
vrax     
[i49.tinypic.com image]

20 Nov 2012 04:46 AM
randomjsa     
"We want you to unilaterally stop using technology that is keeping you safe from your enemies that don't have it"

20 Nov 2012 04:53 AM
othmar     
do the robots get a pay raise?

20 Nov 2012 04:55 AM
yousaywut     
Have none of these scientists ever watched a movie? OMG, we're all going to die by autonomous replicating killer robots. Oh lordy lord.

/Seriously, the timing and accuracy of properly calibrated automatic machines has to be a factor, as well as the human capacity for nuance, which they will most assuredly lack. Machine sentinels are just a war crime waiting to happen.

20 Nov 2012 04:57 AM
LewDux    [TotalFark]  
Silly Luddites, you can't stop progress

20 Nov 2012 05:00 AM
Great Janitor     
I propose that all kill bots be programmed with a preset kill limit and once that kill limit is reached, the kill bot must deactivate. Just never announce what that limit is. No commander will ever be cunning enough to deal with that.
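
A minimal sketch of that preset-kill-limit directive, in Python (everything here is hypothetical, very much including the limit):

    import random

    class KillBot:
        def __init__(self):
            # Preset kill limit, chosen at assembly and never announced.
            self._kill_limit = random.randint(500, 1_000_000)
            self._kills = 0
            self.active = True

        def register_kill(self):
            if not self.active:
                return
            self._kills += 1
            # The moment the secret limit is reached, the bot must deactivate.
            if self._kills >= self._kill_limit:
                self.active = False

    bot = KillBot()
    bot.register_kill()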

20 Nov 2012 05:01 AM
Mock26     
Go ahead and ban them. It will not stop them from being deployed. Seriously, do you think that if some Palestinian terrorist group could design one of these killer robots, they would refrain from setting it loose in Tel Aviv because it had been banned?

20 Nov 2012 05:02 AM
Archie Goodwin     
Everything will be fine until they go on strike.

"What do we want?"

"Lithium Batteries"

"When do we want them?"

"Right No w. Now. n oo ooo wwww..."

20 Nov 2012 05:05 AM
sexorcisst     

Article: They also want robot designers to enact a "code of conduct" to keep the genie of killing machines with artificial intelligence in the bottle.


Christina Aguilera, fighting for our human rights.

20 Nov 2012 05:06 AM
Bonanza Jellybean     
But what will become of Chew-Chew, the cyborg train that runs on babymeat?

20 Nov 2012 05:09 AM
GimletDeuce     
If only there were some sort of insurance available to protect us from this threat. Oh, Sam Waterston, where are you when we need you?

20 Nov 2012 05:43 AM
Jim_Callahan    [TotalFark]  

Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.


Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.

In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.

//Also, for the obvious historical reference, consider that international law following WW1 banned the use of aircraft as weapon platforms of any kind. Look up how long that lasted for an idea of how long we can keep auto systems a high schooler could build given sufficient money off the battleground.

20 Nov 2012 05:45 AM
Day_Old_Dutchie     
1 "Serve the public trust"
2 "Protect the innocent"
3 "Uphold the law"
4 (Classified)

20 Nov 2012 05:50 AM
Ishkur    [TotalFark]  

Sgygus: This is quite prudent. Machines are not capable of being responsible. Humans are.


Yes, because humans have a rich and luxurious history of being peaceful and kind to one another even when in possession of tools that have no explicit purpose other than killing large quantities of other humans.

20 Nov 2012 05:54 AM
way south     

Jim_Callahan: Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

This is quite prudent. Machines are not capable of being responsible. Humans are.

Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.

In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.

//Also, for the obvious historical reference, consider that international law following WW1 banned the use of aircraft as weapon platforms of any kind. Look up how long that lasted for an idea of how long we can keep auto systems a high schooler could build given sufficient money off the battleground.


We've already got automated war machines called land mines.
The problem is machines aren't accountable for following orders. They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter. Putting wheels on a mouse trap doesn't make it sympathetic to the differences between mice and hamsters. At least with a traditional fighter you've got the pilot to blame.
Maybe you could bypass a ban by adding an authorization button, but that means the signatory soldier would be risking his name on a pile of code he didn't write.
...but he could just say he was ordered to sign and it's all good. Who knows.

I doubt they'll pass a ban to begin with, because the idea of using automated guns for perimeter defense is just too tempting for a first world army up against guerrillas.
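
That authorization-button workaround, sketched in Python (names and fields are hypothetical; the point is that a named human goes on record for a decision produced by code he never saw):

    from datetime import datetime, timezone

    def request_engagement(target_id, operator_name, confirmed):
        # The machine proposes a target; a named human has to sign off.
        if not confirmed:
            return False
        # The soldier's name is logged against a firing decision computed
        # by software he didn't write and can't inspect.
        record = {
            "target": target_id,
            "authorized_by": operator_name,
            "time": datetime.now(timezone.utc).isoformat(),
        }
        print("ENGAGE", record)
        return True

    request_engagement("track-042", "Pvt. Hudson", confirmed=True)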

[dl.dropbox.com image]

20 Nov 2012 06:21 AM
Jim_Callahan    [TotalFark]  

way south: They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter.


So... you have no problem with automated combat, basically? Because you're only objecting to poorly programmed automated combat in that post.

Sort of like saying "I object to bridges. They're always failing under stress, coming loose from their moorings, and falling down". Well, no, not if they're competently designed they're not.

//If you think I'm making fun of you... I am, a little. The coding to distinguish a passenger airliner's profile from a fighter or a bomber is literally in "the intern could do it in an hour with a Kinect and the SDK" territory; that's not even slightly complex functionality at this point in automation tech.
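
///For what it's worth, the toy version of that profile check really is short. A sketch in Python, no Kinect required (the geometry thresholds are invented for illustration, which is rather the weak point):

    def classify_profile(wingspan_m, length_m, visible_engine_pods):
        # Crude silhouette triage. Thresholds are made up for this sketch.
        if wingspan_m > 30 and visible_engine_pods >= 2:
            return "airliner"   # big wing, podded engines
        if wingspan_m > 30:
            return "bomber"
        if length_m / wingspan_m > 1.3:
            return "fighter"    # short wing, long fuselage
        return "unknown"

    print(classify_profile(35.8, 39.5, 2))  # 737-ish dimensions -> "airliner"
    print(classify_profile(13.0, 19.4, 0))  # F-15-ish dimensions -> "fighter"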

20 Nov 2012 06:41 AM
mamoru    [TotalFark]  

Jim_Callahan: Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.


You mean the complicated hardware and software to control an autonomous weapons platform will never malfunction? What a relief!

Anyway, I doubt deployment can be banned. How about a law which holds the human who orders the deployment of such machines directly responsible for their actions, with punishments equal to those the human would face for carrying out such actions him/herself? You want to deploy a machine that cannot take responsibility for who it shoots? Then you take responsibility for who it shoots.

/general "you", not you specifically, Jim_Callahan :)

20 Nov 2012 06:44 AM
Jim_Callahan    [TotalFark]  

mamoru: You want to deploy a machine that cannot take responsibility for who it shoots? Then you take responsibility for who it shoots.


That's pretty much how it works, yes. You step on a land-mine that's not supposed to be there, the government that deployed the mine is the one at fault. Same with this stuff.

mamoru: You mean the complicated hardware and software to control an autonomous weapons platform will never malfunction? What a relief!

Probably less frequently than the reprobates we sometimes hire slip off base to have a shooting spree among the local civilians, or mutilate corpses, or are bribed to let contractors abscond with millions of dollars in untraceable cash.

100% absolute infallibility is miles and miles above the bar that terminators have to leap to be an improvement, is what I'm getting at here.

20 Nov 2012 06:49 AM
ShabazKilla    [TotalFark]  
[eggshell-robotics.node3000.com image]

20 Nov 2012 06:50 AM
way south     

Jim_Callahan: way south: They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter.

So... you have no problem with automated combat, basically? Because you're only objecting to poorly programmed automated combat in that post.

Sort of like saying "I object to bridges. They're always failing under stress, coming loose from their moorings, and falling down". Well, no, not if they're competently designed they're not.

//If you think I'm making fun of you... I am, a little. The coding to distinguish a passenger airliner's profile from a fighter or a bomber is literally in "the intern could do it in an hour with a Kinect and the SDK" territory; that's not even slightly complex functionality at this point in automation tech.


The difference between poor programming and good programming often depends on what the programmer anticipated.
Assuming you could distinguish aircraft with absolute accuracy, and you write your drone to ignore a foreign 737 entering your airspace, it might ignore a 737 flown by suicide bombers or even an armed 737 acting as a missile boat.
Likewise, if you tell it to attack any aircraft that turns up in a set space, it might attack a legitimate airliner that's strayed too close because of some other factor.

I have no problem with automated combat, and I never had a problem with landmines (I respect why such things are unpopular, but they have a legitimate battlefield use).
I do have a problem with the idea that a robot's judgment is superior to a human's, because a robot is only looking for a pre-determined parameter to trigger an attack.
It will never contemplate the repercussions of being wrong.
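
A sketch of the kind of pre-determined trigger parameters I mean (all rules hypothetical; note there is nowhere in this function for "repercussions" to live):

    def should_fire(track):
        # The entire "judgment": a checklist over sensor readings.
        in_exclusion_zone = track["range_km"] < 50
        no_iff_response = track.get("iff") is None
        fast_mover = track["speed_kts"] > 250
        return in_exclusion_zone and no_iff_response and fast_mover

    # An airliner with a failed transponder matches every parameter:
    print(should_fire({"range_km": 40, "speed_kts": 450, "iff": None}))  # True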

/and don't worry about making fun of people on FARK.
/it's all good here.

20 Nov 2012 06:52 AM
Vaneshi     

Jim_Callahan:
In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.


No. We're not talking about a Terminator here, or indeed anything that has the ability to comprehend. As per the botjunkie photo that was posted, we're talking about a tracked machine with a .50-cal machine gun and a webcam strapped to it.

These things have zero way of knowing if they're looking at an enemy troop formation or a civilian protest/market; they'll just know the people aren't friendly because they lack the correct RFID or some other friend-or-foe identifier... then slaughter them.

Hell, these things are driven by computers, and we have daily threads here about various bits of software screwing up! So somehow one of these things encounters a race condition nobody had found in its code before and just randomly starts throwing grenades... at the barracks.

With a human (and we do mean a member of the armed forces here) in the loop there is at least someone we can point to and say "Why did you let it do that?"
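
That race-condition scenario in miniature (a purely illustrative Python sketch): two sensor threads updating shared state with no lock, so what the platform "believes" depends on scheduling luck.

    import threading

    state = {"threat_confirmed": False, "target": None}

    def radar_thread():
        state["target"] = "armed formation"
        state["threat_confirmed"] = True

    def camera_thread():
        state["threat_confirmed"] = False  # camera recognizes civilians
        state["target"] = "market crowd"

    for fn in (radar_thread, camera_thread):
        threading.Thread(target=fn).start()

    # One possible interleaving leaves threat_confirmed=True
    # with target="market crowd".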

20 Nov 2012 06:56 AM
SkunkWerks     
What? This little guy?

[i1.ytimg.com image]

20 Nov 2012 06:56 AM
Dracolich     
What, pray tell, is the difference between training a typical soldier and using a robot? They're not our best and brightest. They're not our most responsible or our moral champions. What's the critical thing that typically defines someone who enlists today? From what I can tell, the common theme is lack of a support structure. It's the path that's left when the other paths are unavailable. So when the soldier is faced with an immoral request, should we expect them to alienate the only support structure they have left? No, they're very likely to do what was requested and be traumatized for life.

tl;dr: We already had unquestioning soldiers.

20 Nov 2012 07:00 AM
Vaneshi     

way south: Assuming you could distinguish aircraft with absolute accuracy, and you write your drone to ignore a foreign 737 entering your airspace, it might ignore a 737 flown by suicide bombers or even an armed 737 acting as a missile boat.


Or it gets confused and turns your AWACS into shrapnel. That, and considering that military equipment is sold to other people (we in the UK do it, you Americans do it, the Russians do it, etc.), how will it respond to a friendly F-15, MiG, Tornado, etc. with a failed IFF unit when it's also having to deal with enemy F-15s, MiGs and Tornados?

Honestly, I don't see a way for it to distinguish the two, as its software will just see a MiG, F-15 or Tornado (as examples) with no IFF broadcast. Perhaps Jim will provide us with a convincing argument as to why this could never happen...

20 Nov 2012 07:04 AM
SkunkWerks     

Dracolich: What, pray tell, is the difference between training a typical soldier and using a robot?


Well, you can't hack a soldier's brain. Not yet anyway.

20 Nov 2012 07:06 AM
Vaneshi     

Dracolich: tl;dr: We already had unquestioning soldiers.


When a soldier pulls the trigger he or she is intrinsically aware that if they shoot the wrong thing then 'bad things' will happen to them. This provides an impetus to make damn sure what you are shooting at is in fact an enemy.

When a machine pulls the trigger... who's responsible? It screws up and who's to blame?

20 Nov 2012 07:06 AM
Dracolich     

Vaneshi: Dracolich: tl;dr: We already had unquestioning soldiers.

When a soldier pulls the trigger he or she is intrinsically aware that if they shoot the wrong thing then 'bad things' will happen to them. This provides an impetus to make damn sure what you are shooting at is in fact an enemy.

When a machine pulls the trigger... who's responsible? It screws up and who's to blame?


This is an interesting point. We have a lot of accidents in war from using soldiers. Warning shots have killed civilians many times, but some people blame that on soldiers "not following protocol."

You also bring up self-interest. If you're not sure about the person approaching you, do you shoot them? This is a real issue that we've seen in Iraq. When people feel threatened, they act. When they're not sure if they're threatened but the risk is real, they act. Is this the case if a robot is used? Does the person controlling it feel like the amount of risk on the line is the same? They're no longer at personal risk. It's no longer "your life vs a reprimand." The safer choice shifts towards getting more information first. It becomes "your robot vs a reprimand." This may actually make civilians safer, but we'll probably go through a fair number of additional robots when we're incorrect. On the bright side, we'll have recordings of what happened in that particular case.

20 Nov 2012 07:18 AM
People_are_Idiots     

ShabazKilla: [eggshell-robotics.node3000.com image 415x317]


Man.... beat me to it!

20 Nov 2012 07:26 AM
Notabunny    [TotalFark]  
Sending machines to do dangerous work is the way of the future for all countries, not just 1st world countries. If we decide to ban fully autonomous killbots, fine. But not everybody will abide by our self-imposed ban. We should still build killbot killers which will take out the bad guy's machines.

20 Nov 2012 07:30 AM
rnatalie    [TotalFark]  
Here's where robot's rules of order don't apply!

(Anybody ever tell Siri to "Listen choke head this is worker speaking?")

20 Nov 2012 07:51 AM
holybovine     

Great Janitor: I propose that all kill bots be programmed with a preset kill limit and once that kill limit is reached, the kill bot must deactivate. Just never announce what that limit is. No commander will ever be cunning enough to deal with that.


[www.google.ca image]

20 Nov 2012 07:52 AM
dragonchild     

way south: I do have a problem with the idea that a robot's judgment is superior to a human's, because a robot is only looking for a pre-determined parameter to trigger an attack.


I'll take it. Robots don't get bored; they don't get traumatized; they don't get fatigued; they don't require rescue. If a soldier (or a small group thereof) is cut off, command is faced with a very difficult choice over whether to order a rescue or just tell them "good luck". Makes for good drama in fiction, but in reality it's a brutally pragmatic call where the soldiers are left to their own fate if the cost in resources is too high. If a robot is cut off and surrounded, it can just go apeshiat before self-destructing. And as others say, they're not going to go off-mission to commit some recreational atrocities, flee in panic or desert entirely, take or give a bribe. It's not a big deal if a robot's arm gets blown off. Robots don't care to be home for the holidays. They don't have kids back home. You don't even need to bring them back at all.

I'm not a fan of war, but as long as the government and media are working hand-in-hand to insulate voters from the reality that war is hell anyway, I don't see why we need to give a pile of warm meat bodies horrific injuries and PTSD to justify what we're doing. That's not humane; that's human sacrifice.

I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies. In such a world we'd still pay the cost in wasted resources, but as long as society rewards stupid megalomaniacs with power, at least we can avoid sating their egos with offerings of soldier meat.

20 Nov 2012 08:05 AM
SkunkWerks     

dragonchild: I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies


I seem to recall an episode of Classic Trek in which the crew encountered a planet where computers would simulate wars between nations. People "killed" during these simulations would then have to report to kill chambers. This was all in the name of keeping an actual war from breaking out.

Progress!

20 Nov 2012 08:13 AM
way south     

dragonchild: If a robot is cut off and surrounded, it can just go apeshiat before self-destructing.


Problem is, robots don't really think. They follow a checklist of instructions according to what their sensors may or may not perceive. Their definition of "cut off" might be "lost radio contact due to interference and... ooh, look, a wedding party full of terrorists!"

Robots show all the judgement of a mousetrap.
When the right parameters are met, SNAP!

20 Nov 2012 08:16 AM
Wakosane     
Now that I think of it...

If the Terminator was captured in 1984, would there be legal grounds to prosecute it?

That would have made a thrilling courtroom sequel.

20 Nov 2012 08:18 AM
way south     

dragonchild: I actually hope for a future where "war" is reduced to nothing more than a very expensive chess match between robot armies.


This I can agree with.
I just think it's way too soon, and no AI capable of doing the job will exist, or could be trusted, within the foreseeable future.

20 Nov 2012 08:19 AM
SkunkWerks     

way south: I just think it's way too soon, and no AI capable of doing the job will exist, or could be trusted, within the foreseeable future.


[clatl.com image]

20 Nov 2012 08:20 AM
tardological     

Dracolich: Vaneshi: Dracolich: tl;dr: We already had unquestioning soldiers.

When a soldier pulls the trigger he or she is intrinsically aware that if they shoot the wrong thing then 'bad things' will happen to them. This provides an impetus to make damn sure what you are shooting at is in fact an enemy.

When a machine pulls the trigger... who's responsible? It screws up and who's to blame?

This is an interesting point. We have a lot of accidents in war from using soldiers. Warning shots have killed civilians many times, but some people blame that on soldiers "not following protocol."

You also bring up self-interest. If you're not sure about the person approaching you, do you shoot them? This is a real issue that we've seen in Iraq. When people feel threatened, they act. When they're not sure if they're threatened but the risk is real, they act. Is this the case if a robot is used? Does the person controlling it feel like the amount of risk on the line is the same? They're no longer at personal risk. It's no longer "your life vs a reprimand." The safer choice shifts towards getting more information first. It becomes "your robot vs a reprimand." This may actually make civilians safer, but we'll probably go through a fair number of additional robots when we're incorrect. On the bright side, we'll have recordings of what happened in that particular case.


The interesting thing here is that a robot casualty can be repaired back to full operational capacity, whereas a human casualty gets sent home with a box full of medals... or in it.

/I support our robot troops
//as long as we make them look like Necrons

20 Nov 2012 08:21 AM
spentshells     

Bonanza Jellybean: But what will become of Chew-Chew, the cyborg train that runs on babymeat?


GWar?

20 Nov 2012 08:30 AM
Onkel Buck     
[i1206.photobucket.com image]

20 Nov 2012 08:31 AM
Fail in Human Form     

Wakosane: Now that I think of it...

If the Terminator was captured in 1984, would there be legal grounds to prosecute it?

That would have made a thrilling courtroom sequel.


Probably a short sequel, considering it would start killing everyone in the courtroom. "How does the defendant plead?" *rips lawyer in half*

20 Nov 2012 08:31 AM
LordOfThePings     
[www.bbc.co.uk image]


Here I am, brain the size of a planet, and you want to keep me from deciding when to kill you.

20 Nov 2012 08:32 AM
trickymoo     
I'm Sam Waterston, of the popular TV series "Law & Order". As a senior citizen, you're probably aware of the threat robots pose. Robots are everywhere, and they eat old people's medicine for fuel.
[www.scarybot.com image]

Well, now there's a company that offers coverage against the unfortunate event of robot attack, with Old Glory Insurance. Old Glory will cover you with no health check-up or age consideration. You need to feel safe. And that's harder and harder to do nowadays, because robots may strike at any time.


And when they grab you with those metal claws, you can't break free... because they're made of metal, and robots are strong.
[i.huffpost.com image]

Now, for only $4 a month, you can achieve peace of mind in a world full of grime and robots, with Old Glory Insurance. So, don't cower under your afghan any longer. Make a choice. WARNING: Persons denying the existence of Robots may be Robots themselves

Old Glory Insurance. For when the metal ones decide to come for you - and they will.

20 Nov 2012 08:37 AM
Huntceet     
First they'll want to make you register to own a personal protection robot. Next you'll have to register each personal protection robot you own. Finally they'll come around to confiscate.

20 Nov 2012 08:44 AM
This thread is closed to new comments.

