Real News. Real Funny.

Comments

  • calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

    This is quite prudent. Machines are not capable of being responsible. Humans are.
  • Listen, and understand. That terminator is out there. It can't be bargained with. It can't be reasoned with. It can't be banned.


    He'll cut you if you try to ban him.
  • yawnnnnnnnnnnnnnnnnnnn
    bunch of worthless dumbasses say what?
  • Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

    This is quite prudent. Machines are not capable of being responsible. Humans are.


    Capable? Sure. Often? Meh.
  • Oh, a HUMAN rights group is against robots. Big farkin' surprise there. Bigots.
  • "We want you to unilaterally stop using technology that is keeping you safe from your enemies that don't have it"
  • do the robots get a pay raise?
  • Have none of these scientists ever watched a movie? OMG we're all going to die by autonomous replicating killer robots. Oh lordy lord.

    /Seriously, the timing and accuracy of properly calibrated automatic machines has to be a factor, as does the human capacity for nuance, which they will most assuredly lack. Machine sentinels are just a war crime waiting to happen.
  • Silly Luddites, you can't stop progress
  • I propose that all kill bots be programmed with a preset kill limit and once that kill limit is reached, the kill bot must deactivate. Just never announce what that limit is. No commander will ever be cunning enough to deal with that.
  • Go ahead and ban them. It will not stop them from being deployed. Seriously, do you think that if some Palestinian terrorist group could design one of these killer robots that they would refrain from setting it loose in Tel Aviv because they had been banned?
  • Everything will be fine until they go on strike.

    "What do we want?"

    "Lithium Batteries"

    "When do we want them?"

    "Right No w. Now. n oo ooo wwww..."
  • Article: They also want robot designers to enact a "code of conduct" to keep the genie of killing machines with artificial intelligence in the bottle.


    Christina Aguilera, fighting for our human rights.
  • But what will become of Chew-Chew, the cyborg train that runs on babymeat?
  • If only there were some sort of insurance available to protect us from this threat. Oh, Sam Waterston, where are you when we need you?
  • Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

    This is quite prudent. Machines are not capable of being responsible. Humans are.


    Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.

    In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.

    //Also, for the obvious historical reference, consider that international law following WW1 banned the use of aircraft as weapon platforms of any kind. Look up how long that lasted for an idea of how long we can keep auto systems that a high schooler could build, given sufficient money, off the battleground.
  • 1 "Serve the public trust"
    2 "Protect the innocent"
    3 "Uphold the law"
    4 (Classified)
  • Sgygus: This is quite prudent. Machines are not capable of being responsible. Humans are.


    Yes, because humans have a rich and luxurious history of being peaceful and kind to one another even when in possession of tools that have no explicit purpose other than killing large quantities of other humans.
  • Jim_Callahan: Sgygus: calling for an international treaty outlawing military weapons systems that decide - without a human "in the loop" - when to pull the trigger

    This is quite prudent. Machines are not capable of being responsible. Humans are.

    Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.

    In some ways, robotic defense systems are a human rights improvement over corruptible human soldiers.

    //Also, for the obvious historical reference, consider that international law following WW1 banned the use of aircraft as weapon platforms of any kind. Look up how long that lasted for an idea of how long we can keep auto systems that a high schooler could build, given sufficient money, off the battleground.


    We've already got automated war machines called land mines.
    The problem is machines aren't accountable for following orders. They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter. Putting wheels on a mouse trap doesn't make it sympathetic to the differences between mice and hamsters. At least with a traditional fighter you've got the pilot to blame.
    Maybe you could bypass a ban by adding an authorization button, but that means the signatory soldier would be risking his name on a pile of code he didn't write.
    ...but he could just say he was ordered to sign and it's all good. Who knows.

    I doubt they'll pass a ban to begin with, because the idea of using automated guns for perimeter defense is just too tempting for a first world army up against guerrillas.

  • way south: They do whatever they were rigged or programmed to do, and if the code is vague enough then they'll take a shot at an airliner as quickly as they'd shoot down an enemy fighter.


    So... you have no problem with automated combat, basically? Because you're only objecting to poorly programmed automated combat in that post.

    Sort of like saying "I object to bridges. They're always failing under stress, coming loose from their moorings, and falling down". Well, no, not if they're competently designed they're not.

    //If you think I'm making fun of you... I am, a little. The coding to distinguish a passenger airliner's profile from a fighter or a bomber is literally in "the intern could do it in an hour with a Kinect and the SDK" territory; that's not even slightly complex functionality at this point in automation tech.
  • Jim_Callahan: Kind of arguable. Machines also aren't capable of going off the reservation and making bad decisions. Humans are.


    You mean the complicated hardware and software to control an autonomous weapons platform will never malfunction? What a relief!

    Anyway, I doubt deployment can be banned. How about a law which holds the human who orders the deployment of such machines directly responsible for their actions, with punishments equal to those for having carried out such actions him/herself? You want to deploy a machine that cannot take responsibility for who it shoots? Then you take responsibility for who it shoots.

    /general "you", not you specifically, Jim_Callahan :)
  • mamoru: You want to deploy a machine that cannot take responsibility for who it shoots? Then you take responsibility for who it shoots.


    That's pretty much how it works, yes. You step on a land-mine that's not supposed to be there, the government that deployed the mine is the one at fault. Same with this stuff.

    You mean the complicated hardware and software to control an autonomous weapons platform will never malfunction? What a relief!

    Probably less frequently than the reprobates we sometimes hire slip off base to have a shooting spree among the local civilians, or mutilate corpses, or are bribed to let contractors abscond with millions of dollars in untraceable cash.

    100% absolute infallibility is miles and miles above the bar that terminators have to leap to be an improvement, is what I'm getting at here.

This thread is closed to new comments.