mindblowingscience:

Ethical trap: robot paralysed by choice of who to save

Can a robot learn right from wrong? Attempts to imbue robots, self-driving cars and military machines with a sense of ethics reveal just how hard this is
CAN we teach a robot to be good? Fascinated by the idea, roboticist Alan Winfield of Bristol Robotics Laboratory in the UK built an ethical trap for a robot – and was stunned by the machine’s response.

In an experiment, Winfield and his colleagues programmed a robot to prevent other automatons – acting as proxies for humans – from falling into a hole. This is a simplified version of Isaac Asimov’s fictional First Law of Robotics – a robot must not allow a human being to come to harm.

At first, the robot was successful in its task. As a human proxy moved towards the hole, the robot rushed in to push it out of the path of danger. But when the team added a second human proxy rolling toward the hole at the same time, the robot was forced to choose. Sometimes, it managed to save one human while letting the other perish; a few times it even managed to save both. But in 14 out of 33 trials, the robot wasted so much time fretting over its decision that both humans fell into the hole. The work was presented on 2 September at the Towards Autonomous Robotic Systems meeting in Birmingham, UK.
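The indecision described above can be sketched as a toy loop (purely illustrative; the positions, speed, and greedy targeting rule are assumptions, not Winfield's actual controller): a rescuer that always retargets whichever endangered proxy is currently nearest has no way to commit when two proxies are at similar distances.

```python
import math

def nearest(robot, proxies):
    """Greedy rule: target whichever endangered proxy is currently closest."""
    return min(proxies, key=lambda p: math.dist(robot, p))

def step_toward(pos, target, speed=1.0):
    """Move `pos` up to `speed` units straight toward `target`."""
    d = math.dist(pos, target)
    if d <= speed:
        return target
    return tuple(c + speed * (t - c) / d for c, t in zip(pos, target))

# With static, symmetric proxies the tie is broken arbitrarily and the
# robot commits to one; once the proxies themselves are moving (as in
# the experiment), the "nearest" target can flip repeatedly and the
# robot stalls between them.
robot = (0.0, 0.0)
proxies = [(10.0, 5.0), (10.0, -5.0)]
for _ in range(3):
    robot = step_toward(robot, nearest(robot, proxies))
```

The interesting failure mode is exactly the one the article reports: the greedy rule is locally sensible but has no tie-breaking commitment, so time is spent re-deciding rather than rescuing.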
Winfield describes his robot as an “ethical zombie” that has no choice but to behave as it does. Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions. Winfield admits he once thought it was not possible for a robot to make ethical choices for itself. Today, he says, “my answer is: I have no idea”.

As robots integrate further into our everyday lives, this question will need to be answered. A self-driving car, for example, may one day have to weigh the safety of its passengers against the risk of harming other motorists or pedestrians. It may be very difficult to program robots with rules for such encounters.

But robots designed for military combat may offer the beginning of a solution. Ronald Arkin, a computer scientist at Georgia Institute of Technology in Atlanta, has built a set of algorithms for military robots – dubbed an “ethical governor” – which is meant to help them make smart decisions on the battlefield. He has already tested it in simulated combat, showing that drones with such programming can choose not to shoot, or try to minimise casualties during a battle near an area protected from combat according to the rules of war, like a school or hospital.
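A governor of this sort can be imagined as a constraint check that runs before any engagement decision (a toy sketch of the idea only, not Arkin's actual algorithm; the site names and casualty bound are made up for illustration):

```python
# Sites protected from combat under the rules of war (illustrative list).
PROTECTED_SITES = {"school", "hospital", "place of worship"}

def governor_permits(sites_in_blast_area, expected_casualties, casualty_limit):
    """Veto an engagement if a protected site lies in the blast area,
    or if expected casualties exceed the permitted bound."""
    if PROTECTED_SITES & set(sites_in_blast_area):
        return False
    return expected_casualties <= casualty_limit

# A drone near a school holds fire even against a legitimate target:
print(governor_permits({"school", "barracks"}, 2, 5))  # False
print(governor_permits({"barracks"}, 2, 5))            # True
```

The design point matches Arkin's claim in the article: because the laws of war are explicit and codified, they translate more readily into hard constraints than everyday ethical judgment does.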
Arkin says that designing military robots to act more ethically may be low-hanging fruit, as these rules are well known. “The laws of war have been thought about for thousands of years and are encoded in treaties.” Unlike human fighters, who can be swayed by emotion and break these rules, automatons would not.

“When we’re talking about ethics, all of this is largely about robots that are developed to function in pretty prescribed spaces,” says Wendell Wallach, author of Moral Machines: Teaching Robots Right from Wrong. Still, he says, experiments like Winfield’s hold promise in laying the foundations on which more complex ethical behaviour can be built. “If we can get them to function well in environments when we don’t know exactly all the circumstances they’ll encounter, that’s going to open up vast new applications for their use.”

This article appeared in print under the headline “The robot’s dilemma”

kennyvee:

kennyvee:

liberalsarecool:

ppaction:

NOPE. 

Republicans talking shit AGAIN. This @GOP tweet is the literal opposite of what they believe, how they campaign, and how they vote.

They know that no matter how outrageously they lie, their base will still believe them.

I reblogged this a couple weeks ago, but I’m reblogging it again because after sending that tweet out on September 1st, Republicans blocked equal pay (yet again) just two weeks later.

gothharrystyles:

hands down the best scene from any movie ever ever

mini-tuffs:

For PC and Secret
They are awesome and I love them both.
It’s a platonic freesome… freindsome…threendsome?

compoundchem:

Today’s graphic looks at the 20 common amino acids that are combined to make up the proteins in our bodies. It also gives the three-letter and one-letter codes for each, as well as denoting whether they are ‘essential’ or ‘non-essential’.

Read more information & grab the PDF here: http://wp.me/p4aPLT-tu
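The three-letter and one-letter codes mentioned are the standard IUPAC abbreviations; a few of the 20 can be sketched as a small lookup table (only a sample is shown here, with the usual essential/non-essential labels):

```python
# A handful of the 20 common amino acids, keyed by name:
# (three-letter code, one-letter code, essential or non-essential).
AMINO_ACIDS = {
    "Alanine":    ("Ala", "A", "non-essential"),
    "Glycine":    ("Gly", "G", "non-essential"),
    "Leucine":    ("Leu", "L", "essential"),
    "Lysine":     ("Lys", "K", "essential"),
    "Tryptophan": ("Trp", "W", "essential"),
}

def one_letter(name):
    return AMINO_ACIDS[name][1]

print(one_letter("Lysine"))  # K
```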

blackenedvelvet:

woesleeper:

this makes it so much better!

HAHAHA SORRY EVERYONE

Shut the fuck up about vaccinations. Not everyone has to have them, not everyone believes in them. Uneducated fuck.
Anonymous

aspiringdoctors:

restless-wafarer:

aspiringdoctors:

image

You know, my homie and secret best friend Neil deGrasse Tyson said it best….

image

This isn’t an issue of belief, and it shouldn’t even be up for discussion. It’s not a debate – just as gravity, or the Earth revolving around the Sun, isn’t up for debate. It’s a fact, whether or not you like it. Sorry bro.

And any ‘educated fuck’ knows that vaccines are necessary and everyone who can have them should have them.

Have a lovely day, sugar. 

Actually there’s a lot of research and knowledge supporting the fact that vaccines are NOT necessary. It is simply another thing that today’s health system is super big on, just like hospital births and c-sections. And a lot of people actually have long term and short term complications from getting vaccines. Ahem.

Dang guys, you thought I didn’t check my activity log every now and then? Because I knew shit like this would pop up. And, I just finished my block exam and am feeling feisty.

Actually you’re wrong. That ‘research’ is either completely fabricated OR grossly misinterprets the data OR uses shitty research techniques to get the data they want – all of which are grossly unethical, in case you’re curious. I’ve got slides from a recent lecture on vaccines (aka why I am so fired up about this nonsense). You can check out the citations on each slide if you don’t believe me… something unsurprisingly missing from literally every anti-vaccine comment I’ve gotten and website that I have visited. Show me your sources, honey, and if you do, I will blow them out of the water, because not a single one stands up to current scientific research standards.

There are, however, tomes and tomes of research on the safety and efficacy of vaccines. Don’t believe me? Look at a simple Google Scholar search.

So! Here we go! 

image

image

Holy shit, it’s almost like vaccines SAVE SOCIETY MONEY. In fact, they give money back to society, along with the other programs indicated by red arrows. Which would be really weird for something that is just a healthcare fad like c-sections and hospital births.

And most people have no complications from getting vaccines, and if they do, most of them are short term. In fact, it is devilishly hard to prove an adverse effect was because of a vaccine. Why? Because it’s how we’re wired. We falsely see connections and causes where there are none (called a type 1 error; you are rejecting a true null hypothesis). People are more likely to attribute an adverse health event to a shot – even if that shot is the placebo and the numbers are just the background rate for whatever health event in the population.

image

And here is a graph showing the sample sizes necessary to prove that an adverse event is caused or related to a vaccine.

image

You know what, it was a really good lecture and I’m going to share more relevant slides in case anyone else feels like contradicting me.

These slides show the public health impact of vaccines. Note the differences between the historical peak and post-vaccine era deaths columns. Because saving literally thousands of lives is totally a conspiracy you should beware of.

image

image

And this is why herd immunity is so important! See how high it has to be for measles? Guess what we’re seeing outbreaks of thanks to anti-vaxxers? Don’t forget that one of the deadly complications of measles is SSPE.

image
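The high bar for measles follows from the standard herd-immunity formula: the threshold is 1 − 1/R0, where R0 is the basic reproduction number (how many people one case infects in a fully susceptible population). Measles’ R0 is commonly quoted around 12–18; the slide’s exact figures aren’t reproduced here, and the R0 values below are illustrative round numbers:

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to stop sustained spread."""
    return 1.0 - 1.0 / r0

for name, r0 in [("measles", 15), ("polio", 6)]:
    print(f"{name} (R0={r0}): {herd_immunity_threshold(r0):.0%}")
# measles (R0=15): 93%
# polio (R0=6): 83%
```

That ~93% figure is why even a modest pocket of anti-vaxxers is enough to let measles outbreaks restart.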

Look how Hepatitis A infections in older adults went down after kids started getting immunized. Shocking! Could vaccines be… good for …. everyone????

image

Ahem.

consecratedeffort:

EPOC Neuroheadset by Emotiv

isthistakenalready:

Aoi Honoo is Too Real

kskb:

(via 女子中学生ズ – 72q.org)