Einstein wrote it without one footnote (EN)
“According to experts in artificial intelligence, fully autonomous weapons, which would select and engage targets without meaningful human control, could be developed for use within years, not decades. Also known as “killer robots,” these weapons would have the power to make life-and-death determinations, a power previously reserved for humans. The prospect raises a host of moral, legal, and other concerns.
Enter, Jehanne! Interestingly though, the AI’ists sprang the ambush on them lawlifes.
“While states are still considering how to deal with the problems posed by these weapons, there is emerging agreement that the issue of meaningful human control should be a central point of discussion.
Emergent agreement – the transitional ‘ality..
“Humans should exercise control over individual attacks, not simply overall operations. Only by prohibiting the use of fully autonomous weapons can such control be guaranteed.

“As the Holy See observed in a statement at a CCW meeting on lethal autonomous weapon systems, “Prudential judgement cannot be put into algorithms.”
Which remains to be seen! I’d not be so sure! In fact I’d put my money on them ‘cepticons!
“Machines lack morality and mortality, and should as a result not have life and death powers over humans.
There’s an argument if there ever was one.
“The ability to distinguish combatants from civilians or from wounded or surrendering soldiers as well as the ability to weigh civilian harm against military advantage require human qualities that would be difficult to replicate in machines, including fully autonomous weapons.
Humbug! Did diffies ever stop technology? Space-Axer?
“Determining whether an individual is a legitimate target often depends on the capacity to detect and interpret subtle cues, such as tone of voice and body language. Humans usually understand such nuances because they can identify with other human beings and thus better gauge their intentions.
Also clearly within reach of AI. The problem lies elsewhere..
“Assessing proportionality entails a case-by-case analysis, traditionally based on a reasonable commander standard. Such an analysis requires “distinctively human judgement” and the application of reason, which takes into account both moral and legal considerations.
But why, why should reason have a say in this! Wasn’t reason part of the genealogy?
“The United States and Israel have both advocated for using the term “appropriate human judgment” rather than meaningful human control in the discussion of lethal autonomous weapons systems.
Obviously… the wording is king. It is entirely clear where the hawkings want to.. tread.
“The bans on mines and chemical and biological weapons provide precedent for prohibiting weapons over which there is inadequate human control. [..] This Hague convention prohibited states parties from laying unanchored automatic contact mines “except when they are so constructed as to become harmless one hour at most after the person who laid them ceases to control them.” The text implies that these sea mines become unacceptably dangerous without human control.
Does it? We hope to believe in precedents. But precedents presuppose a history of similarity. And when something really novel happens, precedent becomes obsolete.
“Mandating meaningful human control of weapons would help protect human dignity in war, ensure compliance with international humanitarian and human rights law, and avoid creating an accountability gap for the unlawful acts of a weapon.
If a robot can be made to kill, wouldn’t you scrap the whole model? So does accountability work at all in this futuristic framework? But far more importantly:
Can humans not protect even their own dignity?
Source: https://www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control (11 April 2016)