Sunday, March 10, 2013

Conflict Resolution - 3: Contextual Optimization

Asimov's Zeroth Law
We continue this discussion from where we left off a few weeks ago: robot ethics and how Asimov's robots resolve conflicts. A key update to Asimov's original three laws is the inclusion of the zeroth law (from Wikipedia):

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

A fundamental difference between this law and the others is its abstract specification, which gives little indication of how it is to be implemented. Furthermore, because this law carries the highest priority, Asenion robots are designed first and foremost to safeguard 'humanity', with minimizing injury to individual humans and to themselves as secondary and tertiary objectives. A robot is handed the difficult computational task of proving that the cost of hurting a human is less than the humanitarian benefit derived from an alternative action, and it must do so within a finite amount of time.

The Universal Conflict-Resolution Model
Immanuel Kant's 'categorical imperative' is an example of a universal conflict-resolution model that strives furiously to be context-free, and one that has greatly influenced western thought. Similarly, the Ten Commandments' "Thou shalt not kill" is absolute. Any machine that includes this hard constraint would be unable to kill, even defensively, in order to protect a large number of humans under threat. Kantian rules are easy to 'encode-and-forget' within machines and systems since one never has to worry about the context of their application. As we saw in the previous post, the robotic laws are context-free and Kantian in design. The original three laws operated as hard, must-satisfy constraints and necessary conditions. Per Wikipedia, Kant's

"perfect duties are those that are blameworthy if not met, as they are a basic required duty for a human being."

The problem, of course, is that the rigidity of all-hard rules is not practical (see the prior post), and later versions of Asenion robots appear to additionally operate on Kant's concept of imperfect duty (again from Wikipedia):

"unlike perfect duties, you do not attract blame should you not complete an imperfect duty but you shall receive praise for it should you complete it, as you have gone beyond the basic duties and taken duty upon yourself"

Thus, imperfect duties are soft, rather than hard, constraints, and the aim is to maximally satisfy (minimally violate) the requirements. However, the optimization weights that trade off the relative importance of each of the Asenion robot's 'imperfect duties' are hard-coded, and additional context-specific inputs are required from humans to resolve a dilemma satisfactorily. The robots have no authority to perform context-specific conflict resolution on their own.
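To make the optimization analogy concrete, here is a minimal sketch (in Python, with action names and trade-off weights invented purely for illustration, not taken from Asimov) of how 'perfect duties' map to hard constraints and 'imperfect duties' to weighted soft constraints. Note that the weights are fixed at design time: the robot can only score candidate actions against them, and any context-dependence has to be supplied from outside.

    # Sketch: perfect duties as hard constraints, imperfect duties as weighted
    # soft constraints. Names and weights are illustrative, not Asimov's.

    def violates_perfect_duties(action):
        # Hard, must-satisfy constraint: any violation disqualifies the action.
        return action["harms_human_by_action"]

    # Hard-coded trade-off weights for the 'imperfect duties'; the robot
    # cannot change these, they are fixed at design time.
    WEIGHTS = {"harm_by_inaction": 10.0, "disobedience": 3.0, "self_damage": 1.0}

    def penalty(action):
        # Weighted sum of soft-constraint violations: lower is better.
        return sum(WEIGHTS[k] * action.get(k, 0.0) for k in WEIGHTS)

    def choose(actions):
        feasible = [a for a in actions if not violates_perfect_duties(a)]
        if not feasible:
            return None  # all-hard rules leave the robot frozen
        return min(feasible, key=penalty)

    candidates = [
        {"name": "stand by", "harms_human_by_action": False, "harm_by_inaction": 0.8},
        {"name": "intervene", "harms_human_by_action": False, "disobedience": 1.0, "self_damage": 0.5},
    ]
    print(choose(candidates)["name"])  # -> "intervene"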

How, then, does this zeroth law work in practice in Asimov's stories? Its Kantian abstraction is incomprehensible to all except a couple of enlightened robots (with telepathic ability, no less). For most robots, the zeroth law remains useful only on paper.

Contextual Ethics: The Indian Way
Rajiv Malhotra's path-breaking book "Being Different: An Indian Challenge to Western Universalism" provides a fascinating contrast between the traditional Indian approach of 'contextual ethics' (CE) that arose from its Dharmic thought system, and the 'context-free ethics' that largely guides the western approach (for an earlier post based on his work, see here; some of the methods in this post are applications of ideas in this book). From an optimization perspective, we can think of a CE-embedded robot as one that maximally satisfies a combination of hard, soft, and firm constraints, where a 'firm' constraint refers to a hard constraint that is minimally and temporarily relaxed, depending on the specific context of the dilemma and for the benefit of the 'greatest good', at the expense of incurring a context-specific penalty. This flexibility must not be confused with moral relativism, where a set of soft constraints is tactically and optimally manipulated according to context to maximize some convenient, self-serving objective. 'Optimally timing an apology' can be thought of as an example of a non-self-serving contextual optimization model. It is worth doing a deep dive into this concept by reviewing some passages from the aforementioned book:

"The frequently leveled charge of moral relativism against this contextual morality is inaccurate, because the conduct and motive are considered consequential in judging the ultimate value of statements.

.... Dharmic ethics are formulated in response to the situation and context of the problem in a way that makes Western ethics seem unduly codified, rigid, monolithic and even simplistic. A.K. Ramanujan, in his influential essay 'Is There an Indian Way of Thinking?', uses the terms 'context-free' and 'context-sensitive' to contrast the West and India in their respective approaches to ethics: "Cultures may be said to have overall tendencies to idealize, and think in terms of, either the context-free or the context-sensitive kind of rules. Actual behavior may be more complex, though the rules they think with are a crucial factor in guiding the behavior. In cultures like India's, the context-sensitive kind of rule is the preferred formulation" ....

.... Dharmic traditions, on the other hand, have long sought to arrive at truth by balancing universal truths and acts with those that can be determined only in the context in which they occur. Dharmic cultures have thus evolved to become comfortable with complexity and nuance, rejecting notions of the absolute and rigid ideals of morality and conduct....

....dharmic thought offers both universal and contextual poles – not just the latter, as that would be tantamount to moral relativism."


The dharmic approach lies between the "all-soft-constraint" and the Kantian "hard-and-soft-constraint" approaches to decision optimization.
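One way to write this down, as an illustrative formulation of my own rather than anything taken from the book, is a relaxation-with-penalty model: hard constraints always hold, soft constraints are weighted into the objective, and firm constraints admit a small, context-priced relaxation only when the context permits it:

    \begin{aligned}
    \min_{x,\,r}\quad & \sum_{j} w_j\, v_j(x) \;+\; \sum_{k} \pi_k(c)\, r_k \\
    \text{subject to}\quad & h_i(x) \le 0 \quad \text{(hard: always enforced)} \\
    & g_k(x) \le b_k + r_k,\quad 0 \le r_k \le \bar{r}_k(c) \quad \text{(firm: relaxable only when the context } c \text{ permits)}
    \end{aligned}

Here v_j(x) measures the violation of soft constraint j, and \pi_k(c) is the context-specific penalty paid for relaxing firm constraint k. Setting \bar{r}_k(c) = 0 in every context recovers the Kantian hard-and-soft model, while dropping the hard constraints altogether gives the all-soft model.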


Applying contextual optimization
Asimov's telepathic robot Giskard formulates and solves a probabilistic optimization problem in which it trades off the opportunity cost (in terms of human lives) against the expected benefit to humanity. However, the degree of uncertainty in this conflict-resolution model is too high, and the robot eventually crashes. This episode comes across as an example of applying the CE approach to resolve a dilemma. The great Indian epics, the Ramayana and the Mahabharata, contain several brilliantly narrated instances of contextual conflict resolution. Indian sci-fi movie buffs would not be surprised to know that George Lucas' Star Wars was inspired by the Ramayana.

Contextual optimization in the specific area of 'mathematical decision-support software' would mean allowing the rules of engagement to be configurable depending on the context. For regular users, advanced settings are greyed out, with only the universal (default) rules enabled. Only super-users, who are well trained and comprehend the nature and consequences of the beast, get to work with 'firm' constraints, and only on rare occasions. For example, an airline crew-scheduling optimization system should be configured to satisfy contractual and FAA rules, except during emergencies (e.g., post-9/11 recovery) where 'crew welfare' is achievable only by overriding one or more of these rules. Practical decision-support systems should be carefully designed to allow such controlled contextual optimization.
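As a hypothetical sketch of how such gating might look in a crew-scheduling configuration (the rule names, roles, and penalty values below are invented for illustration and are not drawn from any real FAA or contractual rule set):

    # Hypothetical configuration sketch: firm constraints may be relaxed only
    # by a trained super-user, and only at a declared context-specific penalty.

    RULES = {
        "max_duty_hours":    {"type": "hard"},   # universal: never relaxed
        "min_rest_hours":    {"type": "firm", "max_relax": 2.0, "penalty_per_unit": 500.0},
        "preferred_pairing": {"type": "soft", "weight": 10.0},
    }

    def build_model_rules(user_role, context):
        """Return the effective rule set for this run of the optimizer."""
        effective = {}
        for name, rule in RULES.items():
            if rule["type"] == "firm" and not (user_role == "super_user" and context == "emergency"):
                # For regular users (or normal operations) a firm rule behaves as hard.
                effective[name] = {"type": "hard"}
            else:
                effective[name] = dict(rule)
        return effective

    print(build_model_rules("regular_user", "normal"))     # firm rule shown as hard
    print(build_model_rules("super_user", "emergency"))    # firm rule relaxable, with penalty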

Amending the Zeroth Law: The Dharmic Robot
Given their anthropocentric nature, the four laws do not quite protect the rest of the cosmos (e.g., from humanity). From an Indian point of view, this gap can be closed by modifying the zeroth law based on the contextual ethics of dharma. Rajiv Malhotra, in his book, provides the etymology and a working definition of dharma:

"Dharma has the Sanskrit root dhri, which means 'that which upholds' or 'that without which nothing can stand' or 'that which maintains the stability and harmony of the universe'. Dharma encompasses the natural, innate behaviour of things, duty, law, ethics, virtue, etc. For example, the laws of physics describe current human understanding of the dharma of physical systems. Every entity in the cosmos has its particular dharma – from the electron, which has the dharma to move in a certain manner, to the clouds, galaxies, plants, insects, and of course, man. Dharma has no equivalent in the Western lexicon."

In such a framework, Asimov's laws would delineate a robot's various dharmas. At the highest level, we can require that a robot abide by the following fundamental law, based on an ancient Indian text:

"Non-harming is a robot's highest priority, except in the defense of dharma"

Conflict resolution is always performed by first applying this highest dharmic principle and customizing it to the specific context. Note that by operating on the fundamental dharmic principle of least harm, a robot would usually satisfy Asimov's zeroth law, albeit in a context-specific manner, while also being in harmony with the original laws as well as any new laws that may be written in the future. Interestingly, the Hippocratic oath of medical doctors is based on a similar idea that represents a non-negative bound: "do good, or at least no harm". If complex systems, new drugs, etc., are designed by always keeping this fundamental principle in context, it may well minimize the risk of catastrophic failure.
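One way to read this amended law in optimization terms (my interpretation, not a formal model from the book or from Asimov) is as a lexicographic objective: first minimize harm, then, among the least-harm options, maximize benefit; the 'defense of dharma' exception would, in a fuller model, allow a context-specific relaxation of the harm minimum, much like the firm constraints above. A minimal sketch:

    # Minimal sketch of a lexicographic 'least harm first' selection rule.
    # The candidate actions and their harm/benefit scores are purely illustrative.

    def least_harm_choice(actions):
        # Stage 1: keep only the actions whose harm is minimal.
        min_harm = min(a["harm"] for a in actions)
        least_harmful = [a for a in actions if a["harm"] == min_harm]
        # Stage 2: among those, pick the one with the greatest benefit.
        return max(least_harmful, key=lambda a: a["benefit"])

    candidates = [
        {"name": "do nothing",           "harm": 0.4, "benefit": 0.0},
        {"name": "warn and wait",        "harm": 0.1, "benefit": 0.3},
        {"name": "intervene gently",     "harm": 0.1, "benefit": 0.7},
        {"name": "intervene forcefully", "harm": 0.6, "benefit": 0.9},
    ]
    print(least_harm_choice(candidates)["name"])  # -> "intervene gently"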

6 comments:

  1. Hi Shiva,

    Good post (as usual!). Unwinding this evening, I read this post with interest, and seeing myself as an "engineer-dabbling-in-OR-pretending-to-be-a-mathematician" (:)), I asked myself - where are the corner points in this approach? So the rest of my discussion is my attempt at finding corner points, and nothing else.

    If Dharma is defined as a law that aims at upholding and maintaining the cosmic order, then how would an autonomous robot encoded with such a belief system behave in today's real-world?

    Would the robots decide that the human race is causing the extinction of thousands of species and has a life-style that is unsustainable, and choose to extinguish humans? Or would it choose to destroy just so many omnivorous humans, so that an equilibrium with nature and sustainability can be achieved? (A min-max optimization, where it attempts to minimize the harm while maximizing dharmic benefits!)

    Would the robot conclude that animals, plants, cellular organisms all have similar dharmic rights as humans, and if so, whose rights would it advocate in our dog-eat-dog world? What sort of moral-relativism would it undertake to guide its actions?

    Given that any human (for that matter living-organism) activity involves the expenditure of energy (and hence the acquisition of karma), would such robots regulate all activity so that a universal dharmic metric can be maximized?

    Would it decide that a militant-robot-force that aims to uphold Dharma is to be upheld at all costs, and dispassionately (sic) remove non-dharmic forces wherever they exist? And in the same vein as the Asimov story, would it decide that governance by robots is far better than the imperfect human systems?

    What kind of laws would ruling-dharmic-robots promulgate? How would transgressions of the laws (which by definition would be adharmic) be handled by governing robots?

    Is it possible that the current research on sentient artificial intelligence can actually succeed in creating such emotive robots, and if so, what is the "I" that such robots would contemplate on? Would they see their AI-consciousness as part of the universal collective-consciousness?

    Let's discuss this further - as a starting point, please correct my misconceptions about the dharmic order. Thanks!

    RV
    p.s: now that I've book-marked this page, I'll make sure to visit often!

  2. Thanks for your kind words and for initiating a very interesting discussion. Not sure I will be able to respond to the great questions you've raised, but thinking aloud:

    As can be seen in the direction these posts are going, I'm inclined (maybe biased) to view a Dharmic robot as an incredibly useful and reliable decision support system, but with overriding ability available to humans - far more limited in scope than Asimov's (which, as the IEEE author points out, is a great literary device, but not practical). If that overriding ability is deactivated, humans blindly delegate decision-taking ability to a machine - which is adharmic (I think). So a Dharmic robot would suggest one or more optimal solutions and do its best to explain how/why each is optimal, but would ensure that the human understands the reasons and presses 'accept/reject'. Furthermore, I do not expect a robot to be able to reach the higher levels of consciousness that a trained Yogi can.... more later. Thanks.

    Replies
    1. oops. retry. Not sure i will be able to add my comments to the points you've raised right away, but will do so soon. thanks.

    2. I agree that ensuring semi-autonomy (carefully defining the boundaries of "semi") can prevent such robots from disrupting our non-ideal world in any great way; but I wonder what the "killer app" would be for such robots.

      Read the entries in http://en.wikipedia.org/wiki/Artificial_intelligence outlining work in social intelligence, creativity and general intelligence. I delivered a plenary talk in India in December on the frontiers of these AI areas, our current forays into massive data generation and Big Data analytics, the state of OR and computing, the state of semiconductor miniaturization, and Ray Kurzweil's vision of the impending singularity.

      I do not think that all this is far-fetched, despite the decades-old head-scratching on the "P-vs-NP" conundrum.

      From my reading of technological trends, extrapolating from today's drones deployed by the USA in Af-Pak, or bomb-sniffing terrestrial robots, I see autonomous military robots (airborne, submarine, or terrestrial) as a very possible future soldier. Could this trend be exploitable by the moneyed-church to make a bible-toting-quoting-enforcing robot to be deployed in poorer regions of the world?

      RV

    3. thanks for taking the time to share. lots of interesting and audacious ideas, sir :))

      1. I haven't read RK's works, but thanks to your comments, for starters, I read the Wikipedia page on his 'spiritual machine'. His definition of 'spirituality' appears to be derived from some Unitarian branch. He also uses some mangled form of Advaita to hint at "unity consciousness", and gets that wrong too. Unless he's done a thorough study of the work in this area within Indian thought systems (which I seriously doubt), it is tough to take him seriously there.

      2. On the other hand, when he talks about science & technology and the material world, his thoughts seem useful - I hope to read his work soon. I found your own vision on AI very interesting - would love to learn more about it; kindly do share if possible. I too see a role for autonomous AI, but within some overall constrained operating environment - kinda like my pet vacuuming robot :)

      3. As far as 'intelligence' itself goes, Aurobindo did some pioneering work on this (since appropriated by some Harvard folks and repackaged as "multiple intelligences"). There are multiple parts and planes to a being, but RK merely looks at one component, it seems.

      4. Related to your last question - there was a recent discussion in some online forum on how Mantra chanting by a machine doesn't 'fully work', which you may find interesting. I hope to discuss this in a future post. More later ...

    4. I don't think I'd take any of RK's views on spirituality seriously. But his take on the accelerating pace of technology leading to a singularity, where machine intelligence surpasses human intelligence, is fascinating. I don't believe that this can happen unless P=NP; in the absence of that, we'll periodically see "intelligent" machines taking nonsensical decisions as their heuristics fail.

      Ray Kurzweil's plausible, but debatable, predictions on the singularity:
      1. By 2020, computers will have the same processing power as the human brain, shrunk to less than 100 nm.
      2. By 2030, mind uploading becomes possible.
      3. By 2040, cybernetic implants create human body 3.0.
      4. By 2045, the singularity is reached: extremely disruptive.
      5. After 2045, the universe is “woken up” as material is converted to intelligent substrate.

      RK is now at Google, potentially in a position to leverage billions of facts into some sort of "intelligence".

      RV

