Asimov's Zeroth Law
We continue this discussion from where we left off a few weeks ago: robot ethics, and how Asimov's robots resolved conflicts. A key update to Asimov's original three laws is the inclusion of the zeroth law (from Wikipedia):
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
A fundamental difference between this law and the others is its abstract specification, with little indication of how it is to be implemented. Furthermore, by giving this law the highest priority, Asenion robots are designed first and foremost to safeguard 'humanity', while minimizing injury to individual humans and to themselves as secondary and tertiary objectives. A robot is given the difficult computational task of proving that the cost of hurting a human is less than the humanitarian benefit derived from an alternative action, and it must do so within a finite amount of time.
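This cost-benefit proof under a deadline can be caricatured as a small decision procedure. A minimal sketch follows; the action names, utility numbers, and time budget are all invented for illustration, not drawn from Asimov:

```python
import time

def zeroth_law_decision(actions, utility, deadline_s=0.1):
    """Pick the action with the best estimated net benefit to humanity,
    but only among those the robot manages to evaluate before the
    finite time budget expires."""
    start = time.monotonic()
    best_action, best_score = None, float("-inf")
    for action in actions:
        if time.monotonic() - start > deadline_s:
            break  # out of time: act on what has been evaluated so far
        score = utility(action)  # humanitarian benefit minus harm cost
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Hypothetical net utilities (benefit to humanity minus cost of harm)
utilities = {"do_nothing": 0.0, "restrain_human": 2.5, "harm_human": -10.0}
choice = zeroth_law_decision(utilities, lambda a: utilities[a])
# choice -> "restrain_human"
```

The point of the sketch is the deadline: a robot that cannot complete the proof in time must still commit to the best option found so far.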
The Universal Conflict-Resolution Model
Immanuel Kant's 'categorical imperative' is an example of a universal conflict-resolution model that strives to be context-free, and one that has greatly influenced western thought. Similarly, the Ten Commandments' "Thou shalt not kill" is absolute. Any machine that includes this hard constraint would be unable to kill, even defensively, in order to protect a large number of humans under threat. Kantian rules are easy to 'encode-and-forget' within machines and systems, since one never has to worry about the context of their application. As we saw in the previous post, the robotic laws are context-free and Kantian in design. The original three laws operated as hard, must-satisfy constraints and necessary conditions. Per Wikipedia, Kant's
"perfect duties are those that are blameworthy if not met, as they are a basic required duty for a human being."
The problem, of course, is that the rigidity of all-hard rules is not practical (see the prior post), and later versions of Asenion robots appear to additionally operate on Kant's concept of the imperfect duty (again from Wikipedia):
"unlike perfect duties, you do not attract blame should you not complete an imperfect duty, but you shall receive praise for it should you complete it, as you have gone beyond the basic duties and taken duty upon yourself."
Thus, imperfect duties are soft rather than hard constraints, and the aim is to maximally satisfy (minimally violate) the requirements. However, the optimization weights that trade off the degree of importance assigned to each of the Asenion robot's 'imperfect duties' are hard-coded, and additional context-specific inputs are required from humans to achieve a satisfactory resolution of a dilemma. The robots have no authority to perform context-specific conflict resolution on their own.
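This "maximally satisfy, minimally violate" scheme with designer-fixed weights can be sketched as a weighted-penalty minimization. The duty names, weights, and candidate actions below are hypothetical:

```python
def total_penalty(action_effects, duties):
    """Sum the weighted violations of each imperfect duty.
    `duties` maps duty name -> (weight, required_level); the weights
    are hard-coded by the designer, not adjusted by the robot."""
    penalty = 0.0
    for duty, (weight, required) in duties.items():
        # Only shortfalls below the required level are penalized
        shortfall = max(0.0, required - action_effects.get(duty, 0.0))
        penalty += weight * shortfall
    return penalty

# Hypothetical duties with fixed priority weights:
# protect humanity >> obey orders >> self-preservation
DUTIES = {"protect_humanity": (100.0, 1.0),
          "obey_orders": (10.0, 1.0),
          "self_preservation": (1.0, 1.0)}

# Two candidate actions, scored by how little they violate the duties
a = {"protect_humanity": 1.0, "obey_orders": 0.0, "self_preservation": 1.0}
b = {"protect_humanity": 0.5, "obey_orders": 1.0, "self_preservation": 1.0}
best = min([a, b], key=lambda eff: total_penalty(eff, DUTIES))
# a disobeys an order (penalty 10); b half-fails humanity (penalty 50)
```

Because the weights are baked in, any dilemma the weights do not anticipate still needs a human to supply the missing context, exactly as described above.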
How, then, does this zeroth law practically work in Asimov's stories? Its Kantian abstraction is incomprehensible to all except a couple of enlightened robots (with telepathic ability, no less). For most robots, the zeroth law remains useful only on paper.
Contextual Ethics: The Indian Way
Rajiv Malhotra's path-breaking book "Being Different: An Indian Challenge to Western Universalism" provides a fascinating contrast between the traditional Indian approach of 'contextual ethics' (CE), which arose from its Dharmic thought system, and the 'context-free ethics' that largely guides the western approach (for an earlier post based on his work, see here; some of the methods in this post are applications of ideas in this book). From an optimization perspective, we can think of a CE-embedded robot as one that maximally satisfies a combination of hard, soft, and firm constraints, where a 'firm' constraint refers to a hard constraint that is minimally and temporarily relaxed, depending on the specific context of the dilemma and for the benefit of the 'greatest good', at the expense of incurring a context-specific penalty. This flexibility must not be confused with moral relativism, where a set of soft constraints is tactically and optimally manipulated according to context to maximize some convenient, self-serving objective. 'Optimally timing an apology' can be thought of as an example of a non-self-serving contextual optimization model. It is worth doing a deep dive into this concept by reviewing some passages in the aforementioned book:
"The frequently leveled charge of moral relativism against this contextual morality is inaccurate, because the conduct and motive are considered consequential in judging the ultimate value of statements.
.... Dharmic ethics are formulated in response to the situation and context of the problem in a way that makes Western ethics seem unduly codified, rigid, monolithic and even simplistic. A.K. Ramanujan, in his influential essay 'Is There an Indian Way of Thinking?', uses the terms 'context-free' and 'context-sensitive' to contrast the West and India in their respective approaches to ethics: "Cultures may be said to have overall tendencies to idealize, and think in terms of, either the context-free or the context-sensitive kind of rules. Actual behavior may be more complex, though the rules they think with are a crucial factor in guiding the behavior. In cultures like India's, the context-sensitive kind of rule is the preferred formulation" ....
.... Dharmic traditions, on the other hand, have long sought to arrive at truth by balancing universal truths and acts with those that can be determined only in the context in which they occur. Dharmic cultures have thus evolved to become comfortable with complexity and nuance, rejecting notions of the absolute and rigid ideals of morality and conduct....
....dharmic thought offers both universal and contextual poles – not just the latter, as that would be tantamount to moral relativism."
The dharmic approach lies between the "all-soft constraint" and the Kantian "hard and soft constraint" approaches to decision optimization.
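The hard/soft/firm distinction can be made concrete with a toy evaluator. Everything here is invented for illustration: the constraint names, the emergency trigger, and the penalty of 50 standing in for the 'context-specific penalty' a firm relaxation incurs:

```python
def evaluate(action, context):
    """Return (feasible, penalty) for an action under hard, soft, and
    firm constraints. A firm constraint behaves as hard by default but
    may be minimally relaxed in a qualifying context, at a penalty."""
    penalty = 0.0
    # Hard constraint: never violable, regardless of context
    if action["harms_humanity"]:
        return False, float("inf")
    # Firm constraint: normally hard, relaxable only in an emergency
    if action["harms_individual"] > 0:
        if not context.get("emergency"):
            return False, float("inf")
        penalty += 50.0 * action["harms_individual"]  # context-specific cost
    # Soft constraint: always violable, at a weighted penalty
    penalty += 1.0 * action.get("self_risk", 0.0)
    return True, penalty

# The same action is infeasible in a routine context, but becomes
# feasible (with a penalty) during an emergency
act = {"harms_humanity": False, "harms_individual": 0.2, "self_risk": 0.5}
routine = evaluate(act, {"emergency": False})
crisis = evaluate(act, {"emergency": True})
```

Note that the firm constraint snaps back to hard the moment the context no longer qualifies, which is what separates this model from the all-soft, relativistic one.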
Applying contextual optimization
Asimov's telepathic robot Giskard formulates and solves a probabilistic optimization problem in which it trades off the opportunity cost (in terms of human lives) against the expected benefit to humanity. However, the degree of uncertainty in this conflict-resolution model is too high, and the robot eventually crashes. This episode comes across as an example of applying the CE approach to resolve a dilemma. The great Indian epics, the Ramayana and the Mahabharata, contain several brilliantly narrated instances of contextual conflict resolution. Indian sci-fi movie buffs would not be surprised to know that George Lucas' Star Wars was inspired by the Ramayana.
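Giskard's predicament can be sketched as an expected-value decision that refuses to commit when the uncertainty is too large. The sampled estimates and the variance threshold below are hypothetical, chosen only to show the failure mode:

```python
import statistics

def confident_choice(samples_by_action, max_stdev=1.0):
    """Choose the action with the best mean net benefit, but only if
    the uncertainty (sample standard deviation) is low enough to act
    on. Returns None when no estimate is trustworthy -- a loose
    analogue of Giskard freezing on an irresolvable trade-off."""
    viable = {a: statistics.mean(s) for a, s in samples_by_action.items()
              if statistics.stdev(s) <= max_stdev}
    if not viable:
        return None  # every option is too uncertain to justify
    return max(viable, key=viable.get)

# Hypothetical sampled estimates of (benefit to humanity - cost in lives);
# both options swing wildly between large gains and large losses
estimates = {"intervene": [5.0, -4.0, 6.0, -5.0],
             "wait":      [9.0, -8.0, 10.0, -9.0]}
decision = confident_choice(estimates)
# decision -> None: no option clears the uncertainty bar
```

A purely universal rule set offers no way out of this deadlock; the contextual approach would instead adjust what counts as an acceptable risk for this particular dilemma.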
Contextual optimization in the specific area of 'mathematical decision support software' would mean allowing the rules of engagement to be configurable depending on the context. For regular users, advanced settings are greyed out, with only universal (default) rules enabled. Only super users, who are well trained and comprehend the nature and consequences of the beast, get to work with 'firm' constraints, and only on rare occasions. For example, an airline crew schedule optimization system should be configured to satisfy contractual and FAA rules, except during emergencies (e.g., post-9/11 recovery) where 'crew welfare' is achievable only by overriding one or more of these rules. Practical decision support systems should be carefully designed to allow such controlled contextual optimization.
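One way such controlled relaxation might look in a rule engine is sketched below. The rule names, role names, and override mechanism are all hypothetical, standing in for whatever a real crew-scheduling system would use:

```python
# Toy rule engine: universal rules are always on; 'firm' rules can be
# relaxed only by a super user, and only with an explicit stated reason.
RULES = {"faa_duty_limit": "firm",
         "contract_min_rest": "firm",
         "no_unqualified_crew": "hard"}

def active_rules(role, overrides=None):
    """Return the set of rules a schedule must satisfy for this user.
    Regular users cannot touch firm rules; a super user may relax a
    firm rule by naming it along with a context justification."""
    overrides = overrides or {}
    active = set(RULES)
    if role != "super_user":
        return active  # advanced settings greyed out for regular users
    for rule, reason in overrides.items():
        if RULES.get(rule) == "firm" and reason:
            active.discard(rule)  # relaxed for this run only
    return active

# A regular user always gets every rule; a super user relaxes one firm
# rule during a disruption-recovery scenario. Hard rules never relax.
normal = active_rules("planner")
relaxed = active_rules("super_user",
                       {"faa_duty_limit": "post-disruption recovery"})
```

The justification string is the audit trail: each relaxation is tied to a named context, which is what keeps this 'firm' rather than quietly soft.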
Amending the Zeroth Law: The Dharmic Robot
Given their anthropocentric nature, the four laws do not quite protect the rest of the cosmos (e.g., from humanity itself). From an Indian point of view, this gap can be closed by modifying the zeroth law based on the contextual ethics of dharma. Rajiv Malhotra, in his book, provides the etymology and a working definition of dharma:
"Dharma has the Sanskrit root dhri, which means 'that which upholds' or 'that without which nothing can stand' or 'that which maintains the stability and harmony of the universe'. Dharma encompasses the natural, innate behaviour of things, duty, law, ethics, virtue, etc. For example, the laws of physics describe current human understanding of the dharma of physical systems. Every entity in the cosmos has its particular dharma – from the electron, which has the dharma to move in a certain manner, to the clouds, galaxies, plants, insects, and of course, man. Dharma has no equivalent in the Western lexicon."
In such a framework, Asimov's laws would delineate a robot's various dharmas. At the highest level, we can require that a robot abide by the following fundamental law, that is based on an ancient Indian text:
"Non-harming is a robot's highest priority, except in the defense of dharma"
Conflict-resolution is always performed by first applying this highest dharmic principle and customizing it to the specific context. Note that by operating on the fundamental dharmic principle of least harm, a robot would usually satisfy Asimov's zeroth law, albeit in a context-specific manner, while also being in harmony with the original laws, as well as any new laws that get written in the future. Interestingly, the Hippocratic oath of medical doctors is based on a similar idea that represents a non-negative bound: "do good, or at least no harm". If complex systems, new drugs, etc., are designed by always keeping this fundamental principle in context, it may well minimize the risk of catastrophic failure.