The World and the Machine and Responsible Machine Learning

Christian Kästner
Oct 5, 2020


(This article corresponds to parts of the lecture “Risk and Planning for Mistakes” and the recitation “Requirements Decomposition” in our course Software Engineering for AI-Enabled Systems.)

For exploring risk, feedback loops, safety, security, and fairness in production systems with machine learning components, to me the single most useful perspective is to look at the system through the lens of The World and the Machine, Michael Jackson’s classic ICSE 1995 requirements engineering paper.

The paper makes very clear — what should perhaps be obvious already — what a software system can and cannot reason about, where it needs to make assumptions about its environment, and how dangerous it can be to get those assumptions wrong. Wrong requirements and environment assumptions are the most common cause of software problems (see National Research Council).

Separating World and Machine

No software is deployed in a vacuum: Every relevant system is deployed as part of the world, to understand something about the world, and to have an impact on the world. Indeed, the goals and requirements of the system are expressed as desired states in the world: For example, we might want to convince humans to buy something, help humans make medical diagnoses affecting the treatment of other humans, or help with college admission decisions, all affecting the real world. Software, with or without machine learning, is created to interpret parts of the world and to manipulate the world toward a desired state. The world is also referred to as environment, from the perspective of the software.

The somewhat obvious key problem is that software cannot directly reason about the environment: it can only work with inputs received from the environment and can only produce outputs that are interpreted by actors and actuators in the environment. Those inputs may be more or less reliable indications of the state of the environment, and the outputs may or may not have the intended consequences.

Consider the example of software in a self-driving car: The car’s software has no direct knowledge of the driver’s attention, the speed the car is going, or other cars on the road. The software can only reason about information it gains from sensors or user input: A user may indicate a target address and push the “start driving” button, the GPS may provide information about the current speed, and a camera can provide images of the outside world. All that information hopefully, but not necessarily, reflects the phenomena of the real world. In addition, the software cannot directly manipulate the car’s state in the real world, but the car may react to outputs of the software, such as steering and acceleration commands. If the designer of the car’s software makes wrong assumptions about how the car will react to brake commands and how quickly it can brake, the software may work entirely as intended and still lead to unsafe outcomes where the car starts braking too late.
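
To make this boundary tangible, here is a minimal sketch (in Python, with hypothetical field names) of what the machine actually gets to see and do. Real-world phenomena like the driver’s attention or the true distance to other cars never appear in these types:

```python
from dataclasses import dataclass

@dataclass
class SensorInputs:
    gps_speed_kmh: float         # sensed speed, not necessarily the true speed
    camera_frame: bytes          # pixels, not "other cars"
    target_address: str          # user input, hopefully the user's actual intention
    start_button_pressed: bool

@dataclass
class ActuatorCommands:
    steering_angle_deg: float    # a request to the steering actuator
    brake_pressure: float        # the car may react more slowly than we assume
```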

In his article, Jackson draws the following conclusion:

The solutions to many development problems involve not just engineering in the world, but also engineering of the world. In this way, software development is like building bridges. The builder must study the geology and soil mechanics of the site, and the traffic both over and under the bridge. The engineering of the bridge is also engineering of its environment. The engineer must understand the properties of the world and manipulate and exploit those properties to achieve the purposes of the system. A computer system, like a bridge, can not be designed in isolation from the world into which it fits and in which it provides the solution to a problem.

Requirements, Assumptions, Specifications, and Implementation

Importantly, thinking clearly about the world and the machine and how the machine can only interact with the world mediated through sensors and actuators allows us to distinguish:

  • Requirements (REQ) describe how the system should operate, expressed entirely in terms of the concepts in the world. For example, the self-driving car should never exceed the speed limit.
  • Assumptions (ENV) express the relationship of real-world concepts to software inputs and outputs. For example, we assume that the GPS correctly represents the car’s speed, that the manually entered target address correctly represents the user’s intention, and that the car actually honors the system’s brake commands and will slow down according to an expected pattern (as expected from the physics of the situation).
  • Specifications (SPEC) describe the expected behavior of the software system in terms of input and outputs. For example, we expect the system to never issue an acceleration command (output) if the speed (input from GPS) is larger than the speed limit (input from map) in the current location (input from GPS).
  • Implementation (IMPL) provides the actual behavior of the software system that is supposed to align with the specification (SPEC), usually given as code or an executable model. A mismatch between implementation and specification may be detected, say, with testing and is considered a bug. For example, a buffer overflow in the implementation leads to acceleration commands (output) if the car is in a certain unusual location (input).

Logically, we expect that ENV ∧ SPEC ⊨ REQ, that is, assumptions and specifications together should assure the required behavior in the real world, and IMPL ⊨ SPEC, that is, the specification is implemented correctly. Problems occur when:

  • The requirements REQ are flat out wrong. For example, the car actually should be able to exceed the speed limit in emergency situations.
  • The assumptions ENV are incorrect. For example, the GPS sensor provides wrong information about the speed or the car’s brakes do not act as quickly as expected.
  • The system’s specification SPEC is wrong. For example, the specification incorrectly sets a default top speed if no map is available.
  • Any one of these parts can be internally inconsistent or inconsistent with each other. For example, the specification (SPEC) together with the assumptions (ENV) is not sufficient to guarantee the requirements (REQ) if the specified braking logic in the software (SPEC) does not account for sufficient braking time (ENV) to avoid going over the speed limit (REQ). Even two requirements may not be consistent with each other, which is actually a common problem already in non-ML systems, known as feature interactions.
  • The system is implemented (IMPL) incorrectly, differing from the specified behavior (SPEC). For example, a buffer overflow bug in the implementation causes the software to issue wrong acceleration commands that violate the specification.

Any of these parts can cause problems leading to incorrect behavior in the real world (i.e., violating the requirements), but we typically focus all our attention on finding issues in the last category, for which we have plenty of testing tools: implementation bugs, where the software does not behave as specified. Incorrect assumptions seem to be a much more pressing problem in almost all discussions around safety, security, fairness, and feedback loops.
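
To make the distinction concrete, here is a minimal sketch of the speed-limit example (hypothetical function and parameter names, not a real controller): the specification can only be written over inputs, and whether it implies the requirement depends entirely on the environment assumptions.

```python
def acceleration_allowed(gps_speed_kmh: float, map_speed_limit_kmh: float) -> bool:
    # SPEC: never issue an acceleration command if the sensed speed has reached
    # the speed limit reported by the map for the sensed location.
    return gps_speed_kmh < map_speed_limit_kmh

# Whether this satisfies the REQ ("the car never exceeds the speed limit") depends
# on ENV assumptions that live entirely outside the code: that the GPS speed tracks
# the true speed, that the map data is current, and that the car actually slows
# down when acceleration is withheld.
```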

Lufthansa 2904 Runway Crash

A classic example of how incorrect assumptions can lead to catastrophe is Lufthansa Flight 2904, which crashed in Warsaw when it overran the runway because the pilots could not engage the thrust reversers in time after landing.

Wreckage of Flight 2904 on 15 September 1993

The airplane’s software implemented a simple safety requirement (REQ): Do not engage the thrust reversers if the plane is in the air. Doing so would be extremely dangerous, hence the software should ensure that the thrust reversers are only engaged to brake the plane after it has touched down on the runway.

The key issue is that the requirement is written in terms of real-world phenomena (“plane is in the air”), but the software simply cannot know whether the plane is in the air and has to rely on sensor inputs to make sense of the world. To that end, it sensed the weight on the landing gear and the speed with which the plane’s wheels were turning. The idea — the assumption (ENV) — was that the plane is on the ground if at least 6.3 tons of weight are on each landing gear or the wheels are turning faster than 72 knots. Both seemed like pretty safe bets on how to make sense of the world in terms of available sensor inputs. The software’s specification (SPEC) was then simply to output a command to block the thrust reversers unless those conditions hold on the sensed values.
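
Expressed as code (an illustrative sketch with invented names, not the actual avionics logic), the specification is just a condition over sensed values:

```python
def thrust_reversers_allowed(weight_left_kg: float,
                             weight_right_kg: float,
                             wheel_speed_knots: float) -> bool:
    # SPEC: block the thrust reversers unless the sensed values indicate "on ground".
    on_ground_by_weight = weight_left_kg >= 6300 and weight_right_kg >= 6300
    on_ground_by_wheels = wheel_speed_knots > 72
    # ENV assumption: one of these conditions holds exactly when the plane is on the
    # ground. On Flight 2904, aquaplaning and wind meant neither condition held for
    # several seconds after touchdown, so this (correctly implemented) check kept
    # returning False.
    return on_ground_by_weight or on_ground_by_wheels
```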

Illustration of time elapsed between touchdown of the first main strut, the second, and engagement of brakes. CC BY-SA 3.0 Anynobody

Unfortunately, on a fatal day in 1993, due to rain and strong winds, neither condition was fulfilled for several seconds after Lufthansa Flight 2904 landed in Warsaw. Due to aquaplaning the wheels did not turn fast enough, and due to wind only one landing gear was loaded with enough weight. The assumption of what it means to be on the ground simply did not match the state of the real world. The system thought the plane was still in the air and thus (exactly as specified) indicated that the thrust reversers must not be engaged. Here the real world and what the software assumed about the real world simply didn’t match. In addition, the system was designed to trust the software’s output (an assumption about the actuator) and hence did not allow the pilots to override it.

In summary, the requirements (REQ) were good — the plane really should not engage thrust reversers while in the air. The specification (SPEC) and implementation (IMPL) were correct — the system produced the expected output for the given inputs. The problem was with the assumptions (ENV), on how the system would interpret the real world — whether it thought that the plane was in the air or not.

Questioning Assumptions in Machine Learning

Like all other software systems, systems with machine-learned components interact with the real world. Training data is input data derived through some sort of sensor input representing the real world (user input, logs of user actions, camera pictures, GPS locations, …), and predictions are outputs that are used in the real world for manual or automated decisions. Specifications (SPEC) for machine learning models are a problem in themselves that opens its own can of worms (discussed elsewhere), but let’s not worry about this here and simply assume that the model works as intended based on the input data it receives.

Safety issues typically stem from violations of safety requirements, which express conditions in the real world, e.g., the car shall not collide with other cars. Problems stem either from unrealistic specifications that a machine-learned model cannot satisfy or from unrealistic assumptions, such as that other cars are always recognizable in the camera image (input). Thinking carefully about assumptions and specifications forces us to reflect more critically on which safety requirements we can realistically assure.

Concept drift now quite obviously becomes a problem of changing assumptions (ENV). For example, what we consider to be credit card fraud may change. Data drift may invalidate our assumptions about typical inputs to the model that we used when deciding how to train the model (SPEC/IMPL). While we should be critical about what we assume in the first place, and especially about how well training data reflects the real world, drift makes this even harder.
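
As one possible illustration (a minimal sketch, assuming a single tabular feature and that scipy is available), we can at least monitor whether the assumption “production inputs look like the training data” still appears to hold:

```python
from scipy.stats import ks_2samp

def drift_alert(training_values, production_values, p_threshold=0.01):
    # Compare the distribution of one feature (e.g., a hypothetical transaction
    # amount) between the training data and recent production data.
    statistic, p_value = ks_2samp(training_values, production_values)
    # A very small p-value suggests the distributions differ, i.e., the ENV
    # assumption baked into the training data may no longer hold and should
    # be revisited.
    return p_value < p_threshold
```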

Adversaries may trick our sensors and intentionally feed us malicious data that does not match our assumptions (ENV) about how inputs relate to the real world. In many cases, we likely just have too strong implicit assumptions that will become obvious once we write them down, such as assuming that movie ratings actually reflect the honest opinions of people who have actually watched the movie. Similarly, the outputs may not be interpreted as we assume they would be (again violating our assumptions), leading to different changes in the real world than we expected. Adversaries can definitely exploit weaknesses in the implementation, but I believe that a large number of attacks can be anticipated by thinking carefully about how the inputs and outputs of the ML system actually relate to phenomena in the real world.

Feedback loops occur when the outputs effect change in the environment that then again influences the sensed inputs. If the outputs of a credit fraud model are used to prevent many forms of fraud, criminals will adapt their behavior, leading to different kinds of inputs; if predictive policing predictions affect the deployment of police, this changes, in the environment, the observed crime reports that are used as input to the model. The key here is to understand the environment and think about how outputs and inputs relate within the real world. We need to question the assumptions (ENV) that we make about the relationships between inputs/outputs and phenomena in the real world.
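
The mechanism is easy to see even in a toy simulation (all numbers and the “model” below are invented purely for illustration): if patrols are allocated based on observed reports, and observations depend on patrols, a small initial difference between otherwise identical districts gets amplified.

```python
import random
random.seed(0)

true_crime_rate = [0.5, 0.5]   # identical ground truth in both districts
observed_counts = [6, 5]       # slightly uneven historical reports (pure noise)

for day in range(200):
    total = sum(observed_counts)
    patrol_share = [c / total for c in observed_counts]   # "model" output -> deployment
    for d in range(2):
        crime_happened = random.random() < true_crime_rate[d]
        # More patrols mean more of the existing crime actually gets observed/reported.
        observed = random.random() < patrol_share[d] * 1.5
        if crime_happened and observed:
            observed_counts[d] += 1

print(observed_counts)  # the initially "higher" district tends to pull further ahead
```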

As a final example, consider product recommendations in an online shop like Amazon, where the requirement might be to rank products highly that many real-world customers like. If we assume that the ratings and reviews on which the models are trained reflect a real and random sample of actual customer preferences, we might easily be influenced by maliciously tainted inputs. This would violate the requirement, because highly rated products may not actually correspond to products that many customers like in the real world. A reputation system, where we link reviews with accounts that actually bought the product and take into account the number and ratings of reviews, may help us make more nuanced assumptions about the reliability of reviews with regard to actual user preferences. In the end, we may decide that a requirement to rank products by average customer preference is unrealistic and that we need to weaken the requirement to more readily measurable properties — doing this would be more honest about what the system can achieve and force us to think about mitigations. Furthermore, we may realize that we should not assume that reviews from two years ago necessarily reflect customer preferences now and should correspondingly adjust our model (how we interpret ratings relative to their age) or our expectations (REQ). Finally, we might realize that we introduce a feedback loop, since we assume that recommendations affect purchase behavior, while recommendations are also based on purchase behavior. Hence, we may accept that we cannot neutrally identify what customers like and prepare for the presence of bias and feedback loops in our system. In all of these cases, we critically reflect on assumptions and on whether the stated requirements are actually achievable, often retreating to weaker and more realistic requirements.
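
As a sketch of what such a more nuanced assumption could look like in practice (a hypothetical weighting scheme, not any shop’s actual one), we might down-weight ratings that are old or come from accounts without a verified purchase instead of trusting every rating equally:

```python
from datetime import datetime, timezone

def weighted_score(reviews, half_life_days=365):
    # Each review: {"stars": int, "verified_purchase": bool, "created_at": aware datetime}
    now = datetime.now(timezone.utc)
    num, den = 0.0, 0.0
    for r in reviews:
        age_days = (now - r["created_at"]).days
        recency = 0.5 ** (age_days / half_life_days)    # older reviews count less
        trust = 1.0 if r["verified_purchase"] else 0.3  # unverified reviews count less
        weight = recency * trust
        num += weight * r["stars"]
        den += weight
    return num / den if den else None
```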

What now?

The world and the machine view encourages a decomposition of the problem that allows one to more critically reflect on the assumptions made and on whether the system is actually likely to meet its requirements in the real world. It does not provide a magic bullet for identifying wrong assumptions or unsuitable specifications, but it structures the discussion and invites a careful inspection of the problem. Separating the world from the machine and being explicit about assumptions is a key insight for developing better requirements, one that every engineer of production ML systems should be aware of.

A designer might carefully reflect on which assumptions really hold, e.g., do movie ratings reflect actual preferences, does the GPS signal provide accurate velocity information, will pilots react to the warning light, will the car brake as quickly as needed? In many cases it may be worth considering how a system would do with weaker assumptions (mostly reflects preferences, usually reflects velocity, usually react, brakes more slowly). It may turn out that requirements cannot be met — if that’s a problem, other designs or additional mitigations (often at the system level outside the model) should be chosen. Classic failure analysis techniques like fault tree analysis can be used to model the conditions leading to possible requirements violations (based on specification, implementation, or assumption mistakes) and to reflect how mitigations help to reduce the hazard.
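
For instance, a back-of-the-envelope fault tree for the speed-limit requirement might look as follows (all event probabilities are invented and independence is assumed, purely to illustrate the structure):

```python
from math import prod

# Basic events (invented probabilities)
p_gps_wrong   = 0.01    # ENV: GPS misreports the speed
p_map_missing = 0.10    # ENV: no speed-limit data available for the location
p_spec_wrong  = 0.001   # SPEC: wrong default top speed when the map is missing
p_impl_bug    = 0.001   # IMPL: bug in the braking logic

# Intermediate event: the sensing branch requires both a GPS error and a missing
# map cross-check (AND gate)
p_sensing_branch = p_gps_wrong * p_map_missing

# Top event: requirement violated if any branch occurs (OR gate, independence assumed)
branches = [p_sensing_branch, p_spec_wrong, p_impl_bug]
p_requirement_violated = 1 - prod(1 - p for p in branches)
print(f"P(car exceeds the speed limit) ≈ {p_requirement_violated:.4f}")
```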

To understand feedback loops, we need an understanding of the environment and of how it relates to the assumptions and specifications of the system. Being explicit about assumptions helps to guide the analysis, to hopefully detect and mitigate issues in the design phase rather than only after deployment in production. There are probably more sophisticated techniques for modeling the entire environment, but being explicit about assumptions is a great first step.

In the end, we might often realize that we cannot fully assure a specific requirement about behavior in the real world, even with mitigation strategies. We may get close though, accept a weaker requirement, or might be able to narrow down under which condition we have confidence that the requirement is met. Whenever we weaken requirements this way, we can then decide (e.g., after talking to stakeholders or using risk analysis) whether that is sufficient for operating a production system.
