The World and the Machine and Responsible Machine Learning

Separating World and Machine

No software is deployed in a vacuum: Every relevant system is deployed as part of the world, to understand something about the world, and to have an impact on the world. Indeed, the goals and requirements of the system are expressed as desired states in the world: For example, we might want to convince humans to buy something, help humans make medical diagnoses affecting the treatment of other humans, or help with college admission decisions, all affecting the real world. Software, with or without machine learning, is created to interpret parts of the world and to manipulate the world toward a desired state. From the perspective of the software, the world is also referred to as the environment.

Requirements, Assumptions, Specifications, and Implementation

Importantly, thinking clearly about the world and the machine, and about how the machine can interact with the world only as mediated through sensors and actuators, allows us to distinguish:

  • Requirements (REQ) describe the desired state of the world that the system should achieve, expressed entirely in terms of real-world concepts. For example, the car should never exceed the speed limit of the current road.
  • Assumptions (ENV) express the relationship of real-world concepts to software inputs and outputs. For example, we assume that the GPS correctly represents the car’s speed, that the manually entered target address correctly represents the user’s intention, and that the car actually honors the system’s brake commands and will slow down according to a pattern expected from the physics of the situation.
  • Specifications (SPEC) describe the expected behavior of the software system in terms of inputs and outputs (illustrated in the code sketch below). For example, we expect the system to never issue an acceleration command (output) if the speed (input from GPS) is larger than the speed limit (input from map) at the current location (input from GPS).
  • Implementation (IMPL) provides the actual behavior of the software system that is supposed to align with the specification (SPEC), usually given as code or an executable model. A mismatch between implementation and specification, which may be detected, for example, with testing, is considered a bug. For example, a buffer overflow in the implementation leads to acceleration commands (output) if the car is in a certain unusual location (input).

This decomposition also helps to distinguish the different ways in which a system can fail to meet its requirements (REQ) in the real world:

  • The assumptions (ENV) are incorrect. For example, the GPS sensor provides wrong information about the speed, or the car’s brakes do not act as quickly as expected.
  • The system’s specification (SPEC) is wrong. For example, the specification incorrectly sets a default top speed if no map is available.
  • Any of these parts can be internally inconsistent or inconsistent with each other. For example, the specification (SPEC) together with the assumptions (ENV) may not be sufficient to guarantee the requirements (REQ) if the specified braking logic in the software (SPEC) does not account for sufficient braking time (ENV) to avoid exceeding the speed limit (REQ). Even two requirements may be inconsistent with each other, a common problem already in non-ML systems, known as feature interactions.
  • The system is implemented (IMPL) incorrectly, differing from the specified behavior (SPEC). For example, a buffer overflow bug in the implementation causes the software to issue wrong acceleration commands that violate the specification.
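
To make the distinction concrete, here is a minimal sketch of the speed-limit example (in Python, with hypothetical names and units not taken from the text above): the specification (SPEC) can be stated and checked purely over the machine’s inputs and outputs, while whether the real-world requirement (REQ) is met additionally depends on the environment assumptions (ENV).

```python
# A minimal sketch (hypothetical names, assumed units) of the speed-limit example.
from dataclasses import dataclass


@dataclass
class Inputs:
    gps_speed_kmh: float        # sensed speed; assumed (ENV) to match the real speed
    map_speed_limit_kmh: float  # limit at the sensed location; assumed (ENV) to be correct


def acceleration_command(inputs: Inputs) -> float:
    """Return a throttle command in [0, 1]; 0.0 means no acceleration.

    SPEC: never issue an acceleration command (output > 0) if the sensed
    speed (input) is larger than the speed limit (input).
    """
    if inputs.gps_speed_kmh > inputs.map_speed_limit_kmh:
        return 0.0
    return 0.3  # some positive acceleration; the exact value is irrelevant to the SPEC


# The implementation (IMPL) can be checked against the SPEC purely in terms of
# inputs and outputs, without any reference to the real world:
assert acceleration_command(Inputs(gps_speed_kmh=120.0, map_speed_limit_kmh=100.0)) == 0.0

# Whether the real-world requirement (REQ, "the car never exceeds the speed limit")
# is actually met additionally depends on the ENV assumptions: GPS accuracy, map
# correctness, and the car honoring brake and throttle commands as expected.
```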

Lufthansa 2904 Runway Crash

A classic example of how incorrect assumptions can lead to catastrophe is Lufthansa Flight 2904, which crashed in Warsaw after overrunning the runway because the thrust reversers and brakes could not be engaged in time after landing. The software was specified to permit braking only when its sensors indicated that the plane was on the ground (sufficient weight on both main landing gear struts and spinning wheels); due to a crosswind landing and aquaplaning, these conditions were not met for several seconds after touchdown. The software behaved exactly as specified, but the assumption that these sensor readings reliably indicate whether the plane has landed did not hold in the real world.

Image: Wreckage of Flight 2904 on 15 September 1993.
Image: Illustration of the time elapsed between touchdown of the first main strut, the second, and engagement of brakes (CC BY-SA 3.0 Anynobody).

Questioning Assumptions in Machine Learning

Like all other software systems, systems with machine-learned components interact with the real world. Training data is input data derived through some form of sensor input representing the real world (user input, logs of user actions, camera pictures, GPS locations, …), and predictions are outputs that are used in the real world for manual or automated decisions. Specifications (SPEC) for machine-learned models are a problem in itself that opens its own can of worms (discussed elsewhere), but let’s not worry about this here and simply assume that the model works as intended based on the input data it receives.
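
One way to take such assumptions seriously in a system with a machine-learned component is to make them explicit and checkable at the boundary between world and machine. The sketch below (hypothetical names and thresholds, not from the text above) guards the use of a model’s prediction with a plausibility check on the sensed input and falls back to a conservative action when the assumption is violated.

```python
# A minimal sketch (hypothetical names and thresholds) of making an ENV
# assumption explicit and checkable at the boundary between world and machine.

MAX_PLAUSIBLE_SPEED_KMH = 400.0  # assumed physical bound for a road vehicle


def plausible_speed(gps_speed_kmh: float) -> bool:
    """ENV assumption made checkable: a GPS speed reading that represents a
    real vehicle speed should be non-negative and physically bounded."""
    return 0.0 <= gps_speed_kmh <= MAX_PLAUSIBLE_SPEED_KMH


def decide(gps_speed_kmh: float, model) -> float:
    """Use the model's prediction only when the input assumption holds;
    otherwise fall back to a conservative action (here: no acceleration)."""
    if not plausible_speed(gps_speed_kmh):
        return 0.0  # conservative fallback when the ENV assumption is violated
    return model.predict(gps_speed_kmh)  # 'model' is any object with a predict method
```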

What now?

The world and the machine view encourages a decomposition of the problem that allows one to reflect more critically on the assumptions made and on whether the system is actually likely to meet its requirements in the real world. It does not provide a magic bullet for identifying wrong assumptions or unsuitable specifications, but it structures the discussion and invites a careful inspection of the problem. Separating the world from the machine and being explicit about assumptions is a key insight for developing better requirements, one that every engineer of production ML systems should be aware of.
