
Glossary and Taxonomy

Contributors:

  • Marc P. Hauer (TU Kaiserslautern)
  • Patrick van der Smagt

Accountability

Accountability is a frequently used term in standardisation and legislation; however, it is usually not thoroughly defined, even though many possible definitions exist 2.

Recent scientific work on AI usually refers to the explanations of accountability provided by Maranke Wieringa, who maps the definition provided by Bovens specifically onto the field of AI 3. Bovens defines accountability as “a relationship between an actor and a forum, in which the actor has an obligation to explain and to justify his or her conduct, the forum can pose questions and pass judgement, and the actor may face consequences” 4.

As many standardisation and legislation texts that address AI and refer to the term accountability were published before Wieringa’s work, it is unclear which definition their authors had in mind. To close this semantic gap, we rely on Bovens’ definition and Wieringa’s explanations when using the term accountability.

According to Hauer et al., transparency and examinability enable the forum to check whether, e.g., the intended and claimed properties of the system are actually implemented and whether the system is used as described 5.

Artificial Intelligence (AI)

How precisely “AI” is defined depends on the goal that the definition is meant to serve. This also means that, once a definition has been established, it should not be used outside its context, in other domains.

The definition of “AI” discussed here is placed within a regulatory framework. For such a definition it is, of course, important to align with the corresponding environment. We consequently follow and comment on two definitions.

Let us first look at this in the light of the definition provided by the EC in the proposal for the AI Act, Annex I:

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

This legislative framework takes a risk-based approach, meaning that it attempts to regulate methodologies according to their risk.

Following that, the risk that is exceptional to “AI” systems relates to the fact that such solutions are based on two parts: first, the code, and second, the data used to determine the input–output behaviour of the code. Similar risks can occur in traditional software systems, but since their input–output behaviour is not strongly influenced by data, their behaviour is more straightforward to explain and predict: the lines of code alone determine that behaviour.

This transparency disappears when the data has a considerable impact on the input–output behaviour of the algorithm. This is, for instance, the case for neural networks and other (nonlinear) machine-learning-based technologies: the input–output behaviour of such systems is to a large part determined by the data and the training method that is used for that particular algorithm.
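To make the contrast concrete, here is a minimal, hypothetical Python sketch (our illustration, not part of any of the definitions discussed; all names and numbers are made up): a hand-written rule whose input–output behaviour can be read directly from its code, next to a learned model whose input–output behaviour also depends on the data it was trained on.

```python
# Illustrative sketch only: the same task solved by a hand-coded rule and by a
# data-driven model. All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

def rule_based_limit(income: float) -> float:
    # Behaviour is fully determined by these lines of code.
    return min(10_000.0, 0.3 * income)

# The learned model's input-output behaviour changes with the training data.
rng = np.random.default_rng(0)
incomes = rng.uniform(20_000, 80_000, size=200).reshape(-1, 1)
limits = 0.3 * incomes.ravel() + rng.normal(0, 2_000, size=200)
model = LinearRegression().fit(incomes, limits)

print(rule_based_limit(50_000.0))      # always 10000.0, predictable from the code
print(model.predict([[50_000.0]])[0])  # depends on the sampled training data
```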

Let’s look at another definition. ISO/IEC 22989 tackles it as follows:

AI is a set of methods or automated entities that together build, optimize and apply a model [3.1.26 = physical, mathematical, or otherwise logical representation of a system, entity, phenomenon, process or data] so that the system can, for a given set of predefined tasks [3.1.37 = actions required to achieve a specific goal] compute predictions [3.2.12 = output of a machine learning model when provided with input data], recommendations, or decisions.

One advantage of this definition is that it is broad, which gives it long-term validity. On the downside, it is so broad that it includes any piece of software which controls (“decision”) or influences (“predictions/recommendations”) a process.

The key to this definition lies in “model”. The referenced definition of model clearly includes traditional software approaches, including those that are not data-driven; it does not exclude them. What also seems to be missing here is the notion of a machine-learning model, or indeed of any model that is created on the basis of, or relies heavily on, data.

Possibly problematic in the definition is “predictions”: it is the only part that depends on a “machine-learning model”.

Including traditional, established, non-AI software methodologies in a new definition of AI would imply that new regulations are going to be imposed on those methodologies as well. This may be problematic, as the existing approaches already have regulatory bodies in place. A new legislative system for “AI” should therefore not target existing practice.

We argue that “data-driven approaches” pose a considerable risk, but it is hard to define what a data-driven approach is. The vagueness of our description in the first part of this text is intentional: “considerable impact on the input–output behaviour of the algorithm”… “to a large part determined by the data”… We are, of course, discussing a continuum of methodologies, with a standard control method at one extreme and a neural network at the other.

The logic- and knowledge-based approaches mentioned in the AI Act are those where data-dependence is not automatic; instead, such systems rely on hand-crafted models. Excluding these from the AI Act definition is therefore consistent with the above assumptions.

Finally, part (c), the statistical approaches. Many of the listed machine learning approaches use statistical methods such as maximum likelihood estimation, Monte Carlo sampling, and Bayesian inference and estimation. However, it is customary in traditional statistics to separate these from machine learning, and for those historical reasons we follow that path too. Since Bayesian estimation is itself a statistical approach, listing it separately is redundant; we therefore recommend its removal.

Within the light of the above, and within the purpose of etami, we will understand “AI” as follows:

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including neural networks;

(b) Other data-driven approaches, including statistical approaches, search, and optimisation methods.

AI incident

A problematic or unpleasant event which involves AI technology.

Audit

The term audit is understood differently depending on the context. In the context of standardisation, auditing describes various procedures that serve as the basis for accredited certification. Different aspects of an AI product can be audited/certified:

  • the manufacturing process of the system (ISO 17021)
  • the product (ISO 17065)
  • persons performing an audit (ISO 17024)
  • testing and regulation laboratories (ISO 17025)

In the context of the inspection of algorithmic systems, auditing describes specific forms of access to information that allow inspection, such as the non-invasive user audit, scraping audit, sock-puppet audit, crowd-sourced audit and code audit 67.
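As a rough illustration of one of these access types, the following Python sketch shows what a sock-puppet audit could look like: the auditor queries the system under test with synthetic user profiles and compares the outputs. The endpoint, payload fields and profiles are placeholders we made up for illustration, not a real API.

```python
# Hypothetical sock-puppet audit sketch: query a system under test with
# synthetic profiles that differ only in a protected attribute, then compare
# the outputs. Endpoint and fields are placeholders, not a real service.
import requests

PROFILES = [
    {"age": 25, "gender": "f", "query": "job ads"},
    {"age": 25, "gender": "m", "query": "job ads"},
]

def sock_puppet_audit(endpoint: str) -> list[dict]:
    results = []
    for profile in PROFILES:
        response = requests.post(endpoint, json=profile, timeout=10)
        results.append({"profile": profile, "output": response.json()})
    return results

# Example (placeholder URL): results = sock_puppet_audit("https://example.org/api/ads")
# Systematic differences between otherwise identical profiles hint at bias.
```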

Behaviour

The term behaviour describes how an entity acts or reacts. Traditionally, this entity is a human or an animal, a living being that behaves somehow out of its own motivation. Nowadays, the term is also used to refer to the actions or reactions of a machine. The machine is thereby anthropomorphised, which is why speaking of the behaviour of a machine is not undisputed. Nevertheless, it simplifies communication about the actions and reactions of machines (including AI systems), which is why we also use the term behaviour.

Bias

Bias, in a general and neutral formulation, indicates a deviation from a standard 8. When training a machine-learning model, a bias with respect to certain attributes is exactly what is to be achieved. However, an ML model is also susceptible to learning bias with respect to attributes that have no causal influence on the desired output. Such a bias may be learned and, once the model is deployed, lead to discrimination. It must be noted that which attributes a bias must concern in order to be considered discrimination depends strongly on the context of application, as many laws are context-specific.

Black Box

If a system and its decision structure are opaque, the system is considered a black box. Additionally, some scientific publications define black-box systems as systems that are transparent in principle but too complicated for a human to comprehend 9.

Complexity

In computer science, complexity has two meanings. The first understands complexity as a runtime class: the complexity of an abstract problem can range from simple (polynomial-time algorithms, i.e., the time to compute a solution grows polynomially with the input size) to hard (e.g., NP-hard problems, for which all known exact algorithms have super-polynomial cost).
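As a small illustration (our own example, not from the cited literature), the sketch below contrasts a polynomial-time task with one for which the straightforward exact algorithm has exponential cost:

```python
# Illustrative sketch: a linear-time (polynomial) task versus a brute-force
# subset-sum check whose cost grows exponentially with the number of items.
from itertools import combinations

def contains(items: list[int], target: int) -> bool:
    # Linear scan: runtime grows linearly with len(items).
    return any(x == target for x in items)

def subset_sum(items: list[int], target: int) -> bool:
    # Enumerates all 2**len(items) subsets: exponential runtime.
    return any(sum(c) == target
               for r in range(len(items) + 1)
               for c in combinations(items, r))

print(contains([3, 1, 4, 1, 5], 4))         # True
print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
```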

The second meaning relates to the psychological meaning of the term 10. Accordingly, a complex system contains multiple elements that are possibly diverse and possibly adapt their behaviour to the situation. These elements are connected by many rather than few interactions. Complex systems show emergent behaviour that is not just the linear aggregation of individual properties or behaviours.

Data

Information on which operations can be performed by an algorithm.

Datasheet

A datasheet is an extensive list and structure for information about collected data. The term was coined by Gebru et al., who provide thorough suggestions as to what information should be part of a datasheet 11.

Deployment

Deployment is the process of providing or distributing software. Nowadays, deployment is often an automated process that also includes installation and configuration.

Discrimination

See Bias.

Examinability

Information is examinable if it can be directly extracted by the forum, e.g., by using an API to inspect the database or to interact with the AI system under test.

Explainability

The German Federal Ministry for Economic Affairs and Climate Action (BMWK) published a study on explainable AI at https://www.digitale-technologien.de/DT/Redaktion/DE/Downloads/Publikation/KI-Inno/2022/Studie_Erklaerbare_KI_Englisch.html which we currently consider to be the state of the art.

Explanation

An interface between humans and an AI decision-maker that is both comprehensible to humans and an accurate proxy of the AI. Philosophers have debated the concept for thousands of years. What can actually be considered an explanation in a legal sense is still under debate, but at least part of the necessary groundwork already exists 12. It is important to note that the general idea of an explanation is context dependent: to improve an AI model, a software engineer expects a different kind of explanation than a lawyer does to understand how a model made a decision.

Explainable Machine Learning

The field of explainable ML comprises a wide variety of techniques for providing explanations of ML behaviour. They focus on different levels of abstraction and can be divided into approaches that explain a model (also called global explanations, e.g., activation values) and approaches that explain a single decision (also called local explanations, e.g., LIME).
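As a hedged illustration of the local flavour (a simplified, perturbation-based sensitivity sketch of our own, not the actual LIME algorithm), the following snippet estimates how much each feature of a single instance influences the model’s prediction:

```python
# Minimal local-explanation sketch: perturb one feature of a single instance
# and observe how the predicted probability changes. Data and model are toy
# examples chosen for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # feature 0 matters most

model = RandomForestClassifier(random_state=0).fit(X, y)
x = X[0]                                        # the instance to explain

baseline = model.predict_proba([x])[0, 1]
for i in range(len(x)):
    perturbed = x.copy()
    perturbed[i] += 0.5
    delta = model.predict_proba([perturbed])[0, 1] - baseline
    print(f"feature {i}: local effect on P(y=1): {delta:+.3f}")
```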

Factsheet

Factsheets are multi-dimensional supplier’s declarations of conformity that contain purpose, performance, safety, security, and provenance information, to be completed by AI service providers and examined by consumers. Their disclosure makes information about a system transparent at a functional level and can thus increase trust in the system 13. Arnold et al. provide a comprehensive collection of information that could be part of a factsheet.

Fairness

In computer science, the term fairness refers to any form of operationalization that measures the opposite of discrimination. There are more than 20 fairness measures, representing different philosophies of fairness, which therefore contradict each other to a great extent (a minimal sketch of one such measure is given below). Fairness measures can be grouped by various attributes:

  • Group fairness vs. individual fairness
  • Fairness based on a ground truth (oracle) vs. fairness not based on a ground truth
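As a minimal sketch of one such measure (our illustration; the data are made up), statistical parity difference compares the rates of favourable decisions between two groups:

```python
# Statistical parity difference: gap in favourable-outcome rates between a
# protected group (group == 1) and the reference group (group == 0).
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    rate_g1 = y_pred[group == 1].mean()
    rate_g0 = y_pred[group == 0].mean()
    return float(rate_g1 - rate_g0)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # model decisions (1 = favourable)
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # protected attribute
print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

Other measures, e.g., those conditioning on a ground truth, can yield a different verdict on the very same decisions, which is one way in which the measures contradict each other.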

Global Explanation

Attempts to understand the high-level concepts and reasoning behind a model, in contrast to local explanations, which attempt to explain a single behaviour.

Governance

TODO

Interpretability

Is the ability to explain or to provide meaning in terms understandable to a human. Interpretability is normally tied to the evaluation of model complexity.

Interpretable Model

An interpretable model is a model used for predictions that can itself be directly inspected and interpreted by human experts.

Local Explanation

Aims to explain the model’s behaviour for a specific input, in contrast to global explanations, which attempt to explain the reasoning behind a model.

Model Card

Model cards are a framework provided by Mitchell et al. for reporting information about a machine learning model, including, among other things, model details, training data and ethical considerations 14.
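To give a feel for the kind of structured information a model card holds, here is a hypothetical sketch (the field names are our own paraphrase of common categories in Mitchell et al., not a fixed schema):

```python
# Hypothetical model-card structure; fields paraphrase common categories such
# as model details, training data and ethical considerations.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_details: dict            # developer, version, model type, licence
    intended_use: str              # primary use cases and out-of-scope uses
    training_data: str             # datasets, preprocessing, known limitations
    evaluation: dict               # metrics, ideally disaggregated by group
    ethical_considerations: str
    caveats: list[str] = field(default_factory=list)

card = ModelCard(
    model_details={"developer": "example org", "version": "1.0"},
    intended_use="Illustration only.",
    training_data="Synthetic data generated for this example.",
    evaluation={"accuracy": 0.9},
    ethical_considerations="None; toy example.",
)
print(card.intended_use)
```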

Practice

Refers to the real-world context in which the model has been deployed.

Policy prototyping

Policy prototyping (PP) refers to an innovative way of policy making, similar to product or beta testing. In PP, stakeholders come together in an environment where the policy is tested in the field before it is finalised and declared.

Risk

In general, a risk states whether there is a chance that an incident might happen. The term risk is understood differently in different contexts, which results in various definitions and concepts for measuring risk. In standardization, risk is often defined as the damage that might result from an incident multiplied by the chance that the incident occurs. However, this definition requires (a) knowledge of all possible incidents, (b) a way to properly estimate the likelihood and the resulting damage, and (c) a method to operationalize the two parameters so that their multiplication is meaningful. In the context of AI applications these conditions can barely be met. Another approach is to evaluate possible risks based on a matrix, either with the same parameters 15 or with other parameters, which allows more room for human judgement on individual cases 16. The AI Act lists hard conditions under which a system is considered high-risk, low-risk or no-risk, though these are still open to discussion (see AI Act Art. 6 §1 in conjunction with Annex II sections A and B, and Art. 6 §2 in conjunction with Annex III).
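The two notions of risk mentioned above can be made concrete with a small sketch (our illustration; the numbers and categories are made up):

```python
# Risk as expected damage (likelihood multiplied by damage) versus risk as a
# position in a qualitative matrix that leaves room for human judgement.
def expected_risk(likelihood: float, damage: float) -> float:
    # Only meaningful if both parameters can actually be quantified.
    return likelihood * damage

RISK_MATRIX = {  # keys: (likelihood, damage severity)
    ("low", "minor"): "low",     ("low", "major"): "medium",
    ("high", "minor"): "medium", ("high", "major"): "high",
}

print(expected_risk(0.01, 1_000_000))   # 10000.0 expected damage per period
print(RISK_MATRIX[("high", "minor")])   # 'medium'
```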

Sector

There is much discussion about the pros and cons of sector-dependent regulation of ML applications. This discussion originates from the different levels of risk in specific sectors, as well as from the different legal requirements that already exist. Some of the debated sectors are public service delivery in welfare and other public services, migration and border control, military/warfare, education, healthcare, policing and law enforcement, media and social networks, the job market and work performance evaluation, credit scoring and the financial market, mobility and traffic, industrial automation, data protection and security, facial recognition in the private sector, product safety, and environmental sustainability and energy.

Stakeholders

Are the people who want a model to be “explainable”, who will consume the model explanation, or who are affected by decisions made on the basis of model output.

Traceability

The terms examinability, intelligibility, interpretability and traceability are often used interchangeably in the literature but do not necessarily mean the same thing. To prevent confusion, we provide definitions for examinability and interpretability, as these terms seem to be used more consistently. Intelligibility and traceability, however, can mean either of the two.

Transparency

Information is transparent if it is disclosed and accessible to a targeted forum. Any information that can be documented in the life cycle of an AI product can be made transparent, from the requirements documents to the quality assessment results.

A model is considered transparent if a human can understand its functioning without any additional explanation of its internal structure or of the algorithmic means by which the model processes information.

Provided that the disclosed information is complete, accurate and comprehensible, many properties of the AI product can be assessed, depending on the specific information made transparent. In practice, however, these assumptions do not necessarily hold: carefully selected information might be left out or rigged to manipulate the assessment. Therefore, it is important for any kind of audit that the information made transparent is also examinable.

Transparent Method

Additionally, here are some different understandings of the term transparency:

  • Transparency in communication about technical possibilities: privacy-related topics or ‘skills’ of technical artefacts, for example, must not disappear in the fine print of the terms of use.

  • Transparency in studies: a Wizard-of-Oz scenario – or at least its possibility – should be clearly explained to all participants.

  • Transparency in the design of artifacts as mentioned by Verbeek.

  • Transparency about decisions in the design process, by using the available scope (Spielraum) in design.

Trust

Interpersonal / organizational trust can be defined as follows, where B can be an AI system: if A believes that B will act in A’s best interest, and accepts vulnerability to B’s actions, then A trusts B 17.

Institutional trust has been proposed as more important for establishing public trust in AI than interpersonal trust, due to the difficulty the public has in determining the trustworthiness of an AI 18.

Trustworthiness

Trustworthiness refers to the characteristics and behaviours of an AI system and its developer that are the determining factors in trust decisions, e.g., ability, benevolence and integrity 17.

Trustworthy AI

According to the Ethics Guidelines for Trustworthy AI provided by the High-Level Expert Group on Artificial Intelligence 1, Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:

  • It should be lawful, complying with all applicable laws and regulations;

  • it should be ethical, ensuring adherence to ethical principles and values; and

  • it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.

Nowadays, trustworthy AI represents a research focus in its own right, including aspects such as transparency and explainability.

Last update: 2022.09.04, v0.1


  1. https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html 

  2. Severin Kacianka, Kristian Beckers, Florian Kelbert, and Prachi Kumari. How accountability is implemented and understood in research tools - A systematic mapping study. In Michael Felderer, Daniel Méndez Fernández, Burak Turhan, Marcos Kalinowski, Federica Sarro, and Dietmar Winkler, editors, Product-Focused Software Process Improvement - 18th International Conference, PROFES 2017, Innsbruck, Austria, November 29 - December 1, 2017, Proceedings, volume 10611 of Lecture Notes in Computer Science, 199–218. Springer, 2017. doi:10.1007/978-3-319-69926-4_15

  3. Maranke Wieringa. What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability. In Proceedings of the 2020 conference on fairness, accountability, and transparency, 1–18. January 2020. 

  4. Mark Bovens. Analysing and assessing accountability: a conceptual framework 1. European law journal, 13(4):447–468, 2007. 

  5. M. P. Hauer, T. D. Krafft, and K. Zweig. Overview of transparency and examinability mechanisms to achieve accountability of ai systems. Under review, 2022. 

  6. Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. An algorithm audit. Data and discrimination: Collected essays, pages 6–10, 2014. 

  7. Jack Bandy. Problematic machine behavior: a systematic literature review of algorithm audits. Proceedings of the ACM on human-computer interaction, 5(CSCW1):1–34, 2021. 

  8. David Danks and Alex John London. Algorithmic bias in autonomous systems. In IJCAI, volume 17, 4691–4697. 2017. 

  9. Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019. 

  10. Dietrich Dörner. The logic of failure: Recognizing and avoiding error in complex situations. Perseus Press, 1997. ISBN 0201479486. 

  11. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86–92, 2021. 

  12. Tim Miller. Explanation in artificial intelligence: insights from the social sciences. Artificial intelligence, 267:1–38, 2019. 

  13. M. Arnold, R. K. E. Bellamy, M. Hind, S. Houde, S. Mehta, A. Mojsilović, R. Nair, K. Natesan Ramamurthy, A. Olteanu, D. Piorkowski, D. Reimer, J. Richards, J. Tsay, and K. R. Varshney. Factsheets: increasing trust in ai services through supplier’s declarations of conformity. IBM Journal of Research and Development, 63(4/5):6:1–6:13, 2019. doi:10.1147/JRD.2019.2942288

  14. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency, 220–229. 2019. 

  15. D. Dawson, Emma Schleiger, Joanna Horton, John McLaughlin, Cathy Robinson, George Quezada, J. Scowcroft, and Stefan Hajkowicz. Artificial intelligence: australia’s ethics framework. Data61 CSIRO, Australia, 2019. 

  16. S Hallensleben, Carla Hustedt, Tobias Krafft, Marc Hauer, Lajla Fetic, Andreas Kaminski, Michael Puntschuh, Philipp Otto, Christoph Hubig, Torsten Fleischer, Paul Grünke, and Rafaela Hillerbrand. From principles to practice – an interdisciplinary framework to operationalize ai ethics. URL: https://www.bertelsmann-stiftung.de/fileadmin/files/BSt/Publikationen/GrauePublikationen/WKIO_2020_final.pdf (visited on 2022-03-14). 

  17. Roger C. Mayer, James H. Davis, and F. David Schoorman. An integrative model of organizational trust. The Academy of Management Review, 20(3):709–734, 1995. URL: http://www.jstor.org/stable/258792 (visited on 2022-09-12), doi:10.2307/258792

  18. Bran Knowles and John T. Richards. The sanction of authority: promoting public trust in ai. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 262–271. 2021.