
The etami principles

etami has set out to develop practical solutions for ethical and trustworthy AI. Yet no set of tools and processes can exhaustively cover all use cases, especially in a rapidly evolving domain such as Artificial Intelligence. It is therefore essential to also define generic guidelines driving the work downstream; guidelines that enable better adaptation when necessary and greater creativity in finding suitable solutions.

etami postulates four guidelines, stated simply as:

  1. be savant
  2. be critical
  3. be transparent
  4. design around humans

1. be savant

Gaining as much knowledge as possible throughout the lifecycle of an AI system is essential. When designing the system, it is necessary to gain knowledge about the problem being tackled and the context in which the system will be deployed. This entails evaluating the societal context of deployment, the metrics used for evaluation, the assumptions made, and the potential proxy objectives being optimised.

  • know your goals,
  • know your context,
  • know your data (see the sketch below).
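As a minimal illustration of "know your data", the sketch below profiles a dataset before any modelling starts. It assumes pandas is available; the column names and the toy records are purely hypothetical.

```python
# A quick "know your data" pass: size, missingness, class balance, numeric ranges.
# Column names ("age", "income", "approved") are purely illustrative.
import pandas as pd

def profile_dataset(df: pd.DataFrame, target: str) -> None:
    """Print a few facts worth knowing before any modelling starts."""
    print("Rows:", len(df))
    print("Missing values per column:")
    print(df.isna().sum())
    print(f"Class balance for '{target}':")
    print(df[target].value_counts(normalize=True))
    print("Numeric ranges:")
    print(df.describe().loc[["min", "max"]])

if __name__ == "__main__":
    df = pd.DataFrame({
        "age": [25, 40, 31, 58, 47],
        "income": [32_000, 54_000, None, 76_000, 61_000],
        "approved": [0, 1, 0, 1, 1],
    })
    profile_dataset(df, target="approved")
```

Even a check this small surfaces class imbalance and missing values that would otherwise be discovered only after deployment.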

2. be critical

As the impact of critical AI systems is not always easy to assess, one has to adopt an adversarial mindset. Doing so collectively is even better. This implies questioning every aspect of the system, from its raison d’être to the underlying user experience it provides. Methods and toolboxes help, of course, and they are legion; however, one of the most powerful implementations of this principle is organisational: appointing internal auditing roles, guaranteeing their independence, and integrating these practices into the AI system lifecycle itself are all elements to consider for high-risk contexts. Practices such as external auditing and red-teaming are laudable as well.

  • audit early, audit often (see the sketch after this list);
  • integrate external feedback.
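As one possible implementation of "audit early, audit often", the following sketch turns a single fairness question (do positive decisions differ strongly across a protected group?) into an automated check that can run in continuous integration. The metric, the group labels, and the 0.25 threshold are illustrative assumptions, not a recommendation.

```python
# A minimal automated audit check, runnable in CI as part of "audit early, audit often".
# The threshold and group labels are illustrative; real audits need domain-specific criteria.

def selection_rate(predictions, group, value):
    """Share of positive predictions among members of one group."""
    members = [p for p, g in zip(predictions, group) if g == value]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_gap(predictions, group):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [selection_rate(predictions, group, v) for v in set(group)]
    return max(rates) - min(rates)

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 1, 0, 1, 0]                    # model decisions (1 = approve)
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # protected attribute
    gap = demographic_parity_gap(preds, groups)
    print(f"Demographic parity gap: {gap:.2f}")
    assert gap <= 0.25, "Audit check failed: disparity exceeds the agreed threshold"
```

Encoding such checks as tests keeps part of the audit inside the AI system lifecycle itself, rather than leaving it as a one-off exercise.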

3. be transparent

Transparency is key for accountability and, ultimately, a path towards trustworthiness. It materialises first and foremost in organisational aspects before it translates into technicalities. Transparent organisations with a smooth flow of information, systematic documentation practices, and a clear distribution of responsibilities pave the way to successful AI projects. Leveraging transparent models (models that are inspectable by design, without the need for explainability methods) is also crucial for high-stakes applications.

  • document systematically;
  • privilege transparent models (sketched after this list);
  • clarify and validate your metrics.
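To make "inspectable by design" concrete, here is a minimal sketch of one kind of transparent model: a shallow decision tree whose entire decision logic can be printed and reviewed. It assumes scikit-learn is available; the data and feature names are invented for illustration.

```python
# A minimal sketch of an inspectable-by-design model: a shallow decision tree
# whose full decision logic can be printed and reviewed by a human.
from sklearn.tree import DecisionTreeClassifier, export_text

features = [[22, 1500], [35, 4200], [29, 2100], [51, 6900], [43, 5300], [27, 1800]]
labels = [0, 1, 0, 1, 1, 0]          # e.g. 1 = offer accepted (illustrative)
feature_names = ["age", "monthly_income"]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(features, labels)

# The printed rules are the model: no post-hoc explainability method is needed.
print(export_text(model, feature_names=feature_names))
```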

4. design around humans

When deploying, surprises often happen: models do not behave as expected, or users do not interact as imagined, to name only the visible effects. Other effects, more latent but no less impactful, also occur. AI systems can increase discrimination, widen inequalities, and hurt populations. Given these risks, it is imperative to constantly keep in mind who the users and subjects of the system are, how they might influence it, and how the system can impact them in turn.

  • craft thoughtful UX designs;
  • conduct multi-phase trials;
  • monitor what you can (see the sketch below).
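As a minimal sketch of "monitor what you can", the snippet below tracks the live positive-prediction rate over a sliding window and flags deviations from a reference value. The window size, reference rate, and alert threshold are hypothetical placeholders.

```python
# A minimal post-deployment monitoring sketch: compare the live positive-prediction
# rate against a reference window and flag large shifts. The constants below are
# illustrative placeholders, not recommended values.
from collections import deque

REFERENCE_RATE = 0.30   # positive rate observed during validation (illustrative)
ALERT_THRESHOLD = 0.10  # acceptable absolute deviation (illustrative)

recent = deque(maxlen=500)  # sliding window of the latest decisions

def record_decision(decision: int) -> None:
    """Store a decision (1 = positive) and alert if the live rate has drifted."""
    recent.append(decision)
    if len(recent) == recent.maxlen:
        live_rate = sum(recent) / len(recent)
        if abs(live_rate - REFERENCE_RATE) > ALERT_THRESHOLD:
            # In a real system this would notify a human reviewer, not just print.
            print(f"Drift alert: live positive rate {live_rate:.2f} "
                  f"vs reference {REFERENCE_RATE:.2f}")

if __name__ == "__main__":
    for d in [1, 1, 0, 1, 0] * 100:   # simulated stream of decisions
        record_decision(d)
```

In practice such an alert would reach a human reviewer and feed back into the trial phases mentioned above.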

Last update: 2022.11.17, v0.2