Governance

A continuously updated framework

At SMR-group we work at the forefront of many different areas of AI, because we believe that machines which understand data and people can help solve society's most challenging problems. While we create these empowering technologies, we are acutely aware that, with human beings at the center of our scope, we carry an important responsibility: our advanced algorithms should never be used in ways in which they no longer solve society's problems but add to them.

That is why all development at SMR-group is guided by a set of core principles. These principles help ensure that our technology never harms or negatively influences the life of any person exposed to it; that people receive equal treatment regardless of pre-existing societal biases or other discriminating factors; that their privacy is always safeguarded; and that any data we keep or own is handled with the utmost care and adequate precautions. A continuously updated governance framework ensures that we live up to these commitments and comply with all relevant (inter)national guidelines and regulations.

Sales Policy

Risk Categories and the 4 P's

A key element in our governance framework is a strict policy used to assess all sales requests.
For every request we infer three key pieces of information that, together with the relevant Product, form the so-called 4 P's:

  • The Product
  • The Party that is requesting the product
  • The Place where the party is located
  • The Purpose for which the party wants to use the product

An example would be a university (Party) in Norway (Place) that wants to use FaceReader Online (Product) in a study of behavioral psychology (Purpose). For every request we evaluate the 4 P's against legal frameworks such as the GDPR and the upcoming European AI Act, (inter)national sanctions and embargoes, and relevant human rights and export regulations. Based on this evaluation, each P is assigned a risk level (minimal, limited, high, or restricted); together these levels determine the course of further action.
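The assessment above can be read as a simple data model: each request carries the 4 P's, each P receives one of the four risk labels, and the overall category is the most severe label among them. The sketch below illustrates that idea only; all names and the example labels are assumptions for illustration, not SMR-group's actual implementation.

```python
from dataclasses import dataclass
from enum import IntEnum


class Risk(IntEnum):
    """Risk labels in increasing order of severity."""
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    RESTRICTED = 3


@dataclass
class SalesRequest:
    """The 4 P's of a sales request."""
    product: str
    party: str
    place: str
    purpose: str


def overall_risk(labels: dict) -> Risk:
    """The most severe label among the four P's determines the category."""
    return max(labels.values())


# Illustrative labels for the example request above.
request = SalesRequest(
    product="FaceReader Online",
    party="University",
    place="Norway",
    purpose="Behavioral psychology study",
)
labels = {
    "product": Risk.LIMITED,
    "party": Risk.MINIMAL,   # a university, not a listed party
    "place": Risk.MINIMAL,   # Norway is not a sanctioned place
    "purpose": Risk.LIMITED, # academic research purpose
}
print(overall_risk(labels).name)  # LIMITED
```

Using an ordered enum makes "together determine the course of action" a one-line `max`: a single restricted P dominates the whole request.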


Restricted Category

This category is never allowed

If any of the P's is restricted, the request is declined.
This holds for parties and places that appear on (inter)national consolidation lists or fall under relevant sanctions or embargoes. Due to the inherent risks, we also assign a restricted label to applications that relate to:

  • Active defense or the design of military technology
  • Surveillance in public spaces
  • Social scoring
  • Lie/Deception detection
  • The restricted areas of application listed in the European AI Act

Additionally, we do not allow our technology to be used in any scenario where a decision that could negatively affect a person's life is based directly on the output of our technology without human oversight and responsibility.


High-Risk Category

This category can be allowed after internal review

When there is a plausible risk of human rights violations, or the assigned risk labels indicate high risk, the sales request is reviewed internally by our ethical and legal teams and requires CEO approval. To err on the side of caution, we consider all countries on the EU sanctions list to be high-risk places, as well as parties such as:

  • Military
  • Law Enforcement
  • Border Control
  • Financial Institutions
  • Healthcare
  • Court of Justice

The same applies to the purposes listed in the European AI Act.


Limited Risk Category

This category is always allowed but sometimes requires an end use statement

In all scenarios that are neither restricted nor high-risk, our technology is sold directly. Where relevant, we still require assurance from our clients that they commit to appropriate privacy and transparency obligations. To this end we have an end use statement in place that automatically expires and requires an evaluation of the previous period before renewal.
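Taken together, the three categories form a short decision procedure: any restricted P declines the request; a high label or a plausible human-rights risk triggers internal review with CEO approval; everything else is sold directly, possibly under an expiring end use statement. A minimal sketch of that flow, with all function and label names assumed for illustration:

```python
def decide(labels, human_rights_risk=False, needs_end_use_statement=False):
    """Map the per-P risk labels of a sales request to a course of action.

    `labels` maps each of the 4 P's (product, party, place, purpose)
    to one of: "minimal", "limited", "high", "restricted".
    """
    severity = ["minimal", "limited", "high", "restricted"]
    # The most severe label among the 4 P's drives the decision.
    worst = max(labels.values(), key=severity.index)
    if worst == "restricted":
        return "declined"
    if worst == "high" or human_rights_risk:
        return "internal review + CEO approval required"
    if needs_end_use_statement:
        return "sold with expiring end use statement"
    return "sold directly"


# Hypothetical examples of each outcome.
print(decide({"product": "limited", "party": "minimal",
              "place": "minimal", "purpose": "limited"}))
# -> sold directly
print(decide({"product": "minimal", "party": "restricted",
              "place": "minimal", "purpose": "minimal"}))
# -> declined
```

Note that a single restricted P is enough to decline, mirroring the policy that every P must clear the check independently.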

Phone: +31 (0) 20 5300330
Address: Singel 160, 1015 AH Amsterdam, The Netherlands