Digital Society Position Paper on ADMS

2022-02-21

I am a member of the Digital Society working group on automated decision-making systems (ADMS). We have published the first version of our position paper on how ADMS should be regulated to benefit humanity.

The scope of our proposal covers ADMS that use technical systems to make decisions in a fully automated way, or at least to support them. Our proposed legal framework is technology-neutral; instead of regulating specific technologies, it follows a harm- and risk-based approach. Sanctions are imposed only retrospectively, in the event of harm, whereas high-risk applications are subject to certain a priori restrictions. Anyone using an ADMS must assess and categorize the risk it poses to the health, safety and fundamental rights of individuals and of society.

The framework provides three categories for this purpose: “low risk”, “high risk” and “unacceptable risk”. The categories reflect the risk a system poses to individuals as well as to society as a whole. “Low risk” systems pose a low risk to individuals and none to society, while “unacceptable risk” systems pose an unacceptably high risk to individuals or society and must therefore be banned. In between lie the “high risk” systems. For these ADMS we demand far-reaching transparency and due-diligence obligations, which should enable the public to assess their risks and benefits and thus make informed decisions about their use.
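
To make the categorization rule concrete, here is a minimal sketch in Python. The names (`RiskCategory`, `RiskAssessment`, `categorize`) and the coarse string labels for the two risk axes are my own illustration, not part of the position paper; the actual assessment criteria in the framework are of course far richer than two string fields.

```python
from dataclasses import dataclass
from enum import Enum, auto


class RiskCategory(Enum):
    LOW = auto()           # low risk to individuals, none to society
    HIGH = auto()          # everything in between: transparency and due-diligence duties
    UNACCEPTABLE = auto()  # unacceptably high risk to individuals or society: banned


@dataclass
class RiskAssessment:
    """Outcome of the operator's self-assessment along the two axes of the
    framework: risk to individuals and risk to society as a whole."""
    individual_risk: str  # illustrative labels: "low", "high" or "unacceptable"
    societal_risk: str    # illustrative labels: "none", "some" or "unacceptable"


def categorize(a: RiskAssessment) -> RiskCategory:
    """Map an assessment onto the three categories, following the verbal rule:
    unacceptable risk on either axis bans the system; low individual risk with
    no societal risk is "low risk"; everything else is "high risk"."""
    if "unacceptable" in (a.individual_risk, a.societal_risk):
        return RiskCategory.UNACCEPTABLE
    if a.individual_risk == "low" and a.societal_risk == "none":
        return RiskCategory.LOW
    return RiskCategory.HIGH


if __name__ == "__main__":
    print(categorize(RiskAssessment("low", "none")))          # RiskCategory.LOW
    print(categorize(RiskAssessment("high", "some")))         # RiskCategory.HIGH
    print(categorize(RiskAssessment("low", "unacceptable")))  # RiskCategory.UNACCEPTABLE
```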