Context: I've described how people were exploited historically and how users are exploited online today (https://www.heulwen.net/blog/on-exploitation). I'm currently exploring ways to empower users of digital platforms in a world of increasing automation. A disempowerment index for platforms is one of my ideas for providing decision-relevant information and laying the ground for other interventions, such as the promotion of safer platforms.

Epistemic status: The idea is preliminary. It is mostly inspired by democracy indexes, which may be vague but still contribute to a common understanding of which societal features matter and help cultivate recognition of their value.

Disempowerment Index - A tentative proposal


In the gradual disempowerment scenario, even incremental increases in AI capabilities lead to a loss of human control over our environments and, as a result, over our lives. Virtual environments will be the first spaces where disempowerment accelerates. Today, digital platforms are already full of addictive design and dark patterns, flooded with harmful information and with agents optimizing against regular users' interests.


The first step to fighting back effectively is a good map of the harms. There are several efforts to increase transparency around the safety of digital platforms. The most prominent is the Ranking Digital Rights scorecard. [1] It rates big tech companies on governance, freedom of expression, and privacy, using more than 300 indicators. The focus is more on formal indicators than on real-world patterns, which makes the scorecard less informative for end users. This gap could be bridged by the regulatory and research toolkit on addictive and manipulative design proposed by Spain's data protection authority (AEPD), which offers a suitable three-level taxonomy. If we are to create a compact, informative, user-focused index, it should also be anchored in the EU Digital Services Act [2] risk-assessment regime and the Commission's European Centre for Algorithmic Transparency (ECAT). [3]


After my initial research, I suggest five areas of harm that can help evaluate platform design, the behavior of advertisers on the platform, and the behavior of other users (authentic or inauthentic): (1) financial extraction, (2) data exploitation (surveillance), (3) inference and behavioral control, (4) attentional and epistemic exploitation, (5) psychosocial harm. A minimal sketch of how these five areas could be combined into a single score is given below.
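To make the structure concrete, here is a minimal sketch in Python of how a per-platform assessment might be represented and aggregated. The five fields mirror the areas above; everything else (the 0–5 scale, the equal default weights, and the `disempowerment_score` helper) is an illustrative assumption of mine, not part of any existing standard or methodology.

```python
from dataclasses import dataclass, fields

@dataclass
class PlatformAssessment:
    """Hypothetical per-platform scores for the five harm areas.

    Each score uses an illustrative 0-5 scale:
    0 = no observed harm, 5 = pervasive, by-design harm.
    """
    financial_extraction: float    # e.g. predatory pricing, pay-to-win mechanics
    data_exploitation: float       # surveillance, excessive data collection
    behavioral_control: float      # inference-driven nudging and manipulation
    attentional_epistemic: float   # addictive design, harmful-information flooding
    psychosocial_harm: float       # harassment, social-comparison harms

def disempowerment_score(a: PlatformAssessment,
                         weights: dict[str, float] | None = None) -> float:
    """Weighted average of the five area scores (equal weights by default)."""
    area_names = [f.name for f in fields(PlatformAssessment)]
    weights = weights or {name: 1.0 for name in area_names}
    total_weight = sum(weights[name] for name in area_names)
    return sum(getattr(a, name) * weights[name] for name in area_names) / total_weight

# Example: a fictional platform rated by a hypothetical assessor.
example = PlatformAssessment(
    financial_extraction=2.0,
    data_exploitation=4.0,
    behavioral_control=3.5,
    attentional_epistemic=4.5,
    psychosocial_harm=3.0,
)
print(f"Disempowerment score: {disempowerment_score(example):.2f}")  # -> 3.40
```

A real index would of course need defensible scoring rubrics per area and an aggregation method justified against the DSA risk categories; the sketch only illustrates that the five areas can be scored and rolled up into a single comparable number.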