Anticipate and mitigate harms
Harm is always possible when it comes to technology. To avoid negative outcomes, plan for the worst even while working to create the best.
Technology is now part of everyday life: no program or technology solution operates in isolation. Therefore, to live up to the commitment to do no harm, policymakers and practitioners must anticipate and work to mitigate harms, even those that originate outside a given initiative.
Any given digital initiative carries a range of potential harms, and any list offered here will prove insufficient. Examples include enabling digital repression (including illegal surveillance and censorship); exacerbating existing digital divides associated with, for example, disability, income, or geographic location; contributing to technology-facilitated gender-based violence; undermining local civil society and private-sector companies; amplifying existing harmful social norms; and creating new inequities.
While harms accompany all technology, they are particularly relevant, and their impacts less well understood, in the case of machine learning and artificial intelligence (AI).
Harm mitigation is context-specific and requires a multi-faceted approach that integrates technical, regulatory, policy, and institutional safeguards. Effective harm mitigation also takes a long-term view, considering how current challenges and inequities could be amplified by developments that cannot yet be foreseen.
Without such safeguards, some groups of people may choose to disengage, or systems may be used to intentionally target them, undermining progress toward all of the Sustainable Development Goals.