AI and robotics adverse impacts – how resilient or vulnerable are we?
Individuals and society benefit from advances in human genomics, human enhancement, AI and robotics technologies. But these technologies also present ethical challenges: they may undermine our values and ways of living, and adversely affect human rights.
As outlined by research in the SIENNA project, the adverse impacts of AI and robotics are diverse. These include, for example, reduced control by individuals over their data, adverse effects on privacy, increased discrimination, the concentration of power and wealth in ruling classes, harm and threats of harm from autonomous systems (and weapons), and a rise in surveillance.
The SIENNA project explored two main questions: what is the resilience of potentially affected communities and how vulnerable are they to such adverse impacts?
Resilience has two crucial aspects: the inherent strength of a party and its capacity to bounce back from stresses and shocks. By vulnerability, we mean the inverse of resilience: the weakness and susceptibility of a party to negative impacts and its limited ability (if any) to recover from them.
It is important to understand and acknowledge not only that certain parties will be affected more severely, but also that some affected parties may adapt and adjust better to the adverse or life-changing impacts of AI and robotics.
Resilient parties may have certain values or resources that, if distributed to vulnerable ones, could increase their well-being and minimise harms from adverse impacts. Considering and addressing matters of resilience and vulnerability can potentially help improve future AI and robotics research and policy.
Based on stakeholder interviews and its research, SIENNA found that greater resilience would be evident in Western societies where AI and robotics are advanced and impacts are already being considered and addressed. It would also be evident in communities built on strong unifying values. Isolated communities would likewise be resilient (since they would be insulated from the use, and consequently the effects, of these technologies), as would people with open attitudes and the ability to adapt rapidly to change.
Resilience to AI and robotics impacts also depends on other conditions being met:
- whether individuals and society are aware of such impacts and prepared to deal with them;
- whether incentives exist to promote the responsible design and support the lawful and ethical use of such technologies;
- whether harms can be redressed where harmful effects do occur.
Without regulation guiding the development and implementation of AI and robotics, vulnerability will increase and resilience will decrease (or remain the same). This area needs further research and discussion. We also need to evaluate which regulations would lead to the most equitable distribution of the benefits of AI and robotics.
Where lies the greater vulnerability? Who will AI and robotics impacts hit the hardest?
The SIENNA research identified the global south as being more vulnerable. While the concerns might be similar everywhere, Arun notes that the risks of the ways in which AI affects Southern populations "can be exacerbated in the context of vulnerable populations, especially those without access to human rights law or institutional remedies." Concern has also been expressed that "Artificial intelligence (AI) could displace millions of jobs in the future, damaging growth in developing regions such as Africa".
Another widely recognised vulnerable group is workers. Workers, especially young ones who lose or have to change jobs, are very vulnerable to AI impacts, particularly if they are not educated or trained in the skill sets that would help them adapt to changing technologies and highly automated workplaces. Societies where more individuals lack education and opportunities to re-train and re-skill are more vulnerable.
Other vulnerable groups include children and the elderly. A UNICEF report highlights "children's heightened vulnerabilities" due to their exposure, their susceptibility to suggestion, and the failure of companies selling AI and robotics products and services to adopt adequate safety and warning measures. The elderly are also vulnerable, given the potential impacts of AI and robotics on their dignity, autonomy, privacy and distance from other humans (where human services are replaced by machines).
So what do we do about it?
Further research and critical discussion remain the need of the hour. These will continue via the SIENNA and SHERPA projects and our work at Trilateral to support responsible research and innovation by improving knowledge of the ethical, human rights and socio-economic impacts of emerging technologies.
For more information on our work in this research area, please contact our team.
Rowena Rodrigues, Research Manager at Trilateral Research