Machine learning algorithms are used across a wide variety of domains to produce useful outputs in the form of predictions, classifications, and clusterings. However, while many organisations stand to benefit from machine learning, these algorithms raise significant ethical issues, which can make organisations reluctant to adopt them.
To make these algorithms more socially acceptable, and thus enable their benefits to be experienced more widely, many organisations are exploring algorithmic transparency tools.
In domains where the outputs of a machine learning model can have real-world consequences, for example, predictive policing, financial lending, or family screening, algorithmic transparency can provide explanations of how those outputs were reached. By tailoring the form and complexity of these explanations to the audience, practitioners can effectively audit, verify, and contextualise a model's outputs – building public trust in the tools and institutions that use them.
Trilateral Research evaluates machine learning algorithms using a diverse set of algorithmic transparency techniques.
Our interdisciplinary team of experts uses techniques like Shapley values, Anchors, and counterfactual explanations to produce solutions that enable ethical, human-centred, machine-assisted decision-making, as opposed to prescriptive autonomous decision-making.
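To give a flavour of one of these techniques, the sketch below computes exact Shapley values for a toy cooperative game. The feature names and weights are illustrative assumptions, not drawn from any real model; in practice, practitioners typically approximate Shapley values with libraries such as SHAP rather than enumerating coalitions, which is only tractable for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game.

    players: list of feature names
    value:   function mapping a coalition (iterable of names) to a payoff
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of p to this coalition
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Toy "model" (hypothetical): the prediction is a weighted sum of
# whichever features are present in the coalition.
weights = {"income": 0.6, "age": 0.3, "tenure": 0.1}
v = lambda coalition: sum(weights[f] for f in coalition)

print(shapley_values(list(weights), v))
```

Because this toy game is additive, each feature's Shapley value equals its own weight; for a real, non-additive model the values instead distribute the prediction across features according to their average marginal contributions.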
Trilateral Research has a wealth of experience in algorithmic transparency. We take a socio-technical approach and place strong emphasis on the ethical implications of integrating machine learning algorithms into decision-making processes, including in our own applications.