Developing fair models in an imperfect world: How to deal with bias in AI

We need to start training models that reflect the world we want to live in.

Daniël van Dam, Raymond van Es, Jan Thiemen Postema

Artificial intelligence (AI) is increasingly used in data-driven decision making, from rule-based models to machine learning (ML) models. Decisions made by ML models are often considered better, faster, and more consistent than human decisions. However, as AI becomes an integral part of our lives, concerns over potentially biased and unfair models are growing. Insurance is one of many industries facing this problem. This white paper discusses how to detect bias and how to build a fair machine learning model.