Developing fair models in an imperfect world: How to deal with bias in AI
23 December 2021
Artificial intelligence (AI) is increasingly used in data-driven decision making, from rule-based models to machine learning (ML) models. Decisions made by ML models are often thought to be better, faster, and more consistent than human decisions. However, as AI becomes an integral part of our lives, concerns over potentially biased and unfair models are growing. Insurance is one of many industries facing this problem. This white paper discusses how to detect bias and build fair ML models.
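One common starting point for detecting bias is to compare a model's favorable-decision rates across groups defined by a protected attribute. The sketch below is a hypothetical illustration, not the paper's prescribed method: it assumes binary decisions (1 = favorable) and a binary group label, and computes the demographic parity difference between the two groups.

```python
# Hypothetical illustration of one bias-detection metric: demographic parity.
# Assumes binary decisions (1 = favorable, 0 = unfavorable) and a group label
# (e.g., a protected attribute with values "A" and "B").

def demographic_parity_difference(decisions, groups):
    """Absolute gap in favorable-decision rates between the two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

# Example: loan approvals for applicants in groups "A" and "B".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

Here group A is approved 75% of the time and group B only 25%, so the gap of 0.5 would flag the model for closer review. A gap near zero is necessary but not sufficient for fairness; other criteria (e.g., equalized odds) examine error rates rather than approval rates.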