    Decision making often involves balancing values, so creating AI requires making those values explicit

    Artificial intelligence (AI) can be a great tool for making better decisions about interactions with others. Because it is automated, it is fast and consistent. Unfortunately, because of that consistency, any mistake is also made consistently. This justifies paying even greater attention to implementing AI than to implementing human decision processes.

    The human touch: a correction factor?

    Whether we ask Google to serve us relevant search results, read news articles Facebook considers of interest to us or fill out an online mortgage application, AI plays an important role in making decisions that provide us with the desired results. We have honed human decision-making processes over many iterations, improving them step by step.

    Because humans are involved, silly rules are often quite easily corrected by human common sense. When we replace human decision making with AI, we replace a process developed over many iterations and corrected by a typical human trait: common sense. That replacement logically requires careful consideration, because it can easily go wrong, resulting in unfair or even discriminatory decisions.

    Company values are key for "human-like" AI

    To get this right, McKinsey suggests starting with the company's values:

    1. Making those values explicit,

    2. Using them to decide what to automate and what not – keeping humans involved in the more sensitive decisions (e.g. automate deciding which insurance claims to pay out without further questions, but not when to take legal action against people falling behind on mortgage payments),

    3. Clarifying the company values by developing standards for their implementation (e.g. how to determine fairness and compensate for biases) and

    4. Establishing a hierarchy among those values (e.g. fairness over short-term profitability).

    With this value-based approach, the people creating the AI are much more likely to build the human trait of "common sense", and the lessons learned from human decision making, into the AI.

    Five tips for successfully integrating company values into your AI

    Company values, such as customer commitment, legal compliance, fairness and honesty, also play a role in the process of creating AI:

    1. When deciding what data to use: is it acceptable to use the relevant data for this specific purpose? Has the data been obtained in accordance with the General Data Protection Regulation (GDPR)? Is the contemplated use in accordance with the GDPR? Would the data subjects agree with this use if they actually became aware of it? (Let's face it: the fact that they accepted it on a website may say little about their actual feelings.)

    2. When deciding what data sets to use for what purposes: is the data set sufficiently representative of the population to which the resulting AI will be applied? Data collected from men may not be representative of women and may thus lead to incorrect (biased) decisions when the resulting AI is applied to women (see the first sketch after this list).

    3. When selecting what data (features) from the data sets to use: will output based on those features be fair? Is there a risk of automating biases already present in the data set (e.g. caused by biases in the earlier human decision making)? A feature can act as a proxy for a protected attribute, as the first sketch after this list also illustrates.

    4. When testing the AI's outcomes: verify that the outcomes align with the company values (see the second sketch after this list). First, because biases may not have been spotted, or features may in effect have been proxies for biases. Second, because independently of the fairness of the input and the features, external factors may skew the results. A good example is an advert that was shown far more often to men than to women, simply because showing adverts to women turned out to be more expensive.

    5. Explaining the AI's outcomes: generally, more sophisticated models are better at predicting but harder to explain, both in how they reach a decision and in the outcomes they produce. While in some contexts it may not matter whether the AI is transparent (as long as the outcomes can be shown to be fair), there is an increasing demand for transparency. This implies that in some cases it is better to use a simpler model (with slightly poorer results) than a more sophisticated model that is perceived as a black box; the third sketch after this list illustrates the trade-off. The ability to explain the output often increases acceptance and thus adoption.
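
    As a minimal sketch of tips 2 and 3, the following Python snippet checks whether a hypothetical data set mirrors the target population and whether candidate features correlate with a protected attribute. The file name, column names, population shares and thresholds are all illustrative assumptions, not standards.

```python
# Sketch for tips 2 and 3. "applicants.csv", the column names, the
# population shares and the thresholds below are illustrative assumptions.
import pandas as pd

df = pd.read_csv("applicants.csv")

# Tip 2: compare the gender mix in the data with the population the
# model will be applied to.
population_share = {"female": 0.50, "male": 0.50}
sample_share = df["gender"].value_counts(normalize=True)
for group, expected in population_share.items():
    observed = sample_share.get(group, 0.0)
    if abs(observed - expected) > 0.05:  # illustrative tolerance
        print(f"{group}: {observed:.0%} in data vs {expected:.0%} in population")

# Tip 3: flag features that correlate strongly with the protected
# attribute and may therefore act as proxies for it.
is_female = (df["gender"] == "female").astype(int)
for feature in ["postcode_avg_income", "years_part_time"]:
    corr = df[feature].corr(is_female)
    if abs(corr) > 0.3:  # illustrative threshold
        print(f"{feature} may be a proxy for gender (r = {corr:.2f})")
```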
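
    A second sketch, for tip 4: once the model has produced decisions on held-out cases, compare the outcome rates per group. The tiny inline data set and the 10% tolerance are assumptions for illustration; in practice the tolerance would follow from the company's own fairness standard (step 3 above).

```python
# Sketch for tip 4: a demographic-parity style check on model decisions.
# The inline data and the tolerance are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["female", "male", "female", "male", "male", "female"],
    "approved": [1, 1, 0, 1, 1, 0],  # the model's decisions on held-out cases
})

rates = decisions.groupby("gender")["approved"].mean()
print(rates)  # approval rate per group

# The gap between the most and least favoured group should stay within
# the tolerance agreed in the company's fairness standard.
gap = rates.max() - rates.min()
if gap > 0.10:
    print(f"Approval-rate gap of {gap:.0%} between groups; investigate")
```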
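
    Finally, a sketch of the trade-off in tip 5: a transparent logistic regression next to a more sophisticated gradient-boosted model on the same synthetic task. The data is generated purely for illustration; the point is that the simple model's coefficients can be read out and explained to a customer, while the ensemble cannot be summarised that way.

```python
# Sketch for tip 5: accuracy versus explainability on a synthetic task.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
boosted = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("simple model accuracy: ", simple.score(X_test, y_test))
print("boosted model accuracy:", boosted.score(X_test, y_test))

# The simple model can be explained feature by feature via its
# coefficients; the boosted ensemble offers no such direct reading.
print("explainable weights:", simple.coef_.round(2))
```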

    For more information on this subject, contact Bart van Reeken.