People affected by algorithms developed in artificial intelligence (AI) and big data projects want fairness, meaning moral or “just” treatment from algorithmic decision-making. However, fairness, or the perception of fairness, has several subjective components beyond the scope of any development project, including pre-established attitudes and emotional reactions to algorithmic outcomes. Moreover, end users understand, perceive, and process algorithmic fairness, accountability, and transparency differently.
Nevertheless, the decisions made by the project team influence who judges what is reasonable at the moment a decision is made. The project team therefore bears some responsibility for the moral decisions produced by algorithmic systems. That accountability is mediated by the limits and biases of the decisions produced, by the end users’ ability to manipulate the system or override its decisions, and by the information end users have to understand and enhance their decision autonomy. The project team is an important actor connecting AI systems, stakeholders, and moral decisions.
As a keynote speaker at the SAS Learning Conference, Gloria Miller will break down the components of ethical AI and explain why they are important. What is the difference between AI and data-driven decisions? You will learn why the project team is central to connecting the different aspects of ethical AI: trustworthiness, transparency, explainability, accountability, sustainability, and interpretability. You will also learn why the project team’s decisions are relevant to delivering ethical AI solutions.