AI Accountability — The Rite Aid Case

On December 19, 2023, the Federal Trade Commission (FTC) gave us an example of holding an operator/end-using organization accountable for the negative impacts of an artificial intelligence system, in this case a facial recognition system. This is interesting to us because it reflects the findings of our 2022 research papers on accountability for artificial intelligence projects.

What happened

Rite Aid, an American drugstore chain, used facial recognition software for surveillance to track security risks, which we read to mean identifying (potential) shoplifters. The FTC listed the mistakes Rite Aid made in deploying and using the system in a press release that you can find here. In short, the Rite Aid system generated thousands of false positives. “For example, the technology sometimes matched customers with people who had originally been enrolled in the database based on activity thousands of miles away, or flagged the same person at dozens of different stores all across the United States, according to the complaint.”

The misuse and errors caused significant harm to individuals since employees erroneously accused consumers of wrongdoing because of the false positives. According to the FTC complaint, “Employees, acting on false positive alerts, followed people around its stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them, sometimes in front of friends or family, of shoplifting or other wrongdoing.”

Consequently, the FTC banned Rite Aid from using AI facial recognition for surveillance purposes for five years.

What we said

In 2022, we worked on three articles on AI accountability. In the article Stakeholder-accountability model for artificial intelligence projects, we argued that “the operating organization, including the end users, is most accountable to the public. This gives them some power to influence the system’s development.” The FTC action against Rite Aid underscores our findings. However, Rite Aid did not exercise its influence on the system developers; this is one of the FTC’s complaints against the company. The complaint says “Rite Aid failed to:…Test, assess, measure, document, or inquire about the accuracy of its facial recognition technology before deploying it, including failing to seek any information from either vendor it used to provide the facial recognition technology about the extent to which the technology had been tested for accuracy.”

So what

The FTC complaint allows us to compare the success factors and accountabilities from our study to the Rite Aid case. Our studies considered the many roles involved in AI projects. However, carving out the items related to operating/end-using organizations, we identified 54 success factors across 15 success groups and five categories for which operators/end-using organizations should be accountable.

We do not have Rite Aid’s side of the story, so we cannot comment on the benefits and protections it expected from implementing the system; the FTC press release suggests the goal was to track or flag consumers as security risks. Consequently, we cannot comment on the Benefits & Protections category, which includes three success groups and seven success factors.

Further, the FTC press release does not contain enough information for us to report on each success factor. However, in the following table, we list the remaining success factors by category and group and, where possible, report on the Rite Aid failures and FTC safeguards.

| Success Group/Factor | Rite Aid failed to… | FTC required safeguard… |
|---|---|---|
| **Project Governance** | | |
| Ethical Practices: Ethics policies, Ethics training, Ombudsman, Professional membership | | |
| Investigation: Algorithm auditing, Algorithm impact assessment, Audit response records, Certification | For example, the complaint alleges the company conducted many security assessments of service providers orally, and that it failed to obtain or possess backup documentation of such assessments, including for service providers Rite Aid deemed to be “high risk.” | Provide the Commission with an annual certification from its CEO documenting Rite Aid’s adherence to the order’s provisions; obtain independent third-party assessments of its information security program. |
| **Product Quality** | | |
| Source Data Qualities: Data transparency | Prevent the use of low-quality images in connection with its facial recognition technology, increasing the likelihood of false-positive match alerts. | |
| Models & Algorithms Qualities: Algorithm transparency | Test, assess, measure, document, or inquire about the accuracy of its facial recognition technology before deploying it, including failing to seek any information from either vendor it used to provide the facial recognition technology about the extent to which the technology had been tested for accuracy. | |
| Data & Privacy Protections: Confidentiality, Data anonymization, Data encryption, Data governance, Data retention policy, Informed consent, Personal data controls, Privacy safeguards | | Delete, and direct third parties to delete, any images or photos they collected because of Rite Aid’s facial recognition system, as well as any algorithms or other products that were developed using those images and photos; delete any biometric information it collects within five years; notify consumers when their biometric information is enrolled in a database used in connection with a biometric security or surveillance system and when Rite Aid takes some kind of action against them based on an output generated by such a system. |
| System Configuration: Security safeguards, System and architecture quality, Technical deployment records, Technical logging, Versioning and metadata | Rite Aid violated its 2010 data security order with the Commission by failing to adequately implement a comprehensive information security program. | Implement a data security program to protect and secure personal information it collects, stores, and shares with its vendors. |
| User Interface Qualities: Equitable accessibility, Front-end transparency | | |
| **Societal Impacts** | | |
| Individual Protections: Civil rights and liberties protections | Consider and mitigate potential risks to consumers from misidentifying them, including heightened risks to certain consumers because of their race or gender. For example, Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in plurality-Black and Asian communities than in plurality-White communities. | |
| Sustainability: Environmental sustainability | | |
| **Usage Qualities** | | |
| Decision Quality: Access and redress, Awareness, Decision accountability, Privacy and confidentiality | | Investigate and respond in writing to consumer complaints about actions taken against consumers related to an automated biometric security or surveillance system; provide clear and conspicuous notice to consumers about the use of facial recognition or other biometric surveillance technology in its stores. |
| System Transparency & Understandability: Choices, Interaction safety-usage, Interpretable models, Onboarding procedures, Problem reporting, Specialized skills and knowledge-usage, Stakeholder-centric communication | Adequately train employees tasked with operating facial recognition technology in its stores and flag that the technology could generate false positives. | |
| Usage Controls: Algorithm renewal process, Complaint process, Consequence records, Process deployment records, Quality controls, Staff monitoring, System monitoring, Usage records | Regularly monitor or test the accuracy of the technology after it was deployed, including by failing to implement or enforce any procedure for tracking the rate of false positive matches or actions that were taken based on those false positive matches. Even after Rite Aid switched to a technology that enabled employees to report a “bad match” and required employees to use it, the company did not take action to ensure employees followed this policy. | |
AI success categories, factors and groups compared to Rite Aid failures and FTC safeguards

In conclusion

Rite Aid could have benefited from using our success factors when establishing its surveillance program; the factors provide more specificity than the safeguards the FTC proposes for Rite Aid. Personally, however, we would suggest that the technology is neither ripe nor advisable for their use case.

A detailed description of the success factors appears in our article Artificial Intelligence Project Success Factors—Beyond the Ethical Principles.
