FACET: “FAirness in Computer Vision EvaluaTion”
First Published on 2 September 2023.
Introduction
In the ever-evolving landscape of artificial intelligence, the need for fairness and accountability in AI models has become increasingly critical. Meta, formerly known as Facebook, has taken a step toward addressing this concern by releasing a new AI benchmark called ‘FACET’ (FAirness in Computer Vision EvaluaTion). FACET is designed to evaluate how fairly AI models classify and detect people in photos.
Goals
FACET is designed to tackle biases in computer vision models comprehensively. It is built on a dataset of 32,000 images containing roughly 50,000 people, labeled by human annotators. The labels cover demographic attributes such as perceived gender presentation and age group; physical attributes such as skin tone, hairstyle, and facial hair; and other characteristics such as tattoos, headwear, eyewear, and lighting conditions. Each person is also assigned a person-related class describing an occupation or activity, such as “basketball player,” “disc jockey,” or “doctor.”
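To make the structure of these labels concrete, a single per-person annotation might look something like the record below. The field names and values here are illustrative assumptions, not Meta’s published schema:

```python
# A hypothetical FACET-style annotation for one person in one image.
# All field names and values are illustrative assumptions, not Meta's
# published schema.
person_annotation = {
    "image_id": "img_000123",
    "person_id": 42,
    "person_class": "doctor",              # one of the occupation/activity classes
    "perceived_gender_presentation": "feminine",
    "perceived_age_group": "middle",
    "perceived_skin_tone": 4,              # e.g. a rating on a skin-tone scale
    "lighting_condition": "well_lit",
    "hair_type": "coily",
    "has_tattoo": False,
    "headwear": None,
}
```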
One of FACET’s primary objectives is to allow researchers and practitioners to benchmark fairness in their AI models and monitor the effectiveness of measures taken to address potential biases. In a blog post, Meta stated, “We encourage researchers to use FACET to benchmark fairness across other vision and multimodal tasks.”
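As a rough illustration of what benchmarking fairness with such labels could look like, the sketch below compares a classifier’s recall across perceived-gender-presentation groups using records shaped like the hypothetical one above, then reports the spread as a fairness gap. This is a minimal sketch under those assumptions, not FACET’s actual evaluation code:

```python
from collections import defaultdict

def recall_by_group(annotations, predictions):
    """Per-group recall: the fraction of people in each group whose
    ground-truth class the model predicted correctly.

    annotations: list of dicts shaped like `person_annotation` above (hypothetical)
    predictions: dict mapping person_id -> predicted class (hypothetical)
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for ann in annotations:
        group = ann["perceived_gender_presentation"]
        totals[group] += 1
        if predictions.get(ann["person_id"]) == ann["person_class"]:
            hits[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

def fairness_gap(annotations, predictions):
    """Spread of recall across groups; a value near 0 means the model
    performs about equally well for every group."""
    per_group = recall_by_group(annotations, predictions)
    return max(per_group.values()) - min(per_group.values())

# Toy example: the model recognizes one "doctor" but stereotypes the other.
anns = [
    {"person_id": 0, "person_class": "doctor", "perceived_gender_presentation": "masculine"},
    {"person_id": 1, "person_class": "doctor", "perceived_gender_presentation": "feminine"},
]
preds = {0: "doctor", 1: "nurse"}
print(fairness_gap(anns, preds))  # 1.0 -> a maximal recall gap between groups
```

Tracking a metric like this before and after a mitigation is one way a team could monitor whether its bias-reduction measures are actually working.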
However, it’s worth acknowledging that Meta has faced criticism over its responsible AI practices in the past, with reports pointing to instances where the company’s AI systems exhibited bias and where its anti-bias tools proved ineffective. Nevertheless, Meta asserts that FACET is more comprehensive than previous computer vision bias benchmarks and can probe questions such as whether models stereotype certain occupations by gender presentation or perform worse for people with certain physical attributes.
To create FACET, Meta recruited annotators from several geographic regions, including the United States, Colombia, Egypt, Kenya, the Philippines, and Taiwan, paying them an hourly wage set per country. While drawing annotators from multiple regions was intended to improve the diversity and fairness of the labeling process, concerns have been raised about the wages paid to annotators and the transparency of the process.
Conclusions
Despite these potential shortcomings, FACET represents a significant step toward promoting fairness and accountability in AI models. It has already been used to uncover biases in Meta’s own DINOv2 computer vision model, highlighting the importance of addressing potential biases during dataset curation.
Meta acknowledges that FACET may not fully capture real-world concepts and demographic groups, and that the way some professions are depicted may have changed since the dataset was created. To help address this, Meta plans to let users flag objectionable content so it can be removed when identified.
In a field advancing as rapidly as AI, benchmarks such as FACET are essential tools for ensuring that models are developed and deployed responsibly, minimizing bias and promoting fairness for all users. As technology continues to shape our lives, initiatives like this one play a crucial role in building a more equitable and inclusive future.