Sony's new dataset tests whether AI models treat different groups fairly
On November 6th, Sony AI released a new dataset for testing the fairness and bias of artificial intelligence models, named the Fair Human-Centric Image Benchmark (FHIBE, pronounced "Phoebe"). The company stated that this is the first publicly available, globally diverse, consent-based human image dataset for evaluating bias across a wide range of computer vision tasks.

Image Source: Sony
Sony stated that FHIBE helps address the AI industry's challenges around ethics and bias. The dataset contains images of nearly 2,000 paid participants from more than 80 countries, all of whom gave explicit consent for their portraits to be used - in contrast to the common industry practice of scraping large volumes of web images without permission.
The tool has 'detected previously documented biases in current AI models', but Sony stresses that FHIBE can also provide a granular analysis of the specific factors that lead to these biases. For example, some models showed lower accuracy for people who use "she/her/hers" pronouns, and FHIBE identified greater hairstyle variability as a previously overlooked contributing factor.
Additionally, FHIBE found that AI models can reinforce stereotypes even when asked neutral questions (such as "What is this person's occupation?"). Tests showed that the models exhibited clear biases against groups associated with particular pronouns and ancestries, in some cases describing subjects with negative stereotypes such as sex workers, drug dealers, or thieves.
Sony AI says that FHIBE demonstrates that ethical, diverse, and fair data collection is feasible. The tool is now publicly available and will continue to be updated over time. The accompanying research was published in the journal Nature.


