Machine-learning models can fail when they attempt to make predictions for individuals who were underrepresented in the datasets they were trained on.
For instance, a model that predicts the best treatment option for someone with a chronic disease may be trained using a dataset that contains mostly male patients. That model might make incorrect predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is appealing, it often requires removing large amounts of data, hurting the model's overall performance. A rough sketch of this tradeoff appears below.
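As a minimal sketch of what such balancing can look like in practice (the pandas usage, column names, and patient counts below are illustrative assumptions, not details from the article), downsampling every subgroup to the size of the smallest one discards a large share of the data:

```python
import pandas as pd

def balance_by_downsampling(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Downsample every subgroup to the size of the smallest one.

    Simple to apply, but when one group is small, most of the
    majority-group data is thrown away, which can hurt overall accuracy.
    """
    min_size = df[group_col].value_counts().min()
    return (
        df.groupby(group_col, group_keys=False)
          .apply(lambda g: g.sample(n=min_size, random_state=seed))
          .reset_index(drop=True)
    )

# Hypothetical clinical dataset: 9,000 male and 1,000 female records.
df = pd.DataFrame({"sex": ["M"] * 9000 + ["F"] * 1000,
                   "feature": range(10000)})
balanced = balance_by_downsampling(df, "sex")  # 2,000 rows remain; 8,000 discarded
```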
MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer data points than other techniques, this approach maintains the overall accuracy of the model while improving its performance on underrepresented groups.
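The article does not spell out the algorithm, but the general idea of scoring individual training points by their estimated effect on a struggling subgroup can be sketched with a first-order influence approximation. Everything below (the logistic-regression setting, the identity-Hessian shortcut, and all function and variable names) is an illustrative assumption, not the researchers' actual method:

```python
import numpy as np

def logistic_grad(w, X, y):
    """Per-example gradients of the logistic loss, shape (n, d)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted probability of class 1
    return (p - y)[:, None] * X

def removal_effect_scores(w, X_train, y_train, X_group, y_group):
    """First-order estimate of how removing each training point would change
    the average loss on a held-out minority subgroup.

    Uses an identity-Hessian shortcut: score_i = g_i . mean(g_group).
    Negative scores mean removal is predicted to *reduce* the subgroup's
    loss, so the most negative points are the candidates to drop.
    """
    g_train = logistic_grad(w, X_train, y_train)               # (n, d)
    g_group = logistic_grad(w, X_group, y_group).mean(axis=0)  # (d,)
    return g_train @ g_group                                   # (n,)

# Toy usage with random data (purely illustrative):
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)
X_mg, y_mg = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
w = np.zeros(5)  # stand-in for fitted model parameters
scores = removal_effect_scores(w, X_tr, y_tr, X_mg, y_mg)
drop = np.argsort(scores)[:10]  # 10 points predicted most harmful to the subgroup
# One would then retrain without these points and re-check subgroup accuracy.
```

Note how few points this removes relative to full balancing: instead of discarding thousands of rows to equalize group sizes, only the handful of points estimated to drive the subgroup's errors are dropped.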
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.
This method could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes settings. For example, it might someday help ensure that underrepresented patients aren't misdiagnosed due to a biased AI model.
“Many other algorithms that try to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are contributing to this bias, and we can find those data points, remove them, and get better performance,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and co-lead author of a paper on this technique.
She wrote the paper with co-lead authors Saachi Jain PhD ’24 and fellow EECS graduate student Kristian Georgiev.