
Mitigating Bias in Machine Learning Algorithms for Fair and Reliable Defect Prediction

EasyChair Preprint 13185

11 pages
Date: May 6, 2024

Abstract

Machine learning algorithms have revolutionized defect prediction in various industries, offering promising solutions for identifying potential issues in software systems. However, the deployment of these algorithms poses challenges related to bias, which can lead to unfair and unreliable predictions. This paper explores methods to mitigate bias in machine learning algorithms for defect prediction, aiming to enhance fairness and reliability in the prediction process.


The first part of this study examines the sources and types of bias that commonly affect machine learning models in defect prediction tasks. These biases may stem from historical data, feature selection, or algorithmic decision-making processes. Understanding these biases is crucial for developing effective mitigation strategies.
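To make the first of these sources concrete, the following is a minimal sketch (not taken from the paper) of how label imbalance in historical defect data can be quantified before training. The column names "team" and "defective" and the synthetic figures are illustrative assumptions only.

import pandas as pd

# Fraction of defective samples within each group of the historical data.
def defect_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    return df.groupby(group_col)[label_col].mean()

# Synthetic example: modules owned by team B are labeled defective far more often,
# a disparity a model may learn as a shortcut instead of a genuine code signal.
history = pd.DataFrame({
    "team": ["A"] * 80 + ["B"] * 20,
    "defective": [0] * 72 + [1] * 8 + [0] * 8 + [1] * 12,
})
print(defect_rate_by_group(history, "team", "defective"))
# team A: 0.10, team B: 0.60 -- a gap worth auditing before any model is trained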


Next, we discuss approaches to addressing bias in machine learning algorithms. These include preprocessing techniques such as data re-sampling and feature engineering, algorithmic adjustments such as fairness constraints, and post-processing fairness interventions. Additionally, we explore the importance of diverse and representative datasets in mitigating bias and improving model generalization.
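As a hedged illustration of the pre-processing route mentioned above, the sketch below oversamples each group of a synthetic defect dataset to equal size before fitting a classifier. The column names, the "team" group attribute, and the choice of RandomForestClassifier are assumptions made for illustration, not the specific procedure evaluated in the paper.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "team": ["A"] * 80 + ["B"] * 20,            # hypothetical group attribute
    "loc": rng.integers(50, 2000, size=100),    # lines of code (synthetic)
    "churn": rng.integers(0, 300, size=100),    # recent changes (synthetic)
    "defective": rng.integers(0, 2, size=100),  # historical defect label (synthetic)
})

# Oversample each group up to the size of the largest group so that no single
# group dominates the learned decision rule.
def rebalance_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    target = df[group_col].value_counts().max()
    parts = [resample(g, replace=True, n_samples=target, random_state=0)
             for _, g in df.groupby(group_col)]
    return pd.concat(parts, ignore_index=True)

balanced = rebalance_by_group(data, "team")
X, y = balanced[["loc", "churn"]], balanced["defective"]   # group column kept out of the features
model = RandomForestClassifier(random_state=0).fit(X, y)

Keeping the group attribute out of the feature matrix is a common companion step, though re-sampling alone does not guarantee fairness under every metric; fairness constraints and post-processing interventions address the remaining gaps at training and prediction time.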

Keyphrases: machine learning algorithms

BibTeX entry
BibTeX has no dedicated entry type for preprints; the following workaround produces a correct reference:
@booklet{EasyChair:13185,
  author    = {Louis Frank and Saleh Mohamed},
  title     = {Mitigating Bias in Machine Learning Algorithms for Fair and Reliable Defect Prediction},
  howpublished = {EasyChair Preprint 13185},
  year      = {2024}}