Joymallya Chakraborty

College of Engineering

Works (8)

Updated: October 1st, 2024 10:54

2024 journal article

When less is more: on the value of "co-training" for semi-supervised software defect predictors

EMPIRICAL SOFTWARE ENGINEERING, 29(2).

By: S. Majumder, J. Chakraborty & T. Menzies

author keywords: Semi-supervised learning; SSL; Self-training; Co-training; Boosting methods; Semi-supervised preprocessing; Clustering-based semi-supervised preprocessing; Intrinsically semi-supervised methods; Graph-based methods; Co-forest; Effort aware tri-training
Sources: Web Of Science, NC State University Libraries
Added: March 11, 2024
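
The entry above concerns co-training, in which two learners trained on different feature "views" of the same data label unlabeled examples for each other. A minimal pure-Python sketch of that general recipe (the one-feature threshold learner and the one-example-per-round schedule are illustrative assumptions, not the paper's method):

```python
# Generic co-training sketch: each example is a 2-tuple (two "views");
# each round, view A's classifier pseudo-labels an unlabeled example
# for view B's training set, and vice versa.

def train_threshold(rows, view):
    """Fit a 1-feature threshold classifier: predict 1 iff x[view] >= t."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted({x[view] for x, _ in rows}):
        acc = sum((x[view] >= t) == (y == 1) for x, y in rows) / len(rows)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def co_train(labeled, unlabeled, rounds=3):
    pool = list(unlabeled)
    train_a, train_b = list(labeled), list(labeled)
    for _ in range(rounds):
        if not pool:
            break
        ta = train_threshold(train_a, view=0)
        tb = train_threshold(train_b, view=1)
        x = pool.pop(0)
        # Each view labels the example for the *other* view's training set.
        train_b.append((x, 1 if x[0] >= ta else 0))
        train_a.append((x, 1 if x[1] >= tb else 0))
    return train_threshold(train_a, 0), train_threshold(train_b, 1)
```

The point of the two views is that each learner's confident region covers the other's blind spots, which is why co-training can stretch a small labeled set.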

2023 journal article

Fair Enough: Searching for Sufficient Measures of Fairness

ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 32(6).

By: S. Majumder, J. Chakraborty, G. Bai, K. Stolee & T. Menzies

author keywords: Software fairness; fairness metrics; clustering; theoretical analysis; empirical analysis
TL;DR: This article shows that many of those fairness metrics effectively measure the same thing, and it is no longer necessary (or even possible) to satisfy all fairness metrics. (via Semantic Scholar)
Sources: Web Of Science, ORCID, NC State University Libraries
Added: October 31, 2023
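
The paper's claim that many fairness metrics effectively measure the same thing can be seen in miniature with two standard group metrics, both pure functions of the same pair of selection rates (an illustrative observation, not the paper's clustering analysis):

```python
# Two widely used group-fairness metrics. Because both are functions of
# the same two per-group selection rates, they rise and fall together --
# the kind of redundancy the paper examines across many metrics.

def selection_rate(preds):
    return sum(preds) / len(preds)

def statistical_parity_difference(preds_a, preds_b):
    # |P(yhat=1 | group A) - P(yhat=1 | group B)|; 0.0 means parity.
    return abs(selection_rate(preds_a) - selection_rate(preds_b))

def disparate_impact(preds_a, preds_b):
    # Ratio of the lower selection rate to the higher one; 1.0 means parity.
    ra, rb = selection_rate(preds_a), selection_rate(preds_b)
    return min(ra, rb) / max(ra, rb)
```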

2022 article

Fair-SSL: Building fair ML Software with less data

2022 IEEE/ACM INTERNATIONAL WORKSHOP ON EQUITABLE DATA & TECHNOLOGY (FAIRWARE 2022), pp. 1–8.

By: J. Chakraborty, S. Majumder & H. Tu

author keywords: Machine Learning with and for SE; Ethics in Software Engineering
TL;DR: This is the first SE work where semi-supervised techniques are used to fight against ethical bias in SE ML models, and the clear advantage of Fair-SSL is that it requires only 10% of the labeled training data. (via Semantic Scholar)
Source: Web Of Science
Added: October 3, 2022
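
Fair-SSL builds on standard semi-supervised machinery. As a reference point, the simplest such recipe, self-training, in one dimension (a generic sketch on assumed toy data, not Fair-SSL itself):

```python
# Self-training sketch: fit a nearest-centroid classifier on the small
# labeled set, pseudo-label the unlabeled pool, then refit on both --
# the general recipe that lets semi-supervised methods work from a
# small labeled fraction (the abstract cites 10%).

def centroids(rows):
    """rows: (x, label) pairs with scalar x. Return per-class means."""
    sums, counts = {}, {}
    for x, y in rows:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    # Assign x to the class with the nearest centroid.
    return min(cents, key=lambda y: abs(x - cents[y]))

def self_train(labeled, unlabeled):
    cents = centroids(labeled)
    pseudo = [(x, predict(cents, x)) for x in unlabeled]
    return centroids(labeled + pseudo)
```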

2022 journal article

FairMask: Better Fairness via Model-Based Rebalancing of Protected Attributes

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 49(4), 2426–2439.

By: K. Peng, J. Chakraborty & T. Menzies

author keywords: Software fairness; explanation; bias mitigation
TL;DR: This work proposes a model-based extrapolation method that corrects the misleading latent correlation between the protected attributes and other non-protected ones and achieves significantly better group and individual fairness than benchmark methods. (via Semantic Scholar)
Sources: Web Of Science, ORCID, NC State University Libraries
Added: May 30, 2023
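
Per the abstract, FairMask's core move is to replace the protected attribute at inference time with a value extrapolated from the other features. A deliberately tiny sketch of that masking step (the majority-value "extrapolator" is a placeholder assumption; the paper uses a learned model):

```python
def fit_extrapolator(rows):
    # rows: (features, protected) pairs. Placeholder "model": predict
    # the majority protected value seen in training.
    vals = [p for _, p in rows]
    return max(set(vals), key=vals.count)

def mask(rows, extrapolated):
    # Overwrite each row's real protected attribute with the model's
    # output, severing the latent correlation between protected and
    # non-protected attributes that a downstream classifier could exploit.
    return [(x, extrapolated) for x, _ in rows]
```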

2021 article

Bias in Machine Learning Software: Why? How? What to Do?

PROCEEDINGS OF THE 29TH ACM JOINT MEETING ON EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING (ESEC/FSE '21), pp. 429–440.

By: J. Chakraborty, S. Majumder & T. Menzies

author keywords: Software Fairness; Fairness Metrics; Bias Mitigation
TL;DR: This paper postulates that the root causes of bias are the prior decisions that affect what data was selected and the labels assigned to those examples, and proposes the Fair-SMOTE algorithm, which removes biased labels and rebalances internal distributions so that, based on the sensitive attribute, examples are equal in both positive and negative classes. (via Semantic Scholar)
Sources: Web Of Science, ORCID, NC State University Libraries
Added: March 7, 2022
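
The rebalancing goal in the abstract, equal representation of each protected group in both classes, can be sketched by growing every (group, class) cell to the largest cell's size (duplication here stands in for Fair-SMOTE's synthetic-example generation):

```python
from collections import defaultdict
from itertools import cycle, islice

def rebalance(rows):
    # rows: (features, protected, label) triples.
    cells = defaultdict(list)
    for row in rows:
        _, protected, label = row
        cells[(protected, label)].append(row)
    target = max(len(cell) for cell in cells.values())
    out = []
    for cell in cells.values():
        # Cycle through the cell's rows until it reaches the target size.
        out.extend(islice(cycle(cell), target))
    return out
```

After this pass, every combination of protected value and class label carries the same weight in training, which is the distributional condition the TL;DR describes.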

2020 article

Making Fair ML Software using Trustworthy Explanation

2020 35TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2020), pp. 1229–1233.

By: J. Chakraborty, K. Peng & T. Menzies

TL;DR: This work shows how the proposed method based on K nearest neighbors can overcome shortcomings and find the underlying bias of black box models and describes the future framework combining explanation and planning to build fair software. (via Semantic Scholar)
Sources: Web Of Science, NC State University Libraries
Added: June 10, 2021
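
The abstract's K-nearest-neighbors idea can be illustrated with a simple consistency probe: flag points where a black-box model disagrees with the majority of its outputs on the point's neighbors (an assumed simplification, not the paper's exact method):

```python
def knn_disagreement(model, rows, k=3):
    """Fraction of 1-D points whose model output disagrees with the
    majority model output among their k nearest neighbors. Clusters of
    such disagreements around one group can hint at hidden bias."""
    flags = 0
    for x in rows:
        nbrs = sorted((r for r in rows if r != x), key=lambda r: abs(r - x))[:k]
        votes = [model(n) for n in nbrs]
        majority = max(set(votes), key=votes.count)
        flags += model(x) != majority
    return flags / len(rows)
```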

2019 article

Investigating the Effects of Gender Bias on GitHub

2019 IEEE/ACM 41ST INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING (ICSE 2019), pp. 700–711.

By: N. Imtiaz, J. Middleton, J. Chakraborty, N. Robson, G. Bai & E. Murphy-Hill

author keywords: GitHub; gender; open source
TL;DR: The effects of gender bias are largely invisible on the GitHub platform itself, but there are still signals of women concentrating their work in fewer places and being more restrained in communication than men. (via Semantic Scholar)
Source: Web Of Science
Added: September 7, 2020

2019 article

Predicting Breakdowns in Cloud Services (with SPIKE)

ESEC/FSE'2019: PROCEEDINGS OF THE 2019 27TH ACM JOINT MEETING ON EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, pp. 916–924.

By: J. Chen, J. Chakraborty, P. Clark, K. Haverlock, S. Cherian & T. Menzies

author keywords: Cloud; optimization; data mining; parameter tuning
TL;DR: SPIKE is a data mining tool which can predict upcoming service breakdowns, half an hour into the future, and performed relatively better than other widely-used learning methods (neural nets, random forests, logistic regression). (via Semantic Scholar)
Sources: Web Of Science, NC State University Libraries
Added: October 7, 2019
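
SPIKE's "half an hour into the future" framing reduces to a standard data-preparation trick: pair the monitoring signals at time t with the breakdown flag at time t + lead, then train any ordinary classifier on those pairs (a generic sketch; SPIKE's actual features and learner are not shown here):

```python
def make_lead_pairs(metrics, breakdowns, lead):
    """Pair the signal observed at time t with the outcome at t + lead,
    so a classifier trained on these pairs predicts `lead` steps ahead."""
    return [(metrics[t], breakdowns[t + lead])
            for t in range(len(metrics) - lead)]
```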
