
[liberationtech] CFP: ICML 2015 Workshop on Fairness, Accountability, and Transparency in Machine Learning

Solon Barocas solon at
Wed Apr 22 19:36:26 PDT 2015


2nd Workshop on Fairness, Accountability, and Transparency in Machine Learning

ICML 2015

July 11, Lille, France

Submission Deadline: May 1, 2015



Machine learning is increasingly part of our everyday lives, influencing not only our individual interactions with websites and online platforms, but also national policy decisions that shape society at large. When algorithms make automated decisions that can affect our lives so profoundly, how do we make sure that their decisions are fair, verifiable, and accountable? This workshop will explore how to integrate these concerns into machine learning and how to address them with computationally rigorous methods.

The workshop takes place at an important moment. The debate about "big data" on both sides of the Atlantic has begun to expand beyond issues of privacy and data protection. Policymakers, regulators, and advocates have recently expressed fears about the potentially discriminatory impact of analytics, with many calling for further technical research into the dangers of inadvertently encoding bias into automated decisions. At the same time, there is growing alarm that the complexity of machine learning may reduce the justification for consequential decisions to "the algorithm made me do it". Decision procedures perceived as fundamentally inscrutable have drawn special attention.

The workshop will bring together an interdisciplinary group of researchers to address these challenges head-on.


We welcome contributions on theoretical models, empirical work, and everything in between, including (but not limited to) contributions that address the following open questions:

* How can we achieve high classification accuracy while preventing discriminatory biases?

* What are meaningful formal fairness properties?

* What is the best way to represent how a classifier or model has generated a particular result?

* Can we certify that some output has an explanatory representation?

* How do we balance the need for sensitive attributes in fair modeling and classification against concerns and limitations around collecting and using those attributes?

* What ethical obligations does the machine learning community have when models affect the lives of real people?
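To make the second question above concrete, here is a minimal sketch of one well-known formal fairness property: demographic (statistical) parity, under which a classifier's positive-prediction rate is the same across groups defined by a sensitive attribute. The function names and the data are illustrative, not drawn from any submission.

```python
def positive_rate(predictions, groups, group):
    """Fraction of positive predictions among members of one group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between the
    two groups present in `groups`; 0 means exact demographic parity."""
    a, b = sorted(set(groups))
    return abs(positive_rate(predictions, groups, a)
               - positive_rate(predictions, groups, b))

# Illustrative binary decisions (1 = positive outcome) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A receives positives at rate 0.75, group B at rate 0.25.
print(demographic_parity_gap(preds, groups))  # 0.5
```

Whether such a property is "meaningful", and how it trades off against classification accuracy, is exactly the kind of open question the workshop invites.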


Papers are limited to four content pages, including figures and tables, and must follow the ICML 2015 format; an additional fifth page containing only cited references is permitted. Papers should be anonymized. Accepted papers will be made available on the workshop website. The workshop's proceedings are non-archival, so contributors remain free to publish their work in archival journals or conferences. Each accepted paper will be presented as either a talk or a poster, as determined by the workshop organizers.

Papers should be submitted here:

Deadline for submissions: May 1, 2015
Notification of acceptance: May 10, 2015


Workshop Organizers:

Solon Barocas, Princeton University
Sorelle Friedler, Haverford College
Moritz Hardt, IBM Almaden Research Center
Joshua Kroll, Princeton University
Carlos Scheidegger, University of Arizona
Suresh Venkatasubramanian, University of Utah
Hanna Wallach, Microsoft Research NYC
