
Photo Courtesy Bradley Davison (CC ’17)

NOTE:

A few days ago, the ColorCode team published a piece regarding a “RoboCop” assignment given to students in Professor Satyen Kale’s Machine Learning (COMS 4771) course. Professor Kale subsequently posted a response on his website, which can be found here. To ensure that both ColorCode’s and the Professor’s views are visible to interested parties, we have shared his piece below:

The original task description (“Robocop”) was regrettably written in a highly offensive manner. It was not our intention to suggest that imitating the “SQF” practices (or any racially-prejudiced practices) in the future is desirable in any way. In fact, the made-up setting for the task in a fictitious, dystopian future was meant to be an ironic indicator of precisely the opposite sentiment. We are strongly against practices such as SQF. While the primary intention for the task was purely pedagogical—to give students exposure to using machine learning techniques in practice—we acknowledge that not providing proper context for the task was poor judgement on our part, and we sincerely apologize for that.

Two original motivations for using this data set were (i) to illustrate the difficulties in developing any kind of “predictive policing” tool (such tools already exist today), and (ii) to assess how predictive modeling could help shed light on this past decision-making. For instance, at the coarsest level, it is evident that very few of the cases where a subject is stopped actually lead to an arrest; this raises the question of why the stops were made in the first place. Moreover, if it is difficult to predict the arrest decision from the features describing the circumstances, then it may suggest that some unrecorded aspect of the circumstances drives the decision; such a finding could have policy implications.

There are critical aspects of the data set that make it highly inappropriate for use in developing any kind of predictive policing tool. First, the data reflect only the arrest decisions of past police officers, which are decidedly not what one would want to imitate. Second, even if the arrest decisions (i.e., labels) in the data set were appropriately modified (thereby altering the conditional distribution of the label given the features), the set of cases in the data may only be representative of the suspects that past police officers chose to stop, necessarily introducing biases into the distribution.

We originally thought that these challenging aspects of the data set would be of interest to the class. However, our formulation of the task was in poor taste and failed to provide adequate context. Because we can only objectively evaluate the predictive modeling aspects of the project that are independent of the context of the data set, we have decided to change the data set to one that is completely unrelated to the SQF data set.

A link to Professor Kale’s original posting on his website can be found here. To respond to this piece or submit an op-ed of your own, email submissions@columbialion.com