Sunday, November 19, 2017

Trump administration wants racist AI for ‘Extreme Vetting Initiative’

It’s becoming increasingly evident that the Trump administration doesn’t understand technology, or perhaps fears and hates it. The seemingly imminent repeal of net neutrality, and the administration’s quest to build AI for its “Extreme Vetting Initiative,” lend credibility to theories that we’re being led by Luddites.


US Immigration and Customs Enforcement (ICE) this June sent out a letter detailing an initiative to “obtain contractor services to establish an overarching vetting contract that automates, centralizes and streamlines the current manual vetting process while simultaneously making determinations via automation if the data retrieved is actionable.”


According to the letter, the current methodology ICE is forced to use doesn’t provide enough “high-value derogatory information to further investigations or support any prosecution by ICE or US attorneys in immigration or federal courts.”

The humans at ICE, in short, would like someone in the technology sector to create a machine learning system that mines data for information it can use to prosecute or deny entry to immigrants: the very definition of a biased AI.

This endeavor would probably involve a deep learning network capable of drawing correlations between disparate data sets. To train such a network, there’s a pretty good chance that DHS or ICE would set a specific goal: a target number of people from the areas it wishes to subject to “extreme vetting.”
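
To see why such a target is the problem, here is a minimal sketch using invented data and an assumed quota. It shows how picking a decision threshold purely to hit a preset number of flagged applicants turns a “risk model” into a quota machine; nothing here reflects any real ICE or DHS system.

```python
# Hypothetical sketch: how a preset "target number" of flagged applicants
# turns a vetting model into a quota machine rather than a risk measure.
# All names, numbers, and data here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "risk scores" produced by some model over 10,000 applicants.
# In practice such scores would correlate with whatever proxies
# (nationality, language, social media activity) the training data contains.
scores = rng.normal(loc=0.0, scale=1.0, size=10_000)

TARGET_FLAGGED = 500  # the agency's predetermined quota (assumed)

# Instead of choosing a threshold backed by evidence of actual risk,
# the threshold is simply whatever value yields the target count.
threshold = np.sort(scores)[-TARGET_FLAGGED]
flagged = scores >= threshold

print(f"Threshold chosen to hit quota: {threshold:.3f}")
print(f"Applicants flagged: {flagged.sum()}")  # ~500, regardless of merit
```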

A group of 54 distinguished scientists and engineers today sent a different letter to the Department of Homeland Security beseeching it to leave AI out of its plans for immigrant vetting. In the letter the coalition states:


Simply put, no computational methods can provide reliable or objective assessments of the traits ICE seeks to measure.

To the best of our knowledge, there’s no machine capable of determining whether a person is likely to commit a crime, nor is there an AI that can determine a human’s intentions through the collection of social media data.

The movie “Minority Report” remains a work of fiction.

In fact, ProPublica’s piece “Machine Bias,” which was a Pulitzer finalist, explained:


There’s software used across the country to predict future criminals. And it’s biased against blacks.

It would then be logical to believe that creating a similar AI to determine whether a person should be allowed entry into the country isn’t much different from building one to determine whether Black people should get harsher sentences.
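
For a concrete sense of what “biased” means here, the sketch below uses synthetic data and invented numbers (not ProPublica’s actual dataset or methodology) to show the kind of measurement that investigation relied on: comparing false positive rates between two groups scored by the same model.

```python
# Illustrative only: measuring bias as a gap in false positive rates,
# i.e. how often people who never reoffend are still labeled high risk.
# The data and numbers below are synthetic.
import numpy as np

rng = np.random.default_rng(1)

def false_positive_rate(predicted, actual):
    """Share of people who did NOT reoffend but were labeled high risk."""
    negatives = ~actual
    return (predicted & negatives).sum() / negatives.sum()

n = 5_000
group = rng.choice(["A", "B"], size=n)   # two demographic groups
reoffended = rng.random(n) < 0.30        # same base rate for both groups

# A biased scoring rule: group B gets an arbitrary bump in "risk score".
risk_score = rng.random(n) + np.where(group == "B", 0.2, 0.0)
labeled_high_risk = risk_score > 0.75

for g in ("A", "B"):
    mask = group == g
    fpr = false_positive_rate(labeled_high_risk[mask], reoffended[mask])
    print(f"Group {g}: false positive rate = {fpr:.2%}")
```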

We contacted the American Civil Liberties Union, which told us:


Using individuals’ social media and other online presence to determine whether they should be allowed to enter the country or remain in the country has huge civil liberties implications. ICE has indicated that this initiative would use machine learning techniques to determine whether someone is going to be a contributing member of society or contribute to the national interest and whether they might commit “criminal or terrorist acts after entering the United States.” First, experts have said that there are no computational methods that can provide reliable measures of these traits. Second, this monitoring of people’s activities on the internet and social media is likely to scare people into censoring their activities, thereby creating a chilling effect on free speech. Third, programs and policies, such as those used to conduct surveillance, have used similarly vague terminology regarding terrorism, which has resulted in the unjust and discriminatory treatment of Muslim, Arab, Middle Eastern, and South Asian communities. Tied together with the President’s consistent calls for “ideological certification” and “extreme vetting” while making reference to Islam and Muslims, this initiative is ripe for the targeting of individuals of particular backgrounds.

In the face of all this information, it seems like a no-brainer that the tech community would be absolutely united against this particular application of AI, and from what we can tell, it is.

The 54 scientists and researchers who signed the letter to DHS are a mix of academics and experts from companies like Google and Microsoft.

IBM, whose representatives attended a June meeting with government officials alongside a group of other companies, was name-checked by Reuters today. But less than a year ago, a company rep told The Hill:


We’ve been clear about our values. We oppose discrimination and we wouldn’t do any work to build a registry of Muslim Americans.

And it’s worth noting that Virginia Rometty, CEO of IBM, helped disband Trump’s board of technology leaders. In a letter to employees she wrote:


We have worked with every U.S. president since Woodrow Wilson. We are determinedly non-partisan – we maintain no political action committee. And we have always believed that dialogue is critical to progress; that is why I joined the President’s Forum earlier this year.

But this group can no longer serve the purpose for which it was formed. Earlier today I spoke with other members of the Forum and we agreed to disband the group.

It’s likely a safe bet that IBM isn’t going to get behind this.

No technology company should enable a system that experts believe will result in discrimination against people. It runs counter to the very idea of research and advancement for anyone to use artificial intelligence in a way that intentionally marginalizes people or otherwise violates their civil rights, no matter the color of their skin or country of origin.

We asked an ACLU representative what they’d say to a company considering a government contract to create an AI for the “Extreme Vetting Initiative,” and they told us:


We would encourage companies to think carefully about participating in this initiative, including whether they want to be a part of efforts that are scientifically dubious, threaten our constitutional rights, and result in the targeting of immigrant communities.