MASS SEEKS TO CREATE AI TASK FORCE, WON’T PUBLICLY NAME APPOINTEES

Original Image via iqlect.com

“This body does not seem to be centering issues related to accountability, justice, racial discrimination or other forms of bias.”


Massachusetts is seeking 25 “thought leaders” to help create artificial intelligence policy for the state. But despite widespread criticism of AI over racial bias and surveillance concerns, state leaders and administrators are not saying who those people are. That lack of transparency, according to civil liberties advocates, presents major problems.

The Mass Tech Collaborative, a state-run group focused on encouraging the tech economy, is working with the Executive Office of Housing and Economic Development to appoint a 25-person AI Task Force that will “develop a strategic plan that furthers economic growth in the AI sector,” according to bid documents. The collaborative is also looking to hire a consulting firm to work with the task force and create “comprehensive opportunity statements on public/private interventions that could have a transformational effect on the AI sector in Massachusetts within a three to five year timeframe.”

The task force will include AI “thought leaders” from academia, nonprofit, finance, and public sectors, as well as developers and adopters from other industries, including health care, defense, and cybersecurity, according to MassTech spokesman Brian Noyes. The bid documents say the task force has already been created, but Noyes said that while MassTech identified an initial group, selection was put on hiatus during the pandemic, and MassTech and the Executive Office of Housing and Economic Development are now “reconfirming the interest of potential members.”

MassTech did not release the original list of appointees or the current list upon request. Noyes said the task force’s recommendations would not replace legislative or executive discussions about AI policy and that the group will take regulatory considerations into account.

Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts, said it makes sense for the state to convene experts to examine a developing part of a major state industry, but that the focus on economics and the secrecy around membership could lead to technology that reinforces injustice.

“The administration hasn’t done a lot of public information sharing about this task force or who’s going to be on it,” Crockford said. “This body does not seem to be centering issues related to accountability, justice, racial discrimination or other forms of bias, surveillance and privacy, or issues related to automation and the job market, and that is a problem, that is a big problem.”

As past examples make clear, AI and machine learning reflect the biases of their architects, and those biases have surfaced in disturbing ways. A New York Times article from earlier this year compiled examples of AI tech that discriminated against Black people, including an Amazon facial recognition service that misidentified darker-skinned women as men 31% of the time.

Dealing with issues like that is crucial for any state AI policy, and knowing who is on the task force is part of being able to determine if that task force is taking those considerations into account, Crockford said.

“AI and automated decision systems can facilitate more forms of discrimination, and that discrimination can be difficult to detect, or hidden behind algorithmic black boxes,” Crockford said. “Centering racial justice and privacy in discussions of AI is crucial; they really need to beef up their plans here. I don’t know who’s on the list, but that’s part of the problem.”

The British-based advocacy group Privacy International monitors government use of technology and has reported on the lack of transparency in data analysis, most recently on the analytics company Palantir’s work for the British government during the COVID-19 pandemic. Legal officer Lucie Audibert said governments need to be up front about who they’re working with as they develop AI policy.

“Transparency is particularly crucial when it comes to AI—it tends to be designed in ways that are opaquely influenced by certain assumptions, bias, or preconceptions. Its logic and conclusions are often difficult to challenge, while being blindly trusted as a perfect source of truth,” Audibert said. “Public deliberations about what types of AI the government wants to encourage, and for what purposes, are essential to avoid harmful and unaccountable uses.”

The Legislature is considering bills to create its own commission on “transparency and use of artificial intelligence in government decision-making,” which would submit public reports and detail how the state is using AI across its departments. The ACLU would have a seat on that commission, and Crockford hopes the bills will pass soon.

“We hope that it becomes law this session so we have public accountability of how artificial, automated intelligence is being dealt with in our government,” Crockford said. “We want to make it so in creating new laws we’re not creating discrimination or continuing discrimination; this seems to have been developed in the background without public engagement.”

“We have all too often seen industry actors wooing government officials to get them to use their technology, thereby privately dictating the direction of use of AI and other new technologies to perform public functions,” Audibert said. “This is often at the expense of public procurement processes, which usually require proper consideration of all market options, as well as ensure transparency and accountability in the deployment of these technologies.”


If you want to see more reporting like this, make a contribution at givetobinj.org.
