The Massachusetts government is seeking to buy a chat-based artificial intelligence assistant to be embedded in "the daily work of more than 40,000 Executive Branch employees," according to a Request for Quotes issued through the Executive Office of Technology Services and Security.
Over the next six months, the Commonwealth plans to contract with a responsible and transparent partner to develop the assistant. Under the RFQ, the successful bidder will work side by side with the government to help state employees work more efficiently.
Alex Mark, a research fellow at the Cambridge Boston Alignment Initiative, said this type of procurement is unusual in that the government wants to develop its own assistant rather than buy an existing one.
“I do think this is a pretty smart way to go about a procurement, where you’re trying to customize an AI tool for the government,” Mark said. “Clearly, they’re trying to steer this towards bureaucracy.”
The assistant is required to support four main tasks through natural language processing: drafting, summarization, analysis, and translation.
In its RFQ, the state says it is seeking a partner not only to develop a model with these capabilities but also to "position Massachusetts as a national leader in building an AI ecosystem that drives opportunity and safeguards the public interest."
The contract is planned to run for 24 months, with the possibility of renewal or extension.
Similar plans have been implemented around the nation, including Pennsylvania's year-long generative AI pilot program, in which 175 employees across 14 agencies incorporated ChatGPT Enterprise into their work.
Participating employees are estimated to have saved 95 minutes a day on routine tasks such as brainstorming and proofreading, and the pilot has earned Pennsylvania recognition as a national leader in responsible AI.
However, there are concerns about governments adopting AI systems to automate certain tasks. AI models are often biased, which could pose risks in government systems.
“I think at this point, a better policy is ensuring that humans remain in the loop and humans are aware of the bias in these systems, rather than trying to remove every instance of bias,” Mark said. “What is considered neutral to one person will be considered biased to another person. That’s a problem that predates AI assistance.”
Private companies are already moving into this space: OpenAI, the maker of ChatGPT, has partnered with the U.S. General Services Administration to offer ChatGPT Enterprise to certain federal agencies for $1 for the next year.
While the intentions of these AI producers are unclear, Mark spoke positively of procurement arrangements in which private companies work alongside the public sector.
“It’s much easier to regulate a technology that the government is using,” Mark said.
Data security concerns also remain when governments implement AI models, as corporate security standards are typically lower than government standards.
However, ChatGPT Gov, a version of the product designed for government use, is marketed as allowing governments to manage their own security settings.
The results of the RFQ process are due to be announced on October 31.
This article was produced for HorizonMass, the independent, student-driven news outlet of the Boston Institute for Nonprofit Journalism, and is syndicated by BINJ's MassWire news service.