NEWS from the Office of the New York State Comptroller
Contact: Press Office 518-474-4015

DiNapoli: Improved Guidance Needed for State Agencies Using AI To Avoid Risks

Audit Finds State Agencies Are Largely on Their Own When It Comes to AI and Taking a Patchwork of Approaches To Oversee Use

April 3, 2025

New York state’s centralized guidance and oversight of agencies’ use of Artificial Intelligence (AI) is inadequate and creates a risk that the technology could be used irresponsibly, according to an audit released today by State Comptroller Thomas P. DiNapoli. The audit looked at the state’s overall AI policy and how AI was used at four state agencies: the Office for the Aging (NYSOFA), the Department of Corrections and Community Supervision (DOCCS), the Department of Motor Vehicles (DMV), and the Department of Transportation (DOT).

The audit, the second in a series on AI Use in New York Government, follows a 2023 audit of New York City’s AI Governance.

“New York state agencies are using AI to monitor prisoners’ phone calls, catch fraudulent driver’s license applications, assist older adults, and support government services,” DiNapoli said. “Our audit found insufficient central guidance and oversight to check that these systems are reliable and accurate, and no inventory of what AI the state is using. This audit is a wake-up call. Stronger governance over the state’s growing use of AI is needed to safeguard against the well-known risks that come with it.”

While the state has moved to implement AI systems, guardrails for these technologies have not kept pace. Without adequate guidelines and oversight, AI systems that are meant to help expedite and expand services can, for example, expose data to unintended sources and create inequalities in decision-making and the delivery of services.

In New York State, use of AI is governed by the Office of Information Technology Services (ITS), which issued its Acceptable Use of Artificial Intelligence Technologies Policy (AI Policy) in January 2024. The AI Policy requires agencies to assess the risks in the AI systems they use. DiNapoli’s audit highlights a disconnect between the state’s eight-page AI Policy and how agencies understand AI and their responsibilities. While New York’s AI Policy gives an overview of responsible AI use, it lacks any detailed guidance on its implementation and instead simply directs agencies to federal guidelines for further information.

A major problem with the AI Policy is that it leaves agencies free to determine what is, or is not, responsible use of AI. Conflicting and confusing guidance regarding use of confidential information with AI systems as well as lack of staff training also create opportunities for inadvertent noncompliance and contribute to concerns about unintended uses and consequences.

The U.S. Government Accountability Office (GAO) has cautioned that AI “has the potential to amplify existing biases and concerns related to civil liberties, ethics, and social disparities,” but the state’s AI Policy contains only two sentences dedicated to bias management, failing to address both the data used to set up AI systems and the monitoring of already implemented systems for fairness and equity.

ITS also does not have an inventory of AI systems in use by state entities and is still developing a process for creating one, more than a year after releasing the AI Policy. Officials told auditors they become aware of AI systems only when an agency makes a procurement request or reaches out for support, leaving agencies to determine whether a system they are using is AI and must follow the state's AI Policy. That was the case with NYSOFA, which had an AI system but did not know the system fell under the AI Policy.

Knowing what AI systems are in use, how they’re being used and what data they’re drawing from is critical to ensuring this technology is being used ethically and responsibly.

Finally, ITS officials said state entities are responsible for their own AI review, risk assessments, reporting and compliance with the AI Policy requiring human oversight of AI systems and outcomes. There is, however, no mechanism for ITS to ensure these are done or done properly.

Agencies' Use of AI

Auditors found that while NYSOFA, DOCCS, and DOT use ITS' definition of AI, they do not have in-house policies or specific procedures governing how AI is authorized, developed, or used; ensuring the data it relies on is unbiased and reliable; or formally requiring human oversight. DOT has an AI working group that first met in June 2024, but it has not yet issued any formal policies.

DMV does have internal policies to assess AI risks and oversee its use, but no specific procedures to ensure these policies are carried out. It also has an AI Governance Committee and created its own definition of AI, but exempted its facial recognition software, which it said it did not consider to be an AI system, from AI oversight. It did not consult with ITS on that decision, although ITS' definition of AI explicitly considers a system that uses computer vision (i.e., that gathers information from digital images) and makes recommendations based on that data to be AI.

DMV and DOT provided an informal inventory of the AI they have in use or in development. None of the agencies maintained a formal AI inventory.

DOCCS uses AI software that monitors inmates' phone calls to ensure inmates are only making authorized calls. DOCCS' contract bars use or sharing of this information without its consent, and DOCCS owns the recordings. However, the agency does not have a plan for addressing potential AI risks, and the contract does not address reducing biases to decrease the possibility that an inmate could be unfairly or unnecessarily subjected to further investigation. The vendor explained to auditors how it mitigates biases in the system, but it was not clear whether those efforts work because DOCCS does not monitor or measure the system's error rates.

NYSOFA uses a voice-activated device that acts as an AI companion to combat social isolation and loneliness and foster independence among older people. It initiates conversations and remembers what users say. NYSOFA shared satisfaction surveys with auditors that reported a 95% reduction in loneliness among those using the device in 2023.

NYSOFA was uncertain whether its use of the device met the AI Policy's definition of AI, which it does. The policy requires human oversight by the agency; however, since the devices are provided directly to users, the only human oversight comes from the user. NYSOFA officials said the quality of the product's interactions is open to interpretation, based on each user's experience. They also said the vendor is responsible for ensuring the device's responses are accurate and appropriate, although that is not written into their contract, and NYSOFA does not conduct a review to check. When asked about security and privacy of the data generated, NYSOFA stated that the developers of the product own the performance metric data and recorded data, and that the vendor can use and access this data. NYSOFA officials did not know if the vendor was allowed to use the data to build or improve other systems elsewhere.

The vendor for the system auditors reviewed at DOT stated that the system uses AI for other clients, but that the way the technology was implemented for DOT did not include AI. Ultimately, there was insufficient information to determine whether the system was in fact AI. However, DOT is piloting three AI systems.

None of the agencies have conducted periodic reviews or audits of their AI systems to determine if they are accurate, reliable, and free of biases. Only DMV has a policy requiring such a review. In addition, while the agencies have trained staff on using their AI systems, none have trained employees on the risk of inaccuracies or biases in AI.

Recommendations 

The audit made seven recommendations, including that ITS strengthen its AI Policy by including guidance for agencies on adopting AI, work with agencies to support their responsible use of AI, and implement training. The recommendations for the other agencies included creating AI governance structures and policies and coordinating with ITS. The audit also recommended that DMV review its facial recognition system with ITS to determine if it’s complying with the state’s AI Policy.

Agencies' Responses

ITS stated that it was reviewing the recommendations, considering improvements, and creating training materials on AI for state entities. NYSOFA, DOCCS and DOT generally agreed with the recommendations and said they would create AI governance and consult with ITS. DMV generally disagreed with the findings, but agreed with the recommendations. Their full responses are available in the audit.

DiNapoli's audit, along with a previous audit of New York City AI governance, underscores the importance of independent oversight to ensure that AI governance is appropriately designed and complied with by agencies. DiNapoli will be advancing a bill to the state legislature that would require regular, independent audits of state agencies' AI governance and their development, use, and management of AI tools and systems. If enacted, the legislation would help safeguard against risks and improve the likelihood that AI technologies are used responsibly, ethically and transparently.

Audit 
Office of Information Technology Services, New York State Office for the Aging, Department of Corrections and Community Supervision, Department of Motor Vehicles, Department of Transportation: New York State Artificial Intelligence Governance

Related Audit 
NYC Office of Technology and Innovation: Artificial Intelligence Governance (Feb. 2023)