ML practitioners push for mandatory AI Bill of Rights

The AI Bill of Rights, bias, and operational challenges amid tightening budgets are pressing issues affecting the adoption of ML as well as project and initiative success, according to Comet.

“Our latest survey comes as ML practitioners are facing a new reality, with its own unique set of challenges ahead,” said Gideon Mendels, CEO of Comet.

“Organizations may tighten their budgets during these unstable economic times, but because leaders recognize ML’s potential to unlock incredible business value, there may be a push for ML practitioners to tackle more complex problems quickly. This may include addressing bias or adhering to a new AI Bill of Rights, making it imperative for ML practitioners, and for the success of an organization’s ML projects, to have the right tools in place,” Mendels continued.

AI Bill of Rights

The US White House Office of Science and Technology Policy (OSTP) recently published a Blueprint for an “AI Bill of Rights,” setting out five principles that should guide the design, use, and deployment of automated systems.

The document provides a framework for how government, technology companies, and citizens can collaborate to ensure more accountable AI.

In terms of the reaction within the ML community:

  • 73% of ML practitioners agree that the AI Bill of Rights should be mandated by law rather than adopted on an opt-in basis.
  • 39% think the AI Bill of Rights will affect their approach to ML deployment and development by slowing the process down.
  • 37% think the AI Bill of Rights will make the ML deployment and development process more difficult.
  • 35% believe it will make the process more expensive.
  • 38% believe the AI Bill of Rights will make the ML deployment and development process safer, and 37% think it will reduce the possibility of privacy violations.
  • 35% think the AI Bill of Rights will reduce the frequency of unsafe or ineffective ML systems.

Bias in AI products

One of the reasons the OSTP created the initial Blueprint for the “AI Bill of Rights” was the prominence of bias in AI products.

Bias has been one of the leading topics related to AI in recent years. Some view bias as overhyped, with ML practitioners capable of implementing best practices to mitigate it, while others think it is a problem that will continue to plague AI systems.

The latest survey reveals how ML practitioners are approaching bias.

  • 27% of ML practitioners surveyed believe that bias will never truly be removed from AI-enabled products.
  • 38% have a designated point of contact or support team that is looking out for bias when planning the design and/or launch of an AI-enabled product.
  • 33% of respondents think reducing the risk of bias occurring is one of the main benefits of explainable AI, which might indicate this could be a solution (though not without its own challenges).

Challenges for ML practitioners

The state of the economy clearly weighed on respondents’ minds as they considered its impact on their business and how that could trickle down to affect investments in ML.

They also identified other areas of stress or challenges they anticipate facing in 2023.

  • 100% of the machine learning practitioners surveyed said the economic situation will impact their business. The most common way, according to respondents, will be redundancies in the tech team (40%), followed by an impact on budgets (37%) and a hiring freeze (36%).
  • 32% say innovation will slow as a result.
  • Over the next year, the ML practitioners surveyed anticipate the biggest challenge to be sustainability (41%), followed by retention (39%), hiring staff with the correct institutional knowledge (36%) and explainable AI (36%).