Big tech’s closed AI ecosystems are hindering trust and development, report claims
Non-profit open internet firm, Mozilla, is calling on big tech to drop the “black box” approach to AI, in favour of a more open ecosystem that ensures trust, development, and regulation.
In its latest “Accelerating Progress Toward Trustworthy AI” report, published four years after its previous edition, Mozilla warns of the restrictions imposed by closed AI ecosystems, claiming they have produced opaque, centralised models built on “our harvested data.”
Open source AI essentially allows anyone to access the source code for free to use, modify, and distribute.
Mozilla has accused big tech and its closed models of dominating the field, stifling competition, market access, independent research, and public scrutiny of AI systems.
Over the past few years, big tech firms have invested in players such as OpenAI, the maker of ChatGPT, and AI firm Anthropic, “as a way to control the field and bolster their own cloud computing businesses,” it says.
According to Mozilla’s report, this makes it difficult to advance the open approaches needed to create a better AI ecosystem – approaches that a growing wave of startups, builders, educators, scholars, researchers, and civil society campaigners are also advocating in pursuit of trust and transparency.
“Building trustworthy AI is both urgent and complicated, and no one person or organisation can tackle all of these risks alone,” said Mark Surman, president, executive director, and board member of the Mozilla Foundation.
“At a time when AI is impacting society at large, we cannot continue to accept a black box approach kept in the hands of just a few companies. We need a wider set of voices involved in its design and deployment to ensure AI systems are trustworthy.”
The report argues that promoting open source will allow users to hold AI firms to account when their systems cause harm through discriminatory outcomes, abuse of data, or unsafe practices.
“We will continue to expand our investment in startups, open source projects, and nonprofits building trustworthy AI,” the report reads. “Properly resourced, this ecosystem has the potential to challenge the big players and push AI in a better direction.”
It also calls on policymakers to take a stronger stand to keep pace with the rapid growth of AI development, and while acknowledging some improvements in diversity in the sector, says there is still a need for a “wider set of voices involved in [AI’s] design and deployment.”