The White House Office of Science and Technology Policy has proposed guidelines for the use of artificial intelligence in its Blueprint for an AI Bill of Rights. While the blueprint emphasizes basic rights and principles of our democracy and catalogs examples of harm AI can cause, it fails to grapple with how to put those into practice without hobbling one of the most vibrant parts of the U.S. high-tech economy — its innovation ecosystem.
Compared to the European Union and China, America has a fundamentally different economic relationship with its technology innovation landscape: the U.S. innovates, the EU regulates, and China is determined to lead. The U.S. vastly outpaces Europe on almost every AI metric, from scientific paper citations to venture capital dollars to commercial activity. Meanwhile, AI is a key domain of both economic and military competition between the U.S. and the Chinese Communist Party.
In an attempt to pull ahead in this duopoly, China is investing heavily in AI and has made it a linchpin of both its commercial and national security sectors. The CCP is catching up to the U.S. across numerous dimensions through the savvy deployment of government guidance funds and whole-of-nation industrial policies, including Made in China 2025 (2015) and the New Generation AI Development Plan (2017).
Legislating AI simply because it’s the zeitgeist risks American competitiveness and national security. Following the EU’s lead into a regulatory quagmire could hobble the speed of American innovation in AI, limiting the nation’s ability to compete with China in both the economic and military spheres.
The OSTP document has four glaring flaws, concerning fair algorithms, data privacy, regulation, and overly broad and vague definitions.
Fair algorithms
How do you define a “fair algorithm”? The blueprint focuses on protection from algorithmic discrimination but fails to provide an empirical definition. Fairness as an abstract concept seems easy to grasp, but as a quantitative definition it is far murkier. Princeton computer scientist Arvind Narayanan has highlighted 21 different approaches to defining fairness. Put simply, how can automated systems be evaluated, and OSTP’s guidelines implemented, without a clear target metric?
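To make the ambiguity concrete, here is a minimal sketch, not drawn from the blueprint, of two of the competing fairness definitions Narayanan catalogs. The toy loan-approval data and group labels are invented for illustration; the same outcomes can satisfy one definition perfectly while failing the other.

```python
def demographic_parity_gap(approved, group):
    """Gap in overall approval rates between groups "A" and "B"."""
    rate = lambda g: sum(a for a, grp in zip(approved, group) if grp == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(approved, qualified, group):
    """Gap in approval rates among *qualified* applicants only."""
    def rate(g):
        pairs = [(a, q) for a, q, grp in zip(approved, qualified, group)
                 if grp == g and q]
        return sum(a for a, _ in pairs) / len(pairs)
    return abs(rate("A") - rate("B"))

# Hypothetical outcomes: 1 = approved / qualified, groups "A" and "B".
approved  = [1, 1, 0, 0, 1, 0, 1, 0]
qualified = [1, 0, 1, 1, 1, 1, 1, 0]
group     = ["A", "A", "A", "A", "B", "B", "B", "B"]

parity_gap = demographic_parity_gap(approved, group)               # 0.0
opportunity_gap = equal_opportunity_gap(approved, qualified, group)  # ~0.33
```

Here the demographic-parity gap is 0 while the equal-opportunity gap is about 0.33: the very same system is “fair” or “unfair” depending entirely on which metric a regulator picks.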
By contrast, expert opinion holds that China’s foray into AI regulation with the Internet Information Service Algorithmic Recommendation Management Provisions (2022) will place few restrictions on the use of AI. While this policy bears surface-level similarities to EU legislation, its constraints will not apply to the Chinese government. As Russell Wald, director of policy at Stanford’s Institute for Human-Centered Artificial Intelligence, has argued, this “regulation [is] geared towards benefiting the regime.”
Data privacy
The White House suggests that companies should allow users to withdraw consent for using their data. When a user does so, the White House advises that companies should have to remove that data from any machine learning models that were built from it.
Retraining all AI algorithms across all products and services each time a user requests it is economically unfeasible. Would this mean companies like Amazon, Netflix and social media networks must rebuild their customer recommendation systems every time someone deletes their data? If so, would the same be true for retailers such as Walmart that use personal data to optimize supply chains and inventory? On this, the blueprint is unclear. The economic and operational impact of these questions is potentially enormous, and mismanagement could allow Chinese competitors to pull ahead of American firms.
Regulation
Given the increasing ubiquity of AI and automated systems, the OSTP blueprint would place an onerous burden across vast swaths of the existing economy, hindering innovation. AI startups could take years rather than weeks or months to launch. With U.S. federal government R&D spending languishing at roughly one-third of Cold War levels, this has direct implications for the American national security innovation ecosystem.
Vague definitions
The OSTP’s definition of automated systems includes “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” What modern electronic product or service is not covered by this definition?
This would require a dystopian nightmare of pre-deployment data use interviews, community input, pre-deployment testing and assessment, ongoing monitoring and reporting, independent evaluation, opt-out and data removal, and timely human alternatives. It would create a massive burden not just for the tech industry but for any sector plausibly touched by AI or “automated” systems, essentially de-automating automation at enormous administrative and economic cost.
Privacy-Preserving Machine Learning
The goals of the White House’s blueprint are noble. AI should be used responsibly, such that it benefits society and prevents the illiberal ends to which China is deploying the technology. That said, rather than accomplish this through procedural checks, the U.S. government could promote non-regulatory protections such as privacy-preserving machine learning, or PPML. This broad class of approaches, including synthetic data generation, differential privacy, federated learning and edge processing, would address some of the core concerns of the blueprint without slowing the pace of innovation.
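As one concrete illustration of the PPML family, here is a minimal sketch of differential privacy’s Laplace mechanism. The function name, records and epsilon value are illustrative assumptions, not drawn from the blueprint or any real product.

```python
import random

def dp_count(records, predicate, epsilon=0.5):
    """Release a count with Laplace(1/epsilon) noise (sensitivity = 1).

    One person joining or leaving the dataset changes a count by at
    most 1, so this noise statistically masks any individual's record
    while keeping the aggregate useful.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

A firm could publish such noisy aggregates, e.g. `dp_count(users, lambda u: u.age >= 65)`, instead of raw user records: the released statistic stays accurate on average, yet no single individual’s presence or absence is detectable from it.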
PPML certainly does not address all of the issues highlighted in the Blueprint for an AI Bill of Rights, but it provides a plausible alternative to legislation that could otherwise undercut America in AI. In doing so, it may create a template for non-regulatory mechanisms that safeguard the public while mitigating national security threats to AI innovation in our continued competition with China.
If the Biden administration is serious about showing thought leadership that matches America’s technological leadership, it must move beyond idealistic principles. Otherwise, the next generation of leading AI and automated systems will be built by our great power competitors.
Jonah Cader is a graduate fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence. As a management consultant at McKinsey & Company, he worked between the U.S. and China, leading strategy projects for companies across the high-tech value chain.
Have an Opinion?
This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own you would like to submit, please email C4ISRNET and Federal Times Senior Managing Editor Cary O’Reilly.