Shortly after the ratification of the U.S. Constitution, the Bill of Rights added specific guarantees of freedom of speech and assembly and the right to a fair trial, aimed at setting limits on the powers of the newly created government. This is the precedent that President Biden’s White House science advisers invoked when they proposed a new bill of rights to protect citizens in the face of transformative artificial intelligence technology. It is an admirable initiative, but one that should extend globally, not just to Americans.
If the notion of a new Bill of Rights sounds grand, consider the context. International and national protections for basic rights, and against abuse and discrimination by governments and businesses, have made great strides since World War II. But these are aimed at human actors.
For the first time, decisions crucial to human well-being are being made in part or even in full by machines – on everything from job applications and creditworthiness to medical procedures and prison terms. And algorithmic decision-making is surprisingly prone to error or bias. Facial recognition technology can struggle with darker skin tones. What machines learn is influenced by the biases of those who program them and the partial data sets provided to them.
When things go wrong, it can be difficult to find humans who will take responsibility. In the UK this month, a Black former Uber driver whose account was deactivated after automated facial scanning software repeatedly failed to recognize him launched a complaint before an employment tribunal.
The first task of an AI Bill of Rights is therefore to strengthen existing protections for an AI world. It should apply to algorithmic decision-making in legal or otherwise life-changing domains. And it should extend to data and privacy, enshrining the rights of individuals to know what data is held about them and how it is used, and to transfer it between providers.
AI decisions should not emerge from an unfathomable black box, but be “explainable”. A bill should guarantee an individual’s right to know when an algorithm is making decisions about them, how it works and what data is being used. The right to challenge decisions and to appeal must be guaranteed. Some human or corporate responsibility must be maintained, with managers held accountable for errors or flawed decisions made by the systems they oversee, just as they are for those of human staff.
But AI also gives unscrupulous governments new capabilities to spy on, control and potentially coerce their citizens. A bill should define which technologies are and are not permitted, and the basic rules for their use.
America’s Bill of Rights initiative lags behind what Europe is doing. The EU’s General Data Protection Regulation already gives citizens a right not to be subjected without consent to decisions “based solely on automated processing”, although this is not widely enforced. A draft EU AI act sets out a hierarchy of risk levels for technologies, each subject to different safeguards. Some, such as “social scoring” – a nod to China’s social credit system, which aims to assess behavior and trustworthiness – would be banned outright.
The Biden administration is expected to respond to the EU’s invitation to work together on AI issues. But just as the 1948 UN Universal Declaration of Human Rights set out basic human rights to be universally protected, a global AI charter is warranted. Some countries would choose to go further; others, such as China, might refuse to sign up. But as during the Cold War, superior human rights protections – now against intrusive AI – could become a point of moral differentiation and leverage for democracies.