Artificial Intelligence: the view from the White House

Those interested in legal services and technology need to keep their eye on developments in artificial intelligence (AI). In the course of preparing a national US strategy, a committee of the National Science and Technology Council has drafted a report, Preparing for the Future of Artificial Intelligence, which is worth reading as a crib to the latest issues. It is much better written than you might expect of committee (or, indeed, machine) authorship.

The report eschews fanciful speculation about the future and is largely concerned with what it terms ‘narrow AI’, ‘which addresses specific application areas such as strategic games, language translation, self-driving vehicles and image recognition’. In particular, it looks at ‘machine learning’, which it distinguishes from older ‘expert system’ approaches. Machine learning analyses bodies of data to ‘derive a rule or procedure that explains the data or can predict future data’. The report explains how sophisticated this conceptually simple process has become.
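To make that definition concrete, here is a minimal sketch of deriving a rule from data that can then predict future data. The choice of Python, scikit-learn and a linear model is my illustration only; the report prescribes no particular tools:

```python
# A toy version of the report's definition of machine learning:
# analyse a body of data and derive a rule that predicts future data.
# (Python, scikit-learn and the linear model are illustrative
# assumptions; the report names no particular tools.)
from sklearn.linear_model import LinearRegression

hours = [[1], [2], [3], [4], [5]]     # past inputs: hours of review
documents = [52, 101, 148, 205, 251]  # past outputs: documents processed

model = LinearRegression().fit(hours, documents)  # derive the rule from the data
print(model.predict([[8]]))                       # predict future data

# An older 'expert system' would instead have a human encode the rule
# directly, e.g. a fixed documents_per_hour = 50 supplied by an expert.
```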

One of the most interesting findings from the use of AI – which is relevant to its deployment in fields like law – is that the most effective way of using machines in complex intellectual areas may be in tandem with humans. ‘In contrast to automation, where a machine substitutes for human work, in some cases a machine will complement human work … Systems that aim to complement human cognitive capabilities are sometimes referred to as intelligence augmentation. In many applications, a human-machine team can be more effective than either one alone, using the strengths of one to compensate for the weaknesses of the other. One example is in chess playing, where a weaker computer can often beat a stronger computer player, if the weaker computer is given a human teammate—this is true even though top computers are much stronger players than any human. Another example is in radiology. In one recent study, given images of lymph node cells, and asked to determine whether or not the cells contained cancer, an AI-based approach had a 7.5 percent error rate, where a human pathologist had a 3.5 percent error rate; a combined approach, using both AI and human input, lowered the error rate to 0.5 percent, representing an 85 percent reduction in error.’
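It is worth pausing on the arithmetic: the 85 percent figure is measured against the error rate of the stronger single performer, the human pathologist. The percentages are the report's; the quick check below is mine:

```python
# The radiology figures quoted above, with the reduction worked out.
ai_error = 0.075        # AI alone: 7.5% error rate
human_error = 0.035     # human pathologist alone: 3.5% error rate
combined_error = 0.005  # human + AI together: 0.5% error rate

# The baseline is the human's error rate, the stronger single performer.
reduction = (human_error - combined_error) / human_error
print(f"{reduction:.1%}")  # 85.7%, which the report rounds to "85 percent"
```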

This message of partnership is the one consistently put out by those, like IBM and Ross Intelligence, who are developing the use of AI in the field of law. Thus, AI may reduce the number of lawyers and legal workers required for a particular task, but it will not eliminate the need for them. Those who remain will, however, work at a higher level.

The report gives a number of examples of the deployment of AI by government or otherwise for the public good. Government also has, of course, to regulate developments in AI such as autonomous cars and pilotless drones. Jobs for lawyers – assisted by AI – should abound here. The report also notes the composition of the AI workforce as an issue that government should address: ‘the lack of gender and racial diversity in the AI workforce mirrors the lack of diversity in the technology industry and the field of computer science generally’.

The deployment of AI in the justice system is considered – raising a specific point with general relevance. Systems based on big data can be susceptible to bad data: ‘In the criminal justice system, some of the biggest concerns with Big Data are the lack of data and the lack of quality data. AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias. It is important that anyone using AI in the criminal justice context is aware of the limitations of current data. A commonly cited example at the workshops is the use of apparently biased “risk prediction” tools by some judges in criminal sentencing and bail hearings as well as by some prison officials in assignment and parole decisions, as detailed in an extensively researched ProPublica article. The article presented evidence suggesting that a commercial risk scoring tool used by some judges generates racially biased risk scores. A separate report from Upturn questioned the fairness and efficacy of some predictive policing tools.’ This is likely to become an increasingly important issue as tools are developed to assist in the making of difficult decisions in a justice context.
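The mechanism is easy to demonstrate. The sketch below is entirely synthetic and mirrors no real tool: two groups reoffend at the same underlying rate, but one group's reoffending is recorded more often because it is policed more heavily, and a model trained on those recorded labels duly scores that group as twice the ‘risk’:

```python
# A purely synthetic illustration of bias in, bias out. All numbers
# are invented; this mirrors no real risk-prediction tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the SAME underlying reoffending rate (20%) ...
group = rng.integers(0, 2, n)
reoffends = rng.random(n) < 0.20

# ... but group 1 is policed twice as heavily, so its reoffending is
# recorded twice as often. The label reflects enforcement, not conduct.
recorded = reoffends & (rng.random(n) < np.where(group == 1, 0.9, 0.45))

# A model trained on the recorded labels learns the enforcement gap.
X = group.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, recorded)

print(model.predict_proba([[0.0]])[0, 1])  # ~0.09 "risk" for group 0
print(model.predict_proba([[1.0]])[0, 1])  # ~0.18 "risk" for group 1
```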

AI is, of course, inherently international: ‘International engagement is necessary to fully explore the applications of AI in health care, automation in manufacturing, and information and communication technologies (ICTs). AI applications also have the potential to address global issues such as disaster preparedness and response, climate change, wildlife trafficking, the digital divide, jobs, and smart cities. The State Department foresees privacy concerns, safety of autonomous vehicles, and AI’s impact on long-term employment trends as AI-related policy areas to watch in the international context.’

The use of AI in weapon systems is an example of its international – and not uncontroversial – deployment: ‘These technological improvements may allow for greater precision in the use of these weapon systems and safer, more humane military operations.’ Here lie some pretty major ethical issues on which the US takes a distinctive view: ‘Over the past several years, in particular, issues concerning the development of so-called “Lethal Autonomous Weapon Systems” (LAWS) have been raised by technical experts, ethicists, and others in the international community. The United States has actively participated in the ongoing international discussion on LAWS in the context of the Convention on Certain Conventional Weapons (CCW), and anticipates continued robust international discussion of these potential weapon systems going forward. State Parties to the CCW are discussing technical, legal, military, ethical, and other issues involved with emerging technologies, although it is clear that there is no common understanding of LAWS. Some States have conflated LAWS with remotely piloted aircraft (military “drones”), a position which the United States opposes, as remotely-piloted craft are, by definition, directly controlled by humans just as manned aircraft are. Other States have focused on artificial intelligence, robot armies, or whether “meaningful human control” – an undefined term – is exercised over life-and-death decisions. The U.S. priority has been to reiterate that all weapon systems, autonomous or otherwise, must adhere to international humanitarian law, including the principles of distinction and proportionality. For this reason, the United States has consistently noted the importance of the weapons review process in the development and adoption of new weapon systems.’

The report concludes: ‘Government has several roles to play. It should convene conversations about important issues and help to set the agenda for public debate. It should monitor the safety and fairness of applications as they develop, and adapt regulatory frameworks to encourage innovation while protecting the public. It should support basic research and the application of AI to public goods, as well as the development of a skilled, diverse workforce. And government should use AI itself, to serve the public faster, more effectively, and at lower cost.’ Recent reports from, for example, the ABA and the Law Society of England and Wales indicate that legal professional bodies accept that they have a similar range of roles to fulfil.
