‘Rules as Code’ will let computers apply laws and regulations. But over-rigid interpretations would undermine our freedoms
- Written by Guido Governatori, Software Systems Research Group Leader, Data61
Can computers read and apply legal rules? It’s an idea that’s gaining momentum, as it promises to make laws more accessible to the public and easier to follow. But it raises a host of legal, technical and ethical questions.
The OECD recently published a white paper on “Rules as Code” efforts around the world. The Australian Senate Select Committee on Financial Technology and Regulatory Technology will be accepting submissions on the subject until 11 December 2020.
Machines cannot directly read and respond to rules expressed in human language. To make rules machine-readable and actionable, an interpretation of the rules must also be coded. Determining how best to code law is important as we venture deeper into a digital future.
Decades in the making
The coding of legal rules is not entirely new. Over the past five decades, artificial intelligence and law researchers have produced a range of formally coded versions of tax and other laws.
In 1986, for example, UK computer scientists Bob Kowalski and Marek Sergot coded the British Nationality Act. More familiar examples of such work include the guidance instruments and tools provided by the Australian Taxation Office to assist taxpayers.
Over the past decade Data61, the data science arm of the CSIRO, has developed a way to re-imagine regulation as an open platform, based on digital logic. This platform makes it easier to develop software that can automatically check whether the processes of a business or other organisation comply with relevant rules. For example, this could be used to check whether a new company needs to apply for any permits, and if so how to do it.
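To give a flavour of what automated compliance checking involves, here is a minimal sketch of a permit rule encoded as executable logic. It is not Data61's actual platform; the rule and permit names are invented purely for illustration.

```python
# Hypothetical illustration (not Data61's platform) of a permit rule
# encoded as machine-executable logic. Rule content is invented.

from dataclasses import dataclass

@dataclass
class Company:
    serves_food: bool
    handles_hazardous_goods: bool

def required_permits(company: Company) -> list[str]:
    """Return the permits a company needs under these illustrative rules."""
    permits = []
    if company.serves_food:
        permits.append("food business licence")
    if company.handles_hazardous_goods:
        permits.append("dangerous goods storage permit")
    return permits

# Example: a cafe that serves food but stores no hazardous goods
print(required_permits(Company(serves_food=True, handles_hazardous_goods=False)))
# -> ['food business licence']
```

Once rules are expressed this way, software can check a company's circumstances against them automatically, which is the kind of capability the paragraph above describes.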
What is ‘Rules as Code’?
Coding legal rules is often complex. Rules written in human language are not drafted with coding in mind. Vague, broad rules may be difficult to interpret and to apply to specific cases.
The coding process is painstaking and resource-intensive. Law and technology experts must grapple with each rule in sets of rules that are often very large.
In response, government projects in New Zealand and New South Wales (both closely linked to Australian digital government expert Pia Andrews) and in France, Canada and other countries have tried a different approach: “Rules as Code”. This means that drafters and coders develop legal rules together, producing a human-language text as well as an official coded version.
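To make the idea concrete, here is a hypothetical pairing of a human-language rule and its coded counterpart. It is invented for illustration and is not drawn from any of the projects mentioned above.

```python
# Hypothetical example of a rule drafted in parallel as human-language text
# and executable code (not taken from any actual Rules-as-Code project).
#
# Human-language rule (invented for illustration):
#   "A person is eligible for the concession if they are aged 65 or over,
#    or if they hold a pensioner card."

def eligible_for_concession(age: int, holds_pensioner_card: bool) -> bool:
    """Coded counterpart of the human-language rule above."""
    return age >= 65 or holds_pensioner_card

assert eligible_for_concession(age=70, holds_pensioner_card=False)
assert eligible_for_concession(age=40, holds_pensioner_card=True)
assert not eligible_for_concession(age=40, holds_pensioner_card=False)
```

Even in this tiny example, the code pins down a single reading of the text. A court or caseworker might read the human-language rule more flexibly in an unusual case, which is the tension discussed below.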
The recent OECD report contends that Rules as Code “could allow businesses to consume machine-consumable versions directly from government, reducing the need for individual interpretation and translation”. Further, technical capacity to translate human-readable rules into machine-consumable ones:
may eliminate (or, substantially minimise) the need for multidisciplinary cooperation and learning, thereby reducing the need for different types of experts to adjust their ways of working to improve the overall rule quality.
Loss of flexibility
While Rules as Code may deliver efficiency benefits, it may also reduce flexibility in how laws are interpreted. Interpretation of the law is carried out by a range of stakeholders, with the courts as the final authority.
Coding makes it easy to apply the rules to cases that the rule-makers addressed, as well as ones they may have foreseen even if they didn’t address them explicitly. However, the coded version produced during drafting may be too rigid to respond appropriately and fairly to unforeseen cases.
Rules as Code raises a number of thorny legal issues. Would it be constitutional when applied to complex laws, or would it be viewed as appropriating, undermining or limiting the role of courts to interpret the law? How authoritative is the drafter and coder’s view of the meaning of the new law?
If a Rules-as-Code tool informed by an incorrect interpretation provides wrong information, how will the mistake be identified, and who will be liable? A possible example would be a tool that erroneously advises a user they are ineligible for a welfare payment.
Understanding the risks
Excitement about the potential of Rules as Code should be balanced by a deep understanding of the structural risks. Rules as Code assumes the law, regulations and the role of government remain the same as they were in the 20th century.
However, technology is transforming law and empowering people and other entities. Colin Rule, a global leader in online dispute resolution, recently asserted this will have a significant impact on the future of justice.
Citizens use technology in almost every area of their lives, and they have the fundamental right to use, interpret and respond to rules in a way that is consistent with the law (that is, with what a court would hold). This is true whether or not that interpretation agrees with the government's own interpretation built into code.
Regulatory computer systems that implement a single “authoritative” or “official” view of the relevant rules can undermine the rules themselves, human freedoms, and democracy.
The long-standing approach to coding law and the new Rules-as-Code approach both provide important building blocks for digital law in the future. But neither approach can successfully navigate the legal challenges and demands while producing coded law at the scale required to support general AI solutions.
How to make it work
A better approach would be to build AI solutions that can interpret and code legal rules with sophistication and transparency, advancing the objectives of the rules while supporting the complex rights of individuals. This future vision requires, among other things, mechanisms to determine when to interact with human regulators and domain experts, as well as institutions that would ensure the integrity of the outcomes.
A range of expert knowledge – not only legal, but also ethical, economic, financial, medical, psychological, and so on – is essential to correctly determine how this can be achieved.
Australia should support broad, collaborative, multi-disciplinary, public-private research partnerships into legal technologies or “lawtech”. This would harness our existing knowledge and capacity in AI and Rules as Code. By combining the right expertise and resources we can enable Australia to embrace the future opportunities and properly address the challenges of coding law.