
Is law computable?

by Sara Donati

 

Abstract

In a digitized world, where technology is increasingly present in everyday life and insinuates itself even into fields where it had never been considered before, this paper investigates whether it is possible to use artificial intelligence (AI) in a field as sensitive as law. More specifically, the paper analyses the various issues inherent in the use of AI in a trial. Does this tool bring benefits to the judge's decision-making process, or does it create disparities among defendants?

Although technology can undoubtedly make many aspects of a trial easier, the time when a machine is able to reason like a human being and therefore replace or advise the judge has not yet arrived.


Affiliation

Sara Donati

LL.M. in Intellectual Property and Information Law, Candidate 2020

King’s College London

Dickson Poon School of Law

E-mail: [email protected]

 

Contents

Introduction
Paragraph 1 Law and computability
1.1 What is the purpose of the law?
1.2 What does computable mean?
1.3 Advantages of an AI system
1.4 The consciousness problem
Paragraph 2 Problems of artificial intelligence systems in the criminal field
2.1 Bias
2.2 …individuality of the sentence
2.3 …opacity of the system
2.4 Why is the law not computable?
2.5 Possible future scenarios
Conclusion
Bibliography

Introduction

“All are equal before the law and are entitled without any discrimination to equal protection of the law” [1].

This is an essential value that had to be fought for, and was, through the millennia. It means that the law must ensure that no one is privileged or discriminated against by the government, and that everyone is treated equally regardless of race, gender, religion and so on.

Now consider the idea that an algorithm could erase this principle, or weaken it. If an algorithm could make law enforcement unfair but faster, would this be desirable?

PARAGRAPH 1

Law and computability

1.1 What is the purpose of the law?

Since the times of the ancient Greeks and Romans, people have been wondering about the concept of law and its implications. Very famous is the work of Sophocles, “Antigone”, where the contrast between the law of the State and the unwritten laws can be seen. The protagonist considered it more appropriate to follow the latter and gave a burial to her brother, even though it was forbidden by the laws of the State.

Also, Antiphon the Sophist maintained that “most things that are right according to the law are in opposition with nature”[2], since man by nature is inclined to selfishness, while laws, the result of a compromise, have as their goal the imposition of the principle of justice. The ancient Greeks and Romans attached great importance to the values and laws of nature, which they considered more significant than the laws of men. This attention was given not only to human values but also to society and community. For instance, 2500 years ago Pericles argued that the law should regulate a society administered not for the good of the few but for the majority[3]. These concepts were then further developed by the Romans, who established the idea of the republic as one of “common good”[4].

In my opinion, law expresses that set of rules which, by incorporating and expressing values, such as equality and solidarity, are intended to guide citizens in the creation of a prosperous society. The law aims to protect the individual, but from a collective point of view.

But that is not all the law is. The law is also made up of exceptions, quibbles, peculiarities, nuances. When judges apply the law, for instance in criminal cases, they do not perform mechanical work, because the law changes from case to case. For example, two murderers will probably receive different sentences, even if the crime is the same, because the judge will consider many variables: the dynamics of the case, the facts, mitigating factors, aggravating factors, etc.

It is also important to consider that judges often work with intuition. They cannot always explain why they trust one witness more than another, or why they think that one criminal is likely to commit a crime again.

1.2 What does computable mean?

Artificial intelligence, machine learning (from now on ML) and deep learning are all linked concepts based on algorithms. But what exactly is an algorithm? An algorithm is a set of rules that must be followed when solving a particular problem[5]. It is the formula that allows Sophia the robot to work; it is the function on which the Turing machine is based; it is the tool that allows predictions to be made from generalised statistics. Fundamentally, a computation takes information and transforms it through one or a set of algorithms[6].

In recent years there has been more and more talk about ML and its interaction with the law.

ML refers to the capacity of a system to iteratively improve its performance by leveraging data[7]. Essentially, an ML algorithm performs a process of self-learning: it uses the data added each time and learns from the correction of previous errors completely independently, without receiving explicit instructions. The aim of ML is to progressively improve the performance of an algorithm in identifying patterns in data by building a model based on samples[8]. Once a neural network receives a set of data as input and produces an output, it is possible to test the accuracy of the model through statistical functions, and the performance of the algorithm can be improved further using tools such as cross-validation.

But how can a machine compute? Just as humans have neurons, a computing machine has a network of artificial neurons that can compute functions: synapses connect the dots representing neurons, which weight the inputs and pass the result to the next hidden layer of the network until the output is created[9]. A machine can learn to distinguish cats from dogs because it is exposed to a wide training set. This inevitably raises two questions: first, what kind of data is given to the machine, and with what logic; second, when a machine uses data and creates output, does it do so with any awareness of the process it is carrying out?

Moreover, although people are used to the belief that machines can compute everything, this is clearly false. What a machine can do in many cases is reduce the error in approximating a result, using algorithms that do not have a 100% correct solution. For instance, most differential equations are not exactly solvable, yet ML can arrive at a result that respects the conditions imposed by the equation within a certain margin of error.
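To make this workflow concrete, the following is a minimal sketch of the loop just described: train a small neural network on labelled samples, test its accuracy with a statistical function, then cross-validate. It is illustrative only and assumes Python with scikit-learn, since no specific library is discussed here.

```python
# Illustrative sketch of the ML workflow described above (assumes scikit-learn;
# no specific library is named in the text).
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier

# A synthetic stand-in for a labelled training set (e.g. cats vs dogs).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: one hidden layer of weighted "neurons".
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Test the accuracy of the model with a statistical function...
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# ...and estimate performance more robustly with cross-validation.
print("5-fold CV accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
```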

Just think of the rule of criminal law according to which “the defendant is presumed innocent unless the prosecution has proved guilt beyond a reasonable doubt”. How is it possible to express through an algorithm such “abstract” concepts as beyond a reasonable doubt? How would a machine quantify it?
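A deliberately naive, purely hypothetical encoding shows the difficulty: the standard collapses into an arbitrary numeric threshold, and nothing in the law says what that number should be.

```python
# Purely hypothetical: a naive attempt to encode "beyond a reasonable doubt"
# as a number. The 0.95 threshold is arbitrary (why not 0.9, or 0.99?);
# the law itself supplies no such figure, which is exactly the problem.
REASONABLE_DOUBT_THRESHOLD = 0.95

def verdict(estimated_guilt: float) -> str:
    """Convict only if the estimated probability of guilt exceeds the threshold."""
    return "guilty" if estimated_guilt > REASONABLE_DOUBT_THRESHOLD else "not guilty"

print(verdict(0.94))  # not guilty
print(verdict(0.96))  # guilty: a 0.02 difference decides a human life
```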

1.3 Advantages of an AI system

Some of the major problems of the American and European judicial systems are the extreme length of trials, the presence of injustice, and high costs. Would it not be a huge improvement if some of these problems were solved and many aspects of the trial were simplified? One might think that, for example in criminal proceedings, the judge's job is simply to apply the law automatically on the basis of the evidence in the case, without great cognitive effort; a task that could then also be carried out by an algorithm. The use of AI systems could then make it possible to apply the same standards to every trial without discrimination, erasing the risk of human error. In fact, a 2011 study of Israeli judges showed that they handed down harsher verdicts when they were hungry[10]. Moreover, a robot judge could have far greater memory and learning capacity than a human judge. In some American states, courts are already using a system based on an algorithm that predicts whether a prisoner is likely to commit a crime again[11].

Maybe in the future, this kind of system could be more efficient and fairer than the current one. Or maybe not.

But it is not as easy as it sounds. When the computer uses algorithms established by the legislator, which therefore reflect the law, does it actually know what it is doing? It is legitimate to ask, as Max Tegmark did: “will everybody feel that they understand its logical reasoning enough to respect its judgement?”[12]. Who would trust a judgment made by a computer using a secret algorithm? Any convicted person wants to know why they were convicted, a right guaranteed by the criminal law of many countries. I do not think anyone would be satisfied to be told “we trained the system on lots of data, and this is the decision”[13]. This is in fact one of the reasons why Mr Loomis appealed, as we will see later on.

1.4 The consciousness problem

To understand one of the reasons why people do not trust machines to have the power to influence the decisions of judges, we need to focus on the concept of consciousness.

Alan Turing and John von Neumann, the founders of the modern science of computation, thought that machines could mimic all of the brain's abilities and could therefore be considered conscious. Recent studies have partly refuted this theory[14]. Technology has made a lot of progress, to the point of creating artificial neural networks inspired by neurobiology; nevertheless, these networks are not comparable to the human brain. But what does consciousness mean? It is impossible to find a universal definition of it; indeed, consciousness is controversial. Broadly, it is possible to say that consciousness is having experience[15]. It is not something connected only to verbal reportability: individuals who suffer from global aphasia may be unable to talk, but this does not mean that they are unconscious. Furthermore, verbal reportability is not sufficient for consciousness. Just as a parrot repeating the words it hears is not conscious, machines may be able to report their internal status, but this does not mean that they have consciousness.

Indeed, any neural network relies on instructions given by a human who decides what the machine has to do[16]. The machine simply takes the data it has and creates outputs through the algorithm, but it has no awareness of why a given input leads to one conclusion instead of another. In the human brain the structure of synapses and neurons is much more complicated than that of a neural network, which, although inspired by the human brain, is simpler. It is a copy of inferior quality, and this relationship could be compared to that between the world of Platonic ideas and the real world. It is not to be excluded, however, that in the future the complexity of neural networks may reach that of the human brain and therefore give machines consciousness in the human sense of the term.

Very clear in this regard is the Chinese room thought experiment. A man who has no knowledge of Chinese is inside a room and, using a dictionary, is able to translate Chinese sentences that are passed to him from the outside. But he does not understand the meaning of what he is doing, because he is only mechanically applying meanings to symbols. For the same reasons, a system that uses algorithms, such as a robot judge, works in the same way: it has no consciousness, and consequently it is natural to question giving it such an important role when human lives are at stake. As far as COMPAS is concerned, more than consciousness, it is necessary to ask how to accept a machine whose mechanisms are unknown. If it were known how the computer works, its judgment could be questioned just as that of a human judge is. Assuming that its structure is balanced and tries to limit bias, its result would be much more logical than that of a human, who has emotions that cannot be completely controlled. Yet it is more instinctive to trust a human judge, because he thinks the same way humans do.

PARAGRAPH 2

Problems of artificial intelligence systems in the criminal field

2.1 Bias

The use of artificial intelligence systems in the criminal field raises, as seen, a number of major problems. Since the functioning of machines is entirely mechanical and mathematical, in theory algorithms should not have prejudices, as these are intrinsic to human reasoning. However, there have lately been circumstances in which even algorithms show prejudice[17].

Famous in this regard is the case Wisconsin v Loomis[18], where the Wisconsin Supreme Court ruled on the appeal of Mr Loomis, who had been sentenced to six years in prison. In determining the sentence, the judges also relied on the Presentence Investigation Report (PSI), a report of the investigative findings on the defendant's criminal history. The PSI and the COMPAS programme, whose results were included in the report, showed that Mr Loomis was at high risk of recidivism[19]. COMPAS is the acronym for Correctional Offender Management Profiling for Alternative Sanctions[20]. It is a case management and decision support tool created by Northpointe (now Equivant), a private enterprise, and used by some American courts[21] to assess the likelihood that a defendant will commit a crime again. The COMPAS software uses an algorithm to assess potential recidivism risk. The algorithm processes data obtained from the defendant's file and from the answers given to a questionnaire. The data entered in the database are collected, digitalised and adapted according to human-designed cataloguing criteria; consequently, they can reflect the bias of the human designers. The algorithm is not disclosed by the programmers, so it is impossible to dispute either the procedure or the result.

If the algorithm were 100% correct, it would bring an advantage to the justice system, making fairer decisions about who has to be incarcerated and for how long. The problem is that the algorithm is far from correct. It is possible to optimise for “true positives”, meaning that it will flag as many people as possible who have a high probability of committing a crime again; the risk, however, is that the number of flagged people who would never have committed another crime also increases. Alternatively, one can lower the number of “false positives”, but in this way the number of “false negatives” increases, i.e. those who are likely to commit new crimes and who receive more favourable treatment[22]. In 2016 ProPublica, a non-profit, independent newsroom, conducted an investigation of the algorithm and found that “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend”, whereas COMPAS “makes the opposite mistake among whites: they are much more likely than blacks to be labeled lower-risk but go on to commit other crimes”[23]. Indeed, a teenage African-American girl (with no previous criminal record) was rated as medium risk by COMPAS after she was caught trying to steal a bicycle; by contrast, a 54-year-old white man with a criminal record and with drugs in his car was rated as low risk by COMPAS after being arrested for shoplifting[24].
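The trade-off just described can be shown with a toy simulation (illustrative only, since the real COMPAS model and its data are secret): moving the decision threshold on a risk score exchanges false positives for false negatives, but cannot eliminate both.

```python
# Toy simulation of the threshold trade-off (illustrative only; the real
# COMPAS model and data are secret). Raising the threshold reduces false
# positives (harmless people flagged) but increases false negatives
# (re-offenders missed), and vice versa.
import numpy as np

rng = np.random.default_rng(0)
reoffended = rng.integers(0, 2, size=10_000)                 # toy ground truth
scores = np.clip(0.5 * reoffended + rng.normal(0.25, 0.2, 10_000), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    false_pos = int(np.sum(flagged & (reoffended == 0)))     # flagged, never re-offend
    false_neg = int(np.sum(~flagged & (reoffended == 1)))    # missed re-offenders
    print(f"threshold {threshold}: {false_pos} false positives, "
          f"{false_neg} false negatives")
```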

The questions in the questionnaire make no reference to the person's race or origin. However, this information can be deduced by the algorithm that processes the answers. Indeed, by analysing a person's place of residence, or the criminal or non-criminal behaviour of his or her neighbours or family members, the origin and social status of the person can be inferred. On this point, it is important to take into account that the majority of detainees in American prisons are black. Consequently, when COMPAS relies on the data collected, it is easy to understand why it establishes a high-risk score for them. When these data are used by the algorithm to determine whether or not a person is likely to commit another crime, a discriminatory system is created.

The problem is that, since people decide what data to use and how to use it, algorithms process and use data based on human criteria and thus reflect human bias. Those who created these algorithms have done nothing but automate the predominant view of the world, which is made up of prejudices that are then incorporated into the algorithms themselves[25].

Other algorithms reinforce stereotypes and preferences by selecting, for example, similar groups or user groups. An algorithm, for instance, could find out that recidivism is statistically connected to a prisoner’s sex or race. Would the algorithm then not be sexist or racist?

I would not exclude a priori the use of algorithms in criminal trials, but they should undoubtedly carry far less bias.

2.2 …individuality of the sentence

Another problem of COMPAS is that it does not assess the individual risk of recidivism of the defendant, but elaborates its prediction by comparing the information obtained from the individual with that relating to a group of individuals with similar characteristics. The risk scores produced by COMPAS are intended to predict the general probability that individuals with a similar criminal history will commit a new offence. In fact, Loomis complained that the penalty inflicted on him was not an individual penalty. The problem with this system, I think, is that when evaluating a person's behaviour one cannot do so by making generalisations. It is not correct to make a decision about an individual case based on what similar “criminals” have done in the past. All the answers to the questionnaire given by convicts flow into the software, which creates several scores, including predictions of “Risk of Recidivism” and “Risk of Violent Recidivism”[26]. It is not acceptable to use an algorithm that is based on questions such as “How many of your friends/acquaintances have ever been arrested?” or “Were you ever suspended or expelled from school?” and compares these with groups of individuals with similar characteristics. Indeed, “it makes prediction based in generalized statistics, not on someone’s individual situation”[27].
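The logic can be sketched as follows (an assumed reconstruction, since Northpointe has never disclosed the actual algorithm): the “risk” attributed to a new defendant is simply the re-offence rate among the most similar past defendants, a statement about a group rather than about the person.

```python
# Assumed reconstruction of group-based scoring (Northpointe/Equivant has
# never disclosed the real COMPAS algorithm). The "risk" of a new defendant
# is the re-offence rate among the k most similar past defendants: a fact
# about a group, not about the individual being sentenced.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy features: [prior arrests, age, friends ever arrested, expelled (0/1)]
past_defendants = np.array([[3, 22, 5, 1],
                            [0, 45, 0, 0],
                            [1, 30, 2, 0],
                            [4, 19, 6, 1]])
reoffended = np.array([1, 0, 0, 1])

model = KNeighborsClassifier(n_neighbors=3)
model.fit(past_defendants, reoffended)

new_defendant = np.array([[2, 21, 4, 1]])
# predict_proba reports the share of similar *other people* who re-offended.
print("group-based risk:", model.predict_proba(new_defendant)[0][1])
```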

In addition, the judge in criminal matters must rely on facts that the person has committed and on the defendant's history of criminal conduct. He cannot consider the higher or lower probability “that a person may commit a crime that has not yet been committed”[28]. Indeed, would it be fair to lock up a person for a crime that may be committed in the future?[29]

2.3 …opacity of the system

It is also problematic that COMPAS involves a violation of the right to a fair trial[30]. Loomis invoked the right to be sentenced on the basis of accurate information, which he could not verify because it is covered by industrial property rights. In fact, we do not know how the algorithm was created, we do not know its details, and the weights given to the factors considered are kept secret[31]. Since the tool is covered by trade secret, it is not possible for the defence to access the operating mechanisms of the COMPAS software. How is it possible to guarantee a person's right of defence if he or she does not fully know the logic behind the judges' decision? In many cases, a system that uses AI tools can increase fairness. Human decision-making can sometimes be incoherent and fall short of standards of justice, and human judges can be as biased as COMPAS is. But it is not as easy as it seems: often, not enough is known about how these systems work, and consequently it is difficult to establish whether they are fairer than humans would be on their own.

Under which conditions is it possible to trust a person, or to accept someone else's decision even when it is disadvantageous? At least when the reasons for it are known. The problem is that COMPAS is more and more a black box[32], because the reasons for certain results and its mechanisms are unknown.

2.4 Why is the law not computable?

What do algorithms lack in order to be applied in the legal field? If the law were just a set of rules and directives to apply, then it would probably be feasible and easy to computerise. The problem is that there is more behind it. The law is not just rules or codes to apply to cases. Law has an argumentative nature, an internal perspective, an ethic. It is not just about an end product: the process is what matters[33]. Principles, values and legal concepts cannot be calculated.

What are the limitations of using mathematics in the legal field? What is lost in translation?

If we assume that both words and numbers are forms of language, and therefore of communication, they differ from many points of view[34]. First of all, the code used to create algorithms aims to eliminate any form of ambiguity and flexibility, which instead characterise legal language[35]. I think one of the biggest problems here is that not everything expressed in words can be translated into numbers, because language has so many nuances and variations of meaning that computers do not grasp, at least at this time. There is a mismatch between human reasoning and mathematics. For example, if a self-driving car were told to get someone to the station “as fast as possible”, it would probably cause an accident, or at least drive very badly. But that is not what the person meant[36]. Most of the time it is still not possible to explain the real meaning to a machine, because understanding what people really want requires going beyond the mere meaning of words. Perhaps machines can be made to understand by continuously exposing them to examples; experience is necessary. Context helps too: people often understand a word according to the context they are in and give it a different meaning accordingly. Common sense is difficult to encode, and a computer does not have it. In highly mathematical fields such as tax law, it is more feasible to apply algorithms and ML, because it is about numbers.
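A toy objective function makes the mismatch visible (purely illustrative): an optimiser given only the literal words “as fast as possible” will satisfy them exactly, while violating every constraint the speaker never thought to state.

```python
# Purely illustrative: a literal-minded optimiser given only "as fast as
# possible" obeys the words exactly and ignores every unstated constraint.
def naive_plan(routes):
    # Encodes only the literal instruction: minimise travel time.
    return min(routes, key=lambda r: r["minutes"])

def intended_plan(routes, speed_limit=50):
    # What the passenger actually meant needs unstated common sense.
    safe = [r for r in routes if r["avg_speed"] <= speed_limit]
    return min(safe, key=lambda r: r["minutes"])

routes = [
    {"name": "reckless", "minutes": 8,  "avg_speed": 90},
    {"name": "sensible", "minutes": 12, "avg_speed": 45},
]
print(naive_plan(routes)["name"])     # reckless
print(intended_plan(routes)["name"])  # sensible
```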

Returning to COMPAS: its mechanism addresses the question superficially, without going beyond the meaning of the individual questions in the questionnaire and without judging the person as a whole, precisely because all those unwritten rules that are part of the legal system are not codified.

2.5 Possible future scenarios

It has been seen that law is not computable, for the reasons listed above. However, this is just my point of view, contextualised to the current historical moment. It is legitimate to wonder whether things will be different in the distant future. Maybe in 200 or 300 years, people will be able to build machines that can express human values, or that can pursue human goals while considering all the variables of the human mind. An algorithm consists of a series of instructions that, using the data made available, creates outputs. Two machines, however, can reach different results depending on the weight given to the various factors. It is not to be excluded that in the future algorithms with a very low level of bias will be realised, and therefore be more widely accepted by society and “fairer”. One of the most fascinating aspects of this subject is precisely that, considering the very fast development of technology, it leaves open many possible scenarios that would be unthinkable nowadays. Machines are already excellent at arithmetic, chess, cancer diagnosis, etcetera, and maybe the day will come when they will be better than humans in everything. What is important is to be ready: to continue studying these issues and talking about them, so that people are not owned by technology but own it[37].

Conclusion

Mathematics is an exact science, where everything has its own logic and precision. Calculations, operations, theorems are not refutable. Law, on the other hand, is. There are norms and rules to be applied, but then there are many variables at stake.

In essence, the logic of ML is not the same as legal reasoning[38]. It also has to be considered who builds the algorithmic system and what values are encoded in it. Do the engineers have an understanding of the nuances of the law?[39]

Especially in criminal law, a judgment can be overturned: a court of first instance may find a person guilty, and the same person may be found innocent by a court of second instance. Again, the same case could give rise to one verdict before one judge and to a different one before another. Everything can change as new evidence emerges or simply as new considerations are made. So would it be possible to mathematise the mutability of human thought and judgment?

It is probably also a question of objectives. Human goals are not the same as those of ML. A computer whose purpose is to simplify proceedings and save time, for instance, would undoubtedly be badly balanced against the human purpose of a fair trial. When the goals differ, it is difficult to translate dissimilar needs and principles into numbers and encode them in computers.

Finally, there is much discussion about the concept of bias. As has been seen, one of the biggest problems is preventing the judicial system, whose task is to ensure the fair application of the law and therefore equal treatment for all, from being turned into a discriminatory instrument. When COMPAS uses biased data, the output will be biased as well. This concept is best expressed by the phrase “garbage in, garbage out”, according to which the quality of the outputs is determined by the quality of the inputs[40].

If everyone must be considered and judged equally before the law, without discrimination on the basis of race, religion, skin colour, etc., the use of systems such as COMPAS does not allow this to happen.

BIBLIOGRAPHY

Alpaydin E, Introduction to Machine Learning (2nd edn, Massachusetts Institute of Technology 2010) 3

 

Angwin J, Larson J, Mattu S, Kirchner L, ‘Machine Bias’ (2016) PP <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> accessed 2 January 2020

 

Barry-Jester AM, Casselman B, Goldstein D, ‘Should prison sentences be based on crimes that haven’t been committed yet?’, (FiveThirtyEight, 4 August 2015) <https://fivethirtyeight.com/features/prison-reform-risk-assessment/> accessed 28 December 2019

 

Bennett Moses L, at the ‘Lex ex machina’ conference in Cambridge, 13 December 2019

 

Blackmore S, Consciousness: an introduction (1st edn, Hodder & Stoughton 2003) 22

 

Brennan T, Dieterich W, Ehret B, ‘Evaluating the predictive validity of the COMPAS risk and needs assessment system’ (2009) 36 CJAB 21

 

Bryant B, ‘Judges are more lenient after taking a break, study finds’ The Guardian (April 2011)

 

Carrer S, ‘Se l’amicus curiae è un algoritmo: il chiacchierato caso Loomis alla Corte Suprema del Wisconsin’ (2019) GPW <http://www.giurisprudenzapenale.com/2019/04/24/lamicus-curiae-un-algoritmo-chiacchierato-caso-loomis-alla-corte-suprema-del-wisconsin/> accessed on 2 January 2020

 

Cicero, The Republic and the Law (Niall Rudd Tr, Oxford World’s Classic 2008)

 

Cobbe J, at the ‘Lex ex machina’ conference in Cambridge, 13 December 2019

 

Dehaene S, Lau H, Kouider S, ‘What is consciousness, and could machines have it?’ (2017) 358 Science 486 <https://science.sciencemag.org/content/358/6362/486/tab-figures-data> accessed on 29 December 2019

 

Goldman AI, ‘Consciousness, Folk Psychology, and Cognitive Science’, [1993] CAC 364

 

Golumbia D, The cultural logic of computation (1st edn, Harvard University Press 2009)

 

Guidotti R, Monreale A, Pedreschi D, ‘The AI Black Box Explanation Problem’ (2019) KD https://www.kdnuggets.com/2019/03/ai-black-box-explanation-problem.html  accessed on 2 January 2020

 

Jaume-Palasi L, Spielkam M, ‘Ethics and algorithm processes for decision making and decision support’ [2017] AW 2

 

Larson J, Mattu S, Kirchner L, Angwin J, ‘How we analyzed the COMPAS recidivism algorithm’ (2016) PP <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm> accessed 4 January 2020

 

Livni E, ‘Nei tribunali del New Jersey è un algoritmo a decidere chi esce su cauzione’ Internazionale, (3 March 2017)

 

Oxford English dictionary, <https://www.oxfordlearnersdictionaries.com/definition/english/algorithm?q=algorithm >, accessed 20 November 2019

 

Pasquale F, ‘A rule of persons, not machines: the limits of legal automation’ (2018) SSRN < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3135549> accessed 3 January 2020

 

Phalgune A, Kissinger C, Burnett M, Cook C, Beckwith L, Ruthruff JR, ‘Garbage in, garbage out? An empirical look at oracle mistakes by end-user programmers’ (2005) IEEE <https://www.researchgate.net/publication/4175710_Garbage_in_garbage_out_An_empirical_look_at_oracle_mistakes_by_end-user_programmers> accessed 5 January 2020

 

Smith M, ‘In Wisconsin, a Backlash against using data to foretell defendants’ futures’ The New York Times (22 June 2016)

 

Skeem J and Lowenkamp CT, ‘Risk, Race and Recidivism: predictive bias and disparate impact’ (2016) C < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2687339> accessed on 5 January 2020

 

Supreme Court of Wisconsin, State of Wisconsin v Eric L. Loomis [2016] 881 N.W.2d 749

 

 

Tegmark M, Life 3.0: Being human in the age of Artificial Intelligence (2nd edn, Penguin Books 2018) 61

 

Thucydides, The Peloponnesian War (Martin Hammond Tr, Oxford World’s Classics 2009)

 

Universal Declaration of Human Rights (UDHR), Article 7

 

Weisberger M, ‘Will AI ever become conscious?’ (2018) LS < https://www.livescience.com/62656-when-will-ai-be-conscious.html> accessed 16 December 2019

[1] Universal Declaration of Human Rights (UDHR), Article 7

[2] Text snippet

[3] Thucydides, The Peloponnesian War (Martin Hammond Tr, Oxford World’s Classics 2009)

[4] Cicero, The Republic and the Law (Niall Rudd Tr, Oxford World’s Classic 2008)

[5] Oxford English dictionary, <https://www.oxfordlearnersdictionaries.com/definition/english/algorithm?q=algorithm >, accessed 20 November 2019

[6] Max Tegmark, Life 3.0: Being human in the age of Artificial Intelligence, (2nd edn, Penguin books 2018) 61

[7] Susan Blackmore, Consciousness: an introduction (1st edn, Hodder & Stoughton 2003) 22

[8] Ethem Alpaydin, Introduction to Machine Learning (2nd edn, Massachusetts Institute of Technology 2010) 3

[9] Tegmark, (n 6) 72

[10] Ben Bryant, ‘Judges are more lenient after taking a break, study finds’ The Guardian (April 2011)

[11] Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, ‘Machine Bias’ (2016) PP <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing> accessed 2 January 2020

 

[12] Tegmark (n 6) 106

[13] Ibid, 107

[14] Stanislas Dehaene, Hakwan Lau, Sid Kouider, ‘What is consciousness, and could machines have it?’ (2017) 358 Science 486 <https://science.sciencemag.org/content/358/6362/486/tab-figures-data> accessed on 29 December 2019

[15] Alvin I. Goldman, ‘Consciousness, Folk Psychology, and Cognitive Science’, [1993] CAC 364

[16] Mindy Weisberger, ‘Will AI ever become conscious?’ (2018) LS < https://www.livescience.com/62656-when-will-ai-be-conscious.html> accessed 16 December 2019

 

[17] Riccardo Guidotti, Anna Monreale, Dino Pedreschi, ‘The AI Black Box Explanation Problem’ (2019) KD https://www.kdnuggets.com/2019/03/ai-black-box-explanation-problem.html  accessed on 2 January 2020

[18] Supreme Court of Wisconsin, State of Wisconsin v Eric L. Loomis [2016] 881 N.W.2d 749

[19] Stefania Carrer, ‘Se l’amicus curiae è un algoritmo: il chiacchierato caso Loomis alla Corte Suprema del Wisconsin’ (2019) GPW <http://www.giurisprudenzapenale.com/2019/04/24/lamicus-curiae-un-algoritmo-chiacchierato-caso-loomis-alla-corte-suprema-del-wisconsin/> accessed on 2 January 2020

[20] Tim Brennan, William Dieterich, Beate Ehret, ‘Evaluating the predictive validity of the COMPAS risk and needs assessment system’ (2009) 36 CJAB 21

[21] This tool has been mainly used in the States of New York, Wisconsin, California and Florida.

[22] Lorena Jaume-Palasi, Matthias Spielkam, ‘Ethics and algorithm processes for decision making and decision support’ [2017] AW 2

[23] Angwin et al (n 11)

[24] Ibid

[25] Ephrat Livni, ‘Nei tribunali del New Jersey è un algoritmo a decidere chi esce su cauzione’ Internazionale, (3 March 2017)

[26] Jeff Larson, Surya Mattu, Lauren Kirchner, Julia Angwin, ‘How we analyzed the COMPAS recidivism algorithm’ (2016) PP <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm> accessed 4 January 2020

[27] Jaume-Palasi (n 22)

[28] Jennifer Skeem and Christopher T. Lowenkamp, ‘Risk, Race and Recidivism: predictive bias and disparate impact’ (2016) C < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2687339> accessed on 5 January 2020

[29] Anna Maria Barry-Jester, Ben Casselman, Dana Goldstein, ‘Should prison sentences be based on crimes that haven’t been committed yet?’, (FiveThirtyEight, 4 August 2015) <https://fivethirtyeight.com/features/prison-reform-risk-assessment/> accessed 28 December 2019

[30] Stefania Carrer, (n 19)

[31] Mitch Smith, ‘In Wisconsin, a Backlash against using data to foretell defendants’ futures’ The New York Times (22 June 2016)

[32] Guidotti, (n 17)

[33] Lyria Bennett Moses, at the ‘Lex ex machina’ conference in Cambridge, 13 December 2019

[34] Frank Pasquale, ‘A rule of persons, not machines: the limits of legal automation’ (2018) SSRN < https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3135549> accessed 3 January 2020

[35] David Golumbia, The cultural logic of computation (1st edn, Harvard University Press 2009)

[36] Tegmark (n 6) 261

[37] Tegmark (n 6) 335

[38] Bennett Moses (n 33)

[39] Jennifer Cobbe, at the ‘Lex ex machina’ conference in Cambridge, 13 December 2019

[40] Amit Phalgune, Cory Kissinger, Margaret Burnett, Curtis Cook, Laura Beckwith, Joseph R. Ruthruff, ‘Garbage in, garbage out? An empirical look at oracle mistakes by end-user programmers’ (2005) IEEE <https://www.researchgate.net/publication/4175710_Garbage_in_garbage_out_An_empirical_look_at_oracle_mistakes_by_end-user_programmers> accessed 5 January 2020

 
