In this talk I aim to arrive at a modal logic definition of backward-looking responsibility. I will discuss some approaches suggested in the literature and will try to reformulate them in stit logic. The analysis will involve well-known instruments for assessing an agent’s causal contribution to unlawful outcomes, such as the but-for and NESS tests, the role of belief and intention, and the possibility of invoking interventionist theories of causation.
After working in computer science for 20 years, first as a PhD student and postdoc at VU University Amsterdam and later as an assistant and associate professor at Utrecht University, in 2015 Jan Broersen made the switch to theoretical philosophy at Utrecht University, where he studies logic and artificial intelligence. In 2013 he received an ERC grant for his research on Responsible Intelligent Systems.
Full details: http://www.uu.nl/hum/staff/JMBroersen
The rationale for using Bayesian approaches in legal reasoning is to help determine the probability that the ultimate hypothesis (e.g. ‘defendant guilty’) is true. This requires some agreement on a fair prior probability of the ultimate hypothesis, which is then updated in the light of all the evidence. However, it is widely assumed that no such fair prior can be agreed. At one extreme, the ‘innocent until proven guilty’ assumption is practically and theoretically useless since, taken literally, it means a prior probability of guilt of 0, which can never be overturned no matter how much evidence there is to the contrary. At the other extreme, a 50:50 prior probability is inevitably biased against the defendant. Hence, most Bayesian approaches deliberately ignore the prior and focus on the ‘likelihood ratio’ of the evidence, which at best only tells us how the odds for or against guilt have changed. In this presentation I will show that, for a very large class of criminal cases, it is possible to agree a sensible and useful prior probability of the ultimate hypothesis, based on agreed locations and timings relating to the crime and the defendant, that is actually consistent with a realistic notion of ‘innocent until proven guilty’.
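The points in the abstract about priors and likelihood ratios can be illustrated with a minimal sketch of Bayes’ theorem in odds form, posterior odds = likelihood ratio × prior odds. The specific numbers (a likelihood ratio of 1000, a reference class of 10,000 possible perpetrators) are hypothetical illustrations, not figures from the talk:

```python
from fractions import Fraction


def posterior(prior, likelihood_ratio):
    """Update P(guilty) given a likelihood ratio
    LR = P(evidence | guilty) / P(evidence | innocent),
    using the odds form of Bayes' theorem:
    posterior odds = LR * prior odds."""
    prior = Fraction(prior)
    prior_odds = prior / (1 - prior)
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1 + post_odds)


# A literal 'innocent until proven guilty' prior of 0 can never be
# overturned, however strong the evidence:
print(posterior(0, 1000))                    # 0

# A 50:50 prior makes even moderate evidence near-conclusive,
# which is why it is seen as biased against the defendant:
print(float(posterior(Fraction(1, 2), 1000)))    # ~0.999

# A small but non-zero prior (e.g. one of 10,000 people who could
# have been at the scene) behaves sensibly under the same evidence:
print(float(posterior(Fraction(1, 10000), 1000)))  # ~0.091
```

The last case illustrates the abstract’s proposal: an agreed reference class based on locations and timings yields a small, non-zero prior that is consistent with a presumption of innocence yet can still be overturned by sufficiently strong evidence.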
Norman is Professor in Risk Information Management at Queen Mary University of London and also director of Agena, a company specialising in risk management for critical systems. Norman’s experience in quantitative risk assessment covers a wide range of application domains, but since 2006 much of his research has been on legal and forensic applications, notably with respect to improved analysis and presentation of probabilistic aspects of evidence using Bayesian networks. He has worked on several major criminal and civil cases, providing expert advice on the impact and potential of probabilistic and statistical evidence.
Full details: www.eecs.qmul.ac.uk/~norman/.