Artificial Intelligence and the Law: Cybercrime and Criminal Liability

In October 2020, the Taylor and Francis Group, which specialises in eBooks across a range of subject areas, published the book Artificial Intelligence and the Law: Cybercrime and Criminal Liability.

We are delighted to share that Chapter 3 of Artificial Intelligence and the Law was written by our CMM Ambassador, Professor Jonathan Clough (Monash University), and Chapter 9 by CMM Founding Director and member of the OCSC’s Advisory Board, Professor Sadie Creese (University of Oxford).

Abstract: Chapter 3, “Between prevention and enforcement: the role of ‘disruption’ in confronting cybercrime”.

“This chapter discusses the nature of disruption and its application in the context of cybercrime, with a particular focus on legal frameworks. It begins by examining the nature of disruption and the role of intelligence in disruptive practices, before providing examples of how disruption may apply in the context of cybercrime. The chapter then considers three contexts in which legislative action may be required in order to provide both the necessary legal powers and appropriate oversight: the need for criminal offences that support disruptive techniques, investigative powers that may be utilised for disruptive purposes, and provisions that support transnational cooperation. Since the early 1990s, policing agencies have increasingly moved away from a reactive, prosecution-directed mode of crime control towards a form of policing known as intelligence-led policing. Because of its more proactive nature, intelligence-led policing lends itself not only to crime prevention and reduction, but also to the use of other techniques that disrupt criminal activity without necessarily proceeding to prosecution.”

Abstract: Chapter 9, “A Threat from AI”.

“This chapter considers whether the family of technologies and methods that are commonly thought of as constituting artificial intelligence (AI) could pose a form of threat. AI as a field of study is commonly thought of as being concerned with creating a system that can learn and reason like a human, but which is not human. For AI to be a threat, targets or victims would need to be negatively affected by the AI, but the harm they experience need not be confined to cyberspace. Managing such a threat requires the means to identify and manage risks as they emerge, and such risks could apply at all apertures: individual, organisational, national or societal, and global. When considering the threat from AI as a weapon for cyberattack, we do so with reference to the six key characteristics of cyberattacks: targetability, controllability, persistence, effects, covertness and (un)mitigatability.”