FREE CHAPTER from ‘Artificial Intelligence – The Practical Legal Issues’ by John Buyers

CHAPTER ONE
AN INTRODUCTION TO
ARTIFICIALLY INTELLIGENT
SYSTEMS

This is a book about technology and artificial intelligence – more specifically, one designed to assist the non-scientifically minded legal practitioner to understand the implications of this new and exciting technology: to demystify it, and hopefully provide clarity around the issues and implications that we as legal practitioners will need to take into account when navigating our disparate legal disciplines. It does not purport to be definitive – the technology is far too new for this and is in a state of constant evolution. What it strives to do is to provide a grounding in the “state of the art” as it applies today, and to separate fact from fiction.

We are currently experiencing a golden age or renaissance period for machine learning and wider artificial intelligence-based systems. This book is predominantly focussed on the innovations provided by machine learning – that is to say, technology which, through exponential growth in computing power and scale, has made effective use of neural networks possible, enabling machines to adapt and learn without explicit programming. It is important however not to ignore associated technologies that complement and enhance machine learning systems, and it is this wider ensemble of techniques (including machine learning itself) which I refer to as artificial intelligence or AI.

To the casual observer, it seems that the technology is advancing on a daily or even hourly basis. You can now talk to an Amazon Echo device using the ordinary spoken word (in whatever language is your norm) and through the miracle of natural language processing get a cogent answer, or instruct Google to run context based searches on characteristics gleaned from your photo collection. Facebook will now identify your friends and family for you and Spotify will try to define your musical tastes by selecting music tracks based on your listening history.

We’re at the stage now where you can also get into your Tesla vehicle and it will drive you to your destination (albeit with your hands on the wheel). Motoring is an apt analogy, as we accelerate off into a brave new world, cosseted and nannied by ever more complex varieties of AI-powered gadgets. It’s not all about consumer experience though. Financial markets have long been driven by a variety of robo-trading tools and many “state of the art” cyber protection systems use artificial intelligence to protect against an ever more complex world of malware.

What makes these systems different from traditional computer systems and why should we be worried? Sometimes it can be difficult to differentiate fact from fiction – particularly in an environment which is driven by marketing hype and plain old misinformation – especially so given that the idea of the intelligent machine, fuelled by Hollywood, plays heavily on our collective psyche. Is, for example, artificial intelligence the end of the world as we know it, and are we about to be subordinated to a new species of supercomputer, or is it the key to a prosperous new future and a golden age of human endeavour? This book will at least give you an overview of the issues involved from a legal perspective, and I suspect an appreciation that the reality is somewhat more prosaic than either the pessimists or optimists would have us believe.

In order to understand the practical implications of the technology, it is worth understanding how traditional computer systems work, and comparing this with how neuroscience currently thinks that neurons in the human brain function.

Computers in the traditional sense need prescriptive and directive sets of instructions to execute complex tasks – in essence, their programming. A traditional computer is unable to handle complex tasks effectively unless every eventuality is programmed into its code. As we all know, this makes these machines highly effective at larger repetitive or data-intensive tasks with a defined set of pre-programmed variables. What such machines cannot do well is adapt, learn, evolve or extrapolate their decisions to new and unforeseen situations. In contrast, the human brain, although rather less good at repetitive tasks, is a marvel of flexibility, able to navigate a chaotic world. It does this through a process of conceptualisation.

The simple act of recognising a human face (or indeed any image) provides a real-life example. You learn from an early age as a human being to recognise a face from any angle or orientation, full on or in profile. You can identify someone in low light conditions and you don’t need an image of a face to be in colour or even in three dimensions for you to recognise it. Your brain can extrapolate a face from incomplete or partial data – indeed it is unbelievably good at “joining the dots” (so much so that we are often caught out by recognising anthropomorphic features in inanimate objects). A traditionally programmed computer finds this task almost insurmountably difficult. You need to ensure that the face you want the machine to recognise is oriented in precisely the right way and under precisely the right lighting conditions – as otherwise it may not even identify it as a face. In this context, what the human neo-cortex achieves (and what current AI technologists are trying to replicate) is the holy grail of the “invariant representation”. This is the ability to learn what a “face” is as a concept (or indeed a cube, car, tree or any other animate or inanimate thing) and apply this to real world data. The notion of a concept introduces an entirely new dimension – a level of data abstraction, classification, recognition and labelling which enables semantic representation in areas such as pattern recognition and linguistics – in short, bringing order from chaos.

Put grandly, artificially intelligent systems aspire through their structures (to a greater or lesser degree) to have the ability to process unstructured data, to extrapolate it, and to adapt and evolve in ways which are comparable to human beings.


Robotics, perception and Artificial Intelligence

So what makes robots different from machine learning systems? It is worth briefly covering the difference as, to the uninitiated, this can be a somewhat confusing question – particularly when the terms are used fast, loose and interchangeably in most commercial, non-technical literature.

Obviously, at its most simplistic level, a robot is a mechanism that is designed to replicate the actions and behaviours of living creatures. It is a manufactured agent, operating in the real world, which is capable of autonomous action to a defined degree. Robots in some form or another have been in existence for over 200 years.

As AI systems have become more sophisticated and able, robotic research has likewise developed more capable robotic machines, and scientists in the field of robotics have become more preoccupied with replicating the animal characteristic of proprioception – the unconscious ability of living creatures to know at all times the boundaries and extent of their physical body, and to know that a limb extended in front of them is part of their own body and is moving. Knowing that the limb is sensing hot or cold or touching another object, and whether it is constrained or injured, is an integral part of this ability in living creatures. It is this need to interact physically and effectively with the real world which has been described by some commentators as one of the primary catalysts of animal and human intelligence, and which is the driving emphasis for AI research in the field of robotics.


A brief history of AI

Before we get into how current “state of the art” artificially intelligent systems are structured, it is worth (briefly) looking at the historical evolution of machine learning. Although the term “artificial intelligence” itself was coined later by John McCarthy in the mid-1950s, it was Alan Turing – who through his brilliant efforts in the Second World War managed to decode the Nazi German ciphers produced on the Enigma machine – who first wrote on the concept of machine intelligence, in a seminal 1950 paper1. His analysis centred on human intelligence as a benchmark for artificial intelligence (more on this later). He postulated that if you could hold a test in which a human conversed with a computer, and that human could be fooled by a clever computer program into thinking that the machine they were talking to was in fact human, then the machine would have passed a benchmark for intelligence. This of course evolved into the famous “Turing test”.

The Turing test led to a surge in the mid-20th century in traditional programming techniques being used to emulate intelligence – however, developers very soon realised the limitations of this approach. Programs such as Eliza, one of the earliest claimed to have passed a version of the Turing test, fooled reviewers by adopting clever but simple linguistic tricks (for example, reflecting a user’s statements back as questions) which gave a superficial semblance of self-awareness and interaction through mimicry, but did not create anything approaching human-equivalent intelligence. Unsurprisingly, AI development stagnated after these initial attempts.

Marvin Minsky, an American cognitive research scientist, is also frequently credited as a father of AI. He developed the first randomly wired neural network learning machine, SNARC2, in January 1952, and was co-author (with Seymour Papert) of the 1969 book Perceptrons, which became a seminal work in the field of artificial neural networks.

For the purposes of this book I am of course paraphrasing a long and complex developmental history – there are many works which describe the historical evolution of artificial intelligence in much greater depth. What matters here is that, subsequent to the development of the Turing test, AI research bifurcated into two distinct directions: one centred on the earlier approach of “emulation” – namely focussing on mimicking outwardly observable intelligent behaviours; the other a new approach of “simulation” – based on the view that, in order to achieve machine intelligence, the fundamental structure and processes of neurons firing in the human nervous system had to be simulated.

As we shall see later on, both branches of research are propelling AI development through its current renaissance. There are, however, still limitations – not least the fact that, despite much scientific endeavour, we are still a very long way from understanding precisely how the human brain functions, and for this reason probably a long way from developing a machine with human-equivalent (or greater) levels of sentience (that is, feeling, perception and subjective experience), combined with objective reasoning and logic – the holy grail of Artificial General Intelligence (AGI) or “Strong AI”.

Inevitably, AI research and development initiatives have themselves evolved and differentiated by trial and error. As some structures have shown promise, they have been refined, so we now have lines of machine learning research and application which may in fact bear very little relationship to how neuroscience currently understands the organisation and structure of the human brain (in addition, of course, to those that still strive to model real organic brain function closely, in so far as it is understood).


Current developments

So let’s now take a slightly more detailed look at what is actually happening in the field of artificial intelligence at the moment – both from the perspectives of the technology itself and also the wider political and industry context which is providing an ethical and regulatory response to these developments.

As I mentioned at the beginning of this chapter, we have seen a dramatic rise in the effectiveness and use of solutions based on “Weak AI” or “Narrow AI” or Artificial Specific Intelligence (ASI) – AI solutions that are based around a specific, narrowly defined task or application, collectively (and somewhat confusingly) falling under the umbrella term of IA or Intelligent Automation.

There are several real-world artificial intelligence applications which are driving developments in the technology:

Image processing and tagging

Image processing, as the name suggests, requires algorithms to analyse images in order to extract data or to perform transformations. Examples of this include identification and image tagging – as used in applications such as Facebook to provide facial recognition, or to ascertain other data from a visual scan, such as the health of an individual or location recognition for geodata; and Optical Character Recognition – where algorithms learn to read handwritten text and convert documents into digital versions.


3D Environment processing

3D environment processing is an extension of the image processing and tagging skill – most obviously translated into the skills required by an algorithm in a robot or a “CAV” (connected and autonomous vehicle) to understand its location and environment spatially. This uses image data, but also potentially radar- and lidar-sourced spatial scanning data, to process 3D geometries. The technology can also be used in free-roaming robotic devices, including pilotless drones.


Textual Analysis

These are techniques and algorithms which extract information from, or classify, textual data. Textual analysis uses two distinct approaches – one based purely on pattern recognition of words, their meanings and concatenated sequences of the same; the other on grammar-driven natural language processing. In terms of practical usage, the sources analysed could include social media postings, tweets or emails. The technology may then be used to provide filtering (for spam); information extraction – for example, to pull out particular pieces of data such as names and addresses; or more complex “sense based” sentiment analysis – to identify the mood of the person writing (as Facebook has recently implemented in relation to postings by members who may be potentially suicidal3). Text analysis is also at the heart of chatbot technology – allowing for interaction on social media, or providing automated first-line technical support.
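A minimal sketch of the pattern-recognition approach applied to spam filtering, assuming the scikit-learn library and a handful of invented messages: each message is reduced to word counts and a simple classifier learns which word patterns are associated with spam.

# Text classification for spam filtering: messages become word counts and a
# Naive Bayes classifier learns from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "cheap loans click here",                # spam
    "meeting moved to 3pm", "please review the attached contract",   # legitimate
]
labels = [1, 1, 0, 0]   # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)        # bag-of-words word counts
classifier = MultinomialNB().fit(X, labels)

new_message = vectorizer.transform(["claim your free prize"])
print(classifier.predict(new_message))        # expected to be flagged as spam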


Speech Analysis

Speech processing takes equivalent skills to those used for textual documents and applies them to the spoken word. It is this area which is seeing an incredible level of investment in the creation of personal digital home assistants from the likes of Amazon (with its Echo device), Microsoft with Cortana, Google’s Home device and Apple with Siri (and now the recently launched “Homepod” speaker).


Data Mining

This is the process of discovering patterns or extrapolating trends from data. Data mining algorithms are used for such things as anomaly detection – identifying, for example, fraudulent entries or transactions as outliers, or classifying them as known types of fraud; association rules – detecting supermarket purchasing habits by looking at a shopper’s typical shopping basket; and prediction – predicting one variable from a set of others in order to extrapolate, for example, a credit score.
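A minimal sketch of one data mining task, anomaly detection, assuming a made-up list of transaction amounts: values that sit far from the usual pattern are flagged as outliers.

# Anomaly detection: flag transactions that are unusually far from the norm.
import statistics

amounts = [42.0, 38.5, 51.0, 44.2, 39.9, 47.3, 2500.0, 45.1]   # one suspicious value

mean = statistics.mean(amounts)
stdev = statistics.stdev(amounts)

# Flag anything more than two standard deviations from the mean.
outliers = [a for a in amounts if abs(a - mean) > 2 * stdev]
print(outliers)   # the 2500.0 transaction stands out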


Video game virtual environment processing

Video games are a multi-billion dollar entertainment industry but they are also key sandboxes for machines to interact with and learn behaviours in relation to other elements in a virtual environment, including interacting with the players themselves4.

Machine Learning – the basic elements

At the most simplistic level, machine learning systems are no different from conventional computer systems in that both rely on the elements of computational hardware and software to function effectively.

Whilst many modern machine learning systems exploit huge advances in computing scale and power and make use of the vast amounts of data that are available in our “big data” society, they still need what would be recognisable as a computing platform. It is the logic, or software, that such systems use which differs markedly from traditional computer programs: programs directly created by human programmers require sequential, explicit and largely linear instructions that are followed to the letter by the machine.

I have already explained that technologists developing machine learning systems have developed a variety of solutions, methodologies, models and structures to get machines to “think” in a narrow AI sense. In fact so many approaches have been developed, it can be difficult for the non-computer scientist to effectively decode them. To further complicate this issue, many proprietary machine learning systems employ a “mix” of adaptive learning solutions which are optimised for the particular applications at hand. We’ll step through some of these models in a moment, but for the moment, and in order to provide a consistent framework for you, the reader, it is worth setting out the very basic absolutes of machine learning systems – in other words, those conceptual elements that all current machine learning systems use. It is important to stress that this framework applies to that subset of AI which is machine learning as defined at the beginning of this chapter – the terms may not be relevant to other wider or peripheral AI technologies which are generally out of scope for the purposes of this book.

At the most simplistic level, it is generally established that machine learning systems are comprised of three parts. First, the Model – that is to say, the way in which the system is structured and organised – in short its architecture, including the nodes, links and weights which will be applied to the data to be processed. Second, the Parameters – the properties that are to be learned from the training data during training. Finally, the Learner – generally comprised of a learning algorithm (that part of the system that adjusts the model according to how it processes the parameters on a supervised, unsupervised or reinforcement basis – see below under “Training”) and an activation or transfer algorithm, which defines the way in which data are transferred between nodes within the network by forwards and/or backwards propagation (see further below under “Backpropagation”).
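To make this framework concrete, here is a minimal sketch in Python of the three elements at work, assuming a deliberately trivial, invented task (learning to convert miles to kilometres from a handful of made-up examples); it is illustrative only, not how any production system is built.

# Training data: example inputs and desired outputs, from which the
# parameters are learned.
training_data = [(1.0, 1.6), (2.0, 3.2), (5.0, 8.0), (10.0, 16.1)]

# The Model: a single node that multiplies its input by a weight (the
# parameter to be learned).
weight = 0.0

def model(x, w):
    return x * w

# The Learner: a simple learning algorithm that repeatedly nudges the
# weight to reduce the error on the training data (a crude form of
# gradient descent).
learning_rate = 0.005
for _ in range(1000):
    for miles, km in training_data:
        error = model(miles, weight) - km
        weight -= learning_rate * error * miles   # adjust the model

print(round(weight, 3))               # approximately 1.61 – the learned parameter
print(round(model(3.0, weight), 2))   # a prediction for an unseen input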


Algorithms

The word “algorithm” is often used as shorthand for the entire process I have described above – in many texts you might as well substitute the words “magic spell”. In fact, as we have seen above, an algorithm is merely a defined sequence of mathematical or logical steps, often expressed as a formula. Clearly machine learning algorithms are not magic spells. More prosaically, learning algorithms selectively adapt and change the model and its parameters based on the data that are introduced into the machine learning system. It is also important to note (and often misunderstood) that simply referring to a solution as “algorithmic” does not necessarily imply that the solution has any degree of artificial intelligence or machine learning.
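The point is easily illustrated with a hypothetical sketch: the routine below is unquestionably an algorithm – a fixed sequence of steps producing a result – yet it involves no learning or adaptation whatsoever.

# An algorithm with no machine learning: a hand-written, fixed sequence of
# steps that computes a monthly loan repayment. It never adapts to data.
def monthly_repayment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

print(round(monthly_repayment(100000, 0.05, 25), 2))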


Training

All machine learning systems need to be “trained” (or train themselves) with a data set – a set of data points which are intended to assist the system to “understand” the relevant narrow AI task at hand and which contain the parameters I have described above. Training data sets are, as we shall see later on in this book, an incredibly significant and important feature of these systems. The nature of the training data set provided however is very different depending upon whether the system is designed to learn on a “supervised”, “unsupervised” or “reinforcement” basis.


Supervised Learning

Supervised learning systems are provided with guiding or labelled training data sets that contain a mass of examples of the desired answers and outputs. In such cases, the training data are subsets of data that are well known in terms of all of their features, content, correlations and so forth, and thus provide a good representative example against which to benchmark the outputs of the system. Typically these data sets will be past, real-life examples of the problem the system has been configured to resolve. This type of machine learning generally supports evaluation, classification or prediction outcomes.
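A minimal sketch of supervised learning, assuming the scikit-learn library and a handful of invented, labelled examples of past loan decisions (the labels being the “desired answers” the system learns from):

# Supervised learning: the training data carry labels (the known answers).
from sklearn.linear_model import LogisticRegression

# Invented historical examples: [income in £k, existing debt in £k]
X_train = [[45, 5], [22, 18], [60, 2], [30, 25], [80, 10], [18, 9]]
y_train = [1, 0, 1, 0, 1, 0]   # the labels: 1 = loan repaid, 0 = defaulted

model = LogisticRegression()
model.fit(X_train, y_train)                 # learn from labelled past examples

print(model.predict([[50, 4], [20, 20]]))   # classify new, unseen applicants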


Unsupervised Learning

Systems that are designed to learn on an unsupervised basis are, in contrast, provided with unlabelled data that may or may not contain the desired answer or output. In these cases, the system attempts to find outliers, correlations or patterns in the data without any form of guidance. The data provided may be grouped into different classes that share common characteristics – unsurprisingly, this is often referred to as “clustering” by the industry. Typically, this type of machine learning is found in forensic tools.
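A minimal sketch of unsupervised learning, assuming the scikit-learn library and some made-up, unlabelled data points: the k-means algorithm groups the points into clusters without being told what any of them represent.

# Unsupervised learning: no labels are provided; the algorithm looks for
# structure (here, clusters) in the raw data by itself.
from sklearn.cluster import KMeans

# Unlabelled data points, e.g. customers described by [age, monthly spend].
X = [[22, 150], [25, 170], [23, 160],
     [55, 900], [60, 950], [58, 870]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # cluster assignment for each point, found without guidance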


Reinforcement Learning

Reinforcement learning is a similar process to unsupervised learning; however, machine learning systems here are typically exposed to a competitive environment in which they train themselves continuously, using trial and error to find the best reward – which might be winning a game or earning more money. This type of AI system attempts to learn from past experience in order to refine and improve decision outcomes, and is particularly well suited to closed environments with static rules.
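A minimal sketch of reinforcement learning by trial and error, assuming an invented “two-armed bandit” (a slot machine with two levers paying out at different, hidden rates): the system is never told the right answer, only rewarded, and gradually learns which action pays best.

# Reinforcement learning: no labelled answers, only rewards from trial and error.
import random

random.seed(0)
true_payout_rates = [0.3, 0.7]      # hidden from the learner: arm 1 pays out more often
estimated_value = [0.0, 0.0]        # the learner's running estimate for each arm
pulls = [0, 0]

for step in range(1000):
    # Mostly exploit the best-looking arm, but keep exploring occasionally.
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = 0 if estimated_value[0] >= estimated_value[1] else 1

    reward = 1.0 if random.random() < true_payout_rates[arm] else 0.0

    # Update the estimate for the chosen arm from the observed reward.
    pulls[arm] += 1
    estimated_value[arm] += (reward - estimated_value[arm]) / pulls[arm]

print([round(v, 2) for v in estimated_value])   # estimates approach 0.3 and 0.7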

Whether the system is designed to work on a supervised or an unsupervised basis, it is clearly vital that the data sets provided to train the system are representative of the underlying problem the system is being designed to resolve, and are not unbalanced or skewed in any particular direction. As we’ll see later on in this book, there are particular issues here which could trap the unwary. Proactively editing bias out of a data set too aggressively may well have the opposite and undesired effect of making that data set too narrow and less representative of the data class (a cause of so-called “overfitting”), in turn leading to the corresponding outputs of the system being prejudiced in favour of or against particular outcomes.


Backpropagation

Backpropagation is the process which allows a machine learning system to adjust and change by reference to previous outcomes. In technical terms, it is the means by which the machine learning system (via the activation or transfer algorithm, see above) computes how the error in its output varies with each of the weights used in the network, by propagating that error backwards through the network’s nodes; gradient descent then uses those gradients to adjust the weights.
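A minimal sketch of backpropagation, assuming a tiny two-layer network written in plain NumPy and trained on the classic XOR problem: the forward pass produces an output, the backward pass propagates the error back through the network, and gradient descent adjusts the weights.

# Backpropagation on a tiny network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = x1 XOR x2, a classic non-linear example.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Model: weights and biases for a 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(5000):
    # Forward pass: data moves through the network layer by layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the output error back through the network,
    # working out how the cost changes with respect to each weight.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient descent: nudge every weight "downhill" on the cost surface.
    W2 -= learning_rate * hidden.T @ d_output
    b2 -= learning_rate * d_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_hidden
    b1 -= learning_rate * d_hidden.sum(axis=0, keepdims=True)

print(output.round(2))   # outputs typically converge towards 0, 1, 1, 0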

So these are the basic features of AI or machine learning systems. As I mentioned above, there are a variety of different AI Models which are being deployed, and it is worth spending a little time on the most significant of these.


Artificial Neural Networks

Artificial neural networks aim to most closely model the functioning of the human brain through the simulation approach and contain all of the basic machine learning elements described above.

Neuroscience has established that the human neo-cortex, which is where most of our higher brain function resides, consists of a very dense population of interconnected neurons or brain cells. Neurons consist of the soma (the cell body) and axons and dendrites (the filaments that extend from the cell itself). Observed higher human brain functions require groups or networks of neurons to fire together in electrochemical activity. Even though the base physical structure of the cortex is the same, it is settled science that there are cortical regions that specialise in particular skills, such as language, motor movement, sight and hearing. More recently, it has been established that there are progressively “higher” levels of brain processing driven by layers of neurons. You might, for example, have a collection of neurons at a low level that are specialised in the visual detection of “edges”. Data from these lower functions is passed up the tree (or network) to higher functions so that, collectively, you are able to perceive a face.

In the world of artificial intelligence, scientists have attempted to replicate or model these structures and their functionality by use of neural networks. In simplistic terms, neural networks can be organised as “shallow” – 1-3 layers or “deep” – over 7 layers (as is generally the case in the human neo-cortex).

Artificial neural networks are composed of artificial “neurons” – virtual computing “cells” that activate, that is to say, assume a numeric value (computed by a chosen algorithm which applies weights and biases to the incoming data, in effect influencing it) and then hand that value off to the next layer of the network, which applies its own algorithmic treatment, and so on until the data has passed through the entire network and is output.

The process is heavily mathematical. Most neural networks apply something called a cost function – an additional mathematical measure of how far the network’s outputs are from the desired outputs – which is used to determine how to adjust the network’s weights and biases in order to make it more accurate. Typically this is achieved by something called gradient descent – a calculus-derived method which is designed to reach the minimum (the lowest possible, and hence the most accurate, value of the cost function described above).
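A minimal sketch of gradient descent, assuming a deliberately simple one-dimensional cost function whose minimum is known in advance: each step moves the weight a little further “downhill” towards that minimum.

# Gradient descent on a toy cost function cost(w) = (w - 3)**2, minimum at w = 3.
def cost(w):
    return (w - 3) ** 2

def gradient(w):              # derivative of the cost with respect to w
    return 2 * (w - 3)

w = 0.0                       # arbitrary starting weight
learning_rate = 0.1
for step in range(100):
    w -= learning_rate * gradient(w)   # step downhill on the cost curve

print(round(w, 4), round(cost(w), 6))  # w converges towards 3, where the cost is minimal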

Stepping away from the maths for a moment, it is worth making the point that artificial neural networks process data in a non-linear manner – as we saw from the basic elements section above, data may backpropagate as well as move forwards through the network (a further mathematically driven process of output refinement and tuning). This can make it very difficult to understand, and equally difficult to explain, how or why the system has reached a particular outcome in a particular instance – which is why such networks are often referred to as “black boxes”.

It is also important to note that neural networks come in a variety of flavours, so for example Regression Neural Networks are neural networks that are trained to analyse data on either a linear or non-linear regression basis (a simple example of regression analysis might be extrapolating growth rates from height measurements taken from a child).

Convolutional Neural Networks (or “CNNs”) have a structure which is specifically optimised for image recognition (such networks assume that all inputs are image related). Generative Adversarial Networks (“GANs”) take this one step further and improve applications such as image recognition and optimisation by pitting one network against another: one network acts as an image generator, the other as an image discriminator, challenging the generating network to improve. Use of this technique has proved very valuable in picture enhancement – for example, improving the resolution of fuzzy images or removing artefacts.
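For illustration, a minimal sketch of a convolutional neural network, assuming the PyTorch library and small 28×28 greyscale images (for example handwritten digits); the convolution and pooling layers are the structural features optimised for image data, and the sketch is not a tuned, production model.

# A tiny convolutional neural network for image classification.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = TinyCNN()
dummy_batch = torch.randn(8, 1, 28, 28)   # eight fake greyscale images
print(model(dummy_batch).shape)           # torch.Size([8, 10]) – one score per class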

GANs are also particularly effective where no correlated data exist to underpin decisions relating to a processing application – in other words, they can be used to synthesise data where none actually exist. One particular company, for example, uses such an approach to synthesise claims data for insurers, enabling them to model and price risk more accurately for the underwriting process.

 

Deep Learning Networks

Deep learning models are simply varieties of artificial neural network with many more layers, which exploit vastly greater computing power, more recent algorithmic innovations and much bigger data sets – in short, they take advantage of the scale now available to us, but operate in much the same manner as I have described above. Of course, the challenges of explainability and transparency are also correspondingly amplified when such networks are used.

On a wider basis, it is probably worth providing a brief introduction to some other non-machine learning AI technologies, although I do not propose to delve into these in any great detail.


Decision Trees

In contrast to machine learning, decision trees are a “white box” artificial intelligence model – typically their decisions are more easily explicable, and they are principally used for classification-style problems. In simple terms, a decision tree works by processing data through a series of question “nodes” (similar to a flow-diagram structure). Each node hands the data off to the corresponding node in the next layer once its question has been answered. Decision trees usually work on a Boolean basis (i.e. yes or no). They depend upon being able to classify data sets in an expected manner and are not suitable for applications that are based on unsupervised pattern correlation or recognition. This means that they are in turn susceptible to in-built bias if the overall problem they are designed to resolve has been incorrectly modelled.
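A minimal sketch of a decision tree expressed directly as code, assuming an entirely invented loan-approval example: each question node is a simple yes/no test, and every decision path can be read straight off the page – the “white box” quality described above.

# A hand-written decision tree: nested yes/no questions with readable logic.
def approve_loan(income: float, existing_debt: float, missed_payments: int) -> bool:
    # Node 1: is income above a threshold?
    if income > 30000:
        # Node 2: is existing debt manageable relative to income?
        if existing_debt < 0.4 * income:
            return True                      # leaf: approve
        return missed_payments == 0          # leaf: approve only with a clean record
    # Node 3: lower income – approve only small-debt, clean-record applicants.
    return existing_debt < 5000 and missed_payments == 0

print(approve_loan(income=45000, existing_debt=10000, missed_payments=0))   # True
print(approve_loan(income=25000, existing_debt=8000, missed_payments=1))    # False

Every decision the tree takes can be traced line by line, which is precisely why such models are easier to explain than a neural network.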


Random Forests

Random Forests and Deep Forests are very large ensembles of Decision Trees. In simple terms, Random Forests take random subsets of the data to be analysed and assign these to individual trees. The collective output of the trees provides a range of responses which can be combined on a statistical basis to provide a stronger prediction than a single decision tree alone. I do not propose to explain the intricacies of these structures further in this book; however, the reader should note that by aggregating decision trees together in a very large hierarchy, such structures may become inherently less explicable in terms of their decision making.
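A minimal sketch of the ensemble idea, assuming the scikit-learn library and a synthetic data set: many trees, each trained on a random subset of the data, vote together and typically out-predict a single tree.

# A random forest as an ensemble of decision trees (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The forest aggregates the votes of many trees, each trained on a random
# subset of the data and features, and usually generalises better.
print("single tree accuracy:", single_tree.score(X_test, y_test))
print("forest accuracy:     ", forest.score(X_test, y_test))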


Probabilistic artificial intelligence

Probabilistic or Bayesian artificial intelligence techniques are some of the hardest to conceptualise and understand. They may or may not incorporate machine learning technology. Systems that work on this basis attempt, by applying mathematical probability theory, to express all forms of uncertainty associated with the problem the model is trying to resolve. They then apply inverse probability (Bayes’ Rule) to infer the unknown characteristics of the problem, make predictions about it, and learn from observed data (so-called “inference algorithms”).
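A minimal worked example of inverse probability (Bayes’ Rule), assuming invented figures for a fraud-screening scenario: the model combines its prior belief about how common fraud is with the observed evidence (a flagged transaction) to infer a posterior probability.

# Bayes' Rule with made-up numbers: 1% of transactions are fraudulent; the
# screen flags 95% of frauds but also 5% of legitimate transactions.
p_fraud = 0.01
p_flag_given_fraud = 0.95
p_flag_given_legit = 0.05

# Total probability of any transaction being flagged.
p_flag = p_flag_given_fraud * p_fraud + p_flag_given_legit * (1 - p_fraud)

# Bayes' Rule: probability a flagged transaction is actually fraudulent.
p_fraud_given_flag = p_flag_given_fraud * p_fraud / p_flag
print(round(p_fraud_given_flag, 3))   # roughly 0.16 – most flagged transactions are still legitimate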

The greatest strength of probabilistic models is that they “know what they do not know”: in effect they hold an internal, learned representation of past outcomes and the uncertainty attached to them, on the basis of which they can estimate a probable outcome. We live and work in a very messy world – and many decisions we take are inferred from observable data sets that are incomplete. Such systems are therefore incredibly powerful, but do depend on a very careful probabilistic representation of uncertainty.

So that was an introduction to the key elements of the technology. In so far as political, regulatory and industry responses are concerned, there are a number that are worth mentioning, as they reflect well some of the issues and concerns raised in this book.

The Partnership on AI

The first, and possibly the most significant, is the Partnership on AI or “PAI”5, which has been established between Google, Microsoft, Apple, Amazon, IBM and Facebook (together with a number of other leading industry participants). The PAI broadly attempts to ensure that industry leaders develop AI on an ethical basis, taking into account a number of thematic “pillars” or priorities. These pillars include establishing conventions for safety in the use of AI and robotic systems (a key element of which is ensuring that AI systems fail gracefully, ceding control rather than putting the user in a catastrophic position); making AI fair, transparent and accountable (in order to increase the auditability of decisions taken by AI systems); promoting co-operative AI systems; and variously considering the extent to which AI will influence society, to ensure that it is for the greater good. Obviously it remains to be seen how influential this partnership will be in practice.


OpenAI

OpenAI6 is a non-profit AI research company which is focussed on developing AGI or artificial general intelligence. OpenAI, which was founded in 2015 jointly by entrepreneurs Sam Altman and Elon Musk, has been set up to counteract the concentrative effect of big corporation research into AI technologies and to ensure that AGI will ultimately benefit all of humanity. As at the date of writing of this book, OpenAI is intending to release a charter of principles in relation to the development of AGI.

The EU

The EU is currently undertaking a complex review of the impact of artificial intelligence on European society, which includes a number of strategic initiatives. At this stage, the most that can be done is to indicate the direction of travel – but one thing is for certain: this is a fast-moving area and it is very likely that by the time this work is published, much of the information listed will be out of date. The significance of the technology to the EU is clear – it has been awarded its own Unit within the Directorate General for Communication Networks, Content and Technology of the EU Commission. There are a number of research initiatives, the most significant of which are SPARC7 – a €2.2bn public-private partnership on robotics in the EU (partially funded by the EU as part of its Horizon 2020 funding programme, which covers expenditure between 2014 and 2020) – and a €20m “AI On Demand” Platform initiative on standardisation, the stated and rather ambitious aims of which are to “mobilise the AI community across Europe in order to combine efforts and to optimise Europe’s potential.” Further details of this programme are expected from the Commission in April 2018.

In March 2018, the European Group on Ethics in Science and New Technologies, part of the European Commission’s Directorate General for Research and Innovation, published its “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”8, which advocates the creation of an ethical and legal framework for the design, production, use and governance of artificial intelligence, robotics and autonomous systems, as well as proposing a set of fundamental ethical principles for the development of AI which are consistent with the EU’s Charter of Fundamental Rights.

The UK

The UK has been keen to remain at the forefront of artificial intelligence development and research. In October 2017 an independent review commissioned by the UK government, “Growing the Artificial Intelligence Industry in the UK”9, was published by Dame Wendy Hall and Jérôme Pesenti. The review made a number of recommendations about how the UK could best exploit new technologies created by artificial intelligence (principally in the areas of skills, driving uptake, data security and research) and is currently being considered as part of the UK government’s Industrial Strategy programme. This was followed in April 2018 by a wide-ranging report by the House of Lords Select Committee on Artificial Intelligence entitled “AI in the UK: ready, willing and able?”10. Two of its main recommendations were that the AI industry should set up a voluntary mechanism to inform consumers when AI is being used to make significant and sensitive decisions about individual consumers, and that the Law Commission should be asked to provide clarity in areas of the law where there is uncertainty around the consequences of artificial intelligence systems malfunctioning, underperforming or making decisions which cause harm.


1 Computing Machinery and Intelligence (1950) by A.M. Turing

2 Stochastic neural analog reinforcement calculator

3 See for example http://www.bbc.co.uk/news/technology-39126027 “Facebook artificial intelligence spots suicidal users”, 1st March 2017

4 See for example the July edition (E308) knowledge feature of Edge Magazine – “Machine Language” which discusses new startup SpiritAI – a business that has developed an intelligent character engine for NPCs (non-player characters) in video games, thus obviating the need for thousands of pages of pre-scripted dialogue.

5 See www.partnershiponai.org

6 https://openai.com

7 https://eu-robotics.net/sparc/

8 9th March 2018, ISBN 978-92-7980328-4

9 Available online at the Department for Digital, Culture, Media and Sport website

10  HL Paper 100, 16th April 2018