A new book by Conflict Analytics project member Jacob Turner delves into some of the key issues surrounding AI. Published by Palgrave, Robot Rules argues that AI's ability to make independent decisions makes it unique, and unpredictable. "In Robot Rules: Regulating Artificial Intelligence, I explore what makes AI unique, what legal and ethical problems this will cause, and how we can solve them," Turner says. He spoke at Queen's Law on January 25, 2019. Robot Rules is available from Palgrave Macmillan.

Auto-captured transcript:
[Inaudible - introduction by Prof. Samuel Dahan] We have heard in the press about how robots are going to take over the world, or take all of our jobs. I'm not going to be speaking about either of those things. What I'm going to be talking about today is how we can live alongside artificial intelligence and robots. I'm going to cover three things: what is AI and what makes it unique, what problems it could lead to, and how we can solve them.

So first, what is it? What do I mean when I talk about AI? The US judge Potter Stewart famously said, when talking about pornography, "I can't define it, but I know it when I see it." That is not good enough when it comes to AI, because people often come to conversations with very different concepts of what they mean, and unless you have a shared concept you can't have a meaningful exchange. The definition of AI I'm going to give now is not supposed to be an all-encompassing one suitable for every circumstance; it is a definition that solely suits the purposes of legal regulation. I'm asking whether there is anything legally unique about this technology.

You'll be familiar with the various pictures up on the screen. This is how most people think of AI, and you'd be forgiven for doing the same: from the Terminator and Ex Machina to C-3PO, and, in the last image, some mechanical arms in a car factory. These are the things people typically think of when the words robot or AI are mentioned. What I'm talking about today is the core concept, and I'm going to introduce my definition of AI with two distinctions.

The first, and probably the key one, is between automation and autonomy. In my view, a process which is automated, that is to say where something otherwise done by a human or an animal is replicated by a machine, where everything is pre-programmed, where everything is deterministic, where there is no discretion for that machine-based process, is automation. The picture up on the screen shows the way a traditional computer program works: you have a set series of instructions, they are logical propositions; if yes, then you move to this stage; if no, then you move to that one; and so on. The point with these systems is that everything is deterministic: you can trace back every single outcome to a choice which was made by a human at some point in programming the system. That is not, in my view, the way true artificial intelligence works. Some people say artificial intelligence includes expert systems, or what is known as symbolic AI. Symbolic AI and expert systems are both examples of automation, and they don't fall within my definition of AI. My definition focuses on autonomy: the ability to make decisions in a non-pre-programmed manner, but one which also isn't random. There are lots of randomising techniques, but they don't count as intelligence because there is no goal-based activity, no internal model of what the system is trying to do. AI, on my definition, does have such a model, and it is able to vary certain parameters in order to get to an end goal.
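To make the automation/autonomy distinction concrete, here is a minimal, hypothetical sketch in Python; it is not from the talk, and the thermostat scenario, the numbers and the use of scikit-learn are my own illustrative assumptions. The first function is automation in the speaker's sense: every outcome traces back to a threshold a human wrote. The second is a learned model: the programmer supplies a goal and data, and the decision rule is derived rather than written out.

```python
# Minimal illustrative sketch of the automation vs. autonomy distinction.
# Assumes scikit-learn is installed; the scenario and numbers are invented.
from sklearn.linear_model import LogisticRegression

# Automation: a pre-programmed, deterministic rule. Every outcome can be
# traced back to the threshold a human chose when writing the program.
def automated_thermostat(temperature_c: float) -> str:
    if temperature_c < 18.0:
        return "heat on"
    return "heat off"

# Machine learning: the mapping from input to decision is learned from data
# against a goal, rather than being written out step by step by a person.
past_temperatures = [[15.0], [16.5], [17.0], [19.0], [21.0], [23.5]]
heating_was_needed = [1, 1, 1, 0, 0, 0]
model = LogisticRegression().fit(past_temperatures, heating_was_needed)

# The learned rule lives in fitted coefficients, not in an if-statement
# anybody wrote; the programmer set the objective, not the rule itself.
print(automated_thermostat(17.5))
print(model.predict([[17.5]]))
```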
The example up on the screen is a screenshot from the famous board game Go, as played by AlphaGo, a system created by the British company DeepMind. In 2016 it famously beat one of the world champions, Lee Sedol, and it did so using machine learning. Machine learning is not the only type of AI, but it is the most common type at the moment, and much of it was developed in Canada. The important thing to remember is that it can come to decisions without being told exactly what to do, or how to do it, by its programmers.

The next distinction is between narrow and general AI. General AI is the type of thing I showed on the first slide: human-like, conscious machines which in many ways resemble humans; they can have conversations with us, they reason in similar ways to us; they are a kind of mechanised human or animal. That wide range of abilities, the ability to set your own goals and to operate in an unlimited number of circumstances, the kind of thing we see in human-level intelligence, is what is known as general AI, and we are clearly nowhere near that yet. Some people say we will never get there. I'm not very concerned with whether we will ever get there or not, because the legal and ethical issues that I think come up with regard to artificial intelligence can apply even at the level of narrow AI. Narrow AI, at the other end of the spectrum from general AI, is a system which can do one thing but do it very well, or perhaps a couple of different things, again very well, and which can teach itself to do them. The example up on the screen is image recognition, but there is also natural language processing, and AI which is able to navigate; these are all examples of narrow AI. Often people will say we don't have artificial intelligence, it doesn't exist, so we don't need to worry about it. They are probably talking about general AI. What they are ignoring is that we are surrounded by narrow AI: it is all around us, on our phones, in our internet browsers; we are constantly interacting with it.

The next question is whether artificial intelligence is unique from an ethical or legal perspective. Some people will say it's just a computer program, we've had computer programs for a long time, so we don't need anything new; or it's just an algorithm, which is another thing one often hears. It is true that AI involves computer programs, but saying that AI is just a computer program is a bit like saying that the human mind is just electrical signals: it is literally true, but it doesn't really tell you the whole story. I think the autonomy of AI does render it unique; it does make it different from any other technology which has come before. I'm going to illustrate this with a story. I mentioned AlphaGo and DeepMind a moment ago. There was one particular moment in the series of games that the AlphaGo program played against Lee Sedol. It had been said by many people that a computer would never beat a human at Go, because Go is much, much more complicated than chess. But what was interesting was not the fact that AlphaGo beat the human, because computers have been beating humans at rule-based games for some time; we can probably all remember, or will have heard about, IBM's Deep Blue beating Garry Kasparov in the 1990s. The interesting and unique thing was the way that AlphaGo beat Lee Sedol, and there was one moment which illustrates this: move 37 of the second game. With that move, AlphaGo did something which completely baffled everybody watching. Its own programmers, and experts who had played Go for decades, had no idea why it had made this particular move; they thought it was a mistake.
It turned out several hours later that it was the winning move: AlphaGo had worked out a new way of playing the game that no human had come up with in the thousands of years people have been playing it. That is just one example of the kind of innovative, challenging thing that AI can do, breaking out of the paradigm of the technologies which came before it.

The next question is: what problems could this lead to? I think there are three categories: responsibility, rights and ethics. First, responsibility: who is liable if AI causes harm? In a private law setting we have the concepts of causation and of intervening acts, the idea being that, broadly speaking, you are only liable for things that you cause, and if there was an intervening act then whoever the human was, whoever the legal actor was, may well be absolved from liability. What I would suggest is that, at a certain level of sophistication and independence, AI could itself be an intervening act. That means it becomes difficult to use any of the legal techniques, any of the principles up on the screen, for ascribing responsibility back to an individual legal person, whether a human or a corporation. We could use all of these things in the short term; I think all of the things up on the screen, vicarious liability, strict liability, product liability and so on and so forth, will be used to try to tie what an AI system has done back to some kind of human choice. But ultimately that will become increasingly difficult, and we may end up stretching existing legal concepts so far that they break: they no longer do what they were intended to do, which is to act as a constraint upon human choices. That is effectively what legal systems do: they tell you what the consequences will be if you make certain choices.

The UK government has attempted to solve some of these problems in one specific area: autonomous vehicles, or self-driving cars as they are popularly known. In 2018 it passed the Automated and Electric Vehicles Act. You may be interested to hear that this was one of the only non-Brexit-related pieces of legislation which the UK Parliament was able to pass last year, and what that illustrates for me is how much importance the government was placing on creating a predictable ecosystem for companies, so as to foster a self-driving car industry in the UK. I went to a talk by the Law Commissioner who wrote this piece of legislation, and she said that one of the reasons the UK was so keen to pass it was in order to beat Germany to doing so: to be the first country to have this regulatory framework, so as to attract businesses and car manufacturers to base themselves there and to test the new technologies in the country. So this is an example of how regulatory structure is very much tied in with economic rationales. People often think of regulation as being opposed to innovation, but the UK government has actually taken the policy choice that the two can be mutually supportive of each other.

So what does the Automated and Electric Vehicles Act actually do? It says that when a self-driving car crashes in autonomous mode, then so long as there was insurance, the insurer will be liable; and, as in Canada, there is a requirement under English law that all vehicles on public roads be insured. Effectively it is saying that there is certainty for any victim of a car crash, even if the victim is the passenger in the car themselves.
There is certainty that the insurer will pay out. Now, that does half of the job, in my view: the victim has a recourse and doesn't need to worry about any of these complex issues of causation and so on. But it doesn't say what the insurer can then do, because the insurer is still able to be subrogated to the rights of the victim and sue somebody else, to pursue whoever the legally liable party is. That might be the person in the car, it might be the manufacturer, the programmer, or all sorts of other parties who could be involved, and the Act doesn't answer any of those issues. So it kicks the can down the road up to a point. But that is the furthest that, as far as I'm aware, any country has gone in trying to work out the liability structure for AI, and it is worth bearing in mind that this is just one area, only self-driving cars, whereas AI is now being used across the board in all sorts of different industries, whether it is law, medicine, insurance, music and so forth. So it is only a very small, very narrow attempt at solving these issues of responsibility.

Private law isn't the only issue; there is also responsibility in criminal law, and again AI raises difficult questions, not just of causation but of intent. I'll illustrate this with another story. A couple of years ago a group of artists in Switzerland created an AI program called Random Darknet Shopper. They gave it an allowance of Bitcoin every week and allowed it to go onto the dark web, which, as you may know, is an area of the internet where illegal things can be purchased, and they set the program free to buy whatever it wanted. So it did: it bought some spyware, it bought some cigarettes and, of course, some ecstasy tablets. The artists put all of these things on display at an art gallery and said, look, isn't this interesting, look what the computer program has purchased. This came to the attention of the local police force, the St Gallen police force, who issued a warrant against the artists and the computer system.

Now, how would this have worked? The computer system doesn't have legal personality; there is no ability to punish it, at least at the moment. It doesn't have the same mental structures as humans, so it is very difficult to identify mens rea. We have the actus reus, the guilty act, but the intent element is very difficult to identify with regard to a computer system. And yet there is potentially a gap, because you have AI systems making meaningful choices which, if they were made by humans, would attract criminal liability, but it is very difficult to hold a human responsible for them. Unless there is some form of answer to this question, we may have a situation where there are gaps, and those kinds of gaps can lead to public mistrust.
Responsibility for harmful acts is not the only aspect of AI. There is also the question of how beneficial, creative acts are to be accommodated under the legal system, and right now legal systems are not set up to deal with non-human creativity. Up on the screen you will see two pictures, and what both of them have in common is that they were not created by a human. The monkey you might be familiar with from the monkey selfie case. A British photographer, David Slater, set up a series of cameras in the jungle, and one day a monkey picked up one of those cameras and took that picture of itself, the one up on the board. He thought this was great and, like the artists in Switzerland, he put it in a book and told the story underneath of the monkey taking a picture of itself. What he didn't realise was that an animal rights charity, PETA, would then sue him in the US courts and say that, because the monkey had undertaken the creative act of composing that picture of itself, the monkey should be entitled to the proceeds of the picture; and it turned out to be a very valuable picture, with hundreds of thousands of dollars, maybe millions, at stake. This went all the way up to the US Court of Appeals. You might be pleased to hear that the monkey lost, but it lost only after the parties had settled; the court still ruled on it, but the photographer who was actually sued had already agreed to pay out to the animal rights charity. So we can see that these questions, even with existing phenomena like the animal kingdom, are still challenging ones; we still have great difficulty, in legal terms, in accommodating non-human creativity.

The other picture on the screen was created by a generative adversarial network, a modern type of AI; I won't go into how it works. The key point about this picture is that it sold for more than 400,000 dollars, making it perhaps the most expensive piece of AI-generated art. I don't think this is just a gimmick. I think AI is increasingly capable of creating important creative works, not just in the field of the arts but also in other very important fields like pharmaceuticals; AI can now help develop new drugs, and so on. So the output of AI has very definite economic value, but as to how it is treated in legal systems, we have uncertainty. Well, under Canadian law we have an answer. You will see the passage from the Canadian Copyright Act: protection is accorded to citizens, subjects or persons ordinarily resident in a treaty country, and there are various criteria. So we have the answer: AI is not, as things stand, a legal person, so it cannot fulfil any of those tests, and as a result there is no protection for the creations of AI under Canadian law.

That brings me to my next point: is it worthwhile having legal personality for AI? At some point we might want to solve these problems of responsibility and rights by giving AI its own personality, by giving it the same suite of faculties accorded to other non-human legal persons. Think of corporations: they can sue and be sued, they can hold property in their own right, they can play a very important economic role, and yet they are not natural persons; they don't have feelings or thoughts in the same way that we do. So there is precedent for non-human entities being given legal personality.
This is Sophia the robot, and in October 2017 Sophia was made a citizen of Saudi Arabia. Some said this was pretty ironic, given that women do not have full rights in Saudi Arabia: women could not drive cars or leave the country without male supervision. The idea of a robot being given rights is something a lot of people have profound disagreements with, and in fact, even aside from the Saudi Arabia point, people find it very challenging; you may well find challenging the idea that an AI system should be given any form of legal personality. But even though the Sophia example is a bit of a gimmick, clearly done by Saudi Arabia as a publicity stunt, with no real content to this so-called citizenship, I think there are some serious legal questions that need to be asked about personality, and they shouldn't just be dismissed out of hand.

This was actually the approach that the European Parliament took in a resolution of February 2017, when it proposed, in order to solve the question of who is responsible when AI causes harm, "creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause". So the idea of giving AI legal personality is not purely a gimmick; it is a serious proposal, and I think one which ought to be considered. It is actually one on which I think there will be a lot of movement in the next five to ten years, because, as you may know from your corporations classes, it only takes one government, one jurisdiction, to accord AI legal personality, or indeed to create any kind of new legal structure, for others to follow very soon afterwards. You can create whatever kind of legal person you want. For example, in India temples can be recognised as legal persons, even though they wouldn't traditionally be recognised as such under other legal systems; but we have doctrines of comity in the conflict of laws across the world, and there was a case in the UK where the courts recognised an Indian temple as having rights over a certain artifact taken from it, so that it could sue on the basis of those rights. What I would suggest is that it will probably be a couple of small jurisdictions which move very fast and create AI legal personality with a view to fostering an ecosystem, much in the same way the UK has legislated on self-driving cars. We might see this in tax havens like the BVI or the Cayman Islands; we might see it in a nation like Singapore, which has very advanced policies on AI.
And when that happens, other countries will follow, so it is worthwhile thinking about it right now.

The next problem with regard to AI is the field of ethics, and within this there are two subsets of problems. Firstly, how should AI take difficult decisions? If we are delegating important choices to artificial intelligence, what parameters should apply in doing so? At the moment these choices are often taken by humans, and sometimes we follow unwritten moral rules, but there may now be a requirement, or a necessity, that these rules be codified in some way, that these principles be set down. The classic example here is a problem you might know from philosophy studies: the trolley problem. You have, say, a train which is running down the tracks out of control, and you are standing by the side of the tracks. If you do nothing, it will hit five people and kill them; if you switch the tracks, it will hit only one person and kill that person. So you have the moral dilemma of acting and killing one person but saving five, or doing nothing and allowing five people to die. There isn't necessarily a right or wrong answer to this by any means, and there are all sorts of permutations: maybe the one person is a member of your family; maybe the five people are nuns, or children, or criminals. You can change the scenario however you want. Now, some ethicists will say this never happens in real life, so we don't need to worry about it; self-driving cars never have to choose between five children and two nuns, so we can just ignore it. But I think they are missing the point, because this is a thought experiment which is deliberately extreme, and all it is illustrating is trade-offs, a choice between two difficult things, and we face trade-offs in everyday life constantly. To give one example of where AI may well be used and can face similar trade-offs: in an accident and emergency ward, AI might well be asked to decide who should be treated first. This is very difficult, but it can be quantified, and doctors often make mistakes when they are tired or acting under pressure. So you might want to ask an AI system, in order to give people the best health outcomes: can you give me an order in which people should be treated, based on their symptoms and all the other parameters? The AI system then has to decide who is going to be treated first. Is it the old person? Is it the young person? Is it the patient who is critically ill but likely to die anyway? These are ethical choices, so if we are going to delegate these choices to AI, we need to start thinking about how it should take them.
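As a purely illustrative aside, not from the talk, the Python sketch below shows what delegating such a triage ordering to software actually involves: the patient labels and the weights given to severity and to the likelihood of benefiting from treatment are invented numbers, and choosing those weights is exactly the kind of ethical parameter-setting being described here.

```python
# Hypothetical, over-simplified triage-ordering sketch. The scoring weights
# are invented for illustration; picking them is itself an ethical choice.
from dataclasses import dataclass

@dataclass
class Patient:
    label: str
    severity: float              # 0.0 (minor) to 1.0 (critical)
    chance_of_benefit: float     # estimated probability treatment succeeds

def priority(p: Patient, w_severity: float = 0.6, w_benefit: float = 0.4) -> float:
    # Higher score means treated sooner. Whether severity or likely benefit
    # should dominate is a value judgement, not a technical fact.
    return w_severity * p.severity + w_benefit * p.chance_of_benefit

ward = [
    Patient("elderly patient", severity=0.9, chance_of_benefit=0.2),
    Patient("young patient", severity=0.7, chance_of_benefit=0.9),
    Patient("critically ill patient", severity=1.0, chance_of_benefit=0.1),
]

for p in sorted(ward, key=priority, reverse=True):
    print(f"{p.label}: priority {priority(p):.2f}")
```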
There are further issues, in terms of how AI should take decisions, around explainability and transparency: the extent to which you can question and seek to understand what the AI has done. I'm going to come on to discuss those in due course. The next question with regard to ethics is whether there are any decisions that AI should not take at all. People often say killer robots, autonomous weapons: we should never give life-or-death decisions to AI. Legal decisions, people might say, and I know the people in the Conflict Analytics Lab may have some disagreements with that, but we have to bear in mind that these are societally important questions, and some people will say that even if you have a great AI system, we still don't want to use it; we still want imperfect humans making these choices. In Europe, with the GDPR, there is now a piece of legislation which might require that humans be in the loop, that humans be involved in decision-making; it might potentially bar AI from taking certain decisions unless particular caveats apply. So these are not just philosophical questions; they are real legal questions that we are going to have to start engaging with right now.

The third thing I'm going to talk about is how we can solve all of these problems. There have been a lot of suggestions from the private sector about how to go about doing so; in fact the private sector was one of the main leaders in thinking about ethical codes for AI, as shown up here. This is understandable, because the private sector firstly wants to avoid public backlash, and secondly will often seek to avoid government regulation, because when governments regulate they might do things that the big tech companies don't like. So the big tech companies came together in an organisation called the Partnership on AI. It was founded by Microsoft, Google, IBM, names you will be familiar with, and it now involves lots and lots of companies, as well as players from the public sector and NGOs; the ACLU is a member of the Partnership. Within the Partnership they have come up with various propositions, thematic pillars as they are called, and they operate at a very high level of generality: they say things like AI should be used to benefit people and AI should not be used to cause harm, and what do these things actually mean? Part of the point is that a large part of this is virtue signalling: these companies are trying to signal to the public and to governments, don't worry about us, don't worry about how we behave, you don't need to regulate us. But we have seen in the past that industry self-regulation is often unsustainable; it can lead to huge moral issues in the long run, and it can lead to a backlash. One thing to point out about the Partnership is that Baidu, the major Chinese tech company, joined in 2018, and I'll come back to why China's engagement with the regulation of artificial intelligence is important. One thing to note is that, perhaps contrary to what some people might think about China, it is actually very interested in AI regulation, and the fact that Baidu joined the Partnership, which is predominantly a Western-led organisation, is an example of that.

As well as the private sector initiatives, there have been lots of national initiatives, mostly from 2017 onwards; if the private sector started around 2016, the national initiatives started around 2017, and there are various ones up here. I'll come back to Canada in a few slides' time, but I said I would talk about China.
China's approach to AI regulation is, I think, extremely interesting. If I were to pick one jurisdiction to watch over the next few years, it would be China. In 2017 China released a national AI development plan in which it said that China wants to become a leader in the development of AI technology. That was fairly widely reported, and people noted it. What was less widely reported was that China also said it wants to be a world leader in the regulation of that technology. So they very much see the development of the technology and the regulation of the technology as a two-pronged strategy, and the two are intertwined. China has taken some important steps in pushing this forward at the national level, including setting out a white paper at the beginning of 2018 which I believe is yet to be translated officially into English, although I have read an unofficial translation; it goes into a lot of detail about things like privacy, algorithmic decision-making and transparency, the same kinds of things that we are thinking about in the West, but they are really pressing forward with this. The other thing with China is that it is seeking to influence the international discussion of AI technology as well. I think the way they see it is as a piece of soft power, the idea being that if you are writing the rules, then you have a great deal of sway internationally. We saw this with the US following the Bretton Woods accords, when the international monetary system was designed: the US became the heart of it, it was where all of the major institutions and the major academics were, the place the rest of the world had to come to, and as a result the US had tremendous economic power; not solely because of that, but it certainly contributed to it in the last part of the twentieth century and going into the twenty-first. With the US stepping back from its international rule-making role, as we now see under the current administration, China is very much looking to step into the gap, and one of the areas in which it is seeking to do so is the international regulation of artificial intelligence.

Up on the screen is a schematic produced by Tim Dutton, whom you may have come across, showing all of the initiatives, both the national development plans on the economic side and the regulatory ones. Even though it is fairly recent, it is already out of date: Singapore, just two days ago, released a set of ethical guidelines on AI governance which are very detailed and definitely worth having a look at. The key takeaway from this slide is that countries are moving very fast, but mainly at the level of proposals; as yet there is not much concrete detail. You will also see, and this is partly because it was written by somebody from Canada, that it rightly notes that Canada was one of the first countries to come up with such a strategy.

The national strategies are not the only area where rule-making is being designed. There are also international organisations which are not so prominent politically speaking, but which have an enormous impact on the way that technologies are regulated and made interoperable around the world. There is the IEEE, which sets a lot of technical standards; perhaps the most famous is Wi-Fi, a ubiquitous piece of technology which exists because an international standard was set, without that standard ever being legally binding. Then there is the ISO, the International Organization for Standardization.
Neither the IEEE nor the ISO can make binding law, but they make recommendations. They have expert advisory panels made up of representatives from all sorts of industries around the world in the various areas, and they come up with shared proposals and shared standards. These exist in all sorts of areas, whether it is in kettles or in cars: the fact that a car in Peru works in the same way as a car in France is a result of standardisation by these types of bodies. Both of them are now working on artificial intelligence. The IEEE has come up with a draft of its Ethically Aligned Design principles for AI; it is a long document, worth having a look at, now in its second iteration, with a third to come in due course. And the ISO has set up a new committee, SC 42, specifically for artificial intelligence. I mentioned that China is taking a major role: the first meeting of this new ISO committee was held in China, and its first chairman is from Huawei. Now, Huawei has been in the news a lot in the last couple of months, so that is, to say the least, mildly controversial; maybe it wouldn't have happened if the body had been formed more recently, but the decision about the chairmanship was taken in late 2017, before that controversy was known about. Maybe that is just another example of the kind of leading role that China is looking to take in international regulation.

Turning to Canada: the Canadian government is very interested in using AI to augment its own services, to be able to make distribution decisions, justice decisions and so on and so forth, and the Chief Information Officer has released a few policy announcements which I think are worth having a look at, because he and his colleagues have realised that this gives rise to some of the problems I have identified, particularly in terms of the ethical aspects of decision-making and the need to maintain public trust in this technology. What he has said is that they are going to release a directive on automated decision-making which will apply to federal departments using AI. So it does not apply to everyone, it does not apply to the private sector, but it would apply to governmental uses of artificial intelligence, and I will look at a few of its provisions in a moment. It is currently in draft form, but there is a fairly advanced draft of it.

What about internationally? Canada, particularly through Justin Trudeau, who is very interested in the use of technology, has teamed up with President Macron in France, again one of the world leaders most interested in the technology, and France and Canada agreed during 2018, with the announcement made right at the end of the year, that there is going to be an International Panel on Artificial Intelligence. It is going to look at scientific advances, economic transformation, human rights, collective impacts on society, geopolitical developments and cultural diversity. So, a nice-sounding thing; those all sound great, and we would all agree they are important. As to what it is actually going to do, we don't know. Will it just be replicating what some of these other bodies are doing, without adding anything? Is it going to replace other bodies, or will they be consolidated? We just don't know. This is kind of symptomatic of where we are at the moment.
There is a lot of effort going into making announcements and getting together groups of experts, but in terms of output, in terms of solving these questions of liability, responsibility, rights and indeed ethics, we are still not quite there in terms of seeing proper rules. But who knows, this might be the source of them.

To look in a more granular fashion at where the regulatory codes are tending, what I have done on the screen is cross-reference the various things that a lot of the proposed regulatory codes are looking at. This is by no means all of the different regulatory codes that have been proposed, there are far more, but I think these are some of the main ones, and you will see that on a substantive level there is quite a lot of agreement between all the different bodies as to what the rules should be; there are lots of ticks in all of the different boxes. This, I think, shows that on a substantive level there is a coalescing of opinion as to what kinds of requirements we want to have for AI. I'm going to focus particularly on one of them, explainability and transparency, which I mentioned before in terms of the ethics of how AI makes decisions, because you will see that it appears in every single one of the proposed regulatory codes; it was even in Singapore's from two days ago. So how do we do it in practice? How do we impose this requirement on AI? I think one of the key things in solving these issues is to make sure that what we want to achieve from an ethical perspective can also be achieved technologically. Defining those policy choices is one thing, and that is a political question; we then need to try to achieve them from a technological perspective, and there is no point stipulating an ethical standard that makes no sense technologically speaking.

There are a couple of examples of attempts to do this with regard to explainability and transparency. You might have heard of the GDPR, the General Data Protection Regulation, which came into force in Europe in May 2018. But it didn't just come into force in Europe, because it has extraterritorial effect: if any Canadian entity or any Canadian individual is processing data which comes from Europe, then they are subject to the requirements of the GDPR. Even if it is as small as somebody clicking on a link on a website and that cookie being recorded, if that person is in the UK, or Scotland, or Germany, then whichever company owns the website is subject to the GDPR, and the processing of that data is within its remit. This could have an enormous economic impact: fines of up to 20 million euros or four per cent of worldwide turnover, whichever is higher. So this is a very important thing, not just for Canadian companies but for companies around the world.

The GDPR has some requirements which look like a right to explanation. The interesting thing about these provisions is that they were designed for a time before AI was, not so much in existence, but being rolled out in any kind of full-scale manner. The GDPR had been under negotiation since around 2012, but this particular provision, which we see in Articles 13 and 14, is actually taken almost word for word from the Data Protection Directive of 1995. So we are going way back before the current AI spring, to a time before the modern AI technologies, machine learning, neural nets and so on, were being used, before they were widely known about.
I think the fact that the law is somewhat out of date is shown by the language used. What it requires is that when a decision is made about a person in an automated fashion, which I think AI decision-making would qualify as, and that decision has sufficiently important consequences (a loan decision, a credit decision or an employment decision could easily be one of them), then meaningful information about the logic involved has to be provided to the individual. So what is "meaningful information"? What does that mean? We don't know; there is no definition, it hasn't gone through the courts, it hasn't gone through any of the regulators, so it is really very unclear. The use of the words "logic involved" is important, because "logic" to me sounds like they were thinking about automated programs, expert systems, decision trees: if this, then that; if not, then something else. That is logic. But AI doesn't work using logic: neural nets work using weights, they are trained using stochastic gradient descent, and they are not amenable to being set out in standard logical form. They are often referred to as black boxes, the reason being that it is very difficult to explain in a human sense, or to predict in advance, what the system is going to do. This is not to say the technology is not helpful; we can have extremely interesting and important choices and decisions made by AI, as we saw with move 37 in game two and lots of other examples. But explaining them can be very difficult, and I think the EU is going to have a real moment of reckoning when it comes to applying this in a practical sense.

What about Canada? I mentioned earlier the draft Directive on Automated Decision-Making, which, if enacted, is going to apply to the federal government, and it uses very similar language; I think they probably took it from the GDPR, because it requires that a meaningful explanation be provided to affected individuals of how and why the decision was made. So again we see "meaningful explanation", but this is designed a bit better, I think, because it doesn't talk about logic, so it avoids that outdated set of terminology, and it also gives a bit more content than the European law: it says "how and why". That is maybe a sort of causation-based analysis, and what might flow from it is a counterfactual analysis which you could apply. But fundamentally we still don't know how the Canadian directive will be applied, or even whether this will be its finally enacted form. So again we have significant unknowns, and the more that people who understand the technology are able to feed into the legislative process, the better, in terms of creating legislation which is not just ethically acceptable but also effective.
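As an illustrative aside, not from the talk, the Python sketch below shows the contrast being drawn here; the loan-style feature names, the synthetic data and the use of scikit-learn are my own assumptions. A decision tree's decision really can be dumped as the kind of if-then logic the 1995-era wording seems to assume, whereas a small neural network's "reasoning" is only a set of learned weight matrices, and one candidate for a how-and-why style explanation is a crude counterfactual: the smallest change to an input that flips the outcome.

```python
# Sketch contrasting rule-style "logic" with black-box models, plus a crude
# counterfactual explanation. Synthetic data; feature names are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))                 # columns: income, existing_debt (scaled 0-1)
y = (X[:, 0] - 0.8 * X[:, 1] > 0.2).astype(int)      # synthetic "approve the loan" rule

# 1) A decision tree: its decision is literally "logic involved" and can be
#    printed as explicit if/then rules.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["income", "existing_debt"]))

# 2) A small neural network: same task, but the "explanation" is only a set
#    of learned weight matrices - a black box in human terms.
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])

# 3) A crude counterfactual: the smallest increase in income that would flip
#    a refusal into an approval for one hypothetical applicant.
applicant = np.array([[0.30, 0.60]])
for bump in np.linspace(0.0, 0.7, 71):
    if net.predict(applicant + [[bump, 0.0]])[0] == 1:
        print(f"Refusal becomes approval if income rises by {bump:.2f}")
        break
```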
Zooming out a bit from the specific requirement of explainability and transparency, there are a couple of lessons we can take from other developing industries when it comes to regulating AI. My general answer to "how do we regulate AI" is not to write the regulations first, not to go straight to writing the rules; that is what the tech companies will try to do, and some governments are going straight to setting out these six principles here or those twenty-three principles there. The thing we need to do, in my view, is to design the institutions which are capable of writing the rules: institutions with democratic legitimacy, or political legitimacy in whatever system you are in, so that people trust them; institutions able to enforce the rules effectively; and institutions with the understanding and the competence to create rules which fit the technology. These are the types of things which need to be done first by governments; the writing of the rules is the secondary process.

To give one example of where this was done well and badly: in Europe, genetically modified foods are basically non-existent. There is one crop licensed in Europe, a crop in Spain, which can be produced using GM, and this is because in the seventies, when the technology was developed, the conversation was very quickly taken over by interest groups who were against GM food. They said it was unnatural, that it was harmful, that it was dangerous. There is no scientific evidence whatsoever that GM food is dangerous or harmful, and yet they were able to persuade the public and politicians that it was, and that it was so undesirable that it shouldn't be used at all. It became completely economically unviable for the biotech companies to use GM food in Europe, even though it is less costly, uses less water and is better for the environment than many natural strains; there, it is completely unacceptable. Compare that to the situation in Canada, which I expect is similar to the US, though someone will tell me otherwise. In the US, what the FDA did in the 1970s was to set out a very deliberate programme of educating the public and getting them involved, of having citizens' juries and consultation panels which informed and educated people about the new technology. They involved the population in its regulation, and as a result, when they set down regulations, those regulations had political legitimacy; people trusted them, people felt safe when the technologies were being used. Seventy-eight per cent of corn in the US is produced using genetic modification. So we can see a vast disparity in the acceptance of a new technology based on the way you go about regulating it, and that is why designing the institutions is a really crucial part of regulating artificial intelligence.

My next suggestion is that, ideally, we would have rules set at an international level. The reason for this is that, as we have seen from the UK's current situation, where it is trying to break away from common regulatory standards, there can be enormous barriers to trade; non-tariff barriers, in the form of various different standards between different countries, can be a huge difficulty. We saw the protracted negotiations over CETA, and some of the things which were agreed, in terms of which standards would be dropped or recognised as between the different jurisdictions, are still very controversial. We don't have that problem with AI at the moment, because we have a blank slate: most countries don't have any rules which apply to it whatsoever. So we have an opportunity to build common rules and to create a situation where you can have increased cross-border trade, which, on standard Ricardian economics, increases GDP and increases consumer welfare. So there are beneficial things that all states would be able to enjoy if they were able to coordinate regulation at a high level. Of course, we don't live in a utopia: there are political differences, there are economic power plays which may well prevent this, but I don't think it is impossible. And one of the things to bear in mind about international law is that it is not binary in nature. We don't just have binding laws or no laws; there are different intensities of norms, and different techniques which can allow for international coordination without necessarily requiring that every country be bound by a rule-making body.
You can have model laws, like the UNCITRAL Model Law on arbitration, which allow countries to coordinate by enacting a given law in their own legal systems; nobody is telling, say, Ghana and Scotland to have the same law of arbitration, and yet they have chosen to do so, and that allows for much better cross-border trade because you have the same system of laws. In the EU we have different intensities of laws: regulations, which are automatically binding on all member states; directives, which give individual countries a choice as to how they are transposed; right down to less binding things like decisions and recommendations. All of these different flavours of law, these different tools, could be used in creating an international system; it is by no means an all-or-nothing thing with regard to AI.

To give one example of where there has been a successful use of international rules to regulate a new technology: space law. In the 1960s, right at the height of the Cold War, when the US and the USSR were as close as they could possibly be to destroying each other, they nonetheless agreed to regulate the use of outer space. They agreed the Outer Space Treaty, and it wasn't just a bilateral agreement; it was internationalised under the auspices of the UN. It started off with some fairly high-level but nonetheless meaningful propositions: you cannot put nuclear weapons in space; no country can colonise the Moon or Mars. These were live questions at the time, when space technology was new and it was thought these things might well be possible. And even though it is high-level, it set the framework for a lot of the modern coordination that we see today. The International Space Station wouldn't exist if it weren't for the Outer Space Treaty; the ability to have satellites which are compatible with each other, which don't crash into each other when they are in orbit, the satellites that support the internet, GPS, the sorts of services we take for granted today: none of that would be possible were it not for the steps taken in the Outer Space Treaty in the 1960s. So even though we may be more familiar with failures, in the UN Security Council and elsewhere, there are systems which exist internationally where there can be successful regulation. AI is by no means exactly the same as space, but I think there are lessons we can learn, and there is cause for some optimism.
So, to conclude: I have suggested that AI is unique; that it raises problems of responsibility, rights and ethics; that I don't think governments should leave these questions just to the tech companies; that international coordination is very important to success; and that right now we have a real opportunity to shape AI regulation. As I see it, there are two choices that businesses, individuals and governments could take. Either you do nothing, you just allow the technology to develop and regulation to become fragmented, it becomes very difficult to understand or control, and you have barriers to trade and all of the issues that we see in other areas; or we act now, proactively and strategically, in order to regulate AI. I think the problem is clear, and we have identified the tools at our disposal. The question is not whether we can, but whether we will. Thank you very much.

[Moderator] Thank you so much. [Audience question - inaudible]

Good question. I'll repeat it for anyone who didn't hear. The question is, with regard to the European Parliament's resolution that suggested creating a type of electronic personality for autonomous robots, whether the definition of autonomy is one like mine or whether they have their own. The thing I should have flagged about this resolution is that it was non-binding in nature; it is just a recommendation. The European Parliament doesn't on its own make legislation; it has to work in conjunction with the other European rule-making bodies, including the Commission, which is empowered to draft legislation, and this was a set of suggestions, a menu, for the Commission. As a result of it just being a menu, there wasn't a full definition of autonomous robots, and that was part of the issue. We see this time and again in regulating AI: people often try to skip the definition stage, because it is difficult, because it is very hard to get agreement even between experts as to what AI means and what autonomy means. That said, I suspect the term "autonomous" was meant along the lines of what I was suggesting, in that it goes beyond pure decision-making systems which are pre-programmed. The reason I think that is that what the European Parliament was specifically trying to address was the liability questions for autonomous robots, and those liability questions don't arise if you have a pre-set series of instructions, because you can just trace the instructions back to whoever put them in. So I think what they were getting at was probably similar to my view, but it was left unspecified, and, partly as a result of that and partly as a result of pushback from lots of people who said we should never give AI legal personality, the idea seems to have been somewhat abandoned. So I think it is unlikely, and certainly nothing has been adopted, that there is going to be a creation of legal personality for AI any time soon; but that is not to say individual countries could not decide to do so. They still have the competence to do that.

[Audience question - partly inaudible] You talked earlier about how that AlphaGo move actually taught humans something and opened up a new level of creativity. Have you seen any examples in the legal field where AI has taught or spurred innovation so far?

I think there are examples where AI, as a general point, has spawned innovation. You are sitting next to one of the legal innovators right now, so he is probably better placed to speak to that.
If I'm honest, I think that, as a general rule, the ability of AI to spot hidden patterns in data and to come to conclusions in unexpected ways, but ways which achieve preset goals, is the kind of thing that can be applied in legal systems, in terms of spotting patterns between different cases, which is what the Conflict Analytics project is seeking to do. The same kind of insight can be applied in all sorts of other domains, not just law. In medical diagnostics we see similar things, where AI is now able to identify certain types of cancer better than human doctors can. So it is a type of competence, a type of skill, which is not unique to law by any means. But I think your point about the combination of AI and humans is a very good one. I certainly don't think AI is going to be taking over everything any time soon, and what a lot of experiments have shown, as you suggested, is that the most powerful systems are not AI on its own or humans on their own, but rather a combination. There is a well-known metaphor of the centaur, the idea of humans and horses being combined, which has been applied to AI systems, and the suggestion is that this is where we are likely to see some real leaps forward in the next few years.

[Audience question - partly inaudible, about the barriers to developing international rules and whether regulation is more likely to emerge nationally first]

I think it is not just a combination of the two, actually. Some of the nations which have the most advanced programmes in terms of AI regulation are simultaneously looking outward, and not just inward, to try to be world leaders in this, and I think they have rightly taken the policy choice that if they are going to go to the trouble of setting regulations, they want everybody else to be adhering to those AI regulations; you don't want to be an outlier in this sense. So it is no coincidence that Singapore announced at Davos that it was creating these new AI ethical guidelines, and no coincidence that China is becoming increasingly involved in the International Organization for Standardization.
I think for lots of countries, part of their national AI regulation plans also involves international ones. The bigger choice is not so much between national and international regulation as between more regulation and no regulation. The US is very much taking the view that no regulation, or at least no federal regulation, is a good thing: Ajit Pai, the chairman of the FCC, announced in November that at the federal level the US was really not going to be doing anything; it was going to wait and see. That is very much the opposite of the approach of China, Singapore and, I think, Canada as well, as we have seen with its national and international programmes.

As for the biggest barrier, I think on the public-facing side it will be trust: the requirement that some people have, psychologically, at the moment, that there be a human decision-maker. A lot of people place a great deal of importance on this, and I think we may well see fundamental rights challenges. I didn't talk about this much in the presentation, though it was mentioned on the slides, but I think we may well see, under the Human Rights Act in the UK and in Canada, challenges to decisions made by AI on the basis that they unlawfully interfere with a person's property or with a person's right to a fair trial. We have seen this in the US; there have been some fundamental rights challenges already. In a case called Loomis in Wisconsin, an offender challenged a sentencing decision which was made on the basis of a black-box automated decision-making process, and the Wisconsin court said that was fine; they said it was actually compatible with his right to a fair trial. But I'm not sure it would be decided the same way if it happened in Canada or in Europe, particularly with the GDPR. There was another case in the US where teachers' jobs were decided on the basis of algorithmic scores which, again, were a black box; this is the Houston teachers case, and in that case the Texas district court went in a different direction and held in favour of the teachers. The case never went to trial, but the court said that the black box was unacceptable because it was taking away, I think it was the Fourteenth Amendment right, their property interest in their jobs. So I think that is going to be the area where the difficulty lies: where AI is legally imposed upon people, as opposed to them choosing to use it. That barrier is by no means insurmountable; as we have seen with GM foods, if you get people comfortable with a technology then they can be very happy with it, but it is important that we shouldn't just be developing the technology; we should be working on public trust in the technology at the same time.

[Audience question - inaudible, about how AI fits within product liability rules]

Well, the difficulty with AI under the current product liability rules is that it is very difficult to say whether it is a product or not. Generally, products are defined in statute as being things which are physical; certainly in Europe that is the case. In the US, under the Restatement (Third), there are some suggestions that a computer program might be covered, but it is ultimately not clear. The way product liability works, fundamentally, at a philosophical level, is that it assumes a deterministic process: once something rolls off the production line it doesn't change, and so you can in theory trace back what the product has done to some kind of fault in the decision-making process, whether that is a design defect in terms of the overall concept,
whether it is a manufacturing defect in terms of the actual execution of that concept, or whether it is a failure to warn. My understanding is that the Canadian system is broadly similar to the US one in those terms. When you have a dynamic system like artificial intelligence, which changes after it has gone online as well, which is designed to change, which is designed to rewrite itself, it becomes quite difficult to tie that into the same kind of responsibility structure. You certainly could do it, but you may well then have potential designers of AI saying, well, we are not going to release this potentially very helpful AI because we don't want to be on the hook for it; so you might have defensive practices springing up as a result. So it could be done, whether in tort or through other liability-style rules; there are ways of tying back what AI has done to existing persons. But my big question is whether that is, in the long run, the right thing to do from an economic perspective, or whether it might be better to hive off the liability into a new legal person. That is not the only solution, but I think it is definitely one worth contemplating.

[Audience question - partly inaudible, asking why the US approach differs from China's]

The US economy and its high-tech sector are set up somewhat differently from the Chinese economy. In the Chinese economy, as you know, there is a high level of coordination between the government and the tech giants, the BAT companies, and they are very much symbiotic, whereas the US approach, with Silicon Valley and various other hubs of innovation, has been much more laissez-faire. So part of the difference is just a cultural one, and the Chinese economy is already set up to have a high level of government regulation and coordination. The US federal picture is not necessarily reflected across the whole of the country: in some states we do see AI regulation now being developed, in California for instance, and there are some requirements coming in in New York, a transparency-type requirement, so the situation is not totally uniform across the US. But part of it is actually driven by the national government. In late 2016, in the last months of the Obama presidency, the Obama administration released two reports, both of which called for the US to take a lead in developing AI and in regulating AI. That was then largely abandoned by the Trump administration, and so it is partly based on that overall White House direction of travel, taking a laissez-faire approach across the board for everything, whether it is pollution or anything else. Those are the kinds of motivations going into US policy.

[Moderator] I just want to thank you, and I guess you have some time for [inaudible].

Thank you very much; thanks for having me.