Tiffany C. Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties with getting AI to forget. In this second part, we continue our discussion of GDPR and privacy, and then explore some cutting edge areas of law and technology. Can AI algorithms own their creative efforts? Listen and learn.
But what's often missing is someone who actually knows what that means on the technical end. For example, all the issues that I just brought up are not really in that room with the lawyers and policymakers, unless you bring in someone with a tech background, someone who works on these issues and actually knows what's going on. So this is something that's not just an issue with the right to be forgotten or just with EU privacy law, but really any technology law or policy issue. I think that we definitely need to bridge that gap between technologists and policymakers.
One option is giving it to the designer of the AI system, on the theory that they created the system, which is the main impetus for the work being generated in the first place. Another theory is that the person actually running the system, the person who literally flipped the switch and hit run, should own the rights because they provided the creative spark behind the art or the creative work. Other theories exist as well. Some people say that there should be no rights to any of the work, because it doesn't make sense to provide rights to those who are not the actual creators of the work. Others say that we should try to figure out a system for giving the AI rights to the work. And this, of course, is problematic because AI can't own anything. And even if it could, even if we got to a world where AI is a sentient being, we don't really know what it would want. We can't pay it. We don't know how it would prefer to be incentivized for its creations, and so on. So a lot of these different theories don't perfectly match up with reality.
But I think the prevailing ideas right now are either to create a contractual basis for figuring this out — for example, when you design your system, you sign a contract with whoever you sell it to that lays out all the rights neatly, so you bypass the legal issue entirely — or to think of it as a work-for-hire model. Think of the AI system as just an employee who is simply following the instructions of an employer. In that sense, for example, if you are an employee of Google and you develop something, a really great product, you don't own the product; Google owns that product, right? It's under the work-for-hire model. So that's one theory.
And what my research is finding is that none of these theories really makes sense, because we're missing one crucial thing. And I think the crucial point they're missing goes back to the very beginnings of why we have copyright in the first place, or why we have intellectual property, which is that we want to incentivize the creation of more useful work. We want more artists, we want more musicians, and so on. So the key question, if you look at works created by non-humans, isn't whether we can contractually get around this issue. The key question is what we want to incentivize: whether we want to incentivize work in general, art in general, or if for some reason we think that there's something unique about human creation, that we want humans to continually be creating things. And those two different paradigms, I think, should be the way we look at this issue in the future. It's a little high level, but I think that's an interesting distinction that we haven't paid enough attention to yet when we think about the question of who should own intellectual property for works created by AI and non-humans generally.
So in my personal opinion, I believe if we do get to that point, if there are artificially intelligent beings who are as intelligent as humans, who we believe to be almost exactly the same as humans in every way in terms of having intelligence, being able to mimic or feel emotion, and so on, we should definitely look into expanding our definition of citizenship and fundamental rights. There is, of course, the opposite view, which is that there is something inherently unique about humanity, and something unique about life as we see it right now — biological, carbon-based life. But I think that's a limited view, and not one that really serves us well if you consider the universe as a whole and the large expanse of time outside of just these few millennia that humans have been on this earth.
And this is something that, you know, maybe is giving too much moral responsibility to the day-to-day actions of most people. But if you consider that any small action within a company can affect the product, and any product can then affect all the users it reaches, you can see this easy scaling up of your one action to an effect on the people around you, which can then affect maybe even larger areas and possibly the world. Which is not to say, of course, that we should live in fear of having to decide every single aspect of our lives based on its greater impact on the world. But I do think it's important to remember, especially if you are in a role in which you're dealing with things that might have a really direct impact on things that matter, like privacy, like free speech, like global human rights values, and so on.
I think it's definitely important to consider ethics in technology. And if we can provide training, if we can make this part of the product design process, if we can make this part of what we expect when hiring people, sure, I think it would be great. Adding a tech or information ethics course to the general computer science curriculum, for example, would be great. I also think it would be great to have a tech course in the law school curriculum as well. Definitely both sides can learn from each other. We do, in general, just need to bridge that gap.
These tech companies and their responsibilities, or their duties, towards users, towards movements, towards governments, and possibly towards the world and larger ideals. So it's a really interesting new initiative, and I would definitely welcome different feedback and ideas on these topics. If people want to check out more information, you can head to our website at law.yale.edu/isp. And you can also follow me on Twitter: @tiffanycli, that's T-I-F-F-A-N-Y-C-L-I. I would love to hear from any of your listeners and to chat more about all of these fascinating issues.