AI’s future for media and tech companies

 

Managing risks and maximizing opportunities in intellectual property

 

The new capabilities of artificial intelligence have already had a major influence on media, entertainment and related technology companies. For example, disputes over the use of AI were a major driver in last year's strikes by the Writers Guild of America and SAG-AFTRA. But we are only at the earliest stage of grasping the full implications of how these industries will be changed by AI, particularly generative AI through platforms such as ChatGPT and Google's Bard. How these tools will be used and controlled for the benefit of media, entertainment and tech companies may be their most pressing challenge for the future.

 

In a panel discussion about the use of AI in the industry, held at Grant Thornton's New York offices late last year, Grant Thornton Senior U.S. Media & Entertainment Advisor Howard Homonoff questioned three panelists on a wide range of ways AI is currently influencing and challenging the media, entertainment and tech industries. The panel included Forbes Media Chief Data Officer David Johnson, Made Music Studio CEO and Chairman Joel Beckerman, and Grant Thornton Cybersecurity Risk Managing Director Caesar Sedek.

 

The following is an edited transcript of this conversation.

 

 

 

Defining AI

 

Howard: How should we define AI? And generative AI?

 

Caesar: AI has been around for a while, I think since the '50s. Essentially, it's computer software that simulates human intelligence or a reasoning and decision-making process, using various algorithms to train machines and computers to predict and to do better data crunching, data analysis, things like that.

 

Generative AI is a subset of that: essentially, AI that specifically creates unique and different content. It can predict human language and human interactions based on its large language model. It's very good at creating unique and different pieces of words, images and music that are derivative of, and based on, the training and learning that occurred over X number of years.

 

Howard: It's ironic, in a way, that we're at a media, entertainment and tech industry event, given that our industries are, in some sense, to blame for some of the fear and loathing that surrounds AI, from "2001: A Space Odyssey" with HAL, the computer, to the "Terminator" franchise, to last year's "Black Mirror" episode about the use and abuse of the images and voices of actors through AI. David, how do you address that kind of fear of AI?

 

David: For those who don't know me, I'm chief data officer at Forbes Media. I've been with them for a little over five years. Our AI work kind of goes back to our in-house CMS platform, Bertie, which was built by a separate team at that point.

 

My group started off with AI a little under five years ago, and we're responsible for our first major first-party data platform, Forbes One, which uses AI as its core underpinning.

 

The chief aim for us was to really help us understand, collectively, all the editorial pieces that are created on any given day. Forbes collectively generates somewhere between 300 and 400 articles a day. So, for us, it was about understanding what's working and who the audiences are, and really helping to unlock this for the organization.

 

So, in order to enroll other partners throughout the organization and to dispel fears, it's important to really understand what their needs are, what they're looking for, and what their challenges are, so you can have this conversation.

 

 

 

Getting started: Investing in AI

 

Howard: David, one of the things people are talking about with AI is the investment needed to get started and how to spend money wisely. How do you sort out "That's good money, that's bad money"?

 

David: That's a great question to be asking, because a lot of people are communicating, "We need to do something. Everyone else is doing this. We need to go out there and have AI." There are a lot of low-barrier products out there that can help you get into it. I think the first thing you need to understand, if you're running a business, is: What's the opportunity? What am I investing in? What do I expect for a return? What's my risk? What's the ultimate value and risk for the organization?

 

The second thing is that it's important to have trustworthy data, and data that's legally vetted. [Finally,] you want to make sure that you're not just doing something quickly, but doing something that is above-board. There are a lot of elements in there.

 

 

 

Balancing human creativity and machine learning

 

Howard: Joel, you are a composer who's also a hugely successful entrepreneur, et cetera. How are you trying to bring AI into an enterprise that is so inherently focused on originality, creativity, and humanity?

 

Joel: Big question. So, I'm Joel Beckerman. In addition to doing film and television composing, I'm the CEO of a company called Made Music Studio. We help create sonic branding, which is sonic identity systems for Fortune 500 companies. We also develop strategies to help companies understand what bands they should be in business with and what content they don't own that they might license, to help with brand lift and basically raise the value of media and other kinds of properties around media that people are already spending time in.

 

It’s a very complex question and I'm going to try to just point at a few things. Let me go just from the straight creative side. If I have a great model to be able to manipulate those materials, I have an unlimited source of ethical use of IP to create 10,000 TikToks.

 

My biggest problem in terms of creating original material is that it's possible everything in the world, from an entertainment and IP standpoint, has already been stolen.

 

That's a huge problem, because we have to create for clients works that are wholly original, works that you can copyright and maybe trademark. So that's our responsibility in terms of creating original work.

 

So, for us it's both a tool and potentially a scourge.

 

Howard: Caesar, in addition to Grant Thornton, you have been an executive inside studios such as Disney and Warner Brothers. Talk about what you're seeing in terms of the way big IP owners are innovating in this area. How and where have they been successful at getting started?

 

Caesar: Sure. Piggybacking a little off of what Joel said: AI is not this one ginormous, centralized big computer that will take over the world.

 

And we're not feeding any of that creative, you know, copyrighted intellectual property to the corporate-owned engines and training models like ChatGPT, or any one of the variants out there, unless we are willing to do so. The option here is to create your own artificial intelligence instances that are specifically geared toward your business and toward the intellectual property that you produce. So, in this case: music, movies, the written word, et cetera.
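[To make the "own instances" idea concrete, here is a minimal sketch of running an open-source model entirely on your own hardware, so proprietary material never leaves your infrastructure. It uses the Hugging Face transformers library in Python; the model choice and prompt are purely illustrative, not a recommendation.]

    # A minimal sketch of the "own AI instance" idea: run an open-source
    # model locally so proprietary scripts, scores or articles never leave
    # your infrastructure. The model name here is illustrative only.
    from transformers import pipeline

    # The model weights are downloaded once; inference then runs entirely
    # on your own hardware, with nothing sent to a third-party service.
    generator = pipeline("text-generation", model="gpt2")

    draft = generator(
        "Logline for an original sci-fi short:",
        max_new_tokens=40,
        num_return_sequences=1,
    )
    print(draft[0]["generated_text"])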

 

But here is where companies can start: Figure out your own use case, start small, get a proof of concept going, do a risk analysis, understand where your data is, have really good data management, and work with your chief information security officer and privacy officer first. Many organizations have been using ChatGPT or other generative AI with proprietary corporate data to boost productivity, answer emails faster, or analyze PDFs, financial statements, et cetera. But they haven't considered the privacy implications. I think companies have a really good opportunity right now to start that process internally, learn as much as they can about it, and find somebody in the organization who knows and cares. That's a good start.

 

 

 

AI’s impact on talent

 

Howard: Joel, can you chime in on the impact of AI on creative talent?

 

Joel: This is a talent-driven industry. Talent provides the essential raw material: without writers and actors and music people, there is no media and entertainment business.

 

So, I'd be really interested in seeing what gets passed along. But cutting talent out of compensation for their work when there are derivative works (and one person's fair use is another person's derivative work) is probably one of the key elements that needs to be addressed. It was addressed with the writers' contract, and I'm sure you'll talk about that; it's being addressed with the actors right now. But an essential piece of this conversation is making sure that the people who create the works continue to be paid for the value that they're creating.

 

Howard: There has not been a collective bargaining agreement involving SAG-AFTRA, or, until very recently, the Writers Guild, governing the use of AI in the creative industry. SAG-AFTRA, at least, has had a series of one-offs, sometimes with technology companies, studios, independent producers or independent creators, that have governed the use of AI on a project-by-project basis.

 

Sue Anne Morrow, SAG-AFTRA's AI guru, shared some concepts of what they're trying to incorporate in their individual agreements. I'll just share those, and then we can talk about them a little bit.

 

The first is "safe storage" of the performer's voice, likeness and performance, and of the products and content created from them. We have our security guy here, so that's a good component to talk about. And then there is the next critical element: the right to consent.

 

It isn't a blanket “No,” but it is a “We've got to be able to have it be a permission-based system.”

 

Joel: But the studios have maintained the right within that agreement to be able to ingest all scripts that are created.

 

Howard: Going forward, the writer is allowed to basically say "no," to say, "You can't use my work in AI." It is permission-based going forward. But you're right: we can't put the genie back in the bottle with respect to the prior written product. Another point, obviously, for SAG-AFTRA is getting to appropriate payment for the creation of any digital doubles and their use, and the right of the performer to opt out of continued use and production, which is sort of the other side of consent or permission.

 

Caesar, how do you enforce any of this stuff?

 

Caesar: In the absence of regulations, or any kind of code of conduct that would be universally accepted, you have to really do the right thing. Hopefully, in the future we'll have some sort of regulation. Most likely it will come out of Europe, because they're already ahead on this.

 

From a technology perspective, I think there are conceptual technologies now. For example, in cryptocurrency you have blockchain, right? It's essentially attached to a piece of digital media and can be immutable. We can prove exactly who had it, who touched it, who copied it and where it is, and it stores that in a universal ledger somewhere.
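[As an illustration of the provenance idea Caesar describes, here is a minimal Python sketch of a hash-chained, append-only ledger for a media asset. All names here (ProvenanceLedger, record_event) are hypothetical; a real blockchain adds distributed consensus on top of this kind of chaining.]

    # A minimal sketch of a tamper-evident provenance ledger: every event
    # (who had, touched or copied an asset) is hashed together with the
    # previous entry, so altering history breaks the chain.
    import hashlib
    import json
    import time

    class ProvenanceLedger:
        def __init__(self, asset_id: str):
            self.asset_id = asset_id
            self.entries = []

        def record_event(self, actor: str, action: str) -> dict:
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            entry = {
                "asset_id": self.asset_id,
                "actor": actor,          # who had or copied the asset
                "action": action,        # e.g. "ingested", "copied", "edited"
                "timestamp": time.time(),
                "prev_hash": prev_hash,  # links this entry to the one before
            }
            entry["hash"] = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            # Recompute every hash; any tampering breaks the chain.
            prev_hash = "0" * 64
            for entry in self.entries:
                body = {k: v for k, v in entry.items() if k != "hash"}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                    return False
                prev_hash = entry["hash"]
            return True

    ledger = ProvenanceLedger("track-0042")
    ledger.record_event("studio", "ingested")
    ledger.record_event("licensee", "copied")
    assert ledger.verify()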

 

But I don't know if that's necessarily the answer, either, because that technology is still a few years away. Right now, I think we're just counting on our own goodwill.

 

Howard: Which sounds like a precarious place to be. David, how do you make sure that, within your organization, you and your partners are following the rules that you've tried to establish?

 

David: Yeah. This is a really important question. You shouldn't blindly, arbitrarily think, "Oh, I think this is the way we need to secure the data; this is the way we need to set up the privacy policy." You need to really lean on your legal organization, on outside counsel if you have access to that, and on your consent management provider. We lean on all of these to help us. We proactively do routine audits as well and have a separate security team that double-validates. It's a very inclusive and collaborative approach. Lastly, we invest in our resources to help us make sure that we're staying abreast of this and constantly evolving this practice.

 

Howard: Joel, you are working with some of the biggest, most iconic brands that exist and helping them both strategically and operationally execute in the audio world.

 

How are you getting things done with a bunch of very independently minded clients?

 

Joel: It's a great question, and we are so, so in the very, very beginning of this. It's extremely challenging for us to get people off the mark.

 

We're talking to a lot of like-minded companies on this, but I'll give one plug to a woman, Maya Ackerman, at a company called Wave AI. She is a professor, and she owns this company that basically is playing with different AI applications. We've been sort of dreaming up stuff, and she's been dreaming up stuff. I think the way we're going to be able to do this is to keep sort of knocking on the door softly, starting with the clients, and start to talk about, "Well, we could do this, and this would be the benefit."

 

It’s just the beginning of that conversation. But I think it's an exciting place to be playing.

 

 

 

Regulating AI

 

Howard: Caesar, there are efforts in individual states, as there were with privacy, to look at AI regulation; California is certainly among them. Europe most likely is, and will be, ahead of the U.S. and the states on this. Putting aside the complete dysfunction that's going on in our federal government now, how does government go about trying to regulate something that is so amorphous?

 

Caesar: Well, in terms of governmental bodies, we do have some, like NIST, the National Institute of Standards and Technology, which recently published some guidelines around how to think about implementing AI in your own environment and what some of the controls are that you should have. It's obviously not a regulation. It's just a standard, a guideline for professionals who are in this space.

 

Maybe some sort of consent has to be implemented, very much like cookies are now implemented, where anytime you use a piece of technology or go to a website, you have to either acknowledge the privacy policy, which nobody reads, and just click OK, or reject all cookies, which you have the option to do.

 

We will probably have for the foreseeable future a hodgepodge of state-enacted regulations or guidelines. New York was actually the first city to specifically have a regulation regarding hiring practices and whether or not AI can be used to screen employees. There's concern about bias and ethics and things like that.

 

Maybe individual companies and individual organizations will have to come up with their own rules and enforcement mechanisms. But, being realistic, we haven't done anything in the privacy space. Unless there are some very big financial penalties coming from Europe, for example, like there are for GDPR (the General Data Protection Regulation), we probably won't see anybody implementing anything. I don't see regulations being truly enforceable unless we have some technology to back them up.

 

Howard: Joel, part of your value is not just providing advice but actually creating works that are, in fact, human-created. How do you do that in a world where you're using AI in your processes, but you want the output to be human and protectable?

 

Joel: We've been talking for years about what a composer is and what a composer is going to be going forward. Is a composer going to be somebody who creates the most amazing algorithm? Is that really what the act of creation is going to be? This is a great question for the Register of Copyrights in Washington, D.C. Maybe our IP attorney can help us on this. But the question is: if there is a work created by a human, does that have to be a human who actually plays the piano, or could it be a human who's created an algorithm that creates art?

 

You know, I think those become sort of questions about, again, what's ethical. We've been talking about what's ethical in AI, and these are all "grays."

 

Howard: David, are there useful guardrails out there specifically around generative AI?

 

David: I desperately crave guardrails on this stuff. But I would go back to the original point: not all AI is generative AI, right? That's a subset of AI. So there's a lot of stuff we do with AI where, provided you have guardrails on your data up front, it's not as much of an issue.

 

 

 

Closing thoughts – what lies ahead?

 

Howard: Why don't I ask each of you to share a final, closing thought, looking ahead, on what you are most excited about, or most scared about, in the next 12 months in the AI world? David, we'll start with you.

 

David: There's a lot of new stuff bringing big change with AI, and specifically generative AI. For us, it's a tool to enhance productivity and to enhance our journalism within Forbes, and that is really something that I'm looking forward to. We work with our partners to bring the opportunity to them, to make sure they're fully on board and that it's something they're excited about as well.

 

Joel: In creative terms, I think using ChatGPT or other kinds of generative AI as a collaborator and brainstorming tool is very interesting. You need to make sure it's not pushing you in the direction of something that's copyrighted. But I think there's something very interesting there in terms of speeding the process of ideation.

 

The thing I'm most scared about is how this affects people just starting out in creative fields. Generative AI is very soon going to be able to create tons of good-enough music, and if all the needs for good-enough music have been served, then what happens to the people who are just getting started? There are no jobs for them. What happens to the journeyman approach to developing talent?

 

Caesar: For me, I think the most exciting, and probably the most short-term, success and use case for AI and generative AI is going to be based on language models. I grew up in Europe and used to speak multiple languages, and that always had to be a cerebral, very thought-intensive process. Nowadays, you can literally have conversations in multiple languages using your phone, and that will only get better.

 

I'm not a visionary or a futurist, but you know, I have a pretty good feeling that probably within the next five years there will be some very portable computing device that will allow us to have a conversation in any language we want to, back and forth.

 

And in education, I think generative AI has a great capacity to be a tool for students and educators to augment and enhance their teaching styles. It's very helpful in coming up with use-case scenarios in the cybersecurity field, kind of playing offense and defense, and using that to teach the next generation of ethical hackers, who will inevitably hack the AI systems at some point in the future.

 

What scares me is that we're going to screw it all up if we don't start thinking about the intellectual property, the ethics, fair use, use cases, and how it affects creative people and jobs as well. Hopefully, in the long term, we won't end up with Skynet and the Terminator.

 

Howard: OK, well, I'm sorry "The Terminator" might be the last word we hear tonight, but, that aside, thank you all for coming to this most fascinating discussion.

 
