Artificial Intelligence: Racism & the Digital Divide | Marc Beckman & NY Assemblyman Clyde Vanel
Marc Beckman: [00:01:00] Alright, welcome. It's really such an honor to be with you — I appreciate it so much. We've been hovering around so many of these topics today, and finally we have a member of our government sitting on stage, ready to break down some of these things. So let's talk a little bit about AI, truth, and politics.
I know we're entering the political season and things are gonna get a little cranked up just in general, but [00:02:00] AI can add fuel to the fire — deepfakes and beyond. So what are your thoughts right now on artificial intelligence? What problems might we see with artificial intelligence and deepfakes in politics?
Clyde Vanel: So, first of all, Marc, thanks for having me here, and thanks for having this at NYU. When we talk about artificial intelligence, we have to approach it holistically. We want to protect New Yorkers from the harms, but embrace all the benefits also, right?
So just hearing the speech before — seeing all the benefits that are inherent in the technology. But we also have to make sure that we protect New Yorkers and people from the harms. And when it comes to politics, that's a big deal. When it comes to truth, that's a big deal. You said that we're entering a political [00:03:00] season, but actually there's an election going on right now in New York.
An election for the third congressional district — part of that district is my district — where truth is kind of a big deal. The person who was there before was probably one of the biggest liars in politics. Possibly ever. As a matter of fact, when he was running for office, we saw him debate — I saw him debate — and I was like, this guy is so good. He one-upped everything. So truth is really important. Also, one thing that has happened since the beginning of political campaigns is that people spin — we're talking about marketing and advertising here — and spinning what someone says is a big deal.
Taking one word or one phrase of what someone says and making it mean something else happens a lot. And you're saying, [00:04:00] they're trying to repackage my message. But what happens when there's a technology that makes it really easy to create a whole new message — to make a person in politics say something on video that they didn't say?
Well, we've seen that happen. A month ago, there was a fake robocall — an AI-generated voice of President Biden telling people not to vote. It happened. We've seen video, right, where in politics they were using full videos of folks, making them say or do things, and it wasn't even the person. It was AI-generated, altered video and audio. So it's really important to make sure that we stop that in politics, because it's a really dangerous space where people can believe what is said, or what they see, or what they hear. [00:05:00]
Marc Beckman: So it's pretty alarming, because obviously, if there's not truth in politics, that can have negative implications not just on communities, but on generations of communities. So what should people do? How does the average person know that's not Joe Biden calling my house? How does the average person know that's not my local mayor, Mayor Adams?
Clyde Vanel: Marc, that's the problem. How does the average person know, right? Did you guys see what was going on here? How does the average person know? So there are a couple of things that need to happen. We have to put laws and regulations in place to make sure that people and organizations don't misuse these technologies. We also have to make sure that industry does certain things — technological things.
We're looking at watermarks — technological watermarks and visual watermarks. [00:06:00] We're looking at providing notices. For example, we can think about the negative uses of AI, where someone is trying to change what someone said, or the robocall where they had the president tell people not to vote. But what about uses that aren't offensive — should those provide notice too? For example, our mayor, Mayor Adams, did a robocall with some kind of public service announcement, and they used AI to have him say it in seven different languages. I saw that. I thought that was a good thing, but people said, wow, he doesn't speak Spanish — I thought you spoke Spanish.
Marc Beckman: If you're not familiar — it was Mayor Adams, his voice, but speaking Spanish. So it still came across as if it was him. So why is that [00:07:00] problematic? Why do people have a problem if he's trying to speak to his constituency in a language that they might understand more clearly?
Clyde Vanel: I don't want to answer that question. What I want to do is have you guys answer that question. Should we provide notice even in that situation? I'm wrestling with that now. We're wrestling with how we provide the notice, right? We talk about audio, and we talk about notice.
There's different notice when it comes to video, when it comes to audio alone, when it comes to text. But if it's audio, how do we provide the notice? Do I write a law — do we write laws that spell out what the warning is? And if it's a robocall, Marc, do we provide that notice at the beginning of the call? In the middle of the call? At the end of the call? If we provide it at the end, what if people hang up before they hear the whole thing? When you're dealing with robocalls, you [00:08:00] pay per second, per period of time — if that notice is too long, it costs. Anyway, there's a lot to think about.
And there's a lot to think about when it comes to artificial intelligence and creating whole new content. When should we let people know, right? When should we let people know that it's wholly or partly created by AI?
Marc Beckman: So it's kind of interesting. I know that you're into emerging technology and technology in general. One way that we can solve authenticity — and I'm already seeing companies do this — is by essentially minting this content onto the blockchain. I think that's interesting.
However, if we're seeing deepfakes permeate society — whether in politics, in fashion, in sports and beyond — it seems our populace could develop a level of skepticism [00:09:00] that might in fact work against us. So how do we overcome, from a policy perspective, what could be a period where everybody doubts that anything is real anymore?
Clyde Vanel: Marc, we have to figure that out. So look, there's the work that lawmakers and policymakers do. I sit in New York as a state lawmaker, right? There's a role for state lawmakers, and there's a role for the federal government. I don't know how fast Congress can move on this type of stuff, right? Our president came out with executive orders for the agencies, but we're trying to figure it out on the state level. What does it mean for society now, right? What does it mean for labor? One of the things we wrestled with for years was: wow, technology is replacing low-skilled workers.
How do [00:10:00] we address that? Well, when it comes to generative AI, how do we address technology replacing creatives? What do creatives do — do creatives use it, or is handmade work much more elevated after that point, right? I don't know what the answer is, but part of the answer is not just what I do legislatively. Part of the answer is what the culture does, what society does, how society approaches it.
Marc Beckman: So it's interesting when you talk about creatives and the labor force. I was lovingly highlighting Run-DMC, which I rarely get to do, but the reality is that generative AI is now impacting all of these creative classes, from fashion to music to publishing to film, and a lot of those hubs are right here in New York State — in your state, in your district. So is this something that concerns [00:11:00] you, or do you think artificial intelligence can be turned up to actually create more job growth in these areas?
Clyde Vanel: I thought that I was immune — as a lawmaker, I thought my job was immune to this technology.
That's part of the reason why I used generative AI to help me write a bill — to wrestle with this, to try to understand how to navigate the world when it comes to this technology as it improves. Because we're all up in arms about a technology — generative AI — that has been publicly available for less than two years, right?
Short period of time. Anyway, what I was really trying to see is: how do we use this? How do I use it to make me better? How do I level up what I do? How do I use it for drafts? How do we make sure that [00:12:00] the use of the technology is always human-centric — that there's a human at the center of it?
Here's an example of what not to do. A lawyer in New York used generative AI to write a memo, didn't check it, and submitted it. Well, that draft was full of fake citations — I don't know if all of them were fake, but fake citations, right? So you don't use it that way. That is not the way to use it. That's just pure lazy. That's not you working with the machine or the software to come up with something better.
Marc Beckman: So, Clyde — just to interject for a second, just to be clear — your process in using artificial intelligence as a tool in writing that bill did not take you out of the process.
People might think that, but the reality is that you oversaw the writing and the [00:13:00] drafting of that bill, including research, at every stage of the experience. Correct?
Clyde Vanel: That's correct. So what's really important is this: whatever field of endeavor you're in — people are worried in journalism and in the creative space, or what have you — it's good to wrestle with this, to see if or how it can help with what you do. But I could also imagine — and this is not a new argument; it's an old argument, right? With the proliferation of industrialization, handmade, artisan things came to be valued more than furniture made on an assembly line.
So what would that mean for art, [00:14:00] right? If I can take the voice of your favorite artist and have the AI study their style and come up with different music — any kind of music from them — what would be the value of the actual person doing it?
Marc Beckman: I think that's a good question, but I also wonder whether AI will become its own standalone art classification. For example, throughout the history of fine art we've seen tools like the camera. There was a fantastic fine artist named Vermeer, where it's now questioned whether he was painting purely by hand or actually using tools to trace images from a mirror. So is AI just another tool?
Clyde Vanel: [00:15:00] Unless the machines can program themselves — then that's a different story, when we get to machine learning or deep learning. But at this point, it's a tool. And I think that folks in every field of endeavor should be able to use whatever newest technologies are out there to help make what they do better — if it does.
Marc Beckman: So it's interesting — you represent people of all ages and backgrounds, such a range. From your perspective, for the younger generation — Gen Z and even Gen Alpha — do you think our schools should be teaching these children now to use these tools, so perhaps they could become a fantastic artist, a fantastic fashion designer, more skilled at coding and [00:16:00] beyond?
Clyde Vanel: We don't even have to go that far. The answer is definitely yes. But just look at today, even at this institution. For the past year and a half we've been trying to figure out, and trying to help guide, our educational institutions in New York State on how to deal with generative AI. Some institutions embraced it, right?
And they had to figure out how you cite and how you use these platforms. Some outright banned it. Many are still figuring it out. I don't know what this institution does, but for me, you can't put the horse back in the barn — you can't put the genie back in the bottle. It's important to try to master these technologies in whatever space you're in. Definitely in education.
Marc Beckman: Yeah, just to share — one of the themes that's been [00:17:00] prevalent throughout our talks today and last night, as it relates to NYU specifically, is this concept of being tentative. Let's be cautiously optimistic about artificial intelligence and how it can progress us as an academic institution and as a community, but watchful, because the technology still hasn't completely figured itself out. So I think that's interesting. But as you're aware, New York University joined Governor Hochul's Empire AI, this $500 million public-private initiative.
So do you want to tell those in the audience who aren't familiar with it what this Empire AI is about?
Clyde Vanel: So we're really excited in New York State government that our governor, Governor Kathy Hochul, really boldly understood — and understands — the importance of this technology, and that New York State has to make sure we garner the best [00:18:00] of what we have in New York. NYU is one of the members of the Empire AI Consortium — a consortium of educational institutions and private institutions — to help figure out what the general policy should be around this technology, and also to work on chips and hardware with respect to processing in the artificial intelligence space.
New York State will be part of research and development when it comes to artificial intelligence. But what's really important is that we also use these technologies to help close the digital divide. We're at a time, in 2024, when some people still don't have access to quality broadband. If that's the case — and we're talking about underserved communities, rural communities — and we have some folks over here working on high-level [00:19:00] generative AI, the gap will widen even more, and faster. So we have to figure out how to use these technologies beneficially, so that we can also close that digital divide. I'm excited about that too.
Marc Beckman: So it's kind of interesting, because historically we've looked at technology — and you and I have had these conversations, specifically as it relates to cryptocurrency, digital wallets, and blockchain — as something that can help close the digital divide. But perhaps with artificial intelligence, because of lack of access, because of lack of broadband capability, this could actually further and accelerate the digital divide?
Clyde Vanel: Oh, 100%, right? If we have communities working on much higher-level technologies and much higher-level software, [00:20:00] that divide will widen even faster. If we close the gap — if we train people in these spaces and technologies — then we can address that. So we can use it either as a sword or as a shield. And it's interesting: in developing countries — I don't want to say third world, sorry about that — the use of technology lets people communicate across the globe immediately, makes transfer of funds and of value easier, makes certain kinds of revolutions and positive changes in those countries happen even faster. So if that can happen in developing countries, it can happen in this country.
Marc Beckman: So, New York State — if we think in terms of the [00:21:00] socioeconomic divide: if we're able to empower individuals who might not have access to certain levels of education today because of economic barriers to entry — wouldn't it make sense to provide access to artificial intelligence, so that individuals across our entire community, across all of New York State, can unlock what's already in them as it relates to innovation and entrepreneurship, to create new jobs, to create wealth within all of their communities? And with that, New York State can really rise above the rest of the states in the union.
Clyde Vanel: One of the biggest goals and one of the biggest jobs of government is to make sure people are able to take care of themselves, to feed their families and their next generations — to provide for [00:22:00] economic opportunity and mobility for folks. And there's no question that technology can play a major role in that. But there's also no question that if you're not involved in technology — if you don't have an internet connection, if you don't have an email address, if you're not connected to the market — you will severely suffer.
Marc Beckman: So it's kind of interesting, because technology can be used to accelerate the growth, arguably, of humankind, right? With artificial intelligence we can find advancements with regards to entrepreneurship, science and literature, healthcare and beyond. But artificial intelligence is also scary, right? There's this doomsday scenario: will AI destroy our community? Will it destroy our society? Should we put the brakes on it?
Clyde Vanel: So, [00:23:00] yes and no. Today, where we are today, we're not at the doomsday period — right now, generally speaking. I've visited a number of different companies and organizations and folks working on different levels of technology. Are we going to get to the place where the robots will kill us?
I can't say definitely no. Maybe. That's kind of scary. One of the programs that I worked on — when I talk about closing the digital divide and closing the knowledge gap for underserved communities, we really work on this for real — I have one of the schools in my district working with a major company, not on coding now; they're working on machine learning. They're working on how to get the algorithms to teach [00:24:00] themselves. Great, but dangerous. Dangerous meaning there must be safeguards baked into the algorithm. You can think about it from sci-fi, right? I don't want to go that far, but we want to make sure that within the programming — once machines can program themselves, because it's going to get to that point — there's no harm done, or harms are minimized.
Because you can imagine, if they program themselves, that it can get to a certain level. We want to make sure that's the case, just as with what we do with humans, right? We have limits on, probably, stem cell testing, embryo testing, right? Genetic testing. We have [00:25:00] certain limits on cloning, right? You can go to certain countries and clone your pet today. We don't allow that here, for ethical reasons. So figuring out the ethics in AI is a very important thing that we're still wrestling with. There's ethics in science, and there are certain things, again, that our scientists cannot do, or are limited in doing.
So I'm getting to the point of one of my bills — I don't want to just talk about my bills, but: how do we deal with high-risk AI? Before we even get that far, Marc, today I'm dealing with huge ethical issues when it comes to algorithmic discrimination. Whenever you have software — a machine, something — taking the place of human decision-making, [00:26:00] and it takes the place of human decision-making for things that are kind of important.
It's important for us to make sure that we audit that, and that we're reviewing it. Because artificial intelligence today — generative AI — is not that smart. We call it intelligence, but it depends on the data that goes in, right? Bad data in, bad results out. Which happens today.
What happens when a bank provides mortgages online — where you don't have to go to the local banker, sit down, talk to them, and give them your papers to get a mortgage? You go online and fill out the information, and we find out — which happens now — that there's a higher likelihood [00:27:00] of you getting a mortgage and me not getting a mortgage, because of whatever factors, right? What happens when there's a higher likelihood that women get turned down for these kinds of things? In the world where you have to walk into the bank, the government can do something about that. But what happens if the bank says: it wasn't me, we didn't discriminate — the software did it?
Marc Beckman: So are you working on a bill right now to help protect New Yorkers from that type of harm?
Clyde Vanel: New York City actually has a law with respect to that. In New York State, we're working on that type of law — we're working on that bill now.
Marc Beckman: So the issue is whether or not the bank will be held accountable for algorithmic bias?
Clyde Vanel: Whether or not — yeah. The bank was just an example, but [00:28:00] when it comes to certain decision-making, whether or not there's algorithmic bias — whether there's bias in the software.
So when you think about the bank — and we can talk about telehealth, a whole host of things — we want to make sure we're dealing with these kinds of issues today. We're also dealing with this: you can't fully self-drive in New York State at this point. We have limited testing of self-driving vehicles, and it has to be done in a really limited way. But we found out that the algorithm is less likely to detect me as a human because of my dark skin — so it won't pick me up.
Marc Beckman: Just to be clear, he's not talking about a living human taxi driver. Clyde is talking about the algorithm for an autonomous vehicle.
Clyde Vanel: Yeah, sorry about that. Right. [00:29:00] And it's not that remarkable, right? If the data set includes fewer darker-skinned people, then it won't pick up darker-skinned people. I went to China a couple of years ago with a delegation. First, it was crazy — I went to like ten different cities, and every city was more populous than New York City, which doesn't make sense. But anyway, we went to one of the cities where they provide services to folks and you don't have to bring your ID — a kiosk would pick you up with facial recognition. One of our members, another person of color, went in and just tried it out.
The software did not pick him up as a human — in China, it didn't pick him up at all. And it wasn't even a big city. Well, it wasn't Beijing and it wasn't [00:30:00] Shanghai.
Marc Beckman: But the technology should transcend geography.
Clyde Vanel: Right. And it's not that the software is racist — there were just no brothers over there, ever, in the data. Remember, it's based on the information that you put in there; it's not intelligent, right? So what does that highlight?
It highlights a number of things. It's really important for us to figure out what these data sets are, and to make sure that when it comes to high-risk AI, the data that you put in there is diverse. So here's the problem.
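One concrete, if simplistic, way to act on "data set diversity" is to measure how each group is represented in the training data before a model is ever built. A minimal sketch — the group labels and the 5% floor here are invented for illustration, and real audits use far richer demographic breakdowns:

```python
def representation_report(samples, floor=0.05):
    """samples: list of group labels, one per training example.
    Return each group's share of the data, plus any groups below `floor`."""
    counts = {}
    for g in samples:
        counts[g] = counts.get(g, 0) + 1
    n = len(samples)
    shares = {g: c / n for g, c in counts.items()}
    underrepresented = [g for g, s in shares.items() if s < floor]
    return shares, underrepresented

# Hypothetical face-dataset labels by skin tone
labels = ["light"] * 960 + ["dark"] * 40
shares, flagged = representation_report(labels)
print(shares)   # {'light': 0.96, 'dark': 0.04}
print(flagged)  # ['dark'] — below the 5% floor, echoing the kiosk failure
```

A check like this catches the kiosk problem at training time: a system trained on 96% light-skinned faces will predictably fail on the group it barely saw.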
Marc Beckman: This was highlighted last night, for those of you who were here — NASA highlighted this topic specifically — because a lot of the information that trained these AIs was created at a time when people were discriminated against. If you go back [00:31:00] 50, 60, 70 years, there wasn't an opportunity for people of color or women to contribute this content — literature and beyond — and yet that's already in the algorithm. So from a politician's perspective, how are you going to protect us? How are you going to protect our society if it's already baked into the algorithm?
Clyde Vanel: Couple of things that, a couple of things, periodic audits. Right? Because to make sure to see what results, what the results are periodically, also, to make sure previous to that, make sure that when you talk about diversity, now we have to, now we have to look at data diversity, data set diversity.
Yeah. Right? That's right. So, so, these are things we couldn't have imagined before, but these are things that are really important for us to make sure that we, we, are very
Marc Beckman: sensitive to. Data diversity. one of the I don't
Clyde Vanel: know if that's I just made that up. I don't know if that's a thing.
Yeah,
Marc Beckman: but that's a soundbite for his I might If this is the [00:32:00] election season, remember data diversity. Vote Vanel. one of the most exciting things for me personally as a New Yorker, that I've ever heard a politician say, because I feel like there's a separation between us as citizens and politicians, came from Assemblyman Vanel.
He said to me, and he said it to the entire room, he said, I don't care where in New York you're from, I'm a New York State Assembly member, and if you want, reach out to me, we could speak. If you're not in my district, we could speak. And I thought that was really modern and thoughtful, and I appreciate it.
And I think a lot of the people in the room here today would love an opportunity to ask you a question or two. If that's okay with you, I'd love to welcome everybody to line up and grab the mic. Is that cool?
Clyde Vanel: Of course. No problem. Come on up.
Marc Beckman: I knew you were gonna have a question.
Audience member: [00:33:00] First of all, thank you so much for your time. I'm very interested in the topic, as I'm sure everyone else here is. I'm a senior at NYU, in the GLS major, in the law, ethics, and history concentration, with a profound interest in AI. My senior thesis topic — it's kind of a mouthful, still a work in progress — is data privacy and machine learning as they relate to democratic processes and the integrity of free and fair elections in America, which I think is a very current topic considering it's election season. So I was curious: how do you prevent something like, for example, the Cambridge Analytica scandal, which happened even before machine learning was so popular? If you add machine learning into the mix, it only takes one bad actor — like you said, that Joe Biden call a couple weeks ago. How do you sit with that? What are some thoughts on how to prevent something like this? It seems you're just adding gasoline to the fire at this point, if anyone wants to try it again.
Clyde Vanel: [00:34:00] So, a couple of things. One thing that's really important: we're working on figuring out what to do when it comes to privacy and data. The federal government has been talking about it — there are a couple of bills in place — but New York State is trying to figure that out too.
In 2018, the European Union came out with the GDPR, with policies to really protect their citizens regarding how the big platforms use their data — how people have ownership of their data, how they have the right to have the platforms delete or exclude their data, and so on. Part of the issue is that in America, we're still trying to figure that out. Three years ago, California came out [00:35:00] with a bill that kind of mirrored the European Union's — how to use data, how to protect folks, and so on.
Is it on the states to decide that? So, yes and no. California came out with their bills, and we can come out with our bills also. I think we need to make sure we protect people's data properly, but my major concern is that the federal government needs to lead on this, because we're in a place where we have 50 states. If California has their own thing and New York has their own thing, we'll have a patchwork of regulations that could be detrimental [00:36:00] to the expansion of technology in this country. At the same time, the states have the ability to lead where the federal government should go if it can't, because we're more nimble. We're looking at privacy laws in New York State and how to get that right. I'm leaning towards figuring out what we're doing within the state, right? With California, it's everyone's data against California — it's California versus the whole internet. I think we should figure out a New York-centric approach to data first, to help figure it out and to help the federal government get at it.
So that's the data dealing with, Cambridge Analytica and trying to figure out what to, what platforms, roles are with data, what have you. Very, concerned when it comes to artificial intelligence. So we have a number of bills that we're doing to, address elections in particular. Right now, we have three or four different bills to address immediately what's going to, this up and coming election.
So we're [00:37:00] really concerned about, we're really concerned about, those issues. But what's really important is that all of these issues are being addressed. All of these issues and all of these regulations, we're all, we're working on them as we speak. So this is not something that, is, happening three years from now or happened, but we're in the moment right now.
And what's really interesting is that if you are writing your paper on these issues, you can take your experience to another level by actually participating and figuring out what our policy should be in New York.
Marc Beckman: I think, just as a point of information, if we look at where we are with cryptocurrency regulation, and consider that, for the most part, it's the same people running the government on a state and federal level, perhaps where we're going with artificial intelligence could become a bit of a [00:38:00] logjam.
So, for example, with cryptocurrency, because we had such uncertainty with regulation, as you're very well aware, Clyde, we saw investment move overseas from venture capital firms. We saw better-qualified individuals move overseas with their skill sets, because of the uncertainty in the marketplace.
At the same time, we're seeing states regulate cryptocurrency on top of the federal government, and I think that's having a chilling effect on entrepreneurship, particularly where it's so expensive just to get going, to get a license. So if artificial intelligence has the same impact, ultimately, in my opinion, we'll see a parallel here.
We'll see states regulate and the federal government regulate, and maybe even the local level at some point. So is that the best way to go, where every single level of government, local, state, and federal, [00:39:00] regulates and slows down opportunity, job growth, and investment across the artificial intelligence spectrum?
Clyde Vanel: So what's interesting is that the industry wants good regulations, right? Most of the industry is saying, hey, listen, this is a problem; protect the world from us. We want you to protect us. So it's about finding the proper balance, right? And keep in mind, there are different levels of government.
All right, trying to find that balance is something that we're trying to do. But when you talk about cryptocurrency, New York was one of the first governments to provide regulations around cryptocurrency. And when you have regulations that make sense, that are balanced, I think it's better for the entire industry.
Marc Beckman: So, I just want to finish this question and this topic: do you think New York State has the ability to balance [00:40:00] between protecting with regulation and not over-regulating? So that perhaps the individual dreamer, the person who doesn't have tons of investment behind her, can innovate and become a successful entrepreneur, hire people, and create more value in the community and beyond.
Is there a way to modernize this concept of over-regulating while still protecting?
Clyde Vanel: I think that's at the core of what we do: trying to find that balance. And I think the process is iterative, meaning it may be here today but moving in another direction tomorrow. I can think about that even in the crypto space, where in 2015, 2016, 2017, people were complaining about our crypto regulations. In 2023, it's different.
But we've also changed over time. So I [00:41:00] think it's really important for citizens to be able to work through the process and try to help make it better. And I don't think it's an ironclad thing. Now, keep in mind, New York is an environment that has more regulations than other places, right?
We're not Wyoming. I'm not dissing Wyoming, but we're not Wyoming, so we can't have the same kind of laws that they have. But New York is a place where we could figure out and try to find that proper balance.
Marc Beckman: Yeah. And I know that you've always been open to this idea of welcoming people, experts and our citizenry, to participate in the government with you and your office.
So I just want to share that Assemblyman Vanel is very unique in that way. If you guys have ideas and you can add your expertise as it relates to artificial intelligence or any of the other emerging technologies, Mr. Vanel's door is open and he welcomes your recommendations.
Clyde Vanel: Thank you.
Audience member (Pamela Roach): My name is Pamela Roach. I am [00:42:00] a professor with integrated marketing and communications.
I have a question about, not personal privacy, but when you start to get into the area with entrepreneurs and businesses, you start to talk about copyright law. And so, I'm interested in hearing thoughts that you have about the future of restrictions or management of copyrights as we start to think about scraping sites that seem to be public, but in fact are full of IP that a firm did not intend to be used in that way.
Clyde Vanel: Great question, Professor. Again, you get a tough question from a professor. Love it. First of all, thanks a million. So, when it comes to generative AI, there are a couple of concepts that you've got to worry about, right?
Ownership, copyright, authorship: all three mean different things.[00:43:00]
And when it comes to generative AI, it's very interesting: you can give it a topic, and the reason it's intelligent is that it's picking up stuff, or scraping stuff, from different sites, from different publications, or what have you. Relatively recently, we've seen a number of lawsuits from different publications.
I don't wanna name anyone out, but you've seen different publications saying, hey, you're smart, but you're smart because of us, right? You're taking a lot of the content from our articles; we need to eat off of that. So, in copyright, there are a number of different rights.
There are six rights in copyright, right? One of those rights is the right to derivative works: taking a piece of something and making [00:44:00] it into something different. And these publications, or the content creators, are saying, hey, we need a piece of that. I can't speak about any particular lawsuit, but there are lawsuits happening right now with some of these big generative AI companies that go and get information from places, and at least some of that information is copyright protected.
Marc Beckman: So, something that's interesting... sorry, go ahead.
Clyde Vanel: So where does that take us? Professor, I don't know. I'm not sure, right? This is new. I don't know what it means. So, what happens if a generative AI company has a subscription? [00:45:00] Is that copyright infringement?
What happens if there's a special subscription for them? There are commercial subscriptions; what happens if they use a commercial subscription? Now, it would be really clear, Professor, if the information they scraped was in the public domain. And just recently, on January 1st, 2024, there was kind of a big deal in copyright. A big-deal day, because Mickey Mouse entered the public domain. Well, not every Mickey Mouse: Gen 1 Mickey Mouse. But Gen 1 Mickey Mouse is in the public domain. Holy cow, that's a big deal. So now you guys can go and use that first-generation Mickey Mouse, and it's not protected by copyright.
[00:46:00] So if they scraped non-copyrighted information, information that was in the public domain, that wouldn't be an issue, right? We are still trying to see what this is going to mean moving forward. I don't know where it's going to land. I don't know what the courts are going to say.
I can imagine a commercial solution. But keep in mind what that's going to mean: if the commercial solution is for these generative AI programs to pay extra to whatever big content creators are out there, then the public's access to these programs isn't going to be as available.
It's going to cost you. And this is not a new issue. When I was in college, when I was in law school, [00:47:00] we didn't have Netflix. We didn't have Hulu and all these things. We had something called Napster, and these other things, where you could just go and download to your heart's content.
That's what you did. And what's interesting is what happened next: more so than the legal and regulatory response, there was a societal and business response, right? The industry figured out how to handle it, and if you look now, there are content conglomerates.
I don't want to name them all, but you know. So for you to watch whatever it is, users have to have a subscription to this service or that one. [00:48:00] And then the content collaborators, which is what generative AI is looking like to me, may have to pay for usage of the content, and the general users would have access. So I think that's eventually where we go.
But I don't know; I'm not too sure. I don't know if the courts are in a good space to figure that out; I think it's going to be an industry, commercial solution.
Marc Beckman: I think that you're going to see a lot of these issues run up into the Supreme Court over and over again now.
Something that I look at quite a bit is a recent case surrounding the author George Orwell, which I think went all the way up to the Supreme Court, about whether an author who used a lot of George Orwell's characters without permission, wrote a book, had the book published, and sold the [00:49:00] published work was infringing on his intellectual property.
And the standard that was established: they said no, in fact, that is not a violation, because it was transformative. This author, I think it was a woman actually, took the concepts and the characters and transformed them in nature. And then, as it relates to generative AI, something that's really interesting and beautiful too, if you guys take a look on Midjourney: there's an artist who is combining the beautiful glassblower Chihuly with pop-culture things like Nike Air Max, and it's incredible if you look at the generative AI.
So the question is, is that artist using generative AI infringing on both Chihuly and Nike, or is it transformative in nature?
Clyde Vanel: Basic question in copyright law: just because it's generative AI doesn't take you out of copyright law, I [00:50:00] don't think. If you take the expression of the artist, then it's copyright infringement; but if you take the idea, then it's not. So, generally speaking, the idea of someone who's down and out, a boxer trying to come up in the ranks who then wins against all odds, that's a general idea.
So Sylvester Stallone can't sue you if you come out with a movie of a boxer up against all odds. But if the boxer's from Philadelphia, and he has an Italian accent, and he fights a guy named Apollo... the more expression you take from it, the more it looks like you're infringing on the copyright.
So when it comes [00:51:00] to whatever it is, I don't care what technology you put on top of it: if you take more of the expression, and not just the idea, then you may have issues when it comes to copyright law.
Marc Beckman: Do we need new laws as it relates to intellectual property and artificial intelligence, or is it just reinterpreting the law on top of AI?
Clyde Vanel: That's another argument that folks are having, right? Do we need to update the intellectual property laws? I don't know, but I do know that it's great to ask these questions. And generative AI is making us look at this differently.
Marc Beckman: So if there are no more... oh, there is a question. Okay, go ahead.
Audience member: So, at the beginning of this, you were talking a little bit about bridging the technological gap. I just wanted to ask, where in the education system do you think ChatGPT and other similar AI tools [00:52:00] should start coming into play?
In New York, for example, all the public schools have banned the use of these AI generators, whereas a lot of private schools are taking the complete opposite direction. So that seems like just another place where the gaps widen. At what point in a student's education, say elementary, high school, or university, do you think that should happen?
Clyde Vanel: It's a great question, and I'm dealing with that right now. Again, in my role, I'm trying to work with the New York City Department of Education to make sure that we have access to these programs. I think it's important for our students to wrestle with these technologies, because they will go into institutions where people have been using them.
Creating and building and growing. Or, if we just talk about writing papers: if other students know how to use it for first [00:53:00] drafts and you don't use it at all, I think you're disadvantaged. But even besides papers, people are using generative AI for search, right?
People are using it for search, for more basic things. So imagine I go to school and they say, no, you can't use the computer, you can't use a calculator, you can only do long division by hand, right? There was a point... I can't even remember how we used calculators when I was in school. Did we use calculators? I guess we did. But I can imagine pre-calculator days, right? I remember when I took the LSAT; I don't know if they let me use a laptop. They probably didn't. Now, I know people do it on the computer. So it's a tool. Generative AI is a tool, right? Today, [00:54:00] at least; later on, I don't know. But generative AI is a tool that folks should use and figure out how to use, and I think students should use it early. When should we teach a student how to search?
Right, when should they learn how to do basic search? I think early. I don't know how early, but real early. So one of the things I mentioned earlier is that, in my position, we can influence how institutions use these technologies.
And just remember: generative AI has been publicly available, generally speaking, since November 2022, two winters ago. So really, less than two years. These institutions are still trying to wrestle with how to use it. And what's really interesting is [00:55:00] that people are really affected by how the media speaks about these things.
And the news for the last year was showing the malfeasance of these technologies, not the use of them, and people react to bad news much faster than they come to understand the benefits. No matter what the technology is, generally speaking, we're fighting headwinds, because it's easier to say, they're not even writing their papers anymore.
The end. The talk is, they're not writing papers anymore, the machines are. And if that's where the conversation stops, then we're left trying to convince our academic institutions, which are not the most progressive, to embrace something like that. So it's a challenge, but I think we're having these conversations and we're trying to get there.
Marc Beckman: I think it's interesting that you [00:56:00] dichotomized New York City public schools and private schools, and I think everybody should also consider a broader, more macro issue: United States schools versus the rest of the world. Because I think artificial intelligence is going to totally change the landscape of entrepreneurship and power as it relates to economies.
All of a sudden, those who couldn't afford a Western university education, from Oxford to Harvard to NYU, can, at their fingertips, gain access and information. Knowledge is power. And then, after they're inspired, they can even learn how to create their business; they can follow the artificial intelligence's guidance after they learn they want to be a fashion designer, or somebody inventing the next autonomous vehicle. So I think it's interesting to look at it at a local level, particularly with Assembly Member Vanel sitting here, but also on an [00:57:00] international level: where should we go with the acceleration of technology?
Clyde Vanel: And not all private schools were on board either, right? So we're still trying to figure that out too. And when it comes to higher education, at least with our public institutions, we're trying to figure that out with what we call SUNY and CUNY. But this is something we're wrestling with across society. And the younger generation is more embracing of it than not. So.
Marc Beckman: So when I was young, in my house, John F. Kennedy was... God, it was Camelot in our house. And he was inspiring. He was this incredible political figure: you wake up in the morning and you want to do better for your community, for your family, and beyond. And in a lot of the things you've shared with me, as we've built this relationship over the past couple of years, I see a lot of that inspiration in [00:58:00] you, and I appreciate that. So I'm wondering if you could leave the room today with some words of inspiration as it relates to embracing new technologies and looking forward into the future.
Clyde Vanel: Wow, that's a lot of responsibility. And it's not me, it's the AI that did it, not me. Look, what's really important, no matter who you are or where you are, and I tell folks this: don't be afraid of these technologies. In whatever space you're in, wrestle with it, try to figure it out, and make sure that we control the technology, not the other way around.
So no matter what business you're in, whatever job you're in, whether you're a student or what have you: you be the programmer, you control it, you use it, and don't be afraid of it. And what's really interesting is that when you do that, you'll be in a much further place.
No matter what age you are, you're not too old [00:59:00] or too young to work with these technologies. Because when you don't, well, the technology doesn't care about you; it'll just pass you by.
Marc Beckman: Alright everyone, thank you so much for your time today.
[01:00:00]