Artificial Intelligence & Racism: Can It Be Eradicated? | John Pasmore & Marc Beckman

Marc Beckman: [00:01:00] John Pasmore, it is such an honor to have you join me today on Some Future Day. How are you, John?
John Pasmore: I'm good, I'm good. Kind of a cloudy Monday morning here in New York City.
Marc Beckman: You are the founder and CEO of an AI platform, Latimer, which is super compelling; obviously the audience knows it today. What brought you into that [00:02:00] space? What was the background, the impetus, the inspiration in building it? This is obviously a very difficult task and undertaking. So let's start by going backwards a little bit in time: share with the audience how you came up with the idea and what led you to where we are today.
John Pasmore: I think you and I met some time ago. I was the CEO at that time of a startup called Voyage TV, which was kind of an online Expedia with video, and I was considered a non-technical founder in those days. So I went back to Columbia to get a computer science degree, which took a bunch of years.

So really the impetus was that, armed with a degree in the field, and at this point looking at a teenager when ChatGPT came out, I was looking at my son and the idea that he would encounter not just the things that we worry about as [00:03:00] parents, in terms of society as a whole not being fair, or encountering racism.

But now we had this new machine that everybody was looking at as being so intelligent, and it also had very obvious biases. I just thought, wow, if we drop the ball as adults, this next generation has to deal with not only all of the things that we've dealt with coming up, but now, essentially, racist AI. That just seemed way above and beyond what we should be leaving the next generation. So we thought that we could fix it, basically. I thought some of the folks that were really critical of the bias we were seeing on Twitter and X and everywhere else were going to jump in and say, hey, I know that this is a problem and here's my solution. But we didn't see any solutions. So in [00:04:00] January 2024, we launched Latimer.
Marc Beckman: So John, let's talk a little bit about the issue of racism as it relates to artificial intelligence and the training of LLMs. In my mind, John, I break it out in two ways. There's the overt racism, but then there's also a part of the record that is missing and will always be missing, because society was racist during those time periods.

Right? Black scholars or leaders within the black community never had an opportunity in the fifties to submit very important documents and reports that might be considered, for example, for some sort of Nobel Prize or something along those lines, because of their race, because of racism.

And as a result, these LLMs will always be implicitly biased. So I'm curious, from your perspective, what were you seeing back then? What are your concerns [00:05:00] today? And what's the solve?
John Pasmore: Mm-hmm. I think the solve is kind of an ongoing process. But when generative AI first launched, we knew that it was using as training data comments from places like Reddit and YouTube, places where you generally don't want your kids to spend a lot of time immersed. Social media is bad enough, but once you dive into the comments, you start seeing very polarized language. People don't hold back. You have that veneer of being anonymous online, so people can say anything.

So when you use that as training data, the AI can't necessarily discern initially, hey, what's good or what's acceptable speech? The way I look at it, the LLMs first just needed a lot of words. I think we're now up to models with a trillion parameters, or trillion-word [00:06:00] training sets.

And then you try and make it as smart as possible. So you have this machine that understands the language, or has access to the language, and then you try and give it some guidance, almost like a child: hey, what's acceptable? What do people expect you to say? What's offensive?

And that's an ongoing process. We have a lot of conversation now about guardrails that have been built. So if you ask certain questions about certain topics, the AI itself knows, oh, this is a sensitive topic, and sometimes it won't even answer. Some subjects it won't even delve into: if I want to create some sort of mass casualty device, it knows, hey, I'm not gonna give out the recipe for that. So people spend a lot of time trying to jailbreak the machines, because they know that the data's in there. What we did is we built what's called a RAG, retrieval-augmented generation, meaning that we have our [00:07:00] own data, which informs a foundation model. At this point, we have Latimer as a database, and we're sitting on top of OpenAI. We have two models from OpenAI, two models from Google's Gemini, two models from Anthropic, and then a self-hosted version of Llama that we're always training to make better.

Those models have access to a database like Latimer's, which has things like the New York Amsterdam News from 1925 to the present, dissertations, and textbooks by storied historians in African-American culture like Molefi Asante. And then you weight the model to say, hey, if you have a question and the answer is in this database, use that database as your preferred source, essentially.

Or it [00:08:00] mixes our database with what it can find in the foundation models. And now you have a whole other layer where the models can go out and search the internet as well.
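(Editor's note: a minimal sketch of the retrieval-augmented generation pattern described above, where a curated archive is queried first and the best passages are handed to a foundation model as the preferred source. The archive contents, scoring, model name, and prompt wording are illustrative assumptions, not Latimer's actual code.)

```python
# Sketch of a RAG flow, assuming the openai>=1.0 Python client is installed.
from openai import OpenAI

client = OpenAI()

# Stand-in for a curated archive (e.g. digitized newspapers, dissertations).
ARCHIVE = [
    "Lewis Latimer drafted the patent drawings for Bell's 1876 telephone patent.",
    "The New York Amsterdam News has been published in Harlem since 1909.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword retrieval; a real system would use a vector index."""
    words = query.lower().split()
    scored = sorted(ARCHIVE, key=lambda p: -sum(w in p.lower() for w in words))
    return scored[:k]

def answer(query: str) -> str:
    context = "\n\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # Weight the model toward the curated archive over its pretraining.
            {"role": "system", "content": (
                "Prefer the provided archive passages as your source; fall back "
                "to general knowledge only where they are silent."
            )},
            {"role": "user", "content": f"Archive passages:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content
```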
Marc Beckman: So you're doing very important work. How does the end user know about Latimer? For example, it seems to me that creating a lens that is unbiased would be important for everyone, and everybody should effectively be putting that tool on top of OpenAI. Everybody, including myself, should be using Latimer.

So how are you reaching that audience? I think it's important to reach people like me; white people should be looking through the Latimer lens rather than always going to ChatGPT or Perplexity or Grok or DeepSeek or any of these others, speaking specifically now in terms of AI search. It seems important for Latimer to reach a very wide [00:09:00] audience.
John Pasmore: Yeah, it certainly is. That's why the homepage of Latimer says AI for everyone, because we do think that's how everyone would want to approach the world: they want the most accurate view of the world possible.

And that's inclusive. You can't eliminate a culture from the data and then present that as, hey, this is a great view of the world. And certainly we're a young startup; we have a ton more to do. We want to do Indigenous, or Native American, history and culture.

We want to do more with Latinx history; we want to do more even with genders and sexual orientation. All of that, we feel, you need for an accurate view of the world, because there are so many important contributions, not only scientific, but to the culture itself. What these differences [00:10:00] make us as a group is different than being siloed in one culture.

And again, we're really still just getting started. We launched and we were, I would say, adopted: we got a ton of inbound interest from higher ed initially, and we're still working through that. We have some big universities and thousands of students using Latimer at this point.

And again, we're a young company, we're small, so we're taking their feedback and trying to build a better product. We just released this multi-model version last week, and we have a new design that goes up this week. So we're building as fast as we can.
Marc Beckman: So multimodal, meaning now it's more than just AI search, it's providing different capabilities, creating images, creating...
John Pasmore: No, multi-model, as opposed to multimodal. Multi-model just means that when we launched, [00:11:00] we had Latimer plus essentially two OpenAI models. Now we run the gamut: the user has a dropdown menu and they can choose which model, because people at this point have preferences as to which model.

We'll get into images next. But again, our focus is on higher ed and on text generation as it relates to marketing in healthcare and other industries. I think we still have a lot of work to do on that side, and we still have a lot of work to do on the data acquisition side. We're talking to a university, one of the largest historically black colleges and universities, that has 15 million unscanned pages.

There are Frederick Douglass papers in there; there are even papers from David Dinkins, the former mayor of New York. Nobody has that, and those things create a blind spot for everyone, because to the [00:12:00] models, if it doesn't see something, it doesn't exist, or it didn't happen, or it didn't happen in that detail.

So there's a lot of data we need to get. We have the Schomburg Center uptown in New York City that's doing some work with Google, but their stuff isn't online. So a lot of things are sitting in universities and museums: big chunks of our culture that I think as humans we recognize or are aware of.

Again, if the model isn't trained on it, it's not aware of it; it's as if it didn't happen.
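(Editor's note: a small sketch of the multi-model dropdown mentioned above, dispatching the user's menu choice to one of several providers. The function names and model identifiers are illustrative assumptions, not Latimer's routing code.)

```python
# Toy multi-model router: the dropdown choice selects a backend function and
# a provider-specific model id. Provider calls are stubbed for illustration.
from typing import Callable

def call_openai(model: str, prompt: str) -> str:
    return f"[openai:{model}] answer to {prompt!r}"

def call_gemini(model: str, prompt: str) -> str:
    return f"[gemini:{model}] answer to {prompt!r}"

def call_anthropic(model: str, prompt: str) -> str:
    return f"[anthropic:{model}] answer to {prompt!r}"

def call_self_hosted_llama(model: str, prompt: str) -> str:
    return f"[llama:{model}] answer to {prompt!r}"

# Dropdown label -> (backend, provider-specific model id)
MODEL_MENU: dict[str, tuple[Callable[[str, str], str], str]] = {
    "GPT-4o": (call_openai, "gpt-4o"),
    "Gemini 1.5 Pro": (call_gemini, "gemini-1.5-pro"),
    "Claude 3.5 Sonnet": (call_anthropic, "claude-3-5-sonnet-latest"),
    "Latimer Llama": (call_self_hosted_llama, "latimer-llama"),
}

def route(choice: str, prompt: str) -> str:
    backend, model_id = MODEL_MENU[choice]
    return backend(model_id, prompt)

print(route("Latimer Llama", "Who was Lewis Latimer?"))
```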
Marc Beckman: Yeah, that's really important work. As we started the conversation, that's where I see one of the biggest issues, if not the biggest, as it relates to racism, sexism, and bias. To drill a little deeper, do you feel like your job personally isn't just to stand up Latimer, but that now you're really a champion for getting black history correct as it relates to the future? Education is [00:13:00] such a critical part of generative AI. It can really shape the way that your son's generation and your grandchildren's generation see the world, but without this type of information, it's just wrong. So do you feel, on a personal level, John, that you've transformed from a commercially oriented businessperson into businessperson slash black activist, in a way? A black tech activist?
John Pasmore: Occasionally. You know as well as I do, we are a for-profit business, so a lot of people ask that: are you a non-profit? No, we're a for-profit business. So as a CEO, your first job necessarily is to build a profitable business and ultimately return capital to your shareholders, your partners. And how we're doing that is by building this machine that we do think solves a critical [00:14:00] issue in terms of the information that's available. It does mean, I think, that when I'm speaking on a campus, speaking to young people, that's how they view it.

Like, oh wow, this is really important for our future, that we have this alternative. Especially now that we're seeing, in certain states, books being banned or taken out of the library. It could be a book we all know, like To Kill a Mockingbird, things where we thought, oh, we settled that conversation 80, 90 years ago, that would never happen in the United States.

We're seeing this throwback, where there is this effort to reframe history. And we just really want the facts. What we put in Latimer are things like dissertations, textbooks, newspapers. We're not trying to be another source for opinions.

We're [00:15:00] just trying to get as much of the factual record as possible from a bunch of different sources. The AI itself is smart enough, I think, at this point to discern what's mostly factual. We're still seeing some issues with AI in search, when the AI just says, oh, I'm gonna go out to the open internet to answer Marc's question.

We saw this before the election: if there are a ton of articles saying something, and it could be false, or it could be an opinion about, let's say, a conflict or one of our relationships with another country, it's gonna assume, oh, I've seen so many articles like this, that must be the truth, so to speak.

And that's what it returns. All of those, I think, are technical issues that we can get to. But with anything in history where we don't have [00:16:00] video, we're going to have different historians that sit on different sides of an issue, or different sides of an event, and look at it differently.
Marc Beckman: It's even worse than that, John. In the past six months I've seen individuals who are not qualified as historians, who think they're historians, go on major shows like Tucker Carlson's, and they end up influencing people even though they're not experts on these topics, disseminating false and misleading information about different subsets of society. So we have so many issues as they relate to our children's generation and beyond. I want to ask you a question, going back to the jailbreak topic for a minute. You uncover this corpus of information at a university, for example.

I imagine you then need to go back, or it would be ideal to go back, [00:17:00] to the centralized entity, like Sam Altman's OpenAI, or over to Google, and say, hey, look, I just found this incredible corpus of information that's relevant for black individuals, and furthermore is important as it relates to getting the training for artificial intelligence done right.

Shouldn't we incorporate that back down below Latimer, so that it's right for Google, it's right for OpenAI? Do you end up taking it down below Latimer? To me, I see Latimer sitting on top of all of these different LLMs. So if I'm correct, do you take it down below also?
John Pasmore: It depends. Very early on we had a conversation with one of the big models, well, with two actually, where they could take an API from Latimer, a call to Latimer, to answer certain queries. And in the case of these documents, now that we're looking at assisting some of these universities and other institutions, [00:18:00] what we're building really is a licensing platform.

The money is really going back to the university. So in some senses, it allows them to generate recurring revenue from something that's just been gathering dust, in some cases literally in a basement. So we want to digitize it.

We want to create a licensing platform that uses technology very similar to Latimer, in that it's RAG, or retrieval-augmented generation, meaning the information never leaves the university's server. So Google can have access, Anthropic can have access, but you're not downloading or owning the underlying intellectual property.

And I think that has also been kind of a stumbling block: these giant companies, in their search for data, have landed at these institutions, and when the [00:19:00] institutions look at the actual agreements, they're like, well, this agreement does more than just give you training data.

And so they get in these fights, and in some cases they can't resolve them, because intellectual property is something that a Google, let's say, has had a ton of experience with. They are definitely not a nonprofit, and they are fairly aggressive from a legal standpoint.

So I think some of the folks we've talked to in the academic space appreciate when Google Education is doing positive things, but when you're talking to Google Legal, it's a little more thorny in terms of those relationships. So that's the goal: to build a licensing platform.
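(Editor's note: a sketch of the licensing pattern described above, assuming a hypothetical FastAPI retrieval endpoint. The archive stays on the university's server; licensed callers get back only short passages per query, never the corpus itself, and each call can be metered for recurring revenue. The names, keys, and stub search are illustrative assumptions.)

```python
# Retrieval endpoint hosted on the university's own server: callers with a
# licensed key get passages for grounding answers, not the underlying corpus.
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()
LICENSED_KEYS = {"partner-key-123"}  # issued under a licensing agreement

def search_archive(query: str, k: int = 3) -> list[str]:
    """Stub for local retrieval over the digitized archive."""
    return [f"passage {i} matching {query!r}" for i in range(k)]

@app.get("/retrieve")
def retrieve(query: str, api_key: str = Header(...)):
    if api_key not in LICENSED_KEYS:
        raise HTTPException(status_code=403, detail="Not a licensed caller")
    # Snippets leave the server; the archive itself never does, and usage
    # can be logged per key for revenue sharing with the institution.
    return {"passages": search_archive(query)}
```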
Marc Beckman: But what about the ethical consideration? Sticking with Google for a minute: don't they have an ethical obligation to ensure that the corpus of data and information that Gemini is trained on is appropriate, so [00:20:00] that our children and our grandchildren have an accurate perception of the world, or as close to accurate as possible, John?
John Pasmore: Yeah, you know, you've been around for a while. Really since the nineties, as far back as I can remember with these companies, we know that there have been efforts to get them to hire more diverse black and brown engineers.

The share of diverse engineers at these companies has been roughly 2%. We saw that in the nineties, we saw that the following decade, and we saw that the decade after. In my opinion, if you have a billion-dollar entity and you have an internal problem that you want to solve, you can solve it. I'm going to universities all the time; there's a ton of black and brown and women engineers. If that were your goal, you could certainly [00:21:00] bump that number up to seven or ten percent, whatever you really wanted. So since I haven't seen that effort, I'm not gonna assume that the ethics around the inclusiveness of these models, especially with this administration, is a burning issue. I don't know. I know what we do; I can't really speak for them.
Marc Beckman: Yeah, that's too bad, because to your point, going through Llama too, with Zuckerberg, if you're saying that these guys aren't hard charging with regards to hiring a core team of engineers in a diverse way, then what's the motivation, from a moral or ethical perspective, for getting the content right?
John Pasmore: Yeah. And as you know, it's just like having a diverse board of directors. You do that because, on a diverse [00:22:00] board, different issues are gonna resonate differently with a woman, or with a black person, or a person from the Caribbean or Mexico.

And you want that diversity of perspective generally in an organization. We're just at a funny time where that level of diversity is now being challenged in some ways, or it's very hazy at best in terms of how the administration is supporting it. We do have laws on the books about discrimination, but we've seen a rollback, with some very large companies stepping away from their stated support for diversity in their organizations.
Marc Beckman: So what would you suggest happens? Where do you see a pathway forward, a runway, to ensure that what you're talking about actually occurs in America?
John Pasmore: I think it goes to the core of what America is. And I'm still [00:23:00] confident that most of our country believes that all men are created equal. These are the foundational things that created America, and in my opinion, these are the things that make America great: our diversity. Everybody here is actually from somewhere else. It might have been a hundred years ago, it might have been last year, but that's what has made us super resilient. That's why the best and brightest from around the world have always wanted to come here.

That's why we have the best companies: because we have the pick of some of the smartest people on the planet.
Marc Beckman: Well, John, looking at the federal government, obviously we have a government made up of the executive branch, the legislature, and the court system. I'm curious about your perspective. The executive orders don't have teeth, in my opinion; I know they have impact early on, and they're having an impact here for sure.

But [00:24:00] ultimately, legislation is what's needed. So have you been linking in with senators and congresspeople as well? Where does that stand, with representatives and senators? How's Latimer doing on that front?
John Pasmore: Yeah, again, we're a young company. We do, I think, punch above our weight in terms of our reputation, and I think people understand what we stand for. Early on we spent some time with the staff at Chuck Schumer's office and with the Congressional Black Caucus. Generally speaking, the federal government moves pretty slow. That's where, I guess, President Trump has had an edge, because he can just write an executive order and it's done. Now you have a bunch of those being challenged in court, but he can write them faster than Congress can keep up with them, number one.

And I think we're in a period where you're seeing just less [00:25:00] legislation, or less desire to regulate industry, AI being, I think, at the top of that list. We're just not gonna see any real legislation, and I think recently they said not only are we not gonna legislate around AI, but the states cannot legislate it either.

You had some states, like California, Colorado, even New York, that were taking the lead because there was a vacuum in terms of what the federal government was or was not doing. So we're in this period, I guess you could call it unbridled capitalism, where it's just the free market, and it's a little bit rough and tumble.

But again, I have faith in people, I have faith in the market, and when I look at what's going on at college campuses, there's a lot of political thought, a lot of energy with the younger [00:26:00] generation.

So we want to give them a tool that harnesses that energy, and as we continue to build, maybe add some connectivity or community aspects, so like-minded people can use these tools as teams, essentially.
Marc Beckman: So what you're referencing is this legislation where I think we're gonna see Congress butting heads really soon, with regards to this massive bill. They call it what, the Big Beautiful Bill? I think that's what they're referring to. And within it, and I was kind of shocked to find this too, John, there's a section which holds that the AI industry itself will regulate; it will not allow the states to regulate. And I think you're right. My interpretation is that it's meant to let AI entrepreneurs, American ingenuity, unlock and create wealth.

And I think if [00:27:00] that happens, it could be very expansive. It could spread across different business sectors, from the creative fields to finance, medicine, war games, et cetera. I'm curious about your perspective if you take off your hat on the issue of racism for a second. Obviously you're a commercially oriented business, so looking at it purely from the entrepreneurial perspective: is that a good thing? Do we want the states to regulate? Or for a company like Latimer, which is in early growth, are you happy, purely commercially speaking, to see hands off from the states?
John Pasmore: The only reason I might be happy is that I don't think there's a lot of understanding of AI. When you look at some of the questions when you have a Sam Altman talking to Congress or the Senate, you can see that a lot of people really still don't understand [00:28:00] AI. So having those people draft rules around how it should be implemented is a concern.

We even see it at universities. Somebody just sent me that Ohio State is allowing AI for all their students, while some colleges still have zero tolerance for AI. I think that will quickly change, but I think it's a generational issue as well. Some of these institutions are just led by folks who see this only as a way to enable student cheating, as opposed to student learning. And that is a danger. But I think that we as an industry can continue to innovate and create products that address those issues. And I have faith in the American public; I do think they want fair and accurate information.

Although we do seek out that craziness of opinions and news, I think when we go to [00:29:00] the internet, when we're searching in one of the search engines or we go to one of these AI platforms, it's not for additional opinions. I think people want to drill down and get to some sort of factual basis.
Marc Beckman: You know, John, it's interesting; the educational front really hits home with me. As you're aware, I'm a professor. I've been a professor at NYU for seven years, and over the past two years I've seen my students leveraging artificial intelligence in the classroom in real time. Even when we're doing quizzes or tests, full-on exams, they'll leverage it.

And I don't believe an artificial intelligence has been built yet that can perfectly detect when AI is used. But because I'm so deep into it, knowing the individual and seeing what's on the page, I could see where AI was used. It makes me think about another area. We're talking about, let's say, inequities within society.

And I think [00:30:00] this could become a new type of inequity in society, John, whereby on the one hand we have access to information, so we're democratizing intelligence and knowledge, which could be phenomenal. Everybody has a phone, and they're able to access that information.

But on the other side, you have maybe our generation, which likes to learn, will do the homework, and will expand ourselves as humans, versus the younger generation. I know I'm generalizing right now, but it seems like a lot of the younger generation might like to use the tool because they're a little lazy and they just want a good grade.

And I guess, notwithstanding the tremendous access to knowledge, data, and information, do you see some sort of an imbalance coming between the knowledge haves and the knowledge have-nots, on a human level, on an individual level, because of AI?
John Pasmore: That's [00:31:00] a very core question about how we learn. I think we've already seen it, where you don't remember something because you know you can Google it so quickly. So your mind already says, I'm not gonna remember that.

It's at my fingertips. And then you have efforts like Neuralink, with Elon Musk, that basically want to connect your brain directly to a computer. In some senses, we've never been at a point anything like this, where we have a machine that has read the entire internet, can recall it, and can talk to you about it as if it were a person.

That person, or that entity, whatever you want to call it, is going to be there as long as there's electricity, right? It's going to be a companion for young [00:32:00] people. My son will ask AI, hey, how do I tell a friend this uncomfortable thing?

And it'll give him advice about that. We're in the steep part of the curve of this product's evolution. We don't know where it's going, actually. We know there are some cognitive knocks against the technology, because it solves stuff for you. But maybe there's more that you can do because you don't have to get involved in the mundane, so to speak. You see that in code creation right now. We don't know. Even last year, I think everybody was saying, well, in five years MIT is gonna be very different.

I think now we're saying, oh, in three years or two years it's gonna be very different, because you have a machine that can [00:33:00] essentially write code and tell you everything you need to do. You just have to keep talking to it about what you're trying to do, and it'll give you the code necessary to do it.

So that's an interesting phenomenon. The next phenomenon right after that is, I could tell the machine this is what I want to do, and then maybe I don't even have to tell it anything more. It can iterate itself. We say the machine, or AI, doesn't have agency now, but it could; you can program it in, where it can do what it wants to do.

So you could say, hey, ChatGPT, improve yourself: write the code, store it wherever you have to store it to make yourself even smarter, and do that recursively, just endlessly, maybe test it along the way. There are different things you could do, but at some point [00:34:00] it's not gonna need engineers to tell it how to improve itself.

Because even as an engineer, you haven't read, I don't know, a billion lines of code and remembered it. You know how to put things together, you know how to solve certain tasks, but you haven't read the entire code base, let's say, that's available on the internet.
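(Editor's note: a toy sketch of the recursive improve-and-test loop gestured at above: ask a model for revised code, keep it only if it scores better on a test suite, and repeat. Both helper functions are hypothetical stand-ins, not a real self-improving system.)

```python
# Toy recursive-improvement loop with a verification gate.
def ask_model(instruction: str, current_code: str) -> str:
    """Stand-in for an LLM call that proposes a revised program."""
    raise NotImplementedError

def score(code: str) -> float:
    """Stand-in for running a test suite and returning a fitness score."""
    raise NotImplementedError

def improve(code: str, rounds: int = 10) -> str:
    best, best_score = code, score(code)
    for _ in range(rounds):
        candidate = ask_model("Improve this program.", best)
        candidate_score = score(candidate)
        if candidate_score > best_score:  # keep only verified improvements
            best, best_score = candidate, candidate_score
    return best
```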
Marc Beckman: The picture you're painting is, to me, really encouraging, because what you're talking about now is moving the language of coding away from, effectively, science to plain English. And when you do that, you're opening up a lens for entrepreneurial, creative types to create digital platforms that perhaps all of these coders have never thought of before.

So I'm curious, what happens in your mind to the next generation when English becomes the common coding language?
John Pasmore: Yeah, again, we're just at [00:35:00] the precipice of that, and seeing it, I'm excited to see what people come up with. When we do our events on college campuses, we take Latimer to campus and we bring an artist with us, Delphine Diallo from Brooklyn. And Delphine says, hey, I use AI all the time because it allows me to do things that I could only have imagined before. An animation or some creation that would've taken a year now takes a week, or some hours. So she can create things she'd never been able to create before. And I think we'll see that in a lot of different realms.

Again, we don't know where that lands, but we do see that the people who are adopting and engaging with AI are benefiting: in a commercial sense they can be more productive, and in a personal sense, an artistic sense, more creative. There certainly is a [00:36:00] downside to all this, as with everything.

We didn't evolve as human beings with tools like this. Essentially, it's like you have Einstein, or however you want to frame it, somebody you can speak to at any time, about anything, about any topic.

So for folks who wanna learn, it's maybe a different kind of learning. Growing up, there was a lot of memorization. You had to memorize the stuff, US history, then you had to come in on Friday for the test and regurgitate what you'd just memorized, and then you'd forget it the following week because you'd moved on to the next decade or time period. So maybe that's not the most effective way. It leaves an impression about history, [00:37:00] but do you know what happened in 1863 in the United States? Most people don't.
Marc Beckman: Right, and unfortunately we're also living in a time period where I think people are happy to forget the past, to forget human error, issues going back to the impetus behind Latimer, issues like racism and bias and sexism, antisemitism. As humans, we just keep making the same mistakes over and over again.

We're happy to do it. It's interesting, though, that you mentioned Elon's Neuralink. He's already embedded some of those chips. I've seen video myself, and I imagine you have too, of them helping a couple of people regain their ability to, quote unquote, speak. But let's take that concept into the boardroom.

From a business perspective, would someone like you embed a Neuralink in your brain so that you could have access to more data [00:38:00] and an edge, not just over the people in the boardroom with you, but over your competition?
John Pasmore: I wouldn't a hundred percent rule it out. I would say it's highly unlikely at this point. But I think Elon has said, well, you'll be competing with somebody who does. So then how are you gonna compete, if the person next to you can instantly, at the speed of thought, access the entire internet and converse with it?

Yeah, that would be very difficult, I think. I saw another interesting iteration of that in the boardroom: what happens if you had kind of a digital person on your board who is in some ways more insightful? It can read all of your board minutes, from your very first board meeting to current. As you know, when you have new board members, getting them up to speed takes some time. So you have a machine that can read all your board [00:39:00] minutes, all your marketing materials, everything about your company that you want to give it, and then sit on the board.

Not only that, but it can look around the board, understand what Marc's background is, everything that you've said and done, and actually create a persona and understand: hey, if I say this, Marc's probably gonna think this. He may not say it, but he's gonna think it. It can work its way around the boardroom and understand who's sitting at the table with a degree of fluency that we currently don't have.

I met with a startup that's essentially working on that. That would certainly change board dynamics, to have that person, entity, whatever you wanna call it, at the table.
Marc Beckman: For sure. That's a tremendous amount of influence with that knowledge base. When you referenced the artist Delphine going with you to different campuses, it got me thinking about a [00:40:00] topic I often talk about: the age of imagination, which is fueled by AI, and the industries, specifically fashion, art, music, entertainment, Hollywood, that can be disrupted now.

So when you go on campus with Delphine, you have Delphine, the living, breathing, amazing artist, a super talented individual who's been finessing her craft for a lifetime. But then you have these kids on the college campus who are dreaming of becoming the next film director, the next Spike Lee, the next great author, the next great fine artist, and they're going to leverage artificial intelligence in ways that you and I probably have not yet dreamed of. They'll have the ability to disrupt Hollywood's big-budget films. When Spike Lee steps in, he's got the backing of [00:41:00] millions, hundreds of millions of dollars, in some cases with thousands of staffers to help with character development, writing, creation of music.

It goes on and on. But now the dreamer, that young woman on a college campus you're standing in front of with Delphine, could say, wow, I could unlock AI and go after that Hollywood studio. I can now create my own film in a way that is far more efficient, not just from a cost perspective but from a timing perspective as well.

I could distribute across other types of emerging technology, like social media for promotional purposes. I could even be paid digitally, with cryptocurrency. We're gonna see a radical transformation in society during this age of imagination. Let's stay focused on Hollywood for a minute: what do you predict will happen to the types of content we're going to see, the way people consume [00:42:00] content? Are we only gonna be in theaters or streaming? Will we see something that's more 3D as we walk across the streets? What's on the horizon on that front, because of AI, John?
John Pasmore: Yeah. Google just updated theirs, but Runway, I think, was the first platform to really come out and impress people with what it could do with video. And even then, you realized, oh, well, it can do, I don't know what it was, 15 or 20 seconds at a clip.

But you could say, well, now you just extend that, and you could see, wow, this is going to be very disruptive. And I think Tyler Perry, when Runway came out, stopped development. He was building a huge billion-dollar studio in Georgia, and he said, well, let's just hit the pause button here.

There's something going on that's going to radically transform the landscape. We have what seems like an insatiable demand for [00:43:00] streaming content when you look at Netflix and Amazon and all these other streaming platforms. It used to be a couple of movies came out a month.

Now we demand so much more, so I think that AI is definitely gonna fill that void. We have more creative talents, I think, that can play in that sandbox, because there's no barrier, or a very small barrier, to entry. The first paper that really transformed generative AI was called "Attention Is All You Need."

And I think that applies to humans as well: if you're paying attention and you're using these tools, that's really all you need, because then you and your imagination are unlocking opportunities that maybe people just dreamed of, or nobody dreamed of.
But it's a huge opportunity for you.
Marc Beckman: John, our conversation is making me think about the concept of fame right now, because when you and I were younger, an individual [00:44:00] was famous for their accomplishments and their achievements, and then people would see them in the flesh. We would press the flesh. Now, for this next generation, fame becomes something a little bit different. Even for people in our generation who are quote unquote famous: people are famous when they have clicks or views. People are famous when they don't press the flesh; you don't get to see these individuals anymore. I was just speaking with a highly prominent individual, and I was criticizing her because it seems like she's using social justice platforms for fame. She's using her name and her likeness, but not giving her own money to the platform, and appearing as if she's trading off of her fame to be philanthropic, but she's not real, in my opinion.

This was a heated debate we had. I said, you're not really all in if you're not contributing your own money. So I'm [00:45:00] curious, from your perspective, what's gonna happen? Could AI shift the perception or the concept of fame for the next generation too?
John Pasmore: Yeah, I think we saw that, right, with YouTube creators, and then influencers, and now you have digital influence. You have influencers that are really just synthetic; they're not even real people. So I think we're at the tip of that iceberg. Once these avatars are truly fluent, it's gonna take us some time to figure out what really makes us so different, because they appear to have feelings, they appear to be sensitive and well-spoken, and they're generally very attractive physically. I think AI understands this at this point, because it can just measure by [00:46:00] engagement, right? What do people find attractive or influential? What do we spend time with? That's going to be a whole learning curve that we haven't had to deal with in the past, where you might be competing with a synthetic person for some role, or for people's attention.
Marc Beckman: For sure. But you were talking before about the agentic part of artificial intelligence, which I think is really interesting, and you can take that a little further now into the concept of metahumans, where we can basically throw out the traditional segmentation approach, where we look at age, gender, household income, geography across the nation.

I think we could get into these micro-segments and bring up metahumans to appeal to each micro-segment in a way that she or he may [00:47:00] prefer. For example, if you're a 26-year-old who is, let's say, part of the LGBTQIA+ community, and you don't want to go to Bank of America and interact with a guy who looks like me, you can have a metahuman that's photorealistic and speaks effectively in the same type of language that you and your peers would speak, and, because of AI, totally disrupt the way interactions happen, in a good way, with a synthetic being as well as a human being.
John Pasmore: Yeah. You're starting to see it, right? The future is this; it's just hard to figure out. What we saw with ChatGPT still amazes me. You had companies like Google or even Meta that were working on these technologies forever, since their inception.

But ChatGPT got first-mover advantage, and then they just never [00:48:00] gave it up. They never let up. So here you have Google, the leader, so to speak, a multi-billion-dollar company that's essentially being disrupted, in the same way we've lived through a couple of disruptions: Napster and the music industry, Netflix and the film industry.

So now you have ChatGPT and maybe the whole software industry, because you do have software companies that are trying to embrace AI. Certainly Microsoft has made a big bet, but they can be disrupted as well, because now you as a young person could figure out, you know what, really, I want a mail client that does X, Y, and Z. I'm not a coder, but I know how I want it to look, I know how I want it to be. And you can create that yourself.
Marc Beckman: So John, let's break it down a little bit for the audience. Which one of these major tech companies do you think is doing [00:49:00] it best when it comes to combating racism?
John Pasmore: Oh, I think they're all kind of neck and neck. I think Anthropic talks about it more, talks about fairness and equity more. And even just talking about it moves your brand around a little bit, where people come to expect that you're this kinder, gentler, more progressive company.

So I think they would be kind of the leader. You have stalwarts like Microsoft or Amazon that don't seem super sensitive to the topic and don't talk about it a lot. So I think eventually young people, or people who care about these issues, will vote with their attention, essentially.
Marc Beckman: During our conversation, you've landed on a couple of elements of an ecosystem where racism could permeate through and impact AI [00:50:00] training, as well as the content that the individual taking in the AI either views or understands. You've spoken about academia, you've spoken about major corporations, you've discussed government.

Which other part of the ecosystem, in your opinion, needs to be impacted in a positive way to further combat racism in AI?
John Pasmore: The two industries, I think, that can be most impactful, or have negative consequences if they don't act, are healthcare and law enforcement. In healthcare, we already see that there are very wide disparities that are sometimes hard to discern the reason for.

You have black maternal mortality that is, [00:51:00] in some states in the United States, equivalent to third-world countries. A woman going in to have a child in a hospital in Oklahoma has a very high risk of a bad outcome.

And there's really not a good reason for that, not a good technical reason. I'm partners with Spike Lee's wife, Tonya Lewis Lee, in another business, and she has a documentary about that, Aftershock. I think that's a huge issue, whether or not healthcare companies really wanna open up that can of worms.

You do have something now generally called passive listening, where you have a listening device, so the AI is maybe listening to how a doctor or a nurse is interacting with a patient. The AI can flag language, or it can say, hey, this patient expressed that they were in pain six hours ago. I see no remediation; there's been nobody here to drop off any [00:52:00] kind of pain medicine. Whereas maybe that would've been forever overlooked, now it can be flagged, and it can be examined by the institution: well, why did this happen?

And then law enforcement is, as you can see, just a can of worms. I think we're in a crazy scenario with law enforcement, where you're seeing what people have been talking about with privacy all this time, because in our immigration efforts you see really egregious violations of people's privacy.

Whether that's, hey, you're traveling, you come back to the United States and they're checking your phone and your messages for your statements, and detaining people over that. That's not something we would expect. So, law enforcement and privacy, right? It may be that for some [00:53:00] time the protection of your privacy is really up to you, because the law hasn't caught up, or the law is not abiding by the law, which is unfortunate.
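(Editor's note: a toy sketch of the passive-listening flag described above: scan transcribed bedside audio for pain reports and raise an alert when no remediation is charted within a time window. The event schema, terms, and threshold are illustrative assumptions.)

```python
# Flag pain reports that go unaddressed within a time window.
from datetime import datetime, timedelta

PAIN_TERMS = ("in pain", "hurts", "pain medication")

def flag_unaddressed_pain(events: list[dict], window_hours: int = 6) -> list[dict]:
    """events: [{'time': datetime, 'kind': 'utterance' | 'medication', 'text': str}]"""
    alerts = []
    for e in events:
        if e["kind"] == "utterance" and any(t in e["text"].lower() for t in PAIN_TERMS):
            deadline = e["time"] + timedelta(hours=window_hours)
            remediated = any(
                f["kind"] == "medication" and e["time"] < f["time"] <= deadline
                for f in events
            )
            if not remediated:  # surface for institutional review
                alerts.append({"reported_at": e["time"], "text": e["text"]})
    return alerts
```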
Marc Beckman: John, it seems like you have such a Herculean effort in front of you. I'm curious, do you have other colleagues or institutions supporting you with these efforts that you'd like to take a minute to highlight, so the audience can do a little research and see, beyond Latimer, who else is fighting this fight with you?
John Pasmore: Well, we have support, both from institutions, I mean the colleges themselves. Southern New Hampshire University was our first big client. They have 170,000 students, including over [00:54:00] 30,000 black and brown students, and they're a big supporter of diversity and of visibly showing it.

They wanna visibly show, hey, we care about you as a part of our college community. We want more of that. HBCUs themselves have been tremendously supportive; the very first school to adopt Latimer was Miles College, a little college in Alabama. The United Negro College Fund, the UNCF, has been tremendously supportive. The billionaire Robert Smith has an organization called the Student Freedom Initiative that has been very supportive, and we have a partnership with Black Girls Code. We get a tremendous amount of inbound interest, and there are really almost too many aligned organizations to name. So we're excited about that, and we love the support.
Marc Beckman: That's great, John. You've given me a ton of time, but before we end this [00:55:00] conversation, we've gone really broad here, and I wanna make sure we've covered everything. Are there any specific topics or issues on your mind that you think the audience should be aware of as it relates to racism and bias in AI?
John Pasmore: I think we did cover everything; we've had a broad conversation. I do think it's up to people to vote with their attention, in terms of what you want the future to look like. We have an attention economy where, even though there are some subscription fees, everything on the internet is really always looking for your time, essentially.

And if you invest your time with AI, I think there are a ton of possibilities to improve yourself, to improve your education, for your family. We recently read that African American mothers are over-indexing on their use of [00:56:00] AI because they see it as a way to leapfrog some of the issues around tutoring, or access to tutoring, for their kids.
So we hope to see more of that.
Marc Beckman: That's great. So John, every guest who comes onto Some Future Day ends the show with me in the same format: I provide the guest with a lead question that incorporates the show's name and allow my guest to finish it. Are you game?
John Pasmore: All right. Let's give it a whirl.
Marc Beckman: This is it, John. You've gotta finish this last task, and then we're gonna let you go.

So: in some future day, artificial intelligence will not be biased or racist if...
John Pasmore: If we pay attention. Then we will have, I don't know, the companion that we've always dreamed about.
Marc Beckman: John Pasmore, thank you so much for joining me on this episode of Some Future Day. It's really great to see you.
John Pasmore: Thanks, Marc. [00:57:00] Appreciate the time.