
How Blockchain Can Address the Inherent Risks in Artificial Intelligence

In this episode of the Blockchain Journal podcast, Blockchain Journal editor-in-chief David Berlind explores the intersection of artificial intelligence (AI) and blockchain with guest CasperLabs CEO Mrinal Manohar. As more people become personally exposed to the productivity gains that can be had from offerings like ChatGPT and MidJourney (automated art and image generation), more enterprises are taking a closer look at how AI can optimize business processes and improve the bottom line. But, in its current state, AI is neither risk-free nor without potential long-term legal consequences.

During the interview, Manohar discusses the applicability of blockchain's tamper-proof nature to the trustworthiness of artificial intelligence. As Manohar puts it, AI is a "black box": it lacks transparency and suffers from a range of governance issues, not the least of which is version control. According to Manohar, blockchain can bring a certifiable and trustworthy audit trail to AI decision-making, addressing concerns related to biases, data integrity, and accountability.
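The audit-trail idea Manohar describes can be illustrated with a toy hash-chained log in Python. This is a simplified stand-in for an actual blockchain, not CasperLabs' product; the `AuditLog` class and its field names are hypothetical, for illustration only. Each AI decision is appended as an entry whose hash covers the previous entry's hash, so any after-the-fact edit breaks the chain and is detectable on audit.

```python
import hashlib
import json
import time

class AuditLog:
    """Toy append-only, hash-chained log (a simplified stand-in for a blockchain ledger)."""

    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> dict:
        """Append an event, chaining it to the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"event": event, "prev_hash": prev_hash, "ts": time.time()}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-derive every hash; any tampered entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: e[k] for k in ("event", "prev_hash", "ts")}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Record two hypothetical AI decisions (e.g., a resume-screening bot).
log = AuditLog()
log.record({"model": "resume-screener-v3", "input_id": "r-1041", "decision": "advance"})
log.record({"model": "resume-screener-v3", "input_id": "r-1042", "decision": "reject"})
assert log.verify()

# Any retroactive edit to a recorded decision is detectable:
log.entries[0]["event"]["decision"] = "reject"
assert not log.verify()
```

A real deployment would replace the in-memory list with a decentralized ledger so that no single party can rewrite the log, but the detection principle is the same.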

So who should care among those in the world's largest enterprises where AI is already playing a role? Manohar says the Chief Information Officer (CIO) and Chief Risk Officer (CRO) are probably at the top of the list with the former bearing responsibility for technology deployments and the latter concerned with the potential for legal exposure due to the risks associated with AI.

(The full-text transcript appears below.)

Artificial Intelligence (AI)

By David Berlind

Published: November 21, 2023


19 min read

In this Story
Amazon Web Services

Audio-Only Podcast



Full-text transcript of David Berlind's Interview with Mrinal Manohar, CEO of CasperLabs

David Berlind: Today is November 13th, 2023. I'm David Berlind and this is the Blockchain Journal podcast.

You know, these days, artificial intelligence is capturing the imagination of millions of people worldwide. They're using it for all sorts of things, of course. ChatGPT is a big deal. Also, you've got a variety of different platforms out there that let people kind of create artwork that's amazing, [and] doesn't even look like it was created by a computer. And there are still a bunch of unanswered questions about artificial intelligence that a lot of people are asking. For example, the ethics behind the decisions it is making, or the copyright to the content that artificial intelligence creates, and then the final business outcomes from all of the things that artificial intelligence can do for businesses, people, [and] anybody for that matter. So joining me today is Mrinal Manohar. He is the CEO of CasperLabs, one of the enterprise blockchain providers out there. And CasperLabs has been doing some research into the intersection of artificial intelligence and blockchain. Mrinal, thanks very much for joining me here on the Blockchain Journal podcast.

Mrinal Manohar: David, thank you so much for having me over. It's my pleasure.

Berlind: It's great to have you. So let's just dive right in. We know artificial intelligence gets a lot of airplay these days. Blockchain – for a while, blockchain was the big idea and the big news – and suddenly artificial intelligence came along and kind of swallowed it whole, and now all the mainstream media talks about is artificial intelligence. Blockchain has taken sort of a backseat. Some people talk about this idea of where the two meet, but when I hear that, quite frankly, I don't know where the two meet. So, why don't we start there?

What is the intersection of artificial intelligence and blockchain?

Manohar: That's a really, really good question, David. And I think...

I think your response – "When I hear blockchain and AI, I don't see where they meet" – is justifiably a little skeptical, because I think oftentimes when people talk about two technologies, it's "Let's sprinkle some fairy dust of this one on the other." But if I may, David, would it be helpful if I just stepped back a second on what the two technologies actually do? Because then I think the intersection makes a lot more sense. So if we're just stepping way back, right?

Berlind: Yeah, let's go ahead. Let's step back. Go ahead. Take the...

Manohar: What blockchain does, and I think people often confuse it just for cryptocurrency, but really, what blockchain is, is it's a tamper-proof ledger, meaning everything that happens on a blockchain, especially a decentralized blockchain, is absolutely tamper-proof, it's 100% serialized, and this is why crypto was a first use case, because you couldn't have digital money before because you didn't have copy protection. But really, the huge value is copy protection, [and] serialization – meaning, you know when what happened and by whom. And then finally automation, meaning you can put any level of intelligence on top of this via smart contracts. So that's what blockchain does.

What AI does is augment decision-making processes. Meaning – I get a picture and your AI can tell you if it's a cat or a dog. All it's doing essentially is weighting certain neurons that are part of making this decision and then giving you an output to that decision. [For example:] Should I thumbs up or thumbs down this resume? Should I identify this picture as a cat or a dog? And then if you combine different AI systems, you can do something that's even more complicated.

The intersection really is, AI is a black box, meaning right now it's very hard to understand what set of inputs actually created the decisions that an AI emits later on. And secondly, once, say you do identify that something bad has gone wrong with your AI, it's almost impossible to version control it. You know, go back to a version which was working.

The intersection happens where blockchain is an excellent governance layer for an AI system. You augment your AI governance with a blockchain solution. You now have a completely tamper-proof and certifiable set of audits for how exactly your AI was built, [and] what data sets went into building those decisions. Secondly, you have the great ability to version control and skip back a version to when the AI was performing up to spec.

Berlind: Is that all there is to [the] governance of AI? I think about [that] once AI is on its way and making decisions, I would actually love to see something like blockchain make a record of those decisions as they're being made. For example, literally to put a... blockchain APIs or code where those decisions are being made in the AI code so that you're making a record of those decisions as they happen in a way that makes those decisions very transparent to the many different parties that need to see that information and need to verify that it's working fairly. That could be the customers of a company. It could be the executives of a company. It could be regulators or lawmakers or lawyers. I mean, what is the totality of governance when you're talking about AI?

Manohar: Yeah, and David, you're 100% right. Like the two examples I gave, version control, knowing that no personally identifiable information or copyright information went into your AI is just the tip of the iceberg. The AI governance and what blockchain can bring to AI governance is actually fairly comprehensive.

So let's start first with the version control, as well as removing the black box of what data went in to build that AI. As a result, now, you can actually certify your AI. For example, if your AI has been trained and you want to resell it. For example, I've built the perfect insurance chatbot. You're actually able to show a complete audited path of every single training data set that went into the AI, [to] prove that nothing's copyrighted, [to] prove that there's no personally identifiable information. And then to what you said, you can extend this many levels above. We were just talking about AI in isolation. AI, just like databases, ERPs, or other systems, very often times touch multiple entities. You might have one company that's providing the chatbot to a customer that could be a financially regulated entity, and then you have end users as well as users who have supplied the data for the actuarial tables, etc.

With a blockchain governance system, you can ensure that all the permissions on what data goes in and what is surfaced up to the top is always under an immutable ledger that... I wouldn't say you control, anyone controls based on the rules you set up for that governance system. So the governance of AI would be all-encompassing and blockchain is just the most cost-efficient way to do it and also the most secure way to do it because making any other system tamper-proof isn't really tenable in any cost-effective manner.

Berlind: Well, let's talk about the tamper-proof part. Where does that impact the world of AI? Are there fears that when it comes to artificial intelligence and the data that it's using, there will be tampering with that for whatever reasons – malicious reasons, maybe just scientific reasons, whatever it may be? There could be a lot of motivations for tampering with that, but is that a concern? I mean, you guys have done some research into where enterprises are working with AI and what their concerns are.

Manohar: Yes, the tamper-proofing is actually incredibly important in the sense that it makes your audit function incredibly cheap. So when something goes wrong and you typically have to do an audit – if it's all in one place and you know that it hasn't been tampered with, you can look at a set of transactions and/or a set of events and say, yes, I know that happened. It's fully auditable. I trust you. Check. So... and when you think about AI, the volume of data, the volume of queries, and the volume of activity, I don't think we've even seen anything yet. I'm incredibly excited to see where AI goes. I mean, I don't work at an AI company, but I still think it's a really fascinating and exciting technology.

I think the volume of interactions is just going to be so large that having this automated tamper-proof system is incredibly important. Because when you're dealing with incredibly large volumes of either interactions, sessions, etc., auditing them becomes a huge pain if you don't know that the audit path has not been tampered with. And that's why the tamper-proofing is incredibly important.

Berlind: So who would a tamper-proof audit trail of artificial intelligence matter to in an enterprise? Like, who cares about that? And what would motivate them to say, "Wait a minute, full stop before we put this AI in place. We need to start to kind of hitch it to the blockchain wagon in a way that we're guaranteed. The data hasn't been tampered with, the algorithms haven't been tampered with, the decisions it's making are fair and reasonable and ethical."

Manohar: Absolutely. So, let me start by saying prevention is better than the cure. Meaning, if you start with governance right from the outset, things will go a lot smoother. But the cure is still also very, very good. But let me give you two very specific examples just to think about the kind of risks and the kind of use cases where this would really, really matter.

So, HR is one. If you think about it, chatbots or AI bots are used all the time to sort resumes, help with hiring decisions, etc. And recently, if you've noticed, there have been some lawsuits and some complaints about biases that have crept up within AIs – either racial or, you know, they've accidentally tagged some universities or some regions unfairly – because we don't know what's happening in the black box of AI. But you suddenly see, okay, this AI HR bot is now rejecting every resume from Wesleyan, for example. Again, that's a made-up example. I don't remember what the particular university was, but it did happen at some point.

But think about any company that's using that HR bot, right? Or the producer of that HR bot, having adequate governance on it, the ability to identify when a certain hallucination or bias happens. Plus, if you're the person who's making that, each of them needs to be tuned for different companies. You know, what Goldman Sachs is looking for is very different from what Pfizer is looking for. But during that tuning, you need to make sure that, A, you have an audit path of what exactly happened [and] when. So you can identify when a bias occurred and if it does occur, you need to have the ability to switch back to a state when that wasn't happening. I'll give you another -

Berlind: So who does this appeal to? Because I want to go back to that question. Who in the enterprise should take notice and then start to assert their influence when it comes to the implementation of AI? To say, hold on, let's take a break here. How auditable is this? How do we know there's no biases? How do we maybe apply blockchain to raise the level of governance to something that's acceptable to all stakeholders?

Manohar: So... it again comes down to what the corporate structure of the company is, but off the top of my head, and again, this is a novel and new space, but off the top of my head, I think the function would primarily be with the chief information officer and the chief risk officer.

Berlind: Okay.

Manohar: And I'll talk about why the second. Now the chief information officer, I think that's fairly obvious, right? Or CIO/CTO.

Berlind: It's an IT system, yeah.

Manohar: It's the person who makes all the, yeah, it's the person who makes all the technical decisions, AI obviously can save companies a lot of money, assuming it doesn't do something bad and you face a lawsuit. Now the reason why it also involves a chief risk officer is twofold. One, these hallucinations and biases can cause real, real issues, as well as any personally identifiable or copyright information can bring a lot of legal issues.

Berlind: Sure.

Manohar: But the other reason why it's important for the chief risk officer is... recently, I believe it was November 1, our administration here in the US already announced an executive order on how AIs need to be run and governed in a safer, more transparent way. And I believe that this is going to become table stakes going forward. Obviously, I don't have a crystal ball. I don't know exactly how the regulatory landscape emerges. But there seems to be a lot of interest in ensuring that AI tools are well-governed and highly transparent. So I do think it does affect the risk function within an organization significantly as well.

Berlind: Yeah, interesting point. Of course, the US government can't get its act together when it comes to regulating blockchain and AI, at least blockchain. I think once you understand it, you understand it. AI is like on a whole nother level.

Manohar: Yeah.

Berlind: So, I wish the US government luck in getting that one done. Obviously, there's also a lot of tension between the different parties in the US and they always tend to take opposite sides on any issue, which will stall any legislation or lawmaking around something like that. That's I think what we see happening with blockchain. So, I remain skeptical about that piece of it, but I do think that … When I think about what I've read about the biases of artificial intelligence, it's, I certainly see the possibility for blockchain to kind of create a single source of truth around the decisions and the data that go into, you know, the machine learning and ultimately the decisions that AI is making in any use case. But I do have a question. This is more from the enterprise point of view.

My guess is that most enterprises are not going to be out there baking their own artificial intelligence and then thinking about, okay, how do we get blockchain wrapped into this? They're going to be acquiring third-party solutions that do it for them. They'll be acquiring specialized solutions that address their particular – to your point about, you know, what's a financial company going to do versus, you know, some other company in another industry they're all going to have different needs and so you're going to see a lot of very custom solutions come out. Won't we be depending on the purveyors of those solutions to build in this auditability?

Manohar: So that's a great point. There are two worlds: every company doing its own AI initiatives, or most companies using third-party tools with some rejiggering of the third-party tool. The good news... So let me address that point first.

Regardless of how that plays out, AI governance looks exactly the same. Meaning, even if you're a company using a third-party tool and you don't have access to the actual weights of the neurons, you're always setting the parameters for that AI, you're always setting up the configuration for that AI. And tiny changes – people within the AI industry know there are all these little things that can cause massive butterfly effects, like changing a p-value or a k-value can just dramatically change the way your AI reacts. So whether it's a third-party tool or something you're building yourself, the AI governance tools – to get, you know, version control, automation, etc. – apply regardless.

But then it comes down to who should be doing – who should be caring about the governance. I think at the end of the day, the person who really cares about it is the chief risk officer and CIO at the end, at the company. They're the ones who will want to ensure that at least any AI tool that they use doesn't expose them to massive... And to your point, assume the governments do nothing about real regulation in AI.

That wouldn't stop the fact that lawsuits are being thrown left, right, and center for personally identifiable information, copyrighted information, et cetera. Or that AI – your AI bot just said something racist to me. That happened recently. Those things, the ability to fix, repair, audit, and version control against that really is something that every CIO or CRO should think about.

Berlind: Yeah, I don't disagree. I just think that everybody's taking their applications off the shelf right now.

Manohar: Yeah.

Berlind: And there are so many providers of those applications. So the onus – you talk about how the stakeholder that should be most interested in this might be the CIO or the chief risk officer, maybe even the chief compliance officer, to the extent that there may be regulations and some compliance issues that come up as a result of those. But I still think it also goes to the product managers and the purveyors of these solutions, because it's sort of a check-the-box. Is there AI that comes with a solution? ... Is there some sort of blockchain that comes with a solution, where the governance [is] a single source of truth that no single party can tamper with or control? Like, I would be looking for that in the solutions I acquire so that then I have access to whatever the blockchain is making transparent to the customers of the solution provider. That to me sounds like it's really a clarion call to the solution providers to get this done.

Manohar: Yes, and it could work from either end. And to be honest, it's hard to tell, right? AI governance is a very, very new industry. You know, I think ... I was reading a report. I think AI governance in totality is a $120 million industry today.

Berlind: Million?

Manohar: It's just starting.

Berlind: That's not a lot, 120 million.

Manohar: No, no. It's just starting. Right?

Berlind: That's yeah... It's just started.

Manohar: AI is just starting. People weren't even thinking about governance. But I think... I was reading a market report. They [are] expected to grow at like a 50 percent CAGR for the next 12 years or more, reaching multiple billions fairly soon. So, not trying to escape answering the question by saying it's too early –

Berlind: Sure.

Manohar: But because it sort of is. What we did: we did a webinar that we co-hosted with IBM talking about – and we demoed – how a blockchain could actually show version control in an AI. We actually showed an insurance chatbot go from giving ridiculous answers, and you hit one button, and now your chatbot is giving really, really good answers. So we showed that it actually works. What we're really working on now is building a council of potential customers and enterprises. Obviously, if anyone wants to be involved in this, they can go to casperlabs.io, our website, and sign up, and we're happy to get feedback.

But really stage one is really figuring out, to your point, where does that onus reside, right? For example, with cloud providers, what ended up happening is basically on the governance and being hack-proof and being very, very secure, AWS, Google Cloud, and Azure basically took the onus on themselves and they indemnify their customers to a certain extent. I believe that will start to happen.

I know that watsonx, for example, at IBM indemnifies you against using their data sets because they've already been governed, they've already been looked at, et cetera, and they're pre-certified. So where exactly this onus and economic moat sits is hard to tell when an industry is this early. But the reason why everyone's so optimistic about this movement towards AI governance is because it makes what's a really, really exciting technology safe. AI is a wonderful technology that could actually be great for humanity. But as with any technology, if it starts going off the rails and we don't have adequate governance... I hate using the word control. That just sounds wrong. But if you don't have adequate governance around it, weird things can happen, as we've seen, right?

Berlind: Yeah, I get that.

Manohar: Like, you know, the –

Berlind: Yeah. I also... I think that blockchain, of all the technologies out there to create a single source of truth around all of the parameters of artificial intelligence, blockchain is probably the best one.

Manohar: Yeah.

Berlind: I mean, it's inherently, as you point out, inherently immutable, tamper-proof, and designed for multi-party transparency, which is exactly what you want when it comes to a technology like artificial intelligence.

You guys have done a survey though, so I'm just kind of curious what are some of the highlights in terms of the survey that you conducted? Like what did you learn from the enterprises that you went out and surveyed?

Manohar: Absolutely. So, we were very fortunate to have Zogby do the survey for us. And we surveyed about 600 businesses, [the] US, UK, Europe, [and] some of China. And one thing that was really interesting was [that] we saw a lot of confusion between blockchain and crypto.

Like, when we asked people, "Are blockchain and crypto the same thing?" we saw almost two-thirds say it was the same thing. When we ran the survey recently, we've actually seen a dramatic shift. We see a lot of people actually understand it a lot better now. In fact, a full 77 percent of respondents told us that they now fully understand blockchain, compared to, you know, [a] much lower number before.

Berlind: Wow, that's amazing. [I] fully understand it.

Manohar: Yeah, and...

Berlind: [It] took me a long time just to figure it out. I think I needed a degree in it or something, yeah.

Manohar: Yeah, and again, it's a survey, so it's self-reporting, so you can't take it, you know, apples to apples, but it is indicative, like, statistically.

Berlind: People are starting to understand that cryptocurrency is just one thing that happens on blockchain. You can use blockchain as a platform for many other things.

Manohar: But I'll tell you what really surprised us, because we didn't think this many people would say that. 71% of those business leaders said that they actually viewed blockchain and AI as complementary technologies. And when we asked, okay, if they're complementary, what are they complementary for, well over half said it's actually improving AI's effectiveness, safety, and governance, which is the most interesting part of the convergence of AI and blockchain.

Berlind: That's amazing that any one person, I mean, if I had to guess how many people in the world would say something like that, I would say it was under a hundred. So that's amazing. Yeah.

Manohar: No, 100%. We were surprised ourselves. I mean, that was our thesis going in. And we've done a lot of work with external parties to vet that, but you only get real data when you ask 600 really, really distributed respondents. So it seems like the message has resonated. But again, I forget the percentage here, but it was a small percentage [on], "Have you already started purchasing an AI governance product or something?"

Berlind: Right.

Manohar: And that number is much smaller, which is to be expected. It's a pretty incipient industry.

Berlind: Okay, well, Mrinal Manohar, you're the CEO of CasperLabs. You've done a survey of 600 business executives to learn about their perceptions of the intersection of AI and blockchain. Thank you very much for sharing all of these great thoughts. I certainly hope myself that we do see some level of transparency around the AI, and if blockchain happens to be what brings that, then all the better.

Manohar: [I] hope so too, we're just here to help.

Berlind: Okay, you've been watching the Blockchain Journal podcast. I'm your host, David Berlind. For more podcasts like this, you can come to our YouTube channel on YouTube, of course, just search Blockchain Journal there or come to our website, blockchainjournal.com. See you at the next podcast.


© 2024 Blockchain Journal