DAVOS, Switzerland — Blockchain could be used to prevent bias in the data that artificial intelligence models are being trained on — and that could be a “killer use case” for the technology, executives told CNBC.
One of the concerns about AI models, the kind that underpin applications like ChatGPT, is that the data they are trained on can contain biases or misinformation. If it does, the answers an AI system gives may reproduce those biases and false information.
Blockchain hit the market in 2009 with the launch of the cryptocurrency bitcoin. In the context of bitcoin, the technology is an immutable, tamper-proof public ledger of transactions. Businesses have been looking to put those principles to use in other applications of blockchain, which is sometimes referred to as distributed ledger technology.
In the case of AI, training data can be recorded on a blockchain, which would allow the developers of an AI system to keep track of exactly what data a model has been trained on.
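To make the general idea concrete, here is a minimal sketch in Python of how such tracking could work: a fingerprint of each training batch is written to a hash-chained, append-only log, so any later tampering with the record is detectable. The `ProvenanceLedger` class and its method names are hypothetical illustrations, not Casper Labs' or IBM's actual product.

```python
import hashlib
import json
import time

class ProvenanceLedger:
    """Minimal append-only, hash-chained log of dataset fingerprints.

    Each entry commits to the previous entry's hash, so altering any
    recorded dataset hash after the fact breaks the chain.
    """

    def __init__(self):
        self.entries = []

    def record(self, dataset_bytes: bytes, note: str) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "note": note,
            "dataset_hash": hashlib.sha256(dataset_bytes).hexdigest(),
            "prev_hash": prev_hash,
        }
        # Hash the entry body (entry_hash is not yet set at this point).
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the hash chain and confirm nothing was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["entry_hash"]:
                return False
            prev_hash = entry["entry_hash"]
        return True

# Record two training batches, then check the chain is intact.
ledger = ProvenanceLedger()
ledger.record(b"batch-1 training examples", note="initial corpus")
ledger.record(b"batch-2 training examples", note="web crawl update")
print(ledger.verify())  # True
```

Note that only hashes, not the data itself, would typically go on-chain; the datasets stay off-chain, with the ledger serving as the proof of what was used.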
Casper Labs, a business-focused blockchain firm, partnered with IBM this month to create such a system.
“The product that we are developing, the datasets are actually checkpointed and stored on the blockchain so you have a proof of how the AI is trained,” Medha Parlikar, chief technology officer and co-founder of Casper Labs, told CNBC during a panel discussion at the World Economic Forum in Davos this week.
“And so as you use the AI, if it’s learning and you find that the AI is starting to hallucinate, you can actually roll back the AI. And so you can undo some of the learning and go back to a previous version of the AI.”
Hallucination broadly refers to instances in which an AI system generates false information.
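The rollback Parlikar describes could, in principle, look like the sketch below: each model snapshot is committed alongside a fingerprint of the data it was trained on, and because a ledger is append-only, "rolling back" means appending a new entry that points at an earlier snapshot rather than erasing history. The `ModelHistory` class, the file names, and the `roll_back_to` helper are all illustrative assumptions, not the actual Casper Labs and IBM system.

```python
from dataclasses import dataclass, field
import hashlib

@dataclass
class Checkpoint:
    version: int
    weights_path: str   # where this model snapshot is stored (off-chain)
    dataset_hash: str   # fingerprint of the data used for this step

@dataclass
class ModelHistory:
    """Hypothetical append-only log of model versions.

    A ledger is immutable, so rolling back does not delete entries;
    it appends a new entry that reuses an earlier snapshot.
    """
    log: list[Checkpoint] = field(default_factory=list)

    def commit(self, weights_path: str, dataset_bytes: bytes) -> Checkpoint:
        cp = Checkpoint(
            version=len(self.log),
            weights_path=weights_path,
            dataset_hash=hashlib.sha256(dataset_bytes).hexdigest(),
        )
        self.log.append(cp)
        return cp

    def roll_back_to(self, version: int) -> Checkpoint:
        """Append a new entry that restores an earlier checkpoint."""
        old = self.log[version]
        cp = Checkpoint(
            version=len(self.log),
            weights_path=old.weights_path,
            dataset_hash=old.dataset_hash,
        )
        self.log.append(cp)
        return cp

    def current(self) -> Checkpoint:
        return self.log[-1]

history = ModelHistory()
history.commit("weights_v0.pt", b"curated corpus")
history.commit("weights_v1.pt", b"unvetted web data")  # later linked to hallucinations
history.roll_back_to(0)                                # restore the earlier snapshot
print(history.current().weights_path)                  # weights_v0.pt
```

Because every rollback is itself a logged entry, the full training history, including the bad step and the decision to undo it, remains auditable.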
Blockchain has been talked about for many years, and a host of industries, ranging from finance to health care, have been looking at ways to use it.
Sheila Warren, the CEO of the Crypto Council for Innovation, said, however, that a…