Author: Grace Carney, USV Analyst; Translation: 0xjs@黄金财经
It has never been harder to tell whether what you see online is real or fake. Someone hired an AI bot to post fake comments on Reddit, Drake resurrected Tupac in a rap, and Morgan Freeman is not Morgan Freeman. Blinken playing guitar in Kyiv, however, is real.
Digital fakery is not new; it has been around since the birth of the Internet, and traditional forgeries are even older. Rock paintings, ancient pottery, and stained-glass windows have all been used to distort the truth or obscure the facts entirely. For as long as forgeries have existed, it has been nearly impossible to be 100% sure whether something is real or fake. The same may apply to generative AI.
That's why we have been thinking about the concept of "deep reals". Deep reals are the opposite of deepfakes. Rather than assuming that everything is real and that we only need to prove something is fake, the idea assumes that everything is fake and that we must affirmatively prove something is real.
Historically, our solution to determining authenticity has been to rely on and trust institutions. But in an age of media and information overload, that model has broken down. The flaws and biases of these institutions have been exposed, producing a profound crisis of trust. Without a centralized authority to rely on, we now need ways to prove the authenticity (or lack thereof) of content that embrace decentralized architectures.
In other words, we believe deep reals should be "trustless" rather than merely "trusted". Instead of being placed in a single authority, trust is dispersed across a network of digital signatures, cryptographic algorithms, community notes, and immutable blockchains. This avoids single points of failure, enhances transparency, and gives individuals more control over the rules by which they are judged. Fred Wilson, co-founder of USV, once said that AI and web3 are two sides of the same coin, and that web3 solves AI's attribution problem.
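To make "trustless" concrete, here is a minimal sketch of signature-based verification: a creator signs a piece of media with her private key, and anyone can check it against her published public key, with no institution in the loop. It assumes Python's third-party cryptography package; the function names and workflow are illustrative, not any particular product's API.

```python
# Minimal sketch: sign a media file so anyone can verify it without a central authority.
# Assumes the `cryptography` package; names here are illustrative, not a real product's API.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_media(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the content and sign the digest with the creator's private key."""
    digest = hashlib.sha256(media_bytes).digest()
    return private_key.sign(digest)

def verify_media(media_bytes: bytes, signature: bytes, public_key) -> bool:
    """Anyone holding the creator's public key can check authenticity offline."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

# Usage: the creator signs once; viewers verify against the published public key.
creator_key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
sig = sign_media(photo, creator_key)
assert verify_media(photo, sig, creator_key.public_key())
assert not verify_media(photo + b"tampered", sig, creator_key.public_key())
```

The point of the sketch is that verification depends only on math and a published key, not on trusting whichever platform happens to host the file.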
How will this work? Andy put it best when describing our investment in the Mediachain protocol in 2015:
“The protocol allows anyone to attach information to a creative work, making it persistent and discoverable in a blockchain-based database. … The data is maintained by network participants and does not require permission to contribute or access, making it an ideal place for collaboration between creators, developers, platforms, and media organizations. It works for any form of media—images, gifs, videos, written works, and music.”
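As a rough illustration of the idea in that description (not Mediachain's actual data model), the sketch below content-addresses a work by its SHA-256 hash and attaches a metadata record to that identifier, so the context survives copies and re-uploads. The record fields and the in-memory "registry" standing in for a blockchain or shared database are assumptions for the example.

```python
# Illustrative sketch only: attach metadata to a creative work by keying it to the
# work's content hash, roughly in the spirit of the protocol description above.
import hashlib
import json
import time

def content_id(media_bytes: bytes) -> str:
    """Content-address the work so its metadata survives copies and re-uploads."""
    return hashlib.sha256(media_bytes).hexdigest()

def make_record(media_bytes: bytes, creator: str, location: str) -> dict:
    """Build a metadata record describing who made the work, where, and when."""
    return {
        "content_id": content_id(media_bytes),
        "creator": creator,
        "location": location,
        "timestamp": int(time.time()),
    }

# A permissionless registry could be a blockchain or shared database; a dict stands in here.
registry = {}

photo = b"...raw image bytes..."
record = make_record(photo, creator="grace", location="NYC")
registry.setdefault(record["content_id"], []).append(record)

# Any platform that later encounters the same bytes can look up its context.
print(json.dumps(registry[content_id(photo)], indent=2))
```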
USV has long believed that the contextual information of the media we consume should be more open. A creator should have the option to attach who she is, where she took a photo, and when. And her audience should be able to thank her for her work with micropayments. But protocol-based media has always felt more like a vitamin than a painkiller.
That’s not the case anymore. 2024 will be the biggest election year in history. More than half of the world’s population — that’s 4 billion people — will vote this year. A system that gives users more information about the media they consume is desperately needed. Not just in politics, but in areas like dating apps, second-hand fashion marketplaces, and even vacation rentals.
But it would be a mistake to think of deep reals only as a tool to combat disinformation. We think they can be a new media primitive, just as entertaining and addictive as the short videos you see on IG and TikTok. Don't get me wrong, AI-generated media is magic, but we believe people will always crave authentic, human-generated, real-life content. We're excited about the ways new platforms can capture and express that content.
What will it take for this technology to scale? There are some interesting efforts underway, like the Content Authenticity Initiative and C2PA, which let existing media platforms such as TikTok and The New York Times attach cryptographic "credentials" to their content. However, we wonder whether the breakthrough solutions will ultimately be more web3-native and full-stack. That's why we're interested in approaches that integrate content creation, signing, and sharing into one platform, reducing the risk of tampering between steps. Paragraph's blogging platform is a great example.
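Here is a hedged sketch of what a "full-stack" flow might look like: creation, signing, and sharing happen in one pipeline, with a manifest that binds provenance claims to the exact bytes being published so tampering at any later step is detectable. This is not the real C2PA manifest format or Paragraph's implementation; it assumes the cryptography package, and every field name is illustrative.

```python
# Hedged sketch of a full-stack flow: capture, sign, and publish in one pipeline so the
# credential cannot be separated from the content. NOT the real C2PA manifest format;
# field names and structure are assumptions for illustration.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def publish(media_bytes: bytes, claims: dict, key: Ed25519PrivateKey) -> dict:
    """Build a manifest binding provenance claims to the exact bytes, then sign it."""
    manifest = {"sha256": hashlib.sha256(media_bytes).hexdigest(), **claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"media": media_bytes, "manifest": manifest, "signature": key.sign(payload)}

def verify(post: dict, public_key) -> bool:
    """A reader re-derives the hash and checks the signature before trusting the claims."""
    if hashlib.sha256(post["media"]).hexdigest() != post["manifest"]["sha256"]:
        return False
    payload = json.dumps(post["manifest"], sort_keys=True).encode()
    try:
        public_key.verify(post["signature"], payload)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
post = publish(b"...blog post bytes...", {"author": "grace", "app": "example"}, key)
assert verify(post, key.public_key())
```

Because the signature covers both the content hash and the claims, altering either the media or its stated context anywhere between creation and consumption breaks verification.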
Deepfakes are not new; they are just the latest form of illusion. As the technologies through which we consume, share, and believe information change, "deep reals" will emerge to strengthen our collective sense of connection and trust in one another.