If you've ever visited a website, you may have been asked to select pictures with ‘traffic lights’ or ‘buses’. This is a long-standing method of verifying that you are a human being, not a bot. But “Personhood Credentials” may offer a better solution for the AI era.
Highlights:
- Researchers from OpenAI, Harvard, Microsoft, and other institutions have proposed “Personhood Credentials” to verify whether someone on the internet is a real human.
- Personhood Credentials could work better than CAPTCHAs or stringent identity verification at identifying bots and imposters.
- The issuer gives a maximum of 1 credential to each eligible user.
Personhood Credentials to Verify You Are Human or Bot
There are many bad actors online today who use bots and imposter accounts to disguise their identity. And as AI can generate realistic images, voice, or video of a person, this can lead to fraud and more disinformation online. A possible solution to this problem is “Personhood Credentials”.
Personhood Credentials, abbreviated as PHC, are digital credentials that give users the ability to prove that they are human to online services without disclosing any of their private information.
These credentials can be local or global in scope and are not biometric-based. According to the research paper, there are two main reasons why they are needed in today’s society:
- Indistinguishability: Now, Artificial Intelligence can generate human-like content, create human-like avatars and take human-like actions. It is becoming really hard to distinguish between humans and AI because the technology has become so advanced.
- Scalability: AI-powered deception by malicious actors is becoming easier to scale because the costs associated with these technologies are decreasing, and access to advanced AI tools is becoming more widespread. Open-weight deployments make it harder to prevent large-scale misuse, allowing bad actors to use these tools more effectively and at a lower cost.
As AI-generated content gets more realistic day by day, and AI becomes more accessible to everyone, these problems will only get worse. When anyone can create AI bots, it becomes harder to tell who is a real human and who is not.
Tech behind PHCs
PHCs are built on two key limitations of AI: it cannot act in the offline world, and it cannot forge advanced cryptography. The entire system rests on these two facts. The research paper contains a small flowchart that explains how the process works.
First, the user requests a credential and provides the necessary evidence, which contains only minimal personal information. Next, the issuer validates the application and assigns a personhood credential. Then, the user can present the PHC to a service provider to gain verification.
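The three steps above can be sketched as a toy simulation. This is only an illustration of the roles involved, not a real design: a production PHC system would use advanced cryptography such as blind signatures or zero-knowledge proofs, and the function names and HMAC-based "signature" here are assumptions made for the sketch.

```python
import hashlib
import hmac
import secrets

# Toy sketch of the PHC flow: request -> issue -> verify.
# HMAC stands in for real issuer cryptography; in practice the
# service provider would verify without sharing the issuer's key.

ISSUER_KEY = secrets.token_bytes(32)  # known only to the issuer
issued_to = set()                     # issuer's record: one credential per person

def issue_credential(person_id: str):
    """Steps 1-2: validate the applicant and issue at most one credential."""
    if person_id in issued_to:        # enforce the one-credential limit
        return None
    issued_to.add(person_id)
    token = secrets.token_bytes(16)   # random, so it reveals nothing personal
    sig = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
    return token + sig                # credential = token + issuer signature

def verify_credential(credential: bytes) -> bool:
    """Step 3: the service provider checks the issuer's signature."""
    token, sig = credential[:16], credential[16:]
    expected = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

cred = issue_credential("alice")
print(verify_credential(cred))        # True: accepted as a real person
print(issue_credential("alice"))      # None: a second request is refused
```

Note that the credential itself is random bytes: the service provider learns only that some issuer vouched for a person, not who that person is.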
However, a PHC system must meet two requirements:
- The issuer gives a maximum of 1 credential to each eligible user.
- PHCs ensure that the user’s identity is anonymous; the user’s digital activity is untraceable by the issuer and unlinkable across service providers, even if service providers and issuers collude.
How is it better than other alternatives?
CAPTCHAs are not reliable, as many studies have shown that “non-humans” can also solve them.
You might ask: why not use a strict verification process, such as asking users to submit an official document? Or rely on financial institutions, the way services such as Netflix do when people pay them? But that kills the anonymity we discussed.
There is also the option of biometrics like fingerprints, irises, or facial features. It looks cool in the movies but poses a big privacy risk on the internet: how secure are the databases where biometric data is stored, and how can we ensure that such sensitive information is handled with foolproof security?
Benefits of using PHC
The researchers plan to counter the problem of indistinguishability by creating a credential only people can acquire, and the problem of scalability by enabling per-credential rate limits on activities. They encourage governments, technologists, and standards bodies to invest in the development, piloting, and adoption of personhood credentials.
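A per-credential rate limit is the kind of mechanism a sliding-window counter can illustrate. The window size, action cap, and function names below are illustrative assumptions, not values from the paper:

```python
import time
from collections import defaultdict, deque

# Toy sketch of per-credential rate limiting: each credential may
# perform at most MAX_ACTIONS actions per WINDOW_SECONDS.
WINDOW_SECONDS = 60.0
MAX_ACTIONS = 5                        # e.g. at most 5 posts per minute

_history = defaultdict(deque)          # credential id -> recent action times

def allow_action(credential_id: str, now=None) -> bool:
    """Allow an action only if this credential is under its rate limit."""
    now = time.monotonic() if now is None else now
    recent = _history[credential_id]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()               # drop actions outside the window
    if len(recent) >= MAX_ACTIONS:
        return False                   # bot-scale activity is throttled
    recent.append(now)
    return True
```

A real human rarely hits such a limit, but a bad actor who holds only one credential cannot multiply their activity across thousands of bot accounts.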
The idea is to give people and sites an optional tool to show there’s a real person behind an account, without showing anything more. This helps with a range of use-cases, where a bad actor might enlist AI to carry out deception.
— Steven Adler (@sjgadler) August 16, 2024
PHC systems face two main challenges: protecting user privacy and civil liberties while still limiting fraudulent activity.
They suggest having several issuers, each allowing a person to obtain a limited number of credentials. This system would enable individuals to get more than one credential to protect their privacy, while still restricting the total number to prevent widespread fraud.
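The multi-issuer design can be sketched in a few lines. The issuer names and the per-issuer cap below are made-up placeholders; the point is only that each issuer enforces its own limit, so a person's total credential count stays small:

```python
# Toy sketch of the multi-issuer design: each issuer caps how many
# credentials it gives one person, so users can hold a few credentials
# (for privacy) but never an unbounded number (to prevent fraud).
PER_ISSUER_CAP = 1
issuers = {"state-agency": {}, "bank": {}, "nonprofit": {}}  # illustrative names

def request_credential(issuer: str, person_id: str):
    ledger = issuers[issuer]
    if ledger.get(person_id, 0) >= PER_ISSUER_CAP:
        return None                    # this issuer already served this person
    ledger[person_id] = ledger.get(person_id, 0) + 1
    return f"{issuer}-credential"      # placeholder for a real credential

def total_credentials(person_id: str) -> int:
    return sum(ledger.get(person_id, 0) for ledger in issuers.values())
```

With three issuers and a cap of one each, a person can hold at most three credentials: enough to avoid linking all their activity to one credential, but far too few to run a bot farm.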
Still, AI has many more issues like Knowledge Collapse, which is the risk that as humans become overly reliant on AI systems, the breadth of information and perspectives we are exposed to could progressively narrow over time.
Conclusion
PHCs are a promising way to combat the fraud that occurs when AI poses as a human, but there may be many difficulties in implementing the tool at a large scale. We will have to see how institutions such as governments and technical bodies respond to it.