During the Google I/O event on Tuesday, Google announced a series of new features, including an experimental Call Scanning feature that is powered by AI.
Google's Call Scanning Feature Uses AI
Call Scanning is an experimental feature that can record and scan phone calls in real time for potential scams. The demo at the I/O event was powered by Gemini Nano, Google's smallest AI model.
We're testing a new feature that uses Gemini Nano to provide real-time alerts during a call if it detects conversation patterns commonly associated with scams. This protection all happens on-device so your conversation stays private to you. More to come later this year! #GoogleIO pic.twitter.com/l87wGCz62x
— Made by Google (@madebygoogle) May 14, 2024
Dave Burke, Google's VP of Engineering, shed light on the company's efforts to use artificial intelligence to identify patterns associated with scams and alert Android users when a potential scam is detected.
Burke characterized this feature as a security measure and provided a practical example. During an onstage demonstration, he received a call from an individual impersonating a bank representative, who suggested moving his savings to a new account to ensure its safety.
Promptly, Burke’s phone displayed a flashing notification: “Likely scam: Banks will never ask you to move your money to keep it safe,” accompanied by an option to terminate the call.
“Gemini Nano alerts me the second it detects suspicious activity,” Burke stated, referring to a Google-developed AI model. However, he refrained from specifying the precise signals or indicators utilized by the software to determine whether a conversation was potentially suspicious.
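Google has not disclosed which signals the feature actually uses, so the following is purely an illustrative sketch: a toy keyword-based check that flags scam phrases in a call transcript, meant only to show the general idea of local, real-time alerting. The phrase list and function names are invented for this example, not Google's implementation.

```python
from typing import Optional

# Hypothetical scam phrases paired with warnings, invented for illustration.
SCAM_PATTERNS = {
    "move your money": "Banks will never ask you to move your money to keep it safe",
    "gift card": "Legitimate businesses don't ask for payment in gift cards",
    "verify your password": "Banks will never ask for your password over the phone",
}

def check_transcript(snippet: str) -> Optional[str]:
    """Return an alert message if the snippet matches a known scam pattern."""
    text = snippet.lower()
    for pattern, warning in SCAM_PATTERNS.items():
        if pattern in text:
            return f"Likely scam: {warning}"
    return None  # nothing suspicious detected

# Mirrors the onstage demo: a caller urges moving savings "to keep it safe".
alert = check_transcript("To keep it safe, you should move your money to this account.")
print(alert)  # Likely scam: Banks will never ask you to move your money to keep it safe
```

A real on-device model would classify conversational patterns far more flexibly than a keyword list, but the key property is the same: everything above runs locally, with no network call.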
What is the Controversy Around It?
Soon after the reveal of the Call Scanning feature, which is currently in testing, privacy advocates expressed concern that it could become an invasion of privacy if Google continued its development. Experts worry about where the line falls between helpful scanning and surveillance when software listens to people's conversations.
The announcement also faced major backlash on the social media platform X, with many users expressing how uncomfortable they felt with the idea of Google listening in on their private phone calls.
TechCrunch reported on the issue, relaying the concerns of several leading figures in the security field. The report also pointed to Apple's failed attempt to detect child sexual abuse material (CSAM) through client-side scanning, a project announced in 2021 that was eventually dropped after huge backlash.
Moreover, an app for detecting spam calls, Truecaller, already exists; it uses crowdsourcing to identify callers and display their names, numbers, and locations before a call is answered.
Truecaller also uses community suggestions to determine contact names and color-codes caller IDs to differentiate between normal, priority, spam, and business calls, none of which requires an AI to listen to call contents.
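The crowdsourced approach described above can be sketched as a simple directory that tallies community spam reports and name suggestions per number. This is a minimal illustration of the concept, not Truecaller's actual system; the thresholds, labels, and numbers are invented.

```python
from collections import defaultdict

class CallerDirectory:
    """Toy crowdsourced caller ID: classifies numbers without hearing any audio."""

    def __init__(self, spam_threshold: int = 5):
        self.names = {}                       # number -> community-suggested name
        self.spam_reports = defaultdict(int)  # number -> spam report count
        self.spam_threshold = spam_threshold  # reports needed to mark as spam

    def suggest_name(self, number: str, name: str) -> None:
        self.names[number] = name

    def report_spam(self, number: str) -> None:
        self.spam_reports[number] += 1

    def classify(self, number: str) -> str:
        """Color-code a caller before the call is answered."""
        if self.spam_reports[number] >= self.spam_threshold:
            return "red (spam)"
        if number in self.names:
            return f"blue (known: {self.names[number]})"
        return "grey (unknown)"

directory = CallerDirectory()
directory.suggest_name("+15550100", "City Bank")
for _ in range(6):
    directory.report_spam("+15550199")

print(directory.classify("+15550100"))  # blue (known: City Bank)
print(directory.classify("+15550199"))  # red (spam)
```

The point of the sketch is the contrast with call scanning: classification here relies entirely on metadata and community reports, never on the content of the conversation.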
Are Experts Making a Mountain Out of a Molehill?
Nadim Kobeissi, an applied cryptographer based in France, posted an in-depth critique of the TechCrunch article on X.
— Nadim Kobeissi (@kaepora) May 16, 2024
“Crucially, Google’s feature is based on an entirely local LLM (large language model) which, according to Google, is simply not designed to send any data about your conversation back to Google, to law enforcement, or any other third party,” he stated in the post.
He also highlighted a “one-sided” perspective presented by the experts quoted in the article.
“The article at large draws parallels with Apple’s 2021 CSAM (child sexual abuse materials) scanning controversy, where the primary issue was the fact that that scanning algorithm was specifically built, from day one, to report back to Apple servers and law enforcement in case objectionable materials were found locally on the user’s device. Privacy advocates (including myself, who lobbied extensively against Apple’s CSAM proposal) were concerned about the lack of transparency and potential for misuse.
Similarly, I am also strongly against current legislative proposals for scanning messaging apps for CSAM. However, there is simply no clear relationship between Google’s proposed local scanning model and that legislative push.”
He also noted that on-device local scanning has been employed for years in apps like Apple Mail, which scans an email's content before classifying it as spam.
Conclusion
There has been a lot of hue and cry about data privacy, especially as AI gains prominence. But before jumping to conclusions about surveillance, users should understand that local LLMs like Gemini Nano are designed to run entirely on-device, processing conversations without sending data to external servers.