###### tags: `Reading sessions`
[toc]
---
# 2023
<https://www.usenix.org/conference/usenixsecurity23/summer-accepted-papers>
## [**Freaky Leaky SMS: Extracting User Locations by Analyzing SMS Timings**](https://arxiv.org/pdf/2306.07695.pdf)
* Evangelos Bitsikas, Northeastern University; Theodor Schnitzler, Research Center Trustworthy Data Science and Security; Christina Pöpper, New York University Abu Dhabi; Aanjhan Ranganathan, Northeastern University
* This paper presents a timing-based side-channel attack that deduces the geographic location of a mobile-phone user: the attacker sends silent SMS messages (which are not displayed on the phone) and measures the delay until the corresponding delivery reports arrive.
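A minimal sketch of the measurement idea, assuming the attacker already holds labelled delivery-report timings collected while the target was at known locations. All numbers, feature shapes, and class names below are made up; the paper uses a considerably richer timing-feature and ML pipeline.

```python
# Hypothetical illustration, not the authors' code: classify a target's
# location from silent-SMS delivery-report delays.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fabricated training data: each row is one burst of 20 silent-SMS probes,
# each value the delay (seconds) until that probe's delivery report arrived.
means = np.repeat([4.2, 5.1, 7.8], 50)                 # per-burst mean delay
delays = rng.normal(loc=means[:, None], scale=0.3, size=(150, 20))
labels = np.repeat(["home", "work", "abroad"], 50)     # where the target was

X_train, X_test, y_train, y_test = train_test_split(delays, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("location-classification accuracy:", clf.score(X_test, y_test))
```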
## [**UnGANable: Defending Against GAN-based Face Manipulation**](https://www.usenix.org/system/files/sec23summer_136-li_zheng-prepub.pdf)
* By Zheng Li, Ning Yu, Ahmed Salem, Michael Backes, Mario Fritz, Yang Zhang
* [FH] The paper presents a countermeasure to the DeepFake problem. Face images can easily be manipulated by using a GAN to generate DeepFake images. A crucial step in this process is so-called GAN inversion, which recovers a latent code from a victim's facial images. The proposed solution converts the victim's original image into a cloaked one, such that the cloaked image looks no different from the real image to the human eye but makes the GAN-based DeepFake generation process far less effective. This allows a user to protect her images by publishing only cloaked versions. The idea is similar to a "poisoning attack", but applied in a positive context for privacy protection.
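A rough sketch of the cloaking idea, assuming a differentiable image-to-latent `encoder` stands in for the GAN-inversion step; this is a generic PGD-style perturbation under my own assumptions, not the paper's actual optimisation.

```python
# Hedged sketch: perturb the image so that the inversion encoder's latent code
# drifts away from the clean code, while an L-infinity budget keeps the cloak
# imperceptible. `encoder` is a placeholder for any inversion model.
import torch

def cloak(image: torch.Tensor, encoder: torch.nn.Module,
          eps: float = 0.03, alpha: float = 0.005, steps: int = 50) -> torch.Tensor:
    """image: [1, 3, H, W] with values in [0, 1]; returns the cloaked copy."""
    with torch.no_grad():
        clean_code = encoder(image)                  # latent code of the clean image

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = -torch.nn.functional.mse_loss(encoder(image + delta), clean_code)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()       # step that increases the latent distance
            delta.clamp_(-eps, eps)                  # keep the change invisible to humans
            delta.grad.zero_()
    return (image + delta).clamp(0.0, 1.0).detach()
```

Publishing only the cloaked copy instead of the raw photo is then meant to make downstream GAN inversion, and hence DeepFake synthesis, noticeably less faithful.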
## [**Inducing Authentication Failures to Bypass Credit Card PINs**](https://ethz.ch/content/dam/ethz/special-interest/infk/inst-infsec/information-security-group-dam/research/publications/pub2023/mastercard-usenix23.pdf)
* By David Basin, Patrick Schaller, and Jorge Toro-Pozo
* [MM] In the Mastercard contactless transaction, the payment terminal validates the card offline using a PKI, where the root CA’s PK is looked up from a terminal’s internal list. The index of this root PK in the list is determined from card-supplied data that can be arbitrarily modified. We have observed that if this index is modified to an invalid one (e.g. one that is out of bounds) then the terminal does not perform any PKI checks during the transaction. This flawed failure mode in the protocol makes critical data, whose integrity is only protected offline, vulnerable to adversarial modification. Such critical data includes the card’s list of supported methods for cardholder verification.
* As a proof-of-concept exploit, we developed a man-in-the-middle attack that modifies this cardholder verification support to make the payment terminal believe that the card (under attack) does not support PIN verification. We realized two versions of this attack: 1) **downgrade from PIN to (paper) signature**, and 2) **complete removal of the cardholder verification support** (see the sketch after this list).
* We have successfully tested both versions of the attack with Mastercard and Maestro cards, in several real-world payment terminals.
* It would be interesting to study similar behaviour for online transactions, rather than offline ones.
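A hypothetical sketch of the tampering step the relay performs on the card's TLV-encoded records; in EMV, tag 0x8F carries the CA Public Key Index and tag 0x8E the CVM list, but the byte values and message flow below are simplified illustrations, not the authors' tooling.

```python
# Hypothetical illustration of the man-in-the-middle tampering step.
SIGNATURE_ONLY_CVM = b"\x00" * 8 + b"\x1e\x03"   # illustrative CVM list: "signature (paper)" only

def tamper_card_records(tlv: dict[int, bytes], remove_cvm: bool = True) -> dict[int, bytes]:
    modified = dict(tlv)
    # Out-of-bounds CA Public Key Index: the terminal then performs no offline
    # PKI checks, so the unauthenticated data below can be changed at will.
    modified[0x8F] = b"\xff"
    if remove_cvm:
        modified.pop(0x8E, None)                  # variant 2: no cardholder verification at all
    else:
        modified[0x8E] = SIGNATURE_ONLY_CVM       # variant 1: downgrade PIN -> paper signature
    return modified
```

The relay would parse each card response into such a tag/value map, apply the tampering, re-encode it, and forward the result to the terminal.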
###### tags: `EMV` `PINBypass` `Mastercard`
## [**Combating Robocalls with Phone Virtual Assistant Mediated Interaction**](https://www.usenix.org/system/files/sec23summer_308-pandit-prepub.pdf)
* By Sharbani Pandit, Krishanu Sarker, Roberto Perdisci, Mustaque Ahamad, Diyi Yang
* [MD] The article proposes a solution to the problem of robocalls. The proposed solution is an NLP-based virtual assistant for smartphones that automatically screens incoming calls to determine whether the call is from a human or a robocaller. The virtual assistant interrupts the user only when the call is determined to be from a human, preserving the phone-call user experience. The article also reports a security analysis and user studies that support the effectiveness of this solution in blocking current and future robocallers while preserving the user experience.
* [FH] This paper proposes an NLP-based solution to distinguish whether a caller is a human or a robot, very similar in spirit to CAPTCHA. The proposed solution excludes the threat of an AI acting as the attacker; by comparison, CAPTCHA mainly deals with an AI attacker (which is more challenging). The NLP model used in this work is relatively simple, as the challenge questions are drawn from a fixed list. The idea of using NLP to engage the caller in a conversation seems new and interesting (a rough sketch follows below).
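A toy sketch of the screening idea under my own simplifying assumptions: the paper trains a proper NLP model, whereas plain keyword overlap and the hypothetical `transcribe_reply` callback stand in for it here.

```python
# Toy illustration, not the paper's virtual assistant: ask a challenge question
# from a fixed list and only ring the user if the caller's reply is relevant.
import random

CHALLENGES = {
    "Who are you trying to reach?":        {"mr", "mrs", "ms", "dr"},
    "What is your call about?":            {"appointment", "delivery", "invoice", "interview"},
    "Which company are you calling from?": {"bank", "clinic", "school", "office"},
}

def screen_call(transcribe_reply) -> bool:
    """transcribe_reply(prompt) plays the prompt and returns the caller's reply as text."""
    question, expected_keywords = random.choice(list(CHALLENGES.items()))
    reply_tokens = set(transcribe_reply(question).lower().split())
    # A robocaller playing a fixed recording rarely answers the question,
    # so its reply shares no keywords with the expected answers.
    return bool(reply_tokens & expected_keywords)

# Example: a human caller answering the question passes the screen.
print(screen_call(lambda prompt: "hello, it's about a job interview"))   # True
```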
###### tags: `caller ID spoofing` `CAPTCHA`
## [**How fast do you heal? A taxonomy for post-compromise security in secure-channel establishment**](https://www.usenix.org/system/files/sec23summer_243-blazy-prepub.pdf)
* By Olivier Blazy, Ioana Boureanu, Pascal Lafourcade, Cristina Onete, and Léo Robert
* [MD] In this paper, the authors establish a novel formal definition, "Secure-Channel Establishment schemes with Key-Evolution (SCEKE)", that covers two-party protocols allowing for key evolution. They also develop a framework to quantify the post-compromise security of SCEKE protocols based on the number of stages required before the security of the protocol is recovered, considering adversaries of varying strengths and abilities. To illustrate the framework, they compare three SCEKE protocols: Signal, SAID, and 5G AKA.
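One way to read this quantification (my paraphrase, not the paper's exact definition): if the adversary compromises the session state at some stage $s$, the protocol has healed once all later stage keys are again indistinguishable from random, and the metric is the smallest number of stages after which this is guaranteed:

$$
\mathrm{heal}(\Pi) \;=\; \min\bigl\{\, t \in \mathbb{N} \;:\; \text{for every compromise at stage } s,\ \text{the keys } K_{s+t}, K_{s+t+1}, \dots \text{ are secure again} \,\bigr\}
$$

A smaller value means faster post-compromise recovery; refining this per adversary class is what lets the paper compare Signal, SAID and 5G AKA on a common scale.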
###### tags: `Key management` `Key update`
## [**Diving into Robocall Content with SnorCall**](https://www.usenix.org/system/files/sec23fall-prepub-344-prasad.pdf)
* By Sathvik Prasad, Trevor Dunlap, Alexander Ross, Bradley Reaves
* [MD]
**Purpose of the Study:** The study aims to address the issue of illegal robocalls in the United States by designing a system that can analyze large volumes of robocall recordings. The goal is to extract insights and understand the prevalence, tactics, and impact of various types of robocalls.
**Methodology**: The study involved operating a honeypot of 6,000 phone numbers, recording over 1.3 million robocalls spanning a 23-month period, and uncovering 27,000 robocalling campaigns. The researchers utilized Snorkel, a weak-supervision (programmatic labelling) machine learning framework, to label robocall transcripts accurately and swiftly with minimal training data. They also extracted "callback numbers" tied to robocalling infrastructure for further analysis. A toy sketch of this Snorkel-style labelling step appears after this summary.
**Results**: The study revealed several significant findings, including the prevalence of different robocall topics, tactics used by government impersonation robocalls, financial scams targeting taxpayers, and the misrepresentation of political events during the 2020 US Presidential Elections. The researchers also highlighted the deceptive tactics employed by Social Security scammers, the average fraud amount in tech support scams, and the targeting of Mandarin and Spanish-speaking populations.
**Utilizing the Results for Combating Robocalls:** The study's results provide valuable insights and data that can be used to combat robocalls effectively. Regulators, investigators, and carriers can leverage this information to proactively identify and prioritize the takedown of malicious robocalling operations. The findings help in understanding the tactics employed by different types of robocalls and can aid in developing targeted countermeasures to protect phone users from fraudulent and deceptive practices.
For example, SnorCall's analysis revealed that tech support scams target victims by posing as Apple iCloud support agents, attempting to defraud them of $400 on average. Armed with this information, government agencies and consumer protection organizations can use SnorCall's findings to educate the public about the specific tactics used in tech support scams. By raising awareness of the typical script, methods of impersonation, and fraudulent demands made by these scams, individuals are more likely to recognize and avoid falling victim to such robocalls.
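A toy sketch of the Snorkel-style labelling step described above, with made-up labelling functions, label names, and transcripts; SnorCall's real label set and labelling functions are far richer.

```python
# Illustration only, not SnorCall's code: weak supervision over robocall
# transcripts with Snorkel. Keyword labelling functions cast noisy votes and a
# LabelModel combines them into one label per transcript.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, OTHER, SSA_SCAM = -1, 0, 1

@labeling_function()
def lf_mentions_ssn(x):
    return SSA_SCAM if "social security" in x.transcript.lower() else ABSTAIN

@labeling_function()
def lf_threatens_arrest(x):
    return SSA_SCAM if "arrest warrant" in x.transcript.lower() else ABSTAIN

@labeling_function()
def lf_benign_reminder(x):
    return OTHER if "appointment reminder" in x.transcript.lower() else ABSTAIN

df = pd.DataFrame({"transcript": [
    "your social security number has been suspended",
    "an arrest warrant has been issued against your social security number",
    "this is an appointment reminder from your dental office",
]})

L = PandasLFApplier(lfs=[lf_mentions_ssn, lf_threatens_arrest, lf_benign_reminder]).apply(df=df)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L, n_epochs=200, seed=0)
print(label_model.predict(L))   # one campaign label per transcript
```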
###### tags: `Robocalls` `Caller ID Spoofing`
---