CCS 2012

An Historical Examination of Open Source Releases and Their Vulnerabilities.

This paper examines historical releases of Sendmail, Postfix, Apache httpd and OpenSSL by using static source code analysis and the entry rate in the Common Vulnerabilities and Exposures (CVE) dictionary for a release, which we take as a measure of the rate of discovery of exploitable bugs. We show that the change in the number and density of issues reported by the source code analyzer is indicative of the change in the rate of discovery of exploitable bugs for new releases; formally, we demonstrate a statistically significant correlation of moderate strength. The strength of the correlation is likely influenced by other factors, such as the degree of scrutiny: the number of security analysts investigating the software. This also demonstrates that static source code analysis can be used to make some assessment of risk even when constraints do not permit human review of the issues identified by the analysis. We find only a weak correlation between the absolute values measured by the source code analyzer and the rate of discovery of exploitable bugs, so in general it is unsafe to use absolute numbers of issues or issue densities to compare different applications. Our results demonstrate that software quality, as measured by the number of issues, issue density or number of exploitable bugs, does not always improve with each new release. However, the rate of discovery of exploitable bugs generally begins to drop three to five years after the initial release.

Nigel Edwards (Hewlett-Packard Laboratories), Liqun Chen (Hewlett-Packard Laboratories).

Non-tracking Web Analytics.

Today, websites commonly use third-party web analytics services to obtain aggregate information about users that visit their sites. This information includes demographics and visits to other sites as well as user behavior within their own sites. Unfortunately, to obtain this aggregate information, web analytics services track individual user browsing behavior across the web. This violation of user privacy has been strongly criticized, resulting in tools that block such tracking as well as anti-tracking legislation and standards such as Do-Not-Track. These efforts, while improving user privacy, degrade the quality of web analytics. This paper presents the first design of a system that provides web analytics without tracking. The system gives users differential privacy guarantees, can provide better quality analytics than current services, requires no new organizational players, and is practical to deploy. This paper describes and analyzes the design, gives performance benchmarks, and presents our implementation and deployment across several hundred users.
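
For context on the guarantee mentioned above, the standard definition of differential privacy (general background, not a detail taken from this paper) is the following: a randomized mechanism M is epsilon-differentially private if, for every pair of data sets D and D' that differ in one user's data and every set of outputs O,

\[
  \Pr[M(D) \in O] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in O].
\]

Smaller values of epsilon give stronger privacy; the abstract does not state which mechanism or privacy parameter the system adopts.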

Istemi Ekin Akkus (Max Planck Institute for Software Systems (MPI-SWS)), Ruichuan Chen (Max Planck Institute for Software Systems (MPI-SWS)), Michaela Hardt (Twitter Inc.), Paul Francis (Max Planck Institute for Software Systems (MPI-SWS)), Johannes Gehrke (Cornell University).

A Cross-Protocol Attack on the TLS Protocol.

This paper describes a cross-protocol attack on all versions of TLS; it can be seen as an extension of the Wagner and Schneier attack on SSL 3.0. The attack presents valid explicit elliptic curve Diffie-Hellman parameters signed by a server to a client that incorrectly interprets these parameters as valid plain Diffie-Hellman parameters. Our attack enables an adversary to successfully impersonate a server to a random client after obtaining 2^40 signed elliptic curve keys from the original server. While attacking a specific client is improbable due to the high number of signed keys required during the lifetime of one TLS handshake, it is not completely unrealistic for a setting where the server has high computational power and the attacker contents itself with recovering one out of many session keys. We remark that popular open-source server implementations are not susceptible to this attack, since they typically do not support the explicit curve option. Finally we propose a fix that renders the protocol immune to this family of cross-protocol attacks.

Nikos Mavrogiannopoulos (KU Leuven - IBBT), Frederik Vercauteren (KU Leuven - IBBT), Vesselin Velichkov (University of Luxembourg), Bart Preneel (KU Leuven - IBBT).

Mobile Data Charging: New Attacks and Countermeasures.

3G/4G cellular networks adopt usage-based charging. Mobile users are billed based on the traffic volume when accessing data service. In this work, we assess both this metered accounting architecture and application-specific charging policies by operators from the security perspective. We have identified loopholes in both, and discovered two effective attacks exploiting the loopholes. The “toll-free-data-access-attack” enables the attacker to access any data service for free. The “stealth-spam-attack” inflicts an arbitrarily large volume of spam traffic on the victim, who may not even be aware of it. Our experiments on two operational 3G networks have confirmed the feasibility and simplicity of such attacks. We also propose defense remedies.

Chunyi Peng (University of California, Los Angeles), Chi-yu Li (University of California, Los Angeles), Guan-Hua Tu (University of California, Los Angeles), Songwu Lu (University of California, Los Angeles), Lixia Zhang (University of California, Los Angeles).

The Devil is in the (Implementation) Details: An Empirical Analysis of OAuth SSO Systems.

Millions of web users today employ their Facebook accounts to sign into more than one million relying party (RP) websites. This web-based single sign-on (SSO) scheme is enabled by OAuth 2.0, a web resource authorization protocol that has been adopted by major service providers. The OAuth 2.0 protocol has been proven secure by several formal methods, but whether it is indeed secure in practice remains an open question. We examine the implementations of three major OAuth identity providers (IdPs) (Facebook, Microsoft, and Google) and 96 popular RP websites that support the use of Facebook accounts for login. Our results uncover several critical vulnerabilities that allow an attacker to gain unauthorized access to the victim user’s profile and social graph, and impersonate the victim on the RP website. Closer examination reveals that these vulnerabilities are caused by a set of design decisions that trade security for implementation simplicity. To improve the security of OAuth 2.0 SSO systems in real-world settings, we suggest simple and practical improvements to the design and implementation of IdPs and RPs that can be adopted gradually by individual sites.

San-Tsai Sun (University of British Columbia), Konstantin Beznosov (University of British Columbia).

CensorSpoofer: Asymmetric Communication using IP Spoofing for Censorship-Resistant Web Browsing.

A key challenge in censorship-resistant web browsing is being able to direct legitimate users to redirection proxies while preventing censors, posing as insiders, from discovering their addresses and blocking them. We propose a new framework for censorship-resistant web browsing called CensorSpoofer that addresses this challenge by exploiting the asymmetric nature of web browsing traffic and making use of IP spoofing. CensorSpoofer de-couples the upstream and downstream channels, using a low-bandwidth indirect channel for delivering outbound requests (URLs) and a high-bandwidth direct channel for downloading web content. The upstream channel hides the request contents using steganographic encoding within email or instant messages, whereas the downstream channel uses IP address spoofing so that the real address of the proxies is not revealed either to legitimate users or censors. We built a proof-of-concept prototype that uses encrypted VoIP for the downstream channel and demonstrated the feasibility of using the CensorSpoofer framework in a realistic environment.

Qiyan Wang (University of Illinois at Urbana-Champaign), Xun Gong (University of Illinois at Urbana-Champaign), Giang T. K. Nguyen (University of Illinois at Urbana-Champaign), Amir Houmansadr (University of Illinois at Urbana- Champaign), Nikita Borisov (University of Illinois at Urbana-Champaign).

Self-service Cloud Computing.

Modern cloud computing infrastructures use virtual machine monitors (VMMs) that often include a large and complex administrative domain with privileges to inspect client VM state. Attacks against or misuse of the administrative domain can compromise client security and privacy. Moreover, these VMMs provide clients inflexible control over their own VMs, as a result of which clients have to rely on the cloud provider to deploy useful services, such as VM introspection-based security tools. We introduce a new self-service cloud (SSC) computing model that addresses these two shortcomings. SSC splits administrative privileges between a system-wide domain and per-client administrative domains. Each client can manage and perform privileged system tasks on its own VMs, thereby providing flexibility. The system-wide administrative domain cannot inspect the code, data or computation of client VMs, thereby ensuring security and privacy. SSC also allows providers and clients to establish mutually trusted services that can check regulatory compliance while respecting client privacy. We have implemented SSC by modifying the Xen hypervisor. We demonstrate its utility by building user domains to perform privileged tasks such as memory introspection, storage intrusion detection, and anomaly detection.

Shakeel Butt (Rutgers University), H. Andres Lagar-Cavilla (GridCentric Inc.), Abhinav Srivastava (AT&T Labs-Research), Vinod Ganapathy (Rutgers University).

Aligot: Cryptographic Function Identification in Obfuscated Binary Programs.

Analyzing cryptographic implementations has important applications, especially for malware analysis where they are an integral part both of the malware payload and of the unpacking code that decrypts this payload. These implementations are often based on well-known cryptographic functions, whose description is publicly available. While potentially very useful for malware analysis, the identification of such cryptographic primitives is made difficult by the fact that they are usually obfuscated. Current state-of-the-art identification tools are ineffective due to the absence of easily identifiable static features in obfuscated code. However, these implementations still maintain the input-output (I/O) relationship of the original function. In this paper, we present a tool that leverages this fact to identify cryptographic functions in obfuscated programs, by retrieving their I/O parameters in an implementation-independent fashion and comparing them with those of known cryptographic functions. In our experimental evaluation, we successfully identified the cryptographic functions TEA, RC4, AES and MD5 both in synthetic examples protected by a commercial-grade packer (AsProtect) and in several obfuscated malware samples (Sality, Waledac, Storm Worm and SilentBanker). In addition, our tool was able to recognize basic operations done in asymmetric ciphers such as RSA.

Joan Calvet (Universite de Lorraine, LORIA), Jose M Fernandez (Ecole Polytechnique de Montreal), Jean-Yves Marion (Universite de Lorraine, LORIA).

Populated IP Addresses – Classification and Applications.

Populated IP addresses (PIPs) – IP addresses that are associated with a large number of user requests – are important for online service providers to efficiently allocate resources and to detect attacks. While some PIPs serve legitimate users, many others are heavily abused by attackers to conduct malicious activities such as scams, phishing, and malware distribution. Unfortunately, commercial proxy lists like Quova have a low coverage of PIP addresses and offer little support for distinguishing good PIPs from abused ones. In this study, we propose PIPMiner, a fully automated method to extract and classify PIPs through analyzing service logs. Our method combines machine learning and time series analysis to distinguish good PIPs from abused ones with over 99.6% accuracy. When applying the derived PIP list to several applications, we can identify millions of malicious Windows Live accounts right on the day of their sign-ups, and detect millions of malicious Hotmail accounts well before the current detection system captures them.

Chi-Yao Hong (UIUC), Fang Yu (MSR Silicon Valley), Yinglian Xie (MSR Silicon Valley).

On the Parameterized Complexity of the Workflow Satisfiability Problem.

A workflow specification defines a set of steps and the order in which those steps must be executed. Security requirements may impose constraints on which groups of users are permitted to perform subsets of those steps. A workflow specification is said to be satisfiable if there exists an assignment of users to workflow steps that satisfies all the constraints. An algorithm for determining whether such an assignment exists is important, both as a static analysis tool for workflow specifications, and for the construction of run-time reference monitors for workflow management systems. Finding such an assignment is a hard problem in general, but work by Wang and Li in 2010 using the theory of parameterized complexity suggests that efficient algorithms exist under reasonable assumptions about workflow specifications. In this paper, we improve the complexity bounds for the workflow satisfiability problem. We also generalize and extend the types of constraints that may be defined in a workflow specification and prove that the satisfiability problem remains fixed-parameter tractable for such constraints.
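
In commonly used notation (a background sketch of ours; the paper's exact formulation may differ), a workflow specification with step set S, user set U, authorization relation A (a subset of U x S) and constraint set C is satisfiable if

\[
  \exists\, \pi : S \to U \ \text{ such that } \ (\pi(s), s) \in A \ \text{ for every } s \in S \ \text{ and } \ \pi \ \text{ satisfies every constraint in } C,
\]

where a typical constraint is separation of duty, for example requiring \pi(s_1) \neq \pi(s_2) for two designated steps. In the parameterized-complexity setting, the natural parameter is the number of workflow steps, which is usually small compared to the number of users.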

Jason Crampton (Royal Holloway, University of London), Gregory Gutin (Royal Holloway, University of London), Anders Yeo (University of Johannesburg).

A Software-Hardware Architecture for Self-Protecting Data.

We propose a software-hardware architecture, DataSafe, that realizes the concept of self-protecting data: data that is protected by a given policy whenever it is accessed by any application – including unvetted third-party applications. Our architecture provides dynamic instantiations of secure data compartments (SDCs), with hardware monitoring of the information flows from the compartment using hardware policy tags associated with the data at runtime. Unbypassable hardware output control prevents confidential information from being leaked out. Unlike previous hardware information flow tracking systems, the DataSafe software architecture bridges the semantic gap by supporting flexible, high-level software policies for the data, seamlessly translating these policies to efficient hardware tags at runtime. Applications need not be modified to interface to these software-hardware mechanisms. The DataSafe architecture is designed to prevent illegitimate secondary dissemination of protected plaintext data by authorized recipients, to track and protect data derived from sensitive data, and to provide lifetime enforcement of the confidentiality policies associated with the sensitive data.

Yu-Yuan Chen (Princeton University), Pramod A. Jamkhedkar (Princeton University), Ruby B. Lee (Princeton University).

Innocent by Association: Early Recognition of Legitimate Users.

This paper presents the design and implementation of Souche, a system that recognizes legitimate users early in online services. This early recognition contributes to both usability and security. Souche leverages social connections established over time. Legitimate users help identify other legitimate users through an implicit vouching process, strategically controlled within vouching trees. Souche is lightweight and fully transparent to users. In our evaluation on a real dataset of several hundred million users, Souche can efficiently identify 85% of legitimate users early, while reducing the percentage of falsely admitted malicious users from 44% to 2.4%. Our evaluation further indicates that Souche is robust in the presence of compromised accounts. It is generally applicable to enhance usability and security for a wide class of online services.

Yinglian Xie (Microsoft Research Silicon Valley), Fang Yu (Microsoft Research Silicon Valley), Qifa Ke (Microsoft Research Silicon Valley), Martin Abadi (Microsoft Research Silicon Valley), Eliot Gillum (Microsoft Corporation), Krish Vitaldevaria (Microsoft Corporation), Jason Walter (Microsoft Corporation), Junxian Huang (University of Michigan), Zhuoqing Morley Mao (University of Michigan).

Deanonymizing Mobility Traces: Using Social Network as a Side-Channel.

Location-based services, which employ data from smartphones, vehicles, etc., are growing in popularity. To reduce the threat that shared location data poses to a user’s privacy, some services anonymize or obfuscate this data. In this paper, we show these methods can be effectively defeated: a set of location traces can be deanonymized given an easily obtained social network graph. The key idea of our approach is that a user may be identified by those she meets: a “contact graph” identifying meetings between anonymized users in a set of traces can be structurally correlated with a social network graph, thereby identifying anonymized users. We demonstrate the effectiveness of our approach using three real-world datasets: the University of St Andrews mobility trace and social network (27 nodes each), the SmallBlue contact trace and Facebook social network (125 nodes), and the Infocom 2006 Bluetooth contact traces and conference attendees’ DBLP social network (78 nodes). Our experiments show that 80% of users are identified precisely, while only 8% are identified incorrectly, with the remainder mapped to a small set of users.

Mudhakar Srivatsa (IBM T. J. Watson Research Center), Mike Hicks (University of Maryland).

DCast: Sustaining Collaboration in Overlay Multicast despite Rational Collusion.

A key challenge in large-scale collaborative distributed systems is to properly incentivize the rational/selfish users so that they will properly collaborate. Within such a context, this paper focuses on designing incentive mechanisms for overlay multicast systems. A key limitation shared by existing proposals on the problem is that they are no longer able to provide proper incentives and thus will collapse when rational users collude or launch sybil attacks. This work explicitly aims to properly sustain collaboration despite collusion and sybil attacks by rational users. To this end, we propose a new decentralized DCast multicast protocol that uses a novel mechanism with debt-links and circulating debts. We formally prove that the protocol offers a novel concept of safety-net guarantee: A user running the protocol will always obtain a reasonably good utility despite the deviation of any number of rational users that potentially collude or launch sybil attacks. Our prototyping as well as simulation demonstrates the feasibility and safety-net guarantee of our design in practice.

Haifeng Yu (National University of Singapore), Phillip B. Gibbons (Intel Labs), Chenwei Shi (Mozat Pte Ltd).

FlowFox: a Web Browser with Flexible and Precise Information Flow Control.

We present FlowFox, the first fully functional web browser that implements a precise and general information flow control mechanism for web scripts based on the technique of secure multi-execution. We demonstrate how FlowFox subsumes many ad-hoc script containment countermeasures developed over the last few years. We also show that FlowFox is compatible with the current web, by investigating its behavior on the Alexa top-500 web sites, many of which make intricate use of JavaScript. The performance and memory costs of FlowFox are substantial (a performance cost of around 20% on macro benchmarks for a simple two-level policy), but not prohibitive. Our prototype implementation shows that information flow enforcement based on secure multi-execution can be implemented in full-scale browsers. It can support powerful, yet precise policies refining the same-origin policy in a way that is compatible with existing websites.

Willem De Groef (KU Leuven), Dominique Devriese (KU Leuven), Nick Nikiforakis (KU Leuven), Frank Piessens (KU Leuven).

Scriptless Attacks.

Due to their high practical impact, Cross-Site Scripting (XSS) attacks have attracted a lot of attention from the security community. In the same way, a plethora of more or less effective defense techniques have been proposed that address the causes and effects of XSS vulnerabilities, ranging from browser extensions such as NoScript to disabling scripting code in non-browser applications such as e-mail clients or instant messengers. As a result, an adversary often can no longer inject or even execute arbitrary scripting code in several real-life scenarios. In this paper, we examine the attack surface that remains after XSS and similar scripting attacks are supposedly mitigated by preventing an attacker from executing JavaScript code. We address the question of whether an attacker really needs JavaScript or similar functionality to perform attacks aiming at information theft. The surprising result is that an attacker can also abuse Cascading Style Sheets (CSS) in combination with other Web techniques such as plain HTML, inactive SVG images or font files. Through several case studies, we introduce the so-called scriptless attacks and demonstrate that an adversary might not need to execute code to retain the ability to extract sensitive information from well-protected websites. More precisely, we show that an attacker can use seemingly benign features to build side-channel attacks that measure and exfiltrate almost arbitrary data displayed on a given website. We conclude this paper with a discussion of potential mitigation techniques against this class of attacks. In addition, we have implemented a browser patch that enables a website to determine whether it is being loaded in a detached view or pop-up window, which proves useful for preventing certain types of the attacks discussed here.

Mario Heiderich (Ruhr-University Bochum), Marcus Niemietz (Ruhr-University Bochum), Felix Schuster (Ruhr-University Bochum), Thorsten Holz (Ruhr-University Bochum), Jörg Schwenk (Ruhr-University Bochum).

Enhancing Tor’s Performance using Real-time Traffic Classification.

Tor is a low-latency anonymity-preserving network that enables its users to protect their privacy online. It consists of volunteer-operated routers from all around the world that serve hundreds of thousands of users every day. Due to congestion and a low relay-to-client ratio, Tor suffers from performance issues that can potentially discourage its wider adoption, and result in an overall weaker anonymity to all users. We seek to improve the performance of Tor by defining different classes of service for its traffic. We recognize that although the majority of Tor traffic is interactive web browsing, a relatively small amount of bulk downloading consumes an unfair amount of Tor’s scarce bandwidth. Furthermore, these traffic classes have different time and bandwidth constraints; therefore, they should not be given the same Quality of Service (QoS), which Tor offers them today. We propose and evaluate DiffTor, a machine-learning-based approach that classifies Tor’s encrypted circuits by application in real time and subsequently assigns distinct classes of service to each application. Our experiments confirm that we are able to classify circuits we generated on the live Tor network with an extremely high accuracy that exceeds 95%. We show that our real-time classification in combination with QoS can considerably improve the experience of Tor clients, as our simple techniques result in a 75% improvement in responsiveness and an 86% reduction in download times at the median for interactive users.

Mashael AlSabah (University of Waterloo), Kevin Bauer (University of Waterloo), Ian Goldberg (University of Waterloo).

Computational Soundness Without Protocol Restrictions.

The abstraction of cryptographic operations by term algebras, called Dolev-Yao models, is essential in almost all tool-supported methods for verifying security protocols. Recently significant progress was made in establishing computational soundness results: these results prove that Dolev-Yao style models can be sound with respect to actual cryptographic realizations and security definitions. However, these results came at the cost of imposing various constraints on the set of permitted security protocols: e.g., dishonestly generated keys must not be used, key cycles need to be avoided, and many more. In a nutshell, the cryptographic security definitions did not adequately capture these cases, but were considered carved in stone; in contrast, the symbolic abstractions were bent to reflect cryptographic features and idiosyncrasies, thereby requiring adaptations of existing verification tools. In this paper, we pursue the opposite direction: we consider a symbolic abstraction for public-key encryption and identify two cryptographic definitions called PROG-KDM (programmable key-dependent message) security and MKE (malicious-key extractable) security that we jointly prove to be sufficient for obtaining computational soundness without imposing assumptions on the protocols using this abstraction. In particular, dishonestly generated keys obtained from the adversary can be sent, received, and used. The definitions can be met by existing cryptographic schemes in the random oracle model. This yields the first computational soundness result for trace-properties that holds for arbitrary protocols using this abstraction (in particular permitting to send and receive dishonestly generated keys), and that is accessible to all existing tools for reasoning about Dolev-Yao models without further adaptations.

Michael Backes (Saarland University and MPI-SWS), Ankit Malik (IIT Delhi), Dominique Unruh (Tartu University).

Full Proof Cryptography: Verifiable Compilation of Efficient Zero-Knowledge Protocols.

Developers building cryptography into security-sensitive applications face a daunting task. Not only must they understand the security guarantees delivered by the constructions they choose, they must also implement and combine them correctly and efficiently. Cryptographic compilers free developers from this task by turning high-level specifications of security goals into efficient implementations. Yet, trusting such tools is hard as they rely on complex mathematical machinery and claim security properties that are subtle and difficult to verify. In this paper we present ZKCrypt, an optimizing cryptographic compiler achieving an unprecedented level of assurance without sacrificing practicality for a comprehensive class of cryptographic protocols, known as Zero-Knowledge Proofs of Knowledge. The pipeline of ZKCrypt integrates purpose-built verified compilers and verifying compilers producing formal proofs in the CertiCrypt framework. By combining the guarantees delivered by each stage, ZKCrypt provides assurance that the output implementation securely realizes the abstract proof goal given as input. We report on the main characteristics of ZKCrypt, highlight new definitions and concepts at its foundations, and illustrate its applicability through a representative example of an anonymous credential system.

José Bacelar Almeida (Universidade do Minho), Manuel Barbosa (Universidade do Minho), Endre Bangerter (Bern University of Applied Sciences), Gilles Barthe (IMDEA Software Institute), Stephan Krenn (IST Austria), Santiago Zanella Béguelin (Microsoft Research).

Fides: Selectively Hardening Software Application Components against Kernel-level or Process-level Malware.

Protecting commodity operating systems against software exploits is known to be challenging, because of their sheer size. The same goes for key software applications such as web browsers or mail clients. As a consequence, a significant fraction of Internet-connected computers is infected with malware. To mitigate this threat, we propose a combined approach of (1) a run-time security architecture that can efficiently protect fine-grained software modules executing on a standard operating system, and (2) a compiler that compiles standard C source code modules to such protected binary modules. The offered security guarantees are significant: relying on a TCB of only a few thousand lines of code, we show that the power of arbitrary kernel-level or process-level malware is reduced to interacting with the module through the module’s public API. With a proper API design and implementation, modules are fully protected. The run-time architecture can be loaded on demand and only incurs performance overhead when it is loaded. Benchmarks show that, once loaded, it incurs a 3.22% system-wide performance cost. For applications that make intensive use of protected modules, and hence benefit most from the security guarantees provided, the performance cost is up to 14%.

Raoul Strackx (KU Leuven), Frank Piessens (KU Leuven).

Double-Spending Fast Payments in Bitcoin.

Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to verify payments. Nowadays, Bitcoin is increasingly used in a number of fast payment scenarios, where the time between the exchange of currency and goods is short (on the order of a few seconds). While the Bitcoin payment verification scheme is designed to prevent double-spending, our results show that the system requires tens of minutes to verify a transaction and is therefore inappropriate for fast payments. An example of this use of Bitcoin was recently reported in the media: Bitcoins were used as a form of fast payment in a local fast-food restaurant. Until now, the security of fast Bitcoin payments has not been studied. In this paper, we analyze the security of using Bitcoin for fast payments. We show that, unless appropriate detection techniques are integrated in the current Bitcoin implementation, double-spending attacks on fast payments succeed with overwhelming probability and can be mounted at low cost. We further show that the measures recommended by Bitcoin developers for the use of Bitcoin in fast payments are not always effective in detecting double-spending; we show that if those recommendations are integrated in future Bitcoin implementations, double-spending attacks on Bitcoin will still be possible. Finally, we propose and implement a modification to the existing Bitcoin implementation that ensures the detection of double-spending attacks against fast payments.

Ghassan O. Karame (NEC Laboratories Europe), Elli Androulaki (ETH Zurich), Srdjan Capkun (ETH Zurich).

Secure Two-Party Computations in ANSI C.

The practical application of Secure Two-Party Computation is hindered by the difficulty of implementing secure computation protocols. While recent work has proposed very simple programming languages which can be used to specify secure computations, it is still difficult for practitioners to use them, and cumbersome to translate existing source code into this format. Similarly, the manual construction of two-party computation protocols, in particular ones based on the approach of garbled circuits, is labor intensive and error-prone. The central contribution of the current paper is a tool which achieves Secure Two-Party Computation for ANSI C. Our work is based on a combination of model checking techniques and two-party computation based on garbled circuits. Our key insight is a nonstandard use of the bit-precise model checker CBMC which enables us to translate C programs into equivalent Boolean circuits. To this end, we modify the standard CBMC translation from programs into Boolean formulas whose variables correspond to the memory bits manipulated by the program. As CBMC attempts to minimize the size of the formulas, the circuits obtained by our tool chain are also size efficient; to improve the efficiency of the garbled circuit evaluation, we perform optimizations on the circuits. Experimental results with the new tool CBMC-GC demonstrate the practical usefulness of our approach.
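
As a concrete illustration of the tool chain described above, the following hypothetical ANSI C function (a minimal sketch of ours; the function name and calling convention are not CBMC-GC's actual interface) is the kind of code such a tool could unroll into a Boolean circuit and then evaluate as a garbled circuit, so that the two parties learn only the comparison result:

    /* Hypothetical example: each party contributes one private input.
     * A circuit compiler in the spirit of CBMC-GC translates the function
     * body into a Boolean circuit; only the output wire (the comparison
     * result) is revealed to the parties. */
    int millionaires(int wealth_alice, int wealth_bob)
    {
        /* Returns 1 if Alice is richer, 0 otherwise. */
        return wealth_alice > wealth_bob;
    }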

Andreas Holzer (TU Wien), Martin Franz (CrypTool Project), Stefan Katzenbeisser (TU Darmstadt), Helmut Veith (TU Wien).

Neighborhood Watch: Security and Privacy Analysis of Automatic Meter Reading Systems.

Research on smart meters has shown that fine-grained energy usage data poses privacy risks since it allows inferences about activities inside the home. While smart meter deployments are very limited, more than 40 million meters in the United States have been equipped with Automatic Meter Reading (AMR) technology over the past decades. AMR utilizes wireless communication for remotely collecting usage data from electricity, gas, and water meters. Yet to the best of our knowledge, AMR has so far received no attention from the security research community. In this paper, we conduct a security and privacy analysis of this technology. Based on our reverse engineering and experimentation, we find that the technology lacks basic security measures to ensure privacy, integrity, and authenticity of the data. Moreover, the AMR meters we examined continuously broadcast their energy usage data over insecure wireless links every 30 seconds, even though these broadcasts can only be received when a truck from the utility company passes by. We show how this design allows any individual to monitor energy usage from hundreds of homes in a neighborhood with modest technical effort and how this data allows identifying unoccupied residences or people’s routines. To address these issues, we recommend security remedies, including a solution based on defensive jamming that may be easier to deploy than upgrading the meters themselves.

Ishtiaq Rouf (University of South Carolina), Hossen Mustafa (University of South Carolina), Miao Xu (University of South Carolina), Wenyuan Xu (University of South Carolina), Rob Miller (Applied Communication Sciences), Marco Gruteser (Rutgers University).

New Privacy Issues in Mobile Telephony: Fix and Verification.

Mobile telephony equipment is daily carried by billions of subscribers everywhere they go. Avoiding linkability of subscribers by third parties, and protecting the privacy of those subscribers is one of the goals of mobile telecommunication protocols. We use formal methods to model and analyse the security properties of 3G protocols. We expose two novel threats to the user privacy in 3G telephony systems, which make it possible to trace and identify mobile telephony subscribers, and we demonstrate the feasibility of a low cost implementation of these attacks. We propose fixes to these privacy issues, which also take into account and solve other privacy attacks known from the literature. We successfully prove that our privacy-friendly fixes satisfy the desired unlinkability and anonymity properties using the automatic verification tool ProVerif.

Myrto Arapinis (University of Birmingham), Loretta Mancini (University of Birmingham), Eike Ritter (University of Birmingham), Mark Ryan (University of Birmingham), Nico Golde (Technische Universität Berlin), Kevin Redon (Technische Universität Berlin), Ravishankar Borgaonkar (Technische Universität Berlin).

Vanity, Cracks and Malware.

Today, a large number of software products include mechanisms to counter software piracy. However, most protection mechanisms can be easily circumvented by applying software patches (cracks) or license key generators (keygens), which are distributed with seemingly no financial incentive. Our research shows that the distribution of cracks and keygens not only allows miscreants to generate revenue (e.g. through advertising or malware infections), but it also leads to high risks for the end-users of pirated software. We collected more than 43,900 download links and analyzed more than 23,100 (3,551 unique) real-world cracks, showing that these tools are heavily used by criminals to spread malware. Our results indicate that even state-of-the-art virus scanners cannot fully protect users from these threats. Moreover, we conducted a manual analysis, showing how many cracks and keygens actually work and how much effort is necessary to acquire them. In addition, we made our data-set publicly available to the research community.

Markus Kammerstetter (Vienna University of Technology), Christian Platzer (Vienna University of Technology), Gilbert Wondracek (Vienna University of Technology).

Before We Knew It: An Empirical Study of Zero-Day Attacks in the Real World.

Little is known about the duration and prevalence of zero-day attacks, which exploit vulnerabilities that have not been disclosed publicly. Knowledge of new vulnerabilities gives cyber criminals a free pass to attack any target of their choosing, while remaining undetected. Unfortunately, these serious threats are difficult to analyze, because, in general, data is not available until after an attack is discovered. Moreover, zero-day attacks are rare events that are unlikely to be observed in honeypots or in lab experiments. In this paper, we describe a method for automatically identifying zero-day attacks from field-gathered data that records when benign and malicious binaries are downloaded on 11 million real hosts around the world. Searching this data set for malicious files that exploit known vulnerabilities indicates which files appeared on the Internet before the corresponding vulnerabilities were disclosed. We identify 18 vulnerabilities exploited before disclosure, of which 11 were not previously known to have been employed in zero-day attacks. We also find that a typical zero-day attack lasts 312 days on average and that, after vulnerabilities are disclosed publicly, the volume of attacks exploiting them increases by up to 5 orders of magnitude.

Leyla Bilge (Symantec Corporation), Tudor Dumitras (Symantec Corporation).

Revoke and Let Live: A Secure Key Revocation API for Cryptographic Devices.

While extensive research addresses the problem of establishing session keys through cryptographic protocols, relatively little work has appeared addressing the problem of revocation and update of long term keys. We present an API for symmetric key management on embedded devices that supports key establishment and revocation, and prove security properties of our design in the symbolic model of cryptography. Our API supports two modes of revocation: a passive mode where keys have an expiration time, and an active mode where revocation messages are sent to devices. For the first we show that once enough time has elapsed after the compromise of a key, the system returns to a secure state, i.e. the API is robust against attempts by the attacker to use a compromised key to compromise other keys or to keep the compromised key alive past its validity time. For the second we show that once revocation messages have been received the system immediately returns to a secure state. Notable features of our designs are that all secret values on the device are revocable, and the device returns to a functionally equivalent state after revocation is complete.

Véronique Cortier (CNRS, Loria, UMR 7503), Graham Steel (INRIA), Cyrille Wiedling (CNRS, Loria, UMR 7503).

You Are What You Include: Large-scale Evaluation of Remote JavaScript Inclusions.

JavaScript is used by web developers to enhance the interactivity of their sites, offload work to the users’ browsers and improve their sites’ responsiveness and user-friendliness, making web pages feel and behave like traditional desktop applications. An important feature of JavaScript is the ability to combine multiple libraries from local and remote sources into the same page, under the same namespace. While this enables the creation of more advanced web applications, it also allows a malicious JavaScript provider to steal data from other scripts and from the page itself. Today, when developers include remote JavaScript libraries, they trust that the remote providers will not abuse the power bestowed upon them. In this paper, we report on a large-scale crawl of more than three million pages of the top 10,000 Alexa sites, and identify the trust relationships of these sites with their library providers. We show the evolution of JavaScript inclusions over time and develop a set of metrics in order to assess the maintenance quality of each JavaScript provider, showing that in some cases, top Internet sites trust remote providers that could be successfully compromised by determined attackers and subsequently serve malicious JavaScript. In this process, we identify four previously unknown types of vulnerabilities that attackers could use to attack popular web sites. Lastly, we review some proposed ways of protecting a web application from malicious remote scripts and show that some of them may not be as effective as previously thought.

Nick Nikiforakis (KU Leuven), Luca Invernizzi (University of California, Santa Barbara), Alexandros Kapravelos (University of California, Santa Barbara), Steven Van Acker (KU Leuven), Wouter Joosen (KU Leuven), Christopher Kruegel (University of California, Santa Barbara), Frank Piessens (KU Leuven), Giovanni Vigna (University of California, Santa Barbara).

Knowing Your Enemy: Understanding and Detecting Malicious Web Advertising.

With the Internet becoming the dominant channel for marketing and promotion, online advertisements are also increasingly used for illegal purposes such as propagating malware, scamming, click fraud, etc. To understand the gravity of these malicious advertising activities, which we call malvertising, we perform a large-scale study through analyzing ad-related Web traces crawled over a three-month period. Our study reveals the rampancy of malvertising: hundreds of top-ranking Web sites fell victim and leading ad networks such as DoubleClick were infiltrated. To mitigate this threat, we identify prominent features from malicious advertising nodes and their related content delivery paths, and leverage them to build a new detection system called MadTracer. MadTracer automatically generates detection rules and utilizes them to inspect advertisement delivery processes and detect malvertising activities. Our evaluation shows that MadTracer was capable of capturing a large number of malvertising cases, 15 times as many as Google Safe Browsing and Microsoft Forefront did together, at a low false detection rate. It also detected new attacks, including a type of click-fraud attack that has never been reported before.

Zhou Li (Indiana University Bloomington), Kehuan Zhang (Indiana University Bloomington), Yinglian Xie (MSR Silicon Valley), Fang Yu (MSR Silicon Valley), XiaoFeng Wang (Indiana University Bloomington).

How Secure are Power Network Signature Based Time Stamps?.

A time stamp based on the power network signature called the Electrical Network Frequency (ENF) has been used by an emerging class of approaches for authenticating digital audio and video recordings in computer-to-computer communications. However, the presence of adversaries may render the time stamp insecure, and it is crucial to understand the robustness of ENF analysis against anti-forensic operations. This paper investigates possible anti-forensic operations that can remove and alter the ENF signal while trying to preserve the host signal, and develops detection methods targeting these operations. Improvements over anti-forensic operations that can circumvent the detection are also examined, for which various trade-offs are discussed. To develop an understanding of the dynamics between a forensic analyst and an adversary, an evolutionary perspective and a game-theoretical perspective are proposed, which allow for a comprehensive characterization of plausible anti-forensic strategies and countermeasures. Such an understanding has the potential to lead to more secure and reliable time stamp schemes based on ENF analysis.

Wei-Hong Chuang (University of Maryland), Ravi Garg (University of Maryland), Min Wu (University of Maryland).

Machine-Generated Algorithms, Proofs and Software for the Batch Verification of Digital Signature Schemes.

As devices everywhere increasingly communicate with each other, many security applications will require low-bandwidth signatures that can be processed quickly. Pairing-based signatures can be very short, but are often costly to verify. Fortunately, they also tend to have efficient batch verification algorithms. Finding these batching algorithms by hand, however, can be tedious and error prone. We address this by presenting AutoBatch, an automated tool for generating batch verification code in either Python or C++ from a high level representation of a signature scheme. AutoBatch outputs both software and, for transparency, a LaTeX file describing the batching algorithm and arguing that it preserves the unforgeability of the original scheme. We tested AutoBatch on over a dozen pairing-based schemes to demonstrate that a computer could find competitive batching solutions in a reasonable amount of time. Indeed, it proved highly competitive. In particular, it found an algorithm that is significantly faster than a batching algorithm from Eurocrypt 2010. Another novel contribution is that it handles cross-scheme batching, where it searches for a common algebraic structure between two distinct schemes and attempts to batch them together. We describe other features and performance details herein. AutoBatch is a useful tool for cryptographic designers and implementors, and to our knowledge, it is the first attempt to outsource to machines the design, proof writing and implementation of signature batch verification schemes.
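
As background on what a batching algorithm of this kind looks like, consider the standard small-exponents test for BLS signatures (shown purely as an illustration; it is not necessarily one of the algorithms AutoBatch produced). Verifying n BLS signatures \sigma_i = H(m_i)^{x_i} under public keys pk_i = g^{x_i} one at a time costs 2n pairings, whereas a batch verifier draws random \ell-bit exponents \delta_i and checks

\[
  e\Big(\prod_{i=1}^{n} \sigma_i^{\delta_i},\; g\Big) \;=\; \prod_{i=1}^{n} e\big(H(m_i)^{\delta_i},\; pk_i\big),
\]

which costs only n+1 pairings and accepts a batch containing an invalid signature with probability at most 2^{-\ell}.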

Joseph A. Akinyele (Johns Hopkins University), Matthew Green (Johns Hopkins University), Susan Hohenberger (Johns Hopkins University), Matthew W. Pagano (Johns Hopkins University).

PERM: Practical Reputation-Based Blacklisting without TTPs.

Some users may misbehave under the cover of anonymity by, e.g., defacing webpages on Wikipedia or posting vulgar comments on YouTube. To prevent such abuse, a few anonymous credential schemes have been proposed that revoke access for misbehaving users while maintaining their anonymity such that no trusted third party (TTP) is involved in the revocation process. Recently we proposed BLACR, a TTP-free scheme that supports ‘reputation-based blacklisting’ — the service provider can score users’ anonymous sessions (e.g., good vs. inappropriate comments) and users with insufficient reputation are denied access. The major drawback of BLACR is the linear computational overhead in the size of the reputation list, which allows it to support reputation for only a few thousand user sessions in practical settings. We propose PERM, a revocation-window-based scheme (misbehaviors must be caught within a window of time), which makes computation independent of the size of the reputation list. PERM thus supports millions of user sessions and makes reputation-based blacklisting practical for large-scale deployments.

Man Ho Au (University of Wollongong), Apu Kapadia (Indiana University).

SkypeMorph: Protocol Obfuscation for Tor Bridges.

The Tor network is designed to provide users with low-latency anonymous communications. Tor clients build circuits with publicly listed relays to anonymously reach their destinations. However, since the relays are publicly listed, they can be easily blocked by censoring adversaries. Consequently, the Tor project envisioned the possibility of unlisted entry points to the Tor network, commonly known as bridges. We address the issue of preventing censors from detecting the bridges by observing the communications between them and nodes in their network. We propose a model in which the client obfuscates its messages to the bridge in a widely used protocol over the Internet. We investigate using Skype video calls as our target protocol, and our goal is to make it difficult for the censoring adversary to distinguish between the obfuscated bridge connections and actual Skype calls using statistical comparisons. We have implemented our model as a proof-of-concept pluggable transport for Tor, which is available under an open-source licence. Using this implementation, we observed the obfuscated bridge communications, compared them with those of actual Skype calls, and present the results.

Hooman Mohajeri Moghaddam (University of Waterloo), Baiyu Li (University of Waterloo), Mohammad Derakhshani (University of Waterloo), Ian Goldberg (University of Waterloo).

The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software.

SSL (Secure Sockets Layer) is the de facto standard for secure Internet communications. Security of SSL connections against an active network attacker depends on correctly validating public-key certificates presented when the connection is established. We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon’s EC2 Java library and all cloud clients based on it; Amazon’s and PayPal’s merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack. The root causes of these vulnerabilities are badly designed APIs of SSL implementations (such as JSSE, OpenSSL, and GnuTLS) and data-transport libraries (such as cURL) which present developers with a confusing array of settings and options. We analyze perils and pitfalls of SSL certificate validation in software based on these APIs and present our recommendations.
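
As one concrete illustration of the kind of API pitfall described above (our own sketch, not code taken from the paper), libcurl splits certificate validation across two options: CURLOPT_SSL_VERIFYPEER checks the certificate chain, while CURLOPT_SSL_VERIFYHOST checks the host name, and passing a boolean-looking 1L to the latter historically meant "check that the certificate contains some name" rather than "check that the name matches the host", silently permitting man-in-the-middle attacks.

    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

        /* Verify the server certificate against the CA bundle. */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);

        /* Correct: 2L means "the certificate must match the host name".
         * A common mistake was to pass 1L (or TRUE), which in older
         * libcurl versions only checked that the certificate carries
         * *some* name, defeating host name verification. */
        curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);

        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }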

Martin Georgiev (The University of Texas at Austin), Subodh Iyengar (Stanford University), Suman Jana (The University of Texas at Austin), Rishita Anubhai (Stanford University), Dan Boneh (Stanford University), Vitaly Shmatikov (The University of Texas at Austin).

Towards a Bayesian Network Game Framework for Evaluating DDoS Attacks and Defense.

With a long history of compromising Internet security, Distributed Denial-of-Service (DDoS) attacks have been intensively investigated and numerous countermeasures have been proposed to defend against them. In this work, we propose a non-standard game-theoretic framework that facilitates evaluation of DDoS attacks and defense. Our framework can be used to study diverse DDoS attack scenarios where multiple layers of protection are deployed and a number of uncertain factors affect the decision making of the players, and it also allows us to model different sophistication levels of reasoning by both the attacker and the defender. We conduct a variety of experiments to evaluate DDoS attack and defense scenarios where one or more layers of defense mechanisms are deployed, and demonstrate that our framework sheds light on the interplay between the decision making of the attacker and the defender, as well as how their decisions affect the outcomes of DDoS attack and defense games.

Guanhua Yan (Los Alamos National Laboratory), Ritchie Lee (Carnegie Mellon University Silicon Valley), Alex Kent (Los Alamos National Laboratory), David Wolpert (Los Alamos National Laboratory).

Privacy-Aware Personalization for Mobile Advertising.

Mobile advertising is an increasingly important driver in the Internet economy. We point out fundamental trade-offs between important variables in the mobile advertisement ecosystem. In order to increase relevance, ad campaigns tend to become more targeted and personalized by using context information extracted from the user's interactions and the smartphone's sensors. This raises privacy concerns that are hard to overcome due to the limited resources (energy and bandwidth) available on the phones. We point out that in the absence of a trusted third party, it is impossible to maximize these three variables (ad relevance, privacy, and efficiency) in a single system. This leads to the natural question: can we formalize a common framework for personalized ad delivery that can be instantiated to any desired trade-off point? We propose such a flexible ad-delivery framework where personalization is done jointly by the server and the phone. We show that the underlying optimization problem is NP-hard and present an efficient algorithm with a tight approximation guarantee. Since tuning personalization rules requires implicit user feedback (clicks), we ask how we can gather statistics over a dynamic population of mobile users in an efficient and privacy-preserving way; this is needed for end-to-end privacy of an ad system. We propose the first differentially-private distributed protocol that works even in the presence of a dynamic and malicious set of users. We evaluate our methods with a large click log of location-aware searches in Microsoft Bing for mobile. Our experiments show that our framework can simultaneously achieve reasonable levels of privacy, efficiency, and ad relevance, and can efficiently support a high churn rate of users while gathering the statistics required for personalization.

Michaela Hardt (Twitter), Suman Nath (Microsoft Research).

GPS Software Attacks.

Since its creation, the Global Positioning System (GPS) has grown from a limited purpose positioning system to a ubiquitous trusted source for positioning, navigation, and timing data. To date, researchers have essentially taken a signal processing approach to GPS security and shown that GPS is vulnerable to jamming and spoofing. In this work, we systematically map out a larger attack surface by viewing GPS as a computer system. Our surface includes higher level GPS protocol messages than previous work, as well as the GPS OS and downstream dependent systems. We develop a new hardware platform for GPS attacks, and develop novel attacks against GPS infrastructure. Our experiments on consumer and professional-grade receivers show that GPS and GPS-dependent systems are significantly more vulnerable than previously thought. For example, we show that remote attacks via malicious GPS broadcasts are capable of bringing down up to 30% and 20% of the global CORS navigation and NTRIP networks, respectively, using hardware that costs about the same as a laptop. In order to improve security, we propose systems-level defenses and principles that can be deployed to secure critical GPS and dependent systems.

Tyler Nighswander (Carnegie Mellon University), Brent Ledvina (Coherent Navigation), Jonathan Diamond (Coherent Navigation), Robert Brumley (Coherent Navigation), David Brumley (Carnegie Mellon University).

PScout: Analyzing the Android Permission Specification.

Modern smartphone operating systems (OSs) have been developed with a greater emphasis on security and protecting privacy. One of the mechanisms these systems use to protect users is a permission system, which requires developers to declare what sensitive resources their applications will use, has users agree to this request when they install the application, and constrains the application to the requested resources during runtime. As these permission systems become more common, questions have arisen about their design and implementation. In this paper, we perform an analysis of the permission system of the Android smartphone OS in an attempt to begin answering some of these questions. Because the documentation of Android’s permission system is incomplete and because we wanted to be able to analyze several versions of Android, we developed PScout, a tool that extracts the permission specification from the Android OS source code using static analysis. PScout overcomes several challenges, such as scalability due to Android’s 3.4 million line code base, accounting for permission enforcement across processes due to Android’s use of IPC, and abstracting Android’s diverse permission checking mechanisms into a single primitive for analysis. We use PScout to analyze 4 versions of Android spanning version 2.2 up to the recently released Android 4.0. Our main findings are that while Android has over 75 permissions, there is little redundancy in the permission specification. However, if applications could be constrained to only use documented APIs, then about 22% of the non-system permissions are actually unnecessary. Finally, we find that a trade-off exists between enabling least-privilege security with fine-grained permissions and maintaining stability of the permission specification as the Android OS evolves.

Kathy Wain Yee Au (University of Toronto), Yi Fan Zhou (University of Toronto), Zhen Huang (University of Toronto), David Lie (University of Toronto).

TreeDroid: A Tree Automaton Based Approach to Enforcing Data Processing Policies.

Current approaches to security policy monitoring are based on linear control flow constraints such as ‘runQuery’ may be evaluated only after ‘sanitize’. However, realistic security policies must be able to conveniently capture data flow constraints as well. An example is a policy stating that arguments to the function ‘runQuery’ must be either constants, outputs of a function ‘sanitize’, or concatenations of any such values. We present a novel approach to security policy monitoring that uses tree automata to capture constraints on the way data is processed along an execution. We present a λ-calculus based model of the framework, investigate some of the model’s meta-properties, and show how it can be implemented using labels corresponding to automaton states to reflect the computational histories of each data item. We show how a standard denotational semantics induces the expected monitoring regime on a simple “while” language. Finally we implement the framework for the Dalvik VM using TaintDroid as the underlying data flow tracking mechanism, and evaluate its functionality and performance on five case studies.
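
The policy example above can be made concrete with a toy monitor: every value carries a label summarizing its computational history, and the sink checks that label. The Python sketch below (hypothetical names Tagged, const, sanitize, concat, run_query) hard-codes the label logic instead of using the paper's tree-automaton formalism.

    class Tagged:
        # A value paired with a flag recording whether its entire
        # computational history complies with the policy.
        def __init__(self, value, safe):
            self.value, self.safe = value, safe

    def const(s):
        return Tagged(s, True)                 # constants are safe by definition

    def escape(s):
        return s.replace("'", "''")

    def sanitize(t):
        return Tagged(escape(t.value), True)   # sanitizer output is safe

    def concat(a, b):
        # Concatenation is safe only if both operands are safe.
        return Tagged(a.value + b.value, a.safe and b.safe)

    def run_query(t):
        if not t.safe:
            raise PermissionError("policy violation: unsanitized argument")
        print("executing:", t.value)

    user_input = Tagged("O'Brien", False)      # untrusted data
    run_query(concat(const("SELECT * FROM t WHERE name='"),
                     concat(sanitize(user_input), const("'"))))
    # run_query(concat(const("SELECT * FROM t WHERE name='"),
    #                  concat(user_input, const("'"))))
    # would raise PermissionError: the raw input was never sanitized.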

Mads Dam (KTH Royal Institute of Technology), Gurvan Le Guernic (KTH Royal Institute of Technology), Andreas Lundblad (KTH Royal Institute of Technology).

Collaborative TCP Sequence Number Inference Attack.

In this study, we discover a new class of previously unknown side channels — “sequence-number-dependent” host packet counters — that exist in Linux/Android and BSD/Mac OS and enable TCP sequence number inference attacks. These counters allow a piece of unprivileged on-device malware to collaborate with an off-path attacker to infer the TCP sequence numbers used between a client and a server, leading to TCP injection and hijacking attacks. We show that the inference takes, in common cases, under a second to complete and is quick enough for attackers to inject malicious JavaScript into live Facebook sessions and to perform malicious actions on behalf of a victim user. Since supporting unprivileged access to global packet counters is an intentional design choice, we believe our findings provide important lessons and offer insights on future system and network design.

Zhiyun Qian (University of Michigan), Z. Morley Mao (University of Michigan), Yinglian Xie (Microsoft Research Silicon Valley).

Operating System Framed in Case of Mistaken Identity.

When asking users to enter credentials, today’s desktop operating systems often use windows that provide scant evidence that a trusted path has been established; evidence that would allow a user to know that a request is genuine and that the password will not be read by untrusted principals. We measure the efficacy of web-based attacks that spoof these operating system credential-entry windows to steal users’ device-login passwords. We recruited 504 users of Amazon’s Mechanical Turk to evaluate a series of games on third-party websites. The third such website indicated that it needed to install software from the publisher that provided the participants’ operating system: Microsoft’s Silverlight for Windows Vista/7 users and Apple’s QuickTime for Mac OS users. The website then displayed a spoofed replica of a window the participant’s client operating system would use to request a user’s device credentials. In our most effective attacks, over 20% of participants entered passwords that they later admitted were the genuine credentials used to log in to their devices. Even among those who declined to enter their credentials, many participants were oblivious to the spoofing attack. Participants were more likely to confirm that they were worried about the consequences of installing software from a legitimate source than to report that they thought the credential-entry window might have appeared as a result of an attempt to steal their password.

Cristian Bravo-Lillo (Carnegie Mellon University), Lorrie Cranor (Carnegie Mellon University), Julie Downs (Carnegie Mellon University), Saranga Komanduri (Carnegie Mellon University), Stuart Schechter (Microsoft Research), Manya Sleeper (Carnegie Mellon University).

Practical Yet Universally Composable Two-Server Password-Authenticated Secret Sharing.

Password-authenticated secret sharing (PASS) schemes, first introduced by Bagherzandi et al. at CCS 2011, allow users to distribute data among several servers so that the data can be recovered using a single human-memorizable password, but no single server (or even a collusion of servers up to a certain size) can mount an off-line dictionary attack on the password or learn anything about the data. We propose a new, universally composable (UC) security definition for the two-server case (2PASS) in the public-key setting that addresses a number of relevant limitations of the previous, non-UC definition. For example, our definition makes no prior assumptions on the distribution of passwords, preserves security when honest users mistype their passwords, and guarantees secure composition with other protocols in spite of the unavoidable non-negligible success rate of online dictionary attacks. We further present a concrete 2PASS protocol and prove that it meets our definition. Given the strong security guarantees, our protocol is surprisingly efficient: in its most efficient instantiation under the DDH assumption in the random-oracle model, it requires fewer than twenty elliptic-curve exponentiations on the user’s device. We achieve our results by careful protocol design and by exclusively focusing on the two-server public-key setting.

Jan Camenisch (IBM Research), Anna Lysyanskaya (Brown University), Gregory Neven (IBM Research).

Hourglass Schemes: How to Prove that Cloud Files Are Encrypted.

We consider the following challenge: How can a cloud storage provider prove to a tenant that it’s encrypting files at rest, when the provider itself holds the corresponding encryption keys? Such proofs demonstrate sound encryption policies and file confidentiality. (Cheating, cost-cutting, or misconfigured providers may bypass the computation/management burdens of encryption and store plaintext only.) To address this problem, we propose hourglass schemes, protocols that prove correct encryption of files at rest by imposing a resource requirement (e.g., time, storage or computation) on the process of translating files from one encoding domain (i.e., plaintext) to a different, target domain (i.e., ciphertext). Our more practical hourglass schemes exploit common cloud infrastructure characteristics, such as limited file-system parallelism and the use of rotational hard drives for at-rest files. For files of modest size, we describe an hourglass scheme that exploits trapdoor one-way permutations to prove correct file encryption whatever the underlying storage medium. We also experimentally validate the practicality of our proposed schemes, the fastest of which incurs minimal overhead beyond the cost of encryption. As we show, hourglass schemes can be used to verify properties other than correct encryption, e.g., embedding of “provenance tags” in files for tracing the source of leaked files. Of course, even if a provider is correctly storing a file as ciphertext, it could also store a plaintext copy to service tenant requests more efficiently. Hourglass schemes cannot guarantee ciphertext-only storage, a problem inherent when the cloud manages keys. By means of experiments in Amazon EC2, however, we demonstrate that hourglass schemes provide strong incentives for economically rational cloud providers against storage of extra plaintext file copies.

Marten van Dijk (RSA Laboratories), Ari Juels (RSA Laboratories), Alina Oprea (RSA Laboratories), Ronald L Rivest (MIT), Emil Stefanov (University of California Berkeley), Nikos Triandopoulos (RSA Laboratories).

Routing Around Decoys.

Decoy Routing is a new approach to Internet censorship circumvention that was recently and independently proposed at FOCI ’11, USENIX Security ’11 and CCS ’11. Decoy routing aims to hamper nation-state level Internet censorship by having routers, rather than end hosts, relay traffic to blocked destinations. We analyze the security of these schemes against a routing capable adversary, a censoring authority that is willing to make routing decisions in response to decoy routing systems. We explore China, Syria, Iran, and Egypt as routing capable adversaries, and evaluate several attacks that defeat the security goals of existing decoy routing proposals. In particular, we show that a routing capable adversary can enumerate the participating routers implementing these protocols; can successfully avoid sending traffic along routes containing these routers with little or no adverse effects; can identify users of these schemes through active and passive attacks; and in some cases can probabilistically identify connections to targeted destinations.

Max Schuchard (University of Minnesota), John Geddes (University of Minnesota), Christopher Thompson (University of California), Nicholas Hopper (University of Minnesota).

Measuring Vote Privacy, Revisited.

We propose a new measure for privacy of votes. Our measure relies on computational conditional entropy, an extension of the traditional notion of entropy that incorporates both information-theoretic and computational aspects. As a result, we capture in a unified manner privacy breaches due to two orthogonal sources of insecurity: combinatorial aspects that have to do with the number of participants, the distribution of their votes, and the published election outcome, as well as insecurity of the cryptography used in an implementation. Our privacy measure overcomes limitations of two previous approaches to defining vote privacy and we illustrate its applicability through several case studies. We offer a generic way of applying our measure to a large class of cryptographic protocols that includes the protocols implemented in Helios. We also describe a practical application of our metric on Scantegrity audit data from a real election.

David Bernhard (University of Bristol), Véronique Cortier (CNRS Loria), Olivier Pereira (Université Catholique de Louvain), Bogdan Warinschi (University of Bristol).

Why Eve and Mallory Love Android: An Analysis of Android SSL (In)Security.

Many Android apps have a legitimate need to communicate over the Internet and are then responsible for protecting potentially sensitive data during transit. This paper seeks to better understand the potential security threats posed by benign Android apps that use the SSL/TLS protocols to protect data they transmit. Since the lack of visual security indicators for SSL/TLS usage and the inadequate use of SSL/TLS can be exploited to launch Man-in-the-Middle (MITM) attacks, an analysis of 13,500 popular free apps downloaded from Google’s Play Market is presented. We introduce MalloDroid, a tool to detect potential vulnerabilities to MITM attacks. Our analysis revealed that 1,074 (8.0%) of the apps examined contain SSL/TLS code that is potentially vulnerable to MITM attacks. Various forms of SSL/TLS misuse were discovered during a further manual audit of 100 selected apps that allowed us to successfully launch MITM attacks against 41 apps and gather a large variety of sensitive data. Furthermore, an online survey was conducted to evaluate users’ perceptions of certificate warnings and HTTPS visual security indicators in Android’s browser, showing that half of the 754 participating users were not able to correctly judge whether their browser session was protected by SSL/TLS or not. We conclude by considering the implications of these findings and discuss several countermeasures with which these problems could be alleviated.
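
The vulnerable Android patterns are Java-specific (for example, custom TrustManager implementations that accept every certificate); as a language-neutral illustration of the same class of mistake, the Python sketch below contrasts a TLS connection with certificate and hostname checks disabled against the secure default.

    import socket, ssl

    # Insecure: certificate and hostname checks disabled, so any
    # man-in-the-middle certificate is silently accepted.
    insecure = ssl.create_default_context()
    insecure.check_hostname = False
    insecure.verify_mode = ssl.CERT_NONE

    # Secure default: validates the chain against the system trust store
    # and checks that the hostname matches the certificate.
    secure = ssl.create_default_context()

    def fetch(host, context):
        with socket.create_connection((host, 443), timeout=10) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
                return tls.recv(200)

    print(fetch("example.com", secure)[:40])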

Sascha Fahl (Distributed Computing & Security Group, Leibniz University Hannover), Marian Harbach (Distributed Computing & Security Group, Leibniz University Hannover), Thomas Muders (Distributed Computing & Security Group, Leibniz University Hannover), Matthew Smith (Distributed Computing & Security Group, Leibniz University Hannover), Lars Baumgärtner (Department of Math. & Computer Science, Philipps University Marburg), Bernd Freisleben (Department of Math. & Computer Science, Philipps University Marburg).

Intransitive Noninterference in Nondeterministic Systems.

This paper addresses the question of how TA-security, a semantics for intransitive information-flow policies in deterministic systems, can be generalized to nondeterministic systems. Various definitions are proposed, including definitions that state that the system enforces as much of the policy as possible in the context of attacks in which groups of agents collude by sharing information through channels that lie outside the system. Relationships between the various definitions proposed are characterized, and an unwinding-based proof technique is developed. Finally, it is shown that on a specific class of systems, access control systems with local non-determinism, the strongest definition can be verified by checking a simple static property.

Kai Engelhardt (The University of New South Wales), Ron van der Meyden (The University of New South Wales), Chenyi Zhang (The University of Queensland).

SABOT: Specification-based Payload Generation for Programmable Logic Controllers.

Programmable Logic Controllers (PLCs) drive the behavior of industrial control systems according to uploaded programs. It is now known that PLCs are vulnerable to the uploading of malicious code that can have severe physical consequences. What is not understood is whether an adversary with no knowledge of the PLC’s interface to the control system can execute a damaging, targeted, or stealthy attack against a control system using the PLC. In this paper, we present SABOT, a tool that automatically maps the control instructions in a PLC to an adversary-provided specification of the target control system’s behavior. This mapping recovers sufficient semantics of the PLC’s internal layout to instantiate arbitrary malicious controller code. This lowers the prerequisite knowledge needed to tailor an attack to a control system. SABOT uses an incremental model checking algorithm to map a few plant devices at a time, until a mapping is found for all adversary-specified devices. At this point, a malicious payload can be compiled and uploaded to the PLC. Our evaluation shows that SABOT correctly compiles payloads for all tested control systems when the adversary correctly specifies full system behavior, and for 4 out of 5 systems in most cases where there were unspecified features. Furthermore, SABOT completed all analyses in under 2 minutes.

Stephen McLaughlin (The Pennsylvania State University), Patrick McDaniel (The Pennsylvania State University).

Protecting Location Privacy: Optimal Strategy against Localization Attacks.

The mainstream approach to protecting the location-privacy of mobile users in location-based services (LBSs) is to alter the users’ actual locations in order to reduce the location information exposed to the service provider. The location obfuscation algorithm behind an effective location-privacy preserving mechanism (LPPM) must consider three fundamental elements: the privacy requirements of the users, the adversary’s knowledge and capabilities, and the maximal tolerated service quality degradation stemming from the obfuscation of true locations. We propose the first methodology, to the best of our knowledge, that enables a designer to find the optimal LPPM for a LBS given each user’s service quality constraints against an adversary implementing the optimal inference algorithm. Such an LPPM is the one that maximizes the expected distortion (error) that the optimal adversary incurs in reconstructing the actual location of a user, while fulfilling the user’s service-quality requirement. We formalize the mutual optimization of user-adversary objectives (location privacy vs. correctness of localization) by using the framework of Stackelberg Bayesian games. In this setting, we develop two linear programs that output the best LPPM strategy and its corresponding optimal inference attack. Our optimal user-centric LPPM can be easily integrated into the mobile devices that users employ to access LBSs. We validate the efficacy of our game theoretic method against real location traces. Our evaluation confirms that the optimal LPPM strategy is superior to a straightforward obfuscation method, and that the optimal localization attack performs better compared to a Bayesian inference attack.

Reza Shokri (EPFL), George Theodorakopoulos (Cardiff University), Carmela Troncoso (K.U.Leuven), Jean-Pierre Hubaux (EPFL), Jean-Yves Le Boudec (EPFL).

Verifiable Data Streaming.

In a verifiable data streaming protocol, the client streams a long string to the server who stores it in its database. The stream is verifiable in the sense that the server can neither change the order of the elements nor manipulate them. The client may also retrieve data from the database and update them. The content of the database is publicly verifiable such that any party in possession of some value s and a proof π can check that s is indeed in the database. We introduce the notion of verifiable data streaming and present an efficient instantiation that supports an exponential number of values based on general assumptions. Our main technique is an authentication tree in which the leaves are not fixed in advance such that the user, knowing some trapdoor, can authenticate a new element on demand without pre- or re-computing all other leaves. We call this data structure chameleon authentication tree (CAT). We instantiate our scheme with primitives that are secure under the discrete logarithm assumption. The algebraic properties of this assumption allow us to obtain a very efficient verification algorithm. As a second application of CATs, we present a new transformation from any one-time signature scheme to a many-time signature scheme that is more efficient than previously known solutions.
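
The chameleon authentication tree itself relies on chameleon hashes and a trapdoor, which are not reproduced here; the Python sketch below only shows the ordinary Merkle-style path verification that such authentication trees build on (helper names are illustrative, and a power-of-two number of leaves is assumed), to make concrete the idea of a short proof that a value is in the database.

    import hashlib

    def h(*parts):
        return hashlib.sha256(b"|".join(parts)).hexdigest().encode()

    def build_tree(leaves):
        # Levels from leaf hashes up to the root; assumes len(leaves) is a power of two.
        levels = [[h(x) for x in leaves]]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([h(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
        return levels

    def prove(levels, index):
        # Sibling hashes along the path from leaf `index` to the root.
        path = []
        for level in levels[:-1]:
            sibling = index ^ 1
            path.append((sibling < index, level[sibling]))
            index //= 2
        return path

    def verify(root, leaf, index, path):
        node = h(leaf)
        for sibling_is_left, sibling in path:
            node = h(sibling, node) if sibling_is_left else h(node, sibling)
        return node == root

    leaves = [b"v0", b"v1", b"v2", b"v3"]
    levels = build_tree(leaves)
    root = levels[-1][0]
    print(verify(root, b"v2", 2, prove(levels, 2)))   # True
    print(verify(root, b"bad", 2, prove(levels, 2)))  # False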

Dominique Schroeder (University of Maryland), Heike Schroeder (CASED).

Leveraging “Choice” to Automate Authorization Hook Placement.

When servers manage resources on behalf of multiple, mutually-distrusting clients, they must mediate access to those resources to ensure that each client request complies with an authorization policy. This goal is typically achieved by placing authorization hooks at appropriate locations in server code. The goal of authorization hook placement is to completely mediate all security-sensitive operations on shared resources. To date, authorization hook placement in code bases, such as the X server and postgresql, has largely been a manual procedure, driven by informal analysis of server code and discussions on developer forums. Often, there is a lack of consensus about basic concepts, such as what constitutes a security-sensitive operation. In this paper, we propose an automated hook placement approach that is motivated by a novel observation — that the deliberate choices made by clients for objects from server collections and for processing those objects must all be authorized. We have built a tool that uses this observation to statically analyze the server source. Using real-world examples (the X server and postgresql), we show that the hooks placed by our method are just as effective as hooks that were manually placed over the course of years, while greatly reducing the burden on programmers.

Divya Muthukumaran (The Pennsylvania State University), Trent Jaeger (The Pennsylvania State University), Vinod Ganapathy (Rutgers University).

Publicly Verifiable Delegation of Large Polynomials and Matrix Computations, with Applications.

Outsourced computations (where a client requests a server to perform some computation on its behalf) are becoming increasingly important due to the rise of Cloud Computing and the proliferation of mobile devices. Since cloud providers may not be trusted, a crucial problem is the verification of the integrity and correctness of such computation, possibly in a public way, i.e., the result of a computation can be verified by any third party, and requires no secret key – akin to a digital signature on a message. We present new protocols for publicly verifiable secure outsourcing of Evaluation of High Degree Polynomials and Matrix Multiplication. Compared to previously proposed solutions, ours improve in efficiency and offer security in a stronger model. The paper also discusses several practical applications of our protocols.

Dario Fiore (New York University), Rosario Gennaro (City College of New York).

Precise Enforcement of Progress-Sensitive Security.

Program progress (or termination) is a covert channel that may leak sensitive information. To control information leakage on this channel, semantic definitions of security should be progress sensitive and enforcement mechanisms should restrict the channel’s capacity. However, most state-of-the-art language-based information-flow mechanisms are progress insensitive—allowing arbitrary information leakage through this channel—and current progress-sensitive enforcement techniques are overly restrictive. We propose a type system and instrumented semantics that together enforce progress-sensitive security more precisely than existing approaches. Our system is permissive in that it is able to accept programs in which the termination behavior depends only on low-security (e.g., public or trusted) information. Our system is parameterized on a termination oracle, and controls the progress channel precisely, modulo the ability of the oracle to determine the termination behavior of a program based on low-security information. We have instantiated the oracle for a simple imperative language with a logical abstract interpretation that uses an SMT solver to synthesize linear rank functions. In addition, we extend the system to permit controlled leakage through the progress channel, with the leakage bound by an explicit budget. We empirically analyze progress channels in existing Jif code. Our evaluation suggests that security-critical programs appear to satisfy progress-sensitive security.

Scott Moore (Harvard University), Aslan Askarov (Harvard University), Stephen Chong (Harvard University).

Single Round Access Privacy on Outsourced Storage.

We present SR-ORAM, the first single-round-trip polylogarithmic time Oblivious RAM that requires only logarithmic client storage. Taking only a single round trip to perform a query, SR-ORAM has an online communication / computation cost of O(log n log log n), and an offline, overall amortized per-query communication cost of O(log^2 n log log n), requiring under 2 round trips. The client folds an entire interactive sequence of Oblivious RAM requests into a single query object that the server can unlock incrementally, to satisfy a query without learning its result. This results in an Oblivious RAM secure against an actively malicious adversary, with unprecedented speeds in accessing large data sets over high-latency links. We show this to be the most efficient storage-free-client Oblivious RAM to date for today’s Internet-scale network latencies.

Peter Williams (Stony Brook Network Security and Applied Cryptography Lab), Radu Sion (Stony Brook Network Security and Applied Cryptography Lab).

Binary Stirring: Self-randomizing Instruction Addresses of Legacy x86 Binary Code.

Unlike library code, whose instruction addresses can be randomized by address space layout randomization (ASLR), application binary code often has static instruction addresses. Attackers can exploit this limitation to craft robust shellcode for such applications, as demonstrated by a recent attack that reuses instruction gadgets from the static binary code of victim applications. This paper introduces binary stirring (STIR), a new technique that imbues x86 native code with the ability to self-randomize its instruction addresses each time it is launched. The input to STIR is only the application binary code without any source code, debug symbols, or relocation information. The output is a new binary whose basic block addresses are dynamically determined at load-time. Therefore, even if an attacker can find code gadgets in one instance of the binary, the instruction addresses in other instances are unpredictable. An array of binary transformation techniques enable STIR to transparently protect large, realistic applications that cannot be perfectly disassembled due to computed jumps, code-data interleaving, OS callbacks, dynamic linking and a variety of other difficult binary features. Evaluation of STIR for both Windows and Linux platforms shows that stirring introduces about 1.6% overhead on average to application runtimes.

Richard Wartell (The University of Texas at Dallas), Vishwath Mohan (The University of Texas at Dallas), Kevin W. Hamlen (The University of Texas at Dallas), Zhiqiang Lin (The University of Texas at Dallas).

PrivateFS: A Parallel Oblivious File System.

PrivateFS is an oblivious file system that enables access to remote storage, while keeping both the file contents and client access patterns secret. PrivateFS is based on a new parallel Oblivious RAM mechanism (PD-ORAM)—instead of waiting for the completion of all ongoing client-server transactions, client threads can now engage a server in parallel without loss of privacy. This critical piece is missing from existing Oblivious RAMs (ORAM), which cannot allow multiple client threads to operate simultaneously without revealing intra- and inter-query correlations and thus incurring privacy leaks. And since ORAMs often require many communication rounds, this significantly and unnecessarily constrains throughput. The mechanisms introduced here eliminate this constraint, allowing overall throughput to be bound by server bandwidth only, and thus to increase by an order of magnitude. Further, new de-amortization techniques bring the worst case query cost in line with the average cost. Both of these results are shown to be fundamental to any ORAM. Extensions providing fork consistency against an actively malicious adversary are then presented. A high performance, fully functional PD-ORAM implementation was designed, built and analyzed. It performs multiple queries per second on a 1TB+ database across 50ms latency links, with unamortized, bound query latencies. Based on PD-ORAM, PrivateFS was built and deployed on Linux as a userspace file system.

Peter Williams (Stony Brook University), Radu Sion (Stony Brook University).

Cross-VM Side Channels and Their Use to Extract Private Keys.

This paper details the construction of an access-driven side-channel attack by which a malicious virtual machine (VM) extracts fine-grained information from a victim VM running on the same physical computer. This attack is the first such attack demonstrated on a symmetric multiprocessing system virtualized using a modern VMM (Xen). Such systems are very common today, ranging from desktops that use virtualization to sandbox application or OS compromises, to clouds that co- locate the workloads of mutually distrustful customers. Constructing such a side-channel requires overcoming challenges including core migration, numerous sources of channel noise, and the difficulty of preempting the victim with sufficient frequency to extract fine-grained information from it. This paper addresses these challenges and demonstrates the attack in a lab setting by extracting an ElGamal decryption key from a victim using the most recent version of the libgcrypt cryptographic library.

Yinqian Zhang (University of North Carolina), Ari Juels (RSA Laboratories), Michael K. Reiter (University of North Carolina), Thomas Ristenpart (University of Wisconsin).

Priceless: The Role of Payments in Abuse-advertised Goods.

Large-scale abusive advertising is a profit-driven endeavor. Without consumers purchasing spam-advertised Viagra, search-advertised counterfeit software or malware-advertised fake anti-virus, these campaigns could not be economically justified. Thus, in addition to the numerous efforts focused on identifying and blocking individual abusive advertising mechanisms, a parallel research direction has emerged focused on undermining the associated means of monetization: payment networks. In this paper we explain the complex role of payment processing in monetizing the modern affiliate program ecosystem and characterize the dynamics of these banking relationships over two years within the counterfeit pharmaceutical and software sectors. By opportunistically combining our own active purchasing data with contemporary disruption efforts by brand-holders and payment card networks, we gather the first empirical dataset concerning this approach. We discuss how well such payment interventions work, how abusive merchants respond in kind and the role that the payments ecosystem is likely to play in the future.

Damon McCoy (George Mason University), Hitesh Dharmdasani (George Mason University), Christian Kreibich (International Computer Science Institute), Geoffrey M Voelker (University of California, San Diego), Stefan Savage (University of California, San Diego).

Provable Security of S-BGP and other Path Vector Protocols: Model, Analysis and Extensions.

This paper provides a provable-security treatment of path vector routing protocols. We first design a security definition for path vector routing protocols by studying, generalizing, and formalizing numerous known threats. Our model incorporates three major security goals. It is quite strong, yet simple to use. We prove by reduction that S-BGP satisfies two out of the security model’s three goals, assuming the underlying signature scheme is secure. Under the same assumption, we next show how the protocol can be modified to meet all three security goals simultaneously. Finally, we study security of partial PKI deployment of path vector protocols when not all nodes have public keys. We investigate the possibilities of relaxing the PKI requirement and relying on the non-cryptographic physical security of the protocol in order to achieve possibly weaker, but still well-defined, notions of security. We also present the necessary and sufficient conditions to achieve full security in the partial PKI deployment scenario. We believe our conclusions will prove useful for protocol developers, standards bodies and government agencies.

Alexandra Boldyreva (Georgia Institute of Technology), Robert Lychev (Georgia Institute of Technology).

Using Probabilistic Generative Models for Ranking Risks of Android Apps.

One of Android’s main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective as it presents the risk information of each app in a “stand-alone” fashion and in a way that requires too much technical knowledge and time to distill useful information. We introduce the notion of risk scoring and risk ranking for Android apps, to improve risk communication for Android apps, and identify three desiderata for an effective risk scoring scheme. We propose to use probabilistic generative models for risk scoring schemes, and identify several such models, ranging from the simple Naive Bayes, to advanced hierarchical mixture models. Experimental results conducted using real-world datasets show that probabilistic generative models significantly outperform existing approaches, and that Naive Bayes models give a promising risk scoring approach.
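
As a toy illustration of the Naive Bayes end of the spectrum mentioned above (not the paper's trained models), the sketch below scores an app by the log-likelihood of its requested permissions under made-up per-permission frequencies; rarer permission combinations receive lower scores, i.e., higher risk.

    import math

    # Hypothetical per-permission request frequencies, as if estimated from a
    # corpus of benign apps (numbers are made up for illustration).
    p_benign = {
        "INTERNET": 0.83,
        "ACCESS_FINE_LOCATION": 0.27,
        "READ_CONTACTS": 0.08,
        "SEND_SMS": 0.02,
    }

    def log_likelihood(requested):
        # Naive Bayes: treat each permission as an independent Bernoulli variable;
        # a lower log-likelihood means a more unusual (riskier) permission profile.
        score = 0.0
        for perm, p in p_benign.items():
            score += math.log(p if perm in requested else 1.0 - p)
        return score

    print(log_likelihood({"INTERNET"}))                               # common profile
    print(log_likelihood({"INTERNET", "SEND_SMS", "READ_CONTACTS"}))  # unusual profile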

Hao Peng (Purdue University), Chris Gates (Purdue University), Bhaskar Sarma (Purdue University), Ninghui Li (Purdue University), Yuan Qi (Purdue University), Rahul Potharaju (Purdue University), Cristina Nita-Rotaru (Purdue University), Ian Molloy (IBM Research).

Manufacturing Compromise: The Emergence of Exploit-as-a-Service.

We investigate the emergence of the exploit-as-a-service model for drive-by browser compromise. In this regime, attackers pay for an exploit kit or service to do the “dirty work” of exploiting a victim’s browser, decoupling the complexities of browser and plugin vulnerabilities from the challenges of generating traffic to a website under the attacker’s control. Upon a successful exploit, these kits load and execute a binary provided by the attacker, effectively transferring control of a victim’s machine to the attacker. In order to understand the impact of the exploit-as-a-service paradigm on the malware ecosystem, we perform a detailed analysis of the prevalence of exploit kits, the families of malware installed upon a successful exploit, and the volume of traffic that malicious web sites receive. To carry out this study, we analyze 77,000 malicious URLs received from Google Safe Browsing, along with a crowd-sourced feed of blacklisted URLs known to direct to exploit kits. These URLs led to over 10,000 distinct binaries, which we ran in a contained environment. Our results show that many of the most prominent families of malware now propagate through drive-by downloads (32 families in all). Their activities are supported by a handful of exploit kits, with Blackhole accounting for 29% of all malicious URLs in our data, followed in popularity by Incognito. We use DNS traffic from real networks to provide a unique perspective on the popularity of malware families based on the frequency that their binaries are installed by drive-by downloads, as well as the lifetime and popularity of domains funneling users to exploits.

Chris Grier (UC Berkeley), Lucas Ballard (Google, Inc.), Juan Caballero (IMDEA Software Institute), Neha Chachra (UC San Diego), Christian J. Dietrich (University of Applied Sciences Gelsenkirchen), Kirill Levchenko (UC San Diego), Panayiotis Mavrommatis (Google, Inc.), Damon McCoy (George Mason University), Antonio Nappa (IMDEA Software Institute), Andreas Pitsillidis (UC San Diego), Niels Provos (Google, Inc.), M. Zubair Rafique (IMDEA Software Institute), Moheeb Abu Rajab (Google, Inc.), Christian Rossow (University of Applied Sciences Gelsenkirchen), Kurt Thomas (UC Berkeley), Vern Paxson (UC Berkeley), Stefan Savage (UC San Diego), Geoffrey M. Voelker (UC San Diego).

Differentially Private Sequential Data Publication via Variable-Length N-Grams.

Sequential data is being increasingly used in a variety of applications. Publishing sequential data is of vital importance to the advancement of these applications. However, as shown by the re-identification attacks on the AOL and Netflix datasets, releasing sequential data may pose considerable threats to individual privacy. Recent research has indicated the failure of existing sanitization techniques to provide claimed privacy guarantees. It is therefore urgent to respond to this failure by developing new schemes with provable privacy guarantees. Differential privacy is one of the only models that can be used to provide such guarantees. Due to the inherent sequentiality and high-dimensionality, it is challenging to apply differential privacy to sequential data. In this paper, we address this challenge by employing a variable-length n-gram model, which extracts the essential information of a sequential database in terms of a set of variable-length n-grams. Our approach makes use of a carefully designed exploration tree structure and a set of novel techniques based on the Markov assumption in order to lower the magnitude of added noise. The published n-grams are useful for many purposes. Furthermore, we develop a solution for generating a synthetic database, which enables a wider spectrum of data analysis tasks. Extensive experiments on real-life datasets demonstrate that our approach substantially outperforms the state-of-the-art techniques.
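
The paper's exploration tree and Markov-based noise calibration are considerably more involved; the sketch below shows only the basic building block, perturbing truncated n-gram counts with Laplace noise (the truncation bound, the epsilon value, and the restriction to observed n-grams are all simplifications of a full mechanism).

    import random
    from collections import Counter

    def laplace(scale):
        # The difference of two i.i.d. exponentials is Laplace-distributed.
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def noisy_ngram_counts(sequences, n, epsilon, max_grams=10):
        # Truncate each sequence's contribution to at most `max_grams` n-grams so
        # the L1 sensitivity of the count vector is bounded by max_grams, then add
        # Laplace(max_grams / epsilon) noise to each count.
        counts = Counter()
        for seq in sequences:
            grams = [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]
            counts.update(grams[:max_grams])
        scale = max_grams / epsilon
        return {g: c + laplace(scale) for g, c in counts.items()}

    trips = [["home", "work", "gym"], ["home", "work", "home"], ["gym", "home", "work"]]
    print(noisy_ngram_counts(trips, n=2, epsilon=1.0))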

Rui Chen (Concordia University), Gergely Acs (INRIA), Claude Castelluccia (INRIA).

Minimizing Private Data Disclosures in the Smart Grid.

Smart electric meters pose a substantial threat to the privacy of individuals in their own homes. Combined with non-intrusive load monitors, smart meter data can reveal precise home appliance usage information. An emerging solution to behavior leakage in smart meter measurement data is the use of battery-based load hiding. In this approach, a battery is used to store and supply power to home devices at strategic times to hide appliance loads from smart meters. A few such battery control algorithms have already been studied in the literature, but none have been evaluated from an adversarial point of view. In this paper, we first consider two well known battery privacy algorithms, Best Effort (BE) and Non-Intrusive Load Leveling (NILL), and demonstrate attacks that recover precise load change information, which can be used to recover appliance behavior information, under both algorithms. We then introduce a stepping approach to battery privacy algorithms that fundamentally differs from previous approaches by maximizing the error between the load demanded by a home and the external load seen by a smart meter. By design, precise load change recovery attacks are impossible. We also propose mutual-information based measurements to evaluate the privacy of different algorithms. We implement and evaluate four novel algorithms using the stepping approach, and show that under the mutual-information metrics they outperform BE and NILL.
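
As a toy illustration of the stepping idea (not the paper's specific algorithms, and ignoring battery capacity and charge-rate limits), the sketch below quantizes the demanded load so the meter only ever sees coarse, step-sized changes.

    import math

    def stepped_external_load(demand, step=200):
        # Quantize each demanded load sample (in watts) up to a multiple of `step`;
        # the battery supplies or absorbs the difference, so fine-grained appliance
        # transitions are hidden from the meter. A real algorithm must also track
        # the battery's state of charge, which this toy version ignores.
        return [step * math.ceil(w / step) for w in demand]

    demand = [310, 330, 900, 910, 905, 320]   # e.g., fridge plus kettle switching on/off
    print(stepped_external_load(demand))      # [400, 400, 1000, 1000, 1000, 400]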

Weining Yang (Purdue University), Ninghui Li (Purdue University), Yuan Qi (Purdue University), Wahbeh Qardaji (Purdue University), Stephen McLaughlin (Penn State University), Patrick McDaniel (Penn State University).

Resource-Freeing Attacks: Improve Your Cloud Performance (at Your Neighbor’s Expense).

Cloud computing promises great efficiencies by multiplexing resources among disparate customers. For example, Amazon’s Elastic Compute Cloud (EC2), Microsoft Azure, Google’s Compute Engine, and Rackspace Hosting all offer Infrastructure as a Service (IaaS) solutions that pack multiple customer virtual machines (VMs) onto the same physical server. The gained efficiencies have some cost: past work has shown that the performance of one customer’s VM can suffer due to interference from another. In experiments on a local testbed, we found that the performance of a cache-sensitive benchmark can degrade by more than 80% because of interference from another VM. This interference incentivizes a new class of attacks that we call resource-freeing attacks (RFAs). The goal is to modify the workload of a victim VM in a way that frees up resources for the attacker’s VM. We explore in depth a particular example of an RFA. Counterintuitively, by adding load to a co-resident victim, the attack speeds up a class of cache-bound workloads. In a controlled lab setting we show that this can improve performance of synthetic benchmarks by up to 60% over not running the attack. In the noisier setting of Amazon’s EC2, we still show improvements of up to 13%.

Venkatanathan Varadarajan (University of Wisconsin-Madison), Thawan Kooburat (University of Wisconsin-Madison), Benjamin Farley (University of Wisconsin-Madison), Thomas Ristenpart (University of Wisconsin-Madison), Michael M Swift (University of Wisconsin-Madison).

PeerPress: Utilizing Enemies’ P2P Strength against Them.

We propose a new, active scheme for fast and reliable detection of P2P malware by exploiting the enemies’ strength against them. Our new scheme works in two phases: host-level dynamic binary analysis to automatically extract built-in remotely-accessible/controllable mechanisms (referred to as Malware Control Birthmarks or MCB) in P2P malware, followed by network-level informed probing for detection. Our new design demonstrates a novel combination of the strengths from both host-based and network-based approaches. Compared with existing detection solutions, it is fast, reliable, and scalable in its detection scope. Furthermore, it is applicable to more than just P2P malware: more broadly, to any malware that opens a service port for network communications (e.g., many Trojans/backdoors). We develop a prototype system, PeerPress, and evaluate it on many representative real-world P2P malware families (including Storm, Conficker, and the more recent Sality). The results show that it can effectively detect the existence of malware when MCBs are extracted, and the detection occurs in an early stage during which other tools (e.g., BotHunter) typically do not have sufficient information to detect. We further discuss its limitations and implications, and we believe it is a great complement to existing passive detection solutions.

Zhaoyan Xu (Texas A&M University), Lingfeng Chen (Texas A&M University), Guofei Gu (Texas A&M University), Christopher Kruegel (University of California).

OTO: Online Trust Oracle for User-Centric Trust Establishment.

Malware continues to thrive on the Internet. Besides automated mechanisms for detecting malware, we provide users with trust evidence information to enable them to make informed trust decisions. To scope the problem, we study the challenge of assisting users with judging the trustworthiness of software downloaded from the Internet. Through expert elicitation, we deduce indicators for trust evidence, then analyze these indicators with respect to scalability and robustness. We design OTO, a system for communicating these trust evidence indicators to users, and we demonstrate through a user study the effectiveness of OTO, even when compared with IE’s SmartScreen Filter (SSF). The results from the between-subjects experiment with 58 participants confirm that the OTO interface helps people make correct trust decisions compared to the SSF interface, regardless of their security knowledge, education level, occupation, age, or gender.

Tiffany Hyun-Jin Kim (Carnegie Mellon University), Payas Gupta (Singapore Management University), Jun Han (Carnegie Mellon University), Emmanuel Owusu (Carnegie Mellon University), Jason Hong (Carnegie Mellon University), Adrian Perrig (Carnegie Mellon University), Debin Gao (Singapore Management University).

StegoTorus: A Camouflage Proxy for the Tor Anonymity System.

Internet censorship by governments is an increasingly common practice worldwide. Internet users and censors are locked in an arms race: as users find ways to evade censorship schemes, the censors develop countermeasures for the evasion tactics. One of the most popular and effective circumvention tools, Tor, must regularly adjust its network traffic signature to remain usable. We present StegoTorus, a tool that comprehensively disguises Tor from protocol analysis. To foil analysis of packet contents, Tor’s traffic is steganographed to resemble an innocuous cover protocol, such as HTTP. To foil analysis at the transport level, the Tor circuit is distributed over many shorter-lived connections with per-packet characteristics that mimic cover-protocol traffic. Our evaluation demonstrates that StegoTorus improves the resilience of Tor to fingerprinting attacks and delivers usable performance.

Zachary Weinberg (Carnegie Mellon University), Jeffrey Wang (Stanford University), Vinod Yegneswaran (SRI International), Linda Briesemeister (SRI International), Steven Cheung (SRI International), Frank Wang (Stanford University), Dan Boneh (Stanford University).

On Significance of the Least Significant Bits For Differential Privacy.

We describe a new type of vulnerability present in many implementations of differentially private mechanisms. In particular, all four publicly available general purpose systems for differentially private computations are susceptible to our attack. The vulnerability is based on irregularities of floating-point implementations of the privacy-preserving Laplacian mechanism. Unlike its mathematical abstraction, the textbook sampling procedure results in a porous distribution over double-precision numbers that allows one to breach differential privacy with just a few queries into the mechanism. We propose a mitigating strategy and prove that it satisfies differential privacy under some mild assumptions on available implementation of floating-point arithmetic.
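
The attack and the proposed mitigation are developed in the paper; the few lines below merely illustrate the underlying observation that the textbook inverse-CDF sampler, evaluated in double precision, can only produce a sparse, porous subset of the doubles near any given output value (all constants are illustrative).

    import math

    SCALE = 1.0
    GRID = 2.0 ** -53          # resolution of Python's random.random()

    def textbook_laplace_from(u):
        # The textbook inverse-CDF Laplace sampler, written as a deterministic
        # function of the underlying uniform draw u = random.random() - 0.5.
        return -SCALE * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    # Enumerate 50 consecutive grid values of u around the point that maps to 1.0
    # and measure the spacing of the achievable outputs.
    u0 = (1.0 - math.exp(-1.0 / SCALE)) / 2.0   # maps to exactly 1.0 in real arithmetic
    k0 = round(u0 / GRID)
    outs = sorted({textbook_laplace_from(k * GRID) for k in range(k0, k0 + 50)})
    gaps = [outs[i + 1] - outs[i] for i in range(len(outs) - 1)]
    # The smallest gap is several times larger than the spacing of doubles near 1.0,
    # so most doubles in that neighborhood can never be produced by the sampler.
    print(min(gaps), 2.0 ** -52)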

Ilya Mironov (Microsoft Research Silicon Valley).

Adaptive Defenses for Commodity Software through Virtual Application Partitioning.

Applications can be logically separated into parts that face different types of threats, or suffer dissimilar exposure to a particular threat because of external events or innate properties of the software. Based on this observation, we propose the virtual partitioning of applications that will allow the selective and targeted application of those protection mechanisms that are most needed on each partition, or manage an application’s attack surface by protecting the most exposed partition. We demonstrate the value of our scheme by introducing a methodology to automatically partition software, based on the intrinsic property of user authentication. Our approach is able to automatically determine the point where users authenticate, without access to source code. At runtime, we employ a monitor that utilizes the identified authentication points, as well as events like accessing specific files, to partition execution and adapt defenses by switching between protection mechanisms of varied intensity, such as dynamic taint analysis and instruction-set randomization. We evaluate our approach using seven well-known network applications, including the MySQL database server. Our results indicate that our methodology can accurately discover authentication points. Furthermore, we show that using virtual partitioning to apply costly protection mechanisms can reduce performance overhead by up to 5x, depending on the nature of the application.

Dimitris Geneiatakis (Columbia University), Georgios Portokalidis (Columbia University), Vasileios P. Kemerlis (Columbia University), Angelos D. Keromytis (Columbia University).

Foundations of Garbled Circuits.

Garbled circuits, a classical idea rooted in the work of Yao, have long been understood as a cryptographic technique, not a cryptographic goal. Here we cull out a primitive corresponding to this technique. We call it a garbling scheme. We provide a provable-security treatment for garbling schemes, endowing them with a versatile syntax and multiple security definitions. The most basic of these, privacy, suffices for two-party secure function evaluation (SFE) and private function evaluation (PFE). Starting from a PRF, we provide an efficient garbling scheme achieving privacy and we analyze its concrete security. We next consider obliviousness and authenticity, properties needed for private and verifiable outsourcing of computation. We extend our scheme to achieve these ends. We provide highly efficient blockcipher-based instantiations of both schemes. Our treatment of garbling schemes presages more efficient garbling, more rigorous analyses, and more modularly designed higher-level protocols.
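
A bare-bones sketch of the classical garbling technique for a single AND gate, using HMAC-SHA256 as a stand-in for the PRF-based encryption and a zero-padding check in place of point-and-permute; the paper's syntax, security definitions, and optimized blockcipher-based instantiations are not reflected here.

    import hashlib, hmac, os, random

    def mask(key_a, key_b, payload):
        # "Encrypt" by XOR-masking the payload with a PRF of both wire labels.
        digest = hmac.new(key_a + key_b, b"gate0", hashlib.sha256).digest()
        return bytes(x ^ y for x, y in zip(digest, payload))

    def garble_and_gate():
        # One random 16-byte label per wire per truth value; each table row holds
        # the output label padded with zero bytes so a correct decryption is recognizable.
        labels = {w: {v: os.urandom(16) for v in (0, 1)} for w in ("a", "b", "out")}
        table = []
        for a in (0, 1):
            for b in (0, 1):
                payload = labels["out"][a & b] + b"\x00" * 16
                table.append(mask(labels["a"][a], labels["b"][b], payload))
        random.shuffle(table)
        return labels, table

    def evaluate(table, label_a, label_b):
        # The evaluator holds exactly one label per input wire and tries each row.
        for row in table:
            plain = mask(label_a, label_b, row)
            if plain.endswith(b"\x00" * 16):
                return plain[:16]
        raise ValueError("no row decrypted")

    labels, table = garble_and_gate()
    out = evaluate(table, labels["a"][1], labels["b"][1])
    print(out == labels["out"][1])   # True: the evaluator learned the label for 1 AND 1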

Mihir Bellare (University of California, San Diego), Viet Tung Hoang (University of California), Phillip Rogaway (University of California, Davis).

Secure Two-Party Computation in Sublinear (Amortized) Time.

Traditional approaches to generic secure computation begin by representing the function f being computed as a circuit. If f depends on each of its input bits, this implies a protocol with complexity at least linear in the input size. In fact, linear running time is inherent for non-trivial functions since each party must “touch” every bit of their input lest information about the other party’s input be leaked. This seems to rule out many applications of secure computation (e.g., database search) in scenarios where inputs are huge. Adapting and extending an idea of Ostrovsky and Shoup, we present an approach to secure two-party computation that yields protocols running in sublinear time, in an amortized sense, for functions that can be computed in sublinear time on a random-access machine (RAM). Moreover, each party is required to maintain state that is only (essentially) linear in its own input size. Our approach combines generic secure two-party computation with oblivious RAM (ORAM) protocols. We present an optimized version of our approach using Yao’s garbled-circuit protocol and a recent ORAM construction of Shi et al. We describe an implementation of our resulting protocol, and evaluate its performance for obliviously searching a database with over 1 million entries. Our implementation outperforms off-the-shelf secure-computation protocols for databases containing more than 2^18 entries.

S. Dov Gordon (Applied Communication Sciences), Jonathan Katz (University of Maryland), Vladimir Kolesnikov (Bell Labs), Fernando Krell (Columbia University), Tal Malkin (Columbia University), Mariana Raykova (Columbia University), Yevgeniy Vahlis (AT&T).

Computational Verification of C Protocol Implementations by Symbolic Execution.

We verify cryptographic protocols coded in C for correspondence properties with respect to the computational model of cryptography. The first step uses symbolic execution to extract a process calculus model from a C implementation of the protocol. The new contribution is the second step in which we translate the extracted model to a CryptoVerif protocol description, such that successful verification with CryptoVerif implies the security of the original C implementation. We implement our method and apply it to verify several protocols out of reach of previous work in the symbolic model (using ProVerif), either due to the use of XOR and Diffie-Hellman commitments, or due to the lack of an appropriate computational soundness result. We analyze only a single execution path, so our tool is limited to code following a fixed protocol narration. This is the first security analysis of C code to target a verifier for the computational model. We successfully verify over 3000 LOC. One example (about 1000 LOC) is independently written and currently in the testing phase for industrial deployment; during its analysis we uncovered a vulnerability now fixed by its author.

Mihhail Aizatulin (Open University), Andrew D. Gordon (Microsoft Research Cambridge), Jan Jürjens (TU Dortmund & Fraunhofer ISST).

CHEX: Statically Vetting Android Apps for Component Hijacking Vulnerabilities.

An enormous number of apps have been developed for Android in recent years, making it one of the most popular mobile operating systems. However, the quality of the booming apps can be a concern [4]. Poorly engineered apps may contain security vulnerabilities that can severely undermine users’ security and privacy. In this paper, we study a general category of vulnerabilities found in Android apps, namely the component hijacking vulnerabilities. Several types of previously reported app vulnerabilities, such as permission leakage, unauthorized data access, intent spoofing, etc., belong to this category. We propose CHEX, a static analysis method to automatically vet Android apps for component hijacking vulnerabilities. Modeling these vulnerabilities from a data-flow analysis perspective, CHEX analyzes Android apps and detects possible hijack-enabling flows by conducting low-overhead reachability tests on customized system dependence graphs. To tackle analysis challenges imposed by Android’s special programming paradigm, we employ a novel technique to discover the complete set of component entry points and introduce app splitting to model the asynchronous executions of multiple entry points in an app. We prototyped CHEX based on Dalysis, a generic static analysis framework that we built to support many types of analysis on Android app bytecode. We evaluated CHEX with 5,486 real Android apps and found 254 potential component hijacking vulnerabilities. The median execution time of CHEX on an app is 37.02 seconds, which is fast enough to be used in very high volume app vetting and testing scenarios.

Long Lu (Georgia Institute of Technology), Zhichun Li (NEC Labs America, Inc.), Zhenyu Wu (NEC Labs America, Inc.), Wenke Lee (Georgia Institute of Technology), Guofei Jiang (NEC Labs America, Inc.).

Touching from a Distance: Website Fingerprinting Attacks and Defenses.

We present a novel web page fingerprinting attack that is able to defeat several recently proposed defenses against traffic analysis attacks, including the application-level defenses HTTPOS and randomized pipelining over Tor. Regardless of the defense scheme, our attack was able to guess which of 100 web pages a victim was visiting at least 50% of the time and, with some defenses, over 90% of the time. Our attack is based on a simple model of network behavior and outperforms previously proposed ad hoc attacks. We then build a web site fingerprinting attack that is able to identify whether a victim is visiting a particular web site with over 90% accuracy in our experiments. Our results strongly suggest that ad hoc defenses against traffic analysis are not likely to succeed. Consequently, we describe a defense scheme that provides provable security properties, albeit with potentially higher overheads.

Xiang Cai (Stony Brook University), Xin Cheng Zhang (Stony Brook University), Brijesh Joshi (Stony Brook University), Rob Johnson (Stony Brook University).

Dynamic Searchable Symmetric Encryption.

Searchable symmetric encryption (SSE) allows a client to encrypt its data in such a way that this data can still be searched. The most immediate application of SSE is to cloud storage, where it enables a client to securely outsource its data to an untrusted cloud provider without sacrificing the ability to search over it. SSE has been the focus of active research and a multitude of schemes that achieve various levels of security and efficiency have been proposed. Any practical SSE scheme, however, should (at a minimum) satisfy the following properties: sublinear search time, security against adaptive chosen-keyword attacks, compact indexes and the ability to add and delete files efficiently. Unfortunately, none of the previously-known SSE constructions achieve all these properties at the same time. This severely limits the practical value of SSE and decreases its chance of deployment in real-world cloud storage systems. To address this, we propose the first SSE scheme to satisfy all the properties outlined above. Our construction extends the inverted index approach (Curtmola et al., CCS 2006) in several non-trivial ways and introduces new techniques for the design of SSE. In addition, we implement our scheme and conduct a performance evaluation, showing that our approach is highly efficient and ready for deployment.
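
Not the paper's construction (whose point is efficient dynamic updates together with adaptive security); the static toy below conveys the inverted-index flavor of SSE: the server stores, under a per-keyword PRF token, a list of encrypted document identifiers, and at search time the client reveals only the keyword's token and decrypts the matching identifiers locally.

    import hashlib, hmac

    def prf(key, msg):
        return hmac.new(key, msg, hashlib.sha256).digest()

    def xor_pad(pad, data):
        data = data.ljust(32, b"\x00")
        return bytes(x ^ y for x, y in zip(pad, data))

    def build_index(master_key, docs):
        # docs: {doc_id (bytes): iterable of keyword bytes}.
        index = {}
        for doc_id, words in docs.items():
            for w in words:
                token = prf(master_key, b"srch|" + w)     # revealed to the server on search
                enc_key = prf(master_key, b"encr|" + w)   # kept by the client
                position = len(index.setdefault(token, []))
                pad = prf(enc_key, position.to_bytes(4, "big"))
                index[token].append(xor_pad(pad, doc_id))
        return index

    def search(index, master_key, word):
        # In a real deployment the server would look up `token` and return the
        # encrypted entries; decryption with enc_key happens on the client.
        token = prf(master_key, b"srch|" + word)
        enc_key = prf(master_key, b"encr|" + word)
        results = []
        for position, blob in enumerate(index.get(token, [])):
            pad = prf(enc_key, position.to_bytes(4, "big"))
            results.append(xor_pad(pad, blob).rstrip(b"\x00"))
        return results

    key = b"0" * 32
    idx = build_index(key, {b"doc1": [b"cloud", b"crypto"], b"doc2": [b"crypto"]})
    print(search(idx, key, b"crypto"))   # [b'doc1', b'doc2']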

Seny Kamara (Microsoft Research), Charalampos Papamanthou (UC Berkeley), Tom Roeder (Microsoft Research).

Strengthening User Authentication through Opportunistic Cryptographic Identity Assertions.

User authentication systems are at an impasse. The most ubiquitous method – the password – has numerous problems, including susceptibility to unintentional exposure via phishing and cross-site password reuse. Second-factor authentication schemes have the potential to increase security but face usability and deployability challenges. For example, conventional second-factor schemes change the user authentication experience. Furthermore, while more secure than passwords, second-factor schemes still fail to provide sufficient protection against (single-use) phishing attacks. We present PhoneAuth, a system intended to provide security assurances comparable to or greater than that of conventional two-factor authentication systems while offering the same authentication experience as traditional passwords alone. Our work leverages the following key insights. First, a user’s personal device (e.g., a phone) can communicate directly with the user’s computer (and hence the remote web server) without any interaction with the user. Second, it is possible to provide a layered approach to security, whereby a web server can enact different policies depending on whether or not the user’s personal device is present. We describe and evaluate our server-side, Chromium web browser, and Android phone implementations of PhoneAuth.

Alexei Czeskis (University of Washington), Michael Dietz (Rice University), Tadayoshi Kohno (University of Washington), Dan Wallach (Rice University), Dirk Balfanz (Google).

Vigilare: Toward Snoop-based Kernel Integrity Monitor.

In this paper, we present the Vigilare system, a kernel integrity monitor that is architected to snoop the bus traffic of the host system from a separate independent hardware. This snoop-based monitoring, enabled by the Vigilare system, overcomes the limitations of the snapshot-based monitoring employed in previous kernel integrity monitoring solutions. Being based on inspecting snapshots collected over a certain interval, the previous hardware-based monitoring solutions cannot detect transient attacks that can occur in between snapshots. We implemented a prototype of the Vigilare system on Gaisler’s grlib-based system-on-a-chip (SoC) by adding a Snooper hardware connections module to the host system for bus snooping. To evaluate the benefit of snoop-based monitoring, we also implemented a similar SoC with a snapshot-based monitor for comparison. The Vigilare system detected all the transient attacks without performance degradation, while the snapshot-based monitor could not detect all the attacks and induced considerable performance degradation, as much as 10% in our tuned STREAM benchmark test.

Hyungon Moon (Seoul National University), Hojoon Lee (Korea Advanced Institute of Science and Technology), Jihoon Lee (Seoul National University), Kihwan Kim (Korea Advanced Institute of Science and Technology), Yunheung Paek (Seoul National University), Brent Byunghoon Kang (George Mason University).

Blacksheep: Detecting Compromised Hosts in Homogeneous Crowds.

The lucrative rewards of security penetrations into large organizations have motivated the development and use of many sophisticated rootkit techniques to maintain an attacker’s presence on a compromised system. Due to the evasive nature of such infections, detecting these rootkit infestations is a problem facing modern organizations. While many approaches to this problem have been proposed, drawbacks ranging from signature generation issues to coverage and performance prevent these approaches from being ideal solutions. In this paper, we present Blacksheep, a distributed system for detecting a rootkit infestation among groups of similar machines. This approach was motivated by the homogeneous nature of many corporate networks. Taking advantage of the similarity among the machines it analyzes, Blacksheep is able to efficiently and effectively detect both existing and new infestations by comparing the memory dumps collected from each host. We evaluate Blacksheep on two sets of memory dumps. One set is taken from virtual machines using virtual machine introspection, mimicking the deployment of Blacksheep on a cloud computing provider’s network. The other set is taken from Windows XP machines via a memory acquisition driver, demonstrating Blacksheep’s usage under more challenging image acquisition conditions. The results of the evaluation show that by leveraging the homogeneous nature of groups of computers, it is possible to detect rootkit infestations.
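
A toy sketch of the majority-vote comparison that this kind of cross-host analysis suggests: hash each memory page per host and flag hosts whose pages disagree with the crowd. The page granularity, threshold and data layout are illustrative assumptions, not Blacksheep's actual comparison algorithm.

    # Toy sketch of crowd-based comparison across memory dumps.
    import hashlib
    from collections import Counter

    def page_hashes(dump: bytes, page_size: int = 4096) -> list[str]:
        return [hashlib.sha256(dump[i:i + page_size]).hexdigest()
                for i in range(0, len(dump), page_size)]

    def find_outliers(dumps: dict[str, bytes], max_divergent_pages: int = 2) -> list[str]:
        hashes = {host: page_hashes(d) for host, d in dumps.items()}
        n_pages = min(len(h) for h in hashes.values())
        divergent = Counter()
        for page in range(n_pages):
            votes = Counter(hashes[host][page] for host in hashes)
            majority, _ = votes.most_common(1)[0]
            for host in hashes:
                if hashes[host][page] != majority:
                    divergent[host] += 1
        return [host for host, count in divergent.items() if count > max_divergent_pages]

    dumps = {
        "hostA": b"\x00" * 8192,
        "hostB": b"\x00" * 8192,
        "hostC": b"\x00" * 4096 + b"\x90" * 4096,   # one patched page
    }
    print(find_outliers(dumps, max_divergent_pages=0))   # ['hostC']

Because no per-rootkit signature is involved, previously unseen infestations can surface simply by being different from the rest of a homogeneous group.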

Antonio Bianchi (UC Santa Barbara), Yan Shoshitaishvili (UC Santa Barbara), Christopher Kruegel (UC Santa Barbara), Giovanni Vigna (UC Santa Barbara).

Kargus: a Highly-scalable Software-based Intrusion Detection System.

As high-speed networks become commonplace, it is increasingly challenging to prevent attack attempts at the edge of the Internet. While many high-performance intrusion detection systems (IDSes) employ dedicated network processors or special memory to meet demanding performance requirements, doing so often increases cost and limits functional flexibility. In contrast, existing software-based IDS stacks fail to achieve high throughput despite modern hardware innovations such as multicore CPUs, manycore GPUs, and 10 Gbps network cards that support multiple hardware queues. We present Kargus, a highly scalable software-based IDS that exploits the full potential of commodity computing hardware. First, Kargus batch-processes incoming packets at the network cards and achieves an input rate of up to 40 Gbps even for minimum-sized packets. Second, it exploits high processing parallelism by balancing the pattern-matching workload across multicore CPUs and heterogeneous GPUs, and benefits from extensive batch processing of multiple packets per IDS function call. Third, Kargus adapts its resource usage to the input rate, significantly saving power under normal traffic conditions. Our evaluation shows that Kargus on a 12-core machine with two GPUs handles up to 33 Gbps of normal traffic and achieves 9 to 10 Gbps even when all packets contain attack signatures, a 1.9x to 4.3x performance improvement over the existing state-of-the-art software IDS. We design Kargus to be compatible with the most popular software IDS, Snort.
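
A simplified sketch of the kind of load-adaptive dispatch the abstract describes: keep batches on the CPU while the input rate is low (letting the GPU idle to save power) and offload overflow batches to the GPU as the CPU saturates. The threshold, capacity constant and queue layout are illustrative, not the system's actual parameters.

    # Simplified sketch of load-adaptive CPU/GPU dispatch for batched pattern matching.
    from collections import deque

    CPU_BATCH_CAPACITY = 1024   # packets one CPU worker can absorb per cycle (illustrative)

    def dispatch(batches: deque, cpu_load: float, gpu_available: bool) -> list[tuple[str, list]]:
        """Assign each packet batch to a CPU or GPU matching engine."""
        plan = []
        while batches:
            batch = batches.popleft()
            if cpu_load < 0.8 or not gpu_available:
                # Low input rate: keep work on the CPU and let the GPU idle to save power.
                plan.append(("cpu", batch))
                cpu_load += len(batch) / CPU_BATCH_CAPACITY
            else:
                # CPU near saturation: offload the batch to the GPU engine.
                plan.append(("gpu", batch))
        return plan

    pending = deque([list(range(600)), list(range(900)), list(range(700))])
    print([(target, len(b)) for target, b in dispatch(pending, cpu_load=0.6, gpu_available=True)])

Processing whole batches per decision, rather than individual packets, is what amortizes the per-call overhead that the abstract attributes to its performance gains.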

Muhammad Asim Jamshed (KAIST), Jihyung Lee (KAIST), Sangwoo Moon (KAIST), Insu Yun (KAIST), Deokjin Kim (NSRI), Sungryoul Lee (NSRI), Yung Yi (KAIST), KyoungSoo Park (KAIST).

Verified Security of Redundancy-Free Encryption from Rabin and RSA.

Verified security provides a firm foundation for cryptographic proofs by means of rigorous programming language techniques and verification methods. EasyCrypt is a framework that realizes the verified security paradigm and supports the machine-checked construction and verification of cryptographic proofs using state-of-the-art SMT solvers, automated theorem provers and interactive proof assistants. Previous experiments have shown that EasyCrypt is effective for a posteriori validation of cryptographic systems. In this paper, we report on the first application of verified security to a novel cryptographic construction, with strong security properties and interesting practical features. Specifically, we use EasyCrypt to prove in the Random Oracle Model the IND-CCA security of a redundancy-free public-key encryption scheme based on trapdoor one-way permutations. Somewhat surprisingly, we show that even with a zero-length redundancy, Boneh’s SAEP scheme (an OAEP-like construction with a single-round Feistel network rather than two) converts a trapdoor one-way permutation into an IND-CCA-secure scheme, provided the permutation satisfies two additional properties. We then prove that the Rabin function and RSA with short exponent enjoy these properties, and thus can be used to instantiate the construction we propose to obtain efficient encryption schemes. The reduction that justifies the security of our construction is tight enough to achieve practical security with reasonable key sizes.
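
For orientation, the zero-redundancy, single-round-Feistel padding the abstract refers to can be sketched as follows. This is our reading of the construction, with parameter sizes and the exact ordering of the concatenation elided; f is the trapdoor permutation (e.g., Rabin or short-exponent RSA) and H is modeled as a random oracle.

    \[
      \mathsf{Enc}_{pk}(m):\quad r \xleftarrow{\$} \{0,1\}^{\rho},\qquad
      c \leftarrow f_{pk}\bigl(r \,\|\, (m \oplus H(r))\bigr)
    \]
    \[
      \mathsf{Dec}_{sk}(c):\quad r \,\|\, t \leftarrow f_{pk}^{-1}(c),\qquad
      m \leftarrow t \oplus H(r)
    \]

The ciphertext carries no redundancy at all, which is why the IND-CCA result for this padding is described as somewhat surprising.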

Gilles Barthe (IMDEA Software Institute), David Pointcheval (École Normale Supérieure), Santiago Zanella Béguelin (Microsoft Research).

Salus: A System for Server-Aided Secure Function Evaluation.

Secure function evaluation (SFE) allows a set of mutually distrustful parties to evaluate a function of their joint inputs without revealing their inputs to each other. SFE has been the focus of active research and recent work suggests that it can be made practical. Unfortunately, current protocols and implementations have inherent limitations that are hard to overcome using standard and practical techniques. Among them are: (1) requiring participants to do work linear in the size of the circuit representation of the function; (2) requiring all parties to do the same amount of work; and (3) not being able to provide complete fairness. A promising approach for overcoming these limitations is to augment the SFE setting with a small set of untrusted servers that have no input to the computation and that receive no output, but that make their computational resources available to the parties. In this model, referred to as server-aided SFE, the goal is to trade off the parties’ work at the expense of the servers. Motivated by the emergence of public cloud services such as Amazon EC2 and Microsoft Azure, recent work has explored the extent to which server-aided SFE can be achieved with a single server. In this work, we revisit the server-aided setting from a practical perspective and design single-server-aided SFE protocols that are considerably more efficient than all previously-known protocols. We achieve this in part by introducing several new techniques for garbled-circuit-based protocols, including a new and efficient input-checking mechanism for cut-and-choose and a new pipelining technique that works in the presence of malicious adversaries. Furthermore, we extend the server-aided model to guarantee fairness, which is an important property to achieve in practice. Finally, we implement and evaluate our constructions experimentally and show that our protocols (regardless of the number of parties involved) yield implementations that are 4 and 6 times faster than the most optimized two-party SFE implementation when the server is assumed to be malicious and covert, respectively.
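
A toy sketch of the server-aided model itself (not the paper's optimized protocols): one party garbles a single AND gate, both parties hand the untrusted server only their input wire labels, and the server evaluates without learning the inputs. The simplistic "encrypt-then-check" garbling below is purely illustrative.

    # Toy sketch of server-aided evaluation of one garbled AND gate.
    import hashlib, os, random

    def H(*parts: bytes) -> bytes:
        return hashlib.sha256(b"".join(parts)).digest()

    def garble_and_gate():
        labels = {w: (os.urandom(16), os.urandom(16)) for w in ("a", "b", "out")}
        table = []
        for a in (0, 1):
            for b in (0, 1):
                out_label = labels["out"][a & b]
                pad = H(labels["a"][a], labels["b"][b])
                # Append 16 zero bytes so the evaluator can recognize the right row.
                ct = bytes(x ^ y for x, y in zip(out_label + b"\x00" * 16, pad))
                table.append(ct)
        random.shuffle(table)
        return labels, table

    def server_evaluate(label_a: bytes, label_b: bytes, table) -> bytes:
        pad = H(label_a, label_b)
        for ct in table:
            pt = bytes(x ^ y for x, y in zip(ct, pad))
            if pt.endswith(b"\x00" * 16):
                return pt[:16]
        raise ValueError("no row decrypted")

    labels, table = garble_and_gate()
    # Party 1 holds bit 1, party 2 holds bit 0; each sends only its label to the server.
    result = server_evaluate(labels["a"][1], labels["b"][0], table)
    print(result == labels["out"][1 & 0])   # True: the garbler maps this label back to 0

The server does circuit-size work while the parties only handle their own input labels, which is the asymmetry the server-aided model is meant to exploit.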

Seny Kamara (Microsoft Research), Payman Mohassel (University of Calgary), Ben Riva (Tel Aviv University).