Oberseminar on Cryptography
Lecturer | Time | Room | First session |
---|---|---|---|
Prof. A. May, Prof. Eike Kiltz | Fridays, 10:45-12:00 | NA 5/64 | 24.10.14 |
New results on constant-size NIZK Proofs
In the last year, there have been several exciting advances towards making zero-knowledge proofs even more practical. In particular, several works (Libert et al., Eurocrypt 2014; Jutla and Roy, Crypto 2014) have constructed constant-size proofs of membership in a linear subspace of G^n, where G is a group in which the discrete logarithm problem is hard; these proofs are even competitive with their random oracle counterparts. We show how to extend these results to obtain a constant-size NIZK proof that a commitment opens to a bit string. To achieve this we abstract Groth-Sahai (GS) proofs, observe that they are a special type of membership proof in G_T (where e: G_1 x G_2 --> G_T is a bilinear map), and combine this observation with some of the ideas and techniques underlying the recent constant-size NIZK proofs of membership in linear spaces. The talk does not assume any prior knowledge of Groth-Sahai proofs. I will also cover the recent results on constant-size NIZK proofs of membership in linear spaces.
The BKW algorithm
We present the Blum, Kalai, Wasserman (STOC 2000) algorithm for learning parities with noise and some of its recent modifications in the crypto literature.
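The core of BKW is a block-wise reduction: samples that agree on a block of coordinates are XOR-ed pairwise, zeroing that block while roughly squaring the noise, until only unit vectors remain and each secret bit can be majority-voted. A minimal sketch in Python (toy parameters, recovering only the first block; the sample generator and the even split into blocks are illustrative assumptions, not the parameter regimes from the literature):

```python
import random
from collections import defaultdict

def lpn_samples(s, num, tau):
    """Draw LPN samples (a, <a,s> + e mod 2) with noise rate tau."""
    n = len(s)
    out = []
    for _ in range(num):
        a = [random.randrange(2) for _ in range(n)]
        e = 1 if random.random() < tau else 0
        out.append((a, (sum(x * y for x, y in zip(a, s)) + e) % 2))
    return out

def bkw(samples, n, block):
    """Recover the first `block` bits of the secret; the full BKW
    algorithm repeats this for the remaining blocks."""
    # Reduction: zero out one block of coordinates at a time by
    # XOR-ing pairs of samples that agree on that block.
    for start in range(n - block, 0, -block):
        buckets = defaultdict(list)
        for a, b in samples:
            buckets[tuple(a[start:start + block])].append((a, b))
        samples = []
        for group in buckets.values():
            ra, rb = group[0]
            for a, b in group[1:]:
                samples.append(([x ^ y for x, y in zip(a, ra)], b ^ rb))
    # Hypothesis testing: majority-vote each secret bit using the
    # samples that collapsed to a unit vector.
    guess = []
    for i in range(block):
        votes = [b for a, b in samples if a[i] == 1 and sum(a) == 1]
        guess.append(1 if 2 * sum(votes) > len(votes) else 0)
    return guess
```

Each reduction round doubles the noise exponent (noise rate tau becomes 2·tau·(1-tau)), which is why BKW needs the sub-exponential 2^{O(n/log n)} number of samples in general.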
The Selective Opening Security of DHIES
This talk will focus on the (simulation-based) selective opening security (under sender corruptions) of the widely used "Diffie-Hellman Integrated Encryption Scheme (DHIES)" proposed by Abdalla, Bellare and Rogaway in 2001. Coming up with the 'right' definition of selective opening security has proven to be highly non-trivial, and at present there are three such definitions. We will focus on a simulation-based definition, the strongest one achieved so far. We will see that for certain constructions selective opening security comes for free in the random oracle model.
Communication Complexity of Conditional Disclosure of Secrets and Attribute-Based Encryption
We initiate a systematic treatment of the communication complexity of conditional disclosure of secrets (CDS), where two parties want to disclose a secret to a third party if and only if their respective inputs satisfy some predicate. We present a general upper bound and the first non-trivial lower bounds for conditional disclosure of secrets. Moreover, we achieve tight lower bounds for many interesting settings of parameters for CDS with linear reconstruction, the latter being a requirement in the application to attribute-based encryption. In particular, our lower bounds explain the trade-off between ciphertext and secret key sizes of several existing attribute-based encryption schemes based on the dual system methodology.
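As a warm-up for the CDS model (a textbook example, not a construction from the talk): for the equality predicate over a field, one field element per party suffices, and reconstruction is linear. The prime `P` below is an illustrative choice.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def cds_equality(x, y, secret, rng=random):
    """One-round CDS for the equality predicate: the referee, who sees
    both inputs x and y, learns `secret` iff x == y."""
    # Common randomness shared by Alice and Bob, hidden from the referee.
    r = rng.randrange(1, P)
    u = rng.randrange(P)
    # One field element from each party -- constant communication.
    msg_alice = (r * x + u + secret) % P
    msg_bob = (r * y + u) % P
    # Linear reconstruction: msg_alice - msg_bob = r*(x - y) + secret,
    # so the secret appears iff x == y and is otherwise blinded by the
    # nonzero mask r*(x - y).
    return (msg_alice - msg_bob) % P
```

The lower bounds discussed in the talk concern how much communication schemes of this linear-reconstruction type must use for more complex predicates.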
Nearest Neighbor: Faster Decoding of Random Linear Codes
We examine the impact of approximate matching algorithms on Stern's algorithm, a decoding algorithm for random binary linear codes. Starting from an intuitive information set decoding algorithm proposed by Prange in 1962, Lee and Brickell improved this basic algorithm by a polynomial factor. The algorithm published by Stern in 1988 builds on the Lee-Brickell algorithm from the same year and enhances it considerably. However, the runtime of Stern's algorithm is dominated by the complexity of a naive matching algorithm. May and Ozerov (2014) showed that Stern's complexity can be improved from 2^{0.117n} to 2^{0.114n} by applying an improved matching algorithm. Applying the techniques proposed by Becker, Joux, May and Meurer in 2012, the complexity of Stern's enhanced algorithm has been reduced to 2^{0.097n}, which is currently the best, i.e. fastest, known decoding algorithm for random binary linear codes. In this thesis, we replace the naive matching in Stern's algorithm by two different approximate matching algorithms, the SPLIT algorithm and the Pattern algorithm, which are designed to find so-called nearest neighbors, i.e. vectors with a small difference. The expected runtime of Stern's algorithm improved by each of the approximate matching algorithms is analyzed and evaluated.
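The naive matching step that dominates Stern's runtime is simply an exhaustive pairwise scan for the closest pair between two lists. A minimal sketch of that baseline (vectors represented as Python integers; the SPLIT and Pattern algorithms from the thesis speed up exactly this step):

```python
def nearest_pair(L1, L2):
    """Naive quadratic matching: return the pair (u, v) with u in L1,
    v in L2 minimizing the Hamming distance, plus that distance."""
    best, best_d = None, float("inf")
    for u in L1:
        for v in L2:
            d = bin(u ^ v).count("1")  # Hamming distance via popcount
            if d < best_d:
                best_d, best = d, (u, v)
    return best, best_d
```

With lists of size 2^{cn} this scan costs 2^{2cn}; the approximate matching algorithms find the planted close pair in subquadratic time, which is where the exponent improvements come from.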
Generic algorithms for solving the discrete logarithm problem with known Hamming weight
The talk will be about generic algorithms for solving the discrete logarithm problem when the Hamming weight of the exponent is known. First, a recent algorithm by May and Ozerov is presented, which interpolates between the Meet-in-the-Middle and the Silver-Pohlig-Hellman algorithm. After this, some ideas on how to apply the representation technique to the discrete logarithm problem with known Hamming weight are presented. This technique was used by Nick Howgrave-Graham and Antoine Joux in their paper 'New generic algorithms for hard knapsacks' to solve the subset sum problem more efficiently.
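The Meet-in-the-Middle endpoint of that interpolation can be sketched directly: split the bit positions of the exponent into two halves, tabulate g^{x1} for all low-weight left halves, and look up h·g^{-x2} for all right halves. A toy sketch (for simplicity it assumes the weight splits evenly between the halves; the general algorithm enumerates all splits k1 + k2 = k):

```python
from itertools import combinations

def dlog_known_weight(g, h, p, n, k):
    """Find x with Hamming weight k over n bits such that g^x = h mod p,
    via a meet-in-the-middle over the two halves of the bit positions."""
    half = n // 2
    table = {}
    for pos in combinations(range(half), k // 2):       # left halves
        x1 = sum(1 << i for i in pos)
        table[pow(g, x1, p)] = x1
    g_inv = pow(g, -1, p)  # modular inverse (Python >= 3.8)
    for pos in combinations(range(half, n), k - k // 2):  # right halves
        x2 = sum(1 << i for i in pos)
        target = h * pow(g_inv, x2, p) % p
        if target in table:
            return table[target] + x2
    return None
```

The cost is roughly binom(n/2, k/2) table entries plus as many lookups, i.e. about the square root of the binom(n, k) brute-force search, at the price of equally large memory; the representation technique aims to improve on this trade-off.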
On Solving the Sparse LPN Problem by Solving the Light Bulb Problem
The sparse LPN problem is the restriction of the classical LPN problem to a secret s \in {0,1}^n with small (constant) Hamming weight k << n. A brute force approach solves the problem in time n^k. In this talk we will focus on an algorithm by Gregory Valiant (FOCS 2012) that solves the problem in time n^{0.81k}. This new approach reduces the problem to the so-called Light Bulb Problem: given n light bulbs whose on/off states are sampled many times, all independently at random except for one correlated pair, find the two correlated light bulbs. Valiant presents a new algorithm that solves the Light Bulb Problem in time n^{1.62} using fast matrix multiplication.
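The n^2 baseline that Valiant improves on is a plain pairwise correlation scan; a toy sketch (instance generator and parameters are illustrative, and Valiant's algorithm replaces the double loop below with fast rectangular matrix multiplication):

```python
import random

def light_bulb_naive(samples):
    """samples[t][i] is the +-1 state of bulb i at time t. Return the
    pair of bulbs with the largest absolute empirical correlation."""
    n = len(samples[0])
    best, best_pair = -1, None
    for i in range(n):
        for j in range(i + 1, n):
            corr = abs(sum(row[i] * row[j] for row in samples))
            if corr > best:
                best, best_pair = corr, (i, j)
    return best_pair

def make_instance(n, m, rho, pair, rng=random):
    """n bulbs over m time steps; the two bulbs in `pair` agree with
    probability (1 + rho) / 2, all other states are independent."""
    i, j = pair
    rows = []
    for _ in range(m):
        row = [rng.choice((-1, 1)) for _ in range(n)]
        row[j] = row[i] if rng.random() < (1 + rho) / 2 else -row[i]
        rows.append(row)
    return rows
```

The planted pair's correlation concentrates around rho·m while all other pairs stay near sqrt(m), which is why the scan (and its matrix-multiplication acceleration) succeeds.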
Tightly-Secure Signatures From Lossy Encryption
In this paper we present three digital signature schemes with tight security reductions. Our first signature scheme is a particularly efficient version of the short exponent discrete log based scheme of Girault et al. (J. of Cryptology 2006). Our scheme has a tight reduction to the decisional Short Discrete Logarithm problem, while still maintaining the non-tight reduction to the computational version of the problem upon which the original scheme of Girault et al. is based. The second signature scheme we construct is a modification of the scheme of Lyubashevsky (Asiacrypt 2009) that is based on the worst-case hardness of the shortest vector problem in ideal lattices. And the third scheme is a very simple signature scheme that is based directly on the hardness of the Subset Sum problem. We also present a general transformation that converts what we term lossy identification schemes into signature schemes with tight security reductions. We believe that this greatly simplifies the task of constructing and proving the security of such signature schemes.
Efficient privacy-preserving aggregation of time-series data
For sending data in the context of privacy-preserving statistical analysis of databases, there exists the concept of Private Stream Aggregation (PSA). PSA is a computationally secure protocol with low communication cost that enables each participant of the database to send its own data to an untrusted analyst in such a way that the analyst learns only the aggregate of all participants' data. After establishing the requirements for a secure PSA scheme, we find that such a scheme can be built from any key-homomorphic pseudo-random function. A security proof is provided in the talk. We also give a computationally efficient instantiation based on the DDH problem.
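The aggregation mechanics of a DDH-style instantiation can be sketched in a few lines: user keys and an aggregator key sum to zero, so the masks H(t)^{k_i} cancel in the product of the ciphertexts. All parameters below (the prime P, the generator G, the hash H) are illustrative stand-ins and far too small and simple to be secure.

```python
import hashlib, random

P = 2**61 - 1        # a Mersenne prime; we work in Z_P^*
G = 3

def H(t):
    # Hash the time period t to a group element (toy random oracle).
    d = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big")
    return pow(G, d % (P - 1), P)

def keygen(n, rng=random):
    # n user keys plus an aggregator key summing to 0 mod the group
    # order, so the masks H(t)^{k_i} cancel in the aggregate.
    keys = [rng.randrange(P - 1) for _ in range(n)]
    k0 = (-sum(keys)) % (P - 1)
    return keys, k0

def encrypt(k_i, t, x_i):
    # One ciphertext per user and time period.
    return pow(G, x_i, P) * pow(H(t), k_i, P) % P

def aggregate(k0, t, ciphertexts, max_sum):
    v = pow(H(t), k0, P)
    for c in ciphertexts:
        v = v * c % P
    # v = G^(sum of the x_i); recover the small sum by brute force.
    acc = 1
    for s in range(max_sum + 1):
        if acc == v:
            return s
        acc = acc * G % P
    return None
```

Note that the aggregator must solve a discrete logarithm in a small range to read off the sum, which is why PSA of this type is restricted to small aggregates.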
Analysis of the Use of Homomorphic Cryptography for Privacy-Preserving Processing of Biometric Features
The thesis first covers homomorphic encryption schemes. Then a biometric verification protocol that uses homomorphic cryptography is presented, and the privacy of the protocol is analyzed. In the last part, the protocol is placed in the context of the ISO standard 24745. In addition, various storage models are presented and made concrete for the protocol. The verification protocol is not based on passwords or the like, but on biometric data such as fingerprints.
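The kind of protocol analyzed in the thesis can be illustrated with a toy additively homomorphic scheme: the client sends bitwise Paillier encryptions of its fresh biometric reading, and the server, holding the reference template in the clear, computes an encryption of the Hamming distance without ever seeing the reading. This is a generic textbook construction, not the specific protocol from the thesis, and the parameters are tiny and purely illustrative.

```python
import math, random

def paillier_keygen(p, q):
    """Toy Paillier keys from two (tiny) primes, using g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)          # math.lcm: Python >= 3.9
    mu = pow(lam, -1, n)                  # valid because g = n + 1
    return n, lam, mu

def enc(n, m, rng=random):
    """Encrypt m: (1+n)^m * r^n mod n^2 with random unit r."""
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    n2 = n * n
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(n, lam, mu, c):
    u = pow(c, lam, n * n)
    return ((u - 1) // n) * mu % n

def enc_hamming(n, enc_bits, template):
    """Server side: Enc(x)^(1-2t) * Enc(t) = Enc(x XOR t) for a known
    template bit t; the product over all bits is Enc(Hamming distance)."""
    n2 = n * n
    acc = enc(n, 0)
    for c, t in zip(enc_bits, template):
        acc = acc * (pow(c, 1 - 2 * t, n2) * enc(n, t) % n2) % n2
    return acc
```

The client decrypts the result and proves (or reveals) only whether the distance is below the acceptance threshold, so neither side learns the other's biometric data in full.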
Gentry-Szydlo Algorithm
In the talk, the paper "Cryptanalysis of the Revised NTRU Signature Scheme" (Eurocrypt 2002) by C. Gentry and M. Szydlo will be presented. The paper uses an interesting combination of a lattice-reduction attack and the algebraic structure of the NTRU ring to recover the private key in polynomial time.
Cryptanalysis of Multilinear Maps over the Integers
So far there have been two candidate instantiations of $\kappa$-linear maps for $\kappa > 2$ that are useful in cryptography (i.e., at the very least, the discrete logarithm problem should be hard). One is based on ideal lattices, due to Garg, Gentry and Halevi ( https://eprint.iacr.org/2012/610 ). The other is based on ideals over the integers (i.e., computations modulo n), due to Coron, Lepoint and Tibouchi ( Crypto 2013, https://eprint.iacr.org/2013/183.pdf ). Both schemes are essentially adaptations of corresponding fully/leveled homomorphic encryption schemes: base group elements g^a, g^b, g^c correspond to encryptions of a, b, c, and the multiplied group element g^{abc} corresponds to the product of those encryptions (which is the encryption of the product, since the scheme is homomorphic). To obtain a functionality resembling multilinear maps, a weakened decryption key (called the zero-testing parameter) is made available, which allows one to infer something about ciphertexts obtained by multiplying at most $\kappa$ basic ciphertexts.

Unfortunately, both known constructions suffer from a so-called weak discrete logarithm attack, observed already by Garg, Gentry and Halevi. This attack is made possible by publishing the zero-testing parameter and rerandomizers (needed to rerandomize the encryptions/group elements after homomorphic operations, in order to hide how they were obtained). For the ideal-lattice-based scheme, this attack renders matrix assumptions like (external) DDH in the base group(s) false. For the integer-based scheme, a recent cryptanalysis due to Cheon, Han, Lee, Ryu and Stehle ( https://eprint.iacr.org/2014/906.pdf ) shows that the attack can be extended so that it actually allows solving discrete logarithms, hence totally breaking the construction.

In the talk, I will introduce multilinear maps and very briefly outline the design methodology of both constructions. Then I will explain the cryptanalysis in detail. If time permits, I will also sketch some recent purported fixes of the scheme, which were subsequently broken as well.
Encrypted Data Storage in NoSQL Databases
A major disadvantage of cloud storage providers running NoSQL databases is their lack of security features compared to those using standard SQL databases. Similar to the approach of CryptDB (Popa, Raluca Ada, et al. "CryptDB: protecting confidentiality with encrypted query processing." Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles. ACM, 2011.), we try to enable NoSQL database servers to compute over completely encrypted data, without the server ever decrypting it to plaintext. Instead, the server returns results in encrypted form, to be decrypted only by the trusted client application. To achieve this we use a layer-based encryption that reveals just enough information for the server to process queries, using searchable, order-preserving and basic homomorphic encryption schemes. We focus on the two most popular NoSQL databases, Apache Cassandra and Apache HBase.
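The layering idea can be illustrated with a two-layer toy in the CryptDB spirit: a deterministic tag lets the server answer equality queries over ciphertexts (it learns equality patterns, nothing else), while the value itself sits under a randomized layer. Everything below is a sketch; in particular the "stream cipher" built from a single SHA-256 block is illustrative only, not a real encryption scheme.

```python
import hashlib, hmac, os

def det_index(key: bytes, value: bytes) -> bytes:
    # Deterministic layer: equal plaintexts get equal tags, so the
    # server can match rows on the tag without seeing plaintext.
    return hmac.new(key, value, hashlib.sha256).digest()

def enc(key: bytes, value: bytes) -> bytes:
    # Randomized layer (values up to 32 bytes in this toy version).
    nonce = os.urandom(16)
    stream = hashlib.sha256(key + nonce).digest()
    assert len(value) <= len(stream)
    return nonce + bytes(v ^ s for v, s in zip(value, stream))

def dec(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(c ^ s for c, s in zip(ct, stream))
```

A column store would then keep `(det_index(k, v), enc(k, v))` per cell: the first component supports server-side equality queries, the second is opaque until the trusted client decrypts it.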
Causality in cryptographic channels (or: Why TLS is not the right protocol to protect your online chat)
In the academic literature, cryptographic channels in which messages
are exchanged in strictly sequential order are often modeled through
the sfAEAD security notion (stateful authenticated encryption with
associated data). For a unidirectional link, this notion ensures
confidentiality and integrity of messages (the latter also taking into
account reordering attacks, etc.). When it comes to analyzing
bidirectional communication protocols like TLS, the general
understanding is then that a bidirectional channel can be canonically
composed from two unidirectional channels, and that the security of
the latter, once established, can be lifted to the former.
I present recent research results that shed more light on this type of
composition. For instance, I show that if both directions of a
composed channel provide CCA-like confidentiality then this is not
necessarily the case for the composition. If more than two parties are
involved in the communication, an intuitively expected integrity
notion is also no longer attained. Finally, I propose a refined
set of security notions and a construction for a multi-party
channel that provably achieves them.
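How a stateful notion rules out reordering can be sketched with a toy unidirectional channel: binding a sequence number into the tag makes replayed or reordered messages fail verification. This is authentication only (a real sfAEAD channel would also encrypt), and the bidirectional composition discussed in the talk would run two such channels with independent keys.

```python
import hmac, hashlib

class AuthChannel:
    """Toy unidirectional stateful authenticated channel: the sender's
    and receiver's sequence numbers advance in lockstep, so a message
    delivered out of order verifies against the wrong counter and is
    rejected."""
    def __init__(self, key: bytes):
        self.key = key
        self.send_seq = 0
        self.recv_seq = 0

    def send(self, msg: bytes):
        tag = hmac.new(self.key, self.send_seq.to_bytes(8, "big") + msg,
                       hashlib.sha256).digest()
        self.send_seq += 1
        return msg, tag

    def recv(self, msg: bytes, tag: bytes) -> bytes:
        expect = hmac.new(self.key, self.recv_seq.to_bytes(8, "big") + msg,
                          hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("out-of-order or forged message")
        self.recv_seq += 1
        return msg
```

The subtle point of the talk is precisely that security of two such unidirectional channels does not automatically lift to the bidirectional or multi-party setting.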
Improved Dual System ABE in Prime-Order Groups via Predicate Encodings
We present a modular framework for the design of efficient adaptively secure attribute-based encryption (ABE) schemes for a large class of predicates under the standard k-Lin assumption in prime-order groups; this is the first uniform treatment of dual system ABE across different predicates and across both composite and prime-order groups. Via this framework, we obtain concrete efficiency improvements for several ABE schemes. Our framework has three novel components over prior works: (i) new techniques for simulating composite-order groups in prime-order ones, (ii) a refinement of the prior encodings framework for dual system ABE in composite-order groups, (iii) an extension to weakly attribute-hiding predicate encryption (which includes anonymous identity-based encryption as a special case).
Modeling in Algebraic Cryptanalysis
Trivium is one of the eSTREAM stream ciphers and, despite its simple and elegant structure, remains unbroken to date. Algebraic attacks have so far not even managed to break round-reduced variants of Trivium. In this seminar talk, the first successful and competitive algebraic attack on Trivium is presented.
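The simple structure that makes Trivium attractive for algebraic modeling is visible in its keystream generator, which fits in a few lines. The sketch below is a direct transcription of the specification's update rules (indices are the spec's 1-based positions minus one); it has not been checked against the official test vectors here.

```python
def trivium(key_bits, iv_bits, n):
    """Generate n keystream bits from 80-bit key and IV (lists of 0/1)."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    # 288-bit state: key || 13 zeros || IV || 4 zeros || 108 zeros || 111.
    s = key_bits + [0] * 13 + iv_bits + [0] * 4 + [0] * 108 + [1, 1, 1]
    out = []
    for i in range(4 * 288 + n):
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:              # keystream after 1152 warm-up rounds
            out.append(t1 ^ t2 ^ t3)
        # The only nonlinearity: one AND gate per register.
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Shift the three registers (lengths 93, 84, 111).
        s = [t3] + s[0:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out
```

Each round contributes only three degree-2 terms, which is exactly why Trivium is such a natural target for modeling as a sparse quadratic equation system, and why its resistance to algebraic attacks is remarkable.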
Rebound Attack on JH Hash Function
Rebound attacks are statistical cryptanalytic methods for hash functions. They are based on well-defined differential paths and have been applied to several hash functions of the SHA-3 competition, yielding the best known analysis in each case [1]. The aim of this thesis is to study these cryptanalytic methods in detail, with the JH hash function as a case study, and to show that the existing time complexities can be improved as follows: first we identify the problems that optimally fit the cryptanalytic situation, then we use better algorithms to find solutions for the differential path. Most rebound attacks feature one common operation, which happens to be the bottleneck of the attack [1]. This operation can simply be described as merging large lists. Some examples will be used to demonstrate how to reduce the complexity of the attack on the JH hash function in the bit-slice representation. [1] Maria Naya-Plasencia. How to improve rebound attacks. In Proceedings of the 31st Annual Conference on Advances in Cryptology, CRYPTO '11, pages 188-205, Berlin, Heidelberg, 2011. Springer-Verlag.
How to Obfuscate Programs Directly
In this talk we will present the obfuscator of Zimmerman (https://eprint.iacr.org/2014/776.pdf). First, we will consider the security requirements and functionality of the scheme. We will then discuss some of its shortcomings, in particular some potential attacks.