


LATIN 2018


Flavia Bonomo, (Universidad de Buenos Aires), On the thinness and proper thinness of a graph.
Leslie Goldberg, (University of Oxford),
Andrea Richa, (Arizona State University), Algorithmic Foundations of Programmable Matter.
Santosh Vempala, (Georgia Institute of Technology),





LATIN 2016


Jin Akiyama, (Tokyo University of Science), Reversible Figures and Solids.
An example of reversible (or hinge inside-out transformable) figures is Dudeney's Haberdasher's puzzle in which an equilateral triangle is dissected into four pieces, hinged like a chain, and then is transformed into a square by rotating the hinged pieces. Furthermore, the entire boundary of each figure goes into the inside of the other figure and becomes the dissection lines of the figure. Many intriguing results on reversibilities of figures have been found in the preceding research, but most of them are results on polygons. We generalize those results to general connected figures. It is shown that two nets obtained by cutting the surface of an arbitrary convex polyhedron along non-intersecting dissection trees are reversible. Moreover, we generalize reversibility for 2D-figures to one for 3D-solids.


Allan Borodin, (University of Toronto), Simplicity Is in Vogue (again).
Throughout history there has been an appreciation of the importance of simplicity in the arts and sciences. In the context of algorithm design, and in particular in approximation algorithms and algorithmic game theory, the importance of simplicity is currently very much in vogue. I will present some examples of the current interest in the design of "simple algorithms". And what is a simple algorithm? Is it just "you'll know it when you see it", or can we benefit from some precise models in various contexts?
José Correa, (Universidad de Chile), Subgame Perfect Equilibrium: Computation and Efficiency.
The concept of Subgame Perfect Equilibrium (SPE) naturally arises in games which are played sequentially. In a simultaneous game the natural solution concept is that of a Nash equilibrium, in which no player has an incentive to unilaterally deviate from her current strategy. However, if the game is played sequentially, i.e., there is a prescribed order in which the players make their moves, an SPE is a situation in which all players anticipate the full strategy of all other players contingent on the decisions of previous players. Although most research in algorithmic game theory has been devoted to understanding properties of Nash equilibria, including their computation and the so-called price of anarchy, in recent years there has been growing interest in understanding the computational properties of SPE and its corresponding efficiency measure, the sequential price of anarchy.

In this talk we will review some of these recent results, putting particular emphasis on a very basic game, namely that of atomic selfish routing in a network. In particular we will discuss some hardness results, such as the PSPACE-completeness of computing an SPE and its NP-hardness even when the number of players is fixed to two. We will also see that for interesting classes of games SPE avoids worst-case Nash equilibria, resulting in substantial improvements for the price of anarchy. However, for atomic network routing games with linear latencies, where the price of anarchy has long been known to be equal to 5/2, we prove that the sequential price of anarchy is not bounded by any constant and can be as large as Ω(√n), with n being the number of players.
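
To make these objects concrete, the following small Python sketch (a toy instance invented for illustration, not an example from the talk) computes the pure Nash equilibria of a two-player parallel-link routing game with linear latencies by checking unilateral deviations, and an SPE outcome by backward induction for a fixed order of play; ties are broken arbitrarily, and the link coefficients are made up.

    from itertools import product

    # Toy atomic routing instance (invented for illustration): two players each send
    # one unit from s to t over parallel links; a link e carrying load x has linear
    # latency a[e] * x.
    a = {"e1": 1.0, "e2": 1.5, "e3": 3.0}
    links = list(a)

    def costs(profile):
        """Cost paid by each player, given the link chosen by every player."""
        load = {e: profile.count(e) for e in links}
        return [a[e] * load[e] for e in profile]

    def is_nash(profile):
        """No player can strictly lower her cost by switching links unilaterally."""
        for i in range(len(profile)):
            for f in links:
                dev = list(profile); dev[i] = f
                if costs(tuple(dev))[i] < costs(profile)[i] - 1e-12:
                    return False
        return True

    nash = [p for p in product(links, repeat=2) if is_nash(p)]

    # Subgame perfect outcome for the order (player 0 moves, then player 1):
    # player 1 best-responds to the observed link; player 0 anticipates this.
    def best_response(first):
        return min(links, key=lambda f: costs((first, f))[1])

    spe_first = min(links, key=lambda e: costs((e, best_response(e)))[0])
    spe = (spe_first, best_response(spe_first))

    print("Nash profiles:", nash, "social costs:", [sum(costs(p)) for p in nash])
    print("SPE outcome:  ", spe, "social cost:", sum(costs(spe)))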

Alan Frieze, (Carnegie Mellon University), Buying Stuff Online.
Suppose there is a collection x_1, x_2, ..., x_N of independent uniform [0, 1] random variables, and a hypergraph F of target structures on the vertex set {1, ..., N}. We would like to buy a target structure at small cost, but we do not know all the costs x_i ahead of time. Instead, we inspect the random variables x_i one at a time, and after each inspection, choose to either keep the vertex i at cost x_i, or reject vertex i forever.

In the present paper, we consider the case where {1, ..., N} is the edge-set of some graph, and the target structures are the spanning trees of a graph; the spanning arborescences of a digraph; the Hamilton cycles of a graph; the perfect matchings of a graph; the paths between a fixed pair of vertices; or the cliques of some fixed size.

Joint work with Wesley Pegden (CMU).
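
As a concrete illustration of this online model (and not the strategy analyzed by Frieze and Pegden), here is a small Python simulation for the spanning-tree case on the complete graph: edge costs are revealed one at a time, a cheap edge joining two components of the bought forest is kept, and an edge is also kept when rejecting it would make a spanning tree unattainable. The threshold value is an arbitrary illustrative parameter.

    import random
    from collections import defaultdict, deque

    def connected(n, edge_list):
        """BFS check that edge_list connects all n vertices."""
        adj = defaultdict(list)
        for u, v in edge_list:
            adj[u].append(v); adj[v].append(u)
        seen, queue = {0}, deque([0])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y); queue.append(y)
        return len(seen) == n

    def buy_spanning_tree(n=12, threshold=0.35, seed=1):
        rng = random.Random(seed)
        edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
        rng.shuffle(edges)                       # fixed inspection order
        comp = list(range(n))                    # union-find over the bought forest
        def find(x):
            while comp[x] != x:
                comp[x] = comp[comp[x]]; x = comp[x]
            return x
        bought, total = [], 0.0
        for i, (u, v) in enumerate(edges):
            cost = rng.random()                  # revealed only upon inspection
            if find(u) == find(v):
                continue                         # would close a cycle: never useful
            still_available = bought + edges[i + 1:]
            forced = not connected(n, still_available)   # rejecting would strand us
            if cost <= threshold or forced:
                bought.append((u, v)); total += cost
                comp[find(u)] = find(v)
        assert len(bought) == n - 1              # we always end up owning a spanning tree
        return total

    print("cost of the tree bought online:", round(buy_spanning_tree(), 3))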

Héctor García-Molina, (Stanford University), Data Crowdsourcing: Is It for Real?
Crowdsourcing refers to performing a task using human workers that solve sub-problems that arise in the task. In this talk I will give an overview of crowdsourcing, focusing on how crowdsourcing can help traditional data processing and analysis tasks. I will also give a brief overview of some of the crowdsourcing research we have done at the Stanford University InfoLab.





LATIN 2014


Ronitt Rubinfeld, (MIT), Something for Almost Nothing: Advances in Sub-linear Time Algorithms.
Linear-time algorithms have long been considered the gold standard of computational efficiency. Indeed, it is hard to imagine doing better than that, since for a nontrivial problem, any algorithm must consider all of the input in order to make a decision. However, as extremely large data sets are pervasive, it is natural to wonder what one can do in sub-linear time. Over the past two decades, several surprising advances have been made on designing such algorithms. We will give a non-exhaustive survey of this emerging area, highlighting recent progress and directions for further research.
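
A classic concrete example of such an algorithm (a standard illustration, not necessarily one from the talk) is the sortedness spot-checker of Ergun et al., which reads only O((1/ε)·log n) positions of an array with distinct entries. A sorted array is always accepted; an array in which more than an ε fraction of the entries must change to restore sortedness is rejected with constant probability. A minimal Python sketch:

    import random
    from math import ceil

    def looks_sorted(a, eps=0.1, seed=0):
        """Sublinear spot-check for sortedness of an array with distinct entries."""
        rng = random.Random(seed)
        n = len(a)
        for _ in range(ceil(2 / eps)):
            i = rng.randrange(n)
            lo, hi = 0, n - 1                  # binary search for the value a[i]
            while lo <= hi:
                mid = (lo + hi) // 2
                if a[mid] == a[i]:
                    break
                elif a[mid] < a[i]:
                    lo = mid + 1
                else:
                    hi = mid - 1
            if lo > hi or mid != i:            # search went astray: evidence of disorder
                return False
        return True

    tidy = list(range(10**6))
    messy = list(range(10**6)); random.Random(1).shuffle(messy)
    print(looks_sorted(tidy), looks_sorted(messy))   # True False (with high probability)
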
Gilles Barthe, (IMDEA Software Institute), Computer-Aided Cryptographic Proofs.
EasyCrypt [6] is a computer-assisted framework for reasoning about the security of cryptographic constructions, using the methods and tools of provable security, and more specifically of the game-based techniques. The core of EasyCrypt is a relational program logic for a core probabilistic programming language with sequential composition, conditionals, loops, procedure calls, assignments and sampling from discrete distributions. The relational program logic is key to capturing reductionist arguments that arise in cryptographic proofs. It is complemented by a (standard, non-relational) program logic that allows one to reason about the probability of events in the execution of probabilistic programs; this program logic makes it possible, for instance, to upper bound the probability of failure events, which are pervasive in game-based cryptographic proofs. In combination, these logics capture general reasoning principles in cryptography and have been used to verify the security of emblematic constructions, including the Full-Domain Hash signature [8], the Optimal Asymmetric Encryption Padding (OAEP) [7], hash function designs [3] and zero-knowledge protocols [5, 1]. Yet, these logics can only capture instances of general principles, and lack mechanisms for stating and proving these general principles once and for all, and then instantiating them as needed. To overcome this limitation, we have recently extended EasyCrypt with programming language mechanisms such as modules and type classes. Modules provide support for composition of cryptographic proofs, and for formalizing hybrid arguments, whereas type classes are convenient for modelling and reasoning about algebraic structures. Together, these extensions significantly expand the class of examples that can be addressed with EasyCrypt. For instance, we have used the latest version of EasyCrypt to verify the security of a class of authenticated key exchange protocols, and of a secure function evaluation protocol based on garbled circuits and oblivious transfer.

Our current work explores two complementary directions. On the one hand, we are extending the EasyCrypt infrastructure in order to derive security guarantees about implementations of cryptographic constructions. Indeed, practical attacks often target specific implementations and exploit some characteristics that are not considered in typical provable security proofs; as a consequence, several widely used implementations of provably secure schemes are vulnerable to attacks. In order to narrow the gap between provable security and implementations, we are extending EasyCrypt with support for reasoning about C-like implementations, and use the CompCert verified C compiler (http://compcert.inria.fr) to carry the security guarantees down to executable implementations [2]. On the other hand, we are developing specialized formalisms to reason about the security of particular classes of constructions. For instance, we have recently developed the ZooCrypt framework [4], which supports automated analysis of chosen-plaintext and chosen-ciphertext security for public-key encryption schemes built from (partial-domain) one-way trapdoor permutations and random oracles. Using ZooCrypt, we have analyzed over a million (automatically generated) schemes, including many schemes from the literature. For chosen-plaintext security, ZooCrypt is able to report, in nearly 99% of the cases, either a proof of security with a concrete security bound or an attack. We are currently extending our approach to reason about encryption schemes based on Diffie-Hellman groups and bilinear pairings, both in the random oracle and in the standard models. More information about the project is available from the project web page http://www.easycrypt.info.

References

1. Almeida, J.B., Barbosa, M., Bangerter, E., Barthe, G., Krenn, S., Zanella-Béguelin, S.: Full proof cryptography: verifiable compilation of efficient zero-knowledge protocols. In: 19th ACM Conference on Computer and Communications Security. ACM (2012)

2. Almeida, J.B., Barbosa, M., Barthe, G., Dupressoir, F.: Certified computer-aided cryptography: efficient provably secure machine code from high-level implementations. In: ACM Conference on Computer and Communications Security. ACM (2013)

3. Backes, M., Barthe, G., Berg, M., Grégoire, B., Skoruppa, M., Zanella-Béguelin, S.: Verified security of Merkle-Damgård. In: IEEE Computer Security Foundations. ACM (2012)

4. Barthe, G., Crespo, J.M., Grégoire, B., Kunz, C., Lakhnech, Y., Schmidt, B., Zanella-Béguelin, S.: Automated analysis and synthesis of padding-based encryption schemes. In: ACM Conference on Computer and Communications Security. ACM (2013)

5. Barthe, G., Grégoire, B., Hedin, D., Heraud, S., Zanella-Béguelin, S.: A Machine-Checked Formalization of Sigma-Protocols. In: IEEE Computer Security Foundations. ACM (2010)

6. Barthe, G., Grégoire, B., Heraud, S., Zanella-Béguelin, S.: Computer-aided security proofs for the working cryptographer. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 71–90. Springer, Heidelberg (2011)

7. Barthe, G., Grégoire, B., Lakhnech, Y., Zanella-Béguelin, S.: Beyond Provable Security: Verifiable IND-CCA Security of OAEP. In: Kiayias, A. (ed.) CT-RSA 2011. LNCS, vol. 6558, pp. 180–196. Springer, Heidelberg (2011)

8. Zanella-Béguelin, S., Barthe, G., Grégoire, B., Olmedo, F.: Formally certifying the security of digital signature schemes. In: IEEE Symposium on Security and Privacy. IEEE Computer Society (2009)

Robert Sedgewick, (Princeton), "If You Can Specify It, You Can Analyze It" - The Lasting Legacy of Philippe Flajolet.
The "Flajolet School" of the analysis of algorithms and combinatorial structures is centered on an effective calculus, known as analytic combinatorics, for the development of mathematical models that are sufficiently accurate and precise that they can be validated through scientific experimentation. It is based on the generating function as the central object of study, first as a formal object that can translate a specification into mathematical equations, then as an analytic object whose properties as a function in the complex plane yield the desired quantitative results. Universal laws of sweeping generality can be proven within the framework, and easily applied. Standing on the shoulders of Cauchy, Polya, de Bruijn, Knuth, and many others, Philippe Flajolet and scores of collaborators developed this theory and demonstrated its effectiveness in a broad range of scientific applications. Flajolet's legacy is a vibrant field of research that holds the key not just to understanding the properties of algorithms and data structures, but also to understanding the properties of discrete structures that arise as models in all fields of science. This talk will survey Flajolet's story and its implications for future research.

"A man ... endowed with an an exuberance of imagination which puts it in his power to establish and populate a universe of his own creation".

Gonzalo Navarro, (University of Chile), Encoding Data Structures.
Classical data structures can be regarded as additional information that is stored on top of the raw data in order to speed up some kind of queries. Some examples are the suffix tree to support pattern matching in a text, the extra structures to support lowest common ancestor queries on a tree, or precomputed shortest path information on a graph.

Some data structures, however, can operate without accessing the raw data. These are called encodings. Encodings are relevant when they do not contain enough information to reproduce the raw data, but just what is necessary to answer the desired queries (otherwise, any data structure could be seen as an encoding, by storing a copy of the raw data inside the structure).

Encodings are interesting because they can occupy much less space than the raw data. In some cases the data itself is not interesting, only the answers to the queries on it, and thus we can simply discard the raw data and retain the encoding. In other cases, the data is used only sporadically and can be maintained in secondary storage, while the encoding is maintained in main memory, thus speeding up the most relevant queries.

When the raw data is available, any computable query on it can be answered with sufficient time. With encodings, instead, one faces a novel fundamental question: what is the effective entropy of the data with respect to a set of queries? That is, what is the minimum size of an encoding that can answer those queries without accessing the data? This question is related to Information Theory, but in a way inextricably associated to the data structure: the point is not how much information the data contains, but how much information is conveyed by the queries. In addition, as usual, there is the issue of how efficiently the queries can be answered depending on how much space is used.

In this talk I will survey some classical and new encodings, generally about preprocessing arrays A[1, n] so as to answer queries on array intervals [i, j] given at query time. I will start with the classical range minimum queries (what is the minimum value in A[i, j]?), which have a long history that culminated a few years ago in an asymptotically space-optimal encoding of 2n+o(n) bits answering queries in constant time. Then I will describe more recent (and partly open) problems such as finding the second minimum in A[i, j], the k smallest values in A[i, j], the kth smallest value in A[i, j], the elements that appear more than a fraction τ of the times in A[i, j], etc. All these queries appear recurrently within other algorithmic problems, and they also have direct applications in data mining.
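
As a small illustration of the encoding idea behind range minimum queries (standard material, not code from the talk): the position of the minimum in A[i, j] depends only on the shape of the Cartesian tree of A, an object that fits in about 2n bits, so the values themselves can be discarded. The Python sketch below answers queries as lowest common ancestors in that tree and checks the answers against a brute-force scan; the asymptotically optimal 2n + o(n)-bit constant-time encodings mentioned above refine this idea.

    import random

    def cartesian_tree(A):
        """Parent pointer of every index in the (min-)Cartesian tree of A."""
        parent, stack = [-1] * len(A), []
        for i in range(len(A)):
            last = -1
            while stack and A[stack[-1]] > A[i]:
                last = stack.pop()
            if stack:
                parent[i] = stack[-1]
            if last != -1:
                parent[last] = i
            stack.append(i)
        return parent

    def rmq_argmin(parent, i, j):
        """Position of the minimum of A[i..j], recovered from the tree shape alone."""
        def ancestors(x):
            path = []
            while x != -1:
                path.append(x); x = parent[x]
            return path
        on_path_j = set(ancestors(j))
        for x in ancestors(i):          # first common ancestor = LCA = the argmin
            if x in on_path_j:
                return x

    rng = random.Random(0)
    A = rng.sample(range(1000), 50)     # distinct values, so the argmin is unique
    par = cartesian_tree(A)
    for _ in range(200):
        i, j = sorted(rng.sample(range(50), 2))
        assert rmq_argmin(par, i, j) == min(range(i, j + 1), key=A.__getitem__)
    print("all 200 queries answered correctly from the tree shape alone")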

J. Ian Munro, (University of Waterloo), Succinct Data Structures ... Not Just for Graphs.
Succinct data structures are data representations that use (nearly) the information-theoretic minimum space for the combinatorial object they represent, while performing the necessary query operations in constant (or nearly constant) time. So, for example, we can represent a binary tree on n nodes in 2n + o(n) bits, rather than the "obvious" 5n or so words, i.e. 5n lg(n) bits. Such a difference in memory requirements can easily translate to major differences in runtime as a consequence of the level of memory in which most of the data resides. The field developed to a large extent because of applications in text indexing, so there has been a major emphasis on trees and a secondary emphasis on graphs in general; but in this talk we will draw attention to a much broader collection of combinatorial structures for which succinct structures have been developed. These will include sets, permutations, functions, partial orders and groups, and yes, a bit on graphs.
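
A minimal sketch of the kind of space saving referred to above (a standard construction, not taken from the talk): the shape of a binary tree on n nodes written as 2n + 1 bits via a preorder traversal that emits 1 for a node and 0 for an empty subtree, instead of two machine-word pointers per node. The succinct structures discussed in the talk additionally support navigation in (nearly) constant time using o(n) extra bits; this sketch only shows that the shape itself fits in about 2 bits per node.

    def encode(node):
        """node is None or a pair (left, right); preorder bits, 1 = node, 0 = empty."""
        if node is None:
            return "0"
        left, right = node
        return "1" + encode(left) + encode(right)

    def decode(bits, pos=0):
        """Inverse of encode; returns (tree, next_position)."""
        if bits[pos] == "0":
            return None, pos + 1
        left, pos = decode(bits, pos + 1)
        right, pos = decode(bits, pos)
        return (left, right), pos

    tree = ((None, (None, None)), ((None, None), None))   # a small 5-node example
    bits = encode(tree)
    print(bits, "->", len(bits), "bits for", bits.count("1"), "nodes")
    assert decode(bits)[0] == tree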





LATIN 2012


Martin Davis, (Courant Institute, NYU, USA), Universality is Ubiquitous.
Scott Aaronson, (Massachusetts Institute of Technology, USA), Turing Year Lecture.
Kirk Pruhs, (University of Pittsburgh, USA), Green Computing Algorithmics: Managing Power Heterogeneity.
Marcos Kiwi, (Universidad de Chile, Chile), Combinatorial and Algorithmic Problems Involving (Semi-)Random Sequences.





LATIN 2010


Piotr Indyk, (Massachusetts Institute of Technology, USA), Sparse Recovery Using Sparse Random Matrices.
Cristopher Moore, (University of New Mexico and Santa Fe Institute, USA), Continuous and Discrete Methods in Computer Science.
Sergio Rajsbaum, (Universidad Nacional Autónoma de México, Mexico), Iterated Shared Memory Models.
Leslie Valiant, (Harvard University, USA), Some Observations on Holographic Algorithms.
Ricardo Baeza-Yates, John Brzozowski, Volker Diekert, and Jacques Sakarovitch, (Yahoo Research (Spain), University of Waterloo (Canada), Universität Stuttgart (Germany), and Ecole Nationale Supérieure des Télécommunications (France)), Vignettes on the work of Imre Simon.





LATIN 2008


Claudio Lucchesi, (Unicamp, Brazil), Pfaffian Bipartite Graphs: The Elusive Heawood Graph.
Moni Naor, (Weizmann Institute, Israel), Games, Exchanging Information and Extracting Randomness.
Wojciech Szpankowski, (Purdue University, USA), Tries.
Eva Tardos, (Cornell U., USA), Games in Networks.
Robert Tarjan, (Princeton U., USA), Graph Algorithms.





LATIN 2006


Ricardo Baeza-Yates, (Universidad de Chile & Universitat Pompeu Fabra), Algorithmic Challenges in Web Search Engines.
In this paper we present the main algorithmic challenges that large Web search engines face today. These challenges are present in all the modules of a Web retrieval system, ranging from the gathering of the data to be indexed (crawling) to the selection and ordering of the answers to a query (searching and ranking). Most of the challenges are ultimately related to the quality of the answer or the efficiency in obtaining it, although some are relevant even to the existence of search engines: context-based advertising.
Anne Condon, (U. British Columbia), RNA molecules: glimpses through an algorithmic lens.
Dubbed the "architects of eukaryotic complexity", RNA molecules are increasingly in the spotlight, in recognition of the important catalytic and regulatory roles they play in our cells and their promise in therapeutics. Our goal is to describe the ways in which algorithms can help shape our understanding of RNA structure and function.
Ferran Hurtado, (Universitat Politècnica de Catalunya), Squares.
In this talk we present several results and open problems having squares, the basic geometric entity, as a common thread. These results have been gathered from various papers; coauthors and precise references are given in the descriptions that follow.
R. Ravi, (Carnegie Mellon University), Matching Based Augmentations for Approximating Connectivity Problems.
We describe a very simple idea for designing approximation algorithms for connectivity problems: For a spanning tree problem, the idea is to start with the empty set of edges, and add matching paths between pairs of components in the current graph that have desirable properties in terms of the objective function of the spanning tree problem being solved. Such matchings augment the solution by reducing the number of connected components to roughly half their original number, resulting in a logarithmic number of such matching iterations. A logarithmic performance ratio results for the problem by appropriately bounding the contribution of each matching to the objective function by that of an optimal solution.

In this survey, we trace the initial application of these ideas to traveling salesperson problems through a simple tree pairing observation down to more sophisticated applications for buy-at-bulk type network design problems.
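
For the plain minimum spanning tree objective the component-halving pattern described above is essentially Boruvka's algorithm; the survey applies the same pattern to much harder objectives. The Python sketch below is only meant to exhibit the round structure: in each round every component selects a cheapest outgoing edge, the selected edges are added, and the number of components at least halves, so O(log n) rounds suffice.

    def spanning_tree_by_rounds(n, weighted_edges):
        """weighted_edges: list of (w, u, v).  Returns (tree_edges, rounds_used)."""
        comp = list(range(n))
        def find(x):
            while comp[x] != x:
                comp[x] = comp[comp[x]]; x = comp[x]
            return x
        tree, rounds = [], 0
        while len({find(v) for v in range(n)}) > 1:
            rounds += 1
            best = {}                            # component -> cheapest outgoing edge
            for w, u, v in weighted_edges:
                cu, cv = find(u), find(v)
                if cu == cv:
                    continue
                for c in (cu, cv):
                    if c not in best or w < best[c][0]:
                        best[c] = (w, u, v)
            if not best:                         # disconnected input: give up
                break
            for w, u, v in best.values():        # add the selected edges, skipping cycles
                if find(u) != find(v):
                    comp[find(u)] = find(v)
                    tree.append((u, v))
        return tree, rounds

    edges = [(4, 0, 1), (1, 1, 2), (3, 2, 3), (2, 3, 0), (5, 1, 3)]
    print(spanning_tree_by_rounds(4, edges))     # three tree edges found in two rounds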

Madhu Sudan, (MIT), Modelling Errors and Recovery for Communication.
The theory of error-correction has had two divergent schools of thought, going back to the works of Shannon and Hamming. In the Shannon school, error is presumed to have been effected probabilistically. In the Hamming school, the error is modeled as effected by an all-powerful adversary. The two schools lead to drastically different limits. In the Shannon model, a binary channel with error-rate close to, but less than, 50% is usable for effective communication. In the Hamming model, a binary channel with an error-rate of more than 25% prohibits unique recovery of the message.

In this talk, we describe the notion of list-decoding, as a bridge between the Hamming and Shannon models. This model relaxes the notion of recovery to allow for a "list of candidates". We describe results in this model, and then show how these results can be applied to get unique recovery under "computational restrictions" on the channel's ability, a model initiated by R. Lipton in 1994.

Based on joint works with Venkatesan Guruswami (U. Washington), and with Silvio Micali (MIT), Chris Peikert (MIT) and David Wilson (MIT).
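
A toy numerical illustration of the contrast described above (generic, not drawn from the talk): for a small random binary linear code we brute-force the list of codewords within a given Hamming radius of a corrupted word. Beyond half the minimum distance unique decoding can fail, yet the list of nearby codewords typically remains very short, which is the phenomenon list decoding exploits. The code parameters below are arbitrary.

    import random
    from itertools import product

    rng = random.Random(7)
    n, k = 16, 5
    G = [[rng.randint(0, 1) for _ in range(n)] for _ in range(k)]   # random generator matrix

    def encode(msg):
        return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    codewords = {encode(m) for m in product((0, 1), repeat=k)}
    d_min = min(hamming(c, c2) for c in codewords for c2 in codewords if c != c2)

    sent = encode((1, 0, 1, 1, 0))
    received = list(sent)
    for i in rng.sample(range(n), 5):      # flip 5 of the 16 positions (~31% errors)
        received[i] ^= 1
    received = tuple(received)

    print("minimum distance:", d_min, "unique decoding radius:", (d_min - 1) // 2)
    for radius in (2, 4, 6):
        nearby = [c for c in codewords if hamming(c, received) <= radius]
        print("radius", radius, "-> list size", len(nearby))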

Sergio Verdú, (Princeton), Lossless Data Compression via Error Correction.
This plenary talk gives an overview of recent joint work with G. Caire and S. Shamai on the use of linear error correcting codes for lossless data compression, joint source/channel coding and interactive data exchange.
Avi Wigderson, (IAS), The power and weakness of randomness in computation.
Humanity has grappled with the meaning and utility of randomness for centuries. Research in the Theory of Computation in the last thirty years has enriched this study considerably. We describe two main aspects of this research on randomness, demonstrating its power and weakness respectively.





LATIN 2004


Cynthia Dwork, (Microsoft Research), Fighting Spam: The Science.
Mike Paterson, (U. of Warwick), Analysis of Scheduling Algorithms for Proportionate Fairness.
We consider a multiprocessor operating system in which each current job is guaranteed a given proportion over time of the total processor capacity. A scheduling algorithm allocates units of processor time to appropriate jobs at each time step. We measure the goodness of such a scheduler by the maximum amount by which the cumulative processor time for any job ever falls below the "fair" proportion guaranteed in the long term.

In particular we focus our attention on very simple schedulers which impose minimal computational overheads on the operating system. For several such schedulers we obtain upper and lower bounds on their deviations from fairness. The scheduling quality which is achieved depends quite considerably on the relative processor proportions required by each job.

We will outline the proofs of some of the upper and lower bounds, both for the unrestricted problem and for restricted versions where constraints are imposed on the processor proportions. Many problems remain to be investigated and we will give the results of some exploratory simulations.

This is joint research with Micah Adler, Petra Berenbrink, Tom Friedetzky, Leslie Ann Goldberg and Paul Goldberg.
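
A small single-processor simulation in the spirit of this setting (the "largest lag first" rule below is a generic illustration, not necessarily one of the schedulers analyzed in the talk): each job j is promised a share s_j of the processor per time step, and we track how far any job's cumulative allocation ever falls behind the promised t·s_j.

    from fractions import Fraction

    def worst_lag(shares, steps=10_000):
        """Single-processor 'largest lag first' scheduler; returns the worst unfairness."""
        shares = [Fraction(s) for s in shares]
        assert sum(shares) == 1
        received = [0] * len(shares)
        worst = Fraction(0)
        for t in range(1, steps + 1):
            lags = [t * s - r for s, r in zip(shares, received)]
            j = max(range(len(shares)), key=lambda i: lags[i])   # most-behind job runs
            received[j] += 1
            worst = max(worst, *(t * s - r for s, r in zip(shares, received)))
        return worst

    print(float(worst_lag(["1/2", "1/3", "1/6"])))
    print(float(worst_lag(["7/10", "2/10", "1/10"])))

As the abstract notes, the observed deviation depends noticeably on the particular vector of proportions, which is easy to see by varying the shares above.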

Yoshiharu Kohayakawa, (U. São Paulo), Advances in the Regularity Method.
Jean-Eric Pin, (CNRS/U. Paris VII), The consequences of Imre Simon's work in the theory of automata, languages and semigroups.
In this lecture, I will show how influential the work of Imre has been in the theory of automata, languages and semigroups. I will mainly focus on two celebrated problems, the restricted star-height problem (solved) and the decidability of the dot-depth hierarchy (still open). These two problems have led to surprising developments and are currently the topic of very active research.

I will present the prominent results of Imre on both topics, and demonstrate how these results have been the driving force of research in this area for the last thirty years.

Dexter Kozen, (Cornell U.), Kleene Algebra with Tests and the Static Analysis of Programs.
I will propose a general framework for the static analysis of programs based on Kleene algebra with tests (KAT). I will show how KAT can be used to statically verify compliance with safety policies specified by security automata. The method is sound and complete over relational interpretations. I will illustrate the method on an example involving the correctness of a device driver.
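
As a down-to-earth illustration of what such a compliance check establishes (phrased here as plain automata reachability rather than in KAT itself, which is the formalism the talk actually uses): abstract the program as a finite automaton over primitive actions and the safety policy as a security automaton with a violating sink state; the program complies iff no reachable state of the product pairs a program state with that sink. The device-driver example below is hypothetical.

    from collections import deque

    # hypothetical toy driver: it must never "write" after "close"
    program = {                  # program state -> {action: next state}
        "s0": {"open": "s1"},
        "s1": {"write": "s1", "close": "s2"},
        "s2": {"write": "s1", "halt": "s2"},     # bug: a write is possible after close
    }
    policy = {                   # security automaton; "bad" is the violation sink
        "closed": {"open": "opened", "close": "closed", "halt": "closed", "write": "bad"},
        "opened": {"open": "opened", "close": "closed", "halt": "opened", "write": "opened"},
        "bad":    {},
    }

    def complies(program, p0, policy, q0):
        """BFS over the product; a reachable 'bad' policy state is a violation witness."""
        seen, queue = {(p0, q0)}, deque([(p0, q0)])
        while queue:
            p, q = queue.popleft()
            if q == "bad":
                return False, (p, q)
            for action, p2 in program[p].items():
                q2 = policy[q].get(action, "bad")   # undeclared action counts as a violation
                if (p2, q2) not in seen:
                    seen.add((p2, q2)); queue.append((p2, q2))
        return True, None

    print(complies(program, "s0", policy, "closed"))   # (False, ...): the bug is caught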





LATIN 2002


Jennifer T. Chayes, (Microsoft Research), Phase Transitions in Computer Science.
Phase transitions are familiar phenomena in physical systems. But they also occur in many probabilistic and combinatorial models, including random versions of some classic problems in theoretical computer science.

In this talk, I will discuss phase transitions in several systems, including the random graph -- a simple probabilistic model which undergoes a critical transition from a disordered to an ordered phase; k-satisfiability -- a canonical model in theoretical computer science which undergoes a transition from solvability to insolvability; and optimum partitioning -- a fundamental problem in combinatorial optimization, which undergoes a transition from existence to non-existence of a perfect optimizer.

Technically, phase transitions only occur in infinite systems. However, real systems and the systems we simulate are large, but finite. Hence the question of finite-size scaling: Namely, how does the transition behavior emerge from the behavior of large, finite systems? Results on the random graph, satisfiability and optimum partitioning locate the critical windows of these transitions and establish interesting features within these windows.

No knowledge of phase transitions will be assumed in this talk.
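
A quick experiment with the first transition mentioned above (an illustration, not part of the talk): in the random graph G(n, p) with p = c/n, the largest connected component jumps from logarithmic size to a constant fraction of the vertices as c crosses 1; at finite n the jump is blurred, which is precisely the finite-size scaling question.

    import random
    from collections import defaultdict, deque

    def largest_component_fraction(n, c, seed=0):
        rng = random.Random(seed)
        p = c / n
        adj = defaultdict(list)
        for u in range(n):                       # sample G(n, p) edge by edge
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].append(v); adj[v].append(u)
        seen, best = set(), 0
        for s in range(n):                       # BFS over every component
            if s in seen:
                continue
            seen.add(s); queue, size = deque([s]), 1
            while queue:
                x = queue.popleft()
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y); queue.append(y); size += 1
            best = max(best, size)
        return best / n

    for c in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
        print(c, round(largest_component_fraction(2000, c), 3))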

Christos H. Papadimitriou, (U. California, Berkeley), The Internet, the Web, and Algorithms.
The Internet and the worldwide web, unlike all other computational artifacts, were not deliberately designed by a single entity, but emerged from the complex interactions of many. As a result, they must be approached very much the same way that cells, galaxies or markets are studied in other sciences: By speculative (and falsifiable) theories trying to explain how selfish algorithmic actions could have led to what we observe. I present several instances of recent work on this theme, with several collaborators.
Joel Spencer, (Courant Institute), Erdos Magic.
The Probabilistic Method is a lasting legacy of the late Paul Erdos. We give two examples - both problems first formulated by Erdos in the 1960s with new results in the last few years and both with substantial open questions. Further, in both examples we take a Computer Science vantage point, designing a probabilistic algorithm to create the object (a coloring and a packing, respectively) and showing that with positive probability the created object has the desired properties.

Given m sets each of size n (with an arbitrary intersection pattern) we want to color the underlying vertices Red and Blue so that no set is monochromatic. Erdos showed this may always be done if m < 2^(n-1); we give a recent argument of Radhakrishnan and Srinivasan that extends this to m < c·2^n·√(n/ln n). One first colors randomly and then recolors the blemishes with a clever random sequential algorithm.
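
A quick experiment behind the 2^(n-1) bound (illustration only; the recoloring step that yields the stronger bound is not implemented): under a uniformly random Red/Blue coloring each fixed n-set is monochromatic with probability 2^(1-n), so for m < 2^(n-1) sets the expected number of monochromatic sets is below 1 and some coloring avoids them all; in fact a random coloring already succeeds with positive probability, as the sketch below shows empirically on a randomly generated family.

    import random

    def random_coloring_works(family, universe, rng):
        """One uniformly random Red/Blue coloring; True if no set is monochromatic."""
        red = {v: rng.random() < 0.5 for v in universe}
        return all(len({red[v] for v in s}) == 2 for s in family)

    rng = random.Random(0)
    n, m, universe = 8, 100, range(200)          # m = 100 < 2^(n-1) = 128
    family = [rng.sample(universe, n) for _ in range(m)]
    trials = 1000
    wins = sum(random_coloring_works(family, universe, rng) for _ in range(trials))
    print(wins, "of", trials, "random colorings left no set monochromatic")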

In a universe of size N we have a family of sets, each of size k, such that each vertex is in D sets and any two vertices have only o(D) common sets. Asymptotics are for fixed k with N, D → ∞. We want an asymptotic packing, a subfamily of ~N/k disjoint sets. Erdos and Hanani conjectured such a packing exists (in an important special case of asymptotic designs) and this conjecture was shown by Rodl. We give a simple proof, due to the speaker, that analyzes the random greedy algorithm.

Jorge Urrutia, (UNAM), Open Problems in Computational Geometry.
In this paper we present a collection of problems which have defied solution for some time. We hope that this paper will stimulate renewed interest in these problems, leading to solutions to at least some of them.
Umesh V. Vazirani, (U. California, Berkeley), Quantum Algorithms.
Mihalis Yannakakis, (Avaya Laboratories), Testing and Checking of Finite State Systems.
Finite state machines have been used to model a wide variety of systems, including sequential circuits, communication protocols, and other types of reactive systems, i.e., systems that interact with their environment. In testing problems we are given a system, which we may test by providing inputs and observing the outputs produced.

The goal is to design test sequences so that we can deduce desired information about the given system under test, such as whether it implements correctly a given specification machine (conformance testing), or whether it satisfies given requirement properties (black-box checking).

In this talk we will review some of the theory and algorithms on basic testing problems for systems modeled by different types of finite state machines. Conformance testing of deterministic machines has been investigated for a long time; we will discuss various efficient methods. Testing of nondeterministic and probabilistic machines is related to games with incomplete information and to partially observable Markov decision processes.

The verification of properties for finite state systems with a known structure (i.e., "white box" checking) is known as the model-checking problem, and has been thoroughly studied for many years, yielding a rich theory, algorithms and tools. Black-box checking, i.e., the verification of properties for systems with an unknown structure, combines elements from model checking, conformance testing and learning.
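
A minimal sketch of the black-box view (random input sequences only; the structured checking-sequence and distinguishing-sequence methods surveyed in the talk are far more efficient and give guarantees): drive a specification Mealy machine and a black-box implementation with the same inputs and compare outputs. The turnstile machines below are hypothetical toy examples.

    import random

    spec = {            # specification: state -> {input: (output, next state)}
        "locked":   {"coin": ("unlock", "unlocked"), "push": ("alarm", "locked")},
        "unlocked": {"coin": ("thanks", "unlocked"), "push": ("lock", "locked")},
    }
    impl = {            # faulty implementation: wrong output on push when unlocked
        "locked":   {"coin": ("unlock", "unlocked"), "push": ("alarm", "locked")},
        "unlocked": {"coin": ("thanks", "unlocked"), "push": ("alarm", "locked")},
    }

    def conforms(spec, impl, start, inputs, trials=200, length=10, seed=0):
        """Drive both machines with the same random input words; compare outputs."""
        rng = random.Random(seed)
        for _ in range(trials):
            s, t = start, start
            for _ in range(length):
                x = rng.choice(inputs)
                out_s, s = spec[s][x]
                out_t, t = impl[t][x]
                if out_s != out_t:
                    return False
        return True

    print(conforms(spec, spec, "locked", ["coin", "push"]))   # True
    print(conforms(spec, impl, "locked", ["coin", "push"]))   # False, with high probability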





LATIN 2000


Allan Borodin, (U. Toronto), On the Competitive Theory and Practice of Portfolio Selection.
Philippe Flajolet, (INRIA), .
Joachim von zur Gathen, (U. Paderborn), Subresultants Revisited.
Yoshiharu Kohayakawa, (U. Sao Paulo), Algorithmic Aspects of Regularity.
Andrew Odlyzko, (AT&T Labs), Integer Factorization and Discrete Logarithms.
Prabhakar Raghavan, (IBM Almaden), Graph Structure of the Web: A Survey.





LATIN 1998


Noga Alon, (Tel Aviv Univ.), Spectral Techniques in Graph Algorithms.
Richard Beigel, (Lehigh Univ.), The Geometry of Browsing.
Gilles Brassard, (Univ. of Montréal), Quantum Cryptanalysis of Hash and Claw-Free Functions.
Herbert Edelsbrunner, (Univ. of Illinois, Urbana-Champaign), Shape Reconstruction with Delaunay Complex.
Juan A. Garay, (IBM, Yorktown Heights), Batch Verification with Applications to Cryptography and Checking.





LATIN 1995


Josep Diaz, NC Approximations.
Isaac Scherson, Load Balancing in Distributed Systems.
J. Ian Munro, Fast, Space Efficient Data Structures.
Alberto Apostolico, The String Statistics Problem.
Mike Waterman, Probability Distributions for Sequence Alignment Scores.





LATIN 1992


Jean-Paul Allouche, (CNRS), q-Regular Sequences and other Generalizations of q-Automatic Sequences.
Manuel Blum, (U. California, Berkeley), Universal Statistical Tests.
Kosaburo Hashiguchi, (Toyohashi U. of Technology), The Double Reconstruction Conjectures about Colored Hypergraphs and Colored Directed Graphs.
Erich Kaltofen, (Rensselaer Polytechnic Institute), Polynomial Factorization 1987-1991.
Arjen K. Lenstra, (Bellcore), Massively Parallel Computing and Factoring.
Gene Myers, (U. of Arizona), Approximate Matching of Network Expressions with Spacers.
Jean-Eric Pin, (Bull), On Reversible Automata.
Vaughan Pratt, (Stanford U.), Arithmetic + Logic + Geometry = Concurrency.
Daniel D. Sleator, (Carnegie Mellon U.), Data Structures and Terminating Petri Nets.
Michel Cosnard, (Ecole Normale Supérieure de Lyon), Complexity Issues in Neural Network Computations.

