Manuel Hermenegildo | herme@fi.upm.es
Technical University of Madrid, Department of Computer Science, Campus de Montegancedo, E-28660 Boadilla del Monte, Madrid, Spain.
Tel. +34913367448, Fax +34913524819
Abstract: Abstract interpretation has allowed the development of sophisticated program analyses which are provably correct and practical. The semantic approximations produced by such analyses have traditionally been applied to optimization during program compilation. In this talk we present a framework for combined static/dynamic debugging and validation using semantic approximations. Program validation and detection of errors is first performed at compile time by inferring properties of the program via abstract interpretation-based static analysis and comparing this information against (partial) specifications written in terms of assertions. The comparison process can result in proving statically (i.e., at compile time) that the assertions hold (i.e., they are validated) or that they are violated, and thus bugs have been detected. User-provided assertions (or parts of assertions) which cannot be statically proved or disproved are optionally translated into run-time tests. Both the static and the dynamic checking are provably safe in the sense that all errors flagged are definite violations of the specifications. Classical examples of such assertions are type declarations. However, here we are interested in supporting a more general setting in which assertions are optional and can be of a much more general nature, stating additionally other properties, some of which cannot always be determined statically for all programs. We will illustrate the practical usefulness of the framework by demonstrating an implementation of these ideas in CiaoPP, the preprocessor of the Ciao programming system, which uses abstract interpretation to infer properties of program points, including types, modes and other variable instantiation properties, non-failure, determinacy, bounds on computational cost, bounds on the sizes of data in the program, etc.
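The assertion language referred to above is part of the Ciao system itself; purely as a hypothetical illustration of the check-statically-or-else-test-at-run-time discipline described in the abstract, the Python sketch below discharges a parameter assertion at decoration time when a very crude static approximation suffices, and otherwise compiles it into a run-time test. The decorator name, the use of type hints as the "abstract domain", and the example function are all invented for this sketch and are not part of CiaoPP.

```python
# Hypothetical illustration of the static/dynamic assertion-checking discipline
# described above. It is NOT CiaoPP or the Ciao assertion language; the names
# and the trivial "abstract domain" (Python type hints) are invented here.
import functools
import inspect
import typing


def check_or_instrument(func):
    """Try to discharge each parameter assertion statically; whatever cannot be
    proved is turned into a run-time test wrapped around the function."""
    hints = typing.get_type_hints(func)
    sig = inspect.signature(func)

    pending = {}  # assertions that could not be proved at "compile time"
    for name, param in sig.parameters.items():
        expected = hints.get(name)
        if expected is None:
            continue  # no assertion given for this argument
        default = param.default
        if default is not inspect.Parameter.empty and isinstance(default, expected):
            # crude "static analysis": a literal default of the right type
            print(f"{func.__name__}/{name}: proved statically (checked)")
        else:
            print(f"{func.__name__}/{name}: not proved, emitting run-time test")
            pending[name] = expected

    if not pending:
        return func  # everything was validated at "compile time"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        for name, expected in pending.items():
            if not isinstance(bound.arguments[name], expected):
                raise AssertionError(
                    f"run-time violation: {name} is not a {expected.__name__}")
        return func(*args, **kwargs)

    return wrapper


@check_or_instrument
def scale(xs: list, factor: int = 2):
    return [x * factor for x in xs]


print(scale([1, 2, 3]))        # residual run-time test on `xs` passes
try:
    scale("oops")              # definite violation, caught before the call
except AssertionError as err:
    print(err)
```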
Heikki Mannila | Heikki.Mannila@cs.Helsinki.FI
HIIT Basic Research Unit, University of Helsinki and Helsinki University of Technology, PO Box 26, FIN-00014 Helsinki, Finland.
Abstract: Data mining has in recent years emerged as an interesting area on the boundary between algorithms, probabilistic modeling, statistics, and databases. Data mining research can be divided into global approaches, which try to model the whole data, and local methods, which try to find useful patterns occurring in the data. We briefly discuss some simple local and global techniques, review two attempts at combining the approaches, and list open problems with an algorithmic flavor.
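As a concrete, if simplified, instance of the "local pattern" side mentioned in the abstract, the sketch below (an illustration added here, not material from the talk) mines frequent itemsets from a toy 0/1 dataset with a level-wise, Apriori-style search; a global approach would instead fit a single model, such as a joint probability distribution, to the whole data. The dataset and support threshold are made up.

```python
# A minimal level-wise frequent-itemset miner, in the spirit of Apriori.
# Illustrative sketch only; the transactions and threshold are invented.
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "beer", "eggs"},
    {"milk", "beer", "cola"},
    {"bread", "milk", "beer"},
    {"bread", "milk", "cola"},
]
min_support = 3          # a pattern is "frequent" if it occurs in >= 3 baskets


def support(itemset):
    """Number of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions)


items = sorted(set().union(*transactions))
frequent = [frozenset({i}) for i in items if support({i}) >= min_support]
level = frequent
while level:
    # Candidates of size k+1 are unions of two frequent size-k sets;
    # infrequent candidates are filtered out by the support count below.
    candidates = {a | b for a, b in combinations(level, 2)
                  if len(a | b) == len(a) + 1}
    level = [c for c in candidates if support(c) >= min_support]
    frequent.extend(level)

for pattern in frequent:
    print(sorted(pattern), support(pattern))
```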
Madhav Marathe | marathe@lanl.gov
Los Alamos National Laboratory, USA.
Abstract: Over the last three decades, language recognition models of computation and the associated resource-bounded reductions have played a central role in characterizing the computational complexity of combinatorial problems. However, due to their generality, these concepts have inherent limitations: they typically ignore the underlying structure and semantics of the problem instances. Thus they are generally not "robust" in terms of simultaneously classifying variants of the original problem.

For example, consider the two well-studied satisfiability problems 3SAT and 1-3SAT (One-in-Three SAT). Highly efficient resource-bounded reductions (quasi-linear time and linear space) exist from 3SAT to 1-3SAT, showing that 1-3SAT is NP-hard. But such efficient resource-bounded reductions cannot generally be used to simultaneously characterize the complexity of variants such as MAX-1-3SAT (optimization version), #1-3SAT (counting version), Q-1-3SAT (quantified version), H-1-3SAT (hierarchically specified version), or even Pl-1-3SAT (planar version). In the past, the complexities of these variants have been characterized by devising "new" reductions for each variant.
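To make the kind of reduction mentioned above concrete, here is a small sketch (added for illustration, not taken from the talk) of one standard clause-by-clause gadget mapping a 3SAT instance to a 1-in-3 SAT instance; each clause introduces four fresh variables and three exactly-one constraints, so the transformation runs in linear time and space.

```python
# Sketch of a classical local-replacement reduction 3SAT -> 1-in-3 SAT.
# Literals are nonzero integers (-v means "not v"); each 3SAT clause
# (x v y v z) is replaced by the three exactly-one constraints
#   exactly-one(-x, a, b), exactly-one(y, b, c), exactly-one(-z, c, d)
# over fresh variables a, b, c, d. The variable numbering is just for
# this illustration.

def three_sat_to_one_in_three(clauses, num_vars):
    """clauses: list of (x, y, z) literal triples. Returns the 1-in-3 instance."""
    out, fresh = [], num_vars
    for (x, y, z) in clauses:
        a, b, c, d = fresh + 1, fresh + 2, fresh + 3, fresh + 4
        fresh += 4
        out += [(-x, a, b), (y, b, c), (-z, c, d)]
    return out, fresh


def exactly_one_satisfied(constraints, assignment):
    """assignment: dict var -> bool. Checks the 1-in-3 semantics."""
    value = lambda lit: assignment[abs(lit)] ^ (lit < 0)
    return all(sum(value(l) for l in triple) == 1 for triple in constraints)


# Tiny check: the satisfying 3SAT assignment x1=True, x2=False, x3=False for
# the single clause (x1 v x2 v x3) extends to the fresh gadget variables.
constraints, total_vars = three_sat_to_one_in_three([(1, 2, 3)], 3)
print(f"{total_vars} variables, {len(constraints)} exactly-one constraints")
assignment = {1: True, 2: False, 3: False, 4: False, 5: True, 6: False, 7: False}
print(exactly_one_satisfied(constraints, assignment))   # True
```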
In this talk, I will outline research efforts aimed at developing a "predictive computational complexity theory". The goal is to characterize simultaneously (and in a predictive manner) the relative complexity and approximability of a large class of combinatorial problems. Applications of these general ideas include:

(A) [Upper bounds] Efficient approximation algorithms and approximation schemes with provable performance guarantees for large classes of natural NP-, PSPACE- and NEXPTIME-hard optimization problems.

(B) [Lower bounds] General results simultaneously characterizing the decision, counting, optimization and approximation complexities of generalized constraint satisfaction problems, and their applications to combinatorial problems.

(C) Further insights into recently proposed ideas on periodically and hierarchically specified problems and their implications for understanding the relationship between various computational complexity classes (e.g., Freedman's ideas on approaching the P vs NP question).
Maurice Nivat (2002 EATCS Award) | tcsmn@club-internet.fr
Paris, France.
Abstract: This paper studies the problem of reconstructing binary matrices that are accessible only through a few evaluations of their discrete X-rays. The question is prominently motivated by the demand in materials science for a tool for reconstructing crystalline structures from images obtained by high-resolution transmission electron microscopy. Various approaches have been suggested for solving the general problem of reconstructing binary matrices that are given by their discrete X-rays in a number of directions, but more work has to be done to handle the ill-posedness of the problem. We can tackle this ill-posedness by limiting the set of possible solutions, using appropriate a priori information, to only those which are reasonably typical of the class of matrices containing the unknown matrix that we wish to reconstruct. Mathematically, this information is modelled in terms of a class of binary matrices to which the solution must belong. Several papers study the problem on classes of binary matrices on which some connectivity and convexity constraints are imposed.

We study the reconstruction problem on some new classes consisting of binary matrices with periodicity properties, and we propose a polynomial-time algorithm for reconstructing these binary matrices from their orthogonal discrete X-rays.
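The paper's algorithm exploits the periodicity constraints, but even the unconstrained two-direction case conveys the flavour of the problem. The sketch below (an illustration added here, not the paper's algorithm) reconstructs a binary matrix from its row and column sums using Ryser's classical greedy construction, and the closing comment recalls why the problem is ill-posed: distinct matrices can share the same orthogonal X-rays.

```python
# Ryser-style greedy reconstruction of a binary matrix from its two orthogonal
# discrete X-rays (row sums and column sums). This illustrates only the
# unconstrained problem, not the periodicity-constrained algorithm of the paper.

def reconstruct(row_sums, col_sums):
    """Return a 0/1 matrix with the given projections, or None if impossible."""
    if sum(row_sums) != sum(col_sums):
        return None
    remaining = list(col_sums)
    matrix = []
    for r in row_sums:
        # Put this row's ones into the r columns with the largest remaining demand.
        cols = sorted(range(len(remaining)), key=lambda j: -remaining[j])[:r]
        if any(remaining[j] == 0 for j in cols):
            return None                       # projections are inconsistent
        matrix.append([1 if j in cols else 0 for j in range(len(remaining))])
        for j in cols:
            remaining[j] -= 1
    return matrix if all(v == 0 for v in remaining) else None


for row in reconstruct(row_sums=[2, 1, 2], col_sums=[2, 2, 1]):
    print(row)
# Ill-posedness: swapping a 2x2 "switch" ([[1,0],[0,1]] <-> [[0,1],[1,0]])
# preserves both X-rays, so the matrix above is generally not the unique answer.
```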
Andrew Pitts | Andrew.Pitts@cl.cam.ac.uk
University of Cambridge, Computer Laboratory, William Gates Building, JJ Thomson Avenue, CB3 0FD Cambridge, United Kingdom.
Tel. +44 1223 334629, Fax +44 1223 334678
Abstract: The notion of symmetry in mathematical structures is a powerful tool in many branches of mathematics. The talk presents an application of this notion to programming language theory.
John Reif | reif@cs.duke.edu
Duke University, Durham, USA.
Abstract: While the topic of Molecular Computation would have appeared even half a dozen years ago to be purely conjectural, it is now an emerging subfield of computer science, with the development of its theoretical basis and a number of moderate to large-scale experimental demonstrations. This paper focuses on a subarea of Molecular Computation known as "DNA self-assembly". Self-assembly is the spontaneous self-ordering of substructures into superstructures driven by the selective affinity of the substructures. DNA provides a molecular-scale material for effecting this programmable self-assembly, using the selective affinity of pairs of DNA strands to form DNA nanostructures. DNA self-assembly is the most advanced and versatile system known for programmable construction of patterned systems on the molecular scale. The methodology of DNA self-assembly begins with the synthesis of single-strand DNA molecules that self-assemble into macromolecular building blocks called DNA tiles. These tiles have sticky ends that match the sticky ends of other DNA tiles, facilitating further assembly into large structures known as DNA tiling lattices. In principle, DNA tiling assemblies can be made to form any computable two- or three-dimensional pattern, however complex, with the appropriate choice of the tiles' component DNA.

This paper overviews the evolution of DNA self-assembly techniques from pure theory to experimental practice. We describe how some theoretical developments have made a major impact on the design of self-assembly experiments, as well as a number of theoretical challenges remaining in the area of DNA self-assembly. We discuss algorithms and software for the design, simulation and optimization of DNA tiling assemblies. We also describe the first experimental demonstrations of DNA self-assemblies that execute molecular computations and the assembly of patterned objects at the molecular scale. Recent experimental results indicate that this technique is scalable. Molecular imaging devices such as atomic force microscopes and transmission electron microscopes allow visualization of self-assembled two-dimensional DNA tiling lattices composed of hundreds of thousands of tiles. These assemblies can be used as scaffolding on which to position molecular electronics and robotics components with precision and specificity. The programmability lets this scaffolding have the patterning required for fabricating complex devices made of these components.
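As a purely computational toy (added here for illustration; it models neither the wet-lab protocols nor any specific tile set from the paper), the sketch below mimics how tiles whose sticky ends must match can carry out a computation: each new tile binds to two neighbours in the row below and exposes the XOR of their output bits, so row-by-row growth from a single seed tile produces the Sierpinski (Pascal-triangle mod 2) pattern that algorithmic DNA self-assembly famously demonstrated.

```python
# Toy abstract-tile-assembly sketch: a tile in row i+1 "binds" where its two
# input sticky ends match the output bits of two adjacent tiles in row i, and
# it exposes the XOR of those bits. Growing row by row from a seed tile yields
# the Sierpinski-triangle (Pascal mod 2) pattern. Abstract illustration only.

def grow(seed_row, num_rows):
    rows = [seed_row]
    for _ in range(num_rows - 1):
        padded = [0] + rows[-1] + [0]       # boundary tiles expose a 0 sticky end
        rows.append([padded[j] ^ padded[j + 1] for j in range(len(padded) - 1)])
    return rows


for row in grow(seed_row=[1], num_rows=8):
    print("".join("#" if bit else "." for bit in row).center(20))
```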
Géraud Sénizergues (2002 Gödel Prize) | ges@labri.u-bordeaux.fr
Université Bordeaux I, LaBRI, Bordeaux, France.
Abstract: The equivalence problem for deterministic pushdown automata is shown to be decidable. We exhibit a complete formal system for deducing equivalent pairs of deterministic rational boolean series on the alphabet associated with a dpda M. We then extend the results to deterministic pushdown transducers from a free monoid into an abelian group. A general algebraic and logical framework, inspired by Harrison et al. (Theoret. Comput. Sci. 9 (1979) 173-205), Courcelle (Math. Systems Theory 16 (1983) 191-231) and Meitus (Kybernetika 5 (1989) 14-25, in Russian), is developed.

Theoretical Computer Science, Vol. 251 (2001), pages 1-166. Copyright 2001 Elsevier Science B.V.