SCSS 2013:Papers with Abstracts

Papers
Abstract. In this talk, we will exemplify the spirit of a new type of mathematics by a report on the Theorema system being developed in the speaker's research group. Theorema is both a logic and a software frame for doing mathematics in the way sketched above. On the object level, Theorema allows one to prove and program within the same logical frame and, on the meta-level, it allows one to formulate reasoning techniques that help with proving and programming on the object level. In particular, we will show how this type of doing mathematics makes it possible to mimic the invention process behind the speaker's theory of Gröbner bases, which provides a general method for dealing with multivariate nonlinear polynomial systems.
Abstract. This work aims to build a semantic framework for automated debugging. A debugging process consists of tracing, locating, and fixing processes performed consecutively. The first two processes are accomplished by a tracing procedure and a locating procedure, respectively. The tracing procedure reproduces the execution of a failed test case with well-designed data structures and saves the information necessary for locating bugs. The locating procedure uses the information obtained from the tracing procedure to locate ill-designed statements and to generate a fix-equation, the solution of which is a function that will be used to fix the bugs. A structural operational semantics is given to define the functions of the tracing and locating procedures. Both procedures are proved to terminate and to produce one fix-equation. The main task of the fixing process is to solve the fix-equation. It turns out that for a given failed test case, there exist three different types of solutions: (1) the bug is fixable: there exists a solution of the fix-equation, and the program can be repaired; (2) there exists a non-linear error in the program: the fix-equation generated at each round of the locating procedure is solvable, but a new bug arises while the old bug is being fixed; (3) there exists a logical design error: the fix-equation is not solvable.
Abstract. Linear recurrence sequences (LRS), such as the Fibonacci numbers, permeate vast areas of mathematics and computer science. In this talk, we consider three natural decision problems for LRS, namely the Skolem Problem (does a given LRS have a zero?), the Positivity Problem (are all terms of a given LRS positive?), and the Ultimate Positivity Problem (are all but finitely many terms of a given LRS positive?). Such problems (and assorted variants) have applications in a wide array of scientific areas, such as theoretical biology (analysis of L-systems, population dynamics), economics (stability of supply-and-demand equilibria in cyclical markets, multiplier-accelerator models), software verification (termination of linear programs), probabilistic model checking (reachability and approximation in Markov chains, stochastic logics), quantum computing (threshold problems for quantum automata), discrete linear dynamical systems (reachability and invariance problems), as well as combinatorics, statistical physics, term rewriting, formal languages, cellular automata, generating functions, etc.

We shall see that these problems have deep and fascinating connections
to rich mathematical concepts and conjectures, particularly in the
fields of analytic and algebraic number theory, diophantine geometry
and approximation, real algebraic geometry, mathematical
logic, and complexity theory.
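As a toy illustration of these decision problems (not of the decision procedures discussed in the talk, which require deep number-theoretic machinery), one can enumerate the terms of an LRS and search for a zero up to a bound; this yields only a semi-decision procedure for the Skolem Problem. The function names and the encoding below are our own.

```python
def lrs_terms(coeffs, init, n):
    """First n terms of the LRS u(k+d) = coeffs[0]*u(k+d-1) + ... + coeffs[d-1]*u(k),
    where d = len(coeffs) and init gives u(0), ..., u(d-1)."""
    terms = list(init)
    for _ in range(n - len(init)):
        last = reversed(terms[-len(coeffs):])
        terms.append(sum(c * t for c, t in zip(coeffs, last)))
    return terms

def has_zero_up_to(coeffs, init, bound):
    """Semi-decision for the Skolem Problem: search for a zero among
    the first `bound` terms (a negative answer proves nothing)."""
    return any(t == 0 for t in lrs_terms(coeffs, init, bound))
```

For example, with `coeffs=[1, 1]` and `init=[1, 1]` this enumerates the Fibonacci numbers, which have no zero term.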
Abstract. This paper presents a parameterized technique for the inspection of Rewriting Logic computations that allows the non-deterministic execution of a given rewrite theory to be followed up in different ways. Starting from a selected state in the computation tree, the exploration is driven by a user-defined inspection criterion that specifies the exploration mode. By selecting different inspection criteria, one can automatically derive useful debugging facilities such as program steppers and more sophisticated dynamic trace slicers that facilitate the detection of control and data dependencies across the computation tree. Our methodology, which is implemented in the Anima graphical tool, allows users to capture the impact of a given criterion, validate input data, and detect improper program behaviors.
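The parametric idea can be sketched in a few lines (our own simplification; Anima operates on Maude rewrite theories, not on Python functions): an exploration procedure takes a successor function describing the non-deterministic computation and a user-supplied inspection criterion that decides which branches to follow.

```python
def explore(state, step, criterion, depth=5):
    """Expand the computation tree from `state`, where `step` returns the
    non-deterministic successors of a state, but only along the branches
    accepted by the inspection `criterion`. Returns (state, children)."""
    if depth == 0:
        return (state, [])
    children = [explore(s, step, criterion, depth - 1)
                for s in step(state) if criterion(state, s)]
    return (state, children)
```

Swapping the criterion changes the derived tool: a criterion accepting every successor yields a full stepper, while a criterion tracking a selected data dependency yields a trace slicer.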
Abstract. In this paper we propose a parametric technique to automatically infer algebraic property-oriented specifications from Term Rewriting Systems. Namely, given the source code of a TRS we infer a specification which consists of a set of most general equations relating terms that rewrite, for all possible instantiations, to the same set of normal forms.

The semantics-based inference method that we propose can cope with non-constructor-based TRSs and considers non-ground terms. Particular emphasis is placed on avoiding the generation of “redundant” equations that are logical consequences of other ones.

To experiment with the validity of our proposal, we have considered an instance of the method employing a novel (condensed) semantics for left-linear TRSs, and we have implemented a “proof of concept” prototype in Haskell, which is available online.
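The underlying notion can be illustrated with a ground-term toy (our own encoding, far simpler than the paper's condensed semantics for non-ground terms): a candidate equation s = t holds when both sides rewrite to the same set of normal forms.

```python
def rewrite_steps(term, rules):
    """All one-step rewrites of a ground term. Terms are nested tuples
    ('f', arg1, ...) or plain constants; rules are (lhs, rhs) pairs
    over ground terms."""
    results = [rhs for lhs, rhs in rules if term == lhs]
    if isinstance(term, tuple):
        head, *args = term
        for i, a in enumerate(args):
            for a2 in rewrite_steps(a, rules):
                results.append((head, *args[:i], a2, *args[i + 1:]))
    return results

def normal_forms(term, rules):
    """Set of normal forms reachable from a ground term (assumes termination)."""
    steps = rewrite_steps(term, rules)
    if not steps:
        return {term}
    return set().union(*(normal_forms(s, rules) for s in steps))
```

With the non-confluent rules a → b, a → c, f(b) → d, f(c) → d, both f(a) and d have the singleton set {d} of normal forms, so the equation f(a) = d would be reported.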
Abstract. In this paper we present a novel condensed narrowing-like semantics that contains the minimal information which is needed to describe compositionally all possible rewritings of a term rewriting system. We provide its goal-dependent top-down definition and, more importantly, an equivalent goal-independent bottom-up fixpoint characterization.
We prove soundness and completeness w.r.t. the small-step behavior of rewriting for the full class of term rewriting systems.
Abstract. Making a knot on a rectangular origami, or more generally on a tape of finite length, gives rise to a regular polygon. We present an automated algebraic proof that making two knots leads to a regular heptagon. The knot fold is regarded as a double fold operation coupled with Huzita's fold operations. We specify the construction by describing the geometrical constraints on the fold lines to be used for the construction of a knot. The algebraic interpretation of the logical formulas allows us to solve the problem of how to find the fold operations, i.e., to find concrete fold lines. The logical and algebraic framework incorporated in a system called Eos (e-origami system) is used to simulate the knot construction as well as to prove the correctness of the construction based on algebraic proof methods.
Abstract. It is well recognized that security verification of cryptographic protocols tends to be difficult; in fact, some flaws in protocol designs or security proofs were found only after they had been presented. The use of formal methods is a way to deal with such complexity. In particular, process calculi are suitable for describing parallel systems. Bisimilarity, which denotes that two processes behave indistinguishably from the outside, is a key notion in process calculi. However, by-hand verification of bisimilarity is often tedious when the processes have many long branches in their transitions. We developed a software tool to automatically verify bisimulation relations in the quantum process calculus qCCS and applied it to Shor and Preskill's security proof of BB84. The tool succeeds in verifying the equivalence of BB84 and an EDP-based protocol, which are discussed in their proof.
Abstract. In applications of symbolic computation, the computation of Gröbner bases is a frequently required but complex procedure, so it is natural to develop parallel algorithms to compute them. There are parallel flavours of the F4 algorithm that use the special structure of the occurring matrices to speed up the reduction. In this paper we start from this and present modifications that allow efficient computation of Gröbner bases on parallel architectures with shared as well as distributed memory. To achieve this we concentrate on one objective: reducing memory consumption and avoiding communication overhead. We remove unrequired steps of the reduction, split the columns of the matrix into blocks for distribution, and review the effectiveness of the SIMPLIFY function. Finally, we provide benchmarks with up to 256 distributed threads of an implementation which will be available at https://github.com/svrnm/parallelGBC.
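The column-splitting step can be sketched as follows (a simplified illustration of the idea, not the parallelGBC implementation; all names are ours). Each worker owns one column block and performs its share of a row elimination locally, so no matrix data needs to be exchanged during the reduction itself.

```python
def column_blocks(matrix, block_size):
    """Split a dense matrix (list of rows) into column blocks
    for distribution across workers."""
    ncols = len(matrix[0])
    return [[row[j:j + block_size] for row in matrix]
            for j in range(0, ncols, block_size)]

def reduce_row_block(row_block, pivot_block, factor, p):
    """One worker's share of the elimination row -= factor * pivot (mod p):
    it only touches the entries of its own column block."""
    return [(r - factor * q) % p for r, q in zip(row_block, pivot_block)]
```

Since the elimination factor is determined by a single leading entry, broadcasting that one scalar suffices; the bulk of the matrix stays local to each block's owner.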
Abstract. We present the key notion of "asymptotically non-terminant initial variable values" for non-terminant loop programs. We show that those specific values are directly associated with initial variable values for which the loop program does not terminate.
Considering linear diagonalizable programs, we describe powerful computational methods that automatically and symbolically generate a semi-linear space represented by a linear system of equalities and inequalities. Each element of this space provides asymptotically non-terminant initial variable values. Our approach is based on linear-algebraic methods and results. We obtain conditions using a decomposition on a specific basis, involving the loop condition and the matrix encoding the instructions of the loop.
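For a purely diagonal toy instance, the eigenvalue-based intuition can be sketched as follows (our own illustrative encoding, assuming distinct positive eigenvalues; the paper's method handles general diagonalizable programs symbolically): along the iterates of a loop "while c·x > 0: x = D x", the sign of c·x is eventually determined by the contribution attached to the largest eigenvalue.

```python
def asymptotically_nonterminant(eigenvalues, x0, c):
    """For diagonal D with distinct positive eigenvalues lam_i, we have
    c.x after k steps = sum_i c_i * x0_i * lam_i**k, so for large k the
    sign is that of the term with the largest eigenvalue whose
    coefficient c_i * x0_i is nonzero."""
    terms = sorted(zip(eigenvalues, (ci * xi for ci, xi in zip(c, x0))),
                   reverse=True)
    for lam, coeff in terms:
        if coeff != 0:
            return coeff > 0
    return False  # c.x is identically zero: the guard fails immediately

def run_loop(eigenvalues, x0, c, max_iter=100):
    """Simulate the loop; return the iteration count at exit, or None if
    the guard still holds after max_iter iterations."""
    x = list(x0)
    for k in range(max_iter):
        if sum(ci * xi for ci, xi in zip(c, x)) <= 0:
            return k
        x = [lam * xi for lam, xi in zip(eigenvalues, x)]
    return None
```

The symbolic counterpart in the paper replaces this numeric dominance check by equalities and inequalities on the coordinates of the initial values in the eigenbasis.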
Abstract. This paper presents the modeling of a binary tree that represents a natural number and gives inductive proofs of its properties using theorem provers. We define a function for converting a natural number into a binary tree and give an inductive proof of its well-definedness. We formalize this method, develop a computational model based on it, and apply it to an electronic cash protocol. We also define the payment function on the binary tree and prove the divisibility of electronic cash using the theorem provers Isabelle/HOL and Coq, respectively. Furthermore, we discuss the effectiveness of this method.
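A rough sketch of such a data structure (our own illustrative encoding in Python; the paper's development is carried out in Isabelle/HOL and Coq, and its exact tree representation may differ): a monetary amount as a binary tree whose internal nodes split their value in two, which is what makes paying sub-amounts, i.e. divisibility, possible.

```python
def amount_tree(v):
    """Encode amount v as a tree (value, left, right); leaves hold 0 or 1,
    and each internal node splits its value into two halves."""
    if v <= 1:
        return (v, None, None)
    return (v, amount_tree(v // 2), amount_tree(v - v // 2))

def tree_value(t):
    """Well-definedness check: a node's value equals the sum over its leaves."""
    v, left, right = t
    return v if left is None else tree_value(left) + tree_value(right)
```

An inductive proof of well-definedness in a theorem prover would establish `tree_value(amount_tree(n)) = n` for every natural number n by structural induction, rather than by testing.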
Abstract. With the aid of the computer algebra systems COCO and GAP (with its packages), we investigate all seven known primitive triangle-free strongly regular graphs on 5, 10, 16, 50, 56, 77, and 100 vertices. These graphs are rank 3 graphs with a rich automorphism group. The embeddings of each graph in this family into the others are described, the automorphic equitable partitions are classified, and all equitable partitions of the graphs on up to 50 vertices are enumerated. Based on the reported computer-aided results and using techniques from coherent configurations, a few new models of these graphs are suggested, which rely on knowledge of only a small part of the symmetries of the graph under consideration.
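The defining parameters of such graphs can be verified mechanically; as a small self-contained check (in plain Python rather than COCO/GAP), the following computes the parameters (n, k, λ, μ) of the Petersen graph, the member of the family on 10 vertices, built here as the Kneser graph on 2-subsets of a 5-element set.

```python
from itertools import combinations

def petersen_graph():
    """Petersen graph: vertices are 2-subsets of {0..4};
    two vertices are adjacent iff the subsets are disjoint."""
    verts = list(combinations(range(5), 2))
    edges = {(u, v) for u in verts for v in verts
             if u < v and not set(u) & set(v)}
    return verts, edges

def srg_parameters(verts, edges):
    """Return (n, k, lambda, mu) if the graph is strongly regular, else None:
    k-regular, adjacent pairs share lambda neighbours, non-adjacent share mu."""
    adj = {v: set() for v in verts}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degrees = {len(adj[v]) for v in verts}
    if len(degrees) != 1:
        return None
    common_adj = {len(adj[u] & adj[v]) for u, v in edges}
    common_non = {len(adj[u] & adj[v]) for u in verts for v in verts
                  if u < v and (u, v) not in edges}
    if len(common_adj) == 1 and len(common_non) == 1:
        return len(verts), degrees.pop(), common_adj.pop(), common_non.pop()
    return None
```

The expected result (10, 3, 0, 1) confirms regularity of degree 3 and triangle-freeness (λ = 0).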
Abstract. The paper presents the use of invariants for the symbolic verification of requirements for reactive systems. It covers the checking of safety, incompleteness, liveness, and consistency properties, as well as livelock detection. The paper describes the iterative method of double approximation and the method of undetermined coefficients for invariant generation. Benefits, disadvantages, and a comparison of this technique with existing methods are considered. The paper is illustrated with examples of using the invariant technique for symbolic verification.
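The method of undetermined coefficients can be illustrated on a toy loop (our own example, not one from the paper): postulate an invariant template with unknown coefficients, substitute the loop's update, and solve the resulting constraints on the coefficients.

```python
def check_linear_invariant(a, b, steps=10):
    """Template a*x + b*y = const for the loop body x, y = x + 1, y - 1.
    Substituting the update gives a*(x+1) + b*(y-1) = a*x + b*y + (a - b),
    so the template is invariant exactly when the constraint a - b = 0 holds.
    This function confirms the derivation by running the loop."""
    x, y = 3, 5
    const = a * x + b * y
    for _ in range(steps):
        x, y = x + 1, y - 1
        if a * x + b * y != const:
            return False
    return True
```

Solving a - b = 0 yields, e.g., a = b = 1, i.e. the invariant x + y = const; any choice with a ≠ b fails on the first iteration.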
Abstract. Polynomial interpolation is a classical method for approximating continuous functions by polynomials. To measure the accuracy of the approximation, Lebesgue constants are introduced. For a given node system $X^{(n+1)}=\{x_1<\ldots<x_{n+1}\}\, (x_j\in [a,b])$, the Lebesgue function $\lambda_n(x)$ is the sum of the moduli of the Lagrange basis polynomials built on $X^{(n+1)}$. The Lebesgue constant $\Lambda_n$ assigned to the function $\lambda_n(x)$ is its maximum over $[a,b]$. The Lebesgue constant bounds the interpolation error: the interpolation polynomial is at most $(1+\Lambda_n)$ times worse than the best approximation.
The minimum of the $\Lambda_n$'s for fixed $n$ and interval $[a,b]$ is called the optimal Lebesgue constant $\Lambda_n^*$.
For specific interpolation node systems such as the equidistant system, numerical results for the Lebesgue constants $\Lambda_n$ and their asymptotic behavior are known \cite{3,7}. However, giving an explicit symbolic expression for the minimal Lebesgue constant $\Lambda_n^*$ is computationally difficult. In this work, motivated by Rack \cite{5,6}, we are interested in expressing the minimal Lebesgue constants symbolically on $[-1,1]$, and we are also looking for a characterization of those node systems that realize the minimal Lebesgue constants. We exploit the equioscillation property of the Lebesgue function \cite{4} and use quantifier elimination and Gröbner bases as tools \cite{1,2}. Most of the computation is done in Mathematica \cite{8}.
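A numerical counterpart of these quantities is easy to sketch (a dense-grid estimate in Python, ours, and no substitute for the symbolic computations carried out in the paper): evaluate the Lebesgue function on a fine grid and take the maximum.

```python
def lebesgue_constant(nodes, samples=2000):
    """Estimate the Lebesgue constant on [-1, 1]: the maximum over x of
    lambda_n(x), the sum of the absolute values of the Lagrange basis
    polynomials built on the given nodes."""
    def lebesgue_function(x):
        total = 0.0
        for j, xj in enumerate(nodes):
            basis = 1.0
            for m, xm in enumerate(nodes):
                if m != j:
                    basis *= (x - xm) / (xj - xm)
            total += abs(basis)
        return total
    grid = [-1.0 + 2.0 * i / samples for i in range(samples + 1)]
    return max(lebesgue_function(x) for x in grid)
```

For five equidistant nodes on $[-1,1]$ the estimate comes out near 2.2; at the nodes themselves $\lambda_n(x)=1$, so the estimate is always at least 1.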