GCAI 2015: Papers with Abstracts

Papers
Abstract. Conflict-Driven Clause-Learning (CDCL) SAT and SAT Modulo Theories (SMT) solvers are well known as workhorses for, e.g., formal verification applications. Here we discuss ways to go beyond this by learning not only clauses, but also much more expressive constraints. We outline techniques for Integer Linear Programming (ILP), going first from SAT to SMT for ILP and then to SMT with on-the-fly bottleneck constraint encoding. We then illustrate the power of learning full constraints, and the resulting methods for 0-1 ILP (Pseudo-Boolean solvers) and full ILP (Cutsat and IntSat), outlining difficulties and their solutions, and giving examples and some intuition on why these techniques work so well.
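To see the expressivity gap concretely, consider a toy example of ours (not taken from the paper): a single learned cardinality constraint over 0-1 variables can stand for many clauses at once, since

```latex
x_1 + x_2 + x_3 + x_4 \geq 2
\quad\equiv\quad
\bigwedge_{\{i,j,k\} \subseteq \{1,2,3,4\}} (x_i \vee x_j \vee x_k),
```

and in general $\sum_{i=1}^{n} x_i \geq k$ is equivalent to $\binom{n}{n-k+1}$ clauses, which gives one intuition for why learning full constraints instead of clauses can pay off.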
Abstract. The Countdown game is one of the oldest TV shows in the
world. It started broadcasting in 1972 on French television
and in 1982 on Britain's Channel 4, and it has been running in
both countries ever since.
The game, while extremely popular, has never received any serious
scientific attention, probably because it seems too simple at first sight.
In this article we present an in-depth analysis of the
numbers round of the Countdown game. This includes a complexity
analysis of the game, an analysis of existing algorithms, and the presentation
of a new algorithm that increases resolution speed by a large factor.
It also includes some leads on how to turn the game into a
more difficult one, both for a human player and for a computer, and
even on how to transform it into a possibly undecidable problem.
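As a concrete baseline (our illustration, not the paper's improved algorithm), the numbers round can be solved by exhaustive recursive search: pick two available numbers, combine them with one of the four operations under the game's rules (subtraction must stay positive, division must be exact), and recurse on the reduced pool.

```python
from itertools import combinations

def solve(numbers, target):
    """Exhaustive search for an expression over `numbers` reaching `target`.

    Each number is used at most once, and intermediate results must be
    positive integers, as in the TV game.  Returns an expression string,
    or None if the target is unreachable.
    """
    def search(items):  # items: list of (value, expression) pairs
        for (a, ea), (b, eb) in combinations(items, 2):
            rest = list(items)
            rest.remove((a, ea))
            rest.remove((b, eb))
            candidates = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
            if a != b:  # subtraction must yield a positive result
                hi, lo = max(a, b), min(a, b)
                ehi, elo = (ea, eb) if a > b else (eb, ea)
                candidates.append((hi - lo, f"({ehi}-{elo})"))
            for x, y, ex, ey in ((a, b, ea, eb), (b, a, eb, ea)):
                if x % y == 0:  # division must be exact
                    candidates.append((x // y, f"({ex}/{ey})"))
            for value, expr in candidates:
                if value == target:
                    return expr
                found = search(rest + [(value, expr)])
                if found:
                    return found
        return None

    return search([(n, str(n)) for n in numbers])

print(solve([1, 3, 7, 10, 25, 50], 765))  # e.g. ((50+1)*(25-10)) = 765
```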
Abstract. Some constraint programming solvers and constraint modelling languages feature the SORT(L,P,S) constraint, which holds if S is a nondecreasing rearrangement of the list L, the permutation being made explicit by the optional list P. However, such sortedness constraints do not seem to be used much in practice. We argue that reasons for this neglect are that it is impossible to require the underlying sort to be stable, so that SORT cannot be guaranteed to be a total-function constraint, and that L cannot contain tuples of variables, some of which form the key for the sort. To overcome these limitations, we introduce the StableKeysort constraint, decompose it using existing constraints, and propose a propagator. This new constraint enables a powerful modelling idiom, which we illustrate by elegant and scalable models of two problems that are otherwise hard to encode as constraint programs.
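For intuition, a minimal sketch (ours) of the ground-level semantics being argued for: when the sort is stable and the key is one field of a tuple, the permutation, and hence the sorted list, is a total function of the input.

```python
# Ground semantics of a stable sort by key: P is the witnessing
# permutation and S the rearrangement.  The paper's contribution is a
# constraint and propagator enforcing this on decision variables;
# this sketch only fixes the intended meaning on ground values.
def stable_keysort(L):
    # Python's sort is stable: items with equal keys keep their
    # original relative order, so the result is uniquely determined.
    P = sorted(range(len(L)), key=lambda i: L[i][0])  # field 0 is the key
    S = [L[i] for i in P]
    return P, S

L = [(3, 'a'), (1, 'b'), (3, 'c'), (1, 'd')]
P, S = stable_keysort(L)
print(P)  # [1, 3, 0, 2]
print(S)  # [(1, 'b'), (1, 'd'), (3, 'a'), (3, 'c')]
```

Note that P here maps sorted positions to original indices; the constraint's P may be oriented the other way round.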
Abstract. E-mail is the most widespread communication method. The growing volume of e-mail includes an ever-increasing amount of unwanted messages (spam). Statistics indicate that about 80% of e-mail traffic is spam, which slows down Internet traffic, consumes disk space, and reduces network capacity, not to mention the time spent classifying e-mails. Advanced e-mail service packages contain spam-filtering programs. Nevertheless, whether an e-mail counts as spam ultimately depends on the user, as information acceptable (or vital) for one user may be spam for another. Hence, it is crucial to have an automatic spam-filtering system for every individual user.
The paper discusses a solution: a learning system that filters e-mails according to each user's parameters and gradually learns to filter them accordingly. The system is based on a multi-layer neural network with logistic activation functions, trained in a supervised manner using the error backpropagation algorithm.
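As a minimal sketch of the kind of network described (one hidden layer, logistic activations, trained by error backpropagation; bias terms omitted for brevity, and all data, sizes, and rates below are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data: rows are e-mail feature vectors, labels 1 = spam.
X = rng.random((8, 5))
y = rng.integers(0, 2, size=(8, 1)).astype(float)

W1 = rng.normal(scale=0.5, size=(5, 4))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
lr = 0.5                                 # learning rate

for epoch in range(2000):
    # Forward pass with logistic activations throughout.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backpropagate the output error (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(np.round(out.T, 2))  # network outputs after training
```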
Abstract. Ontology-based query answering augments classical query answering in databases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We investigate temporal query answering w.r.t. ontologies formulated in DL-Lite, a family of description logics that captures the conceptual features of relational databases and was tailored for efficient query answering. We consider a recently proposed temporal query language that combines conjunctive queries with the operators of propositional linear temporal logic (LTL). In particular, we consider negation in the ontology and query language, and study both data and combined complexity of query entailment.
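An illustrative query in the spirit of this language (our example, not from the paper), with conjunctive queries as atoms under LTL operators:

```latex
\phi(x) \;=\; \Box\,\mathit{Patient}(x) \;\wedge\;
\Diamond^{-} \exists y.\, \bigl(\mathit{hasFinding}(x,y) \wedge \mathit{Hypertension}(y)\bigr)
```

asking for the individuals that are patients at all time points and had a hypertension finding at some point in the past.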
Abstract. We consider two classes of computations which admit taking linear combinations of execution runs: probabilistic sampling and generalized animation. We argue that the task of program learning should be more tractable for these architectures than for conventional deterministic programs. We look at the recent advances in the "sampling the samplers" paradigm in higher-order probabilistic programming. We also discuss connections between partial inconsistency, non-monotonic inference, and vector semantics.
Abstract. We discuss the evaluation of conditionals. Under classical logic a conditional of the form "A implies B" is semantically equivalent to "not A or B". However, psychological experiments have repeatedly shown that this is not how humans understand and use conditionals.
We introduce an innovative abstract reduction system under the three-valued Łukasiewicz logic and the weak completion semantics, which allows us to reason abductively and by revision with respect to conditionals in three values. We discuss the strategy of minimal revision followed by abduction, and two notions of relevance.
Psychological experiments will need to ascertain whether these strategies and notions, or a variant of them, correspond to how humans reason with conditionals.
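For reference, the Łukasiewicz implication over the three truth values (U for unknown); the case $\mathrm{U} \rightarrow \mathrm{U} = \top$ is what distinguishes it from Kleene's strong three-valued logic:

```latex
\begin{array}{c|ccc}
\rightarrow & \top & \mathrm{U} & \bot \\ \hline
\top        & \top & \mathrm{U} & \bot \\
\mathrm{U}  & \top & \top       & \mathrm{U} \\
\bot        & \top & \top       & \top
\end{array}
```

In particular, a conditional with unknown antecedent and unknown consequent evaluates to true, which matters when evaluating "A implies B" in three values.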
Abstract. We investigate the logical difference problem between general EL-TBoxes. The logical difference is the set of concept subsumptions that are logically entailed by a first TBox but not by a second one.
We show how the logical difference between two EL-TBoxes can be reduced to fixpoint reasoning w.r.t. EL-TBoxes. Entailments of the first TBox can be represented by subsumptions of least fixpoint concepts by greatest fixpoint concepts, which can then be checked w.r.t. the second TBox.
We present the foundations for a dedicated procedure based on a hypergraph representation
of the fixpoint concepts without the use of automata-theoretic techniques,
avoiding possible complexity issues of a reduction to modal mu-calculus reasoning.
The subsumption checks are based on checking for the existence of simulations between
the hypergraph representations of the fixpoint concepts and the TBoxes.
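A toy instance of the logical difference (our example, not from the paper):

```latex
\mathcal{T}_1 = \{\, A \sqsubseteq B,\; B \sqsubseteq C \,\}
\qquad
\mathcal{T}_2 = \{\, A \sqsubseteq C \,\}
```

Both TBoxes entail $A \sqsubseteq C$, but $\mathcal{T}_1 \models B \sqsubseteq C$ while $\mathcal{T}_2 \not\models B \sqsubseteq C$, so $B \sqsubseteq C$ (and likewise $A \sqsubseteq B$) witnesses the logical difference of $\mathcal{T}_1$ from $\mathcal{T}_2$, whereas the difference of $\mathcal{T}_2$ from $\mathcal{T}_1$ is empty.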
Abstract. Automata allow many constraints on sequences of variables to be specified in a high-level way for constraint programming solvers. An automaton with accumulators induces a decomposition of the specified constraint into a conjunction of constraints with existing inference algorithms, called propagators. Towards improving propagation, we design a fully automated tool that selects, in an off-line process, constraints that are implied by such a decomposition. We show that a suitable problem-specific choice among the tool-selected implied constraints can considerably improve solving time and propagation, both on a decomposition in isolation and on entire constraint problems containing the decomposition.
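To give the flavour of such implied constraints (a toy example of ours, not output of the tool): if the decomposition threads an accumulator through counter variables $c_0, \dots, c_n$ that is never decremented and is incremented by at most one per transition, then constraints such as

```latex
c_i \le c_j \qquad\text{and}\qquad c_j - c_i \le j - i \qquad (0 \le i < j \le n)
```

are implied by the decomposition, and posting them explicitly can trigger propagation that the individual transition constraints alone miss.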
Abstract. Metis is an automated theorem prover based on ordered paramodulation.
It is widely employed in the interactive theorem provers Isabelle/HOL and HOL4
to automate proofs as well as reconstruct proofs found by automated provers.
For both these purposes, the tableaux-based MESON tactic is frequently used
in HOL Light. However, paramodulation-based provers such as Metis
perform better on many problems involving equality.
We created a Metis-based tactic in HOL Light which translates HOL problems
to Metis, runs an OCaml version of Metis, and reconstructs proofs
in Metis' paramodulation calculus as HOL proofs.
We evaluate the performance of Metis as a proof reconstruction method
in HOL Light.
Abstract. In recent years it has been proposed to encode bounded model checking (BMC) into the effectively propositional fragment of first-order logic (EPR). The EPR fragment can provide a succinct representation of the problem and facilitate reasoning at a higher level.
In this paper we present an extension of EPR-based bounded model checking
with k-induction, which can be used to prove safety properties of systems over
unbounded runs. We present a novel abstraction-refinement approach based on
unsatisfiable cores and models (UCM) for BMC and k-induction in the EPR setting.
We have implemented UCM refinements for EPR-based BMC and k-induction in the first-order automated theorem prover iProver. We also extended iProver with support for the AIGER format and evaluated it on the HWMCC'14 competition benchmarks. The experimental results are encouraging. We show that a number of AIG problems can be verified to deeper bounds with EPR-based model checking.
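For reference, the two standard k-induction proof obligations for a system with initial predicate I, transition relation T, and safety property P (the textbook formulation; the paper's contribution lies in discharging such obligations in the EPR setting with UCM-based refinement):

```latex
\text{base:}\quad I(s_0) \wedge \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1})
\;\models\; \bigwedge_{i=0}^{k} P(s_i)
\qquad\qquad
\text{step:}\quad \bigwedge_{i=0}^{k-1} \bigl( P(s_i) \wedge T(s_i, s_{i+1}) \bigr)
\;\models\; P(s_k)
```

If both obligations hold, P holds on all unbounded runs.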
Abstract. The article proposes a multi-attribute decision making (MADM) approach, which is applied to the problem of optimal selection of investment projects. This novel methodology comprises two stages. First, it ranks the projects with the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method, formulated in a hesitant fuzzy environment. We consider the case when the information on the weights of the attributes is completely unknown. The identification of the attribute weights is carried out in the context of hesitant fuzzy sets and is based on the De Luca-Termini information entropy. Alternatives are ranked according to the proximity of their distances to the positive and negative ideal solutions. The second stage of the methodology allows for the most profitable investment in several projects simultaneously. The decision on an optimal distribution of the allocated investments among the selected projects is obtained using a method developed by the authors for a possibilistic bicriteria optimization problem. An investment example is given to illustrate the application of the proposed approach.
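A minimal sketch of the classical crisp TOPSIS ranking step (real-valued scores and given weights; the hesitant fuzzy variant and the entropy-based weight identification of the paper are not reproduced here, and all numbers are made up):

```python
import numpy as np

# Rows: projects, columns: benefit-type attributes (illustrative scores).
M = np.array([[7.0, 9.0, 8.0],
              [8.0, 7.0, 6.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])  # weights; the paper derives them via entropy

V = w * M / np.linalg.norm(M, axis=0)        # normalize columns, then weight
ideal, anti = V.max(axis=0), V.min(axis=0)   # positive / negative ideal
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to positive ideal
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to negative ideal
closeness = d_neg / (d_pos + d_neg)          # relative closeness in [0, 1]

print(np.argsort(-closeness))                # projects ranked best-first
```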
Abstract. In this paper the problem of recognizing a patient's intent to move a hand prosthesis is addressed. The proposed method is based on the recognition of electromyographic (EMG) and mechanomyographic (MMG) biosignals using a multiclassifier system (MCS) working with a dynamic ensemble selection (DES) scheme and an original concept of a competence function. The competence measure is based on relating the response of the classifier to the decision profile of a test object, which is evaluated using the K nearest objects from the validation set (static mode). Additionally, feedback information coming from bioprosthesis sensors on correct/incorrect classification is applied to adjust the combining mechanism during MCS operation, through adaptive tuning of the competences of base classifiers depending on their decisions (dynamic mode). Experimental investigations were performed using real data concerning the recognition of five types of grasping movements and a computer-simulated procedure for generating feedback signals. The performance of the MCS with the proposed competence measure is experimentally compared against five state-of-the-art MCSs in static mode, and the developed MCS is furthermore evaluated with respect to the effectiveness of the competence-tuning procedure. The results obtained indicate that modifying the competences of base classifiers during the working phase substantially improves the performance of the MCS. The developed system achieved the highest classification accuracy, demonstrating the potential of an MCS with feedback signals from prosthesis sensors for the control of a bioprosthetic hand.
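As a rough sketch of the static-mode idea, here is dynamic ensemble selection with plain local accuracy over the K nearest validation objects as the competence measure (a standard DES baseline given for orientation only; the paper's competence function, which relates classifier responses to decision profiles, is different). Scikit-learn-style fitted classifiers are assumed.

```python
import numpy as np

def des_predict(classifiers, X_val, y_val, x, K=5):
    # Competence of each base classifier = accuracy on the K validation
    # objects nearest to the test object x (local accuracy).
    dists = np.linalg.norm(X_val - x, axis=1)
    nearest = np.argsort(dists)[:K]
    votes = {}
    for clf in classifiers:
        competence = np.mean(clf.predict(X_val[nearest]) == y_val[nearest])
        label = clf.predict(x.reshape(1, -1))[0]
        votes[label] = votes.get(label, 0.0) + competence
    return max(votes, key=votes.get)  # competence-weighted vote
```

The paper's dynamic mode would additionally update these competences from the prosthesis feedback signals during operation.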
Abstract. Organic synthesis is a computationally challenging practical problem concerned with constructing a target molecule from a set of initially available molecules via chemical reactions. This paper demonstrates how organic synthesis can be formulated as a planning problem in Artificial Intelligence, and how it can be explored using state-of-the-art domain-independent planners.
To this end, we develop a methodology to represent chemical molecules and generic reactions in PDDL 2.2, a version of the standardized Planning Domain Definition Language popular in AI. In our model, derived predicates define common functional groups and chemical classes, and actions correspond to generic chemical reactions. We develop a set of benchmark problems. Since PDDL is supported as an input language by many modern planners, our benchmark can subsequently be used for the empirical assessment of the performance of various state-of-the-art planners.
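A toy illustration (ours) of the underlying planning view: states are sets of available molecule classes, actions consume reactants and produce products, and a plan is a reaction sequence reaching the target. The real encoding uses PDDL 2.2 and domain-independent planners; the reactions below are chemically simplified placeholders.

```python
from collections import deque

# Placeholder generic reactions: (reactants consumed, products produced).
REACTIONS = [
    ({"alcohol", "acid"}, {"ester", "water"}),   # esterification
    ({"alkene", "water"}, {"alcohol"}),          # hydration
]

def plan(initial, target):
    # Breadth-first search over reaction applications, standing in for
    # the domain-independent planners actually evaluated in the paper.
    start = frozenset(initial)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, steps = queue.popleft()
        if target in state:
            return steps  # indices of the applied reactions, in order
        for i, (lhs, rhs) in enumerate(REACTIONS):
            if lhs <= state:
                nxt = frozenset((state - lhs) | rhs)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [i]))
    return None

print(plan({"alkene", "water", "acid"}, "ester"))  # -> [1, 0]
```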
Abstract. Genetic algorithms have been applied to various optimization problems in the past. Our library GeneiAL implements a framework for genetic algorithms specially targeted at the area of hybrid electric vehicles. In a parallel hybrid electric vehicle (PHEV), an internal combustion engine and an electric motor are coupled on the same axis in parallel. In the area of PHEVs, genetic algorithms have been used extensively for the optimization of parameter tuning of control strategies. We use GeneiAL to control the torque distribution between the combustion engine and the electric motor directly. The objective function of this control strategy minimizes the weighted sum of functions that evaluate the fuel consumption, the battery state of charge, and drivability aspects over a prediction horizon of fixed finite length.
We analyze the influence of these weights and different configurations for the genetic algorithm on the computation time, the convergence, and the quality of the optimization result. For promising configurations, we compare the results of our control strategy with common control strategies.
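A minimal sketch of the shape of such an optimization (a generic genetic algorithm over a torque-split sequence; the cost terms, weights, and parameters below are invented stand-ins, not GeneiAL's API or the paper's objective):

```python
import random

H = 10              # prediction horizon length (illustrative)
POP, GENS = 30, 60  # population size and number of generations

def cost(split):    # split[t] in [0, 1]: engine share of torque at step t
    # Invented stand-ins for the weighted objective terms: fuel use,
    # deviation of battery state of charge from a target, drivability.
    fuel = sum(split)
    soc = abs(sum(1 - s for s in split) - 0.4 * H)
    drive = sum(abs(a - b) for a, b in zip(split, split[1:]))
    return 1.0 * fuel + 2.0 * soc + 0.5 * drive  # illustrative weights

def mutate(split):
    child = list(split)
    i = random.randrange(H)
    child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.2)))
    return child

def crossover(a, b):
    cut = random.randrange(1, H)  # one-point crossover
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(H)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)        # elitist selection: keep the better half
    elite = pop[:POP // 2]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

print([round(s, 2) for s in min(pop, key=cost)])  # best split found
```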
Abstract. We study two modal logics of trust that interconnect the trust modality with the individual belief modalities of agents. One logic includes an axiom stating that if agent i trusts agent j with respect to a statement p, then i believes that j does not disbelieve p. In the second logic, if i trusts j concerning p, then i believes that j believes p. These extensions of the logic of trust help to resolve some unintuitive consequences that arise when the semantics of trust and belief are independent. As our main technical result, we prove the soundness and completeness of these logics.
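In symbols, writing $T_{ij}\,p$ for "agent i trusts agent j with respect to p" and $B_i$ for i's belief (our notation), the two interaction axioms read:

```latex
T_{ij}\,p \rightarrow B_i \lnot B_j \lnot p
\qquad\text{and}\qquad
T_{ij}\,p \rightarrow B_i B_j p
```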
Abstract. We introduce the notion of strongly distributed structures and present a uniform approach to incremental automated reasoning on them. The approach is based on a systematic use of two logical reduction techniques: Feferman-Vaught reductions and syntactically defined translation schemes. Distributed systems are represented as logical structures A. We propose a uniform template for methods that allow for a certain cost evaluation of formulae of a logic L over A from values of formulae over its components and values of formulae over the index structure I. Given a logic L, a structure A as a composition of structures Ai, i ∈ I, an index structure I, and a formula φ of the logic to be evaluated on A, the question is: what is the reduction sequence for φ, if any exists? We show that if preservation theorems can be proved for L, and if A is a strongly distributed composition of its components, then the corresponding reduction sequence for A can be effectively computed. We show that the approach works for many extensions of FOL, but not for all. The considered extensions of FOL are suitable candidates for modelling languages for components and services, used in incremental automated reasoning, data mining, decision making, planning, and scheduling. A short complexity analysis of the method is also provided.
Abstract. The aim of this work is to define a resolution method for the modal logic S5. We
first propose a conjunctive normal form (S5-CNF) which is mainly based on using labels
referring to semantic worlds. In a sense, S5-CNF can be seen as a generalization of the
conjunctive normal form in propositional logic by using in the clause structure the modal
connective of necessity and labels. We show that every S5 formula can be transformed
into an S5-CNF formula using a linear encoding. Then, in order to show the suitability
of our normal form, we describe a modeling of the problem of graph coloring. Finally, we
introduce a simple resolution method for S5, composed of three deductive rules, and we
show that it is sound and complete. Our deductive rules can be seen as adaptations of
Robinson’s resolution rule to the possible-worlds semantics.
Abstract. First-order theorem provers have to search for proofs in an infinite
space of possible derivations. Proof search heuristics play a vital
role for the practical performance of these systems. In the current
generation of saturation-based theorem provers like SPASS, E,
Vampire or Prover9, one of the most important decisions is the
selection of the next clause to process with the given-clause
algorithm. Provers offer a wide variety of basic clause evaluation
functions, which can often be parameterized and combined in many
different ways. Finding good strategies is usually left to the users
or developers, often backed by large-scale experimental
evaluations. We describe a way to automate this process using
genetic algorithms, evaluating a population of different strategies
on a test set, and applying mutation and crossover operators to good
strategies to create the next generation. We describe the design and
experimental set-up, and report on first promising results.
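As a sketch of the kind of object being optimized (our simplification, not E's actual implementation): a strategy interleaves several clause evaluation functions according to a pick ratio, and the genetic algorithm mutates and recombines such ratios and function choices.

```python
import heapq
import itertools

counter = itertools.count()  # insertion order doubles as clause age

def symbol_weight(clause):
    return len(clause)       # crude symbol-counting evaluation function

class GivenClauseSelector:
    """Interleaves a symbol-weight queue and an age (FIFO) queue with a
    configurable pick ratio, e.g. five weight-best picks per oldest pick."""
    def __init__(self, weight_picks=5, age_picks=1):
        self.by_weight, self.by_age = [], []
        self.schedule = itertools.cycle(
            ["weight"] * weight_picks + ["age"] * age_picks)
        self.done = set()

    def insert(self, clause):
        age = next(counter)
        heapq.heappush(self.by_weight, (symbol_weight(clause), age, clause))
        heapq.heappush(self.by_age, (age, clause))

    def select(self):
        pref = next(self.schedule)
        for queue in ((self.by_weight, self.by_age) if pref == "weight"
                      else (self.by_age, self.by_weight)):
            while queue:
                clause = heapq.heappop(queue)[-1]
                if clause not in self.done:  # may have been picked already
                    self.done.add(clause)    # via the other queue
                    return clause
        return None

sel = GivenClauseSelector()
for c in [("p", "q", "r"), ("s",), ("t", "u")]:
    sel.insert(c)
print(sel.select())  # ('s',): the weight queue is consulted first
```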
Abstract. The paper describes a project aiming at developing formal foundations
of combined multi-language constraint solving in the form of an algebra of modular systems.
The basis for the integration of different formalisms is classical model theory.
Each atomic module is a class of structures.
It can be given, e.g., by a set of constraints in a constraint formalism
that has an associated solver.
Atomic modules are combined using a small number of algebraic operations.
The algebra simultaneously resembles Codd's relational algebra (but is defined
on classes of structures instead of relational tables) and the process
algebras originating in the work of Milner and Hoare.

The goal of this paper is to establish and justify
the main notions and research directions,
and to make the definitions precise.
We explain several results, but do not give proofs.
The proofs will appear in several forthcoming papers.
We keep this paper as a project description, discussing the overall project
and establishing and bridging its individual directions.
Abstract. The energy optimization problem in Wireless Sensor Networks (WSN) is central to the efficient performance of a sensor network consisting of small devices with limited, non-rechargeable batteries. The WSN lifetime maximization problem, under the assumption that coverage is the main task of the network, is known as the maximal lifetime coverage problem (MLCP).
This problem belongs to the class of NP-hard problems. In this paper we propose a novel simulated annealing (SA) algorithm to solve the MLCP. The proposed algorithm is studied on high-density WSN instances under different parameter setups.
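A generic simulated-annealing skeleton of the kind such an algorithm builds on (the state, neighbour move, and cost below are schematic placeholders, not the paper's MLCP encoding):

```python
import math
import random

def simulated_annealing(initial, neighbour, cost,
                        t0=1.0, alpha=0.995, steps=20000):
    # Accept worsening moves with probability exp(-delta / T) and cool
    # geometrically.  For the MLCP, a state would be a schedule assigning
    # sensors to activity intervals and cost the negated network lifetime.
    state, c = initial, cost(initial)
    best, best_c = state, c
    t = t0
    for _ in range(steps):
        cand = neighbour(state)
        delta = cost(cand) - c
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state, c = cand, c + delta
            if c < best_c:
                best, best_c = state, c
        t *= alpha
    return best

# Tiny usage example on a toy objective (minimize (x - 3)^2):
print(simulated_annealing(0.0, lambda x: x + random.gauss(0, 0.5),
                          lambda x: (x - 3.0) ** 2))  # ~3.0
```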
Abstract. BliStr is a system that automatically develops strong targeted theorem-proving strategies for the E prover on a large set of diverse problems. The main idea is to interleave (i) iterated low-timelimit local search for new strategies on small sets of similar easy problems with (ii) higher-timelimit evaluation of the new strategies on all problems. The accumulated results of the global higher-timelimit runs are used to define and evolve the notion of "similar easy problems", and to control the evolution of the next strategy. The technique significantly strengthened the set of E strategies used by the MaLARea, E-MaLeS, and E systems in the CASC@Turing 2012 competition, particularly in the Mizar division. A similar improvement was obtained on problems created from the Flyspeck corpus.