Proving Correctness of Graph Programs Relative to Recursively Nested Conditions

Electronic Communications of the EASST, Volume 73 (2016): Graph Computation Models, Selected Revised Papers from GCM 2015

Nils Erik Flick∗
Carl von Ossietzky Universität, 26111 Oldenburg, Germany, flick@informatik.uni-oldenburg.de

∗ This work is supported by the German Research Foundation (DFG), grant GRK 1765 (Research Training Group – System Correctness under Adverse Conditions).

Abstract: We propose a new specification language for the proof-based approach to verification of graph programs by introducing µ-conditions as an alternative to existing formalisms which can express many non-local properties of interest. The contributions of this paper are the lifting of constructions from nested conditions to the new, more expressive conditions and a proof calculus for partial correctness relative to µ-conditions. Most importantly, we prove the correctness of a new construction to compute weakest preconditions with respect to finite graph programs.

Keywords: correctness, graph programs, non-local graph conditions, weakest precondition calculus, proof calculus

1 Introduction

Graph transformations provide a formal way to model the graph-based behaviour of a wide range of systems by way of diagrams, amenable to formal verification. One approach to verification proceeds via model checking of abstractions, notably Gadducci et al., Baldan et al., König et al., Rensink et al. [GHK98, BKK03, KK06, RD06]. This can be contrasted with the proof-based approaches of Habel, Pennemann and Rensink [HPR06, HP09] and Poskitt and Plump [PP13]. Here, state properties are expressed by nested graph conditions, and a program can be proved correct with respect to a precondition c and a postcondition d. Figure 1 presents a schematic overview of the approach, which is also our starting point.

Figure 1: Overview of the proof-based verification approach. A precondition calculus computes the weakest precondition of the postcondition d with respect to the graph program; a prover compares it with the precondition c and answers "yes, correct", "no" or "unknown".

Correctness proofs are done in the style of Dijkstra's [Dij76] predicate transformer approach in Pennemann's thesis [Pen09], while Poskitt's thesis [Pos13] features a Hoare [Hoa83] logic for partial and total correctness. Both works are based on nested conditions, which cannot express non-local properties of graphs, such as connectivity. In this paper, we consider non-local properties, and we present an extension to the proof calculus from [Pen09]. Our formalism is an extension of nested conditions by recursive definitions. Several extensions of nested graph conditions to non-local conditions already exist (Radke [Rad13], Poskitt and Plump [PP14]). We argue that, as opposed to the former, ours offers a weakest precondition calculus that can handle any condition expressible in it. As compared to the latter, which relies more heavily on expressing properties directly in (monadic second-order) logic, ours is more closely related to nested conditions and shares the same basic methodology.
Thus µ -conditions offer a viewpoint sufficiently different from existing ones to be worth investigating. This paper is structured as follows: Section 2 recalls graph programs and conditions. Section 3 introduces µ -conditions and Section 4 defines correctness under µ -conditions and the proof cal- culus, together with proofs of the results and a (small) exemplary application of the method. Section 5 provides context by listing related work. Section 6 concludes with an outlook. 2 Graph Conditions and Programs In this section, we introduce graph conditions and graph programs. We assume familiarity with graph transformation systems in the sense of Ehrig et al. [EEPT06], and the basic notions of category theory. For practical approaches to semi-automatic theorem proving in this context, we also refer the reader to Pennemann [Pen09]. In this paper, all graphs are assumed to be finite. Notation The domain and codomain of a morphism f : G → H are denoted by dom( f ) = G and cod( f ) = H. Curly arrows f : G ↪→ H denote injective morphisms (monomorphisms) while double-ended arrows f : G � H denote surjective ones (epimorphisms). M denotes the class of all graph monomorphisms. An isomorphism of graphs is both a mono- and an epimorphism. A partial morphism is a pair of monomorphisms with the same domain. /0 denotes the empty graph. Let us recall nested conditions, initially due to Habel and Pennemann. Finite nested conditions are known to be equally expressive as graph-interpreted first-order predicate logic. Definition 1 (Nested Graph Conditions) Let Cond be the class of nested conditions, defined inductively as follows (where P,C′,C are graphs): • If J is a countable set and for all j ∈ J, c j is a condition (over P), then ∨ j∈J c j is a condition (over P). This includes the case J = /0 (for any P). • If c is a condition (over P), then ¬c is also a condition (over P). • If a : P ↪→ C′ is a monomorphism, ι : C ↪→ C′ is a monomorphism and c′ is a condition (over C), then ∃(a,ι,c′) is a condition (over P). Notation We call c′ a direct subcondition of ∃(α,ι,c′), ¬c′ and c′∨c′′ and use subcondition for the reflexive and transitive closure of this syntactically defined relation. If c is a condition over P, then P is its type1, denoted c : P, and CondP is the class of all conditions over P. The usual abbreviations define the other standard operators: ∧ is ¬ ∨ ¬, ∀ is ¬∃¬. No morphism satisfies the disjunction over the empty index set. To avoid special cases, we write it as ⊥ (false), and > (true) for ¬⊥, though technically for each graph P there is one ⊥ : P and one > : P. We write ∃(a) for ∃(a,ι,>), ∃(a,c) for ∃(a,idcod(a),c) and ∃−1(ι,c) for ∃(idcod(ι),ι,c). 1 When we mention “type graphs” in the text, we just mean graphs used as types. Selected Revised Papers from GCM 2015 2 / 20 ECEASST With finite index sets, one obtains the finite nested conditions. The morphism ι serves to unse- lect2 a part of C′. Our extension is similar to lax conditions [RAB+15], but slighter in scope. Definition 2 (Satisfaction) A monomorphism f : P ↪→ G satisfies a condition c : P, denoted f |= c, iff c = >, c = ¬c′ and f 6|= c′, or c = ∨ j∈J c j and there is a j ∈ J such that f |= c j, or c =∃(a,ι,c′) (where a : P ↪→C′, ι : C ↪→C′, c′ : C) and there exists a monomorphism q : C′ ↪→ G such that f = q◦a and q◦ι |= c′. ∃(P C′ C, ) G a ι qf q◦ι c′ |= A graph G satisfies a condition c : /0 if and only if the unique morphism /0 ↪→ G satisfies c. Notation The symbol ≡ denotes logical equivalence, i.e. 
for conditions c,c′ : P, c ≡ c′ if and only if for all monomorphisms m with domain P, ⇒ m |= c ⇔ m |= c′. Remark 1 (No Added Expressivity) Our conditions with ι are equally expressive as the nested conditions defined in [Pen09]. The proof, omitted, relies on the transformation A from [Pen09]. Notation As one can see in Fig. 2, the notation for graph conditions often only depicts source or target graphs of morphisms. The small blue numbers show the morphisms’ node mappings. We also adopt the convention of representing the morphism ι in a situation ∃(a,ι,xi) implicitly: we prefer to annotate the variable’s type graph with the images of items under ι in parentheses. ∃ ( /0 ↪→ 1 2 ←↩ 2 ,¬∃ ( 2 ↪→ 2 ←↩ 2 ,> )) ≡ ∃ ( 1 2 ,¬∃ ( 2 )) Figure 2: A nested graph condition, in full and in abbreviated notation (stating the existence of two nodes linked by an edge, the second node not having a self-loop). Next, we introduce graph transformations. We follow the double pushout approach with injec- tive rules and injective matches. For technical reasons, we define graph transformations in terms of four elementary steps, namely selection, deletion, addition and unselection. Deletion and addition always apply to a selected subgraph, and selection and unselection allow the selection to be changed. skip is a no-op used in the definition of sequential composition. The definition below allows for somewhat more general combinations of the basic steps, which cannot be ex- pressed as sets of graph transformation rules. Another reason for breaking up rules into more elementary steps is to make constructions and proofs easier to follow. Its only complication is that programs can only be composed if they agree on the currently selected subgraph, called the interface. This ensures that an addition is performed in the same place as the deletion. A graph transformation rule is then nothing else than a selection; a deletion; an addition; an unselection. The semantics of a graph program is a triple of two monomorphisms and one partial morphism. The two monomorphisms represent the selected subgraphs before and after the execution of the 2 We will use the term “unselection” anytime a morphism is used in the inverse direction: in Def. 1, the morphism ι is used to base subconditions on a smaller subgraph, in effect reducing the selected subgraph; it will also appear in our definition of graph programs as the name of an operation that reduces the current selection, i.e. the subgraph the program is currently working on – similarly for “selection”. Unselection is an indispensable part of our formalism. 3 / 20 Volume 73 (2016) Recursively Nested Conditions program respectively, and the partial morphism records the changes effected by the program. Our programs are a proper subset of those in Pennemann [Pen09], and use the same semantics. Definition 3 (Graph Programs) In the following table, x, l, r, y, min and mout are monomorph- isms, with x, l, r and y arbitrarily chosen to define a program step, while the interfaces min (input) and mout (output) are universally quantified in the set comprehensions that appear in the defini- tions below. Each triple (min,mout,(pl, pr)) has cod(pl)=dom(min) and cod(pr)=dom(mout). 
Name Program P Semantics JPK selection Sel(x) {(min,mout,x) | mout ◦x = min} deletion Del(l) {(min,mout,l−1) | ∃l′,(mout,l,min,l′) pushout} addition Add(r) {(min,mout,r) | ∃r′,(min,r,mout,r′) pushout} unselection U ns(y) {(min,mout,y−1) | mout = min ◦y} skip skip {(m,m,iddom(m)) | m ∈M} If P and Q are graph programs, so are their disjunction {P,Q} and sequence P; Q. The seman- tics of disjunction is a set union JPK∪JQK and the semantics of sequence is JP; QK ={(m,m′, p) |∃(m,m′′, p′)∈ JPK,(m′′,m′, p′′)∈ JQK, p = p′; p′′}, where partial morphisms p′ = (l1,r1), p′′ = (l2,r2) compose as p′; p′′ = (l1 ◦ l′2,r2 ◦r ′ 1) using the pullback (r ′ 1,l ′ 2) of (r1,l2). If P is a graph program, so is its iteration P∗: JP∗K = ⋃ j∈NJP jK where P j = P; P j−1 for j ≥ 1 and P0 = skip. Remark 2 The definitions generalise the state transitions in plain graph transformation: a rule ρ = (L l←↩ K r↪→R) is exactly simulated by the program Sel(/0 ↪→L); Del(l); Add(r);U ns(/0 ↪→R). 3 µ -Conditions In this section, we define µ -conditions on the basis of nested graph conditions. As opposed to nested conditions, the ones defined here can express path and connectivity properties, which frequently arise in the study of the correctness of programs with recursive data structures, or in the modelling of networks. We then define and prove the correctness of some basic constructions. An example is provided at the end of this section to illustrate the constructions step by step. 3.1 Defining µ -Conditions Nested conditions are a very successful approach to the specification of graph properties for verification. However, they cannot express non-local properties such as connectedness. Our idea is to generalise nested conditions to capture certain non-local properties by adding recursion. The resulting formalism is similar to first order fixed point logics, see e.g. Kreutzer [Kre02]. The reader might want to compare our µ -conditions to a distinct formalism for expressing non-local properties, the very powerful grammar-based HR∗ conditions of Radke [Rad13]. We argue that µ -conditions are worth looking into despite the availability of strong contenders for an extension of nested conditions to non-local properties, such as MSO-conditions [PP14] because our approach provides a new and different generalisation of nested conditions, also is it not obvious how the respective expressivities compare.3 Specifically, we show in this section that 3 See Section 5 for a summary of related work on non-local graph condition formalisms. Selected Revised Papers from GCM 2015 4 / 20 ECEASST the weakest liberal precondition transformation, core of the Dijkstra-style approach, carries over. Notation Sequences (of graphs, placeholders, morphisms) are written with a vector arrow ~P,~x, ~f , and their components are numbered starting from 1. The length of a sequence ~P is denoted by ‖~P‖. Indexed typewriter letters x1 stand for placeholders, i.e. variables. The notation c : P indicating that c has type P is also extended to sequences: ~c : ~P (provided ‖~c‖=‖~P‖). To define fixed point conditions, we need something to take fixed points of, and to enforce existence and uniqueness. Choosing a partial order on Cond~P, one can define monotonic oper- ators on Cond~P. The semantics of satisfaction already defines a pre-order: c ≤ c ′ if and only if every morphism that satisfies c also satisfies c′, which is obviously transitive and reflexive. 
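For instance, writing ∅ for the empty graph, • for the one-node graph and • • for the discrete graph with two nodes (local notation used only in this illustration), the condition ∃(∅ ↪→ • •) lies below ∃(∅ ↪→ •) in this pre-order: any graph admitting a monomorphism from • • also admits one from •, so every morphism satisfying the former condition also satisfies the latter, while a one-node graph shows that the converse fails. Two syntactically distinct conditions can also be comparable in both directions, e.g. ∃(∅ ↪→ •) and ∃(∅ ↪→ •) ∨ ∃(∅ ↪→ • •) are satisfied by exactly the same morphisms; such pairs are identified by the equivalence considered next.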
As in every pre-order, ≤∩≤−1 is an equivalence relation compatible with ≤ and comparing rep- resentants via ≤ partially orders its equivalence classes. We introduce variables as placeholders where further conditions can be substituted4. To represent systems of simultaneous equations, we work on tuples of conditions. If ~P = P1,...,P‖~P‖ is a sequence of graphs, then Cond~P is the set of all ‖~P‖-tuples ~c of conditions, whose i-th element is a condition over the i-th graph of ~P. Satisfaction is defined component-wise: ~f |=~c if and only if ∀k ∈{1,...,‖~P‖} fk |= ck. Disjunctions ∧ and conjunctions ∨ of countable sets of CondP conditions, which by definition exist for any P, are easily seen to be least upper resp. greatest lower bounds of the sets. This makes Cond≡P a complete lattice. Let Cond~P be ordered with the product order by defining ~f |=~c to be true when the conjunction holds. This again induces a partial order on the set of equivalence classes, Cond≡~P. Thus, Cond≡~P is also a complete lattice, and a monotonic operator F has a least fixed point (lfp), given by the limit of ~Fn( −→ ⊥) for all n ∈ N, by the Knaster-Tarski theorem [Tar55]. This is crucial in the definition of a µ -condition. We extend the inductive definition Def. 1 by placeholders, and define substitutions of conditions for placeholders: Definition 4 (Graph Conditions with Placeholders) Given a graph P and a finite sequence ~P of graphs, a condition with placeholders from ~P over P is a (graph) condition with placeholders is either ∃(a,ι,c), or ¬c, or ∨ j∈J c j, or xi, 1 ≤ i ≤‖~P‖ where xi is a variable of type Pi. Definition 5 (Substitution) If F is a condition with placeholders ~x of types ~P and ~c ∈ Cond~P, then F[~x/~c] is obtained by substituting ci for each occurrence of xi for all i ∈{1,...,‖~P‖}. Satisfaction of such a condition by a morphism f is defined relative to a valuation val, which is an assignment of > or ⊥ to each monomorphism of the type graph of the variable into cod( f ), by f |= xi iff val(xi) => (where dom( f ) = Pi and xi : Pi). As discussed above, a lfp shall be defined only up to logical equivalence. To guarantee its existence, the operator must be monotonic (~c ≤ ~d ⇒~F(~c)≤~F(~d) for any ~c, ~d ∈Cond~P). The following remark is very useful: Remark 3 The least fixed point of ~F is equivalent to ∨ n∈N ~Fn( −→ ⊥). Proof. This is a fixed point because~F( ∨ n∈N ~Fn( −→ ⊥)) = ∨ n∈N−{0} ~Fn( −→ ⊥) =⊥∨ ∨ n∈N−{0} ~Fn( −→ ⊥) 4 Note that in our approach variables stand for subconditions, not for attributes or parts of graphs. Wherever confusion with similarly named concepts from the literature could arise, we use the word “placeholder” for “variable”. 5 / 20 Volume 73 (2016) Recursively Nested Conditions = ∨ n∈N ~Fn( −→ ⊥). It is the least fixed point because any other fixed point must also be a least upper bound of all ~Fn( −→ ⊥) and therefore greater or equal to the one proposed. Definition 6 (µ -Condition) Given a finite list ~P and conditions {Fi}i∈{1,...,‖~P‖} with placehold- ers ~x : ~P (Fi having type Pi and so on), then µ[~x]~F(~x) denotes the least fixed point (lfp) of the operator that to any ~c assigns ~F[~x/~c]. A µ -condition is a pair (b,l) consisting of a condition with placeholders b, and a finite list of pairs l = (xi,Fi(~x)) of a variable xi : Pi and a condition Fi(~x) : Pi, with placeholders from ~x, for some graph Pi, such that ~F is monotonic. We allow µ -conditions with open variables (i.e. not occurring as left-hand sides). 
For a least fixed point of a subset of variables to exist, the system of equations must correspond to a mono- tonic operator under any valuation of the open variables. An operator is said to be monotonic in a subset of variables when it is monotonic under any valuation of the remaining variables. Notation We write the list of pairs l = (xi,Fi(~x))i∈{1,...,‖~P‖} as a system of equations~x = ~F(~x). We call b the main body and l the recursive specification of (b,l) (and Fi(~x) the body or right hand side of the variable xi in l, or the i-th component of ~F ), and ~F(~c) is understood as substitu- tion of conditions ~c for the variables~x. ~F is said to define the variables~x. Such systems of equations may be used in a broader sense, to define nested fixed points: Definition 7 (Transitive Variable Use) Let {Fi}i∈I} be a list of conditions as in Def. 6. The use relation of F, ;F, is defined on literals {xi,¬xi}i∈I by xi ;F x j (¬x j) iff x j occurs as a subcondition under an even (odd) number of negations in Fi. The transitive use paths of F are all sequences of literals πp1...πpm such that ∀1≤i or x j′, j 6= j′ (trivial), or x j (monotonic). The other cases are negation, disjunction and existential quantifiers. Examining Def. 2, negation interchanges both even/odd and monotonicity/antitonicity. Disjunction, defined via propositional logic too, is monotonic, quantifiers ∃(a,ι,c′) are monotonic in c′. Hence the latter two cases do not affect either property. If all components of ~F are monotonic in x j, then so is ~F . Porting the argument to stratified systems merely requires checking the monotonicity of each stratum. Remark 5 (First Example: µ -Conditions are More General than Nested) 1. µ -conditions generalise nested conditions, consequently all examples for nested condi- tions are examples for µ -conditions (with no variables or equations). 2. µ -conditions are strictly more general than nested conditions: the following expresses the existence of a path of unknown length between two given nodes. x1 [ 1 2 ] where x1 [ 1 2 ] = ∃ ( 1 2 ) ∨∃ ( 1 2 3 ,x1 [ 1(3) 2(2) ]) It reads as follows: the word “where” stands between main body and equations. The only variable is x1. Its type graph is indicated in square brackets. The second existential quantifier uses a morphism to unselect node 1 and the sole edge: its source is the type graph of x1, which is syntactically required for using the variable in that place. The unselection morphism ι is not written as an arrow, instead it is expressed in compact notation by appending small blue numbers in parentheses to the node numbers in its source graph to specify the mapping. To ease reading, we adopt the convention to always use the same layout for the type graph of a given variable. Let us motivate the necessity of “unselection”. The nesting depth n : Cond → N is defined as n( ∨ j∈J c j) = max({0}∪{n(c j) | j ∈ J}), n(¬c) = n(c), n(∃(a,ι,c)) = n(c)+ 1, (n(xi)=0). Lemma 2 (Absorption) Any condition with placeholders c = ∃(a,ι,c′) where a and ι are iso- morphisms is equivalent to a condition of smaller nesting depth (or equal if n(c) = 1). Proof. Define the reduced condition ra,ι (c′) thus: if c′ = ∨ j∈J c j, then ra,ι (c ′) = ∨ j∈J ra,ι (c j). If c′ =¬c′′, then ra,ι (c′) =¬ra,ι (c′′). If c′ =∃(a′,ι′,c′′), then ra,ι (c′) =∃(a′◦ι−1 ◦a,c′). Directly from Def. 2, ra,ι (c′)≡c and at the same time, n(ra,ι (c′)) = n(c′) = n(c)−1. The only case where nesting depth does not decrease is when c′ is a variable, resulting in nesting depth 1. 
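Returning to the example of Remark 5, the characterisation of Remark 3 can be made concrete. If the pictorial graphs of Remark 5 are read so that the base case asserts an edge from node 1 to node 2, and the recursive case asserts an edge from node 1 to a fresh node 3 and re-enters x1 at the pair (3, 2) (a reading of the drawings which matches the intended path interpretation), then, writing F for the operator given by the single equation, the first iterates are

F¹(⊥) ≡ "there is an edge from 1 to 2",
F²(⊥) ≡ "there is an edge from 1 to 2" ∨ "there is a node 3 with edges from 1 to 3 and from 3 to 2",

and in general Fⁿ(⊥) is a finite nested condition stating that node 2 can be reached from node 1 by a directed path of length between 1 and n. The least fixed point ∨n∈N Fⁿ(⊥) therefore expresses reachability by a directed path of arbitrary finite length, which is exactly the kind of non-local property that finite nested conditions cannot capture.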
Remark 6 (Why ι ) Any µ -condition b |~x =~F(~x) where ι is the identity in all subconditions of b and of the components Fi(~x) is equivalent to a nested condition. Proof. Decompose ~F by Lemma 1 such that each stratum ~FIm only defines variables of the same type. This is indeed possible since with no non-trivial unselection, a variable may transitively depend on itself only via a morphism that is both injective and surjective. Induction over the number of strata: for each ~FIm , after each step of the lfp iteration, the nesting level can be reduced by Lemma 2 whenever ∃(a,ι,c) with a isomorphism occurs. Hence an equivalent condition of nesting level 0 or 1 can be reached. It must be a Boolean combination of conditions of the 7 / 20 Volume 73 (2016) Recursively Nested Conditions form ∃(a,ι,xi) with isomorphisms a and ι . Finitely many distinct conditions of this form, hence finitely many distinct Boolean combinations, exist. The monotonic operator ~FIm thus converges after finitely many steps to a finitely deeply nested condition with placeholders, for which the next stratum’s lfp, by induction hypothesis possessing the desired property, is substituted. Definition 9 (Satisfaction) b |~x =~F(~x) with ~x : ~P is satisfied by f iff f |= b[~x/µ[P]F]. This means that the µ -condition b |~x =~F(~x) can be understood by substituting the lfp solution of the system of equations ~x =~F(~x) in the main body b (for stratified systems, use appropriate nested fixed points). Satisfaction of µ -conditions with open variables is analogous to satisfaction of conditions with placeholders, i.e. requires a valuation to be given. Remark 7 (Finite Nesting) By the “infinite disjunction” characterisation of the lfp, any µ -con- dition is equivalent to an infinite nested condition. Infinitely deep nesting is not needed because the characterisation in Rem. 3 yields a countable disjunction of finitely deeply nested conditions. A morphism satisfies a given µ -condition if and only if it satisfies the finite nested condition obtained by unrolling the recursive specification up to some finite depth: Proposition 1 (Satisfaction at Finite Depth) f |= b |~x =~F(~x) iff ∃n ∈N, f |= b[~x/~Fn( −→ ⊥)]. Proof. The lfp is equivalent to ∨ i∈N ~Fi( −→ ⊥), which is satisfied by f iff at least one ~Fn( −→ ⊥) is. Theorem 1 (Deciding Satisfaction of µ -Conditions) Given a morphism f : P ↪→ G and a µ - condition c, it is decidable whether f satisfies c. Proof. The following algorithm CheckMu decides f |= c. For the type graph Pi of each variable xi, list all monomorphisms mik : Pi ↪→ G. Build a table which records in each column a Boolean value for each pair (xi,mik). The entries in column j + 1 are computed by evaluating satisfaction of the right hand side corresponding to the row’s variable by the morphism mik associated with the row, under the valuation given by column j. Stop after producing two adjacent columns with the same entries. Output the value of the main body under that valuation. The algorithm is cor- rect because the j-th column corresponds to satisfaction by ~F j( −→ ⊥), by definition. It terminates because of monotonicity: as values never change back to ⊥ from > while progressing through the columns, there is a finite number j∗ ∈N such that ~F j ∗ is satisfied by f iff ~F j ∗+1 is. 
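The table iteration in the proof of Theorem 1 can be phrased as a small fixed-point loop. The following Python sketch only illustrates that loop and is not part of the formal development: the graph-specific ingredients (enumerating the monomorphisms of a variable's type graph into G, and evaluating a right-hand side or the main body at a given monomorphism under a valuation) are assumed to be supplied as functions.

    from typing import Callable, Dict, Hashable, List, Tuple

    Valuation = Dict[Tuple[int, Hashable], bool]

    def check_mu(num_vars: int,
                 monos: Callable[[int], List[Hashable]],
                 eval_rhs: Callable[[int, Hashable, Valuation], bool],
                 eval_main: Callable[[Valuation], bool]) -> bool:
        """Decide satisfaction of a mu-condition by the table iteration of Theorem 1.

        monos(i)            -- all monomorphisms of the type graph of variable i into G
        eval_rhs(i, m, val) -- satisfaction of the right-hand side F_i by m under val
        eval_main(val)      -- satisfaction of the main body under val
        """
        # Column 0 of the table: the valuation "bottom" (everything false).
        rows = [(i, m) for i in range(num_vars) for m in monos(i)]
        val: Valuation = {row: False for row in rows}
        while True:
            # Column j+1 is obtained by evaluating every right-hand side under column j.
            new_val = {(i, m): eval_rhs(i, m, val) for (i, m) in rows}
            if new_val == val:
                # Two adjacent columns agree: the least fixed point has been reached.
                break
            val = new_val  # by monotonicity, entries only ever flip from False to True
        return eval_main(val)

On the reachability condition of Remark 5, for example, the rows correspond to the ordered pairs of distinct nodes of G, and the loop behaves like the usual iterative computation of reachability.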
3.2 Weakest Liberal Preconditions of µ -conditions In this subsection, we present a construction to compute the weakest liberal precondition of a µ - condition with respect to any iteration-free graph program P (“liberal” means termination of P is not implied. It is redundant in the absence of iteration, as only iteration causes non-termination). Definition 10 (Weakest Liberal Precondition) The weakest liberal precondition (wlp) of c with respect to the program P, Wlp(P,c), is the least condition with respect to implication such that f ′ |= c ⇒ f |= Wlp(P,c) if ( f , f ′, p)∈ JPK for some partial morphism p. Selected Revised Papers from GCM 2015 8 / 20 ECEASST We show that under this assumption there is a µ -condition that expresses precisely the weakest liberal precondition of a given µ -condition with respect to a rule, and it can be computed. The result is similar to the situation for nested conditions. To derive it, we use the shift transformation Am(c) from [Pen09] whose fundamental property is to transform any nested condition c into another nested condition such that m′′ |= Am(c) if and only if m′′◦m |= c for all composable pairs m′′, m of monomorphisms (Lemma 5.4 from [Pen09]). Since this and similar constructions play an important role in this section, we recall the case c =∃(a,c′): if (m′,a′) is the pushout of (m,a), let Epi be the set of all epimorphisms e with domain cod(m′) that compose to monomorphisms b := e◦a′ and r := e◦m′. Then Am(∃(a,c′)) = ∨ e∈Epi∃(b,Ar(c′)). With help of the unselection ι in ∃(a,ι,c), it is at first glance trivial to exhibit a weakest lib- eral precondition with respect to U ns(y). However, to handle the addition and deletion steps, a construction becomes necessary that makes the affected subgraph explicit again. This informa- tion is crucial to obtain the weakest liberal precondition with respect to Add(r) and Del(l) and must not be forgotten at any nesting level in order to obtain the correct result. To that aim, we define a partial shift construction which makes sure that the type graph of the main body is never unselected in the µ -condition but is instead mapped in a consistent way as a subgraph of the type graph of each variable. The following serves to obtain the new type graphs: Construction 1 (New type graphs for partial shift) We assume that an arbitrary total order on all graph morphisms is fixed. If c = b |~x = µ[~K]~F(~x) is a µ -condition, then for a variable xi of ~K, XR,c(xi) is defined as the sequence of morphisms ~f obtained as below, in ascending order. The new list of variables ~K′ and their respective types ~P′ are obtained by concatenating all XR,c(xi) of the variables of ~K in order. The morphisms f are obtained from ~P′ by collecting all epimorphisms that compose to monomorphisms with the pushout morphisms in the diagram: /0 R Pi X P ′ j f Lemma 3 (Decomposing “exists”) ∃(a,ι,c)≡∃(a,∃−1(ι,c)) Proof. f |=∃(a,∃−1(ι,c)) ⇔∃q ∈M, f = q◦a∧q◦idcod(a) |=∃−1(ι,c) ⇔∃q ∈M, f = q◦a∧∃q′ ∈M,q◦idcod(a) = q′◦idcod(ι)∧q′◦ι |= c′ ⇔∃q ∈M, f = q◦a∧∃q′ ∈M,q = q′∧q′◦ι |= c′ ⇔∃q ∈M, f = q◦a∧q◦ι |= c′ ⇔ f |=∃(a,ι,c) Construction 2 (Partial shift Px,y) Given monomorphisms x : P ↪→ H and y : R ↪→ H, we define the partial shift of b | µ[~K]F with respect to (x,y) as Px,y(b | µ[~K]F) := Px,y(b) | µ[~K′]F′, where the new equations are obtained by applying P f ,y to the variables of the left hand sides with all possible morphisms f from R, as below, and accordingly to the right hand sides. 
On condition bodies, Px,y is defined as follows: Boolean combinations of conditions are transformed to the cor- responding combinations of the transformed members. Px,y(xi) := x x,y i if xi : P, where x x,y i : H is a new variable, H = cod(y). Quantification is processed separately from unselection (Lemma 3): Px,y(∃(P a ↪→C′,c′)) = ∨ Epi∃(H ↪→ E,Pb,r◦y(c′)), where Epi is the set of all epimorphisms e with 9 / 20 Volume 73 (2016) Recursively Nested Conditions domain H′ that compose to monomorphisms r = e◦x′ and b = e◦h with the pushout morphisms (diagram below left). Px,y(∃−1(C′ ι ←↩ C,c′)) = ∃−1(ι′,Pi,y′(c′)): form the pullback of r◦ι and b◦y, then pushout the obtained morphisms to (y′,i) (diagram below right): P C′ H H′ E R a y h x x′ e b r r◦y C′ C E J R B x ι y i ι ′ y′ Remark 8 ((Un)ambiguous Variable Contexts) Note that in a µ -condition it is not necessarily true that in all contexts where xi is used, it appears with the same morphism R ↪→ Pi (where R is the type of b). It is however possible to equivalently transform every µ -condition into a “normal form” that has that property. Applying PidR,idR will by construction result in a µ -condition with unambiguous inclusions R ↪→ Pi for all variables (namely the morphisms from the sequences XR,c), and this property is also preserved by the constructions introduced later in this section. Unreachable variables created by X and P can be pruned to obtain an equivalent µ -condition. Equivalence of conditions with placeholders (unlike µ -conditions) is defined for conditions using the same sets of variables, as equivalence in the sense of nested conditions under any valuation. We extend A to conditions with placeholders by defining Am(x) to be ∃(idcod(m),m,x) if x : P. We show below that Px,y is equivalent to Ax. The reason for introducing Px,y is to gain precise control over the types of the variables in the transformed condition, which should all include the type graph of the main body. Intuitively, as this corresponds to the currently selected subgraph of a graph program, additions and deletions are applied to that subgraph and one must ensure that the changes apply to the whole µ -condition. Three minor lemmata are required: Lemma 4 (Removal of Unselection) If c′ is a condition with placeholders, then ∃(a,ι,c′) ≡ ∃(a,A(ι,c′)) (A being Pennemann’s shift as described earlier in this subsection). Proof. Using the fundamental property of A, the nontrivial case being m |= ∃(a,ι,c′) ⇔∃q ∈ M,q◦a = m∧q◦ι |= c′ ⇔∃q ∈M,q◦a = m∧q |= A(ι,c′) ⇔ m |=∃(a,A(ι,c′)). Lemma 5 (Shift composition and decomposition) Given two morphisms m′′, m′, if m′′◦ m′ exists, then Am′′◦m′(c)≡ Am′′(Am′(c)) for all conditions (with placeholders) c. Proof. f |= Am′′◦m′(c)⇔ f ◦m′′◦m′ |= c ⇔ f ◦m′′ |= Am′(c)⇔ f |= Am′′(Am′(c)) (Lemma 5.4 in [Pen09]) Lemma 6 The conditions Px,y(c) and Ax(c) are equivalent. Proof. By induction over the recursion depth, and structural induction over c: If c is a variable symbol xi, then either recursion depth is 0 and the assertion is proved, since it is ⊥, or it is true because the right hand side of the equation for xi has the property. 
The case ∃(a,ι,c′) of the structural induction is handled as follows (disjunctions ranging over the suitable epimorphisms Selected Revised Papers from GCM 2015 10 / 20 ECEASST e and compositions b, r): m |= Ax(∃(a,ι,c′))⇔ m |= Ax(∃(a,A(ι,c′))) according to Lemma 4 ⇔ m |= ∨ ∃(b,Ae◦x′(Aι (c′))) by Lemma 5.4 from [Pen09] ⇔ m |= ∨ ∃(b,Ae◦ι′(Ai(c′)) by Lemma 5 (twice) ⇔ m |= ∨ ∃(b,e◦ι′,Ai(c′)) by Lemma 4 ⇔ m |= ∨ ∃(b,e◦ι′,Pi,r′(c′)) by induction hypothesis ⇔ m |= Px,r′(∃(a,ι,c′)) by Constr. 2 We introduce two transformations δ ′l(c), α′r(c) (based on auxiliary transformations δl,y(c) and αr,y(c)). These are applied to main body and right hand sides and serve to compute the wlp with respect to addition and deletion, respectively5, of a µ -condition that has already undergone partial shift. Recall the statement of Rem. 8 that partial shift fixes inclusions from the current interface to each graph occurring in the condition (as domain or codomain of a morphism a or ι ). When the condition c obtained after partial shift is evaluated on a morphism to check satisfaction, the current interface is never unselected in the recursion but appears included in each variable type. The condition α′r(c) stipulates the existence of cod(r) (the Add(r) step’s input interface) instead of dom(r) (the output interface), which is intuitively why it yields the correct expression of the wlp of c with respect to Add(r). It might well be that an occurrence of cod(r) cannot have been obtained by a rule application because the pushout demanded by the semantics of Add(r) fails to exist, in which case α′ eliminates a branch of the condition. Likewise, in δ ′l(c), cod(l) takes the place of dom(l) since this corresponds exactly to the effect of the step Del(l).6 Definition 11 (Transformations δ ′ and α′) Let c : P be a condition with placeholders. If r : K ↪→R and y : R ↪→P (resp. l : K ↪→L and y : K ↪→P) are monomorphisms, then δr,y(c) (αl,y(c)) is defined as follows: δr,y(¬c) =¬δr,y(c) and δr,y( ∨ j∈J c j) = ∨ j∈J δr,y(c j) (respectively: αl,y(¬c) = ¬αl,y(c) and αl,y( ∨ j∈J c j) = ∨ j∈J αl.y(c j)). For c =∃(a,ι,c′), decompose using Lemma 3:7 P C′ R W X K a y y′ h′ r h a′ r′r′′ C′ C R X V K ι y y′ h′ r h ι ′ r′ r′′′ ︸ ︷︷ ︸ δr,y(∃(a,c)) and δr,y(∃−1(ι,c)) P C′ K W X L a y y′ h′ l h a′ l′ l′′ C′ C K X V L ιy y′ h′ l h ι ′ l′′ l′′′ ︸ ︷︷ ︸ αl,y(∃(a,c)) and αl,y(∃−1(ι,c)) Case of δr,y(∃(a,c)): if no pushout complement of r and y′ = a◦y exists, then δr,y(c) = ⊥. Otherwise, obtain it as (x′,h′) and pullback (a,r′) to (a′,r′′) with source W ; this yields a unique morphism h from K to W to make the diagram commute. Apply the special pushout-pullback lemma [EEPT06] to the compositions h′ = a′◦h and y′ = a◦y to see that the left and top squares in the diagram are pushouts. δr,y(∃(a,c)) = ∃(a′,δr,y′(c′)). Case of δr,y(∃−1(ι,c)): Pullback (ι,r′) to (ι′,r′′′). The pullback property yields existence and uniqueness of h′ : K →V to make the diagram commute. δr,y(∃−1(a,c)) =∃−1(ι′,δr,y′(c)). Case of αl,y(∃(a,c)): pushout (y,l) to (l′,h); pushout (l′,a) to (l′′,a′). h′ is obtained as a′◦h 5 The letters were chosen so as to indicate the effect of the transformation: to compute the wlp with respect to addition, δ ′ needs to delete portions of the morphisms in the condition, and vice versa. 6 In the case of Del(l), it is possible that δ ′l(c) specifies an occurrence of l which cannot be the input of a Del(l) step, hence to obtain the actual wlp, a nested condition expressing the applicability of Del(l) must be adjoined to δ ′l(c). 
7 The morphism y′, just like y, was obtained during the partial shift; the transformations yield corresponding mor- phisms h′ from the new program interface to each graph occurring in the condition body. 11 / 20 Volume 73 (2016) Recursively Nested Conditions and the composed square is a pushout. αl,y(∃(a,c)) = ∃(a′,αl,y′(c′)). Case of αl,y(∃−1(ι,c)): let (h,l′′) be the pushout over (y,l) and (h′,l′′′) over (y′,l). The commuting morphism from the latter pushout object to X is ι′. αl,y(∃−1(ι,c)) =∃−1(ι′,αl,y′(c′)). For variables, δr,y(xi) = x′i is a new variable of type K, likewise αl(xi) has type L (see Rem. 8). Finally, δ ′r(c) = δr,id(Pid,id(c)) and α′l(c) = αl,id(Pid,id(c)). In contrast to P, the transformations α′ and δ ′ leave the number of variables unchanged. Only the types of the variables are modified. We recall that for any l : K ↪→ L, there is a condition ∆(l) that expresses the possibility of effecting Del(l), i.e. ∆(l) is satisfied exactly by the first components of tuples in JDel(l)K. We describe ∆(l) only informally: f |= ∆(l) states the non- existence of edges that are in im( f ) but incident to a node in im( f )−im( f ◦l). Now we have all ingredients for a weakest liberal precondition theorem for µ -conditions. The proofs again rely on the general theoretical framework of double-pushout rewriting [EEPT06]. Theorem 2 (Weakest Liberal Precondition for µ -conditions) For each rule ρ , there is a trans- formation Wlp ρ that transforms µ -conditions to µ -conditions and assigns to each condition c such that m′ |= c another condition Wlp ρ (c) such that m |= Wlp ρ (c) whenever (m,m′, p) ∈ JρK and Wlp ρ (c) is the least condition with respect to implication having this property. Proof. We exhibit and prove the transformation in four steps, which compose to the full rule. 1. Wlp(U ns(y),c) =∃−1(y,c) 2. Wlp(Add(r),c) = δ ′r(b) | µ~x′ = ~F′(~x′) where c = b | µ~x =~F(~x), and ~x′, ~F′ are obtained by applying δ ′r to the main body and the equations (adapting the variable types). 3. Wlp(Del(l),c) = ∆(l)⇒ α′l(c) | µ~x′ = ~F′(~x′), new equations analogous to Add(r) 4. Wlp(Sel(x,c′),c) =¬∃(x,(c′∧¬c)) The proof for Sel(x,c′) is exactly as in [Pen09], while correctness of the first step, U ns(y), is immediate from the semantics. For steps 2 and 3, we proceed by inductively comparing c to Wlp(Del(l),c) resp. Wlp(Add(r),c), which in turn requires inductively comparing conditions with placeholders that appear in the main body and right hand sides. The outer induction over N (see Rem. 3) compares the least fixed points. This takes care of the case of variables. The induction hypothesis here states that the valuation at the current iteration satisfies the hypothesis. As the variables satisfy the system of equations, we show by induction over the nesting that the construction is correct for the right hand sides under any valuation satisfying the hypothesis. The interesting case of the induction over the nesting lies in comparing satisfaction of ∃(a,ι,c) to αl,y(∃(a,ι,c)), resp. δr,y(∃(a,ι,c)). The goal is to obtain bi-implications in both cases. The diagrams depict the situation in each case (deletion and addition). The names of the morphisms are as in Def. 11. Dotted arrows represent the morphisms whose existence must be shown. In all four cases, the induction hypothesis asserts correctness of the weakest precondition construction for the morphisms named a◦y and a′◦h in Def. 
11 and the induction step concludes correctness at y and h; the ι part is much easier: composition is unequivocal and the appropriate morphisms from the program interface to the graphs P,C′... (W,X...) again exist to advance the induction. Selected Revised Papers from GCM 2015 12 / 20 ECEASST G D P C′ W X L K l′h y a d G D P C′ W X L K l∗ g D G′ P C′ W X K R D G′ P C′ W X K R Case JDel(l)K / α′ Case JAdd(r)K / δ ′ (min,mout,l−1)∈JDel(l)K, mout : K ↪→ D, min : L ↪→ G (as depicted in the diagram) and mout |= ∃(a,ι,c) via d: consider αl,y(c), where y is the morphism K ↪→ P obtained from the partial shift construction. Build the pushout over (l′,d) and compose it with the lower pushout square, which yields the outer pushout by uniqueness. The morphism g : W ↪→ G obtained in the pushout and the unique commuting morphism q : X ↪→ G yield satisfaction. (min,mout,l−1)∈JDel(l)K, mout : K ↪→D, min : L ↪→G and min |= αl,y(∃(a,ι,c)) via g: pullback l∗ and g with object P′, consider the universal morphism from K to P′ and conclude that since a canonical isomorphism P′∼= P exists by uniqueness of the pushout complement (since h : L ↪→W is a monomorphism [EEPT06]) yielding a morphism d : P ↪→ D to complete the pushout square by the special PO-PB lemma. (min,mout,r) ∈ JAdd(r)K, mout : R ↪→ G′, min : K ↪→ D and mout |= ∃(a,ι,c) via g′: pullback D ↪→ G′ and P ↪→ G′ with object W ′, then use the unique commuting morphism K ↪→W ′ which yields a decomposition of the pushout square from the semantics, which by the special PO-PB lemma consists of pushouts and by uniqueness of M-pushout complements implies W ∼= W ′. (min,mout,r) ∈ JAdd(r)K, mout : R ↪→ G′, min : K ↪→ D and min |= δr,y(∃(a,ι,c)) via g, in the same way as the opposite direction of the Del(l) case. In the case of Del(l), the condition for the pushout complement required by the semantics to exist is precisely ∆(l). In the case of Add(r), the construction of δ ′ asserts the existence of the pushout. From the induction hypothesis and universality of the morphisms constructed to complete the diagrams, the diagrams must commute and we conclude that mout |= c ⇔ min |= Wlp(Del(l),c), resp. mout |= c ⇔ min |= Wlp(Add(r),c) under the given circumstances. 3.3 A Weakest Liberal Precondition Example In this subsection, we construct a weakest liberal precondition of a µ -condition step by step. Fig. 3 shows a single-rule graph program which matches a node with exactly one incoming and one outgoing edge and replaces this by a single edge. The effect of the rule is to contract paths, and it can be applied as long as no other edges are attached to the middle node. Fig. 4 shows a µ -condition whose weakest liberal precondition we wish to compute. It is a typical example of a µ -condition, which evaluates to > on those graphs that are fully (directed-) connected, i.e. where any pair of nodes is linked by a directed path. In Fig. 7 and Fig. 8, a partial shift has been applied to the condition (Wlp(U nsc,c4)) of Fig. 5, and the modifications the condition undergoes in the computation of the weakest precondition with respect to Addc and Delc are highlighted in various colours (see Fig. 6 for a legend). Constr. 1 has yielded a new list of variables8, x1,...,x7, 8 Although the original µ -condition had only one variable, partial shift usually yields one with multiple variables. 13 / 20 Volume 73 (2016) Recursively Nested Conditions Sel ( /0 ↪→ ) ; Del ( 1 3 2 ←↩ 1 2 ) ; Add ( ↪→ ) ;U ns ( ←↩ /0 ) Figure 3: A path-contracting rule ρcontract = Selc; Delc; Addc;U nsc. 
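Before following the individual figures, note how the four transformations of Th. 2 are composed in this example. Since ρcontract = Selc; Delc; Addc; Unsc (Rem. 2) and, as stated in the proof of Th. 2, the four step transformations compose to the transformation for the full rule (the steps being processed in reverse order of execution, starting from the last one), the computation proceeds from the postcondition c4 inwards:

Wlp(ρcontract, c4) = Wlp(Selc, Wlp(Delc, Wlp(Addc, Wlp(Unsc, c4)))),

that is, first the unselection step (Fig. 5), then addition, then deletion (Figs. 7 and 8), and finally the selection step, which contributes the outermost universal quantifier mentioned below.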
∀(1 2, x1) where x1[1 2] = ∃(1 2) ∨ ∃(1 2 3, x1[1(3) 2(2)])

Figure 4: A µ-condition c4 = (b,l) expressing connectedness.

∃(3 4 ←↩ ∅, ∀(1 2, x1)) where x1[1 2] = ∃(1 2) ∨ ∃(1 2 3, x1[1(3) 2(2)])

Figure 5: Wlp(Unsc, c4). The nodes under the universal quantifier are not the same as those of the existential one, as these have been unselected: the type of the subcondition ∀(...) is ∅.

the corresponding equations are shown in Fig. 8, in abbreviated notation: variable types are suppressed in subconditions ∃(a, ι, xi) if the mapping ι from the type graph to the target of a is the identity. No other simplifications were applied. We have highlighted the type of the main body of Wlp(Unsc, c4) throughout Fig. 7: edges drawn in red are deleted to compute Wlp(Addc, Wlp(Unsc, c4)) as per Def. 11; the green edges and nodes, not present initially, are added to compute Wlp(Delc, Wlp(Addc, Wlp(Unsc, c4))) as per Def. 11, which is obtained by adjoining ∆(l) ⇒ to the main body (∆(l) is omitted in Fig. 7 as it is straightforward to compute); the yellow nodes belong to both the red and green sets. A universal quantifier with ∅ ↪→ L completes the weakest precondition with respect to the rule, as for nested conditions [Pen09]. When following the construction through the nesting levels, please keep in mind that one may sometimes choose among isomorphic pushout objects and that the numbers of new nodes are arbitrary, but the nodes 1, 2 and (as created by the transformation α′) 5 are never "unselected" and are therefore present in every type graph occurring in the weakest preconditions; similarly for the edges (not numbered, because their mapping is unambiguous in the example).

4 Correctness Relative to µ-conditions

In the previous section, we have shown how the weakest liberal precondition construction for nested conditions carries over to µ-conditions. The next task, for which we offer a partial solution in this section, is to develop methods for the deduction of correctness relative to µ-conditions by extending Pennemann's proof calculus K. We recall that K works on nested conditions which are in conjunctive normal form at each nesting level; it features rules called (supporting) lift and (partial) resolve: the former serve to lift a member of a conjunction to a deeper nesting level, conjoining its shift to the subcondition of an existential quantifier, while the latter seek to derive contradictions. The rule descend allows a member ∃(a, ⊥∧c) of a clause to be replaced by ⊥.

4.1 A Proof Calculus for µ-conditions

The soundness of K in the context of nested conditions has been established in the publications introducing them; recently, a tableaux-based completeness proof of K has been published [LO14]. The resolution-style proof rules of K are clearly sound for µ-conditions as well.

Figure 6: Legend for the partial shift and weakest precondition example (three node/edge decorations mark, respectively, the items selected for Wlp(Uns(y), c), the items to be deleted to obtain Wlp(Add(r), c), and the items to be added to obtain Wlp(Del(l), c)).

Figure 7: Construction of Wlp(Delc; Addc; Unsc, ρc): application of δ′r and α′l to the main body.

In our calculus
x1 [ 1 2 5 ] = ∃ ( 1 2 5 ) ∨∃ ( 1 2 5 ) ∨∃ ( 1 2 5 3 ,x6 [ 1(1) 2(2) 3(3) 5(5) ]) x2 [ 1 2 5 ] = ∃ ( 1 2 5 ) ∨∃ ( 1 2 5 3 ,x5 [ 1(1) 2(2) 5(5) 3(3) ]) x3 [ 1 2 3 5 ] = ∃ ( 1 2 3 5 ) ∨∃ ( 1 2 3 4 5 ,x7 [ 1(1) 2(2) 5(5) 3(3) 4(4) ]) ∨∃ ( 1 2 5 3(3) ,x4 [ 1(1) 2(2) 5(5) 3(3) ]) ∨ ∃ ( 1 2 5 3(3) ,x4 [ 1(1) 2(2) 5(5) 3(3) ]) x4 [ 1 2 5 3 ] = ∃ ( 1 2 5 3 ) ∨∃ ( 1 2 3 4 5 ,x7 [ 1(1) 2(2) 5(5) 3(3) 4(4) ]) ∨∃ ( 1 2 5 3(3) ,x3 [ 1(1) 2(2) 5(5) 3(3) ]) x5 [ 1 2 3 5 ] = ∃ ( 1 2 3 5 ) ∨∃ ( 1 2 5 3 4 ,x5 [ 1(1) 2(2) 5(5) 3(4) ]) ∨∃ ( 1 2 5 3 ,x2 [ 1(1) 2(2) 5(5) ]) x6 [ 1 2 5 3 ] = ∃ ( 1 2 5 3 ) ∨∃ ( 1 2 5 3 4 ,x6 [ 1 2 5 3(4) ]) ∨∃ ( 1 2 5 3 ,x1 [ 1(1) 2(2) 5(5) ]) x7 [ 1 2 5 3 4 ] = ∃ ( 1 2 5 3 4 ) ∨∃ ( 1 2 5 6 3 4 ,x7 [ 1(1) 2(2) 5(5) 3(6) 4(4) ]) ∨∃ ( 1 2 5 3 4 ,x4 [ 1(1) 2(2) 5(5) 3(4) ]) ∨ ∃ ( 1 2 5 3 4 ,x3 [ 1(1) 2(2) 5(5) 3(4) ]) Figure 8: Construction of Wlp(Delc; Addc;U nsc,ρc): equations for the variables. Kµ we adopt all rules of K. However dealing with recursive definitions requires an extension. The strategy used in Pennemann’s PROCON theorem prover [Pen09] (converting the condition to be refuted to a conjunctive normal form at each nesting level and deducing contradictions at the innermost nesting levels) is not applicable in the presence of recursion. Instead, we add all Boolean manipulations as rules, and propose an induction rule to deal with situations involving fixed points. This proved to be sufficient to handle all situations encountered in the examples. We employ a sequent notation: the inference rules manipulate sequents F,Γ ` ∆, where F is a system of equations, Γ and ∆ are sets of µ -condition bodies, with the intended meaning that the disjunction of ∆ can be deduced from the conjunction of Γ where the least fixed point solution of F is substituted for the variables. Additionally, variables are annotated with an arithmetic expres- 15 / 20 Volume 73 (2016) Recursively Nested Conditions sion over natural numbers and identifiers n1,..., which serve the important purpose of ensuring well-foundedness in the recursive refutation rule. Note that it is always sound to increment an annotation in an inference because by monotonicity, Fni (~⊥) ⇒ F n+1 i ( ~⊥). The context rule al- lows access to any subcondition. Ctx is a µ -condition syntactically monotonic (or antitonic) in a distinguished open variable x of same type as c, c′: F : c ` c′ (c′ ` c) F]F′ : Ctx[x/c]`Ctx[x/c′] if Ctx is monotonic (antitonic) in x (CTX) Note that variables used in x and x′ may have to be renamed in order not to conflict with those in Ctx, hence we write ]. Soundness is then immediate. Another auxiliary rule, sound by the fixed point semantics, allows unrolling xi to the i-th component Fi(~x). When used inside a nested context via Rule CTX, it replaces a specific occurrence of a variable by its right hand side: F : Γ ` ∆,x(n)i F : Γ ` ∆,Fi(~x(n−1)) Fi(~x) is the right hand side for xi in F (UNROLL1) In this rule, the annotations of the variables in the new expression are decremented: when f |= F(n)i (~⊥), then it satisfies xi in the next step of the fixed point iteration (cf. Th. 1), hence in the conclusion the variables used in the right hand side are all annotated with (n−1). We detect absurdity by exploiting the annotations (~n′<~n: whatever numbers are substituted for the identifiers of ~n′ and ~n, the comparison must hold) in a recursive refutation rule: ∀i ∈ I.Hi(~x(~n))`~G(~H(~x( ~n′))) ~G(~⊥) = ~⊥∨ i∈I .Hi(~x) =⊥ ~n′ <~n; ~G monotonic; < well-founded. 
(EMPTY) Rule EMPTY is sound: if one can to find suitable I,~H,~G, then induction over~n shows that at any level of the fixed point iteration, the expressions Hi(~x) imply absurdity. A useful instantiation is based on defining conjuncts Hi, j := ∃−1(ιi,xi)∧¬∃−1(ι j,y j) where xi and x j range over the variables of two µ -conditions whose main bodies have been combined as b∧¬b′ (this situation is frequently encountered when attempting to prove that a specified precondition implies a weakest precondition in the Dijkstra approach). The goal is to express the Hi, j in terms of (annotated versions of) each other and then to apply Rule EMPTY to deduce that in the lfp, the chosen variable combinations Hi, j(~x) are unsatisfiable. Several details require attention: Boolean operations must be extended to µ -conditions, which entails variable renaming and union of the systems of equations; rules for exploiting logical equivalences between different Boolean combinations are needed to rewrite conditions into a form suitable for the application of the rules of K ([Pen09] instead puts each Boolean combina- tion appearing as a subcondition into conjunctive normal form prior to the application of rules). Proof trees in our sequent-style calculus Kµ start with instances of the axiom (A ` A with no antecedents), and make use of all the classical sequent rules [Gen35] not involving quantifiers. As well as the major rules presented above, we use rules from K: the partial resolve rule is unchanged (¬∃(a)∧∃(b,d)∃(m∗) for a = m◦b and (m ∗,b∗) the M-pushout complement of (b,m), d 6≡⊥), the (supporting) lift rules without automatic application of shift are merely ∃(a,c)∧d∃(a,c∧∃−1(a,d)) . We also Selected Revised Papers from GCM 2015 16 / 20 ECEASST use the classical rules for Boolean logic [Gen35], structural rules for morphism decomposition and removal of trivial nesting ( ∃(a◦a ′,c) ∃(a,∃(a′,c)) , ∃(a,ι◦ι′,c) ∃(a,ι′,∃−1(ι,c)) and vice versa, ∃(id,id,c) c ) (all of these are upgraded to operate on a single condition body on the right side of a sequent). The other rules from K are adapted: the descent rule ∃(a,⊥∧c)⊥ is replaced by a more versatile absorption rule ∃(a,c) ra(c) (mirroring Lemma 2, ra is defined as in the proof of that lemma except that a need not be an isomorphism); a partial shift rule (∃ −1(ι,c) A(ι,c) ) which is correct by Lemma 4. Theorem 3 The calculus Kµ := K∪{EMPTY, UNROLL1}∪(classical+structural rules) is sound. Proof. The soundness of the K rules has been established in [Pen09], the supplementary rules have been established in the text above. 4.2 A Proof Example For this subsection, we have opted for a new minimal example without the blowup from the weakest liberal precondition in Subsection 3.3. The example (Fig. 9) uses a minimal number of variables to show the calculus Kµ and its inductive refutation rule at work. We examine the µ -condition x1 ∧¬x2, whose main body has type [ 1 2 ] . 
Consider the following system F:

x1[1 2] = ∃(1 2) ∨ ∃(1 2 3, x1[1(3) 2(2)]);  x2[1 2] = ∃(1 2) ∨ ∃(1 2 3, x2[1(3) 2(2)])

Figure 9: Deducing a contradiction from x1 ∧ ¬x2 under the system of equations F. Multiple steps have been contracted into single inference lines for the sake of brevity.

While the equations are syntactically identical up to variable renaming, this is not exploited by Kµ, hence the proof is not a one-liner: it starts by defining a suitable list of auxiliary conditions ~H (in this case actually a single one, which we name H1,2, with H1,2(~x) = x1 ∧ ¬x2)⁹ and unrolling both variables once (1); it then uses distributivity of conjunction over disjunction to resolve the base case of the right-hand side of x1 (2), and shifts the other branch of x2 over the corresponding branch of x1. In a lift-and-shift step (3), a conjunction of two subconditions is obtained (depending on whether the nodes 3 are identified). In step (4), one of these is dropped and the other is used to obtain x1 ∧ ¬x2, with lower annotations, as a subcondition. Finally we show that the context of this subcondition (monotonic by virtue of being syntactically positive) has lfp ⊥, and apply Rule EMPTY.

⁹ Note that a larger example would likely have required more than one branch to handle each conjunct.

5 Related Work

A summary overview of graph conditions for non-local properties is attempted below (a proof calculus is presented in [PP14], but completeness of a proof calculus has only recently been obtained by Lambers and Orejas [LO14] for nested conditions and remains to be researched for the other approaches). Note that while HR∗ conditions are known to properly contain the monadic second-order definable properties [Rad13] and nested conditions are a special case of each of the other three, we have not yet been able to separate µ-conditions from MSO or HR∗:

reference                 [Pen09]   (here)        [Rad13]        [PP14]
conditions                Nested    µ-            HR∗            MSO-
wlp                       yes       yes           incomplete¹⁰   yes
theorem prover            yes       future work
complete proof calculus   yes       future work

¹⁰ Radke, personal communication: construction only partially defined.

Recently, Poskitt and Plump [PP14] have presented a weakest precondition calculus for another extension of nested conditions (monadic second-order conditions) and demonstrated its use in a Hoare logic. The method is arguably closer to reasoning directly in a logic and less graph-condition-like, but seems successful at solving some of the same problems in a different way. HR∗ conditions [Rad13] are another approach towards the same goal; they have already been mentioned in the main text; there is an ongoing effort at extending the weakest precondition calculus to a subclass including path expressions. Strecker et al. [Str08, PST13] have performed verification of graph transformation systems within general-purpose theorem proving environments, with positive path conditions. Dyck and Giese [DG15] automatically check certain kinds of inductive invariants of graph transformation systems.
Verification of graph transformation systems via abstract model checking, as opposed to the prover-based approach, can be found in Gadducci et al., Baldan et al., König et al., Rensink et al. [GHK98, BKK03, KK06, RD06].

6 Conclusion and Outlook

We have introduced µ-conditions and achieved a weakest liberal precondition transformation (Th. 2) and a sound proof calculus (Th. 3) for correctness relative to µ-conditions, which seems a fruitful ground for further investigations. In analogy to the equivalence between first-order graph logic and nested graph conditions, we conjecture that µ-conditions have the same expressivity as fixed point extensions to first-order logic on finite graphs. Also, the expressivity of HR∗ conditions [Rad13] or MSO likely differs from µ-conditions, but this remains to be examined. As the examples show, our weakest precondition calculus (still a research question for HR∗ conditions [Rad13] but readily available by logical means in the MSO-conditions formalism [PP14]) produces unwieldy expressions due to partial shift. The blowup is exponential in the interface size (a related blowup is inherited from the weakest precondition calculus of [Pen09]). We can heuristically simplify the expressions and hope that many cases can be resolved automatically.

Future work includes more proof theory and tool support with special attention to semi-automated reasoning, based on the reasoning engine ENFORCE implemented in [Pen09]. To extend the wlp construction to programs with iteration, one would have to provide (or have the prover attempt to find) an invariant, as in the original work of Pennemann; for termination, one could proceed as in [Pos13] and prove termination variants. It appears that µ-conditions might readily generalise to temporal properties, even with the option to nest temporal operators inside quantifiers, which would allow properties such as the preservation of a specific node to be expressed (but would require further proof rules). This could be achieved via a next operator parameterised on atomic subprograms (the basic steps of Def. 3); since in the semantics of programs the relationship between the interfaces is deterministic, this would again confer an unambiguous type on such an expression, make it suitable for use as a subcondition, and allow the expression of eventualities as in the modal µ-calculus [BS07]. Whether this offers any new insights remains to be seen. We plan to deal with algebraic operations on attributes and extend our work to a verification method that separates the graph-specific concerns from other aspects and allows proofs of properties that depend on both, for example involving data structures with ordered elements.

Acknowledgements: We thank Annegret Habel, many other members of SCARE as well as several anonymous reviewers for providing valuable criticism of the approach and the text.

Bibliography

[BKK03] P. Baldan, B. König, B. König. A logic for analyzing abstractions of graph transformation systems. In Static Analysis. Pp. 255–272. Springer, 2003.

[BS07] J. Bradfield, C. Stirling. Modal µ-calculi. Studies in Logic and Practical Reasoning 3:721–756, 2007.

[DG15] J. Dyck, H. Giese. Inductive Invariant Checking with Partial Negative Application Conditions. In ICGT. LNCS 9151, pp. 237–253. 2015.
[Dij76] E. W. Dijkstra. A discipline of programming. Prentice Hall, 1976.

[EEPT06] H. Ehrig, K. Ehrig, U. Prange, G. Taentzer. Fundamentals of Algebraic Graph Transformation. Monographs in Theoretical Computer Science. Springer, 2006.

[Gen35] G. Gentzen. Untersuchungen über das logische Schließen. I. Mathematische Zeitschrift 39(1):176–210, 1935.

[GHK98] F. Gadducci, R. Heckel, M. Koch. A Fully Abstract Model for Graph-Interpreted Temporal Logic. In TAGT'98. LNCS 1764, pp. 310–322. 1998.

[Hoa83] C. A. R. Hoare. An axiomatic basis for computer programming. Communications of the ACM 26(1):53–56, 1983.

[HP09] A. Habel, K.-H. Pennemann. Correctness of High-Level Transformation Systems Relative to Nested Conditions. Math. Struct. in Comp. Sci. 19(2):245–296, 2009.

[HPR06] A. Habel, K.-H. Pennemann, A. Rensink. Weakest Preconditions for High-Level Programs. In ICGT 2006. LNCS 4178, pp. 445–460. 2006.

[KK06] B. König, V. Kozioura. Counterexample-Guided Abstraction Refinement for the Analysis of Graph Transformation Systems. LNCS 3920, pp. 197–211. 2006.

[Kre02] S. Kreutzer. Pure and Applied Fixed-Point Logics. PhD thesis, RWTH Aachen, 2002.

[LO14] L. Lambers, F. Orejas. Tableau-Based Reasoning for Graph Properties. In Graph Transformation. LNCS 8571, pp. 17–32. 2014.

[Pen09] K.-H. Pennemann. Development of Correct Graph Transformation Systems. PhD thesis, Universität Oldenburg, 2009.

[Pos13] C. M. Poskitt. Verification of Graph Programs. PhD thesis, University of York, 2013.

[PP13] C. M. Poskitt, D. Plump. Verifying Total Correctness of Graph Programs. Electronic Communications of the EASST 61, 2013.

[PP14] C. M. Poskitt, D. Plump. Verifying Monadic Second-Order Properties of Graph Programs. In Graph Transformation. LNCS 8571, pp. 33–48. 2014.

[PST13] C. Percebois, M. Strecker, H. N. Tran. Rule-Level Verification of Graph Transformations for Invariants Based on Edges' Transitive Closure. In SEFM 2013. LNCS 8137, pp. 106–121. 2013.

[RAB+15] H. Radke, T. Arendt, J. Becker, A. Habel, G. Taentzer. Translating Essential OCL Invariants to Nested Graph Constraints Focusing on Set Operations. In Proc. ICGT. LNCS 9151, pp. 155–170. 2015.

[Rad13] H. Radke. HR* Graph Conditions Between Counting Monadic Second-Order and Second-Order Graph Formulas. Electronic Communications of the EASST 61, 2013.

[RD06] A. Rensink, D. Distefano. Abstract graph transformation. ENTCS 157:39–59, 2006.

[Str08] M. Strecker. Modeling and Verifying Graph Transformations in Proof Assistants. ENTCS 203(1):135–148, 2008.

[Tar55] A. Tarski. A lattice-theoretical fixpoint theorem and its applications. Pacific J. Math. 5(2):285–309, 1955.