Hoare2: Hoare Logic, Part II


Review

A few exercises to get us warmed up...

Which of these is a good specification for the following program prog?
   Y ::= 0;
   WHILE 2 ≤ X DO
      X ::= X - 2;
      Y ::= Y + 1
   END;
(1) ∀ m, {{ X = m }} prog {{ Y = m/2 ∧ X = m }}
(2) ∀ m, {{ X = m }} prog {{ Y = 2 ∧ X = m MOD 2 }}
(3) ∀ m, {{ X = m }} prog {{ Y = m/2 ∧ X = m MOD 2 }}

On a piece of paper, write down a reasonable specification for the following program:
    X ::= 2;
    Y ::= X + X
Write down a reasonable specification for the following program:
    X ::= X + 1;
    Y ::= X + 1
Write down a reasonable specification for the following program:
    IFB X ≤ Y THEN
      SKIP
    ELSE
      Z ::= X;
      X ::= Y;
      Y ::= Z
    FI
Write down a reasonable specification for the following program:
    X ::= m;
    Z ::= 0;
    WHILE X ≠ 0 DO
      X ::= X - 2;
      Z ::= Z + 1
    END

Decorated Programs

A decorated program carries with it an informal proof of its own correctness.
      {{ True }} ->>
      {{ m = m }}
    X ::= m;
      {{ X = m }} ->>
      {{ X = m ∧ p = p }}
    Z ::= p;
      {{ X = m ∧ Z = p }} ->>
      {{ Z - X = p - m }}
    WHILE X ≠ 0 DO
        {{ Z - X = p - m ∧ X ≠ 0 }} ->>
        {{ (Z - 1) - (X - 1) = p - m }}
      Z ::= Z - 1;
        {{ Z - (X - 1) = p - m }}
      X ::= X - 1
        {{ Z - X = p - m }}
    END;
      {{ Z - X = p - m ∧ ¬ (X ≠ 0) }} ->>
      {{ Z = p - m }}
A decorated program represents a valid proof if each individual command is locally consistent with its accompanying assertions:
  • SKIP is locally consistent if its precondition and postcondition are the same:
        {{ P }}
        SKIP
        {{ P }}
  • The sequential composition of c1 and c2 is locally consistent (with respect to assertions P and R) if c1 is locally consistent (with respect to P and Q) and c2 is locally consistent (with respect to Q and R):
        {{ P }}
        c1;
        {{ Q }}
        c2
        {{ R }}
  • An assignment is locally consistent if its precondition is the appropriate substitution of its postcondition:
        {{ P [X ↦ a] }}
        X ::= a
        {{ P }}
  • A conditional is locally consistent (with respect to assertions P and Q) if the assertions at the top of its "then" and "else" branches are exactly P ∧ b and P ∧ ¬b and if its "then" branch is locally consistent (with respect to P ∧ b and Q) and its "else" branch is locally consistent (with respect to P ∧ ¬b and Q):
        {{ P }}
        IFB b THEN
          {{ P ∧ b }}
          c1
          {{ Q }}
        ELSE
          {{ P ∧ ¬b }}
          c2
          {{ Q }}
        FI
        {{ Q }}
  • A while loop with precondition P is locally consistent if its postcondition is P ∧ ¬b and if the pre- and postconditions of its body are exactly P ∧ b and P:
        {{ P }}
        WHILE b DO
          {{ P ∧ b }}
          c1
          {{ P }}
        END
        {{ P ∧ ¬b }}
  • A pair of assertions separated by ->> is locally consistent if the first implies the second (in all states):
        {{ P }} ->>
        {{ P' }}
    This corresponds to the application of hoare_consequence and is the only place in a decorated program where checking if decorations are correct is not fully mechanical and syntactic, but involves logical and/or arithmetic reasoning.
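For instance, here is a tiny locally consistent fragment (a sketch of our own, not part of the formal development): the assignment step is checked purely syntactically by substitution, while the ->> step needs a (trivial) bit of arithmetic reasoning:
        {{ True }} ->>
        {{ 1 ≤ 5 }}
      X ::= 1
        {{ X ≤ 5 }}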

Example: Swapping Using Addition and Subtraction

Here is a program that swaps the values of two variables using addition and subtraction (instead of by assigning to a temporary variable).
  X ::= X + Y;
  Y ::= X - Y;
  X ::= X - Y
We can prove using decorations that this program is correct — i.e., it always swaps the values of variables X and Y.
Do it together...
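Here is one way the decoration can come out (a sketch, working backward from the postconditions; m and n are names we introduce for the initial values of X and Y, and none of the natural-number subtractions truncate because X + Y is at least as large as each of X and Y):
    {{ X = m ∧ Y = n }} ->>
    {{ (X + Y) - ((X + Y) - Y) = n ∧ (X + Y) - Y = m }}
  X ::= X + Y;
    {{ X - (X - Y) = n ∧ X - Y = m }}
  Y ::= X - Y;
    {{ X - Y = n ∧ Y = m }}
  X ::= X - Y
    {{ X = n ∧ Y = m }}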

Example: Simple Conditionals

Here's a simple program using conditionals, with a possible specification:
       {{ True }}
       IFB X ≤ Y THEN
         Z ::= Y - X
       ELSE
         Z ::= X - Y
       FI
       {{ Z + X = Y ∨ Z + Y = X }}
Let's turn it into a decorated program (together)...
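One possible decoration (a sketch): each branch strengthens its precondition to the substituted form of the postcondition, and the guard tells us which disjunct holds:
       {{ True }}
     IFB X ≤ Y THEN
         {{ True ∧ X ≤ Y }} ->>
         {{ (Y - X) + X = Y ∨ (Y - X) + Y = X }}
       Z ::= Y - X
         {{ Z + X = Y ∨ Z + Y = X }}
     ELSE
         {{ True ∧ ¬(X ≤ Y) }} ->>
         {{ (X - Y) + X = Y ∨ (X - Y) + Y = X }}
       Z ::= X - Y
         {{ Z + X = Y ∨ Z + Y = X }}
     FI
       {{ Z + X = Y ∨ Z + Y = X }}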

Example: Reduce to Zero (Trivial Loop)

Here is a very simple WHILE loop with a simple specification:
        {{ True }}
        WHILE X ≠ 0 DO
          X ::= X - 1
        END
        {{ X = 0 }}
Fill in the rest of the decorations together...
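One way to complete them (a sketch): the invariant can simply be True, so every step except the final implication is immediate:
        {{ True }}
        WHILE X ≠ 0 DO
            {{ True ∧ X ≠ 0 }} ->>
            {{ True }}
          X ::= X - 1
            {{ True }}
        END
        {{ True ∧ ¬(X ≠ 0) }} ->>
        {{ X = 0 }}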
From this informal proof, it is easy to read off a formal proof using the Coq versions of the Hoare rules. Note that we do not unfold the definition of hoare_triple anywhere in this proof — the idea is to use the Hoare rules as a "self-contained" logic for reasoning about programs.

Definition reduce_to_zero' : com :=
  WHILE BNot (BEq (AId X) (ANum 0)) DO
    X ::= AMinus (AId X) (ANum 1)
  END.

Theorem reduce_to_zero_correct' :
  {{fun st ⇒ True}}
  reduce_to_zero'
  {{fun st ⇒ st X = 0}}.
Proof.
  unfold reduce_to_zero'.
  (* First we need to transform the postcondition so
     that hoare_while will apply. *)

  eapply hoare_consequence_post.
  apply hoare_while.
  Case "Loop body preserves invariant".
    (* Need to massage precondition before hoare_asgn applies *)
    eapply hoare_consequence_pre. apply hoare_asgn.
    (* Proving trivial implication (2) ->> (3) *)
    intros st [HT Hbp]. unfold assn_sub. apply I.
  Case "Invariant and negated guard imply postcondition".
    intros st [Inv GuardFalse].
    unfold bassn in GuardFalse. simpl in GuardFalse.
    (* SearchAbout helps to find the right lemmas *)
    SearchAbout [not true].
    rewrite not_true_iff_false in GuardFalse.
    SearchAbout [negb false].
    rewrite negb_false_iff in GuardFalse.
    SearchAbout [beq_nat true].
    apply beq_nat_true in GuardFalse.
    apply GuardFalse. Qed.

Example: Division

The following Imp program calculates the integer division and remainder of two numbers m and n that are arbitrary constants in the program.
  X ::= m;
  Y ::= 0;
  WHILE n ≤ X DO
    X ::= X - n;
    Y ::= Y + 1
  END;
In other words, if we replace m and n by concrete numbers and execute the program, it will terminate with the variable X set to the remainder when m is divided by n and Y set to the quotient.
Here's a possible specification:
    {{ True }}
    X ::= m;
    Y ::= 0;
    WHILE n ≤ X DO
      X ::= X - n;
      Y ::= Y + 1
    END;
    {{ n * Y + X = m ∧ X < n }}
Finish the rest together...
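One possible way to fill in the decorations (a sketch), using n * Y + X = m as the loop invariant:
    {{ True }} ->>
    {{ n * 0 + m = m }}
  X ::= m;
    {{ n * 0 + X = m }}
  Y ::= 0;
    {{ n * Y + X = m }}
  WHILE n ≤ X DO
      {{ n * Y + X = m ∧ n ≤ X }} ->>
      {{ n * (Y + 1) + (X - n) = m }}
    X ::= X - n;
      {{ n * (Y + 1) + X = m }}
    Y ::= Y + 1
      {{ n * Y + X = m }}
  END
    {{ n * Y + X = m ∧ ¬(n ≤ X) }} ->>
    {{ n * Y + X = m ∧ X < n }}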

Finding Loop Invariants

Example: Slow Subtraction

The following program subtracts the value of X from the value of Y by repeatedly decrementing both X and Y. We want to verify its correctness with respect to the following specification:
             {{ X = m ∧ Y = n }}
           WHILE X ≠ 0 DO
             Y ::= Y - 1;
             X ::= X - 1
           END
             {{ Y = n - m }}
To verify this program we need to find an invariant I for the loop. As a first step we can leave I as an unknown and build a skeleton for the proof by applying backward the rules for local consistency. This process leads to the following skeleton:
    (1)      {{ X = m ∧ Y = n }}  ->>             (a)
    (2)      {{ I }}
           WHILE X ≠ 0 DO
    (3)        {{ I ∧ X ≠ 0 }}  ->>               (c)
    (4)        {{ I [X ↦ X-1] [Y ↦ Y-1] }}
             Y ::= Y - 1;
    (5)        {{ I [X ↦ X-1] }}
             X ::= X - 1
    (6)        {{ I }}
           END
    (7)      {{ I ∧ ¬(X ≠ 0) }}  ->>              (b)
    (8)      {{ Y = n - m }}
By examining this skeleton, we can see that any valid I will have to respect three conditions:
  • (a) it must be weak enough to be implied by the loop's precondition, i.e. (1) must imply (2);
  • (b) it must be strong enough to imply the loop's postcondition, i.e. (7) must imply (8);
  • (c) it must be preserved by one iteration of the loop, i.e. (3) must imply (4).
Finish the rest together...
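One invariant that satisfies all three conditions is Y - X = n - m; here is the resulting decoration (a sketch):
      {{ X = m ∧ Y = n }} ->>
      {{ Y - X = n - m }}
    WHILE X ≠ 0 DO
        {{ Y - X = n - m ∧ X ≠ 0 }} ->>
        {{ (Y - 1) - (X - 1) = n - m }}
      Y ::= Y - 1;
        {{ Y - (X - 1) = n - m }}
      X ::= X - 1
        {{ Y - X = n - m }}
    END
      {{ Y - X = n - m ∧ ¬(X ≠ 0) }} ->>
      {{ Y = n - m }}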

Example: Parity

Here is a cute little program for computing the parity of the value initially stored in X (due to Daniel Cristofani).
    {{ X = m }}
  WHILE 2 ≤ X DO
    X ::= X - 2
  END
    {{ X = parity m }}
The mathematical parity function used in the specification is defined in Coq as follows:

Fixpoint parity x :=
  match x with
  | 0 ⇒ 0
  | 1 ⇒ 1
  | S (S x') ⇒ parity x'
  end.

Do the rest together...
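One possible decoration (a sketch), with parity X = parity m as the invariant; the final implication uses the fact that parity X = X whenever X < 2:
    {{ X = m }} ->>
    {{ parity X = parity m }}
  WHILE 2 ≤ X DO
      {{ parity X = parity m ∧ 2 ≤ X }} ->>
      {{ parity (X - 2) = parity m }}
    X ::= X - 2
      {{ parity X = parity m }}
  END
    {{ parity X = parity m ∧ ¬(2 ≤ X) }} ->>
    {{ X = parity m }}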

Example: Finding Square Roots

The following program computes the square root of X by naive iteration:
      {{ X = m }}
    Z ::= 0;
    WHILE (Z+1)*(Z+1) ≤ X DO
      Z ::= Z+1
    END
      {{ Z*Z ≤ m ∧ m < (Z+1)*(Z+1) }}
Do the rest together...
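One possible decoration (a sketch); since X never changes, we can carry X = m along with Z*Z ≤ m as the invariant:
      {{ X = m }} ->>
      {{ X = m ∧ 0*0 ≤ m }}
    Z ::= 0;
      {{ X = m ∧ Z*Z ≤ m }}
    WHILE (Z+1)*(Z+1) ≤ X DO
        {{ X = m ∧ Z*Z ≤ m ∧ (Z+1)*(Z+1) ≤ X }} ->>
        {{ X = m ∧ (Z+1)*(Z+1) ≤ m }}
      Z ::= Z+1
        {{ X = m ∧ Z*Z ≤ m }}
    END
      {{ X = m ∧ Z*Z ≤ m ∧ ¬((Z+1)*(Z+1) ≤ X) }} ->>
      {{ Z*Z ≤ m ∧ m < (Z+1)*(Z+1) }}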

Example: Squaring

Here is a program that squares X by repeated addition:
    {{ X = m }}
  Y ::= 0;
  Z ::= 0;
  WHILE Y ≠ X DO
    Z ::= Z + X;
    Y ::= Y + 1
  END
    {{ Z = m*m }}
Do the rest together...
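One possible decoration (a sketch), with Z = Y*m ∧ X = m as the invariant:
    {{ X = m }} ->>
    {{ 0 = 0*m ∧ X = m }}
  Y ::= 0;
    {{ 0 = Y*m ∧ X = m }}
  Z ::= 0;
    {{ Z = Y*m ∧ X = m }}
  WHILE Y ≠ X DO
      {{ Z = Y*m ∧ X = m ∧ Y ≠ X }} ->>
      {{ Z + X = (Y + 1)*m ∧ X = m }}
    Z ::= Z + X;
      {{ Z = (Y + 1)*m ∧ X = m }}
    Y ::= Y + 1
      {{ Z = Y*m ∧ X = m }}
  END
    {{ Z = Y*m ∧ X = m ∧ ¬(Y ≠ X) }} ->>
    {{ Z = m*m }}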

Weakest Preconditions (Advanced)

A useless (though valid) Hoare triple:
      {{ False }}  X ::= Y + 1  {{ X ≤ 5 }}
A better specification for the same program:
      {{ Y ≤ 4 ∧ Z = 0 }}  X ::= Y + 1  {{ X ≤ 5 }}
The best specification for this program:
      {{ Y ≤ 4 }}  X ::= Y + 1  {{ X ≤ 5 }}
We call Y ≤ 4 the weakest valid precondition of the command X ::= Y + 1 with respect to the postcondition X ≤ 5.
In general, we say that "P is the weakest precondition of command c for postcondition Q" if {{P}} c {{Q}} and if, whenever P' is an assertion such that {{P'}} c {{Q}}, we have P' st implies P st for all states st.

Definition is_wp P c Q :=
  {{P}} c {{Q}} ∧
  ∀ P', {{P'}} c {{Q}} → (P' ->> P).

Exercise: 1 star, optional (wp)

What are the weakest preconditions of the following commands for the following postconditions?
  1) {{ ? }}  SKIP  {{ X = 5 }}

  2) {{ ? }}  X ::= Y + Z  {{ X = 5 }}

  3) {{ ? }}  X ::= Y  {{ X = Y }}

  4) {{ ? }}
     IFB X == 0 THEN Y ::= Z + 1 ELSE Y ::= W + 2 FI
     {{ Y = 5 }}

  5) {{ ? }}
     X ::= 5
     {{ X = 0 }}

  6) {{ ? }}
     WHILE True DO X ::= 0 END
     {{ X = 0 }}
(* FILL IN HERE *)

Formal Decorated Programs (Advanced)

With a little more work, we can formalize the definition of well-formed decorated programs and mostly automate the mechanical steps when filling in decorations...

Syntax

The first thing we need to do is to formalize a variant of the syntax of commands with embedded assertions. We call the new commands decorated commands, or dcoms.

Inductive dcom : Type :=
  | DCSkip : Assertion → dcom
  | DCSeq : dcom → dcom → dcom
  | DCAsgn : id → aexp → Assertion → dcom
  | DCIf : bexp → Assertion → dcom → Assertion → dcom
            → Assertion → dcom
  | DCWhile : bexp → Assertion → dcom → Assertion → dcom
  | DCPre : Assertion → dcom → dcom
  | DCPost : dcom → Assertion → dcom.

Tactic Notation "dcom_cases" tactic(first) ident(c) :=
  first;
  [ Case_aux c "Skip" | Case_aux c "Seq" | Case_aux c "Asgn"
  | Case_aux c "If" | Case_aux c "While"
  | Case_aux c "Pre" | Case_aux c "Post" ].

Notation "'SKIP' {{ P }}"
      := (DCSkip P)
      (at level 10) : dcom_scope.
Notation "l '::=' a {{ P }}"
      := (DCAsgn l a P)
      (at level 60, a at next level) : dcom_scope.
Notation "'WHILE' b 'DO' {{ Pbody }} d 'END' {{ Ppost }}"
      := (DCWhile b Pbody d Ppost)
      (at level 80, right associativity) : dcom_scope.
Notation "'IFB' b 'THEN' {{ P }} d 'ELSE' {{ P' }} d' 'FI' {{ Q }}"
      := (DCIf b P d P' d' Q)
      (at level 80, right associativity) : dcom_scope.
Notation "'' {{ P }} d"
      := (DCPre P d)
      (at level 90, right associativity) : dcom_scope.
Notation "{{ P }} d"
      := (DCPre P d)
      (at level 90) : dcom_scope.
Notation "d '' {{ P }}"
      := (DCPost d P)
      (at level 80, right associativity) : dcom_scope.
Notation " d ;; d' "
      := (DCSeq d d')
      (at level 80, right associativity) : dcom_scope.

Delimit Scope dcom_scope with dcom.

Here's how our decorated programs look now:

Example dec_while : dcom := (
  {{ fun st ⇒ True }}
  WHILE (BNot (BEq (AId X) (ANum 0)))
  DO
    {{ fun st ⇒ True ∧ st X ≠ 0 }}
    X ::= (AMinus (AId X) (ANum 1))
    {{ fun _ ⇒ True }}
  END
  {{ fun st ⇒ True ∧ st X = 0 }} ->>
  {{ fun st ⇒ st X = 0 }}
) % dcom.

It is easy to go from a dcom to a com by erasing all annotations.

Fixpoint extract (d:dcom) : com :=
  match d with
  | DCSkip _           ⇒ SKIP
  | DCSeq d1 d2        ⇒ (extract d1 ;; extract d2)
  | DCAsgn X a _       ⇒ X ::= a
  | DCIf b _ d1 _ d2 _ ⇒ IFB b THEN extract d1 ELSE extract d2 FI
  | DCWhile b _ d _    ⇒ WHILE b DO extract d END
  | DCPre _ d          ⇒ extract d
  | DCPost d _         ⇒ extract d
  end.

The choice of exactly where to put assertions in the definition of dcom is a bit subtle. The simplest thing to do would be to annotate every dcom with a precondition and postcondition. But this would result in very verbose programs with a lot of repeated annotations: for example, a program like SKIP;SKIP would have to be annotated as
        {{P}} ({{P}} SKIP {{P}}) ;; ({{P}} SKIP {{P}}) {{P}},
with pre- and post-conditions on each SKIP, plus identical pre- and post-conditions on the semicolon!
Instead, the rule we've followed is this:
  • The post-condition expected by each dcom d is embedded in d
  • The pre-condition is supplied by the context.
In other words, the invariant of the representation is that a dcom d together with a precondition P determines a Hoare triple {{P}} (extract d) {{post d}}, where post is defined as follows:

Fixpoint post (d:dcom) : Assertion :=
  match d with
  | DCSkip P                ⇒ P
  | DCSeq d1 d2             ⇒ post d2
  | DCAsgn X a Q            ⇒ Q
  | DCIf _ _ d1 _ d2 Q      ⇒ Q
  | DCWhile b Pbody c Ppost ⇒ Ppost
  | DCPre _ d               ⇒ post d
  | DCPost c Q              ⇒ Q
  end.

Similarly, we can extract the "initial precondition" from a decorated program.

Fixpoint pre (d:dcom) : Assertion :=
  match d with
  | DCSkip P                ⇒ fun st ⇒ True
  | DCSeq c1 c2             ⇒ pre c1
  | DCAsgn X a Q            ⇒ fun st ⇒ True
  | DCIf _ _ t _ e _        ⇒ fun st ⇒ True
  | DCWhile b Pbody c Ppost ⇒ fun st ⇒ True
  | DCPre P c               ⇒ P
  | DCPost c Q              ⇒ pre c
  end.

Using pre and post, and assuming that we adopt the convention of always supplying an explicit precondition annotation at the very beginning of our decorated programs, we can express what it means for a decorated program to be correct as follows:

Definition dec_correct (d:dcom) :=
  {{pre d}} (extract d) {{post d}}.

To check whether this Hoare triple is valid, we need a way to extract the "proof obligations" from a decorated program. These obligations are often called verification conditions.

Extracting Verification Conditions

The function verification_conditions takes a dcom d together with a precondition P and returns a proposition that, if it can be proved, implies that the triple {{P}} (extract d) {{post d}} is valid.

Fixpoint verification_conditions (P : Assertion) (d:dcom) : Prop :=
  match d with
  | DCSkip Q ⇒
      (P ->> Q)
  | DCSeq d1 d2 ⇒
      verification_conditions P d1
      ∧ verification_conditions (post d1) d2
  | DCAsgn X a Q ⇒
      (P ->> Q [X ↦ a])
  | DCIf b P1 d1 P2 d2 Q ⇒
      ((fun st ⇒ P st ∧ bassn b st) ->> P1)
      ∧ ((fun st ⇒ P st ∧ ¬ (bassn b st)) ->> P2)
      ∧ (Q <<->> post d1) ∧ (Q <<->> post d2)
      ∧ verification_conditions P1 d1
      ∧ verification_conditions P2 d2
  | DCWhile b Pbody d Ppost ⇒
      (* post d is the loop invariant and the initial precondition *)
      (P ->> post d)
      ∧ (Pbody <<->> (fun st ⇒ post d st ∧ bassn b st))
      ∧ (Ppost <<->> (fun st ⇒ post d st ∧ ~(bassn b st)))
      ∧ verification_conditions Pbody d
  | DCPre P' d ⇒
      (P ->> P') ∧ verification_conditions P' d
  | DCPost d Q ⇒
      verification_conditions P d ∧ (post d ->> Q)
  end.

And now, the key theorem, which states that verification_conditions does its job correctly. Not surprisingly, we need to use each of the Hoare Logic rules at some point in the proof.

Theorem verification_correct : ∀ d P,
  verification_conditions P d → {{P}} (extract d) {{post d}}.
Proof.
  dcom_cases (induction d) Case; intros P H; simpl in *.
  Case "Skip".
    eapply hoare_consequence_pre.
      apply hoare_skip.
      assumption.
  Case "Seq".
    inversion H as [H1 H2]. clear H.
    eapply hoare_seq.
      apply IHd2. apply H2.
      apply IHd1. apply H1.
  Case "Asgn".
    eapply hoare_consequence_pre.
      apply hoare_asgn.
      assumption.
  Case "If".
    inversion H as [HPre1 [HPre2 [[Hd11 Hd12]
                                  [[Hd21 Hd22] [HThen HElse]]]]].
    clear H.
    apply IHd1 in HThen. clear IHd1.
    apply IHd2 in HElse. clear IHd2.
    apply hoare_if.
      eapply hoare_consequence_pre; eauto.
      eapply hoare_consequence_post; eauto.
      eapply hoare_consequence_pre; eauto.
      eapply hoare_consequence_post; eauto.
  Case "While".
    inversion H as [Hpre [[Hbody1 Hbody2] [[Hpost1 Hpost2] Hd]]];
    subst; clear H.
    eapply hoare_consequence_pre; eauto.
    eapply hoare_consequence_post; eauto.
    apply hoare_while.
    eapply hoare_consequence_pre; eauto.
  Case "Pre".
    inversion H as [HP Hd]; clear H.
    eapply hoare_consequence_pre. apply IHd. apply Hd. assumption.
  Case "Post".
    inversion H as [Hd HQ]; clear H.
    eapply hoare_consequence_post. apply IHd. apply Hd. assumption.
Qed.

Examples

The propositions generated by verification_conditions are fairly big, and they contain many conjuncts that are essentially trivial.

Eval simpl in (verification_conditions (fun st ⇒ True) dec_while).

(((fun _ : state ⇒ True) ->> (fun _ : state ⇒ True)) ∧
 ((fun _ : state ⇒ True) ->> (fun _ : state ⇒ True)) ∧
 (fun st : state ⇒ True ∧ bassn (BNot (BEq (AId X) (ANum 0))) st) =
 (fun st : state ⇒ True ∧ bassn (BNot (BEq (AId X) (ANum 0))) st) ∧
 (fun st : state ⇒ True ∧ ¬ bassn (BNot (BEq (AId X) (ANum 0))) st) =
 (fun st : state ⇒ True ∧ ¬ bassn (BNot (BEq (AId X) (ANum 0))) st) ∧
 (fun st : state ⇒ True ∧ bassn (BNot (BEq (AId X) (ANum 0))) st ->>
  (fun _ : state ⇒ True) [X ↦ AMinus (AId X) (ANum 1)]) ∧
 (fun st : state ⇒ True ∧ ¬ bassn (BNot (BEq (AId X) (ANum 0))) st) ->>
 (fun st : state ⇒ st X = 0)
In principle, we could certainly work with them using just the tactics we have so far, but we can make things much smoother with a bit of automation. We first define a custom verify tactic that applies splitting repeatedly to turn all the conjunctions into separate subgoals and then uses omega and eauto (a handy general-purpose automation tactic that we'll discuss in detail later) to deal with as many of them as possible.

Lemma ble_nat_true_iff : ∀ n m : nat,
  ble_nat n m = true ↔ n ≤ m.
Proof.
  intros n m. split. apply ble_nat_true.
  generalize dependent m. induction n; intros m H. reflexivity.
    simpl. destruct m. inversion H.
    apply le_S_n in H. apply IHn. assumption.
Qed.

Lemma ble_nat_false_iff : ∀ n m : nat,
  ble_nat n m = false ↔ ~(n ≤ m).
Proof.
  intros n m. split. apply ble_nat_false.
  generalize dependent m. induction n; intros m H.
    apply ex_falso_quodlibet. apply H. apply le_0_n.
    simpl. destruct m. reflexivity.
    apply IHn. intro Hc. apply H. apply le_n_S. assumption.
Qed.

Tactic Notation "verify" :=
  apply verification_correct;
  repeat split;
  simpl; unfold assert_implies;
  unfold bassn in *; unfold beval in *; unfold aeval in *;
  unfold assn_sub; intros;
  repeat rewrite update_eq;
  repeat (rewrite update_neq; [| (intro X; inversion X)]);
  simpl in *;
  repeat match goal with [H : _ ∧ _ |- _] ⇒ destruct H end;
  repeat rewrite not_true_iff_false in *;
  repeat rewrite not_false_iff_true in *;
  repeat rewrite negb_true_iff in *;
  repeat rewrite negb_false_iff in *;
  repeat rewrite beq_nat_true_iff in *;
  repeat rewrite beq_nat_false_iff in *;
  repeat rewrite ble_nat_true_iff in *;
  repeat rewrite ble_nat_false_iff in *;
  try subst;
  repeat
    match goal with
      [st : state |- _] ⇒
        match goal with
          [H : st _ = _ |- _] ⇒ rewrite → H in *; clear H
        | [H : _ = st _ |- _] ⇒ rewrite ← H in *; clear H
        end
    end;
  try eauto; try omega.

What's left after verify does its thing is "just the interesting parts" of checking that the decorations are correct. For very simple examples verify immediately solves the goal (provided that the annotations are correct).

Theorem dec_while_correct :
  dec_correct dec_while.
Proof. verify. Qed.

Another example (formalizing a decorated program we've seen before):

Example subtract_slowly_dec (m:nat) (p:nat) : dcom := (
    {{ fun st ⇒ st X = m ∧ st Z = p }} ->>
    {{ fun st ⇒ st Z - st X = p - m }}
  WHILE BNot (BEq (AId X) (ANum 0))
  DO {{ fun st ⇒ st Z - st X = p - m ∧ st X ≠ 0 }} ->>
       {{ fun st ⇒ (st Z - 1) - (st X - 1) = p - m }}
     Z ::= AMinus (AId Z) (ANum 1)
       {{ fun st ⇒ st Z - (st X - 1) = p - m }} ;;
     X ::= AMinus (AId X) (ANum 1)
       {{ fun st ⇒ st Z - st X = p - m }}
  END
    {{ fun st ⇒ st Z - st X = p - m ∧ st X = 0 }} ->>
    {{ fun st ⇒ st Z = p - m }}
) % dcom.

Theorem subtract_slowly_dec_correct : ∀ m p,
  dec_correct (subtract_slowly_dec m p).
Proof. intros m p. verify. (* this grinds for a bit! *) Qed.