2020-04-06 8:24:56 (GMT)
I have a proposal for a calculus now. It's in matryoshka/papers/2020-hosup/fool.tex. I have a proof sketch only for ground completeness, but I am quite convinced that also the nonground version is complete. What do you think? Can this calculus perform reasonably well in practice?
2020-04-06 8:41:37 (GMT)
I'll write my comments as I read.
2020-04-06 8:42:0 (GMT)
I guess literals are unordered pairs. This should be clarified.
2020-04-06 8:42:51 (GMT)
Yes, clauses and literals are as in FO Superposition.
2020-04-06 8:43:1 (GMT)
I can fix this.
2020-04-06 8:43:4 (GMT)
I have the file open.
2020-04-06 8:43:15 (GMT)
I already fixed Bruijn a couple of times.
2020-04-06 8:45:45 (GMT)
oops, I should stick to DB :-)
2020-04-06 8:46:31 (GMT)
I'm guessing terms may not contain loose bound variables?
2020-04-06 8:46:43 (GMT)
E.g. what is the exact meaning of the subterm property?
2020-04-06 8:47:26 (GMT)
Yes, subterm property only for terms that do not contain DB's.
2020-04-06 8:47:48 (GMT)
I guess defining terms as not containing loose bound variables would be a good idea.
2020-04-06 8:47:49 (GMT)
Do not contain loose DBs.
2020-04-06 8:47:55 (GMT)
right.
2020-04-06 8:48:17 (GMT)
Yes, this needs a bit of clarifying.
2020-04-06 8:48:25 (GMT)
There are a few points like this concerning the presentation that I am not sure about...
2020-04-06 8:48:27 (GMT)
Of course this is just a sketch, not the final product.
2020-04-06 8:48:42 (GMT)
What do you mean?
2020-04-06 8:48:55 (GMT)
In the sketch, it just has to be clear enough to us.
2020-04-06 8:49:16 (GMT)
great! I will take a look soon!
2020-04-06 8:49:44 (GMT)
In the final paper, everything will be defined just like in the lambda-sup paper (what's a lambda-term, what's a beta-eta-equiv. class, etc.).
2020-04-06 8:50:21 (GMT)
Yes, I hope it's good enough for you to understand, but we'll have to see how we can eventually explain these things better and maybe more elegantly.
2020-04-06 9:23:41 (GMT)
Isn't there a typo in the cases rule? The conclusion is , but wouldn't it make more sense if it were ?
2020-04-06 9:25:4 (GMT)
I was going to say the same as Sophie.
2020-04-06 9:25:20 (GMT)
Counterexample: u is some uninterpreted "true" constant.
2020-04-06 9:29:39 (GMT)
I've fixed this in the document. @Alexander Bentkamp , if you prefer the unsound rule, just revert my change. ;)
2020-04-06 9:31:55 (GMT)
I think it should be u = ⊤ because u != ⊥ is too large.
2020-04-06 9:33:48 (GMT)
Sounds reasonable.
2020-04-06 9:34:26 (GMT)
Concerning all the rules until (and excl.) Sup:
2020-04-06 9:34:39 (GMT)
Are there cases where these are simplification rules?
2020-04-06 9:35:12 (GMT)
Sure, but I haven't thought about it too much.
2020-04-06 9:35:32 (GMT)
I haven't read yet about the redundancy criterion, but I'm guessing there might be some inferences that yield conclusions that entail their premise
2020-04-06 9:36:4 (GMT)
but where you wouldn't necessarily notice it using the standard redundancy machinery
2020-04-06 9:36:43 (GMT)
But the word "redundancy" or "criterion" doesn't appear in the document.
2020-04-06 9:37:15 (GMT)
Specifically, I'd hope that BoolRw could be a simplification rule.
2020-04-06 9:37:41 (GMT)
I am confident that it can be in most cases.
2020-04-06 9:37:49 (GMT)
Right.
2020-04-06 9:37:57 (GMT)
But I haven't thought about that yet in detail.
2020-04-06 9:37:57 (GMT)
But what would the red criterion look like?
2020-04-06 9:38:1 (GMT)
W.r.t. which theory?
2020-04-06 9:39:32 (GMT)
There is no floor/ceiling layer here. So I guess it could just be "ground instances are entailed by smaller ground instances".
2020-04-06 9:52:15 (GMT)
In general, I was told Boolean should always be written with a big B (same as with De Bruijn indices, there are people that are touchy about this capitalization thing).
2020-04-06 9:55:3 (GMT)
Ok, we can do that. In my math lectures I learned that it is a real honor to be spelled lowercase, as in "abelian group". But I think that was in German...
2020-04-06 9:57:4 (GMT)
Interesting. :smiley: This may indeed depend on the culture (I got this piece of knowledge while working in Japan).
2020-04-06 9:57:34 (GMT)
Alexander Bentkamp said:
There is no floor/ceiling layer here. So I guess it could just be "ground instances are entailed by smaller ground instances".
And by "entailed" I mean entailed wrt models that interpret the Boolean connectives.
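A rough sketch of what that criterion might look like formally (my guess at a formalization, not taken from the document; G(N) denotes the ground instances of clauses in N, ≺ the clause order, and ⊨_Bool entailment w.r.t. models interpreting the Boolean connectives):

```latex
% Sketch only; symbol names are my own, not the document's.
C \in \mathit{Red}(N) \quad\text{iff}\quad
\text{for every ground instance } C\theta:\;
\{\, D\rho \in G(N) \mid D\rho \prec C\theta \,\} \models_{\mathrm{Bool}} C\theta
```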
2020-04-06 10:14:18 (GMT)
Ok. I read the document and what is cool about this approach is that it is way more lazy than what we had before and does not require dynamic clausification
2020-04-06 10:14:44 (GMT)
Some thoughts and questions
2020-04-06 10:15:46 (GMT)
I really hope BoolRw could be a simp rule at least for the cases in which T and F are being removed
2020-04-06 10:16:33 (GMT)
the conclusion and premise are equivalent in that sense and the conclusion should be smaller
2020-04-06 10:17:46 (GMT)
Do you need both the BoolRW rule that introduces skolems and the elimination of quantifiers?
2020-04-06 10:19:18 (GMT)
For BoolRW a good idea is to reuse skolem names whenever possible (of course separately for and , since I guess to skolemize you used and then skolemized and applied double negation)?
2020-04-06 10:20:34 (GMT)
Is this logic clausal? Now that logical symbols are inside terms, I think we should pay more attention to clausification than we did in previous papers
2020-04-06 10:23:7 (GMT)
and maybe lazy clausification is interesting in this context since it might allow for interesting interactions that were previously not possible
2020-04-06 10:27:40 (GMT)
(e.g. it will allow literals like that might rewrite stuff and get to a proof faster)
2020-04-06 10:28:27 (GMT)
For one side of the equation is unified with and the other with ?
2020-04-06 10:32:2 (GMT)
Even though I think it is a super smart idea to make the calculus more lazy by not casing terms that have logical symbol heads, I am wondering if there is a way to make it even more lazy
2020-04-06 10:36:59 (GMT)
When you have many intermediate clauses will be created over and over again
2020-04-06 10:40:20 (GMT)
Boolean can be spelled "boolean" or "Boolean".
2020-04-06 10:40:39 (GMT)
The Ox. Am. E. Dict. for example gives "Boolean".
2020-04-06 10:41:29 (GMT)
Unfortunately there's no general rule that applies to all people names. "Abelian" is lowercase in the same dictionary.
2020-04-06 10:41:41 (GMT)
And nobody talks about a Curried function.
2020-04-06 10:42:9 (GMT)
I write "Boolean" out of habit but if my coauthors prefer "boolean" I could live with it.
2020-04-06 10:42:21 (GMT)
or Curry chicken
2020-04-06 10:42:34 (GMT)
Lamb Curry actually. :)
2020-04-06 10:42:59 (GMT)
You missed the Zoom session with the theory group, where I flaunted my Indian food.
2020-04-06 10:43:15 (GMT)
It's a weekly tradition (my flaunting my Indian food).
2020-04-06 10:45:28 (GMT)
Petar Vukmirovic said:
I really hope BoolRw could be a simp rule at least for the cases in which T and F are being removed
Yes, I think it can be a simp rule if the unifier is the identity.
2020-04-06 10:47:25 (GMT)
Jasmin Blanchette said:
You missed the Zoom session with the theory group, where I flaunted my Indian food.
12:00 is too early for lunch for me :slight_smile:
I still haven't started preparing the food
2020-04-06 10:48:16 (GMT)
Yes, I think it can be a simp rule if the unifier is the identity.
Or maybe in the cases
where is a variable
2020-04-06 10:51:47 (GMT)
Can't all such cases where or appear in the input be preprocessed anyway? It doesn't matter whether is a variable or something else.
2020-04-06 10:53:1 (GMT)
@Petar Vukmirovic Maybe you are misunderstanding the rule a bit. The description can certainly be improved. So for example
C[p ⋁ ⊤]
------------ BoolRw
C[⊤]
can be a simp rule, but
C[p ⋁ X]
---------------BoolRw
C[⊤]{X ↦ ⊤}
cannot.
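To make the distinction concrete, here is a toy sketch (the term encoding and all names are invented for illustration, not the paper's definitions): the ⊤ case rewrites without touching any clause variable, so the premise becomes redundant; the variable case only fires under a non-identity unifier, so the more general premise must be kept.

```python
# Toy term encoding: "T" is ⊤, variables are ("var", name),
# disjunctions are ("or", left, right). Invented for illustration only.
TOP = "T"

def boolrw_top(t):
    """Apply the rewrite rule p ⋁ ⊤ -> ⊤ at the root, but only when it
    matches without instantiating a clause variable (the simp-rule case)."""
    if isinstance(t, tuple) and t[0] == "or" and TOP in (t[1], t[2]):
        return TOP
    return None

p = ("var", "p")
X = ("var", "X")

# Simp case: C[p ⋁ ⊤] rewrites to C[⊤]; the premise can be discarded.
assert boolrw_top(("or", p, TOP)) == TOP
# Generating case: C[p ⋁ X] matches only under {X ↦ ⊤}; no simp step here.
assert boolrw_top(("or", p, X)) is None
```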
2020-04-06 10:53:51 (GMT)
But now, consider one application of BoolRw where some gets remapped to or , which makes other Boolean constants appear that can be simplified.
2020-04-06 10:54:38 (GMT)
Yes, so doing this as preprocessing is not enough.
2020-04-06 10:55:16 (GMT)
In fact the ground rule can be a simp rule but I am not sure it is a good idea to make the ground rule simp and not the non-ground one. This could possibly create troubles.
2020-04-06 10:57:27 (GMT)
What is a simplification rule or not should be an afterthought to the completeness proof. The proof itself should only discuss the inference system and the redundancy criterion.
2020-04-06 10:58:27 (GMT)
Yes, and the ground calculus is only needed in the proof anyway so my previous comment is not useful at all (sorry ^^').
2020-04-06 11:6:58 (GMT)
Small correction: I think BoolRw cannot be a simplification rule for the ∀ and ∃ rewrite rules because the conclusion does not entail the premise there.
2020-04-06 11:9:15 (GMT)
Petar Vukmirovic said:
For BoolRW a good idea is to reuse skolem names whenever possible (of course separately for and , since I guess to skolemize you used and then skolemized and applied double negation)?
Right. I've added a note that we should reuse skolems. I think ∀ and ∃ can even share skolems, e.g., ∃x. p x and ∀x. ¬ p x can share the skolem in principle. For completeness, we could even always reuse the same skolem. But that would not be satisfiability-preserving :-).
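A tiny sketch of the skolem-reuse idea (function and symbol names are made up; keying the cache on the quantifier body is one possible choice):

```python
# Cache skolem symbols by quantifier body so repeated skolemization of the
# same subterm reuses the same symbol. As noted above, ∃x. p x and
# ∀x. ¬ p x could even share a skolem; that would amount to keying on a
# normalized form of the body. Names here are invented for illustration.
_skolem_cache = {}
_next_id = [0]

def skolem_for(body):
    """Return a skolem symbol for a (hashable) quantifier body, reusing
    an existing one when the same body is skolemized again."""
    if body not in _skolem_cache:
        _next_id[0] += 1
        _skolem_cache[body] = f"sk{_next_id[0]}"
    return _skolem_cache[body]

assert skolem_for("p x") == skolem_for("p x")   # reused, not regenerated
assert skolem_for("p x") != skolem_for("q x")   # distinct bodies, distinct skolems
```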
2020-04-06 11:13:49 (GMT)
Petar Vukmirovic said:
Is this logic clausal? Now that logical symbols are inside terms, I think we should pay more attention to clausification than we did in previous papers
The logic itself is not clausal (like FOL). So indeed we do not need clausification. If we wanted, we could put the assumptions and the negated conjecture into one big term t and then start the derivation with the clause t ≈ ⊤.
2020-04-06 11:15:36 (GMT)
Then shouldn't the vocabulary be changed, i.e. using formulas instead of clauses?
2020-04-06 11:15:53 (GMT)
But maybe traditional CNF transformation does some smart things that we also want? @Petar Vukmirovic You know CNF transformation better than I do. Maybe we need to integrate their tricks into the calculus?
2020-04-06 11:16:43 (GMT)
The logic is not clausal, but the calculus is!
2020-04-06 11:17:31 (GMT)
Do you mean that there are two disjunction symbols?
2020-04-06 11:17:55 (GMT)
That's why I use different typesetting for ⋁ in clauses and ⋁ in terms (and maybe this is too subtle right now).
2020-04-06 11:18:18 (GMT)
The same goes for the different typesetting of ≈
2020-04-06 11:18:25 (GMT)
I see it now. I had noticed for but not for the rest.
2020-04-06 11:18:45 (GMT)
It's only for ≈ and ⋁
2020-04-06 11:19:29 (GMT)
Well, and [¬] really means that this literal could be positive or negative. It has nothing to do with the ¬ in terms.
2020-04-06 11:21:14 (GMT)
@Alexander Bentkamp By the way, can you explain to me why the BoolRW rule is the same in the two quantified cases?
2020-04-06 11:21:58 (GMT)
I guess in the existential case, this rule is needed but is it also the case for the universal quantifier?
2020-04-06 11:22:43 (GMT)
or are both rules optional due to the presence of the $$\exists$$Elim and $$\forall$$Elim rules?
2020-04-06 11:24:1 (GMT)
Using bold is a good idea IMO, but we'll need to make the bold even bolder to prevent any ambiguities.
2020-04-06 11:24:34 (GMT)
The trick is to do what \pmb ("poor man's bold" -- sorry, Sophie!) does but with different kerning values.
2020-04-06 11:24:58 (GMT)
\pmb just superimposes a symbol three times with some offsets.
2020-04-06 11:25:10 (GMT)
Horizontal and/or vertical offsets
2020-04-06 11:25:25 (GMT)
I could look into that at some point.
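For the record, a \pmb-style macro with adjustable kerning might look like this (untested sketch; the 0.3pt offset is a guess that would need tuning):

```latex
% Superimpose two copies of a math symbol with a small horizontal offset,
% like \pmb but with a custom kern. The offset value is a placeholder.
\makeatletter
\newcommand*{\bolder}[1]{\mathpalette\bolder@{#1}}
\newcommand*{\bolder@}[2]{%
  \rlap{$\m@th#1#2$}\kern0.3pt $\m@th#1#2$\kern-0.3pt}
\makeatother
% usage: $p \mathrel{\bolder{\lor}} q$ for the term-level disjunction
```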
2020-04-06 11:25:50 (GMT)
Yes, that and having a sentence in the text that makes the difference explicit would help.
2020-04-06 11:33:1 (GMT)
Quite a few people put a dot on \approx to mean "\approx or \not\approx". I guess it's more standard than "[\lnot]" (which might be due to me :S).
2020-04-06 11:33:36 (GMT)
This could be a good notation here to avoid any suggestion that [\lnot] has anything to do with bold \lnot.
2020-04-06 11:33:53 (GMT)
What I have also seen being used (and used myself) is .
2020-04-06 11:34:21 (GMT)
Of course the same issue arises with \approx and \lor, but this \lnot is, I think, confusing in a (semi)clausal context.
2020-04-06 11:34:59 (GMT)
In a purely clausal context, with no Bools, it's quite natural to consider "s \not\approx t" and "\lnot s \approx t" as the same thing.
2020-04-06 11:36:28 (GMT)
But in a nonclausal context, there are subtle differences between various logically equivalent forms. I'd avoid aliasing.
2020-04-06 11:36:53 (GMT)
I can't remember ⋈ but I'm not really a superposition person. ;)
2020-04-06 11:37:8 (GMT)
Otherwise, to zoom out from the issues of notation.
2020-04-06 11:37:23 (GMT)
For me it's hard to comment on the rules.
2020-04-06 11:37:36 (GMT)
Nicolas uses it.
2020-04-06 11:37:54 (GMT)
I suspect the dot notation is a Saarbrücken thing. ;)
2020-04-06 11:38:22 (GMT)
To go back to the rules: Of course they all look sound, which is good.
2020-04-06 11:38:27 (GMT)
And the conclusions are smaller, right?
2020-04-06 11:38:56 (GMT)
Then there's the question of simplification, but until the redundancy criterion is defined, I can't really comment on this.
2020-04-06 11:39:37 (GMT)
(Except that I wouldn't mind rules that, on some conditions, are simp rules and otherwise aren't; simp rules don't have to be 100% distinct from generating rules in my mind.)
2020-04-06 11:40:10 (GMT)
I can't see a needless rule either.
2020-04-06 11:40:30 (GMT)
And if the completeness theorem goes through, then things look pretty good.
2020-04-06 11:41:10 (GMT)
Superposable positions are quite restrictive.
2020-04-06 11:41:42 (GMT)
(It's a kind of mixture between green subterms and eligible literals or something.)
2020-04-06 11:52:1 (GMT)
Sorry I was having lunch :slight_smile:
2020-04-06 11:52:21 (GMT)
Let me try fighting with Zulip interface to correctly quote :slight_smile:
2020-04-06 11:54:43 (GMT)
The logic itself is not clausal (like FOL). So indeed we do not need clausification. If we wanted, we could put the assumptions and the negated conjecture into one big term t and then start the derivation with the clause t ≈ ⊤.
This is super interesting and leaves a lot of room to play with different kinds of clausification:
2020-04-06 11:58:56 (GMT)
Sophie said:
Alexander Bentkamp By the way, can you explain to me why the BoolRW rule is the same in the two quantified cases?
I guess in the existential case, this rule is needed but is it also the case for the universal quantifier?
or are both rules optional due to the presence of the $$\exists$$Elim and $$\forall$$Elim rules?
This is very difficult to wrap your head around. I just noticed that the rules ∀Elim and ∃Elim had the ⊤s and ⊥s in the wrong places. Now I think it's correct.
The rules ∀Elim and ∃Elim clearly only handle the cases where a ∀-expression is true and where a ∃-expression is false. But there are also the cases where a ∀-expression is false and where a ∃-expression is true. That's what BoolRw is for. For both ∀ and ∃, we can use the same term v{x ↦ sk(y)}. If this term is true, then the corresponding ∃-expression is true (because we have a witness). If it is false, the corresponding ∀-expression is false (because we have a witness where the expression is false).
EDIT: swapped the ⊤s and ⊥s back. The reasoning should be like this:
The rules ∀Elim and ∃Elim clearly only handle the cases where a ∀-expression is false and where a ∃-expression is true. But there are also the cases where a ∀-expression is true and where a ∃-expression is false. That's what BoolRw is for. For both ∀ and ∃, we can use the same term v{x ↦ sk(y)}. If the ∃-expression is false, this term must be false. If the ∀-expression is true, the term must be true.
2020-04-06 11:58:56 (GMT)
Alexander Bentkamp said:
Petar Vukmirovic Maybe you are misunderstanding the rule a bit. The description can certainly be improved. So for example
C[p ⋁ ⊤]
------------ BoolRw
C[⊤]
can be a simp rule, but
C[p ⋁ X]
--------------- BoolRw
C[⊤]{X ↦ ⊤}
cannot.
But wait... Then I think you need to apply the substitution to the rewrite rules, not to the terms in the clause. Reason: if you had e.g.
where is ground it is not going to be equal to
2020-04-06 12:0:10 (GMT)
This might just be a matter of presentation, of course
2020-04-06 12:0:20 (GMT)
Petar, what do you mean by "different kinds of clausification"?
2020-04-06 12:0:36 (GMT)
Do you mean as preprocessor or interleaved with the calculus?
2020-04-06 12:0:46 (GMT)
Those would be two kinds
2020-04-06 12:1:0 (GMT)
Here's the thing that might interest you then, Petar.
2020-04-06 12:1:15 (GMT)
As far as completeness is concerned,
2020-04-06 12:1:38 (GMT)
you can of course always perform some transformations that preserve (un)sat before starting saturation.
2020-04-06 12:1:42 (GMT)
That's preprocessing.
2020-04-06 12:2:7 (GMT)
It's trivial to transfer the soundness/completeness guarantees of the calculus to the original problem.
2020-04-06 12:2:8 (GMT)
But
2020-04-06 12:2:24 (GMT)
it's also OK to perform such processing in the middle of proof search
2020-04-06 12:2:35 (GMT)
on the condition that this is done at most a bounded number of times.
2020-04-06 12:3:40 (GMT)
So you can let superposition work for, say, 1 second and then clausify aggressively.
2020-04-06 12:3:47 (GMT)
Then you wait 2 secs, then 4, then 8.
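The restart discipline just described can be sketched as follows (`saturate` and `clausify` are placeholder hooks, not real prover functions):

```python
# Sketch of the bounded-restart idea: run saturation for 1 s, clausify
# aggressively, then 2 s, 4 s, 8 s, ... Doubling the slice keeps the
# number of restarts logarithmic in the total budget, so the interleaved
# clausification happens only a bounded number of times.

def doubling_schedule(budget, start=1):
    """Time slices 1, 2, 4, ... whose sum stays within the budget."""
    slices, t = [], start
    while sum(slices) + t <= budget:
        slices.append(t)
        t *= 2
    return slices

def prove(budget, saturate, clausify):
    """Alternate saturation slices with aggressive clausification."""
    for t in doubling_schedule(budget):
        if saturate(t):   # placeholder: run the prover for t seconds
            return True
        clausify()        # placeholder: clausify the current clause set
    return False

assert doubling_schedule(15) == [1, 2, 4, 8]
```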
2020-04-06 12:3:52 (GMT)
We discussed this before, but I cannot remember what was the occasion :slight_smile:
2020-04-06 12:4:14 (GMT)
Although this would have some impact on the status of clauses.
2020-04-06 12:4:36 (GMT)
You mean, they would all be moved to passive again?
2020-04-06 12:4:40 (GMT)
Something like that.
2020-04-06 12:4:47 (GMT)
Makes sense.
2020-04-06 12:4:59 (GMT)
I was hoping we could keep some in active but probably they all end up in passive.
2020-04-06 12:4:59 (GMT)
E does something called presaturation interreduction
2020-04-06 12:5:5 (GMT)
Ah.
2020-04-06 12:5:27 (GMT)
in which it performs all simplifying inferences of the calculus on the initial problem and then saturates
2020-04-06 12:5:32 (GMT)
Oh that's when it finds a proof without having moved anything to active, right?
2020-04-06 12:5:49 (GMT)
Stephan claims that many problems are solved that way
2020-04-06 12:6:1 (GMT)
But imagine if we would do that on the lazily clausified problem
2020-04-06 12:6:11 (GMT)
You interreduce before you clausify
2020-04-06 12:6:22 (GMT)
that might simplify the problem significantly
2020-04-06 12:6:27 (GMT)
Ah.
2020-04-06 12:6:33 (GMT)
But do these CNF transformations (Ganzinger/Weidenbach) make clauses bigger? I was hoping that they could be simplification rules.
2020-04-06 12:7:5 (GMT)
Their paper is unclear
2020-04-06 12:7:8 (GMT)
about that
2020-04-06 12:7:34 (GMT)
They say all of our alpha/beta (I forgot the name) rules can be used as simplification
2020-04-06 12:7:43 (GMT)
but that struck me as odd
2020-04-06 12:8:6 (GMT)
What is "Ganzinger/Weidenbach"?
2020-04-06 12:8:19 (GMT)
Sorry, I was unclear
2020-04-06 12:8:26 (GMT)
By Weidenbach do you mean Nonnengart & Weidenbach?
2020-04-06 12:8:31 (GMT)
Or Azmy & Weidenbach?
2020-04-06 12:8:37 (GMT)
Jasmin Blanchette said:
By Weidenbach do you mean Nonnengart & Weidenbach?
Nonnengart -- the traditional CNF algorithm
2020-04-06 12:8:57 (GMT)
Ganzinger-style CNF or Weidenbach-style CNF. I am refering to what Petar mentioned.
2020-04-06 12:9:8 (GMT)
I have no idea what it means...
2020-04-06 12:9:9 (GMT)
that would be applied beforehand, as a preprocessor
2020-04-06 12:9:11 (GMT)
So by "Their paper is unclear" you mean "Their papers are unclear"?
2020-04-06 12:9:40 (GMT)
Let's call it Nonnengart. Otherwise we'll start calling the Eho paper Blanchette.
2020-04-06 12:10:3 (GMT)
Ok, I will write this clearly as bullet points:
2020-04-06 12:10:21 (GMT)
I thought Nonnengart & Weidenbach was subsumed by Azmy & Weidenbach (CADE 2013)?
2020-04-06 12:12:22 (GMT)
We can deal with clausification in this calculus in 3 ways:
2020-04-06 12:14:33 (GMT)
Ain't 1 and 3 similar?
2020-04-06 12:15:2 (GMT)
I'd guess that a problem that falls within the G&S fragment would be handled similarly by Alex's calculus?
2020-04-06 12:16:9 (GMT)
It's not obvious, but yes, I think it amounts to the same thing.
2020-04-06 12:16:23 (GMT)
In Alex's calculus there is nothing to simplify the clause into
2020-04-06 12:17:13 (GMT)
This is a screen grab from their paper: image.png
2020-04-06 12:17:40 (GMT)
And those are elimination rules
2020-04-06 12:17:42 (GMT)
The paper is delayed clausification by G&S
2020-04-06 12:21:12 (GMT)
You are right. It's different. My proposal is lazier in that respect.
2020-04-06 12:21:43 (GMT)
My main point is that I wouldn't use terminology such as "no clausification" vs. "live clausification" to describe two approaches that are very similar.
2020-04-06 12:22:18 (GMT)
Maybe what we have is "delayed clausification" and "even more delayed clausification". ;)
2020-04-06 12:22:57 (GMT)
What I am afraid of is that this might come to bite us in practice
2020-04-06 12:23:5 (GMT)
My proposal is lazier in that respect.
Make sure to remember that when discussing the differences with G&S! Maybe even in the motivation/introduction.
2020-04-06 12:23:17 (GMT)
What will bite us?
2020-04-06 12:23:36 (GMT)
A ref. complete calculus is only about what one must do. The less, the better.
2020-04-06 12:23:42 (GMT)
The fact that in principle, formulas like that do not get simplified into literals
2020-04-06 12:24:9 (GMT)
But Petar, it's a simp rule, right?
2020-04-06 12:24:17 (GMT)
We can add it in the "Extensions" section of the paper.
2020-04-06 12:24:45 (GMT)
Alex's calculus is only the things that must be done. We can add more rules, esp. simplification rules, as we want.
2020-04-06 12:25:56 (GMT)
Of course. I was just wondering whether a rule like that should be made part of the calculus.
My gut feeling is that without rules like that, we are going to suffer a lot :slight_smile:
2020-04-06 12:25:58 (GMT)
Alex has only focused on refutational completeness so far. And I'm still waiting for his redundancy criterion. ;)
2020-04-06 12:26:53 (GMT)
We (or you) should surely have as many simplification rules as possible. That's not controversial I think.
2020-04-06 12:27:17 (GMT)
I guess it's like for the lambda-sup paper: Petar, you'll have to take the lead on that part.
2020-04-06 12:27:50 (GMT)
No problem :slight_smile:
2020-04-06 12:31:43 (GMT)
Maybe we could call the two approaches delayed clausification top-down (G&S) and bottom-up (my proposal). G&S act on any top-level connective. I rewrite inside the term using BoolRw and others until ⊤ or ⊥ appears at the top.
2020-04-06 12:33:8 (GMT)
Yeah, very good idea :slight_smile:
2020-04-06 12:33:37 (GMT)
One thing. I would change the presentation of BoolRw
2020-04-06 12:34:5 (GMT)
Yes, please!
2020-04-06 12:34:17 (GMT)
Upon closer inspection there are really two substitutions: the one applied to u and the one that matches the rule to u
2020-04-06 12:34:20 (GMT)
that is a bit confusing
2020-04-06 12:36:6 (GMT)
We could formulate it as a variant of the Superposition rule where the side premise comes from a different clause set.
2020-04-06 12:36:47 (GMT)
yeah, because it really is a unification, not matching
2020-04-06 12:37:54 (GMT)
I shouldn't have used the word match there. I really meant "match" in the sense of "fit".
2020-04-06 12:44:8 (GMT)
One more problem that I might see with the bottom up approach:
Say you are given a clause
G&S will swiftly conclude
You will be concluding the same clauses over and over again -- using cases you will get in two ways
2020-04-06 12:48:45 (GMT)
I am hopeful that this can also be a simplification rule in the bottom up approach:
2020-04-06 12:49:22 (GMT)
The second clause is clearly smaller and entails the first.
2020-04-06 12:52:17 (GMT)
Ok. Then there are almost no differences between the bottom-up and top-down approaches
2020-04-06 12:52:39 (GMT)
except for treating the quantifiers -- do you really need to consider both cases if the quantifier is on the top level?
2020-04-06 12:53:15 (GMT)
e.g.
2020-04-06 12:53:28 (GMT)
G&S would apply a fresh var to this term if it is at the literal level
2020-04-06 13:41:0 (GMT)
Ok, I just swapped the ⊤s and ⊥s in the ∀Elim and ∃Elim rules back. :-) Now it's correct, I hope...
∀x. p(x) ≈ ⊤ yields p(sk) ≈ ⊤ by BoolRw and ⊥ ≈ ⊤ ⋁ p(y) ≈ ⊤ by ∀Elim. That might look like it is producing more clauses than G&S, but ⊥ ≈ ⊤ ⋁ p(y) ≈ ⊤ can be simplified into p(y) ≈ ⊤, which subsumes p(sk) ≈ ⊤. So after simplification only p(y) ≈ ⊤ is left. Same as for G&S.
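Spelled out as inference rules, the derivation just described would look roughly like this (a sketch assuming the mathpartir package; side conditions omitted, rule names as in the discussion above):

```latex
% Sketch only; premises on top, conclusions below.
\inferrule*[right=BoolRw]
  {\forall x.\, p(x) \approx \top}
  {p(\mathsf{sk}) \approx \top}
\qquad
\inferrule*[right={$\forall$Elim}]
  {\forall x.\, p(x) \approx \top}
  {\bot \approx \top \lor p(y) \approx \top}
```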
2020-04-06 14:33:38 (GMT)
This means that for top-level foralls, we can infer only p(y) = T?
2020-04-06 14:43:50 (GMT)
If by top-level forall, you mean clauses of the form ∀x. t[x] ≈ ⊤, then yes. If on the other side of the equality, there is something else than ⊤, probably not.
2020-04-06 14:44:32 (GMT)
We should start writing an extensions section for these things.
2020-04-06 14:45:3 (GMT)
Ok. I will add this to the doc.
2020-04-06 14:45:13 (GMT)
I meant the first form
2020-04-06 14:45:17 (GMT)
Let me first move the document into another directory.
2020-04-06 14:45:22 (GMT)
ok
2020-04-06 14:45:48 (GMT)
202X-folbool?
2020-04-06 14:47:50 (GMT)
why not :slight_smile:
2020-04-06 14:48:12 (GMT)
Let's be more realistic than 2020-hosup and call it 2021 :-)
2020-04-06 14:52:14 (GMT)
ok: it's 2021-fboolsup now.
2020-04-06 15:5:56 (GMT)
maybe we get it done in 2020 :slight_smile:
2020-04-06 15:6:2 (GMT)
OK.
2020-04-06 15:6:14 (GMT)
I am fixing one very stubborn bug in Zulip
2020-04-06 15:6:20 (GMT)
But I will get back to you very soon
2020-04-06 15:9:2 (GMT)
Depends on whether this is the year of submission or of publication...
2020-04-06 15:15:46 (GMT)
Ideally it should be publication, but that's hard to predict.
2020-04-06 15:16:21 (GMT)
Actually, it was Pascal who introduced the year- convention in the matryoshka repos.
2020-04-06 15:16:26 (GMT)
It comes from VeriDis.
2020-04-06 15:16:36 (GMT)
I think it's the year where you create the directory. ;)
2020-04-06 15:27:22 (GMT)
Ok bug fixed!
2020-04-06 15:27:54 (GMT)
ok bug is FINALLY fixed
2020-04-06 15:28:13 (GMT)
I will start sketching the extensions
2020-04-06 15:28:24 (GMT)
one interesting thing that we should definitely discuss is common subformula renaming
2020-04-06 15:34:1 (GMT)
fboolsup is empty.
Do you want me to copy the stuff from the old document?
2020-04-06 17:26:3 (GMT)
@Alexander Bentkamp I added a small discussion of the rules that clausify formulas together with the discussion of the renaming to the end of the paper.