2020-06-26 14:45:38 (GMT)
Dear all,
I finally got some time to read about Waldmeister heuristics and I found out that they use the following criterion for when a superposition is redundant:
image.png
2020-06-26 14:46:20 (GMT)
Does anyone know if this can be lifted to non-unit clauses, that is, to the superposition calculus (as opposed to unfailing KB completion)?
2020-06-26 14:47:59 (GMT)
@Stephan Schulz maybe you are familiar with this?
2020-06-26 14:52:44 (GMT)
Superposition redundancy is not a generalization of the redundancy of unfailing KB completion. They are incomparable, but I forgot the details. @Uwe Waldmann explained this to me before.
2020-06-26 15:16:45 (GMT)
Ok, thanks :slight_smile:
2020-06-26 15:20:55 (GMT)
I would need a bit more context, but this may have something to do with Basic Superposition.
2020-06-26 15:21:42 (GMT)
Context it is :slight_smile:
image.png
2020-06-26 15:26:53 (GMT)
2020-06-26 15:31:21 (GMT)
That's what I call a URL. And they call themselves Science Direct?
2020-06-26 15:31:39 (GMT)
still more direct than a paywall…
2020-06-26 15:32:32 (GMT)
Do you know if any of these results can be lifted to the superposition calculus?
2020-06-26 15:34:36 (GMT)
I have a bad feeling it cannot be lifted -- both because otherwise it would have been done before, and also because superposition has the reputation of being rather tight. But perhaps an expert can enlighten us.
2020-06-26 15:34:41 (GMT)
Anyway, the context says that it usually only pays off for the AC case. And I think that at least half of the benefit is already in the given-clause loop, since we simply don't use simplifiable clauses for superpositions - hence none of the terms in the unifier are simplifiable.
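To make the point above concrete, here is a minimal, purely illustrative sketch of a DISCOUNT-style given-clause loop. All names are invented for this example (this is not E's or Waldmeister's actual code); the relevant detail is that clauses are simplified before they enter the passive set and again before they are used as the given clause, so terms entering an inference are already in normal form with respect to the processed set.

```python
import heapq

def given_clause_loop(initial, infer, simplify, is_goal, max_iters=1000):
    """Toy DISCOUNT-style saturation loop (illustrative, not a real prover).

    `infer(given, processed)` yields new clauses; `simplify(c, processed)`
    normalizes a clause w.r.t. the processed (active) set; `is_goal` detects
    success. Clauses here are just comparable Python values.
    """
    passive = []    # min-heap of (weight, clause): lightest clause first
    processed = []  # the active set
    for c in initial:
        c = simplify(c, processed)
        heapq.heappush(passive, (len(str(c)), c))
    for _ in range(max_iters):
        if not passive:
            return None
        _, given = heapq.heappop(passive)
        # Re-simplify: the active set may have grown since this clause
        # was pushed, so it is normalized again before being used.
        given = simplify(given, processed)
        if is_goal(given):
            return given
        processed.append(given)
        for new in infer(given, processed):
            # Simplified *before* entering passive - this is the half of
            # the benefit already built into the given-clause loop.
            new = simplify(new, processed)
            heapq.heappush(passive, (len(str(new)), new))
    return None
```

As a toy usage, with integers as "clauses", addition as the sole inference rule, and a trivial simplifier, `given_clause_loop([3, 5], lambda g, P: [g + p for p in P], lambda c, P: c, lambda c: c == 8)` saturates until it derives 8.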
2020-06-26 15:35:14 (GMT)
Ok, I see... thanks
2020-06-26 15:35:35 (GMT)
There is also an interesting addition to the inference loop in the Waldmeister manual
2020-06-26 15:35:56 (GMT)
where every k steps they reduce the passive clauses using the active clauses
2020-06-26 15:36:06 (GMT)
it seemed to be useful for some problems
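As a hedged sketch of that periodic pass (illustrative names, assuming a DISCOUNT-style prover state as in the messages above): every k-th iteration, walk the passive set and rewrite each clause with the current active set.

```python
def maybe_interreduce(step, k, passive, active, simplify):
    """Every k-th step, normalize all passive clauses w.r.t. `active`.

    Illustrative only: `simplify(clause, active)` stands in for whatever
    rewriting the prover applies with its active clauses. Between these
    passes, passive clauses may be stale w.r.t. newly activated clauses.
    """
    if step % k == 0:
        passive[:] = [simplify(c, active) for c in passive]
    return passive
```

For example, with a toy simplifier `lambda c, a: c % 10` on integer "clauses", a pass at step 6 with k=3 reduces `[12, 25]` to `[2, 5]`, while step 7 leaves the list untouched.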
2020-06-26 15:37:47 (GMT)
so, a kind of amortized Otter loop?
2020-06-26 15:38:16 (GMT)
I guess :slight_smile:
2020-06-26 15:39:10 (GMT)
Everything is useful for some problems. ;)
2020-06-26 15:39:23 (GMT)
Well, apart from most of my ideas. Somehow they don't seem to make any difference...
2020-06-26 15:40:02 (GMT)
The solution for that is to create new problems!
2020-06-26 16:12:59 (GMT)
We're working on it, actually. Petar has created quite a few cool ones, which we're collecting in the Matryoshka repository and will submit to Geoff.
2020-06-26 19:04:50 (GMT)
One more thing: It is possible to see the DISCOUNT loop in different ways. In one of those perspectives, all the newly generated and simplified unprocessed clauses conceptually don't really exist. They are just an aid in estimating which inference should be done next (and once we've figured out which inference to do, well, since we've already computed the result, let's use the old one, although conceptually it does not really exist ;-). If you look at the "New Waldmeister Loop" paper, that's what they do - they throw away nearly all generated clauses, just keeping (in a clever and compact way) their evaluations, and then recompute them when needed. With current memory sizes, the underlying motivation for that approach has become a lot weaker, though...
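As I read the message above, the lazy idea can be sketched like this (all names invented for illustration; this is not Waldmeister's actual code): instead of storing each generated clause, the passive set stores only its evaluation plus a recipe for the inference that produced it, and the clause is rebuilt on demand when it is finally selected.

```python
import heapq

class LazyPassive:
    """Toy passive set storing (weight, recipe) instead of whole clauses.

    A `recipe` is a zero-argument function that redoes the inference and
    returns the clause. Only the evaluation (weight) is kept in memory;
    the clause itself is materialized only when popped.
    """
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so recipes are never compared

    def push(self, weight, recipe):
        heapq.heappush(self._heap, (weight, self._counter, recipe))
        self._counter += 1

    def pop(self):
        weight, _, recipe = heapq.heappop(self._heap)
        return recipe()  # recompute the clause only now
```

For example, pushing `(2, lambda: "f(x)=x")` and `(1, lambda: "a=b")` and popping twice yields `"a=b"` then `"f(x)=x"`: selection is driven entirely by the stored evaluations, and each clause string is only built when its turn comes.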
2020-06-26 20:04:31 (GMT)
Indeed, this architecture is described in our IJCAR 2020 paper (Waldmann, Tourret, Robillard, and B., 1 July @ 14:00). See pp. 12-13 of http://matryoshka.gforge.inria.fr/pubs/satur_paper.pdf . We even mention E on p. 13. A more detailed description can be found in our tech report on pp. 29-30: http://matryoshka.gforge.inria.fr/pubs/satur_report.pdf .
Please let me know if there's any inaccuracy in how we characterize E and I'll get it fixed for the journal submission.
2020-06-26 20:06:39 (GMT)
In the refinement relation between the "Lazy Given Clause" derivation and the abstract theorem proving derivation, the "to do" inferences are indeed ignored. They are simply a bookkeeping device for not forgetting to compute inferences. I.e., what Stephan wrote is correct, but we formulated it formally (on paper and in Isabelle).