On Friday, I tried to explain proof trees.
I don't think I did a very good job of it. In fact, I think I did a terrible job. I apologize!
So, to try to fix whatever confusion has been sown, I thought I'd try to post at least a brief, high level explanation here.
Be sure to note, though, that this explanation is not mathematically rigorous, and I'm going to gloss over a lot of nuance and detail. It's just a high level illustration of some concepts. If you want mathematical rigor, read the notes or the textbook. I'm just writing this in the hopes that it might help people who are really, really confused get a foothold and start climbing.


If this explanation doesn't help, and you're confused about even the basic assumptions I'm making here, I'd recommend this extremely gentle introduction to symbolic logic. It's a very good explanation of all of this, though they don't use proof trees; they use an alternate form of proving, more akin to programming, which you might actually find more intuitive. It's equivalent to proof trees, though, and trees will become important later, so we want you to learn them.


Proof trees are, in essence, a way of visually representing a series of steps which allow you to arrive at some conclusion, based on application of whatever set of logical rules you have available to you.

Thus, the first questions to ask are: what rules are available to me, and what conclusion do I want to reach?

This is kind of analogous to an arts and crafts project. You decide what you want to make, and then figure out which tools you have at your disposal that will help you get there.

In the homeworks, the conclusion is some logical statement or sentence, and the "tools" are the rules of the deduction system we're using; in our case, the Natural Deduction system (that squiggly N with the m and the arrow), of which, so far, we've seen two rules: -> introduction and -> elimination.

First, I think it's important to note an implicit kind of rule here, which has to do with assumptions. In logic, as in real life, you're allowed to assume anything you want. However, when you're done, whatever you conclude must be independent of those assumptions; it can't rely on them.

For example, say we join the brilliant detective Poirot as he is trying to solve the Murder on the Orient Express. He might say something to us like, "let us assume that the murderer did not leave the train." From there, he might go on to form many conclusions about the case, which are perfectly valid, as long as we recognize they are all made under the assumption that the murderer did not leave the train.

Eventually, Poirot would want to prove something about the case that did NOT rely on this assumption. (After all, what if he could not prove the assumption?) We call this discharging the assumption, and there are rules on how to do this. More on that later.

Back to the rules. -> Introduction is the first rule, one you are surely familiar with.
If we assume X,
and can somehow derive Y,
Then we can discharge the assumption X, and write
(X -> Y)
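
Since this whole post is about proof trees, it's probably criminal that I haven't actually drawn one yet. Here is what -> introduction looks like as a tree. I'm typesetting it with the LaTeX bussproofs package just so it renders here (the notes may decorate the bars a little differently, so follow their notation on the homework). The bracketed assumption and the little 1 on the bar mark where X gets discharged, and the dots stand in for however many steps it took you to get from X to Y.

    % -> introduction, typeset with the bussproofs package.
    % The brackets and the matching "1" mark the discharged assumption.
    \documentclass{article}
    \usepackage{bussproofs}
    \begin{document}
    \begin{prooftree}
      \AxiomC{$[X]^{1}$}
      \noLine
      \UnaryInfC{$\vdots$}
      \noLine
      \UnaryInfC{$Y$}
      \RightLabel{$\rightarrow$I, 1}
      \UnaryInfC{$X \rightarrow Y$}
    \end{prooftree}
    \end{document}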

Think about this for a moment and you'll see it is perfectly reasonable. Say we assumed
The kimono was stolen (X)
and we did some sleuthing and are absolutely able to conclude
The killer was a man (Y)
then we can surely say for certain that
If the kimono was stolen, then the killer was a man. (X->Y)

Perfectly reasonable, right? Note that the statement we produced (X->Y) doesn't actually rely on the assumption. I can say "If the kimono was stolen, then the killer was a man," without actually assuming that the kimono was stolen!
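
Here's that very deduction as a tree, with the English smuggled into the math via \mbox. (Same bussproofs setup as above, so I'm only showing the prooftree part.)

    % The kimono deduction: assume X, sleuth your way to Y,
    % then discharge the assumption to conclude X -> Y.
    \begin{prooftree}
      \AxiomC{$[\mbox{kimono stolen}]^{1}$}
      \noLine
      \UnaryInfC{$\vdots$ (sleuthing)}
      \noLine
      \UnaryInfC{$\mbox{killer was a man}$}
      \RightLabel{$\rightarrow$I, 1}
      \UnaryInfC{$\mbox{kimono stolen} \rightarrow \mbox{killer was a man}$}
    \end{prooftree}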

Fantastic, you've got your first tool. Tuck it away in your toolbelt. No, not like that! What if it ignited? Better wizards than you have lost buttocks, you know!


The second rule in our toolbox is -> Elimination. Again, one you are likely familiar with!
All it says is that if we know X is provable ("true" in some sense),
and we know (X -> Y),
then we know Y is provable.
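
As a tree, this one is even simpler: no assuming, no discharging, just two things above the bar and the conclusion below. (Same setup as before.)

    % -> elimination (you may know it as modus ponens):
    % from X and X -> Y, conclude Y.
    \begin{prooftree}
      \AxiomC{$X$}
      \AxiomC{$X \rightarrow Y$}
      \RightLabel{$\rightarrow$E}
      \BinaryInfC{$Y$}
    \end{prooftree}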

Think back to that previous example. Say that Poirot could indeed prove that
The kimono was stolen (X),
and previously we had deduced that
If the kimono was stolen, then the killer was a man. (X->Y)
Well, clearly, we can say with confidence that
The killer was a man! (Y)
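
In tree form, Poirot's deduction looks like this (same setup again):

    % The proven fact and the earlier conditional sit above
    % the bar; the conclusion falls out below it.
    \begin{prooftree}
      \AxiomC{$\mbox{kimono stolen}$}
      \AxiomC{$\mbox{kimono stolen} \rightarrow \mbox{killer was a man}$}
      \RightLabel{$\rightarrow$E}
      \BinaryInfC{$\mbox{killer was a man}$}
    \end{prooftree}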

Fantastic!

Those are the only two tools you should need for the homework. Isn't that fantastic? We'll probably learn more tools later.


Now, the bit about building trees using these two rules... the key is that we're going to build a tree with the conclusion that we want at the root. The leaves of the tree will be any provided information (the premises), or any assumptions we make along the way (which we'd better discharge before reaching the root; if one is left hanging, we haven't actually proven the conclusion on its own! But that's another show...)

The trick is, if you don't know what you're starting from, you're going to have to start at the root, the conclusion, and work backwards. This is why things got a little confusing on Friday: I drew the tree upside-down, and I was trying to explain applying the rules in the forward direction, while trying to apply them backwards! Ridiculous.

The reason it's not really helpful to start with what you "know" (that is, starting with the leaves and working down toward the root) is that most of the time you know too much stuff. That's right! You're too smart for your own good. Think about it: if I handed you every fact in mathematics, and told you to prove some theorem, you'd have a hard time picking which things to combine together with your deduction rules to produce the result. (This isn't always the case; more on that later...)

So, start at the root. Work upward. Let's do an example; maybe that will help.
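
Here's a small one of my own; it's not from the homework, just an illustration. Say we're handed the premises (X -> Y) and (Y -> Z), and we want to prove (X -> Z). Start at the root. The goal is an arrow, and the only tool we have that produces an arrow is -> introduction, so we assume X and aim for Z instead. Z would follow from (Y -> Z) by -> elimination, if only we had Y... and Y would follow from (X -> Y), if only we had X... which we assumed! Discharge the assumption on the way back down, and the finished tree looks like this:

    % Goal: from the premises (X -> Y) and (Y -> Z), prove (X -> Z).
    % Built backwards from the root; the [X]^1 leaf is the assumption,
    % discharged at the final ->I step.
    \begin{prooftree}
      \AxiomC{$[X]^{1}$}
      \AxiomC{$X \rightarrow Y$}
      \RightLabel{$\rightarrow$E}
      \BinaryInfC{$Y$}
      \AxiomC{$Y \rightarrow Z$}
      \RightLabel{$\rightarrow$E}
      \BinaryInfC{$Z$}
      \RightLabel{$\rightarrow$I, 1}
      \UnaryInfC{$X \rightarrow Z$}
    \end{prooftree}

Notice the leaves: the two premises we were handed, plus one assumption, safely discharged. Nothing left hanging, so this really is a proof.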




Maybe you get this far, and you're saying, "but Alex, why in the name of Russell's Teapot should I care about any of this?"
Well, there are a few reasons.

The first is that we all want to be mathematicians, and thus we want to be able to take the interesting things we know and combine them together in new and different ways to come up with fascinating observations about the world. To do this, we need a "language" that describes what "valid reasoning" is.

The second is that logicians are an efficient bunch. So, often, when trying to prove that the stuff that we know leads logically to a new, beautiful conclusion, the precise semantics of what we know isn't always what is important. It's often the syntactic form. That's why we abstract away from the specific terms ("I am a cat," "I have four legs," "If I am a cat, then I have four legs.") and instead focus on the symbolic representation of those terms (P, Q, P -> Q).

We can determine the provability of some larger, more complicated sentence if we know the provability of the composite parts. You do this every day, without knowing it. For example, your roommate might say, "If you have a chance to get to Fresh Grocer, buy me some pears and some oatmeal, and if peaches are on sale, get some peaches too." You can probably parse this compound sentence with little thought, but imagine trying to explain precisely, to a robot, for example, why you know that your roommate would not expect you to buy peaches if you don't make it to Fresh Grocer.

Things are more nuanced than that, of course, and I think you'll grow to have a greater appreciation for this stuff as the course, and your life as a scientist, goes on. For now, hopefully you can make do with this kind of... wibbly wobbly explanation.
Allons-y!