Transcription
Propositional Logic: Why?
We want to build a mathematical model of logical reasoning
Starts with George Boole around 1850
Actually, of part of it, in particular, of part of mathematical reasoning
In consequence, mathematics itself becomes a subject of
study of mathematics
Mathematical logic starts ...
Logical reasoning is captured in terms of symbolic processes, like algebra
We also talk of symbolic logic
• Algebra:
x + y = 2
−y + 2x = 3
——————
3x = 5
• Logic:
(p → (r ∨ q))
(p ∧ ¬r)
——————
q
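The logic derivation above can be checked mechanically, just like the algebra. A minimal sketch in Python (the implies helper and the brute-force check are mine, not part of the lecture):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# q follows from (p -> (r or q)) and (p and not r):
# every truth assignment satisfying both premises must also satisfy q
entailed = all(
    q
    for p, q, r in product([False, True], repeat=3)
    if implies(p, r or q) and (p and not r)
)
print(entailed)  # True
```

The check enumerates all 8 assignments and confirms that q holds in every one that makes both premises true.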
Computer science emerges from mathematical logic in
the 1930s
Mathematical logic relevant to CS today
Important parts of CS have to do with symbolic processing, like symbolic logical reasoning
Logic can help understand and support those symbolic
processes in computers
Computers can implement logical reasoning, because it
can be captured in symbolic terms
Computers can be made to do logical reasoning, in particular, mathematical reasoning!
Propositional logic can be used to represent knowledge
Given a domain, one chooses propositional variables, e.g.
p, q, r, s, ... to denote propositions about the domain of
discourse
Example: Courses at a university
p: “Student takes 1805”
q: “Student takes Advanced Programming”
r: “Student takes Analysis of Algorithms”, etc.
These are indivisible, atomic propositions
More complex propositions can be represented (constructed)
using the propositional variables and the logical connectives
This is the syntax of the symbolic languages of propositional logic
1. “If student takes Analysis of Algorithms, then they take 1805” can be represented by (r → p)
2. “Student takes Analysis of Algorithms or Advanced
Programming”: (q ∨ r)
3. “Student takes Analysis of Algorithms or Advanced
Programming, but not both”: (q ⊕ r)
4. “Taking 1805 is a necessary condition to take Analysis
of Algorithms”: (r → p)
5. “Taking 1805 is a sufficient condition to take Analysis
of Algorithms”: (p → r)
6. “Not possible to take Analysis of Algorithms without
1805”: (¬p → ¬r)
The contrapositive of 4.
7. “If 1805 is taken when Advanced Programming is
taken, and Advanced Programming is taken, then
1805 is taken”: (((q → p) ∧ q) → p)
We know already how to assign truth values to complex
propositions when truth values for the atomic propositions (propositional variables) have been assigned ...
Whenever we talk about meaning, interpretations, truth
(values), we are in the realm of semantics
Truth tables!
p  q | p → q | (p → q) ∧ p | ((p → q) ∧ p) → q
1  1 |   1   |      1      |         1
0  1 |   1   |      0      |         1
1  0 |   0   |      0      |         1
0  0 |   1   |      0      |         1
The formula in the rightmost column is always true, independently
of the truth values of its atomic propositions
It is called a tautology
It represents a class of common valid arguments
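A tautology check like this one can be automated by enumerating the rows of the truth table; a sketch (the helper name is mine):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Every row of the truth table of ((p -> q) and p) -> q comes out 1
is_tautology = all(
    implies(implies(p, q) and p, q)
    for p, q in product([False, True], repeat=2)
)
print(is_tautology)  # True
```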
If Socrates is a man, then Socrates is a mortal. Socrates
is a man. In consequence, Socrates is a mortal.
Replace, in this argument, the propositions Socrates is a
man and Socrates is a mortal by any two propositions,
and you will always get a valid argument
p  ¬p | p ↔ ¬p
1   0 |    0
0   1 |    0
This formula is a contradiction: it is always false
p  ¬p  q | p → q | (p → q) ∨ ¬p
1   0  1 |   1   |      1
0   1  1 |   1   |      1
1   0  0 |   0   |      0
0   1  0 |   1   |      1
The formula is neither a tautology nor a contradiction
It is satisfiable or consistent, i.e. it can be made true
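The three cases (tautology, contradiction, satisfiable-but-not-tautology) can be told apart mechanically; a sketch of a classifier over truth tables (function and label names are mine):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def classify(formula, n_vars):
    """Classify a boolean function as tautology / contradiction / contingent."""
    values = [formula(*vs) for vs in product([False, True], repeat=n_vars)]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "satisfiable (contingent)"

print(classify(lambda p: p == (not p), 1))                       # p <-> not p
print(classify(lambda p, q: implies(p, q) or (not p), 2))        # (p -> q) or not p
print(classify(lambda p, q: implies(implies(p, q) and p, q), 2)) # the tautology above
```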
Logical equivalence:
• Based on truth tables
• As logical validity of double implication
• Based on basic algebraic laws for logical equivalence
and the transitivity of logical equivalence
Basic laws can be verified in general using truth tables and other previously derived laws for equivalence
Logical equivalence can be used as the basis for an algebraic methodology to derive new (true) formulas from
other formulas (that are assumed to be true)
Example: Derive q from ((p → q) ∧ p), i.e. obtain
that every time ((p → q) ∧ p) is true, also q is true
One method: check with a truth table that
• When the column for ((p → q) ∧ p) is true, also the
column for q is true; or
• (((p → q) ∧ p) → q) is a tautology (in consequence,
the antecedent cannot be true without having the
consequent true)
An alternative (algebraic):
We can use that
• (ϕ ↔ ψ) ≡ ((ϕ → ψ) ∧ (ψ → ϕ))
• (ϕ → (ψ ∧ χ)) ≡ ((ϕ → ψ) ∧ (ϕ → χ))
Now:
(p → q) ∧ p ≡ (¬p ∨ q) ∧ p ≡ (¬p ∧ p) ∨ (q ∧ p) ≡
F ∨ (q ∧ p) ≡ (q ∧ p)
That is ((p → q) ∧ p) ↔ (q ∧ p) is always true
Then ((p → q) ∧ p) → (q ∧ p) is always true
Then ((p → q) ∧ p) → q is always true
Still in propositional logic ... (will also hold for predicate
logic)
General facts:
Assume we want to prove that the formula χ:
(ϕ1 ∧ · · · ∧ ϕn) → ψ
is always true (tautology), i.e. that ψ is a logical consequence of (ϕ1 ∧ · · · ∧ ϕn), i.e. ψ is true whenever
(ϕ1 ∧ · · · ∧ ϕn) is true
Alternatives:
• Check that χ has always value 1
• Check that whenever all the ϕi's have value 1, also ψ
gets value 1
• Check that
ϕ1 ∧ · · · ∧ ϕn ∧ ¬ψ
is always false (a contradiction), i.e. it is not possible
to have all the ϕi's simultaneously true with the
negation of ψ also true
Example: ((p → q) ∧ p) → q being always true is equivalent to ((p → q) ∧ p ∧ ¬q) being always false
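Both sides of this equivalence can be confirmed by brute force; a sketch (assuming the same encoding of → as before):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

assignments = list(product([False, True], repeat=2))

# ((p -> q) and p) -> q is always true ...
valid = all(implies(implies(p, q) and p, q) for p, q in assignments)

# ... exactly when (p -> q) and p and not q can never be made true
unsat = not any(implies(p, q) and p and (not q) for p, q in assignments)

print(valid, unsat)  # True True
```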
Example: If Joe fails to submit the project in course
CS414, then he fails the course. If Joe fails CS414, then
he cannot graduate. Hence, if Joe graduates, he must
have submitted the project.
Want to represent this argument, and check it is valid
It is of the form:
F0 ∧ F1 → C
Atomic formulas:
• s: Joe submits the project in CS414.
• f : Joe fails CS414.
• g: Joe graduates.
Then,
• F0: ¬s → f
• F1: f → ¬g
• C: g → s
And the argument is represented by
((¬s → f ) ∧ (f → ¬g)) → (g → s)
Check using a truth table that
• it is valid ; or alternatively, that
• (g → s) is true every time (¬s → f ) ∧ (f → ¬g) is
true; or ...
• (¬s → f ) ∧ (f → ¬g) ∧ ¬(g → s) is always false
(unsatisfiable)
Another method: algebraically ...; by transitivity of implication
(¬s → f ) ∧ (f → ¬g) → (¬s → ¬g) (≡ T )
But (¬s → ¬g) ≡ (g → s) (contrapositive); replacing
inside ...
(¬s → f ) ∧ (f → ¬g) → (g → s) (≡ T )
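The truth-table check for this argument is small enough to run directly; a sketch (variable names follow the text):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# s = submits the project, f = fails CS414, g = graduates
valid = all(
    implies(implies(not s, f) and implies(f, not g), implies(g, s))
    for s, f, g in product([False, True], repeat=3)
)
print(valid)  # True
```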
Example: If X is greater than zero, then if Y is zero,
then Z is zero. Variable Y is zero. Hence, either X is
greater than zero or Z is zero.
Atomic propositions:
• x: X is greater than zero.
• y: Y is zero.
• z: Z is zero.
Formalization:
(x → (y → z)) ∧ y → x ∨ z
(*)
Valid argument?
Does not seem so ... (intuitively, y does not contribute
to the conclusion, and the info about x and z is conditional ...)
It is not: a counterexample is the valuation that assigns
x ↦ 0, z ↦ 0 (the consequent of (*) must be 0), and
y ↦ 1 (to make the antecedent true)
This valuation makes (*) take value 0
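The counterexample can also be found by exhaustive search; a sketch:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Search for a valuation falsifying (x -> (y -> z)) and y -> (x or z)
counterexamples = [
    (x, y, z)
    for x, y, z in product([False, True], repeat=3)
    if not implies(implies(x, implies(y, z)) and y, x or z)
]
print(counterexamples)  # [(False, True, False)], i.e. x = 0, y = 1, z = 0
```

The search confirms that this is the only falsifying valuation.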
Example: If Superman were able and willing to prevent evil, he would do so. If Superman were unable to
prevent evil, he would be impotent; if he were unwilling
to prevent evil, he would be malevolent. Superman does
not prevent evil. If Superman exists, he is neither impotent nor malevolent. Therefore, Superman does not
exist.
Introduce the following propositional variables to denote
the atomic sentences
• a: Superman is able to prevent evil.
• w: Superman is willing to prevent evil.
• i: Superman is impotent.
• m: Superman is malevolent.
• p: Superman prevents evil.
• e: Superman exists.
The main propositions in the argument are:
• F0 : (a ∧ w) → p
• F1 : (¬a → i) ∧ (¬w → m)
• F2 : ¬p
• F3 : e → (¬i ∧ ¬m)
The whole argument can be represented by the formula:
(F0 ∧ F1 ∧ F2 ∧ F3) → ¬e.
The argument is valid: try to deduce ¬e reasoning backwards using the hypotheses ...
Contrapositive of F3:
¬(¬i ∧ ¬m) → ¬e
Try to prove ¬(¬i ∧ ¬m), i.e. i ∨ m
(*)
Now from F1 we get: ¬a → i, then instead of proving
(*) it is good enough to prove ¬a ∨ m (¬a replaced i)
Again from F1 we get ¬w → m, i.e. good enough to
prove ¬a ∨ ¬w (same idea), i.e. ¬(a ∧ w)
From F0 we get
¬p → ¬(a ∧ w) (contrapositive)
We need ¬p; but this is given directly by F2
We are done ... Much easier than truth tables ...
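With six variables the truth table has 64 rows, which is where a mechanical check pays off; a sketch (F0–F3 encoded as in the text):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# a: able, w: willing, i: impotent, m: malevolent, p: prevents evil, e: exists
valid = all(
    implies(
        implies(a and w, p)                          # F0
        and implies(not a, i) and implies(not w, m)  # F1
        and (not p)                                  # F2
        and implies(e, (not i) and (not m)),         # F3
        not e,
    )
    for a, w, i, m, p, e in product([False, True], repeat=6)
)
print(valid)  # True
```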
Predicate Logic
The argument:
If Socrates is a man, then Socrates is a mortal. Socrates
is a man. In consequence, Socrates is a mortal.
Could be expressed as a valid (tautological) argument in
propositional logic
Consider now:
All men are mortal. Socrates is a man. Then, Socrates
is a mortal.
Can it be expressed in propositional logic?
p: all men are mortal.
q: Socrates is a man.
r: Socrates is a mortal.
(p ∧ q) → r ??
Not a tautology, and the argument seems to be a valid
argument ...
Propositional logic is not expressive enough to capture
that argument
The connections between the elements of the argument
are lost in propositional logic
Here we are talking about general properties (also called
predicates) and individuals of a domain of discourse that
may or may not have those properties
Instead of introducing names for complete propositions
-like in propositional logic- we introduce:
• names for the properties or predicates,
• names for the individuals (or constants)
• variables that can be used in combination with the
predicates
• quantifiers to express things like “all individuals have
property ...” or “there exists an individual with the
property ...”
• And we also use all the logical connectives from propositional logic; in this sense predicate logic properly
extends propositional logic
Example: The new argument around Socrates
• Man(·): for “being a man”
• Mortal (·): for “being a mortal”
• Socrates: a symbolic name for the individual Socrates
• Variables: x, y, z, ...
• ∀: universal quantifier; ∃: existential quantifier
Expressing the argument:
∀x(Man(x) → Mortal (x)) ∧ Man(Socrates)
→ Mortal (Socrates)
We can use other names, capturing the same argument
∀x(P (x) → Q(x)) ∧ P (c) → Q(c)
(*)
Here: P (x), Q(x), P (c), Q(c) are atomic formulas (the
simplest formulas), and (*) is a complex formula
The syntax of predicate logic tells us which strings of
symbols are real formulas
Does (*) express the original argument as a valid argument?
Is it always true?
This has to do with the semantics of predicate logic:
meaning, interpretations, truth, ...
Languages of predicate logic are interpreted in structures
consisting of:
• A universe or domain of discourse
• Interpretations of the predicates (predicate names) as
properties over the domain of discourse
• Interpretations of the constants (names for individuals) as concrete individuals in the domain
Example (contd): the (specific) building blocks of the
language are the predicates P (·), Q(·) and the constant
c
Possible interpretations:
• – Universe U = the set of all the ancient Greeks,
including humans and divinities, etc.
– P (·): interpreted as the ancient male Greeks, i.e.
as a subset of U
– Q(·): ... the mortals among all the ancient Greeks,
also as a subset of U
– Socrates: ... as the philosopher Socrates, an element of (individual in) U
In this structure (the one of ancient Greeks), the formula
- ∀x(P (x) → Q(x)) becomes true, because, all men
are mortal.
- P (Socrates) becomes true, because Socrates is a
man.
- Q(Socrates) becomes true, because Socrates is a
mortal.
(*) is of the form p ∧ q → r, with p, q, r true, then
(*) is true (as expected)
• – Universe: U = living beings on earth
– P (·): the plants, a subset of U
– Q(·): the animals ...
– c: a name for the canary Tweety
In this structure, the formula
- ∀x(P (x) → Q(x)) becomes false
- P (c) becomes false
- Q(c) becomes true
(*) is of the form p ∧ q → r, with p, q false, and r
true; then (*) is true (as expected)
Actually (*) becomes true in every possible structure (or
under every possible interpretation)
It captures a valid argument (always true) ...
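For finite structures, this kind of evaluation can be programmed directly; a toy sketch (the universes and interpretations below are invented stand-ins for the two structures in the text):

```python
from itertools import chain, combinations

def holds(universe, P, Q, c):
    """Evaluate (forall x (P(x) -> Q(x)) and P(c)) -> Q(c) in a finite structure."""
    all_p_are_q = all((x not in P) or (x in Q) for x in universe)
    antecedent = all_p_are_q and (c in P)
    return (not antecedent) or (c in Q)

# A toy version of the ancient-Greeks structure: Socrates is a mortal man
print(holds({"Socrates", "Athena"}, {"Socrates"}, {"Socrates"}, "Socrates"))  # True

# A toy version of the living-beings structure: P = plants, Q = animals, c = Tweety
print(holds({"Tweety", "fern"}, {"fern"}, {"Tweety"}, "Tweety"))  # True

# Brute force over every interpretation on a 2-element universe: always true
U = ["a", "b"]
subsets = [set(s) for s in chain.from_iterable(combinations(U, r) for r in range(3))]
print(all(holds(U, P, Q, c) for P in subsets for Q in subsets for c in U))  # True
```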
Example: We can use predicates, etc. to symbolically
represent knowledge about a numerical domain (like algebra)
Predicates: B(·, ·), M (·, ·, ·), S(·, ·, ·), E(·, ·)
Constants: 0, 1
Atomic formulas: B(x, y), B(1, 0), B(1, 1), S(1, z, x), ...
More complex formulas:
- ∀x∀y∀z(S(x, y, z) → S(y, x, z))
- ∀x∀y(M (x, x, y) ∧ ¬E(x, 0) → B(y, 0))
- ∀x∀y∀z(B(x, 0)∧B(y, 0)∧B(z, 0)∧M (x, y, z) →
(B(z, x) ∧ B(z, y)) ∨ E(z, 1))
- ∃x∃z(M (x, x, z) ∧ S(z, 1, 0))
Are these formulas true?
In general, we cannot answer yes or no: it depends on
the interpretation of the symbols in it ...
Only in certain cases can we say the formula is true, independently of the interpretation, e.g.
∀x(P (x) → P (x)) is always true (or valid)
In other cases, we can say it is (always) false, e.g.
∃x(P (x) ∧ ¬P (x)) is always false (a contradiction)
The formulas in the previous slide are neither valid nor
contradictions (sure?)
Their truth value depends upon the interpretation domain and the interpretation of the symbols on that domain (i.e. on the interpretation structure)
The interpretation structures for that language require a
domain (universe), two binary relations, and two ternary
relations on it, and two distinguished elements in it (for
which the 0 and 1 above are names)
We had the predicates: B(·, ·), M (·, ·, ·), S(·, ·, ·), E(·, ·),
and the names 0, 1
Example: Structure N, >, ∗(·, ·, ·), +(·, ·, ·), =, 0, 1
N = {0, 1, 2, 3, ...}
E.g. 9 > 3, ∗(2, 3, 6), +(8, 1, 9), 5 = 5
- ∀x∀y∀z(S(x, y, z) → S(y, x, z))
becomes interpreted as a proposition about natural numbers and their usual operations, namely
for every l, m, n ∈ N, it holds that if +(l, m, n), then
+(m, l, n)
Or simpler: l + m = n ⇒ m + l = n
The original formula is true in that structure
- ∀x∀y(M (x, x, y) ∧ ¬E(x, 0) → B(y, 0))
becomes: for all m, n ∈ N, if m² = n and m ≠ 0, then n > 0
also true
- ∀x∀y∀z(B(x, 0)∧B(y, 0)∧B(z, 0)∧M (x, y, z) →
(B(z, x) ∧ B(z, y)) ∨ E(z, 1))
becomes
for all ..., if l > 0 and m > 0 and n > 0 and l ∗ m = n,
then (n > l and n > m) or n = 1
true ...
- ∃x∃z(M (x, x, z) ∧ S(z, 1, 0))
becomes: there is m ∈ N and there is n ∈ N such that m² = n and n + 1 = 0
false
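These interpreted statements can be spot-checked by machine on a finite initial segment of N; a sketch (a bounded check only, not a proof over all naturals):

```python
from itertools import product

N = range(20)  # a finite initial segment of the naturals

def implies(a, b):
    return (not a) or b

# S(x, y, z) as x + y = z: commutativity in the first two arguments
comm = all(implies(x + y == z, y + x == z) for x, y, z in product(N, repeat=3))

# M(x, x, y) and not E(x, 0) -> B(y, 0): if x*x = y and x != 0 then y > 0
square_pos = all(implies(x * x == y and x != 0, y > 0) for x, y in product(N, repeat=2))

# exists x, z with x*x = z and z + 1 = 0: has no witness among the naturals
neg_square = any(x * x == z and z + 1 == 0 for x, z in product(N, repeat=2))

print(comm, square_pos, neg_square)  # True True False
```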
This same language (formulas) of predicate logic can be
interpreted in a different structure
Example: Structure R, <, ∗, =, +, 0, 1
We can symbolically say other true things about the real
numbers, e.g.
- 2 has a square root in R
- every number has an additive inverse
- there is a multiplicative neutral element
Example:
Structure C, ∅, ∗, =, +, 0, 1
Exercise: (a) Check the formulas in R, <, ∗, −, π, e
(b) Check the formulas in a non-numerical domain (why
not?)
Free and bound variables
In all the examples before, the variables in the formula
are bound, i.e. they are affected or under the scope of
a quantifier
Formulas of predicate logic that have all their variables
bound are called sentences
However, formulas are allowed to contain free variables,
i.e. not affected by a quantifier
- ∃y(R(x, y) ∧ S(y, z))
has a bound variable y, and two free variables x, z
Is the formula true?
Now it depends on the structure and the values that we
assign to the free variables in the domain
Example: Structure D, Flies, UsedTo
By means of the formula ∃y(R(x, y) ∧ S(y, z)) with
the existentially quantified variable y that appears in the
two tables, we are connecting the two tables
A usual operation in relational data bases
Predicate logic is an important tool in data bases:
• To give meaning, i.e. semantics, to data
• We can pose (symbolic) queries, that can be processed and answered by the data base management
systems
E.g. “Give me all the pairs of pilots and possible destinations for them”: Q(x, z) : ∃y(R(x, y)∧S(y, z))
The system returns all pairs of possible values for x
and z, i.e. x and z are used as real variables, which
get values from the database
• To specify that something must always hold in the
database, for any update (integrity constraints)
E.g. “Every aircraft in the table Flies must appear
in the table UsedTo”
∀x(∃yFlies(y, x) → ∃zUsedTo(x, z))    (*)
Not satisfied in the data base above
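Both the join query and the integrity constraint can be sketched over toy tables (all rows below are invented for illustration):

```python
# Toy instances of the two tables from the text:
flies = {("Ann", "B737"), ("Bob", "A320")}       # Flies(pilot, aircraft)
used_to = {("B737", "Lima"), ("B737", "Quito")}  # UsedTo(aircraft, destination)

# Q(x, z) : exists y (Flies(x, y) and UsedTo(y, z)) -- a relational join
answers = {(x, z) for (x, y1) in flies for (y2, z) in used_to if y1 == y2}
print(sorted(answers))  # [('Ann', 'Lima'), ('Ann', 'Quito')]

# Integrity constraint: every aircraft appearing in Flies appears in UsedTo
flown = {y for (_, y) in flies}
used = {y for (y, _) in used_to}
print(flown <= used)  # False: the A320 is flown but has no destination row
```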
Some useful facts about formulas of predicate logic
Since predicate logic extends propositional logic:
- we inherit all the tautologies of propositional logic, now
as valid formulas, i.e. formulas that are true in every
interpretation (compatible with the language of the formula)
E.g. ∀xP (x) ∨ ¬∀xP (x) (an instance of a tautology
from propositional logic)
- But there are some “new” valid formulas, that do not
come from propositional logic, e.g.
∀xP (x) → P (c)
P (c) → ∃yP (y)
- the notion of logical equivalence applies basically as
before: Two formulas ϕ, ψ (written in the same language of predicate logic) are logically equivalent, denoted ϕ ≡ ψ, iff for every interpretation (compatible
with the language) they take the same truth value
E.g. ∀xP (x) → ∀yQ(y) ≡ ¬∀xP (x) ∨ ∀yQ(y)
∀x∀y(P (x) → Q(y)) ≡ ∀x∀y(¬P (x) ∨ Q(y))
∀x∀y(P (x) ∨ ¬Q(y)) ≡ ∀x∀y¬(¬P (x) ∧ Q(y))
Any new equivalences, not inherited from propositional
logic?
• ¬∀xϕ(x) ≡ ∃x¬ϕ(x)
• ¬∃xϕ(x) ≡ ∀x¬ϕ(x)
• ∀xϕ(x) ≡ ∀yϕ(y)
On the RHS, y replaces x in ϕ; this requires that y
does not appear free on the LHS
Quantified (bound) variables can be replaced by fresh
variables
• ∀x(ϕ(x) ∧ ψ(x)) ≡ (∀xϕ(x) ∧ ∀xψ(x))
• ∃x(ϕ(x) ∨ ψ(x)) ≡ (∃xϕ(x) ∨ ∃xψ(x))
• ∀x(ϕ(x) ∨ ψ) ≡ (∀xϕ(x) ∨ ψ)
If x does not appear (free) in ψ
• ∃x(ϕ(x) ∧ ψ) ≡ (∃xϕ(x) ∧ ψ)
If x does not appear (free) in ψ
Example:
Verify that (*) is equivalent to
∀x∀y∃z(Flies(y, x) → UsedTo(x, z))
Since ∀x∀y∃z(Flies(y, x) → UsedTo(x, z)) is false in
the given data base, its negation is true in it
I.e. the data base satisfies
¬∀x∀y∃z(Flies(y, x) → UsedTo(x, z))
That is equivalent to: ∃x∃y∀z(Flies(y, x) ∧ ¬UsedTo(x, z))
Exercise: Verify that the following equivalences do not hold:
• ∀x(ϕ(x) ∨ ψ(x)) ≢ ∀xϕ(x) ∨ ∀xψ(x)
• ∃x(ϕ(x) ∧ ψ(x)) ≢ ∃xϕ(x) ∧ ∃xψ(x)
Hint: In each case, a counterexample must be found,
i.e. a pair of formulas ϕ, ψ and a structure where the
two formulas do not take the same truth values
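The hinted counterexample can be found by searching all interpretations on a two-element universe; a sketch (P = {a}, Q = {b} is the classic witness):

```python
from itertools import chain, combinations

U = ["a", "b"]
subsets = [set(s) for s in chain.from_iterable(combinations(U, r) for r in range(3))]

def forall_or(P, Q):   # forall x (P(x) or Q(x))
    return all(x in P or x in Q for x in U)

def or_foralls(P, Q):  # (forall x P(x)) or (forall x Q(x))
    return all(x in P for x in U) or all(x in Q for x in U)

# Search every interpretation of P and Q on the 2-element universe
counterexamples = [(P, Q) for P in subsets for Q in subsets
                   if forall_or(P, Q) != or_foralls(P, Q)]
print(({"a"}, {"b"}) in counterexamples)  # True: LHS holds, RHS fails
```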
Example: Transform the formula
∀x∃y(P (x, y) → (¬∃z∃yR(z, y) ∧ ¬∀xS(x)))
into a (set of) formula(s) in prenex conjunctive normal
form
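One possible derivation, sketched step by step (the renamed variables w, u are mine):

1. Rename the inner bound variables to avoid clashes (∃y → ∃w, ∀x → ∀u):
∀x∃y(P (x, y) → (¬∃z∃wR(z, w) ∧ ¬∀uS(u)))
2. Push negations inside (¬∃ ≡ ∀¬, ¬∀ ≡ ∃¬):
∀x∃y(P (x, y) → (∀z∀w¬R(z, w) ∧ ∃u¬S(u)))
3. Pull the quantifiers to the front (z, w, u do not appear free in P (x, y)):
∀x∃y∀z∀w∃u(P (x, y) → (¬R(z, w) ∧ ¬S(u)))
4. Rewrite the matrix in CNF (ϕ → ψ ≡ ¬ϕ ∨ ψ, then distribute ∨ over ∧):
∀x∃y∀z∀w∃u((¬P (x, y) ∨ ¬R(z, w)) ∧ (¬P (x, y) ∨ ¬S(u)))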
Proofs
The most important notion in logic is the one of logical
consequence
Given a set of formulas (premises, hypotheses, axioms,
...) Σ, a formula ψ is a logical consequence of Σ if whenever all the formulas in Σ are true, then also ψ becomes
true
We saw some examples in the context of propositional
logic, e.g. the non-existence of Superman
In propositional logic, at least in principle, if there are
finitely many propositional variables involved, logical consequence can be checked by means of truth tables (cumbersome)
We also saw alternatives, e.g. backward reasoning
In predicate logic, truth tables cannot be used in general
(domains can be infinite)
It is important, both in propositional logic and in predicate logic (in this case, crucial) to find symbolic methodologies to establish logical consequence
Or, equivalently, inconsistency: remember
ϕ1 ∧ · · · ∧ ϕn → ψ
is valid (making ψ a logical consequence of Σ = {ϕ1, . . . , ϕn})
iff
ϕ1 ∧ · · · ∧ ϕn ∧ ¬ψ
is a contradiction
There are such formal deductive systems for symbolic
(computational) manipulation of formulas that do the
job
Those systems have deduction rules that closely mimic
the way we obtain conclusions in, e.g. mathematical
reasoning
Deduction Rules:
• ϕ → ψ
ϕ
−−−
ψ
Modus Ponens
From the hypothesis ϕ → ψ and ϕ one can jump to
the conclusion ψ
Used all the time in mathematical and everyday reasoning
If Socrates is a man, then Socrates is a mortal
Socrates is a man
− − − − − − − − − − − − − − − − −−
Socrates is a mortal
Can be justified observing that
((ϕ → ψ) ∧ ϕ) → ψ
is valid (always true)
A variant ...
• ϕ → ψ
¬ψ
−−−
¬ϕ
Modus Tollens
Can be obtained from Modus Ponens using the equivalent contrapositive of the top formula: ¬ψ → ¬ϕ
If Socrates is a man, then Socrates is a mortal
Socrates is not a mortal
− − − − − − − − − − − − − − − − −−
Socrates is not a man
• See a list of other deduction rules in the book
Deduction rules are used in combination with certain
hypotheses that are assumed to be true, plus other
previously obtained conclusions, plus valid formulas
that are always true and can always be used safely in
reasoning
• A very useful deduction rule, implemented in many
computational reasoning systems
(¬)p ∨ q
(¬)r ∨ ¬q
−−−−−
(¬)p ∨ (¬)r
Resolution Rule
(p, q, r are propositional variables)
Can be generalized to a larger number of literals
(propositional variables or negations of them)
s ∨ ¬l ∨ ¬t
u ∨ s ∨ q ∨ t
−−−−−−−−−
s ∨ ¬l ∨ u ∨ s ∨ q

¬p ∨ q
p ∨ ¬q
−−−−−
¬p ∨ p

¬p ∨ q
p ∨ ¬q
−−−−−−−
¬q ∨ q
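A resolution step can be sketched in a few lines, with clauses as sets of literals (the "~" encoding of ¬ is mine):

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals like 'p', '~p')."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append((c1 - {lit}) | (c2 - {comp}))
    return out

# The generalized example from the text: resolving on t / ~t
c1 = frozenset({"s", "~l", "~t"})
c2 = frozenset({"u", "s", "q", "t"})
print(resolve(c1, c2))  # one resolvent: {'s', '~l', 'u', 'q'} (duplicate s merges)

# The last two examples: resolving ~p v q with p v ~q, on q and on p
d1, d2 = frozenset({"~p", "q"}), frozenset({"p", "~q"})
print(resolve(d1, d2))  # two resolvents: {q, ~q} and {p, ~p}
```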
There are different proof strategies that have a sound
logical basis; two of them are
• Proofs by contradiction:
Usually, theorems in mathematics are of the form:
HYPOTHESIS (H) ⇒ THESIS (T )
(α)
The proof can be direct (start from H and reach T )
or by contradiction
In the second case, we try to prove:
H and not T ⇒ F
(β)
where F is a contradiction (an always false proposition)
If we establish (β), we know that (α) is true
Why?
Because (β) ⇒ (α) is true
That is, in terms of propositional logic,
((H ∧ ¬T ) → F ) → (H → T )
is always true ...
H  T | F  ¬T  H ∧ ¬T | (β)  (α) | (β) ⇒ (α)
0  0 | 0   1     0   |  1    1  |     1
1  0 | 0   1     1   |  0    0  |     1
0  1 | 0   0     0   |  1    1  |     1
1  1 | 0   0     0   |  1    1  |     1
• Proofs by cases: again we want to establish a theorem of the form (α)
We consider two (or more) cases C1, C2 for H that
cover all the possibilities
(H ∧ C1) → T
(H ∧ C2) → T
C1 ∨ C2
−−−−−−−
H→T
Can be justified:
((H ∧ C1) → T ) ∧ ((H ∧ C2) → T ) ∧
(C1 ∨ C2)
−→
(H → T )
is a tautology (check!)
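The "check!" can be done mechanically; a sketch:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# ((H and C1) -> T) and ((H and C2) -> T) and (C1 or C2)  ->  (H -> T)
is_tautology = all(
    implies(
        implies(h and c1, t) and implies(h and c2, t) and (c1 or c2),
        implies(h, t),
    )
    for h, c1, c2, t in product([False, True], repeat=4)
)
print(is_tautology)  # True
```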
There are also deductive rules for predicate logic
• ∀x ϕ(x)
− − −−
ϕ(t)
where t is a constant or a variable ...
• etc.