Truth Table for the Basic Connectives:
p | q | ¬p | p∧q | p∨q | p⊕q | p→q | p↔q
T | T |  F |  T  |  T  |  F  |  T  |  T
T | F |  F |  F  |  T  |  T  |  F  |  F
F | T |  T |  F  |  T  |  T  |  T  |  F
F | F |  T |  F  |  F  |  F  |  T  |  T
Proof: a valid argument that establishes the truth of a mathematical statement
Proposition: a declarative sentence that is either true or false but not both; ex: "Gainesville is the capital of Florida" (F); 1+1=2 (T)
Converse: q→p is the converse of p→q
Contrapositive: ¬q→¬p is the contrapositive of p→q
Inverse: ¬p→¬q is the inverse of p→q
Tautology: a proposition that is always true; ex: p∨¬p
Contradiction: a proposition that is always false; ex: p∧¬p
Contingency: a proposition that is neither a tautology nor a contradiction; ex: p
De Morgan's Laws: ¬(p∧q) ≡ ¬p∨¬q; ¬(p∨q) ≡ ¬p∧¬q
Identity Laws: p∧T ≡ p; p∨F ≡ p
Domination Laws: p∨T ≡ T; p∧F ≡ F
Idempotent Laws: p∨p ≡ p; p∧p ≡ p
Negation Laws: p∧¬p ≡ F; p∨¬p ≡ T
Double Negation Law: ¬(¬p) ≡ p
Commutative Laws: p∨q ≡ q∨p; p∧q ≡ q∧p
Associative Laws: (p∨q)∨r ≡ p∨(q∨r); (p∧q)∧r ≡ p∧(q∧r)
Distributive Laws: p∨(q∧r) ≡ (p∨q)∧(p∨r); p∧(q∨r) ≡ (p∧q)∨(p∧r)
Absorption Laws: p∨(p∧q) ≡ p; p∧(p∨q) ≡ p
Logical Equivalences Involving Conditionals:
p→q ≡ ¬p∨q
p→q ≡ ¬q→¬p
p∨q ≡ ¬p→q
p∧q ≡ ¬(p→¬q)
¬(p→q) ≡ p∧¬q
Logical Equivalences Involving Biconditionals:
p↔q ≡ (p→q)∧(q→p)
p↔q ≡ ¬p↔¬q
p↔q ≡ (p∧q)∨(¬p∧¬q)
¬(p↔q) ≡ p↔¬q
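Equivalences like these can be spot-checked by enumerating every truth assignment, i.e. a brute-force truth-table check. A minimal Python sketch (the helper names `equivalent` and `implies` are illustrative, not from the source):

```python
from itertools import product

def equivalent(lhs, rhs, nvars=2):
    """True iff lhs and rhs agree on every assignment of truth values."""
    return all(lhs(*vals) == rhs(*vals)
               for vals in product([True, False], repeat=nvars))

implies = lambda p, q: (not p) or q   # material conditional p -> q

# De Morgan: not(p and q) == (not p) or (not q)
assert equivalent(lambda p, q: not (p and q),
                  lambda p, q: (not p) or (not q))
# Contrapositive: (p -> q) == (not q -> not p)
assert equivalent(lambda p, q: implies(p, q),
                  lambda p, q: implies(not q, not p))
print("equivalences verified")
```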
Universal Quantification: ∀xP(x); "P(x) for all values of x in the domain"; an element for which P(x) is false is called a counterexample; ∀xP(x) is true when P(x) is true for every x;
∀xP(x) is false when there is an x for which P(x) is false
Existential Quantification: ∃xP(x); "there exists an element x in the domain such that P(x)"
Arguments: a sequence of propositions; the initial statements are called premises and the final statement is called the conclusion; an argument is valid if the truth of all its premises implies that the conclusion is true
Modus Ponens: p→q, p ∴ q
Modus Tollens: p→q, ¬q ∴ ¬p
Hypothetical Syllogism: p→q, q→r ∴ p→r
Disjunctive Syllogism: p∨q, ¬p ∴ q
Addition: p ∴ p∨q
Simplification: p∧q ∴ p
Conjunction: p, q ∴ p∧q
Resolution: p∨q, ¬p∨r ∴ q∨r
Universal Instantiation: ∀xP(x) ∴ P(c)
Universal Generalization: P(c) for an arbitrary c ∴ ∀xP(x)
Existential Instantiation: ∃xP(x) ∴ P(c) for some element c
Existential Generalization: P(c) for some element c ∴ ∃xP(x)
Theorem: a statement that can be shown to be true
Direct Proof: for p→q; assuming p is true, steps are constructed using rules of inference; ultimately we show that q must also be true
Proof by Contraposition: for p→q; make use of the fact that p→q ≡ ¬q→¬p; show that ¬p must follow from ¬q
Proof by Contradiction: suppose that we want to show that p is true; suppose that we can find a contradiction q such that ¬p→q is true; since q is false and ¬p→q is true, ¬p must be
false, and so p is true
Fallacy of Affirming the Conclusion: ((p→q)∧q)→p is not a tautology
Fallacy of Denying the Hypothesis: ((p→q)∧¬p)→¬q is not a tautology
Proof by Cases: to prove a conditional of the form (p1∨p2∨…∨pn)→q, we make use of the tautology: [(p1∨p2∨…∨pn)→q] ↔ [(p1→q)∧(p2→q)∧…∧(pn→q)]
Identity Laws: A∪∅ = A; A∩U = A
Domination Laws: A∪U = U; A∩∅ = ∅
Idempotent Laws: A∪A = A; A∩A = A
Complementation Law: (Aᶜ)ᶜ = A
Commutative Laws: A∪B = B∪A; A∩B = B∩A
Associative Laws: (A∪B)∪C = A∪(B∪C); (A∩B)∩C = A∩(B∩C)
Distributive Laws: A∪(B∩C) = (A∪B)∩(A∪C); A∩(B∪C) = (A∩B)∪(A∩C)
Absorption Laws: A∪(A∩B) = A; A∩(A∪B) = A
Generalized Union: the set that contains those elements that are members of at least one set in the collection: A1 ∪ A2 ∪ … ∪ An = ⋃_{i=1}^{n} Ai
Generalized Intersection: the set that contains those elements that are members of all sets in the collection: A1 ∩ A2 ∩ … ∩ An = ⋂_{i=1}^{n} Ai
Functions: A function f from A to B is an assignment of exactly one element of B to each element of A. We write f(a)=b if b is the unique element of B assigned by the function f to the
element a of A. If f is a function from A to B, we write f: A→B. A is the domain of f and B is the codomain of f; b is the image of a and a is the pre-image of b. The range (image) of f is the set of
all images of elements of A; f maps A to B. Let f1 and f2 be functions from A to R. Then f1+f2 and f1f2 are also functions from A to R defined for all x∈A by (f1+f2)(x)=f1(x)+f2(x) and
(f1f2)(x)=f1(x)f2(x). Let f be a function from A to B and let S be a subset of A; the image of S under the function f is the subset of B that consists of the images of the elements of S. We denote the
image of S by f(S), so f(S) = {t | ∃s∈S (t = f(s))}.
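The image of a set under a function can be computed directly from this definition. A minimal Python sketch (the `image` helper and the squaring function are illustrative assumptions):

```python
def image(f, S):
    """Return f(S) = {f(s) : s in S}, the image of S under f."""
    return {f(s) for s in S}

square = lambda x: x * x      # example function f: Z -> Z
S = {-2, -1, 0, 1, 2}
print(image(square, S))       # {0, 1, 4}
```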
One-to-One Functions: A function f is one-to-one if and only if f(a)=f(b) implies that a=b for all a and b in the domain of f. A function is injective if it is one-to-one. A function f from A to
B is onto if and only if for every element b∈B there is an element a∈A with f(a)=b. A function is surjective if it is onto.
Bijective Functions: A function f is bijective if it is both one-to-one and onto.
Inverse Functions: Let f be a one-to-one function from set A to set B. The inverse function of f is the function that assigns to an element b∈B the unique element a∈A such that f(a)=b.
The inverse function of f is denoted by f⁻¹. Hence f⁻¹(b)=a when f(a)=b.
Composition of Functions: Let g be a function from A to B and let f be a function from B to C. The composition of the functions f and g, denoted f∘g, is defined for all a∈A by
(f∘g)(a) = f(g(a))
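The definition (f∘g)(a) = f(g(a)) translates directly into code. A minimal Python sketch with illustrative example functions (the names `compose`, `f`, `g` are assumptions, not from the source):

```python
def compose(f, g):
    """Return the composition f o g, i.e. the function a -> f(g(a))."""
    return lambda a: f(g(a))

g = lambda a: a + 1        # g: A -> B
f = lambda b: 2 * b        # f: B -> C
fog = compose(f, g)
print(fog(3))              # f(g(3)) = f(4) = 8
```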
The Graph of a Function: Let f be a function from set A to set B. The graph of the function f is the set of ordered pairs {(a, b) | a∈A, b∈B, and f(a)=b}.
Floor: Assigns to the real number x the largest integer that is less than or equal to x. The value of the floor function is denoted by ⌊x⌋.
Ceiling: Assigns to the real number x the smallest integer that is greater than or equal to x. The value of the ceiling function is denoted by ⌈x⌉.
Useful Properties of the Floor and Ceiling Functions (n an integer): ⌊x⌋=n IFF n ≤ x < n+1; ⌈x⌉=n IFF n−1 < x ≤ n; ⌊x⌋=n IFF x−1 < n ≤ x; ⌈x⌉=n IFF x ≤ n < x+1;
x−1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x+1; ⌊−x⌋ = −⌈x⌉; ⌈−x⌉ = −⌊x⌋; ⌊x+n⌋ = ⌊x⌋+n; ⌈x+n⌉ = ⌈x⌉+n
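These identities can be checked numerically with Python's standard `math.floor` and `math.ceil`; a spot-check for one sample value, not a proof:

```python
import math

x, n = 2.7, 5
assert math.floor(x) <= x < math.floor(x) + 1      # floor bracketing
assert math.ceil(x) - 1 < x <= math.ceil(x)        # ceiling bracketing
assert math.floor(-x) == -math.ceil(x)             # floor(-x) = -ceil(x)
assert math.ceil(-x) == -math.floor(x)             # ceil(-x) = -floor(x)
assert math.floor(x + n) == math.floor(x) + n      # shift by an integer
assert math.ceil(x + n) == math.ceil(x) + n
print(math.floor(x), math.ceil(x))                 # 2 3
```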
Sequences: a function from a subset of the set of integers to a set S; we use the notation an to denote the image of the integer n; we call an a term of the sequence
Geometric Progression: a sequence of the form a, ar, ar², …, arⁿ, …; a is called the initial term and r is called the common ratio
Arithmetic Progression: a sequence of the form a, a+d, a+2d, …, a+nd, …; the initial term a and common difference d are real numbers
Recurrence Relations: an equation that expresses an in terms of one or more of the previous terms of the sequence
Fibonacci Sequence: defined by the initial conditions f0=0, f1=1 and the recurrence relation fn = fn−1 + fn−2 for n = 2, 3, 4, …
Solving Recurrences: we have solved the recurrence relation together with the initial conditions when we find an explicit formula for the terms of the sequence
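The Fibonacci recurrence above can be computed by iterating the relation from the initial conditions; a minimal Python sketch (the function name `fib` is an illustrative choice):

```python
def fib(n):
    """n-th Fibonacci number: f(0)=0, f(1)=1, f(n)=f(n-1)+f(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b   # advance one step of the recurrence
    return a

print([fib(n) for n in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]
```

The explicit closed-form formula (when one is found) is what "solving" the recurrence means; the loop above merely evaluates it term by term.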
Summations: Consider the terms am, am+1, …, an from the sequence {an}; the sum of these terms is denoted by Σ_{i=m}^{n} ai
Identity Matrix: the identity matrix of order n is the n×n matrix In = [δij], where δij = 1 if i=j and δij = 0 otherwise; multiplying a matrix by an identity matrix doesn't change the matrix
Transpose of a Matrix: if A is an m×n matrix, Aᵗ is the n×m matrix made by interchanging the rows and columns of A; a square matrix is called symmetric if A = Aᵗ
Zero-One Matrices: a matrix all of whose entries are either 0 or 1;
Join: A∨B; the join of A and B is the zero-one matrix with (i, j)th entry aij ∨ bij
Meet: A∧B; the meet of A and B is the zero-one matrix with (i, j)th entry aij ∧ bij
Boolean Product: Let A=[aij] be an m×k zero-one matrix and B=[bij] a k×n zero-one matrix; then A⊙B is the m×n matrix with (i, j)th entry cij = (ai1∧b1j) ∨ (ai2∧b2j) ∨ … ∨ (aik∧bkj)
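The three zero-one matrix operations above can be sketched in Python with plain lists of lists (the helper names are illustrative assumptions):

```python
def join(A, B):
    """Entrywise OR of two zero-one matrices of the same size."""
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def meet(A, B):
    """Entrywise AND of two zero-one matrices of the same size."""
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def boolean_product(A, B):
    """c[i][j] = OR over t of (a[i][t] AND b[t][j]); A is m x k, B is k x n."""
    k = len(B)
    return [[int(any(A[i][t] & B[t][j] for t in range(k)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 0], [0, 1]]          # the 2x2 identity
B = [[0, 1], [1, 0]]
print(boolean_product(A, B))  # [[0, 1], [1, 0]]
```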
Mathematical Induction: a proof technique generally used to establish a statement for all natural numbers
Basis Step: We verify that P(1) is true
Inductive Step: We show that the conditional statement P(k)→P(k+1) is true for all positive integers k: [P(1) ∧ ∀k(P(k)→P(k+1))] → ∀nP(n)
Example: Prove that n < 2ⁿ for all positive integers n
Strong Induction Example: every amount of postage of 12 cents or more can be formed using 4-cent and 5-cent stamps
basis step: P(12) is true, since 12 cents can be formed using three 4-cent stamps, and P(13), P(14), P(15) are also true
inductive step: assuming P(j) is true for 12 ≤ j ≤ k, where k ≥ 15, prove P(k+1) is true; since P(k−3) is true, add a 4-cent stamp to the stamps used for the amount k−3 to form k+1
Well-ordering Property: every nonempty set of nonnegative integers has a least element
Recursively Defined Function: basis step: specify the value of the function at zero; recursive step: give a rule for finding its value at an integer from its values at smaller integers; a
function f(n) is the same as a sequence a0, a1, …, where f(i) = ai
Example: Suppose f is defined recursively as follows: f(0)=3; f(n+1)=2f(n)+3
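The recursive definition in this example can be evaluated directly by mirroring its basis and recursive steps; a minimal Python sketch:

```python
def f(n):
    """f(0) = 3; f(n+1) = 2*f(n) + 3, per the example's definition."""
    if n == 0:
        return 3            # basis step
    return 2 * f(n - 1) + 3 # recursive step

print([f(n) for n in range(4)])   # [3, 9, 21, 45]
```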
Recursively Defined Sets: a set S; basis step: 4∈S; recursive step: if x∈S and y∈S, then x+y∈S
Set of Strings: The set Σ* of strings over the alphabet Σ is defined recursively by the basis step: λ∈Σ* (where λ is the empty string) and the recursive step: if w∈Σ* and x∈Σ, then wx∈Σ*
Concatenation: denoted by ·; combines two strings; let Σ be a set of symbols and Σ* the set of strings formed from symbols in Σ; basis step: if w∈Σ*, then w·λ = w, where λ is the empty string;
recursive step: if w1∈Σ*, w2∈Σ*, and x∈Σ, then w1·(w2x) = (w1·w2)x
Length of a String: a recursive definition of the length l(x) of a string
basis step: l(λ) = 0; recursive step: l(wx) = l(w) + 1 if w∈Σ* and x∈Σ
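This recursive definition maps directly onto code, with Python's empty string playing the role of λ; a minimal sketch (the function name `str_len` is an illustrative choice):

```python
def str_len(s):
    """Recursive string length: l(lambda) = 0, l(wx) = l(w) + 1."""
    if s == "":                    # basis step: the empty string
        return 0
    return str_len(s[:-1]) + 1     # recursive step: strip the last symbol

print(str_len("abc"))   # 3
```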
Well-formed Formulae in Propositional Logic: basis step: T, F, and s, where s is a propositional variable, are well-formed formulae; recursive step: if E and F are well-formed formulae,
then (¬E), (E∧F), (E∨F), (E→F), and (E↔F) are well-formed formulae
Structural Induction Basis Step: show that the result holds for all elements specified in the basis step of the recursive definition to be in the set
Structural Induction Recursive Step: show that if the statement is true for each of the elements used to construct new elements in the recursive step of the definition, the result holds for
these new elements
Algorithm: a finite set of precise instructions for performing a computation
Input: an algorithm has input values from a specified set
Output: from input values, the algorithm produces output values from a specified set; the solution
Correctness: should produce correct output values for each set of input values
Finiteness: should produce the output after a finite number of steps for an input
Effectiveness: must be able to perform steps correctly in a finite amount of time
Generality: the algorithm should work for all problems of the desired form
Sorting Problems: putting the elements of a list into increasing order
Optimization Problems: determining the optimal value (maximum or minimum) of a particular quantity over all possible inputs
Searching Problems: the general searching problem is to locate an element x in a list of distinct elements a1, a2, …, an, or determine that it is not in the list; the solution to a searching
problem is the location of the term in the list that equals x (that is, i is the solution if x = ai), or 0 if x is not in the list
Linear Search: locates an item in a list by examining elements in sequence one at a time; first compare x with a1: if they are equal, return position 1, and if not, try a2; if x = a2,
return position 2; keep going, and if no match is found when the entire list has been scanned, return 0; worst-case complexity O(n)
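The linear search procedure just described can be sketched in Python, returning the 1-based position of x or 0 when x is absent (the function name is an illustrative choice):

```python
def linear_search(x, a):
    """Scan the list left to right; return 1-based position of x, or 0."""
    for i, ai in enumerate(a, start=1):
        if ai == x:
            return i
    return 0

print(linear_search(7, [3, 1, 7, 9]))   # 3
print(linear_search(5, [3, 1, 7, 9]))   # 0
```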
Binary Search: assumes the input is a list of items in increasing order; repeatedly compares x with the middle element of the remaining sublist and discards the half that cannot contain x;
with worst-case complexity O(log n), binary search is more efficient than linear search
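A minimal Python sketch of binary search on a sorted list, matching the 1-based position convention used above (the function name is an illustrative choice):

```python
def binary_search(x, a):
    """a must be sorted in increasing order; return 1-based position of x, or 0."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid + 1       # convert to 1-based position
        elif a[mid] < x:
            lo = mid + 1         # x can only be in the upper half
        else:
            hi = mid - 1         # x can only be in the lower half
    return 0

print(binary_search(7, [1, 3, 7, 9, 12]))   # 3
```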
Bubble Sort: makes multiple passes through a list; on each pass, every pair of adjacent elements found out of order is interchanged; worst-case complexity O(n²)
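Bubble sort as described above can be sketched in Python (returning a new sorted list rather than sorting in place is an implementation choice here):

```python
def bubble_sort(a):
    """Repeated passes; swap adjacent out-of-order pairs. O(n^2) worst case."""
    a = list(a)                       # work on a copy
    for i in range(len(a) - 1):       # after pass i, last i+1 slots are final
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 2, 4, 1, 3]))   # [1, 2, 3, 4, 5]
```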
Insertion Sort: begins with the 2nd element; compares the 2nd element with the 1st and puts it before the 1st if it is not larger; next the 3rd element is put into the correct position among the
first 3 elements; in each subsequent pass, the (n+1)st element is put into its correct position among the first n+1 elements; worst-case complexity O(n²)
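The insertion sort procedure above, sketched in Python (again returning a copy by choice):

```python
def insertion_sort(a):
    """Insert each element into its place among the sorted prefix. O(n^2) worst case."""
    a = list(a)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger prefix elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # drop key into its correct slot
    return a

print(insertion_sort([5, 2, 4, 1, 3]))   # [1, 2, 3, 4, 5]
```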
Greedy Algorithms: used to solve optimization problems; after specifying what the best choice at each step is, we try to prove that this approach always produces an optimal solution,
or find a counterexample to show that it does not
Coin Change Problem: Design a greedy algorithm for making change of n cents with the following coins: quarters, dimes, nickels, and pennies, using the least total number of coins.
Lemma 1: If n is a positive integer, then n cents in change using quarters, dimes, nickels, and pennies, using the fewest coins possible, has at most 2 dimes, at most 1 nickel, at most 4
pennies, and cannot have both 2 dimes and a nickel. The total amount of change in dimes, nickels, and pennies cannot exceed 24 cents.
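The greedy algorithm for this coin set repeatedly takes as many of the largest remaining denomination as possible; a minimal Python sketch (the function name is an illustrative choice):

```python
def greedy_change(n):
    """Greedy change for n cents with US coins (25, 10, 5, 1)."""
    counts = {}
    for coin in (25, 10, 5, 1):          # largest denomination first
        counts[coin], n = divmod(n, coin)
    return counts

print(greedy_change(67))   # {25: 2, 10: 1, 5: 1, 1: 2}
```

For this particular denomination set, Lemma 1 is what guarantees the greedy choice uses the fewest coins; for other coin sets greedy can fail.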
Big-O Notation: let f and g be functions from the set of integers or the set of real numbers to the set of real numbers; we say that f(x) is O(g(x)) if there are constants C and k such that
|f(x)| ≤ C|g(x)| whenever x > k; read as "f(x) is big-O of g(x)" or "g asymptotically dominates f"; C and k are witnesses to the relationship
Example: Show that f(x) = x² + 2x + 1 is O(x²). Find C and k such that |f(x)| ≤ C|g(x)|: for x > 1, 0 ≤ x² + 2x + 1 ≤ x² + 2x² + x² = 4x²; we can take k = 1 and C = 4 as the witnesses to show that f(x) is O(x²).
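The chain x² + 2x + 1 ≤ x² + 2x² + x² = 4x² for x > 1 can be spot-checked numerically; a sanity check over sampled values, not a proof:

```python
def f(x):
    return x * x + 2 * x + 1

# Check the witness bound f(x) <= 4*x^2 for sampled x > 1.
assert all(f(x) <= 4 * x * x for x in range(2, 1000))
print("f(x) <= 4x^2 holds for all sampled x > 1")
```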
Growth of Combinations of Functions: Theorem: suppose that f1(x) is O(g1(x)) and that f2(x) is O(g2(x)); then (f1+f2)(x) is O(max(|g1(x)|, |g2(x)|))
Corollary: suppose f1(x) and f2(x) are both O(g(x)); then (f1+f2)(x) is O(g(x))
Theorem: suppose f1(x) is O(g1(x)) and f2(x) is O(g2(x)); then (f1f2)(x) is O(g1(x)g2(x))
Big-Omega Notation: let f and g be functions from the set of integers or the set of real numbers to the set of real numbers; we say that f(x) is Ω(g(x)) if there are constants C and k such
that |f(x)| ≥ C|g(x)| when x > k; we say "f(x) is big-Omega of g(x)"; Big-O gives an upper bound on the growth of a function, while Big-Omega gives a lower bound
Big-Theta Notation: let f and g be functions from the set of integers or the set of real numbers to the set of real numbers; we say that f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x));
we say f(x) is big-Theta of g(x), f(x) is of order g(x), and f(x) and g(x) are of the same order; this holds if and only if there exist constants c1, c2, and k such that
c1|g(x)| ≤ |f(x)| ≤ c2|g(x)| whenever x > k
Time Complexity: an analysis of the time required to solve a problem of a particular size; expressed in terms of the number of operations used by the algorithm when the input has a
particular size; the operations used as the measure can be the comparison, addition, multiplication, or division of integers; time complexity is described in terms of the number of operations required
Space Complexity: an analysis of the computer memory required involves the space complexity of the algorithm
Example:
findMax
input : a1, a2, a3, , an : integers
output: The maximum element among the input integers
max := a1
for i := 2 to n do
if max < ai then
max := ai
end
end
return max
We use the number of comparisons as the measure of time complexity. Observe that we use two comparisons for each term on the list. We use exactly 2(n − 1) + 1 = 2n − 1 comparisons.
Hence the algorithm has a time complexity of Θ(n).
Brute Force Algorithms: the problem is solved in the most straightforward manner, based on the statement of the problem and the definitions of its terms; designed to solve problems without
regard to the computing resources required; for example, in some brute-force algorithms the solution to a problem is found by examining every possible solution, looking for the best one;
these are naive approaches that do not take advantage of any special structure of the problem or clever ideas