
PSEUDOCODE

- Capitalise function names, including parameters.
- Code blocks are indented.

Code                     Pseudocode
=                        <-
==                       =
Arrays start from 0      Arrays start from 1
for i in range(20)       FOR i <- 1 TO length[20]
                         FOR i <- 20 DOWNTO length[1]
None                     NIL
*, /, +, -               MULT, DIV, ADD, SUBT
%                        MOD
#                        //

BIG O

Used to describe how the running time (or space) of an algorithm grows with the size of the input, n. Common classes, from slowest to fastest growth:

O(1), O(log n), O(n), O(n log n), O(n^2), O(n^3)
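For a rough, runnable illustration of three of these classes (the sample list is an assumption for illustration):

```python
import bisect

data = sorted([17, 20, 26, 31, 44, 54, 55, 77, 93])

# O(1): indexing a list takes constant time regardless of length
first = data[0]

# O(log n): binary search halves the search space at each step
pos = bisect.bisect_left(data, 44)

# O(n): membership test on a list scans every element in the worst case
found = 99 in data

print(first, pos, found)
```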

RECURSION

- A programming technique where a function calls itself.
- Must have a condition to stop (the base case).

FACTORIAL(n)
    IF n < 2
        RETURN 1
    ELSE
        RETURN FACTORIAL(n-1) * n

SUM_DIGITS(n)    # 343 -> 10
    IF n == 0
        RETURN 0
    ELSE
        RETURN SUM_DIGITS(n DIV 10) + n MOD 10
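The digit-sum recursion translates directly into runnable Python (a minimal sketch, using // for integer division):

```python
def sum_digits(n):
    # base case: no digits left
    if n == 0:
        return 0
    # recursive case: last digit plus the digit sum of the rest
    return sum_digits(n // 10) + n % 10

print(sum_digits(343))  # 3 + 4 + 3 = 10
```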

COUNT_X(string):    # 'yyyxxyy' -> 2
    IF string == '':
        RETURN 0

    temp = string[-1]    # last character
    IF temp == 'x':
        RETURN COUNT_X(string[:-1]) + 1
    RETURN COUNT_X(string[:-1]) + 0

def REV_STRING(s):
    if s == "":
        return s
    else:
        return REV_STRING(s[1:]) + s[0]
DATA STRUCTURES

Data structures are constructs for storing and organising data, e.g. arrays, stacks, queues, trees.

STACKS

LIFO - Last In, First Out

- Items are added to the top (push: append()).
- Items are removed from the top (pop: pop(-1)).
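A minimal sketch of a stack using a Python list (the sample values are illustrative):

```python
stack = []

# push: add items to the top
stack.append('a')
stack.append('b')
stack.append('c')

# pop: remove from the top - LIFO, so 'c' comes off first
top = stack.pop()

print(top, stack)
```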

QUEUES

FIFO - First In, First Out

- Items are added to the rear (enqueue: append()).
- Items are removed from the front (dequeue: pop(0)).
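The same idea for a queue, again with a plain list (sample values assumed):

```python
queue = []

# enqueue: add items to the rear
queue.append('a')
queue.append('b')
queue.append('c')

# dequeue: remove from the front - FIFO, so 'a' comes out first
front = queue.pop(0)

print(front, queue)
```

Note that list.pop(0) is O(n) because every remaining element shifts left; for real code, collections.deque with popleft() gives O(1) dequeues.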

LINKED LISTS

An ordered set of data elements, each containing a link to its successor (and, in a
doubly-linked list, to its predecessor).

- The first element is the head.
- The last element points to null.

Has these attributes:

- N.value - the item in the node
- N.previous - pointer to the previous node
- N.next - pointer to the next node
- L.head - the first node
- L.tail - the last node
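A minimal singly-linked list sketch using those attributes (the class name Node is assumed; Python's None plays the role of null):

```python
class Node:
    def __init__(self, value):
        self.value = value   # N.value: the item in the node
        self.next = None     # N.next: pointer to the next node (None = null)

# build a three-node list: head -> 1 -> 2 -> 3 -> None
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)

# traverse from the head, following each .next link until null
values = []
n = head
while n:
    values.append(n.value)
    n = n.next

print(values)
```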

TREES

A tree is a set of nodes connected by edges.

- Root - the topmost node.
- Parent - any node which has a node under it.
- Child - a node directly below a given node.
- Leaf - a node with no children.
- Traversing - passing through the nodes.

- Good for fast navigation of data: logarithmic time.


BINARY SEARCH TREE

- A node's left child must have a value less than its parent, and its right child a value
greater than its parent.
- Attributes:
o N.value - value of the node
o N.parent - pointer to the node's parent
o N.left - pointer to the node's left child
o N.right - pointer to the node's right child
- Used to quickly find data.
- Insertion is O(log n).

Insertion

ALGORITHM

If root is NULL
    then create root node
If root exists then
    compare the data with node.data
    while insertion position is not located
        If data is greater than node.data
            goto right subtree
        else
            goto left subtree
    endwhile
    insert data
end If

PYTHON

def insert(tree, data):
    if tree.value == None:
        tree = BinTreeNode(data)
    elif tree.value > data:
        if tree.leftChild:
            return tree.leftChild.insert(data)
        else:
            tree.leftChild = Node(data)
    else:
        if tree.rightChild:
            return tree.rightChild.insert(data)
        else:
            tree.rightChild = Node(data)
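As a self-contained sketch (the class name Node is an assumption; root creation is done by the caller rather than inside insert):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def insert(self, data):
        # smaller values go to the left subtree, larger to the right
        if data < self.value:
            if self.left:
                self.left.insert(data)
            else:
                self.left = Node(data)
        else:
            if self.right:
                self.right.insert(data)
            else:
                self.right = Node(data)

root = Node(50)
for v in [30, 70, 20, 40]:
    root.insert(v)

print(root.left.value, root.right.value)
```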

Finding a node

RECURSIVELY

BIN-TREE-FIND(tree, target)
    IF tree = NIL OR tree.value = target
        RETURN tree
    ELSE IF target < tree.value
        RETURN BIN-TREE-FIND(tree.left, target)
    ELSE
        RETURN BIN-TREE-FIND(tree.right, target)

ITERATIVELY

BIN-TREE-FIND(tree, target)
    r <- tree
    WHILE r
        IF r.value = target
            RETURN r (or TRUE)
        ELSE IF r.value > target
            r <- r.left
        ELSE
            r <- r.right
    RETURN FALSE
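The iterative find translates to Python as follows (the small Node class and sample tree are assumptions for illustration):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def bin_tree_find(tree, target):
    r = tree
    while r:                    # stop when we fall off the tree (null)
        if r.value == target:
            return True
        elif r.value > target:  # target is smaller: go left
            r = r.left
        else:                   # target is larger: go right
            r = r.right
    return False

#        8
#       / \
#      3   10
tree = Node(8, Node(3), Node(10))
print(bin_tree_find(tree, 10), bin_tree_find(tree, 7))
```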
Traversal:

1. Pre-order (NLR) - output the node, then follow the left subtree, then the right
subtree.

2. In-order (LNR) - for each node, follow the left subtree, then output the node itself,
then follow the right subtree. When following the left or right, apply the same
instructions.

3. Post-order (LRN) - follow the left subtree, then the right subtree, then output the
node value.

PRE-ORDER

PREORDER(tree):
    PRINT tree.value
    IF tree.left != 0:
        PREORDER(tree.left)
    IF tree.right != 0:
        PREORDER(tree.right)

IN-ORDER

INORDER(tree):
    IF tree.left != 0:
        INORDER(tree.left)
    PRINT tree.value
    IF tree.right != 0:
        INORDER(tree.right)

POST-ORDER

POSTORDER(tree):
    IF tree.left != 0:
        POSTORDER(tree.left)
    IF tree.right != 0:
        POSTORDER(tree.right)
    PRINT tree.value
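The three traversals can be sketched in runnable Python (the Node class and the sample tree are assumptions; results are collected in a list rather than printed):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def preorder(tree, out):   # NLR: node, left, right
    if tree:
        out.append(tree.value)
        preorder(tree.left, out)
        preorder(tree.right, out)
    return out

def inorder(tree, out):    # LNR: left, node, right
    if tree:
        inorder(tree.left, out)
        out.append(tree.value)
        inorder(tree.right, out)
    return out

def postorder(tree, out):  # LRN: left, right, node
    if tree:
        postorder(tree.left, out)
        postorder(tree.right, out)
        out.append(tree.value)
    return out

#        2
#       / \
#      1   3
tree = Node(2, Node(1), Node(3))
print(preorder(tree, []), inorder(tree, []), postorder(tree, []))
```

Note that in-order traversal of a binary search tree yields its values in sorted order.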

GRAPH TRAVERSAL

BFS

def bfs(self, start):
    queue = [start]      # start value as first node
    visited = []         # list of the visited nodes

    while queue:                         # while the queue has a value
        cur_node = queue.pop(0)          # take the node at the front of the queue

        for neighbour in self.neighbours[cur_node]:   # for all neighbours of the node
            if neighbour not in visited and neighbour not in queue:
                queue.append(neighbour)  # append to the queue if not yet seen

        visited.append(cur_node)         # append current node to visited
    return visited

DFS - implemented exactly the same, but with a stack ( stack.pop() ).
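A standalone DFS sketch along those lines (the adjacency mapping graph is an assumption for illustration):

```python
def dfs(neighbours, start):
    stack = [start]
    visited = []

    while stack:
        cur = stack.pop()        # pop() takes from the END: LIFO, hence depth-first
        if cur not in visited:
            visited.append(cur)
            for n in neighbours[cur]:
                if n not in visited:
                    stack.append(n)
    return visited

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A'],
    'D': ['B'],
}
print(dfs(graph, 'A'))
```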


SORTS

INSERTION SORT

- Start with the second item (index 1).
- Keep swapping it with its left neighbour until the number to its left is smaller
than it (or it reaches the front).
- Pick the next item along.
- Repeat until the end of the list.

#=======================================================================
# Time Complexity of Solution:
# Best O(n); Average O(n^2); Worst O(n^2).
#
# Approach:
# Insertion sort is good for collections that are very small
# or nearly sorted. Otherwise it's not a good sorting algorithm:
# it moves data around too much. Each time an insertion is made,
# all elements in a greater position are shifted.
#=======================================================================

def insertionSort(alist):
    for index in range(1, len(alist)):

        currentvalue = alist[index]
        position = index

        # shift larger elements one place right until the slot is found
        while position > 0 and alist[position-1] > currentvalue:
            alist[position] = alist[position-1]
            position = position - 1

        alist[position] = currentvalue

alist = [54,26,93,17,77,31,44,55,20]
insertionSort(alist)
print(alist)
BUBBLE SORT

#=======================================================================
# Time Complexity of Solution:
# Best O(n^2); Average O(n^2); Worst O(n^2).
#
# Approach:
# Bubblesort is an elementary sorting algorithm. The idea is to
# imagine bubbling the smallest elements of a (vertical) array to the
# top; then bubble the next smallest; then so on until the entire
# array is sorted. Bubble sort is worse than both insertion sort and
# selection sort. It moves elements as many times as insertion sort
# (bad) and it takes as long as selection sort (bad). On the positive
# side, bubble sort is easy to understand. Also there are highly
# improved variants of bubble sort.
#
# 0] For each element at index i from 0 to n, loop:
# 1] For each element at index k, from n to i exclusive, loop:
# 2] If the element at k is less than that at k-1, swap them.
#=======================================================================

def bubbleSort(alist):
    for passnum in range(len(alist)-1, 0, -1):
        for i in range(passnum):
            if alist[i] > alist[i+1]:
                temp = alist[i]
                alist[i] = alist[i+1]
                alist[i+1] = temp

alist = [54,26,93,17,77,31,44,55,20]
bubbleSort(alist)
print(alist)
MERGE SORT

#=======================================================================
# Time Complexity of Solution:
# Best = Average = Worst = O(nlog(n)).
#
# Approach:
# Merge sort is a divide and conquer algorithm. In the divide and
# conquer paradigm, a problem is broken into pieces where each piece
# still retains all the properties of the larger problem -- except
# its size. To solve the original problem, each piece is solved
# individually; then the pieces are merged back together.
#
# mid = len(aList) / 2
# left = mergesort(aList[:mid])
# right = mergesort(aList[mid:])
#
# That approach takes a lot of memory for creating sublists.
#=======================================================================

def merge_sort(aList):

    if len(aList) < 2:
        return aList

    result = []
    mid = len(aList) // 2

    left = merge_sort(aList[:mid])
    right = merge_sort(aList[mid:])

    # merge: repeatedly take the smaller head of the two sorted halves
    while (len(left) > 0) and (len(right) > 0):
        if left[0] > right[0]:
            result.append(right.pop(0))
        else:
            result.append(left.pop(0))

    result.extend(left + right)   # extend merges lists
    return result

print(merge_sort([4,2,5,7,2,1,3]))
QUICK SORT

#=======================================================================
# Time Complexity of Solution:
# Best = Average = O(nlog(n)); Worst = O(n^2).
#
# Approach:
# Quicksort is admirably known as the algorithm that sorts an array
# while preparing to sort it. For contrast, recall that merge sort
# first partitions an array into smaller pieces, then sorts each piece,
# then merges the pieces back. Quicksort actually sorts the array
# during the partition phase.
#
# Quicksort works by selecting an element called a pivot and splitting
# the array around that pivot such that all the elements in, say, the
# left sub-array are less than pivot and all the elements in the right
# sub-array are greater than pivot. The splitting continues until the
# array can no longer be broken into pieces. That's it. Quicksort is
# done.
#=======================================================================

import random

def quicksort(aList):
    _quicksort(aList, 0, len(aList) - 1)

def _quicksort(aList, first, last):
    if first < last:
        pivot = partition(aList, first, last)
        _quicksort(aList, first, pivot - 1)
        _quicksort(aList, pivot + 1, last)

def partition(aList, first, last):
    # choose a random pivot and move it to the end
    pivot = first + random.randrange(last - first + 1)
    swap(aList, pivot, last)
    for i in range(first, last):
        if aList[i] <= aList[last]:
            swap(aList, i, first)
            first += 1

    swap(aList, first, last)
    return first

def swap(A, x, y):
    A[x], A[y] = A[y], A[x]
SEARCH

BINARY SEARCH

RECURSIVE

def binary(myList, value):
    # base case: empty list means the value is not present
    if len(myList) == 0:
        print("\nNumber not found!")
        return False

    mid = len(myList) // 2

    if myList[mid] == value:
        print("number found")
        return True
    elif myList[mid] > value:
        return binary(myList[:mid], value)    # search the left half
    else:
        return binary(myList[mid+1:], value)  # search the right half

myList = [1,4,5,6,9,10,12,14,16,24,29,30]
binary(myList, 6)

ITERATIVE

def binarySearch(alist, item):
    first = 0
    last = len(alist) - 1
    found = False

    while first <= last and not found:
        midpoint = (first + last) // 2
        if alist[midpoint] == item:
            found = True
        else:
            if item < alist[midpoint]:
                last = midpoint - 1
            else:
                first = midpoint + 1

    return found

testlist = [0, 1, 2, 8, 13, 17, 19, 32, 42]
print(binarySearch(testlist, 3))
