Algorithms

Wikipedia: Algorithms

Wikipedia: Analysis of Algorithms

Some people compare an algorithm to a recipe: the precise series of steps and combination of ingredients needed to produce a particular dish such as spaghetti bolognese or tarte tatin. There is no single formal definition of an algorithm; it may be enough to say that it provides a precise way of calculating something.

In computing an algorithm provides the details of a solution to a problem such as summing a series of numbers, finding the largest number in a list or sorting a list into ascending order. A key part of an algorithm is the way that it defines the flow of control through its internal processes.

Computer scientists have developed a number of techniques to help with the development and definition of algorithms. It is useful to identify the small number of possible elements in an algorithm: sequence, selection and iteration.

Other techniques include:

Pseudo-Code

Pseudo-code is a semi-formal notation that is midway between English and program code. It includes constructs such as:

If <expression> Then <statements> EndIf

Case x Of
1: <statements>
2: <statements>
EndCase

For i ← 1 to 10 Do <statements> EndFor

While <expression> Do
<statements>
EndWhile

Example:

algorithm Fermat-Test(n)
begin
  Choose a random number a between 2 and n − 1 inclusive
  Compute d = gcd(a, n), the greatest common divisor of a and n
  If d ≠ 1 then return false as the result
  Compute t = a^(n−1) (mod n)
  If t = 1 then return true as the result, otherwise return false
end
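The pseudo-code above can be sketched in Python (the function name and the optional `trials` parameter are my own additions, not part of the pseudo-code):

```python
import random
from math import gcd

def fermat_test(n, trials=1):
    """Probabilistic primality test following the pseudo-code above.
    False means n is certainly composite; True means n is probably prime."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randint(2, n - 1)   # random a between 2 and n - 1 inclusive
        if gcd(a, n) != 1:             # shared factor found: n is composite
            return False
        if pow(a, n - 1, n) != 1:      # Fermat's little theorem fails: composite
            return False
    return True
```

Some composites (the Carmichael numbers) can fool the test for many choices of a, which is why a True result only means "probably prime".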

Use pseudo-code to define solutions to these problems: adding two integers, both in the range 1-99; calculating the average of a series of numbers; outputting the first n triangle numbers; finding the smallest and largest values in a list.

CS4FN: Algorithm Section
Bristol University guide: How Algorithms Shape the World
UC Berkeley CS10 Lecture 6: Algorithms
UC Berkeley CS10: Algorithmic Complexity
UC Berkeley CS10 Lecture 22: Game Theory
Algorithms in Pseudo-code:

Read through the chapter on algorithms from the Bristol University guide. Link. This will show you how to write algorithms in pseudo-code, so you can then tackle these problems.

Write algorithms:

To add two numbers, a and b without knowledge of number bonds e.g. how do you do 2 + 2 without the knowledge that this is 4? (Note that this style of algorithm is similar to the level at which Turing worked in the 1930s: a human knows that 2 + 2 = 4 but a machine does not unless we tell it how to work it out from first principles.)

To subtract one number, b, from another, a, i.e. a − b; again this should be done without prior knowledge that 5 − 3 = 2, etc.

To multiply two numbers, a x b.

To divide two integers and show the quotient and remainder.

To count in increments of n, e.g. for n = 1: 1, 2, 3.

To produce the numbers of the Fibonacci sequence, starting with 0 & 1.

To test whether one number is a factor of another.

To find all the factors of a number.

To find all the prime factors of a number.

To find the greatest common divisor of two numbers.

To test whether a number is prime.

To find perfect numbers.

To perform a bubble sort.

To perform a binary search.
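As a worked example for one of the exercises above, Euclid's algorithm for the greatest common divisor can be sketched in Python (one possible solution; the function name is my own):

```python
def euclid_gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
    when b reaches 0, a holds the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(48, 36))   # 12
```

The same repeated-remainder idea can be written in the pseudo-code style shown earlier using a While loop.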

Algorithm Efficiency

One of the key properties of an algorithm is its efficiency or the speed at which it achieves its task.

Computational Complexity: Time

One of the measures of the computational complexity of an algorithm is time: how long does it take? We deal with three main types of time complexity: linear, polynomial and exponential. We can see these at work in this table:

n     n^2     2^n      n!
1     1       2        1
2     4       4        2
3     9       8        6
4     16      16       24
5     25      32       120
6     36      64       720
7     49      128      5,040
8     64      256      40,320
9     81      512      362,880
10    100     1,024    3,628,800
11    121     2,048
12    144     4,096
13    169     8,192
14    196     16,384

The first column shows how the number of operations will increase if an algorithm runs in linear time. This is what happens inside a single loop, for example when we sum the integers 1..n.

The second column shows growth in polynomial time, in this case n^2. The same principle applies to polynomials such as n^3, n^4 and so on. This is what happens inside a nested loop two levels deep, as in many sorting algorithms.

The third column shows growth in exponential time, in this case 2^n. The same principle applies to exponentials of any base, e.g. 5^n. Many problems have only exponential-time 'brute force' solutions, and a great deal of effort has been invested in finding cleverer algorithms that run more quickly.

The fourth column shows the growth of the factorial function, which increases even more quickly than an exponential.

The time that an algorithm takes to complete an operation depends on the size of the input. It takes longer to sort a list of 1000 names than it does to sort 10 (polynomial); it takes longer to complete a Tower of Hanoi puzzle with 10 rings than it does with 4 (exponential time).

Finding Factors

How would you find the smallest factor of a number? Devise an algorithm to do this. Code the algorithm. Find the smallest factor for each of the first 1000 natural numbers. Note the time taken to find the first 1000, the first 2000, the first 3000 and so on up to 5000. What do you notice about the relationship between N and the time taken? How efficient is your algorithm? Could it be improved?
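A possible starting point for this investigation, sketched in Python (the names and the trial-division approach are my own choices):

```python
import time

def smallest_factor(n):
    """Smallest factor of n greater than 1 (n itself when n is prime).
    Trial division only needs to go up to the square root of n:
    a factor above the square root pairs with one below it."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

# Time the search over the first N natural numbers for growing N,
# as the exercise suggests.
for N in (1000, 2000, 3000, 4000, 5000):
    start = time.perf_counter()
    for n in range(2, N + 1):
        smallest_factor(n)
    print(N, round(time.perf_counter() - start, 4))
```

Plotting N against the time taken should suggest how the algorithm's cost grows, and whether stopping at the square root was worth it.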

Example: The sum of n natural numbers.

What is the sum of the first 10 natural numbers (1 + 2 + ... + 10)? What about the sum of the first 100, or the first 1,000?

One way to solve this problem is to use a loop with a variable, such as: 'for i := 1 to n do sum := sum + i'.

For n = 10 the running totals would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55.

For a given value of N the number of times the calculation is made is also N, so this relationship is linear. Multiplying the work by a fixed factor (0.5, 2 or 10, say) does not change this: the relationship is still linear and the line on a graph is straight. A linear relationship generally means that a computer can find a solution for most values of N. A linear algorithm is described as O(n).

A much better method for finding the sum of the first n natural numbers is the formula n(n + 1)/2. For n = 10 this gives 110/2 = 55; for n = 100 it gives 10,100/2 = 5,050. See here: CS4FN.

This formula finds the answer for almost any value of n in more or less the same time.

program SumRepeatLoop;

{$APPTYPE CONSOLE}

uses
SysUtils, Windows;

Var
Sum, Frequency : Int64;
StartTime, EndTime : Int64;
n, i : Integer;
Ch : Char;
begin
  QueryPerformanceFrequency(Frequency); {Get frequency}
  Repeat
    Sum := 0;
    i := 1;
    Write('Input n: ');
    Readln(n);
    QueryPerformanceCounter(StartTime); {Get initial count}
    Repeat
      Sum := Sum + i;
      i := i + 1;
    Until i > n;
    QueryPerformanceCounter(EndTime);
    Writeln('Sum = ', Sum);
    If Frequency > 0
    Then Writeln('Time taken is ', ((EndTime-StartTime)/Frequency):15:10, ' seconds');
    Write('Another go Y/N? ');
    Readln(Ch);
  Until Ch In ['N','n'];
end.

 

program SumFormula;

{$APPTYPE CONSOLE}

uses
SysUtils, Windows;

Var
Sum, Frequency : Int64;
StartTime, EndTime : Int64;
n, i : Integer;
Ch : Char;
begin
 QueryPerformanceFrequency(Frequency); {Get frequency}
 Repeat
  Write('Input n: ');
  Readln(n);
  QueryPerformanceCounter(StartTime); {Get initial count}
  Sum := n*(n + 1) Div 2;
  QueryPerformanceCounter(EndTime);
  Writeln('Sum = ', Sum);
  If Frequency > 0
  Then Writeln('Time taken is ', ((EndTime-StartTime)/Frequency):15:10, ' seconds');
  Write('Another go Y/N? ');
  Readln(Ch);
 Until Ch In ['N','n'];
end.

The application of Gauss's formula for summing the integers 1..n is an example of an algorithm that runs in constant time; the loop-based version runs in linear time.

Example: Bubble Sort

The bubble sort employs two loops, one nested inside the other. The outer loop does:

for i:= 1 to n-1 do

The inner loop does:

for j:= i + 1 to n do

To sort an array of 10 items the loops make the following comparisons:

for i = 1, j= 2, 3, 4, 5, 6, 7, 8, 9, 10
for i = 2, j= 3, 4, 5, 6, 7, 8, 9, 10
for i = 3, j= 4, 5, 6, 7, 8, 9, 10
for i = 4, j= 5, 6, 7, 8, 9, 10
for i = 5, j= 6, 7, 8, 9, 10
for i = 6, j= 7, 8, 9, 10
for i = 7, j= 8, 9, 10
for i = 8, j= 9, 10
for i = 9, j= 10

The number of comparisons for n = 10 is thus 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 45, or: ((n − 1) × n)/2 = 90/2 = 45. This can be expressed as (n^2 − n)/2. Thus the number of comparisons grows with the square of n, for example:

N=10: comparisons = 45

N=100: comparisons = 4950

N=1,000: comparisons = 499,500

For large values of n the n^2 term dominates the subtracted n, so the relationship is effectively a power of 2. The expression is a polynomial and this is the name given to this type of relationship; its line on a graph slopes steeply upwards. This type of algorithm is described as O(n^2).

Thus a bubble sort runs in polynomial time. What about QuickSort?
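The comparison count described above can be checked with a short Python sketch (names are my own). Note that the i/j loop pattern shown is really an exchange (selection-style) sort rather than the classic adjacent-swap bubble sort, but it makes exactly the (n^2 − n)/2 comparisons discussed:

```python
def exchange_sort(items):
    """Sort a copy of items with the nested i/j loops described above,
    counting the comparisons made."""
    a = list(items)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):          # outer loop
        for j in range(i + 1, n):   # inner loop starts one past i
            comparisons += 1
            if a[j] < a[i]:
                a[i], a[j] = a[j], a[i]
    return a, comparisons

print(exchange_sort([5, 2, 9, 1, 7, 3, 8, 4, 10, 6])[1])   # 45 comparisons for n = 10
```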

Example: Finding the prime factors of a number that is the product of two primes

To find the factors of a number that is known to be the product of two prime numbers we might use brute force:

Think of an algorithm for this.

Use a nested loop for the candidate factors p1 and p2, starting both loops at 2 (we discount 1 as a factor). For each value of p1 in the outer loop, multiply it by each value of p2 in the inner loop and compare the product with n. The outer loop should continue to n div 2 and the inner loop to the square root of n (or vice versa). For example, to find 15 = 3 x 5:

2x2=4; 2x3=6; 2x4=8; 2x5=10; 2x6=12; 2x7=14;

3x2=6; 3x3=9; 3x4=12; 3x5=15

At this point the product is found and we see that it has taken 10 tries.

This algorithm can be improved by using only prime numbers in the multiplication. This would involve either testing each number to see if it was prime or reading the prime numbers from an array or file. In this case the products tested would be:

2x2=4; 2x3=6; 2x5=10; 2x7=14;

3x3=9; 3x5=15

This requires six tries and is more efficient than the brute force method, though time will be spent looking up the primes each time. For prime numbers of the size used in RSA and everyday public-key encryption, these methods would be far too slow. With the increasing speed of computers, the use of 'grid computing' and clever factoring algorithms, the length of keys needed has grown steadily. The sudden usefulness of large prime numbers came as something of a surprise to the mathematicians interested in them.

For 7 x 7 = 49: the outer loop runs from 2 to 7 and the inner loop from 2 to 7, so it takes at most 36 iterations to find a solution. The maximum number of iterations would be (n div 2) x sqrt(n) = 24 x 7 = 168 (or 23 x 6 = 138 when 1 is discounted).

For 7 x 13 = 91: the outer loop runs from 2 to 7 and the inner loop from 2 to 13, so it takes at most 72 iterations to find a solution. The maximum number of iterations would be ((n div 2) − 1) x (int(sqrt(n)) − 1) = 44 x 8 = 352.

What form of algorithmic time does this algorithm use?

Is there a better way? Find the first factor (using mod) and then divide it into n to get the second.
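That improved approach can be sketched in Python (the function name is my own): find the smaller factor by trial division with mod, then divide it into n for the other.

```python
def factor_semiprime(n):
    """Factor n = p * q (p, q prime): find the smaller factor by
    trial division using mod, then divide it into n for the other."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None       # no factor found: n itself is prime

print(factor_semiprime(91))   # (7, 13)
```

This replaces the nested loop with a single loop that stops at the square root of n.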

Example: Linear Search

Given a list of 10 numbers or strings, the average number of comparisons required to find a given value is N/2: over many searches the number of comparisons averages out at about 5. This holds true for any value of N, so the relationship between the input size N and the time taken to execute the algorithm is linear. For example:

N=10: average number of comparisons to find a given value = 10/2 = 5 (or 5.5 if you sum 1..10 and divide by 10)

N=100: average number of comparisons to find a given value = 100/2 = 50 (or 50.5 if you sum 1..100 and divide by 100)

N=1,000: average number of comparisons to find a given value = 1000/2 = 500 (or 500.5 if you sum 1..1000 and divide by 1000)

What form of algorithmic time does a linear search use?

Example: Binary Search

This is a more complex search algorithm but it is more efficient than the linear approach. The maximum number of comparisons needed to find a given value in a sorted list of N items is int(log2 N) + 1. For example:

N=10: maximum number of comparisons to find a given value = int(log2 10) + 1 = 3 + 1 = 4

N=100: maximum number of comparisons to find a given value = int(log2 100) + 1 = 6 + 1 = 7

N=1,000: maximum number of comparisons to find a given value = int(log2 1,000) + 1 = 9 + 1 = 10

1,000 lies between 512 (2^9) and 1,024 (2^10) and the number of comparisons is given by the larger index. It would take a maximum of 10 comparisons to search 513 items and the same number to search 1,023 items. Rule: find the next power of 2 above the number of items and take the index as the maximum number of comparisons required.

Clearly this is more efficient than the linear search and any algorithm whose time increases in a logarithmic relationship to given values of N is more efficient (takes less time) than a linear one.

Verify these figures experimentally. Can you get the program to produce values of more than 10 comparisons for an array of 1,000 data items before the item is found?
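One way to verify the figures is to count comparisons directly; a Python sketch (names are my own):

```python
def binary_search(data, target):
    """Search a sorted list, returning (index, comparisons made),
    or (-1, comparisons) if target is absent. Each step halves the
    remaining range, so at most int(log2(n)) + 1 comparisons occur."""
    low, high = 0, len(data) - 1
    comparisons = 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if data[mid] == target:
            return mid, comparisons
        if data[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, comparisons

worst = max(binary_search(list(range(1000)), t)[1] for t in range(1000))
print(worst)   # 10: never more than int(log2(1000)) + 1 comparisons
```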

The Quicksort method uses a similar approach, partitioning lists into two parts before performing the comparisons, and it is correspondingly faster than the bubble sort method. When you encounter the binary tree data structure you will find an identical relationship between the number of comparisons required to find an item and the size of the tree.

What form of algorithmic time does a binary search use?

Example: The Towers of Hanoi

Investigation shows that the number of moves required for increasing numbers of rings is as follows:

2 rings: 3 moves
3 rings: 7 moves
4 rings: 15 moves
5 rings: 31 moves

The relationship is 2^n − 1, so for 16 rings the number of moves required is 65,535 and for 32 rings it is 4,294,967,295, over 4 billion. This is an exponential relationship and completing the puzzle on a computer for large values of n would take a very long time.
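The standard recursive solution makes exactly 2^n − 1 moves, which a short Python sketch can confirm (names are my own):

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the list of moves that solves the Towers of Hanoi for
    n rings: shift n-1 rings to the spare peg, move the largest ring,
    then shift the n-1 rings on top of it. Total moves: 2**n - 1."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)
        moves.append((source, target))
        hanoi(n - 1, spare, target, source, moves)
    return moves

print(len(hanoi(5)))   # 31 moves, i.e. 2**5 - 1
```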

Example: Cards with a Different Colour on Each Side

Take 9 square cards, each edge of which carries half of a coloured dot. The challenge is to fit the cards together in a 3x3 grid so that the half-dots match along every adjoining edge. This is not difficult for a human to complete but for a computer the only approach may be to use brute force and examine every possible arrangement. The number of arrangements for a 3x3 grid can be calculated as follows:

The first card can be placed in any of the 9 squares and can be rotated 4 times, giving 36 possible locations.

The second card can be placed in any of the 8 remaining squares and can be rotated 4 times, giving 32 possible locations.

The third card can be placed in any of the 7 remaining squares and can be rotated 4 times, giving 28 possible locations...

The ninth card can be placed in one position and can be rotated 4 times, giving 4 possible locations.

Thus we have: (9 x 4) x (8 x 4) x (7 x 4) x (6 x 4) x (5 x 4) x (4 x 4) x (3 x 4) x (2 x 4) x (1 x 4) = 9! x 4^9

This makes 95,126,814,720 positions to check (probably more than you imagined). For a 5x5 grid of cards the number of possibilities is 25! x 4^25. These are seriously large numbers: the growth here is factorial, which is even faster than an exponential such as 2^n, let alone a polynomial such as n^2.
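The arithmetic can be checked directly:

```python
from math import factorial

# Search space for the 3x3 card puzzle: 9! orderings of the cards,
# each card in any of 4 rotations.
positions_3x3 = factorial(9) * 4**9
print(positions_3x3)   # 95126814720

# The 5x5 version is astronomically larger.
positions_5x5 = factorial(25) * 4**25
print(positions_5x5)
```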

Any relationship that is exponential or involves factorials increases the number of operations for a given input very quickly. Problems that grow at this rate are difficult or impossible to solve on computers.

The Tiling Problem

CS4FN

This is an example of an undecidable problem: there is no algorithm, for a human or a computer, that solves it in general.

Noughts and Crosses

The first player has 9 choices of where to place an X. Having placed an X on the grid, the next player has 8 choices remaining, after which the first player has 7 choices, and so on. Hence the number of possible games is, in theory, 9!. In practice the rules of the game restrict the sensible choices to those that block or win a game.

Sudoku

For the top-left 3x3 sub-grid the number of ways of placing the digits is 9!, but for subsequent 3x3 sub-grids this is reduced because the rules of Sudoku state that a digit can appear only once in each row, column and sub-grid.

The Subset Sum Problem

Wikipedia

The problem here is to decide whether some subset of integers taken from a larger set sums to zero (or to some other target value). It is easy to verify that a given subset sums to the required value, which places the problem in NP; but finding such a subset appears to be hard: the known solutions take an exponential amount of time, so the problem is not known to be in P.

The number of non-empty subsets that could sum to zero is 2^n − 1. For the set {a, b, c, d, e} this gives 2^5 − 1 = 31 subsets:

1 a a
2 b b
3 c c
4 d d
5 e e
6 a+b ab
7 a+c ac
8 a+d ad
9 a+e ae
10 b+c bc
11 b+d bd
12 b+e be
13 c+d cd
14 c+e ce
15 d+e de
16 a+b+c abc
17 a+b+d abd
18 a+b+e abe
19 a+c+d acd
20 a+c+e ace
21 a+d+e ade
22 b+c+d bcd
23 b+c+e bce
24 b+d+e bde
25 c+d+e cde
26 a+b+c+d abcd
27 a+b+c+e abce
28 a+b+d+e abde
29 a+c+d+e acde
30 b+c+d+e bcde
31 a+b+c+d+e abcde
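A brute-force check of all 2^n − 1 non-empty subsets can be sketched in Python (names are my own):

```python
from itertools import combinations

def subset_summing_to(values, target=0):
    """Brute-force subset sum: test all 2**n - 1 non-empty subsets.
    Checking one candidate subset is quick (sum and compare), but in
    the worst case exponentially many subsets must be examined."""
    tried = 0
    for size in range(1, len(values) + 1):
        for subset in combinations(values, size):
            tried += 1
            if sum(subset) == target:
                return subset, tried
    return None, tried

print(subset_summing_to([3, -1, 8, -7, 2]))
```

This contrast between quick verification and slow search is exactly what places the problem in NP but, as far as anyone knows, not in P.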

 

The Travelling Salesman Problem

Wikipedia

This problem can be stated simply but finding a solution by brute force is difficult or impossible because of the steep rise in the number of choices for large values of n. The problem involves finding the shortest path between a network of cities that ends at the same place where it began and visits each city only once. The simplest scenario is one where all cities are connected in a complete graph, that is all vertices are connected by edges so the choice of movement from one city to the next is unrestricted.

Starting at the first city and with n cities in the graph, the number of choices of cities to visit is n-1, the number of choices from the second city is n-2 and so on. This is (n-1) x (n-2) x (n-3)... or (n-1)!. As we have seen, a function with a factorial in it will grow big very quickly. As each edge may be traversed in either direction (the graph is assumed to be undirected, so travel in each direction costs the same) the number of choices is (n-1)!/2. Four vertices thus gives 3 choices, five vertices gives 12 choices and so on. Ten vertices or cities in a complete graph produces 181,440 choices (9!/2).
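A brute-force version can be sketched in Python (names and the example distance matrix are my own): fix one city as the start and try the (n − 1)! orderings of the rest.

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force Travelling Salesman on a complete graph.
    dist[i][j] is the distance between cities i and j. Fix city 0 as
    the start and try every ordering of the remaining n-1 cities."""
    n = len(dist)
    best_tour, best_len = None, None
    for rest in permutations(range(1, n)):
        tour = (0,) + rest + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if best_len is None or length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

# Four cities: only (4-1)! = 6 orderings (3 distinct routes up to direction).
dist = [[0, 1, 4, 2],
        [1, 0, 1, 5],
        [4, 1, 0, 1],
        [2, 5, 1, 0]]
print(shortest_tour(dist))   # ((0, 1, 2, 3, 0), 5)
```

For ten cities this loop already runs 362,880 times; the factorial growth soon makes the approach hopeless.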

Example: School Timetables

Wikipedia

Decidable but Intractable

Some jig-saw puzzles are particularly difficult, even for a person: imagine a simple image such as a large pile of footballs cut into a thousand pieces: quite tough to complete. Then imagine the same image printed on the reverse of the pieces, rotated through 90 degrees and with the sides of the pieces indistinguishable: pretty tough! Imagine the possible permutations that a computer would run through using brute force. (What is the point of such puzzles?)

The monkey card problem and the jig-saw puzzle are decidable in the sense that we can see how to arrive at a solution, but they are intractable in that for large inputs it would take too long to find one. Such problems are called NP-complete: they are non-deterministic polynomial-time problems, and 'complete' in the sense that a fast (polynomial-time) solution to any one of them would provide a fast solution to all of them.

Big O Notation

Link: CodeCademy (Big O)

See Wikipedia.

Big O notation was devised to measure and compare the growth rate of mathematical functions. It is used in computer science to measure the use of a computer's resources by a particular algorithm, in terms of time or memory.

For a function with a constant use of resources regardless of the input value we write f(x) = O(1); for example, Gauss's formula for finding the sum of n natural numbers.

In a polynomial function such as f(x) = 4x^3 + x^2 + 5 the dominating term in its growth rate is x^3, so we say f(x) = O(x^3). The coefficients and lower-order terms have only minimal influence for large values of x.

For the bubble sort, big O is n^2. As we have seen, a list of 100 items will take (100^2 − 100)/2 = 4,950 comparisons while a list of 1,000 items will take (1,000^2 − 1,000)/2 = 499,500 comparisons.

For a linear relationship we might have f(x) = 2x or f(x) = 0.5x; constant factors are ignored in big O notation, so both are simply O(x).

Putting some of the above algorithms in order, from the slowest rate of growth to the fastest: O(1) (Gauss's formula), O(log n) (binary search), O(n) (linear search), O(n^2) (bubble sort), O(2^n) (Towers of Hanoi), O(n!) (Travelling Salesman).

Algorithmic Programming

Programming methods that involve algorithms include structured or modular programming and object-oriented programming. We will concentrate here on the structured, modular approach (also known as top-down programming).

The Pascal programming language provides the tools and techniques required to produce structured programs. These include:

Pascal programs are themselves structured into sections defined by:

The aim of those who developed structured programming, in the 1960s and 1970s, was to make programming more methodical, scientific, accurate and less prone to errors. This paradigm worked well for a while, but computer hardware got faster and cheaper and projects got bigger and more complex, so computer scientists had to come up with something new: object-oriented programming. Man went to the moon on structured programming, but a simple software error on the Ariane 5 rocket showed that even the most expensive systems can fail and software is not foolproof.

Structured programming still provides an effective way to learn programming and it provides access to standard algorithms and problem solving techniques. Algorithms include: searching, sorting, file processing and manipulating data structures such as stacks, queues and trees.

Tutorial 1: Fibonacci, Prime Numbers, Denary-Binary, Decimal-Roman, Reverse a list

Tutorial 2: Binary search, Bubble Sort, Insertion Sort

Tutorial 3: Factorials, GCD (Euclid's Algorithm), Towers of Hanoi, Quicksort

Genetic Algorithms

Wikipedia:

A genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. (Wikipedia)

In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses some mechanisms inspired by biological evolution: reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the environment within which the solutions "live" (see also cost function). Evolution of the population then takes place after the repeated application of the above operators. Artificial evolution (AE) describes a process involving individual evolutionary algorithms; EAs are individual components that participate in an AE.

The basic idea is that an algorithm can evolve. (An algorithm, remember, is a series of steps by which an input is transformed into an output.)

In a genetic algorithm the desired output is a solution to a problem. For a cleaning robot, for example, the desired output is a control program that allows the robot to clean the house or office effectively.

The input to the algorithm has two parts: a population of candidate programs and a fitness function that takes a candidate program and assigns a fitness value that measures how well the program performs.
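These ideas can be sketched minimally in Python using the toy 'OneMax' problem (evolve a bit string towards all 1s); all names and parameter values here are illustrative choices, not from the text:

```python
import random

def genetic_algorithm(bits=20, pop_size=30, generations=100,
                      mutation_rate=0.02, seed=1):
    """Tiny genetic algorithm for the 'OneMax' toy problem: evolve a
    bit string towards all 1s. Fitness = number of 1 bits."""
    rng = random.Random(seed)
    fitness = sum                       # fitness function: count the 1 bits
    pop = [[rng.randint(0, 1) for _ in range(bits)]
           for _ in range(pop_size)]    # initial population of candidates
    for _ in range(generations):
        def select():                   # tournament selection, size 2
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            mum, dad = select(), select()
            cut = rng.randrange(1, bits)             # single-point crossover
            child = mum[:cut] + dad[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]               # point mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best), "of", len(best), "bits set")
```

The same loop of selection, crossover and mutation applies whatever the candidates represent; for the cleaning robot the individuals would be control programs and the fitness function a measure of how well each one cleans.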

Genetic Algorithms