### Algorithm Series 1

### Dynamic Programming

**What is Dynamic Programming?**

Dynamic programming is a technique for efficiently implementing a recursive algorithm: it solves each subproblem only once, saves the answer in a table, and thereby avoids recomputing the answer every time the same subproblem is encountered. It is essentially a trade-off of space for time.

**Procedures**

1. Characterize the structure of an optimal solution.

-- Find the optimal substructure, then use it to construct an optimal solution to the problem from optimal solutions to subproblems.

2. Recursively define the value of an optimal solution.

3. Compute the value of an optimal solution in a bottom-up fashion.

-- Specify an order of evaluation for the recurrence so that the partial results you need are always available when you need them.

-- Figure out what auxiliary information needs to be stored in order to reconstruct the optimal solution.

-- Ask: what information about the first i−1 subproblems would help you solve the ith subproblem?

4. Construct an optimal solution from computed information.

-- The solution is usually computed/printed recursively.

-- Walking backward through the table reconstructs the solution in reverse order, which is why recursion is usually used to print it in the correct order.

Saving space in dynamic programming matters: in practice, O(nm) space often proves more of a bottleneck than O(nm) time.

**When to use Dynamic Programming?**

1. Optimal substructure

-- An optimal solution to the problem contains within it optimal solutions to subproblems.

2. Overlapping subproblems (and a limited number of distinct subproblems)

Dynamic programming is generally the right method for optimization problems on combinatorial objects that have an inherent left-to-right order among components. Left-to-right objects include: character strings, rooted trees, polygons, and integer sequences.

There are two ways:

**1. Memoization (top-down Dynamic Programming strategy)**

Start solving the given problem by breaking it down. If a subproblem has already been solved, just return the saved answer; if not, solve it and save the answer. This is usually easy to think of and very intuitive.

The implementation looks like the plain recursive version, with added logic to store each subproblem's answer and to look it up before making the recursive call.

**When to use Memoization?**

If some subproblems in the subproblem space need not be solved at all, the memoized solution has the advantage of solving only those subproblems that are definitely required.
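As a minimal sketch of the top-down pattern (the `memo` array and method names are illustrative), a memoized Fibonacci:

```java
// Memoized (top-down) Fibonacci: a minimal sketch of the memoization pattern.
static long[] memo;

static long fibMemo(int n) {
    if (n <= 1) {
        return n; // base cases: fib(0) = 0, fib(1) = 1
    }
    if (memo[n] != 0) {
        return memo[n]; // subproblem already solved: just return the saved answer
    }
    memo[n] = fibMemo(n - 1) + fibMemo(n - 2); // solve it and save the answer
    return memo[n];
}

static long fib(int n) {
    memo = new long[n + 1];
    return fibMemo(n);
}
```

The lookup before the recursive calls is what turns the exponential recursion into O(n) work.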

**2. Bottom-Up**

Analyze the problem, determine the order in which the subproblems should be solved, and start from the trivial subproblems, working up towards the given problem. This guarantees that every subproblem is solved before any problem that depends on it.

**Examples**

Fibonacci Numbers

Solution: O(n) time, retaining only the last two values (O(1) space)
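A minimal sketch of that solution (the method name is an assumption):

```java
// Bottom-up Fibonacci in O(n) time, retaining only the last two values (O(1) space).
static long fibBottomUp(int n) {
    if (n <= 1) {
        return n; // fib(0) = 0, fib(1) = 1
    }
    long prev = 0, curr = 1;
    for (int i = 2; i <= n; i++) {
        long next = prev + curr; // fib(i) = fib(i-1) + fib(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}
```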

Towers of Hanoi

Key point - `int intermediatePole = 6 - startPole - endPole;` (the poles are numbered 1, 2, 3, so 6 minus the two named poles gives the third).

```
public static void move(int n, int startPole, int endPole) {
    if (n == 0) {
        return;
    }
    int intermediatePole = 6 - startPole - endPole;
    move(n - 1, startPole, intermediatePole);
    System.out.println("Move " + n + " from " + startPole + " to " + endPole);
    move(n - 1, intermediatePole, endPole);
}
```

Binomial Coefficients 二项式系数

C(n,k) = n!/((n−k)!k!) = C(n-1,k-1) + C(n-1,k)

Base cases: bc[i][0] = 1; bc[i][i] = 1;
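A bottom-up sketch filling the table row by row (the method name is an assumption):

```java
// Build Pascal's triangle bottom-up using C(i,j) = C(i-1,j-1) + C(i-1,j).
static long binomial(int n, int k) {
    long[][] bc = new long[n + 1][n + 1];
    for (int i = 0; i <= n; i++) {
        bc[i][0] = 1; // base case: C(i, 0) = 1
        bc[i][i] = 1; // base case: C(i, i) = 1
        for (int j = 1; j < i; j++) {
            bc[i][j] = bc[i - 1][j - 1] + bc[i - 1][j];
        }
    }
    return bc[n][k];
}
```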

**Edit Distance**

D[i, j] = min( D[i-1, j-1] + (if S[i] = T[j] then 0 else 1),

D[i, j-1] + 1, D[i-1, j] + 1 )
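A bottom-up sketch of this recurrence (the table is 1-indexed over prefixes of S and T; names are assumptions):

```java
// Edit distance between s and t, filling the table D row by row.
static int editDistance(String s, String t) {
    int m = s.length(), n = t.length();
    int[][] d = new int[m + 1][n + 1];
    for (int i = 0; i <= m; i++) d[i][0] = i; // delete all of s[1..i]
    for (int j = 0; j <= n; j++) d[0][j] = j; // insert all of t[1..j]
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            int match  = d[i - 1][j - 1] + (s.charAt(i - 1) == t.charAt(j - 1) ? 0 : 1);
            int insert = d[i][j - 1] + 1;
            int delete = d[i - 1][j] + 1;
            d[i][j] = Math.min(match, Math.min(insert, delete));
        }
    }
    return d[m][n];
}
```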

**Longest Increasing Sequence**

The longest increasing sequence containing the nth number is formed by appending it to the longest increasing sequence to the left of position n that ends on a number smaller than the nth number.

O(n²):

```
l[j] = max{l[i]} + 1, where (i < j and A[i] < A[j])
```
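A direct translation of that recurrence (the method name is an assumption):

```java
// Length of the longest increasing subsequence:
// l[j] = 1 + max l[i] over all i < j with a[i] < a[j].
static int lisLength(int[] a) {
    if (a.length == 0) return 0;
    int[] l = new int[a.length];
    int best = 0;
    for (int j = 0; j < a.length; j++) {
        l[j] = 1; // the subsequence consisting of a[j] alone
        for (int i = 0; i < j; i++) {
            if (a[i] < a[j] && l[i] + 1 > l[j]) {
                l[j] = l[i] + 1;
            }
        }
        best = Math.max(best, l[j]);
    }
    return best;
}
```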

`O(n lg n):` maintain the smallest possible tail element of an increasing subsequence of each length, extending it with binary search.

**Longest common subsequence O(mn)**

`if x[i]=y[j], c[i,j] = c[i-1,j-1] +1`

`else c[i,j] = max{c[i-1,j], c[i,j-1]}`

`http://en.wikibooks.org/wiki/Algorithm_implementation/Strings/Longest_common_subsequence#Java`
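A bottom-up sketch of the length computation (names are assumptions):

```java
// Length of the longest common subsequence of x and y.
static int lcsLength(String x, String y) {
    int m = x.length(), n = y.length();
    int[][] c = new int[m + 1][n + 1]; // row/column 0 stay 0 (empty prefix)
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (x.charAt(i - 1) == y.charAt(j - 1)) {
                c[i][j] = c[i - 1][j - 1] + 1;
            } else {
                c[i][j] = Math.max(c[i - 1][j], c[i][j - 1]);
            }
        }
    }
    return c[m][n];
}
```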

**Maximum contiguous subsequence sum**

1. O(n)

```
int maxSum = 0, thisSum = 0, seqStart = 0, seqEnd = -1;
for( int i = 0, j = 0; j < a.length; j++ )
{
    thisSum += a[ j ];
    if( thisSum > maxSum )
    {
        maxSum = thisSum;
        seqStart = i;
        seqEnd = j;
    }
    else if( thisSum < 0 )
    {
        // A negative running sum can never start a maximum subsequence.
        i = j + 1;
        thisSum = 0;
    }
}
```

2. Divide-and-conquer solution - O(n log n)

return max3( maxLeftSum, maxRightSum, maxLeftBorderSum + maxRightBorderSum );
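A self-contained sketch of that divide-and-conquer version, with the combine step written out (names are assumptions):

```java
// Divide-and-conquer maximum contiguous subsequence sum, O(n log n).
static int maxSubSum(int[] a, int left, int right) {
    if (left == right) {
        return Math.max(a[left], 0); // the empty subsequence has sum 0
    }
    int center = (left + right) / 2;
    int maxLeftSum = maxSubSum(a, left, center);
    int maxRightSum = maxSubSum(a, center + 1, right);

    // Best sum ending exactly at the center, extending leftward.
    int maxLeftBorderSum = 0, leftBorderSum = 0;
    for (int i = center; i >= left; i--) {
        leftBorderSum += a[i];
        maxLeftBorderSum = Math.max(maxLeftBorderSum, leftBorderSum);
    }
    // Best sum starting just right of the center, extending rightward.
    int maxRightBorderSum = 0, rightBorderSum = 0;
    for (int i = center + 1; i <= right; i++) {
        rightBorderSum += a[i];
        maxRightBorderSum = Math.max(maxRightBorderSum, rightBorderSum);
    }
    // Combine: the best subsequence lies entirely left, entirely right, or spans the center.
    return Math.max(Math.max(maxLeftSum, maxRightSum),
                    maxLeftBorderSum + maxRightBorderSum);
}
```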

**Matrix-chain multiplication O(n³)**

`Let P(n) be the number of ways to parenthesize n matrices.`

`P(n) = Sum{P(k)P(n − k)} where k goes from 1 to n-1, if n ≥ 2; P(1) = 1`

`Let Ai be a p(i−1) × pi matrix.`

`Let m[i, j] be the minimum cost of computing the product Ai..Aj.`

```
m[i, j] = min{m[i, k] + m[k + 1, j] + p(i−1)pkpj} where i ≤ k < j, if i < j; m[i, i] = 0
```

`Auxiliary information:`

`s[i, j] is a value of k at which we can split the product Ai..Aj to obtain an optimal parenthesization; m[1, n] holds the minimum total cost.`
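A sketch of the bottom-up table fill (the dimension array p makes Ai a p[i−1] × p[i] matrix; names are assumptions):

```java
// Minimum scalar-multiplication cost for the chain A1..An, where Ai is p[i-1] x p[i].
static int matrixChainOrder(int[] p) {
    int n = p.length - 1;              // number of matrices
    int[][] m = new int[n + 1][n + 1]; // m[i][i] = 0: a single matrix costs nothing
    for (int len = 2; len <= n; len++) {         // chain length
        for (int i = 1; i <= n - len + 1; i++) {
            int j = i + len - 1;
            m[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++) {        // try each split point
                int cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (cost < m[i][j]) {
                    m[i][j] = cost; // (record k in s[i][j] to rebuild the parenthesization)
                }
            }
        }
    }
    return m[1][n];
}
```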

**Optimal binary search tree**

`e[i, j] is the expected cost of searching an optimal binary search tree containing the keys ki, ..., kj; w(i, j) is the total search probability of those keys.`

```
e[i, j] = min(i ≤ r ≤ j){e[i, r - 1] + e[r + 1, j] + w(i, j)}
```
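A simplified sketch, assuming only successful searches so that w(i, j) is just the sum of the key probabilities p[i..j] (names are assumptions):

```java
// Expected search cost of an optimal BST over keys 1..n with probabilities p[1..n].
static double optimalBST(double[] p) {
    int n = p.length - 1;                    // p[0] is unused; keys are 1..n
    double[][] e = new double[n + 2][n + 1]; // e[i][i-1] = 0: empty subtree
    double[][] w = new double[n + 2][n + 1];
    for (int len = 1; len <= n; len++) {
        for (int i = 1; i <= n - len + 1; i++) {
            int j = i + len - 1;
            w[i][j] = w[i][j - 1] + p[j];    // w(i, j) = p[i] + ... + p[j]
            e[i][j] = Double.MAX_VALUE;
            for (int r = i; r <= j; r++) {   // try each key kr as the root
                double t = e[i][r - 1] + e[r + 1][j] + w[i][j];
                if (t < e[i][j]) {
                    e[i][j] = t;
                }
            }
        }
    }
    return e[1][n];
}
```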

**Knapsack**

`Vk(i) = the highest total value that can be achieved from item types k through N, assuming that the knapsack has a remaining capacity of i.`
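A sketch matching this formulation, reading "item types" as allowing repeated use of a type (names are assumptions):

```java
// v[k][i] = highest value achievable from item types k..N with remaining capacity i.
static int knapsack(int[] weight, int[] value, int capacity) {
    int n = weight.length;                    // item types 0..n-1
    int[][] v = new int[n + 1][capacity + 1]; // v[n][*] = 0: no types left
    for (int k = n - 1; k >= 0; k--) {
        for (int i = 0; i <= capacity; i++) {
            v[k][i] = v[k + 1][i]; // skip type k entirely
            if (weight[k] <= i) {
                // take one more of type k (same row k: the type may be reused)
                v[k][i] = Math.max(v[k][i], v[k][i - weight[k]] + value[k]);
            }
        }
    }
    return v[0][capacity];
}
```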

**Coin Changing Problem**

`Let C[p] be the minimum number of coins of denominations d1, d2, . . . , dk needed to make change for p cents.`

`C[p] = min(i: di ≤ p){1 + C[p − di]} if p > 0; C[0] = 0`
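A bottom-up sketch, assuming the denominations include a 1-cent coin so every amount is reachable (names are assumptions):

```java
// C[p] = minimum number of coins of denominations d[] needed to make change for p cents.
static int minCoins(int[] d, int amount) {
    int[] c = new int[amount + 1]; // c[0] = 0: no coins needed for 0 cents
    for (int p = 1; p <= amount; p++) {
        c[p] = Integer.MAX_VALUE;
        for (int coin : d) {
            if (coin <= p && c[p - coin] != Integer.MAX_VALUE) {
                c[p] = Math.min(c[p], 1 + c[p - coin]); // use one coin, solve the rest
            }
        }
    }
    return c[amount];
}
```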

**Minimum Steps to One**

On a positive integer, you can perform any one of the following 3 steps.

1.) Subtract 1 from it. (n = n - 1)

2.) If it’s divisible by 2, divide by 2.

3.) If it’s divisible by 3, divide by 3.

Now the question is, given a positive integer n, find the minimum number of steps that takes n to 1.

F(n) = 1 + min{ F(n-1), F(n/2) if 2 divides n, F(n/3) if 3 divides n } for n > 1; F(1) = 0.
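A bottom-up sketch of this recurrence (the method name is an assumption):

```java
// F(n) = minimum number of steps to reduce n to 1 using n-1, n/2, n/3.
static int minStepsToOne(int n) {
    int[] f = new int[n + 1];
    f[1] = 0; // already 1: no steps needed
    for (int i = 2; i <= n; i++) {
        f[i] = 1 + f[i - 1];                                 // subtract 1
        if (i % 2 == 0) f[i] = Math.min(f[i], 1 + f[i / 2]); // divide by 2
        if (i % 3 == 0) f[i] = Math.min(f[i], 1 + f[i / 3]); // divide by 3
    }
    return f[n];
}
```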

**Permutations of a String**

Print all possible orderings of the characters in a string. Treat each character in the input string as a distinct character, even if it is repeated.

Solution-1:

```
static void doPermute(char[] in, StringBuffer out, boolean[] used, int level) {
    if (level == in.length) {
        System.out.println(out.toString());
        return;
    }
    for (int i = 0; i < in.length; ++i) {
        if (used[i]) continue;
        out.append(in[i]);
        used[i] = true;
        doPermute(in, out, used, level + 1);
        used[i] = false;                 // backtrack
        out.setLength(out.length() - 1);
    }
}
```

Solution-2:

```
public static ArrayList<String> permute2(String s) {
    ArrayList<String> permutations = new ArrayList<String>();
    if (s == null) { // error case
        return null;
    } else if (s.length() == 0) { // base case
        permutations.add("");
        return permutations;
    }
    char first = s.charAt(0);          // get the first character
    String remainder = s.substring(1); // remove the first character
    ArrayList<String> words = permute2(remainder);
    for (String word : words) {
        // insert the first character at every position of every shorter permutation
        for (int j = 0; j <= word.length(); j++) {
            permutations.add(insertCharAt(word, first, j));
        }
    }
    return permutations;
}

private static String insertCharAt(String word, char c, int i) {
    String start = word.substring(0, i);
    String end = word.substring(i);
    return start + c + end;
}
```

**Combinations of a String**

Print all possible combinations of the characters in a string. Two combinations that differ only in ordering of their characters are the same combination.

`Solution-1:`

```
public static void combine(String str) {
    StringBuilder outstr = new StringBuilder();
    doCombine(str, outstr, 0, 0);
}

private static void doCombine(String inStr, StringBuilder outStr, int level, int start) {
    for (int i = start; i < inStr.length(); i++) {
        outStr.append(inStr.charAt(i));
        System.out.println(outStr);
        if (i < inStr.length() - 1) {
            doCombine(inStr, outStr, level + 1, i + 1);
        }
        outStr.setLength(outStr.length() - 1); // backtrack
    }
}
```

`Solution-2:`

```
public static ArrayList<String> combine2(String s) {
    ArrayList<String> combines = new ArrayList<String>();
    if (s == null) { // error case
        return null;
    } else if (s.length() == 0) { // base case: the empty combination
        combines.add(s);
        return combines;
    }
    char first = s.charAt(0);          // get the first character
    String remainder = s.substring(1); // remove the first character
    ArrayList<String> words = combine2(remainder);
    for (String word : words) {
        combines.add(word);         // combinations without the first character
        combines.add(word + first); // combinations with it
    }
    return combines;
}
```

**Related algorithms**

`Divide-and-conquer algorithms`

`-- They partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.`

`Recursive algorithm (top-down strategy)`

`Greedy algorithms (top-down strategy)`

`-- They make the best local decision at each step; this is typically efficient but usually does not guarantee global optimality.`

`Instead of first finding optimal solutions to subproblems and then making a choice, greedy algorithms first make a choice—the choice that looks best at the time—and then solve a resulting subproblem. `

`Exhaustive search algorithms that try all possibilities and select the best always produce the optimum result, but usually at a prohibitive cost in terms of time complexity.`

Resources: