This article is for intermediate users. It introduces the following ideas: Backtracking, Dynamic programming.
Approach #1 (Brute force) [Time Limit Exceeded]
The problem can be modeled as the following optimization problem:

$\min_{x} \sum_{i=0}^{n-1} x_i \quad \text{subject to} \quad \sum_{i=0}^{n-1} x_i \cdot c_i = S$

where $S$ is the amount, $c_i$ is the $i$-th coin denomination, and $x_i$ is the number of coins with denomination $c_i$ used in making change for amount $S$. We can easily see that $0 \le x_i \le \frac{S}{c_i}$.
A trivial solution is to enumerate all subsets of coin frequencies that satisfy the constraints above, compute their sums and return the minimum among them.
To apply this idea, the algorithm uses a backtracking technique to generate all combinations of coin counts $x_i$ in the range $[0, \frac{S}{c_i}]$ which satisfy the constraints above. For each combination it computes the total number of coins $\sum_{i} x_i$ and returns the minimum among them, or $-1$ in case there is no acceptable combination.
- Time complexity : $O(S^n)$. In the worst case, complexity is exponential in the number of coins $n$. The reason is that every coin denomination $c_i$ could take at most $\frac{S}{c_i}$ values. Therefore the number of possible combinations is: $\frac{S}{c_0} \cdot \frac{S}{c_1} \cdots \frac{S}{c_{n-1}} = O(S^n)$.
- Space complexity : $O(n)$. In the worst case the maximum depth of recursion is $n$. Therefore we need $O(n)$ space used by the system recursion stack.
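The brute-force enumeration above can be sketched in Python as follows (the function name `coin_change_brute` and its exact structure are ours, not from the article; the article itself describes the idea in prose):

```python
def coin_change_brute(coins, amount):
    """Enumerate all coin-count combinations x_i in [0, amount // c_i]
    via backtracking; return the fewest coins summing to amount, or -1."""
    def backtrack(idx, remaining):
        if remaining == 0:
            return 0          # exact change made with 0 further coins
        if idx == len(coins):
            return -1         # denominations exhausted, change impossible
        best = -1
        c = coins[idx]
        # try every feasible count of coin c: 0 .. remaining // c
        for count in range(remaining // c + 1):
            rest = backtrack(idx + 1, remaining - count * c)
            if rest != -1 and (best == -1 or count + rest < best):
                best = count + rest
        return best

    return backtrack(0, amount)
```

For example, `coin_change_brute([1, 2, 5], 11)` returns `3` (one 1 plus two 5s), while `coin_change_brute([2], 3)` returns `-1`.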
Approach #2 (Dynamic programming - Top down) [Accepted]
Could we improve the exponential solution above? Definitely! The problem could be solved in polynomial time using the Dynamic programming technique. First, let's define:

- $F(S)$ - minimum number of coins needed to make change for amount $S$ using coin denominations $[c_0, c_1, \ldots, c_{n-1}]$
We note that this problem has an optimal substructure property, which is the key piece in solving any Dynamic Programming problem. In other words, the optimal solution can be constructed from optimal solutions of its subproblems. How do we split the problem into subproblems? Let's assume that we know $F(S)$, the optimal solution for amount $S$, and that the last coin's denomination is $C$. Then the following equation must be true because of the optimal substructure of the problem:

$F(S) = F(S - C) + 1$
But we don't know the denomination $C$ of the last coin. We compute $F(S - c_i)$ for each possible denomination $c_0, c_1, \ldots, c_{n-1}$ and choose the minimum among them. The following recurrence relation holds:

$F(S) = \min_{i = 0 \ldots n-1} F(S - c_i) + 1, \quad \text{subject to } S - c_i \ge 0$

with base cases $F(S) = 0$ when $S = 0$, and $F(S) = -1$ when $n = 0$.
In the recursion tree of this computation, a lot of subproblems are calculated multiple times; for example, the subproblem $F(1)$ is recomputed on many branches. Therefore we should cache the solutions to the subproblems in a table and access them in constant time when necessary.
The idea of the algorithm is to build the solution of the problem from top to bottom. It applies the idea described above: it uses backtracking and prunes the partial solutions in the recursion tree that don't lead to a viable solution. This happens when we try to make change with a coin whose value is greater than the remaining amount $S$. To improve the time complexity, we store the solutions of the already-calculated subproblems in a table.
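A minimal top-down sketch of this idea in Python, using `functools.lru_cache` as the memoization table (the function name `coin_change_topdown` is our choice; the article does not prescribe a particular memoization mechanism):

```python
from functools import lru_cache

def coin_change_topdown(coins, amount):
    """Top-down DP: F(S) = min over denominations c of F(S - c) + 1."""
    @lru_cache(maxsize=None)      # cache: each subproblem solved once
    def F(rem):
        if rem == 0:
            return 0              # base case: amount 0 needs 0 coins
        best = -1
        for c in coins:
            if c <= rem:          # prune coins larger than the remainder
                sub = F(rem - c)
                if sub != -1 and (best == -1 or sub + 1 < best):
                    best = sub + 1
        return best               # -1 if no denomination works

    return F(amount)
```

For example, `coin_change_topdown([1, 2, 5], 11)` returns `3`.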
- Time complexity : $O(S \cdot n)$, where $S$ is the amount and $n$ is the denomination count. In the worst case the recursion tree of the algorithm has height $S$, and the algorithm solves only $S$ subproblems because it caches precalculated solutions in a table. Each subproblem is computed in $n$ iterations, one per coin denomination. Therefore there is $O(S \cdot n)$ time complexity.
- Space complexity : $O(S)$, where $S$ is the amount to change. We use $O(S)$ extra space for the memoization table.
Approach #3 (Dynamic programming - Bottom up) [Accepted]
For the iterative solution, we think in a bottom-up manner. Before calculating $F(i)$, we have to compute all minimum counts for amounts up to $i$. On each iteration of the algorithm, $F(i)$ is computed as

$F(i) = \min_{j = 0 \ldots n-1} F(i - c_j) + 1, \quad \text{for } c_j \le i$
For example, with coin denominations $\{1, 2, 3\}$ you can see that:

$F(3) = \min\{F(3 - 1), F(3 - 2), F(3 - 3)\} + 1 = \min\{F(2), F(1), F(0)\} + 1 = 1$
- Time complexity : $O(S \cdot n)$. On each step the algorithm finds the next $F(i)$ in $n$ iterations, where $1 \le i \le S$. Therefore in total the iterations are $S \cdot n$.
- Space complexity : $O(S)$. We use $O(S)$ extra space for the DP table.
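The bottom-up recurrence can be sketched as follows (the sentinel value `INF = amount + 1` and the name `coin_change_bottomup` are our choices for illustration):

```python
def coin_change_bottomup(coins, amount):
    """Bottom-up DP over amounts 0..amount; F[i] = fewest coins for i."""
    INF = amount + 1              # sentinel: larger than any feasible answer
    F = [0] + [INF] * amount      # base case: F[0] = 0
    for i in range(1, amount + 1):
        for c in coins:
            if c <= i:            # coin c is usable for amount i
                F[i] = min(F[i], F[i - c] + 1)
    return F[amount] if F[amount] != INF else -1
```

For example, `coin_change_bottomup([1, 2, 5], 11)` returns `3`, and `coin_change_bottomup([2], 3)` returns `-1`.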
Analysis written by: @elmirap.