Searching a 2D Sorted Matrix Part II

October 8, 2010 in divide and conquer

Write an efficient algorithm that searches for a value in an n x m table (two-dimensional array). This table is sorted along the rows and columns — that is,

Table[i][j] ≤ Table[i][j + 1],
Table[i][j] ≤ Table[i + 1][j]


Note:
This is Part II of the article: Searching a 2D Sorted Matrix. Please read Part I for more background information.

Step-wise Linear Search:
We call this the Step-wise Linear Search method. Similar to Diagonal Binary Search, we begin at the upper right corner (or the bottom left corner). Instead of moving diagonally at each step, we move one cell to the left or down. For example, the picture below shows the traversed path (the red line) when we search for 13.

Essentially, each step eliminates either a row or a column. In the worst case, we end up at the corner opposite our starting point, which takes at most 2n steps. Therefore, this algorithm runs in O(n) time, which is better than the previous approaches.

An example showing the traversed path using step-wise linear search (colored in red) when the target element is 13.

Below is the code. It is simple and straight to the point, so you should not make any careless mistakes during the interview.
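A minimal C++ sketch of the Step-wise Linear Search (the function name stepWise and the targetRow/targetCol out-parameters are my own assumptions, not necessarily the original listing):

```cpp
#include <vector>
using namespace std;

// Step-wise Linear Search: start at the top-right corner and walk
// left (current value too big) or down (current value too small).
// Each step discards one row or one column, so at most 2n steps.
bool stepWise(const vector<vector<int>>& mat, int target,
              int& targetRow, int& targetCol) {
    if (mat.empty() || mat[0].empty()) return false;
    int row = 0;
    int col = (int)mat[0].size() - 1;   // top-right corner
    while (row < (int)mat.size() && col >= 0) {
        if (mat[row][col] == target) {
            targetRow = row;
            targetCol = col;
            return true;
        } else if (mat[row][col] > target) {
            col--;                      // eliminate this column
        } else {
            row++;                      // eliminate this row
        }
    }
    return false;
}
```

Starting from the bottom left corner works symmetrically: move right when the current value is too small, and up when it is too big.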

This is probably the answer that most interviewers would be looking for. But we will not stop here. Let us continue exploring some more interesting solutions.

Quad Partition:
Did you realize that this problem is actually solvable using a divide and conquer approach? I bet you did!

First, we make an observation: the center element always partitions the matrix into four smaller matrices. For example, the center element 9 partitions the matrix into four matrices as shown in the picture below. Since the four smaller matrices are also sorted both row- and column-wise, the problem naturally divides into four sub-problems.

If you look carefully, we are always able to eliminate one of the four sub-problems in each step. Assume our target is 21, which is greater than the center element 9. We can eliminate the upper left quadrant instantly, because all elements in that quadrant are less than or equal to 9. Now assume our target is 6, which is less than 9.

Similarly, we eliminate the bottom right quadrant from consideration, because all elements in that quadrant must be greater than or equal to 9. Please note, however, that we still need to search the upper right and bottom left quadrants, even though the example below seems to show all elements in those two quadrants being greater than 9.

Of course, if the center element is our target, we have found it and stop searching. If not, we proceed by searching the remaining three quadrants.

The center element 9 partitions the matrix into four smaller quadrants (shown as four different colors).

What’s the complexity of the Quad Partition method? As it turns out, the run time complexity could be written directly as a recurrence relation:

 T(n) = 3T(n/2) + c,

 where n is the dimension of the matrix.

We add a constant c because in each step we do a comparison between the target element and the center element, which takes constant time.

We need to solve the above recurrence to obtain the complexity, and this is where most of the confusion comes in. If you have taken an advanced algorithms course, you could solve it using the Master theorem, but you don't really need to: you can expand the recurrence relation directly.

 T(n) = 3T(n/2) + c
      = 3 [ 3T(n/4) + c ] + c
      = 3 [ 3 [ 3T(n/8) + c ] + c ] + c
      = 3^k T(n/2^k) + c (3^k - 1)/2
      = 3^k ( T(n/2^k) + c/2 ) - c/2


 Setting k = lg n,
 T(n) = 3^(lg n) ( T(1) + c/2 ) - c/2
      = O(3^(lg n))
      = O(n^(lg 3))           <== 3^(lg n) = n^(lg 3)
      = O(n^1.58)

Below is the code for the Quad Partition method. l and u represent the upper left corner, while r and d represent the bottom right corner of the matrix. Be very careful with corner cases. Please note that the code checks for when l equals r (left = right) and u equals d (up = down), i.e., the matrix has only one element. If this only element differs from the target, the function must return false. If you omit this condition, the code never terminates, which in other words translates to: you never double-checked your code, and it is Hasta la vista, baby from your interviewer.
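One way the Quad Partition method might be written in C++, consistent with the description above (a sketch with a simplified signature; the original code, quoted in the comments, also threads the matrix dimensions M and N through the calls):

```cpp
#include <vector>
using namespace std;

// Quad Partition: compare the target with the center element and recurse
// into the three quadrants that may still contain it. (l, u) is the
// upper-left corner of the sub-matrix; (r, d) is the bottom-right corner.
bool quadPart(const vector<vector<int>>& mat, int target,
              int l, int u, int r, int d, int& targetRow, int& targetCol) {
    if (l > r || u > d) return false;        // empty sub-matrix
    int col = l + (r - l) / 2;
    int row = u + (d - u) / 2;
    if (mat[row][col] == target) {
        targetRow = row;
        targetCol = col;
        return true;
    }
    if (l == r && u == d) return false;      // single element, not the target
    if (mat[row][col] < target) {
        // target > center: upper-left quadrant (all <= center) is out
        return quadPart(mat, target, col + 1, u, r, row, targetRow, targetCol) ||    // upper right
               quadPart(mat, target, l, row + 1, col, d, targetRow, targetCol) ||    // bottom left
               quadPart(mat, target, col + 1, row + 1, r, d, targetRow, targetCol);  // bottom right
    } else {
        // target < center: bottom-right quadrant (all >= center) is out
        return quadPart(mat, target, col + 1, u, r, row, targetRow, targetCol) ||    // upper right
               quadPart(mat, target, l, row + 1, col, d, targetRow, targetCol) ||    // bottom left
               quadPart(mat, target, l, u, col, row, targetRow, targetCol);          // upper left
    }
}
```

The single-element check before recursing is exactly the corner case discussed above: without it, the upper-left call in the else branch can repeat the same 1 x 1 sub-problem forever.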

Binary Partition:
We can even reduce the number of sub-problems from three to only two using a method we call Binary Partition. This time we traverse along either the middle row, the middle column, or the diagonal (shown as the highlighted gray cells in images a), b), and c) below). As we traverse, we find the point where the target element s satisfies the following condition:

 a_i < s < a_{i+1},

 where a_i is the i-th traversed cell.

a) Row-wise binary partition. The highlighted gray cells represent the traversed row (the middle row). The target 10 is found between 9 and 16.

b) Column-wise binary partition. The highlighted gray cells represent the traversed column (the middle column). The target 10 is found between 9 and 14.

c) Diagonal-based binary partition. The highlighted gray cells represent the traversed diagonal. The target 10 is found between 9 and 17. Please note that diagonal-based binary partition would fail on a non-square matrix (in the above example, it will not work on the two sub-matrices, because they are non-square).

If the target element equals one of the traversed cells, we immediately return it as found. Otherwise, we partition the matrix into two sub-matrices at the partition point we found. As it turns out, we need cn (linear) time to find this partition point, since we are essentially performing a linear search. Therefore, the complexity can be written as the following recurrence relation (solving it is left as an exercise to the reader):

 T(n) = 2T(n/2) + cn
      = O(n lg n)             

The Binary Partition algorithm runs in O(n lg n) time. You might expect its complexity to be lower than Quad Partition's, since it has only two sub-problems (instead of three) to solve. The reason for the higher-order complexity is the extra O(n) time spent doing a linear search for the partition point where a_i < s < a_{i+1}.

Please note that the matrix is not necessarily divided into two equal-sized sub-matrices. One of them could be bigger than the other, or in the extreme case, one of them could be of size zero. For the sake of the complexity analysis, we have assumed that each partition step divides the matrix into two equal-sized sub-matrices.

Below is the code for the Binary Partition method. The code below chooses the middle column as the place to search for the partition point.
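A C++ sketch of the Binary Partition method using the middle column (the name binPart matches the function referenced in the comments; the exact signature is my assumption). The linear scan down the middle column is the partition-point search described above:

```cpp
#include <vector>
using namespace std;

// Binary Partition: linearly scan the middle column for the partition
// point (the first cell >= target), then recurse into the two
// sub-matrices that may still contain the target.
bool binPart(const vector<vector<int>>& mat, int target,
             int l, int u, int r, int d, int& targetRow, int& targetCol) {
    if (l > r || u > d) return false;            // empty sub-matrix
    int col = l + (r - l) / 2;                   // middle column
    int row = u;
    // Linear search down the middle column for the partition point.
    while (row <= d && mat[row][col] < target) row++;
    if (row <= d && mat[row][col] == target) {
        targetRow = row;
        targetCol = col;
        return true;
    }
    // Now a_{row-1} < target < a_row in this column. Cells above the
    // partition point and left of col are all < target; cells at or
    // below it and right of (or at) col are all > target. Search the
    // upper-right and lower-left sub-matrices.
    return binPart(mat, target, col + 1, u, r, row - 1, targetRow, targetCol) ||
           binPart(mat, target, l, row, col - 1, d, targetRow, targetCol);
}
```

Each recursive call drops at least the middle column, so the recursion always terminates even when one of the two sub-matrices is empty.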

Improved Binary Partition:
Since the partition column (or row, or diagonal) is sorted, not exploiting that sortedness is a waste. In fact, we can modify binary search to find the partition point in lg n time. The complexity can then be expressed as the following recurrence relation (I have omitted some steps; try to work out the math yourself):

 T(n) = 2T(n/2) + c lg n
      = 2^(lg n) T(1) + c ∑_{k=0}^{lg n - 1} 2^k lg(n/2^k)
      = O(n) + c (2n - lg n - 2)
      = O(n)
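The modified binary search for the partition point might look like this (a sketch; partitionPoint and its parameters are my own naming). It returns the index of the first row in the given column whose value is at least the target, in O(lg n) time, and can replace the linear scan in the Binary Partition method:

```cpp
#include <vector>
using namespace std;

// Binary search down a sorted column for the partition point: the first
// row in [u, d] whose value is >= target, or d + 1 if every value is
// smaller. This is the lg n replacement for the linear partition scan.
int partitionPoint(const vector<vector<int>>& mat, int target,
                   int col, int u, int d) {
    int lo = u, hi = d + 1;              // answer lies in [u, d + 1]
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (mat[mid][col] < target) lo = mid + 1;
        else hi = mid;
    }
    return lo;
}
```

If mat[partitionPoint(...)][col] equals the target we are done; otherwise the returned index plays the role of row in the two recursive calls, exactly as in the linear version.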

By incorporating binary search, we reduce the complexity to O(n). However, we have made an assumption: each subdivision must produce sub-matrices of equal size (that is, each partition point falls exactly at the center of the partition column). This leads to the following question:

It is entirely possible that the subdivided matrices are of different sizes. Would the complexity change by an order in this case?

This turns out to be a difficult question to answer, but I can provide further insight by deriving the complexity of the other extreme, that is:

Each subdivision results in only one sub-matrix (i.e., one sub-matrix is the original matrix halved, while the other is empty).

For an example of the above case, try searching for -1 in the sample matrix shown in image (a). Since each subdivision halves the original matrix, there can be at most lg n subdivisions. Assuming each binary search performed before a subdivision takes c lg n time, the complexity can be expressed as follows:

T(n) = c lg n + ... + c lg n    (a total of lg n times)
     = c (lg n)^2
     = O((lg n)^2)

As you can see, the run time complexity of this extreme case is O((lg n)^2), which turns out to be even less than O(n). We conclude that this is not the worst case scenario, as some people might believe.

Please note that the worst case for the Improved Binary Partition method has not been proven here. We have merely shown that one case of Improved Binary Partition runs in O(n). If you know the proof of the worst case, I would be interested to hear from you.

» Continue reading Part III: Searching a 2D Sorted Matrix.


27 responses to Searching a 2D Sorted Matrix Part II

  1. Hi buddy,

    In your extreme case analysis for improved binary partition, you used '-1' as an example. However, it seems that binary search -1 on diagonal will immediately tell you that -1 is not in the matrix.

    How come you can get "one matrix has the original matrix being halved, while the other matrix is empty"?

    Can you explain the meaning "matrix being halved"?

  2. I am using a variation of binary search to search for a partition point, not for the element's position.

    The modified binary search works as this:
    Assume the list is
    {0, 2, 3, 5, 9}

    Searching -1 returns 0 as the partition point.
    Searching 1 returns 1 as the partition point.
    Searching 10 returns 5 as the partition point.

    In other words, we are finding the partition point i+1 such that the key s satisfies:
    a_i < s < a_{i+1}

    If the element is found in the list, then we don't need the partition point, since we are done finding the element in the sorted matrix.

  3. I doubt this part:
    3 [ 3 [ 3T(n/8) + c ] + c ] + c = 3^k * T(n/2^k) + O(1)

    I think it should be 3^k * T(n/2^k) + (3^(k+1)-1)/2*c

  4. Thank you very much for pointing that out.
    I've corrected it, look above for the changes.

  5. You are welcome. BTW, just noticed another thing: O(n lg n) is actually better than O(n^1.58)

  6. You made a very great observation! I need to investigate my code running time why it doesn't reflect this. One speculation is in the quad partition, I use an extra condition

    if (l > r || u > d) return false;

    to bail out early, while my binary search code does not use this. I will update my findings.

    Thanks!

  7. Just to post a follow-up regarding aNiceGuy's observation that O(n lg n) has a lower bound than O(n^1.58), which is true.

    The two algorithms (binary search and diagonal binary search) which run in O(n lg n) actually run significantly longer than the Quad Partition method (O(n^1.58)).

    One way to explain this is that both binary search and diagonal binary search have higher constants than the quad partition. Also, N = 1000 is not large enough, so n^1.58 has not overtaken n lg n yet. But if N is large enough, n^1.58 will eventually grow much faster, while n lg n remains comparatively flat.

  8. Shouldn’t the recurrence relation for the quad partition be 3T(n/4) + c, instead of 3T(n/2) + c, because it breaks the problem size into 4 pieces instead of two?

    • That’s because in each step you eliminate 2 pieces from consideration. Try to read the above post again, it explains clearly why you can eliminate those 2 pieces.

      • I still can't see how this recurrence relation is 3T(n/2) + c.
        Clearly, as you mentioned, “the problem can naturally be divided into four sub-problems. If you notice carefully, we are always able to eliminate one of the four sub-problems in each step.”

        At each step we divide the problem into 4 parts of size n/4, and in the worst case we have to do work on 3 of those parts. I think it should be 3T(n/4) + c, instead of 3T(n/2) + c. Could you clarify please? Thanks

        • I agree with that. The recurrence is

          T(n) = 3 * T(n/4) + c

          which leads to

          T(n) = O(n^log4(3)) = O(n^0.7925)

          which is better than O(n*logn).
          Also, the recurrence is for the worst case (in which you get rid of just one subproblem). However, there might be cases in which you get rid of up to 3 subproblems. In fact, the upper left and lower right corners always represent the minimum and maximum value in the matrix, thus it might save some time to check if the key is within these two before making a recursive call.

          • Here T(n) defines a problem on a matrix with size n x n.
            Similarly T(n/2) should define a problem on a matrix with size n/2 x n/2. when you divide the matrix in 4 parts that gives you 4 matrices with size n/2 x n/2.
            So each sub problem is T(n/2).

      • The two equations are actually equivalent: if n is the size of a square matrix (n*n), then we get O(n^lg2(3)). If m = n*n is the total size of the matrix then we get O(m^lg4(3)) = O(n^lg2(3)).

  9. I could not quite understand the fact that you’ve said that when we search for -1 in the matrix in (a), we will have O((lg(n))^2). I think it would return false because of the statement:

    if (target < mat[u][l] || target > mat[d][r]) return false;

    am I not getting it right?

    BTW, also why is not the Improved Binary Partition Method’s worst case time complexity O((lg(n))^2)???
    Assuming each binary search performed before a subdivision takes c lg n time.

    can somebody explain plz…:)

    Thanks

    • Sorry this is my first comment: and it didn’t come out right…

      I referred to the 3rd line of binPart when I say:

      “I think it would return false because of the statement:

      if (target < mat[u][l] || target > mat[d][r]) return false;”

  10. Hi 1337coder,

    Really appreciate your great article on this!

    I have a minor comment on the quad partitioning:

    In your code it seems you use the same way to partition the matrix whether the target
    is larger than the partition pivot or smaller than it. This is fine, but it leaves a corner case where
    neither row nor col is advanced. I see this case has been handled in your code; I am just wondering
    whether we could code it so that we don't leave this corner case at all (as during interviews it's easy
    to forget about handling corner cases).

    [suggested code]
    if (mat[row][col] > target) {
    return quadPart(mat, M, N, target, col, u, r, row-1, targetRow, targetCol) ||
    quadPart(mat, M, N, target, l, row, col-1, d, targetRow, targetCol) ||
    quadPart(mat, M, N, target, l, u, col-1, row-1, targetRow, targetCol);
    } else {

    [original code]
    if (mat[row][col] == target) {
    targetRow = row;
    targetCol = col;
    return true;
    } else if (l == r && u == d) {
    return false;
    } else if (mat[row][col] < target) {
    return quadPart(mat, M, N, target, col+1, u, r, row, targetRow, targetCol) ||
    quadPart(mat, M, N, target, l, row+1, col, d, targetRow, targetCol) ||
    quadPart(mat, M, N, target, l, u, col, row, targetRow, targetCol); <— this is a corner case
    } else {
    return quadPart(mat, M, N, target, col+1, u, r, row, targetRow, targetCol) ||
    quadPart(mat, M, N, target, l, row+1, col, d, targetRow, targetCol) ||
    quadPart(mat, M, N, target, col+1, row+1, r, d, targetRow, targetCol);
    }
    }

  11. If we do a binary search over the diagonal and find the number which is just greater than the required number, say Q, the required number will definitely be in the column or row of Q, and not beyond Q, which in turn takes two binary searches to trace.
    Hence, we need a total of 3 binary searches, for an overall complexity of O(log N).
    Is there any issue with this approach?

    • The diagonal isn’t necessarily sorted, so we can’t do this approach.

    • Even if the diagonal were also guaranteed to be sorted, this approach would not work.

      Consider the counterexample of searching for 10:

      You would search the row and column containing 17 and not find 10, yet 10 exists in the matrix.

  12. Bad examples led to an incorrect solution.

  13. Proof of worse case, in general

    In the worst case, for any algorithm, you can never do better than O(n). Proof:
    Consider the case where you are searching for k, where x < k < y. Consider the following adversarial matrix
    x * y
    x * y
    x * y
    x * y
    x * y

    That is the diagonal, and we don't care about the rest of the matrix. * is some number between x and y that is not k.

    Any algorithm must look at every *'d position in the matrix, no matter how clever it is: any unexamined * could have been k without violating the sorted order. The number of *s is O(n) [in this example n/2]. Since you must consider each of the stars, your algorithm can never do better than O(n).

    QED.

  14. Matrix didn’t render:

  15. I am not convinced by the deduction of Improved Binary Partition: T(n) = 2T(n/2) + c lg n = O(n)
    T(n) = 2T(n/2) + c lg n = 4T(n/4) + c lg n + 2c lg(n/2)
    So I think it is not c * sum(2 * lg(n/2^k)). It is c * sum(2^k * lg(n/2^k)), isn’t it?
    And the overall complexity sounds like it should be O(n) + O((lg n)^2), which is still O(n).

    • I agree that it should be sum(2^k * lg(n/2^k)), but how does this result in the O((lg n)^2)?

  16. I think in an n x n matrix of this type this can be solved in O(log n) by searching for the element in the middle row (or one of the middle rows, or, if the column is shorter, in the middle column). If the element is bigger than everything in the row, we can drop this row and the rows above it and search the middle column, which is half as long as before. The same argument can be applied if the element is smaller than the first element in the middle row. If we found it, it's done.

    Finally, if there are 2 elements such that the searched one is between them, they are a_{i,j} and a_{i+1,j}, and then we only have to work with the a_{1..(i-1), (j+1)..n} and a_{(i+2)..n, 1..(j-1)} submatrices, the sum of which is half the previous, so the runtime is sum_{i = 0}^{ceil(log n)}(2^i) = 2^(ceil(log n) + 1) - 1, which is O(log n)

  17. I doubt this part:
    T(n) = 2T(n/2) + c lg n
         = 2^(lg n) T(1) + c ∑_{k=1}^{lg n} ( 2 lg (n/2^k) )

    Shouldn’t it be:
    T(n) = 2T(n/2) + c lg n
         = 2^(lg n) T(1) + c ∑_{k=1}^{lg n} ( 2^k lg (n/2^k) )

  18. Sorry for my ignorance.
    But intuitively speaking, regarding the last algorithm – Improved Binary Partition.

    Don’t you get here the same run time no matter what element is being looked for?

    I mean, you always halve the number of elements to be searched, no matter where the binary search in the row/column finished. In your examples this stays true, even when you’re searching for 10.

    This would mean that the worst runtime is always log n * log n.

    Do you have a counter example?

