## Searching a 2D Sorted Matrix Part II

October 8, 2010 in divide and conquer

Write an efficient algorithm that searches for a value in an *n* × *m* table (two-dimensional array). This table is sorted along the rows and columns — that is,

Table[i][j] ≤ Table[i][j + 1],

Table[i][j] ≤ Table[i + 1][j]

**Note:**

This is Part II of the article: Searching a 2D Sorted Matrix. Please read Part I for more background information.

**Step-wise Linear Search:**

We call this the *Step-wise Linear Search* method. Similar to *Diagonal Binary Search*, we begin at the upper right corner (or the bottom left corner). Instead of traversing diagonally, at each step we move one cell to the left or one cell down. For example, the picture below shows the traversed path (the red line) when we search for **13**.

Essentially, each step eliminates either a row or a column. The worst case scenario is when the path ends up in the opposite corner of the matrix, which takes at most 2*n* steps. Therefore, this algorithm runs in O(*n*) time, which is better than the previous approaches.


Below is the code; it is simple and straight to the point, so you should not make any careless mistakes with it during an interview.

```cpp
bool stepWise(int mat[][N_MAX], int N, int target, int &row, int &col) {
   if (target < mat[0][0] || target > mat[N-1][N-1]) return false;
   row = 0;
   col = N-1;
   while (row <= N-1 && col >= 0) {
      if (mat[row][col] < target)
         row++;
      else if (mat[row][col] > target)
         col--;
      else
         return true;
   }
   return false;
}
```

This is probably the answer that most interviewers would be looking for. But we will not stop here. Let us continue exploring some more interesting solutions.

**Quad Partition:**

Did you realize that this problem is actually solvable using a divide and conquer approach? I bet you did!

First, we observe that the center element always partitions the matrix into four smaller matrices. For example, the center element **9** partitions the matrix into four matrices as shown in the picture below. Since the four smaller matrices are also sorted both row- and column-wise, the problem naturally divides into four sub-problems.

If you look carefully, we are always able to eliminate one of the four sub-problems in each step. Assume our target is **21**, which is greater than the center element **9**. We can eliminate the upper left quadrant instantly, because all elements in that quadrant are less than or equal to **9**. Now assume our target is **6**, which is less than **9**.

Similarly, we eliminate the bottom right quadrant from consideration, because all elements in that quadrant must be greater than or equal to **9**. Note, however, that we still need to search the upper right and bottom left quadrants, even though the example below seems to show that all elements in those two quadrants are greater than **9**.

Of course, if the center element is our target, we have found it and stop searching. If not, we proceed by searching the remaining three quadrants.

*The center element **9** partitions the matrix into four smaller quadrants (shown as four different colors).*

What's the complexity of the *Quad Partition* method? As it turns out, the run time complexity can be written directly as a recurrence relation:

T(n) = 3T(n/2) + c, where n is the dimension of the matrix.

We add a constant *c* because at each step we compare the target element with the center element, which takes constant time.

We need to solve the above recurrence to obtain the complexity, and this is where most of the confusion comes in. If you have taken an advanced algorithms course, you could solve it using the Master theorem, but you *don't really need to*: you can just expand the recurrence relation directly.

```
T(n) = 3T(n/2) + c
     = 3[3T(n/4) + c] + c
     = 3[3[3T(n/8) + c] + c] + c
     = 3^k T(n/2^k) + c(3^k - 1)/2
     = 3^k (T(n/2^k) + c/2) - c/2

Setting k = lg n:

T(n) = 3^(lg n) (T(1) + c/2) - c/2
     = O(3^(lg n))
     = O(n^(lg 3))       <== since 3^(lg n) = n^(lg 3)
     = O(n^1.58)
```

Below is the code for the *Quad Partition* method. *l* and *u* represent the upper left corner, while *r* and *d* represent the bottom right corner of the matrix. Be very careful with corner cases. Note that the code below checks for when *l* equals *r* (left = right) and *u* equals *d* (up = down), i.e., the matrix has only one element. If this only element differs from the target, the function must return false. If you omit this condition, the code below never terminates, which in other words translates to: *you never double-checked your code*, and it is hasta la vista, baby, from your interviewer.

```cpp
bool quadPart(int mat[][N_MAX], int M, int N, int target, int l, int u, int r, int d,
              int &targetRow, int &targetCol) {
   if (l > r || u > d) return false;
   if (target < mat[u][l] || target > mat[d][r]) return false;
   int col = l + (r-l)/2;
   int row = u + (d-u)/2;
   if (mat[row][col] == target) {
      targetRow = row;
      targetCol = col;
      return true;
   } else if (l == r && u == d) {
      return false;
   }
   if (mat[row][col] > target) {
      return quadPart(mat, M, N, target, col+1, u, r, row, targetRow, targetCol) ||
             quadPart(mat, M, N, target, l, row+1, col, d, targetRow, targetCol) ||
             quadPart(mat, M, N, target, l, u, col, row, targetRow, targetCol);
   } else {
      return quadPart(mat, M, N, target, col+1, u, r, row, targetRow, targetCol) ||
             quadPart(mat, M, N, target, l, row+1, col, d, targetRow, targetCol) ||
             quadPart(mat, M, N, target, col+1, row+1, r, d, targetRow, targetCol);
   }
}

bool quadPart(int mat[][N_MAX], int N, int target, int &row, int &col) {
   return quadPart(mat, N, N, target, 0, 0, N-1, N-1, row, col);
}
```

**Binary Partition:**

We can even reduce the number of sub-problems from three to only two using a method we call *Binary Partition*. This time we traverse along the middle row, the middle column, or the diagonal (shown as highlighted gray cells in images *a*), *b*), and *c*) below). As we traverse, we look for the point where the target element *s* satisfies the following condition:

a_i < s < a_(i+1), where a_i is the i-th traversed cell.

*a*) Row-wise binary partition. The highlighted gray cells represent the traversed row (the middle row). The target **10** is found between **9** and **16**.

*b*) Column-wise binary partition. The highlighted gray cells represent the traversed column (the middle column). The target **10** is found between **9** and **14**.

*c*) Diagonal-based binary partition. The highlighted gray cells represent the traversed diagonal. The target **10** is found between **9** and **17**. Please note that diagonal-based binary partition would fail on a non-square matrix (in the above example, it will not work in the two sub-matrices, because they are non-square).

If the target element equals one of the traversed cells, we immediately return it as found. Otherwise, we partition the matrix into two sub-matrices at the partition point we found. As it turns out, we need *cn* time (linear time) to find such a partition point, since we are essentially performing a linear search. Therefore, the complexity can be written as the following recurrence relation: (Note: the proof is omitted and left as an exercise for the reader.)

T(n) = 2T(n/2) + cn = O(n lg n)

The *Binary Partition* algorithm runs in *O*(*n* lg *n*) time. You might expect its complexity to be lower than *Quad Partition*'s, since it has only two sub-problems (instead of three) to solve. The reason for the higher order complexity is the extra *O*(*n*) time spent doing a linear search for the partition point where a_i < s < a_(i+1).

Please note that the matrix is not necessarily divided into two equal-sized sub-matrices. One of the sub-matrices could be bigger than the other, or in the extreme case, one of them could be of size zero. Here, we have assumed that each partition step divides the matrix into two equal-sized sub-matrices, purely for the sake of complexity analysis.

Below is the code for the *Binary Partition* method. It chooses the middle column as the place to search for the partition point.

```cpp
bool binPart(int mat[][N_MAX], int M, int N, int target, int l, int u, int r, int d,
             int &targetRow, int &targetCol) {
   if (l > r || u > d) return false;
   if (target < mat[u][l] || target > mat[d][r]) return false;
   int mid = l + (r-l)/2;
   int row = u;
   while (row <= d && mat[row][mid] <= target) {
      if (mat[row][mid] == target) {
         targetRow = row;
         targetCol = mid;
         return true;
      }
      row++;
   }
   return binPart(mat, M, N, target, mid+1, u, r, row-1, targetRow, targetCol) ||
          binPart(mat, M, N, target, l, row, mid-1, d, targetRow, targetCol);
}

bool binPart(int mat[][N_MAX], int N, int target, int &row, int &col) {
   return binPart(mat, N, N, target, 0, 0, N-1, N-1, row, col);
}
```

**Improved Binary Partition:**

Since the partition column (or row, or diagonal) is sorted, not exploiting that sortedness is a waste. In fact, we can modify binary search to find the partition point in lg *n* time. The complexity can then be expressed as the following recurrence relation: (Note: some steps are omitted; try to work out the math yourself.)

```
T(n) = 2T(n/2) + c lg n
     = 2^(lg n) T(1) + c * sum( 2^k lg(n/2^k) )   (sum over k = 0 .. lg n - 1)
     = O(n) + c(2n - lg n - 2)
     = O(n)
```

By incorporating binary search, we are able to reduce the complexity to *O*(*n*). However, we have made an assumption: *each subdivision of matrices must be of equal size (or, each partition point is exactly at the center of the partition column)*. This leads to the following question: what happens when the sub-matrices are not of equal size?

This turns out to be a difficult question to answer, but I can provide further insight by deriving the complexity of the other extreme, that is: each partition point falls at one end of the partition column, so that one sub-matrix is half of the original matrix, while the other sub-matrix is empty.

For an example of the above case, try searching for **–1** in the sample matrix shown in image (*a*). Since each subdivision leaves only half the original matrix, there can be at most lg *n* subdivisions in total. Assuming each binary search performed before a subdivision takes *c* lg *n* time, the complexity can be expressed as follows:

T(n) = c lg n + ... + c lg n (a total of lg n terms) = c lg n * lg n = O((lg n)^2)

As you can see, the run time of this extreme case is *O*((lg *n*)^2), which turns out to be even less than *O*(*n*). We conclude that this is not the worst case scenario, as some people might believe.

Please note that the worst case for the *Improved Binary Partition* method has not been proven here. We have merely shown that one case of the *Improved Binary Partition* runs in *O*(*n*). If you know a proof of the worst case, I would be interested to hear from you.

**» Continue reading Part III: Searching a 2D Sorted Matrix.**

Chimpanzee said on November 10, 2010

Hi buddy,

In your extreme case analysis for the improved binary partition, you used '-1' as an example. However, it seems that a binary search for -1 on the diagonal will immediately tell you that -1 is not in the matrix.

How come you can get "one matrix has the original matrix being halved, while the other matrix is empty"?

Can you explain the meaning "matrix being halved"?

1337c0d3r said on November 10, 2010

I am using a variation of binary search to search for a partition point, not for the element's position.

The modified binary search works as this:

Assume the list is

{0, 2, 3, 5, 9}

Searching -1 returns 0 as the partition point.

Searching 1 returns 1 as the partition point.

Searching 10 returns 5 as the partition point.

In other words, we are finding the partition point i+1 such that the key s satisfies:

a_i < s < a_(i+1)

If the element is found in the list, then we don't need the partition point, since we are done finding the element in the sorted matrix.

aNiceGuy said on November 11, 2010

I doubt this part:

3 [ 3 [ 3T(n/8) + c ] + c ] + c = 3^k * T(n/2^k) + O(1)

I think it should be 3^k * T(n/2^k) + (3^(k+1)-1)/2*c

1337c0d3r said on November 11, 2010

Thank you very much for pointing that out.

I've corrected it; see above for the changes.

aNiceGuy (一个好人:) said on November 12, 2010

You are welcome. BTW, just noticed another thing: O(nlgn) is actually better than O(n^1.58)

1337c0d3r said on November 14, 2010

You made a great observation! I need to investigate why my code's running time doesn't reflect this. One speculation is that in the quad partition I use an extra condition

if (l > r || u > d) return false;

to bail out early, while my binary partition code does not. I will update my findings.

Thanks!

1337c0d3r said on November 15, 2010

Just to post a follow-up regarding aNiceGuy's observation that O(n lg n) is asymptotically lower than O(n^1.58), which is true.

The two algorithms that run in O(n lg n) (binary partition and diagonal binary search) actually run significantly longer than the quad partition method (O(n^1.58)).

One way to explain this is that both have higher constant factors than the quad partition. Also, N = 1000 is not large enough, so n^1.58 has not overtaken n lg n yet. If N were large enough, n^1.58 would eventually dominate, while n lg n stays comparatively flat.

Octavious Strongstorm said on March 10, 2011

Shouldn’t the recurrence relation for the quad partition be 3T(n/4) + c, instead of 3T(n/2) + c, because it breaks the problem size into 4 pieces instead of two?

1337c0d3r said on March 10, 2011

That’s because in each step you eliminate 2 pieces from consideration. Try to read the above post again, it explains clearly why you can eliminate those 2 pieces.

almost-sure said on April 16, 2011

I still can't see how this recurrence relation is 3T(n/2) + c.

Clearly, as you mentioned, "the problem can naturally be divided into four sub-problems. If you notice carefully, we are always able to eliminate one of the four sub-problems in each step."

At each step we divide the problem size into 4 parts of n/4 each, and in the worst case we have to do work on 3 of those parts. I think it should be 3T(n/4) + c instead of 3T(n/2) + c. Could you clarify please? Thanks

Giovanni said on May 3, 2012

I agree with that. The recurrence is

T(n) = 3 * T(n/4) + c

which leads to

T(n) = O(n^log4(3)) = O(n^0.7925)

which is better than O(n*logn).

Also, the recurrence is for the worst case (in which you get rid of just one subproblem). However, there might be cases in which you get rid of up to 3 subproblems. In fact, the upper left and lower right corners always represent the minimum and maximum value in the matrix, thus it might save some time to check if the key is within these two before making a recursive call.

ibrahim said on February 24, 2013

Here T(n) denotes the problem on a matrix of size n x n.

Similarly, T(n/2) denotes the problem on a matrix of size n/2 x n/2. When you divide the matrix into 4 parts, you get 4 matrices of size n/2 x n/2.

So each sub-problem is T(n/2).

Giovanni said on September 16, 2013

The two equations are actually equivalent: if n is the size of a square matrix (n*n), then we get O(n^lg2(3)). If m = n*n is the total size of the matrix then we get O(m^lg4(3)) = O(n^lg2(3)).

Niaz said on April 10, 2011

I could not quite understand why you said that when we search for -1 in the matrix in (a) we get O((lg n)^2); I think it would return false because of the statement:

if (target < mat[u][l] || target > mat[d][r]) return false;

Am I not getting it right?

BTW, why is the Improved Binary Partition method's worst case time complexity not O((lg n)^2), assuming each binary search performed before a subdivision takes c lg n time?

Can somebody explain, please? :)

Thanks

Niaz said on April 10, 2011

Sorry, this is my first comment and it didn't come out right.

I was referring to the 3rd line of binPart when I said:

"I think it would return false because of the statement:

if (target < mat[u][l] || target > mat[d][r]) return false;"

maxq said on July 9, 2011

Hi 1337coder,

Really appreciate your great article on this!

I have a minor comment on the quad partitioning:

In your code it seems you partition the matrix the same way regardless of whether the target is larger or smaller than the partition pivot. This is fine, but it leaves a corner case where neither row nor col is advanced. I see this case has been handled in your code; I'm just wondering whether we can write it so that the corner case never arises (during interviews it's easy to forget to handle corner cases).

[suggested code]

```cpp
if (mat[row][col] > target) {
   return quadPart(mat, M, N, target, col, u, r, row-1, targetRow, targetCol) ||
          quadPart(mat, M, N, target, l, row, col-1, d, targetRow, targetCol) ||
          quadPart(mat, M, N, target, l, u, col-1, row-1, targetRow, targetCol);
} else {
```

[original code]

```cpp
if (mat[row][col] == target) {
   targetRow = row;
   targetCol = col;
   return true;
} else if (l == r && u == d) {
   return false;
}
if (mat[row][col] > target) {
   return quadPart(mat, M, N, target, col+1, u, r, row, targetRow, targetCol) ||
          quadPart(mat, M, N, target, l, row+1, col, d, targetRow, targetCol) ||
          quadPart(mat, M, N, target, l, u, col, row, targetRow, targetCol);   // <-- this is a corner case
} else {
   return quadPart(mat, M, N, target, col+1, u, r, row, targetRow, targetCol) ||
          quadPart(mat, M, N, target, l, row+1, col, d, targetRow, targetCol) ||
          quadPart(mat, M, N, target, col+1, row+1, r, d, targetRow, targetCol);
}
```

Ratna said on November 12, 2011

If we do a binary search over the diagonal and find the number that is just greater than the required number, say Q, the required number will definitely be in the row or column of Q, and not beyond Q, which in turn takes two binary searches to trace.

Hence, we need a total of 3 binary searches, for an overall complexity of O(log N).

Is there any issue with this approach?

James said on June 16, 2012

The diagonal isn't necessarily sorted, so we can't use this approach.

awashburn said on December 1, 2013

Even if the diagonal were guaranteed to be sorted, this approach would not work.

Consider the counterexample of searching for 10: you would search the row and column containing 17 and not find 10, yet 10 exists in the matrix.

Ratna said on November 12, 2011

Bad examples led to an incorrect solution.

Russell Cohen said on July 10, 2012

Proof of the worst case, in general

In the worst case, for any algorithm, you can never do better than O(n). Proof:

Consider the case where you are searching for k, where x < k < y. Consider the following adversarial matrix

```
x * y
x * y
x * y
x * y
x * y
```

That is the diagonal, and we don't care about the rest of the matrix. * is some number between x and y that is not k.

Any algorithm must look at every *'d position in the matrix, no matter how clever it is; the target could be hiding at any of them, since each * is strictly between x and y. The number of *s is O(n) [in this example n/2]. Since you must consider each of the stars, your algorithm can never do better than O(n).

QED.

Russell Cohen said on July 10, 2012

Matrix didn’t render:

subx said on February 7, 2013

I am not convinced by the deduction for the Improved Binary Partition: T(n) = 2T(n/2) + c lg n = O(n).

T(n) = 2T(n/2) + c lg n = 4T(n/4) + c lg n + 2c lg(n/2)

So I think it is not c * sum(2 * lg(n/2^k)); it is c * sum(2^k * lg(n/2^k)), isn't it?

And the overall complexity sounds like O(n) + O((lg n)^2), which is still O(n).

Qiang said on March 24, 2013

I agree that it should be sum(2^k * lg(n/2^k)), but how does this result in O((lg n)^2)?

Daniel said on February 24, 2013

I think in an n x n matrix of this type, this can be solved in O(log n) by binary searching for the element in the middle row (or the middle column, if it is shorter). If the element is bigger than everything in the row, we can drop this row and the rows above it and search the middle column, which is now half as long as before. The same argument applies if the element is smaller than the first element in the middle row. If we find it, we are done. Finally, if there are two elements a_{i,j} and a_{i+1,j} such that the searched element is between them, then we only have to work with the a_{1..(i-1), (j+1)..n} and a_{(i+2)..n, 1..(j-1)} sub-matrices, whose total size is half the previous, so the runtime is sum_{i = 0}^{ceil(log n)} (2^i) = 2^(ceil(log n) + 1) - 1, which is O(log n).

Hardy said on October 5, 2013

I doubt this part:

```
T(n) = 2T(n/2) + c lg n
     = 2^(lg n) T(1) + c * sum( 2 lg(n/2^k) )     (sum over k = 1 .. lg n)
```

Shouldn't it be:

```
T(n) = 2T(n/2) + c lg n
     = 2^(lg n) T(1) + c * sum( 2^k lg(n/2^k) )   (sum over k = 1 .. lg n)
```

Alexey said on July 7, 2014

Sorry for my ignorance, but intuitively speaking, regarding the last algorithm (Improved Binary Partition):

Don't you get the same run time here no matter what element is being looked for?

I mean, you always halve the number of elements to be searched, no matter where the binary search in the row/column finishes. In your examples this stays true, even when you're searching for 10.

This would mean that the worst runtime is always log n * log n.

Do you have a counterexample?

warlord said on September 18, 2014

Thanks a lot for the blog post. The same problem was asked in my Google interview but I fumbled.

Rajmohan said on September 23, 2014

hi,

Your improved binary partition algorithm is not optimal. The optimal algorithm for this problem was proposed long ago by Dijkstra and is called saddleback search. Its worst case complexity is only O(n), and it is optimal.
