Approach #1: Ad-Hoc [Accepted]
It is natural to consider an array W of each interval's sum, where each interval has the given length K. To create W, we can either use prefix sums, or maintain the sum of the interval as a window slides along the array.
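As a small sketch of the sliding-window construction just described (the helper name `window_sums` is ours, not from the original):

```python
def window_sums(nums, K):
    # W[i] holds sum(nums[i:i+K]); start with the first window,
    # then slide: add the entering element, drop the leaving one.
    W = [sum(nums[:K])]
    for i in range(K, len(nums)):
        W.append(W[-1] + nums[i] - nums[i - K])
    return W

# window_sums([1, 2, 1, 2, 6, 7, 5, 1], 2) → [3, 3, 3, 8, 13, 12, 6]
```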
From there, we approach the reduced problem: given some array W and an integer K, what is the lexicographically smallest tuple of indices (i, j, k) with i + K <= j and j + K <= k that maximizes W[i] + W[j] + W[k]?
Suppose we fix j. We would like to know, on the intervals [0, j-K] and [j+K, len(W)-1], where the largest value of W[i] (respectively W[k]) occurs first. (Here, "first" means the smaller index.)
We can solve these problems with dynamic programming. For example, if we know that i is where the largest value of W[i] occurs first on [0, 5], then on [0, 6] the first occurrence of the largest W[i] must be either i or 6. If, say, 6 is better, then we set best = 6.
At the end, left[z] will be the first occurrence of the largest value of W[i] on the interval [0, z], and right[z] will be the same but on the interval [z, len(W)-1]. This means that for some choice of j, the candidate answer must be (left[j-K], j, right[j+K]). We take the candidate that produces the maximum W[i] + W[j] + W[k].
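Putting the pieces together, a sketch of the full approach might look like the following (the function name and signature are assumptions for illustration, not the author's exact code):

```python
def max_sum_of_three_subarrays(nums, K):
    # Build W: W[i] = sum of the length-K window starting at i.
    W = [sum(nums[:K])]
    for i in range(K, len(nums)):
        W.append(W[-1] + nums[i] - nums[i - K])
    n = len(W)

    # left[z]: first index of the maximum of W on [0, z].
    left = [0] * n
    best = 0
    for i in range(n):
        if W[i] > W[best]:  # strict '>' keeps the earliest index
            best = i
        left[i] = best

    # right[z]: first index of the maximum of W on [z, n-1].
    right = [n - 1] * n
    best = n - 1
    for i in range(n - 1, -1, -1):
        if W[i] >= W[best]:  # '>=' keeps the smaller index on ties
            best = i
        right[i] = best

    # For each middle index j, the candidate is (left[j-K], j, right[j+K]).
    ans = None
    for j in range(K, n - K):
        i, k = left[j - K], right[j + K]
        if ans is None or (W[i] + W[j] + W[k] >
                           W[ans[0]] + W[ans[1]] + W[ans[2]]):
            ans = (i, j, k)
    return ans
```

Note the asymmetry in the comparisons: the left pass uses a strict `>` and the right pass uses `>=`, so that both arrays record the smallest index achieving the maximum, which is what makes the final tuple lexicographically smallest.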
Time Complexity: O(N), where N is the length of the array. Every loop is bounded in the number of steps by N, and does O(1) work per step.
Space Complexity: O(N). W, left, and right all take O(N) memory.
Analysis written by: @awice