## Question and Answer

## You Asked

Hi Tom,

I am given the "current" allocation of items to eight buckets, and I want to make it more efficient by filling bucket A as much as possible, then bucket B, then bucket C, and so on (as indicated by the "priority"). I do this by taking as many P1 items as possible from bucket H and reassigning them to bucket A, then to bucket B, etc., until all of them are allocated; then I take the next lowest-priority bucket and repeat. How much of P1 and P2 to fill is defined in the volumes table, i.e. in the sample data each of buckets A-H can hold quantities in multiples of eight (P1) and seven (P2).

I also want to include round-up and round-down logic to distribute the quantities nearly evenly across buckets when one bucket has too big a quantity.

The items P1 and P2 are completely independent and one's result shouldn't impact the other. The height, weight, and width don't matter here, so they are not present in the sample data.

Below is what I have started with, but I couldn't make the round-up and round-down cases work. Also, when a quantity is moved into two or more buckets from one bucket, or moved out of two buckets into one bucket, can we show a single comma-separated row instead of multiple step-by-step rows?

When one row has too much quantity, can we implement round-up and round-down logic to distribute the quantities nearly equally across buckets, in multiples of the quantities from the volumes table? E.g. if we update the quantity to seventy-two in bucket H for part P1, the current result gives five rows for bucket H. Can we round buckets A-H to sixteen and leave the remaining quantity in bucket H?


with LiveSQL Test Case:

## and we said...

So you want to assign the quantities in multiples of the item's volume as evenly as possible between the buckets?


To see how much you want to assign to each bucket, you can do the rounding to this unit with this formula:

round ( avg ( item quantities ) / volume ) * volume
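As a quick sanity check, here is a minimal Python sketch of that rounding (the sample quantities are the P1 data from the question; bucket E has no P1 row):

```python
# Sketch only: round(avg(item quantities) / volume) * volume
# applied to the P1 sample data (buckets A, B, C, D, F, G, H).
qtys = [6, 9, 8, 1, 0, 8, 72]
volume = 8

avg = sum(qtys) / len(qtys)            # 104 / 7 ≈ 14.86
target = round(avg / volume) * volume  # ≈ 1.86 lots rounds to 2 lots, so 16

print(target)  # 16
```

So every P1 bucket aims for 16 units, which matches the targets in the procedure's output.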

There are some challenges to this scenario.

First up, this is a form of bin-packing problem. In general these are *hard* problems, meaning there are no known solutions that are guaranteed to be both fast and correct.

Next you need to keep track of how much you've assigned from other buckets. This means using recursive with or the model clause. These can be tricky to write and understand. Both are likely to take a long time to process large data sets too.

So while I usually advocate SQL approaches, this is one case where procedural PL/SQL is likely to be both easier to write and faster to execute.

Finally, I know you're on 12.1, but the new loop constructs in 21c make the code much more compact. So I couldn't resist using them ;) Read more about these at:

https://blogs.oracle.com/plsql-and-ebr/better-loops-and-qualified-expressions-array-constructors-in-plsql

Here's a sketch of the algorithm I've used:

- Get all the buckets for an item, calculating the target quantity using the formula above

- Iterate through the buckets in priority order, skipping over any which already have sufficient

- Have an inner loop that works backwards through the buckets

- Assign the amount to the outer bucket if the inner bucket has either:
  - lower priority
  - excess quantity
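The algorithm above can be sketched in plain Python too (a hypothetical re-implementation for illustration only, using the P1 sample data; the variable names are mine):

```python
def fill_buckets(present, volume):
    """present: (bucket, qty) pairs in priority order; volume: lot size."""
    n = len(present)
    # Target per bucket: average rounded to the nearest multiple of volume.
    target = round(sum(q for _, q in present) / n / volume) * volume
    qty = [q for _, q in present]
    source = [[] for _ in range(n)]
    for i in range(n):                    # outer: buckets in priority order
        if qty[i] >= target:
            continue                      # already has sufficient
        for j in reversed(range(n)):      # inner: work backwards through buckets
            if i == j:
                continue
            if i < j:                     # j has lower priority: take what it has
                excess = min(target - qty[i], qty[j])
            else:                         # j has higher priority: take only its surplus
                excess = min(qty[j] - target, target - qty[i])
            if excess <= 0:
                continue
            qty[i] += excess
            qty[j] -= excess
            source[i].append(f"{present[j][0]}-{excess}")
    return [(b, qty[k], ",".join(source[k]))
            for k, (b, _) in enumerate(present)]

p1 = [('A', 6), ('B', 9), ('C', 8), ('D', 1), ('F', 0), ('G', 8), ('H', 72)]
for row in fill_buckets(p1, 8):
    print(row)
```

On the P1 sample data this reproduces the PL/SQL output: A-G fill to 16, each taking from H, which keeps the remaining 8.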

```sql
create table priorities (bucket, priority) as
  select 'A', 1 from dual union all
  select 'B', 2 from dual union all
  select 'C', 3 from dual union all
  select 'D', 4 from dual union all
  select 'E', 5 from dual union all
  select 'F', 6 from dual union all
  select 'G', 7 from dual union all
  select 'H', 8 from dual;

create table volumes (item, quantity) as
  select 'P1', 8 from dual union all
  select 'P2', 7 from dual;

create table data (bucket, item, present_qty) as
  select 'A', 'P1', 6 from dual union all
  select 'B', 'P1', 9 from dual union all
  select 'C', 'P1', 8 from dual union all
  select 'D', 'P1', 1 from dual union all
  select 'F', 'P1', 0 from dual union all
  select 'G', 'P1', 8 from dual union all
  select 'H', 'P1', 72 from dual union all
  select 'A', 'P2', 3 from dual union all
  select 'B', 'P2', 5 from dual union all
  select 'C', 'P2', 7 from dual union all
  select 'D', 'P2', 9 from dual union all
  select 'E', 'P2', 1 from dual union all
  select 'F', 'P2', 4 from dual union all
  select 'H', 'P2', 4 from dual;

create or replace procedure fill_buckets ( fill_item varchar2 ) is
  cursor bucket_cur is
    select bucket, present_qty,
           round ( avg ( present_qty ) over ( partition by item ) / quantity ) * quantity as quantity,
           null source
    from   priorities
    join   data using ( bucket )
    join   volumes using ( item )
    where  item = fill_item
    order  by priority;

  type bucket_arr is table of bucket_cur%rowtype
    index by pls_integer;

  bucket_recs bucket_arr;
  num_buckets pls_integer;
  excess      pls_integer;
begin
  open bucket_cur;
  fetch bucket_cur bulk collect into bucket_recs;
  close bucket_cur;

  num_buckets := bucket_recs.count;

  for i, v in pairs of bucket_recs
  /* Only consider buckets under capacity */
  when bucket_recs ( i ).present_qty < bucket_recs ( i ).quantity loop
    for j in reverse 1 .. num_buckets
    /* Only attempt different buckets with lower priority or excess quantity */
    when i <> j
     and ( i < j or bucket_recs ( j ).present_qty > bucket_recs ( i ).quantity ) loop
      if i < j then
        excess := least (
          bucket_recs ( i ).quantity - bucket_recs ( i ).present_qty,
          bucket_recs ( j ).present_qty
        );
      else
        excess := least (
          bucket_recs ( j ).present_qty - bucket_recs ( j ).quantity,
          bucket_recs ( i ).quantity - bucket_recs ( i ).present_qty
        );
      end if;

      continue when excess <= 0;

      bucket_recs ( i ).present_qty := bucket_recs ( i ).present_qty + excess;
      bucket_recs ( j ).present_qty := bucket_recs ( j ).present_qty - excess;
      bucket_recs ( i ).source :=
        bucket_recs ( i ).source || ',' || bucket_recs ( j ).bucket || '-' || excess;
    end loop;
  end loop;

  for i, v in pairs of bucket_recs loop
    dbms_output.put_line ( v.bucket || '-' || v.present_qty || ' taken from ' || v.source );
  end loop;
end;
/

exec fill_buckets ( 'P1' );

A-16 taken from ,H-10
B-16 taken from ,H-7
C-16 taken from ,H-8
D-16 taken from ,H-15
F-16 taken from ,H-16
G-16 taken from ,H-8
H-8 taken from

exec fill_buckets ( 'P2' );

A-7 taken from ,H-4
B-7 taken from ,F-2
C-7 taken from
D-7 taken from
E-5 taken from ,F-2,D-2
F-0 taken from
H-0 taken from
```

## Rating

## Comments

Thank you for your response. I genuinely appreciate it, but as you know I am on 12.0.1, so I can't use the provided solution. Also, I felt it was my mistake not to mention that I need a SQL solution.

Anyhow thanks for the explanation and solution.


You can adapt this solution to work on 12c; it's just a matter of changing the loop controls (moving the when clauses to exit/continue conditions within the loops and using a standard `for i in 1 .. count` loop to go through the array).

Why precisely do you need a SQL solution?

If you're processing large data sets, this may be one case where PL/SQL is faster!


I wouldn't mind studying this problem, but it's frustrating because the requirement is not clear, and **you don't show the result you want**.

- Why move items in quantities of 8 or 7 when the buckets don't necessarily contain multiples of those quantities either before or after the processing?

- Why "move" items from one bucket to others? Why not start by saying the final result you want? Are these "buckets" real-life containers, like in a warehouse?

Regards,

Stew Ashton


Here is an attempt at a solution, in the hopes that the OP will react by saying whether the end result = the desired result, and maybe saying exactly what the desired result is.

The basic idea is:

- the items are grouped together in "lots" of the indicated quantities (7 or 8)

- get the difference between the average quantity per item and the current quantity, then divide by the item quantity and round: this is the number of lots that should be added or subtracted.

- eliminate the buckets that don't change

- order the buckets by priority ascending if they need more lots, by priority descending if they should give lots

- calculate "lot ranges" and join givers to takers where the ranges intersect. The giver must have lower priority than the taker.

Note that there is no change for item P2 because D cannot give to E.

Regards,

Stew

```sql
with lot_ranges as (
  select ITEM, BUCKET, PRESENT_QTY, PRIORITY, QUANTITY, change_lots,
         sign(change_lots) sign_lot,
         sum(abs(change_lots)) over(
           partition by item, sign(change_lots)
           order by sign(change_lots)*priority
         ) - abs(change_lots) begin_lot,
         sum(abs(change_lots)) over(
           partition by item, sign(change_lots)
           order by sign(change_lots)*priority
         ) end_lot
  from (
    select ITEM, BUCKET, PRESENT_QTY, PRIORITY, QUANTITY,
           round((avg(present_qty) over(partition by item) - present_qty) / quantity) change_lots
    from data d
    join priorities p using(bucket)
    join volumes v using(item)
  )
  where change_lots != 0
)
select a.item, a.bucket to_bucket, b.bucket from_bucket,
       (least(a.end_lot, b.end_lot) - greatest(a.begin_lot, b.begin_lot)) * a.quantity change_qty,
       a.quantity
from lot_ranges a
join lot_ranges b
  on  a.item = b.item
  and a.priority < b.priority
  and a.sign_lot = 1
  and b.sign_lot = -1
  and (   a.begin_lot >= b.begin_lot and a.end_lot <= b.end_lot
       or b.begin_lot >= a.begin_lot and b.end_lot <= a.end_lot );

ITEM TO_BUCKET FROM_BUCKET CHANGE_QTY QUANTITY
P1   A         H                    8        8
P1   B         H                    8        8
P1   C         H                    8        8
P1   D         H                   16        8
P1   F         H                   16        8
```


As always, thanks for your input Stew.

After posting my previous effort, I realised that we can move any amount of items, not just full lots, but the buckets should wind up being filled to multiples of the indicated quantities.

Please ignore my previous post. I will post a more relevant solution later.

Sorry,

Stew


After going down a few wrong tracks, I found a way to understand the requirement that fits the result Chris got.

- At the end, the first N buckets in priority order should be "full": they all have the same quantity, and that quantity should be a multiple of VOLUME.QUANTITY for that item. The bucket N+1 should have the leftover quantity, and any remaining buckets should be empty.

- the number of "full" buckets should be as great as possible

- buckets should "give" in reverse priority order, and buckets should "take" in priority order

- if necessary, a bucket can give to a bucket that has a lower priority.

My solution uses SQL only, and it does not use recursive logic or the MODEL clause.

- It calculates the difference between the end result (the quantity each bucket should wind up with) and the present quantity: this difference is called CHANGE_QTY.

- It then generates two "quantity ranges" based on the priorities: "givers" and "takers", and joins the "givers" and the "takers" where the ranges intersect.
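To make the target calculation concrete, here is a hedged arithmetic sketch in Python (using the P1 sample data from the question; the variable names are mine, not from the query):

```python
import math

# P1 present quantities in priority order: A, B, C, D, F, G, H (assumed sample data).
qtys = [6, 9, 8, 1, 0, 8, 72]
volume = 8

total = sum(qtys)                                   # 104
total_lots = total / volume                         # 13 lots in all
lots_per_full = math.ceil(total_lots / len(qtys))   # 2 lots per "full" bucket
qty_per_full = lots_per_full * volume               # each full bucket ends with 16
num_full = int(total_lots // lots_per_full)         # 6 buckets can be made full
leftover = total - qty_per_full * num_full          # bucket 7 keeps the last 8

print(qty_per_full, num_full, leftover)
```

So the first six buckets in priority order should end with 16 units each, and the seventh keeps the leftover 8, which is exactly the end state the earlier procedure produced.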


```sql
with qty_totals as (
  select row_number() over(partition by item order by priority) PRIORITY,
         sum(present_qty) over(partition by item) total_qty,
         count(*) over(partition by item) num_buckets,
         BUCKET, ITEM, PRESENT_QTY, QUANTITY
  from data
  join volumes using(item)
  join priorities using(bucket)
), lots as (
  select total_qty / quantity total_lots, q.*
  from qty_totals q
), lots_per_bucket as (
  select ceil(total_lots / num_buckets) lots_per_full_bucket, l.*
  from lots l
), per_full_bucket as (
  select lots_per_full_bucket * quantity qty_per_full_bucket,
         trunc(total_lots / lots_per_full_bucket) num_full_buckets,
         l.*
  from lots_per_bucket l
), change_qtys as (
  select case
           when priority <= num_full_buckets then qty_per_full_bucket
           when priority = num_full_buckets + 1
             then total_qty - qty_per_full_bucket * num_full_buckets
           else 0
         end - present_qty change_qty,
         p.*
  from per_full_bucket p
), qty_ranges as (
  select sign(change_qty) sign_qty,
         sum(abs(change_qty)) over(
           partition by item, sign(change_qty)
           order by sign(change_qty)*priority
         ) - abs(change_qty) begin_qty,
         sum(abs(change_qty)) over(
           partition by item, sign(change_qty)
           order by sign(change_qty)*priority
         ) end_qty,
         c.*
  from change_qtys c
  where change_qty != 0
)
select a.item, b.bucket from_bucket, a.bucket to_bucket,
       least(a.end_qty, b.end_qty) - greatest(a.begin_qty, b.begin_qty) change_qty
from qty_ranges a
join qty_ranges b
  on  a.item = b.item
  and a.sign_qty = 1
  and b.sign_qty = -1
  and least(a.end_qty, b.end_qty) - greatest(a.begin_qty, b.begin_qty) > 0
order by item, b.priority desc, a.priority;

ITEM FROM_BUCKET TO_BUCKET CHANGE_QTY
P1   H           A                 10
P1   H           B                  7
P1   H           C                  8
P1   H           D                 15
P1   H           F                 16
P1   H           G                  8
P2   H           A                  4
P2   F           B                  2
P2   F           E                  2
P2   D           E                  2
```
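For anyone who wants to sanity-check the logic outside the database, here is a rough Python re-implementation of the same steps (my own sketch under the stated assumptions, not Stew's code):

```python
import math

def shuffle(present, volume):
    """present: (bucket, qty) pairs in priority order; volume: lot size."""
    n = len(present)
    total = sum(q for _, q in present)
    total_lots = total / volume
    lots_per_full = math.ceil(total_lots / n)
    qty_per_full = lots_per_full * volume
    num_full = int(total_lots // lots_per_full)

    # Target per bucket, then the signed change needed (change_qty).
    changes = []
    for rn, (bucket, qty) in enumerate(present, start=1):
        if rn <= num_full:
            tgt = qty_per_full
        elif rn == num_full + 1:
            tgt = total - qty_per_full * num_full
        else:
            tgt = 0
        changes.append((bucket, rn, tgt - qty))

    # Cumulative quantity ranges: takers in priority order,
    # givers in reverse priority order.
    def ranges(sign):
        out, run = [], 0
        rows = [c for c in changes if c[2] != 0 and (c[2] > 0) == (sign > 0)]
        rows.sort(key=lambda c: sign * c[1])
        for bucket, rn, chg in rows:
            out.append((bucket, run, run + abs(chg)))
            run += abs(chg)
        return out

    # Join givers to takers wherever their ranges intersect.
    moves = []
    for tb, tb0, tb1 in ranges(+1):
        for gb, gb0, gb1 in ranges(-1):
            overlap = min(tb1, gb1) - max(tb0, gb0)
            if overlap > 0:
                moves.append((gb, tb, overlap))
    return moves
```

On the sample data it produces the same (from_bucket, to_bucket, change_qty) moves as the query above, for both P1 and P2.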

Nice work Stew, I thought this was one where recursion/model was necessary; you've proven me wrong :)

Thank you, Stew. That is the exact need I had; only the quantity column after shuffling the quantities is missing. You explained it well, and I think that's why you can write awesome SQL solutions. I am learning to explain requirements the way you did.

The scenario is this: we send a truck on a route that delivers items to different dealerships, and it delivers quantities in racks which can each fit a certain quantity. The high-priority bucket in question was the dealer farthest from the warehouse, the low-priority buckets were the nearest dealers, and a rack inside the truck was a bucket. Our ordering system allocates quantities but doesn't take care of how much to give to whom, so the warehouse manager has to do all this work manually. Implementing a change to handle it in the system will take months for various reasons (management approvals and money involved), and for the same reason we can't use stored program units, as we can't compile those in the database directly. So I required a query which the warehouse operations manager can run to distribute these quantities for loading correctly into trucks.

Let me know if I am not clear and I’ll try to explain it again.


To get the final quantities, outer join the result of the shuffling query to the original table to add/remove quantities as needed.

To ensure you get one row per bucket, group the final results by bucket and item. You can use listagg to return a string listing which buckets/quantities to take the values from.
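As a rough illustration of that grouping step (a Python stand-in for `group by` + `listagg`, using the P2 moves from the result above):

```python
from collections import defaultdict

# Moves from the P2 result above: (from_bucket, to_bucket, qty).
moves = [('H', 'A', 4), ('F', 'B', 2), ('F', 'E', 2), ('D', 'E', 2)]

sources = defaultdict(list)
for giver, taker, qty in moves:
    sources[taker].append(f"{giver}-{qty}")

# One comma-separated row per receiving bucket, like listagg would give.
rows = {taker: ",".join(parts) for taker, parts in sources.items()}
print(rows["E"])  # F-2,D-2
```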

You don't need to create a procedure - you could use an anonymous block. Bit of a moot issue now as Stew's got a working SQL solution.


You say "That is the exact need I had, but only the quantity column after shuffling the quantities is missing." Are you saying you aren't capable of adjusting the query to include that information?

Then you start describing the real world situation. That is a very good idea and your use case is indeed interesting, but I don't see the relationship between your original question and the real world situation.

- Why move stuff among racks in a certain order?

- Why split the items equally among all the racks?

- I can understand dealerships being arranged by priority, but why are **racks** arranged by priority?

- If my query (mostly) answers your question, what is the point of backing up and explaining the real world situation?

If my brain is not doing a good job here, it would not be the first time.

Regards,

Stew


My guess is that the truck goes to each location, where they unload the racks for that dealership. So you want to fill the truck such that the racks are in the same order you visit locations. This reduces reshuffling of stock in the truck you need to do at each location, saving time.

We'd need to get the OP to confirm this though

We often get "yeah that's nice, but what about a real-world problem" comments on questions like this

So I think it's good to give an overview of what the actual issue is. It can also help us point out completely different solutions to solve the issue if there are any. I agree it would be more useful in the original question though :)


Hi Chris, Stew,

What I meant was that I was looking for a SQL solution exactly like the one you shared, and I explained the real-world scenario so that it can give more clarity on the need. Yes, I couldn't understand the point of adding the outer join to the query. I tested the solution, and the required outcome differs from it. It's impossible to ship a quantity near the order quantity to the dealer in many corner cases because of the rack size limitation, so I require a change in the solution, as explained in the sample data and desired result.

“- Why move stuff among racks in a certain order?” Different dealers have ordered different quantities, and we want to ship close to the order quantity, if not in full, to the farthest dealer, because of rounding the quantity to fill the racks.

“- Why split the items equally among all the racks?” I would say it's not splitting, but adjusting the allocated order quantity to a multiple of the quantity a rack can accommodate.

“- I can understand dealerships being arranged by priority, but why are racks arranged by priority?” Racks have to be filled in order so that, if not the exact quantity, then a quantity close to the dealer's order can be shipped.

“- If my query (mostly) answers your question, what is the point of backing up and explaining the real-world situation?” I shared the real-world situation for more clarity on the requirements.

You are right that we are loading racks into trucks in such a way that we can unload them without hassle at the dealer location.

Additional requirement: we want to ship a quantity somewhat near, if not exactly, the ordered quantity to a dealer. Given the rack capacity limitation, to achieve this we want to first reduce the quantity for all rows of an item, keeping it a multiple where we can and ignoring rows where we can't, and then increase again where we can (the iteration column in the sample data shows how we reduce first, and the result column shows how we increase later).

Now, if some quantity is still left that we couldn't adjust into a multiple, I need an additional generated row with status N for that quantity.


```sql
create table priorities (bucket, priority) as
  select 'A', 1 from dual union all
  select 'B', 2 from dual union all
  select 'C', 3 from dual union all
  select 'D', 4 from dual union all
  select 'E', 5 from dual;

create table volumes (item, quantity) as
  select 'P1', 10 from dual union all
  select 'P2', 5 from dual union all
  select 'P3', 10 from dual;

create table data (bucket, item, present_qty) as
  select 'A', 'P1', 11 from dual union all
  select 'B', 'P1', 15 from dual union all
  select 'C', 'P1', 9 from dual union all
  select 'D', 'P1', 12 from dual union all
  select 'E', 'P1', 8 from dual union all
  select 'A', 'P2', 11 from dual union all
  select 'B', 'P2', 15 from dual union all
  select 'C', 'P2', 9 from dual union all
  select 'D', 'P2', 12 from dual union all
  select 'E', 'P2', 8 from dual union all
  select 'A', 'P2', 9 from dual union all
  select 'B', 'P2', 9 from dual union all
  select 'C', 'P2', 9 from dual union all
  select 'D', 'P2', 9 from dual union all
  select 'E', 'P2', 9 from dual;
```

Desired result (each row is followed by the steps that produce it):

```
BUCKET ITEM  START_QTY  ITERATION  RESULT STATUS
------ ----  ---------  ---------  ------ ------
A      P1           11         10      10 Y
       Step 1: reduce 11 to 10. Step 2: quantity is in multiple, so move to next row.
B      P1           15         10      10 Y
       Step 1: reduce 15 to 10. Step 2: quantity is in multiple, so move to next row.
C      P1            9          9      10 Y
       Step 1: cannot reduce 9, so ignore this row. Step 2: give 1 here from the last
       generated row, as it is not a multiple of 10.
D      P1           12         10      10 Y
       Step 1: reduce 12 to 10. Step 2: quantity is in multiple, so move to next row.
E      P1            8          8      10 Y
       Step 1: cannot reduce 8, so ignore this row. Step 2: give 2 here from the last
       generated row, as it is not a multiple of 10.
       P1                       8       5 N
       Step 1: generated row with 8 leftover quantity. Step 2: 5 quantity is left in the
       generated row with status N.
A      P2           11         10      15 Y
       Step 1: reduce 11 to 10. Step 2: add 5 quantity here from the generated row, as
       all other rows are in multiples of 5.
B      P2           15         15      20 Y
       Step 1: cannot reduce 15, so ignore this row. Step 2: add 5 quantity here from
       the generated row, as all other rows are in multiples of 5.
C      P2            9          5       5 Y
       Step 1: reduce 9 to 5. Step 2: quantity is in multiple, so give 1 here from the
       last generated row.
D      P2           12         10      10 Y
       Step 1: reduce 12 to 10. Step 2: quantity is in multiple, so move to next row.
E      P2            8          5       5 Y
       Step 1: cannot reduce 8, so ignore this row. Step 2: quantity is in multiple, so
       give 2 here from the last generated row.
       P2                      10       0
       Step 1: generated row with 10 leftover quantity. Step 2: 0 quantity is left in
       the generated row with status N.
A      P3            9          9      10 Y
       Step 1: cannot reduce 9, so ignore this row. Step 2: add 1 quantity from the last
       row (E-P3) to make a multiple of 10.
B      P3            9          9      10 Y
       Step 1: cannot reduce 9, so ignore this row. Step 2: add 1 quantity from the last
       row (E-P3) to make a multiple of 10.
C      P3            9          9      10 Y
       Step 1: cannot reduce 9, so ignore this row. Step 2: add 1 quantity from the last
       row (E-P3) to make a multiple of 10.
D      P3            9          9      10 Y
       Step 1: cannot reduce 9, so ignore this row. Step 2: add 1 quantity from the last
       row (E-P3) to make a multiple of 10.
E      P3            9          9       0 N
       Step 1: cannot reduce 9, so ignore this row. Step 2: 5 quantity is left here
       after giving 4 quantity to the rows above, which is not a multiple of 10, so make
       it 0.
       P3                       0       5 N
       Step 1: generated row with 0 leftover quantity. Step 2: generated row with the 5
       leftover quantity of the row above.
```

There was a discrepancy in the sample data for part P3; it is corrected in the data set below.

```sql
create table data (bucket, item, present_qty) as
  select 'A', 'P1', 11 from dual union all
  select 'B', 'P1', 15 from dual union all
  select 'C', 'P1', 9 from dual union all
  select 'D', 'P1', 12 from dual union all
  select 'E', 'P1', 8 from dual union all
  select 'A', 'P2', 11 from dual union all
  select 'B', 'P2', 15 from dual union all
  select 'C', 'P2', 9 from dual union all
  select 'D', 'P2', 12 from dual union all
  select 'E', 'P2', 8 from dual union all
  select 'A', 'P3', 9 from dual union all
  select 'B', 'P3', 9 from dual union all
  select 'C', 'P3', 9 from dual union all
  select 'D', 'P3', 9 from dual union all
  select 'E', 'P3', 9 from dual;
```


If you add an N bucket to the data and volumes tables with zero current quantity, this works when all the buckets are full.

You'll need a final rejig to move quantities from buckets below the target quantity to N at the end.
