More than one process saving?
Lars Villadsen, May 30, 2005 - 1:57 pm UTC
Tom,
Thanks for your prompt response.
Just to make sure that I have understood your suggestion: make the calculation servers write to a queue instead of directly to the DATA and VALUE tables, and add one or more server processes that receive the data, calculate the HASH value over the concatenated columns, and let a unique constraint enforce the "uniqueness"!
Would you suggest that we have more than one saving process, or should we 'serialize' by simply using one? (The error handling for possible unique-constraint violations would get 'slightly' more complicated with extra save processes.)
Thanks
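For what it's worth, the hash-plus-unique-constraint half of this scheme might be sketched as below. All table and column names are invented for illustration, and DBMS_CRYPTO (available in 10g) is assumed as the hashing mechanism; the point is only that the unique constraint, not application code, rejects a second copy of the same logical record.

```sql
-- Hypothetical DATA table: a hash of the concatenated value columns
-- is stored alongside the data and protected by a unique constraint,
-- so two processes saving the same logical record collide on it.
create table data_t (
  id        number primary key,
  col1      varchar2(100),
  col2      varchar2(100),
  col_hash  raw(20) not null,          -- SHA-1 output
  constraint data_t_hash_uk unique (col_hash)
);

-- The saving process computes the hash over the concatenated columns
-- (a delimiter between columns avoids 'ab'||'c' = 'a'||'bc' collisions)
-- and simply inserts; a duplicate raises ORA-00001.
insert into data_t (id, col1, col2, col_hash)
values (1, 'a', 'b',
        dbms_crypto.hash(utl_raw.cast_to_raw('a' || '|' || 'b'),
                         dbms_crypto.hash_sh1));
```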
May 30, 2005 - 2:05 pm UTC
no, I was suggesting that the guys that create the "new record R" that A and B discover -- push a message onto a queue rather than create the "new record R"
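As a rough sketch of that reading: the process that discovers the new record R enqueues it with Oracle AQ rather than inserting it. The queue name and the `r_record_type` payload object type below are made up for illustration, and the queue setup (dbms_aqadm.create_queue_table / create_queue) is omitted.

```sql
-- Producer side: enqueue the new record R instead of inserting it.
declare
  l_enqueue_options    dbms_aq.enqueue_options_t;
  l_message_properties dbms_aq.message_properties_t;
  l_msgid              raw(16);
  l_payload            r_record_type;   -- hypothetical object type for R
begin
  l_payload := r_record_type('a', 'b'); -- column values for the new R
  dbms_aq.enqueue(
    queue_name         => 'r_record_queue',
    enqueue_options    => l_enqueue_options,
    message_properties => l_message_properties,
    payload            => l_payload,
    msgid              => l_msgid );
  commit;
end;
/
```

The A, B, C processes then dequeue these messages and do the actual inserts, so the "make R appear" work is centralized in one place.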
But what about foreign keys then?
Lars Villadsen, May 30, 2005 - 2:18 pm UTC
But if we push these DATA columns onto the queue, won't we be unable to save into the VALUES table before we know the value of the foreign key in the DATA table - i.e. A and B need to wait for the queue to be processed?
Or should we simply use the calculated hash value as the foreign key in DATA, computing the hash before we push the values?
Thanks
May 30, 2005 - 2:45 pm UTC
I don't have enough knowledge of your processing to answer. My point is -- the thing that "makes the R records appear" maybe doesn't make the R records appear anymore, but instead queues a message with the R record and asks the team of A, B, C processes to "make them appear and do the processing (rollup or whatever) on them".
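If the hash-as-foreign-key route from the follow-up were taken, the producer would compute the hash before enqueueing, so A and B already know the key for the VALUES rows without waiting; the consumer's insert into DATA then only needs to tolerate duplicates. A minimal sketch, reusing the invented data_t table from above:

```sql
-- Consumer side: insert the dequeued record; if the same R was
-- discovered (and enqueued) twice, the unique constraint on the
-- hash raises DUP_VAL_ON_INDEX, which we treat as "already saved".
begin
  insert into data_t (id, col1, col2, col_hash)
  values (:msg_id, :msg_col1, :msg_col2, :msg_hash);
exception
  when dup_val_on_index then
    null;  -- another process already saved this R; nothing to do
end;
```

This keeps the error handling for multiple saving processes down to one exception handler, at the cost of trusting the hash (plus a delimiter scheme) to identify the logical record.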