Question and Answer

Connor McDonald

Thanks for the question, AP.

Asked: January 24, 2023 - 3:43 pm UTC

Last updated: January 25, 2023 - 3:11 am UTC

Version: 19.13.0.0

Viewed 1000+ times

You Asked

We currently use triggers on a subset of tables in our transactional database (on-premises) to capture row / column level changes and insert them into a separate table. With the availability of the DBMS_CLOUD package for our product version, we are exploring the option of rewriting some of the existing triggers to push the row / column changes to a native queueing service in the cloud instead of writing to another table in the database. The idea is then to have an asynchronous process subscribe to the queue and push the data to a downstream data store.
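
For reference, the existing triggers are along these lines (simplified; the table and column names here are just made-up placeholders):

create or replace trigger orders_audit_trg
after insert or update or delete on orders
for each row
begin
  -- capture the row / column level change into a separate change-log table
  insert into orders_change_log (change_time, operation, order_id, old_status, new_status)
  values (systimestamp,
          case when inserting then 'I' when updating then 'U' else 'D' end,
          coalesce(:new.order_id, :old.order_id),
          :old.status,
          :new.status);
end;
/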

The developer blog below has some very relevant information, but it is written for Autonomous Database and Oracle Cloud.

https://blogs.oracle.com/developers/post/back-to-the-database-part-3-publishing-database-changes-to-a-stream

Has anyone had success using the DBMS_CLOUD package on 19c to work with streaming / queuing services on Azure / AWS? In the documentation ( https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_CLOUD.html ), I see that it can interact with an object store, but unlike the documentation for Autonomous Database there is no mention of REST API capabilities. Is that an undocumented feature for 19c? I am looking for some clarity before embarking on a proof of concept. Thank you.


and Connor said...

I don't have a lot of experience with it, but I'd be hesitant to publish row changes etc *directly* to a REST endpoint due to the latency risks.

Thus I'd be looking at a workflow of:

a- row changes get pushed to a database queue (i.e. AQ) - see the sketch just below this list
b- a daemon-style process listens on that AQ and publishes via whatever REST calls etc you want to use.
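
To give step (a) a concrete shape, here is a rough sketch assuming a plain RAW-payload queue (the queue, table, trigger and column names are just placeholders):

begin
  dbms_aqadm.create_queue_table(
    queue_table        => 'change_event_qt',
    queue_payload_type => 'RAW');

  dbms_aqadm.create_queue(
    queue_name  => 'my_event_queue',
    queue_table => 'change_event_qt');

  dbms_aqadm.start_queue(queue_name => 'my_event_queue');
end;
/

create or replace trigger orders_stream_trg
after insert or update or delete on orders
for each row
declare
  l_enq  dbms_aq.enqueue_options_t;
  l_prop dbms_aq.message_properties_t;
  l_id   raw(16);
  l_doc  json_object_t := json_object_t();
begin
  -- describe the change as a small JSON document and enqueue it locally
  l_doc.put('op', case when inserting then 'I' when updating then 'U' else 'D' end);
  l_doc.put('order_id', coalesce(:new.order_id, :old.order_id));

  dbms_aq.enqueue(
    queue_name         => 'my_event_queue',
    enqueue_options    => l_enq,
    message_properties => l_prop,
    payload            => utl_raw.cast_to_raw(l_doc.to_string),
    msgid              => l_id);
end;
/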

You can create credentials for AWS and Azure just like you can for OCI, i.e.

begin
  -- for AWS the "username" is your access key ID and the "password"
  -- is your secret access key
  dbms_credential.create_credential(
    credential_name => 'my_cred',
    username        => '<aws_username>',
    password        => '<aws_secret_access_key>');
end;
/


and then your process in (b) would be along the lines of this (pseudo-code):

declare
  l_deq      dbms_aq.dequeue_options_t;
  l_prop     dbms_aq.message_properties_t;
  l_id       raw(16);
  l_json     raw(32767);              -- the change document, as enqueued (RAW payload queue)
  l_response dbms_cloud_types.resp;   -- return type of dbms_cloud.send_request
begin
  loop
    -- blocks (default wait) until the next change event arrives on the local queue
    dbms_aq.dequeue(
      queue_name         => 'my_event_queue',
      dequeue_options    => l_deq,
      message_properties => l_prop,
      payload            => l_json,
      msgid              => l_id);

    -- push the change to the cloud endpoint via the generic REST API
    l_response := dbms_cloud.send_request(
            credential_name => 'my_cred',
            uri             => 'https://s3....amazonaws.com/',
            method          => dbms_cloud.method_post,
            body            => l_json);

    commit;
  end loop;
end;


That way, the overhead on each row change is limited to just a local enqueue.
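
If you want the dequeue/publish loop in (b) to run as a background "daemon", one option is a DBMS_SCHEDULER job - a minimal sketch, assuming you wrap the loop above in a procedure called publish_change_events (a made-up name):

begin
  dbms_scheduler.create_job(
    job_name   => 'publish_change_events_job',
    job_type   => 'STORED_PROCEDURE',
    job_action => 'publish_change_events',  -- hypothetical wrapper around the dequeue loop
    enabled    => true);
end;
/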
