I looked over the examples in that follow-up. They don't address the use-case, because yes, I DID need to update a BLOB purely from the bytes/hex of a file within an application, under different circumstances than when a file would normally be expected to be loaded.
And apparently "dbms_lob.loadfromfile" has its own share of problems:
http://www.oracledba.co.uk/tips/load_lob.htm That page uses append calls instead, which is where I ended up. But those have their own limit (32,767 bytes per chunk).
1. & 2. Each of these sites shows updating a BLOB with an OracleParameter - which I can and do use in my application when I am connected to Oracle and someone uploads a file. The problem is when I am disconnected from Oracle and have swapped to a SQLite instance, but still need to update Oracle automatically later with that file. In the meantime, the bytes from the file, along with the full update query, have to be stored somewhere alongside all the other Oracle update queries that didn't get made, and then re-read in a way that doesn't require special-casing one kind of query versus another when re-processing them. Because how would you even mark a file-upload query, stored in a file to be re-run later with the non-file-upload queries, as needing the extra attention of pulling the bytes back out and building an OracleParameter from them? Parse each query for "hextoraw" (since long hex strings can't be run directly anyway), grab the hex, convert it back to bytes, assign it to an OracleParameter, and re-insert it after extracting the rest of the query string? Sure, I GUESS....!!! But I'd rather build SQL that I don't have to parse or run any differently from any other query - that's a lot of extra work and testing. And there is very little documentation out there on how to do this while staying under all of Oracle's string and variable limits, short of somehow getting the user to re-upload the file to bytes again.
3. That one shows a read operation, using GetOracleBlob(), so it is not applicable to the use-case. (I actually did use that same documentation when adding the code that reads BLOB bytes back out of the database.)
Regarding my earlier review, though, I did later find that the extra "0" hex byte only gets appended by hextoraw when you use an odd chunk size. If you split the hex into even-length chunks under 32,767 characters, you can build a BLOB through dbms_lob.createtemporary() and dbms_lob.append(), converting each chunk's hex to RAW as you append it, then set your database field equal to that variable, and it works fine.
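A minimal sketch of that chunked approach (the table name `my_table`, column `blob_col`, and the hex values here are hypothetical, and I'm using dbms_lob.writeappend - the RAW-buffer variant of append - with even-length chunks under the 32,767-character limit):

```sql
DECLARE
  l_blob BLOB;
  l_raw  RAW(16384);
BEGIN
  -- Assemble the BLOB in memory, one even-length hex chunk at a time.
  DBMS_LOB.CREATETEMPORARY(l_blob, TRUE);

  l_raw := HEXTORAW('48656C6C6F');                       -- chunk 1
  DBMS_LOB.WRITEAPPEND(l_blob, UTL_RAW.LENGTH(l_raw), l_raw);

  l_raw := HEXTORAW('2C20776F726C6421');                 -- chunk 2
  DBMS_LOB.WRITEAPPEND(l_blob, UTL_RAW.LENGTH(l_raw), l_raw);

  -- ...one WRITEAPPEND per chunk...

  -- Set the field equal to the assembled variable in a single statement.
  UPDATE my_table SET blob_col = l_blob WHERE id = 42;

  DBMS_LOB.FREETEMPORARY(l_blob);
  COMMIT;
END;
/
```

The whole thing is one self-contained anonymous block, so it can be stored and replayed later exactly like any other queued-up query, with no special handling.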
The problem with all this was learning:
1) that there is a 4000-character limit on setting field values directly in a statement, even for BLOB/CLOB fields that can store 4 GB of data! (very odd, to me),
2) the chunk-size limit (32,767) (very small compared to today's file sizes, if you ask me),
3) the bug with odd-numbered chunk sizes (why would "hextoraw" try to compensate like this and corrupt the file in the process, instead of just raising an error that there's an odd number of hex digits to convert to bytes?), and
4) how you can't put the hex directly into an UPDATE/INSERT query to set a field, yet you can use a BLOB variable, append to it in chunks, and then set the field equal to that variable without issue -- so why can't the query itself allow the field to be set directly? (It makes little sense to me that Oracle can't be made to split these chunks itself - it has to be told to, with individualized instructions...)
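To illustrate point 4, here is the contrast (hypothetical table and column names again, and assuming the hex literal is over the string-literal limit):

```sql
-- Fails with ORA-01704 (string literal too long) once the hex
-- literal passes the SQL string-literal limit:
-- UPDATE my_table SET blob_col = HEXTORAW('AB12...thousands of chars...') WHERE id = 1;

-- Works: build the same bytes up in a PL/SQL BLOB variable,
-- then set the field equal to the variable in one statement:
DECLARE
  l_blob BLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_blob, TRUE);
  -- ...append the hex chunks here, as shown earlier...
  UPDATE my_table SET blob_col = l_blob WHERE id = 1;
  DBMS_LOB.FREETEMPORARY(l_blob);
END;
/
```

Same bytes, same destination field - the only difference is the indirection through a variable.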
Requiring someone to build a way to programmatically break apart the hex and append it like this seemed a little much - as if no one at Oracle ever dreamed there would be a use-case for a query setting a field equal to a long run of hex in one swoop (longer than 4000 chars, I mean). And I think the response I got - to use loadfromfile, which doesn't always work, and OracleParameter, for which I'd have to do even more work (parsing the hex out of the queries that SHOULD work but can't, because of size limits that are tiny compared to the 4 GB BLOB limit or the size of an average file today, then re-converting the hex to bytes and assigning it) - shows that this is the case.