My code would be:
begin
    my_pkg.fire_employee( .... );
    my_pkg.hire_employee( .... );
end;
/
and if you program well-formed transactions (fire and hire each do one thing, the basic premise of modular coding; they do the transaction work), you cannot "forget anything"
You would never commit in PL/SQL (I wish PL/SQL could not COMMIT or ROLLBACK; it would be a much better language without them).
The client is the only one smart enough to commit or roll back - never a small bit of code that performs a database operation.
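For illustration only, here is a minimal sketch of what such a package could look like (the emp table, its columns, and the parameter names are just made-up examples, not anyone's actual code):

create or replace package my_pkg
as
    -- each procedure is a complete, well-formed transaction:
    -- it does its one piece of work and nothing else
    procedure fire_employee( p_empno in number );
    procedure hire_employee( p_empno in number, p_ename in varchar2 );
end my_pkg;
/

create or replace package body my_pkg
as
    procedure fire_employee( p_empno in number )
    is
    begin
        delete from emp where empno = p_empno;
        -- no commit here, the client decides when the transaction ends
    end fire_employee;

    procedure hire_employee( p_empno in number, p_ename in varchar2 )
    is
    begin
        insert into emp ( empno, ename ) values ( p_empno, p_ename );
        -- no commit here either
    end hire_employee;
end my_pkg;
/

The client then issues the commit or rollback once the entire business transaction has succeeded.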
I find the "object" approach you took to be obscure and obfuscated; it is not at all easy to follow the "flow". You have to instantiate an object to tell the system "I think I might want to do something - later, not now, later - maybe", and then you have to tell it "now do that thing I said before I might want to do sometime".
Rather than just saying "hey, do this, do it now, tell me how it went, thanks"
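To make the contrast concrete, here is a sketch of the kind of two-step pattern being described; the hire_request type, its attributes, and the emp table are names I am making up purely for illustration:

create or replace type hire_request as object
(
    empno  number,
    ename  varchar2(30),
    member procedure exec
);
/
create or replace type body hire_request
as
    member procedure exec
    is
    begin
        insert into emp ( empno, ename ) values ( self.empno, self.ename );
    end;
end;
/

declare
    -- step 1: instantiate, "I think I might want to do something later"
    l_req  hire_request := hire_request( 1234, 'SMITH' );
begin
    -- step 2: "do that thing I said I might want to do"
    l_req.exec;
end;
/

Two steps, one extra type, one extra type body - all to do what a single call to my_pkg.hire_employee does directly.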
Nothing personal, but your example is actually (in my opinion) the poster child for why "OO" "adds not much, even takes away" in most cases.
Consider the polymorphic surprises you would get into with such an approach (think five years from now: you have moved on, the code base is monolithic, and someone else owns it)....
procedure p ( l_obj in actionbase )
is
begin
    l_obj.exec;  -- <<<=== what the HECK did I just do, what code did I actually call?
end;
After I backtrace through all possible invocations, I discover "sitting here, I have no way of knowing what I will actually call at runtime".
It is like a great big "jump into the unknown"
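To make that concrete, here is a minimal sketch of the kind of hierarchy involved. Only actionbase and exec come from the snippet above; fire_action, hire_action, the id attribute, and the sample values are invented for illustration:

-- an abstract base type with an abstract exec method
create or replace type actionbase as object
(
    id  number,
    not instantiable member procedure exec
) not instantiable not final;
/

-- two of possibly many subtypes, each overriding exec to do something different
create or replace type fire_action under actionbase
(
    overriding member procedure exec
);
/
create or replace type body fire_action
as
    overriding member procedure exec
    is
    begin
        dbms_output.put_line( 'firing employee ' || id );   -- imagine a delete here
    end;
end;
/

create or replace type hire_action under actionbase
(
    overriding member procedure exec
);
/
create or replace type body hire_action
as
    overriding member procedure exec
    is
    begin
        dbms_output.put_line( 'hiring employee ' || id );   -- imagine an insert here
    end;
end;
/

create or replace procedure p ( l_obj in actionbase )
is
begin
    l_obj.exec;   -- which body runs depends on the runtime type of l_obj
end;
/

begin
    p( fire_action( 7369 ) );   -- runs fire_action's exec
    p( hire_action( 7900 ) );   -- runs hire_action's exec
end;
/

Looking at p alone, nothing tells you which exec body will run; you have to know every subtype that exists, including the ones someone will write years from now.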
No, I don't find this maintainable, understandable, or a good approach, as opposed to straightforward "modular coding".