Architectural Advice: Reporting via Pipelined Functions vs. Views in a 4-Schema SmartDB

Question and Answer

Chris Saxon

Thanks for the question.

Asked: January 28, 2026 - 2:20 pm UTC

Last updated: February 18, 2026 - 3:27 pm UTC

Version: 24.2

Viewed 1000+ times

You Asked

I am implementing a 4-Schema SmartDB architecture for an Oracle APEX project to strictly enforce the principle of least privilege. My schemas are: DATA_LAYER (Tables), LOGIC_LAYER (Private Logic), API_LAYER (Public Interface), and APEX_UI_LAYER (Parsing Schema).

I am currently debating how to expose data for APEX reports. Below is a simplified example of my current "Pipelined Function" approach, which keeps everything encapsulated within packages:

-- 1. DATA_LAYER: Tables
CREATE TABLE DATA_LAYER.departments (
    department_id   NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    department_name VARCHAR2(100) NOT NULL UNIQUE
);

-- 2. LOGIC_LAYER: Private implementation
CREATE OR REPLACE PACKAGE LOGIC_LAYER.departments_pkg AUTHID DEFINER AS
    TYPE t_department_rec IS RECORD (
        department_id   NUMBER,
        department_name VARCHAR2(100)
    );
    TYPE t_department_tab IS TABLE OF t_department_rec;

    FUNCTION get_departments RETURN t_department_tab PIPELINED;
END departments_pkg;
/

CREATE OR REPLACE PACKAGE BODY LOGIC_LAYER.departments_pkg AS
    FUNCTION get_departments RETURN t_department_tab PIPELINED IS
    BEGIN
        FOR rec IN (SELECT department_id, department_name FROM DATA_LAYER.departments) 
        LOOP
            PIPE ROW(rec);
        END LOOP;
        RETURN;
    END get_departments;
END departments_pkg;
/

-- 3. API_LAYER: Public interface for APEX
CREATE OR REPLACE PACKAGE API_LAYER.departments_api AUTHID DEFINER AS
    FUNCTION get_departments RETURN LOGIC_LAYER.departments_pkg.t_department_tab PIPELINED;
END departments_api;
/

CREATE OR REPLACE PACKAGE BODY API_LAYER.departments_api AS
    FUNCTION get_departments RETURN LOGIC_LAYER.departments_pkg.t_department_tab PIPELINED IS
    BEGIN
        -- Calling the logic layer function
        FOR rec IN (SELECT * FROM TABLE(LOGIC_LAYER.departments_pkg.get_departments)) 
        LOOP
            PIPE ROW(rec);
        END LOOP;
        RETURN;
    END get_departments;
END departments_api;
/

-- 4. GRANTS
GRANT SELECT ON DATA_LAYER.departments TO LOGIC_LAYER;
GRANT EXECUTE ON LOGIC_LAYER.departments_pkg TO API_LAYER;
GRANT EXECUTE ON API_LAYER.departments_api TO APEX_UI_LAYER;


My Questions:

The Pipelined Approach: Is this "Double Pipelining" (UI calling API calling Logic) considered an anti-pattern due to context switching or overhead?

The View Alternative: If I should use Views instead for better performance and optimizer transparency, how should they be implemented across these four schemas?

Should the View live in the API_LAYER and select directly from the DATA_LAYER (requiring a direct grant across two layers)?

Or should there be a chain of views (Data -> Logic -> API)?

APEX Integration: Since APEX reports (Interactive Reports/Grids) add their own dynamic WHERE clauses, will the optimizer be able to "see through" these layers to use indexes on DATA_LAYER.departments?

I want to maintain the 4-schema separation without making the database work harder than it needs to.

and Chris said...

Table functions are best suited to cases where you need to process the data in ways you can't (easily) do in SQL.

Using them as a wrapper for pure SELECT * FROM ... queries severely limits the optimizer's options. It can't "see through" these layers.

For example, querying by primary key will no longer use its index:

insert into departments ( department_name )
  select dbms_random.string ( 'u', 20 ) department_name 
  from   dual
  connect by level <= 10000;

commit;

alter session set statistics_level = all;
set serveroutput off

select * from departments 
where  department_id = 1;

select * from dbms_xplan.display_cursor ( format => 'IOSTATS LAST');

------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |              |      1 |        |      1 |00:00:00.01 |       4 |
|   1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS  |      1 |      1 |      1 |00:00:00.01 |       4 |
|*  2 |   INDEX UNIQUE SCAN         | SYS_C0013614 |      1 |      1 |      1 |00:00:00.01 |       3 |
------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("DEPARTMENT_ID"=1)

select * from departments_pkg.get_departments ()
where  department_id = 1;

select * from dbms_xplan.display_cursor ( format => 'IOSTATS LAST');

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                 |      1 |        |      1 |00:00:00.14 |     937 |
|*  1 |  COLLECTION ITERATOR PICKLER FETCH| GET_DEPARTMENTS |      1 |     82 |      1 |00:00:00.14 |     937 |
---------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("KOKBF$"."DEPARTMENT_ID"=1)


Notice that the table function does over 230x more work (937 buffers vs 4).

(I've created the table and package in the same schema; separate schemas make no difference for this test).

Chaining them like this further restricts the optimizer.

So I would advise you to use views rather than (pipelined) table functions.
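For example, here's a minimal sketch of a view-based equivalent of your pipelined setup, assuming the view lives in the API_LAYER and reads DATA_LAYER directly (just one possible placement; the view name is made up, and READ vs SELECT is covered in the notes further down):

-- WITH GRANT OPTION lets API_LAYER grant its view onward to the UI schema;
-- without it the final grant below fails with ORA-01720
grant read on data_layer.departments to api_layer with grant option;

create or replace view api_layer.departments_v as
    select department_id,
           department_name
    from   data_layer.departments;

grant read on api_layer.departments_v to apex_ui_layer;

-- the optimizer merges the view, so a primary key lookup still uses the index
select * from api_layer.departments_v
where  department_id = 1;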

As for where best to put these views: I'm unsure why you've gone for four layers of schemas, so I'm also unsure where best to place them. Here are some thoughts which may help:

- Keep the number of views in a chain small, i.e. avoid views querying views querying views... The more views there are, the more chances there are of unintended side effects when you change a view at the bottom of the chain.
- Users with privileges on a view can see its definition in ALL_VIEWS. If you want to hide the logic in a view from the UI schema, it can't have direct access to that view.
- Views can't be parameterized, which has functionality and performance implications. SQL macros are a way to achieve this if needed (there's a sketch after this list).
- For read-only operations, grant READ rather than SELECT. With SELECT privileges, users can run SELECT ... FOR UPDATE and lock rows; with READ they can't.
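For the parameterized case, a rough sketch using a SQL table macro (assuming you're on a release that supports them; the function name and parameter are made up):

create or replace function api_layer.departments_named (
    p_name_pattern varchar2
) return varchar2 sql_macro (table) as
begin
    -- the text returned here is expanded into the calling query at parse time,
    -- so the optimizer can still merge it, push predicates and use indexes
    return q'[
        select department_id, department_name
        from   data_layer.departments
        where  department_name like p_name_pattern
    ]';
end departments_named;
/

select * from api_layer.departments_named ( 'A%' );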


Comments

Other schemas?

Stew Ashton, January 31, 2026 - 8:48 am UTC

I think a few other schemas were left out of the description:

- Which schema creates the other schemas and manages the privileges?
- Which schema is used by which developer to write the code for each schema?
- Which schema is used to debug when things go wrong and you have to dig through all those layers to find out where the bug is?

As I said during my company's first big "security" project in the 1980's: security isn't only about keeping the wrong people out, it's also about letting the right people in.

What is so "smart" about that design?

P.S. to Chris & Connor: now that you are answering questions that are not pure DB, people are answering the "version" question with versions of APEX or other products. Wouldn't it be nice if you could get them to mention the DB version anyway?
Connor McDonald
February 02, 2026 - 2:57 am UTC

Wouldn't it be nice if you could get them to mention the DB version anyway?

We'll look at an enhancement

With regard to the number of schemas (and the original question), I'm a fan of keeping security as simple as possible without compromising the security goals.

Because the more complexity you add, the greater the risk of human error in implementation, or of people deliberately subverting the process because it's too hard to work with.

I've lost track of the number of customers with extreme security processes in their database which ultimately ended up with less security (because people simply started bypassing the database and using things like Power BI or Excel to locally store the data they needed).

SmartDB architecture for Oracle APEX

Susana, February 02, 2026 - 2:12 pm UTC

Thanks for the initial answer — it's very helpful. I’d like to refine my original architectural question and ask for further guidance on a few specific points.

1. Does the 4-schema architecture make sense for a SmartDB (Thick DB) paradigm / Oracle APEX project?

My intent with this separation was to strictly enforce the principle of least privilege:

DATA_LAYER: Tables only

LOGIC_LAYER: Private logic (packages) accessing DATA_LAYER

API_LAYER: Public PL/SQL interface (packages) consuming LOGIC_LAYER

APEX_UI_LAYER: Parsing schema only has EXECUTE on API_LAYER

This parallels what I understand of the SmartDB (ThickDB) paradigm — i.e., all SELECT/INSERT/UPDATE/DELETE are done only via PL/SQL APIs inside the database, not ad-hoc queries from the UI.
Source: https://stevenfeuersteinonplsql.blogspot.com/2018/05/the-smartdb-resource-center.html?utm_source=chatgpt.com

But I’m wondering whether 4 schemas is too many, and whether there’s a recommended practice for how many layers/schemas I should have in a SmartDB design for Oracle APEX.

Questions:
* Is strict 4-schema separation worth the complexity?
* In your opinion, what schema separation would be a reasonable and recommended approach for SmartDB with Oracle APEX?

2. How should this be implemented in Oracle APEX?

Originally I asked about pipelined functions vs views for reporting. After reading your answer, it appears that using pipelined functions purely as “SELECT * FROM …” wrappers limits the optimizer’s ability to use indexes and push predicates.

So my revised question is:
When applying the SmartDB paradigm in APEX reporting (Interactive Reports, Grids, Classic Reports), what’s the recommended way to expose data? Should I use views instead of pipelined functions for read operations?

3. How to deal with grants on views?

If I use views for read access across schemas, how should the grants be managed?
Should I avoid using WITH GRANT OPTION on views to avoid privilege escalation? Should I use roles? How should the API_LAYER reach those views (should the API_LAYER own the views and select directly from DATA_LAYER — even though that crosses layers?)?
Any best practices around secure and least-privilege grants in this multi-schema pattern?

4. Example

Here’s a simple example of what I think might be a view-based alternative to the pipelined approach shown previously:

-- DATA layer
CREATE TABLE data_layer.departments (
    department_id   NUMBER PRIMARY KEY,
    department_name VARCHAR2(100)
);

-- API layer (or logic layer, if preferred)
CREATE OR REPLACE VIEW api_layer.v_departments AS
SELECT
    department_id,
    department_name
FROM
    data_layer.departments;

-- Grants
GRANT SELECT ON data_layer.departments TO api_layer;   -- read only
GRANT SELECT ON api_layer.v_departments TO apex_ui_layer;


Is this kind of view model a good approach in SmartDB for APEX reporting?
Or do you recommend something else?
Connor McDonald
February 18, 2026 - 6:48 am UTC

I think ACE director Philipp Salvisberg summed up things nicely some years back

https://www.salvis.com/blog/2018/07/18/the-pink-database-paradigm-pinkdb/

in which he basically makes the argument that SmartDB might be overkill for many, and that in its strictest form it can limit the functionality of tools that want to access the database (e.g. APEX).

My own take - I view SmartDB as more a philosophical experiment than a blueprint for applications. It's nice to know what a potential utopia is, but that doesn't necessarily mean you want to implement it :-)

Stew makes some good recommendations in his follow-up, which are also very similar to the PinkDB model.

APEX and strict SmartDB are incompatible!

Stew Ashton, February 03, 2026 - 1:38 pm UTC

At its origin, SmartDB was a strict architecture that did not allow any access to the database using SELECT, INSERT, UPDATE, DELETE, MERGE, COMMIT or ROLLBACK.

APEX uses ORDS to access the database, and ORDS commits all the time.

APEX itself selects all the time during the development phase, not to execute but to parse statements in order to populate reports.

There are also built-in APEX services to modify data directly from Interactive Grids. This requires INSERT, UPDATE and DELETE privileges on the data.

SmartDB and multiple schemas will not magically secure your data, and they may stop APEX from working as intended if you are not very careful (and maybe bend the rules).

Just brainstorming on a possible architecture:
- Start with an object layer: tables, indexes and maybe any object that must be dropped before being created.
- Then a code layer: PL/SQL and views. Maybe distinguish read-only views from updatable views, the latter for Interactive Grids, shuttle items and the like; do this to allow all the built-in APEX services that modify data. Accesses the object layer.
- Then what APEX calls the parsing schema, which is the database access layer. Accesses the code layer, but with all necessary grants (a rough sketch follows this list).
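To make that concrete, a rough sketch of the grant flow under this three-layer idea (all schema and object names here are made up for illustration):

-- object layer: tables and indexes only
create table obj_layer.departments (
    department_id   number generated by default as identity primary key,
    department_name varchar2(100) not null
);

-- code layer: views and PL/SQL over the object layer;
-- WITH GRANT OPTION lets it grant its (updatable) view onward
grant select, insert, update, delete
    on obj_layer.departments to code_layer with grant option;

create or replace view code_layer.departments_v as
    select department_id, department_name
    from   obj_layer.departments;

-- parsing schema: everything it needs comes from the code layer,
-- including DML so the built-in Interactive Grid processing keeps working
grant select, insert, update, delete
    on code_layer.departments_v to apex_app;
-- plus EXECUTE on any code-layer packages the app calls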

Personally, I don't see the point of separating the code layer and the database access layer unless you have two different development teams, one for each layer, with an architect to coordinate them. Please remember that security measures are needed for people, not schemas.

Arguments welcome...

Thanks for the clarification on SmartDB vs APEX realities

Susana, February 18, 2026 - 11:53 am UTC

Thank you very much for taking the time to write such a detailed and thoughtful response — it’s genuinely helpful.

Your clarification about the original SmartDB definition is especially important. You’re absolutely right that a strict “no SELECT / no DML” model clashes with how APEX and ORDS actually work in practice. That’s a very good reality check, and I appreciate you calling it out so clearly.

In particular, your statement that “There are also built-in APEX services to modify data directly from Interactive Grids. This requires INSERT, UPDATE and DELETE privileges on the data” was probably the most challenging point for me when trying to apply a SmartDB-style approach. Reconciling that requirement with strict encapsulation of data access is where I struggled the most, so this helped make the practical constraints much clearer.

Thanks again for sharing your perspective. It helped me recalibrate my approach toward something more pragmatic and APEX-friendly, rather than trying to force a purist SmartDB model where it doesn’t quite fit.
Chris Saxon
February 18, 2026 - 3:27 pm UTC

You're welcome. To me, SmartDB is more about making smart use of the database than sticking to a rigid architecture which may not fit your use case.
