It is typically not the datafiles, but the metadata associated with each object. The most common large piece of metadata for those objects is optimizer statistics. Once you start adding in histograms on each column, that can be a lot of metadata.
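If you want a rough feel for how much histogram metadata is involved, a query along these lines gives a count of histogram endpoints per table (a sketch only; 'MY_SCHEMA' is a placeholder for the schemas you are transporting):

select owner, table_name, count(*) as histogram_endpoints
from   dba_tab_histograms
where  owner = 'MY_SCHEMA'
group  by owner, table_name
order  by histogram_endpoints desc;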
To see where the time is being lost, you can check v$active_session_history, or you can do a system-wide trace during the import, i.e.
1) alter system set events='10046 trace name context forever, level 8';
2) run the impdp
3) alter system set events='10046 trace name context off';
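For the active session history route, something like the following gives a quick breakdown of where the Data Pump sessions are spending their time (a sketch only; assumes you are licensed for the Diagnostics Pack, and adjust the time window to cover your import run - Data Pump sessions normally report a module starting with 'Data Pump'):

select session_id, nvl(event, 'ON CPU') as wait_event, count(*) as samples
from   v$active_session_history
where  sample_time > systimestamp - interval '30' minute
and    module like 'Data Pump%'
group  by session_id, nvl(event, 'ON CPU')
order  by samples desc;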
There is also a TRACE parameter in Data Pump itself, where the level controls which components are traced (see the example after the table):
level Purpose
------- -----------------------------------------------
10300 To trace the Shadow process (API) (expdp/impdp)
20300 To trace Fixed table
40300 To trace Process services
80300 To trace Master Control Process
100300 To trace File Manager
200300 To trace Queue services
400300 To trace Worker process(es)
800300 To trace Data Package
1000300 To trace Metadata Package
1FF0300 To trace all components (full tracing)
Several trace files will be created (because Data Pump uses multiple worker processes), but that should give you some insight into the cause.
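For example, full tracing could be switched on like this (a sketch only - TRACE is an undocumented parameter best used under Oracle Support guidance, and the credentials/parfile names are placeholders):

impdp system@mydb parfile=tts_import.par trace=1FF0300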
If it is indeed the optimizer stats, then you could potentially defer importing them until after the datafiles are transported, and then do some (manual) parallelisation of that step.
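For example (a sketch, assuming you are happy to regather the statistics afterwards rather than carry them across; schema name and degree are placeholders):

impdp system@mydb parfile=tts_import.par exclude=statistics

Then, once the import is complete, gather the stats in parallel:

begin
  dbms_stats.gather_schema_stats(
    ownname => 'MY_SCHEMA',   -- placeholder schema
    degree  => 8);            -- parallel gather
end;
/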