I understand that instead of the log generation rate, we should consider the amount of redo generated.
My question is: how can we identify a database bottleneck using the amount of redo?
Is there any way to compare the amount of redo generated between two days, or perhaps two months, and see whether there is a need for tuning or re-sizing?
I generally would not focus on generation rate, because you'll generate what you need ... not more, not less.
That is only an issue if it is impacting user sessions, or if it is impacting activities outside the database (for example, redo processing might consume so much I/O bandwidth that it slows down I/O for non-database operations).
For the user sessions, you want to measure whether they are getting stalled waiting for LGWR activities to complete.
SQL> select name from V$EVENT_NAME
  2  where name like 'log%';

NAME
----------------------------------------
log file single write
log file parallel write
log buffer space
log file switch (checkpoint incomplete)
log file switch (private strand flush incomplete)
log file switch (archiving needed)
log file switch completion
log file sync
Monitor your system for the events above, which may be indicative of a redo issue. I use the term "issue" rather than "I/O" because it is not necessarily related to I/O: if your server is under CPU strain, that can slow down LGWR, which then *looks* like a redo problem because everyone gets held up waiting on it.
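As a sketch of how you might monitor these, here is an instance-wide view of cumulative waits for the redo-related events using V$SYSTEM_EVENT (the event names are the ones listed above; TOTAL_WAITS and TIME_WAITED_MICRO are standard columns on that view):

```sql
-- Cumulative instance-wide waits on redo-related events since startup.
-- Large or fast-growing SECONDS_WAITED for "log file sync" suggests
-- sessions are stalling on LGWR.
select event,
       total_waits,
       round(time_waited_micro / 1e6, 1) as seconds_waited
from   v$system_event
where  event in ('log file sync',
                 'log buffer space',
                 'log file switch completion',
                 'log file switch (checkpoint incomplete)',
                 'log file switch (archiving needed)',
                 'log file switch (private strand flush incomplete)')
order  by time_waited_micro desc;
```

Because these figures are cumulative since instance startup, you would normally compare two snapshots over a known interval (or use AWR/Statspack) rather than read the raw totals.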
v$log_history and v$archived_log can show you redo log switch frequency, which can then be mapped to redo size over a nominated period.
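For example, a pair of queries along these lines would give you a per-day comparison, assuming your redo is being archived (BLOCKS and BLOCK_SIZE are standard columns on V$ARCHIVED_LOG):

```sql
-- Log switch frequency per day from V$LOG_HISTORY.
select trunc(first_time) as day,
       count(*)          as log_switches
from   v$log_history
group  by trunc(first_time)
order  by 1;

-- Redo volume per day (in MB) from V$ARCHIVED_LOG.
select trunc(first_time)                           as day,
       round(sum(blocks * block_size) / 1024 / 1024) as redo_mb
from   v$archived_log
group  by trunc(first_time)
order  by 1;
```

Comparing these figures across two days or months shows you whether redo volume is trending up; on its own, though, a higher number just means more work was done, not that anything needs tuning.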