
Oracle SPD status on two learning paths


By Franck Pachot

I have written a lot about SQL Plan Directives, which appeared in 12c. They were used by default and, because of some side effects at the time of 12cR1 with legacy applications that were parsing too much, they have been disabled by default in 12cR2. Today, they are probably not used enough because of their bad reputation from those times. But for data warehouses, they should be the default in my opinion.

There is a behaviour that surprised me initially and I thought it was a bug but, after 5 years, the verdict is: expected behaviour (Bug 20311655 : SQL PLAN DIRECTIVE INVALIDATED BY STATISTICS FEEDBACK). The name of the bug is my fault: I initially thought that the statistics feedback had been wrongly interpreted as HAS_STATS. But actually, this behaviour has nothing to do with it: it was visible there only because the re-optimization had triggered a new hard parse, which changed the state. Any other query on similar predicates would have done the same.

And this is what I’m showing here: when the misestimate cannot be solved by extended statistics, the learning path of a SQL Plan Directive has to go through this HAS_STATS state, where the misestimate will occur again. Whether extended statistics can help or not is anticipated by the optimizer. For this reason, I’ve run two sets of examples: one with a predicate where no column group can help, and one where extended statistics can be created.

SQL> show parameter optimizer_adaptive
NAME                              TYPE    VALUE 
--------------------------------- ------- ----- 
optimizer_adaptive_plans boolean TRUE 
optimizer_adaptive_reporting_only boolean FALSE 
optimizer_adaptive_statistics boolean TRUE 

Since 12.2 the adaptive statistics are disabled by default: SQL Plan Directives are created but not used. This is fine for OLTP databases that are upgraded from previous versions. However, for data warehouses, analytics, ad-hoc queries and reporting, enabling adaptive statistics may help a lot when the static statistics are not sufficient to optimize complex queries.

SQL> alter session set optimizer_adaptive_statistics=true;

Session altered.

I’m enabling adaptive statistics for my session.

SQL> exec for r in (select directive_id from dba_sql_plan_dir_objects where owner=user) loop begin dbms_spd.drop_sql_plan_directive(r.directive_id); exception when others then raise; end; end loop;

I’m removing all SQL Plan Directives in my lab to build a reproducible test case.
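To double-check that the cleanup worked, a simple count on the directive objects of the current schema should return zero. This is just a verification query I add here as a sketch; it is not part of the original demo output:

SQL> -- verification sketch: no remaining directive objects for this schema
SQL> select count(*) from dba_sql_plan_dir_objects where owner=user;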

SQL> create table DEMO pctfree 99 as select mod(rownum,2) a,mod(rownum,2) b,mod(rownum,2) c,mod(rownum,2) d from dual connect by level <=1000;

Table DEMO created.

This is my test table, built on purpose with a special distribution of data: in each row, all columns have the same value, either 0 or 1.

SQL> alter session set statistics_level=all;

Session altered.

I’m profiling down to the execution plan operations in order to see all execution statistics.

SPD learning path {E}:
USABLE(NEW)->SUPERSEDED(HAS_STATS)->USABLE(PERMANENT)

SQL> select count(*) c1 from demo where a+b+c+d=0;

    C1 
______ 
   500 

Here is a query where dynamic sampling can help to get better statistics on selectivity, but where no static statistic can help, not even on a column group (extended statistics on expressions are not considered for SQL Plan Directives, even in 21c).

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                                PLAN_TABLE_OUTPUT 
_________________________________________________________________________________________________ 
SQL_ID  fjcbm5x4014mg, child number 0                                                             
-------------------------------------                                                             
select count(*) c1 from demo where a+b+c+d=0                                                      
                                                                                                  
Plan hash value: 2180342005                                                                       
                                                                                                  
----------------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |    
----------------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.03 |     253 |    250 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.03 |     253 |    250 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |     10 |    500 |00:00:00.03 |     253 |    250 |    
----------------------------------------------------------------------------------------------    
                                                                                                  
Predicate Information (identified by operation id):                                               
---------------------------------------------------                                               
                                                                                                  
   2 - filter("A"+"B"+"C"+"D"=0)      

As expected, the estimation (10 rows) is far from the actual number of rows (500). This statement is flagged for re-optimization with cardinality feedback, but I’m interested in different SQL statements here.
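As a side note, the re-optimization flag can be checked on the cursor itself. Here is a sketch of such a query, using the SQL_ID displayed above (V$SQL exposes IS_REOPTIMIZABLE since 12c):

SQL> -- sketch: check whether this cursor is marked for re-optimization
SQL> select sql_id, child_number, is_reoptimizable from v$sql where sql_id='fjcbm5x4014mg';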

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';


    STATE    INTERNAL_STATE                      SPD_TEXT 
_________ _________________ _____________________________ 
USABLE    NEW               {E(DEMO.DEMO)[A, B, C, D]}    

A SQL Plan Directive has been created to keep the information that equality predicates on columns A, B, C and D are misestimated. The directive is in internal state NEW. The visible state is USABLE which means that dynamic sampling will be used by queries with a similar predicate on those columns.
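If you want to see which table and columns a directive tracks, DBA_SQL_PLAN_DIR_OBJECTS lists one row per tracked object. A quick sketch (output not shown here):

SQL> -- sketch: list the objects (table and columns) referenced by my directives
SQL> select directive_id, object_type, object_name, subobject_name from dba_sql_plan_dir_objects where owner=user order by directive_id, object_type;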

SQL> select count(*) c2 from demo where a+b+c+d=0;

    C2 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  5sg7b9jg6rj2k, child number 0                                                    
-------------------------------------                                                    
select count(*) c2 from demo where a+b+c+d=0                                             
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter("A"+"B"+"C"+"D"=0)                                                         
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement      

As expected, a different query (note that I changed the column alias C1 to C2, but anything can be different as long as there’s an equality predicate involving the same columns) has accurate estimations (E-Rows=A-Rows) because of dynamic sampling (dynamic statistics), thanks to the SQL Plan Directive that was used.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';

        STATE    INTERNAL_STATE                      SPD_TEXT 
_____________ _________________ _____________________________ 
SUPERSEDED    HAS_STATS         {E(DEMO.DEMO)[A, B, C, D]}    

This is the important part and, initially, I thought it was a bug, because SUPERSEDED means that the next query on similar columns will not do dynamic sampling anymore, and then will have bad estimations. HAS_STATS does not mean that we have correct estimations here, but only that there are no additional static statistics that can help. This is because the optimizer has detected an expression (“A”+”B”+”C”+”D”=0) and automatic statistics extensions do not consider expressions.
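Note that an extension on the expression itself can still be created manually with DBMS_STATS if you prefer static statistics over dynamic sampling for this predicate. A sketch of what it could look like (not done in this demo, which relies on dynamic sampling instead):

SQL> -- sketch only: manually create an expression statistics extension, then re-gather
SQL> select dbms_stats.create_extended_stats(user,'DEMO','(A+B+C+D)') from dual;
SQL> exec dbms_stats.gather_table_stats(user,'DEMO');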

SQL> select count(*) c3 from demo where a+b+c+d=0;

    C3 
______ 
   500 


SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  62cf5zwt4rwgj, child number 0                                                    
-------------------------------------                                                    
select count(*) c3 from demo where a+b+c+d=0                                             
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |     10 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter("A"+"B"+"C"+"D"=0)    

We are still in the learning phase and, as you can see, even if we know that there is a misestimate (an SPD has been created), adaptive statistics tries to avoid dynamic sampling: no SPD usage is mentioned in the notes, and we are back to the misestimate of E-Rows=10.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';

    STATE    INTERNAL_STATE                      SPD_TEXT 
_________ _________________ _____________________________ 
USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}    

The HAS_STATS state and the misestimate were temporary. The optimizer has now validated that, with all possible static statistics available (HAS_STATS), we still have a misestimate, and has therefore moved the SPD status to PERMANENT: end of the learning phase, we will permanently do dynamic sampling for this kind of query.

SQL> select count(*) c4 from demo where a+b+c+d=0;

    C4 
______ 
   500 


SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  65ufgd70n61nh, child number 0                                                    
-------------------------------------                                                    
select count(*) c4 from demo where a+b+c+d=0                                             
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter("A"+"B"+"C"+"D"=0)                                                         
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement                                        
                                                            

Yes, it has an overhead at hard parse time, but that helps to get better estimations and then faster execution plans. The execution plan shows that dynamic sampling is done because of SPD usage.

SPD learning path {EC}:
USABLE(NEW)->USABLE(MISSING_STATS)->SUPERSEDED(HAS_STATS)

I’m now running a query where the misestimate can be avoided with additional statistics: a column group statistics extension.
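For reference, such a column group extension could also be created manually with DBMS_STATS. I’m not doing it here, because the point of this demo is to let the adaptive mechanism create it, but it would look like this sketch:

SQL> -- sketch only: manual column group creation, not executed in this demo
SQL> select dbms_stats.create_extended_stats(user,'DEMO','(A,B,C,D)') from dual;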

SQL> select count(*) c1 from demo where a=0 and b=0 and c=0 and d=0;

    C1 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  2x5j71630ua0z, child number 0                                                    
-------------------------------------                                                    
select count(*) c1 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |     63 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))   

I have a misestimate here (E-Rows much lower than A-Rows) because the optimizer doesn’t know the correlation between A, B, C and D.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';


    STATE    INTERNAL_STATE                       SPD_TEXT 
_________ _________________ ______________________________ 
USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
USABLE    NEW               {EC(DEMO.DEMO)[A, B, C, D]}    

I now have a new SQL Plan Directive and the difference from the previous one is that the equality predicate is a simple column equality on each column (EC) instead of an equality on an expression (E). From that, the optimizer knows that a statistics extension on the column group may help.

SQL> select count(*) c2 from demo where a=0 and b=0 and c=0 and d=0;

    C2 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');


                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  5sg8p03mmx7ca, child number 0                                                    
-------------------------------------                                                    
select count(*) c2 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))                                     
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement       

So, the NEW directive is in a USABLE state: the SPD is used to do some dynamic sampling, as in the previous example.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING';

    STATE    INTERNAL_STATE                       SPD_TEXT 
_________ _________________ ______________________________ 
USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
USABLE    MISSING_STATS     {EC(DEMO.DEMO)[A, B, C, D]}    

Here we have an additional state during the learning phase because there’s something else that can be done: we are not in HAS_STATS because more statistics can be gathered; we are in the MISSING_STATS internal state. This is a USABLE state, so dynamic sampling continues until we gather statistics.

SQL> select count(*) c3 from demo where a=0 and b=0 and c=0 and d=0;

    C3 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  d8zyzh140xk0d, child number 0                                                    
-------------------------------------                                                    
select count(*) c3 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))                                     
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement       

That can continue for a long time, with the SPD in USABLE state and dynamic sampling compensating for the missing stats, but at the cost of additional work at hard parse time.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select created,state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING' order by last_used;

    CREATED     STATE    INTERNAL_STATE                       SPD_TEXT 
___________ _________ _________________ ______________________________ 
20:52:11    USABLE    PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
20:52:16    USABLE    MISSING_STATS     {EC(DEMO.DEMO)[A, B, C, D]}    

The status will not change until statistics gathering occurs.

SQL> exec dbms_stats.set_table_prefs(user,'DEMO','AUTO_STAT_EXTENSIONS','on');

PL/SQL procedure successfully completed.

In the same idea as adaptive statistics not being enabled by default, the automatic creation of statistics extensions is not there by default. I enable it here for this table only but, as with many dbms_stats operations, you can do that at schema, database or global level. Usually, you do it initially when creating the table, or simply at database level because it works in pair with adaptive statistics, but in this demo I waited in order to show that, even if the decision to go to the HAS_STATS or MISSING_STATS state depends on the possibility of extended stats creation, this decision is made without looking at the dbms_stats preference.
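For example, the same preference could be set at schema or database level with one of the following calls (a sketch, to be evaluated carefully before doing it on a production system):

SQL> -- sketch: enable automatic statistics extensions beyond a single table
SQL> exec dbms_stats.set_schema_prefs(user,'AUTO_STAT_EXTENSIONS','ON');
SQL> exec dbms_stats.set_global_prefs('AUTO_STAT_EXTENSIONS','ON');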

SQL> exec dbms_stats.gather_table_stats(user,'DEMO', options=>'gather auto');

PL/SQL procedure successfully completed.

Note that I’m gathering the statistics like the automatic job does: GATHER AUTO. As I did not change any rows, the table statistics are not stale but the new directive in MISSING_STATS tells DBMS_STATS that there’s a reason to re-gather the statistics.

And if you look at the statistics extensions there, there’s a new statistics extension on the (A,B,C,D) column group. Just look at USER_STAT_EXTENSIONS.
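A query like this should show it (the extension name is system-generated, so yours will be different):

SQL> -- sketch: list the statistics extensions on the DEMO table
SQL> select extension_name, extension, creator, droppable from user_stat_extensions where table_name='DEMO';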

SQL> select count(*) c4 from demo where a=0 and b=0 and c=0 and d=0;

    C4 
______ 
   500 

SQL> select * from dbms_xplan.display_cursor(format=>'allstats last');

                                                                       PLAN_TABLE_OUTPUT 
________________________________________________________________________________________ 
SQL_ID  g08m3qrmw7mgn, child number 0                                                    
-------------------------------------                                                    
select count(*) c4 from demo where a=0 and b=0 and c=0 and d=0                           
                                                                                         
Plan hash value: 2180342005                                                              
                                                                                         
-------------------------------------------------------------------------------------    
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |    
-------------------------------------------------------------------------------------    
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |00:00:00.01 |     253 |    
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |00:00:00.01 |     253 |    
|*  2 |   TABLE ACCESS FULL| DEMO |      1 |    500 |    500 |00:00:00.01 |     253 |    
-------------------------------------------------------------------------------------    
                                                                                         
Predicate Information (identified by operation id):                                      
---------------------------------------------------                                      
                                                                                         
   2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))                                     
                                                                                         
Note                                                                                     
-----                                                                                    
   - dynamic statistics used: dynamic sampling (level=AUTO)                              
   - 1 Sql Plan Directive used for this statement     

You may think that no dynamic sampling is needed anymore but the Adaptive Statistics mechanism is still in the learning phase: the SPD is still USABLE and the next parse will verify if MISSING_STATS can be superseded by HAS_STATS. This is what happened here.

SQL> exec dbms_spd.flush_sql_plan_directive;

PL/SQL procedure successfully completed.

SQL> select created,state, extract(notes,'/spd_note/internal_state/text()') internal_state, extract(notes,'/spd_note/spd_text/text()') spd_text from dba_sql_plan_directives where directive_id in (select directive_id from dba_sql_plan_dir_objects where owner=user) and type='DYNAMIC_SAMPLING' order by last_used;

    CREATED         STATE    INTERNAL_STATE                       SPD_TEXT 
___________ _____________ _________________ ______________________________ 
20:52:11    USABLE        PERMANENT         {E(DEMO.DEMO)[A, B, C, D]}     
20:52:16    SUPERSEDED    HAS_STATS         {EC(DEMO.DEMO)[A, B, C, D]}    

Here, SUPERSEDED means no more dynamic sampling for predicates with simple column equality on A, B, C, D because it now HAS_STATS.

In the past, I mean before 12c, I often recommended enabling dynamic sampling (with optimizer_dynamic_sampling >= 4) on data warehouses, or for sessions running complex ad-hoc queries for reporting. And no dynamic sampling, with manual statistics extensions created only when required, for OLTP databases where we can expect less complex queries and where hard parse time may be a problem.
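That old recommendation was a simple setting, for example at session level for the reporting sessions:

SQL> -- sketch: the pre-12c style recommendation for reporting sessions
SQL> alter session set optimizer_dynamic_sampling=4;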

Now, in the same idea, I’d rather recommend enabling adaptive statistics because it is a finer-grained optimization. As we see here, only one kind of predicate does dynamic sampling, and this dynamic sampling is the “adaptive” one, estimating not only single-table cardinality but joins and aggregations as well. This is the USABLE (PERMANENT) directive. The other one did it only temporarily, until statistics extensions were automatically created, and is now SUPERSEDED with HAS_STATS.

In summary, the MISSING_STATS state is seen only when, given the simple column equality, there are possible statistics that are missing. And HAS_STATS means that all the statistics the optimizer can use for this predicate are available and no more can be gathered. Each directive will go through HAS_STATS during the learning phase. It then stays in HAS_STATS, or switches definitively to the PERMANENT state when a misestimate is encountered again while in HAS_STATS.



Password rolling change before Oracle 21c


By Franck Pachot

You may have read about Gradual Password Rollover usage from Mouhamadou Diaw and about some internals from Rodrigo Jorge. But it works only in 21c, which is, for the moment, available only in the cloud, in Autonomous Database and DBaaS (and there I’ve encountered some problems, apparently because of a bug, when using SQL*Net native encryption). But your production is not yet in 21c anyway. However, here is how you can achieve a similar goal in 12c, 18c or 19c: be able to connect with two passwords during the time window where you change the password, in a rolling fashion, in the application server configuration.

Proxy User

If your application still connects with the application owner, you are doing it wrong. Even when it needs to be connected to the application schema by default, and even when you can’t do an “alter session set current_schema”, you don’t have to use this user for authentication. And this is really easy with proxy users. Consider the application owner as a schema, not as a user to connect with.

My application is in schema DEMO and I’ll not use DEMO credentials. You can set an impossible password or, better, in 18c, set no password at all. I’ll use a proxy user authentication to connect to this DEMO user:


19:28:49 DEMO@atp1_tp> grant create session to APP2020 identified by "2020 was a really Bad Year!";
Grant succeeded.

19:28:50 DEMO@atp1_tp> alter user DEMO grant connect through APP2020;
User DEMO altered.

The APP2020 user is the one I’ll use. I named it 2020 because I want to change the credentials every year and, as I don’t have the gradual password rollover feature, this means changing the user to connect with.


19:28:50 DEMO@atp1_tp> connect APP2020/"2020 was a really Bad Year!"@atp1_tp
Connected.
19:28:52 APP2020@atp1_tp> show user
USER is "APP2020"

This user can connect as usual, as it has the CREATE SESSION privilege. There is a way to prevent this and allow PROXY ONLY CONNECT, but this is unfortunately not documented (Miguel Anjo has written about this), so better not to use it.

However, the most important part is:


19:28:52 APP2020@atp1_tp> connect APP2020[DEMO]/"2020 was a really Bad Year!"@atp1_tp
Connected.

19:28:53 DEMO@atp1_tp> show user
USER is "DEMO"

With a proxy connection, in addition to the proxy user credentials, I mention the final user I want to connect to through this proxy user. Now I’m in exactly the same state as if I had connected with the DEMO user.

No authentication


19:28:54 ADMIN@atp1_tp> alter user DEMO no authentication;
User DEMO altered.

As we don’t authenticate with this user anymore (and once I’m sure no application uses its password), the best is to set it to NO AUTHENTICATION.

New proxy user

Now that the application has been using APP2020 for months, I want to change the password. I’ll add a new proxy user for that:


19:28:54 ADMIN@atp1_tp> show user
USER is "ADMIN"

19:28:53 ADMIN@atp1_tp> grant create session to APP2021 identified by "Best Hopes for 2021 :)";
Grant succeeded.

19:28:54 ADMIN@atp1_tp> alter user DEMO grant connect through APP2021;
User DEMO altered.

Here I have another proxy user that can be used to connect to DEMO, in addition to the existing one.


19:28:54 ADMIN@atp1_tp> connect APP2020[DEMO]/"2020 was a really Bad Year!"@atp1_tp
Connected.

19:28:55 DEMO@atp1_tp> show user
USER is "DEMO"

19:28:55 DEMO@atp1_tp> connect APP2021[DEMO]/"Best Hopes for 2021 :)"@atp1_tp
Connected.

19:28:56 DEMO@atp1_tp> show user
USER is "DEMO"

During this time, I can use both credentials. This gives me enough time to change all application server configurations one by one, without any downtime for the application.

Lock previous account


19:30:00 ADMIN@atp1_tp> 
 select username,account_status,last_login,password_change_date,proxy_only_connect 
 from dba_users where username like 'APP____';

   USERNAME    ACCOUNT_STATUS                                       LAST_LOGIN    PASSWORD_CHANGE_DATE    PROXY_ONLY_CONNECT
___________ _________________ ________________________________________________ _______________________ _____________________
APP2020     OPEN              27-DEC-20 07.28.55.000000000 PM EUROPE/ZURICH    27-DEC-20               N
APP2021     OPEN              27-DEC-20 07.28.56.000000000 PM EUROPE/ZURICH    27-DEC-20               N

After a while, I can validate that the old user is not used anymore. If you have a connection recycling duration in the connection pool (you should) you can rely on last login.


19:30:00 ADMIN@atp1_tp> alter user APP2020 account lock;
User APP2020 altered.

Before dropping it, just lock the account: it is easier to keep track of it and to unlock it quickly if anyone encounters a problem.


19:30:00 ADMIN@atp1_tp> connect APP2020[DEMO]/"2020 was a really Bad Year!"@atp1_tp
Error starting at line : 30 File @ /home/opc/demo/tmp/proxy_to_rollover.sql
In command -
  connect ...
Error report -
Connection Failed
  USER          = APP2020[DEMO]
  URL           = jdbc:oracle:thin:@atp1_tp
  Error Message = ORA-28000: The account is locked.
Commit

If someone tries to connect with the old password, he will know that the user is locked.


19:30:01 @> connect APP2021[DEMO]/"Best Hopes for 2021 :)"@atp1_tp
Connected.
19:30:02 DEMO@atp1_tp> show user
USER is "DEMO"

Once the old user is locked, only the new one is able to connect, with the new credentials. As this operation can be done with no application downtime, you can do it frequently. From a security point of view, you must change passwords frequently. For end-user passwords, you can set a lifetime and a grace period. But not for system users, as the expiry warning may not be caught. Better to change them proactively.
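For those end-user passwords, the lifetime and grace period are typically set through a profile. A sketch, with a hypothetical profile name, user name and arbitrary values, just to illustrate:

SQL> -- sketch: END_USERS and SOME_END_USER are hypothetical names, values are examples only
SQL> create profile END_USERS limit password_life_time 90 password_grace_time 7;
SQL> alter user SOME_END_USER profile END_USERS;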


Building a network bonding between 2 cards on Oracle Linux


I recently needed to configure bonding between 2 network cards at a customer site and I wanted, through this blog, to share my findings and how I built it, showing some traces. I will also do a short comparison of what is possible or not on the ODA.

Why should I use bonding?

Bonding is a technology that allows you to merge several network interfaces, either ports of the same card or ports from separate network cards, into a single logical interface. The purpose is either to have some network redundancy in case of network failure, called fault tolerance, or to increase the network throughput (bandwidth), called load balancing.

What bonding mode should I use?

There are 7 bonding modes available to achieve these purposes. All bonding modes guarantee fault tolerance. Some bonding modes also provide load balancing. For bonding mode 4 the switch needs to support link aggregation (EtherChannel). Link aggregation can be configured manually on the switch or automatically using the LACP protocol (dynamic link aggregation).

Mode 0 (Round-Robin): packets are transmitted and received sequentially through each interface, one by one. Fault tolerance: YES. Load balancing: YES.
Mode 1 (Active-backup): only one interface is active; the other interfaces of the bonding configuration are backups. If the active interface fails, one of the backup interfaces becomes the active one. The MAC address is only visible on one port at a time, to avoid any confusion for the switch. Fault tolerance: YES. Load balancing: NO.
Mode 2 (Balance-xor): peer connections are matched with the MAC addresses of the slave interfaces. Once the connection is established, transmission to a peer is always sent over the same slave interface. Fault tolerance: YES. Load balancing: YES.
Mode 3 (Broadcast): all network transmissions are sent on all slaves. Fault tolerance: YES. Load balancing: NO.
Mode 4 (802.3ad, Dynamic Link Aggregation): aggregates all interfaces of the bonding into a logical one. The traffic is sent and received on all slaves of the aggregation. The switch needs to support LACP, and LACP needs to be activated. Fault tolerance: YES. Load balancing: YES.
Mode 5 (TLB, Transmit Load Balancing): outgoing traffic is distributed between all interfaces depending on the current load of each slave interface. Incoming traffic is received by the currently active slave. If the active interface fails, another slave takes over the MAC address of the failed interface. Fault tolerance: YES. Load balancing: YES.
Mode 6 (ALB, Adaptive Load Balancing): includes TLB (Transmit Load Balancing) and uses RLB (Receive Load Balancing) as well. The load balancing of received packets is done through ARP (Address Resolution Protocol) negotiation. Fault tolerance: YES. Load balancing: YES.

In my case, the customer wanted to guarantee the service in case of a single network card failure only, with no load balancing. The switch was not configured to use LACP. I therefore decided to configure the bonding in active-backup mode, which guarantees redundancy only.

Bonding configuration

Checking existing connection

The server is composed of 2 network cards, each card having 4 interfaces (ports).
Card 1 : em1, em2, em3, em4
Card 2 : p4p1, p4p2, p4p3, p4p4

There is currently no bonding, as shown in the output below.

[root@SRV ~]# nmcli connection
NAME  UUID                                  TYPE      DEVICE
p4p1  d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
em1   f412b74b-2160-4914-b716-88f6b4d58c1f  ethernet  --
em2   0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3   d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4   52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2  12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3  0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4  a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --

Checking existing configuration

The server was configured only with one IP address on the p4p1 network interface.

[root@SRV network-scripts]# pwd
/etc/sysconfig/network-scripts

[root@SRV network-scripts]# ls -l ifcfg*
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em1
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em2
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em3
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em4
-rw-r--r--. 1 root root 254 Aug 19  2019 ifcfg-lo
-rw-r--r--. 1 root root 378 Sep 21 17:09 ifcfg-p4p1
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p2
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p3
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p4

[root@SRV network-scripts]# more ifcfg-p4p1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=p4p1
UUID=d3cdc8f5-2d80-433d-9502-3b357c57f307
DEVICE=p4p1
ONBOOT=yes
IPADDR=192.168.1.180
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.5
DOMAIN=domain.com
IPV6_PRIVACY=no

Creating the bonding

The purpose is to create a bonding between the 2 network cards for fault tolerance. The bonding will be composed of the slave interfaces p4p1 and em1.
The bonding mode selected is mode 1 (active-backup).

[root@SRV network-scripts]# nmcli con add type bond con-name bond1 ifname bond1 mode active-backup ip4 192.168.1.180/24
Connection 'bond1' (7b736616-f72d-46b7-b4eb-01468639889b) successfully added.

[root@SRV network-scripts]# nmcli conn
NAME   UUID                                  TYPE      DEVICE
p4p1   d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    f412b74b-2160-4914-b716-88f6b4d58c1f  ethernet  --
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --

Updating the bonding with the appropriate gateway, DNS and domain information

[root@SRV network-scripts]# cat ifcfg-bond1
BONDING_OPTS=mode=active-backup
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.1.180
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond1
UUID=7b736616-f72d-46b7-b4eb-01468639889b
DEVICE=bond1
ONBOOT=yes

[root@SRV network-scripts]# vi ifcfg-bond1

[root@SRV network-scripts]# cat ifcfg-bond1
BONDING_OPTS=mode=active-backup
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.1.180
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond1
UUID=7b736616-f72d-46b7-b4eb-01468639889b
DEVICE=bond1
ONBOOT=yes
GATEWAY=192.168.1.1
DNS1=192.168.1.5
DOMAIN=domain.com

Adding slave interface em1 in the bonding bond1

Each slave needs to be added to the master bonding.

We will first delete the existing em1 connection:

[root@SRV network-scripts]# nmcli con delete em1
Connection 'em1' (f412b74b-2160-4914-b716-88f6b4d58c1f) successfully deleted.

We will then create a new em1 interface as part of the bond1 bonding configuration:

[root@SRV network-scripts]# nmcli con add type bond-slave ifname em1 con-name em1 master bond1
Connection 'em1' (8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b) successfully added.

And we can check the interfaces :

[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
p4p1   d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --

Activating the bonding

We first need to activate the configured slave interface:

[root@SRV network-scripts]# nmcli con up em1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

We can now activate the bonding :

[root@SRV network-scripts]# nmcli con up bond1
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

We can check the connections :

[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
p4p1   d3cdc8f5-2d80-433d-9502-3b357c57f307  ethernet  p4p1
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --

Adding slave interface p4p1 in the bonding bond1

We will first delete the existing p4p1 connection:

[root@SRV network-scripts]# nmcli con delete p4p1
Connection 'p4p1' (d3cdc8f5-2d80-433d-9502-3b357c57f307) successfully deleted.

[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --

We will then create a new p4p1 interface as part of the bond1 bonding configuration:

[root@SRV network-scripts]# nmcli con add type bond-slave ifname p4p1 con-name p4p1 master bond1
Connection 'p4p1' (efef0972-4b3f-46a2-b054-ebd1aa201056) successfully added.

And we can check the interfaces :

[root@SRV network-scripts]# nmcli con
NAME   UUID                                  TYPE      DEVICE
bond1  7b736616-f72d-46b7-b4eb-01468639889b  bond      bond1
em1    8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b  ethernet  em1
p4p1   efef0972-4b3f-46a2-b054-ebd1aa201056  ethernet  p4p1
em2    0ab78e63-bde7-4c77-b455-7dcb1d5c6813  ethernet  --
em3    d6569615-322f-477b-9693-b42ee3dbe21e  ethernet  --
em4    52949f94-52d1-463e-ba32-06c272c07ce0  ethernet  --
p4p2   12f01c70-4aab-42db-b0e8-b5422e43c1b9  ethernet  --
p4p3   0db2f5b9-d968-44cb-a042-cff20f112ed4  ethernet  --
p4p4   a2a0ebc4-ca74-452e-94ba-6d5fedbfdf28  ethernet  --

Activating the new p4p1 slave interface

We can now activate the newly added slave:

[root@SRV network-scripts]# nmcli con up p4p1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/11)

Restart the network service

We will restart the network service so that the new bonding configuration is taken into account:

[root@SRV network-scripts]# service network restart
Restarting network (via systemctl):                        [  OK  ]

We can check the IP configuration :

[root@SRV network-scripts]# ip addr sh
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1:  mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
11: bond1:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Check IP configuration files

We now have our bond ifcfg configuration file:

[root@SRV ~]# cd /etc/sysconfig/network-scripts

[root@SRV network-scripts]# pwd
/etc/sysconfig/network-scripts

[root@SRV network-scripts]# ls -ltrh ifcfg*
-rw-r--r--. 1 root root 254 Aug 19  2019 ifcfg-lo
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p4
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p2
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em4
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em3
-rw-r--r--. 1 root root 277 Sep 21 17:09 ifcfg-p4p3
-rw-r--r--. 1 root root 275 Sep 21 17:09 ifcfg-em2
-rw-r--r--. 1 root root 411 Oct  7 16:45 ifcfg-bond1
-rw-r--r--. 1 root root 110 Oct  7 16:46 ifcfg-em1
-rw-r--r--. 1 root root 112 Oct  7 16:50 ifcfg-p4p1

The bonding file will have the IP configuration :

[root@SRV network-scripts]# cat ifcfg-bond1
BONDING_OPTS=mode=active-backup
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=192.168.1.180
PREFIX=24
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=bond1
UUID=7b736616-f72d-46b7-b4eb-01468639889b
DEVICE=bond1
ONBOOT=yes
GATEWAY=192.168.1.1
DNS1=192.168.1.5
DOMAIN=domain.com

The p4p1 interface is one of the bond1 slaves:

[root@SRV network-scripts]# cat ifcfg-p4p1
TYPE=Ethernet
NAME=p4p1
UUID=efef0972-4b3f-46a2-b054-ebd1aa201056
DEVICE=p4p1
ONBOOT=yes
MASTER=bond1
SLAVE=yes

The em1 interface, from the other physical network card, is the other bond1 slave:

[root@SRV network-scripts]# cat ifcfg-em1
TYPE=Ethernet
NAME=em1
UUID=8c72c383-e1e9-4e4b-ac2f-3d3d81d5159b
DEVICE=em1
ONBOOT=yes
MASTER=bond1
SLAVE=yes

Check bonding interfaces and mode

[root@SRV network-scripts]# cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: em1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: bc:97:e1:5b:e4:50
Slave queue ID: 0

Slave Interface: p4p1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 3c:fd:fe:85:0d:30
Slave queue ID: 0
[root@SRV network-scripts]#

Test the bonding

Both network cables are plugged into em1 and p4p1. Both interfaces are UP:

[root@SRV network-scripts]# ip addr sh
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1:  mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
15: bond1:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Pinging the server is OK :

[ansible@linux-ansible / ]$ ping 192.168.1.180
PING 192.168.1.180 (192.168.1.180) 56(84) bytes of data.
64 bytes from 192.168.1.180: icmp_seq=1 ttl=64 time=0.206 ms
64 bytes from 192.168.1.180: icmp_seq=2 ttl=64 time=0.290 ms
64 bytes from 192.168.1.180: icmp_seq=3 ttl=64 time=0.152 ms
64 bytes from 192.168.1.180: icmp_seq=4 ttl=64 time=0.243 ms

I unplugged the cable from the em1 interface. We can see the em1 interface DOWN and the p4p1 interface UP:

[root@SRV network-scripts]# ip addr sh
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq master bond1 state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1:  mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
15: bond1:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Pinging the server is still OK:

[ansible@linux-ansible / ]$ ping 192.168.1.180
PING 192.168.1.180 (192.168.1.180) 56(84) bytes of data.
64 bytes from 192.168.1.180: icmp_seq=1 ttl=64 time=0.234 ms
64 bytes from 192.168.1.180: icmp_seq=2 ttl=64 time=0.256 ms
64 bytes from 192.168.1.180: icmp_seq=3 ttl=64 time=0.257 ms
64 bytes from 192.168.1.180: icmp_seq=4 ttl=64 time=0.245 ms
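
Besides ip addr and ping, the state of the bond itself can be checked through the kernel bonding status file. This is only a sketch for the bond1 device used above (the output is not captured here); it shows the bonding mode, the MII status and which slave is currently active:

[root@SRV network-scripts]# cat /proc/net/bonding/bond1 | grep -E "Bonding Mode|MII Status|Currently Active Slave|Slave Interface"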

I then plugged the cable back into the em1 interface and unplugged the cable from the p4p1 interface. We can see the em1 interface UP again and the p4p1 interface DOWN:

[root@SRV network-scripts]# ip addr sh
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1:  mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
3: em3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4e brd ff:ff:ff:ff:ff:ff
4: em2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:51 brd ff:ff:ff:ff:ff:ff
5: em4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:4f brd ff:ff:ff:ff:ff:ff
6: p4p1:  mtu 1500 qdisc mq master bond1 state DOWN group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
7: p4p2:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:31 brd ff:ff:ff:ff:ff:ff
8: p4p3:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:32 brd ff:ff:ff:ff:ff:ff
9: p4p4:  mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 3c:fd:fe:85:0d:33 brd ff:ff:ff:ff:ff:ff
15: bond1:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:5b:e4:50 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.180/24 brd 192.168.1.255 scope global noprefixroute bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::b4f9:e44d:25fc:3a6/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Pinging the server is still OK:

[ansible@linux-ansible / ]$ ping 192.168.1.180
PING 192.168.1.180 (192.168.1.180) 56(84) bytes of data.
64 bytes from 192.168.1.180: icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from 192.168.1.180: icmp_seq=2 ttl=64 time=0.219 ms
64 bytes from 192.168.1.180: icmp_seq=3 ttl=64 time=0.362 ms
64 bytes from 192.168.1.180: icmp_seq=4 ttl=64 time=0.236 ms

And what about the ODA?

This configuration has been set up at a customer running DELL servers. I have deployed several ODAs at other customers, and the question of fault tolerance across several network cards comes up regularly. Unfortunately, even though the ODA runs the Oracle Linux operating system, such a configuration is not supported on the appliance. The appliance only supports active-backup bonding between ports of the same network card. Additional network cards on the ODA are used to provide additional network connections. Last but not least, LACP is not supported on the appliance either.

The post Building a network bonding between 2 cards on Oracle Linux appeared first on the dbi services Blog.

Optimizer Statistics Gathering – pending and history


By Franck Pachot

.
This was initially posted to CERN Database blog on Wednesday, 12 September 2018 where it seems to be lost. Here is a copy thanks to web.archive.org

Demo table

I create a table for the demo. The CTAS gathers statistics (12c online statistics gathering) with one row and then I insert more rows:



10:33:56 SQL> create table DEMO as select rownum n from dual;
Table DEMO created.
10:33:56 SQL> insert into DEMO select rownum n from xmltable('1 to 41');
41 rows inserted.
10:33:56 SQL> commit;
Commit complete.

The estimations are stale: the optimizer estimates 1 row (E-Rows) but there are 42 actual rows (A-Rows):



10:33:56 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 

10:33:57 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |      1 |     42 |   
--------------------------------------------------------------   

Pending Statistics

Here we are: I want to gather statistics on this table. But I will lower all risks by not publishing them immediately. Current statistics preferences are set to PUBLISH=TRUE:



10:33:58 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
         1 12-SEP-18 10.33.56.000000000 AM   TRUE     
                                          

I set it to FALSE:



10:33:59 SQL> exec dbms_stats.set_table_prefs('DEMO','DEMO','publish','false');

PL/SQL procedure successfully completed.

10:34:00 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
         1 12-SEP-18 10.33.56.000000000 AM   FALSE  
                                            

I’m now gathering stats as I want to:



10:34:01 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.

Test Pending Statistics

They are not published. But to test my queries with those new stats, I can set my session to use pending statistics:



10:34:02 SQL> alter session set optimizer_use_pending_statistics=true;
Session altered.

Running my query again, I can see the good estimations (E-Rows=A-Rows)



10:34:03 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 

10:34:04 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |     42 |     42 |   
--------------------------------------------------------------   

The published statistics still show 1 row:



10:34:05 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
         1 12-SEP-18 10.33.56.000000000 AM   FALSE            
                                  

But I can query the pending ones before publishing them:



10:34:05 SQL> c/dba_tab_statistics/dba_tab_pending_stats
  1* select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_pending_stats where owner='DEMO' and table_name in ('DEMO');
10:34:05 SQL> /

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
        42 12-SEP-18 10.34.01.000000000 AM   FALSE          
                                    

I’ve finished my test with pending statistics:



10:34:05 SQL> alter session set optimizer_use_pending_statistics=false;
Session altered.

Note that if you have Real Application Testing, you can use SQL Performance Analyzer to test the pending statistics on a whole SQL Tuning Set representing the critical queries of your application. Of course, the more you test there, the better it is.
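
As an illustration only, here is a minimal sketch of how SQL Performance Analyzer could be driven for that purpose. It assumes an existing SQL Tuning Set named MY_CRITICAL_STS (a made-up name) and that the test executions run in the current session, so that the pending statistics are visible to the optimizer during the second trial:

-- sketch: compare a SQL Tuning Set executed with published vs. pending statistics
variable tname varchar2(64)
exec :tname := dbms_sqlpa.create_analysis_task(sqlset_name => 'MY_CRITICAL_STS', task_name => 'SPA_PENDING_STATS');
-- first trial: published statistics
exec dbms_sqlpa.execute_analysis_task(task_name => :tname, execution_type => 'TEST EXECUTE', execution_name => 'before_pending');
-- second trial: pending statistics visible to the optimizer
alter session set optimizer_use_pending_statistics=true;
exec dbms_sqlpa.execute_analysis_task(task_name => :tname, execution_type => 'TEST EXECUTE', execution_name => 'with_pending');
alter session set optimizer_use_pending_statistics=false;
-- compare both trials and display the summary report
exec dbms_sqlpa.execute_analysis_task(task_name => :tname, execution_type => 'COMPARE PERFORMANCE', execution_name => 'diff_pending');
select dbms_sqlpa.report_analysis_task(:tname, 'TEXT', 'TYPICAL', 'SUMMARY') from dual;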

Delete Pending Statistics

Now let’s say that my test shows that the new statistics are not good, I can simply delete the pending statistics:



10:34:06 SQL> exec dbms_stats.delete_pending_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.

Then all queries are still using the previous statistics:



10:34:07 SQL> show parameter pending
NAME                             TYPE    VALUE
-------------------------------- ------- -----
optimizer_use_pending_statistics boolean FALSE

10:34:07 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 

10:34:08 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |      1 |     42 |   
--------------------------------------------------------------   

Accept Pending Statistics

Now I’ll show the second case where my tests show that the new statistics gathering is ok. I gather statistics again:



10:34:09 SQL> exec dbms_stats.gather_table_stats('DEMO','DEMO');
PL/SQL procedure successfully completed.

10:34:09 SQL> alter session set optimizer_use_pending_statistics=true;
Session altered.

10:34:11 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 


10:34:12 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |     42 |     42 |   
--------------------------------------------------------------   
                                                                 
10:34:12 SQL> alter session set optimizer_use_pending_statistics=false;
Session altered.

When I'm OK with the new statistics, I can publish them so that other sessions can see them. As doing this in production is probably a fix for a critical problem, I want the change to take effect immediately, invalidating all cursors:



10:34:13 SQL> exec dbms_stats.publish_pending_stats('DEMO','DEMO',no_invalidate=>false);
PL/SQL procedure successfully completed.

The default NO_INVALIDATE value is probably best avoided in those cases because you want to see the side effects, if any, as soon as possible, and not within a random window up to 5 hours later when you have already left the office. I set the table preference back to PUBLISH=TRUE and check that the new statistics are visible in DBA_TAB_STATISTICS (and no longer in DBA_TAB_PENDING_STATS):



10:34:14 SQL> exec dbms_stats.set_table_prefs('DEMO','DEMO','publish','true');
PL/SQL procedure successfully completed.

10:34:15 SQL> select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_statistics where owner='DEMO' and table_name in ('DEMO');

  NUM_ROWS CAST(LAST_ANALYZEDASTIMESTAMP)    DBMS_STATS.GET_PREFS('PUBLISH',OWNER,TABLE_NAME)   
  -------- ------------------------------    ------------------------------------------------   
        42 12-SEP-18 10.34.09.000000000 AM   TRUE                                               


10:34:15 SQL> c/dba_tab_statistics/dba_tab_pending_stats
  1* select num_rows,cast(last_analyzed as timestamp),dbms_stats.get_prefs('PUBLISH',owner,table_name) from dba_tab_pending_stats where owner='DEMO' and table_name in ('DEMO');
10:34:15 SQL> /

no rows selected

Report Differences

Then what if a critical regression is observed later? I still have the possibility to revert to the old statistics. First, I can check in detail what has changed:



10:34:16 SQL> select report from table(dbms_stats.diff_table_stats_in_history('DEMO','DEMO',sysdate-1,sysdate,0));

REPORT
------

###############################################################################

STATISTICS DIFFERENCE REPORT FOR:
.................................

TABLE         : DEMO
OWNER         : DEMO
SOURCE A      : Statistics as of 11-SEP-18 10.34.16.000000 AM EUROPE/ZURICH
SOURCE B      : Statistics as of 12-SEP-18 10.34.16.000000 AM EUROPE/ZURICH
PCTTHRESHOLD  : 0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


TABLE / (SUB)PARTITION STATISTICS DIFFERENCE:
.............................................

OBJECTNAME                  TYP SRC ROWS       BLOCKS     ROWLEN     SAMPSIZE
...............................................................................

DEMO                        T   A   1          4          3          1
                                B   42         8          3          42
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

COLUMN STATISTICS DIFFERENCE:
.............................

COLUMN_NAME     SRC NDV     DENSITY    HIST NULLS   LEN  MIN   MAX   SAMPSIZ
...............................................................................

N               A   1       1          NO   0       3    C102  C102  1
                B   41      .024390243 NO   0       3    C102  C12A  42
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


NO DIFFERENCE IN INDEX / (SUB)PARTITION STATISTICS
###############################################################################

Restore Previous Statistics

If nothing is obvious and the regression is more critical than the original problem, I still have the possibility to revert back to the old statistics:



10:34:17 SQL> exec dbms_stats.restore_table_stats('DEMO','DEMO',sysdate-1,no_invalidate=>false);
PL/SQL procedure successfully completed.

Again, invalidating all cursors immediately is probably required as I am solving a critical problem here. Immediately, the same query uses the old statistics:



10:34:17 SQL> select /*+ gather_plan_statistics */ count(*) from DEMO;

  COUNT(*) 
  -------- 
        42 


10:34:17 SQL> select * from table(dbms_xplan.display_cursor(format=>'basic +rows +rowstats last'));

PLAN_TABLE_OUTPUT                                                
-----------------                                                
EXPLAINED SQL STATEMENT:                                         
------------------------                                         
select /*+ gather_plan_statistics */ count(*) from DEMO          
                                                                 
Plan hash value: 2180342005                                      
                                                                 
--------------------------------------------------------------   
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   
--------------------------------------------------------------   
|   0 | SELECT STATEMENT   |      |      1 |        |      1 |   
|   1 |  SORT AGGREGATE    |      |      1 |      1 |      1 |   
|   2 |   TABLE ACCESS FULL| DEMO |      1 |      1 |     42 |
--------------------------------------------------------------   

If I want to see what happened recently on this table, I can query the history of operations (you can replace my ugly regexp_replace with XQuery):



10:34:18 SQL> select end_time,end_time-start_time,operation,target,regexp_replace(regexp_replace(notes,'" val="','=>'),'(||)',' '),status from DBA_OPTSTAT_OPERATIONS where regexp_like(target,'"?'||'DEMO'||'"?."?'||'DEMO'||'"?') order by end_time desc fetch first 10 rows only;

END_TIME                                 END_TIME-START_TIME   OPERATION             TARGET          REGEXP_REPLACE(REGEXP_REPLACE(NOTES,'"VAL="','=>'),'(||)','')                                                                                                                                                                                                                                         STATUS      
--------                                 -------------------   ---------             ------          ----------------------------------------------------------------------------------------------                                                                                                                                                                                                                                         ------      
12-SEP-18 10.34.17.718800000 AM +02:00   +00 00:00:00.017215   restore_table_stats   "DEMO"."DEMO"     as_of_timestamp=>09-11-2018 10:34:17  force=>FALSE  no_invalidate=>FALSE  ownname=>DEMO  restore_cluster_index=>FALSE  tabname=>DEMO                                                                                                                                                                                                 COMPLETED   
12-SEP-18 10.34.13.262234000 AM +02:00   +00 00:00:00.010021   restore_table_stats   "DEMO"."DEMO"     as_of_timestamp=>11-30-3000 01:00:00  force=>FALSE  no_invalidate=>FALSE  ownname=>DEMO  restore_cluster_index=>FALSE  tabname=>DEMO                                                                                                                                                                                                 COMPLETED   
12-SEP-18 10.34.09.974873000 AM +02:00   +00 00:00:00.032513   gather_table_stats    "DEMO"."DEMO"     block_sample=>FALSE  cascade=>NULL  concurrent=>FALSE  degree=>NULL  estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE  force=>FALSE  granularity=>AUTO  method_opt=>FOR ALL COLUMNS SIZE AUTO  no_invalidate=>NULL  ownname=>DEMO  partname=>  reporting_mode=>FALSE  statid=>  statown=>  stattab=>  stattype=>DATA  tabname=>DEMO     COMPLETED   
12-SEP-18 10.34.01.194735000 AM +02:00   +00 00:00:00.052087   gather_table_stats    "DEMO"."DEMO"     block_sample=>FALSE  cascade=>NULL  concurrent=>FALSE  degree=>NULL  estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE  force=>FALSE  granularity=>AUTO  method_opt=>FOR ALL COLUMNS SIZE AUTO  no_invalidate=>NULL  ownname=>DEMO  partname=>  reporting_mode=>FALSE  statid=>  statown=>  stattab=>  stattype=>DATA  tabname=>DEMO     COMPLETED   

We can see here that the publishing of pending stats was actually a restore of stats as of Nov 30th of Year 3000. This is probably because the pending status is hardcoded as a date in the future. Does that mean that all pending stats will become autonomously published at that time? I don’t think we have to worry about Y3K bugs for the moment…

Here is the full recipe I've given to an application owner who needs to gather statistics on his tables on a highly critical database. With it, he has all the information needed to limit the risks. My recommendation is to prepare this fallback scenario before doing any change, and to test it as I did, on a test environment, in order to be ready to react to any unexpected side effect. Be careful: pending statistics do not work correctly with system statistics and can have very nasty side effects (Bug 21326597), but restoring from history is possible.
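
To summarize, here is a minimal sketch of that recipe, reusing only the calls shown above (schema, table and restore window to be adapted):

-- 1) keep the new statistics pending instead of publishing them
exec dbms_stats.set_table_prefs('DEMO','DEMO','publish','false');
exec dbms_stats.gather_table_stats('DEMO','DEMO');
-- 2) test the critical queries with the pending statistics
alter session set optimizer_use_pending_statistics=true;
--    ... run the critical queries and check the plans ...
alter session set optimizer_use_pending_statistics=false;
-- 3a) if the tests are bad: discard the pending statistics
--     exec dbms_stats.delete_pending_stats('DEMO','DEMO');
-- 3b) if the tests are good: publish them, invalidating the cursors immediately
exec dbms_stats.publish_pending_stats('DEMO','DEMO',no_invalidate=>false);
exec dbms_stats.set_table_prefs('DEMO','DEMO','publish','true');
-- 4) fallback if a regression shows up later: restore the previous statistics
--     exec dbms_stats.restore_table_stats('DEMO','DEMO',sysdate-1,no_invalidate=>false);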

The post Optimizer Statistics Gathering – pending and history appeared first on the dbi services Blog.

Migrate Oracle Database 9.2.0.6 to Oracle 19c using GoldenGate


When a customer wanted to take up the challenge of migrating an Oracle 9.2.0.6 database (prehistory in the Oracle world) to Oracle 19c using Oracle GoldenGate, I initially saw more problems than added value, for several reasons:

  • The Oracle 9.2.0.6 database is out of support (the final 9.2 patch was Oracle 9.2.0.8).
  • The customer's operating system was AIX 7.4, and only Oracle GoldenGate 11.1.1.1.2 for Oracle 9.2 on AIX 5.3 is available on https://edelivery.oracle.com.
  • Patch 13606038: ORACLE GOLDENGATE V11.1.1.0.31 FOR ORACLE 9I is not available for download; special support is needed to get it.

Oracle GoldenGate database Schema Profile check script

The first step is to download, from Oracle Support, the Oracle GoldenGate database Schema Profile check script. It queries the database per schema to identify the current configuration and any unsupported data types, or types that may need special consideration for Oracle GoldenGate in an Oracle environment:

  • Oracle GoldenGate database Schema Profile check script for Oracle DB (Doc ID 1296168.1) : full-schemaCheckOracle_07072020.sql

Even though Oracle Support mentions that this script is written for Oracle database versions 9i through 11g, some adaptations must be made for an Oracle 9.2.0.6 database:

First of all, add a parameter to pass the schema name as an argument:

oracle@aixSourceServer-Ora9i: /opt/oracle/software/goldengate> ls -ltr

vi full-schemaCheckOracle_07072020.sql

--Lazhar Felahi – 10.12.2020 - comment this line
--spool schemaCheckOracle.&&schema_name.out
--Lazhar Felahi – 10.12.2020 - comment this line
--ACCEPT schema_name char prompt 'Enter the Schema Name > '
variable b0 varchar2(50)
--Lazhar Felahi – 10.12.2020 - comment this line
--exec :b0 := upper('&schema_name');
--Lazhar Felahi – 10.12.2020 - add this line
exec :b0 := '&1';
--Lazhar Felahi – 10.12.2020 - comment this line
--spool schemaCheckOracle.&&schema_name..out
--Lazhar Felahi – 10.12.2020 - add this line
spool schemaCheckOracle.&&1..out

Execute the script for schemas needed:

oracle@aixSourceServer-Ora9i: /opt/oracle/software/goldengate>
sqlplus /nolog
SQL*Plus: Release 9.2.0.6.0 - Production on Thu Dec 10 21:19:37 2020
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> start full-schemaCheckOracle_07072020.sql HR
error :
ERROR at line 4:
ORA-00904: "SUPPLEMENTAL_LOG_DATA_ALL": invalid identifier
platform_name
*
ERROR at line 2:
ORA-00904: "PLATFORM_NAME": invalid identifier
------ Integrated Extract unsupported objects in HR

select object_name, support_mode from DBA_GOLDENGATE_SUPPORT_MODE WHERE OWNER = :b0 and support_mode = 'NONE'
ERROR at line 1:
ORA-00942: table or view does not exist

 

The above errors can be ignored:

  • The errors ORA-00904: "SUPPLEMENTAL_LOG_DATA_ALL": invalid identifier and ORA-00904: "PLATFORM_NAME": invalid identifier can be ignored since these columns do not exist in the v$database data dictionary view in Oracle 9.2.0.6.
  • The error ORA-00942: table or view does not exist can be ignored since the view DBA_GOLDENGATE_SUPPORT_MODE is only available starting with Oracle Database 11g Release 2 (11.2.0.4).

Adapt the script and re-execute it; an output file is generated listing the different checks and any unsupported types.

For instance, the script lists all tables without a primary key or unique index, and tables with the NOLOGGING attribute.

GoldenGate needs tables with a primary key. If no PK exists on a table, GoldenGate will use all columns to define uniqueness.
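
As an illustration (this query is not part of the MOS script), the tables of a given schema without any primary key or unique constraint can be listed with a simple data dictionary query:

-- list the tables of a schema that have neither a primary key nor a unique constraint
select t.owner, t.table_name
  from dba_tables t
 where t.owner = upper('&schema_name')
   and not exists (select 1
                     from dba_constraints c
                    where c.owner = t.owner
                      and c.table_name = t.table_name
                      and c.constraint_type in ('P','U'));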

GOLDENGATE INSTALLATION – ON SOURCE SERVER

Download the zip file corresponding to Oracle GoldenGate 11.1.1.1.2 software from https://edelivery.oracle.com :

  • V28955-01.zip Oracle GoldenGate 11.1.1.1.2 for Oracle 9.2 for AIX 5.3 on IBM AIX on POWER Systems (64-bit), 45.5 MB

Unzip and untar the file:

oracle@aixSourceServer-Ora9i: /opt/oracle/software/goldengate> ls -ltr
total 365456
-rw-rw-r--    1 oracle   dba       139079680 Oct  5 2011  ggs_AIX_ppc_ora9.2_64bit.tar
-rw-r--r--    1 oracle   dba          245329 Oct 28 2011  OGG_WinUnix_Rel_Notes_11.1.1.1.2.pdf
-rw-r--r--    1 oracle   dba           25065 Oct 28 2011  Oracle GoldenGate 11.1.1.1 README.txt
-rwx------    1 oracle   dba        47749729 Dec 10 13:55 V28955-01.zip
drwxr-xr-x    2 oracle   dba            4096 Dec 14 09:35 check_script
oracle@aixSourceServer-Ora9i:/opt/oracle/software/goldengate>

oracle@aixSourceServer-Ora9i:/opt/oracle/product/gg_11.1.1.1.2> tar -xvf /opt/oracle/software/goldengate/ggs_AIX_ppc_ora9.2_64bit.tar
…
x marker_setup.sql, 3702 bytes, 8 tape blocks
x marker_status.sql, 1715 bytes, 4 tape blocks
oracle@aixSourceServer-Ora9i: /opt/oracle/product/gg_11.1.1.1.2> ls -ltr
total 235344
-r--r--r-- 1 oracle dba 1476 Oct 15 2010 zlib.txt
. . .
-rwxr-xr-x 1 oracle dba 13911955 Oct 5 2011 replicat

Let’s set the LIBPATH environment variable and call “ggsci” utility:

oracle@aixSourceServer-Ora9i: /opt/oracle/product/gg_11.1.1.1.2> export LIBPATH=/opt/oracle/product/gg_11.1.1.1.2/:$ORACLE_HOME/lib:/opt/oracle/product/9.2.0.6/lib32/:/opt/oracle/product/9.2.0.6/lib/:$LIBPATH
oracle@aixSourceServer-Ora9i: /opt/oracle/product/gg_11.1.1.1.2> ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
AIX 5L, ppc, 64bit (optimized), Oracle 9.2 on Oct 5 2011 00:37:06

Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

GGSCI (aixSourceServer-Ora9i) 1> info all

Program Status Group Lag Time Since Chkpt

MANAGER STOPPED

GOLDENGATE SETUP – ON SOURCE SERVER

Create the GoldenGate admin user on the source and target databases:

oracle@aixSourceServer-Ora9i:/home/oracle/ [DB2] sqlplus / as sysdba

SQL> create tablespace GOLDENGATE datafile '/u02/oradata/DB2/goldengate.dbf' size 2G ;

SQL> create profile GGADMIN limit password_life_time unlimited ;

SQL> create user GGADMIN identified by "******" default tablespace
     goldengate temporary tablespace temp profile GGADMIN ;

SQL> grant create session, dba to GGADMIN ;

SQL> exec dbms_goldengate_auth.grant_admin_privilege ('GGADMIN') ;

SQL> grant flashback any table to GGADMIN ;
--create subdirs
GGSCI (aixSourceServer-Ora9i) 1> create subdirs

Creating subdirectories under current directory /opt/oracle/product/gg_11.1.1.1.2

Parameter files                /opt/oracle/product/gg_11.1.1.1.2/dirprm: created
Report files                   /opt/oracle/product/gg_11.1.1.1.2/dirrpt: created
Checkpoint files               /opt/oracle/product/gg_11.1.1.1.2/dirchk: created
Process status files           /opt/oracle/product/gg_11.1.1.1.2/dirpcs: created
SQL script files               /opt/oracle/product/gg_11.1.1.1.2/dirsql: created
Database definitions files     /opt/oracle/product/gg_11.1.1.1.2/dirdef: created
Extract data files             /opt/oracle/product/gg_11.1.1.1.2/dirdat: created
Temporary files                /opt/oracle/product/gg_11.1.1.1.2/dirtmp: created
Veridata files                 /opt/oracle/product/gg_11.1.1.1.2/dirver: created
Veridata Lock files            /opt/oracle/product/gg_11.1.1.1.2/dirver/lock: created
Veridata Out-Of-Sync files     /opt/oracle/product/gg_11.1.1.1.2/dirver/oos: created
Veridata Out-Of-Sync XML files /opt/oracle/product/gg_11.1.1.1.2/dirver/oosxml: created
Veridata Parameter files       /opt/oracle/product/gg_11.1.1.1.2/dirver/params: created
Veridata Report files          /opt/oracle/product/gg_11.1.1.1.2/dirver/report: created
Veridata Status files          /opt/oracle/product/gg_11.1.1.1.2/dirver/status: created
Veridata Trace files           /opt/oracle/product/gg_11.1.1.1.2/dirver/trace: created
Stdout files                   /opt/oracle/product/gg_11.1.1.1.2/dirout: created

--add GGSCHEMA into ./GLOBALS file in source and target
oracle@ aixSourceServer-Ora9i: /opt/oracle/product/gg_11.1.1.1.2> ./ggsci
GGSCI (aixSourceServer-Ora9i) 3> view param ./GLOBALS
GGSCHEMA goldengate

--add PORT into mgr parameter file and start the manager
GGSCI (aixSourceServer-Ora9i) 1> edit params mgr

PORT 7809
GGSCI (aixSourceServer-Ora9i) 6> start mgr
Manager started.
GGSCI (aixSourceServer-Ora9i) 7> info all
Program     Status      Group       Lag           Time Since Chkpt
MANAGER     RUNNING

--Installing the DDL support on the source database : You will be prompted for the name of a schema for the GoldenGate database objects.
SQL> @marker_setup.sql
. . .
Script complete
SQL> @ddl_setup.sql
. . .
SUCCESSFUL installation of DDL Replication software components
SQL> @role_setup.sql
Role setup script complete
SQL> grant ggs_ggsuser_role to goldengate;
SQL> @ddl_enable.sql
Trigger altered
--On both database (source and target), Installing Support for Sequences
SQL> @sequence.
. . .
SUCCESSFUL installation of Oracle Sequence Replication support

Add trandata on the schemas concerned by the GoldenGate replication:

oracle@aixSourceServer-Ora9i: /opt/oracle/product/gg_11.1.1.1.2> ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
AIX 5L, ppc, 64bit (optimized), Oracle 9.2 on Oct  5 2011 00:37:06

Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.
	


GGSCI (aixSourceServer-Ora9i) 1> dblogin userid goldengate
Password:
Successfully logged into database.


GGSCI (aixSourceServer-Ora9i) 2> add trandata bgh.*
GGSCI (aixSourceServer-Ora9i) 2> add trandata all_opi.*

2020-12-18 10:46:45  WARNING OGG-00706  Failed to add supplemental log group on table bgh.KLI_J_TEST_HIST due to ORA-02257: maximum number of columns exceeded, SQL ALTER TABLE "bgh"."KLI_J_TEST_HIST" ADD SUPPLEMENTAL LOG GROUP "GGS_KLI_J_TEST_HIST_901157" ("ENH_N_ID","ENH_N_NOINSCRIPTION","ENH_N_NOCOURS","ENH_C_P1NOTE","ENH_C_P2NOTE","ENH_C_P3NOTE","ENH_C_.


2020-12-18 10:46:52  WARNING OGG-00706  Failed to add supplemental log group on table bgh.TABLE_ELEVES_SVG due to ORA-02257: maximum number of columns exceeded, SQL ALTER TABLE "bgh"."TABLE_ELEVES" ADD SUPPLEMENTAL LOG GROUP "GGS_TABLE_ELEVES_901320" ("NOINSCRIPTION","NOCOURS","P1NOTE","P2NOTE","P3NOTE","P4NOTE","P5NOTE","P6NOTE","P7NOTE","P8NOTE","P1COMPTE".

2020-12-18 10:46:52  WARNING OGG-00869  No unique key is defined for table TABLENOTE_TMP. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.

Logging of supplemental redo data enabled for table all_opi.ZUI_VM_RETUIO_SCOLARITE.
ERROR: OCI Error retrieving bind info for query (status = 100), SQL <SELECT * FROM "all_opi"."EXT_POI_V_RTEWR">.

The warnings OGG-00706 and OGG-00869 are solved by adding a primary key to the tables concerned.
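
As a purely hypothetical illustration (the real key columns must be chosen with the application team; this column list is an assumption), adding such a primary key looks like this:

-- hypothetical example: the column list is an assumption, not the real key of this table
ALTER TABLE bgh.TABLE_ELEVES ADD CONSTRAINT TABLE_ELEVES_PK PRIMARY KEY (NOINSCRIPTION, NOCOURS);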

The OCI error must be investigated by opening an Oracle Service Request.

Add the extract, exttrail and start it :

GGSCI (aixSourceServer-Ora9i) 2> add extract EXTRSO, tranlog, begin now
EXTRACT added.

add extract EXTRNA, tranlog, begin now
GGSCI (aixSourceServer-Ora9i) 7> add EXTTRAIL /opt/oracle/goldengate/data/DDIP9/so, EXTRACT EXTRSO
EXTTRAIL added.

add EXTTRAIL /opt/oracle/goldengate/data/DDIP9/na, EXTRACT EXTRNA

edit param EXTRSO
Extract EXTRSO
userid goldengate password ******
Exttrail /opt/oracle/goldengate/data/DDIP9/so
ENCRYPTTRAIL AES192
DDL INCLUDE MAPPED OBJNAME SO.*
Table bgh.*;

edit param EXTRNA
Extract EXTRNA
userid goldengate password ******
Exttrail /opt/oracle/goldengate/data/DDIP9/na
ENCRYPTTRAIL AES192
DDL INCLUDE MAPPED OBJNAME NBDS_ADM.*
Table all_api.*;

start EXTRSO

Sending START request to MANAGER ...
EXTRACT EXTRSO starting

start EXTRNA

GGSCI (aixSourceServer-Ora9i) 11> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTRHR      00:00:00      00:00:04
EXTRACT     RUNNING     EXTRSO      00:08:18      00:00:00
EXTRACT     RUNNING     PUMPHR      00:00:00      00:00:07

Check if trail files are created:

oracle@aixSourceServer-Ora9i: /opt/oracle/product/gg_11.1.1.1.2> ls -ltr /opt/oracle/goldengate/data/DDIP9/
total 72
-rw-rw-rw-    1 oracle   dba             960 Dec 16 09:34 hr000000
-rw-rw-rw-    1 oracle   dba            1021 Dec 16 10:25 hr000001
-rw-rw-rw-    1 oracle   dba            1021 Dec 16 10:34 hr000002
-rw-rw-rw-    1 oracle   dba            2679 Dec 16 14:26 hr000003
-rw-rw-rw-    1 oracle   dba             960 Dec 16 19:54 so000000
-rw-rw-rw-    1 oracle   dba            1021 Dec 16 19:59 na000000

Add the PUMP:

GGSCI (aixSourceServer-Ora9i) 1> add extract PUMPSO,EXTTRAILSOURCE /opt/oracle/goldengate/data/DDIP9/so
EXTRACT added.

add extract PUMPNA,EXTTRAILSOURCE /opt/oracle/goldengate/data/DDIP9/na

GGSCI (aixSourceServer-Ora9i) 2> add rmttrail /data/oradata/goldengate/data/LGGATE/so, extract PUMPSO
RMTTRAIL added.

add rmttrail /data/oradata/goldengate/data/LGGATE/na, extract PUMPNA

extract PUMPSO
userid goldengate password ******
RMTHOST aixTargetServer-Ora19c, MGRPORT 7810
RMTTRAIL /data/oradata/goldengate/data/LGGATE/so
TABLE bgh.*;

extract PUMPNA
userid goldengate password ******
RMTHOST aixTargetServer-Ora19c, MGRPORT 7810
RMTTRAIL /data/oradata/goldengate/data/LGGATE/na
TABLE all_api.*;


GGSCI (aixSourceServer-Ora9i) 6> start pumpso

Sending START request to MANAGER ...
EXTRACT PUMPSO starting

start pumpna

GGSCI (aixSourceServer-Ora9i) 26> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTRHR      00:00:00      00:00:07
EXTRACT     RUNNING     EXTRNA      00:00:00      00:00:08
EXTRACT     RUNNING     EXTRSO      00:00:00      00:00:05
EXTRACT     RUNNING     PUMPHR      00:00:00      00:00:03
EXTRACT     RUNNING     PUMPNA      00:00:00      00:03:42
EXTRACT     RUNNING     PUMPSO      00:00:00      00:00:00

 

GOLDENGATE INITIAL LOAD

On the source schemas, get the current SCN and run the export:

SELECT dbms_flashback.get_system_change_number as current_scn FROM DUAL;
10228186709471 --Note this SCN: it will be used later to start the GoldenGate replicat process on the target server
nohup  exp / file=rg081DDIP9.s0.202012172303.dmp log=rg081DDIP9.s0.202012172303.dmp.log tables=bgh.% flashback_scn=10228186709471 &
nohup  exp / file=rg081DDIP9.nbds_adm.202012172303.dmp log=rg081DDIP9.all_opi.202012172303.dmp.log tables=nbds_adm.% flashback_scn=10228186709471 &

Copy the dump file on the target and do the import :

drop user bgh cascade;
create user bgh identified by "******" default tablespace SO temporary tablespace TEMP;
alter user bgh quota unlimited on S0_D;
alter user bgh quota unlimited on S0_I;
alter user bgh quota unlimited on S0_LOB;
nohup imp / file=/data/export/LGGATE/rg081DDIP9.s0.202012172303.dmp log=so.imp171220202303.log buffer=1000000 fromuser=bgh touser=bgh grants=n statistics=none constraints=n ignore=y &

drop user all_opi cascade;
create user all_opi identified by "******" default tablespace NA temporary tablespace TEMP;
alter user all_opi quota unlimited on NBDS_D;
alter user all_opi quota unlimited on NBDS_I;
alter user all_opi quota unlimited on NBDS_LOB;
alter user all_opi quota unlimited on system;
alter user all_opi quota unlimited on na;
nohup imp / file=/data/export/LGGATE/rg081DDIP9.nbds_adm.202012172303.dmp log=na.imp171220202303.log buffer=1000000 fromuser=all_opi touser=all_opi grants=n statistics=none constraints=n ignore=y &

Since the import is done without constraints, get all the primary keys from the source database and create them on the target.
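
A possible way to extract them is sketched below (DBMS_METADATA exists on the 9.2 source; adapt the owner list and its case to the actual schemas):

-- generate the primary key DDL from the source database
set long 100000 pagesize 0
select dbms_metadata.get_ddl('CONSTRAINT', constraint_name, owner)
  from dba_constraints
 where owner in ('BGH','ALL_OPI')
   and constraint_type = 'P';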

Disable all triggers on the target:

select 'alter trigger '||owner||'.'||trigger_name||' disable;' from dba_triggers where owner = 'NBDS_ADM';
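
The generated statements still have to be executed. The same technique can be used to re-enable the triggers once the migration is validated (a companion sketch, not part of the original run):

select 'alter trigger '||owner||'.'||trigger_name||' enable;' from dba_triggers where owner = 'NBDS_ADM';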

Check that no referential constraints exist, set the job_queue_processes parameter to 0, and recompile all invalid objects:

--checK ref constraints
SQL> select * from dba_constraints where owner = 'NBDS_ADM' and constraint_type = 'R';

no rows selected

SQL>
--check job_queue_processes

SQL> sho parameter job

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     384
max_datapump_jobs_per_pdb            string      100
max_datapump_parallel_per_job        string      50
SQL> alter system set job_queue_Processes = 0;

System altered.

SQL>

--recompile all
SQL> start ?/rdbms/admin/utlrp.sql

Session altered.
. . .

GOLDENGATE SETUP – ON TARGET SERVER

Add the replicat:

--add replicat
GGSCI (aixTargetServer-Ora19c) 5> dblogin useridalias goldengate
Successfully logged into database.

GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 8> add replicat REPLSO, exttrail /data/oradata/goldengate/data/LGGATE/so,checkpointtable GOLDENGATE.CHECKPOINT;
REPLICAT added.


GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 9> add replicat REPLNA, exttrail /data/oradata/goldengate/data/LGGATE/na,checkpointtable GOLDENGATE.CHECKPOINT;
REPLICAT added.


GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 10> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REPLHR      00:00:00      00:00:09
REPLICAT    STOPPED     REPLNA      00:00:00      00:00:02
REPLICAT    STOPPED     REPLSO      00:00:00      00:00:17


GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 11>

--configure replicat
Replicat REPLSO
--DBOPTIONS INTEGRATEDPARAMS (parallelism 6)
SOURCECHARSET PASSTHRU
DISCARDFILE /opt/oracle/product/gg_191004/dirrpt/REPLSO_discard.txt, append, megabytes 10
USERIDALIAS goldengate
ASSUMETARGETDEFS
MAP bhg.*,TARGET bgh.*;


Replicat REPLNA
--DBOPTIONS INTEGRATEDPARAMS (parallelism 6)
SOURCECHARSET PASSTHRU
DISCARDFILE /opt/oracle/product/gg_191004/dirrpt/REPLNA_discard.txt, append, megabytes 10
USERIDALIAS goldengate
ASSUMETARGETDEFS
MAP all_opi.*,TARGET all_opi.*;

--Start replicat REPLNA
GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 18> start replicat REPLNA, atcsn 10228186709471

Sending START request to MANAGER ...
REPLICAT REPLNA starting

--Start replicat REPLS0
GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 18> start replicat REPLSO, atcsn 10228186709471

Sending START request to MANAGER ...
REPLICAT REPLSO starting

GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 19> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REPLHR      00:00:00      00:00:02
REPLICAT    RUNNING     REPLNA      00:00:00      00:00:01
REPLICAT    RUNNING     REPLSO      00:00:00      00:21:10

Wait until the lag decreases…
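
The remaining lag can be followed with the GGSCI LAG command (shown here as an example; the prompt numbers are arbitrary):

GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 20> lag replicat REPLSO
GGSCI (aixTargetServer-Ora19c as goldengate@LGGATE) 21> lag replicat REPLNA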

GOLDENGATE TEST SYNCHRONIZATION

Add some DML + DDL activity on the source database and check the synchronization with the GoldenGate "stats" command on both servers:

GGSCI (aixSourceServer-Ora9i) 5> stats extract EXTRNA, totalsonly *.*

Sending STATS request to EXTRACT EXTRNA ...

Start of Statistics at 2020-12-18 16:35:00.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                   0.00
        Mapped operations                            0.00
        Unmapped operations                          0.00
        Other operations                             0.00
        Excluded operations                          0.00

Output to /opt/oracle/goldengate/data/DDIP9/na:

Cumulative totals for specified table(s):

*** Total statistics since 2020-12-18 10:42:15 ***
        Total inserts                                8.00
        Total updates                                1.00
        Total deletes                               25.00
        Total discards                               0.00
        Total operations                            34.00

*** Daily statistics since 2020-12-18 10:42:15 ***
        Total inserts                                8.00
        Total updates                                1.00
        Total deletes                               25.00
        Total discards                               0.00
        Total operations                            34.00

*** Hourly statistics since 2020-12-18 16:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2020-12-18 10:42:15 ***
        Total inserts                                8.00
        Total updates                                1.00
        Total deletes                               25.00
        Total discards                               0.00
        Total operations                            34.00

End of Statistics.

GGSCI (aixSourceServer-Ora9i) 10> stats extract EXTRSO, totalsonly *.*

Sending STATS request to EXTRACT EXTRSO ...

Start of Statistics at 2020-12-18 16:36:06.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                   0.00
        Mapped operations                            0.00
        Unmapped operations                          0.00
        Other operations                             0.00
        Excluded operations                          0.00

Output to /opt/oracle/goldengate/data/DDIP9/so:

Cumulative totals for specified table(s):

*** Total statistics since 2020-12-18 10:42:15 ***
        Total inserts                              156.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                           156.00

*** Daily statistics since 2020-12-18 10:42:15 ***
        Total inserts                              156.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                           156.00

*** Hourly statistics since 2020-12-18 16:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2020-12-18 10:42:15 ***
        Total inserts                              156.00
        Total updates                                0.00
        Total deletes                                0.00
        Total discards                               0.00
        Total operations                           156.00

End of Statistics.

--On the target server
GGSCI (aixTargetServer-Ora19c) 5> stats replicat REPLNA, totalsonly *.*

Sending STATS request to REPLICAT REPLNA ...

Start of Statistics at 2020-12-18 16:36:45.

DDL replication statistics:

*** Total statistics since replicat started     ***
        Operations                                         1.00
        Mapped operations                                  1.00
        Unmapped operations                                0.00
        Other operations                                   0.00
        Excluded operations                                0.00
        Errors                                             0.00
        Retried errors                                     0.00
        Discarded errors                                   0.00
        Ignored errors                                     0.00

Cumulative totals for specified table(s):

*** Total statistics since 2020-12-18 11:42:12 ***
        Total inserts                                    526.00
        Total updates                                      1.00
        Total deletes                                    543.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                1070.00

*** Daily statistics since 2020-12-18 11:42:12 ***
        Total inserts                                    526.00
        Total updates                                      1.00
        Total deletes                                    543.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                1070.00

*** Hourly statistics since 2020-12-18 16:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2020-12-18 11:42:12 ***
        Total inserts                                    526.00
        Total updates                                      1.00
        Total deletes                                    543.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                1070.00

End of Statistics.

 

If you want to remove your GoldenGate configuration

on source :
delete trandata hr.*
delete trandata bgh.*
delete trandata all_opi.*
drop user goldengate cascade;
SQL> @ddl_disable

on target :
SQL> drop user goldengate cascade;

User dropped.


SQL> sho parameter goldengate

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_goldengate_replication        boolean     TRUE
resource_manage_goldengate           boolean     FALSE
SQL> alter system set enable_goldengate_replication=FALSE scope = both;

System altered.

SQL> sho parameter goldengate

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
enable_goldengate_replication        boolean     FALSE
resource_manage_goldengate           boolean     FALSE
SQL>

Conclusion

  • Synchronizing an Oracle 9.2.0.6 database to Oracle 19c (Oracle 19.7 in our case) with GoldenGate works! Of course, tests with more activity, as found in real life (production databases), must be done to evaluate all possible problems.
  • Oracle has made some enhancements to the Oracle GoldenGate software: we no longer need any parameter to convert the trail file format between different Oracle GoldenGate versions (as we did in the past between GoldenGate releases prior to 10g and after 10g); the conversion is done automatically.
  • Using GoldenGate to migrate your Oracle 9i database to Oracle 19c must be compared with alternative migration solutions:
    • Transportable tablespace
    • Export/Import or Datapump
  • The focus must be on the downtime available for the migration:
    • If downtime is not a constraint, Oracle Export/Import, Data Pump or Transportable Tablespaces will be the better solution.
    • If near-zero downtime is required, GoldenGate can be a solution, but only if the application team (architect, project manager, developers) participates since, for instance, tables without a primary key will prevent GoldenGate from working properly; the developers must then choose the column(s) that are candidates to become the PK on the source.

The post Migrate Oracle Database 9.2.0.6 to Oracle 19c using GoldenGate appeared first on the dbi services Blog.

How to quickly download the new bunch of 21c Oracle Database documentation?


Last month, Oracle released its new 21c version of the database documentation.
At that time, I was looking for a quick means to get all the books of this so-called 21c Innovation Release.

I remembered that I had used a script to get them all in one run.
A quick look at Google reminded me that I had used the one from Christian Antognini, which hasn't been refreshed for a while.

In this blog, I will provide you with a refreshed script to download all these recent Oracle database docs.
By default, the script will download 109 files and arrange them under the 9 folders below:
– Install and Upgrade
– Administration
– Development
– Security
– Performance
– Clusterware, RAC and Data Guard
– Data Warehousing, ML and OLAP
– Spatial and Graph
– Distributed Data

This will consume about 310 MB of space.

Both Mac and Windows versions are provided at the bottom of this page.

– On Mac, open a terminal window and run <get-21c-docs-on-mac.sh>
– On Windows, open a command prompt and call <get-21c-docs-on-win.cmd>

Both scripts need a working "wget" tool for retrieving the files from https://docs.oracle.com.
"wget" is a small utility from the GNU project.
If you haven't yet installed this tool:

– on Mac, one way to get it is to use "brew", an open-source package manager (more details on brew.sh)

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install wget
get-21c-docs-on-mac.sh

– on Windows, you can also get a recent “wget” binary from here
In the Windows script, replace "C:\Downloaded Products\wget\wget" with the full path to your wget folder and call the script:

get-21c-docs-on-win.cmd
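
Once a script has finished, a quick sanity check (not part of the scripts; shown here for Mac/Linux) is to count the downloaded PDFs and their total size from the download folder:

find . -type f -name "*.pdf" | wc -l    # should report 109 files
du -sh .                                # roughly 310 MB in total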

I hope this will save you a bit of time. Feel free to let me know your comments.

 

get-21c-docs-on-mac.sh


#!/bin/sh
mkdir "Install and Upgrade"
mkdir "Administration"
mkdir "Development"
mkdir "Security"
mkdir "Performance"
mkdir "Clusterware, RAC and Data Guard"
mkdir "Data Warehousing, ML and OLAP"
mkdir "Spatial and Graph"
mkdir "Distributed Data"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/pdf/learning-database-new-features.pdf -O "Database New Features Guide.pdf"
cd "Install and Upgrade"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dblic/database-licensing-information-user-manual.pdf -O "Database Licensing Information User Manual.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rnrdm/database-release-notes.pdf -O "Database Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/upgrd/database-upgrade-guide.pdf -O "Database Upgrade Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/comsc/database-sample-schemas.pdf -O "Database Sample Schemas.pdf"
cd ../"Administration"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "2 Day + Performance Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admqs/2-day-dba.pdf -O "2 Day DBA.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admin/database-administrators-guide.pdf -O "Database Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cncpt/database-concepts.pdf -O "Database Concepts.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/errmg/database-error-messages.pdf -O "Database Error Messages.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/database-reference.pdf -O "Database Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sutil/database-utilities.pdf -O "Database Utilities.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/multi/multitenant-administrators-guide.pdf -O "Multitenant Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rcmrf/database-backup-and-recovery-reference.pdf -O "Database Backup and Recovery Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/bradv/database-backup-and-recovery-users-guide.pdf -O "Database Backup and Recovery User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netag/database-net-services-administrators-guide.pdf -O "Net Services Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netrf/database-net-services-reference.pdf -O "Net Services Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ratug/testing-guide.pdf -O "Testing Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ostmg/automatic-storage-management-administrators-guide.pdf -O "Automatic Storage Management Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/acfsg/automatic-storage-management-cluster-file-system-administrators-guide.pdf -O "Automatic Storage Management Cluster File System Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/unxar/administrators-reference-linux-and-unix-system-based-operating-systems.pdf -O "Administrator's Reference for Linux and UNIX-Based Operating Systems.pdf"
cd ../"Development"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdpjd/2-day-java-developers-guide.pdf -O "2 Day + Java Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdddg/2-day-developers-guide.pdf -O "2 Day Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adfns/database-development-guide.pdf -O "Database Development Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/addci/data-cartridge-developers-guide.pdf -O "Data Cartridge Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpls/database-pl-sql-language-reference.pdf -O "Database PLSQL Language Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/database-pl-sql-packages-and-types-reference.pdf -O "Database PLSQL Packages and Types Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdev/java-developers-guide.pdf -O "Java Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdbc/jdbc-developers-guide.pdf -O "JDBC Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adjsn/json-developers-guide.pdf -O "JSON Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adobj/object-relational-developers-guide.pdf -O "Object-Relational Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lncpp/oracle-c-call-interface-programmers-guide.pdf -O "Oracle C++ Call Interface Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnoci/oracle-call-interface-programmers-guide.pdf -O "Oracle Call Interface Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcc/c-c-programmers-guide.pdf -O "Pro C C++ Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcb/cobol-programmers-guide.pdf -O "Pro COBOL Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adlob/securefiles-and-large-objects-developers-guide.pdf -O "SecureFiles and Large Objects Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/sql-language-reference.pdf -O "SQL Language Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqpug/sqlplus-users-guide-and-reference.pdf -O "SQL Plus User's Guide and Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccapp/text-application-developers-guide.pdf -O "Text Application Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccref/text-reference.pdf -O "Text Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjucp/universal-connection-pool-developers-guide.pdf -O "Universal Connection Pool Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adwsm/workspace-manager-developers-guide.pdf -O "Workspace Manager Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/caxml/xml-c-api-reference.pdf -O "XML C API Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cpxml/xml-c-api-reference.pdf -O "XML C++ API Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdb/xml-db-developers-guide.pdf -O "XML DB Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdk/xml-developers-kit-programmers-guide.pdf -O "XML Developer's Kit Programmer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nlspg/database-globalization-support-guide.pdf -O "Database Globalization Support Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pccrn/c-c-release-notes.pdf -O "Pro C C++ Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pcbrn/cobol-release-notes.pdf -O "Pro COBOL Release Notes.pdf"
cd ../"Security"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbseg/database-security-guide.pdf -O "Database Security Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dvadm/database-vault-administrators-guide.pdf -O "Database Vault Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olsag/label-security-administrators-guide.pdf -O "Label Security Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rasad/real-application-security-administration-console-rasadm-users-guide.pdf -O "Real Application Security Administration Console (RASADM) User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbfsg/real-application-security-administrators-and-developers-guide.pdf -O "Real Application Security Administrator's and Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbimi/enterprise-user-security-administrators-guide.pdf -O "Enterprise User Security Administrator's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/asoag/advanced-security-guide.pdf -O "Advanced Security Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dmksb/oracle-data-masking-and-subsetting-users-guide.pdf -O "Data Masking and Subsetting User's Guide.pdf"
cd ../"Performance"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "Database 2 Day + Performance Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgdba/database-performance-tuning-guide.pdf -O "Database Performance Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/inmem/database-memory-guide.pdf -O "Database In-Memory Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsql/sql-tuning-guide.pdf -O "SQL Tuning Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/vldb-and-partitioning-guide.pdf -O "VLDB and Partitioning Guide.pdf"
cd ../"Clusterware, RAC and Data Guard"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/atnms/autonomous-health-framework-users-guide.pdf -O "Autonomous Health Framework User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cwadd/clusterware-administration-and-deployment-guide.pdf -O "Clusterware Administration and Deployment Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gsmug/global-data-services-concepts-and-administration-guide.pdf -O "Global Data Services Concepts and Administration Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/racad/real-application-clusters-administration-and-deployment-guide.pdf -O "Real Application Clusters Administration and Deployment Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dgbkr/data-guard-broker.pdf -O "Data Guard Broker.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sbydb/data-guard-concepts-and-administration.pdf -O "Data Guard Concepts and Administration.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/shard/using-oracle-sharding.pdf -O "Using Oracle Sharding.pdf"
cd ../"Data Warehousing, ML and OLAP"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dwhsg/database-data-warehousing-guide.pdf -O "Database Data Warehousing Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oread/oracle-machine-learning-r-installation-and-administration-guide.pdf -O "Machine Learning for R Installation and Administration Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/omlrl/oracle-machine-learning-r-licensing-information-user-manual.pdf -O "Machine Learning for R Licensing Information User Manual.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/orern/oracle-machine-learning-r-release-notes.pdf -O "Machine Learning for R Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oreug/oracle-machine-learning-r-users-guide.pdf -O "Machine Learning for R User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmapi/oracle-machine-learning-sql-api-guide.pdf -O "Machine Learning for SQL API Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmcon/oracle-machine-learning-sql-concepts.pdf -O "Machine Learning for SQL Concepts.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmprg/oracle-machine-learning-sql-users-guide.pdf -O "Machine Learning for SQL User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/oladm/olap-dml-reference.pdf -O "OLAP DML Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaap/olap-java-api-developers-guide.pdf -O "OLAP Expression Syntax Reference.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaap/olap-java-api-developers-guide.pdf -O "OLAP Java API Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaug/olap-users-guide.pdf -O "OLAP User's Guide.pdf"
cd ../"Spatial and Graph"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/spatl/spatial-developers-guide.pdf -O "Spatial Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/geors/spatial-georaster-developers-guide.pdf -O "Spatial GeoRaster Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jimpv/spatial-map-visualization-developers-guide.pdf -O "Spatial Map Visualization Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/topol/spatial-topology-and-network-data-model-developers-guide.pdf -O "Spatial Topology and Network Data Model Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/property-graph/20.4/spgdg/oracle-graph-property-graph-developers-guide.pdf -O "Oracle Graph Property Graph Developer's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rdfrm/graph-developers-guide-rdf-graph.pdf -O "Graph Developer's Guide for RDF Graph.pdf"
cd ../"Distributed Data"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adque/database-transactional-event-queues-and-advanced-queuing-users-guide.pdf -O "Database Transactional Event Queues and Advanced Queuing User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdas/provider-drda-users-guide.pdf -O "Provider for DRDA User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdaa/sql-translation-and-migration-guide.pdf -O "SQL Translation and Migration Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appci/database-gateway-appc-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway for APPC Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appcw/database-gateway-appc-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway for APPC Installation and Configuration Guide for Microsoft Windows.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appug/database-gateway-appc-users-guide.pdf -O "Database Gateway for APPC User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdag/database-gateway-drda-users-guide.pdf -O "Database Gateway for DRDA User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tginu/database-gateway-informix-users-guide.pdf -O "Database Gateway for Informix User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcu/database-gateway-odbc-users-guide.pdf -O "Database Gateway for ODBC User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gmswn/database-gateway-sql-server-users-guide.pdf -O "Database Gateway for SQL Server User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsyu/database-gateway-sybase-users-guide.pdf -O "Database Gateway for Sybase User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgteu/database-gateway-teradata-users-guide.pdf -O "Database Gateway for Teradata User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/wsmqg/database-gateway-websphere-mq-installation-and-users-guide.pdf -O "Database Gateway for WebSphere MQ Installation and User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgis/database-gateway-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgiw/database-gateway-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway Installation and Configuration Guide for Microsoft Windows.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/heter/heterogeneous-connectivity-users-guide.pdf -O "Heterogeneous Connectivity User's Guide.pdf"
wget -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/xstrm/xstream-guide.pdf -O "XStream Guide.pdf"
cd ..
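
Once all downloads have finished, a quick sanity check helps to spot files that failed silently; this is just a sketch and the 10k size threshold is an arbitrary assumption:

# list PDFs that look suspiciously small (probably failed or truncated downloads)
find . -type f -name '*.pdf' -size -10k -exec ls -lh {} \;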

get-21c-docs-on-win.cmd

@echo off
mkdir "Install and Upgrade"
mkdir "Administration"
mkdir "Development"
mkdir "Security"
mkdir "Performance"
mkdir "Clusterware, RAC and Data Guard"
mkdir "Data Warehousing, ML and OLAP"
mkdir "Spatial and Graph"
mkdir "Distributed Data"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/pdf/learning-database-new-features.pdf -O "Database New Features Guide.pdf"
cd "Install and Upgrade"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dblic/database-licensing-information-user-manual.pdf -O "Database Licensing Information User Manual.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rnrdm/database-release-notes.pdf -O "Database Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/upgrd/database-upgrade-guide.pdf -O "Database Upgrade Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/comsc/database-sample-schemas.pdf -O "Database Sample Schemas.pdf"
cd ../"Administration"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "2 Day + Performance Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admqs/2-day-dba.pdf -O "2 Day DBA.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/admin/database-administrators-guide.pdf -O "Database Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cncpt/database-concepts.pdf -O "Database Concepts.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/errmg/database-error-messages.pdf -O "Database Error Messages.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/refrn/database-reference.pdf -O "Database Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sutil/database-utilities.pdf -O "Database Utilities.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/multi/multitenant-administrators-guide.pdf -O "Multitenant Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rcmrf/database-backup-and-recovery-reference.pdf -O "Database Backup and Recovery Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/bradv/database-backup-and-recovery-users-guide.pdf -O "Database Backup and Recovery User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netag/database-net-services-administrators-guide.pdf -O "Net Services Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/netrf/database-net-services-reference.pdf -O "Net Services Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ratug/testing-guide.pdf -O "Testing Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ostmg/automatic-storage-management-administrators-guide.pdf -O "Automatic Storage Management Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/acfsg/automatic-storage-management-cluster-file-system-administrators-guide.pdf -O "Automatic Storage Management Cluster File System Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/unxar/administrators-reference-linux-and-unix-system-based-operating-systems.pdf -O "Administrator's Reference for Linux and UNIX-Based Operating Systems.pdf"
cd ../"Development"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdpjd/2-day-java-developers-guide.pdf -O "2 Day + Java Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdddg/2-day-developers-guide.pdf -O "2 Day Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adfns/database-development-guide.pdf -O "Database Development Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/addci/data-cartridge-developers-guide.pdf -O "Data Cartridge Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpls/database-pl-sql-language-reference.pdf -O "Database PLSQL Language Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/database-pl-sql-packages-and-types-reference.pdf -O "Database PLSQL Packages and Types Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdev/java-developers-guide.pdf -O "Java Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjdbc/jdbc-developers-guide.pdf -O "JDBC Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adjsn/json-developers-guide.pdf -O "JSON Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adobj/object-relational-developers-guide.pdf -O "Object-Relational Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lncpp/oracle-c-call-interface-programmers-guide.pdf -O "Oracle C++ Call Interface Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnoci/oracle-call-interface-programmers-guide.pdf -O "Oracle Call Interface Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcc/c-c-programmers-guide.pdf -O "Pro C C++ Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpcb/cobol-programmers-guide.pdf -O "Pro COBOL Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adlob/securefiles-and-large-objects-developers-guide.pdf -O "SecureFiles and Large Objects Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/sql-language-reference.pdf -O "SQL Language Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqprn/sqlplus-release-notes.pdf -O "SQL Plus Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sqpug/sqlplus-users-guide-and-reference.pdf -O "SQL Plus User's Guide and Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccapp/text-application-developers-guide.pdf -O "Text Application Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/ccref/text-reference.pdf -O "Text Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jjucp/universal-connection-pool-developers-guide.pdf -O "Universal Connection Pool Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adwsm/workspace-manager-developers-guide.pdf -O "Workspace Manager Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/caxml/xml-c-api-reference.pdf -O "XML C API Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cpxml/xml-c-api-reference.pdf -O "XML C++ API Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdb/xml-db-developers-guide.pdf -O "XML DB Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adxdk/xml-developers-kit-programmers-guide.pdf -O "XML Developer's Kit Programmer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/nlspg/database-globalization-support-guide.pdf -O "Database Globalization Support Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pccrn/c-c-release-notes.pdf -O "Pro C C++ Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/pcbrn/cobol-release-notes.pdf -O "Pro COBOL Release Notes.pdf"
cd ../"Security"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbseg/database-security-guide.pdf -O "Database Security Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dvadm/database-vault-administrators-guide.pdf -O "Database Vault Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olsag/label-security-administrators-guide.pdf -O "Label Security Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rasad/real-application-security-administration-console-rasadm-users-guide.pdf -O "Real Application Security Administration Console (RASADM) User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbfsg/real-application-security-administrators-and-developers-guide.pdf -O "Real Application Security Administrator's and Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dbimi/enterprise-user-security-administrators-guide.pdf -O "Enterprise User Security Administrator's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/asoag/advanced-security-guide.pdf -O "Advanced Security Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/12.2/dmksb/oracle-data-masking-and-subsetting-users-guide.pdf -O "Data Masking and Subsetting User's Guide.pdf"
cd ../"Performance"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tdppt/2-day-performance-tuning-guide.pdf -O "Database 2 Day + Performance Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgdba/database-performance-tuning-guide.pdf -O "Database Performance Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/inmem/database-memory-guide.pdf -O "Database In-Memory Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsql/sql-tuning-guide.pdf -O "SQL Tuning Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/vldb-and-partitioning-guide.pdf -O "VLDB and Partitioning Guide.pdf"
cd ../"Clusterware, RAC and Data Guard"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/atnms/autonomous-health-framework-users-guide.pdf -O "Autonomous Health Framework User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/cwadd/clusterware-administration-and-deployment-guide.pdf -O "Clusterware Administration and Deployment Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gsmug/global-data-services-concepts-and-administration-guide.pdf -O "Global Data Services Concepts and Administration Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/racad/real-application-clusters-administration-and-deployment-guide.pdf -O "Real Application Clusters Administration and Deployment Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dgbkr/data-guard-broker.pdf -O "Data Guard Broker.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/sbydb/data-guard-concepts-and-administration.pdf -O "Data Guard Concepts and Administration.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/shard/using-oracle-sharding.pdf -O "Using Oracle Sharding.pdf"
cd ../"Data Warehousing, ML and OLAP"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/dwhsg/database-data-warehousing-guide.pdf -O "Database Data Warehousing Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oread/oracle-machine-learning-r-installation-and-administration-guide.pdf -O "Machine Learning for R Installation and Administration Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/omlrl/oracle-machine-learning-r-licensing-information-user-manual.pdf -O "Machine Learning for R Licensing Information User Manual.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/orern/oracle-machine-learning-r-release-notes.pdf -O "Machine Learning for R Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4r/1.5.1/oreug/oracle-machine-learning-r-users-guide.pdf -O "Machine Learning for R User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmapi/oracle-machine-learning-sql-api-guide.pdf -O "Machine Learning for SQL API Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmcon/oracle-machine-learning-sql-concepts.pdf -O "Machine Learning for SQL Concepts.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/machine-learning/oml4sql/21/dmprg/oracle-machine-learning-sql-users-guide.pdf -O "Machine Learning for SQL User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/oladm/olap-dml-reference.pdf -O "OLAP DML Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaap/olap-java-api-developers-guide.pdf -O "OLAP Expression Syntax Reference.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaap/olap-java-api-developers-guide.pdf -O "OLAP Java API Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/olaug/olap-users-guide.pdf -O "OLAP User's Guide.pdf"
cd ../"Spatial and Graph"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/spatl/spatial-developers-guide.pdf -O "Spatial Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/geors/spatial-georaster-developers-guide.pdf -O "Spatial GeoRaster Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/jimpv/spatial-map-visualization-developers-guide.pdf -O "Spatial Map Visualization Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/topol/spatial-topology-and-network-data-model-developers-guide.pdf -O "Spatial Topology and Network Data Model Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/property-graph/20.4/spgdg/oracle-graph-property-graph-developers-guide.pdf -O "Oracle Graph Property Graph Developer's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/rdfrm/graph-developers-guide-rdf-graph.pdf -O "Graph Developer's Guide for RDF Graph.pdf"
cd ../"Distributed Data"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/adque/database-transactional-event-queues-and-advanced-queuing-users-guide.pdf -O "Database Transactional Event Queues and Advanced Queuing User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcr/odbc-driver-release-notes.pdf -O "ODBC Driver Release Notes.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdas/provider-drda-users-guide.pdf -O "Provider for DRDA User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdaa/sql-translation-and-migration-guide.pdf -O "SQL Translation and Migration Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appci/database-gateway-appc-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway for APPC Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appcw/database-gateway-appc-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway for APPC Installation and Configuration Guide for Microsoft Windows.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/appug/database-gateway-appc-users-guide.pdf -O "Database Gateway for APPC User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/drdag/database-gateway-drda-users-guide.pdf -O "Database Gateway for DRDA User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tginu/database-gateway-informix-users-guide.pdf -O "Database Gateway for Informix User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/odbcu/database-gateway-odbc-users-guide.pdf -O "Database Gateway for ODBC User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/gmswn/database-gateway-sql-server-users-guide.pdf -O "Database Gateway for SQL Server User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgsyu/database-gateway-sybase-users-guide.pdf -O "Database Gateway for Sybase User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/tgteu/database-gateway-teradata-users-guide.pdf -O "Database Gateway for Teradata User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/wsmqg/database-gateway-websphere-mq-installation-and-users-guide.pdf -O "Database Gateway for WebSphere MQ Installation and User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgis/database-gateway-installation-and-configuration-guide-aix-5l-based-systems-64-bit-hp-ux-itanium-solaris-operating-system-sparc-64-bit-linux-x86-and-linux-x86-64.pdf -O "Database Gateway Installation and Configuration Guide for AIX 5L Based Systems (64-Bit), HP-UX Itanium, Solaris Operating System (SPARC 64-Bit), Linux x86, and Linux x86-64.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/otgiw/database-gateway-installation-and-configuration-guide-microsoft-windows.pdf -O "Database Gateway Installation and Configuration Guide for Microsoft Windows.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/heter/heterogeneous-connectivity-users-guide.pdf -O "Heterogeneous Connectivity User's Guide.pdf"
"C:\Downloaded Products\wget\wget" --no-check-certificate -nv https://docs.oracle.com/en/database/oracle/oracle-database/21/xstrm/xstream-guide.pdf -O "XStream Guide.pdf"
cd ..

 

This article How to quickly download the new bunch of 21c Oracle Database documentation? first appeared on the dbi services Blog.

Handling unified auditing spillover files on the standby-site


Switching to Oracle Unified Auditing may produce lots of data when, for example, auditing activities of the SYS user. According to the documentation, you can do the following to audit similarly to traditional auditing with audit_sys_operations=TRUE:

SQL> CREATE AUDIT POLICY TOPLEVEL_ACTIONS ACTIONS ALL ONLY TOPLEVEL;
SQL> AUDIT POLICY TOPLEVEL_ACTIONS BY SYS;

REMARK1: You may check the Blog on traditional SYS-auditing here
REMARK2: Tests in this Blog were done with Oracle 19.9. Auditing toplevel operations was not possible before 19c.

With unified auditing all data is written to the database, except when writing to the database is not possible. If the database is, for example, not open, then spillover files are written to the OS. By default, files with the extension .bin are written to $ORACLE_BASE/audit/$ORACLE_SID. E.g.

oracle@boda1:/u01/app/oracle/audit/TISMED/ [TISMED] ls -ltr | tail -10
-rw-------. 1 oracle oinstall    6656 Jan  8 15:30 ora_audit_031.bin
-rw-------. 1 oracle oinstall   98304 Jan  8 15:35 ora_audit_0251.bin
-rw-------. 1 oracle oinstall    1536 Jan  8 15:57 ora_audit_0253.bin
-rw-------. 1 oracle oinstall 3140096 Jan  8 15:57 ora_audit_019.bin
-rw-------. 1 oracle oinstall    2048 Jan  8 15:57 ora_audit_0256.bin
-rw-------. 1 oracle oinstall    3584 Jan  8 15:58 ora_audit_0252.bin
-rw-------. 1 oracle oinstall    4096 Jan  8 15:58 ora_audit_0238.bin
-rw-------. 1 oracle oinstall    2048 Jan  8 15:58 ora_audit_024.bin
-rw-------. 1 oracle oinstall  314368 Jan  8 15:58 ora_audit_030.bin
-rw-------. 1 oracle oinstall    2048 Jan  8 15:58 ora_audit_029.bin
oracle@boda1:/u01/app/oracle/audit/TISMED/ [TISMED] 

You get an idea of what's in the files by running strings on them:


oracle@boda1:/u01/app/oracle/audit/TISMED/ [TISMED] strings ora_audit_030.bin | head -20
ANG Spillover Audit File
ORAAUDNG
oracle
boda2.localdomain
pts/0
(TYPE=(DATABASE));(CLIENT ADDRESS=((ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.6)(PORT=18890))));	
PUBLIC
oracle@boda2.localdomain (TNS V1-V3)
`)',
26801
SYSOPR
PUBLIC:
ORAAUDNG
oracle
boda2.localdomain
pts/0
(TYPE=(DATABASE));(CLIENT ADDRESS=((ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.6)(PORT=18892))));	
PUBLIC
oracle@boda2.localdomain (TNS V1-V3)
26813
oracle@boda1:/u01/app/oracle/audit/TISMED/ [TISMED] 

The data in the spillover-files is visible in the database when querying e.g. unified_audit_trail, as the following simple example shows:

SQL> select count(*) from unified_audit_trail;

  COUNT(*)
----------
    237306

SQL> !mv /u01/app/oracle/audit/TISMED /u01/app/oracle/audit/TISMED_tmp

SQL> select count(*) from unified_audit_trail;

  COUNT(*)
----------
    233816

SQL> !mv /u01/app/oracle/audit/TISMED_tmp /u01/app/oracle/audit/TISMED

SQL> select count(*) from unified_audit_trail;

  COUNT(*)
----------
    237328

SQL> 

I.e. moving the spillover directory to a new name results in less data being shown in UNIFIED_AUDIT_TRAIL. The view UNIFIED_AUDIT_TRAIL is a UNION ALL of the view v$unified_audit_trail and the table audsys.aud$unified (you may check $ORACLE_HOME/rdbms/admin/catuat.sql to see how UNIFIED_AUDIT_TRAIL is defined). The data of the spillover-files comes from the view v$unified_audit_trail:

SQL> select count(*) from v$unified_audit_trail;

  COUNT(*)
----------
      3501

SQL> select count(*) from audsys.aud$unified;

  COUNT(*)
----------
    233845

SQL> !mv /u01/app/oracle/audit/TISMED /u01/app/oracle/audit/TISMED_tmp

SQL> select count(*) from v$unified_audit_trail;

  COUNT(*)
----------
	 0

SQL> select count(*) from audsys.aud$unified;

  COUNT(*)
----------
    233849

SQL> !mv /u01/app/oracle/audit/TISMED_tmp /u01/app/oracle/audit/TISMED

SQL> select count(*) from v$unified_audit_trail;

  COUNT(*)
----------
      3501

SQL> 

Oracle provides the following procedure to load the data of the spillover-files into the database:

SQL> exec DBMS_AUDIT_MGMT.LOAD_UNIFIED_AUDIT_FILES;

The issue I want to talk about in this blog is spillover files on standby databases. Standby databases are usually running in MOUNT-mode and hence are not writable. I.e. all audit-data produced on the standby-DBs goes to spillover-files. If you haven't switched over to the standby database for a while and loaded the spillover-files into the database, then there could be quite a lot of data in the spillover-directory. I saw systems with e.g. 10GB of data in the spillover-directory, especially when doing rman-backups on the standby-site.
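
To get an idea of how much data has piled up on the standby host, a quick check on the OS is enough (paths as in the examples above; adjust ORACLE_BASE and ORACLE_SID to your environment):

du -sh /u01/app/oracle/audit/${ORACLE_SID}
find /u01/app/oracle/audit/${ORACLE_SID} -maxdepth 1 -name 'ora_audit_*.bin' | wc -l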

That causes 3 issues:

1.) If you move your audit data on your primary database to history tables then those history-tables may not contain the full truth, because audit-records of spillover-files on the standby-DBs are not visible in the history tables.
2.) After a switchover a query on unified_audit_trail may be very slow, because reading spillover-files is slower than reading from the database.
3.) Loading the spillover files after a switchover to the new primary database may take a long time and causes the SYSAUX-tablespace to grow significantly.

So the question is: How to handle the spillover-files on the standby-DB?

One possibility is to just remove them regularly on the OS, if your security rules can afford losing the auditing data of the standby-DBs. If that's not possible, then you may move the spillover-files regularly to a backup-location, so that you can restore them if necessary (a small sketch of such a script follows after the two points below). The third alternative I've tested was to copy the spillover-files from the standby DB to the primary DB and load them there. That worked for me, but 2 things have to be considered:

1. Spillover-files are not unique across all DBs in a Data Guard configuration. I.e. don't just copy files over to the primary. You have to move away the primary spillover directory first and restore it when the data has been loaded.
2. The procedure is not documented and has to be confirmed by Oracle Support.
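
For the first two alternatives (removing or archiving the spillover-files on the OS), a small script scheduled via cron on the standby host can do the job. This is only a sketch: the paths, the retention and the backup location are assumptions, and removing or moving audit data must of course be approved by your security team.

# archive spillover files on the standby host (sketch)
SRC=/u01/app/oracle/audit/${ORACLE_SID}
DST=/backup/audit_spillover/${ORACLE_SID}/$(date +%Y%m%d)
mkdir -p "${DST}"
# only touch files modified more than 24 hours ago, newer ones may still be written to
find "${SRC}" -maxdepth 1 -name 'ora_audit_*.bin' -mtime +0 -exec mv {} "${DST}/" \;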

E.g. here is the process to copy spillover-files from the standby to the primary and load them there:

1. Move away the primary spillover-folder

SQL> select count(*) from v$unified_audit_trail uview;

  COUNT(*)
----------
       239

SQL> !mv /u01/app/oracle/audit/${ORACLE_SID} /u01/app/oracle/audit/${ORACLE_SID}_tmp

SQL> select count(*) from v$unified_audit_trail uview;

  COUNT(*)
----------
	 0

2. Copy spillover-files from standby to primary

SQL> !scp -p -r standby:/u01/app/oracle/audit/${ORACLE_SID} /u01/app/oracle/audit
SQL> select count(*) from v$unified_audit_trail uview;

  COUNT(*)
----------
       101

SQL> exec DBMS_AUDIT_MGMT.LOAD_UNIFIED_AUDIT_FILES;

PL/SQL procedure successfully completed.

SQL> select count(*) from v$unified_audit_trail uview;

  COUNT(*)
----------
	24

–> Not all files were loaded. That’s normal.

3. Backup the current spillover-directory and restore the original spillover-directory

SQL> !mv /u01/app/oracle/audit/${ORACLE_SID} /u01/app/oracle/audit/${ORACLE_SID}_standby

SQL> !mv /u01/app/oracle/audit/${ORACLE_SID}_tmp /u01/app/oracle/audit/${ORACLE_SID}

SQL> select count(*) from v$unified_audit_trail uview;

  COUNT(*)
----------
       239

4. Backup the spillover-folder on the standby-site. I.e. do the following command on the standby-host

SQL> !mv /u01/app/oracle/audit/${ORACLE_SID} /u01/app/oracle/audit/${ORACLE_SID}_backup

5. Copy the remaining standby-files not loaded back to the standby host. I.e. do the following command on the standby-host

SQL> !scp -p -r primary:/u01/app/oracle/audit/${ORACLE_SID}_standby /u01/app/oracle/audit/${ORACLE_SID}
SQL> select count(*) from v$unified_audit_trail uview;

  COUNT(*)
----------
	24

REMARK: The procedure above is not 100% correct, because it doesn't consider spillover-files produced while performing the steps above.

Summary: Many people running standby-DBs with Unified Auditing active may not have realized that there are potential issues with spillover-files at the standby-site. Those spillover files on the standby have to be considered. The easiest method is to just remove them regularly on the OS, provided that has been approved by the security team.

This article Handling unified auditing spillover files on the standby-site first appeared on the dbi services Blog.

Oracle 21c: Blockchain Tables


Oracle Blockchain Tables

With Oracle Database 20c/21c the new feature Oracle Blockchain Tables has been introduced.

Blockchain Tables enable Oracle Database users to implement tamper-resistant data management without distributing a ledger across multiple parties.

Database security can be improved by using Blockchain Tables to prevent both user fraud and administrator fraud.

One of the main characteristics of Oracle Blockchain Tables is that you can only append data. Table rows are chained using a cryptographic hashing approach.

In addition, to avoid administrator or identity fraud, rows can optionally be signed with PKI (public key infrastructure) based on the user’s private key.

Use cases include centralized storage of compliance data, audit trails or clinical trial data.

Let’s have a look how it works.

Creating an Oracle Blockchain Table:
Quite easy. I've used Oracle Database 20.3:

select version_full from v$instance;
VERSION_FULL     
-----------------
20.3.0.0.0

CREATE BLOCKCHAIN TABLE bank_ledger (bank VARCHAR2(128), deposit_date DATE, deposit_amount NUMBER)
         NO DROP UNTIL 31 DAYS IDLE
         NO DELETE LOCKED
         HASHING USING "SHA2_512" VERSION "v1";
Error report -
ORA-05729: blockchain table cannot be created in root container

select name, pdb from v$services;

alter session set container = pdb1;

CREATE BLOCKCHAIN TABLE bank_ledger (bank VARCHAR2(128), deposit_date DATE, deposit_amount NUMBER)
         NO DROP UNTIL 31 DAYS IDLE
         NO DELETE LOCKED
         HASHING USING "SHA2_512" VERSION "v1";
Blockchain TABLE created.

Changing retention period on Blockchain Tables:
The table was created with a retention time of “31 DAYS IDLE”. Can we reset that value?

ALTER TABLE bank_ledger NO DROP UNTIL 16 DAYS IDLE; 
Error report - 
ORA-05732: retention value cannot be lowered 

ALTER TABLE bank_ledger NO DROP UNTIL 42 days idle; 
Table BANK_LEDGER altered.

Appending Data in Oracle Blockchain Tables:
That’s working fine.

SELECT user_name, distinguished_name, 
          UTL_RAW.LENGTH(certificate_guid) CERT_GUID_LEN, 
          DBMS_LOB.GETLENGTH(certificate) CERT_LEN 
          FROM DBA_CERTIFICATES ORDER BY user_name; 
no rows selected
 
desc bank_ledger
Name           Null? Type
-------------- ----- -------------
BANK                 VARCHAR2(128)
DEPOSIT_DATE         DATE
DEPOSIT_AMOUNT       NUMBER

select * from bank_ledger; 
no rows selected
... 
1 row inserted. 
1 row inserted. 
1 row inserted.

BANK             DEPOSIT_           DEPOSIT_AMOUNT 
-------------------------------------------------- 
UBS              01.01.20           444000000 
Credit Suisse    02.02.20           22000000 
Vontobel         03.03.20           1000000

DML and DDL on Oracle Blockchain Tables:
Let’s try to change some data.

update bank_ledger set deposit_amount=10000 where bank like 'UBS';
Error starting at line : 1 in command -
update bank_ledger set deposit_amount=10000 where bank like 'UBS'
Error at Command Line : 1 Column : 8
Error report -
SQL Error: ORA-05715: operation not allowed on the blockchain table

delete from bank_ledger where bank like 'UBS';
Error starting at line : 1 in command -
delete from bank_ledger where bank like 'UBS'
Error at Command Line : 1 Column : 13
Error report -
SQL Error: ORA-05715: operation not allowed on the blockchain table

drop table bank_ledger;
Error starting at line : 1 in command -
drop table bank_ledger
Error report -
ORA-05723: drop blockchain table BANK_LEDGER not allowed

Copying data from an Oracle Blockchain Table:
Ok, we can't change data in the original table, so let's try to copy it.

create tablespace bank_data;
Tablespace BANK_DATA created.

CREATE BLOCKCHAIN TABLE bad_bank_ledger (bank VARCHAR2(128), deposit_date DATE, deposit_amount NUMBER) 
         NO DROP UNTIL 31 DAYS IDLE
         NO DELETE LOCKED
         HASHING USING "SHA2_512" VERSION "v1"
         tablespace bank_data;
Blockchain TABLE created.

insert into bad_bank_ledger select * from bank_ledger;
Error starting at line : 1 in command -
insert into bad_bank_ledger select * from bank_ledger
Error at Command Line : 1 Column : 13
Error report -
SQL Error: ORA-05715: operation not allowed on the blockchain table

Alternative actions on Oracle Blockchain Tables:
Can we move tablespaces or try to replace tables?

insert into bad_bank_ledger values ('Vader', '09-09-2099', '999999999');
insert into bad_bank_ledger values ('Blofeld', '07-07-1977', '7777777');
insert into bad_bank_ledger values ('Lecter', '08-08-1988', '888888');

1 row inserted.
1 row inserted.
1 row inserted.

select * from bad_bank_ledger;
BANK                                   DEPOSIT_ DEPOSIT_AMOUNT
----------------------------------------------- --------------
Vader                                  09.09.99      999999999
Blofeld                                07.07.77        7777777
Lecter                                 08.08.88         888888

create table new_bad_bank_ledger as select * from bad_bank_ledger;
Table NEW_BAD_BANK_LEDGER created.

update new_bad_bank_ledger set deposit_amount = 666666 where bank like 'Blofeld';
1 row updated.
commit;
commit complete.

select * from new_bad_bank_ledger;
BANK                                   DEPOSIT_ DEPOSIT_AMOUNT
----------------------------------------------- --------------
Vader                                  09.09.99      999999999
Blofeld                                07.07.77         666666
Lecter                                 08.08.88         888888

drop table bad_bank_ledger;
Error starting at line : 1 in command -
drop table bad_bank_ledger
Error report -
ORA-05723: drop blockchain table BAD_BANK_LEDGER not allowed

drop tablespace bank_data INCLUDING CONTENTS and datafiles;
Error starting at line : 1 in command -
drop tablespace bank_data INCLUDING CONTENTS and datafiles
Error report -
ORA-00604: error occurred at recursive SQL level 1
ORA-05723: drop blockchain table BAD_BANK_LEDGER not allowed
00604. 00000 -  "error occurred at recursive SQL level %s"
*Cause:    An error occurred while processing a recursive SQL statement
           (a statement applying to internal dictionary tables).
*Action:   If the situation described in the next error on the stack
           can be corrected, do so; otherwise contact Oracle Support.

Move or Compress on Oracle Blockchain Table:
Table operations are forbidden in either case.

alter table bank_ledger move tablespace bank_data COMPRESS;
Error starting at line : 1 in command -
alter table bank_ledger move tablespace bank_data COMPRESS
Error report -
ORA-05715: operation not allowed on the blockchain table

alter table bank_ledger move tablespace bank_data;
Error starting at line : 1 in command -
alter table bank_ledger move tablespace bank_data
Error report -
ORA-05715: operation not allowed on the blockchain table

Hidden Columns in Oracle Blockchain Tables:
Every row is identified by hidden attributes.

col table_name for a40
set lin 999
set pages 100

SELECT * FROM user_blockchain_tables;
desc bank_ledger
SELECT column_name, hidden_column FROM user_tab_cols WHERE table_name='BANK_LEDGER';

TABLE_NAME                         ROW_RETENTION ROW TABLE_INACTIVITY_RETENTION HASH_ALG
------------------------------------------------ --- -------------------------- --------
BANK_LEDGER                                  YES                             42 SHA2_512
BAD_BANK_LEDGER                              YES                             31 SHA2_512

Name           Null? Type          
-------------- ----- ------------- 
BANK                 VARCHAR2(128) 
DEPOSIT_DATE         DATE          
DEPOSIT_AMOUNT       NUMBER        

COLUMN_NAME                            HID
-------------------------------------- ---
ORABCTAB_SIGNATURE$                    YES
ORABCTAB_SIGNATURE_ALG$                YES
ORABCTAB_SIGNATURE_CERT$               YES
ORABCTAB_SPARE$                        YES
BANK                                   NO 
DEPOSIT_DATE                           NO 
DEPOSIT_AMOUNT                         NO 
ORABCTAB_INST_ID$                      YES
ORABCTAB_CHAIN_ID$                     YES
ORABCTAB_SEQ_NUM$                      YES
ORABCTAB_CREATION_TIME$                YES
ORABCTAB_USER_NUMBER$                  YES
ORABCTAB_HASH$                         YES
13 rows selected. 

set colinvisible on
desc bank_ledger
Name                                 Null? Type                        
------------------------------------ ----- --------------------------- 
BANK                                       VARCHAR2(128)               
DEPOSIT_DATE                               DATE                        
DEPOSIT_AMOUNT                             NUMBER                      
ORABCTAB_SPARE$ (INVISIBLE)                RAW(2000 BYTE)              
ORABCTAB_SIGNATURE_ALG$ (INVISIBLE)        NUMBER                      
ORABCTAB_SIGNATURE$ (INVISIBLE)            RAW(2000 BYTE)              
ORABCTAB_HASH$ (INVISIBLE)                 RAW(2000 BYTE)              
ORABCTAB_SIGNATURE_CERT$ (INVISIBLE)       RAW(16 BYTE)                
ORABCTAB_CHAIN_ID$ (INVISIBLE)             NUMBER                      
ORABCTAB_SEQ_NUM$ (INVISIBLE)              NUMBER                      
ORABCTAB_CREATION_TIME$ (INVISIBLE)        TIMESTAMP(6) WITH TIME ZONE 
ORABCTAB_USER_NUMBER$ (INVISIBLE)          NUMBER                      
ORABCTAB_INST_ID$ (INVISIBLE)              NUMBER    

set lin 999
set pages 100
col bank for a40

select bank, deposit_date, orabctab_creation_time$ from bank_ledger;
BANK                                     DEPOSIT_ ORABCTAB_CREATION_TIME$        
---------------------------------------- -------- -------------------------------
UBS                                      01.01.20 25.09.20 13:17:03.946615000 GMT
Credit Suisse                            02.02.20 25.09.20 13:17:03.951545000 GMT
Vontobel                                 03.03.20 25.09.20 13:17:03.952064000 GMT
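
Since every row carries its hash (and optionally its signature) in these hidden columns, the chain itself can be verified with DBMS_BLOCKCHAIN_TABLE. The following is only a sketch based on the VERIFY_ROWS procedure as documented for 21c: the owner DEMOUSER and the PDB name are assumptions, the parameter names should be double-checked against your release, and signature verification is switched off because no certificates were registered above.

sqlplus -s / as sysdba <<'EOF'
alter session set container = pdb1;
set serveroutput on
declare
  l_rows_verified number;
begin
  dbms_blockchain_table.verify_rows(
    schema_name             => 'DEMOUSER',      -- assumption: owner of bank_ledger
    table_name              => 'BANK_LEDGER',
    number_of_rows_verified => l_rows_verified,
    verify_signature        => false);          -- no PKI signatures were added in this demo
  dbms_output.put_line('Rows verified: ' || l_rows_verified);
end;
/
EOF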

We see that it is not possible to modify an Oracle Blockchain Table at the database level. To protect against manipulation by users with root access, there are several possibilities, e.g. transferring cryptographic hashes and user signatures systematically to external vaults, which would enable you to detect tampering and recover data in most disaster scenarios.
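
A simple way to implement such an external vault is to export the hidden per-row hashes regularly to a location outside the database server and compare them later. A minimal sketch, assuming the demo table above; the owner DEMOUSER, the PDB name and the target path are assumptions:

sqlplus -s / as sysdba <<'EOF' > /secure_vault/bank_ledger_hashes_$(date +%Y%m%d).csv
alter session set container = pdb1;
set pagesize 0 linesize 32767 trimspool on feedback off
select orabctab_inst_id$ ||','|| orabctab_chain_id$ ||','|| orabctab_seq_num$ ||','|| orabctab_hash$
from demouser.bank_ledger;
EOF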

Resources:

https://www.oracle.com/blockchain/#blockchain-platform-tab

https://docs.oracle.com/en/cloud/paas/blockchain-cloud/user/create-rich-history-database.html#GUID-266145A1-EF3A-4917-B174-C50D4DB1A0E3

https://docs.oracle.com/en/database/oracle/oracle-database/21/nfcon/details-oracle-blockchain-table-282449857.html

https://docs.oracle.com/en/database/oracle/oracle-database/21/admin/managing-tables.html#GUID-43470B0C-DE4A-4640-9278-B066901C3926

This article Oracle 21c: Blockchain Tables first appeared on the dbi services Blog.


How long does it take to redeploy an ODA X8-2M?


Introduction

One of the advantages of the Oracle Database Appliance is its fast deployment. Most often, the initial setup of a lite ODA (with a reimaging) only takes half a day, from unboxing until a first test database is available. Trust me, it's hard to do better. For various reasons, for example if you need to patch to the latest release and would have to apply an intermediate patch first, if you plan to change your network settings, or if your ODA does not work correctly anymore, you can consider redeploying your ODA from scratch, as it will not take days. But as you may know, redeployment means that everything will be erased, and you will have to rebuild all your databases, apply your various settings again, and so on. As a consequence, redeploying an ODA is only something you will consider if you use Data Guard or Dbvisit Standby in order to switch your databases to another ODA before the redeployment. A standalone ODA is never a comfortable solution, and you'll probably never be able to redeploy it.

Apart from your DBA job, redeploying an ODA is split into 2 tasks: reimaging the system and creating the appliance. Reimaging is basically an OS installation that erases the system disks; creating the appliance installs the whole Oracle stack, from the Linux user creation to a first database creation.

But how much time do you need to redeploy an ODA? Let’s find out some clues based on a redeployment I did a few days ago.

Preparing redeployment

A good preparation is mandatory in order to minimize the downtime caused by the redeployment. Even if you use Data Guard or Dbvisit Standby, you'd better do the redeployment in a few hours instead of a few days, because you will not be able to failover in case of a problem during this operation (and if you're very unlucky).

What you need to prepare before starting:

  • all the files needed for redeployment according to your target version: ISO file, patch file, GI clone, RDBMS clones
  • unzip the ISO file for reimaging on your own computer (you don’t need to unzip the other files)
  • connect to ILOM interface to make sure that you still have the credentials and that remote console is working
  • backup each important file on an external volume in order to recover some parameters or files if needed

Important files for me are:

  • the result of odacli describe-system
  • history of root commands
  • history of oracle commands
  • crontab of oracle user
  • crontab of root user
  • content of fstab file
  • content of oratab file
  • list of all running databases ps -ef | grep pmon
  • list of all configured databases odacli list-databases
  • deployment file used for initial deployment of this ODA
  • or if deployment file is missing, information related to network configuration, disk sizing, redundancy, aso
  • tnsnames.ora, sqlnet.ora and listener.ora from the various homes

This list is absolutely not complete, as it depends on your own environment and tools. Take enough time to be sure you miss nothing, and do this backup days before the redeployment.
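
As an illustration, here is a minimal shell sketch for collecting this kind of information into a backup directory before the redeployment (the /backup mount point and file names are assumptions to adapt to your environment):

# hypothetical pre-redeployment collection of configuration information
BKP=/backup/oda_predeploy_$(date +%Y%m%d); mkdir -p $BKP
odacli describe-system   > $BKP/describe-system.txt
odacli list-databases    > $BKP/list-databases.txt
ps -ef | grep pmon       > $BKP/running-instances.txt
crontab -l -u root       > $BKP/crontab_root.txt
crontab -l -u oracle     > $BKP/crontab_oracle.txt
cp /etc/fstab /etc/oratab $BKP/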

Cleanup data disks

Reimaging an ODA does not care about existing headers on the NVMe data disks and works as if the disks were empty. But odacli create-appliance does care: it will prevent Grid Infrastructure from initializing the disks, and the appliance creation will fail.

To avoid this behavior, the first step before starting the reimaging is to do a cleanup of the ODA. On an old ODA with spinning disks, it can take a while (many hours), but on an X8-2S or X8-2M it's a matter of minutes. The longest part is actually waiting for the node to reboot after the cleanup.

My redeployment day started at 9.15AM. The target was to redeploy an X8-2M ODA, already running the latest 19.9 but with a new network configuration to apply. I had done a backup of the configuration files and already prepared the new json file for the redeployment (I just had to change the network settings).

9.15AM => 9.25AM
Do the cleanup with /opt/oracle/oak/onecmd/cleanup.pl

Redeployment

First step of redeployment is reimaging the server with the ISO file corresponding to the target version.

9.25AM => 9.30AM
Start the remote console on ILOM, connect the ISO file in the storage menu, change boot options and restart the server from the CDROM

9.30AM => 10.15AM
Reimaging is automatic and quite long, after being sure the reimaging process is starting, it’s already time for a coffee

10.15AM => 10.20AM
Start configure-firstnet, the very basic initial network configuration (public IP, netmask and gateway). TFA restart is slowing down the operation.

10.20AM => 10.25AM
Copy the zipfiles from your computer to the server (I usually create a /opt/customer folder for these files): the GI clone and all the DB clones you will need

10.25AM => 10.30AM
Unzip the GI clone and the first DB clone and register them in the ODA repository (I usually deploy the DB clone corresponding to the GI clone version)
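
For reference, this step boils down to a couple of commands; a sketch assuming the clone files were copied to /opt/customer (the file names are placeholders matching the 19.9 release and may differ in your case):

cd /opt/customer
unzip -o <downloaded_GI_clone>.zip          # the patch zip downloaded from My Oracle Support
odacli update-repository -f /opt/customer/odacli-dcs-19.9.0.0.0-201020-GI-19.9.0.0.zip
odacli update-repository -f /opt/customer/odacli-dcs-19.9.0.0.0-201020-DB-19.9.0.0.zip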

10.30AM => 11.15AM
odacli create-appliance with your json file. Always use a json file for create-appliance: the GUI is slower and you could make mistakes during the configuration.
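
The command itself is a one-liner that you then monitor as a job; a sketch assuming the json file was copied to /opt/customer/deploy.json (the path and file name are assumptions):

odacli create-appliance -r /opt/customer/deploy.json
odacli describe-job -i <job id>        # follow the deployment steps until they all complete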

11.15AM => 11.20AM
Configure the license with odacli update-cpucore -c x. Reimaging will re-enable all the cores on your ODA, so be careful about that. When using SE2, you do not need to decrease the number of cores, but I highly recommend doing so.

11.20AM => 11.35AM
Unzip and install the other dbhomes (this is the time it took to register 3 more dbclones and create for each one a dbhome)

11.35AM => 11.50AM
Sanity checks: usually I run describe-system and describe-components to see if everything is OK, and I also check the available space in the ASM diskgroups and the running instances (+ASM1, +APX1 and my DBTEST)

11.50AM => 12.10PM
It's now time to configure all the system stuff: extend /opt and/or /u01 to your needs, declare nfs mountpoints (/etc/fstab), add networks with odacli create-network, set the mtu on these networks if required, deploy the DBA tools and scripts, set the hugepages to a higher value (from 50% to 70% most often for me) and reboot the server. Now I can enjoy lunch time, and I'm quite confident about the next steps because everything looks fine on the pure ODA side.
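
For the hugepages part, here is a minimal sketch of the underlying Linux setting (the value is an assumption; compute it from your SGA sizes, and check whether vm.nr_hugepages is already defined before appending):

grep vm.nr_hugepages /etc/sysctl.conf                 # check the current setting first
echo "vm.nr_hugepages=60000" >> /etc/sysctl.conf      # hypothetical value, to derive from your SGAs
sysctl -p | grep nr_hugepages                         # or simply let the planned reboot apply it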

It took me 3 hours to do the job, let's say 4 hours if you're not used to doing it. That's not so long compared to the duration of a single patch (about 3 hours).

Next steps

For sure, redeploying your ODA is just a (small) part of the job. You now have to recreate/restore your databases, or duplicate them again. And it takes time, especially if you have big databases and a limited number of cores. Never forget that SE2 is not able to parallelize operations like a restore, and EE is limited by the number of cores you enabled on your ODA (2 cores is one processor license).

The database tasks are most probably the biggest part: you may spend hours or days to rebuild your complete server. But with good procedures and good scripts, this task can actually be quite interesting.

If you experience problems during this part, you will learn how to solve them and improve your procedures for the other ODAs.

Advice

I strongly recommend documenting everything you do on your ODAs, and scripting what you can script. And not just before a redeployment: do that from the very moment you start working with ODA. You will never be afraid of a redeployment if you know exactly how your ODA was installed.

Conclusion

As I already concluded, it definitely makes sense to consider redeployment as an alternative way of patching, and much more. If you know how to redeploy quickly, it can be very helpful compared to days of complex troubleshooting, for example. Keep this possibility in mind, even if you'd never consider this option on another platform.

The post How long does it take to redeploy an ODA X8-2M? appeared first on Blog dbi services.

19c serverless logon trigger


By Franck Pachot

.
I thought I had already blogged about this but can't find it, so here it is, with a funny title. I like to rename Oracle features from the user point of view (they are usually named from the Oracle development point of view). This is about setting session parameters for Oracle connections directly from the connection string, especially when they cannot be set in the application (code) or on the DB server (logon trigger).

SESSION_SETTINGS

Here is a simple example. I connect with the full connection string (you can put it in a tnsnames.ora of course to use a small alias instead):


SQL*Plus: Release 21.0.0.0.0 - Production on Sun Jan 31 10:03:07 2021
Version 21.1.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.
SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      ALL_ROWS
SQL>

the OPTIMIZER_MODE is at its default value – ALL_ROWS.

Let’s say that for this connection I want to use FIRST_ROWS_10 because I know that results will always be paginated to the screen. But I can’t change the application to issue an ALTER SESSION. I can do it from the client connection string by adding (SESSION_SETTINGS=(optimizer_mode=first_rows_10)) in CONNECT_DATA, at the same level as the SERVICE_NAME I connect to:


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SESSION_SETTINGS=(optimizer_mode=first_rows_10))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.

SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      FIRST_ROWS_10
SQL>

This has been automatically set at connection time.
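
This also works through a tnsnames.ora alias; a minimal sketch reusing the same service (the alias name is an assumption):

DB21_FIRSTROWS=
 (DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521))
  (CONNECT_DATA=
   (SERVER=DEDICATED)
   (SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com)
   (SESSION_SETTINGS=(optimizer_mode=first_rows_10))
  )
 )

so that connect demo/demo1@DB21_FIRSTROWS picks up the setting transparently.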

logon trigger

I could have done this from the server with a logon trigger:


SQL> create or replace trigger demo.set_session_settings after logon on demo.schema
  2  begin
  3    execute immediate 'alter session set optimizer_mode=first_rows_100';
  4  end;
  5  /

Trigger created.

SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.
SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      FIRST_ROWS_100

Here, with no SESSION_SETTINGS in the connection string, the session parameter is set anyway. Of course the logon trigger may check additional context to apply it only for specific usage. You have the full power of PL/SQL here.
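
For example, a variant of the trigger that applies the setting only for sessions coming in through a dedicated service could look like this (a sketch; the REPORTING service name is an assumption):

create or replace trigger demo.set_session_settings after logon on demo.schema
begin
  -- only for sessions connected through a hypothetical REPORTING service
  if sys_context('userenv','service_name') = 'REPORTING' then
    execute immediate 'alter session set optimizer_mode=first_rows_100';
  end if;
end;
/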

You probably use the connection string setting when you can’t or don’t want to define it in a logon trigger. But what happens when I use SESSION_SETTINGS in CONNECT_DATA in addition to the logon trigger?


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SESSION_SETTINGS=(optimizer_mode=first_rows_10))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.
SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      FIRST_ROWS_100

There is a priority for the logon trigger. The DBA always wins 😉 And there’s no error or warning, because your setting works, but is just changed later.

SQL_TRACE

Of course this is very useful to set SQL_TRACE and TRACEFILE_IDENTIFIER that you may need to set temporarily:


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(SESSION_SETTINGS=(sql_trace=true)(tracefile_identifier=franck))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))
Connected.

SQL> select value from v$diag_info where name='Default Trace File';

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/db21_iad36d/DB21/trace/DB21_ora_93108_FRANCK.trc

Here is what I see in the trace:


PARSING IN CURSOR #140571524262416 len=45 dep=1 uid=110 oct=42 lid=110 tim=4586453211272 hv=4113172360 ad='0' sqlid='1b8pu0mukn1w8'
ALTER SESSION SET tracefile_identifier=franck
END OF STMT
PARSE #140571524262416:c=312,e=312,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,plh=0,tim=4586453211272

*** TRACE CONTINUES IN FILE /u01/app/oracle/diag/rdbms/db21_iad36d/DB21/trace/DB21_ora_93108_FRANCK.trc ***

sql_trace was set and then tracefile_identifier.
The current trace file (with the tracefile_identifier) shows the code from my logon trigger:


*** 2021-01-31T10:44:38.949537+00:00 (DB21_PDB1(3))
*** SESSION ID:(30.47852) 2021-01-31T10:44:38.949596+00:00
*** CLIENT ID:() 2021-01-31T10:44:38.949609+00:00
*** SERVICE NAME:(db21_pdb1) 2021-01-31T10:44:38.949620+00:00
*** MODULE NAME:(sqlplus@cloud (TNS V1-V3)) 2021-01-31T10:44:38.949632+00:00
*** ACTION NAME:() 2021-01-31T10:44:38.949646+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-01-31T10:44:38.949663+00:00
*** CONTAINER ID:(3) 2021-01-31T10:44:38.949684+00:00
*** CLIENT IP:(10.0.0.22) 2021-01-31T10:44:38.949700+00:00
*** CLIENT IP:(10.0.0.22) 2021-01-31T10:44:38.949700+00:00


*** TRACE CONTINUED FROM FILE /u01/app/oracle/diag/rdbms/db21_iad36d/DB21/trace/DB21_ora_93108.trc ***

EXEC #140571524262416:c=601,e=1332,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,plh=0,tim=4586453212715
CLOSE #140571524262416:c=4,e=4,dep=1,type=1,tim=4586453213206
=====================
PARSING IN CURSOR #140571524259888 len=81 dep=1 uid=110 oct=47 lid=110 tim=4586453215247 hv=303636932 ad='16d3143b0' sqlid='22pncan91k8f4'
begin
  execute immediate 'alter session set optimizer_mode=first_rows_100';
end;
END OF STMT
PARSE #140571524259888:c=1737,e=1966,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=4586453215246

This proves that the logon trigger has priority, or rather the last word, on these settings, as it runs afterwards, before the session is handed over to the application.

Module, Action

Before it comes to tracing, we would like to identify our session for end-to-end profiling, and this is also possible from the connection string. Oracle does that by defining the "module" and "action" application info. There is no session parameter to set the module and action, but there are additional possibilities besides SESSION_SETTINGS: MODULE_NAME and MODULE_ACTION:


SQL> connect demo/demo1@(DESCRIPTION=(CONNECT_DATA=(MODULE_NAME=my_application_tag)(MODULE_ACTION=my_action_tag)(SESSION_SETTINGS=(sql_trace=true))(SERVER=DEDICATED)(SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com))(ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521)))

This sets the module/action as soon as connected, which I can see in the trace:


*** 2021-01-31T10:57:54.404141+00:00 (DB21_PDB1(3))
*** SESSION ID:(484.11766) 2021-01-31T10:57:54.404177+00:00
*** CLIENT ID:() 2021-01-31T10:57:54.404193+00:00
*** SERVICE NAME:(db21_pdb1) 2021-01-31T10:57:54.404205+00:00
*** MODULE NAME:(sqlplus@cloud (TNS V1-V3)) 2021-01-31T10:57:54.404217+00:00
*** ACTION NAME:() 2021-01-31T10:57:54.404242+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-01-31T10:57:54.404253+00:00
*** CONTAINER ID:(3) 2021-01-31T10:57:54.404265+00:00
*** CLIENT IP:(10.0.0.22) 2021-01-31T10:57:54.404277+00:00
*** CLIENT IP:(10.0.0.22) 2021-01-31T10:57:54.404277+00:00

CLOSE #139872849242800:c=1,e=2,dep=1,type=1,tim=4587248667205
*** MODULE NAME:(my_application_tag) 2021-01-31T10:57:54.404725+00:00
*** ACTION NAME:(my_action_tag) 2021-01-31T10:57:54.404756+00:00

However, because I run this from SQL*Plus, the module is overwritten later by SQL*Plus itself:


SQL> select sid,module,action from v$session where sid=sys_context('userenv','sid');

       SID MODULE                         ACTION
---------- ------------------------------ ------------------------------
        30 SQL*Plus

What you pass in the connection string is run immediately, before anything else (logon trigger or application statements). It is really useful when you cannot run an ALTER SESSION in any other way. But remember that it is just an initial setting and nothing is locked against later changes.

more… mostly undocumented

I mentioned that there are more things that can be set from there. Here is how I’ve found about MODULE_NAME and MODULE_ACTION:


strings $ORACLE_HOME/bin/oracle | grep ^DESCRIPTION/CONNECT_DATA/ | cut -d/ -f3- | sort | paste -s

CID/PROGRAM     CID/USER        COLOCATION_TAG  COMMAND CONNECTION_ID   CONNECTION_ID_PREFIX    
DESIG   DUPLICITY       FAILOVER_MODE   FAILOVER_MODE/BACKUP  GLOBAL_NAME     INSTANCE_NAME   
MODULE_ACTION   MODULE_NAME     NUMA_PG ORACLE_HOME     PRESENTATION    REGION  RPC     SEPARATE_PROCESS      
SERVER  SERVER_WAIT_TIMEOUT     SERVICE_NAME    SESSION_SETTINGS        SESSION_STATE   SID     USE_DBROUTER

Most of them are not documented and probably not working the way you think. But FAILOVER_MODE is well known to keep the session running when it fails over to a different node or replica in HA (because the High Availability of the database is at its maximum only if the application follows without interruption). SERVER is well known to choose the level of connection sharing and pooling (a must with microservices). The COLOCATION_TAG is a way to favor the colocation of sessions processing the same data when you have scaled out to multiple nodes, to avoid inter-node cache synchronization: you just set a character string that may have a business meaning, and the load balancer will try to keep together the sessions that hash to the same value. INSTANCE_NAME (and SID) are documented to go to a specific instance (for the DBA; the application should use services for that). NUMA_PG looks interesting to colocate sessions in NUMA nodes (visible in x$ksmssinfo) but it is undocumented and therefore unsupported. And we are far from the "serverless" title when we mention those physical characteristics… I've put this title not only to follow the trend but also to mention that things we are used to setting on the server side may have to be set on the client side when we are in the cloud.
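
For illustration, here is what a connect string combining the documented FAILOVER_MODE (Transparent Application Failover) and COLOCATION_TAG could look like, reusing the service from the examples above (the tag value and TAF settings are assumptions to adapt):

(DESCRIPTION=
  (ADDRESS=(PROTOCOL=TCP)(HOST=cloud)(PORT=1521))
  (CONNECT_DATA=
    (SERVICE_NAME=db21_pdb1.subnet.vcn.oraclevcn.com)
    (COLOCATION_TAG=accounting)
    (FAILOVER_MODE=(TYPE=SELECT)(METHOD=BASIC)(RETRIES=20)(DELAY=3))
  )
)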

CONTAINER

Even in the SESSION_SETTINGS you can put some settings that are not properly session parameters. The CONTAINER one may be convenient:


Yes, as a connection string can be a BEQ protocol, this can be used for local connections (without going through a listener) and is a way to go directly to a PDB. Here is an example:


BEQ_PDB1=
 (DESCRIPTION=
  (ADDRESS_LIST=
   (ADDRESS=
    (PROTOCOL=BEQ)
    (PROGRAM=oracle)
    (ARGV0=oracleCDB1)
    (ARGS='(DESCRIPTION=(SILLY_EXAMPLE=TRUE)(LOCAL=YEP)(ADDRESS=(PROTOCOL=beq)))')
   )
  )
  (CONNECT_DATA=
   (SESSION_SETTINGS=(container=PDB1)(tracefile_identifier=HAVE_FUN))
   (SID=CDB1)
  )
 )

I’ve added a few funny things here, but don’t do that. A `ps` shows:


oracleCDB1 (DESCRIPTION=(SILLY_EXAMPLE=TRUE)(LOCAL=YEP)(ADDRESS=(PROTOCOL=beq)))

for this connection

undocumented parameters

I mentioned that the SESSION_SETTINGS happen before the logon trigger, and that the application can change the parameters, as usual, afterwards. It seems that there are two hidden parameters for that:

_connect_string_settings_after_logon_triggers     0                              set connect string session settings after logon triggers                                        integer
_connect_string_settings_unalterable                         0                              make connect string session settings unalterable                                        integer

However, I tested them and haven't seen how they work (surprisingly, they are not booleans but integers).

The post 19c serverless logon trigger appeared first on Blog dbi services.

Learn ODA on Oracle Cloud


By Franck Pachot

.
You want to learn and practice the ODA command line and GUI without having an ODA at home? It should be possible to run the ODA image on VirtualBox, but that's probably hard work as it is tied to the hardware. About the configuration, you can run the Oracle Appliance Manager Configurator on your laptop, but I think it is not compatible with the latest odacli. However, Oracle has provided an ODA simulator for a long time, and it is now available in the Oracle Cloud Marketplace for free.

Here is its page:
https://console.us-ashburn-1.oraclecloud.com/marketplace/application/84422479/overview

You can get there by: Oracle Cloud Console > Create Compute Instance > Edit > Change Image > Oracle Images > Oracle Database Appliance (ODA) Simulator

I mentioned that this is for free. The marketplace does not allow me to run it on an Always Free Eligible shape, but you may take the software and run it elsewhere (you will see the .tar.gz in the opc user home directory).

Cleanup and pull images

From the marketplace, the container is already running but I clean it and re-install. This does everything: it installs docker if not already there (you run all this as root).


# cleanup (incl. portainer)
sudo docker rm -f portainer ; sudo docker rmi -f portainer/portainer
yes | sudo ~/simulator*/cleanup_odasimulator_sw.sh
# setup (get images and start portainer)
sudo ~/simulator*/setup_odasimulator_sw.sh

With this you can connect over http on port 9000 to Portainer. Of course, you need to open this port in the Network Security Groups (I opened the range 9000-9100 as I'll use those ports later). You can connect with user admin and password welcome1… yes, that's the CHANGE_ON_INSTALL password for ODA 😉

Choose the local repository and connect, and you will see the users and containers created.

Create ODA simulators


sudo ~/simulator*/createOdaSimulatorContainer.sh -d class01 -t ha -n 2 -p 9004 \
 -i $(oci-public-ip | awk '/Primary public IP:/{print $NF}')

The -d option is a "department name". You can put whatever you like and you can use it to create multiple classes.
-n is the number of simulators (one per participant in your class, for example).
-t is 'ha' to create two docker containers simulating a 2-node ODA HA, or 'single' to simulate a one-node ODA-lite.
The default starting port is 7094 but I start at 9004 as I opened the 9000-9100 range.

This creates the containers and storage and starts the ODA software: Zookeeper, DCS agent, DCS controller. You can see them from the Portainer console. It also creates users (the username is displayed, the password is welcome1) in Portainer, in the "odasimusers" team.

From the container list you have an icon ( >_ ) to open a command line console (its URL is also displayed in the createOdaSimulatorContainer.sh output ("ODA cli access") so that you can give it to your students), one per node when you chose HA of course. The output also displays the URL of the ODA Console ("Browser User Interface") at https://<public ip>:<displayed port>/mgmt/index.html, for which the user is "oda-admin" and the password must be changed at the first connection.

Here is an example with mine:


***********************************************
ODA Simulator system info:
Executed on: 2021_02_03_09_39_AM
Executed by:

ADMIN:
ODA Simulator management GUI: http://150.136.58.254:9000
Username: admin Password: welcome1
num=          5
dept=       class01
hostpubip=    150.136.58.254

USERS:
Username: class01-1-node0  Password:welcome1
Container : class01-1-node0
ODA Console: https://150.136.58.254:9005/mgmt/index.html
ODA cli access: http://150.136.58.254:9000/#/containers/86a0846af46251c9389423ad440a807b83645b62a1ec893182e8d15b1d1179bd/exec

Those are my real IP addresses and those ports are opened so you can play with it if it is still up when you read it… it’s a lab.

The Portainer web shell is one possibility, but from the machine where you created all this you can also reach the container console directly:


[opc@instance-20210203-1009 simulator_19.9.0.0.0]$ ./connectContainer.sh -n class01-1-node0
[root@class01-1-node0 /]#

Of course you can also simply `sudo docker exec -i -t class01-1-node0 /bin/bash` – there’s nothing magic here. And then you can play with odacli:


[root@class01-1-node0 /]# odacli configure-firstnet

bonding interface is:
Using bonding public interface (yes/no) [yes]:
Select the Interface to configure the network on () [btbond1]:
Configure DHCP on btbond1 (yes/no) [no]:
INFO: You have chosen Static configuration
Use VLAN on btbond1 (yes/no) [no]:
Enter the IP address to configure : 192.168.0.100
Enter the Netmask address to configure : 255.255.255.0
Enter the Gateway address to configure[192.168.0.1] :
INFO: Restarting the network
Shutting down interface :           [  OK  ]
Shutting down interface em1:            [  OK  ]
Shutting down interface p1p1:           [  OK  ]
Shutting down interface p1p2:           [  OK  ]
Shutting down loopback interface:               [  OK  ]
Bringing up loopback interface:    [  OK  ]
Bringing up interface :     [  OK  ]
Bringing up interface em1:    [  OK  ]
Bringing up interface p1p1: Determining if ip address 192.168.16.24 is already in use for device p1p1...    [ OK  ]
Bringing up interface p1p2: Determining if ip address 192.168.17.24 is already in use for device p1p2...    [ OK  ]
Bringing up interface btbond1: Determining if ip address 192.168.0.100 is already in use for device btbond1...     [  OK  ]
INFO: Restarting the DCS agent
initdcsagent stop/waiting
initdcsagent start/running, process 20423

This is just a simulator, not a virtualized ODA: you need to use this address 192.168.0.100 on node0 and 192.168.0.101 on node1.

Some fake versions of the ODA software are there:


[root@class01-1-node0 /]# ls opt/oracle/dcs/patchfiles/

oda-sm-19.9.0.0.0-201020-server.zip
odacli-dcs-19.8.0.0.0-200714-DB-11.2.0.4.zip
odacli-dcs-19.8.0.0.0-200714-DB-12.1.0.2.zip
odacli-dcs-19.8.0.0.0-200714-DB-12.2.0.1.zip
odacli-dcs-19.8.0.0.0-200714-DB-18.11.0.0.zip
odacli-dcs-19.8.0.0.0-200714-DB-19.8.0.0.zip
odacli-dcs-19.8.0.0.0-200714-GI-19.8.0.0.zip
odacli-dcs-19.9.0.0.0-201020-DB-11.2.0.4.zip
odacli-dcs-19.9.0.0.0-201020-DB-12.1.0.2.zip
odacli-dcs-19.9.0.0.0-201020-DB-12.2.0.1.zip
odacli-dcs-19.9.0.0.0-201020-DB-18.12.0.0.zip
odacli-dcs-19.9.0.0.0-201020-DB-19.9.0.0.zip

On a real ODA you download them from My Oracle Support, but here the simulator will accept these files to update the ODA repository:


odacli update-repository -f /opt/oracle/dcs/patchfiles/odacli-dcs-19.8.0.0.0-200714-GI-19.8.0.0.zip
odacli update-repository -f /opt/oracle/dcs/patchfiles/odacli-dcs-19.8.0.0.0-200714-DB-19.8.0.0.zip

You can also go to the web console where, at the first connection, you change the password to connect as oda-admin.


[root@class01-1-node0 opt]# odacli-adm set-credential -u oda-admin
User password: #HappyNewYear#2021
Confirm user password: #HappyNewYear#2021
[root@class01-1-node0 opt]#

Voila, I have shared my password for https://150.136.58.254:9005/mgmt/index.html 😉
You can also try the following odd ports (9007, 9009…9021) and change the password yourself – I'll leave this machine up for a few days after publishing.

And then deploy the database. Here is my oda.json configuration that you can put in a file and load if you want a quick creation without typing:


{ "instance": { "instanceBaseName": "oda-c", "dbEdition": "EE", "objectStoreCredentials": null, "name": "oda", "systemPassword": null, "timeZone": "Europe/Zurich", "domainName": "pachot.net", "dnsServers": [ "8.8.8.8" ], "ntpServers": [], "isRoleSeparated": true, "osUserGroup": { "users": [ { "userName": "oracle", "userRole": "oracleUser", "userId": 1001 }, { "userName": "grid", "userRole": "gridUser", "userId": 1000 } ], "groups": [ { "groupName": "oinstall", "groupRole": "oinstall", "groupId": 1001 }, { "groupName": "dbaoper", "groupRole": "dbaoper", "groupId": 1002 }, { "groupName": "dba", "groupRole": "dba", "groupId": 1003 }, { "groupName": "asmadmin", "groupRole": "asmadmin", "groupId": 1004 }, { "groupName": "asmoper", "groupRole": "asmoper", "groupId": 1005 }, { "groupName": "asmdba", "groupRole": "asmdba", "groupId": 1006 } ] } }, "nodes": [ { "nodeNumber": "0", "nodeName": "GENEVA0", "network": [ { "ipAddress": "192.168.0.100", "subNetMask": "255.255.255.0", "gateway": "192.168.0.1", "nicName": "btbond1", "networkType": [ "Public" ], "isDefaultNetwork": true } ] }, { "nodeNumber": "1", "nodeName": "GENEVA1", "network": [ { "ipAddress": "192.168.0.101", "subNetMask": "255.255.255.0", "gateway": "192.168.0.1", "nicName": "btbond1", "networkType": [ "Public" ], "isDefaultNetwork": true } ] } ], "grid": { "vip": [ { "nodeNumber": "0", "vipName": "GENEVA0-vip", "ipAddress": "192.168.0.102" }, { "nodeNumber": "1", "vipName": "GENEVA1-vip", "ipAddress": "192.168.0.103" } ], "diskGroup": [ { "diskGroupName": "DATA", "diskPercentage": 80, "redundancy": "FLEX" }, { "diskGroupName": "RECO", "diskPercentage": 20, "redundancy": "FLEX" }, { "diskGroupName": "FLASH", "diskPercentage": 100, "redundancy": "FLEX" } ], "language": "en", "enableAFD": "TRUE", "scan": { "scanName": "oda-scan", "ipAddresses": [ "192.168.0.104", "192.168.0.105" ] } }, "database": { "dbName": "DB1", "dbCharacterSet": { "characterSet": "AL32UTF8", "nlsCharacterset": "AL16UTF16", "dbTerritory": "SWITZERLAND", "dbLanguage": "FRENCH" }, "dbRedundancy": "MIRROR", "adminPassword": null, "dbEdition": "EE", "databaseUniqueName": "DB1_GENEVA", "dbClass": "OLTP", "dbVersion": "19.8.0.0.200714", "dbHomeId": null, "instanceOnly": false, "isCdb": true, "pdBName": "PDB1", "dbShape": "odb1", "pdbAdminuserName": "pdbadmin", "enableTDE": false, "dbType": "RAC", "dbTargetNodeNumber": null, "dbStorage": "ASM", "dbConsoleEnable": false, "dbOnFlashStorage": false, "backupConfigId": null, "rmanBkupPassword": null } }

Of course the same can be done from the command line (see the sketch after the listing below). Here are the databases created in this simulation:


[root@class01-1-node0 /]# odacli list-databases

ID                                       DB Name    DB Type  DB Version           CDB        Class    Shape    Storage    Status        DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
cc08cd94-0e95-4521-97c8-025dd03a5554     DB1        Rac      19.8.0.0.200714      true       Oltp     Odb1     Asm        Configured   782c749c-ff0e-4665-bb2a-d75e2caa5568

This is the database created by this simulated deployment.
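
As mentioned above, loading such a json file from the command line is straightforward; a sketch assuming the json above was saved as /root/oda.json inside the container (the path is an assumption):

odacli create-appliance -r /root/oda.json
odacli describe-job -i <job id>        # monitor the simulated deployment job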

This is really nice to learn or check something without access to a real ODA. There's more in a video from Sam K Tan, Business Development Director at Oracle: https://www.youtube.com/watch?v=mrLp8TkcJMI and in the hands-on-lab handbook. And about real-life ODA problems and solutions, I have awesome colleagues sharing on our blog: https://blog.dbi-services.com/?s=oda and they give the following training: https://www.dbi-services.com/trainings/oracle-database-appliance-oda/ in French, German or English, on site or remote.

The post Learn ODA on Oracle Cloud appeared first on Blog dbi services.

Oracle autoupgrade on Windows and plugin to a Container DB with virtual accounts


In a recent project I had to upgrade an Oracle 12.2 DB to 19.9 and at the same time migrate from the non-container architecture to the container architecture. The interesting part here is doing this on Windows. Both steps (upgrade and plugin) are possible using the autoupgrade tool, which is the preferred tool for Oracle upgrades today.

Before doing the upgrade at the customer, I performed 2 tests in my personal environment:

– Upgrade 12.2. to 19.9. and the plugin to a container-DB in 2 separate steps
– Upgrade 12.2. to 19.9. and the plugin to a container-DB in 1 step (i.e. calling “autoupgrade -mode deploy” only once)

Doing the upgrade and the plugin in 2 steps requires manually recreating the Windows services with the oradim utility after the upgrade has finished. I.e. the first step is the upgrade, which automatically uses a temporary Windows service. At the end of the upgrade, the new services have to be created manually, because autoupgrade has no clue about the password of the user running the Oracle services. I used the following config file upgrade_config_122_199_SIMC.cfg.txt:


# Global parameter
# ================

global.autoupg_log_dir=D:\Downloads\oracle\19c\autoupgrade\logs
global.target_home=D:\app\cbl\virtual\product\19.9.0\dbhome_1
global.target_version=19


# Database parameter SIMC
# =======================

upg1.dbname=simc
upg1.start_time=NOW
upg1.source_home=D:\app\cbl\virtual\product\12.2.0\dbhome_1
upg1.sid=simc
upg1.upgrade_node=dbi-pc-cbl
# upg1.add_after_upgrade_pfile=D:\Downloads\oracle\19c\autoupgrade\pfile\add_after_init.ora

The commands for the upgrade were


%ORACLE_HOME%\jdk\bin\java -jar autoupgrade.jar -config config\upgrade_config_122_199_SIMC.cfg.txt -mode analyze
%ORACLE_HOME%\jdk\bin\java -jar autoupgrade.jar -config config\upgrade_config_122_199_SIMC.cfg.txt -mode fixups
%ORACLE_HOME%\jdk\bin\java -jar autoupgrade.jar -config config\upgrade_config_122_199_SIMC.cfg.txt -mode deploy

And finally I executed the oradim commands at the cmd prompt:


oradim -delete -sid SIMC
oradim -new -sid SIMC -startmode auto -spfile -shutmode immediate

The second step then was to plug in the non-container DB with the following config file:


# Global parameter
# ================

global.autoupg_log_dir=D:\Downloads\oracle\19c\autoupgrade\logs3
global.target_home=D:\app\cbl\virtual\product\19.9.0\dbhome_1
global.target_version=19


# Database parameter SIMC
# =======================

upg1.dbname=simc
upg1.start_time=NOW
upg1.source_home=D:\app\cbl\virtual\product\19.9.0\dbhome_1
upg1.sid=simc
upg1.upgrade_node=dbi-pc-cbl
# upg1.add_after_upgrade_pfile=D:\Downloads\oracle\19c\autoupgrade\pfile\add_after_init.ora
upg1.target_cdb=cdbsimc
upg1.target_pdb_name=PDBSIMC
upg1.target_pdb_copy_option=file_name_convert=('D:\APP\CBL\VIRTUAL\ORADATA\SIMC','D:\APP\CBL\VIRTUAL\ORADATA\CDBSIMC\SIMC')

As the source and target ORACLE_HOME are the same, no upgrade actually happens, but just the plugin.

During the installation of Oracle I chose to use the virtual account. This worked fine for the migration in 2 steps. Later on we'll see that the virtual account causes issues during the plugin when doing the upgrade and plugin in 1 step.

See this blog for more information about the virtual account on Windows.

Doing the upgrade and plugin in one step has the following advantage:

There’s no need to manually create the Windows service, because the services of the pre-created container-DB are already there.

To do the upgrade and plugin in one step I used the following autoupgrade-config-file after creating an empty container db CDBSIMC:


# Global parameter
# ================

global.autoupg_log_dir=D:\Downloads\oracle\19c\autoupgrade\logs2
global.target_home=D:\app\cbl\virtual\product\19.9.0\dbhome_1
global.target_version=19


# Database parameter SIMC
# =======================

upg1.dbname=simc
upg1.start_time=NOW
upg1.source_home=D:\app\cbl\virtual\product\12.2.0\dbhome_1
upg1.sid=simc
upg1.upgrade_node=dbi-pc-cbl
# upg1.add_after_upgrade_pfile=D:\Downloads\oracle\19c\autoupgrade\pfile\add_after_init.ora
upg1.target_cdb=cdbsimc
upg1.target_pdb_name=PDBSIMC
upg1.target_pdb_copy_option=file_name_convert=('D:\APP\CBL\VIRTUAL\ORADATA\SIMC','D:\APP\CBL\VIRTUAL\ORADATA\CDBSIMC\SIMC')

The upgrade went through, but the plugin failed with the following error:

createpdb_simc_PDBSIMC.log:


Start of SQL*Plus output

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 29 22:54:33 2021
Version 19.9.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

SQL> SQL> Connected.
SQL> SQL> old   1: create pluggable database "&pdbName" &asClone using '&xmlFilePath' &fileNameConvertOption tempfile reuse
new   1: create pluggable database "PDBSIMC" as clone using 'D:\Downloads\oracle\19c\autoupgrade\logs2\simc\simc\105\noncdbtopdb\PDBSIMC.xml' COPY file_name_convert=('D:\APP\CBL\VIRTUAL\ORADATA\SIMC','D:\APP\CBL\VIRTUAL\ORADATA\CDBSIMC\SIMC') tempfile reuse
create pluggable database "PDBSIMC" as clone using 'D:\Downloads\oracle\19c\autoupgrade\logs2\simc\simc\105\noncdbtopdb\PDBSIMC.xml' COPY file_name_convert=('D:\APP\CBL\VIRTUAL\ORADATA\SIMC','D:\APP\CBL\VIRTUAL\ORADATA\CDBSIMC\SIMC') tempfile reuse
*
ERROR at line 1:
ORA-19505: failed to identify file
"D:\APP\CBL\VIRTUAL\ORADATA\SIMC\SYSTEM01.DBF"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 5) Access is denied.

REMARK: As you can see the plugin is actually a “Clone”. I.e. the original datafiles are kept to be able to go back.

The reason for the issue was that the 12.2 Windows service runs under the permissions of the group ORA_OraDB12Home1_SVCACCTS, and hence the permissions to access the associated DB files are also granted to that group. This does not change during the upgrade to 19c, but it can of course be adjusted with oradim. As we skipped the manual step with oradim, the 19c services of the container DB CDBSIMC (which run under the group ORA_OraDB19Home1_SVCACCTS) do not have access to the DB files to clone them.

The workaround in my environment was to grant full rights on the DB files to the group ORA_OraDB19Home1_SVCACCTS prior to the upgrade/plugin.
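
One way to do this is with icacls from an elevated command prompt; a minimal sketch using the datafile directory of my environment (to adapt to yours):

REM grant the 19c service group full access, recursively, on the source datafiles
icacls "D:\APP\CBL\VIRTUAL\ORADATA\SIMC" /grant ORA_OraDB19Home1_SVCACCTS:(OI)(CI)F /T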

Afterwards my upgrade and plugin went through with the following content in the createpdb_simc_PDBSIMC.log:


Start of SQL*Plus output

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jan 30 01:15:55 2021
Version 19.9.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

SQL> SQL> Connected.
SQL> SQL> old   1: create pluggable database "&pdbName" &asClone using '&xmlFilePath' &fileNameConvertOption tempfile reuse
new   1: create pluggable database "PDBSIMC" as clone using 'D:\Downloads\oracle\19c\autoupgrade\logs5\simc\simc\100\noncdbtopdb\PDBSIMC.xml' COPY file_name_convert=('D:\APP\CBL\VIRTUAL\ORADATA\SIMC','D:\APP\CBL\VIRTUAL\ORADATA\CDBSIMC\SIMC') tempfile reuse

Pluggable database created.

The best solution here is of course to use the same local account for all Oracle DB services and not the virtual account. If you plan to use the virtual account, then it is a good idea from the beginning to use the same value for the registry entry ORACLE_SVCUSER for all installed versions. E.g. during my tests I successfully used ORA_DBSVCACCTS instead of the defaults ORA_OraDB12Home1_SVCACCTS (for 12.2) and ORA_OraDB19Home1_SVCACCTS (for 19c).

The post Oracle autoupgrade on Windows and plugin to a Container DB with virtual accounts appeared first on Blog dbi services.

Oracle 21c : Two nodes Grid Infrastructure Installation


Oracle 21c is currently released in the cloud, and I did some tests to set up a Grid Infrastructure cluster with two nodes.
I used the following two VM servers for the test:
racp1vm1
racp1vm2
Below are the addresses I am using. Note that a DNS server is set up:

192.168.0.101 racp1vm1.dbi.lab racp1vm1            --public network
192.168.0.103 racp1vm2.dbi.lab racp1vm2            --public network
192.168.0.102 racp1vm1-vip.dbi.lab racp1vm1-vip    --virtual network
192.168.0.104 racp1vm2-vip.dbi.lab racp1vm2-vip    --virtual network
10.1.1.1 racp1vm1-priv.dbi.lab racp1vm1-priv       --private network
10.1.1.2 racp1vm2-priv.dbi.lab racp1vm2-priv       --private network

The scan name should be resolved in a round-robin fashion. Each time, the nslookup command should return a different IP as the first address:

racp1-scan.dbi.lab : 192.168.0.105
racp1-scan.dbi.lab : 192.168.0.106
racp1-scan.dbi.lab : 192.168.0.107
[root@racp1vm1 diag]# nslookup racp1-scan
Server:         192.168.0.100
Address:        192.168.0.100#53

Name:   racp1-scan.dbi.lab
Address: 192.168.0.105
Name:   racp1-scan.dbi.lab
Address: 192.168.0.107
Name:   racp1-scan.dbi.lab
Address: 192.168.0.106

[root@racp1vm1 diag]# nslookup racp1-scan
Server:         192.168.0.100
Address:        192.168.0.100#53

Name:   racp1-scan.dbi.lab
Address: 192.168.0.107
Name:   racp1-scan.dbi.lab
Address: 192.168.0.106
Name:   racp1-scan.dbi.lab
Address: 192.168.0.105

[root@racp1vm1 diag]# nslookup racp1-scan
Server:         192.168.0.100
Address:        192.168.0.100#53

Name:   racp1-scan.dbi.lab
Address: 192.168.0.106
Name:   racp1-scan.dbi.lab
Address: 192.168.0.105
Name:   racp1-scan.dbi.lab
Address: 192.168.0.107

[root@racp1vm1 diag]#

I used udev rules for the ASM disks; below is the content of my udev file:

[root@racp1vm1 install]# cat /etc/udev/rules.d/90-oracle-asm.rules
# Oracle ASM devices
KERNEL=="sd[b-f]1", OWNER="grid", GROUP="asmadmin", MODE="0660"

The installation is the same as for 19c. Unzip your software into your GRID_HOME:

[grid@racp1vm1 ~]$ mkdir -p /u01/app/21.0.0.0/grid
[grid@racp1vm1 ~]$ unzip -d /u01/app/21.0.0.0/grid grid_home-zip.zip

And run the gridSetup.sh command

[grid@racp1vm1 grid]$ ./gridSetup.sh

I got some warnings that I decided to ignore, and went ahead.

As requested by the installer, I executed the scripts on both nodes:

[root@racp1vm1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@racp1vm1 ~]#

[root@racp1vm2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@racp1vm2 ~]#

Below are truncated outputs of root.sh:

[root@racp1vm1 ~]# /u01/app/21.0.0.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/21.0.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
…
..
CRS-4256: Updating the profile
Successful addition of voting disk 39600544f8794f63bfb83f128d9a9079.
Successfully replaced voting disk group with +CRSDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   39600544f8794f63bfb83f128d9a9079 (/dev/sdc1) [CRSDG]
Located 1 voting disk(s).
2021/02/08 15:05:06 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2021/02/08 15:06:00 CLSRSC-343: Successfully started Oracle Clusterware stack
2021/02/08 15:06:00 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2021/02/08 15:08:06 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2021/02/08 15:08:27 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@racp1vm2 install]# /u01/app/21.0.0.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/21.0.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
…
…
2021/02/08 15:15:00 CLSRSC-343: Successfully started Oracle Clusterware stack
2021/02/08 15:15:00 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2021/02/08 15:15:20 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2021/02/08 15:15:27 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@racp1vm2 install]#

Then click OK

You can verify that the installation was fine

[root@racp1vm1 diag]# /u01/app/21.0.0.0/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [21.0.0.0.0]
[root@racp1vm1 diag]#

[root@racp1vm1 diag]# /u01/app/21.0.0.0/grid/bin/crsctl check cluster -all
**************************************************************
racp1vm1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racp1vm2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racp1vm1 diag]#
[root@racp1vm1 diag]# /u01/app/21.0.0.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       racp1vm1                 STABLE
               ONLINE  ONLINE       racp1vm2                 STABLE
ora.chad
               ONLINE  ONLINE       racp1vm1                 STABLE
               ONLINE  ONLINE       racp1vm2                 STABLE
ora.net1.network
               ONLINE  ONLINE       racp1vm1                 STABLE
               ONLINE  ONLINE       racp1vm2                 STABLE
ora.ons
               ONLINE  ONLINE       racp1vm1                 STABLE
               ONLINE  ONLINE       racp1vm2                 STABLE
ora.proxy_advm
               OFFLINE OFFLINE      racp1vm1                 STABLE
               OFFLINE OFFLINE      racp1vm2                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       racp1vm1                 STABLE
      2        ONLINE  ONLINE       racp1vm2                 STABLE
ora.CRSDG.dg(ora.asmgroup)
      1        ONLINE  ONLINE       racp1vm1                 STABLE
      2        ONLINE  ONLINE       racp1vm2                 STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racp1vm2                 STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       racp1vm1                 Started,STABLE
      2        ONLINE  ONLINE       racp1vm2                 Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       racp1vm1                 STABLE
      2        ONLINE  ONLINE       racp1vm2                 STABLE
ora.cdp1.cdp
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.cdp2.cdp
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.cdp3.cdp
      1        ONLINE  ONLINE       racp1vm2                 STABLE
ora.cvu
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.qosmserver
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.racp1vm1.vip
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.racp1vm2.vip
      1        ONLINE  ONLINE       racp1vm2                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racp1vm1                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racp1vm2                 STABLE
--------------------------------------------------------------------------------
[root@racp1vm1 diag]#

The post Oracle 21c : Two nodes Grid Infrastructure Installation appeared first on Blog dbi services.

Network configurations on ODA


Introduction

ODA is an appliance: everything is included in one box, and this box could in theory run on its own with only a power connection. But it would be quite useless, as nobody could connect to the databases. The ODA needs network links to deliver its service.

Starting from ODA X8, there are multiple possible configurations regarding network, but also limits to what you can do. Let’s have a glance at the possibilities.

Network cards on a single-node ODA

A minimal configuration will be composed of 2 network cards.

One is for server management, also called the ILOM. The ILOM is something like a server within the server: a small-footprint machine dedicated to managing the hardware side of the ODA. When the ODA gets powered, this ILOM is available a few minutes after you plug in the server, even if you don't actually power up the server itself by pressing the button. There is only one physical port for this network card, and it's not even really a card as it's soldered onto the server and cannot be removed. This physical port is an Ethernet port limited to 1Gbps, so you will not be able to connect it to faster networks. That's not a problem because you'll never need more. There is no indication whether this port could also work at lower speeds, like 100Mbps, but using 100Mbps in 2021 would be quite odd.

The other network card is a real one, connected to the PCIe bus slot 7. You can choose between a quad-port Ethernet card, or a dual-port SFP28 card.

The Ethernet card is for copper networks with RJ45 cables and plugs, and is compatible with 10Gbps switches.

The SFP28 card is for fiber networks with optical fibers and gbic connectors, and is compatible with 10Gbps or 25Gbps switches.

2 more cards are optional. Like the previous one, you can choose between quad-port Ethernet or dual-port SFP28. They can be useful for those planning to have more network interfaces for various purposes. These cards go into slots 8 and 10.

So, when you order your ODA, you’ll have to choose which type of card for the first one, and if you want additional ones (1 or 2) and which kind of additional cards. Make sure to order exactly what you need.

Network cards on a HA ODA

HA ODA is basically 2 single-node ODAs without DATA disks inside. Disks are deported through a SAS link to one or two storage enclosures, meaning that storage capacity is bigger, but unfortunately without the speed of NVMe disks.

As one HA is 2 single-node ODAs, you'll find 2 ILOM cards (1 per server) and 2 cards by default (in slot 7 of each server). Each node can only have 1 additional card, in slot 10. It's not possible to add more cards for your own networks, because the other slots are dedicated to the HBA SAS controllers (2x) and a specific network card.

As you may know, one of the benefits of having an HA ODA is High Availability itself, thanks to the Real Application Clusters technology. This technology relies on shared storage but also on a high-bandwidth private network (the interconnect) to link both nodes together. This private network has a dedicated card on each node in slot 1, and it is really private because it never leaves your ODA. It uses an SFP28 card of the same type as the others, but dedicated cables are provided to make a direct link between node 0 and node 1. This means 2 things: both nodes need to be very close to each other, and you do not have to care about this network configuration: no ports to dedicate on your switch, no IP configuration to provide, etc.

As a result, the network configuration of an HA ODA is not very different from the network configuration of 2 single-node ODAs, except that one of the additional cards is mandatory and dedicated to the interconnect only.

ILOM network configuration

The ILOM network configuration consists of an IP address, a netmask and a gateway to be reachable from other networks. You will mainly connect to this network with a web browser over https, or possibly with an SSH terminal. The ILOM is used for the initial deployment or reimaging of the server, and also for diagnostic purposes.

Public network configuration

The public network is dedicated to connecting your applications or clients to the databases. A public network is based on 2 bonded interfaces on the same network card. If you ordered your ODA with 1 Ethernet card, you can choose between btbond1 and btbond2, btbond1 being a bond of the first 2 ports and btbond2 a bond of the last 2 ports.

If you ordered your ODA with a SFP card, there is only one possible bond, btbond1, on the 2 ports of the card.

For this network you will need an IP address, a netmask, a gateway and also DNS and NTP information because your ODA will need to resolve names and update its clock.

Other networks configuration (user defined networks)

You can configure other networks for additional links, depending on your needs and on how many cards you chose. For single-node ODAs, up to 6 networks are possible when using Ethernet (the public one plus 5 user-defined networks) and up to 3 using SFP (1 public and 2 user-defined). For HA ODAs, up to 4 networks are possible when using Ethernet and up to 2 using SFP. These networks can be used for backup purposes, in order to dedicate a link to the backup stream, for administrative purposes in case you have a dedicated network for administrators, or for whatever you need.

Each network you will create will be based on 2 bonded ports of a network card.

Note that bond’s names are fixed and follow this mechanism:

  • 1st card has btbond1 and eventually btbond2 if it’s an Ethernet card
  • 2nd card has btbond3 and eventually btbond4 if it’s an Ethernet card
  • for single-node ODAs, 3rd card has btbond5 and eventually btbond6 if it’s an Ethernet card

For each of these user-defined networks you will need an IP address, a netmask, a gateway and an optional VLAN tag if needed.

To create a user-defined network, use odacli create-network and provide the physical bond and the IP configuration. For example, you could create a backup network on the second bond of your primary Ethernet card with:

odacli create-network -w Backup -m Backup -s 255.255.255.0 -p 172.30.52.4 -t BOND -n btbond2

You can also display your actual networks with:

odacli list-networks

Limits

Configuration is quite flexible, but there are some limits.

Manual configuration of these interfaces is not supported. Everything is either done when deploying the appliance (public network configuration) or with odacli (additional networks configuration).

You can tweak your network scripts, but it may prevent you from applying a patch or using some odacli functions, so I do not recommend doing that.

LACP is not supported: networks are always composed of 2 bonded ports, but only in active/backup mode. You cannot aggregate network interfaces to benefit from double the speed of a single one. If you need more speed, connect your ODA to 10Gbps Ethernet or 25Gbps SFP switches to maximize throughput, if you really need such bandwidth. Aggregating "slow" 1Gbps links makes no sense, and active/backup is enough for most cases.
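
You can check the bonding mode actually in use from the OS; a quick sketch on the interface from the examples above:

# shows "Bonding Mode: fault-tolerance (active-backup)" and the currently active slave
grep -E "Bonding Mode|Currently Active Slave" /proc/net/bonding/btbond1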

Jumbo frames are not proposed during the network configuration. If you really need them, for example for a backup network, you can add the setting to your network interface configuration files, but don't forget to remove it before patching or modifying networks through odacli:

echo 'MTU="9000"' >> /etc/sysconfig/network-scripts/ifcfg-p2p1
echo 'MTU="9000"' >> /etc/sysconfig/network-scripts/ifcfg-p2p2
echo 'MTU="9000"' >> /etc/sysconfig/network-scripts/ifcfg-btbond3
ifdown btbond3; ifup btbond3

If you still use an old 1Gbps fiber switch, the ODA will not manage to connect to the network, even if the network link is detected. 1Gbps SFP makes no sense compared to 1Gbps Ethernet.

Using both Ethernet and SFP is not possible, and it's probably, for now, the main drawback regarding the network on ODA. Mixing the 2 types of network cards is possible, but you will not be able to use both of them. An ODA with the 2 types of cards only makes sense when you plan an upgrade to SFP later (with a reimage) or if your datacenters don't support the same kind of networks and you want your ODAs to be identical. It is probably a good idea to have the same cards on each ODA.

It's not possible to make a bond between different physical cards. It would make sense, as the card itself could fail, but there is no way to do such a configuration with odacli. Don't forget that a real Disaster Recovery solution is the only answer to extended hardware troubles today.

Finally, if you decided to go for SFP, don't forget to order the gbics corresponding to your configuration: they are not provided as standard on ODA, they are optional. Make sure to have the correct gbics according to the network specification of the cards and of your switch, on both sides of the fiber cable. I have heard about incompatibilities between the network card, the gbic plugged into the card, the gbic plugged into the switch and the switch itself. It's not an ODA-specific subject, but make sure this point will not delay your ODA project.

ODA and VLANs

VLAN tagging is fully supported on ODA, but I recommend using transparent VLANs and tagging the switch port instead of configuring the tagging on the ODA itself. You can still create additional VLANs on top of the public interface, but then you share the bonding for multiple purposes, which is not the best option. One bonding for one VLAN is probably a better configuration.
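
If you do configure tagged VLANs on the ODA itself, the creation still goes through odacli; something along these lines, where the flags and values are assumptions to verify with odacli create-network -h on your version:

# hypothetical tagged VLAN 123 on top of btbond1 (check the exact syntax for your ODA release)
odacli create-network -w Other -m App_VLAN123 -t VLAN -n btbond1 -v 123 -p 172.30.53.4 -s 255.255.255.0 -g 172.30.53.1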

Conclusion

There is a lot of possibilities regarding network configuration on ODA, but as usual, take the time to define what you need, look at your available ports and speeds with your network administrator, and prepare carefully information like IPs and hostnames. Changing something on the network after deployment is not recommended and even not always possible.

The post Network configurations on ODA appeared first on Blog dbi services.

Oracle Rolling Invalidate Window Exceeded(3)


By Franck Pachot

.
This extends a previous post (Rolling Invalidate Window Exceeded) where, in summary, the ideas were:

  • When you gather statistics, you want the new executions to take into account the new statistics, which means that the old execution plans (child cursors) should be invalidated
  • You don’t want all child cursors to be invalidated at once, to avoid a hard parse storm, and this is why the invalidation is rolling: a 5 hour window is defined, starting at the next execution after the statistics gathering, and a random timestamp is set within it; an execution after that timestamp will hard parse rather than share the existing cursor
  • The “invalidation” term is misleading as it has nothing to do with V$SQL.INVALIDATIONS, which is at parent cursor level. Here the existing plans are still valid. The “rolling invalidation” just makes them non-shareable

In this blog post I’ll share my query to show the timestamps involved:

  • INVALIDATION_WINDOW which is the start of the invalidation (or rather the end of sharing of this cursor) for a future parse call
  • KSUGCTM (Kernel Service User Get Current TiMe) which is the time when non-sharing occurred and a new child cursor has been created (hard parse instead of soft parse)

As usual, here is a simple example


alter system flush shared_pool;
create table DEMO as select * from dual;
insert into DEMO select * from dual;
commit;
alter system set "_optimizer_invalidation_period"=15 scope=memory;

I have created a demo table and set the invalidation to 15 seconds instead of the 5 hours default.


20:14:19 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:14:19 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT

1 row selected.

I’ve gathered the statistics at 20:14:19 but there is no cursor yet to invalidate.


20:14:20 SQL> host sleep 30

20:14:50 SQL> select * from DEMO;

DUMMY
-----
X

1 row selected.

20:14:50 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0

1 row selected.

I have executed my statement, which created the child cursor and, of course, there is no invalidation yet.


20:14:50 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:14:50 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.14.50.270984000 PM GMT

2 rows selected.

20:14:50 SQL> host sleep 30

20:15:20 SQL> select * from DEMO;

DUMMY
-----
X


1 row selected.

20:15:20 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0

1 row selected.

I have gathered statistics and run my statement again. There’s no invalidation yet because the invalidation window starts only at the next parse or execution that occurs after the statistics gathering. This next execution occurred after 20:15:20 and sets the start of the invalidation window. But for the moment, the same child is still shared.


20:15:20 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');

PL/SQL procedure successfully completed.

20:15:20 SQL> select * from dba_tab_stats_history where table_name='DEMO' order by stats_update_time;

OWNER   TABLE_NAME   PARTITION_NAME   SUBPARTITION_NAME   STATS_UPDATE_TIME
-----   ----------   --------------   -----------------   -----------------------------------
DEMO    DEMO                                              23-FEB-21 08.14.19.698111000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.14.50.270984000 PM GMT
DEMO    DEMO                                              23-FEB-21 08.15.20.476025000 PM GMT


3 rows selected.

20:15:20 SQL> host sleep 30

20:15:50 SQL> select * from DEMO;

DUMMY
-----
X

1 row selected.

20:15:50 SQL> select child_number,reason from v$sql_shared_cursor where sql_id='0m8kbvzchkytt';

  CHILD_NUMBER REASON
-------------- --------------------------------------------------------------------------------
             0 <ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(2)</reason><size>0x0</size><details>already_processed</details></ChildNode><ChildNode><ChildNumber>0</ChildNumber><ID>33</ID><reason>Rolling Invalidate Window Exceeded(3)</reason><size>2x4</size><invalidation_window>1614111334</invalidation_window><ksugctm>1614111350</ksugctm></ChildNode>
             1

2 rows selected.

I’ve gathered the statistics again, but what matters here is that I’ve run my statement now that the invalidation window has been set (by the previous execution from 20:15:20) and has been reached (I waited 30 seconds, which is more than the 15 second window I’ve defined). This new execution marked the cursor as non-shareable, for the “Rolling Invalidate Window Exceeded(3)” reason, and has created a new child cursor.

20:15:50 SQL> select child_number,invalidations,parse_calls,executions,cast(last_active_time as timestamp) last_active_time
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*.invalidation_window>([0-9]*)./invalidation_window>.ksugctm>([0-9]*).*','\1')),'second') invalidation_window
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*([0-9]*)./invalidation_window>.ksugctm>([0-9]*)./ksugctm>.*','\2')),'second') ksugctm
    from v$sql_shared_cursor left outer join v$sql using(sql_id,child_number,address,child_address)
    where reason like '%Rolling Invalidate Window Exceeded(3)%' and sql_id='0m8kbvzchkytt'
    order by sql_id,child_number,invalidation_window desc
    ;

  CHILD_NUMBER   INVALIDATIONS   PARSE_CALLS   EXECUTIONS LAST_ACTIVE_TIME                  INVALIDATION_WINDOW               KSUGCTM
--------------   -------------   -----------   ---------- -------------------------------   -------------------------------   ----------------------------

             0               0             3            2 23-FEB-21 08.15.50.000000000 PM   23-FEB-21 08.15.34.000000000 PM   23-FEB-21 08.15.50.000000000 PM

1 row selected.

So at 20:15:20 the invalidation has been set (but not exposed yet) at random within the next 15 seconds (because I changed the 5 hours default) and it is now visible as INVALIDATION_WINDOW: 20:15:34. Then the next execution after this timestamp has created a new child at 20:15:50, which is visible in KSUGCTM but also in LAST_ACTIVE_TIME (even if this child cursor has not been executed, just updated).

The important thing is that those child cursors will not be used again but are still there, increasing the length of the list of child cursors that is scanned when parsing a new statement with the same SQL text. And this can go up to 8192 children if you’ve left the default “_cursor_obsolete_threshold” (which it is recommended to lower – see Mike Dietrich’s blog post).
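If you want to lower it, it is a simple spfile change; the value 1024 below is only the commonly suggested figure and must be validated for your own workload (and, as an underscore parameter, ideally with Oracle Support):

alter system set "_cursor_obsolete_threshold"=1024 scope=spfile;
-- takes effect at the next instance restart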

And this also means that you should not gather statistics too often, which is why GATHER AUTO is the default option. You may lower the STALE_PERCENT for some tables (a very large table with few changes may not be gathered often enough otherwise), but gathering stats on a table every day, even a small one, has a bad effect on cursor versions.
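For example, to lower the staleness threshold of one table only, a table preference is enough (the table name BIG_TABLE and the 5% value are just examples):

exec dbms_stats.set_table_prefs(user,'BIG_TABLE','STALE_PERCENT','5')
-- the automatic job will then consider BIG_TABLE stale after 5% of modified rows instead of the default 10%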


SQL> alter session set nls_timestamp_format='dd-mon hh24:mi:ss';
SQL>

select sql_id,child_number,ksugctm,invalidation_window
    ,(select cast(max(stats_update_time) as timestamp) from v$object_dependency 
      join dba_tab_stats_history on to_owner=owner and to_name=table_name and to_type=2
      where from_address=address and from_hash=hash_value and stats_update_time < ksugctm
     ) last_analyze
from (
    select sql_id,child_number,address,hash_value
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*.invalidation_window>([0-9]*)./invalidation_window>.ksugctm>([0-9]*).*','\1')),'second') invalidation_window
    ,timestamp '1970-01-01 00:00:00'+numtodsinterval(to_number(regexp_replace(reason,'.*([0-9]*)./invalidation_window>.ksugctm>([0-9]*)./ksugctm>.*','\2')),'second') ksugctm
    from v$sql_shared_cursor left outer join v$sql using(sql_id,child_number,address,child_address)
    where reason like '%Rolling Invalidate Window Exceeded(3)%' --and sql_id='0m8kbvzchkytt'
    ) order by sql_id,child_number,invalidation_window desc;

SQL_ID            CHILD_NUMBER KSUGCTM           INVALIDATION_WINDOW   LAST_ANALYZE
-------------     ------------ ---------------   -------------------   ----------------------------------
04kug40zbu4dm                2 23-feb 06:01:23   23-feb 06:01:04
0m8kbvzchkytt                0 23-feb 21:34:47   23-feb 21:34:25       23-FEB-21 09.34.18.582833000 PM GMT
0m8kbvzchkytt                1 23-feb 21:35:48   23-feb 21:35:23       23-FEB-21 09.35.18.995779000 PM GMT
0m8kbvzchkytt                2 23-feb 21:36:48   23-feb 21:36:22       23-FEB-21 09.36.19.305025000 PM GMT
0m8kbvzchkytt                3 23-feb 21:37:49   23-feb 21:37:32       23-FEB-21 09.37.19.681986000 PM GMT
0m8kbvzchkytt                4 23-feb 21:38:50   23-feb 21:38:26       23-FEB-21 09.38.20.035265000 PM GMT
0m8kbvzchkytt                5 23-feb 21:39:50   23-feb 21:39:32       23-FEB-21 09.39.20.319662000 PM GMT
0m8kbvzchkytt                6 23-feb 21:40:50   23-feb 21:40:29       23-FEB-21 09.40.20.617857000 PM GMT
0m8kbvzchkytt                7 23-feb 21:41:50   23-feb 21:41:28       23-FEB-21 09.41.20.924223000 PM GMT
0m8kbvzchkytt                8 23-feb 21:42:51   23-feb 21:42:22       23-FEB-21 09.42.21.356828000 PM GMT
0m8kbvzchkytt                9 23-feb 21:43:51   23-feb 21:43:25       23-FEB-21 09.43.21.690408000 PM GMT
0sbbcuruzd66f                2 23-feb 06:00:46   23-feb 06:00:45
0yn07bvqs30qj                0 23-feb 01:01:09   23-feb 00:18:02
121ffmrc95v7g                3 23-feb 06:00:35   23-feb 06:00:34

This query joins with the statistics history in order to get an idea of the root cause of the invalidation. I look at the cursor dependencies, and the table statistics. This may be customized with partitions, index names,…

The core message here is that gathering statistics on a table will make the cursors unshareable. If you have, say, 10 versions because of multiple NLS settings and bind variable lengths, and gather the statistics every day, the list of child cursors will increase until it reaches the obsolete threshold. And when the list is long, you will have more pressure on the library cache during attempts to soft parse. If you gather statistics outside of the automatic job, and do it without ‘GATHER AUTO’, even on small tables where gathering is fast, you increase the number of cursor versions for no reason. The best practice for statistics gathering is keeping the AUTO settings. The query above may help to see the correlation between statistics gathering and rolling invalidation.
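As a minimal sketch of this best practice, gathering only what is stale or missing in the current schema looks like this:

exec dbms_stats.gather_schema_stats(ownname=>user, options=>'GATHER AUTO')
-- same logic as the maintenance window job: tables with fresh statistics are skipped, so no useless cursor invalidation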

The article Oracle Rolling Invalidate Window Exceeded(3) appeared first on Blog dbi services.


Oracle Database Appliance: what have you missed since X3/X4/X5?


Introduction

ODA started to become popular with X3-2 and X4-2 in 2013/2014. These 2 ODAs were very similar. The X5-2 from 2015 was different, with 3.5 inch disks instead of 2.5 inch and additional SSDs for small databases (FLASH diskgroup). All these 3 ODAs were running 11gR2 and 12cR1 databases and were managed by the oakcli binary. If you’re still using these old machines, you should know that there are a lot of differences compared to modern ODAs. Here is an overview of what has changed on these appliances.

Single-node ODAs

Starting from X6, ODAs are also available in “lite” versions, meaning single-node ODAs. The benefits are real: way cheaper than 2-node ODAs (now called High Availability ODAs), no need for RAC complexity, easy plug-in (power supply and network and that’s it), cheaper Disaster Recovery, faster deployment, etc. Most of the ODAs sold today are single-node ODAs, as Real Application Clusters is becoming less and less popular. Today, the ODA family is composed of 2 lite versions, X8-2S and X8-2M, and one HA version, X8-2HA.

Support for Standard Edition

Up to X5, ODAs had only supported Enterprise Edition, meaning that the base price was more likely a 6-digit figure in $/€/CHF if you pack the server with 1 EE PROC license. With Standard Edition, the base price is “only” one third of that (X8-2S with 1 SE2 PROC license).

Full SSD storage

I/Os have always been a bottleneck for databases. X6 and later ODAs are mainly full SSD servers. “Lite” ODAs only run on NVMe SSDs (the fastest storage solution for now), and HA ODAs are available in both configurations: SSD (High Performance) or a mix of SSD and HDD (High Capacity), the latter being quite rare. Even the smallest ODA X8-2S with only 2 NVMe SSDs will be faster than any other disk-based ODA.

Higher TeraByte density and flexible disk configuration

For sure, comparing a 5-year-old ODA to X8 is not fair, but ODA X3 and X4 used to pack 18TB in 4U while ODA X8-2M will have up to 75TB in 2U. Some customers didn’t choose ODA 5 years ago because of the limited capacity; it’s no longer an issue today.

Another point is that storage configuration is more flexible. With ODA X8-2M you are able to add disks in pairs, and with ODA X8-2HA you can add 5-disk packs. There is no longer the need to double the capacity in one step as we did on X3/X4/X5 (and you could only do it once).
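As an illustration, adding a pair of disks on an X8-2M should look like the following; the disk count is an example and the exact options may differ depending on your ODA release:

odacli expand-storage -ndisk 2
odacli list-jobs
# follow the storage expansion job before adding the next pair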

Furthermore, you can now choose an accurate disk split between DATA and RECO (+/-1%), compared to the two fixed DATA/RECO options on X3/X4/X5: 40/60 or 80/20.

Web GUI

A real appliance needs a real GUI. X6 introduced the ODA Web GUI, a basic GUI for basic ODA functions (mainly dbhome and database creation and deletion), and this GUI has become more and more capable during the past years. If some actions are still missing, the GUI is now quite powerful and also user-friendly. And you can still use the command line (odacli) if you prefer.

Smart management

ODA now has a repository and everything is ordered and referenced in that repository: each database, dbhome, network and job is identified with a unique id. And all tasks are background jobs with a verbose status.

Next-gen virtualization support

With the old HA you had to choose between bare-metal mode and virtualized mode, the latter being for running additional virtual machines for other purposes than databases. But the databases were then also running in a single dedicated VM. Virtualized mode relied on OVM technology, soon deprecated and now replaced by OLVM. OLVM brings both the advantages of a virtualized ODA (running additional VMs) and of a bare-metal ODA (running databases in bare metal). And it relies on KVM instead of Xen, which is better because it’s part of the Linux operating system.

Data Guard support

It’s quite a new feature, but it’s already a must-have. The command line interface (odacli) is now able to create and manage a Data Guard configuration, and even do the duplicate and the switchover/failover. It’s so convenient that it’s a key benefit of the ODA compared to other platforms. Please have a look at this blogpost for a test case. If you’re used to configuring Data Guard, you will probably appreciate this feature a lot.
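As an outline of the workflow (the id and the db unique name below are placeholders, and options may vary slightly between ODA releases):

odacli configure-dataguard
odacli list-dataguardstatus
odacli switchover-dataguard -i <dataguard configuration id> -u <DB_UNIQUE_NAME of the new primary>

The first command is interactive; the two others let you check the configuration and switch roles.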

Performance

ODA has always been a strong challenger compared to other platforms. Regarding modern ODAs, NVMe SSDs associated with high-speed cores (as soon as you limit the number of cores in use on the ODA to match your license – please have a look at how to do it) make the ODA a strong performer, even compared to EXADATA. Don’t miss that point: your databases will probably run better on ODA than on anything else.
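For example, capping the number of enabled cores (4 here is just an example) to match your license is a one-liner:

odacli update-cpucore -c 4
odacli describe-cpucore
# check the current core count after the change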

Conclusion

If you’re using Oracle databases, you should probably put ODA back on your short list. It’s not the perfect solution, and some configurations cannot be addressed by ODA, but it brings many more advantages than drawbacks. And now there is a complete range of models for each need. If your next infrastructure is not in the Cloud, it’s probably with ODAs.

The article Oracle Database Appliance: what have you missed since X3/X4/X5? appeared first on Blog dbi services.

Oracle Blockchain Tables: COMMIT-Time


Oracle Blockchain Tables are available now with Oracle 19.10 (see Connor’s Blog on it); they are part of all editions and do not need any specific license. I.e. whenever we need to store data in a table which should never be updated anymore, and we have to ensure the data cannot be tampered with, then blockchain tables should be considered as an option. As Oracle writes in the documentation that blockchain tables could e.g. be used for “audit trails”, I thought to test them by archiving unified audit trail data. Let me share my experience:

First of all I setup a 19c-database so that it supports blockchain tables:

– Installed 19.10.0.0.210119 (patch 32218454)
– Set COMPATIBLE=19.10.0 and restarted the DB
– Installed patch 32431413

REMARK: All tests I’ve done with 19c have been done with Oracle 21c on the Oracle Cloud as well to verify that results are not caused by the backport of blockchain tables to 19c.

Creating the BLOCKCHAIN TABLE:

Blockchain Tables do not support the Create Table as Select-syntax:


create blockchain table uat_copy_blockchain2
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data
as select * from unified_audit_trail;

ERROR at line 6:
ORA-05715: operation not allowed on the blockchain table

I.e. I have to pre-create the blockchain table and insert with “insert… select”:


CREATE blockchain TABLE uat_copy_blockchain 
   ("AUDIT_TYPE" VARCHAR2(64),
	"SESSIONID" NUMBER,
	"PROXY_SESSIONID" NUMBER,
	"OS_USERNAME" VARCHAR2(128),
...
	"DIRECT_PATH_NUM_COLUMNS_LOADED" NUMBER,
	"RLS_INFO" CLOB,
	"KSACL_USER_NAME" VARCHAR2(128),
	"KSACL_SERVICE_NAME" VARCHAR2(512),
	"KSACL_SOURCE_LOCATION" VARCHAR2(48),
	"PROTOCOL_SESSION_ID" NUMBER,
	"PROTOCOL_RETURN_CODE" NUMBER,
	"PROTOCOL_ACTION_NAME" VARCHAR2(32),
	"PROTOCOL_USERHOST" VARCHAR2(128),
	"PROTOCOL_MESSAGE" VARCHAR2(4000)
   )
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
tablespace audit_data;

Table created.

Now load the data into the blockchain table:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

Elapsed: 00:00:07.24
SQL> commit;

Commit complete.

Elapsed: 00:00:43.26

Over 43 seconds for the COMMIT!!!

The reason for the long COMMIT-time is that the blockchain (or better the row-chain of hashes for the 26526 rows) is actually built when committing. I.e. all blockchain related columns in the table are empty after the insert, before the commit:


SQL> insert into uat_copy_blockchain
  2  select * from unified_audit_trail;

26526 rows created.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  and ORABCTAB_CHAIN_ID$ is NULL
  4  and ORABCTAB_SEQ_NUM$ is NULL
  5  and ORABCTAB_CREATION_TIME$ is NULL
  6  and ORABCTAB_USER_NUMBER$ is NULL
  7  and ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
     26526

During the commit those hidden columns are updated:


SQL> commit;

Commit complete.

SQL> select count(*) from uat_copy_blockchain
  2  where ORABCTAB_INST_ID$ is NULL
  3  or ORABCTAB_CHAIN_ID$ is NULL
  4  or ORABCTAB_SEQ_NUM$ is NULL
  5  or ORABCTAB_CREATION_TIME$ is NULL
  6  or ORABCTAB_USER_NUMBER$ is NULL
  7  or ORABCTAB_HASH$ is NULL
  8  ;

  COUNT(*)
----------
         0

When doing a SQL-Trace I can see the following recursive statements during the COMMIT:


SQL ID: 6r4qu6xnvb3nt Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_inst_id$ = :1,
  orabctab_chain_id$ = :2, orabctab_seq_num$ = :3, orabctab_user_number$ = :4,
   ORABCTAB_CREATION_TIME$ = :5
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.56       0.55          0          0          0           0
Execute  26526     10.81      12.21       3824       3395      49546       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052     11.38      12.76       3824       3395      49546       26526

********************************************************************************

SQL ID: 4hc26wpgb5tqr Plan Hash: 2019081831

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      9.29      10.12        512      26533      27822       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    26527      9.29      10.12        512      26533      27822       26526

********************************************************************************

SQL ID: 2t5ypzqub0g35 Plan Hash: 960301545

update "CBLEILE"."UAT_COPY_BLOCKCHAIN" set orabctab_hash$ = :1
where
 rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse    26526      0.58       0.57          0          0          0           0
Execute  26526      6.79       7.27       1832       2896      46857       26526
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53052      7.37       7.85       1832       2896      46857       26526

********************************************************************************

SQL ID: bvggpqdp5u4uf Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26527      5.34       5.51          0          0          0           0
Fetch    26527      0.75       0.72          0      53053          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53055      6.10       6.24          0      53053          0       26526

********************************************************************************

SQL ID: dktp4suj3mn0t Plan Hash: 4188997816

SELECT  "AUDIT_TYPE",  "SESSIONID",  "PROXY_SESSIONID",  "OS_USERNAME",
  "USERHOST",  "TERMINAL",  "INSTANCE_ID",  "DBID",  "AUTHENTICATION_TYPE",
  "DBUSERNAME",  "DBPROXY_USERNAME",  "EXTERNAL_USERID",  "GLOBAL_USERID",
  "CLIENT_PROGRAM_NAME",  "DBLINK_INFO",  "XS_USER_NAME",  "XS_SESSIONID",
  "ENTRY_ID",  "STATEMENT_ID",  "EVENT_TIMESTAMP",  "EVENT_TIMESTAMP_UTC",
  "ACTION_NAME",  "RETURN_CODE",  "OS_PROCESS",  "TRANSACTION_ID",  "SCN",
  "EXECUTION_ID",  "OBJECT_SCHEMA",  "OBJECT_NAME",  "SQL_TEXT",  "SQL_BINDS",
    "APPLICATION_CONTEXTS",  "CLIENT_IDENTIFIER",  "NEW_SCHEMA",  "NEW_NAME",
   "OBJECT_EDITION",  "SYSTEM_PRIVILEGE_USED",  "SYSTEM_PRIVILEGE",
  "AUDIT_OPTION",  "OBJECT_PRIVILEGES",  "ROLE",  "TARGET_USER",
  "EXCLUDED_USER",  "EXCLUDED_SCHEMA",  "EXCLUDED_OBJECT",  "CURRENT_USER",
  "ADDITIONAL_INFO",  "UNIFIED_AUDIT_POLICIES",  "FGA_POLICY_NAME",
  "XS_INACTIVITY_TIMEOUT",  "XS_ENTITY_TYPE",  "XS_TARGET_PRINCIPAL_NAME",
  "XS_PROXY_USER_NAME",  "XS_DATASEC_POLICY_NAME",  "XS_SCHEMA_NAME",
  "XS_CALLBACK_EVENT_TYPE",  "XS_PACKAGE_NAME",  "XS_PROCEDURE_NAME",
  "XS_ENABLED_ROLE",  "XS_COOKIE",  "XS_NS_NAME",  "XS_NS_ATTRIBUTE",
  "XS_NS_ATTRIBUTE_OLD_VAL",  "XS_NS_ATTRIBUTE_NEW_VAL",  "DV_ACTION_CODE",
  "DV_ACTION_NAME",  "DV_EXTENDED_ACTION_CODE",  "DV_GRANTEE",
  "DV_RETURN_CODE",  "DV_ACTION_OBJECT_NAME",  "DV_RULE_SET_NAME",
  "DV_COMMENT",  "DV_FACTOR_CONTEXT",  "DV_OBJECT_STATUS",  "OLS_POLICY_NAME",
    "OLS_GRANTEE",  "OLS_MAX_READ_LABEL",  "OLS_MAX_WRITE_LABEL",
  "OLS_MIN_WRITE_LABEL",  "OLS_PRIVILEGES_GRANTED",  "OLS_PROGRAM_UNIT_NAME",
   "OLS_PRIVILEGES_USED",  "OLS_STRING_LABEL",  "OLS_LABEL_COMPONENT_TYPE",
  "OLS_LABEL_COMPONENT_NAME",  "OLS_PARENT_GROUP_NAME",  "OLS_OLD_VALUE",
  "OLS_NEW_VALUE",  "RMAN_SESSION_RECID",  "RMAN_SESSION_STAMP",
  "RMAN_OPERATION",  "RMAN_OBJECT_TYPE",  "RMAN_DEVICE_TYPE",
  "DP_TEXT_PARAMETERS1",  "DP_BOOLEAN_PARAMETERS1",
  "DIRECT_PATH_NUM_COLUMNS_LOADED",  "RLS_INFO",  "KSACL_USER_NAME",
  "KSACL_SERVICE_NAME",  "KSACL_SOURCE_LOCATION",  "PROTOCOL_SESSION_ID",
  "PROTOCOL_RETURN_CODE",  "PROTOCOL_ACTION_NAME",  "PROTOCOL_USERHOST",
  "PROTOCOL_MESSAGE",  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",
  "ORABCTAB_SEQ_NUM$",  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",
  "ORABCTAB_HASH$",  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.85       3.84          0          0          0           0
Fetch    26526      1.31       1.31          0      28120          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      5.17       5.15          0      28120          0       26526

********************************************************************************

SQL ID: fcq6kngm4b3m5 Plan Hash: 4188997816

SELECT  "ORABCTAB_INST_ID$",  "ORABCTAB_CHAIN_ID$",  "ORABCTAB_SEQ_NUM$",
  "ORABCTAB_CREATION_TIME$",  "ORABCTAB_USER_NUMBER$",  "ORABCTAB_HASH$",
  "ORABCTAB_SIGNATURE$",  "ORABCTAB_SIGNATURE_ALG$",
  "ORABCTAB_SIGNATURE_CERT$",  "ORABCTAB_SPARE$"
from
 "CBLEILE"."UAT_COPY_BLOCKCHAIN" where rowid = :lrid


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  26526      3.04       3.05          0          0          0           0
Fetch    26526      0.41       0.39          0      26526          0       26526
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    53053      3.45       3.45          0      26526          0       26526

I.e. for every row inserted in the transaction, several recursive statements have to be executed to compute and update the inserted rows to link them together through the hash chain.

That raises the question of whether I should take care of PCTFREE when creating the blockchain table to avoid row migration (often wrongly called row chaining).

As with normal tables, blockchain tables have a default of 10% for PCTFREE:


SQL> select pct_free from tabs where table_name='UAT_COPY_BLOCKCHAIN';

  PCT_FREE
----------
        10

Do we actually have migrated rows after the commit?


SQL> @?/rdbms/admin/utlchain

Table created.

SQL> analyze table uat_copy_blockchain list chained rows;

Table analyzed.

SQL> select count(*) from chained_rows;

  COUNT(*)
----------
      7298

SQL> select count(distinct dbms_rowid.rowid_relative_fno(rowid)||'_'||dbms_rowid.rowid_block_number(rowid)) blocks_with_rows
  2  from uat_copy_blockchain;

BLOCKS_WITH_ROWS
----------------
	    1084

So it makes sense to adjust the PCTFREE. In my case, the best would be something like 25-30%, because the blockchain data makes up around 23% of the average row length:


SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN';

SUM(AVG_COL_LEN)
----------------
	     401

SQL> select sum(avg_col_len) from user_tab_cols where table_name='UAT_COPY_BLOCKCHAIN'
  2  and column_name like 'ORABCTAB%';

SUM(AVG_COL_LEN)
----------------
	      92

SQL> select (92/401)*100 from dual;

(92/401)*100
------------
  22.9426434

I could reduce the commit-time by 5 seconds by adjusting the PCTFREE to 30.
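For reference, the adjusted DDL is simply the same CREATE statement as above with a PCTFREE clause, something like:

CREATE blockchain TABLE uat_copy_blockchain
   ( -- same column list as in the CREATE statement above
     ... )
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1
pctfree 30
tablespace audit_data;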

But coming back to the commit-time issue:

This can easily be tested by just checking how much the commit-time increases when more data is loaded per transaction. Here is the test done with Oracle 21c on the Oracle Cloud:


SQL> create blockchain table test_block_chain (a number, b varchar2(100), c varchar2(100))
  2  no drop until 0 days idle
  3  no delete until 31 days after insert
  4  hashing using "sha2_512" version v1;

Table created.

SQL> set timing on
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 1000;

999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:00.82
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 2000;

1999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:01.56
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 4000;

3999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:03.03
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 8000;

7999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:06.38
SQL> insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum < 16000;

15999 rows created.

SQL> commit;

Commit complete.

Elapsed: 00:00:11.71

I.e. the more data inserted, the longer the commit-times. The times go up almost linearly with the amount of data inserted per transaction.

Can we gain something here by doing things in parallel? A commit-statement cannot be parallelized, but you may of course split your e.g. 24000 rows insert into 2 x 12000 rows inserts and run them in parallel and commit them at the same time. I created 2 simple scripts for that:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct.bash 
#!/bin/bash

ROWS_TO_LOAD=$1

sqlplus -S cbleile/${MY_PASSWD}@pdb1 <<EOF
insert into test_block_chain select object_id, object_type, object_name from all_objects where rownum <= $ROWS_TO_LOAD ;
-- alter session set events '10046 trace name context forever, level 12';
set timing on
commit;
-- alter session set events '10046 trace name context off';
exit
EOF

exit 0

oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] cat load_bct_parallel.bash 
#!/bin/bash

PARALLELISM=$1
LOAD_ROWS=$2

for i in $(seq ${PARALLELISM})
do
  ./load_bct.bash $LOAD_ROWS &
done
wait

exit 0

Loading 4000 Rows in a single job:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 1 4000

4000 rows created.


Commit complete.

Elapsed: 00:00:03.56

Loading 4000 Rows in 2 jobs, which run in parallel and each loading 2000 rows:


oracle@cbl:/home/oracle/ [DB0111 (CDB$ROOT)] ./load_bct_parallel.bash 2 2000

2000 rows created.


2000 rows created.


Commit complete.

Elapsed: 00:00:17.87

Commit complete.

Elapsed: 00:00:18.10

That doesn’t scale at all. Enabling SQL-Trace for the 2 jobs in parallel showed this:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      8.41       8.58          0    1759772       2088        2000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2001      8.41       8.58          0    1759772       2088        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=103 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=25 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=9 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                             108        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00
********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2000      0.55       0.55          0          0          0           0
Fetch     2000      7.39       7.52          0    1758556          0        2000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      7.95       8.08          0    1758556          0        2000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  buffer busy waits                              80        0.00          0.00
  latch: cache buffers chains                     1        0.00          0.00

The trace of the single job contained the following for the same 2 statements:


SQL ID: catcycjs3ddry Plan Hash: 3098282860

update sys.blockchain_table_chain$ set                    hashval_position =
  :1, max_seq_number =:2
where
 obj#=:3 and inst_id = :4 and chain_id = :5                  and epoch# = :6

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.76       1.85          0       8001       4140        4000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4001      1.76       1.85          0       8001       4140        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  UPDATE  BLOCKCHAIN_TABLE_CHAIN$ (cr=2 pr=0 pw=0 time=102 us starts=1)
         1          1          1   TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=26 us starts=1 cost=1 size=1067 card=1)
         1          1          1    INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=12 us starts=1 cost=1 size=0 card=1)(object id 11132)

********************************************************************************

SQL ID: fh1yz4801af27 Plan Hash: 1612174689

select max_seq_number, hashval_position
from
 sys.blockchain_table_chain$ where obj#=:1 and                     inst_id =
  :2 and chain_id = :3 and epoch# = :4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   4000      1.09       1.09          0          0          0           0
Fetch     4000      0.06       0.06          0       8000          0        4000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8001      1.15       1.16          0       8000          0        4000

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS   (recursive depth: 1)
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED BLOCKCHAIN_TABLE_CHAIN$ PARTITION: ROW LOCATION ROW LOCATION (cr=2 pr=0 pw=0 time=49 us starts=1 cost=1 size=1067 card=1)
         1          1          1   INDEX RANGE SCAN BLOCKCHAIN_TABLE_CHAIN$_IDX (cr=1 pr=0 pw=0 time=10 us starts=1 cost=1 size=0 card=1)(object id 11132)

I.e. there’s a massive difference in logical IOs and I could see in the trace that the SQLs became slower with each execution.

Summary: Blockchain Tables are a great technology, but as with any other technology you should know its limitations. There is an overhead when committing, and inserting into such tables from parallel sessions currently does not scale when committing. If you test blockchain tables, I recommend reviewing the PCTFREE setting of the blockchain table to avoid row migration.

The article Oracle Blockchain Tables: COMMIT-Time appeared first on Blog dbi services.

Delphix: a glossary to get started


By Franck Pachot

.
dbi services is a partner of Delphix – a data virtualization platform for easy cloning of databases. I’m sharing a little glossary to get started if you are not familiar with the terms you see in the doc, console or logs.

Setup console

The setup console is the first interface you will access when installing the Delphix engine (“Dynamic Data Platform”). You import the .ova and start it. If you are on a network with DHCP you can connect to the GUI, like at http://192.168.56.111/ServerSetup.html#/dashboard. If not, you will access the console (also available through ssh) where you have a simple help. The basic commands are `ls` to show what is available – objects and operations, `commit` to validate your changes, `up` to… go up.


network
setup
update
set hostname="fmtdelphix01"
set dnsServers="8.8.8.8, 1.1.1.1"
set defaultRoute="192.168.56.1"
set dnsDomain="tetral.com"
set primaryAddress="192.168.56.111/24"
commit

And anyway, if you started with DHCP I think you need to disable it ( network setup update ⏎ set dhcp=false ⏎ commit ⏎)

When in the console, the hardest thing for me is to find the QWERTY keys for ” and = as the others are on the numeric pad (yes… the numeric pad is still useful!)

Storage test

Once you have an IP address you can ssh to it for the command line console with the correct keyboard and copy/paste. One thing that you can do only there, and before engine initialization, is storage tests:

delphix01> storage
delphix01 storage> test
delphix01 storage test> create
delphix01 storage test create *> ls
Properties
type: StorageTestParameters
devices: (unset)
duration: 120
initializeDevices: true
initializeEntireDevice: false
testRegion: 512GB
tests: ALL

Before that you can set a different duration and testRegion if you don’t want to wait. Then you type `commit` to start it (and check the ETA to know how many coffees you can drink) or `discard` to cancel the test.

Setup console

Then you will continue with the GUI and the first initialization will run the wizard: choose “Virtualization engine”, set up the admin and sysadmin accounts (sysadmin is the one for this “Setup console” and admin the one for the “Management” console), NTP, Network, Storage, Proxy, Certificates, SMTP. Don’t worry, many things can be changed later, like adding network interfaces, adding new disks (just click on rediscover and accept them as “Data” usage), adding certificates for HTTPS, getting the registration key, and adding users. The users here are for this Server Setup GUI or CLI console only.

GUI: Setup and Management consoles

The main reason for this blog post is to explain the names that can be misleading because named differently at different places. There are two graphical consoles for this engine once setup is done:

  • The Engine Setup console with #serverSetup in the URL and SETUP subtitle in the DELPHIX login screen. You use SYSADMIN here (or another user that you will create in this console). You manage the engine here (network, storage,…)
  • The Management console with #delphixAdmin in the URL and the “DYNAMIC DATA PLATFORM” subtitle. You use the ADMIN user here (or another user that you will create in this console). You manage your databases here.

Once you get this, everything is simple. I’ll mention the few other concepts that may have a misleading name in the console or the API. Actually, there’s a third console, the Self Service one, with /jetstream/#mgmt in the URL, that you access from the Management console with the Management user. And of course there are the APIs. I’ll cover only the Management console in the rest of this post.

Management console

Its subtitle in the login screen is “Dynamic Data Platform” and it is actually the “Virtualization” engine. There, you use the “admin” user, not the “sysadmin” one, or any newly added one. The Manage/Dashboard is the best place to start. The main goal of this post is to explain quickly the different concepts and their different names.

Environments

An Environment is the door to other systems. Think of “environments” as if they were called “hosts”. You will create an environment for source and target hosts. It needs only ssh access (the best is to add the Delphix ssh key to the target’s .ssh/authorized_keys). You can create a dedicated linux user, or use the ‘oracle’ one for simplicity. It only needs a directory that it owns (I use “/u01/app/delphix”) where it will install the “Toolkit” (about 500MB used, but check the prerequisites). That’s sufficient for sources, but if you want to mount clones you need sudo privileges for that:

cat > /etc/sudoers.d/delphix_oracle <<'CAT'
Defaults:oracle !requiretty
oracle ALL=NOPASSWD: /bin/mount, /bin/umount, /bin/mkdir, /bin/rmdir, /bin/ps
CAT

And that’s all you need. There’s no agent running. All is run by the Delphix engine when needed, through ssh.

Well, I mentioned ssh only for operations, but the host must also be able to connect to the Delphix engine, to send the backups of a dSource or to mount an NFS share.

Additionally, you will need to ensure that you have enough memory to start clones as I’m sure you will quickly be addicted to the easiness of provisioning new databases. I use this to check available memory in small pages (MemAvailable) and large pages (HugePages_Free):

awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print sprintf("%-60s %10d",k,v[k]/p); for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|( HugePage)[^:]*" #awk #meminfo

You find it there: https://franckpachot.medium.com/proc-meminfo-formatted-for-humans-350c6bebc380

As in many places, you name your environment (I put the host name and a little information behind like “prod” or “clones”) and have a Notes textbox that can be useful for you or your colleagues. Data virtualization is about agility and self-documented tools are the right place: you see the latest info next to the current status.

In each environment you can auto-discover the Databases, promote one as a dSource and, if the database is an Oracle CDB, discover the PDBs inside it.
You can also add filesystem directories. And this is where the naming confusion starts: they are displayed here, in environments, as “Unstructured Files”, you add them with “Add Database”, and you clone them to “vFiles”…

Datasets and Groups

And all those dSources, VDBs and vFiles are “Datasets”. If you click on “dSources”, “VDBs” or “vFiles” you always go to “Datasets”. And there, they are listed in “Groups”. And in each group you see the Dataset name with its type (like “VDB” or “dSource”) and status (like “Running” or “Stopped” for VDBs, or “Active” or “Detached” for dSources). The idea is that all Datasets have a Timeflow, Status and Configuration, because clones can also be sources for other clones. In the CLI console you see all Datasets as “source” objects, with a “virtual” flag that is true only for a VDB or an unlinked dSource.

Don’t forget the Notes in the Status panel. I put the purpose there (why the clone is created, who is the user,…) and state (if the application is configured to work on it for example).

About the groups, you arrange them as you want. They also have Notes to describe it. And you can attach default policies to them. I group by host usually, and type of users (as they have different policies). And in the name of the group or the policy, I add a little detail to see which one is daily refreshed for example, or which one is a long-term used clone.

dSource

The first dataset you will have is the dSource. In a source environment, you have Dataset Homes (the ORACLE_HOME for Oracle) and from there a “data source” (a database) is discovered in an environment. It will run a backup sent to Delphix (as a device of type TAPE, for Oracle, handled by the Delphix libobk.so). This is stored in the Delphix engine storage and the configuration is kept to be able to refresh later with incremental backups (called SnapSync, or DB_SYNC, or Snapshot with the camera icon). Delphix will then apply the incrementals on its copy-on-write filesystem. There’s no need for an Oracle instance to apply them: it seems that Delphix handles the proprietary format of Oracle backupsets. Of course, the archive logs generated during the backups must be kept, but they need an Oracle instance to be applied, so they are just stored to be applied on a thin provisioning clone or refresh. If there’s a large gap and the incremental takes long, you may opt for a DoubleSync where only the second one, faster, needs to be covered by archived logs.

Timeflow

So you see the points of Sync as snapshots (camera icon) in the timeflow and you can provision a clone from them (the copy-paste Icon in the Timeflow). Automatic snapshots can be taken by the SnapSync policy and will be kept to cover the Retention policy (but you can mark one to keep longer as well). You take a snapshot manually with the camera icon.

In addition to the archivelog needed to cover the SnapSync, intermediate archive logs and even online logs can be retrieved with LogSync when you clone from an intermediate Point-In-Time. This, in the Timeflow, is seen with “Open LogSync” (an icon like a piece of paper) and from there you can select a specific time.

In a dSource, you select the snapshot, or point-in-time, to create a clone from it. It creates a child snapshot where all changes will be copy-on-write so that modifications on the parent are possible (the next SnapSync will write on the parent) and modifications on the child. And the first modification will be the apply of the redo log before opening the clone. The clone is simply an instance on an NFS mount to the Delphix engine.

VDB

Those clones become a virtual database (VDB) which is still a Dataset as it can be source for further clones.

They have additional options. They can be started and stopped as they are fully managed by Delphix (you don’t have to do anything on the server). And because they have a parent, you can refresh them (the round arrow icon). In the Timeflow, you see the snapshots as in all Datasets. But you also have the refreshes. And there is another operation related to this branch only: rewind.

Rewind

This is like a Flashback Database in Oracle: you mount the database from another point-in-time. This operation has many names. In the Timeflow the icon with two arrows on left is called “Rewind”. In the jobs you find “Rollback”. And none are really good names because you can move back and then in the future (relatively to current state of course).

vFiles

Another type of Dataset is vFiles, where you can synchronize simple filesystems. In the environments, you find it in the Databases tab, under Unstructured Files instead of the Dataset Home (which is sometimes called Installation). And the directory paths are displayed as DATABASES. vFiles is really convenient when you store your file metadata in the database and the files themselves outside of it: you probably want to get them at the same point-in-time.

Detach or Unlink

When a dSource is imported in Delphix, it is a Dataset that can be source for a new clone, or to refresh an existing one. As it is linked to a source database, you can SnapSync and LogSync. But you can also unlink it from the source and keep it as a parent of clones. This is named Detach or Unlink operation.

Managed Source Data

Managed Source Data is the important metric for licensing reasons. Basically, Delphix ingests databases from dSources and stores it in a copy-on-write filesystem on the storage attached to the VM where Delphix engine runs. The Managed Source Data is the sum of all root parent before compression. This means that if you ingested two databases DB1 and DB2 and have plenty of clones (virtual databases) you count only the size of DB1 and DB2 for licensing. This is really good because this is where you save the most: storage thanks to compression and thin provisioning. If you drop the source database, for example DB2 but still keep clones on it, the parent snapshot must be kept in the engine and this still counts for licensing. However, be careful that as soon as a dSource is unlinked (when you don’t want to refresh from it anymore, and maybe even delete the source) the engine cannot query it to know the size. So this will not be displayed on Managed Source Data dashboard but should count for licensing purpose.

The article Delphix: a glossary to get started appeared first on Blog dbi services.

How to configure additional listeners on ODA


Introduction

Oracle Database Appliance has quite a lot of nice features, but when looking into the documentation, at least one thing is missing: how to configure multiple listeners? Odacli apparently doesn’t know what a listener is. Let’s find out how to add new ones.

odacli

Everything should be done using odacli on ODA, but unfortunately odacli has no commands for configuring listeners:

odacli -h | grep listener
nothing!

The wrong way to configure a listener

One could tell me that configuring a listener is easy, you just have to describe it into the listener.ora file, for example:

echo "DBI_LSN=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda-dbi-test)(PORT=1576))))" >> $ORACLE_HOME/network/admin/listener.ora

… and start it with:
lsnrctl start DBI_LSN

But even if it works fine, it’s not the best way to do that. Why? Simply because it will not survive a reboot.

A better way to configure a listener: through Grid Infrastructure

ODA makes use of Grid Infrastructure for its default listener on port 1521. The listener is an Oracle service running in Grid Infrastructure, so additional listeners should be declared in Grid Infrastructure using srvctl. Here is an example of configuring a new listener on port 1576:

su - grid
which srvctl

/u01/app/19.0.0.0/oracle/bin/srvctl
srvctl add listener -listener DBI_LSN -endpoints 1576
srvctl config listener -listener DBI_LSN

Name: DBI_LSN
Type: Database Listener
Network: 1, Owner: grid
Home:
End points: TCP:1576
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
srvctl start listener -listener DBI_LSN
ps -ef | grep tnslsn | grep DBI

oracle 71530 1 0 12:41 ? 00:00:00 /u01/app/19.0.0.0/oracle/bin/tnslsnr DBI_LSN -no_crs_notify -inherit

The new listener is running fine, and the listener.ora has been completed with this new item:

cat /u01/app/19.0.0.0/oracle/network/admin/listener.ora | grep DBI
DBI_LSN=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=DBI_LSN)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_DBI_LSN=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_DBI_LSN=SUBNET # line added by Agent

For sure, configuring a listener on a particular port is only possible if this port is not in use.

Removing a listener

If you want to remove a listener, you just need to remove the service from Grid Infrastructure:
su - grid
srvctl stop listener -listener DBI_LSN
srvctl remove listener -listener DBI_LSN
ps -ef | grep tnslsn | grep DBI

no more listener DBI_LSN running
cat /u01/app/19.0.0.0/oracle/network/admin/listener.ora | grep DBI
no more configuration in the listener.ora file

Obviously, if you plan to remove a listener, please make sure that no database is using it prior to removing it.
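A quick way to check this before dropping it is to look at the services still registered in the listener (listener name taken from the example above):

lsnrctl status DBI_LSN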

How to use this listener for my database

Since 12c, a new LREG process in the instance does the registration of the database in a listener. Previously, this job was done by the PMON process. The default behavior is to register the instance in the standard listener on 1521. If you want to configure your database with the new listener you just created with srvctl, configure the local_listener parameter:

su - oracle
. oraenv <<< DBTEST
sqlplus / as sysdba
alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=oda-dbi-test)(PORT=1576))' scope=both;
alter system register;
exit;

No need to reboot anything.

What about standard 1521 listener?

You may think about removing the standard listener on port 1521, but I wouldn’t do that. I think it’s better to keep this default one, even if none of your databases are using it. Removing it could later cause trouble when patching or configuring something else on your ODA.

Conclusion

The listener management with odacli could come one day, but for now (19.9 and before) you still have to configure it using Grid Infrastructure. It’s quite easy and pretty straightforward if you do it the proper way.

This article How to configure additional listeners on ODA first appeared on the dbi services Blog.

Oracle Database Appliance: ODA patch 19.10 is out


Introduction

Four months after 19.9, here comes the 19.10 version of the Oracle Database Appliance patch. Let's have a look at what's new.

Will my ODA support this 19.10 patch?

The 19.10 release is the same for all ODAs, as usual. The oldest ODA compatible with this release is the X5-2; don't expect to install this version on older models. The X4-2 is stuck at 18.8, the X3-2 at 18.5 and the V1 at 12.2. If you are still using these models, please consider an upgrade to X8-2 and 19c to get back to supported hardware and software.

What are the new features?

As usual, 19.10 includes the latest patches for all database homes, including for versions that are no longer covered by Premier Support (the bundled patches are the latest ones, from January 19th, 2021).

The most important new feature is dedicated to KVM virtualization. It's now possible to create a database VM, for example if you need to isolate the systems running your databases. Up to 19.9, virtualization was dedicated to purposes other than Oracle databases. Now with 19.10, a set of new odacli commands is available, such as: odacli create-dbsystem
It immediately makes me think of a DB system in OCI, the Oracle public cloud, and I'm quite sure the implementation is similar. Basically, it will create a new KVM virtual machine with a dbhome and a database inside. As there are quite a lot of parameters to provide when creating a dbsystem, you feed create-dbsystem with a JSON file, which is very similar to the one you used for provisioning the appliance. In the file, you will give host and network information, users and groups settings, database name, database version and database parameters, as if it were an ODA deployment file. Brilliant.

You can also get an overview of the existing dbsystems on your ODA with: odacli list-dbsystems
You can get more information on a dbsystem, and you can also start, stop and delete a dbsystem with:
odacli describe-dbsystem
odacli start-dbsystem
odacli stop-dbsystem
odacli delete-dbsystem

Quite easy to manage.
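Put together, the lifecycle of a dbsystem could look like this (a minimal sketch: the JSON request file, the dbsystem name and the -p/-n options are assumptions to check against odacli's help on your release):

# create a dbsystem from a JSON request file (structure similar to an ODA deployment file)
odacli create-dbsystem -p my_dbsystem.json

# list and inspect the dbsystems on this ODA
odacli list-dbsystems
odacli describe-dbsystem -n mydbsystem

# stop, start and finally remove it
odacli stop-dbsystem -n mydbsystem
odacli start-dbsystem -n mydbsystem
odacli delete-dbsystem -n mydbsystem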

What’s also new is the internal database dedicated to ODA repository. It has now switched from JavaDB to MySQL, but actually it wouldn’t change anything for you because you’re not supposed to access to this database.

The Database Security Assessment Tool is a new item available in the ODA GUI, dedicated to discovering security risks on your databases: it's probably a nice addition and definitely useful.

odacli restore-archivelog is the only new odacli feature apart from the dbsystems: it allows you to restore a range of archivelogs. It's probably helpful from time to time if you make use of the odacli backup features. If you're used to making backups with RMAN directly, you'll probably never use it.

And that’s it regarding new features. And it’s not bad because it’s probably a mature patch, 19c being available on ODA for 1 year now.

Still able to run older databases with 19.10?

19.10 will let you run all database versions starting from 11.2.0.4. Yes, 11gR2 is still there, and a few ODA customers in 2021 are still asking for this version. However, it's highly recommended to migrate to 19c, as it's the only version with Long Term Support available now. Deploying 19.10 and planning to migrate your databases in the coming months is definitely a good idea. With ODA you can easily migrate your databases with: odacli upgrade-database
It was supposed to be replaced by: odacli move-database
But that has not happened yet in this version.

Is it possible to upgrade to 19.10 from my current release?

You will need to already run release 19.6 or later. If your ODA is running 18.8, you will have to patch to 19.6 before applying 19.10. If your ODA is running 18.7 or an older 18.x release, an upgrade to 18.8 is mandatory before patching to 19.6 and then to 19.10. If you are using even older versions, I highly recommend a complete reimaging of your ODA: it will be easier than applying 3+ patches, and you'll benefit from a brand new and clean ODA. Patching is still a lot of work, and if you don't patch regularly, getting to the latest version can be challenging. Yes, reimaging is also a lot of work, but it always works.
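Before planning the patch path, it's worth checking which release your ODA is currently running (a quick check with a standard odacli command):

# show installed and latest available versions of the ODA components
odacli describe-component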

Conclusion

19.10 seems to be a mature release for customers using ODAs, so if you're running a previous 19.x version, don't hesitate. I will be able to try it next week and I won't fail to share my feedback.

This article Oracle Database Appliance: ODA patch 19.10 is out first appeared on the dbi services Blog.
