
Merge-Statement crashes with ORA-7445 [kdu_close] caused by Real Time Statistics?


In a recent project we migrated an Oracle database previously running on 12.1.0.2 on an Oracle Database Appliance to an Exadata X8 with DB version 19.7. Shortly after the migration, a merge statement (upsert) failed with an

ORA-07445: exception encountered: core dump [kdu_close()+107] [SIGSEGV] [ADDR:0xE0] [PC:0x1276AE6B] [Address not mapped to object] [] 

The stack looked as follows:

kdu_close - updThreePhaseExe - upsexe - opiexe - kpoal8 - opiodr - ttcpip - opitsk - opiino - opiodr - opidrv - sou2o - opimai_real - ssthrdmain - main - __libc_start_main - _start

As experienced Oracle DBAs know, an ORA-7445 error is usually caused by an Oracle bug (defect). Searching My Oracle Support didn’t reveal much for the module “kdu_close” and the associated error stack, and working on a Service Request (SR) with Oracle Support hasn’t provided a solution or workaround so far either. Checking Orafun also didn’t provide much insight about kdu_close, other than the fact that we are in the kernel data update (kdu) area of the code.

As the merge crashed at the end of its processing (from earlier successful executions we knew how long the statement usually takes), I formed the hypothesis that the issue might be related to the 19c new feature Real Time Statistics on Exadata. To verify the hypothesis, I first ran some tests with Real Time Statistics and merge statements in my environment, to see if they work as expected and whether they can be disabled with a hint:

1.) Enable Exadata Features

alter system set "_exadata_feature_on"=TRUE scope=spfile;
shutdown immediate
startup

2.) Test if a merge-statement triggers real time statistics

I set up tables tab1 and tab2 similar to the setup on Oracle-Base and ran a merge statement, which actually updates 1000 rows:
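For reference, here is a minimal sketch of such a setup, assuming simple two-column tables and 1000 rows (the original test followed the Oracle-Base article):

create table tab1 (id number, description varchar2(100));
create table tab2 (id number, description varchar2(100));

-- tab2 holds the same ids with new descriptions, so the merge updates all 1000 rows
insert into tab1 select level, 'Description '||level from dual connect by level <= 1000;
insert into tab2 select level, 'New description '||level from dual connect by level <= 1000;
commit;

exec dbms_stats.gather_table_stats(user, 'TAB1');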

Initially we just have statistics on tab1 from dbms_stats.gather_table_stats. Here, for example, the column statistics:

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:29:37
DESCRIPTION      07.08.2020 17:29:37

Then I ran the merge:

testuser1@orcl@orcl> merge
  2  into	tab1
  3  using	tab2
  4  on	(tab1.id = tab2.id)
  5  when matched then
  6  	     update set tab1.description = tab2.description
  7  WHEN NOT MATCHED THEN
  8  	 INSERT (  id, description )
  9  	 VALUES ( tab2.id, tab2.description )
 10  ;

1000 rows merged.

testuser1@orcl@orcl> commit;

Commit complete.

testuser1@orcl@orcl> exec dbms_stats.flush_database_monitoring_info;

PL/SQL procedure successfully completed.

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:29:37
DESCRIPTION      07.08.2020 17:29:37
ID               07.08.2020 17:37:34 STATS_ON_CONVENTIONAL_DML
DESCRIPTION      07.08.2020 17:37:34 STATS_ON_CONVENTIONAL_DML

So obviously Real Time Statistics gathering was triggered.
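The NOTES column also exists at table level, so the real-time table statistics can be checked the same way (a quick additional check, not part of the original test):

testuser1@orcl@orcl> select table_name, num_rows, notes from user_tab_statistics where table_name='TAB1';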

After verifying that merge statements trigger real-time statistics gathering, I disabled Real Time Statistics for that specific merge statement by adding the hint

/*+ NO_GATHER_OPTIMIZER_STATISTICS */

to it.

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:46:38
DESCRIPTION      07.08.2020 17:46:38

testuser1@orcl@orcl> merge /*+ NO_GATHER_OPTIMIZER_STATISTICS */
  2  into	tab1
  3  using	tab2
  4  on	(tab1.id = tab2.id)
  5  when matched then
  6  	     update set tab1.description = tab2.description
  7  WHEN NOT MATCHED THEN
  8  	 INSERT (  id, description )
  9  	 VALUES ( tab2.id, tab2.description )
 10  ;

1000 rows merged.

testuser1@orcl@orcl> commit;

Commit complete.

testuser1@orcl@orcl> exec dbms_stats.flush_database_monitoring_info;

PL/SQL procedure successfully completed.

testuser1@orcl@orcl> select column_name, last_analyzed, notes from user_tab_col_statistics where table_name='TAB1';

COLUMN_NAME      LAST_ANALYZED       NOTES
---------------- ------------------- ----------------------------------------------------------------
ID               07.08.2020 17:46:38
DESCRIPTION      07.08.2020 17:46:38

So the hint works as expected.

The statement of the real application was generated and could not be modified, so I had to create a SQL patch to add the hint to it at parse time:

var rv varchar2(32);
begin
   :rv:=dbms_sqldiag.create_sql_patch(sql_id=>'13szq2g6xbsg5',
                                      hint_text=>'NO_GATHER_OPTIMIZER_STATISTICS',
                                      name=>'disable_real_time_stats_on_merge',
                                      description=>'disable real time stats');
end;
/
print rv

REMARK: If a statement is no longer in the shared pool but is available in the AWR history, you may use the method below to create the SQL patch:

var rv varchar2(32);
declare
   v_sql CLOB;
begin
   select sql_text into v_sql from dba_hist_sqltext where sql_id='13szq2g6xbsg5';
   :rv:=dbms_sqldiag.create_sql_patch(
             sql_text  => v_sql,
             hint_text=>'NO_GATHER_OPTIMIZER_STATISTICS',
             name=>'disable_real_time_stats_on_merge',
             description=>'disable real time stats');
end;
/
print rv
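To verify that the patch exists, and to remove it later once Oracle delivers a fix, something like the following should work (assuming the patch name used above):

select name, status, created from dba_sql_patches where name='disable_real_time_stats_on_merge';

-- drop the patch when it is no longer needed
exec dbms_sqldiag.drop_sql_patch(name=>'disable_real_time_stats_on_merge');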

It turned out that disabling Real Time Statistics actually worked around the ORA-7445 issue. It might be a coincidence and just a positive side effect, but for the moment we can cope with it and hope that this information helps to resolve the open SR, so that we get a permanent fix from Oracle for this defect.



Oracle Database Appliance and CPU speed


Introduction

A complaint I have heard from customers about the ODA is the low core speed of the Intel Xeon processor embedded in the X8-2 servers: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz. Only 2.30GHz? Because of its comfortable number of cores (16 per processor), the cruise speed of each core is limited. Is it a problem compared to a home-made server with fewer cores?

Why is clock speed important?

As you may know, the faster a core runs, the less time it takes to complete a task. Single-core clock speed is still an important parameter for Oracle databases. Oracle’s software architecture is brilliant: automatic parallelism can dramatically reduce the time needed for some statements to complete, but the vast majority of them will be processed on a single thread. As for Standard Edition 2, parallelism does not exist in this edition, so each statement is limited to a single thread.
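For illustration, this is the kind of statement Enterprise Edition can spread over several threads with a parallel hint, while Standard Edition 2 will always run it on a single thread (big_table is a hypothetical table):

select /*+ parallel(4) */ count(*) from big_table;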

Is ODA X8-2 processor really limited to 2.3GHz?

Don’t be put off by this low CPU speed: it is actually the lowest speed the cores are guaranteed to run at. The speed of the cores can be increased by the system, depending on various parameters, and the fastest speed is 3.9GHz for this kind of CPU, which is nearly twice the base frequency. This Xeon processor, like most of its predecessors, features Turbo Boost technology, a kind of intelligent automatic overclocking.

Turbo boost technology?

As far as I know, the whole Xeon family has Turbo Boost technology. If you need more MHz than normal from time to time, your CPU speed can greatly increase, to something like 180% of its nominal speed, which is quite amazing. But why is this speed not the default speed of the cores? Simply because running all the cores at full speed has a thermal impact on the CPU itself and on the complete system. As a consequence, heating can exceed the cooling capacity and damage the hardware. To manage speed and thermal efficiency, Intel processors dynamically distribute Turbo bins, which are basically slices of MHz increase. For each CPU model, a defined number of Turbo bins is available to be given to the cores, and each core receives the same number of Turbo bins at the same time. What’s most interesting on the ODA is that this is related to enabled cores: the fewer cores are enabled on the CPU, the more Turbo bins are available for each single core.

Turbo bins and a limited number of cores

With a limited number of cores, the heating of your CPU will be quite low under normal conditions, and still low under heavy load, because the heatsink and the fans are sized for using all the cores. As a result, most of the time the Turbo bins will be allocated to your cores, and if you’re lucky, you’ll be running at full throttle: for example, instead of a 16-core CPU running at 2.3GHz, you’ll have a 4-core CPU running at 3.9GHz. Quite nice, isn’t it?

With Enterprise Edition

One of the main features of the ODA is the ability to configure the number of cores you need, and to pay the license only for these enabled cores. Most customers are only using a few cores, and that’s nice for single-threaded performance. You can expect full speed at least with 2 or 4 enabled cores.

What about Standard Edition 2?

With Standard Edition 2, you don’t need to decrease the number of cores on your server, because your license is related to the socket, not to the cores. But nothing prevents you from decreasing the core count. There should be a threshold below which fewer but faster cores benefit all of your databases. If you only have a few databases on your ODA (let’s say fewer than 10 on an X8-2M), there is no question about decreasing the number of cores: it will most probably bring you more performance. If you have many more databases, the overall performance will probably be better with all the cores running at lower speed.

And when using old software/hardware?

Turbo Boost was also available on the X7-2, but old software releases (18.x) do not seem to let the cores go faster than nominal speed. Maybe it’s due to the Linux version: the jump from Linux 6 to Linux 7, starting with the early 19.x releases, probably has something to do with it. Patching to 19.x is highly recommended on the X7 for one more reason: better performance.

Conclusion

If you’re using Standard Edition 2, don’t hesitate to decrease the number of enabled cores on your ODA: it will probably bring you a nice speed bump. If you’re using Enterprise Edition and don’t plan to use all the cores on your ODA, you will benefit from very fast cores and leverage your licenses at best. Take this with a grain of salt, as it will depend on the environment, both physical and logical, and as these conclusions come from a quite limited number of systems. Definitely, with its fast NVMe disks and these Xeon CPUs, the ODA is the perfect choice for most of us.


Oracle ADB: rename the service_name connect_data


By Franck Pachot

Since Aug. 4, 2020 we have the possibility to rename an Autonomous Database (ATP, ADW or AJD – the latter being the JSON database) on shared Exadata infrastructure (what was called ‘serverless’ last year, which is a PDB in a public CDB). As the PDB name is internal, we reference the ADB by its database name, which is actually part of the service name.

I have an ATP database that I created in the Oracle Cloud Free Tier a few months ago.
I have downloaded the region and instance wallets to be used by client connections:


SQL> host grep _high /var/tmp/Wallet_DB202005052234_instance/tnsnames.ora

db202005052234_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202005052234_high.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

This is the instance wallet, which references only this database (db202005052234).


SQL> host grep _high /var/tmp/Wallet_DB202005052234_region/tnsnames.ora

db202005052234_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202005052234_high.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))
db202003061855_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202003061855_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

This one also contains the service of my other database in the same region.

I connect using this wallet:


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
Connected.

SQL> select name,network_name,creation_date,pdb from v$services;

                                                          NAME                                                   NETWORK_NAME          CREATION_DATE                               PDB
______________________________________________________________ ______________________________________________________________ ______________________ _________________________________
JGP1NYC204PDPJC_DB202005052234_high.atp.oraclecloud.com        JGP1NYC204PDPJC_DB202005052234_high.atp.oraclecloud.com        2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_tpurgent.atp.oraclecloud.com    JGP1NYC204PDPJC_DB202005052234_tpurgent.atp.oraclecloud.com    2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_low.atp.oraclecloud.com         JGP1NYC204PDPJC_DB202005052234_low.atp.oraclecloud.com         2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_tp.atp.oraclecloud.com          JGP1NYC204PDPJC_DB202005052234_tp.atp.oraclecloud.com          2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
jgp1nyc204pdpjc_db202005052234                                 jgp1nyc204pdpjc_db202005052234                                 2020-08-13 09:02:02    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_medium.atp.oraclecloud.com      JGP1NYC204PDPJC_DB202005052234_medium.atp.oraclecloud.com      2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234

Here are all the registered services: LOW/MEDIUM/HIGH/TP/TP_URGENT for my connections, plus the one named after the PDB.

Now from the Cloud Console I rename the database:

You can see that the “display name” (DB 202008131439) didn’t change but the “Database name” has been renamed (from “DB202008131439” to “FRANCK”).


SQL> select name,network_name,creation_date,pdb from v$services;

Error starting at line : 1 in command -
select name,network_name,creation_date,pdb from v$services
Error at Command Line : 1 Column : 1
Error report -
SQL Error: No more data to read from socket
SQL>

My connection has been canceled. I need to connect again.


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
Aug 13, 2020 10:13:41 AM oracle.net.resolver.EZConnectResolver parseExtendedProperties
SEVERE: Extended settings parsing failed.
java.lang.RuntimeException: Unable to parse url "/var/tmp/Wallet_DB202005052234_instance:1521/DB202005052234_tp?TNS_ADMIN"
        at oracle.net.resolver.EZConnectResolver.parseExtendedProperties(EZConnectResolver.java:408)
        at oracle.net.resolver.EZConnectResolver.parseExtendedSettings(EZConnectResolver.java:366)
        at oracle.net.resolver.EZConnectResolver.parse(EZConnectResolver.java:171)
        at oracle.net.resolver.EZConnectResolver.(EZConnectResolver.java:130)
        at oracle.net.resolver.EZConnectResolver.newInstance(EZConnectResolver.java:139)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:669)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:562)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.connect(SQLPLUS.java:5324)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.logConnectionURL(SQLPLUS.java:5418)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.logConnectionURL(SQLPLUS.java:5342)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.getConnection(SQLPLUS.java:5154)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.runConnect(SQLPLUS.java:2414)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.run(SQLPLUS.java:220)
        at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.runSQLPLUS(ScriptRunner.java:425)
        at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.run(ScriptRunner.java:262)
        at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:344)
        at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:227)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.process(SqlCli.java:410)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.processLine(SqlCli.java:421)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.startSQLPlus(SqlCli.java:1179)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.main(SqlCli.java:502)

  USER          = admin
  URL           = jdbc:oracle:thin:@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
  Error Message = Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
  USER          = admin
  URL           = jdbc:oracle:thin:@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance:1521/DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
  Error Message = IO Error: Invalid connection string format, a valid format is: "host:port:sid"

Warning: You are no longer connected to ORACLE.
SQL>

The service is no longer known, which makes sense because renaming the database actually renames its services.

The Oracle documentation says that we have to download the wallet again after renaming the database. But that’s not very agile. Let’s rename the service in the tnsnames.ora:


SQL> host sed -ie s/_db202005052234/_FRANCK/g /var/tmp/Wallet_DB202005052234_instance/tnsnames.ora

This changes only the SERVICE_NAME in the CONNECT_DATA but not the tnsnames.ora entry name, so I can keep using the same connection string.


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance

SQL> select name,network_name,creation_date,pdb from v$services;

                                                  NAME                                           NETWORK_NAME          CREATION_DATE                       PDB
______________________________________________________ ______________________________________________________ ______________________ _________________________
JGP1NYC204PDPJC_FRANCK_high.atp.oraclecloud.com        JGP1NYC204PDPJC_FRANCK_high.atp.oraclecloud.com        2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_tp.atp.oraclecloud.com          JGP1NYC204PDPJC_FRANCK_tp.atp.oraclecloud.com          2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_medium.atp.oraclecloud.com      JGP1NYC204PDPJC_FRANCK_medium.atp.oraclecloud.com      2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
jgp1nyc204pdpjc_franck                                 jgp1nyc204pdpjc_franck                                 2020-08-13 10:05:58    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_low.atp.oraclecloud.com         JGP1NYC204PDPJC_FRANCK_low.atp.oraclecloud.com         2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_tpurgent.atp.oraclecloud.com    JGP1NYC204PDPJC_FRANCK_tpurgent.atp.oraclecloud.com    2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK

Using the new SERVICE_NAME is sufficient. As you can see above, some autonomous magic remains: the new services still have the old creation date.
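A quick way to double-check which service the session actually uses is to ask the session itself:

SQL> select sys_context('userenv','service_name') from dual;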

Note that you should follow the documentation: download the wallet again and change your connection string. There is probably a reason behind this. But autonomous or not, I like to understand what I do, and I don’t see any reason to change everything when renaming a service.


Oracle Data Pump Integration for Table instantiation with Oracle Golden Gate


From Oracle GoldenGate (OGG) version 12.2 onwards, there is a transparent integration of OGG with Oracle Data Pump, as explained in Document ID 1276058.1.

The CSN of each table is captured during an Oracle Data Pump export. On import, the CSN is applied to system tables and views on the target database. These views and system tables are then referenced by the Replicat when applying data to the target database.

With this 12.2 feature, administrators no longer need to know which CSN the Replicat should be started with: the Replicat handles it automatically when the Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING is enabled. It also avoids having to specify an individual MAP with the @FILTER(@GETENV(‘TRANSACTION’,’CSN’)) or HANDLECOLLISIONS clause for each imported table.

Let’s see how it works:

Create a new schema DEMO and a new table in the source database:

oracle@ora-gg-s-2: [DB1] sqlplus / as sysdba
SQL> grant create session to DEMO identified by toto;

Grant succeeded.

SQL> grant resource to DEMO;

Grant succeeded.

SQL> alter user demo quota unlimited on users;

User altered.

SQL> create table DEMO.ADDRESSES as select * from SOE.ADDRESSES;

Table created.

Stop the Extract processes:

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1] ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 19.1.0.0.200414 OGGCORE_19.1.0.0.0OGGBP_PLATFORMS_200427.2331_FBO
Linux, x64, 64bit (optimized), Oracle 19c on Apr 28 2020 17:41:48
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.



GGSCI (ora-gg-s-2) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTRSOE     00:00:00      00:00:08
EXTRACT     RUNNING     PUMPSOE     00:00:00      00:00:01


GGSCI (ora-gg-s-2) 2> stop extract *

Sending STOP request to EXTRACT EXTRSOE ...
Request processed.

Sending STOP request to EXTRACT PUMPSOE ...
Request processed.

GGSCI (ora-gg-s-2) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     EXTRSOE     00:00:00      00:00:07
EXTRACT     STOPPED     PUMPSOE     00:00:00      00:00:07


GGSCI (ora-gg-s-2) 5>

Stop the Replicat process:

GGSCI (ora-gg-t-2) 4> stop replicat replsoe

Sending STOP request to REPLICAT REPLSOE ...

GGSCI (ora-gg-t-2) 5> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    STOPPED     REPLSOE     00:00:00      00:00:00


GGSCI (ora-gg-t-2) 9>

Edit the Extract parameter files and add the new table:

GGSCI (ora-gg-s-2) 1> edit params EXTRSOE
Table DEMO.ADDRESSES;
GGSCI (ora-gg-s-2) 1> edit params PUMPSOE
Table DEMO.ADDRESSES;

Add schematrandata for the schema DEMO:

GGSCI (ora-gg-s-2) 5> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (ora-gg-s-2 as ggadmin@DB1) 6> add schematrandata DEMO

2020-08-19 21:25:47  INFO    OGG-01788  SCHEMATRANDATA has been added on schema "DEMO".

2020-08-19 21:25:47  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema "DEMO".

2020-08-19 21:25:47  INFO    OGG-10154  Schema level PREPARECSN set to mode NOWAIT on schema "DEMO".

2020-08-19 21:25:49  INFO    OGG-10471  ***** Oracle Goldengate support information on table DEMO.ADDRESSES *****
Oracle Goldengate support native capture on table DEMO.ADDRESSES.
Oracle Goldengate marked following column as key columns on table DEMO.ADDRESSES: ADDRESS_ID, CUSTOMER_ID, DATE_CREATED, HOUSE_NO_OR_NAME, STREET_NAME, TOWN, COUNTY, COUNTRY, POST_CODE, ZIP_CODE
No unique key is defined for table DEMO.ADDRESSES.

GGSCI (ora-gg-s-2 as ggadmin@DB1) 7> info schematrandata DEMO

2020-08-19 21:25:54  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema "DEMO".

2020-08-19 21:25:54  INFO    OGG-01980  Schema level supplemental logging is enabled on schema "DEMO" for all scheduling columns.

2020-08-19 21:25:54  INFO    OGG-10462  Schema "DEMO" have 1 prepared tables for instantiation.

GGSCI (ora-gg-s-2 as ggadmin@DB1) 8>

The source system tables are automatically prepared when issuing the ADD TRANDATA / ADD SCHEMATRANDATA command.
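The preparation can also be verified from the database side; a hedged check, assuming the usual capture views:

SQL> select * from dba_capture_prepared_schemas where schema_name='DEMO';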

Start and check the Extracts:

GGSCI (ora-gg-s-2 as ggadmin@DB1) 8> start extract *

Sending START request to MANAGER ...
EXTRACT EXTRSOE starting

Sending START request to MANAGER ...
EXTRACT PUMPSOE starting


GGSCI (ora-gg-s-2 as ggadmin@DB1) 9> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTRSOE     00:00:00      00:19:51
EXTRACT     RUNNING     PUMPSOE     00:00:00      00:19:51


GGSCI (ora-gg-s-2 as ggadmin@DB1) 10> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTRSOE     00:00:00      00:00:00
EXTRACT     RUNNING     PUMPSOE     00:00:00      00:00:01

Let’s update the source table DEMO.ADDRESSES:

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1] sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 19 21:34:19 2020
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> update DEMO.ADDRESSES set STREET_NAME= 'Demo Street is open' where ADDRESS_ID=1000;

1 row updated.

SQL> commit;

Commit complete.

Let’s export the DEMO schema:

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1] expdp "'/ as sysdba'" dumpfile=export_tables_DEMO.dmp \
> logfile=export_tables_DEMO.log \
> schemas=demo \
>

Export: Release 19.0.0.0.0 - Production on Wed Aug 19 21:37:09 2020
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" dumpfile=export_tables_DEMO.dmp logfile=export_tables_DEMO.log schemas=demo
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "DEMO"."ADDRESSES"                          35.24 MB  479277 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/DB1/dpdump/export_tables_DEMO.dmp
Job "SYS"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Aug 19 21:37:43 2020 elapsed 0 00:00:28

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1]

The dba_capture_prepared_tables view is not populated until the first export of the tables. The SCN it shows is the smallest system change number (SCN) for which the table can be instantiated; it is not the export SCN.

SQL> select table_name, scn from dba_capture_prepared_tables where table_owner = 'DEMO' ;

TABLE_NAME   SCN
--------------------
ADDRESSES    2989419

Let’s copy the dump file to the target database:

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1] scp \
> /u01/app/oracle/admin/DB1/dpdump/export_tables_DEMO.dmp \
> oracle@ora-gg-t-2:/u01/app/oracle/admin/DB2/dpdump
oracle@ora-gg-t-2's password:
export_tables_DEMO.dmp                                                            100%   36MB 120.8MB/s   00:00
oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1]

Let’s import the new table into the target database:

oracle@ora-gg-t-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB2] impdp system/manager \
> dumpfile=export_tables_DEMO.dmp \
> logfile=impdemo_tables.log \
>

Import: Release 19.0.0.0.0 - Production on Wed Aug 19 21:45:18 2020
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** dumpfile=export_tables_DEMO.dmp logfile=impdemo_tables.log
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "DEMO"."ADDRESSES"                          35.24 MB  479277 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Wed Aug 19 21:45:46 2020 elapsed 0 00:00:26

oracle@ora-gg-t-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB2]

The Data Pump import populates system tables and views with the instantiation CSNs:

SQL> select source_object_name, instantiation_scn, ignore_scn from dba_apply_instantiated_objects where source_object_owner = 'DEMO' ;

SOURCE_OBJECT_NAME INSTANTIATION_SCN IGNORE_SCN
-----------------------------------------------
ADDRESSES          2995590

Let’s update the source table:

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1] sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 19 21:48:33 2020
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> update DEMO.ADDRESSES set STREET_NAME= 'Demo Street is open' where ADDRESS_ID=1001;

1 row updated.

SQL> commit;

Commit complete.

Let’s check the transactions that occurred on the source table DEMO.ADDRESSES:

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1] ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 19.1.0.0.200414 OGGCORE_19.1.0.0.0OGGBP_PLATFORMS_200427.2331_FBO
Linux, x64, 64bit (optimized), Oracle 19c on Apr 28 2020 17:41:48
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.

GGSCI (ora-gg-s-2) 1> stats extract extrsoe table DEMO.ADDRESSES

Sending STATS request to EXTRACT EXTRSOE ...

Start of Statistics at 2020-08-19 21:50:15.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                         0.00
        Mapped operations                                  0.00
        Unmapped operations                                0.00
        Other operations                                   0.00
        Excluded operations                                0.00

Output to /u11/app/goldengate/data/DB1/es:

Extracting from DEMO.ADDRESSES to DEMO.ADDRESSES:

*** Total statistics since 2020-08-19 21:34:35 ***
        Total inserts                                      0.00
        Total updates                                      2.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Daily statistics since 2020-08-19 21:34:35 ***
        Total inserts                                      0.00
        Total updates                                      2.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Hourly statistics since 2020-08-19 21:34:35 ***
        Total inserts                                      0.00
        Total updates                                      2.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Latest statistics since 2020-08-19 21:34:35 ***
        Total inserts                                      0.00
        Total updates                                      2.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

Let’s modify the Replicat parameter file to add a MAP statement for the new table DEMO.ADDRESSES, plus the parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING.

Replicat REPLSOE
DBOPTIONS INTEGRATEDPARAMS ( parallelism 6 )
DISCARDFILE /u10/app/goldengate/product/19.1.0.0.4/gg_1/dirrpt/REPLSOE_discard.txt, append, megabytes 10
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
USERIDALIAS ggadmin
MAP SOE.*, TARGET SOE.* ;
--MAP DEMO.ADDRESSES ,TARGET DEMO.ADDRESSES,FILTER ( @GETENV ('TRANSACTION', 'CSN') > 2908627) ;
MAP DEMO.ADDRESSES, TARGET DEMO.ADDRESSES;

I commented out the old method, where we had to specify the CSN used by the export. With DBOPTIONS ENABLE_INSTANTIATION_FILTERING, there is no need to specify the CSN.

Let’s start the Replicat process:

GGSCI (ora-gg-t-2) 2> start replicat REPLSOE

Sending START request to MANAGER ...
REPLICAT REPLSOE starting


GGSCI (ora-gg-t-2) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REPLSOE     00:00:00      00:41:37


GGSCI (ora-gg-t-2) 4> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REPLSOE     00:00:00      00:00:01


When started, the Replicat queries the instantiation CSN for any new mapping and filters records accordingly: DDL and DML records below each table’s instantiation CSN are filtered out. The report file shows the table name and the CSN from which the Replicat will start applying data:

2020-08-19 21:56:05 INFO OGG-10155 Instantiation CSN filtering is enabled on table DEMO.ADDRESSES at CSN 2,995,590.

Let’s wait until the lag is resolved and check the transactions that occurred on the table DEMO.ADDRESSES:

GGSCI (ora-gg-t-2) 6> stats replicat REPLSOE ,table demo.addresses

Sending STATS request to REPLICAT REPLSOE ...

Start of Statistics at 2020-08-19 21:58:32.


Integrated Replicat Statistics:

        Total transactions                                 1.00
        Redirected                                         0.00
        Replicated procedures                              0.00
        DDL operations                                     0.00
        Stored procedures                                  0.00
        Datatype functionality                             0.00
        Operation type functionality                       0.00
        Event actions                                      0.00
        Direct transactions ratio                          0.00%

Replicating from DEMO.ADDRESSES to DEMO.ADDRESSES:

*** Total statistics since 2020-08-19 21:56:05 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Daily statistics since 2020-08-19 21:56:05 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Hourly statistics since 2020-08-19 21:56:05 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Latest statistics since 2020-08-19 21:56:05 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total upserts                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.

Let’s check whether the data is synchronized:

On the source:

oracle@ora-gg-s-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB1] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 19 22:00:52 2020
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> select street_name from demo.ADDRESSES where ADDRESS_ID=1000;

STREET_NAME
------------------------------------------------------------
Demo Street is open

SQL> select street_name from demo.ADDRESSES where ADDRESS_ID=1001;

STREET_NAME
------------------------------------------------------------
Demo Street is open

On the target:

oracle@ora-gg-t-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB2] sqh

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 19 22:02:44 2020
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> select street_name from demo.ADDRESSES where ADDRESS_ID=1000;

STREET_NAME
------------------------------------------------------------
Demo Street is open

SQL> select street_name from demo.ADDRESSES where ADDRESS_ID=1001;

STREET_NAME
------------------------------------------------------------
Demo Street is open

The DEMO.ADDRESSES tables on the source and target databases have identical data.

Once the instantiation is done, DBOPTIONS ENABLE_INSTANTIATION_FILTERING is no longer required and can be removed from the parameter file:

GGSCI (ora-gg-t-2)> edit params REPLSOE
... 
MAP demo.addresses ,TARGET demo.addresses;

Restart the Replicat:

GGSCI (ora-gg-t-2) 9> stop replicat replsoe

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    STOPPED     REPLSOE     00:00:00      00:00:01

GGSCI (ora-gg-t-2) 10> start replicat replsoe

Sending START request to MANAGER ...
REPLICAT REPLSOE starting


GGSCI (ora-gg-t-2) 11> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REPLSOE     00:00:00      00:00:00

Let’s do a last test. Update a row on the source table:

SQL> update DEMO.ADDRESSES set STREET_NAME= 'test 1' where ADDRESS_ID=800;

1 row updated.

SQL> commit;

Commit complete.

Let’s check the target database:

oracle@ora-gg-t-2:/u10/app/goldengate/product/19.1.0.0.4/gg_1/ [DB2] sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Aug 19 22:11:24 2020
Version 19.4.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> select street_name from demo.ADDRESSES where ADDRESS_ID=800;

STREET_NAME
------------------------------------------------------------
test 1

Conclusion

  • The parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING spares the GoldenGate administrator from having to find the CSN used for the initial load.


SQL Server on Oracle Cloud


By Franck Pachot

You can create a VM with SQL Server running in the Oracle Cloud. This is easy with a few clicks on the marketplace:

Here are the steps I followed:

  • Oracle Cloud -> Marketplace -> Application -> Category -> Database management
  • You have multiple flavors. I’ve chosen the latest and cheapest: “Microsoft SQL 2016 Standard with Windows Server 2016 Standard”
  • Select the compartment and “Launch Instance”
  • Choose a name, and the network

This is fast, but don’t be as impatient as I am: the password is displayed only after a while.

After a few minutes, you have the user (opc) and password (mine was ‘BNG8lsxsD6jrD’), which you will have to change at the first connection. This user allows you to connect with Remote Desktop, which means you have to open TCP port 3389:

  • You find your instance in Oracle Cloud -> Compute -> Instances (don’t forget that this is just easy provisioning for a database running on IaaS; it is not a PaaS managed service)
  • Subnet -> Security List -> Add Ingress Rules -> IP Protocol TCP, Destination port: 3389

Once this port is opened in the ingress rules, you can connect with Remote Desktop and access the machine as the user opc, which is a local administrator. You can install anything there.

There’s something that quickly annoys me when I want to install something there: Internet Explorer and its “enhanced security configuration”. I disable it and take responsibility for what I download there:

SSMS, the SQL Server Management Studio, is not installed on the server. You have the command line with “sqlcmd” and the SQL Server Configuration Manager, where you can verify that TCP access to the database is enabled, on the default port 1433:
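A quick way to check the port from the server itself is to force a TCP connection with sqlcmd (the tcp: prefix bypasses shared memory) and query a connection DMV — a sketch, assuming a default instance:

C:\Windows\system32>sqlcmd -S tcp:localhost
1> select local_tcp_port from sys.dm_exec_connections where session_id = @@SPID;
2> go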

I have SSMS installed on my laptop, and as I’ve created this VM on the public subnet, I open the ingress TCP port 1433:

But that’s not sufficient, because the installation from the marketplace does not open this port. You also need to open it in the Windows firewall:

And… one more thing: the installation from the marketplace allows only Windows Authentication to the database, and if you don’t share a domain, you can’t connect remotely with it.

I’ve created a login:

Microsoft Windows [Version 10.0.14393]
(c) 2016 Microsoft Corporation. All rights reserved.

C:\Windows\system32>sqlcmd
1> create login franck with password='Test-dbi-2020';
2> go
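A freshly created login has no permissions. For this quick test, one option is to grant it a server role — a hedged example (in a real setup you would grant only the required database permissions):

1> alter server role sysadmin add member franck;
2> go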

Then, in order to connect with a SQL Server login, I had to enable SQL Server authentication:

You may wonder how I enabled it without having SSMS first? I’ve read https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/change-server-authentication-mode?view=sql-server-ver15 which mentions:


1> EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',N'Software\Microsoft\MSSQLServer\MSSQLServer',N'LoginMode', REG_DWORD, 1
2> GO


to be run with SQLCMD.
But I thought I was cleverer, ran regedit.exe and changed HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer to add a ‘LoginMode’ key. But it didn’t work. I finally realized that the correct registry key is: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQLServer.
This is a very subtle difference, isn’t it? Ok, now I admit it: I installed SSMS on the cloud VM before finding this 😉
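To double-check the value without regedit, the (undocumented) read counterpart of the procedure above can be used — an assumption on my side, but it follows the same argument pattern:

1> EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE',N'Software\Microsoft\MSSQLServer\MSSQLServer',N'LoginMode'
2> GO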

Ok, now that port 1433 is opened in both the Windows firewall and the subnet rules, a login is created in SQL Server, and SQL Server authentication is enabled, I can connect from the SSMS on my laptop.

That’s all for the moment. I have a SQL Server operational on the Oracle Cloud. You may have seen that my instance is called TEST-REDGATE-SQL-CLONE and then you can guess what the next blog post is about…

Read the small print in the pricing conditions carefully…

I’ve created a Standard Edition here, which is $0.37/hr per OCPU; an Enterprise Edition is $1.47/hr per OCPU. But be careful with the small print:

– The minimum shape available is a 2-OCPU one
– If you stop the instance before 744 hours of usage (1 month), you still pay for 744 hours

This means that the minimal instance I’ve created to write this post will cost 0.37 * 2 * 744 = $550, even if I terminate it now. No worries: I have some credits thanks to the ACE program.

I must admit that I didn’t expect that for a VM. Upfront costs are common for bare metal, or as a counterpart to a huge discount. I usually trash and re-create an instance to test my blog post before publishing it, to be sure I didn’t miss a step. That’s what the cloud is good at, right? But given the upfront cost, I won’t this time: another instance would add another $550 to the bill.

Here is my estimated bill for one hour, taken from the Cost Explorer (2 OCPUs for the compute, 256 GB of block storage for the boot volume):

product/service  product/Description               cost/unitPrice  cost/skuUnitDescription  cost/myCost  cost/currencyCode  Monthly (744 hours)
COMPUTE          Virtual Machine Standard – X7     0.0643          OCPU Hours               0.1286       CHF                95.6784
COMPUTE          Windows OS                        0.0205          OCPU Hours               0.041        CHF                30.504
COMPUTE          Microsoft SQL Standard            0.3727          OCPU Hours               0.7454       CHF                554.5776
NETWORK          Outbound Data Transfer            0               GB Months                0            CHF                0
BLOCK_STORAGE    Block Volume – Performance Units  0.0017          GB Months                0.005849462  CHF                0.4352
BLOCK_STORAGE    Block Volume – Storage            0.0257          GB Months                0.008843011  CHF                6.5792

When I shut down the machine, only the following show up: Outbound Data Transfer (0 GB), Block Volume – Performance Units, Block Volume – Storage (256 GB), and Virtual Machine Standard – X7 (2 OCPUs). I’ll tell you, when I terminate the service, how those minimum 744 hours of “Microsoft SQL Standard” show up.


Boost your CPU speed with Standard Edition 2 on ODA


Introduction

There is no need to decrease the number of cores on your ODA when using Standard Edition 2, because your license is based on the number of physical CPUs (sockets). So why would you do that? Obviously, the more cores you have, the more performance you should get. But that simply isn’t always true.

Base CPU speed and Turbo Boost

If you didn’t read my previous blog post, please do so to understand how the Xeon processors in your ODA work: Oracle Database Appliance and CPU speed.

Basically, the Xeon processors inside your ODA can benefit from Turbo Boost technology, meaning that the CPU speed can automatically increase (and decrease) depending on various factors. So the main question is: what should I do to maximize the CPU speed and leverage the Xeon(s) inside my appliance? As you may guess, it has to do with this Turbo Boost.

What happens to my processors when all cores are enabled?

When all the cores are enabled on your ODA, as is typically the case if you are using Standard Edition 2, the CPU will most probably not run at maximum speed. On an ODA X8-2S, it means that the 16 cores will run somewhere between 2.3GHz and 3.9GHz. This is what I get on my server:

cat /proc/cpuinfo | grep MHz
cpu MHz : 2799.980
cpu MHz : 2799.993
cpu MHz : 2799.971
cpu MHz : 2799.945
cpu MHz : 2779.100
cpu MHz : 2800.195
cpu MHz : 2799.870
cpu MHz : 2799.964
cpu MHz : 2799.838
cpu MHz : 2804.276
cpu MHz : 2800.073
cpu MHz : 2791.581
cpu MHz : 2800.005
cpu MHz : 2800.106
cpu MHz : 2799.988
cpu MHz : 2800.032
cpu MHz : 2799.805
cpu MHz : 2799.913
cpu MHz : 2799.999
cpu MHz : 2799.891
cpu MHz : 2697.071
cpu MHz : 2799.752
cpu MHz : 2799.983
cpu MHz : 2799.845
cpu MHz : 2799.625
cpu MHz : 2799.864
cpu MHz : 2799.999
cpu MHz : 2631.688
cpu MHz : 2799.967
cpu MHz : 2799.815
cpu MHz : 2799.677
cpu MHz : 2799.876

All 32 threads are running at 2.8GHz. It can occasionally drop even lower.

It will be the same for an ODA X8-2M: all the cores of both processors will run at a rather low frequency (the X8-2M is basically an X8-2S with twice the amount of RAM and a second identical CPU).

What happens if I disable half of the cores?

What should we expect from disabling half of the cores? In my case, I can do that because my server is dedicated to only 3 databases, so there is no doubt I will never be able to use the 16 cores of my X8-2S. Remember that Standard Edition 2 cannot use parallelism, so having fewer cores will probably not make any difference.

odacli update-cpucore -c 8
{
"jobId" : "4cd51240-6501-4633-8575-0e0ff697bd07",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 24, 2020 12:11:22 PM CEST",
"resourceList" : [ ],
"description" : "CPU cores service update",
"updatedTime" : "August 24, 2020 12:11:22 PM CEST"
}
sleep 120; cat /proc/cpuinfo | grep MHz
cpu MHz : 3600.000
cpu MHz : 3600.160
cpu MHz : 3600.165
cpu MHz : 3600.016
cpu MHz : 3600.071
cpu MHz : 3600.137
cpu MHz : 3600.003
cpu MHz : 3600.026
cpu MHz : 3600.019
cpu MHz : 3600.037
cpu MHz : 3600.001
cpu MHz : 3599.999
cpu MHz : 3600.018
cpu MHz : 3599.990
cpu MHz : 3600.010
cpu MHz : 3599.992

The remaining cores are now running at a higher speed. But is it better? Let’s try with a basic statistics gathering:

Before decreasing (16 cores):
sqlplus / as sysdba
host date
Mon Aug 24 13:39:02 CEST 2020
exec dbms_stats.gather_database_stats;
host date
Mon Aug 24 13:40:14 CEST 2020

Statistics were completed in 34 min.

After decreasing (8 cores):
sqlplus / as sysdba
host date
Mon Aug 24 12:12:28 CEST 2020
exec dbms_stats.gather_database_stats;
host date
Mon Aug 24 12:39:59 CEST 2020

Statistics were completed in 27 min.

True, it’s better. The higher core speed made my statistics gathering 20% faster, with my cores running 23% faster: what a surprise!

Let’s try on the other ODA (located in a different site) with nearly the same database to confirm this:

Before decreasing (16 cores):
sqlplus / as sysdba
host date
Mon Aug 24 13:48:03 CEST 2020
exec dbms_stats.gather_database_stats;
host date
Mon Aug 24 14:21:49 CEST 2020

Same duration of 34 min.

After decreasing (8 cores):
sqlplus / as sysdba
host date
Mon Aug 24 14:30:10 CEST 2020
exec dbms_stats.gather_database_stats;
host date
Mon Aug 24 14:57:59 CEST 2020

27 min.

The same behaviour was observed. I now have fewer cores available on my systems, but the enabled cores are running at 3.6GHz.
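As a side note, SQL*Plus can time the call directly, instead of bracketing it with host date — a minimal alternative:

SQL> set timing on
SQL> exec dbms_stats.gather_database_stats;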

How many cores should I enable?

It mainly depends on your number of databases. If your ODA is running many more databases than the available cores, do nothing. If your database count is somewhere between 1 and half the available cores, you will surely benefit from a core decrease. The best approach is probably to proceed step by step: start by halving the enabled cores, as the biggest gain is there. On an X8-2, the jump is from 2.8 to 3.6GHz, not bad. If you divide by 2 again, you can expect to reach the maximum speed of 3.9GHz: very interesting for a small number of databases with very demanding batches. Note that you will have to use the --force option to decrease the number of cores more than once, because this feature is made for those using Enterprise Edition, and with that edition you cannot decrease the number of licenses dedicated to your ODA (you can only increase it). If you decreased too much, you can increase again without any problem (and it doesn’t require a reboot). There is no licensing problem in playing with the number of enabled cores: Standard Edition is NOT related to the number of cores. You can decide how many cores you want and configure the perfect setting for your databases.

And regarding Enterprise Edition?

The core speed will be the same, because the edition you’re using doesn’t make any difference: it’s hardware and OS related. The best setting would probably be to never enable more than half of the cores, meaning 8 cores on an X8-2S, 16 cores on an X8-2M and 32 cores on an X8-2HA (2x 16). If you plan to consolidate your databases on a small number of ODAs, maybe it’s better to distribute your Enterprise licenses across more ODAs.

Conclusion

If you’re running an ODA X7 or X8 with Standard Edition 2, and have deployed one of the 19c releases (it doesn’t work with previous versions), don’t miss this important feature to leverage your ODAs at best.


Reimaging an ODA to 19.8 Part I


Recently I was deploying some ODA X8-2M servers. To install the 19.8 software, I decided to reimage the servers with the 19.8 image. In this blog I describe the main steps I followed.

First, we need to download the following patches from the Oracle Support website:

-Patch 30403643: Oracle Database Appliance 19.8.0.0.0 OS ISO Image for all platforms
-Patch 30403673: Oracle Database Appliance GI Clone for ODACLI/DCS stack
-Patch 30403662: Oracle Database Appliance RDBMS Clone for ODACLI/DCS stack

We suppose that the ILOM is already configured and that you can connect via the ILOM. The default credentials for the ILOM are root/changeme.

The first step is to unzip patch 30403643 on your workstation. This provides the ISO file oda_bm_19.8.0.0.0_200718.iso.
The principle is to boot from this ISO image, which is done through the ILOM interface. For this, just follow the Oracle documentation:

1. Open a browser and connect to Oracle Integrated Lights Out Manager (ILOM) on Node 0 as root.
https://ilom-ip-address
2. Launch the Remote Console.
a. Expand Remote Control in the left navigation.
b. Click the Redirection tab.
c. Click Launch for the Remote Console in the Actions menu.
The state of the system determines what appears on the Console page.
3. Add the image.
a. Click the KVMS tab, then select Storage.
b. Click Add.
c. Browse to the Oracle Database Appliance Bare Metal ISO Image, highlight the image, then click Select.
d. Click Connect.
The mounting of the ISO image is successful when the Connect button changes to a Disconnect button.
e. Click OK
The CD-ROM icon in the top right corner is highlighted.
4. Configure the CD-ROM as the next boot device.
a. Expand Host Management in the left menu of the ILOM Remote Console tab.
b. Click Host Control.
c. Select CDROM from the Next Boot Device menu, then click Save.
5. Power cycle the node.
a. Click Power Control in the Host Management menu.
b. Select Power Cycle , then click Save.

Below are some screenshots after the server reboot.

The reimaging process will take some time. To follow the progress, you can open a second terminal with the ALT-F2 key and then type the command:

# cat /proc/mdstat

At the end of the reimaging, the ODA restarts. We can then verify the components of the server, connecting as root (the default password is welcome1).

Now we can configure the first network interface of the ODA. This is done with the command configure-firstnet; you will have to provide some basic information.

Now that the first network configuration is done, we can update the repository with the two following patch files:

-p30403662_198000_Linux-x86-64.zip
-p30403673_198000_Linux-x86-64.zip

You will need to transfer them to the ODA using your favorite tool (WinSCP, ftp, …). Then we can uncompress them:

[root@oak oda]# unzip p30403673_198000_Linux-x86-64.zip 
Archive:  p30403673_198000_Linux-x86-64.zip
 extracting: odacli-dcs-19.8.0.0.0-200713-GI-19.8.0.0.zip  
  inflating: README.txt     
         
[root@oak oda]# unzip p30403662_198000_Linux-x86-64.zip 
Archive:  p30403662_198000_Linux-x86-64.zip
 extracting: odacli-dcs-19.8.0.0.0-200713-DB-19.8.0.0.zip  
replace README.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
  inflating: README.txt              
[root@oak oda]#

And then update the repository with the grid stack

[root@oak oda]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda/odacli-dcs-19.8.0.0.0-200713-GI-19.8.0.0.zip 
{
  "jobId" : "e9a571ca-f2cd-46ba-b8eb-8e337dbb8375",
  "status" : "Created",
  "message" : "/tmp/oda/odacli-dcs-19.8.0.0.0-200713-GI-19.8.0.0.zip",
  "reports" : [ ],
  "createTimestamp" : "August 05, 2020 01:19:48 AM PDT",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "August 05, 2020 01:19:48 AM PDT"
}

The status of the job can be viewed using the command describe-job with the job id; it should return SUCCESS.

[root@oak oda]# /opt/oracle/dcs/bin/odacli describe-job -i  "e9a571ca-f2cd-46ba-b8eb-8e337dbb8375"

Job details                                                      
----------------------------------------------------------------
                     ID:  e9a571ca-f2cd-46ba-b8eb-8e337dbb8375
            Description:  Repository Update
                 Status:  Success
                Created:  August 5, 2020 1:19:48 AM PDT
                Message:  /tmp/oda/odacli-dcs-19.8.0.0.0-200713-GI-19.8.0.0.zip

Task Name                                Start Time                          End Time                            Status    
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@oak oda]#

Next, we update the repository with the DB stack:

[root@oak oda]# /opt/oracle/dcs/bin/odacli update-repository -f /tmp/oda/odacli-dcs-19.8.0.0.0-200713-DB-19.8.0.0.zip 
{
  "jobId" : "bb83a202-6cf5-4663-a446-c974a0f3a2a5",
  "status" : "Created",
  "message" : "/tmp/oda/odacli-dcs-19.8.0.0.0-200713-DB-19.8.0.0.zip",
  "reports" : [ ],
  "createTimestamp" : "August 05, 2020 01:21:01 AM PDT",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "August 05, 2020 01:21:01 AM PDT"
}
[root@oak oda]#

The status should return SUCCESS

[root@oak oda]# /opt/oracle/dcs/bin/odacli describe-job -i  "bb83a202-6cf5-4663-a446-c974a0f3a2a5"

Now we can create the appliance. This can be done via the web interface; ports 7070 and 7093 need to be open.
If you plan to use the web interface, just connect to the following URL as oda-admin (you will be asked to change the password):

https://ODA-host-ip-address:7093/mgmt/index.html

In my case the deployment is done using a json file. This json file can be generated from the web console. You can also find examples on the internet, but I recommend generating the file with the web console, as this reduces the risk of errors in the file.
Below is my file as an example (please replace the hidden values with your own). We suppose that the hostname of my ODA is serveroda.

[root@serveroda ~]# cat serveroda.json 
{
    "instance": {
        "instanceBaseName": "serveroda-c",
        "dbEdition": "EE",
        "objectStoreCredentials": null,
        "name": "serveroda",
        "systemPassword": "***********",
        "timeZone": "Europe/Zurich",
        "domainName": "XXXXXXXXX",
        "dnsServers": [
            "XXXXXXXX",
            "XXXXXXXX"
        ],
        "ntpServers": [
            "XXXXXXXXXXX",
            "XXXXXXXXXXX",
            "XXXXXXXXXXX"
        ],
        "isRoleSeparated": true,
        "osUserGroup": {
            "users": [
                {
                    "userName": "oracle",
                    "userRole": "oracleUser",
                    "userId": 1001
                },
                {
                    "userName": "grid",
                    "userRole": "gridUser",
                    "userId": 1000
                }
            ],
            "groups": [
                {
                    "groupName": "oinstall",
                    "groupRole": "oinstall",
                    "groupId": 1001
                },
                {
                    "groupName": "dbaoper",
                    "groupRole": "dbaoper",
                    "groupId": 1002
                },
                {
                    "groupName": "dba",
                    "groupRole": "dba",
                    "groupId": 1003
                },
                {
                    "groupName": "asmadmin",
                    "groupRole": "asmadmin",
                    "groupId": 1004
                },
                {
                    "groupName": "asmoper",
                    "groupRole": "asmoper",
                    "groupId": 1005
                },
                {
                    "groupName": "asmdba",
                    "groupRole": "asmdba",
                    "groupId": 1006
                }
            ]
        }
    },
    "nodes": [
        {
            "nodeNumber": "0",
            "nodeName": "serveroda",
            "network": [
                {
                    "ipAddress": "XXXXXXXXXX",
                    "subNetMask": "255.255.255.0",
                    "gateway": "XXXXXXXXX",
                    "nicName": "btbond1",
                    "networkType": [
                        "Public"
                    ],
                    "isDefaultNetwork": true
                }
            ]
        }
    ],
    "grid": {
        "vip": [],
        "diskGroup": [
            {
                "diskGroupName": "DATA",
                "diskPercentage": 80,
                "redundancy": "NORMAL"
            },
            {
                "diskGroupName": "RECO",
                "diskPercentage": 20,
                "redundancy": "NORMAL"
            }
        ],
        "language": "en",
        "enableAFD": "TRUE",
        "scan": null
    },
    "database": null
}
[root@serveroda ~]#

To create the appliance, we run the following command (the output is truncated):

[root@serveroda ~]# odacli create-appliance -r serveroda.json
….
…
Enter an initial password for Web Console account (oda-admin):
Confirm the password for Web Console account (oda-admin):
{
  "jobId" : "d4c6762e-7bbd-48d2-aac3-f9c975514ebd",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "August 07, 2020 09:48:50 AM UTC",
  "resourceList" : [ ],
  "description" : "Provisioning service creation",
  "updatedTime" : "August 07, 2020 09:48:50 AM UTC"
}

The status of the job should return SUCCESS. Otherwise, check the dcs-agent.log located in /opt/oracle/dcs/log.
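A simple way to watch for problems while a job runs, using the log location above (a sketch):

# Follow the DCS agent log and highlight errors in real time
tail -f /opt/oracle/dcs/log/dcs-agent.log | grep -iE 'error|fail'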

[root@serveroda0 ~]# odacli describe-job -i "d4c6762e-7bbd-48d2-aac3-f9c975514ebd" 
Job details                                                      
----------------------------------------------------------------
                     ID:  d4c6762e-7bbd-48d2-aac3-f9c975514ebd
            Description:  Provisioning service creation
                 Status:  Success
                Created:  August 7, 2020 9:48:50 AM CEST
                Message:  

Task Name                                Start Time                          End Time                            Status    
---------------------------------------- ----------------------------------- ----------------------------------- ----------
network update                           August 7, 2020 9:48:51 AM CEST      August 7, 2020 9:48:57 AM CEST      Success   
updating network                         August 7, 2020 9:48:51 AM CEST      August 7, 2020 9:48:57 AM CEST      Success   
Setting up Network                       August 7, 2020 9:48:51 AM CEST      August 7, 2020 9:48:52 AM CEST      Success   
OS usergroup 'asmdba'creation            August 7, 2020 9:48:57 AM CEST      August 7, 2020 9:48:57 AM CEST      Success   
OS usergroup 'asmoper'creation           August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
OS usergroup 'asmadmin'creation          August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
OS usergroup 'dba'creation               August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
OS usergroup 'dbaoper'creation           August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
OS usergroup 'oinstall'creation          August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
OS user 'grid'creation                   August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
OS user 'oracle'creation                 August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
Default backup policy creation           August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
Backup config metadata persist           August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
SSH equivalance setup                    August 7, 2020 9:48:58 AM CEST      August 7, 2020 9:48:58 AM CEST      Success   
Grid home creation                       August 7, 2020 9:48:59 AM CEST      August 7, 2020 9:51:35 AM CEST      Success   
Creating GI home directories             August 7, 2020 9:48:59 AM CEST      August 7, 2020 9:48:59 AM CEST      Success   
Cloning Gi home                          August 7, 2020 9:48:59 AM CEST      August 7, 2020 9:51:32 AM CEST      Success   
Updating GiHome version                  August 7, 2020 9:51:32 AM CEST      August 7, 2020 9:51:35 AM CEST      Success   
Storage discovery                        August 7, 2020 9:51:35 AM CEST      August 7, 2020 9:56:05 AM CEST      Success   
Grid stack creation                      August 7, 2020 9:56:05 AM CEST      August 7, 2020 10:09:00 AM CEST     Success   
Configuring GI                           August 7, 2020 9:56:05 AM CEST      August 7, 2020 9:57:07 AM CEST      Success   
Running GI root scripts                  August 7, 2020 9:57:07 AM CEST      August 7, 2020 10:04:45 AM CEST     Success   
Running GI config assistants             August 7, 2020 10:04:46 AM CEST     August 7, 2020 10:05:22 AM CEST     Success   
Post cluster OAKD configuration          August 7, 2020 10:09:00 AM CEST     August 7, 2020 10:11:44 AM CEST     Success   
Disk group 'RECO'creation                August 7, 2020 10:11:52 AM CEST     August 7, 2020 10:12:04 AM CEST     Success   
Register Scan and Vips to Public Network August 7, 2020 10:12:05 AM CEST     August 7, 2020 10:12:40 AM CEST     Success   
Volume 'commonstore'creation             August 7, 2020 10:12:40 AM CEST     August 7, 2020 10:13:37 AM CEST     Success   
ACFS File system 'DATA'creation          August 7, 2020 10:13:37 AM CEST     August 7, 2020 10:13:54 AM CEST     Success   
Install oracle-ahf                       August 7, 2020 10:13:54 AM CEST     August 7, 2020 10:15:24 AM CEST     Success   
Provisioning service creation            August 7, 2020 10:15:24 AM CEST     August 7, 2020 10:15:24 AM CEST     Success   
persist new agent state entry            August 7, 2020 10:15:24 AM CEST     August 7, 2020 10:15:24 AM CEST     Success   
persist new agent state entry            August 7, 2020 10:15:24 AM CEST     August 7, 2020 10:15:24 AM CEST     Success   
Restart Zookeeper and DCS Agent          August 7, 2020 10:15:24 AM CEST     August 7, 2020 10:15:25 AM CEST     Success   

After the deployment we can check the component versions:

[root@serveroda ~]# odacli describe-component
System Version  
---------------
19.8.0.0.0

Component                                Installed Version    Available Version   
---------------------------------------- -------------------- --------------------
OAK                                       19.8.0.0.0            up-to-date          
GI                                        19.8.0.0.200714       up-to-date          
DCSAGENT                                  19.8.0.0.0            up-to-date          
ILOM                                      4.0.4.38.a.r132148    4.0.4.51.r134837    
BIOS                                      52020500              52021300            
OS                                        7.8                   up-to-date          
FIRMWARECONTROLLER                        VDV1RL02              VDV1RL04            
FIRMWAREDISK                              1120                  1102                
HMP                                       2.4.5.0.1             up-to-date          

As we can see, the following components are not up to date:
-ILOM
-BIOS
-FIRMWARECONTROLLER

To update these components, in my case, we had to apply the 19.8 patch and manually patch the ILOM and the BIOS. These steps are described in Part II of this blog.

The article Reimaging an ODA to 19.8 Part I appeared first on Blog dbi services.

Reimaging an ODA to 19.8 Part II


In my previous blog, I described the steps to deploy an ODA in 19.8 by reimaging. The ODA was reimaged with the latest available version, 19.8, but some components were not up to date.

[root@serveroda ~]# odacli describe-component
System Version  
---------------
19.8.0.0.0

Component                                Installed Version    Available Version   
---------------------------------------- -------------------- --------------------
OAK                                       19.8.0.0.0            up-to-date          
GI                                        19.8.0.0.200714       up-to-date          
DCSAGENT                                  19.8.0.0.0            up-to-date          
ILOM                                      4.0.4.38.a.r132148    4.0.4.51.r134837    
BIOS                                      52020500              52021300            
OS                                        7.8                   up-to-date          
FIRMWARECONTROLLER                        VDV1RL02              VDV1RL04            
FIRMWAREDISK                              1120                  1102                
HMP                                       2.4.5.0.1             up-to-date          

The patch was downloaded and copied to the ODA, and we uncompress the two files using unzip:

[root@serveroda software_oda]# unzip p31481816_198000_Linux-x86-64_1of2.zip 
Archive:  p31481816_198000_Linux-x86-64_1of2.zip
 extracting: oda-sm-19.8.0.0.0-200718-server1of2.zip  y
  inflating: README.txt              
[root@serveroda software_oda]# ls
oda-sm-19.8.0.0.0-200718-server1of2.zip  p31481816_198000_Linux-x86-64_2of2.zip
p31481816_198000_Linux-x86-64_1of2.zip   README.txt
[root@serveroda software_oda]# unzip p31481816_198000_Linux-x86-64_2of2.zip 
Archive:  p31481816_198000_Linux-x86-64_2of2.zip
 extracting: oda-sm-19.8.0.0.0-200718-server2of2.zip  
[root@serveroda software_oda]#

We have to update the repository with the 2 archives

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/software_oda/oda-sm-19.8.0.0.0-200718-server1of2.zip 
{
  "jobId" : "1d1a36e0-4630-4f32-a880-7ee2c6d13cf7",
  "status" : "Created",
  "message" : "/u01/software_oda/oda-sm-19.8.0.0.0-200718-server1of2.zip",
  "reports" : [ ],
  "createTimestamp" : "August 18, 2020 09:40:01 AM CEST",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "August 18, 2020 09:40:01 AM CEST"
}
[root@serveroda ~]#

The status of the job should return SUCCESS

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli describe-job -i "1d1a36e0-4630-4f32-a880-7ee2c6d13cf7"

Job details                                                      
----------------------------------------------------------------
                     ID:  1d1a36e0-4630-4f32-a880-7ee2c6d13cf7
            Description:  Repository Update
                 Status:  Success
                Created:  August 18, 2020 9:40:01 AM CEST
                Message:  /u01/software_oda/oda-sm-19.8.0.0.0-200718-server1of2.zip

Task Name                                Start Time                          End Time                            Status    
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@serveroda ~]#

We do the same thing with the second archive

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli update-repository -f /u01/software_oda/oda-sm-19.8.0.0.0-200718-server2of2.zip 
{
  "jobId" : "60bd7144-d8a2-4f39-b58f-ae73dc0aecb6",
  "status" : "Created",
  "message" : "/u01/software_oda/oda-sm-19.8.0.0.0-200718-server2of2.zip",
  "reports" : [ ],
  "createTimestamp" : "August 18, 2020 09:41:34 AM CEST",
  "resourceList" : [ ],
  "description" : "Repository Update",
  "updatedTime" : "August 18, 2020 09:41:34 AM CEST"
}
[root@serveroda ~]#

And we verify that the status returns SUCCESS:

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli describe-job -i "60bd7144-d8a2-4f39-b58f-ae73dc0aecb6"

After updating the repository we update the DCS agent

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli update-dcsagent -v 19.8.0.0.0
{
  "jobId" : "2f7dbdda-3fc1-4dbf-8fe9-d14e1514679e",
  "status" : "Created",
  "message" : "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",
  "reports" : [ ],
  "createTimestamp" : "August 18, 2020 09:43:31 AM CEST",
  "resourceList" : [ ],
  "description" : "DcsAgent patching",
  "updatedTime" : "August 18, 2020 09:43:31 AM CEST"
}
[root@serveroda ~]#

The status should return SUCCESS

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli describe-job -i "2f7dbdda-3fc1-4dbf-8fe9-d14e1514679e"

Job details                                                      
----------------------------------------------------------------
                     ID:  2f7dbdda-3fc1-4dbf-8fe9-d14e1514679e
            Description:  DcsAgent patching
                 Status:  Success
                Created:  August 18, 2020 9:43:31 AM CEST
                Message:  

Task Name                                Start Time                          End Time                            Status    
---------------------------------------- ----------------------------------- ----------------------------------- ----------
dcs-agent upgrade  to version 19.8.0.0.0 August 18, 2020 9:43:31 AM CEST     August 18, 2020 9:43:31 AM CEST     Success   
Update System version                    August 18, 2020 9:43:31 AM CEST     August 18, 2020 9:43:31 AM CEST     Success   

[root@serveroda ~]#

The DCS components should also be updated:

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.8.0.0.0
{
  "jobId" : "f4359690-a910-484f-9382-b3b94b590c6b",
  "status" : "Success",
  "message" : null,
  "reports" : null,
  "createTimestamp" : "August 18, 2020 09:47:03 AM CEST",
  "description" : "Job completed and is not part of Agent job list",
  "updatedTime" : "August 18, 2020 09:47:03 AM CEST"
}
[root@serveroda ~]#

Just note that a describe-job for the job id above will return an error; having SUCCESS in the status output above is enough.
Now we can generate a prepatch report to detect potential errors before patching:

[root@serveroda ~]# /opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.8.0.0.0

Job details                                                      
----------------------------------------------------------------
                     ID:  06fdc0c4-7a1e-45aa-b930-dce35ab978a9
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]
                 Status:  Created
                Created:  August 18, 2020 9:49:39 AM CEST
                Message:  Use 'odacli describe-prepatchreport -i 06fdc0c4-7a1e-45aa-b930-dce35ab978a9' to check details of results

Task Name                                Start Time                          End Time                            Status    
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@serveroda ~]#

As indicated in the output above, we run describe-prepatchreport to get the results:

[root@serveroda ~]# odacli describe-prepatchreport -i 06fdc0c4-7a1e-45aa-b930-dce35ab978a9

Patch pre-check report                                           
------------------------------------------------------------------------
                 Job ID:  06fdc0c4-7a1e-45aa-b930-dce35ab978a9
            Description:  Patch pre-checks for [OS, ILOM, GI, ORACHKSERVER]
                 Status:  FAILED
                Created:  August 18, 2020 9:49:39 AM CEST
                 Result:  One or more pre-checks failed for [ORACHK]

Node Name       
---------------
serveroda 

Pre-Check                      Status   Comments                              
------------------------------ -------- -------------------------------------- 
__OS__ 
Validate supported versions     Success   Validated minimum supported versions. 
Validate patching tag           Success   Validated patching tag: 19.8.0.0.0.   
Is patch location available     Success   Patch location is available.          
Verify OS patch                 Success   There are no packages available for   
                                          an update                             
Validate command execution      Success   Validated command execution           

__ILOM__ 
Validate supported versions     Success   Validated minimum supported versions. 
Validate patching tag           Success   Validated patching tag: 19.8.0.0.0.   
Is patch location available     Success   Patch location is available.          
Checking Ilom patch Version     Success   Successfully verified the versions    
Patch location validation       Success   Successfully validated location       
Validate command execution      Success   Validated command execution           

__GI__ 
Validate supported GI versions  Success   Validated minimum supported versions. 
Validate available space        Success   Validated free space under /u01       
Is clusterware running          Success   Clusterware is running                
Validate patching tag           Success   Validated patching tag: 19.8.0.0.0.   
Is system provisioned           Success   Verified system is provisioned        
Validate ASM in online          Success   ASM is online                         
Validate minimum agent version  Success   GI patching enabled in current        
                                          DCSAGENT version                      
Validate GI patch metadata      Success   Validated patching tag: 19.8.0.0.0.   
Validate clones location exist  Success   Validated clones location             
Is patch location available     Success   Patch location is available.          
Patch location validation       Success   Successfully validated location       
Patch verification              Success   Patch 31281355 already applied on gi  
                                          home /u01/app/19.0.0.0/grid on node   
                                          serveroda                              
Validate Opatch update          Success   Not updating opatch as patch already  
                                          applied on node serveroda              
Patch conflict check            Success   Not analyzing patch as patch already  
                                          applied on node serveroda              
Validate command execution      Success   Validated command execution           

__ORACHK__ 
Running orachk                  Failed    Orachk validation failed: .           
Validate command execution      Success   Validated command execution           
CSS disktimeout                 Failed    CSS disktimeout is not set to the     
                                          default value of 200                  
Verify Cluster                  Failed    Cluster Synchronization Services      
Synchronization Services                  (CSS) misscount not set to            
(CSS) misscount value                     recommended value                     
Software home                   Failed    Software home check failed            

As we can see, there are some errors because the system expects default values for some parameters. For example, the CSS disktimeout is set to 251 instead of the value of 200 expected by the prepatch check:

[root@serveroda log]# crsctl get css disktimeout 
CRS-4678: Successful get disktimeout 251 for Cluster Synchronization Services.

I considered that these errors could be ignored, and decided to continue by updating the server with the patch.
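If you would rather make the pre-checks pass, the CSS parameters can in principle be reset to the values the report expects (a sketch; first verify that the non-default values were not set on purpose on your appliance):

# As root, from the grid home
crsctl set css disktimeout 200
# 30 is the usual Linux default; the report does not state the recommended value
crsctl set css misscount 30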

[root@serveroda log]# /opt/oracle/dcs/bin/odacli update-server -v 19.8.0.0.0
{
  "jobId" : "de8ce20e-81b1-45a5-bd21-7271dc4a8d7f",
  "status" : "Created",
  "message" : "Success of server update will trigger reboot of the node after 4-5 minutes. Please wait until the node reboots.",
  "reports" : [ ],
  "createTimestamp" : "August 18, 2020 10:19:46 AM CEST",
  "resourceList" : [ ],
  "description" : "Server Patching",
  "updatedTime" : "August 18, 2020 10:19:46 AM CEST"
}
[root@serveroda log]#

A few minutes later, the patch had been successfully applied:

[root@serveroda log]# odacli describe-job -i "de8ce20e-81b1-45a5-bd21-7271dc4a8d7f"

Job details                                                      
----------------------------------------------------------------
                     ID:  de8ce20e-81b1-45a5-bd21-7271dc4a8d7f
            Description:  Server Patching
                 Status:  Success
                Created:  August 18, 2020 10:19:46 AM CEST
                Message:  

Task Name                                Start Time                          End Time                            Status    
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                August 18, 2020 10:19:53 AM CEST    August 18, 2020 10:19:54 AM CEST    Success   
dcs-controller upgrade                   August 18, 2020 10:19:54 AM CEST    August 18, 2020 10:19:54 AM CEST    Success   
Patch location validation                August 18, 2020 10:19:54 AM CEST    August 18, 2020 10:19:54 AM CEST    Success   
dcs-cli upgrade                          August 18, 2020 10:19:54 AM CEST    August 18, 2020 10:19:54 AM CEST    Success   
Creating repositories using yum          August 18, 2020 10:19:54 AM CEST    August 18, 2020 10:19:56 AM CEST    Success   
Updating YumPluginVersionLock rpm        August 18, 2020 10:19:56 AM CEST    August 18, 2020 10:19:56 AM CEST    Success   
Applying OS Patches                      August 18, 2020 10:19:56 AM CEST    August 18, 2020 10:19:57 AM CEST    Success   
Creating repositories using yum          August 18, 2020 10:19:57 AM CEST    August 18, 2020 10:19:57 AM CEST    Success   
Applying HMP Patches                     August 18, 2020 10:19:57 AM CEST    August 18, 2020 10:19:58 AM CEST    Success   
Patch location validation                August 18, 2020 10:19:58 AM CEST    August 18, 2020 10:19:58 AM CEST    Success   
oda-hw-mgmt upgrade                      August 18, 2020 10:19:58 AM CEST    August 18, 2020 10:19:58 AM CEST    Success   
OSS Patching                             August 18, 2020 10:19:58 AM CEST    August 18, 2020 10:19:58 AM CEST    Success   
Applying Firmware Disk Patches           August 18, 2020 10:19:58 AM CEST    August 18, 2020 10:20:05 AM CEST    Success   
Applying Firmware Expander Patches       August 18, 2020 10:20:05 AM CEST    August 18, 2020 10:20:10 AM CEST    Success   
Applying Firmware Controller Patches     August 18, 2020 10:20:10 AM CEST    August 18, 2020 10:20:16 AM CEST    Success   
Checking Ilom patch Version              August 18, 2020 10:20:17 AM CEST    August 18, 2020 10:20:19 AM CEST    Success   
Patch location validation                August 18, 2020 10:20:19 AM CEST    August 18, 2020 10:20:20 AM CEST    Success   
Save password in Wallet                  August 18, 2020 10:20:21 AM CEST    August 18, 2020 10:20:21 AM CEST    Success   
Apply Ilom patch                         August 18, 2020 10:20:21 AM CEST    August 18, 2020 10:22:16 AM CEST    Success   
Copying Flash Bios to Temp location      August 18, 2020 10:22:16 AM CEST    August 18, 2020 10:22:16 AM CEST    Success   
Starting the clusterware                 August 18, 2020 10:22:16 AM CEST    August 18, 2020 10:24:38 AM CEST    Success   
clusterware patch verification           August 18, 2020 10:24:39 AM CEST    August 18, 2020 10:24:49 AM CEST    Success   
Patch location validation                August 18, 2020 10:24:49 AM CEST    August 18, 2020 10:24:49 AM CEST    Success   
Opatch update                            August 18, 2020 10:24:49 AM CEST    August 18, 2020 10:24:49 AM CEST    Success   
Patch conflict check                     August 18, 2020 10:24:49 AM CEST    August 18, 2020 10:24:49 AM CEST    Success   
clusterware upgrade                      August 18, 2020 10:24:49 AM CEST    August 18, 2020 10:24:49 AM CEST    Success   
Updating GiHome version                  August 18, 2020 10:24:49 AM CEST    August 18, 2020 10:24:52 AM CEST    Success   
Update System version                    August 18, 2020 10:24:52 AM CEST    August 18, 2020 10:24:52 AM CEST    Success   
preRebootNode Actions                    August 18, 2020 10:24:52 AM CEST    August 18, 2020 10:24:52 AM CEST    Success   
Reboot Ilom                              August 18, 2020 10:24:52 AM CEST    August 18, 2020 10:24:52 AM CEST    Success   

[root@serveroda log]# 

Then we update the storage

[root@serveroda log]# /opt/oracle/dcs/bin/odacli update-storage -v 19.8.0.0.0
{
  "jobId" : "c6e86978-8869-4b24-8d3f-44a8c988c5a4",
  "status" : "Created",
  "message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",
  "reports" : [ ],
  "createTimestamp" : "August 18, 2020 10:27:31 AM CEST",
  "resourceList" : [ ],
  "description" : "Storage Firmware Patching",
  "updatedTime" : "August 18, 2020 10:27:31 AM CEST"
}
[root@serveroda log]#

The status was SUCCESS and the server restarted:

[root@serveroda log]# odacli describe-job -i "c6e86978-8869-4b24-8d3f-44a8c988c5a4"

Job details                                                      
----------------------------------------------------------------
                     ID:  c6e86978-8869-4b24-8d3f-44a8c988c5a4
            Description:  Storage Firmware Patching
                 Status:  Success
                Created:  August 18, 2020 10:27:31 AM CEST
                Message:  

Task Name                                Start Time                          End Time                            Status    
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           August 18, 2020 10:27:36 AM CEST    August 18, 2020 10:27:40 AM CEST    Success   
Applying Firmware Controller Patches     August 18, 2020 10:27:40 AM CEST    August 18, 2020 10:31:38 AM CEST    Success   
preRebootNode Actions                    August 18, 2020 10:31:38 AM CEST    August 18, 2020 10:31:38 AM CEST    Success   
Reboot Ilom                              August 18, 2020 10:31:38 AM CEST    August 18, 2020 10:31:38 AM CEST    Success   

[root@serveroda log]# [  OK  ] Started Show Plymouth Power Off Screen.
[  OK  ] Stopped (null).
[  OK  ] Stopped Oracle High Availability Services.
[  OK  ] Stopped Availability of block devices.
[  OK  ] Started Restore /run/initramfs.
[  OK  ] Stopped Dynamic System Tuning Daemon.

After the reboot I connected and noticed that all components had been updated except the BIOS and the ILOM. I then opened an SR with Oracle, and they asked me to follow the steps in the document
ODA (Oracle Database Appliance): OAK Bundle Patch failing on ILOM/BIOS component apply (Doc ID 1427885.1), which I did.

First, I stopped the CRS stack:

[root@serveroda grid]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'serveroda'
CRS-2673: Attempting to stop 'ora.crsd' on 'serveroda'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'serveroda'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'serveroda'
CRS-2673: Attempting to stop 'ora.cvu' on 'serveroda'
CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'serveroda'
CRS-2673: Attempting to stop 'ora.RECO.dg' on 'serveroda'
CRS-2673: Attempting to stop 'ora.data.commonstore.acfs' on 'serveroda'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'serveroda'
CRS-2673: Attempting to stop 'ora.chad' on 'serveroda'
CRS-2677: Stop of 'ora.RECO.dg' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.cvu' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.serveroda.vip' on 'serveroda'
CRS-2677: Stop of 'ora.serveroda.vip' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.data.commonstore.acfs' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.DATA.COMMONSTORE.advm' on 'serveroda'
CRS-2677: Stop of 'ora.DATA.COMMONSTORE.advm' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'serveroda'
CRS-2673: Attempting to stop 'ora.proxy_advm' on 'serveroda'
CRS-2677: Stop of 'ora.DATA.dg' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.chad' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.proxy_advm' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'serveroda'
CRS-2677: Stop of 'ora.asm' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'serveroda'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.asmnet1.asmnetwork' on 'serveroda'
CRS-2677: Stop of 'ora.asmnet1.asmnetwork' on 'serveroda' succeeded
CRS-33677: Stop of resource group 'ora.asmgroup' on server 'serveroda' succeeded.
CRS-2673: Attempting to stop 'ora.ons' on 'serveroda'
CRS-2677: Stop of 'ora.ons' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'serveroda'
CRS-2677: Stop of 'ora.net1.network' on 'serveroda' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'serveroda' has completed
CRS-2677: Stop of 'ora.crsd' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'serveroda'
CRS-2673: Attempting to stop 'ora.crf' on 'serveroda'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'serveroda'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'serveroda'
CRS-2677: Stop of 'ora.crf' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.asm' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'serveroda'
CRS-2673: Attempting to stop 'ora.evmd' on 'serveroda'
CRS-2677: Stop of 'ora.ctssd' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.evmd' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'serveroda'
CRS-2677: Stop of 'ora.cssd' on 'serveroda' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'serveroda'
CRS-2673: Attempting to stop 'ora.gipcd' on 'serveroda'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'serveroda'
CRS-2677: Stop of 'ora.driver.afd' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'serveroda' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'serveroda' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'serveroda' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@serveroda grid]# 

Then we run the ipmiflash command:

[root@serveroda ~]# cd /opt/oracle/oak/pkgrepos/ilom/x8-2l/4.0.4.51.r134837/
[root@serveroda 4.0.4.51.r134837]# ls
componentmetadata.xml  ILOM-4_0_4_51_r134837-ORACLE_SERVER_X8-2L.pkg
[root@serveroda 4.0.4.51.r134837]#

[root@serveroda 4.0.4.51.r134837]# ipmiflash -v write ILOM-4_0_4_51_r134837-ORACLE_SERVER_X8-2L.pkg force script config delaybios warning=0

A few moments later the upgrade was done (it took around 30 minutes):

Uploading Firmware over Linux OpenIPMI Interface...
64 byte packets
More robust algorithm supported.
31968K total
31968K [sending...] data chunk #595201, size 05779K [sending...] data chunk #70366, size 55
Waiting for transfer to finish..........
Sending upgrade command

Waiting for upgrade to start..........
Waiting for upgrade to complete...........................................................................
Upgrade completed
You have new mail in /var/spool/mail/root
[root@serveroda 4.0.4.51.r134837]# 
reconnecting to console...

After the reconnection we then verify the version

[root@serveroda 4.0.4.51.r134837]# cd
[root@serveroda ~]# /usr/bin/ipmitool sunoem version
Version: 4.0.4.51 r134837
[root@serveroda ~]# /usr/bin/ipmitool sunoem getval /SYS/MB/BIOS/fru_version
Target Value: 52020500
[root@serveroda ~]# 

As we can see, the ILOM is now updated. The BIOS still reports 52020500 because we flashed with the delaybios option: the new BIOS will be applied at the next power cycle. So we stop the server:

[root@serveroda ~]# init 0

Four or five minutes later, we start the node via the ILOM (Host Management > Power Management > select Power On, then Save) and validate that everything is fine:

[root@serveroda ~]# odacli describe-component
System Version  
---------------
19.8.0.0.0

Component                                Installed Version    Available Version   
---------------------------------------- -------------------- --------------------
OAK                                       19.8.0.0.0            up-to-date          
GI                                        19.8.0.0.200714       up-to-date          
DCSAGENT                                  19.8.0.0.0            up-to-date          
ILOM                                      4.0.4.51.r134837      up-to-date          
BIOS                                      52021300              up-to-date          
OS                                        7.8                   up-to-date          
FIRMWARECONTROLLER                        VDV1RL04              up-to-date          
FIRMWAREDISK                              1120                  1102                
HMP                                       2.4.5.0.1             up-to-date          

[root@serveroda ~]#

The article Reimaging an ODA to 19.8 Part II appeared first on Blog dbi services.


Troubleshooting performance on Autonomous Database


By Franck Pachot

On my Oracle Cloud Free Tier Autonomous Transaction Processing service, a database that can be used for free with no time limit, I have seen this strange activity. As I'm running nothing scheduled, I was surprised by this pattern and looked at it out of curiosity. And I took some screenshots to show you how I look at those things. The easiest performance tool available in the Autonomous Database is the Performance Hub, which shows the activity through time with detail on multiple dimensions for drill-down analysis. This is based on ASH, of course.

In the upper pane, I focus on the part with homogeneous activity, because I will view the content without the timeline and then want to compare the activity metric (Average Active Sessions) with the peak I observed. Without this, I might start looking at something that is not significant and waste my time. Here, where the activity is about 1 active session, I want to drill down on dimensions that account for around 0.8 active sessions, to be sure to address 80% of the surprising activity. If the selected part included some idle time around it, I would not be able to do this easily.

The second pane lets me drill down either on 3 dimensions in a load map (we will see that later), or on one main dimension with the time axis (in this screenshot the dimension is “Consumer Group”) with two other dimensions below displayed without the time detail, here “Wait Class” and “Wait Event”. This is where I want to compare the activity (0.86 average active sessions on CPU) to the load I'm looking at, as I don't have the time axis to see peaks and idle periods.

  • I see “Internal” for all “Session Attributes” ASH dimensions, like “Consumer Group”, “Module”, “Action”, “Client”, “Client Host Port”
  • About “Session Identifiers” ASH dimensions, I still see “internal” for “User Session”, “User Name” and “Program”.
  • “Parallel Process” shows “Serial” and “Session Type” shows “Foreground” which doesn’t give me more information

I have more information from “Resource Consumption”:

  • ASH Dimension “Wait Class”: mostly “CPU” and some “User I/O”
  • ASH Dimension “Wait Event”: the “User I/O” is “direct path read temp”
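This wait-class split can be approximated straight from ASH, since each V$ACTIVE_SESSION_HISTORY sample represents roughly one second of DB time. A minimal sketch, assuming a 10-minute window:

-- Average Active Sessions per wait class: samples / elapsed seconds
select case when session_state = 'ON CPU' then 'CPU' else wait_class end as activity,
       round(count(*) / (10 * 60), 2) as avg_active_sessions
from   v$active_session_history
where  sample_time > sysdate - interval '10' minute
group by case when session_state = 'ON CPU' then 'CPU' else wait_class end
order by 2 desc;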

I'll dig into those details later. There's no direct detail for the CPU consumption. I'll look at logical reads of course, and at the SQL Plan, but I cannot directly match the CPU time with that, especially from Average Active Sessions where I don't have the CPU time: I only have samples there. It may be easier with “User I/O” because it should show up in other dimensions.

There are no “Blocking Session” but the ASH Dimension “Object” gives interesting information:

  • ASH Dimension “Object”: SYS.SYS_LOB0000009134C00039$$ and SYS.SYS_LOB0000011038C00004$$ (LOB)

I don’t know an easy way to copy/paste from the Performance Hub so I have generated an AWR report and found them in the Top DB Objects section:

Object ID  % Activity  Event             % Event  Object Name (Type)                   Tablespace  Container Name
9135       24.11       direct path read  24.11    SYS.SYS_LOB0000009134C00039$$ (LOB)  SYSAUX      SUULFLFCSYX91Z0_ATP1
11039      10.64       direct path read  10.64    SYS.SYS_LOB0000011038C00004$$ (LOB)  SYSAUX      SUULFLFCSYX91Z0_ATP1

That's the beauty of ASH. In addition to showing you the load per multiple dimensions, it links all dimensions. Here, without guessing, I know that those objects are responsible for the “direct path read temp” we have seen above.

Let me insist on the numbers. I mentioned that I selected, in the upper chart, a homogeneous activity time window in order to compare the activity number with and without the time axis. My total activity during this time window is a little bit over 1 active session (on average: AAS, Average Active Sessions). I can see this on the time chart y-axis, and I can confirm it if I sum up the aggregations on other dimensions. As seen above, CPU + User I/O was 0.86 + 0.37 = 1.23 when the selected part was around 1.25 active sessions. Here, when looking at the “Object” dimension, I see around 0.5 sessions on SYS_LOB0000011038C00004$$ (green) during one minute, then around 0.3 sessions on SYS_LOB0000009134C00039$$ (blue) for 5 minutes, and no activity on objects during 1 minute. That matches approximately the 0.37 AAS on User I/O. In the AWR report this is displayed as “% Event”, and 24.11 + 10.64 = 34.75%, which is roughly the ratio of those 0.37 to 1.25 we had with Average Active Sessions. When looking at sampling activity details, it is important to keep in mind the weight of each component we look at.

Let’s get more detail about those objects, from SQL Developer Web, or any connection:


DEMO@atp1_tp> select owner,object_name,object_type,oracle_maintained from dba_objects 
where owner='SYS' and object_name in ('SYS_LOB0000009134C00039$$','SYS_LOB0000011038C00004$$');

   OWNER                  OBJECT_NAME    OBJECT_TYPE    ORACLE_MAINTAINED
________ ____________________________ ______________ ____________________
SYS      SYS_LOB0000009134C00039$$    LOB            Y
SYS      SYS_LOB0000011038C00004$$    LOB            Y

DEMO@atp1_tp> select owner,table_name,column_name,segment_name,tablespace_name from dba_lobs 
where owner='SYS' and segment_name in ('SYS_LOB0000009134C00039$$','SYS_LOB0000011038C00004$$');

   OWNER                TABLE_NAME    COLUMN_NAME                 SEGMENT_NAME    TABLESPACE_NAME
________ _________________________ ______________ ____________________________ __________________
SYS      WRI$_SQLSET_PLAN_LINES    OTHER_XML      SYS_LOB0000009134C00039$$    SYSAUX
SYS      WRH$_SQLTEXT              SQL_TEXT       SYS_LOB0000011038C00004$$    SYSAUX

Ok, that’s interesting information. It confirms why I see ‘internal’ everywhere: those are dictionary tables.

WRI$_SQLSET_PLAN_LINES is about SQL Tuning Sets and in 19c, especially with the Auto Index feature, the SQL statements are captured every 15 minutes and analyzed to find index candidates. A look at SQL Tuning Sets confirms this:


DEMO@atp1_tp> select sqlset_name,parsing_schema_name,count(*),dbms_xplan.format_number(sum(length(sql_text))),min(plan_timestamp)
from dba_sqlset_statements group by parsing_schema_name,sqlset_name order by count(*);


    SQLSET_NAME    PARSING_SCHEMA_NAME    COUNT(*)    DBMS_XPLAN.FORMAT_NUMBER(SUM(LENGTH(SQL_TEXT)))    MIN(PLAN_TIMESTAMP)
_______________ ______________________ ___________ __________________________________________________ ______________________
SYS_AUTO_STS    C##OMLIDM                        1 53                                                 30-APR-20
SYS_AUTO_STS    FLOWS_FILES                      1 103                                                18-JUL-20
SYS_AUTO_STS    DBSNMP                           6 646                                                26-MAY-20
SYS_AUTO_STS    XDB                              7 560                                                20-MAY-20
SYS_AUTO_STS    ORDS_PUBLIC_USER                 9 1989                                               30-APR-20
SYS_AUTO_STS    GUEST0001                       10 3656                                               20-MAY-20
SYS_AUTO_STS    CTXSYS                          12 1193                                               20-MAY-20
SYS_AUTO_STS    LBACSYS                         28 3273                                               30-APR-20
SYS_AUTO_STS    AUDSYS                          29 3146                                               26-MAY-20
SYS_AUTO_STS    ORDS_METADATA                   29 4204                                               20-MAY-20
SYS_AUTO_STS    C##ADP$SERVICE                  33 8886                                               11-AUG-20
SYS_AUTO_STS    MDSYS                           39 4964                                               20-MAY-20
SYS_AUTO_STS    DVSYS                           65 8935                                               30-APR-20
SYS_AUTO_STS    APEX_190200                    130 55465                                              30-APR-20
SYS_AUTO_STS    C##CLOUD$SERVICE               217 507K                                               30-APR-20
SYS_AUTO_STS    ADMIN                          245 205K                                               30-APR-20
SYS_AUTO_STS    DEMO                           628 320K                                               30-APR-20
SYS_AUTO_STS    APEX_200100                  2,218 590K                                               18-JUL-20
SYS_AUTO_STS    SYS                        106,690 338M                                               30-APR-20

All of this is gathered by the SYS_AUTO_STS job. And most of the captured statements were parsed by SYS: a system job works hard because of its own system statements, as I mentioned when I saw this for the first time.

With this drill-down from the “Object” dimension, I’ve already gone far enough to get an idea about the problem: an internal job is reading the huge SQL Tuning Sets that have been collected by the Auto STS job introduced in 19c (and used by Auto Index). But I’ll continue to look at all other ASH Dimensions. They can give me more detail or at least confirm my guesses. That’s the idea: you look at all the dimensions and once one gives you interesting information, you dig down to more details.

I look at the “PL/SQL” ASH dimension first because an application should call SQL from procedural code and not the opposite. And, as all this is internal code developed by Oracle, I expect they do it this way.

  • ASH Dimension “PL/SQL”: I see ‘7322,38’
  • ASH Dimension “Top PL/SQL”: I see ‘19038,5’

Again, I copy/paste to avoid typos and got them from the AWR report “Top PL/SQL Procedures” section:

PL/SQL Entry Subprogram        % Activity  PL/SQL Current Subprogram     % Current  Container Name
UNKNOWN_PLSQL_ID <19038, 5>    78.72       SQL                           46.81      SUULFLFCSYX91Z0_ATP1
                                           UNKNOWN_PLSQL_ID <7322, 38>   31.21      SUULFLFCSYX91Z0_ATP1
UNKNOWN_PLSQL_ID <13644, 332>  2.13        SQL                           2.13       SUULFLFCSYX91Z0_ATP1
UNKNOWN_PLSQL_ID <30582, 1>    1.42        SQL                           1.42       SUULFLFCSYX91Z0_ATP1

Side note on the numbers: activity was 0.35 AAS on top-level PL/SQL and 0.33 AAS on current PL/SQL; the 0.33 is included within the 0.35, as a session active in a PL/SQL call. In AWR (where “Entry” means “top-level”) you see them nested, including the SQL activity: this is why you see 78.72% here, which is the SQL + PL/SQL executed under the top-level call. But the procedure (7322,38) itself is 31.21% of the total AAS, which matches the 0.33 AAS.

By the way, I didn't mention it before, but this part of the AWR report is actually an ASH report that is included in the AWR HTML report.

Now let's try to identify those procedures. I think the “UNKNOWN” comes from not finding them in the package procedures:


DEMO@atp1_tp> select * from dba_procedures where (object_id,subprogram_id) in ( (7322,38) , (19038,5) );

no rows selected

but I find them from DBA_OBJECTS:


DEMO@atp1_tp> select owner,object_name,object_id,object_type,oracle_maintained,last_ddl_time from dba_objects where object_id in (7322,19038);

   OWNER           OBJECT_NAME    OBJECT_ID    OBJECT_TYPE    ORACLE_MAINTAINED    LAST_DDL_TIME
________ _____________________ ____________ ______________ ____________________ ________________
SYS      XMLTYPE                      7,322 TYPE           Y                    18-JUL-20
SYS      DBMS_AUTOTASK_PRVT          19,038 PACKAGE        Y                    22-MAY-20

and DBA_PROCEDURES:


DEMO@atp1_tp> select owner,object_name,procedure_name,object_id,subprogram_id from dba_procedures where object_id in(7322,19038);


   OWNER                   OBJECT_NAME    PROCEDURE_NAME    OBJECT_ID    SUBPROGRAM_ID
________ _____________________________ _________________ ____________ ________________
SYS      DBMS_RESULT_CACHE_INTERNAL    RELIES_ON               19,038                1
SYS      DBMS_RESULT_CACHE_INTERNAL                            19,038                0

All this doesn’t match 🙁

My guess is that the top-level PL/SQL object is DBMS_AUTOTASK_PRVT, as I can see it in the container I'm connected to (an autonomous database is a pluggable database in the Oracle Cloud container database). It has OBJECT_ID=19038 in my PDB. But DBA_PROCEDURES is an extended data link, and the OBJECT_ID of common objects is different in CDB$ROOT and in the PDBs. OBJECT_ID=7322 is probably an identifier in CDB$ROOT, where active session monitoring runs. I cannot verify this, as I have only a local user. Because of this inconsistency, my drill-down on the PL/SQL dimension stops there.
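A common user connected to CDB$ROOT could in principle verify this guess by comparing the object ids across containers; a sketch of what I would run if I had such access (not available on Autonomous):

select con_id, owner, object_name, object_id
from   cdb_objects
where  object_name in ('DBMS_AUTOTASK_PRVT', 'XMLTYPE')
order by object_name, con_id;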

The package calls some SQL, and from browsing the AWR report I've seen in the time model that “sql execute elapsed time” is the major component:

Statistic Name                 Time (s)  % of DB Time  % of Total CPU Time
sql execute elapsed time       1,756.19  99.97
DB CPU                         1,213.59  69.08         94.77
PL/SQL execution elapsed time    498.62  28.38

I'll follow the hierarchy of this dimension, where the most detailed level will be the SQL Plan operation. But let's start with “SQL Opcode”:

  • ASH Dimension “Top Level Opcode”: mostly “PL/SQL EXECUTE” which confirms that the SQL I’ll see is called by the PL/SQL.
  • ASH Dimension “top level SQL ID”: mostly dkb7ts34ajsjy here. I’ll look at its details further.

From the AWR report, I see all statements with no distinction about the top-level one, and there's no easy way to tell what is running as a recursive call and what is the top-level call. It can often be guessed from the time and other statistics; here I have 3 queries taking almost the same database time:

Elapsed Time (s)  Executions  Elapsed Time per Exec (s)  %Total  %CPU   %IO    SQL Id         SQL Module      SQL Text
1,110.86          3           370.29                     63.24   61.36  50.16  dkb7ts34ajsjy  DBMS_SCHEDULER  DECLARE job BINARY_INTEGER := …
1,110.85          3           370.28                     63.24   61.36  50.16  f6j6vuum91fw8  DBMS_SCHEDULER  begin /*KAPI:task_proc*/ dbms_…
1,087.12          3           362.37                     61.88   61.65  49.93  0y288pk81u609  SYS_AI_MODULE   SELECT /*+dynamic_sampling(11)…

SYS_AI_MODULE is the Auto Indexing feature.


DEMO@atp1_tp> select distinct sql_id,sql_text from v$sql where sql_id in ('dkb7ts34ajsjy','f6j6vuum91fw8','0y288pk81u609');
dkb7ts34ajsjy    DECLARE job BINARY_INTEGER := :job;  next_date TIMESTAMP WITH TIME ZONE := :mydate;  broken BOOLEAN := FALSE;  job_name VARCHAR2(128) := :job_name;  job_subname VARCHAR2(128) := :job_subname;  job_owner VARCHAR2(128) := :job_owner;  job_start TIMESTAMP WITH TIME ZONE := :job_start;  job_scheduled_start TIMESTAMP WITH TIME ZONE := :job_scheduled_start;  window_start TIMESTAMP WITH TIME ZONE := :window_start;  window_end TIMESTAMP WITH TIME ZONE := :window_end;  chain_id VARCHAR2(14) :=  :chainid;  credential_owner VARCHAR2(128) := :credown;  credential_name  VARCHAR2(128) := :crednam;  destination_owner VARCHAR2(128) := :destown;  destination_name VARCHAR2(128) := :destnam;  job_dest_id varchar2(14) := :jdestid;  log_id number := :log_id;  BEGIN  begin dbms_autotask_prvt.run_autotask(3, 0);  end;  :mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;
f6j6vuum91fw8    begin /*KAPI:task_proc*/ dbms_auto_index_internal.task_proc(FALSE); end;                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
0y288pk81u609    SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */ SQL_ID, PLAN_HASH_VALUE, ELAPSED_TIME/EXECUTIONS ELAPSED_PER_EXEC, DBMS_AUTO_INDEX_INTERNAL.AUTO_INDEX_ALLOW(CE) SESSION_TYPE FROM (SELECT SQL_ID, PLAN_HASH_VALUE, MIN(ELAPSED_TIME) ELAPSED_TIME, MIN(EXECUTIONS) EXECUTIONS, MIN(OPTIMIZER_ENV) CE, MAX(EXISTSNODE(XMLTYPE(OTHER_XML), '/other_xml/info[@type = "has_user_tab"]')) USER_TAB FROM (SELECT F.NAME AS SQLSET_NAME, F.OWNER AS SQLSET_OWNER, SQLSET_ID, S.SQL_ID, T.SQL_TEXT, S.COMMAND_TYPE, P.PLAN_HASH_VALUE, SUBSTRB(S.MODULE, 1, (SELECT KSUMODLEN FROM X$MODACT_LENGTH)) MODULE, SUBSTRB(S.ACTION, 1, (SELECT KSUACTLEN FROM X$MODACT_LENGTH)) ACTION, C.ELAPSED_TIME, C.BUFFER_GETS, C.EXECUTIONS, C.END_OF_FETCH_COUNT, P.OPTIMIZER_ENV, L.OTHER_XML FROM WRI$_SQLSET_DEFINITIONS F, WRI$_SQLSET_STATEMENTS S, WRI$_SQLSET_PLANS P,WRI$_SQLSET_MASK M, WRH$_SQLTEXT T, WRI$_SQLSET_STATISTICS C, WRI$_SQLSET_PLAN_LINES L WHERE F.ID = S.SQLSET_ID AND S.ID = P.STMT_ID AND S.CON_DBID = P.CON_DBID AND P.

It looks like dbms_autotask_prvt.run_autotask calls dbms_auto_index_internal.task_proc, which queries the WRI$_SQLSET tables, and this is where all the database time goes.

  • ASH Dimension “SQL Opcode”: mostly SELECT statements here
  • ASH Dimension “SQL Force Matching Signature” is interesting to group all statements that differ only by literals.
  • ASH Dimension “SQL Plan Hash Value”, and the more detailed “SQL Full Plan Hash Value”, are interesting to group all statements having the same execution plan shape, or exactly the same execution plan

  • ASH Dimension “SQL ID” is the most interesting here to see which SELECT query is seen most of the time below this top-level call, but unfortunately I see “internal” here. Fortunately, the AWR report above did not hide it.
  • ASH Dimension “SQL Plan Operation” shows me that within this query I'm spending time on a HASH GROUP BY operation (which, when the workarea is large, does some “direct path read temp”, as we encountered in the “Wait Event” dimension).
  • ASH Dimension “SQL Plan Operation Line” helps me find this operation in the plan: in addition to the SQL_ID (the one that was hidden in the “SQL ID” dimension), I get the plan identification (plan hash value) and the plan line number. The same drill-down can be reproduced directly from ASH, as in the sketch below.
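A minimal sketch of that plan-line drill-down, straight from ASH (10-minute window assumed):

select sql_id, sql_plan_hash_value, sql_plan_line_id, sql_plan_operation,
       count(*) as ash_samples
from   v$active_session_history
where  sample_time > sysdate - interval '10' minute
group by sql_id, sql_plan_hash_value, sql_plan_line_id, sql_plan_operation
order by ash_samples desc
fetch first 10 rows only;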

Again, I use the graphical Performance Hub to find where I need to drill down and find all details in the AWR report “Top SQL with Top Events” section:

SQL ID        Plan Hash  Executions % Activity Event              % Event Top Row Source            % Row Source SQL Text
0y288pk81u609 2011736693          3      70.21 CPU + Wait for CPU   35.46 HASH - GROUP BY                  28.37 SELECT /*+dynamic_sampling(11)...
                                               direct path read     34.75 HASH - GROUP BY                  24.11
444n6jjym97zv 1982042220         18      12.77 CPU + Wait for CPU   12.77 FIXED TABLE - FULL               12.77 SELECT /*+ unnest */ * FROM GV...
1xx2k8pu4g5yf 2224464885          2       5.67 CPU + Wait for CPU    5.67 FIXED TABLE - FIXED INDEX         2.84 SELECT /*+ first_rows(1) */ s...
3kqrku32p6sfn 3786872576          3       2.13 CPU + Wait for CPU    2.13 FIXED TABLE - FULL                2.13 MERGE /*+ OPT_PARAM('_parallel...
64z4t33vsvfua 3336915854          2       1.42 CPU + Wait for CPU    1.42 FIXED TABLE - FIXED INDEX         0.71 WITH LAST_HOUR AS ( SELECT ROU...

I can see the full SQL Text in the AWR report and get the AWR statement report with dbms_workload_repository. I can also fetch the plan with DBMS_XPLAN.DISPLAY_AWR:


DEMO@atp1_tp> select * from dbms_xplan.display_awr('0y288pk81u609',2011736693,null,'+peeked_binds');


                                                                                                              PLAN_TABLE_OUTPUT
_______________________________________________________________________________________________________________________________
SQL_ID 0y288pk81u609
--------------------
SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */ SQL_ID,
PLAN_HASH_VALUE, ELAPSED_TIME/EXECUTIONS ELAPSED_PER_EXEC,
DBMS_AUTO_INDEX_INTERNAL.AUTO_INDEX_ALLOW(CE) SESSION_TYPE FROM (SELECT
SQL_ID, PLAN_HASH_VALUE, MIN(ELAPSED_TIME) ELAPSED_TIME,
MIN(EXECUTIONS) EXECUTIONS, MIN(OPTIMIZER_ENV) CE,
MAX(EXISTSNODE(XMLTYPE(OTHER_XML), '/other_xml/info[@type =
"has_user_tab"]')) USER_TAB FROM (SELECT F.NAME AS SQLSET_NAME, F.OWNER
AS SQLSET_OWNER, SQLSET_ID, S.SQL_ID, T.SQL_TEXT, S.COMMAND_TYPE,
P.PLAN_HASH_VALUE, SUBSTRB(S.MODULE, 1, (SELECT KSUMODLEN FROM
X$MODACT_LENGTH)) MODULE, SUBSTRB(S.ACTION, 1, (SELECT KSUACTLEN FROM
X$MODACT_LENGTH)) ACTION, C.ELAPSED_TIME, C.BUFFER_GETS, C.EXECUTIONS,
C.END_OF_FETCH_COUNT, P.OPTIMIZER_ENV, L.OTHER_XML FROM
WRI$_SQLSET_DEFINITIONS F, WRI$_SQLSET_STATEMENTS S, WRI$_SQLSET_PLANS
P,WRI$_SQLSET_MASK M, WRH$_SQLTEXT T, WRI$_SQLSET_STATISTICS C,
WRI$_SQLSET_PLAN_LINES L WHERE F.ID = S.SQLSET_ID AND S.ID = P.STMT_ID
AND S.CON_DBID = P.CON_DBID AND P.STMT_ID = C.STMT_ID AND
P.PLAN_HASH_VALUE = C.PLAN_HASH_VALUE AND P.CON_DBID = C.CON_DBID AND
P.STMT_ID = M.STMT_ID AND P.PLAN_HASH_VALUE = M.PLAN_HASH_VALUE AND
P.CON_DBID = M.CON_DBID AND S.SQL_ID = T.SQL_ID AND S.CON_DBID =
T.CON_DBID AND T.DBID = F.CON_DBID AND P.STMT_ID=L.STMT_ID AND
P.PLAN_HASH_VALUE = L.PLAN_HASH_VALUE AND P.CON_DBID = L.CON_DBID) S,
WRI$_ADV_OBJECTS OS WHERE SQLSET_OWNER = :B8 AND SQLSET_NAME = :B7 AND
(MODULE IS NULL OR (MODULE != :B6 AND MODULE != :B5 )) AND SQL_TEXT NOT
LIKE 'SELECT /* DS_SVC */%' AND SQL_TEXT NOT LIKE 'SELECT /*
OPT_DYN_SAMP */%' AND SQL_TEXT NOT LIKE '/*AUTO_INDEX:ddl*/%' AND
SQL_TEXT NOT LIKE '%/*+%dbms_stats%' AND COMMAND_TYPE NOT IN (9, 10,
11) AND PLAN_HASH_VALUE > 0 AND BUFFER_GETS > 0 AND EXECUTIONS > 0 AND
OTHER_XML IS NOT NULL AND OS.SQL_ID_VC (+)= S.SQL_ID AND OS.TYPE (+)=
:B4 AND DECODE(OS.TYPE(+), :B4 , TO_NUMBER(OS.ATTR2(+)), -1) =
S.PLAN_HASH_VALUE AND OS.TASK_ID (+)= :B3 AND OS.EXEC_NAME (+) IS NULL
AND (OS.SQL_ID_VC IS NULL OR TO_DATE(OS.ATTR18, :B2 )  0 ORDER BY
DBMS_AUTO_INDEX_INTERNAL.AUTO_INDEX_ALLOW(CE) DESC, ELAPSED_TIME DESC

Plan hash value: 2011736693

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name                           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                                |       |       |   957 (100)|          |
|   1 |  SORT ORDER BY                            |                                |   180 |   152K|   957  (18)| 00:00:01 |
|   2 |   FILTER                                  |                                |       |       |            |          |
|   3 |    HASH GROUP BY                          |                                |   180 |   152K|   957  (18)| 00:00:01 |
|   4 |     NESTED LOOPS                          |                                |  3588 |  3030K|   955  (18)| 00:00:01 |
|   5 |      FILTER                               |                                |       |       |            |          |
|   6 |       HASH JOIN RIGHT OUTER               |                                |  3588 |  2964K|   955  (18)| 00:00:01 |
|   7 |        TABLE ACCESS BY INDEX ROWID BATCHED| WRI$_ADV_OBJECTS               |     1 |    61 |     4   (0)| 00:00:01 |
|   8 |         INDEX RANGE SCAN                  | WRI$_ADV_OBJECTS_IDX_02        |     1 |       |     3   (0)| 00:00:01 |
|   9 |        HASH JOIN                          |                                |  3588 |  2750K|   951  (18)| 00:00:01 |
|  10 |         TABLE ACCESS STORAGE FULL         | WRI$_SQLSET_PLAN_LINES         | 86623 |  2706K|   816  (19)| 00:00:01 |
|  11 |         HASH JOIN                         |                                |  3723 |  2737K|   134   (8)| 00:00:01 |
|  12 |          TABLE ACCESS STORAGE FULL        | WRI$_SQLSET_STATISTICS         | 89272 |  2789K|    21  (10)| 00:00:01 |
|  13 |          HASH JOIN                        |                                |  3744 |  2636K|   112   (7)| 00:00:01 |
|  14 |           JOIN FILTER CREATE              | :BF0000                        |  2395 |   736K|    39  (13)| 00:00:01 |
|  15 |            HASH JOIN                      |                                |  2395 |   736K|    39  (13)| 00:00:01 |
|  16 |             TABLE ACCESS STORAGE FULL     | WRI$_SQLSET_STATEMENTS         |  3002 |   137K|    13  (24)| 00:00:01 |
|  17 |              FIXED TABLE FULL             | X$MODACT_LENGTH                |     1 |     5 |     0   (0)|          |
|  18 |              FIXED TABLE FULL             | X$MODACT_LENGTH                |     1 |     5 |     0   (0)|          |
|  19 |              FIXED TABLE FULL             | X$MODACT_LENGTH                |     1 |     5 |     0   (0)|          |
|  20 |             NESTED LOOPS                  |                                |  1539 |   402K|    25   (4)| 00:00:01 |
|  21 |              TABLE ACCESS BY INDEX ROWID  | WRI$_SQLSET_DEFINITIONS        |     1 |    27 |     1   (0)| 00:00:01 |
|  22 |               INDEX UNIQUE SCAN           | WRI$_SQLSET_DEFINITIONS_IDX_01 |     1 |       |     0   (0)|          |
|  23 |              TABLE ACCESS STORAGE FULL    | WRH$_SQLTEXT                   |  1539 |   362K|    24   (5)| 00:00:01 |
|  24 |           JOIN FILTER USE                 | :BF0000                        | 89772 |    34M|    73   (3)| 00:00:01 |
|  25 |            TABLE ACCESS STORAGE FULL      | WRI$_SQLSET_PLANS              | 89772 |    34M|    73   (3)| 00:00:01 |
|  26 |      INDEX UNIQUE SCAN                    | WRI$_SQLSET_MASK_PK            |     1 |    19 |     0   (0)|          |
----------------------------------------------------------------------------------------------------------------------------

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 7 (U - Unused (7))
---------------------------------------------------------------------------

   0 -  SEL$5
         U -  MERGE(@"SEL$5" >"SEL$4") / duplicate hint
         U -  MERGE(@"SEL$5" >"SEL$4") / duplicate hint

   1 -  SEL$5C160134
         U -  dynamic_sampling(11) / rejected by IGNORE_OPTIM_EMBEDDED_HINTS

  17 -  SEL$7286615E
         U -  PUSH_SUBQ(@"SEL$7286615E") / duplicate hint
         U -  PUSH_SUBQ(@"SEL$7286615E") / duplicate hint

  17 -  SEL$7286615E / X$MODACT_LENGTH@SEL$5
         U -  FULL(@"SEL$7286615E" "X$MODACT_LENGTH"@"SEL$5") / duplicate hint
         U -  FULL(@"SEL$7286615E" "X$MODACT_LENGTH"@"SEL$5") / duplicate hint

Peeked Binds (identified by position):
--------------------------------------

   1 - :B8 (VARCHAR2(30), CSID=873): 'SYS'
   2 - :B7 (VARCHAR2(30), CSID=873): 'SYS_AUTO_STS'
   5 - :B4 (NUMBER): 7
   7 - :B3 (NUMBER): 15

Note
-----
   - SQL plan baseline SQL_PLAN_gf2c99a3zrzsge1b441a5 used for this statement

I can confirm what I’ve seen about the HASH GROUP BY on line ID=3.
I forgot to mention that SQL Monitor is not available for this query, probably because it is disabled for internal queries. Anyway, the most interesting point here is that the plan comes from SQL Plan Management.

Here is more information about this SQL Plan Baseline:


DEMO@atp1_tp> select * from dbms_xplan.display_sql_plan_baseline('','SQL_PLAN_gf2c99a3zrzsge1b441a5');
                                                                                                                  ...
--------------------------------------------------------------------------------
SQL handle: SQL_f709894a87fbff0f
SQL text: SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */ SQL_ID,
          PLAN_HASH_VALUE, ELAPSED_TIME/EXECUTIONS ELAPSED_PER_EXEC,
...
--------------------------------------------------------------------------------
Plan name: SQL_PLAN_gf2c99a3zrzsge1b441a5         Plan id: 3786686885
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------
...

This shows only one plan, but I want to see all plans for this statement.


DEMO@atp1_tp> select 
CREATOR,ORIGIN,CREATED,LAST_MODIFIED,LAST_EXECUTED,LAST_VERIFIED,ENABLED,ACCEPTED,FIXED,REPRODUCED
from dba_sql_plan_baselines where sql_handle='SQL_f709894a87fbff0f' order by created;


   CREATOR                           ORIGIN            CREATED      LAST_MODIFIED      LAST_EXECUTED      LAST_VERIFIED    ENABLED    ACCEPTED    FIXED    REPRODUCED
__________ ________________________________ __________________ __________________ __________________ __________________ __________ ___________ ________ _____________
SYS        EVOLVE-LOAD-FROM-AWR             30-MAY-20 11:50    30-JUL-20 23:34                       30-JUL-20 23:34    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-AWR             30-MAY-20 11:50    31-JUL-20 05:03                       31-JUL-20 05:03    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-CURSOR-CACHE    30-MAY-20 11:50    31-JUL-20 06:09                       31-JUL-20 06:09    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-AWR             30-MAY-20 11:50    31-JUL-20 06:09                       31-JUL-20 06:09    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 16:08    31-JUL-20 07:15                       31-JUL-20 07:15    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 19:10    30-MAY-20 19:30    30-MAY-20 19:30    30-MAY-20 19:29    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 19:30    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-MAY-20 23:32    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 03:14    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 04:14    31-JUL-20 08:21                       31-JUL-20 08:21    YES        NO          NO       YES
SYS        EVOLVE-LOAD-FROM-AWR             31-MAY-20 13:04    31-JUL-20 23:43                       31-JUL-20 23:43    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 13:19    31-JUL-20 23:43                       31-JUL-20 23:43    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 13:39    11-JUL-20 04:35    11-JUL-20 04:35    31-MAY-20 14:09    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 18:01    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     31-MAY-20 22:44    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     01-JUN-20 06:48    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     01-JUN-20 07:09    10-AUG-20 22:05                       10-AUG-20 22:05    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     02-JUN-20 05:22    02-JUN-20 05:49                       02-JUN-20 05:49    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     02-JUN-20 21:52    10-AUG-20 22:06                       10-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     03-JUN-20 08:20    23-AUG-20 20:45    23-AUG-20 20:45    03-JUN-20 08:49    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     04-JUN-20 01:34    10-AUG-20 22:06                       10-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     05-JUN-20 21:43    10-AUG-20 22:06                       10-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     14-JUN-20 06:01    18-AUG-20 23:22    18-AUG-20 23:22    14-JUN-20 10:52    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     14-JUN-20 06:21    13-AUG-20 22:35                       13-AUG-20 22:35    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     27-JUN-20 16:43    27-AUG-20 22:11                       27-AUG-20 22:11    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     28-JUN-20 02:09    28-JUN-20 06:52    28-JUN-20 06:52    28-JUN-20 06:41    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     28-JUN-20 08:13    29-JUL-20 23:24                       29-JUL-20 23:24    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     29-JUN-20 03:05    30-JUL-20 22:28                       30-JUL-20 22:28    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     29-JUN-20 10:50    30-JUL-20 23:33                       30-JUL-20 23:33    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-JUN-20 13:28    11-JUL-20 05:15    11-JUL-20 05:15    30-JUN-20 23:09    YES        YES         NO       YES
SYS        AUTO-CAPTURE                     01-JUL-20 14:04    31-JUL-20 22:37                       31-JUL-20 22:37    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     11-JUL-20 06:36    10-AUG-20 22:07                       10-AUG-20 22:07    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     11-JUL-20 14:00    11-AUG-20 22:06                       11-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     12-JUL-20 00:47    11-AUG-20 22:06                       11-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     12-JUL-20 01:47    11-AUG-20 22:06                       11-AUG-20 22:06    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     12-JUL-20 09:52    13-AUG-20 22:34                       13-AUG-20 22:34    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     13-JUL-20 04:03    13-AUG-20 22:34                       13-AUG-20 22:34    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     18-JUL-20 12:15    17-AUG-20 22:15                       17-AUG-20 22:15    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     18-JUL-20 23:43    18-AUG-20 22:44                       18-AUG-20 22:44    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     24-JUL-20 01:38    23-AUG-20 06:24                       23-AUG-20 06:24    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     24-JUL-20 06:42    24-AUG-20 22:09                       24-AUG-20 22:09    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     30-JUL-20 02:21    30-JUL-20 02:41                       30-JUL-20 02:41    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     07-AUG-20 18:33    07-AUG-20 19:16                       07-AUG-20 19:16    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     13-AUG-20 22:52    14-AUG-20 22:10                       14-AUG-20 22:10    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     14-AUG-20 05:16    14-AUG-20 22:10                       14-AUG-20 22:10    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     14-AUG-20 15:42    14-AUG-20 22:10                       14-AUG-20 22:10    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     18-AUG-20 23:22    19-AUG-20 22:11                       19-AUG-20 22:11    YES        NO          NO       YES
SYS        AUTO-CAPTURE                     27-AUG-20 00:07    27-AUG-20 22:11                       27-AUG-20 22:11    YES        NO          NO       YES

Ok, there was a huge SQL Plan Management activity here. It all starts on 30-MAY-20, which is when my ATP database was upgraded to 19c. 19c comes with two new features. The first one, “Automatic SQL Tuning Set”, gathers a lot of statements in SYS_AUTO_STS, as we have seen above. The other one, “Automatic SQL Plan Management”, or “Automatic Resolution of Plan Regressions”, looks into AWR for resource-intensive statements with several execution plans. Then it creates SQL Plan Baselines for them, loading all alternative plans found in AWR, SQL Tuning Sets, and the Cursor Cache. And this is why I have EVOLVE-LOAD-FROM-AWR and EVOLVE-LOAD-FROM-CURSOR-CACHE loaded on 30-MAY-20 11:50.
This feature is explained in Nigel Bayliss’ blog post.

So, here are the settings in the Autonomous Database: ALTERNATE_PLAN_BASELINE=AUTO, which enables Auto SPM, and ALTERNATE_PLAN_SOURCE=AUTO, which means AUTOMATIC_WORKLOAD_REPOSITORY+CURSOR_CACHE+SQL_TUNING_SET:


DEMO@atp1_tp> select parameter_name, parameter_value from   dba_advisor_parameters
              where  task_name = 'SYS_AUTO_SPM_EVOLVE_TASK' and parameter_value <> 'UNUSED' order by 1;

             PARAMETER_NAME    PARAMETER_VALUE
___________________________ __________________
ACCEPT_PLANS                TRUE
ALTERNATE_PLAN_BASELINE     AUTO
ALTERNATE_PLAN_LIMIT        UNLIMITED
ALTERNATE_PLAN_SOURCE       AUTO
DAYS_TO_EXPIRE              UNLIMITED
DEFAULT_EXECUTION_TYPE      SPM EVOLVE
EXECUTION_DAYS_TO_EXPIRE    30
JOURNALING                  INFORMATION
MODE                        COMPREHENSIVE
TARGET_OBJECTS              1
TIME_LIMIT                  3600
_SPM_VERIFY                 TRUE

This query (and the explanations) are from Mike Dietrich’s blog post, which you should read.
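If one wanted to revert this task to the pre-19c behaviour, the parameters above can be changed with DBMS_SPM. A minimal sketch, assuming the parameter values documented in those blog posts (verify them on your version first):

exec dbms_spm.set_evolve_task_parameter('SYS_AUTO_SPM_EVOLVE_TASK', 'ALTERNATE_PLAN_BASELINE', 'EXISTING');
exec dbms_spm.set_evolve_task_parameter('SYS_AUTO_SPM_EVOLVE_TASK', 'ALTERNATE_PLAN_SOURCE', 'CURSOR_CACHE');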

So, I can see many plans for this query, some accepted and some not. The Auto Evolve advisor task should help to see which plans are ok, but it seems that it cannot for this statement:


SELECT DBMS_SPM.report_auto_evolve_task FROM   dual;
...

---------------------------------------------------------------------------------------------
 Object ID          : 848087
 Test Plan Name     : SQL_PLAN_gf2c99a3zrzsgd6c09b5e
 Base Plan Name     : Cost-based plan
 SQL Handle         : SQL_f709894a87fbff0f
 Parsing Schema     : SYS
 Test Plan Creator  : SYS
 SQL Text           : SELECT /*+dynamic_sampling(11) NO_XML_QUERY_REWRITE */
...

FINDINGS SECTION
---------------------------------------------------------------------------------------------

Findings (1):
-----------------------------
 1. This plan was skipped because either the database is not fully open or the
    SQL statement is ineligible for SQL Plan Management.

I dropped all those SQL Plan Baselines:


set serveroutput on
exec dbms_output.put_line ( DBMS_SPM.DROP_SQL_PLAN_BASELINE(sql_handle => 'SQL_f709894a87fbff0f') );

but the query still takes as long. The problem is not the Auto SPM job, which just tries to find a solution.

It seems that the Auto Index query spends time on this HASH GROUP BY because of the following:


     SELECT
...
     FROM
     (SELECT SQL_ID, PLAN_HASH_VALUE,MIN(ELAPSED_TIME) ELAPSED_TIME,MIN(EXECUTIONS) EXECUTIONS,MIN(OPTIMIZER_ENV) CE,
             MAX(EXISTSNODE(XMLTYPE(OTHER_XML),
                            '/other_xml/info[@type = "has_user_tab"]')) USER_TAB
       FROM
...       
     GROUP BY SQL_ID, PLAN_HASH_VALUE
     )
     WHERE USER_TAB > 0

This is the AI job looking at many statements, with their OTHER_XML plan information, and doing a group by on that. There is probably no optimal plan for this query.

Then why do I have so many statements in the auto-captured SQL Tuning Set? An application should have a limited set of statements. In OLTP, with many executions for different values, we should use bind variables to limit the set of statements. In DWH, ad-hoc queries should not have so many executions.

When looking at the statements not using bind variables, the FORCE_MATCHING_SIGNATURE is the right dimension on which to aggregate them, as there are too many SQL_IDs:



DEMO@atp1_tp> select force_matching_signature from dba_sqlset_statements group by force_matching_signature order by count(*) desc fetch first 2 rows only;

     FORCE_MATCHING_SIGNATURE
_____________________________
    7,756,258,419,218,828,704
   15,893,216,616,221,909,352

DEMO@atp1_tp> select sql_text from dba_sqlset_statements where force_matching_signature=15893216616221909352 fetch first 3 rows only;
                                                     SQL_TEXT
_____________________________________________________________
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = 50867
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = 51039
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = 51048

DEMO@atp1_tp> select sql_text from dba_sqlset_statements where force_matching_signature=7756258419218828704 fetch first 3 rows only;
                                                                                   SQL_TEXT
___________________________________________________________________________________________
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 51039 and bitand(FLAGS, 128)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 51049 and bitand(FLAGS, 128)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = 51047 and bitand(FLAGS, 128)=0

These are the two FORCE_MATCHING_SIGNATUREs with the most rows in DBA_SQLSET_STATEMENTS, and looking at a sample of them confirms that they don’t use bind variables. They are Oracle internal queries, and because I have the FORCE_MATCHING_SIGNATURE I put it in a Google search in order to see if others have already seen the issue (Oracle Support notes are also indexed by Google).

The first result is a Connor McDonald blog post from 2016, taking this example to show how to hunt for SQL which should use bind variables:
https://connor-mcdonald.com/2016/05/30/sql-statements-using-literals/
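A minimal sketch of that kind of hunt against the cursor cache (the thresholds are arbitrary; Connor’s post shows more elaborate variants):

select force_matching_signature, count(distinct sql_id) candidates, min(sql_text) sample_text
from   v$sql
where  force_matching_signature > 0
and    force_matching_signature != exact_matching_signature
group by force_matching_signature
having count(distinct sql_id) > 10
order by candidates desc;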

There is also a hit on My Oracle Support for those queries:
5931756 QUERIES AGAINST SYS_FBA_TRACKEDTABLES DON’T USE BIND VARIABLES, which is supposed to be fixed in 19c, but obviously it is not. When I look at the patch I see “where OBJ# = :1” in ktfa.o


$ strings 15931756/files/lib/libserver18.a/ktfa.o | grep "SYS_FBA_TRACKEDTABLES where OBJ# = "
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = :1 and bitand(FLAGS, :2)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = :1
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = :1

This uses bind variables.

But I checked in 19.6 and 20.3:


[oracle@cloud libserver]$ strings /u01/app/oracle/product/20.0.0/dbhome_1/bin/oracle | grep "SYS_FBA_TRACKEDTABLES where OBJ# = "
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = %d and bitand(FLAGS, %d)=0
select count(FA#) from SYS_FBA_TRACKEDTABLES where OBJ# = %d
select FLAGS from SYS_FBA_TRACKEDTABLES where OBJ# = %d

This is string substitution, not bind variables.

Ok, as usual, I went too far from my initial goal, which was just sharing some screenshots about looking at the Performance Hub. With the Autonomous Database we don’t have all the tools we are used to. On a self-managed database I would have tkprof’ed this job that runs every 15 minutes. Different tools, but still possible. In this example I drilled down into the problematic query execution plan, found that a system table was too large, got the bug number that should have fixed it, and verified that it wasn’t.
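For reference, on a self-managed database such a trace could be set up like this minimal sketch (the service and module names are assumptions; check V$SESSION for the actual values of the job’s session):

-- enable SQL trace for sessions matching a service/module
exec dbms_monitor.serv_mod_act_trace_enable(service_name => 'SYS$USERS', module_name => 'DBMS_SCHEDULER', waits => true, binds => false);
-- ... let the 15-minute job run, then disable tracing:
exec dbms_monitor.serv_mod_act_trace_disable(service_name => 'SYS$USERS', module_name => 'DBMS_SCHEDULER');
-- and profile the trace file on the server (file name is a placeholder):
-- tkprof ORCL_ora_12345.trc report.txt sort=exeela,fchela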

If you want to drill down by yourself, I’m sharing one AWR report easy to download from the Performance Hub:
https://www.dropbox.com/s/vp8ndas3pcqjfuw/troubleshooting-autonomous-database-AWRReport.html
and PerfHub report gathered with dbms_perf.report_perfhub: https://www.dropbox.com/s/yup5m7ihlduqgbn/troubleshooting-autonomous-database-perfhub.html

Comments and questions welcome. If you are interested in an Oracle Performance Tuning workshop, I can do it in our office, on customer premises, or remotely (Teams, Teamviewer, or any tool you want). Just request it on: https://www.dbi-services.com/trainings/oracle-performance-tuning-training/#onsite. We can deliver a 3-day workshop on the optimizer concepts with hands-on labs to learn the troubleshooting method and tools. Or we can do some coaching looking at your environment on a shared screen: your database, your tools.

This article Troubleshooting performance on Autonomous Database first appeared on Blog dbi services.

Oracle 20c Data Guard : Standardization of Client-Side Broker Files


In an Oracle 20c Data Guard environment with a broker configured, we can have the following files, called client-side broker files:
- The observer configuration file: observer.ora
- The observer log file
- The observer runtime data file: fsfo.dat
- The fast-start failover callout scripts (a new feature in Oracle 20c)
Before Oracle 20c, there was no default location for these files. Starting with Oracle 20c, we can define a default location for all of them by setting an environment variable called DG_ADMIN, which should point to a directory. Once defined, this directory will contain subdirectories that store the client-side broker files.

The directory pointed to by DG_ADMIN must be created with the required permissions.
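A minimal sketch of that setup on Linux (the path is the one used later in this post):

mkdir -p /u01/app/oracle/admin/prod20/broker_loc
chmod 700 /u01/app/oracle/admin/prod20/broker_loc    # read, write, execute for the owner only
export DG_ADMIN=/u01/app/oracle/admin/prod20/broker_loc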

If the directory does not exist or has the wrong permissions, the broker will store the fsfo.dat and observer.ora files in the current directory, and the log output will be redirected to standard output.

The default directory will contain the following subdirectories:

admin directory : contains the observer.ora
config_ConfigurationSimpleName : contains files related to the observer and callout configuration
config_ConfigurationSimpleName/log : contains the observer logfile
config_ConfigurationSimpleName/dat : contains the observer runtime data file
config_ConfigurationSimpleName/callout : contains files related to the callout configuration

In the documentation we can find the following:

On Linux/Unix, the directory specified by the DG_ADMIN environment variable must have read, write, and execute permissions for the directory owner only. The subdirectories that DGMGRL creates under this directory will also have the same permissions.
On Windows, the directory specified by the DG_ADMIN environment variable must have exclusive permissions wherein it can be accessed only by the current operating system user who is running DGMGRL. The subdirectories created under this directory by DGMGRL will also have the same permissions.

Let’s do some practical demonstrations of this new feature. Below is the configuration I am using:

192.168.2.21 oraadserver : the primary database
192.168.2.22 oraadserver2 : the standby database
192.168.2.23 oraadserver3 : the observer

The Data Guard and the broker are already configured

DGMGRL> show configuration

Configuration - prod20

  Protection Mode: MaxPerformance
  Members:
  PROD20_SITE1 - Primary database
    PROD20_SITE2 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 39 seconds ago)

DGMGRL>

Now let’s connect to oraadserver3 and start the observer. Note that we have not yet defined the DG_ADMIN variable:

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] pwd
/home/oracle
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] echo $DG_ADMIN

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] nohup dgmgrl -silent sys/*******@prod20_site1 "start observer" &
[1] 8202
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] nohup: ignoring input and appending output to ‘nohup.out’

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)]

As no DG_ADMIN variable is defined, the client-side broker files are stored in the current directory

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] ls -ltra fsfo.dat observer_oraadserver3.log
-rw-r--r--. 1 oracle oinstall 8336 Sep  4 13:29 fsfo.dat
-rw-r-----. 1 oracle oinstall  760 Sep  4 13:30 observer_oraadserver3.log
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)]

The command show observer confirms that the observer is actually started on oraadserver3:

DGMGRL> show observer

Configuration - prod20

  Fast-Start Failover:     DISABLED

Observer "oraadserver3"

  Host Name:                    oraadserver3
  Last Ping to Primary:         5 seconds ago
  Log File:
  State File:

DGMGRL>

And in the log file we have this

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] less observer_oraadserver3.log
Observer 'oraadserver3' started
[W000 2020-09-04T13:29:35.168+02:00] Observer trace level is set to USER
[W000 2020-09-04T13:29:35.172+02:00] Fast-Start Failover is disabled.

Ok, now let’s stop the observer and define the DG_ADMIN variable. The fsfo.dat file and the observer log file are also removed.
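The stop itself is not shown; a sketch of what was done (password masked as in the rest of the post):

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] dgmgrl -silent sys/*******@prod20_site1 "stop observer"
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] rm fsfo.dat observer_oraadserver3.log nohup.out
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] export DG_ADMIN=/u01/app/oracle/admin/prod20/broker_loc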

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] echo $DG_ADMIN/
/u01/app/oracle/admin/prod20/broker_loc/

With the following permissions

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] ls -ld /u01/app/oracle/admin/prod20/broker_loc/
drwxr-xr-x. 2 oracle oinstall 6 Sep  4 13:39 /u01/app/oracle/admin/prod20/broker_loc/
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)]

And let’s start the observer again

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] nohup dgmgrl -silent sys/***@prod20_site1 "start observer" &
[1] 9136

We can see the following errors in the nohup file, as the permissions on the directory are wrong

oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] nohup: ignoring input and appending output to ‘nohup.out’

[1]+  Exit 255                nohup dgmgrl -silent sys/********@prod20_site1 "start observer"
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)] cat nohup.out
DGM-17389: The directory or file $DG_ADMIN/ should not be accessible by any user other than the owner.
Connected to "prod20_site1"
DGM-17390: The directory or file $DG_ADMIN/config_prod20/ cannot be accessed.
Failed.
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)]

Now let’s change the permissions as specified in the message

[oracle@oraadserver3 ~]$ chmod -R  700 $DG_ADMIN/
[oracle@oraadserver3 ~]$ ls -ld $DG_ADMIN/
drwx------. 2 oracle oinstall 6 Sep  4 13:39 /u01/app/oracle/admin/prod20/broker_loc/
[oracle@oraadserver3 ~]$

And let’s start the observer. Then we can see the following in the nohup file

Succeeded in opening the observer file "/u01/app/oracle/admin/prod20/broker_loc/config_prod20/dat/fsfo.dat".
[W000 2020-09-04T13:50:47.465+02:00] Observer could not read the contents of the observer file.
[W000 2020-09-04T13:50:47.492+02:00] FSFO target standby is
Observer 'oraadserver3' started
The observer log file is '/u01/app/oracle/admin/prod20/broker_loc/config_prod20/log/observer_oraadserver3.log'.
oracle@oraadserver3:/home/oracle/ [prod20 (CDB$ROOT)]

Indeed, the observer is now started:

DGMGRL> show observer

Configuration - prod20

  Fast-Start Failover:     DISABLED

Observer "oraadserver3"

  Host Name:                    oraadserver3
  Last Ping to Primary:         6 seconds ago
  Log File:
  State File:

DGMGRL>

In the $DG_ADMIN directory we can see new subdirectories created by the broker

[oracle@oraadserver3 ~]$ cd $DG_ADMIN/
[oracle@oraadserver3 broker_loc]$ pwd
/u01/app/oracle/admin/prod20/broker_loc
[oracle@oraadserver3 broker_loc]$ ls
admin  config_prod20
[oracle@oraadserver3 broker_loc]$

We note that the config_ConfigurationSimpleName directory is config_prod20, as the ConfigurationSimpleName is actually prod20 in the broker configuration (by default, the name of the configuration):


DGMGRL> show configuration verbose

Configuration - prod20

  Protection Mode: MaxPerformance
  Members:
  PROD20_SITE1 - Primary database
    PROD20_SITE2 - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'
    ConfigurationWideServiceName    = 'prod20_CFG'
    ConfigurationSimpleName         = 'prod20'

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS

DGMGRL>

And in the config_prod20 directory we have the following subdirectories

[oracle@oraadserver3 config_prod20]$ pwd
/u01/app/oracle/admin/prod20/broker_loc/config_prod20
[oracle@oraadserver3 config_prod20]$ ls -lR
.:
total 0
drwx------. 2 oracle oinstall  6 Sep  4 13:50 callout
drwx------. 2 oracle oinstall 21 Sep  4 13:50 dat
drwx------. 2 oracle oinstall 38 Sep  4 13:50 log

./callout:
total 0

./dat:
total 12
-rw-r--r--. 1 oracle oinstall 8336 Sep  4 13:50 fsfo.dat

./log:
total 32
-rw-r-----. 1 oracle oinstall 29776 Sep  4 14:53 observer_oraadserver3.log
[oracle@oraadserver3 config_prod20]$

As said before, the observer log is now in the config_prod20/log directory and the fsfo.dat in the config_prod20/dat directory.

This article Oracle 20c Data Guard : Standardization of Client-Side Broker Files first appeared on Blog dbi services.

What is Object Storage?


By Franck Pachot


I’ve always been working with databases. Before the cloud era, the most abstract term was “data”. A variable in memory is data. A file is data. A block of disk contains data. We often created a ‘/data’ directory to put everything that is not binaries or configuration files. I’ll always remember when I did that while working in Dakar. My colleagues were laughing for minutes – my Senegalese followers will understand why. “Data”, like “information”, is abstract (which is a reason why it is not plural). It makes sense only when you associate it with a container: data bank, database, datafile, datastore… In database infrastructure, we store data in files or block storage where we read and write by pages: read one or many contiguous blocks, bring them into memory, update them in memory, write those blocks back. And to export and import data outside of the database, we store it in files, within a filesystem that can be local or remote (like NFS). But it is basically the same: you open a file, you seek to the right offset, you read or write, you synchronize, you keep the file open until you don’t need to work on it anymore, and then you close it. This API is so convenient that finally, in Linux, everything is a file: you write to the network with file descriptors, you access block devices through /dev ones, you output to the screen with stderr and stdout…

And then came the cloud which maps most of the artifacts we have in the data center: virtual network, virtual machines, block storage, network file systems,… but then came a new invention: the Object Storage. What is an object? Well.. it is data… Then, what is new? It has some metadata associated with it… But that’s what a filesystem provides then? Well.. here we have no hierarchy, no directories, but more metadata, like tags. What? Everything in the cloud without folders and directories? No, we have buckets… But without hierarchy, how do you avoid name collision? No problem, each object has a UUID identifier.

My hope is that you find this blog post when you “google” to find out what object storage is. It is quite common that “experts” answer a quick “did you google for it?” in forums, without realizing that what actually makes you an expert is not what you know, but how accurate you can be when “googling” for it.

If you already know what an Object Storage is, you will probably find more information in each cloud provider’s documentation. But what if you don’t know at all? I’m writing this blog post because a friend without cloud knowledge was trying to understand what Object Storage is.

He is an Oracle DBA and came to this page about Oracle Cloud Object Storage:
https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm

Overview of Object Storage
Oracle Cloud Infrastructure offers two distinct storage class tiers to address the need for both performant, frequently accessed “hot” storage, and less frequently accessed “cold” storage. Storage tiers help you maximize performance where appropriate and minimize costs where possible.
Use Object Storage for data to which you need fast, immediate, and frequent access. Data accessibility and performance justifies a higher price to store data in the Object Storage tier.

And you know what a DBA thinks when he reads “performant, frequently accessed hot storage” and “fast, immediate, and frequent access”? This is for a database. In the whole page, there are some use cases described. But no mention of “read” and “write”. No mention of “random” or “sequential” access. Nothing that explicitly tells you that the Object Store is not where you want to put your database datafiles. It mentions some use cases that seem very related to databases (Big Data Support, Backup, Repository, Large Datasets…) and it mentions features that are very related to databases (consistency, durability, metadata, encryption).

Basically, if you already know what it is, you have a nice description. But if you are new to the cloud and try to match this service with something you know, then you are completely misled. Especially if you didn’t read about Block Storage before.

Are all cloud providers making the same mistake? Here is the AWS definition:
https://aws.amazon.com/what-is-cloud-object-storage/
Cloud object storage makes it possible to store practically limitless amounts of data in its native format

The Amazon Simple Storage Service (Amazon S3) page also has no mention of read/write workloads or random/sequential access. The features are Durability, Availability & Scalability. Nothing tells you that it is not designed for database files.

Google Cloud may be more clear:
https://cloud.google.com/storage
Object storage for companies of all sizes. Store any amount of data. Retrieve it as often as you’d like. Good for “hot” data that’s accessed frequently, including websites, streaming videos, and mobile apps.
The “Store” and “Retrieve it as often” wording gives the idea of write once and read many, as a whole. This is not for databases. But again, “accessed frequently” should mention the “read” workload.

Microsoft is known to listen and talk to their users. Look at Azure name and definition for this:
https://azure.microsoft.com/en-us/services/storage/
Blob storage: Massively scalable and secure object storage for cloud-native workloads, archives, data lakes, high-performance computing, and machine learning

Yes, that rings a bell or two for a database person. BLOB is exactly what we use in databases to store what a cloud practitioner stores in an Object Storage. Here is the “O” for “Object”, the same as in “Binary Large Object”. You do not store database files in an Object Storage. Databases need block volumes, and you have block storage services for that. You don’t store a hierarchy of folders and files in an Object Storage. File servers need protocols providing shared access and a structure of filesystem trees. You store everything else in an Object Storage. Think of it like writing to tape but reading as if the tape had been transformed to SSD. In this store you put files and can get them efficiently. I’m talking about “store”, “put” and “get” like in a document database. But documents can be terabytes. And you can read those files with a database, as if they were BLOBs, like Oracle Autonomous Data Warehouse reading ORC, Parquet, or Avro files. Or Amazon Athena running SQL queries on S3 files.
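To make the put/get semantics concrete, here is a minimal sketch using the OCI command-line interface (bucket and file names are placeholders; other clouds have equivalents, like aws s3 cp):

# upload a whole file as one object
oci os object put --bucket-name my-bucket --file ./export.dmp --name export.dmp
# retrieve it back as a whole
oci os object get --bucket-name my-bucket --name export.dmp --file ./export_copy.dmp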

I hope Object Storage is clearer for you now, especially if you are in databases. And also remember that what is easy to google for you may be impossible to find for someone else. You need to know the concepts, and Cloud Practitioner certifications are really good for that.

This article What is Object Storage? first appeared on Blog dbi services.

Duplicate Database on ODA X4


The ODA X4 is still in use at some customers. Last time, I was asked to validate the backups. Let me explain the context. Actually, the backups are done via RMAN locally on an NFS share. Afterwards, these backups are backed up to tape by NetBackup.
The goal was just to validate that the backups done by NetBackup can be used to restore the database if needed.

So the backup team restored the backups of one database into a directory, and then we duplicated this database using these backups.

The source database is SRCDB
The target database will be named TESTQ
The backups from tape are copied to /shareback/backup/test_restauration
Below are the server characteristics:

[root@ ~]# oakcli show server

        Power State              : On
        Open Problems            : 0
        Model                    : ODA X4-2
        Type                     : Rack Mount
        Part Number              : 32974004+1+1
        Serial Number            : 1435NMP00A
        Primary OS               : Not Available
        ILOM Address             : 10.120.128.111
        ILOM MAC Address         : 00:10:E0:5F:4D:2E
        Description              : Oracle Database Appliance X4-2 1435NMP00A
        Locator Light            : Off
        Actual Power Consumption : 234 watts
        Ambient Temperature      : 23.000 degree C
        Open Problems Report     : System is healthy

[root@ ~]#

On the source the datafiles are stored here

/u02/app/oracle/oradata/datastore/.ACFS/snaps/SRCDB/SRCDB/

As for every duplicate, we have to prepare the directories for the target database. But there is a problem on the ODA, as I cannot create any directory under the snaps directory:

oracle@:/u02/app/oracle/oradata/datastore/.ACFS/snaps/ [TESTQ] mkdir TESTQ
mkdir: cannot create directory `TT': Permission denied
oracle@:/u02/app/oracle/oradata/datastore/.ACFS/snaps/ [TESTQ]

As I cannot manually create any directory, I have two solutions:
1- Create an empty database named TESTQ with oakcli create database and then remove the datafiles afterwards
2- Create the storage for the future database TESTQ using oakcli create dbstorage

[root@srvodap01n1test_restauration]# oakcli create dbstorage -h
Usage:
      oakcli create dbstorage -db <db_name> [-cdb]


      where:
         db_name      -  Setup the required ACFS storage structure for the database 
         cdb          -  This needs to be passed in case of cdb database

         This storage structure can be used for migrating databases from ASM to ACFS e.t.c

[root@srvodap01n1test_restauration]#

As we can see, create dbstorage will create all the required directories for the new database, so we use the 2nd method. We were using an X4 ODA, and the command create dbstorage has to be launched from the first node:

[root@srvodap01n0 snaps]# oakcli create dbstorage -db TESTQ
INFO: 2020-09-17 13:49:47: Please check the logfile  '/opt/oracle/oak/log/srvodap01n0/tools/12.1.2.12.0/createdbstorage_TESTQ_1793.log' for more details

Please enter the 'SYSASM'  password : (During deployment we set the SYSASM password to 'welcome1'):
Please re-enter the 'SYSASM' password:
Please select one of the following for Database Class  [1 .. 3] :
1    => odb-01s  (   1 cores ,     4 GB memory)
2    =>  odb-01  (   1 cores ,     8 GB memory)
3    =>  odb-02  (   2 cores ,    16 GB memory)
1
The selected value is : odb-01s  (   1 cores ,     4 GB memory)
...SUCCESS: Ran /usr/bin/rsync -tarqvz /opt/oracle/oak/onecmd/ root@192.168.16.28:/opt/oracle/oak/onecmd --exclude=*zip --exclude=*gz --exclude=*log --exclude=*trc --exclude=*rpm and it returned: RC=0

.........
SUCCESS: All nodes in /opt/oracle/oak/onecmd/tmp/db_nodes are pingable and alive.
INFO: 2020-09-17 13:53:44: Successfully setup the storage structure for the database 'TESTQ'
INFO: 2020-09-17 13:53:45: Set the following directory structure for the Database TESTQ
INFO: 2020-09-17 13:53:45: DATA: /u02/app/oracle/oradata/datastore/.ACFS/snaps/TESTQ
INFO: 2020-09-17 13:53:45: REDO: /u01/app/oracle/oradata/datastore/TESTQ
INFO: 2020-09-17 13:53:45: RECO: /u01/app/oracle/fast_recovery_area/datastore/TESTQ
SUCCESS: 2020-09-17 13:53:45: Successfully setup the Storage for the Database : TESTQ
[root@srvodap01n0 snaps]#

Once the storage is created, we start the new instance TESTQ in nomount state with a minimal set of configuration parameters:

oracle@srvodap01n1:/u01/app/oracle/local/dmk/etc/ [TESTQ] sqh

SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 17 14:11:43 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile='/u02/app/oracle/oradata/datastore/.ACFS/snaps/TESTQ/TESTQ/initTESTQ.ora'
ORACLE instance started.

Total System Global Area 4294967296 bytes
Fixed Size                  2932632 bytes
Variable Size             889192552 bytes
Database Buffers         3372220416 bytes
Redo Buffers               30621696 bytes
SQL> show parameter db_uni

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      TESTQ
SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      TESTQ
SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/app/oracle/product/12.1.0
                                                 .2/dbhome_2/dbs/cntrlTESTQ.dbf
SQL>

SQL> show parameter db_cre

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      /u02/app/oracle/oradata/datast
                                                 ore/.ACFS/snaps/TESTQ
db_create_online_log_dest_1          string      /u01/app/oracle/oradata/datast
                                                 ore/TESTQ
db_create_online_log_dest_2          string      /u01/app/oracle/oradata/datast
                                                 ore/TESTQ
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string
SQL>
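For reference, the pfile used above could look like this minimal sketch (the values are inferred from the show parameter output; the memory setting is an assumption):

# initTESTQ.ora
db_name='TESTQ'
compatible='12.1.0.2.0'
sga_target=4g
control_files='/u01/app/oracle/product/12.1.0.2/dbhome_2/dbs/cntrlTESTQ.dbf'
db_create_file_dest='/u02/app/oracle/oradata/datastore/.ACFS/snaps/TESTQ'
db_create_online_log_dest_1='/u01/app/oracle/oradata/datastore/TESTQ'
db_create_online_log_dest_2='/u01/app/oracle/oradata/datastore/TESTQ'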

Once the instance is started, we can now launch the duplicate command. Just note that the output is truncated:

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Sep 17 14:18:17 2020

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect auxiliary /

connected to auxiliary database: TESTQ (not mounted)

run
{
ALLOCATE AUXILIARY CHANNEL c1 DEVICE TYPE DISK;
ALLOCATE AUXILIARY CHANNEL c2 DEVICE TYPE DISK;
DUPLICATE DATABASE TO TESTQ BACKUP LOCATION '/shareback/backup/test_restauration';
release channel c1;
release channel c2;
8> }



allocated channel: c1
channel c1: SID=17 device type=DISK

allocated channel: c2
channel c2: SID=177 device type=DISK

Starting Duplicate Db at 17-SEP-2020 14:18:46

contents of Memory Script:
{
   sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    4294967296 bytes

Fixed Size                     2932632 bytes
Variable Size                889192552 bytes
Database Buffers            3372220416 bytes
Redo Buffers                  30621696 bytes
allocated channel: c1
channel c1: SID=16 device type=DISK
allocated channel: c2
channel c2: SID=177 device type=DISK

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/u01/app/oracle/oradata/datastore/TESTQ/TESTQ/controlfile/o1_mf_hp6o2sb2_.ctl'', ''/u01/app/oracle/oradata/datastore/TESTQ/TESTQ/controlfile/o1_mf_hp6o2sbm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''SRCDB'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''TESTQ'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/shareback/test_restauration/20200916_214502_c-2736611334-20200916-04';
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/u01/app/oracle/oradata/datastore/TESTQ/TESTQ/controlfile/o1_mf_hp6o2sb2_.ctl'', ''/u01/app/oracle/oradata/datastore/TESTQ/TESTQ/controlfile/o1_mf_hp6o2sbm_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''SRCDB'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''TESTQ'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down


Oracle instance started

Total System Global Area    4294967296 bytes

Fixed Size                     2932632 bytes
Variable Size                889192552 bytes
Database Buffers            3372220416 bytes
Redo Buffers                  30621696 bytes
allocated channel: c1
channel c1: SID=16 device type=DISK
allocated channel: c2
channel c2: SID=177 device type=DISK

Starting restore at 17-SEP-2020 14:20:21

channel c2: skipped, AUTOBACKUP already found
channel c1: restoring control file
channel c1: restore complete, elapsed time: 00:00:11
output file name=/u01/app/oracle/oradata/datastore/TESTQ/TESTQ/controlfile/o1_mf_hp6o2sb2_.ctl
output file name=/u01/app/oracle/oradata/datastore/TESTQ/TESTQ/controlfile/o1_mf_hp6o2sbm_.ctl
Finished restore at 17-SEP-2020 14:20:32


...
....
Executing: alter database force logging

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Executing: alter database flashback on
Cannot remove created server parameter file
Finished Duplicate Db at 17-SEP-2020 14:38:31

The duplicate was successful

oracle@srvodap01n1:/u01/app/oracle/local/dmk/etc/ [TESTQ] TESTQ
********* dbi services Ltd. *********
STATUS                 : OPEN
DB_UNIQUE_NAME         : TESTQ
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : YES
FORCE_LOGGING          : YES
VERSION                : 12.1.0.2.0
CDB Enabled            : NO
*************************************
oracle@srvodap01n1:/u01/app/oracle/local/dmk/etc/ [TESTQ]

Hope this helps.

This article Duplicate Database on ODA X4 first appeared on Blog dbi services.

Oracle DML (DELETE) and the Index Clustering Factor


As a consultant working for customers, I’m often in the situation that I have an answer to a problem, but the recommended solution cannot be implemented due to some restrictions. E.g. the recommendation would be to adjust the code, but that is not feasible. In such cases you are forced to try to help without code changes.

Recently I was confronted with the following issue: A process takes too long. Digging deeper I could see that most of the time was spent on this SQL:

DELETE FROM COM_TAB WHERE 1=1 

The execution plan looked as follows:

--------------------------------------------------------------------------------------------
| Id  | Operation             | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                    |       |       | 16126 (100)|          |
|   1 |  DELETE               | COM_TAB            |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_COM_TAB         |    10M|   306M| 16126   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------

My initial reaction was of course to say that deleting all data in a table with a delete statement is not a good idea. It is better to turn the DML into DDL and use e.g. “truncate table”. All options for deleting lots of rows in a table fast are provided by Chris Saxon in his blog here.

In this case changing the SQL was not possible, so what are the alternatives?

As I was involved in this a long time after the issue happened, I checked the ASH data in the AWR history:

SQL> select SQL_EXEC_START, session_state, event, count(*)*10 secs_in_state FROM dba_hist_active_sess_history where sql_id='53gwjb0gjn1np'
  2  group by sql_exec_start, session_state, event order by 1,4 desc;

SQL_EXEC_START      SESSION EVENT                                                            SECS_IN_STATE
------------------- ------- ---------------------------------------------------------------- -------------
19.06.2020 10:13:02 WAITING free buffer waits                                                          560
19.06.2020 10:13:02 WAITING enq: CR - block range reuse ckpt                                           370
19.06.2020 10:13:02 ON CPU                                                                             130
19.06.2020 10:13:02 WAITING reliable message                                                            10
19.06.2020 10:56:01 WAITING enq: CR - block range reuse ckpt                                           550
19.06.2020 10:56:01 WAITING free buffer waits                                                          230
19.06.2020 10:56:01 ON CPU                                                                             140
19.06.2020 10:56:01 WAITING log file switch (checkpoint incomplete)                                     60
19.06.2020 11:39:38 WAITING enq: CR - block range reuse ckpt                                           610
19.06.2020 11:39:38 WAITING free buffer waits                                                          180
19.06.2020 11:39:38 ON CPU                                                                             170
19.06.2020 11:39:38 WAITING log file switch (checkpoint incomplete)                                     80
19.06.2020 11:39:38 WAITING write complete waits                                                        40
19.06.2020 12:23:47 WAITING enq: CR - block range reuse ckpt                                           450
19.06.2020 12:23:47 WAITING free buffer waits                                                          280
19.06.2020 12:23:47 ON CPU                                                                             150
19.06.2020 12:23:47 WAITING log file switch (checkpoint incomplete)                                     90
19.06.2020 12:23:47 WAITING write complete waits                                                        30
19.06.2020 12:23:47 WAITING log buffer space                                                            10

So obviously the DBWR had a problem writing dirty blocks to disk and getting free space in the cache. When the issue above happened, the following parameter was active:

filesystemio_options='ASYNCH'

Changing it to

filesystemio_options='SETALL'

improved the situation a lot, but caused waits on “db file sequential read”.

I.e. with filesystemio_options=’ASYNCH’ we do cache lots of repeatedly touched blocks in the filesystem cache, but suffer from slower (non-direct) writes by the DB-writer. With filesystemio_options=’SETALL’ we gain by doing direct IO by the DB-writer, but have to read repeatedly touched blocks from disk more often.
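For reference, filesystemio_options is a static parameter, so switching it requires an instance restart. A minimal sketch:

alter system set filesystemio_options='SETALL' scope=spfile;
shutdown immediate
startup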

The table just had 1 index, the index for the primary key.

So what to do here?

Several recommendations came to mind (see the sketch after this list):

– With filesystemio_options=’ASYNCH’: Increase the redo logs so that no checkpoint happens while the statement is running
– With filesystemio_options=’SETALL’: Increase the buffer cache to keep blocks in memory longer and avoid single block IOs
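A sketch of what such changes could look like (the sizes are made up for illustration and must be adapted to the workload; the add-logfile syntax assumes OMF):

-- With ASYNCH: larger redo log groups to delay checkpoints (hypothetical size)
alter database add logfile group 5 size 8G;
alter database add logfile group 6 size 8G;
-- With SETALL: a larger buffer cache (hypothetical size, manual SGA management)
alter system set db_cache_size=16G scope=both;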

The most interesting question was: why does the optimizer decide to go over the index here in the first place? With a bad clustering factor it would make more sense to do a full table scan than to use the index. And this has actually been validated with a hint:

DELETE /*+ FULL(COM_TAB) */ FROM COM_TAB WHERE 1=1

improved the situation.

An improvement should be achievable by using an Index Organized Table here: as we only have a primary key index on the table, we would just wipe out the data in the index and would not have to visit the same table block repeatedly. The best approach, however, is to create a testcase and reproduce the issue. Here’s what I did:

I created 2 tables

TDEL_GOOD_CF
TDEL_BAD_CF

which both have more blocks than the db cache can hold. As the names suggest, one table has an index with a good clustering factor and the other an index with a bad clustering factor:

SQL> select table_name, blocks, num_rows from tabs where table_name like 'TDEL_%';

TABLE_NAME                           BLOCKS   NUM_ROWS
-------------------------------- ---------- ----------
TDEL_BAD_CF                          249280     544040
TDEL_GOOD_CF                         248063     544040

Remark: To use lots of blocks I stored only 2 rows per block by using a high PCTFREE.
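For reference, a minimal sketch of how such a table can be built (this is not the exact script used; the pad column and its size are assumptions, only the names and the row count are taken from the output below):

-- A high PCTFREE leaves almost the whole block free, so only a couple of rows fit per block
create table tdel_bad_cf (id number, pad varchar2(100)) pctfree 99 pctused 1;
-- Inserting in random order produces the bad clustering factor on the PK
insert into tdel_bad_cf
select id, rpad('x',100,'x')
from   (select level id from dual connect by level <= 544040)
order  by dbms_random.value;
commit;
alter table tdel_bad_cf add constraint pk_tdel_bad_cf primary key (id);
exec dbms_stats.gather_table_stats(user,'TDEL_BAD_CF')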

SQL> select index_name, leaf_blocks, clustering_factor from ind where table_name like 'TDEL_%';

INDEX_NAME                       LEAF_BLOCKS CLUSTERING_FACTOR
-------------------------------- ----------- -----------------
PK_TDEL_BAD_CF                          1135            532313
PK_TDEL_GOOD_CF                         1135            247906

The database cache size was much smaller than the blocks in the table:

SQL> select bytes/8192 BLOCKS_IN_BUFFER_CACHE from v$sgastat where name='buffer_cache';

BLOCKS_IN_BUFFER_CACHE
----------------------
                 77824

Test with filesystemio_options=’SETALL’:

SQL> set autotrace trace timing on
SQL> delete from TDEL_BAD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:01:08.76

Execution Plan
----------------------------------------------------------
Plan hash value: 2076794500

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                |   544K|  2656K|   315   (2)| 00:00:01 |
|   1 |  DELETE               | TDEL_BAD_CF    |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_TDEL_BAD_CF |   544K|  2656K|   315   (2)| 00:00:01 |
----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
         88  recursive calls
    2500712  db block gets
       1267  consistent gets
     477213  physical reads
  388185816  redo size
        195  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     544040  rows processed

Please note the 477213 physical reads (blocks read), i.e. almost twice the number of blocks in the table.
The ASH data looked as follows:

select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='ck5fw78yqh93g'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;


SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
ck5fw78yqh93g                1 WAITING db file scattered read                    7          1
ck5fw78yqh93g                1 ON CPU                                            7         11
ck5fw78yqh93g                1 WAITING db file sequential read                   7         56

P1 is the file_id when doing IO. File ID 7 is the USERS tablespace where my table and index reside.

So obviously Oracle didn’t consider the clustering factor when building the plan with the index. The cost of 315 is just the cost for the INDEX FAST FULL SCAN:

Fast Full Index Scan Cost ~ ((LEAF_BLOCKS/MBRC) x MREADTIM)/ SREADTIM + CPU

REMARK: I do not have system statistics gathered.

LEAF_BLOCKS=1135
MBRC=8
MREADTIM=26ms
SREADTIM=12ms

Fast Full Index Scan Cost ~ ((1135/8) x 26)/ 12 + CPU = 307 + CPU = 315

The costs for accessing the table are not considered at all. I.e. going through the index and from there to the table to delete the rows results in visiting the same table block several times.
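You can make this revisiting visible with dbms_rowid: walking the rows in index (id) order and counting the block changes roughly recomputes the clustering factor (an illustrative query; file numbers are ignored):

select count(*) block_switches
from   (select dbms_rowid.rowid_block_number(rowid) blk,
               lag(dbms_rowid.rowid_block_number(rowid)) over (order by id) prev_blk
        from   tdel_bad_cf)
where  prev_blk is null or blk <> prev_blk;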

Here is the test with the table having the better clustering factor on the index:

SQL> delete from TDEL_GOOD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:00:30.48

Execution Plan
----------------------------------------------------------
Plan hash value: 4284904063

-----------------------------------------------------------------------------------------
| Id  | Operation             | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                 |   544K|  2656K|   315   (2)| 00:00:01 |
|   1 |  DELETE               | TDEL_GOOD_CF    |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_TDEL_GOOD_CF |   544K|  2656K|   315   (2)| 00:00:01 |
-----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        115  recursive calls
    2505121  db block gets
       1311  consistent gets
     249812  physical reads
  411603188  redo size
        195  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
     544040  rows processed


select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='0nqk3fmcwrrzm'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;

SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
0nqk3fmcwrrzm                1 ON CPU                                            7          3
0nqk3fmcwrrzm                1 WAITING db file sequential read                   7         26

I.e. it ran much faster with the better clustering factor and only had to do half the physical reads.

Here is the test with the full table scan on the table whose index has the bad clustering factor:

cbleile@orcl@orcl> delete /*+ FULL(T) */ from TDEL_BAD_CF T where 1=1;

544040 rows deleted.

Elapsed: 00:00:08.39

Execution Plan
----------------------------------------------------------
Plan hash value: 4058645893

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |             |   544K|  2656K| 67670   (1)| 00:00:01 |
|   1 |  DELETE            | TDEL_BAD_CF |       |       |            |          |
|   2 |   TABLE ACCESS FULL| TDEL_BAD_CF |   544K|  2656K| 67670   (1)| 00:00:01 |
----------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        161  recursive calls
    1940687  db block gets
     248764  consistent gets
     252879  physical reads
  269882276  redo size
        195  bytes sent via SQL*Net to client
        401  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     544040  rows processed

select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='4272c7xv86d0k'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;

SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
4272c7xv86d0k                2 ON CPU                                            7          2
4272c7xv86d0k                1 ON CPU                                            7          3
4272c7xv86d0k                2 WAITING db file scattered read                    7          3

I.e. if Oracle considered the clustering factor here and did the delete with a full table scan, it would obviously run much faster.
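For the last test I used an Index Organized Table, loaded the same way as the heap tables. A sketch of the IOT definition (the real script may differ; the unnamed inline primary key is what leads to the system-generated SYS_IOT_TOP index name below):

-- IOT: the rows live in the primary key index itself, there is no separate heap segment
create table tdel_iot (
  id  number primary key,
  pad varchar2(100)
) organization index pctfree 99;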

The last test, with the IOT:

cbleile@orcl@orcl> delete from TDEL_IOT where 1=1;

544040 rows deleted.

Elapsed: 00:00:06.90

Execution Plan
----------------------------------------------------------
Plan hash value: 515699456

-------------------------------------------------------------------------------------------
| Id  | Operation             | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                   |   544K|  2656K| 66065   (1)| 00:00:01 |
|   1 |  DELETE               | TDEL_IOT          |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| SYS_IOT_TOP_77456 |   544K|  2656K| 66065   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        144  recursive calls
     521556  db block gets
     243200  consistent gets
     243732  physical reads
  241686612  redo size
        194  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          7  sorts (memory)
          0  sorts (disk)
     544040  rows processed

select sql_id, sql_plan_line_id, session_state, event, p1, count(*)
from v$active_session_history
where sql_id='cf6nj64yybkpq'
group by sql_id,sql_plan_line_id, session_state, event, p1
order by 6;

SQL_ID	      SQL_PLAN_LINE_ID SESSION EVENT                                    P1   COUNT(*)
------------- ---------------- ------- -------------------------------- ---------- ----------
cf6nj64yybkpq                2 ON CPU                                            7          1
cf6nj64yybkpq                1 ON CPU                                            7          1
cf6nj64yybkpq                2 WAITING db file scattered read                    7          3

As we were not allowed to adjust the code or replace the table with an IOT, the measures to improve this situation were to
– set filesystemio_options=’SETALL’
REMARK: That change needs good testing as it may have negative effects on other SQL that gains from the filesystem cache.
– add a hint via a SQL Patch to force a full table scan

REMARK: Creating a SQL-Patch to add the hint

FULL(TDEL_BAD_CF)

to the statement was not easily possible, because Oracle does not consider this hint in DML:

var rv varchar2(32);
declare
   v_sql CLOB;
begin
   select sql_text into v_sql from dba_hist_sqltext where sql_id='ck5fw78yqh93g';
   :rv:=dbms_sqldiag.create_sql_patch(
             sql_text  => v_sql,
             hint_text=>'FULL(TDEL_BAD_CF)',
             name=>'force_fts_when_del_all',
             description=>'force fts when del all rows');
end;
/
print rv

delete from TDEL_BAD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:01:01.79

Execution Plan
----------------------------------------------------------
Plan hash value: 2076794500

----------------------------------------------------------------------------------------
| Id  | Operation             | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |                |   544K|  2656K|   315   (2)| 00:00:01 |
|   1 |  DELETE               | TDEL_BAD_CF    |       |       |            |          |
|   2 |   INDEX FAST FULL SCAN| PK_TDEL_BAD_CF |   544K|  2656K|   315   (2)| 00:00:01 |
----------------------------------------------------------------------------------------

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (N - Unresolved (1))
---------------------------------------------------------------------------

   1 -	DEL$1
	 N -  FULL(TDEL_BAD_CF)

Note
-----
   - SQL patch "force_fts_when_del_all" used for this statement

I.e. according to the Note the SQL patch was used, but the Hint report showed the hint as “Unresolved”.

So I had to use the full hint specification:

FULL(@"DEL$1" "TDEL_BAD_CF"@"DEL$1")

To get this full specification you can do an explain plan of the hinted statement and look at the outline data:

SQL> explain plan for
  2  delete /*+ FULL(TDEL_BAD_CF) */ from TDEL_BAD_CF where 1=1;

Explained.

SQL> select * from table(dbms_xplan.display(format=>'+OUTLINE'));

...

Outline Data
-------------

  /*+
      BEGIN_OUTLINE_DATA
      FULL(@"DEL$1" "TDEL_BAD_CF"@"DEL$1")
      OUTLINE_LEAF(@"DEL$1")
      ALL_ROWS
      DB_VERSION('19.1.0')
      OPTIMIZER_FEATURES_ENABLE('19.1.0')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

So here’s the script to create the SQL Patch correctly:

var rv varchar2(32);
declare
   v_sql CLOB;
begin
   select sql_text into v_sql from dba_hist_sqltext where sql_id='ck5fw78yqh93g';
   :rv:=dbms_sqldiag.create_sql_patch(
             sql_text  => v_sql,
             hint_text=>'FULL(@"DEL$1" "TDEL_BAD_CF"@"DEL$1")',
             name=>'force_fts_when_del_all',
             description=>'force fts when del all rows');
end;
/
print rv
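A quick dictionary query can confirm that the patch exists before re-running the statement:

-- the freshly created patch should show up as enabled
select name, status from dba_sql_patches where name='force_fts_when_del_all';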

SQL> delete from TDEL_BAD_CF where 1=1;

544040 rows deleted.

Elapsed: 00:00:06.57

Execution Plan
----------------------------------------------------------
Plan hash value: 4058645893

----------------------------------------------------------------------------------
| Id  | Operation          | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |             |   544K|  2656K| 67670   (1)| 00:00:01 |
|   1 |  DELETE            | TDEL_BAD_CF |       |       |            |          |
|   2 |   TABLE ACCESS FULL| TDEL_BAD_CF |   544K|  2656K| 67670   (1)| 00:00:01 |
----------------------------------------------------------------------------------

Note
-----
   - SQL patch "force_fts_when_del_all" used for this statement

Statistics
----------------------------------------------------------
        207  recursive calls
    1940517  db block gets
     248759  consistent gets
     252817  physical reads
  272061432  redo size
        195  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
         16  sorts (memory)
          0  sorts (disk)
     544040  rows processed

Summary: This is a specific corner case where the Oracle optimizer should consider the clustering factor in DML when calculating plan costs, but it doesn’t. The workaround in this case was to hint the statement, or to add a SQL Patch to hint the statement without modifying the code.

This article, Oracle DML (DELETE) and the Index Clustering Factor, appeared first on the dbi services blog.

How to synchronize the appliance registry metadata on an ODA?


Database administration on a Bare Metal ODA is done as the root user by running odacli commands:

  • odacli create-database to create a database
  • odacli upgrade-database to upgrade a database between major releases
  • odacli move-database to move databases from one Oracle home to another of the same database version
  • odacli update-dbhome to update a specific RDBMS home to the latest patch bundle version
  • etc…

The odacli commands do the required work and, at the end, update the Apache Derby DB (the ODA registry metadata). odacli commands like odacli list-dbhomes or odacli list-databases use the Derby DB information to display the requested data.

But what happens if the odacli commands to upgrade or update your database fail with an error? How do you synchronize the appliance registry metadata on an ODA?

I have run several customer projects where the odacli commands to upgrade or update databases failed before completion. As a consequence, the upgrade had to be completed manually and, unfortunately, the Derby DB had to be updated manually as well in order to keep the metadata information coherent.

I have already shared a few blogs on that subject:
https://blog.dbi-services.com/connecting-to-oda-derby-database/
https://blog.dbi-services.com/moving-oracle-database-to-new-home-on-oda/

Manually updating the Derby DB is a sensitive operation, and you might want to do it only with Oracle Support guidance and instructions.

But, GOOD NEWS! If you are running an ODA version newer than 18.7, there is a new command available: odacli update-registry. I could use it successfully recently, and through this blog I would like to share it with you.

odacli update-registry command

This command updates the registry of the components when you manually apply patches or run a manual database upgrade. The -n option defines the component you would like to get updated in the Derby DB.
See ODA 19.8 documentation for more details.

Real customer case example

I had to upgrade DB1 from 11.2.0.4 to 12.1.0.2. The odacli upgrade-database command failed and I had to complete the upgrade manually. At the end I had to synchronize the registry metadata DB.

List dbhomes

The dbhomes on the ODA were the following:
[root@ODASRV log]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
d6df9457-e4cd-4c39-b3cb-8d03be3c4598 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16 OraDB19000_home1 19.7.0.0.200414 /u01/app/oracle/product/19.0.0.0/dbhome_1 Configured
73847823-ae83-4bf0-a630-f8884cf4387a OraDB12102_home1 12.1.0.2.200414 /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured

Registry metadata after manual upgrade

The registry metadata after the manual upgrade was the following:
[root@ODASRV log]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
67abbd2e-f8e1-42da-bf8d-2f0a8eb403dd DB2 Si 19.7.0.0.200414 false Oltp Odb1 Acfs Configured 9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16
c51f7361-ee99-42ed-9126-86b7fc281981 DB3 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
472c04fe-533d-46af-aeab-ab5271979d98 DB4 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
8dd9b1ea-37fd-408f-99ab-eb32e2c2ed91 DB5 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
2f5856df-e717-404a-b7b0-ca8c82b2f45e DB6 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
5797b6b2-e3fc-4182-8db3-671132dd43a7 DB7 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
3c67a04d-4e6b-4b43-8b56-94284994b25d DB8 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
a1b2500d-728e-4cbe-8425-f8a85826c422 DB9 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
7dfadc59-0c67-4b42-86e1-0140f39cf4d3 DB10 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598

As we can see, DB1 still showed up as an 11.2.0.4 database linked to the 11.2.0.4 home, although it had been upgraded to 12.1.0.2:
oracle@ODASRV:/home/oracle/mwagner/upgrade_TZ/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : DB1_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 12.1.0.2.0
CDB Enabled : NO
*************************************

Updating the registry metadata: odacli update-registry -n db

I tried executing the odacli update-registry command to get the registry metadata updated:

[root@ODASRV log]# odacli update-registry -n db
 
Job details
----------------------------------------------------------------
ID: a7270d8d-c8d2-48be-a41b-150441559791
Description: Discover Components : db
Status: Created
Created: August 10, 2020 1:55:31 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
 
[root@ODASRV log]# odacli describe-job -i a7270d8d-c8d2-48be-a41b-150441559791
 
Job details
----------------------------------------------------------------
ID: a7270d8d-c8d2-48be-a41b-150441559791
Description: Discover Components : db
Status: Success
Created: August 10, 2020 1:55:31 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Discover DBHome August 10, 2020 1:55:31 PM CEST August 10, 2020 1:55:31 PM CEST Success
 
[root@ODASRV log]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
67abbd2e-f8e1-42da-bf8d-2f0a8eb403dd DB2 Si 19.7.0.0.200414 false Oltp Odb1 Acfs Configured 9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16
c51f7361-ee99-42ed-9126-86b7fc281981 DB3 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
472c04fe-533d-46af-aeab-ab5271979d98 DB4 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
8dd9b1ea-37fd-408f-99ab-eb32e2c2ed91 DB5 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
2f5856df-e717-404a-b7b0-ca8c82b2f45e DB6 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
5797b6b2-e3fc-4182-8db3-671132dd43a7 DB7 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
3c67a04d-4e6b-4b43-8b56-94284994b25d DB8 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
a1b2500d-728e-4cbe-8425-f8a85826c422 DB9 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
7dfadc59-0c67-4b42-86e1-0140f39cf4d3 DB10 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
[root@ODASRV log]#

As we can see, nothing really happened…

Use the -f force option: odacli update-registry -n db -f

So I used the force option:

[root@ODASRV log]# odacli update-registry -n db -f
 
Job details
----------------------------------------------------------------
ID: 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
Description: Discover Components : db
Status: Created
Created: August 10, 2020 1:58:37 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
 
[root@ODASRV log]# odacli describe-job -i 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
 
Job details
----------------------------------------------------------------
ID: 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
Description: Discover Components : db
Status: Success
Created: August 10, 2020 1:58:37 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Rediscover DBHome August 10, 2020 1:58:37 PM CEST August 10, 2020 1:58:39 PM CEST Success
Rediscover DBHome August 10, 2020 1:58:39 PM CEST August 10, 2020 1:58:41 PM CEST Success
Rediscover DBHome August 10, 2020 1:58:41 PM CEST August 10, 2020 1:58:48 PM CEST Success
Discover DBHome August 10, 2020 1:58:48 PM CEST August 10, 2020 1:58:48 PM CEST Success
Rediscover DB: DB9_RZA August 10, 2020 1:58:48 PM CEST August 10, 2020 1:58:53 PM CEST Success
Rediscover DB: DB10_RZA August 10, 2020 1:58:53 PM CEST August 10, 2020 1:58:58 PM CEST Success
Rediscover DB: DB8_RZA August 10, 2020 1:58:58 PM CEST August 10, 2020 1:59:03 PM CEST Success
Rediscover DB: DB2_RZA August 10, 2020 1:59:03 PM CEST August 10, 2020 1:59:16 PM CEST Success
Rediscover DB: DB6_RZA August 10, 2020 1:59:16 PM CEST August 10, 2020 1:59:21 PM CEST Success
Rediscover DB: DB7_RZA August 10, 2020 1:59:21 PM CEST August 10, 2020 1:59:26 PM CEST Success
Rediscover DB: DB4_RZA August 10, 2020 1:59:26 PM CEST August 10, 2020 1:59:31 PM CEST Success
Rediscover DB: DB5_RZA August 10, 2020 1:59:31 PM CEST August 10, 2020 1:59:36 PM CEST Success
Rediscover DB: DB3_RZA August 10, 2020 1:59:36 PM CEST August 10, 2020 1:59:41 PM CEST Success
Rediscover DB: DB1_RZA August 10, 2020 1:59:41 PM CEST August 10, 2020 1:59:51 PM CEST Success
 
[root@ODASRV log]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 12.1.0.2.200414 false Oltp Odb1 Acfs Configured 73847823-ae83-4bf0-a630-f8884cf4387a
67abbd2e-f8e1-42da-bf8d-2f0a8eb403dd DB2 Si 19.7.0.0.200414 false Oltp Odb1 Acfs Configured 9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16
c51f7361-ee99-42ed-9126-86b7fc281981 DB3 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
472c04fe-533d-46af-aeab-ab5271979d98 DB4 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
8dd9b1ea-37fd-408f-99ab-eb32e2c2ed91 DB5 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
2f5856df-e717-404a-b7b0-ca8c82b2f45e DB6 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
5797b6b2-e3fc-4182-8db3-671132dd43a7 DB7 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
3c67a04d-4e6b-4b43-8b56-94284994b25d DB8 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
a1b2500d-728e-4cbe-8425-f8a85826c422 DB9 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
7dfadc59-0c67-4b42-86e1-0140f39cf4d3 DB10 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598

As we can see, this time the metadata has been updated and DB1 reflects the 12.1.0.2 version and Oracle home. The force option updates the information of existing components as well, not only of new ones.

The force option (-f) is mandatory if the component (here the DB1 database) already exists in the metadata registry.

Conclusion

This odacli update-registry command is really an excellent new feature and very helpful. It allows you to easily keep the ODA metadata registry up to date with all manual operations.

This article, How to synchronize the appliance registry metadata on an ODA?, appeared first on the dbi services blog.

How to migrate High Availability databases on an ODA?


Through this blog, I would like to show, with a real customer case, how I migrate databases in a High Availability environment on an ODA: High Availability using Data Guard for Oracle Enterprise Edition, or dbvisit for Oracle Standard Edition 2. One of the advantages here is that the standby database can be used as a fallback in case of application issues with the new database version. That said, I would expect that for any production database the customer has a similar test database on which to perform thorough testing first.

In this procedure I upgraded a database named DB1 from 11.2.0.4 to 12.1.0.2. The primary server is named ODA-PRI (running the DB1_RZA database) and the standby server is named ODA-STD (running the DB1_RZB database).

Steps

To perform an upgrade in a High Availability environment, there are 16 steps:

  1. Have a good database backup.
  2. Create new 12.1.0.2 dbhomes (or use existing one) on the primary and the standby ODA.
  3. Stop the application.
  4. Make sure the standby database is synchronized with the primary database.
  5. Stop the synchronization between the primary and the standby database.
  6. Stop the standby database.
  7. Upgrade the primary database to 12.1.0.2.
  8. Start the application.
  9. Test the application.
    • In case of a critical issue, fail over to the standby database and recreate the old primary as the new standby database.
    • If all is ok, have the standby database upgraded as well by doing the next steps.
  10. Move the standby database to the new home.
  11. Upgrade the standby database in the clusterware.
  12. Set compatible parameter to 12.1.0 on the standby database.
  13. Start the standby database.
  14. Update the registry metadata on the ODA.
  15. Start the log shipment again from the primary to the standby. The standby database will be upgraded through the archive logs.
  16. Test a switchover.

1- Check database backup

On our customer environment we are using our DMK Management Kit, which is very easy to use and powerful for administering databases.
I then checked the dmk_dbbackup logs to ensure I had a good backup.

oracle@ODA-PRI:/home/oracle/ [DB1] cda
 
oracle@ODA-PRI:/u01/app/oracle/admin/DB1/ [DB1] cd log
 
oracle@ODA-PRI:/u01/app/oracle/admin/DB1/log/ [DB1] ls -ltrh *inc0*
-rw-r--r-- 1 oracle oinstall 34K Jun 21 21:05 DB1_bck_inc0_no_arc_del_20200621_200002.log
-rw-r--r-- 1 oracle oinstall 33K Jul 1 21:03 DB1_bck_inc0_no_arc_del_20200701_200002.log
-rw-r--r-- 1 oracle oinstall 34K Aug 1 21:06 DB1_bck_inc0_no_arc_del_20200801_200002.log
-rw-r--r-- 1 oracle oinstall 35K Aug 10 08:43 DB1_bck_inc0_no_arc_del_20200810_075712.log
 
oracle@ODA-PRI:/u01/app/oracle/admin/DB1/log/ [DB1] tail DB1_bck_inc0_no_arc_del_20200810_075712.log
 
Recovery Manager complete.
 
RMAN return Code: 0
 
#**************************************************************************************************#
# END OF: DB1_bck_inc0_no_arc_del_20200810_075712.log #
#--------------------------------------------------------------------------------------------------#
# timestamp: 2020-08-10_08:43:52 #
#**************************************************************************************************#
oracle@ODA-PRI:/u01/app/oracle/admin/DB1/log/ [DB1]
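Independently of DMK, the last backup runs can also be cross-checked from the RMAN views in the controlfile (an illustrative query):

-- recent RMAN jobs with their type and outcome
select start_time, end_time, input_type, status
from   v$rman_backup_job_details
order  by start_time desc;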

2- Create new 12.1.0.2 dbhomes (or use existing one) on the primary and the standby ODA

This step is similar on the primary and the standby ODA. Make sure to use the same PSU version on both ODAs.

I created a new Oracle Home version 12.1.0.2.

Checking the dbhomes, there is no 12.1 home on my ODAs yet:
[root@ODA-PRI ~]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
d6df9457-e4cd-4c39-b3cb-8d03be3c4598 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
3d92c8ea-2ad5-4565-acd5-1d931cf22b15 OraDB11204_home3 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_3 Configured
9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16 OraDB19000_home1 19.7.0.0.200414 /u01/app/oracle/product/19.0.0.0/dbhome_1 Configured

I’m running ODA version 19.7:
[root@ODA-PRI ~]# odacli describe-component
System Version
---------------
19.7.0.0.0
 
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 19.7.0.0.0 up-to-date
GI 19.7.0.0.200414 up-to-date
DB {
[ OraDB11204_home1,OraDB11204_home3 ] 11.2.0.4.190115 11.2.0.4.200414
[ OraDB19000_home1 ] 19.7.0.0.200414 up-to-date
}
DCSAGENT 19.7.0.0.0 up-to-date
ILOM 4.0.4.52.r133103 up-to-date
BIOS 41060700 up-to-date
OS 7.8 up-to-date
FIRMWARECONTROLLER QDV1RF30 up-to-date
FIRMWAREDISK 0121 up-to-date
HMP 2.4.5.0.1 up-to-date

I downloaded patch p23494992 from My Oracle Support and extracted it locally:
[root@ODA-PRI patchs]# ls -ltrh
total 5.8G
-rw-r--r-- 1 root root 5.8G Jun 19 09:54 12.1.0.2_p23494992_197000_Linux-x86-64.zip
 
[root@ODA-PRI patchs]# unzip 12.1.0.2_p23494992_197000_Linux-x86-64.zip
Archive: 12.1.0.2_p23494992_197000_Linux-x86-64.zip
extracting: odacli-dcs-19.7.0.0.0-200423-DB-12.1.0.2.zip
inflating: README.txt
 
[root@ODA-PRI patchs]# ls -ltrh
total 12G
-rw-r--r-- 1 root root 5.8G Apr 23 20:10 odacli-dcs-19.7.0.0.0-200423-DB-12.1.0.2.zip
-rw-r--r-- 1 root root 252 May 23 20:55 README.txt
-rw-r--r-- 1 root root 5.8G Jun 19 09:54 12.1.0.2_p23494992_197000_Linux-x86-64.zip

I updated the ODA repository with the new DB clone:
[root@ODA-PRI patchs]# odacli update-repository -f /tmp/patchs/odacli-dcs-19.7.0.0.0-200423-DB-12.1.0.2.zip
{
"jobId" : "a9677ae9-93af-43eb-b738-4a74f9686573",
"status" : "Created",
"message" : "/tmp/patchs/odacli-dcs-19.7.0.0.0-200423-DB-12.1.0.2.zip",
"reports" : [ ],
"createTimestamp" : "August 10, 2020 09:51:41 AM CEST",
"resourceList" : [ ],
"description" : "Repository Update",
"updatedTime" : "August 10, 2020 09:51:41 AM CEST"
}
 
[root@ODA-PRI patchs]# odacli describe-job -i "a9677ae9-93af-43eb-b738-4a74f9686573"
 
Job details
----------------------------------------------------------------
ID: a9677ae9-93af-43eb-b738-4a74f9686573
Description: Repository Update
Status: Success
Created: August 10, 2020 9:51:41 AM CEST
Message: /tmp/patchs/odacli-dcs-19.7.0.0.0-200423-DB-12.1.0.2.zip
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

I then created the needed dbhome (this customer runs Standard Edition databases with the dbvisit software):
[root@ODA-PRI ~]# odacli create-dbhome -de SE -v 12.1.0.2.200414
 
Job details
----------------------------------------------------------------
ID: cb35718a-3aeb-4979-acc3-d352b1f541e4
Description: Database Home OraDB12102_home1 creation with version :12.1.0.2.200414
Status: Created
Created: August 10, 2020 9:57:22 AM CEST
Message: Create Database Home
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
 
[root@ODA-PRI ~]# odacli describe-job -i cb35718a-3aeb-4979-acc3-d352b1f541e4
 
Job details
----------------------------------------------------------------
ID: cb35718a-3aeb-4979-acc3-d352b1f541e4
Description: Database Home OraDB12102_home1 creation with version :12.1.0.2.200414
Status: Success
Created: August 10, 2020 9:57:22 AM CEST
Message: Create Database Home
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance August 10, 2020 9:57:23 AM CEST August 10, 2020 9:57:23 AM CEST Success
Validating dbHome available space August 10, 2020 9:57:23 AM CEST August 10, 2020 9:57:23 AM CEST Success
Creating DbHome Directory August 10, 2020 9:57:23 AM CEST August 10, 2020 9:57:23 AM CEST Success
Extract DB clones August 10, 2020 9:57:23 AM CEST August 10, 2020 9:59:25 AM CEST Success
Clone Db home August 10, 2020 9:59:25 AM CEST August 10, 2020 10:01:19 AM CEST Success
Enable DB options August 10, 2020 10:01:19 AM CEST August 10, 2020 10:01:30 AM CEST Success
Run Root DB scripts August 10, 2020 10:01:30 AM CEST August 10, 2020 10:01:30 AM CEST Success
Removing ssh keys August 10, 2020 10:01:39 AM CEST August 10, 2020 10:01:39 AM CEST Success

The following step is mandatory if you are using our DMK Management Kit: we need to create a dummy entry for the new database home:
oracle@ODA-PRI:/home/oracle/ [rdbms11204_1] cdd
oracle@ODA-PRI:/u01/app/oracle/local/dmk/ [rdbms11204_1] cd etc
oracle@ODA-PRI:/u01/app/oracle/local/dmk/etc/ [rdbms11204_1] cp -p dmk.oratab dmk.oratab.20200810_1003
oracle@ODA-PRI:/u01/app/oracle/local/dmk/etc/ [rdbms11204_1] vi dmk.oratab
oracle@ODA-PRI:/u01/app/oracle/local/dmk/etc/ [rdbms11204_1] diff dmk.oratab dmk.oratab.20200810_1003
4d3
< rdbms12102_1:/u01/app/oracle/product/12.1.0.2/dbhome_1:D
oracle@ODA-PRI:/u01/app/oracle/local/dmk/etc/ [rdbms11204_1]

Just a tip: if you are using passwordless SSH authentication for the oracle user, you will have to create the keys again, as any odacli create or delete command will remove them.

Check the new Oracle database home:
[root@ODA-PRI ~]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
d6df9457-e4cd-4c39-b3cb-8d03be3c4598 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16 OraDB19000_home1 19.7.0.0.200414 /u01/app/oracle/product/19.0.0.0/dbhome_1 Configured
73847823-ae83-4bf0-a630-f8884cf4387a OraDB12102_home1 12.1.0.2.200414 /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured

3- Stop the application

Before performing the next upgrade steps, it is now time to have the customer stop the application. You might have your own script to check running sessions; if not, you can use the query below:
SQL> set linesize 300
SQL> set pagesize 500
SQL> col machine format a20
SQL> col service_name format a20
SQL> select SID, serial#, username, machine, process, program, status, service_name, logon_time from v$session where username not in ('SYS', 'PUBLIC') and username is not null order by status, username;

To kill old inactive sessions, you can use the PL/SQL block below (run it with SET SERVEROUTPUT ON to see the output):
BEGIN
  FOR s IN (SELECT sid, serial#, username, machine, process, program, status, service_name, logon_time
            FROM v$session
            WHERE username NOT IN ('SYS', 'PUBLIC')
              AND username IS NOT NULL
              AND status = 'INACTIVE'
            ORDER BY status, username)
  LOOP
    dbms_output.put_line('Session : ' || s.sid || ',' || s.serial#);
    dbms_output.put_line('');
    EXECUTE IMMEDIATE 'alter system kill session ''' || s.sid || ',' || s.serial# || ''' immediate';
  END LOOP;
END;
/

4- Make sure the standby database is synchronized with the primary database

The High Availability solution depends on whether you are running Enterprise Edition or Standard Edition.
I will assume that you are using Data Guard for Enterprise Edition and the dbvisit solution for Standard Edition.

High availability with Data Guard

To make sure the standby is synchronized, use the show database command on the standby database:
DGMGRL> show configuration
 
Configuration - DB1
 
Protection Mode: MaxAvailability
Members:
DB1_RZA - Primary database
DB1_RZB - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 19 seconds ago)
 
 
DGMGRL> show database DB1_RZB
 
Database - DB1_RZB
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
Average Apply Rate: 4.00 KByte/s
Real Time Query: OFF
Instance(s):
DB1
 
Database Status:
SUCCESS

What is mandatory is to have no transport lag and no apply lag:
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
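The same check can be done from SQL on the standby (a quick alternative to DGMGRL):

-- both lags should report +00 00:00:00
select name, value
from   v$dataguard_stats
where  name in ('transport lag','apply lag');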

High availability with dbvisit

Use the dbvctl command with the -i option to ensure there is no gap. DB1 is the name of my DDC configuration file:
oracle@ODA-PRI:/home/oracle/ [DB1] /u01/app/dbvisit/standby/dbvctl -d DB1 -i
=============================================================
Dbvisit Standby Database Technology (9.0.02_0_gbd40c486) (pid 91009)
dbvctl started on ODA-PRI: Mon Aug 10 09:29:17 2020
=============================================================
 
Dbvisit Standby log gap report for DB1_RZA at 202008100929:
-------------------------------------------------------------
Description | SCN | Timestamp
-------------------------------------------------------------
Source 13139995191 2020-08-10:09:29:20 +02:00
Destination 13139994931 2020-08-10:09:27:37 +02:00
 
Standby database time lag (DAYS-HH:MI:SS): +00:01:43
 
Report for Thread 1
-------------------
SOURCE
Current Sequence 52863
Last Archived Sequence 52862
Last Transferred Sequence 52862
Last Transferred Timestamp 2020-08-10 09:27:42
 
DESTINATION
Recovery Sequence 52863
 
Transfer Log Gap 0
Apply Log Gap 0
 
=============================================================
dbvctl ended on ODA-PRI: Mon Aug 10 09:29:25 2020
=============================================================

It is mandatory to have no gap:
Transfer Log Gap 0
Apply Log Gap 0

If there is any gap, the same command can be run on the primary to ship the archive logs and on the standby to apply them:
oracle@ODA-PRI|STD:/home/oracle/ [DB1] /u01/app/dbvisit/standby/dbvctl -d DB1

5- Stop the synchronization between the primary and the standby database

You will need to stop the synchronization of the databases to isolate the standby from any new change.

High availability with Data Guard

In case Fast-Start Failover (FSFO) is used, you will need to disable it:
DGMGRL> disable fast_start failover;
Disabled.

You will need to set the protection mode to maxperformance:
DGMGRL> edit configuration set protection mode as maxperformance;
Succeeded.

You will need to stop applying the redo (change vectors) on the standby:
DGMGRL> edit database DB1_RZB set state=apply-off;
Succeeded.

You will need to stop shipping the redo (change vectors) from the primary:
DGMGRL> edit database DB1_RZA set state=transport-off;
Succeeded.

High availability with dbvisit

Archive log shipment and apply are done through the Linux crontab. The goal is then to deactivate the jobs on both the primary and the standby database using the crontab -e command.

On the primary:
#00,10,20,30,40,50 * * * * /u01/app/dbvisit/standby/dbvctl -d DB1 >/tmp/dbvisit_apply_logs_DB1.log 2>&1

On the standby:
#05,15,25,35,45,55 * * * * /u01/app/dbvisit/standby/dbvctl -d DB1 >/tmp/dbvisit_apply_logs_DB1.log 2>&1

6- Stop the standby database

oracle@ODA-STD:/home/oracle/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : MOUNTED
DB_UNIQUE_NAME : DB1_RZB
OPEN_MODE : MOUNTED
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PHYSICAL STANDBY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
*************************************
 
oracle@ODA-STD:/home/oracle/ [DB1] srvctl stop database -d DB1_RZB
 
oracle@ODA-STD:/home/oracle/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************
oracle@ODA-STD:/home/oracle/ [DB1]

7- Upgrade the primary database to 12.1.0.2

7a- Prerequisites

I first checked whether the existing database had any invalid objects:
SQL> set lines 300
 
SQL> col status format a20
 
SQL> col comp_name format a40
 
SQL> select comp_name, status from dba_registry;
 
COMP_NAME STATUS
---------------------------------------- --------------------
Oracle Database Catalog Views VALID
Oracle Database Packages and Types VALID
Oracle Workspace Manager VALID
 
SQL> SELECT count(*) FROM dba_invalid_objects;
 
COUNT(*)
----------
0

I executed the preupgrd.sql script:
oracle@ODA-PRI:/home/oracle/ [DB1] echo $ORACLE_SID
DB1
 
oracle@ODA-PRI:/home/oracle/ [DB1] echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0.4/dbhome_1
 
oracle@ODA-PRI:/home/oracle/ [DB1] ls -lthr /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin/pre*
-rw-r--r-- 1 oracle oinstall 14K May 15 2014 /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin/preupgrd.sql
 
oracle@ODA-PRI:/home/oracle/ [DB1] sqh
 
SQL*Plus: Release 11.2.0.4.0 Production on Mon Aug 10 10:29:32 2020
 
Copyright (c) 1982, 2013, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters option
 
SQL> @/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin/preupgrd.sql
...
...
...

I checked the preupgrade log:
oracle@ODA-PRI:/home/oracle/ [DB1] more /u01/app/oracle/cfgtoollogs/DB1_RZA/preupgrade/preupgrade.log
Oracle Database Pre-Upgrade Information Tool 08-10-2020 10:29:46
Script Version: 12.1.0.2.0 Build: 015
**********************************************************************
Database Name: DB1
Container Name: Not Applicable in Pre-12.1 database
Container ID: Not Applicable in Pre-12.1 database
Version: 11.2.0.4.0
Compatible: 11.2.0
Blocksize: 8192
Platform: Linux x86 64-bit
Timezone file: V31
Database log mode: ARCHIVELOG
**********************************************************************
[Update parameters] [No parameters to update] ...
...
...

I executed the preupgrade_fixups.sql script:
SQL> @/u01/app/oracle/cfgtoollogs/DB1_RZA/preupgrade/preupgrade_fixups.sql
Pre-Upgrade Fixup Script Generated on 2020-08-10 10:29:45 Version: 12.1.0.2 Build: 015
Beginning Pre-Upgrade Fixups...
Executing in container DB1
 
**********************************************************************
Check Tag: NEW_TIME_ZONES_EXIST
Check Summary: Check for use of newer timezone data file
Fix Summary: Time zone data file must be updated in the new ORACLE_HOME.
**********************************************************************
Fixup Returned Information:
ERROR: --> New Timezone File in use
 
Database is using a time zone file newer than version 18.
BEFORE upgrading the database, patch the new
ORACLE_HOME/oracore/zoneinfo/ with a time zone data file of the
same version as the one used in the 11.2.0.4.0 release database.
**********************************************************************
 
 
**********************************************************************
Check Tag: PURGE_RECYCLEBIN
Check Summary: Check that recycle bin is empty prior to upgrade
Fix Summary: The recycle bin will be purged.
**********************************************************************
Fixup Succeeded
**********************************************************************
 
 
**********************************************************************
[Pre-Upgrade Recommendations] **********************************************************************
 
*****************************************
********* Dictionary Statistics *********
*****************************************
 
Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
EXECUTE dbms_stats.gather_dictionary_stats;
 
^^^ MANUAL ACTION SUGGESTED ^^^
 
 
*****************************************
*********** Hidden Parameters ***********
*****************************************
 
Please review and remove any unnecessary hidden/underscore parameters prior
to upgrading. It is strongly recommended that these be removed before upgrade
unless your application vendors and/or Oracle Support state differently.
Changes will need to be made in the init.ora or spfile.
 
******** Existing Hidden Parameters ********
 
_datafile_write_errors_crash_instance = FALSE
_db_writer_coalesce_area_size = 16777216
_disable_interface_checking = TRUE
_enable_NUMA_support = FALSE
_file_size_increase_increment = 2143289344
_gc_policy_time = 0
_gc_undo_affinity = FALSE
_ktb_debug_flags = 8
 
^^^ MANUAL ACTION SUGGESTED ^^^
 
 
*****************************************
************ Existing Events ************
*****************************************
 
Please review and remove any unnecessary events prior to upgrading.
It is strongly recommended that these be removed before upgrade unless
your application vendors and/or Oracle Support state differently.
Changes will need to be made in the init.ora or spfile.
 
******** Existing Events ********
 
 
 
^^^ MANUAL ACTION SUGGESTED ^^^
 
 
**************************************************
************* Fixup Summary ************
 
1 fixup routine was successful.
0 fixup routines returned INFORMATIONAL text that should be reviewed.
1 ERROR LEVEL check returned INFORMATION that must be acted on prior to upgrade.
 
************************************************************
====>> USER ACTION REQUIRED <<====
************************************************************
 
1) Check Tag: NEW_TIME_ZONES_EXIST failed.
Check Summary: Check for use of newer timezone data file
Fixup Summary:
"Time zone data file must be updated in the new ORACLE_HOME."
^^^ MANUAL ACTION REQUIRED ^^^
 
**************************************************
You MUST resolve the above error prior to upgrade
**************************************************
 
 
**************** Pre-Upgrade Fixup Script Complete *********************
 
PL/SQL procedure successfully completed.

The hidden parameters are ODA-specific and can be ignored.
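If you want to review them yourself, the classic x$ query lists the non-default underscore parameters (to be run as SYS; shown here as a sketch):

-- hidden parameters with non-default values
select i.ksppinm parameter, v.ksppstvl value
from   x$ksppi i, x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm like '\_%' escape '\'
and    v.ksppstdf = 'FALSE';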

I manually purged the recycle bin:
SQL> EXECUTE dbms_preup.purge_recyclebin_fixup;
 
PL/SQL procedure successfully completed.
 
SQL> PURGE DBA_RECYCLEBIN;
 
DBA Recyclebin purged.
 
SQL>

I gathered dictionary statistics:
SQL> EXECUTE dbms_stats.gather_dictionary_stats;
 
PL/SQL procedure successfully completed.

I found out, thanks to a MOS community article, that there was a bug regarding the time zone requirement:
Bug 17303129 : UPGRADE DATABASE FROM 11.1.0.7 TO 12.1.0.1, “OLDER TIMEZONE IN USE” OCCURRED
and that I did not have to downgrade my time zone:
SQL> select version FROM v$timezone_file;
 
VERSION
----------
31
 
SQL> select TZ_VERSION from registry$database;
 
TZ_VERSION
----------
31

I removed the OLAP component:
SQL> @?/olap/admin/catnoamd.sql
...
...
...
drop type olapsys.olap_sys_aw_access_obj
*
ERROR at line 1:
ORA-01435: user does not exist
...
...
...

and could check that OLAP is installed but not used:
SQL> col c1 heading 'OLAP|Installed' format a20
SQL> select decode(count(*), 0, 'No', 'Yes') c1 from v$option where parameter = 'OLAP';
 
OLAP
Installed
--------------------
Yes
 
SQL> col c1 heading 'OLAP|Used' format a20
SQL> select decode(count(*), 0, 'No', 'Yes') c1 from dba_feature_usage_statistics where name like '%OLAP%' and first_usage_date is not null;
 
OLAP
Used
--------------------
No

7b- Upgrade the primary database to 12.1.0.2 using odacli upgrade-database

As it should normally be done, I first wanted to upgrade the database using odacli upgrade-database.
[root@ODA-PRI ~]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
 
[root@ODA-PRI ~]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
d6df9457-e4cd-4c39-b3cb-8d03be3c4598 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16 OraDB19000_home1 19.7.0.0.200414 /u01/app/oracle/product/19.0.0.0/dbhome_1 Configured
73847823-ae83-4bf0-a630-f8884cf4387a OraDB12102_home1 12.1.0.2.200414 /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured
 
[root@ODA-PRI ~]# odacli upgrade-database -i d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 -from d6df9457-e4cd-4c39-b3cb-8d03be3c4598 -to 73847823-ae83-4bf0-a630-f8884cf4387a
{
"jobId" : "7c6580a2-3646-4a93-8fcc-582a6f297562",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "August 10, 2020 12:01:44 PM CEST",
"resourceList" : [ ],
"description" : "Database service upgrade with db ids: [d897e7d6-9e2d-45e4-a0d7-a1e232d47f16]",
"updatedTime" : "August 10, 2020 12:01:44 PM CEST"
}
 
[root@ODA-PRI upgrade1]# odacli describe-job -i "7c6580a2-3646-4a93-8fcc-582a6f297562"
 
Job details
----------------------------------------------------------------
ID: 7c6580a2-3646-4a93-8fcc-582a6f297562
Description: Database service upgrade with db ids: [d897e7d6-9e2d-45e4-a0d7-a1e232d47f16]
Status: Failure
Created: August 10, 2020 12:01:44 PM CEST
Message: DCS-10001:Internal error encountered: Databases failed to upgrade are : [d897e7d6-9e2d-45e4-a0d7-a1e232d47f16].
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database Service Upgradation August 10, 2020 12:01:44 PM CEST August 10, 2020 12:05:31 PM CEST Failure
Database Service Upgradation August 10, 2020 12:01:44 PM CEST August 10, 2020 12:05:31 PM CEST Failure
Setting up ssh equivalance August 10, 2020 12:01:45 PM CEST August 10, 2020 12:01:45 PM CEST Success
Run catnoamd.sql August 10, 2020 12:01:45 PM CEST August 10, 2020 12:01:46 PM CEST Success
Database Upgrade August 10, 2020 12:01:46 PM CEST August 10, 2020 12:05:30 PM CEST Success
Deleting and creating the spfile and pfile August 10, 2020 12:05:30 PM CEST August 10, 2020 12:05:31 PM CEST Success
Database Upgrade Validation August 10, 2020 12:05:31 PM CEST August 10, 2020 12:05:31 PM CEST Failure

But the upgrade could not be completed successfully using the odacli upgrade-database command. I investigated the logs and tried several times before deciding not to lose more time and to run a manual upgrade instead. There was no other solution; the maintenance window was running.

7c- Upgrade the primary database to 12.1.0.2 manually

Upgrading the database manually covers the steps below:

  1. Switching database to new home
  2. Updating oratab
  3. Start database in upgrade mode
  4. Run upgrade
  5. Upgrade the database configuration in Oracle Clusterware
  6. Startup the database
  7. Run catuppst.sql
  8. Check dba registry and invalid objects
  9. Execute utlrp.sql
  10. Run postupgrade_fixups.sql
  11. Gather statistics
  12. Run post upgrade status tool
  13. Check dba registry and invalid objects
  14. Solve invalid objects
  15. Upgrade time zone
  16. Check data patch
  17. Set compatible parameter value to 12.1.0
  18. Restart database with grid
  19. Update registry metadata
  20. Update the network configuration (tnsnames.ora and listener.ora static entries)

Switching database to new home

Stop the database:
oracle@ODA-PRI:/home/oracle/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : DB1_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 11.2.0.4.0
*************************************
 
oracle@ODA-PRI:/home/oracle/ [DB1] srvctl stop database -d DB1_RZA
 
oracle@ODA-PRI:/home/oracle/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************

Move the spfile and password file from the 11.2.0.4 home to the 12.1.0.2 home:
oracle@ODA-PRI:/home/oracle/ [DB1] cdh
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/ [DB1] cd dbs
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] cp -p initDB1.ora /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] cp -p orapwDB1 /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] mv initDB1.ora initDB1.ora.off
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] mv orapwDB1 orapwDB1.off
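Before touching /etc/oratab, a quick check that both files landed in the new home doesn't hurt (a sketch using the same paths as above):
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] ls -l /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/initDB1.ora /u01/app/oracle/product/12.1.0.2/dbhome_1/dbs/orapwDB1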

Updating oratab

oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] cp -p /etc/oratab ~/oratab.20200810_1243
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] vio
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] diff /etc/oratab ~/oratab.20200810_1243
33c33
< DB1:/u01/app/oracle/product/12.1.0.2/dbhome_1:N # line added by Agent
---
> DB1:/u01/app/oracle/product/11.2.0.4/dbhome_1:N # line added by Agent
oracle@ODA-PRI:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] dmk

Start database in upgrade mode

Trying to start the database in upgrade mode, I got the following error:
ORA-00723: Initialization parameter COMPATIBLE must be explicitly set
This was due to the odacli upgrade command, which had already updated the compatible parameter for the upgrade process. I had to set it back to the 11g value:
SQL> show parameter compatible
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
compatible string 12.0.0
noncdb_compatible boolean FALSE
 
SQL> show spparameter compatible
 
SID NAME TYPE VALUE
-------- ----------------------------- ----------- ----------------------------
* compatible string
* noncdb_compatible boolean
 
SQL> alter system set compatible='11.2.0.0.0' scope=spfile;
 
System altered.

Starting the database in upgrade mode was then successful :
SQL> startup upgrade
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.
 
Total System Global Area 4.5097E+10 bytes
Fixed Size 2936480 bytes
Variable Size 6710886752 bytes
Database Buffers 3.8252E+10 bytes
Redo Buffers 131280896 bytes
Database mounted.
Database opened.

Run upgrade

oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/ [DB1] cd $ORACLE_HOME/rdbms/admin
oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/admin/ [DB1] $ORACLE_HOME/perl/bin/perl catctl.pl -n 4 -l $ORACLE_HOME/diagnostics catupgrd.sql
 
Argument list for [catctl.pl] SQL Process Count n = 4
SQL PDB Process Count N = 0
Input Directory d = 0
Phase Logging Table t = 0
Log Dir l = /u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics
Script s = 0
Serial Run S = 0
Upgrade Mode active M = 0
Start Phase p = 0
End Phase P = 0
Log Id i = 0
Run in c = 0
Do not run in C = 0
Echo OFF e = 1
No Post Upgrade x = 0
Reverse Order r = 0
Open Mode Normal o = 0
Debug catcon.pm z = 0
Debug catctl.pl Z = 0
Display Phases y = 0
Child Process I = 0
 
catctl.pl version: 12.1.0.2.0
Oracle Base = /u01/app/oracle
 
Analyzing file catupgrd.sql
Log files in /u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/catupgrd_catcon_10444.lst
catcon: See /u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/catupgrd*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/catupgrd_*.lst files for spool files, if any
Number of Cpus = 4
SQL Process Count = 4
 
------------------------------------------------------
Phases [0-73] Start Time:[2020_08_10 12:59:54]
------------------------------------------------------
Serial Phase #: 0 Files: 1 Time: 33s
Serial Phase #: 1 Files: 5 Time: 17s
Restart Phase #: 2 Files: 1 Time: 1s
Parallel Phase #: 3 Files: 18 Time: 3s
Restart Phase #: 4 Files: 1 Time: 0s
Serial Phase #: 5 Files: 5 Time: 8s
...
...
...
Serial Phase #:71 Files: 1 Time: 0s
Serial Phase #:72 Files: 1 Time: 0s
Serial Phase #:73 Files: 1 Time: 23s
 
------------------------------------------------------
Phases [0-73] End Time:[2020_08_10 13:09:13]
------------------------------------------------------
...
...
...

Upgrade database configuration in oracle clusterware

The clusterware needs to be updated so that the database is configured with the new Oracle home:
oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/ [DB1] srvctl upgrade database -db DB1_RZA -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1
 
oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/ [DB1] srvctl config database -db DB1_RZA
Database unique name: DB1_RZA
Database name: DB1
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_1
Oracle user: oracle
Spfile: /u02/app/oracle/oradata/DB1_RZA/dbs/spfileDB1.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths: /u02/app/oracle/oradata/DB1_RZA,/u03/app/oracle/
Services:
Type: SINGLE
OSDBA group: dba
OSOPER group: dbaoper
Database instance: DB1
Configured nodes: oda-pri
Database is administrator managed

/etc/oratab should be checked to ensure it reflects the new home as well.
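The check is a simple grep; hypothetical output showing the expected state after the edit:
oracle@ODA-PRI:/home/oracle/ [DB1] grep DB1 /etc/oratab
DB1:/u01/app/oracle/product/12.1.0.2/dbhome_1:N # line added by Agent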

Startup the database

oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************
 
oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/ [DB1] srvctl start database -d DB1_RZA
 
oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : DB1_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 12.1.0.2.0
CDB Enabled : NO
*************************************

Run catuppst.sql

oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/ [DB1] echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/dbhome_1
 
oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/diagnostics/ [DB1] sqh
 
SQL*Plus: Release 12.1.0.2.0 Production on Mon Aug 10 13:27:33 2020
 
Copyright (c) 1982, 2014, Oracle. All rights reserved.
 
Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters option
 
SQL> @?/rdbms/admin/catuppst.sql
...
...
...
PL/SQL procedure successfully completed.
 
 
PL/SQL procedure successfully completed.
 
 
PL/SQL procedure successfully completed.
 
 
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP POSTUP_END 2020-08-10 13:27:44
 
 
Session altered.
 
SQL>

Check dba registry and invalid objects

SQL> set line 300
SQL> col comp_name format a50
 
SQL> select comp_name, status from dba_registry;
 
COMP_NAME STATUS
-------------------------------------------------- -----------
Oracle Database Catalog Views UPGRADED
Oracle Database Packages and Types UPGRADED
Oracle Workspace Manager VALID
Oracle XML Database VALID
 
SQL> SELECT count(*) FROM dba_invalid_objects;
 
COUNT(*)
----------
7768

Execute utlrp.sql

Execute utlrp.sql to recompile all objects :
SQL> @?/rdbms/admin/utlrp.sql
...
...
...
ERRORS DURING RECOMPILATION
---------------------------
2
 
Function created.
 
PL/SQL procedure successfully completed.
 
Function dropped.
 
PL/SQL procedure successfully completed.
 
SQL> SELECT count(*) FROM dba_invalid_objects;
 
COUNT(*)
----------
2
 
SQL>

Run postupgrade_fixups.sql

SQL> @/u01/app/oracle/cfgtoollogs/DB1_RZA/preupgrade/postupgrade_fixups.sql

Gather statistics

SQL> EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
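Depending on the preupgrade recommendations, gathering dictionary statistics may be advisable as well; a sketch, as this was not part of the run shown here:
SQL> EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;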

Run post upgrade status tool

SQL> @?/rdbms/admin/utlu121s.sql

Solve invalid objects

SQL> SELECT distinct object_name FROM dba_invalid_objects order by OBJECT_NAME;
 
OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
VIP_START_TRIGGER
VIP_STOP_TRIGGER
 
SQL> select owner,object_name, object_type, status from dba_objects where object_name in ('VIP_START_TRIGGER','VIP_STOP_TRIGGER');
 
OWNER OBJECT_NAME OBJECT_TYPE STATUS
-------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- ----------------------- -------
DBISERVICES VIP_START_TRIGGER TRIGGER INVALID
DBISERVICES VIP_STOP_TRIGGER TRIGGER INVALID

Both triggers have been recreated to fix this.
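For illustration, a recompile attempt would look as follows; if the triggers still reference objects that changed with the new home, they have to be recreated from their DDL instead, which is what was done here:
SQL> alter trigger DBISERVICES.VIP_START_TRIGGER compile;
SQL> alter trigger DBISERVICES.VIP_STOP_TRIGGER compile;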

Check dba registry and invalid objects

SQL> set line 300
SQL> col comp_name format a50
SQL> select comp_name, version, status from dba_registry;
 
COMP_NAME VERSION STATUS
-------------------------------------------------- ------------------------------ -----------
Oracle Database Catalog Views 12.1.0.2.0 VALID
Oracle Database Packages and Types 12.1.0.2.0 VALID
Oracle Workspace Manager 12.1.0.2.0 VALID
Oracle XML Database 12.1.0.2.0 VALID
 
SQL> SELECT count(*) FROM dba_invalid_objects;
 
COUNT(*)
----------
0
 
SQL> @?/rdbms/admin/utluiobj.sql
.
Oracle Database 12.1 Post-Upgrade Invalid Objects Tool 08-10-2020 13:39:45
.
This tool lists post-upgrade invalid objects that were not invalid
prior to upgrade (it ignores pre-existing pre-upgrade invalid objects).
.
Owner Object Name Object Type
.
 
PL/SQL procedure successfully completed.
 
SQL>

Upgrade time zone

The time zone data has been upgraded with the Oracle scripts upg_tzv_check.sql and upg_tzv_apply.sql:
SQL> select version from v$timezone_file;
 
VERSION
----------
34
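For reference, both scripts are simply run from SQL*Plus, the check before the apply (a sketch, assuming the scripts sit in the current directory):
SQL> @upg_tzv_check.sql
SQL> @upg_tzv_apply.sql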

Check data patch

SQL> select PATCH_ID, VERSION from DBA_REGISTRY_SQLPATCH;
 
PATCH_ID VERSION
---------- --------------------
30691015 12.1.0.2
30805558 12.1.0.2
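Had the SQL patch level not been there, datapatch could have been run manually from the new home (a sketch):
oracle@ODA-PRI:/home/oracle/ [DB1] cd $ORACLE_HOME/OPatch
oracle@ODA-PRI:/u01/app/oracle/product/12.1.0.2/dbhome_1/OPatch/ [DB1] ./datapatch -verbose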

Set compatible parameter value to 12.1.0

SQL> show parameter compatible
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
compatible string 11.2.0.0.0
noncdb_compatible boolean FALSE
 
SQL> show spparameter compatible
 
SID NAME TYPE VALUE
-------- ----------------------------- ----------- ----------------------------
* compatible string 11.2.0.0.0
* noncdb_compatible boolean
 
SQL> alter system set compatible='12.1.0' scope=spfile;
 
System altered.

Restart database with grid

oracle@ODA-PRI:/home/oracle/mwagner/upgrade_TZ/ [DB1] srvctl stop database -d DB1_RZA
 
oracle@ODA-PRI:/home/oracle/mwagner/upgrade_TZ/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : STOPPED
*************************************
 
oracle@ODA-PRI:/home/oracle/mwagner/upgrade_TZ/ [DB1] srvctl start database -d DB1_RZA
 
oracle@ODA-PRI:/home/oracle/mwagner/upgrade_TZ/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : OPEN
DB_UNIQUE_NAME : DB1_RZA
OPEN_MODE : READ WRITE
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PRIMARY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
VERSION : 12.1.0.2.0
CDB Enabled : NO
*************************************

Update registry metadata

To get the primary ODA registry metadata updated with the manual upgrade, I used odacli update-registry command. See my other blog for more details.

[root@ODA-PRI upgrade3]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
d6df9457-e4cd-4c39-b3cb-8d03be3c4598 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
9d2d92d0-3b98-42ac-9f39-9bd6deeb2e16 OraDB19000_home1 19.7.0.0.200414 /u01/app/oracle/product/19.0.0.0/dbhome_1 Configured
73847823-ae83-4bf0-a630-f8884cf4387a OraDB12102_home1 12.1.0.2.200414 /u01/app/oracle/product/12.1.0.2/dbhome_1 Configured
 
[root@ODA-PRI upgrade3]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured d6df9457-e4cd-4c39-b3cb-8d03be3c4598
 
[root@ODA-PRI log]# odacli update-registry -n db -f
 
Job details
----------------------------------------------------------------
ID: 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
Description: Discover Components : db
Status: Created
Created: August 10, 2020 1:58:37 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
 
[root@ODA-PRI log]# odacli describe-job -i 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
 
Job details
----------------------------------------------------------------
ID: 2dbada8a-f76d-44bb-bb6e-c507d52e5ae3
Description: Discover Components : db
Status: Success
Created: August 10, 2020 1:58:37 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Rediscover DBHome August 10, 2020 1:58:37 PM CEST August 10, 2020 1:58:39 PM CEST Success
Rediscover DBHome August 10, 2020 1:58:39 PM CEST August 10, 2020 1:58:41 PM CEST Success
Rediscover DBHome August 10, 2020 1:58:41 PM CEST August 10, 2020 1:58:48 PM CEST Success
Discover DBHome August 10, 2020 1:58:48 PM CEST August 10, 2020 1:58:48 PM CEST Success
Rediscover DB: DB1_RZA August 10, 2020 1:59:41 PM CEST August 10, 2020 1:59:51 PM CEST Success
 
[root@ODA-PRI log]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
d897e7d6-9e2d-45e4-a0d7-a1e232d47f16 DB1 Si 12.1.0.2.200414 false Oltp Odb1 Acfs Configured 73847823-ae83-4bf0-a630-f8884cf4387a

Update network files (tnsnames.ora and listener.ora for static entries)

The tnsnames.ora file is specific to each Oracle home, so the previous 11.2.0.4 tnsnames.ora has to be copied to the new 12.1.0.2 home (if one is used).
The listener.ora file needs to be updated with the new home for each static entry.
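A sketch of the copy, assuming the default network/admin locations:
oracle@ODA-PRI:/home/oracle/ [DB1] cp -p /u01/app/oracle/product/11.2.0.4/dbhome_1/network/admin/tnsnames.ora /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/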

8- Start the application

The application can now be started. The sessions can be listed using the same commands as in part 3 (Stop the application).

9- Test the application

The application needs to be tested. In case there is any critical issue and a rollback of the operation is needed, we can use the standby and do a failover. The old, already-upgraded primary would then have to be deleted and recreated as a standby. If all is OK, we can move forward and bring the standby to the same level.

10- Move the standby database to the new home

I checked homes :
[root@ODA-STD patchs]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
c58cdcfd-e5b2-4041-b993-8df5a5d5ada4 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
b45cfba8-f891-469e-9b45-be1e3e2e010c OraDB12201_home1 12.2.0.1.190115 /u01/app/oracle/product/12.2.0.1/dbhome_1 Configured
60c9afb9-4bfe-4a11-bb96-ea43adf74f3d OraDB12102_home2 12.1.0.2.200414 /u01/app/oracle/product/12.1.0.2/dbhome_2 Configured

The standby database was stopped at the beginning of the process and is still down. I moved the init and password files from the 11.2.0.4 home to the 12.1.0.2 home:
oracle@ODA-STD:/home/oracle/ [DB1] cdh
oracle@ODA-STD:/u01/app/oracle/product/11.2.0.4/dbhome_1/ [DB1] cd dbs
oracle@ODA-STD:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] ls *DB1*
hc_DB1.dat initDB1.ora initDB1.ora.old orapwDB1
oracle@ODA-STD:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] cp -p initDB1.ora /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs
oracle@ODA-STD:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] cp -p orapwDB1 /u01/app/oracle/product/12.1.0.2/dbhome_2/dbs
oracle@ODA-STD:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] mv initDB1.ora initDB1.ora.off
oracle@ODA-STD:/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/ [DB1] mv orapwDB1 orapwDB1.off

Of course, the tnsnames.ora file has to be moved as well if one is used, and listener.ora has to be updated for any static entries. This was not my case.

11- Upgrade the standby database in the clusterware

I upgraded the database in the clusterware :
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/network/admin/ [DB1] which srvctl
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/srvctl
 
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/network/admin/ [DB1] /u01/app/oracle/product/12.1.0.2/dbhome_2/bin/srvctl upgrade database -db DB1_RZB -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_2
 
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/ [DB1] srvctl config database -d DB1_RZB
Database unique name: DB1_RZB
Database name: DB1
Oracle home: /u01/app/oracle/product/12.1.0.2/dbhome_2
Oracle user: oracle
Spfile: /u02/app/oracle/oradata/DB1_RZB/dbs/spfileDB1.ora
Password file:
Domain:
Start options: mount
Stop options: abort
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths: /u02/app/oracle/oradata/DB1_RZB,/u03/app/oracle/
Services:
Type: SINGLE
OSDBA group: dba
OSOPER group: dbaoper
Database instance: DB1
Configured nodes: oda-std
Database is administrator managed

I updated /etc/oratab accordingly as it was not done by srvctl :
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/network/admin/ [DB1] grep DB1 /etc/oratab
DB1:/u01/app/oracle/product/11.2.0.4/dbhome_1:N # line added by Agent
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/network/admin/ [DB1] vi /etc/oratab
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/network/admin/ [DB1] grep DB1 /etc/oratab
DB1:/u01/app/oracle/product/12.1.0.2/dbhome_2:N # line added by Agent

12- Set compatible parameter to 12.1.0 on the standby database

SQL> alter system set compatible='12.1.0' scope=spfile;
 
System altered.

13- Start the standby database

oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/ [DB1] srvctl status database -d DB1_RZB
Instance DB1 is not running on node rzb-oda02
 
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/ [DB1] srvctl start database -d DB1_RZB
 
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/ [DB1] srvctl status database -d DB1_RZB
Instance DB1 is running on node rzb-oda02
 
oracle@ODA-STD:/u01/app/oracle/product/12.1.0.2/dbhome_2/ [DB1] DB1
********* dbi services Ltd. *********
STATUS : MOUNTED
DB_UNIQUE_NAME : DB1_RZB
OPEN_MODE : MOUNTED
LOG_MODE : ARCHIVELOG
DATABASE_ROLE : PHYSICAL STANDBY
FLASHBACK_ON : NO
FORCE_LOGGING : YES
CDB Enabled : NO
*************************************

14- Update the registry metadata on the ODA

I used odacli update-registry command to update the ODA registry metadata :
[root@ODA-STD patchs]# odacli list-dbhomes
 
ID Name DB Version Home Location Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
c58cdcfd-e5b2-4041-b993-8df5a5d5ada4 OraDB11204_home1 11.2.0.4.190115 /u01/app/oracle/product/11.2.0.4/dbhome_1 Configured
b45cfba8-f891-469e-9b45-be1e3e2e010c OraDB12201_home1 12.2.0.1.190115 /u01/app/oracle/product/12.2.0.1/dbhome_1 Configured
60c9afb9-4bfe-4a11-bb96-ea43adf74f3d OraDB12102_home2 12.1.0.2.200414 /u01/app/oracle/product/12.1.0.2/dbhome_2 Configured
 
[root@ODA-STD patchs]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
b15def96-a259-4d3a-a26d-71381142c0bc DB1 Si 11.2.0.4.190115 false Oltp Odb1 Acfs Configured c58cdcfd-e5b2-4041-b993-8df5a5d5ada4
 
[root@ODA-STD patchs]# odacli update-registry -n db -f
 
Job details
----------------------------------------------------------------
ID: f7f9c86b-2a2f-46bc-8195-9f6a76a42cde
Description: Discover Components : db
Status: Created
Created: August 10, 2020 3:46:41 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
 
[root@ODA-STD patchs]# odacli describe-job -i f7f9c86b-2a2f-46bc-8195-9f6a76a42cde
 
Job details
----------------------------------------------------------------
ID: f7f9c86b-2a2f-46bc-8195-9f6a76a42cde
Description: Discover Components : db
Status: Success
Created: August 10, 2020 3:46:41 PM CEST
Message:
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Rediscover DBHome August 10, 2020 3:46:41 PM CEST August 10, 2020 3:46:43 PM CEST Success
Rediscover DBHome August 10, 2020 3:46:43 PM CEST August 10, 2020 3:46:45 PM CEST Success
Rediscover DBHome August 10, 2020 3:46:45 PM CEST August 10, 2020 3:46:48 PM CEST Success
Rediscover DB: DB1_RZB August 10, 2020 3:47:15 PM CEST August 10, 2020 3:47:21 PM CEST Success
 
[root@ODA-STD patchs]# odacli list-databases
 
ID DB Name DB Type DB Version CDB Class Shape Storage Status DbHomeID
---------------------------------------- ---------- -------- -------------------- ---------- -------- -------- ---------- ------------ ----------------------------------------
b15def96-a259-4d3a-a26d-71381142c0bc DB1 Si 12.1.0.2.200414 false Oltp Odb1 Acfs Configured 60c9afb9-4bfe-4a11-bb96-ea43adf74f3d

15- Start the log shipment again from the primary to the standby

High availability with Data Guard

In case FSFO is used, you will need to enable it again :
DGMGRL> enable fast_start failover
Enabled.

You will need to set the protection mode back to the previous one (example maxavailability) :
DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.

You will need to start applying the change vector on the standby :
DGMGRL> edit database DB1_RZB set state=apply-on;
Succeeded.

You will need to start shipping the change vector on the primary :
DGMGRL> edit database DB1_RZA set state=transport-on;
Succeeded.

Check that there is no gap on the standby :
DGMGRL> show database DB1_RZB
 
Database - DB1_RZB
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)
Average Apply Rate: 4.00 KByte/s
Real Time Query: OFF
Instance(s):
DB1
 
Database Status:
SUCCESS

What is mandatory is to have no transport lag and no apply lag :
Transport Lag: 0 seconds (computed 1 second ago)
Apply Lag: 0 seconds (computed 1 second ago)

High availability with dbvisit

Archive log shipment is done through the Linux crontab. The goal is then to re-enable it on both the primary and the standby database using the crontab -e command.

On the primary :
00,10,20,30,40,50 * * * * /u01/app/dbvisit/standby/dbvctl -d DB1 >/tmp/dbvisit_apply_logs_DB1.log 2>&1

On the standby :
05,15,25,35,45,55 * * * * /u01/app/dbvisit/standby/dbvctl -d DB1 >/tmp/dbvisit_apply_logs_DB1.log 2>&1

Use the dbvctl command with the -i option to ensure there is no gap. DB1 is the name of my DDC configuration file:
oracle@ODA-PRI:/home/oracle/ [DB1] /u01/app/dbvisit/standby/dbvctl -d DB1 -i
=============================================================
Dbvisit Standby Database Technology (9.0.02_0_gbd40c486) (pid 84500)
dbvctl started on ODA-PRI: Mon Aug 10 17:26:53 2020
=============================================================
 
Dbvisit Standby log gap report for DB1_RZA at 202008101726:
-------------------------------------------------------------
Description | SCN | Timestamp
-------------------------------------------------------------
Source 13140834499 2020-08-10:17:26:55 +02:00
Destination 13140828019 2020-08-10:16:59:35 +02:00
 
Standby database time lag (DAYS-HH:MI:SS): +00:27:20
 
Report for Thread 1
-------------------
SOURCE
Current Sequence 52884
Last Archived Sequence 52883
Last Transferred Sequence 52883
Last Transferred Timestamp 2020-08-10 17:22:56
 
DESTINATION
Recovery Sequence 52884
 
Transfer Log Gap 0
Apply Log Gap 0
 
=============================================================
dbvctl ended on ODA-PRI: Mon Aug 10 17:26:59 2020
=============================================================

What is mandatory is to have no gap:
Transfer Log Gap 0
Apply Log Gap 0

16- Test a switchover

A switchover can be later tested to ensure everything is working properly.

The article How to migrate High Availability databases on an ODA? first appeared on the dbi services Blog.


ODA appliance creation error and cleanup.pl with option force


I recently faced an interesting issue deploying an ODA X8-2-HA. I would like to share my experience here, hoping it could help some of you.

Error in creating the appliance

After reimaging both node0 and node1 of the ODA X8-2-HA, running configure-firstnet and updating the repository, I went on to create the appliance.
On the ODA X8-2-HA, the appliance creation is run from node0. The provisioning service creation job completed in failure:
[root@oak0 ODA_patch]# odacli describe-job -i 1d6d3c88-a729-4a56-813c-f6751150441c
 
Job details
----------------------------------------------------------------
ID: 1d6d3c88-a729-4a56-813c-f6751150441c
Description: Provisioning service creation
Status: Failure
Created: September 7, 2020 3:12:16 PM CEST
Message: DCS-10001:Internal error encountered: Fail to run root scripts : Check /u01/app/19.0.0.0/grid/install/root_node0_name_2020-09-07_15-41-59-761776313.log for the output of root script.
 
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Provisioning service creation September 7, 2020 3:12:24 PM CEST September 7, 2020 3:45:21 PM CEST Failure
Provisioning service creation September 7, 2020 3:12:24 PM CEST September 7, 2020 3:45:21 PM CEST Failure
network update September 7, 2020 3:12:26 PM CEST September 7, 2020 3:12:40 PM CEST Success
updating network September 7, 2020 3:12:26 PM CEST September 7, 2020 3:12:40 PM CEST Success
Setting up Network September 7, 2020 3:12:26 PM CEST September 7, 2020 3:12:26 PM CEST Success
network update September 7, 2020 3:12:40 PM CEST September 7, 2020 3:12:51 PM CEST Success
updating network September 7, 2020 3:12:41 PM CEST September 7, 2020 3:12:51 PM CEST Success
Setting up Network September 7, 2020 3:12:41 PM CEST September 7, 2020 3:12:41 PM CEST Success
OS usergroup 'asmdba'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:51 PM CEST Success
OS usergroup 'asmoper'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:51 PM CEST Success
OS usergroup 'asmadmin'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:51 PM CEST Success
OS usergroup 'dba'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:51 PM CEST Success
OS usergroup 'dbaoper'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:51 PM CEST Success
OS usergroup 'oinstall'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:51 PM CEST Success
OS user 'grid'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:51 PM CEST Success
OS user 'oracle'creation September 7, 2020 3:12:51 PM CEST September 7, 2020 3:12:52 PM CEST Success
Default backup policy creation September 7, 2020 3:12:52 PM CEST September 7, 2020 3:12:52 PM CEST Success
Backup config metadata persist September 7, 2020 3:12:52 PM CEST September 7, 2020 3:12:52 PM CEST Success
SSH equivalance setup September 7, 2020 3:12:52 PM CEST September 7, 2020 3:12:52 PM CEST Success
Grid home creation September 7, 2020 3:13:08 PM CEST September 7, 2020 3:29:00 PM CEST Success
Creating GI home directories September 7, 2020 3:13:08 PM CEST September 7, 2020 3:13:08 PM CEST Success
Cloning Gi home September 7, 2020 3:13:08 PM CEST September 7, 2020 3:15:13 PM CEST Success
Cloning Gi home September 7, 2020 3:15:13 PM CEST September 7, 2020 3:28:57 PM CEST Success
Updating GiHome version September 7, 2020 3:28:57 PM CEST September 7, 2020 3:29:00 PM CEST Success
Updating GiHome version September 7, 2020 3:28:57 PM CEST September 7, 2020 3:29:00 PM CEST Success
Storage discovery September 7, 2020 3:29:00 PM CEST September 7, 2020 3:40:26 PM CEST Success
Grid stack creation September 7, 2020 3:40:26 PM CEST September 7, 2020 3:45:21 PM CEST Failure
Configuring GI September 7, 2020 3:40:26 PM CEST September 7, 2020 3:41:59 PM CEST Success
Running GI root scripts September 7, 2020 3:41:59 PM CEST September 7, 2020 3:45:21 PM CEST Failure

Troubleshooting the logs, I found that there were errors creating the +DATA disk group:
[root@oak0 ODA_patch]# more /u01/app/19.0.0.0/grid/install/root_node0_name_2020-09-07_15-41-59-761776313.log
...
...
...
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Sep 7 15:45:00 2020
Version 19.8.0.0.0
 
Copyright (c) 1982, 2020, Oracle. All rights reserved.
 
Connected to an idle instance.
 
ASM instance started
 
Total System Global Area 1137173320 bytes
Fixed Size 8905544 bytes
Variable Size 1103101952 bytes
ASM Cache 25165824 bytes
ORA-15032: not all alterations performed
ORA-15036: Disk 'AFD:HDD_E0_S03_1732452488P1' is truncated to 4671744 MB from
7941632 MB.
ORA-15036: Disk 'AFD:HDD_E0_S02_1732457496P1' is truncated to 4671744 MB from
7941632 MB.
ORA-15036: Disk 'AFD:HDD_E0_S01_1732376280P1' is truncated to 4671744 MB from
7941632 MB.
ORA-15036: Disk 'AFD:HDD_E0_S00_1733177556P1' is truncated to 4671744 MB from
7941632 MB.
 
 
create diskgroup DATA NORMAL REDUNDANCY
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15033: disk 'AFD:HDD_E0_S03_1732452488P1' belongs to diskgroup "DATA"
ORA-15033: disk 'AFD:HDD_E0_S02_1732457496P1' belongs to diskgroup "DATA"
ORA-15033: disk 'AFD:HDD_E0_S01_1732376280P1' belongs to diskgroup "DATA"
ORA-15033: disk 'AFD:HDD_E0_S00_1733177556P1' belongs to diskgroup "DATA"
 
 
create spfile='+DATA' from pfile='/u01/app/19.0.0.0/grid/dbs/init_+ASM1.ora'
*
ERROR at line 1:
ORA-17635: failure in obtaining physical sector size for '+DATA'

And I realized that this was because the ODA had previously been used and the storage shelf disks had not been cleared.

Performing Secure Erase of Data on Storage Disks

I then decided to erase the data on the storage disks.

[root@node0_name ~]# ps -ef | grep pmon
grid 5652 1 0 15:45 ? 00:00:00 asm_pmon_+ASM1
root 57503 56032 0 15:59 pts/1 00:00:00 grep --color=auto pmon
 
[root@node0_name ~]# odaadmcli stop oak
2020-09-07 16:02:38.815975760:[init.oak]:[Error : Operation not permitted while software upgrade is in progress ...]  
[root@node0_name ~]# /u01/app/19.0.0.0/grid/bin/crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node0_name'
CRS-2673: Attempting to stop 'ora.asm' on 'node0_name'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node0_name'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node0_name'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node0_name' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node0_name' succeeded
CRS-2677: Stop of 'ora.asm' on 'node0_name' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'node0_name'
CRS-2673: Attempting to stop 'ora.evmd' on 'node0_name'
CRS-2677: Stop of 'ora.ctssd' on 'node0_name' succeeded
CRS-2677: Stop of 'ora.evmd' on 'node0_name' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node0_name'
CRS-2677: Stop of 'ora.cssd' on 'node0_name' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node0_name'
CRS-2673: Attempting to stop 'ora.gipcd' on 'node0_name'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'node0_name'
CRS-2677: Stop of 'ora.driver.afd' on 'node0_name' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'node0_name' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'node0_name' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node0_name' has completed
CRS-4133: Oracle High Availability Services has been stopped.
 
[root@node0_name app]# /opt/oracle/oak/bin/odaeraser.py
Please stop oakd and GI/DB applications before running this program
[root@node0_name app]#

So, as far as the system is concerned, a provisioning is still in progress: we cannot stop oakd and therefore cannot run the odaeraser Python script.

Execute cleanup script

So there was no other choice than running the ODA cleanup script. Erasing the data is also an option of this script, so all good. I did this on both nodes.

[root@node0_name app]# perl /opt/oracle/oak/onecmd/cleanup.pl -griduser grid -dbuser oracle -erasedata
INFO: *******************************************************************
INFO: ** Starting process to cleanup provisioned host node0_name **
INFO: *******************************************************************
WARNING: Secure Erase is an irrecoverable process. All data on the disk
WARNING: will be erased, and cannot be recovered by any means. On X3-2,
WARNING: X4-2, and X5-2 HA, the secure erase process can take more than
WARNING: 10 hours. If you need this data, then take a complete backup
WARNING: before proceeding.
Do you want to continue (yes/no) : yes
INFO: nodes will be rebooted
Do you want to continue (yes/no) : yes
INFO: /u01/app/19.0.0.0/grid/.patch_storage/31548513_Jul_10_2020_12_12_45/files/bin/crsctl.bin
...
...
...
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
HDD_E0_S00_1733177556P1 ENABLED /dev/mapper/HDD_E0_S00_1733177556p1
HDD_E0_S00_1733177556P2 ENABLED /dev/mapper/HDD_E0_S00_1733177556p2
HDD_E0_S01_1732376280P1 ENABLED /dev/mapper/HDD_E0_S01_1732376280p1
HDD_E0_S01_1732376280P2 ENABLED /dev/mapper/HDD_E0_S01_1732376280p2
HDD_E0_S02_1732457496P1 ENABLED /dev/mapper/HDD_E0_S02_1732457496p1
HDD_E0_S02_1732457496P2 ENABLED /dev/mapper/HDD_E0_S02_1732457496p2
HDD_E0_S03_1732452488P1 ENABLED /dev/mapper/HDD_E0_S03_1732452488p1
HDD_E0_S03_1732452488P2 ENABLED /dev/mapper/HDD_E0_S03_1732452488p2
HDD_E0_S04_1733016128P1 ENABLED /dev/mapper/HDD_E0_S04_1733016128p1
HDD_E0_S04_1733016128P2 ENABLED /dev/mapper/HDD_E0_S04_1733016128p2
HDD_E0_S05_1732360364P1 ENABLED /dev/mapper/HDD_E0_S05_1732360364p1
HDD_E0_S05_1732360364P2 ENABLED /dev/mapper/HDD_E0_S05_1732360364p2
HDD_E0_S06_1732360644P1 ENABLED /dev/mapper/HDD_E0_S06_1732360644p1
HDD_E0_S06_1732360644P2 ENABLED /dev/mapper/HDD_E0_S06_1732360644p2
HDD_E0_S07_1732123852P1 ENABLED /dev/mapper/HDD_E0_S07_1732123852p1
HDD_E0_S07_1732123852P2 ENABLED /dev/mapper/HDD_E0_S07_1732123852p2
HDD_E0_S08_1733366676P1 ENABLED /dev/mapper/HDD_E0_S08_1733366676p1
HDD_E0_S08_1733366676P2 ENABLED /dev/mapper/HDD_E0_S08_1733366676p2
HDD_E0_S09_1733005616P1 ENABLED /dev/mapper/HDD_E0_S09_1733005616p1
HDD_E0_S09_1733005616P2 ENABLED /dev/mapper/HDD_E0_S09_1733005616p2
HDD_E0_S10_1733304324P1 ENABLED /dev/mapper/HDD_E0_S10_1733304324p1
HDD_E0_S10_1733304324P2 ENABLED /dev/mapper/HDD_E0_S10_1733304324p2
HDD_E0_S11_1733286472P1 ENABLED /dev/mapper/HDD_E0_S11_1733286472p1
HDD_E0_S11_1733286472P2 ENABLED /dev/mapper/HDD_E0_S11_1733286472p2
HDD_E0_S12_1732005336P1 ENABLED /dev/mapper/HDD_E0_S12_1732005336p1
HDD_E0_S12_1732005336P2 ENABLED /dev/mapper/HDD_E0_S12_1732005336p2
HDD_E0_S13_1733060756P1 ENABLED /dev/mapper/HDD_E0_S13_1733060756p1
HDD_E0_S13_1733060756P2 ENABLED /dev/mapper/HDD_E0_S13_1733060756p2
HDD_E0_S14_1733060744P1 ENABLED /dev/mapper/HDD_E0_S14_1733060744p1
HDD_E0_S14_1733060744P2 ENABLED /dev/mapper/HDD_E0_S14_1733060744p2
SSD_E0_S15_2701225596P1 ENABLED /dev/mapper/SSD_E0_S15_2701225596p1
SSD_E0_S16_2701244384P1 ENABLED /dev/mapper/SSD_E0_S16_2701244384p1
SSD_E0_S17_2701254344P1 ENABLED /dev/mapper/SSD_E0_S17_2701254344p1
SSD_E0_S18_2701254500P1 ENABLED /dev/mapper/SSD_E0_S18_2701254500p1
SSD_E0_S19_2701253460P1 ENABLED /dev/mapper/SSD_E0_S19_2701253460p1
SSD_E0_S20_2617542596P1 ENABLED /dev/mapper/SSD_E0_S20_2617542596p1
SSD_E0_S21_2617528156P1 ENABLED /dev/mapper/SSD_E0_S21_2617528156p1
SSD_E0_S22_2617509504P1 ENABLED /dev/mapper/SSD_E0_S22_2617509504p1
SSD_E0_S23_2617546936P1 ENABLED /dev/mapper/SSD_E0_S23_2617546936p1
...
...
...
INFO: Executing
Start erasing disks on the system
On some platforms, this will take several hours to finish, please wait
Do you want to continue (yes|no) [yes] ?
Number of disks are processing: 0
 
Disk Vendor Model Erase method Status Time(seconds)
e0_pd_00 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_01 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_02 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_03 HGST H7210A520SUN010[18173.972533] reboot: Restarting system
...
...
...

The erase process seems to have run successfully. This can be checked with the -checkHeader option as well:
[root@node0_name ~]# perl /opt/oracle/oak/onecmd/cleanup.pl -checkHeader
DCS config is - /opt/oracle/dcs/conf/dcs-agent.json
INFO: Emulation mode is set to FALSE
Command may take few minutes...
OS Disk Disk Type OAK Header ASM Header(p1) ASM Header(p2)
Error: /dev/sdaa: unrecognised disk label
/dev/sdaa HDD Erased Erased UnKnown
Error: /dev/sdab: unrecognised disk label
/dev/sdab HDD Erased Erased UnKnown
Error: /dev/sdac: unrecognised disk label
/dev/sdac HDD Erased Erased UnKnown
Error: /dev/sdad: unrecognised disk label
/dev/sdad HDD Erased Erased UnKnown
Error: /dev/sdae: unrecognised disk label
/dev/sdae HDD Erased Erased UnKnown
Error: /dev/sdaf: unrecognised disk label
/dev/sdaf HDD Erased Erased UnKnown
Error: /dev/sdag: unrecognised disk label
 
...
...
...

The odaeraser Python script could now even be executed. It added no value other than a check, as the disks had already been cleaned up:
[root@node0_name ~]# /opt/oracle/oak/bin/odaeraser.py
Start erasing disks on the system
On some platforms, this will take several hours to finish, please wait
Do you want to continue (yes|no) [yes] ? yes
Number of disks are processing: 0
 
Disk Vendor Model Erase method Status Time(seconds)
e0_pd_00 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_01 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_02 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_03 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_04 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_05 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_06 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_07 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_08 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_09 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_10 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_11 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_12 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_13 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_14 HGST H7210A520SUN010T SCSI crypto erase success 0
e0_pd_15 HGST HBCAC2DH2SUN3.2T SCSI crypto erase success 0
e0_pd_16 HGST HBCAC2DH2SUN3.2T SCSI crypto erase success 0
e0_pd_17 HGST HBCAC2DH2SUN3.2T SCSI crypto erase success 0
e0_pd_18 HGST HBCAC2DH2SUN3.2T SCSI crypto erase success 0
e0_pd_19 HGST HBCAC2DH2SUN3.2T SCSI crypto erase success 0
e0_pd_20 HGST HBCAC2DH4SUN800G SCSI crypto erase success 0
e0_pd_21 HGST HBCAC2DH4SUN800G SCSI crypto erase success 0
e0_pd_22 HGST HBCAC2DH4SUN800G SCSI crypto erase success 0
e0_pd_23 HGST HBCAC2DH4SUN800G SCSI crypto erase success 0
[root@node0_name ~]#

Force option of the perl cleanup script

I then tried to create the appliance again. And I was surprised that it was still not possible, even after running the cleanup.pl script. The same message was displayed in the web GUI as well: “There was an error in configuring Oracle Database Appliance….For more information about running the cleanup script, see the Deployment and User’s Guide for your model.”



The only solution was to use the force option of the cleanup.pl script! Running the following command solved my problem:
[root@node0_name ~]# perl /opt/oracle/oak/onecmd/cleanup.pl -f

Then all was OK, in the GUI as well.

The article ODA appliance creation error and cleanup.pl with option force first appeared on the dbi services Blog.

Oracle MAA reference architecture and HA, DR, RTO, RPO


By Franck Pachot

I may have mentioned in some previous blog posts that, in my opinion, the names of Oracle Database features make sense in the vendor product management context more than in a user context. I’m not saying that this is good or bad. There are so many features that can be combined and that have evolved over many years. The possible use cases are unlimited. What I see customers doing in Europe is very different from what I have seen in US companies or in Africa, for example. What I’m saying is that most of the time you need a vendor-to-user dictionary when reading Oracle documentation and presentations. I’ll focus here on the MAA reference architecture. Yes, acronyms add to the complexity. MAA means Maximum Availability Architecture. Because when you have had High Availability features for decades, you need another name when you bring a “higher” High Availability.

The MAA reference architecture defines 4 levels of HA and Data Protection: Bronze, Silver, Gold, Platinum. They are very well described. But they are orthogonal to what I see at my customers running Oracle Database. And they are not clear when discussing with architects, where the common terms are: HA, DR, replicas, Availability Zones, Regions…

Products

When I read the MAA Architecture reference, I see a categorization made from the products:

  • 🥉 Bronze: Single instance Enterprise Edition with RMAN backups
  • 🥈 Silver: RAC – Real Application Cluster
  • 🥇 Gold: Data Guard (in SYNC) in addition to RAC
  • 🏆 Platinum: adds Golden Gate and a few other features (EBR, Sharding)

If you have known Oracle for years, have seen the birth of RMAN in Oracle8 (or EBU before), of RAC in Oracle 9i (or OPS before), Data Guard in 9i (or Standby before), and the acquisition of GoldenGate (or used Streams CDC before), then those medals of availability probably ring a bell for you. But explaining that to new Oracle users is difficult.

RTO/RPO

If we quit the product point of view and talk about user experience, here is what matters: how to recover from a server, storage, or datacenter failure with the shortest downtime (RTO – Recovery Time Objective) and the minimal data loss (RPO – Recovery Point Objective). You know what I mean here. “Server failure” is when the instance crashes, or the motherboard burns. You need to restart without compromising the Atomicity and Consistency in ACID. “Storage failure” is when data stored on disk is damaged (block corruption, file dropped, disk lost). You need to restore without compromising the Durability in ACID. This is about the database “Availability” after some infrastructure components failed. This happens rarely, but it definitely happens. You must design your infrastructure to cope with those failures within an accepted RTO/RPO. “Datacenter failure” is when there’s a full power outage, or all network cables are cut by a mechanical shovel, or the data center burns or is underwater, or a plane crashes on it… This may never happen, but in the unlikely case that it does, you need to get the data and the database back. On one hand, you have time for this because everyone knows it is exceptional. But on the other hand, you will have so much work to do on the whole infrastructure that you had better have the database recovery automated, without stress.

🥉 The MAA Bronze provides the following:

  • Server failure: restart the instance and replay the redo log – RPO=0 RTO=minutes
  • Storage failure: restore block, file or database – RPO=0 RTO=hours.
  • Data Center failure: reinstall and restore full backup (with transaction loss) – RPO=hours RTO=days

🥈 The MAA Silver provides, in addition to Bronze, a service protection:

  • Server failure: another instance is immediately usable – RPO=0 RTO=0. It also allows zero downtime for planned server maintenance (like OS updates or changing faulty RAM).
  • Storage failure: same as Bronze because the database files are shared.
  • Data Center failure: same as Bronze (except for an extended cluster over a short distance)

🥇 The MAA Gold brings data protection with physical replication:

  • Server failure: handled by RAC as in Silver. But you may not need RAC, as Data Guard can automate a failover with RPO=0 RTO=minutes.
  • Storage failure: switchover with RPO=0 RTO=minutes.
  • Data Center failure: same as storage failure if latency allows SYNC (depends on distance and application design). Or RPO=minutes if in another region. It allows minimal downtime for planned database maintenance (including upgrades)

🏆 MAA Platinum is based on Gold for database availability but adds features for application availability:

  • Logical replication allows minimal downtime for planned application maintenance (including releases).
  • EBR (Edition-Based Redefinition) allows continuous deployment of the data model and API.
  • Sharding allows keeping the distributed database up after a network partition.

Price and complexity

What also matters is the price (of the option and the processors for the additional nodes):

  • 🥉 Bronze: Possible in Standard Edition. Enterprise Edition allows more agility for large databases (parallel backup/recovery, online operations). The spare server doesn’t have to be licensed.
  • 🥈 Silver: RAC is not available in Standard Edition anymore, and is a +50% option in Enterprise Edition. All nodes must be licensed.
  • 🥇 Gold: with an RPO in minutes, a Standard Edition solution is possible with Dbvisit. Data Guard is available without option in Enterprise Edition, but all nodes must be licensed. Active Data Guard gives more features on the standby.
  • 🏆 Platinum: GoldenGate is an additional product and all nodes must be licensed. The license includes Active Data Guard.

While we are there: some awesome features, like Application Continuity, which brings database high availability to the application without any code change, are available with the RAC or ADG options only.

Probably the most important is the increase of complexity:

  • 🥉 Bronze: Simple, easy to automate (Automatic crash recovery, RMAN Recovery Advisor)
  • 🥈 Silver: RAC adds all possible levels of complexity: shared disks, cluster, network interconnect, grid infrastructure, more listeners… so it requires more skills (network, storage, cluster…) for DBAs (on site and on call)
  • 🥇 Gold: Data Guard is simple to configure, test, operate, automate. It is based on the basic recovery features and nicely automated.
  • 🏆 Platinum: logical replication is a full project by itself, involving operations and development. 2-way replication brings it to the highest level with update conflict resolution.

The complexity is probably the most important point, and also the most overlooked. More complexity increases the cost of ownership. This can be reduced when correctly bundled in engineered systems (Oracle Database Appliance) or managed services (but remember that Oracle allows RAC only on their Cloud). And more complexity may also be a source of unplanned outages: the more components, the more failures. By this I mean: use RAC when you need to (like when RTO=0 is a critical requirement on planned and unplanned outages) and then accept that it has a cost to operate this complexity correctly. And that’s perfect. But if you want to keep it simple and accept an RTO of a few minutes, Data Guard is probably sufficient.

User cases

Here are the most common Oracle Database configurations I see at our customers in Switzerland:

  • SE: simple Standard Edition, which is sufficient for non-critical, not-too-large databases. Performance is probably sufficient after a little tuning. You accept downtime for maintenance operations.
  • SE HA: with additional complexity (Grid Infrastructure) you get an active-passive cluster where you can start a spare server to access the database, for planned or unplanned outages.
  • SE with Dbvisit: this maintains a standby database, a replica with a few minutes of lag (archive log shipping).
  • EE: brings online operations for the DBA and parallel operations, which are useful for backup/recovery of multi-terabyte databases.
  • EE cold failover: like SE HA but with Enterprise Edition features. A bit more complex because you have to define the cluster resources yourself. The advantage is that the passive server doesn’t need to be licensed if used less than 10 days per year.
  • EE with Data Guard: a replica that can be in sync and ready to open with no data loss. It must be licensed, but you can put it on the test server if you accept stopping some test databases in case of DR.
  • EE with Active Data Guard: this helps to use the licensed standby to offload some reporting and the backups.
  • ODA HA with Data Guard: this is RAC where the complexity is simplified by the ODA. But with an ODA you always need Data Guard.

Given that servers can have a lot of cores, the need to load-balance in a cluster is very rare, despite all the “scale-out” trend. RAC (or RAC One Node) is good to reduce planned downtime because you can relocate the services before a maintenance on one server. But it may bring more unplanned outages because of the additional complexity. An unplanned outage can be handled by Data Guard, which protects from disk corruption in addition to keeping the services available. So where is RAC still useful? Consolidation is one case. You build and maintain a cluster and you can balance the load by opening singleton services on the pool of servers. But today, with multitenant and online relocation (or refreshable clone switchover), you may do the same without a cluster.

Of course, if one minute of downtime is not possible at all, then RAC is a must and this justifies the additional complexity. And with it, you reduce the downtime to zero if you properly configured application continuity.

RAC vs. DG as HA vs. DR?

Historically, Real Application Clusters (RAC) was for High Availability (HA) and Data Guard (DG) was for Disaster Recovery (DR), and they still keep those tags in the Oracle documentation and marketing slides. RAC is a cluster technology that keeps the database service available even if a node fails. A physical standby is a log-shipping technology that maintains a standby database on another site in case of data center loss, with small data loss. But things change… Today you can have the standby in SYNC, which means no data loss (RPO=0). And then it can be considered as HA. And the failover can be automated (FSFO – Fast-Start Failover with an observer), which means that the service is available within minutes (an acceptable RTO). FSFO and SYNC are the keys here. Because if you need a manual action, the RTO will be in hours, especially if the failure happens during the night. And if you are not in SYNC, you need a human decision to choose between immediate availability and no data loss.
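For illustration, the key broker settings of such a zero-data-loss, automated-failover setup look like this (a sketch, assuming a broker configuration already exists and an observer host is available):
DGMGRL> edit configuration set protection mode as maxavailability;
DGMGRL> enable fast_start failover;
DGMGRL> start observer;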

Today, in the user’s mind, HA vs. DR is about SYNC or ASYNC. For example, in AWS things are simple: synchronisation across Availability Zones within the same region is HA: small distance, low latency, but it protects from a data center failure. And replication to another region is DR: you are protected from an earthquake or a failure that brings down all AZs, but with eventual consistency, where some transactions did not ship in time.

Talking about AWS, the definitions are easy for HA/DR, but when it comes to Reliability, Fault Tolerance and High Availability, the difference is subtle.

In the Oracle Cloud you can create a Data Guard configuration within an Availability Domain (hopefully across different Fault Domains) or to another region. This is for the DBaaS (PaaS provisioning). The managed one, the Autonomous Database, is running on RAC (to allow rolling patching, keeping the latest patch level without outage) but – as far as I know – is not protected by Data Guard. The “Autonomous Data Guard” borrows the name but is actually a refreshable clone (see https://blog.dbi-services.com/a-serverless-standby-database-called-oracle-autonomous-data-guard). In the cloud, a database is a service. High Availability covers the data in addition to the instance, and this means database replication. RAC alone is not sufficient for HA there.

When talking to users, they often consider the standby database as a “read replica”. And that’s right. It is synchronized with physical replication. However, Oracle reserves the “replication” term for logical replication – Golden Gate. Again, like the silver/bronze/gold/platinum, the names are determined by the product rather than the use case. I like to focus on the user point of view. If you want to protect your database, the best you can do is a physical standby. It is an exact replica. Physically the same as the primary: you can expect the same behaviour in functionality and performance when you failover to it. You can even take a datafile from it to restore to the primary. It is like a backup that is already restored pro-actively. This is Dbvisit Standby in Standard Edition or Data Guard in Enterprise Edition. In both, you can open it as a “read replica” for a daily copy, and re-synchronize during the night. With Active Data Guard you can get this “read replica” synchronized for real-time reporting. All this with very little complexity: it uses the basic recovery mechanisms (duplicate for standby or restore from service, log shipping and redo apply) and simple automation (Dbvisit console for SE, Data Guard Broker for EE). It allows fault tolerance at all layers with quite fast recoverability. This is a huge increase in availability with very little complexity. In some cases, you may need RAC but that is probably for other reasons than simple HA. In some cases, you may need Golden Gate, but that is probably for other reasons as well. When you failover to a logical copy, rather than a physical replica, you have the same issue as when importing a dump rather than a database backup: physical layout of data is different, execution plans can be different, performance can be different.

The MAA reference architecture with its metal tiers probably makes sense for people used to US health insurance contracts. You basically decide the level of protection based on your revenue and take the package that has been built for your “level”. I prefer to look at the requirements and build the simplest solution that fits the needs, and this is why I’ve put a few alternative definitions here. And remember, none of the HA solutions are valid unless you have properly tested them.

The article Oracle MAA reference architecture and HA, DR, RTO, RPO first appeared on the dbi services Blog.

Oracle ADB from a Jupyter Notebook


By Franck Pachot

My first attempt to connect to an Oracle database from a Jupyter Notebook on Google Colab was about one year ago:
https://medium.com/@FranckPachot/a-jupyter-notebook-on-google-collab-to-connect-to-the-oracle-cloud-atp-5e88b12282b0

I’m currently preparing a notebook as a handout from my coming SQL101 presentation where I start with some NoSQL to discover the benefits of RDBMS and SQL. I’m running everything on the Oracle Database because it provides all APIs (NoSQL-like key-value, with SODA, documents with OSON, and of course SQL on relational tables) within the same converged database. The notebook will connect to my Autonomous Database in the Oracle Free Tier so that readers don’t have to create a database themselves to start with it. And the notebook runs on Google Colab which is a free environment where people (with a Gmail account) can run it and change the queries as they want to try new things.

The notebook is there at sql101.pachot.net, but as I said, I’m currently working on it…

In this post, I’m sharing a few tips about how I install and run connections from SQLcl, sqlplus and cx_Oracle. There are probably many improvements possible and that’s one reason I share it in this blog… Feedback welcome!

The Google Colab backend runs Ubuntu 18.04, and in order to run the Oracle Client I need to install libaio:


dpkg -l | grep libaio1 > /dev/null || apt-get install -y libaio1

I test the existence before calling apt-get because I don’t want a “Run all” to take too much time.

Then I download the Instant Client, SQLcl, and the cloud credential wallet to connect to my database, which I’ve put on a public bucket in my Free Tier Object Store:


[ -f instantclient/network/admin/sqlnet.ora ] || wget --continue --quiet https://objectstorage.us-ashburn-1.oraclecloud.com/n/idhoprxq7wun/b/pub/o/sql101.zip && unzip -oq sql101.zip && sed -e "1a export TNS_ADMIN=$PWD/instantclient/network/admin" -e "/^bootStrap/s/$/| cat -s/" -i sqlcl/bin/sql 

I test the existence with the presence of one file (sqlnet.ora)
I hardcode the TNS_ADMIN in the SQLcl script
The -e “/^bootStrap/s/$/| cat -s/” is a dirty workaround for the blank lines bug in SQLcl 20.2 (I’ll remove it when 20.3 is out)
All this is quick and dirty, I admit… I have my presentation to prepare 😉

I’ve built the wallet with passwords as I mentioned in a previous post.

You can also check this notebook I published a few weeks ago if you want to see how to install the Instant Client yourself:
https://twitter.com/FranckPachot/status/1295104246648582146?s=20

Then I call a CREATE_USER procedure I have created in my database. The idea is that a public user is accessible (its password is in the wallet) with minimal privileges, just enough to run this procedure, which creates a unique user for the Colab session.
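
I won’t reproduce the procedure here, but the client side of that idea could look something like this (a sketch: the public account name, its password and the procedure signature are my assumptions; only the credential scheme, derived from the hostname, is the real one):


import socket
import cx_Oracle

# connect with the low-privileged public account whose password is in the wallet
# ("sql101_public" and "public_password" are hypothetical names)
admin = cx_Oracle.connect("sql101_public", "public_password", "sql101_tp")

# the per-session credentials: username = hostname uppercased, password = hostname reversed
username = "SQL101#" + socket.gethostname().upper()
password = "SQL101#" + socket.gethostname()[::-1]

admin.cursor().callproc("CREATE_USER", [username, password])  # hypothetical signature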

The most important part is where I define the Python magics to run SQLcl and sqlplus:


import socket
from IPython.core.magic import register_line_cell_magic
# register %sqlcl and %%sqlcl: run the line (or the cell) through SQLcl in silent mode.
# The credentials are derived from the Colab hostname: the username is the hostname
# uppercased, the password is the hostname reversed (the scheme used by CREATE_USER above).
@register_line_cell_magic
def sqlcl(line,cell=None):
    if cell is None:
      get_ipython().run_cell_magic('script', 'sqlcl/bin/sql -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',line)
    else:
      get_ipython().run_cell_magic('script', 'sqlcl/bin/sql -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',cell)
# register %sqlplus and %%sqlplus for easy run scripts (same credentials, but the sqlplus client)
@register_line_cell_magic
def sqlplus(line,cell=None):
    if cell is None:
      get_ipython().run_cell_magic('script', '/content/instantclient/sqlplus -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',line)
    else:
      get_ipython().run_cell_magic('script', '/content/instantclient/sqlplus -s -L \'"SQL101#'+socket.gethostname().upper()+'"/"SQL101#'+socket.gethostname()[::-1]+'"\'@sql101_tp',cell)

The %sqlcl magic calls SQLcl in silent mode with the line (for %sqlcl) or the cell (for %%sqlcl)
The %sqlplus is similar. The only advantage over SQLcl is that it is faster to start.
Both have the connection string hardcoded, built the same way as the user and password were generated (the username is derived from the host name, and so is the password).
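
In a notebook cell this then looks like the following (a sketch; any sqlplus script works in the cell body):


%%sqlplus
set linesize 120
select sysdate from dual;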

Then I install the Oracle driver for Python:


pip install cx_Oracle

With it I can run SQL queries from Python, or even through SQLAlchemy.
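
For example, a minimal SQLAlchemy round trip could look like this (a sketch with placeholder credentials; in the notebook the real ones are derived from the hostname as shown below):


from sqlalchemy import create_engine

# placeholder user/password; the cx_Oracle dialect uses the same TNS alias as above
engine = create_engine("oracle+cx_oracle://demo_user:demo_password@sql101_tp")
with engine.connect() as conn:
    print(conn.execute("select sysdate from dual").scalar())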

I also load the SQL magic from Catherine Devlin:


%load_ext sql
%config SqlMagic.autocommit=False
%config SqlMagic.autopandas=True

More info on: https://github.com/catherinedevlin/ipython-sql

I define the connection string for both:


import socket,cx_Oracle,os
# cx_Oracle connection for direct access from Python
connection=cx_Oracle.connect('"SQL101#'+socket.gethostname().upper()+'"',"SQL101#"+socket.gethostname()[::-1], "sql101_tp")
# DATABASE_URL is the SQLAlchemy connection string picked up by the %sql magic
os.environ['DATABASE_URL']='oracle://"SQL101#'+socket.gethostname().upper()+'":"SQL101#'+socket.gethostname()[::-1]+'"@sql101_tp'

Then I have 4 ways to run queries:

  • %%sqlplus for fast OCI access
  • %%sqlcl for additional SQLcl features (javascript, SODA)
  • %sql when I want the result as Pandas
  • and directly from Python with the connection defined

The examples are in the SQL101 notebook and you can play with them.
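
To give an idea, the last two of those four ways could look like this (a sketch reusing the connection and DATABASE_URL defined above):


# %sql returns a Pandas DataFrame thanks to SqlMagic.autopandas=True, e.g. in a cell:
#   df = %sql select table_name from user_tables
# and plain cx_Oracle from Python:
cur = connection.cursor()
cur.execute("select owner, count(*) from all_objects group by owner order by 2 desc")
for owner, cnt in cur.fetchmany(5):   # top 5 owners by object count
    print(owner, cnt)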

Just one more thing, which is probably perfectible:


import cx_Oracle, base64
from IPython.core.display import HTML
cursor=connection.cursor()
HTML("If you want to view performance of my database during the last hour: <a download='PerfHub.html' href='data:text/html;base64,"+base64.b64encode(cursor.execute("select dbms_perf.report_perfhub(is_realtime=>1,is_omx =>1,report_level=>'basic',outer_start_time=>created-1/24,selected_start_time=>created) from user_users").fetchone()[0].read().encode('utf-8')).decode('utf-8')+"'>Download PerfHub</a>")

This displays a download link to get the Performance Hub report covering the time since the beginning of my connection (actually the user creation).

The idea is:

  • call dbms_perf.report_perfhub
  • get the row with .fetchone()
  • get the first column with [0]
  • read the CLOB with .read()
  • turn it into bytes with .encode('utf-8')
  • encode it in base64 with base64.b64encode()
  • turn the base64 bytes back into a string with .decode('utf-8')
  • build a data URL with text/html MIME type and base64 encoding
  • display the link ready to click and download
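
Unrolled step by step, the one-liner above reads like this (same names as above, just split for readability):


report_sql = """select dbms_perf.report_perfhub(is_realtime=>1,is_omx =>1,report_level=>'basic',
                outer_start_time=>created-1/24,selected_start_time=>created) from user_users"""
clob = cursor.execute(report_sql).fetchone()[0]   # one row, one column: a CLOB with the report
report = clob.read()                              # read the LOB into a Python string
b64 = base64.b64encode(report.encode('utf-8')).decode('utf-8')
HTML("<a download='PerfHub.html' href='data:text/html;base64," + b64 + "'>Download PerfHub</a>")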

I do that because I prefer to have the performance hub in a plain window, and also because it does not run in an IFRAME as-is.

This is a very powerful environment for demos. You can use it there on Google Colab, connected to my database. Or create your own Oracle Autonomous Database in the always free tier and even run Jupyter in this free tier (see how from Gianni Ceresa)

The article Oracle ADB from a Jupyter Notebook first appeared on the dbi services Blog.

Automatic column formatting in Oracle sqlplus


Column formatting has always been a pain in sqlplus when writing queries at the prompt. Most people use tools like SQL Developer or Quest TOAD, which can scroll horizontally when running queries against a database, but as a consultant you are often still forced to use sqlplus. Here’s the issue: when running e.g. a query on a table T1 (which is a copy of ALL_OBJECTS), the output looks by default as follows and is hard to read:

cbleile@orcl@orcl> create table t1 as select * from all_objects;

Table created.

cbleile@orcl@orcl> select owner, oracle_maintained, object_name from t1 where rownum < 4;

OWNER                                                                                                                            O
-------------------------------------------------------------------------------------------------------------------------------- -
OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
SYS                                                                                                                              Y
TS$

SYS                                                                                                                              Y
ICOL$

SYS                                                                                                                              Y
C_FILE#_BLOCK#

The column width is defined by the maximum length of the data type, i.e. for a VARCHAR2(128) a column of width 128 is used in the output (if the linesize is less than the column width, then the linesize defines the maximum column width displayed).

You can format columns of course:

cbleile@orcl@orcl> column owner format a32
cbleile@orcl@orcl> column object_name format a32
cbleile@orcl@orcl> select owner, oracle_maintained, object_name from t1 where rownum < 4;

OWNER                            O OBJECT_NAME
-------------------------------- - --------------------------------
SYS                              Y TS$
SYS                              Y ICOL$
SYS                              Y C_FILE#_BLOCK#

But running lots of ad hoc queries in sqlplus is quite annoying if you have to format all columns manually.
This has been resolved in sqlcl by using “set sqlformat ansiconsole”:

oracle@oracle-19c6-vagrant:/home/oracle/ [orcl] alias sqlcl="bash $ORACLE_HOME/sqldeveloper/sqldeveloper/bin/sql"
oracle@oracle-19c6-vagrant:/home/oracle/ [orcl] sqlcl cbleile

SQLcl: Release 19.1 Production on Thu Oct 01 08:43:49 2020

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Password? (**********?) *******
Last Successful login time: Thu Oct 01 2020 08:43:51 +01:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0


SQL> set sqlformat ansiconsole
SQL> select owner, oracle_maintained, object_name from t1 where rownum < 4;
OWNER   ORACLE_MAINTAINED   OBJECT_NAME      
------- ------------------- ----------------
SYS     Y                   TS$              
SYS     Y                   ICOL$            
SYS     Y                   C_FILE#_BLOCK#   

In sqlcl all rows up to “pagesize” are pre-fetched and the column format is adjusted per page to the maximum length per column. E.g.:

SQL> set pagesize 1
SQL> select 'short' val from dual
  2  union all
  3  select 'looooooooooooooooooooooooooooooooooooooooooooooooooooooong' val from dual;
VAL     
-------
short   

VAL                                                          
------------------------------------------------------------
looooooooooooooooooooooooooooooooooooooooooooooooooooooong   

SQL> set pagesize 2
SQL> select 'short' val from dual
  2  union all
  3  select 'looooooooooooooooooooooooooooooooooooooooooooooooooooooong' val from dual;
VAL                                                          
------------------------------------------------------------
short   
looooooooooooooooooooooooooooooooooooooooooooooooooooooong   

REMARK: Due to the algorithm in sqlcl, you can force sqlcl to crash with sqlformat ansiconsole if it does not have enough memory to pre-fetch the data for a single page, e.g. when lots of data is returned and the maximum pagesize (50000) is set:

SQL> set sqlformat ansiconsole
SQL> set pagesize 50000
SQL> select 
  2  'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' a,
  3  'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' b,
  4  'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' c,
  5  'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' d,
....
315  'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' y3,
316  'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' z3
317  from all_objects;
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
	at java.util.Arrays.copyOfRange(Arrays.java:3664)
	at java.lang.String.<init>(String.java:207)
	at oracle.sql.CharacterSetUTF.toStringWithReplacement(CharacterSetUTF.java:134)
	at oracle.sql.CHAR.getStringWithReplacement(CHAR.java:307)
	at oracle.sql.CHAR.toString(CHAR.java:318)
	at oracle.sql.CHAR.stringValue(CHAR.java:411)
	at oracle.dbtools.raptor.nls.DefaultNLSProvider.format(DefaultNLSProvider.java:208)
	at oracle.dbtools.raptor.nls.OracleNLSProvider.format(OracleNLSProvider.java:214)
	at oracle.dbtools.raptor.utils.NLSUtils.format(NLSUtils.java:187)
	at oracle.dbtools.raptor.format.ANSIConsoleFormatter.printColumn(ANSIConsoleFormatter.java:149)
	at oracle.dbtools.raptor.format.ResultSetFormatterWrapper.print(ResultSetFormatterWrapper.java:274)
	at oracle.dbtools.raptor.format.ResultSetFormatterWrapper.print(ResultSetFormatterWrapper.java:222)
	at oracle.dbtools.raptor.format.ResultsFormatter.print(ResultsFormatter.java:518)
	at oracle.dbtools.db.ResultSetFormatter.formatResults(ResultSetFormatter.java:124)
	at oracle.dbtools.db.ResultSetFormatter.formatResults(ResultSetFormatter.java:70)
	at oracle.dbtools.raptor.newscriptrunner.SQL.processResultSet(SQL.java:798)
	at oracle.dbtools.raptor.newscriptrunner.SQL.executeQuery(SQL.java:709)
	at oracle.dbtools.raptor.newscriptrunner.SQL.run(SQL.java:83)
	at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.runSQL(ScriptRunner.java:404)
	at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.run(ScriptRunner.java:230)
	at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:344)
	at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:227)
	at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.process(SqlCli.java:404)
	at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.processLine(SqlCli.java:415)
	at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.startSQLPlus(SqlCli.java:1249)
	at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.main(SqlCli.java:491)
oracle@oracle-19c6-vagrant:/home/oracle/ [orcl] 

But back to sqlplus: to address the issue with column formatting, several solutions were developed. E.g. Tom Kyte provided a procedure print_table over 20 years ago to list each column and its value on a separate line:

cbleile@orcl@orcl> exec print_table('select owner, oracle_maintained, object_name from t1 where rownum < 4');
OWNER                 : SYS
ORACLE_MAINTAINED     : Y
OBJECT_NAME           : TS$
-----------------
OWNER                 : SYS
ORACLE_MAINTAINED     : Y
OBJECT_NAME           : ICOL$
-----------------
OWNER                 : SYS
ORACLE_MAINTAINED     : Y
OBJECT_NAME           : C_FILE#_BLOCK#
-----------------

PL/SQL procedure successfully completed.

In more recent versions, the same can be done with xmltable. See e.g. here.

That is perfect when querying just a couple of rows.

Alternatively, some people use a terminal emulator which provides horizontal scrolling, like terminator on Linux (see e.g. here).

What I wanted to provide in this blog is another solution. Usually the issue is with VARCHAR2 output. So I asked myself: why not format all VARCHAR2 columns of a table to their average length? I.e. I created a script colpp_table.sql (colpp, because my initial objective was to provide a column width per page, like in sqlcl) which takes the statistic avg_col_len from ALL_TAB_COLUMNS. To run the script I have to provide 2 parameters: the owner and the table name used in my query later on:

cbleile@orcl@orcl> !more colpp_table.sql
set termout off heading off lines 200 pages 999 trimspool on feed off timing off verify off
spool /tmp/&1._&2..sql
select 'column '||column_name||' format a'||to_char(decode(nvl(avg_col_len,data_length),0,1,nvl(avg_col_len,data_length))) 
from all_tab_columns 
where owner=upper('&1.') 
and table_name=upper('&2.') 
and data_type in ('VARCHAR2','NVARCHAR2');
spool off
@/tmp/&1._&2..sql
set termout on heading on feed on timing on verify on

cbleile@orcl@orcl> @colpp_table CBLEILE T1

I.e. a temporary script /tmp/<owner>_<table_name>.sql gets written with format commands for all VARCHAR2 (and NVARCHAR2) columns of the table. That temporary script is then executed automatically:

cbleile@orcl@orcl> !cat /tmp/CBLEILE_T1.sql

column OWNER format a5
column OBJECT_NAME format a36
column SUBOBJECT_NAME format a2
column OBJECT_TYPE format a10
column TIMESTAMP format a20
column STATUS format a7
column TEMPORARY format a2
column GENERATED format a2
column SECONDARY format a2
column EDITION_NAME format a1
column SHARING format a14
column EDITIONABLE format a2
column ORACLE_MAINTAINED format a2
column APPLICATION format a2
column DEFAULT_COLLATION format a4
column DUPLICATED format a2
column SHARDED format a2

Now the formatting looks much better without having to format each column manually:

cbleile@orcl@orcl> @colpp_table CBLEILE T1
cbleile@orcl@orcl> select owner, oracle_maintained, object_name from t1 where rownum < 4;

OWNER OR OBJECT_NAME
----- -- ------------------------------------
SYS   Y  TS$
SYS   Y  ICOL$
SYS   Y  C_FILE#_BLOCK#

3 rows selected.

But what happens when selecting from a view? The column avg_col_len in all_tab_columns is NULL for views:

cbleile@orcl@orcl> create view t1v as select * from t1;
cbleile@orcl@orcl> column column_name format a21
cbleile@orcl@orcl> select column_name, avg_col_len from all_tab_columns where owner=user and table_name='T1V';

COLUMN_NAME           AVG_COL_LEN
--------------------- -----------
EDITIONABLE
ORACLE_MAINTAINED
APPLICATION
DEFAULT_COLLATION
DUPLICATED
SHARDED
CREATED_APPID
CREATED_VSNID
MODIFIED_APPID
MODIFIED_VSNID
OWNER
OBJECT_NAME
SUBOBJECT_NAME
OBJECT_ID
DATA_OBJECT_ID
OBJECT_TYPE
CREATED
LAST_DDL_TIME
TIMESTAMP
STATUS
TEMPORARY
GENERATED
SECONDARY
NAMESPACE
EDITION_NAME
SHARING

26 rows selected.

My idea was to do the following: why not take the “bytes” computation per column from the optimizer and divide it by the number of rows returned by the view? I.e.

cbleile@orcl@orcl> explain plan for select owner from t1v;

Explained.

cbleile@orcl@orcl> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	     | 67944 |   331K|   372   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T1   | 67944 |   331K|   372   (1)| 00:00:01 |
--------------------------------------------------------------------------

8 rows selected.

Now I just calculate Bytes/Rows. As I selected just one column, this gives me approximately its avg_col_len:

cbleile@orcl@orcl> select 331000/67944 from dual;

331000/67944
------------
  4.87165901

That value is close to the avg_col_len statistic of the underlying table:

cbleile@orcl@orcl> select avg_col_len from user_tab_columns where table_name='T1' and column_name='OWNER';

AVG_COL_LEN
-----------
          5

So the remaining question was where to get the bytes and cardinality values from. They are in the plan_table:

cbleile@orcl@orcl> select bytes, cardinality, ceil(bytes/cardinality) avg_col_len from plan_table where id=0;

     BYTES CARDINALITY AVG_COL_LEN
---------- ----------- -----------
    339720       67944           5

With that information I had everything to create a script colpp_explain.sql:

cbleile@orcl@orcl> !more colpp_explain.sql
set termout off heading off lines 200 pages 999 trimspool on feed off timing off verify off
set serveroutput on size unlimited
spool /tmp/&1._&2..sql
declare
   avg_col_len number;
begin
   for i in (select column_name from all_tab_columns where owner=upper('&1.') and table_name=upper('&2.') and data_type in ('VARCHAR2','NVARCHAR2')) loop
      delete from plan_table;
      execute immediate 'explain plan for select '||i.column_name||' from &1..&2.';
      select ceil(bytes/cardinality) into avg_col_len from plan_table where id=0;
      dbms_output.put_line('column '||i.column_name||' format a'||to_char(avg_col_len+1));
   end loop;
end;
/
spool off
@/tmp/&1._&2..sql
set termout on heading on feed on timing on verify on
set serveroutput off

I.e. I’m looping through all columns of type VARCHAR2 (or NVARCHAR2) of the view and run an

explain plan for
select <column> from <view>;

With that information I can compute the avg_col_len by dividing the bytes by the cardinality and add the column formatting command to a script, which I finally execute.

cbleile@orcl@orcl> @colpp_explain CBLEILE T1V
cbleile@orcl@orcl> !cat /tmp/CBLEILE_T1V.sql
column EDITIONABLE format a3
column ORACLE_MAINTAINED format a3
column APPLICATION format a3
column DEFAULT_COLLATION format a5
column DUPLICATED format a3
column SHARDED format a3
column OWNER format a6
column OBJECT_NAME format a37
column SUBOBJECT_NAME format a3
column OBJECT_TYPE format a11
column TIMESTAMP format a21
column STATUS format a8
column TEMPORARY format a3
column GENERATED format a3
column SECONDARY format a3
column EDITION_NAME format a67
column SHARING format a15

cbleile@orcl@orcl> select owner, oracle_maintained, object_name from t1v where rownum < 4;

OWNER  ORA OBJECT_NAME
------ --- -------------------------------------
SYS    Y   TS$
SYS    Y   ICOL$
SYS    Y   C_FILE#_BLOCK#

3 rows selected.

To use a single script for tables and views, I created a simple wrapper script around colpp_table.sql and colpp_explain.sql:

cbleile@orcl@orcl> !cat colpp.sql
set termout off heading off lines 200 pages 999 trimspool on feed off timing off verify off
define var=colpp_explain.sql
column objt new_value var
select decode(object_type,'TABLE','colpp_table.sql','colpp_explain.sql') objt from all_objects where owner=upper('&1.') and object_name=upper('&2.');
@@&var. &1. &2.
set termout on heading on feed on timing on verify on

I.e. if the object is a table I call colpp_table.sql, otherwise I call colpp_explain.sql.

Finally it looks as follows:

For the table:

cbleile@orcl@orcl> @colpp CBLEILE T1
cbleile@orcl@orcl> select owner, oracle_maintained, object_name from t1v where rownum < 4;

OWNER OR OBJECT_NAME
----- -- ------------------------------------
SYS   Y  TS$
SYS   Y  ICOL$
SYS   Y  C_FILE#_BLOCK#

For the view:

cbleile@orcl@orcl> @colpp CBLEILE T1V
cbleile@orcl@orcl> select owner, oracle_maintained, object_name from t1v where rownum < 4;

OWNER  ORA OBJECT_NAME
------ --- -------------------------------------
SYS    Y   TS$
SYS    Y   ICOL$
SYS    Y   C_FILE#_BLOCK#

I.e. with a call to colpp.sql I can format all VARCHAR columns of a table or a view. It’s of course not perfect, but it is easy, quick and better than the default settings. You may even extend the scripts to also provide the heading per column, or to accept a sql_id or a script as parameters to colpp.sql.

The article Automatic column formatting in Oracle sqlplus first appeared on the dbi services Blog.

CLUSTER


By Franck Pachot

Statistically, my blog posts starting with a single SQL keyword in the title (like COMMIT and ROLLBACK) are not fully technical ones, but about moves. Same here. It is more about community engagement and people sharing. And about a friend. And clusters, of course…


In 2020, because of this COVID virus, we try to avoid clusters of people. And everybody in the community suffers from that, because we have no, or very few, physical conferences where we can meet. This picture is from 6 years ago, 2014, my first Oracle Open World. I was not a speaker, I was not an ACE. Maybe my first and last conference without being a speaker… I was already a blogger and I was already on Twitter. And on Wednesday evening, after the blogger’s meetup, I missed the Aerosmith concert. For much better: a dinner at Scoma’s with these well-known people I was meeting for the first time. It is not about being with famous people. It is about talking with smart people, who have long experience in this technical community and are a good inspiration on many topics – tech and soft skills. Look at who is taking the picture, visible in the mirror replica (as in a MAA Gold architecture😉). Conferences are great to cluster people from everywhere. There are small clusters (like this dinner) and big clusters (like the concert). All are good ways to meet people; it depends on your personality where you feel more comfortable. How did I get there? Just by following Ludovico… Because it was his 2nd OOW, because he likes to meet people, and he likes to share with others. So he was my guide there. Actually, this story is told in the blog post he wrote when I became an ACE Director. Now it is my turn to write something about his move to the Oracle MAA team. You know MAA, where RAC is the main pillar: you cluster some nodes that work together for better availability. Well… the team he is joining is also a nice cluster of smart people, enhancing the products and helping their developers and their users.

I’ve been working with Ludovico Caldara as a colleague, as well as a competitor, and have seen him at conferences and outside of professional places as well. That’s how I know how great it is that he moves to Oracle, into the team that manages the products which are the bricks for the future (the cloud managed ‘autonomous’ database). Because he is always there to understand and help people. Anywhere. Let me take one small example: we are in the Tram 18 from CERN to Geneva (maybe going to Coin Mousse). Sitting and talking. A young kid nearby, with his mum, is crying. In less time than an interconnect ping needs to detect a network issue, Ludo gets it and proposes to move over by one seat, still talking. Because he understood immediately that the kid, tired in the late afternoon, wanted to be near the window. What better skills for a Product Manager than catching a problem that may not have been explained clearly, and finding an easy solution that pleases everyone with minimal effort?

Talking about clusters, database performance is all about clustering data that you want to process together. Oracle Database has many features for that. It can be done by storing rows pre-joined (the old CLUSTER, the materialized views with amazing refresh and rewrite capabilities, or key-value JSON documents like through the SODA API). Or storing columns together for faster analytics (HCC, In-Memory Column Store). Or storing related rows together (Partitioning, Index Organized Tables, Attribute Clustering). Yes, attribute clustering is awesome: it tries to store related data nearby, without forcing it when not possible. And it is the same with people: meet, talk and share, all in a good mood, with common goals, to work better together. The syntax for attribute clustering, available in any Enterprise Edition since 12c, is:

ALTER TABLE people ADD CLUSTERING BY INTERLEAVED ORDER (top_skills, passion, personality, engagement, caring, listening, helping, humour, positivity, loyalty, ethics)

That’s a lot of attributes to cluster together, and Ludovico has them all, showing them to his lucky colleagues, managers, customers, friends,…
As an Oracle advocate, user, partner, customer,… I’m so happy that he is joining Oracle, and especially in that team!

Ludo wrote a blog post when I joined him as an ACE Director. Oracle employees cannot stay in this advocacy program, so I’m writing this post when he is leaving it. As Jennifer, from the Oracle ACE program, says: being an Oracle employee is The only acceptable way for an amazing Oracle ACE Director to leave the program. But, of course, Product Managers are always in contact with ACEs. If you want to contribute, please have a look at: https://developer.oracle.com/ace/. This advocacy program helps you to be in contact with Oracle product managers, other advocates, and users. For the benefit of all. And it is an awesome cluster of smart people, physically or virtually.

The article CLUSTER first appeared on the dbi services Blog.
