
Oracle disables your multitenant option when you run on EC2


I have installed Oracle 19.6 on an EC2 instance for our Multitenant Workshop training. And of course, during the workshop we create a lot of PDBs. If you haven't paid for the Enterprise Edition plus the Multitenant Option you can create at most 3 pluggable databases. But with this option you can create up to 252 pluggable databases. Is it worth the price, which according to the public price list is USD 47,500 + 17,500 per processor (which means per core, because Oracle doesn't apply the core factor when your Intel processors run in the AWS Cloud, according to the Authorized Cloud Environments paper)? Probably not, because Oracle detects where you run and bridles some features depending on whether you are on the Dark or the Light Side of the public cloud (according to their criteria, of course).

At one point I have 3 pluggable databases in my CDB:


SQL> show pdbs
   CON_ID     CON_NAME    OPEN MODE    RESTRICTED
_________ ____________ ____________ _____________
        2 PDB$SEED     READ ONLY    NO
        3 CDB1PDB01    MOUNTED
        4 CDB1PDB03    MOUNTED
        5 CDB1PDB02    MOUNTED

I want to create a 4th one:


SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

It fails. The maximum number of pluggable databases is defined by MAX_PDBS, but I defined nothing in my SPFILE:


SQL> show spparameter max_pdbs
SID NAME     TYPE    VALUE
--- -------- ------- -----
*   max_pdbs integer

I thought that the default was 4098 (which is incorrect anyway as you cannot create more than 4096) but it is actually 5 here:


SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 5

Ok… this parameter is supposed to count the number of user pluggable databases (the ones with CON_ID>2) and I have 3 of them here. The limit is 5 and I get an error saying that I've reached the limit. That's not the first time I've seen wrong math with this parameter. But there's worse: I cannot change it:


SQL> alter system set max_pdbs=6;

alter system set max_pdbs=6
 *
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-65334: invalid number of PDBs specified

I can change it in the SPFILE but it doesn’t help me to create more pluggable databases:


SQL> alter system set max_pdbs=200 scope=spfile;

System altered.

SQL> startup force;

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               587202560 bytes
Database Buffers           1543503872 bytes
Redo Buffers                  7639040 bytes
Database mounted.
Database opened.

SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 200

SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Something bridles me. There's a MOS Note, ORA-65010 When Oracle Database Hosted on AWS Cloud (Doc ID 2328600.1), about the same problem, but that was in 12.1.0.2 (before MAX_PDBS was introduced) and it was supposed to be fixed in the AUG 2017 PSU. But here I am, 3 years later, in 19.6 (the January 2020 Release Update of the latest version available on-premises).

So, Oracle limits the number of pluggable databases when we run on a public cloud provider that is not the Oracle Public Cloud. This limitation is not documented in the licensing documentation, which mentions 252 as the Enterprise Edition limit, and I see nothing about "Authorized Cloud Environments" limitations for this item. This, and the fact that it can come and go with Release Updates, puts customers at risk when running on AWS EC2: financial risk and availability risk. I think there are only two choices, in the long term, when you want to run your database in a cloud: go to Oracle Cloud or leave for another database.

How does the Oracle instance know on which public cloud you run? All cloud platforms provide some metadata through an HTTP API. I have straced all sendto() and recvfrom() system calls when starting the instance:


strace -k -e trace=recvfrom,sendto -yy -s 1000 -f -o trace.trc sqlplus / as sysdba <<<'startup force'

And I searched for Amazon and AWS here:
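
A quick way to find them is to grep the trace file written by the strace command above (a sketch; the exact matches and the kgcs*/kscs* function names in the -k stack traces vary with the version):


grep -inE 'amazon|aws' trace.trc | head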

This is clear: the instance has a function to detect the cloud provider (kgcs_clouddb_provider_detect) when initializing the SGA in a multitenant architecture (kpdbInitSga) with the purpose of detecting non-oracle clouds (kscs_is_non_oracle_cloud). This queries the AWS metadata (documented on Retrieving Instance Metadata):


[oracle@ora-cdb-1 ~]$ curl http://169.254.169.254/latest/meta-data/services/domain
amazonaws.com/

When the Oracle software sees the name of the enemy in the domain name amazonaws.com, it sets an internal limit on the number of pluggable databases that overrides the MAX_PDBS setting. Ok, I don't need this metadata, and I'm root on EC2, so my simple workaround is to block this metadata API:


[root@ora-cdb-1 ~]# iptables -A OUTPUT -d 169.254.169.254  -j REJECT
[root@ora-cdb-1 ~]# iptables -L
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
REJECT     udp  --  anywhere             10.0.0.2             udp dpt:domain reject-with icmp-port-unreachable
REJECT     all  --  anywhere             10.0.0.2             reject-with icmp-port-unreachable

Then restart the instance and it works: I can set or reset MAX_PDBS and create more pluggable databases.
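
For example (a sketch of the commands, assuming the instance was restarted after adding the iptables rule):


SQL> alter system set max_pdbs=200 scope=spfile;
SQL> startup force
SQL> create pluggable database CDB1PDB04 from CDB1PDB03;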

If, for whatever reason, I want to revert, I can remove the rule:


[root@ora-cdb-1 ~]# iptables -D OUTPUT -d 169.254.169.254  -j REJECT


Finally, because there have been many bugs with the MAX_PDBS soft limit, there's a parameter to disable it, and this also disables the hard limit:


SQL> alter system set "_cdb_disable_pdb_limit"=true scope=spfile;
System altered.

Thanks to Mauricio Melnik for the heads-up on that.

However, with this parameter you can no longer control the maximum number of PDBs, so don't forget to monitor your AUX_COUNT in DBA_FEATURE_USAGE_STATISTICS.
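
A possible check (a sketch; the exact feature name may differ between versions):


SQL> select name, aux_count, last_sample_date from dba_feature_usage_statistics where name like 'Oracle Multitenant%';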

Here was my discovery when preparing the multitenant workshop lab environment. Note that, given the current situation where everybody works from home when possible, we are ready to give this training, full of hands-on exercises, through Microsoft Teams and AWS EC2 virtual machines. Two days to be comfortable when moving to the CDB architecture, which is what should be done this year if you plan to stay with Oracle Database for future versions.

Update 27-MAR-2020

In order not to sound too negative here: this limit on AWS platforms has been removed in the past, and it may be a bug re-introduced with the change from 1 to 3 PDBs in Standard Edition.

The article Oracle disables your multitenant option when you run on EC2 first appeared on Blog dbi services.


Oracle recovery concepts


A while ago I published a Twitter thread on some Oracle recovery concepts. For those who are not following Twitter, I'm putting the whole thread here:
 

🔴⏬ Here I start a thread about some Oracle Database concepts. We will see how far it goes - all questions/comments welcome.

🔴⏬ A database (or DBMS - database management system) stores (for short and long term) and manipulates (from many concurrent users/devices) your #data.

🔴⏬ #data is logically structured (tablespaces, schemas, tables, columns, datatypes, constraints,…). The structure is described by #metadata.

🔴⏬ Stored #data (tables and indexes) is physically structured (blocks, extents, segments, datafiles), which is also maintained by #metadata

🔴⏬ The logical #metadata and some physical #metadata are stored in the #dictionary, also known as the #catalog

🔴⏬ The #dictionary is also data itself: it is the system data describing the user data. Its own metadata is fixed, hardcoded in the software for #bootstrap

🔴⏬ The set of persistent files that stores system data (metadata for user data) and user data is what we call (with Oracle) the #database

🔴⏬ The database files are internally referenced by an identifier (file_id, file#, Absolute File Number,…). The file names are not defined in the #dictionary but (only) in the main metadata file for the database, the #controlfile

🔴⏬ So, we have user #data and its #metadata (system data) in #datafiles. Where is metadata for system data? The code that contains this metadata (structures), and the functions to manipulate data, is the database #software

🔴⏬ Oracle database software installed on the server is often referred by its install location environment variable, the $ORACLE_HOME

🔴⏬ As with any software, the Oracle Database binary code is loaded by the OS to be run by multiple #process, which work in #memory

🔴⏬ The processes and memory running the Oracle software for one database system is what is called an Oracle #instance

🔴⏬ I think the #instance was simply called the Oracle 'system' at some time, as we identify it with the Oracle System ID (ORACLE_SID) and modify it with ALTER SYSTEM

🔴⏬ The #instance processes open the database files when needed, to read or write data (user #data or #metadata), when started in state #OPEN

🔴⏬ Before being in #OPEN state, where all database files are opened, the instance must read the list of database files from the #controlfile, when the state is #MOUNT

🔴⏬ Multiple instances can open the same database from different nodes, in the case of Real Application Clusters (RAC), and synchronize themselves through the #shared storage and the private network #interconnect

🔴⏬ For the instance to know which #controlfile to read, we provide its name as an instance parameter, which is read when the instance is started in its first state: #NOMOUNT

🔴⏬Those parameters (memory sizes, database to open, flags…) are stored on the server in the instance configuration file, the server-side parameter file: the #spfile

🔴⏬ The #spfile is in binary format, stored on the server, updated by the instance. When we create a new database we create the spfile from a #pfile

🔴⏬ The #pfile is a simple text file that lists the instance parameter names and value (and comments). It is also referred to as #init.ora

🔴⏬ So, the #spfile or #pfile is the instance metadata used to open the #controlfile, which is the database metadata, which is used to open the dictionary, which is the user data metadata used to… but at the root is the $ORACLE_SID

🔴⏬ $ORACLE_SID identifies which #spfile or #pfile to read when starting the instance in #nomount. It is an #environment #variable

🔴⏬ The instance will read by default, in $ORACLE_HOME/dbs, spfile$ORACLE_SID.ora or init$ORACLE_SID.ora or init.ora when not provided with a specific #pfile
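
As an aside, a minimal SQL*Plus sketch of these variants (the pfile path is just an example):


-- default lookup in $ORACLE_HOME/dbs: spfile$ORACLE_SID.ora, then init$ORACLE_SID.ora
SQL> startup nomount
-- or with an explicit pfile:
SQL> startup nomount pfile=/tmp/initTEST.ora
-- and create the server parameter file from it:
SQL> create spfile from pfile='/tmp/initTEST.ora';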

🔴⏬ That's for starting the #instance in the first place. But when the instance is running we can connect to it by attaching our process to the shared memory structure, called the System Global Area: #SGA

🔴⏬ The #SGA shared memory is identified with a key that is derived from ORACLE_SID and ORACLE_HOME. If no instance was started with the same ORACLE_SID and ORACLE_HOME (literally the same) you get a "Connected to an idle instance"

🔴⏬ Of course, connecting by attaching to the SGA is possible only from the same host. This protocol is called #bequeath connection

🔴⏬ In order to connect remotely, there’s a process running on the server which knows the $ORACLE_SID and $ORACLE_HOME. It is the local #listener

🔴⏬ The local listener listens on a TCP/IP port for incoming connection and handles the creation of process and attach to the SGA, just by being provided with the desired #service_name

🔴⏬ So, how does the local listener know which #service_name goes to which instance (and then which database)? It can be listed in the listener’s configuration, that’s #static registration

🔴⏬ But in High Availability, where multiple instances can run one service, it is the instance which tells its local listener which service it runs. That's #dynamic registration

🔴⏬ Of course, the connection can start a session only when authorized (CREATE SESSION privilege) so the user/password hash is verified and also the privileges. All this is stored in the dictionary. V$SESSION_CONNECT_INFO shows that as #database #authentication

🔴⏬ Database authentication can be done only when the database is opened (access to the dictionary). Not possible in #nomount or #mount. For these, the system passwords are cached in a #password file

🔴⏬ The password file is found in $ORACLE_HOME/dbs and its name is orapw$ORACLE_SID. Once created, password changes must be done from the database so that the dictionary and the password file stay in sync
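
As an aside, a hedged sketch of creating and checking the password file (orapwd prompts for the SYS password when none is given on the command line):


orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID entries=10
sqlplus -s / as sysdba <<< 'select username, sysdba, sysoper from v$pwfile_users;'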

🔴⏬ When connecting locally with bequeath protocol, belonging to a privileged system group may be sufficient. The password provided is not even verified in that case. That uses #OS authentication

🔴⏬ #OS authentication is one case of passwordless authentication. By default, the Linux users in group 'dba' have passwordless bequeath (i.e. local) access with the highest privilege #sysdba

🔴⏬ I said that data modifications are written to database files, but that would not be efficient when having multiple users because the database is #shared

🔴⏬ Reading from #shared resources requires only a ‘share’ lock but the only way to write without corruption to a shared resource is with an #exclusive lock

🔴⏬ Locking a full portion of data to write directly to disk is done only for specific non-concurrent bulk loads (direct-path inserts like with APPEND hint). All conventional modifications are done in memory into shared #buffers

🔴⏬ Writing the modifications to disk is done asynchronously by a background process, the #dbwriter

🔴⏬ Reads are also going through the #shared buffers to be sure to see the current version. This is a #logical read

🔴⏬ If the buffer is not already in memory, then before the logical read, it must be read from disk with a #physical read

🔴⏬ Keeping many buffers in memory for a while also saves a lot of disk access which is usually latency expensive. This memory is stored in the #SGA as the #buffer cache

🔴⏬ As the changes are written in memory (buffer cache), they can be lost in case of instance or server crash. To protect for this, all changes are logged to a #log buffer

🔴⏬ The log buffer is in memory and asynchronously flushed to persistent storage (disk), to the #online redo logs (and, maybe, to some #standby redo logs in remote locations).

🔴⏬ When a user commits a transaction, the server must be sure that the redo which protects their changes is flushed to disk. If not, before saying 'commit successful' it waits on #log file sync

🔴⏬ When changes are written in memory (#buffer cache) the #redo that protects it from an instance crash must be written to persistent storage before the change itself. This is #WAL (Write Ahead Logging)

🔴⏬ This redo is written by #logwriter. It must be fast because this is where the user may have to wait for physical writes. The advantage is that the writes are sequential, with higher throughput than the #dbwriter which has to do #random writes scattered in all datafiles

🔴⏬ The #redo log stream can be long as it contains all changes. But for server/instance recovery we need only the redo for the changes that were not yet flushed to disk by #dbwriter, in the #dirty blocks

🔴⏬ The instance ensures that regularly all #dirty buffers are flushed, so that the previous #redo can be discarded. It is known as a #checkpoint

🔴⏬ That’s sufficient for instance recovery (redo the changes that were made only in memory and lost by the instance crash) but what if we lose or corrupt a #datafile, like #media failure?

🔴⏬ As with any #persistent data, we must take backups (copy of files) so that, in case of some loss or corruption, we can #restore in a predictable time.

🔴⏬ After restoring the backup we need to apply the redo to roll forward the modifications that happened between the beginning of backup until the point of failure. That’s media #recovery

🔴⏬ The recovery may need more than the online redo logs for the changes between the restored backup and the last checkpoint. This is why before being overwritten, the online redo logs are #archived

🔴⏬ We always want to protect for instance failure (or all the database is inconsistent) but we can choose not to protect for media failure (and accept outage at backup and transaction loss at restore) when the database is in #noarchivelog mode

🔴⏬ If the redo cannot be written to disk, the database cannot accept more changes as it cannot ensure the D in ACID: transaction #durability

🔴⏬ As the online redo logs are allocated and formatted at instance startup, they can always be written even if the filesystem is full (except if the filesystem size is virtual, i.e. thin-provisioned). But they can be overwritten only when a checkpoint has made them inactive; otherwise we wait on "checkpoint not complete"

🔴⏬ In archive log mode, there’s another requirement to overwrite an online redo log: it must have been archived to ensure media recovery. If not yet archived, we wait on “file switch (archiving needed)”.

🔴⏬ Archived logs are never overwritten. If the destination is full, the instance hangs. You need to move or back them up elsewhere. The most important thing to monitor is V$RECOVERY_AREA_USAGE, so that PERCENT_SPACE_USED - PERCENT_SPACE_RECLAIMABLE never goes to 100% (stuck archiver)
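
As an aside, the corresponding monitoring query is a simple one (a sketch):


SQL> select file_type, percent_space_used, percent_space_reclaimable from v$recovery_area_usage;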

🔴⏬ How long to keep the archived logs or their backups? You need the redo from the latest backup you may want to restore: the backup retention window. When a database backup is obsolete, the previous archived logs become obsolete. RMAN knows that and you just “delete obsolete”
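
A minimal RMAN sketch of this (the 7-day window is just an example):


RMAN> configure retention policy to recovery window of 7 days;
RMAN> report obsolete;
RMAN> delete obsolete;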

🔴⏬ In the recovery area, files are managed by Oracle and you don’t need to “delete obsolete”. Obsolete files are automatically deleted when space is needed (“space pressure”). That’s the PERCENT_SPACE_RECLAIMABLE of V$RECOVERY_AREA_USAGE

🔴⏬ At the end of recovery, the database is back to the state it was in at the point in time when the last applied redo was generated. Only some uncommitted changes are lost (those that were in the log buffer, in memory only, and are now lost).

🔴⏬ If the recovery reaches the point of failure, all new changes can continue from there. If not, because we can’t or just because we do a point-in-time recovery, the chain of redo is broken and we #resetlogs to a new #incarnation.

🔴⏬ At the end of recovery, the transactions that are not committed cannot continue (we lost the state of the session, probably some redo and block changes, and the connection to the user is lost) and must be un-done with #rollback

🔴⏬ Oracle does not store directly the #rollback information in the redo stream like other databases, because #rollback is also used for another reason: rollback a transaction or re-build a past version of a block.

🔴⏬ When data is changed (in a memory buffer) the #rollback information that can be used to build the previous version of the buffer is stored as special system data: the #undo

🔴⏬ Actually, the #rollback segment is involved for all letters in ACID, mainly: Atomicity (if we cannot commit we must rollback) and Isolation (undo the uncommitted changes made by others)

🔴⏬ Whereas the #redo is primarily optimized for sequential access by time (replay all changes in the same order), the #undo is optimized to be accessed by #transaction (@jloracle Why Undo? https://jonathanlewis.wordpress.com/2010/02/09/why-undo/ )

🔴⏬In summary, changes made to data and undo blocks generate the #redo for it, which is applied in memory to the buffer. This redo goes to disk asynchronously. When your changes are committed, the database guarantees that your redo reached the disk so that recovery is possible.

🔴⏬ The rollforward + rollback is common to many databases, but some are faster than others there. PostgreSQL stores old versions in-place and this rollback phase is immediate. Oracle stores it in UNDO, checkpointed with data, but has to rollback all incomplete transactions.

🔴⏬ MySQL InnoDB is similar to Oracle. SQL Server stores the undo with the redo, and then may have to read the transaction log from before the last checkpoint if transactions stay open for long, so rollforward time can be unpredictable. This changed recently with Accelerated Database Recovery.

🔴⏬ For an in-depth look at Oracle recovery internals, there is this old document still around on internet archives. From Oracle7 (25 years ago!) but the concepts are still valid.
 https://pastebin.com/n8emqu08 
🔴⏫
Any questions?

The article Oracle recovery concepts first appeared on Blog dbi services.

A change in full table scan costs in 19c?


During tests in Oracle 19c I recently experienced this:

cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       | 26439 (100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 | 26439  (14)| 00:00:02 |
---------------------------------------------------------------------------

-> The costs of the full table scan are 26439.

Setting back the optimizer_features_enable to 18.1.0 showed different full table scan costs:

cbleile@orcl@orcl> alter session set optimizer_features_enable='18.1.0';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |   109K(100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 |   109K  (4)| 00:00:05 |
---------------------------------------------------------------------------

-> The costs are 109K, versus around 26K in 19c.

Why do we have such a difference for the costs of a full table scan between 18c and 19c?
With the CPU-cost model full table scans are computed as follows:

FTS Cost = ((BLOCKS/MBRC) x MREADTIM)/ SREADTIM + “CPU-costs”
REMARK: This is not 100% correct, but the difference is not important here.

In my case:

cbleile@orcl@orcl> select blocks from tabs where table_name='DEMO4';
 
    BLOCKS
----------
     84888
 
cbleile@orcl@orcl> select * from sys.aux_stats$;
 
SNAME                          PNAME                          PVAL1      PVAL2
------------------------------ ------------------------------ ---------- ----------
...
SYSSTATS_MAIN                  SREADTIM                       1
SYSSTATS_MAIN                  MREADTIM                       10
SYSSTATS_MAIN                  CPUSPEED                       2852
SYSSTATS_MAIN                  MBRC                           8
...

I.e.
FTS Cost = ((BLOCKS/MBRC) x MREADTIM)/ SREADTIM + CPU = ((84888/8) x 10)/ 1 + CPU = 106110 + CPU
Considering the additional CPU cost, we arrive at the costs we see in 18c: 109K.
So why do we see costs of only 26439 in 19c (around 25% of the 18c costs)?
The reason is that the optimizer considers “wrong” system statistics here. I.e. let’s check the system statistics again:

SNAME                          PNAME                          PVAL1      PVAL2
------------------------------ ------------------------------ ---------- ---------
SYSSTATS_MAIN                  SREADTIM                       1
SYSSTATS_MAIN                  MREADTIM                       10
SYSSTATS_MAIN                  MBRC                           8

In theory MREADTIM > SREADTIM * MBRC should not be possible: reading e.g. 8 contiguous blocks from disk cannot be slower than reading 8 random blocks from disk. Oracle considers such system statistics as wrong and uses different values internally. The change was implemented with bug fix 27643128. See My Oracle Support Note "Optimizer Chooses Expensive Index Full Scan over Index Fast Full Scan or Full Table Scan from 12.1 (Doc ID 2382922.1)" for details.
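
You can check whether this fix is enabled in your instance by querying V$SYSTEM_FIX_CONTROL (a sketch):


cbleile@orcl@orcl> select bugno, value, optimizer_feature_enable, description from v$system_fix_control where bugno=27643128;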

I.e. switching the bug fix off results in full table scan costs as in 18c:

cbleile@orcl@orcl> alter session set optimizer_features_enable='19.1.0';
cbleile@orcl@orcl> alter session set "_fix_control"='27643128:OFF';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |   109K(100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 |   109K  (4)| 00:00:05 |
---------------------------------------------------------------------------

To get the intended behavior in 19c, you should make sure that MREADTIM <= SREADTIM * MBRC. E.g. in my case:

cbleile@orcl@orcl> alter system set db_file_multiblock_read_count=12;
cbleile@orcl@orcl> exec dbms_stats.set_system_stats('MBRC',12);
cbleile@orcl@orcl> alter session set optimizer_features_enable='19.1.0';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       | 74188 (100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 | 74188   (5)| 00:00:03 |
---------------------------------------------------------------------------
...
 
cbleile@orcl@orcl> alter session set optimizer_features_enable='18.1.0';
cbleile@orcl@orcl> select * from demo4 where m=103;
cbleile@orcl@orcl> select * from table(dbms_xplan.display_cursor);
...
---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       | 74188 (100)|          |
|*  1 |  TABLE ACCESS FULL| DEMO4 |     1 |    10 | 74188   (5)| 00:00:03 |
---------------------------------------------------------------------------
...

I.e. the costs in 19c and 18c are the same again.

Please consider the following:
– If you've gathered or set system statistics, always check that they are reasonable.
– If you work with a very low SREADTIM and a high MREADTIM to favor index access (instead of using low values for OPTIMIZER_INDEX_COST_ADJ), make sure that MREADTIM <= SREADTIM * MBRC. Otherwise you may see plan changes when migrating to 19c (a quick sanity check is sketched below).
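
A quick sanity-check sketch (deleting the system statistics makes the optimizer fall back to the noworkload defaults; re-gather them afterwards if needed):


cbleile@orcl@orcl> select pname, pval1 from sys.aux_stats$ where sname='SYSSTATS_MAIN';
cbleile@orcl@orcl> exec dbms_stats.delete_system_stats;
cbleile@orcl@orcl> -- optionally re-gather: exec dbms_stats.gather_system_stats('NOWORKLOAD');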

The article A change in full table scan costs in 19c? first appeared on Blog dbi services.

Setup Oracle XE on Linux Mint – a funny exercise


On my old laptop (an Acer TravelMate with an Intel Celeron N3160 CPU) I wanted to install Oracle XE. Currently the available XE version is 18.4. My laptop runs Linux Mint 19.3 (Tricia). This blog describes the steps I had to follow (the steps for Ubuntu would be similar).

REMARK: The following steps were done just for fun and are not supported and not licensable from Oracle. If you follow them then you do it at your own risk 😉

Good instructions on how to install Oracle XE are already available here.

But the first issue not mentioned in the instructions above is that Oracle can no longer be installed on the latest Mint version due to a change in glibc. This has also been described in various blogs about e.g. installing Oracle on Fedora 26 or Fedora 27. The workaround for the problem is to do the following:


cd $ORACLE_HOME/lib/stubs
mkdir BAK
mv libc* BAK/
$ORACLE_HOME/bin/relink all

This brings us to the second issue: Oracle does not provide a mechanism to relink Oracle XE. You can of course relink an Enterprise Edition or a Standard Edition 2 installation, but relinking XE is not possible because many archives and object files are not shipped with an XE release. So how can we install Oracle XE on Linux Mint then? It needs a bit of an unsupported hack, copying archive and object files from an Enterprise Edition installation to XE, but I'll get to that later.

So here are the steps to install Oracle XE on Linux Mint (if not mentioned otherwise, the steps are done as root, i.e. you may prefix your commands with "sudo" if you do not log in as root directly):

1. Install libaio and alien


root@clemens-TravelMate:~# apt-get update && apt-get upgrade
root@clemens-TravelMate:~# apt-get install libaio*
root@clemens-TravelMate:~# apt-get install alien

2. Download the Oracle rpm from here and convert it to a deb-file


root@clemens-TravelMate:~# cd /opt/distr
root@clemens-TravelMate:/opt/distr# alien --script oracle-database-xe-18c_1.0-2_amd64.rpm

3. Delete the original rpm to save some space


root@clemens-TravelMate:/opt/distr# ls -l oracle-database-xe-18c_1.0-2_amd64.deb
...
root@clemens-TravelMate:/opt/distr# rm oracle-database-xe-18c_1.0-2_amd64.rpm

4. Install the package


root@clemens-TravelMate:/opt/distr# dpkg -i oracle-database-xe-18c_1.0-2_amd64.deb

REMARK: In case the installation fails or the database cannot be created then you can find instructions on how to clean everything up again here.

5. Make sure your host has an IPv4 address in your hosts file


root@clemens-TravelMate:/opt/distr# more /etc/hosts
127.0.0.1 localhost localhost.localdomain
192.168.10.49 clemens-TravelMate.fritz.box clemens-TravelMate

6. Disable the system check in the configuration script


cd /etc/init.d/
cp -p oracle-xe-18c oracle-xe-18c-cfg
vi oracle-xe-18c-cfg

Add the parameter


-J-Doracle.assistants.dbca.validate.ConfigurationParams=false 

in line 288 of the script, so that it finally looks as follows:


    $SU -s /bin/bash  $ORACLE_OWNER -c "(echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD'; echo '$ORACLE_PASSWORD') | $DBCA -silent -createDatabase -gdbName $ORACLE_SID -templateName $TEMPLATE_NAME -characterSet $CHARSET -createAsContainerDatabase $CREATE_AS_CDB -numberOfPDBs $NUMBER_OF_PDBS -pdbName $PDB_NAME -sid $ORACLE_SID -emConfiguration DBEXPRESS -emExpressPort $EM_EXPRESS_PORT -J-Doracle.assistants.dbca.validate.DBCredentials=false -J-Doracle.assistants.dbca.validate.ConfigurationParams=false -sampleSchema true $SQLSCRIPT_CONSTRUCT $DBFILE_CONSTRUCT $MEMORY_CONSTRUCT"

7. Adjust user oracle, so that it has bash as its default shell


mkdir -p /home/oracle
chown oracle:oinstall /home/oracle
vi /etc/passwd
grep oracle /etc/passwd
oracle:x:54321:54321::/home/oracle:/bin/bash

You may of course add a .bashrc or .bash_profile in /home/oracle.
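
For example, a minimal /home/oracle/.bash_profile could look like this (a sketch, using the paths of this installation):


export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
export ORACLE_SID=XE
export PATH=$ORACLE_HOME/bin:$PATH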

8. Adjust the Oracle make-scripts for Mint/Ubuntu (I took the script from here):


oracle@clemens-TravelMate:~/scripts$ cat omkfix_XE.sh 
#!/bin/sh
# Change the path below to point to your installation
export ORACLE_HOME=/opt/oracle/product/18c/dbhomeXE
# make changes in orld script
sed -i 's/exec gcc "\$@"/exec gcc -no-pie "\$@"/' $ORACLE_HOME/bin/orald
# Take backup before committing changes
cp $ORACLE_HOME/rdbms/lib/ins_rdbms.mk $ORACLE_HOME/rdbms/lib/ins_rdbms.mk.back
cp $ORACLE_HOME/rdbms/lib/env_rdbms.mk $ORACLE_HOME/rdbms/lib/env_rdbms.mk.back
cp $ORACLE_HOME/network/lib/env_network.mk $ORACLE_HOME/network/lib/env_network.mk.back
cp $ORACLE_HOME/srvm/lib/env_srvm.mk $ORACLE_HOME/srvm/lib/env_srvm.mk.back
cp $ORACLE_HOME/crs/lib/env_has.mk $ORACLE_HOME/crs/lib/env_has.mk.back
cp $ORACLE_HOME/odbc/lib/env_odbc.mk $ORACLE_HOME/odbc/lib/env_odbc.mk.back
cp $ORACLE_HOME/precomp/lib/env_precomp.mk $ORACLE_HOME/precomp/lib/env_precomp.mk.back
cp $ORACLE_HOME/ldap/lib/env_ldap.mk $ORACLE_HOME/ldap/lib/env_ldap.mk.back
cp $ORACLE_HOME/ord/im/lib/env_ordim.mk $ORACLE_HOME/ord/im/lib/env_ordim.mk.back
cp $ORACLE_HOME/ctx/lib/env_ctx.mk $ORACLE_HOME/ctx/lib/env_ctx.mk.back
cp $ORACLE_HOME/plsql/lib/env_plsql.mk $ORACLE_HOME/plsql/lib/env_plsql.mk.back
cp $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk.back
cp $ORACLE_HOME/bin/genorasdksh $ORACLE_HOME/bin/genorasdksh.back
#
# make changes in .mk files
#
sed -i 's/\$(ORAPWD_LINKLINE)/\$(ORAPWD_LINKLINE) -lnnz18/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(HSOTS_LINKLINE)/\$(HSOTS_LINKLINE) -lagtsh/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(EXTPROC_LINKLINE)/\$(EXTPROC_LINKLINE) -lagtsh/' $ORACLE_HOME/rdbms/lib/ins_rdbms.mk
sed -i 's/\$(OPT) \$(HSOTSMAI)/\$(OPT) -Wl,--no-as-needed \$(HSOTSMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(OPT) \$(HSDEPMAI)/\$(OPT) -Wl,--no-as-needed \$(HSDEPMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(OPT) \$(EXTPMAI)/\$(OPT) -Wl,--no-as-needed \$(EXTPMAI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(SPOBJS) \$(LLIBDMEXT)/\$(SPOBJS) -Wl,--no-as-needed \$(LLIBDMEXT)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/\$(S0MAIN) \$(SSKRMED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKRMED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSBBDED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSBBDED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKRSED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKRSED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SKRNPT)/\$(S0MAIN) -Wl,--no-as-needed \$(SKRNPT)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSTRCED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSTRCED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSTNTED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSTNTED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFEDED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFEDED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/\$(S0MAIN) \$(SSKFODED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFODED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFNDGED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFNDGED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFMUED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFMUED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKFSAGED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKFSAGED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(DBGVCI)/\$(S0MAIN) -Wl,--no-as-needed \$(DBGVCI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(DBGUCI)/\$(S0MAIN) -Wl,--no-as-needed \$(DBGUCI)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/\$(S0MAIN) \$(SSKECED)/\$(S0MAIN) -Wl,--no-as-needed \$(SSKECED)/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
 
sed -i 's/^\(ORACLE_LINKLINE.*\$(ORACLE_LINKER)\) \($(PL_FLAGS)\)/\1 -Wl,--no-as-needed \2/g' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/^\(TNSLSNR_LINKLINE.*\$(TNSLSNR_OFILES)\) \(\$(LINKTTLIBS)\)/\1 -Wl,--no-as-needed \2/g' $ORACLE_HOME/network/lib/env_network.mk
sed -i 's/\$LD \$1G/$LD -Wl,--no-as-needed \$LD_RUNTIME/' $ORACLE_HOME/bin/genorasdksh
sed -i 's/\$(GETCRSHOME_OBJ1) \$(OCRLIBS_DEFAULT)/\$(GETCRSHOME_OBJ1) -Wl,--no-as-needed \$(OCRLIBS_DEFAULT)/' $ORACLE_HOME/srvm/lib/env_srvm.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/rdbms/lib/env_rdbms.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/crs/lib/env_has.mk;
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/odbc/lib/env_odbc.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/precomp/lib/env_precomp.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/srvm/lib/env_srvm.mk;
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/network/lib/env_network.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ldap/lib/env_ldap.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ord/im/lib/env_ordim.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/ctx/lib/env_ctx.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/plsql/lib/env_plsql.mk
sed -i 's/LDDISABLENEWDTAGS=-Wl,--disable-new-dtags/LDDISABLENEWDTAGS=-Wl,--no-as-needed,--disable-new-dtags/' $ORACLE_HOME/sqlplus/lib/env_sqlplus.mk
oracle@clemens-TravelMate:~/scripts$ 
oracle@clemens-TravelMate:~/scripts$ chmod +x omkfix_XE.sh
oracle@clemens-TravelMate:~/scripts$ . ./omkfix_XE.sh
oracle@clemens-TravelMate:~/scripts$ 

9. Install an Oracle Enterprise Edition 18.4 in a separate ORACLE_HOME /u01/app/oracle/product/18.0.0/dbhome_1. You may follow the steps to install it here.

REMARK: At this step I also updated the /etc/sysctl.conf with the usual Oracle requirements and activated the parameters with sysctl -p.


vm.swappiness=1
fs.file-max = 6815744
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.shmmax = 8589934592
kernel.sem = 250 32000 100 128
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
vm.nr_hugepages = 600

10. Copy Object- and archive-files from the Enterprise Edition Oracle-Home to the XE Oracle-Home:


oracle@clemens-TravelMate:~/scripts$ cat cpXE.bash 
OH1=/u01/app/oracle/product/18.0.0/dbhome_1
OH=/opt/oracle/product/18c/dbhomeXE
cp -p $OH1/rdbms/lib/libknlopt.a $OH/rdbms/lib
cp -p $OH1/rdbms/lib/opimai.o $OH/rdbms/lib
cp -p $OH1/rdbms/lib/ssoraed.o $OH/rdbms/lib
cp -p $OH1/rdbms/lib/ttcsoi.o $OH/rdbms/lib
cp -p $OH1/lib/nautab.o $OH/lib
cp -p $OH1/lib/naeet.o $OH/lib
cp -p $OH1/lib/naect.o $OH/lib
cp -p $OH1/lib/naedhs.o $OH/lib
 
cp -p $OH1/lib/*.a $OH/lib
cp -p $OH1/rdbms/lib/*.a $OH/rdbms/lib
oracle@clemens-TravelMate:~/scripts$ bash ./cpXE.bash

11. Relink Oracle


cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk config.o ioracle

REMARK: This is of course not supported and you’ve effectively changed your Oracle XE to an Enterprise Edition version now!!!

12. Configure XE as root
REMARK: Without the relink above the script below would hang at the output “Copying database files”. Actually it would hang during the “startup nomount” of the DB.


root@clemens-TravelMate:/etc/init.d# ./oracle-xe-18c-cfg configure
/bin/df: unrecognized option '--direct'
Try '/bin/df --help' for more information.
Specify a password to be used for database accounts. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. Note that the same password will be used for SYS, SYSTEM and PDBADMIN accounts:
Confirm the password:
Configuring Oracle Listener.
Listener configuration succeeded.
Configuring Oracle Database XE.
Enter SYS user password: 
************
Enter SYSTEM user password: 
**********
Enter PDBADMIN User Password: 
***********
Prepare for db operation
7% complete
Copying database files
29% complete
Creating and starting Oracle instance
30% complete
31% complete
34% complete
38% complete
41% complete
43% complete
Completing Database Creation
47% complete
50% complete
Creating Pluggable Databases
54% complete
71% complete
Executing Post Configuration Actions
93% complete
Running Custom Scripts
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/XE.
Database Information:
Global Database Name:XE
System Identifier(SID):XE
Look at the log file "/opt/oracle/cfgtoollogs/dbca/XE/XE.log" for further details.
 
Connect to Oracle Database using one of the connect strings:
     Pluggable database: clemens-TravelMate.fritz.box/XEPDB1
     Multitenant container database: clemens-TravelMate.fritz.box
Use https://localhost:5500/em to access Oracle Enterprise Manager for Oracle Database XE
root@clemens-TravelMate:/etc/init.d# 

Done. Now you can use your XE-DB:


oracle@clemens-TravelMate:~$ . oraenv
ORACLE_SID = [oracle] ? XE
The Oracle base has been set to /opt/oracle
oracle@clemens-TravelMate:~$ sqlplus / as sysdba
 
SQL*Plus: Release 18.0.0.0.0 - Production on Mon Apr 6 21:22:46 2020
Version 18.4.0.0.0
 
Copyright (c) 1982, 2018, Oracle.  All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.4.0.0.0
 
SQL> show pdbs
 
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 XEPDB1                         READ WRITE NO
SQL> 

REMARK: As you can see, the logon banner shows "Enterprise Edition". I.e. the software installed is no longer Oracle XE and is absolutely not supported and not licensable under XE. The installation may just serve as a simple test and fun exercise to get Oracle working on Linux Mint.

Finally, I installed the Swingbench Simple Order Entry schema and ran a test with 100 concurrent OLTP users. It worked without issues.

The article Setup Oracle XE on Linux Mint – a funny exercise first appeared on Blog dbi services.

Cleanup a failed Oracle XE installation on Linux Mint


In this blog I described how to install Oracle XE on a current Linux Mint version (19.3 Tricia at the time of writing). After converting the Oracle-provided rpm to a deb installation file with the tool alien, you can install the Oracle XE software with a simple command:


root@clemens-TravelMate:/opt/distr# dpkg -i oracle-database-xe-18c_1.0-2_amd64.deb

If the installation fails, or the XE database cannot be created by the configuration script later on, you have to be able to remove the software again and clean up the system. However, this is often not that easy with "dpkg -r oracle-database-xe-18c", because the pre-removal or post-removal script may fail, leaving the deb repository untouched.

To remove the software you may follow the steps below:

REMARK: It is assumed here that there is no other Oracle installation in the oraInventory.

1.) Make sure the listener and the XE database are stopped

If there is no other way, you may kill the listener and pmon processes with e.g.:


root@clemens-TravelMate:~# kill -9 $(ps -ef | grep tnslsnr | grep -v grep | tr -s " " | cut -d " " -f2)
root@clemens-TravelMate:~# kill -9 $(ps -ef | grep pmon | grep -v grep | tr -s " " | cut -d " " -f2)
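
Alternatively, pkill achieves the same with less typing (assuming no other Oracle instance or listener is running on the box):


root@clemens-TravelMate:~# pkill -9 -f tnslsnr
root@clemens-TravelMate:~# pkill -9 -f ora_pmon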

2.) Skip the pre-removal script by just putting an "exit 0" at the beginning of it


root@clemens-TravelMate:~# vi /var/lib/dpkg/info/oracle-database-xe-18c.prerm
...
root@clemens-TravelMate:~# head -2 /var/lib/dpkg/info/oracle-database-xe-18c.prerm
#!/bin/bash
exit 0

3.) Remove the Oracle installation


root@clemens-TravelMate:~# rm /etc/oratab /etc/oraInst.loc
root@clemens-TravelMate:~# rm -rf /opt/oracle/*

Remark: Removing the oraInst.loc ensures that the postrm-script /var/lib/dpkg/info/oracle-database-xe-18c.postrm is skipped.

4.) Remove the deb-package


root@clemens-TravelMate:~# dpkg --purge --force-all oracle-database-xe-18c

5.) Cleanup some other files


root@clemens-TravelMate:~# rm -rf /var/tmp/.oracle
root@clemens-TravelMate:~# rm -rf /opt/ORCLfmap/

Done.

The article Cleanup a failed Oracle XE installation on Linux Mint first appeared on Blog dbi services.

Starting an Oracle Database when a first connection comes in


To save resources I thought about the idea of starting an Oracle database automatically when a first connection comes in. I.e. if there are many smaller databases on a server which are not required during specific times, we may shut them down and automatically start them when a connection comes in. The objective was that even that first connection should be successful. Is that possible? Yes, it is. Here's what I did:

First of all I needed a failed-connection event which triggers the startup of the database. In my case I took the message a listener produces on a connection to a service that is not registered. E.g.:


sqlplus cbleile/@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:06:55 2020
Version 19.6.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

With the default listener logging, the connection above produces a message like the following in the listener.log file:


oracle@oracle-19c6-vagrant:/opt/oracle/diag/tnslsnr/oracle-19c6-vagrant/listener/trace/ [orclcdb (CDB$ROOT)] tail -2 listener.log 
09-APR-2020 14:06:55 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=ORCLPDB1)(CID=(PROGRAM=sqlplus)(HOST=oracle-19c6-vagrant)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=11348)) * establish * ORCLPDB1 * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor

Alternatively, you may check the listener's XML alert log (log.xml) in the listener/alert directory.
To keep it easy, I used the listener.log file to check whether such a failed connection came in.

So now we just need a mechanism that checks for such a message in the listener.log and then triggers the database startup.

First I create an application service in my PDB:


alter system set container=pdb1;
exec dbms_service.create_service('APP_PDB1','APP_PDB1');
exec dbms_service.start_service('APP_PDB1');
alter system register;
alter pluggable database save state;

REMARK1: Do not use the default service of a PDB when connecting with the application. ALWAYS create a service for the application.
REMARK2: By using "save state" I ensure that the service is started automatically on DB startup.

Secondly, I created a tnsnames alias that retries the connection several times in case it fails initially:


ORCLPDB1_S =
  (DESCRIPTION =
  (CONNECT_TIMEOUT=10)(RETRY_COUNT=30)(RETRY_DELAY=2)
   (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))
   )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = APP_PDB1)
    )
  )

The important parameters are:


(RETRY_COUNT=30)(RETRY_DELAY=2)

I.e. in case of an error we wait for 2 seconds and try again, up to 30 times. That means we have 60 seconds to start the database when the first connection comes in.

The simple bash script below polls for the failed-connection message and starts the database when such a message appears in the listener.log:


#!/bin/bash
 
# Set the env for orclcdb
export ORAENV_ASK=NO
export ORACLE_SID=orclcdb
. oraenv
 
# Define where the listener log-file is
LISTENER_LOG=/opt/oracle/diag/tnslsnr/oracle-19c6-vagrant/listener/trace/listener.log
 
# create a fifo-file
fifo=/tmp/tmpfifo.$$
mkfifo "${fifo}" || exit 1
 
# tail the listener.log and write to fifo in a background process
tail -F -n0 $LISTENER_LOG >${fifo} &
tailpid=$! # optional
 
# check if a connection to service APP_PDB1 arrived and the listener returns a TNS-12514
# TNS-12514 TNS:listener does not currently know of service requested in connect descriptor
# i.e. go ahead if we detect a line containing "establish * APP_PDB1 * 12514" in the listener.log
grep -i -m 1 "establish \* app_pdb1 \* 12514" "${fifo}"
 
# if we get here a request to connect to service APP_PDB1 came in and the service is not 
# registered at the listener. We conclude then that the DB is down.
 
# Do some cleanup by killing the tail-process and removing the fifo-file
kill "${tailpid}" # optional
rm "${fifo}"
 
# Startup the DB
sqlplus -S / as sysdba <<EOF
startup
exit
EOF

REMARK1: You may check the discussion here on how to poll for a string in a file on Linux.
REMARK2: In production, the above script would probably need a trap (e.g. for Ctrl-C) to kill the background tail process and remove the fifo file.
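
Such a trap could look like this (a sketch, to be placed after fifo and tailpid are defined):


trap 'kill "${tailpid}" 2>/dev/null; rm -f "${fifo}"; exit 1' INT TERM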

Test:

The database is down.
In session 1 I start my simple bash script:


oracle@oracle-19c6-vagrant:/home/oracle/tools/test_db_start_whenconnecting/ [orclcdb (CDB$ROOT)] bash ./poll_listener.bash

In session 2 I try to connect:


oracle@oracle-19c6-vagrant:/home/oracle/ [orclcdb (CDB$ROOT)] sqlplus cbleile@orclpdb1_s
 
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:33:11 2020
Version 19.6.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
Enter password: 

After entering my password, my connection attempt "hangs".
In session 1 I can see the following messages:


09-APR-2020 14:33:15 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=APP_PDB1)(CID=(PROGRAM=sqlplus)(HOST=oracle-19c6-vagrant)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=11592)) * establish * APP_PDB1 * 12514
ORACLE instance started.
 
Total System Global Area 3724537976 bytes
Fixed Size		    9142392 bytes
Variable Size		 1224736768 bytes
Database Buffers	 2483027968 bytes
Redo Buffers		    7630848 bytes
Database mounted.
Database opened.
./poll_listener.bash: line 38: 19096 Terminated              tail -F -n0 $LISTENER_LOG > ${fifo}
oracle@oracle-19c6-vagrant:/home/oracle/tools/test_db_start_whenconnecting/ [orclcdb (CDB$ROOT)] 

And session 2 automatically connects as the DB is open now:


oracle@oracle-19c6-vagrant:/home/oracle/ [orclcdb (CDB$ROOT)] sqlplus cbleile@orclpdb1_s
 
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Apr 9 14:33:11 2020
Version 19.6.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
Enter password: 
Last Successful login time: Thu Apr 09 2020 14:31:44 +01:00
 
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
 
cbleile@orclcdb@PDB1> 

All subsequent connections to the DB are fast, of course.

With such a mechanism I could even think of starting a virtual machine from a common listener once such a connection arrives. This would allow us to e.g. start DB servers in the cloud when a first connection from the application comes in: we could stop DB VMs in the cloud to save money and start them up with a Terraform script (or whatever CLI tool you use to manage your cloud) when a first DB connection arrives.

REMARK: With (Transparent) Application Continuity there are even more possibilities, but that feature requires RAC/RAC One Node or Active Data Guard and is out of scope here.

The article Starting an Oracle Database when a first connection comes in first appeared on Blog dbi services.

Find the SQL Plan Baseline for a plan operation

$
0
0

By Franck Pachot

If you decide to capture SQL Plan Baselines, you achieve plan stability by being conservative: if the optimizer comes up with a new execution plan, it is loaded into the SQL Plan Management base, but not accepted. One day, you may add an index to improve some queries. Then you should check if there is any SQL Plan Baseline for queries with the same access predicate, because the optimizer will probably find this index attractive and add the new plan to the SPM base, but it will not be used unless you evolve it to accept it. Or you may remove the SQL Plan Baselines for these queries now that you know you provided a very efficient access path.

But how do you find all SQL Plan Baselines that are concerned? Here is an example.

I start with the SCOTT schema where I capture the SQL Plan Baselines for the following queries:


set time on sqlprompt 'SQL> '
host TWO_TASK=//localhost/CDB1A_PDB1.subnet.vcn.oraclevcn.com sqlplus sys/"demo##OracleDB20c" as sysdba @ ?/rdbms/admin/utlsampl.sql
connect scott/tiger@//localhost/CDB1A_PDB1.subnet.vcn.oraclevcn.com
alter session set optimizer_mode=first_rows optimizer_capture_sql_plan_baselines=true;
select * from emp where ename='SCOTT';
select * from emp where ename='SCOTT';

This is a full table scan because I have no index here.
Now I create an index that helps for this kind of queries:


alter session set optimizer_mode=first_rows optimizer_capture_sql_plan_baselines=false;
host sleep 1
create index emp_ename on emp(ename);
host sleep 1
select * from emp where ename='SCOTT';

I have now, in addition to the accepted FULL TABLE SCAN baseline, the loaded, but not accepted, plan with INDEX access.
Here is the list of plans:


SQL> select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin from dba_sql_plan_baselines;

             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN
_______________________ _________________________________ __________________ ______ ______ ______ _______________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g8d8a279cc    17-apr-20 19:37    YES    YES    NO     AUTO-CAPTURE

Full table scan:

SQL> select * from dbms_xplan.display_sql_plan_baseline('SQL_62193752b864a1e8','SQL_PLAN_6469raaw698g8d8a279cc'
);

                                                                  PLAN_TABLE_OUTPUT
___________________________________________________________________________________

--------------------------------------------------------------------------------
SQL handle: SQL_62193752b864a1e8
SQL text: select * from emp where ename='SCOTT'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_6469raaw698g8d8a279cc         Plan id: 3634526668
Enabled: YES     Fixed: NO      Accepted: YES     Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3956160932

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    87 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMP  |     1 |    87 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ENAME"='SCOTT')

Index access - not accepted

SQL> select * from dbms_xplan.display_sql_plan_baseline('SQL_62193752b864a1e8','SQL_PLAN_6469raaw698g854d6b671'
);

                                                                                   PLAN_TABLE_OUTPUT
____________________________________________________________________________________________________

--------------------------------------------------------------------------------
SQL handle: SQL_62193752b864a1e8
SQL text: select * from emp where ename='SCOTT'
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
Plan name: SQL_PLAN_6469raaw698g854d6b671         Plan id: 1423357553
Enabled: YES     Fixed: NO      Accepted: NO      Origin: AUTO-CAPTURE
Plan rows: From dictionary
--------------------------------------------------------------------------------

Plan hash value: 2855689319

-------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |           |     1 |    87 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| EMP       |     1 |    87 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | EMP_ENAME |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ENAME"='SCOTT')

SQL Plan Baseline lookup by plan operation

Now I want to know all queries where a SQL Plan Baseline references this index, because I'll probably want to delete all plans for those queries, or maybe evolve the index access to be accepted.
Here is my query on sys.sqlobj$plan:


select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin
 ,operation,options,object_name
from (
 -- SPM execution plans
 select signature,category,obj_type,plan_id
 ,operation, options, object_name
 from sys.sqlobj$plan
) natural join (
 -- SQL Plan Baselines
 select signature,category,obj_type,plan_id
 ,name plan_name
 from sys.sqlobj$
 where obj_type=2
) natural join (
 select plan_name
 ,sql_handle,created,enabled,accepted,fixed,origin
 from dba_sql_plan_baselines
)
where operation='INDEX' and object_name like 'EMP_ENAME'
/

This gets the signature and plan identification from sys.sqlobj$plan then joins to sys.sqlobj$ to get the plan name, and finally dba_sql_plan_baselines to get additional information:


             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN    OPERATION       OPTIONS    OBJECT_NAME
_______________________ _________________________________ __________________ ______ ______ ______ _______________ ____________ _____________ ______________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE    INDEX        RANGE SCAN    EMP_ENAME

You can see that I like natural joins but be aware that I do that only when I fully control the columns by defining, in subqueries, the column projections before the join.
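Once the SQL handle and plan name are identified, the documented DBMS_SPM functions take exactly these values to drop or evolve the plans. The following is only a sketch using the handle and plan name found above; check the exact signatures and the evolve advisor behavior in your version:

set serveroutput on
declare
 n pls_integer;
 r clob;
begin
 -- either drop every plan attached to this SQL handle...
 n := dbms_spm.drop_sql_plan_baseline(sql_handle => 'SQL_62193752b864a1e8');
 dbms_output.put_line(n||' plan(s) dropped');
 -- ...or (instead) ask SPM to verify and accept the index-access plan:
 -- r := dbms_spm.evolve_sql_plan_baseline(
 --        sql_handle => 'SQL_62193752b864a1e8',
 --        plan_name  => 'SQL_PLAN_6469raaw698g854d6b671',
 --        verify => 'YES', commit => 'YES');
 -- dbms_output.put_line(r);
end;
/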

I have the following variant if I want to lookup by the outline hints:


select sql_handle,plan_name,created,enabled ENA,accepted ACC,fixed FIX,origin
 ,operation,options,object_name
 ,outline_data
from (
 -- SPM execution plans
 select signature,category,obj_type,plan_id
 ,operation, options, object_name
 ,case when other_xml like '%outline_data%' then extract(xmltype(other_xml),'/*/outline_data').getStringVal() end outline_data
 from sys.sqlobj$plan
) natural join (
 -- SQL Plan Baselines
 select signature,category,obj_type,plan_id
 ,name plan_name
 from sys.sqlobj$
 where obj_type=2
) natural join (
 select plan_name
 ,sql_handle,created,enabled,accepted,fixed,origin
 from dba_sql_plan_baselines
)
where outline_data like '%INDEX%'
/

This is what we find in the OTHER_XML column, and it is faster to filter here than to call dbms_xplan for each plan:


             SQL_HANDLE                         PLAN_NAME            CREATED    ENA    ACC    FIX          ORIGIN       OPERATION                   OPTIONS    OBJECT_NAME                                                                                                                                                                                                                                                                                                                                                                                                                             OUTLINE_DATA
_______________________ _________________________________ __________________ ______ ______ ______ _______________ _______________ _________________________ ______________ ________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
SQL_62193752b864a1e8    SQL_PLAN_6469raaw698g854d6b671    17-apr-20 19:37    YES    NO     NO     AUTO-CAPTURE    TABLE ACCESS    BY INDEX ROWID BATCHED    EMP            <outline_data><hint><![CDATA[BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "EMP"@"SEL$1")]]></hint><hint><![CDATA[INDEX_RS_ASC(@"SEL$1" "EMP"@"SEL$1" ("EMP"."ENAME"))]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$1")]]></hint><hint><![CDATA[FIRST_ROWS]]></hint><hint><![CDATA[DB_VERSION('20.1.0')]]></hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('20.1.0')]]></hint><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint></outline_data>

Those SYS.SQLOBJ$ tables are where Oracle stores the statements managed by the SQL Management Base (SQL Profiles, SQL Plan Baselines, SQL Patches, SQL Quarantine).
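To get a quick overview of what is stored in the SQL Management Base, counting per OBJ_TYPE is enough. This is a small sketch: only obj_type=2 (the SQL Plan Baselines) is confirmed by the queries above, the other DECODE labels are my assumption, so verify them on your version:

select obj_type,
       decode(obj_type, 1,'SQL profile', 2,'SQL plan baseline', 3,'SQL patch', to_char(obj_type)) guessed_type,
       count(*) objects
from sys.sqlobj$
group by obj_type
order by obj_type;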

If you want to find the SQL_ID from a SQL Plan Baseline, I have a query in a previous post:
https://medium.com/@FranckPachot/oracle-dba-sql-plan-baseline-sql-id-and-plan-hash-value-8ffa811a7c68

The article Find the SQL Plan Baseline for a plan operation first appeared on Blog dbi services.

“Segment Maintenance Online Compress” feature usage


By Franck Pachot

On Twitter, Ludovico Caldara mentioned the #licensing #pitfall when using the Online Partition Move with Basic Compression. Those two features are available in Enterprise Edition without any additional option, but when used together (moving a compressed partition online) they trigger the usage of the Advanced Compression Option:


And there was a question about how this feature usage is detected. I'll show how. Basically, the ALTER TABLE MOVE PARTITION sets the “fragment was compressed online” flag in TABPART$ or TABSUBPART$ when the segment was compressed during the online move.

I create a partitioned table:


SQL> create table SCOTT.DEMO(id,x) partition by hash(id) partitions 2 as select rownum,lpad('x',100,'x') from xmltable('1 to 1000');

Table created.

I set basic compression, which does not compress anything yet as it applies only to future direct-path loads:


SQL> alter table SCOTT.DEMO modify partition for (42) compress;

Table altered.

I move without the ‘online’ keyword:


SQL> alter table SCOTT.DEMO move partition for (42);

Table altered.

This does not enable the online compression flag (which is 0x2000000):


SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75609      75610          2         18 12

The 0x12 is about the presence of statistics (the MOVE does online statistics gathering in 12c).


SQL> exec sys.dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

PL/SQL procedure successfully completed.

SQL> select name,detected_usages,currently_used,feature_info from dba_feature_usage_statistics where name='Segment Maintenance Online Compress';

NAME                                     DETECTED_USAGES CURRE FEATURE_INFO
---------------------------------------- --------------- ----- --------------------------------------------------------------------------------
Segment Maintenance Online Compress                    0 FALSE

Online Move of compressed partition

Now moving online this compressed segment:


SQL> alter table SCOTT.DEMO move partition for (42) online;

Table altered.

This has enabled the 0x2000000 flag:


SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75611      75611          2   33554450 2000012

And, of course, is logged by the feature usage detection:


SQL> exec sys.dbms_feature_usage_internal.exec_db_usage_sampling(sysdate)

PL/SQL procedure successfully completed.

SQL> select name,detected_usages,currently_used,feature_info from dba_feature_usage_statistics where name='Segment Maintenance Online Compress';

NAME                                     DETECTED_USAGES CURRE FEATURE_INFO
---------------------------------------- --------------- ----- --------------------------------------------------------------------------------
Segment Maintenance Online Compress                    1 FALSE Partition Obj# list: 75611:

The FEATURE_INFO mentions the object_id for the concerned partitions (for the last detection only).
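If you want to check, on your own database, which partitions and subpartitions currently carry this flag, a variant of the query above does it. This is a sketch based only on the 0x2000000 value observed in this test:

select o.owner, o.object_name, o.subobject_name, p.obj#, p.flags
from sys.tabpart$ p join dba_objects o on o.object_id=p.obj#
where bitand(p.flags,33554432)=33554432   -- 0x2000000 "fragment was compressed online"
union all
select o.owner, o.object_name, o.subobject_name, p.obj#, p.flags
from sys.tabsubpart$ p join dba_objects o on o.object_id=p.obj#
where bitand(p.flags,33554432)=33554432;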

No Compress

The only way I know to disable this flag is to uncompress the partition, and this can be done online:


SQL> alter table SCOTT.DEMO move partition for (42) nocompress online;

Table altered.

SQL> select obj#,dataobj#,part#,flags,to_char(flags,'FMXXXXXXXXXXXXX') from SYS.TABPART$ where obj# in ( select object_id from dba_objects where owner='SCOTT' and object_name='DEMO');

      OBJ#   DATAOBJ#      PART#      FLAGS TO_CHAR(FLAGS,
---------- ---------- ---------- ---------- --------------
     75608      75608          1          0 0
     75618      75618          2         18 12

DBMS_REDEFINITION

As a workaround, DBMS_REDEFINITION does not use the Advanced Compression Option. For example, this does not enable any flag:


SYS@CDB$ROOT>
SYS@CDB$ROOT> alter table SCOTT.DEMO rename partition for (24) to PART1;

Table altered.

SYS@CDB$ROOT> create table SCOTT.DEMO_X for exchange with table SCOTT.DEMO;

Table created.

SYS@CDB$ROOT> alter table SCOTT.DEMO_X compress;

Table altered.

SYS@CDB$ROOT> exec dbms_redefinition.start_redef_table(uname=>'SCOTT',orig_table=>'DEMO',int_table=>'DEMO_X',part_name=>'PART1',options_flag=>dbms_redefinition.cons_use_rowid);

PL/SQL procedure successfully completed.

SYS@CDB$ROOT> exec dbms_redefinition.finish_redef_table(uname=>'SCOTT',orig_table=>'DEMO',int_table=>'DEMO_X',part_name=>'PART1');

PL/SQL procedure successfully completed.

SYS@CDB$ROOT> drop table SCOTT.DEMO_X;

Table dropped.

But of course, the difference is that only the blocks that are direct-path inserted into the interim table are compressed, not the online modifications.
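If you want to verify what has actually been compressed after the redefinition, DBMS_COMPRESSION can sample some rows. This is only a sketch: map the returned numbers to the DBMS_COMPRESSION constants of your version (COMP_NOCOMPRESS, COMP_ADVANCED,...):

-- sample 100 rows of the redefined partition and count per compression type
select dbms_compression.get_compression_type('SCOTT','DEMO',rowid) compression_type, count(*) cnt
from SCOTT.DEMO partition for (24)
where rownum<=100
group by dbms_compression.get_compression_type('SCOTT','DEMO',rowid);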

Only for partitions?

As far as I know, this is detected only for partitions and subpartitions, i.e. for the online partition move operation that came in 12cR1. Since 12cR2 we can also move a non-partitioned table online and this, as far as I know, is not detected by dba_feature_usage_statistics. But don't count on it, as this may be considered a bug and fixed one day.

The article “Segment Maintenance Online Compress” feature usage first appeared on Blog dbi services.


Oracle Support: Easy export of SQL Testcase


By Franck Pachot

Many people complain about the quality of support, and there are some reasons behind that. But before complaining, be sure that you provide all the information, because one reason for inefficient Service Request handling is the many incomplete tickets the support engineers have to manage. Oracle provides the tools to make this easy for you and for them. Here I'll show how easy it is to provide a full testcase with DBMS_SQLDIAG. I'm not talking about hours spent identifying the tables involved, the statistics, the parameters,… All of that can be done with a single command as soon as you have the SQL text or SQL_ID.

In my case, I’ve reproduced my problem (very long parse time) with the following:


set linesize 120 pagesize 1000
variable sql clob
exec select sql_text into :sql from dba_hist_sqltext where sql_id='5jyqgq4mmc2jv';
alter session set optimizer_features_enable='18.1.0';
alter session set tracefile_identifier='5jyqgq4mmc2jv';
select value from v$diag_info where name='Default Trace File';
alter session set events 'trace [SQL_Compiler.*]';
exec execute immediate 'explain plan for '||:sql;
alter session set events 'trace [SQL_Compiler.*] off';

I was too lazy to copy the big SQL statement, so I get it directly from AWR. Because it is a parsing problem, I just run an EXPLAIN PLAN. I set OPTIMIZER_FEATURES_ENABLE because the first workaround in production was to keep the previous version. I ran a “SQL Compiler” trace, aka event 10053, in order to get the timing information (which I described in a previous blog post). But that's not the topic: rather than providing those huge traces to Oracle Support, it is better to give an easy-to-reproduce test case.

So this is the only thing I added to get it:


variable c clob
exec DBMS_SQLDIAG.EXPORT_SQL_TESTCASE(directory=>'DATA_PUMP_DIR',sql_text=>:sql,testcase=>:c);

Yes, that’s all. This generates the following files in my DATA_PUMP_DIR directory:

There's a README, a Data Pump dump of the objects (I used the default, which exports only metadata and statistics), the statement, the system statistics,… You can play with this or simply import the whole thing with DBMS_SQLDIAG.
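If you don't remember where DATA_PUMP_DIR points to on your server, this simple sketch is the quickest way to locate the generated files:

SQL> select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';

and then list that path from the shell to see the README and the oratcb_* files.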

I just tar'ed this, copied it to another environment (I provisioned a 20c database in the Oracle Cloud for that) and ran the following:


grant DBA to DEMO identified by demo container=current;
connect demo/demo@&_connect_identifier
create or replace directory VARTMPDPDUMP as '/var/tmp/dpdump';
variable c clob
exec DBMS_SQLDIAG.IMPORT_SQL_TESTCASE(directory=>'VARTMPDPDUMP',filename=>'oratcb_0_5jyqgq4mmc2jv_1_018BBEEE0001main.xml');
@ oratcb_0_5jyqgq4mmc2jv_1_01A20CE80001xpls.sql

And that's all. This imported all the objects and statistics to exactly reproduce my issue. Now that it reproduces everywhere, I can open an SR with a short description and the SQL Testcase files (5 MB here). It is not always easy to reproduce a problem, but if you can reproduce it in your environment, there's a good chance that you can quickly export what is required to reproduce it in another environment.

SQL Testcase Builder is available in any edition. You can use it yourself to reproduce in pre-production a production issue or to provide a testcase to the Oracle Support. Or to send to your preferred troubleshooting consultant: we are doing more and more remote expertise, and reproducing an issue in-house is the most efficient way to analyze a problem.
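Note that you don't even need to fetch the SQL text yourself: DBMS_SQLDIAG.EXPORT_SQL_TESTCASE also has overloads taking a SQL_ID (from the cursor cache or from AWR). The following is a sketch, to be checked against the package specification of your version:

variable c clob
exec DBMS_SQLDIAG.EXPORT_SQL_TESTCASE(directory=>'DATA_PUMP_DIR', sql_id=>'5jyqgq4mmc2jv', testcase=>:c);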

The article Oracle Support: Easy export of SQL Testcase first appeared on Blog dbi services.

티베로 – The most compatible alternative to Oracle Database


By Franck Pachot

Do you remember the time when we could buy IBM PC clones, cheaper than the IBM PC but fully compatible? I got the same impression when testing Tibero, the TmaxSoft relational database compatible with the Oracle Database. Many Oracle customers are looking for alternatives to the Oracle Database because of unfriendly commercial and licensing practices, like forcing the usage of expensive options or not counting vCPUs for licensing. Up to now, I was not really impressed by the databases that claim Oracle compatibility: you simply cannot migrate an application from Oracle to another RDBMS without having to change a lot of code. This makes it nearly impossible to move a legacy application where the business logic has been implemented over the years in the database model and stored procedures. Who will take the risk of guaranteeing the same behavior, even after very expensive UAT? Finally, with less effort, you may optimize your Oracle licenses and stay with the same database software.

Tibero

However, in Asia, some companies have another reason to move out of Oracle. Not because of Oracle itself, but because it is an American company. This is true especially for public government organizations, for which storing data and running critical applications should not depend on a US company. And once they have built their alternative, they may sell it worldwide. In this post I'm looking at Tibero, a database created by a South Korean company, TmaxSoft, with an incredible level of compatibility with Oracle.

I’ll install and run a Tibero database to get an idea about what compatibility means.

Demo trial

After creating a login account on the TmaxSoft TechNet, I’ve requested a demo license on: https://technet.tmaxsoft.com/en/front/common/demoPopup.do

You need to know the host where you will run this, as you have to provide the result of `uname -n` to get the license key. That's a 30-day trial (longer if you don't restart the instance) that can run everything on this host. I've used an Oracle Compute instance running OEL7 for this test. I've downloaded the Tibero 6 software installation: tibero6-bin-FS07_CS_1902-linux64-166256-opt.tar.gz from TmaxSoft TechNet > Downloads > Database > Tibero > Tibero 6

For the installation, I followed the instructions from https://store.dimensigon.com/deploy-tibero-database/ which I do not reproduce here. Basically, you need some packages, some sysctl.conf settings for shared memory, some limits.conf settings, a user in the 'dba' group,… Very similar to the Oracle prerequisites (see the example below). Then untar the software: this installs a $TB_HOME of about 1GB.
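As an illustration of the kind of kernel settings involved, here is the sort of /etc/sysctl.conf fragment I mean. The values are just an example, to be adapted to your memory and to the TmaxSoft documentation:

# example shared memory / semaphore settings - adapt to your system
kernel.shmmax = 4294967296
kernel.shmall = 4194304
kernel.sem = 10000 32000 10000 10000
fs.file-max = 6815744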

Database creation

The first difference with Oracle is that you cannot start an instance without a valid license file:


$ $TB_HOME/bin/tb_create_db.sh
  ********************************************************************
* ERROR: Can't open the license file!!
* (1) Check the license file - /home/tibero/tibero6/license/license.xml
  ********************************************************************

I have my trial license file and move it to $TB_HOME/license/license.xml

Now the database creation can proceed, and there's a simple tb_create_db.sh for that. The first stage starts the instance (NOMOUNT mode):


$ $TB_HOME/bin/tb_create_db.sh
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NOMOUNT mode).

Some information about sizes and directories is displayed:


+----------------------------- size -------------------------------+
 system size = 100M (next 10M)
 syssub size = 100M (next 10M)
   undo size = 200M (next 10M)
   temp size = 100M (next 10M)
    usr size = 100M (next 10M)
    log size = 50M
+--------------------------- directory ----------------------------+
 system directory = /home/tibero/tibero6/database/t6a
 syssub directory = /home/tibero/tibero6/database/t6a
   undo directory = /home/tibero/tibero6/database/t6a
   temp directory = /home/tibero/tibero6/database/t6a
    log directory = /home/tibero/tibero6/database/t6a
    usr directory = /home/tibero/tibero6/database/t6a

And the creation goes on; this really looks like an Oracle Database:


+========================== newmount sql ==========================+
 create database "t6a"
  user sys identified by tibero
  maxinstances 8
  maxdatafiles 100
  character set MSWIN949
  national character set UTF16
  logfile
  group 1 ('/home/tibero/tibero6/database/t6a/log001.log') size 50M,
  group 2 ('/home/tibero/tibero6/database/t6a/log002.log') size 50M,
  group 3 ('/home/tibero/tibero6/database/t6a/log003.log') size 50M
    maxloggroups 255
    maxlogmembers 8
    noarchivelog
  datafile '/home/tibero/tibero6/database/t6a/system001.dtf' 
    size 100M autoextend on next 10M maxsize unlimited
  SYSSUB 
  datafile '/home/tibero/tibero6/database/t6a/syssub001.dtf' 
    size 10M autoextend on next 10M maxsize unlimited
  default temporary tablespace TEMP
    tempfile '/home/tibero/tibero6/database/t6a/temp001.dtf'
    size 100M autoextend on next 10M maxsize unlimited
    extent management local autoallocate
  undo tablespace UNDO
    datafile '/home/tibero/tibero6/database/t6a/undo001.dtf'
    size 200M
    autoextend on next 10M maxsize unlimited
    extent management local autoallocate
  default tablespace USR
    datafile  '/home/tibero/tibero6/database/t6a/usr001.dtf'
    size 100M autoextend on next 10M maxsize unlimited
    extent management local autoallocate;
+==================================================================+

Database created.
Listener port = 8629
Tibero 6
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).

Then the dictionary is loaded (equivalent to catalog/catproc):


/home/tibero/tibero6/bin/tbsvr
Dropping agent table...
Creating text packages table ...
Creating the role DBA...
Creating system users & roles...
Creating example users...
Creating virtual tables(1)...
Creating virtual tables(2)...
Granting public access to _VT_DUAL...
Creating the system generated sequences...
Creating internal dynamic performance views...
Creating outline table...
Creating system tables related to dbms_job...
Creating system tables related to dbms_lock...
Creating system tables related to scheduler...
Creating system tables related to server_alert...
Creating system tables related to tpm...
Creating system tables related to tsn and timestamp...
Creating system tables related to rsrc...
Creating system tables related to workspacemanager...
Creating system tables related to statistics...
Creating system tables related to mview...
Creating system package specifications:
    Running /home/tibero/tibero6/scripts/pkg/pkg_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_standard_extension.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_clobxmlinterface.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_udt_meta.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_seaf.sql...
    Running /home/tibero/tibero6/scripts/pkg/anydata.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_db2_standard.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_application_info.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aq.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aq_utl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_aqadm.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_assert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_crypto.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_db2_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_db_version.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_ddl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_debug.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_debug_jdwp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_errlog.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_expression.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_fga.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_flashback.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_geom.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_java.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_job.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_lob.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_lock.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_metadata.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mssql_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview_refresh_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_mview_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_obfuscation.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_output.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_pipe.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_random.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_redefinition.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_redefinition_stats.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_repair.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_result_cache.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rls.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rowid.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_rsrc.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_scheduler.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_session.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_space.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_space_admin.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sph.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql_analyze.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sql_translator.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_sqltune.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_stats.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_stats_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_system.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_transaction.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_types.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_utility.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_utl_tb.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_verify.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmldom.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlgen.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlquery.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xplan.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dg_cipher.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_htf.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_htp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_psm_sql_result_cache.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_sys_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_tb_utility.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_text.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_tudiconst.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_encode.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_file.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_tcp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_http.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_url.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_i18n.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_match.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_raw.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_smtp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_str.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_compress.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_text_japanese_lexer.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_tpm.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_recomp.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_monitor.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_server_alert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_ctx_ddl.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_odci.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_utl_ref.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_owa_util.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_alert.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_client_internal.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xslprocessor.sql...
    Running /home/tibero/tibero6/scripts/pkg/uda_wm_concat.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_diutil.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlsave.sql...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_xmlparser.sql...
Creating auxiliary tables used in static views...
Creating system tables related to profile...
Creating internal system tables...
Check TPR status..
Stop TPR
Dropping tables used in TPR...
Creating auxiliary tables used in TPR...
Creating static views...
Creating static view descriptions...
Creating objects for sph:
    Running /home/tibero/tibero6/scripts/iparam_desc_gen.sql...
Creating dynamic performance views...
Creating dynamic performance view descriptions...
Creating package bodies:
    Running /home/tibero/tibero6/scripts/pkg/_pkg_db2_standard.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aq.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aq_utl.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_aqadm.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_assert.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_db2_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_errlog.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_metadata.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mssql_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview_refresh_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_mview_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_redefinition_stats.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_rsrc.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_scheduler.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_session.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sph.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sql_analyze.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sql_translator.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_sqltune.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_stats.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_stats_util.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_utility.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_utl_tb.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_verify.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_workspacemanager.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xmlgen.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xplan.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dg_cipher.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_htf.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_htp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_text.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_http.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_url.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_i18n.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_smtp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_text_japanese_lexer.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_tpm.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_utl_recomp.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_server_alert.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xslprocessor.tbw...
Running /home/tibero/tibero6/scripts/pkg/_uda_wm_concat.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_xmlparser.tbw...
Creating public synonyms for system packages...
Creating remaining public synonyms for system packages...
Registering dbms_stats job to Job Scheduler...
Creating audit event pacakge...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_audit_event.tbw...
Creating packages for TPR...
    Running /home/tibero/tibero6/scripts/pkg/pkg_dbms_tpr.sql...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_tpr.tbw...
    Running /home/tibero/tibero6/scripts/pkg/_pkg_dbms_apm.tbw...
Start TPR
Create tudi interface
    Running /home/tibero/tibero6/scripts/odci.sql...
Creating spatial meta tables and views ...
Creating internal system jobs...
Creating Japanese Lexer epa source ...
Creating internal system notice queue ...
Creating sql translator profiles ...
Creating agent table...
Done.
For details, check /home/tibero/tibero6/instance/t6a/log/system_init.log.

From this log, you can already imagine the compatibility of the PL/SQL DBMS_% packages with the Oracle Database: they are all there.
All seems good; I have a $TB_HOME and $TB_SID to identify the instance:


**************************************************
* Tibero Database 't6a' is created successfully on Fri Dec  6 17:23:57 GMT 2019.
*     Tibero home directory ($TB_HOME) =
*         /home/tibero/tibero6
*     Tibero service ID ($TB_SID) = t6a
*     Tibero binary path =
*         /home/tibero/tibero6/bin:/home/tibero/tibero6/client/bin
*     Initialization parameter file =
*         /home/tibero/tibero6/config/t6a.tip
*
* Make sure that you always set up environment variables $TB_HOME and
* $TB_SID properly before you run Tibero.
**************************************************

This looks very similar to an Oracle Database, and the t6a.tip file listed above is the ‘init.ora’ equivalent.

I should add _USE_HUGE_PAGE=Y there as I don’t like to see 3GB allocated with 4k pages.
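The .tip parameter file is a plain key=value file, so adding it is a one-liner. A sketch; check the exact parameter name and restart requirements in the Tibero documentation:

# appended to /home/tibero/tibero6/config/t6a.tip
_USE_HUGE_PAGE=Y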
Looking at the instance processes shows many background worker processes, each running several threads.

Not going into the details here, but DBWR does more than the Oracle Database Writer as it runs threads for writing to datafiles as well as for writing to redo logs. RCWP is the recovery process (also used by standby databases). PEWP runs the parallel query threads. FGWP runs the foreground (session) threads.

Tibero is similar to Oracle but not equal. Tibero has been developed since 2003 with the goal of maximum compatibility with Oracle: SQL, PL/SQL and MVCC compatibility for easy application migration, as well as architecture compatibility for easier adoption by DBAs. But it was also built from scratch for modern operating systems and runs processes with threads. I installed it on Linux x86-64, but Tibero is also available for AIX, HP-UX, Solaris and Windows.

Connect

I can connect with the SYS user by attaching to the shared memory when TB_HOME and TB_SID are set to my local instance:


SQL> Disconnected.
[SID=t6a u@h:w]$ TB_HOME=~/tibero6 TB_ID=t6a tbsql sys/tibero

tbSQL 6

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

I can also connect through the listener (the port was mentioned at database creation):


[SID=t6a u@h:w]$ TB_HOME= TB_ID= tbsql sys/tibero@localhost:8629/t6a

tbSQL 6

TmaxData Corporation Copyright (c) 2008-. All rights reserved.

Connected to Tibero.

Again, it is similar to Oracle (like EZConnect or a full connection string) but not exactly the same.

Similar but not a clone

The first time I looked at Tibero, I was really surprised how far it goes with the compatibility with Oracle Database. I'll probably write more blog posts about it, but even complex PL/SQL packages can run without any change. Then comes the idea: is it only an API compatibility or is this software a clone of Oracle? I've even heard rumours that some source code must have leaked in order to reach such compatibility. I want to make it clear here: I'm 100% convinced that this database engine was written from scratch, inspired by the Oracle architecture and features, and implementing the same language, dictionary packages and views, but with completely different code and internal design. When we troubleshoot Oracle we are used to seeing the C function stacks in trace dumps. Let's have a look at the C functions here.

I'll strace the pread64 calls while running a query in order to see the stack behind them. First, I get the PID to trace:


select client_pid,pid,wthr_id,os_thr_id from v$session where sid in (select sid from v$mystat);

The process for my session is: tbsvr_FGWP000 -t NORMAL -SVR_SID t6a and the PID is the Linux PID (OS_THR_ID is the thread).
I strace (compiled with libunwind to show the call stack):


strace -k -e trace=pread64 -y -p 7075


Here is the call stack for the first pread64() call:


pread64(49, "\4\0\0\0\2\0\200\0\261]\2\0\0\0\1\0\7\0\0\0\263\0\0\0l\2\0\0\377\377\377\377"..., 8192, 16384) = 8192
 > /usr/lib64/libpthread-2.17.so(__pread_nocancel+0x2a) [0xefc3]
 > /home/tibero/tibero6/bin/tbsvr(read_dev_ne+0x2b2) [0x14d8cd2]
 > /home/tibero/tibero6/bin/tbsvr(read_dev+0x94) [0x14d96e4]
 > /home/tibero/tibero6/bin/tbsvr(buf_read1_internal+0x2f8) [0x14da158]
 > /home/tibero/tibero6/bin/tbsvr(tcbh_read_blks_internal+0x5d8) [0x14ccf98]
 > /home/tibero/tibero6/bin/tbsvr(tcbh_read_blk_internal+0x1d) [0x14cd2dd]
 > /home/tibero/tibero6/bin/tbsvr(tcbuf_pin_read_locked+0x39c) [0x14ec99c]
 > /home/tibero/tibero6/bin/tbsvr(tcbuf_get+0x198a) [0x14f3c9a]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_units_internal+0x256) [0x17b0a56]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_units_from_df+0x406) [0x17b2396]
 > /home/tibero/tibero6/bin/tbsvr(ts_alloc_ext_internal+0x2ff) [0x17b5b1f]
 > /home/tibero/tibero6/bin/tbsvr(tx_sgmt_create+0x1cb) [0x1752ffb]
 > /home/tibero/tibero6/bin/tbsvr(ddl_create_dsgmt+0xc0) [0x769260]
 > /home/tibero/tibero6/bin/tbsvr(_ddl_ctbl_internal+0x155d) [0x7f86dd]
 > /home/tibero/tibero6/bin/tbsvr(ddl_create_table+0xf9) [0x7f9dd9]
 > /home/tibero/tibero6/bin/tbsvr(ddl_execute+0xf04) [0x44aa54]
 > /home/tibero/tibero6/bin/tbsvr(ddl_process_internal+0xf6a) [0x44ed0a]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_sql_process+0x4082) [0x1def92]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_msg_sql_common+0x1518) [0x1ca718]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_handle_msg_internal+0x2225) [0x1847c5]
 > /home/tibero/tibero6/bin/tbsvr(tbsvr_wthr_request_from_cl_conn+0x70a) [0x187eea]
 > /home/tibero/tibero6/bin/tbsvr(wthr_get_new_cli_con+0xc94) [0x18fea4]
 > /home/tibero/tibero6/bin/tbsvr(thread_main_chk_bitmask+0x18d) [0x1966ed]
 > /home/tibero/tibero6/bin/tbsvr(svr_wthr_main_internal+0x1393) [0x1ab2b3]
 > /home/tibero/tibero6/bin/tbsvr(wthr_init+0x80) [0xa2a5b0]
 > /usr/lib64/libpthread-2.17.so(start_thread+0xc5) [0x7ea5]
 > /usr/lib64/libc-2.17.so(clone+0x6d) [0xfe8cd]

I don’t think there is anything in common with the Oracle software code or layer architecture here, except some well known terms (segment, extent, buffer get, buffer pin,…).

I also show a data block dump here just to get an idea:


SQL> select dbms_rowid.rowid_absolute_fno(rowid),dbms_rowid.rowid_block_number(rowid) from demo where rownum=1;

DBMS_ROWID.ROWID_ABSOLUTE_FNO(ROWID) DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------ ------------------------------------
                                   2                                 2908

SQL> alter system dump datafile 2 block 2908;

The dump is in /home/tibero/tibero6/instance/t6a/dump/tracedump/tb_dump_7029_73_31900660.trc:


**Dump start at 2020-04-19 14:54:46
DUMP of BLOCK file #2 block #2908

**Dump start at 2020-04-19 14:54:46
data block Dump[dba=02_00002908(8391516),tsn=0000.0067cf33,type=13,seqno =1]
--------------------------------------------------------------
 sgmt_id=3220  cleanout_tsn=0000.00000000  btxcnt=2
 l1dba=02_00002903(8391511), offset_in_l1=5
 btx      xid                undo           fl  tsn/credit
 00  0000.00.0000  00_00000000.00000.00000  I  0000.00000000
 01  0000.00.0000  00_00000000.00000.00000  I  0000.00000000
--------------------------------------------------------------
Data block dump:
  dlhdr_size=16  freespace=7792  freepos=7892  symtab_offset=0  rowcnt=4
Row piece dump:
 rp 0 8114:  [74] flag=--H-FL--  itlidx=255    colcnt=11
  col 0: [6]
   0000: 05 55 53 45 52 32                               .USER2
  col 1: [6]
   0000: 05 49 5F 43 46 31                               .I_CF1
  col 2: [1]
   0000: 00                                              .
  col 3: [4]
   0000: 03 C2 9F CA                                     ....
  col 4: [6]
   0000: 05 49 4E 44 45 58                               .INDEX
  col 5: [2]
   0000: 01 80                                           ..
  col 6: [9]
   0000: 08 78 78 01 05 13 29 15 00                      .xx...)..
  col 7: [9]
   0000: 08 78 78 01 05 13 29 15 00                      .xx...)..
  col 8: [20]
   0000: 13 32 30 32 30 2D 30 31 2D 30 35 3A 31 39 3A 34 .2020-01-05:19:4
   0010: 31 3A 32 31                                     1:21
  col 9: [6]
   0000: 05 56 41 4C 49 44                               .VALID
  col 10: [2]
   0000: 01 4E                                           .N

Even if there are obvious differences in the implementation, this really looks similar to an Oracle block format with ITL list in the block header and row pieces with flags.

If you are looking for a compatible alternative to the Oracle Database, you have probably found some databases which try to accept the same SQL and PL/SQL syntax. But this is not sufficient to run an application with minimal changes. Here, with Tibero, I was really surprised to see how closely it copies the Oracle syntax, behavior and features. The dictionary views are similar, with some differences because the implementation is different. Tibero also has an equivalent of ASM and RAC. You can expect other blog posts about it, so do not hesitate to follow the RSS or Twitter feed.

The article 티베로 – The most compatible alternative to Oracle Database first appeared on Blog dbi services.

Patching ODA from 18.3 to 18.8


Introduction

19c will soon be available for your ODAs, but you may not be ready. Here is how to patch your ODA from 18.3 to 18.8, the very latest 18c release. The patch was applied here on X7-2M hardware.

Download the patch files

The patch number is 30518425. This patch is composed of 2 zip files that you will copy to your ODA.

Check free space on disk

Applying a patch requires several GB of free space, so check the free space on /, /opt and /u01 before starting. These filesystems should have about 20% free space. If needed, the /u01 and /opt filesystems can be extended online, as they are based on Linux VG/LV. For example, if you need an additional 20GB in /opt:

lvextend -L +20G /dev/mapper/VolGroupSys-LogVolOpt
resize2fs /dev/mapper/VolGroupSys-LogVolOpt
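To check the current free space before deciding whether an extension is needed, a simple check is enough:

df -h / /opt /u01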

Check processes

It's also recommended to check what's running on your ODA before patching; you'll do the same check after patching is complete:

ps -ef | grep pmon
oracle     863     1  0 Feb22 ?        00:00:06 ora_pmon_UBTMUR
oracle    8014     1  0  2019 ?        00:03:06 ora_pmon_TSTDEV
oracle    9901     1  0 Feb22 ?        00:00:11 ora_pmon_DEVUT2
grid     14044     1  0  2019 ?        00:22:39 asm_pmon_+ASM1
grid     17118     1  0  2019 ?        00:18:18 apx_pmon_+APX1
oracle   22087 19584  0 11:02 pts/0    00:00:00 grep pmon

ps -ef | grep tnslsnr
grid     15667     1  0  2019 ?        01:43:48 /u01/app/18.0.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid     15720     1  0  2019 ?        02:26:42 /u01/app/18.0.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
oracle   22269 19584  0 11:02 pts/0    00:00:00 grep tnslsnr
grid     26884     1  0  2019 ?        00:01:24 /u01/app/18.0.0.0/grid/bin/tnslsnr LISTENER1523 -inherit
grid     94369     1  0  2019 ?        00:01:16 /u01/app/18.0.0.0/grid/bin/tnslsnr LISTENER1522 -inherit

Check current version in use

Start to check current version on all components:

odacli describe-component

System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180717       up-to-date
}
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.21.r126801      up-to-date
BIOS                                      41040100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date
FIRMWAREDISK                              0121                  0112

Some component versions could be higher than those available at the time of the previous patch/deployment. In this example, the starting version is 18.3.0.0.0: this version has a straight path to 18.8.0.0.0.

The duration of the patch may vary depending on the components to be patched. I strongly advise you to check the target version for each component in the documentation; for an ODA X7-2M, here is the URL.

For this ODA, and compared to the currently deployed component versions, no OS update is embedded in the patch, which will shorten the patching time.

Preparing the patch

Copy the patch files on disk in a temp directory. Then unzip the files and update the repository:

cd /u01/tmp
unzip p30518425_188000_Linux-x86-64_1of2.zip
unzip p30518425_188000_Linux-x86-64_2of2.zip
rm -rf p30518425_188000_Linux-x86-64_*
odacli update-repository -f /u01/tmp/oda-sm-18.8.0.0.0-200209-server1of2.zip
odacli update-repository -f /u01/tmp/oda-sm-18.8.0.0.0-200209-server2of2.zip
odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 3
ID                                       Description                      Created                             Status
---------------------------------------- -------------------------------- ----------------------------------- ---------
7127c3ca-8fb9-4ac9-810d-b7e1aa0e32c5     Repository Update                February 24, 2020 01:13:53 PM CET   Success
5e294f03-3fa8-48ae-b193-219659bec4de     Repository Update                February 24, 2020 01:14:09 PM CET   Success

Patch the dcs components

Patching the dcs components is easy. Now it’s a 3-step process:

/opt/oracle/dcs/bin/odacli update-dcsagent -v 18.8.0.0.0
/opt/oracle/dcs/bin/odacli update-dcsadmin -v 18.8.0.0.0
/opt/oracle/dcs/bin/odacli update-dcscomponents -v 18.8.0.0.0
{
  "jobId" : "f36c44b3-4eb8-4a43-a323-d28a9836de74",
  "status" : "Success",
  "message" : null,
  "reports" : null,
  "createTimestamp" : "February 24, 2020 14:04:01 PM CET",
  "description" : "Job completed and is not part of Agent job list",
  "updatedTime" : "February 24, 2020 14:04:01 PM CET"
}
odacli list-jobs | tail -n 3
b8d474ab-3b5e-4860-a9f4-3d73497d6d4c     DcsAgent patching                                                           February 24, 2020 1:58:54 PM CET    Success
286b36a2-c1fc-48b9-8c99-197e24b6c8ba     DcsAdmin patching                                                           February 24, 2020 2:01:52 PM CET    Success

Note that the last step (update-dcscomponents) does not appear as a job in the list.

Check proposed version to patch to

Now the describe-component should propose the real available versions bundled in the patch:

odacli describe-component
System Version
---------------
18.8.0.0.0
 
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            18.8.0.0.0
GI                                        18.3.0.0.180717       18.8.0.0.191015
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       12.2.0.1.191015
[ OraDB11204_home1 ]                      11.2.0.4.180717       11.2.0.4.191015
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.21.r126801      4.0.4.47.r131913
BIOS                                      41040100              41060600
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              qdv1rf30
FIRMWAREDISK                              0121                  up-to-date

The OS and FIRMWAREDISK components don't need to be patched.

Pre-patching report

Let’s check if patching has the green light:

odacli create-prepatchreport -s -v 18.8.0.0.0
odacli describe-prepatchreport -i 12d61cda-1cef-40b9-ad7d-8e087007da23

 
Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  12d61cda-1cef-40b9-ad7d-8e087007da23
            Description:  Patch pre-checks for [OS, ILOM, GI]
                 Status:  SUCCESS
                Created:  February 24, 2020 2:05:41 PM CET
                 Result:  All pre-checks succeeded
 

 
Node Name
---------------
dbiora07
 
Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.8.0.0.0
Is patch location available     Success   Patch location is available
Verify OS patch                 Success   There are no packages available for
                                          an update
 
__ILOM__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.8.0.0.0
Is patch location available     Success   Patch location is available
Checking Ilom patch Version     Success   Successfully verified the versions
Patch location validation       Success   Successfully validated location
 
__GI__
Validate supported GI versions  Success   Validated minimum supported versions
Validate available space        Success   Validated free space under /u01
Verify DB Home versions         Success   Verified DB Home versions
Validate patching locks         Success   Validated patching locks

ODA is ready.

Patching infrastructure and GI

First, the Trace File Analyzer should be stopped, then the update-server command can be run:

/etc/init.d/init.tfa stop
odacli update-server -v 18.8.0.0.0
odacli describe-job -i 4d6aab0e-18c4-4bbd-8c16-a39c8a14f992
 Job details
----------------------------------------------------------------
                     ID:  4d6aab0e-18c4-4bbd-8c16-a39c8a14f992
            Description:  Server Patching
                 Status:  Success
                Created:  February 24, 2020 2:15:32 PM CET
                Message:


Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                February 24, 2020 2:15:39 PM CET    February 24, 2020 2:15:39 PM CET    Success
dcs-controller upgrade                   February 24, 2020 2:15:39 PM CET    February 24, 2020 2:15:42 PM CET    Success
Patch location validation                February 24, 2020 2:15:42 PM CET    February 24, 2020 2:15:42 PM CET    Success
dcs-cli upgrade                          February 24, 2020 2:15:42 PM CET    February 24, 2020 2:15:43 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:43 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Creating repositories using yum          February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:53 PM CET    Success
Updating YumPluginVersionLock rpm        February 24, 2020 2:15:53 PM CET    February 24, 2020 2:15:54 PM CET    Success
Applying OS Patches                      February 24, 2020 2:15:54 PM CET    February 24, 2020 2:26:44 PM CET    Success
Creating repositories using yum          February 24, 2020 2:26:44 PM CET    February 24, 2020 2:26:45 PM CET    Success
Applying HMP Patches                     February 24, 2020 2:26:45 PM CET    February 24, 2020 2:27:35 PM CET    Success
Patch location validation                February 24, 2020 2:27:35 PM CET    February 24, 2020 2:27:35 PM CET    Success
oda-hw-mgmt upgrade                      February 24, 2020 2:27:35 PM CET    February 24, 2020 2:28:04 PM CET    Success
OSS Patching                             February 24, 2020 2:28:04 PM CET    February 24, 2020 2:28:04 PM CET    Success
Applying Firmware Disk Patches           February 24, 2020 2:28:05 PM CET    February 24, 2020 2:28:11 PM CET    Success
Applying Firmware Expander Patches       February 24, 2020 2:28:11 PM CET    February 24, 2020 2:28:16 PM CET    Success
Applying Firmware Controller Patches     February 24, 2020 2:28:16 PM CET    February 24, 2020 2:29:02 PM CET    Success
Checking Ilom patch Version              February 24, 2020 2:29:03 PM CET    February 24, 2020 2:29:05 PM CET    Success
Patch location validation                February 24, 2020 2:29:05 PM CET    February 24, 2020 2:29:06 PM CET    Success
Save password in Wallet                  February 24, 2020 2:29:07 PM CET    February 24, 2020 2:29:07 PM CET    Success
Apply Ilom patch                         February 24, 2020 2:29:07 PM CET    February 24, 2020 2:42:35 PM CET    Success
Copying Flash Bios to Temp location      February 24, 2020 2:42:35 PM CET    February 24, 2020 2:42:35 PM CET    Success
Starting the clusterware                 February 24, 2020 2:42:35 PM CET    February 24, 2020 2:44:53 PM CET    Success
clusterware patch verification           February 24, 2020 2:55:27 PM CET    February 24, 2020 2:55:47 PM CET    Success
Patch location validation                February 24, 2020 2:55:47 PM CET    February 24, 2020 2:56:37 PM CET    Success
Opatch updation                          February 24, 2020 2:57:32 PM CET    February 24, 2020 2:57:36 PM CET    Success
Patch conflict check                     February 24, 2020 2:57:36 PM CET    February 24, 2020 2:58:27 PM CET    Success
clusterware upgrade                      February 24, 2020 2:58:27 PM CET    February 24, 2020 3:21:18 PM CET    Success
Updating GiHome version                  February 24, 2020 3:21:18 PM CET    February 24, 2020 3:21:53 PM CET    Success
Update System version                    February 24, 2020 3:22:26 PM CET    February 24, 2020 3:22:27 PM CET    Success
preRebootNode Actions                    February 24, 2020 3:22:27 PM CET    February 24, 2020 3:23:14 PM CET    Success
Reboot Ilom                              February 24, 2020 3:23:14 PM CET    February 24, 2020 3:23:14 PM CET    Success

The server reboots 5 minutes after the patch ends. On my X7-2M this operation lasted 1h15.

Let's check the component versions:

odacli describe-component
System Version
---------------
18.8.0.0.0
 
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.8.0.0.0            up-to-date
GI                                        18.8.0.0.191015       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       12.2.0.1.191015
[ OraDB11204_home1 ]                      11.2.0.4.180717       11.2.0.4.191015
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.21.r126801      4.0.4.47.r131913
BIOS                                      41040100              41060600
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              qdv1rf30
FIRMWAREDISK                              0121                  up-to-date

Neither ILOM nor BIOS has been updated. This is a bug.

Solving the unpatched ILOM and BIOS

An additional procedure (provided by MOS) is needed: the clusterware needs to be stopped, then the ILOM/BIOS patched manually:

/u01/app/18.0.0.0/grid/bin/crsctl stop crs
ipmiflash -v write ILOM-4_0_4_47_r131913-ORACLE_SERVER_X7-2.pkg force script config delaybios warning=0

Versions should now be fine:

odacli describe-component

System Version
---------------
18.8.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.8.0.0.0            up-to-date
GI                                        18.8.0.0.191015       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180717       12.2.0.1.191015
[ OraDB11204_home1 ]                      11.2.0.4.180717       11.2.0.4.191015
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.47.r131913      up-to-date
BIOS                                      41060600              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RE14              qdv1rf30
FIRMWAREDISK                              0121                  up-to-date

Patching the storage

Patching of the storage is much faster than patching the “server”:

odacli update-storage -v 18.8.0.0.0

odacli describe-job -i a97deb0d-2e0b-42d9-8b56-33af68e23f15
 
Job details
----------------------------------------------------------------
                     ID:  a97deb0d-2e0b-42d9-8b56-33af68e23f15
            Description:  Storage Firmware Patching
                 Status:  Success
                Created:  February 24, 2020 3:33:10 PM CET
                Message:
 
Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           February 24, 2020 3:33:11 PM CET    February 24, 2020 3:33:19 PM CET    Success
Applying Firmware Controller Patches     February 24, 2020 3:33:19 PM CET    February 24, 2020 3:40:53 PM CET    Success
preRebootNode Actions                    February 24, 2020 3:40:53 PM CET    February 24, 2020 3:40:54 PM CET    Success
Reboot Ilom                              February 24, 2020 3:40:54 PM CET    February 24, 2020 3:40:54 PM CET    Success

Another auto reboot is done after this step.

Patching the dbhomes

The time needed to patch the dbhomes depends on the number of dbhomes and the number of databases. In this example, 2 dbhomes are deployed:

odacli list-dbhomes


ID                                       Name                 DB Version                               Home Location                                 Status                 
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98     OraDB12201_home1     12.2.0.1.180717                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured
8a494efd-e745-4fe9-ace7-2369a36924ff     OraDB11204_home1     11.2.0.4.180717                          /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

odacli update-dbhome -i 8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98 -v 18.8.0.0.0
odacli describe-job -i 7c5589d7-564a-4d8b-b69a-1dc50162a4c6

Job details
----------------------------------------------------------------
                     ID:  7c5589d7-564a-4d8b-b69a-1dc50162a4c6
            Description:  DB Home Patching: Home Id is 8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98
                 Status:  Success
                Created:  February 25, 2020 9:18:49 AM CET
                Message:  WARNING::Failed to run the datapatch as db TSTY_RP7 is not running##WARNING::Failed to run the datapatch as db EXPY_RP7 is not registered with
clusterware##WARNING::Failed to run datapatch on db DEVM12_RP7Failed to run Utlrp script##WARNING::Failed t
 
Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validating dbHome available space        February 25, 2020 9:18:59 AM CET    February 25, 2020 9:18:59 AM CET    Success
clusterware patch verification           February 25, 2020 9:19:21 AM CET    February 25, 2020 9:19:31 AM CET    Success
Patch location validation                February 25, 2020 9:19:31 AM CET    February 25, 2020 9:19:31 AM CET    Success
Opatch updation                          February 25, 2020 9:19:31 AM CET    February 25, 2020 9:19:32 AM CET    Success
Patch conflict check                     February 25, 2020 9:19:32 AM CET    February 25, 2020 9:19:32 AM CET    Success
db upgrade                               February 25, 2020 9:19:32 AM CET    February 25, 2020 9:19:32 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:19:32 AM CET    February 25, 2020 9:19:35 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:19:35 AM CET    February 25, 2020 9:20:18 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:18 AM CET    February 25, 2020 9:20:55 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:55 AM CET    February 25, 2020 9:20:55 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:55 AM CET    February 25, 2020 9:20:56 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:20:56 AM CET    February 25, 2020 9:21:33 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:33 AM CET    February 25, 2020 9:21:40 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:40 AM CET    February 25, 2020 9:21:46 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:46 AM CET    February 25, 2020 9:21:57 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:21:57 AM CET    February 25, 2020 9:22:32 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:22:32 AM CET    February 25, 2020 9:23:12 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:23:12 AM CET    February 25, 2020 9:23:53 AM CET    Success
Update System version                    February 25, 2020 9:23:53 AM CET    February 25, 2020 9:23:53 AM CET    Success
updating the Database version            February 25, 2020 9:24:03 AM CET    February 25, 2020 9:24:08 AM CET    Success
updating the Database version            February 25, 2020 9:24:08 AM CET    February 25, 2020 9:24:14 AM CET    Success
updating the Database version            February 25, 2020 9:24:14 AM CET    February 25, 2020 9:24:18 AM CET    Success
updating the Database version            February 25, 2020 9:24:18 AM CET    February 25, 2020 9:24:23 AM CET    Success
updating the Database version            February 25, 2020 9:24:23 AM CET    February 25, 2020 9:24:28 AM CET    Success
updating the Database version            February 25, 2020 9:24:28 AM CET    February 25, 2020 9:24:33 AM CET    Success
updating the Database version            February 25, 2020 9:24:33 AM CET    February 25, 2020 9:24:38 AM CET    Success
updating the Database version            February 25, 2020 9:24:38 AM CET    February 25, 2020 9:24:45 AM CET    Success
updating the Database version            February 25, 2020 9:24:45 AM CET    February 25, 2020 9:24:51 AM CET    Success
updating the Database version            February 25, 2020 9:24:51 AM CET    February 25, 2020 9:24:56 AM CET    Success
updating the Database version            February 25, 2020 9:24:56 AM CET    February 25, 2020 9:25:02 AM CET    Success
updating the Database version            February 25, 2020 9:25:02 AM CET    February 25, 2020 9:25:08 AM CET    Success

odacli update-dbhome -i 8a494efd-e745-4fe9-ace7-2369a36924ff -v 18.8.0.0.0
odacli describe-job -i fbed248a-1d0d-4972-afd2-8b43ac8ad514

 Job details
----------------------------------------------------------------
                     ID:  fbed248a-1d0d-4972-afd2-8b43ac8ad514
            Description:  DB Home Patching: Home Id is 8a494efd-e745-4fe9-ace7-2369a36924ff
                 Status:  Success
                Created:  February 25, 2020 9:39:54 AM CET
                Message:  WARNING::Failed to run the datapatch as db CUR7_RP7 is not registered with clusterware##WARNING::Failed to run the datapatch as db SRS7_RP7 is not registered with clusterware##WARNING::Failed to run the datapatch as db CUX7_RP7 is not regi
 
Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validating dbHome available space        February 25, 2020 9:40:05 AM CET    February 25, 2020 9:40:05 AM CET    Success
clusterware patch verification           February 25, 2020 9:40:07 AM CET    February 25, 2020 9:40:12 AM CET    Success
Patch location validation                February 25, 2020 9:40:12 AM CET    February 25, 2020 9:40:17 AM CET    Success
Opatch updation                          February 25, 2020 9:40:49 AM CET    February 25, 2020 9:40:51 AM CET    Success
Patch conflict check                     February 25, 2020 9:40:51 AM CET    February 25, 2020 9:41:07 AM CET    Success
db upgrade                               February 25, 2020 9:41:07 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:27 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:27 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:28 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:28 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:29 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:29 AM CET    February 25, 2020 9:43:30 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:30 AM CET    February 25, 2020 9:43:30 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:30 AM CET    February 25, 2020 9:43:30 AM CET    Success
SqlPatch upgrade                         February 25, 2020 9:43:30 AM CET    February 25, 2020 9:43:33 AM CET    Success
Update System version                    February 25, 2020 9:43:33 AM CET    February 25, 2020 9:43:33 AM CET    Success
updating the Database version            February 25, 2020 9:43:35 AM CET    February 25, 2020 9:43:37 AM CET    Success
updating the Database version            February 25, 2020 9:43:37 AM CET    February 25, 2020 9:43:39 AM CET    Success
updating the Database version            February 25, 2020 9:43:39 AM CET    February 25, 2020 9:43:42 AM CET    Success
updating the Database version            February 25, 2020 9:43:42 AM CET    February 25, 2020 9:43:45 AM CET    Success
updating the Database version            February 25, 2020 9:43:45 AM CET    February 25, 2020 9:43:48 AM CET    Success
updating the Database version            February 25, 2020 9:43:48 AM CET    February 25, 2020 9:43:50 AM CET    Success
updating the Database version            February 25, 2020 9:43:50 AM CET    February 25, 2020 9:43:52 AM CET    Success
updating the Database version            February 25, 2020 9:43:52 AM CET    February 25, 2020 9:43:54 AM CET    Success
updating the Database version            February 25, 2020 9:43:54 AM CET    February 25, 2020 9:43:56 AM CET    Success
updating the Database version            February 25, 2020 9:43:56 AM CET    February 25, 2020 9:43:58 AM CET    Success
updating the Database version            February 25, 2020 9:43:58 AM CET    February 25, 2020 9:44:01 AM CET    Success
updating the Database version            February 25, 2020 9:44:01 AM CET    February 25, 2020 9:44:04 AM CET    Success
updating the Database version            February 25, 2020 9:44:04 AM CET    February 25, 2020 9:44:06 AM CET    Success
updating the Database version            February 25, 2020 9:44:06 AM CET    February 25, 2020 9:44:08 AM CET    Success
updating the Database version            February 25, 2020 9:44:08 AM CET    February 25, 2020 9:44:10 AM CET    Success

odacli list-dbhomes
ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98     OraDB12201_home1     12.2.0.1.191015                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured
8a494efd-e745-4fe9-ace7-2369a36924ff     OraDB11204_home1     11.2.0.4.191015                          /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

The 2 dbhomes are updated. Failures on the SqlPatch steps should be analyzed, but remember that datapatch cannot be run on standby databases as their dictionary is not open read-write.

Let’s check on primary 12c (and later) databases if everything is OK:

su - oracle
. oraenv <<< DEVC12
sqlplus / as sysdba
set serverout on
exec dbms_qopatch.get_sqlpatch_status;
...
Patch Id : 28163133
        Action : APPLY
        Action Time : 27-MAY-2019 16:02:11
        Description : DATABASE JUL 2018 RELEASE UPDATE 12.2.0.1.180717
        Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/28163133/22313390/28163133_apply_DEVC12_201
9May27_16_02_00.log
        Status : SUCCESS
 
Patch Id : 30138470
        Action : APPLY
        Action Time : 24-FEB-2020 17:10:35
        Description : DATABASE OCT 2019 RELEASE UPDATE 12.2.0.1.191015
        Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/30138470/23136382/30138470_apply_DEVC12_202
0Feb24_17_09_56.log
        Status : SUCCESS

PL/SQL procedure successfully completed.
 exit

Let’s check if everything is also OK on the 11g databases:

su - oracle
. oraenv <<< DEVC11
sqlplus / as sysdba
select * from dba_registry_history;
ACTION_TIME
---------------------------------------------------------------------------
ACTION                         NAMESPACE
------------------------------ ------------------------------
VERSION                                ID BUNDLE_SERIES
------------------------------ ---------- ------------------------------
COMMENTS
--------------------------------------------------------------------------------
...
17-MAY-19 11.18.48.769476 AM
APPLY                          SERVER
11.2.0.4                           180717 PSU
PSU 11.2.0.4.180717
 
25-FEB-20 09.57.40.969359 AM
APPLY                          SERVER
11.2.0.4                           191015 PSU
PSU 11.2.0.4.191015

Optional: update the dbclones

If you create a new dbhome now, it will be based on the previous dbclone. So you may need to provision a new dbclone to avoid that. If you need the latest dbclone for 18c, the patch number is 27604558:

cd /u01/tmp
unzip p27604558_188000_Linux-x86-64.zip
odacli update-repository -f /u01/tmp/odacli-dcs-18.8.0.0.0-191226-DB-18.8.0.0.zip

Now you are able to create a new dbhome from this dbclone:

odacli create-dbhome -v 18.8.0.0.191015
odacli list-dbhomes
ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
8a2f98f8-2010-4d26-a4b3-2bd5ad8f0b98     OraDB12201_home1     12.2.0.1.191015                          /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured
8a494efd-e745-4fe9-ace7-2369a36924ff     OraDB11204_home1     11.2.0.4.191015                          /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
395451e5-12b9-4851-b331-dd3e650e6d11     OraDB18000_home1     18.8.0.0.191015                          /u01/app/oracle/product/18.0.0.0/dbhome_1     Configured

Final checks

Let’s get the final versions:

odacli describe-component
System Version
---------------
18.8.0.0.0
 
Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.8.0.0.0            up-to-date
GI                                        18.8.0.0.191015       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.191015       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.191015       up-to-date
[ OraDB18000_home1 ]                      18.8.0.0.191015       up-to-date
}
DCSAGENT                                  18.8.0.0.0            up-to-date
ILOM                                      4.0.4.47.r131913      up-to-date
BIOS                                      41060600              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER                        QDV1RF30              up-to-date
FIRMWAREDISK                              0121                  up-to-date

Looks good. Please also check the running processes and compare them to the initial status.

Cleanse the old patches

It’s now possible to cleanse the old patches: they will never be used again. For this ODA, the history was:

Deploy = 12.2.1.2.0 => Patch 12.2.1.4.0 => Patch 18.3.0.0.0 => Patch 18.8.0.0.0

Check the filesystems usage before and after cleansing:

df -h /opt
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolOpt
                       79G   61G   15G  81% /opt

df -h /u01
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
                      197G  116G   71G  63% /u01

odacli cleanup-patchrepo -cl -comp db,gi -v 12.2.1.2.0
odacli cleanup-patchrepo -cl -comp db,gi -v 12.2.1.4.0
odacli cleanup-patchrepo -cl -comp db,gi -v 18.3.0.0.0


df -h /opt
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolOpt
                       79G   39G   37G  51% /opt
df -h /u01
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolU01
                      197G  116G   71G  63% /u01

Conclusion

Your ODA is now on the latest 18c version. The next upgrade will be a bigger step: the OS will jump to Linux 7 and the Oracle stack to 19.6. Old X4-2 ODAs will be stuck on 18.8. And remember that 19.5 is not a production release.

Cet article Patching ODA from 18.3 to 18.8 est apparu en premier sur Blog dbi services.

ODA 19.6 is available


Introduction

It’s been 1 year now that 19c is available on-premises. Finally, 19c is available on ODA too, in a real production release. Let’s have a glance at what’s inside.

Is 19.6 available for my old ODA?

The 19.6 release, like most recent releases, is the same for all ODAs. But the oldest ODAs will not support it: the first-gen ODA (now 8 years old), the X3-2 (7 years old) and the X4-2 (6 years old) won’t support this very latest version. The X4-2 just gets stuck on 18.8, as the 19.6 release and later ones will not be compatible with that hardware. All other ODAs, from X5-2 to X8-2 in S/M/L/HA flavours, are OK to run this release.

What is the benefit of 19.6 over 18.8?

You need a 19.x ODA release to be able to run 19c databases. Why do you need 19c over an 18c database? Or why not wait for the 20c database? Because 19c is the long-term release of the Oracle database, meaning that you will have standard support until 2023 and extended support until 2026. Both 11gR2 and 12cR1 already need extended support today, 12cR2’s standard support will end this year with no extended support available, and 18c’s standard support will end next year with no extended support available, because they are short-term releases. Patching to 19.6 definitely makes sense.

Am I able to run older databases with 19.6?

Yes for sure! Oracle has always been conservative on ODA, letting users run quite old databases on modern hardware. 19.6 supports 19c databases, as well as 18c, 12cR2, 12cR1 and 11gR2 databases. But you’ll need extended support if you’re still using 12cR1 and 11gR2 databases.

Does 19.6 make my 19c migration easier?

Yes, definitely. ODA has benefited from a nice odacli update-database feature for a while, and this is now replaced by odacli move-database. By moving a database to a new home you can upgrade it to a new version without using another tool.
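
As an illustration only: the flags below are my assumption (-i for the database ID, -dh for the destination dbhome ID) and the IDs are placeholders, so check the odacli move-database help on your appliance before relying on it.

# list the databases and the dbhomes to get the IDs
odacli list-databases
odacli list-dbhomes
# move the database to the target dbhome (hypothetical placeholders)
odacli move-database -i <database_id> -dh <destination_dbhome_id>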

What are the main changes over 18.8?

If you were using an older version, it will not feel like a big difference, but ODA 19.6 is provided with Linux 7 instead of Linux 6. This is the main reason we had to wait all this time for this new version. This Linux is more modern, more secure and surely more capable on the newest servers.

Is it possible to upgrade to 19.6 from my current release?

If your ODA is already running the latest 18.8, it’s OK for a patch. If you’re still using older 18c releases, like 18.3, 18.5 or 18.7, you will need to upgrade to 18.8 first. Please take a look at my blog post.

If you’re coming from an older version, like 12.2 or 12.1, it will be a long journey to update to 19.6, simply because you’ll first have to upgrade to 18.3, then 18.8 and finally 19.6. Applying 3 or more patches is quite a big amount of work, and you may encounter more problems than when applying fewer patches.

But remember, I already advised to always consider reimaging as a different way of patching. So don’t hesitate to start from scratch, and make a complete refresh, especially if you’re still running these old versions.

What about coming from 19.5?

I hope you’re not running this version, because you cannot upgrade to 19.6. Applying 19.6 needs a complete reimaging.

Is it complex to patch?

Patching is not complex: the operations to apply the patch are limited compared to what you would have done on a classic server. But preparing the patch, and debugging if something goes wrong, is complex. If you have never patched an ODA before, it could be challenging. It is even more challenging because of the major OS upgrade to Linux 7, and the documentation shows the risk with the part “Recovering from a Failed Operating System Upgrade”. If your ODA system has been tuned these past years, it will clearly be tough to patch. Once again, consider reimaging before going straight to the patch. Reimaging is always successful if everything has been backed up beforehand.

What’s new for Standard Edition 2 users?

As you may know, RAC is no longer provided with 19c SE2, but you now have a free High Availability feature in this version that brings automatic failover to the other node in case of failure. It is a kind of RAC One Node feature, assuming you’re using HA ODAs, so one of these four: X5-2, X6-2HA, X7-2HA and X8-2HA.

Conclusion

19c was missing on ODA, and that was quite annoying: how could we recommend this platform if 19c was not available in 2020? Now we need to deploy 19.6 on new ODAs and reimage or patch existing ODAs to see if this release meets all our (high) expectations. Stay tuned for my first feedback.

Cet article ODA 19.6 is available est apparu en premier sur Blog dbi services.

Migrating From Oracle Non-CDB 19c to Oracle 20c


With Oracle 20c, the non-multitenant architecture is no longer supported. So, people will have to migrate their databases to a container database if they want to use Oracle 20c. There are many methods to transform a non-CDB database into a pluggable one:
-Datapump
-Full Transportable Tablespaces
-Plugging the non-CDB database into a CDB, upgrading the plugged database and then converting it
-Upgrading the non-CDB database, then plugging it into the container and converting it (but I am not sure this method will work with Oracle 20c as there is no non-CDB architecture anymore)
We can find useful information about these methods in the Oracle documentation and on Mike Dietrich’s blog.

In this blog I am going to use the plug-in method to migrate a non-CDB Oracle 19c database, prod19,

********* dbi services Ltd. *********
STATUS                 : OPEN
DB_UNIQUE_NAME         : prod19
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : NO
FORCE_LOGGING          : NO
VERSION                : 19.0.0.0.0
CDB Enabled            : NO
*************************************

into an Oracle 20c container database prod20

********* dbi services Ltd. *********

STATUS                 : OPEN
DB_UNIQUE_NAME         : prod20
OPEN_MODE              : READ WRITE
LOG_MODE               : ARCHIVELOG
DATABASE_ROLE          : PRIMARY
FLASHBACK_ON           : YES
FORCE_LOGGING          : YES
VERSION                : 20.0.0.0.0
CDB Enabled            : YES
List PDB(s)  READ ONLY : PDB$SEED
List PDB(s) READ WRITE : PDB1
*************************************

The first step is to open the source database in READ ONLY mode and then generate the metadata XML file of the non-CDB prod19 database using the DBMS_PDB.DESCRIBE procedure.

SQL> exec DBMS_PDB.DESCRIBE('/home/oracle/upgrade/prod19.xml');

Procedure PL/SQL terminee avec succes.

SQL>

The generated XML file is used to plug the non-CDB database into the container prod20. But before plugging the database, I run the following script to detect possible errors:

set serveroutput on
DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/home/oracle/upgrade/prod19.xml',
           pdb_name       => 'prod19')
      WHEN TRUE THEN 'YES' ELSE 'NO'
    END;
BEGIN
  DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible?  ==>  ' || compatible);
END;
/

When querying PDB_PLUG_IN_VIOLATIONS, I can see the following error:
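
For reference, a query like this one lists the violations (the column list and the filter on the PDB name are just my choice; the view itself is the standard PDB_PLUG_IN_VIOLATIONS):

-- list the plug-in violations reported for the described non-CDB
select name, cause, type, status, message
  from pdb_plug_in_violations
 where name = 'PROD19'
 order by time;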

PROD19	 ERROR	   PDB's version does not match CDB's version: PDB's
		   version 19.0.0.0.0. CDB's version 20.0.0.0.0.

But as explained by Mike Dietrich in his blog, I ignore the error and plug prod19 into the CDB prod20:

SQL> create pluggable database prod19 using '/home/oracle/upgrade/prod19.xml' file_name_convert=('/u01/app/oracle/oradata/PROD19/','/u01/app/oracle/oradata/PROD20/prod19/');

Base de donnees pluggable creee.

At this stage the database prod19 is plugged into the container prod20, but it still needs to be upgraded to Oracle 20c.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PROD19                         MOUNTED
SQL> alter session set container=prod19 ;

Session modifiee.

SQL> startup upgrade
Base de donnees pluggable ouverte.

And we can now upgrade the PDB prod19 using dbupgrade

oracle@oraadserverupgde:/home/oracle/ [prod20 (CDB$ROOT)] dbupgrade -l /home/oracle/logs -c prod19

A few minutes later, the upgrade is finished. Below is some truncated output:

...
...
------------------------------------------------------
Phases [0-106]         End Time:[2020_05_01 17:13:42]
Container Lists Inclusion:[PROD19] Exclusion:[NONE]
------------------------------------------------------

Grand Total Time: 1326s [PROD19]

 LOG FILES: (/home/oracle/upgrade/log//catupgrdprod19*.log)

Upgrade Summary Report Located in:
/home/oracle/upgrade/log//upg_summary.log

     Time: 1411s For PDB(s)

Grand Total Time: 1411s

 LOG FILES: (/home/oracle/upgrade/log//catupgrd*.log)


Grand Total Upgrade Time:    [0d:0h:23m:31s]
oracle@oraadserverupgde:/home/oracle/ [prod20 (CDB$ROOT)]

A quick check of the log files to see if everything went fine during the upgrade:

oracle@oraadserverupgde:/home/oracle/upgrade/log/ [prod19] cat upg_summary.log

Oracle Database Release 20 Post-Upgrade Status Tool    05-01-2020 17:13:2
Container Database: PROD20
[CON_ID: 4 => PROD19]

Component                               Current         Full     Elapsed Time
Name                                    Status          Version  HH:MM:SS

Oracle Server                          UPGRADED      20.2.0.0.0  00:11:26
JServer JAVA Virtual Machine           UPGRADED      20.2.0.0.0  00:02:07
Oracle XDK                             UPGRADED      20.2.0.0.0  00:00:33
Oracle Database Java Packages          UPGRADED      20.2.0.0.0  00:00:05
Oracle Text                            UPGRADED      20.2.0.0.0  00:01:02
Oracle Workspace Manager               UPGRADED      20.2.0.0.0  00:00:41
Oracle Real Application Clusters       UPGRADED      20.2.0.0.0  00:00:00
Oracle XML Database                    UPGRADED      20.2.0.0.0  00:01:57
Oracle Multimedia                      UPGRADED      20.2.0.0.0  00:00:39
LOCATOR                                UPGRADED      20.2.0.0.0  00:01:11
Datapatch                                                        00:00:30
Final Actions                                                    00:00:45
Post Upgrade                                                     00:00:06

Total Upgrade Time: 00:21:14 [CON_ID: 4 => PROD19]

Database time zone version is 32. It is older than current release time
zone version 34. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time:    [0d:0h:23m:31s]
oracle@oraadserverupgde:/home/oracle/upgrade/log/ [prod19]

After the upgrade, we still have to convert prod19 into a proper pluggable database by running noncdb_to_pdb.sql.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PROD19                         MOUNTED
SQL> alter session set container=prod19;

Session modifiee.

SQL> @?/rdbms/admin/noncdb_to_pdb.sql

After the noncdb_to_pdb script completes successfully, the PDB prod19 can now be opened in read-write mode:
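
The open itself is not shown above; a minimal sketch would be the following (run from CDB$ROOT, the SAVE STATE being optional to keep the PDB open across CDB restarts):

-- open the converted PDB and optionally make this state persistent
alter pluggable database prod19 open;
alter pluggable database prod19 save state;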

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           READ WRITE NO
         4 PROD19                         READ WRITE NO

Cet article Migrating From Oracle Non-CDB 19c to Oracle 20c est apparu en premier sur Blog dbi services.

Terraform and Oracle Cloud Infrastructure


Introduction

When you learn a cloud technology like OCI, the one from Oracle, you start building your demo infrastructure with the web interface and numerous clicks. It’s convenient and easy to handle, even more so if you’re used to infrastructure basics: network, routing, firewalling, servers, etc. But when it comes to building complex infrastructures with multiple servers, subnets, rules and databases, it’s more than a few clicks to do. And rebuilding a clone infrastructure (for example for testing purposes) can be a nightmare.

Is it possible to script an infrastructure?

Yes, for sure: most of the cloud providers have a command line interface, and actually all the clouds are based on a command line interface with a web console on top of it. But scripting all the commands is not very digestible.

Infrastructure as Code

Why couldn’t we manage an infrastructure as if it were a piece of software? That’s the purpose of “Infrastructure as Code”. The benefits seem obvious: faster deployment, reusable code, automation of infrastructure deployment, scalability, reduced cost with an embedded “drop infrastructure” feature, …

There are multiple tools to do IaC, but Oracle recommends Terraform. And it looks like the best solution for now.

What is Terraform?

The goal of Terraform is to help infrastructure administrators model and provision large and complex infrastructures. It’s not dedicated to OCI, as it supports multiple providers, so if you think about an infrastructure based on both OCI and Microsoft Azure (as they are becoming friends), it makes even more sense.

Terraform uses a specific language called HCL, the HashiCorp Configuration Language. Obviously, it’s compatible with code repositories, like Git. Templates are available to ease your job when you’re a beginner.

The main steps for terraforming an infrastructure are the following (a command sketch follows the list):
1) write your HCL code (describe)
2) preview the execution by reading the configuration (plan)
3) build the infrastructure (apply)
4) later on, possibly delete the infrastructure (destroy)
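
A minimal sketch of this workflow with the Terraform CLI (the directory name is just an example):

cd my-oci-stack        # directory containing the .tf files
terraform init         # download the required providers (e.g. the OCI plugin)
terraform plan         # preview what would be created or changed
terraform apply        # build the infrastructure
terraform destroy      # delete everything when it is not needed anymore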

3 ways of using Terraform with OCI

You can use Terraform by copying the binary onto your computer (Windows/Mac/Linux); it’s quite easy to set up and use (no installation, only one binary). Terraform can also run from a VM already in the cloud.

Terraform is also available in SaaS mode: just sign up on the terraform.io website and you will be able to work with Terraform without installing anything.

You can also use Terraform through Oracle Resource Manager (ORM) inside OCI. ORM is a free service provided by Oracle and based on the Terraform language. ORM manages stacks, each stack being a set of Terraform files you bring to OCI as a zip file. From these stacks, ORM lets you perform the actions you would have done in Terraform: plan, apply and destroy.

Typical use cases

Terraform is quite nice for making cross-platform deployments, building demos, giving people the ability to build an infrastructure as a self-service, making a Proof of Concept, …

Terraform is also targeted at DevOps engineers, giving them the ability to deploy a staging environment, fix the issues and then deploy production environments reusing the Terraform configuration.

How does it work?

A Terraform configuration is actually a directory with one or multiple .tf files (depending on your preferences). As HCL is not a scripting language, the blocks in the file(s) do not describe any execution order.

During the various steps previously described, special files and subfolders appear: the terraform.tfstate file holding the current state and the .terraform folder, a kind of cache.

If you need to script your infrastructure deployment, you can use Python, Bash, Powershell or other tools to call the binary.

To authorize your Terraform binary to create resources in the cloud, you’ll have to provide the API key of an OCI user with enough privileges.

As cloud providers push updates quite often, Terraform will keep the plugin of your cloud provider updated regularly.

Terraform also manages dependencies (for example a VM depending on another one), while independent tasks are done in parallel to speed up the infrastructure deployment.

Some variables can be provided as input (most often through environment variables), for example for naming the compartment. Imagine you want to deploy several test infrastructures isolated from each other.
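
Terraform picks up any environment variable prefixed with TF_VAR_ as an input variable. A small sketch (the variable name compartment_name is hypothetical and must match a variable declared in your configuration):

# each tester deploys an isolated copy by just changing the value
export TF_VAR_compartment_name="test-env-01"
terraform apply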

Conclusion

Terraform is a great tool to leverage cloud benefits, even for a simple infrastructure. Don’t miss that point!

Cet article Terraform and Oracle Cloud Infrastructure est apparu en premier sur Blog dbi services.

The myth of NoSQL (vs. RDBMS) agility: adding attributes


By Franck Pachot

.
There are good reasons for NoSQL and semi-structured databases. And there are also many mistakes and myths. If people move from RDBMS to NoSQL for the wrong reasons, they will have a bad experience and this finally does a disservice to NoSQL's reputation. Those myths were spread by some database newbies who didn't learn SQL and relational databases. And, rather than learning the basics of data modeling and the capabilities of SQL for processing data sets, they thought they had invented the next generation of persistence… when they actually came back to what was there before the invention of RDBMS: a hierarchical semi-structured data model. And now they encounter the same problems that the relational model solved 40 years ago. This blog post is about one of those myths.

Myth: adding a column has to scan and update the whole table

I have read and heard that too many times. Ideas like: RDBMS and SQL are not agile enough to follow the evolution of the data domain. Or: NoSQL data stores, because they are loose on the data structure, make it easier to add new attributes. The wrong, but unfortunately common, idea is that adding a new column to a SQL table is an expensive operation because all rows must be updated. Here are some examples (just random examples to show how widely this idea is spread, even among smart experts and forums with a good reputation):

A comment on twitter: “add KVs to JSON is so dramatically easier than altering an RDBMS table, especially a large one, to add a new column”

A question on StackOverflow: “Is ‘column-adding’ (schema modification) a key advantage of a NoSQL (mongodb) database over a RDBMS like MySQL” https://stackoverflow.com/questions/17117294/is-column-adding-schema-modification-a-key-advantage-of-a-nosql-mongodb-da/17118853. They are talking about months for this operation!

An article on Medium: “A migration which would add a new column in RDBMS doesn’t require this Scan and Update style migration for DynamoDB” https://medium.com/serverless-transformation/how-to-remain-agile-with-dynamodb-eca44ff9817.

Those are just examples. People hear it. People repeat it. People believe it. And they don’t test. And they don’t learn. They do not crosscheck with documentation. They do not test with their current database. When it is so easy to do.

Adding a column in SQL

Actually, adding a column is a fast operation in the major modern relational databases. I’ll create a table. Check the size. Then add a nullable column without default. Check the size. Then add a column with a default value. Check the size again. Size staying the same means no rows updated. Of course, you can test further: look at the elapsed time on a large table, and the amount of reads, and the redo/WAL generated,… You will see nothing in the major current RDBMS. Then you actually update all rows and compare. There you will see the size, the time, the reads, and the writes and understand that, with an explicit update the rows are actually updated. But not with the DDL to add a column.

PostgreSQL

Here is the example in PostgreSQL 12 in dbfiddle:
https://dbfiddle.uk/?rdbms=postgres_12&fiddle=9acf5fcc62f0ff1edd0c41aafae91b05

Another example where I show the WAL size:
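
The measurement itself can be reproduced with a small psql sketch like this one (pg_current_wal_lsn and pg_wal_lsn_diff exist since PostgreSQL 10; \gset stores the value in a psql variable):

-- remember the current WAL position
select pg_current_wal_lsn() as before_lsn \gset
-- adding the column with a default generates almost no WAL
alter table demo add column c numeric default 42 not null;
select pg_wal_lsn_diff(pg_current_wal_lsn(), :'before_lsn') as wal_bytes_add_column;
-- compare with a real update of all rows
select pg_current_wal_lsn() as before_lsn \gset
update demo set c=42;
select pg_wal_lsn_diff(pg_current_wal_lsn(), :'before_lsn') as wal_bytes_update;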

Oracle Database

Here is the example in Oracle Database 18c in dbfiddle:
https://dbfiddle.uk/?rdbms=oracle_18&fiddle=b3a2d41636daeca5f8e9ea1d771bbd23

Another example:
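
The same idea in Oracle terms: a sketch comparing the session statistic 'redo size' around the DDL and around a real update (it requires select privileges on the v$ views):

-- redo generated so far by this session
select s.value from v$mystat s join v$statname n on n.statistic# = s.statistic#
 where n.name = 'redo size';
-- almost no redo for the metadata-only DDL
alter table demo add (c number default 42 not null);
-- re-run the redo size query, then compare with a real update of every row
update demo set c=42;
-- the redo size query now shows a huge delta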

Yes, I even tested in Oracle7 where, at that time, adding a not null column with a default value actually scanned the table. The workaround is easy with a view. Adding a nullable column (which is what you do in NoSQL) was already a fast operation, and that’s 40 years ago!

MySQL

Here is the example in MySQL 8 in dbfiddle:
https://dbfiddle.uk/?rdbms=mysql_8.0&fiddle=8c14e107e1f335b505565a0bde85f6ec

Microsoft SQL Server

It seems that the table I use is too large for dbfiddle but I’ve run the same on my laptop:


1> set statistics time on;
2> go

1> create table demo (x numeric);
2> go
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 2 ms,  elapsed time = 2 ms.

1> with q as (select 42 x union all select 42)
2> insert into demo
3> select 42 x from q a cross join q b cross join q c cross join q d cross join q e cross join q f cross join q g cross join q h cross join q i cross join q j cross join q k cross join q l cross join q m cross join q n cross join q o cross join q p cross join q r cross join q s cross join q t cross join q u;
4> go
SQL Server parse and compile time:
   CPU time = 11 ms, elapsed time = 12 ms.

 SQL Server Execution Times:
   CPU time = 2374 ms,  elapsed time = 2148 ms.

(1048576 rows affected)

1> alter table demo add b numeric ;
2> go
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 3 ms.

1> alter table demo add c numeric default 42 not null;
2> go
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 1 ms,  elapsed time = 2 ms.

2> go
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.
x                    b                    c
-------------------- -------------------- --------------------
                  42                 NULL                   42
                  42                 NULL                   42
                  42                 NULL                   42

(3 rows affected)

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

3> go
SQL Server parse and compile time:
   CPU time = 0 ms, elapsed time = 0 ms.

 SQL Server Execution Times:
   CPU time = 3768 ms,  elapsed time = 3826 ms.

(1048576 rows affected)

2 milliseconds for adding a column with a value, visible on all those million rows (and it can be more).

YugaByte DB

In a distributed database, metadata must be updated in all nodes, but this is still in milliseconds whatever the table size is:

I didn’t show the test with NOT NULL and a default value as I encountered an issue (adding the column is fast but the default value is not selected). I don’t have the latest version (YugaByte DB is open source and in very active development) and this issue will probably be fixed.

Tibero

Tibero is a database with very high compatibility with Oracle. I’ve run the same SQL. But this version 6 seems to be compatible with Oracle 11 where adding a non null column with default had to update all rows:

You can test on any other database with code similar to this one:


-- CTAS CTE Cross Join is the most cross-RDBMS I've found to create one million rows
create table demo as with q as (select 42 x union all select 42) select 42 x from q a cross join q b cross join q c cross join q d cross join q e cross join q f cross join q g cross join q h cross join q i cross join q j cross join q k cross join q l cross join q m cross join q n cross join q o  cross join q p  cross join q r cross join q s cross join q t  cross join q u;
-- check the time to add a column
alter table demo add b numeric ;
-- check the time for a column with a value set for all existing rows
alter table demo add c numeric default 42 not null;
-- check that all rows show this value
select * from demo order by x fetch first 3 rows only;
-- compare with the time to really update all rows
update demo set c=42;

and please don’t hesitate to comment on this blog post or the following tweet with your results:

NoSQL semi-structured

The myth comes from old versions of some databases that did not implement ALTER TABLE .. ADD in an optimal way. And the NoSQL inventors probably knew only MySQL, which was late in this area. Who said that MySQL evolution suffered from its acquisition by Oracle? It reduced the gap with the other databases, as with this column-adding optimisation.

If you stay with this outdated knowledge, you may think that NoSQL with semi-structured collections is more Agile, right? Yes, of course, you can add a new attribute when inserting a new item. It has zero cost and you don’t have to declare it to anyone. But what about the second case I tested in all those SQL databases, where you want to define a value for the existing rows as well? As we have seen, SQL allows that with a DEFAULT clause. In NoSQL you have to scan and update all items. Or you need to implement some logic in the application, like “if null then value”. That is not agile at all: as a side effect of a new feature, you need to change all data or all code.

Relational databases encapsulate the physical storage behind a logical view. And in addition to that, this logical view protects the existing application code when the model evolves. This is E.F. Codd’s rule number 9: Logical Data Independence. You can deliver declarative changes to your structures without modifying any procedural code or stored data. Now, who is agile?

Structured data have metadata: performance and agility

How does it work? The RDBMS dictionary holds information about the structure of the rows, and this goes beyond a simple column name and datatype. The default value is defined there, which is why the ADD COLUMN was immediate. This is just an update of metadata. It doesn’t touch any existing data: performance. It exposes a virtual view to the application: agility. With Oracle, you can even version those views and deliver them to the application without interruption. This is called Edition Based Redefinition.
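
The stored default is easy to see in the dictionary. A quick sketch on the DEMO table used above (Oracle dialect, querying the standard USER_TAB_COLUMNS view):

-- the default value is stored as dictionary metadata, no row was touched by the ADD COLUMN
select column_name, data_default, nullable
  from user_tab_columns
 where table_name = 'DEMO';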

There are other smart things in the RDBMS dictionary. For example, when I add a column with the NOT NULL attribute, this assertion is guaranteed. I don’t need any code to check whether the value is set or not. Same with constraints: one declaration in a central dictionary makes all code safer and simpler because the assertion is guaranteed without additional testing. No need to check for data quality as it is enforced by design. Without it, how many sanity assumptions do you need to add in your code to ensure that erroneous data will not corrupt everything around? We have seen adding a column, but think about something even simpler. Naming things is among the most important tasks in IT. Allow yourself to realize that you made a mistake, or that some business concept changed, and rename a column to something more meaningful. That can be done easily, even with a view to keep compatibility with previous code. Changing an attribute name in a large collection of JSON items is not so easy.
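
As a small sketch of that renaming point (Oracle syntax; the new name AMOUNT and the view name DEMO_COMPAT are just examples):

-- rename the column to a more meaningful name
alter table demo rename column c to amount;
-- keep previous code working through a view exposing the old name
create or replace view demo_compat as select x, b, amount as c from demo;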

Relational databases have been invented for agility

Let me quote the reason why CERN decided to use Oracle in 1982 for the LEP – the ancestor of the LHC: Oracle The Database Management System For LEP: “Relational systems transform complex data structures into simple two-dimensional tables which are easy to visualize. These systems are intended for applications where preplanning is difficult…”

Preplanning not needed… isn’t that the definition of Agility with 20th century words?

Another good read to clear some myths: Relational Database: A Practical Foundation for Productivity by E.F. Codd. Some problems that are solved by relational databases are the lack of distinction between “the programmer’s (logical) view of the data and the (physical) representation of data in storage”, the “subsequent changes in data description” forcing code changes, and “programmers were forced to think and code in terms of iterative loops” because of the lack of set processing. Who says that SQL and joins are slow? Are your iterative loops smarter than hash joins, nested loops and sort merge joins?

Never say No

I’m not saying that NoSQL is good or bad or better or worse. It is bad only when the decision is based on myths rather than facts. If you want agility on your data domain structures, stay relational. If you want to allow any future query pattern, stay relational. However, there are also some use cases that can fit in a relational database but may also benefit from another engine with optimal performance in key-value lookups. I have seen tables full of session state with a PRIMARY KEY (user or session ID) and a RAW column containing some data meaningful only for one application module (login service) and without durable purpose. They are acceptable in a SQL table if you take care of the physical model (you don’t want to cluster those rows in a context with many concurrent logins). But a key-value store may be more suitable. We still see Oracle tables with LONG datatypes. If you like that, you probably need a key-value NoSQL store. Databases can store documents, but that’s luxury. They benefit from consistent backups and HA, but at the price of operating a very large and growing database. Timeseries, or graphs, are not easy to store in relational tables. NoSQL databases like AWS DynamoDB are very efficient for those specific use cases. But this is when all access patterns are known from the design. If you know your data structure and cannot anticipate all queries, then relational database systems (which means more than a simple data store) and SQL (the 4th generation declarative language to manipulate data by sets) are still the best choice for agility.

Cet article The myth of NoSQL (vs. RDBMS) agility: adding attributes est apparu en premier sur Blog dbi services.


APEX Connect 2020 – Day 1


This year the APEX Connect conference goes virtual, like all other major IT events, due to the pandemic. Unfortunately it spans only two days, with mixed topics around APEX, like JavaScript, PL/SQL and much more. After the welcome speech and the very interesting keynote about “APEX 20.1 and beyond: News from APEX Development” by Carsten Czarski, I decided to attend presentations on the following topics:
– The Basics of Deep Learning
– “Make it faster”: Myths about SQL performance
– Using RESTful Services and Remote SQL
– The Ultimate Guide to APEX Plug-ins
– Game of Fraud Detection with SQL and Machine Learning

APEX 20.1 and beyond: News from APEX Development

Carsten Czarski from the APEX development team shared the evolution of the latest APEX releases up to 20.1, released on April 23rd.
Since APEX 18.1 there are 2 releases per year. There are no major or minor releases; all are managed at the same level.
Besides those releases, PSE bundles are provided to fix critical issues.
Among the recent features, a couple caught my attention:
– Faceted search
– Wider integration of Oracle TEXT
– Application backups
– Session timeout warnings
– New URL
And more to come with next releases like:
– Native PDF export
– New download formats
– External data sources
A lot to test and enjoy!

The Basics of Deep Learning

Artificial Intelligence (AI) is now part of our life, mostly without us noticing it. Machine Learning (ML) is part of AI and Deep Learning (DL) is a specific sub-part of ML.
ML is used in different sectors and used for example in:

  • SPAM filters
  • Data Analytics
  • Medical Diagnosis
  • Image recognition

and much more…
DL integrates automated feature extraction, which makes it suited for:

  • Natural Language processing
  • Speech recognition
  • Text to Speech
  • Machine translation
  • Referencing to Text

You can find an example of a text generator based on DL with Talk to Transformer.
It is also heavily used in visual recognition (feature-based recognition). ML depends on the datasets and preset models used, so it’s key to have a large set of data to cover a wide range of possibilities. DL has made a big step forward with Convolutional Neural Networks (by Yann LeCun).
DL is based on complex mathematical models in Neural Networks at different levels, which use activation functions, model design, hyperparameters, backpropagation, loss functions and optimizers.
You can learn how it is implemented in image recognition at pyimagesearch.com
Another nice example of DL with reinforcement learning is AI learns to park

“Make it faster”: Myths about SQL performance

Performance of the Database is a hot topic when it comes to data centric application development like with APEX.
The pillars of DB performance are following:
– Performance planning
– Instance tuning
– SQL tuning
To be efficient performance must be considered at every stage of a project.
A recurring statement is: “Index is GOOD, full table scan is BAD”
But when is an index better than a full table scan? As a rule of thumb, you can consider an index when the selectivity is less than 5%.
To improve the performance there are also options like:
– KIWI (Kill It With Iron) where more hardware should solve the performance issue
– Hints, where you cut branches of the optimizer decision tree to force a choice (left alone, the optimizer always picks the plan it estimates as the least expensive)
Unfortunately there is no golden hint able to improve performance whenever it’s used

Using RESTful Services and Remote SQL

REST web services are based on URIs returning different types of data like HTML, XML, CSV or JSON.
Those web services rely on request methods:

  • POST to insert data
  • PUT to update/replace data
  • GET to read data
  • DELETE to DELETE data
  • PATCH to update/modify data

The integration of web services in APEX allows making use of data outside of the Oracle database and connecting to services like:
– Jira
– GitHub
– Online Accounting Services
– Google services
– …
The web services module of an ORDS instance provides extensions on top of plain REST which support APEX out of the box but also enable Remote SQL.
Thanks to that, SQL statements can be sent over REST to the Oracle Database and executed remotely, returning the data formatted as per REST standards.

The Ultimate Guide to APEX Plug-ins

Even though APEX plug-ins are not trivial to build they have benefits like:
– Introduction of new functionality
– Modularity
– Reusability
which makes them very interesting.
There are already a lot of plug-ins available, which can be found on apex.world or from professional providers like FOEX.
What is important to look at with plug-ins is support, quality, security and updates.
The main elements of a plug-in are:
– Name
– Type
– Callbacks
– Standard attributes
– Custom attributes
– Files (CSS, JS, …)
– Events
– Information
– Help Text
Plug-ins are also a way to provide tools to improve the APEX developer experience like APEX Nitro or APEX Builder extension by FOS

Game of Fraud Detection with SQL and Machine Learning

With the example of some banking fraud, the investigation method based on deterministic SQL was compared to the method based on probabilistic ML.
Even though the results were statistically close, supervised Machine Learning (looking for patterns to identify the solutions) was giving more accurate ones. In fact, the combination of both methods gave even better results.
The challenge is to gain acceptance from the business for results produced with the help of ML, as they are not based on fully explainable rules.
The Oracle database has been embedding ML for free with specific packages like DBMS_DATA_MINING for several years now.

The day ended with the most awaited session: Virtual beer!

Cet article APEX Connect 2020 – Day 1 est apparu en premier sur Blog dbi services.

Handle DB-Links after Cloning an Oracle Database


By Clemens Bleile

After cloning e.g. a production database into a database for development or testing purposes, the DBA has to make sure that no activity in the cloned database has an impact on data in other production databases, because after the cloning, jobs may still try to modify production data through e.g. db-links. I.e. scheduled database jobs must not start in the cloned DB and applications connecting to the cloned database must not modify remote production data. Most people are aware of this issue and a first measure is to start the cloned database with the DB parameter

job_queue_processes=0

That ensures that no database job will start in the cloned database. However, before enabling scheduler jobs again, you have to make sure that no remote production data is modified. Remote data is usually accessed through db-links. So the second step is to handle the db-links in the cloned DB.
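
If the clone is already open, the same can be set dynamically; a minimal sketch (the scope is up to your standards):

-- stop the job queue in the cloned database
alter system set job_queue_processes=0 scope=both;
-- verify
show parameter job_queue_processes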

In a recent project we decided to be strict and drop all database links in the cloned database.
REMARK: Testers and/or developers should create the needed db-links later again pointing to non-production data.
But how do we do that, given that private DB-links can only be dropped by the owner of the db-link? I.e. even a connection with SYSDBA rights cannot drop private database links:

sys@orcl@orcl> connect / as sysdba
Connected.
sys@orcl@orcl> select db_link from dba_db_links where owner='CBLEILE';
 
DB_LINK
--------------------------------
CBLEILE_DB1
PDB1
 
sys@orcl@orcl> drop database link cbleile.cbleile_db1;
drop database link cbleile.cbleile_db1
                   *
ERROR at line 1:
ORA-02024: database link not found
 
sys@orcl@orcl> alter session set current_schema=cbleile;
 
Session altered.
 
sys@orcl@orcl> drop database link cbleile_db1;
drop database link cbleile_db1
*
ERROR at line 1:
ORA-01031: insufficient privileges

We’ll see later on how to drop the db-links. Before doing that we make a backup of the db-links. That can be achieved with expdp:

Backup of db-links with expdp:

1.) create a directory to store the dump-file:

create directory prod_db_links as '<directory-path>';

2.) create the param-file expdp_db_links.param with the following content:

full=y
INCLUDE=DB_LINK:"IN(SELECT db_link FROM dba_db_links)"

3.) expdp all DB-Links

expdp dumpfile=prod_db_links.dmp logfile=prod_db_links.log directory=prod_db_links parfile=expdp_db_links.param
Username: <user with DATAPUMP_EXP_FULL_DATABASE right>

REMARK: Private db-links owned by SYS are not exported by the command above. But SYS must not own user-objects anyway.

In case the DB-Links have to be restored you can do the following:

impdp dumpfile=prod_db_links.dmp logfile=prod_db_links_imp.log directory=prod_db_links
Username: <user with DATAPUMP_IMP_FULL_DATABASE right>

You may also create a script prod_db_links.sql with all ddl (passwords are not visible in the created script):

impdp dumpfile=prod_db_links.dmp directory=prod_db_links sqlfile=prod_db_links.sql
Username: <user with DATAPUMP_IMP_FULL_DATABASE right>

Finally drop the directory again:

drop directory prod_db_links;

Now that we have a backup, we can drop all db-links. As mentioned earlier, private db-links cannot be dropped by another user, but you can use the following method to drop them:

As procedures run with definer rights by default, we can create a procedure under the owner of the db-link and drop the db-link inside that procedure. SYS has the privileges to execute the procedure. The following example will drop the db-link cbleile.cbleile_db1:

select db_link from dba_db_links where owner='CBLEILE';
 
DB_LINK
--------------------------------
CBLEILE_DB1
PDB1

create or replace procedure CBLEILE.drop_DB_LINK as begin
execute immediate 'drop database link CBLEILE_DB1';
end;
/
 
exec CBLEILE.drop_DB_LINK;
 
select db_link from dba_db_links where owner='CBLEILE';
 
DB_LINK
--------------------------------
PDB1

I.e. the db-link CBLEILE_DB1 has been dropped.
REMARK: Using a proxy user to connect as the owner of the db-link would also be a possibility, but that cannot be automated in a script as easily.
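For completeness, a hypothetical sketch of the proxy-user approach (the proxy user PROXY_ADM and its password are invented for illustration):

-- allow the proxy user to connect as the db-link owner
alter user cbleile grant connect through proxy_adm;
-- connect as CBLEILE through the proxy user, e.g.:
--   sqlplus proxy_adm[cbleile]/proxy_password
-- and drop the private db-link as its owner:
drop database link cbleile_db1;
-- clean up afterwards
alter user cbleile revoke connect through proxy_adm;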

As we now have a method to drop private db-links, we can go ahead and automate the generation of the drop commands with the following sql-script drop_all_db_links.sql:

set lines 200 pages 999 trimspool on heading off feed off verify off
set serveroutput on size unlimited
column dt new_val X
select to_char(sysdate,'yyyymmdd_hh24miss') dt from dual;
spool drop_db_links_&&X..sql
select 'set echo on feed on verify on heading on' from dual;
select 'spool drop_db_links_&&X..log' from dual;
select 'select count(*) from dba_objects where status='''||'INVALID'||''''||';' from dual;
REM Generate all commands to drop public db-links
select 'drop public database link '||db_link||';' from dba_db_links where owner='PUBLIC';
REM Generate all commands to drop db-links owned by SYS (except SYS_HUB, which is oracle maintained)
select 'drop database link '||db_link||';' from dba_db_links where owner='SYS' and db_link not like 'SYS_HUB%';
PROMPT
REM Generate create procedure commands to drop private db-link, generate the execute and the drop of it.
declare
   current_owner varchar2(32);
begin
   for o in (select distinct owner from dba_db_links where owner not in ('PUBLIC','SYS')) loop
      dbms_output.put_line('create or replace procedure '||o.owner||'.drop_DB_LINK as begin');
      for i in (select db_link from dba_db_links where owner=o.owner) loop
         dbms_output.put_line('execute immediate '''||'drop database link '||i.db_link||''''||';');
      end loop;
      dbms_output.put_line('end;');
      dbms_output.put_line('/');
      dbms_output.put_line('exec '||o.owner||'.drop_DB_LINK;');
      dbms_output.put_line('drop procedure '||o.owner||'.drop_DB_LINK;');
      dbms_output.put_line('-- Separator -- ');
   end loop;
end;
/
select 'select count(*) from dba_objects where status='''||'INVALID'||''''||';' from dual;
select 'set echo off' from dual;
select 'spool off' from dual;
spool off
 
PROMPT
PROMPT A script drop_db_links_&&X..sql has been created. Check it and then run it to drop all DB-Links.
PROMPT

Running the above script generates a sql-script drop_db_links_<yyyymmdd_hh24miss>.sql, which contains all the drop db-link commands.

sys@orcl@orcl> @drop_all_db_links
...
A script drop_db_links_20200509_234906.sql has been created. Check it and then run it to drop all DB-Links.
 
sys@orcl@orcl> !cat drop_db_links_20200509_234906.sql
 
set echo on feed on verify on heading on
 
spool drop_db_links_20200509_234906.log
 
select count(*) from dba_objects where status='INVALID';
 
drop public database link DB1;
drop public database link PDB2;
 
create or replace procedure CBLEILE.drop_DB_LINK as begin
execute immediate 'drop database link CBLEILE_DB1';
execute immediate 'drop database link PDB1';
end;
/
exec CBLEILE.drop_DB_LINK;
drop procedure CBLEILE.drop_DB_LINK;
-- Separator --
create or replace procedure CBLEILE1.drop_DB_LINK as begin
execute immediate 'drop database link PDB3';
end;
/
exec CBLEILE1.drop_DB_LINK;
drop procedure CBLEILE1.drop_DB_LINK;
-- Separator --
 
select count(*) from dba_objects where status='INVALID';
 
set echo off
 
spool off
 
sys@orcl@orcl>

After checking the file drop_db_links_20200509_234906.sql I can run it:

sys@orcl@orcl> @drop_db_links_20200509_234906.sql
sys@orcl@orcl> 
sys@orcl@orcl> spool drop_db_links_20200509_234906.log
sys@orcl@orcl> 
sys@orcl@orcl> select count(*) from dba_objects where status='INVALID';
 
  COUNT(*)
----------
   1
 
1 row selected.
 
sys@orcl@orcl> 
sys@orcl@orcl> drop public database link DB1;
 
Database link dropped.
 
sys@orcl@orcl> drop public database link PDB2;
 
Database link dropped.
 
sys@orcl@orcl> 
sys@orcl@orcl> create or replace procedure CBLEILE.drop_DB_LINK as begin
  2  execute immediate 'drop database link CBLEILE_DB1';
  3  execute immediate 'drop database link PDB1';
  4  end;
  5  /
 
Procedure created.
 
sys@orcl@orcl> exec CBLEILE.drop_DB_LINK;
 
PL/SQL procedure successfully completed.
 
sys@orcl@orcl> drop procedure CBLEILE.drop_DB_LINK;
 
Procedure dropped.
 
sys@orcl@orcl> -- Separator --
sys@orcl@orcl> create or replace procedure CBLEILE1.drop_DB_LINK as begin
  2  execute immediate 'drop database link PDB3';
  3  end;
  4  /
 
Procedure created.
 
sys@orcl@orcl> exec CBLEILE1.drop_DB_LINK;
 
PL/SQL procedure successfully completed.
 
sys@orcl@orcl> drop procedure CBLEILE1.drop_DB_LINK;
 
Procedure dropped.
 
sys@orcl@orcl> -- Separator --
sys@orcl@orcl> 
sys@orcl@orcl> select count(*) from dba_objects where status='INVALID';
 
  COUNT(*)
----------
   1
 
1 row selected.
 
sys@orcl@orcl> 
sys@orcl@orcl> set echo off
sys@orcl@orcl> 
sys@orcl@orcl> select owner, db_link from dba_db_links;

OWNER				 DB_LINK
-------------------------------- --------------------------------
SYS				 SYS_HUB

1 row selected.

A log-file drop_db_links_20200509_234906.log has been produced as well.

After dropping all db-links you may do the following checks as well before releasing the cloned database for the testers or the developers:

  • disable all jobs owned by non-Oracle-maintained users. You may use the following SQL to generate the commands in sqlplus:

select 'exec dbms_scheduler.disable('||''''||owner||'.'||job_name||''''||');' from dba_scheduler_jobs where enabled='TRUE' and owner not in (select username from dba_users where oracle_maintained='Y');
  • check all directories in the DB and make sure the directory-paths do not point to shared production folders

column owner format a32
column directory_name format a32
column directory_path format a64
select owner, directory_name, directory_path from dba_directories order by 1;
  • mask sensitive data, which should not be visible to testers and/or developers.
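For the masking step, a purely hypothetical sketch (schema, table and column names are invented; real masking usually relies on dedicated tooling or a documented masking concept):

-- overwrite personal data in the clone with neutral values
update app_owner.customers
   set email = 'user_' || customer_id || '@example.com',
       phone = null;
commit;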

At that point you can be quite sure that your cloned database does not affect production data, and you can set
job_queue_processes>0
again and provide the testers and/or developers access to the cloned database.
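A minimal sketch (the value 20 is just an example; use whatever your standard configuration requires):

alter system set job_queue_processes=20 scope=both;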

The post Handle DB-Links after Cloning an Oracle Database appeared first on Blog dbi services.

APEX Connect 2020 – Day 2


For the second and last virtual conference day, I decided to attend presentations on the following topics:
– Universal Theme new features
– Oracle APEX Source Code Management and Release Lifecycle
– Why Google Hates My APEX App
– We ain’t got no time! Pragmatic testing with utPLSQL
– Why APEX developers should know FLASHBACK
– ORDS – Behind the scenes … and more!
and the day ended with a keynote from Kellyn Pot’Vin-Gorman about “Becoming – A Technical Leadership Story”

Universal Theme new features

What is the Universal Theme (UT)?
It is the user interface of APEX, integrated since APEX version 5.0 and also known as Theme 42 (“Answer to the Ultimate Question of Life, the Universe, and Everything” – The Hitchhiker’s Guide to the Galaxy by Douglas Adams).
New features introduced with UT:
– Template options
– Font APEX
– Full modal dialog page
– Responsive design
– Mobile support
APEX 20.1, released in April, comes with a new version 1.5 of the UT. With that new version, related components like the jQuery libraries, Oracle JET, Font APEX, … have changed as well, so check the release notes.
One of the most relevant new features is the Mega Menu, introducing a new navigation style which is useful if you need to maximize the display of your application pages. You can check the UT sample app embedded in APEX to test it.
Some other changes are:
– Theme Roller enhancement
– Application Builder Redwood UI
– Interactive Grid with Control Break editing
– Friendly URL
Note also that Font Awesome is no longer natively supported (since APEX 19.2) so consider moving to Font APEX.
You can find more about UT online with the dedicated page.

Oracle APEX Source Code Management and Release Lifecycle

Source code management with APEX is always a challenging question for developers used to working with other programming languages and source code version control systems like GitHub.
There are different aspects to be considered like:
– 1 central instance for all developers or 1 local instance for each developer?
– Export full application or export pages individually?
– How to best automate application exports?
There is no universal answer to them. This must be considered based on the size of the development team and the size of the project.
There are different tools provided by APEX to manage export of the applications:
– ApexExport java classes
– Page UI
– APEX_EXPORT package
– SQLcl
But you need to be careful about workspace and application IDs when you run multiple instances.
Don’t forget that merging changes is not supported in APEX!
You should have a look at the Oracle APEX Life Cycle Management White Paper for further insight.
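As an illustration of the export tools listed above, here is a hedged sketch using SQLcl and the APEX_EXPORT package (application id 100 is an assumption and the exact signatures may differ between APEX versions):

-- SQLcl, connected to the parsing schema of the workspace:
--   apex export -applicationid 100

-- PL/SQL, server-side export with APEX_EXPORT:
declare
  l_files apex_t_export_files;
begin
  l_files := apex_export.get_application(p_application_id => 100);
  dbms_output.put_line(l_files(1).name);   -- e.g. f100.sql
end;
/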

Why Google Hates My APEX App

When publishing a public web site, Google provides different tools to help you get more out of it, based on:
– Statistics
– Promotion
– Search
– Ads (to get money back)
When checking Google Analytics for the statistics of an APEX application, you realize that the outcome doesn’t really reflect the content of the APEX application, especially in terms of pages. This is mainly due to the way APEX manages page parameters in the f?p= procedure call. That call is much different from standard URLs, where parameters are given by “&” (which the Google tools are looking for) and not “:”.
Let’s hope this is going to improve with the new Friendly URL feature introduced by APEX 20.1.

We ain’t got no time! Pragmatic testing with utPLSQL

Unit testing should be considered right from the beginning while developing new PL/SQL packages.
utPLSQL is an open source PL/SQL package that can help to unit test your code. Tests created as part of the development process deliver value during implementation.
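A minimal sketch of what a utPLSQL (v3) test package can look like (package and procedure names are invented):

create or replace package ut_demo as
  --%suite(Demo suite)

  --%test(1 + 1 equals 2)
  procedure one_plus_one;
end ut_demo;
/
create or replace package body ut_demo as
  procedure one_plus_one is
  begin
    ut.expect(1 + 1).to_equal(2);
  end one_plus_one;
end ut_demo;
/
-- run the suite (set serveroutput on to see the results):
exec ut.run('ut_demo');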
What are the criteria of choice for test automation?
– Risk
– Value
– Cost efficiency
– Change probability
Unit testing can be integrated into test automation which is of great value in the validation of your application.
If you want to know more about test automation you can visit the page of Angie Jones.

Why APEX developers should know FLASHBACK

For most people Flashback is an emergency procedure, but in fact it’s much more than that!
APEX developers know about flashback thanks to the “restore as of” functionality on pages in the App Builder.
Flashback is provided at different levels:

  1. Flashback query: allows you to retrieve the data of a specific query as of an earlier point in time, based on the SCN. This can be useful for unit testing.
  2. Flashback session: allows you to flashback all queries of the session. By default up to 900 seconds in the past (undo retention parameter).
  3. Flashback transaction: allows you to roll back a committed transaction thanks to the transaction ID (XID) with dbms_transaction.undo_transaction.
  4. Flashback drop: allows you to recover dropped objects thanks to the user recycle bin. Deleted objects are kept as long as the space is not needed (advice: keep 20% of free space). BEWARE! This does not work for truncated objects.
  5. Flashback table: allows you to recover a table to a given point in time. Only applicable for data; it cannot help in case of DDL or drop.
  6. Flashback database: allows you to restore the database to a given point in time based on restore points. This is only for DBAs. It can be useful to roll back an APEX application deployment, as a lot of objects are changed. As it works with pluggable databases, it can be used to produce copies to be distributed to individual XE instances for multiple developers.
  7. Data archive: allows you to recover based on audit history. It’s secure and efficient, and can be imported from existing application audits. It’s now FREE (unless using the compression option).

The different flashback options can be used to roll back mistakes, but not only that: they can also be used for unit testing or for reproducing issues. Nevertheless you should always be careful when using commands like DROP and, even more so, TRUNCATE.
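As a minimal illustration of a flashback query used for unit testing (the table name is hypothetical):

-- compare current data with the data as of 10 minutes ago
select * from app_owner.orders
  as of timestamp (systimestamp - interval '10' minute);

-- the same based on an SCN
select * from app_owner.orders as of scn 1234567;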

ORDS – Behind the scenes … and more!

ORDS provides multiple functionalities:
– RESTful services for the DB
– Web Listener for APEX
– Web Client for the DB
– DB Management REST API
– Mongo style API for the DB
Regarding the APEX Web Listener, EPG and mod_plsql are deprecated, so ORDS is the only option for the future.
ORDS integrates into different architectures, allowing you to provide isolation, like:
– APEX application isolation
– REST isolation
– LB whitelists
With APEX there are 2 options to use RESTful services:
– Auto REST
– ORDS RESTful services
Developers can choose the best-suited one according to their needs.
The most powerful feature is REST-enabled SQL.

Becoming – A Technical Leadership Story

Being a leader is defined by different streams:
-Influencing others
-Leadership satisfaction
-Technical leadership
and more…
A couple of thoughts to keep in mind:
– Leaders are not always managers.
– Mentors are really important because they talk to you, whereas sponsors talk about you.
– Communication is more than speaking.
But what matters most, from my point of view, is caring about others. How about you?

Thanks to the virtualization of the conference, all the presentations have been recorded, so stay tuned to DOAG and you will be able to see those and much more! Take some time and watch as much as possible, because everything is precious learning. Thanks a lot to the community.
Keep sharing and enjoy APEX!

The post APEX Connect 2020 – Day 2 appeared first on Blog dbi services.

Oracle Materialized View Refresh : Fast or Complete ?


Contrary to views, materialized views avoid executing the SQL query for every access by storing the result set of the query.

When a master table is modified, the related materialized view becomes stale and a refresh is necessary to have the materialized view up to date.

I will not go through the materialized view concepts here; the Oracle Data Warehousing Guide is perfect for that.

I will show you, based on a real user case, all the steps you have to follow to investigate and tune your materialized view refresh.

And, as very often in performance and tuning tasks, most of the performance issue comes from the way the SQL is written and designed (here the SQL statement loading the materialized view).

 

First of all, let’s state that spending almost 50 mins (20% of my DWH load) to refresh materialized views is too much:

The first step is to check which materialized view has the highest refresh time :

SELECT * 
FROM (
      SELECT OWNER,
             MVIEW_NAME,
             CONTAINER_NAME,
             REFRESH_MODE,
             REFRESH_METHOD,
             LAST_REFRESH_TYPE,
             STALENESS,
             ROUND((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS
       FROM ALL_MVIEWS
       WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
      )
ORDER BY REFRESH_TIME_MINS DESC;

OWNER   MVIEW_NAME                      CONTAINER_NAME                  REFRESH_MODE  REFRESH_METHOD LAST_REFRESH_TYPE STALENESS      REFRESH_TIME_MINS
------- ------------------------------- ------------------------------- ------------- -------------- ----------------- --------------------------------
SCORE   MV$SCORE_ST_SI_MESSAGE_HISTORY   MV$SCORE_ST_SI_MESSAGE_HISTORY DEMAND        FAST           FAST              FRESH          32.52
SCORE   MV$SCORE_ST_SI_MESSAGE           MV$SCORE_ST_SI_MESSAGE         DEMAND        FAST           FAST              FRESH          16.38
SCORE   MV$SC1_MYHIST2_STOP              MV$SC1_MYHIST2_STOP            DEMAND        FORCE          COMPLETE          NEEDS_COMPILE  .03
SCORE   MV$SC1_MYHIST2_START             MV$SC1_MYHIST2_START           DEMAND        FORCE          COMPLETE          NEEDS_COMPILE  .03
SCORE   MV$SC1_RWQ_FG_TOPO               MV$SC1_RWQ_FG_TOPO             DEMAND        FORCE          COMPLETE          NEEDS_COMPILE  .02

Almost all of the refresh time comes from the mviews MV$SCORE_ST_SI_MESSAGE_HISTORY and MV$SCORE_ST_SI_MESSAGE.

Thanks to the columns ALL_MVIEWS.LAST_REFRESH_DATE and ALL_MVIEWS.LAST_REFRESH_END_TIME, we can get the sql statements and the execution plans related to the refresh operation:
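In case you want to retrieve them yourself, here is a hedged sketch of how the statements active during the last refresh window can be found (this assumes the Diagnostics Pack, i.e. AWR and ASH, is licensed):

-- sql_id of the statements sampled during the last refresh of the mview
select distinct h.sql_id
from   dba_hist_active_sess_history h
where  h.sample_time between
         (select last_refresh_date     from all_mviews where mview_name = 'MV$SCORE_ST_SI_MESSAGE_HISTORY')
     and (select last_refresh_end_time from all_mviews where mview_name = 'MV$SCORE_ST_SI_MESSAGE_HISTORY');

-- text and plan of a given sql_id
select * from table(dbms_xplan.display_awr('&sql_id'));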

The first operation is a “Delete“:

The second operation is an “Insert“:

Let’s extract the PL/SQL call doing the refresh, as used by the ETL tool:

dbms_mview.refresh('SCORE.'||l_mview||'','?',atomic_refresh=>FALSE);
--'?' = Force : If possible, a fast refresh is attempted, otherwise a complete refresh.

Given that, here are the questions which come to my mind:

  1. My materialized view can be fast-refreshed, so why does it take more than 48 mins to refresh?
  2. With atomic_refresh set to false, Oracle normally optimizes the refresh by using parallel DML and truncate DDL, so why is a “Delete” operation done instead of a much faster “Truncate”?

To answer the first point and be sure that my materialized view can be fast refreshed, we can use the explain_mview procedure and check the capability_name called “REFRESH_FAST”:

SQL> truncate table mv_capabilities_table;

Table truncated.

SQL> exec dbms_mview.explain_mview('MV$SCORE_ST_SI_MESSAGE_HISTORY');

PL/SQL procedure successfully completed.

SQL> select capability_name,possible,related_text,msgtxt from mv_capabilities_table;

CAPABILITY_NAME                POSSIBLE             RELATED_TEXT            MSGTXT
------------------------------ -------------------- ----------------------- ----------------------------------------------------------------------------
PCT                            N
REFRESH_COMPLETE               Y
REFRESH_FAST                   Y
REWRITE                        Y
PCT_TABLE                      N                    ST_SI_MESSAGE_HISTORY_H relation is not a partitioned table
PCT_TABLE                      N                    ST_SI_MESSAGE_HISTORY_V relation is not a partitioned table
PCT_TABLE                      N                    DWH_CODE                relation is not a partitioned table
REFRESH_FAST_AFTER_INSERT      Y
REFRESH_FAST_AFTER_ONETAB_DML  Y
REFRESH_FAST_AFTER_ANY_DML     Y
REFRESH_FAST_PCT               N											PCT is not possible on any of the detail tables in the materialized view
REWRITE_FULL_TEXT_MATCH        Y
REWRITE_PARTIAL_TEXT_MATCH     Y
REWRITE_GENERAL                Y
REWRITE_PCT                    N											general rewrite is not possible or PCT is not possible on any of the detail tables
PCT_TABLE_REWRITE              N                    ST_SI_MESSAGE_HISTORY_H relation is not a partitioned table
PCT_TABLE_REWRITE              N                    ST_SI_MESSAGE_HISTORY_V relation is not a partitioned table
PCT_TABLE_REWRITE              N                    DWH_CODE				relation is not a partitioned table

18 rows selected.

Let’s try to force a complete refresh with atomic_refresh set to FALSE in order to check if the “Delete” operation is replaced by a “Truncate” operation:

Now we have a “Truncate“:

--c = complete refresh
 dbms_mview.refresh('SCORE.'||l_mview||'','C',atomic_refresh=>FALSE);

Plus an “Insert” :

Let’s check now the refresh time :

SELECT * 
FROM ( SELECT OWNER, 
			  MVIEW_NAME, 
			  CONTAINER_NAME, 
			  REFRESH_MODE, 
			  LAST_REFRESH_TYPE, 
			  STALENESS, 
			  round((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS 
	   FROM ALL_MVIEWS 
	   WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
	 ) 
ORDER BY REFRESH_TIME_MINS DESC;

OWNER   MVIEW_NAME                       CONTAINER_NAME                   REFRESH_MODE LAST_REFRESH_TYPE STALENESS           REFRESH_TIME_MINS
------- -------------------------------- -------------------------------- ------------ ----------------- -------------------------------------
SCORE   MV$SCORE_ST_SI_MESSAGE           MV$SCORE_ST_SI_MESSAGE           FAST         COMPLETE 	 FRESH                            6.75
SCORE   MV$SCORE_ST_SI_MESSAGE_HISTORY   MV$SCORE_ST_SI_MESSAGE_HISTORY   FAST         COMPLETE 	 FRESH                               1

 

Conclusion (for my environment) :

  • The “Complete” refresh (7.75 mins) is much faster than the “Fast” refresh (48.9 mins),
  • The parameter “atomic_refresh=FALSE” only makes a difference with a “Complete” refresh, so the “Truncate” is only possible with “Complete”.
  • It’s not a surprise to have “Complete” faster than “Fast” here, since the materialized views are truncated instead of being deleted.

Now, I want to understand why “Fast refresh” is very long (48.9 mins).

In order to be fast refreshed, a materialized view requires materialized view logs, which store the modifications propagated from the base tables to the container table (a regular table with the same name as the materialized view, which stores the result set returned by the query).
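For reference, a minimal sketch of creating such a materialized view log (the column list is a placeholder; it must cover the columns of the base table referenced by the materialized view query):

create materialized view log on score.dwh_code
  with rowid, sequence (dwh_pit_date)
  including new values;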

Let’s check the base tables used in the SQL statement loading the materialized view:

Focus on the table names after the “FROM” clause:

  • ST_SI_MESSAGE_HISTORY_H
  • ST_SI_MESSAGE_HISTORY_V
  • DWH_CODE

Let’s check the number of rows which exist in each source table:

SQL> SELECT 'DWH_CODE' as TABLE_NAME,NUM_ROWS FROM USER_TABLES WHERE TABLE_NAME = 'DWH_CODE'
  2  UNION ALL
  3  SELECT 'ST_SI_MESSAGE_HISTORY_H' as TABLE_NAME,NUM_ROWS FROM USER_TABLES WHERE TABLE_NAME = 'ST_SI_MESSAGE_HISTORY_H'
  4  UNION ALL
  5  SELECT 'ST_SI_MESSAGE_HISTORY_V' as TABLE_NAME,NUM_ROWS FROM USER_TABLES WHERE TABLE_NAME = 'ST_SI_MESSAGE_HISTORY_V';

TABLE_NAME                NUM_ROWS
----------------------- ----------
DWH_CODE                         1
ST_SI_MESSAGE_HISTORY_H    4801733
ST_SI_MESSAGE_HISTORY_V    5081578

 

To be fast refreshed, the MV$SCORE_ST_SI_MESSAGE_HISTORY materialized view requires materialized view logs on the ST_SI_MESSAGE_HISTORY_H, ST_SI_MESSAGE_HISTORY_V and DWH_CODE tables:

SQL> SELECT LOG_OWNER,MASTER,LOG_TABLE
  2  FROM all_mview_logs
  3  WHERE MASTER IN ('DWH_CODE','ST_SI_MESSAGE_H','ST_SI_MESSAGE_V');

LOG_OWNER   MASTER                 LOG_TABLE
----------- ------------------ ----------------------------
SCORE       ST_SI_MESSAGE_V        MLOG$_ST_SI_MESSAGE_V
SCORE       ST_SI_MESSAGE_H        MLOG$_ST_SI_MESSAGE_H
SCORE       DWH_CODE               MLOG$_DWH_CODE

 

As the materialized view logs contain only the modifications propagated during a fast refresh, let’s check their contents (the number of rows modified in the base tables) just before executing the fast refresh:

SQL> SELECT
  2              owner,
  3              mview_name,
  4              container_name,
  5              refresh_mode,
  6              last_refresh_type,
  7              staleness
  8          FROM
  9              all_mviews
 10          WHERE
 11              last_refresh_type IN (
 12                  'FAST',
 13                  'COMPLETE'
 14              )
 15  ;

OWNER   MVIEW_NAME                     CONTAINER_NAME                 REFRESH_MODE LAST_REFRESH_TYPE STALENESS
------- ------------------------------ ---------------------------------------------------------------------------
SCORE   MV$SCORE_ST_SI_MESSAGE         MV$SCORE_ST_SI_MESSAGE         DEMAND       COMPLETE          NEEDS_COMPILE
SCORE   MV$SCORE_ST_SI_MESSAGE_HISTORY MV$SCORE_ST_SI_MESSAGE_HISTORY DEMAND       COMPLETE 	     NEEDS_COMPILE

 

STALENESS = NEEDS_COMPILE means the materialized view needs to be refreshed because the base tables have been modified. That’s expected since we stopped the ETL process just before the execution of the mview refresh procedure in order to see the content of the mview logs.

The contents of materialized view logs are :

SQL> SELECT * FROM "SCORE"."MLOG$_DWH_CODE";

M_ROW$$               SNAPTIME$ DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$  XID$$
--------------------- --------- --------- --------- ---------------- ----------------
AAAbvUAAUAAABtjAAA    01-JAN-00 U 		  U 		02   1125921382632021
AAAbvUAAUAAABtjAAA    01-JAN-00 U 		  N 		02   1125921382632021

SQL> SELECT * FROM "SCORE"."MLOG$_ST_SI_MESSAGE_V";

no rows selected

SQL> SELECT * FROM "SCORE"."MLOG$_ST_SI_MESSAGE_H";
no rows selected 
SQL>

I’m a little bit surprised because:

  • Given my refresh time, I expected a lot of modifications coming from the big tables ST_SI_MESSAGE_V (5081578 rows) and ST_SI_MESSAGE_H (4801733 rows) instead of DWH_CODE (1 row).

After analyzing the ETL process, it appears that only this table (DWH_CODE) is modified every day, with the sysdate. This is a metadata table which contains only one row, identifying the loading date.

If we check the SQL statement loading the materialized view, this table is used to populate the column DWH_PIT_DATE (see print screen above).

But since this table is joined with ST_SI_MESSAGE_H and ST_SI_MESSAGE_V, the oracle optimizer must do a full scan on the materialized view MV$SCORE_ST_SI_MESSAGE_HISTORY (more than 500K rows) to populate each row with exactly the same value:

SQL> select distinct dwh_pit_date from score.mv$score_st_si_message_history;

DWH_PIT_DATE
------------
09-MAY-20

It makes no sense to have a column which always has the same value; here we definitely have a materialized view design problem. Whatever refresh mode is used, “Complete” or “Fast”, we always scan all the materialized view logs to populate the column DWH_PIT_DATE.

To solve this issue, let’s check the materialized view dependencies:

SQL> SELECT DISTINCT NAME
  2  FROM ALL_DEPENDENCIES
  3  WHERE TYPE = 'VIEW'
  4  AND REFERENCED_OWNER = 'DWH_LOAD'
  5  AND REFERENCED_NAME IN ('MV$SCORE_ST_SI_MESSAGE','MV$SCORE_ST_SI_MESSAGE_HISTORY')
  6  ORDER BY NAME;

NAME
--------------------------------------------------------------------------------
V$LOAD_CC_DMSG
V$LOAD_CC_FMSG
V$LOAD_CC_MSG_ACK
V$LOAD_CC_MSG_BOOKEDIN_BY
V$LOAD_CC_MSG_PUSHEDBACK
V$LOAD_CC_MSG_SCENARIO
V$LOAD_CC_MSG_SOURCE
V$LOAD_CC_MSG_SRC
V$LOAD_CC_MSG_TRIAGED_BY

9 rows selected.

SQL>

In my environment, only these objects (Oracle views) use the materialized views, so I can safely remove the column DWH_CODE.DWH_PIT_DATE (the column, not the join with the table DWH_CODE) from the materialized views and move it to the dependent objects.
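A hypothetical sketch of what that move can look like for one of the dependent views (the query is simplified and the exact column source in DWH_CODE is an assumption):

-- the constant point-in-time date is added back in the dependent view,
-- so the materialized view no longer has to store it for every row
create or replace view v$load_cc_dmsg as
select m.*,
       (select dwh_pit_date from dwh_code) as dwh_pit_date
from   mv$score_st_si_message_history m;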

After this design modification, let’s execute the refresh and check the refresh time:

SQL> SELECT *
  2  FROM ( SELECT OWNER,
  3    MVIEW_NAME,
  4    CONTAINER_NAME,
  5    REFRESH_MODE,
  6    REFRESH_METHOD,
  7    LAST_REFRESH_TYPE,
  8    STALENESS,
  9    ROUND((LAST_REFRESH_END_TIME-LAST_REFRESH_DATE)*24*60,2) as REFRESH_TIME_MINS
 10     FROM ALL_MVIEWS
 11     WHERE LAST_REFRESH_TYPE IN ('FAST','COMPLETE')
 12   )
 13  ORDER BY REFRESH_TIME_MINS DESC;

OWNER     MVIEW_NAME                            CONTAINER_NAME                       REFRES REFRESH_ LAST_REF STALENESS           REFRESH_TIME_MINS
--------- --------------------------------- ------------------------------------ ------ -------- -------- ------------------- ---------------------
SCORE     MV$SCORE_ST_SI_MESSAGE                MV$SCORE_ST_SI_MESSAGE               DEMAND FAST     COMPLETE FRESH                            1.58
SCORE     MV$SCORE_ST_SI_MESSAGE_HISTORY        MV$SCORE_ST_SI_MESSAGE_HISTORY       DEMAND FAST     COMPLETE FRESH                             .28

 

The refresh time (1.86 mins) is faster than the previous one (7.75 mins), and now the Oracle optimizer does not full scan the materialized view to populate each row with the same value (DWH_CODE.DWH_PIT_DATE).

Conclusion :

  • We have reduced the refresh time from 50 mins to 1.86 mins.
  • A Fast refresh is not always faster than a Complete refresh; it depends on the SQL statement loading the view and on the number of rows propagated from the base tables to the container tables through the materialized view logs.
  • To decrease the refresh time, acting only on the refresh options (Fast, Complete, Index, etc.) is not enough; we also have to analyze and modify the SQL statement loading the materialized view.
  • If you have a design problem, never be afraid to modify the SQL statement and even some parts of your architecture (like the dependent objects here). Of course you have to know very well the impact on your application and on your ETL process.

The post Oracle Materialized View Refresh : Fast or Complete ? appeared first on Blog dbi services.

20c: AWR now stores explain plan predicates


By Franck Pachot

In a previous post https://blog.dbi-services.com/awr-dont-store-explain-plan-predicates/ I explained this limitation of Statspack and then AWR in gathering filter and access predicates, because of old bugs about reverse parsing of predicates. Oracle listens to its customers through support (enhancement requests), through the community (votes on database ideas), and through the product managers who participate in User Groups and the ACE program. And here it is: in 20c the predicates are collected by AWR and visible with DBMS_XPLAN and AWRSQRPT reports.

I’ll test with a very simple query:


set feedback on sql_id echo on pagesize 1000

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL_ID: g4gx2zqbkjwh1

I used the “SET FEEDBACK ON SQL_ID” feature (shown above) to get the SQL_ID.

Because this query is fast, it will not be gathered by AWR unless I ‘color’ it:


SQL> exec dbms_workload_repository.add_colored_sql('g4gx2zqbkjwh1');

PL/SQL procedure successfully completed.

Coloring a statement is the AWR feature to use when you want a statement to always be gathered, for example when you have optimized it and want to compare the statistics.
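Once the comparison is done, the coloring can be removed with the documented counterpart procedure; a small sketch:

exec dbms_workload_repository.remove_colored_sql('g4gx2zqbkjwh1');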

Now running the statement between two snapshots:


SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

Here, I’m sure it has been gathered.

Now checking the execution plan:


SQL> select * from dbms_xplan.display_awr('g4gx2zqbkjwh1');

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID g4gx2zqbkjwh1
--------------------
select * from dual where ascii(dummy)=42

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|*  1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ASCII("DUMMY")=42)


18 rows selected.

Here I have the predicate. This is a silly example, but the predicate information is very important when looking at a large execution plan, trying to understand the cardinality estimation or the reason why an index is not used.

Of course, this is also visible from the ?/rdbms/admin/awrsqrpt report:

What if you upgrade?

AWR gathers the SQL plan only when it is not already there. So, when we upgrade to 20c, only the new plans will get the predicates. Here is an example where I simulate the pre-20c behaviour with “_cursor_plan_unparse_enabled”=false:


SQL> alter session set "_cursor_plan_unparse_enabled"=false;

Session altered.

SQL> exec dbms_workload_repository.add_colored_sql('g4gx2zqbkjwh1');

PL/SQL procedure successfully completed.

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dbms_xplan.display_awr('g4gx2zqbkjwh1');

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID g4gx2zqbkjwh1
--------------------
select * from dual where ascii(dummy)=42

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

13 rows selected.

No predicates here. Even if I re-connect to reset “_cursor_plan_unparse_enabled”:


SQL> connect / as sysdba
Connected.
SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dual where ascii(dummy)=42;

no rows selected

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from dbms_xplan.display_awr('g4gx2zqbkjwh1');

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID g4gx2zqbkjwh1
--------------------
select * from dual where ascii(dummy)=42

Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

13 rows selected.

This will be the situation after upgrade.

If you want to re-gather all sql_plans, you need to purge the AWR repository:


SQL> execute dbms_workload_repository.drop_snapshot_range(1,1e22);

PL/SQL procedure successfully completed.

SQL> execute dbms_workload_repository.purge_sql_details();

PL/SQL procedure successfully completed.

SQL> commit;

This clears everything, so I do not recommend doing that at the same time as the upgrade, as you may want to compare some performance figures with the past. Anyway, we have time, and maybe this fix will be backported to 19c.

There is very little chance that this fix gets ported to Statspack, but you can do it yourself as I mentioned in http://viewer.zmags.com/publication/dd9ed62b#/dd9ed62b/36 (“on Improving Statspack Experience”) with something like:


sed -i -e 's/ 0 -- should be//' -e 's/[(]2254299[)]/--&/' $ORACLE_HOME/rdbms/admin/spcpkg.sql

The post 20c: AWR now stores explain plan predicates appeared first on Blog dbi services.
