
Easily manage dual backup destination with RMAN


Backup on disk with RMAN is great. It’s fast, and you can set as many channels as your platform can handle for faster backups. And you can restore as fast as you can read and write files on disk with these multiple channels. That is, as long as you’re using Enterprise Edition, because Standard Edition is limited to a single channel.

Disk space is very often limited and you’ll probably have to find another solution if you want to keep backups longer. You can think about tapes, or you can connect RMAN to a global backup tool, but that requires additional libraries that are not free, and it definitely adds complexity.

The other solution is to have dual disk destinations for the backups. The first one will be the main destination for your daily backups; the other one will be dedicated to long-term backups, maybe on slower disks but with more free space available. This second destination can, if needed, be backed up with another tool without using any library.

For the demonstration, assume you have 2 filesystems: /backup is dedicated to the latest daily backups and /lt_backup is for long-term backups.

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

First of all, take a backup on the first destination:

RMAN> backup as compressed backupset database format '/oracle/backup/%U';

 

This is a small database and the backup is done with the default single channel, so there are only two backupsets: one for the datafiles and the other for the controlfile and the spfile:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:27 backup/2btaj0mt_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:27 backup/2ctaj0nm_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

It’s quite easy to move the backup to the long-term destination with RMAN:

RMAN> backup backupset all format '/oracle/lt_backup/%U' delete input;

 

BACKUP BACKUPSET with DELETE INPUT is basically the same as a system mv or move. But it does not require you to recatalog the backup files, as RMAN does this automatically.
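If you want to confirm on the RMAN side that the repository now references the new pieces, a LIST command will show the current piece handles (output not reproduced here):

RMAN> list backup;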

Now our backup is located in the second destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:28 lt_backup/2btaj0mt_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:28 lt_backup/2ctaj0nm_1_2

 

You can see here that the backup filenames have changed: the last number has increased. Oracle knows that this is the second copy of these backupsets (even though the first ones no longer exist).

Like with a mv command, you can move your backup back to the previous destination:

RMAN> backup backupset all format '/oracle/backup/%U' delete input;

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

All the backupsets are now back in the first destination only, and you can see another increase in the number at the end of the filename. And the RMAN catalog is up to date.

Now let’s make the first folder the default destination for the backups, and make compressed backupsets the default behavior:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET ;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/backup/%U';
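You can double-check the persistent configuration at any time (output not shown here):

RMAN> show all;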

 

Now you only need a 2-word command to back up the database:

RMAN> backup database;

 

The new backup is in the first destination, as expected:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Suppose you want to move the oldest backups, those completed before 1:30 AM:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+1.5/24' format '/oracle/lt_backup/%U' delete input;
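The date expression may look odd at first: in Oracle date arithmetic, 1.5/24 is one hour and thirty minutes, so the cutoff is today at 01:30 AM. You can verify it with a quick query, for example:

SQL> select to_char(trunc(sysdate)+1.5/24,'DD.MM.YYYY HH24:MI:SS') as cutoff from dual;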

 

Everything is working as expected: the latest backup is still in the first destination, and the oldest one is in the lt_backup filesystem, with another increase of the number at the end of the filename:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:38 lt_backup/2btaj0mt_1_4
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:38 lt_backup/2ctaj0nm_1_4

 

Now that the tests are OK, let’s simulate a real-world example. First, tidy up all the backups:

RMAN> delete noprompt backupset;

 

Let’s take a new backup.

RMAN> backup database;

 

Backup is in default destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Let’s take another backup later:

RMAN> backup database;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Now let’s move the oldest backup to the other folder:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2

 

Storing only the oldest backups in the long-term destination is not so clever: imagine you lose your first backup destination. It would be great to have the latest backup in both destinations. You can do that with BACKUP BACKUPSET COMPLETED AFTER and no DELETE INPUT, which is basically the same as a cp or copy command:

RMAN> backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

323M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:03 lt_backup/2htaj2m4_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:03 lt_backup/2itaj2mt_1_2

 

That’s it: you now have a first destination for the newest backups, and a second one for all the backups. And you just have to schedule these two BACKUP BACKUPSET commands after the daily backup of your database.

Note that backups will stay in both destinations until they reach the retention limit you defined for your database. DELETE OBSOLETE will purge the backupsets wherever they are and delete all the known copies.
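Putting it all together, the daily job could be a small RMAN script along these lines (just a sketch reusing the paths and the 2-hour cutoff from this demo; adjust the time window to your own schedule):

run {
  # daily backup, written to /oracle/backup thanks to the configured channel format
  backup database;
  # copy the newest backupsets to the long-term destination
  backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';
  # move the older backupsets out of the daily destination
  backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;
  # purge backups beyond the retention policy, in both destinations
  delete noprompt obsolete;
}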

 

The article Easily manage dual backup destination with RMAN appeared first on the dbi services blog.


The size of Oracle Home: from 9GB to 600MB


This is research only and totally unsupported. When building docker images to run Oracle Database in a container, we try to get the smallest image possible. One way is to remove some subdirectories that we know will not be used. For example, the patch history is not used anymore once we have the required version. The dbca templates can be removed as soon as we have created the database… In this post I take the opposite approach: run some workload on a normal Oracle Home, and keep only the files that were used.

I have Oracle Database 18c installed in /u00/app/oracle/product/18EE and it takes 9GB on my host:

[oracle@vmreforatun01 ~]$ du --human-readable --max-depth=1 $ORACLE_HOME | sort -h | tail -10
 
352M /u00/app/oracle/product/18EE/jdk
383M /u00/app/oracle/product/18EE/javavm
423M /u00/app/oracle/product/18EE/inventory
437M /u00/app/oracle/product/18EE/assistants
605M /u00/app/oracle/product/18EE/md
630M /u00/app/oracle/product/18EE/bin
673M /u00/app/oracle/product/18EE/apex
1.4G /u00/app/oracle/product/18EE/.patch_storage
2.3G /u00/app/oracle/product/18EE/lib
9.4G /u00/app/oracle/product/18EE

Gigabytes of libraries (most of them used only to link the executables), hundreds of megabytes of binaries, templates for new databases, applied patches, old object files, options, tools, command line and graphical interfaces,… Do we need all that?

For a full installation in production, yes, for sure. The more we have, the better it is. When you have to connect at 2 a.m. because you are on-call and a critical alert wakes you up, you will appreciate having all the tools on the server. Especially if you connect through a few security obstacles such as remote VPN, desktop, Wallix, or tunnels, to finally get a high-latency tty with no copy-paste possibilities. With a full Oracle Home, you can face any issue. You have efficient command-line interfaces (sqlplus and hopefully sqlcl) or graphical ones (SQL Developer, asmca,…). For severe problems, you can even re-link, apply or rollback patches, or quickly create a new database to import something into it.

But what if you just want to provide a small container where a database is running, with no additional administration support? Where you will never re-install the software, apply patches, re-create the database, or troubleshoot weird issues. Just have users connect through the listener port and never log in to the container. Then, most of these 9.4 GB are useless.

But how do we know which files are useful and which are not?

If you can rely on Linux ‘access time’ then you may look at the files accessed during the last days – after any installation or database creation is done:

[oracle@vmreforatun01 ~]$ find $ORACLE_HOME -atime -1 -exec stat -L -c "%x %y %z %F %n" {} \; | sort

But this is not reliable. Access time depends on the file type, filesystem, mount options,… and is usually bypassed as much as possible because writing something just to log that you read something is not a very good idea.

Here, I’ll trace all system calls related to file names (strace -e trace=file). I’ll trace them from the start of the database, so that I run strace on dbstart with the -f arguments to trace across forks. Then, I’ll trace the listener, the instance processes and any user process created through the listener.

I pipe the output to an awk script which extracts the file names (which are enclosed in double quotes in the strace output). Basically, the awk just sets the field separator to the double quote with -F” and prints the $2 token of each line. There are many single and double quotes here because of shell interpretation.

[oracle@vmreforatun01 ~]$ dbshut $ORACLE_HOME ; strace -fe trace=file -o "|awk -F'"'"'"' '"'{print $2}'"'" sh -xc "dbstart $ORACLE_HOME >&2" | grep "^$ORACLE_HOME" | sort -u > /tmp/files.txt &

Then I run some activity. I did this on our Oracle Tuning training workshop lab, when reviewing all exercises after upgrading the lab VM to 18c. This runs some usual SQL for application (we use Swingbench) and monitoring. The idea is to run through all features that you want to be available on the container you will build.

When I’m done, I dbshut (remember, this is for a lab only – strace is not for production) and the strace output gets deduplicated (sort -u) and written to /tmp/files.txt.

This file contains all the files referenced by system calls. Surprisingly, there is one that is not captured here, the ldap messages file, but if I do not include it then remote connections will fail with:

ORA-07445: exception encountered: core dump [gslumcCalloc()+41] [SIGSEGV] [ADDR:0x21520] [PC:0x60F92D9] [Address not mapped to object] []

I found it with a very empirical approach and will try to understand why later. For the moment, I just add it to the list:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/ldap/mesg/ldapus.msb >> /tmp/files.txt

I also add adrci and dbshut scripts as they are small and may be useful:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/bin/adrci $ORACLE_HOME/bin/dbshut >> /tmp/files.txt

From this list, I keep those which are not directories, and tar all regular files and symbolic links into /tmp/smalloh.tar:

[oracle@vmreforatun01 ~]$ stat -c "%F %n" $(cat /tmp/files.txt) | awk '!/^directory/{print $3}' | tar -cvf /tmp/smalloh.tar --dereference --files-from=-

This is a 600M tar:

[oracle@vmreforatun01 ~]$ du -h /tmp/smalloh.tar
 
598M /tmp/smalloh.tar

Then I can remove my Oracle Home

[oracle@vmreforatun01 ~]$ cd $ORACLE_HOME/..
[oracle@vmreforatun01 product]$ rm -rf 18EE
[oracle@vmreforatun01 product]$ mkdir 18EE

and extract the files from my tar:

[oracle@vmreforatun01 /]$ tar -xf /tmp/smalloh.tar

I forgot that there are some setuid executables, so I must extract them again as root to restore those permission bits:

[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwxr-x--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle
[oracle@vmreforatun01 /]$ su
Password:
[root@vmreforatun01 /]# tar -xf /tmp/smalloh.tar
[root@vmreforatun01 /]# exit
[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle

That’s a 600MB Oracle Home then. You can reduce it further by stripping the binaries:

[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
599M /u00/app/oracle/product/18EE
[oracle@vmreforatun01 18EE]$ strip $ORACLE_HOME/bin/* $ORACLE_HOME/lib/*
[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
570M /u00/app/oracle/product/18EE

but for only 30MB I really prefer to keep all the symbols. As I’m doing something completely unsupported, I may have to do some troubleshooting.

Now I’m ready to start the database and the listener:

[oracle@vmreforatun01 18EE]$ dbstart $ORACLE_HOME
Processing Database instance "DB1": log file /u00/app/oracle/product/18EE/rdbms/log/startup.log

and I run some Swingbench workload to check that everything is fine:

[oracle@vmreforatun01 18EE]$ /home/oracle/swingbench/bin/charbench -cs //localhost:1521/APP -u soe -p soe -uc 10 -min 5 -max 20 -a -v
Author : Dominic Giles
Version : 2.5.0.932
 
Results will be written to results.xml.
 
Time Users TPM TPS
 
6:35:15 PM 0 0 0
...
6:35:44 PM 10 12 9
6:35:45 PM 10 16 4
6:35:46 PM 10 21 5
6:35:47 PM 10 31 10

The only errors in alert.log are about checking the patches at install:

QPI: OPATCH_INST_DIR not present:/u00/app/oracle/product/18EE/OPatch
Unable to obtain current patch information due to error: 20013, ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 767
ORA-06512: at "SYS.DBMS_QOPATCH", line 547
ORA-06512: at "SYS.DBMS_QOPATCH", line 2124

Most of those 600MB are in the server executable (bin/oracle) and client shared library (lib/libclntsh.so):

[oracle@vmreforatun01 ~]$ size -td /u00/app/oracle/product/18EE/bin/* /u00/app/oracle/product/18EE/lib/* | sort -n
 
text data bss dec hex filename
2423 780 48 3251 cb3 /u00/app/oracle/product/18EE/lib/libofs.so
4684 644 48 5376 1500 /u00/app/oracle/product/18EE/lib/libskgxn2.so
5301 732 48 6081 17c1 /u00/app/oracle/product/18EE/lib/libodm18.so
10806 2304 1144 14254 37ae /u00/app/oracle/product/18EE/bin/sqlplus
13993 2800 1136 17929 4609 /u00/app/oracle/product/18EE/bin/adrci
46456 3008 160 49624 c1d8 /u00/app/oracle/product/18EE/lib/libnque18.so
74314 4824 1248 80386 13a02 /u00/app/oracle/product/18EE/bin/oradism
86396 23968 1144 111508 1b394 /u00/app/oracle/product/18EE/bin/lsnrctl
115523 2196 48 117767 1cc07 /u00/app/oracle/product/18EE/lib/libocrutl18.so
144591 3032 160 147783 24147 /u00/app/oracle/product/18EE/lib/libdbcfg18.so
216972 2564 48 219584 359c0 /u00/app/oracle/product/18EE/lib/libclsra18.so
270692 13008 160 283860 454d4 /u00/app/oracle/product/18EE/lib/libskjcx18.so
321701 5024 352 327077 4fda5 /u00/app/oracle/product/18EE/lib/libons.so
373988 7096 9536 390620 5f5dc /u00/app/oracle/product/18EE/lib/libmql1.so
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabaseconfig
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabasehome
878351 36800 1144 916295 dfb47 /u00/app/oracle/product/18EE/bin/tnslsnr
928382 108920 512 1037814 fd5f6 /u00/app/oracle/product/18EE/lib/libcell18.so
940122 56176 2376 998674 f3d12 /u00/app/oracle/product/18EE/lib/libsqlplus.so
1118019 16156 48 1134223 114e8f /u00/app/oracle/product/18EE/lib/libocr18.so
1128954 5936 160 1135050 1151ca /u00/app/oracle/product/18EE/lib/libskgxp18.so
1376814 18548 48 1395410 154ad2 /u00/app/oracle/product/18EE/lib/libocrb18.so
1685576 130464 160 1816200 1bb688 /u00/app/oracle/product/18EE/lib/libasmclntsh18.so
2517125 16496 15584 2549205 26e5d5 /u00/app/oracle/product/18EE/lib/libipc1.so
3916867 86504 111912 4115283 3ecb53 /u00/app/oracle/product/18EE/lib/libclntshcore.so.18.1
4160241 26320 69264 4255825 40f051 /u00/app/oracle/product/18EE/lib/libmkl_rt.so
5120001 459984 7784 5587769 554339 /u00/app/oracle/product/18EE/lib/libnnz18.so
10822468 302312 21752 11146532 aa1524 /u00/app/oracle/product/18EE/lib/libhasgen18.so
11747579 135320 160 11883059 b55233 /u00/app/oracle/product/18EE/lib/libshpkavx218.so
61758209 2520896 134808 64413913 3d6e0d9 /u00/app/oracle/product/18EE/lib/libclntsh.so.18.1
376147897 3067672 602776 379818345 16a39169 /u00/app/oracle/product/18EE/bin/oracle
487369241 7106932 1203944 495680117 1d8b7a75 (TOTALS)

Of course, this is probably not sufficient, especially if you want to run APEX, OJVM, or Oracle Text. The method is there: run a workload that covers everything you need, and build the Oracle Home from the files used there. I used strace here, but auditd can also be a good idea. Ideally, this job will be done one day by Oracle itself in a supported way, so that we can build a core container for Oracle Database and add features as Dockerfile layers. This had been done to release Oracle XE 11g, which is only 300MB. However, Oracle XE 18c, announced for October, will probably be larger as it includes nearly all options.
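For the auditd alternative mentioned above, a rough sketch could look like this (not tested in this post; it relies on the recursive dir filter of auditctl, and the key name oh_usage is arbitrary):

# as root: audit read accesses under the Oracle Home, recursively
auditctl -a always,exit -F dir=/u00/app/oracle/product/18EE -F perm=r -k oh_usage
# ... run the representative workload ...
# then extract the accessed file names from the audit log
ausearch -k oh_usage -i | grep -o 'name=[^ ]*' | sort -u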

 

The article The size of Oracle Home: from 9GB to 600MB appeared first on the dbi services blog.

A tribute to Natural Join


By Franck Pachot

I know that a lot of people are against the ANSI join syntax in Oracle. And this goes beyond all limits when talking about NATURAL JOIN. But I like them and use them quite often.

Why is Natural Join bad?

Natural join is bad because it relies on column names, and, at the time of writing the query, you don’t know which columns will be added or removed later. Here is an example on the SCOTT schema, joining on DEPTNO which has the same name in DEPT and EMP:

SQL> select * from EMP natural join DEPT where DNAME='SALES';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM DNAME LOC
---------- ---------- ---------- --------- ---------- --------- ---------- ---------- -------------- -------------
30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 SALES CHICAGO
30 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 SALES CHICAGO
30 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 SALES CHICAGO
30 7900 JAMES CLERK 7698 03-DEC-81 950 SALES CHICAGO
30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 SALES CHICAGO
30 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 SALES CHICAGO

The DEPT table has a ‘LOC’ column for the location of the department. But the data model may evolve and you may add a location for each employee. And we may also call it LOC:

SQL> alter table EMP add (LOC varchar2(10));
Table altered.

But now our Natural Join adds this column to the join predicate, and the result is wrong because it shows only the rows where the department location is the same as the employee location:

SQL> select * from EMP natural join DEPT where DNAME='SALES';
 
no rows selected

Projection

In my opinion, the problem is not the Natural Join. Column names have a meaning within their own tables. But the tables have different roles in our queries. As soon as a table or view participates in our query, we should redefine the column names. If we don’t, the result can be completely wrong, such as:

SQL> select * from EMP join DEPT using(DEPTNO) where DNAME='SALES';
 
DEPTNO EMPNO ENAME JOB MGR HIREDATE SAL COMM LOC DNAME LOC
---------- ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- -------------- -------------
30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 SALES CHICAGO
30 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 SALES CHICAGO
30 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 SALES CHICAGO
30 7900 JAMES CLERK 7698 03-DEC-81 950 SALES CHICAGO
30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 SALES CHICAGO
30 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 SALES CHICAGO

Look: the result has two columns with the same name. This is completely wrong for a relational database and I don’t even understand why this parses without raising an error.

The projection is the most important relational operation, often overlooked as if it were just a rename for aesthetic purposes. You need to name the columns of your result set. They are the metadata for the interface between SQL and the host language. ‘select *’ is a shortcut when running an interactive query, to get a glance at the result rows. But a SQL query result is not complete without proper column names. And in most cases, at least when you query more than one table, the names of the query result columns should be different from the names of the underlying table columns. A department may have a location. And an employee may have a location. But the location of the employee’s department is something completely different from the employee location.

Then, as you need to name each column anyway, why not do it as soon as possible? Do it for each table involved in the query, so that you are sure that all column names are correct within the query. As soon as you introduce a new table in the FROM clause, you should actually name the columns according to their role in the query. Let’s take an example with an airline data model. Each airport is linked to a city. This can be a CITY column in the AIRPORTS table. But as soon as you join FLIGHTS with AIRPORTS, this table has a different role. You join on the destination airport or the source airport. Then you alias the AIRPORTS table in the FROM clause, such as DST_AIRPORTS or SRC_AIRPORTS. Within the query, you can reference the columns with the table alias, such as DST_AIRPORTS.CITY or SRC_AIRPORTS.CITY, but this cannot be exposed as-is in the query result. You must name them in the SELECT clause with something like SELECT DST_AIRPORTS.CITY as DST_ARP_CITY, SRC_AIRPORTS.CITY as SRC_ARP_CITY.

Then, as I’ll need to rename them anyway, I prefer to do it as soon as I join to a new table in the FROM clause. Instead of joining to AIRPORTS DST_AIRPORTS I can join to (SELECT IATA DST_ARP_IATA, CITY DST_ARP_CITY FROM AIRPORTS) and all column names will relate to the role without table aliases and without further renaming. And when I do that correctly, I can use natural join without risk.
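Here is what that airline example could look like (a sketch only: the FLIGHTS and AIRPORTS column names are assumptions, as this data model is not part of the demo):

select FLIGHT_ID, SRC_ARP_CITY, DST_ARP_CITY
from
(select FLIGHT_ID, SRC_ARP_IATA, DST_ARP_IATA from FLIGHTS)
natural join
(select IATA SRC_ARP_IATA, CITY SRC_ARP_CITY from AIRPORTS)
natural join
(select IATA DST_ARP_IATA, CITY DST_ARP_CITY from AIRPORTS);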

Projection in the FROM clause

Let’s take an example. Here is a query on DEPT where I explicitly mention that LOC is the department location. This is implicit when the column name belongs to the DEPT table, but it will not be implicit anymore once I join this table to another one. Here is the view, ready to be included in any query:


SQL> select DEPTNO,DNAME DEPT_DNAME,LOC DEPT_LOC from DEPT where DNAME='SALES';
 
DEPTNO DEPT_DNAME DEPT_LOC
---------- -------------- -------------
30 SALES CHICAGO

Now, I can join this to the EMP table. I prefix all columns from EMP with “EMP_” and all columns from DEPT with “EMP_DEPT_” because they belong to DEPT when in the role of employee department:

SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_EMPNO
from
(select DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP)
natural join
(select DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT)
where EMP_DEPT_DNAME='SALES';
 
EMP_EMPNO EMP_ENAME EMP_DEPT_DNAME EMP_DEPT_LOC EMP_LOC EMP_MGR_EMPNO
---------- ---------- -------------- ------------- ---------- -------------
7521 WARD SALES CHICAGO 7698
7844 TURNER SALES CHICAGO 7698
7499 ALLEN SALES CHICAGO 7698
7900 JAMES SALES CHICAGO 7698
7698 BLAKE SALES CHICAGO 7839
7654 MARTIN SALES CHICAGO 7698

As you can see, when the names clearly indicate the column with its role in the join, and how it correlates with the other tables, there is no need to mention any join predicate. I used Natural Join because the join is on EMP_DEPTNO and I’m sure that it will always be the one and only column with that name. By query design.

And the column names in the result are correct, explicitly mentioning what is an Employee attribute or an Employee department attribute. That can be easy to parse and put in an object graph in the host language. You can see there that the MGR column of EMP was named EMP_MGR_EMPNO because this is actually what it is: the EMPNO of the employee manager. It is a foreign key to the EMP table.

And then, adding more information about the manager is easy: join with EMP again but with the proper projection of columns: EMPNO will be EMP_MGR_EMPNO when in the role of the employee manager, ENAME will be EMP_MGR_ENAME, DEPTNO will be EMP_MGR_DEPTNO, and so on:


SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_DEPTNO,EMP_MGR_ENAME
from
(select DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP)
natural join
(select DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT)
natural join
(select DEPTNO EMP_MGR_DEPTNO,EMPNO EMP_MGR_EMPNO,ENAME EMP_MGR_ENAME from EMP)
where EMP_DEPT_DNAME='SALES';
 
EMP_EMPNO EMP_ENAME EMP_DEPT_DNAME EMP_DEPT_LOC EMP_LOC EMP_MGR_DEPTNO EMP_MGR_ENAME
---------- ---------- -------------- ------------- ---------- -------------- -------------
7900 JAMES SALES CHICAGO 30 BLAKE
7499 ALLEN SALES CHICAGO 30 BLAKE
7654 MARTIN SALES CHICAGO 30 BLAKE
7844 TURNER SALES CHICAGO 30 BLAKE
7521 WARD SALES CHICAGO 30 BLAKE
7698 BLAKE SALES CHICAGO 10 KING

No need to review the whole query when adding a new table. No need to solve a new ‘column ambiguously defined’ error. We don’t even need to alias the tables here.

Want to add the department name of the manager? That’s easy: join to DEPT with the right column projection (all prefixed by EMP_MGR_DEPT as the new columns are all about the employee manager’s department):

SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_DEPTNO,EMP_MGR_ENAME,EMP_MGR_DEPT_DNAME
from
(select DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP)
natural join
(select DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT)
natural join
(select DEPTNO EMP_MGR_DEPTNO,EMPNO EMP_MGR_EMPNO,ENAME EMP_MGR_ENAME from EMP)
natural join
(select DEPTNO EMP_MGR_DEPTNO,DNAME EMP_MGR_DEPT_DNAME,LOC EMP_MGR_DEPT_LOC from DEPT)
where EMP_DEPT_DNAME='SALES';
 
EMP_EMPNO EMP_ENAME EMP_DEPT_DNAME EMP_DEPT_LOC EMP_LOC EMP_MGR_DEPTNO EMP_MGR_EN EMP_MGR_DEPT_D
---------- ---------- -------------- ------------- ---------- -------------- ---------- --------------
7698 BLAKE SALES CHICAGO 10 KING ACCOUNTING
7900 JAMES SALES CHICAGO 30 BLAKE SALES
7499 ALLEN SALES CHICAGO 30 BLAKE SALES
7654 MARTIN SALES CHICAGO 30 BLAKE SALES
7844 TURNER SALES CHICAGO 30 BLAKE SALES
7521 WARD SALES CHICAGO 30 BLAKE SALES

This can be even easier when you generate SQL queries. When adding a new table to join to, you just prefix all columns with their role. Check foreign keys so that the naming is consistent with the referenced tables. Then when parsing the result, the naming convention can help to break on the object hierarchy.

Additional notes

I mentioned that aliasing the subquery is not mandatory because I do not have to prefix the column names. However, when looking at the predicates section of the execution plan, the columns may be prefixed with an internal alias:

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("from$_subquery$_006"."EMP_MGR_DEPTNO"="from$_subquery$_009"."EMP_MGR_DEPTNO")
2 - access("from$_subquery$_001"."EMP_MGR_EMPNO"="from$_subquery$_006"."EMP_MGR_EMPNO" AND "from$_subquery$_001"."EMP_DEPTNO"="from$_subquery$_003"."EMP_DEPTNO")

Then it is a good idea to add aliases, such as EMP, EMP_DEPT, EMP_MGR and EMP_MGR_DEPT in the query above, so that the predicates become:

Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("EMP_MGR"."EMP_MGR_DEPTNO"="EMP_MGR_DEPT"."EMP_MGR_DEPTNO")
2 - access("EMP"."EMP_MGR_EMPNO"="EMP_MGR"."EMP_MGR_EMPNO" AND "EMP"."EMP_DEPTNO"="EMP_DEPT"."EMP_DEPTNO")
5 - filter("DNAME"='SALES')

I also like to add a QB_NAME hint so that I can easily reference those subqueries if I have to add some hints there. Finally, this is what I can generate for this query:


SQL> select EMP_EMPNO,EMP_ENAME,EMP_DEPT_DNAME,EMP_DEPT_LOC,EMP_LOC,EMP_MGR_DEPTNO,EMP_MGR_ENAME,EMP_MGR_DEPT_DNAME
from
(select /*+qb_name(EMP)*/ DEPTNO EMP_DEPTNO,EMPNO EMP_EMPNO,ENAME EMP_ENAME,MGR EMP_MGR_EMPNO,LOC EMP_LOC from EMP) EMP
natural join
(select /*+qb_name(EMP_DEPT)*/ DEPTNO EMP_DEPTNO,DNAME EMP_DEPT_DNAME,LOC EMP_DEPT_LOC from DEPT) EMP_DEPT
natural join
(select /*+qb_name(EMP_MGR)*/ DEPTNO EMP_MGR_DEPTNO,EMPNO EMP_MGR_EMPNO,ENAME EMP_MGR_ENAME from EMP) EMP_MGR
natural join
(select /*+qb_name(EMP_MGR_DEPT)*/ DEPTNO EMP_MGR_DEPTNO,DNAME EMP_MGR_DEPT_DNAME,LOC EMP_MGR_DEPT_LOC from DEPT) EMP_MGR_DEPT
where EMP_DEPT_DNAME='SALES';

So what?

My goal here is not to recommend always using natural joins. This depends on the context (ad-hoc queries, embedded ones in existing code with naming standards,…) and whether you can control exactly the column names. There are also a few bugs with ANSI joins, and natural join is not widely used, so maybe not tested a lot. But when I hear that Natural Join is bad, I want to explain the why/how/when. And one of the good sides of it is that it forces us to do the projection/rename as soon as possible, and this makes the query easier to read/maintain/evolve. Of course, using natural join in that way requires that all tables are added to the FROM clause through a subquery which carefully names all columns in the SELECT clause, so that the correlation with the other tables is clearly defined.

 

The article A tribute to Natural Join appeared first on the dbi services blog.

Restoring a database without having any controlfile backup


It should never happen, but sometimes it happens: you have just lost your datafiles as well as your fast recovery area (probably because, most of the time, these areas are on the same disks despite the recommendations).

Normal restore operations with RMAN are quite easy and secure as long as you have backupsets for the database, the archivelogs, and the spfile/controlfile (see the RMAN sketch after this list):

Step 1 – restore the spfile and start the instance
Step 2 – restore the controlfile and mount the database
Step 3 – restore the database (meaning the datafiles)
Step 4 – recover the database as far as possible (by applying archivelogs)
Step 5 – open the database in (no)resetlogs
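For reference, when a controlfile autobackup is available, this sequence is only a handful of RMAN commands (a sketch; the DBID below is a placeholder for your own):

RMAN> set dbid 123456789;
RMAN> startup nomount;
RMAN> restore spfile from autobackup;
RMAN> startup force nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;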

If you cannot go through step 2 because you don’t have any controlfile backup, you can’t go further with RMAN, that’s it. But there is another way to get a controlfile back to work.

Not having the spfile is annoying, but it’s just a subset of instance parameters, not really important stuff for your data. You can recreate a pfile if needed (you will probably convert it to an spfile later) by picking up the non-default parameters in the alert_SID.log; they are located just after the last start of the instance. Or you can create a very basic pfile with very few parameters: at least the db_name, and for this example I also need the compatible parameter and a temporary fast recovery area for an easy restore of the archivelogs.

vi /u01/oradata/DBTEST1/initDBTEST1.ora
*.db_name='DBTEST1'
control_files='/u01/oradata/DBTEST1/control01.dbf'
compatible=12.1.0.2
db_recovery_file_dest='/u01/oradata/fast_recovery_area/'
db_recovery_file_dest_size=10G

 

Fortunately you remember where you put the backup and you found this:

oracle@vmoratest1:/oracle/backup/ [DBTEST1] ls -lrt
total 189828
-rw-r-----. 1 oracle oinstall   4333568 Aug 20 14:47 DB_34tb1lfb_1_1
-rw-r-----. 1 oracle oinstall     98304 Aug 20 14:47 DB_36tb1lfd_1_1
-rw-r-----. 1 oracle oinstall  54304768 Aug 20 14:47 DB_33tb1lfb_1_1
-rw-r-----. 1 oracle oinstall 121438208 Aug 20 14:47 DB_32tb1lfb_1_1
-rw-r-----. 1 oracle oinstall     92672 Aug 20 14:49 ARC_3atb1lj7_1_1
-rw-r-----. 1 oracle oinstall   1730560 Aug 20 14:49 ARC_39tb1lj7_1_1
-rw-r-----. 1 oracle oinstall   5758464 Aug 20 14:49 ARC_38tb1lj7_1_1
-rw-r-----. 1 oracle oinstall   6619648 Aug 20 14:49 ARC_37tb1lj7_1_1

 

First of all, start the instance.

sqlplus / as sysdba
SQL> startup nomount pfile='/u01/oradata/DBTEST1/initDBTEST1.ora';

 

After trying to restore the controlfile from the backuppieces inside the backup directory, you find that none of them contains a controlfile backup:

rman target /
RMAN> restore controlfile from '/oracle/backup/DB_36tb1lfb_1_1';

Starting restore at 20-AUG-2018 15:53:08
using channel ORA_DISK_1

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/20/2018 15:53:08
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece

RMAN> restore controlfile from '/oracle/backup/DB_32tb1lfb_1_1';

Starting restore at 20-AUG-2018 15:53:21
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/20/2018 15:53:21
ORA-19697: standby control file not found in backup set

RMAN> restore controlfile from '/oracle/backup/ARC_3atb1lj7_1_1';

Starting restore at 20-AUG-2018 15:53:56
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 08/20/2018 15:53:56
ORA-19870: error while restoring backup piece /oracle/backup/ARC_3atb1lj7_1_1
ORA-19626: backup set type is archived log - can not be processed by this conversation

...

 

Having an instance started is always better than nothing. And through this instance you have access to many things without actually having a real database. For example, you can use the dbms_backup_restore package: this package is able to restore datafiles without having any controlfile. Very useful for us now. You can easily restore a datafile from a backuppiece, but you have to provide the datafile number. A few lines of PL/SQL code can help you restore all the datafiles from all the available backuppieces.

cd /u01/oradata/DBTEST1/
vi resto.sql

set serveroutput on
declare
        v_dev           varchar2(30) ;
        v_rest_ok       boolean;
        v_df_num        number := 1;
        v_df_max        number := 30;
        v_bck_piece     varchar2(256) := '&1';
        v_rest_folder   varchar2(226) := '/u01/oradata/DBTEST1/';
        v_rest_df       varchar2(256);
begin
       v_dev := dbms_backup_restore.deviceallocate;
       while v_df_num <= v_df_max loop
                v_rest_df := v_rest_folder||'DF_'||lpad(v_df_num,4,'0')||'.dbf';
                dbms_backup_restore.restoreSetDatafile;
                dbms_backup_restore.restoreDataFileTo(dfnumber=>v_df_num,toname=>v_rest_df);
                BEGIN
                        dbms_backup_restore.restoreBackupPiece(done=>v_rest_ok,handle=>v_bck_piece);
                EXCEPTION
                        WHEN OTHERS
                        THEN
                                v_rest_ok := FALSE;
                                -- dbms_output.put_line('Datafile '||v_df_num||' is not in this piece');
                END;
                if v_rest_ok THEN
                        dbms_output.put_line('Datafile '||v_df_num||' is restored : '||v_rest_df);
                end if;
                v_df_num := v_df_num + 1;
        end loop;
        dbms_backup_restore.deviceDeallocate;
end;
/
exit;

 

Let’s iterate this anonymous PL/SQL block for each backuppiece in your backup folder:

for a in `find /oracle/backup/ -name DB*`; do sqlplus -s / as sysdba @resto $a; done;

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_32tb1lfb_1_1';
Datafile 1 is restored : /u01/oradata/DBTEST1/DF_0001.dbf
Datafile 4 is restored : /u01/oradata/DBTEST1/DF_0004.dbf
Datafile 9 is restored : /u01/oradata/DBTEST1/DF_0009.dbf

PL/SQL procedure successfully completed.

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_33tb1lfb_1_1';
Datafile 2 is restored : /u01/oradata/DBTEST1/DF_0002.dbf
Datafile 7 is restored : /u01/oradata/DBTEST1/DF_0007.dbf
Datafile 8 is restored : /u01/oradata/DBTEST1/DF_0008.dbf

PL/SQL procedure successfully completed.

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_36tb1lfd_1_1';

PL/SQL procedure successfully completed.

old   6:     v_bck_piece    varchar2(256) := '&1';
new   6:     v_bck_piece    varchar2(256) := '/oracle/backup/DB_34tb1lfb_1_1';
Datafile 3 is restored : /u01/oradata/DBTEST1/DF_0003.dbf
Datafile 5 is restored : /u01/oradata/DBTEST1/DF_0005.dbf
Datafile 6 is restored : /u01/oradata/DBTEST1/DF_0006.dbf

PL/SQL procedure successfully completed.

 

Well done! 9 datafiles were restored. Now look at your folder: you’ll find the 9 datafiles, actually your whole database if your backup is reliable:

ls -lrt /u01/oradata/DBTEST1/

total 2017372
-rw-r--r--. 1 oracle oinstall      1035 Aug 20 23:05 resto.sql
-rw-r--r--. 1 oracle oinstall        91 Aug 20 23:12 initDBTEST1.ora
-rw-r-----. 1 oracle oinstall 734011392 Aug 20 23:15 DF_0001.dbf
-rw-r-----. 1 oracle oinstall   5251072 Aug 20 23:15 DF_0004.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0009.dbf
-rw-r-----. 1 oracle oinstall 576724992 Aug 20 23:15 DF_0002.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0007.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0008.dbf
-rw-r-----. 1 oracle oinstall 487596032 Aug 20 23:15 DF_0003.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0005.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0006.dbf

 

You can now manually create the controlfile with these datafiles (you just have to remember the character set of your database):

sqlplus / as sysdba

CREATE CONTROLFILE REUSE DATABASE "DBTEST1" RESETLOGS  ARCHIVELOG
      MAXLOGFILES 16
      MAXLOGMEMBERS 3
      MAXDATAFILES 100
      MAXINSTANCES 8
      MAXLOGHISTORY 2073
LOGFILE
    GROUP 1 '/u01/oradata/DBTEST1/redo01.rdo'  SIZE 100M BLOCKSIZE 512,
    GROUP 2 '/u01/oradata/DBTEST1/redo02.rdo'  SIZE 100M BLOCKSIZE 512,
    GROUP 3 '/u01/oradata/DBTEST1/redo03.rdo'  SIZE 100M BLOCKSIZE 512
DATAFILE
    '/u01/oradata/DBTEST1/DF_0001.dbf',
    '/u01/oradata/DBTEST1/DF_0002.dbf',
    '/u01/oradata/DBTEST1/DF_0003.dbf',
    '/u01/oradata/DBTEST1/DF_0004.dbf',
    '/u01/oradata/DBTEST1/DF_0005.dbf',
    '/u01/oradata/DBTEST1/DF_0006.dbf',
    '/u01/oradata/DBTEST1/DF_0007.dbf',
    '/u01/oradata/DBTEST1/DF_0008.dbf',
    '/u01/oradata/DBTEST1/DF_0009.dbf'
CHARACTER SET AL32UTF8 ;


Control file created.

 

ls -lrt /u01/oradata/DBTEST1/

total 2029804
-rw-r--r--. 1 oracle oinstall      1035 Aug 20 23:05 resto.sql
-rw-r--r--. 1 oracle oinstall        91 Aug 20 23:12 initDBTEST1.ora
-rw-r-----. 1 oracle oinstall 734011392 Aug 20 23:15 DF_0001.dbf
-rw-r-----. 1 oracle oinstall   5251072 Aug 20 23:15 DF_0004.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0009.dbf
-rw-r-----. 1 oracle oinstall 576724992 Aug 20 23:15 DF_0002.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0007.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0008.dbf
-rw-r-----. 1 oracle oinstall 487596032 Aug 20 23:15 DF_0003.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0005.dbf
-rw-r-----. 1 oracle oinstall  52436992 Aug 20 23:15 DF_0006.dbf
-rw-r-----. 1 oracle oinstall  12730368 Aug 20 23:24 control01.dbf

 

What a relief to see pfile, controlfile and datafiles all together again!

Work is not yet finished because the datafiles are probably inconsistent. There is no need to mount the database as it’s already mounted, and it’s now possible to catalog all your backuppieces so that RMAN can use them for the rest of the restore:

rman target /
catalog start with '/oracle/backup/';

using target database control file instead of recovery catalog
searching for all files that match the pattern /oracle/backup/

List of Files Unknown to the Database
=====================================
File Name: /oracle/backup/DB_32tb1lfb_1_1
File Name: /oracle/backup/ARC_37tb1lj7_1_1
File Name: /oracle/backup/ARC_39tb1lj7_1_1
File Name: /oracle/backup/DB_33tb1lfb_1_1
File Name: /oracle/backup/DB_36tb1lfd_1_1
File Name: /oracle/backup/ARC_3atb1lj7_1_1
File Name: /oracle/backup/DB_34tb1lfb_1_1
File Name: /oracle/backup/ARC_38tb1lj7_1_1

Do you really want to catalog the above files (enter YES or NO)? YES

cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oracle/backup/DB_32tb1lfb_1_1
File Name: /oracle/backup/ARC_37tb1lj7_1_1
File Name: /oracle/backup/ARC_39tb1lj7_1_1
File Name: /oracle/backup/DB_33tb1lfb_1_1
File Name: /oracle/backup/DB_36tb1lfd_1_1
File Name: /oracle/backup/ARC_3atb1lj7_1_1
File Name: /oracle/backup/DB_34tb1lfb_1_1
File Name: /oracle/backup/ARC_38tb1lj7_1_1

 

You now need to restore the archivelogs:

RMAN> restore archivelog all;

Starting restore at 20-AUG-2018 23:43:47
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=23 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=44
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_39tb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_39tb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=45
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_38tb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_38tb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=46
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_37tb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_37tb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=47
channel ORA_DISK_1: reading from backup piece /oracle/backup/ARC_3atb1lj7_1_1
channel ORA_DISK_1: piece handle=/oracle/backup/ARC_3atb1lj7_1_1 tag=TAG20180820T144911
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 20-AUG-2018 23:43:52

 

Now it’s probably possible to recover the database:

sqlplus / as sysdba

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 1386561 generated at 08/20/2018 14:47:07 needed for thread 1
ORA-00289: suggestion :
/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_47_fqplx2l
n_.arc
ORA-00280: change 1386561 for thread 1 is in sequence #47


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 1386635 generated at 08/20/2018 14:49:10 needed for thread 1
ORA-00289: suggestion :
/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_48_%u_.arc
ORA-00280: change 1386635 for thread 1 is in sequence #48
ORA-00278: log file
'/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_47_fqplx2
ln_.arc' no longer needed for this recovery


ORA-00308: cannot open archived log
'/u01/oradata/fast_recovery_area/DBTEST1/archivelog/2018_08_21/o1_mf_1_48_%u_.ar
c'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

 

The last error is normal because Oracle does not know that sequence 48 never existed.

Now all the archivelogs are applied; fingers crossed for the last operation, which is supposed to bring the database back to life:

SQL> alter database open resetlogs;

Database altered.

SQL> select instance_name, status from v$instance;

INSTANCE_NAME     STATUS
---------------- ------------
DBTEST1      OPEN

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
/u01/oradata/DBTEST1/DF_0001.dbf
/u01/oradata/DBTEST1/DF_0002.dbf
/u01/oradata/DBTEST1/DF_0003.dbf
/u01/oradata/DBTEST1/DF_0004.dbf
/u01/oradata/DBTEST1/DF_0005.dbf
/u01/oradata/DBTEST1/DF_0007.dbf
/u01/oradata/DBTEST1/DF_0006.dbf
/u01/oradata/DBTEST1/DF_0009.dbf
/u01/oradata/DBTEST1/DF_0008.dbf

9 rows selected.

Yes, everything is OK!!! Apart from generic names for your datafiles, a single controlfile, no spfile, default-configured redologs and probably no temporary tablespace. But the database is up and running, and you feel like a hero. Or you just managed to keep your job ;-)
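The remaining cleanup is easy. For example, to get a temporary tablespace and an spfile back (assuming the temporary tablespace was named TEMP; the tempfile path and size are just examples):

SQL> alter tablespace TEMP add tempfile '/u01/oradata/DBTEST1/temp01.dbf' size 1G;
SQL> create spfile from pfile='/u01/oradata/DBTEST1/initDBTEST1.ora';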

 

The article Restoring a database without having any controlfile backup appeared first on the dbi services blog.

COMMIT


By Franck Pachot

COMMIT is the SQL statement that ends a transaction, with two goals: persistence (changes are durable) and sharing (changes are visible to others). That’s a weird title and introduction for the 499th blog post I have written on the dbi services blog. 499 posts in nearly 5 years, roughly two blog posts per week. This activity was mainly motivated by the will to persist and share what I learn every day.

Persistence is primarily for myself: writing a test case with a little explanation is a good way to remember an issue encountered, and Google helps to get back to it when the problem is encountered again later. Sharing is partly for others: I learn a lot from what others are sharing (blogs, forums, articles, mailing lists,…) and it makes sense to also share what I learn. But in addition to that, publishing an idea is also a good way to validate it. If something is partially wrong or badly explained, or just benefits from exchanging ideas, then I’ll get feedback through comments, tweets, or e-mails.

This high throughput of things I learn every day gets its source from multiple events. In a consulting company, going from one customer to another means different platforms, versions, editions, different requirements, different approaches. Our added value is our experience. From all the problems seen in all those environments, we have built knowledge, best practices and tools (this is the idea of DMK) to bring a reliable and efficient solution to customer projects. But dbi services also invests a lot in research and training, in order to build this knowledge pro-actively, before encountering the problems at customers. A lot of blog posts were motivated by lab problems only (beta testing, learning new features, setting up a proof of concept before proposing it to a customer), which were then encountered later at customers, with faster solutions as they had already been investigated. dbi services also provides workshops for all technologies, and preparing training exercises, as well as giving the workshops, was also a great source of blog posts.

I must say that dbi services is an amazing company in this area. Five years ago, I blogged in French on developpez.com, answered forums such as dba-village.com, and wrote a few articles for SOUG. But as soon as I started at dbi services, I passed the OCM, I presented for the first time in public, at DOAG, and then at many local and international conferences. I attended my first Oracle Open World. I became an ACE and later an ACE Director. The blogging activity is one aspect only. What the dbi services Technology Organization produces is amazing, for the benefit of the customers and the consultants.

You may have heard that I’m going to work in the database team at CERN, which means quiescing my consulting and blogging activity here. For sure I’ll continue to share, but probably differently. Maybe on the Databases at CERN blog, and probably posting on Medium. Blogs will be also replicated to http://www.oaktable.net/ of course. Anyway, it is easy to find me on LinkedIn or Twitter. For sure I’ll be at conferences and probably not only Oracle ones.

I encourage you to continue to follow the dbi services blog, as I’ll do. Many colleagues are already sharing on all technologies. And new ones are coming. Even if my goal was the opposite, I’m aware that publishing so often may have discouraged other authors from doing so. I’m now releasing some bandwidth to them. The dbi services blog is in 9th position in the Top-100 Oracle blogs and 27th position in the Top-60 Database blogs, with 6 blog posts a week on average. And there are also a lot of non-database topics covered as well. So stay tuned on https://blog.dbi-services.com/.

 

The article COMMIT appeared first on the dbi services blog.

SQL Plan stability in 11G using stored outlines


Plan stability preserves execution plans in stored outlines. An outline is implemented as a set of optimizer hints that are associated with the SQL statement. If the use of the outline is enabled for the statement, then Oracle Database automatically considers the stored hints and tries to generate an execution plan in accordance with those hints (Oracle documentation).

Oracle Database can create a public or private stored outline for one or all SQL statements. The optimizer then generates equivalent execution plans from the outlines when you enable the use of stored outlines. You can group outlines into categories and control which category of outlines Oracle Database uses to simplify outline administration and deployment (Oracle documentation).

The plans that Oracle Database maintains in stored outlines remain consistent despite changes to a system’s configuration or statistics. Using stored outlines also stabilizes the generated execution plan if the optimizer changes in subsequent Oracle Database releases (Oracle documentation).
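As a minimal illustration of that syntax (the statement and the category name below are made up for the example), an outline can also be created explicitly for a single statement and then enabled by category:

SQL> create or replace outline my_outline for category myapp on select * from emp where deptno = 30;
SQL> alter session set use_stored_outlines = myapp;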

 

Many times we are in a situation where the performance of a query regresses, or the optimizer is not able to choose the best execution plan.

In the next lines I will describe a scenario that requires the use of a stored outline on a Standard Edition 2 database:

– we identify the different plans that exist for our sql_id

SQL> select hash_value,child_number,sql_id,executions from v$sql where sql_id='574gkxxxxxxxx';

HASH_VALUE CHILD_NUMBER SQL_ID        EXECUTIONS 
---------- ------------ ------------- ---------- 
 524000000            0 574gkxxxxxxxx          4 
 576000001            1 574gkxxxxxxxx          5

 

Between the two different plans, we know that the best one is the one with cost 15 and plan hash value 4444444444444, but it is not always chosen by the optimizer, causing performance peaks:

SQL> select * from table(dbms_xplan.display_cursor('574gkxxxxxxxx',0));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkxxxxxxxx, child number 0
-------------------------------------
Select   <query>
........................................................

Plan hash value: 4444444444444

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          |                            |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          |                            |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


   7 - filter("SERIAL#"=1xxxxxxxxxxxx)
  10 - filter("SERIAL#"=1xxxxxxxxxxxx)
----------------------------------------------

 

In order to fix this, we will create and enable a stored outline that should help the optimizer to always choose the best plan:

BEGIN
  DBMS_OUTLN.create_outline(hash_value => 524000000, child_number => 0);
END;
/

PL/SQL procedure successfully completed.

SQL>
SQL> alter system set use_stored_outlines=TRUE;

System altered.

As the parameter “use_stored_outlines” is a ‘pseudo’ parameter, it is not persistent across restarts of the instance; for that reason we had to create this trigger that fires on database startup.

SQL> create or replace trigger my_trigger after startup on database
  2  begin
  3  execute immediate 'alter system set use_stored_outlines=TRUE';
  4  end;
  5  /

Trigger created.

Now we can check if the outline is used:
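A quick way to do this is to query DBA_OUTLINES (a sketch; the predicate is just an example matching the owner shown below):

SQL> select name, owner, category, used
  2  from dba_outlines
  3  where owner = 'TEST';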

NAME                           OWNER                          CATEGORY                       USED
------------------------------ ------------------------------ ------------------------------ ------
SYS_OUTLINE_1xxxxxxxxxxxxxxxx  TEST                           DEFAULT                        USED

And also check that the execution plan takes the outline into account:

SQL> select * from table(dbms_xplan.display_cursor('574gkxxxxxxxx',0));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------

SQL_ID  574gkxxxxxxxx, child number 0
-------------------------------------
Select  
...................

Plan hash value: 444444444444

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |       |       |    15 (100)|       |
|   1 |  UNION-ALL                     |                            |       |       |            |       |
|*  2 |   FILTER                       |                            |       |       |            |       |
|   3 |    NESTED LOOPS                |                            |       |       |            |       |
|   4 |     NESTED LOOPS               |                            |     1 |    76 |     7  (15)| 00:00:01 |
|   5 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|   6 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|*  7 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|   8 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|   9 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 10 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 11 |      INDEX RANGE SCAN          |                            |     1 |       |     1   (0)| 00:00:01 |
|* 12 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     2   (0)| 00:00:01 |
|* 13 |   FILTER                       |                            |       |       |            |       |
|  14 |    NESTED LOOPS                |                            |       |       |            |       |
|  15 |     NESTED LOOPS               |                            |     1 |    76 |     8  (13)| 00:00:01 |
|  16 |      MERGE JOIN CARTESIAN      |                            |     1 |    52 |     5  (20)| 00:00:01 |
|  17 |       SORT UNIQUE              |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 18 |        TABLE ACCESS FULL       |                            |     1 |    26 |     2   (0)| 00:00:01 |
|  19 |       BUFFER SORT              |                            |     1 |    26 |     3  (34)| 00:00:01 |
|  20 |        SORT UNIQUE             |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 21 |         TABLE ACCESS FULL      |                            |     1 |    26 |     2   (0)| 00:00:01 |
|* 22 |      INDEX RANGE SCAN          |                            |     1 |       |     2   (0)| 00:00:01 |
|  23 |     TABLE ACCESS BY INDEX ROWID|                            |     1 |    24 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


   7 - filter("SERIAL#"=1xxxxxxxxxxx)
  10 - filter("SERIAL#"=1xxxxxxxxxxx)
  
Note
-----
   - outline "SYS_OUTLINE_18xxxxxxxxxxxx" used for this statement

To use stored outlines when Oracle compiles a SQL statement, we need to enable them by setting the system parameter USE_STORED_OUTLINES to TRUE or to a category name. This parameter can also be set at the session level.
By setting this parameter to TRUE, the default category under which the outlines are created and used is DEFAULT.
If you prefer to specify a category when creating the outline, Oracle will use that outline category until you provide another category value or you disable the usage of outlines by setting the parameter USE_STORED_OUTLINES to FALSE.
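As an illustration (a sketch; MYCAT is a hypothetical category name, the hash_value is the one used earlier):

BEGIN
  -- create the outline in a named category instead of DEFAULT
  DBMS_OUTLN.create_outline(hash_value => 524000000, child_number => 0, category => 'MYCAT');
END;
/

-- tell the optimizer to use outlines from that category
SQL> alter system set use_stored_outlines=MYCAT;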

Additionally, I would like to mention that stored outlines are a deprecated feature, but they still help us fix performance issues on Standard Edition configurations.

 

The article SQL Plan stability in 11G using stored outlines first appeared on Blog dbi services.

How to migrate Grid Infrastructure from release 12c to release 18c

$
0
0

Oracle Clusterware 18c builds on this innovative technology by further enhancing support for larger multi-cluster environments and improving the overall ease of use. Oracle Clusterware is leveraged in the cloud in order to provide enterprise-class resiliency where required and dynamic as well as online allocation of compute resources where needed, when needed.
Oracle Grid Infrastructure provides the necessary components to manage high availability (HA) for any business critical application.
HA in consolidated environments is no longer simple active/standby failover.

In this blog we will see how to upgrade our Grid Infrastructure stack from 12cR2 to 18c.

Step 1: You are required to patch your GI with patch 27006180

[root@dbisrv04 ~]# /u91/app/grid/product/12.2.0/grid/OPatch/opatchauto apply /u90/Kit/27006180/ -oh /u91/app/grid/product/12.2.0/grid/

Performing prepatch operations on SIHA Home........

Start applying binary patches on SIHA Home........

Performing postpatch operations on SIHA Home........

[finalize:finalize] OracleHomeLSInventoryGrepAction action completed on home /u91/app/grid/product/12.2.0/grid successfully
OPatchAuto successful.

Step 2: Check the list of patches applied

grid@dbisrv04:/u90/Kit/ [+ASM] /u91/app/grid/product/12.2.0/grid/OPatch/opatch lsinventory
Oracle Interim Patch Installer version 12.2.0.1.6
Copyright (c) 2018, Oracle Corporation.  All rights reserved.

Lsinventory Output file location : /u91/app/grid/product/12.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2018-10-11_09-06-44AM.txt

--------------------------------------------------------------------------------
Oracle Grid Infrastructure 12c                                       12.2.0.1.0
There are 1 products installed in this Oracle Home.


Interim patches (1) :

Patch  27006180     : applied on Thu Oct 11 09:02:50 CEST 2018
Unique Patch ID:  21761216
Patch description:  "OCW Interim patch for 27006180"
   Created on 5 Dec 2017, 09:12:44 hrs PST8PDT
   Bugs fixed:
     13250991, 20559126, 22986384, 22999793, 23340259, 23722215, 23762756
........................
     26546632, 27006180

 

Step 3: Upgrade the binaries to release 18c

(screenshot: upgrade_grid)

(screenshot: directory_new_grid)
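For reference, the screenshots above correspond roughly to the following steps (a sketch; the grid home path matches the rootupgrade.sh output below, but the zip file name is an assumption):

# as the grid user: create the new 18c grid home and extract the image into it
mkdir -p /u90/app/grid/product/18.3.0/grid
cd /u90/app/grid/product/18.3.0/grid
unzip -q /u90/Kit/LINUX.X64_180000_grid_home.zip   # file name is an assumption
# launch the installer and choose "Upgrade Oracle Grid Infrastructure"
./gridSetup.sh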

– it is recommended to run the rootupgrade.sh script manually

(screenshot: run_root_script)

/u90/app/grid/product/18.3.0/grid/rootupgrade.sh
[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u90/app/grid/product/18.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u90/app/grid/product/18.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/dbisrv04/crsconfig/roothas_2018-10-11_09-21-27AM.log

2018/10/11 09:21:29 CLSRSC-595: Executing upgrade step 1 of 12: 'UpgPrechecks'.
2018/10/11 09:21:30 CLSRSC-363: User ignored prerequisites during installation
2018/10/11 09:21:31 CLSRSC-595: Executing upgrade step 2 of 12: 'GetOldConfig'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 3 of 12: 'GenSiteGUIDs'.
2018/10/11 09:21:33 CLSRSC-595: Executing upgrade step 4 of 12: 'SetupOSD'.
2018/10/11 09:21:34 CLSRSC-595: Executing upgrade step 5 of 12: 'PreUpgrade'.

ASM has been upgraded and started successfully.

2018/10/11 09:22:25 CLSRSC-595: Executing upgrade step 6 of 12: 'UpgradeAFD'.
2018/10/11 09:23:52 CLSRSC-595: Executing upgrade step 7 of 12: 'UpgradeOLR'.
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
2018/10/11 09:23:57 CLSRSC-595: Executing upgrade step 8 of 12: 'UpgradeOCR'.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node dbisrv04 successfully pinned.
2018/10/11 09:24:00 CLSRSC-595: Executing upgrade step 9 of 12: 'CreateOHASD'.
2018/10/11 09:24:02 CLSRSC-595: Executing upgrade step 10 of 12: 'ConfigOHASD'.
2018/10/11 09:24:02 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/10/11 09:24:49 CLSRSC-595: Executing upgrade step 11 of 12: 'UpgradeSIHA'.
CRS-4123: Oracle High Availability Services has been started.


dbisrv04     2018/10/11 09:25:58     /u90/app/grid/product/18.3.0/grid/cdata/dbisrv04/backup_20181011_092558.olr     70732493   

dbisrv04     2018/07/31 15:24:14     /u91/app/grid/product/12.2.0/grid/cdata/dbisrv04/backup_20180731_152414.olr     0
2018/10/11 09:25:59 CLSRSC-595: Executing upgrade step 12 of 12: 'InstallACFS'.
CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbisrv04'
CRS-2673: Attempting to stop 'ora.driver.afd' on 'dbisrv04'
CRS-2677: Stop of 'ora.driver.afd' on 'dbisrv04' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'dbisrv04' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/10/11 09:27:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

– you can ignore the warning related to the memory resources

(screenshot: ignore_prereq)

(screenshot: completed_succesfully)

– once the installation is finished, verify what has been done

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl query has softwareversion
Oracle High Availability Services version on the local node is [18.0.0.0.0]

[root@dbisrv04 ~]# /u90/app/grid/product/18.3.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA3.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.db18c.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u90/app/o
                                                             racle/product/18.3.0
                                                             /dbhome_1,STABLE
--------------------------------------------------------------------------------
 

The article How to migrate Grid Infrastructure from release 12c to release 18c first appeared on Blog dbi services.

Oracle Open World 2018 D 0: Bike trip starting on Golden Gate

$
0
0

Today (22.10.2018) my colleague Mouhamadou and I had the opportunity to take part in a bike trip (#BikeB4OOW) organized by Bryn Llewellyn, product manager for Oracle PL/SQL and Edition Based Redefinition (EBR). We were in good company, with several other well-known people such as Franck Pachot, Mike Dietrich, Pieter Van Puymbroeck, Liesbeth Van Raemd and Ivica Arsov, just to name a few.

Oracle Biking Team

We started our trip beside the Golden Gate at the Welcome Center at 10:00 am in the direction of Sausalito and kept on along the coast until Tiburon, which we reached at about 12:00. There we split our group between the ones who would enjoy a meal and take the ferry back and another group which preferred to come back by bike.

Golden Gate Bike trip

Mouhamadou and I, accompanied by Franck, Pieter, Liesbeth and Ivica, chose the first option and enjoyed a delicious meal at the Servino Ristaurante.

Lunch Time in Tiburon

We then went for a short post-lunch walk on Paradise Drive, taking some pictures of sea lions and herons, but also some selfies.

Heron in San Francisco

Finally we took the ferry to reach North Beach and bring back our bikes. It was an opportunity to have a wonderful view of Alcatraz Island, San Francisco and the Golden Gate Bridge.

Alcatraz and San Francisco

Because a blog post about a bike trip starting at the Golden Gate without a single Golden Gate picture is unimaginable, I made a small detour before giving back my bike to catch one…

Golden Gate

On the way back to the hotel we picked up our Oracle Pass at Moscone Center in order to attend the sessions tomorrow.

The Oracle Pass

So tomorrow there is no sightseeing and no bike trip on the program, but I do hope for a lot of interesting technical sessions and fun. For sure I will attend Larry Ellison’s keynote “Cloud Generation 2″ and I’m pretty sure that I’m going to hear about autonomous tasks, AI, security and cloud ;-).

Greetings from San Francisco

The article Oracle Open World 2018 D 0: Bike trip starting on Golden Gate first appeared on Blog dbi services.


Oracle Open World 2018 D2: Mark Hurd’s keynote – Accelerating Growth in the Cloud

$
0
0

During this second day at Oracle Open World 2018 (24.10.2018) I attended Mark Hurd’s keynote named “Accelerating Growth in the Cloud”. Several famous people participated in this keynote, such as:

Ian Bremmer, who is the president and founder of Eurasia Group and, according to Oracle, “the leading global political risk research and consulting firm”. Mr Bremmer is also the president and founder of GZERO Media.

Sherry Aaholm who is the Vice President and Chief Information Officer of Cummins Inc. “Cummins Inc. is an American Fortune 500 corporation that designs, manufactures, and distributes engines, filtration, and power generation products” – wikipedia.

Sherry Aaholm with Mark Hurd

Navindra Yadav, Founder of Tetration Analytics. “Cisco Tetration offers holistic workload protection for multicloud data centers by enabling a zero-trust model using segmentation” – Cisco

Navindra Yadav and Mark Hurd

Thaddeus Arroyo, Chief Executive Officer of AT&T Business. Mr Arroyo is responsible for the company’s integrated global business solutions organization, which serves more than 3 million business customers. “AT&T is the world’s largest telecommunications company, the second largest provider of mobile telephone services, and the largest provider of fixed telephone services in the United States through AT&T Communications.” – wikipedia

Thaddeus Arroyo with Mark Hurd

Geopolitical analysis with Ian Bremmer

The session started with a videoconference between Mark Hurd and Ian Bremmer regarding geopolitical topics. China was mentioned as the biggest economy in the world and a technology superpower. The alignment between Chinese companies and the Chinese government was also underlined. Regarding the U.S., they spoke about investment in physical defense vs investment in virtual defense, where there is still a lot to do compared to some other countries.

Disruption as a constant

Mark Hurd then presented a few slides, starting with a short summary named “With disruption as a constant – technology becomes the differentiator”

  • Data is key asset for business to own, analyze, use and secure
  • Virtual assets will win over physical resources
  • Cyber teams are the new future
  • Cloud and integrated technologies, like AI, help organizations lower costs while driving innovation & improving productivity

Past predictions

He then recapped the predictions he made in 2015/2016 for 2025

  • 80% of production apps will be in the cloud
  • Two SaaS Suite providers will have 80% market share
  • The number of corporate-owned data centers will have decreased by 80%
  • 80% of IT budgets will be spent on cloud services
  • 80% of IT budgets will be spent on business innovation, and only 20% on system maintenance
  • All enterprise data will be stored in the cloud
  • 100% of application development and testing will be conducted in the cloud
  • Enterprise clouds will be the most secure place for IT processing

and the ones he made in 2017 for 2020

  • More than 50% of all enterprise data will be managed autonomously and also be more secure
  • Even highly regulated industries will shift 50% of their production workloads to cloud
  • 90% of all enterprise applications will feature integrated AI capabilities
  • The top ERP vendor in the cloud will own more than half of the total ERP market

Then he presented a few predictions that were made afterwards by Forbes and Gartner Research, to prove that the analysts and press had followed the same predictions…

  • In 15 months, 80% of all IT budgets will be committed to cloud apps and solutions – Forbes, Louis Columbus, “State of Cloud Adoption and Security”, 2017
  • 80% of enterprises will have shut down their traditional data centers by 2025 – Gartner Research, Dave Cappuccio, “The Data Center is Dead”, 2018
  • The Cloud Could Be Your Most Secure Place for Data, Niall Browne CISO, Domo, 2017
  • Oracle, Salesforce, and MSFT together have a 70% share of all SaaS revenue – Forrester Research, 10 Cloud Computing predictions for 2018
  • AI Technologies Will Be in Almost Every New Software Product by 2020 – Gartner Research, Jim Hare, AI development strategies, 2017

Mark Hurd then spoke about AI in a slide named “Business Applications with AI”, where he presented a few statistics in order to better understand how AI (chatbots, blockchain, and so on) can help businesses. Not to mention that all these technologies will be encapsulated in Cloud Services.

  • ERP Cloud – 30% of a Financial Analyst’s time (roughly 1 full day a week) is spent doing manual reports in Excel. Using AI, reports become error free and more insightful.
  • HCM Cloud – 35% of a job recruiter’s day is spent sourcing and screening candidates. This could be cut in half, and result in improved employee talent.
  • SCM Cloud – 65% of managers’ time is spent manually tracking the shipment of goods. With Blockchain, this could be automated for improved visibility and trust.
  • CX Cloud – 60% of phone-support time on customer issues could be avoided altogether. With integrated CX and AI, issues could be addressed in a single call or via a chatbot.

Mark Hurd’s predictions by 2025

Finally he spoke about his own predictions for 2025: By 2025, all cloud apps will include AI

  • These Cloud apps will further distance themselves from legacy applications.
  • AI will be pervasive and woven into all business apps and platform services.
  • The same will be true for technologies like blockchain.

According to him by 2025, 85% of interactions with customers will be automated: Customer experience is fundamentally changing (and will dramatically improve) with these emerging technologies:

  • AI-based Digital Assistants increase productivity and humanize experiences
  • AI-driven Analytics helps businesses understand complexity of all customer needs
  • Internet of Things brings customers closer to companies that serve them

New I.T jobs by 2025

Regarding I.T jobs the following has been predicted by Mark Hurd:

  • 60% of the I.T Jobs have not been invented yet (But will be by 2025)

and the new jobs in 2025 will be:

  • Data professional (Analyst Scientist, Engineers)
  • Robot Supervisor
  • Human to Machine UX specialists
  • Smart City Technology Designers
  • AI-Assisted Healthcare Technician

As a summary, he concluded with a slide named “Better Business, Better I.T”

  • Cloud is irrefutable and foundational
  • Next in cloud is accelerated productivity and innovation
  • AI and other technologies will be integrated features
  • Autonomous database software will reduce cost and reduce risk

Mark Hurd during OOW2018

The article Oracle Open World 2018 D2: Mark Hurd’s keynote – Accelerating Growth in the Cloud first appeared on Blog dbi services.

Reimaging an old X3-2/X4-2 ODA

$
0
0

Introduction

X3-2 and X4-2 ODAs are still very capable pieces of hardware in 2018. With 256GB of RAM, at least 16 cores per node, and 18TB of raw disk capacity as standard, these appliances are far from obsolete, even if you probably no longer have hardware support from Oracle.
If you own several ODAs of this kind, hardware support may not really be a problem. If something fails, you can use the other ODA for spare parts.

You probably missed some patches on your old ODAs. Why? Maybe because it’s not so easy to patch, and it’s even more difficult if you don’t patch regularly. Or maybe just because you don’t want to add more tasks to your job (applying every patch feels like never-ending patching).

So if you want to give a second life to your ODA, you’d better reimage it.

Reimaging: how to do?

Reimaging is the best way to clean up your ODA. Current deployment packages are certified for all ODAs except the V1 (first generation, before the X3-2).

You first have to download all the needed files from MOS. Make sure you download the deployment packages for the OAKCLI stack, because ODACLI is limited to lite and newer ODAs.

Assuming you’re using a bare metal configuration and you want to deploy the latest ODA version 12.2.1.4.0, you will need the following files :

  • 12999313 : ISO for reimaging
  • 28216780 : patch for OAKCLI stack (because reimaging actually does not update bioses and firmwares)
  • 12978712 : appliance server for OAKCLI stack
  • 17770873, 19520042 and 27449599 : rdbms clones for databases 11.2.0.4, 12.1.0.2 and 12.2.0.1

Network configuration and disk configuration didn’t change: you still need to provide all the IPs, VIPs, DNS and so on for the network, and disk configuration is still not so clear, with “external backup” meaning that you will go for an 85/15 split between DATA and RECO instead of the default 40/60 split. Don’t forget that you can change the redundancy level for each ASM diskgroup: DATA, RECO and REDO can use high redundancy, but normal redundancy will give you 50% more free space (18TB raw is 9TB usable in normal redundancy and 6TB usable in high redundancy).

Step 1 – Connect the ISO as a CDROM through ILOM interface and reimage the servers

I won’t give you the extensive procedure for this part: nothing has changed regarding ODA reimaging in the last few years.

The first step is to connect to the ILOM and virtually plug the ISO image into the server. Then, select the CDROM as the next boot device, and do a power cycle of the server. You’ll have to repeat this on the other node too. Reimaging lasts about 1h and is fully automatic. The last step is still the longest one (post-installation procedure). Once the reimaging is done, each node should have a different default name: oak1 for node 0 and oak2 for node 1 (weird). If the nodes are both oak1, please check the cables connected to the shared storage: they must be connected according to the setup poster.

Step 2 – Configure basic network settings

Reimaging always ends with a reboot, and depending on the appliance, it will ask you which kind of network you plan to use: Copper or Fiber. Then, through the ILOM, you need to launch the configure firstnet script:

/opt/oracle/oak/bin/oakcli configure firstnet

Repeat this configuration step on the second node. Now your nodes are visible through the network.

Step 3 – Deploy, cleanup, deploy…

Reimaging was so easy… But from now on it will be a little more tricky. You now need to deploy the appliance: that is, configure the complete network settings, install all the Oracle stack with Grid Infrastructure, ASM and the latest database engine, and eventually create a first database. And you will need a graphical interface to configure all these parameters and launch the deployment. So, from the ILOM session, let’s unpack the necessary files, start a graphical Linux session and launch the deployment GUI.

oakcli unpack -package /opt/dbi/p12978712_122140_Linux-x86-64_1of2.zip
oakcli unpack -package /opt/dbi/p12978712_122140_Linux-x86-64_2of2.zip
oakcli unpack -package /opt/dbi/p27449599_122140_Linux-x86-64.zip
startx
oakcli deploy

The graphical interface will help you configure all the parameters, but don’t deploy straight away. Back up the configuration file and then edit it:

vi /opt/dbi/deploy_oda01

Review all the parameters and adjust them to perfectly match your needs (most of these parameters cannot be changed afterwards).
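For illustration, the disk-related entries look roughly like this (a hypothetical excerpt; check your generated file for the exact parameter syntax, which may differ between ODA versions):

# hypothetical excerpt from the deployment configuration file
DISKGROUPREDUNDANCYS=NORMAL     # redundancy used for the diskgroups
DBBackupType=External           # external backup => 85/15 DATA/RECO split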

Now you can launch the real deployment and select your configuration file in the graphical interface:

oakcli deploy

The first try will fail and it’s normal behaviour. The failure is because of the ASM headers: they are still written on the disks in the storage shelf. Reimaging did nothing on these disks, and already having ASM disks configured will make the deployment process fail. Now you can exit the deployment and do a cleanup of the failed attempt.

/opt/oracle/oak/onecmd/cleanupDeploy.pl

Unfortunately you cannot do the cleanup if nothing is already deployed, so you need this first failing attempt. Alternatively, you can do the cleanup before reimaging, or manually clean all the disk headers and partitions on the 20 disks before trying to deploy (with a dd), but it probably won’t be faster.
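For the record, the manual cleanup would look something like this (a destructive sketch; the device pattern is a placeholder, so double-check which devices are the 20 shared disks before running anything like it):

# wipe the beginning of each shared disk to clear ASM headers and partition tables
for dev in /dev/mapper/<shared_disk_pattern>*; do
  dd if=/dev/zero of="$dev" bs=1M count=100
done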

When the cleanup is done, the ODA will reboot and you’ll have to configure the firstnet again from the ILOM on both nodes.

/opt/oracle/oak/bin/oakcli configure firstnet

Finally, with a new graphical session you can restart the deployment, and this time, if your parameter file is OK, it will be successful. Yes!

startx
oakcli deploy

Step 4 – Patch the server

It seems weird but reimaging actually doesn’t update the firmware, BIOS and ILOM of the servers, nor the firmware of the disks in the storage shelf. Understand that reimaging is only a software reimaging of the nodes. This is an example of an ODA X4-2 configuration just after reimaging and deploying the appliance:

oakcli show version -detail
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
12.2.1.4.0
Controller_INT            11.05.03.00               Up-to-date
Controller_EXT            11.05.03.00               Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A720                      A7E0
HDD_SHARED {
[ c2d0,c2d1,c2d2,c2d      A720                      A7E0
3,c2d4,c2d5,c2d6,c2d
7,c2d8,c2d9,c2d11,c2
d12,c2d13,c2d14,c2d1
5,c2d16,c2d17,c2d18,
c2d19 ] [ c2d10 ]                 A7E0                      Up-to-date
}
ILOM                      3.2.4.46.a r101689        4.0.2.27.a r123795
BIOS                      25030100                  25060300
IPMI                      1.8.12.4                  Up-to-date
HMP                       2.4.1.0.11                Up-to-date
OAK                       12.2.1.4.0                Up-to-date
OL                        6.9                       Up-to-date
GI_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)
DB_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)

Fortunately you can apply the patch even if your ODA is already on the same software version as your patch. Well done Oracle.

So let’s register the patch files and do the patching of the servers (server will probably reboot):


oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_1of3.zip
oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_2of3.zip
oakcli unpack -package /opt/dbi/p28216780_122140_Linux-x86-64_3of3.zip
oakcli update -patch 12.2.1.4.0 --server
...


oakcli show version -detail

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
12.2.1.4.0
Controller_INT            11.05.03.00               Up-to-date
Controller_EXT            11.05.03.00               Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A7E0                      Up-to-date
HDD_SHARED {
[ c2d0,c2d1,c2d2,c2d      A720                      A7E0
3,c2d4,c2d5,c2d6,c2d
7,c2d8,c2d9,c2d11,c2
d12,c2d13,c2d14,c2d1
5,c2d16,c2d17,c2d18,
c2d19 ] [ c2d10 ]                 A7E0                      Up-to-date
}
ILOM                      4.0.2.27.a r123795        Up-to-date
BIOS                      25060300                  Up-to-date
IPMI                      1.8.12.4                  Up-to-date
HMP                       2.4.1.0.11                Up-to-date
OAK                       12.2.1.4.0                Up-to-date
OL                        6.9                       Up-to-date
GI_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)
DB_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)

Great, our servers are now up-to-date. But storage is still not OK.

Step 4 – Patch the storage

Patching the storage is quite easy (server will probably reboot):

oakcli update -patch 12.2.1.4.0 --storage
...

oakcli show version -detail

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
12.2.1.4.0
Controller_INT            11.05.03.00               Up-to-date
Controller_EXT            11.05.03.00               Up-to-date
Expander                  0018                      Up-to-date
SSD_SHARED                944A                      Up-to-date
HDD_LOCAL                 A7E0                      Up-to-date
HDD_SHARED                A7E0                      Up-to-date
ILOM                      4.0.2.27.a r123795        Up-to-date
BIOS                      25060300                  Up-to-date
IPMI                      1.8.12.4                  Up-to-date
HMP                       2.4.1.0.11                Up-to-date
OAK                       12.2.1.4.0                Up-to-date
OL                        6.9                       Up-to-date
GI_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)
DB_HOME                   12.2.0.1.180417(2767      Up-to-date
4384,27464465)

Everything is OK now!

Conclusion – A few more things

  • When redeploying, consider changing the redundancy of the diskgroups and the partitioning of the disks if needed. This can only be configured during deployment. Disk parameters are located in the deployment file (DISKGROUPREDUNDANCYS and DBBackupType)
  • Always check that all the components are up-to-date to keep your ODA in a consistent state. Check on both nodes because local patching is also possible, and it makes no sense if the nodes are running different patch levels
  • Don’t forget to check/apply your licenses on your ODA because using Oracle software is for sure not free
  • You have to know that a freshly redeployed ODA will have 12.2 database compatibility on the diskgroups, making the use of ACFS mandatory for your old databases. For me it’s a real drawback considering that ACFS adds useless complexity to ASM
  • Don’t forget to deploy the other dbhomes according to your needs

The article Reimaging an old X3-2/X4-2 ODA first appeared on Blog dbi services.

My DOAG Debut

$
0
0

Unbelievable! After more than 10 years working in the Oracle Database environment, this year saw my first participation in the DOAG Conference + Exhibition.

After a relaxed trip to Nürnberg with all the power our small car could provide on the German Autobahn, we arrived at the Messezentrum.
With the combined power of our dbi services team, the booth was ready in no time, so we could switch to the more relaxed part of the day and ended up in our hotel’s bar with other DOAG participants.

The next few days were a firework of valuable sessions, stimulating discussions and some after-hour parties, which made me think about my life decisions and led me to the question: why did it take me so long to participate in the DOAG Conference + Exhibition?

It would make this post unreadably long and boring if I summed up all the sessions I attended.
So I will just mention a few highlights with the links to the presentations:

Boxing-Gloves-Icons

Boxing Gloves Vectors by Creativology.pk

And of course, what must be mentioned is The Battle: Oracle vs. Postgres: Jan Karremans vs. Daniel Westermann

The red boxing glove (for Oracle) represents Daniel Westermann, an Oracle expert for many, many years who is now the Open Infrastructure Technology Leader @ dbi services, while Jan Karremans, Senior Sales Engineer at Enterprise DB, put on the blue glove (for Postgres). The room was fully packed with over 200 people who had more sympathy for Oracle.

The Battle: Oracle vs. Postgres

The Battle: Oracle vs. Postgres

Knowing how much Daniel loves the open source database, it was inspiring to see how eloquently he defended the Oracle system and got Jan into trouble multiple times.
It was a good and brave fight between the opponents, in which Daniel had the better arguments and won on points.
Next time, I would like to see Daniel on the other side defending Postgres, because I am sure he could fight down almost every opponent.

In the end, this DOAG was a wonderful experience and I am sure it won’t take another 10 years until I come back.

PS: I could write about the after party, but as you know, what happens at the after party stays at the after party, except the headache; this little b… stays a little bit longer.

PPS: On the last day I got a nice little present from virtual7 for winning the F1 grand prix challenge. I know exactly at which dbi event we will open this bottle, stay tuned…
IMG_20181122_153112

The article My DOAG Debut first appeared on Blog dbi services.

DOAG 2018: OVM or KVM on ODA?

$
0
0

The DOAG 2018 is over; for me the most important topics were in the field of licensing. The uncertainty among users is great, so let’s take virtualization on the ODA as an example:

The starting point: the customer uses Oracle Enterprise Edition, has 2 CPU licenses, uses Dataguard as disaster protection on 2 ODA X7-2M systems and wants to virtualize; he also has 2 application servers that are to be virtualized as well.

Sure, if I use the HA variant of the ODA or Standard Edition, this does not concern me: there, OVM is used as the hypervisor and this allows hard partitioning. The database system (ODA_BASE) automatically gets its own CPU pool in a Virtualized Deployment; additional VMs can be distributed over the remaining CPUs.

On the small and medium models only KVM is available as a hypervisor. This has some limitations: on the one hand there is no virtualized deployment of the ODA 2S/2M systems, on the other hand the operation of databases as KVM guests is not supported. This means that the ODA must be set up as a bare metal system and the application servers are virtualized in KVM.

What does that mean for the customer described above? We set up the system in bare metal mode, activate 2 cores on each system, set up the database and set up Dataguard between primary and standby. This costs the customer 2 EE CPU licenses (about $95k per price list).

Now he wants to virtualize his 2 application servers and notes that 4 cores are needed per application server. Of the 36 cores per system only 2 are activated, so he also activates 4 more cores (odacli update-cpucore -c 6) on both systems and installs the VMs.
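For reference, the commands to check and change the enabled cores on a bare metal ODA lite look like this (a sketch):

# show the currently enabled cores
odacli describe-cpucore
# enable 6 cores (2 already licensed + 4 for the application server VM)
odacli update-cpucore -c 6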

But: the customer has also changed his Oracle EE license requirement, namely from 1 EE CPU license to 3 per ODA, so overall he has to buy 6 CPU licenses (about $285k according to the price list)!

Now Oracle promotes KVM as the virtualization solution of choice for the future. However, this will not work without hard partitioning under KVM or support for databases in KVM machines.

Tammy Bednar (Oracle’s Oracle Database Appliance Product Manager) announced in her presentation “KVM or OVM? Use Cases for Solution in a Box” that solutions to this problem are expected by mid-2019:

– Oracle databases and applications should be supported as KVM guests
– Support for hard partitioning
– Windows guests under KVM
– Tooling (odacli / Web Console) should support the deployment of KVM guests
– A “privileged” VM (similar to the ODA_BASE on the HA models) for the databases should be provided
– Automated migration of OVM guests to KVM

All these measures would certainly make the “small” systems much more attractive for consolidation. They would also help to simplify the “license jungle” a bit and give customers a bit more certainty. I am curious what will come.

The article DOAG 2018: OVM or KVM on ODA? first appeared on Blog dbi services.

Strange behavior when patching GI/ASM

$
0
0

I tried to apply a patch to my 18.3.0 GI/ASM two-node cluster on RHEL 7.5.
The first node worked fine, but the second node always got an error…

Environment:
Server Node1: dbserver01
Server Node2: dbserver02
Oracle Version: 18.3.0 with PSU OCT 2018 ==> 28660077
Patch to be installed: 28655784 (RU 18.4.0.0)

First node (dbserver01)
Everything fine:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Sucessfull

Secondary node (dbserver02)
Same command but different output:

cd ${ORACLE_HOME}/OPatch
sudo ./opatchauto apply /tmp/28655784/
...
Remote command execution failed due to No ECDSA host key is known for dbserver01 and you have requested strict checking.
Host key verification failed.
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

After playing around with the keys I found out that the host keys also had to be exchanged for root.
So I connected as root and made an ssh connection from dbserver01 to dbserver02 and from dbserver02 to dbserver01.
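A non-interactive way to do this is ssh-keyscan (a sketch; run it as root on both nodes, and note that each node must also trust its own host key):

# as root, add the ECDSA host keys of both nodes (including the local one) to known_hosts
ssh-keyscan -t ecdsa dbserver01 dbserver02 >> /root/.ssh/known_hosts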

After I exchanged the host keys the error message changed:

Remote command execution failed due to Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Command output:
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

So I investigated the log file a little further and the statement with the error was:

/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o NumberOfPasswordPrompts=0 dbserver01 \
/bin/ssh -o FallBackToRsh=no -o PasswordAuthentication=no -o StrictHostKeyChecking=yes -o NumberOfPasswordPrompts=0 dbserver01 \
/u00/app/oracle/product/18.3.0/dbhome_1//perl/bin/perl \
/u00/app/oracle/product/18.3.0/dbhome_1/OPatch/auto/database/bin/RemoteHostExecutor.pl \
-GRID_HOME=/u00/app/oracle/product/18.3.0/grid_1 \
-OBJECTLOC=/u00/app/oracle/product/18.3.0/dbhome_1//cfgtoollogs/opatchautodb/hostdata.obj \
-CRS_ACTION=get_all_homes -CLUSTERNODES=dbserver01,dbserver02,dbserver02 \
-JVM_HANDLER=oracle/dbsysmodel/driver/sdk/productdriver/remote/RemoteOperationHelper

Soooooo: dbserver02 starts an ssh session to dbserver01 and from there an additional session to dbserver01 (itself).
I don’t know why, but it is as it is… after I did a key exchange from dbserver01 (root) to dbserver01 (root), the patching worked fine.
At the moment I cannot remember ever having had to do a key exchange from the root user to the same host.

Did you get the same problem or do you know a better way to do that? Write me a comment!

The article Strange behavior when patching GI/ASM first appeared on Blog dbi services.

odacli create-database fails on ODA X7-2HA with java.lang.OutOfMemoryError

$
0
0

Today I was onsite at my customer and he told me: I can no longer create databases on my ODA X7-2HA, every time I try to use odacli create-database it fails, please help.

Ok, let’s check what happens. The customer shares the Oracle Homes and wants to create an 11.2.0.4 database:

[root@robucnoroda020 ~]# odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
157bfdf4-4430-4fb1-878e-2fb803ee54bd     OraDB11204_home1     11.2.0.4.180417 (27441052, 27338049)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
2aaba0e6-4482-4c9f-8d98-4a9d72fdb96e     OraDB12102_home1     12.1.0.2.180417 (27338020, 27338029)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
ad2b0d0a-11c1-4a15-b22a-f698496cd606     OraDB12201_home1     12.2.0.1.180417 (27464465, 27674384)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

[root@robucnoroda020 ~]#

Ok, we try to create an 11.2.0.4 database:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Created
 Created: December 4, 2018 11:11:26 PM EET
 Message:

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@robucnoroda020 log]#

The job was created successfully; let’s check what’s going on:

[root@robucnoroda020 log]# odacli describe-job -i 1959838e-34a6-419e-94da-08b931a039cc

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Failure
 Created: December 4, 2018 11:11:26 PM EET
 Message: DCS-10001:Internal error encountered: Failed to create the database FOO.

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Setting up ssh equivalance December 4, 2018 11:11:26 PM EET December 4, 2018 11:11:46 PM EET Success
Creating volume dclFOO December 4, 2018 11:11:47 PM EET December 4, 2018 11:12:04 PM EET Success
Creating volume datFOO December 4, 2018 11:12:04 PM EET December 4, 2018 11:12:21 PM EET Success
Creating ACFS filesystem for DATA December 4, 2018 11:12:21 PM EET December 4, 2018 11:12:34 PM EET Success
Database Service creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:00 PM EET Failure

[root@robucnoroda020 log]#

Indeed, the job has failed. Next we check the DCS log, where we can see the database creation failure:

2018-12-04 23:13:01,209 DEBUG [Database Service creation] [] c.o.d.c.t.r.TaskReportRecorder:  Compile task plan for ServiceJobReport
'{
  "updatedTime" : null,
  "jobId" : "1959838e-34a6-419e-94da-08b931a039cc",
  "status" : "Failure",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : 1543957886185,
  "resourceList" : [ ],
  "description" : "Database service creation with db name: FOO"
}'...

2018-12-04 23:13:01,219 DEBUG [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] call() completed.
2018-12-04 23:13:01,219 INFO [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] completed: Failure

Ok, the log doesn’t tell us what’s going wrong, but behind the scenes odacli create-database uses dbca in the requested Oracle Home. In the next step we check the dbca logs:

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] ll
total 276
-rw-r----- 1 oracle oinstall 275412 Dec  4 23:13 trace.log
oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204]

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] cat trace.log

......

[main] [ 2018-12-04 23:12:57.546 EET ] [InventoryUtil.getHomeName:111]  homeName = OraDB11204_home1
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at oracle.xml.parser.v2.XMLDocument.createNodeFromType(XMLDocument.java:4132)
	at oracle.xml.parser.v2.XMLDocument.createElement(XMLDocument.java:2801)
	at oracle.xml.parser.v2.DocumentBuilder.startElement(DocumentBuilder.java:488)
	at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1616)
	at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:456)
	at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:402)
	at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:244)
	at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:155)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:172)
	at oracle.sysman.oix.oixd.OixdDOMReader.getDocument(OixdDOMReader.java:42)
	at oracle.sysman.oic.oics.OicsCheckPointReader.buildCheckpoint(OicsCheckPointReader.java:75)
	at oracle.sysman.oic.oics.OicsCheckPointSession.<init>(OicsCheckPointSession.java:101)
	at oracle.sysman.oic.oics.OicsCheckPointIndexSession.<init>(OicsCheckPointIndexSession.java:123)
	at oracle.sysman.oic.oics.OicsCheckPointFactory.getIndexSession(OicsCheckPointFactory.java:69)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPointSession(CheckpointContext.java:256)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPoint(CheckpointContext.java:245)
	at oracle.sysman.assistants.dbca.backend.Host.cleanup(Host.java:3710)
	at oracle.sysman.assistants.dbca.backend.SilentHost.cleanup(SilentHost.java:585)
	at oracle.sysman.assistants.dbca.Dbca.execute(Dbca.java:145)
	at oracle.sysman.assistants.dbca.Dbca.main(Dbca.java:189)
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:44]  Cleanup started
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [OracleHome.cleanupDBOptionsIntance:1482]  DB Options dummy instance sid=null
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:49]  Cleanup ended

Ah, there is a Java OutOfMemoryError; we know this from older times: we have to change the heap space for dbca’s Java engine. Let’s change to the ORACLE_HOME and check dbca:

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx128m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS

In the dbca script, we see that the Java heap space is 128MB; we change it to 512MB (and yes, create a backup first):
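For example, a quick way to make that change (a sketch; keep the backup):

cd $ORACLE_HOME/bin
cp dbca dbca.orig
sed -i 's/-mx128m/-mx512m/' dbca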

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx512m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS
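For completeness, the failed FOO database was removed before retrying, roughly like this (a sketch; the ID comes from odacli list-databases):

odacli list-databases
odacli delete-database -i <database id>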

After deleting the failed database we try again to create our database FOO:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Created
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

Let’s check what’s going on:

[root@robucnoroda020 log]# odacli describe-job -i b289ed58-c29f-4ea8-8aa8-46a5af8ca529

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Success
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               December 4, 2018 11:45:05 PM EET    December 4, 2018 11:45:25 PM EET    Success
Creating volume dclFOO                   December 4, 2018 11:45:25 PM EET    December 4, 2018 11:45:42 PM EET    Success
Creating volume datFOO                   December 4, 2018 11:45:42 PM EET    December 4, 2018 11:45:59 PM EET    Success
Creating ACFS filesystem for DATA        December 4, 2018 11:45:59 PM EET    December 4, 2018 11:46:13 PM EET    Success
Database Service creation                December 4, 2018 11:46:13 PM EET    December 4, 2018 11:51:23 PM EET    Success
Database Creation                        December 4, 2018 11:46:13 PM EET    December 4, 2018 11:49:56 PM EET    Success
updating the Database version            December 4, 2018 11:51:21 PM EET    December 4, 2018 11:51:23 PM EET    Success
create Users tablespace                  December 4, 2018 11:51:23 PM EET    December 4, 2018 11:51:25 PM EET    Success

[root@robucnoroda020 log]#

Super, the database was successfully created. It seems that sometimes odacli create-database fails due to dbca memory usage. So, also on the ODA, check your dbca logs if your database creation wasn’t successful. If you see these heap space exceptions, don’t be afraid to change dbca’s heap memory allocation.

The article odacli create-database fails on ODA X7-2HA with java.lang.OutOfMemoryError first appeared on Blog dbi services.

OEM Cloud Control 13c – Agent Gold Image

$
0
0

Introduction

I am currently setting up a new “Base Image” virtual machine (Red Hat Enterprise Linux 7.6) which will be used to create 6 brand new Oracle database servers requested by a customer. Besides installing and configuring the OS, I also have to install 3 Oracle Homes and one Cloud Control Agent 13c.

An OMS13c server already exists including an Agent patched with the EM-AGENT Bundle Patch 13.2.0.0.181031 (28680866) :
oracle@oms13c:/home/oracle/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Tue Nov 13 17:32:48 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@oms13c:/home/oracle/ [agent13c]

However, when I wanted to deploy the CC13c Agent on my Master VM from the Cloud Control 13c web interface (Setup > Add Target > Add Targets Manually > Install Agent on Host), the Agent was successfully installed but… without the patch 28680866 :( . That means I would have to install the patch manually. Considering that the goal of creating a “Base Image” VM for this project is to quickly and easily deliver 6 database servers, having to install AND patch the Agent on each server is not very efficient and doesn’t fit with what I want.
So I had to find a better way to deploy a patched Agent, and the solution was to use an Agent Gold Image. It allowed me to do exactly what I wanted.

In this post I will show how I have set this up.

Deploying the Agent

Here is how we can deploy the Agent on the Base Image VM. From Cloud Control 13c, we click on Setup > Add Target > Add Targets Manually > Install Agent on Host :
(screenshot 1)

Then we insert the name of the target VM and select the appropriate platform…
(screenshot 2_2)

…and we specify the directory in which we want to install the Agent (Agent Home) :
(screenshot 3_2)

Everything is now ready to start the deployment. We can click on Next to see the review of the deployment configuration and on Deploy Agent to start.
Once the Agent is correctly deployed, the status should be like that :
(screenshot 4)

As explained above we can see that the Agent is not patched with the Bundle Patch of October 2018 :
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c] ./opatch lsinventory | grep 28680866
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c]

We must patch it manually…

Updating OPatch

Before installing a patch it is highly recommended to update the OPatch utility first. All versions of the tool are available here. The current one on my VM is 13.8.0.0.0 :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] opatch version
OPatch Version: 13.8.0.0.0

OPatch succeeded.

We must use the following command to update OPatch :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] unzip -q p6880880_139000_Generic.zip
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] cd 6880880/
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] $ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar -silent oracle_home=$ORACLE_HOME
Launcher log file is /tmp/OraInstall2018-11-23_02-58-11PM/launcher2018-11-23_02-58-11PM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz. Actual 2099.998 MHz Passed
Checking swap space: must be greater than 512 MB. Actual 4095 MB Passed
Checking if this platform requires a 64-bit JVM. Actual 64 Passed (64-bit not required)
Checking temp space: must be greater than 300 MB. Actual 27268 MB Passed
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2018-11-23_02-58-11PM
Installation Summary
[...] [...] Logs successfully copied to /u01/app/oraInventory/logs.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0

OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c]

You probably noticed that since OEM 13cR2 the way to update OPatch has changed : no more simple unzip, we have to run a Java installer instead (I don’t really understand why…).

Patching the Agent

As OPatch is now up to date we can proceed with the installation of the patch 28680866 :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] unzip -q p28680866_132000_Generic.zip
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] cd 28680866/28680866/
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Stopping agent ... stopped.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch apply
Oracle Interim Patch Installer version 13.9.3.3.0
Copyright (c) 2018, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/oracle/agent13c/agent_13.2.0.0.0
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/agent13c/agent_13.2.0.0.0/oraInst.loc
OPatch version : 13.9.3.3.0
OUI version : 13.9.1.0.0
Log file location : /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/agent13c"

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 28680866

Do you want to proceed? [y|n] y
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '28680866' to OH '/u01/app/oracle/agent13c/agent_13.2.0.0.0'

Patching component oracle.sysman.top.agent, 13.2.0.0.0...
Patch 28680866 successfully applied.
Log file location: /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log

OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Let’s restart the Agent and check that the patch has been applied :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl start agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Starting agent ................... started.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Perfect. The Agent is now patched but…

Installing the DB plugin

…what about its plugins ? We can see from the OMS13c server that the Agent doesn’t have the database plugin installed :
oracle@oms13c:/home/oracle/ [oms13c] emcli login -username=sysman
Enter password :

Login successful
oracle@oms13c:/home/oracle/ [oms13c] emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]
Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0

This is normal. As no Oracle database is currently running on the VM, the DB plugin was not installed automatically during the Agent deployment. We have to install it manually using the following command :
oracle@oms13c:/home/oracle/ [oms13c] emcli deploy_plugin_on_agent -agent_names="basevm.xx.yyyy.com:3872" -plugin=oracle.sysman.db
Agent side plug-in deployment is in progress
Use "emcli get_plugin_deployment_status -plugin=oracle.sysman.db" to track the plug-in deployment status.
oracle@oms13c:/home/oracle/ [oms13c]

To check the status of the plugin installation :
oracle@oms13c:/home/oracle/ [oms13c] emcli get_plugin_deployment_status -plugin=oracle.sysman.db
Plug-in Deployment/Undeployment Status

Destination : Management Agent - basevm.xx.yyyy.com:3872
Plug-in Name : Oracle Database
Version : 13.2.2.0.0
ID : oracle.sysman.db
Content : Plug-in
Action : Deployment
Status : Success
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment 11/23/18 4:06:29 PM CET 11/23/18 4:06:30 PM CET Success

Initialize 11/23/18 4:06:32 PM CET 11/23/18 4:06:43 PM CET Success

Validate Environment 11/23/18 4:06:44 PM CET 11/23/18 4:06:44 PM CET Success

Install software 11/23/18 4:06:44 PM CET 11/23/18 4:06:45 PM CET Success

Attach Oracle Home to Inventory 11/23/18 4:06:46 PM CET 11/23/18 4:07:04 PM CET Success

Configure plug-in on Management Agent 11/23/18 4:07:05 PM CET 11/23/18 4:07:28 PM CET Success

Update inventory 11/23/18 4:07:23 PM CET 11/23/18 4:07:28 PM CET Success

---------------------------------------- ------------------------- ------------------------- ----------
oracle@oms13c:/home/oracle/ [oms13c]

Quick check :
oracle@oms13c:/home/oracle/ emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]
Oracle Database oracle.sysman.db 13.2.2.0.0
Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0

oracle@oms13c:/home/oracle/ [oms13c]

The Agent is now exactly in the state in which we want to deploy it on all 6 servers (OPatch up to date, Agent patched, DB plugin installed).
It’s now time to move forward with the creation of an Agent Gold Image.

Creating the Agent Gold image

Going back to Cloud Control we can navigate to Setup > Manage Cloud Control > Gold Agent Images :
We click on Manage All Images

…then on Create and we give a name to our Image :

Once the Image is created, we must create its 1st version. We click on the Image name and then on Action > Create. From here we can select the Agent configured earlier on the VM. It will be the source of the Gold Image :

The creation of the Gold Agent Image and its 1st version can also be done from the command line with the following emcli command :
oracle@oms13c:/home/oracle/ [oms13c] emcli create_gold_agent_image -image_name="agent13c_gold_image" -version_name="gold_image_v1" -source_agent="basevm.xx.yyyy.com:3872"
A gold agent image create operation with name "GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042" has been submitted.
You can track the progress of this session using the command "emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042"

oracle@oms13c:/home/oracle/ [oms13c] emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042
Inputs
------
Gold Image Version Name : gold_image_v1
Gold Image Name : agent13c_gold_image
Source Agent : basevm.xx.yyyy.com:3872
Working Directory : %agentStateDir%/install

Status
-------
Step Name Status Error Cause Recommendation
Create Gold Agent Image IN_PROGRESS

oracle@oms13c:/home/oracle/

The Gold Agent Image is now created. We can start to deploy it on the other servers in the same way as the first deployment, but this time by selecting With Gold Image :

Once the Agent is deployed on the server we can see that OPatch is up to date :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0

OPatch succeeded.
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

The Agent Bundle Patch is installed :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

And the DB plugin is ready :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] ll
total 24
drwxr-xr-x. 31 oracle oinstall 4096 Dec 3 22:59 agent_13.2.0.0.0
-rw-r--r--. 1 oracle oinstall 209 Dec 3 22:32 agentimage.properties
drwxr-xr-x. 8 oracle oinstall 98 Dec 3 22:58 agent_inst
-rw-r--r--. 1 oracle oinstall 565 Dec 3 22:56 agentInstall.rsp
-rw-r--r--. 1 oracle oinstall 19 Dec 3 22:56 emctlcfg.rsp
-rw-r-----. 1 oracle oinstall 350 Dec 3 22:32 plugins.txt
-rw-r--r--. 1 oracle oinstall 470 Dec 3 22:57 plugins.txt.status
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] cat plugins.txt.status
oracle.sysman.oh|13.2.0.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.oh|13.2.0.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.xa|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.emas|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

Conclusion

Using a Gold Image drastically eases the management of OMS Agents in Oracle environments. In addition to allowing massive deployment on targets, it is also possible to manage several Gold Images with different patch levels. The hosts are simply subscribed to a specific Image and follow its life cycle (new patches, new plugins, and so on), as sketched below.
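
For illustration, subscribing an already-deployed Agent to the Image from the command line could look like this (a hedged sketch: subscribe_agents is one of the EM 13c gold agent image emcli verbs, but check the verb and its parameters against your release) :

emcli subscribe_agents -image_name="agent13c_gold_image" -agents="srvora01.xx.yyyy.com:3872"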

Think about it during your next Oracle monitoring project !

Cet article OEM Cloud Control 13c – Agent Gold Image est apparu en premier sur Blog dbi services.


Understand Oracle Text at a glance


What is Oracle Text?

Oracle Text provides indexing, word and theme searching, and viewing capabilities for text in query applications and document classification applications.

Oracle text activation for a user

create user ORATXT identified by oratxt ;
grant ctxapp to ORATXT ;
grant execute on ctxsys.ctx_cls to ORATXT ;
grant execute on ctxsys.ctx_ddl to ORATXT ;
grant execute on ctxsys.ctx_doc to ORATXT ;
grant execute on ctxsys.ctx_output to ORATXT ;
grant execute on ctxsys.ctx_query to ORATXT ;
grant execute on ctxsys.ctx_report to ORATXT ;
grant execute on ctxsys.ctx_thes to ORATXT ;
grant execute on ctxsys.ctx_ulexer to ORATXT ;

Oracle Text configuration and usage

To design an Oracle Text application, first determine the type of queries you expect to run. This enables you to choose the most suitable index for the task. There are 4 use cases with Oracle Text:

  1. Document Collection Applications
  • The collection is typically static with no significant change in content after the initial indexing run. Documents can be of any size and of different formats, such as HTML, PDF, or Microsoft Word. These documents are stored in a document table. Searching is enabled by first indexing the document collection.
  • Queries usually consist of words or phrases. Application users can specify logical combinations of words and phrases using operators such as OR and AND. Other query operations can be used to improve the search results, such as stemming, proximity searching, and wildcarding.
  • An important factor for this type of application is retrieving documents relevant to a query while retrieving as few non-relevant documents as possible. The most relevant documents must be ranked high in the result list.
  • The queries for this type of application are best served with a CONTEXT index on your document table. To query this index, the application uses the SQL CONTAINS operator in the WHERE clause of a SELECT statement (see the index-creation sketch after this list).
  • Example of searching
  • SQL> select score(1), doc_id, html_content from docs where contains(html_content, 'dbi', 1) > 0;
     
    SCORE(1) ID HTML_CONTENT
    ---------- ---------- -----------------------------------------------------------
    4 1 <HTML>dbi services provide various IT services</HTML>
    4 9 <HTML>You can become expert with dbi services</HTML>
    4 3 <HTML>The compaany dbi services is in Switzerland.</HTML>

  2. Catalog Information Applications
    • The stored catalog information consists of text information, such as book titles, and related structured information, such as price. The information is usually updated regularly to keep the online catalog up to date with the inventory.
    • Queries are usually a combination of a text component and a structured component. Results are almost always sorted by a structured component, such as date or price. Good response time is always an important factor with this type of query application.
    • Catalog applications are best served by a CTXCAT index. Query this index with the CATSEARCH operator in the WHERE clause of a SELECT statement.
    • Example of searching
    • SQL> select product, price from auction where catsearch(title, 'IT', 'order by price')> 0;
       
      PRODUCT PRICE
      ----------------------------------- ----------
      IT Advice 1 hour 499
      Course IT management 3999
      License IT monitoring 199
      IT desk 810

  3. Document Classification Applications
    • In a document classification application, an incoming stream or a set of documents is compared to a pre-defined set of rules. When a document matches one or more rules, the application performs some action. For example, assume there is an incoming stream of news articles. You can define a rule to represent the category of Finance. The rule is essentially one or more queries that select document about the subject of Finance. The rule might have the form ‘stocks or bonds or earnings’.
    • When a document arrives about a Wall Street earnings forecast and satisfies the rules for this category, the application takes an action, such as tagging the document as Finance or e-mailing one or more users.
    • To create a document classification application, create a table of rules and then create a CTXRULE index. To classify an incoming stream of text, use the MATCHES operator in the WHERE clause of a SELECT statement.
    • SQL> select category_id, category_name
      from categories
      where matches(blog_string, 'Dbi services add value to your IT infrastructure by providing experts in different technologies to cover all your needs.');
       
      QUERY_ID QUERY_STRING
      ---------- -----------------------------------
      9 Expertise
      2 Advertisement
      6 IT Services

  4. XML Search Applications
    • An XML search application performs searches over XML documents. A regular document search usually searches across a set of documents to return documents that satisfy a text predicate; an XML search often uses the structure of the XML document to restrict the search.
      Typically, only that part of the document that satisfies the search is returned. For example, instead of finding all purchase orders that contain the word electric, the user might need only purchase orders in which the comment field contains electric.

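    For reference, here is a minimal sketch of how the three index types mentioned above could be created. The table and column names are the hypothetical ones used in the examples, and the exact preferences/parameters should be checked against your release:

    -- CONTEXT index for document collections (queried with CONTAINS)
    create index docs_ctx_idx on docs(html_content) indextype is ctxsys.context;

    -- CTXCAT index for catalog searching (queried with CATSEARCH)
    begin
      ctx_ddl.create_index_set('auction_iset');
      ctx_ddl.add_index('auction_iset', 'price');
    end;
    /
    create index auction_cat_idx on auction(title) indextype is ctxsys.ctxcat parameters ('index set auction_iset');

    -- CTXRULE index for document classification (queried with MATCHES)
    create index categories_rule_idx on categories(query_string) indextype is ctxsys.ctxrule;
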
    In conclusion, there are various use cases for which Oracle Text will help you with text indexing. Before implementing, verify which one best suits your needs. Also, it may be interesting to compare it with an external text indexer like Solr, which is also able to index your database via a JDBC driver.

    I hope this helps, and please do not hesitate to contact us if you have any questions or need further information.

    Cet article Understand Oracle Text at a glance est apparu en premier sur Blog dbi services.

    ODA : Free up space on local filesystems


    Introduction

    When you work on an ODA you sometimes struggle with local filesystem free space. An ODA has terabytes of space on its data disks, but the local disks are still limited to a RAID-1 array of 2x 480GB disks, and only a few GB are dedicated to the / and /u01 filesystems. You do not need hundreds of GB on these filesystems, but you probably prefer to keep at least 20-30% of free space. And if you plan to patch your ODA, you will surely need more space to get through all the steps without reaching a dangerous level of filling. Here is how to reclaim free space on these filesystems.

    Use additional purgeLogs script

    The purgeLogs script is provided as an additional tool by Oracle. It should have been shipped with oakcli/odacli but it's not. Download it from MOS note 2081655.1. As this tool is not part of the official ODA tooling, please test it before using it in a production environment. It's quite easy to use: put the zip in a folder, unzip it, and run it as the root user. You can use this script with a single parameter that will clean up all the logfiles of all the Oracle products older than a given number of days:


    df -h /
    Filesystem Size Used Avail Use% Mounted on
    /dev/xvda2 55G 29G 23G 56% /
    df -h /u01/
    Filesystem Size Used Avail Use% Mounted on
    /dev/xvdb1 92G 43G 45G 50% /u01

    cd /tmp/
    unzip purgeLogs.zip
    du -hs /opt/oracle/oak/log/*
    11G /opt/oracle/oak/log/aprhodap02db0
    4.0K /opt/oracle/oak/log/fishwrap
    232K /opt/oracle/oak/log/test
    ./purgeLogs -days 1

    --------------------------------------------------------
    purgeLogs version: 1.43
    Author: Ruggero Citton
    RAC Pack, Cloud Innovation and Solution Engineering Team
    Copyright Oracle, Inc.
    --------------------------------------------------------

    2018-12-20 09:20:06: I adrci GI purge started
    2018-12-20 09:20:06: I adrci GI purging diagnostic destination diag/asm/+asm/+ASM1
    2018-12-20 09:20:06: I ... purging ALERT older than 1 days

    2018-12-20 09:20:47: S Purging completed succesfully!
    du -hs /opt/oracle/oak/log/*
    2.2G /opt/oracle/oak/log/aprhodap02db0
    4.0K /opt/oracle/oak/log/fishwrap
    28K /opt/oracle/oak/log/test


    df -h /
    Filesystem Size Used Avail Use% Mounted on
    /dev/xvda2 55G 18G 34G 35% /
    df -h /u01/
    Filesystem Size Used Avail Use% Mounted on
    /dev/xvdb1 92G 41G 47G 48% /u01

    In this example, you just freed up about 13GB. If your ODA is composed of 2 nodes, don’t forget to use the same script on the other node.
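
    To avoid doing this manually, you could also schedule the script with cron. A hedged sketch, assuming purgeLogs was unzipped under /opt/oracle/purgeLogs (adapt the path and retention to your needs):

    # root crontab entry: purge Oracle logfiles older than 7 days, every Sunday at 05:00
    0 5 * * 0 /opt/oracle/purgeLogs/purgeLogs -days 7 > /tmp/purgeLogs.out 2>&1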

    Truncate hardware log traces

    Hardware-related traces quietly fill up the filesystem if your ODA has been running for a long time. These traces are located under /opt/oracle/oak/log/`hostname`/adapters. I don't know if every model shows this kind of behaviour, but here is an example on an old X4-2 that has been running for 3 years now.

    cd /opt/oracle/oak/log/aprhodap02db0/adapters
    ls -lrth
    total 2.2G
    -rw-r--r-- 1 root root 50M Dec 20 09:26 ServerAdapter.log
    -rw-r--r-- 1 root root 102M Dec 20 09:27 ProcessorAdapter.log
    -rw-r--r-- 1 root root 794M Dec 20 09:28 MemoryAdapter.log
    -rw-r--r-- 1 root root 110M Dec 20 09:28 PowerSupplyAdapter.log
    -rw-r--r-- 1 root root 318M Dec 20 09:30 NetworkAdapter.log
    -rw-r--r-- 1 root root 794M Dec 20 09:30 CoolingAdapter.log
    head -n 3 CoolingAdapter.log
    [Mon Apr 27 18:02:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
    In CoolingAdapter.scr
    [Mon Apr 27 18:07:28 CEST 2015] Action script '/opt/oracle/oak/adapters/CoolingAdapter.scr' for resource [CoolingType] called for action discovery
    head -n 3 MemoryAdapter.log
    [Mon Apr 27 18:02:26 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery
    In MemoryAdapter.scr
    [Mon Apr 27 18:07:25 CEST 2015] Action script '/opt/oracle/oak/adapters/MemoryAdapter.scr' for resource [MemoryType] called for action discovery

    Let’s purge the oldest lines in these files:

    for a in `ls *.log` ; do tail -n 200 $a > tmpfile ; cat tmpfile > $a ; rm -f tmpfile; done
    ls -lrth
    total 176K
    -rw-r--r-- 1 root root 27K Dec 20 09:32 CoolingAdapter.log
    -rw-r--r-- 1 root root 27K Dec 20 09:32 ProcessorAdapter.log
    -rw-r--r-- 1 root root 30K Dec 20 09:32 PowerSupplyAdapter.log
    -rw-r--r-- 1 root root 29K Dec 20 09:32 NetworkAdapter.log
    -rw-r--r-- 1 root root 27K Dec 20 09:32 MemoryAdapter.log
    -rw-r--r-- 1 root root 27K Dec 20 09:32 ServerAdapter.log

    2GB of traces you’ll never use! Don’t forget the second node on a HA ODA.

    Purge old patches in the repository: simply because they are useless

    If you have successfully patched your ODA at least 2 times, you can remove the oldest patches from the ODA repository. As you may know, patches are quite big because they include a lot of things. So it's good practice to remove the oldest patches once you have successfully patched your ODA. To identify whether old patches are still on your ODA, you can dig into the folder /opt/oracle/oak/pkgrepos/orapkgs/. Purging old patches is easy:

    df -h / >> /tmp/dbi.txt
    oakcli manage cleanrepo --ver 12.1.2.6.0
    Deleting the following files...
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OAK/12.1.2.6.0/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95000N/SF04/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST95001N/SA03/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/WDC/WD500BLHXSUN/5G08/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H101860SFSUN600G/A770/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/Seagate/ST360057SSUN600G/0B25/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H106060SDSUN600G/A4C0/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109060SESUN600G/A720/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/HUS1560SCSUN600G/A820/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA6SUN200G/A29A/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/HSCAC2DA4SUN400G/A29A/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/ZeusIOPs-es-G3/E12B/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF2EUSUN73G/9440/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24P/0018/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE2-24C/0018/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/ORACLE/DE3-24C/0291/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4370-es-M2/3.0.16.22.f-es-r100119/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HITACHI/H109090SESUN900G/A720/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/STEC/Z16IZF4EUSUN200G/944A/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240AS60SUN4.0T/A2D2/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7240B520SUN4.0T/M554/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Disk/HGST/H7280A520SUN8.0T/P554/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Expander/SUN/T4-es-Storage/0342/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4170-es-M3/3.2.4.26.b-es-r101722/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X4-2/3.2.4.46.a-es-r101689/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/thirdpartypkgs/Firmware/Ilom/SUN/X5-2/3.2.4.52-es-r101649/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/HMP/2.3.4.0.1/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/IPMI/1.8.12.4/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/ASR/5.3.1/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/12.1.0.2.160119/Patches/21948354
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.4.160119/Patches/21948347
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.3.15/Patches/20760997
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/DB/11.2.0.2.12/Patches/17082367
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OEL/6.7/Patches/6.7.1
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/OVS/12.1.2.6.0/Base
    Deleting the files under /opt/oracle/oak/pkgrepos/orapkgs/GI/12.1.0.2.160119/Base
    df -h / >> /tmp/dbi.txt
    cat /tmp/dbi.txt
    Filesystem Size Used Avail Use% Mounted on
    /dev/xvda2 55G 28G 24G 54% /
    Filesystem Size Used Avail Use% Mounted on
    /dev/xvda2 55G 21G 31G 41% /

    Increase /u01 filesystem with remaining space

    This only concerns ODAs in bare metal. You may have noticed that not all the disk space is allocated to your ODA local filesystems. On modern ODAs, you have 2 M.2 SSDs of 480GB each in a RAID-1 configuration for the system, and only half of the space is allocated. As the appliance uses LVM logical volumes, you can very easily extend the size of your /u01 filesystem.

    This is an example on a X7-2M:


    vgdisplay
    --- Volume group ---
    VG Name VolGroupSys
    System ID
    Format lvm2
    Metadata Areas 1
    Metadata Sequence No 7
    VG Access read/write
    VG Status resizable
    MAX LV 0
    Cur LV 6
    Open LV 4
    Max PV 0
    Cur PV 1
    Act PV 1
    VG Size 446.00 GiB
    PE Size 32.00 MiB
    Total PE 14272
    Alloc PE / Size 7488 / 234.00 GiB
    Free PE / Size 6784 / 212.00 GiB
    VG UUID wQk7E2-7M6l-HpyM-c503-WEtn-BVez-zdv9kM


    lvdisplay
    --- Logical volume ---
    LV Path /dev/VolGroupSys/LogVolRoot
    LV Name LogVolRoot
    VG Name VolGroupSys
    LV UUID icIuHv-x9tt-v2fN-b8qK-Cfch-YfDA-xR7y3W
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2018-03-20 13:40:00 +0100
    LV Status available
    # open 1
    LV Size 30.00 GiB
    Current LE 960
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 249:0
    --- Logical volume ---
    LV Path /dev/VolGroupSys/LogVolU01
    LV Name LogVolU01
    VG Name VolGroupSys
    LV UUID ggYNkK-GfJ4-ShHm-d5eG-6cmu-VCdQ-hoYzL4
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2018-03-20 13:40:07 +0100
    LV Status available
    # open 1
    LV Size 100.00 GiB
    Current LE 3200
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 249:2
    --- Logical volume ---
    LV Path /dev/VolGroupSys/LogVolOpt
    LV Name LogVolOpt
    VG Name VolGroupSys
    LV UUID m8GvKZ-zgFF-2gXa-NSCG-Oy9l-vTYd-ALi6R1
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2018-03-20 13:40:30 +0100
    LV Status available
    # open 1
    LV Size 60.00 GiB
    Current LE 1920
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 249:3
    --- Logical volume ---
    LV Path /dev/VolGroupSys/LogVolSwap
    LV Name LogVolSwap
    VG Name VolGroupSys
    LV UUID 9KWiYw-Wwot-xCmQ-uzCW-mILq-rsPz-t2X2pr
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2018-03-20 13:40:44 +0100
    LV Status available
    # open 2
    LV Size 24.00 GiB
    Current LE 768
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 249:1
    --- Logical volume ---
    LV Path /dev/VolGroupSys/LogVolDATA
    LV Name LogVolDATA
    VG Name VolGroupSys
    LV UUID oTUQsd-wpYe-0tiA-WBFk-719z-9Cgd-ZjTmei
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
    LV Status available
    # open 0
    LV Size 10.00 GiB
    Current LE 320
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 249:4
    --- Logical volume ---
    LV Path /dev/VolGroupSys/LogVolRECO
    LV Name LogVolRECO
    VG Name VolGroupSys
    LV UUID mJ3yEO-g0mw-f6IH-6r01-r7Ic-t1Kt-1rf36j
    LV Write Access read/write
    LV Creation host, time localhost.localdomain, 2018-03-20 13:55:25 +0100
    LV Status available
    # open 0
    LV Size 10.00 GiB
    Current LE 320
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 249:5

    212GB are available. Let’s take 100GB for extending /u01:


    lvextend -L +100G /dev/mapper/VolGroupSys-LogVolU01
    Size of logical volume VolGroupSys/LogVolU01 changed from 100.00 GiB (3200 extents) to 200.00 GiB.
    Logical volume LogVolU01 successfully resized.

    The filesystem needs to be resized:

    resize2fs /dev/mapper/VolGroupSys-LogVolU01
    resize2fs 1.43-WIP (20-Jun-2013)
    Filesystem at /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01; on-line resizing required
    old_desc_blocks = 7, new_desc_blocks = 13
    Performing an on-line resize of /dev/mapper/VolGroupSys-LogVolU01 to 52428800 (4k) blocks.
    The filesystem on /dev/mapper/VolGroupSys-LogVolU01 is now 52428800 blocks long.

    Now /u01 is bigger:

    df -h /dev/mapper/VolGroupSys-LogVolU01
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroupSys-LogVolU01
    197G 77G 111G 41% /u01

    Conclusion

    Don’t hesitate to clean up your ODA before having to deal with space pressure.

    Cet article ODA : Free up space on local filesystems est apparu en premier sur Blog dbi services.

    Compile additional packages for Oracle VM Server


    I needed a special package on my OVM Server 3.4.6.
    The package is called fio and is needed to do some I/O performance tests.
    Unfortunately, OVM Server does not provide any packages for compiling software, and installing additional software on your OVM Server is also not supported.
    But there is a solution:

    Install a VM with Oracle VM Server 3.4.6 and add the official OVM SDK repositories:


    rm -f /etc/yum.repos.d/*
    echo '
    [ovm34]
    name=Oracle Linux $releasever Latest ($basearch)
    baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/34_latest/x86_64/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1
    [ol6_latest]
    name=Oracle Linux $releasever Latest ($basearch)
    baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1
    [ol6_addons]
    name=Oracle Linux $releasever Add ons ($basearch)
    baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/addons/$basearch/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1
    [ol6_UEKR4]
    name=Latest Unbreakable Enterprise Kernel Release 4 for Oracle Linux $releasever ($basearch)
    baseurl=http://yum.oracle.com/repo/OracleLinux/OL6/UEKR4/$basearch/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1
    ' > /etc/yum.repos.d/ovm-sdk.repo

    Now install the necessary packages and compile your software:

    On OVM 3.4 SDK VM
    yum install -y gcc make zlib-devel libaio libaio-devel
    wget https://codeload.github.com/axboe/fio/zip/master
    unzip master
    cd fio-master
    ./configure
    make

    Copy the compiled executable “fio” to your OVM Server or to an attached NFS share.
    Run the program and do whatever you need to do.
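
    For example, a simple random-read test with the freshly compiled fio could look like this (a hedged sketch; the target directory, block size and runtime are assumptions to adapt to your own tests):

    # 4k random reads for 60 seconds against a test directory, bypassing the page cache
    ./fio --name=randread --directory=/mnt/test --rw=randread --bs=4k --size=1G \
          --numjobs=4 --runtime=60 --time_based --ioengine=libaio --direct=1 --group_reporting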

    In my case I will run several different performance tests, but that is a story for another blog post.

    Reference: Oracle VM 3: How-to build an Oracle VM 3.3/3.4 SDK platform (Doc ID 2160955.1)

    Cet article Compile additional packages for Oracle VM Server est apparu en premier sur Blog dbi services.

    Oracle OpenWorld Europe: London 2019


    Oracle 19c, Oracle Cloud in numbers, and pre-built environments on Vagrant, Docker and the Oracle Linux Cloud Native Environment: those were some of the topics at the Open World Europe conference in London. To see what's behind them, with links to detailed sources, please read on.

    The conference was organised by Oracle; most of the speakers were Oracle employees and gave the audience a high-level introduction into the Oracle ecosphere. To get an overview of the (huge) portfolio Oracle offers to the market and to get in touch with Oracle employees, Open World conferences are the place to be. Many statements had already been given at the bigger sister conference in San Francisco in October 2018, so European customers were the target audience for the conference in London. Most of the information on upcoming features falls under the safe harbor statement, so one should be careful about taking decisions based on it.

    Oracle 19c

    The main target of release 19 is stability, so fewer new features were added than in previous releases. Many new features are very well described in a post by our former dbi colleague Franck Pachot.

    To get a deeper view into new features of every RDBMS release, a good source is to read the new features guides:

    livesql.oracle.com is now running on Oracle 19; two demos of SQL functions introduced in 19c are available:

    If you like to test Oracle 19c, you can participate in Oracle 19c beta program.

    Oracle Cloud in numbers

    • 29’000’000+ active users
    • 25’000 customers
    • 1’075 PB storage
    • 83’000 VMs at 27 data centers
    • 1’600 operators

    Reading these numbers, it's obvious that Oracle is gaining know-how in cloud environments and also better understands the requirements for building up private cloud environments at customers. It would be interesting to see what Oracle offers to small and medium-sized companies.

    Oracle Linux Cloud Native environment

    Building up a stack with tools for DevOps teams can be very challenging for organizations:

    • huge effort
    • hard to find expert resources
    • no enterprise support
    • complex architectural bets

    That's why Oracle built a stack on Oracle Linux that can be used for dev, test and production environments. Some features are:

    The stack can be run in the cloud as well as on premises using Oracle VirtualBox. Since release 6, VirtualBox is able to move VMs to the Oracle Cloud.

    Pre-built, ready-to-use environments on VirtualBox and Docker

    It's good practice to use Vagrant for fast VirtualBox provisioning. There are a couple of pre-built, so-called “Vagrant boxes” available from Oracle in their yum and GitHub repositories, as shown in the sketch below.

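    As a hedged example (the repository name and folder layout are as of early 2019 and may have changed since), spinning up a single-instance database VM could look like this:

    # the database installation zip must first be downloaded from OTN and placed in the folder
    git clone https://github.com/oracle/vagrant-boxes.git
    cd vagrant-boxes/OracleDatabase/18.3.0
    vagrant up
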
    If you want to test on pre-built Oracle database environments (single instance, Real Application Clusters, Data Guard), Tim Hall provides Vagrant boxes for various releases.

    If you are looking for pre-built Docker containers, have a look at the Oracle Container Registry.
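
    A hedged sketch of pulling a pre-built database image (you first have to accept the license terms for the repository on container-registry.oracle.com, and the exact image path and tag may differ):

    docker login container-registry.oracle.com
    docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
    docker run -d --name oracledb container-registry.oracle.com/database/enterprise:12.2.0.1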

    Oracle strategy

    Providing a pre-built stack follows a broader Oracle strategy: IT professionals should not deal with basic work (provisioning, patching, basic tuning), but should concentrate on other, more important tasks. That's why Oracle offers engineered systems and cloud services as a basis. What those more important subjects are was explained in a session about “the changing role of the DBA”.

    Architecture

    Security

    • No insecure passwords
    • Concept work: who should have access to what and in which context?
    • Analyse privileges with the help of the DBMS_PRIVILEGE_CAPTURE package (see the sketch after this list)
    • Data masking/redaction in test and dev environment

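    As a minimal privilege-analysis sketch with DBMS_PRIVILEGE_CAPTURE (the capture name is arbitrary, and note that privilege analysis may have licensing implications depending on your database release):

    -- create and enable a database-wide privilege capture
    begin
      dbms_privilege_capture.create_capture(name => 'full_db_capture', type => dbms_privilege_capture.g_database);
      dbms_privilege_capture.enable_capture('full_db_capture');
    end;
    /
    -- let the applications run for a representative period, then:
    begin
      dbms_privilege_capture.disable_capture('full_db_capture');
      dbms_privilege_capture.generate_result('full_db_capture');
    end;
    /
    select * from dba_used_privs where capture = 'FULL_DB_CAPTURE';
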
    Availability

    Understand SQL

    A personal highlight was the session from Chris R. Saxon, who is a specialist in SQL. His presentation style is not only very entertaining, but the content is also interesting and helps you get the most out of the Oracle Database engine. In his session, Chris explained why SQL queries sometimes do not use indexes even when they are present and do full table scans instead. This is not necessarily bad and mainly depends on the clustering factor and data cardinality. You can follow his presentation on YouTube:

    You can find more video content from Chris on his YouTube channel.

    If you are interested in learning SQL, another great source is the Oracle SQL blog.

    Cet article Oracle OpenWorld Europe: London 2019 est apparu en premier sur Blog dbi services.

    Recover a corrupted datafile in your DataGuard environment 11G/12C.


    In a Data Guard environment, a datafile needs to be recovered on the STANDBY site in two situations: when it is deleted or when it is corrupted.
    Below, I will explain how to recover a corrupted datafile, in order to repair the Standby database without having to restore the entire database.

    Initial situation :

    DGMGRL> connect /
    Connected to "PROD_SITE2"
    Connected as SYSDG.
    DGMGRL> show configuration;
    
    Configuration - CONFIG1
    
      Protection Mode: MaxPerformance
      Members:
      PROD_SITE2 - Primary database
        PROD_SITE1 - Physical standby database
    
    Fast-Start Failover: DISABLED
    
    Configuration Status:
    SUCCESS   (status updated 15 seconds ago)
    
    

    In this environment, we have a table called EMP with 100 rows, owned by the user TEST (default tablespace TEST).

    SQL> set linesize 220;
    SQL> select username,default_tablespace from dba_users where username='TEST';
    
    USERNAME     DEFAULT_TABLESPACE
    -------------------------------
    TEST         TEST
    
    SQL> select count(*) from test.emp;
    
      COUNT(*)
    ----------
           100
    

    By mistake, the datafile on the Standby site got corrupted.

    SQL> alter database open read only;
    alter database open read only
    *
    ORA-01578: ORACLE data block corrupted (file # 5, block # 3)
    ORA-01110: data file 5: '/u02/oradata/PROD/test.dbf'

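    Before repairing it, you can confirm the extent of the corruption on the standby, for example with (file 5 being the one reported by the error above):

    RMAN> validate datafile 5;
    SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;
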
    As the datafile is corrupted, redo apply is stopped until it is repaired. So the new inserts into the EMP table will not be applied on the standby:

    SQL> begin
      2  for i in 101..150 loop
      3  insert into test.emp values (i);
      4  end loop;
      5  END;
      6  /
    
    PL/SQL procedure successfully completed.
    
    SQL> COMMIT;
    
    Commit complete.
    
    SQL> select count(*) from test.emp;
    
      COUNT(*)
    ----------
           150
    
    SQL> select name,db_unique_name,database_role from v$database;
    
    NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
    --------- ------------------------------ ----------------
    PROD      PROD_SITE2                     PRIMARY

    To repair it, we will use the PRIMARY site to back up a standby controlfile and the related datafile.

    oracle@dbisrv03:/home/oracle/ [PROD] rman target /
    
    connected to target database: PROD (DBID=410572245)
    
    RMAN> backup current controlfile for standby format '/u02/backupctrl.ctl';
    
    
    RMAN> backup datafile 5 format '/u02/testbkp.dbf';
    
    Starting backup at 29-JAN-2019 10:59:37
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=276 device type=DISK

    We will transfer the backup pieces to the STANDBY server using scp:

     scp backupctrl.ctl oracle@dbisrv04:/u02/
     scp testbkp.dbf oracle@dbisrv04:/u02/
    

    Now, we will start the restore/recover on the STANDBY server :

    SQL> startup nomount
    ORACLE instance started.
    
    Total System Global Area 1895825408 bytes
    Fixed Size                  8622048 bytes
    Variable Size             570425376 bytes
    Database Buffers         1308622848 bytes
    Redo Buffers                8155136 bytes
    SQL> exit
    oracle@dbisrv04:/u02/oradata/PROD/ [PROD] rman target /
    
    
    Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: PROD (not mounted)
    
    RMAN> restore controlfile from '/u02/backupctrl.ctl'; 
    .........
    RMAN> alter database mount;
    
    
    RMAN> catalog start with '/u02/testbkp.dbf';
    
    searching for all files that match the pattern /u02/testbkp.dbf
    
    List of Files Unknown to the Database
    =====================================
    File Name: /u02/testbkp.dbf
    
    Do you really want to catalog the above files (enter YES or NO)? YES
    cataloging files...
    cataloging done
    
    List of Cataloged Files
    =======================
    File Name: /u02/testbkp.dbf
    
    
    
    
    RMAN> restore datafile 5;
    
    Starting restore at 29-JAN-2019 11:06:31
    using channel ORA_DISK_1
    
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00005 to /u02/oradata/PROD/test.dbf
    channel ORA_DISK_1: reading from backup piece /u02/testbkp.dbf
    channel ORA_DISK_1: piece handle=/u02/testbkp.dbf tag=TAG20190129T105938
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    Finished restore at 29-JAN-2019 11:06:33
    
    RMAN> exit
    

    Now, we will start applying the logs again to resync the STANDBY database.
    !!! Note: you need to stop the recovery process before opening the database read-only if you do not have an Active Data Guard license.

    SQL> recover managed standby database using current logfile disconnect from session;
    Media recovery complete.
    SQL> recover managed standby database cancel;
    SQL> alter database open read only;
    
    Database altered.
    
    SQL> select count(*) from test.emp;
    
      COUNT(*)
    ----------
           150
    

    Now, the last insert activity performed on the PRIMARY site is also visible on the STANDBY site.

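    To double-check that the standby is back in sync, you can also query the apply lag on the standby site (a quick sanity check; the values depend on your redo generation):

    SQL> select name, value, time_computed from v$dataguard_stats where name in ('apply lag','transport lag');
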
    In a 12c environment, with an existing pluggable database PDB1, things are easier thanks to the RESTORE/RECOVER FROM SERVICE feature :

    connect on the standby site
    rman target /
    restore tablespace PDB1:USERS from service PROD_PRIMARY;
    recover tablespace PDB1:USERS;
    
    

    Cet article Recover a corrupted datafile in your DataGuard environment 11G/12C. est apparu en premier sur Blog dbi services.
