
Event Sourcing: CQN is not a replacement for CDC


We are in an era where software architects want to stream the transactions out of the database and distribute them, as events, to multiple microservices. Don't ask why, but that's the trend: store eventually consistent (and thus temporarily inconsistent) copies of data in different physical components, rather than simply using logical views in the same database, where the data is stored, processed and protected with ACID properties. Because it was decided that this segregation, in CQRS (Command Query Responsibility Segregation), will be physical, across different systems, the need for logical replication and change data capture is rising again, under a new name: Event Sourcing.

When we want to replicate the changes without adding overhead to the database, the solution is Change Data Capture from the redo stream. The redo contains all the physical changes and, with dictionary information and a little supplemental logging, we can mine it to extract the logical changes. There are commercial products (Oracle GoldenGate, Attunity, Dbvisit replicate) and some open-source ones based on LogMiner (StreamSets, Debezium). LogMiner is available on all Oracle Database editions without any option. In Enterprise Edition, a more efficient solution was possible with Streams, but now you have to pay for GoldenGate to use Streams. Unfortunately, sometimes you pay for software updates only to see features removed and sold back in additional products.
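For reference, here is a minimal LogMiner sketch, run on the source database (the starting SCN is an assumption, and supplemental logging must be enabled first):

-- enable minimal supplemental logging so row changes can be mined
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- mine the redo from a given SCN, using the online dictionary
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    startScn => 1234567,  -- assumption: your starting point
    options  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
              + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/
-- the logical changes are exposed as SQL statements
SELECT operation, sql_redo FROM v$logmnr_contents WHERE table_name = 'DEMO';
EXEC DBMS_LOGMNR.END_LOGMNR;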

Oracle has another feature that can help to replicate changes: Database Change Notification, now known as Continuous Query Notification (CQN) or Object Change Notification (OCN). This feature was implemented to refresh caches: you have a query that loads the cache, and you want to be notified when some changes occur, so that you know you have to refresh the cache. In theory, then, this could be used to stream out the changes. However, CQN was not built for frequent changes, but rather for nearly static, slowly changing data. Sometimes we have to test by ourselves, though, and here is my test using CQN with a lot of changes on the underlying table, just to show how it increases the load on the database and slows down the changes.

I create a DEMO table with one million rows:

17:21:56 SQL> whenever sqlerror exit failure;
17:21:56 SQL> create table DEMO (ID constraint DEMO_ID primary key) as select rownum from xmltable('1 to 1000000');
 
Table DEMO created.
 

And a table to hold the notifications. As always when I want to start from an example, I take it from oracle-base:

17:21:58 SQL> -- from Tim Hall https://oracle-base.com/articles/10g/dbms_change_notification_10gR2
17:21:58 SQL> CREATE TABLE notifications (
2 id NUMBER,
3 message VARCHAR2(4000),
4 notification_date DATE
5 );
 
Table NOTIFICATIONS created.
 
17:21:58 SQL> CREATE SEQUENCE notifications_seq;
 
Sequence NOTIFICATIONS_SEQ created.

The callback function:

17:21:58 SQL> CREATE OR REPLACE PROCEDURE callback (ntfnds IN SYS.chnf$_desc) IS
2 l_regid NUMBER;
3 l_table_name VARCHAR2(60);
4 l_event_type NUMBER;
5 l_numtables NUMBER;
6 l_operation_type NUMBER;
7 l_numrows NUMBER;
8 l_row_id VARCHAR2(20);
9 l_operation VARCHAR2(20);
10 l_message VARCHAR2(4000) := NULL;
11 BEGIN
12 l_regid := ntfnds.registration_id;
13 l_numtables := ntfnds.numtables;
14 l_event_type := ntfnds.event_type;
15 IF l_event_type = DBMS_CHANGE_NOTIFICATION.EVENT_OBJCHANGE THEN
16 FOR i IN 1 .. l_numtables LOOP
17 l_table_name := ntfnds.table_desc_array(i).table_name;
18 l_operation_type := ntfnds.table_desc_array(i).Opflags;
19 IF (BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.ALL_ROWS) = 0) THEN
20 l_numrows := ntfnds.table_desc_array(i).numrows;
21 ELSE
22 l_numrows :=0; /* ROWID INFO NOT AVAILABLE */
23 END IF;
24 CASE
25 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.INSERTOP) != 0 THEN
26 l_operation := 'Records Inserted';
27 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.UPDATEOP) != 0 THEN
28 l_operation := 'Records Updated';
29 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.DELETEOP) != 0 THEN
30 l_operation := 'Records Deleted';
31 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.ALTEROP) != 0 THEN
32 l_operation := 'Table Altered';
33 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.DROPOP) != 0 THEN
34 l_operation := 'Table Dropped';
35 WHEN BITAND(l_operation_type, DBMS_CHANGE_NOTIFICATION.UNKNOWNOP) != 0 THEN
36 l_operation := 'Unknown Operation';
37 ELSE
38 l_operation := '?';
39 END CASE;
40 l_message := 'Table (' || l_table_name || ') - ' || l_operation || '. Rows=' || l_numrows;
41 INSERT INTO notifications (id, message, notification_date)
42 VALUES (notifications_seq.NEXTVAL, l_message, SYSDATE);
43 COMMIT;
44 END LOOP;
45 END IF;
46 END;
47 /
 
Procedure CALLBACK compiled
 
17:21:58 SQL> -- thanks Tim

and the CQN registration:

17:21:58 SQL> -- register on DEMO;
17:21:58 SQL>
17:21:58 SQL> DECLARE
2 reginfo CQ_NOTIFICATION$_REG_INFO;
3 v_cursor SYS_REFCURSOR;
4 regid NUMBER;
5 BEGIN
6 reginfo := cq_notification$_reg_info ( 'callback', DBMS_CHANGE_NOTIFICATION.QOS_ROWIDS, 0, 0, 0);
7 regid := sys.DBMS_CHANGE_NOTIFICATION.new_reg_start(reginfo);
8 OPEN v_cursor FOR
9 SELECT dbms_cq_notification.CQ_NOTIFICATION_QUERYID, demo.* from DEMO;
10 CLOSE v_cursor;
11 sys.DBMS_CHANGE_NOTIFICATION.reg_end;
12 END;
13 /
 
PL/SQL procedure successfully completed.

Now I delete the 1 million rows, one by one, committing after each delete:


17:21:58 SQL> exec dbms_workload_repository.create_snapshot;
 
PL/SQL procedure successfully completed.
 
17:22:02 SQL>
17:22:02 SQL> -- 1000000 deletes
17:22:02 SQL>
17:22:02 SQL> exec for i in 1..1000000 loop delete from DEMO WHERE id=i; commit; end loop;
 
PL/SQL procedure successfully completed.
 
17:39:23 SQL>
17:39:23 SQL> exec dbms_workload_repository.create_snapshot;

Here are the notifications captured:

17:39:41 SQL> select count(*) from notifications;
COUNT(*)
--------
942741
 
17:39:54 SQL> select * from notifications fetch first 10 rows only;
 
ID MESSAGE NOTIFICATION_DATE
--- ------------------------------------------- -----------------
135 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
138 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
140 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
142 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
145 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
147 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
149 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
152 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
154 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18
156 Table (DEMO.DEMO) - Records Deleted. Rows=1 09-MAY-18

The DML took a long time, and SQL Monitoring shows that 64% of the time was spent waiting on 'Wait for EMON to process ntfns', which is the notification process.

The execution of the delete itself (cdq5w65zk18r1 DELETE FROM DEMO WHERE ID=:B1) is only a small part of the database time, and we have additional load on the database.

The following activity is related to the Continuous Query Notification queuing of messages, the part that slows down the modifications, during the delete (from 17:22 to 17:38):

59p1yadp2g6mb call DBMS_AQADM_SYS.REGISTER_DRIVER ( )
gzf71xphapf1b select /*+ INDEX(TAB AQ$_AQ_SRVNTFN_TABLE_1_I) */ tab.rowid, tab.msgid, tab.corrid, tab.priority, tab.delay, tab.expiration ,tab.retry_count, tab.exception_qschema, tab.exception_queue, tab.chain_no, tab.local_order_no, tab.enq_time, tab.time_manager_info, tab.state, tab.enq_tid, tab.step_no, tab.sender_name, tab.sender_address, tab.sender_protocol, tab.dequeue_msgid, tab.user_prop, tab.user_data from "SYS"."AQ_SRVNTFN_TABLE_1" tab where q_name = :1 and (state = :2 ) order by q_name, state, enq_time, step_no, chain_no, local_order_no for update skip locked
61cgh171qq5m6 delete /*+ CACHE_CB("AQ_SRVNTFN_TABLE_1") */ from "SYS"."AQ_SRVNTFN_TABLE_1" where rowid = :1
ccrv58ajb7pxg begin callback(ntfnds => :1); end;
cdq5w65zk18r1 DELETE FROM DEMO WHERE ID=:B1

And at the end (17:38), when the modifications are committed, my callback function runs to process the messages. The main query is the insert from the callback function:
8z4m5tw9uh02d INSERT INTO NOTIFICATIONS (ID, MESSAGE, NOTIFICATION_DATE) VALUES (NOTIFICATIONS_SEQ.NEXTVAL, :B1 , SYSDATE)
The callback function may send the changes to another system rather than inserting them here, but then you can question that system's availability and, anyway, this will still have a high overhead in context switches and network roundtrips.
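For illustration only, here is a sketch of a callback pushing each message to a hypothetical HTTP endpoint (the URL is an assumption, and error handling is omitted); note that every notification still costs one round trip:

CREATE OR REPLACE PROCEDURE callback_http (ntfnds IN SYS.chnf$_desc) IS
  l_req  UTL_HTTP.req;
  l_resp UTL_HTTP.resp;
BEGIN
  -- hypothetical endpoint: one POST per notification = one network round trip per change
  l_req := UTL_HTTP.begin_request('http://events.example.com/cqn', 'POST', 'HTTP/1.1');
  UTL_HTTP.set_header(l_req, 'Content-Type', 'text/plain');
  UTL_HTTP.write_text(l_req, 'regid=' || ntfnds.registration_id
                          || ' event_type=' || ntfnds.event_type);
  l_resp := UTL_HTTP.get_response(l_req);
  UTL_HTTP.end_response(l_resp);
END;
/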

In summary, for 1 million rows deleted, here are the queries that have been executed 1 million times:

  Executions  Rows Processed  Rows per Exec  Elapsed Time (s)  %CPU   %IO  SQL Id
------------ --------------- -------------- ----------------- ----- ----- -------------
1,000,000 1,000,000 1.0 123.4 55.2 3.2 cdq5w65zk18r1 Module: java@VM188 (TNS V1-V3) DELETE FROM DEMO WHERE ID=:B1
999,753 999,753 1.0 261.5 88.6 .7 dw9yv631knnqd insert into "SYS"."AQ_SRVNTFN_TABLE_1" (q_name, msgid, corrid, priority, state, delay, expiration, time_manager_info, local_order_no, chain_no, enq_time, step_no, enq_uid, enq_tid, retry_count, exception_qschema, exception_queue, recipient_key, dequeue_msgid, user_data, sender_name, sender_address, sender_protoc
978,351 978,351 1.0 212.5 64.3 0 61cgh171qq5m6 Module: DBMS_SCHEDULER delete /*+ CACHE_CB("AQ_SRVNTFN_TABLE_1") */ from "SYS"."AQ_SRVNTFN_TABLE_1" where rowid = :1
978,248 942,657 1.0 971.6 20 .7 8z4m5tw9uh02d Module: DBMS_SCHEDULER INSERT INTO NOTIFICATIONS (ID, MESSAGE, NOTIFICATION_DATE) VALUES (NOTIFICATIONS_SEQ.NEXTVAL, :B1 , SYSDATE)
978,167 942,559 1.0 1,178.7 33.1 .5 ccrv58ajb7pxg Module: DBMS_SCHEDULER begin callback(ntfnds => :1); end;
977,984 977,809 1.0 73.9 96.5 0 brq600g3299zp Module: DBMS_SCHEDULER SELECT INSTANCE_NUMBER FROM SYS.V$INSTANCE
933,845 978,350 1.0 446.9 51.4 .7 gzf71xphapf1b Module: DBMS_SCHEDULER select /*+ INDEX(TAB AQ$_AQ_SRVNTFN_TABLE_1_I) */ tab.rowid, tab.msgid, tab.corrid, tab.priority, tab.delay, tab.expiration ,tab.retry_count, tab.exception_qschema, tab.exception_queue, tab.chain_no, tab.local_order_no, tab.enq_time, tab.time_manager_info, tab.state, tab.enq_tid, tab.step_no, tab.sender_name

This is a huge overhead. And all of this has generated 8 million redo entries.

In summary, just forget about CQN to stream changes. This feature is aimed at cache refresh for rarely changing data. What we call 'event sourcing' today has existed for a long time in the database, in the form of redo logs. When a user executes some DML, Oracle generates the redo records first, stores them, and applies them to update the current version of the table rows. And the redo log preserves the atomicity of transactions (the 'A' in ACID). So better to rely on it when the changes need to be propagated to other systems.

 



18c: some optimization about redo size


Some years ago, at the time of the 12.1 release, I published in the SOUG newsletter some tests showing the amount of redo generated by different operations on a 10000-row table. I had run them on 12.2 without seeing any difference, and now I have run them on 18.1.
I get the statistics from V$MYSTAT using a script that displays them as columns, with each value being the difference from the previous run. I've run the same tests as in the article, and most of the statistics were in the same ballpark.
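The script itself is on GitHub; the underlying idea is simply to snapshot V$MYSTAT joined to V$STATNAME before and after each statement, and subtract. A minimal sketch:

-- run before and after the tested statement, then subtract the values
SELECT sn.name, ms.value
FROM   v$mystat ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name IN ('db block changes', 'redo size',
                   'undo change vector size', 'redo entries');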

12.2

In 12.2 I get the same numbers. This surprised me because there is an optimization in 12.2 when updating a column to the same value, but this optimization may not occur in all situations. This reduction of redo generation has been analyzed by Maris Elsins with redo log dumps, and by Sayan Malakshinov on triggers. The origin of this optimization has then been exposed by Bryn Llewellyn. All info and links come from that Twitter conversation.

Here are the numbers I get in 18c, which are very similar to those from 12.1:


SQL> --- update with different values (the original ones were all lowercase)
SQL>
SQL> update TEST_TABLE set a=upper(a),b=upper(b),c=upper(c),d=upper(d);
10000 rows updated.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
SQL> set termout off verify off
SQL> /
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
190,485 32,226,368 13,288,940 95,258
 
1 row selected.
 
SQL>
SQL> --- update with same values
SQL>
SQL> update TEST_TABLE set a=upper(a),b=upper(b),c=upper(c),d=upper(d);
10000 rows updated.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
SQL> set termout off verify off
SQL> /
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
20,346 4,594,528 1,844,012 10,085
 
1 row selected.

The second update has set the 4 columns to the same values. For about 1MB of data (10000 rows with avg_row_len=100), we get 1.8MB of undo and 4.4MB of redo (which covers both the new data and the undo). I have 4 indexes there, but Oracle does not update index entries when the old and new values are the same.

The first update changes all values, and then, in addition to the changes in the table block, the indexes must be updated.

So, here, in my test, it seems that the 12.2 optimization referenced in the tweet above did not occur, because the redo generated for the table blocks is still full logging when the old and new values are the same. I can check from a redo log dump that I have the same value in undo and redo.
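A sketch of how such a dump can be produced (the logfile name is an assumption; the output goes to the session trace file):

-- dump the redo log file containing the update (written to the trace file)
ALTER SYSTEM DUMP LOGFILE '/u01/oradata/CDB1/redo01.log';

Here is the redo record for one row: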

REDO RECORD - Thread:1 RBA: 0x000008.00002444.0010 LEN: 0x01c8 VLD: 0x01 CON_UID: 1008806272
SCN: 0x00000000002cb8a4 SUBSCN:3047 07/03/2018 12:23:22
CHANGE #1 CON_ID:4 TYP:0 CLS:36 AFN:14 DBA:0x02405a20 OBJ:4294967295 SCN:0x00000000002cb8a4 SEQ:34 OP:5.1 ENC:0 RBL:0 FLG:0x0000
ktudb redo: siz: 184 spc: 2020 flg: 0x0022 seq: 0x0147 rec: 0x22
xid: 0x000a.009.000002bd
ktubu redo: slt: 9 rci: 33 opc: 11.1 objn: 77968 objd: 77978 tsn: 0
Undo type: Regular undo Undo type: Last buffer split: No
Tablespace Undo: No
0x00000000
KDO undo record:
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x02405a20.0147.21
KDO Op code: URP row dependencies Disabled
xtype: XAxtype KDO_KDOM2 flags: 0x00000080 bdba: 0x0040a994 hdba: 0x0040a7d8
itli: 3 ispac: 0 maxfr: 4863
tabn: 0 slot: 3(0x3) flag: 0x2c lock: 0 ckix: 0
ncol: 4 nnew: 4 size: 0
Vector content:
col 0: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
col 1: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
col 2: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
col 3: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
CHANGE #2 CON_ID:4 TYP:0 CLS:1 AFN:12 DBA:0x0040a994 OBJ:77978 SCN:0x00000000002cb8a4 SEQ:3 OP:11.5 ENC:0 RBL:0 FLG:0x0000
KTB Redo
op: 0x02 ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C uba: 0x02405a20.0147.22
KDO Op code: URP row dependencies Disabled
xtype: XAxtype KDO_KDOM2 flags: 0x00000080 bdba: 0x0040a994 hdba: 0x0040a7d8
itli: 3 ispac: 0 maxfr: 4863
tabn: 0 slot: 3(0x3) flag: 0x2c lock: 3 ckix: 0
ncol: 4 nnew: 4 size: 0
Vector content:
col 0: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
col 1: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
col 2: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58
col 3: [24] 31 30 30 30 30 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58 58

The redo record contains the old and new values even when they are the same. I hope that we will get more documentation about the 12.2 redo optimization so that it becomes predictable and tunable.

18.1

So, I didn't see the 12.2 optimization I expected here. However, it seems that we have a new one for deletes in 18c.

In 12.2, the delete of all 10000 rows, without any index, generates 2MB of undo and 3.5MB of redo:

SQL> --- delete all rows
SQL>
SQL> delete from TEST_TABLE;
10000 rows deleted.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
SQL> set termout off verify off
SQL> /
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
20,690 3,670,476 2,053,292 10,145
 
1 row selected.

The same delete in 18.1 generates only about 1MB of undo and redo:

SQL> --- delete all rows
SQL>
SQL> delete from TEST_TABLE;
10000 rows deleted.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
SQL> set termout off verify off
SQL> /
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
872 1,187,120 1,116,812 293
 
1 row selected.

So in 12.2 I had one block change and one redo record per row deleted. In 18.1 it seems that I have one redo record per block where all rows are deleted. Still in the same Twitter conversation, Tanel Poder had the idea of doing a sparse delete, leaving one row in each block:

SQL> delete from TEST_TABLE where dbms_rowid.rowid_row_number(rowid)!=42;
9849 rows deleted.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
SQL> set termout off verify off
SQL> /
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
20,431 3,660,204 2,102,584 10,011

Now, as there are no blocks that are fully emptied, I'm back to one redo entry per row deleted.

 


ODA fiber interface link detection issue


Configuring fiber interfaces during an ODA installation can sometimes be challenging: the configuration can depend on the kind of gigabit interface converter (GBIC) and switch being used. In this blog, I would like to show a real case faced during one of our customer deployments, and present a solution.

Problem

We were facing a link detection issue when using the GBIC converter delivered by Oracle, whereas using a Cisco GBIC converter made the link available.
The em2 interface was used with a Cisco GBIC.
The em3 interface was used with an Oracle GBIC.

[root@TEST1 tmp]# ethtool em2
Settings for em2:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: 10000Mb/s
        Duplex: Full
        Port: FIBRE
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: yes
        
[root@TEST1 tmp]# ethtool em3
Settings for em3:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: Unknown!
        Duplex: Unknown! (255)
        Port: FIBRE
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: no

 

Solution

Forcing the interface to 10 Gb and disabling auto-negotiation solved the link detection issue with the Oracle GBIC converter.
This was done by updating both ifcfg-em2 and ifcfg-em3 (stored in /etc/sysconfig/network-scripts) with:

ETHTOOL_OPTS="speed 10000 duplex full autoneg off"
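The same settings can also be applied on the fly with ethtool (interface name as in this example), which is handy for testing before persisting them in the ifcfg files:

[root@TEST1 ~]# ethtool -s em3 speed 10000 duplex full autoneg off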

 

Both links could then be detected successfully.

[root@TEST1 ~]# ethtool em2
Settings for em2:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: FIBRE
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: yes
        
[root@TEST1 ~]# ethtool em3
Settings for em3:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10000Mb/s
        Duplex: Full
        Port: FIBRE
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: yes

 

SFP28 Ethernet Controller firmware upgrade

The firmware version we were running was 20.06.04.06.

[root@TEST1 ~]# ethtool -i em3
driver: bnxt_en
version: 1.8.0
firmware-version: 20.6.141/1.8.1 pkg 20.06.04.06
bus-info: 0000:18:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: no
supports-priv-flags: no

In case of further trouble, a firmware upgrade to version 20.08.01.14 can help, as described in the following MOS note:
Using the onboard SFP28 ports on an ODA X7-2 server node (Doc ID 2373070.1)

 


18c: Order by in WITH clause is not preserved


For a previous post I ran on 18c a script of mine that gets the V$MYSTAT delta values between two queries. This script (new version available on GitHub) generates the queries to store some values and subtract them on the next execution. But I had to fix it for 18c, because I relied on an ORDER BY in a CTE, which is lost in 18c.
The idea was to get the statistic names in a Common Table Expression (CTE):

with stats as (
select rownum n,stat_id,name from (select stat_id,name from v$statname where name in (&names) order by stat_id)
)

and query it from different parts of the UNION ALL which generates the script:

select 'select ' from dual
union all
select ' '||decode(n,1,' ',',')||'"CUR'||stat_id||'" - '||'&'||'LAG'||stat_id||' "DIF'||stat_id||'"' from stats
union all
select ' '||',nvl("CUR'||stat_id||'",0) "CUR'||stat_id||'"' from stats
union all

The rownum in the WITH clause is used later to add the comma for all rows except the first one, using decode(n,1,' ',',').

But this relied on two assumptions:

  1. the WITH keeps the rows ordered
  2. the UNION ALL keeps the rows ordered

Of course, it would be better to add a number on each part and an ORDER BY at the end of the query; for better readability, I didn't, but the safe pattern is sketched below.
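A minimal sketch of that safe pattern (the name filter is simplified here so it runs as-is):

with stats as (
  select rownum n, stat_id, name
  from (select stat_id, name from v$statname where name like 'redo%' order by stat_id)
)
-- tag each branch with a part number, then order explicitly at the end
select txt
from (
  select 1 part, 0 n, 'select ' txt from dual
  union all
  select 2 part, n, ' '||decode(n,1,' ',',')||'"CUR'||stat_id||'"' txt from stats
)
order by part, n;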

However, the first assumption is wrong since 12.2, where the optimizer introduced In-Memory Cursor Duration Temp Tables. In the query above, the CTE is materialized because it is used multiple times, and this optimization keeps it in memory. But it seems that this structure does not keep the rows ordered.

Here is an example where I have a 4-row table:

SQL> create table DEMO as select rownum id from xmltable('1 to 4');
Table DEMO created.
 
SQL> select * from DEMO order by id;
 
ID
----------
1
2
3
4

When I put the same query in a WITH clause, with its ORDER BY, and query it from the two branches of a UNION ALL, the rows are not ordered anymore:

SQL> with cte as ( select * from DEMO order by id)
select rownum,id from cte
union all
select rownum,id from cte ;
 
ROWNUM ID
---------- ----------
1 4
2 3
3 2
4 1
1 4
2 3
3 2
4 1

The execution plan shows the materialization of the CTE result and mentions that it is an In-Memory Cursor Duration Temp Table (CURSOR DURATION MEMORY):

SQL> select * from dbms_xplan.display_cursor(format=>'+projection');
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 9uwc06ana6trn, child number 0
-------------------------------------
with cte as ( select * from DEMO order by id) select rownum,id from
cte union all select rownum,id from cte
 
Plan hash value: 4025392480
 
----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6640_23D74B | | | | |
| 3 | SORT ORDER BY | | 4 | 12 | 4 (25)| 00:00:01 |
| 4 | TABLE ACCESS FULL | DEMO | 4 | 12 | 3 (0)| 00:00:01 |
| 5 | UNION-ALL | | | | | |
| 6 | COUNT | | | | | |
| 7 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6640_23D74B | 4 | 12 | 2 (0)| 00:00:01 |
| 9 | COUNT | | | | | |
| 10 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6640_23D74B | 4 | 12 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------------
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 1 - STRDEF[22], STRDEF[22]
 2 - SYSDEF[4], SYSDEF[0], SYSDEF[1], SYSDEF[120], SYSDEF[0]
 3 - (#keys=1) "DEMO"."ID"[NUMBER,22]
 4 - (rowset=256) "DEMO"."ID"[NUMBER,22]
 5 - STRDEF[22], STRDEF[22]
 6 - "ID"[NUMBER,22], ROWNUM[8]
 7 - "ID"[NUMBER,22]
 8 - "C0"[NUMBER,22]
 9 - "ID"[NUMBER,22], ROWNUM[8]
10 - "ID"[NUMBER,22]
11 - "C0"[NUMBER,22]

/*+ inline */

If the CTE is not materialized, the rows are ordered, but the table is read twice:


SQL> with cte as ( select /*+ inline */ * from DEMO order by id)
2 select rownum,id from cte
3 union all
4 select rownum,id from cte ;
 
ROWNUM ID
---------- ----------
1 1
2 2
3 3
4 4
1 1
2 2
3 3
4 4
 
8 rows selected.
 
SQL>
SQL> select * from dbms_xplan.display_cursor(format=>'+projection');
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0dad0vwg3k0th, child number 0
-------------------------------------
with cte as ( select /*+ inline */ * from DEMO order by id) select
rownum,id from cte union all select rownum,id from cte
 
Plan hash value: 2913170750
 
------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 8 (100)| |
| 1 | UNION-ALL | | | | | |
| 2 | COUNT | | | | | |
| 3 | VIEW | | 4 | 52 | 4 (25)| 00:00:01 |
| 4 | SORT ORDER BY | | 4 | 12 | 4 (25)| 00:00:01 |
| 5 | TABLE ACCESS FULL| DEMO | 4 | 12 | 3 (0)| 00:00:01 |
| 6 | COUNT | | | | | |
| 7 | VIEW | | 4 | 52 | 4 (25)| 00:00:01 |
| 8 | SORT ORDER BY | | 4 | 12 | 4 (25)| 00:00:01 |
| 9 | TABLE ACCESS FULL| DEMO | 4 | 12 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
 1 - STRDEF[22], STRDEF[22]
 2 - "ID"[NUMBER,22], ROWNUM[8]
 3 - "ID"[NUMBER,22]
 4 - (#keys=1) "DEMO"."ID"[NUMBER,22]
 5 - (rowset=256) "DEMO"."ID"[NUMBER,22]
 6 - "ID"[NUMBER,22], ROWNUM[8]
 7 - "ID"[NUMBER,22]
 8 - (#keys=1) "DEMO"."ID"[NUMBER,22]
 9 - (rowset=256) "DEMO"."ID"[NUMBER,22]

"_in_memory_cdt"=off

If the CTE is materialized, but the new feature keeping it in memory is disabled, the rows are ordered (probably by chance, though: there is no guarantee):


SQL> alter session set "_in_memory_cdt"=off;
 
Session altered.
 
SQL>
SQL> with cte as ( select /*+ materialize */ * from DEMO order by id)
2 select rownum,id from cte
3 union all
4 select rownum,id from cte ;
 
ROWNUM ID
---------- ----------
1 1
2 2
3 3
4 4
1 1
2 2
3 3
4 4
 
8 rows selected.
 
SQL> select * from dbms_xplan.display_cursor(format=>'+projection');
 
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 9uwc06ana6trn, child number 1
-------------------------------------
with cte as ( select * from DEMO order by id) select rownum,id from
cte union all select rownum,id from cte
 
Plan hash value: 4025392480
 
--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 4 (100)| |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D664D_23D74B | | | | |
| 3 | SORT ORDER BY | | 4 | 12 | 4 (25)| 00:00:01 |
| 4 | TABLE ACCESS FULL | DEMO | 4 | 12 | 3 (0)| 00:00:01 |
| 5 | UNION-ALL | | | | | |
| 6 | COUNT | | | | | |
| 7 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | SYS_TEMP_0FD9D664D_23D74B | 4 | 12 | 2 (0)| 00:00:01 |
| 9 | COUNT | | | | | |
| 10 | VIEW | | 4 | 52 | 2 (0)| 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9D664D_23D74B | 4 | 12 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------
 
Column Projection Information (identified by operation id):
-----------------------------------------------------------
 
 1 - STRDEF[22], STRDEF[22]
 2 - SYSDEF[4], SYSDEF[0], SYSDEF[1], SYSDEF[120], SYSDEF[0]
 3 - (#keys=1) "DEMO"."ID"[NUMBER,22]
 4 - (rowset=256) "DEMO"."ID"[NUMBER,22]
 5 - STRDEF[22], STRDEF[22]
 6 - "ID"[NUMBER,22], ROWNUM[8]
 7 - "ID"[NUMBER,22]
 8 - "C0"[NUMBER,22]
 9 - "ID"[NUMBER,22], ROWNUM[8]
10 - "ID"[NUMBER,22]
11 - "C0"[NUMBER,22]

So what?

We should never rely on the preservation of row order except when it is explicitly documented (like rownum over a subquery with an order by). This In-Memory Cursor Duration Temporary table also works in parallel and on RAC, so we can understand why the rows are not read in the same order as they were stored. This is always a problem when we migrate: when something worked by chance in one version and not in the newer one, people usually incriminate the migration rather than recognize an old bug in the application, just because it used to work.
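As an example of the documented exception, this is the safe way to number rows in a given order, because rownum is assigned on the already ordered subquery:

-- documented pattern: rownum follows the inner order by
select rownum n, x.id
from (select * from DEMO order by id) x;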

 


My tmux scripts to script demos


When I did my first demo using tmux, I wrote the following blog post to explain it: https://blog.dbi-services.com/using-tmux-for-semi-interactive-demos/. Since then, I've done all my demos with this technique. My scripts and configuration files have evolved, and I have now uploaded them to GitHub to share them: https://github.com/FranckPachot/scripts/tree/master/tmux-demo

The README.md explains the prerequisites (a recent version of tmux), how to download the scripts quickly (I use this to get everything ready on a new cloud host), and how to run them. The 'demo' alias starts tmux attached to the same session (in order to see it on the laptop and on the beamer). The .vimrc defines the macros that run lines from the script file as tmux send-keys commands.
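The core mechanism is simple; a minimal sketch (the session name is an assumption):

# create a detached session, type and run a command in it, then attach to it
tmux new-session -d -s demo
tmux send-keys -t demo 'ls -l' Enter
tmux attach -t demo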

If you have questions, the best way is to ask them on Twitter.

 


Data Guard: always set db_create_file_dest on the standby


The file name convert parameters are not dynamic and require a restart of the instance; an enhancement request was filed in 2011. I mentioned recently on Twitter that this can be annoying with Active Data Guard, when a file is created on the primary server on a path that has no file name conversion. However, Ian Baugaard mentioned that there is a workaround for this specific case, because db_create_file_dest is dynamic.

I quickly created an 18c Data Guard configuration on the Oracle Cloud DBaaS to test it, and here it is.

In the primary database and the standby database, here are the datafiles:

RMAN> report schema;
 
Report of database schema for database with db_unique_name ORCL_01
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 830 SYSTEM YES /u02/app/oracle/oradata/ORCL/system01.dbf
3 510 SYSAUX NO /u02/app/oracle/oradata/ORCL/sysaux01.dbf
4 60 UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/undotbs01.dbf
5 340 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/ORCL/pdbseed/system01.dbf
6 620 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/ORCL/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/app/oracle/oradata/ORCL/users01.dbf
8 200 PDB$SEED:UNDOTBS1 NO /u02/app/oracle/oradata/ORCL/pdbseed/undotbs01.dbf
12 340 PDB1:SYSTEM YES /u02/app/oracle/oradata/ORCL/PDB1/system01.dbf
13 620 PDB1:SYSAUX NO /u02/app/oracle/oradata/ORCL/PDB1/sysaux01.dbf
14 200 PDB1:UNDOTBS1 YES /u02/app/oracle/oradata/ORCL/PDB1/undotbs01.dbf
15 50 PDB1:USERS NO /u02/app/oracle/oradata/ORCL/PDB1/PDB1_users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /u04/app/oracle/oradata/temp/temp01.dbf
2 62 PDB$SEED:TEMP 32767 /u04/app/oracle/oradata/temp/pdbseed_temp012018-02-08_13-49-27-256-PM.dbf
4 62 PDB1:TEMP 32767 /u04/app/oracle/oradata/temp/temp012018-02-08_13-49-27-256-PM.dbf

The properties of the standby database define no DbFileNameConvert, because the directory structure is supposed to be the same:

DGMGRL> show configuration
 
Configuration - fsc
 
Protection Mode: MaxPerformance
Members:
ORCL_01 - Primary database
ORCL_02 - Physical standby database
 
Fast-Start Failover: DISABLED
 
Configuration Status:
SUCCESS (status updated 45 seconds ago)
 
 
DGMGRL> show database verbose 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 15.00 KByte/s
Active Apply Rate: 532.00 KByte/s
Maximum Apply Rate: 535.00 KByte/s
Real Time Query: ON
Instance(s):
ORCL
 
Properties:
DGConnectIdentifier = 'ORCL_02'
...
DbFileNameConvert = ''
LogFileNameConvert = 'dummy, dummy'
...
 
Log file locations:
Alert log : /u01/app/oracle/diag/rdbms/orcl_02/ORCL/trace/alert_ORCL.log
Data Guard Broker log : /u01/app/oracle/diag/rdbms/orcl_02/ORCL/trace/drcORCL.log
 
Database Status:
SUCCESS

You can see that Oracle defines a dummy log file name convert. This is a good idea to avoid some RMAN duplicate issues.

On the standby server, I have no db_create_file_dest defined:

SQL> show parameter create%dest
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest string
db_create_online_log_dest_1 string .
db_create_online_log_dest_2 string
db_create_online_log_dest_3 string
db_create_online_log_dest_4 string
db_create_online_log_dest_5 string

Note that the Oracle Cloud DBaaS defines it; I reset it for the purpose of this demo.

New filesystem on Primary server only

I create a new filesystem on the primary server:

[root@DG-dg01 opc]# mkdir /DATA ; chown oracle:dba /DATA

I create a datafile on this new filesystem:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> create tablespace FRANCK datafile '/DATA/franck.dbf' size 100M;
Tablespace created.

The apply is stuck:

DGMGRL> show database 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 2 seconds ago)
Apply Lag: 11 seconds (computed 2 seconds ago)
Average Apply Rate: 16.00 KByte/s
Real Time Query: OFF
Instance(s):
ORCL
 
Database Error(s):
ORA-16766: Redo Apply is stopped
 
Database Status:
ERROR

The standby alert.log shows the error about the impossibility of creating the datafile:

2018-07-06T08:04:59.077730+00:00
Errors in file /u01/app/oracle/diag/rdbms/orcl_02/ORCL/trace/ORCL_pr00_29393.trc:
ORA-01274: cannot add data file that was originally created as '/DATA/franck.dbf'
2018-07-06T08:04:59.111881+00:00
Background Media Recovery process shutdown (ORCL)

db_file_name_convert

The first idea is to set db_file_name_convert. However, this requires an instance restart, which means downtime when you have sessions on the Active Data Guard standby:

DGMGRL> edit database 'ORCL_02' set property DbFileNameConvert='/DATA,/u02/app/oracle/oradata/ORCL';
Warning: ORA-16675: database instance restart required for property value modification to take effect
 
Property "dbfilenameconvert" updated
 
DGMGRL> show database 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 2 seconds ago)
Apply Lag: 3 minutes 32 seconds (computed 2 seconds ago)
Average Apply Rate: 16.00 KByte/s
Real Time Query: OFF
Instance(s):
ORCL
Warning: ORA-16675: database instance restart required for property value modification to
take effect
Warning: ORA-16714: the value of property DbFileNameConvert is inconsistent with the member setting
 
Database Error(s):
ORA-16766: Redo Apply is stopped
 
Database Warning(s):
ORA-16853: apply lag has exceeded specified threshold
 
Database Status:
ERROR

db_create_file_dest

The solution is to set db_create_file_dest which, on the standby, has a higher priority than the convert parameter:

SQL> alter system set db_create_file_dest='/u02/app/oracle/oradata';
System altered.

I restart the apply:

DGMGRL> edit database 'ORCL_02' set state=apply-on;
Succeeded.

No need to restart, and future datafiles will be created there. However, it is too late for this datafile, as it has already been created as UNNAMED in the controlfile:

ORA-01186: file 18 failed verification tests
ORA-01157: cannot identify/lock data file 18 - see DBWR trace file
ORA-01111: name for data file 18 is unknown - rename to correct file
ORA-01110: data file 18: '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018'

Manual CREATE DATAFILE

Then I must create it manually, but I cannot do that while standby_file_management=auto:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter database create datafile '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018' as '/u02/app/oracle/oradata/ORCL/franck.dbf';
alter database create datafile '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018' as '/u02/app/oracle/oradata/ORCL/franck.dbf'
*
ERROR at line 1:
ORA-01275: Operation CREATE DATAFILE is not allowed if standby file management
is automatic.

This can be changed dynamically:

DGMGRL> edit database 'ORCL_02' set property StandbyFileManagement=manual;
Property "standbyfilemanagement" updated

And then the creation is possible:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter database create datafile '/u01/app/oracle/product/18.0.0/dbhome_1/dbs/UNNAMED00018' as new;
Database altered.

You can see that, because I have defined db_create_file_dest, I don't need to name the datafile: it is created as OMF with the 'new' keyword.

Now I can restart the apply and it will resolve the gap:

DGMGRL> edit database 'ORCL_02' set state=apply-on;
Succeeded.
 
DGMGRL> show database 'ORCL_02';
 
Database - ORCL_02
 
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 3 seconds ago)
Apply Lag: 0 seconds (computed 3 seconds ago)
Average Apply Rate: 22.00 KByte/s
Real Time Query: ON
Instance(s):
ORCL
Warning: ORA-16675: database instance restart required for property value modification to take effect
Warning: ORA-16714: the value of property DbFileNameConvert is inconsistent with the member setting
 
Database Status:
WARNING

Do not forget to set standby_file_management back to auto:

DGMGRL> edit database 'ORCL_02' set property StandbyFileManagement=auto;
Property "standbyfilemanagement" updated

So, now that db_create_file_dest is set, new datafiles will be created automatically as OMF (Oracle Managed Files), without caring about file name conversion:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter tablespace FRANCK add datafile '/DATA/franck2.dbf' size 100M;
Tablespace altered.

This is confirmed from the standby alert.log:

(4):Datafile 19 added to flashback set
(4):Successfully added datafile 19 to media recovery
(4):Datafile #19: '/u02/app/oracle/oradata/ORCL_02/7050211FE75F26FAE05392781D0AADAA/datafile/o1_mf_franck_fmybw332_.dbf'

Conclusion

Always define db_create_file_dest on the standby database so that datafiles can always be created: better to have them in the wrong place than to stop the apply. And anyway, if you don't like the OMF names, and you are at least in 12c Enterprise Edition, you can change them later with an online datafile move:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> alter database move datafile '/u02/app/oracle/oradata/ORCL_02/7050211FE75F26FAE05392781D0AADAA/datafile/o1_mf_franck_fmybw332_.dbf' to '/u02/app/oracle/oradata/ORCL/franck2.dbf';
Database altered.

 


Oracle Database Vault: Realm in a Pluggable Database


Database Vault can also be used in a multitenant environment. In that case, we must register Oracle Database Vault in the root first, and then in the PDBs.
In this blog we will see how we can use realms to protect data in a 12.1 pluggable database.

In CDB$ROOT we have to create the common accounts that will be used as the Database Vault Owner (DV_OWNER role) and the Database Vault Account Manager (DV_ACCTMGR role). It is also recommended to create a backup account for each of them.

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_owner_root IDENTIFIED BY root CONTAINER = ALL;

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_acctmgr_root IDENTIFIED BY root CONTAINER = ALL;

SQL> grant select any dictionary to C##DBV_OWNER_ROOT;

Grant succeeded.

SQL> grant select any dictionary to C##DBV_ACCTMGR_ROOT;

Grant succeeded.

SQL>

The next step is to configure the Database Vault user accounts in CDB$ROOT:

BEGIN
 DVSYS.CONFIGURE_DV (
   dvowner_uname         => 'c##dbv_owner_root',
   dvacctmgr_uname       => 'c##dbv_acctmgr_root');
 END;
  6  /

PL/SQL procedure successfully completed.

SQL> @?/rdbms/admin/utlrp.sql

We can then enable Oracle Database Vault with user c##dbv_owner_root in CDB$ROOT:

SQL> conn c##dbv_owner_root/root
Connected.
SQL> show user
USER is "C##DBV_OWNER_ROOT"
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>

SQL> EXEC DBMS_MACADM.ENABLE_DV;

PL/SQL procedure successfully completed.

SQL>

After restarting CDB$ROOT, we can verify the status. These queries should return TRUE:

SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

VALUE
----------------------------------------------------------------
TRUE

SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Label Security';

VALUE
----------------------------------------------------------------
TRUE

SQL>  SELECT * FROM DVSYS.DBA_DV_STATUS;

NAME                STATUS
------------------- ----------------------------------------------------------------
DV_CONFIGURE_STATUS TRUE
DV_ENABLE_STATUS    TRUE

SQL>

At the PDB level, we must register the common users we created earlier. In this example I am using a pluggable database named PDB1.

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
PDB1

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_owner_root CONTAINER = CURRENT;

Grant succeeded.

SQL> GRANT CREATE SESSION, SET CONTAINER TO c##dbv_acctmgr_root CONTAINER = CURRENT;

Grant succeeded.

SQL>

SQL> grant select any dictionary to C##DBV_OWNER_ROOT;

Grant succeeded.

SQL> grant select any dictionary to C##DBV_ACCTMGR_ROOT;

Grant succeeded.

SQL>

As in CDB$ROOT, we also have to configure the Database Vault users in PDB1:

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
PDB1


SQL> BEGIN
 DVSYS.CONFIGURE_DV (
   dvowner_uname         => 'c##dbv_owner_root',
   dvacctmgr_uname       => 'c##dbv_acctmgr_root');
 END;
  6  /

PL/SQL procedure successfully completed.

SQL>

SQL> @?/rdbms/admin/utlrp.sql

And now let's enable Oracle Database Vault on PDB1:

SQL> show user
USER is "C##DBV_OWNER_ROOT"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> EXEC DBMS_MACADM.ENABLE_DV;

PL/SQL procedure successfully completed.

SQL>

With SYS, let's restart PDB1:

SQL> show user
USER is "SYS"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

As in CDB$ROOT, we can verify:

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Database Vault';

VALUE
----------------------------------------------------------------
TRUE

SQL> SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Oracle Label Security';

VALUE
----------------------------------------------------------------
TRUE

SQL> SELECT * FROM DVSYS.DBA_DV_STATUS;

NAME                STATUS
------------------- ----------------------------------------------------------------
DV_CONFIGURE_STATUS TRUE
DV_ENABLE_STATUS    TRUE

SQL>

Now that Database Vault is configured, we can create a realm to protect our data. In this example we are protecting the data of the SCOTT table EMP, using EM 12c to create the realm.
From the Database Home, select Security and then Database Vault.
In the Database Vault page, log in with any user having the appropriate privileges: the DV_OWNER or DV_ADMIN role, plus SELECT ANY DICTIONARY.
Before creating the realm, we can verify that user SYSTEM has access to table SCOTT.EMP:

SQL> show user
USER is "SYSTEM"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> select * from scott.emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>

Under the Administration tab, select Realms and click on Create.
Give a name and a description for the realm, then click Next.
On the Realm Secured Objects page, click Add, fill in the object details, click OK, then Next.
On the Realm Authorizations page, click Add, select the grantee, click OK, then Next.
On the Review page, click Finish.
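The same realm can also be created without EM, using the DBMS_MACADM API as the Database Vault owner. A minimal sketch (the realm name and description are my own; EDGE is the authorized user from this example):

BEGIN
  -- create the realm, enabled, auditing failed access attempts
  DBMS_MACADM.CREATE_REALM(
    realm_name    => 'SCOTT EMP Realm',
    description   => 'Protects SCOTT.EMP',
    enabled       => DBMS_MACUTL.G_YES,
    audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
  -- put the table under the realm's protection
  DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name   => 'SCOTT EMP Realm',
    object_owner => 'SCOTT',
    object_name  => 'EMP',
    object_type  => 'TABLE');
  -- authorize EDGE as a realm participant
  DBMS_MACADM.ADD_AUTH_TO_REALM(
    realm_name    => 'SCOTT EMP Realm',
    grantee       => 'EDGE',
    rule_set_name => NULL,
    auth_options  => DBMS_MACUTL.G_REALM_AUTH_PARTICIPANT);
END;
/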
And that's all. We can now verify that SYSTEM is no longer allowed to query SCOTT.EMP:

SQL> conn system/root@pdb1
Connected.
SQL> show user
USER is "SYSTEM"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> select * from scott.emp;
select * from scott.emp
                    *
ERROR at line 1:
ORA-01031: insufficient privileges
SQL>

And that user EDGE is still allowed to query SCOTT.EMP:

SQL> show user
USER is "EDGE"
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> select * from scott.emp;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

SQL>

 


Control file issues when duplicating with a non-patched Oracle version


Introduction:

RMAN has the ability to duplicate, or clone, a database from a backup or from an active database.
It is possible to create the duplicate database on a remote server with the same file structure,
on a remote server with a different file structure, or on the local server with a different file structure.
For some old, unpatched Oracle versions, such as those earlier than 11.2.0.4, the duplicate (from active
database or from backup) can be a real challenge even for DBAs with years of experience, due to the various bugs encountered.

The scenario below focuses on control file issues revealed by duplicating an Oracle 11.2.0.2 EE database from the active database.

<INFO> Make sure to use the nohup command-line utility, which lets a command, process, or shell script keep running after you disconnect.

Demonstration:

Step 1: Prepare your scripts:

vi script_duplicate.ksh

#!/bin/ksh
export ORACLE_HOME=$ORACLE_HOME
export PATH=$PATH1:$ORACLE_HOME/bin
rman target sys/pwd@TNS_NAME_TARGET auxiliary sys/pwd@TNS_NAME_AUXILIARY log=duplicate.log cmdfile=/home/oracle/rman_bkup.cmd

vi rman_bkup.cmd
run
{
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
allocate channel ch3 device type disk;
allocate auxiliary channel dh1 device type disk;
allocate auxiliary channel dh2 device type disk;
allocate auxiliary channel dh3 device type disk;
duplicate target database to <AUXILIARY_NAME> from active database nofilenamecheck;
release channel ch3;
release channel ch2;
release channel ch1;
}

and launch it like this: nohup ./script_duplicate.ksh &

Step 2: Check instance parameters.
Depending on the PSU level of your instance, the duplicate can fail with this error even before starting:

RMAN-00571: ===================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS
RMAN-00571: ===================================================
RMAN-03002: failure of Duplicate Db command at 11/02/2011 06:05:48
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-00600: internal error code, arguments: [kck_rls_check must use (11,0,0,0,0) or lower], [kdt.c], [9576], [11.2.0.2.0], [], [], [], [], [], [], [], []
RMAN-04017: startup error description: ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance

According to Oracle Support note 1064264.1:

1. Edit the pfile, add parameter:
_compression_compatibility= "11.2.0"

2. Restart the instance using the pfile
SQL> startup pfile='<fullpath name of pfile>'

3. Create the SPFILE again
SQL> create spfile from pfile;

4. Restart the instance with the SPFILE
SQL> shutdown immediate;
SQL> startup

and relaunch the previous command (Step 1).

Step 3: Control file issue when trying to open the database.
After transferring the datafiles, the duplicate crashes with these errors while trying to open the database:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 15/07/2018 17:39:30
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script



SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-19838: Cannot use this control file to open database

Basically, this is because of a known bug (Bug 11063122 in 11gR2):
the controlfile created during the duplicate stores the redo log file locations of the primary.
We need to recreate the control file, changing the locations of the redo log files and datafiles, and open the database with RESETLOGS.
In the controlfile recreation script, the database name is the source <db_name> and the directory names for the redo logs still point to the source database.

The workaround is :

1. Back up the control file of the cloned DB as trace:

sql> alter database backup controlfile to trace;

2. Open the trace file and extract the RESETLOGS section, modifying it like this:

CREATE CONTROLFILE REUSE DATABASE "<src_db_name>" RESETLOGS  ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 11680
LOGFILE
  GROUP 9  '<path_of_the_cloned_DB>/redo09.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 10 '<path_of_the_cloned_DB>/redo10.log'  SIZE 150M BLOCKSIZE 512,
  GROUP 11 '<path_of_the_cloned_DB>/redo11.log'  SIZE 150M BLOCKSIZE 512

DATAFILE
  '<path_of_the_cloned_DB>/system01.dbf',
  '<path_of_the_cloned_DB>/undotbs01.dbf',
  '<path_of_the_cloned_DB>/sysaux01.dbf',
  '<path_of_the_cloned_DB>/users01.dbf',
-------------more datafiles
CHARACTER SET EE8ISO8859P2;

Save it as trace_control.ctl.

3. SQL> alter system set db_name=<new db_name> scope=spfile;
4. SQL> startup nomount
5. SQL>@trace_control.ctl
      -- control file created and multiplexed to all the destinations mentioned in your spfile
6. SQL> alter database open resetlogs;

<INFO> If your source DB had activity during the duplicate process, you will need to apply some required archivelogs manually.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 15/07/2018 19:21:30
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/u01/oradata/DBName/system01.dbf'

Search the source database for the archivelogs with sequence# greater than or equal to 399747 and apply them manually on the target DB.

If those are not available, you need to take an incremental backup to roll the cloned database forward.
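As a sketch, that roll-forward can be done with an incremental backup from SCN (the SCN and paths here are only examples). On the source database:

RMAN> backup incremental from scn 47260162325 database format '/tmp/fwd_%U' tag 'FORSTANDBY';

Transfer the backup pieces, then catalog and apply them on the cloned database (mounted):

RMAN> catalog start with '/tmp/fwd_';
RMAN> recover database noredo;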

7. SQL> recover database using backup controlfile;

ORA-00279: change 47260162325 generated at  15/07/2018 19:27:40 needed for thread 1
ORA-00289: suggestion : <path>o1_mf_1_399747_%u_.arc
ORA-00280: change 47260162325 for thread 1 is in sequence #399747

Once the required archivelogs files have been applied , try again to open your database:

RMAN> alter database open resetlogs;

database opened

RMAN> exit

Conclusion:
If you are the kind of Oracle administrator who has the power to approve or deny, you must know how dangerous it is to run your applications on unpatched Oracle databases.
The data within your organization is better protected if you take advantage of the patches issued by Oracle and run your production databases on supported Oracle versions only.

 



New features and known issues with RMAN tool on Oracle database 12.1.0.2


Oracle Database 12c has new enhancements and additions in Recovery Manager (RMAN).
The RMAN tool continues to enhance and extend the reliability, efficiency, and availability of Oracle Database Backup and Recovery.
Below, I will mention a couple of new features of the RMAN DUPLICATE command, and also how to avoid issues that can happen with the creation of temporary files.

FEATURES:

<INFO>Using BACKUPSET clause :

In previous releases, active duplicates were performed using implicit image copy backups, transferred directly to the destination server. From 12.1, it is also possible to perform active duplicates using backup sets by including the USING BACKUPSET clause.
Compared to image copy backups, the unused block compression associated with a backup set reduces the amount of data pulled across the network.
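As an illustration, a minimal active duplicate with backup sets could look like this (connection and database names are placeholders):

RMAN> duplicate target database to PROD from active database using backupset nofilenamecheck;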

<INFO>Using SECTION SIZE clause:

The SECTION SIZE clause splits each large datafile into sections of the given size, so that several channels (up to the configured parallel degree) can restore the same datafile in parallel. For example, a roughly 3.5 GB datafile restored with SECTION SIZE 500M gives the 7 sections seen as "restoring section 2 of 7" in the output below.
In my case I have configured the parallel degree to 6:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 6 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

Starting restore at 19-JUL-2018 14:11:06
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4
using channel ORA_AUX_DISK_5
using channel ORA_AUX_DISK_6
channel ORA_AUX_DISK_3: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_3: restoring section 2 of 7

------
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using network backup set from service PROD2_SITE1
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00005 to /u02/oradata/PROD/data.dbf
channel ORA_AUX_DISK_2: restoring section 7 of 7

 

<INFO>The two clauses USING BACKUPSET and SECTION SIZE cannot be used without FROM ACTIVE DATABASE, and they can be integrated successfully into the standby creation:

oracle@dbisrv01:/home/oracle/ [PROD2] rman target sys/password@PROD2_SITE1 auxiliary sys/password@PROD2_SITE2

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Jul 22 13:17:14 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD2 (DBID=1633730013)
connected to auxiliary database: PROD2 (not mounted)

RMAN> duplicate target database for standby from active database using backupset section size 500m nofilenamecheck;
Starting Duplicate Db at 22-JUL-2018 13:17:21
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=249 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=13 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=250 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=14 device type=DISK
allocated channel: ORA_AUX_DISK_5
channel ORA_AUX_DISK_5: SID=251 device type=DISK
allocated channel: ORA_AUX_DISK_6
channel ORA_AUX_DISK_6: SID=15 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwPROD2' auxiliary format
 '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwPROD2'   ;
}
executing Memory Script
----------------------------
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=982156757 file name=/u02/oradata/PROD2/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=982156757 file name=/u02/oradata/PROD2/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=982156757 file name=/u02/oradata/PROD2/undotbs01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=982156757 file name=/u02/oradata/PROD2/data.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=982156757 file name=/u02/oradata/PROD2/users01.dbf
Finished Duplicate Db at 22-JUL-2018 13:19:21

RMAN> exit

<INFO>Check the status of the PRIMARY & STANDBY database

SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
PROD2     PROD2_SITE1                    PRIMARY


SQL> select name,db_unique_name,database_role from v$database;

NAME      DB_UNIQUE_NAME                 DATABASE_ROLE
--------- ------------------------------ ----------------
PROD2     PROD2_SITE2                    PHYSICAL STANDBY

ISSUES :
<WARN>When duplicating on 12cR1, the creation of the temp files is not handled correctly.
Whether duplicating from active database or from backup with Oracle 12cR1, you can run into some issues with the temporary files.

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

----------------------------------------
contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 19-JUL-2018 14:26:09

<INFO>Querying the v$tempfile will not reveal any error

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

<INFO>But when querying dba_temp_files, or running transactions against your database that need the temporary tablespace, you will get:

SQL> select * from dba_temp_files;
select * from dba_temp_files
              *
ERROR at line 1:
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/u02/oradata/<AUXILIARY>/temp01.dbf'

Solution 1: Drop and recreate your temporary tablespace(s) manually. This could be tedious if you have several of them. OR
Solution 2: Drop the temp files of your <to_be_cloned_DB> on the OS side before launching the duplicate. For more details, you can consult MOS note 2250889.1.

SQL> col TABLESPACE_NAME format a50;
SQL> col file_name format a50;
SQL> select file_name,TABLESPACE_NAME from dba_temp_files;

FILE_NAME                                          TABLESPACE_NAME
-------------------------------------------------- --------------------------------------------------
/u02/oradata/<AUXILIARY>/temp01.dbf                       TEMP

SQL> startup nomount;

rm -rf /u02/oradata/<AUXILIARY>/temp01.dbf

 

oracle@dbisrv02:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/ [PROD] rman target sys/pwd00@<TNS_NAME_TARGET> auxiliary sys/pwd00@<TNS_NAME_AUXILIARY> 
Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 19 13:31:20 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: <TNS_NAME_TARGET> (DBID=xxxxxxxxxx)
connected to auxiliary database: <TNS_NAME_AUXILIARY> (not mounted)

duplicate target database to <TNS_NAME_AUXILIARY> from active database using backupset section size 500m;

At the end of the duplicate, you should be able to use the database without any further action on the temp files:

SQL> select file#,name,status from v$tempfile;

     FILE# NAME                           STATUS
---------- ------------------------------ -------
         1 /u02/oradata/<AUXILIARY>/temp01.dbf   ONLINE

Additionally, if your auxiliary DB is managed by Oracle Grid Infrastructure, you need to remove it from the Grid during these actions and add it back once you have finished (a sketch of re-adding it follows the example below).

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;
alter system set db_unique_name='PROD_SITE2' scope=spfile
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

--remove from GRID
[grid@dbisrv02 ~]$ srvctl stop database -d PROD
[grid@dbisrv02 ~]$ srvctl remove database -d PROD
Remove the database PROD? (y/[n]) Y

SQL> startup
ORACLE instance started.

Total System Global Area  788529152 bytes
Fixed Size                  2929352 bytes
Variable Size             314576184 bytes
Database Buffers          465567744 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

SQL> alter system set db_unique_name='PROD_SITE2' scope=spfile;

System altered.
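
Once you have finished, a sketch of re-registering the database in Grid Infrastructure (the Oracle home path is an example; -d takes the db_unique_name and -n the db_name):

[grid@dbisrv02 ~]$ srvctl add database -d PROD_SITE2 -n PROD -o /u01/app/oracle/product/12.1.0/dbhome_1
[grid@dbisrv02 ~]$ srvctl start database -d PROD_SITE2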
 


Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries


Good news: the latest patchset for Oracle 12cR2 (which is not named a patchset anymore, but is actually called release 18c and numbered 18.0.0.0.0) is available for download on OTN. It is great because the OTN download does not require access to Support and Software Updates. It is available to anybody under the Free Developer License Terms (basically development, testing, prototyping, and demonstrating for an application that is not in production and for non-commercial use). We all complained about the ‘Cloud First’ strategy because we were eager to download the latest version. But the positive aspect of it is that we now have on OTN a release that has been stabilized after a few release updates. In the past, only the first version of the latest release was available there. Now we have one with many bugs fixed.

Of course, I didn’t wait, and I tested 18c as soon as it was available on the Oracle Cloud, thanks to the ACE Director program that provided me with some Cloud Credits. In this post, I’ll update my Cloud database to run it with the on-premises binary. Because that’s the big strength of Oracle: we can run the same software, 100% compatible, in the Cloud and on our own servers. There are some limitations in the features available, but technically it is the same software.

Oracle Cloud First

Here is my Cloud version of Oracle 18c, installed in February (18.1) and updated in April (18.2) and July (18.3):

SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 
SQL> select banner from v$version;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> select banner_full from v$version;
 
BANNER_FULL
----------------------------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> select banner_legacy from v$version;
 
BANNER_LEGACY
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

LINUX.X64_180000_db_home.zip

I have installed the on-premises 18c available on OTN. The good things with the new releases are:

  • No need to extract installer files. Just unzip the Oracle Home and link the executable (see the sketch after this list)
  • This Oracle Home image already includes the latest Release Updates
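
A minimal sketch of this image-based installation (paths and zip location are examples):

mkdir -p /u01/app/oracle/product/18.0.0/dbhome_1
cd /u01/app/oracle/product/18.0.0/dbhome_1
unzip -q /tmp/LINUX.X64_180000_db_home.zip
./runInstaller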


SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 

We have 4 updates from July here for the following components:

  • The Database (28090523)
  • The Java in the Oracle Home, aka JDK (27908644)
  • The Java in the database, aka OJVM (27923415)
  • The clusterware component for the database to match the CRS, aka OCW (28090553)

So, now we have an image of the Oracle Home which already contains all the latest updates… except one:

$ cat $ORACLE_HOME/sqldeveloper/sqldeveloper/bin/version.properties
 
COMPANY=Oracle
PRODUCT=SQL Developer
VERSION=17.30003410937f
VER=17.3.2
VER_FULL=17.3.2.341.0937
BUILD_LABEL=341.0937
BUILD_NUM=341.0937
EDITION=

Unfortunately, that’s an old version of SQL Developer here, and there is no SQLcl. Just download these additionally and unzip them into the Oracle Home.
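
For example, for SQLcl (the zip name is an assumption; take the latest one from OTN):

cd $ORACLE_HOME
unzip -o /tmp/sqlcl-18.2.0.zip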

DataPatch

So, what happens when I open the database that I created on 18.1 and patched with the 18.2 and 18.3 RUs on the Oracle Cloud? There are two updates for the database (DBRU and OJVM). The DBRU is already there, so DataPatch only has to apply the OJVM:

[oracle@VM183x dbhome_1]$ $ORACLE_HOME/OPatch/datapatch
SQL Patching tool version 18.0.0.0.0 Production on Tue Jul 24 10:57:55 2018
Copyright (c) 2012, 2018, Oracle. All rights reserved.
 
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_11104_2018_07_24_10_57_55/sqlpatch_invocation.log
 
Connecting to database...OK
Gathering database info...done
 
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
 
Bootstrapping registry and package to current versions...done
Determining current state...done
 
Current state of interim SQL patches:
Interim patch 27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)):
Binary registry: Installed
PDB CDB$ROOT: Not installed
PDB PDB$SEED: Not installed
PDB PDB1: Not installed
 
Current state of release update SQL patches:
Binary registry:
18.3.0.0.0 Release_Update 1806280943: Installed
PDB CDB$ROOT:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.20.321353 AM
PDB PDB$SEED:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.21.802495 AM
PDB PDB1:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.23.230513 AM
 
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
No interim patches need to be rolled back
No release update patches need to be installed
The following interim patches will be applied:
27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415))
 
Installing patches...
Patch installation complete. Total patches installed: 3
 
Validating logfiles...done
Patch 27923415 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log (no errors)
Patch 27923415 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDBSEED_2018Jul24_10_58_56.log (no errors)
Patch 27923415 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDB1_2018Jul24_10_58_56.log (no errors)
SQL Patching tool complete on Tue Jul 24 10:59:21 2018

Now here is the history of patches:

SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
Patch Id : 27923415
Action : APPLY
Action Time : 24-JUL-2018 10:59:19
Description : OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

This is all good. Despite the different release schedules, the level of software is exactly the same. And we can start on-premises on a release with low regression risk (18c like a patchset) but many fixes (several release updates). For the moment only the Linux port is there. The other platforms should come this summer.

 


Logical standby: No view for data guard status on session level


On a logical standby you have to switch off Data Guard, at least at session level, to make modifications on objects which are maintained by Data Guard.

This is done with the command:


alter session disable guard;

Data Guard can be re-enabled for the session by issuing:


alter session enable guard;

Executing these commands requires at least the “alter database” privilege, which is often not liked by IT security staff.

The command “alter session disable guard” is also used in login triggers of logical standby databases, for power users who have to modify objects maintained by Data Guard. The problem is that no view can be queried to get the Data Guard status of the session.

A workaround is to write the result of the login trigger into a log table which can be queried by the user.
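
As an illustration, here is a minimal sketch of such a login trigger and log table (the table name, user list and messages are made up; the trigger owner must hold the ALTER DATABASE privilege, so the power users themselves do not need it):

CREATE TABLE guard_log (
  username    VARCHAR2(128),
  logon_time  DATE,
  message     VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER trg_disable_guard
AFTER LOGON ON DATABASE
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION; -- allows committing the log insert from within a trigger
BEGIN
  IF USER IN ('POWERUSER1','POWERUSER2') THEN
    -- ALTER SESSION is a session-level setting: it persists for the whole session
    EXECUTE IMMEDIATE 'alter session disable guard';
    INSERT INTO guard_log VALUES (USER, SYSDATE, 'guard disabled at logon');
    COMMIT;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    INSERT INTO guard_log VALUES (USER, SYSDATE, 'disable guard failed: '||SQLERRM);
    COMMIT;
END;
/

The power user can then query guard_log to see whether the guard was disabled for his session.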

 


Patching ODA lite to 12.2.1.4.0


Here is how to apply the latest patch for your ODA. As usual, the Oracle documentation is not 100% accurate. I applied this patch on 2 ODA X7-2M previously deployed in 12.2.1.2.0; no intermediate patch was needed.

1) Download the patch

The patch number is 28216794. This patch will update the following components:  dcs (odacli), operating system, bios/firmwares, ilom, GI, dbhomes and databases.
Download and copy the patch to a temporary folder on the server, for example /opt/patch. You’ll have to be root to apply the patch.

2) Check the actual versions and free space on disk

First check the current version:

odacli describe-component

System Version
---------------
12.2.1.2.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.2.0            up-to-date
GI                                        12.2.0.1.171017       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.171017       up-to-date
}
DCSAGENT                                  12.2.1.2.0            up-to-date
ILOM                                      4.0.0.28.r121827      4.0.0.22.r120818
BIOS                                      41017600              41017100
OS                                        6.8                   up-to-date

The “available version” stands for “what’s available in the ODA repository?”. As this ODA has never been patched, it seems that some of the components are already in newer versions than those provided in the initial deployment package. Registering a new patch will refresh this repository.

Check that folders /, /u01 and /opt have enough free GB to process (>=10GB).
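
For example, a quick look with df:

df -h / /u01 /opt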

3) Prepare the patch files

The previous patch was slightly different from the others because Oracle simply forgot to double-zip files 2 and 3 of the patch. This patch is back to a more classic update: unzip the 3 downloaded files and register the 3 unzipped files (they are zip files, too).

cd /opt/patch
unzip p28216794_122140_Linux-x86-64_1of3.zip
unzip p28216794_122140_Linux-x86-64_2of3.zip
unzip p28216794_122140_Linux-x86-64_3of3.zip

odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server1of3.zip
odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server2of3.zip
odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server3of3.zip

Updating the repository, as other tasks through odacli, will generate a job-id you can look at to confirm that everything is running fine:

odacli describe-job -i "4087f0f4-2582-4621-b75c-59c9915a0cb5"

Job details
----------------------------------------------------------------
                     ID:  4087f0f4-2582-4621-b75c-59c9915a0cb5
            Description:  Repository Update
                 Status:  Success
                Created:  July 19, 2018 10:32:53 AM CEST
                Message:  oda-sm-12.2.1.4.0-180703-server1of3.zip

Note: you can update the repository with a single update-repository command including the 3 files. But it’s better to do the update separately in case of a corrupted file.
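
For reference, the single-command variant takes a comma-separated list of files:

odacli update-repository -f /opt/patch/oda-sm-12.2.1.4.0-180708-server1of3.zip,/opt/patch/oda-sm-12.2.1.4.0-180708-server2of3.zip,/opt/patch/oda-sm-12.2.1.4.0-180708-server3of3.zip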

A new feature with this patch: you can check whether your system is ready for patching with these commands:

odacli create-prepatchreport -s -v 12.2.1.4.0
odacli describe-prepatchreport -i f45f9750-ec9b-411f-ba53-43205cb17f87

4) Update the dcs-agent

First, you’ll have to update the dcs-agent:

/opt/oracle/dcs/bin/odacli update-dcsagent -v 12.2.1.4.0

odacli describe-job -i "5cc9174e-bd7a-435d-aaff-0113e9ab01bc"

Job details
----------------------------------------------------------------
                     ID:  5cc9174e-bd7a-435d-aaff-0113e9ab01bc
            Description:  DcsAgent patching
                 Status:  Success
                Created:  July 19, 2018 10:36:18 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                July 19, 2018 10:36:18 AM CEST      July 19, 2018 10:36:19 AM CEST      Success
dcs-agent upgrade                        July 19, 2018 10:36:19 AM CEST      July 19, 2018 10:36:19 AM CEST      Success

This update takes only a few seconds. Check the version again and you will see a new 18c component:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.2.0            12.2.1.4.0
GI                                        12.2.0.1.171017       12.2.0.1.180417
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       12.2.0.1.180417
[ OraDB11204_home1 ]                      11.2.0.4.171017       11.2.0.4.180417
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.0.28.r121827      4.0.2.20.b.r123704
BIOS                                      41017600              41021300
OS                                        6.8                   6.9
FIRMWARECONTROLLER                        QDV1RE0F              qdv1re14

5) Update the server

Updating the server will update the OS, ILOM, BIOS, firmwares and the GI. The update process assumes that no additional packages have been installed. If you installed additional packages, please remove them, as they can prevent the patch from applying correctly (a quick check is sketched below).
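
A quick way to spot packages added after the initial deployment (the package to remove is just a placeholder):

rpm -qa --last | head -20
yum remove -y <additional_package>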

This part of the update is the longest one (about 1 hour), and will end with a reboot of the server.

odacli update-server -v 12.2.1.4.0

odacli describe-job -i "a2e296c2-1b3e-4ed1-a5cc-0cb7c4d23120"

Job details
----------------------------------------------------------------
                     ID:  a2e296c2-1b3e-4ed1-a5cc-0cb7c4d23120
            Description:  Server Patching
                 Status:  Success
                Created:  July 19, 2018 12:12:53 PM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                July 19, 2018 12:12:53 PM CEST      July 19, 2018 12:12:53 PM CEST      Success
dcs-controller upgrade                   July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
Patch location validation                July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
dcs-cli upgrade                          July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:54 PM CEST      Success
Creating repositories using yum          July 19, 2018 12:12:54 PM CEST      July 19, 2018 12:12:55 PM CEST      Success
Applying HMP Patches                     July 19, 2018 12:12:55 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
Patch location validation                July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
oda-hw-mgmt upgrade                      July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:31 PM CEST      Success
Creating repositories using yum          July 19, 2018 12:14:31 PM CEST      July 19, 2018 12:14:35 PM CEST      Success
Applying OS Patches                      July 19, 2018 12:14:35 PM CEST      July 19, 2018 12:16:18 PM CEST      Success
OSS Patching                             July 19, 2018 12:16:18 PM CEST      July 19, 2018 12:16:18 PM CEST      Success
Applying Firmware Disk Patches           July 19, 2018 12:16:18 PM CEST      July 19, 2018 12:16:23 PM CEST      Success
Applying Firmware Expander Patches       July 19, 2018 12:16:23 PM CEST      July 19, 2018 12:16:26 PM CEST      Success
Applying Firmware Controller Patches     July 19, 2018 12:16:26 PM CEST      July 19, 2018 12:16:30 PM CEST      Success
Checking Ilom patch Version              July 19, 2018 12:16:31 PM CEST      July 19, 2018 12:16:33 PM CEST      Success
Patch location validation                July 19, 2018 12:16:33 PM CEST      July 19, 2018 12:16:34 PM CEST      Success
Apply Ilom patch                         July 19, 2018 12:16:35 PM CEST      July 19, 2018 12:16:36 PM CEST      Success
Copying Flash Bios to Temp location      July 19, 2018 12:16:37 PM CEST      July 19, 2018 12:16:37 PM CEST      Success
Starting the clusterware                 July 19, 2018 12:16:57 PM CEST      July 19, 2018 12:16:58 PM CEST      Success
clusterware patch verification           July 19, 2018 12:16:58 PM CEST      July 19, 2018 12:17:00 PM CEST      Success
Patch location validation                July 19, 2018 12:17:00 PM CEST      July 19, 2018 12:17:03 PM CEST      Success
Opatch updation                          July 19, 2018 12:17:33 PM CEST      July 19, 2018 12:17:35 PM CEST      Success
Patch conflict check                     July 19, 2018 12:17:35 PM CEST      July 19, 2018 12:18:16 PM CEST      Success
clusterware upgrade                      July 19, 2018 12:18:16 PM CEST      July 19, 2018 12:33:07 PM CEST      Success
Updating GiHome version                  July 19, 2018 12:33:07 PM CEST      July 19, 2018 12:33:11 PM CEST      Success
preRebootNode Actions                    July 19, 2018 12:33:30 PM CEST      July 19, 2018 12:34:11 PM CEST      Success
Reboot Ilom                              July 19, 2018 12:34:11 PM CEST      July 19, 2018 12:34:11 PM CEST      Success

If there is a problem during the patching process (you forgot to remove an additional rpm, or one of the patch files is missing, for example), you can relaunch the patching and it will skip the already patched components.

Once this part of the patch is successfully applied, check the components again:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.171017       12.2.0.1.180417
[ OraDB11204_home1 ]                      11.2.0.4.171017       11.2.0.4.180417
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.2.20.b.r123704    up-to-date
BIOS                                      41017600              41021300
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date

Note: at this time, the BIOS is not yet updated. I don't know why, but check again later and it will be OK.

6) Patch the dbhomes

You now need to patch the dbhomes separately. First of all, list them:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
90556d26-6756-4fed-9546-d44d55b6fc04     OraDB11204_home1     11.2.0.4.171017 (26609929, 26392168)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
18a9c067-3629-409d-9bae-60d27516c914     OraDB12201_home1     12.2.0.1.171017 (27020386, 26710464)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

Patch the first one. The target version is actually the version of the ODA patch. Remember that updating the ODA through a patch will never upgrade your databases to a newer release: your 11gR2 databases will stay in 11.2.0.4 and your 12cR1 databases will stay in 12.1.0.2 (if you're using one or both of these versions). Only the PSU level will change (the fifth number, which is actually a date).

odacli update-dbhome -i 90556d26-6756-4fed-9546-d44d55b6fc04 -v 12.2.1.4.0

odacli describe-job -i "c1abf083-d597-4673-b07b-d7cb79ec434a"

Job details
----------------------------------------------------------------
                     ID:  c1abf083-d597-4673-b07b-d7cb79ec434a
            Description:  DB Home Patching: Home Id is 90556d26-6756-4fed-9546-d44d55b6fc04
                 Status:  Success
                Created:  July 19, 2018 12:59:11 PM CEST
                Message:  WARNING::Failed to run the datapatch as db DB03_S2 is not registered with clusterware##WARNING::Failed to run the datapatch as db DB04_S2 is not registered with clusterware##WARNING::Failed to run the datapatch as db DB05_S2 is not r

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           July 19, 2018 12:59:12 PM CEST      July 19, 2018 12:59:15 PM CEST      Success
Patch location validation                July 19, 2018 12:59:15 PM CEST      July 19, 2018 12:59:19 PM CEST      Success
Opatch updation                          July 19, 2018 12:59:39 PM CEST      July 19, 2018 12:59:40 PM CEST      Success
Patch conflict check                     July 19, 2018 12:59:40 PM CEST      July 19, 2018 12:59:49 PM CEST      Success
db upgrade                               July 19, 2018 12:59:49 PM CEST      July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:22 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:22 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:23 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:23 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:24 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:01:24 PM CEST       July 19, 2018 1:01:25 PM CEST       Success

Despite the overall Success status, some databases could stay at the previous version for one of these reasons:
– the database is a standby database: no update is possible on the dictionary (the dictionary will be updated when the primary server is patched)
– the database is not correctly registered in the Oracle Cluster Registry (if you created the database with odacli in instance-only mode: create-database -io; probably a bug)

After applying the patch, please check the version of the dictionary on each database with this statement:

select ACTION_TIME, ACTION, VERSION, ID, BUNDLE_SERIES, COMMENTS from  DBA_REGISTRY_HISTORY;
ACTION_TIME                    ACTION       VERSION                  ID BUNDLE_SERIES   COMMENTS
------------------------------ ------------ ---------------- ---------- --------------- ------------------------------
23/07/18 20:25:26,765974       APPLY        11.2.0.4             180417 PSU             PSU 11.2.0.4.180417

Then update the next dbhome:

odacli update-dbhome -i 18a9c067-3629-409d-9bae-60d27516c914 -v 12.2.1.4.0

odacli describe-job -i "ef53a8a6-bd74-40f6-a338-343489d41a1c"

Job details
----------------------------------------------------------------
                     ID:  ef53a8a6-bd74-40f6-a338-343489d41a1c
            Description:  DB Home Patching: Home Id is 18a9c067-3629-409d-9bae-60d27516c914
                 Status:  Success
                Created:  July 19, 2018 1:20:20 PM CEST
                Message:  WARNING::Failed to run datapatch on db DB12TST Failed to run Utlrp script##WARNING::Failed to run the datapatch as db DB12DEV: is not registered with clusterware

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           July 19, 2018 1:20:41 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Patch location validation                July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Opatch updation                          July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
Patch conflict check                     July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
db upgrade                               July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:42 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:42 PM CEST       July 19, 2018 1:20:47 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:47 PM CEST       July 19, 2018 1:20:47 PM CEST       Success
SqlPatch upgrade                         July 19, 2018 1:20:47 PM CEST       July 19, 2018 1:21:03 PM CEST       Success

Like the previously patched dbhome, this one also has warnings. Please check that each database is OK.

Check the /opt/oracle/dcs/log/dcs-agent.log for extended warning messages if needed, and then check the DB Version after the update of all dbhomes:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
90556d26-6756-4fed-9546-d44d55b6fc04     OraDB11204_home1     11.2.0.4.180417 (27441052, 27338049)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
18a9c067-3629-409d-9bae-60d27516c914     OraDB12201_home1     12.2.0.1.180417 (27464465, 27674384)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

Jump to step 8 if your databases are OK.

7) Update the databases where datapatch failed

If for some reason datapatch was not applied on some of your databases and it should have been, you’ll have to do it manually:

For 12c :

su - oracle
. oraenv <<< DB12TST
cd $ORACLE_HOME/OPatch
./datapatch -verbose

For 11gR2 :

su - oracle
. oraenv <<< DB11TST
sqlplus / as sysdba
@?/rdbms/admin/catbundle.sql psu apply

8) Patch the storage

No update-storage is needed for this patch on this ODA.

9) Optional: deploy the latest db clone files

If you'll never deploy a new dbhome, this step is not necessary. If you will, or simply if you don't know whether you'll have to do that later, download and register the new db clone files in the repository to be able to create a new dbhome at the same patch level as the one already deployed, for example:

cd /opt/patch

unzip p27119402_122140_Linux-x86-64.zip
Archive:  p27119402_122140_Linux-x86-64.zip
 extracting: odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip
  inflating: README.txt

odacli update-image --image-files odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip
Unpacking will take some time,  Please wait...
Unzipping odacli-dcs-12.2.1.4.0-180617-DB-12.2.0.1.zip

10) Control the final version of the components

Now the patching is done. About 2 hours were needed when everything goes fine.

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB {
[ OraDB12201_home1 ]                      12.2.0.1.180417       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180417       up-to-date
}
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      4.0.2.20.b.r123704    up-to-date
BIOS                                      41021300              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER                        QDV1RE14              up-to-date

11) Optional: patch the SFP firmware

If you're using SFP network interfaces on your ODA X7 (fiber-connected network), you may encounter network troubles (lost pings, failover not working correctly, and so on). SFPs are not part of the ODA patching process, but there is a firmware update available for this kind of SFP on the Oracle X7 server family (including ODA). Please review note 2373070.1 for more information:

Using the onboard SFP28 ports on an ODA X7-2 server node (Doc ID 2373070.1)

 


Upgrade Oracle Grid Infrastructure from 12.1.0.2.0 to 12.2.0.1.0


The following blog will provide the necessary steps to upgrade the Grid Infrastructure from 12.1 to 12.2, for a Standalone Server.
One of the new features of GI 12.2 is the usage of AFD (Oracle ASM Filter Driver).

Assumptions :

 You have installed Oracle GI 12.1 as grid user
 You have installed Oracle Database 12.1 as oracle user
 You have configured the groups asmadmin,asmoper,asmdba
 You installed oracle-rdbms-server-12cr2-preinstall rpm
 You patched your Oracle GI to PSU July 2017 (combo patch 25901062 to patch the Oracle 12.1 stack, GI & RDBMS)
 [root]mkdir /u01/app/grid/product/12.2.0/grid/
 [root]chown -R grid:oinstall /u01/app/grid/product/12.2.0/grid/
 --stop all dbs that are using ASM
 [oracle]srvctl stop database -d ORCL

Installation : Tasks

[grid]cd /u01/app/grid/product/12.2.0/grid/
[grid]unzip /stage/linuxx64_12201_grid_home.zip
[grid]./gridSetup.sh
	Choose Upgrade Oracle Grid Infrastructure option.
	Confirm that all Oracle DBs using ASM are stopped.
	Check :
        Oracle base : /u01/app/grid/  
        Software Location : /u01/app/grid/product/12.2.0/grid/
		
	Uncheck "Automatically run configuration scripts". Running them automatically is not recommended by Oracle, and if you do it that way
it is very possible that your upgrade process dies without any output.
	So, at the right moment, you will be asked to run rootUpgrade.sh manually.
	Click Next and validate that all the pre-requirements are confirmed.
	Monitor the progress and run the script rootUpgrade.sh when prompted.
	Once your action has completed successfully:
[grid@dbisrv04 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid

[grid@dbisrv04 ~]$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [12.2.0.1.0]

Migrating ASM disks from ASMlib to AFD : Tasks

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

Step1:

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:
profile:

[grid@dbisrv04 ~]$ asmcmd dsset '/dev/xvda*','ORCL:*','AFD:*'

[grid@dbisrv04 ~]$ asmcmd dsget
parameter:/dev/xvda*, ORCL:*, AFD:*
profile:/dev/xvda*,ORCL:*,AFD:*

Step2:

[root]export ORACLE_HOME=/u01/app/grid/product/12.2.0/grid/
[root]$GRID_HOME/bin/crsctl stop has -f

Step3:

[root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure

ASMCMD-9524: AFD configuration failed 'ERROR: ASMLib deconfiguration failed'
Cause: acfsload is running.To configure AFD oracleasm and acfsload must be stopped
Solution: stop acfsload and rerun asmcmd afd_configure

[root@dbisrv04 ~]# oracleasm exit
[root@dbisrv04 ~]# $ORACLE_HOME/bin/acfsload stop

[root@dbisrv04 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

Step4:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'dbisrv04.localdomain'

Step5:

[root]$ORACLE_HOME/bin/crsctl stop has

Step6:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_refresh
[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step7:

[grid@dbisrv04 ~]$ $ORACLE_HOME/bin/asmcmd afd_dsset '/dev/sd*'

Step8:

[root]$ORACLE_HOME/bin/crsctl stop has -f
[root]$GRID_HOME/bin/asmcmd afd_scan
[root]$GRID_HOME/bin/asmcmd afd_refresh

Step9:

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
DISK01                      ENABLED   /dev/sdf1
DISK02                      ENABLED   /dev/sdg1
DISK03                      ENABLED   /dev/sdh1
DISK04                      ENABLED   /dev/sdi1
DISK05                      ENABLED   /dev/sdj1
DISK06                      ENABLED   /dev/sdk1
DISK07                      ENABLED   /dev/sdl1
DISK08                      ENABLED   /dev/sdm1
DISK09                      ENABLED   /dev/sdn1

Step10:

SQL> select name,label,path from v$asm_disk;

NAME       LABEL                PATH
---------- -------------------- --------------------
DISK04     DISK04               AFD:DISK04
DISK03     DISK03               AFD:DISK03
DISK02     DISK02               AFD:DISK02
DISK01     DISK01               AFD:DISK01
DISK07     DISK07               AFD:DISK07
DISK05     DISK05               AFD:DISK05
DISK06     DISK06               AFD:DISK06
DISK09     DISK09               AFD:DISK09
DISK08     DISK08               AFD:DISK08

Step11: Confirm your AFD is loaded

[root@dbisrv04 ~]# /u01/app/grid/product/12.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.DATA2.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.RECO.dg
               ONLINE  ONLINE       dbisrv04                 STABLE
ora.asm
               ONLINE  ONLINE       dbisrv04                 Started,STABLE
ora.ons
               OFFLINE OFFLINE      dbisrv04                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.evmd
      1        ONLINE  ONLINE       dbisrv04                 STABLE
ora.orcl.db
      1        ONLINE  ONLINE       dbisrv04                 Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /dbhome_1,STABLE

--------------------------------------------------------------------------------

Step 11b: Introduce new disks with AFD

[root]. oraenv
[root]+ASM
[root@dbisrv04 ~]# asmcmd afd_label DISK10 /dev/sdo1 --init
ASMCMD-9521: AFD is already configured
[root@dbisrv04 ~]# asmcmd afd_label DISK10 /dev/sdo1
[root@dbisrv04 ~]# asmcmd afd_lslbl

Step 12: Erase Oracle ASMLib

[root] yum erase oracleasm-support.x86_64
[root] yum erase oracleasmlib.x86_64
 


Oracle 18c preinstall RPM on RedHat RHEL


The Linux prerequisites for Oracle Database are all documented but using the pre-install rpm makes all things easier. Before 18c, this was easy on Oracle Enterprise Linux (OEL) but not so easy on RedHat (RHEL) where the .rpm had many dependencies on OEL and UEK.
Now that 18c is there to download, there's also the 18c preinstall rpm, and the good news is that it can also be run on RHEL without modification.

This came to my attention on Twitter, and of course it is fully documented:
https://docs.oracle.com/en/database/oracle/oracle-database/18/cwlin/about-the-oracle-preinstallation-rpm.html#GUID-C15A642B-534D-4E4A-BDE8-6DC7772AA9C8

In order to test it, I quickly created a CentOS instance on the Oracle Cloud.

I’ve downloaded the RPM from the OEL7 repository:

[root@instance-20180803-1152 opc]# curl -o oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 18244 100 18244 0 0 63849 0 --:--:-- --:--:-- --:--:-- 63790

then ran the installation:

[root@instance-20180803-1152 opc]# yum -y localinstall oracle-database-preinstall-18c-1.0-1.el7.x86_64.rpm

 
It automatically installs all dependencies:
Installed:
oracle-database-preinstall-18c.x86_64 0:1.0-1.el7
 
Dependency Installed:
compat-libcap1.x86_64 0:1.10-7.el7 compat-libstdc++-33.x86_64 0:3.2.3-72.el7 glibc-devel.x86_64 0:2.17-222.el7 glibc-headers.x86_64 0:2.17-222.el7
gssproxy.x86_64 0:0.7.0-17.el7 kernel-headers.x86_64 0:3.10.0-862.9.1.el7 keyutils.x86_64 0:1.5.8-3.el7 ksh.x86_64 0:20120801-137.el7
libICE.x86_64 0:1.0.9-9.el7 libSM.x86_64 0:1.2.2-2.el7 libXext.x86_64 0:1.3.3-3.el7 libXi.x86_64 0:1.7.9-1.el7
libXinerama.x86_64 0:1.1.3-2.1.el7 libXmu.x86_64 0:1.1.2-2.el7 libXrandr.x86_64 0:1.5.1-2.el7 libXrender.x86_64 0:0.9.10-1.el7
libXt.x86_64 0:1.1.5-3.el7 libXtst.x86_64 0:1.2.3-1.el7 libXv.x86_64 0:1.0.11-1.el7 libXxf86dga.x86_64 0:1.1.4-2.1.el7
libXxf86misc.x86_64 0:1.0.3-7.1.el7 libXxf86vm.x86_64 0:1.1.4-1.el7 libaio-devel.x86_64 0:0.3.109-13.el7 libbasicobjects.x86_64 0:0.1.1-29.el7
libcollection.x86_64 0:0.7.0-29.el7 libdmx.x86_64 0:1.1.3-3.el7 libevent.x86_64 0:2.0.21-4.el7 libini_config.x86_64 0:1.3.1-29.el7
libnfsidmap.x86_64 0:0.25-19.el7 libpath_utils.x86_64 0:0.2.1-29.el7 libref_array.x86_64 0:0.1.5-29.el7 libstdc++-devel.x86_64 0:4.8.5-28.el7_5.1
libverto-libevent.x86_64 0:0.2.5-4.el7 nfs-utils.x86_64 1:1.3.0-0.54.el7 psmisc.x86_64 0:22.20-15.el7 xorg-x11-utils.x86_64 0:7.5-22.el7
xorg-x11-xauth.x86_64 1:1.0.9-1.el7

Note that the limits are stored in limits.d which has priority over limits.conf:

[root@instance-20180803-1152 opc]# cat /etc/security/limits.d/oracle-database-preinstall-18c.conf
 
# oracle-database-preinstall-18c setting for nofile soft limit is 1024
oracle soft nofile 1024
 
# oracle-database-preinstall-18c setting for nofile hard limit is 65536
oracle hard nofile 65536
 
# oracle-database-preinstall-18c setting for nproc soft limit is 16384
# refer orabug15971421 for more info.
oracle soft nproc 16384
 
# oracle-database-preinstall-18c setting for nproc hard limit is 16384
oracle hard nproc 16384
 
# oracle-database-preinstall-18c setting for stack soft limit is 10240KB
oracle soft stack 10240
 
# oracle-database-preinstall-18c setting for stack hard limit is 32768KB
oracle hard stack 32768
 
# oracle-database-preinstall-18c setting for memlock hard limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90 % of RAM
oracle hard memlock 134217728
 
# oracle-database-preinstall-18c setting for memlock soft limit is maximum of 128GB on x86_64 or 3GB on x86 OR 90% of RAM
oracle soft memlock 134217728

Note that memlock is set to 128GB here, but it can be higher on machines with huge RAM (up to 90% of RAM).

And for information, here is what is set in /etc/sysctl.conf:

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Besides that, the preinstall rpm disables NUMA and transparent huge pages (as boot options in GRUB). It also creates the oracle user (uid 54321, belonging to the groups oinstall, dba, oper, backupdba, dgdba, kmdba, racdba). A quick check is sketched below.
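
For example, to verify these changes (a sketch; the boot options only show up in /proc/cmdline after a reboot):

cat /proc/cmdline
id oracle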

 


How much free space can be reclaimed from a segment?


You have the feeling that your table takes more blocks than it should? Here are the queries I use to quickly check the free space. The idea is to call DBMS_SPACE.SPACE_USAGE and infer the minimum space from the percentages. For example, a block in FS3 (defined as having at least 50 to 75% free space) is supposed to have at least 50% of free space. Of course it can have more, but you don’t know.

Here is some PL/SQL to do so:

set serveroutput on
declare
unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number; fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
for i in (select * from (select * from dba_segments where segment_subtype='ASSM' and segment_type in (
'TABLE','TABLE PARTITION','TABLE SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc) where 10>=rownum)
loop
begin
dbms_space.space_usage(i.owner,i.segment_name,i.segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>i.partition_name);
dbms_output.put_line(to_char((unfb+fs1b*0+fs2b*0.25+fs3b*0.5+fs4b*0.75)/1024/1024/1024,'999G999D999')||' GB free in '||i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'"');
exception
when others then dbms_output.put_line(i.segment_type||' "'||i.owner||'"."'||i.segment_name||'" partition "'||i.partition_name||'": '||sqlerrm);
end;
end loop;
end;
/

The output looks like:

.001 GB free in INDEX "DEMO"."ACCOUNT_PK" partition ""
.001 GB free in TABLE "APEX_040200"."WWV_FLOW_PAGE_PLUGS" partition ""
.009 GB free in TABLE "SCOTT"."DEMO" partition ""
.000 GB free in TABLE "APEX_040200"."WWV_FLOW_STEP_ITEMS" partition ""
.003 GB free in INDEX "SYS"."WRH$_SYSMETRIC_HISTORY_INDEX" partition ""
.000 GB free in TABLE "MDSYS"."SDO_CS_SRS" partition ""
.002 GB free in INDEX "SYS"."I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST" partition ""
.006 GB free in TABLE "SYS"."WRH$_SYSMETRIC_HISTORY" partition ""
.002 GB free in TABLE "SYS"."WRH$_SQL_PLAN" partition ""
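
For a quick one-off check on a single known segment, the same call can be reduced to the following (a sketch, assuming a non-partitioned table SCOTT.DEMO in an ASSM tablespace):

set serveroutput on
declare
 unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number;
 fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
 dbms_space.space_usage('SCOTT','DEMO','TABLE',unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb);
 -- same weighting as in the script above
 dbms_output.put_line(round((unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75)/1024/1024,1)||' MB free at least');
end;
/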

If you are in 12c, an inline function in the query might come in handy:

with function freebytes(segment_owner varchar2, segment_name varchar2, segment_type varchar2, partition_name varchar2) return number as
 unf number; unfb number; fs1 number; fs1b number; fs2 number; fs2b number;
 fs3 number; fs3b number; fs4 number; fs4b number; full number; fullb number;
begin
 dbms_space.space_usage(segment_owner,segment_name,segment_type,unf,unfb,fs1,fs1b,fs2,fs2b,fs3,fs3b,fs4,fs4b,full,fullb,partition_name=>partition_name);
 return unfb+fs1b+fs2b*0.25+fs3b*0.5+fs4b*0.75;
end;
select round(freebytes(owner,segment_name,segment_type,partition_name)/1024/1024/1024,3) free_GB,
 segment_type,owner,segment_name,partition_name
from dba_segments where segment_subtype='ASSM' and segment_type in (
 'TABLE','TABLE PARTITION','TABLE SUBPARTITION','CLUSTER','LOB','LOB PARTITION','LOB SUBPARTITION'
) order by bytes desc fetch first 10 rows only
/

The result looks like:

FREE_GB SEGMENT_TYPE OWNER SEGMENT_NAME PARTITION_NAME
------- ------------ ----- ------------ --------------
0 TABLE DEMO ACCOUNTS
0.001 INDEX DEMO ACCOUNT_PK
0.001 TABLE APEX_040200 WWV_FLOW_PAGE_PLUGS
0.009 TABLE SCOTT DEMO
0.003 INDEX SYS WRH$_SYSMETRIC_HISTORY_INDEX
0 TABLE APEX_040200 WWV_FLOW_STEP_ITEMS
0.002 INDEX SYS I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST
0 TABLE MDSYS SDO_CS_SRS
0.006 TABLE SYS WRH$_SYSMETRIC_HISTORY
0.002 TABLE SYS WRH$_SQL_PLAN

Future evolutions of these scripts will be published on GitHub:
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_plsql.sql
https://raw.githubusercontent.com/FranckPachot/scripts/master/administration/segment_free_space_sql.sql

Note that having free space does not mean that you have to shrink or reorganize the segment. Try first to understand what happened to your data, and whether this space will be reused soon.

Update 8-AUG-2018

In the initial post I included all segment types accepted by the dbms_space documentation, but I finally removed ‘INDEX’,’INDEX PARTITION’,’INDEX SUBPARTITION’ because the meaning of the output is completely different for them. See Jonathan Lewis’ note about it: https://jonathanlewis.wordpress.com/2013/12/17/dbms_space_usage/

 

This article How much free space can be reclaimed from a segment? appeared first on the dbi services Blog.


New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL


We installed Fusion Middleware Reports and Forms 12.2.1.2.0 on a Linux server.
These are single-node Reports and Forms WebLogic Servers.
After disabling the non-SSL listen port, we got the following error in the WebLogic Server log files while starting the Managed Servers with only SSL ports enabled.

####<Aug 19, 2017 9:22:44 PM GMT+00:00> <Notice> <Security> <host01.example.com> <WLS_Reports> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320441764877> <BEA-090082> <Security initializing using security realm myrealm.>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Critical> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765180> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: Servers do not have a common channel to communicate over
java.lang.AssertionError: Servers do not have a common channel to communicate over
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.createConfiguration(ConfiguredServersMonitorImpl.java:124)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:55)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.<init>(ConfiguredServersMonitorImpl.java:28)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl$Factory.<clinit>(ConfiguredServersMonitorImpl.java:39)
at weblogic.cluster.messaging.internal.server.ConfiguredServersMonitorImpl.getInstance(ConfiguredServersMonitorImpl.java:44)
at weblogic.cluster.messaging.internal.server.UnicastFragmentSocket.<init>(UnicastFragmentSocket.java:47)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at weblogic.cluster.FragmentSocketWrapper.getUnicastFragmentSocket(FragmentSocketWrapper.java:76)
at weblogic.cluster.FragmentSocketWrapper.<init>(FragmentSocketWrapper.java:64)
at weblogic.cluster.FragmentSocketWrapper.getInstance(FragmentSocketWrapper.java:47)
at weblogic.cluster.MulticastManager.<init>(MulticastManager.java:158)
at weblogic.cluster.MulticastManager.initialize(MulticastManager.java:111)
at weblogic.cluster.ClusterService.startService(ClusterService.java:196)
at weblogic.server.ServiceActivator.start(ServiceActivator.java:96)
at weblogic.t3.srvr.SubsystemRequest.run(SubsystemRequest.java:64)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765413> <BEA-000365> <Server state changed to FAILED>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Error> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765414> <BEA-000383> <A critical service failed. The server will shut itself down>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Notice> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765426> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
####<Aug 19, 2017 9:22:45 PM GMT+00:00> <Info> <WebLogicServer> <host01.example.com> <WLS_Reports> <main> <<WLS Kernel>> <> <> <1320441765447> <BEA-000236> <Stopping execute threads.>

Even if no cluster is used, the default Reports and Forms domain configuration creates WebLogic clusters.

To solve the “Servers do not have a common channel to communicate over” error, perform the following steps for each cluster as an administrator:

  1. Log in to the Administration Console
  2. Expand the Environment tab and navigate to Clusters
  3. In production mode, you need to select “Lock and Edit”
  4. Go to the General tab and select the ‘Replication’ subtab
  5. On the Replication subtab, find Replication Channel and check the box beside “Secure Replication Enabled” (to set it to true)
  6. Activate the changes
  7. Restart the managed server(s).

Or in WLST:

Start wlst.sh(*) and connect to the WebLogic Domain:

connect('AdminUser','password','t3s://host01.example.com:7002')

startEdit()

# adjust the cluster configuration and enable secure replication over the replication channel
cd('/Clusters/cluster_reports')
cmo.setClusterType('none')
cmo.setPersistSessionsOnShutdown(false)
cmo.setReplicationChannel('ReplicationChannel')
cmo.setSecureReplicationEnabled(true)

activate()

(*) wlst.sh is located in the Middleware software home $M_HOME/oracle_common/common/bin directory

 

This article New installed Fusion Middleware Reports or Forms WebLogic Servers fails to start after configuring SSL appeared first on the dbi services Blog.

MERGE JOIN CARTESIAN: a join method or a join type?


I’ll present on join methods at POUG and DOAG. I’ll show how the different join methods work, in order to better understand them. The idea is to show Nested Loops, Hash Join, Sort Merge Join and Merge Join Cartesian on the same query. I’ll run a simple join between DEPT and EMP with the USE_NL, USE_HASH, USE_MERGE and USE_MERGE_CARTESIAN hints. I’ll show the execution plan, with SQL Monitoring in text mode. And I’ll put some gdb breakpoints on the ‘qer’ (query execution rowsource) functions to run the plan operations step by step. Then I’ll do the same on a different query in order to show in detail the 12c adaptive plans.

But wait, I listed Nested Loops, Hash Join, Sort Merge Join, Merge Join Cartesian… but is Merge Join Cartesian really a join method? I mean, my query is not a cartesian join. I have all join predicates here. But for sure you can also do an inner join by starting with a cartesian join and then filtering on the join predicate. As if doing physically what the old Oracle join syntax does logically: not putting any predicates in the from clause, and adding the join predicates in the where clause to filter over it.

If I look at the 12.2 documentation, it is a join method:
[Screenshot: Merge Join Cartesian listed under Join Methods in the 12.2 documentation]

For the definition, a Join Method is how the join will be executed. It is not a decision of the SQL developer because SQL is declarative: you declare the result you want, and the optimizer will decide how to do it. And this is why hints are in comments: they are not part of the declarative syntax. Forcing how to do it is not part of SQL.

Just after listing the join methods, the documentation lists the join types, which are part of the SQL because they declare the join result you expect. Inner join to get all matching rows. Semi join to get only the first matching row. Anti join to get all rows which do not match. Outer join to get all matching rows plus the rows which have no match. The syntax is INNER JOIN, OUTER JOIN, EXISTS or IN, NOT EXISTS or NOT IN. Join type is not ‘how’ but ‘what’.
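
As a quick illustration of those join types on the SCOTT schema (a sketch; semi and anti joins are expressed here with EXISTS and NOT EXISTS):

-- inner join: all matching rows
select d.dname, e.ename from dept d join emp e on e.deptno = d.deptno;
-- outer join: matching rows, plus departments without any employee
select d.dname, e.ename from dept d left outer join emp e on e.deptno = d.deptno;
-- semi join: departments for which at least one employee exists
select d.dname from dept d where exists (select 1 from emp e where e.deptno = d.deptno);
-- anti join: departments without any employee
select d.dname from dept d where not exists (select 1 from emp e where e.deptno = d.deptno);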

Ok, so back to the join method. Let’s force it on my inner join between DEPT and EMP:

11g


SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
10 RESEARCH DALLAS 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 RESEARCH DALLAS 7839 KING PRESIDENT 17-NOV-81 5000
10 RESEARCH DALLAS 7934 MILLER CLERK 7782 23-JAN-82 1300
10 SALES CHICAGO 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 SALES CHICAGO 7839 KING PRESIDENT 17-NOV-81 5000
10 SALES CHICAGO 7934 MILLER CLERK 7782 23-JAN-82 1300
10 OPERATIONS BOSTON 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 OPERATIONS BOSTON 7839 KING PRESIDENT 17-NOV-81 5000
10 OPERATIONS BOSTON 7934 MILLER CLERK 7782 23-JAN-82 1300
 
12 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 2034389985
 
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 12 |00:00:00.01 | 7 | | | |
| 1 | MERGE JOIN CARTESIAN| | 1 | 14 | 12 |00:00:00.01 | 7 | | | |
| 2 | TABLE ACCESS FULL | DEPT | 1 | 4 | 4 |00:00:00.01 | 4 | | | |
| 3 | BUFFER SORT | | 4 | 4 | 12 |00:00:00.01 | 3 | 2048 | 2048 | 2048 (0)|
|* 4 | TABLE ACCESS FULL | EMP | 1 | 4 | 3 |00:00:00.01 | 3 | | | |
------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - filter("DEPT"."DEPTNO"="EMP"."DEPTNO")
 

Ok, then I declared my result with an inner join query, and I forced the join method with a hint to show that it is possible. But look at the result. 12 rows? Only DEPTNO 10, when the SCOTT schema has employees in departments 10, 20 and 30? And only those 3 employees, repeated once for each of the 4 department names? That’s a wrong result.

NEVER FORCE A CARTESIAN JOIN WITH USE_MERGE_CARTESIAN!

That’s a very old bug: Bug 17064391 “Wrong result with USE_MERGE_CARTESIAN hint”, finally fixed in 12c (12.2, and backported in a 12.1 PSU).

Then how is it fixed?

18c

With the fix, the hint is just ignored and a SORT MERGE JOIN is used here:

SQL> alter session set current_schema=SCOTT statistics_level=all;
Session altered.
 
SQL> select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from DEPT join EMP using(deptno);
 
DEPTNO DNAME LOC EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- -------------- ------------- ---------- ---------- --------- ---------- --------- ---------- ----------
10 ACCOUNTING NEW YORK 7782 CLARK MANAGER 7839 09-JUN-81 2450
10 ACCOUNTING NEW YORK 7839 KING PRESIDENT 17-NOV-81 5000
10 ACCOUNTING NEW YORK 7934 MILLER CLERK 7782 23-JAN-82 1300
20 RESEARCH DALLAS 7566 JONES MANAGER 7839 02-APR-81 2975
20 RESEARCH DALLAS 7902 FORD ANALYST 7566 03-DEC-81 3000
20 RESEARCH DALLAS 7876 ADAMS CLERK 7788 23-MAY-87 1100
20 RESEARCH DALLAS 7369 SMITH CLERK 7902 17-DEC-80 800
20 RESEARCH DALLAS 7788 SCOTT ANALYST 7566 19-APR-87 3000
30 SALES CHICAGO 7521 WARD SALESMAN 7698 22-FEB-81 1250 500
30 SALES CHICAGO 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0
30 SALES CHICAGO 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300
30 SALES CHICAGO 7900 JAMES CLERK 7698 03-DEC-81 950
30 SALES CHICAGO 7698 BLAKE MANAGER 7839 01-MAY-81 2850
30 SALES CHICAGO 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400
 
14 rows selected.
 
SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------
SQL_ID 1xpfxq6pc30vq, child number 0
-------------------------------------
select /*+ leading(DEPT) USE_MERGE_CARTESIAN(EMP) FULL(DEPT) */ * from
DEPT join EMP using(deptno)
 
Plan hash value: 1407029907
 
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.01 | 12 | 12 | | | |
| 1 | MERGE JOIN | | 1 | 14 | 14 |00:00:00.01 | 12 | 12 | | | |
| 2 | SORT JOIN | | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 3 | TABLE ACCESS FULL| DEPT | 1 | 4 | 4 |00:00:00.01 | 6 | 6 | | | |
|* 4 | SORT JOIN | | 4 | 14 | 14 |00:00:00.01 | 6 | 6 | 2048 | 2048 | 2048 (0)|
| 5 | TABLE ACCESS FULL| EMP | 1 | 14 | 14 |00:00:00.01 | 6 | 6 | | | |
--------------------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
4 - access("DEPT"."DEPTNO"="EMP"."DEPTNO")
filter("DEPT"."DEPTNO"="EMP"."DEPTNO")

So here the result is good, thanks to the fix, and we clearly see how it is fixed: the USE_MERGE_CARTESIAN hint has been ignored.

And the funny thing is that when you look at the 18c documentation, the Merge Join Cartesian is not a join method anymore but a join type:
[Screenshot: Merge Join Cartesian listed under Join Types in the 18c documentation]

Exactly the same paragraph, but now in join types (the ‘what’) rather than in join methods (the ‘how’).

What or How?

Actually, in my opinion, it is both. When you explicitly want a cartesian join, that’s a join type, described by the CROSS JOIN in the ANSI join syntax, or the lack of related predicates in the old syntax. This is ‘what’. But you may also encounter a MERGE JOIN CARTESIAN for a non-cartesian join just because the optimizer decides it is more efficient. When you have very few rows on both sides, it may be faster to start with a cartesian product on small rowsources. This can be part of star transformation, where fact rows are joined back to the cartesian product of filtered dimensions in order to project the dimension attributes. This is ‘how’ it will be executed. We also see it when the optimizer underestimates the cardinalities, in which case it is often followed by a long nested loop.

When?

So, let’s look at the documentation “When the Optimizer Considers Cartesian Joins”:
[Screenshot: “When the Optimizer Considers Cartesian Joins” in the 18c documentation]

  • No join condition exists: that’s when a cartesian product is what we want
  • The ORDERED hint specifies a table before its join table is specified: that’s when it is the only join method possible with the specified join order
  • A Cartesian join is an efficient method: then it is a join method there, even if documented under join types.

In conclusion, cartesian join is a join type. It can also be used as a join method when the optimizer decides to. But you cannot decide it yourself by hinting since 12c, and trying to do so in previous versions is a very bad idea that can return wrong results.

So, for this one I’ll explicitly run a CROSS JOIN:
[Screenshot: CROSS JOIN query with SQL Monitor output and the gdb ‘qer’ backtrace]
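
The query behind this screenshot is simply the ANSI cross join (a sketch; the MONITOR hint is my assumption to force SQL Monitoring on such a short query):

select /*+ monitor */ * from DEPT cross join EMP;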

The query is on top. The SQL monitor, in the middle, shows that we are currently active reading rows from EMP. The bottom shows the ‘qer’ functions backtrace: the fetch call is propagated from opifch2 for the SELECT STATEMENT, through the MERGE JOIN CARTESIAN (qerjo), the BUFFER SORT (qerso), to the TABLE ACCESS (qertb).

So basically, the goal of this full-demo presentation is to show how to read the execution plan by understanding how it is executed. This qertbFetch on the inner table EMP is executed only on the first row coming from the outer table DEPT. As the rows are returned to a buffer, the further iterations will fetch only from this buffer and will not go further than qersoFetchSimple. The qersoProcessULS (‘process underlying row source’ – see Frits Hoogland annotations) is run only once. This is the big difference with Nested Loop, where the inner loop on the underlying rowsource is run for each outer loop iteration: those two loops are nested – thus the name. But the function for the join part is the same for Nested Loop, Sort Merge Join and Merge Join Cartesian: qerjo. Only the underlying operations differentiate the join methods.
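
For reference, such a step-by-step demo can be driven with gdb breakpoints on the rowsource fetch functions (a sketch, assuming gdb is attached to the dedicated server process; qerjoFetch is my assumption for the join fetch function, the others appear in the backtrace above):

# break on the rowsource fetch functions to step through the plan operations
break qertbFetch
break qersoFetchSimple
break qerjoFetch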

Last comment: we don’t see any function which really sorts the rows in this buffer (as we will see for the Sort Merge Join method) because there is no sorting, despite the name of the BUFFER SORT operation. More info on Jonathan Lewis’ blog.

 

This article MERGE JOIN CARTESIAN: a join method or a join type? appeared first on the dbi services Blog.

ATP vs ADW – the Autonomous Database lockdown profiles


The Oracle database has always distinguished two types of workloads: transactional (OLTP) and datawarehouse (VLDB, DWH, DSS, BI, analytics). There is the same idea in the managed Oracle Cloud with two autonomous database services.

To show how old this is, here is how they were defined in the Oracle7 Tuning Book:

[Screenshot: OLTP vs. DSS definitions in the Oracle7 Tuning Book]

The definitions have not changed a lot, but the technology behind DSS/DWH has improved. Now, with In-Memory Column Store, Smart Scan and Result Cache, we can even see that indexes, materialized views, star transformation, hints, etc. are disabled in the Autonomous Datawarehouse cloud service.

The differences between the two autonomous cloud services, ATP (Autonomous Transaction Processing) for OLTP and ADW (Autonomous Datawarehouse) for analytics, have been described by Maria Colgan after Larry Ellison’s announcement:
https://sqlmaria.com/2018/08/07/how-does-autonomous-transaction-processing-differ-from-the-autonomous-data-warehouse/

PDBaaS

Those autonomous services are PDB as a Service. They use the consolidation and isolation features of 12cR2 multitenant. And we can even see that the ATP and ADW services can run within the same CDB and the same instance. They are different PDBs and they differ only by their resource management plans and PDB lockdown profiles. So let’s see the differences from this point of view: the ATP lockdown profile is called ‘OLTP’ and the ADW lockdown profile is called ‘DWCS’.
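
The lockdown profile applied to the current PDB can be checked with a simple query (the same expression is used inline in the queries below):

select value from v$parameter where name='pdb_lockdown';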

Options

The only difference is about partitioning, which is enabled for ATP and disabled for ADW:

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION';
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
OLTP OPTION PARTITIONING ENABLE ALL 284
 
SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where rule_type='OPTION' order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------- ----- ------
DWCS OPTION PARTITIONING DISABLE ALL 73

Features

All disabled features are the same:
Disabled for all users: COMMON_SCHEMA_ACCESS, CONNECTIONS, CTX_LOGGING, NETWORK_ACCESS, OS_ACCESS, SYSTEM_DATA
Disabled for local users only: USER_PASSWORD_VERIFIERS
Enabled for all users: COMMON_USER_CONNECT, TRACE_VIEW_ACCESS, UTL_FILE

Those last ones are good news. We can query V$DIAG_OPT_TRACE_RECORDS, V$DIAG_SQL_TRACE_RECORDS, V$DIAG_TRACE_FILE_CONTENTS to show some diagnostics. Unfortunately, I’ve seen no allowed way to enable SQL Trace.
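
For example, a quick look at the trace records could be done like this (a sketch; I assume here the TRACE_FILENAME and PAYLOAD columns of the view):

select trace_filename, payload from v$diag_trace_file_contents where rownum <= 10;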

DDL Statements

In the ADW service, the local user cannot create indexes and materialized views. They are supposed to be created autonomously when required (probably by a common user):

SQL> select (select value from v$parameter where name='pdb_lockdown') type,v$lockdown_rules.* from v$lockdown_rules where regexp_like(rule,'(INDEX|MATERIALIZED)') order by 1, 2 ,3 nulls first, 4 nulls first,5;
 
TYPE RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
---- --------- ---- ------ ------------- ------ ----- ------
DWCS STATEMENT ALTER INDEX DISABLE LOCAL 73
DWCS STATEMENT ALTER INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT ALTER MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEX DISABLE LOCAL 73
DWCS STATEMENT CREATE INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT CREATE MATERIALIZED VIEW LOG DISABLE LOCAL 73
DWCS STATEMENT DROP INDEX DISABLE LOCAL 73
DWCS STATEMENT DROP INDEXTYPE DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW DISABLE LOCAL 73
DWCS STATEMENT DROP MATERIALIZED VIEW LOG DISABLE LOCAL 73

The ATP service enables all those for all users.

System parameters

Both services disable all ALTER SYSTEM statements except KILL SESSION. They allow all ALTER SYSTEM SET for the common users but allow only a few parameters to be changed by local users.

Basically, the ADW service sets the following:


_default_pct_free=1
_ldr_io_size=33554432
_ldr_io_size2=33554432
_max_io_size=33554432
_optimizer_allow_all_access_paths=FALSE
_optimizer_answering_query_using_stats=TRUE
optimizer_ignore_hints=TRUE
optimizer_ignore_parallel_hints=TRUE
parallel_degree_policy=AUTO
parallel_min_degree=CPU
result_cache_max_result=1
result_cache_mode=FORCE

The ATP service keeps the defaults.

Basically, all the features for OLTP have been there for years in the Oracle Database. Under the ‘autonomous’ umbrella we can see some well-known features:

  • Automatic provisioning: pluggable databases
  • Automatic scaling: PDB resource manager
  • Automatic tuning: SQL Plan Management, Adaptive Plans, SQL Tuning Advisor, Storage Indexes
  • Automatic security: Rolling patches, Encryption, Database Vault
  • Automatic Fault Tolerant Failover: RAC, Data Guard
  • Automatic Backup and Recovery: RMAN, Flashback

All these proven features are enhanced to work together in a consolidated multitenant environment. Provisioning is as fast as a CREATE PDB. The ATP and ADW services enable and configure the right set of features for each workload.

 

This article ATP vs ADW – the Autonomous Database lockdown profiles appeared first on the dbi services Blog.

TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT


When you have a Data Guard configuration, you want the application to connect to the right server, where the primary is, without taking too much time. The default TCP timeout is 1 minute, which is too long. When you don’t want to configure a virtual IP address (VIP), you can simply list all the addresses in the client connection string. But then you need to reduce the timeout. A short duration of 1 to 5 seconds will be ok most of the time but, in case of network issues, you want to give a chance to retry with a longer timeout. This post is about the connection string parameters to define this. Of course, all of this is documented, but the goal of this post is also to show how to quickly test it, because a reliable understanding of how it works relies on both documentation and tests.

Here is a simple client failover configuration where the connection tries 10.10.10.10 and, if it fails, tries 10.10.10.11:

DEFAULT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The problem with that is that when 10.10.10.10 is down, 10.10.10.11 will be tried only after 60 seconds, the default TCP timeout. You can completely avoid waiting for the timeout by using a virtual IP that is always up, started on the failed-over server. But you can also reduce the TCP timeout to a few seconds.
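
As a side note, a similar timeout can also be set globally on the client side in sqlnet.ora (an assumption, not tested in this post; a TRANSPORT_CONNECT_TIMEOUT in the connect descriptor takes precedence):

SQLNET.OUTBOUND_CONNECT_TIMEOUT = 4
TCP.CONNECT_TIMEOUT = 4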

Here is a tnsping with the above tnsnames.ora entry, when both servers are down:


$ time tnsping DEFAULT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 2m0.051s
user 0m0.005s
sys 0m0.011s

That’s 2 minutes, because there is a 1-minute timeout for each address.

TRANSPORT_CONNECT_TIMEOUT

Now, let’s just add TRANSPORT_CONNECT_TIMEOUT to the connection string description to reduce the timeout to 4 seconds:


DESCRIPTION=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)

The total time to get the answer from both addresses is 8 seconds – 4 seconds for each:

$ time tnsping DESCRIPTION
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:15:55
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m8.023s
user 0m0.010s
sys 0m0.006s

RETRY_COUNT

If you lower the timeout, you may give a chance to retry a few times with RETRY_COUNT. Here, RETRY_COUNT=2 will give 3 attempts (1 + 2 retries) on the address list:
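
The tnsnames.ora entry used here is not shown above; reconstructed from the tnsping output below, it would be:

RETRY_COUNT=
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=4)
(RETRY_COUNT=2)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)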

$ time tnsping RETRY_COUNT
 
TNS Ping Utility for Linux: Version 18.0.0.0.0 - Production on 10-AUG-2018 15:49:34
 
Copyright (c) 1997, 2018, Oracle. All rights reserved.
 
Used parameter files:
 
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (CONNECT_DATA=(SERVICE_NAME=pdb1)) (TRANSPORT_CONNECT_TIMEOUT=4) (RETRY_COUNT=2) (ADDRESS_LIST= (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521)) (ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))))
TNS-12535: TNS:operation timed out
 
real 0m24.049s
user 0m0.011s
sys 0m0.010s

This has tried 10.10.10.10 and then 10.10.10.11 for 4 seconds each, and then retried 2 times, which in total takes 3x2x4=24 seconds (3 attempts x 2 addresses x 4 seconds).

DESCRIPTION_LIST

The TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT are used only within a DESCRIPTION. You may want to give several attempts with an increasing timeout. For example: try each address for one second to get a quick connection to the primary, wherever it is, when the network is in good health. Then give two attempts with a 5-second timeout for bad network times. And then one final attempt to each with the default timeout, to be sure that the servers are down.

You can use a DESCRIPTION_LIST for this:

INCREASING=
(DESCRIPTION_LIST=
(LOAD_BALANCE=off)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(TRANSPORT_CONNECT_TIMEOUT=5)
(RETRY_COUNT=1)
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
(DESCRIPTION=
(CONNECT_DATA=(SERVICE_NAME=pdb1))
(ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.10)(PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=10.10.10.11)(PORT=1521))
)
)
)

Rather than just timing the total attempts, I’ll strace each connection:

$ strace -tT tnsping INCREASING 2>&1 | grep -C1 --color=auto -E 'poll.*|inet_addr[()".0-9]*'
 
16:15:49 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000008>
16:15:49 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000087>
16:15:49 times(NULL) = 434920117 <0.000011>
16:15:49 mmap(NULL, 528384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7efce31bc000 <0.000013>
16:15:49 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.001435>
16:15:50 close(4) = 0 <0.000256>
--
16:15:50 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000060>
16:15:50 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000495>
16:15:50 times(NULL) = 434920218 <0.000062>
16:15:50 poll([{fd=4, events=POLLOUT}], 1, 1000) = 0 (Timeout) <1.000768>
16:15:51 close(4) = 0 <0.000050>
--
16:15:51 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000015>
16:15:51 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000060>
16:15:51 times(NULL) = 434920318 <0.000010>
16:15:51 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.005563>
16:15:56 close(4) = 0 <0.000027>
--
16:15:56 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000012>
16:15:56 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000081>
16:15:56 times(NULL) = 434920819 <0.000015>
16:15:56 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.006265>
16:16:01 close(4) = 0 <0.000192>
--
16:16:01 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000079>
16:16:01 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000486>
16:16:01 times(NULL) = 434921320 <0.000087>
16:16:01 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.004660>
16:16:06 close(4) = 0 <0.000611>
--
16:16:06 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000114>
16:16:06 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000536>
16:16:06 times(NULL) = 434921822 <0.000097>
16:16:06 poll([{fd=4, events=POLLOUT}], 1, 5000) = 0 (Timeout) <5.008128>
16:16:11 close(4) = 0 <0.000135>
--
16:16:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000137>
16:16:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.10")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000584>
16:16:11 times(NULL) = 434922323 <0.000079>
16:16:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.053782>
16:17:11 close(4) = 0 <0.000166>
--
16:17:11 fcntl(4, F_SETFL, O_RDONLY|O_NONBLOCK) = 0 <0.000195>
16:17:11 connect(4, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("10.10.10.11")}, 16) = -1 EINPROGRESS (Operation now in progress) <0.000549>
16:17:11 times(NULL) = 434928329 <0.000488>
16:17:11 poll([{fd=4, events=POLLOUT}], 1, 60000) = 0 (Timeout) <60.007246>
16:18:11 close(4) = 0 <0.000043>

With ‘-T’, strace shows the duration of each system call between angle brackets after the return code. You can see here the 1-second timeout attempts on each address, then 2 attempts per address with a 5-second timeout, and finally the attempts with the 60-second default timeout.

Note that I have added (LOAD_BALANCE=off) here because the default is ON in a DESCRIPTION_LIST, but here I want to try the descriptions in the order I specified them.

 

This article TRANSPORT_CONNECT_TIMEOUT and RETRY_COUNT appeared first on the dbi services Blog.

18c runInstaller -silent


You find two different ‘runInstaller’ under an Oracle Home. The old one, the Oracle Universal Installer, in $ORACLE_HOME/oui/bin. And the new one, in $ORACLE_HOME directly. They have the same name but are completely different. The old one was used to install an Oracle Home from the installation media. But in 18c you don’t use it: it has been used by Oracle to build the Oracle Home image. You simply download and unzip your Oracle Home directly, and you only have to configure it and re-link the binaries. This is done by the new runInstaller, which is at the root of the Oracle Home. Actually, it is just a shell script that runs the Perl dbSetup.pl to set up the Oracle Database software. In my opinion, it would be better to have it called dbSetup.sh rather than renamed to runInstaller, especially given that the same thing for Grid Infrastructure has been called GridSetup.sh since 12cR2. The Perl script finally runs the Java GUI. It can also be run in command line, aka silent mode, which is the goal of this post. The command line arguments are similar, but not the same as in the old runInstaller.

Prerequisites

You may want to run the prerequisite checks only, to see whether your system is ready for the installation. Here is how to do so on the command line:

$ $ORACLE_HOME/runInstaller -silent -executePrereqs -responseFile $ORACLE_HOME/inventory/response/db_install.rsp
 
Launching Oracle Database Setup Wizard...
 
[FATAL] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

From there we can check the log about the tests that have failed, such as in the following example:

INFO: [Aug 11, 2018 6:08:21 PM] Physical Memory: This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:IGNORABLE
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] Run Level: This is a prerequisite condition to test whether the system is running with proper run level.
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
INFO: [Aug 11, 2018 6:08:21 PM] *********************************************
INFO: [Aug 11, 2018 6:08:21 PM] OS Kernel Version: This is a prerequisite condition to test whether the system kernel version is at least "2.6.39-400.211.1".
INFO: [Aug 11, 2018 6:08:21 PM] Severity:CRITICAL
INFO: [Aug 11, 2018 6:08:21 PM] OverallStatus:VERIFICATION_FAILED
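
To quickly extract the failed checks from such a log, a grep like this can help (a sketch; the log path comes from the output above, and -B2 shows the check description above each failure):

grep -B2 'OverallStatus:VERIFICATION_FAILED' /u00/app/oraInventory/logs/InstallActions2018-08-11_06-07-14PM/installActions2018-08-11_06-07-14PM.log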

Software Install

You can pass all parameters on the command line (‘runInstaller -silent -help’ shows all possibilities), but in all cases you need a response file. So I put everything I need in the response file. There’s no mention of the ORACLE_HOME because you have already unzipped it at the right place. The most important parameter is the edition, which seems to accept [EE, SEONE, SE2, HP, XP, PE]. I didn’t try SEONE, but Standard Edition One is for versions <= 12.1.0.1 by the way.

cd $ORACLE_HOME
 
cat > db18EE.rsp <<END
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v18.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u00/app/oraInventory
ORACLE_BASE=/u00/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=dba
END

There is no need for the oracle.install.db.config variables because I’ll install the software only without creating a database.

Here is how to run the dbSetup. You can use ‘-ignorePrereqFailure’ to ignore the prerequisites if you want to install to a host where some prerequisites fail:

./runInstaller -silent -noconfig -ignorePrereqFailure -responseFile ./db18EE.rsp

 
The log of the installation goes into oraInventory/logs and, as usual, you have to run the root.sh:

As a root user, execute the following script(s):
1. /u00/app/oracle/product/18SE/root.sh
 
Execute /u00/app/oracle/product/18SE/root.sh on the following nodes:
[vmreforatun01]

This new runInstaller can also apply one-off patches with -applyOneOffs, mentioning the patch locations. You can also build an Oracle Home image that you have customized, with -createGoldImage -destinationLocation, and even mention some files or paths to exclude with -exclFiles, to make it smaller.
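
For example, a gold image creation could look like this (a sketch, not tested here; the destination path and excluded directory are placeholders):

./runInstaller -silent -createGoldImage -destinationLocation /u00/app/stage -exclFiles $ORACLE_HOME/.patch_storage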

 

This article 18c runInstaller -silent appeared first on the dbi services Blog.
