
ADWC – a Docker container to start/stop Oracle Cloud services


In the previous post, I’ve explained how to start and stop the Autonomous Data Warehouse Cloud service from PSM (PaaS Service Manager). There’s a setup phase, and a run phase starting with start-service and ending with stop-service. And the setup is specific to an Oracle Cloud account, storing information in the local user home. You may want to run different setups, and even provide an easy way to start/stop an Oracle Cloud service without knowing the user, password and tenant name.

A Docker container is perfect to isolate this.

Dockerfile

Here is my quick (aka experimental) Dockerfile:

FROM alpine:latest
RUN apk add --update --no-cache python3 curl jq
ENV user=my.cloud.account@me.com
ENV password=MyP@ssw0rd
ENV tenant=idcs-31bbd63c3cb9466cb8a96f627b6b6116
ENV region=us
# get PSM
RUN curl -X GET -u ${user}:${password} -H X-ID-TENANT-NAME:${tenant} https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/${tenant}/client -o psmcli.zip
# install PSM
RUN pip3 install -U psmcli.zip
# setup PSM
RUN echo "{\"username\":\"${user}\",\"password\":\"${password}\",\"identityDomain\":\"${tenant}\",\"region\":\"${region}\",\"outputFormat\":\"short\"}" > config-payload &&\
psm setup --config-payload config-payload &&\
rm 421d64918638 # remove file with password
# patch PSM for bugs
RUN sed -ie 's/core\\[/]//g' ~/.psm/data/ADWC.json
# variables which can be overwritten at run time
ENV service=ADWC
ENV name=MYADWC1
CMD trap 'echo "Stopping service ${name}...";psm ${service} stop-service --service-name ${name} -wc true ; exit 0' SIGINT SIGSTOP SIGKILL; echo "Starting service ${name}...";psm ${service} start-service --service-name ${name} -wc true || exit 1 ; echo "You can access to ${name} console with ADMIN user at:";echo;psm ${service} service -s ${name} -of json | jq -r '.serviceConsole';echo ; while sleep 60 ; do echo "Status of service ${name} at $(date)...";psm ${service} service -s ${name} ; done

We need curl to download PSM, pip3 to install it, and python3 to run it.
You can set your Oracle Cloud account credentials as environment variables.
The build then fills all required information into a ‘config-payload’ file, runs ‘psm setup’ and removes that file.

At run, it calls a ‘start-service’ and loops while showing the status every minute (you can see them with docker container logs). The INT, STOP and KILL signals call ‘stop-service’. The idea is that while the container exists, the Cloud Service is running, and it is shut down at the container’s end of life. There’s no other action to do with the container: it displays the console URL where you have everything to interact with the service (download client credentials, manage users, go to the Machine Learning notebooks,…).
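
For reference, here is a hedged example of how to build and run it. The image tag matches the run example below; only service and name are meant to be overridden at run time, since the credentials are baked in during the build:

# build the image (user, password and tenant are used at build time by 'psm setup')
docker build -t franck/psm/awdc .
# run it in the foreground; service and name can be overridden with -e
docker run --rm -e service=ADWC -e name=MYADWC1 franck/psm/awdc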

Run example:

Here is a simple example:

# docker run --rm franck/psm/awdc
 
Starting service MYADWC1...
Message: Job submitted successfully for start of service/system
Job ID: 25583108
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].
You can access to MYADWC1 console with ADMIN user at:
 
https://adwc.uscom-east-1.oraclecloud.com/console/index.html?tenant_name=idcs-31bbd63c3cb9466cb8a96f627b6b6116&database_name=MYADWC1
 
Status of service MYADWC1 at Sun Apr 29 18:20:50 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
Status of service MYADWC1 at Sun Apr 29 18:21:51 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
 
^C
 
Stopping service MYADWC1...
Message: Job submitted successfully for stop of service/system
Job ID: 25620930
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].

After two minutes, I hit ‘Control-C’ to stop the container. The service has been cleanly shut down.
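
If you run the container detached rather than in the foreground, a hedged equivalent of the Ctrl-C above is to send SIGINT yourself (the container name here is illustrative):

# start detached and follow the status loop
docker run -d --name myadwc1 franck/psm/awdc
docker container logs -f myadwc1
# trigger the same clean stop-service as Ctrl-C by sending SIGINT to the container
docker kill --signal=SIGINT myadwc1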

 



ADWC – the hidden gem: Zeppelin Notebook


[Photo: IMG_5339]
In the previous blog posts, I explained how to create and stop/start the Autonomous Data Warehouse Cloud service, but I did not show yet how to connect to it. It is easy, from SQL*Plus, SQL Developer, or SQLcl.

But there is something more exciting for running SQL queries: the Oracle Machine Learning Notebooks, based on Apache Zeppelin. At first, I didn’t realize why the administration menu entry to create users in the ADWC service was named ‘Manage Oracle ML Users’, nor why the ‘Autonomous Data Warehouse Cloud’ header was replaced by ‘Machine Learning’.

But last week at IOUG Collaborate 18, I visited the Demo Grounds and, thanks to Charlie Berger, I realized all the power of this: we are in the ‘Machine Learning’ interface here, and the home button opens all the features available to query the ADWC database, including the SQL Notebooks based on Apache Zeppelin.

Here is the path to this hidden gem. From your ADWC service, you go to the Service Console:
[Screenshot: CaptureZepplin001]

Here you log in as the ADMIN user with the 12+ character password that you defined at service creation. Don’t worry if you forgot it, you can reset it from here:
[Screenshot: CaptureZepplin002]

Once connected, you go to the Administration tab and choose ‘Manage Oracle ML Users’:

[Screenshot: CaptureZepplin003]

Here you have to create a user because the ADMIN user is not a Machine Learning user. Machine Learning users need one of the following roles: OML_DEVELOPER, OML_APP_ADMIN, OML_SYS_ADMIN. The user you create here will have OML_DEVELOPER, which is required to use SQL Notebooks.
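
If you want to verify the grants from SQL (connected as ADMIN, for example), a simple query like this one should show the OML roles. This is my own query, not part of the ADWC interface:

select grantee, granted_role
from dba_role_privs
where granted_role like 'OML%'
order by grantee, granted_role;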

Now that you have a user created from here, you can click on this little house icon, which is your home in the Machine Learning part of the ADWC:

[Screenshot: CaptureZepplin004]

Here you connect with the user you have created from the Oracle ML User page (not the ADMIN one as it has no OML role granted).

[Screenshot: CaptureZepplin005]

Then you are in your OML home, ready to run SQL from a Notebook:

[Screenshot: CaptureZepplin006]

I’ll show what you can do in a future post. But just to give you an idea, you have a notebook where you can type a query, execute it, and have the result displayed as a table or as a graph. Here I was looking at I/O latency, and the following shows that the ‘cell single block physical read’ waits (in dark green here), which are nothing else than the buffered one-block-at-a-time reads called ‘db file sequential read’ when not on Exadata, have most of their I/O call time between 128 and 512 microseconds.

[Screenshot: CaptureZepplin007]
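
The chart above comes from a wait event histogram. A query along these lines, my own sketch against V$EVENT_HISTOGRAM_MICRO (available from 12.2, if the view is accessible to your user), returns the underlying numbers:

select event, wait_time_micro, wait_count
from v$event_histogram_micro
where event like '%single block physical read%'
order by event, wait_time_micro;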

I like this way of having the result just under the query, with easy formatting. The code, documented, is in the same place as the result, in a notebook that is easy to refresh or share. And you can export the whole notebook as a simple JSON file.

 


Some ideas about Oracle Database on Docker


This is not a ‘best practice’ but just some ideas about building Docker images to provide an Oracle Database. I started with the images provided by Oracle: https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance and this is great to validate the docker environment. Then I customized for my needs and here are the different points about this customization.

Do not send a huge context at each build attempt

I work by iteration. Sending a 3GB context each time I try a build is a no-go for me, so I quickly stopped putting the Oracle installation .zip in the context of my build. I already blogged about this.

There are several ways to avoid sending a big context, such as putting the .zip on an NFS or HTTP server and ADD it or RUN wget from there. I prefer to build one small image with this .zip that I’ll use later.

In my current directory I have the linuxx64_12201_database.zip which I explicitly send to the context with this .dockerignore:

*
!linuxx64_12201_database.zip

And I build a franck/oracle122/zip image with it:

FROM oraclelinux:7-slim
ADD linuxx64_12201_database.zip /var/tmp
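
The build command for this intermediate image is the usual one (the tag is the one referenced in the next Dockerfile):

docker build -t franck/oracle122/zip .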

When done, I will not have to send the context again, and I will build my image from this one with another Dockerfile:

FROM franck/oracle122/zip
RUN yum -y install oracle-database-server-12cR2-preinstall unzip tar wget openssl vi && rm -rf /var/cache/yum

Do not yum install at each build attempt

In the same idea, I build another intermediate image with the yum install above. The reason is that once I have it, I don’t need internet access anymore. I did that before boarding an 8-hour flight: I built the above Dockerfile as franck/oracle122/prereq while on the airport wifi and will use it later as the base for the final Dockerfile:

.dockerignore:

*

Dockerfile:

FROM franck/oracle122/prereq
# then follow all the work which do not need large context or internet connection
...

Even if you are not on a plane, it is always good to avoid internet access. You probably had to get some doors opened in the firewall in order to pull the base image. Now that you have it, you should keep it. Or one day, the security team will close the door again and you will waste a few hours. That also means that you do not start with a :latest image but with a specific version.

Do the long steps first

The Dockerfile provided by Oracle starts with all ENV and a COPY to add all scripts into the container. The problem is that each time you want to change a script, the build has to start from this step. And then the long operations have to be done again: unzip, install,…

I have a small context here (only the scripts and configuration files) but I ADD or COPY them only when needed. For example, here, a modification in install.rsp will re-do the runInstaller step, but the unzip one will not have to be done again because the cache is re-used:

WORKDIR /var/tmp
RUN unzip linuxx64_12201_database.zip
COPY install.rsp /var/tmp
RUN ./database/runInstaller -silent -force -waitforcompletion -responsefile /var/tmp/install.rsp -ignoresysprereqs -ignoreprereq ; true

The script that will run the container is added only at the end so that I can modify and re-build quickly without re-doing the previous steps.

VOLUME ["/opt/oracle/pdbs"]
EXPOSE 1521 5500
COPY docker_cmd.sh /opt/oracle
CMD exec /opt/oracle/docker_cmd.sh ${CDB} ${PDB}

Another step that I do at the end is removing the files I do not need in the container, because that’s a guess-and-try approach and I want to build quickly. Of course, this may not be optimized for the size of all those layers, but I can reduce the final image later. The main feature of Docker build is the layers, and I use them to develop the Dockerfile without wasting my time. For the waste of storage, I use ZFS with block-level CoW, dedup and compression. For the final image, I’ll --squash it.

Remove all unnecessary files

The detail will probably go into a future blog post. But, as soon as runInstaller is done and the latest bundle patch applied, you can remove a lot of directories that are not needed anymore:

rm -rf $ORACLE_HOME/inventory $ORACLE_HOME/.patch_storage

As soon as the database has been created with DBCA, I do not need the DBCA templates anymore:

rm -rf $ORACLE_HOME/assistants

As this container will run only the instance, I can remove:

rm -rf $ORACLE_HOME/sqldeveloper $ORACLE_HOME/suptools $ORACLE_HOME/jdk

And depending on the options I will provide in the database, I remove the big ones:

rm -rf $ORACLE_HOME/apex $ORACLE_HOME/javavm $ORACLE_HOME/md

There is also a lot to remove from $ORACLE_HOME/lib (I need only a few *.so* that I can determine with strace, perf, lsof, ldd) and from $ORACLE_HOME/bin (basically, I need oracle, tnslsnr, lsnrctl, and sqlplus). Those are executables and you can strip them to reduce the size further. Definitely remove the last relink ones renamed as oracleO, …

Those are just examples; your list will depend on your version and usage, but this may reduce the image to 1GB or less. Of course, this is not supported. But the goal is to provide a small development database, not a reliable and efficient one for production.

Use ZFS for the storage driver

An Oracle Database is full of large files that are updated sparsely. Just forget about OVERLAY and OVERLAY2, which copy the whole file to the new layer when you update a single byte of a file. I do not consider BTRFS seriously. In my opinion, ZFS is the only filesystem to consider for storing Docker images with large files. Enforce deduplication and compression to overcome the inflation of layering and the ignorance of sparse files. I think that recordsize=32k is a good idea from what I’ve seen about how Docker applies writes to layers. More detail in a future blog post.
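
As an illustration only, and under stated assumptions (the pool name and device are mine, and the dedup/compression/recordsize choices should be validated for your workload), the ZFS storage driver setup could look like this:

# create a pool on a dedicated device and a dataset for Docker (stop the docker daemon first)
zpool create -f dockerpool /dev/sdb
zfs create -o mountpoint=/var/lib/docker -o compression=lz4 -o dedup=on -o recordsize=32k dockerpool/docker
# tell Docker to use the zfs storage driver, then restart the daemon
cat > /etc/docker/daemon.json <<'EOF'
{ "storage-driver": "zfs" }
EOF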

Note that layering issues are not only for build efficiency but also for container run. You will see that I put some datafiles in the image. Then, at database open, some blocks are changed (at least the headers) and I do not want a full file copy to the runnable layer. Block level CoW is required for that.

Create the CDB in the container

The container is the place to store all the software, and most of CDB$ROOT and PDB$SEED is part of the software distribution. This is what takes time when creating a database (catalog, catproc,…) and I definitely refuse to give a container to a developer where he will have to wait 10 minutes at run because the database has to be created on the external volume. A ‘docker run’ must be fast. And the external volume must contain only the data that has to be persisted, not 500MB of dbms_% package code, which will be all the same for all containers from the same image.

This means that I create the CDB during the build:

RUN /opt/oracle/product/12.2.0.1/dbhome_1/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName ${CDB} -sid ${CDB} -initParams db_unique_name=${CDB},service_names=${CDB},shared_pool_size=600M,local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))' -createAsContainerDatabase true -numberOfPdbs 0 -sysPassword oracle -systemPassword oracle -emConfiguration NONE -datafileDestination /opt/oracle/oradata -recoveryAreaDestination /opt/oracle/fast_recovery_area -recoveryAreaSize 1024 -storageType FS -sampleSchema false -automaticMemoryManagement false -totalMemory 1024 -databaseType OLTP -enableArchive false -redoLogFileSize 10 -useLocalUndoForPDBs true -createListener LISTENER:1521 -useOMF true -dbOptions OMS:false,JSERVER:false,SPATIAL:false,IMEDIA:false,ORACLE_TEXT:false,SAMPLE_SCHEMA:false,CWMLITE:false,DV:false,APEX:false
RUN rm -rf /opt/oracle/product/12.2.0.1/dbhome_1/assistants/dbca/*

No PDB here, as the PDB will be created at ‘docker run’ time into the external volume. I use a template with datafiles here, but I may prefer to run the whole creation to control it fully. I may even hack some .bsq and .sql files in RDBMS admin to reduce the size. I’m in archivelog mode here because I want to be able to flashback the PDB. The container is ephemeral. If it becomes too large (archive logs, audit, …) just remove it and start another one. Or add a script to remove the old ones (those not required by guaranteed restore points).

Create the PDB in the external volume

The PDB is the only thing that must be persistent and controlled by the developer. I create it with the following in my docker_cmd.sh script (which is called from the Dockerfile CMD line with the CDB name and PDB name as arguments):

create pluggable database $2 admin user admin identified by oracle create_file_dest='/opt/oracle/pdbs';
alter pluggable database $2 open;
show pdbs

The PDB is bound to the external volume ( VOLUME [“/opt/oracle/pdbs”] ) thanks to 12.2 CREATE_FILE_DEST clause so that the developer can create datafiles only there. Then the ‘docker run’ is fast as a clone of PDB$SEED.

The developer will connect only to the PDB. He has nothing to do in CDB$ROOT. If there is a need to change something in CDB$ROOT, I’ll provide a new image. I may even define lockdown profiles rules to limit the PDB and define a listener where only the PDB registers.

Unplug the PDB at container stop

When the developer stops the container, I want to leave something consistent in the external volume. The way to do that quickly is a PDB unplug. An unplug to a PDB archive (a .pdb zip with all datafiles) would be nicer, but that takes too much time to create. I unplug to a .xml file. This is what I do on stop (SIGTERM and SIGSTOP):

alter pluggable database all close;
column pdb_name format a30
select pdb_id,pdb_name,status from dba_pdbs;
begin
for c in (select pdb_name from dba_pdbs where pdb_id>2) loop
dbms_output.put_line('-- Unpluging '||c.pdb_name);
execute immediate 'alter pluggable database "'||c.pdb_name||'" unplug into ''/opt/oracle/pdbs/'||c.pdb_name||'.xml''';
end loop;
for c in (select pdb_name from dba_pdbs where pdb_id>2 and status='UNPLUGGED') loop
dbms_output.put_line('-- Dropping '||c.pdb_name);
execute immediate 'drop pluggable database "'||c.pdb_name||'" keep datafiles';
end loop;
end;
/
-- All pluggable databases have been unplugged:
host ls /opt/oracle/pdbs/*.xml
-- Shutdown the CDB:
shutdown immediate;
-- You can plug those PDBs by passing the volume to a new container

The script iterates on all PDBs but I see no reason to create more than one. I unplug the PDB and drop it, and then shut down the instance. We need the unplug to be completed before the stop timeout. The container may be killed before the drop or shutdown, but as long as we have the .xml we can plug the PDB into a new container.

Plug the PDB at container re-start

I mentioned earlier that at start I create the pluggable database named by ${PDB}. But this happens only when there is no /opt/oracle/pdbs/${PDB}.xml file.
If this file is found, it means that we were given a PDB that was unplugged by a previous container stop.
In that case, when the start detects this file, the following is run:

whenever sqlerror exit failure;
create pluggable database "${PDB}" using '/opt/oracle/pdbs/${PDB}.xml';
host rm /opt/oracle/pdbs/${PDB}.xml
alter pluggable database "${PDB}" open;
select message from pdb_plug_in_violations;

Finally, because I may start a container which has a newer Release Update than the one which unplugged my PDB, I run:

$ORACLE_HOME/OPatch/datapatch

One PDB per container

My scripts process all PDBs but I think that in most cases we need to have a one-to-one relationship between the container and the PDB. The idea is to provide a database that is ready to use and where no administration/troubleshooting is required. The key here is to keep it simple. If you need to provide a large CDB with several PDBs, then Docker is not the solution to your problem. A virtual machine is a better answer for that.

SPfile? Password file?

The image build provided by Oracle stores the persistent configuration files with the database, in the external volume, through symbolic links from ?/dbs. But with my design, I don’t need to. The configuration of the instance, running in the container, stays within the container. The password for SYS is in the container. So the SPFILE and password file stay in the container. The runnable layer is not read-only: it is writable, and we can write there as long as the changes do not have to persist beyond the container’s end of life.

The ‘scope=spfile’ parameters that can be modified by the developer will be PDB parameters. They are persisted because they go to the .xml file at unplug. Only in case of a crash, without a clean unplug, those parameters may be stored only in the container. That’s a special case: a crashed container is not dead and just waits to be re-started.

Crash recovery needs the same container

There’s one big flaw with my CDB-in-container/PDB-in-volume design. The whole database datafiles must be consistent, are checkpointed together, and are all protected by the same redo stream, which is located in the container. But what’s the real problem about that? If the container is cleanly stopped, the PDB is unplugged and there is a clear separation between my external volume and the container. And both are consistent.

However, if the container crashes, the datafiles in my external volume are fuzzy and need recovery. This cannot be done without the CDB files which are on the container. This has only one consequence: the user must know that if the container was not cleanly stopped, she will need to start the PDB with the same container. I don’t think this is a real problem. I just ensure that the user gets the warning (a big README file in the external volume for example, created at start and removed at clean stop) and that the container will always be able to recover (no 100% full filesystem at start – anyway I have some log cleanups at start).

Handle all errors and signals

My startup script handles three situations.
The first one is the first start after creation of the container. This creates the pluggable database.
The second one is the re-start after a clean stop. This plugs the existing pluggable database.
The third one is crash-recovery after a kill. This just runs the automatic instance recovery.

Then the startup script runs in a loop, either tailing the alert.log or displaying some status info every minute.
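
As a minimal sketch of that dispatch (the helper function names are hypothetical, and the check for existing datafiles is only a rough heuristic), the beginning of docker_cmd.sh could look like this:

# $1=CDB name, $2=PDB name (passed from the Dockerfile CMD)
if [ -f "/opt/oracle/pdbs/$2.xml" ] ; then
  plug_existing_pdb "$2"     # previous clean stop: plug the unplugged PDB back in
elif ls /opt/oracle/pdbs/*/ >/dev/null 2>&1 ; then
  startup_with_recovery      # datafiles but no .xml: the container was killed, instance recovery applies
else
  create_new_pdb "$2"        # empty volume: first start, create the PDB
fi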

But before all of that, the startup script must handle the termination signals.

The clean stop is handled by the following signals:

trap 'shutdown SIGINT' SIGINT
trap 'shutdown SIGTERM' SIGTERM
trap 'shutdown EXIT' 0

SIGINT is for ^C when running the container, SIGTERM is when ‘docker stop’, and the signal 0 is when the container exits by itself. This can happen when my ‘tail -f’ on alert log is killed for example. All of them call my shutdown() procedure which is trying a clean stop (unplug the PDBs).

When the stop timeout has expired, or when we do a ‘docker kill’, there’s no time for that. The only thing I do here before a shutdown abort is an ‘alter system checkpoint’ to try to reduce the recovery needed, and display a WARNING message saying that the killed container must not be removed but must be re-started as soon as possible to recover the PDB in the external volume, maybe explicitly naming the container and the command to re-start it.

I do that with an abort() function called by the following:

trap 'abort SIGKILL' SIGKILL
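
A hedged sketch of such an abort() function, based only on the description above (the wording and structure are mine):

abort(){
  echo "WARNING: container killed. Do NOT remove it: re-start this same container ASAP to recover the PDB in the external volume."
  sqlplus -s / as sysdba <<'SQL'
alter system checkpoint;
shutdown abort;
SQL
  exit 1
}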

The kill -9 of the instance, or a container crash, cannot be handled, and recovery is needed just as after a ‘docker kill’. This is the reason for keeping a permanent README file near the PDB, explaining that the container which crashed should be restarted as soon as possible to recover it.

Conclusion

This is not a recipe for how to build an Oracle Database Docker image, just some ideas. The most important thing is to know the requirements. If you provide Oracle on Docker just because the developers want it, the solution will probably be wrong: too large, too long, inefficient, and too complex… They will not use it and they will tell everybody that Oracle is not cool because it cannot be dockerized.
[Diagram: CaptureDockerCDBPDB]
With my PDB-in-volume / CDB-in-container design, I have:

  • Docker images with the ephemeral software, one per version (down to patches), and with different sets of components installed
  • External volume (personal directory in a NFS share, or a local one) with the persistent data and settings
  • Containers running the software on the data, linking them together for the whole lifecycle of the container

Think of them as two USB sticks, one with the software (binaries and packages), and one with the data (user metadata and data). When plugged together in the same container, they run one version of the software with one state of the data. If the container crashes, you just run it again without unplugging any of the sticks. When you are done with your test or development iteration, you stop the container and remove it. The sticks are then unplugged, ready to run another combination of data and software.

 


SQL Developer Web on the Oracle Cloud


You like SQL Developer because it is easy to install (just unzip a jar) and has a lot of features? Me too. It can be even easier if it is provided as a web application: no installation, and no java to take all my laptop RAM…
When I say no installation, you will see that there are a few little things to set up here in DBaaS. That will probably be done for you in the managed services (PDBaaS) such as the ‘Express’ and ‘Autonomous’ ones.

[Screenshot: CaptureSDW010]
Be careful, Oracle is a top-down deployment company. It seems that new products are announced first and then people have to work hard to make them available. Which means that if, like me, you want to test them immediately, you may encounter some disappointment.
The announcement was there. The documentation was there, mentioning that the Cloud Tooling must be upgraded to 18.2.3. But 18.2.3 was there only a few days later. You can check it from the place where the DBaaS looks for its software: https://storage.us2.oraclecloud.com/v1/dbcsswlibp-usoracle29538/dbaas_patch if you are not sure.

So, before being able to see SQL Developer in the colorful DBaaS landing page (where you can also access APEX for example) there’s a bit of command line stuff to do as root.

Install the latest Cloud Tooling

SQL Developer Web needs to be installed with the latest version of ORDS, which is installed with the latest version of Cloud Tooling aka dbaastools.rpm

You need to connect as root, so log in as opc and then sudo:

ssh opc@144.21.89.223
sudo su

Check if there is a new version to install:

dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}'

If something is returned (such as 18.2.3.1.0_180505.1604) you install it:

dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')

Actually I got an error, and I had to ^C:

[root@DB18c opc]# dbaascli dbpatchm --run -toolsinst -rpmversion=$(dbaascli dbpatchm --run -list_tools | awk '/Patchid/{id=$3}END{print id}')
DBAAS CLI version 1.0.0
Executing command dbpatchm --run -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
/var/opt/oracle/patch/dbpatchm -toolsinst -rpmversion=18.2.3.1.0_180505.1604 -cli
Use of uninitialized value in concatenation (.) or string at /var/opt/oracle/patch/dbpatchm line 4773.
^C

But finally, it was installed: the ‘list_tools’ command above now returns nothing.

Enable SQL Developer Web

SQL Developer Web (SDW) is running in ORDS (Oracle REST Data Services) and must be enabled with the ORDS Assistant with the enable_schema_for_sdw action.
Here I’ll enable it at CDB level. I provide a password for the SDW schema. I create it in a file:

cat > password.txt <<<'Ach1z0#d'

You may secure that better than I do, as I’m putting the password on command line here. But this is only a test.

Then, still as root, I call the ORDS assistant to install SDW in C##SQLDEVWEB (as I’m installing it in CDB$ROOT I need a common user name).


/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema="C##SQLDEVWEB" -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true

Here is the output. The last lines are important:

WARNING: Couldn't obtain the "dbname" value from the assistant parameters nor the "$OCDE_DBNAME" environment variable
Starting ORDS
Logfile is /var/opt/oracle/log/ords/ords_2018-05-10_10:44:12.log
Config file is /var/opt/oracle/ocde/assistants/ords/ords.cfg
INFO: Starting environment summary checks...
INFO: Database version : 18000
INFO: Database CDB : yes
INFO: Original DBaaS Tools RPM installed : dbaastools-1.0-1+18.1.4.0.0_180123.1336.x86_64
INFO: Actual DBaaS Tools RPM installed : dbaastools-1.0-1+18.2.3.1.0_180505.1604.x86_64
INFO: DBTools JDK RPM installed : dbtools_jdk-1.8.0-2.74.el6.x86_64
INFO: DBTools JDK RPM "/var/opt/oracle/rpms/dbtools/dbtools_jdk-1.8.0-2.74.el6.x86_64.rpm" MD5 : 48f13bb401677bfc7cf0748eb1a6990d
INFO: DBTools ORDS Standalone RPM installed : dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64
INFO: DBTools ORDS Standalone RPM "/var/opt/oracle/rpms/dbtools/dbtools_ords_standalone-18.1.0.11.22.15-1.el6.x86_64.rpm" MD5 : 480355ac3ce0f357d5741c2c2f688901
INFO: DBTools DBaaS Landing Page RPM installed : dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64
INFO: DBTools DBaaS Landing Page RPM "/var/opt/oracle/rpms/dbtools/dbtools_dbaas_landing_page-2.0.0-1.el6.x86_64.rpm" MD5 : af79e128a56b38de1c3406cfcec966db
INFO: Environment summary completed...
INFO: Action mode is "full"
INFO: Database Role is "PRIMARY"
INFO: Enabling "C##SQLDEVWEB" schema in "CDB$ROOT" container for SQL Developer Web...
 
SQL*Plus: Release 18.0.0.0.0 Production on Thu May 10 10:44:27 2018
Version 18.1.0.0.0
 
Copyright (c) 1982, 2017, Oracle. All rights reserved.
 
 
Connected to:
Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
 
SQL> SQL> SQL> SQL> SQL> SQL> SQL> SQL Developer Web user enable starting...
Enabling "C##SQLDEVWEB" user for SQL Developer Web...
 
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Creating "C##SQLDEVWEB" user
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Call completed.
Commit complete.
PL/SQL procedure successfully completed.
Session altered.
PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
 
"C##SQLDEVWEB" user enabled successfully. The schema to access SQL Developer Web
is "c_sqldevweb"...
 
PL/SQL procedure successfully completed.
 
SQL Developer Web user enable finished...
Disconnected from Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production
Version 18.1.0.0.0
INFO: To access SQL Developer Web through DBaaS Landing Page, the schema "c_sqldevweb" needs to be provided...
INFO: "C##SQLDEVWEB" schema in the "CDB$ROOT" container for SQL Developer Web was enabled successfully...
 

The information to remember here is that I will have to provide the c_sqldevweb schema name (which is the schema name I’ve provided, lowercased and with sequences of ‘special’ characters replaced by an underscore). It is lowercased, but it seems that the schema name itself has to be provided in uppercase.

Basically, what has been done is quite simple: create the C##SQLDEVWEB user and call ORDS.ENABLE_SCHEMA to enable it and map it to the URL.
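
Under the hood, this is roughly equivalent to something like the following. This is my own sketch, not the assistant’s code; the password is illustrative and the URL mapping pattern matches what the assistant produced here:

-- create the user (password and quota are illustrative)
create user C##SQLDEVWEB identified by "Ach1z0#d";
-- enable the schema and map it to the /ords/c_sqldevweb/ URL
begin
  ords.enable_schema(
    p_enabled             => true,
    p_schema              => 'C##SQLDEVWEB',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'c_sqldevweb',
    p_auto_rest_auth      => true);
  commit;
end;
/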

DBCS Landing Page 2.0.0

Now I’m ready to see SQL Developer on the DBCS Landing Page. You access this page by:

  1. Enabling https access from internet (in Access Rules, enable ora_p2_httpssl)
  2. going to default web page for your service, in my case https://144.21.89.223

You may have to accept some self-signed certificates

And here it is with SQL Developer Web in the middle:
[Screenshot: CaptureSDW011]

The above shows PDB1/pdbadmin for the schema, but I installed it at CDB level and the log above tells me that the schema is c_sqldevweb, so I change the schema to c_sqldevweb on the login page. Finally, the direct URL in my example is https://144.21.89.223/ords/c_sqldevweb/_sdw.

I enter C##SQLDEVWEB (uppercase here) as the user and Ach1z0#d as the password.

And here is the Dashboard:
[Screenshot: CaptureSDW012]

Do not worry about the 97% storage used which tells me that SYSTEM is full. My datafiles are autoextensible.

Just go to the SQL Worksheet and check your files:

select tablespace_name,bytes/1024/1024 "MBytes", maxbytes/1024/1024/1024 "MaxGB", autoextensible from dba_data_files

Enable SDW for local PDB user

To enable a PDB local user, I run ORDS assistant with a local user name (PDBADMIN here) and an additional parameter with the PDB name (PDB1 here).


cat > password.txt <<<'Ach1z0#d'
/var/opt/oracle/ocde/assistants/ords/ords -ords_action=enable_schema_for_sdw -ords_sdw_schema=PDBADMIN -ords_sdw_schema_password=$PWD/password.txt -ords_sdw_schema_enable_dba=true -ords_sdw_schema_container=PDB1

Now, I can connect to it with PDB1/pdbadmin as schema name.

Error handling

[Screenshot: CaptureRestCallFail]
If, like me, you are not used to ORDS applications, you may waste some minutes looking at a splash screen waiting for the result. Always look at the message bar. All actions are REST calls and the message bar will show if a call is running or completed successfully or not. The example on the right shows ‘call failed’. You can click on it to see the REST call, and the error.

 


ADWC – connect from your premises


In the previous post about the Autonomous Data Warehouse Service, I’ve run queries through the Machine Learning Notebooks. But you obviously want to connect to it from your premises, with SQL*Net.

[Screenshot: CaptureADWCconnect001]
Of course the connection, going through the public internet, must be secured. If you already use a managed service like the Oracle Exadata Express Cloud Service, you already know how to do it: download a .zip containing the connection string and the wallet and certificate for SQL*Net encryption.

You get it from the Service Console, logged as the ADMIN user, and Administration tab. The Download Client Credentials asks you for the wallet password. However, this is not a password to protect the .zip file and the .zip file contains an auto-login wallet, so keep it secured.

SQL Developer

[Screenshot: CaptureADWCconnect002]
The simplest use of this file is with SQL Developer because you don’t even have to unzip it. Just choose a ‘Cloud PDB’ connection type, enter the path of the .zip file as the Configuration File, the password as the Keystore Password, and the ADMIN user (or any user you have created with the Oracle ML Users page).

In the tnsnames.ora provided in the .zip file there are 3 network service names connecting to 3 different services: _low, _medium and _high. They map to the resource manager plan so that you can run your queries with different priorities.

SQLcl thin

With SQLcl you do not need to unzip the credentials file, at least when you are using thin JDBC (the default).
You just register it with:

18:53:12 SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
Using temp directory:/tmp/oracle_cloud_config4174171941677611695

and you are ready to connect to the _low, _medium and _high services.

As you see, it unzips the file into a temporary directory so you have to do it each time you run SQLcl. You can add this to login.sql and may add some housekeeping as this temporary directory may remain. Or run all this in a docker container.

This is simple, at least if you are running the latest Java 8 which includes the Java Cryptography Extension (JCE). If it is not the case, as when you use the Java Home provided with 18c (1.8.0_152), you have to add the jars yourself. But don’t worry, all is explained:

SQL> set cloudconfig /media/sf_share/ADWC/wallet_ADWC.zip
***** JCE NOT INSTALLED ****
***** CAN NOT CONNECT TO PDB Service without it ****
Current Java: /u01/app/oracle/product/18.0.0/dbhome_1/jdk/jre
Follow instructions on http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
SQL>

Of course the alternative is to install the latest Java

$ sudo rpm -i jre-8u171-linux-x64.rpm
Unpacking JAR files...
plugin.jar...
javaws.jar...
deploy.jar...
rt.jar...
jsse.jar...
charsets.jar...
localedata.jar...

and set JAVA_HOME to it before starting SQLcl

$ export JAVA_HOME=/usr/java/jre1.8.0_171-amd64
$ SQLPATH=~/sql bash $ORACLE_HOME/sqldeveloper/sqlcl/bin/sql /nolog

Anyway, in all cases, once the credential .zip is provided you can connect with Thin JDBC (the default) with a user/password that has been created in the ADWC:

SQL> connect admin@adwc_high
Password? (**********?) ****************
AArray = [B@24959ca4
AArray = [B@10289886
AArray = [B@32115b28
AArray = [B@2ad48653
Connected.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS PARALLEL THREAD# ARCHIVER LOG_SWITCH_WAIT LOGINS SHUTDOWN_PENDING DATABASE_STATUS INSTANCE_ROLE ACTIVE_STATE BLOCKED CON_ID INSTANCE_MODE EDITION FAMILY DATABASE_TYPE
--------------- ------------- --------- ------- ------------ ------ -------- ------- -------- --------------- ------ ---------------- --------------- ------------- ------------ ------- ------ ------------- ------- ------ -------------
6 esj1pod6 12.2.0.1.0 13-MAY-18 OPEN YES 6 STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO 0 REGULAR EE RAC

What’s in the .zip

The tnsnames.ora has entries for the low, medium, high services.

adwc_high = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_low = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_low.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )
 
adwc_medium = (description= (address=(protocol=tcps)(port=1522)(host=adwc.uscom-east-1.oraclecloud.com))(connect_data=(service_name=p7zyfbmcnl4kjy3_adwc_medium.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.uscom-east-1.oraclecloud.com,OU=Oracle BMCS US,O=Oracle Corporation,L=Redwood City,ST=California,C=US")) )

The sqlnet.ora mentions the wallet used for SQL*Net encryption:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

Note that the wallet directory points to ?/network/admin (under the ORACLE_HOME) and you probably want to change it to the TNS_ADMIN directory where you unzip the file.

In the wallet location, you find ewallet.p12, which contains the certificate and private keys and is protected by the password, and cwallet.sso, which can be opened without providing the password, so protect both with file permissions.

You also find keystore.jks, which contains the self-signed certificate in JKS truststore format, referenced from the ojdbc.properties file:

oracle.net.wallet_location=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=${TNS_ADMIN})))

Once you have unzipped the credentials, you can use them to connect with OCI.

SQL*Plus or SQLcl -oci

If you want to connect with an OCI client, you have to unzip this file to your TNS_ADMIN directory. That can be $ORACLE_HOME/network/admin, but be careful not to overwrite existing files, or it can be a new directory you will use by setting the TNS_ADMIN environment variable (or registry entry) to it.

Here are some examples where I set TNS_ADMIN to the directory where I unzipped the credentials:

TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlcl -oci /nolog
TNS_ADMIN=/media/sf_share/ADWC/wallet_ADWC sqlplus /nolog

Any application using OCI (the oracle client, which can be the InstantClient or a full database installation) can use this without providing any password.

Database Link

With the unzipped credentials you can access through OCI, which means that you can also have a database link to the ADWC database. The credentials must be unzipped (or merged) in the TNS_ADMIN (or default ?/network/admin) of the instance:

SQL> create database link ADWC connect to ADMIN identified by "Ach1z0#dAch1z0#d" using 'adwc_high';
Database link created.
 
SQL> select banner from v$version@ADWC;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

Client Credential password

When downloading the .zip you are asked for a password to ‘protect this file to prevent unauthorized database access’. But that is misleading. The .zip is not password protected: anyone can open it. And it contains an auto-login wallet, so anybody can use it and reach the database without this password. Of course, you cannot connect if you don’t have a user/password with the CREATE SESSION privilege, but anyone holding the wallet can reach the database for user credentials verification.

So what is this password used for? We have seen that SQL Developer needs the password (or you will get a failure such as java.io.IOException: Keystore was tampered with, or password was incorrect). Then, you may remove the .sso auto-login wallet from the .zip file when it is used only by SQL Developer. But of course, you have to think about where the password is stored in SQL Developer. Is it more secure than the .sso?

As long as the auto-login wallet is there, you do not need to store the wallet password. But of course, you will protect credential files.

 


12c upuserxt.lst, upobjxt.lst & Oracle Maintained objects/users


Mike Dietrich has blogged recently about upuserxt.lst and upobjxt.lst and how to query them with an external table. The first time I saw those ‘.lst’ files, the default extension for sqlplus spool files, I wondered whether they were provided in ?/rdbms/admin on purpose, or if they were just some leftovers from tests Oracle did before packaging the Oracle Home. Finally, I realized that they were there on purpose and that those ‘.lst’ files are important when upgrading to 12c.

I’ll look at an 18c Oracle Home (?/rdbms/admin) in the Oracle Cloud, but that applies to all 12c (and 18c is a 12cR2 patchset). One of the most important little features of 12c is the tagging of Oracle-supplied objects and users. Before 12c it was a nightmare to distinguish system users from application ones. I detailed that in a previous post.

At database creation: _oracle_script

In a newly created 12c database, all the objects and users belonging to the system are flagged with ORACLE_MAINTAINED=Y

Here is an example listing system users and roles:

SQL> select listagg(username,',' on overflow truncate) within group (order by username) from dba_users where oracle_maintained='Y';
 
LISTAGG(USERNAME,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYUSERNAME)
 
ANONYMOUS,APPQOSSYS,AUDSYS,CTXSYS,DBSFWUSER,DBSNMP,DIP,DVF,DVSYS,GGSYS,GSMADMIN_INTERNAL,GSMCATUSER,GSMUSER,LBACSYS,MDDATA,MDSYS,OJVMSYS,OLAPSYS,ORACLE_OCM,ORDDATA,ORDPLUGINS,ORDSYS,OUTLN,REMOTE_SCHEDULER_AGENT,SI_INFORMTN_SCHEMA,SYS,SYS$UMF,SYSBACKUP,SYSDG,SYSKM,SYSRAC,SYSTEM,WMSYS,XDB,XS$NULL
 
SQL> select listagg(role,',' on overflow truncate) within group (order by role) from dba_roles where oracle_maintained='Y';
LISTAGG(ROLE,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYROLE)
 
ADM_PARALLEL_EXECUTE_TASK,APPLICATION_TRACE_VIEWER,AQ_ADMINISTRATOR_ROLE,AQ_USER_ROLE,AUDIT_ADMIN,AUDIT_VIEWER,AUTHENTICATEDUSER,CAPTURE_ADMIN,CDB_DBA,CONNECT,CTXAPP,DATAPATCH_ROLE,DATAPUMP_EXP_FULL_DATABASE,DATAPUMP_IMP_FULL_DATABASE,DBA,DBFS_ROLE,DBJAVASCRIPT,DBMS_MDX_INTERNAL,DV_ACCTMGR,DV_ADMIN,DV_AUDIT_CLEANUP,DV_DATAPUMP_NETWORK_LINK,DV_GOLDENGATE_ADMIN,DV_GOLDENGATE_REDO_ACCESS,DV_MONITOR,DV_OWNER,DV_PATCH_ADMIN,DV_POLICY_OWNER,DV_PUBLIC,DV_REALM_OWNER,DV_REALM_RESOURCE,DV_SECANALYST,DV_STREAMS_ADMIN,DV_XSTREAM_ADMIN,EJBCLIENT,EM_EXPRESS_ALL,EM_EXPRESS_BASIC,EXECUTE_CATALOG_ROLE,EXP_FULL_DATABASE,GATHER_SYSTEM_STATISTICS,GDS_CATALOG_SELECT,GGSYS_ROLE,GLOBAL_AQ_USER_ROLE,GSMADMIN_ROLE,GSMUSER_ROLE,GSM_POOLADMIN_ROLE,HS_ADMIN_EXECUTE_ROLE,HS_ADMIN_ROLE,HS_ADMIN_SELECT_ROLE,IMP_FULL_DATABASE,JAVADEBUGPRIV,JAVAIDPRIV,JAVASYSPRIV,JAVAUSERPRIV,JAVA_ADMIN,JMXSERVER,LBAC_DBA,LOGSTDBY_ADMINISTRATOR,OEM_ADVISOR,OEM_MONITOR,OLAP_DBA,OLAP_USER,OLAP_XS_ADMIN,OPTIMIZER_PROCESSING_RATE,ORDADMIN,PDB_DBA,PROVISIONER,RDFCTX_ADMIN,RECOVERY_CATALOG_OWNER,RECOVERY_CATALOG_OWNER_VPD,RECOVERY_CATALOG_USER,RESOURCE,SCHEDULER_ADMIN,SELECT_CATALOG_ROLE,SODA_APP,SYSUMF_ROLE,WM_ADMIN_ROLE,XDBADMIN,XDB_SET_INVOKER,XDB_WEBSERVICES,XDB_WEBSERVICES_OVER_HTTP,XDB_WEBSERVICES_WITH_PUBLIC,XS_CACHE_ADMIN,XS_CONNECT,XS_NAMESPACE_ADMIN,XS_SESSION_ADMIN

And here is an example listing the owners of system objects flagged with ORACLE_MAINTAINED=Y:

SQL> select listagg(num||' '||owner,',' on overflow truncate) within group (order by num) from (select owner,count(*) num from dba_objects where oracle_maintained='Y' group by owner);
 
LISTAGG(NUM||''||OWNER,','ONOVERFLOWTRUNCATE)WITHINGROUP(ORDERBYNUM)
 
6 APPQOSSYS,8 DBSFWUSER,8 ORACLE_OCM,8 SI_INFORMTN_SCHEMA,10 ORDPLUGINS,10 OUTLN,13 REMOTE_SCHEDULER_AGENT,22 DVF,24 OJVMSYS,25 OLAPSYS,35 AUDSYS,55 DBSNMP,209 GSMADMIN_INTERNAL,239 LBACSYS,292 ORDDATA,398 DVSYS,399 WMSYS,412 CTXSYS,466 SYSTEM,1029 XDB,2574 MDSYS,3171 ORDSYS,12173 PUBLIC,51069 SYS

How is this done? That’s easy. All system objects are created by Oracle scripts, such as those called by catalog.sql and catproc.sql during database creation. Those scripts set "_oracle_script"=true before running the DDL, and all objects created while "_oracle_script"=true are flagged as Oracle Maintained.

If, in a lab (not in prod), you create your own objects in the same way, they will also be flagged as Oracle Maintained:

SQL> connect / as sysdba
Connected.
SQL> alter session set "_oracle_script"=true;
Session altered.
 
SQL> create user FRANCK identified by myself;
User FRANCK created.
SQL> alter user FRANCK quota unlimited on users;
User FRANCK altered.
 
SQL> create table FRANCK.DEMO(n primary key) as select 1 from dual;
Table FRANCK.DEMO created.
SQL> alter session set "_oracle_script"=false;
Session altered.
 
SQL> select username,oracle_maintained from dba_users where username='FRANCK';
 
USERNAME ORACLE_MAINTAINED
-------- -----------------
FRANCK Y
SQL> select owner,object_name,object_type,oracle_maintained from dba_objects where owner='FRANCK';
 
OWNER OBJECT_NAME OBJECT_TYPE ORACLE_MAINTAINED
------ ---- ----- -
FRANCK DEMO TABLE Y
FRANCK SYS_C007409 INDEX Y

So, this one is easy. Database creation runs with “_oracle_script”=true and objects and users created when this parameter is set to true are flagged as Oracle Supplied objects.

And during upgrades?

When you upgrade from 11g to 12c you don’t have this Oracle Maintained information. The catupgrd script runs with "_oracle_script"=true, but it does not create all objects. However, the upgraded database has all system objects flagged as Oracle Maintained. This is where upuserxt.lst and upobjxt.lst are used.

When Oracle developers build a new Oracle Home to be shipped, they create a database (including all options, I suppose) and then run the utlupox.sql script. This script lists all Oracle Maintained users and objects, just relying on the flag that was set during creation, and spools to the upuserxt.lst and upobjxt.lst files. Those files are then shipped in the Oracle Home (all of that under /rdbms/admin).

These .lst files are used when upgrading from pre-12c in order to set the flags for Oracle Maintained objects. The external tables SYS.USERXT on upuserxt.lst and SYS.OBJXT on upobjxt.lst are created by catupcox.sql and, finally, those tables are read by catuposb.sql to set Oracle Maintained in USER$ and OBJ$. The catuposb.sql script is a bit more complex than that because there are objects that can have a different name when a database is created.

Note that this information about Oracle Maintained objects, in addition to being very useful for us, is crucial when you further convert the non-CDB to a PDB because those will become metadata links.
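
For example, once in a PDB, you can check how the Oracle Maintained objects are shared. This is my own query, relying on the SHARING column of DBA_OBJECTS introduced in 12c:

select sharing, count(*)
from dba_objects
where oracle_maintained='Y'
group by sharing
order by count(*);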

 


ADWC – System and session settings (DWCS lockdown profile)


The Autonomous Data Warehouse Cloud service is a PaaS managed service where we have a PDB and an ADMIN user which has most of the system privileges. For example, we have the privilege to change initialization parameters:
SQL> select * from dba_sys_privs where grantee=user and privilege like 'ALTER S%';
 
GRANTEE PRIVILEGE ADMIN_OPTION COMMON INHERITED
------- --------- ------------ ------ ---------
ADMIN ALTER SESSION YES NO NO
ADMIN ALTER SYSTEM YES NO NO

Still, not everything is allowed, for several reasons: to ensure that we cannot break the Oracle-managed CDB, and to force us to use only the features allowed in the ‘autonomous’ service. This is enforced with a lockdown profile:
SQL> show parameter pdb_lockdown
 
NAME TYPE VALUE
------------ ------ -----
pdb_lockdown string DWCS

DWCS means Data Warehouse Cloud Service, which was the name of the Autonomous Data Warehouse Cloud service until Larry Ellison announced this self-driven-no-human trend under the marketing umbrella of ‘autonomous’.

The limitations are all documented, but I like to see them by myself, and in 18c we have a means to see the lockdown rules from the PDB itself, through V$LOCKDOWN_RULES.

ALTER SYSTEM

Basically, in this ADWC all ALTER SYSTEM statements are disallowed, and then a few exceptions are added for what we are allowed to do:

SQL> select * from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause_option is null;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SYSTEM DISABLE ALL 73
STATEMENT ALTER SYSTEM SET ENABLE COMMON 73
STATEMENT ALTER SYSTEM KILL SESSION ENABLE ALL 73

You can ignore what is enabled for COMMON users because we have no common user to connect to our PDB. We will see which ALTER SYSTEM SET clauses are allowed. But in addition to those, only the ‘KILL SESSION’ is allowed for ALTER SYSTEM.

Here is the detail about the parameters we can set:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='ENABLE';
 
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
APPROX_FOR_AGGREGATION STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace exact aggregation with approximate aggregation
APPROX_FOR_COUNT_DISTINCT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE Replace count distinct with approx_count_distinct
APPROX_FOR_PERCENTILE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 none Replace percentile_* with approx_percentile
AWR_PDB_AUTOFLUSH_ENABLED STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE Enable/Disable AWR automatic PDB flushing
NLS_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICAN NLS language name
NLS_SORT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS linguistic definition name
NLS_TERRITORY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 AMERICA NLS territory name
NLS_CALENDAR STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS calendar system name
NLS_COMP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BINARY NLS comparison
NLS_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS local currency symbol
NLS_DATE_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DD-MON-YYYY HH24:MI:ss NLS Oracle date format
NLS_DATE_LANGUAGE STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS date language name
NLS_DUAL_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 Dual currency symbol
NLS_ISO_CURRENCY STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS ISO currency territory name
NLS_LENGTH_SEMANTICS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 BYTE create columns using byte or char semantics by default
NLS_NCHAR_CONV_EXCP STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE NLS raise an exception instead of allowing implicit conversion
NLS_NUMERIC_CHARACTERS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 NLS numeric characters
NLS_TIMESTAMP_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time stamp format
NLS_TIMESTAMP_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 timestamp with timezone format
NLS_TIME_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time format
NLS_TIME_TZ_FORMAT STATEMENT ALTER SYSTEM SET ENABLE ALL 73 time with timezone format
OPTIMIZER_IGNORE_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables the embedded hints to be ignored
OPTIMIZER_IGNORE_PARALLEL_HINTS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 TRUE enables embedded parallel hints to be ignored
PLSCOPE_SETTINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 identifiers:all plscope_settings controls the compile time collection, cross reference, and storage of PL/SQL source code identifier and SQL statement data
PLSQL_CCFLAGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 PL/SQL ccflags
PLSQL_DEBUG STATEMENT ALTER SYSTEM SET ENABLE ALL 73 FALSE PL/SQL debug
PLSQL_OPTIMIZE_LEVEL STATEMENT ALTER SYSTEM SET ENABLE ALL 73 1 PL/SQL optimize level
PLSQL_WARNINGS STATEMENT ALTER SYSTEM SET ENABLE ALL 73 DISABLE:ALL PL/SQL compiler warnings settings

The APPROX_ ones, disabled by default, can be used to transparently use approximations for faster results.
The NLS_ ones can be used to set NLS defaults for our sessions.
The OPTIMIZER_IGNORE_ ones are new in 18c and are set by default here to ignore embedded hints. However, we can set them to false.
The PLSQL_ ones are the defaults for sessions, and I don’t understand why warnings are not enabled by default. Fortunately, we are able to change that at PDB level.
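
For example, this is accepted from the ADMIN user, since PLSQL_WARNINGS is in the allowed list above (the value is my choice):

alter system set plsql_warnings='ENABLE:ALL';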

There are also some rules which disable specific ALTER SYSTEM SET options. They are there for the common users only (which have ALTER SYSTEM SET enabled), but they are interesting because they show what Oracle chose to set in the ADWC service, and which cannot be changed in the PDB even by their common users:

SQL> select * from v$lockdown_rules left outer join (select upper(name) clause_option,display_value,description from v$parameter) using (clause_option) where rule in ('ALTER SYSTEM') and clause_option is not null and status='DISABLE';
CLAUSE_OPTION RULE_TYPE RULE CLAUSE STATUS USERS CON_ID DISPLAY_VALUE DESCRIPTION
------------- --------- ---- ------ ------ ----- ------ ------------- -----------
DB_FILES STATEMENT ALTER SYSTEM SET DISABLE ALL 73 25 max allowable # db files
"_PDB_INHERIT_CFD" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_AUDIT_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
"_PDB_MAX_DIAG_SIZE" STATEMENT ALTER SYSTEM SET DISABLE ALL 73
MAX_IDLE_TIME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 60 maximum session idle time in minutes
PARALLEL_DEGREE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 AUTO policy used to compute the degree of parallelism (MANUAL/LIMITED/AUTO/ADAPTIVE)
_PARALLEL_CLUSTER_CACHE_POLICY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 ADAPTIVE policy used for parallel execution on cluster(ADAPTIVE/CACHED)
_ENABLE_PARALLEL_DML STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enables or disables parallel dml
RESULT_CACHE_MODE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE result cache operator usage mode
RESULT_CACHE_MAX_RESULT STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 maximum result size as percent of cache size
RESOURCE_MANAGER_PLAN STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FORCE:DWCS_PLAN resource mgr top plan
_CELL_OFFLOAD_VECTOR_GROUPBY STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE enable SQL processing offload of vector group by
PARALLEL_MIN_DEGREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 CPU controls the minimum DOP computed by Auto DOP
_MAX_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 Maximum I/O size in bytes for sequential file accesses
_LDR_IO_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation
_LDR_IO_SIZE2 STATEMENT ALTER SYSTEM SET DISABLE ALL 73 33554432 size of write IOs used during a load operation of EHCC with HWMB
_OPTIMIZER_GATHER_STATS_ON_LOAD_ALL STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online statistics gathering for nonempty segments
_OPTIMIZER_GATHER_STATS_ON_LOAD_HIST STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable/disable online histogram gathering for loads
_DATAPUMP_GATHER_STATS_ON_LOAD STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Gather table statistics during Data Pump load rather thanimporting statistics from the dump file. This should be set to TRUE in the lockdown profile in a DWCS environment.
_OPTIMIZER_ANSWERING_QUERY_USING_STATS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE enable statistics-based query transformation
_PX_XTGRANULE_SIZE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 128000 default size of a external table granule (in KB)
_OPTIMIZER_ALLOW_ALL_ACCESS_PATHS STATEMENT ALTER SYSTEM SET DISABLE ALL 73 FALSE allow all access paths
_DATAPUMP_INHERIT_SVCNAME STATEMENT ALTER SYSTEM SET DISABLE ALL 73 TRUE Inherit and propagate service name throughout job
_DEFAULT_PCT_FREE STATEMENT ALTER SYSTEM SET DISABLE ALL 73 1 Default value of PCT_FREE enforced by DWCS lockdown

So, among the interesting ones, Result Cache is forced for all results (RESULT_CACHE_MODE=FORCE), Parallel DML is enabled for all sessions (but we will see that we can disable it at session level), PCTFREE will always be 1 (_DEFAULT_PCT_FREE=1), and statistics are gathered during load (this is an 18c feature). And we cannot change that.

There are only a few additional parameters which are allowed with ALTER SESSION SET, in addition to those allowed with ALTER SYSTEM SET:

SQL> select * from v$lockdown_rules where rule in ('ALTER SESSION') and clause is not null and clause_option is not null
and (clause_option,status,users) not in (select clause_option,status,users from v$lockdown_rules where rule in ('ALTER SYSTEM') and clause is not null and clause_option is not null)
;
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION SET CONTAINER ENABLE ALL 73
STATEMENT ALTER SESSION SET CURRENT_SCHEMA ENABLE ALL 73
STATEMENT ALTER SESSION SET EDITION ENABLE ALL 73
STATEMENT ALTER SESSION SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES ENABLE ALL 73
STATEMENT ALTER SESSION SET DEFAULT_COLLATION ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE ENABLE ALL 73
STATEMENT ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL ENABLE ALL 73
STATEMENT ALTER SESSION SET TIME_ZONE ENABLE ALL 73

Besides the parameters, here is what we can do with ALTER SESSION:

SQL> select * from v$lockdown_rules where rule='ALTER SESSION' and clause_option is null;
 
RULE_TYPE RULE CLAUSE CLAUSE_OPTION STATUS USERS CON_ID
--------- ---- ------ ------------- ------ ----- ------
STATEMENT ALTER SESSION DISABLE ALL 73
STATEMENT ALTER SESSION SET ENABLE COMMON 73
STATEMENT ALTER SESSION ADVISE COMMIT ENABLE ALL 73
STATEMENT ALTER SESSION CLOSE DATABASE LINK ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION DISABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE COMMIT IN PROCEDURE ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE PARALLEL QUERY ENABLE ALL 73
STATEMENT ALTER SESSION ENABLE RESUMABLE ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DDL ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL DML ENABLE ALL 73
STATEMENT ALTER SESSION FORCE PARALLEL QUERY ENABLE ALL 73

I'll show other rules (other than ALTER SYSTEM and ALTER SESSION statements) in a future post. Lockdown profiles are a great feature because they have very fine granularity and make it easy to document what is allowed or not. Oracle introduced them for their own usage in the public cloud. You can use the same on-premises for your private cloud. This requires the multitenant architecture, but the multitenant option is not mandatory.
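
As an illustration of the on-premises usage, here is a minimal sketch (my own profile name, PDB name and choice of option, not the ADWC ones) of a lockdown profile that disables one ALTER SYSTEM SET clause for a PDB:

sqlplus / as sysdba <<'SQL'
-- create the profile in CDB$ROOT
create lockdown profile my_lockdown;
alter lockdown profile my_lockdown disable statement=('ALTER SYSTEM') clause=('SET') option=('RESULT_CACHE_MODE');
-- assign it to a PDB
alter session set container=PDB1;
alter system set pdb_lockdown='MY_LOCKDOWN';
SQL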

 

The article ADWC – System and session settings (DWCS lockdown profile) appeared first on Blog dbi services.

Which Bitnami service to choose in the Oracle Cloud Infrastructure?


In the Oracle Cloud PaaS you have a marketplace where you can choose your service. Some are packaged from Bitnami and available on multiple OS. My first idea is that in PaaS you do not care about the OS. But the Oracle Cloud has this very nice feature where you still have full access to the OS, as root, even in PaaS. Then, you choose the Linux distribution of your preference. Unless performance is different: they run on different Linux kernels. Is the Oracle Linux Unbreakable Kernel more efficient?

We need to compare with relevant and reliable measures. And for that I’ve created a Postgres service and used Kevin Closson SLOB method, as I’m a lucky beta tester for pgio. I’ll post later about the creation of the Postgres service in the Oracle Cloud.

Cached IO

First, I’ve run cached IO to measure CPU performance.

I’ve run with the following settings in pgio.conf:

UPDATE_PCT=0
RUN_TIME=300
NUM_SCHEMAS=4
NUM_THREADS=1
WORK_UNIT=255
SCALE=1G

This is 4 schemas with 1GB of data each. This fits in my 30GB host. Actually, here is a vmstat sample during the run showing 5GB cached and large free memory:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
4 0 0 15508168 12444 5495652 0 0 0 0 1530 1631 28 22 50 0 0
4 0 0 15508292 12452 5495648 0 0 0 4 1506 1564 27 23 50 0 0

There’s no I/O to block device here, which is my goal.

I’ve run 10 times the runit.sh and here is the summary of main result with IOPS:


$ for i in allopc@144.21.82.244.out allopc@144.21.82.255.out allopc@144.21.82.252.out ; do grep os-release $i | grep PRETTY | tail -1 ; grep x86 $i | tail -1 ; grep -E "runit.out.*DBNAME" $i | tail -10 | nl ; done
 
Fri May 25 11:56:11 UTC 2018 /etc/os-release : PRETTY_NAME="Ubuntu 16.04.4 LTS"
Linux ubuntu 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 11:11:05 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >393232< CACHE_HITS/s >15711<
2 Fri May 25 11:16:06 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398179< CACHE_HITS/s >16000<
3 Fri May 25 11:21:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >402080< CACHE_HITS/s >16019<
4 Fri May 25 11:26:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >404682< CACHE_HITS/s >16086<
5 Fri May 25 11:31:07 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >408524< CACHE_HITS/s >16327<
6 Fri May 25 11:36:08 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >409209< CACHE_HITS/s >16390<
7 Fri May 25 11:41:08 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >403647< CACHE_HITS/s >16327<
8 Fri May 25 11:46:09 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397440< CACHE_HITS/s >15894<
9 Fri May 25 11:51:09 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397273< CACHE_HITS/s >15956<
10 Fri May 25 11:56:11 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >396906< CACHE_HITS/s >15904<
 
Fri May 25 11:54:56 UTC 2018 /etc/os-release : PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
Linux debian 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
1 Fri May 25 11:09:53 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395672< CACHE_HITS/s >15882<
2 Fri May 25 11:14:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >401400< CACHE_HITS/s >16188<
3 Fri May 25 11:19:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >391551< CACHE_HITS/s >15764<
4 Fri May 25 11:24:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >393827< CACHE_HITS/s >15802<
5 Fri May 25 11:29:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >404462< CACHE_HITS/s >16198<
6 Fri May 25 11:34:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >392712< CACHE_HITS/s >15732<
7 Fri May 25 11:39:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >399389< CACHE_HITS/s >16063<
8 Fri May 25 11:44:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >390283< CACHE_HITS/s >15567<
9 Fri May 25 11:49:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >391426< CACHE_HITS/s >15771<
10 Fri May 25 11:54:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >392760< CACHE_HITS/s >15874<
 
Fri May 25 11:53:58 UTC 2018 /etc/os-release : PRETTY_NAME="Oracle Linux Server 7.5"
Linux b5e501 4.1.12-124.15.2.el7uek.x86_64 #2 SMP Tue May 22 11:52:31 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 11:08:54 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395815< CACHE_HITS/s >15759<
2 Fri May 25 11:13:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395346< CACHE_HITS/s >16009<
3 Fri May 25 11:18:55 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395340< CACHE_HITS/s >15898<
4 Fri May 25 11:23:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >402556< CACHE_HITS/s >16200<
5 Fri May 25 11:28:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >397847< CACHE_HITS/s >16039<
6 Fri May 25 11:33:56 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398848< CACHE_HITS/s >16027<
7 Fri May 25 11:38:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398817< CACHE_HITS/s >16089<
8 Fri May 25 11:43:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >398878< CACHE_HITS/s >15961<
9 Fri May 25 11:48:57 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395885< CACHE_HITS/s >15606<
10 Fri May 25 11:53:58 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >395965< CACHE_HITS/s >15893<

I’ve put this data in Excel to show the picture where, basically, all 3 environments perform exactly the same:
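
If you don't have Excel at hand, here is a quick shell sketch (assuming the same .out files as above) to get the average RIOPS per platform:

for i in allopc@144.21.82.244.out allopc@144.21.82.255.out allopc@144.21.82.252.out ; do
 grep PRETTY $i | tail -1
 # extract the RIOPS values of the last 10 runs and average them
 grep -Eo 'RIOPS >[0-9]+<' $i | tail -10 | tr -d 'RIOPS><' | awk '{s+=$1;n++} END{if(n>0)printf " average RIOPS: %d over %d runs\n",s/n,n}'
done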

CapturePGLIOPS

Physical IO

For the second test, I wanted to test physical IOPS. But not to measure disk performance, which is the same for my 3 services. I want to do physical I/O only to see if there’s a difference in context switches when doing non-blocking I/O – which we do not see in the previous test because they were filesystem cache hits. Then I kept the small scale of 4 sessions with 1GB so that there’s a good chance that it remains in the storage cache. But I reduced the memory in order to have less than 4GB in filesystem cache.

pgio comes with a utility (pgio_reduce_free_memory.sh) to allocate enough huge pages to limit the filesystem cache:


$ sudo bash pgio/pgio_reduce_free_memory.sh 2
 
Taking action to reduce free memory down to 2GB available.
total used free shared buff/cache available
Mem: 30886100 124676 30698952 19088 62472 30469900
Swap: 0 0 0
 
Attempting to allocate 13966 huge pages
MemAvailable: 1869148 kB
HugePages_Total: 13966

This is perfect: 13966 huge pages, that’s 27 GB in my 30GB VM that cannot be used by the filesystem, so that my 4x1GB will need I/O calls to the disk.
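
A quick way to verify this (standard Linux interface, nothing pgio-specific) before starting the runs:

# check that the huge pages are allocated and that little memory remains available for the filesystem cache
grep -E 'HugePages_Total|HugePages_Free|MemAvailable' /proc/meminfo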

Here is a vmstat extract to confirm that the filesystem cache is less than 2GB

procs -----------memory---------- ---swap-- -----io---- ---system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 4 0 154416 2344 2015204 0 0 93025 0 32425 24295 2 3 51 44 1
0 4 0 153816 2428 2015848 0 0 94923 21 32400 24414 2 3 51 44 1

runit.sh calls vmstat with a 3-second delay so you can see a higher number of blocks/s and context switches.

Here are the interesting lines from the runit.sh output:

$ for i in allopc@144.21.89.85.out allopc@144.21.89.53.out allopc@144.21.89.26.out ; do grep os-release $i | grep PRETTY | tail -1 ; grep x86 $i | tail -1 ; grep -E "runit.out.*DBNAME" $i | tail -10 | nl ; done
 
Fri May 25 14:58:47 UTC 2018 /etc/os-release : PRETTY_NAME="Ubuntu 16.04.4 LTS"
Linux ubuntu 4.4.0-127-generic #153-Ubuntu SMP Sat May 19 10:58:46 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 14:13:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >1214< CACHE_HITS/s >55<
2 Fri May 25 14:18:42 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >4545< CACHE_HITS/s >195<
3 Fri May 25 14:23:43 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >17053< CACHE_HITS/s >682<
4 Fri May 25 14:28:43 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18865< CACHE_HITS/s >801<
5 Fri May 25 14:33:44 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18933< CACHE_HITS/s >794<
6 Fri May 25 14:38:44 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18832< CACHE_HITS/s >777<
7 Fri May 25 14:43:45 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18826< CACHE_HITS/s >757<
8 Fri May 25 14:48:46 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >19229< CACHE_HITS/s >819<
9 Fri May 25 14:53:46 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19408< CACHE_HITS/s >835<
10 Fri May 25 14:58:47 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19524< CACHE_HITS/s >799<
 
Fri May 25 14:58:41 UTC 2018 /etc/os-release : PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
Linux debian 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1+deb9u1 (2018-05-07) x86_64 GNU/Linux
1 Fri May 25 14:13:35 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >1727< CACHE_HITS/s >82<
2 Fri May 25 14:18:36 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >10743< CACHE_HITS/s >534<
3 Fri May 25 14:23:37 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18660< CACHE_HITS/s >763<
4 Fri May 25 14:28:37 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18496< CACHE_HITS/s >811<
5 Fri May 25 14:33:38 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18537< CACHE_HITS/s >757<
6 Fri May 25 14:38:38 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18647< CACHE_HITS/s >774<
7 Fri May 25 14:43:39 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18838< CACHE_HITS/s >775<
8 Fri May 25 14:48:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18686< CACHE_HITS/s >786<
9 Fri May 25 14:53:40 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18742< CACHE_HITS/s >782<
10 Fri May 25 14:58:41 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 301 seconds. RIOPS >18634< CACHE_HITS/s >793<
 
Fri May 25 14:57:25 UTC 2018 /etc/os-release : PRETTY_NAME="Oracle Linux Server 7.5"
Linux b5e501 4.1.12-124.15.2.el7uek.x86_64 #2 SMP Tue May 22 11:52:31 PDT 2018 x86_64 x86_64 x86_64 GNU/Linux
1 Fri May 25 14:12:20 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >17025< CACHE_HITS/s >721<
2 Fri May 25 14:17:21 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19331< CACHE_HITS/s >792<
3 Fri May 25 14:22:21 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19271< CACHE_HITS/s >770<
4 Fri May 25 14:27:22 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19324< CACHE_HITS/s >802<
5 Fri May 25 14:32:22 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18943< CACHE_HITS/s >802<
6 Fri May 25 14:37:23 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19202< CACHE_HITS/s >818<
7 Fri May 25 14:42:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18999< CACHE_HITS/s >803<
8 Fri May 25 14:47:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19459< CACHE_HITS/s >823<
9 Fri May 25 14:52:24 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >19138< CACHE_HITS/s >836<
10 Fri May 25 14:57:25 UTC 2018 runit.out : DBNAME: pgio. 4 schemas, 1 threads(each). Run time: 300 seconds. RIOPS >18958< CACHE_HITS/s >781<

And I’ve graphed them:
CapturePGPIOPS

As I hit the storage here, I needed a few runs to warm up the storage cache and get the I/O I wanted: low latency from the storage cache, but involving context switches on the server. And basically, all 3 alternatives (Ubuntu with Linux 4.4, Debian with Linux 4.9 and OEL with the Unbreakable Kernel 4.1) behave the same. From these tests, I can say that performance is not the major criterion to choose one of the PaaS alternatives. Just choose the distribution you like.

About pgio, it is great to get performance measures that are reliable and focused on what I want to test. I had no problem running it on all 3 platforms. I just needed apt-get/yum to install sysstat and bc, which are not there by default.

 

The article Which Bitnami service to choose in the Oracle Cloud Infrastructure? appeared first on Blog dbi services.


OpenShift on my Windows 10 laptop with MiniShift


If you want to play with OpenShift on your laptop, you can, in a Virtual Machine. I have VirtualBox installed on my laptop. I'll install Minishift here, which will create the VM to run OpenShift with only a few simple commands. On Linux you can refer to Daniel's post. Here is the Windows version. Oh, and Daniel did that to run Postgres, but my goal is to run an Oracle container of course. Or maybe MySQL.

I’ve downloaded minishift-1.18.0-windows-amd64.zip and unzipped it in D:\Downloads\minishift-1.18.0-windows-amd64 where I have minishift.exe

Minishift

I configure to use VirtualBox

minishift config set vm-driver virtualbox

It is installed in my Windows profile:

C:\Users\fpa\.minishift

Be careful: minishift does not know that we have multiple drives in Windows. When I was running minishift.exe from the D: drive, it was not able to find the virtual machine's files, which were on C:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.9.0'
40.81 MiB / 40.81 MiB [===================================================================================] 100.00% 0s
-- Downloading OpenShift v3.9.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM ..... FAIL E0529 17:08:31.056327 1448 start.go:391] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.

Then, I changed to the C: drive

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>C:
 
C:\Users\fpa>dir \Users\fpa\.minishift\cache\iso\b2d\v1.3.0\
Volume in drive C is OS
Volume Serial Number is 26AE-33F7
 
Directory of C:\Users\fpa\.minishift\cache\iso\b2d\v1.3.0
 
29-May-18 15:22 .
29-May-18 15:22 ..
29-May-18 15:22 41,943,040 minishift-b2d.iso
1 File(s) 41,943,040 bytes
2 Dir(s) 30,652,370,944 bytes free

And I run minishift from there:

C:\Users\fpa>D:minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM .............................. OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
Importing 'openshift/origin:v3.9.0' . CACHE MISS
Importing 'openshift/origin-docker-registry:v3.9.0' . CACHE MISS
Importing 'openshift/origin-haproxy-router:v3.9.0' . CACHE MISS
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ...................................... OK
-- Starting OpenShift cluster ........................
Using Docker shared volumes for OpenShift volumes
Using public hostname IP 192.168.99.102 as the host IP
Using 192.168.99.102 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
  https://192.168.99.102:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin

Docker

That's all. I have a new VM in VirtualBox with its main files in C:/Users/fpa/.minishift

C:/Users/fpa/.minishift/machines/minishift/boot2docker.iso
C:/Users/fpa/.minishift/machines/minishift/disk.vmdk

The VM boots on the Boot2Docker iso, which is the way to run Docker on Windows without enabling HyperV. The first network interface is NAT for internet access. The second one has a DHCP IP from 192.168.99.1

You can control the VM with minishift (start, stop, configure, ssh,…):

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift ssh
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.6, build HEAD : 5ab2289 - Wed Jan 11 03:20:40 UTC 2017
Docker version 1.12.6, build 78d1802

We have everything running in containers here:

docker@minishift:/mnt/sda1$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
77c0ef5a80d7 50c7bffa0653 "/usr/bin/openshift-r" 8 minutes ago Up 8 minutes k8s_router_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
10fb6a2a6b70 9b472363b07a "/bin/sh -c '/usr/bin" 8 minutes ago Up 8 minutes k8s_registry_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
2f6b90fb0bb4 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
3c720d166989 fae77002371b "/usr/bin/origin-web-" 8 minutes ago Up 8 minutes k8s_webconsole_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
bb5870fc0b7e openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
95663bf29487 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
5fc022dbe112 openshift/origin:v3.9.0 "/usr/bin/openshift s" 8 minutes ago Up 8 minutes origin

But we should not have to connect to this machine.

The minishift executable can be used to control anything. As I have docker client installed on my laptop (the Docker Toolbox) I can get the environment variables:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.102:2376
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

and see, from Windows, the docker images that are in the VM:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>where docker
C:\Program Files\Docker Toolbox\docker.exe
 
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.102:2376
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
 
D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 fae77002371b 12 days ago 495.1 MB
openshift/origin-docker-registry v3.9.0 9b472363b07a 12 days ago 464.9 MB
openshift/origin-haproxy-router v3.9.0 50c7bffa0653 12 days ago 1.283 GB
openshift/origin-deployer v3.9.0 e4de3cb64af9 12 days ago 1.261 GB
openshift/origin v3.9.0 83ec0170e887 12 days ago 1.261 GB
openshift/origin-pod v3.9.0 b6d2be1df9c0 12 days ago 220.1 MB

While I’m there, I can run whatever I want as a docker container. Let’s try with Oracle.

I need to login to the Docker store (where I have accepted the license conditions)

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: pachot
Password:
Login Succeeded

Let’s pull the Oracle ‘slim’ image:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker pull store/oracle/database-enterprise:12.2.0.1-slim
12.2.0.1: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Downloading [> ] 1.076 MB/83.31 MB
9d3556e8e792: Downloading [> ] 535.3 kB/151 MB
fc60a1a28025: Download complete
0c32e4ed872e: Download complete
b465d9b6e399: Waiting

And run it:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker run -it --rm --name orcl store/oracle/database-enterprise:12.2.0.1-slim
Setup Oracle Database
Oracle Database 12.2.0.1 Setup
Wed May 30 20:16:56 UTC 2018
 
Check parameters ......
log file is : /home/oracle/setup/log/paramChk.log
paramChk.sh is done at 0 sec
 
untar DB bits ......
log file is : /home/oracle/setup/log/untarDB.log
untarDB.sh is done at 38 sec
...

You may find that funny, but the Oracle images in the Docker store contain only a tarball of the Oracle Home and a pre-created database. It just takes the time to untar those and start the instance, and after 2 minutes I have my database ready. All is untar-ed to the volume, including the software.

Here are the ports that are redirected to:

C:\Users\fpa>SET DOCKER_TLS_VERIFY=1
C:\Users\fpa>SET DOCKER_HOST=tcp://192.168.99.102:2376
C:\Users\fpa>SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
C:\Users\fpa>docker port orcl
1521/tcp -> 0.0.0.0:32771
5500/tcp -> 0.0.0.0:32770

Then, easy to connect with SQL*Net with the credentials provided (see the setup instructions)

C:\Users\fpa>sqlcl sys/Oradoc_db1@//192.168.99.102:32771/orclpdb1.localdomain as sysdba
SQLcl: Release 18.1.1 Production on Wed May 30 22:41:10 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
SQL> select instance_name,host_name from v$instance;
 
INSTANCE_NAME HOST_NAME
------------- ------------
ORCLCDB 254f96463883

OpenShift

So, that’s an easy way to run Oracle on Docker when you have VirtualBox. One download and 5 commands and I’m ready to connect. But that’s not the goal. Here we have OpenShift here to manage multiple Docker containers.

According to the ‘minishift start’ output I have a Web server on https://192.168.99.102:8443 (user: system, password: admin)

It already contains a lot of databases:
CaptureMiniShift001

They are really easy to use. In two clicks I’ve run a MySQL container:
CaptureMiniShift002

If you don't like the GUI, there's the command line interface of OpenShift within the minishift cache:

C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe login https://192.168.99.102:8443
Authentication required for https://192.168.99.102:8443 (openshift)
Username: system
Password:
Login successful.
 
You have one project on this server: "test"
 
Using project "test".
 
C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe status
In project test on server https://192.168.99.102:8443
 
svc/mysql - 172.30.224.246:3306
dc/mysql deploys openshift/mysql:5.7
deployment #1 deployed 7 minutes ago - 1 pod

Now that I have OpenShift running and installed, I’ll be able to run Oracle and manage containers from there. That’s for the next post on this subject.
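
As a teaser for that next post, a possible way to do it from the command line (just a sketch: the Oracle image from the Docker store also requires accepting the terms, an image pull secret and persistent storage, none of which is shown here) could be:

oc new-project oracle
oc new-app --docker-image=store/oracle/database-enterprise:12.2.0.1-slim --name=oracle-db
oc status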

 

The article OpenShift on my Windows 10 laptop with MiniShift appeared first on Blog dbi services.

SQLcl connect target depends on previous connection


I thought it was a bug for two reasons: first, because I don’t like that my connect destination depends on the context, and then because it is a different behavior than in SQL*Plus. But finally, it is the correct behavior, and the 100% compatibility with SQL*Plus is expected only when SET CLASSIC=ON. And as I was surprised, and I think it can be dangerous, here is a blog post about it. Basically, be very careful if you are connected with a service name and you want to connect locally.

Basically, if you attended my demo about 'From Transportable Tablespaces to Pluggable Databases', where I switch between different instances, you may have seen that I had to run 'connect / as sysdba' two times, because the first one failed with invalid username/password.

SQL> connect scott/tiger@//localhost/PDB1
Connected.
...
SQL> connect / as sysdba
USER =
URL = jdbc:oracle:oci8:@//localhost/PDB1
Error Message = ORA-01017: invalid username/password; logon denied
 
Warning: You are no longer connected to ORACLE.
 
SQL> connect / as sysdba
Connected.

This is not a big deal, but that means that it tries to connect to //localhost/PDB1 when I wanted to connect locally to my ORACLE_SID environment variable. Here, expecting a bequeath connection, I didn’t provide a password, then I cannot connect to the PDB. But imagine that I use a password, and the password is the same in the two databases… I would have been connected to the wrong database. Just imagine this:

SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- checking something on PROD
SQL> connect sys/password@PROD as sysdba
Connected.
...
-- back to TEST (or at least I think so)
SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- do terrible things
SQL> drop table VERY_IMPORTANT_TABLE;
Table VERY_IMPORTANT_TABLE dropped.
-- now look where I am:
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@PROD AS SYSDBA
CONNECTION_IDENTIFIER:
PROD

Actually, what happens is that when SQLcl is already connected with a connection string (i.e. not locally using bequeath), the next connect command will use the same connection string. This means that:

connect user/password

is actually equivalent to

connect user/password@&_CONNECT_IDENTIFIER

SQL*Plus

This behavior has been introduced in SQLcl but this is not how SQL*Plus works:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect / as sysdba
Connected.
SQL> define _CONNECT_IDENTIFIER
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)

Disconnect

The first solution to avoid this in SQLcl is to always disconnect before you want to connect to a different service:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> disc
 
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL> connect sys/oracle as sysdba
Connected.
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@ AS SYSDBA
CONNECTION_IDENTIFIER:
CDB$ROOT

This is why the second time was ok in my case: first one failed with invalid password and then I was disconnected.

TWO_TASK

The second solution is to set an impossible TWO_TASK (or LOCAL in Windows) so that local connections are impossible:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect sys/oracle as sysdba
USER = sys
URL = jdbc:oracle:oci8:@NoWhere
Error Message = ORA-12154: TNS:could not resolve the connect identifier specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere
Error Message = IO Error: Unknown host specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere:1521/NoWhere
Error Message = IO Error: Unknown host specified

CLASSIC=ON

The third solution is to run SQLcl in SQL*Plus 100% compatible mode:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> set classic on
SQL> show classic
SQL*Plus mode: ON
SQL> connect / as sysdba
Connected.

Here we have the same behavior as SQL*Plus: no use of current connection string.

The SQL CLASSIC ON is usually for the output (error messages, autotrace statistics, and a few enhancement made to SQLcl). And the online help still says that it is about output:

SQL> help set classic
SET CLASSIC
SET CLASSIC [ ON | OFF ] Set classic SQL*Plus Settings on
This will allow scripts which expect traditional output to be honored.

However, it seems that this CLASSIC mode is also very important for connection.

Test and show _CONNECT_IDENTIFIER
 
If you show the connect identifier in the prompt, this may prevent errors:

SQL> set sqlprompt "_connect_identifier> "
//localhost/CDB2> connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
//localhost/CDB1> connect sys/oracle as sysdba
Connected.

Always check which database

By the way, when I prepare a script that can do some damage when not run at the right place, I usually add a test on the DBID at the top of it:

CDB1> whenever sqlerror exit failure;
CDB1> select 0/decode(dbid,'944121612',1,0) from v$database;
 
Error starting at line : 1 in command -
select 0/decode(dbid,'944121612',1,0) from v$database
Error report -
ORA-01476: divisor is equal to zero

Different passwords

Of course, you should have different passwords on prod and test databases. However, I prefer to have passwords in a wallet (external password store), and then you always connect with the correct credentials as they are recorded for each connect identifier.
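
For reference, a minimal sketch of such a setup (assuming the wallet directory is declared with WALLET_LOCATION and SQLNET.WALLET_OVERRIDE=TRUE in sqlnet.ora, which is not shown here):

# create the wallet and record one credential per connect identifier (mkstore prompts for the passwords)
mkstore -wrl /home/oracle/wallet -create
mkstore -wrl /home/oracle/wallet -createCredential PROD sys
mkstore -wrl /home/oracle/wallet -createCredential TEST sys
# then the connection always uses the password recorded for that service:
sqlplus /@PROD as sysdba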

 

The article SQLcl connect target depends on previous connection appeared first on Blog dbi services.

Installing MAMP to play with PHP, MySQL and OpenFlights


You may wonder what I'm doing with those technologies that are completely different from what I usually do. I'll give the details in a future blog post, but the idea is to give a short introduction to databases to students at https://powercoders.org/, a coding academy for refugees in Switzerland. They install MAMP (My Apache – MySQL – PHP) during their curriculum for website development, and then I'll use this environment to teach data modeling and SQL basics. Thus, I have to look at MAMP, PHP and MySQL for the first time… And I decided to load the OpenFlights open data to play with.
That explains the title.

CaptureMAMP002
So MAMP is like LAMP (Linux+Apache+PHP+MySQL) but with an M for macOS, but also Windows (W being an upside-down M after all). Let's install that on my laptop. I download it from https://www.mamp.info/en/downloads/, run the .exe, and all is straightforward; the installer notifies me that the installation will be completed after a reboot.

What? Reboot? Seriously, we are in 2018, that’s Windows 10, I refuse to reboot to install a simple Apache server!

This bundle is very easy to use: a simple window to start and stop the servers (Apache and MySQL), a setup menu to configure them (but I keep the default), and a link to the start page. All that is installed under C:\MAMP (you can change it, I just kept the default settings). The first time you start the servers, the Windows Firewall configuration is raised and you have to accept it:

CaptureMAMP003

With all defaults (Apache on port 80) my web server pages are on http://localhost (serving the files in C:\MAMP\htdocs) and administration page is at http://localhost/MAMP/
The MySQL administration page (phpMyAdmin) is at http://localhost/MAMP/index.php?page=phpmyadmin. It seems that, at least by default, I don’t need a password to go to the admin pages.

display_errors

I'll write some PHP and, because it's the first time in my life, I will have some errors. With the default configuration, Apache just raises an Error 500, which does not help me a lot with debugging. This configuration is ok for production because displaying errors may give clues to hackers. But I'm here to play and I want to see the error messages and line numbers.

I have to set display_errors=on for that. The current setting is displayed in http://localhost/MAMP/index.php?language=English&page=phpinfo#module_core and I can change it in C:\MAMP\conf\php7.2.1\php.ini, and after a restart of the Apache server I can see full error messages:

Warning: mysqli_real_connect(): (28000/1045): Access denied for user 'root'@'localhost' (using password: YES) in C:\MAMP\htdocs\index.php on line 123

Little fix

But now that I display the errors, I get this annoying message each time I try to do something in phpMyAdmin (which runs as PHP in the same Apache server):

MAMP "count(): Parameter must be an array or an object that implements Countable"

CaptureMAMP004

So this product, which is free but also has a 'PRO' version probably running the same code, is delivered with bad code, raising errors that were ignored. Don't tell me that it is just a warning: you will see that a parenthesis is misplaced, so the code does not do what was intended, and raising only a warning for that is quite bad.
CaptureMAMP006
My common sense tells me that we should set display_errors=on and test a few screens before releasing software. But that step has probably been skipped. Fortunately, the message is clear: line 615 of C:\MAMP\bin\phpMyAdmin\libraries\sql.lib.php

The message is about count() not having the correct parameter. The line 615 shows count($analyzed_sql_results[‘select_expr’] == 1 ) which is probably not correct because it counts a boolean expression. I’ve changed it to (count($analyzed_sql_results[‘select_expr’]) == 1 ) as I suppose they want to count and compare to one.

Well, I’ve never written one line of PHP and I already hate it for its error handling weakness.

Load some data

I want to initialize the database with some data and I’ll use the OpenFlights database. I’ve downloaded and unzipped https://github.com/jpatokal/openflights/blob/master/sql/master.zip
I go to the unzipped directory and run MySQL:

cd /d D:\Downloads\openflights-master

Another little thing to fix here: the sql\create.sql and sql\load-data.sql files contain some lines starting with “\! echo”, but this \! command (to run a system command) exists on Linux and not in the Windows port of MySQL. We have to remove those lines before running the SQL scripts. I'm used to Oracle, where I can port my code and scripts from one platform to the other, and was a bit surprised by this.
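
For example, with a unix-like shell available on Windows (Git Bash or Cygwin, my assumption here), a one-liner can strip them:

# remove the \! echo lines that the Windows mysql client does not understand
sed -i '/^\\! echo/d' sql/create.sql sql/load-data.sql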

Ready to connect:

C:\MAMP\bin\mysql\bin\mysql test --port=3306 --host=localhost --user root --password
Enter password:

The MySQL connection parameters are displayed on http://localhost/MAMP/ including the password (root)


source sql\create.sql
 
mysql> source sql\create.sql
Query OK, 0 rows affected (0.00 sec)
 
Connection id: 314
Current database: flightdb2
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.00 sec)
...

This has created the flightdb2 database, with openflights user, and 15 tables.

Now, if you are still in the unzipped directory, you can load data with 'source sql\load-data.sql', which loads from the data\*.dat files

mysql> source sql\load-data.sql
Query OK, 6162 rows affected, 4 warnings (0.04 sec)
Records: 6162 Deleted: 0 Skipped: 0 Warnings: 4
 
Query OK, 7184 rows affected, 7184 warnings (0.12 sec)
Records: 7184 Deleted: 0 Skipped: 0 Warnings: 7184
 
Query OK, 67663 rows affected (0.53 sec)
Records: 67663 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 260 rows affected (0.01 sec)
Records: 260 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 12 rows affected (0.01 sec)
Records: 12 Deleted: 0 Skipped: 0 Warnings: 0

Query from PHP

So, for my first lines of PHP I’ve added the following to C:\MAMP\htdocs\index.php:

<?php
$user = 'openflights'; $password = '';
$db = 'flightdb2'; $host = 'localhost'; $port = 3306;
 
$conn = mysqli_init();
if (!$conn) {
die("mysqli_init failed");
}
if (!$success = mysqli_real_connect( $conn, $host, $user, $password, $db, $port)) {
die("😠 Connection Error: " . mysqli_connect_error());
}
echo "😎 Connected to database <b>$db</b> as user <b>$user</b>.";
?>
 
<p>
Here are the Airports:
<table border=1>
<tr><th>IATA</th><th>Name</th></tr>
 
<?php
$result = $conn->query("select iata,name from airports where country='Greenland' order by 2");
if ($result->num_rows > 0) {
while($row = $result->fetch_assoc()) {
echo "<tr><td>" . $row["iata"]. "</td><td> " . $row["name"]. "</tr>";
}
} else {
echo "0 results";
}
mysqli_close($conn);
?>
</table>

CaptureMAMP008

Here, I call mysqli_init(), set the credentials and call mysqli_real_connect() to get the connection handle. Then I run my query and display the result as an HTML table. Nothing difficult here. The main challenge is probably to keep the code maintainable.

In my opinion, and despite the small issues encountered, MAMP is a simple way to set up a development environment on Windows. All is there to introduce SQL and databases to developers, and to show how to call them from a programming language.

 

The article Installing MAMP to play with PHP, MySQL and OpenFlights appeared first on Blog dbi services.

Change Data Capture from Oracle with StreamSets Data Collector


With this trend of CQRS architectures where the transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and Change Data Capture become an important component, already at the architecture design phase. This is good for existing product vendors such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka. But also for open source projects: some are already running (Debezium), others (VOODOO) are not yet released.

Today I tested the Oracle CDC Data Collector for StreamSets. StreamSets Data Collector is an open-source project started by former people from Cloudera and Informatica, to define pipelines streaming data from data collectors. It is easy, simple and has a bunch of possible destinations. The Oracle CDC is based on LogMiner, which means that it is easy to set up but may have some limitations (mainly datatypes, DDL replication and performance).

Install

The installation guide is at streamsets.com. I choose the easiest way for testing as they provide a Docker container (https://github.com/streamsets)

# docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
Unable to find image 'streamsets/datacollector:latest' locally
latest: Pulling from streamsets/datacollector
605ce1bd3f31: Pull complete
529a36eb4b88: Pull complete
09efac34ac22: Pull complete
4d037ef9b54a: Pull complete
c166580a58b2: Pull complete
1c9f78fe3d6c: Pull complete
f5e0c86a8697: Pull complete
a336aef44a65: Pull complete
e8d1e07d3eed: Pull complete
Digest: sha256:0428704019a97f6197dfb492af3c955a0441d9b3eb34dcc72bda6bbcfc7ad932
Status: Downloaded newer image for streamsets/datacollector:latest
ef707344c8bd393f8e9d838dfdb475ec9d5f6f587a2a253ebaaa43845b1b516d

CaptureStreamSets001
And that’s all. I am ready to connect with http on port 18630.

The default user/password is admin/admin

The GUI looks simple and efficient. There's a home page where you define the 'pipelines' and monitor them running. In the pipelines, we define sources and destinations. Some connectors are already installed, others can be automatically installed. For Oracle, as usual, you need to download the JDBC driver yourself because Oracle doesn't allow it to be embedded for legal reasons. I'll do something simple here just to check the mining from Oracle.

CaptureStreamSets002CaptureStreamSets003
In ‘Package Manager’ (the little gift icon on the top) go to JDBC and check ‘install’ for the streamsets-datacollector-jdbc-lib library
Then in ‘External Libraries’, install (with the ‘upload’ icon at the top) the Oracle jdbc driver (ojdbc8.jar).
I’ve also installed the MySQL one for future tests:

File Name Library ID
ojdbc8.jar streamsets-datacollector-jdbc-lib
mysql-connector-java-8.0.11.jar streamsets-datacollector-jdbc-lib

Oracle CDC pipeline

I’ll use the Oracle Change Data Capture here, based on Oracle LogMiner. The GUI is very easy: just select ‘Oracle CDC’ as source in a new pipeline. Click on it and configure it. I’ve set the minimum here.
In the JDBC tab I've set only the JDBC Connection String to: jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1 which is my PDB (I'm on Oracle 18c here and multitenant is fully supported by StreamSets). In the Credentials tab I've set 'sys as sysdba' as the username, with its password. The configuration can also be displayed as JSON and here is the corresponding entry:

"configuration": [
{
"name": "hikariConf.connectionString",
"value": "jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1"
},
{
"name": "hikariConf.useCredentials",
"value": true
},
{
"name": "hikariConf.username",
"value": "sys as sysdba"
},
{
"name": "hikariConf.password",
"value": "oracle"
},
...

I've provided SYSDBA credentials and only the PDB service, but it seems that StreamSets figured out automatically how to connect to the CDB (as LogMiner can be started only from CDB$ROOT). The advantage of using LogMiner here is that you need only a JDBC connection to the source, but of course it will use CPU and memory resources from the source database host in this case.

Then I've defined the replication in the Oracle CDC tab: Schema to 'SCOTT' and Table Name Pattern to '%'. Initial Change is 'From Latest Change' as I just want to see the changes and not actually replicate for this first test. But of course, we can define an SCN here, which is what must be used to ensure consistency between the initial load and the replication. Dictionary Source is 'Online Catalog': this is what will be used by LogMiner to map the object and column IDs to table names and column names. But be careful, as table structure changes may not be managed correctly with this option.

{
"name": "oracleCDCConfigBean.baseConfigBean.schemaTableConfigs",
"value": [
{
"schema": "SCOTT",
"table": "%"
}
] },
{
"name": "oracleCDCConfigBean.baseConfigBean.changeTypes",
"value": [
"INSERT",
"UPDATE",
"DELETE",
"SELECT_FOR_UPDATE"
] },
{
"name": "oracleCDCConfigBean.dictionary",
"value": "DICT_FROM_ONLINE_CATALOG"
},

I’ve left the defaults. I can’t think yet about a reason for capturing the ‘select for update’, but it is there.

Named Pipe destination

I know that the destination part is easy. I just want to see the captured changes here and I took the easiest destination: Named Pipe where I configured only the Named Pipe (/tmp/scott) and Data Format (JSON)

{
"instanceName": "NamedPipe_01",
"library": "streamsets-datacollector-basic-lib",
"stageName": "com_streamsets_pipeline_stage_destination_fifo_FifoDTarget",
"stageVersion": "1",
"configuration": [
{
"name": "namedPipe",
"value": "/tmp/scott"
},
{
"name": "dataFormat",
"value": "JSON"
},
...
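
Note that the named pipe itself is just an OS object on the Data Collector host, here inside the container. A small sketch to create it if needed and attach a reader (assuming the streamsets-dc container started above and that mkfifo is available in the image):

docker exec -it streamsets-dc sh -c 'test -p /tmp/scott || mkfifo /tmp/scott ; cat /tmp/scott'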

Supplemental logging

The Oracle redo log stream is by default focused only on recovery (replay of transactions in the same database) and contains only the minimal physical information required for it. In order to get enough information to replay the transactions in a different database, we need supplemental logging for the database, and for the objects involved:

SQL> alter database add supplemental log data;
Database altered.
SQL> exec for i in (select owner,table_name from dba_tables where owner='SCOTT' and table_name like '%') loop execute immediate 'alter table "'||i.owner||'"."'||i.table_name||'" add supplemental log data (primary key) columns'; end loop;
PL/SQL procedure successfully completed.
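
A quick sanity check before starting the pipeline (a sketch, assuming the SCOTT schema is in PDB1 as above):

sqlplus -s / as sysdba <<'SQL'
alter session set container=pdb1;
-- database-level and table-level supplemental logging
select supplemental_log_data_min from v$database;
select owner, table_name, log_group_type from dba_log_groups where owner='SCOTT';
SQL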

Run

And that’s all. Just run the pipeline and look at the logs:

CaptureStreamSets005-log

StreamSets Oracle CDC pulls continuously from LogMiner to get the changes. Here are the queries that it uses for that:

BEGIN DBMS_LOGMNR.START_LOGMNR( STARTTIME => :1 , ENDTIME => :2 , OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.NO_SQL_DELIMITER); END;

This starts mining between two timestamps. I suppose that it will read the SCNs to get a finer grain and avoid overlapping information.

And here is the main one:

SELECT SCN, USERNAME, OPERATION_CODE, TIMESTAMP, SQL_REDO, TABLE_NAME, COMMIT_SCN, SEQUENCE#, CSF, XIDUSN, XIDSLT, XIDSQN, RS_ID, SSN, SEG_OWNER FROM V$LOGMNR_CONTENTS WHERE ((( (SEG_OWNER='SCOTT' AND TABLE_NAME IN ('BONUS','DEPT','EMP','SALGRADE')) ) AND (OPERATION_CODE IN (1,3,2,25))) OR (OPERATION_CODE = 7 OR OPERATION_CODE = 36))

This reads the redo records. The operation codes 7 and 36 are for commits and rollbacks. The operation codes 1, 3, 2 and 25 are those that we want to capture (insert, update, delete, select for update) and were defined in the configuration. Here the pattern '%' for the SCOTT schema has been expanded to the table names. As far as I know, there's no DDL mining here to automatically capture new tables.

Capture

Then I've run this simple insert (I've added a primary key on this table as there is none in utlsampl.sql):

SQL> insert into scott.dept values(50,'IT','Cloud');

And I committed (as it seems that StreamSets buffers the changes until the end of the transaction)

SQL> commit;

and here I got the message from the pipe:

/ $ cat /tmp/scott
 
{"LOC":"Cloud","DEPTNO":50,"DNAME":"IT"}

The graphical interface shows how the pipeline is going:
CaptureStreamSets006

I’ve tested some bulk loads (direct-path inserts) and it seems to be managed correctly. Actually, this Oracle CDC is based on LogMiner so it is fully supported (no mining of proprietary redo stream format) and limitations are clearly documented.

Monitoring

Remember that the main work is done by LogMiner, so don't forget to look at the alert.log on the source database. With big transactions, you may need a large PGA (but you can also choose to buffer to disk). If you have the Oracle Tuning Pack, you can also monitor the main query which retrieves the redo information from LogMiner:
CaptureStreamSets007
You will see a different SQL_ID because the filter predicate uses literals instead of bind variables (which is not a problem here).

Conclusion

This product is very easy to test, so you can do a Proof of Concept within a few hours and test for your context: supported datatypes, operations and performance. By easy to test, I mean: very good documentation, very friendly and responsive graphical interface, very clear error messages,…

 

The article Change Data Capture from Oracle with StreamSets Data Collector appeared first on Blog dbi services.

ADWC new OCI interface


A few things have changed about the Autonomous Data Warehouse Cloud service recently. And I’ve found the communication not so clear, so here is a short post about what I had to do to start the service again. The service has always been on the OCI data centers but was managed with the classic management interface. It has been recently migrated to the new interface:
CaptureADWCnew
Note that ADWC here is the name I've given to my service. It seems that the Autonomous Data Warehouse Cloud service is now referred to by the ADW acronym.

The service itself did not have any outage. The migration concerns only the interface. However, once the migration done, you cannot use the old interface. I went to the old interface with the URL I bookmarked, tried to start the service, and got a ‘last activity START_SERVICE failed’ error message without additional detail.
CaptureADWCfail

You can forget the old bookmark (such as https://psm-tenant.console.oraclecloud.com/psmui/faces/paasRunner.jspx?serviceType=ADWC) and you now have to use the new one (such as https://console.us-ashburn-1.oraclecloud.com/a/db/adws/ocid1.autonomousdwdatabase.oc1.iad.al-long-IAD-identifier)

So I logged in to the console https://console.us-ashburn-1.oraclecloud.com (my service is in the Ashburn-1 region). There I provided the tenant name (which was the cloud account in the old interface), which can also be provided in the URL as https://console.us-ashburn-1.oraclecloud.com/?tenant=tenant. I selected oracleidentitycloudservice as the 'identity provider', entered my username and password, and I am on the OCI console.

From the top-left menu, I can go to Autonomous Data Warehouse. I see nothing until I choose the compartment in the 'list scope'. The ADWC service I had created in the old interface is in the 'tenant (root)' compartment. Here I can start the service.

The previous PSM command line interface cannot be used anymore. We need to install the OCI CLI:

$ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

You will need the Tenancy ID (Tenancy OCID: ocid1.tenancy.oc1..aaaaaaaa… that you find at the bottom of each page in the console) and the User ID (User OCID: ocid1.user.oc1..aaaaaaa… that you find in the 'users' menu). All those OCIDs are documented in https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm

If you used the REST API, the endpoints change completely. You will have to post to something like:

/20160918/autonomousDataWarehouses/ocid1.autonomousdwdatabase.oc1.iad.abuwcljrb.../actions/start

where the OCID is the database one, which you can copy from the console.
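
Once the CLI is configured (oci setup config asks for those OCIDs and generates an API key), starting the service should look like this sketch; the exact command name may differ between CLI versions, so check the output of oci db --help:

oci setup config
oci db autonomous-data-warehouse start --autonomous-data-warehouse-id <the ADW OCID copied from the console>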

 

The article ADWC new OCI interface appeared first on Blog dbi services.

Remote syslog from Linux and Solaris


Auditing operations with Oracle Database is very easy. The default configuration, where SYSDBA operations go to ‘audit_file_dest’ (the ‘adump’ directory) and other operations go to the database may be sufficient to log what is done but is definitely not a correct security audit method as both destinations can have their audit trail deleted by the DBA. If you want to secure your environment by auditing the most privileged accounts, you need to send the audit trail to another server.

This is easy as well, and here is a short demo involving Linux and Solaris as the audited environments. I've created these 3 compute services in the Oracle Cloud:
CaptureSyslog000

So, I have an Ubuntu service where I’ll run the Oracle Database (XE 11g) and the hostname is ‘ubuntu’

root@ubuntu:~# grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 16.04.4 LTS"

I have a Solaris service which will also run Oracle, and the hostname is 'd17872'

root@d17872:~# cat /etc/release
Oracle Solaris 11.3 X86
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
Assembled 03 August 2016

I have an Oracle Enterprise Linux service which will be my audit server, collecting syslog messages from remote hosts; the hostname is 'b5e501' and the IP address in the PaaS network is 10.29.235.150

[root@b5e501 ~]# grep PRETTY /etc/os-release
PRETTY_NAME="Oracle Linux Server 7.5"

Testing local syslog

I start by making sure that syslog works correctly on my audit server:

[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &
[root@b5e501 ~]#
[root@b5e501 ~]# logger -p local1.info "hello from $HOSTNAME"
[root@b5e501 ~]# Jun 20 08:28:35 b5e501 bitnami: hello from b5e501

Remote setting

On the audit server, I uncomment the lines that enable syslog reception over TCP and UDP on port 514

[root@b5e501 ~]# grep -iE "TCP|UDP" /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
# Remote Logging (we use TCP for reliable delivery)

I restart syslog service

[root@b5e501 ~]# systemctl restart rsyslog
Jun 20 08:36:47 b5e501 systemd: Stopping System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2769" x-info="http://www.rsyslog.com"] exiting on signal 15.
Jun 20 08:36:47 b5e501 systemd: Starting System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2786" x-info="http://www.rsyslog.com"] start
Jun 20 08:36:47 b5e501 systemd: Started System Logging Service.

I tail the /var/log/messages (which is my default destination for “*.info;mail.none;authpriv.none;cron.none”)

[root@b5e501 ~]# tail -f /var/log/messages &
[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &

I test with local1.info and check that the message is tailed even when logger sends it through the network:

[root@b5e501 ~]# logger -n localhost -P 514 -p local1.info "hello from $HOSTNAME"
Jun 20 09:18:07 localhost bitnami: hello from b5e501

That’s perfect.

Now I can test the same from my Ubuntu host to ensure that the firewall settings allow TCP and UDP on port 514:


root@ubuntu:/tmp/Disk1# logger --udp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in UDP"
root@ubuntu:/tmp/Disk1# logger --tcp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in TCP"

Here are the correct messages received:

Jun 20 09:24:46 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:24:54 ubuntu bitnami hello from ubuntu in TCP

Destination setting for the audit

As I don’t want to have all messages in /var/log/messages, I’m now setting, on the audit server, a dedicated file for the “local1” facility and “info” level that I’ll use as my Oracle Database audit destination:

[root@b5e501 ~]# touch "/var/log/audit.log"
[root@b5e501 ~]# echo "local1.info /var/log/audit.log" >> /etc/rsyslog.conf
[root@b5e501 ~]# systemctl restart rsyslog
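
If you also want to keep those audit entries out of /var/log/messages, you can exclude the local1 facility from the catch-all rule. A sketch of the relevant lines in /etc/rsyslog.conf:

# catch-all rule, with the local1 facility excluded
*.info;mail.none;authpriv.none;cron.none;local1.none    /var/log/messages
# dedicated destination for the audit trail
local1.info                                             /var/log/audit.log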

After testing the same two ‘logger’ commands from the remote host I check the entries in my new file:

[root@b5e501 ~]# cat /var/log/audit.log
Jun 20 09:55:09 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:55:16 ubuntu bitnami hello from ubuntu in TCP

Remote logging

Now that I validated that remote syslog is working, I set automatic forwarding of syslog messages on my Ubuntu box to send all ‘local1.info’ to the audit server:

root@ubuntu:/tmp/Disk1# echo "local1.info @10.29.235.150:514" >> /etc/rsyslog.conf
root@ubuntu:/tmp/Disk1# systemctl restart rsyslog

This, with a single ‘@’, forwards in UDP. You can double the ‘@’ to forward using TCP.
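
With rsyslog 8 you can also use the newer action() syntax and add a disk-assisted queue, so that audit messages are buffered locally if the audit server is unreachable. A sketch to adapt:

# forward local1.info over TCP, with a queue that spools to disk when the target is down
local1.info action(type="omfwd" target="10.29.235.150" port="514" protocol="tcp"
                   queue.type="LinkedList" queue.filename="audit_fwd"
                   queue.saveOnShutdown="on" action.resumeRetryCount="-1")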

Here I check with logger in local (no mention of the syslog host here):

root@ubuntu:/tmp/Disk1# logger -p local1.info "hello from $HOSTNAME with forwarding"

and I verify that the message is logged in the audit server into /var/log/audit.log

[root@b5e501 ~]# tail -1 /var/log/audit.log
Jun 20 12:00:25 ubuntu bitnami: hello from ubuntu with forwarding

Repeated messages

Note that when testing, you may add “$(date)” to your message in order to see it immediately, because syslog holds back repeated messages to avoid flooding. This:

root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Then another one"

is logged as this:

Jun 20 12:43:12 ubuntu bitnami: message repeated 5 times: [ Always the same message]
Jun 20 12:43:29 ubuntu bitnami: Then another one

I hope that one day this idea will be implemented by Oracle when flooding messages to the alert.log ;)

Oracle Instance

The last step is to get my Oracle instance sending audit messages to the local syslog, with facility.level local1.info, so that they are automatically forwarded to my audit server. I have to set audit_syslog_level to ‘local1.info’ and audit_trail to ‘OS’:

oracle@ubuntu:~$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.2.0 Production on Wed Jun 20 11:48:00 2018
 
Copyright (c) 1982, 2011, Oracle. All rights reserved.
 
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> alter system set audit_syslog_level='local1.info' scope=spfile;
 
System altered.
 
SQL> alter system set audit_trail='OS' scope=spfile;
 
System altered.
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 1068937216 bytes
Fixed Size 2233344 bytes
Variable Size 616565760 bytes
Database Buffers 444596224 bytes
Redo Buffers 5541888 bytes
Database mounted.
Database opened.

It is very easy to check that it works as the SYSDBA and the STARTUP are automatically audited. Here is what I can see in my audit server /var/log/audit.log:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 11:55:47 ubuntu Oracle Audit[27066]: LENGTH : '155' ACTION :[7] 'STARTUP' DATABASE USER:[1] '/' PRIVILEGE :[4] 'NONE' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[13] 'Not Available' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:47 ubuntu Oracle Audit[27239]: LENGTH : '148' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:51 ubuntu Oracle Audit[27419]: LENGTH : '159' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[10] '2860420539'

On the database server, no new files have been written to the adump directory since this startup:

oracle@ubuntu:~/admin/XE/adump$ /bin/ls -alrt
total 84
drwxr-x--- 6 oracle dba 4096 Jun 20 11:42 ..
-rw-r----- 1 oracle dba 699 Jun 20 11:44 xe_ora_26487_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26515_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26519_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26523_1.aud
drwxr-x--- 2 oracle dba 4096 Jun 20 11:48 .
-rw-r----- 1 oracle dba 896 Jun 20 11:48 xe_ora_26574_1.aud

Solaris

I have also started a Solaris service:

opc@d17872:~$ pfexec su -
Password: solaris_opc
su: Password for user 'root' has expired
New Password: Cl0udP01nts
Re-enter new Password: Cl0udP01nts
su: password successfully changed for root
Oracle Corporation SunOS 5.11 11.3 June 2017
You have new mail.
root@d17872:~#

Here, I add the forwarding to /etc/syslog.conf (tab is a required separator which cannot be replaced with spaces) and restart the syslog service:

root@d17872:~# echo "local1.info\t@10.29.235.150" >> /etc/syslog.conf
root@d17872:~# svcadm restart system-log

Then logging a message locally

root@d17872:~# logger -p local1.info "hello from $HOSTNAME with forwarding"

Here is the message received on the audit server:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 05:27:51 d17872.compute-a511644.oraclecloud.internal opc: [ID 702911 local1.info] hello from d17872 with forwarding

Here on Solaris I have the old ‘syslog’ with no syntax to change the UDP port. The default port is defined in /etc/services, and it is the one my audit server listens on:

root@d17872:~# grep 514 /etc/services
shell 514/tcp cmd # no passwords used
syslog 514/udp

If you want more features, you can install syslog-ng or rsyslog on Solaris.

 

Cet article Remote syslog from Linux and Solaris est apparu en premier sur Blog dbi services.

Introduction to databases for {Power.Coders} with MySQL

    This week I took some days off to do something related to my job but a bit different. I’ve given a course on Databases and SQL. But not for my usual customers. And not with the database I know the best. So, it is still in a domain that I know, but out of my comfort zone. And this is something that we should do more often because it gives a little apprehension and a big satisfaction.

    The little apprehension is because there were a lot of unknown parameters for me. I taught students from Power.Coders, a coding academy for refugees: 18 young people with very different backgrounds. Some already knew how to code. Some did front-end stuff and website design in the past but had no idea what a server is. Others are doing all of that for the first time and have to learn what a program is. But there’s one thing that is common to everybody here: all are motivated to learn, understand, and acquire the knowledge and experience to start a career in IT. This is the good energy that makes everything possible.

    The big satisfaction is because, with everybody doing their best, things work and everyone gains confidence. It is out of the comfort zone that you give your best, and that goes for the students as well as for the teacher and the coaches. Because I wanted the course to stay consistent with what they learned in the curriculum, I did the database examples and exercises on MySQL and from PHP code. I had never done that, so I had to study as well. The big advantage I have, from experience, is that I know where to search on the internet. One of the students told me “don’t tell me to google for it, that’s like swimming in the ocean!”. When you are not used to it, ‘googling’ for solutions is not easy. Experienced people do not always consider that and they answer in forums with a rude “RTFM” or “LMGTFY”. I’ve never felt obliged to answer on forums, but when I do it, it is not to argue with the OP about his question and whether he did his own research before. If I choose to answer, then my goal is to explain as clearly as possible. And I do not fear repeating myself, because the more I explain, the better I understand what I explain. I remember my first answers on the dba-village forum. Some questions were recurrent, and each time I tried to answer with a shorter and clearer explanation.

    Google Slides

    I’ve prepared slides and exercises for 3 days and here I share the content (however, there were a lot of whiteboard explanations so the slides may not be sufficient). I made the pages with Google Sites and the presentation with Google Slides. Both of them were, again, new things for me, out of my .ppt comfort zone. It went very well. For a correct presenter experience, I installed the “Google Slides Auto Resize Speaker Notes” Google Chrome extension. One thing annoys me with Google Slides: readers cannot copy/paste from the slides. Please comment here if you have a solution. My workaround was to copy the code to the presenter’s notes and tell the students to open them (with the ‘s’ key) and copy from there. But I don’t like to duplicate the code.

    – Day 1 on Data Structures:
    Data Modeling, YAML, XML, JSON, CSV and introduction to relation tables.

    Exercise: load some OpenFlight data into MySQL

    – Day 2 on Introduction to Databases:
    RDBMS, create and load tables, normalization

    Exercise: a PHP page to query a simple table

    – Day 3 on Introduction to SQL:
    SELECT, INSERT, DELETE, UPDATE, ACID and transactions

    Exercise: a PHP page to list Flights from a multi-criteria form

    In addition to the course, I also did some coaching for their PHP exercises. I discovered this language (which I do not like: meaningless error messages, improbable implicit conversions,…). But at least we were able to make some concepts clearer: what is a web server, sessions, cookies, access to the database… And the method is important. How to approach code that doesn’t work, where nothing is displayed: change the connection parameters to wrong ones to see if we reach this part of the code, explicitly add a syntax error in the SQL statement to see if errors are correctly trapped, echo some variables to see if they are set. Before learning magic IDEs, we must lay down the basics that will help everywhere. The main message is: you are never stuck with an error. There is always a possibility to trace more. And when you have all the details, you can focus your google search better.

    Thanks

    Big thanks to SQLFiddle, where it is easy to run some SQL without installing anything. However, with 20 people on a small Wi-Fi, using local resources is preferable. And we installed MAMP (see here how I discovered it and had to fix a bug at the same time). Big thanks to Chris Saxon’s ‘Database for Developers’ videos, which will help the students to review all the concepts in an entertaining way. Thanks to w3schools for the easy learning content.

    Oh, and thanks to facebook sponsoring-privacy-intrusive-algorithms! Because this is how I heard about PowerCoders. For the first time in my life, I clicked on a sponsored link on a social media. This was for the WeMakeIt crowdfunding project for this powercoders curriculum in Lausanne. I read about the project, I watched the video, and that’s how I decided to participate in this project. You should watch this Christian Hirsig TED talk as well. At a time when everybody is talking about autonomous self-driven cars, his accomplishment was to move from being completely powerless to being back in the driver’s seat…

    And of course thanks to the Powercoders organizers, students, teachers, coaches, mentors and the companies that offer internships to complete the curriculum (I was happy, and proud of my employer, when dbi services joined in immediately).
    Teaching motivated people who want to learn as much as possible is a great experience, and not all days are like this in professional life. And explaining topics that are outside of my comfort zone is a lot of work, but also a rewarding experience. In this world where technology goes faster and faster, showing the approach and the method to adapt to new topics gives a lot of self-confidence.

     

    Cet article Introduction to databases for {Power.Coders} with MySQL est apparu en premier sur Blog dbi services.


18c: No Active Data Guard required (and detected) when only CDB$ROOT and PDB$SEED are opened in read-only


When you clone a PDB on the primary CDB, you want the same to be done automatically on the standby ones. Unfortunately, for this operation, the CDB must be opened read-only. In 12.1 even the PDB needed to be opened read-only (Ludovico Caldara reported this in his blog). This, as far as I know, was fixed in 12.2 where MRP reads the files without the need to have the PDB opened. But another problem comes with online cloning, as reported by Alain Fuhrer in his blog, where the CDB needs to be opened read-only with real-time apply. This again requires the Active Data Guard option, which then becomes mandatory to use the full power of the multitenant option. This raises the price factor from x1.36 for the multitenant option alone to x1.61 with both options, on the primary and the standby servers. Hopefully, this has been addressed in 18c where you don’t need Active Data Guard to have the CDB opened read-only with real-time apply, as long as you ensure that the user PDBs are always closed while apply is on.

This test was done on the Oracle Cloud DBaaS because it is very easy to provision a Data Guard configuration. It is 18.1 because this is what gets created, and applying the 18.2 patch is not so easy. But the behaviour is the same in 18.2.

CDB in MOUNT does not require ADG

The standby is not opened:

SQL> select dbid,name,open_mode,database_role from v$database;
 
DBID NAME OPEN_MODE DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL MOUNTED PHYSICAL STANDBY
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED MOUNTED
4 PDB1 MOUNTED

In the primary, we see that the standby destination is in real time apply, but not opened:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL MOUNTED-STANDBY MANAGED REAL TIME APPLY
3 LOG_ARCHIVE_DEST_3 INACTIVE LOCAL UNKNOWN IDLE
4 LOG_ARCHIVE_DEST_4 INACTIVE LOCAL UNKNOWN IDLE

This is the information that is logged into DBA_FEATURE_USAGE_STATISTICS. Here is the query that counts the usage of real-time query, which is one of the features that require the ADG option:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
0

Here we have no ADG usage recorded because the database is not opened read-only. But if you try some PDB cloning on the primary, you will quickly encounter the problem: the PDB on the standby has no datafiles.
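
You can also double-check, once MMON has refreshed it on the primary, whether the feature was recorded in DBA_FEATURE_USAGE_STATISTICS. A sketch:

SQL> select name,detected_usages,currently_used,last_usage_date from dba_feature_usage_statistics where name like 'Active Data Guard%';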

CDB in READ ONLY and PDBs closed does not require ADG in 18c

I’ll now open the standby read-only:

SQL> alter database open;
Database altered.
 
SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ ONLY NO

Unfortunately, this has also opened the PDB because the saved state was OPEN for the primary database and we cannot manage that properly on the standby (except with Grid Infrastructure or Oracle Restart or custom scripts or database triggers). I close it quickly before the ADG usage is recorded:

SQL> alter pluggable database pdb1 close;
Pluggable database altered.
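
The saved states that are replayed at the next open can be checked from the dictionary, which helps to understand why the PDB came up automatically. A sketch:

SQL> select con_name,instance_name,state from dba_pdb_saved_states;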

So, my standby is opened read-only with real-time apply:

SQL> select dbid,name,open_mode,database_role from v$database;
 
DBID NAME OPEN_MODE DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL READ ONLY WITH APPLY PHYSICAL STANDBY

but all the user PDBs are not opened:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

Here is what we see from the primary about this archive destination:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED REAL TIME APPLY
3 LOG_ARCHIVE_DEST_3 INACTIVE LOCAL UNKNOWN IDLE
4 LOG_ARCHIVE_DEST_4 INACTIVE LOCAL UNKNOWN IDLE

In 12.2, this would have recorded real-time query usage because the standby is opened for queries and we are in managed recovery mode. Here was the 12.2 query to detect it (you can see it in catfusrg.sql):

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
1

But the pattern has been changed in 18c to include ‘QUERY’:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
0

and this ‘QUERY’ word is added only when some user pluggable databases are not in MOUNT but opened for queries.

I quickly check that a PDB cloned on the primary:

SQL> create pluggable database PDB2 from PDB1 keystore identified by "Ach1z0#d";
Pluggable database created.
 
SQL> alter pluggable database PDB2 open;
Pluggable database altered.

has its clone created on the standby, in closed mode:

SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED
5 PDB2 MOUNTED

So, finally, we have a good way to use multitenant features without the need for Active Data Guard. This is good for DBaaS where we want easy cloning and protection with Data Guard, but have no need to query the PDBs on the standby.

CDB in READ ONLY and PDB opened requires ADG in 18c

Now, if I open the PDB:

SQL> select dbid,name,open_mode,database_role from v$database;
 
DBID NAME OPEN_MODE DATABASE_ROLE
---------- --------- -------------------- ----------------
1507386248 ORCL READ ONLY WITH APPLY PHYSICAL STANDBY
 
SQL> alter pluggable database pdb1 open;
 
Pluggable database PDB1 altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ ONLY NO

From the primary, the recovery mode for this standby mentions ‘WITH QUERY’ for the managed real time apply:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status;
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED REAL TIME APPLY WITH QUERY
3 LOG_ARCHIVE_DEST_3 INACTIVE LOCAL UNKNOWN IDLE
4 LOG_ARCHIVE_DEST_4 INACTIVE LOCAL UNKNOWN IDLE

And only in this case the ADG usage is recorded:

SQL> select count(*) from v$archive_dest_status where status = 'VALID' and recovery_mode like 'MANAGED%QUERY' and database_mode = 'OPEN_READ-ONLY';
 
COUNT(*)
----------
1

The proper way to open without activating ADG

In the previous test, I had the PDB opened for a short time when I opened the CDB because of the saved state. If you want to avoid this, the proper way is to ensure that the apply is off when you open it so that you do not have, at the same time, a PDB opened and real time apply.

I’m back to the state where the standby is in mount and apply on:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL MOUNTED-STANDBY MANAGED REAL TIME APPLY

I stop the apply in the standby:

DGMGRL> edit database "ORCL_02" set state=apply-off;
Succeeded.

I check that the primary sees that apply is off:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL MOUNTED-STANDBY IDLE

I open the standby CDB read-only and ensure that all PDBs are closed:

SQL> alter database open;
Database altered.
 
SQL> alter pluggable database all close;
Pluggable database altered.
 
SQL> show pdbs
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 MOUNTED

The primary sees the open state:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY IDLE

I can now restart the apply:

DGMGRL> edit database "ORCL_02" set state=apply-on;
Succeeded.

Then the primary detects the managed recovery:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
 
DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED

Then once the gap is resolved, we are in real time apply:

SQL> select dest_id,dest_name,status,type,database_mode,recovery_mode from v$archive_dest_status where status='VALID';
  DEST_ID DEST_NAME STATUS TYPE DATABASE_MODE RECOVERY_MODE
---------- -------------------- --------- ---------------- --------------- ----------------------------------
1 LOG_ARCHIVE_DEST_1 VALID LOCAL OPEN IDLE
2 LOG_ARCHIVE_DEST_2 VALID PHYSICAL OPEN_READ-ONLY MANAGED REAL TIME APPLY

But there is no mention of ‘WITH QUERY’, thanks to the closed PDBs, which means that no ADG usage is recorded. In summary, 18c has added two possible values for RECOVERY_MODE: ‘MANAGED WITH QUERY’ and ‘MANAGED REAL TIME APPLY WITH QUERY’, and those values, in multitenant, are for a standby CDB opened read-only when at least one user PDB is also opened read-only.

For DBaaS, you probably run with Grid Infrastructure (because Oracle Restart does not allow databases on ACFS). For single-tenant, this new feature can solve the problem of activating the ADG option by mistake, at least if we have good control of the PDB states depending on the role.
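
One way to get that control, hinted at above, is a startup trigger that only opens the user PDBs when the database runs in the primary role. A minimal sketch (name and logic are illustrative, to be adapted and tested for your environment):

create or replace trigger open_pdbs_when_primary
after startup on database
declare
  v_role varchar2(30);
begin
  -- open the user PDBs only when this database is the primary
  select database_role into v_role from v$database;
  if v_role = 'PRIMARY' then
    execute immediate 'alter pluggable database all open';
  end if;
end;
/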

 

Cet article 18c: No Active Data Guard required (and detected) when only CDB$ROOT and PDB$SEED are opened in read-only est apparu en premier sur Blog dbi services.

Kernel panic – not syncing: Out of memory and no killable processes


This is a quick post to give a solution (maybe not the best one as this was just quick troubleshooting) if, at boot, you see something like:
Trying to allocate 1041 pages for VMLINUZ [Linux=EFI, setup=0x111f, size=0x41108d01]
and then:
Kernel panic - not syncing: Out of memory and no killable processes
Pid: 1228 comm: kworker Not tainted 3.8.13-118.17.4.el6uek.x86_64 #2


If you do not see the messages, then you may need to remove the ‘quiet’ option from the kernel command line and replace it with ‘nosplash’ – this is done from GRUB.

So, there’s not enough memory to start the kernel. I got this from a server that had been shut down to be moved to another data center, so at first I checked that the RAM was ok. Actually, the problem was with the configuration of hugepages (vm.nr_hugepages in /etc/sysctl.conf), which was allocating nearly the whole RAM. Probably because the setting had been erroneously changed without checking /proc/meminfo.

So, without any clue about what the problem was, I started by forcing the kernel to use 4GB (instead of getting the size from the hardware info). This is done by pressing any key on the GRUB menu, then ‘e’ to edit, and adding ‘mem=4g’ to the kernel line. Then the boot was ok. Of course, if the server tries to start the Oracle instances, the system starts to swap, so better change the runlevel as well.

This is sufficient to check the huge pages allocation and ensure that you leave enough memory for at least the kernel (but also consider the system, the PGA, …). So don’t forget that the mem= option, usually there to set a maximum limit, may also be useful to guarantee a minimum of RAM for the kernel.
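
Once booted with mem=4g, a quick look at /proc/meminfo shows how far off the setting is, and the fix is simply to lower vm.nr_hugepages in /etc/sysctl.conf and reload it. A sketch:

# how much RAM is there, and how much would the current setting pin?
grep -E 'MemTotal|HugePages_Total|Hugepagesize' /proc/meminfo
# lower vm.nr_hugepages in /etc/sysctl.conf to what the SGAs really need, then reload and verify
sysctl -p
grep -E 'HugePages_Total|HugePages_Free' /proc/meminfo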

 

Cet article Kernel panic – not syncing: Out of memory and no killable processes est apparu en premier sur Blog dbi services.

PGIO, PG_STAT_ACTIVITY and PG_STAT_STATEMENTS


As Oracle DBAs, we are used to joining active sessions (from V$SESSION where status='ACTIVE') with active statements (from V$SQL where users_executing>0) on the SQL_ID. V$ACTIVE_SESSION_HISTORY also displays the TOP_LEVEL_SQL_ID to get the entry point of the user call if we need it. With Postgres it is a bit more difficult, because it seems that PG_STAT_ACTIVITY does not show the active statement but only the top-level one, while pg_stat_statements collects statistics for the final statements.

Here is an example where I’ve run pgio (the SLOB method for Postgres by Kevin Closson) in a Bitnami Postgres Compute service on the Oracle Cloud. pgio runs all statements from a PL/pgSQL function, MYPGIO. This function runs the SELECT statements and some UPDATE statements depending on UPDATE_PCT. Most of the time is spent in those statements and very little in PL/pgSQL itself, which is the goal of pgio: measuring logical and physical I/O without the noise of other application components.

I have added the pg_stat_statements extension by setting the following in postgresql.conf and installing postgresql10-contrib:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all

I’ve reset the statistics before running pgio:

select pg_stat_reset();
select pg_stat_statements_reset();

Then I’ve run runit.sh with 4 schemas and 20 threads.

pg_stat_activity

Here is what I can see in PG_STAT_ACTIVITY when it is running:

postgres=# select substr(query,1,100) query,backend_type,application_name,query_start,wait_event_type,wait_event,state from pg_stat_activity where datid=17021;
 
query | backend_type | application_name | query_start | wait_event_type | wait_event | state
------------------------------------------------------------------------------------------------------+-------------------+------------------------+-------------------------------+-----------------+---------------+--------
select query,backend_type,application_name,query_start,wait_event_type,wait_event,state from pg_stat | client backend | PostgreSQL JDBC Driver | 2018-06-27 20:15:11.578023+00 | Client | ClientRead | idle
autovacuum: VACUUM ANALYZE public.pgio3 | autovacuum worker | | 2018-06-27 20:01:19.785971+00 | | | active
autovacuum: VACUUM ANALYZE public.pgio4 | autovacuum worker | | 2018-06-27 20:02:19.811009+00 | | | active
autovacuum: VACUUM ANALYZE public.pgio1 | autovacuum worker | | 2018-06-27 20:08:19.864763+00 | | | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.915761+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.925117+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.934903+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio2', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.931038+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.9269+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.923288+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio2', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.95252+00 | LWLock | WALWriteLock | active
SELECT * FROM mypgio('pgio3', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.96237+00 | | | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.951347+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.962725+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.985567+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio1', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.98943+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio2', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.976483+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:56.032111+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:55.998013+00 | Lock | transactionid | active
SELECT * FROM mypgio('pgio4', 10, 300, 131072, 255, 8); | client backend | psql.bin | 2018-06-27 20:14:56.043566+00 | Lock | transactionid | active
(20 rows)

My client sessions are active on the MYPGIO function. This is the top-level statement, which runs some SELECTs and UPDATEs, and there is only a very small chance to get samples where the session is active on the controlling procedural code rather than on one of those statements. However, that’s the only thing I can see here.
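
What we can still do from this view is a rough ASH-like sampling of our own, counting active sessions per wait event. A sketch:

-- count active client sessions per wait event (sample this regularly to build an ASH-like profile)
select wait_event_type, wait_event, count(*)
from pg_stat_activity
where state='active' and backend_type='client backend'
group by wait_event_type, wait_event
order by count(*) desc;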

pg_stat_statements

Now looking at PG_STAT_STATEMENTS:

postgres=# create extension pg_stat_statements;
CREATE EXTENSION
 
postgres=# select substr(query,1,100) query,calls,total_time/1e6 seconds,total_time/calls/1e6 "sec./call",rows from pg_stat_statements where dbid=17021 order by total_time desc;
query | calls | seconds | sec./call | rows
------------------------------------------------------------------------------------------------------+-------+---------------------+----------------------+-------
UPDATE pgio1 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2081 | 0.583796520366999 | 0.000280536530690533 | 18729
UPDATE pgio4 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2076 | 0.565067778457 | 0.000272190644728805 | 18684
UPDATE pgio2 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2229 | 0.412313896512 | 0.000184977073356662 | 20061
SELECT sum(scratch) FROM pgio3 WHERE mykey BETWEEN $1 AND $2 | 19199 | 0.317597650247998 | 1.65424058673888e-05 | 19199
UPDATE pgio3 SET scratch = scratch + $1 WHERE mykey BETWEEN $2 AND $3 | 2202 | 0.304893281095001 | 0.000138461980515441 | 19818
SELECT sum(scratch) FROM pgio2 WHERE mykey BETWEEN $1 AND $2 | 19500 | 0.29686205481 | 1.52236951184615e-05 | 19500
SELECT sum(scratch) FROM pgio1 WHERE mykey BETWEEN $1 AND $2 | 17873 | 0.285481468151 | 1.59727783892464e-05 | 17873
SELECT sum(scratch) FROM pgio4 WHERE mykey BETWEEN $1 AND $2 | 17889 | 0.273855678533 | 1.53086074421711e-05 | 17889
SELECT pgio_get_random_number($6, v_scale - v_select_batch_size) | 83129 | 0.00134782461999997 | 1.62136513130192e-08 | 83129
select blks_read from pg_stat_database where datname = $1 | 1 | 1.0456142e-05 | 1.0456142e-05 | 1
create table pgio_audit_table ( c1 numeric , c2 numeric, c3 numeric , c4 numeric, c5 numeric, c6 | 1 | 7.636071e-06 | 7.636071e-06 | 0
CREATE OR REPLACE FUNCTION mypgio( +| 1 | 6.425742e-06 | 6.425742e-06 | 0
v_mytab varchar, +| | | |
v_pctupd int, +| | | |
v_runtime_secs bigint, +| | | |
v_scal | | | |
select substr(query,$1,$2) query,calls,total_time/$3 seconds,total_time/calls/$4 "sec./call",rows fr | 9 | 5.017752e-06 | 5.57528e-07 | 139
CREATE OR REPLACE FUNCTION pgio_get_random_number(BIGINT, BIGINT) RETURNS BIGINT AS $$ +| 1 | 4.217947e-06 | 4.217947e-06 | 0
DECLARE +| | | |
v | | | |
drop table pgio_audit_table | 1 | 3.337471e-06 | 3.337471e-06 | 0
DROP TYPE pgio_return CASCADE | 1 | 2.451694e-06 | 2.451694e-06 | 0
CREATE TYPE pgio_return AS ( +| 1 | 1.746512e-06 | 1.746512e-06 | 0
mypid int, +| | | |
loop_iterations bigint , +| | | |
sql_selects bigint, +| | | |
sql_updates | | | |
select query,backend_type,application_name,query_start,wait_event_type,wait_event,state from pg_stat | 3 | 1.312499e-06 | 4.37499666666667e-07 | 172
SELECT pg_backend_pid() | 80 | 6.10539e-07 | 7.6317375e-09 | 80
select pg_stat_statements_reset() | 1 | 1.8114e-07 | 1.8114e-07 | 1
select datname, blks_hit, blks_read,tup_returned,tup_fetched,tup_updated from pg_stat_database where | 1 | 1.60161e-07 | 1.60161e-07 | 1
select blks_hit from pg_stat_database where datname = $1 | 1 | 1.48597e-07 | 1.48597e-07 | 1
select pg_stat_reset() | 1 | 5.3776e-08 | 5.3776e-08 | 1
SET application_name = 'PostgreSQL JDBC Driver' | 2 | 5.2967e-08 | 2.64835e-08 | 0
SET extra_float_digits = 3 | 2 | 3.7514e-08 | 1.8757e-08 | 0
show shared_buffers--quiet | 1 | 2.1151e-08 | 2.1151e-08 | 0
(26 rows)

Here I can see that the main activity is on the UPDATE and SELECT statements. There’s no mention of the MYPGIO function except for its creation; I don’t even see the calls to it. We can see the same graphically from Orachrome Lighty for Postgres, where only the top-level statement is displayed.

This makes it very difficult to match those two views. From PG_STAT_ACTIVITY I know that I spend time on the MYPGIO function call and its system activity (here a lot of lock waits from the vacuum sessions and some latch contention on the WAL). We have sampling information, but nothing about database time and read/write measures. From PG_STAT_STATEMENTS we have all the measures, but with no link to the sampling activity, and then no link with the wait events. Coming from Oracle databases, the most important thing in Oracle tuning is the DB time, which can be detailed from the system point of view (wait events, CPU, I/O) and from the user point of view (time model, SQL statements, segments). And it can be a cumulative measure (each session measuring the time in a call or an operation) or a sampling one (counting the sessions active, and on which operation). And ASH makes the link between all those dimensions. With this, we can directly address the user response time with our actions on the system, digging from the problem perceived by the end-user down to the internal application or system root cause.

With the current Postgres statistics, it seems that we can look at SQL statements on one side and at the system on the other side. But can we ensure that we really address the user problems? Especially when calling the statements from a function, which is the optimized way to avoid useless roundtrips and context switches? Postgres is evolving a lot thanks to the community. I hope that more and more information will be available in PG_STAT_ACTIVITY to be sampled, linking all the dimensions related to the system and the users. Like with any analytic query, we need all dimensions at the same level of granularity.
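
For reference, later PostgreSQL releases (14 and higher, with compute_query_id enabled) expose a query_id in pg_stat_activity that can be joined to pg_stat_statements.queryid. A sketch of what such a query could look like there (note that total_time is split into total_exec_time in those versions):

-- PostgreSQL 14+ only: join sampled activity to statement statistics
select a.pid, a.wait_event_type, a.wait_event, s.calls, left(s.query,60) as query
from pg_stat_activity a
join pg_stat_statements s on s.queryid = a.query_id
where a.state='active';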

 

Cet article PGIO, PG_STAT_ACTIVITY and PG_STAT_STATEMENTS est apparu en premier sur Blog dbi services.

DataGuard and Transparent Data Encryption


Setting up a Data Guard environment for a database with Transparent Data Encryption requires some additional tasks concerning the encryption keys. Otherwise the steps are the same as for an environment without TDE.
In this blog we will present the tasks we have to do on both primary and standby servers for the keys. We will not describe the procedure to build the standby database. We will just talk about tasks for the wallet and we will verify that data for encrypted tables are being replicated.
We are using oracle 12.2 and a non-container database.
Tasks on primary side
First, on the primary server, we have to configure the keystore location. This is done by updating the sqlnet.ora with the directory which will contain the keys.

[oracle@primaserver ~]$mkdir /u01/app/wallet

[oracle@primaserver admin]$ cat sqlnet.ora
# sqlnet.ora Network Configuration File: /u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.

NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)

# For TDE
ENCRYPTION_WALLET_LOCATION=
 (SOURCE=
  (METHOD=file)
   (METHOD_DATA=
    (DIRECTORY=/u01/app/wallet)))
[oracle@primaserver admin]$

Then, on the primary, we have to create the keystore.

SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/wallet' identified by root ;

keystore altered.

Next we have to open the keystore before creating the master key

SQL> ADMINISTER KEY MANAGEMENT set KEYSTORE open   identified by root ;

keystore altered.

And then we can create the master key.

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY root WITH BACKUP;

keystore altered.

The wallet must be open before we can access encrypted objects. So every time the database starts up, we would have to manually open the wallet. To avoid this, we can create an auto-login wallet which will be opened automatically at each database startup.

SQL> ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/u01/app/wallet' identified by root;

keystore altered.
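
At this point we can check the wallet location, status and type from the dictionary. A sketch:

SQL> select wrl_type,wrl_parameter,status,wallet_type from v$encryption_wallet;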

Tasks on standby side
On the standby side we just have to copy the wallet files and update the sqlnet.ora file.

[oracle@primaserver wallet]$ pwd
/u01/app/wallet
[oracle@primaserver wallet]$ ls
cwallet.sso  ewallet_2018062707462646.p12  ewallet.p12
[oracle@primaserver wallet]$ scp * standserver1:$PWD
oracle@standserver1's password:
cwallet.sso                                   100% 3891     3.8KB/s   00:00
ewallet_2018062707462646.p12                  100% 2400     2.3KB/s   00:00
ewallet.p12                                   

And that’s all. We can now configure our standby database. Below is our configuration:

DGMGRL> show configuration;

Configuration - DGTDE

  Protection Mode: MaxPerformance
  Members:
  DGTDE_SITE1 - Primary database
    DGTDE_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 1 second ago)

DGMGRL>

Now let’s verify that encrypted data are being replicated. We have a table with an encrypted column:

SQL> show user
USER is "SCOTT"
SQL> desc TEST_COL_ENC
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 DESIGNATION                                        VARCHAR2(30) ENCRYPT

SQL> select * from TEST_COL_ENC;

        ID DESIGNATION
---------- ------------------------------
         1 toto
         2 tito
         3 tata

SQL>

And let’s insert some data from the primary:

SQL> insert into TEST_COL_ENC values (4,'titi');

1 row created.

SQL> insert into TEST_COL_ENC values (5,'teti');

1 row created.

SQL> commit;

Commit complete.

SQL>

From the standby let’s query the table

SQL> select db_unique_name,open_mode from v$database;

DB_UNIQUE_NAME                 OPEN_MODE
------------------------------ --------------------
DGTDE_SITE2                    READ ONLY WITH APPLY

SQL> select * from scott.TEST_COL_ENC;

        ID DESIGNATION
---------- ------------------------------
         4 titi
         5 teti
         1 toto
         2 tito
         3 tata

SQL>

To finish, let’s recall the following notes about Data Guard and TDE (from the Oracle documentation):

The database encryption wallet on a physical standby database must be replaced with a fresh copy of the database encryption wallet from the primary database whenever the TDE master encryption key is reset on the primary database.

For online tablespaces and databases, as of Oracle Database 12c Release 2 (12.2.0.1), you can encrypt, decrypt, and re-key both new and existing tablespaces, and existing databases, within an Oracle Data Guard environment. These tasks are automatically performed on the standby once done on the primary. Note that these online tasks cannot be done directly on the standby side.
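
For example, an online conversion started on the primary looks like the following and is then applied transparently on the standby through the redo (tablespace name and algorithm are just examples; without OMF a FILE_NAME_CONVERT clause is also needed):

SQL> alter tablespace users encryption online using 'AES256' encrypt;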

In an offline conversion, the encryption or decryption must be performed manually on both the primary and standby. An offline conversion affects the data files on the particular primary or standby database only. Both the primary and physical standby should be kept at the same state. You can minimize downtime by encrypting (or decrypting) the tablespaces on the standby first, switching over to the primary, and then encrypting (or decrypting) the tablespaces on the primary.

 

Cet article DataGuard and Transparent Data Encryption est apparu en premier sur Blog dbi services.

Statspack installation scripts


When Diagnostic Pack is disabled, either because you don’t have Diagnostic Pack or because you are in Standard Edition, I highly recommend installing Statspack. When you need it, to investigate an issue that occurred in the past, you will be happy to have it already installed and gathering snapshots.

In order to be sure to have it installed correctly, there’s a bit more to do than just what is described in spcreate.doc, and I detail that in a UKOUG Oracle Scene article, Improving Statspack Experience.

For easy download, I’ve put the scripts on GitHub: https://github.com/FranckPachot/scripts/tree/master/statspack

You will find the following scripts for Statspack installation:

  • 01-install.sql to create the tablespace and call spcreate
  • 02-schedule.sql to schedule snap and purge jobs (see the sketch after this list)
  • 03-idle-events.sql to fix issue described at https://blog.dbi-services.com/statspack-idle-events/
  • 04-modify-level.sql to collect execution plans and segment statistics
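
For illustration, scheduling the snapshots is typically a job like the following, here with an hourly interval (a sketch, not necessarily exactly what 02-schedule.sql does):

begin
  dbms_scheduler.create_job(
    job_name        => 'PERFSTAT.STATSPACK_SNAP',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'statspack.snap',
    repeat_interval => 'FREQ=HOURLY;BYMINUTE=0',
    enabled         => true);
end;
/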

And some additional scripts:

  • 08-create-delta-views.sql to create views easy to query
  • 11-comment-with-load.sql to add load information to snapshots without comments

You can leave feedback and comments on Twitter.

 

Cet article Statspack installation scripts est apparu en premier sur Blog dbi services.
