Oracle – Container Database Where Am I


Oracle 12c – Where am I

In the good old days it was relatively easy to figure out which database you were in: a quick select on v$instance and you were good.

However, in 12c, with the introduction of pluggable databases inside container databases, things get a little more complicated. When you issue sqlplus / as sysdba, you are placed in the parent (root) container.

Once you are logged in you can change to a pluggable database by issuing:

alter session set container=<my_pluggable_db_name>;
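To switch back to the parent afterwards, the same command works with CDB$ROOT, the standard name for the root container:

alter session set container=CDB$ROOT;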

If you issue the commands:

show con_id
show con_name

SQL*Plus will report the container ID and container name.

Now that you are in the pluggable database you can use the new v$pdbs view, which reports the same kind of information for a pluggable database that v$instance reports for the container database.

If you log into the container database you can select from v$pdbs and it will report all the pluggable databases within the container.
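For example, from the container database, a quick sketch of such a query (the columns shown are standard ones from the v$pdbs view):

select con_id, name, open_mode from v$pdbs;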


PeopleSoft – Kill Application Engine (Gently)

Kill Application Engine (AE)

It amazes me that in the world of these high-end databases, simple data handling can go for a crap simply because data was bulk loaded into the system. Usually during big changes to an environment: data conversions, upgrades, archiving or other aggressive changes, the data becomes out of sync with the statistics that are stored on the tables.

What are database statistics? In a nutshell, statistics are data about data, or simply metadata: Oracle statistics are metadata about your data. There are several kinds of statistics in Oracle, mainly object statistics, system statistics and fixed-table statistics.

So if you have bad statistics, your Application Engine may not be able to pick an efficient execution plan, and the process will simply appear to be hung. I have seen processes run for days and never return a result because of bad statistics. After updating the statistics, those exact same processes that ran for days will run in seconds, sometimes faster.

Okay, so now the catch is that you have to “kill” the process in order for it to be re-run and pick up the correct statistics. This isn’t the easiest thing in the world to do, because if you try to just cancel the job through the Process Scheduler it may or may not cancel nicely. However, if you “kill” the hung SQL at the database level, the Application Engine will go to ERROR, and you can restart the AE from the last commit point it reached. This is much cleaner than any other method I have found, especially if the Application Engine had already done a huge amount of work before running into this issue. Upgrades are notorious for this, which is why they have “update statistics” steps all throughout the upgrade process now.

So in Oracle SQL we need to find the SQL that is causing the issue:

select * from v$session where program like '%PSAESRV%';  -- see the SQL running under the Application Engine server

select * from v$sql where sql_id = '{sql_id from v$session}';  -- confirm you have the right SQL

alter system kill session '{sid},{serial#}';
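If it helps, here is a sketch that pulls the session identifiers and the statement text in one pass by joining v$session to v$sql (a session may show more than one row if the cursor has multiple children):

select s.sid, s.serial#, s.sql_id, q.sql_text
from v$session s, v$sql q
where s.sql_id = q.sql_id
and s.program like '%PSAESRV%';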

It may take a few minutes to find the SQL causing the problems, but don’t worry, it will still be running after you find it.  Once it is killed, the AE will go to error, and you can restart it.  It will start up at the last commit point, which you can usually see if you query the AE Control table:

select * from ps_aeruncontrol;

Rerun your statistics for the schema or for the tables in question.

exec dbms_stats.gather_schema_stats('SYSADM', cascade=>true);
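If gathering statistics for the whole schema is too heavy in the middle of an incident, you can gather for just the tables in question instead; the table name below is a made-up example:

exec dbms_stats.gather_table_stats('SYSADM', 'PS_MY_BIG_TABLE', cascade=>true);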

Good luck out there!

Oracle Sequences Create & Use


Oracle Sequences

Sometimes in the middle of the night you will be working on a conversion effort and you will need to populate a field with a sequence of values. This is easily done using Excel, but if you are dynamically loading data you may want a more flexible approach. Oracle sequences have that ability.

Create A Sequence:

CREATE SEQUENCE adjustment_fix START WITH 1001 INCREMENT BY 1 NOCACHE NOCYCLE;

So this sequence will start at 1001, and each subsequent value will be greater by 1: 1002, 1003, 1004, etc. I had an oddball situation where I needed an 18-digit number that increased by 10000000, so in my case it was:

CREATE SEQUENCE adjustment_fix START WITH 300000000010000000 INCREMENT BY 10000000 NOCACHE NOCYCLE;

Now that you have the sequence, you need to populate the rows with it. So I created a temporary table, loaded all the values into it, and added another field for the sequence number. I tried to pull the sequence while doing the insert into the staging table, but Oracle whined and complained (NEXTVAL is not allowed in certain constructs, such as subqueries or SELECTs with DISTINCT or set operators). So I did the insert and then updated the rows:

UPDATE STAGING_TABLE_4_ADJ SET ADJ_SEQ = adjustment_fix.nextval;
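Pulling it all together, a minimal end-to-end sketch of the pattern (the source table and column names are made up for illustration):

CREATE SEQUENCE adjustment_fix START WITH 1001 INCREMENT BY 1 NOCACHE NOCYCLE;
CREATE TABLE staging_table_4_adj (adj_value VARCHAR2(30), adj_seq NUMBER);
INSERT INTO staging_table_4_adj (adj_value) SELECT src_value FROM my_source_table;
UPDATE staging_table_4_adj SET adj_seq = adjustment_fix.nextval;
COMMIT;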

BAM!  Now I have 2200 rows of adjustment data with a unique sequence number on each row so that I can load the staging data tables.


Oracle – Audit Create Session Logins


Audit Create Session Logins

I had an interesting security breach the other day where a developer who had SQL access to the production environment gave out his access to basically “everybody”. There was a huge panic, and at the end of the day I found a tip from Burleson at dba-oracle.com showing that you can easily turn on auditing for session creation, which gives a wonderful account of the people failing to log in because of the changed credentials.

First you need to set the audit parameters and bounce the database if auditing isn’t already enabled:

audit_trail=db
audit_file_dest='/dest/to/my/audit/logs/'
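Both are initialization parameters, so if you run with an spfile the change would look something like this, followed by a bounce:

alter system set audit_trail=db scope=spfile;
alter system set audit_file_dest='/dest/to/my/audit/logs/' scope=spfile;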

Then, from SQL*Plus:

audit create session whenever not successful;

Now when you query select * from dba_audit_trail; you will get a listing of the folks who have failed to create a session because of incorrect credentials.
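To narrow that listing down to just the failed logins, one approach is to filter on the return code; 1017 corresponds to ORA-01017, invalid username/password:

select os_username, username, userhost, timestamp, returncode
from dba_audit_trail
where returncode = 1017
order by timestamp;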

Oracle – Retrieve Deleted Rows

Retrieve Deleted Rows from Oracle Database 12c

Okay, so last night I went to clear a table of historical data except for the last 7 days. When I ran the command, it deleted the rows, I committed the transaction, and then I realized that I had deleted the last 7 days and left the other 360 days’ worth that I had wanted to delete. Greater than vs. less than: the lesson there is not to write code on the fly when you’re tired at 2 am.

I have flashback turned on and archive logging, but I found a neat little SQL method to get my missing rows back:

INSERT INTO TABLE_I_DELETED_ROWS_FROM
(SELECT * FROM TABLE_I_DELETED_ROWS_FROM AS OF TIMESTAMP TO_TIMESTAMP('2014-07-06 10:45:10 PM','YYYY-MM-DD HH:MI:SS PM')
WHERE MYDATEFIELD > TO_DATE('2014-06-29','YYYY-MM-DD'));

I executed this at about 10:55 pm, and it put back the 10,000 rows I had deleted, the ones that were there between 06-29 and 07-06.
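If you want to be more careful than I was, you can preview what the flashback query will return before running the insert, for example:

SELECT COUNT(*) FROM TABLE_I_DELETED_ROWS_FROM AS OF TIMESTAMP TO_TIMESTAMP('2014-07-06 10:45:10 PM','YYYY-MM-DD HH:MI:SS PM')
WHERE MYDATEFIELD > TO_DATE('2014-06-29','YYYY-MM-DD');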

Nice and easy.