I ran into an odd situation yesterday. I got handed a performance problem that had been going on for months: the developers were blaming the database people, the database people were producing useless reports and statistics, and around and around it went. All the end user wanted to do was view a journal entry in the finance system from the web tier.
Well, my initial thought was that the journal line table was most likely really large and might need another index. The journal line table had 1.1 million rows (select count(1) from ps_jrnl_ln), but the count was taking a really long time to return when I selected the data by the first two key fields to pull up a specific journal (select count(1) from ps_jrnl_ln where business_unit = 'SHARE' and journal_id = 'PAY1234'). The RED flag was waving madly at this point.
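(A quick aside that was not part of my original troubleshooting, but is handy for anyone chasing the same symptom: on Oracle you can confirm that a keyed lookup like this is doing a full table scan, rather than using the unique index, by pulling the execution plan.)
explain plan for
select count(1) from ps_jrnl_ln
where business_unit = 'SHARE' and journal_id = 'PAY1234';
select * from table(dbms_xplan.display);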
So here is what I did:
1) Opened the record JRNL_LN in Application Designer and ran an alter table build (with the "even if no change" flag on). The build produced the following set of errors:
ORA-12801: error signaled in parallel query server xyz
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
How in the world did that happen? Good question. Can this really happen? Not under normal operation, but it clearly has, so how do we fix it? First, find the duplicates:
select business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger, count(1)
from ps_jrnl_ln
group by business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger
having count(1) > 1;
This identified 31 key combinations with a count greater than 1. I looked at a couple and found the rows were completely identical. So, how to get rid of them? I used the returned data from the previous select to build an export script in Data Mover that exported all the duplicate rows to a data file:
set log l:\data\logs\export_bad_data.log;
set output l:\data\bad_data.dat;
export jrnl_ln where business_unit = 'xyz' and journal_id = '123';
rem ..... (fill in all the key fields from the above select);
export jrnl_ln where business_unit = 'xyz' and journal_id = '128';
rem ..... so on until all the rows are exported;
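Typing all of those export statements by hand gets old fast. A shortcut worth knowing (sketched here, not spelled out above) is to have SQL generate them straight from the duplicate-finding query; the quotes are doubled because these are Oracle string literals:
-- emits one Data Mover EXPORT statement per duplicate key set
select 'export jrnl_ln where business_unit = ''' || business_unit
    || ''' and journal_id = ''' || journal_id
    || ''' and journal_date = to_date(''' || to_char(journal_date, 'YYYY-MM-DD')
    || ''',''YYYY-MM-DD'') and unpost_seq = ' || unpost_seq
    || ' and journal_line = ' || journal_line
    || ' and ledger = ''' || ledger || ''';'
from ps_jrnl_ln
group by business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger
having count(1) > 1;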
Now, I exited Data Mover and went back in using bootstrap mode. I imported the data using SET IGNORE_DUPS and the AS clause on the IMPORT command, which loads the data into a new table. This is the standard Data Mover trick for weeding out duplicates: the unique index on the new table rejects the second copy of each row, and IGNORE_DUPS suppresses the error, so only one copy of each duplicate survives the load.
set log l:\data\logs\import_bad_data.log;
set input l:\data\bad_data.dat;
set ignore_dups;
import jrnl_ln as temp_jrnl_ln;
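Before touching the real table, a quick sanity check (my addition; this assumes Data Mover created the new table with the standard PS_ prefix, i.e. ps_temp_jrnl_ln):
-- should return 31: one surviving row per duplicate key set
select count(1) from ps_temp_jrnl_ln;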
Now I had the 31 unique rows in a temporary table. I deleted all the duplicate rows from the jrnl_ln table: using the results from the first select, I turned them into delete statements keyed by all six key fields, which removed a total of 62 rows. I then did an insert from the temp table, which put the 31 unique rows back into jrnl_ln, and dropped the temp table.
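For reference, here is a set-based sketch of that delete/re-insert (the actual fix used individual delete statements generated from the first select; this again assumes the temp table landed as ps_temp_jrnl_ln):
-- remove both copies of every duplicate key set (62 rows here)
delete from ps_jrnl_ln
where (business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger)
   in (select business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger
         from ps_temp_jrnl_ln);
-- put the single surviving copy of each back (31 rows)
insert into ps_jrnl_ln select * from ps_temp_jrnl_ln;
commit;
drop table ps_temp_jrnl_ln;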
I rebuilt the index, and this time the build returned success with no warnings or errors. On the web, I opened up the journal in question in a couple of seconds. !!!!!!!!!!!
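For completeness, the index that the alter build recreates is the record's unique key index (PeopleSoft typically names it PS_JRNL_LN, same as the table), which boils down to something like the following; the real build script adds tablespace and storage clauses from the DDL defaults:
create unique index ps_jrnl_ln on ps_jrnl_ln
  (business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger);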