FSCM External Punchout – Extremely slow

In one of my client's Oracle Managed Cloud Services environments, which is extremely locked down, they were experiencing terrible performance trying to reach WB Mason's external punchout site.  It was taking upwards of 5 minutes to return the catalog pages.

Something was CLEARLY wrong. In this case it was DTD – and what is DTD?  Well, document type definition of course – a set of markup declarations that define a document type for an SGML-family markup language (SGML, XML, HTML). A Document Type Definition (DTD) defines the legal building blocks of an XML document. It defines the document structure with a list of legal elements and attributes. A DTD can be declared inline inside an XML document, or as an external reference. XML uses a subset of SGML DTD. As of 2009, newer XML namespace-aware schema languages (such as W3C XML Schema and ISO RELAX NG) have largely superseded DTDs. A namespace-aware version of DTDs is being developed as Part 9 of ISO DSDL. DTDs persist in applications that need special publishing characters, such as the XML and HTML Character Entity References, which derive from larger sets defined as part of the ISO SGML standard effort. (Yes, I cut and pasted that from the Wikipedia DTD page.)

So basically the cXML of the external punchout is trying to validate against an external DTD reference that the application server is not allowed to reach because of firewall restrictions. In this case, because the only items validating this way were these legitimate external punchout sites, it seemed logical to turn off DTD validation, which you can do as of PeopleTools 8.49. My client is on PeopleTools 8.54, where turning off DTD validation turns out to be really easy.

In the Integration Broker gateway configuration file (integrationgateway.properties) on the web server, there is a setting for DTD validation. Simply change it from "true" to "false" and reboot the web server, and what was once extremely slow now returns in fractions of a second.
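
For reference, the change is a one-line edit in integrationgateway.properties. The property name shown below is an assumption from memory – confirm it against the delivered file on your PeopleTools release before changing anything:

# Disable DTD lookup/validation on the gateway (assumed property name; default is true)
ig.dtdLookup=false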

There are other ways to work around this issue, but this is a relatively quick change you can test to determine whether DTD validation is causing your performance issues.

PeopleSoft – Move File Attachments (Database to SFTP)

Here is the scenario: the client needs to move from database-stored file attachments to an actual file share, which allows them to move roughly a quarter of their entire database storage out to a storage area.  Clearly this is a good thing. So here is the quick rundown of what we did:

1. Copied all the existing URL references that contained record:// for backup purposes (see the backup sketch after this list).

2. Set up an SSH account with an SSH key pair for security.

3. Installed the SSH key pair into the digital certificates.

4. Changed all the existing URLs that pointed to the database storage to the new SFTP storage locations.

5. Ran the orphan cleanup process for file attachments.

6. Copied the file attachments from the OLDURL to the NEWURL.

7. Set the default file attachment server to use the new URL attachment server.

8. Generated a list of all the attachments moved into the file storage and compared it against what was actually in the attachment record.

9. Purged the attachment record of all the migrated files.

10. Updated attachment reference records when necessary if they referenced the old URL in any way.
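
A minimal sketch of the step 1 backup is below. It assumes the URL definitions live in the PeopleTools table PSURLDEFN with a URL column, and PS_URLDEFN_BKP is just a placeholder name – verify the record and column names on your release before running anything like this:

-- Snapshot every URL definition that still points at database storage (record://)
CREATE TABLE PS_URLDEFN_BKP AS
SELECT * FROM PSURLDEFN WHERE URL LIKE 'record://%';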

This is where we ran into an interesting problem with Unicode characters.  The process was relatively slow; we found that moving approximately 200,000 files took about 40 hours.  So to get into production, we made a copy of production and ran the process in a test environment.  Then we moved all the files we had copied to the production file share, removed all the entries from the database attachment record that had already been moved, and finally ran the tiny subset of new attachments out to the storage, which minimized our downtime.
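
For the production cutover, the delete of the already-migrated entries looked roughly like the sketch below. MIGRATED_FILES and PS_MY_ATT_REC are hypothetical names for the staging list and the database attachment record, keyed here by ATTACHSYSFILENAME – substitute your own names and key, and compare row counts before and after:

-- Remove attachment rows whose files were already copied to the file share
DELETE FROM PS_MY_ATT_REC
 WHERE ATTACHSYSFILENAME IN (SELECT ATTACHSYSFILENAME FROM MIGRATED_FILES);
COMMIT;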

 

Missing Unique Index Producing Slow Performance

I ran into an odd situation yesterday.  I got handed a performance problem that had been going on for months: the developers were blaming the database people, the database people were producing useless reports and statistics, and around and around it went.  All the end user wanted to do was view a journal entry in the finance system from the web tier.

Well, my initial thought was that the journal line table was most likely really large and there might be a need for another index.  The journal line table had 1.1 million rows (select count(1) from ps_jrnl_ln), but the count was taking a really long time to return when I selected the data by the first two key fields for a specific journal (select count(1) from ps_jrnl_ln where business_unit = 'SHARE' and journal_id = 'PAY1234').  The RED flag is waving madly at this point. So here is what I did:

1) Opened the record JRNL_LN in Application Designer and ran an alter table build (with the "even if no changes" flag on).  The build produced the following errors:

ORA-12801: error signaled in parallel query server xyz
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
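
The build was trying to recreate the record's unique key index and hit duplicate rows. A sketch of what it was effectively attempting, assuming the key fields are the six columns used in the duplicate check below and the standard PS_ naming for the key index (verify against the build script Application Designer generated):

CREATE UNIQUE INDEX PS_JRNL_LN ON PS_JRNL_LN
  (BUSINESS_UNIT, JOURNAL_ID, JOURNAL_DATE, UNPOST_SEQ, JOURNAL_LINE, LEDGER);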

How in the world did that happen?  Good question. Can this really happen?  Not under normal operation, but it has happened, so here is how we fixed it:

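-- Find the journal line key combinations that appear more than once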
select business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger, count(1)
from ps_jrnl_ln
group by business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger
having count(1) > 1;

This identified 31 rows having a count greater than 1.  I looked at a couple and found they were completely identical rows. So, how to get rid of them?  I used the data returned from the previous select and built an export script in Data Mover that exported all the duplicate rows to a data file.

set log l:\data\logs\export_bad_data.log;
set output l:\data\bad_data.dat;
export jrnl_ln where business_unit = 'xyz' and journal_id = '123';
rem ..... (fill in all the key fields from the above select);
export jrnl_ln where business_unit = 'xyz' and journal_id = '128';
rem ..... so on until all the rows are exported;

Now, I exited Data Mover and went back into Data Mover in bootstrap mode. I imported the data using SET IGNORE_DUPS and the AS clause, to import the data into a temporary table.

set log l:\data\logs\import_bad_data.log;
set input l:\data\bad_data.dat;
set ignore_dups;
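rem IGNORE_DUPS skips the duplicate copies so only the unique rows load;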
import jrnl_ln as temp_jrnl_ln;

Now I have the 31 unique rows in a temporary table.  I deleted all the duplicate rows from the jrnl_ln table: using the results from the first select, I turned them into delete statements by all the key fields, which deleted a total of 62 rows from the jrnl_ln table. I then did an insert from the temp table, which put the 31 unique rows back into the jrnl_ln table, and dropped the temp table.
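
A minimal sketch of that clean-up, assuming Data Mover created the copy as PS_TEMP_JRNL_LN (check the import log for the actual table name) and using placeholder key values – you need one delete per duplicate key combination returned by the query above:

-- Remove both copies of a duplicated journal line (placeholder key values)
DELETE FROM ps_jrnl_ln
 WHERE business_unit = 'xyz'
   AND journal_id = '123'
   AND journal_date = TO_DATE('2016-01-01', 'YYYY-MM-DD')
   AND unpost_seq = 0
   AND journal_line = 1
   AND ledger = 'ACTUALS';
-- ... repeat for each of the 31 duplicate key combinations ...

-- Put the 31 unique rows back and drop the temporary table
INSERT INTO ps_jrnl_ln SELECT * FROM ps_temp_jrnl_ln;
COMMIT;
DROP TABLE ps_temp_jrnl_ln;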

I rebuilt the index, and it returned success with no warnings or errors.  On the web, the journal in question now opens in a couple of seconds!