Installing COBOL Runtime on Linux

This is just one of those things that isn't well documented, and it is a little frustrating. Most of the documentation seems to be based on the Windows installs.

The first thing I would recommend is to send Oracle a license request at licensecodes_ww@oracle.com, identifying the client by their client support ID. They will typically respond with the licensing the client is entitled to; in this situation, I asked for the runtime install details. Their response contains a link to download the runtime files. If you are enterprising you can find them on your own: ftp://ftp.oracle.com

In this situation I am installing Micro Focus Server Express 5.1 on a virtual Red Hat 5.5 Linux x86-64 server. The install files for this are on the edelivery website. I had the server administrator do the Server Express install; unfortunately it was done in the wrong directory, but as usual you work with what you've got. The install was done to the /root directory.

Make sure the COBDIR and COBPATH variables are set, add $COBDIR/lib to LD_LIBRARY_PATH, and add $COBDIR/bin to PATH. COBPATH will be the directory where you install the runtime; in my case I set it to /data/app/mfcbl.
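In the profile of the user that runs the scheduler, that works out to something like the following sketch. The COBDIR value is an assumption based on the install landing under /root as described above; adjust it to wherever Server Express actually lives on your server:

export COBDIR=/root                # Server Express install directory (assumed path; adjust to your install)
export COBPATH=/data/app/mfcbl     # directory where the runtime is installed
export LD_LIBRARY_PATH=$COBDIR/lib:$LD_LIBRARY_PATH
export PATH=$COBDIR/bin:$PATH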

Untar the ps-sx-auto.tar (tar -xvf ps-sx-auto.tar) into the /data/app/mfcbl directory, chmod +x p*, and run ./psauto64.
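Put together, the sequence is just this (assuming the tar file has already been copied into place):

cd /data/app/mfcbl
tar -xvf ps-sx-auto.tar    # extract the license installer files
chmod +x p*                # make the extracted programs executable
./psauto64                 # install the 64-bit unlimited runtime license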

It should tell you that the Unlimited RunTime License is installed and working. Now we need to get the process scheduler to run COBOL programs, but first we need to relink the COBOL. Change to your PS_HOME directory and run psconfig.sh, then change to the setup directory and run psrun.mak; it should tell you the files were successfully linked. You should already have copied a freshly compiled cblbin directory from your compile server to your run-time server (note: you can't compile on one OS platform and use the run-time on another; the platforms must be the same).
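For reference, the relink steps boil down to the following, assuming the standard PS_HOME layout:

cd $PS_HOME
. ./psconfig.sh            # source the PeopleSoft environment variables
cd setup
./psrun.mak                # relink PSRUN against the Micro Focus runtime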

Run psadmin, go into your process scheduler configuration, re-configure the process scheduler, and restart it. This sets the COBOL directories into the environment files of the scheduler.

Go into the web front end, navigate to PeopleTools > Process Scheduler > System Process Request, and run the simple COBOL test program (PTPDBTST). It should run to success.

Missing Unique Index Producing Slow Performance

I ran into an odd situation yesterday. I got handed a performance problem that had been going on for months: the developers were blaming the database people, the database people were producing useless reports and statistics, and around and around it went. All the end user wanted to do was view a journal entry in the finance system from the web tier.

Well, my initial thought was that the journal line table was probably really large and might need another index. The journal line table had 1.1 million rows (select count(1) from ps_jrnl_ln), which is not huge, yet selecting by the first two key fields to return a specific journal (select count(1) from ps_jrnl_ln where business_unit = 'SHARE' and journal_id = 'PAY1234') was taking a really long time to return. A lookup on the leading key columns should come back almost instantly off the key index, so the RED flag is waving madly at this point. So here is what I did:

1) Opened the Record JRNL_LN in Application Designer and did an alter table build (with the "even if no change" flag on). The build produced the following set of errors:

ORA-12801: error signaled in parallel query server xyz
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

How in the world did that happen? Good question. Can this really happen? Not under normal operation, but it has happened, so how do we fix it? Start by finding the duplicate keys:

select business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger, count(1)
from ps_jrnl_ln
group by business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger
having count(1) > 1;

This identified 31 rows having a count greater than 1. I looked at a couple and found they were completely identical rows. So, how to get rid of them? I used the returned data from the previous select and built an export script in Data Mover that exported all the duplicate rows to a data file.

set log l:\data\logs\export_bad_data.log;
set output l:\data\bad_data.dat;
export jrnl_ln where business_unit = 'xyz' and journal_id = '123';
rem ..... (fill in all the key fields from the above select);
export jrnl_ln where business_unit = 'xyz' and journal_id = '128';
rem ..... so on until all the rows are exported;

Now, I exited Data Mover and went back in, in bootstrap mode. I imported the data into a temporary table using the ignore_dups setting and the as clause:

set log l:\data\logs\import_bad_data.log;
set input l:\data\bad_data.dat;
set ignore_dups;
import jrnl_ln as temp_jrnl_ln;

Now I have the 31 unique rows in a temporary table. I delete all the duplicate rows from the jrnl_ln table by turning the results of the first select into delete statements on all the key fields; this deletes a total of 62 rows from the jrnl_ln table. I then do an insert from the temp table, which puts the 31 unique rows back into the jrnl_ln table, and I drop the temp table.
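The cleanup SQL looks roughly like this. The key values below are placeholders for the rows the first select actually returned, and the temp table name (shown here as temp_jrnl_ln) is whatever Data Mover created from the import above, which may carry a PS_ prefix depending on your tools version, so verify it before running anything:

-- one delete per duplicate key combination from the first select (values are placeholders)
delete from ps_jrnl_ln
 where business_unit = 'xyz'
   and journal_id    = '123'
   and journal_date  = to_date('2011-01-31','YYYY-MM-DD')
   and unpost_seq    = 0
   and journal_line  = 1
   and ledger        = 'ACTUALS';

-- ...repeat for the rest of the 31 key combinations, then put the unique rows back
insert into ps_jrnl_ln select * from temp_jrnl_ln;
drop table temp_jrnl_ln;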

Rebuild the index, and it returns success with no warnings or errors. On the web, the journal in question now opens in a couple of seconds. Success!
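For reference, the unique index the Application Designer build script recreates should look roughly like this. PS_JRNL_LN is the standard PeopleTools naming for the key index on this record and the columns match the key fields from the duplicate-finding select above, but verify against your own generated build script:

create unique index ps_jrnl_ln on ps_jrnl_ln
  (business_unit, journal_id, journal_date, unpost_seq, journal_line, ledger);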