AWS Mount Data Disk (Linux)

Amazon Web Services allows you to create EBS (Elastic Block Store) volumes, which can be attached to your EC2 server instances. This allows an administrator to size the storage solution effectively. These instructions should work on most Linux flavours.

Use the lsblk command to view your available disk devices and their mount points (if applicable) to help you determine the correct device name to use.

[root ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvdf   202:80   0  100G  0 disk
xvda1  202:1    0    8G  0 disk /

The output of lsblk removes the /dev prefix from full device paths. In this example, /dev/xvda1 is mounted as the root device (note the MOUNTPOINT is listed as /, the root of the Linux file system hierarchy), and /dev/xvdf is attached, but it has not been mounted yet.

Determine whether you need to create a file system on the volume. New volumes are raw block devices, and you need to create a file system on them before you can mount and use them. Volumes that have been restored from snapshots likely have a file system on them already; if you create a new file system on top of an existing file system, the operation overwrites your data. Use the file -s device command to list special information, such as file system type.

[root ~]$ file -s /dev/xvdf
/dev/xvdf: data

If the output of the previous command shows simply “data” for the device, then there is no file system on the device and you need to create one. If you run this command on a device that contains a file system, then your output will be different.

[root ~]$ file -s /dev/xvda1
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=1xxxxxx-exxx-4xxx-axxx-8xxxxxxxxxxxx (needs journal recovery) (extents) (large files) (huge files)

Use the following command to create an ext4 file system on the volume. Substitute the device name (such as /dev/xvdf) for device_name.

NOTE: This step assumes that you’re mounting an empty volume. If you’re mounting a volume that already has data on it (for example, a volume that was restored from a snapshot), don’t use mkfs before mounting the volume. Otherwise, you’ll format the volume and delete the existing data.

[root ~]$ mkfs -t ext4 device_name
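Using the /dev/xvdf volume from the earlier lsblk output, the command would be:

[root ~]$ mkfs -t ext4 /dev/xvdf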

Create a mount point directory for the volume. The mount point is where the volume sits in the file system tree and where you read and write files after you mount the volume. Substitute a location for mount_point, such as /data.

[root ~]$ mkdir mount_point

Mount the volume at the location you just created.

[root ~]$ mount device_name mount_point
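Continuing the example, with /dev/xvdf mounted at /data, the full sequence looks like this; df -h afterwards confirms the volume is mounted with its full size available:

[root ~]$ mkdir /data
[root ~]$ mount /dev/xvdf /data
[root ~]$ df -h /data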

(Optional) To mount this EBS volume on every system reboot, add an entry for the device to the /etc/fstab file.

Format: device_name mount_point file_system_type fs_mntops fs_freq fs_passno

The last three fields on this line are the file system mount options, the dump frequency of the file system, and the order of file system checks done at boot time. If you don't know what these values should be, use the values in the following example (defaults,nofail 0 2). For more information on /etc/fstab entries, see the fstab manual page (by entering man fstab on the command line). Use the UUID reported by file -s device_name in place of device_name; device names can change between reboots or re-attachments, but the UUID stays with the file system. So for this example the entry would be:

[root ~]$ file -s /dev/xvdf
/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=2xxxxxx-5xxx-3xxx-1xxx-exxxxxxxxxxxx (needs journal recovery) (extents) (large files) (huge files)

[root ~]$ vi /etc/fstab

UUID=2xxxxxx-5xxx-3xxx-1xxx-exxxxxxxxxxxx /data ext4 defaults,nofail 0 2
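Before rebooting, it is worth proving the fstab entry works. Assuming nothing is using the mount yet, unmount it and let mount -a (which mounts everything in /etc/fstab that is not already mounted) pick it back up:

[root ~]$ umount /data
[root ~]$ mount -a
[root ~]$ df -h /data

If mount -a returns without errors and df shows /data, the entry will survive a reboot.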

Panda + Community February 2017

Okay, seriously, what is up with this crazy weather? This past month I drove through snow, sleet, rain, 100-mile-per-hour winds, a tropical depression, and a tornado warning zone, all within 24 hours. Combine that with dressing in 3 thermal layers to run outside one day and donning the summer run shorts the next.

This past month did not yield many exciting things in the community, as I spent a majority of it out of town, but I did receive my Paul Harris +3 pin, which came as a surprise, as all the extra giving we have been doing yielded an extra +1! This year we were pleased to be able to earmark the Paul Harris funds towards eradicating polio (yes — that's still a thing). The Canadian Government and The Bill and Melinda Gates Foundation both matched donations 2:1. This means that over $5,000.00 USD went to the cause.

In a couple of weeks we will be Curling-4-Kids with our local Big Brothers Big Sisters. We are excited to be the title sponsor again this year with our Big Brothers Big Sisters organization, which provides hundreds of amazing kids with amazing mentors through its various programs, helping make our community strong and better. If you could spare a dollar or two to support our Curling Team, that would be FANTASTIC! [Click To Sponsor]

Check back next month for pictures from the Curling Event! Till next month……

Photos: Cornell CS 9.2 Go-Live · Tornado in Texas · Paul Harris +3 · Curling 2016

Apache: DoS & DDoS Attack

Firstly, what is a DoS or DDoS attack? A DoS attack is a “Denial of Service” attack, in which a computer is typically used to flood a target system or resource in an attempt to overwhelm it, making the target unable to service legitimate requests. A DDoS attack is a “Distributed Denial of Service” attack, which is often global in nature and distributed via botnets, using multiple systems and/or resources to flood the target. Both are bad.

Unfortunately, it was our turn to deal with a DoS attack. Yesterday the phone rang with a client saying that their LAMP server was performing terribly. After a quick check, the Apache httpd service was running hundreds of connections that were sucking the life out of the server.

A quick check of the connections yielded an IP address that did not seem to fit with the traffic pattern for the environment.

netstat -anp | grep 'tcp\|udp' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

It turned out that an IP address from the Netherlands was hammering the system. So, as a quick fix, the IP address was added to the iptables deny list, the iptables service was restarted, and the Apache service was restarted; the environment returned to normal operations.
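For reference, on the RHEL/CentOS family the quick fix amounts to commands along these lines (the address below is a placeholder from the documentation range, not the actual offender):

[root ~]$ iptables -A INPUT -s 203.0.113.50 -j DROP
[root ~]$ service iptables save
[root ~]$ service iptables restart
[root ~]$ service httpd restart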

Additional efforts should be taken to blacklist the IP addresses causing issues.

Oracle – OPTSTAT Maintenance

The other day a client's production environment was having some really bad performance issues, which were causing jobs to fail with deadlock or wait timeouts. This seemed highly odd, as the wait timeouts were on dedicated temporary tables, which actually makes no sense. After a deeper dive, it turned out the system was trying to update statistics on the tables and purge from the OPTSTAT tables.

When reviewing the OPTSTAT tables, which back Oracle's optimizer statistics history, it turned out that there were 330 million entries in the WRI$_OPTSTAT_HISTHEAD_HISTORY table. The environment was generating approximately 5 million rows a day against a 31-day retention, but you may say that isn't 330 million, and you'd be correct. There is a background MMON process that runs the purge every 24 hours, and it must complete within 5 minutes or the task is not executed. Basically, once the table got to a certain size, the maintenance job couldn't keep up with the volume.
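To see how far behind the purge has fallen, you can check the current retention and the row count directly. A quick check along these lines, run as a DBA (the quoted heredoc stops the shell from expanding the $ in the table name):

[root ~]$ sqlplus -s / as sysdba <<'EOF'
-- current statistics history retention, in days
select dbms_stats.get_stats_history_retention from dual;
-- rows sitting in the largest statistics history table
select count(*) from sys.wri$_optstat_histhead_history;
EOF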

This was caused when the client wanted stats to be run nightly on the entire main schema, which contains 72,000 records and indexes. So the first thing was to change the statistics job to weekly. Next, the retention policy was changed to 7 days:

exec dbms_stats.alter_stats_history_retention(7);

Now to purge the statistics older than the new retention window:

exec dbms_stats.purge_stats(sysdate-7);

Yes, this is a terrible thing to run, and it took 7 hours in this environment, but no outage was necessary and the environment continued to function without issues.
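If a single 7-hour purge is a concern, one option (an approach I'm sketching here, not what we ran) is to walk the cutoff date down in steps, so each call removes a smaller slice of history:

exec dbms_stats.purge_stats(sysdate-28);
exec dbms_stats.purge_stats(sysdate-21);
exec dbms_stats.purge_stats(sysdate-14);
exec dbms_stats.purge_stats(sysdate-7);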

Here is a great script to see the size of the AWR being used (Thanks to Oracle DBA – A Lifelong learning experience):

set linesize 120
set pagesize 100
col ash form a30
col retention form a30
col snap form a30
COLUMN "Item" FORMAT A25
COLUMN "Space Used (GB)" FORMAT 999.99
COLUMN "Schema" FORMAT A25
COLUMN "Move Procedure" FORMAT A40
SELECT occupant_name "Item",
       space_usage_kbytes/1048576 "Space Used (GB)",
       schema_name "Schema",
       move_procedure "Move Procedure"
  FROM v$sysaux_occupants
 WHERE occupant_name = 'SM/AWR'
 ORDER BY 1
/

PeopleSoft Oracle – Security Setup

I had a client ask me last week to clean up the Oracle security setup in their production database. The cleanup was relatively straightforward because we only use the basic accounts for the PeopleSoft environment, and all the passwords had been changed since going live, so there was very little risk to the environment. But they wanted to be safe, so:

PeopleSoft Default Accounts: The Access ID is typically sysadm by default, but you can give it pretty much any name; it just needs to be no more than 8 characters. Even though Oracle says the password can be longer, I highly recommend keeping the password to 8 characters and making sure not to include any funky characters (we have seen all sorts of issues with this password being anything special).

The next account to ensure is working correctly is the Connect ID, which is used by PeopleSoft for authentication. It has a base of 3 tables it can select data from: PSOPRDEFN, PSACCESSPRFL, and PSSTATUS. As of PeopleTools 8.55, an additional select grant is needed on the PSACCESSPROFILE table, which basically replaces PSACCESSPRFL and supposedly supports passwords up to 30 characters.

The next account you will see is the PS account. It is created when the environment is built and owns the table PS.PSDBOWNER, which maps the database name to the schema owner for that database. The PS account is not allowed to be connected to. The last two accounts you should have by default are SYS and SYSTEM, neither of which should be accessed by anybody except your DBA.
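For reference, the Connect ID setup amounts to a create user plus those select grants. A minimal sketch, assuming the common default connect ID people with password peop1e and a SYSADM schema owner (adjust the names to your environment):

In SQL*Plus:

create user people identified by peop1e;
grant create session to people;
grant select on sysadm.PSOPRDEFN to people;
grant select on sysadm.PSSTATUS to people;
grant select on sysadm.PSACCESSPRFL to people;
grant select on sysadm.PSACCESSPROFILE to people;  -- PeopleTools 8.55 and later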

The first major item was that they wanted the DBAs to have dedicated accounts instead of using the sysadm account. To do this:

In SQL*Plus:

create user dbauser1 identified by dbapassword1;

grant create session, grant any privilege to dbauser1;

grant connect, resource, dba to dbauser1;

grant unlimited tablespace to dbauser1;

All the work dbauser1 needs to do is in the sysadm schema, but when it logs in it will default to the dbauser1 schema. To fix this, we build a trigger that switches the current schema at login:

In SQL*Plus:

CREATE OR REPLACE TRIGGER LOGIN_SCHEMA
AFTER LOGON ON DATABASE WHEN (USER IN ('DBAUSER1','DBAUSER2'))
DECLARE
  exsql VARCHAR2(100);
BEGIN
  exsql := 'ALTER SESSION SET CURRENT_SCHEMA = SYSADM';
  EXECUTE IMMEDIATE exsql;
END;
/
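To sanity-check the trigger, log in as one of the DBA accounts and confirm the session schema was switched; it should return SYSADM:

In SQL*Plus (connected as dbauser1):

SELECT SYS_CONTEXT('USERENV','CURRENT_SCHEMA') FROM DUAL;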

In order to troubleshoot the environment, we had created a sysadm_read account with the sysadm_read_role role, which has select access to all PS tables in the SYSADM schema. This account was shared by multiple users for data validation. It was determined that they wanted dedicated accounts for each user, with a higher level of password security, so this is where things got creative.

The first element that I wanted to enable is a better level of password security. Oracle delivers password verify functions that can be applied to the environment quickly and easily. You will want to review the script to ensure it affects the correct profile for your environment. The script is utlpwdmg.sql, which for Oracle 12c introduced two functions: ora12c_verify_function and ora12c_strong_verify_function. For my example I went with ora12c_verify_function, which more than meets the minimum requirements for the client.
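The script ships with the database software, so assuming a standard installation it can be run from SQL*Plus as SYS (the ? shorthand expands to ORACLE_HOME):

SQL> @?/rdbms/admin/utlpwdmg.sql

With the verify function in place, we can create a new profile for the database users: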

In SQL*Plus:

create profile sysadm_read_only limit
  failed_login_attempts 3
  password_lock_time 1
  password_life_time 90
  password_reuse_max 3
  password_verify_function ora12c_verify_function;

This creates a new profile that allows only 3 failed login attempts; if the user fails all 3 times, the account is locked out for 1 day. The password is valid for 90 days and can only be reused every 4th time it is changed. The ora12c_verify_function handles the rest of the verification: at least 8 characters, at least 1 letter, at least 1 digit, must not contain the database name, must not contain the user name or the reverse of the user name, must not contain the word oracle, must not be too simple (like welcome1), and must differ by at least 3 characters from the old password.

Now we just need to create the new users with the sysadm_read_role:

In SQL*Plus:

create user readuser1 identified by readpassword password expire profile sysadm_read_only;

grant sysadm_read_role to readuser1;

grant connect to readuser1;

Now when the user logs into the environment, they will be required to change their password, and they will have read-only access to the SYSADM schema. We have a stored procedure, scheduled to run nightly, that grants select access on all PS tables in the SYSADM schema to the sysadm_read_role role and creates a public synonym for each one, so that the read-only users can select from the PS tables.
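For reference, here is a minimal sketch of what such a nightly job might look like. This is an illustration of the idea rather than our exact procedure; the sysadm_read_role name comes from above, and everything else is assumption:

In SQL*Plus (as a DBA):

BEGIN
  -- Grant select on every PS table in the SYSADM schema to the read-only
  -- role, and expose each table through a public synonym.
  FOR t IN (SELECT table_name
              FROM dba_tables
             WHERE owner = 'SYSADM'
               AND table_name LIKE 'PS%') LOOP
    EXECUTE IMMEDIATE 'GRANT SELECT ON SYSADM.' || t.table_name ||
                      ' TO sysadm_read_role';
    EXECUTE IMMEDIATE 'CREATE OR REPLACE PUBLIC SYNONYM ' || t.table_name ||
                      ' FOR SYSADM.' || t.table_name;
  END LOOP;
END;
/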