AWS – Mount Data Disk (Linux)

Amazon Web Services allows you to create EBS (Elastic Block Store) volumes, which can be attached to your EC2 server instances. This lets an administrator size the storage solution effectively. These instructions should work on most Linux flavours.

Use the lsblk command to view your available disk devices and their mount points (if applicable) to help you determine the correct device name to use.

[root ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdf 202:80 0 100G 0 disk
xvda1 202:1 0 8G 0 disk /

The output of lsblk removes the /dev prefix from full device paths. In this example, /dev/xvda1 is mounted as the root device (note the MOUNTPOINT is listed as /, the root of the Linux file system hierarchy), and /dev/xvdf is attached, but it has not been mounted yet.

Determine whether you need to create a file system on the volume. New volumes are raw block devices, and you need to create a file system on them before you can mount and use them. Volumes that have been restored from snapshots likely have a file system on them already; if you create a new file system on top of an existing file system, the operation overwrites your data. Use the file -s device command to list special information, such as file system type.

[root ~]$ file -s /dev/xvdf
/dev/xvdf: data

If the output of the previous command shows simply “data” for the device, then there is no file system on the device and you need to create one. If you run this command on a device that contains a file system, then your output will be different.

[root ~]$ file -s /dev/xvda1
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=1xxxxxx-exxx-4xxx-axxx-8xxxxxxxxxxxx (needs journal recovery) (extents) (large files) (huge files)

Use the following command to create an ext4 file system on the volume. Substitute the device name (such as /dev/xvdf) for device_name.

NOTE: This step assumes that you’re mounting an empty volume. If you’re mounting a volume that already has data on it (for example, a volume that was restored from a snapshot), don’t use mkfs before mounting the volume. Otherwise, you’ll format the volume and delete the existing data.

[root ~]$ mkfs -t ext4 device_name
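
For the volume attached in this example, and assuming you have already confirmed it is empty, the command would look like this:

[root ~]$ mkfs -t ext4 /dev/xvdf

You can then re-run file -s /dev/xvdf to confirm it now reports an ext4 file system rather than just “data”.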

Create a mount point directory for the volume. The mount point is where the volume sits in the file system tree and where you read and write files after you mount the volume. Substitute a location for mount_point, such as /data.

[root ~]$ mkdir mount_point

Mount the volume at the location you just created.

[root ~]$ mount device_name mount_point
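
Putting the pieces together for this example, and assuming you want the /dev/xvdf volume available at /data (the mount point used in the fstab example below), the commands would be:

[root ~]$ mkdir /data
[root ~]$ mount /dev/xvdf /data
[root ~]$ df -h /data

The df -h check simply confirms the new file system is mounted and shows the space available on it.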

(Optional) To mount this EBS volume on every system reboot, add an entry for the device to the /etc/fstab file.

Format: device_name mount_point file_system_type fs_mntops fs_freq fs_passno

The last three fields on this line are the file system mount options, the dump frequency of the file system, and the order of file system checks done at boot time. If you don’t know what these values should be, use the values in the following example (defaults,nofail 0 2). For more information on /etc/fstab entries, see the fstab manual page (by entering man fstab on the command line). Use the UUID reported by file -s device_name in place of the device name, so for this example the entry would be:

[root ~]$ file -s /dev/xvdf
/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=2xxxxxx-5xxx-3xxx-1xxx-exxxxxxxxxxxx (needs journal recovery) (extents) (large files) (huge files)

[root ~]$ vi /etc/fstab

UUID=2xxxxxx-5xxx-3xxx-1xxx-exxxxxxxxxxxx /data ext4 defaults,nofail 0 2
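
Before relying on this entry at reboot, it is worth testing it. A simple check, assuming the /data mount point from the example above, is to unmount the volume and let mount read the entry back from /etc/fstab:

[root ~]$ umount /data
[root ~]$ mount -a
[root ~]$ df -h /data

If mount -a returns without errors and the volume shows up again under /data, the fstab entry is good; if it reports an error, fix the entry before rebooting. The nofail option also means a problem with this volume will not stop the instance from booting.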

PeopleSoft – Move File Attachments (Database to SFTP)

Here is the scenario: the client needs to move from database-stored file attachments to an actual file share, which lets them move about a quarter of their entire database storage needs out to a storage area. Clearly this is a good thing. So, here is the quick rundown on what we did:

1. Copy all the existing URL references that contained record:// for backup purposes.

2. Set up an SSH account with an SSH key pair for security

3. Install the SSH key pair into the digital certificates

4. Change all the existing URLs that point to the database storage to the new SFTP storage locations.

5. Run the Orphan cleanup process for File Attachments

6. Copy the file attachments from the OLDURL to the NEWURL

7. Set up the default file attachment server to use the new URL attachment server.

8. Generate a list of all the attachments moved into the file storage and compare that against what was actually in the attachment record (see the sketch after this list).

9. Purge the attachment record of all the migrated files.

10. Update attachment reference records if they referenced the old URL in any way.
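
As a rough illustration of step 8, here is a minimal sketch of the comparison, assuming the attachments were copied under a hypothetical /psft/attachments directory and that the file names stored in the attachment record have been exported to a text file called db_attachments.txt (both names are just placeholders, and this relies on GNU find):

[root ~]$ find /psft/attachments -type f -printf '%f\n' | sort > fs_attachments.txt
[root ~]$ sort db_attachments.txt > db_attachments_sorted.txt
[root ~]$ comm -3 fs_attachments.txt db_attachments_sorted.txt

comm -3 prints only the names that appear in one list but not the other, which is exactly the discrepancy list you want to be empty before purging the attachment record.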

It was during this migration that we ran into an interesting problem with Unicode characters. The process was also relatively slow: we found that moving approximately 200,000 files took about 40 hours. To get into production, we made a copy of production and ran the process in a test environment first. We then moved the files that had been copied out to the production file share, removed the entries from the database attachment record for files that had already been moved, and finally ran the small remaining subset of new attachments out to the storage, which let us minimize our downtime.

SQL Server – Storing Data

I often get asked about performance issues with SQL Server, and a quick check frequently shows that the data and log files for the database are being stored in terribly inefficient ways. I am not going to argue local disks versus SAN storage in this post, as more often than not my clients are moving to virtualized environments, so the storage is on a SAN or on a centralized disk array of some sort.

Let's start with this: one big drive is not a good way to run an enterprise database. I appreciate that SQL Server is easy to set up and get working, but making it work efficiently and effectively takes a little more understanding than accepting the default prompts during the install.

1. Have a small drive set up to run the OS layer of the machine; I have found 40 GB is usually more than sufficient for this.
2. Data files: store these on a drive that is in a RAID 5, 6, or 10 configuration.
3. Log files: store these on a drive that is in a RAID 1 or 10 configuration (however, 5 or 6 will work too).
4. TempDB files: store these on a drive that is RAID 1 (however, 5, 6, or 10 will work too).

In a lot of cases, items 2, 3, and 4 are all stored on the same drive. This may be fine for non-critical environments, but you will want your production environments split up. The most common drive configuration I see is RAID 5 or 6, but in bigger SANs you will often see RAID 10. Now that drives are relatively cheap, I find disk arrays to be very cost effective for organizations, and they offer great flexibility. This is one area where I recommend buying good equipment. Additionally, take speed over capacity and quantity over capacity: the speed at which the data can be read and/or written makes a difference, and the more drives you have, the more heads you have for reading and/or writing.
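
If you are not sure where an existing instance is actually keeping its data, log, and TempDB files, a quick way to check is to query sys.master_files. This is just a sketch, assuming the sqlcmd utility is available and you can connect to the instance (replace MYSERVER with your own server name):

sqlcmd -S MYSERVER -E -Q "SELECT DB_NAME(database_id) AS db, type_desc, physical_name FROM sys.master_files ORDER BY database_id"

If everything comes back on a single drive letter or mount point, that is usually the first thing to fix.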

Lastly, know what is sharing your storage areas. If you have a SAN that is carved up and serves multiple databases or environments, you will end up spreading the read/write capacity of the equipment across those environments. This is a common problem: people will say there must be something wrong with PeopleSoft because it isn't doing anything and is still slow, but if you dig into the available resources you will find another application hammering the resources shared with PeopleSoft. PeopleSoft is working fine; it just doesn't have the resources available to run effectively.