
Tuesday 19 August 2014

Storage Stress test with VDBENCH

Simulation environment:

1 storage system (physical storage, or software-based storage exposing a SAN, NAS, or similar protocol)
1 (or more) test servers.

On the test server, install vdbench:

Download vdbench: http://www.oracle.com/technetwork/server-storage/vdbench-downloads-1901681.html

This utility requires Java and the csh shell; install them before you can use vdbench.
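For example, the dependencies can be installed from distribution packages and vdbench unpacked to /opt (a sketch; exact package names vary by distribution, and /opt/vdbench is just the directory used later in this post):

# apt-get install default-jre csh (Ubuntu/Debian)
# yum install java-1.7.0-openjdk tcsh (CentOS/RHEL)
# unzip vdbench*.zip -d /opt/vdbench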

Stress test template file:

Refer to this guide for further test cases (or to the vdbench home directory for more examples): http://www.oracle.com/technetwork/server-storage/vdbench-1901683.pdf

My basic example:

Create a text file at /home/user/template.vdbench:

sd=sd1,lun=/dev/vdb,openflags=o_direct,threads=200
wd=wd1,sd=sd1,xfersize=(1M,70,10M,30),rdpct=70
rd=run1,wd=wd1,iorate=max,elapsed=600,interval=1
With:
sd: storage definition (any name: sd1, sd2, sdtest ...)
lun=/dev/vdb: I use a raw device (a LUN or volume created on the storage system and mapped to the test server). vdbench can stress many kinds of targets: disks, raw devices, file systems, etc.
threads: maximum number of concurrent outstanding I/Os we want in flight.
wd: workload definition (any name)
xfersize: data transfer size
(1M,70,10M,30): a weighted distribution of transfer sizes: 70% of I/Os use 1 MB and 30% use 10 MB.
rdpct: read percentage (70% reads, 30% writes).
rd: run definition (any name)
iorate=max: run an uncontrolled workload (iorate=100 would run a workload of 100 I/Os per second).
elapsed: duration of the test, in seconds.
interval: on-screen reporting interval, in seconds.
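As a second sketch (the values here are illustrative, not tuned recommendations), a small-block random I/O test against the same device could look like this; seekpct=100 makes every I/O go to a random location:

sd=sd1,lun=/dev/vdb,openflags=o_direct,threads=64
wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100
rd=run2,wd=wd1,iorate=max,elapsed=300,interval=1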

Run the test:
Change to the vdbench directory:
# cd /opt/vdbench

# ./vdbench -f /home/user/template.vdbench -o <output_directory_for_log>

Watch the stress test status on screen.
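When the run completes, vdbench leaves its reports in the -o directory; summary.html is the usual entry point (report file names as produced by vdbench 5.x):

# ls <output_directory_for_log>

Open summary.html in a browser to see per-interval and total IOPS, throughput, and response times.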




Sunday 3 August 2014

LINUX NETWORK INTERFACE BONDING

Task summary: configure and remove NIC bonding on Linux

Installation

sudo apt-get install ifenslave
ifenslave is used to attach and detach slave network interfaces to/from a bonding device.

Step 1: Ensure kernel support
Before Ubuntu can configure your network cards into a NIC bond, you need to ensure that the bonding kernel module is present and loaded at boot time.

Edit your /etc/modules configuration:

sudo vi /etc/modules
Ensure that the bonding module is loaded:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

loop
lp
rtc
bonding
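After the next boot (or after the manual modprobe in Step 2 below), you can confirm the module is loaded; this is just a sanity check with a standard tool:

# lsmod | grep bonding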
Step 2: Configure network interfaces

Ensure that your network is brought down:

sudo stop networking
Then load the bonding kernel module and edit the network configuration:

sudo modprobe bonding
sudo vi /etc/network/interfaces

For example, to combine eth0 and eth1 as slaves to the bonding interface bond0 using a simple active-backup setup, with eth0 being the primary interface:

#eth0 is manually configured, and slave to the "bond0" bonded NIC
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0

#eth1 ditto, thus creating a 2-link bond.
auto eth1
iface eth1 inet manual
bond-master bond0

# bond0 is the bonding NIC and can be used like any other normal NIC.
# bond0 is configured using static network information.
auto bond0
iface bond0 inet static
address 192.168.1.10
gateway 192.168.1.1
netmask 255.255.255.0
bond-mode active-backup
bond-miimon 100
bond-slaves none

Step 3: Check and start up the bonding interface

Check the bond status. (Note: the sample output below shows an 802.3ad/LACP bond with slaves eth1 and eth2; with the active-backup configuration from Step 2, the Bonding Mode line will read "fault-tolerance (active-backup)" instead.)

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Aggregator selection policy (ad_select): stable
bond bond0 has no active aggregator

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f5:b7:11
Aggregator ID: N/A

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f5:b7:1b
Aggregator ID: N/A
To bring up the bonding interface, run

ifup bond0
To bring down the bonding interface, run

ifdown bond0
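With the active-backup mode configured above, you can also ask the bonding driver which slave is currently carrying traffic (standard sysfs path for the bonding driver):

# cat /sys/class/net/bond0/bonding/active_slave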


Remove

Bring down the bond0 device:

ifconfig bond0 down

Remove the slave interfaces from the bond0 device, first eth0:

echo "-eth0" > /sys/class/net/bond0/bonding/slaves

and eth1:

echo "-eth1" > /sys/class/net/bond0/bonding/slaves

Next, remove the bond0 device itself:

echo "-bond0" > /sys/class/net/bonding_masters

and unload the bonding module:

rmmod bonding

Finally, remove the bonding entries from /etc/network/interfaces and /etc/modules so the bond is not recreated at boot.
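To confirm the removal, a quick sanity check with standard tools:

# ip link show bond0 (should now report that the device does not exist)
# lsmod | grep bonding (should print nothing after rmmod)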

Source:
https://help.ubuntu.com/community/UbuntuBonding

Using resize2fs to resize a Linux partition (CentOS/Ubuntu)

In the virtualization world, when you create a VM you sometimes assign a certain amount of disk space and later realize you need more. Resizing a partition is not an easy task; especially for the boot partition, you have to reboot the machine for the change to take effect. This note walks through the basic steps to resize a Linux partition online or offline (the steps apply to both CentOS and Ubuntu).

Before going through the guide, let's clarify the terminology. Three kinds of device will be mentioned in this document: the physical disk, the underlying partition, and the partition. The physical disk is the device we attach to the server (the hard disk); an underlying partition is one that occupies a whole physical disk (e.g. /dev/sda); and a partition is a parted slice of a disk (/dev/sda1, /dev/sda2).

First, at the hypervisor layer, extend the physical disk and check with fdisk -l; we should see that the disk is ready for the partition to be extended.

Please NOTE:
1) We cannot extend the root disk online; the new size only takes effect after the server is rebooted. So just extend the physical disk and reboot the server (on CentOS, resize2fs runs automatically after boot; on Ubuntu we have to run resize2fs manually).

2) According to their documentation, the resize2fs and xfs_growfs utilities can extend a file system without unmounting it, but after testing, I found this only works when the file system sits on an underlying partition (a whole disk). It does not work for other partitions.


For an offline resize (resizing a partition such as /dev/sda1 or /dev/sda2):

In this example, an ext4 partition will be resized. First, unmount the partition, check it for errors, and disable the journal:
# umount <device>
# fsck -n <device> (e.g. fsck -n /dev/sda1)
# tune2fs -O ^has_journal <device>
# e2fsck -f <device>

The next step may stress you out: we have to delete the partition whose capacity we want to extend. Don't worry, we are not going to lose any data; this step just makes sure the partition table is updated with the new size.
# fdisk <underlying partition> (e.g. fdisk /dev/sda)
Type p to print the partition table and identify the partition to delete.
Command (m for help): d
Partition number (1-4): <partition number> (e.g. 1, where number 1 is /dev/sda1)
Command (m for help): n
Command action: p
Partition number (1-4): <same partition number we just deleted> (e.g. 1)
First cylinder (<number> - <number>): [enter]
Last cylinder, +cylinders or +size{K,M,G} (<number> - <number>): [enter]
Command (m for help): w
Accepting the defaults with [enter] keeps the original start and extends the end to the new disk size; double-check with p that the new partition starts where the old one did before typing w.
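If the kernel still shows the old partition table after w, force it to re-read the table before continuing (partprobe ships with the parted package; a reboot also works):

# partprobe /dev/sda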




Now, recheck the partition and resize it:
# fsck -n <device>
# resize2fs <device> (e.g. resize2fs /dev/sda1)
We are almost done; re-enable the journal that we disabled in the first step:
# tune2fs -j <device> (e.g. tune2fs -j /dev/sda1)
Now, the partition is ready to mount and use.

For an online resize (resizing an underlying partition such as /dev/sda):

Just run:
# resize2fs <device_name>
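For example, assuming a data disk /dev/vdb that carries an ext4 file system directly (no partition table) and has already been grown at the hypervisor (the device name here is hypothetical):

# resize2fs /dev/vdb

For an XFS file system, the equivalent tool is xfs_growfs, which takes the mount point rather than the device, e.g. a file system mounted at a hypothetical /data:

# xfs_growfs /data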