Install Storage Element
This page details how to set up a Storage Element (SE) with DPM (head node only).
The obvious first step is to make sure the hardware is properly installed and configured (e.g. the hardware RAID arrays).
Second, configure the Quattor templates to match your configuration:
At the K.U.Leuven site we wanted to set up a storage element as a DPM head node with no additional storage servers (yet). The SE has two 3ware RAID cards, each with 5 hard disks of 250GB (2.5TB in total). We wanted one disk on the first controller (sda) to be the installation disk; the corresponding disk on the other controller (sdd) would be used as a manual backup partition (no RAID). All remaining disks would be configured as RAID 5 (one array per controller).
Summarized configuration:
sda: 250GB (1 disk, no raid)
sdb: 750GB (4 disks, raid5)
sdc: 750GB (4 disks, raid5)
sdd: 250GB (1 disk, no raid)
We want to set up sdb as a volatile storage pool, and sdc as a permanent storage pool.
CAUTION: in case of reinstallation, we don't want sdb and sdc to be erased!
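Before touching the Quattor templates, it doesn't hurt to check that the operating system actually sees the arrays with the device names listed above. This is just a sanity check; the device names are those of our setup:

$ cat /proc/partitions          # all block devices known to the kernel
$ parted -s /dev/sdb print      # partition table (if any) on the first raid5 array
$ parted -s /dev/sdc print      # partition table (if any) on the second raid5 array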
Quattor template
First, we define some standard configuration for the SE; these lines will define the SE as a DPM server and as an SRM server:
variable SEDPM_CONFIG_SITE = "site/dpm";
variable SEDPM_SRM_SERVER = true;

include machine-types/se_dpm;
We use the standard partitioning configuration "classic_server" for sda:
include site/filesystems/classic_server;
This results in the following configuration:
/dev/sda1: 250M, /boot
/dev/sda2: 10G, /
/dev/sda3: 4G, swap
/dev/sda4: "the rest", /var
Every partition on sda will be formatted (this is default in the "classic_server" template).
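For reference, here is roughly how the pieces so far fit together in the node's profile. This is only a sketch: the profile name and template paths follow common QWG conventions and may differ at your site; the variables are set before the include so that the machine-types template can pick them up.

object template profile_kg-se01;

# declare this node as a DPM head node with an SRM server
variable SEDPM_CONFIG_SITE = "site/dpm";
variable SEDPM_SRM_SERVER = true;

include machine-types/se_dpm;

# standard partitioning for the system disk (sda)
include site/filesystems/classic_server;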
Next, we specify our own partitioning for sdb and sdc:
# DON'T clear all partition tables (we want to save sdb & sdc!)
# You can comment this line if this is the first installation on these disks and you WANT
# them to be formatted (or if you know what you're doing)
variable AII_OSINSTALL_OPTION_CLEARPART = list();

# Define the partitions (one per device) and their partition table formats:
"/system/blockdevices" = nlist (
    "physical_devs", nlist (
        "sdb", nlist ("label", "msdos"),
        "sdc", nlist ("label", "msdos"),
    ),
    "partitions", nlist (
        "sdb1", nlist (
            "holding_dev", "sdb",
        ),
        "sdc1", nlist (
            "holding_dev", "sdc",
        ),
    ),
);

# Configure these partitions:
"/system/filesystems" = list (
    nlist (
        "mount", true,
        "preserve", true,              # means: don't delete data here
        "format", false,               # means: don't do a "cold" format
        "mountopts", "auto",
        "type", "ext3",
        "mountpoint", "/storage1",
        "block_device", "partitions/sdb1",
    ),
    nlist (
        "mount", true,
        "preserve", true,              # means: don't delete data here
        "format", false,               # means: don't do a "cold" format
        "mountopts", "auto",
        "type", "ext3",
        "mountpoint", "/storage2",
        "block_device", "partitions/sdc1",
    ),
);
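If this is a reinstallation and you are relying on the "preserve"/"format" settings above, it is worth double-checking beforehand that the partitions you want to keep really look the way the template describes them (again just a sanity check, outside Quattor):

$ parted -s /dev/sdb print      # should show a single large partition sdb1
$ parted -s /dev/sdc print      # idem for sdc1
$ blkid /dev/sdb1 /dev/sdc1     # both should report an ext3 filesystem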
Install
Just install the server as you would any other machine with the AII (but remember that formatting huge partitions takes time...).
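For reference, with a standard AII setup this amounts to something like the following on the install server (the hostname is our example node; your AII front-end and options may differ slightly):

$ aii-shellfe --configure kg-se01.cc.kuleuven.be   # (re)generate the installation configuration
$ aii-shellfe --install kg-se01.cc.kuleuven.be     # mark the node for installation on next boot

Then (re)boot the node and let the installation run.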
Post-install
When the server is completely installed, check that the DPM daemons are running:
$ ps ax | grep dpm
 3468 ?        Ssl    0:30 /opt/lcg/bin/dpm -c /opt/lcg/etc/DPMCONFIG -l /var/log/dpm/log
 3783 ?        Ss     0:00 /opt/globus/sbin/globus-gridftp-server -l /var/log/dpm-gsiftp/gridftp.log
<...>
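You can also ask the init scripts directly; the service names below are the usual ones on a DPM head node, but they can vary a bit between releases:

$ service dpm status
$ service dpnsdaemon status
$ service srmv2 status
$ service dpm-gsiftp status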
Also check that all storage is correctly mounted:
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.4G  2.7G  6.3G  30% /
/dev/sda1             244M  5.4M  226M   3% /boot
none                 1014M     0 1014M   0% /dev/shm
/dev/sdb1             688G  101M  653G   1% /storage1    <-- our volatile storage
/dev/sdc1             688G  101M  653G   1% /storage2    <-- our permanent storage
/dev/sda4             216G  209M  205G   1% /var
If this all looks OK, you can create the DPM pools. The first command creates a volatile pool: DPM will start deleting volatile files when only 10% of space is left and stop when 20% is free again, and 100MB is reserved per file instead of the default 200MB. The second command creates a permanent pool ("--s_type P") with default parameters.
$ dpm-addpool --poolname Volatile --def_filesize 100M --gc_start_thresh 10 --gc_stop_thresh 20
$ dpm-addpool --poolname Permanent --def_filesize 200M --s_type P
... and add the filesystems to the pools:
$ dpm-addfs --poolname Volatile --server kg-se01 --fs /storage1
$ dpm-addfs --poolname Permanent --server kg-se01 --fs /storage2
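If you want to change the pool settings later (for example the garbage collection thresholds of the volatile pool), you don't have to recreate the pool: dpm-modifypool can update an existing one. The options below are just an example; check the man page for the full list:

$ dpm-modifypool --poolname Volatile --gc_start_thresh 5 --gc_stop_thresh 15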
You can check if the storage is now properly configured:
$ dpm-qryconf
POOL Volatile DEFSIZE 100.00M GC_START_THRESH 10 GC_STOP_THRESH 20 DEF_LIFETIME 7.0d
     DEFPINTIME 2.0h MAX_LIFETIME 1.0m MAXPINTIME 12.0h FSS_POLICY maxfreespace GC_POLICY lru
     RS_POLICY fifo GIDS 0 S_TYPE - MIG_POLICY none RET_POLICY R
     CAPACITY 687.50G FREE 652.48G ( 94.9%)
  kg-se01.cc.kuleuven.be /storage1 CAPACITY 687.50G FREE 652.48G ( 94.9%)
POOL Permanent DEFSIZE 200.00M GC_START_THRESH 0 GC_STOP_THRESH 0 DEF_LIFETIME 7.0d
     DEFPINTIME 2.0h MAX_LIFETIME 1.0m MAXPINTIME 12.0h FSS_POLICY maxfreespace GC_POLICY lru
     RS_POLICY fifo GIDS 0 S_TYPE P MIG_POLICY none RET_POLICY R
     CAPACITY 687.50G FREE 652.47G ( 94.9%)
  kg-se01.cc.kuleuven.be /storage2 CAPACITY 687.50G FREE 652.47G ( 94.9%)
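At this point you can also verify that the DPNS namespace for your VO exists (the domain and VO are those of our site; on the head node DPNS_HOST is normally picked up from the local configuration, otherwise export it first):

$ export DPNS_HOST=kg-se01.cc.kuleuven.be    # only if not already set
$ dpns-ls -l /dpm/cc.kuleuven.be/home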
Testing
You can test the DPM server from your login node:
$ echo "test" > testfile $ lcg-cr --vo betest -d kg-se01.cc.kuleuven.be -l lfn:/grid/betest/testfile file://$(pwd)/testfile
This should return a GUID, like:
guid:d65d8b67-6216-420b-a6fa-f9cfdea06969
Now you can check on the SE if the file is there:
$ ls -lrt /storage*
Now browse down through the directories until you find the file (hint: its contents are "test").
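To round off the test: on the SE you can locate the replica on disk, and from your login node you can copy the file back and clean it up again (same VO and LFN as before; the copy-back filename is just an example):

$ grep -rl test /storage*                    # on the SE: locate the replica on disk
$ lcg-cp --vo betest lfn:/grid/betest/testfile file://$(pwd)/testfile.back
$ cat testfile.back                          # should print "test"
$ lcg-del --vo betest -a lfn:/grid/betest/testfile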