XENBEgridClient

XEN managed BEgrid client

The idea is to use XEN to save on hardware resources at BEgrid sites, by combining on the same machine the NAT box (needed for the WN private subnet) and the BEgrid client. This makes it possible to upgrade or reboot the BEgrid client without affecting the NAT functionality.

Requirements

One machine with 2 network interfaces (needed for NAT) and at least 1GB of memory. The memory will mainly be used by the BEgrid client (e.g. for template compilation), as the NAT box requires far fewer resources. Provided that the machine has enough memory and disk space, other domUs such as a monitoring box can be added.

Quick XEN HOWTO

Have a look at this short introduction to XEN for CentOS 5, which provides useful and necessary background for this XEN setup. From the howto:

  dom0 is the privileged administrative domain of which only one can run. 
  domU is an unprivileged domain, of which many can run at the same time. 
  Although it is an incorrect analogy, it often helps to think of dom0 as the host system, and domU as a guest system.

Installation

The installation consists of setting up the dom0 and then installing the BEgrid client in a domU.

I. Installing dom0 including NAT and DNS

I.1. dom0 server installation

Install a basic SL50 i386 machine (e.g. following the BEgrid client base installation; only the base and post-installation steps are needed). The only extra requirement is that xen and a xen-enabled kernel are installed. There are two ways to achieve this:

  • During the installation procedure (recommended!):
    • select Virtualisation during the package selection step. This will install the needed rpms and configure the correct kernel to boot with.
  • After the OS installation:
    • install the xen rpms:
  yum install xen kernel-xen
    • make sure the correct kernel is used at the next reboot. Edit the /etc/grub.conf file and set the default to a xen-enabled kernel entry. (The first title is 0, the second is 1, etc.)
    • reboot and check that the correct kernel is running (see the check below)

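A quick sanity check after the reboot: the running kernel should carry the el5xen suffix (the version string below matches the example xm info output further down; yours may differ):

 uname -r
 2.6.18-8.1.3.el5xen
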
If all went well, you should now have a xen-enabled kernel and the xm tool. Check it with

 xm info

The output should look like

[root@localhost ~]# xm info
host                   : localhost.localdomain
release                : 2.6.18-8.1.3.el5xen
version                : #1 SMP Mon Apr 30 14:45:24 EDT 2007
machine                : i686
nr_cpus                : 1
nr_nodes               : 1
sockets_per_node       : 1
cores_per_socket       : 1
threads_per_core       : 1
cpu_mhz                : 2660
hw_caps                : bfebfbff:00000000:00000000:00000080:00004400
total_memory           : 1006
free_memory            : 0
xen_major              : 3
xen_minor              : 0
xen_extra              : .3-rc5-8.1.3.el
xen_caps               : xen-3.0-x86_32p
xen_pagesize           : 4096
platform_params        : virt_start=0xf5800000
xen_changeset          : unavailable
cc_compiler            : gcc version 4.1.1 20070105 (Red Hat 4.1.1-52)
cc_compile_by          : brewbuilder
cc_compile_domain      : (none)
cc_compile_date        : Mon Apr 30 14:13:15 EDT 2007
xend_config_format     : 2

I.2. NAT on dom0

  • configure both interfaces (one with a public IP and the other for the NAT with private addresses; an example ifcfg file is shown after this list)
  • add the iptables rule (the --to-source address has to be the public IP of the dom0)
iptables -t nat -A POSTROUTING -s 192.168.0.0/255.255.0.0 -d ! 192.168.0.0/255.255.0.0 -j SNAT --to-source 193.190.246.188
/sbin/iptables-save > /etc/sysconfig/iptables
  • enable ip forwarding immediately:
echo 1 > /proc/sys/net/ipv4/ip_forward
  • make the setting permanent. This used to be done with FORWARD_IPV4=yes in /etc/sysconfig/network, but since Red Hat Linux 6.2 the parameter is specified in the /etc/sysctl.conf file instead. So set this in /etc/sysctl.conf:
net.ipv4.ip_forward = 1
  • restart iptables
service iptables restart
  • check that the correct rule set is in place
iptables -L -t nat
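
For reference, a minimal static configuration of the private (NAT) interface could look like the file below. The device name and addresses are assumptions (chosen to match the 192.168.0.0/16 subnet used in the SNAT rule above); adapt them to your site.

 # /etc/sysconfig/network-scripts/ifcfg-eth1 (illustrative)
 DEVICE=eth1
 BOOTPROTO=static
 IPADDR=192.168.10.100
 NETMASK=255.255.0.0
 ONBOOT=yes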

I.3. DNS on dom0 (optional/recommended)

  • you need a DNS for the WNs
yum install bind-utils bind caching-nameserver
  • enable the service
chkconfig --add named
chkconfig --level 345 named on
  • configuration files
    • modify the following files, substituting your own paths and domain names
    • /etc/named.conf (example for iihe.ac.be)
// generated by named-bootconf.pl

options {
        directory "/var/named";
        /*
         * If there is a firewall between you and nameservers you want
         * to talk to, you might need to uncomment the query-source
         * directive below.  Previous versions of BIND always asked
         * questions using port 53, but BIND 8.1 uses an unprivileged
         * port by default.
         */
        // query-source address * port 53;
};

//
// a caching only nameserver config
//
controls {
        inet 127.0.0.1 allow { localhost; } keys { rndckey; };
};
zone "." IN {
        type hint;
        file "named.ca";
};

zone "localhost" IN {
        type master;
        file "localhost.zone";
        allow-update { none; };
};

zone "0.0.127.in-addr.arpa" IN {
        type master;
        file "named.local";
        allow-update { none; };
};

include "/etc/rndc.key";

zone "wn.iihe.ac.be" IN {
  type master;
  file "wn.zone";
  allow-update { none; };
};

zone "168.192.in-addr.arpa" IN {
  type master;
  file "wn.rev";
  allow-update { none; };
};


  • /var/named/wn.zone
    • q is the name of the dom0: replace it with yours
    • q3 is the name of the domU: replace it with yours
$TTL 3600
@                   IN   SOA   q.wn.iihe.ac.be. admin.wn.iihe.ac.be. (
                               2007071200       ; Serial
                               3600             ; Refresh every hour
                               900              ; Retry every 15 minutes
                               3600000          ; Expire 1000 hours
                               3600 )           ; Minimum 1 hour

wn.iihe.ac.be.  IN      NS      q.wn.iihe.ac.be.

q               IN      A       192.168.10.100
q3              IN      A       192.168.10.11

node11-1        IN      A       192.168.11.1
node11-2        IN      A       192.168.11.2
node11-3        IN      A       192.168.11.3
node11-4        IN      A       192.168.11.4
node11-5        IN      A       192.168.11.5
node11-6        IN      A       192.168.11.6
node11-7        IN      A       192.168.11.7
node11-8        IN      A       192.168.11.8

  • /var/named/wn.rev
    • q is the name of the dom0: replace it with yours
    • q3 is the name of the domU: replace it with yours
$TTL 3600
@       IN      SOA     q.wn.iihe.ac.be. admin.wn.iihe.ac.be. (
                                1       ; Serial
                                3600    ; Refresh every hour
                                900     ; Retry every 15 minutes
                                3600000 ; Expire 1000 hours
                                3600 )  ; Minimum 1 hour

          IN      NS      q.wn.iihe.ac.be.

100.10     IN      PTR     q.wn.iihe.ac.be.
11.10     IN      PTR     q3.wn.iihe.ac.be.


1.11    IN      PTR     node11-1.wn.iihe.ac.be.
2.11    IN      PTR     node11-2.wn.iihe.ac.be.
3.11    IN      PTR     node11-3.wn.iihe.ac.be.
4.11    IN      PTR     node11-4.wn.iihe.ac.be.
5.11    IN      PTR     node11-5.wn.iihe.ac.be.
6.11    IN      PTR     node11-6.wn.iihe.ac.be.

    • /var/named/named.local (if not already present)
$TTL    86400
@       IN      SOA     localhost. root.localhost.  (
                                      1997022700 ; Serial
                                      28800      ; Refresh
                                      14400      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
              IN      NS      localhost.

1       IN      PTR     localhost.
    • /var/named/localhost.zone
$TTL    86400
@               IN SOA  @       root (
                                        42              ; serial (d. adams)
                                        3H              ; refresh
                                        15M             ; retry
                                        1W              ; expiry
                                        1D )            ; minimum

                IN NS           @
                IN A            127.0.0.1
                IN AAAA         ::1


    • /var/named/named.ca
    • generate it with
dig @e.root-servers.net . ns > /var/named/named.ca
    • check the contents; it should contain entries like
;       This file holds the information on root name servers needed to
;       initialize cache of Internet domain name servers
;       (e.g. reference this file in the "cache  .  <file>"
;       configuration file of BIND domain name servers).
;
;       This file is made available by InterNIC
;       under anonymous FTP as
;           file                /domain/named.cache
;           on server           FTP.INTERNIC.NET
;       -OR-                    RS.INTERNIC.NET
;
;       last update:    Jan 29, 2004
;       related version of root zone:   2004012900
;
;
; formerly NS.INTERNIC.NET
;
.                        3600000  IN  NS    A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4
;
; formerly NS1.ISI.EDU
;
.                        3600000      NS    B.ROOT-SERVERS.NET.
B.ROOT-SERVERS.NET.      3600000      A     192.228.79.201
;
; formerly C.PSI.NET
;
.                        3600000      NS    C.ROOT-SERVERS.NET.
C.ROOT-SERVERS.NET.      3600000      A     192.33.4.12
;
; formerly TERP.UMD.EDU
;
.                        3600000      NS    D.ROOT-SERVERS.NET.
D.ROOT-SERVERS.NET.      3600000      A     128.8.10.90
;
; formerly NS.NASA.GOV
;
.                        3600000      NS    E.ROOT-SERVERS.NET.
E.ROOT-SERVERS.NET.      3600000      A     192.203.230.10
;
; formerly NS.ISC.ORG
;
.                        3600000      NS    F.ROOT-SERVERS.NET.
F.ROOT-SERVERS.NET.      3600000      A     192.5.5.241
;
; formerly NS.NIC.DDN.MIL
;
.                        3600000      NS    G.ROOT-SERVERS.NET.
G.ROOT-SERVERS.NET.      3600000      A     192.112.36.4
;
; formerly AOS.ARL.ARMY.MIL
;
.                        3600000      NS    H.ROOT-SERVERS.NET.
H.ROOT-SERVERS.NET.      3600000      A     128.63.2.53
;
; formerly NIC.NORDU.NET
;
.                        3600000      NS    I.ROOT-SERVERS.NET.
I.ROOT-SERVERS.NET.      3600000      A     192.36.148.17
;
; operated by VeriSign, Inc.
;
.                        3600000      NS    J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET.      3600000      A     192.58.128.30
;
; operated by RIPE NCC
;
.                        3600000      NS    K.ROOT-SERVERS.NET.
K.ROOT-SERVERS.NET.      3600000      A     193.0.14.129
;
; operated by ICANN
;
.                        3600000      NS    L.ROOT-SERVERS.NET.
L.ROOT-SERVERS.NET.      3600000      A     198.32.64.12
;
; operated by WIDE
;
.                        3600000      NS    M.ROOT-SERVERS.NET.
M.ROOT-SERVERS.NET.      3600000      A     202.12.27.33
; End of File

  • Start the service
/etc/init.d/named start
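  • a quick test that the nameserver answers for your zone (q3 is the domU name from the example zone above):
 dig @localhost q3.wn.iihe.ac.be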
  • small script to generate DNS entries for the worker nodes
#!/bin/bash

for i in $(seq 11 20)
do
  for j in $(seq 1 254)
  do
    echo "node$i-$j     IN      A       192.168.$i.$j" >> /var/named/wn.zone
    echo "$j.$i     IN      PTR     node$i-$j.wn.iihe.ac.be." >> /var/named/wn.rev
  done
done
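
After (re)generating the entries, bump the Serial in wn.zone and wn.rev and reload the nameserver so the changes are picked up:

 service named reload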

II. BEgrid client on domU

This section describes the additional steps related to the BEgrid client setup using Xen. All further details are identical to the standard installation of a BEgrid client.

Resources

A BEgrid client basically does 2 things that are important to consider when using XEN:

  • profile compilation: this requires memory; at least 650MB of memory should be dedicated to the domU for its proper functioning.
  • caching: this requires some disk space; at least 15GB should be foreseen for caching purposes, in addition to the disk space requirements of the base OS installation (about 10GB).

The default configuration provides the disk space through file-based images that are presented to the domU as virtual disks. Two files will be created, one for the OS and one for the web cache.

The amount of memory reserved for the dom0 should be set explicitly:

  • append dom0_mem=350M to the kernel (xen.gz) line of the default boot entry in /etc/grub.conf (see the example entry below)
    • the default boot kernel is listed as default=<integer>
    • counting starts from 0!!
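
For illustration, the default grub.conf entry could then look like this. The version strings match the xm info output above; the root device is an assumption for your disk layout:

 title Scientific Linux Xen (2.6.18-8.1.3.el5xen)
         root (hd0,0)
         kernel /xen.gz-2.6.18-8.1.3.el5 dom0_mem=350M
         module /vmlinuz-2.6.18-8.1.3.el5xen ro root=/dev/VolGroup00/LogVol00
         module /initrd-2.6.18-8.1.3.el5xen.img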

II.1. Preparation on dom0

  • download xen enabled SL5 i386 install images:
wget -O /boot/vmlinuz-xen-install-i386 http://linuxsoft.cern.ch/scientific/50/i386/images/xen/vmlinuz
wget -O /boot/initrd-xen-install-i386.img http://linuxsoft.cern.ch/scientific/50/i386/images/xen/initrd.img
  • check that the SELinux context is properly set on /var/lib/xen/images/. The important part is xen_image_t
  [root@localhost ~]# ls -Zd /var/lib/xen/images/
  drwxr-xr-x  root root system_u:object_r:xen_image_t    /var/lib/xen/images/
  • create 2 empty files:
  dd if=/dev/zero of=/var/lib/xen/images/quattor-client-os.img oflag=direct bs=1M count=10000
  dd if=/dev/zero of=/var/lib/xen/images/quattor-client-cache.img oflag=direct bs=1M count=15000
  • check that the 2 files also carry the xen_image_t security context
 ls -Z /var/lib/xen/images/quattor-client*
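  • if the context is wrong, it can be restored from the SELinux file-context policy with:
  restorecon -R -v /var/lib/xen/images/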
  • add a new virtual network interface (first check if device ethX is unused and available with ifconfig <network device>):
/etc/xen/scripts/network-bridge start vifnum=<number of virtual interface> bridge=<xen bridge name> netdev=<network device>
    • e.g. if you set up the dom0 with interface eth0, you probably already have a device called xenbr0. Check it with ifconfig xenbr0.
    To add a new xen virtual device using eth1, configure eth1 first and then run
  /etc/xen/scripts/network-bridge start vifnum=1 bridge=xenbr1 netdev=eth1
      • if you run into trouble during the configuration of the xen bridge(s), you can remove the configuration and start over with
  /etc/xen/scripts/network-bridge stop vifnum=1 bridge=xenbr1 netdev=eth1
    • To make this permanent, copy /etc/xen/scripts/network-bridge to /etc/xen/scripts/network-bridge.xen.
      • Edit /etc/xen/xend-config.sxp and point it to your new network bridge script (this example uses "network-xen-multi-bridge").
      • In the xend-config.sxp file, the new line should reflect your new script (keep the parentheses):
(network-script network-xen-multi-bridge)
      • Make sure to comment the line that states:
(network-script network-bridge)
      • If you want to create multiple Xen bridges, you must create a custom script; here we call it /etc/xen/scripts/network-xen-multi-bridge. The example below creates two Xen bridges (called xenbr0 and xenbr1) and attaches them to eth0 and eth1, respectively:
#!/bin/sh
# network-xen-multi-bridge
# Exit if anything goes wrong
set -e
# First arg is operation.
OP=$1
shift
script=/etc/xen/scripts/network-bridge.xen
case "${OP}" in
        start)
                $script start vifnum=1 bridge=xenbr1 netdev=eth1
                $script start vifnum=0 bridge=xenbr0 netdev=eth0
                ;;

        stop)
                $script stop vifnum=1 bridge=xenbr1 netdev=eth1
                $script stop vifnum=0 bridge=xenbr0 netdev=eth0
                ;;

        status)
                $script status vifnum=1 bridge=xenbr1 netdev=eth1
                $script status vifnum=0 bridge=xenbr0 netdev=eth0
                ;;
        *)
                echo 'Unknown command: ' ${OP}
                echo 'Valid commands are: start, stop, status'
                exit 1
esac
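
After switching xend to the new script, restart xend (or reboot the dom0) so the bridges are set up by the new script:

 service xend restart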

domU configuration files

Two XEN configuration files are needed.

  • the first one is used solely for the domU installation. It contains references to the installation kernel and initrd image.
  • the second configuration file is used for everything else. It uses pygrub to boot the kernel installed on the domU's own disk.

II.2 domU installation

  • create a file /etc/xen/BEgrid_client_install containing
kernel = "/boot/vmlinuz-xen-install-i386"
ramdisk = "/boot/initrd-xen-install-i386.img"
extra = "text"
name = "BEgrid_client_install"
memory = "700"
disk = [ 'tap:aio:/var/lib/xen/images/quattor-client-os.img,xvda,w',]
vif = [ 'bridge=xenbr0','bridge=xenbr1', ]
vcpus=1
on_reboot = 'destroy'
on_crash = 'destroy'
  • start the domU with xm create BEgrid_client_install.
  • to watch and configure the installation of the domU, use xm console BEgrid_client_install.
    • you will have a text terminal with the familiar installation process.
    • to stop it forcefully use xm destroy BEgrid_client_install
  • follow the steps in the normal BEgrid client base install
    • the disk xvda is the quattor-client-os.img created before.
    • when configuring the network interfaces, make sure to assign a public IP address to the device with the same name as on the dom0. (if eth0 is the public interface on dom0, make it also the public interface on the domU)
    • package selection: you can't choose Virtualisation here. It's ok, the installer knows what it's doing.
  • after the installation you will get a prompt back in dom0.
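
From the dom0 you can follow the domain state with xm list. The output below is purely illustrative (IDs, memory sizes and times will differ on your machine); state -b---- means the domain is blocked/idle, r----- that it is running:

 [root@localhost ~]# xm list
 Name                               ID Mem(MiB) VCPUs State   Time(s)
 Domain-0                            0      350     1 r-----    123.4
 BEgrid_client_install               1      700     1 -b----     56.7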

II.3. domU post-installation

  • to continue, you need to create a file /etc/xen/BEgrid_client containing
name = "BEgrid_client"
memory = "700"
disk = [ 'tap:aio:/var/lib/xen/images/quattor-client-os.img,xvda,w','tap:aio:/var/lib/xen/images/quattor-client-cache.img,xvdb,w', ]
vif = [ 'bridge=xenbr0','bridge=xenbr1', ]
vcpus=1
bootloader="/usr/bin/pygrub"
on_reboot = 'restart'
on_crash = 'restart'
  • to start the domain automatically when the (dom0) system is started, copy the domain configuration to the /etc/xen/auto directory.
    • This will also shut down the domain properly when the system is shut down.
ln -s /etc/xen/BEgrid_client /etc/xen/auto/BEgrid_client
  • start this domU with xm create BEgrid_client.
  • you can use xm console BEgrid_client to connect, or log in using ssh.
    • it's probably best to use ssh
  • you now need to add the device xvdb to the setup (doing it during the installation is difficult).
    • make a partition on it and configure it for mounting as /var/www/cache
  Note (Wim Obbels): parted '/dev/xvdb unit MB print' failed with 'Error: Unable to open /dev/xvdb - unrecognised disk label.'
  Interactively creating one big partition with fdisk and deleting it again made the procedure below work.
  This was caused by confirmation problems with parted on a clean device (which does not need a 'y'); it is avoided by making sure the device is empty (and has no label), as the first dd line below does.

  dd if=/dev/zero of=/dev/xvdb bs=1M count=1
  parted /dev/xvdb mklabel msdos
  end_in_MB=$(parted /dev/xvdb unit MB print | sed -nr '{ s#Disk[[:space:]]+/dev/[a-z]+:[[:space:]]([0-9]+)MB#\1#p }')
  parted /dev/xvdb mkpart primary ext3 1 $end_in_MB
  mkfs.ext3 /dev/xvdb1
  mkdir -p /var/www/cache
  echo "/dev/xvdb1 /var/www/cache  ext3    defaults        1 2" >> /etc/fstab
  mount /var/www/cache
  chown apache.apache /var/www/cache
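
A quick check that the cache partition is mounted where expected:

  df -h /var/www/cache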
  • make sure that the dom0 is configured as the default DNS server for the domU. Check the content of /etc/resolv.conf and verify that it lists the dom0 as the first nameserver. Example of a valid file for the IIHE cluster (193.190.246.188 is the dom0, 193.190.247.71 is the 'normal' DNS):
search iihe.ac.be
nameserver 193.190.246.188
nameserver 193.190.247.71
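
From the domU you can then test DNS (via the dom0) and NAT in one go by resolving a worker-node name and contacting an outside host (the host command assumes bind-utils is installed on the domU; the ping target is just an example):

 host node11-1.wn.iihe.ac.be
 ping -c 3 linuxsoft.cern.ch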

Back to BEgrid_And_Quattor page

