experience involves Red Hat's Enterprise Linux distribution, but
I had trouble finding information on adding XFS support. I
specifically wanted to avoid anything difficult or complicated
to reproduce. CentOS seemed like the best OS choice, as it
leveraged my Red Hat experience and had a trivial process for
adding XFS support.
For the project system, I installed the OS using Kickstart.
I created a kickstart file that automatically created a 6GB /,
150MB /boot and a 64GB swap partition on the /dev/sda
virtual disk using a conventional msdos disk label and ext3
filesystems. (I typically would allocate less swap than this,
but I’ve found through experience that the xfs_check utility
required something like 26GB of memory to function—
anything less and it would die with “out of memory”
errors). The Kickstart installation ignored the /dev/sdb disk
for the time being. I could have automated all the disk
partitioning and XFS configuration via Kickstart, but I
specifically wanted to play around with the creation of the
large partition manually. Once the Kickstart OS install was
finished, I manually added XFS support with the following
yum commands:
yum install kmod-xfs xfsprogs
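For reference, the partition layout described above corresponds
to Kickstart directives along these lines (a sketch reconstructed
from the sizes given, not the exact file; sizes are in MB):
clearpart --all --drives=sda
part /boot --fstype ext3 --size=150 --ondisk=sda
part swap --size=65536 --ondisk=sda
part / --fstype ext3 --size=6144 --ondisk=sda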
At this time, I downloaded and installed the 3ware tw_cli
command line and 3dm Web interface package from the
3ware Web site. I used the 3dm Web interface to create
the hot spare.
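The same step also can be done from the command line; assuming
the spare drive sits on port 15 of controller 0 (the port number
here is only illustrative), the tw_cli equivalent would be
something like:
# tw_cli /c0 add type=spare disk=15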
Next, I used parted to create a gpt-labeled disk with a single
XFS filesystem on the 14TB virtual disk /dev/sdb. Another
argument for using something other than ext3 is filesystem
creation time. For example, when I first experimented with a
3TB test partition under both ext3 and XFS, an mkfs took 3.5
hours under ext3 and less than 15 seconds for XFS. The XFS
mkfs operation was extremely fast, even with the RAID array
initialization in progress.
I used the following commands to set up the large partition
named /backup for storing the disk-to-disk backups:
# parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary xfs 0% 100%
(parted) print
Model: AMCC 9650SE-16M DISK (scsi)
Disk /dev/sdb: 13.9TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 17.4kB 13.9TB 13.9TB xfs primary
(parted) quit
# mkfs.xfs /dev/sdb1
# mount /dev/sdb1 /backup
Next, I made the mount permanent by adding it to /etc/fstab.
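A minimal /etc/fstab entry for such a mount looks like this
(default options shown for illustration):
/dev/sdb1   /backup   xfs   defaults   0 0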
I now considered the system to be pretty much functional,
and the rest of the configuration effort was specifically related
to the system’s role as a disk-to-disk backup server.
Performance
I know I could have used SAS drives with an SAS controller for
better performance, but SAS disks are not yet available in the
capacities offered by SATA, and they would have been much
more expensive for less disk space.
For this project, I settled on a 16-drive system with a
16-port RAID controller. I did find a Supermicro 24-drive
chassis (SC846) and a 3ware 24-port RAID controller
(9650SE-24M8) that should work together. It would be
interesting to see whether there is any performance downside
to such a large system, but this would be overkill for
my needs at the moment.
There are still plenty of options and choices with the
existing configuration that may yield better performance
than the default settings. I did not pursue all of these, as
I needed to get this particular machine into production
quickly. I would be interested in exploring performance
improvements in the future, especially if the system was
going to be used interactively by humans (and not just for
automated backups late at night).
Possible areas for performance tuning include the following:
1) RAID schemes: I could have used a different scheme for
better performance, but I felt RAID 5 was sufficient for my
needs. I think RAID 6 also would have worked, and I would
have ended up with the same amount of disk space (assuming
two parity drives and no hot spare): with sixteen 1TB drives,
RAID 5 with one parity drive and one hot spare leaves 14 data
drives, exactly what RAID 6 with two parity drives and no
spare would leave. My understanding, however, is that RAID 6
would be slower than RAID 5.
2) ext3/XFS filesystem creation and mount options: I had a
hard time finding any authoritative or definitive information on
how to make XFS as fast as possible for a given situation. In
my case, this was a relatively small number of large (multi-
gigabyte) files. The mount and mkfs options that I used came
from examples I found on various discussion groups, but I did
not try to verify their performance claims. For example, some
articles said that the mount options noatime, nodiratime
and osyncisdsync would improve performance (an example fstab
entry using these options appears after this list). 3ware has a
whitepaper covering optimizing XFS and 2.6 kernels with an
older RAID controller model, but I have not tried those
suggestions on the controller I used.
3) Drive jumpers: one surprise (for me at least) was finding
that the Seagate drives come from the factory with the
1.5Gbps rate-limit jumper installed. As far as I can tell, the
drive documentation does not say that this is the factory
default setting, only that it “can be used”. Removing this
jumper enables the drive to run at 3.0Gbps with controllers
that support this speed (such as the 3ware 9650SE used for this
project). I was able to confirm the speed setting by using the
3ware 3dm Web interface (Information→Drive), but when I
tried using tw_cli to view the same information, it did not
display the speed currently in use:
# tw_cli /c0/p0 show lspeed
/c0/p0 Link Speed Supported = 1.5 Gbps and 3.0 Gbps
/c0/p0 Link Speed = unknown
The rate-limiting jumper is tiny and recessed into the back
of the drive. I ended up either destroying or losing most of
the jumpers in the process of prying them off the pins (before
buying an extremely long and fine-tipped pair of needle-nose
pliers for this task).
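To illustrate the mount options mentioned in item 2, the
corresponding /etc/fstab entry would look something like the
following (a sketch only; osyncisdsync was a valid XFS option
on the 2.6 kernels of that era, and as noted above, I did not
verify the performance claims):
/dev/sdb1   /backup   xfs   noatime,nodiratime,osyncisdsync   0 0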