south-ost51_UUID 2113787820 681296236 1432491584 32 /mnt/data[OST:2]
south-ost52_UUID 2113787820 532323328 1581464492 25 /mnt/data[OST:3]
filesystem summary: 8455151280 2579597988 5875553292 30 /mnt/data
#
2. Deactivate the OST service, as described in the Managing space on OST services section in
Chapter 5 of the HP StorageWorks Scalable File Share System User Guide.
3. Use the lfs find command to find all files belonging to the OST service, as shown in the following
example, where the OST service is south-ost51_UUID, on the mount point /mnt/data, and the
output is stored in the /tmp/allfiles.log file:
# lfs find --recursive --obd south-ost51_UUID /mnt/data > /tmp/allfiles.log 2>&1
4. Use the list of files in the /tmp/allfiles.log file to find several large files and relocate them
to another OST service, as follows (an example command sequence appears after this procedure):
a. Create an empty file with an explicit stripe using the lfs setstripe command, or create a
directory with a default stripe.
b. Copy the existing large file to the new location.
c. Remove the original file.
5. If you decide to reactivate the OST service, follow the instructions provided in the Managing space on
OST services section in Chapter 5 of the HP StorageWorks Scalable File Share System User Guide.
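The following command sequence is a sketch of step 4 for a single file. The path /mnt/data/dir1/largefile and the .new suffix are placeholders, and the positional lfs setstripe arguments (stripe size 0 for the file system default, starting OST index -1 to let the file system choose, and stripe count 1) follow the syntax of the Lustre release covered by this guide; verify the syntax with lfs help setstripe before using it on your system.
# lfs setstripe /mnt/data/dir1/largefile.new 0 -1 1
# cp /mnt/data/dir1/largefile /mnt/data/dir1/largefile.new
# lfs getstripe /mnt/data/dir1/largefile.new
# rm /mnt/data/dir1/largefile
# mv /mnt/data/dir1/largefile.new /mnt/data/dir1/largefile
Because the OST service was deactivated in step 2, the new objects are allocated on the other OST services. The lfs getstripe command is optional; it confirms that the copy no longer resides on the full OST before the original file is removed.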
6.3 Using Lustre file systems — performance hints
This section provides some tips on improving the performance of Lustre file systems, and is organized as
follows:
• Creating and deleting large numbers of files (Section 6.3.1)
• Large sequential I/O operations (Section 6.3.2)
• Variation of file stripe count with shared file access (Section 6.3.3)
• Timeouts and timeout tuning (Section 6.3.4)
• Using a Lustre file system in the PATH variable (Section 6.3.5)
• Optimizing the use of the GNU ls command on Lustre file systems (Section 6.3.6)
• Using st_blksize to determine optimum I/O block size (Section 6.3.7)
6.3.1 Creating and deleting large numbers of files
You can reduce the aggregate time it takes to create large numbers of small files, or to delete a large
directory tree, on a Lustre file system by sharing the work among multiple HP SFS client nodes.
As an example, if you have a directory hierarchy comprising 64 subdirectories and a client population
of 16 client nodes, you can share the work of removing the tree so that one process on each client node
removes four subdirectories. The job completes in less time than a single rm -rf command issued at
the top level of the hierarchy from a single client.
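A minimal sketch of this approach is shown below. It assumes the subdirectories are named dir00 through dir63 under /mnt/data/tree, that the 16 client nodes can be reached as client1 through client16 with passwordless ssh, and that each node mounts the file system at /mnt/data; all of these names are placeholders.
#!/bin/sh
# Distribute removal of 64 subdirectories (dir00..dir63) under /mnt/data/tree
# across 16 client nodes (client1..client16), 4 subdirectories per node.
for i in $(seq 1 16)
do
    first=$(( (i - 1) * 4 ))
    dirs=""
    for j in 0 1 2 3
    do
        dirs="$dirs /mnt/data/tree/dir$(printf '%02d' $(( first + j )))"
    done
    ssh client$i "rm -rf $dirs" &
done
wait
Each client removes a disjoint block of four subdirectories, so no two nodes contend for locks on the same directories, in line with the locking note that follows.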
Note also that for best performance under parallel access, client processes on different nodes should
act on different parts of the directory tree; this allows the most efficient caching of file system internal
locks (a large number of lock revocations can impose a high penalty on overall performance).
Similarly, when you are creating files, distributing the load among client nodes operating on individual
subdirectories yields optimum results.
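As a purely hypothetical illustration, each client node could create its files under a subdirectory named after the node, so that no two nodes create entries in the same directory; the path and the create_files program are placeholders for your own workload:
# mkdir -p /mnt/data/results/$(hostname)
# create_files /mnt/data/results/$(hostname)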