Andrew Gideon
2010-06-30 01:43:02 UTC
We do backups using rsync --link-dest. On one of our volumes, we just
hit a limit in ext3 which generated the error:
rsync: link "..." => ... failed: Too many links (31)
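For context, each snapshot run looks roughly like this (the paths and
snapshot naming here are placeholders, not our actual layout):

    rsync -a --link-dest=/backups/snap.1/ /source/ /backups/snap.0/

Every file unchanged since the previous snapshot becomes another hard
link to the same inode, so a file's link count grows by one per
snapshot until the file finally changes.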
The error appears to be related to a limit on the number of directory entries
to which an inode may be connected. In other words, it's a limit on the
number of hard links that can exist to a given file. This limit is
apparently 32000.
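The limit is easy to see directly with a throwaway loop (a sketch; the
file names are arbitrary):

    # Create hard links until the kernel refuses with EMLINK (31).
    touch target
    i=0
    while ln target "link.$i" 2>/dev/null; do
        i=$((i + 1))
    done
    # On ext3, i ends at 31999: 32000 links total, counting "target".
    echo "gave up after $i extra links"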
This isn't specifically an rsync problem, of course. I can recreate it
with judicious use of "cp -Rl", for example. But any site using
--link-dest as heavily as we are - and ext3 - is vulnerable to this. So I
thought I'd share our experience.
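To make the "cp -Rl" recreation concrete, something along these lines
fails the same way once the counts climb high enough (a sketch; the
directory names are made up):

    # Each "cp -Rl" snapshot adds one link per file, just as
    # --link-dest does.
    mkdir snap.0 && touch snap.0/file
    i=1
    while cp -Rl snap.0 "snap.$i" 2>/dev/null; do
        i=$((i + 1))
    done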
This is admittedly an extreme case: We've a lot of snapshots preserved
for this volume. And the files failing are under /usr/lib/locale; there
is a lot of hardlinking already occurring in there.
I've thought of two solutions: (1) deliberately breaking linking (and
therefore wasting disk space) or (2) using a different file system.
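For (1), the obvious form is to omit --link-dest every so often, so
that new snapshots start accumulating links against fresh inodes, at
the cost of one full extra copy each time. A rough sketch (the
interval and paths are arbitrary):

    # $n is the current snapshot number (set elsewhere).
    # Every 30th snapshot, take a plain copy instead of linking back.
    if [ $((n % 30)) -eq 0 ]; then
        rsync -a /source/ "/backups/snap.$n/"
    else
        rsync -a --link-dest="/backups/snap.$((n - 1))/" \
              /source/ "/backups/snap.$n/"
    fi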
As for (2): this is running on CentOS 5, so xfs was readily available to
try. I've had
positive experiences with xfs in the past, and from what I have read this
limit does not exist in that file system. I've tried it out, and - so
far - the problem has been avoided. There are inodes with up to 32868
links at the moment on the xfs copy of this volume.
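For anyone who wants to check their own volumes: GNU find can report
link counts directly, so the highest count on a volume is just (the
path is a placeholder):

    find /backups -type f -printf '%n\n' | sort -n | tail -1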
I'm curious, though, what thoughts others might have.
I did wonder, for example, whether rsync should, when faced with this
error, fall back on creating a copy. But should rsync include behavior
that exists only to work around a file system limit? Perhaps only as a
command line option (i.e., definitely not the default behavior)?
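In the meantime, a crude version of that fallback is possible from
outside rsync: if the --link-dest pass fails, a second plain pass
copies only what's missing, since everything that did get linked is
already up to date and is skipped (same placeholder paths as above):

    rsync -a --link-dest=/backups/snap.1/ /source/ /backups/snap.0/ \
        || rsync -a /source/ /backups/snap.0/

A nonzero exit can of course mean other failures too, which is part of
why a real fallback inside rsync might be cleaner.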
Thanks...
- Andrew