[farmshare-discuss] glusterfs remounted on barley03, barley07, and barley20
Michael Maxwell Murray
mmurray1 at stanford.edu
Sat Sep 29 17:25:40 PDT 2012
Thanks for telling me! That explains why several of my jobs had
mysteriously stopped making progress, yet were still reported as
running by qstat.
I also had a job running on barley19 that hung. Perhaps /mnt/glusterfs
also needs to be remounted on barley19?
----- Original Message -----
From: "Jason Bishop" <bishopj at stanford.edu>
To: "Open discussion for users of FarmShare" <farmshare-discuss at lists.stanford.edu>
Sent: Saturday, September 29, 2012 3:16:32 PM
Subject: [farmshare-discuss] glusterfs remounted on barley03, barley07, and barley20
Hi folks, the /mnt/glusterfs filesystem dropped from barley03, barley07, and barley20 around 8am this morning. I've remounted it now. If you had any jobs scheduled on these nodes, you may need to re-submit them.
We are also down to 500G free space on /mnt/glusterfs. Please remove any files you can.
barley20:/var/log/glusterfs# df -h /mnt/glusterfs
Filesystem            Size  Used  Avail  Use%  Mounted on
220.127.116.11:/bvol  7.6T  6.7T  484G   94%   /mnt/glusterfs
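If you want to verify for yourself that /mnt/glusterfs is actually mounted on a node before trusting jobs to it, a minimal sketch (the is_mounted helper is my own illustration, not part of any FarmShare tooling):

```shell
#!/bin/sh
# is_mounted: exit 0 if the given path appears as a mountpoint in /proc/mounts.
# (A dropped glusterfs mount leaves the directory present but empty, so a plain
# test -d is not enough; checking /proc/mounts catches that case.)
is_mounted() {
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /mnt/glusterfs; then
    df -h /mnt/glusterfs
else
    echo "/mnt/glusterfs is not mounted on $(hostname)" >&2
fi
```

Jobs that write to an unmounted /mnt/glusterfs path typically hang or silently write to the local disk, which matches the "running in qstat but making no progress" symptom described earlier in the thread.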
On a related note, from a capacity-planning point of view: if you expect to significantly increase the amount of data you store in your /mnt/glusterfs/<user> directory, please let us know at research-computing-support at stanford.edu.
farmshare-discuss mailing list
farmshare-discuss at lists.stanford.edu