Thursday, March 3, 2016

GlusterFS 3.5.8 is out, two bugs fixed in this stable release update

Last month GlusterFS 3.5.8 was tagged for release in our git repository. The tarball has been placed on our main distribution server, and packages have been built for several distributions. Because releases and packaging are mostly done by volunteers in their free time, it can take a little longer before packages for all distributions become available. Please be patient until the release is complete (at that point we'll update the 3.5/LATEST symlink). If you are interested in helping out with the packaging for a certain distribution or project, send your introduction and offer of assistance to our packaging mailing list.

Release Notes for GlusterFS 3.5.8

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, 3.5.6 and 3.5.7 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1117888: Problem when enabling quota: Could not start quota auxiliary mount
  • 1288195: log improvements:- enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. Allow insecure connections to the bricks:
      gluster volume set <volname> server.allow-insecure on
    2. Restart the volume for the option to take effect:
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. Restart glusterd for the change to take effect:
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
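    The workaround above can be sketched as follows. This is a minimal illustration, not code from the project: it assumes the glusterfs-api development package is installed (compile with -lgfapi), and the volume and server names are placeholders.

    ```c
    /* Sketch: only call glfs_fini() after glfs_init() has succeeded,
     * to avoid the hang described above.  "myvolume" and the server
     * address are placeholder values. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("myvolume");   /* placeholder volume name */
        if (!fs)
            return EXIT_FAILURE;

        glfs_set_volfile_server(fs, "tcp", "gluster.example.com", 24007);

        if (glfs_init(fs) != 0) {
            /* Workaround: do NOT call glfs_fini() here, because
             * initialization did not complete successfully. */
            fprintf(stderr, "glfs_init failed\n");
            return EXIT_FAILURE;
        }

        /* ... use the volume through other glfs_* calls ... */

        glfs_fini(fs);                       /* safe: init succeeded */
        return EXIT_SUCCESS;
    }
    ```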
  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888).