Sunday, July 26, 2015

Gluster News of week #29/2015

Another week has passed, and here is another “Gluster Weekly News” post. Please add topics for the next post to the etherpad. Anything that is worth noting can be added; contributions from anyone are very much appreciated.

GlusterFS 3.5.5 landed in the Fedora 21 updates repository (moved out of updates-testing).

Fedora 23 has been branched from Rawhide and will contain GlusterFS 3.7. Previous Fedora releases will stick with the stable branches, meaning F22 keeps glusterfs-3.6 and F21 will stay with glusterfs-3.5.

Shared Storage for Containers in Cloud66 using Gluster.

Real Internet Solutions from Belgium (Dutch-only website) has started to deploy a Gluster solution for its multi-datacenter Cloud Storage products.

On Wednesday the regular community meeting took place under the guidance of Atin. He posted the minutes so that everyone can follow what was discussed.

Several Gluster talks have been accepted for LinuxCon/CloudOpen Europe in Dublin. The accepted talks have been added (with links) to our event etherpad. Attendees interested in meeting other Gluster community people should add their names to the list on the etherpad; maybe we can set up a Gluster meetup or something.

More Gluster topics have been proposed for the OpenStack Summit in Tokyo. Go to https://www.openstack.org/summit/tokyo-2015/vote-for-speakers/SearchForm and search for “gluster” to see them all. You can vote for talks you would like to attend.

GlusterFS 3.7.3 is going to be released by Kaushal early next week.

A documentation update describes the different projects for users, developers, the website, and feature planning. More feedback on these suggestions is very welcome.

Sunday, July 19, 2015

Gluster News of week #28/2015

Thanks to André Bauer for suggesting a "This week in Gluster" blog post series. This post is the first of its kind, and hopefully we manage to write something every week. Future blog posts are edited on a public etherpad where everyone can contribute snippets. Suggestions for improvement can be shared on the mailing lists or on the etherpad.

As every week, the Gluster Community Meeting took place on Wednesday. The minutes have been posted to the list. The next meeting happens on Wednesday, at 12:00 UTC in #gluster-meeting on Freenode IRC.

Proxmox installations of Debian 8 fail when the VM image is stored on a Gluster volume. After many troubleshooting steps and trying different ideas, it was found that there is an issue with the version of Qemu delivered by Proxmox. Qemu on Proxmox 3.4, the Debian 8 kernel with virtio disks, and storage on Gluster do not work well together. Initially thought to be a Gluster issue, the problem was identified as being related to Proxmox. In order to install Debian 8 on Proxmox 3.4, a workaround is to configure IDE/SATA disks instead of virtio, or to use NFS instead of Qemu+libgfapi. More details and alternative workarounds can be found in the email thread.
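For example, switching a VM from virtio to IDE disks could look like the following change in the VM's configuration file (a hypothetical /etc/pve/qemu-server/100.conf with made-up storage and disk names; a sketch, not a verified Proxmox 3.4 configuration):

    # before: Debian 8 installation fails with a virtio disk on Gluster storage
    #bootdisk: virtio0
    #virtio0: glusterstore:vm-100-disk-1,size=32G
    # workaround: attach the same disk as IDE instead
    bootdisk: ide0
    ide0: glusterstore:vm-100-disk-1,size=32G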

On IRC, Jampy asked about a problem with Proxmox containers which have their root filesystem on Gluster/NFS. Gluster/NFS has a bug where unix domain sockets are created as pipes/fifos. This unsurprisingly causes applications that use unix domain sockets to behave incorrectly. Bug 1235231 was already filed and fixed in the master branch; on Friday, backports were posted for the release-3.7, 3.6 and 3.5 branches. The next releases are expected to contain the fix.
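A quick way to check whether a mount handles unix domain sockets correctly is to create one and inspect its file type (a minimal sketch; /mnt/glusternfs stands in for the NFS mount point):

    # create a unix domain socket on the Gluster/NFS mount
    python -c 'import socket; socket.socket(socket.AF_UNIX).bind("/mnt/glusternfs/test.sock")'
    # a correct filesystem reports "socket"; with this bug the file shows up as a fifo
    stat -c %F /mnt/glusternfs/test.sock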

Atin sent out a call for "Gluster Office Hours": manning the #gluster IRC channel and announcing who will (try to) be available on certain days/times. Anyone who is willing to man the IRC channel and help answer (or redirect) questions from users can sign up.

From Douglas Landgraf's report of FISL16:
We also had a talk about Gluster and oVirt by Marcelo Barbosa; he showed how oVirt + Gluster runs at the development company where he works. At the end, people asked how he integrated FreeIPA with oVirt and how well Jenkins and Gerrit servers run on top of oVirt. Next year Marcelo should take two slots for similar talks; people are very interested in Gluster with oVirt and real use cases like those demonstrated.

GlusterFS 3.5.5 and 3.6.4 have been released and packages for different distributions have been made available.

Our Jenkins instance now supports connections over https; previously only http was available. A temporary self-signed certificate is being used; an official one has been requested from the certificate authority.

Manu has updated the NetBSD slaves that are used for regression testing with Jenkins. The slaves are now running NetBSD 7.0 RC1.

Another stable release, GlusterFS 3.5.5, is ready

Packages for Fedora 21 are available in updates-testing; RPMs and .debs can be found on the main Gluster download site.
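Fedora 21 users who want to test the packages before they reach the stable updates repository can pull them from updates-testing, for example (assuming yum is used for package management):

    yum --enablerepo=updates-testing update glusterfs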

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3 and 3.5.4 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1166862: rmtab file is a bottleneck when lot of clients are accessing a volume through NFS
  • 1217432: DHT:Quota:- brick process crashed after deleting .glusterfs from backend
  • 1217433: glusterfsd crashed after directory was removed from the mount point, while self-heal and rebalance were running on the volume
  • 1231641: cli crashes when listing quota limits with xml output

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly (a qemu usage sketch follows after this list):
    1. gluster volume set <volname> server.allow-insecure on
    2. Restart the volume:
       gluster volume stop <volname>
       gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
       option rpc-auth-allow-insecure on
    4. Restart glusterd:
       service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang, as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init (see the C sketch after this list). This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888). A possible workaround is sketched below.
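With the allow-insecure settings from the first item in place, qemu can access a volume directly through libgfapi using its gluster:// URL syntax. A minimal sketch with made-up server, volume and image names:

    # create a disk image directly on the Gluster volume via libgfapi
    qemu-img create -f qcow2 gluster://server1/volname/vm.qcow2 20G
    # boot a VM from the image, again going through libgfapi
    qemu-system-x86_64 -drive file=gluster://server1/volname/vm.qcow2,if=virtio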
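The glfs_fini workaround above amounts to calling glfs_fini only after glfs_init has succeeded. A minimal C sketch (volume and server names are made up):

    #include <stdio.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        glfs_t *fs = glfs_new("volname");
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "server1", 24007);
        if (glfs_init(fs) != 0) {
            /* workaround: do NOT call glfs_fini() here; before a
             * successful glfs_init it would hang (Bug 1134050) */
            fprintf(stderr, "glfs_init failed\n");
            return 1;
        }
        /* ... work with the volume ... */
        glfs_fini(fs); /* safe only after a successful glfs_init() */
        return 0;
    }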
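For the quota issue, a possible workaround (assuming the directory merely needs to exist before quota is enabled) is to create it manually:

    mkdir -p /var/run/gluster
    gluster volume quota <volname> enable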