Thursday, September 15, 2016

GlusterFS 3.8.4 is available, Gluster users are advised to update

Even though the last 3.8 release was just two weeks ago, we're sticking to the release schedule and have 3.8.4 ready for all our current and future users. As with all updates, we advise users of previous versions to upgrade to the latest and greatest. Several bugs have been fixed, and upgrading is one way to prevent hitting known problems in the future.

Release notes for Gluster 3.8.4

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2 and 3.8.3 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 23 patches have been merged, addressing 22 bugs:
  • #1332424: geo-rep: address potential leak of memory
  • #1357760: Geo-rep silently ignores config parser errors
  • #1366496: 1 mkdir generates tons of log messages from dht xlator
  • #1366746: EINVAL errors while aggregating the directory size by quotad
  • #1368841: Applications not calling glfs_h_poll_upcall() have upcall events cached for no use
  • #1368918: tests/bugs/cli/bug-1320388.t: Infrequent failures
  • #1368927: Error: quota context not set inode (gfid:nnn) [Invalid argument]
  • #1369042: thread CPU saturation limiting throughput on write workloads
  • #1369187: fix bug in protocol/client lookup callback
  • #1369328: [RFE] Add a count of snapshots associated with a volume to the output of the vol info command
  • #1369372: gluster snap status xml output shows incorrect details when the snapshots are in deactivated state
  • #1369517: rotated FUSE mount log is using to populate the information after log rotate.
  • #1369748: Memory leak with a replica 3 arbiter 1 configuration
  • #1370172: protocol/server: readlink rsp xdr failed while readlink got an error
  • #1370390: Locks xlators is leaking fdctx in pl_release()
  • #1371194: segment fault while join thread reaper_thr in fini()
  • #1371650: [Open SSL] : Unable to mount an SSL enabled volume via SMB v3/Ganesha v4
  • #1371912: gluster system:: uuid get hangs
  • #1372728: Node remains in stopped state in pcs status with "/usr/lib/ocf/resource.d/heartbeat/ganesha_mon: line 137: [: too many arguments ]" messages in logs.
  • #1373530: Minor improvements and cleanup for the build system
  • #1374290: "gluster vol status all clients --xml" doesn't generate xml if there is a failure in between
  • #1374565: [Bitrot]: Recovery fails of a corrupted hardlink (and the corresponding parent file) in a disperse volume

Tuesday, August 23, 2016

The out-of-order GlusterFS 3.8.3 release addresses a usability regression

On occasion the Gluster project deems an out-of-order release the best approach to address a problem that got introduced with the last update. The 3.8.3 version is such a release, and we advise all users to upgrade to it, if possible skipping the 3.8.2 release. See the included release notes for more details. We're sorry for any inconvenience caused.

Release notes for Gluster 3.8.3

This is a bugfix release. The Release Notes for 3.8.0, 3.8.1 and 3.8.2 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Out of Order release to address a severe usability regression

Due to a major regression that was not caught by any of the testing performed before the release, this version is being published outside of the normal schedule.
The main reason to release 3.8.3 earlier than planned is to fix bug 1366813:
On restarting GlusterD or rebooting a GlusterFS server, only the bricks of the first volume get started. The bricks of the remaining volumes are not started. This is a regression caused by a change in GlusterFS 3.8.2.
This regression breaks the automatic start of volumes on rebooting servers and leaves the volumes inoperable. GlusterFS volumes could be left in an inoperable state after upgrading to 3.8.2, as upgrading involves restarting GlusterD.
Users can forcefully start the remaining volumes by running the gluster volume start <name> force command.
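The workaround above can be sketched as a small shell loop. Note that this is a hypothetical helper, not part of the Gluster CLI: it assumes the gluster binary is on the PATH of one of the affected servers, and the command name is parameterized only so the sketch can be exercised without a live cluster.

```shell
# Hypothetical helper (not part of the Gluster CLI): force-start every
# volume reported by `gluster volume list`, working around bug 1366813.
# The gluster command is parameterized so it can be stubbed for testing.
force_start_all_volumes() {
    cmd="${1:-gluster}"
    for vol in $("$cmd" volume list); do
        "$cmd" volume start "$vol" force
    done
}
```

On a live server, calling `force_start_all_volumes` with no argument invokes the real gluster binary.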

Bugs addressed

A total of 24 patches have been merged, addressing 21 bugs:
  • #1357767: Wrong XML output for Volume Options
  • #1362540: glfs_fini() crashes with SIGSEGV
  • #1364382: RFE:nfs-ganesha:prompt the nfs-ganesha disable cli to let user provide "yes or no" option
  • #1365734: Mem leak in meta_default_readv in meta xlators
  • #1365742: inode leak in brick process
  • #1365756: [SSL] : gluster v set help does not show ssl options
  • #1365821: IO ERROR when multiple graph switches
  • #1365864: gfapi: use const qualifier for glfs_*timens()
  • #1365879: [libgfchangelog]: If changelogs are not available for the requested time range, no proper error message
  • #1366281: glfs_truncate missing
  • #1366440: [AFR]: Files not available in the mount point after converting Distributed volume type to Replicated one.
  • #1366482: SAMBA-DHT : Crash seen while rename operations in cifs mount and windows access of share mount
  • #1366489: "heal info --xml" not showing the brick name of offline bricks.
  • #1366813: Second gluster volume is offline after daemon restart or server reboot
  • #1367272: [HC]: After bringing down and up of the bricks VM's are getting paused
  • #1367297: Error and warning messages related to xlator/features/snapview-client.so adding up to the client log on performing IO operations
  • #1367363: Log EEXIST errors at DEBUG level
  • #1368053: [geo-rep] Stopped geo-rep session gets started automatically once all the master nodes are upgraded
  • #1368423: core: use for makedev(3), major(3), minor(3)
  • #1368738: gfapi-trunc test shouldn't be .t

Friday, August 12, 2016

The GlusterFS 3.8.2 bugfix release is available

Pretty much according to the release schedule, GlusterFS 3.8.2 has been released this week. Packages are available in the standard repositories, and are moving from testing status to regular updates in the various distributions.

Release notes for Gluster 3.8.2

This is a bugfix release. The Release Notes for 3.8.0 and 3.8.1 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 54 patches have been merged, addressing 50 bugs:
  • #1339928: Misleading error message on rebalance start when one of the glusterd instance is down
  • #1346133: tiering : Multiple brick processes crashed on tiered volume while taking snapshots
  • #1351878: client ID should logged when SSL connection fails
  • #1352771: [DHT]: Rebalance info for remove brick operation is not showing after glusterd restart
  • #1352926: "gluster volume status client" isn't showing any information when one of the nodes in a 3-way Distributed-Replicate volume is shut down
  • #1353814: Bricks are starting when server quorum not met.
  • #1354250: Gluster fuse client crashed generating core dump
  • #1354395: rpc-transport: compiler warning format string
  • #1354405: process glusterd set TCP_USER_TIMEOUT failed
  • #1354429: [Bitrot] Need a way to set scrub interval to a minute, for ease of testing
  • #1354499: service file is executable
  • #1355609: [granular entry sh] - Clean up (stale) directory indices in the event of an rm -rf and also in the normal flow while a brick is down
  • #1355610: Fix timing issue in tests/bugs/glusterd/bug-963541.t
  • #1355639: [Bitrot]: Scrub status- Certain fields continue to show previous run's details, even if the current run is in progress
  • #1356439: Upgrade from 3.7.8 to 3.8.1 doesn't regenerate the volfiles
  • #1357257: observing " Too many levels of symbolic links" after adding bricks and then issuing a replace brick
  • #1357773: [georep]: If a georep session is recreated the existing files which are deleted from slave doesn't get sync again from master
  • #1357834: Gluster/NFS does not accept dashes in hostnames in exports/netgroups files
  • #1357975: [Bitrot+Sharding] Scrub status shows incorrect values for 'files scrubbed' and 'files skipped'
  • #1358262: Trash translator fails to create 'internal_op' directory under already existing trash directory
  • #1358591: Fix spurious failure of tests/bugs/glusterd/bug-1111041.t
  • #1359020: [Bitrot]: Sticky bit files considered and skipped by the scrubber, instead of getting ignored.
  • #1359364: changelog/rpc: Memory leak- rpc_clnt_t object is never freed
  • #1359625: remove hardcoding in get_aux function
  • #1359654: Polling failure errors getting when volume is started&stopped with SSL enabled setup.
  • #1360122: Tiering related core observed with "uuid_is_null () message".
  • #1360138: [Stress/Scale] : I/O errors out from gNFS mount points during high load on an erasure coded volume,Logs flooded with Error messages.
  • #1360174: IO error seen with Rolling or non-disruptive upgrade of an distribute-disperse(EC) volume from 3.7.5 to 3.7.9
  • #1360556: afr coverity fixes
  • #1360573: Fix spurious failures in split-brain-favorite-child-policy.t
  • #1360574: multiple failures of tests/bugs/disperse/bug-1236065.t
  • #1360575: Fix spurious failures in ec.t
  • #1360576: [Disperse volume]: IO hang seen on mount with file ops
  • #1360579: tests: ./tests/bitrot/br-stub.t fails intermittently
  • #1360985: [SNAPSHOT]: The PID for snapd is displayed even after snapd process is killed.
  • #1361449: Direct io to sharded files fails when on zfs backend
  • #1361483: posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop
  • #1361665: Memory leak observed with upcall polling
  • #1362025: Add output option --xml to man page of gluster
  • #1362065: tests: ./tests/bitrot/bug-1244613.t fails intermittently
  • #1362069: [GSS] Rebalance crashed
  • #1362198: [tiering]: Files of size greater than that of high watermark level should not be promoted
  • #1363598: File not found errors during rpmbuild: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py{c,o}
  • #1364326: Spurious failure in tests/bugs/glusterd/bug-1089668.t
  • #1364329: Glusterd crashes upon receiving SIGUSR1
  • #1364365: Bricks doesn't come online after reboot [ Brick Full ]
  • #1364497: posix: honour fsync flags in posix_do_zerofill
  • #1365265: Glusterd not operational due to snapshot conflicting with nfs-ganesha export file in "/var/lib/glusterd/snaps"
  • #1365742: inode leak in brick process
  • #1365743: GlusterFS - Memory Leak - High Memory Utilization

Tuesday, July 19, 2016

First stable update for 3.8 is available, GlusterFS 3.8.1 fixes several bugs

The initial release of Gluster 3.8 was the start of a new Long-Term-Maintenance version with monthly updates. These updates include bugfixes and stability improvements only, making it a version that can safely be installed in production environments. The Long-Term-Maintenance versions are planned to receive updates for a year. With minor releases happening every three months, the upcoming 3.9 version will be a Short-Term-Maintenance release with updates until the next version is released three months later.
GlusterFS 3.8.1 was released a week ago, and in the meantime packages for many distributions have been made available. We recommend that all 3.8.0 users upgrade to 3.8.1. Environments that run on 3.6.x should consider an upgrade path in the coming months; 3.6 will be End-Of-Life when 3.9 is released.

Release notes for Gluster 3.8.1

This is a bugfix release. The Release Notes for 3.8.0 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 35 patches have been merged, addressing 32 bugs:
  • #1345883: [geo-rep]: Worker died with [Errno 2] No such file or directory
  • #1346134: quota : rectify quota-deem-statfs default value in gluster v set help command
  • #1346158: Possible crash due to a timer cancellation race
  • #1346750: Unsafe access to inode->fd_list
  • #1347207: Old documentation link in log during Geo-rep MISCONFIGURATION
  • #1347355: glusterd: SuSE build system error for incorrect strcat, strncat usage
  • #1347489: IO ERROR when multiple graph switches
  • #1347509: Data Tiering:tier volume status shows as in-progress on all nodes of a cluster even if the node is not part of volume
  • #1347524: NFS+attach tier:IOs hang while attach tier is issued
  • #1347529: rm -rf to a dir gives directory not empty(ENOTEMPTY) error
  • #1347553: O_DIRECT support for sharding
  • #1347590: Ganesha+Tiering: Continuous "0-glfs_h_poll_cache_invalidation: invalid argument" messages getting logged in ganesha-gfapi logs.
  • #1348055: cli core dumped while providing/not wrong values during arbiter replica volume
  • #1348060: Worker dies with [Errno 5] Input/output error upon creation of entries at slave
  • #1348086: [geo-rep]: Worker crashed with "KeyError: "
  • #1349274: [geo-rep]: If the data is copied from .snaps directory to the master, it doesn't get sync to slave [First Copy]
  • #1349711: [Granular entry sh] - Implement renaming of indices in index translator
  • #1349879: AFR winds a few reads of a file in metadata split-brain.
  • #1350326: Protocol client not mounting volumes running on older versions.
  • #1350785: Add relative path validation for gluster copy file utility
  • #1350787: gfapi: in case of handle based APIs, close glfd after successful create
  • #1350789: Buffer overflow when attempting to create filesystem using libgfapi as driver on OpenStack
  • #1351025: Implement API to get page aligned iobufs in iobuf.c
  • #1351151: ganesha.enable remains on in volume info file even after we disable nfs-ganesha on the cluster.
  • #1351154: nfs-ganesha disable doesn't delete nfs-ganesha folder from /var/run/gluster/shared_storage
  • #1351711: build: remove absolute paths from glusterfs spec file
  • #1352281: Issues reported by Coverity static analysis tool
  • #1352393: [FEAT] DHT - rebalance - rebalance status o/p should be different for 'fix-layout' option, it should not show 'Rebalanced-files' , 'Size', 'Scanned' etc as it is not migrating any files.
  • #1352632: qemu libgfapi clients hang when doing I/O
  • #1352817: [scale]: Bricks not started after node reboot.
  • #1352880: gluster volume info --xml returns 0 for nonexistent volume
  • #1353426: glusterd: glusterd provides stale port information when a volume is recreated with same brick path

Wednesday, April 13, 2016

GlusterFS 3.5.9 is available, would it be the last 3.5 release?

There has been a delay in announcing the most recent 3.5 release; I'm sorry about that! Packages for most distributions are available by now, either from the standard distribution repositories (NetBSD) or from download.gluster.org.

We are working hard to release the next major version of Gluster. The roadmap for Gluster 3.8 is getting more complete every day. There is still some work to do though. A reminder for all users of the 3.5 stable series: when GlusterFS 3.8 is released, the 3.5 version will become unmaintained. We do our best to maintain three versions of Gluster; with the 3.8 release those will be 3.8, 3.7 and 3.6. Users still running version 3.5 are highly encouraged to start planning their upgrade process. If no critical problems are reported against the 3.5 version, and no patches get sent, 3.5.9 might well be the last 3.5 release.

Release Notes for GlusterFS 3.5.9

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, 3.5.6, 3.5.7 and 3.5.8 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1313968: Request for XML output ignored when stdin is not a tty
  • 1315559: SEEK_HOLE and SEEK_DATA should return EINVAL when protocol support is missing

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist enabling quota will likely fail (Bug 1117888).
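For reference, the four configuration steps listed above can be combined into one shell function. This is a sketch under assumptions: the function name is hypothetical, the glusterd.vol path is the one from the notes (and is overridable only so the sketch can be tested), the `sed` one-liner relies on a GNU sed extension, and `service glusterd restart` may be `systemctl restart glusterd` on systemd-based distributions.

```shell
# Sketch of the libgfapi configuration steps from the known issues above.
# Assumes GNU sed and a SysV-style `service` command; run as root on a
# Gluster server. The glusterd.vol path can be overridden for testing.
enable_insecure_access() {
    volname="$1"
    glusterd_vol="${2:-/etc/glusterfs/glusterd.vol}"

    # Step 1: allow clients to connect from unprivileged ports.
    gluster volume set "$volname" server.allow-insecure on

    # Step 2: restart the volume so the option takes effect.
    gluster volume stop "$volname"
    gluster volume start "$volname"

    # Step 3: add the glusterd option once, before the closing end-volume.
    grep -q 'option rpc-auth-allow-insecure on' "$glusterd_vol" ||
        sed -i '/^end-volume/i option rpc-auth-allow-insecure on' "$glusterd_vol"

    # Step 4: restart glusterd to pick up the new option.
    service glusterd restart
}
```

A call like `enable_insecure_access myvol` (with a hypothetical volume name) then performs all four steps in order; keep in mind that `gluster volume stop` asks for confirmation when run interactively.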

Thursday, March 3, 2016

GlusterFS 3.5.8 is out, two bugs fixed in this stable release update

Last month GlusterFS 3.5.8 was tagged for release in our git repository. The tarball got placed on our main distribution server, and some packages got built for different distributions. Because releases and packages are mostly done by volunteers in their free time, it sometimes takes a little longer to get all the packages for different distributions available. Please be patient until the release has been completed (at that point we'll update the 3.5/LATEST symlink). If you are interested in helping out with the packaging for a certain distribution or project, send your introduction and offer for assistance to our packaging mailing list.

Release Notes for GlusterFS 3.5.8

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5, 3.5.6 and 3.5.7 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1117888: Problem when enabling quota : Could not start quota auxiliary mount
  • 1288195: log improvements:- enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist enabling quota will likely fail (Bug 1117888).

Tuesday, December 15, 2015

GlusterFS 3.5.7 has been released

Around the 10th of each month the release schedule allows for a 3.5 stable update. This one got delayed a few days due to the unfriendly weather in The Netherlands, which made me take some holidays in a sunnier place in Europe.

This release fixes two bugs, one is only a minor improvement for distributions using systemd, the other fixes a potential client-side segfault when the server.manage-gids option is used. Packages for different distributions are available on the main download server, distributions that still provide glusterfs-3.5 packages should get updates out shortly too.

To keep informed about Gluster, you can follow the project on Twitter, read articles on Planet Gluster or check other sources like the mailing lists listed on our home page.

Release Notes for GlusterFS 3.5.7

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1, 3.5.2, 3.5.3, 3.5.4, 3.5.5 and 3.5.6 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1283542: glusterfs does not register with rpcbind on restart
  • 1283691: core dump in protocol/client:client_submit_request

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. gluster volume set <volname> server.allow-insecure on
    2. restarting the volume is necessary
      gluster volume stop <volname>
      gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
      option rpc-auth-allow-insecure on
    4. restarting glusterd is necessary
      service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist enabling quota will likely fail (Bug 1117888).