Sunday, October 5, 2014

GlusterFS 3.5.3beta1 has been released for testing

The first beta for GlusterFS 3.5.3 is now available for download.

Packages for different distributions will land on the download server over the next few days. When packages become available, the package maintainers will send a notification to the gluster-users mailing list.

With this beta release, we make it possible for bug reporters and testers to check if issues have indeed been fixed. All community members are invited to test and/or comment on this release.

If a bug from the list below has not been sufficiently fixed, please open the bug report, leave a comment with details of the testing, and change the status of the bug to ASSIGNED.

In case someone has successfully verified a fix for a bug, please change the status of the bug to VERIFIED.

The Release Notes for 3.5.0, 3.5.1 and 3.5.2 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

  • 1081016: glusterd needs xfsprogs and e2fsprogs packages
  • 1129527: DHT :- data loss - file is missing on renaming same file from multiple client at same time
  • 1129541: [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists"
  • 1132391: NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
  • 1133949: Minor typo in afr logging
  • 1136221: The memories are exhausted quickly when handle the message which has multi fragments in a single record
  • 1136835: crash on fsync
  • 1138922: DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
  • 1139103: DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
  • 1139170: DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
  • 1139245: vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
  • 1140338: rebalance is not resulting in the hash layout changes being available to nfs client
  • 1140348: Renaming file while rebalance is in progress causes data loss
  • 1140549: DHT: Rebalance process crash after add-brick and `rebalance start' operation
  • 1140556: Core: client crash while doing rename operations on the mount
  • 1141558: AFR : "gluster volume heal <volume_name> info" prints some random characters
  • 1141733: data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back
  • 1142052: Very high memory usage during rebalance
  • 1142614: files with open fd's getting into split-brain when bricks goes offline and comes back online
  • 1144315: core: all brick processes crash when quota is enabled
  • 1145000: Spec %post server does not wait for the old glusterd to exit
  • 1147243: nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. Allow insecure ports on the volume:
       gluster volume set <volname> server.allow-insecure on
    2. Restart the volume:
       gluster volume stop <volname>
       gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
       option rpc-auth-allow-insecure on
    4. Restart glusterd:
       service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind disabled
    
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang, as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888); a possible workaround is sketched below.
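
A simple workaround for that last item (an assumption of mine, not part of the official notes) is to create the directory manually before enabling quota:

    mkdir -p /var/run/gluster
    gluster volume quota <volname> enable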

Monday, September 22, 2014

Experimenting with Ceph support for NFS-Ganesha

NFS-Ganesha is a user-space NFS-server that is available in Fedora. It contains several plugins (FSALs, File System Abstraction Layers) for supporting different storage backends. Among the more interesting ones are FSAL_CEPH for the Ceph filesystem and FSAL_GLUSTER for Gluster volumes.

Setting up a basic NFS-Ganesha server

Exporting a mounted filesystem is pretty simple. Unfortunately this failed for me when running with the standard nfs-ganesha packages on a minimal Fedora 20 installation. The following changes were needed to make NFS-Ganesha work for a basic export:

  • install rpcbind and make the nfs-ganesha.service depend on it (see the drop-in sketch after this list)
  • copy /etc/dbus-1/system.d/org.ganesha.nfsd.conf from the sources
  • create a /etc/sysconfig/nfs-ganesha environment file
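
For the rpcbind dependency, a small systemd drop-in should be enough. This is only a sketch; the drop-in filename is arbitrary:

# /etc/systemd/system/nfs-ganesha.service.d/rpcbind.conf
[Unit]
Requires=rpcbind.service
After=rpcbind.service

Run 'systemctl daemon-reload' afterwards so that systemd picks up the new dependency.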

When these initial things have been taken care of, a configuration file needs to be created. The default configuration file mentioned in the environment file is /etc/ganesha.nfsd.conf. The sources of nfs-ganesha contain some examples; the vfs.conf is quite usable as a starting point, and a trimmed-down sketch of it follows below. After copying the example and modifying the paths to something more suitable, starting the NFS-server should work:

# systemctl start nfs-ganesha

In case something failed, there should be a note about it in /var/log/ganesha.log.
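
For reference, a trimmed-down export based on the vfs.conf example could look like the following sketch. The path, pseudo path and export id are only placeholders, and the exact syntax may differ between NFS-Ganesha versions:

EXPORT
{
    # unique id of this export, any free number works
    Export_Id = 77;
    # the local directory that gets exported
    Path = "/srv/export";
    # where the export appears in the NFSv4 pseudo filesystem
    Pseudo = "/export";
    Access_Type = RW;
    FSAL {
        Name = VFS;
    }
}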

Exporting the Ceph filesystem with NFS-Ganesha

This assumes you have a working Ceph Cluster which includes several MON, OSD and one or more MDS daemons. The FSAL_CEPH from NFS-Ganesha uses libcephfs, which seems to be provided by the ceph-fuse package on Fedora. The easiest way to make sure that the Ceph filesystem is functional is to try and mount it with ceph-fuse.

The minimal requirements for a Ceph client system to access the Ceph Cluster seem to be an /etc/ceph/ceph.conf with a [global] section and a suitable keyring. The ceph.conf can be pushed to the NFS-server from the system where ceph-deploy was used:

$ ceph-deploy config push $NFS_SERVER

In my setup I scp'd the /etc/ceph/ceph.client.admin.keyring from one of my Ceph servers to the $NFS_SERVER. There are probably better ways to create/distribute a keyring, but I'm new to Ceph and this worked sufficiently for my testing.
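
A probably cleaner alternative (an assumption on my side, I did not verify it here) is to let ceph-deploy push the ceph.conf and the admin keyring in one go:

$ ceph-deploy admin $NFS_SERVER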

When the above configuration was done, it was possible to mount the Ceph filesystem on the Ceph client that is becoming the NFS-server. These commands worked without issues:

# ceph-fuse /mnt
# echo 'Hello Ceph!' > /mnt/README
# umount /mnt

The first write to the Ceph filesystem took a while. This is likely due to the initial work the MDS and OSD daemons need to do (like creating pools for the Ceph filesystem).

After confirming that the Ceph Cluster and Filesystem work, the configuration for NFS-Ganesha can just be taken from the sources and saved as /etc/ganesha.nfsd.conf. With this configuration, and after restarting the nfs-ganesha.service, the NFS-export becomes available:

# showmount -e $NFS_SERVER
Export list for $NFS_SERVER:
/ (everyone)
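
The ceph.conf example from the NFS-Ganesha sources contains an EXPORT block roughly like this sketch (reproduced from memory, not verbatim; note the Pseudo path, which returns in the directory listing below):

EXPORT
{
    Export_Id = 1;
    # export the root of the Ceph filesystem
    Path = "/";
    Pseudo = "/nfsv4/pseudofs/ceph";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
    }
}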

NFSv4 uses a 'pseudo root' as mentioned in the configuration file. This means that mounting the export over NFSv4 results in a virtual directory structure:

# mount -t nfs $NFS_SERVER:/ /mnt
# find /mnt
/mnt
/mnt/nfsv4
/mnt/nfsv4/pseudofs
/mnt/nfsv4/pseudofs/ceph
/mnt/nfsv4/pseudofs/ceph/README

Reading and writing to the mountpoint under /mnt/nfsv4/pseudofs/ceph works fine, as long as the usual permissions allow it. By default NFS-Ganesha enables 'root squashing', so the 'root' user may not do a lot on the export. This security measure can be disabled by placing this option in the export section:

Squash = no_root_squash;

Restart the nfs-ganesha.service after modifying /etc/ganesha.nfsd.conf, and writing files as 'root' should now work too.
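
For example, assuming the mount from earlier is repeated after the restart:

# systemctl restart nfs-ganesha
# mount -t nfs $NFS_SERVER:/ /mnt
# touch /mnt/nfsv4/pseudofs/ceph/written-as-root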

Future Work

For me, this was a short "let's try it out" while learning about Ceph. At the moment, I have no intention of working on the FSAL_CEPH for NFS-Ganesha. My main interest in this experiment with exporting a Ceph filesystem through NFS-Ganesha on a plain Fedora 20 installation was to learn about the usability of a new NFS-Ganesha configuration/deployment. In order to improve the user experience with NFS-Ganesha, I'll try to fix some of the issues I ran into. Progress can be followed in Bug 1144799.

In the future, I will mainly use NFS-Ganesha for accessing Gluster Volumes. My colleague Soumya posted a nice explanation of how to download, build and run NFS-Ganesha with support for Gluster. We will be working on improving the out-of-the-box support in Fedora while stabilizing the FSAL_GLUSTER in the upstream NFS-Ganesha project.

Thursday, July 31, 2014

GlusterFS 3.5.2 has been released!


GlusterFS 3.5.2 was announced a few minutes ago. These are the changes that have been included in this release. Known issues are documented below too.

Release Notes for GlusterFS 3.5.2

This is mostly a bugfix release. The Release Notes for 3.5.0 and 3.5.1 contain a listing of all the new features that were added and bugs fixed.

Bugs Fixed:

  • 1096020: NFS server crashes in _socket_read_vectored_request
  • 1100050: Can't write to quota enable folder
  • 1103050: nfs: reset command does not alter the result for nfs options earlier set
  • 1105891: features/gfid-access: stat on .gfid virtual directory return EINVAL
  • 1111454: creating symlinks generates errors on stripe volume
  • 1112111: Self-heal errors with "afr crawl failed for child 0 with ret -1" while performing rolling upgrade.
  • 1112348: [AFR] I/O fails when one of the replica nodes go down
  • 1112659: Fix inode leaks in gfid-access xlator
  • 1112980: NFS subdir authentication doesn't correctly handle multi-(homed,protocol,etc) network addresses
  • 1113007: nfs-utils should be installed as dependency while installing glusterfs-server
  • 1113403: Excessive logging in quotad.log of the kind 'null client'
  • 1113749: client_t clienttable cliententries are never expanded when all entries are used
  • 1113894: AFR : self-heal of few files not happening when a AWS EC2 Instance is back online after a restart
  • 1113959: Spec %post server does not wait for the old glusterd to exit
  • 1114501: Dist-geo-rep : deletion of files on master, geo-rep fails to propagate to slaves.
  • 1115369: Allow the usage of the wildcard character '*' to the options "nfs.rpc-auth-allow" and "nfs.rpc-auth-reject"
  • 1115950: glfsheal: Improve the way in which we check the presence of replica volumes
  • 1116672: Resource cleanup doesn't happen for clients on servers after disconnect
  • 1116997: mounting a volume over NFS (TCP) with MOUNT over UDP fails
  • 1117241: backport 'gluster volume status --xml' issues
  • 1120151: Glustershd memory usage too high
  • 1124728: SMB: CIFS mount fails with the latest glusterfs rpm's

Known Issues:

  • The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:
    1. Allow insecure ports on the volume:
       gluster volume set <volname> server.allow-insecure on
    2. Restart the volume:
       gluster volume stop <volname>
       gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
       option rpc-auth-allow-insecure on
    4. Restart glusterd:
       service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
      gluster volume set <volname> performance.open-behind disabled
    
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang, as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init.
  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888).

Monday, July 21, 2014

Testers needed for GlusterFS 3.5.2beta1

GlusterFS 3.5.2beta1 has just been released. This is the first beta to allow users to verify the fixes for the bugs that were reported. See the bug reports below for more details on how to test and confirm the fix (or not).

This is a bugfix only release. The Release Notes for 3.5.0 and 3.5.1 contain a listing of all the new features that were added and bugs fixed.

Bugs Fixed:

  • 1096020: NFS server crashes in _socket_read_vectored_request
  • 1100050: Can't write to quota enable folder
  • 1103050: nfs: reset command does not alter the result for nfs options earlier set
  • 1105891: features/gfid-access: stat on .gfid virtual directory return EINVAL
  • 1111454: creating symlinks generates errors on stripe volume
  • 1112111: Self-heal errors with "afr crawl failed for child 0 with ret -1" while performing rolling upgrade.
  • 1112348: [AFR] I/O fails when one of the replica nodes go down
  • 1112659: Fix inode leaks in gfid-access xlator
  • 1112980: NFS subdir authentication doesn't correctly handle multi-(homed,protocol,etc) network addresses
  • 1113007: nfs-utils should be installed as dependency while installing glusterfs-server
  • 1113403: Excessive logging in quotad.log of the kind 'null client'
  • 1113749: client_t clienttable cliententries are never expanded when all entries are used
  • 1113894: AFR : self-heal of few files not happening when a AWS EC2 Instance is back online after a restart
  • 1113959: Spec %post server does not wait for the old glusterd to exit
  • 1114501: Dist-geo-rep : deletion of files on master, geo-rep fails to propagate to slaves.
  • 1115369: Allow the usage of the wildcard character '*' to the options "nfs.rpc-auth-allow" and "nfs.rpc-auth-reject"
  • 1115950: glfsheal: Improve the way in which we check the presence of replica volumes
  • 1116672: Resource cleanup doesn't happen for clients on servers after disconnect
  • 1116997: mounting a volume over NFS (TCP) with MOUNT over UDP fails
  • 1117241: backport 'gluster volume status --xml' issues
  • 1120151: Glustershd memory usage too high

Known Issues:

  • The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:

    1. Allow insecure ports on the volume:

       gluster volume set <volname> server.allow-insecure on

    2. Restart the volume:

       gluster volume stop <volname>
       gluster volume start <volname>

    3. Edit /etc/glusterfs/glusterd.vol to contain this line:

       option rpc-auth-allow-insecure on

    4. Restart glusterd:

       service glusterd restart

      More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.

  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:

       gluster volume set <volname> performance.open-behind disabled

  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang, as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init.

  • If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888).

Sunday, July 20, 2014

Change the default search engine in Epiphany, the GNOME Web application

When I'm enjoying the sun/wind/rain on the balcony, I tend to use my XO-1.75 for duties where most people would use a tablet. Reading/writing emails, browsing the internet, bug triaging or writing small fixes and release notes can all be done fine on a small screen. My preference definitely goes towards physical keyboards, and less towards their onscreen variants. Even when the keyboard is small, I like typing on it much more than using a touchscreen. Of course, the space saving of not needing to display a keyboard helps too. But well, that aside...


My XO is installed with the stock OLPC distribution, based on Fedora. Sometimes I use the Sugar desktop environment, on other days I'll switch to GNOME (Classic). With GNOME comes the Epiphany browser (recently renamed to Web). Unfortunately, Epiphany uses Google as the default search engine, and there is no option in the settings menu to change that. After a little DuckDuckGo'ing, I found a hint that the keyword-search-url setting can be changed with gsettings:

$ gsettings set org.gnome.Epiphany keyword-search-url 'https://duckduckgo.com/?q=%s'

Using the gsettings command works fine, but it does not apply the option for all users on the system. I could not find a command to change the system-wide settings, which would help with automatically setting the option after a reinstall. More searching (now directly from the addressbar) suggested that I could use a special .gschema.override file. Indeed, the installation of the XO already has some of these .gschema.override files under /usr/share/glib-2.0/schemas/. Dropping the following file in that directory takes care of the default:

# filename: /usr/share/glib-2.0/schemas/50_use-duckduckgo.gschema.override
#
# use https://duckduckgo.com instead of Google for searches from the addressbar
#

[org.gnome.Epiphany]
keyword-search-url='https://duckduckgo.com/?q=%s'

After creating the file, the gschemas need to be 'compiled':

# glib-compile-schemas /usr/share/glib-2.0/schemas
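
To verify that the override is picked up, the value can be queried as a normal user; it should now return the DuckDuckGo URL:

$ gsettings get org.gnome.Epiphany keyword-search-url
'https://duckduckgo.com/?q=%s'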

Happy searching!

Tuesday, June 24, 2014

glusterfs-3.5.1 has been released


On Tue, Jun 24, 2014 at 03:15:58AM -0700, Gluster Build System wrote:
> 
> 
> SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1.tar.gz
> 
> This release is made off jenkins-release-73

Many thanks to everyone who tested the glusterfs-3.5.1 beta releases and 
gave feedback. There were no regressions reported compared to the 3.5.0 
release.

Many bugs have been fixed, and documentation for all new features in 3.5 
should be included now. Thanks to all the reporters, developers and 
testers for improving the 3.5 stable series.

Below you will find the release notes in MarkDown format for 
glusterfs-3.5.1, these are included in the tar.gz as
doc/release-notes/3.5.1.md. The mirror repository on GitHub provides 
a nicely rendered version:
- https://github.com/gluster/glusterfs/blob/v3.5.1/doc/release-notes/3.5.1.md

Packages for different Linux distributions will follow shortly.  
Notifications are normally sent to this list when the packages are 
available for download, and/or have reached the distributions update 
infrastructure.

Changes for a new 3.5.2 release are now being accepted. The list of 
proposed fixes is already growing:
- https://bugzilla.redhat.com/showdependencytree.cgi?hide_resolved=0&id=glusterfs-3.5.2

Anyone is free to request a bugfix or backport for the 3.5.2 release. In 
order to do so, file a bug and set the 'blocked' field to 
'glusterfs-3.5.2' so that we can track the requests. Use this link to 
make it a little easier for yourself:
- https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&version=3.5.1&blocked=glusterfs-3.5.2

Cheers,
Niels

Release Notes for GlusterFS 3.5.1

This is mostly a bugfix release. The Release Notes for 3.5.0 contain a listing of all the new features that were added.
There are two notable changes that are not just bug fixes or documentation additions:
  1. a new volume option server.manage-gids has been added. This option should be used when users of a volume are in more than approximately 93 groups (Bug 1096425); an example follows after this list
  2. the Duplicate Request Cache for NFS is now disabled by default; this may reduce performance for certain workloads, but improves the overall stability and memory footprint for most users
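
A quick illustration of both changes. The stop/start requirement for server.manage-gids comes from the Known Issues below; turning the Duplicate Request Cache back on is only shown as an assumption for those who want the old behaviour:

    # server-side group resolution for users in many (~93+) groups
    gluster volume set <volname> server.manage-gids on
    # the option only takes effect in the brick processes after a restart
    gluster volume stop <volname>
    gluster volume start <volname>

    # optionally re-enable the NFS Duplicate Request Cache (assumption)
    gluster volume set <volname> nfs.drc on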

Bugs Fixed:

  • 765202: lgetxattr called with invalid keys on the bricks
  • 833586: inodelk hang from marker_rename_release_newp_lock
  • 859581: self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
  • 986429: Backupvolfile server option should work internal to GlusterFS framework
  • 1039544: [FEAT] "gluster volume heal info" should list the entries that actually required to be healed.
  • 1046624: Unable to heal symbolic Links
  • 1046853: AFR : For every file self-heal there are warning messages reported in glustershd.log file
  • 1063190: Volume was not accessible after server side quorum was met
  • 1064096: The old Python Translator code (not Glupy) should be removed
  • 1066996: Using sanlock on a gluster mount with replica 3 (quorum-type auto) leads to a split-brain
  • 1071191: [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created with open(), seek(), write()
  • 1078061: Need ability to heal mismatching user extended attributes without any changelogs
  • 1078365: New xlators are linked as versioned .so files, creating .so.0.0.0
  • 1086743: Add documentation for the Feature: RDMA-connection manager (RDMA-CM)
  • 1086748: Add documentation for the Feature: AFR CLI enhancements
  • 1086749: Add documentation for the Feature: Exposing Volume Capabilities
  • 1086750: Add documentation for the Feature: File Snapshots in GlusterFS
  • 1086751: Add documentation for the Feature: gfid-access
  • 1086752: Add documentation for the Feature: On-Wire Compression/Decompression
  • 1086754: Add documentation for the Feature: Quota Scalability
  • 1086755: Add documentation for the Feature: readdir-ahead
  • 1086756: Add documentation for the Feature: zerofill API for GlusterFS
  • 1086758: Add documentation for the Feature: Changelog based parallel geo-replication
  • 1086760: Add documentation for the Feature: Write Once Read Many (WORM) volume
  • 1086762: Add documentation for the Feature: BD Xlator - Block Device translator
  • 1086766: Add documentation for the Feature: Libgfapi
  • 1086774: Add documentation for the Feature: Access Control List - Version 3 support for Gluster NFS
  • 1086781: Add documentation for the Feature: Eager locking
  • 1086782: Add documentation for the Feature: glusterfs and oVirt integration
  • 1086783: Add documentation for the Feature: qemu 1.3 - libgfapi integration
  • 1088848: Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
  • 1089054: gf-error-codes.h is missing from source tarball
  • 1089470: SMB: Crash on brick process during compile kernel.
  • 1089934: list dir with more than N files results in Input/output error
  • 1091340: Doc: Add glfs_fini known issue to release notes 3.5
  • 1091392: glusterfs.spec.in: minor/nit changes to sync with Fedora spec
  • 1095256: Excessive logging from self-heal daemon, and bricks
  • 1095595: Stick to IANA standard while allocating brick ports
  • 1095775: Add support in libgfapi to fetch volume info from glusterd.
  • 1095971: Stopping/Starting a Gluster volume resets ownership
  • 1096040: AFR : self-heal-daemon not clearing the change-logs of all the sources after self-heal
  • 1096425: i/o error when one user tries to access RHS volume over NFS with 100+ GIDs
  • 1099878: Need support for handle based Ops to fetch/modify extended attributes of a file
  • 1101647: gluster volume heal volname statistics heal-count not giving desired output.
  • 1102306: license: xlators/features/glupy dual license GPLv2 and LGPLv3+
  • 1103413: Failure in gf_log_init reopening stderr
  • 1104592: heal info may give Success instead of transport end point not connected when a brick is down.
  • 1104915: glusterfsd crashes while doing stress tests
  • 1104919: Fix memory leaks in gfid-access xlator.
  • 1104959: Dist-geo-rep : some of the files not accessible on slave after the geo-rep sync from master to slave.
  • 1105188: Two instances each, of brick processes, glusterfs-nfs and quotad seen after glusterd restart
  • 1105524: Disable nfs.drc by default
  • 1107937: quota-anon-fd-nfs.t fails spuriously
  • 1109832: I/O fails for for glusterfs 3.4 AFR clients accessing servers upgraded to glusterfs 3.5
  • 1110777: glusterfsd OOM - using all memory when quota is enabled

Known Issues:

  • The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:
    1. Allow insecure ports on the volume:
       gluster volume set <volname> server.allow-insecure on
    2. Restart the volume:
       gluster volume stop <volname>
       gluster volume start <volname>
    3. Edit /etc/glusterfs/glusterd.vol to contain this line:
       option rpc-auth-allow-insecure on
    4. Restart glusterd:
       service glusterd restart
    More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
  • For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:
    gluster volume set <volname> performance.open-behind disabled
  • libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang, as has been reported by QEMU developers. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init. Follow Bug 1091335 to get informed when a release is made available that contains a final fix.
  • After enabling server.manage-gids, the volume needs to be stopped and started again to have the option enabled in the brick processes:
    gluster volume stop <volname>
    gluster volume start <volname>
    

Sunday, May 25, 2014

glusterfs-3.5.1beta released

Reposting the email to the Gluster Users and Developers mailing lists.
On Sat, 24 May, 2014 at 11:34:36PM -0700, Gluster Build System wrote:
> SRC: http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.1beta.tar.gz
This beta release is intended to verify the changes that should resolve the bugs listed below. We appreciate tests done by anyone. Please leave a comment in the respective bug report with a short description of the success or failure. Visiting one of the bug reports is as easy as opening the bugzilla.redhat.com/$BUG URL; for the first bug in the list, this results in http://bugzilla.redhat.com/765202.

Bugs expected to be fixed (31 in total since 3.5.0):
#765202 - lgetxattr called with invalid keys on the bricks
#833586 - inodelk hang from marker_rename_release_newp_lock
#859581 - self-heal process can sometimes create directories instead of symlinks for the root gfid file in .glusterfs
#986429 - Backupvolfile server option should work internal to GlusterFS framework
#1039544 - [FEAT] "gluster volume heal info" should list the entries that actually required to be healed.
#1046624 - Unable to heal symbolic Links
#1046853 - AFR : For every file self-heal there are warning messages reported in glustershd.log file
#1063190 - [RHEV-RHS] Volume was not accessible after server side quorum was met
#1064096 - The old Python Translator code (not Glupy) should be removed
#1066996 - Using sanlock on a gluster mount with replica 3 (quorum-type auto) leads to a split-brain
#1071191 - [3.5.1] Sporadic SIGBUS with mmap() on a sparse file created with open(), seek(), write()
#1078061 - Need ability to heal mismatching user extended attributes without any changelogs
#1078365 - New xlators are linked as versioned .so files, creating <xlator>.so.0.0.0
#1086748 - Add documentation for the Feature: AFR CLI enhancements
#1086750 - Add documentation for the Feature: File Snapshots in GlusterFS
#1086752 - Add documentation for the Feature: On-Wire Compression/Decompression
#1086756 - Add documentation for the Feature: zerofill API for GlusterFS
#1086758 - Add documentation for the Feature: Changelog based parallel geo-replication
#1086760 - Add documentation for the Feature: Write Once Read Many (WORM) volume
#1086762 - Add documentation for the Feature: BD Xlator - Block Device translator
#1088848 - Spelling errors in rpc/rpc-transport/rdma/src/rdma.c
#1089054 - gf-error-codes.h is missing from source tarball
#1089470 - SMB: Crash on brick process during compile kernel.
#1089934 - list dir with more than N files results in Input/output error
#1091340 - Doc: Add glfs_fini known issue to release notes 3.5
#1091392 - glusterfs.spec.in: minor/nit changes to sync with Fedora spec
#1095775 - Add support in libgfapi to fetch volume info from glusterd.
#1095971 - Stopping/Starting a Gluster volume resets ownership
#1096040 - AFR : self-heal-daemon not clearing the change-logs of all the sources after self-heal
#1096425 - i/o error when one user tries to access RHS volume over NFS with 100+ GIDs
#1099878 - Need support for handle based Ops to fetch/modify extended attributes of a file

Before a final glusterfs-3.5.1 release is made, we hope to have all the blocker bugs fixed. There are currently 13 bugs marked that still need some work done:
#1081016 - glusterd needs xfsprogs and e2fsprogs packages
#1086743 - Add documentation for the Feature: RDMA-connection manager (RDMA-CM)
#1086749 - Add documentation for the Feature: Exposing Volume Capabilities
#1086751 - Add documentation for the Feature: gfid-access
#1086754 - Add documentation for the Feature: Quota Scalability
#1086755 - Add documentation for the Feature: readdir-ahead
#1086759 - Add documentation for the Feature: Improved block device translator
#1086766 - Add documentation for the Feature: Libgfapi
#1086774 - Add documentation for the Feature: Access Control List - Version 3 support for Gluster NFS
#1086781 - Add documentation for the Feature: Eager locking
#1086782 - Add documentation for the Feature: glusterfs and oVirt integration
#1086783 - Add documentation for the Feature: qemu 1.3 - libgfapi integration
#1095595 - Stick to IANA standard while allocating brick ports

A more detailed overview of the status of each of these bugs is here.