# Release notes for Gluster 6.0
This is a major release that includes a range of code improvements and stability
fixes along with a few features as noted below.
A selection of the key features and changes are documented in this page.
A full list of bugs that have been addressed is included further below.
- [Announcements](#announcements)
- [Major changes and features](#major-changes-and-features)
- [Major issues](#major-issues)
- [Bugs addressed in the release](#bugs-addressed)
## Announcements
1. Releases that receive maintenance updates post release 6 are 4.1 and 5
([reference](https://www.gluster.org/release-schedule/))
2. Release 6 will receive maintenance updates around the 30th of every month
for the first 3 months post release (i.e. Mar'19, Apr'19, May'19). Post the
initial 3 months, it will receive maintenance updates every 2 months till EOL.
([reference](https://lists.gluster.org/pipermail/announce/2018-July/000103.html))
3. A series of features/xlators have been deprecated in release 6, as listed below.
For procedures to upgrade volumes that use these features to release 6, refer
to the release 6 [upgrade guide](https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_6/).
This deprecation was announced on the gluster-users list [here](https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html).
Features deprecated:
- Block device (bd) xlator
- Decompounder feature
- Crypt xlator
- Symlink-cache xlator
- Stripe feature
- Tiering support (tier xlator and changetimerecorder)
## Major changes and features
### Highlights
* Stability fixes
    - Addressed issues reported by Coverity and clang-scan
    - Removal of unused/deprecated code and features
* Memory leak fixes (found with ASan and Valgrind)
* Client side inode garbage collection
    - Addresses a long-standing concern of our users when a gluster volume holds a large number of files.
* Performance Improvements
- `--auto-invalidation` during mount.
Features are categorized into the following sections:
- [Management](#management)
- [Standalone](#standalone)
- [Developer](#developer)
### Management
Stability improvements for brick-mux use cases.
#### GlusterD2
GlusterD2 (or GD2, in short) was planned as a next-generation management service for the Gluster project. At present, GD2's main focus is not replacing GD1 (the current `glusterd`) entirely, but being a thin layer in Gluster's container story.
There is no specific update on GD2 for this glusterfs release.
#### gluster-ansible
The [gluster-ansible](https://github.com/gluster/gluster-ansible) project deploys glusterfs using Ansible and is the recommended way of deploying glusterfs, as it gives consistency to your deployment.
Depending solely on the Ansible playbooks is not mandatory; many community users rely on different mechanisms. The plan, however, is to ensure everyone follows the best-practice guidelines already implemented in `gluster-ansible`.
### Standalone
#### 1. client-side inode garbage collection via LRU list
A FUSE mount's inode cache can now be limited to a maximum number, thus reducing
the memory footprint of FUSE mount processes.
See the lru-limit option in `man 8 mount.glusterfs` for details.
NOTE: Setting this to a low value (say, less than 4000) will evict inodes from
the FUSE and Gluster caches at a much faster rate, and can cause performance
degradation. The appropriate value has to be determined based on the available
client memory and the required performance.
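As a sketch, the limit can be passed as a mount option (the server and volume names below are placeholders; see `man 8 mount.glusterfs` for the authoritative spelling):

```shell
# Cap the FUSE mount's inode cache at 64k entries to bound memory use
mount -t glusterfs -o lru-limit=65536 server1:/testvol /mnt/glusterfs
```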
#### 2. Glusterfind tool enhanced with a filter option
The glusterfind tool has an added option "--type", to be used with the "--full"
option. The option supports finding and listing only files or only directories, and
defaults to both if not specified.
Example usage with the pre and query commands is given below:
1. Pre command ([reference](https://docs.gluster.org/en/latest/GlusterFS%20Tools/glusterfind/#pre-command)):
- Lists both files and directories in OUTFILE:
`glusterfind pre SESSION_NAME VOLUME_NAME OUTFILE`
- Lists only files in OUTFILE:
`glusterfind pre SESSION_NAME VOLUME_NAME OUTFILE --type f`
- Lists only directories in OUTFILE:
`glusterfind pre SESSION_NAME VOLUME_NAME OUTFILE --type d`
2. Query command:
- Lists both files and directories in OUTFILE:
`glusterfind query VOLUME_NAME --full OUTFILE`
- Lists only files in OUTFILE:
`glusterfind query VOLUME_NAME --full --type f OUTFILE`
- Lists only directories in OUTFILE:
`glusterfind query VOLUME_NAME --full --type d OUTFILE`
#### 3. FUSE mounts are enhanced to handle interrupts to blocked lock requests
FUSE mounts are enhanced to handle interrupts to blocked locks.
For example, scripts using the flock (`man 1 flock`) utility without its -n (nonblock)
option against files on a FUSE-based gluster mount can now be interrupted when
the lock is not granted in time, or when the -w option is used with the same utility.
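As an illustration of the pattern (runnable against any file; what this release adds is that it now behaves as expected for files on a FUSE gluster mount, with `/tmp/demo.lock` here standing in for such a file):

```shell
#!/bin/sh
# Hold an exclusive lock on the file for a while in the background...
flock /tmp/demo.lock -c 'sleep 10' &
sleep 1
# ...then contend for the same lock with -w: instead of blocking
# indefinitely, the second flock gives up after the 2-second wait.
flock -w 2 /tmp/demo.lock -c 'echo got lock' || echo 'timed out waiting for lock'
wait
```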
#### 4. Optimized/pass-through distribute functionality for 1-way distributed volumes
**NOTE:** There are no user-controllable changes with this feature.
The distribute xlator now skips unnecessary checks and operations when the
distribute count is one for a volume, resulting in improved performance.
#### 5. Options introduced to disable invalidations of kernel page cache
For workloads where multiple FUSE client mounts do not concurrently operate on
any files in the volume, it is now possible to maintain a longer-lived kernel
page cache using the following options in conjunction:
- Setting `--auto-invalidation` option to "no" on the glusterfs FUSE mount
process
- Disabling the volume option `performance.global-cache-invalidation`
This enables better performance as the data is served from the kernel page cache
where possible.
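Taken together, the two settings look roughly like this (server, volume, and mount point names are placeholders, and the exact mount-helper spelling of the auto-invalidation option may differ; consult `man 8 mount.glusterfs`):

```shell
# On the client: start the FUSE mount with kernel auto-invalidation off
glusterfs --auto-invalidation=no --volfile-server=server1 \
    --volfile-id=testvol /mnt/glusterfs

# On the server: disable upcall-driven page cache invalidation for the volume
gluster volume set testvol performance.global-cache-invalidation off
```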
#### 6. Changes to gluster based SMB share management
Previously, all GlusterFS volumes were exported by default via smb.conf in
a Samba-CTDB setup. This included creating a share section for the CTDB lock
volume too, which is not recommended. In addition to a few syntactical errors,
these scripts failed to execute in a non-Samba setup in the absence of the
necessary configuration and binary files.
Hereafter, newly created GlusterFS volumes are not exported as SMB shares via
Samba unless either the 'user.cifs' or the 'user.smb' volume set option is enabled
on the volume. Existing GlusterFS volume share sections in smb.conf will
remain unchanged.
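Under the new behaviour, a volume has to be opted in explicitly to be exported over SMB, for example (the volume name is a placeholder):

```shell
# Opt a volume in to being exported as an SMB share via Samba
gluster volume set testvol user.smb enable
```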
#### 7. ctime feature is enabled by default now
The ctime feature, which maintains (c/m)time consistency across replica and disperse subvolumes, is now enabled by default.
Also with this release, a single option is provided to enable/disable the ctime feature:
```
# gluster vol set <volname> ctime <on/off>
```
In previous releases, enabling the ctime feature required enabling the following two options:
```
# gluster vol set <volname> utime on
# gluster vol set <volname> ctime on
```
>**Pre-requisite:**
> The times are taken from the client. Hence, it is required that clients are NTP synchronized.

>**Limitations:**
>The existing limitations still hold:
> - Mounting a gluster volume with time attribute options (noatime, relatime, ...) is not supported with this feature
> - Certain entry operations (with differing creation flags) would reflect eventual consistency w.r.t. the time attributes
> - This feature does not guarantee consistent time for directories if the hashed sub-volume for the directory is down
> - readdirp (or directory listing) is not supported with this feature
> - Older files created before the upgrade will witness an update of ctime upon access after the upgrade [BUG:1593542](https://bugzilla.redhat.com/show_bug.cgi?id=1593542)
### Developer
#### 1. Gluster code can be compiled and executed using [TSAN](https://clang.llvm.org/docs/ThreadSanitizer.html)
While configuring the sources for a build use the extra option `--enable-tsan`
to enable thread sanitizer based builds.
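A build sketch from a glusterfs source checkout (the usual autotools flow, with the new flag added at the configure step):

```shell
# Generate the configure script, enable ThreadSanitizer, and build
./autogen.sh
./configure --enable-tsan
make -j
```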
#### 2. gfapi: A class of APIs have been enhanced to return pre/post gluster_stat information
A set of [APIs](https://github.com/gluster/glusterfs/blob/release-6/api/src/gfapi.map#L245) has been enhanced to return pre/post gluster_stat information.
Applications using gfapi will need to adapt to the newer interfaces to compile
against the release-6 APIs. Pre-compiled applications, and applications using the
older API SDK, will continue to work as before.
## Major issues
<TODO>
## Bugs addressed
Bugs addressed since release-5 are listed below.
<TODO>
-----
# Old discussion
## Below are the proposals
Finish the following:
- space-efficient userspace implementation of reflink (with some
limitations) #377
- In the list already
- Current status: In early stages. There are some discussions still pending.
- Atin: This is a stretch at this point.
- This will need one big change and then incremental changes on top. Perhaps over multiple releases.
- This will be useful because Gluster doesn't have a native snapshot
- Classification: Feature
- [RFE] Improve IPv6 support in GlusterFS #192
- How bad is this need, and can we close the gap?
- First thing pending on this is a test bed with 2 nodes, with IPv6 only and see if everything works there.
- FB is running this already.
- We will need an infra test bed since most devs don't run an IPv6-only machine.
- We have code that's behind a flag.
- FB's testing is single-homed stack with IPv6 only.
- This needs to be owned and scoped. We need testing and code-related work
- Classification: Infra feature
- Production-level TLS support #293
- Again, address the gaps, also looks like folks are moving to
TLS1.1/2 at the minimum, and performance with TLS can be concerning
- Performance is key
- Also needs on-disk encryption (recommendations).
- It's not widely used. We don't have significant tests where it's always enabled.
- It has some performance hits which need to be looked into.
- The workflow is tedious and very manual.
- Is this with GD1 stack or GD2 stack? Current focus on GD2 first.
- Atin will bring this up in a GCS sprint.
- Should be made easy to setup a cluster on SSL by default
- Needs scope definition. What does it mean to be production-ready, and what does it mean to be easier?
- Open an issue to scope this out. Beyond GD2, there will be other areas to work on when it comes to TLS support.
- This is a bigger stretch goal than reflink and other items.
- We need to break this out into several issues and figure out which pieces to target for Gluster 6.
- Highly useful in the cloud environment for both as identifying genuine connections and in-flight encryption.
- gfapi: New apis as FOPS return the associated (iattr/pre/post)
attributes #389
- This is half baked in master, and we need to complete this so that we can easily branch
- Projects that depend on gfAPI also have compile problems against master (gluster-block, gfapi-python)
- The intention of the feature is to expose the pre/post stat information like the xlator stack, at least for NFS Ganesha like consumers
- Classification: Feature
- Add support for statx #273
- Related to the gfapi change and also to other parts of the stack, possibly a 7 target but need to get behind it
- Improve SOS report plugin maintenance #224
- Another thing for GCS, so that we have troubleshooting or problem reporting clean
- revisit all the options' ranges again. #194
- Again as we work with options we can fold this into the same task
- Changes to options tables in xlators #302
- Need to close this issue and mark it done
- Infra change
- storage/posix: cache stat info in the inode context #285
- If not Release-6 then by 7 we should target this and the single xattr for gluster
- We need to rescope the old code that Raghavendra Bhatt wrote a few years ago and include the ctime based xattr for gluster meta-data.
- Needs a proposal, and then getting things done post firming up the design.
- <shyam please write notes />
- Classification: Feature
- Add an additional field to xlator struct to tag the support level #430
- Already part of the plan
- Move unsupported xlators to experimental status #399
- Need to add 'xlator_api_t' to all xlator, and set the flags.
- Implement proper cleanup sequence #404
- Already part of the plan
- Heal performance improvements
- ???
## Stretch
- Move fields from xdata into protocol definition #67
- Again, left over task, that we need folks to complete at some point
in time
- Test and fix bugs encountered when cluster.lookup-optimize is set to
on #118
- Again, long pending and causes confusion among all users when we ask
them to turn this off
- Fix or eliminate synctasks #144
- Bringing this up, as experience shows synctask has not been performant
across the board, so we should at least stop using it for newer features
- Improve the ability to scale gluster volumes by 1 (disk/brick) #169
- Again, needed by other parties
- [RFE] GFID2 - File type in GFID #207
- Had this in the list, as it is another feature that needs completion
- Small file performance on Gluster #340
- What can we add/do here?
- We need to consolidate and create a plan or list of actions that can be achieved in the near to long term
- Currently thoughts and actions are scattered, we need to focus on this, so that we drive the best way forward
- Posix framework to serve iatt information from extended attribute. #442
- Part of single xattr for all gluster meta-data, hence related
Tests and code sanity:
- Remove retries, thereby reducing and eliminating racy tests/real issues
- Get to near-zero in
- Coverity
- clang
- ASAN
- Improve code coverage
- Establish performance test baselines
- (wish) Get package builds automated
- (wish) Get package testing automated
- (wish) Get upgrade testing automated
- Write unit tests for fairly stable components in gluster #223
---