Amar: Scratch Space (for emails)
===
#### Date: 16th July, 2018
---
### Email - 1
### Subject: Proposal to mark a few features as Deprecated / Sunset from version 5.0
Hi all,
Over the last 12 years of Gluster, we have developed many features, and continue to maintain most of them to date. Along the way, we have found better methods to accomplish the functionality some of these features provide, and have not been actively maintaining them.
We want to take the time to clean up some of these 'unsupported' components/features and mark them as 'Sunset' (i.e., to be taken out of the code base in subsequent releases) in the upcoming release, `v5.0`. The release notes will describe options for smoothly migrating off the features labeled as 'Sunset'.
If you are using any of these features, please do let us know, and we will be happy to help you migrate off them. On the same note, we are also happy to guide new developers who want to work on components that are not actively maintained by the current set of developers.
### List of 'Sunset' features:
#### 'cluster/stripe' translator:
This translator was developed very early in the evolution of GlusterFS, and addressed one of the most common questions about distributed filesystems: "What happens if one of my files is bigger than the available brick? Say, I have a 2TB HDD exported in GlusterFS, and my file is 3TB." While it served the purpose, it was very hard to handle failure scenarios and give users a really good experience with this feature. Over time, Gluster addressed the problem with its 'Shard' feature, which solves it in a much better way on the existing, well-supported stack. Hence the proposal for deprecation.
If you are using this feature, please write to us, as it needs a proper migration from the existing volume to a new, fully supported volume type before you upgrade.
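To illustrate the recommended alternative, here is a hedged sketch of creating a sharded volume (volume, server, and brick names are placeholders):
```sh
# Create a plain replicated volume (names and brick paths are placeholders)
gluster volume create newvol replica 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1

# Enable sharding so large files are split into fixed-size chunks
gluster volume set newvol features.shard on
gluster volume set newvol features.shard-block-size 64MB

gluster volume start newvol
```
Data from the old striped volume can then be copied into the new volume through a mount point (e.g., with rsync).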
#### 'storage/bd' translator:
This feature got into the code base 5 years ago with this [patch](http://review.gluster.org/4809)[1]. The plan was to use a block device directly as a brick, which would make handling disk-image storage in GlusterFS much easier.
As the feature is not receiving contributions, and we are not seeing any user traction, we would like to propose it for deprecation.
If you are using this feature, plan to move to a supported Gluster volume configuration, and have your setup 'supported' before upgrading to the new Gluster version.
#### 'RDMA' transport support:
Gluster started supporting RDMA while ib-verbs was still new, and very high-end infrastructure at the time used InfiniBand. Engineers worked with Mellanox and brought the technology into GlusterFS for faster data migration and data copy. Current-day kernels achieve very good speeds with the IPoIB module itself, and the experts in this area no longer have bandwidth to maintain the feature, so we recommend migrating your volumes over to a TCP (IP-based) network.
If you are successfully using the RDMA transport, do get in touch with us so we can prioritize a migration plan for your volume. The plan is to work on this after the release, so that by version 6.0 we will have cleaner transport code, which needs to support only one type.
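As a hedged sketch of what such a migration could look like (volume and mount names are placeholders, and the availability of the `config.transport` option depends on your version):
```sh
# Switch an existing volume from RDMA to TCP transport
gluster volume stop myvol
gluster volume set myvol config.transport tcp
gluster volume start myvol

# Clients should then mount using the tcp transport
mount -t glusterfs -o transport=tcp server1:/myvol /mnt/myvol
```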
#### 'Tiering' feature
Gluster's tiering feature was planned to provide an option to keep your 'hot' data in a different location than your cold data, so one can get better performance. While we saw some users for the feature, it needs much more attention to become completely bug-free. At this time, we have no active maintainers for the feature, and hence suggest taking it out of the 'supported' tag.
If you are willing to take it up and maintain it, do let us know, and we will be happy to assist you.
If you are already using the tiering feature, make sure to detach the tier (`gluster volume tier ... detach`) from all your volumes before upgrading to the next release; a sketch follows. Also, we recommend using features like `dmcache` on your LVM setup to get the best performance from bricks.
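A hedged sketch of the detach sequence (the volume name is a placeholder):
```sh
# Start detaching the hot tier; this migrates its data to the cold tier
gluster volume tier myvol detach start

# Poll until the data migration is complete
gluster volume tier myvol detach status

# Finalize the detach once migration has finished
gluster volume tier myvol detach commit
```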
#### 'Quota'
This is a call-out for the 'Quota' feature, to let you all know that it will move to a 'no new development' state. While this feature is 'actively' in use by many people, the challenges in the accounting mechanisms involved have made it hard to achieve good performance with the feature. Also, the number of extended-attribute get/set operations performed while using the feature is far from ideal. Hence we recommend that users move towards setting quota on the backend bricks directly (i.e., XFS project quota), or use different volumes for different directories, etc.
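For example, an XFS project quota on a brick directory could be set up roughly like this (a hedged sketch; the paths and project ID are placeholders, and the brick filesystem must be mounted with the `prjquota` option):
```sh
# Associate project ID 42 with a directory on the brick
xfs_quota -x -c 'project -s -p /bricks/brick1/shared 42' /bricks/brick1

# Set a 10GiB hard block limit for that project
xfs_quota -x -c 'limit -p bhard=10g 42' /bricks/brick1
```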
As the feature won't be deprecated immediately, it doesn't need a migration plan when you upgrade to a newer version; but if you are a new user, we don't recommend enabling the quota feature. By the release dates, we will publish a guide to the best alternatives to Gluster's current quota feature.
Note that if you want to contribute to the feature, we have a [project-quota based issue open](https://github.com/gluster/glusterfs/issues/184)[2]. We are happy to get contributions and help in getting a newer approach to Quota.
---
These are the initial set of features which we propose to take out of the 'fully supported' list. As we work on improving the user/developer experience of the project by providing a well-maintained codebase, we may come up with a few more features to consider moving out of support, so keep watching this space.
[1] - http://review.gluster.org/4809
[2] - https://github.com/gluster/glusterfs/issues/184
Regards,
Vijay, Shyam, Amar
----
## Email 2
### Subject: Gluster Project's focus on container storage.
Hi all,
We would like to let the Gluster community know that some of us have started focusing on a project called 'Gluster for Container Storage' (**GCS** for short). As of now, one can already use Gluster to provide persistent storage for containers by making use of the projects available on GitHub under [gluster.org](https://github.com/gluster) and [Heketi](https://github.com/heketi/heketi). The goal of the new project is to bring these sub-projects together and make it easier to use Gluster in containers.
The sections below highlight the efforts involved in the GCS project, across various projects in the Gluster organization.
#### GD2
Repo: https://github.com/gluster/glusterd2
The challenge with the current management layer of Gluster (`glusterd`) is that it was not designed for modern application integrations, such as REST APIs.
In the k8s world, while this was a limitation, the Heketi project filled the gap and made Gluster storage consumable on the k8s platform.
The GD2 (aka glusterd2) project was designed to provide many of these capabilities from within the Gluster management framework itself. As a result, it makes sense to combine these two projects and provide a single interface to the user. There are already efforts ongoing in this direction, and you can follow the action in the gd2 repository (mentioned above).
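As a hedged illustration of the REST-style interface (the port and endpoint path here are assumptions based on GD2's defaults at the time, not a committed API):
```sh
# Query the GD2 ReST endpoint for the list of volumes
curl -X GET http://gd2-node.example.com:24007/v1/volumes
```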
#### gluster-block
Repo: https://github.com/gluster/gluster-block
This project exposes files on GlusterFS as block devices through an iSCSI interface. It was implemented so that we can provide all the required access interfaces for consuming storage, using the same storage solution as the backend.
This is a much-needed piece of GCS, as varying workloads in container environments may demand stricter consistency or performance guarantees, which may in turn need 'block' storage capabilities.
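For a flavour of the CLI (a hedged sketch based on the project's README; the volume, block name, host, and size are placeholders):
```sh
# Create a 1GiB block device backed by a file on volume 'blockvol',
# exported over iSCSI from a single host (ha 1)
gluster-block create blockvol/sample-block ha 1 192.168.1.11 1GiB

# List the block devices hosted on that volume
gluster-block list blockvol
```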
#### anthill / operator
Repo: https://github.com/gluster/anthill
The goal of this project is to make the whole Gluster experience on k8s much smoother using operators. With operators taking care of many of the install, upgrade, and regular maintenance tasks, the human intervention needed to maintain a Gluster cluster is reduced.
This is a new project, and it can use contributions from interested contributors (check the issues list for ideas).
#### gluster-kubernetes
Repo: https://github.com/gluster/gluster-kubernetes
This project is intended to provide all the required installation and management steps for getting Gluster up and running in the container world. It will play an important role in integrating all the sub-projects together.
#### <TODO> ADD CSI project </TODO>
#### GlusterFS
Repo: https://github.com/gluster/glusterfs
GlusterFS is the main repository of Gluster. To support storage in the container world, we don't need all the features of Gluster. Hence, we will focus on the stack that is absolutely required in the container world. This lets us deliver a better-tested and thus more stable stack, and also provide users with a stack that has fewer options to tweak.
Note that GlusterFS's default volumes will continue to work as they do now, but the translator stack used in GCS may be much leaner and will have certain features enabled by default. For example, 'brick-multiplex', which is used when many bricks are exported from the same node, makes little sense if you are using Gluster with only a few volumes. But if you are using more than 100 volumes on a single node, the benefit of the feature kicks in; such usage patterns are common in the k8s world, so this option, for example, would be on by default in GCS.
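For reference, a hedged sketch of how this option is enabled on a standalone setup today:
```sh
# Enable brick multiplexing cluster-wide, so bricks on a node share
# one glusterfsd process instead of one process per brick
gluster volume set all cluster.brick-multiplex on
```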
#### monitoring
As the k8s ecosystem provides its own monitoring interfaces, we would like GCS to have the required monitoring plugins integrated. The work on this is not completely scoped out yet; we are happy to get help.
----
We will post more on the progress of the project and how we will track it, so you can follow along and contribute.
Regards,
Maintainers (@ Red Hat)
----
### Email - 3 (To be sent after feature classification email)
### Subject: Tag based regression testing
We have different features with different supportability levels, code quality, feature completeness, available tests, code coverage, etc. Hence, running a single regression suite for every patch isn't ideal, and I propose moving to tag-based testing for our regression tests.
The proposal is made keeping in mind that some core changes may impact all components, but the person making those changes will focus only on keeping 'Supported' and 'Maintained' features up to date. All other features, in the 'Tech-Preview', 'Sunset', 'Orphan', or 'Deprecated' states, should be covered by nightly/weekly regression; upon noticing a failure there, people can pick it up and test.
Now, to have this classification done properly, we need to classify tests using tags (i.e., mention TAG=TECHPREVIEW, TAG=EXPERIMENTAL, etc.). We can agree upon which tags should be run when, and in which environment (e.g., Fedora 28, CentOS 7, etc.); a sketch follows.
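As a hedged sketch (the tag convention and the filtering shown here are hypothetical, not an agreed format), a `.t` test file could carry a tag comment, and jobs could select tests on it:
```sh
# In tests/features/tier.t (hypothetical tag convention):
#   # TAG=TECHPREVIEW

# Per-patch regression: run only tests without a tech-preview/experimental tag
grep -rLE --include='*.t' 'TAG=(TECHPREVIEW|EXPERIMENTAL)' tests/ | xargs -r prove

# Nightly regression: run everything, including tagged tests
prove -r tests/
```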
That way, per-patch regression tests cover 'most' but not 'all', and the nightly and weekly runs make the coverage complete. It also keeps regression times within manageable limits.
Regards,
Amar Tumballi (amarts)
---
Discussion with Infra:
Testing / Infra focus for July, 2018:
* gcc7
* coverity
* 'tests' section in commit msg
- AI: Amar to help here
* distributed tests
* shellcheck - no exit code.
* py3 - run it in Fedora28
* py2 - run it on CentOS after running a script which would change 'python3' to 'python' (a sketch follows this list).
* Output of regression in gerrit
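A hedged sketch of such a conversion script (the substitution shown is an assumption; the real script may be more targeted):
```sh
# Rewrite python3 shebangs/invocations to plain 'python' for CentOS
grep -rl --include='*.py' '^#!.*python3' . | xargs -r sed -i 's/python3/python/g'
```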
----
#### 2018, July 10th
#### Subject: Gluster project's intention to move to python3 as the default.
Hi all,
GlusterFS as a project is not a core Python-based project, but many of its features (like geo-replication, eventsapi, etc.) depend on Python scripts. With many Linux distributions moving towards python3 as the default shipped Python version in their environments, the Gluster project too is planning to move towards python3 as the default.
More on this activity can be tracked in [Github issue #411](https://github.com/gluster/glusterfs/issues/411).
For all components which have valid regression tests to validate their scripts, we will continue to support functionality on CentOS 7 (i.e., python2).
To make sure we support python3 completely, we will run regressions/smoke on Fedora 28 (where python3 is the default).
#### What does it mean for our users?
* Up to Gluster release 4.1 (the current latest), python2 is supported by default.
* From future releases (5.0 onwards), Gluster will, by default, work only on python3 (if you install from source).
* Package maintainers of particular distributions can choose to make minor modifications to work on python2.
As per our current analysis, all the existing Python files need only minimal changes to work on both python2 and python3. Hence, for a few more releases, we will be happy to assist with the relevant changes to support python2 if some users want it that way.
In summary, we don't see any issues for our users, whether on python2 or python3. The experience will continue to be smooth.
#### What does it mean for our developers?
* If a developer wants to make changes to an existing Python file, they need to be considerate of compatibility on python2 setups.
    - The change would fail tests on CentOS 7 if the tests are already automated.
* If a developer writes a new file, it is expected to be in python3 form, and python2 compatibility is not mandatory.
    - The developer gets to choose whether a test runs on only python3, only python2, or both, and has to provide a test case with the commit.
* We will have an automated check that all Python files modified in the repository are python3-syntax friendly, so you are advised to develop and test your new scripts in a python3 environment (a sketch of the check follows this list).
    - The syntax check will only validate python3 friendliness.
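A hedged sketch of what such a syntax check could look like (the exact CI job is an assumption):
```sh
# Check that every modified .py file at least compiles under python3
for f in $(git diff --name-only origin/master -- '*.py'); do
    python3 -m py_compile "$f" || exit 1
done
```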
If you have further questions on this, we are happy to discuss them over email or on the GitHub issue pointed to above.
Regards,
Amar Tumballi (amarts)
-------
(Old)
# Proposal for automated bugzilla status updates
This proposal intends to make the development process more efficient by automating aspects of the Red Hat Bugzilla workflow.
This is part of the ongoing process improvements that were discussed in the maintainers meeting and are recorded [here](https://docs.google.com/document/d/1AFkZmRRDXRxs21GnGauieIyiIiRZ-nTEW8CPi7Gbp3g/edit?usp=sharing).
### How is bugzilla and github used right now?
Today, the GlusterFS project uses [bugzilla](https://bugzilla.redhat.com) for tracking bugs, and [github](https://github.com/gluster/glusterfs/issues) for tracking feature requests.
At present, the workflow of a bug involves a large degree of manual intervention. The lifecycle of a bug in bugzilla is something like this:
* Anyone can file a bug, and it starts its life at status 'NEW'
* When a developer starts working on it, they change it to 'ASSIGNED'.
* When the patch is posted to review, the bug should be moved to 'POST' state.
* When the final patch (a bug can have more than 1 patch required to fix it) is merged, the bug status should be changed to 'MODIFIED'.
* When the release happens, the bugs should be closed with 'CLOSED' 'CURRENTRELEASE' with a comment saying which release has the fix.
Today, other than the last step, everything is manual, and hence there is a high chance of missing proper updates on bugzilla. This also causes problems when a user files a bug and it is not updated for a long time, even though the fix is present in a release because someone had already worked on it.
### What are we changing?
We are proposing the same keywords used in [github issues](https://help.github.com/articles/closing-issues-using-keywords/) for handling bugs automatically. `./rfc.sh` will ask one more question, whether the patch is the final patchset in the series, and will use `fixes: ` or `updates: ` accordingly.
For those using other forms of code submission than rfc.sh, the tags to add in the commit message are:
- `<fixes/updates>: #<num>`
- If you are referencing a github issue from another repository, use `<fixes/updates>: gluster/glusterfs#<num>`, in other words `<fixes/updates>: <repo-location in github>#<num>`
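For example, the final patch of a series could carry a footer like this (everything shown is a placeholder):
```
<component>: <one-line summary of the change>

<longer description of the change>

fixes: #1234
```
Intermediate patches of the same series would use `updates: #1234` instead, leaving the issue open.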
Considering that Gluster-specific bugzilla IDs started around number 743000, around October 11th, 2011, we will treat any number below 743000 as a github issue for now.
While github issue numbers will take a significant time to reach 743000, we think this model is simple and straightforward to implement, and easy for users/developers to understand.
This change involves changes in the [`glusterfs`](https://review.gluster.org/19564), [`build-jobs`](https://review.gluster.org/19565) and [`glusterfs-patch-acceptance-tests`](https://github.com/gluster/glusterfs-patch-acceptance-tests/pull/121) repositories. We appreciate reviews and comments on these changes.
The ETA for these changes to be in action is March 31st, 2018, so please voice any concerns soon. If the points raised against these changes are agreed upon, we are happy to revert too; so, regardless of the dates, do let us know your opinions.
---
Email 2
# Proposal to change the version numbers of Gluster project
Until now, the Gluster project's releases have followed the `x.y.z` model, where `x` indicates a major revision, `y` a minor one, and `z` a patch release. Read more on this model at [wikipedia](https://en.wikipedia.org/wiki/Software_versioning#Change_significance).
As we are announcing the release and availability of Gluster 4.0[.0], it is a good time to reconsider our version numbering.
### What is the need to reconsider version number now?
Major and minor version numbering is a good strategy for projects which introduce incompatibilities between major versions.
For Gluster, as it is a filesystem, and one of the major reasons people use this project is 'High Availability', we can never consider breaking compatibility between releases. So, regardless of any major version changes, the filesystem should continue to work from a given mount point.
NOTE: We are not saying there will never be any issues for clients, but users will have enough time to plan, based on called-out incompatibilities, and hence adapt to the new changes in an application maintenance window.
### So, what next?
There are multiple changes we are proposing.
* As announced earlier, 4.0 will be an STM (Short Term Maintenance) release, and it will be the last STM.
* As already announced, 4.1 will be our LTM (Long Term Maintenance) release. It will be released 3 months after 4.0 (end of June, 2018).
* After 4.1, we want to move to either continuous numbering (like Fedora) or time-based (like Ubuntu, etc.) release numbers. Which model we pick is not yet finalized; happy to hear opinions.
* There will be no more STM releases for early access to still-maturing features. We will either use the `experimental` branch, or tag a feature in a release as experimental. Everything core to the operation of Gluster will remain stable, and will only improve from release to release.
**NOTE:** The exact mechanisms for tagging something experimental vs. stable are still being evolved. Further, what this means for a user is also being worked out, and will be put out for discussion soon.
* Considering we had a 6-month release cycle for LTM releases and 3 months for branching, we want to settle on a 4-month release cycle for all versions. This cuts down the number of backports, and the number of supported versions from which one can upgrade to the latest. Users will also benefit from more releases being supported long term.
* Every release will be maintained for 1 year, as before.
    - Monthly bug-fix update releases per maintained release will be made available (as before).
    - After the first 3 or 4 months, the monthly bug-fix update release cycle will change to bi-monthly (once in 2 months), or be expedited as necessary.
----
## New msg
Hello all,
I wanted to update you all with 2 things.
1). I am starting 2 new ventures, along with a few friends.
2). We have opportunities, if you are interested.
If these 2 points don't interest you, feel free to ignore the rest of the message. Also, there is no need to forward this to people outside of this group, mainly because the message is drafted for 1st-level contacts.
About what I am involved with:
1. https://kadalu.io
* About:
This is an extension of what I have been doing for the last 15 years: the 'Storage' domain in the software field. If you or your company (or your friend's company) is working with technologies like kubernetes / hybrid cloud / OpenShift / microservices / containers, this may be of help.
Kadalu tries to make storage easy for admins/devops dealing with these technologies, and saves money for your company, because we help you use your storage resources better.
This is an open source project, which means you can use and contribute to the project freely. We build and maintain the project, but sell only support.
* How can you help:
Multiple ways.
1. Help in spreading the word. Follow us at [@kadaluIO](https://twitter.com/KadaluIO) and [@tumballi](https://twitter.com/tumballi) on Twitter.
   - You can also follow us on LinkedIn: https://www.linkedin.com/company/kadalu-io/
2. If you hear someone talking about 'kubernetes' / 'docker' / OpenShift / containers, let them know about us.
3. Use it if you are in need.
4. Contribute code back. This will help you in your job, and give you more visibility as an open-source developer.
5. If you are in college, we can help with an internship (remote at present), where a stipend is not yet promised, but learning is guaranteed.
6. If you are on GitHub, give us a star (https://github.com/kadalu/kadalu/stargazers).
7. There are discussions on some deals; if they come through, we can hire if you have storage / sysadmin / devops experience (C/Python on Linux is a must). This may take some time, but I am happy to brief you if you are interested. It is better to have people with a common background as teammates.
Get in touch with me or write to hello@kadalu.io if you have any queries.
2. dhiway.com
* About:
This is an initiative to solve identity management issues, especially those where the receivers need proofs. Some easy examples range from the KYC process and application filling (which needs proofs attached) to managing identities in different domains. We use blockchain (Hyperledger projects) and other cloud technologies to solve these problems.
* How can you help:
1. Follow us at [@dhiwaynetworks](https://twitter.com/dhiwaynetworks) on Twitter.
   - You can also follow us on LinkedIn: https://www.linkedin.com/company/dhiway
2. If you are looking at any solutions related to identity in your domain, write to me (you have my number), and we can discuss.
3. If you are experienced in developing mobile applications (preferably react-native apps), get in touch.
4. If you have built and managed scalable systems in the cloud (AWS/Azure/GCP/DigitalOcean, etc.), get in touch.
5. If you have experience with blockchain / SSI (Self-Sovereign Identity), Hyperledger Indy, or Hyperledger Aries projects, get in touch.
6. Happy to discuss internship opportunities; send your resume. We prefer people with some experience with Linux systems, REST APIs, or react-native mobile app development, mainly because we are ourselves on busy deadlines :-) If you like challenges and are a self-learner, it is a great place to be.