```
0. Preparation
[x] 0.1. Instances summary sheet https://docs.google.com/spreadsheets/d/1yhwfv3PWD7AlnZperCV0tMqSxf9X7MhnBV23aZibN6k/edit?usp=sharing
[x] 0.2. Ensure access to all instances
[x] 0.3. Perform a test run on a copy of one of the clusters. Done, using IAD GFS instances' backup AMIs.
[ ]
```

```
1. Pre-actions check Cluster 1 (IAD)

The common approach is to check the cluster for any errors before performing any modifications.
If issues are found (except for easy tasks like mount/remount volumes), escalate and postpone execution.

[x] 1.1. Check nagios for issues related to gfs*.iad* hosts

[x] 1.2. Ensure instance i-0af638f14f94e3ce0 (gfs1-1.iad-aws.prod.sli.io) is ready for retype.

[x] 1.2.0. Connect to the instance:
ssh 10.79.114.41

[x] 1.2.1. Check mount:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-1.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-1.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[x] 1.2.2. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-1.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 1.2.1.

[x] 1.2.3. Check node VIP status:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL OK - IN VIP: write_file,success; read_file,success; alive;
If the output of the command is as shown above, proceed to the next step.
[root@gfs1-1.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL CRITICAL - NOT IN VIP: write_file,failed; check_failed; remove,already_removed;
If the output of the command shows the above failure, try the following:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ vip-control insert
Sat Jul 31 00:38:39 NZST 2021,vip-control,insert,successful
If the above command succeeded as shown, continue to the next step. If it fails, stop execution and escalate to BU SaaS.

[x] 1.2.4. Check peers status:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-4.iad-aws.prod.sli.io
Uuid: 0183b376-1aa0-4d98-945e-6197a8876f27
State: Peer in Cluster (Connected)

Hostname: gfs1-2.iad-aws.prod.sli.io
Uuid: 95e9d91b-fe51-4e0d-86cd-00097890f113
State: Peer in Cluster (Connected)

Hostname: gfs1-3.iad-aws.prod.sli.io
Uuid: 23884f97-8149-4dd9-b4d3-1faccee03d1a
State: Peer in Cluster (Connected)
If there are issues with the nodes above (not Connected), stop execution and escalate to BU SaaS.

[x] 1.2.5. Check volumes status:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are not online, stop execution and escalate to BU SaaS.
If there are active volume tasks in progress, wait for completion before continuing with modifications. [up to 30min, then escalate]
```
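The checks in 1.2 are repeated verbatim for the other three nodes in 1.3-1.5 below. The sketch here is not part of the original runbook: it runs the same read-only checks (mounts, mount health, VIP membership, peer state) across all four GFS nodes in one pass, assuming passwordless root ssh to the node IPs from the instances summary sheet, and changes nothing on the nodes.

```bash
#!/usr/bin/env bash
# Read-only readiness sweep for the IAD GFS nodes (steps 1.2-1.5).
# Node IPs are taken from this runbook; adjust for other clusters.
NODES="10.79.114.41 10.79.114.45 10.79.114.199 10.79.114.67"

for host in ${NODES}; do
  echo "=== ${host} ==="
  # 1.x.1: expect 4 gfs1-related mounts (two bricks, two fuse mounts)
  ssh "${host}" 'echo "mounts: $(mount | grep -c gfs1)"'
  # 1.x.2: a healthy mount lists silently; errors such as
  # "Transport endpoint is not connected" show up here
  ssh "${host}" 'ls /mnt/gfs1 1>/dev/null; ls /mnt/sec-gfs1 1>/dev/null'
  # 1.x.3: expect "VIP-CONTROL OK - IN VIP"
  ssh "${host}" 'vip-control status'
  # 1.x.4: expect 3 peers in "Peer in Cluster (Connected)" state
  ssh "${host}" 'echo "connected peers: $(gluster peer status | grep -c "(Connected)")"'
done
```

Any deviation from the expected values should be handled with the per-node remediation steps documented in 1.2-1.5 (remount, vip-control insert), or escalated to BU SaaS.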
```
[x] 1.3. Ensure instance i-08738fdcf35071be8 (gfs1-2.iad-aws.prod.sli.io) is ready for retype.

[x] 1.3.0. Connect to the instance:
ssh 10.79.114.45

[x] 1.3.1. Check mount:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-2.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-2.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[x] 1.3.2. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-2.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 1.3.1.

[x] 1.3.3. Check node VIP status:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL OK - IN VIP: write_file,success; read_file,success; alive;
If the output of the command is as shown above, proceed to the next step.
[root@gfs1-2.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL CRITICAL - NOT IN VIP: write_file,failed; check_failed; remove,already_removed;
If the output of the command shows the above failure, try the following:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ vip-control insert
Sat Jul 31 00:38:39 NZST 2021,vip-control,insert,successful
If the above command succeeded as shown, continue to the next step. If it fails, stop execution and escalate to BU SaaS.

[x] 1.3.4. Check peers status:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-1.iad.prod.sli.io
Uuid: c9381ca4-99ac-4e69-8823-b9b8e208f873
State: Peer in Cluster (Connected)

Hostname: gfs1-3.iad-aws.prod.sli.io
Uuid: 23884f97-8149-4dd9-b4d3-1faccee03d1a
State: Peer in Cluster (Connected)

Hostname: gfs1-4.iad-aws.prod.sli.io
Uuid: 0183b376-1aa0-4d98-945e-6197a8876f27
State: Peer in Cluster (Connected)
If there are issues with the nodes above (not Connected), stop execution and escalate to BU SaaS.

[x] 1.3.5. Check volumes status:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       77317
Self-heal Daemon on gfs1-1.iad.prod.sli.io                 N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       77317
Self-heal Daemon on gfs1-1.iad.prod.sli.io                 N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are not online, stop execution and escalate to BU SaaS.
If there are active volume tasks in progress, wait for completion before continuing with modifications. [up to 30min, then escalate]
```

```
[x] 1.4. Ensure instance i-0a0c2ad3bba430706 (gfs1-3.iad-aws.prod.sli.io) is ready for retype.

[x] 1.4.0. Connect to the instance:
ssh 10.79.114.199

[x] 1.4.1. Check mount:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-3.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-3.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[x] 1.4.2. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-3.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 1.4.1.

[x] 1.4.3. Check node VIP status:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL OK - IN VIP: write_file,success; read_file,success; alive;
If the output of the command is as shown above, proceed to the next step.
[root@gfs1-3.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL CRITICAL - NOT IN VIP: write_file,failed; check_failed; remove,already_removed;
If the output of the command shows the above failure, try the following:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ vip-control insert
Sat Jul 31 00:38:39 NZST 2021,vip-control,insert,successful
If the above command succeeded as shown, continue to the next step. If it fails, stop execution and escalate to BU SaaS.

[x] 1.4.4. Check peers status:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-4.iad-aws.prod.sli.io
Uuid: 0183b376-1aa0-4d98-945e-6197a8876f27
State: Peer in Cluster (Connected)

Hostname: gfs1-2.iad-aws.prod.sli.io
Uuid: 95e9d91b-fe51-4e0d-86cd-00097890f113
State: Peer in Cluster (Connected)

Hostname: gfs1-1.iad.prod.sli.io
Uuid: c9381ca4-99ac-4e69-8823-b9b8e208f873
State: Peer in Cluster (Connected)
If there are issues with the nodes above (not Connected), stop execution and escalate to BU SaaS.

[x] 1.4.5. Check volumes status:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       77317
Self-heal Daemon on gfs1-1.iad.prod.sli.io                 N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       77317
Self-heal Daemon on gfs1-1.iad.prod.sli.io                 N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are not online, stop execution and escalate to BU SaaS.
If there are active volume tasks in progress, wait for completion before continuing with modifications. [up to 30min, then escalate]
```

```
[x] 1.5. Ensure instance i-001707ebdd3539c48 (gfs1-4.iad-aws.prod.sli.io) is ready for retype.

[x] 1.5.0. Connect to the instance:
ssh 10.79.114.67

[x] 1.5.1. Check mount:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-4.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-4.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[x] 1.5.2. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-4.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 1.5.1.

[x] 1.5.3. Check node VIP status:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL OK - IN VIP: write_file,success; read_file,success; alive;
If the output of the command is as shown above, proceed to the next step.
[root@gfs1-4.iad-aws.prod.sli.io ~]$ vip-control status
VIP-CONTROL CRITICAL - NOT IN VIP: write_file,failed; check_failed; remove,already_removed;
If the output of the command shows the above failure, try the following:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ vip-control insert
Sat Jul 31 00:38:39 NZST 2021,vip-control,insert,successful
If the above command succeeded as shown, continue to the next step. If it fails, stop execution and escalate to BU SaaS.

[x] 1.5.4. Check peers status:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-3.iad-aws.prod.sli.io
Uuid: 23884f97-8149-4dd9-b4d3-1faccee03d1a
State: Peer in Cluster (Connected)

Hostname: gfs1-2.iad-aws.prod.sli.io
Uuid: 95e9d91b-fe51-4e0d-86cd-00097890f113
State: Peer in Cluster (Connected)

Hostname: gfs1-1.iad.prod.sli.io
Uuid: c9381ca4-99ac-4e69-8823-b9b8e208f873
State: Peer in Cluster (Connected)
If there are issues with the nodes above (not Connected), stop execution and escalate to BU SaaS.

[x] 1.5.5. Check volumes status:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       77317
Self-heal Daemon on gfs1-1.iad.prod.sli.io                 N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       77317
Self-heal Daemon on gfs1-1.iad.prod.sli.io                 N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are not online, stop execution and escalate to BU SaaS.
If there are active volume tasks in progress, wait for completion before continuing with modifications. [up to 30min, then escalate]
```

```
2. Processing Cluster 1 (IAD)

[x] 2.0. Remove Cluster 1 (IAD) out of rotation:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do ssh ${host} 'runuser -l gbrain bash -c "webcontrol vip remove"'; done
(allow a minute before performing the next step)

[x] 2.1. Ensure Cluster 1 (IAD) is not serving to the clients

2.1.1. Check vip status on searches:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do ssh ${host} 'runuser -l gbrain bash -c "webcontrol vip status"'; done

2.1.2. Request the healthcheck document from the search hosts:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do echo -n "${host}: "; curl -A "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0" http://${host}/shared/vip-status.txt; done
Expecting all hosts to return down.
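# A sketch, not part of the original runbook: the result of 2.1.2 can be asserted
# programmatically instead of reading the curl output by eye. It assumes the same search
# host IPs and the same /shared/vip-status.txt document; set EXPECTED=UP when repeating
# this check at step 2.9.2.
[local]% EXPECTED=down; for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do state=$(curl -s -A "Mozilla/5.0" http://${host}/shared/vip-status.txt); [ "${state}" = "${EXPECTED}" ] && echo "${host}: OK (${state})" || echo "${host}: UNEXPECTED '${state}'"; done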
2.2. Perform retype of all cluster instances

[x] 2.2.1. Switch off CRON on all GFS instances:
[local]% for host in 10.79.114.41 10.79.114.45 10.79.114.199 10.79.114.67; do ssh ${host} "systemctl disable crond; systemctl stop crond; pkill -f cron -9; pkill -f CRON -9; ps ax|grep -i cron"; done

[x] 2.2.2. Remove all GFS instances out of serving requests:
[local]% for host in 10.79.114.41 10.79.114.45 10.79.114.199 10.79.114.67; do ssh ${host} "vip-control remove"; done

2.2.3. Proceed with instance retype

[x] 2.2.3.1. gfs1-1.iad-aws.prod.sli.io
[local]% ./automation/bin/run.sh arn:aws:ssm:us-east-1:862508395508:document/co-wu-resize-instance "InstanceId=i-0af638f14f94e3ce0,NewInstanceType=m5.large" 610092376560 us-east-1

[x] 2.2.3.2. gfs1-2.iad-aws.prod.sli.io
[local]% ./automation/bin/run.sh arn:aws:ssm:us-east-1:862508395508:document/co-wu-resize-instance "InstanceId=i-08738fdcf35071be8,NewInstanceType=m5.large" 610092376560 us-east-1

[x] 2.2.3.3. gfs1-3.iad-aws.prod.sli.io
[local]% ./automation/bin/run.sh arn:aws:ssm:us-east-1:862508395508:document/co-wu-resize-instance "InstanceId=i-0a0c2ad3bba430706,NewInstanceType=m5.large" 610092376560 us-east-1

[x] 2.2.3.4. gfs1-4.iad-aws.prod.sli.io
[local]% ./automation/bin/run.sh arn:aws:ssm:us-east-1:862508395508:document/co-wu-resize-instance "InstanceId=i-001707ebdd3539c48,NewInstanceType=m5.large" 610092376560 us-east-1

[x] 2.2.4. Return all GFS instances to serve requests:
[local]% for host in 10.79.114.41 10.79.114.45 10.79.114.199 10.79.114.67; do echo -n "${host}: "; ssh ${host} "vip-control insert"; done
All hosts are expected to return a successful result:
Sat Jul 31 00:38:39 NZST 2021,vip-control,insert,successful
If any host returns a failure, investigate that host individually.
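# A sketch, not part of the original runbook: run.sh and the co-wu-resize-instance SSM
# document are internal, so their output is not reproduced here. With AWS CLI access to
# the target account, the result of the retypes in 2.2.3 can be double-checked with a
# standard describe-instances call:
[local]% aws ec2 describe-instances --region us-east-1 --instance-ids i-0af638f14f94e3ce0 i-08738fdcf35071be8 i-0a0c2ad3bba430706 i-001707ebdd3539c48 --query 'Reservations[].Instances[].[InstanceId,InstanceType,State.Name]' --output text
# Expected: every instance reports m5.large and state "running".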
[x] 2.3. Perform GFS healthcheck for instance i-0af638f14f94e3ce0 (gfs1-1.iad-aws.prod.sli.io).

[x] 2.3.0. Connect to the instance:
ssh 10.79.114.41

[x] 2.3.1. Check gluster daemon is running:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ systemctl status glusterd
If not started, (re)start glusterd:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ systemctl restart glusterd
If it still fails to start all daemons (glusterd, glusterfsd, glusterfs), investigate logs in /var/log/glusterfs/ [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.3.2. Check mount:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-1.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-1.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
If mounts are missing, try to mount the volumes:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
If mount fails, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.3.3. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-1.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-1.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-1.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 2.3.2.
If it fails for the second time, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.3.4. Check volumes status:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are offline, perform the following on the failed node:
[local]% ssh <IP address of the node>
[root@gfs1-<node ID>.iad-aws.prod.sli.io ~]$ gluster volume start <volume name> force
If the above command's output states that there are active volume tasks, continue, but double-check in 1h and notify BU SaaS.

[x] 2.3.5. Check peers' status:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-4.iad-aws.prod.sli.io
Uuid: 0183b376-1aa0-4d98-945e-6197a8876f27
State: Peer in Cluster (Connected)

Hostname: gfs1-2.iad-aws.prod.sli.io
Uuid: 95e9d91b-fe51-4e0d-86cd-00097890f113
State: Peer in Cluster (Connected)

Hostname: gfs1-3.iad-aws.prod.sli.io
Uuid: 23884f97-8149-4dd9-b4d3-1faccee03d1a
State: Peer in Cluster (Connected)
If there are issues in the above output, something unexpected has happened: all bricks were online in the previous step, yet some peers are disconnected. Escalate to BU SaaS and continue.

[x] 2.3.6. Switch on CRON:
[root@gfs1-1.iad-aws.prod.sli.io ~]$ systemctl enable crond
[root@gfs1-1.iad-aws.prod.sli.io ~]$ systemctl start crond
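# A sketch, not part of the original runbook: the volume status checks (2.3.4 and the
# matching steps for the other nodes below) can be screened quickly by filtering
# `gluster volume status` for bricks or self-heal daemons whose Online column is not "Y".
[root@gfs1-1.iad-aws.prod.sli.io ~]$ gluster volume status | awk '/^(Brick|Self-heal)/ && $(NF-1) != "Y" {print "OFFLINE: " $0}'
# No output means every brick and self-heal daemon is online; any OFFLINE line points at the
# node where `gluster volume start <volume name> force` should be run, as described above.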
[x] 2.4. Perform GFS healthcheck for instance i-08738fdcf35071be8 (gfs1-2.iad-aws.prod.sli.io).

[x] 2.4.0. Connect to the instance:
ssh 10.79.114.45

[x] 2.4.1. Check gluster daemon is running:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ systemctl status glusterd
If not started, (re)start glusterd:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ systemctl restart glusterd
If it still fails to start all daemons (glusterd, glusterfsd, glusterfs), investigate logs in /var/log/glusterfs/ [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.4.2. Check mount:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-2.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-2.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
If mounts are missing, try to mount the volumes:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
If mount fails, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.4.3. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-2.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-2.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-2.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 2.4.2.
If it fails for the second time, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.4.4. Check volumes status:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are offline, perform the following on the failed node:
[local]% ssh <IP address of the node>
[root@gfs1-<node ID>.iad-aws.prod.sli.io ~]$ gluster volume start <volume name> force
If the above command's output states that there are active volume tasks, continue, but double-check in 1h and notify BU SaaS.

[x] 2.4.5. Check peers' status:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-1.iad.prod.sli.io
Uuid: c9381ca4-99ac-4e69-8823-b9b8e208f873
State: Peer in Cluster (Connected)

Hostname: gfs1-3.iad-aws.prod.sli.io
Uuid: 23884f97-8149-4dd9-b4d3-1faccee03d1a
State: Peer in Cluster (Connected)

Hostname: gfs1-4.iad-aws.prod.sli.io
Uuid: 0183b376-1aa0-4d98-945e-6197a8876f27
State: Peer in Cluster (Connected)
If there are issues in the above output, something unexpected has happened: all bricks were online in the previous step, yet some peers are disconnected. Escalate to BU SaaS and continue.

[x] 2.4.6. Switch on CRON:
[root@gfs1-2.iad-aws.prod.sli.io ~]$ systemctl enable crond
[root@gfs1-2.iad-aws.prod.sli.io ~]$ systemctl start crond

[x] 2.5. Perform GFS healthcheck for instance i-0a0c2ad3bba430706 (gfs1-3.iad-aws.prod.sli.io).

[x] 2.5.0. Connect to the instance:
ssh 10.79.114.199

[x] 2.5.1. Check gluster daemon is running:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ systemctl status glusterd
If not started, (re)start glusterd:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ systemctl restart glusterd
If it still fails to start all daemons (glusterd, glusterfsd, glusterfs), investigate logs in /var/log/glusterfs/ [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.5.2. Check mount:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-3.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-3.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
If mounts are missing, try to mount the volumes:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
If mount fails, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.5.3. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-3.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-3.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-3.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 2.5.2.
If it fails for the second time, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.5.4. Check volumes status:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are offline, perform the following on the failed node:
[local]% ssh <IP address of the node>
[root@gfs1-<node ID>.iad-aws.prod.sli.io ~]$ gluster volume start <volume name> force
If the above command's output states that there are active volume tasks, continue, but double-check in 1h and notify BU SaaS.

[x] 2.5.5. Check peers' status:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-4.iad-aws.prod.sli.io
Uuid: 0183b376-1aa0-4d98-945e-6197a8876f27
State: Peer in Cluster (Connected)

Hostname: gfs1-2.iad-aws.prod.sli.io
Uuid: 95e9d91b-fe51-4e0d-86cd-00097890f113
State: Peer in Cluster (Connected)

Hostname: gfs1-1.iad.prod.sli.io
Uuid: c9381ca4-99ac-4e69-8823-b9b8e208f873
State: Peer in Cluster (Connected)
If there are issues in the above output, something unexpected has happened: all bricks were online in the previous step, yet some peers are disconnected. Escalate to BU SaaS and continue.

[x] 2.5.6. Switch on CRON:
[root@gfs1-3.iad-aws.prod.sli.io ~]$ systemctl enable crond
[root@gfs1-3.iad-aws.prod.sli.io ~]$ systemctl start crond

[x] 2.6. Perform GFS healthcheck for instance i-001707ebdd3539c48 (gfs1-4.iad-aws.prod.sli.io).

[x] 2.6.0. Connect to the instance:
ssh 10.79.114.67

[x] 2.6.1. Check gluster daemon is running:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ systemctl status glusterd
If not started, (re)start glusterd:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ systemctl restart glusterd
If it still fails to start all daemons (glusterd, glusterfsd, glusterfs), investigate logs in /var/log/glusterfs/ [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.6.2. Check mount:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount|grep gfs1
/dev/mapper/vg01-gfs1.brick1 on /gluster/gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/gluster/sec-gfs1/sec-gfs1.brick1 on /gluster/sec-gfs1/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
gfs1-4.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1-4.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
If mounts are missing, try to mount the volumes:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
If mount fails, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.6.3. Check mount is healthy:
The following commands should produce no output:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
If nothing is printed out, proceed to the next step.
The following is an example of a mounted volume which has issues:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/gfs1 1>/dev/null
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
[root@gfs1-4.iad-aws.prod.sli.io ~]$ ls /mnt/sec-gfs1 1>/dev/null
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
Try the easy solution first:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ umount /mnt/gfs1
[root@gfs1-4.iad-aws.prod.sli.io ~]$ umount /mnt/sec-gfs1
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount /mnt/gfs1
[root@gfs1-4.iad-aws.prod.sli.io ~]$ mount /mnt/sec-gfs1
After issuing the above commands, retry the whole step starting with 2.6.2.
If it fails for the second time, investigate logs in /var/log/glusterfs/, /var/log/messages, dmesg [If no solution found in 10 minutes, escalate to BU SaaS]

[x] 2.6.4. Check volumes status:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ gluster volume status
Status of volume: gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3510
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31340
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       3316
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/gfs1/brick1      49152     0          Y       31413
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457

Task Status of Volume gfs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: sec-gfs1
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gfs1-1.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3521
Brick gfs1-2.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31351
Brick gfs1-3.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       3339
Brick gfs1-4.iad-aws.prod.sli.io:/gluster/sec-gfs1/brick1  49153     0          Y       31424
Self-heal Daemon on localhost                              N/A       N/A        Y       3665
Self-heal Daemon on gfs1-3.iad-aws.prod.sli.io             N/A       N/A        Y       20255
Self-heal Daemon on gfs1-4.iad-aws.prod.sli.io             N/A       N/A        Y       66457
Self-heal Daemon on gfs1-2.iad-aws.prod.sli.io             N/A       N/A        Y       77317

Task Status of Volume sec-gfs1
------------------------------------------------------------------------------
There are no active volume tasks

If any of the bricks above are offline, perform the following on the failed node:
[local]% ssh <IP address of the node>
[root@gfs1-<node ID>.iad-aws.prod.sli.io ~]$ gluster volume start <volume name> force
If the above command's output states that there are active volume tasks, continue, but double-check in 1h and notify BU SaaS.

[x] 2.6.5. Check peers' status:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ gluster peer status
Number of Peers: 3

Hostname: gfs1-3.iad-aws.prod.sli.io
Uuid: 23884f97-8149-4dd9-b4d3-1faccee03d1a
State: Peer in Cluster (Connected)

Hostname: gfs1-2.iad-aws.prod.sli.io
Uuid: 95e9d91b-fe51-4e0d-86cd-00097890f113
State: Peer in Cluster (Connected)

Hostname: gfs1-1.iad.prod.sli.io
Uuid: c9381ca4-99ac-4e69-8823-b9b8e208f873
State: Peer in Cluster (Connected)
If there are issues in the above output, something unexpected has happened: all bricks were online in the previous step, yet some peers are disconnected. Escalate to BU SaaS and continue.

[x] 2.6.6. Switch on CRON:
[root@gfs1-4.iad-aws.prod.sli.io ~]$ systemctl enable crond
[root@gfs1-4.iad-aws.prod.sli.io ~]$ systemctl start crond
```

```
2.7. Checking remaining cluster components' health

2.7.1. Search servers

[x] 2.7.1.1. Ensure mounts:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do echo ${host}; ssh ${host} "mount | grep gfs1"; done
All hosts should report that both gfs1 and sec-gfs1 are mounted:
systemd-1 on /mnt/sec-gfs1 type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=10485)
systemd-1 on /mnt/gfs1 type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=10489)
gfs1.iad-aws.prod.sli.io:/sec-gfs1 on /mnt/sec-gfs1 type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
gfs1.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
If any host does not return all expected mounts, mount the volumes manually:
[local]% ssh <IP address of the host> "mount /mnt/gfs1; mount /mnt/sec-gfs1"
If mounting is unsuccessful, escalate to BU SaaS.

[x] 2.7.1.2. Ensure mounts are healthy:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do echo ${host}; ssh ${host} "ls /mnt/sec-gfs1 1>/dev/null"; done
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do echo ${host}; ssh ${host} "ls /mnt/gfs1 1>/dev/null"; done
All hosts should produce no output. If nothing is printed out, proceed to the next step.
If any host produces output like the following:
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
or
ls: cannot access /mnt/sec-gfs1: Transport endpoint is not connected
remount the volumes on the affected host:
[local]% ssh <IP address of the host> "umount /mnt/gfs1; umount /mnt/sec-gfs1; sync; sleep 2; mount /mnt/gfs1; mount /mnt/sec-gfs1"
Recheck the mounts starting from 2.7.1.1. [if there are issues after the second run, escalate to BU SaaS]

[x] 2.7.1.3. Ensure Apache is running:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do echo ${host}; ssh ${host} "systemctl status httpd -l"; done
On any problem, restart Apache on the respective host:
[local]% ssh <IP address of the host> "systemctl restart httpd"

[x] 2.7.1.4. Ensure Localbrain is running:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do echo ${host}; ssh ${host} "systemctl status braind -l"; done
On any problem, restart Localbrain on the respective host:
[local]% ssh <IP address of the host> "systemctl stop braind; sleep 30; systemctl start braind"

2.7.2. Dory servers

[x] 2.7.2.1. Ensure mounts:
[local]% for host in 10.79.114.251 10.79.114.88; do echo ${host}; ssh ${host} "mount | grep gfs1"; done
All hosts should report that gfs1 is mounted:
gfs1.iad-aws.prod.sli.io:/gfs1 on /mnt/gfs1 type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
If any host does not return all expected mounts, mount the volume manually:
[local]% ssh <IP address of the host> "mount /mnt/gfs1"
If mounting is unsuccessful, escalate to BU SaaS.

[x] 2.7.2.2. Ensure mounts are healthy:
[local]% for host in 10.79.114.251 10.79.114.88; do echo ${host}; ssh ${host} "ls /mnt/gfs1 1>/dev/null"; done
All hosts should produce no output. If nothing is printed out, proceed to the next step.
If any host produces output like the following:
ls: cannot access /mnt/gfs1: Transport endpoint is not connected
remount the volume on the affected host:
[local]% ssh <IP address of the host> "umount /mnt/gfs1; sync; sleep 2; mount /mnt/gfs1"
Recheck the mounts starting from 2.7.2.1. [if there are issues after the second run, escalate to BU SaaS]

[x] 2.7.2.3. Ensure Apache is running:
[local]% for host in 10.79.114.251 10.79.114.88; do echo ${host}; ssh ${host} "systemctl status httpd -l"; done
On any problem, restart Apache on the respective host:
[local]% ssh <IP address of the host> "systemctl restart httpd"

[x] 2.7.2.4. Ensure Dory is running:
[local]% for host in 10.79.114.251 10.79.114.88; do echo ${host}; ssh ${host} "systemctl status dory -l"; done
On any problem, restart Dory on the respective host:
[local]% ssh <IP address of the host> "systemctl stop dory; sleep 30; systemctl start dory"

[ ] 2.7.3. Ensure servers return valid information instead of errors.
Use Nagios (https://nagios-master.sli.io/nagios/) monitoring to check for errors.
If nagios reports issues like the following (errors 400 or 404) for any IAD cluster servers, escalate to BU SaaS:
search1-3.iad.prod asahi braindead Notifications for this service have been disabled CRITICAL 02-08-2021 22:45:42 5d 21h 24m 45s 3/3 asahi/search1-3.iad-aws.prod.sli.io returned 400
```

```
If any of the following steps fail, escalate to BU SaaS.

[ ] 2.8. Returning Cluster 1 (IAD) back to rotation:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do ssh ${host} 'runuser -l gbrain bash -c "webcontrol vip insert"'; done
(allow a minute before performing the next step)

2.9. Ensure Cluster 1 (IAD) is serving to the clients

[ ] 2.9.1. Check vip status on searches:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do ssh ${host} 'runuser -l gbrain bash -c "webcontrol vip status"'; done

[ ] 2.9.2. Request the healthcheck document from the search hosts:
[local]% for host in 10.79.114.86 10.79.114.150 10.79.114.92 10.79.114.15 10.79.114.94 10.79.114.65; do echo -n "${host}: "; curl -A "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0" http://${host}/shared/vip-status.txt; done
Expecting all hosts to return UP.
```

```
3. Processing Cluster 2 (DFW)

4. Processing Cluster 3 (SEA)
```
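Sections 3 and 4 are placeholders: the DFW and SEA clusters follow the same procedure as Cluster 1 above. The sketch below is only a non-authoritative outline of that per-cluster order of operations; the host lists, instance IDs and target instance type must be taken from the instances summary sheet for the cluster being processed, and every healthcheck from sections 1 and 2 still applies.

```bash
#!/usr/bin/env bash
# Outline only (assumption): the section-2 flow, parameterised per cluster.
# Fill SEARCH_HOSTS / GFS_HOSTS from the instances summary sheet before use.
set -e

SEARCH_HOSTS="<search server IPs of this cluster>"
GFS_HOSTS="<GFS node IPs of this cluster>"

# 2.0: take the cluster out of rotation, then wait a minute
for host in ${SEARCH_HOSTS}; do ssh "${host}" 'runuser -l gbrain bash -c "webcontrol vip remove"'; done
sleep 60

# 2.2.1-2.2.2: stop cron and stop serving on the GFS nodes
for host in ${GFS_HOSTS}; do ssh "${host}" 'systemctl disable crond; systemctl stop crond; vip-control remove'; done

# 2.2.3: retype each instance with ./automation/bin/run.sh (instance IDs from the sheet)

# 2.2.4: reinsert the GFS nodes, then run the healthchecks from 2.3-2.7 on every node
for host in ${GFS_HOSTS}; do ssh "${host}" 'vip-control insert'; done

# 2.8: return the cluster to rotation only after all healthchecks pass
for host in ${SEARCH_HOSTS}; do ssh "${host}" 'runuser -l gbrain bash -c "webcontrol vip insert"'; done
```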
