RonKat
**Mask R-CNN Testing**
* The first model I chose for the image segmentation part of the project was Mask R-CNN, because it is well suited to instance segmentation.
* I tested my comet assay images with a pre-trained Mask R-CNN that was trained on the COCO dataset.
* https://colab.research.google.com/drive/1cD7KyzT4_4D7bG8ypmVz2NbPLul9kkVJ?usp=sharing (link to the pre-trained Mask R-CNN)
* This is how the pre-trained Mask R-CNN segmented one of my comet assay images: ![](https://hackmd.io/_uploads/SJYe6_Akp.png)
* The pre-trained Mask R-CNN did not do well, likely because it was trained on very different images. It may do better once it is trained on our dataset.

**Segment Anything Model (SAM)**
* Meta AI recently created SAM, one of the most state-of-the-art models for image segmentation.
* I tested the model by importing my images into the demo on their website.
* The link to the demo is https://segment-anything.com/demo#
* The demo offers several segmentation modes; below are images of the hover, click, box, and everything settings. Hover segments whatever the cursor is over, and click segments whichever parts of the image you click. Box segments whatever falls inside a box you draw. Finally, the Everything setting segments the entire image: it places a grid of dots over the whole image and segments the parts of the image at those dots.
* Hover and click: ![](https://hackmd.io/_uploads/HyR2FOAkp.png)
* Box: ![](https://hackmd.io/_uploads/S1BJ5_Rka.png)
* Everything: ![](https://hackmd.io/_uploads/SJr09_0k6.png)
* SAM did great on my comet assay images, and it looks like it would be a strong choice for the image segmentation portion of our project.
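The Everything setting described above queries the model at a regular grid of points over the image. A minimal NumPy sketch of building such a grid (the grid size and spacing here are illustrative, not Meta's actual values):

```python
import numpy as np

def point_grid(height, width, points_per_side=8):
    """Build a regular grid of (x, y) query points over an image,
    conceptually like SAM's 'everything' mode (spacing illustrative)."""
    # Place points at cell centers so the grid covers the full image
    xs = (np.arange(points_per_side) + 0.5) * width / points_per_side
    ys = (np.arange(points_per_side) + 0.5) * height / points_per_side
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)  # shape (N, 2)

grid = point_grid(480, 640, points_per_side=8)
print(grid.shape)  # (64, 2)
```

Each of these points would then be passed to the model as a foreground prompt, and the resulting masks deduplicated.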
>>> Please specify metrics of how you can quantitatively judge the performance of SAM for your problem.

* In addition, these are the results of a SAM model that wasn't fine-tuned on our data. SAM should perform much better after fine-tuning on our own dataset.

**UPDATE**
* I have been working on the segmentation model and have completed everything required up to the training portion. For training I imported a module called supervision, which uses the SAM model's outputs to create an annotated image; I then compared this to my original image to compute the loss. However, this is where I am encountering an error.

>>> TRAINING needs to be principled with mini-batch sampling, regularization and cross-validation. Without that there is no guarantee on generalization of the trained model to new data.

* For some reason, the loss function cannot process these two images properly and I am unable to figure out what the issue is. I will attach a link to my code; certain lines I have kept are unnecessary and will be changed later.
* Code link ------>

**Newest Update**
* I have finished the code and was able to train it to a validation loss of 2 and a training loss of 0.3. The project really needs to be done by the 17th of February, and all I have left is to display the data, run the model on my test set, and compute its accuracy on the test set. Here is the link to the Google Colab ---> https://colab.research.google.com/drive/1-F-HSFDiDYUz5VtxGtIBBD8NFbORDiV1?usp=sharing

### Next Steps
1. Learn Python, PyTorch and Jupyter (many resources on the web)
2. Convert the code into Jupyter notebooks and run them in https://colab.google

### Nature Article Methodology Review
:::danger
Please put a pointer to the Nature article here. Also please cite below the page numbers from the paper from which you took the info, and please explain how hyperparameters were tuned. For the stochastic gradient descent, did they use Adam as the optimizer? Please elaborate in detail so one understands the full computational algorithm deeply, besides just their architecture and their controls/parameters.
:::

Link to paper ---> https://www.nature.com/articles/s41598-020-75592-7

I obtained the information from pages 4 & 5 of the paper!

1. **Dataset:** The DeepComet model was trained on 1037 comet assay images, annotated with the VGG annotation software. Annotation consisted of placing one dot at the center of the comet head and one dot at its edge, which makes it easy to determine the head's area, diameter, etc. The rest of the comet was annotated with a polyline tool. The dataset also contains ghost cells, which are harder-to-see cells within a comet assay.
2. **Model development:** A Mask R-CNN was used to create a mask for each comet and to classify each cell as ghost or non-ghost. The Mask R-CNN was then extended with a comet-head segmentation module that uses the two points within the comet head to create segmentations. The training target treats each keypoint as a one-hot binary mask; the comet-head module then predicts a mask for each of the two keypoints on the comet.
3. **Hyperparameters and techniques:** A ResNet-50 was used as the Mask R-CNN backbone. The original 2048 × 2880 images were resized to 512 × 720, and a batch size of 4 was used for training. The model was trained for 20 epochs with stochastic gradient descent, momentum 0.9, weight decay 0.0005, and an initial learning rate of 0.005, reduced by a power of 10 every 5 epochs. Image augmentation techniques were also used: random vertical and horizontal shifts of images by random pixel offsets in [−10, 10], rotation by an arbitrary degree in [−30, 30], and random vertical flips.
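The learning-rate schedule in step 3 (initial LR 0.005, divided by 10 every 5 epochs, 20 epochs total) works out to four plateaus; a small sketch of the rule as I read it from the paper:

```python
def lr_at_epoch(epoch, base_lr=0.005, step=5, factor=10):
    """Piecewise-constant schedule: divide the learning rate by
    `factor` every `step` epochs (paper: 0.005, /10 every 5 epochs)."""
    return base_lr / factor ** (epoch // step)

# Over the paper's 20 epochs this gives approximately
# 0.005, 0.0005, 5e-5 and 5e-6 for epochs 0-4, 5-9, 10-14, 15-19.
for e in (0, 5, 10, 15):
    print(e, lr_at_epoch(e))
```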
In addition, random brightness modulation in [−0.25, 0.25] and contrast modulation in [0.25, 1.75] were applied.
4. **Metrics:** The researchers used Intersection over Union, true positives, true negatives, false positives, false negatives, precision, and recall to evaluate their model.
5. **Optimizer:** Stochastic gradient descent was used; they did not use Adam for the Mask R-CNN. I went through the whole paper and unfortunately it does not mention any other information about hyperparameter tuning or the training process; the only relevant information is on page 4. I would love to know of any other places where I could find this information, as I believe it would be extremely valuable for my goals within this project.

### SegFormer Model Analysis
1. **Dataset:** The dataset contains 33 comet assay images with corresponding masks. To preprocess the images I used MinMaxScaler to normalize them. The masks contain pixel values of 0, 127, and 255; I first converted the pixel values of 127 to 1 and of 255 to 2, then applied `to_categorical` to the masks. I used a batch size of 1 only because my Google Colab cannot train with a higher batch size; once I choose a final model, I plan to use a server for better training.
2. **Model parameters:** ![image](https://hackmd.io/_uploads/BkmCz-9OR.png)
3. **Hyperparameters:** I used `nn.CrossEntropyLoss`, a commonly used loss for multiclass classification and semantic segmentation, with the Adam optimizer at a learning rate of 0.0001 and a regularization (weight-decay) rate of 0.0005, and trained for 100 epochs. I achieved a training loss of around 0.02 and a validation loss of 0.07.
4. **Model evaluation:**
   Ground truth image: ![image](https://hackmd.io/_uploads/BJCmEZc_R.png)
   Model prediction: ![image](https://hackmd.io/_uploads/rJIB4-5_R.png)
   Intersection over Union and overall accuracy results:
   * Mean IoU: 0.7135960290844391
   * Mean accuracy: 0.7669789284497243
   * Overall accuracy: 0.9697340829031807
   * Comet tail IoU (green segmentation): 0.62652797
   * Comet head IoU (yellow segmentation): 0.53943379
   * Comet tail accuracy: 0.7183381
   * Comet head accuracy: 0.58785744
5. **Final notes:** I am still experimenting with hyperparameters and adding data augmentation to the model. SegFormer performed relatively well, as I was able to beat SAM on this dataset. However, I think the accuracy can go higher with other models such as DINOv2, which I am currently learning how to use. I am also going to try non-vision-transformer models, since transformer models require lots of data to be effective and I have limited data; I will therefore try other models such as a U-Net and report the results.
6. **Code link:** https://colab.research.google.com/drive/1Kdq5on6OSW5n1emIU67FGMb5DGH0JXNB?usp=sharing

### Stress Test the Code and Carefully Document
0. Tabulate all performance metrics/scores: IoU, Dice.
1. Robustness: train, cross-validate and test with **varying degrees** of noise and different contrast.
2. Transferability: train on one dataset and test on others.
3. Model selection and generalizability.
4. Stability in training: learn how you can capture its performance on various NEW data.

https://paperswithcode.com/datasets?task=image-dehazing

### UNET Model Analysis with Varying Degrees of Noise
1. **Degrees of noise:** I added two levels of noise, with contrast and brightness perturbations, to the training dataset: contrast levels ranging from −25 to 25 and brightness levels from −25 to 125. This made the model more robust, as it showed a higher level of generalization.
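The brightness/contrast perturbation in step 1 can be sketched as follows (interpreting the quoted ranges as an additive brightness offset and a percentage contrast change is an assumption of this sketch; the notebook's exact formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb(image):
    """Randomly shift brightness and scale contrast, then clip back
    to the valid 8-bit range. Ranges follow the text above; treating
    them as offset/percent is an assumption of this sketch."""
    b = rng.uniform(-25, 125)                  # brightness offset
    c = 1.0 + rng.uniform(-25, 25) / 100.0     # contrast as percent change
    out = image.astype(np.float32) * c + b
    return np.clip(out, 0, 255).astype(np.uint8)

noisy = perturb(np.full((64, 64), 128, dtype=np.uint8))
print(noisy.dtype, noisy.shape)
```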
2. **Model training:** ![image](https://hackmd.io/_uploads/HkPR4lwKR.png) A standard U-Net, like the one above, was used. It used the same hyperparameters as the SegFormer model and was trained for 100 epochs. ![image](https://hackmd.io/_uploads/SyIWBgDFC.png)
3. **Model accuracy levels:**
   * IoU average: 0.79
   * Recall average: 0.78
   * Precision average: 0.92
   * F1 score / Dice average: 0.82
   * More metrics to be added
4. **Final thoughts and next work:** The U-Net was significantly better than the SegFormer model, most likely because ViT models need much more data to perform well. My future work is to test on a second dataset that I have from a different research paper and see how the model does. Will update ASAP.

### UNET Model with Non-Related Dataset from the Comet Analyzer Research Paper
**Dataset notes:** The dataset originally had an odd image size, and I had to interpolate to a larger size, which deformed the comets. Also, the ground truth has only a single class, so I had to collapse all classes to 1 in the model predictions.
1. **Model accuracy levels:**
   * IoU average: 0.80; IoU for the comet category: 0.60
   * Recall average: 0.82
   * Precision average: 0.97
   * F1 score / Dice average: 0.87
2. **Notes:** I feel the model did fairly well, but it did not perform as well as the research paper from which I took the dataset. The paper used the same training data as me but achieved a better comet IoU of 0.67 using a ResNet-18. I feel the U-Net should have performed better and I do not really know why it underperformed; I would love it if you could look at the code and see whether there are any issues with the model. One possible reason some accuracy was lost is that I used contrast, brightness, and horizontal/vertical flips, while the paper did not.
3. **Final steps:** I plan to retrain the model and check whether overfitting is causing it to underperform.
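To make the scores above reproducible (and to tabulate IoU and Dice as requested), the per-image metrics can be computed directly from binary masks; a NumPy sketch:

```python
import numpy as np

def binary_scores(pred, target):
    """IoU, Dice/F1, precision and recall for a pair of binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # predicted and true
    fp = np.sum(pred & ~target)   # predicted but false
    fn = np.sum(~pred & target)   # missed
    return {
        "iou": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
scores = binary_scores(a, b)
print(scores)  # iou ≈ 0.333, dice = 0.5, precision = 0.5, recall = 0.5
```

Averaging these dictionaries over the test set gives exactly the table of averages reported above.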
Link to the paper that contained the dataset: https://www.sciencedirect.com/science/article/pii/S2001037022003336
Link to code: https://colab.research.google.com/drive/1D3fACMXItVFSJkKwcZLEaR4M08XSRQgn?usp=sharing
Link to data: https://drive.google.com/drive/folders/14mvYEqE63R8ox4Kb9kmAgCLazUIspxY5?usp=sharing

### U-MixFormer Model Analysis
1. **Data preprocessing and hyperparameters:** I trained the model with the MMSegmentation library, using random crops, random flips, and one-hot encoding for data preprocessing. I trained for 500 iterations, and MMSegmentation used a learning-rate schedule to change the learning rate dynamically. I also used a batch size of 1, since anything larger depletes the resources on Colab.
2. **Results:**
   * Mean IoU: 78.66
   * Mean pixel accuracy: 95%
3. **Notes:** Although the model performed better than the U-Net and SegFormer, there were overfitting issues. When I tested the model on another dataset that I have, it only output values of 0 and could not segment the images. I believe this could be due to differences in the second dataset and the limited training data I had. I would love some tips on how to reduce this overfitting and achieve good accuracy on both datasets.

**Update:** I did some analysis and realized that there are errors when training past 500 iterations with MMSegmentation. I am currently planning to resolve this error so I can train for longer while using a bigger batch size, to ensure proper generalization and learning. Will get back ASAP!
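The mask preprocessing used throughout (remapping the grayscale values 0/127/255 to class ids 0/1/2, then one-hot encoding, as in the SegFormer section above) can be sketched as:

```python
import numpy as np

def encode_mask(mask, num_classes=3):
    """Map grayscale mask values {0, 127, 255} to class ids {0, 1, 2},
    then one-hot encode to shape (H, W, num_classes)."""
    ids = np.zeros_like(mask, dtype=np.int64)
    ids[mask == 127] = 1   # comet tail
    ids[mask == 255] = 2   # comet head
    # Indexing the identity matrix by class id yields the one-hot rows
    return np.eye(num_classes, dtype=np.float32)[ids]

m = np.array([[0, 127], [255, 0]], dtype=np.uint8)
one_hot = encode_mask(m)
print(one_hot.shape)  # (2, 2, 3)
```

(The class-to-value assignment in the comments follows the color legend used earlier; which value is tail vs. head in my actual masks is an assumption here.)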
Code link ---> https://colab.research.google.com/drive/1_jFpteXWBRvZfuJ8GWWcJ_t6n0-rIe1n?usp=sharing

### Statistical Analysis
**Hyperparameters:**
* Learning rate: 0.01
* Momentum: 0.9
* Optimizer: stochastic gradient descent
* Batch size: 1
* Weight decay: 0.0005

Average training accuracy: ![aAcc (4)](https://hackmd.io/_uploads/SkT1cSdyyg.svg)
Base LR: ![base_lr (2)](https://hackmd.io/_uploads/B14b9BdJkl.svg)
Decode accuracy (seg): ![decode.acc_seg (2)](https://hackmd.io/_uploads/SkrN5HOJkx.svg)
Loss: ![decode.loss_ce (2)](https://hackmd.io/_uploads/rkh49H_yJx.svg)
Validation accuracy: ![mAcc (2)](https://hackmd.io/_uploads/HkOS9rOkkg.svg)
Validation IoU: ![mIoU (2)](https://hackmd.io/_uploads/ryO89Sdkye.svg)

**Next steps:** I plan to do more hyperparameter tuning to reduce the noise in the accuracy and IoU graphs and to increase those values. I would love advice on what specifically to change.
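For reference, a single SGD update with the hyperparameters listed above (lr 0.01, momentum 0.9, weight decay 0.0005) can be sketched as follows. This mirrors the standard PyTorch-style formulation of momentum SGD; it is an illustration, not the MMSegmentation internals:

```python
import numpy as np

lr, momentum, weight_decay = 0.01, 0.9, 0.0005

def sgd_step(w, grad, velocity):
    """One SGD-with-momentum step, PyTorch-style:
    g <- grad + wd * w;  v <- momentum * v + g;  w <- w - lr * v."""
    g = grad + weight_decay * w
    velocity = momentum * velocity + g
    return w - lr * velocity, velocity

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
w, v = sgd_step(w, np.array([0.1, 0.2]), v)
print(w)  # ≈ [0.998995, -2.00199]
```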
