We thank all the reviewers for their constructive and insightful feedback. We are delighted to see that our work is found novel and technically solid by all the reviewers. We have addressed all reviewer concerns to the best of our ability by adding detailed clarifications with supporting experimental results, and we will incorporate the feedback in the final version.

**Reviewer yVuL**

Thank you for your comments. We are encouraged to see that you appreciated our contribution. We respond to all the comments below.

> **Q1**: Not sure of the reproducibility of this work. Also, no code is provided in the supplementary.

**A1**: With anonymity being a primary concern, we have chosen not to release the code at this point. Nevertheless, we are committed to open-sourcing the code on GitHub upon acceptance, inclusive of all model weights, to facilitate the reproduction of all our experimental results.

> **Q2**: Why is imitation learning (IL) not adopted? IL and RL losses can be used together for the task.

**A2**: We wish to clarify that applying imitation learning would require examples of search demonstrations by people, which we do not have. Rather, our labels provide information about object locations across grid cells. This is not useful for an imitation learning paradigm because the search task is fundamentally about handling unknown object locations, leveraging previous observations about them, and trading off exploration and exploitation. Consequently, in our setting the only way to make use of imitation learning is to learn to greedily choose the location most likely to contain an object. And, in fact, such a greedy approach is one of our baselines (we refer to it as greedy classification (GC)), and it performs rather poorly (e.g., in Table 1, it is one of the weakest baselines).
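For concreteness, a minimal sketch of such a greedy baseline is shown below (illustrative only; the function and argument names are ours, not the paper's code):

```python
import numpy as np

def greedy_classification_search(grid_cells, predictor, oracle, budget):
    """Illustrative greedy classification (GC) baseline: score every grid cell
    once with the trained predictor, then spend the entire budget querying the
    highest-scoring cells. Purely exploitative: query outcomes never update
    the predictor or the ranking."""
    scores = np.array([predictor(cell) for cell in grid_cells])
    ranked = np.argsort(-scores)                 # cells by predicted P(target), descending
    queried = ranked[:budget]                    # top-C cells under search budget C
    return sum(oracle(int(i)) for i in queried)  # oracle(i) = 1 iff cell i contains a target
```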
> **Q3**: It would be good to provide the formulation of the L_BCE and L_RL losses in the paper.

**A3**: Thank you for the suggestion. We considered the formulations of the binary cross-entropy loss ($L_{BCE}$) and the REINFORCE loss ($L_{RL}$) well known and therefore did not include them. Following your suggestion, we will include them in the revision.
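For reference, the standard forms are as follows (generic notation; the revision will use the paper's own symbols). Let $y_i$ be the ground-truth label of grid cell $i$, $\hat{p}_i$ the predicted probability that cell $i$ contains a target, $\pi_\theta$ the search policy, $a_t$ the cell queried in state $s_t$ at step $t$, and $R_t$ the return from step $t$ of an episode with budget $C$:

$$
L_{BCE} = -\frac{1}{N}\sum_{i=1}^{N}\Big[y_i \log \hat{p}_i + (1-y_i)\log(1-\hat{p}_i)\Big],
\qquad
L_{RL} = -\sum_{t=1}^{C} R_t \log \pi_\theta(a_t \mid s_t).
$$

The two terms are combined through the balancing factor $\lambda$ discussed in Reviewer 6bym's Q1 below, e.g., as $L = L_{RL} + \lambda\, L_{BCE}$ (the exact weighting follows the paper's notation), so that setting $\lambda = 0$ removes the supervised term, as in the USVAS ablation for Reviewer mjFZ.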
**Reviewer 6bym**

Thank you for the thoughtful comments and suggestions. We are encouraged to see that you found our overall method technically sound and empirically validated. We respond to all the comments below.

> **Q1**: How a different balancing factor lambda between the RL and BCE losses would affect performance is worth investigating in the ablation study section.

**A1**: We performed experiments with different choices of $\lambda$ and found $\lambda = 0.1$ to be the best choice across all experimental setups. For comparison, we report results when the policy is trained with different values of $\lambda$ using small car as the target class and tested with small car, building, and sail boat as targets on xView. We evaluate the policy with varying search budgets C ∈ {25, 50, 75} and N = 49 equal-sized grid cells. In the following table, we provide the results for the **PSVAS** framework with **small car** as target.

| $\lambda$ | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| 0.001 | 4.96 | 7.75 | 9.74 |
| 0.01 | 5.02 | 7.87 | 9.96 |
| 0.1 | **5.51** | **8.33** | **10.52** |
| 1.0 | 5.10 | 7.98 | 10.04 |

In the following table, we provide the results for the **MPS-VAS** framework with **small car** as target.

| $\lambda$ | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| 0.001 | 4.99 | 7.82 | 9.90 |
| 0.01 | 5.06 | 7.93 | 10.03 |
| 0.1 | **5.55** | **8.40** | **10.69** |
| 1.0 | 5.12 | 8.01 | 10.12 |

In the following table, we provide the results for the **PSVAS** framework with **building** as target.

| $\lambda$ | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| 0.001 | 6.08 | 9.64 | 12.35 |
| 0.01 | 6.37 | 9.95 | 12.77 |
| 0.1 | **6.81** | **10.53** | **13.44** |
| 1.0 | 6.39 | 10.16 | 12.81 |

In the following table, we provide the results for the **MPS-VAS** framework with **building** as target.

| $\lambda$ | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| 0.001 | 6.15 | 9.74 | 12.44 |
| 0.01 | 6.41 | 10.09 | 12.89 |
| 0.1 | **6.83** | **10.59** | **13.64** |
| 1.0 | 6.46 | 10.21 | 12.96 |

In the following table, we provide the results for the **PSVAS** framework with **sail boat** as target.

| $\lambda$ | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| 0.001 | 0.74 | 1.12 | 1.43 |
| 0.01 | 0.88 | 1.19 | 1.54 |
| 0.1 | **0.93** | **1.23** | **1.66** |
| 1.0 | 0.89 | 1.20 | 1.59 |

In the following table, we provide the results for the **MPS-VAS** framework with **sail boat** as target.

| $\lambda$ | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| 0.001 | 0.83 | 1.22 | 1.53 |
| 0.01 | 0.98 | 1.46 | 1.87 |
| 0.1 | **1.07** | **1.67** | **2.10** |
| 1.0 | 1.01 | 1.52 | 1.90 |

Our empirical findings are consistent across all experimental settings and justify the choice of $\lambda = 0.1$.

> **Q2**: Whether it is possible to apply such a method across different datasets is not discussed.

**A2**: We can apply our method directly across different datasets without requiring any further modifications or hyperparameter tuning. In the following tables, we demonstrate this by presenting results of training on one dataset with one target class while evaluating on another dataset with another target class. We use N = 64 equal-sized grid cells and varying search budgets C ∈ {25, 50, 75}. In the following table, we report the results when we **train** the policy with **large vehicle on DOTA** as target and **evaluate** with **small car on xView** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| E2EVAS | 4.22 | 6.73 | 8.07 |
| OnlineTTA | 4.23 | 6.75 | 8.10 |
| PSVAS | 4.95 | 7.74 | 9.45 |
| MPS-VAS | **5.07** | **7.92** | **9.73** |

In the next table, we report the results when we **train** the policy with **large vehicle on DOTA** as target and **evaluate** with **building on xView** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| E2EVAS | 5.12 | 8.24 | 10.50 |
| OnlineTTA | 5.14 | 8.27 | 10.53 |
| PSVAS | 6.10 | 9.45 | 12.31 |
| MPS-VAS | **6.18** | **9.68** | **12.83** |

In the next table, we report the results when we **train** the policy with **large vehicle on DOTA** as target and **evaluate** with **sail boat on xView** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| E2EVAS | 0.48 | 0.56 | 0.92 |
| OnlineTTA | 0.49 | 0.57 | 0.95 |
| PSVAS | 0.89 | 1.05 | 1.54 |
| MPS-VAS | **1.02** | **1.39** | **1.91** |

Our experimental findings suggest that the proposed PSVAS and MPS-VAS frameworks significantly improve ANT compared to the most competitive baselines even when the training and evaluation datasets and target objects differ.
**Reviewer 2ixD**

We thank the reviewer for the insightful comments and suggestions. We are encouraged to see that you find our proposed method promising and novel. We respond to all the comments below.

> **Q1**: Can we train on a dataset (xView) and generalize to the classes on another dataset (DOTA)?

**A1**: Absolutely! Please see the data we provide in response to a similar question by Reviewer 6bym. In a nutshell, we can apply the proposed approach, with no modification, to train on one dataset with a particular target object class and then evaluate on a different dataset with a different target object class. In such cases, our approach also demonstrates superior performance, often by a large margin, compared to the most competitive baselines.

> **Q2**: Can we use the same model for the image classification task on the fMoW dataset?

**A2**: Active search is qualitatively distinct from the image classification task. The goal of image classification is solely to learn to predict well. In active search, in contrast, we aim to learn a search policy that balances exploration (improving our ability to predict where target objects are) and exploitation (actually finding such objects) within a limited budget. The key reason for this balance is that queries are informative about the location of target objects in two ways: 1) geospatial correlations in object locations, and 2) improvement of the quality of the learned predictive model. Traditional classification, in contrast, is a sequence of one-shot **prediction** tasks rather than **search** tasks. Consequently, traditional vision benchmarks do not provide an appropriate evaluation framework for our problem.
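To make this contrast concrete, the schematic below sketches the adaptive search loop (purely illustrative; all names are ours, and the online predictor update reflects the idea of adapting the task-specific prediction module during search):

```python
def visual_active_search(grid_cells, predictor, policy, oracle, budget):
    """Schematic active-search loop: unlike one-shot classification, every
    query outcome feeds back into both the prediction module and the next
    query decision, trading off exploration and exploitation under the budget."""
    explored, history, targets_found = set(), [], 0
    for _ in range(budget):
        probs = [predictor(cell) for cell in grid_cells]  # current beliefs over cells
        i = policy(probs, history, explored)              # choose the next cell to query
        outcome = oracle(i)                               # observe: target present in cell i?
        targets_found += outcome
        explored.add(i)
        history.append((i, outcome))
        predictor.update(i, outcome)                      # adapt the prediction module online
    return targets_found
```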
> **Q3**: It would increase the contribution significantly if it was tested on some traditional vision benchmarks. At least, a discussion on it would be useful.

**A3**: As we mention in our response to the question above, visual active search (VAS) is a qualitatively different problem from traditional vision tasks, and therefore typical vision benchmarks are not an appropriate means of evaluating VAS approaches: we are interested primarily in the quality of visual search (trading off exploration and exploitation) rather than the quality of visual prediction. We agree that this warrants further discussion and clarification, which we will add in the revision.

**Reviewer mjFZ**

Thank you for the comments and suggestions. We are encouraged by your appreciation of the writing and presentation style, of the proposed methodology as intuitive and technically sound, and of the motivation of TTA and MQA as logically sound and well justified. We respond to all the comments below.

> **Q1**: The authors should conduct more ablation studies to provide readers with a deeper understanding of their method. For instance, an ablation study on hyper-parameters and different modules of the proposed method would be valuable.

**A1**: We thank the reviewer for the comment. Indeed, we analyze the importance of the task-specific prediction module in Sections D.1 and D.2 of the Supplementary Material by freezing the prediction module parameters at inference time. Here, we additionally analyze the efficacy of the task-specific prediction module by setting **$\lambda = 0$** while training the policy. We call the resulting policy **USVAS** (Un-Supervised VAS). We observe a significant drop in performance across all settings, demonstrating the importance of the supervised prediction module for learning an effective search policy. Specifically, in the following tables we present the results when the policy is trained with small car on xView as target, while its performance is evaluated for the following target classes: Small Car (SC), Helicopter, Sail Boat (SB), Construction Site (CS), Building, and Helipad. We evaluate the policy with varying search budgets C ∈ {25, 50, 75} and N = 49 equal-sized grid cells. In the following table, we report the test results with **small car** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| **USVAS** | 4.77 | 7.46 | 9.61 |
| PSVAS | 5.51 | 8.33 | 10.52 |
| MPS-VAS | **5.55** | **8.40** | **10.69** |

In the following table, we report the test results with **Helicopter** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| **USVAS** | 0.53 | 0.84 | 1.19 |
| PSVAS | 0.87 | 1.08 | 1.28 |
| MPS-VAS | **0.92** | **1.13** | **1.38** |

In the following table, we report the test results with **Sail Boat** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| **USVAS** | 0.64 | 1.08 | 1.27 |
| PSVAS | 0.93 | 1.23 | 1.66 |
| MPS-VAS | **1.07** | **1.67** | **2.10** |

In the following table, we report the test results with **Construction Site** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| **USVAS** | 1.44 | 2.27 | 2.99 |
| PSVAS | 1.62 | 2.49 | 3.14 |
| MPS-VAS | **1.74** | **2.64** | **3.47** |

In the following table, we report the test results with **Building** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| **USVAS** | 5.86 | 9.37 | 12.05 |
| PSVAS | 6.81 | 10.53 | 13.44 |
| MPS-VAS | **6.83** | **10.59** | **13.64** |

In the following table, we report the test results with **Helipad** as target.

| Method | C = 25 | C = 50 | C = 75 |
|---|:---:|:---:|:---:|
| **USVAS** | 0.80 | 1.16 | 1.42 |
| PSVAS | 0.91 | 1.22 | 1.47 |
| MPS-VAS | **0.96** | **1.30** | **1.63** |

> **Q2**: I wonder if integrating a more fine-grained supervised loss could enhance the overall performance. For example, would adding a detection or segmentation loss be beneficial?

**A2**: This is an intriguing question. Our main objective here is to **find as many target grids** as possible within a pre-specified budget. Consequently, we concentrate on devising an efficient search policy capable of identifying grids containing one or more targets. Doing so requires knowledge about the likely locations of target grids, which is why we employ a task-specific prediction module trained with the BCE loss. However, for the related problem of Visual Active Target Object Detection, which aims to **precisely identify as many target objects** (along with their exact locations and shapes) as possible within the search budget, a more fine-grained loss such as a detection or segmentation loss would indeed be essential. While Visual Active Target Object Detection is beyond the scope of our current work, it is a terrific problem for follow-up research!

> **Q3**: The paragraph from lines 128 to 140 appears to lack logical consistency. I recommend that the authors restructure this section for improved clarity.

**A3**: Thank you for the suggestion. We will restructure the referenced section in a revised draft.