# General response to all reviewers

Thank you for the feedback. Please refer to the "General response to all reviewers" section for overall feedback to all reviewers.

# eLTe 8 / 4

Thank you for the feedback. Given that you had several questions, our response will be split into several OpenReview comments due to the 5000-character limit per comment. Please refer to the "General response to all reviewers" section for overall feedback to all reviewers.

1. **In Tab. 1, some classic VideoQA benchmarks are missed, e.g., MSVD-QA[1], MSRVTT-QA[1], ActivityNet-QA[2], and SUTD-TrafficQA[3].**

Thank you for the correction. We have revisited the VideoQA field and carefully added the important benchmarks to Table 1, including but not limited to MSVD-QA[1], MSRVTT-QA[1], ActivityNet-QA[2], and SUTD-TrafficQA[3].

2. **Table 2 is confusing. The model performance on Timestamp localization (i.e., H1, C1, and M1) did not provide in the paper and without explanation. The authors are expected to give more discussions.**

Our explanation of the missing IoU scores for Timestamp localization appears in the first point of our Results section, but the character limit prevented a thorough discussion; examples showing that every model fails entirely on this task are given in Appendix C.4, Figure 17. A more detailed explanation follows:

- In general, today's VLMs process a video by reading the image content of a subset of its frames. By construction, current VLMs are therefore unable to answer Timestamp-localization questions, so we cannot assign a score.
- Caption-based models focus on the captioning task, which does not require temporal information, so their training data contains little temporal information (timestamp localization, before/after object relations, reasoning about outcomes, etc.). When asked our questions, which focus on temporal information, they simply reply with captioning content, and we cannot assign a score.
- For instruction-based models, taking Otter as an example, the input is in fact the image content of selected frames, with no temporal information. Their answers therefore stay on a single frame, and again we cannot assign a score.

We deliberately designed the Timestamp localization task because we consider it an important capability that today's VLMs lack. For a video, "from which second to which second is it funny" is a common and meaningful question.

- While pursuing accuracy on factual questions (what, how, when), VLMs should also focus on the ability to select a time span. We expect a VLM to behave as follows:
  - FunQA: "In the following 10-second video, which part makes you think the video is funny?"
  - VLM: "Watching seconds 3-6 of this video made me laugh out loud; I think this part is the core of what makes it funny."
- When we want to know why a video is funny, we can answer the follow-up questions only after we know which segment makes it funny (i.e., the Timestamp localization task). We therefore consider Timestamp localization one of the key problems that current VLMs handle poorly and that urgently needs new methods (a sketch of the intended IoU scoring follows below).
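To make the intended scoring concrete: had a model produced a usable time span, the Timestamp localization tasks (H1/C1/M1) would be scored with a standard temporal intersection-over-union between the predicted and annotated spans. The sketch below is a minimal illustration under that assumption; the function name and the `(start_sec, end_sec)` interval format are ours, not code from the paper.

```python
def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    """Intersection-over-union between two time spans given as (start_sec, end_sec)."""
    p_start, p_end = pred
    g_start, g_end = gt
    inter = max(0.0, min(p_end, g_end) - max(p_start, g_start))
    union = (p_end - p_start) + (g_end - g_start) - inter
    return inter / union if union > 0 else 0.0

# Example: a model answering "seconds 3-6 are the funny part" against a 4-7s ground truth.
print(temporal_iou((3.0, 6.0), (4.0, 7.0)))  # 0.5 -> 2s of overlap over a 4s union
```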
3. **The strategy of translating the Chinese annotations into English by GPT-3.5 is limited since the same sentence may have different semantics in different scenarios.**

Indeed, the same sentence can carry different semantics and sentiment in different video contexts, so during translation we tried hard to avoid direct word-for-word translation. Our solution to the concern that the same sentence may yield different sentiment is as follows:

- In the first stage we used GPT-3.5 rather than a translation service such as Google Translate, and then asked GPT-3.5 to generate five paraphrases. We chose GPT-3.5 because we want the model to use its own commonsense to understand the sentence and preserve more contextual information after translation; this stage does not involve video information. We generate five paraphrases partly because a single translation often shifts the sentiment (translation distortion), so generating more paraphrases raises the probability that a correct translation appears, while preserving the linguistic diversity of the annotations. A sketch of this automatic stage is shown after this response.
- The second stage is manual filtering. Annotators who are fluent in both Chinese and English and who took part in the video annotation first watch the video and the Chinese annotation; once they correctly understand it, they filter and correct the English annotations produced by GPT-3.5 (the one translated sentence and the five paraphrases). The filtering criteria are:
  - faithfulness to the original video and the meaning of the Chinese annotation;
  - no factual or logical errors;
  - no offensive or sensitive content.

With this strategy, we can guarantee that after a sentence is translated from Chinese to English, its semantics and sentiment in the context of the original video do not change, even though linguistic diversity is expanded. Concrete examples of how semantic and sentiment differences are resolved during translation are given in Appendix A.2, Figure 8.
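For illustration, here is a minimal sketch of the first (automatic) stage of the pipeline described above, assuming the `openai` Python client. The prompt wording, model name, and helper function are illustrative assumptions rather than the exact prompts used for FunQA, and the human filtering stage is not shown.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate_and_paraphrase(zh_annotation: str, n_paraphrases: int = 5) -> list[str]:
    """Translate a Chinese annotation into English, then generate paraphrases of it.

    Returns the direct translation followed by n_paraphrases alternative phrasings;
    all candidates are later filtered and corrected by bilingual annotators.
    """
    prompt = (
        "Translate the following Chinese video annotation into natural English, "
        f"then write {n_paraphrases} paraphrases of that translation that keep its "
        "meaning and sentiment. Return one sentence per line, translation first.\n\n"
        f"{zh_annotation}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    lines = [l.strip() for l in resp.choices[0].message.content.splitlines() if l.strip()]
    return lines[: n_paraphrases + 1]
```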
# jbLY 5 / 3

1. **In #Line 201-206, the author said they used GPT3.5 to expand the dataset to 312K. Can the author list the dataset size before using GPT3.5 to expand. Also, are there some evidence/quantitative measurement to support the GPT3.5 can produce the faithful answers to the original ideas?**

Thank you for the reminder; the dataset statistics before expansion are supplemented as follows:

Translating the original data with GPT-3.5 and then expanding it is a novel approach proposed in this paper that has rarely been explored before. We used manual consensus evaluation as a quantitative measure and found that GPT-3.5 produces unfaithful answers only with very low probability: as shown in Figure 2(i), only 0.9% of the answers are incorrect.

2. **I am unsure if the poor results presented by those video-models are resulted by domain-shift. Can the author finetune 1 or 2 baselines on the subset of the proposed dataset and tested on the remain data to check how performance changes? The comparisons between un-finetuned and finetuned models should be insightful.**

Domain shift is a plausible explanation only for the poor results of small models in narrow sub-domains; today's VLMs are pre-trained as large models precisely to acquire broad capabilities. Part of the FunQA videos (HumorQA, MagicQA) are everyday-life videos containing only common objects, and VLMs have already been pre-trained on large amounts of such videos. In the example answers in Appendix C.4, Figure 16 we also see that the models can correctly name some common objects in the videos, yet they have serious problems with logical reasoning and add incorrect, video-unrelated objects to their answers. These poor results stem mostly from weak reasoning and free-text generation ability, not from domain shift. As for CreativeQA, performance-style videos rarely appear in VLM pre-training data, but the behaviour in these videos still imitates things from real life, which we believe a VLM should be able to handle.

We built FunQA and ran the experiments to show that current VLMs (mainly instruction-based and LLM-based) lack the ability to understand videos and to output free-text answers, and that finetuning on FunQA lets a model learn this ability. We would like to stress that FunQA is not about the narrow domain of "fun"; it aims to improve the model's understanding ability so that it can solve novel problems on top of its existing large-scale pre-training.

Table 2 already contains the finetuning experiment. At the time of our experiments the Otter model had not reached SOTA results; Otter (D.C.) is the result before finetuning, and Otter (FunQA) is Otter (D.C.) finetuned on FunQA. We have been continually finetuning the latest officially released model; the latest comparison results are as follows. As can be seen, after finetuning the latest officially released Otter model, every score improves markedly.

3. **(LIMITATION) The author discussed some limitations in Section 5. Annotate with Chinese and translate them into English is suboptimal. Also, I am unsure if deep learning models can hack the proposed dataset and tasks, i.e., if the model can solve the problem rely on strong image-based text generation ability, while do not use temporal reasoning. The proposed QA dataset, Humor, Creative, and Magic, seem need high-level semantics/reasoning, but there lack of evidence to prove this point. Especially, Table 2 shows the image-based methods achieved highest performance for some metrics.**

Regarding **translation:**

Indeed, the same sentence can carry different semantics and sentiment in different video contexts, and Chinese and English themselves pose translation difficulties caused by cultural differences, so during translation we tried hard to avoid direct word-for-word translation. Our solution is as follows:

- In the first stage we used GPT-3.5 rather than a translation service such as Google Translate, and then asked GPT-3.5 to generate five paraphrases. We chose GPT-3.5 because we want the model to use its own commonsense to understand the sentence and preserve more contextual information after translation; this stage does not involve video information. We generate five paraphrases partly because a single translation often shifts the sentiment (translation distortion), so generating more paraphrases raises the probability that a correct translation appears, while preserving the linguistic diversity of the annotations.
- The second stage is manual filtering. Annotators who are fluent in both Chinese and English and who took part in the video annotation first watch the video and the Chinese annotation; once they correctly understand it, they filter and correct the English annotations produced by GPT-3.5 (the one translated sentence and the five paraphrases). The filtering criteria are:
  - faithfulness to the original video and the meaning of the Chinese annotation;
  - no factual or logical errors;
  - no offensive or sensitive content.

With this strategy, we can guarantee that after a sentence is translated from Chinese to English, its semantics and sentiment in the context of the original video do not change, even though linguistic diversity is expanded. Concrete examples are given in Appendix A.2, Figure 8.

Regarding **whether the model can solve the problem relying on strong image-based text generation ability without using temporal reasoning:**

First, we would like to clarify that Table 2 does not directly show that "image-based methods achieved highest performance for some metrics"; Table 2 only compares instruction-based and LLM-based models.

One of the principles behind FunQA's design is that every task requires reasoning to answer, so even a model with very strong image-based ability cannot answer FunQA questions well. A few simple examples: (videos 1 and 2)

Before proving that answering FunQA's questions requires temporal reasoning, we need to explain that both caption-based and instruction-based models are image-based: they usually process a video by reading 8 / 16 / 128 of its frames, while some models learn temporal information from scratch. Judging from their answers, none of these models make good use of temporal information.

A direct proof that understanding FunQA, i.e. understanding "fun", requires temporal reasoning would call for sophisticated psychological knowledge and everyday commonsense; as mentioned in the paper, understanding the counter-intuitiveness in all three subsets requires temporal information, ….

Most FunQA questions (Task 1, Task 2, Task 3) are built on temporal reasoning: to answer them for a video, one first has to localise the funny segment through temporal reasoning, and completing Task 1 (Timestamp localization) is the prerequisite for Tasks 2 and 3. The experiments show that neither caption-based nor instruction-based models have the ability required for Task 1.

Overall, no result shows that image-based methods perform better. Table 2 only shows that instruction-based models answer some Task 2 questions better; the reason is that caption-based answers are usually very short, whereas instruction-based answers contain much incorrect content and therefore lose more points. The score differences are thus mostly due to answer length, not because these questions can be answered without temporal reasoning.

# jKvF 5 / 3

1. **The task setting seems overly complex and I am concerned whether a single model can address all of these tasks. It is necessary to analyze the performance of human on this benchmark, to understand the significance of the baseline results.**

When building the FunQA dataset, we wanted it to be pioneering work in video question answering. We chose funny videos and challenging free-text answers because existing datasets are very similar to one another and not challenging enough for models; FunQA demands stronger video understanding and generation ability, which existing standalone models should attempt to tackle, and our experiments show that models already have some ability to answer. Regarding human performance, we designed an experiment that asks participants to understand the counter-intuitive elements in the videos and found that they locate the relevant time points well and give deep explanations; human ability currently far exceeds the models', so FunQA is of clear practical significance. As a simple example, once a model can understand the interesting content of a long video, it can be used to extract the truly interesting clips and generate an engaging title, which is well suited to TikTok's trending-video recommendation and automatic title generation. Going further, only when models understand funny videos will it become possible to automatically generate them. Meanwhile, we are hosting a competition based on the FunQA dataset with a 1,000,000 RMB prize, [看视频说话](https://iacc.pazhoulab-huangpu.com/contestdetail?id=64af50154a0ed647faca623a&award=1,000,000), in the hope of steering a new research direction for the community.

2. **It would be better to provide multiple reference answers for each question. This could alleviate the issues of annotation subjectiveness and language diversity. Moreover, the evaluation can be focused more on understanding of 'fun' instead of language generation.**

Regarding your concern about the annotations: during FunQA annotation we used GPT-3.5 to generate multiple answers and carried out a second round of annotation, doing our best to improve objectivity and language diversity. The specific steps are:

- First, we used GPT-3.5 to generate five paraphrases of the original answer, which greatly increases language diversity. For example:
  - Original annotation:
    - A child and two dogs are lying on the beach. The child kicks one of the dogs with their foot, and the dog gets up and retaliates by digging sand back at the child.
  - The five sentences expanded with GPT-3.5:
    - Expansion_with_Eng_1: A young child and two canines are reclining on the sandy beach. The child delivers a kick to one of the dogs, prompting the dog to rise and retaliate by digging sand back at the child.
    - Expansion_with_Eng_2: On the sandy shore, there is a small child accompanied by two dogs. The child uses their foot to kick one of the dogs, and in response, the dog stands up and counterattacks by digging sand towards the child.
    - Expansion_with_Eng_3: Resting on the beach are a young child and a pair of dogs. The child decides to give a kick to one of the dogs, resulting in the dog getting up and retaliating by digging sand back at the child.
    - Expansion_with_Eng_4: A scene unfolds on the shoreline where a child and two dogs are sprawled out. The child employs a foot to kick one of the dogs, provoking the dog to rise and retaliate by excavating sand towards the child.
    - Expansion_with_Eng_5: At the beach, there lies a small child alongside two dogs. Using their foot, the child kicks one of the dogs, and in response, the dog stands up and counterattacks by scooping sand and launching it back at the child.
- Second, in a second annotation round, different annotators watch the video again and give their own independent answers, which we compare with the earlier annotations in a consensus evaluation. The results show that, on average, only 0.5% of people disagree about whether a FunQA video is "funny", which indicates that the dataset carries very little subjectivity.

Regarding your concern about the evaluation: the first four metrics (BLEU, ROUGE-L, CIDEr, and BLEURT) indeed do not focus on the understanding of "fun"; for this reason we designed a GPT-4-based evaluation. Our carefully designed prompt (see Appendix C) ensures that the evaluation focuses on whether the model truly understands the humor rather than on text generation alone. Concretely, GPT-4 scores a pair of texts along five aspects: language expression (fluency), logic, commonsense, detail understanding, and whether the understanding of the funny point is consistent, weighted 5%, 10%, 10%, 35%, and 40% respectively. In this way we push the evaluation metric toward "the understanding of the humor" (a small sketch of the weighting follows below).
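To make the weighting concrete, here is a minimal sketch of how the five aspect scores would be combined into the final GPT-4-based score. The aspect names, the 0-100 scale, and the helper function are illustrative assumptions (the actual prompt is in Appendix C); only the 5/10/10/35/40% weights come from the response above.

```python
# Weights of the five aspects GPT-4 is asked to judge (from the response above).
ASPECT_WEIGHTS = {
    "fluency": 0.05,               # language expression
    "logic": 0.10,
    "commonsense": 0.10,
    "detail_understanding": 0.35,
    "humor_consistency": 0.40,     # is the funny point understood consistently?
}

def combine_aspect_scores(aspect_scores: dict[str, float]) -> float:
    """Weighted sum of per-aspect scores, each on a common scale (e.g. 0-100)."""
    assert set(aspect_scores) == set(ASPECT_WEIGHTS), "one score per aspect"
    return sum(ASPECT_WEIGHTS[a] * s for a, s in aspect_scores.items())

# Example: a fluent answer that misses the humor is dominated by the 40% weight.
print(combine_aspect_scores({
    "fluency": 90, "logic": 70, "commonsense": 80,
    "detail_understanding": 60, "humor_consistency": 20,
}))  # 48.5
```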
3. **I cannot find the IoU results for task H1/C1/M1 in Tab.2, or do I miss something? Also, how the baseline methods achieve temporal localization? There is no related introduction.**

Our explanation of the missing IoU scores for Timestamp localization appears in the first point of our Results section, but the character limit prevented a thorough discussion; examples showing that every model fails entirely on this task are given in Appendix C.4, Figure 17. In short (please also see our response to reviewer eLTe, question 2):

- Today's VLMs process a video by reading the image content of a subset of its frames, so by construction they are unable to answer Timestamp-localization questions, and we cannot assign a score.
- Caption-based models focus on the captioning task, which needs no temporal information, so their training data contains little temporal information; when asked our temporally focused questions they simply reply with captioning content, and we cannot assign a score.
- Instruction-based models such as Otter take the image content of selected frames as input, with no temporal information; their answers stay on a single frame, and again we cannot assign a score.

We designed the Timestamp localization task because we consider it an important capability that today's VLMs lack: "from which second to which second is the video funny" is a common and meaningful question. Alongside accuracy on factual questions (what, how, when), VLMs should focus on selecting a time span, and knowing which segment is funny (Task 1) is the prerequisite for answering why it is funny (Tasks 2 and 3). We therefore consider Timestamp localization one of the key problems that current VLMs handle poorly and that urgently needs new methods.

4. **The dataset emphasizes the understanding of 'fun' from the visual aspect of videos. I am thus curious if audio could ease the challenge of the benchmark, or some specific sub tasks.**

We decided at the data-collection stage that FunQA would be a vision-centric dataset; in the experiments in the paper the videos were fed to the models with their audio kept. We will run an ablation in which the audio is muted during both training and testing; the results are as follows (to be added):

This should show that audio could not ease the challenge of the benchmark. We also provide some FunQA videos below (to be added); we believe that after watching them you will find that the reason they are funny is the same whether or not they are played muted.

5. **I am not sure what should the community work on to solve the defined tasks. To develop more gigantic models?**

No current model answers questions about funny content in a way that truly satisfies us, because none of them can even answer "which part of this video is funny"; we again provide some videos and the models' answers (to be added). We understand the reviewer's concern: as Table 2 shows, even after finetuning, Otter does not reach SOTA on every metric. We have been continually finetuning the latest officially released model, and the experiments show that as the original Otter pre-trained model improves, the results after finetuning on FunQA also improve significantly. The latest comparison results are as follows (to be added):

Meanwhile, we are hosting a competition based on the FunQA dataset with a 1,000,000 RMB prize, [看视频说话](https://iacc.pazhoulab-huangpu.com/contestdetail?id=64af50154a0ed647faca623a&award=1,000,000); we believe that, driven by the community, better results will follow.

# sSpc 6 / 4

1. **The authors motivate their problem by citing that their tasks require commonsense knowledge. While this makes some intuitive sense, it is not backed up by their experiments (e.g., showing that commonsense reasoning improves performance). One could argue that the dataset simply addresses a new set of labels that are different/possibly more challenging than existing datasets.**

Commonsense information matters in VideoQA, but we believe that commonsense-based understanding is the most important ingredient enabling a model to solve these problems, and this is one of the key reasons we designed FunQA.

We would like to illustrate again with the video from the paper's main figure in which ketchup is flung onto a face (video URL). The video shows: We witness a man engrossed in his phone, sharing a meal with friends. Suddenly, one of his companions squeezes a generous amount of ketchup, which, instead of adorning the fries, splatters onto the man's face. For the model, accurately understanding the action of squeezing the ketchup and the scene of ketchup on the man's face already requires plenty of commonsense, but only when the model knows that "ketchup on a face" is counter-intuitive can it identify this as the key to why the video is funny. FunQA therefore does not so much provide commonsense information as provide a large amount of commonsense-based, reasoning-derived logical information (e.g., squeezing ketchup during a fast-food meal is normal behaviour; ketchup landing on someone's face is not, i.e., it is counter-intuitive).

In summary, FunQA represents a necessary (though, as the reviewer notes, not sufficient) step toward commonsense understanding. That is, our experiments show that a machine equipped with commonsense ability can complete these tasks with higher accuracy (via finetuning on the FunQA dataset). This also addresses the reviewer's concern in the Additional Feedback ("it isn't clear what new things we can learn from this dataset"): through FunQA, models gain the ability to use spatio-temporal reasoning and commonsense information to infer counter-intuitiveness and thus understand videos better.
2. **It isn't clear that performance on the proposed task is not directly correlated with improvements on other datasets. As such, it isn't completely clear what new topics can be explored beyond the differences in domains that the authors did demonstrate.**

It's correct that performance on benchmarks is often correlated, e.g., VQA performance often correlates with MSCOCO performance. But broader evaluation coverage is still important to check. Given the new domains we consider, we believe our benchmark at minimum provides additional coverage for evaluation suites. Above all, our dataset is one that most models cannot answer at all; examples (to be supplemented below):

3. **The authors did well describe some of the limitations of their work, but a potentially missing element is the somewhat subjective nature of tasks like humor, where some people may find it funny and others offended.**

What counts as "funny" is a profound question, and some funny content may itself contain offensive elements. We took the following measures to keep controversial content out of our dataset (see Appendix A.1 for details):

- Defining humor in FunQA: at the macro level we define what is funny in FunQA as follows: humor arises from the incongruity [41, 42] between reality and expectations, flourishing with the skillful juxtaposition and transformation of events. We adopt this definition because it matches our examination of the models' spatial (juxtaposition) and temporal (transformation) abilities.
- Annotation of the whole FunQA dataset: during annotation we filtered out videos with offensive elements, and every video-QA pair in FunQA went through a second annotation round and a consensus evaluation to remove subjectivity; the results show that, on average, only 0.5% of people hold a different view on whether a FunQA video is "funny".
- An example: the humor of one video comes from a person's ignorance of physics, which causes the barrel he pushes to hit his own head. At the annotation-filtering stage we would discard the video if the person's head were injured and bleeding; at the second-annotation and consensus-evaluation stage we would discard it if only a minority of people found it funny.

Through this strategy, we minimise the problem that some people may find a video funny while others are offended.
