User: Explain the project description with the following text in Chinese:

# Sign Language Datasets

This repository includes TFDS data loaders for sign language datasets.

## Installation

#### From Source

```bash
pip install git+https://github.com/sign-language-processing/datasets.git
```

#### PyPi

```bash
pip install sign-language-datasets
```

## Usage

We demonstrate a loading script for every dataset in [examples/load.ipynb](examples/load.ipynb)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sign-language-processing/datasets/blob/master/examples/load.ipynb)

Our config includes the option to choose the resolution and fps, for example:

```python
import tensorflow_datasets as tfds
import sign_language_datasets.datasets
from sign_language_datasets.datasets.config import SignDatasetConfig

# Loading a dataset with default configuration
aslg_pc12 = tfds.load("aslg_pc12")

# Loading a dataset with custom configuration
config = SignDatasetConfig(name="videos_and_poses256x256:12",
                           version="3.0.0",          # Specific version
                           include_video=True,       # Download and load dataset videos
                           process_video=True,       # Process videos to tensors, or only save path to video
                           fps=12,                   # Load videos at a constant 12 fps
                           resolution=(256, 256),    # Convert videos to a constant resolution, 256x256
                           include_pose="holistic")  # Download and load Holistic pose estimation
rwth_phoenix2014_t = tfds.load(name='rwth_phoenix2014_t', builder_kwargs=dict(config=config))
```

## Datasets

| Dataset            | Videos                                                       | Poses                                                  | Versions |
|--------------------|--------------------------------------------------------------|--------------------------------------------------------|----------|
| aslg_pc12          | N/A                                                          | N/A                                                    | 0.0.1    |
| asl-lex            | No                                                           |                                                        | 2.0.0    |
| rwth_phoenix2014_t | Yes                                                          | Holistic                                               | 3.0.0    |
| autsl              | Yes                                                          | OpenPose, Holistic                                     | 1.0.0    |
| dgs_corpus         | Yes                                                          | OpenPose, Holistic                                     | 3.0.0    |
| dgs_types          | Yes                                                          |                                                        | 3.0.0    |
| how2sign           | Yes                                                          | OpenPose                                               | 1.0.0    |
| sign2mint          | Yes                                                          |                                                        | 1.0.0    |
| signtyp            | Links                                                        |                                                        | 1.0.0    |
| swojs_glossario    | Yes                                                          |                                                        | 1.0.0    |
| SignBank           | N/A                                                          |                                                        | 1.0.0    |
| wlasl              | [Failed](https://github.com/tensorflow/datasets/issues/2960) | [OpenPose](https://github.com/gulvarol/bsl1k/issues/4) | None     |
| wmtslt             | Yes                                                          | OpenPose, Holistic                                     | 1.2.0    |
| signsuisse         | Yes                                                          |                                                        | 1.0.0    |
| msasl              |                                                              |                                                        | None     |
| Video-Based CSL    |                                                              |                                                        | None     |
| RVL-SLLL ASL       |                                                              |                                                        | None     |
| ngt_corpus         | Yes                                                          |                                                        | 3.0.0    |
| bsl_corpus         | No                                                           | No                                                     | 3.0.0    |

## Data Interface

Wherever possible, we follow this interface to make it easy to swap datasets:

```python
{
    "id": tfds.features.Text(),
    "signer": tfds.features.Text() | tf.int32,
    "video": tfds.features.Video(shape=(None, HEIGHT, WIDTH, 3)),
    "depth_video": tfds.features.Video(shape=(None, HEIGHT, WIDTH, 1)),
    "fps": tf.int32,
    "pose": {
        "data": tfds.features.Tensor(shape=(None, 1, POINTS, CHANNELS), dtype=tf.float32),
        "conf": tfds.features.Tensor(shape=(None, 1, POINTS), dtype=tf.float32)
    },
    "gloss": tfds.features.Text(),
    "text": tfds.features.Text()
}
```

## Adding a new dataset

For general instructions, see the [TFDS guide to writing custom datasets](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md). The instructions below are specific to this repository.

Make a new folder inside `sign_language_datasets/datasets` with the same name as the dataset. As a convention, the name of the dataset should be lowercase, with words separated by underscores. Example:

```sh
cd sign_language_datasets/datasets
tfds new new_dataset
```

For our purposes, creating a custom TFDS dataset means writing a new class which inherits from `tfds.core.GeneratorBasedBuilder`. If you use `tfds new` to create a new dataset, the dataset class is stored in a file with the exact same name as the dataset, i.e. `new_dataset.py`.
`new_dataset.py` must contain a line similar to:

```python
class NewDataset(tfds.core.GeneratorBasedBuilder):
```

### Registering a new dataset

The mechanism to add a custom dataset to TFDS' dataset registry is to import the class `NewDataset`. For this reason the folder `sign_language_datasets/datasets/new_dataset` must have an `__init__.py` file that imports the class `NewDataset`:

```python
from .new_dataset import NewDataset
```

Even though the name of the class is `NewDataset`, it will be available for loading in lowercase, and uppercase characters are interpreted as the start of a new word that should be separated with an underscore. This means that the dataset can be loaded as follows:

```python
ds = tfds.load('new_dataset')
```

### Generating checksums

The folder for the new dataset should contain a file `checksums.tsv` with checksums for every file in the dataset. This allows the TFDS download manager to check the integrity of the data it downloads. Use the `tfds build` tool to generate the checksum file:

```sh
tfds build --register_checksums new_dataset.py
```

Use the `--config` argument to select a dataset configuration which includes all files (e.g. one that does include the video files, if any). The default behaviour is to build all configurations, which might be redundant.

## Why not Huggingface Datasets?

Huggingface datasets do not work well with videos: they lack native support for the video type as well as for arbitrary tensors. Furthermore, they currently have memory leaks that prevent saving even the smallest of video datasets.
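For intuition about what the checksum file described above records: each entry amounts to a file's byte size plus its SHA-256 digest. A minimal sketch in plain Python — note that the exact column layout of TFDS' `checksums.tsv` may differ, and `checksum_entry` is a hypothetical helper for illustration, not part of this repository:

```python
import hashlib


def checksum_entry(path, url, chunk_size=8192):
    """Return an (url, size, sha256) tuple, roughly what one row of a
    checksums.tsv file records. Reads the file in chunks so that large
    video files do not have to fit in memory."""
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
            size += len(chunk)
    return url, size, digest.hexdigest()
```

In practice `tfds build --register_checksums` does this bookkeeping for you; the sketch only shows what the download manager later verifies.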
## Cite

```bibtex
@misc{moryossef2021datasets,
    title={Sign Language Datasets},
    author={Moryossef, Amit and M\"{u}ller, Mathias},
    howpublished={\url{https://github.com/sign-language-processing/datasets}},
    year={2021}
}
```

Model:

## 手语数据集项目说明 (Sign Language Datasets Project Description)

这个代码库包含了用于手语数据集的 TFDS 数据加载器。

### 安装 (Installation)

#### 从源代码安装 (From Source)

```bash
pip install git+https://github.com/sign-language-processing/datasets.git
```

#### 从 PyPi 安装 (PyPi)

```bash
pip install sign-language-datasets
```

### 使用方法 (Usage)

我们为每个数据集在 [examples/load.ipynb](examples/load.ipynb) 中演示了加载脚本。
[![在 Colab 中打开](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/sign-language-processing/datasets/blob/master/examples/load.ipynb)

我们的配置包括选择分辨率和帧率的选项,例如:

```python
import tensorflow_datasets as tfds
import sign_language_datasets.datasets
from sign_language_datasets.datasets.config import SignDatasetConfig

# 使用默认配置加载数据集
aslg_pc12 = tfds.load("aslg_pc12")

# 使用自定义配置加载数据集
config = SignDatasetConfig(name="videos_and_poses256x256:12",
                           version="3.0.0",          # 指定版本
                           include_video=True,       # 下载并加载数据集视频
                           process_video=True,       # 将视频处理为张量,或仅保存视频路径
                           fps=12,                   # 以恒定的 12 fps 加载视频
                           resolution=(256, 256),    # 将视频转换为恒定的分辨率,256x256
                           include_pose="holistic")  # 下载并加载 Holistic 姿势估计
rwth_phoenix2014_t = tfds.load(name='rwth_phoenix2014_t', builder_kwargs=dict(config=config))
```

### 数据集 (Datasets)

| 数据集             | 视频                                                         | 姿势                                                   | 版本  |
|--------------------|--------------------------------------------------------------|--------------------------------------------------------|-------|
| aslg_pc12          | N/A                                                          | N/A                                                    | 0.0.1 |
| asl-lex            | 否                                                           |                                                        | 2.0.0 |
| rwth_phoenix2014_t | 是                                                           | Holistic                                               | 3.0.0 |
| autsl              | 是                                                           | OpenPose, Holistic                                     | 1.0.0 |
| dgs_corpus         | 是                                                           | OpenPose, Holistic                                     | 3.0.0 |
| dgs_types          | 是                                                           |                                                        | 3.0.0 |
| how2sign           | 是                                                           | OpenPose                                               | 1.0.0 |
| sign2mint          | 是                                                           |                                                        | 1.0.0 |
| signtyp            | 链接                                                         |                                                        | 1.0.0 |
| swojs_glossario    | 是                                                           |                                                        | 1.0.0 |
| SignBank           | N/A                                                          |                                                        | 1.0.0 |
| wlasl              | [失败](https://github.com/tensorflow/datasets/issues/2960)   | [OpenPose](https://github.com/gulvarol/bsl1k/issues/4) | None  |
| wmtslt             | 是                                                           | OpenPose, Holistic                                     | 1.2.0 |
| signsuisse         | 是                                                           |                                                        | 1.0.0 |
| msasl              |                                                              |                                                        | None  |
| Video-Based CSL    |                                                              |                                                        | None  |
| RVL-SLLL ASL       |                                                              |                                                        | None  |
| ngt_corpus         | 是                                                           |                                                        | 3.0.0 |
| bsl_corpus         | 否                                                           | 否                                                     | 3.0.0 |

### 数据接口 (Data Interface)

我们尽可能遵循以下接口,以便轻松交换数据集:

```python
{
    "id": tfds.features.Text(),
    "signer": tfds.features.Text() | tf.int32,
    "video": tfds.features.Video(shape=(None, HEIGHT, WIDTH, 3)),
    "depth_video": tfds.features.Video(shape=(None, HEIGHT, WIDTH, 1)),
    "fps": tf.int32,
    "pose": {
        "data": tfds.features.Tensor(shape=(None, 1, POINTS, CHANNELS), dtype=tf.float32),
        "conf": tfds.features.Tensor(shape=(None, 1, POINTS), dtype=tf.float32)
    },
    "gloss": tfds.features.Text(),
    "text": tfds.features.Text()
}
```

### 添加新数据集 (Adding a new dataset)

有关一般说明,请参阅 [TFDS 编写自定义数据集指南](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md)。以下说明特定于此代码库。

在 `sign_language_datasets/datasets` 中创建一个与数据集同名的新文件夹。按照惯例,数据集的名称应为小写,单词之间应使用下划线分隔。例如:

```sh
cd sign_language_datasets/datasets
tfds new new_dataset
```

对于我们的目的,创建自定义 TFDS 数据集意味着编写一个继承自 `tfds.core.GeneratorBasedBuilder` 的新类。如果您使用 `tfds new` 创建新数据集,则数据集类存储在一个与数据集完全同名的文件中,即 `new_dataset.py`。`new_dataset.py` 必须包含类似于以下内容的行:

```python
class NewDataset(tfds.core.GeneratorBasedBuilder):
```

#### 注册新数据集 (Registering a new dataset)

将自定义数据集添加到 TFDS 的数据集注册表中的机制是导入类 `NewDataset`。因此,文件夹 `sign_language_datasets/datasets/new_dataset` 必须有一个 `__init__.py` 文件,该文件导入类 `NewDataset`:

```python
from .new_dataset import NewDataset
```

即使类的名称为 `NewDataset`,它也以小写形式供加载使用:大写字母被解释为新单词的开头,并以下划线分隔。这意味着可以按如下方式加载该数据集:

```python
ds = tfds.load('new_dataset')
```

#### 生成校验和 (Generating checksums)

新数据集的文件夹应包含一个 `checksums.tsv` 文件,其中包含数据集中每个文件的校验和。这允许 TFDS 下载管理器检查其下载数据的完整性。使用 `tfds build` 工具生成校验和文件:

```sh
tfds build --register_checksums new_dataset.py
```

使用 `--config` 参数选择一个包含所有文件的数据集配置(例如,如果有视频文件,则应包含视频文件)。默认行为是构建所有配置,这可能是多余的。

### 为什么不使用 Huggingface Datasets? (Why not Huggingface Datasets?)

Huggingface datasets 不能很好地处理视频:它们既缺乏对视频类型的原生支持,也缺乏对任意张量的支持。此外,它们目前存在内存泄漏,导致即使是最小的视频数据集也无法保存。

### 引用 (Cite)

```bibtex
@misc{moryossef2021datasets,
    title={Sign Language Datasets},
    author={Moryossef, Amit and M\"{u}ller, Mathias},
    howpublished={\url{https://github.com/sign-language-processing/datasets}},
    year={2021}
}
```
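补充说明:上文"注册新数据集"一节描述的类名到数据集名的转换(大写字母被视为新单词的开头,以下划线分隔),可以用纯 Python 粗略示意。注意:这只是一个简化近似,并非 TFDS 的实际实现,TFDS 内部可能处理更多边界情况:

```python
import re


def camelcase_to_snakecase(name):
    # 在紧跟于小写字母或数字之后的大写字母前插入下划线,然后整体转为小写。
    # 例如 "NewDataset" 先变成 "New_Dataset",再变成 "new_dataset"。
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()
```

按此规则,类 `NewDataset` 对应的加载名称即为 `new_dataset`,与 `tfds.load('new_dataset')` 中使用的名称一致。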