# Packaging, releases and archiving

[TOC]

## Packaging in Python

Packaging your Python code offers an effective way to distribute and reuse it. By turning your code into a library and hosting it on a platform like PyPI (the Python Package Index), you can significantly broaden your project's reach. Moreover, embracing this approach not only enhances the quality and sustainability of your software, but also invites contributions from external collaborators.

### `pyproject.toml`

The `pyproject.toml` file has become the standard configuration file for packaging tools. This file contains metadata about the project and specifies which build tools should be used. The `pyproject.toml` consists of TOML tables, and can include `[build-system]`, `[project]` and `[tool]` tables.

#### `[build-system]`

The `[build-system]` table is essential, because it defines which [build backend](https://packaging.python.org/en/latest/tutorials/packaging-projects/#choosing-a-build-backend) you will be using, and which dependencies are required to build your project. This is needed because frontend tools like `pip` are not responsible for transforming your source code into a distributable package; that task is handled by one of the build backends (e.g. Hatchling, setuptools).

:::spoiler **Example:**
```toml
[build-system]
requires = ["setuptools>=64.0"]
build-backend = "setuptools.build_meta"
```
:::

#### `[project]`

Under the `[project]` table you can describe your metadata. It can become quite extensive, but this is where you would list the name of your project, its version, authors, licensing, dependencies specific to your project and other requirements, as well as other optional information. For a detailed list of what can be included under `[project]`, check the [Declaring project metadata](https://packaging.python.org/en/latest/specifications/pyproject-toml/#declaring-project-metadata-the-project-table) section of the Python Packaging Guide.
:::spoiler **Example:**
```toml
[project]
name = "exampleproject"
# Define the name of your project here. This is mandatory. Once you publish
# your package for the first time, this name will be locked and associated
# with your project. It affects how users will install your package via pip,
# like so:
#
# $ pip install exampleproject
#
# Your project will be accessible at: https://pypi.org/project/exampleproject/

version = "2.0.0"
# Version numbers should conform to PEP 440, and are also mandatory (but they
# can be set dynamically): https://www.python.org/dev/peps/pep-0440/

description = "Short description of your project"
# Provide a short, one-line description of what your project does. This is
# known as the "Summary" metadata field:
# https://packaging.python.org/specifications/core-metadata/#summary

readme = "README.md"
# Here, you can include a longer description which often mirrors your README
# file. This description will appear on PyPI when your project is published.
# This corresponds to the "Description" metadata field:
# https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#readme

requires-python = ">=3.8"
# Indicate the versions of Python your project is compatible with. Unlike the
# 'Programming Language' classifiers, 'pip install' will verify this field
# and prevent installation if the Python version does not match.

license = {file = "LICENSE.txt"}
# This specifies the license. It can be a text (e.g. license = {text = "MIT License"})
# or a reference to a file with the license text, as shown above.

keywords = ["wind-energy", "simulation"]
# Keywords that describe your project. These assist users in discovering your
# project in PyPI searches. These should be a comma-separated list reflecting
# the nature or domain of the project.

authors = [
  {name = "A. Doe", email = "author@tudelft.nl"}
]
# Information about the original authors of the project and their contact details.

maintainers = [
  {name = "B. Smith", email = "maintainer@tudelft.nl"}
]
# Information about the current maintainers of the project and their contact details.

# Classifiers help categorize the project on PyPI and aid in discoverability.
# For a full list of valid classifiers, see https://pypi.org/classifiers/
classifiers = [
    # Indicate the development status (maturity) of your project. Commonly:
    #   3 - Alpha
    #   4 - Beta
    #   5 - Production/Stable
    #   6 - Mature
    "Development Status :: 4 - Beta",

    # Target audience
    "Intended Audience :: Developers",
    "Topic :: Scientific/Engineering",

    # License type
    "License :: OSI Approved :: MIT License",

    # Python versions your software supports. This is not checked by
    # 'pip install', and is different from "requires-python".
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3 :: Only",
]

# Dependencies needed by your project. These packages will be installed by pip
# when your project is installed. Ensure these are existing, valid packages.
#
# For more on how this field compares to pip's requirements files, see:
# https://packaging.python.org/discussions/install-requires-vs-requirements/
dependencies = [
    "numpy",
    "pandas>=1.5.3",
    "matplotlib>=3.7.1"
]

# You can define additional groups of dependencies here (e.g., development
# dependencies). These can be installed using the "extras" feature of pip,
# like so:
#
# $ pip install exampleproject[dev]
#
# These are often referred to as "extras" and provide optional functionality.
[project.optional-dependencies]
test = ["coverage"]

# List of relevant URLs for your project. These are displayed in the left
# sidebar of your PyPI page. This can include links to the homepage, source
# code, changelog, funding, etc.
[project.urls]
"Homepage" = "https://github.com/awegroup"
"Source" = "https://github.com/awegroup/MegAWES"
# This [project] example was adapted from
# https://github.com/pypa/sampleproject/blob/main/pyproject.toml
```
:::

#### `[tool]`

The `[tool]` table contains subtables specific to each tool. For example, Poetry uses the `[tool.poetry]` table instead of the `[project]` table.

**Example:** [Poetry project setup](https://python-poetry.org/docs/basic-usage/)

:::info
:pencil2: **Difference between `[build-system]` and `[project]`**
The `[build-system]` and `[project]` tables serve distinct roles. The `[build-system]` table is essential and must always be included, as it specifies the build tool used, regardless of the backend. The `[project]` table, on the other hand, is recognized by most build backends for defining project metadata, though some backends do not support it and use a different format.
:::

:::success
:books: **Further reading**
- [Writing your pyproject.toml](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/)
- [pyproject.toml specification](https://packaging.python.org/en/latest/specifications/pyproject-toml/)
- [Configuring setuptools using pyproject.toml files](https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html)
:::

Before shifting to `pyproject.toml`, a common approach was to use a `setup.py` build script. You might still encounter one in legacy projects.

:::success
:books: **Further reading**
- [Is `setup.py` deprecated?](https://packaging.python.org/en/latest/discussions/setup-py-deprecated/#setup-py-deprecated)
- [How to modernize a `setup.py` based project?](https://packaging.python.org/en/latest/guides/modernize-setup-py-project/)
:::

### Package structuring

If you want to distribute your Python code as a package, you will need to have an `__init__.py` file in the root directory of your package. This allows Python to treat that directory as a package that can be imported. Every subfolder should also contain an `__init__.py` file.
When importing a package, Python searches through the directories on `sys.path` looking for the package subdirectory. The presence of `__init__.py` files within these directories is essential, as it tells Python that these directories should be treated as packages. This mechanism helps avoid the scenario where directories with commonplace names accidentally overshadow valid modules that appear later in the search path. While `__init__.py` can simply be an empty file, serving just to mark a directory as a package, it can also contain code that runs when the package is imported. This code can initialize package-level variables, import submodules, and perform other tasks.

Referring to the project organization in our [Software Development Workflow guide](https://hackmd.io/mJxtkV4-RH-_7TFZlgDHkw#Project-Organization), we can build on top of that structure.

:::spoiler :bulb: **Reminder about flat vs `src` layout**
In a flat layout, the project's root directory directly contains the package directories and modules. This layout is straightforward and works well for simple projects.
```
your_project/
│
├── mypkg/
│   ├── __init__.py
│   ├── module.py
│   └── subpkg1/
│       └── __init__.py
│   ...
```
The `src` layout places the package directory inside a top-level `src` directory. This layout helps prevent accidental imports from the current working directory, ensuring that you always import from the installed package rather than the source directory.
```
your_project/
│
├── src/
│   └── mypkg/
│       ├── __init__.py
│       ├── module.py
│       └── subpkg1/
│           └── __init__.py
│   ...
```
:::

So our example package structure would now look like this:

```
your_project/
│
├── docs/                    # documentation directory
├── notebooks/               # Jupyter notebooks or MATLAB Live Editor scripts
├── src/                     # your project's source code, including the main script
│   └── yourpkg_name/        # package
│       ├── __init__.py      # package initializer
│       ├── module.py        # module
│       └── subpkg1/         # sub-package
│           └── __init__.py  # sub-package initializer
├── tests/                   # your test directory
│
├── data/                    # data files used in the project (if applicable)
├── processed_data/          # files from your analysis (if applicable)├── results/                 # results (if applicable)
│
├── .gitignore               # untracked files
├── pyproject.toml           # project configuration and metadata
├── README.md                # overview
└── LICENSE                  # license information
```

You might notice that the `requirements.txt` file is absent from our updated structure. In many cases, if you have a `pyproject.toml` file, you no longer need a `requirements.txt` file, since `pyproject.toml` is part of the standardized Python packaging format (defined in [PEP 518](https://peps.python.org/pep-0518/)) and can declare dependencies. However, some deployment and CI/CD pipelines might still expect a `requirements.txt` file, because a set of fixed dependency versions creates more stable pipelines. For simple projects, you may still prefer a `requirements.txt` for its simplicity and wide adoption.

:::info
It is not considered best practice to use the `pyproject.toml` to pin dependencies to specific versions, or to specify sub-dependencies (i.e. dependencies of your dependencies). This is overly restrictive, and prevents a user from gaining the benefit of dependency upgrades. For more info, see [**this discussion**](https://packaging.python.org/en/latest/discussions/install-requires-vs-requirements/).
:::

We also do not include `lib/` and `build/` directories:

- The `build/` directory is typically used to store compiled or built artifacts of your project, such as binary executables, wheels, or other distribution files. This directory is usually not part of your source code repository and is generated during the (automated) build or packaging process.
- The `lib/` directory stores third-party libraries or dependencies that are not installed through a package manager. By specifying your project's dependencies in the `pyproject.toml` file, and using a package manager like `pip` or `poetry` to install and manage them, these dependencies will be automatically downloaded and installed in the appropriate location (usually the site-packages directory).

:::success
:books: **Further reading:**
- [Packaging projects](https://packaging.python.org/en/latest/tutorials/packaging-projects/)
- [Good integration practices](https://docs.pytest.org/en/7.2.x/explanation/goodpractices.html)
- [Python documentation packages](https://docs.python.org/3/tutorial/modules.html#packages)
:::

### Local package installation

By installing a Python package locally during development you can test your changes in an environment that mimics how the package will be used once it's deployed. This process allows you to ensure that your package works correctly when installed and imported by others. You can use `pip` to install your package in editable mode (`-e`). This way, changes you make to the source code are immediately available without needing to reinstall the package.

```shell
pip install -e .
```

### Testing packaging on TestPyPI before publishing to PyPI

By testing your package on [TestPyPI](https://test.pypi.org) before publishing to PyPI, you can identify and address any issues with your package metadata, dependencies, or distribution files before making your package publicly available. You'll need to create an account for TestPyPI.
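Whether a package was installed in editable mode or from an index, you can sanity-check that Python actually sees the installed distribution. A small sketch using only the standard library (queried here against `pip` itself, since your own package name will differ):

```python
from importlib import metadata

# Query the version of an installed distribution. Substitute your own
# package name (e.g. "exampleproject") after running `pip install -e .`.
version = metadata.version("pip")
print(version)

# List a few installed distributions to inspect the environment contents.
names = sorted({dist.metadata["Name"] for dist in metadata.distributions()
                if dist.metadata["Name"]})
print(names[:5])
```

If the package is missing, `metadata.version(...)` raises `PackageNotFoundError`, which makes this a quick check to run in CI after an install step.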
The next step is to create distribution packages for your package. These packages are archives that can be uploaded to TestPyPI/PyPI and installed using `pip`. Afterwards, you can use [Twine](https://twine.readthedocs.io/en/latest/) to upload your package to TestPyPI.

1. Register on TestPyPI.
2. Make sure the PyPA `build` tool is installed and up to date:
   - `pip install --upgrade build`
3. Run either `python3 -m build` (Linux/macOS) or `python -m build` (Windows) from the same directory where `pyproject.toml` is located. This creates the distribution packages.
   - After running this command, you'll see a substantial amount of text output. Upon completion, it will generate two files (a wheel and a `.tar.gz` file) in the `dist/` directory. The `.tar.gz` file is a source distribution, while the `.whl` file is a built distribution. Recent versions of `pip` prefer installing built distributions, falling back to source distributions if necessary. It's advisable to always upload a source distribution and include built distributions compatible with the platforms your project supports.
4. Install Twine (`pip install twine`).
5. Upload to TestPyPI by specifying the `--repository` flag:
   - `twine upload --repository testpypi dist/*`
6. You will find your package at `https://test.pypi.org/project/yourproject`. You can then `pip install` it by adding the `--index-url` flag:
   - `pip install --index-url https://test.pypi.org/simple/ yourpackage`

:::success
:books: **Further reading**
- [RealPython Packaging guide](https://realpython.com/pypi-publish-python-package/)
- [Twine Read the Docs](https://twine.readthedocs.io/en/latest/)
- [Using TestPyPI](https://packaging.python.org/en/latest/guides/using-testpypi/)
:::

### Publishing to PyPI

Publishing your package to PyPI makes it accessible to anyone in the Python community through a simple `pip install your_package` command. You'll also need an account for PyPI. TestPyPI and PyPI use separate databases, so you need to register on both sites.
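The full build-and-upload sequence, for either index, can be condensed into a few commands. This is a sketch run from the project root; `twine check` is an optional validation step (it verifies the package metadata renders correctly) not listed in the numbered steps above:

```shell
python -m pip install --upgrade build twine
python -m build                            # writes dist/<name>.tar.gz and dist/<name>.whl
twine check dist/*                         # validate metadata before uploading
twine upload --repository testpypi dist/*  # TestPyPI first; drop the flag for PyPI
```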
1. Register on PyPI.
2. Run `pip install --upgrade build`.
3. Then run `python3 -m build` (Linux/macOS) or `py -m build` (Windows) from the same directory where `pyproject.toml` is located.
4. Use `twine upload dist/*` to upload your package to PyPI. Enter the credentials of the account you registered on the official PyPI platform.
5. Your package is live on PyPI.
6. You can now install it by simply running `pip install yourpackage`.

:::success
:bulb: **Tip:** If you need a particular name for your package, check whether it is taken on PyPI and claim it as soon as possible if available.
:::

## Packaging and publishing in MATLAB

MATLAB does not have a centralized repository similar to PyPI. Packaging in MATLAB involves creating an app `.mlappinstall` file or a toolbox `.mltbx` file, which can be shared directly with users or through MATLAB File Exchange. A MATLAB app is a self-contained MATLAB program with a **user interface** that automates a task or calculation. When you package an app, the app packaging tool:

- Performs a dependency analysis that helps you find and add the files your app requires.
- Reminds you to add shared resources and helper files.
- Stores information you provide about your app with the app package. This information includes a description, a list of additional MATLAB products required by your app, and a list of supported platforms.
- Automates app updates (versioning).

Then, when others install your app:

- It is a one-click installation.
- Users do not need to manage the MATLAB search path or other installation details.
- Your app appears alongside MATLAB toolbox apps in the apps gallery.

## Releases

### Change log

Maintaining a change log is essential for tracking modifications, fixes, and enhancements in your software. It provides a clear history of changes for current and future users and helps manage versions effectively.
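A sketch of what such a change log could look like, following the widely used Keep a Changelog layout (the version numbers and entries here are invented purely for illustration):

```markdown
# Changelog

## [Unreleased]

## [1.1.0] - 2024-05-01
### Added
- Optional wake-model selection in the simulation configuration.

### Fixed
- Crash when the input file contained no turbine entries.

## [1.0.1] - 2024-03-12
### Fixed
- Off-by-one error in the time-series resampling.
```

Newest releases go at the top, each with a date and grouped `Added`/`Changed`/`Fixed`/`Removed` sections, and an `Unreleased` section collects changes for the next version.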
- [How do I make a good change log?](https://keepachangelog.com/en/1.1.0/)
- [Example from eScience Center](https://github.com/matchms/matchms/blob/master/CHANGELOG.md)

### Semantic versioning

[Semantic Versioning](https://semver.org) (SemVer) is a versioning scheme that reflects changes in your software systematically. It consists of three numbers: major, minor, and patch (e.g., 1.9.1).

- **Major version** increments are for significant changes that break backward compatibility.
- **Minor version** increments add functionality in a backward-compatible manner.
- **Patch version** increments are for backward-compatible bug fixes only.

Adopting this practice helps both users and developers anticipate the impact of updating the software. It clearly communicates the nature of the changes.

#### Release notification

After a release, it's important to communicate what has changed. Release notes are detailed descriptions of the new changes, fixes, and sometimes known issues. They are usually published alongside the change log in repositories. You could use [GitHub Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository), a feature that allows you to present your software along with the corresponding source code, change log, and release notes.

#### Automatically generated release notes

GitHub provides a useful feature to [automatically generate release notes](https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes) for new versions of your software. It scans the commits between your releases and compiles a summary of the changes, fixes, and enhancements made. This not only saves time but also helps avoid undocumented changes.

How to enable automatic release notes:

1. Go to your repository on GitHub and navigate to the releases section.
2. Draft a new release.
   When you select a tag, GitHub will offer an option to auto-generate the release notes based on the commits since the last release.
3. Customize the release notes. You can edit the auto-generated content to add more details or format it according to your preferences.
4. When publishing, the release notes will be attached to your release.

This feature is particularly helpful for maintaining accurate and up-to-date release documentation.

## Archiving

### Zenodo

Zenodo is a research data management service that supports the archiving of research outputs, including software releases. Zenodo can automatically archive releases from GitHub repositories and assign a DOI, making each version citable. To use Zenodo with GitHub:

1. Link your GitHub account to Zenodo to allow access to repository information.
2. Enable the repository you want to archive on the Zenodo dashboard.
3. Create a new release on GitHub. Zenodo will automatically archive this release and issue a DOI.
4. You can then share the DOI link provided by Zenodo in your project's README, documentation, or paper, so that a specific version of your software can be referenced.

:::success
:bulb: **Tips**
- Zenodo can only access public repositories.
- If you need to archive a repository from an organization, the owner of the organization might have to authorize the Zenodo application to access it.
- You can also try out the [Zenodo Sandbox](https://sandbox.zenodo.org) before archiving your projects to Zenodo. The sandbox mimics the main Zenodo platform and is designed for testing Zenodo's functionality without accidentally making changes to real data on the main site. Since it is an exact mirror of Zenodo, it provides the same interface, tools, and user experience.
- **Further reading:** [Referencing and citing content](https://docs.github.com/en/repositories/archiving-a-github-repository/referencing-and-citing-content)
:::

### 4TU.ResearchData

4TU.ResearchData is another platform that offers reliable archiving of research data and software, with at least 15 years of archival storage. To get started:

1. Log in to your 4TU.ResearchData account (using institutional access).
2. From the dashboard, navigate to uploading a new project.
3. Choose open access, or alternatively embargoed or restricted access.
4. Upload your relevant files. 4TU.ResearchData supports Git for version control: you can simply drag and drop datasets, or, to deposit software, push your Git repository to the 4TU remote.
   Add the remote:
   - `git remote add 4tu [link automatically generated by 4tu]`
   Then, push your repository:
   - `git push 4tu --all`
   - `git push 4tu --tags`
5. You will have a DOI number reserved.
6. Maintain and update your archive as necessary to reflect any significant changes or additions to the software.

:::info
:information_source: **Information**
- When you are ready to publish your software on 4TU.ResearchData, make sure to choose the "Software deposit" option in the Files section at the bottom of the upload form. This option allows you to upload your software files directly from your Git repository. If you have additional files, you can also manually drag your software files from your local drive into the upload box. However, the archiving process is more manual compared to Zenodo, requiring additional steps to upload and manage software releases.
- 4TU.ResearchData also has a [sandbox](https://next.data.4tu.nl) available.
:::

While both Zenodo and 4TU.ResearchData offer similar archiving capabilities, 4TU.ResearchData is tailored to the needs of data-heavy disciplines, offering secure storage, long-term data preservation, and specific metadata standards that enhance data discoverability and usability. Zenodo might be more suited for easy integration with GitHub and broad accessibility. More information on 4TU.ResearchData can be found [here](https://data.4tu.nl/info/about-4turesearchdata/frequently-asked-questions).

## Containerisation

Similar to [dependencies](https://hackmd.io/mJxtkV4-RH-_7TFZlgDHkw#Dependency-management), it is important to record and manage your development environments. This involves documenting the specific configurations, tools, and versions used during development to ensure that everything runs consistently across different setups. By recording your environments, you create a reproducible framework that significantly reduces the infamous *"it works on my machine"* syndrome, where code works on one machine but not on others.

Environment management includes specifying your operating system, programming languages (and their versions), system libraries, and any other software or tools required. This practice not only helps in maintaining consistency across development, testing, and production stages but also streamlines onboarding for new team members, as they can quickly set up their environment to match the recorded specifications.

### Containers

Containers remove the need to manually install or troubleshoot anything if something is not working on another user's machine. A container functions as a self-contained unit: everything is bundled into a single package that simulates a complete operating environment, distributed as a single file (an image). Definition files, like Dockerfiles or Singularity definition files, are essentially instruction manuals for building a container image.
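As a sketch of such a definition file, a minimal Dockerfile for a Python project could look like the following (the file and package names are hypothetical, and the base image tag is just one reasonable choice):

```dockerfile
# Pin a specific base image tag for reproducibility.
FROM python:3.11-slim

WORKDIR /app

# Install pinned dependencies first, so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the project source and define the default command.
COPY src/ ./src/
CMD ["python", "-m", "yourpkg_name"]
```

Pinning the base image tag and the dependency versions is what makes the resulting image reproducible; an unpinned `FROM python` would silently change over time.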
:::spoiler :bulb: **An analogy for a container image**
Consider a container image as a detailed script for a movie, outlining every scene and dialogue. When the decision is made to shoot the movie ("executing" the container image), a temporary set is constructed. This set serves as the container, where all filming and acting occurs, confined to this controlled environment. After the shooting wraps up (or the task is completed), the set is dismantled and all props are stored away, but the script (the container image) remains unaltered. This makes it possible to recreate scenes as needed, with each shoot starting from the same original script, ensuring consistency across takes while leaving the script intact for future use.
:::

Containers can be useful for various purposes:

- If installing specific software is complicated or incompatible with your OS, you might check if an image is available and run the software from the container.
- Likewise, if you want to make sure that the people you are working with use the exact same environment, you can provide them an image of your container.
- If you are facing issues due to different system architectures, you can distribute a definition file to build an image tailored to different machines. While it might not replicate your environment exactly, it often provides a sufficiently close alternative.

Popular containerisation solutions:

- [Docker](https://www.docker.com)
- [Singularity](https://sylabs.io/docs/)

Both Docker and Singularity are robust options for containerisation, with Docker being more widely used in general software development, and Singularity addressing the specific needs of HPC environments.
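With Docker, building an image from a definition file and running software inside it comes down to a couple of commands. A sketch, where the image name `myimage` and `script.py` are placeholders:

```shell
# Build an image from the Dockerfile in the current directory.
docker build -t myimage:1.0 .

# Run a throwaway container from it, mounting the current
# directory into the container as /work.
docker run --rm -v "$PWD":/work -w /work myimage:1.0 python script.py
```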
[Docker's official samples and examples](https://github.com/dockersamples)

:::success
:notebook: **Further reading**
- [Recording environments lesson on CodeRefinery](https://coderefinery.github.io/reproducible-research/environments/)
- [Carpentries incubator lesson on Docker](https://carpentries-incubator.github.io/docker-introduction/)
- [Guides and manuals for Docker](https://docs.docker.com)
- [Singularity documentation](https://singularity-docs.readthedocs.io/en/latest/)
:::

#### Advantages and disadvantages of using containers

Containers have gained widespread popularity due to their significant benefits in solving various challenges:

- They enable smooth transition of workflows across different operating systems and configurations.
- They address the common issue of software behaving differently on different machines by providing a consistent runtime environment.
- For software with nested dependencies, containers can be a vital tool for ensuring long-term reproducibility.
- Containers bundle the software in an isolated file, which simplifies the management, installation and removal of the software (compared to a traditional uninstallation process).

But it is important to consider the downsides:

- The convenience of containers might lead to overlooking underlying software installation issues and not adhering to good software development practices.
- There is a risk of creating a new form of dependency: software that *"only works in a specific container"*, which could limit flexibility and interoperability.
- Container images can grow substantially in size, especially if not carefully managed.
- Modifying existing containers can sometimes be challenging, requiring a good understanding of the container's configuration.

:::danger
:exclamation: **Caution**
It's crucial to source your container images from reputable and official channels.
There have been instances where images were found to be malicious, so it is very important to apply the same caution as when installing software packages from untrusted sources.
:::

## Resources

:::spoiler **References used in this guide**
1. [Choosing a build backend](https://packaging.python.org/en/latest/tutorials/packaging-projects/#choosing-a-build-backend)
1. [Declaring project metadata](https://packaging.python.org/en/latest/specifications/pyproject-toml/#declaring-project-metadata-the-project-table)
1. [PEP 440](https://www.python.org/dev/peps/pep-0440/)
1. [Core Metadata - Summary](https://packaging.python.org/specifications/core-metadata/#summary)
1. [RealPython Packaging guide](https://realpython.com/pypi-publish-python-package/)
1. [Writing pyproject.toml - Readme](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#readme)
1. [Python Package Index (PyPI) - Classifiers](https://pypi.org/classifiers/)
1. [Install Requires vs Requirements](https://packaging.python.org/discussions/install-requires-vs-requirements/)
1. [Sample Project on GitHub](https://github.com/pypa/sampleproject/blob/main/pyproject.toml)
1. [Writing your pyproject.toml](https://packaging.python.org/en/latest/guides/writing-pyproject-toml/)
1. [pyproject.toml specification](https://packaging.python.org/en/latest/specifications/pyproject-toml/)
1. [Configuring setuptools using pyproject.toml files](https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html)
1. [Is `setup.py` deprecated?](https://packaging.python.org/en/latest/discussions/setup-py-deprecated/#setup-py-deprecated)
1. [How to modernize a `setup.py` based project?](https://packaging.python.org/en/latest/guides/modernize-setup-py-project/)
1. [Software Development Workflow guide](https://hackmd.io/mJxtkV4-RH-_7TFZlgDHkw#Project-Organization)
1. [PEP 518](https://peps.python.org/pep-0518/)
1. [Packaging projects](https://packaging.python.org/en/latest/tutorials/packaging-projects/)
1. [Good integration practices](https://docs.pytest.org/en/7.2.x/explanation/goodpractices.html)
1. [Python documentation packages](https://docs.python.org/3/tutorial/modules.html#packages)
1. [TestPyPI](https://test.pypi.org)
1. [Twine Read the Docs](https://twine.readthedocs.io/en/latest/)
1. [Using TestPyPI](https://packaging.python.org/en/latest/guides/using-testpypi/)
1. [Semantic Versioning](https://semver.org)
1. [GitHub Releases](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository)
1. [Poetry project setup](https://python-poetry.org/docs/basic-usage/)
1. [Recording environments lesson on CodeRefinery](https://coderefinery.github.io/reproducible-research/environments/)
1. [Carpentries incubator lesson on Docker](https://carpentries-incubator.github.io/docker-introduction/)
1. [Guides and manuals for Docker](https://docs.docker.com)
1. [Singularity documentation](https://singularity-docs.readthedocs.io/en/latest/)
1. [Docker's official samples and examples](https://github.com/dockersamples)
:::