# LSDP

Architecture of computers
---

### Flynn's taxonomy examples

**SISD** – sequential computer; von Neumann architecture; many PCs before 2010 and mainframes
**SIMD** – GPU; modern CPUs with vectorization
**MISD** – systolic computer; fault-tolerant systems
**MIMD** – cluster, where each processor is programmed separately; Intel Xeon Phi; multi-core superscalar processors; distributed systems

### Compilers

- programming languages:
  - interpreted (e.g., Python, JavaScript),
  - compiled (e.g., C, C++, Rust),
  - mixed (e.g., Java – Bytecode+JVM, Python in some cases),
- interpreted PLs are in general slower than compiled ones (however, there is JIT),
- this is caused by heavy optimizations applied in the compilation process, e.g.:
  - removal of unused code – if the compiler detects that some variable, function etc. is declared but never used, then all instructions concerning that variable are removed (can be problematic in some cases like embedded systems; see: volatile in C/C++),
  - unrolling loops into vector operations – ...

### GIL

The **Global Interpreter Lock** ensures that only one thread in the interpreter runs at a given time:
- the problem occurs especially for CPU-bound threads (very little I/O); additionally, there is no smart thread scheduling algorithm,
- this can lead to a situation where only one thread is running all the time while the others wait,
- together with a special check mechanism in the Python interpreter implementation, this can cause extreme slowdowns of running times.

![](https://i.imgur.com/pStC1JR.png)
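A minimal sketch of the slowdown described above, assuming CPython (the worker function and iteration count are made up for illustration): two CPU-bound threads take roughly as long as one because the GIL serializes them, while two processes run in true parallel.

```python
import time
from threading import Thread
from multiprocessing import Process

def count(n):
    # Pure CPU-bound work: no I/O, so the GIL is never released voluntarily.
    while n > 0:
        n -= 1

def run(worker_cls):
    workers = [worker_cls(target=count, args=(10_000_000,)) for _ in range(2)]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Threads share one interpreter; the GIL lets only one run bytecode at a time.
    print(f"threads:   {run(Thread):.2f}s")
    # Each process has its own interpreter and its own GIL.
    print(f"processes: {run(Process):.2f}s")
```

On a multi-core machine the process variant should finish in roughly half the thread time; with I/O-bound work the threads would fare much better, since the GIL is released during blocking calls.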
### Parallelism in ML

- data parallelism (sketched below)
  - same model on each distributed node
  - split data among nodes
  - repeat
    - train
    - synchronize
- task parallelism
  - parts of the model on each distributed node
  - same data on each node, or get results of the previous part of the model
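A toy sketch of the data-parallel loop above, with plain NumPy standing in for a real framework (the shard count, linear model, and learning rate are all invented): each "node" trains on its own shard, and synchronization is a simple gradient average.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 8))            # full dataset
targets = data @ rng.normal(size=8)          # synthetic linear targets
shards = np.array_split(np.arange(1000), 4)  # "split data among nodes"

weights = np.zeros(8)                        # "same model on each node"
for step in range(100):                      # "repeat"
    grads = []
    for shard in shards:                     # "train": each node on its own shard
        x, y = data[shard], targets[shard]
        err = x @ weights - y
        grads.append(x.T @ err / len(shard))
    weights -= 0.01 * np.mean(grads, axis=0) # "synchronize": average the gradients
```

In a real system each shard's gradient would be computed on a different machine and the average taken with a collective operation (e.g. all-reduce) rather than a local loop.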
---

Languages
---

### Python

- interpreted language
- strong*, dynamic typing
  - Dynamic: not variables, but objects are typed!
  - Strong*: types do not change unexpectedly
  - ... but this works (a list behaves like a bool; implicit casting)
- many packages available
- most popular language in ML
- computation speedup via FFI to C, C++

**Other features**:
- f-strings,
- breakpoint(),
- positional-only arguments,
- literal types,
- typed dicts,
- final objects

**Celery**:
- distributed task queue,
- uses RabbitMQ underneath

---

### Rust

- compiled language
- fast (LLVM backend – same as C++)
- excellent toolkit (cargo)
- compiler **guarantees memory safety**
- **borrow checker (no GC)**
  - no dangling pointers,
  - no double free (security risk!),
  - no data races (easy concurrency),
- if it compiles, then it's valid code (in 99% of all cases),
- **static typing**,
- systems programming language,
- access to low-level OS resources like in C (pipes, sockets, message queues),
- easy FFI,
- suitable for embedded devices,
- WASM – compiles to WebAssembly (interoperates with JavaScript),
- compiles to native GPU kernels*,
- **steep learning curve**

**Cargo**:
- dependency management,
- code linter (cargo clippy),
- documentation (cargo doc),
- code formatting (cargo fmt),
- tests execution (cargo test),
- toolset upgrading (rustup)

**Parallelization**:
- mostly threads with channels are used,
- 3rd-party tools: actix, rayon, tokio
- async*,

---

### Erlang

- concurrent,
- functional programming language,
- garbage collection,
- actor programming

**Erlang runtime**:
- distributed,
- fault tolerant,
- soft real-time
- HA, non-stop applications
- hot swapping (change code without stopping the system)

**Actor programming**:
- actor programming = real OOP,
- actor = primitive unit of computation,
- an actor receives a message and does some kind of computation based on it,
- actors are completely separated (no shared memory etc.),
- you shouldn't care about fault tolerance:
  - create a supervisor that can restart other actors when they fail,
- distribution is easy (just serialize messages)

*When an actor receives a message, it can do one of these 3 things:*
- create more actors,
- send messages to other actors,
- designate what to do with the next message.

---

### Scala

- JVM
  - breaking changes
  - can use everything from the JVM world
- multi-paradigm
- immutable first
  - val vs var
  - immutable case classes
- lazy
- higher-order functions

**Parallel collections**:
- separate library since 2.13
- better to use Vector
- IT IS NOT ALWAYS WORTH IT TO PARALLELIZE

**Multithreading**:
- Runnable
- Futures
- Promises
- ExecutionContext
- context switching has a cost

**Fork join pool**:
- asynchronous operations

**Thread pool**:
- blocking
- failure
- map
- flatMap
- sequential

Frameworks for Scala parallel computing:
- Akka
- Play

---

### C++

- high performance
- optimizations
- low-level
- full control

**(Auto)-vectorization**:
- SIMD instructions for the execution of a loop
- depends on the available SIMD instructions
- 2x for single-precision ops
- can introduce different rounding

**Loop unrolling**:
- better pipelining
- better vectorization
- increases binary size

**Function inlining**:
- a function call puts previous results on the stack
- function calls stop further optimizations (vectorization etc.)
- increases binary size

**OpenMP**:
- multithreading
- explicit
- API or pragmas
- task or data parallel
- SIMD

---

### Go

- high performance
- easy development
- simple syntax

**Protoactor**:
- blazing fast
- protobuffers
- virtual actors
- up to 10x the speed of Erlang
- up to 100x the speed of Akka.NET
- Kotlin, C#, Go

---

### MPI

- widely used standard for message passing
- distributed-memory concurrent computers
- sender and receiver must specify the data type
- point-to-point communication
- collective communication
- defines common data types
- generally we do not use all-to-all communication
- models application topology
  - support for Cartesian topology
  - data shifting along a dimension
  - collective communication along a dimension

**Functions** (sketched below): broadcast, multicast, all-to-all, barrier, scatter, gather, all-gather, reduce
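A small sketch of the MPI primitives listed above using mpi4py, one common Python binding (the payloads and tag are arbitrary); run it under e.g. `mpirun -n 4 python mpi_demo.py`.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# broadcast: the root sends the same object to every rank
cfg = comm.bcast({"lr": 0.01} if rank == 0 else None, root=0)

# scatter: the root hands each rank its own chunk
chunk = comm.scatter([[i] * 2 for i in range(size)] if rank == 0 else None, root=0)

# point-to-point: sender and receiver are both explicit
if rank == 0:
    comm.send("hello", dest=1, tag=7)
elif rank == 1:
    print(comm.recv(source=0, tag=7))

# reduce: combine one value from every rank at the root
total = comm.reduce(rank, op=MPI.SUM, root=0)

# gather collects every rank's chunk at the root; barrier synchronizes all ranks
all_chunks = comm.gather(chunk, root=0)
comm.barrier()
if rank == 0:
    print(cfg, total, all_chunks)
```

The lowercase methods (`bcast`, `scatter`, ...) pickle arbitrary Python objects; mpi4py also offers capitalized variants (`Bcast`, `Scatter`, ...) that move buffers such as NumPy arrays without pickling.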
---

Concepts (Stateless vs Stateful application)
---

### Stateless

- The application does not persist any data in its memory
- Requests are processed one by one
- Requests are processed independently (without the context of previous requests)
- No session persisted in the app memory
- A session can be persisted in a persistence solution (sketched below)
  - accessible by other replicas of the app
  - for example Redis or MongoDB
  - can be scaled and replicated separately
- There is no need for sticky sessions
- The app can be easily scaled horizontally
- The development process might be more complex

### Stateful

- The application can persist data in memory
- Requests can be processed in the context of previous requests
- The session is persisted in app memory
- We need to use sticky sessions
- Scaling and availability are not so trivial
- Development might be easier
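A sketch of the "session in a persistence solution" bullet, assuming a Redis instance reachable at host `redis` (the key layout and the `handle_request` function are invented stand-ins for a real web handler): because the session lives in Redis, any replica can serve any request and no sticky sessions are needed.

```python
import json
import redis

# Shared persistence layer, reachable by every replica of the app.
store = redis.Redis(host="redis", port=6379, decode_responses=True)

def handle_request(session_id: str, item: str) -> dict:
    """Process one request with no in-process state: load the session,
    mutate it, write it back. Any replica could handle the next call."""
    raw = store.get(f"session:{session_id}")
    session = json.loads(raw) if raw else {"cart": []}
    session["cart"].append(item)
    store.set(f"session:{session_id}", json.dumps(session), ex=3600)  # 1h TTL
    return session
```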
### High Availability

- characteristic of production-level systems
- should be online (available) as much as possible
- SLA (Service Level Agreement) – "nines"

**How to achieve HA?**
- replicate your service
- route all the traffic through a proxy / load balancer

But in reality it's not that easy:
- implementation of health checks, liveness probes,
- retry mechanisms,
- how to automatically scale services?
  - choose a scaling criterion, the number of fallback replicas,
  - ensure fast application startup

**Active-active vs active-passive**:
- two types of HA: active-active, active-passive
- for stateful applications:
  - active-active is hard to maintain → the state of all instances must be synchronized along with the processing of requests,
  - active-passive also requires state synchronization (easier, since only one node receives requests),
- for stateless applications, both HA types are easier to apply

Software Update Types
--

### Blue Green

- Load balancer or proxy in front of the app
- Two identical environments
  - Green – the environment currently running
  - Blue – the one that is updated
- Redirect traffic from green to blue after the update
- All OK? – blue becomes green, green becomes blue
- Problem? – redirect traffic back

Update procedure:
- Update the blue environment
- Redirect traffic from green to blue
- All OK? – blue becomes green, green can again be used as blue
- Problem? – redirect traffic back

### Canary

- Roll out updates only to a small part of the traffic
- You can roll out only to a part of your users

Update procedure:
- Create an app instance with the new features
- Redirect part of the traffic to the new instance using some predicate
- All OK? – continue redirecting traffic until 100% is on the new app
- Problem? – roll back to the old instances

### Rolling

- N – the number of running instances before the update
- During the update, at most N + 1 running instances
- At each step of the update, replace one old-version instance with a new one

Update procedure:
- Create a new-version instance
- All OK? – turn off one old-version instance
- All OK? – repeat until you have only new-version instances
- Problem? – recreate the old instances

## LSDP labs answers:

### L2

- why we use a broker
  - Celery requires a solution to send and receive messages; usually this comes in the form of a separate service called a message broker.
  - Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task, a client adds a message to the queue, which the broker then delivers to a worker.
- why there is no broker URL defined in code
  - It is defined in a docker-compose YAML file as an environment variable.
- how the broker URL is built (what is guest etc.; see the sketch after this list)
  - transport://userid:password@hostname:port/virtual_host
- do tasks need to return results?
  - No, tasks do not necessarily need to return results. If you want to keep track of tasks or need the return values, then Celery must store or send the states somewhere so that they can be retrieved later. In Celery, a result backend is the place where task results are stored when a Celery task returns a value.
- can we schedule periodic tasks?
  - Yes, celery beat is a scheduler; it kicks off tasks at regular intervals, which are then executed by available worker nodes in the cluster.
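A minimal Celery sketch tying the L2 answers together (the hostname `rabbitmq` and the `add` task are assumptions; in the labs the broker URL comes from an environment variable set in docker-compose rather than being hard-coded):

```python
from celery import Celery

# Broker URL follows transport://userid:password@hostname:port/virtual_host;
# "guest"/"guest" is RabbitMQ's default user on the default vhost "/".
app = Celery(
    "tasks",
    broker="amqp://guest:guest@rabbitmq:5672//",
    backend="rpc://",  # result backend; only needed if return values matter
)

@app.task
def add(x, y):
    return x + y

# Client side: .delay() enqueues a message on the broker; a worker started
# with `celery -A tasks worker` picks it up and executes the task.
result = add.delay(2, 3)
print(result.get(timeout=10))  # retrieving the result requires the backend
```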
### L4

- why do we need to install Java, how does PySpark work?
  - PySpark is built on top of Spark's Java API.
  - Data is processed in Python and cached / shuffled in the JVM.
  - PySpark helps you interface with Resilient Distributed Datasets (RDDs) in Apache Spark using the Python programming language.
  - This has been achieved by taking advantage of the Py4J library.
  - Py4J is a popular library which is integrated within PySpark and allows Python to dynamically interface with JVM objects.
  - https://cwiki.apache.org/confluence/display/SPARK/PySpark+Internals
  - https://databricks.com/glossary/pyspark
- do we need to use Java 8
  - Spark runs on Java 8/11, Scala 2.12, Python 2.7+/3.4+ and R 3.5+. Java 8 prior to version 8u92 support is deprecated as of Spark 3.0.0.
  - https://spark.apache.org/docs/latest/
- can we connect to an external cluster from Python code (see the sketch after this list)
  - setMaster(value) − to set the master URL.
  - https://www.tutorialspoint.com/pyspark/pyspark_sparkconf.htm
  - https://stackoverflow.com/questions/54641574/submitting-pyspark-script-to-a-remote-spark-server
- can we deploy our Python code to a Spark cluster
  - yes: https://spark.apache.org/docs/latest/submitting-applications.html
- how can we observe Spark jobs' progress (Spark HTTP UI)
  - Every SparkContext launches a Web UI, by default on port 4040, that displays useful information about the application.
  - You can access this interface by simply opening http://<driver-node>:4040 in a web browser.
  - If multiple SparkContexts are running on the same host, they will bind to successive ports beginning with 4040 (4041, 4042, etc.).
  - https://spark.apache.org/docs/latest/monitoring.html
- logistic regression vs linear regression
  - In linear regression, the outcome (dependent variable) is continuous; it can have any one of an infinite number of possible values. In logistic regression, the outcome (dependent variable) has only a limited number of possible values.
  - https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression
- multi-class vs multi-label
  - In a multi-class problem we want to assign a positive label to one of the n classes, where n > 2.
  - On the other hand, in a multi-label scenario we want to assign a positive label to 0–n of the n classes, so there is no restriction on the number of classes to which we assign a positive label.
  - One can say that this is a generalization of the multi-class problem.
- Spark:
  - what is an RDD
    - The main abstraction Spark provides is a resilient distributed dataset (RDD), which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel.
    - RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system), or an existing Scala collection in the driver program, and transforming it.
    - Users may also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures.
    - https://spark.apache.org/docs/latest/rdd-programming-guide.html
  - what is a Dataset
    - A Dataset is a distributed collection of data.
    - Dataset is a new interface added in Spark 1.6 that provides the benefits of RDDs (strong typing, ability to use powerful lambda functions) with the benefits of Spark SQL's optimized execution engine.
    - A Dataset can be constructed from JVM objects and then manipulated using functional transformations (map, flatMap, filter, etc.).
    - https://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes
  - what is a DataFrame
    - A DataFrame is a Dataset organized into named columns.
    - It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood.
    - DataFrames can be constructed from a wide array of sources.
    - https://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes
  - how Spark generally works (master, worker)
    - A Spark application runs as independent processes, coordinated by the SparkSession object in the driver program.
    - The resource or cluster manager assigns tasks to workers, one task per partition.
    - A task applies its unit of work to the dataset in its partition and outputs a new partition dataset. Because iterative algorithms apply operations repeatedly to data, they benefit from caching datasets across iterations.
    - Results are sent back to the driver application or can be saved to disk.
    - Spark uses a master/slave architecture, i.e. one central coordinator and many distributed workers. Here, the central coordinator is called the driver.
    - The driver runs in its own Java process. These drivers communicate with a potentially large number of distributed workers called executors.
    - Each executor is a separate Java process. A Spark application is a combination of the driver and its own executors.
    - With the help of a cluster manager, a Spark application is launched on a set of machines.
    - The Standalone Cluster Manager is the default built-in cluster manager of Spark.
    - Apart from its built-in cluster manager, Spark also works with open-source cluster managers like Hadoop YARN, Apache Mesos etc.
    - https://data-flair.training/blogs/how-apache-spark-works/
    - https://developer.hpe.com/blog/4jqBP6MO3rc1Yy0QjMOq/spark-101-what-is-it-what-it-does-and-why-it-matters
  - Spark stack (SQL, ML, GraphX etc.)
    - https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781785885655/1/ch01lvl1sec11/the-spark-stack
    - https://spark.apache.org/docs/latest/index.html#
  - shuffling
    - https://medium.com/swlh/revealing-apache-spark-shuffling-magic-b2c304306142
    - https://stackoverflow.com/questions/31386590/when-does-shuffling-occur-in-apache-spark
    - [reduceByKey vs groupByKey](https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-rdd-shuffle.html)
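A short PySpark sketch of the points above (the app name and data are made up): `master()` is where an external cluster URL such as `spark://host:7077` or `yarn` would go; here `local[*]` keeps the example self-contained.

```python
from pyspark.sql import SparkSession

# Replace "local[*]" with e.g. "spark://host:7077" to connect to an
# external cluster from Python code.
spark = (
    SparkSession.builder
    .appName("lsdp-demo")
    .master("local[*]")
    .getOrCreate()
)

# RDD API: low-level collections of elements partitioned across the cluster.
rdd = spark.sparkContext.parallelize(range(100), numSlices=4)
print(rdd.map(lambda x: x * x).reduce(lambda a, b: a + b))

# DataFrame API: named columns plus Spark SQL's optimized execution engine.
df = spark.createDataFrame([(1, "a"), (2, "b"), (2, "c")], ["id", "label"])
df.groupBy("id").count().show()  # groupBy triggers a shuffle between partitions

# Progress of these jobs is visible in the web UI at http://localhost:4040.
spark.stop()
```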
### L5

- what is the difference between a Docker container and a Kubernetes Pod?
  - When you create a Deployment, Kubernetes creates a Pod that "hosts" an instance of your application. A Pod is a Kubernetes abstraction that represents a group of one or more containers (such as Docker containers) together with shared resources for those containers. Those resources may include:
    - shared storage, e.g. Volumes,
    - networking, such as a unique cluster IP address,
    - information about how to run each container, such as the container image version or specific ports to use.
  - Pods are the atomic unit on the Kubernetes platform. When you create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the node where it is scheduled and remains there until termination (according to the restart policy) or deletion. In case of a node failure, an identical Pod is scheduled on another node of the cluster.
- what are the following Kubernetes objects and what are they used for? (a small programmatic example follows after this list)
  - **deployment**
    - A Deployment provides declarative updates for Pods and ReplicaSets.
    - A Deployment tells Kubernetes how to create and update instances of your application. Once you have created a Deployment, the Kubernetes master schedules the application instances onto individual nodes of the cluster.
    - Once the application instances are created, the Kubernetes Deployment Controller continuously monitors them. If the node hosting one of the instances fails or is removed, the Deployment Controller replaces that instance with an instance on another node of the cluster. This provides a self-healing mechanism that reacts to machine failure or shutdown in the cluster.
  - **replicaset**
    - A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all.
    - Kubernetes Pods are not durable; they have a lifecycle. If a worker node fails, all the Pods running on that node are lost. A ReplicaSet will then automatically try to bring the cluster back to the desired state by creating new Pods, and in this way keep the application running. Another example is an image-processing backend with 3 replicas. Each of those replicas is interchangeable: the frontend system should not have to keep track of the backend replicas, nor of whether one of the Pods stopped working and was recreated. That said, every Pod in a Kubernetes cluster has its own unique IP address, even Pods on the same node, so there must be a way of automatically reconciling changes among Pods for the application to keep functioning.
  - **service**
    - An abstract way to expose an application running on a set of Pods as a network service.
    - With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
    - A Service in Kubernetes is an abstract object which defines a logical set of Pods and a policy by which to access them. Services enable loose coupling between dependent Pods. A Service is defined in YAML (recommended) or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a LabelSelector (there are also cases where you may need a Service without specifying a selector).
  - **daemonset**
    - A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
    - A Kubernetes DaemonSet is a container tool that ensures that all nodes (or a specific subset of them) are running exactly one copy of a pod. DaemonSets will even create the pod on new nodes that are added to your cluster!
    - When using Kubernetes, most of the time you don't care where your pods are running, but sometimes you want to run a single pod on all your nodes. For example, you might want to run fluentd on all your nodes to collect logs. In this case, using a DaemonSet tells Kubernetes to make sure there is one instance of the pod on the nodes in your cluster.
  - **statefulset**
    - StatefulSet is the workload API object used to manage stateful applications.
    - It manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
    - Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same spec but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
    - If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.
    - StatefulSets are valuable for applications that require one or more of the following:
      - stable, unique network identifiers,
      - stable, persistent storage,
      - ordered, graceful deployment and scaling,
      - ordered, automated rolling updates.
  - **configmap**
    - A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
    - A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.
    - Use a ConfigMap for setting configuration data separately from application code.
  - **secret**
    - Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.
    - A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image. Users can create Secrets, and the system also creates some Secrets.
    - To use a Secret, a Pod needs to reference it. A Secret can be used with a Pod in three ways:
      - as files in a volume mounted on one or more of its containers,
      - as container environment variables,
      - by the kubelet when pulling images for the Pod.
  - **persistentvolume(claim)**
    - The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided from how it is consumed.
    - A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV.
    - A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory). Claims can request a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany; see AccessModes).
    - While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource.
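To stay in Python rather than YAML, here is a sketch that creates the kind of Deployment discussed in L5 using the official `kubernetes` client (the image, names, and replica count are placeholders; in practice these objects are usually written as YAML and applied with kubectl):

```python
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config, like kubectl does

# A Deployment: a declarative spec for 3 replicated Pods, each running one container.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="demo",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# The Deployment controller then creates a ReplicaSet, which creates the Pods.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```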
