# Intelligent Robotics Review

# 1. Introduction

### What is an intelligent robot made of?
- Sensors
  - Camera
  - Microphone
  - Button
  - ...
- Processing unit
  - Classic desktop system
  - Microcontroller
  - FPGA
  - ...
- Actuators
  - Wheels
  - Gripper
  - ...

# 2. Fundamentals

## Perception
- Sensor data provides an abstract perception of the environment
- The perception-action cycle represents the control loop:
  1. Sensing of the environment
  2. Intelligent processing of the obtained data
  3. Execution of an action
- Detailed cycle:
  1. Data acquisition: sampling of the sensor signal output
  2. Data preprocessing: filtering, normalization, scaling, etc.
  3. Data fusion: combination of multimodal or redundant sensor data
  4. Feature extraction: representation of the sensed environment
  5. Pattern recognition: search for patterns to classify data
  6. Environment modeling: model the environment from classified patterns
  7. ...
  8. Action: based on the environment model, choose some goal-oriented action to manipulate the environment
- A **sensor** is a unit which
  - receives a signal or stimulus
  - and reacts to it
- A **physical sensor** is a unit which
  - receives a signal or stimulus
  - and reacts to it with an electrical signal
- A **stimulus** is a quantity, characteristic or state which is perceived and converted into an electrical signal.

## Taxonomy
1. Intrinsic sensors: provide data about the internal system state
   Example: encoder (incremental/absolute), accelerometer, gyroscope, ...
2. Extrinsic sensors: provide data about the environment
   Example: strain gauge, force-torque sensor, piezoelectric sensor, ...
3. Active sensors: modify an applied electrical signal in response to a change of the stimulus
   Example: sonar sensor, infrared sensor, laser range finder
4. Passive sensors: create an electrical signal in response to a change of the stimulus
   Example: linear camera, CCD-/CMOS-camera, stereo vision cameras, omnidirectional vision camera, ...

![Sensor classification example](https://i.imgur.com/dDNl9T0.png)
## Measurement error
1. Systematic deviation
   - Deviation is caused by the sensor itself
   - Wrong calibration, interference, etc.
   - Elimination is possible but requires elaborate examination
2. Random deviation
   - Deviation caused by inevitable, external interference
   - Repeated measurements yield different results
   - Individual results scatter around a mean value
- Absolute measurement deviation: deviation of a single measurement from the mean of $N$ measurements of a measurement series
- Relative measurement deviation: percentage between absolute error and mean value
- Variance: measures the distribution of a set of values around the mean
  $$ s^2 = \frac{\sum^N_{i=1}(x_i - \bar{x})^2}{N-1} $$
- Standard deviation:
  $$ s = \sqrt{s^2} = \sqrt{\frac{\sum^N_{i=1}(x_i - \bar{x})^2}{N-1}} $$
- Confidence interval: contains n% of the population samples
  - $\pm 1\sigma$ contains 68.27% of samples
  - $\pm 2\sigma$ contains 95.45%
  - $\pm 3\sigma$ contains 99.73%

### Accuracy
- Manufacturers usually provide a specification of accuracy for a given range of the output signal
- Inaccuracy is often given in the form of a relative error
- Sometimes data on systematic errors is included

### Sensor resolution
- Defined by the number of bits of the digital output
- Continuous if it doesn't have distinct resolution steps
- Resolution is bounded by the noise floor

### Transfer function
The transfer function is the ideal relation between the stimulus and the output signal of a sensor:
$$ S = f(s) $$
where $S$ represents the output for the true value of the stimulus $s$.
Transfer functions can be linear, logarithmic, exponential or polynomial.
- Real transfer function:
  - $S = f_{ideal}(s)$: ideal transfer function
  - $\pm \Delta$: maximum deviation from the ideal transfer function
  - $\pm \delta$: actual deviation
  $$ S' = f_{real}(s) = f_{ideal}(s) \pm \delta, \quad \delta \leq \Delta $$

### Approximation of the transfer function
Measurement of a relation between two quantities $x$ and $y$:
- Linear relation -> linear regression
- Non-linear relation
  - linearization followed by linear regression
  - least-squares fit through numerical optimization techniques (see the sketch at the end of this chapter)

## Sensor errors and characteristics
- Hysteresis error: error when approaching a value from different directions
- Dead band: insensitivity to a specific coherent range of the input signal
- Saturation: the sensor has a limited input range
- Repeatability error: the sensor produces different outputs for the same input
  - $\delta_r = \frac{\Delta}{FSI}\cdot 100\%$
- Dynamic characteristics:
  - time-dependency of the sensor
  - results in dynamic errors
- Reliability (MTBF): mean time between failures
- Properties relevant to the field of application:
  - Design
  - Weight
  - Form factor
  - Price
  - Environmental factors
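The sample statistics and the least-squares approximation above are easy to check numerically. Below is a minimal sketch, assuming NumPy; the measurement values and the synthetic transfer function are invented for illustration, not lecture data:

```python
import numpy as np

# Hypothetical measurement series
x = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2])

mean = x.mean()
variance = ((x - mean) ** 2).sum() / (len(x) - 1)   # s^2 with N-1 (Bessel's correction)
std = np.sqrt(variance)                             # s
print(f"mean={mean:.3f}, s^2={variance:.4f}, s={std:.4f}")

# Least-squares fit of a linear transfer function S = a*s + b to noisy readings
stimulus = np.linspace(0.0, 5.0, 50)                          # known stimulus values
signal = 2.0 * stimulus + 0.5 + np.random.normal(0, 0.1, 50)  # noisy sensor output
a, b = np.polyfit(stimulus, signal, deg=1)                    # slope and intercept
print(f"approximated transfer function: S = {a:.3f}*s + {b:.3f}")
```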
# 3. Vision Sensors

## Coordinate systems

### Relative Pose
- Left- or right-handed coordinate systems
- The pose of a rigid object is its **position** and **orientation** w.r.t. some coordinate system
  - represented by:
    - a Cartesian coordinate system **K** (also called **frame**) and
    - a transformation between **K** and the global CS **B** (**B -> K**)
- Orientation given by:
  - Euler angles $\phi, \theta, \psi$ (roll-pitch-yaw) ![](https://i.imgur.com/YtKYsxd.png =300x) ![](https://i.imgur.com/K0C1HWq.png =x150)
  - Axis-angle $\hat{e}, \theta$ (rotation around axis $\hat{e}$ by angle $\theta$) ![](https://i.imgur.com/qA323qe.png =200x)
    - Axis-angle represented as $\hat{e} = [x,y,z]\in \mathbb{R}^3$ and $\theta \in [0;2\pi)$
  - Unit quaternions, represented as $q = w + x\cdot i + y \cdot j + z \cdot k$ with unit vectors $i,j,k$ and $||q|| = 1$ (see the sketch below):
    $$ q = \cos \frac{\theta}{2} + (\hat{e}_x i + \hat{e}_y j + \hat{e}_z k)\sin \frac{\theta}{2} $$
  - Rotation matrix $R \in \mathbb{R}^{3\times 3}$ ![](https://i.imgur.com/yV7cK2f.png =200x)
  - Fused angles ![](https://i.imgur.com/fsc2ERW.png =300x)
    - representation ![](https://i.imgur.com/puggnNT.png =x150)

Points in a local frame can be transformed to the global frame through translations and rotations. Transformations can be chained: ![](https://i.imgur.com/AP8LyHT.png)

:::info
Always try to use quaternions, except when using the orientation as input/output of a neural network.
:::
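As a concrete instance of the axis-angle-to-quaternion formula above, here is a minimal sketch in plain NumPy (the function name is my own):

```python
import numpy as np

def axis_angle_to_quaternion(axis, angle):
    """Convert a rotation about `axis` by `angle` (rad) to a unit quaternion (w, x, y, z)."""
    e = np.asarray(axis, dtype=float)
    e = e / np.linalg.norm(e)          # ensure the axis is a unit vector
    w = np.cos(angle / 2.0)
    xyz = e * np.sin(angle / 2.0)
    return np.array([w, *xyz])

# 90° rotation around the z-axis
q = axis_angle_to_quaternion([0, 0, 1], np.pi / 2)
print(q, np.linalg.norm(q))  # [0.707, 0, 0, 0.707], norm 1.0
```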
## Vision systems
- Many vision sensors are available:
  - Linear camera sensor
  - Analog CCD camera
  - Standard CMOS camera
  - HDR CMOS camera
  - Structured light camera
  - Stereo vision system
  - Omnidirectional vision system
  - Event cameras
- Tasks:
  - Object grasping tasks
  - Object handling tasks
  - Assembly tasks
- Used for:
  - Perception of objects (static, dynamic)
  - Perception of humans (face recognition, gaze tracking, gesture recognition, ...)
  - Robot localization (relative, absolute)
  - Object recognition and localization
  - 3D scene reconstruction
  - Allocation of the environment
  - Vision-based motion control
  - Visual servoing
  - Collision avoidance
  - Coordination with other robots or humans

### Camera calibration
- Intrinsic parameters
  - Internal geometrical structure and optical features of the camera
- Extrinsic parameters
  - Position and orientation
- Relation between 3D and 2D
  - Perspective projection (3D -> 2D)
    - With the camera projection matrix, the 2D projection of a 3D point can be determined
  - Backprojection (2D -> 3D)
    - If a 2D point is known, there exists a ray in 3D space on which this point lies
    - Triangulation can be used if 2 or more views exist
- Camera calibration can be on-line or off-line
  - using a calibration object ![](https://i.imgur.com/HcWWR2D.jpg)
  - using self-calibration approaches
  - using machine learning methods
- Pinhole camera projection: ![](https://i.imgur.com/bswYwD8.png)
- Possible distortions in real cameras:
  - Radial ![](https://i.imgur.com/TJMe714.png =250x)
  - Tangential ![](https://i.imgur.com/wVTj739.png =300x)
- Distortions can be removed by calibration
  - Using open-source implementations (ROS, OpenCV, Matlab); see the sketch at the end of this chapter
- Fiducial markers (tags)
  - Useful for computing the pose of marked objects
  - Can be used for calibration
  - AprilTags are most commonly used
    - 587 tags with 11 bit Hamming distance
  - Tag discussion:
    - Pros:
      - Simplifies the task
      - Low production costs
      - Low computational cost
      - Quite precise
      - Easy to employ in artificial environments
    - Cons:
      - Difficult to use for large distances
      - Difficult at very small scales
      - Not usable in natural environments
      - Tags in training data lead to learned tag detection
      - Rotation not so precise (can be improved by bundles)
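The calibration-then-undistortion workflow referenced above can look like this with OpenCV's standard API; the image file names and checkerboard size are placeholders:

```python
import cv2
import numpy as np
import glob

# Checkerboard calibration: inner-corner count is a placeholder for your board
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coords in its plane

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):           # placeholder image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate intrinsics K and distortion coefficients (radial + tangential)
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

img = cv2.imread("test.png")                     # placeholder test image
undistorted = cv2.undistort(img, K, dist)        # remove radial/tangential distortion
```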
# 4. Rotation Sensors

## Rotation / Motion
Measurement of angular motion and rotation.

### Optical Encoder
- The main component is a mask with transparent and opaque areas
- A cast ray of light is registered by a photodiode on the other side of the mask
- The mask pattern is usually a disk or strip, for angular and linear motion respectively
- Measurement with regard to time yields velocity
- Incremental encoder:
  - equidistant areas
  - one LED and photodiode per channel
  - two channels to detect the motion direction
- Absolute encoder:
  - Provides absolute angles as output
  - Binary encoded pattern on the disk/strip
  - Several LEDs and photodiodes required
  - Resolution limited by the number of channels (bits)

### Resolver
- Based on electromagnetic induction
- The brushless transmitter resolver is the most common type
  - reference winding (rotor) R
  - two secondary windings SIN (S1) and COS (S2) at 90° to each other ![](https://i.imgur.com/T2R44b9.png =200x)
- R is powered with an alternating voltage
- It induces voltages in the secondary windings:
  $$ V_{S1} = V_R\sin(\theta) \\ V_{S2} = V_R\cos(\theta) \\ \theta = \operatorname{arctan2}(V_{S1}, V_{S2}) $$

### Potentiometer
- Measures position and direction of rotation
- Gives a resistance value in relation to its absolute position
- High wear due to direct contact of material
- Limited to 360°

### Hall Effect Sensor
- Measures the speed or position of wheels and shafts
- Lorentz force acting on charges in a magnetic field
- Results in a voltage difference orthogonal to the current
- Smaller and cheaper than other solutions
- No AC needed
- Can be influenced by strong magnetic fields ![](https://i.imgur.com/wGRfXJU.png)

### Gyroscope (angular velocity)
- "Direction keeper", an alternative to the compass
- Most commonly used in navigation
- Types:
  - Mechanical gyroscope
  - Semiconductor gyroscope
  - Micro-Electro-Mechanical System (MEMS)

### Accelerometer (acceleration)
- Relies on the displacement of an inertial mass
- Measures proper acceleration in one dimension
- Includes gravity when pointing up

### Magnetometer
- Compass
- Measures orientation in a magnetic field
- Based on measuring the Hall effect

### Inertial Measurement Unit (IMU)
- Combination of gyroscope, accelerometer and magnetometer

## Odometry

### Dead-reckoning
- Encoders are used in combination with motors
- Knowledge about the transmission and the wheels allows determining the distance traveled
- Mobile robotic systems are mostly equipped with incremental encoders
- This kind of localization of mobile robots is called **dead-reckoning**
- Relative position and orientation are determined using the history of accumulated measurements
- Differential drive (independent wheels on a shared axis)
  - Pose calculation is done using forward kinematics (see the sketch below) ![](https://i.imgur.com/2II8LoP.png)

:::info
**General issue:** accumulation of measurement errors
:::
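A minimal sketch of a differential-drive dead-reckoning update; the variable names are my own, and the formula is the standard unicycle approximation with wheel distances derived from encoder ticks:

```python
import math

def odometry_step(x, y, theta, d_left, d_right, baseline):
    """Integrate one pair of encoder readings into the pose (x, y, theta)."""
    d_center = (d_left + d_right) / 2.0        # distance traveled by the robot center
    d_theta = (d_right - d_left) / baseline    # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Accumulating many such steps also accumulates the measurement errors
pose = (0.0, 0.0, 0.0)
for d_l, d_r in [(0.10, 0.12), (0.11, 0.11), (0.09, 0.12)]:  # hypothetical wheel distances [m]
    pose = odometry_step(*pose, d_l, d_r, baseline=0.30)
print(pose)
```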
### Odometry
- Odometry is the process of calculating the robot pose based on knowledge of its own actions/motions
- Errors in orientation have a strong impact on the deviation
- Often combined with absolute pose measurements (landmarks)

#### Odometry Deviation
- Systematic errors caused by:
  - Varying wheel diameters
  - Actual baseline differing from the expected distance
  - Wheels not on the same axis
  - Finite encoder resolution / sampling rate
  - Varying floor friction
- Random errors caused by:
  - Uneven ground
  - Unexpected objects on the ground
  - Spinning wheels
    - Slippery ground
    - Excessive acceleration
    - Skidding (fast turning)
    - Internal/external forces
    - No contact with the ground
- Systematic errors can be reduced by calibration; random errors cannot

#### Multi-sensory Odometry
- Wheel-based: superior linear estimates
- IMU: superior orientation estimates
- Legged odometry: equal linear and orientation estimates
- Camera-based visual flow: good in structured environments
- Combination of IMU and wheel-based odometry: "gyrodometry"
- Integrate readings with a Kalman filter

#### Visual Odometry
- The difference between 2 sequential images is used to compute the movement
- Different approaches:
  - Classical feature extraction
  - Learned neural networks
- Accuracy is bounded by the image resolution
  - Resolution is often bounded by the hardware
- Possible error sources:
  - Lighting conditions
  - Featureless environments
  - Repeating features
  - Motion blur
  - Rolling shutter effect
  - Large parts of the visible environment moving in relation to the robot (e.g. when there is a bus in the image)

#### Lidar odometry
- Take a "picture" with a laser distance sensor
- Compute the difference between scans to get the motion
- Features are structural, not visual
- Less prone to lighting problems

#### Walking odometry
- Velocity is not directly readable from the motors
- Compute the transformation with forward kinematics for each step and sum them
- Always keep one frame on the support foot and a transformation from there to the torso
- Different error sources:
  - The angle of a joint is not correct
  - Link lengths not correctly modeled
  - Backlash in joints (depends on the servo)
- Due to multiple joints and links, there are often multiple error sources per step
- Small angular errors (joints) quickly lead to large absolute errors (step position)
- Slippage
- Uneven ground

#### Drone odometry
- Movement in all 6 dimensions
- Using rotor speeds to compute odometry is theoretically possible but not often used
- "Visual inertial odometry" is the combination of visual odometry and IMU odometry

# 5. Force and Tactile

Two main physical attributes:
1. Force (linear)
2. Torque (rotational)

Sensors:

## Force

### Strain Gauge
- Deformation of an elastic material carrying a resistive foil
- The foil's wire length changes due to strain, and the change is small: $\Delta L \ll L$
- The length change causes a resistance change
- The sensor basically measures this small resistance change
![Strain gauge sensor](https://dwu6cij6g7c482o3ubd8gc11-wpengine.netdna-ssl.com/wp-content/gallery/illustrations/strain-gauge-diagram-example.png)
- The strain-gauge resistance change is measured with a Wheatstone bridge
- Small resistance changes can be measured with: ![WSB](https://i.imgur.com/Z2oe0gY.png) ![WSB_formula](https://i.imgur.com/nLpAvWO.png)
- If $R_1/R_2 \approx R_4/R_3$, then $V_{output}$ is zero; this way small changes in voltage can be measured
- ##### Drawback: only 1 linear direction of force is measured

### 6-Axis Force-Torque Sensor
- 8 strain gauges on a cross-shaped ("+") elastic resistive structure
- A common tool for industrial robots
  - mounted between the wrist and the tool
  - calibrated as: all measured forces minus (tool/sensor forces)
  - used for robot control
  - only measures end-effector forces, of course
![6D F-T sensor](https://i.imgur.com/oWBb5Ne.jpg) ![6D F-T sensor internals](https://i.imgur.com/Zzjhozn.png)
- Three forces and three torques influence different strain gauges ![6D F-T sensor internals matrix](https://i.imgur.com/DaIGO6R.png)
- To turn the linear strain-gauge values into forces and torques, a device-specific **coupling matrix K** is used
  - It is not perfectly defined which gauges get influenced, so the manufacturer calibrates that effect and provides the matrix K
  - ![Coupling matrix](https://i.imgur.com/OsuJjP8.png)

### Calibration
- Once the mass of the attached tool $F_{tool}$ and its center of gravity $P_{CoG}$ are known, they can be subtracted in software, allowing an **estimation of external forces** (see the sketch below)
- Calibration is also temperature-sensitive
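A minimal sketch of how the coupling matrix and tool-weight subtraction could look numerically; K, the gauge readings and the tool wrench are hypothetical placeholders, not values from the lecture:

```python
import numpy as np

# Hypothetical device-specific coupling matrix: maps 8 gauge readings to a 6D wrench
K = np.random.default_rng(0).normal(size=(6, 8))   # stands in for the manufacturer's matrix

gauges = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.0, 0.2, -0.05])  # raw strain-gauge values

wrench = K @ gauges              # [Fx, Fy, Fz, Mx, My, Mz] in the sensor frame
tool_wrench = np.array([0, 0, -9.81 * 0.5, 0, 0, 0])  # weight of a hypothetical 0.5 kg tool

external = wrench - tool_wrench  # estimation of external forces after calibration
print(external)
```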
### Examples
1. Insertion tasks: it is hard to do this precise work without any feedback, as the tool collides with the socket it is being inserted into, so force feedback allows doing it automatically ![6D F-T sensor use](https://i.imgur.com/gxn9bKx.png)
2. 5-axis fingertip ($F_x, F_y, F_z, M_x, M_y$)
3. Miniaturization: MEMS F/T sensor

### Alternative
1. 3D-printed alternative ![](https://i.imgur.com/USPFKM7.png)
   - uses a proximity sensor to detect the distance between deformable prongs

### Applications
1. **Guarded motions**: stop when a certain force threshold is reached (see the sketch after this list)
2. **Compliant motions (constant normal forces)**: peg-in-hole insertion, human-guided motion, trajectory teaching, polishing, grinding, etc.
3. **Force control**: rather than learning joint positions, learn their forces (obviously only for the joints equipped with such a sensor, not the whole robot); torque control of motors
4. Gripper grasp and release triggered by interaction forces ![6D applications](https://i.imgur.com/6h2xX2P.png)
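A minimal sketch of a guarded motion as in application 1; the robot/sensor interface here is a hypothetical placeholder, not a real API:

```python
import numpy as np

FORCE_LIMIT = 10.0  # [N] hypothetical threshold for the guarded motion

def guarded_move(robot, direction, step=0.001):
    """Move in small steps until the measured force exceeds the threshold."""
    while True:
        wrench = robot.read_ft_sensor()          # hypothetical: 6D wrench [Fx,Fy,Fz,Mx,My,Mz]
        if np.linalg.norm(wrench[:3]) > FORCE_LIMIT:
            robot.stop()                         # contact detected: stop immediately
            return
        robot.move_relative(direction * step)    # hypothetical Cartesian step command
```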
## Tactile

### Human tactile sensing
- Four different receptor types:
  1. vibration and motion detection
  2. sustained pressure, texture perception
  3. skin deformation, vibration, tool use
  4. skin stretch, slip, tangential forces
- In **cold** conditions, these capabilities drop dramatically

### Tactile sensors
- Very thin; used on a robot's fingertips and skin, in touch screens, etc.
- **Requirements**:
  1. low hysteresis error
  2. low crosstalk between sensors
  3. force range 0.4 -> 10 N
  4. very fine spatial resolution
  5. minimum frequency of 100 Hz
  (these requirements suggest that such tactile sensors are meant for very delicate purposes, not for something like pin-hole insertion of heavy industrial tools)
- The sensor layer can sit inside, outside, or both (depending on how much sensitivity and resolution is needed in the skin/fingertip)

### Sensor types
1. #### Switch type sensors ![](https://i.imgur.com/7w0PiUI.png)
   - binary on/off decision, no force measurement, low spatial resolution, no shearing forces
2. #### Resistive type
   1. Force-sensitive resistor (FSR), resistive type:
      - changes its resistance depending on the applied pressure
      - conductive elastomers
      - low cost, simple principle
      - drift of resistance during prolonged pressure; more useful for qualitative measurements
   2. FSR, capacitive type: ![](https://i.imgur.com/Xm1XUfR.png)
      - foam with embedded conductive particles
      - non-linear resistance curves
   3. FSR, matrix type
3. #### Capacitive type ![](https://i.imgur.com/u4g77s5.png) ![](https://i.imgur.com/K4PaH25.png)
   - flexible material
   - applied forces change the distance or area between the sheets (normal or shearing forces)
   - good sensitivity to normal forces, but less sensitive to shearing forces
   - an oscillating R-C circuit is used to measure the capacitance
   - also used for contactless proximity sensing (human tissue near the top electrode also changes the capacitance)
   - finger electrode layouts increase sensitivity to shearing forces
4. #### MEMS (Micro-electromechanical systems) ![](https://i.imgur.com/kRvJ5z0.png)
   - small sensor size, good spatial resolution, high sensitivity, high dynamic range
   - fragile; sensors too small to cover large areas
5. #### Optical sensors ![](https://i.imgur.com/DD560Gg.png)
   - combination of light emitters and detectors
   - principles:
     1. a touching object changes the refractive properties of the surface
     2. membrane deflection changes the reflection
     3. bending of optical fibers
   - usually robust, but unreliable performance in bright ambient light
- **Applications**:
  1. FingerVision
     - dot positions change with force ![](https://i.imgur.com/IdXVGbC.png)
  2. Syntouch BioTAC sensor (bio-inspired robot fingertip)
     - combines a pressure sensor, conductive liquid and a thermistor ![](https://i.imgur.com/QXuFZAV.png)
     - static pressure relates to the applied normal force
     - pressure vibrations are used for surface identification and slip detection
     - electrodes provide spatial information to reconstruct force location and shearing forces
     - temperature is used for material identification

### Robot Skin
- The robot touches itself and correlates arm/hand position and sensor response
- Manual calibration of all modules is difficult (a lot of work)
- Instead, let the robot learn its kinematics and sensor layout

### Recording human manipulation
- Learn grasping from human demonstration:
  1. record human manipulation experiments
  2. annotate and classify the phases of the manipulation
  3. learn human strategies
  4. transfer to the robot system
- Complex multi-sensor system:
  1. Camera(s): video of the overall scene
  2. Stereo camera: 3D scene reconstruction, hand position
  3. Cyberglove: hand shape
  4. Polhemus: 3D fingertip positions (magnetic tracker)
  5. TekScan Grip: fingertip and hand palm forces
  6. Instrumented Rubik: forces on the grasped object

# 6. Distance

- Time of Flight (see the sketch after this list)
  - emit a signal and, once it echoes back, measure the time it took
  - **distance = speed * time / 2**
- Phase shift difference
  - emit a signal and, once it echoes back, measure the phase shift it experienced
  - **distance = λ · (Δφ / 2π) / 2**
  - **Con:**
    - $2d < \lambda$ must be obeyed
    - f = 10 MHz results in a wavelength of 30 m, so only distances under 15 m can be measured
- Triangulation ![tri](https://i.imgur.com/z1wTcNp.png)
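The two distance formulas above in code form; a minimal sketch where the round-trip time and phase values are invented, and the speed of sound is the usual 343 m/s:

```python
import math

def tof_distance(round_trip_time, speed):
    """Time of flight: the signal travels to the target and back."""
    return speed * round_trip_time / 2.0

def phase_shift_distance(phase_shift, wavelength):
    """Phase shift: only valid while 2*d < wavelength (phase is ambiguous beyond that)."""
    return wavelength * (phase_shift / (2.0 * math.pi)) / 2.0

print(tof_distance(0.01, speed=343.0))           # ultrasound in air: ~1.7 m
print(phase_shift_distance(math.pi / 2, 30.0))   # 10 MHz modulation -> lambda = 30 m: 3.75 m
```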
## Distance Sensors

### 1. Infrared (object detection, not distance measurement)
- Emits a signal from the infrared spectrum
- **Only indoors**, due to sunlight outdoors
- Usually modulated with a low frequency (e.g. 100 Hz) to distinguish it from other infrared sources in the vicinity
- **The intensity of the reflected light** is read
- The intensity of the reflected light is inversely proportional to the squared distance
- **Cons:**
  - One has to assume that all objects are equal in color and surface; otherwise white or black plastic gives different sensor outputs
  - Black is invisible
  - Object detection, but no exact distance measurement
  - Usually small distances (50 to 100 cm) ![](https://i.imgur.com/gcjoIdm.png)

### 2. Ultrasound
- Requires a medium like air or water
- Short-distance measurement is possible
- A frequency of 50 kHz amounts to a wavelength of ≈ 6.872 mm
- **Time of Flight** principle
- #### Piezoelectric ultrasonic transducer ![piezo](https://i.imgur.com/hurWT8P.png)
  - Minimum measurable distance ![min](https://i.imgur.com/12IeeXo.png)
  - Maximum measurable distance ![max](https://i.imgur.com/34M9niB.png)
  - To produce ultrasonic waves, the movement of a surface is required, leading to a compression or expansion of the medium
  - An applied voltage causes a bending of the piezoelectric element
  - Piezoelectricity is reversible, therefore incoming ultrasonic waves produce an output voltage
- **Cons:**
  - Opening angle up to 30 degrees wide
  - Reflection of ultrasonic waves from smooth (and flat) surfaces is well-defined, but rough or round surfaces give a diffused reflection of the ultrasound wave
  - From a smooth object hit at a flat angle, no echo will reach the sensor
  - Mirror and total reflections cause flawed measurements
  - If several sonar sensors operate at once, crosstalk may occur
  - The measurement depends on the temperature of the medium (e.g. a difference of 16 °C will cause a measurement error of 30 cm over a distance of 10 m)
  - Different errors can occur when an incorrect beam (or beams) is echoed back to the sensor:
    - Ideally: ![](https://i.imgur.com/yA2k2Ol.png)
    - Faulty reflections: ![](https://i.imgur.com/KdLfIkI.png) ![](https://i.imgur.com/CSBenGu.png) ![](https://i.imgur.com/GK4jWYL.png)

### 3. Laser Range Finder
- Measures distance, speed and acceleration of recognized objects
- A short light impulse is emitted
- **Time of Flight** principle
- Operates between 0.1 Hz and 100 Hz
- Typical angular resolution of 0.25°, 0.5° or 1°
- The range depends on the remission (reflectivity) of the object and the transmitting power
- **Moon reflectors** can be used in space ![](https://i.imgur.com/dbLZEna.png)

### 4. Human Stereo Vision
- Works up to about 2 m
- Deep CNNs can learn to "cheat" like a human: shading, prior knowledge, borders, etc.
- Phase shift difference principle
- The depth of the scene can be calculated as follows: ![](https://i.imgur.com/uBgO8fG.png) ![](https://i.imgur.com/eAKDDUd.png)
- #### Problems with stereo cameras
  - The **correspondence problem**: both cameras capture a different view of the scene
  - Role of the baseline (distance between the cameras):
    1. Small baseline: large depths cannot be measured
    2. Large baseline: small depths cannot be measured, and the correspondence error worsens, as rays from both cameras have more area in which to collide in wrong places
  - #### Solution
    - Use multiple cameras, but that can be costly ![](https://i.imgur.com/vFmzXGn.png)

### 5. Stereo Audio
- Similar to stereo cameras
- Uses 2 or more microphones
- Application: finding a human speaker
- Problem: sound correspondence

### 6. Depth Sensors
- Principles:
  1. Structured light
     - simplifies the correspondence problem
  2. Time of Flight
     - emits infrared and measures the phase shift (yes, still called ToF cameras for some reason) ![](https://i.imgur.com/r7TzQlL.png) ![](https://i.imgur.com/MX7DtPv.png)
- **In reality**: integration of different cameras working together, sometimes with depth processing onboard

### 7. Radio Landmark Tracking
- Uses radio signals
- Uses satellites (GPS) and/or WiFi etc.
- Measures the absolute position relative to the sensors
- Uses **triangulation** for the relative position
- Acceleration and velocity can be measured over time
- Accuracy depends on:
  1. atmosphere
  2. satellite coverage
  3. signal blockage, etc.
- Overall good accuracy
- Not good indoors
- Sometimes buildings, storms, etc. cause signal problems ![](https://i.imgur.com/quwD3e6.png)

# 7. Scan Processing

### Scan filtering
- Data filtering
  - Smoothing
  - Data reduction
- Feature extraction
  - Line segments
  - Corners
  - ...
- Clustering/classification

Scan points come in polar coordinates $(\alpha_i, \gamma_i)^T$. With measuring location $p = (x, y, \theta)^T$, a scan point $m_i = (\alpha_i, \gamma_i)^T$ can be converted to Cartesian coordinates. ![](https://i.imgur.com/e63pvtm.png)

- Issue: large amounts of data, noise/outliers, etc.
- Basic solutions:
  - Median filter (see the sketch after this list):
    - Definition: The median filter recognizes outliers and replaces them with a more suitable measurement
    - Disadvantage: corners are rounded ![](https://i.imgur.com/48QqrJl.png) ![](https://i.imgur.com/ZxmgclS.png)
  - Reduction filter:
    - Definition: The reduction filter reduces point clusters to a single point
    - Advantages:
      - No significant information loss
      - Shorter duration of scan post-processing ($O(n)$)
      - The result is a more uniform distribution of the points
    - Disadvantages:
      - Feature extraction is not as easy anymore
      - Possibly too few points for a feature (e.g. a corner)
    - Example: cluster the points with cluster radius = 2 -> each cluster is replaced by the center of gravity of the corresponding points ![](https://i.imgur.com/yYeBnJF.png)
  - Angle reduction filter:
    - Definition: The angle reduction filter resembles the reduction filter
    - Scan points having a similar measurement angle are grouped and replaced by the point with the median distance
    - Used for an even reduction of scan data that has a high angular resolution
    - Time complexity $O(n)$ ![](https://i.imgur.com/eZMaAE4.png)
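A minimal sketch of a sliding-window median filter over the range values of a scan; the window size is a free parameter chosen here for illustration:

```python
import numpy as np

def median_filter(ranges, window=5):
    """Replace each range reading by the median of its neighborhood (outlier rejection)."""
    half = window // 2
    padded = np.pad(ranges, half, mode="edge")   # repeat the border values at the ends
    return np.array([np.median(padded[i:i + window]) for i in range(len(ranges))])

scan = np.array([2.0, 2.1, 2.0, 9.5, 2.2, 2.1, 2.0])  # 9.5 is an outlier
print(median_filter(scan))                            # the spike is suppressed
```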
### Feature extraction
Extraction of features such as lines or corners instead of low-level processing of complete scans.
- Line detection:
  - Divide-and-conquer regression
  - Hough transform
  - RANSAC ![](https://i.imgur.com/r6EjLTw.png)
  - ...
- Divide-and-conquer regression:
  - Initially, a regression line is fitted to the points
  - If the deviation is too big, the set of points is divided
  - The dividing point is the one with the largest distance to the line
  - Critical parameters:
    - Minimum number of points to form a line
    - Maximum allowed deviation
  - Time complexity $O(\log n)$ ![](https://i.imgur.com/dlgeK1b.png) Result: ![](https://i.imgur.com/LpgCFRT.png)
- Hough transform: a feature extraction approach applied in digital image processing (see the sketch at the end of this chapter)
  - Recognition of lines, circles, ...
  - Points in the image are mapped onto a parameter space
  - Suitable parameters:
    - Line: slope and y-intercept
    - Circle: radius and center
  - Straight line recognition:
    - Parameters: slope and y-intercept
    - Disadvantage: straight lines with an infinite slope cannot be mapped
    - Better: a straight line in Hessian normal form ![](https://i.imgur.com/omovPxD.png) ![](https://i.imgur.com/kun2zzV.png) ![](https://i.imgur.com/ILCFeBi.png) ![](https://i.imgur.com/MnrVMNK.png) ![](https://i.imgur.com/SdMbbLq.png) ![](https://i.imgur.com/QYhGQ9w.png) ![](https://i.imgur.com/Odu9Zj6.png) ![](https://i.imgur.com/kbF0Cj0.png)
  - Each relevant scan data point is tested with several value pairs from the parameter space
  - Points of intersection in parameter space represent potential parameter candidates for the straight line found in the scan data
  - If multiple candidates exist, clusters are formed
  - The θ-r point representing the parameters of the straight line is determined as the center of gravity

### Scan matching
- Definition: In mobile robotics, scan data obtained with a laser rangefinder is frequently used to determine the location of a robot on a map
- Raw scan data is transformed into a set of features (e.g. lines)
- The a priori available map is searched for overlap and alignment with the extracted set of features → e.g. ICL (Iterative Closest Line)
- The output is a transformation that allows determining the location the scan was taken at (best alignment)
- Scan matching is an optimization problem that suffers from having many local minima
- Scan matching can be performed using:
  - Scan data and map data
  - Scan data and scan data (e.g. the previous scan)
  - Map data and map data
- Procedure ![](https://i.imgur.com/nxrHKBG.png)
- ICP (Iterative Closest Point) ![](https://i.imgur.com/ZGlUTs5.png) ![](https://i.imgur.com/oBhor5Y.png)
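A minimal sketch of Hough voting for lines in Hessian normal form, $r = x\cos\theta + y\sin\theta$; the discretization parameters are chosen arbitrarily for illustration:

```python
import numpy as np

def hough_lines(points, n_theta=180, r_max=10.0, n_r=200):
    """Accumulate votes in (theta, r) space; peaks correspond to dominant lines."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=int)
    for x, y in points:
        # Hessian normal form: r = x*cos(theta) + y*sin(theta)
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        valid = (r_idx >= 0) & (r_idx < n_r)
        acc[np.arange(n_theta)[valid], r_idx[valid]] += 1
    t_i, r_i = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[t_i], (r_i / (n_r - 1)) * 2 * r_max - r_max  # best (theta, r)

pts = [(x, 2.0) for x in np.linspace(-3, 3, 20)]  # points on the horizontal line y = 2
theta, r = hough_lines(pts)
print(theta, r)  # approx. theta = pi/2, r = 2
```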
# 8. State Estimation

## Basics
The first parts of the lecture are an introduction to Bayesian statistics and the normal distribution. I won't go into detail, as that is also part of the Research Methods course.

- There are two types of state estimation:
  - Proprioception (estimating the state of the robot itself)
  - Exteroception (estimating the state around the robot)
- State estimation generally depends on three factors:
  - All previous states
  - All previous measurements
  - All previous control commands
- Definition of a "complete" state:
  - Knowledge of past states does not improve the prediction accuracy of the next state
- Hidden Markov Models (HMM) or Dynamic Bayesian Networks (DBN) can be used to describe state transition probabilities
- The knowledge of a system is called its **belief**
  - The true state is generally **not equal** to the belief
  - Defined as the posterior probability of the state variable after measurement
  - The belief before incorporating all measurements is called the **prediction**
- A basic algorithm to calculate the belief is the **Bayes filter**
  - Uses the last belief, the current measurement data and the control command data to determine the next belief
  - Due to its dependency on the last belief, it is a recursive algorithm (needs initialization with an initial belief, also called the "prior")
  - Pseudo-code (see the sketch below):
    - For all estimatable belief states:
      - Calculate the integral (sum) of the product of the prior for the last state and the probability of switching to the currently estimated state under the assumption of the given control command variables (**prediction step**)
      - Correct the calculated probability by multiplying it with the probability of observing the given sensor measurements in that state (**correction step**)
  - Tips for choosing the initial distribution:
    - Information about a certain point can be represented by a (Gaussian) bell curve
    - If you do not have any pre-knowledge, initialize the Bayes filter with a uniform distribution
  - In the presented form, the Bayes filter algorithm can only be implemented for very simple problems
- The assumption of a state being "complete" (i.e. not dependent on previous or future data) is called the **Markov assumption**. Bayes filters make this assumption -- while in practice this assumption is usually not true.
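A minimal sketch of the discrete Bayes filter described by the pseudo-code above; the motion and measurement models are made-up toy distributions over a 1-D grid of states:

```python
import numpy as np

def bayes_filter(belief, p_transition, p_measurement):
    """One Bayes filter step over a discrete state space.

    belief:        prior over states, shape (n,)
    p_transition:  p(x_t | u_t, x_{t-1}), shape (n, n) for a fixed control u_t
    p_measurement: p(z_t | x_t), shape (n,) for the actual measurement z_t
    """
    prediction = p_transition @ belief        # prediction step (sum over previous states)
    posterior = p_measurement * prediction    # correction step
    return posterior / posterior.sum()        # normalize

n = 5
belief = np.full(n, 1.0 / n)                     # uniform prior: no pre-knowledge
move_right = np.roll(np.eye(n), 1, axis=0)       # toy motion model: control shifts state by +1
z_model = np.array([0.05, 0.05, 0.7, 0.1, 0.1])  # toy likelihood of the observed measurement
print(bayes_filter(belief, move_right, z_model))
```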
## Robot Localization
- There are two kinds of robot localization problems:
  - Position tracking (initial pose is known; track the position after control commands)
  - Global localization (initial pose unknown)
    - Variant: the kidnapped robot problem (the robot gets randomly picked up and moved to another position)
- There are two different approaches to robot localization:
  - Position based (using a representation of space, e.g., a point cloud)
  - Feature based (using a graph-like structure that stores location features (e.g., doors) and their relative position to each other)
- Algorithmic approaches:
  - Markov localization
    - Based on the Bayes filter
    - Requires a map of the environment
    - Applicable to both localization problems
  - Kalman filter localization
    - Belief represented by a uni-modal Gaussian
    - Efficiently integrates multiple sensors into the localization
    - Only applicable to position tracking
    - Requires a map of uniquely identifiable features
    - Assumes that measurements are a linear function of the state and that the next state is a linear function of the current state (usually not true in robotics)
    - The Extended Kalman filter (EKF) relaxes this assumption by approximating non-linear functions using Jacobian matrices
      - The EKF is good for efficient multi-sensor fusion
      - It can be sub-optimal, as beliefs are approximated, and can diverge if non-linearities are large
    - The Unscented Kalman filter (UKF) is better at approximating non-linear problems while not needing Jacobians
  - Grid localization
    - Requires discretization of the search space into an n-dimensional grid
    - Metric grid decomposition represents space by grid cells of equal size (corresponding to a fixed real-world size, e.g., 15x15 cm)
    - Topological grid decomposition represents space by a (usually) coarse grid of significant locations/features on the map
    - The state probabilities are usually determined using a histogram filter
  - Particle filter localization
    - Belief represented by particles (discrete samples of the state probability distribution)
    - Applicable to both localization problems
    - Usually done using a Monte Carlo filter
    - Resampling focuses the samples on the "important" regions of the search space by replacing low-weighted samples (see the sketch after this list)
    - The sample size can be changed online (to reduce computational complexity or increase accuracy)
      - Likelihood-based adaptation -> if measurements agree with most particles, fewer particles are required
      - KLD-sampling -> if the particle distribution changes by a large amount, increasing the particle count may be required for higher accuracy and faster convergence
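A minimal sketch of the resampling step of a particle filter (systematic, low-variance resampling); the particles are simplified to 1-D states and the weights come from an invented measurement likelihood:

```python
import numpy as np

def low_variance_resample(particles, weights):
    """Draw a new particle set; high-weight particles are duplicated, low-weight ones dropped."""
    n = len(particles)
    weights = weights / weights.sum()
    positions = (np.random.uniform() + np.arange(n)) / n   # one random offset, then evenly spaced
    cumulative = np.cumsum(weights)
    indices = np.searchsorted(cumulative, positions)
    return particles[indices]

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 1.0, size=100)           # hypothetical 1-D robot positions
weights = np.exp(-0.5 * (particles - 0.8) ** 2)      # measurement likelihood peaked at 0.8
resampled = low_variance_resample(particles, weights)
print(resampled.mean())                              # probability mass concentrates near 0.8
```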
# 9. Decision Making

First of all: there is no silver bullet! All presented approaches have upsides and downsides.

## Basics
Four basic approaches to robot control:
- Deliberative
  - "Think, then act" -> highly sequential
  - Can plan into the future by predicting the results of its actions
  - In most real-life scenarios impossible to use because of noise and unforeseen events
  - The robot is not reactive because of the long planning duration
  - Example: planning the complete path of a wheeled robot using a model of the world
- Reactive
  - "Don't think, (re)act" -> very fast reaction time due to no planning
  - Actions are directly coupled to sensory information -- much like insects work
  - Not usable if an internal world model is required
- Hybrid
  - "Think and act concurrently"
  - The deliberative part plans long-term goals with a low update rate
  - The reactive part handles current changes with a high update rate and overrides the deliberative part
  - Example: path planning with an additional reactive system for obstacle avoidance
- Behavior based
  - "Think the way you act"
  - Multiple reacting and interacting modules create the robot's behavior
  - No centralized world model; each module has its own
## Approaches
- Symbolic Reasoning (1950s)
  - Complex, high-level concepts are decomposed into basic, low-level actions
  - The actions the robot chooses are easily explainable
  - Does not need much data, but lots of expert knowledge
  - Requires collaborating knowledge-builders to avoid duplicate building blocks in order to be useful
  - Can be used via STRIPS (Stanford Research Institute Problem Solver) or PDDL (Planning Domain Definition Language)
- Fuzzy Logic (1960s)
  - Uses concepts similar to human language: fuzzy, unspecific terms ("big", "beautiful", "strong", ...)
  - Useful for mapping sensor readings onto actions, especially for noisy sensor data
  - A rule base maps concrete sensor readings to fuzzy terms and fuzzy terms to concrete actions (formulated as if-then rules)
  - Inference process:
    - The fuzzifier pre-processes the sensor data into the fuzzy domain
    - The inference engine takes the fuzzified input data and maps it to fuzzy output terms
    - The defuzzifier maps the fuzzy output terms into concrete actions, interpolating between multiple fuzzy terms if overlaps occur (weighted average)
  - Affected by the curse of dimensionality: the number of required rules grows exponentially with the number of input sensors
- Decision Trees (1960s)
  - Basically a tree-like structure of nested if-else decisions
  - Internal nodes are the if-else decisions
  - Leaf nodes are the actions to take
  - Easy to understand, implement and debug for humans
  - But leads to repeated code and is thus not very maintainable
- Finite state machines (1960s)
  - Consist of action states and explicitly modeled conditional state transitions
  - An easy-to-understand concept for humans
  - Becomes hard to maintain for bigger problems
- Hierarchical finite state machines (1980s)
  - Extension of FSMs: allows using full FSMs in higher-layer FSMs, i.e., using an FSM as a subprocedure in another FSM
  - Better modularity than plain FSMs
  - Still not as maintainable as other approaches
- Subsumption (1980s)
  - Multiple explicitly modeled actions based on conditional statements that have a priority ordering
  - Higher-priority actions subsume ("dominate", "overrule") lower-priority actions when their condition becomes true
  - Not stateful
  - Useful for immediate emergency reactions to certain sensor readings, e.g., an emergency stop for a parking routine when the distance to other cars is less than 5 cm
  - Not very scalable to highly complex logic requirements
  - Not usable for logic that requires knowledge about the time domain (which implies a requirement to store state)
- Behavior trees (2000s); see the sketch after this list
  - Consist of 6.5 different node types:
    - Sequence nodes execute all child nodes in the given order; all have to succeed for a success
    - Fallback nodes try to execute at least one child node in the given order; at least one has to succeed for a success
    - Parallel nodes execute all child nodes in parallel; a success ratio is given which needs to be at least matched for a node success
    - Memory nodes remember successful execution of their child nodes, skipping the successful ones on re-invocation (can be exchanged with a Fallback node that has a Condition as its first child)
    - Condition nodes return a success if their condition is true
    - Action nodes return a success when their process completes successfully
    - (Custom nodes) may implement custom behavior
  - The control flow is strictly controlled by the nodes returning "SUCCESS" or "FAILURE"
    - Only on success may the control flow proceed to the next nodes according to the tree's structure
  - The tree is evaluated using "ticks" that are basically individual timesteps of the robot's control loop
    - The complete execution of the tree should (usually) not take longer than the duration of one tick
    - Otherwise an event congestion on the robot's CPU may occur
  - Good for modularization, code reuse and maintainability; widely used in the game industry
  - Less intuitive for humans, mostly overkill for small scenarios
- Dynamic stack decider (2010s)
  - Co-created by Marc, so clearly the best approach of all!! </irony>
  - Decision-tree-like structure, consists of 2 types of nodes:
    - Internal decision nodes model the decision logic
      - Can have multiple child nodes
      - Decision nodes return string literals used to determine the right child node
      - Not time-dependent
    - Leaf nodes model actions to take
      - Time-dependent (valid until their preconditions are invalid)
  - The "stack" of decisions is not re-evaluated until a precondition changes
  - Makes for reusable and maintainable code; has clearly defined state and preconditions
  - No parallel execution; not (yet) widely used in the industry
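A minimal sketch of the behavior-tree tick logic described above, with only Sequence, Fallback, Condition and Action nodes and omitting the RUNNING state for brevity; the patrol/dock behavior is an invented example:

```python
from enum import Enum

class Status(Enum):
    SUCCESS, FAILURE = 1, 2

class Condition:
    def __init__(self, predicate): self.predicate = predicate
    def tick(self): return Status.SUCCESS if self.predicate() else Status.FAILURE

class Action:
    def __init__(self, fn): self.fn = fn
    def tick(self): return Status.SUCCESS if self.fn() else Status.FAILURE

class Sequence:
    """All children must succeed, in order."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS

class Fallback:
    """Try children in order; succeed as soon as one succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == Status.SUCCESS:
                return Status.SUCCESS
        return Status.FAILURE

battery_low = lambda: False
tree = Fallback(
    Sequence(Condition(battery_low), Action(lambda: print("docking") or True)),
    Action(lambda: print("patrolling") or True),
)
tree.tick()  # one tick of the control loop -> prints "patrolling"
```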
## Summary
- No single perfect solution
- Combining multiple approaches can lead to a good and maintainable solution, getting the best of multiple approaches

# 10. ML
- Find the best $\theta$ of $f(\theta)$ to minimize the error function $E_f(\theta)$:
  $$ \theta^\star = \operatorname{argmin}_{\theta}(E_f(\theta)) $$
- Solution: gradient descent (see the sketch below)
  - smooth gradients allow better optimization
- Since most commonsense concepts are hard to formulate in function notation, it is best to use:
  - ML -> for what is hard to program
  - Traditional methods -> for what is hard to learn
- Learnable examples:
  - understand
    - label classification of sensor stimuli
  - predict
    - probability of success
    - simulation dynamics
      - friction between two entities
  - imitate
    - given a similar parameter set, get a similar result, as with a VAE
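A minimal sketch of the gradient-descent solution to the $\operatorname{argmin}$ problem above, using a toy quadratic error function of my own choosing:

```python
def E(theta):
    """Toy error function with its minimum at theta = 3."""
    return (theta - 3.0) ** 2

def grad_E(theta):
    return 2.0 * (theta - 3.0)    # analytic gradient of the toy error

theta = 0.0                       # initial guess
lr = 0.1                          # learning rate
for _ in range(100):
    theta -= lr * grad_E(theta)   # step against the gradient
print(theta, E(theta))            # converges to theta* = 3
```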
- Reinforcement learning basics
  - $Q(s,a)$: Q-function, mapping a state $s$ and an action $a$ to the expected return (Q-learning)
  - $V(s)$: value function when starting from a specific state $s$
  - $A(s,a) = Q(s,a) - V(s)$: advantage function ([Reference](https://arxiv.org/abs/1511.06581))
  - $\pi(s)$: policy function, mapping a state to an action
- Image classification
  - sensor -> camera -> image stream -> classification -> result like "table" -> localization (prior knowledge like AprilTags, or segmentation; be aware of the capture bias, so the object is not assumed to be in the image center) -> action
- Image localization
  - predict X and Y at the end using convolutional layers + fully connected layers ![](https://i.imgur.com/oiXj1ES.png)
  - Bounding boxes
    - no object shape
    - approaches:
      - R-CNN
      - Fast R-CNN
      - Faster R-CNN
      - YOLOv1 -> YOLOv5
- Segmentation
  - upsample the conv-net output ![](https://i.imgur.com/zhdYxzH.png)
  - masks of regions
  - no orientation of the object
  - no coordinates of the object
  - 6D data is needed for interaction
- Pose estimation
  - No silver bullet
  - visual learning -> feature matching -> pose tracking -> local model fitting
- Tactile classification
  - Gather data (tactile array) -> image with amplified channel -> crop center -> FFT -> classification -> label such as {stable, translation, rotation}
  - Data acquisition
    - put an object between the fingers and define slippage -> relax the control force until slippage occurs -> orthogonal sensor
- Tactile simulation
  - Gather data (contact pin with AprilTags, fiducial-based): [contact point, 3 force vectors, sensor temperature]
  - Model ![](https://i.imgur.com/DuYK9xz.png)
  - Output: 23 channels for simulation
- Grasping objects
  - Where should the object be grasped?
  - What gripper does the robot have?
  - How stable is the resulting grasp?
  - Does the object have handle-like structures?
  - Why pick up this object instead of others?
  - If all objects are modeled -> pose estimation
  - Commonly a 2-finger parallel gripper is used
  - Task 1: grasp pose generator
    - generate good grasp points
    - based on an RGB-D camera ![](https://i.imgur.com/Vaklxw5.png)
    - Method
      - sample 6-DoF grasps so that the gripper encloses at least one point and is not in collision with the point cloud
      - Input: 15 constructed images
      - Output: choose the best-ranking samples for execution
    - Data acquisition
      - generate random grasp candidates and simulate
  - Task 2: Supersizing Self-Supervision
    - predict 3-DoF grasp poses $(x, y, \phi)$
    - gripper above -> get the orientation angle -> try to pick, check for success
    - AlexNet + retrain the last layer ![](https://i.imgur.com/p18eapD.png)
    - Output: 18 possible angles
    - Try to use the best strategy every time online and gather more data
- Inverse kinematics
  - Kinematics: describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move
  - Inverse kinematics: given the robot's kinematic design, find the configuration of the joints ![](https://i.imgur.com/Yj0O7ng.png)
  - Features:
    - fast solutions exist
    - data is easy to generate
  - Difficulties:
    - high accuracy required
    - multiple solutions exist
  - Input: joint states
  - Output: 6-DoF (degrees of freedom) pose
  - Online optimization has better performance
- Learning hand-eye coordination
  - pick objects from a cluttered container
  - Input:
    - o: 2 RGB images of the current and first state
    - a: 3D Cartesian force vector & a twist $\varphi$
  - Output:
    - success rate $l$ for grasping
- End-to-end visuomotor control
  - move the red block into the blue basket ![](https://i.imgur.com/dWr8Zsp.png)
  - Input: the 4 last RGB images and the current joint angles
  - Output:
    - 6 joint velocities
    - softmax gripper action
    - auxiliary outputs
  - Uses a sim2real approach with domain randomization
- Reinforcement learning
  - More investment, better results
  - common environments:
    - OpenAI Gym
    - RoboSchool
    - PyBullet
  - baselines:
    - OpenAI baselines
    - INRIA stable_baselines
  - Vanilla PPO (Proximal Policy Optimization) ![](https://i.imgur.com/6DyN1n4.png)
    - learned joint efforts, velocity, acceleration, balance, orientation
    - Policy: proprioceptive (self-movement and body position) + exteroceptive (external) observations
    - Reward:
      - stay on track
      - forward velocity
      - penalization for torques (dead)
    - the policy update is clipped every time it is used
  - DeepMimic ![](https://i.imgur.com/cbeTR3V.png)
  - SFV ![](https://i.imgur.com/NO2yA0v.png)
  - DeepQuintic (Marc's work)
    - only uses the path and does the optimization for the robot, which is different from a human (e.g. soft palm)
  - Central Pattern Generators
    - our body performs regular motions by bypassing the brain ![](https://i.imgur.com/PGwDXSW.png)
  - Evolutionary approach
    - evaluate individuals based on a fitness function
    - pick the best
    - recombination / mutation
  - Simulation downsides
    - mass, size, inertia (stay the same)
    - sensor noise is non-Gaussian
    - actuator properties not correct
    - soft bodies
    - light
    - background
    - physics problems
      - continuous system
      - performance bounds
      - glitches
  - Bridging the reality gap
    - improving simulation accuracy
    - adding sensor noise
    - domain randomization (most used)
  - Learning Dexterity (task learning) ![](https://i.imgur.com/W1CoH9G.png)
    - PPO
  - Learning dynamic skills ![](https://i.imgur.com/VQmknSM.png)
    - MLP
  - Questions (not sure):
    - imitate functions from different inputs (reparameterization)
    - compute on RGB images instead of SE(3)

# 11. Ethics
(X)

# 12. Exam example

Generic outline:
1. What does the robot need to do?
   - Actuators
     - Wheels
     - Gripper
     - Shadow hands
2. How much budget do we have?
   - Medical robot or military robot?
   - Yearly salary of a person vs. the lifetime of the robot
3. Describe the solution idea in general -> (states)
4. Describe the required sensors
   - Odometry
   - Vision
   - Tactile
   - Force
   - Distance
5. Processing unit, related to the budget (-> go ML?)
   - FPGA
   - microcontroller
   - classic desktop system
6. Task planning (is it necessary, how to do it)
   - path planning (see the A* sketch after this outline)
     - MoveIt library in ROS
   - What kind of approach suits us?
     - Deliberative
     - Reactive
     - Hybrid
     - Behavior based
   - Decision making structure
     - State machine
     - Hierarchical finite state machines
     - Dynamic stack decider
     - Behavior tree
7. Discuss ethics
8. Conclude with a description
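For the path-planning point in the outline, here is a minimal grid A* sketch (4-connected grid, Manhattan heuristic; the grid and coordinates are invented for illustration):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid; cells with 1 are obstacles."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    visited = set()
    while open_set:
        f, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        if pos in visited:
            continue
        visited.add(pos)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in visited):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the obstacle row
```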
Example 1. Topic: service robot - waiter
- Tasks:
  - Know the environment -> scan filtering
  - Receive orders via an app
  - Receive drinks and dishes
  - Bring them to the customer
  - Avoid obstacles
  - Bring drinks to the customer's table
  - Go back to charge automatically
  - Stop in case of accidents
- Budget: limited (be economical)
- Sensors we need (why and how they work):
  - WiFi, or just put an iPad on top for human interaction
  - Vision: odometry
    - Dead reckoning (odometry based on intrinsic sensors):
      - kidnapped robot problem or drifting
    - depth sensor pointing at the floor
  - Bumped by a customer:
    - bump sensor
    - distance sensor
  - Know where we are:
    - Wheel-based odometry:
      - Hall effect sensor or encoder to measure wheel rotation
      - Forward kinematics to calculate the position
      - -> combine with the location from recognized patterns using [Monte Carlo estimation](#Robot-Localization)
    - AprilTags
      - Transforms (TF)
      - Coordinate system of the camera (z points inward)
    - Laser scanner -> detect the environment
      - Laser range finder rotating 180 degrees -> active and extrinsic sensor
    - Pattern on the floor
    - Bluetooth or RFID sensor to go back to the charger
  - Know whether there are dishes on the robot (taken away from the cook or by customers):
    - force or pressure sensor below the plate
  - Stability:
    - Wheels -> easy -> stabilizing wheel -> helps on slippery ground
    - IMU -> hard
- Path planning and task planning (why and how the structure works, incorporating scan processing):
  - Subsumption
    - no state
    - can react fast
  - Finite state machine
    - hard to maintain and eventually unusable -> behavior tree
  - Hierarchical state machine
  - Behavior tree & dynamic stack decider & decision tree
  - Emergency: if it bumps into anything, it stops
  - Path planning -> fixed path -> A* is more efficient than Dijkstra's algorithm -> D*
- Risk management:
  - Calculate the error rate for the robots
  - Depending on this, go for the high-accuracy solution or maybe the cheaper one

Example 2. Topic: robot on a spaceship
- Laser range finder

Example 3. Topic: submarine robot

Example 4. Topic: industrial LEGO building robot

Example 5. Topic: soccer-playing robot

Example 6. Topic: drone
- Kalman filter
- Odometry

Example 7. Topic: cooking robot (humanoid chef robot) & coffee machine

Example 8. Topic: librarian robot
- Search for a word/title of the book
- Localize where the shelf is
- Localize which books are on this shelf
- Shadow hands for manipulation (HARD)
- Give the book back to the human

Example 9. Topic: cleaning robot

Example 10. Topic: stunt-double robot

Example: LEGO Building Robot
- At a table
- Has 2 arms
- Only regular (fixed set of) LEGO blocks
- No budget limit!
- All required blocks are within gripping range, sorted in bins
  - fixed positions
- Need to consider the obstacles nearby
- No wheels; only arms and a camera are used
- All LEGOs are face-up
- There are only red, yellow, green and blue colors
- Target state: stack in order, with the red rectangle on the bottom, the yellow block in the middle and the green block on the top
- Initial state: all the blocks are scattered on the table
