# Creating my own Beowulf cluster

###### tags: `實驗與紀錄`

:::danger
:crying_cat_face: :crying_cat_face: :crying_cat_face: This project has been temporarily terminated due to funding problems. :crying_cat_face: :crying_cat_face: :crying_cat_face:
:::

### Contents
1. Introduction
2. Hardware and environment
3. Theoretical estimating and calculation
4. Software setup
5. Benchmarking and test
6. Epilogue

:::warning
:warning: DISCLAIMER :warning:
This Beowulf cluster was built for personal and experimental purposes only and is not intended for practical or commercial use. Its performance may not be suitable for high-performance computing tasks, and the actual performance of the system may vary depending on the specific workload and software optimization. As the builder of this system, I do not assume any liability for damages or losses resulting from any imitation or adaptation of this project.
:::

----------------------------------------------------------

### Introduction

Welcome to my cluster project! This note documents my exploration of building a Beowulf cluster: the hardware, the theory behind its expected performance, and how the pieces fit together. Throughout the project I will give an overview of each part of the cluster, explain how the parts interconnect, and offer insights and recommendations for future development. By the end, you should have a clearer understanding of these topics and be able to make informed decisions about a similar build.

Also, as a poor student, I cannot afford high-end CPUs for my hobby and research, so I built this cluster as an alternative for my other projects, such as simulation and mathematical computing.

Introducing my Beowulf cluster, a high-performance computing solution designed to handle complex workloads and demanding applications. Built with six nodes, each featuring an Intel Core i5-760 processor and 4GB of DDR3 RAM, and networked over gigabit Ethernet, the cluster provides a capable computing platform for a wide range of tasks. Whether the job is processing large datasets, running simulations, or other compute-intensive work, this cluster offers the performance and scalability I need. With a combined TDP of approximately 440W, it is a "cost-effective" solution for someone like me who wants to boost their computing capability. The design also supports further upgrades: more nodes can be added on demand, which provides extra scalability. As for storage, I decided to install an individual disk in every node rather than attaching them to a NAS, to avoid extra load on the Ethernet and a possible performance bottleneck.

--------------------------------------------------------

### Hardware and Environment

The cluster is made up of six slave nodes, each featuring an Intel Core i5-760 processor with four cores running at 2.8GHz. Each node is equipped with 4GB of DDR3 RAM, for a total of 24GB of RAM across the cluster. The nodes are connected by gigabit Ethernet, providing fast and reliable network connectivity. For storage, each node uses its own local disk rather than a shared NAS (see the Introduction), giving roughly 2TB of disk capacity across the cluster. With this hardware configuration, the cluster provides a cost-effective and scalable high-performance computing platform for accelerating my workloads.
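To keep the whole layout in one place, here is a tiny inventory sketch of the cluster described above. The hostnames and 192.168.1.x addresses are purely illustrative placeholders (nothing here comes from the actual build); the core and RAM figures simply restate the specs above.

```python
# Illustrative inventory of the six-node cluster described above.
# Hostnames and IP addresses are placeholders, not the real configuration.

NODES = {f"node{i}": f"192.168.1.{10 + i}" for i in range(1, 7)}

CORES_PER_NODE = 4      # Intel Core i5-760: four cores
RAM_GB_PER_NODE = 4     # DDR3 per node, as listed above

total_cores = CORES_PER_NODE * len(NODES)
total_ram_gb = RAM_GB_PER_NODE * len(NODES)

for host, ip in NODES.items():
    print(f"{host:<6} {ip}")
print(f"Total: {len(NODES)} nodes, {total_cores} cores, {total_ram_gb} GB RAM")
```

Running it prints a one-line-per-node map plus the cluster totals (24 cores, 24GB of RAM), which is handy when filling in the per-node detail boxes below.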
#### CPU in every computing node

:::info
### Intel Core i5-760
Number of cores: ==4==
Clock speed (Turbo Boosted): ==2.8 (3.3) GHz==
Hyper-Threading: ==No==
L3 cache: ==8MB==
Manufacturing process: ==45nm==
Socket compatibility: ==LGA 1156==
Memory support: ==DDR3 up to 1333 MHz==
Thermal Design Power (TDP): ==95 watts==
<font color="#f00">(Please note that the CPU is the same in every computing node.)</font>
:::

#### Details for every single node

:::success
### Node 1
Power Supply:
Motherboard:
Hard drive:
RAM module:
:::
:::success
### Node 2
Power Supply:
Motherboard:
Hard drive:
RAM module:
:::
:::success
### Node 3
Power Supply:
Motherboard:
Hard drive:
RAM module:
:::
:::success
### Node 4
Power Supply:
Motherboard:
Hard drive:
RAM module:
:::
:::success
### Node 5
Power Supply:
Motherboard:
Hard drive:
RAM module:
:::
:::success
### Node 6
Power Supply:
Motherboard:
Hard drive:
RAM module:
:::

#### Network Utilities (some possible choices)

There are actually many ways to connect the nodes together; here are three options I might be able to build:

A. gigabit Ethernet
B. fibre-based 4Gb network
C. 10/100M Fast (is it?) Ethernet

The details will be explained in the estimating chapter, and the hardware below is colour-coded like this:

:::danger
Red for fibre network hardware
:::
:::warning
Yellow for gigabit Ethernet hardware
:::
:::success
Green for 10/100 Ethernet hardware
:::

:::warning
### Switch
<font color="#33F">Mercusys MS108G 8-port 10/100/1000Mbps gigabit hub</font>
with Auto Negotiation & Auto MDI/MDIX
IEEE 802.3, IEEE 802.3u, IEEE 802.3x
CSMA/CD supported
10/100/1000Mbps half duplex
20/200/2000Mbps full duplex
Impulse 64KB
Jumbo frame 9KB

### Wires and cables
Just some CAT6 copper cables with RJ-45 connectors, nothing more to explain :no_good:

### Network interface cards
Depends on what each motherboard equips on board; might add an external one lol
:::

:::danger
### Fibre host bus adapter (HBA)
Qlogic QLE2460 single-port, 4Gbps Fibre Channel-to-PCI Express host bus adapter. FC or LC connector
##### Data rate
- 4/2/1Gbps auto-negotiation (4.2480 / 2.1240 / 1.0625 Gbps)
##### Performance
- 150,000 IOPS
##### Topology
- Point-to-point (N_Port), arbitrated loop (NL_Port), switched fabric (N_Port)
##### Logins
- Support for F_Port and FL_Port login: 2,048 concurrent logins and 2,048 active exchanges
##### Class of service
- Class 2 and 3
##### Protocols
- FCP (SCSI-FCP), FC-TAPE (FCP-2)

Cable can travel about 70m in 4G mode
:::

#### Software

------------------------------------------------------------

### Theoretical Estimating

#### About the computing performance

Performance: the peak theoretical performance of this six-node cluster is 537.6 GFLOPS, based on the i5-760 processor's performance.

:::info
:bulb: The Intel Core i5-760 processor has four processing cores with a base clock speed of 2.8 GHz. Assume it can perform up to 8 floating-point operations per clock cycle (4 additions and 4 multiplications); the i5-760 has no AVX, but its 128-bit SSE units can issue a 4-wide single-precision add and a 4-wide multiply each cycle. The peak theoretical performance per processor is then:

4 cores × 2.8 GHz × 8 FLOPs per clock cycle = 89.6 GFLOPS

Multiplying the peak theoretical performance per processor by the number of processors in the cluster (6) gives the total peak theoretical performance of the cluster:

Total peak theoretical performance = 6 processors × 89.6 GFLOPS per processor = <font color="#f00">537.6 GFLOPS</font>
:::
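To make the arithmetic above easy to re-run with different assumptions, here is a small Python sketch of the same calculation. The inputs are exactly the figures assumed in the box above (base clock, no Turbo, 8 FLOPs per cycle); they are theoretical peaks, not measurements.

```python
# Peak theoretical single-precision performance of the cluster,
# using the same assumptions as the info box above (not measured values).

CORES_PER_CPU = 4        # i5-760: four physical cores
CLOCK_GHZ = 2.8          # base clock, ignoring Turbo Boost
FLOPS_PER_CYCLE = 8      # assumed: 4-wide SSE add + 4-wide SSE multiply per cycle
NODES = 6                # number of compute nodes in the cluster

per_cpu_gflops = CORES_PER_CPU * CLOCK_GHZ * FLOPS_PER_CYCLE
cluster_gflops = per_cpu_gflops * NODES

print(f"Peak per processor: {per_cpu_gflops:.1f} GFLOPS")      # 89.6 GFLOPS
print(f"Peak for {NODES} nodes: {cluster_gflops:.1f} GFLOPS")  # 537.6 GFLOPS
```

Changing `FLOPS_PER_CYCLE` to 4 (a more conservative double-precision assumption) halves both numbers, which is worth keeping in mind when reading the comparison table below.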
Based on the total peak theoretical performance of this Beowulf cluster (537.6 GFLOPS), we can make a comparison with modern CPUs in terms of single-precision floating-point performance. Here are some examples:

| Processor | GFLOPS |
|-----------|--------|
| This Beowulf cluster (6× Intel Core i5-760) | <font color="#f00">537.6</font> |
| Intel Xeon E5-2699 v3 | 540 |
| AMD Opteron 6378 | 541 |
| Intel Core i7-5960X | 469.4 |
| AMD Ryzen 9 5950X | 988 |
| AMD EPYC 7763 | 7056 :exploding_head: |

<font color="#f00">The EPYC 7763 is just here for fun.... As you can see, the performance of this cluster is not very good; as stated earlier, it was built only for learning.</font> The actual performance may be lower due to factors such as network latency and communication overhead. Also, I'm going to run MATLAB on this cluster. MATLAB is a parallelizable application and can take advantage of the multiple cores and nodes in the cluster to accelerate computations; the degree of acceleration will depend on the specific computations being performed and how well they can be parallelized. The size of the dataset being processed may also affect the performance of the cluster. If the data is stored on the local disks of each node, then I/O bandwidth may become a bottleneck, especially if the data needs to be transferred between nodes frequently.

#### About the fibre network solution

There are actually three choices for networking between the nodes: a fibre-based network, gigabit Ethernet, and 10/100 Mbps Fast Ethernet. Here is some estimation. I first thought of using the fibre network, which provides ultra-fast data exchange; however, the motherboard I'm using does not have a PCIe x4 slot for the QLE2460 fibre HBA. That forces me to use an x1-to-x16 adapter to fit it, and this might cause a bottleneck in transfer speed (a quick sanity check follows at the end of this section).

The maximum transfer speed of the QLogic QLE2460 Fibre Channel host bus adapter (HBA) can reach up to 4 Gbps (4000 Mbps). The QLE2460 can support up to 2.125 Gbps data rates for full-duplex or 4 Gbps data rates for half-duplex operation. The actual transfer speed of the HBA may vary depending on the configuration of the server, the storage device, and the network. The following table shows the transfer speed of the HBA installed in different slots.

| Transfer speed | PCIe 3.0 x1 | PCIe 3.0 x4 |
|----------------|-------------|-------------|
| Maximum speed | 4 Gbps | 16 Gbps |
| Maximum bandwidth | 1 GB/s | 4 GB/s |

Note: the actual transfer speeds of the QLogic QLE2460 HBA may vary depending on factors such as the configuration of the server, the storage device, and the network. The speeds listed here are the theoretical maximums for each interface. Please also note that the motherboard I'm using only has PCIe 3.0, which means there's no way to get it faster :cry:

Also, fibre on every node can cost a lot, and because of this I haven't even decided which fibre switch to buy yet. They are just ridiculously expensive. Maybe I'll try something from China rather than from Cisco or Netgear or whatever.
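As a quick sanity check on the x1-adapter worry above, the sketch below compares the usable payload rate of a 4Gb Fibre Channel link against the approximate usable bandwidth of a PCIe slot. The per-lane figures (roughly 250/500/985 MB/s for PCIe 1.x/2.0/3.0 after encoding overhead) and the ~400 MB/s FC payload rate are my own assumptions based on commonly quoted numbers, not values taken from the table above.

```python
# Rough check: can a single PCIe lane keep up with a 4Gb Fibre Channel link?
# All figures are commonly quoted approximations (my assumptions), not vendor specs.

FC_4G_PAYLOAD_MB_S = 400           # ~400 MB/s usable per direction on 4Gb FC

# Approximate usable bandwidth per PCIe lane, after encoding overhead (MB/s).
PCIE_LANE_MB_S = {
    "PCIe 1.x": 250,
    "PCIe 2.0": 500,
    "PCIe 3.0": 985,
}

def check(generation: str, lanes: int = 1) -> None:
    slot = PCIE_LANE_MB_S[generation] * lanes
    verdict = "bottleneck" if slot < FC_4G_PAYLOAD_MB_S else "OK"
    print(f"{generation} x{lanes}: ~{slot} MB/s slot vs ~{FC_4G_PAYLOAD_MB_S} MB/s FC -> {verdict}")

for gen in PCIE_LANE_MB_S:
    check(gen, lanes=1)      # an x1-to-x16 riser typically wires up only one lane
check("PCIe 1.x", lanes=4)   # the x4 slot the QLE2460 was designed for
```

Under these assumptions only a PCIe 1.x x1 link would throttle the HBA; a gen 2 or gen 3 x1 slot, and the x4 slot the card was designed for, all have headroom above the FC payload rate.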
:::spoiler some notes
Parts still not arrived:
1. motherboards
2. cases
3. CPUs
4. RAM modules (still missing 24)
5. 2.5" disks
6. another shelf to place those nodes

Find another way to estimate the performance.
:::
