# LLM Inference Streaming

## Motivation

This project idea is inspired by one of the ideas suggested on the course page: to build an LLM inference backend with streaming. We chose this project idea as LLMs have become widely used over the last few years, while still being relatively new to the Rust (and other programming languages) ecosystem. Therefore, we are interested in building a user application for LLM use that builds upon the currently available Rust inference engines such as Candle and Mistral.rs, and implements token-by-token streaming during inference, a feature that is currently unsupported by these engines.

This project gives us the opportunity to work with local large language models. We find this valuable as using LLMs has become an expected part of our research projects; however, it is not always possible or preferable to run our data through commercial LLMs such as GPT or Claude, for reasons such as data privacy or cost. Therefore, it is helpful for us to be able to build our own applications that run local models, rather than relying on commercial LLMs. By building our own systems, we can deepen our understanding of these emerging technologies.

Finally, this project will allow us to work with an interesting tech stack and a variety of programming concepts, such as async programming, concurrency, memory safety, and networking in Rust.

---

## Objective and Key Features

### Objective

To design and implement a lightweight LLM inference service that supports streaming outputs token-by-token to clients via HTTP. The service will allow users to select and load a local model, accept prompts through an API, and stream generated text incrementally, simulating the responsiveness of modern LLM chat systems.

### Key Features

#### Core Features

1. **Model Selection, Loading, and Management**
   - Allow user selection from a list of available models.
   - Load a small local model (e.g., using Candle) for inference.
   - Implement a lightweight model manager to initialize and reuse models efficiently.
2. **Streaming Inference**
   - Modify the model’s generation loop to emit tokens incrementally instead of waiting for full output.
   - Use **Axum** and **Server-Sent Events (SSE)** to stream tokens to the client in real time (see the sketch after this feature list).
3. **Client Interaction**
   - Build a simple web or CLI interface for sending prompts and viewing streamed responses.
   - Provide basic input/output handling and connection to the backend API.
   - Provide model selection from a list.

#### Stretch Goals

4. **Chat History and Persistence**
   - Use SQLite with SQLx to store past chat sessions and message history.
   - Allow retrieval of past chats and reuse of context within a session.
5. **Multi-User Support**
   - Enable multiple clients to connect simultaneously.
   - Leverage Axum’s async runtime to handle multiple requests.
6. **Naive Request Queuing**
   - Implement a simple FIFO queue to manage simultaneous user prompts.

#### Advanced / Exploratory Ideas

- Improve concurrency and scheduling (e.g., round-robin prompt token generation).
- Cache inference state (KV cache reuse) for faster context handling in long conversations.
- Add optional voice-to-text or image-to-text input.
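As a rough illustration of the core streaming feature, the sketch below wires a minimal Axum handler to an SSE response, assuming axum 0.7-style serving. The `/generate` route, the `PromptRequest` shape, and the word-splitting "generation loop" are placeholders we invented for illustration; the real backend would drive a Candle model inside the spawned task instead.

```rust
use std::{convert::Infallible, time::Duration};

use axum::{
    response::sse::{Event, KeepAlive, Sse},
    routing::post,
    Json, Router,
};
use serde::Deserialize;
use tokio::sync::mpsc;
use tokio_stream::{wrappers::ReceiverStream, Stream, StreamExt};

#[derive(Deserialize)]
struct PromptRequest {
    prompt: String,
}

// POST /generate: spawn a generation task that pushes tokens into a channel,
// then stream them back to the client as SSE events as they arrive.
async fn generate(
    Json(req): Json<PromptRequest>,
) -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    let (tx, rx) = mpsc::channel::<String>(32);

    tokio::spawn(async move {
        // Placeholder "generation loop": echoes the prompt word by word.
        // The real version would run Candle's forward pass + sampling here.
        for token in req.prompt.split_whitespace() {
            if tx.send(format!("{token} ")).await.is_err() {
                break; // client disconnected; stop generating
            }
            tokio::time::sleep(Duration::from_millis(100)).await;
        }
    });

    let stream = ReceiverStream::new(rx).map(|tok| Ok(Event::default().data(tok)));
    Sse::new(stream).keep_alive(KeepAlive::default())
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/generate", post(generate));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

With the server running, something like `curl -N -X POST localhost:3000/generate -H 'Content-Type: application/json' -d '{"prompt":"hello streaming world"}'` should print tokens as they arrive (`-N` disables curl's output buffering).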
---

### Tech Stack

| Component | Technology | Purpose / Notes |
|-----------|------------|-----------------|
| **Model Inference** | **Candle** | Load and run local LLM inference |
| **Web Framework** | **Axum** | Async web server and HTTP API with SSE streaming support |
| **Serialization** | **Serde / serde_json** | Handle JSON request/response formats |
| **Database** | **SQLite + SQLx** | Store chat history and user sessions |
| **Frontend** | **Simple HTML/JS** or **Rust CLI** | User interface for sending prompts and resuming chat history |

---

### Work Division

| Member | Responsibilities |
|--------|------------------|
| **Shafin** | **Phase 1:** Work with Alan on integrating Candle with Axum to enable SSE token streaming.<br>**Phase 2:** Develop the frontend (web or CLI) for sending prompts and displaying streamed responses; add basic chat continuation support. |
| **Alan** | **Phase 1:** Work with Shafin on backend implementation of the streaming pipeline between Candle and Axum.<br>**Phase 2:** Implement chat history and persistence using SQLite with SQLx. |
| **Kimberly** | **Phase 1:** Design the API structure and message flow between client and backend.<br>**Phase 3:** Implement server-side handling of concurrent requests and queuing for multiple clients. |

_All members will contribute to debugging, testing, and documentation as needed._ Workload distribution may shift as new challenges come up or development priorities evolve.

---

## Tentative Plan

Our plan is arranged in phases that allow us to start simple and expand as we gain more familiarity with Rust. Since all group members are new to Rust, we don’t yet have a strong sense of how time-consuming debugging, ownership issues, or async handling might be. To manage this uncertainty, we’ll first aim to build the **core streaming feature** and ensure it’s functional, which will be our Minimum Viable Product. Afterwards, we can supplement the application by branching out into additional features like chat history, concurrency, and multi-user support.

### Phase 1: Core Streaming Feature

- Shafin and Alan work together to integrate the model backend (Candle) with Axum.
- Implement Server-Sent Events (SSE) to stream generated tokens to the client.
- Verify that prompts sent through the API produce real-time streamed responses.
- Kimberly assists by setting up basic project scaffolding and API routes for testing.

### Phase 2: Chat Interface and Persistence

- Add a simple frontend or CLI client to interact with the backend.
- Implement SQLite storage using SQLx to persist chats and restore previous sessions (see the persistence sketch after this plan).
- Add support for maintaining context within a single chat (feed previous messages into new prompts).
- Begin integration testing to verify data flows across components.

### Phase 3: Multi-User and Optional Extensions

- Introduce concurrency handling for multiple users and queued requests (see the queue sketch after this plan).
- Expand the backend to manage user sessions and separate chat histories.
- Explore optional stretch goals such as multiple model support, keyword search, or simple authentication.
- Wrap up with testing, documentation, and video demo preparation.

We aim to begin writing our final report following the completion of Phase 2, and update it as the project progresses. As Phase 3 involves the development of our stretch goals, we will work on this phase until 1-2 weeks before the deadline, before pivoting to writing a script and filming our video demonstration.
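To make the Phase 2 persistence idea concrete, here is a minimal SQLx sketch. The `messages` schema, session IDs, and column names are our own invention for illustration; the real app would use a file-backed database rather than `sqlite::memory:`.

```rust
use sqlx::sqlite::SqlitePoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    // In-memory DB keeps the sketch self-contained; use a file path in practice.
    let pool = SqlitePoolOptions::new()
        .max_connections(4)
        .connect("sqlite::memory:")
        .await?;

    // Hypothetical schema: one row per message, grouped by chat session.
    sqlx::query(
        "CREATE TABLE IF NOT EXISTS messages (
             id INTEGER PRIMARY KEY AUTOINCREMENT,
             session_id TEXT NOT NULL,
             role TEXT NOT NULL,          -- 'user' or 'assistant'
             content TEXT NOT NULL,
             created_at TEXT DEFAULT CURRENT_TIMESTAMP
         )",
    )
    .execute(&pool)
    .await?;

    // Store a message, then read the session back to rebuild chat context.
    sqlx::query("INSERT INTO messages (session_id, role, content) VALUES (?1, ?2, ?3)")
        .bind("demo-session")
        .bind("user")
        .bind("hello")
        .execute(&pool)
        .await?;

    let rows: Vec<(String, String)> =
        sqlx::query_as("SELECT role, content FROM messages WHERE session_id = ?1 ORDER BY id")
            .bind("demo-session")
            .fetch_all(&pool)
            .await?;
    for (role, content) in rows {
        println!("{role}: {content}");
    }
    Ok(())
}
```

And for the Phase 3 naive FIFO queue, one possible shape under our assumptions: a single worker task owns the model, and a bounded Tokio mpsc channel in front of it serves as the queue, since messages are delivered in send order. The `Job` struct, channel sizes, and the word-splitting stand-in for generation are again invented for illustration.

```rust
use tokio::sync::mpsc;

// A queued request: the prompt plus a channel for streaming tokens back
// to whoever enqueued it (e.g., an Axum SSE handler).
struct Job {
    prompt: String,
    token_tx: mpsc::Sender<String>,
}

// One worker task owns the model; the bounded mpsc channel in front of it
// is the FIFO queue, since messages arrive in the order they were sent.
async fn inference_worker(mut jobs: mpsc::Receiver<Job>) {
    while let Some(job) = jobs.recv().await {
        // Stand-in for the Candle generation loop.
        for tok in job.prompt.split_whitespace() {
            if job.token_tx.send(format!("{tok} ")).await.is_err() {
                break; // client gone; move on to the next queued request
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (job_tx, job_rx) = mpsc::channel::<Job>(64); // bounded: gives back-pressure
    tokio::spawn(inference_worker(job_rx));

    // Two prompts enqueued back to back are served strictly in order.
    for prompt in ["first prompt", "second prompt"] {
        let (tok_tx, mut tok_rx) = mpsc::channel(8);
        job_tx
            .send(Job { prompt: prompt.into(), token_tx: tok_tx })
            .await
            .unwrap();
        while let Some(tok) = tok_rx.recv().await {
            print!("{tok}");
        }
        println!();
    }
}
```

Bounding the job channel means callers feel back-pressure when the queue fills, instead of the server buffering requests without limit.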
---

### Feasibility

The project scope is intentionally kept manageable, with each feature designed to be independent and incrementally implemented. The core version (streaming inference, API, and basic chat handling) should be achievable within 3–4 weeks of part-time work and will be the focus of all three members. Stretch goals can be explored once the minimum viable product is complete.

This phased plan ensures that even if we face unexpected challenges while learning Rust, we can still deliver a working system that meets the course requirements while leaving room for extension if progress allows. It also ensures that we will have enough time to properly develop the final report and video demonstration.

---

## References

- [Candle: Rust-based ML Framework](https://github.com/huggingface/candle)
- [Mistral.rs](https://github.com/EricLBuehler/mistral.rs)
- [Axum Web Framework](https://docs.rs/axum/)
- [SQLx Async Database](https://docs.rs/sqlx/)
- [Server-Sent Events (MDN)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events)
