
Foundation of Quantum Machine Learning / Module 1


Motivation

Why are we building supercomputers? Why do we need faster or larger-scale solutions? Is the technology we’re using today truly sufficient to live and build with?
This is one of the most reasonable questions I received from an audience member during an innovation panel at the UK Parliament. It cuts to the core of our relentless pursuit of progress: If current tools work, why push for more? The answer lies in the challenges we face—climate modeling, drug discovery, AI advancement—and the reality that today’s technology often falls short of solving tomorrow’s problems. Supercomputers aren’t just about raw power; they’re about unlocking possibilities that redefine what “enough” even means.

Those who lean toward strictly practical views might argue that advanced technologies, like quantum computing or AI, are better suited for abstract scientific exploration than solving everyday problems. They may see such pursuits as unnecessary or wasteful, especially when existing tools seem sufficient. But this perspective overlooks a critical truth: some of the most groundbreaking practical advancements emerge from what initially appears to be purely theoretical or speculative work.

History shows that blue-sky research—exploring ideas without immediate practical goals—often leads to unexpected and transformative applications. The extended capabilities of cutting-edge technologies don’t just answer abstract questions; they unlock new possibilities that redefine what’s practical. By investing in these frontiers, we don’t just satisfy curiosity—we lay the groundwork for innovations that can profoundly impact society, even if their value isn’t immediately obvious.

Intro to Quantum Machine Learning

Quantum Machine Learning (QML) explores the intersection of quantum computing and machine learning through four main approaches:

CC (Classical-Classical): Classical data processed classically, but using methods inspired by quantum information research. Examples include tensor networks adapted from quantum many-body systems or quantum-inspired classical algorithms.

QC (Quantum-Classical): Machine learning applied to quantum computing tasks, such as analyzing quantum measurement data, learning phase transitions in quantum systems, or discriminating between quantum states.

CQ (Classical-Quantum): Classical data (e.g., text, images, or time series) processed by quantum computers. This involves designing quantum algorithms for tasks like data mining, often by translating classical machine learning models into quantum frameworks or creating entirely new quantum-based models.

QQ (Quantum-Quantum): Quantum data (e.g., from quantum experiments or simulations) processed by quantum computers. This approach leverages the quantum computer’s ability to access and manipulate quantum states directly, offering potential exponential speedups for tasks like simulating quantum systems or analyzing quantum data.

The CQ approach is the primary focus in the market, and it's the one we'll dive deeper into in this blog series. Of the rest, the QQ approach is arguably the most interesting: it presents exciting but underexplored opportunities, such as learning directly from quantum states or combining quantum data generation and analysis.

Let's Go!

Quantum Computing for Supervised Learning

Alright, let’s dive into the fun part first: using quantum computers for supervised learning! Because why not? We’re focusing on the CQ case, where classical data gets processed by quantum machines. Now, when it comes to designing quantum machine learning algorithms, there are two main strategies. And let me tell you, both are fascinating in their own ways.

1. The Translational Approach

First up, we’ve got the translational approach. This is where we take a classical machine learning model—like a neural network or a Gaussian process—and say, “Hey, let’s see if a quantum computer can do this faster!”

The Idea: We’re not reinventing the wheel here. We’re just trying to outsource the heavy lifting to a quantum device. Maybe it’s matrix inversion, maybe it’s optimizing some gnarly non-convex function—whatever it is, we’re using quantum tricks to speed it up.

The Challenge: It’s not easy! You’ve got to be a bit of a quantum wizard to translate classical algorithms into quantum routines. And you’ve got to do it efficiently, without gobbling up too many qubits or too much time.

The Big Picture: This approach is like using quantum computing as a supercharged calculator. It’s powerful, but it’s still solving the same old problems. The speedups are exciting, but it’s more about applying quantum computing than creating something entirely new.

2. The Exploratory Approach

Now, this is where things get really interesting. The exploratory approach doesn’t start with classical models. Nope. It starts with the quantum computer itself and asks, “What kind of machine learning can this thing do that classical computers can’t even dream of?”

The Idea: Instead of copying classical models, we’re building something entirely new—a model that’s native to the quantum world. Maybe it’s a new way to optimize, a new way to classify data, or even a whole new branch of machine learning!

The Freedom: We’re not tied to universal quantum computers here. Any system that follows the rules of quantum mechanics can be a playground for this approach.

The Big Canvas: This isn’t just about speedups. It’s about inventing new ways to learn from data. It’s about pushing the boundaries of what machine learning can do. And sure, it’s harder—you need to really understand both quantum mechanics and machine learning to pull it off. But oh, the possibilities!

Let's jump into the beefy part!

Information Encoding Strategies

"I think I can safely say that nobody really understands quantum mechanics." - Richard Feynman

True, but let's try anyway!


Imagine you have a beautiful classical art painting that you want to transfer into the quantum world, because why not? How do you do it? That's the fundamental problem of data encoding in quantum machine learning.

When we talk about quantum machine learning, we're essentially trying to teach quantum computers to recognize patterns, just like we teach classical computers. But quantum computers speak a different language - they talk in terms of quantum states, superpositions, and entanglement.

So our first challenge is translation: how do we convert our classical information into something a quantum computer can understand and work with?

To use quantum computers for analyzing classical data (the CQ case), we need to solve two problems:

Quantum Representation: How to encode classical data (like numbers or features) into a quantum system.

Data Loading: How to physically transfer this data from classical storage into the quantum computer.

This data transfer step is called state preparation in quantum computing.

Let's dive into the different approaches we can take, building our understanding step by step.

Basis Encoding: The Direct Translation

Basis encoding is the most straightforward approach - like translating a sentence word-for-word between languages.

Classical: 1011
Quantum: |1011⟩

Here's how it works: if you have a classical bit string, say 1011, you simply create a quantum state where each qubit represents exactly one bit. The state |1011⟩ means the first qubit is in state |1⟩, the second in state |0⟩, and so on.

Why is this useful? Well, it's intuitive! It's easy to understand what's happening because there's a one-to-one correspondence. If you have 1000 bits of classical data, you'll need 1000 qubits to encode it this way.

But here's the catch - we're not using one of quantum computing's most powerful features: superposition. It's like having a Ferrari but only driving it in first gear. Of course, we can do better!
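To make this concrete, here's a minimal pure-Python sketch (the function name `basis_encode` is just illustrative): it maps a classical bit string to the corresponding computational-basis state vector, with all the amplitude concentrated on a single basis state.

```python
def basis_encode(bits: str) -> list[float]:
    """Return the 2**n state vector with amplitude 1 at the index
    whose binary representation matches the bit string."""
    n = len(bits)
    index = int(bits, 2)          # e.g. "1011" -> index 11
    state = [0.0] * (2 ** n)      # one amplitude per basis state
    state[index] = 1.0            # all amplitude on |bits>
    return state

state = basis_encode("1011")
print(state.index(1.0))  # 11 -> the basis state |1011>
```

Note how wasteful this is: 4 classical bits already need a 16-dimensional state vector, yet only one amplitude is ever nonzero.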

Amplitude Encoding

Now let's think bigger. What if, instead of putting our data in the "coordinates" of the quantum state, we put it in the "heights" or amplitudes?

In amplitude encoding, we use the probability amplitudes of the quantum state to represent our data values. This is powerful because with just n qubits, we can encode 2^n classical values!

 For example, with 3 qubits, we can create a state:
 |ψ⟩ = 0.1|000⟩ + 0.2|001⟩ + 0.3|010⟩ + 0.4|011⟩ + 0.5|100⟩ + 0.6|101⟩ + 0.0|110⟩ + 0.3|111⟩
 
This single quantum state encodes 8 different values! 

That's the power of quantum superposition - we're storing information in all possible configurations simultaneously.
But there's no free lunch in physics. While we get exponential compression, we face two challenges:

  1. Creating this precise state can be complicated
  2. Reading out all the encoded values requires many measurements

It's like compressing a file to save space - great for storage, but you have to decompress it to use it fully.
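As a rough numerical sketch of what amplitude encoding means (pure Python, illustrative names, no quantum library), here's how a data vector becomes a valid set of amplitudes. The only hard requirement is normalization: the squared amplitudes must sum to one.

```python
import math

def amplitude_encode(values):
    """Normalize a list of 2**n values into quantum amplitudes."""
    norm = math.sqrt(sum(v * v for v in values))
    return [v / norm for v in values]

# The 8 values from the 3-qubit example in the text
data = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.0, 0.3]
amps = amplitude_encode(data)

# Measurement probabilities of each basis state |000>, ..., |111>
probs = [a * a for a in amps]
print(round(sum(probs), 6))  # 1.0 -> a valid quantum state
```

The example state in the text is already normalized (its squared amplitudes sum to exactly 1), so `amplitude_encode` leaves it unchanged; for arbitrary data the division by the norm does the work.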

QSample Encoding

What if your data is already probabilistic? Like the probability distribution of customer preferences or stock market movements?

QSample encoding is perfect for this. Instead of encoding raw values, we encode probability distributions.

If we have a classical probability distribution p(x), 
we create a quantum state:

|ψ⟩ = Σₓ √p(x) |x⟩

When we measure this state in the computational basis, we'll get outcomes that follow our original probability distribution p(x). It's like building a quantum die that, when rolled, gives results according to any distribution we want!

This approach is particularly elegant for machine learning tasks that involve sampling, like generative models. The quantum system naturally performs the sampling for us. I hear ML devs giggling here :) Don't worry, we're going to have a whole module focused on QNN models later - it's going to be fun!
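Here's a small classical simulation of the idea (pure Python, names are illustrative): we build the amplitudes √p(x), then mimic computational-basis measurements and check that the sample frequencies approximate the original distribution p(x).

```python
import math
import random

def qsample_amplitudes(p):
    """Encode a probability distribution p(x) as amplitudes sqrt(p(x))."""
    return [math.sqrt(px) for px in p]

def measure(amps, shots=10_000, seed=0):
    """Simulate `shots` computational-basis measurements of the state.
    The Born rule says outcome x occurs with probability |amp_x|**2."""
    probs = [a * a for a in amps]
    rng = random.Random(seed)
    return rng.choices(range(len(probs)), weights=probs, k=shots)

p = [0.5, 0.25, 0.25]             # a classical distribution over x in {0, 1, 2}
amps = qsample_amplitudes(p)
samples = measure(amps)
freq = [samples.count(x) / len(samples) for x in range(len(p))]
# freq approximates the original p(x)
```

On real hardware the sampling would come for free from physical measurement; here the simulation just illustrates that measuring √p(x)-amplitudes reproduces p(x).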

Dynamic Encoding: Embedding in Time Evolution

Now for something completely different. Instead of encoding our data in static quantum states, what if we encode it in how the quantum system evolves over time?

In dynamic encoding, we embed our classical data into the Hamiltonian - the "energy function" that governs how quantum states change. It's like encoding information not in the position of a pendulum, but in how it swings.

Mathematically, we create a time evolution operator:
U(t) = e^(-iHt)

Where H is our data-dependent Hamiltonian. 

This approach feels more "native" to quantum systems and can be particularly powerful for simulating physical systems.
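As a toy illustration (a single qubit with Hamiltonian H = x·Z, where the data value x enters as a coefficient; this choice of H and the function names are assumptions for the sketch), the Pauli-Z Hamiltonian is diagonal, so U(t) = e^(-iHt) can be written out directly without a matrix exponential library:

```python
import cmath

def evolve_z(x, t, state):
    """Time evolution under H = x * Z (data x encoded in the Hamiltonian).
    Z is diagonal, so U(t) = diag(e^{-i x t}, e^{+i x t})."""
    u00 = cmath.exp(-1j * x * t)
    u11 = cmath.exp(+1j * x * t)
    return [u00 * state[0], u11 * state[1]]

# Encode x = 0.7 into the phase picked up by |+> = (|0> + |1>)/sqrt(2)
plus = [1 / 2**0.5, 1 / 2**0.5]
out = evolve_z(0.7, 1.0, plus)
# The relative phase between |0> and |1> now carries the data value
```

The measurement probabilities of `out` are unchanged (the evolution is unitary); the data lives in the relative phase e^(2ixt), which later interference steps can read out.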

For reference: A summary of the different types of information encoding presented in the text


Quantum Speed Up

Before diving into the design of quantum machine learning algorithms in the next modules, let’s take a moment to explore how quantum computing can actually help machine learning. While quantum computing researchers often focus on asymptotic computational speedups, machine learning is a bit more nuanced. There are three key dimensions to consider when evaluating quantum advantages in machine learning:

Computational Complexity: How fast can we solve the problem?
Sample Complexity: How much data do we need to learn effectively?
Model Complexity: Can quantum systems inspire entirely new types of models?

Let’s break these down one by one.

Computational Complexity of Learning

When we talk about computational complexity, we’re asking: How long does it take to run an algorithm? This is the most common way to measure the potential of quantum computing in machine learning, and it’s inherited from the broader field of quantum computing, where researchers often focus on proving theoretical runtime bounds.

What is Runtime Complexity?

Runtime is the time it takes to execute an algorithm, measured in the number of elementary operations. In classical computing, we can count logic gates to estimate runtime, but this becomes tricky as technology evolves. That’s why we use asymptotic complexity—how the runtime grows as the input size n increases.

If the runtime grows polynomially (like nᶜ for some constant c), the problem is considered tractable. Even if c is large, we can theoretically solve the problem with enough time and resources.

If the runtime grows exponentially, the problem becomes intractable for large inputs. Imagine you’re trying to crack a safe. A 2-digit code? Easy—100 guesses max. But a 30-digit code? You’d need more attempts than there are stars in the universe! That’s exponential complexity—the nightmare of classical computing.

Quantum computers flip the script. They’re not just faster; they’re smarter. By leveraging superposition and entanglement, they can tackle problems that grow exponentially on classical machines.

Take Grover’s algorithm: it searches an unsorted database in √n steps instead of n. For a 1-million-entry database, that’s 1,000 steps vs. 1,000,000. That’s a quantum advantage!
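A quick back-of-the-envelope comparison makes the point (pure Python; this counts only queries and ignores Grover's constant factor of roughly π/4):

```python
import math

def search_steps(n):
    """Worst-case classical search queries vs Grover's ~sqrt(n) iterations
    for an unsorted database of n entries."""
    return n, math.isqrt(n)

print(search_steps(1_000_000))  # (1000000, 1000)
```

A quadratic speedup like this won't turn an intractable problem into a tractable one, but for large inputs the saving is still dramatic.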

But here’s the catch: Quantum runtime is tricky to pin down. We don’t have standardized qubits yet, and mapping algorithms to quantum gates is like translating poetry into hieroglyphics. Still, the promise is huge: problems that are impossible classically might become tractable quantumly.

Quantum Complexity

As we mentioned, estimating runtime is even trickier for quantum computers. We don’t yet have a standardized set of quantum gates or qubit implementations, and decomposing quantum algorithms into elementary operations is far from straightforward. That’s why most researchers focus on asymptotic complexity: how quantum algorithms scale compared to classical ones as the input size grows.

A quantum algorithm is qubit-efficient if its runtime scales polynomially with the number of qubits.

It’s amplitude-efficient if it scales polynomially with the number of quantum amplitudes (like in Grover’s search algorithm).

The big question in quantum complexity theory is: Can quantum computers solve problems faster than classical ones? When we talk about quantum speedups, we’re usually referring to asymptotic runtime advantages. An exponential speedup, where a quantum algorithm solves a problem exponentially faster than the best known classical algorithm, is closely related to the idea of quantum supremacy: demonstrating a computation that no classical computer can feasibly perform.

Sample Complexity

Machine learning loves data—but what if we don’t have much? Imagine training a model to diagnose rare diseases with only a handful of cases. Classical methods struggle, but quantum algorithms might thrive.

Quantum states can encode information in ways classical bits can’t. For example, a register of n qubits can hold a superposition over 2^n basis states. This could let quantum models learn patterns from fewer examples - a sample complexity advantage. It’s like reading a book once and remembering every word, versus needing to re-read it 100 times.

Model Complexity

Here’s where things get wild. Instead of forcing classical models onto quantum hardware, what if we let quantum mechanics inspire entirely new models?

Quantum neural networks that exploit entanglement to process information in parallel.
Quantum Boltzmann machines that simulate quantum systems to find optimal solutions.
Quantum kernels that map data into high-dimensional quantum feature spaces.

These aren’t just faster versions of classical tools—they’re new ways to think about learning. It’s like inventing the wheel, then realizing you can also build a rocket.

In the next modules, we’ll explore these algorithms and applications in more detail. But for now, remember this: quantum computing isn’t just about speed—it’s about reimagining what’s possible in machine learning.

Until next time, Stay Entangled and Think Quantum!