Quantum Machine Learning with Jessica Pointing

Sebastian Hassinger:

The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney.

Kevin Rowney:

Hey. Welcome back. Hey. We have an interesting interview today with Jessica. Yeah.

Kevin Rowney:

Jessica Pointing. She's pursuing her PhD at Oxford. We'll give a longer intro in the main episode, but, you know, impressive person. But this whole context, right, of this paper, I saw it fly by on my Twitter feed. And I'm like That's right.

Kevin Rowney:

That looks kind of interesting. And, you know, I read it in more detail, just the abstract, and I was like, we really gotta do an episode on this one. There's just a rich context here, I think, at a fascinating time, right, in the intellectual history of these ideas around data science, classical machine learning, quantum machine learning. So there's much of the frontier left to reveal.

Kevin Rowney:

So that gets us to, I think, a next-level understanding of what this all means. It does.

Sebastian Hassinger:

Yeah. I'm really excited by this conversation, Kevin. You know, as you said, we're at a particularly interesting moment in the history of classical machine learning. Things like OpenAI and the rest of those technologies are certainly dominating the headlines and the investment landscape, and there are a lot of interesting topics of discussion around, you know, the progress that's being made or not being made, or what the applications may be. And then straddle that over into the level of uncertainty we have about quantum computing.

Sebastian Hassinger:

You know, any one of those topics can become perplexing. The two in combination are completely baffling, so I'm hoping that Jessica can help shed some light on our understanding here.

Kevin Rowney:

Good stuff. Alright. Here we go. And welcome back. Hey.

Kevin Rowney:

It's Kevin Rowney. Hey, Sebastian. How are you, man?

Sebastian Hassinger:

Hey, Kevin.

Kevin Rowney:

Hey. So today we've got a really interesting interview with Jessica Pointing on this, I think, fascinating result on some of the trade-offs, limits, and possibilities with quantum neural networks. Our guest today is Jessica Pointing, who's now pursuing a PhD in physics at Oxford. Previously she was a PhD student at Stanford in CS. And wow, it just goes back and back, more and more impressive resume achievements.

Kevin Rowney:

Time at Harvard and MIT. I think she's had some experience at Google, McKinsey. I mean, just an impressive array of experience. Aside from this, she was selected as a Forbes 30 Under 30 in science. So, hey, look, so many more.

Kevin Rowney:

It's almost overwhelming, but very impressive. Jessica, welcome. Welcome to the podcast.

Jessica Pointing:

Thank you. Thanks for having me. Glad to be here.

Kevin Rowney:

Well, look, you know, the thing that really had us pursue you for being a guest on the show was your recent paper called "Do Quantum Neural Networks Have Simplicity Bias?" And so, you know, what we're trying to do often on this podcast is both embrace, you know, some of the optimism of the possibility of quantum computing, but see it in a very sober way. There's too much hype. And I think delving down into results like these helps set expectations and position, you know, the whole space for future growth. So, Jessica, let's begin with the context that brought you to this new result.

Jessica Pointing:

Yeah. Thank you. I agree that it's good to have this understanding of both the advantages and possible disadvantages of quantum at the moment and the current state we're at. So I think it's interesting actually to start from the context of just classical machine learning.

Jessica Pointing:

As you can see, you know, it's crazy what's going on. Take, for example, ChatGPT, which I think is one of the fastest-growing consumer applications, with an estimated 100 million users within just a couple of months after launch. And I don't know if the public understands that actually the underlying technology is neural networks.

Kevin Rowney:

Yeah.

Jessica Pointing:

And neural networks have a history, but I will focus in particular on this idea of the bias-variance trade-off, because that's one of the things in statistical learning theory, one of the bases that people have been thinking about. So the bias is basically, you know, we humans have a bias. For example, if I'm going on Amazon and I want to buy some headphones, and you get hundreds of items, maybe my bias is to choose the cheapest item, for example, or I have a bias towards something that's cheaper. Or maybe you're

Kevin Rowney:

an Apple fanboy or something in the

Jessica Pointing:

industry. Exactly. Or maybe, yeah, you have a bias towards a particular brand or something. So a bias is basically a set of assumptions that one makes, and we see this in neural networks. They're basically a machine learning model, and a model is basically a program that has been trained to find patterns and make decisions on data.

Jessica Pointing:

And so we see that neural networks have a bias as well. We can also look at this other property called the variance, which is telling us how spread out the data is. And in statistical learning theory there's this idea of the bias-variance trade-off. Basically, if you have a very simple model, there could be an error with predicting the right outcome because it's too simple. But if you have a very complex model, which means there's high variance, then there could also be an error because you're fitting the very particular parts of the data that you're training on.

Jessica Pointing:

And so, for example, let's say you have images of cats and you only take images of cats when it's outside, so the sky is blue or something. And then maybe the neural network model now thinks that whenever it sees a blue sky, that's a cat. Right.

Jessica Pointing:

Yeah. So it's fitting to that particular data that you trained it on. So there's this idea of the bias-variance trade-off that people thought was the case. But now we have deep learning, which is basically, you know, neural networks with many layers, and it's actually found that deep learning basically defies this bias-variance trade-off.

Jessica Pointing:

It doesn't really apply. And if people want a visual representation of it, there's this double descent curve. But basically, the idea is that in deep learning, what happens is you have these very complex models that can fit exactly the data that you're training on, but they are actually still able to generalize well on data they haven't seen. And generalization is basically how accurately the model can predict the outcome value for unseen data. So this is kind of a dilemma, or a question.
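To make the classical picture concrete, here is a minimal, hypothetical sketch in plain Python/NumPy of the textbook bias-variance behavior described above: an overly simple polynomial underfits, an overly flexible one fits the training points but generalizes worse, and a middling one tends to do best. The target function, degrees, and seed are illustrative assumptions, and the double descent behavior of genuinely large deep networks is not reproduced by a toy like this.

import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    # noisy samples of a smooth target function
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + 0.2 * rng.normal(size=n)
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(200)

for degree in (1, 4, 15):
    coeffs = np.polyfit(x_train, y_train, degree)            # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}  train MSE {train_mse:.3f}  test MSE {test_mse:.3f}")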

Jessica Pointing:

It's like, what's going on? Why are deep neural networks able to do this? And that's actually why it's kind of exciting, because I would say there isn't a strong consensus on why this is happening. Even though this technology is being very widely used, we still don't really understand the fundamental properties of it. And this is hotly

Kevin Rowney:

debated right now. Yes. I mean, you know, guys like Hinton, one of the founders of this whole field, talking about how these machine learning outcomes are almost like a blanket that could be draped across a complex shape and, you know, capture most of its nuance. And there are others that dispute, right, whether or not that's the appropriate intuition. So you're right.

Kevin Rowney:

There are still deep mysteries, it seems, in this domain.

Jessica Pointing:

Yeah. Exactly. Which is quite fascinating because it's been so widely used. Right? And I think some of the problems we have today with AI can stem from the fact that we don't really fully understand what's going on

Jessica Pointing:

with these models. Yes. Yeah. So, as you can see, it's an important thing to look at. And there is a group at Oxford that has a theory which comes down to this idea of simplicity bias.

Jessica Pointing:

So their theory is basically that these neural networks have a bias towards simple functions. And most real-world data is simple. When I say simple, you can think of it as having structure, like symmetry, for example.

Kevin Rowney:

That's basically the so-called manifold hypothesis sustained by Yoshua Bengio. Right? I mean

Jessica Pointing:

Yeah.

Kevin Rowney:

Somehow a high-dimensional dataset has a thin, lower-dimensional concentration of where the truth lies.

Jessica Pointing:

Yeah. You can think of it like effective dimension. So for example, if you have the MNIST dataset, which is basically handwritten digits, you know, it's 28 by 28 pixels.

Jessica Pointing:

But if you actually look, people think it's a very small subset of those pixels that actually matters. Maybe a very simple example is if you have a three-dimensional space, but your data points lie on a 2D plane, then the effective dimension is 2 even though it's in a 3D space.
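A toy version of that effective-dimension idea, as a rough sketch: points embedded in three dimensions that actually lie (up to small noise) in a two-dimensional plane, with the intrinsic dimension read off a PCA-style spectrum. The sizes and the variance threshold are arbitrary illustrative choices; real estimates of MNIST's intrinsic dimension use more careful methods.

import numpy as np

rng = np.random.default_rng(1)
intrinsic = rng.normal(size=(500, 2))              # 2-D "true" coordinates
embedding = rng.normal(size=(2, 3))                # linear map into the 3-D ambient space
points = intrinsic @ embedding + 0.01 * rng.normal(size=(500, 3))

centered = points - points.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
explained = singular_values ** 2 / np.sum(singular_values ** 2)
print(np.round(explained, 4))                      # nearly all variance sits in two directions
print("effective dimension ~", int(np.sum(explained > 0.01)))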

Kevin Rowney:

Yes. Right.

Jessica Pointing:

Right. So there's this idea. Basically, real-world data has structure and is simple. And they showed that deep neural networks also have this simplicity bias. And, I mean, you could even go into philosophy here, right, with Occam's razor, which is the idea that we should have this bias towards simple solutions.

Jessica Pointing:

And so it's interesting that neural networks seem to have this bias towards simple solutions. So this brings up, you know, interesting questions about quantum neural networks. Yeah.
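One common way this kind of simplicity bias is probed empirically, sketched very roughly below: sample the model's parameters at random many times, record which Boolean function each sample implements on all 2^n inputs, and check whether a few simple functions dominate the counts. The tiny classical network, sizes, and sampling budget here are illustrative assumptions rather than the exact setup of the papers being discussed.

import itertools
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
n_bits = 3
inputs = np.array(list(itertools.product([0, 1], repeat=n_bits)), dtype=float)

def random_network_function(hidden=8):
    # sample random weights, return the Boolean function as a string of outputs
    w1 = rng.normal(size=(n_bits, hidden))
    b1 = rng.normal(size=hidden)
    w2 = rng.normal(size=hidden)
    out = (np.tanh(inputs @ w1 + b1) @ w2 > 0).astype(int)
    return "".join(map(str, out))

counts = Counter(random_network_function() for _ in range(20000))
for fn, freq in counts.most_common(5):
    print(fn, freq / 20000)   # the constant (simplest) functions typically dominate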

Kevin Rowney:

And that's just the context of the classical case. Yeah.

Jessica Pointing:

That's the context.

Kevin Rowney:

Yeah. And there's an even more interesting frontier right down at the quantum machine learning level. So yeah, sorry, continue. Just trying to set some milestones here.

Jessica Pointing:

Yeah. So that's the first milestone, just this idea of simplicity bias. And then the second is looking at quantum neural networks. We know that they're inspired by neural networks, but they are different in some ways. And, yeah, there are a lot of questions about what quantum neural networks are useful for.

Jessica Pointing:

And, you know, it's possible.

Kevin Rowney:

Yeah. It seems like there are many people who are still standing on the sidelines, quite skeptical. They could be optimistic about QC in general, but still really questioning whether or not there's room for progress in this domain. Now, there have been many attempts at finding huge breakthroughs over classical algorithms on time complexity advantage. It's hard to point to a big result yet.

Kevin Rowney:

Right?

Jessica Pointing:

Yeah. Exactly. And, yeah, as you say, in quantum computing we know we have these advantages in terms of complexity. And some of that has been mapped onto the space of quantum machine learning.

Jessica Pointing:

But now you see a fundamental difference, because in classical machine learning it's all, you know, we're dealing with benchmarking, we're dealing with empirical results. There aren't all these theoretical underpinnings. But in quantum computing, the underpinnings have been mostly theoretical, because we have to justify why we're doing this. Yes. Right.

Jessica Pointing:

So, yeah, quantum neural networks come into this weird place where, you know, maybe we have empirical results, but we also have complexity arguments, and we're not really quite sure what they could be used for. So when I read this paper on simplicity bias, I guess that was the motivation: to understand, well, okay, has anyone looked at this in quantum neural networks? Because if simplicity bias is maybe one of the reasons why classical neural networks work well, maybe it can give us an insight into whether quantum neural networks can work well on real-world data and on classical

Kevin Rowney:

data. Yeah.

Jessica Pointing:

So that was the motivation. Yeah. And then, I guess, what we find is maybe going a little bit more into the specifics, but,

Kevin Rowney:

For your paper now. Yeah. Yeah.

Jessica Pointing:

Yeah. So in terms of the quantum neural networks, there are three different parts of a quantum neural network, and I should say this is the standard architecture. First, you have the encoding circuit.

Jessica Pointing:

This is how you actually encode the data into the neural network, because if you have classical data, there has to be a way to put the data into the network. Then you have the variational circuit, which is basically where you're changing the parameters to fit some cost function. And then, lastly, you have the measurements. So,

Kevin Rowney:

you have a classical loop on the outside, assisting with gradient descent on the internal quantum computing component of the loop.

Jessica Pointing:

Yes. You're correct. So that's the actual optimization stage. You can use a classical optimizer to optimize the parameters.

Jessica Pointing:

And you could even actually evaluate the gradients on quantum hardware, but that's another thing, and it's not as practical. So we have that framework. Now, basically, what we show, and some papers have kind of shown this in other ways, is that the encoding method is really what changes the inductive bias of the quantum neural network.
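A minimal sketch, in plain NumPy, of the hybrid loop just described: encode a classical input with a data-dependent rotation, apply a parameterized variational rotation, measure an expectation value, and let a classical optimizer update the parameter. The single-qubit circuit, angle encoding, cost function, and finite-difference gradient are illustrative assumptions, not the circuits studied in the paper.

import numpy as np

def ry(angle):
    # single-qubit Y-rotation gate
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def qnn_output(x, theta):
    # encoding circuit RY(x), variational circuit RY(theta), then measure <Z>
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return float(state @ Z @ state)

# toy data: want <Z> near +1 for x = 0.2 and near -1 for x = 2.8
data = [(0.2, 1.0), (2.8, -1.0)]

def cost(theta):
    return sum((qnn_output(x, theta) - y) ** 2 for x, y in data)

theta, lr, eps = 0.5, 0.2, 1e-4
for _ in range(200):
    grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)  # classical outer loop
    theta -= lr * grad

print(f"trained theta = {theta:.3f}, final cost = {cost(theta):.4f}")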

Jessica Pointing:

So the main part of the quantum neural network is really the way that you encode the data into it, and that's really important for understanding what the bias is. And so we were interested to see what type of bias we could get and whether you could get this simplicity bias. And so what we

Kevin Rowney:

So that's an active area of interest, right? I mean, all the numerous ways of doing encoding. There have been spectacular numbers of attempts, right, at finding, you know, new representations that might create this breakthrough. Still, I don't think they've yet found compelling evidence of some basis that's going to create a huge advantage in QML. It's an open question.

Kevin Rowney:

So it seems like your paper makes valuable contributions to this exact question.

Jessica Pointing:

Yeah. Exactly. So, yes, there are some papers that have come up with different encoding methods or, yeah, tried to look at them. And we look at it through this angle of simplicity bias and seeing

Jessica Pointing:

What happens there. And yeah. So in particular, we look at Boolean data, so just 0s and 1s, because it's easier to make a statement about the properties of the data. We can say, what is the complexity of the data? For example, if I just have a string of 0s, then we call that very simple because, you know, I can just say, oh, it's 0 repeated so many times.

Jessica Pointing:

But if it's something like 0101001, something very random, then that has a high complexity. So we want to see what sort of functions the quantum neural network produces. Does it produce something that's simpler or more complex, and with what probability? That's what we look at. We look at a range of different encoding methods, but I'll highlight the main ones. The most basic way of encoding the data is just in the state of the quantum bit.

Jessica Pointing:

So for example, if my data is 01, I make one quantum bit 0 and I make the other quantum bit 1. This is called basis encoding, so it's the most basic type of encoding. And other papers have shown this too, but there's basically no bias for that. Basically, it means it's kind of a random learner, because if you have no bias, it's like you're just randomly picking something.
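As a concrete sketch of the basis encoding just described (purely illustrative): a classical bit string is mapped to the matching computational basis state of the qubit register.

import numpy as np

def basis_encode(bits: str) -> np.ndarray:
    # amplitude vector of length 2^n with a single 1 at index int(bits, 2)
    state = np.zeros(2 ** len(bits))
    state[int(bits, 2)] = 1.0
    return state

print(basis_encode("01"))   # [0. 1. 0. 0.], i.e. the |01> basis state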

Kevin Rowney:

Right.

Jessica Pointing:

So going back to the Amazon example, if I have no bias, there's no way for me to choose which headphones I'm going to pick. I'm just gonna

Kevin Rowney:

choose something random. Meandering at random around

Jessica Pointing:

the

Kevin Rowney:

real estate space. Yeah. Right.

Jessica Pointing:

Exactly. So that basically means it's not really effective as a learner, because it's not going to learn anything. But the thing is, you can actually restrict the expressivity. The expressivity is what sort of functions it can actually express. So, going back to the Boolean strings, maybe it can't express, you know, all zeros or it can't express all ones.

Jessica Pointing:

The way that we build the quantum neural network determines what sort of functions it can express. And if it can't express those functions, then it's going to be hard to learn them. So what we find is that you can restrict the expressivity of the quantum neural network, and that can somehow introduce some sort of bias. But it's sort of artificial. The way I think of it, maybe intuitively, is going back to the Amazon example.

Jessica Pointing:

If I say I have a budget, and I'm only going to look at headphones below a certain price, below $50 or something, then, if you look at all of the items, there is some sort of bias. Right? There's a bias towards the cheaper ones, but it's because you're not able to even express the ones beyond that.

Jessica Pointing:

So

Kevin Rowney:

I see. Right. Right.

Jessica Pointing:

Yeah.

Kevin Rowney:

And if I'm not mistaken, inside your paper you're really using two measures, right, to zero in on these features. It's Shannon entropy, right, and LZ complexity. Do I have that right?

Jessica Pointing:

Yeah. Yeah. You're you're correct. Yeah. So, yeah.

Jessica Pointing:

So the reason, well, this goes into a little bit of other things. But yeah, these are the two measures. It's a whole rich thing.

Jessica Pointing:

But yeah. Basically, these are measures of the complexity of the Boolean string that I talked about.

Jessica Pointing:

And we wanted to distinguish between entropy and the LZ complexity, because the LZ complexity is maybe a truer way to see the actual complexity of the string, whereas entropy is just looking at the frequency of zeros and ones in the string, so it's a bit more trivial. So we wanted to distinguish between those two.
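A rough sketch of the two measures for Boolean strings: Shannon entropy depends only on the 0/1 frequencies, while a Lempel-Ziv style phrase count is sensitive to structure and repetition. The LZ78-style parsing below is one common variant chosen for brevity; the paper may use a different Lempel-Ziv formulation.

import math

def shannon_entropy(s: str) -> float:
    # entropy of the 0/1 frequencies only
    p1 = s.count("1") / len(s)
    if p1 in (0.0, 1.0):
        return 0.0
    return -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))

def lz_phrase_count(s: str) -> int:
    # LZ78-style parse: each new phrase is a previously seen phrase plus one symbol
    phrases, current, count = set(), "", 0
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    return count + (1 if current else 0)

for s in ["0000000000000000", "0101010101010101", "0110100010110101"]:
    print(s, round(shannon_entropy(s), 3), lz_phrase_count(s))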

Kevin Rowney:

And thank you for tolerating all my tangents here. I was just trying to prompt questions to illustrate key points for the audience. So please continue. Sorry for my interruption.

Jessica Pointing:

No. No. No. No. Thank you.

Jessica Pointing:

Yeah. So we have the basis encoding, and then I think the encoding method that is really interesting is the amplitude encoding, because we actually show that there can be a simplicity bias with amplitude encoding. This is basically where you put the data into the amplitudes of the quantum state. We do see simplicity bias, but then we show that, actually, the expressivity of it is reduced. So it's not able to express certain functions.
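And a matching sketch of amplitude encoding, again just illustrative: a classical vector of length 2^n is rescaled so its entries become the amplitudes of an n-qubit state, which already hints at constraints, since for instance the all-zero vector cannot be normalized into a valid state.

import numpy as np

def amplitude_encode(x) -> np.ndarray:
    x = np.asarray(x, dtype=float)
    assert len(x) & (len(x) - 1) == 0, "length must be a power of two"
    norm = np.linalg.norm(x)
    assert norm > 0, "the all-zero vector has no valid amplitude encoding"
    return x / norm

state = amplitude_encode([3.0, 1.0, 0.0, 2.0])    # 2 qubits hold a length-4 vector
print(state, np.sum(state ** 2))                  # amplitudes, and their squares sum to 1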

Jessica Pointing:

And it's quite interesting, because if we go back to classical machine learning, the perceptron, which is basically just a single-layer neural network, is like the building block of neural networks. There was a paper by Minsky that showed that the perceptron did

Kevin Rowney:

not Classic story.

Jessica Pointing:

Yeah. The perceptron couldn't express the XOR function, which is basically the parity function for two bits. Basically, the parity function outputs 1 if the input has an odd number of ones. But anyway, this kind of prompted this AI winter, because people were like, oh, no.

Jessica Pointing:

It can't express

Kevin Rowney:

It would that was catastrophic for us

Jessica Pointing:

to do

Kevin Rowney:

the agenda. Yeah.

Jessica Pointing:

Yeah. Exactly. But it did obviously spur on research to come up with, you know, multi-layer neural networks and actually see that we could express more functions. So, kind of in a similar way, the amplitude encoding can't express the parity function.

Jessica Pointing:

And, actually, if you go to higher numbers of qubits, there are more functions it cannot express.
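A small illustration of the perceptron/XOR story mentioned above, as a hedged sketch rather than a proof: a brute-force grid search finds no single linear threshold unit that reproduces XOR on two bits, while a tiny network with one hidden layer (hand-picked OR and AND units) does.

import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor = np.array([0, 1, 1, 0])              # parity of the two input bits

def perceptron(w1, w2, b):
    return (X @ np.array([w1, w2]) + b > 0).astype(int)

grid = np.linspace(-2, 2, 21)
matches = sum(np.array_equal(perceptron(*combo), xor)
              for combo in itertools.product(grid, repeat=3))
print("single perceptrons matching XOR on the grid:", matches)   # 0

# one hidden layer fixes it: XOR = OR and not AND
hidden = np.stack([(X.sum(axis=1) >= 1), (X.sum(axis=1) == 2)], axis=1).astype(int)
output = (hidden @ np.array([1, -2]) > 0).astype(int)
print("hidden-layer network matches XOR:", np.array_equal(output, xor))   # True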

Kevin Rowney:

And that's a really basic operator. I mean, if it can't even get to that, its simplicity is obviously too extreme to an extent. Right?

Jessica Pointing:

Yeah. So, as you can see, that is kind of an obstacle to becoming a general-purpose learning algorithm. Because, and this goes back to the context at the beginning, deep neural networks are able to be highly expressive, and they're able to have this good bias towards simple functions, which enables them to generalize well on data that they haven't seen. But I guess our paper is suggesting that there is this trade-off with the encoding method that you use: you can have a good inductive bias, like the simplicity bias, but then you don't have good expressivity.

Jessica Pointing:

Or if you have good expressivity, like the basis encoding, now you don't have a good inductive bias. And that's a bit of a problem for general purpose learning.

Kevin Rowney:

You really have to have both. Right? I mean, there's just no other way. Yeah.

Jessica Pointing:

Yeah. At least to be comparable to deep neural networks because,

Kevin Rowney:

Right.

Jessica Pointing:

Yeah. Because deep neural networks are able to do this, and most real-world data is simple. So a quantum neural network, if it wants to be a good general-purpose learning algorithm on real-world data, should have the expressivity and the simplicity bias. At least, that's what we claim it should have.

Jessica Pointing:

So

Kevin Rowney:

And so the paper covers the amplitude encoding. It covers the canonical basis encoding. Any other encodings in the scope of analysis here?

Jessica Pointing:

Yeah. So we do look at some other encoding methods. We look at this one called the ZZ feature map, which is a popular one based on this other paper on supervised learning with quantum-enhanced feature spaces. And we also look at this other one, which is basically kind of a random non-unitary transform. But, I mean, that's going into the specifics. We showed that these have a trivial bias, so they're not as interesting. But what is interesting is that the ZZ feature map actually does have a bias towards the parity function, in some sense, which it was sort of designed to do.

Jessica Pointing:

And so this actually raises another point, which is that you can always create an encoding method that has a bias towards the thing that you're looking for. Yes. So you can do that. But the thing is

Kevin Rowney:

It's a general. Yeah.

Jessica Pointing:

Right? Right. So, I mean, that's what kind of happened in classical machine learning.

Jessica Pointing:

I mean, at first you're kind of hand-encoding the problem into the model. But now we've kind of moved past that, because we don't need to do that anymore; you can just take a neural network, and it's just so general it applies to most problems. So I think

Kevin Rowney:

You're cheating the bias-variance trade-off there. Right? Yeah. Overfitting on one particular use case and, right, blowing the rest.

Jessica Pointing:

Right. Right. Mhmm. Yeah. So I think that's the thing, and this kind of goes into the cases where we see quantum advantage.

Jessica Pointing:

Basically, they do this. They take a particular problem that they know may be hard to do classically, and then they create an encoding method essentially to target right at that point. Yes. It has a bias towards that particular problem.

Sebastian Hassinger:

I see.

Jessica Pointing:

And then it's able to do well, which is which is good.

Sebastian Hassinger:

Self fulfilling prophecy.

Jessica Pointing:

Yeah. But I guess the good thing is they do show that, you know, it's hard to do that in a classical setting, but it's easy, maybe, if you had an ideal quantum computer, like a fault-tolerant quantum computer. So it's good research to do that. But as you can see, these are very specific cases. So yeah.

Kevin Rowney:

And so this is really, I think, the crux of the matter for me. So help me understand. It looks like you're doing this fairly detailed analysis, sketching the possibilities and limits, right, with these QML algorithms. And I think you're working within a framework, which is, you know, this variational quantum ML. It's got broad generality across the entire space.

Kevin Rowney:

And you seem to be sketching out this possibility that numerous of these encodings just won't do the job. And so the question I guess I'm coming down to, and maybe I'm going out on a limb here, I don't wanna put you in an awkward spot, but is it possible that this paper sketches fundamental limits, right, on the eventual merit of quantum machine learning?

Jessica Pointing:

Yeah. I mean, I think there are other papers that have definitely kind of gone in this direction and hinted at this in different ways, or come at it from different angles. And as for our paper, I mean, we haven't proved this sort of bias and specificity trade-off or anything. So I do think there would maybe need to be more research done to really see if this is something that is very fundamental to the framework that we have.

Kevin Rowney:

Attribute of the space right

Jessica Pointing:

now

Kevin Rowney:

versus some finding that is applicable in some scope but not general yet. Yeah.

Jessica Pointing:

Right. But I think our paper contributes to this idea that quantum neural networks, at least the frameworks we have at the moment, are probably not going to be these general-purpose learning algorithms on classical data. And in one sense, that's like, okay. But I think it's good, because it can then lead us towards other alternative directions, which is sort of

Kevin Rowney:

For instance, could it be there's some other as-of-yet undiscovered encoding that, you know, satisfies all these desirable requirements? So it's not just targeted at a particular problem. It's got generality. It's got the right bias. It's got the right simplicity.

Kevin Rowney:

I mean, is it possible there's just some undiscovered outcome, and advanced research could come across a new encoding?

Jessica Pointing:

Right. Yeah. So, I mean, given the literature and what we show, I think it's unlikely, but, yeah, you never really know. So it could be possible.

Jessica Pointing:

I mean, the ideal scenario is you have this encoding method that has a good inductive bias and good expressivity. And that's just to make it comparable to deep neural networks. Right? And it has to do something more: it has to have some sort of speedup over the deep neural networks. So you can see that it's something very difficult to do, because we don't even have, as we talked about before, the theoretical underpinnings of the deep neural networks to even compare against, to see if maybe there is some sort of, you know, exponential advantage or something.

Jessica Pointing:

And then you also have the problem that, at the moment, the hardware is, you know, very limited, and it's nowhere near deep neural networks. So, yeah, that's the ideal scenario. But I think it's also possible that we just look at completely new frameworks for quantum neural networks.

Kevin Rowney:

Right.

Jessica Pointing:

So this is based on the standard framework, which I talked about, which is this encoding method and this variational circuit. But I think there could be completely new strategies for doing quantum neural networks. And I think that's probably a bit more fruitful to look at. And yeah. So

Sebastian Hassinger:

That's interesting. So in a sense, what you're saying is that, you know, we've gone through this process of trying to fit what we know about classical neural networks onto quantum computers, at least theoretical quantum computers, which don't exist. And it doesn't seem like there's any inherent performance gain from the simplistic porting of those methods onto quantum computers. Is that right?

Jessica Pointing:

Yeah. Sort of. I'd say that, just by using this framework, we're not getting this general-purpose advantage; it's only for very specific use cases, which actually, at the moment, haven't really been shown to be relevant to real-world practical use cases. And also

Kevin Rowney:

One possible alternative is: is there one narrow but extremely valuable problem

Jessica Pointing:

Right. Right.

Kevin Rowney:

That there

Jessica Pointing:

could be

Kevin Rowney:

there could be discovered a brand new encoding that would nail it on that one. Right? That would be

Jessica Pointing:

Yeah. So I think that's an alternative area of research: if we can find a relevant, useful, very specific use case, and then you create an encoding method for that. That's also a possibility. But, yeah, that is yet to be done as well.

Kevin Rowney:

Wow. So there's just a whole landscape of really interesting questions ahead.

Jessica Pointing:

Mhmm.

Kevin Rowney:

And this paper, again, it seems like it sketches out this fundamental attribute of what is due diligence, right, on cross-checking one's new results in QML, and looking at these various trade-offs in ways that really tell the story. Because I think you're right. There have been a lot of results that have been published and hyped in terms of their huge advantage for quantum machine learning. Sounds like a cool topic. And later there are retractions, there's dequantizing, there's, you know, all sorts of ugly outcomes here.

Sebastian Hassinger:

It's a it's

Kevin Rowney:

a treacherous space. Yeah. But, again, it just feels like a really interesting framework you've got here to at least help evaluate, with more rigor and clarity, you know, future algorithms in this domain.

Jessica Pointing:

Yeah. You bring up an interesting point about the claims of a quantum advantage. And as I said, there are theoretical claims, which are based on all these complexity arguments and stuff, and that's good, but it's not really relevant, and it's not available on current hardware. And then you have these claims that have been made empirically, so they run some experiments.

Jessica Pointing:

But actually, if you take a closer look at some of these, well, one issue is they may make comparisons to very simple classical deep neural networks. And in some sense, it's a straw man argument, you know, because they're comparing it to something that actually isn't that good. So, yes, you get an advantage, but you've compared it to something that's not that good. If you actually compare it to something that's good, then that's not necessarily the case. And then the second thing is they may use very simplified datasets, and they may use them at a very, very small scale.

Jessica Pointing:

And then it's hard to make arguments about the larger scale when you're looking at the smaller scale. And I think Maria Schuld actually says a lot about this, and she has a lot of interesting papers. I think some of her papers are questioning what's going on in the field.

Sebastian Hassinger:

Right. Right. Yeah.

Jessica Pointing:

And so, yeah, I think she has a paper called something like "Is quantum advantage even the right goal?" And she's saying that, you know, we have this hyper-focus on quantum advantage, but it's very limited because of these problems I've just talked about and that she talks about. And she suggests ways to move forward by looking at alternative things. One of them is kind of what I mentioned, this idea of a new building block for quantum neural networks. Yeah. Kind of a new, maybe, framework.

Jessica Pointing:

And then also keep on developing the quantum software, and looking at quantum kernels and things like that. So, right, yeah, I think there are some people talking about it. So

Kevin Rowney:

A possible future guest. Yeah. She'd be Yeah.

Jessica Pointing:

Oh, yeah. She'd be great.

Kevin Rowney:

Yeah. So far, it looks like, you know, you've got Grover's algorithm. You've got Shor's. Those are clear breakthroughs for the quantum computing scenario. But it's hard so far to see clearly, right?

Kevin Rowney:

A big, prominent example of a killer algorithm that really rings the bell.

Sebastian Hassinger:

Well, and it seems like, as you were saying earlier, Jessica, we don't really understand how deep neural networks work classically anyway. They're heuristics, or our understanding of them is heuristic. So it seems like a massive challenge to figure out how to create a theoretical proof of advantage in a quantum setting for something we don't really understand how it operates in a classical setting.

Jessica Pointing:

Right. Right. For the neural networks. And there is something interesting: we haven't talked about quantum kernels, and that's a whole other thing. But I think quantum kernels are actually interesting, because we can sometimes make more of these arguments.

Jessica Pointing:

And that's actually what some of the papers do when they look at

Kevin Rowney:

Yeah.

Jessica Pointing:

They look at quantum kernels versus the classical kernels, and then they are able to make these claims of quantum advantage. But, you know, the thing is, classical neural networks have overtaken kernels in the classical setting. There's a reason why kernels aren't as popular: because, you know, they're harder to scale and things like that. So, yeah, there are many, many components that need to be taken into account.

Kevin Rowney:

You have the same dilemma, don't you, with kernel methods as you do with this scenario with the encoding choice. Right? There are numerous different kernels you could conceivably choose, finding the best fit against a given problem with adequate, you know, generality and time efficiency. Not an easy, trivial problem, that one.

Jessica Pointing:

Right. Right. Yeah. And Maria Schuld's work does this. She has some papers mapping quantum neural networks to kernels. Yeah.

Kevin Rowney:

That's a cool result with the broad generality of

Jessica Pointing:

Yeah.

Kevin Rowney:

Of QML reduced to quantum kernel methods. Yeah.

Jessica Pointing:

Yeah. Which is interesting, and it does help us have a better theoretical understanding. But, yeah, I guess as you can see, there's still a lot to be understood. Okay. Yeah.

Jessica Pointing:

Which I guess is what science is, isn't it? And it's fascinating because there are all these Absolutely. yeah, all of these things coming together, like classical machine learning and quantum machine learning and just quantum computing. Yeah.

Sebastian Hassinger:

I mean, yeah, we could keep talking about this for a much longer period of time, but I'm curious. Is this central to your PhD thesis? Is this what you're working towards?

Jessica Pointing:

Yeah. So I've been working on this, and I've done some other projects: I did a project on a quantum compiler, for example, and one actually applying a variational quantum algorithm to an optimization problem in open-pit mining, where I was actually trying to find a real-world application.

Sebastian Hassinger:

Right.

Jessica Pointing:

Yeah. But I'd say the main focus of my PhD has been on this particular problem.

Kevin Rowney:

Yeah. Such a cool topic. We really appreciate your time, Jessica. And thank you so much for giving us this glimpse of the current status of this really, you know, interesting time that we live in.

Kevin Rowney:

There's, you know, so much to look forward to in terms of new results in this domain. So thank you so much.

Jessica Pointing:

Yeah. I appreciate it. Thanks.

Kevin Rowney:

Wow. That was so cool. It's just a great, I think, overview, right, of the landscape of the foundations of data science and the bias-variance trade-off, the huge new breakthroughs, you know, pioneered by machine learning and this whole generative movement, and this capacity of these systems to get to high accuracy with simple functions on this manifold of truth. That's just the context. Right?

Kevin Rowney:

Mhmm. Of base camp on this huge climb up Everest, so to speak, towards some possible huge advantage that might be in the future for using quantum computers on the machine learning use case. So, yeah, really interesting framework she's got here

Sebastian Hassinger:

to Absolutely.

Kevin Rowney:

do due diligence on these algorithms.

Sebastian Hassinger:

Yeah. I really appreciated her starting by giving us a foundation in the classical context. You know, it seems like, as I said, we've got this really interesting moment where deep learning, classical machine learning, is progressing, but there are questions about how far that progress is gonna take us.

Kevin Rowney:

Lots of fights. Some of it warranted. Yeah. I mean Right.

Sebastian Hassinger:

Sure. I mean, potentially. Right? We don't have a theoretical understanding of how these things work or what their limits are. These systems are being built by OpenAI, by Claude, by Google, etcetera.

Sebastian Hassinger:

And so, you know, it was really interesting to me to understand that level, certainly going into the quantum context, where we already have some theoretical basis for performance advantage in things like Shor's and Grover's and HHL, as you brought up in the conversation, Kevin. But, you know, we don't have that kind of generalized understanding in quantum machine learning. And in part, that's because we don't have that kind of generalized understanding in classical machine learning. So it's uncertainty piled on top of uncertainty, in a sense.

Kevin Rowney:

And so, yeah, such cool new frontiers she sketches out. Could there be, you know, a brand new set of encodings, right, that could show generality or applicability to a specific set of high-value use cases? Right. And also there's this whole, you know, interesting area, right, of kernel methods. Right. Which goes deep mathematically, but fascinating topic.

Kevin Rowney:

So, you know, I think we've got to talk to some more guests Absolutely. On that front. Yeah.

Sebastian Hassinger:

Absolutely. But, yeah, this conversation with Jessica was a terrific starting point. We really appreciate her time and her insight, and we look forward to continuing the conversation on this topic in the future.

Kevin Rowney:

Super fun. Okay. That's it for this episode of The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney. Our cool theme music was composed and played by Omar Costa Hamido. Production work is done by our wonderful team over at Podfi.

Kevin Rowney:

If you are at all like us and enjoy this rich, deep, and interesting topic, please subscribe to our podcast on whichever platform you may stream from. And even consider, if you like what you've heard today, reviewing us on iTunes and/or mentioning us on your preferred social media platforms. We're just trying to get the word out on this fascinating topic and would really appreciate your help spreading the word and building community. Thank you so much for your time.
