Quantum Supremacy to Generative AI and Back with Scott Aaronson


Sebastian Hassinger 0:01
The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney.

Kevin Rowney 0:31
Welcome back to The New Quantum Era. We have today a contributor in quantum computing who almost needs no introduction. We were lucky enough to score an interview with Scott Aaronson, who's a leading theoretician in the space. A very, very fascinating person with a long list of achievements relevant to the scope of our podcast. He's also the author, if you haven't seen it yet, of the fascinating blog called Shtetl-Optimized, a great treasure trove of wisdom and wit as well. It's a good time. Yes.

Sebastian Hassinger 1:04
Yeah, I really liked talking with Scott, Kevin. I think this is a great opportunity to talk to him in depth. He's sort of closely identified with quantum supremacy and the Google quantum supremacy experiment, which he helped design. But I think that thinking of him as the supremacy guy is selling him very short. He's sort of omnivorously curious. He spans a lot of different areas, not just quantum computing, not just complexity theory, not just computer science. He's genuinely curious, he's really good at connecting concepts, and he's extremely good at explaining things and digging deeply into them. So very, very excited to have him on the podcast.

Kevin Rowney 1:53
Great interview. We got into some depth on numerous topics of, I think, pertinence to quantum computing, but also, here and there, strayed into a little bit of the new work he's doing with OpenAI. Fascinating stuff.

Sebastian Hassinger 2:05
All right, here we go.

All right, we're joined today by a very special guest, someone who in quantum computing sort of qualifies for the "needs no introduction" kind of intro. We've got Scott Aaronson with us today. He is a theoretical computer scientist, the David J. Bruton Jr. Centennial Professor of Computer Science at UT Austin, quite well known for his work in computational complexity theory and quantum computing, and very closely associated with, for example, the early establishment of the Google supremacy experiment. So we're really excited about the conversation with Scott today, and super happy to have him with us. Scott, welcome. Thanks.

Scott Aaronson 3:22
It's great to be here. Thank you.

Kevin Rowney 3:24
Great to have you on man. Yeah,

Sebastian Hassinger 3:25
So yeah, I mean, we always like to start with sort of, you know, how did you get to quantum computing? As a super new field, there's no real established path, and it feels like everybody has their own little, you know, journey to get into quantum computing. So what was that journey like for you?

Scott Aaronson 3:43
I don't know that quantum computing is that new anymore! But in the 90s, it was, yes. As a teenager, you know, I was very into programming, mostly because I wanted to make my own video games. And I quickly realized that while I liked programming, I was awful at software engineering: making my code work with other people's code, or getting it done by a deadline, or documenting it, or any of that stuff, right. Yeah, I was more drawn to the theoretical side. I mean, I learned about the P versus NP problem, and, you know, had fantasies that, okay, everyone must have just been overcomplicating this, and if I just spend a few weeks on it, then maybe, you know, with my childlike innocence, I'll see further. You know, I mean, it's good to be burned, I think, sometime in one's life by that experience. And then, at some point, I read a popular article about Shor's factoring algorithm, which had just been discovered, you know, a year or two prior, '94 or '95 or so; you know, I would have been 14 or 15. And my first reaction when I read about it was: this sounds like obvious garbage, this sounds like physicists who just do not understand what they are up against. You know, because the way that popular articles would describe quantum computing, then as now, was, well, it just somehow kind of magically tries all the

Sebastian Hassinger 5:36
things at once.

Scott Aaronson 5:38
You know, Shor's algorithm will just try every possible divisor in a different parallel universe. That sounds too good to be true. That sounds like, you know, even if it worked on some tiny scale, it's not going to scale up, right. But, you know, okay, in order to be sure, I had to find out: so what is this quantum mechanics thing, after all? And, you know, of course, I had read popular science books, or I had, you know, high school chemistry, where they tell you that, well, the electron is not in one place, it's really just in a smear, a probability wave, you know, all over the place. Until you look, and then it decides where to be. And you think to yourself, like, you know, that doesn't make any sense. Isn't that just a fancy way of saying that they don't know where it is? But then,

Kevin Rowney 6:39
In other words, what was

Scott Aaronson 6:40
what was new with quantum computing in the mid 90s was that, you know, you could look it up. At that point there were already web pages explaining it — you know, the web itself was not that old, right, but there were websites explaining Shor's factoring algorithm and Grover's search algorithm. And rather than taking you through sort of years of physics and partial differential equations before you could understand it, they just sort of immediately told you, for this very practical purpose, you know, what is the core of quantum mechanics. And it's that the state of any physical system is a vector. It's a unit vector of complex numbers, right. And, you know, these complex numbers, which are called amplitudes, are very closely related to probabilities. In fact, when you make a measurement, the probability is the squared absolute value of the amplitude of some state. But they're not probabilities, right? You never talk about a negative 30% chance of rain, but the amplitudes can be negative or complex. And that is the source of all the difference. Right? You know, it's not just that nature is probabilistic, which, you know, that itself would not be the surprising part. You know, God playing dice, this is not actually as big a deal as

Sebastian Hassinger 8:07
Sorry, I'll be

Scott Aaronson 8:11
people quoting Einstein think. We would use probability all over the place, even if the world had been classical,

Sebastian Hassinger 8:17
right? Yeah, the real Vegas sometime.

Scott Aaronson 8:22
The really surprising part is what we have to do to calculate the probabilities at the atomic scale. And what we have to do is add up these amplitudes, which are these complex numbers. And if something can happen one way with a positive amplitude and a different way with a negative amplitude, then the two contributions can cancel each other out, right, interfere destructively, so the total amplitude is zero, and the thing never happens, right. And yet, if you blocked off one of the ways that it could have happened, now the amplitude is positive, or it's negative, and now the thing can happen. Okay, so probability is not monotone the way you thought it was, right: you can decrease the number of ways for a thing to happen, and thereby increase the chance of it happening. Okay. And quantum computing, all it is, is you take the theory of computation, as it had developed since the 1960s, and as I had already learned about, and you combine it with that new twist. So now the idea with a quantum computer is always going to be that you're trying to choreograph a pattern of interference, so that for each wrong answer, each answer you don't want to see, some contributions to its amplitude are positive and others are negative, and they cancel each other out; whereas for the right answer, the one you do want to see, the contributions to its amplitude reinforce each other. Okay? And, you know, it is not obvious how to do that for interesting problems, right. So, you know, computer scientists still have their work cut out for them. Okay, but this is the source of the power. Right? And, you know, it's a very specialized hammer that nature sort of gives you. And then, you know, as a computer scientist, you have to figure out what nails that strange new hammer can hit. Right. And that is what Shor did, right?
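Scott's picture of amplitudes and interference can be sketched in a few lines of plain Python — a toy single-qubit illustration, not anything from the conversation itself, and the function names are mine:

```python
import math

# A qubit's state is a unit vector of two complex amplitudes (alpha, beta);
# measuring gives 0 with probability |alpha|^2 and 1 with probability |beta|^2.
def probabilities(state):
    return [abs(a) ** 2 for a in state]

def hadamard(state):
    """Apply the Hadamard gate H = (1/sqrt(2)) * [[1, 1], [1, -1]]."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

ket0 = [1 + 0j, 0 + 0j]

# One Hadamard: equal superposition, so a 50/50 measurement.
once = hadamard(ket0)
print(probabilities(once))

# A second Hadamard: the two paths into |1> carry amplitudes +1/2 and -1/2
# and cancel out (destructive interference), while the paths into |0> reinforce,
# so the qubit is back to |0> with certainty.
twice = hadamard(once)
print(probabilities(twice))
```

Running it shows probability split 0.5/0.5 after one Hadamard and essentially 1.0/0.0 after two — the "decrease the ways, increase the chance" effect in miniature.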
He exploited very, very special properties of the factoring problem in order to set up a sort of interference experiment that would reveal the answer, right? And, you know, in some sense, the source of the power is the interference, combined with the fact that there is an absolutely enormous number of these amplitudes. Right. You know, if I have a state of 1000 bits, which can be in quantum superpositions of a zero state and a one state, so what we call qubits, then, you know, the number of amplitudes that I need is two to the 1000th power, right, which is much more than the number of atoms in the observable universe. Yeah. And in some sense, physicists had known that since Schrödinger, you know, wrote his paper in 1926. Right, but it was only with quantum computing that sort of people's faces were rubbed in the full staggeringness of this. You know, and finally, in the early 80s, when Feynman and David Deutsch and others started talking about this, the question was finally posed explicitly: does nature really work this way or not? Right? If it does work this way, then you should be able to exploit it to do computations that would otherwise take you an astronomical amount of time, basically. And if nature does not work this way, then, you know, in a way, that's even more exciting: that's a revolution in physics. So either way, let's find out. And, you know, when you put it that way, and when you combine it with the fact that in the 90s this field was brand new, and there were just open questions left and right, all around, I mean, the case was compelling. It was like, how could you not, you know? I mean, the main worry at that point was just that I would not be able to do anything new in this area, right. So, I had a summer job at Bell Labs, working on statistical software — it had nothing to do with quantum computing, but my boss was very nice.
And he said I could spend a few weeks, you know, reading up on this. And so I learned how it worked. I was very excited: I proved some theorems in quantum complexity theory which I thought would be new, publishable. You know, this would have been in like '97, I guess, when I was 16 or 17. But it turns out that those theorems were already known. Okay, but it was like, you know, maybe a few years before, I would have been rediscovering math that was known to Euler; and then I was rediscovering stuff that was known in the 1950s; but now with quantum computing, I was rediscovering stuff that was already known, but from like the year before.

Kevin Rowney 13:13
That's right, step by step.

Scott Aaronson 13:15
Closing the gap. Right, exactly. But then I found out that Lov Grover, the discoverer of Grover's algorithm — maybe, you know, the sort of other central quantum algorithm other than Shor's — worked at Bell Labs in the same building as me. So I went and talked to him. And, you know, I had ideas for how to improve Grover's algorithm, which were completely wrong. They just did not work at all. Okay, but Lov was extremely nice. And he offered me an internship to work with him the following summer. Cool. So I did that. And while I was there, I got to know a student from Berkeley named Ashwin Nayak, who was working with Umesh Vazirani. And, you know, at that time, that was kind of the center of quantum computing theory in the world. And so now I was sort of plugged in to what was actually happening in this field, and, you know, what the current open questions were. And I got the idea that, like, wow, this is really my dream. Like, I wish I could go to Berkeley for grad school and find out whatever these people know, right. I was at Cornell, right? At that time, there was no one at Cornell doing quantum computing. Actually, at Cornell, I was working much more in AI. I worked on RoboCup, a robot soccer team, you know, which was a very important experience in deciding not to become a software engineer.

Sebastian Hassinger 14:50
There is something humbling, I think, about those video clips of robot soccer players, missing the ball and falling over.

Scott Aaronson 14:58
It's basically really hard, yeah, and they're super hard problems. It was already cool then, and they're just incomparably more impressive now. Absolutely. I watched it recently, right. But again, you know, I realized, like, I would be off thinking about, like, could we formalize this problem about how to move the goal, such that we could prove that it's NP-complete? I'd be, like, really happy if I could finally do that. And they're like: yeah, we don't care.

Sebastian Hassinger 15:32
Maybe not an application guy.

Scott Aaronson 15:37
So, you know, but that summer at Bell Labs, I worked on proving a quantum lower bound — which means, you know, proving a limitation of quantum algorithms — for a variation of the Grover search problem called the AND of ORs. Okay, which is kind of like, you know, I give you a grid and I ask you: is there a black pixel in every column? Does each column have at least one black pixel? Okay, and you can do this kind of thing by running Grover's algorithm recursively. The question was, could you do better than that? And I tried and tried to solve this problem. I couldn't. And then that fall, Andris Ambainis, who is another student of Vazirani's at Berkeley, managed to solve that problem by inventing a whole new quantum lower bound method, what we now call the quantum adversary method. Okay, and, you know, so, like, I felt bad that I couldn't come up with it. But on the other hand, you know, I couldn't have come up with it. Right. But because I had spent the whole summer on the problem, I was immediately able to understand it. Right. And that really solidified that, you know, I wanted to go to Berkeley. And as it happened, I did get admitted to Berkeley for a PhD, but it was the AI people who recruited me, not the quantum computing people. But yeah, so I spent a year at Berkeley doing machine learning, actually, with Michael Jordan, who is, you know, a well-known machine learning guy. And, you know, even then — this was 2000, so like a decade before the deep learning revolution started, right — I could already see that deep learning, or what was then called neural nets, you know, statistical machine learning, let's say, was going to have an impact on the world sort of sooner than quantum computing. You know, I didn't know how well it was going to work when you just scaled it up, right? Yeah.
Hardly anyone knew that at the time. But I felt like I loved quantum computing more. And the reason was that, you know, in machine learning, then as now, the ultimate test for almost everything is empirical. You don't really understand why things work, right, but you see that they do, in fact, work, right? You know, you get some log-log plot, or, you know, some messy data, that

Sebastian Hassinger 18:22
robot kicks the ball. Yeah.

Scott Aaronson 18:25
shows it's doing well. And, you know, again, I felt like, if I had any comparative advantage, it was more on the side of explaining why things work. And so I sort of secretly, you know, wanted to sort of join the quantum crowd, or at least combine AI with quantum computing somehow. And at that time, like 2000, 2001, that was just a completely crazy idea. Right? Yeah. Like, a decade later, it became a whole industry, right. But at that time, in 2000, I felt like, okay, quantum AI sounds cool, but I just don't know what to say about it that's nontrivial.

Kevin Rowney 19:07
It was a science fiction topic back then, but yeah. Right.

Scott Aaronson 19:11
Right. But if I had just waited a decade, what I would have learned was that I didn't have to say anything nontrivial about it anyway.

Sebastian Hassinger 19:24
for the VC money FOMO.

Scott Aaronson 19:28
But then, the summer after my first year at Berkeley, I was at Caltech, in John Preskill's group, and, you know, there was a particular problem in quantum computing theory that Vazirani had, you know, sort of given out as his favorite. It was called the collision problem. Okay. So it's a variation of the Grover problem. The Grover problem says, you know, you have a black box mapping inputs to outputs, let's say, and you just want to find an input that is good, like one that maps to a one output, right? And for this problem, quantum computers can give you a quadratic speedup: if there are n possible inputs, then a quantum computer can find a good one, if there is one, in about square root of n steps, compared to the n steps that would be needed classically. Okay, and this has also been proven to be the best that even a quantum computer can do, at least for that sort of abstract problem. Okay. But now, with the collision problem, we imagine a function that maps messages to hashes, so like a cryptographic hash function, you know, something like MD5 or whatever. And now the goal is just to find two messages with the same hash. Okay, so there are many, many such pairs, right, because the hashes are much shorter than the messages, but you just have to find any pair. So this is a very fundamental problem in cryptography. And for this problem, it's well known that even classically, so with no quantum computer, there is already a square root speedup to be had over the most naive thing. And that square root speedup comes from what's called the birthday paradox. Yes, the birthday paradox is the thing where, if you put 23 people in a room, there's already about a half chance that two of them share a birthday, because what matters is the number of pairs of people, right, which grows quadratically with the number of people.
And likewise, if I have a function, you know, with n possible outputs, then to find a collision in that function, I have to query about square root of n random inputs, and then probably I'll have found some collision. And so we now see two completely different ways to solve this problem in square root of n time: using the birthday paradox, or using Grover's algorithm if we had a quantum computer. In 1997, Brassard, Høyer, and Tapp had a sort of remarkable paper where they said you could actually combine the two: you could have a Grover-enhanced birthday attack, and that ends up being able to solve the problem in cube root of n steps. Okay, wow. Combining the two square root of n speedups.
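The birthday attack Scott describes is easy to demo. This is a toy sketch: `truncated_hash` is an artificially weakened hash I made up so a collision is findable in milliseconds, and the final line just prints the square-root versus cube-root query-count scalings from the discussion:

```python
import hashlib
import math

def truncated_hash(msg: bytes, nbytes: int = 2) -> bytes:
    # An artificially weak hash: the first `nbytes` bytes of SHA-256,
    # so there are only n = 2**16 possible outputs and collisions abound.
    return hashlib.sha256(msg).digest()[:nbytes]

def birthday_attack():
    # Query messages until two of them hash to the same value; by the
    # birthday paradox this takes on the order of sqrt(n) queries.
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        h = truncated_hash(msg)
        if h in seen:
            return seen[h], msg, i + 1   # (first msg, second msg, queries used)
        seen[h] = msg
        i += 1

m1, m2, queries = birthday_attack()
assert m1 != m2 and truncated_hash(m1) == truncated_hash(m2)

# Classical birthday scaling ~sqrt(n) vs. the Brassard-Hoyer-Tapp
# quantum scaling ~n^(1/3), for n = 2**16 possible hash outputs:
n = 2 ** 16
print(queries, round(math.sqrt(n)), round(n ** (1 / 3)))
```

With 2-byte hashes, the attack typically succeeds after a few hundred queries, in line with the ~sqrt(n) ≈ 256 birthday estimate.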

Kevin Rowney 22:32
Very, you know, beautiful results.

Scott Aaronson 22:34
It makes a perverse kind of sense, right? Doesn't it?

Sebastian Hassinger 22:38
I mean, do you feel like that's sort of the root of your own interest in, you know, what's been called "dequantizing" quantum algorithms — like sort of trying to find classical approaches, maybe inspired by the quantum approach, but

Scott Aaronson 22:53
I think that's been part of the field since the beginning. I think, you know, you could say the dividing line between serious and non-serious quantum computing research is: are you asking the question of, can you actually beat the best that a classical computer could do at this task? Right, you know, the serious science begins when we ask that comparative question, right. And, you know, as obvious as that sounds, every day I see papers on the arXiv that just, you know, do not reach that threshold.

Sebastian Hassinger 23:31
And then you're called upon to debunk them on Shtetl-Optimized.

Scott Aaronson 23:37
This is part of, you know, how my blog became prominent. I didn't plan it that way. It was just that no one else was doing that, you know: taking what is obvious to everyone in the quantum computing research community, and just saying it out loud to the wider world.

Kevin Rowney 23:54
And that was one of the reasons we had such a primary attraction to having you on the podcast; it's just so great. I mean, I wonder — there's a family of other areas of research and algorithms in quantum computing. You touched on the two major ones, Shor's and Grover's, but there's stuff like adiabatic computation, quantum walks, topological quantum computing — so much talked about for the future — measurement-based quantum computation. What's your relative level of faith and optimism that those areas are going to produce new, significant breakthroughs on the order of Grover or Shor?

Scott Aaronson 24:28
I mean, I would say that many of those already have produced significant breakthroughs; I mean, they're a permanent part of our understanding of the subject. But in terms of giving, like, independent sources of quantum speedup that really matter in practice, right, yeah, it's been much, much harder than many people had hoped to expand the list of applications beyond what we already knew in the 90s. Right, right. Yeah, I think that's the truth of the matter. You know, I mean, there are many, many problems that seem amenable to a Grover-type speedup, right? That's like a large fraction of everything in computer science that, you know, can get some kind of polynomial speedup. And, you know, often it takes a lot of cleverness to figure out how, and to figure out what the best polynomial speedup is, right. So, just to complete what I was saying before: my first big result in quantum computing, the one that sort of got me into the field, was to prove that the Brassard-Høyer-Tapp algorithm for the collision problem was optimal. Well, I proved the first lower bound for the problem. And that was then improved, like, a month later by a colleague named Yaoyun Shi, and he showed that there's not even a slight improvement over Brassard-Høyer-Tapp. So, you know, with enough effort, we've been able to understand, for lots and lots of different problems — collision just being one example — what the best speedups are that a quantum computer can give you, at least in this sort of black-box setting, where you imagine that you have a function that you know how to compute — you know, you can compute it in superposition — but you don't have access to its internal workings.
Okay, so we have many examples of Grover-type speedups in that model, you know, and there might be many different ways of realizing Grover speedups, including via adiabatic quantum computation, which you mentioned. Right. But now, the trouble with Grover-type speedups is, well, (a) they're not exponential, right, and (b) because of the overhead of building a fault-tolerant quantum computer — right, the kind that can truly scale — in practice it's not that you're comparing n against square root of n, right? It's that you're comparing n against, let's say, square root of n times a million, or times a billion — some enormous constant prefactor. Right. And so,

Kevin Rowney 27:17
because you have to run a bunch of samples through to get a, you know,

Scott Aaronson 27:20
There was a recent paper treating that as a new revelation, but we've known it for a very long time. And, you know, what it means is that to make those quantum speedups really practical, either you need to massively decrease that constant — you know, find much more efficient ways to do error correction — or else n has to be really, really enormous. Okay, so then that leaves us with exponential speedups, right. And I think, from the 90s through to today, we've had two big families of exponential quantum speedups that seem like they could actually matter in practice. The first of those is just simulating quantum physics itself. This was the original application Feynman had in mind when he, you know, raised the idea of a quantum computer 40 years ago, and it seems so obvious that, like, what is there even to say about it? Actually, there's a lot to say about it.
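The constant-prefactor point can be made concrete with back-of-the-envelope arithmetic. In this sketch, the overhead factor of a million per step is just the illustrative figure from the conversation, not a measured number:

```python
import math

# Toy cost model: a Grover-type quantum search costs roughly C * sqrt(n)
# "steps", where C is the fault-tolerance overhead per step, versus n
# steps classically. C = 1e6 is an illustrative guess from the
# "million or billion" range mentioned in the conversation.
C = 1e6

def quantum_cost(n):
    return C * math.sqrt(n)

def classical_cost(n):
    return float(n)

# The quantum machine only wins once C * sqrt(n) < n, i.e. n > C**2.
crossover = C ** 2   # 1e12 for C = 1e6
assert quantum_cost(crossover * 100) < classical_cost(crossover * 100)
assert quantum_cost(crossover / 100) > classical_cost(crossover / 100)
print(f"break-even around n = {crossover:.0e}")
```

So with a million-fold overhead, the quadratic speedup only pays off on problem sizes past roughly a trillion, which is the sense in which "n has to be really, really enormous."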

Sebastian Hassinger 28:26
Yeah. Well, and it's really exciting to see, you know, things like the neutral atom devices from QuEra and Pasqal, who are doing things with 256 atoms that would be very, very difficult to simulate classically.

Scott Aaronson 28:42
Oh, absolutely, absolutely. I think that is almost certainly going to be the source of the first practical quantum speedup that we see for useful problems — you know, whether that's condensed matter physics, or simulating chemical reactions, or high energy physics, if we count that as a practical application. And, yeah, you know, this is sort of what a quantum computer wants to do: simulate problems that are themselves quantum mechanical. It's a minor miracle that you can sometimes coax it into giving a speedup for a purely classical problem. But, you know, now, of course, the other big class is breaking our current public-key cryptography, which is what Shor's algorithm and its variants do. Now, that's a very important application if you are an intelligence agency or a criminal syndicate; it is not obviously a positive thing for the world to have this ability, right. I mean, what it will mean in practice is that, you know, eventually we will all have to migrate to new quantum-resistant forms of encryption, and then, having put in that huge effort to migrate, if all goes well, we'll just be right back where we started. Yeah. So, you know, now, it's been one of the biggest problems in quantum computing theory for almost 30 years: can you expand the list of practical exponential quantum speedups beyond those two families? Right, and there have been valiant attempts to do so. You know, so this adiabatic algorithm that you mentioned is basically, sort of, you know: you take classical heuristic methods like simulated annealing, right, which are things that are not guaranteed to work — you know, often they don't work.
But in practice, often enough, they do, right. These are your sort of local optimization heuristics, and, you know, often they work astoundingly well. And now you can take those and you can sort of enhance them via quantum tunneling effects, right? And you can hope that the quantum heuristic algorithm will, in practice, be even better — at least sometimes — than the classical heuristic algorithm. Okay. So that was the idea, you know. I was around when Ed Farhi and his collaborators introduced this stuff in 1999, right. And at the time, I think they were super optimistic that, like, this might just solve NP-complete problems in polynomial time on a quantum computer, and already then I was very skeptical of that, right. But, you know, they had some data that looked like it, right. But, you know, at that point I was just a student, but I was already being this kind of

Sebastian Hassinger 31:56
Little did you know, that being the quantum skeptic would be, you're

Scott Aaronson 32:01
getting to be this kind of arrogant, you know, complexity theorist, saying, right: it's clearly going to become exponential, right. And in that particular instance, the arrogant complexity theorists were right; that is how it turned out. Okay, but then, you know, it was quickly discovered that one can construct instances of NP-hard optimization problems where even the adiabatic algorithm needs exponential time, and where, you know, the only speedup that it gets over a classical algorithm is basically the Grover speedup. Right. Okay. But then the question became: all right, well, that's the worst case. So forget the worst case, right. You know, maybe in the typical case, maybe this is a win. And, you know, that's been a discussion that's been going on from the early 2000s all the way till today, right. And, you know, unfortunately, our actual scientific understanding advances very, very slowly, right. And, you know, Matt Hastings had a breakthrough on the subject just a few years ago, where he showed, for the first time, that you could construct an artificial landscape where he could prove that the adiabatic algorithm reaches a global optimum in polynomial time, and no classical algorithm can do the same. Right — again, in this black-box setting, right. But now, we still have no idea: will that kind of landscape ever arise in practice? Right? And for practical optimization problems, people are still squinting at data and arguing about it. You know, we may not really know the answer as to what kind of speedups are possible until we have large quantum computers to test the thing out, until

Sebastian Hassinger 34:07
we can see the results.

Scott Aaronson 34:10
That's the thing with heuristics, right, yeah.
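For reference, the classical heuristic Scott keeps contrasting against — simulated annealing — can be sketched in a few lines. This is a generic textbook version; the toy landscape, cooling schedule, and parameters are arbitrary choices of mine:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, steps=20000, t0=2.0):
    """Generic simulated annealing: always accept downhill moves, accept
    uphill moves with probability exp(-dE/T), and cool T toward zero.
    Adiabatic/quantum-annealing proposals aim to improve on exactly this
    kind of local heuristic via tunneling through barriers."""
    x, e = x0, energy(x0)
    best, best_e = x, e
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        y = neighbor(x)
        ey = energy(y)
        if ey <= e or random.random() < math.exp((e - ey) / t):
            x, e = y, ey
            if e < best_e:
                best, best_e = x, e
    return best, best_e

# Tiny demo: minimize a bumpy 1-D landscape with many local minima.
random.seed(0)
f = lambda v: (v - 3) ** 2 + 2 * math.sin(5 * v)
step_fn = lambda v: v + random.uniform(-0.5, 0.5)
x_best, e_best = simulated_annealing(f, step_fn, x0=-10.0)
print(round(x_best, 2), round(e_best, 2))
```

Despite starting far away at x = -10, the random walk reliably lands in the low-energy basin near x = 3 — and whether quantum tunneling beats this kind of thermal hopping on realistic landscapes is precisely the still-open question being discussed.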

Kevin Rowney 34:15
Such a great deep dive on this algorithm. I think you're

Scott Aaronson 34:19
Sure, sure. But yeah, I mean, in the meantime, you know, this brings us to the saga of D-Wave, right. But, you know, what happened starting in 2007 or so was that people would get so excited about this adiabatic algorithm — you know, maybe, like, using tunneling to solve optimization problems faster — and it just fit in so well with the narrative that they wanted to tell, that, you know, quantum computers are going to revolutionize machine learning and optimization and, you know, vehicle routing and financial portfolio planning and all these things. You know, it just fit in so deliciously with that narrative that people would just not resist the temptation to say: okay, we use the adiabatic algorithm, and therefore, you know, this is practical. And they would just trust that in the business world, in the journalism world, people would not be asking the question: but did you beat a classical computer? How did this compare, you know, to the best classical algorithm? Right, and you can't just compare against classical brute force, right? That's tying the classical computer's hands behind its back. You have to compare to the best classical algorithms. Right. And

Sebastian Hassinger 35:48
as you said, that's where the actual disciplined scientific work is.

Scott Aaronson 35:52
That's exactly where the actual science is. And it's still unsettled, and it's still being argued about. At least there seem to be Grover speedups, and maybe, at least in some artificial scenarios, there's more than the Grover speedup. But will the "more" actually matter in practice? In twenty-some years, we still don't have a satisfying answer.

Kevin Rowney 36:23
So fascinating. In the same domain, a follow-up question about quantum walks. Are you an optimist on those, or

Scott Aaronson 36:31
I mean, quantum walks, often what they amount to is a way of achieving Grover-type speedups for a wider range of problems than you would have expected. That's one way of using quantum walks. Andris Ambainis had a famous example of this in 2003, when he gave an N^(2/3) quantum algorithm for what's called the element distinctness problem. That's the problem where I give you a list of numbers, and I just ask you to find any two that are the same, or alternatively, tell me if the numbers are all different from one another. It's related to the collision problem, but it's harder, because there need not be any collisions to find, and if there are, there might be only one of them. So this was one of the first examples of how sometimes there is a Grover-type speedup, but you only see it if you do something clever, such as a quantum walk. Now, there are also examples where you can get an exponential quantum speedup via a quantum walk. The seminal paper showing that was from 2002, by Childs et al. They constructed this absolutely bizarre-looking graph, called the glued trees graph. It's got two complete binary trees, and the leaves of one are connected to the leaves of the other via a random cycle.
And what they showed is that if you started, let's say, at the root of one of the trees, and your goal was to find the root of the other tree, and you could only do it by making local moves in the graph, then classically this would take you time that's exponential in the depth of the binary trees. Basically, you're just going to get to the middle, to the cycle that connects the two trees, and you're going to get stuck there for a huge amount of time, just wandering around in this labyrinth until you happen to find the way to the exit. But they showed that a quantum walk will actually just kind of barrel right through in a polynomial amount of time, because of quantum tunneling effects, basically. And so this was a beautiful example. And it was actually used in that breakthrough by Matt Hastings, and later people, that I mentioned before, where they found that there is ultimately an exponential quantum speedup via the adiabatic algorithm as well. The trouble is, these are all based on extremely artificial graphs. So for a quantum walk on a graph that would naturally occur in the wild, whatever that means, are there also exponential speedups in those kinds of situations? I don't think we really know yet.
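To make the glued-trees construction concrete, here is a toy classical sketch (my own illustration, not code from the episode; the function names are made up): it builds the graph and runs the classical random walk that gets lost in the middle.

```python
import random
from collections import defaultdict

def glued_trees(depth, rng):
    """Build the Childs et al. glued-trees graph: two complete binary
    trees of the given depth, with their leaf layers joined by a random
    cycle that alternates between the two trees."""
    adj = defaultdict(set)
    size = 2 ** (depth + 1) - 1  # nodes per tree, heap-indexed 1..size

    def add_tree(offset):
        for i in range(1, size // 2 + 1):
            for child in (2 * i, 2 * i + 1):
                adj[offset + i].add(offset + child)
                adj[offset + child].add(offset + i)
        return [offset + i for i in range(2 ** depth, 2 ** (depth + 1))]

    left_leaves = add_tree(0)
    right_leaves = add_tree(size)
    rng.shuffle(left_leaves)
    rng.shuffle(right_leaves)
    # Random cycle alternating left leaf, right leaf, left leaf, ...
    cycle = [v for pair in zip(left_leaves, right_leaves) for v in pair]
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        adj[a].add(b)
        adj[b].add(a)
    return adj, 1, size + 1  # graph, entrance root, exit root

def classical_hitting_time(depth, seed=0, max_steps=10**6):
    """Number of steps a simple random walk takes to get from one root
    to the other; classically this grows exponentially with the depth."""
    rng = random.Random(seed)
    adj, node, goal = glued_trees(depth, rng)
    steps = 0
    while node != goal and steps < max_steps:
        node = rng.choice(sorted(adj[node]))  # sorted for reproducibility
        steps += 1
    return steps
```

For small depths the walk stumbles out quickly, but as the depth grows it spends an exponentially long time wandering the middle of the labyrinth, which is the classical hitting time Scott describes; the quantum walk crosses the same graph in polynomial time.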

Sebastian Hassinger 39:54
Interesting. And so more recently, since the summer of 2022, you've been working with OpenAI. So you've sort of gone back to your roots in machine learning and AI. I'm curious, does that feel like, I mean, it is sort of back to the world of empirical proofs, right? Does the model do the thing you want it to do? And you're kind of trying to come in and add some level of explainability, or introspection, to these models. Is that right?

Scott Aaronson 40:24
Well, that was one idea for what I thought I might be able to do. As it turns out, explainability is also an extremely empirical problem.

Unfortunately for me, right. But what happened was that, since 2007 or so, I've known this community of people who have felt strongly that the most important issue facing humanity is to prevent an unaligned superintelligent AI from destroying the world, and that nothing else matters by comparison. And I would sort of play with this; I never rejected their premise as impossible. But I always felt like, okay, I'm not sure that I really see a research program here. Supposing that I agreed with you, what do you want me to do?

Sebastian Hassinger 41:31
There's sort of a flavor of religiosity in the whole thing.

Scott Aaronson 41:36
It just seems so far ahead. How do we know if we're making progress on it or not? Right, yeah. And people would kind of be amused and laugh about it. I think that in the last couple of years, people laugh less and less.

Kevin Rowney 41:57
It's always challenging to reason about black swan risk, right? Exactly.

Scott Aaronson 42:03
Exactly. Yeah. So maybe a year ago, someone in the comment section of my blog was saying, like, Scott, what is it going to take to get you to stop wasting your life on quantum computing and work on the one thing that really matters, which is AI? How much money would we have to offer you? And I don't think it's about money; it's, is there an actual meaty question that I could make progress on? Anyway, it turns out that some people at OpenAI read my blog, unsurprisingly, even a few former students from MIT who went to work at OpenAI. So Jan Leike, who is the head of the safety group at OpenAI, emailed me and was like, well, we think we do have problems. Are you serious about this? And would you like to spend a year at OpenAI? My first reaction was, why do you want me? I'm a quantum computing person. What do I even know about any of this? But I talked to Jan, I talked to Ilya Sutskever, who's the chief scientist of OpenAI, and first of all, they had a lot of extremely interesting things to say. And of course, I had been aware of what was starting to come out from OpenAI. I had played around with GPT, which even a year ago was already astounding, though much less astounding than it is now. And it felt like, okay, maybe things have changed. Maybe there were finally artifacts in the world that are powerful enough that AI safety is now a subject where you can get feedback, where you can see what is working and what is not working. These things are not going to destroy the world, hopefully, or turn us all into paperclips or whatever.
But millions of students are going to use GPT to do their homework.

Kevin Rowney 44:20
No stopping that, yeah.

Scott Aaronson 44:21
Right. People are going to use it to generate propaganda, to generate spam, to impersonate people, to do all kinds of nefarious things, and

Sebastian Hassinger 44:32
to ask someone on TaskRabbit to pass the "I'm not a robot" test. I love the TaskRabbit person asking, "you're not a robot asking me to do this?"

Scott Aaronson 44:47
Right. So, I mean, they've already shown some limited abilities to deceive humans in pursuit of a goal. And I said, okay, there's clearly going to be a big need to try to prevent these misuses. And even if that is mainly an empirical problem, maybe a theorist can help. So then I said, okay, it sounds interesting, but I'm in Austin, you're in San Francisco, I've got to teach, maybe some other year. And they were like, okay, trust us, this is going to be a really big year for this. You can do it remotely from Austin, and we'll pay you so you don't have to teach, and you can still run your research group, but just also think about our stuff. And I was like, all right, that sounds like a good deal. So that's how I got involved. And, because some people have asked me: the stuff I've been doing at OpenAI has nothing to do with quantum computing. It does draw on some of what I know as a theoretical computer scientist.

Sebastian Hassinger 46:18
What an interesting contrast, right? You've got a very capable black box that can actually do a bunch of stuff that we don't have a theoretical explanation for, versus the work you do in quantum, where it's all pretty much theoretical, because there's no real empirical side yet.

Scott Aaronson 46:36
In quantum computing, only within the last few years have we started to get devices that are sort of interesting from the standpoint of complexity theory, non-trivial. For most of the field's history, there were the experimenters just trying to get one or two qubits to work, and then there were the theorists off thinking, suppose you had billions of qubits, and what can you add, plus an interactive prover, a Merlin, or whatever; what could you still not do? The two communities had a very hard time talking to each other. That is changing now. The way that I like to put it is that in order to make progress in science, you typically need at least one of two things: either math, axioms that everyone agrees on so that they can prove theorems from them, or else experiment or observation. There's got to be something external to you that can tell you when you are being a doofus, when you're wrong. The best of all is if you have both, but you need at least one of them. In quantum computing, until very recently, we didn't have the experiments at anywhere near the scale that we cared about, but at least we had the theory. Mathematically, we could write down exactly what a quantum computer would be. And ironically, we understood the sort of eventual perfect quantum computer far, far better than we understood the noisy quantum computers of the present.

Sebastian Hassinger 48:31
Right. Yeah.

Scott Aaronson 48:33
With AI, we don't have a theory that people agree about. I mean, certainly, we don't have any theorems that would explain the recent successes of deep learning. The best we can say is that we can explain why none of the theorems rule it out.

Sebastian Hassinger 48:54
hear a phrase like emergent behavior, and I think, what's going on?

Kevin Rowney 49:00
Right, yeah.

Scott Aaronson 49:01
Right. These things are exploiting regularities of the real world, in the case of GPT, of all the text on the internet, that we do not know how to capture in a theory. Now, with AI safety, I think for a long time people were trying to make progress with neither experiment nor mathematical theory, and that's just really, really hard to do. The failure mode that I think people would get into was, they would just say, well, assume an arbitrary superintelligence that wants to destroy you. And no matter what idea you have for how to contain it, or align it, or whatever, well, it won't work, because the superintelligence is smarter than you; it's already anticipated what you're going to do. And so nothing works, and so we're doomed. And it's like, okay, well, whether that's true or not, you don't make a lot of progress that way. I think now that you have GPT out there in the world, it makes the question much more concrete, because you can see how people are misusing it, and then you can think about what we can do to prevent that kind of misuse. And those questions might have theoretical components.

Sebastian Hassinger 50:30
Well, or just having it tackle a problem. Like, on Shtetl-Optimized, you just posted its results from taking your final exam, which it did okay on, actually.

Scott Aaronson 50:44
It got a B on my quantum computing exam.

Sebastian Hassinger 50:48
Yeah. And then grade-grubbed, to try to ask for more.

Scott Aaronson 50:54
This was my TA's idea.

Sebastian Hassinger 50:57
Hilarious.

Scott Aaronson 50:59
He graded it. He said, you know, why don't you now ask it whether it wants to request a higher grade?

Kevin Rowney 51:09
The same position you find yourself in sometimes with students. Yeah.

Scott Aaronson 51:13
And it will grade-grub, in the same kind of way that students grade-grub. Identical tone, yeah, arguments that are rather similar to the arguments students make. I mean, this is what it's designed to do. It's exactly what a person would say or do in this situation.

Sebastian Hassinger 51:33
I'm just curious, in something like that, where obviously the field itself is so familiar to you, it's your exam, and you've graded students who've taken that exam for some time now: was there any insight in having GPT tackle it? Were there surprising things it got right or wrong that give you some insight into how maybe there can be a theory of mind for a GPT?

Scott Aaronson 51:58
Okay, I mean, so I think we are just at the dawn of the field of robopsychology that Isaac Asimov was writing about. This is finally an empirical subject. And there could be hundreds of thousands of papers written just trying to explain GPT.

Sebastian Hassinger 52:23
New jobs for humans.

Scott Aaronson 52:26
I have so many friends in AI who are just moping around these days, lamenting that there's nothing for them to do anymore, because all of their special-purpose systems are just beaten by GPT, which is this big, inscrutable black box that they can neither understand nor compete with on scale. And I'm like, okay, but just reconceptualize this: you can now be the first robopsychologists. There's just so much to do. So, yeah, I think what surprised me the most about GPT taking my final exam is that, unlike students, it did much better on conceptual questions than on calculation questions. On a calculation question, usually it would set up the calculation in something like the right way, but then it would botch the execution. Which students do also, of course, but GPT has no idea how to check its work. People have learned all kinds of hacks for sort of forcing it to check its work, but in order to make things fair, I didn't use those. I just gave it the problems and told it to solve them. And you have to remember that GPT is generating one token at a time. It doesn't have any internal state beyond the tokens it has generated so far, which means that if it starts generating a wrong answer, it just wants to keep going. It doesn't want to back up.

Kevin Rowney 54:23
And it does so with dead-calm confidence. It's just making stuff up.

Sebastian Hassinger 54:30
The most human thing about it, actually, is that total assurance that "I'm 100% correct."
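Scott's point about one-token-at-a-time generation can be made concrete with a toy sketch (mine, not from the episode; the bigram table and its probabilities are invented). The only "state" is the text emitted so far, so once a wrong token is sampled, every later token conditions on it, and nothing ever backs up.

```python
import random

# Toy bigram "language model": P(next token | previous token).
# These probabilities are made up purely for illustration.
BIGRAMS = {
    "<s>":   {"2+2": 1.0},
    "2+2":   {"=": 1.0},
    "=":     {"4": 0.6, "5": 0.4},   # the model may start down a wrong path...
    "4":     {"<eos>": 1.0},
    "5":     {"so": 1.0},            # ...and then it keeps elaborating on it
    "so":    {"2+2=5": 1.0},
    "2+2=5": {"<eos>": 1.0},
}

def generate(rng):
    """Sample one token at a time. The only state is the emitted
    sequence, and there is no mechanism to revise an earlier choice."""
    out = ["<s>"]
    while out[-1] != "<eos>":
        dist = BIGRAMS[out[-1]]
        tokens, probs = zip(*sorted(dist.items()))
        out.append(rng.choices(tokens, weights=probs)[0])
    return out[1:-1]  # strip the start and end markers
```

Whenever the "=" step samples "5", the model confidently continues the wrong derivation to the end, which is the behavior described above.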

Scott Aaronson 54:37
But if you look at the exam on my blog, you'll see that there are some conceptual questions that students had a lot of trouble with that it really just blows out of the water. So much so that I wonder, was there something in its training data? It wouldn't have been exactly that question; this exam was not online.

Sebastian Hassinger 55:04
I'm assuming it hoovered up all of the arXiv, though. I mean, yeah.

Scott Aaronson 55:07
That's right. And it would have hoovered up a bunch of other people's quantum computing exams. The trouble is, we don't really have explainability tools right now. You can't ask GPT, "what in your training data caused you to say that?" That's something that some people are thinking about; it would be incredibly useful.

Sebastian Hassinger 55:31
So, I mean, it also begs the question about, you know, there are domains of knowledge, anything having to do with math, for example, where you can validate the answer. And there's already a plugin for Wolfram Alpha, so that's obviously coming. But the question to me is, for law, or anything humanities-related, what kind of validation are we going to be able to build? There's obviously the explainability, but then there's the

Scott Aaronson 56:04
validation. There's always just humans upvoting or downvoting.

Sebastian Hassinger 56:09
That's true. That's kind of how we decide what's correct.

Scott Aaronson 56:13
And of course, that is already being done on a very large scale. Before the latest GPTs are released to the public, they undergo months and months of RLHF, reinforcement learning from human feedback, which is basically hundreds of contract workers looking at hundreds of thousands of GPT outputs and giving it little electric shocks: that's misinformation, or that's not the kind of thing that people want.

Kevin Rowney 56:55
Pavlovian conditioning for a large language model. Yeah, exactly.
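The human-feedback step Scott describes typically starts with training a reward model on pairwise preference comparisons. Here is a minimal sketch of that idea (a toy of mine, not OpenAI's code; representing an output as a set of named features is a deliberate oversimplification):

```python
import math
import random

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry pairwise loss used for reward models:
    -log sigmoid(r_chosen - r_rejected)."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

def train_reward_weights(pairs, features, lr=0.1, steps=500):
    """Tiny logistic-style trainer: learn one weight per feature so
    that human-preferred outputs score higher than rejected ones.
    Each pair is (features_of_chosen_output, features_of_rejected)."""
    w = {f: 0.0 for f in features}
    rng = random.Random(0)
    for _ in range(steps):
        chosen, rejected = rng.choice(pairs)
        rc = sum(w[f] for f in chosen)
        rr = sum(w[f] for f in rejected)
        # Gradient of -log sigmoid(rc - rr) pushes rc up, rr down.
        grad = 1.0 / (1.0 + math.exp(rc - rr))
        for f in chosen:
            w[f] += lr * grad
        for f in rejected:
            w[f] -= lr * grad
    return w
```

After training on comparisons where "helpful" outputs beat "toxic" or "spam" ones, the learned weights score the preferred behavior higher, which is the little-electric-shocks dynamic in miniature; the real systems do this over model parameters, not hand-named features.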

Scott Aaronson 56:59
So it is being RLHF'd, right. And we can do more of that. And people really disagree. Some people think that this kind of thing is going to be how we solve AI alignment, because it is hopeless to try to write down human morality as a list of rules. I mean, Asimov tried it with his three laws, and if you read his stories, they're all about how that doesn't work. So maybe the best we can do is treat being moral as yet another machine learning problem: here are a million examples from old literature and stories and Saturday morning cartoons of what a good guy looks like and what a bad guy looks like; now generalize from that. Other people worry that all of this is just too superficial, and that you're really just teaching the AI to appear to be moral, to give you the kind of answers that you want to hear, while the whole time it could be secretly working against your interests. So that's a debate that people have. And let's be clear: in this one year of spending part of my time working at OpenAI, I have not solved these problems. I feel like the problem of AI alignment contains 3,000 years of human moral philosophy inside of it.

Sebastian Hassinger 58:47
So, you're saying you'll need a little more time?

Scott Aaronson 58:51
Maybe one more year, you know? I've picked off little tiny pieces where I feel like I can say something. The best-known thing that I've worked on this year was a tool for watermarking the outputs of GPT, to make it easier to distinguish human-written text from GPT-written text. And there were some interesting theoretical problems there. The thing that I think most surprised people is that, by using a pseudorandom function, you can watermark the output of a large language model in a way that does not degrade the quality of the output at all. Which is really impressive.
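A toy version of the pseudorandom-function idea (my own sketch, not OpenAI's implementation; a SHA-256 hash stands in for the keyed pseudorandom function): at each step the sampler picks the token maximizing r^(1/p), which still samples exactly from the model's distribution over the randomness of the key, while a detector who knows the key sees suspiciously large values of -log(1 - r) on the chosen tokens.

```python
import hashlib
import math

def prf(key, context, token):
    """Toy pseudorandom function: hash (key, context, token) to a
    float strictly inside (0, 1)."""
    h = hashlib.sha256(f"{key}|{context}|{token}".encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / (2**64 + 2)

def watermarked_choice(key, context, dist):
    """Pick argmax r_i^(1/p_i) over tokens (p_i > 0 assumed). Over a
    random key this is distributed exactly according to dist, yet it
    is deterministic given the key and context."""
    return max(dist, key=lambda t: prf(key, context, t) ** (1.0 / dist[t]))

def detection_score(key, tokens):
    """Average -log(1 - r) of each chosen token; roughly 1.0 for
    unwatermarked text, noticeably larger for watermarked text. Must
    use the same 4-token context window as the generator."""
    score = 0.0
    for i, tok in enumerate(tokens):
        context = " ".join(tokens[max(0, i - 4):i])
        score += -math.log(1.0 - prf(key, context, tok))
    return score / max(len(tokens), 1)
```

Averaged over enough tokens, text generated with the key scores well above the roughly 1.0 expected otherwise. And as Scott notes next, paraphrasing or round-tripping through translation scrambles the token sequence and hence the signal.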

Kevin Rowney 59:38
So do you really feel, for our listeners, that there's room for optimism that educators can have a decent tool to hunt down this kind of plagiarism?

Scott Aaronson 59:47
Yeah, so there is some room for optimism. The bad news is that no matter what watermarking scheme I or anyone else comes up with, probably a student with a little bit of cleverness can figure out a workaround. I mean, even just: you ask GPT to write your term paper, but in French, and then you put it into Google Translate, right?

Sebastian Hassinger 1:00:19
You just gave away the secret.

Scott Aaronson 1:00:23
They would find that one anyway. But what you need, it seems, is to watermark on a more conceptual level. Or you could say, look, we're just going to use watermarking as one component. What we can do is push the rate of false positives almost all the way to zero, so we can say: if the watermark is there, then the student cheated.

Kevin Rowney 1:01:01
Yes.

Scott Aaronson 1:01:04
Yeah. And that might be enough to catch a large percentage of cheaters in practice. But I think that ultimately there will be some cat-and-mouse game to stay ahead of people misusing these tools, just like with Google against all the people doing search engine optimization.

Sebastian Hassinger 1:01:25
are you planning to change your exam at all for next year?

Scott Aaronson 1:01:29
Well, it'll be an in class exam.

Sebastian Hassinger 1:01:32
So okay, yeah.

Scott Aaronson 1:01:34
But, you know, in some cat-and-mouse games there's really not much to do except just try to be a better cat.

Kevin Rowney 1:01:42
just keep playing. Right. Yeah, exactly. Exactly. So.

Sebastian Hassinger 1:01:46
So circling back to quantum, to finish on the main topic of this podcast. I'm curious: there are a couple of vendors, IBM and I think Atom Computing and a few others, that are promising on the order of 1,000 gate-based qubits in the next year or so. Presuming they have good enough gate fidelities, two-qubit error rates that are acceptable, is there anything you can think of that you're excited about seeing implemented on 1,000 gate-based qubits with a decent amount of error mitigation? Let's not call it fault tolerance yet, but less noisy NISQ machines.

Scott Aaronson 1:02:34
I wish I had a better answer for you. I mean, the fear in quantum computing is that we are entering a bit of a NISQ desert. A region where, yeah, eventually, you know,

Kevin Rowney 1:02:53
like, it's gonna be great.

Scott Aaronson 1:02:54
There's an oasis on the other side, which is fault tolerance. Once you get all the way to fault-tolerant quantum computing, then you can scale up to as many qubits as you want; you can run Shor's algorithm. But we've struggled to come up with compelling applications for a NISQ quantum computer, or at least ones that we're confident are going to work and are going to beat a classical computer. There are certainly things that you can try, and that are worth trying. People will continue to try these optimization and machine learning things; they'll continue to do quantum simulations. And I think there is some chance that even before we get fault-tolerant quantum computers, we could start to see the first useful simulations in materials science and quantum chemistry, things like that. This would be analogous to, let's say, the era in classical computing when you had analog devices, or vacuum tubes, or electromechanical relays, before the transistor was invented. So you could have an era like that. I am personally interested in designing better quantum supremacy experiments. Maybe the most impactful thing I've been able to do in my career so far was to come up with boson sampling, and with these ideas that helped lead to the quantum supremacy experiment that Google did in 2019, and that since then has been done by USTC in China and by Xanadu. These experiments have given us new information. I think they've put the ball clearly into the skeptics' court: if quantum computing will not scale, then explain these experiments.
So Gil Kalai, the famous quantum computing skeptic, has been trying to do that for the last few years. I don't think he's succeeded. But having said that, the trouble is that because these are only being done with 50 or 60 qubits, which is all that you can do if you still want to verify the answer with a classical computer, and because the devices are noisy, there are classical algorithms to spoof these quantum supremacy experiments that are pretty good. They don't quite match the quantum computer, but depending on how you measure, they come within a couple of orders of magnitude of matching, say, as measured by how much electricity you need to spend to solve the problem. The quantum computer needs a dilution refrigerator to run, which is pretty energy intensive; the classical computer needs a lot too, and maybe it ends up being 100 times more than the dilution fridge. So that is not as clear a separation as we'd like. And in particular, it might take only one or two more improvements in classical algorithms before the quantum supremacy that we have could go away entirely. So I think that there is a strong need to do better quantum supremacy experiments, and I have some ideas about it that I want to pursue further. I think the ideal would be to figure out what we can do with a near-term, non-error-corrected device that beats a classical computer, and where a classical computer can efficiently recognize a correct answer. We just don't know how to do that yet. This is a problem for us, the theorists, to figure out.
I think the kinds of quantum circuits that we need seem to exist; we have some numerical evidence that they are out there. It's just that we don't know how to systematically find them. Okay,

Kevin Rowney 1:07:22
I thought that's what we heard on a previous podcast. I think it was the interview with Dorit Aharonov where she was talking, yeah, right, in her presentation, about log-depth circuits on NISQ-era machines as a specific class where there could actually be a fruitful set of targets. Right?

Scott Aaronson 1:07:38
That's right. So, okay, that's a little bit different from what I'm talking about. There's the discussion of: could these quantum supremacy experiments based on random circuit sampling themselves already give you a scalable advantage? With the existing experiments, we never presupposed that they did. My supposition from the beginning has been: if you want to scale, ultimately you're going to need error correction, and we are only targeting the sort of near-term regime of, let's say, 50 or 100 or 200 qubits, where you can hope to eke out an advantage which is clear but which is not scalable. But now, you could get a little bit greedier than that and say, well, why shouldn't it be scalable? And there has actually been theoretical progress, including by Dorit herself and Vazirani and others, that has now mostly ruled out the possibility that you can get a scalable quantum speedup via these sorts of experiments without error correction. Except, there was one case where we thought maybe, using logarithmic-depth quantum circuits, and there was a new result from just this year that ruled that out. But I think there's still one other loophole, which is constant-depth, non-spatially-local random quantum circuits. There might be a speedup there. But I wouldn't personally bet on it. I would just say, we might get lucky.

Sebastian Hassinger 1:09:33
And do you think, I mean, because you want it to be verifiable with classical systems, does that limit the number of qubits you can run to about 50 or so?

Scott Aaronson 1:09:44
With the ideas that we currently have, yes.

Sebastian Hassinger 1:09:47
Yeah. Oh, I see. I see. Yeah, yeah.

Scott Aaronson 1:09:49
I mean, we know what Shor's algorithm gives you. Ultimately, with Shor's algorithm, you can verify that these are the prime factors of the number, no matter how big it is. But without error correction, we just don't know how to do that yet.

Sebastian Hassinger 1:10:02
Yeah, right. Okay, all right, interesting. Cool. Well, I'm mindful of our time. Any last topics you wanted to touch on? I mean, that was great.

Kevin Rowney 1:10:19
Such a good time. Yes, sir. Thank you so much.

Sebastian Hassinger 1:10:23
Thanks for having me, very much appreciated.

Well, wasn't that fun? I really enjoyed that. Yeah, that was a great conversation. I really appreciated how he turned a simple ask of, like, how did you get into quantum, into a sort of giant overview of, you know, not just his history of coming into quantum computing, but almost an entire overview of the intellectual history of quantum computing. It was really

Kevin Rowney 1:11:34
vast. And you had a ringside seat on that one. Pretty cool. Yeah. I also really liked, yeah, you know, for any student of this topic, there's just an intimidatingly huge array of categories of possible subjects of study when it comes to QC algorithms. I mean, it's kind of daunting. But I think his framework around, you know, underlining the core importance of Shor's algorithm and Grover's algorithm, and then his touching on these other categories of adiabatic computing and quantum random walks, and pointing out that there have been, right, you know, some advances there, but there's still a lot of room left for really definitive results in those categories to show some kind of advantage. But yeah, I got from him that he has a distinct sense that there's at least some level of stagnation in the discovery of, you know, new algorithms.

Sebastian Hassinger 1:12:27
Yeah. Well, and I think, you know, in the section where we dove into the large language models, he sort of drew out, you know, the differences between what ingredients exist in quantum computing and what ingredients exist in large language models in terms of progress. And I thought that was really interesting.

Kevin Rowney 1:12:46
That was funny, that sort of notion that you have to have at least one of two things to discover whether or not you're being a doofus in science, right: either significant experimental evidence, or some decent theoretical base and some mathematics to attach to it. If you don't have either one of those, you run a doofus risk. And he was pointing out that in QC, there's now, you know, emerging significant quantum volume that gives us, you know, real experimental evidence that it's making a difference out there. And, of course, the theory is super well developed. And for the LLMs, well, now with ChatGPT and these new LLMs there's more and more experimental evidence. And, wow, the theory slate looks a little bit blank.

Sebastian Hassinger 1:13:34
Yeah, I mean, it's startling, I think, that he's spent a year with OpenAI and is finding the large language models sort of resistant to theoretical explanation. I think that doesn't bode well, if Scott Aaronson is having a hard time. Right.

Kevin Rowney 1:13:52
It's like, you know, you've got a problem when

Sebastian Hassinger 1:13:54
he's only just started. But yeah, it's a hard problem. And I thought it was fascinating, sort of his perspective on, you know, how experimental results are starting to catch up with theory in quantum computing. But, you know, he made that reference to being a little worried about us entering sort of a quantum desert, which is a new term for us. We've heard quantum winter in the past, but, I mean, the way I understood it, a quantum winter in my mind, of course, is more about commercial interest, private sector investment. It's sort of, you know, what happened with AI, when the industry felt like there wasn't enough progress being made, and the capital was withdrawn, resources were withdrawn, and, you know, the crops are all dead, and it was sort of make it through the winter with whatever sad little potatoes you can muster from the root cellar. I think what he meant by quantum desert was more that we may not have a ton more productive results on NISQ-era machines, and that maybe we're really going to have to buckle down for the long haul and wait for fault-tolerant machines to actually drive more innovation, more progress, to bolster the theoretical

Kevin Rowney 1:15:21
side, and that NISQ desert could go on for some time. And there he does stand out as different from other, you know, scholars of the space, who seem to have at least more optimism about the NISQ era. I guess time will tell, really. I mean, it's an interesting debate, though; these are high-level minds operating at an intense level on high-stakes stuff, so pretty cool.

Sebastian Hassinger 1:15:43
I will say, you know, now that I think about it, he also said that machines are realizing the promise that Feynman recognized in 1981 in that keynote at Endicott House, you know, that if you want to simulate nature, you need a quantum device to do the simulation, and that those are starting to show results. Right now the neutral atom devices in particular, 256 atoms, are very good, you know; doing a time evolution of a many-body problem is a very difficult thing to do classically. Yeah. So yes. And I think he did say he was optimistic in the near term around simulations in particular. So when he's thinking quantum desert, he may be thinking more in terms of

Kevin Rowney 1:16:29
of NISQ-era machines for optimization problems, more like,

Sebastian Hassinger 1:16:33
gate-based, logical, you know, algorithmic, rather than simulation. So, yeah, that's an interesting topic. We should dig deeper into that in future episodes.

Kevin Rowney 1:16:45
No doubt, no doubt. Wow, it was a great time. Really appreciate the show, as

Sebastian Hassinger 1:16:49
always. All right. Thanks, Kevin.

Kevin Rowney 1:16:53
Thank you. Take care. Okay, that's it for this episode of The New Quantum Era, a podcast by Sebastian Hassinger and Kevin Rowney. Our cool theme music was composed and played by Omar Costa Hamido. Production work is done by our wonderful team over at Podfly. If you're at all like us and enjoy this rich, deep, and interesting topic, please subscribe to our podcast on whichever platform you may stream from. And even consider, if you like what you've heard today, reviewing us on iTunes and/or mentioning us on your preferred social media platforms. We're just trying to get the word out on this fascinating topic and would really appreciate your help spreading the word and building community. Thank you so much for your time.

Creators and Guests

Sebastian Hassinger
Host
Business development #QuantumComputing @AWScloud Opinions mine, he/him.

Scott Aaronson
Guest
Scott Joel Aaronson (born May 21, 1981) is an American theoretical computer scientist and David J. Bruton Jr. Centennial Professor of Computer Science at the University of Texas at Austin. His primary areas of research are quantum computing and computational complexity theory.