Hi, welcome back to season four of QuBites, your bite-sized pieces of quantum computing. My name is Rene from Valorem Reply and today we're going to talk about quantum machine learning for image classification, which is a super exciting topic with a lot of growth in the market at the moment, and I'm honored to have a special expert guest today, Dr. Johannes Oberreuter. Hi Johannes and welcome back to the show. How are you today?

Johannes: Hi Rene. Thanks for having me. I'm happy to be here, I'm feeling great, and I'm looking forward to talking about this exciting topic.

Rene: Awesome! A couple of folks might remember you from a previous episode where we already chatted about quantum computing. But for the rest of our viewers today, can you share a little bit about your background as it relates to quantum computing?

Johannes: Yes, happy to. So, I've been working for a couple of years now for Reply as a machine learning expert and on top I'm co-leading Reply’s global practice on quantum computing. I've been trained as a physicist. I've done a PhD and then I’ve had two post-doctoral assignments in quantum many-body systems. So, I've looked at the research side of things and now I'm very excited that this is the age where quantum computers are becoming a reality, so this is really an interesting point in time.

Rene: Absolutely, and now you're putting the research to work, if you will, right? That's exciting! So, let's dive into today's topic. First of all, can you tell us what image classification is and how quantum machine learning relates to it?

Johannes: Yes, so image classification, as the name says, concerns images: you want to have a computer look at images and put them into various classes. The classic example is, you have a bunch of images with cats, a bunch of images with dogs and maybe a car, and then you want to say: this is an image which shows a dog, this is an image which shows a cat, this is an image which shows a car, or a hot dog; it could be all kinds of things. Obviously, this is an easy task for humans, right? You don't need a lot of training; you know what a cat and what a dog is, I know what a hot dog is. But for a computer that has been very difficult for a long time. It becomes interesting when you want to look at a lot of images, like a video stream, or think about a production facility: you might have a camera looking at your product, say a car, and you want to see whether there is any damage in the product, and that is something which you would like to automate. So, a couple of years ago people in the machine learning community came up with a technology which is called a convolutional neural network. It's a neural network which tries to emulate the way in which we think humans perceive images, so it looks at spatial correlations. There are the eyes, there are pointy ears and there are whiskers underneath the eyes, so that's probably a cat. And if it has a tail which is wiggling, then it's probably a dog. So, it tries to come up with these characteristic features of the various classes of objects on an image. With these convolutional neural networks, image classification, image analysis can in a way be regarded as a solved problem; we know how to do it, we know how this works.
However, in the scientific challenges which people have been looking at, millions of data points are available, millions of images which people have been sitting down and labelling for the advance of science, which is great. But in everyday life this is costly, this is expensive. You don't have the resources to label millions of images. You might not even have a million pictures of a damaged car, and hopefully you don't, because that would be super expensive. So, this is a real problem for bringing this to a real use case in practice. What people have shown is that quantum computing can help neural networks train faster and need less training data, and that's of course super interesting, because these models are super data hungry, so it's great if we can reduce this amount. This could be an application where quantum computing, or quantum-assisted machine learning, can help. It's quite interesting, because the extraction of features like eyes, whiskers, pointy ears, that is something which these neural networks are super good at doing. But you need loads of computational power for this, a lot of memory; you need to put the whole image in memory, and this is something where quantum computers are really struggling. We don't have machines of that size. But after these features have been extracted from the images, you can use qubits, quantum computers, to create a better representation of these features, and that is helping us get better classification results with less image data.
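To make the convolution idea Johannes describes a bit more concrete, here is a minimal sketch in Python/NumPy of the local feature extraction a convolutional layer performs. The tiny image and the hand-made edge filter are illustrative stand-ins; in a real network the filters are learned from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image and record local responses.

    This is the core operation of a convolutional layer: each output
    value summarizes a small spatial neighbourhood, which is how the
    network picks up local features like edges or corners.
    """
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny synthetic 8x8 "image": bright left half, dark right half.
image = np.zeros((8, 8))
image[:, :4] = 1.0

# A vertical edge detector (a hand-made stand-in for a learned filter).
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

features = conv2d(image, kernel)
print(features.shape)   # (7, 7)
print(features[:, 3])   # strong response exactly along the vertical edge
```

Stacking many such filters, interleaved with pooling, is what reduces a 2000 × 2000 pixel image down to the handful of high-level features that, as discussed below, can then fit on today's small quantum processors.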

Rene: So, you can train your network to make good predictions with less training data, which is a huge benefit.

Johannes: Of course, which is a huge benefit in a business context.

Rene: Like you were saying, it's very expensive if you have to do all these data annotations, the data labelling and all of that; it's a lot of manual labor these days, right?

Johannes: And very often you need really skilled people to do this. It's not like cats and dogs, where you would say, okay, give it to a bunch of school kids, everybody can do it and you can super-parallelize it. Think about medicine, for instance: you could do image classification for medicine, but then you need a doctor sitting there and saying, well, I guess I see a shadow on the lung, this might be a hint at some disease; that's super complicated. You need highly skilled labor and sometimes you don't even have that available. You cannot just ask a doctor who is working in a hospital: can you please take four weeks off to do some data labelling for me? So that's sometimes really difficult.

Rene: Yeah. Healthcare is a fantastic example, because there's amazing progress already being seen with this kind of image classification, or let's say AI in healthcare; a big growing field with amazing results. I just recently came across a new paper where they trained a neural network to detect certain skin cancers at an early stage, and they were able to outperform humans by finding: okay, is this cell going to be dangerous, is it going to be carcinogenic, basically. They can detect it way earlier than professional doctors could see it. And most of the research was of course getting the data, right? Like you said, the model architectures are done, they are there; of course you need to fine-tune them for these specific cases, but for this specific case the training data is the huge effort. And it's worth it; this can help a lot, this is a huge impact, right?

Johannes: Exactly. I mean, if you think about the impact: it's something which you can even do at home, right? You point your iPhone or your camera at a spot on your skin and it will give you an assessment, and then, of course, you want to have this checked by a professional. That's not the point. The point is really to have a cheap way of classifying high-throughput data and then, in the end, when it says, okay, I think this is carcinogenic, please see a doctor, and do it now, don't do it in two years when it's too late. The impact on people's lives, especially in healthcare, is huge, but also in business, even if it might not be as heroic as in healthcare. It's of course still good if you can reduce waste, if you can react quicker to damages, to defects and problems. You're reducing waste of material, resources and money, and that's good for all of us.

Rene: So, tell us a little bit about how it works under the hood. With AI models, or these neural networks, you typically model these with layers, right? And with quantum computing we're dealing with circuits and gates, so how do you model such a neural network if you want to use it with QML?

Johannes: Yes. So, what we have been using so far are hybrid architectures. We are marrying the advantages of a classical computer with the advantages of a quantum computer, and as you said correctly, a neural network is organized in layers. At the beginning you have these huge images, say two thousand something times two thousand pixels. That's a lot of data which you cannot put on a quantum computer; quantum computers are not that big at the moment. What these classical neural networks are really good at is extracting these features and modeling ever more complex features. At the beginning they might just see an edge, they might just see a shadow, they might just see that something is pointy, and then out of these basic features they create more complex ones: this is something round, this is an eye, this is a nose, this is a face, so it becomes ever more complex. You arrive at more useful features, and that's why convolutional neural networks are really good for image classification. However, at the end, or towards the end, what we can do is cut this neural network and insert a quantum layer. Out of these 2000 times 2000 pixels, which are all data you have to work with, after all these feature extractions you might end up with very few data points, maybe 200, maybe 100, and this becomes much more manageable. Current quantum computers have something between, well, say, a couple of qubits and 20 or 30 qubits. There are some quantum computers from Google which have more, like 60 or 70, however they are also noisy; you cannot use all of them. But something between, I would say, 10 and 20 is definitely a usable number for current quantum computers. So, if you manage to use a classical neural network to reduce the data to this number, say 10, we can actually put this on a quantum computer, and this is what we have been doing.
We have also benchmarked that even four or six qubits is already quite nice and you get good results, so you put this on your qubits. And then, as you said, you are in a different world, the quantum world, which is programmed with gates, and you're not destroying your information; you're always keeping all the information until the end of your computation. It's a probabilistic calculation, so it's a different world, but it turns out that with the way we have set up these gates, and the way we make the quantum information flow through this quantum pipeline, we are creating what in physics is called entanglement: some correlations, some relations between the various data points. And this is an interesting question, because I would say we haven't really understood yet why this is working so well. Somehow this quantum space allows a better representation of the data. And then at the end of the circuit we are measuring, which is again a technical term: we are looking at what the quantum circuit has made out of this quantum information, we are putting it again into a classical register, and then we can make our classification. And this is interesting, because it's really marrying, as I said before, the advantages of what we have learned about how convolutional neural networks work and what they're really good at, with what we have seen also in other quantum machine learning algorithms about data representation. So, there's a better way of representing your data, and you can even do it on today's quantum computers. And this is super exciting, because for a lot of tasks which you might have heard of, like prime factorization of big numbers and breaking the RSA algorithm, you need big quantum computers, right? It's nothing which you really need to be afraid of right now, or which you can do right now. The same goes for molecular simulation, all the big things that quantum computers are going to do in the coming, let's say, three to five years.
Quantum machine learning is something you can do right now, and that is of course something we are interested in: what can we do right now with this toy?
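As a rough illustration of such a hybrid quantum layer, here is a toy two-qubit statevector simulation in Python/NumPy: two classical features (as a classical network might produce after feature extraction) are angle-encoded into qubit rotations, entangled with a CNOT gate, and the measurement probabilities form the transformed representation. This is a simplified sketch, not the actual circuit discussed in the episode; a real implementation would use a framework such as Qiskit or PennyLane and would train the circuit parameters:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate; the angle encodes one classical feature."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s],
                     [s,  c]])

# CNOT on two qubits: entangles them, creating correlations
# between the encoded features.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def quantum_layer(features):
    """Toy 2-qubit 'quantum layer': angle-encode two features, entangle
    them, and return the four measurement probabilities, which serve as
    the new representation fed to the final classical classifier."""
    state = np.zeros(4)
    state[0] = 1.0                                  # start in |00>
    encode = np.kron(ry(features[0]), ry(features[1]))  # feature encoding
    state = CNOT @ (encode @ state)                 # entangling gate
    return state ** 2                               # measurement probabilities

probs = quantum_layer([0.3, 1.2])
print(probs)        # four probabilities, one per basis state |00>,|01>,|10>,|11>
print(probs.sum())  # sums to 1, since the gates are unitary
```

The design mirrors what the interview describes: the classical network compresses the image to a handful of numbers, the quantum circuit re-represents them in an entangled state space, and a measurement hands a classical vector back for the final classification step.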

Rene: That's awesome! And before I ask you about some of the scenarios and examples, could it be... it's a little bit of a philosophical angle to the question, but with quantum we're going down to the smallest scale at which our universe works, right? And so we have this uncertainty and this kind of randomness that occurs. Basically, our world works on analog data, right, it's all analog. We cannot exactly say, okay, the particle is right there or a little bit over there; that's what Heisenberg discovered, basically. But could it be that we get better results already with these small quantum computers by leveraging QML because we can represent the data not just in a digital way but closer to nature, in an analog stage kind of a thing?

Johannes: That is a very interesting question, and I think this is what the community needs to work on right now: to really understand a bit better what these quantum layers are doing. Why does it seem to be a better representation, and in which cases does it seem to be a better representation? In general, what you say is right: if your data is quantum in origin, if you have a quantum sensor, for instance, then it performs much better. For sensor data in general we see a better performance, a richer enhancement from these quantum representations. And to be very honest with you, it is something which I do not know, and I don't think anybody really knows, what exactly is happening in there which makes it more suitable. For very artificial data we still see an advantage, but it's not as pronounced. And I like your philosophical angle, because it really guides the way to better usage of the models, also technically and from an engineering perspective. If we understand better what the representation really is and what features we need, we can also design these models better in the future. But this is really an active research field, and we are trying to make our contribution by looking at applied situations and saying, okay, look, here it works well, here it works also well but maybe not as well, what is the difference, and can we learn something on real data sets? That's what we are interested in.

Rene: Can you share some examples of what you're doing in that field and what Reply is doing in this area?

Johannes: Yes. In order to get this innovation going, what we usually do is look at research papers and do our own experiments, right? Already two years ago we looked at more classic algorithms for machine learning and tried to enhance them with quantum computing, and last year we started with neural networks. It turned out that we kind of hit a good point there, because later, in the second half of this year, BMW and AWS posed a challenge in which we took part, and I'm happy to say that we reached the final of this challenge. It was really nice, and we used the knowledge which we gained there on a task of quality control in manufacturing, and we looked at the data set which was provided there. It's a public data set, but nevertheless it was interesting to see what is relevant to the client there. And what we really saw is that the scenario we discussed earlier, having millions of images available, is not realistic. Especially there, you don't produce millions of car defects, luckily, so you really need to battle this data scarcity problem. So we tried to come up with a setup, again taking inspiration from our experience in machine learning and some of the knowledge which we gained by experimenting with quantum computing, and tried to come up with a model which in particular addresses this data scarcity setting. But maybe I should say that we didn't start this research because we had a specific use case in mind; we were fascinated by the prospect of improving neural networks as a general method of machine learning. And the thing which really strikes me, apart from the fact that you can use it right now already with current quantum computers, is that neural networks are a pretty universal tool. You can use them on images, which is what we are talking about and where we have real hands-on experience now.
But you can also use them for language, for structured data, for all kinds of settings, and neural networks are still being developed. So, what I really enjoy about this journey is that we have yet to see in which other fields and applications we will see these advantages of quantum enhancement, enhancement of the feature representation by quantum computing, and whether we can get better or more useful results in other areas as well. Again, I don't have any hands-on experience there at the moment, because right now we are dealing with images, but I just wanted to put this out there. There's a much broader range of applications, and that's really fascinating and an interesting prospect.

Rene: Well, exciting times! Unfortunately, we're already at the end of the show. We could talk for many more hours about this, and I'm sure we will invite you again; maybe you can show us some of the results from the BMW challenge and also the other research you're doing with QML for image classification, but also for NLP, natural language processing, and a couple of other areas. Again, thank you so much. And thanks everyone for joining us for another episode of QuBites, your bite-sized pieces of quantum computing. Watch our blog and follow our social media channels to hear all about the next episodes of season four. If you missed any of the previous seasons or episodes, you can always go back and watch all the episodes, from season one to four. Take care, see you soon. Bye-bye.