Welcome to another episode of QuBites, your bite-sized pieces of quantum computing. In this episode, Rene and our expert guest, Dr. Marine Pigneur, talk about the BMW quantum challenge.
Transcript
Rene - Hi! Welcome to QuBites, your bite-sized pieces of quantum computing. My name is Rene from Valorem Reply, and today we're going to talk about the BMW quantum challenge. For this, I'm honored to have a special expert guest today, Dr. Marine Pigneur. Hi Marine, and welcome to the show. How are you today?
Marine - Hi Rene, thanks for asking, I'm fine, how are you doing?
Rene - I'm doing fantastic, thanks for asking as well! So, tell us a little bit about yourself and your background as it relates to quantum computing, computer science, physics, what have you.
Marine - Yeah, oh well. I discovered quantum physics about 10 years ago and never stopped loving it. I'm fascinated by it each time I think about it. So, I did a Master's in theoretical quantum physics in Paris, and then I moved to Vienna for my PhD. I'm still in Vienna now, and my PhD was in the field of quantum optics, which was very fundamental research. Fundamental as opposed to applied, meaning it's research for, let's say, the sake of understanding. You always hope that there's going to be an application in the end, but that's not what drives you. And it's quite remarkable, because I'm making that transition now: I'm working at Machine Learning Reply on very specific use cases, for example, the BMW Quantum Challenge.
Rene - Nice. So, you're basically coming out of core fundamental research and putting it to work now in applied quantum computing. This is awesome. So, let's dive into today's topic. You already mentioned BMW, which recently held this quantum computing challenge where they invited basically everyone to take part and submit solutions. I know you and your colleagues at Machine Learning Reply took part in that. Tell us a little bit about what the challenge was all about, and why is the automotive industry so interested in quantum computing?
Marine - Okay, so the challenge was organized by BMW, and for this they teamed up with AWS. AWS provided access to NISQ devices, so real quantum hardware. It was a really great opportunity for us, because it means that we could test what we did on real devices; it's not only theory. The problems themselves were defined by BMW. They organized four different projects corresponding to four cases where they already have a need and a classical solution, but where they expect that quantum can bring an advantage. One of these problems was related to machine learning, and that is the one to which we contributed at Machine Learning Reply. In more detail, this problem is about the fact that when you engineer car parts, metallic doors for example, they might not come out perfect. You might have defects in these parts, and of course you need to spot them, because you cannot have them remaining in the production line. I mean, it's not only aesthetic; you also have human lives at stake. So what is done currently is that machine learning algorithms are used to spot defects on such metallic parts, and the idea is: can quantum, or a quantum contribution, enable something better here?
Rene - Okay, so basically the solution you developed is for quality control, like checking whether the paint is correct, and I guess that is using image classification. Can you tell us a little bit more in detail how it works?
Marine - Exactly, so it's an automated quality assessment problem, and this boils down to image classification. What is currently done is using convolutional neural networks. That's a very powerful approach which is really good at solving these classification problems, and I don't know how familiar you are with CNNs, but they are very evolved, very complex tools, and a pure quantum solution would have a very hard time beating that, because we would need so many qubits to implement a full quantum equivalent. Being competitive with CNNs is just very difficult. So, what we did was to adopt a hybrid approach, which means that you take the CNN as it is and you add a quantum contribution to it. You don't have the full quantum equivalent, but you put a bit of quantum into the classical solution, and that's the reason why we say it's a hybrid approach. So, if I go into a bit more detail about what we did: I don't want to give too many details about machine learning, but if you want to understand the solution, you need to know a few things. The basic building blocks of a CNN are layers and neurons. You're going to have a lot of layers, and each of these layers has a lot of neurons. The first layer, let's say, will take as input the picture that you want to classify, and the different neurons in it will take this picture as an input, do something, and give an output, and this output will in turn be the input of the next layer. Like that, you have a lot of layers passing information all along your network, and what's happening is that the very first layers can recognize maybe a shadow on the picture, something very small, and slowly you build up complexity. The layers afterwards will recognize maybe a small curve, then a full circle, then an eye, and then a full human face.
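[Editor's note: the layer-by-layer flow described above, where each neuron feeds its output to every neuron of the next layer, can be sketched in plain Python. The tanh activation and the tiny weight matrices here are illustrative assumptions, not details from the episode.]

```python
import math

def dense_layer(inputs, weights, biases):
    # Each output neuron receives a contribution from EVERY input neuron
    # (the one-to-many connectivity described above), then a nonlinearity.
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(math.tanh(z))
    return outputs

def forward(x, layers):
    # Stacking layers: each layer's output is the next layer's input.
    for weights, biases in layers:
        x = dense_layer(x, weights, biases)
    return x
```

A classifier chains many such layers; the early ones pick up small features, the later, more specialized ones combine them.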
So you build up complexity as you go deeper and deeper into the network. What we did for this challenge is that we replaced one of the last layers, the most specialized one, with a quantum layer, and a quantum layer here means that instead of neurons you use qubits. So now you immediately see why it would be so hard to have a full quantum approach here, because every neuron you have, and there are really a lot of them, would have to be a qubit. We don't have enough qubits with NISQ devices; we don't have enough control over enough qubits. So what we did was to implement the quantum layer towards the end of the network, to be really specialized on the problem at hand, so the doors, and not on general and generic features. Now it's a bit different, because you have qubits, which are not the same as classical neurons. But one thing I did not say is that a neuron of a layer does not just give its output to one neuron of the next layer, but to all of them. It's a very connected problem; you really have a one-to-many connection. So the question would be: how can you encode this on a qubit? It's very difficult, so there's another way to do it, and that's where quantum is so beautiful. The classical output of the layer just before, you encode in only one qubit, as a rotation of that qubit. So every classical neuron gives its feature to only one qubit as a rotation, and then this multiple connection you make with a quantum circuit which creates entanglement. That's how you make the connection. I insist on it; it's a bit technical, but I love this idea. I find it extremely ingenious, and I would have liked to think of it myself, actually. I really would have loved to; I like this idea very much. I find it very elegant, because it's a quantum equivalent of something which was performing very well, but it can bring an advantage.
So now you have this layer, which has all the tools of the classical one, but it's quantum, and it does bring advantages. That's why we do it.
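[Editor's note: as a rough illustration of the encoding described above, here is a tiny pure-Python statevector sketch: each classical feature becomes a rotation angle on its own qubit, a chain of CNOTs supplies the entanglement that replaces the one-to-many classical connections, and per-qubit Z expectation values serve as the layer's outputs. The specific circuit layout (RY encoding, CNOT chain, Z readout) is an assumption for illustration, not the exact circuit used in the challenge.]

```python
import math

def ry(theta):
    # RY rotation matrix: encodes one classical feature as a qubit rotation angle.
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply_single(state, gate, qubit):
    # Apply a 2x2 gate to one qubit of a statevector (qubit 0 = least significant bit).
    new = [0j] * len(state)
    for i, amp in enumerate(state):
        bit = (i >> qubit) & 1
        for b in (0, 1):
            j = (i & ~(1 << qubit)) | (b << qubit)
            new[j] += gate[b][bit] * amp
    return new

def apply_cnot(state, control, target):
    # CNOT entangles two qubits: flips `target` wherever `control` is 1.
    new = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i] = state[i ^ (1 << target)]
    return new

def quantum_layer(features):
    # One classical feature -> one qubit, encoded as an RY rotation;
    # a CNOT chain provides the entanglement that stands in for the
    # classical one-to-many connections between layers.
    n = len(features)
    state = [0j] * (2 ** n)
    state[0] = 1.0 + 0j                      # start in |00...0>
    for q, f in enumerate(features):
        state = apply_single(state, ry(f), q)
    for q in range(n - 1):
        state = apply_cnot(state, q, q + 1)
    # Readout: the Z expectation value of each qubit is the layer's output.
    return [sum((-1) ** ((i >> q) & 1) * abs(a) ** 2 for i, a in enumerate(state))
            for q in range(n)]
```

With zero rotation angles the layer outputs all +1 (qubits stay in |0>); rotating the first qubit by pi flips it, and the CNOT chain propagates that flip to the rest, which is the entangling connection at work.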
Rene - That makes a lot of sense, thanks for this beautiful explanation. It clicked for me: so basically, all these multiple connections to the following layer, where one neuron connects to many, you represent by connecting each neuron to just one qubit, and since the qubits are then put into superposition and entanglement, at this smallest scale you can basically represent all these multiple connections, because you can represent multiple states at the same time. It's like you're saying: man, why didn't I think about it? I really like it.
Marine - And you know what is also great, without going too much into detail again: this full network is actually an optimization problem, so when the output is given to the next layer you need some optimization parameters, and the neurons are either fired or not. But qubits can be both fired and at rest. So what you can do is actually much richer, much more complex, because you have superposition here, which you don't have classically. You have so many more things that you can do.
Rene - Yeah, you can represent almost infinitely many states. Well, you already explained what I was going to ask, namely why you think quantum machine learning is already providing better results. You explained it so well: because you can represent much more in a qubit, many more states, and it's closer to the probabilistic nature of nature, in a sense, right?
Marine - Actually, you have even more advantages. I was not even that aware of this specific advantage. The reason why we chose this approach is that the literature indicated we could also win on performance on other levels. When you have a machine learning algorithm, there's a bit of a price to pay: it's not especially easy to implement, and it can be very resource-consuming. For example, before the algorithm can tell whether there's a defect on a picture, it has to be trained, and this training is very demanding, because you need a lot of labelled data to train your algorithm. One of the things shown in the literature was that you need less training data with a hybrid network, because your model learns faster. So you can reduce the cost of this labelling, and that's actually a really great thing to win on, because training time and the resources for training are really an issue in current implementations of machine learning. Another thing, just to finish with the advantages, is classification accuracy. It might happen that your algorithm gets confused or gets it wrong: it sees a defect where there was none, maybe it was a shadow, or it skips a defect and doesn't see it. That's classification accuracy, and there's also indication that with a quantum contribution you improve the accuracy of your model.
Rene - Because it can represent things in a way that is closer to a probabilistic system, like with quantum you're closer to the probabilistic nature, right?
Marine - I guess that's a bit of the underlying reason: you have an algorithm which maybe can learn new features. It just learns better, faster, and that's the idea.
Rene - Better, faster, stronger. But that is actually also a great point: it generalizes, but faster, with less training data, and like you're saying, this is big, because I guess the largest cost for training image classification and so on is actually getting the training data in the right shape, with the labelling and all of that. I mean, there are a bunch of labelling services out there where you can hire folks that sit down and label it manually, which is of course cost- and time-intensive. It totally makes sense: you can take less training data and it will still generalize very well. That is amazing. Last question: can you maybe tell us a little bit about the result of the challenge, or is it still ongoing? And what kind of machine have you actually been using, like how many qubits and so on?
Marine - Yes, that's a very interesting question. The main result for now is that we remain comparable with the CNN, with the classical solution, and already that's great, because it means that this approach is valid. As I said in the beginning, beating CNNs will be very hard; it was a solved problem that they gave us. But just showing that something can be done with NISQ devices is already an achievement in itself. Also, this approach can scale up very well: once we have more qubits, we can make layers with more qubits, or more layers, and we can go towards something more general. And to answer your other question, how many qubits did we actually need to get very good results: six, we needed only six qubits. That was very surprising. We thought, okay, we will barely have enough qubits, but actually having more did not help in this case. So it is very compatible with the devices available right now.
Rene - Awesome! Well, that's impressive. You said more qubits didn't help in this case, but once more qubits are available, you can actually go a layer down in your network, right, where you need many more neurons. It's very impressive, and again, congrats, awesome achievement. Like you said, getting to parity already is impressive, and with only six qubits; oh man, that will just fly in a few years! Impressive, impressive. Marine, thank you so much for being part of the show. We are unfortunately already at the end, and I will for sure invite you for another episode, because you can explain all of this so well, and it was fantastic to talk with you. Thank you so much for sharing your insights.
Marine - Well thank you for inviting me here. It was a pleasure!
Rene - Yeah, and thanks everyone for joining us for yet another episode of QuBites, your bite-sized pieces of quantum computing. Watch our blog and follow our social media channels to hear all about the next episodes, and of course on our website you can find all the previous episodes, where you can also learn a little bit more about quantum machine learning and all the other interesting things. Take care and see you soon, bye!