Reliability of IBM’s Public Quantum Computers

One of the challenges in the current ecosystem of quantum computers (QCs) is stabilizing the coherence of the entangled states of their inner qubits. In this empirical study, we monitor the reliability of IBM's network of public-access QCs on a daily basis. Each of these state-of-the-art machines has a different qubit layout, which means that, for the same input program, they may output a different probability distribution over the set of results (both the right and the wrong ones). Although we focus on the computing infrastructure provided by the "Big Blue" company, our survey can easily be transferred to other currently available quantum mainframes. In more detail, we probe these quantum processors with an ad hoc, computationally demanding quaternary search algorithm. This quantum program is executed every 24 hours (for nearly 100 days), with the goal of pushing the operational capacity of this novel and genuine type of equipment to its limit. We then perform a comparative analysis of the obtained results, both according to the singularities of each computer and over the total number of executions. In addition, we subsequently apply (for 50 days) a noise-mitigation filter proposed by IBM to the obtained results. On the Yorktown 5-qubit computer, this filtering improves results by up to 33% in a single day, that is, a 90% confidence level is reached in the expected results. From our continuous, long-term tests, we conclude that there is still room for improvement in quantum calculators before they can guarantee sufficient confidence in the returned outcomes.
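To illustrate the kind of workload involved, the following sketch shows an ideal, noiseless quaternary search over a 4-element space (the 2-qubit case) as a single Grover iteration, simulated with plain NumPy matrices. This is an illustrative assumption, not the paper's actual circuit: the `marked` index and the single-iteration structure are hypothetical, and on real hardware the measured distribution would deviate from the ideal one, which is precisely what the study quantifies.

```python
import numpy as np

N = 4       # quaternary search space, encoded in 2 qubits
marked = 2  # hypothetical index of the sought item

# Uniform superposition over the four basis states (H on each qubit)
state = np.full(N, 1 / np.sqrt(N))

# Oracle: phase-flip the amplitude of the marked state
oracle = np.eye(N)
oracle[marked, marked] = -1.0

# Diffusion operator: inversion about the mean amplitude
s = np.full((N, 1), 1 / np.sqrt(N))
diffusion = 2 * s @ s.T - np.eye(N)

# For N = 4, a single Grover iteration suffices
state = diffusion @ (oracle @ state)

# Measurement probabilities: ideally all weight on the marked item
probs = np.abs(state) ** 2
print(probs)
```

On a noiseless simulator the marked item is measured with probability 1; the daily executions in the study measure how far each physical machine's output distribution drifts from this ideal.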

Full access at: Pérez-Antón, R., Corbi, A., López Sánchez, J. I., & Burgos, D. (2023, in press). Reliability of IBM's Public Quantum Computers. International Journal of Interactive Multimedia and Artificial Intelligence. DOI: