Quantum computing: will we need to redefine informatics as a science?

 
Claude Moore
Bartender
In the authors' opinion, do we need to redefine the concepts traditionally used in informatics, for example the Turing machine model, Big-O complexity notation, and so on, or will they remain valid for studying algorithms from a theoretical point of view?
 
E Johnston
Author
Claude Moore wrote:In the authors' opinion, do we need to redefine the concepts traditionally used in informatics, for example the Turing machine model, Big-O complexity notation, and so on, or will they remain valid for studying algorithms from a theoretical point of view?


Hi Claude! Great question.
Nope, all of the information theory concepts you're mentioning still apply, exactly as they always have. The only thing QCs change is which operations can be done efficiently.

  • Big O: This notation is used all over QC, especially to describe the speedup provided by a quantum algorithm. For example, Grover's database search lets a QC search an unsorted database of n items in O(sqrt(n)) time, where a classical search would need O(n) time (see the sketch after this list). Quantum search is covered very well (thanks to Mercedes) in chapter 10 of the book.
  • Turing model still applies as well, as we're still really dealing with sort-of digital data. Where this gets really interesting is when you consider that the program counter in a CPU (the register which points to the next instruction) is actually data, and a QPU might store that in quantum superposition, allowing a program to "both branch and not branch". This is potentially weird, and not well-explored. Although it's too early to really exhaustively explore this on a physical QPU, I did a small-scale simulation of this a few years ago. That whitepaper is pretty fun (if not directly useful), and can be found here. (language warning: the FSM model I based it on (created by Urban Müller) has a cheeky name).
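
If you want to see the sqrt(n) behaviour without any quantum hardware, here's a tiny state-vector sketch I'm adding for illustration (it's not code from the book): it runs Grover iterations on a plain array of amplitudes, and the iteration count of roughly (pi/4)*sqrt(n) is the whole point. The class name, database size and marked index are all made up for the example.

import java.util.Arrays;

public class GroverSketch {
    public static void main(String[] args) {
        int n = 16;        // "database" size (4 qubits' worth of entries)
        int marked = 11;   // index of the needle we are searching for

        // Start in the uniform superposition over all n entries.
        double[] state = new double[n];
        Arrays.fill(state, 1.0 / Math.sqrt(n));

        // Grover needs roughly (pi/4) * sqrt(n) iterations,
        // versus roughly n probes for a classical unsorted search.
        int iterations = (int) Math.round(Math.PI / 4.0 * Math.sqrt(n));

        for (int step = 0; step < iterations; step++) {
            // Oracle: flip the sign of the marked item's amplitude.
            state[marked] = -state[marked];

            // Diffusion: reflect every amplitude about the mean amplitude.
            double mean = 0.0;
            for (double a : state) mean += a;
            mean /= n;
            for (int i = 0; i < n; i++) state[i] = 2.0 * mean - state[i];
        }

        double pMarked = state[marked] * state[marked];
        System.out.printf("%d iterations, P(marked) = %.3f%n", iterations, pMarked);
    }
}

With n = 16 it homes in on the marked item after just 3 iterations, with about 96% probability.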
    Claude Moore
    Bartender
    Thanks a lot for your detailed answer. I am very excited about what you said about the speedup you can achieve on algorithms using QC: I can't stop thinking about what we could achieve in the field of neural networks, and in AI in general...
     
    E Johnston
    Author

    Claude Moore wrote:Thanks a lot for your detailed answer. I am very excited about what you said about the speedup you can achieve on algorithms using QC: I can't stop thinking about what we could achieve in the field of neural networks, and in AI in general...



    There's an interesting aspect concerning AI which hasn't really been dealt with yet: Qubits can't be copied. They can be moved and transformed, but there's physically no way to duplicate even one.
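
    To see why copying fails, here's a toy state-vector sketch of my own (not from the book): the obvious "copy circuit", a CNOT gate, duplicates plain 0s and 1s perfectly well, but fed a superposition it produces an entangled pair instead of two independent copies. That's the no-cloning theorem in miniature.

    public class NoCloningSketch {
        public static void main(String[] args) {
            // Two-qubit amplitudes, ordered |00>, |01>, |10>, |11>
            // (first qubit = the one we want to copy, second = the blank target).
            double s = 1.0 / Math.sqrt(2.0);
            double[] state = {s, 0.0, s, 0.0};   // |+>|0> = (|00> + |10>)/sqrt(2)

            // "Copy" attempt: a CNOT with the source qubit as control.
            // On basis states it works: |0>|0> -> |0>|0>, |1>|0> -> |1>|1>.
            // As an operation on amplitudes it just swaps the |10> and |11> entries.
            double tmp = state[2];
            state[2] = state[3];
            state[3] = tmp;

            // A genuine copy would be |+>|+> = (|00> + |01> + |10> + |11>)/2 ...
            double[] wanted = {0.5, 0.5, 0.5, 0.5};

            // ...but what we actually got is the entangled Bell state (|00> + |11>)/sqrt(2).
            System.out.println("got:    " + java.util.Arrays.toString(state));
            System.out.println("wanted: " + java.util.Arrays.toString(wanted));
        }
    }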

    So... warning: far-future sci-fi speculation (just for fun)
    Suppose (with your far-future, practically-sci-fi hat firmly in place) you have a complicated quantum machine-learning system, and it took you a year to train it to do a job really well (maybe it can win championship-level games of chess and ping pong simultaneously), and its entire learned state is maintained in fault-tolerant 4-giga-qubit storage. Someone sees your AI demo and wants to buy 100 of them. Qubits physically cannot be copied, so you can't just save the state and stamp out more just like it; you'll have to train each one and hope it comes out the same. There's no way to back it up, so if the power goes down you'll need to start over. And if over time (and after more input) it gets worse at its job, you'll also have to start over.

    Thanks to Nic, Chapter 13 in the book covers Quantum Machine Learning!
    It's all very early and experimental; QML systems in the near future will be trained in a few microseconds, use the knowledge, and lose it all a few microseconds later.
     
    Claude Moore
    Bartender

    E Johnston wrote:
    It's all very early and experimental; QML systems in the near future will be trained in a few microseconds, use the knowledge, and lose it all a few microseconds later.


    Well, one could argue that, from a practical point of view, being able to train a model in a few microseconds would be the same as loading it from persistent storage, wouldn't it?
     
    E Johnston
    Author

    Claude Moore wrote:
    Well, one could argue that, from a practical point of view, being able to train a model in a few microseconds would be the same as loading it from persistent storage, wouldn't it?



    If the "trained" structure is qubit-based, then it can't be loaded, as loading involves making a copy of what's in the persistent storage. You could keep digital data in persistent storage, such as instructions used to train it...

    If the "trained" structure is digital data and not qubits then yep, loading it will work.

    There's a middle ground: As covered in chapter 9 in the book, one form of QRAM (quantum memory) is digital data which may be accessed in superposition (so you can load from multiple addresses at once into qubit registers). If this is used to store the trained structure, then loading will also work.

    ...so it depends on the design of your ML system.
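
    To make that QRAM middle ground a little more concrete, here's a toy state-vector model I'm adding for illustration (it's not the book's code): the table holds ordinary classical integers, but the address register is in superposition, so a single lookup XORs two different table entries into the data register at once. The sizes and values are made up for the example.

    public class QramSketch {
        public static void main(String[] args) {
            int[] table = {5, 9, 3, 12};     // classical values held in QRAM
            int addresses = table.length;    // 2 address qubits
            int dataStates = 16;             // 4 data qubits

            // Amplitudes indexed as amp[address][dataValue].
            double[][] amp = new double[addresses][dataStates];

            // Address register in a superposition of addresses 1 and 3,
            // data register starting out as |0>.
            double s = 1.0 / Math.sqrt(2.0);
            amp[1][0] = s;
            amp[3][0] = s;

            // QRAM lookup: |addr>|d>  ->  |addr>|d XOR table[addr]>.
            double[][] out = new double[addresses][dataStates];
            for (int a = 0; a < addresses; a++)
                for (int d = 0; d < dataStates; d++)
                    out[a][d ^ table[a]] = amp[a][d];

            // Result: (|addr=1>|data=9> + |addr=3>|data=12>)/sqrt(2),
            // i.e. both table entries were "loaded" by one lookup.
            for (int a = 0; a < addresses; a++)
                for (int d = 0; d < dataStates; d++)
                    if (out[a][d] != 0.0)
                        System.out.printf("amplitude %.3f on |addr=%d>|data=%d>%n", out[a][d], a, d);
        }
    }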
     