Did you ever wonder why programmers use hash functions without keys, while cryptographers only proved the implemented protocol secure for a keyed hash function? Did you ever want to have your passwords hashed using a random oracle for maximum security? If you are unhappy because a microkernel implementation can be proven to do what it is supposed to do, but your favourite cryptographic protocol cannot, then this talk may be for you.
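To make the gap concrete, here is a minimal Python sketch of the two views. In practice a program calls one fixed, unkeyed function (SHA-256 below); in many security proofs the object is instead a *keyed* function family, with the key sampled at random. HMAC-SHA256 stands in for such a family here purely for illustration; proofs reason about an abstract family, not HMAC specifically.

```python
import hashlib
import hmac
import os

def unkeyed_hash(message: bytes) -> bytes:
    """What programmers typically deploy: one fixed function, no key."""
    return hashlib.sha256(message).digest()

def keyed_hash(key: bytes, message: bytes) -> bytes:
    """What many proofs assume: a family of functions indexed by a key.
    HMAC-SHA256 is used here only as a stand-in for such a family."""
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"attack at dawn"

# Unkeyed: everyone in the world computes the same digest for msg.
print(unkeyed_hash(msg).hex())

# Keyed: sampling a key selects one member of the family, so the
# digest depends on which member was drawn.
key = os.urandom(32)
print(keyed_hash(key, msg).hex())
```

The mismatch the talk alludes to is visible right in the signatures: the deployed function has no key argument at all, so a theorem quantifying over a randomly chosen key does not literally speak about the function the programmer shipped.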

We explore how the way we do classical mathematics leads to deviations between cryptographic functions as implemented and how they can be modeled in proofs, and what could be done about that.

We focus on questions like:

- How we can be forced to worry about collisions that no one knows
- How it can be that proofs in an exact science like mathematics need to be interpreted
- Why modern cryptographers cannot prove something under the assumption that "hash function X is secure", while programmers have to design their software exactly like this
- Whether "proving A" is always the same as "proving A"
- How to reasonably measure precomputation complexity in cryptographic attacks
- And finally: why we may need different mathematical foundations to formalize cryptography