As much as conventional wisdom says, “If you don’t use it, you lose it,” I still find myself remembering most of the core mathematical ideas surrounding my field of work. The basics of computational complexity theory, logic, the parts of discrete mathematics that Donald Knuth calls “concrete mathematics,” as well as some graph theory, I can recall and reason about without much trouble. Now, more than ten years after graduating from my alma mater, the Moscow Aviation Institute, these topics have probably become mostly connected together in my mind, each reinforcing the understanding of another.

But despite that, I sometimes notice glaring (and embarrassing) omissions from this interconnected picture. Some parts of mathematical theory won’t stick, and keep evading me even after I refresh them as needed. My brain seems to steer around certain areas as if they were dangerous.

Sometimes it is a question I feel almost scared to ask, because the answer seems to lie outside of what I know. It keeps hanging in the background of my mind, but never prevents me from using the parts of the theory I do know. Sometimes it’s pieces that never got connected to the big picture: important enough in isolation, but they never commit to my memory because they aren’t well connected to the rest.

To put an end to this embarrassment, here’s an incomplete list of my glaring omissions, along with the answers (which I spent some time finding).

Q: If Gödel’s Second Incompleteness Theorem put an end to the attempt to formalize mathematics, how come mathematicians still feel it’s OK to continue doing what they’re doing using Zermelo-Fraenkel set theory, which is, in turn, also susceptible to the Incompleteness Theorem?

A: A system of axioms as powerful as Peano Arithmetic indeed can’t prove its own consistency, but a wider set of axioms may be able to do it. So the axioms of Zermelo-Fraenkel can be used to prove the consistency not of Zermelo-Fraenkel itself, but (you guessed it) of the less powerful Peano Arithmetic. To prove the consistency of Zermelo-Fraenkel you need an even larger system, which will again be unable to prove its own consistency.
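This ladder can be sketched in symbols, with $\mathrm{Con}(T)$ denoting the arithmetized statement “theory $T$ is consistent” (the large-cardinal example at the end is one standard way to climb a rung above ZFC, not the only one):

```latex
% Second Incompleteness Theorem: a sufficiently strong, consistent,
% recursively axiomatized theory cannot prove its own consistency.
\mathrm{PA}  \nvdash \mathrm{Con}(\mathrm{PA}), \qquad
\mathrm{ZFC} \vdash  \mathrm{Con}(\mathrm{PA}), \qquad
\mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC})
% And the pattern repeats one level up, e.g.:
% ZFC + "an inaccessible cardinal exists" proves Con(ZFC),
% but still cannot prove its own consistency.
```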

Q: What’s the difference between propositional logic, first-order logic and logic of higher orders?

A: I won’t even write out an answer here, that’s how basic it is. Back in my university days I had to deal with description logics a lot, and I still never managed to remember the computational complexity of reasoning over the different logic classes.
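Still, for the record, the standard textbook distinction fits in three lines: propositional logic has only atomic propositions and connectives; first-order logic adds quantification over individuals; second-order (and higher) logic also quantifies over predicates and functions:

```latex
% Propositional: atomic propositions and connectives, no quantifiers.
(P \land Q) \to P
% First-order: quantifiers range over individuals.
\forall x\, \bigl(P(x) \to \exists y\, R(x, y)\bigr)
% Second-order: quantifiers may range over predicates themselves,
% e.g. the full induction axiom of second-order arithmetic.
\forall P\, \Bigl( P(0) \land \forall n\,\bigl(P(n) \to P(n+1)\bigr) \to \forall n\, P(n) \Bigr)
```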

Q: Nothing in known computability theory prevents us from building something more computationally capable than a Turing machine, does it? We just use the Turing machine as a convenient model, right?

A: Not so. Or, strictly speaking, it’s not yet known whether something more powerful exists, but most bets are on it not existing. Note that quantum computers are known to be no more powerful than a Turing machine: they may be faster on some problems, but they compute the same set of functions. This one is particularly embarrassing, as the Church–Turing thesis, which presumes (but does not prove) that every computable function — anything that computes — can be computed on a Turing machine, is taught in the first year of a Computer Science course! It just never struck me until very recently that the thesis really does talk about every computable function. Like, every.
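To make the model feel concrete, here is a minimal Turing machine simulator in Python. The machine and its transition table are my own toy example (not anything from a textbook the post refers to): three states that increment a binary number written on the tape.

```python
def run(tape_str):
    """Simulate a tiny Turing machine that increments a binary number.

    The tape is a sparse dict of position -> symbol; '_' is the blank.
    """
    # Transition table: (state, read symbol) -> (write symbol, head move, next state)
    rules = {
        ("right", "0"): ("0", +1, "right"),  # scan right to find the end
        ("right", "1"): ("1", +1, "right"),
        ("right", "_"): ("_", -1, "carry"),  # hit the blank, turn back
        ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, keep carrying
        ("carry", "0"): ("1", -1, "done"),   # 0 + carry = 1, halt
        ("carry", "_"): ("1", -1, "done"),   # ran off the left edge: new digit
    }
    tape = dict(enumerate(tape_str))
    pos, state = 0, "right"
    while state != "done":
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    cells = (tape.get(i, "_") for i in range(min(tape), max(tape) + 1))
    return "".join(cells).strip("_")
```

Calling `run('1011')` scans right to the blank, then carries back left, leaving `'1100'` on the tape. The same handful of rules handles inputs of any length, which is the whole point of the model: a fixed finite table of transitions plus an unbounded tape.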

Do you ever feel anything similar about something you know?