Sergey Mikhanov  

Teamwork (November 13, 2016)

The concept of teamwork I briefly mentioned in the previous post warrants a post in itself. I’m absolutely not an authority on building and running teams, but, just like anyone else in the programming profession, I’ve got some intuition about it.

First of all, it seems that good teams are rare: most companies are dysfunctional in one way or another, sometimes to the point of making Dilbert cartoons look like a bastion of good reason. Dan Luu wrote a long entry on that recently:

At the last conference I attended, I asked most people I met two questions:

  1. Do you know of any companies that aren’t highly dysfunctional?
  2. Do you know of any particular teams that are great and are hiring?

Not one single person told me that their company meets the criteria in (1). […] A few people had suggestions for (2), but the most common answer was something like “LOL no, if I knew that I’d go work there”.

Over the years, advances in software engineering — most commonly defined as a set of practices allowing software development on an industrial scale — have made it possible to at least partially work around human imperfections by establishing a software development process. Practices like writing unit tests, doing code reviews, adhering to coding standards and choosing low-risk technologies made it possible to develop software of enormous size. But like any process, these practices, when enforced unreasonably, have a chance of scaring away your best employees and potential candidates.

That’s simply because those things are not pleasant. Nobody likes writing unit tests, or renaming variables just because a reviewer has an idea for a better name. A job that advertises itself as being on a team that practices TDD by the book is most likely one you’d want to run away from.

However, encountering a good team that works together organically is possible. My intuition for recognizing such a team is that it shouldn’t require you to be dumber than you are. You want to write smart code? Sure, go ahead; you shouldn’t have any reason to think your colleagues won’t be able to understand it.

How I learned to program pt. 2 (October 29, 2016)

This continues from part one.

First job and the discovery of teamwork

My first job was doing Java at an “e-commerce” firm. I spent some time churning out large amounts of code alone, and quickly understood that this must be the most inefficient way to deliver software. As cool as the laser focus of the “lonely cowboy coder” is, you can achieve much more when people don’t see you as the hermit guy with the red stapler from Office Space, but are instead enthusiastic about working with you, and you about working with them. A few years later, when I joined a really strong team at a company doing mobile call processing software, this felt even more true.

Several jobs and teams later, I still can’t really put my finger on what constitutes a great team. It may boil down to having some common ground with your colleagues, which lets you quickly understand each other without resorting to corporate speak, but I don’t know what it is.

Algebra and the rediscovery of abstractions

It turns out that going back to the math you studied at university, several years after graduating, is a good way to boost your programming skills. I came back to abstract algebra recently, and it feels like it was worth it (for recovering programmers, this book may be a good introduction). Surprisingly many problems in very different domains can be expressed using a semiring or a similar structure, which gives you lots of room in how you can reason about them. Essentially, I am learning to write as little code as possible to achieve the same goal.
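
To make this concrete, here is a minimal sketch in Python (a language these posts don’t otherwise use, and the names are mine): the same relaxation loop computes either shortest paths or plain reachability, depending on which semiring you plug in.

    # A semiring packs together "plus" (choose among alternatives),
    # "times" (combine along a path), and their identities.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Semiring:
        plus: Callable[[Any, Any], Any]   # combine alternative paths
        times: Callable[[Any, Any], Any]  # extend a path by an edge
        zero: Any                         # identity of plus: "no path"
        one: Any                          # identity of times: "empty path"

    # (min, +): lengths of shortest paths.
    tropical = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)
    # (or, and): plain reachability.
    boolean = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)

    def closure(weights, s):
        # Floyd-Warshall-style relaxation over an arbitrary semiring;
        # weights[i][j] is the weight of the edge from i to j,
        # with s.zero standing for an absent edge.
        n = len(weights)
        d = [row[:] for row in weights]
        for i in range(n):
            d[i][i] = s.one   # every node reaches itself by the empty path
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    d[i][j] = s.plus(d[i][j], s.times(d[i][k], d[k][j]))
        return d

Feed it a numeric weight matrix with the tropical semiring and you get all-pairs shortest distances; feed it a True/False adjacency matrix with the boolean one and the identical loop answers reachability. The code stays the same; only the algebra changes.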

It looks like I’m not the only one out there reaching for this goal. Forth people discovered long ago how useful it is to program with as little as possible. Languages with strong type systems can help get there too.


This part is very much a work in progress, but one thing is clear to me: the problem of composing large pieces of code together without driving your development team crazy is both important and unsolved. I’ve seen many companies working with large code bases throw lots of manpower at it, and I definitely would like to get better at dealing with it. It’s now fifteen years since I started working in this industry — here’s to the next fifteen!

How I learned to program pt. 1 (October 22, 2016)

It’s very tempting to start this entry with a nostalgic description of all the old hardware and software I’ve ever worked with, but I’ll skip that part. When talking to other programmers, I’m more often than not astonished by all the different paths people follow on their way into this profession. Despite being immersed in a very similar spirit of the times, even programmers of a similar age to me mostly arrived where they are by their own unique route. So I thought it could be an interesting exercise to list the milestones on my own path that seem important in retrospect. Who knows whether skipping any of these would have had a profound effect on my career, but here they are.

8-bit assembly and the discovery of abstraction

During my school years, before I first started fiddling with programming in Z80 assembly, my view of computers was very romantic. Encountering assembly totally sobers you up. This is partly because it reveals the computer completely, down to the hardware rock bottom, so that no mystery remains about it.

At that time I had a go at the demoscene, which forced me to think hard about the programs I was writing. Here’s why: to the high-school version of me, an assembly program was just a sequence of elementary steps, nothing but a straight flow of instructions. I don’t remember using any sophisticated data structures, or organizing code in any way beyond a minimal split into procedures. That obviously led to code that was never up to the serious performance standards of the demoscene. Nevertheless, it was at first inconceivable to me that some alternative way of writing programs might exist. Over time, different ways of looking at the same small operations slowly started revealing themselves to me. For example, if you need a series, you don’t necessarily need a loop. A blob of data generated right in front of your program counter becomes code. The stack is just memory, and so on. I entered university suspecting that there was more than one way to describe what you want the computer to do — by abstracting yourself away from the sequence of instructions — and that this is actually important.
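
Translated out of Z80 assembly into Python for readability (the numbers and names here are illustrative, not from any real demo), the series-without-a-loop trick looks roughly like this: the series is generated once, up front, and the per-frame work degenerates into an index.

    import math

    # Generate the whole series ahead of time instead of computing
    # sin() inside the per-frame loop.
    SINE_TABLE = [int(100 + 80 * math.sin(2 * math.pi * i / 256))
                  for i in range(256)]

    def scanline_offset(frame, column):
        # One table lookup replaces a trigonometric computation.
        return SINE_TABLE[(frame + 4 * column) % 256]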

Pascal and the discovery of void

During the first university year, the teaching language of choice was Pascal, and the teaching style of choice was asking students to implement many standard data structures in it. This wasn’t difficult in itself, and my main memory of that time is struggling to deal with the “absence” of something, like a search tree that has no nodes. Adding a node to a tree is simple: you just hook it to its parent. But how do you add a node to an empty tree? On one occasion, an algorithm required several nested loops, with both the lower and upper bounds of the innermost loop controlled by the outer ones, both starting from zero, which eliminated the inner loop for at least one iteration. At 18, I found it so difficult to reason about the collapsed loop that I instead chose to perform whatever the first iteration required outside the loops, by hand. My dorm roommates, who were less clueless about writing programs, gave me a hard time laughing about that. That was the beginning of my learning to juggle “absent” code or data in my head.
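
In Python rather than the original Pascal, the usual way out is to make “no tree” an ordinary value that insertion can replace; a sketch:

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        # An empty tree is just None; returning the (possibly new)
        # root makes the special case disappear.
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root

    tree = None                  # the "absent" tree
    for k in [5, 2, 8]:
        tree = insert(tree, k)   # the first call creates the root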

This is continued in part two.

Remembering (or forgetting) math (October 2, 2016)

However much conventional wisdom insists that “if you don’t use it, you lose it”, I still find myself remembering most of the core mathematical ideas surrounding my field of work. The basics of computational complexity theory, logic, the parts of discrete mathematics that Donald Knuth calls “concrete mathematics,” as well as some graph theory, I can recall and reason about without much trouble. Now, more than ten years after my graduation from my alma mater, Moscow Aviation Institute, these topics have probably become mostly connected together in my mind, each reinforcing the understanding of another.

But despite that, I sometimes notice glaring — and embarrassing — omissions from this interconnected picture. Some parts of mathematical theory won’t stick, and keep evading me even after I refresh them as needed. My brain seems to steer around certain dangerous areas.

Sometimes it’s a question I feel almost scared of asking, because the answer seems to lie outside of what I know; it keeps hanging in the background of my mind, but it never prevents me from using the parts of the theory I do know. Sometimes it’s pieces that are important enough in isolation but never got connected to the big picture, and so they never commit to my memory as well-connected parts of the whole.

To put an end to this embarrassment, here’s an incomplete list of my glaring omissions, with the answers (which I spent some time finding).

Q: If Gödel’s Second Incompleteness Theorem put an end to the attempt to formalize mathematics, how come mathematicians still feel it’s OK to continue doing what they’re doing using Zermelo-Fraenkel set theory, which is, in turn, also subject to the Incompleteness Theorem?

A: A set of axioms for a system as powerful as Peano Arithmetic indeed can’t prove its own consistency, but a wider set of axioms may be able to do it. So the axioms of Zermelo-Fraenkel prove the consistency not of themselves, but — you guessed it — of the less powerful Peano Arithmetic. To prove the consistency of Zermelo-Fraenkel you need a larger system still, which again will be unable to prove its own consistency.
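
In symbols, writing Con(T) for “theory T is consistent”:

    \mathsf{ZF} \vdash \mathrm{Con}(\mathsf{PA})
    \qquad\text{but}\qquad
    \mathsf{ZF} \nvdash \mathrm{Con}(\mathsf{ZF})

A still stronger system (for example, ZF plus a large cardinal axiom) proves Con(ZF), and is then stuck at the same point one level higher.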

Q: What’s the difference between propositional logic, first-order logic and higher-order logics?

A: It’s so simple that I won’t even put the answer here. Back in my university days I needed to deal with description logics a lot, and I still didn’t manage to remember the computational complexity of reasoning over the different logic classes.
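
For the record, the whole difference fits into what the quantifiers are allowed to range over:

    % propositional: no quantifiers, only connectives over atomic propositions
    p \land q \rightarrow r
    % first-order: quantifiers range over individuals
    \forall x \, (P(x) \rightarrow \exists y \, R(x, y))
    % second-order and up: quantifiers may also range over predicates
    \exists P \, \forall x \, (P(x) \lor \lnot P(x))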

Q: Nothing in known complexity theory prevents us from building something more computationally capable than a Turing machine, does it? We just use the Turing machine as a convenient model, right?

A: Not so. Or, strictly speaking, it’s not yet known whether something more powerful exists, but most bets are on it not existing. Note that quantum computers are known to be no more powerful than a Turing machine in what they can compute. This one is particularly embarrassing, because the Church–Turing thesis — which presumes (but does not prove) that every computable function, anything that computes at all, can be computed on a Turing machine — is taught in the first year of a Computer Science course! It just never struck me until very recently that the thesis indeed talks about every computable function. Like, every.

Do you ever feel anything similar about something you know?

On being an adult (September 24, 2016)

I’ve recently turned 35, which is normally well beyond the age at which one can safely assume one is an adult. I remember myself very clearly in my twenties — maybe even a bit too clearly — and I’m totally sure I wasn’t yet an adult at 23, for instance. Somewhere in my late twenties, a change happened.

What changed? What part of me is now different from the 23-year-old me? The answer lies right on the surface: interacting with the world of other adults has gotten much easier than before. The world of adults holds very tightly onto one notion that has now taken hold over all the different ways I think and act. That notion is that everyone in the world is very different.

This may sound simple at first, but here’s what I mean by it. Accepting that everyone is different means accepting that you can never be sure of, or fully aware of, other people’s motives, desires, intentions, feelings or thoughts. Even when someone’s behaviour seems familiar — as happens, for example, with someone you know closely — there is always something very unpredictable they can do once, and it still won’t come into conflict with the image of the person you had in mind. You wouldn’t have imagined or even thought of them doing what they did, but it still fits well with the rest of your image of them. The world is just flexible enough to allow that too.

This manifests itself most in the fields that are normally considered “very adult”. Take law: nothing can be taken as obvious and unshakable by a judge in court. They know too well that factual evidence nobody could’ve predicted may easily beat an “obvious” assumption. And in politics, no issue comes without a second side; everything is open to a new interpretation that has a chance of remaining valid even in the crowd of old ones.

The flexibility of meanings, and the ability to control it, is clearly something that separates an adult from a child or a teenager. Both teenagers and adults use the same words, but teens apparently tend to think that words have fixed and obvious meanings — that they know for sure how things are, because “obviously” it can’t be any other way. Adults, on the other hand, can bend and control the meaning of the words they say. For them it gets easy, because there’s one thing you can be sure of: if you attach a certain meaning to a word and release it into the world, you will be amazed at the new meanings it comes back with.

I don’t remember exactly when this understanding came to me — I think it was a slow process around the age of 27 — but that clearly was the end of my non-adult life.