This post is largely a reaction to this piece of news: that YouTube/Google never expected the number of views of a YouTube video to exceed the capacity of a signed 32 bit counter, which tops out at 2147483647 views. This is very Bill Gates 2.0. I suppose the news item is most surprising for the *who*, not the *what*, but I still find the use of a 32 bit integer by *anyone* surprising.

32 bit counters represent roughly four billion values. In the YouTube case, the 32 bit integer was obviously signed: one bit represents the negative or positive sign and the other 31 bits represent the value. It’s silly to think a video could have *negative* views, but there are cases where unsigned 32 bit integers can cause problems (mostly when a signed integer lurks somewhere in the mix). But two or four billion isn’t that large. Certainly there are more people on the internet. Certainly one person can watch a video more than once. While it took Gangnam Style to exceed this number of views, it was inevitable that something would.
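As a sketch of the failure mode, Python’s `ctypes.c_int32` can emulate a fixed-width signed counter (the variable name here is illustrative, not YouTube’s actual code):

```python
import ctypes

# A signed 32 bit counter sitting at its maximum value.
views = ctypes.c_int32(2147483647)  # 2**31 - 1

# One more view silently wraps the counter around to the minimum.
views.value += 1
print(views.value)  # -2147483648
```

The wraparound is silent: no error, just a counter that suddenly reads as a huge negative number.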

In my own programming, I shy away from 32 bit integers unless the class of items is well-defined to be small. The cost of 8-byte integers (64 bits) vs. 4-byte integers (32 bits) is reasonably trivial, while the cost of fixing an overflow later is comparatively large.
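The size difference is easy to see; Python’s `struct` module reports the two widths directly (using its platform-independent standard sizes):

```python
import struct

# '=i' is a standard-size 32 bit int, '=q' a standard-size 64 bit int.
print(struct.calcsize('=i'))  # 4 bytes
print(struct.calcsize('=q'))  # 8 bytes
```

Four extra bytes per counter is rarely the thing that sinks a system; an overflowed counter in production can be.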

You might ask the question: is 64 bits enough? 64 bits represent 18446744073709551616 values, or up to 9223372036854775807 with one bit for the sign. It’s hard to wrap your mind around these numbers. Wikipedia’s page on Orders of magnitude (numbers) puts this value somewhere between the total number of insects on earth and the number of grains of sand on all the beaches in the world. So if we’re counting things that numerous, we need bigger numbers.
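These limits are easy to reproduce in Python, where integers are arbitrary precision; a count of ten quintillion (roughly the order of the insect estimate) already needs all 64 bits:

```python
# Total values representable in 64 bits, and the largest signed value.
print(2**64)      # 18446744073709551616
print(2**63 - 1)  # 9223372036854775807

# A count on the order of 10**19 (a rough figure) needs the full 64 bits.
print((10**19).bit_length())  # 64
```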

Conversely, the number of people in the world has grown from 32 bits to 33 bits (4 to 7 billion) from about 1970 to the present. Still, very few things we count fail to fit in 64 bits. At this point, almost any counter you create in a software program should be 64 bits wide.
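The population figures above can be checked with `int.bit_length`, which reports the minimum number of bits needed to hold a count:

```python
# World population then and now (rough figures).
print((4_000_000_000).bit_length())  # 32 bits were enough
print((7_000_000_000).bit_length())  # 33 bits are needed now
```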