Recently I came across something that reminded me of my youth: big-O calculations. Big-O is used to express the complexity of algorithms or programs; it gives you a rough worst-case estimate of how an algorithm's runtime and space grow relative to the size of the input.
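To make that growth concrete, here is a small illustrative sketch (my own example, not from any particular textbook): two ways to check a list for duplicates, one quadratic and one linear in the size of the input.

```python
# Hypothetical example: two ways to find duplicates in a list.
# The nested-loop version is O(n^2); the set-based version is O(n).

def has_duplicates_quadratic(items):
    # Compares every pair: roughly n*(n-1)/2 comparisons -> O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # One pass with a set: O(n) time, at the cost of O(n) extra space.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both give the same answer; big-O only tells you that, as the list grows, the first version's runtime grows roughly with the square of the input size while the second grows linearly.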
I didn’t know anything at all about big-O, so when I was asked about it I replied, ‘does anyone really do that kind of calculation?’. It reminded me of the time when people tried to estimate the run times of programs by calculating the rotational speed of drum memories, the timing of machine instructions and so on. When I started working with computers people had just stopped doing that type of calculation, and I was very happy about that.
I was told big-O is a big thing: it gives a worst-case measure of an algorithm, and the higher the big-O class, the slower it is likely to run as the input grows. But there are so many ifs and buts. These days you cannot do proper run-time calculations just by reading the source code; you also need to understand the optimizer, high-level instructions, libraries, the hardware and the operating system. If your language, say, parallelizes your code with a JIT compiler, that may skew your calculations, if I got big-O right.
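One of those ifs and buts can be shown with a minimal, hedged sketch (names and sizes are my own, purely illustrative): two functions with the same O(n) complexity can have very different real runtimes, because one runs as interpreted Python bytecode and the other in the interpreter's optimized C internals.

```python
import timeit

# Illustrative only: both functions below are O(n), yet the built-in
# sum() is typically much faster because it runs in optimized C,
# while the explicit loop is interpreted bytecode.

def sum_loop(items):
    total = 0
    for x in items:
        total += x
    return total

data = list(range(100_000))
t_loop = timeit.timeit(lambda: sum_loop(data), number=50)
t_builtin = timeit.timeit(lambda: sum(data), number=50)
print(f"python loop: {t_loop:.3f}s  built-in sum: {t_builtin:.3f}s")
```

Big-O predicts that both scale linearly, and it says nothing about the large constant factor between them; that part only shows up when you measure.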
I checked with some younger colleagues. One said, ‘I recall this from school, but I never used it in real life; real runtime figures depend on so many other things. I can imagine it being useful for assembler or C programmers, but for modern high-level programming languages it is probably of limited use.’ The others I asked did not know what big-O was; one said he remembered something about calculating the runtime of programs.
With my limited knowledge, I see big-O as a simple and clever way to compare different program snippets without running them. That can come in handy if you are developing a language optimizer or a JIT compiler, but otherwise it is of little use. Still, big-O is an interesting subject and gives food for thought. I would not be surprised if I end up using it one of these days.
I have spent years optimizing computer systems, anything from network throughput to assembler algorithms, SQL queries, physical I/O of databases, etc. I ‘only’ used real measurements, which you can rely on because they are real.
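In Python, such a real measurement is a one-liner with the standard `timeit` module. A minimal sketch (data and sizes are illustrative assumptions of mine): timing a membership test against a list versus a set, where big-O (O(n) scan versus average O(1) hash lookup) and the measured numbers agree.

```python
import timeit

# Illustrative measurement: membership test in a list vs. a set.
# The list scan is O(n); the set lookup is O(1) on average.
data = list(range(10_000))
as_set = set(data)

t_list = timeit.timeit(lambda: 9_999 in data, number=1_000)
t_set = timeit.timeit(lambda: 9_999 in as_set, number=1_000)
print(f"list scan: {t_list:.4f}s  set lookup: {t_set:.4f}s")
```

Here the measurement and the big-O prediction point the same way, which is the happy case; the value of measuring is catching the cases where they do not.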
Links I found useful when I studied big-O: