What Great Programmers Know That Newbies Don’t

Let’s say you and a friend challenged each other to code an algorithm that finds a student by key in a list of 100 students, and you both came up with the following solution.
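
A minimal sketch of such a solution in Java might look like the following. The Student class and its fields are assumptions made for illustration; any list of records with a searchable key would do.

    // Hypothetical sketch of the linear-search solution.
    public class StudentSearch {

        // Hypothetical Student record used throughout these sketches.
        static class Student {
            int key;     // the key we search by
            String name;

            Student(int key, String name) {
                this.key = key;
                this.name = name;
            }
        }

        // Linear search: walk the list from front to back until the key matches.
        static Student findStudent(Student[] students, int key) {
            for (Student s : students) {
                if (s.key == key) {
                    return s;
                }
            }
            return null; // key not in the list
        }
    }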

Well, this algorithm works and will find the key if it exists in the list, but can you see what would happen if the list were larger?

Let’s say 100,000 students instead of 100?

Also, how could we measure its efficiency?

One way could be to time how long the above algorithm takes to complete. Let’s say you made some changes to your code to measure its running time and ended up with the following.
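
A sketch of that measurement, wrapping the search in calls to Java’s System.nanoTime(); this main method could sit in the same StudentSearch class from the sketch above, and the list-building code is again hypothetical:

    public static void main(String[] args) {
        // Build a hypothetical list of 100 students.
        int n = 100;
        Student[] students = new Student[n];
        for (int i = 0; i < n; i++) {
            students[i] = new Student(i, "Student " + i);
        }

        long start = System.nanoTime();  // timestamp before the search
        findStudent(students, n - 1);    // worst case: the key is at the end
        long end = System.nanoTime();    // timestamp after the search

        System.out.println("Total execution time: " + (end - start));
    }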

With this, we can measure how long the algorithm takes to complete the given task, in nanoseconds.

Let’s run the algorithm 3 times and see what results we get:

Total execution time: 1793927649

Total execution time: 1178524592

Total execution time: 2186708577

We can see that the algorithm did not take the same amount of time to complete on all three runs.

Why?

What happens is that while your computer is executing this algorithm, it is running many other programs at the same time, and they all compete for precious CPU time.

For example, you could have

  • VLC media player running and playing some music
  • Adobe Photoshop processing your images
  • A browser downloading web content
  • HandBrake transcoding your HD movies, all at what seems to be the same time

Given this, measuring our algorithm’s completion time with this technique is not the best way. Worse still, the same algorithm could perform better or worse depending on the hardware and operating system it is executed on.

How, then, do we compare the efficiency of two different algorithms regardless of their hardware or software environment?

To create a more uniform way of measuring algorithmic efficiency, computer scientists (e.g. Donald Knuth) and mathematicians devised a concept for measuring the asymptotic complexity of a program, called Big O notation. In layman’s terms,

it is a way of measuring how fast a program’s run time grows asymptotically, that is, as the size of its input grows towards infinity.

Let’s use Big O to measure the efficiency, or time complexity, of our solution above.

Remember, we want to find the student with a given key in a list of N students, where N is 100.

The algorithm loops over every single item in the array of size N and checks whether that element’s key is equal to the given key K.

That means it could take up to N lookups to complete the task. If N is 100, that is 100 lookups, which is fine when you are dealing with small numbers of students, but what if our software must store data for the entire student population of a country, or several countries?

Surely this would be inefficient as the size of our input grows towards infinity.

Computer scientists have ways to define the efficiency of algorithms. We say this algorithm is O(n): the time it takes is proportional to the size of N.

The algorithm we conjured up is called a linear search, and it is a poor choice for solving large problems.

There is a better algorithm for this problem, called binary search.

This is an example of an efficient algorithm for finding a given key in a very large list, provided the list is already in sorted order. The notation for this algorithm is O(log(n)), which simply means that on every iteration, N is cut in half. We keep dividing the problem space until we find the given key, so instead of taking up to 100 iterations we take at most 7 (log base 2 of 100 is about 6.64, rounded up to 7), which is much more efficient!
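
A sketch of binary search over the same student array (sorted ascending by key), reusing the hypothetical Student class from earlier; this method could sit in the same StudentSearch class:

    // Binary search: repeatedly halve the search range of a sorted array.
    static Student binarySearch(Student[] students, int key) {
        int low = 0;
        int high = students.length - 1;

        while (low <= high) {
            int mid = low + (high - low) / 2;  // midpoint, written to avoid overflow
            if (students[mid].key == key) {
                return students[mid];          // found the key
            } else if (students[mid].key < key) {
                low = mid + 1;                 // key can only be in the upper half
            } else {
                high = mid - 1;                // key can only be in the lower half
            }
        }
        return null; // key not in the list
    }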

So you can see why analyzing algorithms this way is much better than our first method: we can easily see that our binary search algorithm will scale as the input size grows.

Here are some of the most commonly used notations:

  • O(1): constant time
  • O(log n): logarithmic time
  • O((log n)^c): polylogarithmic time
  • O(n): linear time
  • O(n^2): quadratic time
  • O(n^c): polynomial time
  • O(c^n): exponential time

Great programmers know how to analyze their algorithms and prove that they are efficient. They also make sure their solutions can scale as the amount of data grows.

The software we rely on every day, like social networks and search engines, needs programmers who can analyze problems and come up with great algorithms to make it work efficiently. Companies like Google, Yahoo, Apple, IBM and Microsoft look for programmers with these abilities to work on their products and infrastructure so that users can get work done faster.

In the next article I will talk about some other things that great programmers know that set them apart from “okay” programmers. Stay tuned.

Cover image: Peter Peele | African Tech Round-Up
