>>> l = [6, 8, 3, 5, 1, 2, 4, 7]
>>> l.sort()
>>> l
[1, 2, 3, 4, 5, 6, 7, 8]
Those were good times.
Now I know that there are different ways to sort a list: bubble sort, insertion sort, selection sort, merge sort, quick sort, etc. Learning these sorting algorithms (whose code and descriptions can be found here) is useful for understanding the difference in efficiency one working implementation can make over another.
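To make the comparison concrete, here's a minimal sketch of bubble sort, the first algorithm in that list. It repeatedly walks the list and swaps adjacent out-of-order pairs; the function name and early-exit flag are my own choices, not from any particular reference implementation.

```python
def bubble_sort(lst):
    """Sort a list in place by swapping adjacent out-of-order pairs."""
    n = len(lst)
    for i in range(n - 1):
        swapped = False
        # after pass i, the last i elements are already in place
        for j in range(n - 1 - i):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return lst
```

Running it on the list from the REPL session above, `bubble_sort([6, 8, 3, 5, 1, 2, 4, 7])` gives the same `[1, 2, 3, 4, 5, 6, 7, 8]` as `l.sort()`.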
The efficiency of a program is determined by how many steps it takes to solve a problem as a function of the size of that problem. In the case of list sorting, the runtime would be determined by the length of the list. We count steps rather than measuring speed to avoid including differences in CPU speed as a factor. Computer scientists like to look at the worst case runtime not because they're pessimistic but because knowing the greatest number of steps a program can take puts an upper bound on its runtime.
The runtime of a program is denoted as some function of the problem size. This is useful for predicting how much longer a program will take to run as the problem size increases. For example, the function for the worst case runtime of bubble sort is n². This means that if the length of the list were to double, the runtime would be roughly four times as long. Contrasting this with merge sort's n log n runtime, it is evident that bubble sort is not as efficient.
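We can check that "doubling the list quadruples the work" claim by counting comparisons instead of timing anything. This is a sketch of my own: it runs bubble sort (without the early-exit optimization) on a worst-case, reverse-sorted list and tallies every comparison.

```python
def bubble_sort_comparisons(n):
    """Count comparisons bubble sort makes on a reverse-sorted list of length n."""
    lst = list(range(n, 0, -1))  # worst case: fully reversed
    count = 0
    for i in range(len(lst) - 1):
        for j in range(len(lst) - 1 - i):
            count += 1  # one comparison per inner-loop step
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
    return count
```

For a list of length 8 this makes 28 comparisons; for length 16 it makes 120, about 4.3 times as many. The exact count is n(n − 1)/2, which grows like n², so the ratio approaches 4 as the lists get longer.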
The worst case runtime of a program is denoted as O(function). For example, bubble sort has O(n²) and merge sort has O(n log n). Bubble sort is jealous.
This is called Big-O notation. Because it's kind of a big deal.
...I'll see myself out now.