There are many different sorting algorithms, each with varying efficiency, from one of the least efficient (bubble sort) to one of the most efficient (quicksort). Each algorithm has its pros and cons: some perform well on small lists but struggle with large ones, meaning their runtime grows steeply in proportion to the size of the input, while others excel precisely when the input list is large. The order in which the information is arranged before sorting also affects runtime, depending on how a given algorithm processes the list. It is therefore important to know the worst-case runtime of an algorithm. To determine this, you calculate the algorithm's time complexity and express it using big O notation, which describes the growth behaviour of a function.
[Figure: f(x) (blue) and g(x) (red), illustrating f(x) ∈ O(g(x)).]
Big O notation can be used to describe the worst-case runtime of an algorithm. For example, bubble sort has a quadratic worst-case runtime, O(n²), which means that the runtime grows quadratically in proportion to the size of the input list. Consider the code for bubble sort:
def bubbleSort(alist):
    # Each pass bubbles the largest remaining item to the end of the list.
    for passnum in range(len(alist) - 1, 0, -1):
        for i in range(passnum):
            # Swap adjacent items that are out of order.
            if alist[i] > alist[i + 1]:
                temp = alist[i]
                alist[i] = alist[i + 1]
                alist[i + 1] = temp
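To see the quadratic behaviour concretely, here is a small sketch (not part of the listing above) that instruments bubble sort to count how many comparisons the inner loop performs. The function name bubble_sort_counting is hypothetical; for a list of length n, the count is always n*(n-1)/2, which is why the worst case is O(n²).

```python
# Instrumented bubble sort: counts every comparison the inner loop makes.
# For a list of length n this is (n-1) + (n-2) + ... + 1 = n*(n-1)/2,
# a quadratic function of n.
def bubble_sort_counting(alist):
    comparisons = 0
    for passnum in range(len(alist) - 1, 0, -1):
        for i in range(passnum):
            comparisons += 1
            if alist[i] > alist[i + 1]:
                alist[i], alist[i + 1] = alist[i + 1], alist[i]
    return comparisons

data = [54, 26, 93, 17, 77, 31, 44, 55, 20]
count = bubble_sort_counting(data)
print(data)   # the list is sorted in place
print(count)  # 9 items -> 9*8/2 = 36 comparisons
```

Doubling the list size roughly quadruples the comparison count, which is the practical meaning of quadratic growth.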