Sorting algorithms

    Here are some of the most popular sorting algorithms. We use them to put information in ascending or descending order. Since we usually need to sort arrays, all examples and explanations assume that we are sorting an array.


Sorting algorithms characteristics

  • Time complexity - the worst-case or average number of comparisons and swaps that the algorithm needs to perform.
  • Space complexity - the additional memory that the algorithm uses in order to do the sorting.
  • Adaptability - some algorithms recognize an already sorted or nearly sorted array. In such cases these adaptive algorithms can improve to linear time complexity.
  • Stability - a stable sort preserves the relative order of elements with equal keys. This is often desired when sorting complex data, such as objects.

O(n²) complexity

    These algorithms are simple and relatively easy to implement. They are good for learning purposes. Some of them are also used to complement the more sophisticated sorters from the O(n*log n) group.

Bubble sort

    is a stable comparison sorting algorithm. It has an average complexity of O(n²), where 'n' is the number of elements to be sorted, and a best case of O(n) when the input array is already sorted or nearly sorted. It is relatively easy to implement, but too slow to be used in practice.
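As an illustrative sketch (not from the original text), a bubble sort with the early-exit check that gives the O(n) best case might look like this in Python:

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent out-of-order elements."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in place.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            # No swaps means the array is sorted: this is the O(n) best case.
            break
    return arr
```

Because only adjacent equal elements are never swapped, the relative order of equal keys is preserved, which is what makes bubble sort stable.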

Selection sort

    is an unstable sorter. Its best, average and worst complexities are all quadratic, so it is too slow for sorting big arrays. Selection sort has one major advantage – it makes only "n" swaps, where "n" is the number of elements to be sorted. This makes it useful when a swap is a very expensive operation (for instance, when the elements are large objects with small keys, or when writing to flash memory).
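A rough Python sketch (added here for illustration) shows why the swap count stays low – each pass scans for the minimum but performs at most one swap:

```python
def selection_sort(arr):
    """Select the minimum of the unsorted suffix and swap it into place."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):  # O(n²) comparisons in every case
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:
            arr[i], arr[min_idx] = arr[min_idx], arr[i]  # at most one swap per pass
    return arr
```

The long-distance swap is also what breaks stability: it can jump an element past another one with an equal key.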

Insertion sort

    is one of the most efficient of the O(n²) sorters. It has a worst-case and average time complexity of O(n²); its best case of O(n) occurs when the input is already sorted. It is stable and adaptive, and very efficient for sorting small data sets. For that reason it is used in practice to complement advanced sorters like quicksort and shell sort.
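As a minimal sketch (not part of the original text), the shifting loop below is what makes insertion sort adaptive – on already sorted input the inner loop never runs:

```python
def insertion_sort(arr):
    """Insert each element into its place among the already-sorted prefix."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:  # shift larger elements one slot right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
```

Using a strict `>` comparison keeps equal elements in their original order, so the sort is stable.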

Sorters with O(n*log n) time complexity

Merge sort

    Merge sort uses the "divide and conquer" technique. It is very efficient – it runs in O(n*log n) and makes a low number of comparisons. One disadvantage is the extra space it requires. The usual implementation is stable, meaning that it preserves the order of equal elements. It has many implementations and applications in practice.
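A simple top-down sketch in Python (added for illustration; the extra lists it allocates are the O(n) space disadvantage mentioned above):

```python
def merge_sort(arr):
    """Split the array in half, sort each half, then merge the results."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # "<=" takes from the left half first: stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```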

Heap sort

    Heap sort is a very fast sorting algorithm. It runs in O(n*log n) time and requires only constant additional space, O(1). It is a kind of evolved selection sort that uses a complete binary tree structure (a heap). The heap reduces the time to find the maximum element and thus makes the routine very efficient. Heap sort is unstable, because the heap operations can change the relative order of elements with equal keys. One downside is that it is not adaptive to nearly sorted input.
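An in-place sketch in Python (illustrative, not from the original text): build a max-heap inside the array, then repeatedly move the maximum to the back – the same "select the extreme element" idea as selection sort, but the heap finds it in O(log n):

```python
def heap_sort(arr):
    """In-place heap sort: build a max-heap, then pop the max to the back."""
    n = len(arr)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree at 'root',
        # considering only indices below 'end'.
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1  # pick the larger child
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):  # heapify: O(n)
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # n-1 extractions, O(log n) each
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr
```

The long-range swap `arr[0], arr[end]` is one place where elements with equal keys can pass each other, which is why heap sort is unstable.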

Quicksort

    Quicksort is probably the most used sorting algorithm today. The idea is to exchange elements over larger distances. The algorithm chooses a pivot element, moves all smaller elements before the pivot and all bigger ones after it, and is then called recursively on the two parts.

    At first glance quicksort seems to be of quadratic complexity. Its worst case is indeed O(n²), but that case is rare. Although its average time complexity is harder to prove, it is O(n*log n), and the constant factor is about two times better than that of heap sort and merge sort.
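A compact (non-in-place) sketch of the partition-and-recurse idea in Python, added here for illustration; production versions usually partition in place and choose the pivot more carefully:

```python
def quicksort(arr):
    """Partition around a pivot, then sort the two parts recursively."""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]           # middle element as pivot
    smaller = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    bigger  = [x for x in arr if x > pivot]
    return quicksort(smaller) + equal + quicksort(bigger)
```

A consistently bad pivot (e.g. always the smallest element) is what produces the rare O(n²) worst case, since one partition then holds nearly all the elements.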
