2008-09-20 04:59:59 8 Comments

I'm asking more about what this means to my code. I understand the concepts mathematically, I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here?

- Like what exactly would an O(n^2) operation do?
- And what the heck does it mean if an operation is O(n log(n))?
- And does somebody have to smoke crack to write an O(x!)?


## 25 comments

## @Albin Sunnanbo 2010-08-11 19:00:05

I try to explain by giving simple code examples in C#.

For

`List<int> numbers = new List<int> {1,2,3,4,5,6,7,12,543,7};`

O(1) looks like

O(n) looks like

O(n log(n)) looks like

O(n^2) looks like

O(n!) looks like, uhm, too tired to come up with anything simple.

But I hope you get the general point?
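The code bodies under each "looks like" were lost from this copy of the answer. A sketch of what they plausibly showed, rewritten in Python rather than the original C# (function names are illustrative):

```python
# The list from the answer above.
numbers = [1, 2, 3, 4, 5, 6, 7, 12, 543, 7]

# O(1): a fixed number of operations, no matter how long the list is.
def get_first(items):
    return items[0]

# O(n): touch every element once.
def total(items):
    s = 0
    for x in items:
        s += x
    return s

# O(n^2): for every element, look at every element again.
def has_duplicate(items):
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False

# O(n log(n)): a comparison sort, e.g. the merge sort behind sorted().
def sort_them(items):
    return sorted(items)

print(has_duplicate(numbers))  # True, because 7 appears twice
```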

## @Nicholas Hamilton 2017-01-10 21:13:01

The Fibonacci sequence computed with the naive recursive approach, where previous terms are not stored, takes exponential time (roughly O(2^n)), not O(n!).

## @Don Neufeld 2008-09-20 05:08:03

One way of thinking about it is this:

O(N^2) means for every element, you're doing something with every other element, such as comparing them. Bubble sort is an example of this.

O(N log N) means for every element, you're doing something that only needs to look at log N of the elements. This is usually because you know something about the elements that let you make an efficient choice. Most efficient sorts are an example of this, such as merge sort.

O(N!) means to do something for all possible permutations of the N elements. Traveling salesman is an example of this, where there are N! ways to visit the nodes, and the brute force solution is to look at the total cost of every possible permutation to find the optimal one.
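The brute-force traveling-salesman idea can be sketched in a few lines; the 4-city distance matrix below is illustrative, not from the original answer:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Try every tour starting and ending at city 0 and keep the cheapest.
    There are (n-1)! permutations to check, hence the factorial blow-up."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# A made-up symmetric distance matrix for 4 cities.
dist = [
    [0, 1, 4, 6],
    [1, 0, 2, 5],
    [4, 2, 0, 3],
    [6, 5, 3, 0],
]
print(brute_force_tsp(dist))  # (12, (0, 1, 2, 3, 0))
```

Even at this toy size the loop runs 3! = 6 times; at 20 cities it would be 19! ≈ 10^17 tours.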

## @Steve Jessop 2008-09-20 11:42:16

Good explanation, although it should be noted that it is what it says - "a way of thinking about it" rather than the literal truth. For instance, if for half the elements you do something with half the other elements, that's still O(n^2)

## @chepner 2015-05-06 17:20:30

In nearly all cases, O(N log N) means you are either sorting the input, or storing it in such a way that you can read it back in sorted order.

## @Matthew Rapati 2008-09-20 05:09:53

You might find it useful to visualize it:

Also, on a LogY/LogX scale the functions n^(1/2), n, n^2 all look like straight lines, while on a LogY/X scale 2^n, e^n, 10^n are straight lines and n! is linearithmic (looks like n log n).

## @Will Ness 2017-12-07 09:02:43

for completeness, it'd be cool if two more graphs were added here: one on a LogY/LogX scale (so n^(1/2), n, n^2 are straight lines) and the other on a LogY/X scale (so 2^n, e^n, 10^n are straight lines and n! is linearithmic (looks like n log n)).

## @Will Ness 2017-12-07 09:10:50

I went ahead and made a suggestive edit, hope it's cool by you. :)

## @bmdhacks 2008-09-20 05:23:31

The big thing that Big-O notation means to your code is how it will scale when you double the amount of "things" it operates on. Here's a concrete example:

So take quicksort, which is O(n log(n)), vs bubble sort, which is O(n^2). When sorting 10 things, quicksort is 3 times faster than bubble sort. But when sorting 100 things, it's 14 times faster! Clearly picking the fastest algorithm is important then. When you get to databases with millions of rows, it can mean the difference between your query executing in 0.2 seconds versus taking hours.

Another thing to consider is that a bad algorithm is one thing that Moore's law cannot help. For example, if you've got some scientific calculation that's O(n^3) and it can compute 100 things a day, doubling the processor speed only gets you 125 things in a day. However, knock that calculation to O(n^2) and you're doing 1000 things a day.
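Ignoring constant factors (and picking log base 2), the speedup ratios quoted above can be checked directly; the exact figures vary with those hidden constants, but the trend is the point:

```python
import math

def ratio(n):
    """How many times more steps n*n takes than n*log2(n)."""
    return (n * n) / (n * math.log2(n))

print(round(ratio(10)))   # ~3: quicksort-ish vs bubble-ish at 10 items
print(round(ratio(100)))  # ~15: the gap widens fast as n grows
```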

Clarification: Actually, Big-O says nothing about comparative performance of different algorithms at the same specific size point, but rather about comparative performance of the same algorithm at different size points.

## @Alderath 2011-12-30 13:49:23

While it is surely helpful, I do not think this is the best way to describe it, because this explanation gives rise to a very common misconception about Big-O. Some people erroneously tend to think that "an O(1) algorithm is always better than an O(n) algorithm". While that is most often the case, it is not always true. It is possible, on one hand, to have an O(1) function which operates on a set of N numbers and takes roughly 1 second to execute regardless of N, and on the other hand an O(N) function doing the same thing in 1 ms for N = 1 million, 5 ms for N = 5 million, and 100 ms for N = 100 million.

## @Khaled.K 2015-05-21 11:08:33

I'll try to write an explanation for a real eight-year-old boy, avoiding technical terms and mathematical notation.

**O(n^2):** You are at a party, and there are `n` people at the party including you. How many handshakes does it take so that everyone has shaken hands with everyone else, given that people will probably forget whom they have already shaken hands with at some point?

Note: this approximates to a simplex yielding `n(n-1)`, which is close enough to `n^2`.

**O(n log n):** Your favorite team has won, they are standing in line, and there are `n` players in the team. How many handshakes does it take for you to shake hands with every player, if you shake each player's hand as many times as there are digits in the number of players `n`?

Note: this yields `n * log n` to the base 10.

**O(n!):** You are a rich kid and in your wardrobe there are a lot of clothes; there are `x` drawers, one for each type of clothing. The drawers are next to each other, the first drawer has 1 item, and each drawer has as many items as the drawer to its left plus one more, so you have `1` hat, `2` wigs, ... `(x-1)` pairs of pants, then `x` shirts. Now, in how many ways can you dress up using a single item from each drawer?

Note: this example represents the number of leaves in a decision tree where `number of children = depth`, which works out to `1 * 2 * 3 * .. * x = x!`

## @Pavan Katepalli 2017-02-06 15:19:06

the handshake example doesn't make sense. It would be O(n) directly correlated to the number of players on the team. Why would you shake someone's hand a random amount of times?

## @Khaled.K 2017-02-07 17:07:53

@PavanKatepalli the solution didn't say "random"; it specified how many, if you keep reading: "how many times, how many digits are in the number of the players `n`". The number of digits in a number is its log to the base 10, given that it's a positive integer.

, the number of digits in a number is its log to the base 10, given it's a positive integer.## @sfink 2009-01-26 23:52:54

don.neufeld's answer is very good, but I'd probably explain it in two parts: first, there's a rough hierarchy of O()'s that most algorithms fall into. Then, you can look at each of those to come up with sketches of what *typical* algorithms of that time complexity do.

For practical purposes, the only O()'s that ever seem to matter are:

- O(1) "constant"
- O(log n) "logarithmic"
- O(n) "linear"
- O(n log n)
- O(n^2) "quadratic"
- O(n^3) "cubic"
- O(2^n) "exponential" - the problem is either fundamentally computationally hard or you're being an idiot.

And that's it. There are many other possibilities that fit between these (or are greater than O(2^n)), but they don't often happen in practice and they're not qualitatively much different from one of these. Cubic algorithms are already a bit of a stretch; I only included them because I've run into them often enough to be worth mentioning (e.g. matrix multiplication).

What's actually happening for these classes of algorithms? Well, I think you had a good start, although there are many examples that wouldn't fit these characterizations. But for the above, I'd say it usually goes something like:

- O(1): retrieve or compute a fixed amount of data; the size of the input doesn't matter
- O(log n): repeatedly cut the input in half, as in binary search
- O(n): look at each input once
- O(n log n): look at each input, doing a log n amount of work (a search or a merge) for each
- O(n^2): for each input, do something with every other input, such as comparing all pairs
- O(n^3): triply nested loops, such as naive matrix multiplication
- O(2^n): try every subset or combination of the inputs

None of these are rigorous. Especially not linear time algorithms (O(n)): I could come up with a number of examples where you have to look at all of the inputs, then half of them, then half of those, etc. Or the other way around -- you fold together pairs of inputs, then recurse on the output. These don't fit the description above, since you're not looking at each input once, but it still comes out in linear time. Still, 99.2% of the time, linear time means looking at each input once.
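The "all inputs, then half of them, then half of those" pattern can be checked by counting operations; the series n + n/2 + n/4 + ... stays below 2n, so it is still linear:

```python
def halving_passes(n):
    """Look at all n inputs, then half of them, then half of those, ...
    Total operations stay under 2n, so the whole thing is O(n)."""
    ops, remaining = 0, n
    while remaining >= 1:
        ops += remaining
        remaining //= 2
    return ops

print(halving_passes(1024))  # 2047, safely under 2 * 1024
```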

## @Jason S 2009-01-27 00:00:41

actually matrix multiplication is sub-n^3 (the regular way is n^3), see the Strassen algorithm (n^(log_2(7)))

## @Jason S 2009-01-27 00:03:24

and then there are factoring algorithms, somewhere between sqrt(n)=naive and log(n)=impossible

## @sfink 2009-06-09 19:26:35

O(sqrt(n)) - good one. That is indeed a missing meaningful level. I should add that. But re: matrix multiplication -- that's mostly what I was thinking about in my "cubic" bullet point (it's where the n^2.8... came from.) I still assert it isn't worth the extra overhead in most practical cases.

## @iandisme 2010-08-17 19:58:38

"O(2^n) "exponential" - the problem is either fundamentally computationally hard or you're being an idiot." I laughed. +1

## @user3347123 2014-12-22 18:22:15

To remain faithful to the question asked, I would answer it in the manner I would answer an 8-year-old kid.

Suppose an ice-cream seller prepares a number of ice creams (say N) of different shapes, arranged in an orderly fashion. You want to eat the ice cream lying in the middle.

Case 1: You can eat an ice cream only if you have eaten all the ice creams smaller than it. You will have to eat half of all the ice creams prepared (the input). The answer directly depends on the size of the input, so the solution will be of order O(N).

Case 2: You can directly eat the ice cream in the middle. The solution will be O(1).

Case 3: You can eat an ice cream only if you have eaten all the ice creams smaller than it, and each time you eat an ice cream you allow another kid (a new kid every time) to eat all his ice creams. The total time taken would be N + N + N... (N/2 times), so the solution will be O(N^2).

## @Anders Öhrt 2009-05-03 15:14:17

No, just use Prolog. If you write a sorting algorithm in Prolog by just describing that each element should be bigger than the previous, and let backtracking do the sorting for you, that will be O(x!). Also known as "permutation sort".
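A Python rendering of the same "permutation sort" idea (generate orderings until one happens to be sorted) might look like this; strictly it is O(n · n!), since each of the n! candidates also costs O(n) to check:

```python
from itertools import permutations

def permutation_sort(items):
    """Try every ordering until one is non-decreasing: O(n * n!)."""
    for candidate in permutations(items):
        if all(a <= b for a, b in zip(candidate, candidate[1:])):
            return list(candidate)

print(permutation_sort([3, 1, 2]))  # [1, 2, 3]
```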

## @John Gardner 2008-09-20 05:04:27

A lot of these are easy to demonstrate with something non-programming, like shuffling cards.

Sorting a deck of cards by going through the whole deck to find the ace of spades, then going through the whole deck to find the 2 of spades, and so on would be worst case n^2, if the deck was already sorted backwards. You looked at all 52 cards 52 times.
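Counting the "looks" makes the 52 × 52 figure concrete; this sketch simply charges a full scan of the deck for each card being placed, the worst case described above:

```python
def looks_to_sort(deck):
    """Place each of the n cards by scanning all n cards for it:
    n passes of n looks each, the worst case described above."""
    looks = 0
    for _wanted in sorted(deck):   # one pass per card to place
        for _card in deck:         # scan the entire deck each time
            looks += 1
    return looks

reversed_deck = list(range(52, 0, -1))  # "sorted backwards"
print(looks_to_sort(reversed_deck))  # 2704 looks, i.e. 52 * 52
```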

In general the really bad algorithms aren't necessarily intentional, they're commonly a misuse of something else, like calling a method that is linear inside some other method that repeats over the same set linearly.

## @Mike Dunlavey 2009-10-22 19:47:10

Remember the fable of the tortoise and the hare (turtle and rabbit)?

Over the long run, the tortoise wins, but over the short run the hare wins.

That's like O(logN) (tortoise) vs. O(N) (hare).

If two methods differ in their big-O, then there is a level of N at which one of them will win, but big-O says nothing about how big that N is.

## @Aric TenEyck 2009-06-09 22:02:24

The way I describe it to my nontechnical friends is like this:

Consider multi-digit addition. Good old-fashioned, pencil-and-paper addition. The kind you learned when you were 7-8 years old. Given two three-or-four-digit numbers, you can find out what they add up to fairly easily.

If I gave you two 100-digit numbers, and asked you what they add up to, figuring it out would be pretty straightforward, even if you had to use pencil-and-paper. A bright kid could do such an addition in just a few minutes. This would only require about 100 operations.

Now, consider multi-digit multiplication. You probably learned that at around 8 or 9 years old. You (hopefully) did lots of repetitive drills to learn the mechanics behind it.

Now, imagine I gave you those same two 100-digit numbers and told you to multiply them together. This would be a much, *much* harder task, something that would take you hours to do - and that you'd be unlikely to do without mistakes. The reason for this is that (this version of) multiplication is O(n^2); each digit in the bottom number has to be multiplied by each digit in the top number, leaving a total of about n^2 operations. In the case of the 100-digit numbers, that's 10,000 multiplications.

## @kastermester 2009-07-04 15:50:05

This is actually a great explanation of describing how different algorithms can take more time - although there's a difference here in which the algorithms (addition and multiplication) produce different results. Also a thing you left out, is that after multiplying these 2x 100 digit numbers (that is all the different parts), you're still left with having to add them all up (that's 10,000 numbers, some very large) - so your "algorithm" suddenly becomes O(scary) - I'm not good on this subject, which is why I read the question through.
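The digit-operation counts behind the analogy can be made concrete. This grade-school multiplication sketch (digit lists are least-significant-digit first) counts single-digit products, which is where the n^2 comes from:

```python
def long_multiply(a, b):
    """Grade-school multiplication of two numbers given as digit lists
    (least-significant digit first). Counts single-digit multiplications."""
    ops = 0
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            ops += 1                    # one pencil-and-paper digit product
            result[i + j] += da * db
    for k in range(len(result) - 1):    # propagate the carries
        result[k + 1] += result[k] // 10
        result[k] %= 10
    return result, ops

digits, ops = long_multiply([2, 1], [4, 3])  # 12 * 34
print(digits, ops)  # [8, 0, 4, 0] (i.e. 408) using 4 digit products
```

For two 100-digit numbers the same function performs exactly 100 × 100 = 10,000 digit products, matching the figure in the answer; grade-school addition, by contrast, needs only about one operation per column.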

## @David Thornley 2009-06-09 21:49:10

Suppose you had a computer that could solve a problem of a certain size. Now imagine that we can double the performance a few times. How much bigger a problem can we solve with each doubling?

If we can solve a problem of double the size, that's O(n).

If we have some multiplier that isn't one, that's some sort of polynomial complexity. For example, if each doubling allows us to increase the problem size by about 40%, it's O(n^2), and about 30% would be O(n^3).

If we just add to the problem size, it's exponential or worse. For example, if each doubling means we can solve a problem 1 bigger, it's O(2^n). (This is why brute-forcing a cipher key becomes effectively impossible with reasonably sized keys: a 128-bit key requires 2^64 - about 18 quintillion - times as much processing as a 64-bit key.)
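The doubling arithmetic above can be checked numerically; since work ~ n^k implies the solvable size grows by 2^(1/k) per doubling of the budget, a tiny sketch:

```python
def size_growth(k, n=1000):
    """If work grows like n**k, how much bigger a problem fits
    after doubling the compute budget? (Answer: 2**(1/k).)"""
    budget = 2 * n ** k
    return (budget ** (1 / k)) / n

print(round(size_growth(1), 2))  # 2.0  -> O(n): double the problem size
print(round(size_growth(2), 2))  # 1.41 -> O(n^2): about 40% bigger
print(round(size_growth(3), 2))  # 1.26 -> O(n^3): roughly the 30% quoted above
```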

## @Assaf Lavie 2009-06-03 04:36:03

The "intuition" behind Big-O: imagine a "competition" between two functions over x, as x approaches infinity: f(x) and g(x).

Now, if from some point on (some x) one function always has a higher value than the other, then let's call this function "faster" than the other.

So, for example, if for every x > 100 you see that f(x) > g(x), then f(x) is "faster" than g(x).

In this case we would say g(x) = O(f(x)). f(x) sets a sort of "speed limit" for g(x), since eventually it passes it and leaves it behind for good.

This isn't exactly the definition of big-O notation, which also states that f(x) only has to be larger than C*g(x) for some constant C (which is just another way of saying that you can't help g(x) win the competition by multiplying it by a constant factor - f(x) will always win in the end). The formal definition also uses absolute values. But I hope I managed to make it intuitive.

## @Chad Brewbaker 2009-04-20 02:13:36

Tell your eight year old log(n) means the number of times you have to chop a length n log in two for it to get down to size n=1 :p

O(n log n) is usually sorting O(n^2) is usually comparing all pairs of elements
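The log-chopping image, in code - count how many halvings a length-n log needs to get down to length 1:

```python
def chops(n):
    """How many times must a length-n log be chopped in half
    before it is down to length 1?"""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(chops(1024))  # 10, since 2**10 == 1024
```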

## @Domenic 2008-09-20 05:34:54

This might be too mathematical, but here's my try. (I *am* a mathematician.)

If something is O(f(n)), then its running time on n elements will be equal to A f(n) + B (measured in, say, clock cycles or CPU operations). It's key to understand that you also have these constants A and B, which arise from the specific implementation. B represents essentially the "constant overhead" of your operation, for example some preprocessing that you do that doesn't depend on the size of the collection. A represents the speed of your actual item-processing algorithm.

The key, though, is that you use big O notation to figure out *how well something will scale*. So those constants won't really matter: if you're trying to figure out how to scale from 10 to 10000 items, who cares about the constant overhead B? Similarly, other concerns (see below) will certainly outweigh the weight of the multiplicative constant A.

So the real deal is f(n). If f grows not at all with n, e.g. f(n) = 1, then you'll scale fantastically - your running time will always just be A + B. If f grows linearly with n, i.e. f(n) = n, your running time will scale pretty much as well as can be expected - if your users are waiting 10 ns for 10 elements, they'll wait 10000 ns for 10000 elements (ignoring the additive constant). But if it grows faster, like n^2, then you're in trouble; things will start slowing down way too much when you get larger collections. f(n) = n log(n) is a good compromise, usually: your operation can't be so simple as to give linear scaling, but you've managed to cut things down such that it'll scale much better than f(n) = n^2.

Practically, here are some good examples:

- O(n): retrieving an element from a linked list. Here, A = 0.5, because on average you'll have to go through half of the linked list before you find the element you're looking for.
- O(n^2): various "dumb" sorting algorithms. Because generally their strategy involves, for each element (n), looking at all the other elements (so times another n, giving n^2), then positioning yourself in the right place.
- O(n log(n)): various "smart" sorting algorithms. It turns out that you only need to look at, say, 10 elements in a 10^10-element collection to intelligently sort yourself relative to *everyone* else in the collection. Because everyone else is *also* going to look at 10 elements, and the emergent behavior is orchestrated just right so that this is enough to produce a sorted list.
- O(n!): an algorithm that "tries everything," since there are (proportional to) n! possible combinations of n elements that might solve a given problem. So it just loops through all such combinations, tries them, then stops whenever it succeeds.

## @Jules 2009-01-27 00:07:42

Nit: `O(f(n))` means that it's less than or equal to `A f(n) + B`.

## @yogman 2009-01-27 00:55:17

Think of it as stacking lego blocks (n) vertically and jumping over them.

O(1) means at each step, you do nothing. The height stays the same.

O(n) means at each step, you stack c1 blocks, where c1 is a constant.

O(n^2) means at each step, you stack c2 x n blocks, where c2 is a constant, and n is the number of stacked blocks.

O(nlogn) means at each step, you stack c3 x n x log n blocks, where c3 is a constant, and n is the number of stacked blocks.

## @Loren Pechtel 2009-01-27 00:47:52

One thing that hasn't been touched on yet for some reason:

When you see algorithms with things like O(2^n) or O(n^3) or other nasty values it often means you're going to have to accept an imperfect answer to your problem in order to get acceptable performance.

Correct solutions that blow up like this are common when dealing with optimization problems. A nearly-correct answer delivered in a reasonable timeframe is better than a correct answer delivered long after the machine has decayed to dust.

Consider chess: I don't know exactly what the correct solution is considered to be but it's probably something like O(n^50) or even worse. It is theoretically impossible for any computer to actually calculate the correct answer--even if you use every particle in the universe as a computing element performing an operation in the minimum possible time for the life of the universe you still have a lot of zeros left. (Whether a quantum computer can solve it is another matter.)

## @Jason S 2009-01-27 00:17:12

The way I think about it, is you have the task of cleaning up a problem caused by some evil villain V who picks N, and you have to estimate out how much longer it's going to take to finish your problem when he increases N.

O(1) -> increasing N really doesn't make any difference at all

O(log(N)) -> every time V doubles N, you have to spend an extra amount of time T to complete the task. V doubles N again, and you spend the same amount.

O(N) -> every time V doubles N, you spend twice as much time.

O(N^2) -> every time V doubles N, you spend 4x as much time. (it's not fair!!!)

O(N log(N)) -> every time V doubles N, you spend twice as much time plus a little more.

These are bounds of an algorithm; computer scientists want to describe how long it is going to take for large values of N. (which gets important when you are factoring numbers that are used in cryptography -- if the computers speed up by a factor of 10, how many more bits do you have to use to ensure it will still take them 100 years to break your encryption and not just 1 year?)

Some of the bounds can have weird expressions if it makes a difference to the people involved. I've seen stuff like O(N log(N) log(log(N))) somewhere in Knuth's Art of Computer Programming for some algorithms. (can't remember which one off the top of my head)

## @HenryR 2008-09-20 16:55:50

Just to respond to the couple of comments on my above post:

@Domenic - I'm on this site, and I care. Not for pedantry's sake, but because we - as programmers - typically care about precision. Using O( ) notation incorrectly in the style that some have done here renders it kind of meaningless; we may just as well say something takes n^2 units of time as O( n^2 ) under the conventions used here. Using O( ) adds nothing. It's not just a small discrepancy between common usage and mathematical precision that I'm talking about; it's the difference between it being meaningful and it not. I know many, many excellent programmers who use these terms precisely. Saying 'oh, we're programmers therefore we don't care' cheapens the whole enterprise.

@onebyone - Well, not really, although I take your point. It's not O(1) for arbitrarily large n, which is kind of the definition of O( ). It just goes to show that O( ) has limited applicability for bounded n, where we would rather actually talk about the number of steps taken than about a bound on that number.

Ok - there are some very good answers here but almost all of them seem to make the same mistake and it's one that is pervading common usage.

Informally, we write that f(n) = O( g(n) ) if, up to a scaling factor and for all n larger than some n0, g(n) is *larger* than f(n). That is, f(n) *grows no quicker* than, or is *bounded from above* by, g(n). This tells us nothing about how fast f(n) grows, save for the fact that it is guaranteed not to be any worse than g(n).

A concrete example: n = O( 2^n ). We all know that n grows much less quickly than 2^n, so that entitles us to say that it is bounded from above by the exponential function. There is a lot of room between n and 2^n, so it's not a very *tight* bound, but it's still a legitimate bound.

Why do we (computer scientists) use bounds rather than being exact? Because (a) bounds are often easier to prove and (b) it gives us a short-hand to express properties of algorithms. If I say that my new algorithm is O(n.log n), that means that in the worst case its run-time will be bounded from above by n.log n on n inputs, for large enough n (although see my comments below on when I might not mean worst-case).

If instead we want to say that a function grows exactly as quickly as some other function, we use *theta* to make that point (I'll write T( f(n) ) to mean \Theta of f(n) in markdown). T( g(n) ) is shorthand for being bounded from *above and below* by g(n), again up to a scaling factor and asymptotically.

That is, f(n) = T( g(n) ) <=> f(n) = O(g(n)) and g(n) = O(f(n)). In our example, we can see that n != T( 2^n ) because 2^n != O(n).

Why get concerned about this? Because in your question you write 'would someone have to smoke crack to write an O(x!)?' The answer is no - because basically everything you write will be bounded from above by the factorial function. The run time of quicksort is O(n!) - it's just not a tight bound.

There's also another dimension of subtlety here. Typically we are talking about the *worst case input* when we use O( g(n) ) notation, so that we are making a compound statement: in the worst case, the running time will not be any worse than that of an algorithm that takes g(n) steps, again modulo scaling and for large enough n. But sometimes we want to talk about the running time of the *average* and even *best* cases.

Vanilla quicksort is, as ever, a good example. It's T( n^2 ) in the worst case (it will actually take at least n^2 steps, but not significantly more), but T(n.log n) in the average case, which is to say the expected number of steps is proportional to n.log n. In the best case it is also T(n.log n) - but you could improve on that by, for example, checking if the array was already sorted, in which case the best case running time would be T( n ).

How does this relate to your question about the practical realisations of these bounds? Well, unfortunately, O( ) notation hides constants which real-world implementations have to deal with. So although we can say that, for example, for a T(n^2) operation we have to visit every possible pair of elements, we don't know how many times we have to visit them (except that it's not a function of n). So we could have to visit every pair 10 times, or 10^10 times, and the T(n^2) statement makes no distinction. Lower order functions are also hidden - we could have to visit every pair of elements once, and every individual element 100 times, because n^2 + 100n = T(n^2). The idea behind O( ) notation is that for large enough n, this doesn't matter at all because n^2 gets so much larger than 100n that we don't even notice the impact of 100n on the running time. However, we often deal with 'sufficiently small' n such that constant factors and so on make a real, significant difference.

For example, quicksort (average cost T(n.log n)) and heapsort (average cost T(n.log n)) are both sorting algorithms with the same average cost - yet quicksort is typically much faster than heapsort. This is because heapsort does a few more comparisons per element than quicksort.

This is not to say that O( ) notation is useless, just imprecise. It's quite a blunt tool to wield for small n.

(As a final note to this treatise, remember that O( ) notation just describes the growth of any function - it doesn't necessarily have to be time, it could be memory, messages exchanged in a distributed system or number of CPUs required for a parallel algorithm.)

## @Domenic 2008-09-20 15:18:22

On a programming site, we explain big-O as how programmers use it. Mathematically, of course, that's not the correct way, but nobody (on this site) cares. :)

## @Erik Forbes 2009-01-27 00:14:34

... I care... (Math major)

## @iandisme 2010-08-17 20:02:48

+1 for the asymptotic upper-bound bit. None of the other popular answers seemed to touch on that.

## @Steve Jessop 2013-01-16 11:04:02

I care too. Most of the answers here say that O(n^2) means "proportional to n^2". This is an abuse of notation. One could argue that by persistently abusing it, programmers have redefined Big-O to mean the same as Big-Theta. I feel that this is unfair to programmers' *potential* to understand what they're talking about, even if it accurately reflects the *current* ignorance of the average code monkey ;-)

Most Jon Bentley books (e.g. *Programming Pearls*) cover such stuff in a really pragmatic manner. A talk he gave includes one such analysis of quicksort.

While not entirely relevant to the question, Knuth came up with an interesting idea: teaching Big-O notation in high school calculus classes, though I find this idea quite eccentric.

## @archbishop 2008-09-20 06:09:50

I like don neufeld's answer, but I think I can add something about O(n log n).

An algorithm which uses a simple divide and conquer strategy is probably going to be O(log n). The simplest example of this is finding something in a sorted list. You don't start at the beginning and scan for it. You go to the middle, you decide whether you should then go backwards or forwards, jump halfway to the last place you looked, and repeat this until you find the item you're looking for.

If you look at the quicksort or mergesort algorithms, you will see that they both take the approach of dividing the list to be sorted in half, sorting each half (using the same algorithm, recursively), and then recombining the two halves. This sort of *recursive* divide and conquer strategy will be O(n log n).

If you think about it carefully, you'll see that quicksort does an O(n) partitioning pass on the whole n items, then O(n) partitioning twice on n/2 items, then 4 times on n/4 items, etc., until you get to n partitions of 1 item (which is degenerate). The number of times you divide n in half to get to 1 is approximately log n, and each level is O(n) work in total, so recursive divide and conquer is O(n log n). Mergesort builds the other way, starting with n recombinations of 1 item and finishing with 1 recombination of n items, where the recombination of two sorted lists is O(n).
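The level-by-level accounting described above can be seen in a merge sort that counts its comparisons; with 8 items there are about log2(8) = 3 levels, each doing O(n) merging work:

```python
def merge_sort(items, counter):
    """Sort by splitting in half, recursing, then merging.
    counter[0] accumulates comparisons: O(n) work per level, ~log2(n) levels."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid], counter)
    right = merge_sort(items[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

counter = [0]
print(merge_sort([5, 3, 8, 1, 9, 2, 7, 4], counter))  # [1, 2, 3, 4, 5, 7, 8, 9]
print(counter[0])  # comparisons stay under n * log2(n) = 24
```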

As for smoking crack to write an O(n!) algorithm: you are, unless you have no choice. The traveling salesman problem mentioned above is believed to be one such problem.

## @Steve Jessop 2008-09-20 11:52:13

Quicksort can't guarantee that it partitions equally. In the worst case, it repeatedly divides into partitions of size (k-2) and (1), so it's O(n^2). In the most naive qsort, the worst case is sorted data! A suitably tweaked algorithm makes it hard to construct the worst case, though.

## @Mike Dunlavey 2010-03-25 18:35:14

My wrinkle on what you've said is 1) your explanation of searching is good (except there needs to be a better word than "log" for 8-year-olds), and 2) I just say sorting is repeated search - for each of n items, you need to search for where it goes and stick it in.

## @Kevin Conner 2008-09-20 05:20:59

To understand O(n log n), remember that log n means log-base-2 of n. Then look at each part:

O(n) is, more or less, when you operate on each item in the set.

O(log n) is when the number of operations is the same as the exponent to which you raise 2, to get the number of items. A binary search, for instance, has to cut the set in half log n times.

O(n log n) is a combination – you're doing something along the lines of a binary search for each item in the set. Efficient sorts often operate by doing one loop per item, and in each loop doing a good search to find the right place to put the item or group in question. Hence n * log n.
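A binary search that counts its probes shows the log-base-2 relationship directly; for 1024 items it needs at most about 10 of them:

```python
def binary_search(sorted_items, target):
    """Return (index or None, number of probes); probes grow like log2(n)."""
    lo, hi, probes = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, probes
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, probes

idx, probes = binary_search(list(range(1024)), 777)
print(idx, probes)  # found at index 777 within log2(1024) + 1 = 11 probes
```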

## @paxdiablo 2008-09-20 06:11:47

Is that right? I always thought an unadorned log meant log to base e, at least it does in maths. Log to base 2 would be written as log2 n (with that 2 subscripted of course, something I don't know how to do in comment fields yet).

## @Steve Jessop 2008-09-20 11:45:59

For this purpose it doesn't matter, since an algorithm is O(log2(n)) iff it's O(log10(n)) etc

## @Svish 2009-03-25 08:18:51

as far as I remember: log is to base 10. ln is to base e.

## @Kevin Conner 2009-03-27 02:45:45

In mathematical notation, "log" means log base 10. In computer science I've often seen it assumed to mean log base 2.

## @Robert P 2010-01-13 21:20:03

Well, it really doesn't matter too much what the base is; with Big-O notation you generally factor out all the constants. It's the pattern of the algorithm, not the particular base that matters.

## @Statement 2008-09-20 05:12:23

log(n) means logarithmic growth. An example would be divide and conquer algorithms. If you have 1000 sorted numbers in an array (e.g. 3, 10, 34, 244, 1203, ...) and want to search for a number in the list (find its position), you could start by checking the value of the number at index 500. If it is lower than what you seek, jump to 750. If it is higher than what you seek, jump to 250. Then you repeat the process until you find your value (and key). Every time we jump half the search space, we can cull away testing many other values, since we know the number 3004 can't come after the number 5000 (remember, it is a sorted list).

n log(n) then means n * log(n).

## @Esteban Araya 2008-09-20 05:06:59

No, an O(n) algorithm does not mean it will perform an operation on each element. Big-O notation gives you a way to talk about the "speed" of your algorithm independent of your actual machine.

O(n) means that the time your algorithm takes grows linearly as your input increases. O(n^2) means that the time your algorithm takes grows as the square of your input size. And so forth.