By Jeffrey Lott


2009-09-01 20:22:38

What is the difference between Big-O notation O(n) and Little-O notation o(n)?


@agorenst 2009-09-02 04:53:42

I find that when I can't conceptually grasp something, thinking about why one would use X is helpful to understand X. (Not to say you haven't tried that; I'm just setting the stage.)

[stuff you know]A common way to classify algorithms is by runtime, and by citing the big-Oh complexity of an algorithm, you can get a pretty good estimation of which one is "better" -- whichever has the "smallest" function in the O! Even in the real world, O(N) is "better" than O(N²), barring silly things like super-massive constants and the like.[/stuff you know]

Let's say there's some algorithm that runs in O(N). Pretty good, huh? But let's say you (you brilliant person, you) come up with an algorithm that runs in O(N / log log log log N). YAY! It's faster! But you'd feel silly writing that over and over again when you're writing your thesis. So you write it once, and you can say "In this paper, I have proven that algorithm X, previously computable in time O(N), is in fact computable in o(N)."

Thus, everyone knows that your algorithm is faster --- by how much is unclear, but they know it's faster. Theoretically. :)

@Craig Gidney 2009-09-01 20:50:27

Big-O is to little-o as ≤ is to <. Big-O is an inclusive upper bound, while little-o is a strict upper bound.

For example, the function f(n) = 3n is:

  • in O(n²), o(n²), and O(n)
  • not in O(lg n), o(lg n), or o(n)

Analogously, the number 1 is:

  • ≤ 2, < 2, and ≤ 1
  • not ≤ 0, < 0, or < 1

Here's a table, showing the general idea:

  Notation          Analogy    Limit definition
  f(n) ∈ o(g(n))    f "<" g    lim(n→∞) f(n)/g(n) = 0
  f(n) ∈ O(g(n))    f "≤" g    lim(n→∞) f(n)/g(n) < ∞
  f(n) ∈ Θ(g(n))    f "=" g    0 < lim(n→∞) f(n)/g(n) < ∞
  f(n) ∈ Ω(g(n))    f "≥" g    lim(n→∞) f(n)/g(n) > 0
  f(n) ∈ ω(g(n))    f ">" g    lim(n→∞) f(n)/g(n) = ∞

(Note: the table is a good guide, but its limit definitions should be in terms of the limit superior (lim sup) instead of the ordinary limit. For example, 3 + (n mod 2) oscillates between 3 and 4 forever. It's in O(1) despite not having an ordinary limit, because it still has a lim sup: 4.)
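If an empirical sanity check helps, here is a minimal Python sketch of the ratio test (the trend() helper is an illustrative name, not a proof, and sampling at two points can be fooled by oscillating functions like the one in the note above). It samples f(n)/g(n) at two large inputs and guesses which row of the table applies:

    import math

    # Sketch only: sample f(n)/g(n) at two large inputs and see whether
    # the ratio shrinks, stays roughly put, or blows up.
    def trend(f, g, n=10**6):
        r1, r2 = f(n) / g(n), f(100 * n) / g(100 * n)
        if r2 < r1 / 10:
            return "ratio -> 0:   suggests f in o(g), hence also O(g)"
        if r2 > r1 * 10:
            return "ratio -> inf: suggests f in omega(g), hence also Omega(g)"
        return "ratio ~ %.3g: suggests f in Theta(g)" % r2

    f = lambda n: 3 * n
    print(trend(f, lambda n: n * n))        # 3n vs n^2:  f in o(n^2)
    print(trend(f, lambda n: n))            # 3n vs n:    f in Theta(n), so O(n) but not o(n)
    print(trend(f, lambda n: math.log(n)))  # 3n vs lg n: f not in O(lg n)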

I recommend memorizing how the Big-O notation converts to asymptotic comparisons. The comparisons are easier to remember, but less flexible because you can't say things like n^O(1) = P.

@Masked Man 2014-01-21 05:08:55

I have one question: what's the difference between lines 3 and 4 (the limit definitions column)? Could you please show me one example where 4 holds (lim > 0), but not 3?

@Masked Man 2014-01-21 05:16:48

Oh, I figured it out. Big Omega is for lim > 0, Big Oh is for lim < infinity, Big Theta is when both conditions hold, meaning 0 < lim < infinity.

@user2963623 2015-08-22 07:16:16

For f ∈ Ω(g), shouldn't the limit at infinity evaluate to ≥ 1? Similarly for f ∈ O(g), 1 ≤ c < ∞?

@Craig Gidney 2015-08-22 13:52:49

@user2963623 No, because finite values strictly above 0, including values between 0 and 1, correspond to "same asymptotic complexity but different constant factors". If you omit values below 1, you have a cutoff in constant-factor space instead of in asymptotic-complexity space.

@user2963623 2015-08-22 20:22:29

I get the intuition of what you're saying. So, does f ∈ O(g) (or Ω(g)) imply that g approximates the growth rate of complexity with input size rather than strictly a bound?

@isarandi 2018-02-27 14:31:50

So is f ∈ o(g) equivalent to (f ∈ O(g) AND g NOT ∈ O(f))?

@Craig Gidney 2018-02-27 18:16:00

@isarandi No, you need some extra notion of the functions being well behaved for that. When some of the limits don't exist, your expression can fail. For example, g(x) = x and f(x) = x*(1 + (-1)^x). It is not the case that f is asymptotically less-than g because f keeps bouncing up to g. It is the case that f is asymptotically at-most g because f doesn't bounce asymptotically higher than g. It is not the case that g is asymptotically at-most f because f keeps bouncing down to 0. So (f ∈ o(g)) is false while (f ∈ O(g) AND g NOT ∈ O(f)) is true.
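(A quick numeric illustration of that counterexample, assuming integer x so that (-1)^x alternates: the ratio f/g bounces between 0 and 2 forever, so it has no limit, yet f ≤ 2g everywhere.)

    # f(x)/g(x) oscillates between 0 (odd x) and 2 (even x):
    # f is in O(g) since f <= 2g, but f is not in o(g), and g is not in O(f).
    g = lambda x: x
    f = lambda x: x * (1 + (-1) ** x)
    print([f(x) / g(x) for x in range(1, 9)])  # [0.0, 2.0, 0.0, 2.0, ...]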

@isarandi 2018-02-28 12:43:03

@CraigGidney Thanks! Is it true for monotonically nondecreasing functions?

@Craig Gidney 2018-03-01 03:50:56

@isarandi No. For example, make a function by drawing a curve that bounces between y=x and y=log(x). Do the x-to-log(x) bounce horizontally rightward, and the log(x)-to-x bounce vertically upward.

@bigworld12 2018-10-31 22:43:32

Here is a better definition from MIT: web.mit.edu/broder/Public/asymptotics-cheatsheet.pdf

@ubadub 2018-12-18 07:03:51

What does n^O(1) = P even mean, given O(1) is a set? How do you raise a variable to a power that is a set?

@Craig Gidney 2018-12-18 18:24:52

@ubadub It means that the exponent is asymptotically constant with respect to the size of the problem. The running time has to look like n^c for some constant c.

@ubadub 2018-12-18 22:59:39

@CraigGidney I figured that out (since you said it's equal to P), but I am not sure how that notation is valid/consistent if O(1) is generally defined as a set

@Craig Gidney 2018-12-18 23:04:16

@ubadub You broadcast the exponentiation operation over the set. It's loose notation.

@Tyler McHenry 2009-09-01 20:32:32

f ∈ O(g) says, essentially

For at least one choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.

Note that O(g) is the set of all functions for which this condition holds.

f ∈ o(g) says, essentially

For every choice of a constant k > 0, you can find a constant a such that the inequality 0 <= f(x) < k g(x) holds for all x > a.

Once again, note that o(g) is a set.

In Big-O, it is only necessary that you find a particular multiplier k for which the inequality holds beyond some minimum x.

In Little-o, the inequality must hold no matter how small you make k (as long as it is positive): for every such k, there is some minimum x after which the inequality holds, and that minimum may depend on k.
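To make the quantifier difference concrete, here is a small Python sketch (looks_big_O and looks_little_o are illustrative names, and a finite numeric search can suggest, but never prove, an asymptotic claim). Big-O needs some single k to work; little-o demands that every k work, however small:

    # Sketch only: finite numeric stand-ins for the two definitions.
    def eventually_below(f, g, k, xs):
        """Check 0 <= f(x) <= k*g(x) on the tail of the sample points xs."""
        tail = [x for x in xs if x >= xs[len(xs) // 2]]  # crude "for all x > a"
        return all(0 <= f(x) <= k * g(x) for x in tail)

    xs = [10 ** e for e in range(1, 8)]  # sample points 10, 100, ..., 10^7

    def looks_big_O(f, g):
        # Big-O: AT LEAST ONE k > 0 works.
        return any(eventually_below(f, g, k, xs) for k in (1, 10, 1000))

    def looks_little_o(f, g):
        # little-o: EVERY k > 0 must work; we test a few shrinking ones.
        return all(eventually_below(f, g, k, xs) for k in (1, 0.1, 0.001))

    f = lambda x: 3 * x
    print(looks_big_O(f, lambda x: x))         # True:  3x in O(x)
    print(looks_little_o(f, lambda x: x))      # False: 3x not in o(x); k = 0.1 fails
    print(looks_little_o(f, lambda x: x * x))  # True:  consistent with 3x in o(x^2)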

These both describe upper bounds, although somewhat counter-intuitively, Little-o is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).

One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false. Therefore, Big-O can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.

More specifically, if the value of g(x) is a constant multiple of the value of f(x), then f ∈ O(g) is true. This is why you can drop constants when working with big-O notation.

However, for f ∈ o(g) to be true, g must grow strictly faster than f. Loosely speaking, g must contain a higher-order term (a higher power of x, for example), so the relative separation between f(x) and g(x) must actually get larger as x gets larger.

To use purely math examples (rather than referring to algorithms):

The following are true for Big-O, but would not be true if you used little-o:

  • x² ∈ O(x²)
  • x² ∈ O(x² + x)
  • x² ∈ O(200 * x²)

The following are true for little-o:

  • x² ∈ o(x³)
  • x² ∈ o(x!)
  • ln(x) ∈ o(x)

Note that if f ∈ o(g), this implies f ∈ O(g). For example, x² ∈ o(x³), so it is also true that x² ∈ O(x³). (Again, think of O as <= and o as <.)
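If you'd rather verify memberships like these mechanically than by eye, the ratio-limit test works: lim f(x)/g(x) = 0 means little-o, and any finite limit means Big-O. A short sketch, assuming SymPy is available:

    # Sketch: check the examples above via lim x->oo of f(x)/g(x).
    from sympy import symbols, limit, log, factorial, oo

    x = symbols('x', positive=True)

    print(limit(x**2 / (200 * x**2), x, oo))  # 1/200: finite and nonzero, so O but not o
    print(limit(x**2 / x**3, x, oo))          # 0: x^2 in o(x^3)
    print(limit(log(x) / x, x, oo))           # 0: ln(x) in o(x)
    print(limit(x**2 / factorial(x), x, oo))  # 0: x^2 in o(x!)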

@Phil 2009-09-01 20:38:43

Yes: the difference is in whether the two functions may be asymptotically the same. Intuitively I like to think of big-O meaning "grows no faster than" (i.e. grows at the same rate or slower) and little-o meaning "grows strictly slower than".

@cloudsurfin 2014-11-09 04:42:04

Copy this to Wikipedia? This is much better than what's there.

@S A 2015-12-28 22:01:55

@TylerMcHenry For little-o, would it be true that 2^n = o(3^n)?

@Tyler McHenry 2016-01-17 05:02:43

@SA Yes. That's a trickier case where the simpler rule I gave about "higher powers of x" isn't obviously applicable. But if you look at the more rigorous limit definitions given in Strilanc's answer below, what you want to know is if lim n->inf (2^n / 3^n) = 0. Since (2^n / 3^n) = (2/3)^n and since for any 0 <= x < 1, lim n->inf (x^n) = 0, it is true that 2^n = o(3^n).

@Emerald Weapon 2016-10-26 20:02:51

Your definition does not seem correct to me. For example, your big-O definition implies x ~ O(x^2) (with, e.g., a = k = 1).

@Cubic 2017-04-02 10:18:51

@EmeraldWeapon The definition given here doesn't imply x ~ O(x^2); it says x ∈ O(x^2), and O(x) is a subset of O(x^2).

@Eric Duminil 2018-01-11 10:50:33

"f ∈ o(g) means that f's asymptotic growth is strictly slower than g's" With this definition, x ∈ o(2*x).

@Patch 2018-09-17 13:53:56

I had never thought of these two conditions as being so similar before, with the only difference being a "for all" versus an "exists". Thanks for that.

@GA1 2019-05-15 10:30:27

Be careful with the little-o wording: it's not "there is an a that works for every k", it's "for every k there is an a that works", and that a may depend on k.

@Filippo Costa 2020-03-04 13:20:00

"In Little-o, it must be that there is a minimum x after which the inequality holds no matter how small you make k, as long as it is not negative or zero." no, this is incorrect.

@o0omycomputero0o 2020-03-17 16:17:33

For remembering: the stricter condition gives the smaller set, so o(g) is a "smaller circle" than O(g).
