
Time Complexity

Authors: Darren Yao, Benjamin Qi

Contributors: Ryan Chou, Qi Wang

Measuring the number of operations an algorithm performs.

Resources

  • IUSACO: this module is based on the book
  • CPH: Intro and examples
  • PAPS: More in-depth. In particular, 5.2 gives a formal definition of Big O.
  • YouTube: If you prefer watching a video instead


In programming contests, your program needs to finish running within a certain timeframe in order to receive credit. For USACO, this limit is $2$ seconds for C++ submissions, and $4$ seconds for Java/Python submissions. A conservative estimate for the number of operations the grading server can handle per second is $10^8$, but it could be closer to $5 \cdot 10^8$ given good constant factors.
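
As a quick sanity check: if $n \le 10^5$, an $\mathcal{O}(n^2)$ algorithm can perform on the order of $10^{10}$ operations, far beyond what the grading server can handle in a few seconds, while an $\mathcal{O}(n \log n)$ algorithm performs only about $1.7 \cdot 10^6$ operations.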

Complexity Calculations

We want a method to calculate how many operations it takes to run each algorithm, in terms of the input size $n$. Fortunately, this can be done relatively easily using Big O notation, which expresses worst-case time complexity as a function of $n$ as $n$ gets arbitrarily large. Complexity is an upper bound for the number of steps an algorithm requires as a function of the input size. In Big O notation, we denote the complexity of a function as $\mathcal{O}(f(n))$, where constant factors and lower-order terms are generally omitted from $f(n)$. Let's see some examples of how this works.

The following code is $\mathcal{O}(1)$, because it executes a constant number of operations.

C++

int a = 5;
int b = 7;
int c = 4;
int d = a + b + c + 153;

Java

int a = 5;
int b = 7;
int c = 4;
int d = a + b + c + 153;

Python

a = 5
b = 7
c = 4
d = a + b + c + 153

Input and output operations are also assumed to be $\mathcal{O}(1)$. In the following examples, we assume that the code inside the loops is $\mathcal{O}(1)$.

The time complexity of a loop is the number of iterations it runs. For example, the following code examples are both $\mathcal{O}(n)$.

C++

for (int i = 1; i <= n; i++) {
    // constant time code here
}
int i = 0;
while (i < n) {
    // constant time code here
    i++;
}

Java

for (int i = 1; i <= n; i++) {
    // constant time code here
}
int i = 0;
while (i < n) {
    // constant time code here
    i++;
}

Python

for i in range(1, n + 1):
    pass  # constant time code here
i = 0
while i < n:
    # constant time code here
    i += 1

Because we ignore constant factors and lower-order terms, the following examples are also $\mathcal{O}(n)$:

C++

for (int i = 1; i <= 5 * n + 17; i++) {
    // constant time code here
}
for (int i = 1; i <= n + 457737; i++) {
    // constant time code here
}

Java

for (int i = 1; i <= 5 * n + 17; i++) {
    // constant time code here
}
for (int i = 1; i <= n + 457737; i++) {
    // constant time code here
}

Python

for i in range(5 * n + 17):
    pass  # constant time code here
for i in range(n + 457737):
    pass  # constant time code here

We can find the time complexity of nested loops by multiplying together the time complexities of each loop. This example is $\mathcal{O}(nm)$, because the outer loop runs $\mathcal{O}(n)$ iterations and the inner loop $\mathcal{O}(m)$.

C++

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= m; j++) {
        // constant time code here
    }
}

Java

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= m; j++) {
        // constant time code here
    }
}

Python

for i in range(n):
    for j in range(m):
        pass  # constant time code here

In this example, the outer loop runs $\mathcal{O}(n)$ iterations, and the inner loop runs anywhere between $1$ and $n$ iterations (which is a maximum of $n$). Since Big O notation calculates worst-case time complexity, we treat the inner loop as a factor of $n$. Thus, this code is $\mathcal{O}(n^2)$.

C++

for (int i = 1; i <= n; i++) {
    for (int j = i; j <= n; j++) {
        // constant time code here
    }
}

Java

for (int i = 1; i <= n; i++) {
    for (int j = i; j <= n; j++) {
        // constant time code here
    }
}

Python

for i in range(n):
    for j in range(i, n):
        pass  # constant time code here
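
More precisely, the inner loop runs $n + (n - 1) + \dots + 1 = \frac{n(n+1)}{2}$ times in total, which is $\mathcal{O}(n^2)$ once the constant factor $\frac{1}{2}$ and the lower-order term $\frac{n}{2}$ are dropped.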

If an algorithm contains multiple blocks, then its time complexity is the worst time complexity out of any block. For example, the following code is $\mathcal{O}(n^2)$.

C++

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // constant time code here
    }
}
for (int i = 1; i <= n + 58834; i++) {
    // more constant time code here
}

Java

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // constant time code here
    }
}
for (int i = 1; i <= n + 58834; i++) {
    // more constant time code here
}

Python

for i in range(n):
    for j in range(n):
        pass  # constant time code here
for i in range(n + 58834):
    pass  # more constant time code here

The following code is $\mathcal{O}(n^2 + m)$, because it consists of two blocks of complexity $\mathcal{O}(n^2)$ and $\mathcal{O}(m)$, and neither is a lower-order term with respect to the other.

C++

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // constant time code here
    }
}
for (int i = 1; i <= m; i++) {
    // more constant time code here
}

Java

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        // constant time code here
    }
}
for (int i = 1; i <= m; i++) {
    // more constant time code here
}

Python

for i in range(n):
    for j in range(n):
        pass  # constant time code here
for i in range(m):
    pass  # more constant time code here

Common Complexities and Constraints

Complexity factors that come from some common algorithms and data structures are as follows:

Warning!

Don't worry if you don't recognize most of these! They will all be introduced later.

  • Mathematical formulas that just calculate an answer: $\mathcal{O}(1)$
  • Binary search: $\mathcal{O}(\log n)$
  • Sorted set/map or priority queue: $\mathcal{O}(\log n)$ per operation
  • Prime factorization of an integer, or checking primality or compositeness of an integer naively: $\mathcal{O}(\sqrt{n})$ (see the sketch after this list)
  • Reading in $n$ items of input: $\mathcal{O}(n)$
  • Iterating through an array or a list of $n$ elements: $\mathcal{O}(n)$
  • Sorting: usually $\mathcal{O}(n \log n)$ for default sorting algorithms (mergesort, Collections.sort, Arrays.sort)
  • Java Quicksort Arrays.sort function on primitives: $\mathcal{O}(n^2)$
  • Iterating through all subsets of size $k$ of the input elements: $\mathcal{O}(n^k)$. For example, iterating through all triplets is $\mathcal{O}(n^3)$.
  • Iterating through all subsets: $\mathcal{O}(2^n)$
  • Iterating through all permutations: $\mathcal{O}(n!)$
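
To make one of these concrete: a naive primality check only tests divisors up to $\sqrt{n}$, since any composite number must have a divisor no larger than its square root. Here is a minimal sketch (the function name is our own):

C++

#include <iostream>
using namespace std;

// Naive primality check in O(sqrt(n)): if n is composite, it has a
// divisor d with d * d <= n, so testing up to sqrt(n) suffices.
bool is_prime(long long n) {
    if (n < 2) { return false; }
    for (long long d = 2; d * d <= n; d++) {
        if (n % d == 0) { return false; }
    }
    return true;
}

int main() { cout << is_prime(1000003) << endl; }  // prints 1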

Here are conservative upper bounds on the value of $n$ for each time complexity. You might get away with more than this, but this should allow you to quickly check whether an algorithm is viable.

| $n$ | Possible complexities |
| --- | --- |
| $n \le 10$ | $\mathcal{O}(n!)$, $\mathcal{O}(n^7)$, $\mathcal{O}(n^6)$ |
| $n \le 20$ | $\mathcal{O}(2^n \cdot n)$, $\mathcal{O}(n^5)$ |
| $n \le 80$ | $\mathcal{O}(n^4)$ |
| $n \le 400$ | $\mathcal{O}(n^3)$ |
| $n \le 7500$ | $\mathcal{O}(n^2)$ |
| $n \le 7 \cdot 10^4$ | $\mathcal{O}(n \sqrt{n})$ |
| $n \le 5 \cdot 10^5$ | $\mathcal{O}(n \log n)$ |
| $n \le 5 \cdot 10^6$ | $\mathcal{O}(n)$ |
| $n \le 10^{18}$ | $\mathcal{O}(\log^2 n)$, $\mathcal{O}(\log n)$, $\mathcal{O}(1)$ |
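
For instance, if $n \le 400$, an $\mathcal{O}(n^3)$ algorithm performs on the order of $400^3 = 6.4 \cdot 10^7$ operations, which fits within the estimate of $10^8$ operations per second from above.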

Warning!

A significant portion of Bronze problems will have $n \le 100$. This doesn't give much of a hint regarding the intended time complexity. The intended solution could still be $\mathcal{O}(n)$!

Constant Factor

Constant factor refers to the idea that different operations with the same complexity take slightly different amounts of time to run. For example, three addition operations take a bit longer than a single addition operation. Another example is that although binary search on an array and insertion into an ordered set are both $\mathcal{O}(\log n)$, binary search is noticeably faster.

Constant factor is entirely ignored in Big O notation. This is fine most of the time, but if the time limit is particularly tight, you may receive time limit exceeded (TLE) even with the intended complexity. When this happens, it is important to keep the constant factor in mind. For example, a piece of code that iterates through all ordered triplets in $\mathcal{O}(n^3)$ time might be sped up by a factor of $6$ if we only need to iterate through all unordered triplets, as the sketch below shows.
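
As an illustration (the value of $n$ here is arbitrary, chosen just for demonstration), both loops below are $\mathcal{O}(n^3)$, but the second performs roughly six times fewer iterations:

C++

#include <iostream>
using namespace std;

int main() {
    int n = 500;  // arbitrary input size for demonstration
    long long ordered = 0, unordered_cnt = 0;

    // All ordered triplets (i, j, k): exactly n^3 iterations.
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++) ordered++;

    // All unordered triplets i < j < k: about n^3 / 6 iterations.
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            for (int k = j + 1; k < n; k++) unordered_cnt++;

    cout << ordered << " " << unordered_cnt << endl;  // 125000000 20708500
}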

For now, don't worry about optimizing constant factors -- just be aware of them.

Formal Definition of Big O notation

Let $f$ and $g$ be non-negative functions from $\mathbb{R}_{\ge 0}$ to $\mathbb{R}_{\ge 0}$. If there exist positive constants $n_0$ and $c$ such that $f(n) \le c \cdot g(n)$ whenever $n \ge n_0$, we say that $f(n) = \mathcal{O}(g(n))$.
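
For example, $f(n) = 5n + 17$ is $\mathcal{O}(n)$: choosing $c = 6$ and $n_0 = 17$ gives $5n + 17 \le 6n$ whenever $n \ge 17$.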

Therefore, a linear-time function is not only $\mathcal{O}(n)$ but also $\mathcal{O}(n/2)$, $\mathcal{O}(2n)$, $\mathcal{O}(n^2)$, $\mathcal{O}(2^n)$, $\mathcal{O}(n^n)$, and so on. However, we usually write the simplest of the tightest bounds, which for our linear function above is $\mathcal{O}(n)$.

Optional: P vs. NP

P refers to the class of problems that can be solved within polynomial time ($\mathcal{O}(n^2)$, $\mathcal{O}(n^3)$, $\mathcal{O}(n^{100})$, $\dots$). NP, short for nondeterministic polynomial time, is the set of problems with solutions that can be verified in polynomial time.

A common example of a problem in NP is a generalized version of Sudoku, where a solution is easily verifiable in polynomial time, but it is unknown whether a solution is computable in polynomial time. "P vs. NP" is the classic unsolved problem that asks whether every problem that can be verified in polynomial time can also be solved in polynomial time.

If you're interested in learning more about P vs. NP, check out this YouTube video.
