big-O notation: parenthesis for function calls, explicit multiplication #71167

Merged
merged 2 commits on Apr 18, 2020
Changes from 1 commit
27 changes: 13 additions & 14 deletions src/liballoc/collections/binary_heap.rs
@@ -1,10 +1,10 @@
//! A priority queue implemented with a binary heap.
//!
//! Insertion and popping the largest element have `O(log n)` time complexity.
//! Insertion and popping the largest element have `O(log(n))` time complexity.
//! Checking the largest element is `O(1)`. Converting a vector to a binary heap
//! can be done in-place, and has `O(n)` complexity. A binary heap can also be
//! converted to a sorted vector in-place, allowing it to be used for an `O(n
//! log n)` in-place heapsort.
//! converted to a sorted vector in-place, allowing it to be used for an `O(n * log(n))`
//! in-place heapsort.
//!
//! # Examples
//!
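For concreteness, a quick sketch of how the documented costs map onto the public `BinaryHeap` API (a minimal illustration, not part of this file's own examples):

```rust
use std::collections::BinaryHeap;

// O(n): heapify a vector in place.
let mut heap = BinaryHeap::from(vec![3, 1, 4, 1, 5, 9, 2, 6]);

heap.push(7);                      // expected O(1), worst case O(log(n))
assert_eq!(heap.peek(), Some(&9)); // O(1)
assert_eq!(heap.pop(), Some(9));   // O(log(n))

// O(n * log(n)): the in-place heapsort mentioned above.
let sorted = heap.into_sorted_vec();
assert_eq!(sorted, vec![1, 1, 2, 3, 4, 5, 6, 7]);
```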
@@ -233,9 +233,9 @@ use super::SpecExtend;
///
/// # Time complexity
///
/// | [push] | [pop] | [peek]/[peek\_mut] |
/// |--------|----------|--------------------|
/// | O(1)~ | O(log n) | O(1) |
/// | [push] | [pop] | [peek]/[peek\_mut] |
/// |--------|-----------|--------------------|
/// | O(1)~ | O(log(n)) | O(1) |
///
/// The value for `push` is an expected cost; the method documentation gives a
/// more detailed analysis.
@@ -398,7 +398,7 @@ impl<T: Ord> BinaryHeap<T> {
///
/// # Time complexity
///
/// Cost is O(1) in the worst case.
/// Cost is `O(1)` in the worst case.
#[stable(feature = "binary_heap_peek_mut", since = "1.12.0")]
pub fn peek_mut(&mut self) -> Option<PeekMut<'_, T>> {
if self.is_empty() { None } else { Some(PeekMut { heap: self, sift: true }) }
@@ -422,8 +422,7 @@ impl<T: Ord> BinaryHeap<T> {
///
/// # Time complexity
///
/// The worst case cost of `pop` on a heap containing *n* elements is O(log
/// n).
/// The worst case cost of `pop` on a heap containing *n* elements is `O(log(n))`.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn pop(&mut self) -> Option<T> {
self.data.pop().map(|mut item| {
@@ -456,15 +455,15 @@ impl<T: Ord> BinaryHeap<T> {
///
/// The expected cost of `push`, averaged over every possible ordering of
/// the elements being pushed, and over a sufficiently large number of
/// pushes, is O(1). This is the most meaningful cost metric when pushing
/// pushes, is `O(1)`. This is the most meaningful cost metric when pushing
/// elements that are *not* already in any sorted pattern.
///
/// The time complexity degrades if elements are pushed in predominantly
/// ascending order. In the worst case, elements are pushed in ascending
/// sorted order and the amortized cost per push is O(log n) against a heap
/// sorted order and the amortized cost per push is `O(log(n))` against a heap
/// containing *n* elements.
///
/// The worst case cost of a *single* call to `push` is O(n). The worst case
/// The worst case cost of a *single* call to `push` is `O(n)`. The worst case
/// occurs when capacity is exhausted and needs a resize. The resize cost
/// has been amortized in the previous figures.
#[stable(feature = "rust1", since = "1.0.0")]
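A hedged illustration of the resize caveat: pre-allocating avoids the `O(n)` worst case for a single `push`, while the amortized figures above stay the same either way.

```rust
use std::collections::BinaryHeap;

// Reserving capacity up front means no push below ever pays the O(n) resize;
// each push is expected O(1), worst case O(log(n)) for ascending input.
let mut heap = BinaryHeap::with_capacity(1_000);
for x in 0..1_000 {
    heap.push(x);
}
assert_eq!(heap.peek(), Some(&999));
```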
@@ -644,7 +643,7 @@ impl<T: Ord> BinaryHeap<T> {
/// The remaining elements will be removed on drop in heap order.
///
/// Note:
/// * `.drain_sorted()` is O(n lg n); much slower than `.drain()`.
/// * `.drain_sorted()` is `O(n * lg(n))`; much slower than `.drain()`.
Member Author (RalfJung):

I don't know what `lg` is supposed to mean here. Is it `log` with a typo, or is it supposed to indicate a particular base?

Reply:

This is the only place it is used here; I assume it is a typo.

Member:

https://www.math10.com/en/algebra/logarithm-log-ln-lg.html suggests that lg == log base 10.

https://mathworld.wolfram.com/Lg.html suggests it might mean log base 2.

Member Author (RalfJung), Apr 15, 2020:

Anyway, the difference between base 2 and base 10 is a constant factor, so in the context of big-O the base just doesn't matter... but then there are two places in the docs that explicitly give a base in big-O.
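For reference, the change-of-base identity behind that remark: for any fixed bases $a, b > 1$,

$$\log_b(n) = \frac{\log_a(n)}{\log_a(b)},$$

so $\log_2$, $\log_{10}$, and $\ln$ differ only by a constant factor, and $O(\log_b(n)) = O(\log(n))$.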

Member Author (RalfJung):

So, I changed this here to `log` and removed the two explicitly given bases.

/// You should use the latter for most cases.
///
/// # Examples
@@ -729,7 +728,7 @@ impl<T> BinaryHeap<T> {
///
/// # Time complexity
///
/// Cost is O(1) in the worst case.
/// Cost is `O(1)` in the worst case.
#[stable(feature = "rust1", since = "1.0.0")]
pub fn peek(&self) -> Option<&T> {
self.data.get(0)
2 changes: 1 addition & 1 deletion src/liballoc/collections/btree/map.rs
@@ -40,7 +40,7 @@ use UnderflowResult::*;
/// performance on *small* nodes of elements which are cheap to compare. However in the future we
/// would like to further explore choosing the optimal search strategy based on the choice of B,
/// and possibly other factors. Using linear search, searching for a random element is expected
/// to take O(B log<sub>B</sub>n) comparisons, which is generally worse than a BST. In practice,
/// to take O(B * log<sub>B</sub>(n)) comparisons, which is generally worse than a BST. In practice,
Member Author (RalfJung):

Elsewhere we are using log_2 to indicate the base, so this is inconsistent. But for now I decided not to change this.

/// however, performance is excellent.
///
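A rough way to see the `B * log_B(n)` figure (assuming uniformly random lookups and linear search within each node): the tree has about $\log_B(n)$ levels and linear search inspects on the order of $B$ keys per level, so the expected comparison count is

$$B \cdot \log_B(n) = \frac{B}{\log_2(B)} \cdot \log_2(n),$$

which for $B > 2$ carries a larger constant than the single comparison per level of a balanced BST; the benefit of a larger $B$ comes from cache behavior rather than comparison counts.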
/// It is a logic error for a key to be modified in such a way that the key's ordering relative to
20 changes: 10 additions & 10 deletions src/liballoc/collections/linked_list.rs
@@ -390,7 +390,7 @@ impl<T> LinkedList<T> {
/// This reuses all the nodes from `other` and moves them into `self`. After
/// this operation, `other` becomes empty.
///
/// This operation should compute in O(1) time and O(1) memory.
/// This operation should compute in `O(1)` time and `O(1)` memory.
///
/// # Examples
///
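A short usage sketch of the `O(1)` append: it only relinks node pointers, so the cost is independent of either list's length.

```rust
use std::collections::LinkedList;

let mut a: LinkedList<u32> = (1..=3).collect();
let mut b: LinkedList<u32> = (4..=6).collect();

a.append(&mut b); // O(1): relinks `b`'s nodes onto the end of `a`
assert!(b.is_empty());
assert_eq!(a.len(), 6); // len() itself is O(1), as documented below
```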
@@ -547,7 +547,7 @@ impl<T> LinkedList<T> {

/// Returns `true` if the `LinkedList` is empty.
///
/// This operation should compute in O(1) time.
/// This operation should compute in `O(1)` time.
///
/// # Examples
///
@@ -568,7 +568,7 @@ impl<T> LinkedList<T> {

/// Returns the length of the `LinkedList`.
///
/// This operation should compute in O(1) time.
/// This operation should compute in `O(1)` time.
///
/// # Examples
///
@@ -594,7 +594,7 @@ impl<T> LinkedList<T> {

/// Removes all elements from the `LinkedList`.
///
/// This operation should compute in O(n) time.
/// This operation should compute in `O(n)` time.
///
/// # Examples
///
@@ -737,7 +737,7 @@ impl<T> LinkedList<T> {

/// Adds an element first in the list.
///
/// This operation should compute in O(1) time.
/// This operation should compute in `O(1)` time.
///
/// # Examples
///
@@ -760,7 +760,7 @@ impl<T> LinkedList<T> {
/// Removes the first element and returns it, or `None` if the list is
/// empty.
///
/// This operation should compute in O(1) time.
/// This operation should compute in `O(1)` time.
///
/// # Examples
///
@@ -783,7 +783,7 @@ impl<T> LinkedList<T> {

/// Appends an element to the back of a list.
///
/// This operation should compute in O(1) time.
/// This operation should compute in `O(1)` time.
///
/// # Examples
///
@@ -803,7 +803,7 @@ impl<T> LinkedList<T> {
/// Removes the last element from a list and returns it, or `None` if
/// it is empty.
///
/// This operation should compute in O(1) time.
/// This operation should compute in `O(1)` time.
///
/// # Examples
///
@@ -824,7 +824,7 @@ impl<T> LinkedList<T> {
/// Splits the list into two at the given index. Returns everything after the given index,
/// including the index.
///
/// This operation should compute in O(n) time.
/// This operation should compute in `O(n)` time.
///
/// # Panics
///
@@ -880,7 +880,7 @@ impl<T> LinkedList<T> {

/// Removes the element at the given index and returns it.
///
/// This operation should compute in O(n) time.
/// This operation should compute in `O(n)` time.
///
/// # Panics
/// Panics if at >= len
6 changes: 3 additions & 3 deletions src/liballoc/collections/vec_deque.rs
@@ -1391,7 +1391,7 @@ impl<T> VecDeque<T> {
/// Removes an element from anywhere in the `VecDeque` and returns it,
/// replacing it with the first element.
///
/// This does not preserve ordering, but is O(1).
/// This does not preserve ordering, but is `O(1)`.
///
/// Returns `None` if `index` is out of bounds.
///
@@ -1426,7 +1426,7 @@ impl<T> VecDeque<T> {
/// Removes an element from anywhere in the `VecDeque` and returns it, replacing it with the
/// last element.
///
/// This does not preserve ordering, but is O(1).
/// This does not preserve ordering, but is `O(1)`.
///
/// Returns `None` if `index` is out of bounds.
///
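A quick sketch of the trade-off these methods document: constant-time removal at the cost of element order.

```rust
use std::collections::VecDeque;

let mut buf: VecDeque<_> = vec![1, 2, 3, 4, 5].into_iter().collect();

// O(1): the removed slot is filled by the *last* element, so ordering changes.
assert_eq!(buf.swap_remove_back(1), Some(2));
assert_eq!(buf, [1, 5, 3, 4]);

// Out-of-bounds indices simply return None.
assert_eq!(buf.swap_remove_back(10), None);
```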
@@ -2927,7 +2927,7 @@ impl<T> From<VecDeque<T>> for Vec<T> {
/// [`Vec<T>`]: crate::vec::Vec
/// [`VecDeque<T>`]: crate::collections::VecDeque
///
/// This never needs to re-allocate, but does need to do O(n) data movement if
/// This never needs to re-allocate, but does need to do `O(n)` data movement if
/// the circular buffer doesn't happen to be at the beginning of the allocation.
///
/// # Examples
10 changes: 5 additions & 5 deletions src/liballoc/slice.rs
@@ -165,7 +165,7 @@ mod hack {
impl<T> [T] {
/// Sorts the slice.
///
/// This sort is stable (i.e., does not reorder equal elements) and `O(n log n)` worst-case.
/// This sort is stable (i.e., does not reorder equal elements) and `O(n * log(n))` worst-case.
///
/// When applicable, unstable sorting is preferred because it is generally faster than stable
/// sorting and it doesn't allocate auxiliary memory.
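A brief usage sketch of the stable sorts described here:

```rust
let mut v = vec![5, 4, 1, 3, 2];

v.sort(); // stable, O(n * log(n)) worst-case, may allocate a temporary buffer
assert_eq!(v, [1, 2, 3, 4, 5]);

v.sort_by(|a, b| b.cmp(a)); // same bounds, with a caller-supplied comparator
assert_eq!(v, [5, 4, 3, 2, 1]);
```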
@@ -200,7 +200,7 @@ impl<T> [T] {

/// Sorts the slice with a comparator function.
///
/// This sort is stable (i.e., does not reorder equal elements) and `O(n log n)` worst-case.
/// This sort is stable (i.e., does not reorder equal elements) and `O(n * log(n))` worst-case.
///
/// The comparator function must define a total ordering for the elements in the slice. If
/// the ordering is not total, the order of the elements is unspecified. An order is a
@@ -254,7 +254,7 @@ impl<T> [T] {

/// Sorts the slice with a key extraction function.
///
/// This sort is stable (i.e., does not reorder equal elements) and `O(m n log n)`
/// This sort is stable (i.e., does not reorder equal elements) and `O(m * n * log(n))`
/// worst-case, where the key function is `O(m)`.
///
/// For expensive key functions (e.g. functions that are not simple property accesses or
@@ -297,7 +297,7 @@ impl<T> [T] {
///
/// During sorting, the key function is called only once per element.
///
/// This sort is stable (i.e., does not reorder equal elements) and `O(m n + n log n)`
/// This sort is stable (i.e., does not reorder equal elements) and `O(m * n + n * log(n))`
/// worst-case, where the key function is `O(m)`.
///
/// For simple key functions (e.g., functions that are property accesses or
@@ -935,7 +935,7 @@ where
/// 1. for every `i` in `1..runs.len()`: `runs[i - 1].len > runs[i].len`
/// 2. for every `i` in `2..runs.len()`: `runs[i - 2].len > runs[i - 1].len + runs[i].len`
///
/// The invariants ensure that the total running time is `O(n log n)` worst-case.
/// The invariants ensure that the total running time is `O(n * log(n))` worst-case.
fn merge_sort<T, F>(v: &mut [T], mut is_less: F)
where
F: FnMut(&T, &T) -> bool,
8 changes: 4 additions & 4 deletions src/libcore/slice/mod.rs
@@ -1606,7 +1606,7 @@ impl<T> [T] {
/// Sorts the slice, but may not preserve the order of equal elements.
///
/// This sort is unstable (i.e., may reorder equal elements), in-place
/// (i.e., does not allocate), and `O(n log n)` worst-case.
/// (i.e., does not allocate), and `O(n * log(n))` worst-case.
///
/// # Current implementation
///
@@ -1642,7 +1642,7 @@ impl<T> [T] {
/// elements.
///
/// This sort is unstable (i.e., may reorder equal elements), in-place
/// (i.e., does not allocate), and `O(n log n)` worst-case.
/// (i.e., does not allocate), and `O(n * log(n))` worst-case.
///
/// The comparator function must define a total ordering for the elements in the slice. If
/// the ordering is not total, the order of the elements is unspecified. An order is a
@@ -1697,7 +1697,7 @@ impl<T> [T] {
/// elements.
///
/// This sort is unstable (i.e., may reorder equal elements), in-place
/// (i.e., does not allocate), and `O(m n log n)` worst-case, where the key function is
/// (i.e., does not allocate), and `O(m * n * log(n))` worst-case, where the key function is
/// `O(m)`.
///
/// # Current implementation
@@ -1957,7 +1957,7 @@ impl<T> [T] {
// over all the elements, swapping as we go so that at the end
// the elements we wish to keep are in the front, and those we
// wish to reject are at the back. We can then split the slice.
// This operation is still O(n).
// This operation is still `O(n)`.
//
// Example: We start in this state, where `r` represents "next
// read" and `w` represents "next_write`.
6 changes: 3 additions & 3 deletions src/libcore/slice/sort.rs
@@ -143,7 +143,7 @@
}
}

/// Sorts `v` using heapsort, which guarantees `O(n log n)` worst-case.
/// Sorts `v` using heapsort, which guarantees `O(n * log(n))` worst-case.
#[cold]
pub fn heapsort<T, F>(v: &mut [T], is_less: &mut F)
where
@@ -621,7 +621,7 @@
}

// If too many bad pivot choices were made, simply fall back to heapsort in order to
// guarantee `O(n log n)` worst-case.
// guarantee `O(n * log(n))` worst-case.
if limit == 0 {
heapsort(v, is_less);
return;
Expand Down Expand Up @@ -684,7 +684,7 @@ where
}
}

/// Sorts `v` using pattern-defeating quicksort, which is `O(n log n)` worst-case.
/// Sorts `v` using pattern-defeating quicksort, which is `O(n * log(n))` worst-case.
pub fn quicksort<T, F>(v: &mut [T], mut is_less: F)
where
F: FnMut(&T, &T) -> bool,
8 changes: 4 additions & 4 deletions src/libstd/collections/mod.rs
@@ -110,10 +110,10 @@
//!
//! For Sets, all operations have the cost of the equivalent Map operation.
//!
//! | | get | insert | remove | predecessor | append |
//! |--------------|-----------|----------|----------|-------------|--------|
//! | [`HashMap`] | O(1)~ | O(1)~* | O(1)~ | N/A | N/A |
//! | [`BTreeMap`] | O(log n) | O(log n) | O(log n) | O(log n) | O(n+m) |
//! | | get | insert | remove | predecessor | append |
//! |--------------|-----------|-----------|-----------|-------------|--------|
//! | [`HashMap`] | O(1)~ | O(1)~* | O(1)~ | N/A | N/A |
//! | [`BTreeMap`] | O(log(n)) | O(log(n)) | O(log(n)) | O(log(n)) | O(n+m) |
//!
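A small sketch of what the `BTreeMap` row translates to in code; the `range` call is the kind of predecessor-style query `HashMap` cannot offer:

```rust
use std::collections::BTreeMap;

let mut map = BTreeMap::new();
for i in 0..100 {
    map.insert(i, i * i); // O(log(n)) per insert
}
assert_eq!(map.get(&7), Some(&49)); // O(log(n)) lookup

// Ordered queries (the "predecessor" column) are what BTreeMap adds over HashMap:
let largest_below_10 = map.range(..10).next_back().map(|(k, _)| *k);
assert_eq!(largest_below_10, Some(9));
```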
//! # Correct and Efficient Usage of Collections
//!
4 changes: 2 additions & 2 deletions src/libstd/ffi/mod.rs
@@ -43,8 +43,8 @@
//! terminator, so the buffer length is really `len+1` characters.
//! Rust strings don't have a nul terminator; their length is always
//! stored and does not need to be calculated. While in Rust
//! accessing a string's length is a O(1) operation (because the
//! length is stored); in C it is an O(length) operation because the
//! accessing a string's length is a `O(1)` operation (because the
//! length is stored); in C it is an `O(length)` operation because the
//! length needs to be computed by scanning the string for the nul
//! terminator.
//!
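A small illustration of the difference, using `CStr` to stand in for a borrowed C string:

```rust
use std::ffi::CStr;

let rust_str = "hello";
// Rust strings store their length, so this is O(1):
assert_eq!(rust_str.len(), 5);

// Building a `CStr` view of a nul-terminated buffer has to scan it to find
// (and validate) the nul terminator -- an O(length) step.
let c_str = CStr::from_bytes_with_nul(b"hello\0").unwrap();
assert_eq!(c_str.to_bytes(), b"hello");
```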