See Wikipedia - Big O Notation and An informal introduction to O(N) notation, the latter of which says...
O(N) notation is used to express the worst-case order of growth of an algorithm. That is, how the algorithm's worst-case performance changes as the size of the data set it operates on increases.
...and specifically...
O(log N) and O(N log N) ... generally mean that the algorithm deals with a data set that is iteratively partitioned, like a balanced binary tree... Generally, but not always, log N implies log₂N, which means, roughly, the number of times you can partition a set in half, then partition the halves, and so on, while still having non-empty sets.
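A minimal sketch of that "partition in half" idea, in Python (the function names here are illustrative, not from either reference). Counting how many times a set of size N can be halved before it becomes empty gives roughly log₂N, which is why binary search over a sorted list takes O(log N) steps:

```python
import math

def halvings(n):
    """Count how many times a set of size n can be split in half
    before only one element remains -- roughly log2(n)."""
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

def binary_search(sorted_items, target):
    """Classic O(log N) search: each comparison discards half
    of the remaining candidates."""
    lo, hi = 0, len(sorted_items) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == target:
            return mid, comparisons
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

print(halvings(1024))              # 10, since 2**10 == 1024
print(math.ceil(math.log2(10**6))) # ~20 halvings for a million items
```

So doubling the data set adds only one more halving (one more comparison), which is the practical payoff of O(log N) growth.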