Week 4 Day 2 - Big O Notation

Video Link: https://www.youtube.com/watch?v=hlPCo5ZVKGQ
Duration: 1:32:35


Today we did a code review on Imaginary Homework. Murai and Ishigaki did a fantastic job on their project. If you didn't pass 20/20 of the test cases, review their code and figure out what you could have done differently.


We then talked about Big-O notation, a central concept in Computer Science. Big-O basically describes how much slower your algorithm gets as the problem size gets bigger. A lot of algorithms have a linear relationship. Suppose you want to add up all N elements in a vector (N means the size of your problem). Well, you start at the beginning and add them up till the end. As N gets bigger, so does the amount of work you have to do. So this is called an O(N) ("linear running time") algorithm.
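Here's a quick sketch of that linear sum in C++ (the function name is mine, not from the homework):

```cpp
#include <vector>

// Sum all N elements: the loop body runs once per element,
// so the work grows linearly with N -- O(N).
int sumAll(const std::vector<int>& v) {
    int total = 0;
    for (int i = 0; i < (int)v.size(); i++)
        total += v[i];
    return total;
}
```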


However, the work needed to access the first element in a vector doesn't change based on how big the vector is. You just, well, read the first element. It doesn't matter if the vector is size 10 or size 1000000. It takes a constant amount of time to access. So this is called an O(1) ("constant running time") algorithm.
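In code, constant-time access looks like this (again, the function name is just for illustration):

```cpp
#include <vector>

// Reading one element is a single indexed memory access; the cost
// does not depend on v.size() -- O(1). Same work whether the
// vector has 10 elements or 1,000,000.
int firstElement(const std::vector<int>& v) {
    return v[0];
}
```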


If you have a doubly-nested for loop, each one of which runs from 0 to N-1, then you'll have an O(N^2) algorithm. If you have a triply nested for loop (each loop running from 0 to N-1) then it'll be O(N^3), and so forth.
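You can see the N^2 growth by counting how many times the inner body of a doubly-nested loop runs (a toy counter, not real work):

```cpp
// Each loop runs 0..N-1, so the inner body executes N*N times -- O(N^2).
// A triply-nested version would execute N*N*N times -- O(N^3).
long long countSteps(int n) {
    long long steps = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            steps++;   // one unit of work
    return steps;
}
```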


O(2^N) means that for each additional element you add to the problem, you *double* the amount of work you do. This is called "exponential running time" and it can quickly overwhelm (as N increases) even the fastest computers.
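A classic example is enumerating every subset of an N-element set: each element is either in or out, so there are 2^N subsets, and adding one more element doubles the count. A sketch:

```cpp
#include <vector>

// Enumerate every subset of {0, ..., n-1} by deciding, for each
// element, whether it's in or out. The recursion visits all 2^N
// subsets, so the total work is O(2^N).
void subsets(int n, std::vector<int>& cur, long long& count) {
    if (n == 0) { count++; return; }  // one complete subset built
    subsets(n - 1, cur, count);       // leave element n-1 out
    cur.push_back(n - 1);             // put element n-1 in
    subsets(n - 1, cur, count);
    cur.pop_back();
}
```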


Contrariwise, O(logN) ("logarithmic running time") is as fast as exponential is slow - for every doubling of the problem size, you only have to do one additional step. Binary Search is a classic example of an O(logN) algorithm.
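A standard Binary Search looks like this (it requires the vector to already be sorted):

```cpp
#include <vector>

// Binary search on a sorted vector: each comparison cuts the search
// range in half, so doubling N adds only one more step -- O(logN).
// Returns the index of target, or -1 if it's not present.
int binarySearch(const std::vector<int>& v, int target) {
    int lo = 0, hi = (int)v.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (v[mid] == target) return mid;
        if (v[mid] < target)  lo = mid + 1;  // discard left half
        else                  hi = mid - 1;  // discard right half
    }
    return -1;
}
```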


Finally, O(NlogN) is important ("line... loga... uh, *linearithmic* running time") because the time needed to sort a vector full of random data by comparing elements is provably no faster than O(NlogN). Which is pretty good. It's only a little worse than O(N), and basically any algorithm like that is going to have to at least read through all the elements once, which is O(N) time.
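You don't have to write an O(NlogN) sort yourself - the standard library's std::sort already runs in O(NlogN) comparisons:

```cpp
#include <algorithm>
#include <vector>

// std::sort is guaranteed O(NlogN) comparisons, matching the
// lower bound for comparison-based sorting.
std::vector<int> sorted(std::vector<int> v) {
    std::sort(v.begin(), v.end());
    return v;
}
```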







Tags:
csci 41
big o
code review