I am learning in my computer science class about algorithms. My teacher wrote on the board:

n = 1      running time = 1
n = 3      1 + 1 = 2
n = 7      1 + 2 = 3
n = 15     1 + 3 = 4
n = 31     1 + 4 = 5
n = 63     1 + 5 = 6
n = 127    7
n = 255    8
n = 511    9
n = 1023   10

How in the world does this happen?

It has something to do with log n steps. I don't understand this concept.

What does it mean that log n << n for large n?

Also, what is O(n)?

Running time is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. It all got put together on the board, but really those running-time numbers are separate from the n numbers.

Sorry to be nagging, but next time, when the teacher writes something on the board that you do not understand, raise your hand and ask for an explanation. There may be six other students who don't understand, and they will thank you for asking.

Back to the board.
If you have to calculate x = 2ⁿ − 1, where n = 10, you can start with k = 1: x = 2¹ − 1 = 1.
For k = 2, you double the previous x and add 1 to get 3. For k = 3, again double the previous x (= 3) and add 1 to get 7.
If you do this 10 times, you will get 2¹⁰ − 1 in 10 steps, i.e. up to k = n.
Since 1024 is equal to 2¹⁰, it follows that log2(1024) = 10.
The 'cost' of doing this calculation is therefore proportional to n, or approximately equal to n, or of order n, or O(n).
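The doubling scheme above can be sketched in Python (the function name `doubling_steps` is my own, just for illustration):

```python
def doubling_steps(n):
    """Compute 2**n - 1 by repeated doubling, counting the steps taken."""
    x = 1              # k = 1: 2**1 - 1 = 1
    steps = 1
    for k in range(2, n + 1):
        x = 2 * x + 1  # double the previous x and add 1
        steps += 1
    return x, steps

print(doubling_steps(10))  # (1023, 10): 2**10 - 1 reached in 10 steps
```

The loop body runs once per value of k from 1 up to n, which is exactly the "cost proportional to n" the text describes.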
For an explanation of the Big-O notation, you can read up:
http://en.wikipedia.org/wiki/Big_O_notation

I do not know if algebra is a prerequisite for this algorithms course, but you will definitely need to know logarithms (to base 2 and base e) quite well. Textbooks usually start off with a thorough review of the math required.

The idea behind all this is to be able to compare the efficiency of different algorithms, or different ways of calculating or evaluating formulas. An algorithm of O(log n) performs better than one of O(n), which in turn performs better than one of O(n²), etc. That comparison is the subject of this very interesting course.

The example you provided illustrates a power-of-2 pattern. Each value of n is one less than a power of 2 (1, 3, 7, 15, ...), and the running time is the exponent of that power, i.e. log2(n + 1).

To better understand the concept, let's look at the progression you provided:

n = 1, running time = 1
n = 3, running time = 1 + 1 = 2
n = 7, running time = 1 + 2 = 3
n = 15, running time = 1 + 3 = 4
n = 31, running time = 1 + 4 = 5
n = 63, running time = 1 + 5 = 6
n = 127, running time = 1 + 6 = 7
n = 255, running time = 1 + 7 = 8
n = 511, running time = 1 + 8 = 9
n = 1023, running time = 1 + 9 = 10

In this pattern, the running time is the base-2 logarithm of n rounded down, plus 1; because each n here is one less than a power of 2, that is exactly log2(n + 1).
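As a quick check, the board's table can be reproduced with Python's standard `math.log2` (just a sketch, not anything from the course):

```python
import math

# For each n on the board, compute floor(log2(n)) + 1,
# which equals log2(n + 1) because each n is one less than a power of 2.
for n in (1, 3, 7, 15, 31, 63, 127, 255, 511, 1023):
    running_time = math.floor(math.log2(n)) + 1
    print(n, running_time)  # reproduces the right-hand column: 1 through 10
```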

Now, let's talk about logarithms. In mathematics, the logarithm is the inverse operation to exponentiation. Specifically, the base b logarithm (logb) of a number x, denoted as logb(x), answers the question: "To what power must I raise b to get x?"

For example, log2(8) = 3, because 2 raised to the power of 3 is equal to 8 (2^3 = 8). Similarly, log2(16) = 4.
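You can verify these with Python's standard `math.log2`:

```python
import math

print(math.log2(8))     # 3.0, because 2**3 == 8
print(math.log2(16))    # 4.0, because 2**4 == 16
print(math.log2(1024))  # 10.0, because 2**10 == 1024
```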

In the case of your example, the running time is approximately equal to log2(n) + 1. For large values of n, log2(n) is far smaller than n (this is what "log n << n for large n" means), which is why the running time increases so slowly.

Now, let's discuss O(n). In computer science, this is Big-O notation: it provides an upper bound on the growth rate of an algorithm's running time as a function of the input size, often denoted n, and is usually quoted for the worst case.

In simpler terms, if the running time of an algorithm is O(n), it means that the time it takes to execute the algorithm will grow linearly with the size of the input. For example, if the input size doubles, the running time will also approximately double.
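A classic O(n) example is summing a list: the loop body runs once per element, so doubling the input doubles the work (a generic illustration, not your teacher's example):

```python
def linear_sum(values):
    """Sum a list in O(n): one loop iteration per element."""
    total = 0
    steps = 0
    for v in values:
        total += v
        steps += 1  # count the work done
    return total, steps

print(linear_sum(list(range(5))))   # (10, 5): 5 elements take 5 steps
print(linear_sum(list(range(10))))  # (45, 10): double the input, double the steps
```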

To summarize, log n represents the logarithm (base 2 in your example) of a number n. In your case, the running time seems to be approximately log2(n) + 1. O(n) is a notation used to express the worst-case time complexity of an algorithm, indicating linear growth in the running time relative to the input size.

The pattern you're seeing on the board is related to powers of 2. Each number on the left side is one less than a power of 2, and the number on the right side is the exponent of that power. The progression follows the formula: running time = log2(n + 1), which is approximately log n + 1.

In computer science, the logarithm is often used to measure the efficiency or running time of algorithms. The notation "log n" represents the logarithm base 2 of the input size n. For example, log(8) = 3 because 2^3 = 8. In the case of the pattern you provided, the input size (n) refers to the numbers on the left side.

As n gets larger, the running time increases logarithmically, meaning it grows much more slowly than n itself. Doubling the input size increases the running time by only a constant amount (here, 1 extra step). So, for large values of n, log n is a good estimate of the running time.
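You can see this "constant amount per doubling" directly with `math.log2`:

```python
import math

# Each doubling of n adds exactly 1 to log2(n).
for n in (1024, 2048, 4096, 8192):
    print(n, math.log2(n))  # 10.0, 11.0, 12.0, 13.0
```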

O(n) is written in Big-O notation, a way to describe an upper bound on the worst-case running time of an algorithm. When someone says an algorithm is O(n), it means its running time grows at most linearly with the input size n: if the input size doubles, the running time roughly doubles. Big-O notation is commonly used to analyze and compare algorithms in terms of their efficiency.

In summary, the pattern on the board demonstrates that for large values of n, the running time can be approximated by log n + 1. This is because the running time increases logarithmically with the input size. Additionally, O(n) represents the linear growth rate of an algorithm's running time.