In this lesson, we are going to roll up our sleeves and take a closer look at how computers use and store numbers. Learning how computers manage data can help you better understand the applications on your computer.

Computers are Binary Creatures

You've probably heard that computers operate on binary numbers. The two binary digits are 0 and 1. All data stored on a computer, such as a decimal number (42) or a piece of text ("hello"), is actually just a longer series of 0's and 1's. Inside your computer's processor, the 0's and 1's are determined by the presence or absence of an electrical signal. If you have a 5-volt processor, for example, then a reading of 0 volts on a line is treated as a binary 0 and a reading of 5 volts on a line is treated as a binary 1. Sometimes, the voltage on a line will flip back and forth between 0 and the maximum, and you can visualize this as a square wave between 0 and 5 volts. Through the magic of the tiny transistor, computers can switch a voltage off and on very easily, so managing data as a series of off/on or 0/1 combinations comes naturally to computers. When making decisions or speaking about digital logic, the 0's and 1's are understood as "False" (0) or "True" (1). Therefore, programmers may speak informally about logical expressions using "True or False", "0 or 1", or similar terms.

Numbers in Binary

All computer programs and data are simply a series of 1's and 0's. These two symbols are grouped into larger patterns to be meaningful to a computer or human. Each 1 or 0 digit is called a bit, which is short for binary digit. Because there are only two possible values for each digit, this is called a binary or base 2 numbering system. Humans normally use "decimal" or "base 10" numbering. The "base" of a numbering system describes how many unique digits are used. In a base 10 system, we use 10 digits (0 through 9). In a base 2 system, we use only 2 digits (0 and 1).

You can write numbers in binary simply as a line of digits, like this: 1010. But how will anyone looking at that line know if the digits are binary or represent the decimal value "one thousand and ten"? If the meaning is not clear, you can add a percent sign prefix (%) as in %1010 or add a trailing subscript showing the base 2, like this: 1010₂.

Now, what do these binary numbers mean in decimal? If you have 1 bit, the possible values are just 0 or 1. If you have two bits, the possible values are 00, 01, 10, and 11, which correspond to decimal values 0, 1, 2, and 3. As you can imagine, as you add binary digits, the range of possible values gets greater. In fact, each additional binary digit doubles the range! Four bits can hold sixteen values (0-15 in decimal), five bits can hold 32 possible values (0-31 in decimal), and so on. Computers store a group of 8 bits in a byte, which has a range of 256 possible values.

Bits and bytes are gathered together in increasingly large groups to form data sets or files. The following terms are used to describe file sizes or amounts of data.

Term        Description                                    Example
bit         A single 0 or 1 value                          0 or 1
byte        A set of 8 bits                                A single character like 'A'
kilobyte    1024 bytes                                     A paragraph or two of text
megabyte    1024 kilobytes or 1,048,576 bytes              About 1 minute of MP3 music or an average book
gigabyte    1024 megabytes or 1,073,741,824 bytes          About 1 hour of 1080p video
terabyte    1024 gigabytes or 1,099,511,627,776 bytes      About 500 hours of movies or 300,000 photos, depending on resolution
petabyte    1024 terabytes or 1,125,899,906,842,624 bytes
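
To see the doubling rule and the 1024 multiplier in action, here is a short Python sketch (an illustration added to this lesson; the names KILOBYTE, MEGABYTE, and GIGABYTE are made up for the example):

    # Each additional bit doubles the number of possible values.
    for bits in range(1, 9):
        print(bits, "bit(s) can represent", 2 ** bits, "values")  # 2, 4, 8, ..., 256

    # File-size units grow by a multiplier of 1024 at each step.
    KILOBYTE = 1024              # bytes
    MEGABYTE = 1024 * KILOBYTE   # 1,048,576 bytes
    GIGABYTE = 1024 * MEGABYTE   # 1,073,741,824 bytes
    print(KILOBYTE, MEGABYTE, GIGABYTE)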

Why is it important to understand the binary number system when dealing with computers?
How many digits are in the binary number system?
What two binary values are normally understood as "True" and "False"?
What is a bit? What is a byte?
When thinking about kilobytes, megabytes, and larger sizes, what multiplier is used to increase from one stage to the next?
What are two common ways to identify binary numbers?
Why are binary numbers a form of abstraction?
Does one sequence of binary digits always mean the same thing to all applications?
How do you count in binary?
What is the process of converting from binary to decimal?
As a shortcut to converting 4 binary digits to decimal, what 4 decimal weights should you memorize?
What is the process of converting from decimal to binary?

Understanding the binary number system is important when dealing with computers because all data stored and processed by computers is in binary form.

There are only two digits in the binary number system: 0 and 1.

The two binary values that are normally understood as "True" and "False" are 1 and 0, respectively.

A bit is a single binary digit, either 0 or 1. A byte is made up of 8 bits.

When thinking about kilobytes, megabytes, and larger sizes, the multiplier 1024 is used to increase from one stage to the next.

Two common ways to identify binary numbers are to prefix them with a percent sign (%) or to use a subscript indicating base 2.

Binary numbers are a form of abstraction because the simple 0 and 1 signals inside the computer are grouped and interpreted to represent more complex things, such as larger numbers, text, images, and sound.

One sequence of binary digits does not always mean the same thing to all applications. The interpretation of binary data depends on the context in which it is used.
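
As a rough illustration of that idea, the same two bytes can be read as small numbers, as text, or as one larger number, depending on what the application expects. This Python sketch is an example added here, not something from the lesson:

    data = bytes([72, 105])               # two bytes: 01001000 01101001

    print(list(data))                     # interpreted as two small numbers: [72, 105]
    print(data.decode("ascii"))           # interpreted as ASCII text: Hi
    print(int.from_bytes(data, "big"))    # interpreted as one 16-bit number: 18537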

Counting in binary works like counting in decimal, except that each column can only hold a 0 or a 1. You count 0, 1, and then the rightmost column rolls over to 0 and carries a 1 into the next column to the left: 10, 11, 100, 101, and so on. Each column represents a power of 2.
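
A minimal Python sketch of counting in binary, using Python's built-in binary formatting just to show the roll-over pattern:

    # Count from 0 to 7 and show each value as a 3-bit binary pattern.
    for n in range(8):
        print(n, format(n, "03b"))   # 0 000, 1 001, 2 010, 3 011, ..., 7 111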

To convert from binary to decimal, multiply each digit by 2 raised to the power of its position, counting positions from 0 at the rightmost digit, and then add up the results.
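
Here is a small Python sketch of that positional method; the function name binary_to_decimal is made up for this example, and Python's built-in int(..., 2) is shown only as a cross-check:

    def binary_to_decimal(bits: str) -> int:
        """Multiply each digit by a power of 2 and add the results."""
        total = 0
        for position, digit in enumerate(reversed(bits)):   # rightmost digit is position 0
            total += int(digit) * (2 ** position)
        return total

    print(binary_to_decimal("1010"))   # 10
    print(int("1010", 2))              # built-in conversion agrees: 10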

As a shortcut to converting 4 binary digits to decimal, you can memorize the decimal weights 8, 4, 2, and 1 for the four positions from left to right (or 1, 2, 4, and 8 reading right to left). For example, %1011 = 8 + 0 + 2 + 1 = 11 in decimal.

To convert from decimal to binary, divide the decimal number by 2 repeatedly, recording the remainder from each division; the remainders, read in reverse order (last remainder first), give the binary representation.
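
A small Python sketch of the repeated-division method; the function name decimal_to_binary is made up for this example, and format(42, "b") is shown only as a cross-check:

    def decimal_to_binary(value: int) -> str:
        """Divide by 2 repeatedly, collecting the remainders."""
        if value == 0:
            return "0"
        digits = []
        while value > 0:
            digits.append(str(value % 2))   # the remainder is the next binary digit
            value //= 2
        return "".join(reversed(digits))    # read the remainders in reverse order

    print(decimal_to_binary(42))   # 101010
    print(format(42, "b"))         # built-in formatting agrees: 101010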