#1,086 – Converting Decimal Floating Point to Binary Floating Point

We can represent a particular floating point value as either a decimal floating point number or a binary floating point number.

Below is an example of how we would convert a decimal floating point value (3.25) to its equivalent binary floating point value.

3.25 = 2 + 1 + 0.25
     = (1 x 2^1) + (1 x 2^0) + (1 x 2^-2)
     = 11.01 (binary)

This particular example was rather easy, because the decimal value 0.25 represents 1/4, so it can be represented as a binary fraction with only two digits after the binary point.

The decimal value 1.1 is a bit more difficult to convert.

At each step, we find the largest fractional power of two (e.g. 1/2, 1/4, 1/8) that does not exceed the remaining fractional value.  We then subtract that value and continue.

1.1 - 1 = 0.1                      (1 x 2^0)     binary so far: 1.
0.1 - 1/16 = 0.0375                (1 x 2^-4)    binary so far: 1.0001
0.0375 - 1/32 = 0.00625            (1 x 2^-5)    binary so far: 1.00011
0.00625 - 1/256 = 0.00234375       (1 x 2^-8)    binary so far: 1.00011001
0.00234375 - 1/512 = 0.000390625   (1 x 2^-9)    binary so far: 1.000110011

We could continue this process, adding one digit of precision at each step and reducing the error (the difference between the binary representation and the value 1.1).  But we can never exactly represent this value with a binary floating point number: the binary expansion of 1.1 repeats forever (1.000110011001…, with the pattern 0011 repeating).
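Below is a minimal C# sketch of this greedy conversion (the method and names are my own, for illustration only).  One caveat: a C# double is itself a binary floating point value, so the 1.1 we start from is already a close approximation.

    using System;
    using System.Text;

    static class BinaryFraction
    {
        // Greedy conversion of the fractional part of a value: at each step,
        // emit a 1 if the current power of two fits into the remaining
        // fraction (and subtract it), otherwise emit a 0; then halve the power.
        static string FractionToBinary(double value, int maxDigits)
        {
            var sb = new StringBuilder();
            double remaining = value - Math.Floor(value);
            double power = 0.5;  // 1/2, then 1/4, 1/8, ...

            for (int i = 0; i < maxDigits && remaining > 0.0; i++)
            {
                if (power <= remaining)
                {
                    sb.Append('1');
                    remaining -= power;
                }
                else
                {
                    sb.Append('0');
                }
                power /= 2.0;
            }
            return sb.ToString();
        }

        static void Main()
        {
            Console.WriteLine(FractionToBinary(3.25, 16));  // 01 (3.25 = 11.01 binary)
            Console.WriteLine(FractionToBinary(1.1, 12));   // 000110011001
        }
    }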

 


#1,085 – Binary Floating Point Numbers

When we write decimal floating point numbers, digits to the left of the decimal point indicate values of powers of 10 (1, 10, 100) that should be summed together.  Digits to the right of the decimal point indicate values of negative powers of 10 (1/10, 1/100, etc).

We can also write binary floating point numbers.  Numbers represented this way aren’t seen very often in practice, but are important to understand when we talk about how to store floating point numbers in memory.

Digits to the left of the binary point in a binary floating point number represent powers of 2 (1, 2, 4) to be summed together.

Digits to the right of the binary point represent negative powers of 2 (1/2, 1/4, 1/8, etc.).

Here are some examples.

0.1 (binary) = 1/2 = 0.5
1.01 (binary) = 1 + 1/4 = 1.25
11.01 (binary) = 2 + 1 + 1/4 = 3.25
101.11 (binary) = 4 + 1 + 1/2 + 1/4 = 5.75
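As a sketch of how such a numeral is evaluated, the C# method below (an illustration of the arithmetic, not any standard API) sums the appropriate power of two for each 1 digit.

    using System;

    static class BinaryNumerals
    {
        // Evaluate a binary numeral such as "101.11" by summing powers of two.
        static double ParseBinary(string s)
        {
            int point = s.IndexOf('.');
            string intPart = point < 0 ? s : s.Substring(0, point);
            string fracPart = point < 0 ? "" : s.Substring(point + 1);

            double result = 0.0;
            double weight = 1.0;

            // Digits left of the binary point: 1, 2, 4, ... (scanning right to left)
            for (int i = intPart.Length - 1; i >= 0; i--)
            {
                if (intPart[i] == '1') result += weight;
                weight *= 2.0;
            }

            // Digits right of the binary point: 1/2, 1/4, 1/8, ...
            weight = 0.5;
            foreach (char c in fracPart)
            {
                if (c == '1') result += weight;
                weight /= 2.0;
            }
            return result;
        }

        static void Main()
        {
            Console.WriteLine(ParseBinary("101.11"));  // 4 + 1 + 1/2 + 1/4 = 5.75
            Console.WriteLine(ParseBinary("11.01"));   // 2 + 1 + 1/4 = 3.25
        }
    }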

#1,084 – Representing Numbers Using Scientific Notation

We often represent numbers (especially floating point numbers) using something called scientific notation.

With scientific notation (in base 10), you represent a number using the form:

a x 10^b

The a term is known as the significand (or mantissa), and the b term is known as the exponent.

Scientific notation is useful because it allows us to write very large or very small numbers easily, using the exponent to avoid writing long strings of zeroes.

We also typically write the number so that there is exactly one non-zero digit to the left of the decimal point.  (This is known as normalized form).

For example:

342.5 = 3.425 x 10^2
0.00467 = 4.67 x 10^-3

Notice that we can now write very large and very small numbers fairly concisely:

5,100,000,000 = 5.1 x 10^9
0.0000072 = 7.2 x 10^-6

 

We only need to write down the number’s significant digits (i.e. the significand), that is, the digits that contribute to the number’s precision.  (Zeroes between non-zero digits are also significant).
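In C#, we can ask for normalized scientific notation directly, using the standard “E” format specifier.  For example:

    using System;

    class ScientificNotation
    {
        static void Main()
        {
            // The "E" format prints a normalized significand (one digit to the
            // left of the decimal point) followed by a signed exponent.
            Console.WriteLine(5100000000.0.ToString("E2"));  // 5.10E+009
            Console.WriteLine(0.0000072.ToString("E2"));     // 7.20E-006
            Console.WriteLine(342.5.ToString("E4"));         // 3.4250E+002
        }
    }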

#1,083 – Using Visual Studio to Verify Little-Endianness

We know that Intel processors use a “little-endian” scheme when deciding how to store binary data in memory.  That is, bytes from the “little end” of a number will be stored earlier in memory than the bytes from the “big end”.

We can see this little-endianness in action by using Visual Studio to look at how a data item is stored in memory.

Let’s say that we have a 4-byte (32-bit) unsigned integer with a value of 0x1234ABCD, assigned to a variable named “myNumber”.  We can view the memory location where this number is stored by bringing up the Memory window in Visual Studio and then entering “&myNumber” in the Address area.

[Screenshot: the Visual Studio Memory window, with &myNumber entered in the Address field]

When you press RETURN, you’ll see the memory location where myNumber is stored.  Notice that the first byte is CD, followed by AB, etc.  The number is stored in a little-endian manner.

[Memory window contents, starting at &myNumber:  CD AB 34 12 ...]
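We can verify the same behavior in code, without the debugger, using the standard BitConverter class, which returns a value’s bytes in the order that the machine stores them:

    using System;

    class EndianCheck
    {
        static void Main()
        {
            uint myNumber = 0x1234ABCD;
            byte[] bytes = BitConverter.GetBytes(myNumber);

            Console.WriteLine(BitConverter.IsLittleEndian);   // True (on Intel)
            Console.WriteLine(BitConverter.ToString(bytes));  // CD-AB-34-12
        }
    }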

 

#1,082 – Big-endian and Little-endian

The terms “big-endian” and “little-endian” refer to the scheme that a computer uses to store binary data in memory.  The basic difference is:

  • Big-endian (e.g. IBM mainframes, Motorola 68000) – leftmost (most significant) byte is stored first, followed by the remaining bytes, left to right.  (“Big end” of number stored first)
  • Little-endian (e.g. Intel processors) – rightmost (least significant) byte is stored first, followed by the remaining bytes, right to left.  (“Little end” of number stored first)

Below is an example.  Assume that we have a 4-byte (32-bit) number with a value of 0x1234ABCD (hex).  The diagram below shows how this number would be stored in a 4-byte chunk of memory, based on whether the processor uses the big-endian or the little-endian convention.

Memory offset:    0    1    2    3
Big-endian:       12   34   AB   CD
Little-endian:    CD   AB   34   12
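As a sketch of the difference, the C# snippet below peels the bytes off the number with shifts and prints them in each order (illustrative code only, not any particular API):

    using System;

    class ByteOrders
    {
        static void Main()
        {
            uint n = 0x1234ABCD;

            // Big-endian order: most significant ("big end") byte first.
            for (int shift = 24; shift >= 0; shift -= 8)
                Console.Write("{0:X2} ", (n >> shift) & 0xFF);
            Console.WriteLine();  // 12 34 AB CD

            // Little-endian order: least significant ("little end") byte first.
            for (int shift = 0; shift <= 24; shift += 8)
                Console.Write("{0:X2} ", (n >> shift) & 0xFF);
            Console.WriteLine();  // CD AB 34 12
        }
    }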

 

#1,081 – Bits, Bytes and Nibbles

A bit is a unit of information in a digital computer that can take on one of two values, typically written as 0 or 1.  Information stored in digital computers was originally represented as bits because each bit could be physically represented by some electrical mechanism that could take on one of two states (e.g. a transistor, which can be on or off).

A series of eight bits is considered a byte.  (Historically, the number of bits in a byte was dependent on the hardware, but the term byte most often refers to a sequence of eight bits).


Because a byte consists of eight bits and a hexadecimal character represents four bits, you can represent a byte using two hexadecimal characters.

For example, the byte 1011 0010 can be written as 0xB2 (1011 binary = B hex, 0010 binary = 2 hex).

A byte can take on values from 0 to 255, or 0x00 to 0xFF.


Four bits (one hexadecimal character) is also known as a nibble, though this term is not as common as byte.
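As a small illustration, the C# snippet below pulls the two nibbles out of a byte with a shift and a mask (variable names are my own):

    using System;

    class Nibbles
    {
        static void Main()
        {
            byte b = 0xB2;  // binary 1011 0010

            int highNibble = (b >> 4) & 0x0F;  // 1011 binary = 0xB
            int lowNibble = b & 0x0F;          // 0010 binary = 0x2

            Console.WriteLine("{0:X}{1:X}", highNibble, lowNibble);  // B2
            Console.WriteLine(b);  // 178 (within the 0 to 255 range of a byte)
        }
    }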

 

 

#1,080 – Binary Numerals Written as Hexadecimal

Binary data, for example data stored in some location in memory, is typically written as a hexadecimal number.  This can be done by grouping the binary digits into groups of four.  Each group of four digits can then be represented by a single hexadecimal character.

Four binary digits can range from 0000 to 1111, representing values from 0 to 15, corresponding to the 16 available characters in the hexadecimal number system.

Binary    Decimal    Hex
0000      0          0
0001      1          1
0010      2          2
0011      3          3
0100      4          4
0101      5          5
0110      6          6
0111      7          7
1000      8          8
1001      9          9
1010      10         A
1011      11         B
1100      12         C
1101      13         D
1110      14         E
1111      15         F

The example below shows how we can represent a long binary number (16 bits in this case) as a series of hex characters.

1101 0010 1011 0100 (binary) = 0xD2B4

The practice of grouping binary data into groups of four digits maps well to data stored in digital computers, since the typical size of a data word in a binary computer is some multiple of four bits, e.g. 16 bits, 32 bits, or 64 bits.  These words can then be represented by 4, 8, or 16 hexadecimal characters, respectively.
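Here is a short C# sketch of this grouping process (the helper method is my own; the per-group conversion uses the standard Convert.ToInt32(string, fromBase) overload):

    using System;
    using System.Text;

    class BinaryToHex
    {
        static string ToHex(string binary)
        {
            // Pad on the left so the length is a multiple of four.
            int pad = (4 - binary.Length % 4) % 4;
            binary = binary.PadLeft(binary.Length + pad, '0');

            var sb = new StringBuilder();
            for (int i = 0; i < binary.Length; i += 4)
            {
                // Convert each group of four binary digits to one hex character.
                int value = Convert.ToInt32(binary.Substring(i, 4), 2);
                sb.Append(value.ToString("X"));
            }
            return sb.ToString();
        }

        static void Main()
        {
            Console.WriteLine(ToHex("1101001010110100"));  // D2B4
        }
    }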