**What is Floating Point Representation?**

In **numerical analysis**, floating-point representation is the scientific notation of a binary number.

Floating-point representation supports operations over a very wide range of values.

Floating-point representation divides a number into three parts.

The leftmost part is the sign bit, and the middle part is a fixed-point value called the mantissa (or significand).

The last part represents the exponent.

Every floating-point value carries a sign bit that is either 0 or 1.

Here 0 represents a positive value and 1 represents a negative value.
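The three parts described above can be inspected directly. The sketch below (using Python's standard `struct` module, with a hypothetical helper name `decompose_double`) pulls the sign bit, biased exponent, and mantissa bits out of an IEEE 754 double:

```python
import struct

def decompose_double(x):
    """Split a Python float (an IEEE 754 double) into its three fields:
    sign bit, biased exponent, and mantissa (fraction) bits."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64 bits
    sign = bits >> 63                  # 1 bit: 0 = positive, 1 = negative
    exponent = (bits >> 52) & 0x7FF    # 11 bits, stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)  # 52 fraction bits
    return sign, exponent, mantissa

sign, exp, mant = decompose_double(-6.5)
print(sign)        # 1 (negative)
print(exp - 1023)  # 2, the unbiased exponent: -6.5 = -1.625 * 2**2
```

Note that the stored exponent is biased, so the true exponent is recovered by subtracting 1023 for a double.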

Floating-point representation is an important concept: in nearly all of the technologies we use today, numbers are stored internally in floating-point form, i.e. in scientific notation.

The format of the scientific notation is ±M × B^{E}, where M is the mantissa, B is the base, and E is the exponent.

Example: mantissa × 2^{Exponent} in base 2, or in base 10:

1.3452 = 13452 × 10^{-4}
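As a minimal sketch of the mantissa/exponent split, Python's standard `math.frexp` decomposes a float into exactly this binary scientific notation, and `math.ldexp` reassembles it:

```python
import math

# math.frexp returns (m, e) such that x == m * 2**e and 0.5 <= |m| < 1
m, e = math.frexp(13.25)
print(m, e)                      # 0.828125 4  ->  13.25 == 0.828125 * 2**4
assert math.ldexp(m, e) == 13.25

# The decimal analogue from the text: 1.3452 == 13452 * 10**-4
assert abs(13452 * 10**-4 - 1.3452) < 1e-12
```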

**History of Floating Point Representation**

In 1914, Leonardo Torres y Quevedo suggested a form of floating-point arithmetic in the course of discussing his design for a special-purpose electromechanical calculator. In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer, which used a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand, and a sign bit. The more reliable relay-based Z3, completed in 1941, had representations for both positive and negative infinities; in particular, it implemented defined operations with infinity, such as 1/∞ = 0, and it stopped on undefined operations, such as 0 × ∞.

Konrad Zuse, creator of the Z3 computer, which used a 22-bit binary floating-point representation.

Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that included special representations, anticipating features of the IEEE Standard by four decades. In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic was preferable.

The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For decades afterwards, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" capability (see also Extensions for Scientific Computation (XSC)). It was not until the launch of the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature.

**What is a Significant Digit?**

The concept of significant figures (or significant digits) in **numerical analysis** was developed to formally designate the reliability of a numerical value.

The significant digits of a number are those that can be used with confidence.

The significant digits are the digits of a number excluding leading zeros; that is, leading zeros are not counted.

Example

7.345 = number of significant digits is 4

0.546 = number of significant digits is 3

0.005467 = here we first drop the leading zeros, leaving 5467, and then count the significant digits, which gives 4

4.53 × 10^{4} = number of significant digits is 3; the scientific notation does not change the count
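The counting rule above can be sketched in a few lines of Python. The helper name `significant_digits` is hypothetical, and this sketch applies only the leading-zero rule from the text (it does not handle trailing-zero ambiguity):

```python
def significant_digits(s: str) -> int:
    """Count significant digits of a decimal numeral given as a string.
    Rule used here (from the text): leading zeros are never significant;
    all remaining digits are counted."""
    digits = s.replace("-", "").replace(".", "").lstrip("0")
    return len(digits)

print(significant_digits("7.345"))     # 4
print(significant_digits("0.546"))     # 3
print(significant_digits("0.005467"))  # 4
print(significant_digits("4.53"))      # 3, mantissa of 4.53 x 10^4
```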

**Need for Significant Digits**

Numerical methods yield approximate results, so it becomes critical to specify how confident we are in an approximation.

Significant digits give us a way to express that confidence.
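One common way to state such confidence is to round a result to a given number of significant digits. A minimal sketch (the helper name `round_sig` is hypothetical; it uses `math.log10` to locate the leading digit, then rounds at the corresponding decimal place):

```python
import math

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))  # position of leading digit
    return round(x, n - 1 - exponent)          # round at that place

print(round_sig(0.005467, 2))  # 0.0055
print(round_sig(13452.0, 3))   # 13500.0
```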
