**What is Floating Point Representation?**

- Floating-point representation is a formal way of representing real numbers in binary, widely used in numerical analysis.
- It can handle values spanning a very wide range of magnitudes.
- In floating-point representation, a number is divided into three parts.
- A sign bit is on the left, and a fixed-point fraction known as the mantissa is in the center.
- The exponent occupies the final part.
- The sign bit can be either 0 or 1, where 0 denotes a positive value and 1 denotes a negative value.
- Floating-point representation is one of the key ideas in computing today: it is a form of scientific notation used throughout modern technology to represent numbers.

General form: mantissa × base^{Exponent} (in binary, mantissa × 2^{Exponent})

Example (base 10): 1.3452 = 13452 × 10^{-4}
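The three parts described above can be inspected directly. Below is a minimal sketch in Python (the `decompose` helper is illustrative, not a standard library function) that unpacks an IEEE 754 double into its sign, exponent, and mantissa fields:

```python
import struct

def decompose(x: float):
    """Split a Python float (an IEEE 754 double) into its three parts:
    the sign bit, the biased exponent, and the mantissa (fraction) bits."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64-bit pattern
    sign = bits >> 63                    # 1 bit: 0 = positive, 1 = negative
    exponent = (bits >> 52) & 0x7FF      # 11 bits, stored with a bias of 1023
    mantissa = bits & ((1 << 52) - 1)    # 52 fraction bits
    return sign, exponent, mantissa

# -6.5 = -1.101_binary × 2^2, so the sign bit is 1 and the
# unbiased exponent is 2 (stored as 2 + 1023 = 1025).
sign, exp, man = decompose(-6.5)
```

Note that the stored exponent is biased: the true exponent is recovered by subtracting 1023, which lets both large and tiny magnitudes be encoded with non-negative exponent fields.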

**History of Floating Point Representation**

In 1914, Leonardo Torres y Quevedo suggested a form of floating point while discussing his design for a special-purpose electromechanical calculator. In 1938, Konrad Zuse of Berlin completed the Z1, the first binary programmable mechanical computer, which used a 24-bit binary floating-point representation with a 7-bit signed exponent, a 17-bit significand, and a sign bit. The more reliable relay-based Z3, completed in 1941, had representations for both positive and negative infinity; in particular, it implemented defined operations with infinity, such as 1/∞ = 0, and stopped on undefined operations, such as 0 × ∞.

Konrad Zuse, creator of the Z3 computer, which used a 22-bit binary floating-point representation

Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic, anticipating features of the IEEE Standard by four decades. In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic was preferable.

The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For decades afterwards, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers" (see also Extensions for Scientific Computation (XSC)). It was not until the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature.

**What is a Significant Digit?**

The concept of significant figures (or digits) in **numerical analysis** was developed to formally designate the reliability of a numerical value.

The significant digits of a number are those that can be used with confidence.

The number of digits in a number, excluding leading zeros, is the number of significant digits; leading zeros are not counted.

Example

7.345 = Number of significant digits is 4

0.546 = Number of significant digits is 3

0.005467 = In this case, we first remove the leading zeros, leaving the digits 5467, so the number of significant digits is 4

4.53 × 10^{4} = Number of significant digits is 3; scientific notation does not change the count.
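The counting rule in the examples above can be sketched in a few lines of Python. This is a simplified illustration (the `significant_digits` helper is hypothetical): it strips the sign, the decimal point, and the leading zeros, matching the rule that leading zeros are not significant. It deliberately ignores the ambiguity of trailing zeros in whole numbers, which this section does not cover:

```python
def significant_digits(s: str) -> int:
    """Count significant digits of a decimal numeral given as a string.
    Leading zeros are stripped and therefore not counted."""
    digits = s.lstrip("+-").replace(".", "")  # drop sign and decimal point
    return len(digits.lstrip("0"))            # drop leading zeros, count the rest

significant_digits("7.345")     # 4
significant_digits("0.546")     # 3
significant_digits("0.005467")  # 4
```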

**Need for Significant Digits**

Numerical methods yield approximate results, so it becomes crucial to specify how confident we are in an approximation.

To express that confidence, we state results to a given number of significant digits.
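One common way to put this into practice is to round an approximate result to the number of significant digits we trust. A minimal sketch (the `round_sig` helper is an assumption, not a standard function) using the position of the most significant digit:

```python
import math

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits by locating the most
    significant digit with a base-10 logarithm."""
    if x == 0:
        return 0.0
    magnitude = math.floor(math.log10(abs(x)))  # position of the leading digit
    return round(x, n - 1 - magnitude)

round_sig(3.14159265, 4)  # 3.142
round_sig(0.005467, 2)    # 0.0055
```

This makes the reported precision explicit: writing 0.0055 rather than 0.005467 signals that only two digits of the approximation are trusted.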
