If you've recently taken a science class in college, high school, or maybe even middle school, you've definitely had to suffer through "Significant Figures", or "Sig Figs". The idea behind them is reasonable enough: the more precise your measurements, the more precise the results you can derive from them, and sig figs are intended to enforce this concept. However, in a modern college or high school curriculum, they tend to be emphasized far beyond what is reasonable.
An example of its absurdity: imagine you have made four measurements of something: 2.53, 2.51, 2.49, 2.55. Add them together and you get 10.08. Divide by 4 to get the average and you get 2.520. Since adding them gave you an extra digit (10.08 has four digits, even though each of the starting numbers had only three), and you're dividing by the exact counting number 4, you keep that extra digit of precision. Now pick a different set of numbers, say 2.48, 2.52, 2.44, 2.47: they add up to 9.91. Divide by 4 and you get 2.48 (not 2.4775, the actual value), because you have to throw away every digit you didn't have when adding them together. So, supposedly, according to sig figs, you fundamentally know more about the first set of numbers than the second, simply because they happened to overflow a more or less arbitrary base-10 boundary between digits.
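To make that asymmetry concrete, here's a minimal sketch (using Python's decimal module so the arithmetic is exact, not the floating point your calculator or spreadsheet would use) that reproduces the two averages and the sig-fig rounding described above:

```python
from decimal import Decimal

# The two sets of measurements from the example above.
set_a = [Decimal("2.53"), Decimal("2.51"), Decimal("2.49"), Decimal("2.55")]
set_b = [Decimal("2.48"), Decimal("2.52"), Decimal("2.44"), Decimal("2.47")]

sum_a = sum(set_a)   # Decimal('10.08') -- 4 significant figures
sum_b = sum(set_b)   # Decimal('9.91')  -- only 3 significant figures

mean_a = sum_a / 4   # Decimal('2.52')   (exactly)
mean_b = sum_b / 4   # Decimal('2.4775') (exactly)

# Strict sig-fig rules say the sum's digit count caps the reported precision,
# so set A gets an extra decimal place purely because its sum crossed 10.
print(mean_a.quantize(Decimal("0.001")))   # 2.520
print(mean_b.quantize(Decimal("0.01")))    # 2.48  (the exact 2.4775 is discarded)
```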
Any reasonable person can see that the above example isn't how science should be done, and it certainly shouldn't be one of the main focuses of a high school or college level physics course, or one of the main factors deciding a student's grade. But it is, and far too many physics and chemistry teachers treat it as if it were a fundamental concept of the universe, rather than a shortcut or rule of thumb for estimating how much precision to present your data with.
The concept behind sig figs is great: they're a nice way to gauge how many decimal digits to use when publishing data, and to pass along how precise your measurements are. However, this isn't how they're taught in America's schools (I don't have data on whether the trend is the same internationally), and that needs to change!
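If all you want from sig figs is that presentation rule of thumb, a tiny rounding helper is about all it takes. Here's a minimal sketch (the helper name round_sig is mine, not anything from a standard library): round your final result to roughly the number of significant figures your instruments actually justify, and move on.

```python
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant figures (for presentation only)."""
    if x == 0:
        return 0.0
    return round(x, sig - int(floor(log10(abs(x)))) - 1)

# e.g. instruments good to about 3 sig figs -> report the average to 3 sig figs
print(round_sig(2.4775, 3))     # 2.48
print(round_sig(0.0031415, 3))  # 0.00314
```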