An example of its absurdity: imagine you have made four measurements of a thing: 2.53, 2.51, 2.49, 2.55. Add them together and you get 10.08. Divide by 4 to get the average: 2.520. Since adding them gave you an extra digit (10.08 has four digits, even though the numbers we started with only had three), and you're dividing by the exact counting number 4, you keep the extra digit of precision. Now pick some totally different numbers, say 2.48, 2.52, 2.44, 2.47, and they add up to 9.91. Divide by 4 and you get 2.48 (not 2.4775, the actual number), because you have to throw away all the digits you didn't have when adding them together. So, supposedly, according to sig figs, you fundamentally know more about the first set of numbers than the second, because they happened to overflow a pretty much arbitrary base-10 boundary between digits.
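To make the bookkeeping concrete, here's a small sketch of the rule as described above, using Python's `decimal` module so the arithmetic is exact. The `sig_figs` and `sig_fig_average` helpers are my own names, and the sketch assumes the textbook convention that dividing by an exact count costs no significant figures:

```python
from decimal import Decimal

def sig_figs(d):
    """Number of significant figures in a Decimal (its stored digits)."""
    return len(d.as_tuple().digits)

def sig_fig_average(readings):
    """Average a list of Decimal readings, keeping only the sig figs of
    the sum (the exact divisor 4 costs nothing under the rule)."""
    total = sum(readings)
    n = sig_figs(total)
    mean = total / len(readings)
    # Quantize the mean to n significant figures.
    return mean.quantize(Decimal(1).scaleb(mean.adjusted() - n + 1))

set1 = [Decimal(s) for s in ("2.53", "2.51", "2.49", "2.55")]
set2 = [Decimal(s) for s in ("2.48", "2.52", "2.44", "2.47")]

print(sig_fig_average(set1))  # 2.520  (sum 10.08 has four sig figs)
print(sig_fig_average(set2))  # 2.48   (sum 9.91 has only three)
```

Same instrument, same number of readings, and the rule hands one data set an extra digit purely because its sum crossed 10.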
Any reasonable person can see that the above example isn't how science should be done, and certainly shouldn't be one of the main focuses of a high-school or college-level physics course, or one of the main factors deciding a student's grade. But it is, and far too many physics and chemistry teachers treat it as if it were a fundamental concept of the universe, rather than a shortcut or rule of thumb for estimating how much precision you should present your data with.
The concept behind sig figs is great: they're a nice way to gauge how many decimal digits to use when publishing data, and a compact way to pass along how precise your measurements are. However, this isn't how they're taught in America's schools (I don't have data on whether the trend is the same internationally), and that needs to change!