It’s a computer thing. In base 10 the numbers add up to exactly 1, but in base 2 the number 0.1 can’t be represented perfectly. The value actually stored is a tiny bit off (in IEEE 754 doubles it’s very slightly larger than 0.1), and the rounding that happens on each addition leaves the ten-term sum a tiny bit smaller than 1.
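Here’s a quick sketch in Python to see it for yourself (any language using IEEE 754 floats behaves the same way):

```python
total = 0.0
for _ in range(10):
    total += 0.1  # each addition rounds to the nearest representable double

print(total)         # 0.9999999999999999
print(total == 1.0)  # False
print(0.1 + 0.2)     # 0.30000000000000004, same root cause
```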
Anyway, you can read more about it by looking up floating-point numbers on Wikipedia. You don’t see it in some software because numbers are displayed with fewer decimal digits, so the result rounds to 1 on screen. When dealing with decimals in binary floating point there’s always a small amount of approximation, so you can’t expect numbers to be exact values.
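For example, Python’s default printing shows enough digits to expose the error, while formatting with fewer digits hides it, which is presumably what the software you mentioned is doing:

```python
total = sum([0.1] * 10)
print(repr(total))      # 0.9999999999999999 (full precision)
print(f"{total:.2f}")   # 1.00 (rounded only for display)
print(round(total, 2) == 1.0)  # True once actually rounded
```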
Rounding the number is one solution, as you have done. Another is to count in whole tenths: add 1 each time and divide by 10 only when you need the actual value, since integer addition is exact.
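A minimal sketch of that idea (the variable names are just for illustration):

```python
# Count tenths as whole integers; integer addition never loses precision.
tenths = 0
for _ in range(10):
    tenths += 1  # one tenth per step

value = tenths / 10   # convert only at the point of use
print(value == 1.0)   # True: 10 / 10 is exactly 1.0
```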
A third, albeit needlessly elaborate, solution would be to do all the math in base 10 and store each decimal as two integers.
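One way to read “two integers” is a coefficient plus a base-10 exponent. Here’s a minimal sketch of that (the add function and tuple layout are just illustrative; real libraries like Python’s decimal module implement this idea properly):

```python
# Store a decimal as (coefficient, exponent), meaning coefficient * 10**exponent.
# All arithmetic stays in integers, so it is exact.
def add(a, b):
    (ca, ea), (cb, eb) = a, b
    e = min(ea, eb)  # align both numbers to the smaller exponent
    return (ca * 10 ** (ea - e) + cb * 10 ** (eb - e), e)

tenth = (1, -1)   # 0.1 == 1 * 10**-1
total = (0, 0)
for _ in range(10):
    total = add(total, tenth)

print(total)  # (10, -1), i.e. 10 * 10**-1 == 1 exactly
```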