Ok, I tested a bunch of stuff by making a program which graphs the rate of change of timedelta (or the timedeltadelta, if you wish), and I think the problem has to do with the margin of error of timedelta. For example: at unlimited fps, the graph shows fewer and smaller changes in timedeltadelta over the same period of time. I guess this is because the higher fps allows for more calculations of the delta, so a small error is corrected faster and has less effect on the net error accumulation. At v-synced fps, the timedeltadelta shows more periodic drops, resulting in a smaller overall timedelta accumulation (hence the slower object speed).
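For reference, this is roughly the kind of logging I mean, just a sketch rather than my actual program (I print the values instead of graphing them, and the frame count and loop body are made up):

```python
import time

prev_time = time.perf_counter()
prev_delta = None
samples = []                      # (frame, delta, delta-of-delta)

for frame in range(300):          # log ~300 frames worth of data
    # ... your normal update/render for one frame would go here ...
    now = time.perf_counter()
    delta = now - prev_time       # this frame's timedelta
    prev_time = now
    if prev_delta is not None:
        # the "timedeltadelta": how much the delta changed since last frame
        samples.append((frame, delta, delta - prev_delta))
    prev_delta = delta

for frame, delta, dd in samples[:10]:
    print(f"frame {frame}: delta={delta*1000:.3f} ms, delta-of-delta={dd*1000:+.3f} ms")
```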
TL;DR: I believe this has to do with a higher fps being more accurate in the average timedelta calculation. When you set an object's speed by accumulation like you're doing, you're accumulating a lot of error, which is why the speed varies wildly across framerates. V-sync seems to accumulate a much higher error percentage in the same amount of time, biased towards values lower than expected (more of the errors are drops).
I'm not even sure whether floating-point calculation errors are to blame for some of this error, though; it's hard to tell.
Edit: I'm wrong. While what I said is true, there isn't enough error to account for that amount of speed difference. The reason it's moving faster is that you're not multiplying the set position action by timedelta too. You have to multiply BOTH the acceleration increase and the set position action by timedelta; otherwise you're increasing the velocity at a timedelta-scaled rate, but moving the object based on the framerate. It's REALLY counterintuitive at first glance.
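Here's a minimal sketch of what I mean, not your actual code (the acceleration value and framerates are made up, just to show the effect):

```python
ACCEL = 10.0  # arbitrary units/s^2, purely illustrative

def run(fps, scale_position_by_dt):
    """Simulate ~1 second of frames at the given fps."""
    dt = 1.0 / fps
    vel = 0.0
    pos = 0.0
    for _ in range(int(fps)):
        vel += ACCEL * dt                           # acceleration IS scaled by timedelta...
        step = vel * dt if scale_position_by_dt else vel
        pos += step                                 # ...so the position step must be too
    return pos

for fps in (60, 240):
    print(f"{fps} fps  broken: {run(fps, False):8.3f}   fixed: {run(fps, True):.3f}")
```

The "broken" column (position step not multiplied by timedelta) lands on a completely different distance depending on fps, while the "fixed" column comes out roughly the same at 60 and 240 fps, which matches the speed difference you're seeing.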