
> A common fix for this issue is to use two sets of coordinates: you can for instance represent your world as a grid with fixed-size cells, then you translate all your models into the local cell before computing anything, this way you always have good enough precision since you effectively limit the amplitude of your floats.

Isn't this in practice creating a double-precision float by adding a second "significant figure" in a "base float" system?



You can think of it that way, sure, but since you want your computations to remain as fast as possible you don't want to use the full "double-precision" float everywhere, and that's where it gets tricky. You essentially want to translate everything into your local frame of reference once and for all, then do everything with local coordinates if possible.
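A minimal sketch of that idea in C (the struct layout and the 512 m cell size are illustrative assumptions, not from any particular engine):

  #include <math.h>

  #define CELL_SIZE 512.0f  /* illustrative cell width, in metres */

  /* Two-level coordinate: a coarse integer cell index plus a fine
     float offset inside that cell. The offset never grows beyond
     CELL_SIZE, so its precision stays uniformly high. */
  typedef struct {
      long long cx, cy, cz;  /* which cell */
      float x, y, z;         /* position within the cell */
  } WorldPos;

  /* Re-normalise after movement so each offset stays in [0, CELL_SIZE). */
  static void rebase(WorldPos *p) {
      long long dx = (long long)floorf(p->x / CELL_SIZE);
      long long dy = (long long)floorf(p->y / CELL_SIZE);
      long long dz = (long long)floorf(p->z / CELL_SIZE);
      p->cx += dx;  p->x -= (float)dx * CELL_SIZE;
      p->cy += dy;  p->y -= (float)dy * CELL_SIZE;
      p->cz += dz;  p->z -= (float)dz * CELL_SIZE;
  }

Everything near the player (rendering, physics) then runs on the small local offsets; the big integer part only matters when crossing a cell boundary.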

It's somewhat reminiscent of segmented memory.


>> a grid with fixed-size cells

> creating a double precision float ... ?

This added top level is uniformly distributed, like fixed point - so yes, it's a more precise representation, but no, it's not a direct analogue of a bigger float, whose precision varies with magnitude.
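Concretely, float32 spacing (one ulp) grows with the coordinate's magnitude, while a cell-local offset has a fixed worst case. A quick demonstration, again assuming the illustrative 512 m cells:

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      /* One ulp of a float grows with magnitude... */
      printf("%g\n", nextafterf(1.0f, 2.0f) - 1.0f);  /* ~1.2e-7 m near 1 m    */
      printf("%g\n", nextafterf(1e4f, 2e4f) - 1e4f);  /* ~0.001 m near 10 km   */
      printf("%g\n", nextafterf(1e7f, 2e7f) - 1e7f);  /* 1 m near 10,000 km(!) */

      /* ...but a local offset is capped by the cell size, so its
         worst-case spacing is the same everywhere in the world. */
      printf("%g\n", nextafterf(511.0f, 512.0f) - 511.0f);  /* ~3e-5 m */
      return 0;
  }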


Kind of, but you get significantly better precision than if you just used a normal double, because doubles only push the issue further out. Especially in games, you're likely already breaking the world into cells for streaming chunks into and out of memory anyway.


Isn't this also just pushing the problem out? Once you get far enough out, you wouldn't be able to transition between cells, because the rounding error of the cell boundary would be bigger than the cell size.

You can't represent infinite precision with finite bits, so you must run into an issue eventually.
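(For a sense of scale, assuming integer cell indices: 64-bit indices with 1 m cells span 2^64 m, about 1.8 * 10^19 m or roughly 2,000 light-years, before overflowing.)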


Of course, but remember that the usable range grows exponentially with the number of bits: every bit you add doubles your usable size.

The visible universe (according to Wikipedia) is 8.8 * 10^26 meters across, or about 5.4 * 10^61 Planck lengths. That gives us a log2 of 205.1.

In other words, if my maths is correct, if you want to represent the entire visible universe down to the scale of the Planck length you need 206 bits of resolution, or four 64-bit integers with a lot of room to spare.

And if you don't actually aim to simulate subatomic particles, you can get down to millimeter resolution with "only" 100 bits.
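A quick sanity check of that arithmetic (the constants are just the figures quoted above):

  #include <math.h>
  #include <stdio.h>

  int main(void) {
      const double universe = 8.8e26;    /* diameter in metres */
      const double planck   = 1.616e-35; /* Planck length in metres */
      const double mm       = 1e-3;

      printf("bits for Planck resolution: %.0f\n",
             ceil(log2(universe / planck)));  /* prints 206 */
      printf("bits for mm resolution:     %.0f\n",
             ceil(log2(universe / mm)));      /* prints 100 */
      return 0;
  }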

And that's not even taking into account that, our universe being so empty, you can probably fudge super large distances a bit (using a lower resolution at scales where the universe is mostly empty). Nobody is going to notice if Andromeda is a few parsecs closer than it should be.



