What I remember from college (more than 10 years have passed) from using MicroStation and SolidEdge is that even professional design packages accumulate some error. You could actually build the same piece in several different ways, and some ways were more precise than others. My understanding is that all operations are carried out numerically, and the only way to prevent the accumulation of errors would be to perform them symbolically and only solve numerically at the end, but I suppose that would be extremely inefficient. I wonder if someone knows of any book or software library that follows the latter approach, since I find the subject quite interesting.
So long as you are using floating point, most bets are off. What you can do with FP is use exact predicates, which are always correct. That might, for example, tell you whether two lines intersect, or whether a point is inside or outside a polygon, without precision issues.
The problem with that is that you will soon want to construct geometric entities. For example, if you intersect lines whose endpoints have integer coordinates, you can't represent the intersection in integers (you would need rationals in that case). The same applies to floating point: two line segments with floating-point coordinates will generally not have their intersection representable in floating point.
So for proper robustness in all situations you need "exact predicates and exact constructions". A middle ground is "exact predicates and inexact constructions". These modes are available as different kernels in, e.g., CGAL. http://doc.cgal.org/latest/Kernel_23/group__kernel__predef.h...
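To make the "exact constructions" point concrete, here is a minimal sketch in Python (rather than CGAL) using the standard-library fractions module; the function names and the specific points are just illustrative. Two segments with integer endpoints intersect at 2/3, which no binary float can represent exactly:

```python
from fractions import Fraction

def cross(ax, ay, bx, by):
    # 2D cross product of vectors (ax, ay) and (bx, by)
    return ax * by - ay * bx

def line_intersection(a, b, c, d):
    """Exact intersection of lines AB and CD using rational arithmetic.
    Returns None if the lines are parallel."""
    r = (b[0] - a[0], b[1] - a[1])   # direction of AB
    s = (d[0] - c[0], d[1] - c[1])   # direction of CD
    denom = cross(r[0], r[1], s[0], s[1])
    if denom == 0:
        return None
    t = Fraction(cross(c[0] - a[0], c[1] - a[1], s[0], s[1]), denom)
    return (a[0] + t * r[0], a[1] + t * r[1])

# Integer endpoints, yet the intersection point is (2/3, 2/3):
p = line_intersection((0, 0), (1, 1), (0, 1), (2, 0))
print(p)                    # (Fraction(2, 3), Fraction(2, 3))
print(float(p[0]) == p[0])  # False: 2/3 has no exact double representation
```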
Even something seemingly simple such as saying "is point C to the left or right of the line A->B" is very hard in floating point. It's possible, without too much sweat, to use a kind of series expansion where the length of the expansion depends on the precision needed (you only go as far as you need to answer the predicate, so often only 2-3 terms). https://www.cs.cmu.edu/~quake/robust.html
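The staged idea is roughly: evaluate the determinant in doubles, accept its sign if it is clearly larger than a rounding-error bound, and fall back to exact arithmetic only in the rare near-degenerate cases. A crude sketch of that filter (the error bound here is a deliberately loose placeholder, not Shewchuk's carefully derived constants):

```python
import sys
from fractions import Fraction

EPS = sys.float_info.epsilon  # 2**-52 for IEEE doubles

def orient2d(a, b, c):
    """Sign of the signed area of triangle abc: >0 if C is left of A->B,
    <0 if right, 0 if collinear. Fast float path, exact rational fallback."""
    t1 = (b[0] - a[0]) * (c[1] - a[1])
    t2 = (b[1] - a[1]) * (c[0] - a[0])
    det = t1 - t2
    # Loose, conservative bound on the rounding error of det.
    errbound = 4.0 * EPS * (abs(t1) + abs(t2))
    if det > errbound or -det > errbound:
        return (det > 0) - (det < 0)
    # Too close to call in doubles: redo the computation exactly.
    ax, ay = Fraction(a[0]), Fraction(a[1])
    bx, by = Fraction(b[0]), Fraction(b[1])
    cx, cy = Fraction(c[0]), Fraction(c[1])
    exact = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (exact > 0) - (exact < 0)
```

Most calls take the cheap branch; the exact branch only fires when the answer is genuinely on a knife edge.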
>Even something seemingly simple such as saying "is point C to the left or right of the line A->B" is very hard in floating point.
Is it? I.e., construct a normal and take a dot product. That'll work the vast majority of the time. Near the line you'll start running into the FP precision limits. With 64-bit (or even 32-bit) floats, that sort of distance is likely way smaller than anything that represents a practical distance, so just call it and say you are on the line. Of late I've been messing about with problems of this nature and I can't say it's been all that difficult. https://github.com/deadsy/sdfx
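A sketch of the approach being described, as I read it (the function name and the tolerance value are mine): dot C - A against a normal to A->B and snap anything within a tolerance onto the line. The awkward part is that a fixed absolute tolerance is not scale-invariant, which is where the trouble in the sibling comments starts.

```python
def side_of_line(a, b, c, tol=1e-9):
    """+1 if C is left of A->B, -1 if right, 0 if within tol of the line.
    Note: the test quantity scales with |AB|, so a fixed absolute tol
    means different effective distances for long and short edges."""
    # Normal to A->B (not normalised), dotted with C - A.
    nx, ny = -(b[1] - a[1]), (b[0] - a[0])
    d = nx * (c[0] - a[0]) + ny * (c[1] - a[1])
    if abs(d) <= tol:
        return 0
    return 1 if d > 0 else -1
```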
It comes back to bite you in a hurry. You don't need arbitrary precision to represent arbitrarily small things; you need it so that algorithms don't break down. Simple things like "is p inside the polygon" can change from true to false by moving both the point and the polygon 1 unit to the right. See under the heading "geometric predicates can FAIL" here: http://groups.csail.mit.edu/graphics/classes/6.838/S98/meeti...
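The translation effect is easy to reproduce with the underlying left/right predicate: a perturbation that is visible near the origin gets absorbed by rounding once the same points are written down a billion units to the right. (The specific coordinates below are a made-up illustration, not the example from the linked page.)

```python
def orient(a, b, c):
    # Naive double-precision orientation: >0 left, <0 right, 0 collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

a, b, c = (0.0, 0.0), (1.0, 1.0), (2.0, 2.0 + 1e-9)
print(orient(a, b, c))     # ~1e-9: C is (just) left of the line

T = 1e9  # translate everything a billion units
a2 = (a[0] + T, a[1] + T)
b2 = (b[0] + T, b[1] + T)
c2 = (c[0] + T, c[1] + T)
print(orient(a2, b2, c2))  # 0.0: the 1e-9 offset was rounded away when the
                           # translated coordinate was stored, so the same
                           # configuration now reads as exactly collinear
```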
You are right that this happens only in edge cases. But you don't want that in a CAD program if you have anything doing e.g. point-in-polygon, triangulation or other such algorithms.
I know from experience that you really should bake this into the very foundation of a CAD package (if you see 2003 me, let him know).
Yes. In the U.K. many architects persist in drawing their buildings at OS grid coordinates, relative to the origin of the file, in mm, because it makes importing surveys and OS maps easier. So a building in Scotland would have a translation of something like 10000000000, 700000000000 mm. It's one of the reasons BIM is mysteriously unreliable in practice: points that should be coincident aren't, because of accumulated errors.
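For a sense of scale: at a coordinate of 700000000000 mm, the gap between adjacent representable values (one ULP) is tiny in double precision but enormous in single precision, which is what a lot of graphics and exchange pipelines quietly drop to. A quick check, assuming numpy is available:

```python
import numpy as np

coord_mm = 7e11  # a northing-sized offset in millimetres, as in the comment above

# Distance to the next representable value at this magnitude:
print(np.spacing(np.float64(coord_mm)))  # ~1.2e-4 mm, i.e. about a tenth of a micron
print(np.spacing(np.float32(coord_mm)))  # 65536.0 mm, i.e. about 65 metres
```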
> My understanding is that all operations are carried out numerically, and the only way to prevent the accumulation of errors would be to perform them symbolically and only solve numerically at the end, but I suppose that would be extremely inefficient.
You can solve the problem with extended integers, but not as easily as you would expect. In 2D, it takes something like 2n + log n bits of computation to calculate things accurately for n bits of coordinates. So you need more than 70 bits just to handle 32-bit coordinates accurately. And we almost always need 64 bits nowadays.
And this requires "binning", which breaks things like measurements of "parallel". It's not an easy problem, and it requires lots of test cases.
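You can see where the extra bits go even in the simplest case: for 32-bit coordinates, the 2D orientation determinant alone already needs more than 64 bits. A back-of-the-envelope check (Python ints are arbitrary precision, so nothing overflows here, but the bit count shows what fixed-width integers would need):

```python
M = 2**32 - 1  # largest 32-bit unsigned coordinate

# Orientation determinant for the triangle (0,0), (M,0), (0,M):
a, b, c = (0, 0), (M, 0), (0, M)
det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

print(det == M * M)       # True: the worst case is the full square of the range
print(det.bit_length())   # 64 bits of magnitude, plus a sign bit
print(det <= 2**63 - 1)   # False: it no longer fits in a signed 64-bit integer
```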
So basically the most accurate approach that is anywhere near practical is using arbitrary precision. What I was wondering is whether there is any implementation capable of detecting, for example, that the three angle bisectors of a triangle meet exactly at the same point, the incenter. My guess is that maybe there is one for pedagogical purposes, but not for anything more complex.
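For what it's worth, that particular check can be done exactly when the side lengths happen to be rational, even without a symbolic system; for a general triangle the side lengths involve square roots, so you would need algebraic numbers (or something like sympy) rather than plain rationals. A sketch with a 3-4-5 triangle, where everything stays rational:

```python
from fractions import Fraction as F
from math import isqrt

# 3-4-5 right triangle: the side lengths are integers, so the angle
# bisector directions (sums of unit vectors) stay rational.
A, B, C = (F(0), F(0)), (F(4), F(0)), (F(0), F(3))

def length(p, q):
    # Exact here only because these squared lengths are perfect squares.
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return F(isqrt(int(d2)))

a, b, c = length(B, C), length(C, A), length(A, B)   # 5, 3, 4

# Incenter as the side-length-weighted average of the vertices.
s = a + b + c
incenter = ((a * A[0] + b * B[0] + c * C[0]) / s,
            (a * A[1] + b * B[1] + c * C[1]) / s)     # (1, 1)

def on_bisector(v, p, q, point):
    """Does `point` lie on the angle bisector at vertex v of triangle v-p-q?"""
    lp, lq = length(v, p), length(v, q)
    # Bisector direction: sum of the unit vectors towards the other two vertices.
    dx = (p[0] - v[0]) / lp + (q[0] - v[0]) / lq
    dy = (p[1] - v[1]) / lp + (q[1] - v[1]) / lq
    # Exact collinearity test: cross product must be exactly zero.
    return dx * (point[1] - v[1]) - dy * (point[0] - v[0]) == 0

print(all([on_bisector(A, B, C, incenter),
           on_bisector(B, C, A, incenter),
           on_bisector(C, A, B, incenter)]))  # True, with no rounding anywhere
```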
The problem is that once you start dealing with the actual coordinates of intersections, you have to start picking which properties you wish to keep and which you wish to abandon. Things like parallelism, inside vs. outside, winding, handedness, etc. often get very murky once you start dealing with finite precision.
If I understand it correctly: if you are using it academically or in free software, you can use it for free. If you are using it commercially, you can pay a reasonable license fee and use it.
Sounds like a better solution than most; what am I missing?
Honestly, the worst part of CGAL licensing is figuring out which parts actually require the commercial license, since most of the simpler structures in CGAL can be used freely in a commercial product.
I haven't looked for years, but as I recall very little was LGPL other than the core, and the rest was licensed by module. Even if you just licensed the whole thing it wasn't particularly expensive.
Good questions. I'm also interested, and I'm planning to test it out first thing tomorrow. But from the FAQs, this project seems to have been around since 1997 (not in C++ back then, though), so I'm guessing it should be pretty stable.
Some things are not parallel enough. GPUs don't feature thousands of independent cores; instead they have groups of cores executing the same instructions on different data, in lockstep. Some algorithms, e.g. shading triangles, ray tracing or training neural networks, work great on GPUs. Others are very tricky to port to that model.
Other things are not suitable for GPUs performance-wise. For example, if you have an octree data structure and you query it a lot, a CPU will perform better, because of the cache hierarchy (on a CPU, the top level[s] of the tree will be in L1, the levels below in L2, etc.; GPUs don't have that) and because of branch prediction.
Finally, not everything is computation-bound. If your CPU code is fast enough to saturate IO bandwidth, be it HDD or network, you're fine unless you have a million servers. If you do have a million of them, GPUs might still be worth it because they might be more power efficient, slightly decreasing your huge electricity bills.
These tutorials read like a computational geometry textbook. I have gotten stuck even on simple algorithms like the intersection of two lines. At least in the beginning it may be hard to avoid bugs.
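Since the line-line intersection case comes up, here is a small sketch of the version that usually trips people up the first time (the function name, parameter names and eps value are arbitrary): write both lines parametrically and check the denominator before dividing, because parallel and collinear inputs are exactly where a naive version blows up.

```python
def segment_intersection(a, b, c, d, eps=1e-12):
    """Intersection point of segments AB and CD, or None.
    eps guards the parallel case; what counts as 'parallel' is a judgement call."""
    rx, ry = b[0] - a[0], b[1] - a[1]
    sx, sy = d[0] - c[0], d[1] - c[1]
    denom = rx * sy - ry * sx
    if abs(denom) < eps:
        return None  # parallel (or nearly so), possibly collinear/overlapping
    t = ((c[0] - a[0]) * sy - (c[1] - a[1]) * sx) / denom
    u = ((c[0] - a[0]) * ry - (c[1] - a[1]) * rx) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (a[0] + t * rx, a[1] + t * ry)
    return None  # the infinite lines cross, but outside the segments

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
print(segment_intersection((0, 0), (1, 1), (2, 2), (3, 3)))  # None (collinear)
```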
To clarify, this is a tutorial on how to use a C++ computational geometry library named Wykobi. It is not an introduction to computational geometry or a tutorial on how to implement computational geometry algorithms.
For an introduction to computational geometry in general, I've enjoyed these lecture notes from Leonidas Guibas [1]. There are some more specific class notes at [2].
The problem with most computational geometry libraries is that they tend to be 1) riddled with bugs, 2) slow and 3) covered by stupid licenses.
The license is nice. So, how is the accuracy and speed?