@unnumberedsubsec Introduction to problem

You can see that the approximation algorithm is central to the implementation of XaoS. If the guess is incorrect, the image will look strange, boundaries will not be smooth, and the zoom will flicker. On the other hand, if it adds more new rows or columns than required, zooming becomes much slower. Also, in the case of doubling (using an old row or column more than once) the resolution drops. It is important to keep the increasing imaginary and real components in the correct order: if a row or column of complex coordinates follows one with a higher coordinate value, a better approximation can be obtained by swapping their values. The algorithm also needs to be relatively fast. It is only run for xsize + ysize values, but if its speed is O(n^2) it can be slower than recalculating the whole image. Speeds of O(n) or O(n * log(n)) are acceptable.

@unnumberedsubsec Some simple algorithms to solve it

Initially, a very simple algorithm was used: find the old row/column nearest to the row/column that needs to be regenerated. If the difference between them is less than one step (step = (end - beginning) / resolution), use it; otherwise, calculate a new one. Finding the nearest row/column is very simple, since it is always greater than or equal to the one being generated. Surprisingly, this simple algorithm has almost all the problems described above. Doubling was fixed by lowering the limit to step / 2, but this caused a considerable slowdown, so the limit was returned to step. Instead, the algorithm was changed to consider only row/column pairs greater than the previous frame's row/column pairs. This is the algorithm used in version 1.0. It still added too many new rows and columns and did not produce smooth boundaries. For version 1.1 a heuristic was added that preferred approximating rows/columns with lower values.
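The naive nearest-match pass might be sketched as follows. This is a hypothetical Python reconstruction for illustration, not the actual XaoS source; the function name and the two-candidate search around the insertion point are assumptions:

```python
import bisect

def naive_approximate(new_coords, old_coords, step):
    """For each new coordinate, return the index of the nearest old
    row/column if it lies within one step, else None (recalculate).
    Both coordinate lists are assumed sorted in increasing order."""
    result = []
    for x in new_coords:
        i = bisect.bisect_left(old_coords, x)
        # the nearest old coordinate is either just below or just above x
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(old_coords):
                if best is None or abs(old_coords[j] - x) < abs(old_coords[best] - x):
                    best = j
        if best is not None and abs(old_coords[best] - x) < step:
            result.append(best)   # reuse the old row/column
        else:
            result.append(None)   # must be recalculated
    return result
```

Note that nothing here prevents the same old index from being chosen for two different new coordinates, which is exactly the doubling problem described above.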
By preferring lower values, the heuristic avoided occupying rows/columns that might be needed for the next approximation. The result was a fourfold speedup. In versions 1.1 to 2.0, many improvements were made to the heuristic to increase its performance. The following example tries to show how complicated the problem is (O marks the old coordinates and X the values to be approximated):
X1 X2 X3 X4 X5 X6 X7 O1 O2 O3 O4 O5 O6 O7 O8
The normal algorithm will approximate X1 by O2 and X3 by O4, but nothing more. The algorithm with threshold step instead of step / 2 gives:
O2 to X1, O3 to X2, O4 to X3, O5 to X4, O6 to X5, O8 to X6
But this fails on X7. The second algorithm, which prefers lower values, will do the following:
O1 to X1, O3 to X2, O4 to X3, O5 to X4, O6 to X5, O7 to X6, O8 to X7
O1 to X1 is wrong, and many other situations can occur. But you can see that the normal algorithm would calculate four new rows/columns, while the heuristic saves all of these calculations.

@unnumberedsubsec Current algorithms used

In version 2.1 this heuristic was dropped after I discovered a surprisingly simple algorithm that solves all these problems. First I decided to define exactly what the best approximation is. This is done by assigning a price to every approximation and choosing the approximation with the lowest price. Prices are defined as follows: approximating row/column x by y costs dist(x, y)^2. This prefers two small approximation errors over a single larger error and describes my goal quite well. The cost of adding a new row/column determines when it is better to accept a bad approximation and when to add a new row/column. I use (4 * step) * (4 * step); that is, an approximation is acceptable when dist(x, y) < 4 * step, and otherwise adding a new row/column costs less. Now that the best approximation is defined, all that is required is a fast algorithm to compute it. Surprisingly, this is possible in linear time using a relatively simple dynamic programming algorithm. It uses the best approximations of length < n to construct one of length n: it starts by approximating one row/column, then two, three, and so on up to xsize/ysize rows/columns. The algorithm begins by calculating the prices of all possible new positions for old row/column 1. Because of the pricing there are at most 8 such positions (any others must cost more than adding a new row/column). Of course, it is possible that there are no usable positions at all.
To calculate the prices of approximations for row/column 2, the previous results can be reused: try new position n, calculate its price, and add the price of the best approximation for row/column 1 that uses a new position lower than n (this prohibits doubling and swapping). That is one of the at most 8 positions, or possibly adding a new row/column and not using old row/column 1 at all.
The same method is used for the rest of the rows/columns. At the end, the best price is found for the last row/column and the way it was calculated is traced back (for this, the saved "calculated using" values are needed). At this point the best approximation has been determined.
Filling the table takes 9 * n steps, and backtracking the best approximation takes n more. The only problem is that this algorithm is still a little slow (chiefly because of slow memory access on Intel architectures), but with some optimization it works well.
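Under the cost model above (dist^2 per approximation, (4 * step)^2 per new row/column), the dynamic programming pass might be sketched like this. For clarity this reconstruction fills a full O(N * M) table; the real algorithm is linear because each old row/column has at most 8 candidate positions worth considering. All names are assumptions:

```python
def best_approximation(new_pos, old_pos, step):
    """Minimum-cost monotonic assignment of old rows/columns to new
    positions.  Approximating new_pos[i] by old_pos[j] costs
    (new_pos[i] - old_pos[j])**2; computing a fresh row/column costs
    (4 * step)**2.  Returns (total_cost, plan), where plan[i] is the
    old index reused for new_pos[i], or None if it is computed fresh.
    Monotonicity of the assignment rules out doubling and swapping."""
    NEW = (4 * step) ** 2
    N, M = len(new_pos), len(old_pos)
    INF = float("inf")
    # dp[i][j]: best cost covering new_pos[:i] using only old_pos[:j]
    dp = [[INF] * (M + 1) for _ in range(N + 1)]
    for j in range(M + 1):
        dp[0][j] = 0.0                      # nothing to cover yet
    for i in range(1, N + 1):
        for j in range(M + 1):
            best = dp[i - 1][j] + NEW       # compute new_pos[i-1] fresh
            if j > 0:
                best = min(best, dp[i][j - 1])   # leave old j-1 unused
                best = min(best, dp[i - 1][j - 1]
                           + (new_pos[i - 1] - old_pos[j - 1]) ** 2)
            dp[i][j] = best
    # backtrack to recover how the best price was reached
    plan = [None] * N
    i, j = N, M
    while i > 0:
        if j > 0 and dp[i][j] == dp[i][j - 1]:
            j -= 1                          # old j-1 was not used
        elif j > 0 and dp[i][j] == dp[i - 1][j - 1] \
                + (new_pos[i - 1] - old_pos[j - 1]) ** 2:
            plan[i - 1] = j - 1             # reuse old row/column j-1
            i -= 1
            j -= 1
        else:
            i -= 1                          # fresh row/column, plan stays None
    return dp[N][M], plan
```

For example, with step = 0.25 the new cost is 1.0, so a new position 0.0 is approximated by an old coordinate at 0.1 (cost 0.01), while one at distance 2.0 from its nearest old coordinate is computed fresh.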
This algorithm is almost perfect, except that it occasionally adds new rows/columns in the wrong places: it does not prefer to add new rows/columns into holes. This does not seem to be a real problem in practice. The last optimization made was based on the fact that added rows/columns do not receive the exact real and imaginary components calculated by beginning + x * step, but instead lie at the average of their left and right neighbours. This makes the boundaries smooth and distributes the coordinates better. It also has the added benefit of making the input better for future approximations.
Another danger in the implementation of this algorithm is that adding new rows/columns at their ideal positions could produce misordered results, since some rows/columns may be displaced by more than the distance between them. To avoid this, an algorithm is used that always examines the start and end of each block of new rows/columns and linearly interpolates the values between them. Special care is needed for blocks that start at the beginning of the image or run over its end.
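The block interpolation described above might look like this. It is a sketch under the assumption that fresh rows/columns are marked None; runs that touch an image edge fall back to their nominal grid positions, one possible reading of the "special care" mentioned above:

```python
def interpolate_new(nominal, inherited):
    """nominal[i]: the grid coordinate beginning + i * step.
    inherited[i]: the coordinate kept from the old frame, or None for a
    freshly added row/column.  Interior runs of fresh coordinates are
    linearly interpolated between the inherited values bracketing the
    block, which keeps the sequence ordered and the boundaries smooth;
    runs touching an image edge keep their nominal values."""
    out = list(inherited)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is not None:
            i += 1
            continue
        start = i                       # beginning of a run of None
        while i < n and out[i] is None:
            i += 1                      # i now points past the run
        if start == 0 or i == n:        # block touches an image edge
            for k in range(start, i):
                out[k] = nominal[k]
        else:                           # interpolate between neighbours
            left, right = out[start - 1], out[i]
            h = (right - left) / (i - start + 1)
            for k in range(i - start):
                out[start + k] = left + (k + 1) * h
    return out
```

Interpolating from the block boundaries, rather than placing each new row/column independently, is what guarantees the results stay in increasing order.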
The implementation is much faster using custom fixed-point routines: first the values are rescaled so that 0 means the start of the image and 65536 the end. The calculation is then much cleaner: values below 0 or above 65536 are off screen, the calculation is independent of scale, and many things can be precalculated, such as tables for deriving the price from a distance. Dividing the main loops into many specialized parts and avoiding filling unnecessary parts of the table also helps. The current algorithm in XaoS is about 5 or 6 times faster than the first naive implementation.
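The rescaling step can be illustrated as follows (a simplified sketch; the actual XaoS code uses integer fixed-point arithmetic throughout, and the function name is an assumption):

```python
SCALE = 65536   # 0 = start of the visible image, 65536 = end

def to_fixed(coords, begin, end):
    """Rescale coordinates so the visible range [begin, end] maps to
    0..65536.  Anything below 0 or above 65536 is off screen, and
    distances become zoom-independent integers, so prices can be read
    from a small precalculated table indexed by fixed-point distance."""
    f = SCALE / (end - begin)
    return [int(round((c - begin) * f)) for c in coords]
```

After this transformation a distance-to-price table of fixed size works at every zoom level, which is what removes the per-frame scaling work from the inner loops.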
Copyright © 2003 by The Free Software Foundation. Updated Jun 2003.