xref: /freebsd/contrib/bc/manuals/algorithms.md (revision 4543ef516683042d46f3bd3bb8a4f3f746e00499)
# Algorithms

This `bc` uses the math algorithms below:

### Addition

This `bc` uses brute force addition, which is linear (`O(n)`) in the number of
digits.

### Subtraction

This `bc` uses brute force subtraction, which is linear (`O(n)`) in the number
of digits.

### Multiplication

This `bc` uses two algorithms: [Karatsuba][1] and brute force.

Karatsuba is used for "large" numbers. ("Large" numbers are defined as any
number with `BC_NUM_KARATSUBA_LEN` digits or larger. `BC_NUM_KARATSUBA_LEN` has
a sane default, but may be configured by the user.) Karatsuba, as implemented in
this `bc`, is superlinear but subpolynomial (bounded by `O(n^log_2(3))`).
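For illustration only (the real implementation is C code working on digit
arrays), the three-multiplication Karatsuba recurrence can be sketched with
Python integers:

```python
def karatsuba(x, y):
    # Base case: small operands fall back to brute-force multiplication.
    if x < 10 or y < 10:
        return x * y
    # Split both operands around the middle digit position m.
    m = max(len(str(x)), len(str(y))) // 2
    base = 10 ** m
    x_hi, x_lo = divmod(x, base)
    y_hi, y_lo = divmod(y, base)
    # Three recursive multiplications instead of four.
    z0 = karatsuba(x_lo, y_lo)
    z2 = karatsuba(x_hi, y_hi)
    z1 = karatsuba(x_lo + x_hi, y_lo + y_hi) - z0 - z2
    return z2 * base * base + z1 * base + z0
```

The savings come from `z1`: the cross terms are recovered from one
multiplication of sums plus two subtractions, which is where the extra
additions and intermediate values mentioned below come from.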

Brute force multiplication is used below `BC_NUM_KARATSUBA_LEN` digits. It is
polynomial (`O(n^2)`), but since Karatsuba requires both more intermediate
values (which translate to memory allocations) and a few more additions, there
is a "break even" point in the number of digits where brute force multiplication
is faster than Karatsuba. There is a script (`$ROOT/scripts/karatsuba.py`) that
will find the break even point on a particular machine.

***WARNING: The Karatsuba script requires Python 3.***

### Division

This `bc` uses Algorithm D ([long division][2]). Long division is polynomial
(`O(n^2)`), but unlike Karatsuba, any "divide and conquer" division algorithm
reaches its "break even" point with significantly larger numbers. "Fast"
algorithms become less attractive for division because the operation typically
reduces the problem size.

While the implementation of long division may appear to use the subtractive
chunking method, it only uses subtraction to find a quotient digit. It avoids
unnecessary work by aligning digits prior to performing subtraction and finding
a starting guess for the quotient.
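A minimal sketch of the idea on Python integers (hypothetical; the real
implementation works on digit arrays and computes a starting guess for each
quotient digit rather than always counting up from zero):

```python
def long_divide(dividend, divisor):
    # Schoolbook long division: find each quotient digit by repeated
    # subtraction of the divisor, aligned (shifted) under the dividend.
    quotient = 0
    shift = len(str(dividend)) - len(str(divisor))
    while shift >= 0:
        aligned = divisor * 10 ** shift   # align digits before subtracting
        digit = 0
        while dividend >= aligned:        # at most 9 subtractions per digit
            dividend -= aligned
            digit += 1
        quotient = quotient * 10 + digit
        shift -= 1
    return quotient, dividend             # remainder is what is left over
```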

Subtraction was used instead of multiplication for two reasons:

1.	Division and subtraction can share code (one of the less important goals of
	this `bc` is small code).
2.	It minimizes algorithmic complexity.

Using multiplication would give division the even worse algorithmic complexity
of `O(n^(2*log_2(3)))` (best case) and `O(n^3)` (worst case).

### Power

This `bc` implements [Exponentiation by Squaring][3]; via Karatsuba, its
complexity is `O((n*log(n))^log_2(3))`, which compares favorably to the
`O((n*log(n))^2)` it would have without Karatsuba.
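Exponentiation by squaring needs only a logarithmic number of multiplications
in the exponent; a minimal integer sketch:

```python
def power(base, exp):
    # Exponentiation by squaring: O(log(exp)) multiplications.
    result = 1
    while exp > 0:
        if exp & 1:        # odd exponent: fold one factor into the result
            result *= base
        base *= base       # square the base
        exp >>= 1          # halve the exponent
    return result
```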

### Square Root

This `bc` implements the fast algorithm [Newton's Method][4] (also known as the
Newton-Raphson Method, or the [Babylonian Method][5]) to perform the square root
operation.

Its complexity is `O(log(n)*n^2)` as it requires one division per iteration, and
it doubles the amount of correct digits per iteration.
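A sketch of the iteration using Python's `decimal` module to stand in for
`bc`'s arbitrary precision (hypothetical; not the actual implementation):

```python
from decimal import Decimal, getcontext

def dec_sqrt(x, digits=30):
    # Newton's method for square root: guess_next = (guess + x/guess) / 2.
    # Each iteration roughly doubles the number of correct digits, and each
    # iteration costs one division, giving the O(log(n) * n^2) bound.
    getcontext().prec = digits + 5         # a few guard digits
    n = Decimal(x)
    if n == 0:
        return n
    guess = n / 2 if n > 1 else n
    for _ in range(100):
        nxt = (guess + n / guess) / 2      # the one division per iteration
        if nxt == guess:
            break
        guess = nxt
    return +guess                          # unary + rounds to current prec
```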

### Sine and Cosine (`bc` Math Library Only)

This `bc` uses the series

```
x - x^3/3! + x^5/5! - x^7/7! + ...
```

to calculate `sin(x)` and `cos(x)`. It also uses the relation

```
cos(x) = sin(x + pi/2)
```

to calculate `cos(x)`. It has a complexity of `O(n^3)`.
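A float sketch of both ideas (hypothetical; the library computes the series to
the requested `scale` rather than in floating point):

```python
from math import pi

def sin_series(x, terms=20):
    # Maclaurin series: x - x^3/3! + x^5/5! - ...
    # Each term is derived from the previous one, avoiding factorials.
    term = x
    total = x
    for k in range(1, terms):
        term *= -x * x / ((2 * k) * (2 * k + 1))
        total += term
    return total

def cos_series(x):
    # The relation cos(x) = sin(x + pi/2) gives cosine for free.
    return sin_series(x + pi / 2)
```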

**Note**: this series has a tendency to *occasionally* produce an error of 1
[ULP][6]. (It is an unfortunate side effect of the algorithm, and there isn't
any way around it; [this article][7] explains why calculating sine and cosine,
and the other transcendental functions below, within less than 1 ULP is nearly
impossible and unnecessary.) Therefore, I recommend that users do their
calculations with the precision (`scale`) set to at least 1 greater than is
needed.

### Exponentiation (`bc` Math Library Only)

This `bc` uses the series

```
1 + x + x^2/2! + x^3/3! + ...
```

to calculate `e^x`. Since this only works when `x` is small, it uses

```
e^x = (e^(x/2))^2
```

to reduce `x`.

It has a complexity of `O(n^3)`.
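A float sketch of the reduce-then-square scheme (hypothetical names; the
library works at arbitrary `scale`):

```python
def exp_series(x, terms=30):
    # Maclaurin series 1 + x + x^2/2! + ... -- only accurate for small x.
    term = 1.0
    total = 1.0
    for k in range(1, terms):
        term *= x / k
        total += term
    return total

def exp_reduced(x):
    # Halve x until it is small, then undo with e^x = (e^(x/2))^2.
    halvings = 0
    while abs(x) > 0.5:
        x /= 2
        halvings += 1
    result = exp_series(x)
    for _ in range(halvings):
        result *= result
    return result
```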

**Note**: this series can also produce errors of 1 ULP, so I recommend users do
their calculations with the precision (`scale`) set to at least 1 greater than
is needed.

### Natural Logarithm (`bc` Math Library Only)

This `bc` uses the series

```
2 * (a + a^3/3 + a^5/5 + ...)
```

(where `a` is equal to `(x - 1)/(x + 1)`) to calculate `ln(x)` when `x` is small
and uses the relation

```
ln(x^2) = 2 * ln(x)
```

to sufficiently reduce `x`.

It has a complexity of `O(n^3)`.
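A float sketch (hypothetical; the reduction runs the relation in reverse, as
`ln(x) = 2 * ln(sqrt(x))`, until `x` is close to 1):

```python
def ln_series(x, terms=50):
    # ln(x) = 2 * (a + a^3/3 + a^5/5 + ...), with a = (x - 1)/(x + 1).
    # Converges quickly only when x is near 1 (so a is near 0).
    a = (x - 1) / (x + 1)
    a2 = a * a
    term = a
    total = 0.0
    for k in range(terms):
        total += term / (2 * k + 1)
        term *= a2
    return 2 * total

def ln_reduced(x):
    # Reduce x toward 1 by repeated square roots, doubling back at the end.
    doublings = 0
    while x > 1.5 or x < 0.75:
        x **= 0.5
        doublings += 1
    return ln_series(x) * 2 ** doublings
```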

**Note**: this series can also produce errors of 1 ULP, so I recommend users do
their calculations with the precision (`scale`) set to at least 1 greater than
is needed.

### Arctangent (`bc` Math Library Only)

This `bc` uses the series

```
x - x^3/3 + x^5/5 - x^7/7 + ...
```

to calculate `atan(x)` for small `x` and the relation

```
atan(x) = atan(c) + atan((x - c)/(1 + x * c))
```

to reduce `x` to a small enough value. It has a complexity of `O(n^3)`.
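A float sketch using `c = 1` (so `atan(c) = pi/4`) in the reduction
(hypothetical; the library may choose a different constant):

```python
from math import pi

def atan_series(x, terms=60):
    # x - x^3/3 + x^5/5 - ... -- converges fast only for small |x|.
    term = x
    total = 0.0
    for k in range(terms):
        total += term / (2 * k + 1)
        term *= -x * x
    return total

def atan_reduced(x):
    # atan(x) = atan(c) + atan((x - c)/(1 + x*c)); with c = 1 this is
    # atan(x) = pi/4 + atan((x - 1)/(1 + x)), which pulls any x > 1/2
    # into a range where the series converges quickly.
    if x < 0:
        return -atan_reduced(-x)         # atan is an odd function
    if x > 0.5:
        return pi / 4 + atan_reduced((x - 1) / (1 + x))
    return atan_series(x)
```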

**Note**: this series can also produce errors of 1 ULP, so I recommend users do
their calculations with the precision (`scale`) set to at least 1 greater than
is needed.

### Bessel (`bc` Math Library Only)

This `bc` uses the series

```
x^n/(2^n * n!) * (1 - x^2/(2^2 * 1! * (n + 1)) + x^4/(2^4 * 2! * (n + 1) * (n + 2)) - ...)
```

to calculate the Bessel function (integer order only).
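The series can be evaluated with a running term, since each term is
`-x^2/(2^2 * k * (n + k))` times the previous one; a float sketch
(hypothetical):

```python
def bessel_j(n, x, terms=40):
    # J_n(x) = sum_{k>=0} (-1)^k / (k! * (n + k)!) * (x/2)^(n + 2k).
    term = (x / 2) ** n
    for k in range(1, n + 1):
        term /= k                          # divide by n! incrementally
    total = term
    for k in range(1, terms):
        term *= -(x * x) / (4 * k * (n + k))
        total += term
    return total
```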

It also uses the relation

```
j(-n,x) = (-1)^n * j(n,x)
```

to calculate the Bessel function when `n < 0`. It has a complexity of `O(n^3)`.

**Note**: this series can also produce errors of 1 ULP, so I recommend users do
their calculations with the precision (`scale`) set to at least 1 greater than
is needed.

### Modular Exponentiation

This `dc` uses the [Memory-efficient method][8] to compute modular
exponentiation. The complexity is `O(e*n^2)`, which may initially seem
inefficient, but `n` is kept small by maintaining small numbers. In practice, it
is extremely fast.
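A sketch of the memory-efficient method in Python; reducing after every
multiplication is exactly what keeps the operands small:

```python
def mod_pow(base, exp, mod):
    # Memory-efficient modular exponentiation: reduce modulo `mod` after
    # every multiplication so intermediate values never exceed mod^2.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp >>= 1
    return result
```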

### Non-Integer Exponentiation (`bc` Math Library 2 Only)

This is implemented in the function `p(x,y)`.

The algorithm is to use the formula `e(y*l(x))`.

It has a complexity of `O(n^3)` because both `e()` and `l()` do.

However, there are details to this algorithm, described by the author,
TediusTimmy, in GitHub issue [#69][12].

First, check if the exponent is 0. If it is, return 1 at the appropriate
`scale`.

Next, check if the number is 0. If so, check if the exponent is greater than
zero; if it is, return 0. If the exponent is less than 0, error (with a divide
by 0) because that is undefined.

Next, check if the exponent is actually an integer, and if it is, use the
exponentiation operator.

The `z=0` line is the start of the meat of the new code.

`z` is set to zero as a flag and as a value. What I mean by that will be clear
later.

Then we check if the number is less than 1. If it is, we negate the exponent
(and the integer version of the exponent, which we calculated earlier to check
if it was an integer). We also save the number in `z`; being non-zero is a flag
for later and a value to be used. Then we store the reciprocal of the number in
itself.

None of the above paragraph will make sense unless you remember the
relationship `l(x) == -l(1/x)`; we negated the exponent, which is equivalent to
the negative sign in that relationship, and we took the reciprocal of the
number, which is equivalent to the reciprocal in the relationship.

But what if the number is negative? We ignore that for now because we eventually
call `l(x)`, which will raise an error if `x` is negative.

Now, we can keep going.

If, at this point, the exponent is negative, we need to use the original formula
(`e(y * l(x))`) and return that result because the result will go to zero
anyway.

But if we did *not* return, we know the exponent is *not* negative, so we can
get clever.

We then compute the integer portion of the power by raising the number to the
integer portion of the exponent.

Then we have the most clever trick: we add the length of that integer power (and
a little extra) to the `scale`. Why? Because this will ensure that the next part
is calculated to at least as many digits as should be in the integer *plus* any
extra `scale` that was wanted.

Then we check `z`, which, if it is not zero, is the original value of the
number. If it is not zero, we need to take the reciprocal *again* because now we
have the correct `scale`. And we *also* have to calculate the integer portion of
the power again.

Then we need to calculate the fractional portion of the number. We do this by
using the original formula, but instead of calculating `e(y * l(x))`, we
calculate `e((y - a) * l(x))`, where `a` is the integer portion of `y`. It's
easy to see that `y - a` will be just the fractional portion of `y` (the
exponent), so this makes sense.

But then we *multiply* it into the integer portion of the power. Why? Because
remember: we're dealing with an exponent and a power; the relationship is
`x^(y+z) == (x^y)*(x^z)`.

So we multiply it into the integer portion of the power.

Finally, we set the result to the `scale`.
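The whole flow can be condensed into a hypothetical Python sketch (floats
stand in for `bc` numbers, so the `scale` bookkeeping, including the
re-reciprocal step, disappears and `z` survives only as the flag):

```python
from math import exp, log

def p(x, y):
    # Hypothetical Python port of the control flow described above; the
    # real function is written in bc and tracks `scale` explicitly.
    if y == 0:
        return 1.0                      # x^0 == 1 at the appropriate scale
    if x == 0:
        if y > 0:
            return 0.0
        raise ZeroDivisionError("0^y is undefined for y < 0")
    a = int(y)                          # integer portion of the exponent
    if a == y:
        return float(x ** a)            # integer exponent: use the operator
    z = 0.0                             # flag *and* value, as described
    if 0 < x < 1:                       # l(x) would be negative, so use
        y, a = -y, -a                   # l(x) == -l(1/x): negate the
        z = x                           # exponent, save the number in z,
        x = 1 / x                       # and take the reciprocal
    if y < 0:                           # result goes to zero anyway:
        return exp(y * log(x))          # fall back to e(y * l(x))
    i = x ** a                          # integer portion of the power
    f = exp((y - a) * log(x))           # fractional portion: e((y-a)*l(x))
    return i * f                        # x^(a + (y-a)) == x^a * x^(y-a)
```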

### Rounding (`bc` Math Library 2 Only)

This is implemented in the function `r(x,p)`.

The algorithm is a simple method to check if rounding away from zero is
necessary, and if so, adds `1e10^p`.

It has a complexity of `O(n)` because of the addition.

### Ceiling (`bc` Math Library 2 Only)

This is implemented in the function `ceil(x,p)`.

The algorithm is a simple add of one less decimal place than `p`.

It has a complexity of `O(n)` because of the addition.

### Factorial (`bc` Math Library 2 Only)

This is implemented in the function `f(n)`.

The algorithm is a simple multiplication loop.

It has a complexity of `O(n^3)` because it performs a linear number of `O(n^2)`
multiplications.

### Permutations (`bc` Math Library 2 Only)

This is implemented in the function `perm(n,k)`.

The algorithm is to use the formula `n!/(n-k)!`.

It has a complexity of `O(n^3)` because of the division and factorials.

### Combinations (`bc` Math Library 2 Only)

This is implemented in the function `comb(n,r)`.

The algorithm is to use the formula `n!/(r!*(n-r)!)`.

It has a complexity of `O(n^3)` because of the division and factorials.

### Logarithm of Any Base (`bc` Math Library 2 Only)

This is implemented in the function `log(x,b)`.

The algorithm is to use the formula `l(x)/l(b)` with double the `scale` because
there is no good way of knowing how many digits of precision are needed when
switching bases.

It has a complexity of `O(n^3)` because of the division and `l()`.

### Logarithm of Base 2 (`bc` Math Library 2 Only)

This is implemented in the function `l2(x)`.

This is a convenience wrapper around `log(x,2)`.

### Logarithm of Base 10 (`bc` Math Library 2 Only)

This is implemented in the function `l10(x)`.

This is a convenience wrapper around `log(x,10)`.

### Root (`bc` Math Library 2 Only)

This is implemented in the function `root(x,n)`.

The algorithm is [Newton's method][9]. The initial guess is calculated as
`10^ceil(length(x)/n)`.

Like square root, its complexity is `O(log(n)*n^2)` as it requires one division
per iteration, and it doubles the amount of correct digits per iteration.
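A float sketch of the iteration and the initial guess (hypothetical;
`length(x)` here counts the digits in the integer part of `x`):

```python
from math import ceil

def nth_root(x, n, tol=1e-12):
    # Newton's method on f(r) = r^n - x:
    #   r_next = ((n - 1) * r + x / r^(n - 1)) / n
    # Initial guess mirrors the document: 10^ceil(length(x)/n).
    length = len(str(int(x))) if x >= 1 else 1
    guess = 10.0 ** ceil(length / n)
    while True:
        nxt = ((n - 1) * guess + x / guess ** (n - 1)) / n
        if abs(nxt - guess) < tol:
            return nxt
        guess = nxt
```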

### Cube Root (`bc` Math Library 2 Only)

This is implemented in the function `cbrt(x)`.

This is a convenience wrapper around `root(x,3)`.

### Greatest Common Divisor (`bc` Math Library 2 Only)

This is implemented in the function `gcd(a,b)`.

The algorithm is an iterative version of the [Euclidean Algorithm][10].

It has a complexity of `O(n^4)` because it has a linear number of divisions.

This function ensures that `a` is always bigger than `b` before starting the
algorithm.
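A sketch of the iterative Euclidean algorithm; each loop step costs one
division (the remainder), which is where the overall `O(n^4)` comes from:

```python
def gcd(a, b):
    # Iterative Euclidean algorithm. Like the library function, it makes
    # sure a >= b before starting.
    if b > a:
        a, b = b, a
    while b != 0:
        a, b = b, a % b   # one division (remainder) per iteration
    return a
```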

### Least Common Multiple (`bc` Math Library 2 Only)

This is implemented in the function `lcm(a,b)`.

The algorithm uses the formula `a*b/gcd(a,b)`.

It has a complexity of `O(n^4)` because of `gcd()`.

### Pi (`bc` Math Library 2 Only)

This is implemented in the function `pi(s)`.

The algorithm uses the formula `4*a(1)`.

It has a complexity of `O(n^3)` because of arctangent.

### Tangent (`bc` Math Library 2 Only)

This is implemented in the function `t(x)`.

The algorithm uses the formula `s(x)/c(x)`.

It has a complexity of `O(n^3)` because of sine, cosine, and division.

### Atan2 (`bc` Math Library 2 Only)

This is implemented in the function `a2(y,x)`.

The algorithm uses the [standard formulas][11].

It has a complexity of `O(n^3)` because of arctangent.

[1]: https://en.wikipedia.org/wiki/Karatsuba_algorithm
[2]: https://en.wikipedia.org/wiki/Long_division
[3]: https://en.wikipedia.org/wiki/Exponentiation_by_squaring
[4]: https://en.wikipedia.org/wiki/Newton%27s_method#Square_root_of_a_number
[5]: https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method
[6]: https://en.wikipedia.org/wiki/Unit_in_the_last_place
[7]: https://people.eecs.berkeley.edu/~wkahan/LOG10HAF.TXT
[8]: https://en.wikipedia.org/wiki/Modular_exponentiation#Memory-efficient_method
[9]: https://en.wikipedia.org/wiki/Root-finding_algorithms#Newton's_method_(and_similar_derivative-based_methods)
[10]: https://en.wikipedia.org/wiki/Euclidean_algorithm
[11]: https://en.wikipedia.org/wiki/Atan2#Definition_and_computation
[12]: https://github.com/gavinhoward/bc/issues/69