K4/Q displays from various fields of math, finance and physics

1. a complete residue system modulo 17 consisting of multiples of 3

/Q

N:17;f:{x-N*y};r:neg i:til 1+N;

{t@p:first where 0=(t:{f . x} each x,/:r) mod 3} each i

2. the secant method:


/k4

*{2#(((x0*y1)-x1*y0)%(y1:f@x1:x 0)-y0:f(x0:x 1)),x}/init
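For readers without a k interpreter, the same cross-product form of the secant step can be sketched in Python (the driver name and the sample function x²-17 are my own illustrative choices, not from the text):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's step with f' replaced by a difference quotient."""
    for _ in range(max_iter):
        y0, y1 = f(x0), f(x1)
        if y1 == y0:                      # guard against a zero denominator
            return x1
        # x2 = (x0*y1 - x1*y0)/(y1 - y0), the same cross form as the k one-liner
        x2 = (x0 * y1 - x1 * y0) / (y1 - y0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# illustrative: a root of x^2 - 17
root = secant(lambda x: x * x - 17.0, 4.0, 5.0)
```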

3. highest common factor:

/k4

g:{$[0=c:x-y*_ x%y;y;g[y;c]]}
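The k function above is Euclid's algorithm with the remainder written out as x - y·⌊x/y⌋; a direct Python transcription (function name hcf is my own):

```python
def hcf(x, y):
    """Euclid's algorithm, mirroring the k version: remainder as x - y*floor(x/y)."""
    c = x - y * (x // y)      # identical to x % y for positive integers
    return y if c == 0 else hcf(y, c)
```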

4. Newton method for single polynomial roots:

/k4

{:c::c-pv[p]%pv[dp]}/()

(.. where c=start,p=polynomial,pv:{{y+c*x}/[*x;1_x]} and dp:{(*r)#x*r:|!#x}@p
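A Python sketch of the same scheme: pv is Horner evaluation, dp builds the derivative's coefficients, exactly as the k definitions above. The polynomial x³-3x+1 is borrowed from chapter 5's example; the starting value 0.5 is my own assumption:

```python
def pv(p, c):
    """Horner evaluation of polynomial p (highest coefficient first) at c."""
    acc = p[0]
    for coef in p[1:]:
        acc = acc * c + coef
    return acc

def dp(p):
    """Coefficients of the derivative, highest first (mirrors the k expression)."""
    n = len(p) - 1
    return [coef * (n - i) for i, coef in enumerate(p[:-1])]

def newton(p, c, steps=50):
    """Newton iteration c -> c - p(c)/p'(c)."""
    d = dp(p)
    for _ in range(steps):
        c = c - pv(p, c) / pv(d, c)
    return c

root = newton([1.0, 0.0, -3.0, 1.0], 0.5)   # a root of x^3 - 3x + 1
```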

5. Muller's method for polynomials ..


/k4

pv:{{y+c*x}/[*x;1_x]}

qe:{D:(b*b:x@1)-4f*(a:x@0)*x@2;((-b+s),-b-s:sqrt D)%2f*a}

p:1 0 -3 1f;/polynomial

i:0 0.5 1;/3 starting points needed

/here 4 iterations ..

do[4;

ys:{c::x;:pv[p]}'i;

y0:ys@1;y1:ys@2;y2:ys@0;

a:((y1*h2)+(y2*h1)-y0*h1+h2)%h1*h2*(h1:(x1:i 2)-x0)+h2:(x0:i 1)-x2:i 0;

sx:*x0+t@&u=&/u:{x|-x}'t:qe@a,((y1-y0+a*h1*h1)%h1),y0;

i:((-9)+6*+/sx

]

6. Fixed point iteration solving f(x)=x


/sample 1

th:1f%3f;g:{(1f+x) xexp th}

k)g/ 1f .. 1.324718


..where g fulfills the conditions of Banach's fixed point theorem on [1,2]
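Sample 1 can be cross-checked in Python (the generic driver fixed_point and its tolerances are my own additions):

```python
def fixed_point(g, x, tol=1e-10, max_iter=1000):
    """Iterate x -> g(x) until the step size drops below tol."""
    for _ in range(max_iter):
        nx = g(x)
        if abs(nx - x) < tol:
            return nx
        x = nx
    return x

# sample 1: g(x) = (1+x)^(1/3), a contraction on [1,2]
x = fixed_point(lambda t: (1.0 + t) ** (1.0 / 3.0), 1.0)
```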

/sample 2

g:{y*(2f+x*-6f+x*11f+3f*x) xexp 0.25}

k){g[x;-1f]}/ -2.2 .. -2.496744 resp {g[x;1f]}/ 3 .. 4.982284


Friendly hint: the iteration function need not be the same for all roots!

7. Chinese remainder (not finished yet ..)

/solving in Q ...
x ~ a_{i} (mod m_{i})

f:{n@first where 0=((neg y)+x*n:til z) mod z}

crt:{sum y*t*f ./:flip (t:floor (prd x)%/:x;(count x)#1;x)}

0 1 2 3 + crt[i;0,1_(i:r*r)-til 4] /.. 4 consecutive not square-free numbers
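The q function f above finds the modular inverse by brute-force search; in a Python sketch we can use the built-in three-argument pow for the same job (the worked example x ≡ 2,3,2 mod 3,5,7 is Sun Zi's classical one, not taken from the text):

```python
from math import prod

def crt(moduli, residues):
    """Chinese remainder: x ≡ residues[i] (mod moduli[i]), moduli pairwise coprime."""
    M = prod(moduli)
    x = 0
    for m, r in zip(moduli, residues):
        t = M // m                      # same role as the q code's (prd x)%/:x
        x += r * t * pow(t, -1, m)      # pow(t, -1, m): modular inverse (Python >= 3.8)
    return x % M

x = crt([3, 5, 7], [2, 3, 2])           # Sun Zi's classical puzzle
```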

Some history:

The Chinese remainder is mentioned in the 3rd part of the book "Sun Tzu Suan Ching"

written by the Chinese mathematician Sun Zi (Master Sun). Unfortunately, little is

known about Sun Zi as he did not have any high political position nor an influential

social standing and so he did not obtain the highly deserved place in the official

Chinese history. "Sun Tzu Suan Ching" was written no earlier than 280 AD and no later than

473 AD - but Sun's name was already forgotten in the early period of the 7th century.

There is some evidence that Sun Zi was a Buddhist scholar. It was not uncommon for

Buddhist monks to be well versed in mathematics.

8. Newton method for multiple polynomial roots:

/k4

{:c::c-m*pv[p]%pv[dp]}/()

(.. where m=multitude,c=start,p=polynomial,pv:{{y+c*x}/[*x;1_x]} and dp:{(*r)#x*r:|!#x}@p

Friendly hint: the iteration function g converges quadratically where g'(x*) = 0 and g''(x*) stays bounded.

Let us look at: f(x)=27x

With m=1 (starting point = 4) we would get the expected sluggish progress

3.505972,3.100675,2.770442,2.50393,2.291592,2.125182,1.997326,1.90124,1.830651,1.779899

1.74409,1.719211,1.702133,1.690512,1.682655,1.677367,1.673819,1.671443,1.669855,1.668794,..

With m=3 we get almost lightspeed:

2.517915,1.872123,1.684531,1.666822,1.666667 DONE

The correct value is 5/3.
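The m-modified step x → x - m·f(x)/f'(x) can be sketched in Python. Since the exponents of the text's example polynomial were lost in extraction, the polynomial (x-2)³(x+1), with a triple root at 2, is my own illustrative stand-in:

```python
def modified_newton(f, df, x, m, steps=30):
    """Newton's method for a root of multiplicity m: x -> x - m*f(x)/f'(x)."""
    for _ in range(steps):
        fx = f(x)
        if fx == 0.0:
            return x
        x = x - m * fx / df(x)
    return x

# illustrative: (x-2)^3 * (x+1) has a triple root at x = 2
f  = lambda x: (x - 2.0) ** 3 * (x + 1.0)
df = lambda x: 3.0 * (x - 2.0) ** 2 * (x + 1.0) + (x - 2.0) ** 3
root = modified_newton(f, df, 3.0, m=3)
```

With m=1 the same start value would crawl toward 2 linearly; m=3 restores quadratic convergence.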

9. Newton method for multiple dimensions ..

/k4

li:{-(!:+-1_u)$+-1#u:+x}

{x+,/li@m .\:\:x}/ startvalue

m is the function-matrix (in case of polynomials we can represent the partial derivatives as translations of hyperplanes in an n-dimensional space)

let's solve:

x^x + y² = 3.950607

y^y + x² = 3.340242

{x+,/li@m .\:\:x}/ (2f;6f) .. returns ~ 0.77 1.77 ;where
m:(fx,fy,f;gx,gy,g) and

f:{(x xexp x)+(y*y)-3.950607}

fx:{[x;y](1f+log x)*x xexp x}

fy:{[x;y]2f*y}

g:{(y xexp y)+(x*x)-3.340242}

gx:{fx[y;x]}

gy:{fy[y;x]}

10. fixed point method for n dimensions:

x³ - y·sin y + y·z² = -7

x·y - y·z + z^(1/3) = -0.2427912

x·sin y + y·sin x + x·z² = -16.14828

then {(f1,f2,f3).\:x}/ (-3 1.5 2.5f) will converge to:

-2.105851 0.3408871 2.682297, where

f1:{(-7f+(y*sin y)-y*z*z) xexp 1%3}

f2:{(0.2427912+z xexp 1%3)%z-x}

f3:{[x;y;z]sqrt ((neg 16.14828)-(x*sin y)+y*sin x)%x}

where at least one B-norm must be < 1 (ie ‖B‖ < 1)

11. system of linear equations - iterative Jacobi method

/k4

{m$x,1f}/ startvalue /we assume m is strictly diagonally row(column) dominant
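The k one-liner folds the whole sweep into a matrix product with the constants carried in the last column; a plain component-wise Python sketch of the Jacobi idea (the 2×2 sample system 4x+y=6, x+3y=7 is my own illustrative choice):

```python
def jacobi(A, b, x, iters=100):
    """Jacobi iteration: each component solved from its diagonal, using only old values."""
    n = len(A)
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# strictly diagonally dominant: 4x + y = 6, x + 3y = 7  ->  x = 1, y = 2
sol = jacobi([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0])
```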


12. system of linear equations - iterative Gauss-Seidel method

/k4

{i:0;c:();do[r:#x;c,:+/(c,0f,((1+i-r)#x),1f)*m@i;i+:1];:c}/ startvalue

/we assume m is strictly diagonally row(column) dominant
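The only change against Jacobi is that every freshly computed component is used immediately, which is what the running prefix c does in the k loop above. A Python sketch on the same illustrative 2×2 system as before:

```python
def gauss_seidel(A, b, x, iters=100):
    """Gauss-Seidel: like Jacobi, but updates are used as soon as they are computed."""
    n = len(A)
    x = list(x)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # overwrite in place -> used by the next row
    return x

sol = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0])
```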

13. Lagrange interpolation polynomials

/k4+q

/f(x_i) = y_i

L(f) = Σ y_i · Π_{j≠i} (x - x_j) / (x_i - x_j)

q)sum (ys%prd flip xs-p)*1f,/:eq each p:xs@r except/:r:til count xs

example: f(-2 -1 1 3)=-15 -4 0 20 → L = x³ - x² + x - 1
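The summation formula can be cross-checked in Python on the example's four nodes (the function name lagrange is mine):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # the basis polynomial l_i(x)
        total += yi * li
    return total

# the text's example nodes: (-2,-15), (-1,-4), (1,0), (3,20)
val = lagrange([-2.0, -1.0, 1.0, 3.0], [-15.0, -4.0, 0.0, 20.0], 0.0)
```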
q)nc:{(not 0 in deltas x)&x~asc x} /just eliminating the variational noise

q)vr:{s::x;y {s cross x}/x}

k)fx:{q@&nc'q:vr[1+!x;y]}

k)vt:{+/*/+z@-1+fx[x;y]}

k)eq:{c::#r::x;(c#-1 1f)*(+/r),{vt[c;x;r]}'1_!c}


eq corresponds to Vieta: 1f,eq 3 8 5 .. (x-3)(x-8)(x-5)

The sample with Lagrange above also shows that the classical school way of

representing solutions can be pretty misleading when searching for effective algorithms:

the 5 steps above leading to Vieta's representation can be replaced in k4 by one single line:

1f,(*s) {1 _ (0f,y*c)+(c:1f,x),0f}/1_s ...

where s represents the negative roots of the polynomial ... ie (x-s_1)(x-s_2)..(x-s_n)

14. Newton's interpolation polynomial derived from divided differences

We define:

f[x_i] = y_i

f[x_i,..,x_k] = (f[x_(i+1),..,x_k] - f[x_i,..,x_(k-1)]) / (x_k - x_i)

/k4

@[+/a*{(-o)#(o#0f),x}'1f,/:(-xs@0),1_g@--1_xs;-1+o:#xs;+;ys@0]

/..and where

g:{1f,(*x) {1 _ (0f,y*c)+(c:1f,x),0f}\1_x} /thats vieta

a:{*x}'ys {(1_-':x)%y}\ {-/'+x@(r;-1+(r:-2_1_'{1_x}\ c)--1_c:!#x)}@xs

although the expression above does trivial things, it does too much, as only the upper leaf of the derivation tree is really needed ..

example: f[-3 -1 2 6 7f] is -10 -6 6 8 48f .. will result in p(x)=0.115x⁴-0.56x³-1.755x²+6.86x+1.94

15. interpolation via Newton's forward form (for equidistant abscissas)

From the general divided difference form:

p_n(x) = Σ C(s,i)·Δ^i y_0 (i = 0..n), where

x = x_0 + s·h, C(s,i) is the

usual number of combinations s over i and where Δ^i y_0 are the forward differences.

We want to know f[0.5] if f[0,0.2,0.4,0.6,0.8,1.0,1.2] = (0.44 , 0.58 , 1.5 , 2.2 , 2.9 , 3.6 , 5.1)

/k4

nf:{s:(x-i)%((y@1)-i:*y);+/(*\1f,(s-c)%1f+c:1f*!-1+#y)*{*x}'-1_{1_-':x}\z}

nf[0.5;xs;ys] .. 1.863457


16. construction of the Hermite interpolation polynomial H_(2n+1)

The 2n+1 comes from the 2n+2 conditions at x

The issue point are the Lagrange polynomials L

L

cancelled immediately. This will leave us with the single statement a

b

One other way to obtain H

H

and where i in {0,1,..,n-1}

/k4

@[+/h*{(-j)#(j#0f),x}'1f,/:g@--1_*{-1_x}\xi;j-1;+;l] ..and where

r:1_@[c;&^:c:(-':yi)%-':xi;:;dyi], g is Vieta again

t:(l:*yi),h:(*r),,/1#'r {(1_-':x)%y}\-/'xi@+-1_'({1_x}\2_ix;{-1_x}\-2_ix:!j:#xi)


example:

f(-1)= -2, f(0)= -1, f(2) = 7 .. f '(-1)= -3, f '(0)= 4, f '(2) = 36

xi: , /2#'-1 0 2f .. yi: , /2#'-2 -1 7f .. dyi:-3 4 36f .. will return p(x) = x

Friendly hint: The "hard" way just by setting p(x)=x

conditions would be much simpler and would result in a short trivial expression in k4. In terms of operations

Hermite's approach is less tiresome.

17. interpolation using rational functions

In case the function we try to interpolate has poles or asymptotes we might run into problems with

accuracy when using the purely polynomial approach. This new class of functions R(x) = P(x)/Q(x)

where P(x) = Σ

into real trouble first .. contrary to the polynomial approach, we cannot guarantee that such an R(x) always

exists for a given interpolation problem.

R(x) is of dimension ζ + η + 1, and the number of conditions is given by R(x_i) = y_i , i in (0,..,n).

So, n = ζ + η.

The divided reciprocal differences will help us to solve a special type of rational interpolation ..

etc.

renaming the x

Thiele's chain fraction. (Thorvald Nicolai Thiele was a brilliant researcher, astronomer and mathematician.

He was also interested in issues of insurance and old-age pensions from a social point of view. Thiele emphasised

that old age is the destiny of us all, therefore a pension system should be seen as a social contract between

generations with the State as mediator rather than as an insurance.

He was born in Copenhagen December 24, 1838 (he was among the founders of

the Danish Mathematical Society, founded in 1873)

and so ..

R(x) = c_0 + (x-x_0)/(c_1 + (x-x_1)/(c_2 + .. ))

where the c_i are the reciprocal differences.

We can show by math. induction that the following conditions are an invariant :

(i) ζ = η = n/2 in case n is even

(ii) ζ = (n+1)/2 and η = (n-1)/2 else

In other words: 0 ≤ grad(P) - grad(Q) ≤ 1 .. which means R can also have a linear asymptote.

This construction defines in a natural way a 2 dimensional recursion ..

p

Θ

c

Example:

Let f(0.1)=1.11035, f(0.2)=0.5441072, f(0.3)=0.355784, f(0.4)=0.2619114, f(0.7)=0.14207 .. we look for f(0.16)

/k4 .. xi,yi,A input

f:{(1_x;1_(x-*x)%y-*y)};c::`float$();{if[1<#*x;c::c,,x:f . x;.z.s@x]}@(xi;yi)

ci:{*x}'((,yi),,/1_'c)

f1:{[x;y;z;u]((y*c)+(u-z)*x@1),c:x@0}

/c0:*ci:{*x}'((,yi),,/1_'c) ; c1:ci@1 ; qk:(1f,c1)

/pk:(c0,(c0*c1)+A-*xi)

({*x}'f1\[(|pk);h1;h2;A])%{*x}'f1\[(|qk);h1:2_ci;h2:1_-1_xi;A]


With A:0.16 we get the value 0.685568 .. a result which is really breath-taking ..

The underlying unknown function was s(x) = 0.3/(x·sqrt(3x+7))

The difference is only 1.023808e-007

18. Cubic splines

The term spline comes from the flexible spline devices used by shipbuilders and draftsmen to draw

smooth shapes. In fact what they solved was a variational problem - they forced the splines to

pass through given n+1 nodes so that the strain energy E = 0.5* ∫

in other words: δE = 0. So δE = ∫

Σ

As f(x

δs'(x-) = δs'(x+) = δs'(x). This will lead us directly to:

δE = s''(x

In other words: s''(x

and the "plug-in's" s''(x

We can define now: h_i = x_(i+1) - x_i

The conditions above will result in:

a

b

c

d

Where the

h

The system matrix is symmetrical, tri-diagonal and diagonally dominant.

/k4

ri:6f*(-':(1_-1_,/2#'1_-':yi)%1_-1_,/2#'hi)@-1+1_2*!t:#hi

g:{@'[x;+(l;l:!#x);:;0f]}

M:-g@(+m,,ri)%|/'m:(w,w)#(-u+1)_1_,/(1_{y,(2f*x+y),x}':hi),\:(u:-2+w:-1+t)#0f

b:-1f*0f,({M$x,1f}/(#M)#1f),0f

s:((1_{x-y}':b)%6f*hi;-1_0.5*b;((1_{x-y}':yi)%hi)-(1_{x+2f*y}':b)*hi%6f;-1_yi)

example: f[-1 0 1 3] is 2 1 2 0 ..

hi:1_-':xi:-1 0 1 3f

yi:2 1 2 0f

will result in 3 cubic polynomials:

s_0(x) = 2 - 1.6087·(x+1) + 0.6087·(x+1)³ on [-1,0]

s_1(x) = 1 + 0.2174·x + 1.8261·x² - 1.0435·x³ on [0,1]

s_2(x) = 2 + 0.7391·(x-1) - 1.3043·(x-1)² + 0.2174·(x-1)³ on [1,3]

19. Euler's φ function .. :

determine all N with φ(N) = 84 .. means numbers N which have exactly 84 elements coprime

to N among the x < N (solution below is a "student" version, don't try with high N's)

t@&84={+/1={$[0=c:x-y*_ x%y;y;.z.s[y;c]]} ./:(1_!x),\:x}@/:t:2_!300

returns .. 129 147 172 196 258 294
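The same brute-force "student" count, as a Python sketch (same idea: count the x < N with gcd(x,N) = 1):

```python
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def phi(n):
    """Euler's totient by direct counting, mirroring the k one-liner."""
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

hits = [n for n in range(2, 300) if phi(n) == 84]
```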
20. numerical differentiation .. :

the rules for the differentiation of grade 1 and 2 we can get directly from the Taylor expansion

for higher terms and intermediate points we have to go back to Newton's forward form ..

and assuming the underlying approximation of f(x), which is p

and where C(s,i) are the combinations s over i.

from f(x) ≈ p

as example we try to determine f''(1.44)

let be xd:1.1 1.2 1.3 1.4 1.5 1.6 and yd:1.078361 1.342136 1.628413 1.931481 2.244364 2.558908

s:((u:1.44)-*xd)*ih:%h:0.1;rh:dh*dh:ih*ih /h = 0.1 is the spacing of xd

rh*(*t)+(s-2f)*1_t:-2#{*x}'-1_{1_-':x}\yd .. -11.37996

..and ih*{y-x} ./:+(-1_yd;1_yd) will return the 1^{st} derivatives starting

from 1.2 .. 2.637754 2.86277 3.030681 3.128822 3.145447


21 a) numerical integration .. trapezoidal method for single integrals

We consider the integral ∫

polynomial p

The error ∫

At this point we can start to analyze the trapezoidal rule ..

Given is now the underlying set of evenly distributed sub-intervals .. x

Lets look at the error-term E

and C(s;n+1) the classical combination s over n+1. For the trapezoidal rule we have in each interval n = 1

( .. approximation by a linear function) and dx=h ds. (be aware that h = (b-a)/n). Applying this to the

error term, we will end up with ∫

As s(s-1) is not negative over [0,1] and f '' continuous over (0,1) there must be one fix α

∫

For the whole interval J we will get: Error = -(1/12)·(b-a)·h²·f''(α)

is a simple adding of trapezoids .. 0.5·h·(f_0 + 2f_1 + .. + 2f_(n-1) + f_n)

tz:{[f;a;b;n]h*(0.5*neg (first r)+last r)+sum r:f a+(h:(b-a)%n)*til 1+n}

Example with f(x) = (2π)^(-1/2)·exp(-x²/2) .. (in q: r:1f%sqrt 4f*atan 0w)
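A Python transcription of tz, checked on an integral with a known value (∫₀¹ x² dx = 1/3 is my own illustrative choice):

```python
def tz(f, a, b, n):
    """Composite trapezoidal rule with n sub-intervals, mirroring the q function tz."""
    h = (b - a) / n
    vals = [f(a + h * i) for i in range(n + 1)]
    # interior points get full weight, the two endpoints half weight
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

approx = tz(lambda x: x * x, 0.0, 1.0, 1000)   # ∫_0^1 x^2 dx = 1/3
```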

21 b) numerical integration .. trapezoidal method for n-dimensional integrals

Let Ω be an n-dimensional rectangle defined by: {(x

and tentatively assuming we are in a sufficiently foolish mood to evaluate the integral

J = ∫ ∫ ∫ ∫ .. ∫

summations at i = n, ie ∫ ξ (x

The last expression we can define as Ξ[x

by the same weight factor (integer exponent of 2). In other words:

J = (Π

+ 2 Σ

+ 2

and the i

collapse into the sum of integrals Σ

... so - for Ω = R

J = ∫ ∫

We can write J as 0.5h ∫

and j in (1,..,n-1) and E=(-1/12)(d-c)k

Using the definitions: f

0.25hk[f

and because f

So the total error will become (-1/12)(b-a)(c-d)[h

Example: we want to integrate J = ∫ ∫

diT:{[a;b;c;d;m;n;f]

o:f .'/:+(a+(h:(b-a)%m)*ix:!1+m),'/:\:c+(k:(d-c)%n)*iy:!1+n

:(h*k*0.25)*(4f*+/,/fr'q)+(2f*+/(,/fr'p),,/fl'q:fr@o)++/,/fl'p:fl@o

}

and where

fl:{(,*x),,last x}

fr:{-1_1_x}

and so: diT[2;3;1;4;120;100;{((y*sin x%3f)+x*x)%x+y*y*sin y%3f}] .. 3.860015

22. the order of a modulo m:

I:log 1f*0Wj-1

ord:{1_where 0=(-1j+(floor I%log r::x) {x*r}\ 1j) mod y}

Example 1: ord[29;37] .. ,12 - means the order of 29 modulo 37 is 12

ie 29^12 ≡ 1 (mod 37) and no smaller positive power of 29 is ≡ 1

Example 2: ord[53;163] .. ,9 - means the order of 53 modulo 163 is 9

ie 53^9 ≡ 1 (mod 163)
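The k version builds the full power table up front; a Python sketch can simply multiply modulo m until 1 reappears (function name order is mine):

```python
def order(a, m):
    """Multiplicative order: smallest h >= 1 with a^h ≡ 1 (mod m)."""
    x, h = a % m, 1
    while x != 1:
        x = (x * a) % m
        h += 1
        if h > m:                      # 1 never reached -> a and m not coprime
            raise ValueError("a and m are not coprime")
    return h
```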

23. composite double integral - functional boundaries

J = ∫ ∫

For each thread we define: w(i) := [τ

y

We set Φ (x

So, J = 0.5 * h * [Φ (x

J = 0.25h * Φ (integrand grid) ⊗ Y (trapezoidal weights) ⊗ T (functional grid)

f3:{(f2 x)-f1 x};phi:{y*x};a:1f;am:(mi:!1+m)*cm:%m:40;b:2f;an:(ni:!1+n)*cn:%n:20

phival:|+phi .'' ((1+n)#'hmi),''{(f1 x)+an*f3 x}@/:hmi:a+mi*h:cm*b-a

l:,2f,((m-1)#4f),2f

0.25*h*+/+/phival*(n {x,q}/ q:,cn*f3 hmi)*+(r,((m-2) {x,l}/ l),r:l%2)

Example:

the exact value is 27.65783..

24. replacing the integrand by polynomials of grade 2 and 3 :

Lets define:x

issue point is Newton's form Σ

and i in (0,1,2). J = ∫

Although Ψ

correction factor (x-x

For convenience reasons we will choose x

Intersecting Ω into 2n sub-intervals with h=(b-a)/2n collapses into the sum E(J)=(-h

m in (1,..,n) and ω

E(J) = -(1/180)·h⁴·(b-a)·f''''(ξ)

Example. J = ∫

b:4f;f:{(e*5f+exp x)%1f+x*(e:c*c:x*x)*1+2f*x};h:(b-a:0f)%n:50

sum (h*(w,1_reverse w:1,(floor n*0.5)#4 2)*f a+h*key n+1)%3f .. 4.806506
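The weight pattern 1,4,2,4,..,2,4,1 built by the k expression is composite Simpson; a Python sketch, checked on ∫₀^π sin x dx = 2 (an illustrative integrand of my own, since the q parsing of the text's f is easy to get wrong):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule; n must be even (weights 1,4,2,4,...,2,4,1)."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + h * i) for i in range(1, n, 2))   # odd indices: weight 4
    s += 2.0 * sum(f(a + h * i) for i in range(2, n, 2))   # even interior: weight 2
    return s * h / 3.0

approx = simpson(math.sin, 0.0, math.pi, 50)
```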

Hint: the trapezoidal method would require n=1460 to obtain the same precision - and is 10 times slower in the end.

The derivation of the approximation using a cubic polynomial is trivial. We would end up with:

E(J) = -(1/80)·h⁴·(b-a)·f''''(ξ)

b:4f;f:{(e*5f+exp x)%1f+x*(e:c*c:x*x)*1+2f*x};h:(b-a:0f)%n:42

sum 0.375*h*(1,(-1_n#3 3 2),1)*f a+h*key n+1 .. 4.80653


25. basis solutions .. :

Let h,m be natural numbers and (a,m) = 1 .. meaning m is coprime to the whole number a.

If h is the lowest natural number fulfilling a^h ≡ 1 (mod m), then h is called the order of a modulo m.

If h = φ (m) (and φ is Euler's function .. see chapter 19 above) then a is called a basis solution modulo m .. ie a = β (m)

k)jx:{-1j+x {x*y}/ (y-1)#x}

{{c::y;({jx . (c;x)} each 1_key x) mod x} . (7j;x)} each 2 3 4 5 ..

returns..

/1 3 0 1 3 0 --> 2 is not a basis

/2 1 5 3 4 0 --> 3 is a basis

/3 1 0 3 1 0 --> 4 is not a basis

/4 3 5 1 2 0 --> 5 is a basis

{3,5} are the only basis solutions modulo 7. There is no need to look further.

The number of basis solutions is φ (p - 1) for primes > 2.

It is {c::y;({jx . (c;x)} each 1_key x) mod x} . (23j;5)

4 1 9 3 19 7 16 15 10 8 21 17 20 12 18 2 14 5 6 11 13 0j .. ie 5^k - 1 is first divisible by 23 at k = 22 = φ(23)

and indeed .. 23 x 103660251783288j = 2384185791015624j = 5^22 - 1 .. and so 5 = β (23)

26. Gauss' method of mechanical quadrature .. (part 1 from 4):

The classical result of Gauss states that for the range [-1,+1] the best accuracy is obtained

by choosing the roots x

With each x

J = ∫

But lets look at it from a general aspect:

Consider the integral: J = ∫

J = ∑

Pro memoria: l

We also introduce ω(x) into the general error term with:

E(J) = ∫

replacing f[x

The concept:

we introduce x

Then E(J)= ∫

we introduce x

Then E(J)= ∫

etc..until

we introduce x

Then E(J)= ∫

This means Ψ

polynomial of grade n+1.

If we choose x

that E(J) = f[x

So Ψ

with respect to ω(x).

Let's choose ω(x) ≡ 1 and Ω = [-1,+1] and Gauss-Legendre quadrature:

p

n-th Legendre polynomial:

k)p:*(n-1) {c:(((3f+2*n)*(r:x 0),0f)-(m:1f+n)*0f,0f,x 1)%2f+n:x 2;:(c;r;m)}/(1 0f;,1f;0f)

roots of n-th Legendre polynomial ..

k)n {dp::g@w::x;c::0.5;{:c::c-(last pv[w])%last pv[dp]}/();s::s,c;:-1_pv[x]}/ p

and where
k)pv:{(*x),{y+c*x}\[*x;1_x]}

k)g:{(*r)#x*r:|!#x};s::()


the weights..

k)2f%(1f-s*s)*r*r:last'{{(*x),{y+z*x}\[*x;1_x;y]} . (g@p;x)}'s

Example: J = ∫_-1^1 u(x) dx, using the Legendre polynomial of degree 12 ( = n)

sum w*u each s .. and where u:{(-7f+x*101f+x*13f+x*6f)%(1f+x*x)*20f+x*4+x}

will return: -0.70384426234338049 (exact: -0.70384426234345321)


in this case 12 digit precision with only 12 abscissa points !

26. Gauss' method of mechanical Chebyshev quadrature .. (part 2 from 4):

J = ∫

Chebyshev-polynomials T

and the weights A

k)+/(pi%n)*f {cos (pi*1+2*!x)%2+2*x-1} n

and where pi: acos -1

26. Gauss' method of mechanical Hermite quadrature .. (part 3 from 4):

J = ∫

Hermite-polynomials H

and the weights A

of the special differential equation y''-2xy'+2ny = 0 (n natural number incl. 0) which appears with

the harmonic oscillator in quantum mechanics.

n-th hermite polynomial

k)p:*(2 0;,1) {r:2*((c:x 0),0)-0,0,y*x 1;:(r;c)}/1_!n

k)n {dp::g@w::x;c::0.5;{:c::c-(last pv[w])%last pv[dp]}/();s::s,c;:-1_pv[x]}/ p

and where
k)pv:{(*x),{y+c*x}\[*x;1_x]}

k)g:{(*r)#x*r:|!#x};s::()

the weights..

k)b:sqrt acos -1

k)(2 xexp w)*({*/1_!x} w:1+#s)*b%r*r:last'{{(*x),{y+z*x}\[*x;1_x;y]}.(g@p;x)}'s

26. Gauss' method of mechanical Laguerre quadrature .. (part 4 from 4):

J = ∫

Pseudo-Laguerre-polynomials Λ

and the weights A

Laguerre's polynomials we simply get by: L

They are the canonical solutions of the 2nd order differential equation x·y'' + (1-x)·y' + n·y = 0.

This equation plays a role in quantum mechanics. It describes the radial part of the energy states

of the hydrogen atom.

n-th pseudo-Laguerre polynomial

k)p:*(-1 1;,1) {r:(0,(1+2*y)*c)+(-1*((c:x 0),0))-0,0,y*y*x 1;:(r;c)}/1_!n

k)n {dp::g@w::x;c::0.5;{:c::c-(last pv[w])%last pv[dp]}/();s::s,c;:-1_pv[x]}/ p

and where
k)pv:{(*x),{y+c*x}\[*x;1_x]}

k)g:{(*r)#x*r:|!#x};s::()

the weights..

k)(u*u:{*/1_!x} w:1+#s)%s*r*r:last'{{(*x),{y+z*x}\[*x;1_x;y]}.(g@p;x)}'s

27(a) smallest non-cyclic multiplicative group
g:{$[0=c:x-y*_ x%y;y;g[y;c]]} .. see chapter 3

f:{c::x;w*/:w:1,r@where 1={g . (x;c)} each r:2_key x};

h:{t:(f x) mod x;t ./:flip (p;p:key count t)}

flip (l;h each l:2_key 12)

2: ,1

3: 1 1

4: 1 1

5: 1 4 4 1

6: 1 1

7: 1 4 2 2 4 1

8: 1 1 1 1

→ all elements of the diagonal are the neutral element → order of all elements is 2 → 8 is smallest

order for the non-cyclic multiplicative group

9: 1 4 7 7 4 1

10: 1 9 9 1

11: 1 4 9 5 3 3 5 9 4 1

27(b) generators in multiplicative groups

g:{$[0=c:x-y*_ x%y;y;g[y;c]]} .. see chapter 3

M:{c::x;(w*/:w:1,r@where 1={g . (x;c)} each r:2_key x) mod x}@n:19;

1+&t={#?x}'*:''(-1+t) {(M . (-1+x@0;-1+r)),r:x@1}\'+(r;r:1+!t:#M)


.. 2 3 10 13 14 15 means 6 generators for the multiplicative group modulo 19

.. corresponds to the theory expecting φ(n-1) generators - here φ(18) = 6

An example which is in the end a nice affinity between classical number theory and group theory.

28a) Runge-Kutta method of order 2

Lets look at y' = f(x,y) and y(x_0) = y_0

Our issue point are the equations (using A as (x_n,y_n)):

k = h f(A)

l = h f(A+ΔA) and where ΔA = (αh,βk)

y_(n+1) = y_n + a·k + b·l

..and the 4 unknowns α,β,a,b

We see immediately (or not) that expanding f(A+ΔA), comparing with the

classical Taylor series and using the fact that y'' = f_x + f·f_y, leads to:

a + b = 1, bα = 0.5 and bβ = 0.5. We can choose: a = b = 0.5 and α = β = 1

Let's try to solve:

y' = (y/x) + sin [(y-x)/x] with y(1) = 3, spacing h = 0.2

and we need to determine y(1.2), y(1.4), y(1.6), y(1.8), y(2.0)

f:{(y%x)+sin (y-x)%x};o:3f;a:1f;h:0.2;i:5

o {l:h*f . A+(h,k:h*f . A:(y,x));x+0.5*l+k}\a++\0f,(i-1)#h


.. and we will get: 3.78969,4.592525, 5.403301, 6.219094, 7.03819

(exact values: 3.790758 4.594191 5.405322 6.221337 7.040578)
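One step of the scheme with the chosen constants a = b = 0.5, α = β = 1 (the Heun form) can be sketched in Python; the test problem y' = y, y(0) = 1 is an illustrative assumption, not the text's example:

```python
import math

def heun_step(f, x, y, h):
    """One Runge-Kutta step of order 2 with a = b = 0.5, alpha = beta = 1."""
    k = h * f(x, y)
    l = h * f(x + h, y + k)      # evaluate at the predicted endpoint
    return y + 0.5 * (k + l)

# illustrative check on y' = y, y(0) = 1: y(1) should approach e
x, y, h = 0.0, 1.0, 0.01
for _ in range(100):
    y = heun_step(lambda t, u: u, x, y, h)
    x += h
```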

28b) Runge-Kutta method of order 4

We apply a similar procedure as for the method of order 2, and we will end up with:

k = h f(x_n , y_n)

l = h f(x_n + 0.5h , y_n + 0.5k)

m = h f(x_n + 0.5h , y_n + 0.5l)

p = h f(x_n + h , y_n + m)

y_(n+1) = y_n + (k + p + 2(l + m))/6

Let's try to solve:

(x

and we need to determine y(2.0)

f:{(d-c)%(d:(y-x)*y+x)+c:2f*x*y};o:12f;a:4f;h:-0.1;i:20

last o {p:h*f . A + (h,m:h*f . A + 0.5*(h,l:h*f . A + 0.5*(h,k:h*f . A:(y;x))));x+(k+p+2f*l+m)%6f}\a++\0f,(i-1)#h


.. and we will get: 11.403124236864 (exact: 11.403124237433 and -1.4031242374328!)
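The four stages k, l, m, p mirror one classical RK4 step; a Python sketch, again checked on the illustrative problem y' = y rather than the text's equation:

```python
import math

def rk4_step(f, x, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k = h * f(x, y)
    l = h * f(x + 0.5 * h, y + 0.5 * k)
    m = h * f(x + 0.5 * h, y + 0.5 * l)
    p = h * f(x + h, y + m)
    return y + (k + 2.0 * l + 2.0 * m + p) / 6.0

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):                 # integrate y' = y from 0 to 1
    y = rk4_step(lambda t, u: u, x, y, h)
    x += h
```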

29. quadratic rest mod prime p > 2:

g:{0=(sum i<(floor (y*1_key 1+floor -0.5+i:x%2)) mod x) mod 2}

{c:x;i@where g'[c]x:i:1_key x} each 17 29 31 37 53

1 2 4 8 9 13 15 16

1 4 5 6 7 9 13 16 20 22 23 24 25 28

1 2 4 5 7 8 9 10 14 16 18 19 20 25 28

1 3 4 7 9 10 11 12 16 21 25 26 27 28 30 33 34 36

1 4 6 7 9 10 11 13 15 16 17 24 25 28 29 36 37 38 40 42 43 44 46 47 49 52

means for example: x² ≡ a (mod 17) is solvable exactly for a in 1 2 4 8 9 13 15 16
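Since the residues are just the distinct squares mod p, the lists can be cross-checked in a two-line Python sketch:

```python
def quadratic_residues(p):
    """The nonzero quadratic residues modulo an odd prime p, by direct squaring."""
    return sorted({(k * k) % p for k in range(1, p)})

qr17 = quadratic_residues(17)
```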

30. the predictor-corrector method for differential equations

The predictor-corrector method is a multistep method. Multistep methods are usually very fast and the accuracy of the results is impressive.

In a way, the linear multistep method represents a difference equation which approximates the differential equation. They are instruments for the (let's say) more sophisticated user. In our case we concentrate on multistep methods for a constant step size.

In our approach to solve y' = f(x,y) = f(x,y(x)) = F(x) we will approximate F(x) by the interpolation polynomial p

The C(-s,i) is the usual number of combinations of -s over i and the Δ f

in the backward form.

Our general mapping of the abscissa points: x

This integral we can represent by I = h∫

From the error definition of Newton's backward form we deduce that E(I) = h

If p=0 we talk about Adams-Bashforth methods, in case of p = 1 or 2 or 3 we talk about Milne's methods.

Edward Arthur Milne: 14.2.1896 in Hull,Yorkshire,England - 21.9.1950 Dublin,Ireland:

He was also interested in Relativity, Gravitation,World Structure and Kinematic Relativity (1948) - a theory which was an alternative to Albert Einstein's

general theory of relativity. Very similar to Albert Einstein, Milne made people rethink old ideas and led to new approaches to the

fundamental concepts of space and time. Milne's cosmological principle (where the universe is already infinite at the creation time)

can be described entirely within Euclidean geometry. One of Milne's first books was "Thermodynamics of the Stars" - 1930. From 1943 - 1945

Milne was the president of the Royal Astronomical Society in London. Milne died from a heart attack in Dublin while attending a conference of the

Royal Astronomical Society

John Couch Adams 5.6.1819 Lidkott,Cornwall, England - 21.1.1892 Cambridge: Adams was an astronomer and mathematician who, at the age of 24,

was the first person to predict the position of a planet beyond Uranus. He never boasted of his achievements and in fact he refused a knighthood which

was offered to him in 1847

Francis Bashforth (1819-1912), inventor of the Bashforth chronograph. A fellow of St John's, he was professor of applied mathematics

at Woolwich 1864-1872.

Developing I above: I ≅ h[f

So in case of the order 3 we will get y

The formula's from Adams-Moulton we can derive by the classical interpolation polynomial from Newton and the fact that

f[x

As x - x

This means E(I) = h∫

For p = 0 and m = 1 we obtain easily the corrector: y

Example: y' = cos (y-x) and y at -3,-2.8,-2.6 should be 2.565987,2.468988,2.35589

We are looking for y at -2.4,-2.2,...,0,0.2,0.4,...,4.0,4.2

f:{cos y-x};vx:|-3f+(h:0.2)*!n:3;vy:2.565987 2.468988 2.35589

x2:5f*x1:h%12f;x3:23 -16 5f;x4:5 8 -1f;pr:{(*y)+x1*+/x3*f .'+(x;y)};c2:();j:0;

do[#v:(*vx)+h*1+!34;

zv:(*vy)+x1*+/(1_x4)*f .'+(-1_vx;-1_vy);vj:v@j;

vy:n#(c1:{zv+x2*f . (vj;x)}/pr[vx;vy]),vy;vx:n#vj,vx;c2,:c1;j+:1;

]


c2 .. 2.642632 2.693582 2.712187 .. 4.210839 4.394837 4.580019

The maximal error will be in this case 0.001444267 in absolute value (not necessarily reached at end!!)

Conclusions:

The 4th order Runge-Kutta method is a good single-step method

but lacks a sufficient error control. The 3rd order predictor-corrector is

already a good method. The 4th order predictor-corrector is the most effective

of these methods. It has excellent stability and accuracy limits.

31. quadratic reciprocity:

/k4

f0:{1-2*1=mod[x;2]}

f1:{_ 0.25*(x-1)*y-1}

g:{$[x>y;:((mod . (x;y));y;z);:(y;x;z*f0@f1 . (x;y))]}

qr:{$[(x>2)&&/(x,y) in PRIMES;.z.s . g . (x;y;z);(x;y;z)]}

/PRIMES 2 .. p


Example:

Does x² ≡ 3821 (mod 252359) have solutions?

qr . (3821;252359;1) .. returns a reduced form .. (15 173 1)

Legendre:(15,173) = (3,173) • (5,173) = 1 as 173 ≡ 3 (mod 4) and multiplied by last element from the reduced form

we get 1. This means the congruence above does have 2 solutions. And indeed x

32a) higher orders of ordinary differential equations (ode's) and systems of ode's using Runge Kutta of order 2:

/k4

rk22:{(h+*x),(1_x)+0.5*(1_k)+h*F@\:x+k:h*1f,F@\:x}

/where F represents the list of functions

/h is the step size and x is the initial values


Example:

x

h = 0.1 , y(1) = 3, y'(1) = 1.25, y''(1) = 0.77

We are looking for y(a),y'(a),y''(a) where a in (1.1,1.2,1.3, ... ,1.9,2.0)

With:

F:({x@2};{x@3};{((log x0)-15*(4*x@1)+x0*(x0*x@3)+4*x@2)%x0*x0*x0:x@0})

xv:1 3 1.25 0.77; by (10 rk22\ xv)@\:1 we will obtain the y(a):

y(1.1) = 3.12885 (exact: 3.09924)

y(1.2) = 3.10208 (exact: 3.10068)

y(1.3) = 2.98739 (exact: 3.00910)

y(1.4) = 2.82461 (exact: 2.85650)

y(1.5) = 2.63874 (exact: 2.67156)

y(1.6) = 2.44559 (exact: 2.47441)

y(1.7) = 2.25491 (exact: 2.27774)

y(1.8) = 2.07245 (exact: 2.08896)

y(1.9) = 1.90130 (exact: 1.91199)

y(2.0) = 1.74286 (exact: 1.74857)

Friendly hint: decreasing the step size will improve the accuracy

Example: in case h = 0.05 .. we will obtain y(1.1) = 3.10232, y(1.3) = 3.00212

In case of h = 0.01 the outlier y(1.5) will become 2.67130

In case of h = 0.001 we will reach a match with the 5 digit exact values.

1000 rk22\ xv takes max 15ms on one 3Ghz CPU with a Barry White CD playing via Windows Media Player on the same machine

32b) higher orders of ordinary differential equations (ode's) and systems of ode's using Runge Kutta of order 4:

/k4

e:1%6;a:0.5

rk42:{p:h*F@\:x+h,m:h*F@\:x+a*h,l:h*F@\:x+a*h,k:h*F@\:x;(h+*x),(1_x)+e*k+p+2f*l+m}

/where F represents the list of functions

/h is the step size and x is the initial values


Example:

y''' + 2y'' + 5y' = x

h = 0.1 , y(1) = 1.3, y'(1) = 0.5, y''(1) = 0.77

We are looking for y(a),y'(a),y''(a) where a in (1.1,1.2,1.3, ... ,1.9,2.0)

With:

h:0.1;F:({x@2};{x@3};{(x@0)-(2f*x@3)+5f*x@2})

xv:1 1.3 0.5 0.77; by (10 rk42\ xv)@\:1 we will obtain the y(a):

y(1.2) = 1.411582 (exact: match )

y(1.3) = 1.472204 (exact: match )

y(1.4) = 1.533183 (exact: match )

y(1.5) = 1.592922 (exact: 1.592921)

y(1.6) = 1.650260 (exact: 1.650258)

y(1.7) = 1.704445 (exact: 1.704442)

y(1.8) = 1.755094 (exact: 1.755091)

y(1.9) = 1.802144 (exact: 1.802140)

y(2.0) = 1.845794 (exact: 1.845790)

32c) higher orders of ordinary differential equations (ode's) and systems of ode's using predictor/corrector method of the order 4:

/k4

c1:55 -59 37 -9f;c2:9 19 -5 1f

ha:(h:0.1)*a:1%24;f:{(1_*x)+ha*+/'c1*/:F@'\:4#x};g:{(1_x@1)+ha*+/'c2*/:F@'\:4#@[x;0;:;y]}

pcr4:init {(,y,(x@0,!#x) g/,y,f@x),x}/nxt

/where F represents the list of functions

/h is the step size,init - initial values, nxt - next abscissas

Example:

y''' + 2y'' + 2y' + y = x

h = 0.1 , "init" below

(x)5.3 (y)3.255877 (y')1.075979 (y'') -0.026863510

(x)5.2 (y)3.148154 (y')1.078380 (y'') -0.021017100

(x)5.1 (y)3.040222 (y')1.080152 (y'') -0.014276980

(x)5.0 (y)2.932147 (y')1.081204 (y'') -0.006613907

We are looking for y(a),y'(a),y''(a) where a in (5.4 ,..., 6.0)

h:0.1;F:({x@2};{x@3};{(x@0)-(x@1)+2f*+/x@2 3})

by init pcr4/ 5.4 5.5 5.6 5.7 5.8 5.9 6.0 we will obtain the matrix

6.0 4.000189 1.048414 -0.046123930

5.9 3.895118 1.052993 -0.045371760

5.8 3.789594 1.057468 -0.044034900

5.7 3.683630 1.061779 -0.042062910

5.6 3.577246 1.065858 -0.039406340

5.5 3.470469 1.069636 -0.036017420

5.4 3.363331 1.073036 -0.031850660

5.3 3.255877 1.075979 -0.026863510

5.2 3.148154 1.078380 -0.021017100

5.1 3.040222 1.080152 -0.014276980

5.0 2.932147 1.081204 -0.006613907

the match is 100% with the exact solution (to 6 decimal digits)

33. A first look at parabolic partial differential equations ..

As model case we look at the heat equation ∂u/∂t = η ∂²u/∂x²

η is the coefficient of the thermal conductivity ..

η = k/(c ρ) .. k = thermal conductivity [J/m s Kelvin], c = specific heat [J/kg Kelvin], ρ = density

We are looking at a thin circular rod, length L - the rod has an insulated lateral side along the x-axis

Endpoint A (in the origin) is kept at temperature T_a, endpoint B at temperature T_b,

in steady state condition. The temperature in A we then change to T_a1

The temperature profile change over time we can express with an explicit finite difference method ..

{ta1,(g@/:x@l),tb}/ v .. where g:{+/0.5*x} (each inner point becomes the average of its neighbours; valid for η Δt/(Δx)² = 1/2)

or with the implicit finite difference method (here the Crank-Nicolson form)

n {v:0.25*ta21,(,/x),t2b;r:q . (v;h);M::m,'r;,-1_{(+/'M*\:x),1f}/st}/,w

where n = number of time-intervals, ta21 = 2*ta1, t2b = 2*tb, st = 1 1 ... 1f .. a start vector

w = initial temperature distribution, and using the convergence condition 1 = η Δt/(Δx)²

q is {(x@y)+x@y+2}, r the residuals in m, defined by the known values u_{i,j} of the previous time step

i is the index on the x-axis (x = iΔx) and j the index of the time-intervals
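A minimal Python sketch of the explicit step, assuming the choice η Δt/(Δx)² = 1/2 (so each inner point becomes the mean of its neighbours, as g does above); the boundary temperatures 100 and 20 are illustrative values, not from the text:

```python
def heat_step(u, ta1, tb):
    """One explicit time step for eta*dt/dx^2 = 1/2: every interior point
    becomes the average of its two neighbours, the ends stay fixed."""
    inner = [0.5 * (u[i - 1] + u[i + 1]) for i in range(1, len(u) - 1)]
    return [ta1] + inner + [tb]

# rod initially at 0 degrees, ends suddenly held at 100 and 20 (hypothetical numbers)
u = [100.0] + [0.0] * 9 + [20.0]
for _ in range(2000):
    u = heat_step(u, 100.0, 20.0)
# u has essentially reached the linear steady-state profile between 100 and 20
```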

34. A side glance on Black-Scholes ..

Let be S = asset, t = time, μ = drift rate , σ = volatility , X = Wiener process

asset return = dS/S = μdt + σdX. As dX ≅ ξ√dt (ξ standard normal) it

follows that E(dS) = Sμdt and VAR(dS) = E(σ²S²dX²) = σ²S²dt

From f = f(S) follows df = (∂f/∂S)dS + 0.5(∂²f/∂S²)dS² + ..

dS² = σ²S²dt + higher order terms in dt

So, df = σS(∂f/∂S)dX + (μS(∂f/∂S) + 0.5σ²S²(∂²f/∂S²))dt

resp if f = f(S,t) and by using dS = SσdX + Sμdt

we get df = σS(∂f/∂S)dX + (μS(∂f/∂S) + 0.5σ²S²(∂²f/∂S²) + ∂f/∂t)dt

now we turn to Black-Scholes for options .. let be V value of one option (we don't care if it is a put or call at this stage)

Let be portfolio Λ = V - ΔS with Δ (a multiple of the underlying asset) unchanged during time-jump dt

And so:

dΛ = σS(∂V/∂S)dX + (μS(∂V/∂S) + 0.5σ²S²(∂²V/∂S²) + ∂V/∂t)dt - ΔdS =

(σS(∂V/∂S) - ΔσS)dX + (μS(∂V/∂S) + 0.5σ²S²(∂²V/∂S²) + ∂V/∂t - ΔμS)dt

Eliminating the randomness (choosing Δ = ∂V/∂S) leaves us with dΛ = (0.5σ²S²(∂²V/∂S²) + ∂V/∂t)dt

Then the no-arbitrage equation Λrdt = dΛ, ie r(V - S(∂V/∂S))dt = (0.5σ²S²(∂²V/∂S²) + ∂V/∂t)dt

This implies the partial differential equation of Black-Scholes for options:

- ∂V/∂t = 0.5σ²S²(∂²V/∂S²) + rS(∂V/∂S) - rV

We set: x = ln S

So, - ∂V/∂t = 0.5σ²(∂²V/∂x²) + (μ - 0.5σ²)(∂V/∂x) - rV

in case δ a dividend yield and r the risk-less rate. (μ = r - δ).

When applying Crank-Nicolson (CN) we have to consider that Δx ≥ σ√(3Δt)

The accuracy is O((Δx)²) + O((Δt)²). CN

is a fully centered method, because it replaces the space-time derivatives with finite differences centered

at an imaginary time step j + 0.5.

Although we might get similar formulas like we would see in the purely stochastic trinomial method - the values

p_u, p_m, p_d are no transition probabilities here.

But it can be shown that this implicit difference method is equivalent to a generalized discrete stochastic process

in which the asset price may jump to every node on the grid at the next time-step. The variance of the discrete

process is an upward-biased approximation to the continuous time GBM (geometric Brownian motion) process.

The involved tri-diagonal matrix system we can blow away with a K one-liner:

s:{y-pd*x%z}\[p:(r*pd)+ml@c;-1_2_|ml;u:-1_b:N2 {pm-ud%x}\ pm+pd]

N2 = 2N .. and where 2N + 1 represents the number of jumps on the last time step

ml .. payoff's on the last time step

c .. 2N-1

r .. residual (here for put-options: C

If Θ = σ

The crucial part of the CN method is another K one-liner:

{(z-pu*x)%y}\[((last s)%pu+last b);|u;|-1_p,s]
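The two scans are the forward-elimination and back-substitution halves of the classical tridiagonal (Thomas) solver. A generic Python sketch of that solver (not a transcription of the option pricing code; the names are ours):

```python
def thomas(lower, diag, upper, rhs):
    """Solve lower[i]*x[i-1] + diag[i]*x[i] + upper[i]*x[i+1] = rhs[i]
    for a tridiagonal system; lower[0] and upper[-1] are unused."""
    n = len(diag)
    c = [0.0] * n            # modified upper diagonal (forward sweep)
    d = [0.0] * n            # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n            # back substitution
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```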

35. highest exponent of a prime ..

k)+/_n%(_(log n)%log p) {p*x}\p

p:23j;n:3000; ... 135

p:97j;n:27500; .. 285

k)-/+/_(c,n)%/:(_(log c:n*2)%log p) {p*x}\p

p:83j;n:17121; ... 208
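The first k line is Legendre's formula - the exponent of a prime p in n! is Σ_i ⌊n/pⁱ⌋. In Python:

```python
def prime_exponent(n, p):
    """Highest e such that p**e divides n!  (Legendre's formula)."""
    e, q = 0, p
    while q <= n:
        e += n // q       # multiples of p, p^2, p^3, ... up to n
        q *= p
    return e

# prime_exponent(3000, 23) -> 135   (floor(3000/23) + floor(3000/529) = 130 + 5)
```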

36. a first look at elliptic partial differential equations..

Let's look at the equation ∂²u/∂x² + ∂²u/∂y² = -ρ(x,y)/k on the rectangular plate ABCD:

A := intersection of x = 0 and y = 0, B := intersection of x = a and y = 0

C := intersection of x = a and y = b, D := intersection of x = 0 and y = b

The left side (AD) is kept at the constant temperature ϑ1, the right side (BC) at ϑ2

We assume side (AB) is insulated and over the top side (DC) air of temperature ϑe is streaming

The heat at (DC) is escaping through heat convection.

Let be:

β the convection heat transfer coefficient , ϑe the air temperature

k thermal conductivity of the material , ρ(x,y) is the resistance heating function

(1) No flux condition for the insulated part: -k u_y(x,0) = 0

(2) Convection: -k u_y(x,b) = β (u(x,b) - ϑe)

Just for practical reasons we choose Δx = Δy = h

Let be i ∈ (0,1,2,...,M) the vertical grids, and j ∈ (0,1,2,..,N) the horizontal grids

From (1) we follow: u_{i,-1} = u_{i,1}

From (2) we follow: u_{i,N+1} = u_{i,N-1} - (2βh/k)(u_{i,N} - ϑe)

The u values outside the grid we can eliminate this way from the difference equations.

If ξ_{i,j} := h² ρ(x_i,y_j)/k, the 5-point difference stencil reads:

u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4u_{i,j} = -ξ_{i,j}

(A)The central part (j ∈ (1,2,..,N-1) and i ∈ (1,2,...,M-1)) of the plate we can represent by the "mpart" ..

m0:(N-3) {x,,m}/ ,m:(M-1) fu/ 0

q)mx:enlist (M-1) fs/ enlist 1 -4 1f

q)m1:enlist (M-1) fu/ 1

q)mpart:raze each flip raze each flip ((N-1),N+1)#raze (neg key N) rotate\:m3:m1,mx,m1,m0

(B)The insulated part (j = 0 and i ∈ (1,2,...,M-1)) of the plate we can represent by the "upart" ..

m0:(N-2) {x,,m}/ ,m:(M-1) fu/ 0

q)m1:enlist (M-1) fu/ 2

q)upart:flip raze mx,m1,m0

(C)The convection part (j = N and i ∈ (1,2,...,M-1)) of the plate we can represent by the "bpart" ..

q)m1:enlist (M-1) fu/ (-8)

q)mx:enlist (M-1) fs/ enlist -4f,((8%k)*(2*k)+b*h),-4f

q)bpart:flip raze m0,m1,mx

with:

fu:{y*(x,x)#@[(x*x)#0f;(x+1)*!x;:;1f]}

q)fs:{flip -1 _ 1 _ flip (neg key x) rotate\:y,(x-1)#0f}

As final part the residuals of the equation system ..

r0:,/N#,@[(M-1)#0f;0,M-2;:;1f*T1,T2]

q)r1:alpha * g .'0.1*(1_key M) cross key N

resid1:-1f*r0+r1

r0:(M-1)#Te*8f*h*b%k

q)r1:4f*alpha*g .'0.1*(1_key M) cross N

r2:@[(M-1)#0f;0,M-2;:;4f*T1,T2]

resid:resid1,r1+r2+r0

q)TempDistribution:((N+1),M-1)#(inv matr) mmu resid

Friendly Hint: a direct indexing would probably be a more practical way, making (A), (B) and (C) unnecessary

The whole code will become *much* shorter in the end...

37. a first look at hyperbolic partial differential equations.. (explicit method)

From a purely technical aspect the vibrating strings are not extremely meaningful, but they are good enough

as a model case.

Let be T the stress force. We look at a string of length L fixated in the origin of our coordinate system and also fixated at x = L.

The forces in the y-direction:

F_y(x+Δx) = T sin α(x+Δx) ≈ T (∂y/∂x)|_{x+Δx}

F_y(x) = -T sin α(x) ≈ -T (∂y/∂x)|_x

The forces in the x-direction:

F_x(x+Δx) = T cos α(x+Δx) ≈ T

F_x(x) = -T cos α(x) ≈ -T

The deflection in x-direction is 0. The mass m = ρ Δs , where ρ = line density.

So, m ∂²y/∂t² = T ((∂y/∂x)|_{x+Δx} - (∂y/∂x)|_x)

As ∂y/∂x much smaller than 1, we can neglect (∂y/∂x)² and set Δs ≈ Δx; in the limit Δx → 0:

∂²y/∂t² = c² ∂²y/∂x² with c² = T/ρ

The wave equation is a hyperbolic partial differential equation which can be solved by variable separation .. y(x,t) = X(x)T(t).

As there are infinite eigen-frequencies we have exactly one n per solution:

y_n(x,t) = sin(nπx/L) (A_n cos ω_n t + B_n sin ω_n t)

So, y = Σ_n y_n with ω_n = nπc/L

= eigen-frequency. The discrete parameter n is called quantum number in quantum physics.

The explicit finite difference method for our 1-dim wave equation we can express simply by:

k {(r;(0f,(+/(r:x@1)@h2),0f)-x@0)}/ (t1;t0)

describing the state of the string at kΔt and fixated on the 2 ends.
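A Python sketch of the explicit step at unit Courant number cΔt/Δx = 1 (the case the k line implements, ends fixed at 0); with this choice the scheme transports d'Alembert solutions exactly along the grid, which the standing-wave example below uses (grid and wave are illustrative, not from the text):

```python
import math

def wave_step(u_prev, u_curr):
    """Leapfrog step for u_tt = c^2 u_xx with c*dt/dx = 1:
    u_i(j+1) = u_{i-1}(j) + u_{i+1}(j) - u_i(j-1), ends kept at 0."""
    n = len(u_curr)
    nxt = [0.0] * n
    for i in range(1, n - 1):
        nxt[i] = u_curr[i - 1] + u_curr[i + 1] - u_prev[i]
    return u_curr, nxt

# standing wave sin(pi x) cos(pi t) on [0,1]; h = dt = 0.1 (illustrative grid)
h = 0.1
t0 = [math.sin(math.pi * i * h) for i in range(11)]
t1 = [math.sin(math.pi * i * h) * math.cos(math.pi * h) for i in range(11)]
prev, curr = t0, t1
for _ in range(50):
    prev, curr = wave_step(prev, curr)
```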

38. Re-visiting the year 1680

/k4

f:{((*x)*last x)-c*c:x@1}

last'1_10 {(2#r),f@r:(+/u),u:2#x}\ (0 1f) ... 1 -1 1 -1 1 -1 1 -1 1 -1f ...

Given a Fibonacci sequence {x_n} (x_0 = 0 , x_1 = 1) ..

Then x_{n-1} x_{n+1} - x_n² = (-1)^n (Cassini's identity)

So, if x_{n-1} x_{n+1} - x_n² = (-1)^n holds, then x_n x_{n+2} - x_{n+1}² = x_n(x_n + x_{n+1}) - x_{n+1}² = -(x_{n-1} x_{n+1} - x_n²) = (-1)^{n+1}

As for x_0 x_2 - x_1² = 0·1 - 1 = (-1)^1 .. the induction can start

This proof was carried out the first time in the year 1680 by the Italian-French mathematician and astronomer

Giovanni Domenico Cassini (1625 - 1712)
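The identity is easy to check mechanically; a small Python counterpart of the k line above:

```python
def fib(n):
    """Iterative Fibonacci with x_0 = 0, x_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def cassini(n):
    """x_{n-1}*x_{n+1} - x_n^2, which equals (-1)^n."""
    return fib(n - 1) * fib(n + 1) - fib(n) ** 2
```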

39. The extended Euclidean Algorithm

q)f:{d:x@0;c:x@1;(c;d mod c;d div c)}

g:{(x,'((x0@0)-y*(x0:x@0)@1;(x1@0)-y*(x1:x@1)@1))[;1 2]}

/a,b given where a > b

last@+last (1,-1*q2;(-1*q1),1+(q2:last@u@1)*q1:last@u@0) g\ 2_(u:1_-3_f\ (a,b))[;2]

a:23311;b:17331

..will return 6460 -8689 .. ie: 6460a - 8689b = 1 (as a,b coprime)

The function above does not return a value if the job is too easy, this means if the iteration

is finished already during the preparation step when the classical algorithm of Euclid on the

right side returns a matrix "u" which has only one row .. for example a=11 , b=5 .. 11 = 2*5 + 1.

In this case (1,-2) is the obvious result.

Such a whole series of trivial solutions we would obtain when solving:

18x + 27y + 11z + 5w = 7 .. one solution: (x,y,z,w) = (1,1,-38,76)
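For comparison, the textbook recursive form of the extended algorithm in Python (returning the Bezout coefficients; a = 23311, b = 17331 is the example above):

```python
def egcd(a, b):
    """Return (g, x, y) with x*a + y*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    # back-substitution step of the extended algorithm
    return g, y, x - (a // b) * y

g, x, y = egcd(23311, 17331)   # Bezout pair for the coprime example above
```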

The extended Euclidean Algorithm can be traced to the "Aryabhatya" (approx. year 499) by Aryabhata[476-550]

of northern India. His key contribution was in the field of the "number theory" - in special the Diophantine equations.

Aryabhata found also a good estimate for π with 3.1416.

40. a first look at hyperbolic partial differential equations.. (implicit method)

/k4 .. solving u(x,t)

n {(im$(1.6*i)+((0f,(x@1),0f) f2/:o);i:*x)}\ (u2;u1)

and where f2 can be like {(-1f*x@y)+0.1*sum x@y+1,-1}

For the time derivatives we take the classical central differences, for the space derivatives we choose

the average of the central differences of the known j-1 time points and the unknown j+1 time points.

The "o" represents just a series of supporting points.

The n represents the number of future time-jumps. "im" represents the inverted characteristic system matrix.

The start values u2 and u1 will be (in our case) completely determined by

(a) the initial deflection at time j = 0; u(x;0) = h(x)

and

(b) the initial speed profile at time j = 0; ∂u/∂t (x;0) = g(x)

41a) Eigenvalues and Eigenvectors, Powermethod

Here we assume:

real quadratic matrix M, |λ_1| > |λ_2| ≥ ... ≥ |λ_n|, eigenvectors v_1,...,v_n forming a basis

We define: y_j := M y_{j-1} = M^j y_0 .. with a start vector y_0 = Σ_i ξ_i v_i

And so: M^j y_0 = Σ_i ξ_i λ_i^j v_i = λ_1^j (ξ_1 v_1 + Σ_{i>1} ξ_i (λ_i/λ_1)^j v_i)

As |λ_i/λ_1| < 1 for i > 1, the term in brackets converges to ξ_1 v_1

Therefore the normalization factors of the iterates converge to the dominant eigenvalue λ_1 .. provided

ξ_1 ≠ 0

{(r%u;u:|/r:M$*x)}/(i0;0)

i0 is our start vector ξ_0

a more robust version:

{u:(mv,-1f*mav)@((mav:|/-1f*r)>mv:|/r:m$*x);:((flat@r%u);u)}/(s;0)

and flat:{@[x;&1e-14>abs x;:;0f]}

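The iteration, sketched in Python for a small symmetric example matrix (hypothetical data; normalization by the absolutely largest component, as in the robust k version):

```python
def power_method(mat, v, iters=100):
    """Power iteration: apply the matrix, normalize by the component of
    largest magnitude; that factor converges to the dominant eigenvalue."""
    lam = 0.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in mat]
        lam = max(w, key=abs)       # signed normalization factor
        v = [c / lam for c in w]
    return lam, v

lam, vec = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
# dominant eigenvalue of [[2,1],[1,2]] is 3, eigenvector (1,1)
```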
41b) school definition of a determinant A

|A| = ∑_{σ ∈ S_n} sgn(σ) ∏_{i=1}^{n} a_{i,σ(i)}

q)f:{x cross y}

p:u@&n={#?x}'u:s f/(n-1)#,s:!n:#A;j:(1_s)_\:s

q)i:raze raze (-1_s) cross/:'j

signs:-1+2*(+/'>/'+:'p@\:i) in even

/even is just a set of positive even numbers beginning with 0

sum signs**/',/A ./:/:/:,s,'/:p

A defined as:

19 18 21 18 11 23

18 33 18 10 12 15

17 21 20 18 13 12

18 15 18 29 18 10

19 20 11 27 18 15

11 13 15 17 19 12

⇒ |A| = 12023
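The school definition, written out in Python (n! terms, so only for small matrices; the sign comes from the inversion count of the permutation):

```python
from itertools import permutations

def leibniz_det(a):
    """|A| = sum over all permutations s of sign(s) * prod_i a[i][s(i)]."""
    n = len(a)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= a[i][p[i]]
        total += (-1) ** inv * prod
    return total
```

The 6x6 example above can be fed in the same way.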

41c) Jacobi's method ..

Let be A,B similar matrices. ⇒ ∃ P with B = P⁻¹AP

.. B - λI = P⁻¹AP - λI = P⁻¹(A - λI)P ⇒ similar matrices share the characteristic polynomial, ie the eigenvalues

But: assuming y = Px implies immediately that x ≠ y in general.

⇒ if By = λy and x = Py, then Ax = APy = PBy = λPy = λx .. the eigenvectors transform with P

If A is a n x n symmetric and real matrix ⇒ ∃ P an orthogonal transform with AP = PD and D diagonal.

Intuitively this equation suggests already that there will be a sequence of orthogonal matrices {O_j}

(product of orthogonal matrices is also orthogonal) with O_jᵀ..O_1ᵀ A O_1..O_j converging to D

Within this process let's always choose a_pq as the absolutely largest off-diagonal element and rotate so that a'_pq = 0

This implies: (a'_pp)² + (a'_qq)² = (a_pp)² + (a_qq)² + 2(a_pq)²

In contrast to Σ_i (a_ii)² the full sum Σ_{i,k} (a_ik)² is invariant under orthogonal transforms, so for S(A) := Σ_{i≠k} (a_ik)²

holds: S(A') = S(A) - 2(a_pq)²

But this means, as a_pq is maximal: 2(a_pq)² ≥ 2 S(A)/(n² - n)

.. and so S(A') ≤ (1 - 2/(n² - n)) S(A)

And so: S(A^{(j)}) ≤ (1 - 2/(n² - n))^j S(A)

consequently, if j goes to infinity then all squares of the non-diagonal elements will converge to 0.

maximal element in the upper triangle .. m is the n x n matrix, m1 is the upper triangle 1-matrix

fpq:{(y div x;y mod x)}

pqv:m . pq:n fpq/h?|/h:abs@,/m*m1

indices p,q and values of the orthogonal rotation matrix

ppv:m . p,p:*pq

qqv:m . q,q:pq 1

elements of the rotation matrix

lamb:2f*pqv;mu:qqv-ppv;omeg:sqrt@+/i*i:lamb,mu

case when lamb>0 and mu>0; sphi = sin φ, φ = rotation angle

sphi:sqrt (omeg-mu)%2f*omeg

elements of the transformed similar matrix (1st move towards diagonalization)

pqx:&~xn in pq

h02:(cphi;-1f*sphi)

bm[p;pqx]:+/'h02 */:l1:{m . x}''pq,'/:pqx

bm[pqx;p]:+/'h02 */:l2:{m . x}''pqx,'\:pq

h03:(sphi;cphi)

bm[q;pqx]:+/'h03 */:l1

bm[pqx;q]:+/'h03 */:l2

the rotation matrix

/Im:1f*nn#(n {x,1,n#0}/ ()),1 is static

O:Im;O[q;q]:O[p;p]:cphi;O[q;p]:-1f*O[p;q]:sphi

example 5 x 5 matrix:

M:(2 5 1 4 -6;5 10 9 -7 10;1 9 -8 3 -6;4 -7 3 2 11;-6 10 -6 11 -3)

eigenvalues:5.764826 , 17.29619 , -5.73115 , 11.01596 , -25.34583 (prd = 159554)

eigenvectors (just rounded): e_1 .. e_5 (the numeric rows are not reproduced here)

41d) QR method (here for real Eigenvalues)

Each quadratic matrix A can be factorized as Q*R, where Q orthogonal and R a right triangular matrix.

The key process consists in the multiplication by left by suitable rotation matrices. The multiplication

will/should consecutively eliminate all matrix elements below the main diagonal. This elimination is carried

out column by column. The QR-method is in the general case less accurate for positive definite matrices than

the Jacobi method. The QR algorithms are very popular, widely used, numerically very stable (but not infallible)

In order to eliminate the matrix element a_qp below the diagonal

we need the rotation matrix O_pq acting on the rows p and q

The condition a'_qp = 0 yields: sin φ = a_qp/√(a_pp² + a_qp²) , cos φ = a_pp/√(a_pp² + a_qp²)

Below a student version (working in acceptable time up to 70 x 70 real matrices with real eigenvalues)

of the QR-method

QRc2:{aqp:x[y;z];aqq:x[y;y];app:x[z;z];

sif:aqp%h22:sqrt (app*app)+aqp*aqp;cif:app%h22;

bqj:(-1f*sif*c3:x[z;])+cif*c2:x[y;];bpj:(cif*c3)+sif*c2;

x[y;]:bqj;x[z;]:bpj;aip:(cif*c3:x[;z])+sif*c2:x[;y];

aiq:(-1f*sif*c3)+cif*c2;x[;z]:aip;x[;y]:aiq;

x}

do[n-1;o0:(j_l),\:j;i0:o0[;0];i1:o0[;1];

while[1b in ERROR < abs@M ./: o0;M:QRc2/[M;i0;i1];i+:1];M[i0;j]:0f;j+:1]

Example:

8 4 6 1 3

2 2 5 1 1

6 5 1 1 3

1 2 2 2 1

3 8 6 1 6

returns M value ..

17.24937 -3.656361 -1.247661 -2.182439 +5.053954

+0.00000 +3.223795 -0.939394 -1.619626 +3.482991

+0.00000 +0.000000 -4.107825 +1.251698 -1.246878

+0.00000 +0.000000 +0.000000 +1.917964 +0.102689

+0.00000 +0.000000 +0.000000 +0.000000 +0.716699

The eigenvalues we find in the diagonal.
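A generic Python sketch of the QR factorization by rotations (not the k student version; the 3x3 example data in the test is hypothetical):

```python
import math

def givens_qr(a):
    """Factor A = Q*R: rotations eliminate, column by column, every
    element below the main diagonal. Returns (Q, R)."""
    n = len(a)
    r = [row[:] for row in a]
    g = [[float(i == j) for j in range(n)] for i in range(n)]   # accumulates Q^T
    for j in range(n - 1):
        for i in range(j + 1, n):
            if r[i][j] == 0.0:
                continue
            h = math.hypot(r[j][j], r[i][j])
            c, s = r[j][j] / h, r[i][j] / h
            for k in range(n):
                r[j][k], r[i][k] = c * r[j][k] + s * r[i][k], -s * r[j][k] + c * r[i][k]
                g[j][k], g[i][k] = c * g[j][k] + s * g[i][k], -s * g[j][k] + c * g[i][k]
    q = [list(col) for col in zip(*g)]    # Q = (product of rotations)^T
    return q, r
```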

41e) Givens' rotations ..

Contrary to the classical Jacobi approach we are looking for rotations which set all elements below the sub-diagonal

successively to 0. For the elimination of the element a_ij (i > j+1)

we use the classical similarity transformation A' = Oᵀ A O.

When we calculate the conditions resulting from the equation above, we need to be aware that φ ∈ [-π/2 , π/2],

so we are not free to set the signs arbitrarily.

The k4 code (student version) resulting from all this consists basically of 3 key-steps:

/the rotation pairs

i:1+l:j+1;pq:l,'b:l _ t;

/the rotation angle

if[ERROR < abs@ij:A[i;j];h:sqrt@(u@1)+*u:d*d:jj,ij;w:h*sg@jj;cf:jj%w;sf:-1f*ij%w]

/the transforms

bpk:+/(cf,-1f*sf)*a:B[pqtmp;b] and akp:+/(cf,-1f*sf)*a:+A[;pqtmp]

Example - the real 5x5 matrix below has to be transformed to Hessenberg form using Givens' rotations .. :

(This sample code works in acceptable time up to 150 x 150 real matrices)

-12 40 12 28 11

-14 17 11 14 10

-25 15 11 10 12

-13 51 11 15 17

-90 98 97 90 96

Result:

-12.00 +23.22 +5.79 +12.99 -043.68

-95.34 145.72 51.49 +62.42 -104.27

+00.00 -23.23 -8.41 -06.39 +023.13

+00.00 +00.00 +0.55 -14.58 -022.85

+00.00 +00.00 +0.00 +04.51 +016.26

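The same rotations applied from both sides (as a similarity transform) give the Hessenberg reduction; a generic Python sketch, run against the 5x5 example above:

```python
import math

def givens_hessenberg(a):
    """Reduce A to upper Hessenberg form by Givens similarity transforms
    H = Q A Q^T; the eigenvalues stay untouched."""
    n = len(a)
    h = [row[:] for row in a]
    for j in range(n - 2):
        p = j + 1                               # pivot row just below the diagonal
        for i in range(j + 2, n):
            if h[i][j] == 0.0:
                continue
            r = math.hypot(h[p][j], h[i][j])
            c, s = h[p][j] / r, h[i][j] / r
            for k in range(n):                  # rotate rows p and i
                h[p][k], h[i][k] = c * h[p][k] + s * h[i][k], -s * h[p][k] + c * h[i][k]
            for k in range(n):                  # same rotation on columns p and i
                h[k][p], h[k][i] = c * h[k][p] + s * h[k][i], -s * h[k][p] + c * h[k][i]
    return h

A = [[-12, 40, 12, 28, 11], [-14, 17, 11, 14, 10], [-25, 15, 11, 10, 12],
     [-13, 51, 11, 15, 17], [-90, 98, 97, 90, 96]]
H = givens_hessenberg([[float(v) for v in row] for row in A])
```

H[1][0] comes out as +95.34 (= √9090) here while the printed result shows -95.34; as the text notes for the Givens variants, such differences are only sign conventions of the rotations.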
41f) fast Givens' rotations (FGR) ..

The concept is (in the first step) to factorize matrices A and A' using convenient diagonal matrices R and R':

A = R A* and A' = R' A*'

Replacing all elements of the classical (p,q)-Givens rotation will lead to:

a'_pj = c a_pj + s a_qj

and

a'_qj = -s a_pj + c a_qj

This allows 4 possibilities to achieve one standalone term in each line, thus cutting the multiplication count by a half.

In principle we could be happy with just one possibility, but in order to keep boundary rotation angles(those leading to vanishing values) away from our

process (ie 0 and ± π/2) we should consider 2 alternatives. (|c| ≥ |s| and the opposite)

The first alternative implies:

a*'

a*'

Considering the "wish" that a*'

only need the square values of D during the entire iteration process(beginning with D = I).

In other words .. (for the 1st alternative) ..

We will meet a corresponding situation when we use the rotations as similarity transforms:

A = R A* R and A'' = R' A*'' R'. When we are done with the recursion, the final Hessenberg we receive by R

/snippets .. student version (z1 and z2 are the zeta's above)

z1:z2*dp%dq;t:z1*z2;c:1f%1f+1f%t;s:1f%1f+t

/t>=1f;

btsp:A[p;]+A[q;]%z1;btsq:A[q;]-A[p;]%z2

A[p;]:btsp;A[q;]:btsq;]

...

D[p;p]:c*D[p;p];D[q;q]:c*D[q;q]]

/end..

Ds:sqrt D;H:(Ds mmu A) mmu Ds;

Example - the real 5x5 matrix below has to be transformed to Hessenberg form using fast Givens' rotations .. :

5 4 3 2 1

4 6 0 4 3

3 0 7 6 5

2 4 6 8 7

1 3 5 7 9

Result(rounded here to 3 digits):

5.000 +07.500 +00.000 +0.000 +0.000

7.500 +26.125 -17.980 +0.000 +0.000

0.000 -17.980 +18.350 -5.138 +0.000

0.000 +00.000 -05.138 +7.843 -4.805

0.000 +00.000 +00.000 -4.805 +7.086

Because the original matrix was symmetric we got a tridiagonal form

Friendly hint: using the classical Givens method can eventually lead to different signs,

but a closer look will tell us that mathematically the results represent the same matrix.

PS: the student version will not yield the full benefit we expect. The code will be only 10-15% faster;

a more sophisticated implementation will converge towards the expected 50% speed-up.

42. Hyman's method for Hessenberg-type matrices

let be H := {h_ik} an upper Hessenberg matrix, ie h_ik = 0 for i > k+1

We further assume that all subdiagonal elements are not equal to 0. (otherwise the problem can be simplified)

The method is based on the concept that for the Hessenberg-type we can construct a series of multipliers τ_i

so that the linear combination ∑_i τ_i c_i of the columns c_i of A := H - λI vanishes in every component except the first one.

This means we defined the following recursive process:

τ_n := 1 ; τ_{i-1} := -(∑_{k≥i} a_{i,k} τ_k)/a_{i,i-1} for i = n,...,2

The construction process tells us something else: the τ_i are rational functions of λ, and the first component left over is

ξ(λ) := ∑_k a_{1,k} τ_k

This means the determinant of the original matrix H - λI can be represented as:

P(λ) = (-1)^{n-1} ξ(λ) ∏_{i=2}^{n} h_{i,i-1}

Its derivative we can obtain easily, so that we are in the position now to apply Newton's method.

the τ_i ..

(-1f*%%/last B) {(((-1f*last y)-+/x*-1_1_y)%*y),x}/C

the ξ ..
{((z-+/x*-1_1_y)%*y),x}/[%*last B;C;-1_|xv];

and where H is the Hessenberg Matrix, l = eigenvalue(start value), Im = unit matrix
A:H-l*Im

B:(-1* -1_n-!n)#'1_A

C:|(n-2)#B

As for Ξ := (τ_1,...,τ_n) .. for λ an eigenvalue

the Ξ will represent the eigenvector for λ. Hyman's method at least theoretically gives us also the eigenvectors, but the

process evolving the τ_i is numerically not stable enough in general, so

we have to consider an alternative way for the eigenvectors.
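The τ recursion and the determinant value P(λ), sketched in Python (a generic re-implementation, not the k snippet; hmat must have a nonzero subdiagonal):

```python
def hyman_det(hmat, lam):
    """det(H - lam*I) for an upper Hessenberg H via Hyman's tau recursion:
    tau_n = 1; row i of A := H - lam*I determines tau_{i-1}."""
    n = len(hmat)
    a = [[hmat[i][j] - (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    tau = [0.0] * n
    tau[n - 1] = 1.0
    for i in range(n - 1, 0, -1):
        s = sum(a[i][j] * tau[j] for j in range(i, n))
        tau[i - 1] = -s / a[i][i - 1]          # subdiagonal element, assumed nonzero
    xi = sum(a[0][j] * tau[j] for j in range(n))
    sub = 1.0
    for i in range(1, n):
        sub *= a[i][i - 1]
    return (-1) ** (n - 1) * xi * sub
```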

43. Reduced positive definite binary quadratic forms

Δ(p,q,r,s) := ps - qr (determinant)

Let be Z the set of whole numbers and g(x,y) := ax² + bxy + cy² a positive definite form with a,b,c ∈ Z

ρ is the smallest natural number represented by g(x,y) ⇒ ∃ (α,γ) ∈ Z² with g(α,γ) = ρ and gcd(α,γ) = 1 .. and ∃ (β,δ) ∈ Z² with αδ - βγ = 1,

as ρ minimal. Then x = αξ + βφ and y = γξ + δφ will define a new equivalent g':=[ρ,κ,λ].

Any further mapping ξ := x' - ϑy' and φ = y' defines a further equivalent g'':=[ρ,β',γ'], for any choice of ϑ ∈ Z.

Choosing ϑ as the closest number to 0.5*κ/ρ will automatically lead to 0 ≤ |β'| ≤ ρ. But as g and g'' equivalent,

γ' can be represented by g, in other words ρ ≤ γ' ⇒ 0 ≤ |β'| ≤ ρ ≤ γ' ⇒ g'' is reduced.

The proof implicitly defines a recursive algorithm which we can easily express by a simple k4 line:

h:{_last((b-a),(b:x@1)+a)%2f*a:x@0}

qf:{if[(a:x@0)>c:x@2;x:|x];if[(a:x@0)< abs b:x@1;s:(x@2)+((-b)+k*a)*k:h@a,b;x:a,(abs b-2*a*k),s];:x}

Sample:

qf\ 1137 2247 1111f .. (1137 2247 1111f ; 1111 25 1f ; 1 1 955f)

This means 1137x² + 2247xy + 1111y² ~ 1111x² + 25xy + y² ~ x² + xy + 955y² .. the

first polynomial we can replace by the 2nd reduced one
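The reduction loop in Python (a hypothetical re-implementation of qf; forms are triples (a,b,c), normalized here to -a < b ≤ a ≤ c):

```python
def reduce_form(a, b, c):
    """Reduce a positive definite integral form a*x^2 + b*xy + c*y^2
    to the equivalent reduced form with -a < b <= a <= c."""
    while True:
        if a > c:
            a, b, c = c, -b, a                 # swap the outer coefficients
            continue
        if b > a or b <= -a:
            b2 = ((b + a) % (2 * a)) - a       # shifted b in [-a, a)
            k = (b - b2) // (2 * a)            # the substitution x -> x - k*y
            c = a * k * k - b * k + c
            b = a if b2 == -a else b2          # boundary case: -a ~ a
            continue
        if a == c and b < 0:
            b = -b                             # normalize the other boundary case
            continue
        return a, b, c
```

reduce_form(1137, 2247, 1111) returns (1, 1, 955), matching the qf chain above; the discriminant b² - 4ac = -3819 is preserved throughout.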

44. Pell's Equation

The diophantine equation x² - dy² = 1 (d > 0 not a perfect square) we can solve via the continued fraction expansion of √d.

We start with the basic theorem:

ξ irrational and |ξ - h/k| < 1/(2k²) ⇒ h/k is a convergent of the continued fraction of ξ

It also holds: x² - dy² = 1 ⇒ |√d - x/y| < 1/(2y²), so every positive solution appears among the convergents

h_i/k_i of √d

Example: x² - 1389y² = 1

d:1389;
k)m1:a0:_w:sqrt d;q1:d-m1*m1

k)a1:_(w+m1)%q1

k)d:a0,last'29 {o:((e:x@2)*j:*s:x@1)-t:*x;u:(s@1)+e*t-o;l:_(o+w)%u;:(o;(u,j);l)}\ (m1;(q1,1);a1)

k)ki:*:' (0 1){((y*c)+x@1),c:x@0}\d

k)hi:*:' (1 0){((y*c)+x@1),c:x@0}\d

flip (hi;ki)

37 # 1

112 # 3

149 # 4

410 # 11

969 # 26

23666 # 635

48301 # 1296

120268 # 3227

168569 # 4523

625975 # 16796

46490719 # 1247427

140098132 # 3759077

186588851 # 5006504

..

and we find: 625975² - 1389·16796² = 1

We can go even a step further. For any |N| < √d every positive solution of x² - dy² = N appears among the convergents h_i/k_i, too.

Example: x² - 4243y² = 29

d:4243;... etc

65 # 1

456 # 7

1889 # 29

4234 # 65

6123 # 94

40972 # 629

251955 # 3868

..

and we find: 456² - 4243·7² = 29

Let be (x_1,y_1) the smallest positive solution (the fundamental solution) of x² - dy² = 1.

We further define: (x_n + y_n√d) := (x_1 + y_1√d)^n

As the conjugate of a product is equal to the product of the conjugates it must hold (x_n)² - d(y_n)² = ((x_1)² - d(y_1)²)^n = 1

The assumption that there are other (positive) solutions (u,v) not belonging to {(x_n,y_n)} leads to a contradiction.

Let's go back to our first example : x² - 1389y² = 1

with (625975,16796) as smallest positive solution. Then for example (x_3 + y_3√1389) = (625975 + 16796√1389)³ will

represent the 3rd solution .. in other words (981139945893059575;26325694366773204) is a solution, too.
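The whole continued-fraction machinery condensed into one Python routine (standard PQa-style recurrences for the convergents, not a transcription of the k lines):

```python
from math import isqrt

def pell(d):
    """Smallest positive solution (x, y) of x^2 - d*y^2 = 1, searched
    along the convergents h_i/k_i of the continued fraction of sqrt(d)."""
    a0 = isqrt(d)
    if a0 * a0 == d:
        raise ValueError("d must not be a perfect square")
    m, q, a = 0, 1, a0
    h1, h = 1, a0          # convergent numerators
    k1, k = 0, 1           # convergent denominators
    while h * h - d * k * k != 1:
        m = a * q - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        h1, h = h, a * h + h1
        k1, k = k, a * k + k1
    return h, k

x1, y1 = pell(1389)        # (625975, 16796), as in the table above
# the 3rd solution via (x1 + y1*sqrt(d))^3:
x3 = x1 ** 3 + 3 * x1 * y1 ** 2 * 1389
y3 = 3 * x1 ** 2 * y1 + y1 ** 3 * 1389
```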

45. Simple boundary value problems and finite differences .. a first look

N:3;n:10;x0:0f;xn:1.570796;y0:4;yn:3.5

k)xi:x0+(t:!n+1)*h:(xn-x0)%n

f1:{(a0@x)-0.5*h*a1@x};f2:{(-2f*a0@x)+h*h*a2@x};f3:{(a0@x)+0.5*h*a1@x}

v:{h*h*q@x} xii:1_-1_xi;b:N#'xii

k)resid:@[v;(0,n-2);:;((*v),last v)-(y0,yn)*(f1@*xii),f3@last xii]

F:((f1@b[;0]),'(f2@b[;1]),'f3@b[;2]),'flip (n-1)#'(n-2)#0f

iter:(inv ((-1*key n-1) rotate' F)[;1_-1_t]) mmu resid

show (j1:y0,iter,yn),'j2:fx xi

Let's look at the differential equation of the form:

a0(x)y'' + a1(x)y' + a2(x)y = q(x) .. with boundary values y(x0) = y0 and y(xn) = yn

Using central differences of error order ~ h²: y''(x_i) ≈ (y_{i-1} - 2y_i + y_{i+1})/h² and y'(x_i) ≈ (y_{i+1} - y_{i-1})/(2h)

So for every inner grid point we arrive to: (a0 - 0.5h a1)y_{i-1} + (-2a0 + h²a2)y_i + (a0 + 0.5h a1)y_{i+1} = h²q(x_i)

Example: y'' + y'tan x = sin 2x .. y(0) = 4 and y(π/2) = 3.5 ⇒ y = 4 - x - 0.5sin 2x + 0.5(π - 1)sin x

Hint: tan x undefined for π/2 .. so this equation would trigger some discussions .. the product y'tan x kills the discontinuity

In this case for n = 10 already we get the following approximations .. (second column are the exact solutions..)

4.000000 4.000000

3.854364 3.855921

3.719837 3.722842

3.606148 3.610384

3.520401 3.525551

3.466112 3.471769

3.442606 3.448286

3.444864 3.450020

3.463815 3.467858

3.487067 3.489387

3.500000 3.500000
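The same discretization in Python, with the tridiagonal elimination inlined (a0 = 1, a1 = tan x, a2 = 0 for this example; a sketch, not the q code):

```python
import math

def solve_bvp(n):
    """Central differences for y'' + tan(x)*y' = sin(2x),
    y(0) = 4, y(pi/2) = 3.5, on n subintervals."""
    h = (math.pi / 2) / n
    xs = [i * h for i in range(n + 1)]
    lo = [1.0 - 0.5 * h * math.tan(xs[i]) for i in range(1, n)]   # y_{i-1} coefficients
    di = [-2.0] * (n - 1)                                         # y_i coefficients
    up = [1.0 + 0.5 * h * math.tan(xs[i]) for i in range(1, n)]   # y_{i+1} coefficients
    rhs = [h * h * math.sin(2 * xs[i]) for i in range(1, n)]
    rhs[0] -= lo[0] * 4.0           # boundary value y(0) = 4
    rhs[-1] -= up[-1] * 3.5         # boundary value y(pi/2) = 3.5
    for i in range(1, n - 1):       # forward elimination
        m = lo[i] / di[i - 1]
        di[i] -= m * up[i - 1]
        rhs[i] -= m * rhs[i - 1]
    y = [0.0] * (n - 1)             # back substitution
    y[-1] = rhs[-1] / di[-1]
    for i in range(n - 3, -1, -1):
        y[i] = (rhs[i] - up[i] * y[i + 1]) / di[i]
    return xs, [4.0] + y + [3.5]

def exact(x):
    return 4 - x - 0.5 * math.sin(2 * x) + 0.5 * (math.pi - 1) * math.sin(x)
```

For n = 10 the deviation from the exact solution should stay below about 0.006, in line with the table.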

Using central differences for given boundary values y'(x0) = yd0 and y'(xn) = ydn we introduce

the additional fictive values y_{-1} and y_{n+1} .. eliminated via y'(x0) ≈ (y_1 - y_{-1})/(2h) and y'(xn) ≈ (y_{n+1} - y_{n-1})/(2h)

Our code undergoes only minor adjustments

g:{h*h*q@x};v:g xii:xi;b:N#'xii;

F:((f1@b[;0]),'(f2@b[;1]),'f3@b[;2]),'flip (n+1)#'n#0f

G:((-1*key n+1) rotate' F)[;t,n+1 2]

G:(enlist l,o),G,enlist (o:n#0f),l:-1 0 1f

iter:(inv G) mmu (yd0*2*h),v,ydn*2*h

show (1_-1_j1:iter),'j2:fx xi

Example: x

1.293359 1.274653

2.221625 2.210453

3.448289 3.445770

5.016969 5.023784

6.972381 6.988891

9.359968 9.386294

12.22569 12.26176

15.61586 15.66146

19.57711 19.63187

24.15623 24.21971

29.40023 29.47188

46. shooting methods .. one simple intro:

Let's look at the boundary value problem:

a0(x)y'' + a1(x)y' + a2(x)y = q(x) .. y(a) = α , y(b) = β

If the equation is linear in y,y',y'', we can replace the boundary value problem

by 2 initial value problems:

a) y1 with y1(a) = α , y1'(a) = ε1

b) y2 with y2(a) = α , y2'(a) = ε2

where we for convenience reasons take ε1 = 1 and ε2 = -1

Then we obtain y(x) as a linear combination of y1 and y2: y = μ1 y1 + μ2 y2 with

μ1 + μ2 = 1 and μ1 R1 + μ2 R2 = β, where R1 = y1(b) and R2 = y2(b)

Example: x²y'' - 2xy' - 4y = x sin x + (16 + x²)cos x .. y(1) = 1 , y(π) = 2

f2:{((2*x*z)+(4*y)+(x*sin x)+(16+x*x)*cos x)%x*x}

/using runge kutta of order 2 here

g:{c1:h*p:z;c2:h*q:f2[x;y;z];:(x+h;y+s*p+z+c2;z+s*q+f2[x+h;y+c1;z+c2])}

Y1:n {g . x}\ (x0;y0;1)

Y2:n {g . x}\ (x0;y0;-1)

/done

((Y1[;1]*beta-R2)+(R1-beta)*Y2[;1])%(R1:Y1[n;1])-R2:Y2[n;1]

solutions ..

/col 1 = xi, col 2 = exact, col 3 = runge 2 approximation

/1.000000 1.000000 1.000000

/1.214159 0.679182 0.690431

/1.428319 0.614242 0.628830

/1.642478 0.704365 0.719082

/1.856637 0.886659 0.900103

/2.070796 1.116741 1.128293

/2.284956 1.360600 1.370001

/2.499115 1.590962 1.598108

/2.713274 1.785609 1.790450

/2.927433 1.926617 1.929091

/3.141593 2.000000 2.000000

a very similar approach we can take for the boundary values y'(a)= α , y'(b) = β
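A minimal Python illustration of the same linear-shooting recipe on a hypothetical toy problem y'' = -y, y(0) = 0, y(π/2) = 1 (solution sin x), again with ε1 = 1 and ε2 = -1; RK4 is used here for the two initial value problems instead of the RK2 above:

```python
import math

def rk4_ivp(f, x0, y0, dy0, h, n):
    """Integrate y'' = f(x, y, y') with classical RK4 on the system (y, y')."""
    x, y, z = x0, y0, dy0
    out = [(x, y)]
    for _ in range(n):
        k1y, k1z = z, f(x, y, z)
        k2y, k2z = z + 0.5 * h * k1z, f(x + 0.5 * h, y + 0.5 * h * k1y, z + 0.5 * h * k1z)
        k3y, k3z = z + 0.5 * h * k2z, f(x + 0.5 * h, y + 0.5 * h * k2y, z + 0.5 * h * k2z)
        k4y, k4z = z + h * k3z, f(x + h, y + h * k3y, z + h * k3z)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        z += h * (k1z + 2 * k2z + 2 * k3z + k4z) / 6
        x += h
        out.append((x, y))
    return out

def linear_shooting(f, a, b, alpha, beta, n):
    """Two IVPs with slopes eps1 = 1 and eps2 = -1, blended so that
    mu1 + mu2 = 1 and the right boundary value beta is met."""
    h = (b - a) / n
    y1 = rk4_ivp(f, a, alpha, 1.0, h, n)
    y2 = rk4_ivp(f, a, alpha, -1.0, h, n)
    r1, r2 = y1[-1][1], y2[-1][1]
    return [(p[0], (p[1] * (beta - r2) + q[1] * (r1 - beta)) / (r1 - r2))
            for p, q in zip(y1, y2)]

sol = linear_shooting(lambda x, y, z: -y, 0.0, math.pi / 2, 0.0, 1.0, 100)
```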

.. .. to be continued in displays (PART 2) ..

©++ MILAN ONDRUS