
Chapter 1

Practical Exercises

Introduction

This document contains the practicum assignments of this course.

From now on, the 'real' numerical part is going to start: we consider problems from mathematics, all of which originate from physics, chemistry and/or engineering, and which in general have no closed-form solution. In such cases, we have to use the computer, together with a numerical method, to find an answer. In Matlab, many of those numerical methods are built in, and we will use them as a tool.

However, there is a problem: whatever you do, and whatever mistakes you make, the computer will nevertheless give an answer, and that answer may have nothing to do with the answer you were looking for. That is why the exercises ask you to examine your answers critically, and to adapt them accordingly.

Whenever an answer with error is requested, we always want to see the answer in the form

answer = A ± ε

such that ε has 1 significant digit and the number of digits of A is in accordance with that.

Example:

approximation = 1.234567890 ± 0.000067890 leads to answer = 1.23457 ± 7 · 10^-5.

Indeed, the (wrong!) answers

answer = 1.23457 ± 0.000067890 and answer = 1.234567890 ± 7 · 10^-5

do not look consistent!


1.0 Matlab

1.0.5 Executing a part of a program many times

In programs, you often want to perform certain calculations many times, possibly each time slightly differently. For example, suppose you want to add up the first million natural numbers. In that case you know how often it has to be done (namely, one million times); but sometimes you don't, and then you use a while (because there is no alternative!). In the first case, you use a for.

While statement

The way of writing a while (’zolang’) is as follows:

while(expression)

do;

end

As long as expression is true, do is executed. Thus, the following program won't stop (except when pulling the plug, or Ctrl-Alt-Del):

i=1;
while(i>0)
    i=i+1;
end

So the while statement is rather error-prone; therefore, avoid it when possible.

For statement

The following example hopefully speaks for itself; it adds up the first million natural numbers (500000500000): [1]

sum=0;
for i=1:1000000
    sum=sum+i;
end

[1] Using File - Preferences... - Command Window, set the Numeric Format to 'long e' and the Numeric Display to 'loose', and press Apply and OK!! It can also be done with the Matlab command >> format long e.

Exercise 1.0.6. Write a function Q(m, n) which computes

Q(m, n) = 1/m + 1/(m+1) + . . . + 1/n

for various m, n. Now compute z = Q(1, 10^9) - log(10^9) (analytically, Q(1, n) - log(n) → γEuler as n → ∞, with γEuler = 0.5772156649015328606065121 . . .).
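One possible sketch of such a function (a loop rather than a vector, since a vector of 10^9 elements would not fit in memory; summing from the smallest term down to the largest is a deliberate choice to limit round-off):

```matlab
function s = Q(m, n)
% Q(m,n) = 1/m + 1/(m+1) + ... + 1/n,
% summed from k = n down to k = m (smallest terms first).
s = 0;
for k = n:-1:m
    s = s + 1/k;
end
end
```

A call would then look like >> z = Q(1, 1e9) - log(1e9).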

1.1 Error analysis

1.1.1 Condition of linear equations

In this exercise, we will look more closely at solving linear equations:

solve x from: A · x = b.

If the matrix is nonsingular, then (mathematically) the system can be solved exactly. On a computer, however, the answer is in general not exact.

It is therefore an important question how good the solution we find is, or, stated in a slightly different way:

• How sensitive is the answer x under small perturbations of e.g. the right hand side b? [2]

Slightly more mathematically formulated:

• How large is δx := R(b + δb) - R(b) in relation to δb, or: to what extent is the output perturbed as a result of a perturbation of the input?

If the perturbation of the output is small, relatively speaking, in relation to the perturbation of the input, also relatively speaking, then we say the problem is well-conditioned. If that ratio is large, then the problem is said to be ill-conditioned.

We will illustrate some things using the problem above. As matrix A we use a so-called Hilbert matrix. Hilbert matrices are known for being ill-conditioned. Now we choose the right hand side vector b such that b = A · (1, 1, . . . , 1)^t, [3] so the exact solution x = (1, 1, . . . , 1)^t is known.

Write a function which, given n,

1. determines the n × n Hilbert matrix A (help hilb),
2. determines the n × 1 vector x with ones (help ones),
3. computes the right hand side b = A · x corresponding to the exact solution x = (1, . . . , 1)^t,
4. subsequently determines the actual numerical solution xn: xn = A\b,
5. finally computes the quantities ||A · xn - b|| and ||xn - (1, . . . , 1)^t|| (help norm).

[2] Indeed, it is almost always the case that b is not exactly known, for example because of round-off errors.
[3] In Matlab, [1, 1, 1] is a row vector, so with size 1 × 3, and [1; 1; 1] a column vector, so with size 3 × 1. Hence, we have that [1, 1, 1] = [1; 1; 1]^t.
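The five steps above can be sketched as follows (the function name hilbtest is an assumption; the exercise leaves the naming open):

```matlab
function [res, err] = hilbtest(n)
% Steps 1-5: Hilbert matrix, exact solution of ones,
% numerical solution, residual norm and error norm.
A   = hilb(n);            % step 1: n x n Hilbert matrix
x   = ones(n, 1);         % step 2: exact solution
b   = A * x;              % step 3: right hand side
xn  = A \ b;              % step 4: numerical solution
res = norm(A * xn - b);   % step 5: residual norm
err = norm(xn - x);       %         error norm
end
```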

Complete the following table by calling the function for different values of n. Here, δb := A · xn - b can be interpreted as the perturbation of the input which causes a perturbation δx := xn - (1, . . . , 1)^t of the output, since by linearity:

δx = R(b + δb) - R(b) = A^-1 · δb

Note that, although the numerical solution xn seems good because ||A · xn - b|| ≈ 0 (the type double implies approximately 15 significant digits), we see that, particularly when n > 10, this need not be true! That is the consequence of an ill-conditioned system.

1.1.2 Accuracy and condition

Sometimes an answer must be given very precisely, such as

answer = 1.234567890123456 · 10^-4,

but sometimes, only the order of magnitude suffices:

answer = 1 · 10^-4.

The number of digits you include depends on the accuracy of the result, or how accurate you think it is.

In some cases, the computation of a function value can be strongly influenced by:

1. the way a computation is performed, and/or
2. the order of the operations, and/or
3. round-off errors.

We are going to examine this using the computation of the polynomial R(x),

R(x) = 177 - 21x + 3(9x + 550)^2 - 38x^3 - 558x^4 + 2(3x + 434)^5 - x^6

for x = 972 in the following 4 ways:

• Algorithm A: by "completely expanding" every term,
for example: 558x^4 = 558 * x * x * x * x,
or: 2(3x + 434)^5 = 2 * h * h * h * h * h with h = 3*x + 434.

• Algorithm B: using the standard functions exp( ) and log( ) ('natural logarithm'!),
for example: 558x^4 = 558 e^(4 ln x) = 558 * exp(4 * log(x)),
or: 2(3x + 434)^5 = 2 e^(5 ln(3x+434)) = 2 * exp(5 * log(3*x + 434)).

You also have to compute the value of the polynomial by interchanging terms:

• R1(x) = 177 - 21x + 3(9x + 550)^2 - 38x^3 - 558x^4 + 2(3x + 434)^5 - x^6
• R2(x) = -x^6 + 2(3x + 434)^5 - 558x^4 - 38x^3 + 3(9x + 550)^2 - 21x + 177

For every possibility, write a separate function (thus, four functions).

method          answer that Matlab gives
C[R1A(972)]     . . . . . . . . . . . .
C[R1B(972)]     . . . . . . . . . . . .
C[R2A(972)]     . . . . . . . . . . . .
C[R2B(972)]     . . . . . . . . . . . .

The correct result is R(972) = 1.
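As an illustration, algorithm A applied to the term order R1 might look like this (the function name R1A follows the table above; the other three variants follow the same pattern):

```matlab
function y = R1A(x)
% R1(x) evaluated with algorithm A: every power written out
% as a repeated multiplication.
h = 3*x + 434;
g = 9*x + 550;
y = 177 - 21*x + 3*g*g ...
    - 38*x*x*x - 558*x*x*x*x ...
    + 2*h*h*h*h*h - x*x*x*x*x*x;
end
```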


With one of the outcomes, namely C[R . . .(972)], we obtain an estimate of the condition number:

cP(972) ≈ . . . . . . . . . . . .

It is given that, on the basis of (??), the exact value is cP(972) ≈ 1.4 · 10^18.

Do you prefer one of the ways of computation? Yes / no, because . . . . . . . . . . . . . . . . . . . . . .

1.1.3 Problems and algorithms

In this exercise, we want to make clear the difference between well- and ill-conditioned

problems, but also between good and bad algorithms: if you want to compute something,

you can do that in a stupid way, but sometimes also in a (more) intelligent way.

As an example, we are interested in the following integrals:

yn := ∫_0^1 x^n/(x + 9) dx, n = 0, 1, 2, . . .

In principle, all these integrals can be determined analytically, but looking at the solution of the integral in this exercise, you won't like that (except for a few mathematicians).

We consider the problem P: compute y13 = ∫_0^1 x^13/(x + 9) dx, for which we have that (we have calculated that using Maple, not by hand!)

y13 = -2541865828329 log(10) + 5083731656658 log(3) + 10723204268006567/40040.

Of course, we are much more interested in the numerical value of the integral.

Part A

The following trick provides a way to compute y13: we can determine y13 from the recursion

yn = -9 yn−1 + 1/n.    (1.1.1)

With n = 1 we can compute y1; subsequently, using (1.1.1) and n = 2 we can compute y2 as well, etc. Equation (1.1.1) is called the recursion formula. Hence, this algorithm, which we will call algorithm A, has the following form:

A : yn = -9 yn−1 + 1/n, n = 1, 2, . . . , 13, with y0 = log(10/9).    (1.1.2)

If we execute this algorithm on a computer, then we must realise that an error occurs immediately when computing y0 numerically, namely a round-off error. This error, and all other round-off errors which we make thereafter, propagate to the final result. The question then is: how strongly do they propagate? Or, in other words, is this algorithm sufficiently stable (well-conditioned) or not (ill-conditioned)?

A: Numerical experiment

Write a Matlab program which executes algorithm A, and write down (in 16 decimals; use format long e) the results for y0 and y13:

Algorithm A : . . . . . . . . . . . . . . . . . . . . . . . .

Answer (using common sense) the following question: The approximation of y13 is (good / reasonable / very bad)daa. [4]
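A possible sketch of such a program (a plain script, printing both values):

```matlab
% Algorithm A: forward recursion y_n = -9*y_{n-1} + 1/n.
format long e
y = log(10/9)           % y0, which already contains a round-off error
for n = 1:13
    y = -9*y + 1/n;
end
y                       % the computed y13
```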

In order to 'measure' the stability of the algorithm, we now do the following: we perturb the value of the input (that is, y0) on purpose, and observe the effect on the answer, y13. Execute the same Matlab program with algorithm A, but now with the perturbed starting value y0*, and write down the results again:

Algorithm A : . . . . . . . . . . . . . . . . . . . . . . . .

[4] daa = delete as appropriate.

And answer (using common sense) the following question: This y13* as an approximation of y13 is (good / reasonable / very bad)daa.

A: Stability

Using these answers, we can estimate the condition of the algorithm A [5] (and using that, say whether the algorithm is stable):

1. PA is the solution of the problem P when using algorithm A, so y13,
2. x is the input, so here y0,
3. |δx/x| is the relative error in the input.

From this, the approximation follows:

cA,P ≈ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

And we can go one step further: the y0 = log(10/9) which we have used in algorithm A to compute y13 isn't exact either! The relative error in y0 will have at most the value 11 εC, with machine precision εC ≈ 2.2 · 10^-16, because

C[y0] = log(10/9 · (1 + ε1)) · (1 + ε2) = . . . ≈ y0 + ε1 + y0 ε2, with |εi| ≤ εC.

What can you say about the possible error in y13 as computed using algorithm A?

relative error in y13 ≤ . . . . . . . . . . . . . . . . . . . . . . . .

and hence,

absolute error in y13 ≤ . . . . . . . . . . . . . . . . . . . . . . . .

Is the value of y13 we have found with this error estimate consistent with the exact value of y13?

(yes / no)daa, because y13 = . . . . . . . . . . . . . . . . . . . . . . . .

NB. You can compute the exact value with the function quad in Matlab (see help quad) using the call:

>> quad(@(x) x.^13./(x+9),...)

[5] Actually, we should say: the condition of problem P when we solve the problem using algorithm A.

A: Stability explained

An absolute error dn−1 in yn−1 propagates to an error dn in yn during one step of algorithm A; use (1.1.2):

yn + dn = -9(yn−1 + dn−1) + 1/n, so dn = . . . . . . dn−1.    (1.1.3)

and hence

d13 = . . . . . . d0.    (1.1.4)

And this (is / is not)daa consistent, in good approximation, with the data:

the true absolute error in y13 = . . . . . . . . . . . . . . . . . . . . . . . .

and using the above, we find that this should be

the expected absolute error in y13 = d13 = . . . . . . d0 = . . . . . . εC = . . . . . . . . . . . . . . . . . .

Part B

Now we 'turn around' equation (1.1.1) and use it backwards; this gives algorithm B:

B : yn−1 = (1/n - yn)/9, n = N, N - 1, . . . , 1.

For further investigation of this recursion we will, assuming a chosen initial value yN, continue the backward recursion down to y0. The result obtained can be compared with the exact value, namely log(10/9) = 0.10536051565782630122.
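A sketch of algorithm B in script form (N and the starting guess for yN are the parameters of the experiment below):

```matlab
% Algorithm B: backward recursion y_{n-1} = (1/n - y_n)/9.
format long e
N = 25;
y = 0;                  % rough starting guess for y_N
for n = N:-1:1
    y = (1/n - y)/9;    % after this step, y approximates y_{n-1}
end
y                       % the computed y0; compare with log(10/9)
```

Note that each backward step divides the error in the starting value by 9, which is why even a crude guess for yN can work.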

B: Numerical experiment

We choose N = 25, and as an approximation of y25 we choose, for convenience, the value 0. Write down the following outcomes of algorithm B:

Algorithm B : . . . . . . . . . . . . . . . . . . . . . . . .

B: Stability

If we start with y25 = 10000, the same correct value for y0 is found (verify it)!!?? It follows that (the stability of) algorithm B is much better than (that of) algorithm A. Estimate, just as in equations (1.1.3)-(1.1.4), the error amplification/reduction when computing y0, given an absolute error d25 in y25:

d0 = . . . . . . d25,

and if the error in y25 approximately equals 10000, it follows that

d0 = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

and this (is / is not)daa consistent with the value found.

What should we choose for N in order to find y13 in 15 significant digits as well, assuming yN = 0?

N = . . . is sufficient, because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Conclusion: It follows from the above that the condition of the problem "compute y13 = ∫_0^1 x^13/(x + 9) dx" (is good / is good, provided that the correct algorithm is used / (strongly) depends on the algorithm that is used)daa (choose one answer!).


1.2 Errors in numerical differentiation

Later in the course, we will repeatedly encounter one of the most practical problems in numerical mathematics, namely solving ordinary and also partial differential equations (ODEs, PDEs).

One very popular numerical method is called discretisation. In that case, you look for an approximate solution to a differential equation which you only know at a finite number of points, and not at every point. And because you also have to compute derivatives of functions (after all, a DE is an equation which contains derivatives!), it is necessary to be able to estimate derivatives of a function which you only know at a number of points.

We consider the following formula to determine the third derivative of a function f, i.e. the value of f^(3)(x0):

D3(h) = [f(x0 + 2h) - 2f(x0 + h) + 2f(x0 - h) - f(x0 - 2h)] / (2h^3) = f^(3)(x0) + C · h^2 + O(h^4), with C = f^(5)(x0)/4.

From this formula (plus error), at first sight you would say that you have to make h as small as possible; after all, the error O(h^2) then becomes smaller and smaller. Remarkably (?) enough, this is not correct, because we also make round-off errors. That is what we want to investigate in more detail in this exercise.

By means of an error analysis, we want to determine the optimal choice for h. Thereafter, we verify this analytical result with a simple experiment.

If we compute D3(h), then we make errors. For convenience, in the following error analysis we only consider the round-off error in computing the function values of f, and we make the rough assumption

absolute error in computing f(x) ≈ |f(x0)| εC.

Then it follows that:

|computed D3(h) - f^(3)(x0)| ≈ |C| h^2 + . . . . . . εC

Hence, the error we make is the sum of the last two terms. This sum is minimised as a function of h for h = . . . . . .

We are now going to verify the analytical expressions we have found experimentally. Compute D3(h) as an approximation of the value f^(3)(1) of f(x) = e^x, for h = (1/2)^j, j = 0, 1, 2, . . . , 20.
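One possible experiment loop (the difference formula as given above; note that f^(3)(1) = e for f(x) = e^x):

```matlab
% Error of D3(h) for f(x) = exp(x) at x0 = 1, with h = (1/2)^j.
f  = @(x) exp(x);
x0 = 1;
for j = 0:20
    h  = 0.5^j;
    D3 = (f(x0+2*h) - 2*f(x0+h) + 2*f(x0-h) - f(x0-2*h)) / (2*h^3);
    fprintf('j = %2d  h = %e  error = %e\n', j, h, abs(D3 - exp(1)));
end
```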

The best value of h which follows from the experiment is h ≈ . . . . . . . . . . . . . . .

The analytically best value of h in this example is h ≈ . . . . . . . . . . . . . . .

Conclusion: the experimental results are consistent with the theoretical values. The minor deviations are caused by . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


1.3 Interpolation

One of the ways to draw a smooth function through discrete points is by means of interpolation. Usually, polynomials are used for that (but certainly not always!), and for now we restrict ourselves to those.

Interpolation means that the smooth function has to pass through the data points; a natural property, at least if the data doesn't contain much noise (we will address that problem in section 1.6).

1.3.1 Polynomial interpolation

In order to investigate the properties of interpolation (e.g. how well the method works, what you have to do, but also what you shouldn't do), we proceed as follows. We choose a function with certain properties (e.g. a simple function, a smooth function, or a wild function – we will come back to what this exactly means mathematically later), and we draw discrete data from it, that is, we take as data the values of the function at a certain (finite) set of points. Thereafter, we test our method (in this case, determining a polynomial that passes through those points) to find out how good it is. [6] And we can also (try to) analyse how we should choose the data points in order to obtain an optimal result, i.e. such that the resulting function lies 'as close as possible' to the original function.

We need the following ingredients:

1. a function from which we draw the data,
2. a grid, i.e. the set of points where we assume the function values are given,
3. a numerical code which computes the polynomial (the interpolant),
4. and subsequently visualises the results – you can read off the error from this.

Instructions for the outline of an M-file (or multiple M-files):

1. write a function f which determines, for given values of x (a vector), the associated function values (this function will be adapted repeatedly),

[6] In reality, we don't know the function of course, but we do know where the data comes from, which might give us a hint of the function associated with it.

2. we consider two types of grids, namely an equidistant grid

xk = -1 + 2k/n ∈ [-1, 1], k = 0, . . . , n,

and a so-called Chebyshev grid:

xk = -cos((2k + 1)π / (2n + 2)) ∈ [-1, 1], k = 0, . . . , n;

do this in two functions, with argument n, which output a set of points x, y (in those functions, first create a set k=[0:n], and subsequently the grid x and, using that, the function values y; see also the Matlab exercise ??. NB. when using a for-statement to fill the vector x, you should note that every index should be a positive integer, so x(1) = x0, x(2) = x1, etc.)

3. the Matlab code

p = polyfit(x,y,n);

computes the coefficients of the interpolating polynomial p,

4. and since Matlab works completely discretely, we create another data set xf, finer than the given data set, to compare the result (the polynomial) with the original function, e.g.

xf = -1:0.001:1;
pf = polyval(p,xf);
ff = f(xf);
plot(xf,pf,'o',xf,ff,'-')

You can also change the latter such that you see a plot of the difference (how?).
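The two grid functions of step 2 might be sketched as follows (the names equigrid and chebgrid are assumptions, and each function would go in its own M-file; f is the data function of step 1):

```matlab
function [x, y] = equigrid(n)
% Equidistant grid x_k = -1 + 2k/n, k = 0..n, with data y = f(x).
k = 0:n;
x = -1 + 2*k/n;
y = f(x);
end

function [x, y] = chebgrid(n)
% Chebyshev grid x_k = -cos((2k+1)*pi/(2n+2)), k = 0..n.
k = 0:n;
x = -cos((2*k + 1)*pi / (2*n + 2));
y = f(x);
end
```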

We hope that the programming didn't take too much effort (surely not – it is just a few lines), and we can now start to work. We consider data from different functions. One more notation: εpn means the maximum value of the truncation error of the problem, for given n and f, on the equidistant grid; εqn on the Chebyshev grid.

i. Data: f(x) = x^3, x ∈ [-1, 1]

n    εpn
1    . . . . . .
2    . . . . . .
3    . . . . . .

The program works well because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

The interpolation polynomials for n = 1 and n = 2 are identical since (use symmetry!)
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

ii. Data: f(x) = e^x (= exp(x)), x ∈ [-1, 1]

n    εpn          εqn
4    . . . . . .  . . . . . .
8    . . . . . .  . . . . . .
12   . . . . . .  . . . . . .

The truncation errors are consistent with the theory, because it should hold that (in formulas):

εpn ≤ . . . . . . . . . and εqn ≤ . . . . . . . . .

iii-a. Data: f(x) = 1/(25x^2 + 1), x ∈ [-1, 1]

n    εpn          εqn
4    . . . . . .  . . . . . .
12   . . . . . .  . . . . . .
20   . . . . . .  . . . . . .

iii-b. Data: f(x) = 1/(x^2 + 1), x ∈ [-1, 1]

n    εpn          εqn
4    . . . . . .  . . . . . .
12   . . . . . .  . . . . . .
20   . . . . . .  . . . . . .

The truncation errors in iii-b are significantly smaller than those in iii-a. Can you give an explanation for that?
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


The graphs of the difference functions show oscillating behaviour in all cases (i, ii and iii). Nonetheless, there is a remarkable difference between the graphs based on an equidistant grid on the one hand, and those based on a Chebyshev grid on the other hand, namely
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1.3.2 Interpolation with rational functions

We consider the interpolation problem once more (we also use the notation and the M-file(s) from the previous section, but make sure you make a backup of the programs).

Even if a function behaves 'neatly' on the interpolation interval, that is no guarantee of a good interpolation (with a small truncation error). For example, a singular point (a "pole": at that point, the function has an asymptote) just outside the interpolation interval has quite some influence on the truncation error. We illustrate this using the function tan x (with an asymptote at x = π/2).

We only use an equidistant grid (with interpolation polynomial pn and truncation error εpn). We now take [1, 1.5] as interpolation interval (so, not [-1, 1]: which adjustments does this require?), so that the point π/2 = 1.57 . . . lies outside, but very close to, the interval.

Data: f(x) = tan x, x ∈ [1, 1.5]

n    εpn
6    . . . . . .
9    . . . . . .
12   . . . . . .

It is (has become) clear that the interpolation of tan x on [1, 1.5] is influenced quite strongly by the singular point at π/2, even for n = 12. Can we improve this?

Instead of interpolating the data set tan xi, we do the following: we consider the data

(xi - π/2) tan xi, xi ∈ [1, 1.5],

or in other words, we interpolate data drawn from the function

g(x) = (x - π/2) tan x.

This function no longer has a singularity at the point π/2, and therefore this data can be interpolated more easily.


It seems a bit silly to approximate data which we cannot approximate well by changing the data itself, but it is not:

1. we draw data from g(x) = (x - π/2) tan x,
2. we interpolate that data just as before; this yields a polynomial p(x),
3. we divide this polynomial again by x - π/2:

r(x) = p(x) / (x - π/2).

We now consider this function to be an approximation of the data drawn from f(x) = tan x. This function has the form of a polynomial divided by a polynomial, and for that reason we cannot speak of polynomial interpolation anymore. Indeed, a function of this type is called a rational function.
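Steps 1-3 can be sketched as follows (reusing polyfit/polyval from the previous section, with an equidistant grid on [1, 1.5]):

```matlab
% Rational interpolation of tan(x) on [1, 1.5] via
% g(x) = (x - pi/2)*tan(x), which has no pole at pi/2.
n  = 12;
x  = 1 + 0.5*(0:n)/n;                 % equidistant grid on [1, 1.5]
g  = (x - pi/2) .* tan(x);            % step 1: data drawn from g
p  = polyfit(x, g, n);                % step 2: interpolate g
xf = 1:0.001:1.5;                     % fine grid for the comparison
rf = polyval(p, xf) ./ (xf - pi/2);   % step 3: r(x) = p(x)/(x - pi/2)
err = max(abs(rf - tan(xf)))          % estimate of the truncation error
```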

Show that r(x) interpolates the original data drawn from f:
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Adjust your program such that you compute r(x), and subsequently complete the following table; εrn now means the absolute error of r(x) with respect to f(x).

Data: f(x) = tan x, x ∈ [1, 1.5]

n    εrn
6    . . . . . .
9    . . . . . .
12   . . . . . .

If everything went well, you observe better convergence behaviour! What you should notice (and remember for the future) is that polynomial interpolation is not the Holy Grail. On the other hand: here, we knew that the data was drawn from a function with a singularity nearby, and we could adjust our method accordingly. In reality, however, you often don't know that.


1.4 Extrapolation

Extrapolation is a simple technique to obtain better results with 'little effort', and to obtain error estimates. Because this technique can be applied to many problems, we will encounter extrapolation in the next sessions as well, but then as part of a larger problem! In the problem below, extrapolation is the central element.
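As a generic illustration of the idea (the sequence used here, xn = (1 + 1/n)^n → e, is chosen purely for demonstration and is not the sequence of this exercise): if the error of xn behaves like c/n, then the combination 2xn - xn/2 cancels that leading error term.

```matlab
% Extrapolation demo on x_n = (1 + 1/n)^n -> e.
% The error of x_n is approximately c/n, so the combination
% y_n = 2*x_n - x_{n/2} cancels the leading error term.
x  = @(n) (1 + 1/n)^n;
n  = 1024;
xn = x(n);
yn = 2*x(n) - x(n/2);
[abs(xn - exp(1)), abs(yn - exp(1))]   % y_n is much closer to e
```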

Compute xn for the following values of n = 1, . . . , 1024. Do this with the help of the following questions:

1. we have that x1 = . . . . . .

2. the relation between xn and xn−1 is the following: xn = Cn xn−1 with Cn = . . . . . .

3. Why is it more efficient to use these two relations instead of the definition (1.4.1)?
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

4. A few values that you have found: . . . . . .

Of course, it is impossible to compute the limit x∞ explicitly on a computer, and in this case you see from the numbers that the solution is not yet in sight. Therefore, we first investigate the kind of convergence.


5. In order to estimate how fast the series converges, we consider, just as in the lecture notes, the following quotients. Numerically, we find from the previous table: . . . . . .

Both quotients are so-called convergence factors. We conclude from the values in the last table that the process converges as (1/n / 1/n^2 / 1/n^3)daa. In other words: the convergence is (linear / quadratic / cubic)daa as a function of n.

6. On the basis of the result above, we can perform an extrapolation: we can approximate x∞ better with the following linear combination:

yn = . . . . . . xn - . . . . . . xn/2

7. Using this last number, we can extrapolate another time:

x∞ ≈ . . . . . . y512 - . . . . . . y256 = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

but we also obtain an estimate of the error: the final answer (which is familiar to you, hopefully)

x∞ = . . . . . . . . . . . . . . . . . . . . . . . . ± . . . . . . . . . . . .


1.5 Boundary Value Problems

1.5.1 A simple example

In example ??, the boundary value problem

y''(x) = x · y(x) - 5, 0 ≤ x ≤ 1, y(0) = 0, y(1) = 0,

has been discussed extensively. Application of the finite difference method led to a system of linear equations A · y = r. The solution of this system (together with the two boundary conditions) is a numerical approximation of the exact solution y(x), x ∈ [0, 1], of the boundary value problem, which is plotted in figure ??.

It is now up to you to devise a strategy for the following question: give the best possible approximation of the solution in the middle of the interval, including a reliable error estimate.

Instructions:

- fill A and r in Matlab, and determine the value of the solution in the middle of the interval, i.e. Pn = yn/2.

- for given n, define the matrix A as follows:

A = sparse(n-1,n-1);
A(1,1) = ....;
..... % set the nonzero elements of A (for loop?)
A(n-1,n-1) = ....;
full(A) % check: take for example n = 10, so that h = 1/10
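As a generic illustration of this assembly pattern (not the exercise's own matrix: the stencil below discretises plain y'' with Dirichlet conditions, and the x · y term of the exercise would still have to be added on the diagonal):

```matlab
% Generic assembly of the tridiagonal second-difference matrix for
% -y'' at the interior points i = 1..n-1, with h = 1/n.
n = 10; h = 1/n;
A = sparse(n-1, n-1);
for i = 1:n-1
    A(i,i) = 2/h^2;                     % diagonal entry
    if i > 1,   A(i,i-1) = -1/h^2; end  % subdiagonal
    if i < n-1, A(i,i+1) = -1/h^2; end  % superdiagonal
end
full(A)   % check the structure for this small n
```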

Now complete the following:

I can determine the order of this method by e.g. carrying out the experiment for the following values of n ≤ 128: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Therefore I have run the program for the following values of n ≤ 128, with corresponding results:

n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
n = . . . . . . with Pn = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

The best answer, resulting from these measurements, is:

y(0.5) = . . . . . . . . . . . . . . . . . . ± . . . . . . . . .

Both those values are obtained as follows:

The order of the method equals . . ., because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Therefore the best value is determined from . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

with error estimate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1.5.2 The “buckling” problem

Today, we are a trainee research assistant (AIO) in prof. Van Huetink's group (UT, mechanics, strength of materials). To get trained, we have to do a first calculation on the "buckling" problem (with thanks to Kees Vuik, TUD).

We consider a rod with length L, width b and height d. It is clamped to the wall at the left end (x = 0), and at the other end (x = L), a pressure S is applied to the rod. The rod will also bend under gravity. We want to know how the rod behaves for different values of S, i.e. how the deflection y depends on the position x under the influence of the pressure S.

We can formulate the problem mathematically as a differential equation for y(x), with boundary conditions y(0) = y'(0) = 0. Here E is the elastic modulus of the material, K the moment of inertia, and the term with q accounts for gravity, where ρ is the density and g the gravitational constant.

If S = 0, we can find a closed-form solution – and this is useful in order to verify the numerical results later.


i. If S = 0, the solution is given by y(x) = c0 + c1 x + c2 x^2 + . . .

From now on, we use the following values:

ii. If S = 0, the deflection at x = L equals . . . . . . . . . . . . . .

Now we take the grid xi = ih, i = 0, . . . , n, with h = L/n. We denote the numerical solution
at such a point by yi.

iii. Rewrite the differential equation in xi as a function of the variables yi and its neighbours,
i = 1, . . . , n − 1, using (??).

The boundary conditions lie at 0: y(0) = y'(0) = 0, which provide the equations we were
lacking. Of course, we set y0 = 0, or even better, we leave out every y0 from the system of
equations above (this is useful because in Matlab, indices start at 1).

Now we have two possibilities left to impose y'(0) = 0:

a. We set y1 = 0, since then the forward difference (y1 − y0)/h approximates y'(0)
with truncation error O(h).

b. If y−1 would exist, then according to (??), setting y1 = y−1 is more accurate, since
the central difference (y1 − y−1)/(2h) approximates y'(0) with truncation error O(h^2).
But if y−1 exists, we can also consider the differential equation at x = 0:

EK y''(0) ≈ EK (y1 − 2y0 + y−1)/h^2 ,

and hence, using y1 = y−1 and y0 = 0, we have EK y''(0) ≈ 2EK y1/h^2.

iv. Methods a and b are two alternative ways to treat the boundary, and we expect
b to be more accurate. What is the equation in both cases?

The n − 1 equations from (iii), complemented by one equation from (iv) depending on
the method chosen, together form a system of linear equations A · y = b with unknowns
yi, i = 1, . . . , n. Now we can fill the matrix and the right-hand side, and we can perform
numerical experiments (e.g. with S = 0) for testing and further investigation.

numerical experiments (e.g. if S = 0) for testing and further investigation.

v. Let S = 0 and n = 20, 40, 80, and determine the order of methods a and b numerically in

computing the deflection at x = L, that is, the value yn. Verify the order by plotting

the difference between the numerical and the analytical answer.
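The convergence check in (v) can be organised as follows. This is only a sketch: it assumes you have wrapped your solver in a hypothetical function deflection(n, method) that returns the computed yn, and that yexact holds the closed-form deflection from part (i).

```matlab
% Sketch: estimate the order of methods a and b from three grids.
% deflection(n, method) and yexact are assumed to exist (see above).
ns = [20 40 80];
for method = 'ab'
    err = zeros(size(ns));
    for k = 1:length(ns)
        err(k) = abs(deflection(ns(k), method) - yexact);
    end
    % halving h should divide the error by 2^p for a method of order p:
    order = log2(err(1:end-1) ./ err(2:end))
end
```

For a first-order boundary treatment the printed orders should approach 1, and for a second-order treatment 2.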


1.6 Approximation

One of the ways to find a smooth function that passes through discrete points (data) is
interpolation. Usually, polynomials are used for this (but certainly not always!).
Interpolation means that the smooth function has to pass through the data points; a
natural property, at least if the data doesn't contain much noise.

We will examine this first, and thereafter some alternatives.

We need the following ingredients:

1. a function from which we draw the data,

2. a grid, i.e. the set of points at which we assume the function values are given,

3. a numerical code which computes the polynomial (the interpolant),

4. and subsequently, visualises the results – you can read off the error from this.

Instructions for the outline of an M-file (or multiple M-files):

1. write a function f which determines, for given values of x (a vector), the associated
function values (and hence, this function is adapted repeatedly),

2. we consider an equidistant grid on [-1, 1],

xk = -1 + 2k/n , k = 0, . . . , n ;

do this in two functions, with argument n, which output a set of points x, y (in
those functions, first create a set k=[0:n], and subsequently the grid x and, using
that, the function values y; see also exercise ??),

3. the Matlab code

p = polyfit(x,y,n);

computes the polynomial p that interpolates,

4. and since Matlab works completely discretely, we create another data set xf, finer than
the given data set, in order to compare the result (the polynomial) with the original
function, e.g.

xf = -1:0.001:1;

yf = polyval(p,xf);

ff = f(xf);

plot(xf,yf,’o’,xf,ff,’-’)


You can also change the latter such that you see a plot of the difference (how?).

Use this program to see that interpolation does 'not always' work: compare the results
for the function f(x) = e^x with a similar function to which some noise has been added,
for example f(x) = e^x (1 + 0.01 sin(1234x)), for n = 10, 20.
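A minimal script for this experiment might look as follows. It is a sketch, assuming the equidistant grid of step 2 lives on [-1, 1], as the fine grid xf above suggests.

```matlab
% Sketch: interpolate a noisy function on an equidistant grid and
% plot the difference between the interpolant and the function itself.
f = @(x) exp(x) .* (1 + 0.01*sin(1234*x));  % the noisy test function
n = 10;                       % try n = 10 and n = 20
k = 0:n;
x = -1 + 2*k/n;               % equidistant grid on [-1,1]
y = f(x);
p = polyfit(x, y, n);         % interpolating polynomial of degree n
xf = -1:0.001:1;              % fine grid for comparison
plot(xf, polyval(p, xf) - f(xf))   % the interpolation error
```

For n = 20, expect large oscillations near the interval ends: the polynomial chases the noise.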

Now we are going to examine a number of alternatives. In Matlab we find the following

possibilities:

1. Least squares method, as described in the lecture notes. This one can be achieved

with a very simple change in the program: you only have to lower the last parameter

in

p = polyfit(x,y,n);

for example to n/2: what this procedure does is construct the best-fitting polynomial
of that degree (n/2) with the least squares method.

If that parameter equals n, and there are exactly n + 1 data points, then the least

squares method will interpolate exactly, as we have seen above.

The following four methods also try to approximate data:

2. yf=interp1(x,y,xf,’linear’);

3. yf=interp1(x,y,xf,’spline’);

4. yf=interp1(x,y,xf,’nearest’);

5. yf=interp1(x,y,xf,’pchip’);

and with a small change in your program you can see the result again.

i. Examine for methods 2-5 how the result is determined, among other things by consulting
Matlab's 'help desk' (a detailed description is not asked for). Which two methods do not
appear suitable to you?


ii. Examine, by generating different data sets with the following functions on [-1, 1],
and also for different values of n (but don't take them too large),

f(x) = |x|
f(x) = e^x (1 + 0.01 sin(1234x))
f(x) = atan(λx), λ ≫ 1

how well the remaining three methods of 1-5 meet your expectations. If you would
like to examine more functions or problems, feel free!

On the basis of the experiments I would choose method (1 / 2 / 3 / 4 / 5), because

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


1.7 Integration

Integrating a function of one variable is a standard problem, and occurs in many applications.

In Matlab, there are a number of standard routines:

1. [Q,tel]=quad(@(x) cos(x),0,1,1e-8); computes

∫_0^1 cos(x) dx

with tolerance 10^-8. It is an adaptive method as described in section ??. The answer
is stored in Q, and tel indicates the number of function evaluations needed. This
last number is a measure of the efficiency: the lower this number, the faster and
therefore better the method.

2. [Q,tel]=quadl(@(x) cos(x),0,1,1e-8); does almost the same, but claims to be
almost always better (and we will investigate what 'almost' means).

3. Romberg.m is a self-written program which can, using the theory in the lecture notes

(trapezoidal rule, accelerated with a Romberg extrapolation), integrate as well: the

most important things you have to know about this program are:

(a) a, b at the top of the program are the lower and upper bound (default 0, 1).

(b) j-1 indicates the number of times the subintervals are halved; fe = 2^(j-1) + 1 is
the number of function evaluations needed in this case.

(c) fx is the function which is integrated: you must implement this function yourself.
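For reference, the scheme that Romberg.m implements can be sketched as follows. This is not the course's Romberg.m itself, just the trapezoidal-rule-plus-extrapolation idea in a few lines.

```matlab
% Sketch of Romberg integration: repeated halving with the trapezoidal
% rule, accelerated by Richardson/Romberg extrapolation.
fx = @(x) cos(x); a = 0; b = 1; jmax = 6;
Q = zeros(jmax, jmax);
h = b - a;
Q(1,1) = h/2 * (fx(a) + fx(b));            % trapezoidal rule, 1 interval
for j = 2:jmax
    h = h/2;                                % halve the subintervals
    xnew = a + h : 2*h : b - h;             % the newly added grid points
    Q(j,1) = Q(j-1,1)/2 + h*sum(fx(xnew));  % refined trapezoidal value
    for i = 2:j                             % extrapolation columns
        Q(j,i) = Q(j,i-1) + (Q(j,i-1) - Q(j-1,i-1)) / (4^(i-1) - 1);
    end
end
Q(jmax,jmax)   % best approximation of the integral
```

The difference between the last two diagonal entries of Q gives a usable error estimate.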

Let’s get started.

i. Using the function quad, how could you verify yourself whether

quadl(@(x) cos(x)./sqrt(2-x),0,1,1e-8) does indeed evaluate the integral with

the desired precision?

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

We consider the integrals of the following functions on the interval [0,1]. For writing down

the answer, including the error, we refer to the introduction to this chapter!


ii. Verify whether the procedures work: f(x) = 5x^4.

Romberg’s results in the penultimate column are the exact answer because . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

quad and quadl reach this answer too with tolerance 10^-16 with . . . . . . and . . . . . .

evaluations resp.

iii. Which one is the best? f(x) = e^x/√(1 + x).

Romberg’s best answer is I(f) = . . . . . . . . . . . . ± . . . . . . and the number of evaluations

needed is . . . . . .. The routines quad and quadl reach the same tolerance with . . . . . .

and . . . . . . evaluations resp. The best method is (Romberg/quad/quadl).

iv. Which one is the best? f(x) = sin(x)/√x.

The function value at x = 0 is f(0) = . . . . . .. Use a for statement combined with an
if-else statement to define the function value at each grid point separately.

Romberg’s outcomes Qj,i deviate strongly from the limit values because . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Romberg’s best answer is I(f) = . . . . . . . . . . . . ± . . . . . . and the number of evaluations

needed is . . . . . .. The routines quad and quadl reach the same tolerance with . . . . . .

and . . . . . . evaluations resp. The best method is (Romberg/quad/quadl).

v. Which one is the best? f(x) = sin(x)/√x after the transformation x = t^2.

Romberg’s best answer is I(f) = . . . . . . . . . . . . ± . . . . . . and the number of evaluations

needed is . . . . . .. The routines quad and quadl reach the same tolerance with . . . . . .

and . . . . . . evaluations resp. The best method is (Romberg/quad/quadl).

Are the best approximations of iv and v in line with each other?

. . ., because . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

vi. Which one is the best? f(x) = cos(cos(2πx))

Romberg’s outcomes Qj,i deviate strongly from the limit values because . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Romberg’s best answer is I(f) = . . . . . . . . . . . . ± . . . . . . and the number of evaluations

needed is . . . . . .. The routines quad and quadl reach the same tolerance with . . . . . .

and . . . . . . evaluations resp. The best method is (Romberg/quad/quadl).


Conclusion:

I would use the procedure . . . . . . . . . . . . if f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

I would use the procedure . . . . . . . . . . . . if f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

I would use the procedure . . . . . . . . . . . . if f . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


1.8 Initial Value Problems

Today, we work at NASA in the year 2020. NASA’s senior management has announced that

savings must be made in the monthly planet visits, such as to Venus and Mars. Therefore,

they want to know the cheapest way to launch a rocket from earth to these planets.

Our solar system

The solar system is a collection of planets, all of which revolve around the sun in an (almost)

circular orbit. They all revolve (approximately) in a flat plane, and all in the same direction

too. For (also later) use, in table 1.1 we provide some (rounded) data which we have taken

from nl.wikipedia.org.

          Radius          Distance to sun
Sun       700 · 10^6 m    -
Mercury     3 · 10^6 m      58 · 10^9 m
Venus       6 · 10^6 m     108 · 10^9 m
Earth       7 · 10^6 m     150 · 10^9 m
Mars        4 · 10^6 m     228 · 10^9 m
Jupiter    72 · 10^6 m     778 · 10^9 m
Saturn     60 · 10^6 m    1427 · 10^9 m
Uranus     26 · 10^6 m    2871 · 10^9 m
Neptune    25 · 10^6 m    4498 · 10^9 m
Pluto       1 · 10^6 m    5914 · 10^9 m

Table 1.1: Data of the solar system

1. Approximate, using the data in tables 1.1 and 1.2, values for the mass of Mars and

Venus (in 3 significant figures), assuming that they are composed of roughly the same

material as the earth:

mVenus = . . . . . . . . . . . . , mMars = . . . . . . . . . . . . .

All parameters in this problem are extremely large (masses, distances and times too – think

of years instead of the SI unit second). Therefore we switch to a new mass, length and time


Mass earth, mearth          6 · 10^24 kg
Mass sun, msun              2 · 10^30 kg
Mass moon, mmoon            8 · 10^22 kg
Mass rocket, mrocket        10^4 kg
Distance earth - moon       4 · 10^8 m
Gravitational constant G    6.7 · 10^-11 m^3 kg^-1 s^-2

Table 1.2: Even more astronomical data

scale. This doesn't change the physical problem, but only facilitates our comprehension.
In this exercise, we suggest working with the following units (see footnote 7):

Time      1 year                  365 × 24 × 3600 s
Length    distance earth - sun    150 · 10^9 m
Mass      the earth's mass        6 · 10^24 kg

2. Determine in these new units (each in 3 significant figures):

the following masses:

msun = . . . . . . . . . . . . , mVenus = . . . . . . . . . . . . , mMars = . . . . . . . . . . . . ,

the following lengths:

distance sun - Venus: rVenus = . . . . . . . . . . . .,

distance sun - Mars: rMars = . . . . . . . . . . . .,

distance earth - moon: rmoon = . . . . . . . . . . . .,

radius earth: R = . . . . . . . . . . . .,

and also the value of the gravitational constant:

G = . . . . . . . . . . . . . . . . . . . . . . . .

Movement of planets, but also rockets (and apples)

Footnote 7: Indeed, if e.g. something now takes 3.5 time units, we know that the earth has
revolved around the sun three and a half times; we wouldn't see this as easily if something
had taken 110 million seconds! (Verify it yourself!)


In order to explain the sentence above: the story goes that about 400 years ago, Newton
was sitting under a tree, working on his theories, such as those about differential equations,
and saw an apple fall. And then he understood it (gravity). Of course, Einstein did much
extra work later on, and that theory is still not really understood; that's one of the
reasons why a lot of research is carried out into, yes, solar systems, but nowadays not
only ours. . .

We repeat concepts from dynamics (mechanics) below. For those who haven’t seen this

before in their studies or at the VWO, we also give all basic concepts below (but don’t

worry: we can also get down to work without this knowledge!).

- For convenience (especially when we are going to program later on) we number each
object. We define the position of the centre of mass of an object k at time t to be
(xk(t), yk(t)). These are only two components, because we assume that all planets
move in a flat plane.

- The velocity vector is its derivative with respect to time:

(vx,k(t), vy,k(t)) = d/dt (xk(t), yk(t)).

- The derivative of the velocity vector is the acceleration:

(ax,k(t), ay,k(t)) = d/dt (vx,k(t), vy,k(t)).

- The fundamental law of mechanics now states that the acceleration times the mass
of an object must equal the sum of the forces acting on that object, in short:

m · a = Σ F .     (1.8.3)

We first consider two objects, which we call no. 1 and 2, with masses m1 and m2 resp.,
for example the sun and the earth. These attract each other gravitationally (and therefore
they can keep revolving around each other). If they are at a distance

r = √((x1 − x2)^2 + (y1 − y2)^2) ,

then these masses attract each other with a force of magnitude

F = G m1 m2 / r^2 ,     (1.8.4)

and thanks to that apple, G is still called Newton's gravitational constant.


3. Write a Matlab function force.m that computes the force which mass m1 with coordinates
(x1, y1) exerts on mass m2, which is located at (x2, y2):

function [Fx,Fy]=force(G,x1,y1,x2,y2,m1,m2)

This force is a two-dimensional vector and is directed from the position of mass 2
to the position of mass 1 (the order determines the direction!), and its length equals
the value of F in (1.8.4).

- Use both positions to determine a unit vector (in the right direction) and, using
this, compute Fx,Fy; so don't use cos or sin relations.
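A possible implementation along these lines (a sketch, saved in its own file force.m) is:

```matlab
% Sketch of force.m: build the unit vector from mass 2 towards mass 1,
% then scale it by the magnitude from (1.8.4); no cos/sin needed.
function [Fx, Fy] = force(G, x1, y1, x2, y2, m1, m2)
    dx = x1 - x2;  dy = y1 - y2;        % vector from mass 2 to mass 1
    r  = sqrt(dx^2 + dy^2);             % distance between the masses
    F  = G * m1 * m2 / r^2;             % magnitude of the attraction
    Fx = F * dx / r;                    % components via the unit vector
    Fy = F * dy / r;
end
```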

4. Subsequently, write a function accelerations.m which computes the accelerations

for all masses

function [ax,ay]=accelerations(N,G,x,y,m)

x, y, m are vectors of length N, which contain the positions of the masses at a certain

time and the values of those masses; ax, ay are vectors of length N as well and have

to contain the accelerations which the objects undergo at that time: for object j, that

is the sum of the gravitational forces of the other objects, divided by its own mass

(see (1.8.3)):

- Use some for statements to compute the accelerations; it need not be efficient.

5. Now write the main program, for example simulation.m; it should contain the following
elements:

- Definition of constants, such as G and the number of masses N.

- Initial positions (x, y), velocities v and masses m; put the initial positions of all
masses in one plot.

- Definition of the simulation parameters: the number of years you want to calculate,
in how many steps, and, with that, the time step Δ.

We apply the 'leap-frog' method (see also sec. ??, exercise 4). Formally, this means
that the positions and the velocities are computed at different times, but because the
time isn't explicitly present in (1.8.3), this effectively means that we only have to
apply the correct order; and first, in order to make a good start, we have to compute
the velocity after half a time step.

- First compute the velocity after half a time step: v = v + (1/2) Δ · a.

- Subsequently, in a loop (and in the following order!; note that the old data are
overwritten):

i. compute the new positions: (x, y) = (x, y) + Δ · v, and add them to the
existing plot,

ii. compute the accelerations a at these new positions,

iii. with these accelerations, compute the new velocities: v = v + Δ · a.
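The steps above can be sketched as follows. This assumes accelerations.m from item 4; vx, vy are the velocity components, dt is the time step Δ, and nsteps the number of steps (all hypothetical variable names).

```matlab
% Sketch of the core of simulation.m: the leap-frog time loop.
[ax, ay] = accelerations(N, G, x, y, m);     % accelerations at t = 0
vx = vx + 0.5*dt*ax;  vy = vy + 0.5*dt*ay;   % half step to get started
for step = 1:nsteps
    x = x + dt*vx;  y = y + dt*vy;           % i.  new positions
    plot(x, y, '.'); hold on                 %     add them to the plot
    [ax, ay] = accelerations(N, G, x, y, m); % ii. accelerations there
    vx = vx + dt*ax;  vy = vy + dt*ay;       % iii. new velocities
end
```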

Simulations

Before we start simulating, we first have to test the program (and debug it, in case it
doesn't work properly).

6. simulationEarth.m, N = 2: The simplest case is that of the sun and the earth. Because
the sun is much heavier than the earth, we put the sun in the centre of our solar
system with velocity 0. We put the earth e.g. at (1, 0), with velocity 'upwards' in the
x-y plane, i.e. (0, V).

The value of V needed to move in an exactly circular orbit follows from an equilibrium
of forces:

m V^2 / r = G m msun / r^2 , so V = √(G msun / r) .     (1.8.5)

Now simulate a year in 1000 steps. If everything works correctly, the earth revolves
around the sun exactly 1 time (and the sun almost stands still). It is not a perfect
circle: how much does the distance to the sun vary in a year?

- Determine the smallest and the largest distance to the sun during one year.

7. simulationEarth.m, N = 2: A second test we can carry out is to look at (numerical)
energy conservation. The energy of the system consists of two components: the kinetic
energy (at a certain time)

Ekin = (1/2) Σk mk (vx,k^2 + vy,k^2) ,

and the potential energy

Epot = −G Σj<k mj mk / rjk ,

with rjk the distance between objects j and k.

Analytically, the total energy is constant, and hence independent of time (provided
both components are calculated at the same time, of course); numerically there are
differences, but the fluctuations in the total energy go to 0 properly as Δ → 0.

- Determine the fluctuation as the difference between the smallest and the largest
value of the total energy during one year; how does it relate to the time step Δ?

It is now time to investigate our solar system.

8. simulationVenus.m, N = 3: First simulate the sun, earth and a third planet, Venus.

Choose an arbitrary position for Venus (but at the right distance), e.g. at (0, rVenus).

Choose the velocity in analogy to (1.8.5).

To see that not all orbits need to be circles (as is the case in our solar system), and
that our solar system need not have been dull, now put the third planet at (0, 0.5)
with velocity (−3.14, 0).

9. simulationMoon.m, N = 3: Now simulate sun, earth and moon. Choose an arbitrary

position for the moon (but at the right distance) and choose the velocity in analogy

to (1.8.5) again. This looks simpler than it is!

What time step do you choose now? Is the orbit of the moon (clearly) visible? Zoom

in?


1.9 Nonlinear problems

Today, we first make calculations on a chemical reactor.

Problem

We consider a chemical reactor filled with a coarse material. A liquid which contains a
material A flows through the reactor with speed v. This material A can be converted into
a material B in the reactor, and this happens proportionally to the squared concentration
of A.

If we call this A-concentration c, and y is the coordinate along the length of the reactor,
0 ≤ y ≤ L, then we have in the stationary state

D c''(y) − v c'(y) = k c(y)^2 ,

with D the diffusion coefficient and k a reaction rate constant.

We scale to the size s of the coarse material. Then we obtain, with x = y/s,

(1/P) c''(x) − c'(x) = r c(x)^2 ,     (1.9.7)

where the Péclet number P = vs/D and furthermore r = ks/v. The length of the reactor
in the scaled quantity is now M = L/s.

The boundary conditions are

c(0) − (1/P) c'(0) = 1 ,   c'(M) = 0 ,     (1.9.8)

in other words, at the inflow boundary at x = 0, the concentration is almost 1 (almost, due
to the diffusion), and at the outflow boundary, the concentration doesn't change.

Apparently, we have a differential equation, and a nonlinear one at that. For ordinary
nonlinear equations, we have seen that (usually) you cannot solve them directly, but
only with an iterative method.

Solution method

Just as for Newton's method, we suppose the following. Suppose we have a c*, which is
'almost' a solution to (1.9.7) and (1.9.8), and we want to find the exact solution c. We
write the nonlinear term as

c^2 = (c*)^2 + 2c*(c − c*) + (c − c*)^2

(nothing has happened yet). If now c* is almost a solution, we apparently have that c − c*
is small, and we may (Newton approach!) assume the quadratic term to be negligible; the
right-hand side then becomes r(2c*c − (c*)^2), so that (1.9.7) is replaced by the linear
equation

(1/P) c''(x) − c'(x) − 2r c* c = −r (c*)^2 .     (1.9.9)

We then call the c we have found c* again, and we repeat this process until the solution
almost doesn't change anymore.

To start this Newton process, we of course first have to choose a suitable c*. Given the
nature of the problem, the choice c*(x) = 1 is obvious, especially since this choice also
satisfies the boundary conditions!

Numerical elaboration

Create an M-file in which the following components are clearly recognisable:

0. Give the quantities P, r and M their values. In order to obtain a numerical solution,
we introduce a grid of equidistant, discrete points xi:

xi = (i − 1) h , i = 1, 2, 3, . . . , n + 1, with h = M/n,

and n is the number of (yet to be chosen) boxes on that grid.

1. A numerical solution is asked for. To that end, we look for approximations of the exact
solution c(xi), i = 1, 2, 3, . . . , n + 1. These approximations together form the
requested numerical solution.

We introduce two (column) vectors w and w*, both of length n + 1: wi is the numerical
approximation of c(xi) that we want to know, and w*i is the numerical approximation
of c*(xi) that we already know. The choice c*(x) = 1 to be able to start the Newton
process implies that initially, w* is a vector of ones.

2. At the grid point xi, the equation (1.9.9) applies; we approximate both derivatives
with the central difference formulas (??) and (??):

c'(xi) ≈ (wi+1 − wi−1)/(2h) ,   c''(xi) ≈ (wi+1 − 2wi + wi−1)/h^2 .

The truncation error in these approximations is O(h^2) and is also referred to as the
discretisation error. Note that these approximations can only be applied at the interior
grid points xi, i = 2, 3, . . . , n.

With the help of these central differences, a system of coupled equations arises:

(1/P)·(wi+1 − 2wi + wi−1)/h^2 − (wi+1 − wi−1)/(2h) − 2r w*i wi = −r (w*i)^2 .     (1.9.11)

These are in total n − 1 linear equations in n + 1 unknowns wi, i = 1, 2, . . . , n + 1. It
will turn out that the two boundary conditions, which haven't been used yet, provide
the two missing linear equations.

Therefore, we can write the whole (linear) system in matrix × vector form,

A · w = b ,

where both the matrix A and the right-hand side b will depend on w*. In order to
fill the coefficient matrix A correctly, it is wise to rewrite equation (1.9.11) by
combining terms with the same unknown:

(1/(Ph^2) + 1/(2h))·wi−1 − (2/(Ph^2) + 2r w*i)·wi + (1/(Ph^2) − 1/(2h))·wi+1 = −r (w*i)^2 .

Note that the matrix A consists of very many zeros. Such a matrix is called a sparse

matrix. In practice, for such a matrix only the nonzero coefficients are stored (which,

in this case, only lie on the main diagonal and on both diagonals immediately beside

the main diagonal: reason?). Now already fill the rows 2, 3, . . . , n of the sparse matrix

A:

A = sparse(n+1,n+1);

A(2,1) = ....;

..... % fix the nonzero elements of A (for loop?)

A(n,n+1) = ....;

full(A) % once for checking: take for example n = 6

3. The two missing equations are established by the two boundary conditions. This can
be done (again) in two ways:

(a) With the (one-sided) difference formula (??), the first derivatives in (1.9.8) can
be approximated. This leads to (note that w1 ≈ c(x1) = c(0) and wn+1 ≈ c(xn+1) = c(M))

w1 − (1/P)·(w2 − w1)/h = 1 ,   wn − wn+1 = 0 .     (1.9.12)

Using this, fill the first and the last equation of the system A · w = b.

(b) As a (better) alternative, use the method of virtual points on both sides. To that
end, we introduce w0, so that after application of (??), the boundary condition at
x = 0 changes to

w1 − (1/P)·(w2 − w0)/(2h) = 1 , i.e. w0 = w2 − 2hP (w1 − 1) .

After elimination of w0 (from the boundary condition), equation (1.9.11) at x1 = 0
transforms to

(2/(Ph^2))·w2 − (2/(Ph^2) + 2/h + P)·w1 − 2r w*1 w1 = −r (w*1)^2 − 2/h − P .     (1.9.13)

Similarly, we introduce wn+2, so that the boundary condition at x = M transforms to

wn+2 − wn = 0 .

Use this to transform (1.9.11) at xn+1 = M to

α wn + β wn+1 − 2r w*n+1 wn+1 = −r (w*n+1)^2 .     (1.9.14)

Determine the parameters α, β yourself.

Hereafter, use both method (a): (1.9.12), and method (b): (1.9.13, 1.9.14).

4. Solve the system A · w = b, and compute the difference ||w − w*|| (help norm).
Subsequently, you take w* equal to the w you have found.
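Step 4 can be sketched in a few lines, assuming the previous iterate is stored in a vector wstar (a hypothetical name):

```matlab
% Sketch of one Newton update: solve the linear system, measure the
% change, and make the new iterate the next linearisation point.
w = A \ b;                    % solve A*w = b (sparse backslash)
diffnorm = norm(w - wstar);   % size of the Newton update
wstar = w;                    % new iterate replaces the old one
```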

5. If the difference you have just computed is smaller than a certain tolerance δ, then

you stop, and otherwise you return to step 2.

Because convergence is not guaranteed for Newton’s method, it is wise to limit the

number of possible iterations to a certain maximum, for example 50 iterations. If this

number is not enough to reach the tolerance, then this must be made clear with a

print statement.

Question: which repetition statement do you use for this, and what does the expression

that must be checked look like (if necessary, have a look at section 1.0.5)?

Computations

From now on, we have: P = 4, r = 0.05, M = 60, if v = 1. First we are going to check

whether the solution method works ’correctly’ on the basis of an error analysis. Note that,

besides round-off errors, there are also truncation errors, both from Newton’s method and

from the discretisation method.

To that end, investigate for both methods, (a) and (b), the following:

i. Take n = 20 and δ = 10^-12. Observe how fast the convergence is within the iteration
in determining w, i.e. look at ||w − w*|| very precisely. Does this meet your
expectations?


ii. Run the program for n = 32, 64, 128, 256, 512 and 1024. For n = 1024, make a plot of

the solution. Discuss the accuracy of the solution at x = M/2 (so i = . . .) and x = M

(so i = . . .). Is this a reason to revise the value of δ?

The purpose of this reactor is to remove material A as much as possible. Therefore, the

reactor functions adequately when the concentration at the end of the reactor is sufficiently

small:

c(M) < 0.2

By changing the speed v, you can change the end value of the concentration (Attention: as

a result, the parameters P and r change simultaneously!)

iii. Investigate for which value of v the reactor functions well. To that end, make a plot

of c(M) as a function of v. What is the best value of v, in two significant figures?

Only use method (b) and take n = 128.


