
McMaster University, Physics 4G03/6G03 October 8 - October 29 2020

Homework 3 :

Eigenvalue Problems, Jacobi and Lanczos methods

You should submit source and makefiles for the exercises marked with PROGRAM.

Files for this assignment are available in /home/4G03/HW3 on phys-ugrad

Due: Thursday October 29 2020, 11:30pm on avenue

Reading: In the first weeks we have covered material, parts of which can be found in Chapters 2, 3, and 10 (Monte Carlo) of Pang. This week we will discuss diagonalization for dense (Jacobi) and sparse (Lanczos) matrices and continue with linear algebra and matrices (Pang Chapter 5).

Exercise 1 Quantum States in a Potential Well

You may remember from Physics 3MM3 (or a similar Quantum Mechanics course) that it is possible to solve a differential equation in terms of the Legendre and Laguerre polynomials. However, in practice it is often much more useful to solve Schrödinger's equation by mapping it onto a matrix which we can solve numerically for its eigenvalues and eigenvectors. That's the goal of this exercise. You may find that basic QM like the material developed in Chapter 3 in Griffiths can be quite handy, so you may want to review that a bit. We wish to find the eigenvalues and eigenfunctions of the following 1D Schrödinger equation:

H|ψ⟩ = E|ψ⟩,   H = H0 + V. (1)

As you can see this potential diverges at x = 0, π/2 so we may think of the particle as being confined to this interval. The eigenstates of H0 on the same interval are simply the eigenstates of the infinite potential well of width L. They are well known:

ψ_n(x) = √(2/L) sin(nπx/L),   E⁰_n = ℏ²π²n²/(2mL²),   n = 1, 2, 3, …

They also form a complete set of basis states on the Hilbert space defined on the interval 0 < x < L. We can therefore expand the solution we are looking for in the following manner:

|ψ⟩ = Σ_{m=1}^∞ c_m |ψ_m⟩. (7)

If we insert this expression in Eq. (1) we find:

Σ_{m=1}^∞ c_m (H0 + V)|ψ_m⟩ = E Σ_{m=1}^∞ c_m |ψ_m⟩. (8)

If we take the inner product of Eq. (8) with the bra ⟨ψ_n| we obtain the matrix equation:

Σ_{m=1}^∞ H_nm c_m = E c_n, (9)

where

H_nm = ⟨ψ_n|(H0 + V)|ψ_m⟩ = δ_nm E⁰_n + ⟨ψ_n|V|ψ_m⟩. (10)

Note that this matrix equation is infinite-dimensional! Hence, for a practical application we will have to make it finite-dimensional by simply truncating the basis set:

|ψ⟩ ≈ Σ_{m=1}^M c_m |ψ_m⟩. (11)

With M = 5 the finite matrix equation then looks like this:

| H11 H12 H13 H14 H15 | | c1 |     | c1 |
| H21 H22 H23 H24 H25 | | c2 |     | c2 |
| H31 H32 H33 H34 H35 | | c3 | = E | c3 |. (12)
| H41 H42 H43 H44 H45 | | c4 |     | c4 |
| H51 H52 H53 H54 H55 | | c5 |     | c5 |

Solving the Schrödinger equation has become equivalent to determining the eigenvalues and eigenvectors of a matrix.

1.1 For the potential in Eq. (4) write out the 5 × 5 matrix H_nm that corresponds to truncating the basis set with M = 5. You may find the following integrals useful:

∫ … dx = (−1)^|m−n| π min(n, m), (14)

with m, n both integers. When using these integrals be careful with overall constants. (No programming needed).
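Purely for orientation, the structure of the truncated matrix of Eq. (10) can be sketched in code. Note that e0 and vmatel below are placeholder functions of my choosing, not the actual matrix elements: they must be replaced by the E⁰_n and ⟨ψ_n|V|ψ_m⟩ worked out from Eq. (4) and the integrals above.

```cpp
#include <vector>
#include <cstdlib>
using namespace std;

typedef vector<double> Row;
typedef vector<Row> Matrix;

// PLACEHOLDERS: substitute the real E^0_n and <psi_n|V|psi_m>
// from exercise 1.1; these dummies only fix the structure.
double e0(int n) { return double(n) * double(n); }               // E^0_n ~ n^2 in units of hbar^2/(2m)
double vmatel(int n, int m) { return 1.0 / (1.0 + abs(n - m)); } // dummy symmetric V_nm

// Assemble H_nm = delta_nm E^0_n + <psi_n|V|psi_m>, n,m = 1..M, as in Eq. (10).
Matrix buildH(int M) {
    Matrix H(M, Row(M, 0.0));
    for (int n = 1; n <= M; n++)
        for (int m = 1; m <= M; m++) {
            H[n - 1][m - 1] = vmatel(n, m);
            if (n == m) H[n - 1][m - 1] += e0(n);
        }
    return H;
}
```

Since V is Hermitian the resulting matrix is real and symmetric, which is exactly the class of matrices Jacobi's method below handles.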

1.2 PROGRAM Write a module that implements Jacobi's method for diagonalizing. Note that in the handout Eq. 11.1.11 should read 1/√(t² + 1). We shall use that to study the eigenvalues of the matrix H. You should write this as a subroutine starting something like this:

#include <cmath>

#include <vector>

using namespace std;

typedef vector<double> Row; // One row of the matrix

typedef vector<Row> Matrix; // Matrix: a vector of rows

void jacdiag(Matrix & A, vector<double> & d)
{
  // ...
}

Here A is the matrix to be diagonalized and d is a vector that on output contains the calculated

eigenvalues in any order. Try the simplest possible implementation first, by doing 10 sweeps

through the elements a_pq above the diagonal and performing a Jacobi rotation on each one. Use the following equations, assuming that |a_pq| > 2.2 × 10⁻¹⁶:

θ = (a_qq − a_pp)/(2 a_pq),   t = sgn(θ)/(|θ| + √(θ² + 1)),   c = 1/√(t² + 1),   s = t c.

Then perform the rotation as discussed in class. In a first implementation you can try something like this in pseudocode: (watch out, I haven't checked this)

AP=A(p,:)

A(p,:)=c*A(p,:)-s*A(q,:)

A(q,:)=s*AP +c*A(q,:)

AP=A(:,p)

A(:,p)=c*A(:,p)-s*A(:,q)

A(:,q)=s*AP +c*A(:,q)

A(p,q)=0.0_dbl

A(q,p)=0.0_dbl

After each sweep, print out the sum of the squares of the remaining elements above the diagonal. Then try to add some of the refinements described in the handout and write a routine that only works on the upper triangle of A. The matrix A is assumed symmetric so the transformations you perform should preserve that. A simple way of doing that is to use only elements above the diagonal (as is done in the handout from Numerical Recipes by W. Press et al.). Check your program on simple matrices where you know the result.

For this assignment there is no need to also calculate the eigenvectors. However, if you

feel up to it you can add the calculation of the eigenvectors to your implementation by adding

another argument Matrix & V.

1.3 PROGRAM Use your implementation of Jacobi's routine to determine the eigenvalues of the matrix equation with M = 10, 20, 30, 40. Determine the three lowest eigenvalues this way for the different values of M. The eigenvalues should be in units of ℏ²/(2m). This method becomes exact in the limit where M → ∞. How many digits have you reliably determined?

Exercise 2 S = 1/2 Heisenberg Spin Chain

While Jacobi's method is very reliable, it is not very efficient for very large matrices. In that case we have to use sparse matrix methods. We shall look at spin waves as they occur in the Heisenberg spin chain. We will do this by exactly diagonalizing a small model system using the Lanczos method. Hopefully you will be able to study the ground-state properties of a system with 20 spins, corresponding to a matrix of size 2²⁰ × 2²⁰ or 1048576 × 1048576! In part, the 2016 Nobel Prize in Physics was awarded to Duncan Haldane for studies of this system. I've put the Nobel Committee's overview of the scientific background on Avenue.

Some materials consist of long 1-dimensional chains of atoms. These chains interact only weakly and hence we can model the magnetic properties of such a material by a linear chain of interacting quantum mechanical spins as shown in the figure below.

[Figure 1: The Heisenberg Spin Chain.]

On the i'th site of the chain sits a spin-1/2, described by the Pauli matrices

σx = | 0 1 |   σy = | 0 −i |   σz = | 1  0 |
     | 1 0 |        | i  0 |        | 0 −1 |

The matrices are here written in terms of the z-component of the spin; that's why σz is diagonal.

The z-component of the spin can be either up, |↑⟩, or down, |↓⟩. If we have two spins we need to work in a product basis, |↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩. We see that we have 4 states. If we have 3 interacting spins we get 8 states, and in the general case for L interacting spins we get N = 2^L states. The interaction between 2 spins is usually written in terms of the Heisenberg interaction S⃗_i · S⃗_j; with a little bit of work it is not difficult to see that in the above basis we find that:

2 S⃗_i · S⃗_j + (1/2) I = | 1 0 0 0 |
                         | 0 0 1 0 |
                         | 0 1 0 0 |
                         | 0 0 0 1 |

If we number the basis states 0, 1, 2, 3 we see that the action of the matrix P01, describing the interaction between the spins on sites 0 and 1, is nothing more than permuting the coefficients in the vector. It is easy to see that the coefficient x_i is mapped onto the coefficient x_j of the resulting vector, where j is equal to i with the bits 0, 1 interchanged. In general 2 S⃗_k · S⃗_l + (1/2) I permutes the bits k and l. If we have L interacting spins their interactions are described by the Hamiltonian H which we can write:

H = Σ_{k=0}^{L−1} S⃗_k · S⃗_{k+1}

Here it is implied that we are using periodic boundary conditions, in the sense that across the boundary we get S⃗_{L−1} · S⃗_0.
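The bit-permutation action of 2 S⃗_k · S⃗_l + (1/2) I on a basis-state index can be sketched with a small helper (swap_bits is an illustrative name of mine, not part of the assignment's code):

```cpp
#include <cassert>

// Return i with bits k and l interchanged.
// XOR trick: flipping both bits is only needed when they differ.
unsigned int swap_bits(unsigned int i, int k, int l)
{
    bool bk = (i >> k) & 1U;
    bool bl = (i >> l) & 1U;
    if (bk != bl) {
        i ^= (1U << k);
        i ^= (1U << l);
    }
    return i;
}
```

With k = 0, l = 1 this maps state 1 to state 2 and vice versa, while states 0 and 3 are left unchanged, exactly the permutation P01 described above.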

file: hv.cc

#define BIT_SET(a,b) ((a) |= (1U<<(b)))
#define BIT_CLEAR(a,b) ((a) &= ~(1U<<(b)))
#define BIT_FLIP(a,b) ((a) ^= (1U<<(b)))
#define BIT_CHECK(a,b) ((bool)((a) & (1U<<(b))))
// Set on the condition f else clear
//bool f;           // conditional flag
//unsigned int m;   // the bit mask
//unsigned int w;   // the word to modify: if (f) w |= m; else w &= ~m;
#define COND_BIT_SET(a,b,f) ((a) = ((a) & ~(1U<<(b))) | ((-(unsigned int)f) & (1U<<(b))))

#include <vector>
#include <cmath>
#include <iostream>
using namespace std;

void hv(vector<double>& y, const vector<double>& x, int L)
{
  //for (vector<double>::iterator it = y.begin() ; it != y.end(); ++it)
  //  *it=0.0;
  unsigned int k;
  for (unsigned int i=0;i<x.size();i++){
    if (abs(x[i])>2.2e-16) {
      int jm = L-1;
      double xov2=x[i]/2.0;
      for (int j=0;j<L;j++){
        k=i;
        COND_BIT_SET(k,jm,BIT_CHECK(i,j));
        COND_BIT_SET(k,j,BIT_CHECK(i,jm));
        y[k] += xov2;
        jm = j;
      }
    }
  }
  for (unsigned int i=0;i<x.size();i++)
    y[i]=y[i]-((double) L)/2.0*x[i]/2.0;
}

It is now relatively easy to write a little function that takes a vector, v, and returns Hv. In order to do this we need the functions BIT_CHECK(i,j) and COND_BIT_SET(i,j,f). BIT_CHECK(i,j) returns a logical that's true if the j'th bit of i is set. COND_BIT_SET(i,j,f) sets the j'th bit of i equal to 1 if the boolean f is true; otherwise it is set to zero. One then gets the above function.

Note: The above function conserves the input vector and the result of the matrix-vector operation is added to the outgoing vector y. As you can see the outgoing vector is not first set to zero. This is convenient if you want to use as little memory as possible, but you can of course uncomment the statement setting y to zero if that is more convenient for your implementation. Since the z-component of each spin is 'represented' by the i'th bit of an integer we obviously have to make sure that the number of spins in the linear chain doesn't exceed the number of bits in an integer. If we use unsigned integers we have 32 bits, so the above function will not work for L > 32. That's not a big problem since, as we shall see shortly, it is very difficult to treat more than 32 spins using a single CPU.
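As a quick sanity check (not required for the assignment), one can apply hv to the basis vectors for L = 2, where H reduces to 2 S⃗_0 · S⃗_1: for instance H e1 should come out as e2 − e1/2. Below is a self-contained sketch duplicating the macros and the function above (with fabs instead of abs, and y zeroed by the caller):

```cpp
#include <cmath>
#include <vector>
using namespace std;

// Bit macros duplicated from hv.cc above.
#define BIT_CHECK(a,b) ((bool)((a) & (1U<<(b))))
#define COND_BIT_SET(a,b,f) ((a) = ((a) & ~(1U<<(b))) | ((-(unsigned int)f) & (1U<<(b))))

// hv as given above; the caller must zero y first.
void hv(vector<double>& y, const vector<double>& x, int L)
{
    unsigned int k;
    for (unsigned int i = 0; i < x.size(); i++) {
        if (fabs(x[i]) > 2.2e-16) {
            int jm = L - 1;
            double xov2 = x[i] / 2.0;
            for (int j = 0; j < L; j++) {
                k = i;
                COND_BIT_SET(k, jm, BIT_CHECK(i, j));   // swap bits jm and j of i ...
                COND_BIT_SET(k, j, BIT_CHECK(i, jm));   // ... to get the target state k
                y[k] += xov2;                           // bond term P/2
                jm = j;
            }
        }
    }
    for (unsigned int i = 0; i < x.size(); i++)
        y[i] -= ((double)L) / 2.0 * x[i] / 2.0;         // constant -L/4 shift
}
```

Working the bond loop through by hand for L = 2 and x = e1 gives y = e2 − e1/2, and for x = e0 it gives y = e0/2, matching the 2 S⃗_0 · S⃗_1 picture (triplet eigenvalue 1/2, singlet −3/2).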

2.1 PROGRAM The size of the Hamiltonian H is N = 2^L, which can be an astronomical number. Hence, if we want to study the interactions of the spins it is impossible to diagonalize a Hamiltonian of more than a few spins using dense matrix algorithms like Jacobi's method. As in much current research we have to resort to an approximate method, the Lanczos method, which however is very precise (as we shall see). Using the above function hv, implement the Lanczos method by generating the M × M tridiagonal matrix Lan:

Lan = | α0  β0   0    ⋯    0    |
      | β0  α1   β1   ⋯    0    |
      | 0   β1   α2   ⋯    0    |
      | ⋮    ⋯    ⋯    ⋯   βM−2  |
      | 0   ⋯    0   βM−2  αM−1 |  (21)

If v0 is a normalized random vector of length N = 2^L, the entries in the matrix Lan are given by:

βi = ⟨vi|H|vi+1⟩
αi = ⟨vi|H|vi⟩
u1 = H|v0⟩ − α0|v0⟩
vi = ui/|ui|
ui+1 = H|vi⟩ − αi|vi⟩ − βi|vi−1⟩,   i ≥ 1. (22)

At the heart of the Lanczos process is a Gram-Schmidt orthogonalization. This process is known to have some stability issues and often a slight rewriting is used, the so-called modified Gram-Schmidt process. Mathematically the two techniques are equivalent, but the modified Gram-Schmidt process is less susceptible to round-off errors. A somewhat analogous rewriting is used for the Lanczos process, where the steps outlined above are instead re-organized as follows:

Fundamental Lanczos iteration without re-orthogonalization

|v0⟩ ← random vector
|v0⟩ ← |v0⟩/‖v0‖
|w⟩ ← A|v0⟩
α0 ← ⟨v0|w⟩
|f0⟩ ← |w⟩ − α0|v0⟩
for j = 0, …, m−2
  βj ← ‖fj‖
  |vj+1⟩ ← |fj⟩/βj
  |w⟩ ← A|vj+1⟩ − βj|vj⟩
  αj+1 ← ⟨vj+1|w⟩
  |fj+1⟩ ← |w⟩ − αj+1|vj+1⟩ (23)
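For intuition, the modified Gram-Schmidt idea mentioned above can be sketched on ordinary vectors (mgs and dot are illustrative names of mine): each projection is subtracted from the current, already-updated remainder rather than from the original vector, which is what tames the round-off.

```cpp
#include <vector>
#include <cmath>
using namespace std;
typedef vector<double> Vec;

double dot(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (size_t i = 0; i < a.size(); i++) s += a[i] * b[i];
    return s;
}

// Modified Gram-Schmidt: orthonormalize the vectors in q, in place.
// The projection onto each earlier q[k] is removed from the *running*
// remainder q[j], not from the original vector (that would be classical GS).
void mgs(vector<Vec>& q)
{
    for (size_t j = 0; j < q.size(); j++) {
        for (size_t k = 0; k < j; k++) {
            double r = dot(q[k], q[j]);        // overlap with current remainder
            for (size_t i = 0; i < q[j].size(); i++)
                q[j][i] -= r * q[k][i];
        }
        double nrm = sqrt(dot(q[j], q[j]));
        for (size_t i = 0; i < q[j].size(); i++)
            q[j][i] /= nrm;
    }
}
```

In exact arithmetic this produces the same orthonormal set as classical Gram-Schmidt; the difference only shows up in finite precision, where the modified ordering keeps the accumulated loss of orthogonality much smaller.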

You can use the following skeleton as a starting point:

file: main.cc

#include <random>

#include <chrono>

#include <iostream>

#include <cmath>

#include <vector>

using namespace std;

typedef vector<double> Row; // One row of the matrix

typedef vector<Row> Matrix; // Matrix: a vector of rows

void hv(vector<double>& y, const vector<double>& x,int L);

void jacobi(Matrix &a, vector<double> &d, Matrix &v, int &nrot);

void eigsrt(vector<double> &d, Matrix &v);

inline double dotprod(const vector<double>& y, const vector<double>& x)
{ double sum = 0.0;
  for (size_t i=0;i<y.size();i++)
    sum += x[i]*y[i];
  return (sum);
}

int main()

{

// define the generator without seeding it.

mt19937 generator;

uniform_real_distribution<double> distribution(0,1.0);

int L,m;

cout << "Size of system, L: ";

cin >> L;

cout << "Size of Lanczos matrix, m: ";

cin >> m;

int N = 1 << L;   // N = 2^L

vector<double> v1(N),v2(N),f(N),omega(N);

Matrix Lan(m,Row(m)),v(m,Row(m));

for (int i=0;i<N;i++){
  v1[i] = 1.0-2.0*distribution(generator);
  v2[i]=0.0;
}
// ...

Once you have generated Lan, use your jacobi routine to find the eigenvalues of Lan. Locate

the lowest two eigenvalues which will correspond to the ground-state energy, E0 and the energy

of the first excited state, E1.
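A generic sketch of the loop in Eq. (23) is given below. It is written against a dense matvec so it can be tested on a small symmetric matrix; in the assignment, hv plays the role of the matrix-vector product, and names like lanczos and matvec are mine, not part of the skeleton above.

```cpp
#include <vector>
#include <cmath>
#include <random>
using namespace std;
typedef vector<double> Vec;
typedef vector<Vec> Matrix;

double dot(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (size_t i = 0; i < a.size(); i++) s += a[i] * b[i];
    return s;
}

// Dense matrix-vector product; replace by hv(y, x, L) for the spin chain.
void matvec(const Matrix& A, const Vec& x, Vec& y)
{
    for (size_t i = 0; i < A.size(); i++) {
        y[i] = 0.0;
        for (size_t j = 0; j < A.size(); j++) y[i] += A[i][j] * x[j];
    }
}

// Eq. (23): build m Lanczos vectors v plus the tridiagonal entries alpha, beta
// that fill the matrix Lan of Eq. (21).
void lanczos(const Matrix& A, int m, vector<Vec>& v, Vec& alpha, Vec& beta)
{
    int N = A.size();
    mt19937 gen(12345);                         // fixed seed for reproducibility
    uniform_real_distribution<double> dist(-1.0, 1.0);
    v.assign(m, Vec(N, 0.0));
    alpha.assign(m, 0.0);
    beta.assign(m - 1, 0.0);
    for (int i = 0; i < N; i++) v[0][i] = dist(gen);
    double nrm = sqrt(dot(v[0], v[0]));
    for (int i = 0; i < N; i++) v[0][i] /= nrm; // |v0> <- |v0>/||v0||
    Vec w(N), f(N);
    matvec(A, v[0], w);                         // |w> <- A|v0>
    alpha[0] = dot(v[0], w);
    for (int i = 0; i < N; i++) f[i] = w[i] - alpha[0] * v[0][i];
    for (int j = 0; j <= m - 2; j++) {
        beta[j] = sqrt(dot(f, f));
        for (int i = 0; i < N; i++) v[j + 1][i] = f[i] / beta[j];
        matvec(A, v[j + 1], w);                 // |w> <- A|v_{j+1}> - beta_j |v_j>
        for (int i = 0; i < N; i++) w[i] -= beta[j] * v[j][i];
        alpha[j + 1] = dot(v[j + 1], w);
        for (int i = 0; i < N; i++) f[i] = w[i] - alpha[j + 1] * v[j + 1][i];
    }
}
```

For small m the generated vectors should come out orthonormal to near machine precision; monitoring their overlaps is a cheap check of a new implementation before moving on to the spin chain.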

Note: As you can see, the above program outline main.cc suggests that you use 4 vectors of length N = 2^L: v1, v2, f, omega. This can use a lot of memory. I was able to eliminate the need for f and omega, thereby cutting the memory usage in half! For the assignment there is no need for you to do this. (But if you should feel the urge to try, it is indeed possible).

2.2 PROGRAM This exercise will take up quite a bit of memory on the computer! Start with L = 10 and plot the calculated E0 as a function of M for M = 5, 10, 15, 20, …, 50. (It should converge to E0^{L=10} = −4.5154463544). Do the same thing for E1. Plot the difference |E0 − E0^{L=10}| as a function of M. What do you observe? Does the method converge with M? If so, how? Do the same thing for L = 14.

2.3 PROGRAM For some time it was discussed whether models of spin chains such as the present one possess a gap in the thermodynamic limit, in the sense that lim_{L→∞} |E1 − E0| > 0. Duncan Haldane was awarded the 2016 Nobel Prize for answering this question. Try plotting the gap you can calculate as a function of the system size L for L even, using sufficiently large values of M. The behavior of this gap for L odd is unrelated, so you should always use L even. Do NOT try to go beyond L = 20–24, and if you're not working on phys-ugrad you may exceed your computer's memory before that. What can you say about the gap in the limit L → ∞? To reach a conclusion you might try to change the axis on your plot from linear to logarithmic.

Note:

1. Obviously, it doesn't make sense to increase the size, M, of the tri-diagonal Lanczos matrix beyond the size of the original matrix. For instance, with L = 6 it doesn't make sense to take M larger than 2⁶ = 64, and in general one would always keep M much smaller than this number.

2. If you monitor the lowest eigenvalues of the tri-diagonal Lanczos matrix, Lan, as you increase M you occasionally observe that an eigenvalue 'splits' into 2 eigenvalues that are very close. One of these eigenvalues is 'spurious' and is not a real eigenvalue. One cause of this is the loss of orthogonality between the Lanczos vectors due to round-off errors. One way of fixing this is to orthogonalize a new Lanczos vector not only to the 2 previous ones but to all previous ones. We shall not do this here. Criteria for determining whether an eigenvalue is spurious have also been developed, but this goes beyond what we can cover here. However, please note that when you calculate the gap |E1 − E0| in the spin chain you may then have to discard a 'spurious' eigenvalue that suddenly occurs very close to the ground-state eigenvalue. If you have been very careful with numerical errors then this might not occur. Physically, the gap has to be a smooth function of L and it cannot exhibit sudden 'jumps'.


