Quantum Mechanics with Trajectories: Quantum Trajectories and Adaptive
Grids
Robert E. Wyatt∗
Department of Chemistry and Biochemistry, University of Texas, Austin, Texas 78712
Eric R. Bittner
Department of Chemistry and Center for Materials Chemistry
University of Houston, Houston, Texas 77204†
(Dated: February 7, 2003)
Although the foundations of the hydrodynamical formulation of quantum mechanics were laid
over 50 years ago, it has only been within the past few years that viable computational implementations
have been developed. One approach to solving the hydrodynamic equations uses quantum
trajectories as the computational tool. The trajectory equations of motion are described and
methods for implementation are discussed, including fitting of the fields to gaussian clusters.
I. INTRODUCTION
The concept of a path or trajectory plays a central role in our understanding of the motion of objects and ﬂuids.
Much like a route traced on a road map, a trajectory tells us where an object started, where it goes, and how it gets
there. There may be alternate routes, some more likely than others. Hence, analysis of a set of trajectories provides
us with an intuitive tool for understanding possibly complex dynamics. Macroscopic objects obey Newton's equation
of motion, $m\ddot{q} = f(q(t))$, where $q$ is the position of the particle at time $t$ and $f(q(t))$ is the force acting on it. Given
values for both position and velocity at time $t$, we can compute the trajectory that the object will follow and, as a
result, we can predict with certainty where the object will wind up.
However, at the atomic and molecular level where objects obey the rules of quantum mechanics, Newton's equations
of motion are no longer strictly valid and the concept of a unique trajectory given a set of initial conditions becomes
murky at best. This is because, fundamentally, quantum mechanics is nonlocal, an issue to which we will return later.
In addition, the Heisenberg uncertainty principle states that when measurements are made, we cannot simultaneously
determine with inﬁnite precision the exact position and velocity of a quantum particle, although in principle this is
possible in Newtonian mechanics. Consequently, it seems as though we cannot speak in terms of the unique trajectory
followed by a quantum mechanical object.
However, even in quantum mechanics, Feynman (1) showed that we can talk in terms of paths, in fact, ensembles of
them. For example, if we specify two fixed endpoints, q(0) and q(t), then we can compute the probability of a particle
starting at q(0) and winding up at q(t) by summing over all possible paths connecting the two points (these paths
include the classical trajectory linking these points, if there is one) and weighting each path by the complex-valued
factor $\exp(iS/\hbar)$, where $S(t)$ is the classical action integral

$$S(t) = \int_0^t \left[ \frac{1}{2} m \dot{q}^2 - V(q(t)) \right] dt \qquad (1)$$
in which the integrand is the classical Lagrangian. What we lose here is any indication of exactly which path the
particle actually follows, although we can say that some paths are more likely than others, especially those close to
the paths predicted by classical mechanics. In order to make a prediction, it is as if the particle must explore all
possible routes between the two end points.
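To make Eq. (1) concrete, the action can be evaluated numerically along any discretized path. The sketch below is our illustration, not part of the original article (the function `action` and its arguments are our own naming); it sums the Lagrangian along a force-free path.

```python
import numpy as np

def action(q, t, m=1.0, V=lambda x: 0.0 * x):
    """Classical action S = integral of (1/2) m qdot^2 - V(q) dt,
    evaluated along a discretized path q sampled at uniform times t."""
    dt = t[1] - t[0]
    qdot = np.gradient(q, dt)                    # finite-difference velocity
    L = 0.5 * m * qdot**2 - V(q)                 # classical Lagrangian
    return np.sum(0.5 * (L[1:] + L[:-1]) * dt)   # trapezoidal quadrature

# Straight-line path from q(0) = 0 to q(1) = 1 (the classical, force-free route):
t = np.linspace(0.0, 1.0, 1001)
S = action(t.copy(), t)   # qdot = 1 everywhere, so S = m/2 = 0.5
```

Paths that wiggle away from the straight line acquire a larger kinetic term and hence a larger action, which is why, in the sum over paths, the stationary-action (classical) path and its near neighbors dominate.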
Another way that we can describe the motion of quantum mechanical objects is via the hydrodynamic formulation
of quantum mechanics, which was ﬁrst introduced in the late 1920s and later explored and extensively developed
by Bohm (2) starting in the early 1950s. Here we will deﬁne an ensemble of quantum trajectories, each with a
precisely deﬁned coordinate and velocity that uniquely characterizes the dynamical evolution of a quantum system.
∗ email: [email protected]
† email:[email protected]; web page: http://k2.chem.uh.edu/bittner
However, we still are not free to talk about independent trajectories as in classical mechanics; quantum trajectories
are coupled together and evolve as a correlated ensemble. This correlation is a unique feature of quantum dynamics
and is expressed as a nonlocal potential in the hydrodynamic formulation.
In this Article, we will explore the use of the hydrodynamic viewpoint of quantum mechanics to design new
computational tools for predicting the evolution of quantum systems. Within the past few years, starting in 1999, it has
become possible to directly solve the quantum hydrodynamic equations to predict the spacetime dynamics of elements
of the probability ﬂuid.(3; 4; 5) Elements of this quantum ﬂuid are linked through the Bohm quantum potential,
denoted Q, which is computed on-the-fly as the equations of motion are integrated to generate the hydrodynamic
ﬁelds. The quantum potential introduces all quantum features into the dynamics, including interference eﬀects, barrier
tunneling, zero point energies, etc. The only approximation made in solving the hydrodynamic equations involves the
use of a relatively small number of ﬂuid elements. The equations of motion for these ﬂuid elements are expressed in
the "moving with the fluid" (Lagrangian) picture of fluid flow.
One implementation of these ideas, referred to as the quantum trajectory method (3), QTM, has been used to
predict and analyze the dynamics of wavepackets in a number of scattering problems. An approach similar to the
QTM is the quantum fluid dynamic method (QFD)(5). Within the past two years, there has been a surge of interest in
the development and application of trajectory methods for solving both the time-dependent Schrödinger equation and
density matrix equations of motion. Novel quantum trajectory methods for evolution of the reduced density matrix
in both nondissipative and dissipative systems have also been developed(6). In order to circumvent computational
problems associated with the propagation of Bohmian trajectories (especially in regions where nodes develop) adaptive
grid strategies have been recently explored.(4; 7; 8)
In Sec. II of this article, the Bohmian formulation of the hydrodynamic equations of motion will be reviewed,
and computational implementations will be described. Recent studies that employ cluster modeling to calculate the
quantum potential will also be presented in this section (9). Section III continues with a discussion of adaptive
dynamic grid techniques, and the transforms of the hydrodynamical equations appropriate for moving grids are
presented. Finally, some suggestions for further study are presented in Sec. IV.
II. IMPLEMENTATION OF THE QUANTUM HYDRODYNAMIC EQUATIONS
A. The equations of motion
In this Section, the equations needed to implement the quantum hydrodynamic formulation will be reviewed.
This formulation is initiated by substituting the amplitude-phase decomposition of the time-dependent wavefunction,
$\psi(y,t) = R(y,t)\exp[iS(y,t)/\hbar]$, into the time-dependent Schrödinger equation,

$$i\hbar \frac{\partial \psi}{\partial t} = \left[ -\frac{\hbar^2}{2m} \nabla^2 + V \right] \psi \qquad (2)$$
In terms of R and S, the probability density and the local flow velocity are given by $\rho = |\psi|^2 = R^2$ and $v = j/\rho$,
where $j$ is the probability current. The Lagrangian form of the hydrodynamic equations of motion resulting from this
analysis are given by:

$$\frac{d\rho}{dt} = -\rho \, \nabla \cdot v \qquad (3)$$

$$\frac{dv}{dt} = -\frac{1}{m} \nabla (V + Q) \qquad (4)$$
in which the derivative on the left side is appropriate for calculating the rate of change in a function along a fluid
trajectory. Equation (3) is recognized as the continuity equation and Equation (4) is a Newtonian-type equation
in which the flow acceleration is produced by the sum of the classical force, $f_c = -\nabla V$, and the quantum force,
$f_q = -\nabla Q$, where $V$ is the potential energy function and $Q$ is the quantum potential (2). The quantum potential
measures the curvature-induced internal stress and is given by

$$Q(y,t) = -\frac{\hbar^2}{2m} \frac{1}{R(y,t)} \nabla^2 R(y,t) = -\frac{\hbar^2}{2m} \rho^{-1/2} \nabla^2 \rho^{1/2} \qquad (5)$$

Computation of the quantum potential is frequently rendered more accurate if derivatives are evaluated using the
amplitude $C = \log(R)$ (C is referred to as the C-amplitude). In terms of derivatives of this amplitude, the quantum
potential is $Q = -\hbar^2 \left( \nabla^2 C + (\nabla C)^2 \right)/2m$.
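The C-amplitude form of Q lends itself to a simple numerical check. The sketch below is our illustration (the values of `hbar`, `sigma`, and the grid are arbitrary choices, not from the article); it evaluates Q by finite differences for a gaussian amplitude, where the result is known in closed form.

```python
import numpy as np

hbar, m, sigma = 1.0, 1.0, 1.0
x = np.linspace(-3.0, 3.0, 401)
dx = x[1] - x[0]

# Gaussian amplitude R(x) and its C-amplitude, C = log(R)
R = np.exp(-x**2 / (4.0 * sigma**2))
C = np.log(R)

# Finite-difference derivatives of C, then Q = -hbar^2 (C'' + (C')^2) / (2m)
dC = np.gradient(C, dx)
d2C = np.gradient(dC, dx)
Q = -hbar**2 * (d2C + dC**2) / (2.0 * m)

# For this gaussian, Q = hbar^2/(4 m sigma^2) - hbar^2 x^2/(8 m sigma^4) exactly
Q_exact = hbar**2 / (4*m*sigma**2) - hbar**2 * x**2 / (8*m*sigma**4)
```

Because C is quadratic in x here, the central differences are exact in the interior; working with C rather than R also avoids dividing by the exponentially small amplitude in the tails, which is the practical advantage mentioned above.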
FIG. 1 Density map showing the real part of the wavefunction on the lower of two potential energy surfaces at t = 1000 atomic
units (24.2 fs). The coordinates (X, Y1) correspond to the reaction coordinate leading from reactants to products along with the
first of the 10 vibrational coordinates. On each potential surface, 110 quantum trajectories were propagated, and some of these
are shown by the large white dots. The information carried along these scattered trajectories was used to build the wavefunction,
and this function was later interpolated onto a uniform mesh for plotting purposes.
Closure of the set of dynamical equations is obtained by introducing the equation for the action function,

$$\frac{dS}{dt} = \frac{1}{2m} (\nabla S)^2 - (V + Q) = L_q \qquad (6)$$

This equation, referred to as the quantum Hamilton-Jacobi equation, relates the change in the action to the quantum
Lagrangian.
The quantum trajectories obey two important noncrossing rules: (1) they cannot cross nodal surfaces (along which
$\psi = 0$); (2) they cannot cross each other. (In practice, because of numerical inaccuracies, these conditions may be violated.)
The quantum trajectories are thus very diﬀerent from the paths used to assemble the Feynman propagator.
No approximations were made in deriving these equations from the time-dependent Schrödinger equation. However,
in order to generate solutions to these equations, an approximation will be made. The initial wavepacket will be
subdivided into N ﬂuid elements and the equations of motion will be used to ﬁnd the hydrodynamic ﬁelds along the
trajectories followed by these elements. As time proceeds, the ﬂuid elements evolve into an unstructured mesh, and
this presents problems for accurate derivative evaluation. For this purpose, least squares methods may be used.(3; 4)
From the values of C and S carried by each evolving fluid element, the complex-valued wavefunction may be
synthesized. An example is shown in Fig. 1: this wavepacket has just propagated downhill on an 11 degree-of-freedom
potential surface. This complicated, oscillating wavefunction (only the real part of the wavefunction is shown) was
synthesized from the information carried along 110 quantum trajectories. It is remarkable that this complicated
structure can be built from the information that is propagated along so few quantum trajectories.
B. Multidimensional dynamics with cluster modeling
For some systems, including those of low dimensionality, the methods introduced in the previous section allow us
to solve the quantum hydrodynamic equations of motion to high accuracy. For higher dimensional systems, such
as atomic clusters or larger molecules, an extremely accurate description of the quantum motion is very diﬃcult to
obtain and perhaps more diﬃcult to understand. In this section, we will discuss how the hydrodynamic/Bohmian
method can be used to develop an approximate statistical approach for high dimensional systems.(9)
We can specify the conﬁguration of a collection of particles by a set of coordinates, R(t) = {r1 (t), r2 (t) · · · , rN (t)}.
The probability of ﬁnding one of these conﬁgurations is given by the product of the volume element dV weighted by
the probability density ρ(R(t)). In essence, each conﬁguration is an element in an ensemble of possible conﬁgurations
and the act of measurement pulls one of these configurations out of the hat. In quantum mechanics, the probability
density is related to the square amplitude of the quantum wave function, $\psi(R,t)$. Finally, we apply the Bohmian postulate
that the time evolution of a configuration, $r_n$, is given by (2)

$$\dot{r}_n = \frac{1}{m} \nabla_n S(R,t) \Big|_{R = r_n} \qquad (7)$$
where S is the action function for the quantum wave function as described in the previous section. In this section, we
discuss how one can use a Bayesian statistical approach to approximate Q by assuming that the density can be cast
as a superposition of gaussian product states.
Any statistical distribution can be written as a superposition of a set of M clusters (10),

$$\rho(R) = \sum_{m=1}^{M} p(R, c_m) \qquad (8)$$

where $p(R, c_m)$ is the probability of finding the system in configuration R and being in the mth cluster. We then use
Bayes' theorem (10) to break this joint probability into a conditional probability $p(R|c_m)$, which tells us the probability
of finding configuration R knowing that the system is in the mth cluster, and a marginal probability,
$p(c_m)$, which gives the likelihood of being in the mth cluster. We then pick a functional form for the conditional
probabilities by writing

$$p(R|c_m) = \frac{|c_m|^{-1/2}}{(2\pi)^{3N/2}} \exp\left[ -(R - \mu_m)^T c_m^{-1} (R - \mu_m) \right] \qquad (9)$$

where $c_m$ is the N × N dimensional covariance matrix and $\mu_m$ is the center of the gaussian in N spatial dimensions.
To determine the coefficients of the gaussians, we maximize the log-likelihood, L, of a given trial set of gaussians
actually corresponding to the data by taking the variation δL = 0. Furthermore, since the density is now represented
as a superposition of gaussians, it is a straightforward task to compute the quantum potential, Q, which is required
to integrate the hydrodynamic equations (see Eq. 4). Note that the methodology itself is extremely general and can
be used to estimate the probability distribution function (PDF) of any distribution of sample points.
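A minimal version of this log-likelihood maximization is the familiar expectation-maximization (EM) iteration for a gaussian mixture. The one-dimensional sketch below is our illustration (the article's method operates on the ensemble of fluid elements in N dimensions); it alternates between the conditional probabilities $p(c_m|x)$ and re-estimation of the weights, centers, and variances. Once the gaussians are fitted, Q follows analytically from the smooth mixture density.

```python
import numpy as np

def em_step(x, w, mu, var):
    """One EM update for a 1D gaussian mixture.
    E-step: responsibilities gamma[n, m] ~ p(x_n | c_m) p(c_m).
    M-step: weighted maximum-likelihood re-estimates of w, mu, var."""
    g = w * np.exp(-(x[:, None] - mu)**2 / (2*var)) / np.sqrt(2*np.pi*var)
    g /= g.sum(axis=1, keepdims=True)          # normalize over clusters
    Nm = g.sum(axis=0)                         # effective points per cluster
    w_new = Nm / len(x)
    mu_new = (g * x[:, None]).sum(axis=0) / Nm
    var_new = (g * (x[:, None] - mu_new)**2).sum(axis=0) / Nm
    return w_new, mu_new, var_new

# Sample points drawn from two well-separated gaussians:
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(50):
    w, mu, var = em_step(x, w, mu, var)
```

Each EM sweep is guaranteed not to decrease the log-likelihood, which is why the iteration can serve as the variational condition δL = 0 in practice.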
In general, the N dimensional covariance matrices specifying each gaussian cluster reﬂect the correlation between
various degrees of freedom. A fully speciﬁed covariance will require that we solve N 2 simultaneous coupled equations
per gaussian cluster. In practice, it is easier to use more gaussians with less covariance than fewer gaussian clusters
with a high degree of covariance.
Our initial application of this method has been to determine ground states of highdimensional systems, such as
(He)n clusters for up to n = 4. In Fig. 2, we demonstrate this by showing the convergence of an initial distribution
of sampling points to the vibrational ground state for the anharmonic CH3 I bending/stretching modes of the methyl
iodide molecule in its lowest electronic state. This system serves as a useful benchmark for our approach since the
vibrational ground state can also be computed by diagonalization of the Hamiltonian matrix. Using this approach
gives the ground state energy as 591.045 cm−1 above the bottom of the well. The discrete variable representation
(DVR) grid used to converge this and the lowest few vibrational excited states is shown superimposed over the potential
energy surface in Fig. 2(a). In the clustering approach for bound-state problems, the sample points remain localized
in a small volume of configuration space as the algorithm relaxes to the minimal energy configuration. In Fig. 2(d),
we show the energy as the system relaxes. The horizontal solid grey line is the "exact" result. In Fig. 2(b), our initial
sampling is well away from the lowest energy point on the potential energy surface. The superimposed ovals give the
location and widths of the M gaussian clusters used in this calculation. For this calculation we use 4 fully factorized
gaussians with no covariance. As the calculation progresses, the distribution follows the potential energy curve to
its minimum and the distribution takes the form of the lowest energy quantum state. The ﬁnal relaxed groundstate
FIG. 2 Comparison of ground state calculations via exact diagonalization vs. clustering dynamics for bond stretching modes
in methyl iodide. The vibrational potential energy surface is given as light-gray contours in terms of the C-I stretching mode
($R_{C-I}$) and the symmetric umbrella mode of the methyl hydrogens ($R_{CH_3}$). (From M. Shapiro and R. Bershon, J. Chem.
Phys. 73, 3810 (1980)). In (a), the red grid shows the location of the DVR points used to converge the ground state and the
first few vibrational modes. A more detailed calculation (and dynamics) requires that this grid cover the entire region of the
potential energy surface. In (b), the colored (green, blue, and red) contours show the evolution of the density as it relaxes
to the lowest energy state. Superimposed crosses (+) show the centers and full width at half maximum of the gaussians used
to represent the density. In (c), we show the relaxation to the ground state, this time using two separable Gaussians with full
covariance. In (d), we plot the energy of the system vs. time for case A, case B, and the DVR calculation.
energy is 665.205 ± 33.6 cm−1 . At this point, the sample points are still in motion and we are considerably above the
DVR ground state energy. Finally, we consider what happens if one takes fewer Gaussians each with full covariance
in (x, y). Starting from the same distribution as before, we propagate the particles applying a viscous force to bleed
away the kinetic energy. Under this approach, the system relaxes to 580.0 ± 10.1 cm−1 using only two Gaussians.
Surprisingly, most of the amplitude in the final state is concentrated in the Gaussian located above the minimum
in the potential energy surface. Looking at Fig. 2(d), we notice that the energy fluctuates about the final average
with a relatively small deviation. However, there are some spikes, and these correspond to cases in which one of the
clusters, typically the smaller one, suddenly jumped to a different position or the correlation matrix rotated slightly.
Fortunately, the algorithm rapidly corrects for these excursions.
In many ways, the hydrodynamic formalism implemented with the clustering approach is akin to the guide-function
Monte Carlo (GFMC) method widely used to calculate the ground state of multiparticle systems. For example, in GFMC
one chooses an appropriate guide function from a variational Monte Carlo calculation and uses the guide function to
improve the Monte Carlo sampling. Much like the Bohmian quantum potential, the local kinetic energy is determined
by the curvature in the distribution function, and iterative refinements to the sampling are made using the Monte
Carlo algorithm. In our case, both the guide function and the sampling points are determined at each step in the
calculation and the sampling points evolve according to hydrodynamic equations of motion. We expect that this
approach will be most useful in highdimensional problems where conventional variational basis set approaches are
numerically impossible.
III. BEYOND BOHMIAN MECHANICS: ADAPTIVE QUANTUM PATHS
A. What kinds of grids and paths are there?
Near nodes or nodal surfaces, places where $\psi = 0$, the propagation of Bohmian trajectories becomes problematic. On one
hand, as mentioned earlier, these trajectories tend to avoid nodal regions and this leads to an undersampling problem.
For trajectories near the node, derivative evaluation of the hydrodynamic ﬁelds becomes increasingly inaccurate, thus
leading to computational breakdown. A closely related problem is that the quantum potential near nodes acquires
large values, and small errors in evaluating Q in turn lead to large errors in the positions and momenta of the nearby
quantum trajectories. In order to circumvent these diﬃculties, we will take a more general look at adaptive dynamic
(moving) grids.(11) The advantage of these grids is the ﬂexibility aﬀorded in directing the grid paths. Furthermore,
these dynamically adaptive grids may be robust enough that accurate results can be obtained over long propagation
times.
When the spatial coordinates associated with hydrodynamic equations of motion are discretized, each grid point
follows a time-dependent path, $x_j(t)$, with an associated grid velocity, $\dot{x}_j$. The primary advantage of time-dependent
grids is that many fewer points may be required because the grid can be chosen to adapt to the evolving hydrodynamic
ﬁelds. In favorable cases, interesting features may be captured with the moving grid that would require signiﬁcantly
more ﬁxed grid points in order to achieve comparable resolution. There are two subcategories of moving grids.
• Lagrangian grids. For this frequently used type of dynamic grid, the grid point velocity is the same as the local
flow velocity of the fluid, $\dot{x}_j = v(x_j)$. As a consequence, the grid points march in step with the fluid. Quantum trajectories
obtained by integrating the hydrodynamic equations of motion fall within this category (Sec. II.A). These
trajectories follow along the fluid flow, but an observer has no control over the paths taken by the grid points.
• Arbitrary Lagrangian-Eulerian (ALE) grids. For these intermediate grids, the grid point velocity is not the same
as that of the fluid (11). As a result, the grid either advances on the fluid, or the grid may lag the fluid flow.
In this case, it is useful to introduce the slip velocity, $w = \dot{x} - v$, whose sign may be either positive or negative. The grid
developer has considerable flexibility in creating these designer grids to satisfy certain objectives. This design
issue will be taken up in more detail in the next section.
B. Moving path transforms of the hydrodynamic equations
For an observer moving along a path x(t), the rate of change of a function f is denoted df/dt (this is the total time
derivative), whereas the time derivative at a fixed point (Eulerian frame) is ∂f/∂t. These two time derivatives are
related through the equation $df/dt = \partial f/\partial t + \dot{x} \nabla f$, where the last term is referred to as the convective contribution.
From this equation, the derivative at a space-fixed point is given by $\partial f/\partial t = df/dt - \dot{x} \nabla f$. This equation allows us
to transform equations of motion expressed in the Eulerian frame into the more general ALE frames. When the three
hydrodynamic equations given earlier for ρ, v, and S are transformed (8), we obtain the new equations (in the first
equation, the C-amplitude is $C = (\ln \rho)/2$):
$$\frac{dC}{dt} = w \frac{\partial C}{\partial x} - \frac{1}{2} \frac{\partial v}{\partial x} \qquad (10)$$

$$\frac{dS}{dt} = w(mv) + L_q(t) \qquad (11)$$

$$m \frac{dv}{dt} = mw \frac{\partial v}{\partial x} - \frac{\partial}{\partial x}(V + Q) \qquad (12)$$
where, for simplicity, a one dimensional problem has been assumed. It is important to note that the slip velocity
appears in the ﬁrst term in each of these equations. When w = 0, the condition appropriate for a Lagrangian frame,
these equations revert back to the usual ones that were presented earlier. However, in their more general form, these
equations can be integrated along paths designed to capture features that develop during the course of the dynamics.
How this can be done forms the subject of the next section.
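The convective relation underlying these transforms, $df/dt = \partial f/\partial t + \dot{x}\nabla f$, is easy to verify numerically. In the sketch below (our illustration, with arbitrarily chosen numbers), an observer moving at speed 0.5 watches a wave $f(x,t) = \sin(x - t)$ travel past at unit speed:

```python
import numpy as np

# f(x, t) = sin(x - t): a wave moving at unit speed.
f  = lambda x, t: np.sin(x - t)
ft = lambda x, t: -np.cos(x - t)    # Eulerian derivative  df/dt at fixed x
fx = lambda x, t:  np.cos(x - t)    # spatial derivative   df/dx

xdot = 0.5                          # observer (grid-point) velocity
t = 1.2
xt = xdot * t                       # observer position x(t) = 0.5 t

# Total derivative along the moving path, by central finite differences:
h = 1e-6
total = (f(xdot*(t+h), t+h) - f(xdot*(t-h), t-h)) / (2*h)

# Convective relation: df/dt = df/dt|_x + xdot * df/dx
convective = ft(xt, t) + xdot * fx(xt, t)
```

Setting `xdot` equal to the wave speed (the Lagrangian choice) makes the total derivative vanish, since the observer then rides along with the field; any slip between the two produces the convective term.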
C. Grid adaptation using the monitor function
The local hydrodynamic fields surrounding each moving grid point can be monitored with a user-designed function
M(x) (11). There is considerable flexibility in crafting the monitor function and in principle different features can be
captured at different times. Usually, the monitor captures the gradient and/or the curvature of the hydrodynamic
fields, for example, $M(x) = 1 + \alpha (\nabla u)^2 + \beta \nabla^2 u$. A constant (unity in this case) is added on the right side to prevent
the monitor from becoming very small in regions where the fields are relatively flat. There are many possibilities and
the grid designer must decide what features of the ﬁelds require monitoring. Once the monitor is introduced, the
next issue is how to adjust the grid points so that they can adapt to these ﬁelds. In regions where the monitor takes
on large values, we would like for the grid points to come closer together, but in such a way that crossing of paths is
avoided. A commonly used way to do this is through use of the equidistribution principle.(11)
The equidistribution principle (EP) is easily stated for a one-dimensional problem. First, let $\{M_j\}$ denote the values
of the monitor at the grid points $\{x_j\}$, for j = 1, ..., N. The end points $x_1$ and $x_N$ are regarded as fixed during the
adaptation of the internal points. Finally, let $M_{j+1/2}$ denote the average value of the monitor between points j and j+1.
The EP then states $M_{j-1/2}(x_j - x_{j-1}) = M_{j+1/2}(x_{j+1} - x_j) = \text{constant}$. A large value of the monitor then forces
the spacing between adjacent points to be small. This equation can also be viewed in terms of the equilibration of a
spring system, in which the local monitor functions act as the spring constants. These smart springs sense features in
the hydrodynamic fields. The spring analogy has been used in the solution of classical fluid problems, and it has also
been used recently in the solution of the quantum hydrodynamic equations (8). In order to prevent relatively sudden
changes (jerkiness) in the grid, it is useful to consider a refined version of the EP that was first introduced by Dorfi
and Drury (12).
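A minimal realization of the spring picture (our sketch, not the algorithm of ref. (8) or the Dorfi-Drury scheme; all names and parameters below are illustrative) relaxes the interior points until the monitor-weighted spacings roughly equilibrate:

```python
import numpy as np

def equidistribute(x, M, iters=500, relax=0.2):
    """Relax interior grid points toward the equidistribution condition
    M_{j-1/2} (x_j - x_{j-1}) = M_{j+1/2} (x_{j+1} - x_j),
    treating cell-averaged monitor values as spring constants.
    The end points are held fixed."""
    x = x.copy()
    for _ in range(iters):
        Mh = 0.5 * (M(x[:-1]) + M(x[1:]))       # monitor averaged per cell
        # Net spring force on each interior point:
        force = Mh[1:] * (x[2:] - x[1:-1]) - Mh[:-1] * (x[1:-1] - x[:-2])
        # Under-relaxed update, normalized by the local stiffness:
        x[1:-1] += relax * force / (Mh[1:] + Mh[:-1])
    return x

# Monitor with a sharp bump at x = 0.5: grid points should cluster there.
M = lambda x: 1.0 + 20.0 * np.exp(-((x - 0.5) / 0.05)**2)
x = equidistribute(np.linspace(0.0, 1.0, 41), M)
```

The under-relaxed step is bounded by a fraction of the neighboring cell widths, so the points cannot cross; this is the discrete counterpart of the noncrossing behavior the designer wants the adapted grid to preserve.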
D. Application of dynamic grid algorithms
A dynamically adaptive grid was used to study the scattering of an initial gaussian wavepacket from a repulsive
Eckart potential (whose shape is similar to that of a gaussian) (7). When Bohmian trajectories are propagated, the
calculation breaks down shortly after ripples start to develop in the reflected wavepacket. However, much longer
propagation times are obtained through use of adaptation techniques. In this application, the grid points were adapted
using a monitor function designed to capture the local curvature of the wavefunction. The initial wavepacket has a
translational energy $E_{trans} = 4000\ \mathrm{cm}^{-1}$, the barrier height is $V_0 = 6000\ \mathrm{cm}^{-1}$, and the barrier is centered at $x_b = 6$ a.u.
This wavepacket was propagated using N = 249 grid points between $x_0 = -5$ and $x_N = 25$. The two edge points
were held ﬁxed during the entire propagation sequence (Eulerian frame), but more recent calculations have employed
Lagrangian edge points. The internal points were adapted according to the Dorﬁ and Drury scheme (12) that was
mentioned earlier.
The paths followed by the grid points (only 1/4th of which are shown) are plotted in Fig.3. At t = 0, it is seen
that the grid points are dominantly clustered in the region of large wavefunction density and curvature between x = 1
and 3. At later times, the wavepacket spreads as it moves toward the barrier and this is reﬂected in the grid paths by
the width of the clustered region spreading as the center heads toward the barrier. At around t = 40 fs, wavepacket
bifurcation is manifest by the grid paths diverging near the barrier region to follow the reflected and transmitted
parts of the wavepacket. After about 40 fs, these two wavepackets are moving toward the lower right and upper right
of the ﬁgure, respectively. By design, the grid points tend to congregate in these two regions of higher density.
IV. FUTURE STUDIES
The use of quantum Lagrangian trajectories and more general dynamic adaptive grids has the potential to solve
quantum dynamical problems in multiple dimensions. The grid points follow the evolving probability fluid so that
numerous points or basis functions are not wasted in regions of little activity. These methods have already yielded
insights into a number of problems in chemical physics, including barrier tunneling, electronic nonadiabatic dynamics,
decoherence, and relaxation in dissipative environments.
FIG. 3 Grid paths for wavepacket scattering from an Eckart barrier (only 1/4th of the N = 249 paths are shown). The barrier
is centered at $x_b = 6$ a.u. Bifurcation of the wavepacket near t = 40 fs is evident in the clustering of the paths.
In order to make these methods robust enough to tackle multidimensional dynamics on anharmonic potential
surfaces, additional eﬀort needs to be directed toward the following interconnected issues. (1) We have already
mentioned that the quantum potential becomes large around wavefunction nodes and quasinodes and that ﬂuid
elements following Bohmian trajectories inﬂate away from these regions. Accurate computation of the quantum
potential is very diﬃcult and this in turn can lead to instabilities in the equations of motion. (2) Maintaining long
time stability of the solutions is diﬃcult, in part because of the large quantum potentials that arise locally. (3)
Derivative evaluation on the unstructured mesh formed by the evolving ﬂuid elements is diﬃcult, although moving
least squares or transformation to a structured grid for subsequent derivative evaluation are useful strategies. (4)
Near caustics that form when classical trajectories are integrated, the quantum forces acting on the ﬂuid elements
are both large and rapidly changing. The hydrodynamic equations are stiﬀ, and special integration techniques (so far
untested) are required for stable propagation. All of the unresolved issues mentioned in this paragraph need further
investigation.
Acknowledgments
RW and EB both thank the National Science Foundation and the Robert Welch Foundation for ﬁnancial support.
In addition, many discussions with Jeremy Maddox, Kyungsun Na, Keith Hughes, and Corey Trahan are gratefully
acknowledged.
References
[1] R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, New York, 1965).
[2] D. Bohm, A suggested interpretation of quantum theory in terms of hidden variables. I, Phys. Rev. 85, 166-179 (1952).
[3] C. L. Lopreore and R. E. Wyatt, Quantum wavepacket dynamics with trajectories, Phys. Rev. Lett. 82, 5190-5193 (1999).
[4] R. E. Wyatt and E. R. Bittner, Quantum wavepacket dynamics with trajectories: Implementation with adaptive Lagrangian
grids, J. Chem. Phys. 113, 8898-8907 (2000).
[5] F. Sales Mayor, A. Askar, and H. A. Rabitz, Quantum fluid dynamics in the Lagrangian representation and application to
photodissociation problems, J. Chem. Phys. 111, 2423-2435 (1999).
[6] J. B. Maddox and E. R. Bittner, Quantum dissipation in unbounded systems, Phys. Rev. E 65, 026143 (2002).
[7] K. H. Hughes and R. E. Wyatt, Wavepacket dynamics on dynamically adapting grids: Application of the equidistribution
principle, Chem. Phys. Lett. 366, 336-342 (2002).
[8] C. Trahan and R. E. Wyatt, An arbitrary Lagrangian-Eulerian approach to solving the quantum hydrodynamic equations
of motion: Implementation with smart springs, J. Chem. Phys., in press (2003).
[9] J. B. Maddox and E. R. Bittner, Estimating Bohm's quantum potential using Bayesian computation, J. Chem. Phys., to be
submitted (2003).
[10] N. Gershenfeld, The Nature of Mathematical Modeling (Cambridge Univ. Press, New York, 1999).
[11] P. A. Zegeling, in Handbook of Grid Generation, J. F. Thompson, B. K. Soni, and N. P. Weatherill (eds.) (CRC, New
York, 1998), pp. 37-1 to 37-22.
[12] E. A. Dorfi and L. O. C. Drury, Simple adaptive grids for 1D initial value problems, J. Comp. Phys. 69, 175-195 (1987).