Video
10th Anniversary Lectures
"Frozen boundaries and log-fronts"
Andrei Okounkov
Princeton University
Date: Monday, Oct. 16, 2006, at 4:00 p.m. in WMAX 110, UBC
Abstract
In this talk, based on joint work with Richard Kenyon and Grisha Mikhalkin, Andrei Okounkov discussed a simple binary operation on plane curves which 1) generalizes the classical duality for plane curves, and 2) arises naturally in a probabilistic context, namely as a facet boundary in certain random surface models.
(57 minutes)
"New geometric and functional analytic ideas arising from problems in symplectic geometry"
Helmut Hofer
New York University
Date: Monday, Oct. 23, 2006, at 4:00 p.m. in WMAX 110, UBC
Abstract
The study of moduli spaces of holomorphic curves in symplectic geometry is the key ingredient for the construction of symplectic invariants. These moduli spaces are suitable compactifications of solution spaces of a first order nonlinear Cauchy-Riemann type operator. The solution spaces are usually not compact due to bubbling-off phenomena and other analytical difficulties.
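For readers unfamiliar with the operator in question, one standard form of the first order nonlinear Cauchy-Riemann type equation referred to above is the following (stated here purely for reference, with u a map from a Riemann surface with complex structure j into an almost complex manifold (M, J); the notation is the conventional one, not necessarily that of the lecture):
```latex
\bar\partial_J(u) \;=\; \tfrac{1}{2}\bigl(du + J(u)\circ du\circ j\bigr) \;=\; 0
```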
(1 hour 10 minutes)
"Sums of Squares and Pfister forms"
Raman Parimala
Emory University/Tata Institute
Date: Monday, Oct. 30, 2006, at 4:00 p.m. in WMAX 110, UBC
Abstract
While a formula expressing a product of sums of two squares as a sum of two squares goes back to ancient times, the question of which natural numbers n have the property that a product of sums of n squares is again a sum of n squares was resolved only in the 1960s by Pfister, through his introduction of the theory of multiplicative quadratic forms. Pfister's theory led to a solution of several outstanding questions related to quadratic forms; it also led to a formulation of new questions concerning the classification of quadratic forms. In this lecture we shall discuss some recent advances in this direction.
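For reference, the classical two-square identity alluded to above is the following; Pfister's theory shows that analogous composition formulas, with rational functions as entries, exist whenever n is a power of 2.
```latex
(a^2 + b^2)(c^2 + d^2) \;=\; (ac - bd)^2 + (ad + bc)^2
```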
(1 hour 3 minutes)
"A History of the Trace Formula"
James Arthur
University of Toronto
Date: Monday, Nov. 6, 2006, at 4:30 p.m. in WMAX 110, UBC
Abstract
The trace formula is a far reaching generalization of the Poisson summation formula. It relates spectral data of deep arithmetic significance to explicit but complicated geometric data. With its applications to the Langlands programme, some already realized and others still far away, the trace formula represents a mathematical equation of great power.
In trying to give some sense of its history, we will begin with Selberg's original discovery of a formula that gave remarkable relations between the spectral and geometric properties of Riemann surfaces. We shall then describe Langlands' ideas for using this formula to establish reciprocity laws between different kinds of arithmetic quantities. Finally, we will say something about the present state of the trace formula, as it applies to spaces and groups of arbitrary dimension.
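For reference, the classical Poisson summation formula that the trace formula generalizes, stated for a sufficiently well-behaved function f (e.g. of Schwartz class) with Fourier transform f-hat:
```latex
\sum_{n \in \mathbb{Z}} f(n) \;=\; \sum_{k \in \mathbb{Z}} \widehat{f}(k),
\qquad
\widehat{f}(\xi) \;=\; \int_{-\infty}^{\infty} f(x)\, e^{-2\pi i x \xi}\, dx
```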
(1 hour 6 minutes)
"Plagued by numbers: the mathematics of disease"
Mark Lewis
University of Alberta
Date: Monday, Dec. 4, 2006, at 4:30 p.m. in WMAX 110, UBC
Abstract
The dynamics of disease have long fascinated mathematical researchers. From influenza to the bubonic plague, mathematical and computational models are used to evaluate factors governing disease outbreaks. Facts about a disease are put into models; what comes out are predictions regarding infection levels over time. The ultimate goal is to use models in devising strategies for disease control and management. Much recent work has focused on predicting the dynamics of modern or emerging diseases, such as HIV/AIDS, SARS and West Nile virus. In the first part of my talk I will review some of this work and its implications, particularly with regard to models for West Nile virus and its spatial spread. I will then discuss recent interdisciplinary work on the spatial dynamics of naturally occurring parasites on wild salmon, namely sea lice, the role played by salmon farms in changing those dynamics for juvenile salmon, and the implications for wild salmon populations.
(1 hour 3 minutes)
"The Reality of Computer Models: Statistics and Virtual Science"
Jerry Sacks
National Institute of Statistical Sciences
Date: Monday, February 19, 2007, at 4:00 p.m. in WMAX 110, UBC
Abstract
Computer models are imperfect representations of real phenomena. An austere view is that validating a model cannot be done; the "primary value of models is heuristic: models are representations, useful for guiding further study but not susceptible to proof." This view may have substantial basis in purely scientific roles, as distinct from a model's use in policy and engineering contexts. But the real validation issue, we contend, is not whether a model is absolutely correct or only a useful guide. Rather, it is to assess the degree to which it is an effective surrogate for reality: does the model provide predictions accurate enough for its intended use?
Incisive argument on the validity of models, seen as assessment of their utility, has previously been hampered by the lack of a structure in which quantitative evaluation of a model's performance can be addressed. This lack has given wide license to challenge computer model predictions (just what is the uncertainty in temperature predictions connected with increases in CO2?). A structure for validation should:
* Permit clear-cut statements on what aspects of performance are to be assessed and how;
* Account for uncertainties stemming from a multiplicity of sources including field measurements and, especially, model inadequacies; and
* Recognize the confounding of calibration/tuning with model inadequacy – tuning can mask flaws in the model; flaws in the model may lead to incorrect values for calibration parameters.
We will describe such a structure (and applications). It is built on methods and concepts for the statistical design and analysis of virtual experiments, drawing on elements of Gaussian stochastic processes and Bayesian analysis.
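As a rough illustration of the kind of machinery the last paragraph refers to (a generic sketch, not the specific methodology of the talk), the following Python fragment fits a Gaussian-process emulator to a handful of runs of a toy computer model and reports predictive means with uncertainties; the toy simulator, kernel, and hyperparameter values are all illustrative assumptions.
```python
import numpy as np

# Toy "computer model": a cheap stand-in for an expensive simulator (assumption).
def simulator(x):
    return np.sin(3.0 * x) + 0.5 * x

# A few "virtual experiment" runs at chosen design points.
X_train = np.linspace(0.0, 2.0, 6)
y_train = simulator(X_train)

# Squared-exponential covariance kernel with illustrative hyperparameters.
def kernel(a, b, length=0.5, signal_var=1.0):
    d = a[:, None] - b[None, :]
    return signal_var * np.exp(-0.5 * (d / length) ** 2)

jitter = 1e-8  # small nugget for numerical stability
K = kernel(X_train, X_train) + jitter * np.eye(len(X_train))
L = np.linalg.cholesky(K)

# Gaussian-process posterior mean and variance at new inputs.
X_test = np.linspace(0.0, 2.0, 9)
K_s = kernel(X_train, X_test)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = K_s.T @ alpha
v = np.linalg.solve(L, K_s)
var = np.diag(kernel(X_test, X_test)) - np.sum(v**2, axis=0)

for x, m, s in zip(X_test, mean, np.sqrt(np.maximum(var, 0.0))):
    print(f"x={x:4.2f}  emulator mean={m:7.3f}  +/- {2.0 * s:.3f}")
```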
(1 hour 5 minutes)
"Computational Studies of the Motion of a Nematic LCP in a Simple Shear Device"
Gary Leal
University of California, Santa Barbara
Date: Monday, February 26, 2007, at 4:00 p.m. in WMAX 110, UBC
(1 hour 5 minutes)
CRM-Fields-PIMS Prize 2006 Lecture - Dr. Nicole Tomczak-Jaegermann (UA)
"High dimensional convex bodies: phenomenoa, intuition and results"
Dr. Nicole Tomczak-Jaegermann (UA)
On Dec. 1, 2006, Dr. Nicole Tomczak-Jaegermann (U. Alberta), the winner of the CRM-Fields-PIMS Prize for 2006, presented a Distinguished Lecture at PIMS-UBC on High dimensional convex bodies: phenomena, intuition and results.
Abstract: Phenomena in large dimensions appear in a number of fields of mathematics and related fields of science, dealing with functions of an infinitely growing number of variables and with objects that are determined by an infinitely growing number of parameters. In this talk we trace these phenomena in the linear-metric, geometric and some combinatorial structure of high-dimensional convex bodies. The presentation will concentrate on very recent results in asymptotic geometric analysis.
BIRS Workshops
BIRS Workshops 2003
BIRS Workshops 2004
BIRS Workshops 2005
Ceremonies and Meetings
PIMS Awards Ceremony 2001
Vancouver, BC, December 1, 2001
Opening Ceremonies and Banquet of the 2001 Canada-China Mathematics Congress
University of British Columbia, Vancouver, Canada, August 20, 2001
Opening Ceremonies of the 1999 Canada-China Mathematics Congress
Tsinghua University, Beijing, China, August 23 - 28, 1999
Studying the tracks of Elephant Seals
David Brillinger, University of California at Berkeley
PIMS Opening Meeting, University of Victoria, October 4, 1996
PIMS and Mathematics Communication
Katherine Heinrich, Simon Fraser University
PIMS Opening Meeting, University of Victoria, October 4, 1996
Combinatorial Optimization as a Tool for Molecular Biology
Richard Karp, University of Washington
PIMS Opening Meeting, University of Victoria, October 4, 1996
The Chaotic Complexity of Economics and the Social Sciences: part 1, part 2
Donald Saari, Northwestern University
PIMS Opening Meeting, University of Victoria, October 4, 1996
Education Events
PIMS Education Day 2005
Opening Remarks - Chris Bose (PIMS Site Director, UVic) and Ivar Ekeland (PIMS Director)
Keynote Speaker - George Bluman (Mathematics, UBC)
VIGRE at UW - Hart Smith (Mathematics, University of Washington)
Closing Remarks - Alejandro Adem (PIMS Deputy Director)
Date: June 1, 2005
Location: University of Victoria
On June 1, approximately 40 academics and educators, along with university and government administrators, met at the University of Victoria for the first annual PIMS Education Day.
Numeracy and Beyond: Part 1
Passing the torch: What have mathematicians to contribute to the next generation?
Tony Gardiner
University of Birmingham
Numeracy and Democracy
Bernard Madison
University of Arkansas
Number sense in the calculator age.
Günter Törner
Mathematics and Didactics, Duisburg, Germany
Lessons from the Singapore Curriculum
Yoram Sagher
University of Illinois
Date: July 9 - 10, 2003
Location: UBC
This conference, organized and sponsored by PIMS, will prepare for the five-day workshop at BIRS in 2004. Since the latter will not be a gathering of specialists exchanging and creating new theorems, but a meeting of 24 mathematicians and mathematics educators with the purpose of identifying the principles of a realistic but solid programme of school mathematics, it will require careful preparation. The present meeting is to accomplish this and to seek guidance from interested members of the public.
Changing the Culture 2003
The Language of Mathematics
Bernice Kastner
Towson University, Maryland and the Rochester Institute of Technology
Hollywood Perceptions of Mathematics: Cultural Truth or Mathematical Fiction?
Rina Zazkis and Peter Liljedahl
Faculty of Education, SFU
Date: May 2, 2003
Location: SFU
Abstract
The 6th Annual Changing the Culture Conference, organized and sponsored by the Pacific Institute for the Mathematical Sciences, will again bring together mathematicians, mathematics educators and school teachers from all levels to work together towards narrowing the gap between mathematicians and teachers of mathematics, and between those who do and enjoy mathematics and those who don't believe they could.
Changing the Culture 2002
Symbiosis: Intuition and Rigour
Ed Barbeau
University of Toronto
Rigour : Mathematics : : Intuition : Teaching ... And Vice Versa
Brent Davis
University of Alberta
Date: April 26, 2002
Location: SFU
Abstract
The 5th Annual Changing the Culture Conference, organized and sponsored by the Pacific Institute for the Mathematical Sciences, will again bring together mathematicians, mathematics educators and school teachers from all levels to work together towards narrowing the gap between mathematicians and teachers of mathematics, and between those who do and enjoy mathematics and those who don't believe they could.
Changing the Culture 2001
Mathematics and Literature: Cross Fertilization
Brett Stevens
PIMS SFU
Breaking the Cycle of Ignorance
John Mighton
Fields Institute
Date: May 11, 2001
Location: SFU
Abstract
The Fourth Annual Changing the Culture Conference, organized and sponsored by the Pacific Institute for the Mathematical Sciences, will again bring together mathematicians, mathematics educators and school teachers from all levels to work together towards narrowing the gap between mathematicians and teachers of mathematics, and between those who do and enjoy mathematics and those who don't. Its theme this time is Writing, Speaking and Thinking Mathematics. The conference will explore connections between numeracy and literacy, mathematics and language, mathematics and literature, and how we can use language to teach mathematics.
Changing the Culture 2000
The Mathematics in the Art of M.C. Escher
H.S.M. Coxeter
University of Toronto
Date: April 28, 2000
Location: SFU
Abstract
While the public lecture by H.S.M. Coxeter will touch on various mathematical aspects of M.C. Escher's art, its centre-piece is likely to be an examination of Escher's circular woodcuts. The following is Coxeter's introduction (with two minor verbal substitutions for mathematical notation) to a paper which appeared in the Mathematical Intelligencer, No. 4, 1996.
In M.C. Escher's circular woodcuts, replicas of a fish (or cross, or angel, or devil), diminishing in size as they recede from the centre, fit together so as to fill and cover a disc. Circle Limits I, II, and IV are based on Poincaré's circular model of the hyperbolic plane, whose lines appear as arcs of circles orthogonal to the circular boundary (representing the points at infinity). Suitable sets of such arcs decompose the disc into a theoretically infinite number of similar "triangles", representing congruent triangles filling the hyperbolic plane. Escher replaced these triangles by recognizable shapes. Circle Limit III is likewise based on circular arcs, but in this case, instead of being orthogonal to the boundary circle, they meet it at equal angles of almost precisely 80 degrees. (Instead of a straight line of the hyperbolic plane, each arc represents one of the two branches of an "equidistant curve".) Consequently, his construction required an even more impressive display of his intuitive feeling for geometric perfection. The present article analyzes the structure, using the elements of trigonometry and the arithmetic of the biquadratic field containing the square roots of 2 and 3; subjects of which he steadfastly claimed to be entirely ignorant.
Elmacon 2002
Solutions to Problems
Cary Chien
Date: May 25, 2002
Location: UBC
Abstract
Each contestant received a score between 0 and 49, calculated by adding their Sprint Round score (0-25) to double their Target Round score (0-24). The top ten scorers went on to the Countdown Round, where one-on-one competition produced some changes to the rankings that had been based on the written rounds alone.
Mini Courses by Distinguished Chairs
Entropy and Orbit Equivalence in Measure Preserving Dynamics - Part I, Part II, Part III
Dan Rudolph
University of Maryland
Date: October 25, 27 and 28, 2004
Location: University of Victoria
Abstract
These lectures will present a link between two fundamental ideas in the study of measure preserving transformations of a probability space. The study of such transformations, or groups of such transformations, is central to ergodic theory and, as dynamical systems often possess natural invariant measures, it is central to dynamics generally.
The first notion is the Kolmogorov-Sinai entropy, a numerical invariant that, vaguely speaking, measures the exponential growth rate of the number of epsilon-distinct orbits up to a set of small measure. This is a natural measure of the complexity of the system.
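For reference, the standard definition behind this description: for a measure preserving transformation T of a probability space and a finite measurable partition P, with H denoting the Shannon entropy of a partition,
```latex
h_\mu(T, P) \;=\; \lim_{n \to \infty} \frac{1}{n}\,
  H_\mu\!\Bigl(\, \bigvee_{i=0}^{n-1} T^{-i} P \Bigr),
\qquad
h_\mu(T) \;=\; \sup_{P}\; h_\mu(T, P)
```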
The second idea comes from a fundamental result by H. Dye that says any two free and ergodic measure preserving transformations of a standard probability space are orbit equivalent. That is to say, by measurably rearranging the order of points on the orbit of one ergodic transformation one can modify it to be conjugate to any other.
The link we wish to discuss between these two is that one can place on such rearrangements of orbits a notion of the complexity of the rearrangement intimately related to the notion of entropy. The proof of Dye's theorem is constructed by making a sequence of local rearrangements of the orbits. Each one separately produces a system still conjugate to the original one but in the limit one obtains the dynamics of the other system. One can calculate the complexity of the terms in this sequence of local rearrangements and consider its asymptotic behavior. The core result we will discuss is that two ergodic transformations have the same entropy iff one can be modified into the other by the rearranging of orbits with zero complexity asymptotically.
In Lecture I, I will present the basic background in ergodic theory and the definitions needed to state and understand the main result. This lecture should be a self-contained treatment for a broad audience.
In Lecture II, I will offer the background in entropy theory and orbit equivalence theory needed to understand the proof of our main result. In particular we will see that the complexity described above for local rearrangements is a metric. This plays a central role in our work.
In Lecture III, I will present the core of the proof of the main result.
Executable Specifications: The Abstract State Machine Approach
Dr. Yuri Gurevich
Microsoft Research
Date: April 17, 2003
Location: SFU Harbour Centre
Abstract
To specify software means to describe it one way or another. We are interested in specifications that are precise, high-level and executable. Some people think that executable specifications will change the way software is designed, developed, tested and documented. We share this opinion; it is based on the theory of abstract state machines, extensive international experimentation with ASMs, and the applied ASM work of the group on Foundations of Software Engineering at Microsoft Research.
In contrast to the natural sciences, computer science is about an artificial reality: computer systems. Mathematically speaking, what is a computer system? A computer system may have many levels of abstraction. Fix such a level. The ASM theory tells us that there is an abstract state machine that, behaviorally, is identical to our system at the chosen level of abstraction. The specification language AsmL, developed by the FSE group, makes writing ASM models practical. Our tools allow the developers (more and more) to experiment with their design, validate it and enforce it. The tools allow testers to be involved earlier in the software development cycle and empower them to test the intended functionality of software (and not only its robustness).
The Dirichlet to Neumann Map and Inverse Problems - Part I, Part II, Part III
Gunther Uhlmann
University of Washington
Date: November 6-8, 2002
Location: UBC
Abstract
Professor Uhlmann will deliver a series of lectures on "The Dirichlet to Neumann Map and Inverse Problems". The first three lectures will be held from Nov. 6-8. All of the lectures will be videotaped and made available in RealVideo format from the PIMS webpages.
Mathematical Social Sciences, an oxymoron?
Donald G. Saari
University of California, Irvine
Date: September 5, 2002
Location: University of Victoria
Abstract
It will be shown how basic questions from the social sciences lead to new mathematics or new uses of mathematics. Mentioned will be functional equations, but the emphasis will be on hidden symmetries in decision events we experience each day.
Singularity theory and departmental discussions
Donald G. Saari
University of California, Irvine
Date: September 12, 2002
Location: University of Victoria
Abstract
Surprisingly, to understand even a simple modeling of basic decision theory, we need to invoke properties from singularity theory and even some unresolved questions from the N-body problem.
Evolutionary game theory; examples and dynamics
Donald G. Saari
University of California, Irvine
Date: September 19, 2002
Location: University of Victoria
Abstract
Dynamical systems are becoming an important tool to handle the new area of evolutionary game theory. The conclusions for game theory can be surprising; the impact for dynamical systems is that new structures are found.
Chaotic dynamics of economics
Donald G. Saari
University of California, Irvine
Date: September 23, 2002
Location: University of Victoria
Abstract
We have heard of Adam Smith's Invisible Hand story, but a mathematical examination shows that rather than stability, chaotic behavior should be expected. This is related to questions about the convergence of numerical techniques.
Economics and dynamics
Donald G. Saari
University of California, Irvine
Date: September 24, 2002
Location: University of Victoria
Abstract
This concluding lecture describes other uses of dynamics and symmetry in understanding basic concerns coming from the social sciences.
Computing Free Boundary Problems in Moving Fluids
Computing with Surface Tension, and Discovering Singularities
Pattern Formation in Fluid Dynamics: Fluid Dynamics meets Materials Science
Why do Flags Flap?
Bending in the Wind: Elasticity and Drag Reduction
Michael Shelley
Courant Institute, Applied Mathematical Laboratory
Date: November 22, 23, 29, 30, December 7, 2001
Location: Simon Fraser University
Abstract
Oil and water being mixed together, a gas bubble forming intricate patterns as it expands in a small gap, a flag flapping or a leaf bending in a strong wind ... Each of these involves an interface that separates and interacts with surrounding fluid. All are examples of free-boundary problems, which as a class form one of the most important areas of fluid dynamical research, and are among the most difficult. Mathematically they involve the dynamics of fluids in time-dependent domains, where the domain boundaries can become complex and can exert forces on the fluid. I will lecture on several central, and off-beat, examples of dynamical free boundary problems, focusing especially on the sophisticated numerical methods that have been developed to simulate them, and what these simulations teach us.
Torsion of chain complexes
Mehler's Formula and the Renormalization Group
Euler structures and refined torsions
The torsion function of 3-manifolds
Properties of the torsion function
Vladimir Turaev
National Center for Scientific Research (CNRS)
Date: July 5, 12, 19, 26, August 2, 2001
Location: University of Calgary
Abstract
This series of six lectures is intended for a general audience. The aim of the lectures is to survey the theory of torsions of 3-dimensional manifolds. The torsions were introduced by Kurt Reidemeister in 1935 to give a topological classification of lens spaces. Recent interest in torsions is due to their connections with the Seiberg-Witten invariants of 4-manifolds and the Floer-type homology of 3-manifolds. The lectures will cover the above topics.
Self-Interacting Walk and Functional Integration - Part I, Part II, Part III, Part IV
David Brydges
University of Virginia
Date: September 15, 19, 26, October 4, 2000
Location: University of Victoria
Abstract
This series of four lectures is intended for a general audience. The motivating problem is to determine the end-to-end distance of a very long self-avoiding walk on a D-dimensional simple cubic lattice as a function of the number of steps. In particular, when D = 4, the end-to-end distance is conjectured to be a constant times n^(1/2) (log n)^(1/8), where n is the number of steps. This conjecture will be used as a pedagogical device to relate some of the standard machinery used in theoretical physics to ideas that are familiar and attractive to mathematicians. Thus the principal objective of the lectures is to explain what our colleagues in theoretical physics are doing and to discuss some general open problems. The main result will be a theorem validating the D = 4 conjecture, but only for a contrived version of the problem called the "Hierarchical Lattice".
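As a toy illustration of the motivating quantity only (not of the functional-integration or hierarchical-lattice analysis described above), the following Python sketch estimates the mean-square end-to-end distance of short self-avoiding walks on the two-dimensional square lattice by rejection sampling; the walk lengths and sample size are arbitrary assumptions.
```python
import random

def sample_saw_endpoint(n_steps):
    """Attempt to grow an n-step self-avoiding walk on Z^2 by taking uniform
    random steps; return the endpoint, or None if the walk intersects itself."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    pos = (0, 0)
    visited = {pos}
    for _ in range(n_steps):
        dx, dy = random.choice(steps)
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in visited:
            return None          # self-intersection: reject this attempt
        visited.add(pos)
    return pos

def mean_square_end_to_end(n_steps, n_samples=500):
    """Average |end - start|^2 over accepted (hence uniformly sampled) walks."""
    total, accepted = 0.0, 0
    while accepted < n_samples:
        end = sample_saw_endpoint(n_steps)
        if end is not None:
            total += end[0] ** 2 + end[1] ** 2
            accepted += 1
    return total / n_samples

for n in (4, 8, 16):
    print(f"n = {n:2d}   mean square end-to-end distance ~ {mean_square_end_to_end(n):.2f}")
```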
How to Draw a Tree Correctly
Yuri Matiyasevich
Steklov Institute of Mathematics
Date: March 9, 2000
Location: University of Calgary
Abstract
This will be an introduction to a fascinating area of interplay between functions of a complex variable, algebraic numbers and graph theory, which Grothendieck named "dessins d'enfants".
On Hilbert's 10th Problem - Part I, Part II, Part III, Part IV
Yuri Matiyasevich
Steklov Institute of Mathematics
Date: February - March, 2000
Location: University of Calgary
Abstract
A Diophantine equation is an equation of the form
D(x1,...,xm) = 0,
where D is a polynomial with integer coefficients. These equations were named after the Greek mathematician Diophantus who lived in the 3rd century A.D.
Hilbert's Tenth problem can be stated as follows:
Determination of the Solvability of a Diophantine Equation. Given a diophantine equation with any number of unknown quantities and with rational integral numerical coefficients, devise a process according to which it can be determined by a finite number of operations whether the equation is solvable in rational integers.
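As a small orienting example (chosen purely for illustration, not drawn from the lectures): the Pell equation x^2 - 2y^2 = 1 is a Diophantine equation in two unknowns, and its small integer solutions can be found by brute-force search. Hilbert asked for a single procedure that decides solvability for every Diophantine equation; Matiyasevich's 1970 theorem shows that no such procedure exists, although individual equations can of course still be explored directly.
```python
# Brute-force search for integer solutions of one particular Diophantine
# equation, D(x, y) = x^2 - 2y^2 - 1 = 0 (the Pell equation, chosen only as
# an illustrative example).

def D(x, y):
    return x * x - 2 * y * y - 1

bound = 50
solutions = [(x, y)
             for x in range(-bound, bound + 1)
             for y in range(-bound, bound + 1)
             if D(x, y) == 0]
print(solutions)   # includes (1, 0), (3, 2), (17, 12) and their sign variants
```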
IAM-PIMS Joint Distinguished Colloquium Series
Compressive Signal Processing
Date: January 14, 2008
Location: UBC
Abstract
Sensors, signal processing hardware, and algorithms are under increasing pressure to accommodate ever larger and higher-dimensional data sets; ever faster capture, sampling, and processing rates; ever lower power consumption; communication over ever more difficult channels; and radically new sensing modalities. Fortunately, over the past few decades, there has been an enormous increase in computational power and data storage capacity, which provides a new angle to tackle these challenges.
System Level Mathematical Analysis of Mitosis
Alex Mogilner
University of California, Davis
Date: September 06, 2006
Location: UBC
Abstract
The mitotic spindle goes through distinct morphological states characterized by increasing spindle length and distances between chromosomes. A complete picture of how the spindle assembles is still lacking. We performed an in silico model screen to identify all potential mechanisms of spindle self-organization. We 'trained' the computer to assemble a set of models and screened the models in a multi-dimensional parameter space. To identify models that fit experimental data we used stochastic optimization and genetic algorithms. We found multiple models quantitatively describing the spindle in which the timing of force activity must be fine-tuned, in contrast to the kinetic and mechanical parameters, which show robustness to change.
Models of Initiation and Propagation of Dendritic Spikes in Hippocampal CA1 Pyramidal Neurons
William L. Kath
Northwestern University
Date: September 13, 2006
Location: UBC
Abstract
In computational models of hippocampal CA1 pyramidal neurons with active dendrites, distal synaptic inputs trigger dendritic spikes, but in many cases these spikes do not propagate reliably to the soma to produce output action potentials in the axon. The computational models show, moreover, that the probability of axonal action potential initiation increases dramatically if the distal dendritic inputs are accompanied by small amounts of more proximal synaptic input. In this case, the propagation of the dendritic spikes appears to be gated by the more proximal inputs. The mechanisms for this phenomenon, as well as experimental results designed to test the predictions of the computational models, will be discussed.
Hedging Under Jump Diffusion with Transaction Costs
Peter Forsyth
University of Waterloo
Date: November 06, 2006
Location: UBC
Abstract
In this talk, we consider the problem of hedging a contingent claim, where the underlying asset follows a jump diffusion process. The no-arbitrage value of the claim is given by the solution of a Partial Integro-Differential Equation (PIDE), which in general must be solved numerically. By constructing a portfolio consisting of the underlying asset and a number of liquidly traded options, we devise a dynamic hedging strategy. At each hedge rebalance time, we minimize both the jump risk and the cost of buying/selling due to bid-ask spreads. Simulations of this strategy show that the standard deviation of the profit and loss of the hedging portfolio is greatly reduced compared with the standard hedging strategy.
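To make the setting concrete, the sketch below simulates price paths under a Merton-style jump diffusion; it illustrates only the assumed dynamics of the underlying asset, not the hedging strategy or the PIDE solution method discussed in the talk, and all parameter values are arbitrary assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative jump-diffusion parameters (assumptions, not from the talk).
S0, mu, sigma = 100.0, 0.05, 0.2              # initial price, drift, diffusive vol
lam, jump_mu, jump_sigma = 0.5, -0.1, 0.15    # jump intensity and log-jump size law
T, n_steps, n_paths = 1.0, 252, 5
dt = T / n_steps

paths = np.full((n_paths, n_steps + 1), S0)
for k in range(1, n_steps + 1):
    z = rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * dt, n_paths)                             # jumps this step
    jump = rng.normal(jump_mu * n_jumps, jump_sigma * np.sqrt(n_jumps))  # compound Poisson log-jump
    log_ret = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z + jump
    paths[:, k] = paths[:, k - 1] * np.exp(log_ret)

print(paths[:, -1])   # terminal prices of the simulated paths
```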
Episodic Slow Slipping of Seafloor under Cascadia: What Physical Processes cause Aseismic Deformation Transients?
James Rice
Harvard University
Date: February 12, 2007
Location: UBC
Abstract
In several shallow-dipping subduction zones, including Cascadia, the seafloor undergoes episodes of more rapid than usual creep-slippage under the overlying margin, but at rates vastly slower than usual earthquake slip. In some locations, also including Cascadia, non-volcanic seismic tremors occur during the slip episodes. Graduate student Yajing Liu and I have been trying to understand what physical processes underlie these phenomena. We have shown that transients, with features somewhat like the observations, are a natural outcome of modern 'rate and state' formulations of fault zone friction, in a regime for which the ambient fluid pore pressure within the fault zone is very high and close to the compressive normal stress clamping the fault walls together. Evidence for such pressure conditions is provided by independent mechanical and petrological constraints.
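For orientation, one widely used 'rate and state' friction formulation of the kind referred to above is the Dieterich-Ruina law with the aging form of the state evolution; the notation below is the conventional one and is given only for reference, not as the specific model of the talk. The regime described in the abstract corresponds to the pore pressure p lying close to the normal stress sigma, so that the effective normal stress is small.
```latex
\tau \;=\; (\sigma - p)\,
  \Bigl[\, \mu_0 + a \ln\frac{V}{V_0} + b \ln\frac{V_0\,\theta}{D_c} \,\Bigr],
\qquad
\frac{d\theta}{dt} \;=\; 1 - \frac{V\,\theta}{D_c}
```
Here V is the slip rate, theta a state variable, and D_c a characteristic slip distance.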
Compressive Sampling
Emmanuel Candes
Caltech
Date: March 12, 2007
Location: UBC
Abstract
One of the central tenets of signal processing is the Shannon/Nyquist sampling theory: the number of samples needed to reconstruct a signal without error is dictated by its bandwidth – the length of the shortest interval which contains the support of the spectrum of the signal under study. Very recently, an alternative sampling or sensing theory has emerged which goes against this conventional wisdom. This theory allows the faithful recovery of signals and images from what appear to be highly incomplete sets of data, i.e. from far fewer data bits than traditional methods use. Underlying this methodology is a concrete protocol for sensing and compressing data simultaneously.
This talk will present the key mathematical ideas underlying this new sampling or sensing theory, and will survey some of the most important results. We will argue that this is a robust mathematical theory; not only is it possible to recover signals accurately from just an incomplete set of measurements, but it is also possible to do so when the measurements are unreliable and corrupted by noise. We will see that the reconstruction algorithms are very concrete, stable (in the sense that they degrade smoothly as the noise level increases) and practical; in fact, they only involve solving very simple convex optimization programs.
An interesting aspect of this theory is that it has bearings on some fields in the applied sciences and engineering such as statistics, information theory, coding theory, theoretical computer science, and others as well. If time allows, we will try to explain these connections via a few selected examples.
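A minimal numerical illustration of the recovery problem described above (a generic basis-pursuit example, not the specific protocols or theorems of the talk): a sparse vector is reconstructed from a small number of random linear measurements by l1 minimization, posed as a linear program; the dimensions, sparsity level, and random measurement matrix are arbitrary assumptions.
```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

n, m, k = 128, 48, 5                 # signal length, measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
b = A @ x_true                                  # far fewer measurements than unknowns

# Basis pursuit: minimize ||x||_1 subject to A x = b, written as a linear
# program in the split variables x = u - v with u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```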
Mathematical Models of Protein Receptor Trafficking and Its Role in Synaptic Plasticity
Paul Bressloff
University of Utah
Date: March 19, 2007
Location: UBC
Abstract
AMPA receptors mediate the majority of fast excitatory synaptic transmission in the central nervous system, and recent experimental evidence suggests that AMPA receptor trafficking regulates synaptic strength, a phenomenon implicated in learning and memory. There are two major mechanisms of AMPA receptor trafficking: exo/endocytic exchange of surface receptors with intracellular receptor pools, and lateral diffusion of surface receptors within the plasma membrane. In this talk we review some mathematical models of these trafficking mechanisms, and show how they can account for a variety of physiological data regarding plasticity at the single-synapse and multi-synapse levels.
Rapid Past Climate Change: It is the Sea Ice
Eli Tziperman
Harvard University
Date: September 26, 2005
Location: UBC
Abstract
Heinrich events are massive glacier discharges from the ice sheets around the North Atlantic Ocean which occurred every 7,000-10,000 years during the last glacial period (10,000-50,000 years ago). Each of these events also triggered an abrupt atmospheric warming of some 10 degrees Celsius around the northern North Atlantic. The warming (Dansgaard-Oeschger event) occurred rapidly, in about twenty years, lasted a few hundred years, and terminated abruptly again, within a few decades. We suggest that such past rapid climate changes during the last glacial maximum have occurred due to rapid sea ice melting and formation. A specific mechanism is proposed for the climatic effects of Heinrich events. The synchronous iceberg discharges from several ice sheets around the North Atlantic are explained by a nonlinear phase locking between the different glaciers.
Edge Detection, Hierarchical Decompositions and Velocity Averaging
Eitan Tadmor
University of Maryland
Date: October 17, 2005
Location: UBC
Abstract
I will discuss three separate problems which are dominated by the presence of different scales. The first problem deals with edge detection in noisy spectral data using separation of scales. The second problem originates with image processing: I will present a novel representation of texture which is decomposed into hierarchical scales of edges. I will conclude with velocity averaging of kinetic to macroscopic scales, deriving new regularizing effects in nonlinear second-order equations.
Microwave Heating of Materials: A Mathematical and Physical Overview
Greg Kriegsmann
New Jersey Institute of Technology
Date: November 07, 2005
Location: UBC
Abstract
The use of microwaves to heat and dry materials is rapidly gaining acceptance in industry and, to some extent, in the field of biomedical engineering. The working engineering theories are based upon heuristically averaged, linear equations which adequately explain some processes, such as microwave cooking of foodstuffs, but not others. These include such phenomena as thermal runaway and hot-spot formation, which have important ramifications in both biomedical and industrial applications. They are caused by the temperature dependencies of the electrical and thermal properties of the irradiated material, which make the basic underlying mathematical description highly nonlinear. We shall describe several microwave heating experiments and present models which have been used by researchers in this field. The strengths and shortcomings of these models will be discussed, and open questions of both a mathematical and a computational nature will be presented.
Optimal Decisions in the Brain: From Neural Oscillators to Stochastic Differential Equations
Philip Holmes
Princeton University
Date: March 20, 2006
Location: UBC
Abstract
The sequential probability ratio test (SPRT) is optimal in that it allows one to accept or reject hypotheses, based on noisy incoming evidence, with the minimum number of observations for a given level of accuracy. There is increasing neural and behavioral evidence that primate and human brains employ a continuum analogue of SPRT: the drift-diffusion (DD) process. I will review this and describe how a biophysical model of a pool of spiking neurons can be simplified to a phase oscillator and analysed to yield spike rates in response to stimuli. These spike rates tune DD parameters. This study is a small step toward the construction of a series of models, at different time and space scales, linking neural spikes to human decisions. This work provides a rich, if chaotic, example of applied mathematics in action, involving probability, stochastic differential equations, and nonlinear dynamical systems.
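As a small illustration of the drift-diffusion (DD) process mentioned above (a generic two-threshold simulation, not the biophysical reduction developed in the talk), the following sketch integrates the process to a decision and reports accuracy and mean decision time; the drift, noise, and threshold values are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative drift-diffusion parameters (assumptions).
drift, noise, threshold, dt = 0.3, 1.0, 1.0, 1e-3

def one_trial():
    """Integrate dx = drift*dt + noise*dW from x = 0 until |x| reaches the threshold."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x > 0, t          # (decision agrees with the sign of the drift?, decision time)

trials = [one_trial() for _ in range(500)]
accuracy = np.mean([correct for correct, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(f"accuracy ~ {accuracy:.3f}, mean decision time ~ {mean_rt:.3f}")
# For this symmetric setup, theory gives accuracy = 1 / (1 + exp(-2*drift*threshold/noise**2)).
```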
Computational Cell Biology: From Molecular Networks to Cell Physiology
Philip Holmes
Princeton University
Date: March 27, 2005
Location: UBC
Abstract
Cell physiology – how cells feed, move around, respond to stimuli – is a consequence of the dynamic properties of complex networks of interacting macromolecules (genes and proteins). Effective mathematical methods for deriving cell physiology from molecular interaction networks are crucial to future progress in understanding living cells and in modifying cell behavior for medical and technological purposes. Of particular interest is the network regulating cell reproduction (DNA synthesis and cell division). Too complex to be reliably understood by hand-waving arguments, the network can be described by nonlinear differential equations that accurately predict the observed properties of growing and dividing cells.
Novel Marangoni Flows
George Homsy
University of California, Santa Barbara
Date: September 20, 2004
Location: UBC
Abstract
In this talk I describe three recent studies of novel Marangoni flows, i.e., flows that are driven by tangential stresses due to temperature, compositional, or electrical fields. These stresses can drive bulk flows that are vigorous and in many cases, counter-intuitive.
Our first two studies involve gradients of concentration of surfactants arising from variation in the rate of the chemical reaction producing them. We study the effect of in-situ production of surfactants on viscous fingering instabilities. We find that Marangoni stresses result in wider fingers, a larger fractal dimension of the pattern, and an increase in displacement efficiency. We then describe a surprising phenomenon of spontaneous, self-sustained, chemically driven oscillations at the tip of a drop suspended from a needle. We connect this phenomenon with the well-known tip-streaming in drops subjected to extensional flows. Plausible physical mechanisms are proposed and mathematical models presented for both of these phenomena.
Finally, we describe theory and experiment on internal circulations in drops that are driven by a combination of translation and tangential electrical stresses. Modulation of the electric field responsible for the latter results in chaotic advection and good mixing within the drop. Theory and experiment are found to be in good agreement.
A Stirring Tale of Bacterial Swimming and Chemotaxis
Ray Goldstein
University of Arizona
Date: October 25, 2004
Location: UBC
Abstract
I describe very recent experimental results on collective behavior in bacterial suspensions where the biology of chemotaxis, metabolism and cell-cell signaling is intimately connected to the physics of buoyancy, diffusion, and mixing. These results include the discovery of the "chemotactic Boycott effect" and large-scale coherence characterized by transient, recurring vortex streets and high-speed jets of cooperative swimming. These phenomena generate large Peclet number flows that may fundamentally alter the nature of solute transport in such systems. Certain of these observations can be quantitatively understood through coupled PDEs describing oxygen transport, chemotaxis, and fluid flow. Others have no successful model to date. Some intriguing fluid dynamical studies motivated by these observations are described, including a striking coiling instability of viscous fluid jets.
Higher Order PDEs in Image Processing
Andrea Bertozzi
University of California at Los Angeles
Date: November 24, 2004
Location: UBC
Abstract
Nonlinear PDEs are used to model a wide range of phenomena in science and engineering including fluid dynamics, wave propagation, magnetic fields, and biological systems. During the last 15 years PDEs have come to serve a new purpose in engineering – as the basis for algorithm design in image processing. Novel nonlinear PDEs are currently being used to create new algorithms that locally dynamically evolve image pixel values with a global effect being achieved. Applications include edge detection and denoising, image inpainting and reconstruction, super-resolution, compression, and image registration. We discuss a suite of methods based on higher-order equations that are designed to address curvature effects in images.
Dynamical Systems That Do Tricks
Roger Brockett
Harvard University
Date: January 24, 2005
Location: UBC
Abstract
Our ideas about computing are increasingly challenged as more importance is attached to explaining the marvels of biological information processing. At the same time, the mathematical theory of dynamical systems is being revolutionized based on studies of high dimensional integrable systems, on one hand, and low dimensional chaotic ones on the other. In this talk we discuss highly nonlinear input-output systems that perform various calculational tasks in a robust way and link these to biological phenomena. This involves a discussion of input-output versions of the Toda lattice models operating in soliton mode as well as efforts to use distributed representations of geometric quantities, such as those afforded by place cells, to represent numbers in an analog computation.
Inverse Problems in Medical Imaging
Adrian Nachman
University of Toronto
Date: March 7, 2005
Location: UBC
Abstract
The introduction of X-ray computed tomography in 1972 revolutionized medical imaging, replacing classical qualitative imaging by a quantitative format. Other imaging modalities use acoustic or electromagnetic waves. Mathematically, wave propagation is modeled by partial differential equations, and the class of problems considered consists in determining the coefficients of such an equation (the tissue characteristics), assumed unknown inside the body, from knowledge of its solutions at the surface. These inverse problems turn out to have a rich and beautiful mathematical structure (leading to nonlinear harmonic analysis, as well as to hard questions in differential geometry) and have attracted a considerable amount of research activity. This talk will survey some of the analytic breakthroughs in the field as well as some current open problems.
Early-Life Crises of Habitable Planets
Ray Pierrehumbert
University of Chicago
Date: March 28, 2005
Location: UBC
Abstract
There are a number of crises that a potentially habitable planet must avoid or surmount if its potential is to be realized. These include the runaway greenhouse, loss of atmosphere by chemical or physical processes, and long-lasting global glaciation. In this lecture I will present research on the climate dynamics governing such processes, with particular emphasis on the lessons to be learned from the cases of Early Mars and the Neoproterozoic Snowball Earth.
Dynamics of Abyssal Ocean Currents
Gordon E. Swaters
University of Alberta
Date: October 7, 2002
Location: UBC
Abstract
The ocean is the regulator of Earth's climate. The world's oceans store an enormous quantity of heat, which is redistributed throughout the world via the currents. Because the density of water is about a thousand times larger than the density of air, the ocean has a substantial inertia associated with it compared to the atmosphere. This implies that it takes an enormous quantity of energy to change an existing ocean circulatory pattern compared to the atmospheric winds. One can think of the ocean as the "memory" and "integrator" of past and evolving climate states.
One can characterize ocean currents into two broad groups. The first are the wind-driven currents. These currents are most intense near the surface of the ocean. Their principal role is to transport warm equatorial waters toward the polar regions. The second group of currents are those that are driven by density contrasts with the surrounding waters. Among these are the deep, or abyssal, currents flowing along or near the bottom of the oceans in narrow bands. Their principal role is to transport cold, dense waters produced in the polar regions toward the equator.
My research group is working toward understanding the dynamics of these abyssal currents. In particular, we have focused on developing innovative mathematical and computational models to describe the evolution, including the transition to instability and interaction with the surrounding ocean and bottom topography, of these currents. The goal of this research is to better understand the temporal variability in the planetary scale dynamics of the ocean climate system. Our work can be seen as "theoretical" in the sense that we attempt to develop new models to elucidate the most important dynamical balances at play and "process-oriented" in the sense that we attempt to use these models to make concrete predictions about the evolution of these flows. As such, our work is a blend of classical applied mathematics, high-performance computational science and physical oceanography.
In this talk, we will attempt to give an overview of our work in this area.
Transition pathways in complex systems: throwing ropes over rough mountain passes, in the dark
David Chandler
University of California, Berkeley
Date: October 28, 2002
Location: UBC
Abstract
This lecture describes the statistical mechanics of trajectory space and examples of what can be learned from it. These examples include numerical algorithms for studying rare but important events in complex systems -- systems where transition states are not initially known, where transition states need not coincide with saddles in a potential energy landscape, and where the number of saddles and other features are too numerous and complicated to enumerate explicitly. This methodology for studying trajectories is called "transition path sampling." Extensive material on this topic can be found at the web site: http://gold.cchem.berkeley.edu.
Spatial complexity in ecology and evolution
Ulf Dieckmann
The International Institute for Applied Systems Analysis, Laxenburg, Austria
Date: December 2, 2002
Location: UBC
Abstract
The field of spatial ecology has expanded dramatically in the last few years. This talk gives an overview of the many intriguing phenomena arising from spatial structure in ecological and evolutionary models. While traditional ecological theory sadly fails to account for such phenomena, complex simulation studies offer but limited insight into the inner workings of spatially structured ecological interactions. The talk concludes with a survey of some novel methods for simplifying spatial complexity that offer a promising middle ground between spatially ignorant and spatially explicit approaches.
Turbulence and its Computation
Parviz Moin
Center for Turbulence Research, Stanford University and NASA Ames Research Center
Date: January 13, 2003
Location: UBC
Abstract
Turbulence is a common state of fluid motion in engineering applications and at geophysical and astrophysical scales. Prediction of its statistical properties and the ability to control turbulence are of great practical significance. Progress toward a rigorous analytic theory has been prevented by the fact that turbulence is a mixture of high dimensional chaos and order, and turbulent flows possess a wide range of temporal and spatial scales with strong non-linear interactions. With the advent of supercomputers it has become possible to compute some turbulent flows from basic principles. The data generated from these calculations have helped to understand the nature and mechanics of turbulent flows in some detail. Recent examples from large scale computations of turbulent flows and novel numerical experiments used to study turbulence will be presented. These display a wide range in complexity, from decaying turbulence in a box to turbulent combustion in the combustor of a real jet engine. The hierarchy of methods for computing turbulent flows and the problem of turbulence closure will be discussed. Recent applications of optimal control theory to turbulence control for drag and noise reduction will be presented.
Fast accurate solution of stiff PDE
Lloyd N. Trefethen
Oxford University Computing Laboratory
Date: March 17, 2003
Location: UBC
Abstract
Many partial differential equations combine higher-order linear terms with lower-order nonlinear terms. Examples include the KdV, Allen-Cahn, Burgers, and Kuramoto-Sivashinsky equations. High accuracy is typically needed because the solutions may be highly sensitive to small perturbations. For simulations in simple geometries, spectral discretization in space is excellent, but what about the time discretization? Too often, second-order methods are used because higher order seems impractical. In fact, fourth-order methods are entirely practical for such problems, and we present a comparison of the competing methods of linearly implicit schemes, split-step schemes, integrating factors, "sliders", and ETD or exponential time differencing. In joint work with A.-K. Kassam we have found that a fourth-order Runge-Kutta scheme known as ETDRK4, developed by Cox and Matthews, performs impressively if its coefficients are stably computed by means of contour integrals in the complex plane. Online examples show that accurate solutions of challenging nonlinear PDE can be computed by a 30-line Matlab code in less than a second of laptop time.
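A minimal Python sketch in the spirit of the approach described above, integrating the Kuramoto-Sivashinsky equation with a Fourier spectral discretization in space and ETDRK4 in time, with the scheme's coefficients evaluated stably by contour integrals; the grid size, domain, initial condition, and step size are illustrative choices rather than those of the talk or of the published Matlab code.
```python
import numpy as np

# Kuramoto-Sivashinsky equation u_t = -u*u_x - u_xx - u_xxxx on [0, 32*pi],
# Fourier spectral in space, ETDRK4 in time (Cox-Matthews scheme), with the
# coefficient functions evaluated by means of contour integrals.

N = 128
x = 32 * np.pi * np.arange(N) / N
u = np.cos(x / 16) * (1 + np.sin(x / 16))      # illustrative initial condition
v = np.fft.fft(u)

k = np.fft.fftfreq(N, d=1.0 / N) / 16          # wavenumbers for period 32*pi
L = k**2 - k**4                                 # diagonal linear operator
g = -0.5j * k                                   # factor for the -u*u_x term

h = 0.25                                        # time step
E, E2 = np.exp(h * L), np.exp(h * L / 2)
M = 32                                          # number of contour points
r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
LR = h * L[:, None] + r[None, :]
Q  = h * np.real(np.mean((np.exp(LR / 2) - 1) / LR, axis=1))
f1 = h * np.real(np.mean((-4 - LR + np.exp(LR) * (4 - 3 * LR + LR**2)) / LR**3, axis=1))
f2 = h * np.real(np.mean((2 + LR + np.exp(LR) * (-2 + LR)) / LR**3, axis=1))
f3 = h * np.real(np.mean((-4 - 3 * LR - LR**2 + np.exp(LR) * (4 - LR)) / LR**3, axis=1))

def nonlin(w):
    """Fourier coefficients of -u*u_x for the field whose transform is w."""
    return g * np.fft.fft(np.real(np.fft.ifft(w)) ** 2)

for _ in range(int(150 / h)):                   # integrate to t = 150
    Nv = nonlin(v)
    a = E2 * v + Q * Nv
    Na = nonlin(a)
    b = E2 * v + Q * Na
    Nb = nonlin(b)
    c = E2 * a + Q * (2 * Nb - Nv)
    Nc = nonlin(c)
    v = E * v + Nv * f1 + 2 * (Na + Nb) * f2 + Nc * f3

print(np.real(np.fft.ifft(v))[:8])              # a few values of u(x, t = 150)
```
The contour-integral averages are what keep the ETDRK4 coefficients accurate when h*L is small, which is precisely the cancellation problem the stable computation addresses.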
PIMS-SFU School of Computer Science Distinguished Lecture Series, 2004
List Decoding of Error-correcting Codes
Madhu Sudan
MIT
Date: September 17, 2004
Location: Simon Fraser University
Abstract
The task of dealing with errors (or correcting them) lies at the very heart of communication and computation. The mathematical foundations for this task were laid in two concurrent and interdependent works by Shannon and Hamming in the late 1940s. The two theories are strikingly powerful and distinct in their modelling of the error. Shannon's theory models errors as effected by a probabilistic/stochastic process, while Hamming envisions them as being introduced by an adversary. While the two theories share a lot in the underlying tools, the quantitative results are sharply diverging. Shannon's theory shows that a channel that corrupts (arbitrarily) close to 50% of the transmitted bits can still be used for transmission of information. Hamming's theory, in contrast, has often been interpreted to suggest it can handle at most 25% error on a binary channel.
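As a concrete instance of the classical unique-decoding regime contrasted above (not of the list-decoding results that are the subject of the talk), the sketch below encodes 4 message bits with the standard Hamming(7,4) code and corrects any single flipped bit; the specific matrices are the usual systematic choice and are included only for illustration.
```python
import numpy as np

# Systematic generator and parity-check matrices for the Hamming(7,4) code
# (all arithmetic over GF(2)); codewords are c = m G, and H c = 0.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg_bits):
    return (np.array(msg_bits) @ G) % 2

def correct_single_error(received):
    s = (H @ received) % 2                        # syndrome of the received word
    if s.any():
        # a single error sits at the unique column of H equal to the syndrome
        pos = int(np.argmax(np.all(H.T == s, axis=1)))
        received = received.copy()
        received[pos] ^= 1
    return received

msg = [1, 0, 1, 1]
codeword = encode(msg)
noisy = codeword.copy()
noisy[2] ^= 1                                     # adversary flips one bit
decoded = correct_single_error(noisy)
print("recovered message bits:", decoded[:4].tolist(), " original:", msg)
```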
Learning to perceive how handwritten characters were drawn
Geoffrey Hinton
University of Toronto
Date: December 10, 2004
Location: Simon Fraser University
Abstract
I shall describe a fairly general way of inverting graphics programs to produce vision programs. The main problem is that we do not know what inputs the graphics program needs to be given in order to generate the large set of real images that we want to train on. Using the MNIST images of handwritten digits as an example, I will show how this problem can be overcome. For each digit class, we start with a single example of a motor program that draws a prototypical example of that class. The motor program consists of the four stiffnesses of two pairs of opposing springs at 17 time steps. When these springs operate on a pen, the pen draws the digit. These stiffnesses are the initial biases of the 68 output units of a feedforward neural net. The net has a large non-linear hidden layer and its inputs are the pixel intensities in an image of a digit. With small initial weights, the neural net can only produce small distortions of the prototype, whatever input image it is shown. The graphics program is used to draw these distorted examples and this drawing plus the motor program that created it constitutes a labeled training example which can be used to train the neural net to convert images to motor programs. The net first learns to invert images that are very similar to the prototype. Once it has learned this, it can map images in the dataset that are moderately similar to the prototype into their approximate motor programs. By producing images from these motor programs, it can extend the set of labeled training examples. Eventually, the feedforward net becomes very good at perceiving the correct motor program for any image in the class it is trained on. Digit classification can then be performed by seeing how accurately the motor program extracted by a class-specific network reconstructs the test image.
IAM-PIMS-MITACS Joint Distinguished Colloquia
Novel Marangoni Flows
George Homsy
University of California, Santa Barbara
Date: September 20, 2004
Location: UBC
Abstract
In this talk I describe three recent studies of novel Marangoni flows, i.e., flows that are driven by tangential stresses due to temperature, compositional, or electrical fields. These stresses can drive bulk flows that are vigorous and in many cases, counter-intuitive.
Our first two studies involve gradients of concentration of surfactants arising from variation in the rate of the chemical reaction producing them. We study the effect of in-situ production of surfactants on viscous fingering instabilities. We find that Marangoni stresses result in wider fingers, a larger fractal dimension of the pattern, and an increase in displacement efficiency. We then describe a surprising phenomenon of spontaneous, self-sustained, chemically driven oscillations at the tip of a drop suspended from a needle. We connect this phenomenon with the well-known tip-streaming in drops subjected to extensional flows. Plausible physical mechanisms are proposed and mathematical models presented for both of these phenomena.
Finally, we describe theory and experiment on internal circulations in drops that are driven by a combination of translation and tangential electrical stresses. Modulation of the electric field responsible for the latter results in chaotic advection and good mixing within the drop. Theory and experiment are found to be in good agreement.
A Stirring Tale of Bacterial Swimming and Chemotaxis
Ray Goldstein
University of Arizona
Date: October 25, 2005
Location: UBC
Abstract
I describe very recent experimental results on collective behavior in bacterial suspensions where the biology of chemotaxis, metabolism and cell-cell signaling is intimately connected to the physics of buoyancy, diffusion, and mixing. These results include the discovery of the "chemotactic Boycott effect" and large-scale coherence characterized by transient, recurring vortex streets and high-speed jets of cooperative swimming. These phenomena generate large Peclet number flows that may fundamentally alter the nature of solute transport in such systems. Certain of these observations can be quantitatively understood through coupled PDEs describing oxygen transport, chemotaxis, and fluid flow. Others have no successful model to date. Some intriguing fluid dynamical studies motivated by these observations are described, including a striking coiling instability of viscous fluid jets.
Higher Order PDEs in Image Processing
Andrea Bertozzi
University of California at Los Angeles
Date: November 29, 2004
Location: UBC
Abstract
Nonlinear PDEs are used to model a wide range of phenomena in science and engineering including fluid dynamics, wave propagation, magnetic fields, and biological systems. During the last 15 years PDEs have come to serve a new purpose in engineering – as the basis for algorithm design in image processing. Novel nonlinear PDEs are currently being used to create new algorithms that locally dynamically evolve image pixel values with a global effect being achieved. Applications include edge detection and denoising, image inpainting and reconstruction, super-resolution, compression, and image registration. We discuss a suite of methods based on higher-order equations that are designed to address curvature effects in images.
Dynamical Systems That Do Tricks
Roger Brockett
Harvard University
Date: January 24, 2005
Location: UBC
Abstract
Our ideas about computing are increasingly challenged as more importance is attached to explaining the marvels of biological information processing. At the same time, the mathematical theory of dynamical systems is being revolutionized based on studies of high dimensional integrable systems, on one hand, and low dimensional chaotic ones on the other. In this talk we discuss highly nonlinear input-output systems that perform various calculational tasks in a robust way and link these to biological phenomena. This involves a discussion of input-output versions of the Toda lattice models operating in soliton mode as well as efforts to use distributed representations of geometric quantities, such as those afforded by place cells, to represent numbers in an analog computation.
Inverse Problems in Medical Imaging
Adrian Nachman
University of Toronto
Date: March 7, 2005
Location: UBC
Abstract
The introduction of X-ray computed tomography in 1972 revolutionized medical imaging, replacing classical qualitative imaging by a quantitative format. Other imaging modalities use acoustic or electromagnetic waves. Mathematically, wave propagation is modeled by partial differential equations, and the class of problems considered consists in determining the coefficients of such an equation (the tissue characteristics), assumed unknown inside the body, from knowledge of its solutions at the surface. These inverse problems turn out to have a rich and beautiful mathematical structure (leading to nonlinear harmonic analysis, as well as to hard questions in differential geometry) and have attracted a considerable amount of research activity. This talk will survey some of the analytic breakthroughs in the field as well as some current open problems.
Early-Life Crises of Habitable Planets
Ray Pierrehumbert
University of Chicago
Date: March 28, 2005
Location: UBC
Abstract
There are a number of crises that a potentially habitable planet must avoid or surmount if its potential is to be realized. These include the runaway greenhouse, loss of atmosphere by chemical or physical processes, and long-lasting global glaciation. In this lecture I will present research on the climate dynamics governing such processes, with particular emphasis on the lessons to be learned from the cases of Early Mars and the Neoproterozoic Snowball Earth.
IAM-PIMS Joint Distinguished Colloquia
Dynamics of Abyssal Ocean Currents
Gordon E. Swaters
University of Alberta
Date: October 7, 2002
Location: UBC
Abstract
The ocean is the regulator of Earth's climate. The world's oceans store an enormous quantity of heat, which is redistributed throughout the world via the currents. Because the density of water is about a thousand times larger than the density of air, the ocean has a substantial inertia associated with it compared to the atmosphere. This implies that it takes an enormous quantity of energy to change an existing ocean circulatory pattern compared to the atmospheric winds. One can think of the ocean as the "memory" and "integrator" of past and evolving climate states.
Ocean currents fall into two broad groups. The first are the wind-driven currents. These currents are most intense near the surface of the ocean, and their principal role is to transport warm equatorial waters toward the polar regions. The second group consists of currents driven by density contrasts with the surrounding waters. Among these are the deep, or abyssal, currents flowing along or near the bottom of the oceans in narrow bands; their principal role is to transport cold, dense waters produced in the polar regions toward the equator.
My research group is working toward understanding the dynamics of these abyssal currents. In particular, we have focused on developing innovative mathematical and computational models to describe the evolution, including the transition to instability and interaction with the surrounding ocean and bottom topography, of these currents. The goal of this research is to better understand the temporal variability in the planetary scale dynamics of the ocean climate system. Our work can be seen as "theoretical" in the sense that we attempt to develop new models to elucidate the most important dynamical balances at play and "process-oriented" in the sense that we attempt to use these models to make concrete predictions about the evolution of these flows. As such, our work is a blend of classical applied mathematics, high-performance computational science and physical oceanography.
In this talk, we will attempt to give an overview of our work in this area.
Transition pathways in complex systems: throwing ropes over rough mountain passes, in the dark
David Chandler
University of California
Date: October 28, 2002
Location: UBC
Abstract
This lecture describes the statistical mechanics of trajectory space and examples of what can be learned from it. These examples include numerical algorithms for studying rare but important events in complex systems -- systems where transition states are not initially known, where transition states need not coincide with saddles in a potential energy landscape, and where the saddles and other features are too numerous and complicated to enumerate explicitly. This methodology for studying trajectories is called "transition path sampling." Extensive material on this topic can be found at the web site gold.cchem.berkeley.edu.
Spatial complexity in ecology and evolution
Ulf Dieckmann
The International Institute for Applied Systems Analysis, Laxenburg, Austria
Date: December 2, 2002
Location: UBC
Abstract
The field of spatial ecology has expanded dramatically in the last few years. This talk gives an overview of the many intriguing phenomena arising from spatial structure in ecological and evolutionary models. While traditional ecological theory sadly fails to account for such phenomena, complex simulation studies offer but limited insight into the inner workings of spatially structured ecological interactions. The talk concludes with a survey of some novel methods for simplifying spatial complexity that offer a promising middle ground between spatially ignorant and spatially explicit approaches.
Turbulence and its Computation
Parviz Moin
Center for Turbulence Research, Stanford University and NASA Ames Research Center
Date: January 13, 2003
Location: UBC
Abstract
Turbulence is a common state of fluid motion in engineering applications and at geophysical and astrophysical scales. Prediction of its statistical properties and the ability to control turbulence are of great practical significance. Progress toward a rigorous analytic theory has been prevented by the fact that turbulence is a mixture of high dimensional chaos and order, and turbulent flows possess a wide range of temporal and spatial scales with strong non-linear interactions. With the advent of supercomputers it has become possible to compute some turbulent flows from basic principles. The data generated from these calculations have helped to understand the nature and mechanics of turbulent flows in some detail. Recent examples from large scale computations of turbulent flows and novel numerical experiments used to study turbulence will be presented. These display a wide range in complexity, from decaying turbulence in a box to turbulent combustion in the combustor of a real jet engine. The hierarchy of methods for computing turbulent flows and the problem of turbulence closure will be discussed. Recent applications of optimal control theory to turbulence control for drag and noise reduction will be presented.
Fast accurate solution of stiff PDE
Lloyd N. Trefethen
Oxford University Computing Laboratory
Date: March 17, 2003
Location: UBC
Abstract
Many partial differential equations combine higher-order linear terms with lower-order nonlinear terms. Examples include the KdV, Allen-Cahn, Burgers, and Kuramoto-Sivashinsky equations. High accuracy is typically needed because the solutions may be highly sensitive to small perturbations. For simulations in simple geometries, spectral discretization in space is excellent, but what about the time discretization? Too often, second-order methods are used because higher order seems impractical. In fact, fourth-order methods are entirely practical for such problems, and we present a comparison of the competing methods of linearly implicit schemes, split step schemes, integrating factors, "sliders", and ETD or exponential time differencing. In joint work with A-K Kassam we have found that a fourth-order Runge-Kutta scheme known as ETDRK4, developed by Cox and Matthews, performs impressively if its coefficients are stably computed by means of contour integrals in the complex plane. Online examples show that accurate solutions of challenging nonlinear PDE can be computed by a 30-line Matlab code in less than a second of laptop time.
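To make the idea of exponential time differencing concrete, here is a minimal sketch assuming a 1D Allen-Cahn equation and the first-order ETD scheme rather than the ETDRK4 method discussed in the talk; the parameters and grid sizes are illustrative, not from the lecture.

# A minimal sketch: first-order exponential time differencing (ETD1) for the 1D
# Allen-Cahn equation u_t = eps*u_xx + u - u^3 with periodic boundary conditions,
# Fourier spectral in space. This is not the Cox-Matthews ETDRK4 scheme.
import numpy as np

N, eps, h, T = 256, 0.01, 0.01, 5.0            # modes, diffusion, time step, final time
x = 2.0 * np.pi * np.arange(N) / N
u = 0.1 * np.cos(x) * (1.0 + np.sin(3.0 * x))  # smooth initial data
k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers on [0, 2*pi)
L = -eps * k**2 + 1.0                          # stiff linear part, in Fourier space
E = np.exp(L * h)                              # exact propagator for the linear part
Ls = np.where(np.abs(L) < 1e-12, 1.0, L)       # guard the L -> 0 limit of (e^{Lh}-1)/L
phi1 = np.where(np.abs(L) < 1e-12, h, np.expm1(L * h) / Ls)

v = np.fft.fft(u)
for _ in range(int(T / h)):
    Nv = np.fft.fft(-np.real(np.fft.ifft(v))**3)   # nonlinear term -u^3, evaluated in x
    v = E * v + phi1 * Nv                          # ETD1 update: exact on the linear part
u = np.real(np.fft.ifft(v))
print("solution settles toward the +/-1 phases; max |u| =", float(np.abs(u).max()))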
Detached-Eddy Simulation
Philippe R. Spalart
Boeing Corp., Seattle
Date: October 1, 2001
Location: UBC
Abstract
DES is a recent technique, devised to predict separated flows at high Reynolds numbers with a manageable cost, for instance an airplane landing gear or a vehicle. The rationale is that on one hand, Large-Eddy Simulation (LES) is unaffordable in the thin regions of the boundary layer, and on the other hand, Reynolds-Averaged Navier-Stokes (RANS) models seem permanently unable to attain sufficient accuracy in regions of massive separation.
DES contains a single model, typically with one transport equation, which functions as a RANS model in the boundary layer and as a Sub-Grid-Scale model in separated regions, where the simulation becomes an LES. The approach has spread to a number of groups worldwide, and appears quite stable. A range of examples are presented, from flows as simple as a circular cylinder to flows as complex as a fighter airplane beyond stall. The promise and the limitations of the technique are discussed.
Spectral Methods for Discontinuous Problems
David Gottlieb
Brown University
Date: October 29, 2001
Location: UBC
Abstract
Spectral methods involve approximating the solution by Fourier series or orthogonal polynomials. Those approximations are very accurate for smooth functions, but not for piecewise smooth functions that appear very often in applications.
In this talk we will review the progress made in the last decade, in applying spectral methods to problems where the solution is only piecewise smooth.
The first part of the talk will concentrate on approximation theory: The resolution of the Gibbs phenomenon. We will show that if the first N Fourier coefficients of a piecewise smooth function are known then it is possible to get a rapidly convergent uniform approximation. Applications in computed tomography and splicing of pictures will be shown.
We will then discuss linear hyperbolic equations and show that spectral methods, with pre- and post-processing, yield spectral accuracy.
The Tadmor theory of spectral approximations to nonlinear hyperbolic equations will be reviewed, together with recent developments concerning the conservation properties of multidomain spectral methods. Simulations of reactive supersonic flows will be presented.
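A tiny numerical illustration of the Gibbs phenomenon addressed in the first part of the talk (my own sketch, not material from the lecture): the truncated Fourier series of a square wave overshoots near the jump by a fixed fraction, no matter how many terms are kept.

# A minimal sketch, assuming the square wave sign(sin x) and its truncated Fourier series;
# the maximum overshoot stays near the Gibbs constant (about 1.09 times the half-jump).
import numpy as np

def square_wave_partial_sum(x, N):
    # Fourier series of sign(sin x): (4/pi) * sum over odd k <= N of sin(k*x)/k
    s = np.zeros_like(x)
    for k in range(1, N + 1, 2):
        s += np.sin(k * x) / k
    return 4.0 / np.pi * s

x = np.linspace(0.0, np.pi, 20001)
for N in (15, 63, 255, 1023):
    print(N, float(square_wave_partial_sum(x, N).max()))   # stays near ~1.09, not 1.0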
Numerical Simulation of Turbulence
Joel H. Ferziger
Stanford University
Date: November 26, 2001
Location: UBC
Abstract
Turbulence is a phenomenon (or rather a set of phenomena) that is difficult to deal with both mathematically and physically because it contains both deterministic and random elements. However, the equations governing its behavior are well known. After a short discussion of the physics of turbulence, we will give a discussion of the approaches used to deal with it and an example of the use of simulation techniques to learn about the physics of turbulence and the development of simple models for engineering use.
Modeling and Simulation for Epitaxial Growth
Russel Caflisch
UCLA
Date: January 28, 2002
Location: UBC
Abstract
Epitaxy is growth of a thin film as a single crystal, the properties of which are inherited from the underlying substrate. This is the method of choice for growth of materials for many of the most demanding applications, such as quantum well devices. Because of the wide range of length and time scales that are relevant to epitaxial growth, modeling and simulation is performed for length scales ranging from the atomistic to the continuum. A new mathematical model and simulation method, using a level set method, will be described in detail. This method is atomistic in the growth direction but continuum in the lateral directions. Inclusion of strain due to heteroepitaxy will be discussed.
Signal Processing in Cellular Regulatory Networks: Physical Models, Formal Abstractions and Applications
Adam Arkin
University of California, Berkeley
Date: February 18, 2002
Location: UBC
Abstract
Cellular behavior is controlled by highly interconnected networks of chemical and physical interactions. These networks process signals from the environment and from internal subsystems in order to perform complex tasks, such as when an immune cell locates a bacterium in complex tissue or chooses to follow one path of development over another. The physical mechanisms underlying these decisions range in time from milliseconds to hours and include mechanical, chemical, and transport processes. They are highly nonlinear and often stochastic. Molecular biology has progressed to the point where the questions being addressed seem to require that the functioning of these networks be understood in great detail. However, building models of these processes is extremely difficult: both physical theory and the ability to observe all the parameters and interactions are lacking. Thus, there is a need to develop more abstract models of these networks that are nonetheless able to aid in prediction, control and design of cellular systems. In support of this, theories of biological network decomposition and regulatory motif detection need to be developed so that analysis of pieces of the system can proceed without explicitly including every interaction in the cell. All the while, these analyses must drive experiment and be validated in detail by the experimental data. Here we describe some of our approaches to these different problems and demonstrate them on particular bacterial and eukaryotic systems. The suite of tools we are developing is designed to fit into an integrated framework for biological systems analysis called Bio/Space. Progress on this tool and its theory will also be discussed.
Approximation Algorithms and Games on Networks
Eva Tardos
Cornell University
Date: March 11, 2002
Location: UBC
Abstract
In this talk we discuss work at the intersection of algorithm design and game theory. Traditional algorithm design assumes that the problem is described by a single objective function. One of the main current trends of work focuses on approximation algorithms, used where computing the exact optimum is too hard. However, there is an additional difficulty in a number of settings: it is natural to consider algorithmic questions where multiple agents each pursue their own selfish interests. We will discuss problems and results that arise from this perspective.
Radial Basis Functions - A future way to solve PDEs to spectral accuracy on irregular multidimensional domains?
Bengt Fornberg
University of Colorado
Date: March 27, 2001
Location: UBC
Abstract
It was discovered about 30 years ago that expansions in Radial Basis Functions (RBFs) provide very accurate interpolation of arbitrarily scattered data in any number of spatial dimensions. With both computational cost and coding effort for RBF approximations independent of the number of spatial dimensions, it is not surprising that RBFs have since found use in many applications. Their use as basis functions for the numerical solution of PDEs is however surprisingly novel. In this Colloquium, we will discuss RBF approximations from the perspective of someone interested in pseudospectral (spectral collocation) methods primarily for wave-type equations.
The Mathematics of Reflection Seismology
Gunther Uhlmann
University of Washington
Date: March 6, 2001
Location: UBC
Abstract
Reflection seismology is the principal exploration tool of the oil industry and has many other technical and scientific uses. Reflection seismograms contain enormous amounts of information about the Earth's structure, obscured by complex reflection and refraction effects. Modern mathematical understanding of wave propagation in heterogeneous materials has aided in the unraveling of this complexity. The speaker will outline some advances in the theory of oscillatory integrals which have had immediate practical application in seismology.
Comparative Genomics
David Baillie
Simon Fraser University
Date: January 16, 2001
Location: UBC
Abstract
The sequencing of the human genome has been essentially finished. The genomes of more than fifty other organisms have also been completed. Thus we have in these genomes the information that encodes the description of not only the majority of the protein and nucleic acid components essential for their operation but also the details of when and where they are to be produced. A major challenge of the next decade will be the determination of how the control elements associated with the structural genes function. We must develop methods that allow these elements to be recognized in the genomic sequence; this has been done for only a small number of genes at this time. Comparative genomics provides a powerful approach to this problem. Comparison of the genomic sequences of organisms that are evolutionarily separated by 30 to 100 million years provides a way of identifying regions that contain conserved regulatory signals. In addition, this type of comparison often allows identification of previously unrecognized genomic features. I will present some examples of this approach.
Algorithms and Software for Dynamic Optimization with Application to Chemical Vapor Deposition Processes
Linda Petzold
University of California at Santa Barbara
Date: November 1, 2000
Location: UBC
Abstract
In recent years, as computers and algorithms for simulation have become more efficient and reliable, an increasing amount of attention has focused on the more computationally intensive tasks of sensitivity analysis and optimal control. In this lecture we describe algorithms and software for sensitivity analysis and optimal control of large-scale differential-algebraic systems, focusing on the computational challenges. We introduce a new software package DASPK 3.0 for sensitivity analysis, and discuss our progress to date on the COOPT software and algorithms for optimal control. An application from the chemical vapor deposition growth of a thin film YBCO high-temperature superconductor will be described.
PIMS PDE/Geometry Seminar, 2003
Unusual comparison properties of capillary surfaces
Robert Finn
Stanford University
Date: May 8, 2003
Location: UBC
Abstract
This talk will address a question that was raised about 30 years ago by Mario Miranda, as to whether a given cylindrical capillary tube always raises liquid higher over its section than does a cylinder whose section strictly contains the given one. Depending on the specific shapes, the answer can take unanticipated forms exhibiting nonuniformity and discontinuous reversal in behavior, even in geometrically simple configurations. The presentation will be for the most part complete and self-contained, and is intended to be accessible for a broad mathematical audience.
MITACS Annual General Meeting, 2002
Filtering and Signal Processing
Gilbert Strang
Massachusetts Institute of Technology
Date: May 23, 2002
Location: UBC
Abstract
We discuss two filters that are frequently used to smooth data. One is the (nonlinear) median filter, which chooses the median of the sample values in the sliding window. This deals effectively with "outliers" that lie beyond the correct sample range and will never be chosen as the median. A straightforward implementation of the filter is expensive for large windows, particularly in two dimensions (for images).
The second filter is linear, and known as "Savitzky-Golay". It is frequently used in spectroscopy to locate the positions, peaks, and widths of spectral lines. This filter is based on a least-squares fit of the samples in the sliding window to a polynomial of relatively low degree. The filter coefficients are unlike the equiripple filter that is optimal in the maximum norm, and the "maxflat" filters that are central in wavelet constructions. Should they be better known?
We will discuss the analysis and the implementation of both filters.
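For concreteness, here is a minimal sketch comparing the two filters on synthetic data; it uses SciPy's standard medfilt and savgol_filter routines with illustrative parameters, and is not code from the talk.

# A minimal sketch, assuming synthetic noisy data with a few large outliers, comparing a
# sliding-window median filter with a Savitzky-Golay (local least-squares polynomial) filter.
import numpy as np
from scipy.signal import medfilt, savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2.0 * np.pi * 3.0 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)
noisy[::25] += 2.0                                          # inject sparse outliers

med = medfilt(noisy, kernel_size=7)                         # nonlinear: window median
sg = savgol_filter(noisy, window_length=11, polyorder=3)    # linear: cubic LS fit per window

# The median filter rejects the isolated outliers almost entirely; the linear
# Savitzky-Golay filter smears them, but tracks smooth peaks with little distortion.
print("max error, median filter  :", float(np.abs(med - clean).max()))
print("max error, Savitzky-Golay :", float(np.abs(sg - clean).max()))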
Guessing Secrets
Ron Graham
University of California, San Diego
Date: May 24, 2002
Location: UBC
Abstract
We will describe a variant of the familiar "20 questions" problem in which one tries to discover the identity of some unknown "secret" by asking binary questions. In this variation, there is now a set of two (or more) secrets. For each question asked, an adversary gets to choose which of the secrets to use in supplying the answer, which in any case must be truthful. We will discuss a number of algorithms for dealing with this problem, although we are still far from a complete understanding of the situation. Problems of this type have recently arisen in connection with certain Internet traffic routing applications, although it turns out that such problems have in fact occurred in the literature at least 40 years ago.
Fingerprint Matching
Anil K. Jain
Michigan State University
Date: May 25, 2002
Location: UBC
Abstract
Associating an identity with an individual is called personal identification. Biometric identification refers to identifying an individual based on her distinguishing physiological and/or behavioral characteristics. Because of the need to enhance security (e.g., airports) and reduce fraud (e.g., credit cards), there is a growing interest in developing highly accurate biometric authentication systems. Among the various characteristics (e.g., fingerprints, face, iris, voice) that have been proposed for biometric identification, fingerprint-based systems are the most widely used. In spite of over 50 years of research and development, fingerprint matching and classification continue to be challenging research problems. This talk will present methods for fingerprint representation, matching and classification. Current research related to multimodal biometrics (e.g., combination of fingerprints and faces), combination of multiple matchers, fingerprint individuality and digital watermarking of fingerprints will also be presented.
PIMS String Theory Seminars
D-particles with multipole moments of higher dimensional branes
Mark van Raamsdonk
Stanford University
Date: November 28, 2000
Location: UBC
Abstract
N/A
PIMS-MITACS Seminars on Computational Statistics and Data Mining
A Simple Model for a Complex System: Predicting Travel Times on Freeways
John A. Rice
UC, Berkeley
Date: April 26, 2001
Location: UBC
Abstract
A group of researchers from the Departments of EECS, Statistics, and the Institute for Transportation Research at UC Berkeley has been collecting and studying data on traffic flow on freeways in California. I will describe the sources of data and give an overview of the problems being addressed. I will go into some detail on a particular problem: forecasting travel times over a network of freeways. Although the underlying system is very complex and tempting to model, a simple model is surprisingly effective at forecasting.
Robust Factor Model Fitting and Visualization of Stock Market Returns
R. Douglas Martin
University of Washington
Date: January 25, 2001
Location: UBC
Abstract
Modeling stock returns and calculating portfolio risk is almost invariably accomplished by fitting a linear model, called a "factor" model in the finance community, using the sanctified method of ordinary least squares (OLS). However, it is well known that stock returns are often non-Gaussian by virtue of containing outliers, and that OLS estimates are not robust toward outliers. Modern robust regression methods are now available that are not unduly influenced by outliers. We illustrate this with a factor model fit for stock returns, using firm size and book-to-market as the factors, where we show that OLS gives a misleading result. Then we show how Trellis graphics displays can be used to obtain quick, penetrating visualization of stock returns factor model data, and to obtain convenient comparisons of OLS and robust factor model fits. Last but not least, we point out that robust factor model fits and Trellis graphics displays are in effect powerful "data mining tools" for better understanding of financial data. Our examples are constructed using a new S-PLUS Robust Methods library and S-PLUS Trellis graphics displays.
PIMS-MITACS Mathematical finance seminar
Levy Processes in Financial Modeling
Dilip Madan
University of Maryland
Date: March 9, 2001
Location: UBC
Abstract
We investigate the relative importance of diffusion and jumps in a new jump diffusion model for asset returns. In contrast to the standard modelling of jumps for asset returns, the jump component of our process can display finite or infinite activity, and finite or infinite variation. Empirical investigations of time series indicate that index dynamics are essentially devoid of a diffusion component, while this component may be present in the dynamics of individual stocks. This result leads to the conjecture that the risk-neutral process should be free of a diffusion component for both indices and individual stocks. Empirical investigation of options data tends to confirm this conjecture. We conclude that the statistical and risk-neutral processes for indices and stocks tend to be pure jump processes of infinite activity and finite variation.
Distinguished Lectures
Arithmetic progressions of primes.
Ben Green
University of British Columbia
Date: April 27, 2004
Location: UBC
Abstract
N/A
Mahler measure, factors of Markov shifts and symbolic representations of group automorphisms
Klaus Schmidt
University of Vienna
Date: May 8, 2003
Location: UBC
Abstract
The topic of this lecture is a construction of symbolic representations of hyperbolic toral automorphisms originally due to Vershik (at least in a special case) and later developed further by Vershik, Sidorov, Kenyon, Einsiedler, Lind and myself. This construction and its extension to the nonhyperbolic setting (and, more generally, to certain not necessarily expansive $Z^d$-actions by commuting group automorphisms) leads to interesting connections between Mahler measures of polynomials, beta-expansions and a result by Marcus, Petersen and Williams on factor maps of Markov shifts, and to many open problems.
Skein theory in knot theory and beyond
Vaughan F.R. Jones
University of California, Berkeley
Date: November 4, 2002
Location: UBC
Abstract
John Conway invented skein theory as a tool for manipulating knots and links and in particular calculating the Alexander polynomial. It was extended somewhat in the 1980's when new knot polynomials were discovered. More recently it has proved useful in the theory of subfactors as a tool to construct and analyse certain exotic structures. These structures remain related to low dimensional topology as they have topological quantum field theories attached. The unifying language is that of planar algebra.
Systems of Nonlinear PDEs arising in economic theory
Dr. Ivar Ekeland
Université Paris-Dauphine
Date: March 22, 2002
Location: UBC
Abstract
Testing the foundations of microeconomic theory leads us into a mathematical analysis of systems of nonlinear PDEs. Some of these can be solved in a C^\infty framework by using the classical Darboux theorem and its recent extensions; others require analyticity and more refined tools, such as the Cartan-Kähler theorem. Care will be taken to explain the economic framework and the tools of differential geometry.
Odd embeddings on lens spaces
David Gillman
UCLA
Date: May 31, 2001
Location: UBC
Abstract
N/A
Colliding Black Holes and Gravity Waves: A New Computational Challenge
Douglas N. Arnold
Institute for Mathematics and its Applications
Date: May 16, 2001
Location: UBC
Abstract
An ineluctable, though subtle, consequence of Einstein's theory of general relativity is that relatively accelerating masses generate tiny ripples on the curved surface of spacetime which propagate through the universe at the speed of light. Although such gravity waves have not yet been detected, it is believed that technology is reaching the point where detection is possible, and a massive effort to construct a worldwide network of interferometric gravity wave observatories is well underway. They promise to be our first window to the universe outside the electromagnetic spectrum and so, to astrophysicists and others trying to fathom a universe composed primarily of electromagnetically dark matter, the potential payoff is enormous.
If gravitational wave detectors are to succeed as observatories, we must learn to interpret the wave forms which are detected. This requires the numerical simulation of the violent cosmic events, such as black hole collisions, which are the most likely sources of detectable radiation, via the numerical solution of the Einstein field equations. The Einstein equations form a system of ten second order nonlinear partial differential equations in four-dimensional spacetime which, while having a very elegant and fundamental geometric character, are extremely complex. Their numerical solution presents an enormous computational challenge which will require the application of state-of-the-art numerical methods from other areas of computational physics together with new ideas. This talk aims to introduce some of the scientific, mathematical, and computational problems involved in the burgeoning field of numerical relativity, discuss some recent progress, and suggest directions of future research.
Chow Forms and Resultants - old and new
David Eisenbud
Mathematical Science Research Institute (Berkeley)
Date: April 12, 2001
Location: UBC
Abstract
N/A
Variational Principles, Groups and Hydrodynamics
Tudor S. Ratiu
École Polytechnique Fédérale de Lausanne
Date: January 12, 2001
Location: UBC
Abstract
In this lecture Prof. Ratiu will present Hamiltonian systems whose configuration space is a Lie group. The systems discussed are all geodesic flows of certain metrics with a number of common characteristics. He will begin with the classical example introduced by Euler: a free rigid body whose configuration space is a proper rotation group. Then he will discuss a homogeneous incompressible fluid flow whose configuration space is a group of volume preserving diffeomorphisms of a smooth manifold. The Arnold-Ebin-Marsden program for the analysis of the equations of motion will be also presented. Another example is the Korteweg-de Vries equation whose configuration space is the Bott-Virasoro group. Generalization of this is the Camassa-Holm equation, and Prof. Ratiu will discuss its possible choices of configuration spaces. The averaged incompressible Euler flow and recent results about it will be presented. Finally, Prof. Ratiu will show that the Euler-Poincaré equation could be an abstract tool that easily enables one to recognize such systems.
The Mandelbrot Set, the Farey Tree, and the Fibonacci Sequence
Robert L. Devaney
Boston University
Date: October 20, 2001
Location: UBC
Abstract
In this lecture several folk theorems concerning the Mandelbrot set will be described. It will be shown how one can determine the dynamics of the corresponding quadratic maps by visualizing tiny regions in the Mandelbrot set, as well as how the size and location of the bulbs in the Mandelbrot set are governed by Farey arithmetic.
Idempotents in Group Algebras, Traces, and Geometry of Groups
Beno Eckmann
ETH, Zürich
Date: September 21, 2000
Location: UBC
Abstract
An idempotent in a ring R is an element with $a^2 = a$. If R has no zero-divisors then a = 0 or a = 1. According to the well-known "Idempotent Conjecture for Group Algebras", the same should be true if R is the group algebra CG of a torsion-free group G.
The lecture is about some aspects of that conjecture (and of the analogue for idempotent matrices). There are results for various classes of groups, mainly appearing in geometry or topology. One uses suitable trace concepts, and methods such as the Kaplansky Theorem, the Bass Conjecture, and the Gromov type geometry of groups.
Projections, Group Algebras, and Geometry of Groups
Beno Eckmann
ETH, Zürich
Date: September 14, 2000
Location: UBC
Abstract
N/A
The Euler Characteristic - Some Variations and Ramifications
Beno Eckmann
ETH, Zürich
Date: September 13, 2000
Location: UBC
Abstract
This is a survey of some developments around the classical Euler characteristic of a finite cell complex: the Euler-Poincaré and Atiyah formulas, with applications to groups, to four-manifolds and complex surfaces, and to negatively curved Riemannian manifolds.
Infinite Systems Of Linear Equations
Israel Gohberg
Tel Aviv University
Date: September 30, 1999
Location: UBC
Abstract
Infinite systems of linear equations are usually solved by the method of finite sections. This means that the infinite system is replaced by the sequence of finite sections of the original system and it is expected that the solutions of the finite systems converge to the solution of the infinite system. This method has a rich history of 150 years and many distinguished mathematicians have made important contributions in this area. The talk will present the early history and recent important achievements. Unexpected examples and computational experiments will motivate and illustrate the main results. Special attention will be paid to the case of Toeplitz matrices with continuous and discontinuous symbols. The talk is planned for a wide audience.
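As a small illustration of the finite-section method described above (my own sketch, with an arbitrarily chosen tridiagonal Toeplitz system, not an example from the talk):

# A minimal sketch, assuming the infinite tridiagonal Toeplitz system with 4 on the
# diagonal and 1 on the off-diagonals (symbol 4 + 2*cos t, bounded away from zero),
# truncated to its n x n finite sections with right-hand side e_0.
import numpy as np

def finite_section_first_entry(n):
    A = 4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # n x n section
    b = np.zeros(n)
    b[0] = 1.0
    return np.linalg.solve(A, b)[0]

for n in (2, 4, 8, 16, 32):
    print(n, finite_section_first_entry(n))
# The first entries settle down rapidly, illustrating convergence of the
# finite-section solutions to the solution of the infinite system.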
Geometric Unfolding of a Difference Equation
Sir Christopher Zeeman
Date: March 21, 2000
Location: UBC
Abstract
The generalized Lyness equation
\begin{displaymath}x_{n+1}= \frac{\alpha+x_n}{x_{n-1}}, \qquad \alpha > 0\end{displaymath}
determines a sequence $x_1, x_2, x_3, \dots$ of positive real numbers, where $x_1, x_2$ are given positive initial terms. The aim is to study these sequences. The unfolding of the equation is the diffeomorphism of the positive quadrant of $R^2$ given by
\begin{displaymath}f(x,y)=\left(y, \frac{\alpha+y}{x}\right).\end{displaymath}
The orbits of f then give the desired sequence by projecting onto the x axis. The orbits are studied using dynamical systems and algebraic geometry.
Following Lyness, Ladas showed that f leaves invariant a family of nested closed curves filling the positive quadrant. We show that on each curve f is conjugate to a rigid rotation, and so the orbits on that curve are either all periodic or all dense in that curve. Lyness showed that if $\alpha = 1$ then all the orbits are 5-periodic. We show that if $\alpha \neq 1$ then f is a twist map, with rotation numbers tending to 1/5 as the curves tend to infinity. Beukers and Cushman have shown that the rotation numbers are monotonic increasing if $\alpha < 1$, and decreasing if $\alpha > 1$. Using number theory we classify the periods of periodic orbits.
The extension to higher dimensions leads to further theorems and conjectures.
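A quick numerical check of the 5-periodicity claim for $\alpha = 1$ (my own sketch, not material from the lecture):

# A minimal sketch: iterate the Lyness recurrence x_{n+1} = (alpha + x_n) / x_{n-1}
# and observe that for alpha = 1 every positive orbit repeats with period 5.
def lyness_orbit(x1, x2, alpha=1.0, n=10):
    xs = [x1, x2]
    for _ in range(n):
        xs.append((alpha + xs[-1]) / xs[-2])
    return xs

print(lyness_orbit(2.0, 3.0, alpha=1.0, n=8))
# -> [2.0, 3.0, 2.0, 1.0, 1.0, 2.0, 3.0, 2.0, 1.0, 1.0]  (period 5)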
The Design of Molecular Bar Codes: A Combinatorial Problem from Molecular Biology
Richard M. Karp
University of Washington
Date: May 13, 1999
Location: UBC
Abstract
Define the weight of a word over the alphabet {A, C, T, G} as the number of A's and T's in the word plus twice the number of C's and G's. Let a and b be positive integers with a < b. Define an (a, b)-code as a set of words of weight greater than or equal to b (called code words), such that no word of weight greater than or equal to a occurs as a (consecutive) substring of two different code words or more than once as a substring of the same code word. We show how to construct (a, b)-codes of near-maximum cardinality.
The problem arises in molecular biology. The Watson-Crick complement of a molecule is the molecule obtained by replacing each nucleotide by its complement, where A and T are complementary and C and G are complementary. Complementary or near-complementary sequences tend to hybridize (stick together). For various molecular recognition tasks we seek a set of molecules such that each molecule in the set hybridizes strongly to its own complement, but does not hybridize strongly to the complement of any other molecule in the set. One definition of "strength of hybridization" leads to the problem stated above. Other definitions lead to alternate formulations that we also discuss.
This is joint work with Amir Ben-Dor, Benno Schwikowski and Zohar Yakhini.
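To make the definitions above concrete, here is a small checker (my own sketch with hypothetical helper names, not part of the talk) that tests whether a set of words forms an (a, b)-code:

# A minimal sketch: the weight of a word over {A, C, T, G} counts A and T once and
# C and G twice; an (a, b)-code requires every code word to have weight >= b and
# forbids any substring of weight >= a from occurring twice (in one word or in two words).
from itertools import combinations

def weight(word):
    return sum(1 if ch in "AT" else 2 for ch in word)

def heavy_substrings(word, a):
    # all consecutive substrings of weight >= a, with their multiplicities
    subs = {}
    for i in range(len(word)):
        for j in range(i + 1, len(word) + 1):
            s = word[i:j]
            if weight(s) >= a:
                subs[s] = subs.get(s, 0) + 1
    return subs

def is_ab_code(words, a, b):
    if any(weight(w) < b for w in words):                      # every code word heavy enough
        return False
    if any(c > 1 for w in words for c in heavy_substrings(w, a).values()):
        return False                                           # no repeat within one word
    for w1, w2 in combinations(words, 2):                      # no heavy substring in two words
        if heavy_substrings(w1, a).keys() & heavy_substrings(w2, a).keys():
            return False
    return True

print(is_ab_code(["ACGT", "TTGG"], a=4, b=6))   # True for this tiny example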
Modelling, analysis and computation of crystalline microstructures
Mitchell Luskin
University of Minnesota
Date: October 29, 1998
Location: UBC
Abstract
Microstructure is a feature of crystals with multiple symmetry-related energy-minimizing states. Continuum models have been developed explaining microstructure as the mixture of these symmetry-related states on a fine scale to minimize energy. We have developed an approximation theory for crystal microstructure which gives an analysis of the stability of macroscopic variables with respect to small energy perturbations. This theory has been applied to analyze several numerical methods for the approximation of martensitic microstructure and ferromagnetic microstructure.
A Computational View of Randomness
Avi Wigderson
Hebrew University of Jerusalem
Date: April 6, 1998
Location: UBC
Abstract
The current state of knowledge in Computational Complexity Theory suggests two strong empirical "facts" (whose truth constitutes the two major open problems of this field).
1. Some natural computational tasks are infeasible (e.g. it seems so for computing the functions PERMANENT, FACTORING, CLIQUE, SATISFIABILITY ...)
2. Probabilistic algorithms can be much more efficient than deterministic ones (e.g. it seems so for PRIMALITY, VERIFYING IDENTITIES, APPROXIMATING VOLUMES ...).
As it does with other notions (e.g. knowledge, proof..), Complexity Theory attempts to understand the notion of randomness from a computational standpoint. One major achievement of this study is the following (surprising?) relation between these two "facts" above:
THEOREM: (1) contradicts (2). In words: if ANY "natural" problem is "infeasible", then EVERY probabilistic algorithm can be "efficiently" "derandomized".
I plan to explain the sequence of important ideas, definitions, and techniques developed in the last 20 years that enable a formal statement and proof of such theorems. Many of them, such as the formal notions of "pseudo-random generator" and "computational indistinguishability", are of fundamental interest beyond derandomization; they have far-reaching implications for our ability to build efficient cryptographic systems, as well as for our inability to efficiently learn natural concepts and effectively prove natural mathematical conjectures (such as (1) above).
Thematic Programme on Inverse Problems and Applications: Workshop on Inverse Problems & Medical Imaging
Reconstructing the Location and Magnitude of Refractive Index Discontinuities from Truncated Phase-Contrast Tomographic Projections
Mark Anastasio
Illinois Institute of Technology
Date: August 4, 2003
Location: UBC
Abstract
Joint work with Daxin Shi, Yin Huang, and Francesco De Carlo.
I. INTRODUCTION: In recent years, much effort has been devoted to developing imaging techniques that rely on contrast mechanisms other than absorption. Phase-contrast computed tomography (CT) is one such technique that exploits differences in the real part of the refractive index distribution of an object to form an image using a spatially coherent light source. Of particular interest is the ability of phase-contrast CT to produce useful images of objects that have very similar or identical absorption properties. In applications such as microtomography, it is imperative to reconstruct an image with high resolution. Experimentally, the demand for increased resolution can be met by highly collimating the incident light beam and using a microscope optic to focus the transmitted image, formed on a scintillator screen, onto the detector. When the object is larger than the field of view (FOV) of the imaging system, the measured phase-contrast projections are necessarily truncated and one is faced with the so-called local CT reconstruction problem. To circumvent the non-local nature of conventional CT, local CT algorithms have been developed that aim to reconstruct a filtered image containing detailed information regarding the location of discontinuities in the imaged object. Such information is sufficient for determining the structural composition of an object, which is the primary task in many biological and materials science imaging applications.
II. METHODS. A. Theory of Local Phase-Contrast Tomography: We have recently demonstrated that the mathematical theory of local CT, which was originally developed for absorption CT, can be applied naturally to the problem of reconstructing the location of image boundaries (i.e., discontinuities) from truncated phase-contrast projections. Our analysis suggested the use of a simple backprojection-only algorithm for reconstructing object discontinuities from truncated phase-contrast projection data that is simpler and more theoretically appropriate than use of the FBP algorithm or of the exact reconstruction algorithm for phase-contrast CT that was recently proposed by Bronnikov [1]. We demonstrated that this simple backprojection-only procedure represents an effective local reconstruction algorithm for phase-contrast CT because the filtering operation that needs to be explicitly applied to the truncated projection data in conventional absorption CT is implicitly applied to the phase-contrast projection data (before they are measured) by the act of paraxial wavefield propagation in the near-field. In this talk, we review the application of local CT reconstruction theory to the phase-contrast imaging problem. Using concepts from microlocal analysis, we describe the features of an object that can be reliably reconstructed from incomplete phase-contrast projection data. In many applications, the magnitude of the refractive index jump across an interface may provide useful information about the object of interest. For the first time, we demonstrate that detailed information regarding the magnitude of refractive index discontinuities can be extracted from the phase-contrast projections. Moreover, we show that these magnitudes can be reliably reconstructed using adaptations of algorithms that were originally developed for local absorption CT.
B. Numerical Results: We will present extensive numerical results to corroborate our theoretical assertions. Both simulation data and experimental coherent X-ray projection data acquired at the Advanced Photon Source (APS) at Argonne National Laboratory will be utilized. We will compare the ability of the available approximate and exact reconstruction algorithms to provide images that contain accurate information regarding the location and magnitude of refractive index discontinuities. The stability of the algorithms to data noise and inconsistencies will be reported. In Fig. 1, we show some examples of phase-contrast images reconstructed from noiseless simulation data.
III. SUMMARY: In this talk, we address the important problem of reconstructing the location and magnitude of refractive index discontinuities in phase-contrast tomography. We theoretically investigate existing and novel reconstruction algorithms for reconstructing such information from truncated phase-contrast tomographic projections and numerically corroborate our findings using simulation and experimental data.
IV. REFERENCES: [1] A. Bronnikov, "Theory of quantitative phase-contrast computed tomography," Journal of the Optical Society of America A, vol. 19, pp. 472-480, 2002.
Reconstruction Methods in Optical Tomography and Applications to Brain Imaging
Simon Arridge
University College London
Date: August 7, 2003
Location: UBC
Abstract
In the first part of this talk I will discuss methods for reconstruction of spatially-varying optical absorption and scattering images from measurements of transmitted light through highly scattering media. The problem is posed in terms of non-linear optimisation, based on a forward model of diffusive light propagation, and the principal method is linearisation using the adjoint field method. In the second part I will discuss the particular difficulties involved in imaging the brain. These include: accounting for non- or weakly-scattering regions that do not satisfy the diffusion approximation (the void problem); accounting for anisotropic scattering regions; constructing realistic 3D models of the head shape; and dynamic imaging incorporating temporal regularisation.
Computational Anatomy: Quantifying Shape and Size of Anatomical Objects via Metrics on Flows of Diffeomorphisms
Mirza Faisal Beg
Johns Hopkins University
Date: August 7, 2003
Location: UBC
Abstract
Development of mathematical models and computational tools that can quantify the shape and size of anatomical structures in normal and diseased states and during growth and aging is an important and challenging task in advancing the field of medical image understanding. I will present an overview of the Metric Pattern Theory of Miller, Trouve and Younes, in which anatomical objects, modelled by images or by landmarked representations of these images, form an orbit of a template object under the group of diffeomorphisms. Metric distances between objects (images or landmarked representations of images) come from the geodesic distance between points on the manifold of diffeomorphisms that register these objects, and are calculated by the variational optimization of a cost associated to a path of deformation on the manifold of diffeomorphisms. I will present the Euler-Lagrange equations to estimate diffeomorphisms for registering images and sets of landmarks, and issues in numerical and parallel implementation of these equations. I will also present applications in quantifying shape and size changes in neurodegenerative diseases such as Huntington's, Alzheimer's and schizophrenia.
Fast Hierarchical Algorithms for Tomography
Yoram Bresler
University of Illinois at Urbana-Champaign
Date: August 8, 2003
Location: UBC
Abstract
The reconstruction problem in practical tomographic imaging systems is recovery from samples of either the x-ray transform (set of the line-integral projections) or the Radon transform (set of integrals on hyperplanes) of an unknown object density distribution. The method of choice for tomographic reconstruction is filtered backprojection (FBP), which uses a backprojection step. This step is the computational bottleneck in the technique, with computational requirements of O(N^3) for an NxN pixel image in two dimensions, and at least O(N^4) for an NxNxN voxel image in three dimensions. We present a family of fast hierarchical tomographic backprojection algorithms, which reduce the complexities to O(N^2 log N) and O(N^3 log N), respectively. These algorithms employ a divide-and-conquer strategy in the image domain, and rely on properties of the harmonic decomposition of the Radon transform. For image sizes typical in medical applications or airport baggage security, this results in speedups by a factor of 50 or greater. Such speedups are critical for next-generation real-time imaging systems.
How Medical Science will Benefit from Mathematics of the Inverse Problem
Thomas F. Budinger
Lawrence Berkeley National Laboratory and University of California Berkeley and San Francisco
Date: August 4, 2003
Location: UBC
Abstract
Selection of in-vivo imaging modalities (i.e x-ray, MRI, PET, SPECT, light absorption, fluorescence and luminescence, current source and electrical potential) can be logically approached by evaluating biological parameters relative to the biomedical objective (e.g. cardiac apoptosis vs cardiac stem cell trafficking and vs plaque composition vs plaque surface chemistry). For that evaluation, contrast resolution, of highest importance for modality selection in most cases, is defined as the signal to background for the desired biochemical or physiological parameter. But a particular modality which has exquisite biological potential (e.g. MRI and SPECT for atherosclerosis characterization) might not be deployed in medical science because appropriate algorithms are not available to deal with problems of blurring, variable point spread function, background scatter, detection sensitivity, attenuation and refraction. Trade-offs in technique selection frequently pit contrast resolution against intrinsic instrument resolution (temporal and spatial) and depth or size of the object. For example, imaging vulnerable carotid plaques using a molecular beacon with 5:1 signal to background and with 7 mm resolution in the human neck can be argued as superior to imaging tissue characteristics with 1:3:1 signal to background at 0.5 mm resolution with MRI. Another example is the use of the multidetector CT (helical) due to its relative speed instead of MRI to characterize coronary plaques even though MRI has much better intrinsic contrast mechanisms. The superior speed of modern CT argues for its preferred use. Some old examples of how mathematics of the inverse problem have enabled medical science advances include incorporation of attenuation compensation in SPECT imaging which brought SPECT to a quantitative technique, light transmission and fluorescence emission tomography, iterative reconstruction algorithm for all methods, and incorporation of phase encoding for MRI reconstruction. Current work on new mathematical approaches includes endeavors to improve resolution, improve sampling speed, decrease background and achieve reliable quantitation. Examples are rf exposure reduction in MRI by selective radio frequency pulses requiring low peak power, dose reduction by iterative reconstruction schemes in X-Ray CT, implementation of coded aperture models for emission tomography, 3D and time reversal ultrasound, a multitude of transmission and stimulated emission methods for light wavelength of 400nm to 3 cm, and electrical potential and electric source imaging. Many of these subjects will be discussed at this workshop and all rely on innovations in mathematics applied to the inverse problem.
Tensor and Emission Tomography Problems on the Plane
Alexander Bukhgeim
University of Vienna
Date: August 7, 2003
Location: UBC
Abstract
In this paper we consider two problems of 2D computerized tomography. Firstly, we study the fan-beam Radon transform of symmetric solenoidal tensor fields of arbitrary rank in a unit disc. The inversion formula for a fan-beam transform follows from its singular value decomposition (SVD). For that we first build polynomial solenoidal orthonormal basis tensor fields with the use of Zernike polynomials. As a result, the SVD of the fan-beam transform is a double Fourier series of trigonometric functions. This algorithm allows one to investigate the velocity distribution in a flow or the material stress distribution in metals by differential ultrasonic time-of-flight measurements. We provide numerical evaluation of the given algorithms for recovering scalar and vector fields from uniform and non-uniform discrete fan-beam projections. Then we consider the problem of two-dimensional single photon emission computerized tomography (SPECT) in the case of a fan-beam geometry. The problem of recovering the emission map from SPECT data arises in different industrial and medical applications. This problem is mathematically investigated using the attenuated Radon transform, mostly within the framework of either a parallel-beam geometry or a fan-beam geometry. The first explicit inversion formula in the case of a fan-beam geometry was obtained by A. L. Bukhgeim and S. G. Kazantsev on the basis of the Cauchy formula for so-called $A$-analytic functions. Later on, other explicit inversion formulas for reconstructing the emission map in the case of a parallel-beam geometry were derived by R. G. Novikov and F. Natterer and numerically evaluated by L. A. Kunyansky. In this paper we deal with the fan-beam geometry supported on a unit disk and derive an explicit inversion formula for recovering the emission map without using, at least formally, the theory of $A$-analytic functions. We also give an implementation of the inversion formula for recovering the emission map provided that the attenuation map is known.
New Multiscale Thoughts on Limited-Angle Tomography
Emmanuel Candes
California Institute of Technology
Date: August 4, 2003
Location: UBC
Abstract
This talk is concerned with the problem of reconstructing an object from noisy limited-angle tomographic data---a problem which arises in many important medical applications. Here, a central question is to describe which features can be reconstructed accurately from such data and how well, and which features cannot be recovered.
We argue that curvelets, a recently developed multiscale system, may have a great potential in this setting. Conceptually, curvelets are multiscale elements with a useful microlocal structure which makes them especially adapted to limited-angle tomography. We develop a theory of optimal rates of convergence which quantifies that features which are microlocally in the "good" direction can be recovered accurately and which shows that adapted curvelet-biorthogonal decompositions with thresholding can achieve quantitatively optimal rates of convergence. We hope to report on early numerical results.
Computed Imaging for Near-Field Microscopy
P. Scott Carney
University of Illinois at Urbana-Champaign
Date: August 7, 2003
Location: UBC
Abstract
Near-field optics provides a means to observe the electromagnetic field intensity in close proximity to a scattering or radiating sample. Modalities such as near-field scanning optical microscopy (NSOM) and photon scanning tunneling microscopy (PSTM) accomplish these measurements by placing a small probe close to the object (in the "near-zone") and then precision controlling its position. The data are usually plotted as a function of probe position and the resulting figure is called an image. These modalities provide a means to circumvent the classical Rayleigh-Abbe resolution limits, providing resolution on scales of a small fraction of a wavelength.
There are a number of problems associated with the interpretation of near-field images. If the probe is slightly displaced from the surface of the object, the image quality degrades dramatically. If the sample is thick, the subsurface features are obscured. The quantitative connection between the measurements and the optical properties of the sample is unknown. To resolve all these problems it is desirable to solve the inverse scattering problem (ISP) for near-field optics. The solution of the ISP provides a means to tomographically image thick samples and assign quantitative meaning to the images. Furthermore, data taken at distances up to one wavelength from the sample may be processed to obtain a focused, or reconstructed image of the sample at subwavelength scales.
Inverse Problems and Nuclear Medicine
Anna Celler
UBC
Date: August 5, 2003
Location: UBC
Abstract
N/A
Preferred Pitches in Multislice Spiral CT from Periodic Sampling
Adel Faridani
Oregon State University
Date: August 4, 2003
Location: UBC
Abstract
Joint work with Larry Gratton. Applications of sampling theory in tomography include the identification of efficient sampling schemes; a qualitative understanding of some artifacts; numerical analysis of reconstruction methods; and efficient interpolation schemes for non-equidistant sampling. In this talk we present an application of periodic sampling theorems in three-dimensional multislice helical tomography, shedding light on the question of preferred pitches.
Spherical Means and Thermoacoustic Tomography
David Finch
Oregon State University
Date: August 6, 2003
Location: UBC
Abstract
In thermoacoustic tomography, impinging radiation causes local heating which generates sound waves. These are measured by transducers, and the problem is to recover the density of emitters. This may be modelled as the recovery of the initial value of the time derivative of the solution of the wave equation from knowledge of the solution on (part of) the boundary of the domain. This talk, in conjunction with the talk by Sarah Patch, will report on recent work by the author, S. Patch and Rakesh on uniqueness and stability and an inversion formula, in odd dimensions, for the special case when measurements are taken on an entire sphere surrounding the object. The well-known relation between spherical means and solutions of the wave equation then implies results on recovery of a function from its spherical means.
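Stated as an equation (my reading of the model described above, so the precise formulation is an assumption), the problem is to find the initial data f in
\begin{displaymath}u_{tt}-\Delta u = 0, \qquad u(x,0)=0, \qquad u_t(x,0)=f(x),\end{displaymath}
given the values of $u$ on (part of) the boundary of the domain for positive times; when the measurements are taken on an entire sphere surrounding the object, this amounts to recovering $f$ from its spherical means.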
Transient Elastography and Supersonic Shear Imaging
Mathias Fink
University of British Columbia
Date: August 6, 2003
Location: UBC
Abstract
Palpation is a standard medical practice which relies on qualitative estimation of the tissue Young's modulus E. In soft tissues the Young's modulus is directly proportional to the shear modulus µ (E = 3µ). This explains the great interest in developing quantitative imaging of the shear modulus distribution map. This can be achieved by observing, with NMR or with ultrasound, the propagation of low frequency shear waves (between 50 Hz and 500 Hz) in the body. The speed of these waves is relatively low (between 1 and 10 m/s), and they can be produced either by vibrators coupled to the body or by ultrasonic radiation pressure. We have developed an ultra high-rate ultrasonic scanner that can give 10,000 ultrasonic images per second of the body. With such a high frame rate we can follow in real time the propagation of transient shear waves, and from the spatio-temporal evolution of the displacement fields we can use an inversion algorithm to recover the shear modulus map. New inversion algorithms can be used that are no longer limited by diffraction. In order to obtain an unbiased shear elasticity map, different configurations of shear sources induced by the radiation pressure of focused transducer arrays are used. A very interesting configuration that induces quasi-plane shear waves will be described. It uses a sonic shear source that moves at supersonic velocity, created by a very particular beamforming in the transmit mode. In vitro and in vivo results will be presented that demonstrate the interest of this new transient elastographic technique.
Effects of Target Non-localization on the Contrast of Optical Images: Lessons for Inverse Reconstruction
Amir Gandjbakhche
NIH
Date: August 7, 2003
Location: UBC
Abstract
N/A
Multivariate Inverse Problems
Fred Greensite
University of California, Irvine
Date: August 5, 2003
Location: UBC
Abstract
N/A
Characterizing inclusions using a linear sampling method based on the complete electrode model of impedance tomography
Nuutti Hyvönen
Helsinki University of Technology
Date: August 5, 2003
Location: UBC
Abstract
In electrical impedance tomography one tries to recover the conductivity distribution inside a body from boundary measurements; in real life the obtainable data is a linear operator mapping electrode currents onto electrode potentials. I start this presentation by pointing out that in the framework of the complete electrode model, the forward problem corresponding to this finite-dimensional boundary operator can be viewed as a Galerkin approximation to the continuum model forward problem. Using this information, a special case of constant background conductivity with inhomogeneities is considered: It will be demonstrated how inclusions with strictly higher or lower conductivities can be characterized by the limit behaviour of the range of a boundary operator, which can be obtained through electrode measurements, when the electrodes get infinitely small and cover all of the object boundary. It will also be indicated how this result can be turned into a numerical algorithm, and simple two-dimensional reconstructions will be presented.
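For reference, the complete electrode model referred to above is the standard electrode model of EIT, stated here for context rather than as a formula from the talk (conductivity σ, contact impedances z_l, electrodes e_l, applied currents I_l, measured potentials U_l):
$$ \nabla\cdot(\sigma\nabla u)=0 \ \text{in}\ \Omega, \qquad u+z_l\,\sigma\frac{\partial u}{\partial\nu}=U_l \ \text{on}\ e_l, \qquad \int_{e_l}\sigma\frac{\partial u}{\partial\nu}\,dS=I_l, \qquad \sigma\frac{\partial u}{\partial\nu}=0 \ \text{off}\ \textstyle\bigcup_l e_l. $$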
Progress and Problems in Electrical Impedance Imaging
David Isaacson
RPI
Date: August 5, 2003
Location: UBC
Abstract
Electrical impedance imaging systems apply patterns of currents through electrodes on a portion of the surface of a body. They measure and record the resulting voltages. From these electrical data they reconstruct and display approximations to the electrical conductivity and permittivity inside the body. Since the conductivity of the lungs varies with changing volumes of blood and air, EIT images may be used to monitor ventilation and perfusion as well as other aspects of heart and lung function. Since the conductivity of many tumors is significantly larger than that of surrounding normal tissue, EIT may be used to help screen for and diagnose cancer as well as monitor therapy. Mathematical problems arising in the design of electrical impedance imaging systems will be presented, along with images and movies of heart and lung function made with the RPI ACT3 system.
A general inversion formula for cone beam CT
Alexander Katsevich
University of Central Florida
Date: August 4, 2003
Location: UBC
Abstract
Given a rather general weight function n, we derive a new cone beam transform inversion formula. The derivation is explicitly based on Grangeat's formula and the classical 3D Radon transform inversion. The new formula is theoretically exact and is represented by a two-dimensional integral. We show that if the source trajectory C is complete (and satisfies two other very mild assumptions), then substituting the simplest uniform weight n gives a convolution-based filtered back-projection algorithm. However, this easy choice is not always optimal from the point of view of practical applications. Uniform weight works well for closed trajectories, but the resulting algorithm does not solve the long object problem if C is not closed. In the latter case one has to use the flexibility in choosing n and find the weight that gives an inversion formula with the desired properties. We show how this can be done for spiral CT. It turns out that the two inversion algorithms for spiral CT proposed earlier by the author are particular cases of the new formula. For general trajectories the choice of weight should be done on a case by case basis.
An Inverse Problem for Multi-parameter Compton-Scatter 3D Imaging of the Human Head
Faysal El Khettabi
University of New Brunswick
Date: August 7, 2003
Location: UBC
Abstract
Most medical imaging today is done using transmitted photons (x-rays), either in the form of radiography or tomography. An alternative approach is to employ scattered photons, in order to take advantage of the fact that Compton (incoherent) scattering is principally sensitive to the number of electrons per unit volume, ρ_e, a quantity of vital importance in radiotherapy planning. With Compton scatter imaging (CSI), it is also possible to reconstruct images of the attenuation coefficients at the incident and scattered energies (μ and μ′, respectively), along with images of the electron density [1]. Therefore, CSI offers indications identical to those of dual-energy tomography systems, enabling the extraction of the electron density information which is helpful in tumor studies. This work examines the applicability of a CSI system recently developed for the imaging of passenger luggage [1, 2, 3] to the imaging of the human head. The head is considered here since the developed system requires measuring scattered radiation at two directions orthogonal to an incident beam that scans one side of the object. This paper presents the inverse problem of reconstructing three images (μ, μ′ and ρ_e) of the head, using three sets of scattering measurements along with one set of transmission measurements. The mathematical formulation of this nonlinear inverse problem is introduced. The problem is numerically cast into a set of coupled inverse problems. One is a linear inverse problem that involves reconstructing 3D images of μ′, using the ratio between pairs of scattering measurements to eliminate the effect of μ and ρ_e. A regularization technique is then applied to ensure a bounded solution and avoid excessive error propagation. The second inverse problem is solved by an iterative likelihood learning algorithm, to simultaneously reconstruct the μ and ρ_e images. This learning algorithm searches for an image that maximizes the likelihood of reconstructing an image that best matches the measurements. The numerical characteristics of these image reconstruction methods are investigated, and their viability is demonstrated using data obtained from independent Monte Carlo simulations performed with the MCNP-4C code [4].
References
[1] El Khettabi F and Hussein E M A, An inverse problem for three-dimensional X-ray scatter/transmission imaging, Inverse Problems, 19, 477-495, 2003.
[2] El Khettabi F, Hussein E M A and Jama H A, A Non-Rotating Multi-Parameter 3-D X-ray Imaging System, Part I: Modeling and Simulation, IEEE Transactions on Nuclear Science, submitted for publication, 2003.
[3] Jama H A, Hussein E M A and El Khettabi F, A Non-Rotating Multi-Parameter 3-D X-ray Imaging System, Part II: Design Aspects, IEEE Transactions on Nuclear Science, submitted for publication, 2003.
[4] Briesmeister J F, MCNP - A General Monte Carlo N-Particle Transport Code, Los Alamos National Laboratory report LA-13709-M, 1997.
The Green's Function for the Radiative Transport Equation
Arnold Kim
Stanford University
Date: August 7, 2003
Location: UBC
Abstract
N/A
Reconstruction of conductivities in the plane
Kim Knudsen
Aalborg University
Date: August 5, 2003
Location: UBC
Abstract
Joint work with Jennifer Mueller, Samuli Siltanen and Alex Tamasan. In this talk I will consider the mathematical problem behind Electrical Impedance Tomography, the inverse conductivity problem. The problem is to reconstruct an isotropic conductivity distribution in a body from knowledge of the voltage-to-current map at the boundary of the body. I will discuss the two-dimensional problem and give a reconstruction algorithm, which is direct and mathematically exact. The method is based on the so-called dbar-method of inverse scattering. Both theoretical validation of the algorithm and numerical examples will be given.
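As background, the core objects of the dbar method can be summarized as follows (a standard description, not a formula quoted from the talk): from the voltage-to-current map one computes a scattering transform t(k), and the associated complex geometrical optics solutions m(x,k) satisfy the d-bar equation
$$ \frac{\partial m}{\partial\bar k}(x,k)=\frac{t(k)}{4\pi\bar k}\,e_{-x}(k)\,\overline{m(x,k)}, \qquad e_x(k)=\exp\bigl(i(kx+\bar k\bar x)\bigr), $$
after which the conductivity is recovered from $\sigma(x)=m(x,0)^2$.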
TBA
Steve Kusiak
University of Washington
Date: August 6, 2003
Location: UBC
Abstract
The inverse problem of reconstructing the support of an embedded scatterer within a non-uniform background with a single probing incident wave of a single frequency can be thought of as an inverse source problem for the Helmholtz equation. Specifically, far away from the source we know the value, i.e. amplitude and phase, of a scattered wave generated by the interaction of a single fixed-frequency incident wave and a scattering inhomogeneity situated in a known background, e.g. the human brain. The inverse source problem is then to determine the size, shape and location of the inhomogeneity from the far-field measurements of the scattered wave. We present results which establish the existence of convex lower bounds -- called the convex scattering support -- of the convex hull of the support of the scattering inhomogeneity. Additionally, we provide a viable reconstruction algorithm which can compute the convex scattering support from discrete measurements taken on the sphere at infinity.
Inverse scattering problem with a random potential
Matti Lassas
Rolf Nevanlinna Institute
Date: August 6, 2003
Location: UBC
Abstract
In this talk we consider scattering from random media and the corresponding inverse problem. As a stereotype of inverse scattering problems, we consider the Schrödinger equation $$ (\Delta+q+k^2)u(x,y,k)=\delta_y $$ with a random potential $q(x)$. We also briefly discuss the relation of this problem to medical imaging. The potential $q(x)$ is assumed to be a Gaussian random function whose covariance function $E(q(x)q(y))$ is smooth outside the diagonal. We show how the realizations of the amplitude of the scattered field $|u_s(x,y,k)|$, averaged over the frequency parameter $k>1$, can be used to determine stochastic properties of $q$, in particular the principal symbol of its covariance operator. This corresponds to finding the correlation length function of the random medium. In contrast to the applied literature, we approach the problem with methods that do not require approximations. From a technical point of view, we analyze the scattering from the random potential by combining methods of harmonic and microlocal analysis with stochastics, in particular the theory of ergodic processes.
Predicting Epileptic Seizures From Intracranial EEG
Brian Litt
University of Pennsylvania
Date: August 5, 2003
Location: UBC
Abstract
Epilepsy, the tendency to have seizures without a clear precipitant, affects approximately 1% of the world's population, or 60 million people, worldwide. Fully 25% of those affected have seizures that cannot be controlled by any available therapy. Implantable devices designed to predict seizures from the intracranial EEG and trigger electrical stimulation to stop these events before they begin hold great promise for these treatment-resistant patients. Predicting seizures involves detecting unknown patterns in multi-channel intracranial EEG recordings and identifying periods of time during which the probability of seizure onset is increased. Early experiments suggest that these patterns vary from patient to patient, within a finite range of pattern types, and that they are constant within individuals. There is also evidence that seizure precursors, quantitative signatures of these patterns, wax and wane, build up and dissipate, multiple times prior to eventually triggering an event. The most promising quantitative methods used to predict seizures for implantable devices to date include: (1) modeling seizure generation as a non-linear system with parameters such as the principal Lyapunov exponent, correlation dimension and fractal dimension; (2) using a broad feature library, genetic search algorithm and probabilistic neural network classifier to train prediction techniques to individual patients; (3) using simpler detectors to count precursor events and trigger stimulation broadly when these are detected, without attempting to more definitively predict seizures. Major practical questions that need to be answered in this field include: (1) Can seizures be predicted prospectively? (All of the studies to date are retrospective, in well-marked data sets.) (2) Which specific electrode locations are involved in seizure generation in individual patients, and how can they be found? (Recording data sets can be generated by up to 128 intracranial electrodes, and must be reduced to a much smaller number to be practical for implantable devices.) (3) What is the effect of electrode placement on precursor detection and seizure prediction? (Some precursors are exquisitely localized; can seizures still be predicted when the electrodes are not in the exact, correct location?) (4) How, when and where should electrical stimulation be applied to arrest seizures before they begin? And (5) how should experiments be designed to measure success? Despite these fundamental questions, device development is proceeding rapidly, and human clinical trials of early responsive devices are already under way. More research must be done to make these devices safe, effective and efficient.
Multifrequency inverse obstacle scattering: the point source method and generalized filtered backprojection
Russell Luke
Simon Fraser University
Date: August 6, 2003
Location: UBC
Abstract
This work focuses on two methods for obstacle reconstruction from multifrequency far field scattering data, the first built upon the point source method proposed by Potthast for solving inverse scattering problems with single frequency data in the resonance region, and the second based on filtered backprojection techniques using the physical optics approximation for high frequency scattering. Our implementation using the point source method amounts to a generalized, nonlinear filtered backprojection algorithm, the key to which is the construction of the filter used in the backprojection operator. Numerical examples illustrate that the critical factor for reconstructions in multifrequency settings is the frequency dependence of the filter.
Interior Elastodynamics Inverse Problems: Recovery of Shear Wavespeed in Transient Elastography
Joyce McLaughlin
RPI
Date: August 6, 2003
Location: UBC
Abstract
For this new inverse problem the data is the time and space dependent interior displacement measurements of a propagating elastic wave. The medium is initially at rest with a wave initiated at the boundary or at an interior point by a broad band source. A property of the wave is that it has a propagating front. For this new problem we present well posedness results and an algorithm that recovers shear wavespeed from the time and space dependent position of the propagating wavefront. We target the application from transient elastography where images are created of the shear wavespeed in biological tissue. The goal is to create a medical diagnostic tool where abnormal tissue is identified by its abnormal shear stiffness characteristics. Included in our presentation are images of stiffness changes recovered by our algorithms and using data measured in the laboratory of Mathias Fink.
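One standard way to see why the front position determines the speed (stated here as an illustration, not necessarily the authors' exact algorithm): if T(x) denotes the arrival time of the propagating front at the point x, then T satisfies the eikonal equation
$$ |\nabla T(x)|=\frac{1}{c(x)}, \qquad\text{so}\qquad c(x)=\frac{1}{|\nabla T(x)|}, $$
and the shear wavespeed can be read off from the gradient of the measured arrival-time surface.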
Image Analysis Models in Computational Anatomy
Michael Miller
Date: August 7, 2003
Location: UBC
Abstract
N/A
On the Uniqueness and Stability in the Tomography Problems In Cases of the Refractions of Rays
Ravil Moukhometov
University of Ottawa
Date: August 6, 2003
Location: UBC
Abstract
In this paper we consider the integral geometry problem for an anisotropic medium in the case of refracted rays. Such problems may arise in the ultrasonic tomography used in medicine.
Reconstructions of Chest Phantoms by the D-Bar Method for Electrical Impedance Tomography
Jennifer Mueller
Colorado State University
Date: August 5, 2003
Location: UBC
Abstract
In this talk a direct (noniterative) reconstruction algorithm for EIT in the two-dimensional geometry is presented. The algorithm is based on the mathematical uniqueness proof by A. Nachman [Ann. of Math. 143 (1996)] for the 2-D inverse conductivity problem. Reconstructions from experimental data collected on a saline-filled tank containing agar heart and lung phantoms are presented, and the results are compared to reconstructions by the NOSER algorithm on the same data.
3D Emission Tomography via Plane Integrals
Frank Natterer
University of Münster
Date: August 8, 2003
Location: UBC
Abstract
In emission tomography one reconstructs the activity distribution of a radioactive tracer in the human body by measuring the activity outside the body using collimated detectors. Usually the collimators single out lines along which the measurements are taken. In a novel design (Solstice camera) weighted plane integrals are measured instead. By a statistical error analysis it can be shown that the Solstice concept is superior to the classical line scan for high resolution, making Solstice attractive for small animal imaging. By a suitable approximation of the weight function we can reduce the reconstruction problem to Marr's two stage algorithm for the 3D Radon transform, leading to an efficient algorithm. In order to account for attenuation we approximate the 3D problem by the 2D attenuated Radon transform which can be inverted by Novikov's algorithm. We show reconstructions from simulated and measured data.
Information Geometry, Alternating Minimizations, and Transmission Tomography
Joseph A. O'Sullivan
Washington University in St. Louis
Date: August 8, 2003
Location: UBC
Abstract
Many medical imaging problems can be formulated as statistical inverse problems to which estimation theoretic methods can be applied. Statistical likelihood functions can be viewed in information-theoretic terms as well. Maximizations of statistical likelihood functions for several image estimation problems, including emission and transmission tomography, can be reformulated as double minimizations of information divergences. Properties of minimizations of I-divergences are studied in information geometry. This more general viewpoint yields new characterizations of algorithms and new algorithms for transmission tomography. These new algorithms are described in detail as are medical imaging applications of transmission tomography in the presence of metal.
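A minimal sketch of the alternating-minimization viewpoint is the classical ML-EM iteration for emission tomography, which can be read as an alternating minimization of I-divergences; the toy system matrix and data below are hypothetical and only illustrate the update, they are not from the talk.

import numpy as np

# Hypothetical toy problem: A is a nonnegative system matrix (detectors x pixels),
# y are Poisson-distributed emission counts. Neither is taken from the talk.
rng = np.random.default_rng(0)
A = rng.random((40, 25))
x_true = 10.0 * rng.random(25)
y = rng.poisson(A @ x_true).astype(float)

x = np.ones(25)                  # strictly positive starting image
sens = A.sum(axis=0)             # sensitivity image: sum_i a_ij
for _ in range(200):
    # ML-EM update: x_j <- (x_j / sum_i a_ij) * sum_i a_ij * y_i / (A x)_i
    x = x / sens * (A.T @ (y / np.maximum(A @ x, 1e-12)))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))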
Imaging in Clutter
George Papanicolaou
Stanford University
Date: August 6, 2003
Location: UBC
Abstract
Array imaging, like synthetic aperture radar, does not produce good reflectivity images when there is clutter, or random scattering inhomogeneities, between the reflectors and the array. Can the blurring effects of clutter be controlled? I will discuss this issue in some detail and show that if bistatic array data is available and if the data is suitably preprocessed to stabilize clutter effects then a good deal can be done to minimize blurring.
Thermoacoustic Tomography - An Inherently 3D Generalized Radon Inversion Problem
Sarah Patch
GE Medical Systems
Date: August 6, 2003
Location: UBC
Abstract
Joint work with D. Finch and Rakesh. A hybrid imaging technique using RF excitation measures ultrasound (US) data. Cancerous tissue is hypothesized to preferentially absorb RF energy, heating more and expanding faster than surrounding healthy tissue. Pressure waves therefore emanate from cancerous inclusions and are detected by US transducers located on the surface of a sphere surrounding the imaging object. A formula for the contrast function is derived in terms of data measured over the entire imaging surface. Existence and uniqueness for the inverse problem when transducers cover only a hemisphere also hold. However, explicit inversion for this clinically realizable case remains an open problem.
Cramer-Rao Bounds in Dixon Imaging using SSFP
Angel Pineda
Stanford University
Date: August 6, 2003
Location: UBC
Abstract
Joint work with Zhifei Wen, Scott B. Reeder and Norbert J. Pelc. Dixon imaging is an MRI technique that collects multiple images at different echo times, which are then used to separate the fat and water components. Steady-state free precession (SSFP) is a rapid gradient-echo imaging technique with good signal-to-noise ratio (SNR) and contrast that depends on both T1 and T2. However, both fluid and fat appear bright in SSFP images, which makes abnormalities appear similar to fat. Fat-water separation would solve this problem. We pose Dixon imaging as an estimation problem for the transverse complex magnetization of both the fat and water given images taken at multiple echoes. The Cramer-Rao bounds for the magnitudes of the fat and water components are used to guide the design of pulse sequences and to evaluate reconstruction techniques.
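A toy version of the bound computation (the echo times, fat-water frequency shift and noise level below are hypothetical placeholders, not the acquisition parameters of the work): for the linear signal model $s_n = W + F\,e^{i2\pi\Delta f\,TE_n}$ with white complex Gaussian noise of variance $\sigma^2$ per sample, the Cramer-Rao bound for the complex amplitudes is $\sigma^2 (A^H A)^{-1}$, from which bounds on the magnitudes follow.

import numpy as np

# Hypothetical acquisition: three echo times and a nominal fat-water shift near 1.5 T.
TE = np.array([2.2e-3, 3.4e-3, 4.6e-3])        # echo times in seconds (illustrative)
delta_f = -220.0                                # fat-water frequency shift in Hz (illustrative)
sigma = 1.0                                     # noise standard deviation per complex sample

# Design matrix of the linear model s = A @ [W, F]: a water column and a fat phasor column.
A = np.column_stack([np.ones_like(TE, dtype=complex),
                     np.exp(2j * np.pi * delta_f * TE)])

# Cramer-Rao bound (= least-squares covariance) for the complex water and fat amplitudes.
crb = sigma**2 * np.linalg.inv(A.conj().T @ A)
print("CRB variances (water, fat):", np.real(np.diag(crb)))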
Limited Data Tomography in science and industry
Eric Todd Quinto
Tufts University
Date: August 7, 2003
Location: UBC
Abstract
Tomography has revolutionized diagnostic medicine, scientific testing, and industrial nondestructive evaluation, and some of the most difficult problems involve limited data, in which some data are missing. This talk will describe two practical problems and give the mathematical background. The first problem, in industrial nondestructive evaluation (joint with Perceptics, Inc.), uses limited-angle exterior CT to reconstruct a rocket mockup. The second, in electron microscopy (joint with Sidec Technologies), uses limited angle local CT to reconstruct RNA and a virus.
ECGI : A Noninvasive Imaging Modality for Cardiac Electrophysiology and Arrhythmias
Yoram Rudy
Case Western Reserve
Date: August 4, 2003
Location: UBC
Abstract
N/A
Tomography and Inverse Scattering with Diffuse Light
John Schotland
University of Pennsylvania
Date: August 7, 2003
Location: UBC
Abstract
N/A
A New Approach to Inversion in Sonar Tomography
Thomas Schuster
Tufts University
Date: August 6, 2003
Location: UBC
Abstract
The two-dimensional spherical Radon transform assigns to a function its integrals over circles and is one of the mathematical models in sonar. Thus, stable inversion algorithms for that transform are of great interest. In contrast to the Radon transform, where we integrate over lines, the spherical Radon transform cannot be formulated as a bounded operator between Hilbert spaces, but only between distribution spaces. To establish a stable inversion scheme we have to think about how to approximate a distribution in an appropriate manner. That is why we extend the method of approximate inverse, which led to efficient reconstruction algorithms in several areas of tomography, so that it may be applied to this distribution space setting. The approximate inverse consists of evaluations of the dual pairing of the given Radon data with precomputed reconstruction kernels. These kernels are smooth and rapidly decreasing functions whose images under the adjoint of the Radon transform are so-called mollifiers. In the talk we first define what we mean by a mollifier. We design a mollifier and outline how the corresponding reconstruction kernel can be approximated by numerical integration. We show that the Radon transform as well as the constructed mollifier satisfy a certain operator invariance which accelerates the reconstruction process significantly. Plots of the mollifier and the reconstruction kernel are presented.
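In symbols, the method of approximate inverse proceeds as follows (a generic schematic, not the specific kernels of the talk): choose a mollifier $e_\gamma(x,\cdot)$, precompute a reconstruction kernel $\psi_\gamma(x,\cdot)$ with $R^*\psi_\gamma(x,\cdot)=e_\gamma(x,\cdot)$, and then evaluate the regularized reconstruction from data $g=Rf$ as
$$ f_\gamma(x)=\langle f,\,e_\gamma(x,\cdot)\rangle=\langle f,\,R^*\psi_\gamma(x,\cdot)\rangle=\langle g,\,\psi_\gamma(x,\cdot)\rangle. $$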
Nonlinear image reconstruction in optical tomography using an iterative Newton-Krylov method
Martin Schweiger
University College London
Date: August 7, 2003
Location: UBC
Abstract
Image reconstruction in optical tomography can be formulated as a nonlinear least squares optimisation problem. This paper describes an inexact regularised Gauss-Newton method to solve the normal equation, based on a projection onto the Krylov subspaces. The Krylov linear solver step addresses the Hessian only in the form of matrix-vector multiplications. We can therefore utilise an implicit definition of the Hessian, which only requires the computation of the Jacobian and the regularisation term. This method avoids the explicit formation of the Hessian matrix which is often intractable in large-scale three-dimensional reconstruction problems. We apply the method to the reconstructions of 3-D test problems in optical tomography, whereby we recover the volume distribution of absorption and scattering coefficients in a heterogeneous highly scattering medium from boundary measurements of infrared light transmission. We show that the Krylov method compares favourably to the explicit calculation of the Hessian both in terms of memory space and computational cost.
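A minimal sketch of the implicit-Hessian idea on a made-up two-parameter least-squares problem (the forward model, sizes and regularization here are hypothetical stand-ins for the optical-tomography operators; this is not the authors' code):

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

# Hypothetical nonlinear forward model F and its Jacobian J for a tiny test problem.
def F(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**2, x[0] * x[1]])

def J(x):
    return np.array([[2.0 * x[0], 1.0],
                     [np.cos(x[0]), 2.0 * x[1]],
                     [x[1], x[0]]])

y = F(np.array([1.0, 2.0]))      # synthetic data from a known "true" parameter
x = np.array([0.5, 0.5])         # initial guess
lam = 1e-3                       # Tikhonov regularization weight

for _ in range(20):
    r = y - F(x)
    Jx = J(x)
    # The Gauss-Newton Hessian J^T J + lam*I is applied only through matrix-vector
    # products inside the Krylov (CG) solver; it is never formed explicitly.
    H = LinearOperator((2, 2), matvec=lambda v, Jx=Jx: Jx.T @ (Jx @ v) + lam * v)
    dx, _ = cg(H, Jx.T @ r)
    x = x + dx

print("recovered parameters:", x)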
Inversion of the Bloch Equation
Meir Shinnar
University of Medicine and Dentistry of New Jersey
Date: August 5, 2003
Location: UBC
Abstract
N/A
The Inverse Polynomial Reconstruction Method for Two Dimensional Image Reconstruction
Bernie Shizgal
University of British Columbia
Date: August 8, 2003
Location: UBC
Abstract
N/A
Three-dimensional X-ray imaging with few radiographs
Samuli Siltanen
Gunma University
Date: August 6, 2003
Location: UBC
Abstract
In medical X-ray tomography, three dimensional structure of tissue is reconstructed from a collection of projection images. In many practical imaging situations only a small number of truncated projections is available from a limited range of view. Traditional reconstruction algorithms, such as filtered backprojection (FBP), do not give satisfactory results when applied to such data. Instead of FBP, Bayesian inversion is suggested for reconstruction. In this approach, a priori information is used to compensate for the incomplete information of the measurement data. Examples with in vitro measurements from dental radiology and surgical imaging are presented.
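Schematically, the Bayesian approach combines the measurement model with prior information (a generic formulation, not the specific priors of the talk): for a linear projection model m = Ax + e with Gaussian noise,
$$ \pi(x\mid m)\ \propto\ \exp\Bigl(-\tfrac{1}{2\sigma^{2}}\|Ax-m\|^{2}\Bigr)\,\pi_{\mathrm{pr}}(x), $$
and a point estimate such as the MAP estimate $x_{\mathrm{MAP}}=\arg\max_x\pi(x\mid m)$ turns the prior (for example positivity or smoothness) into the regularization that compensates for the few, truncated projections.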
One Dimensional Inverse Problem In Diffusion Based Optical Tomography Using Gauss-Newton-Type Algorithm For Nonlinear Ill-Posed Problems
Alexandra Smirnova
Georgia State University
Date: August 6, 2003
Location: UBC
Abstract
We investigate a one-dimensional inverse problem in diffusion-based optical tomography using an iteratively regularized Gauss-Newton-type algorithm with simultaneous updates of the regularized Fréchet derivative (IRGNSU). We compare IRGNSU to other nonlinear algorithms such as the iteratively regularized Gauss-Newton method, Levenberg-Marquardt, Kaczmarz or nonlinear ART (Algebraic Reconstruction Technique), and conjugate gradient; a generic form of the regularized Gauss-Newton update is sketched after the references below.
References
1. A. G. Ramm and A. B. Smirnova, Continuous regularized Gauss-Newton-type algorithm for nonlinear ill-posed equations with simultaneous updates of inverse derivative, Intern. Journal of Pure and Appl. Math., 2, N1, 23-34 (2002).
2. R. B. Alexeev and A. B. Smirnova, Regularization of nonlinear unstable operator equations by secant methods with application to gravitational sounding problem, Proceedings of the AMS Special Session on Inverse Problems and Image Analysis (2001: New Orleans, LA), eds. M. Nashed and O. Scherzer, Contemporary Mathematics, 313, 1-17 (2002).
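For orientation, one common generic form of the iteratively regularized Gauss-Newton update for an equation $F(x)=y$ is (written here for context; the IRGNSU variant of reference 1 additionally maintains an updated approximation of the derivative at each step)
$$ x_{k+1}=x_k+\bigl(F'(x_k)^*F'(x_k)+\alpha_k I\bigr)^{-1}\Bigl[F'(x_k)^*\bigl(y-F(x_k)\bigr)+\alpha_k\,(x_0-x_k)\Bigr], \qquad \alpha_k\searrow 0. $$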
Applications of Diffusion MRI to Electrical Impedance Tomography
David Tuch
MIT
Date: August 5, 2003
Location: UBC
Abstract
Diffusion MRI measures the molecular self-diffusion of endogenous water in tissue. In this talk, I will discuss various applications of diffusion MRI to electrical impedance tomography (EIT). In particular, I will discuss (i) how the anisotropy information from diffusion tensor imaging (DTI) can inform the EIT forward model, and (ii) how particular transport conservation principles measured with DTI can provide priors or hard constraints for the EIT inverse problem. I will also discuss some recent work on mapping non-tensorial diffusion using spherical tomographic inversions of the diffusion signal.
Uniqueness theorems in Interior Elastodynamics Inverse Problems Toward Tissue Stiffness Reconstruction
Jeong-Rock Yoon
RPI
Date: August 7, 2003
Location: UBC
Abstract
Joint work with Joyce McLaughlin. We consider the transient elastography problem: sending an elastic pulse from the surface and measuring the interior displacements by ultrasound, one seeks to image the stiffness distribution in biological tissue. This leads us to consider an inverse problem for the identification of coefficients in the second-order hyperbolic system that models the propagation of elastic waves. The measured data for our inverse problem is the time-dependent interior vector displacement. In the isotropic case, we establish sufficient conditions for the unique identifiability of wave speeds and for the simultaneous identifiability of both the density and the Lamé parameters. In the anisotropic case, counterexamples are presented to exhibit the nonuniqueness and to show the structure of the set of shear tensors corresponding to the given data.
Cascade Topology Seminar PIMS
The best picture of Poincaré's homology sphere
David Gillman
UCLA
Date: November 2, 2002
Location: UBC
Abstract
N/A
Homotopy self-equivalences of 4-manifolds
Ian Hambleton
McMaster University
Date: November 2, 2002
Location: UBC
Abstract
Skein theory in knot theory and beyond
Vaughan Jones
University of California, Berkeley
Date: November 3, 2002
Location: UBC
Abstract
N/A
New perspectives on self-linking
Dev Sinha
University of Oregon
Date: November 3, 2002
Location: UBC
Abstract
N/A
Cryptography and the braid groups
Catherine Webster
University of British Columbia
Date: November 2, 2002
Location: UBC
Abstract
N/A
Topological robotics; topological complexity of projective spaces
Sergey Yuzvinsky
University of Oregon
Date: November 2, 2002
Location: UBC
Abstract
N/A
Thematic Programme on Asymptotic Geometric Analysis
Random Homomorphisms
Peter Winkler
Bell Labs
Date: July 20, 2000
Location: UBC
Abstract
Let H be a fixed small graph, possibly with loops, and let G be a (possibly infinite) graph. Let f be chosen from the set Hom(G,H) of all homomorphisms from G to H.
If H is K_n, f is a proper coloring of G; if H consists of two adjacent vertices, one of which is looped, f is (in effect) an independent set in G. These and other H give rise to "hard constraint" models of interest in statistical mechanics (a toy check of the homomorphism condition is sketched after the reference below). One way to phrase the physicists' key question is: when G is infinite, is there a canonical way to pick f uniformly at random?
When G is a Cayley tree, f can be generated by a branching random walk on H, and using this approach we are able to characterize the H for which Hom(G,H) always has a unique "nice" probability distribution. We will sketch the proof but spend equal time illustrating the bizarre things that can happen when H is not so well behaved.
Reference: Graham R. Brightwell and Peter Winkler, Graph homomorphisms and phase transitions, J. Comb. Theory Series B (1999) 221--262.
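As a concrete illustration of the hard-constraint models above (a toy check of the homomorphism condition, with a hypothetical helper name, not code from the talk):

# A map f: V(G) -> V(H) is a homomorphism exactly when every edge of G maps to an edge of H.
# With H = K_3 (a triangle, no loops) this condition says precisely that f is a proper 3-coloring of G.
def is_homomorphism(G_edges, H_edges, f):
    # G_edges: iterable of pairs (u, v); H_edges: set of frozensets (a loop at a is frozenset({a})).
    return all(frozenset({f[u], f[v]}) in H_edges for (u, v) in G_edges)

K3 = {frozenset({a, b}) for a in range(3) for b in range(3) if a != b}
square = [(0, 1), (1, 2), (2, 3), (3, 0)]                       # the 4-cycle
print(is_homomorphism(square, K3, {0: 0, 1: 1, 2: 0, 3: 2}))    # True: a proper 3-coloring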
Acyclic coloring, strong coloring, list coloring and graph embedding
Noga Alon
Tel Aviv University
Date: July 19, 2000
Location: UBC
Abstract
I will discuss various coloring problems and the relations among them. A representative example is the conjecture, studied in a joint paper with Sudakov and Zaks, that the edges of any simple graph with maximum degree d can be colored by at most d+2 colors with no pair of adjacent edges of the same color and no 2-colored cycle.
A three-color theorem for some graphs evenly embedded on orientable surfaces
Joan Hutchinson
Macalester College
Date: July 19, 2000
Location: UBC
Abstract
The easiest planar graph coloring theorem states that a graph in the plane can be 2-colored if and only if every face is bounded by an even number of edges; call such a graph "evenly embedded." What is the chromatic number of evenly embedded graphs on other surfaces? Three, provided the surface is orientable and the graph is embedded with all noncontractible cycles sufficiently long. We give a new proof of this result, using a theorem from Robertson-Seymour graph minors work and a technique of Hutchinson, Richter, and Seymour in a proof of a related 4-color theorem for Eulerian triangulations.
Colourings and orientations of graphs
Adrian Bondy
Université Claude Bernard
Date: July 18, 2000
Location: SFU
Abstract
To each proper colouring c:V -> {1,2,...,k} of the vertices of a graph G, there corresponds a canonical orientation of the edges of G, edge uv being oriented from u to v if and only if c(u) > c(v). This simple link between colourings and orientations is but the tip of the iceberg. The ties between the two notions are far more profound and remarkable than are suggested by the above observation. The aim of this talk is to describe some of these connections.
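A small sketch of this canonical orientation (illustrative code, not part of the talk):

# Given a proper colouring c of G, orient each edge uv from u to v exactly when c(u) > c(v).
def canonical_orientation(edges, c):
    return [(u, v) if c[u] > c[v] else (v, u) for (u, v) in edges]

# Example: a proper 3-colouring of the 5-cycle and the acyclic orientation it induces.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
colouring = {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}
print(canonical_orientation(cycle5, colouring))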
Extending graph colorings
Michael Albertson
Smith College
Date: July 17, 2000
Location: UBC
Abstract
Given an r-chromatic graph G and an r-coloring of $P \subseteq V(G)$, it is natural to wonder whether the coloring of P can be extended to all of G. This is NP-complete for r = 3. Even if G is bipartite, it is NP-complete to decide whether a 3-coloring extends. Although graph color extension can be formulated as a list coloring problem, the context of extension makes certain questions seem more provocative. Does using an additional color help? Does the structure of P matter? Thomassen posed the following problem: suppose G is planar and the distance between any two vertices in P is at least 100; can any 5-coloring of P be extended to a 5-coloring of G? Theorem: if $\chi(G) = r$ and the distance between any two vertices in P is at least 4, then any (r+1)-coloring of P extends to an (r+1)-coloring of all of G. It is known that no r-coloring result of this nature can hold. In some instances no additional colors are necessary. This talk will introduce this subject, survey some of its highlights, and present a handful of open questions.
Integral polyhedra related to even-cycle and even-cut matroids
Bertrand Guenin
University of Waterloo, Canada
Date: July 28, 2000
Location: SFU
Abstract
N/A
Antisymmetric Flows
Matt DeVos
Princeton University
Date: July 14, 2000
Location: SFU
Abstract
N/A
The Background to the Hodgkin-Huxley Equation
Professor Sir Andrew Huxley
Master of Trinity College, Cambridge
Date: August 19, 1999 at 14:00
Location: UBC
Abstract
Sir Andrew Huxley, Master of Trinity College, Cambridge (UK), is well known for his role in the development of the Hodgkin-Huxley equations for neural excitation and the action potential, work which earned him and Sir Alan Hodgkin the Nobel Prize in Physiology or Medicine in 1963.
This work began in 1939 with the experimental discovery of the action potential, the basic mechanism for electrical signalling between neurons. The research eventually revealed the role of membrane currents and conductivities for various ions, and culminated in a mathematical description which is used today as a fundamental template for the modelling of most excitable cells, including neurons, cardiac cells, and pancreatic beta-cells. Sir Andrew Huxley will describe the background to the equations in his lecture.
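For reference, the membrane equation at the heart of the model reads (in its standard textbook form, included here for context rather than taken from the lecture)
$$ C_m\frac{dV}{dt}= -\bar g_{\mathrm{Na}}\,m^{3}h\,(V-E_{\mathrm{Na}})-\bar g_{\mathrm{K}}\,n^{4}\,(V-E_{\mathrm{K}})-\bar g_{L}\,(V-E_{L})+I_{\mathrm{ext}}, $$
with each gating variable $u\in\{m,h,n\}$ obeying first-order kinetics $\dot u=\alpha_u(V)(1-u)-\beta_u(V)\,u$.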