## Audio

### IAM-PIMS Joint Distinguished Colloquium

Dynamics of Abyssal Ocean Currents

Gordon E. Swaters

University of Alberta

Date: October 7, 2002

Location: UBC

Abstract

The ocean is the regulator of Earth's climate. The world's oceans store an enormous quantity of heat, which is redistributed around the globe by the currents. Because the density of water is about a thousand times that of air, the ocean has far greater inertia than the atmosphere: changing an existing ocean circulation pattern requires an enormous amount of energy compared to changing the atmospheric winds. One can think of the ocean as the "memory" and "integrator" of past and evolving climate states.

Ocean currents fall into two broad groups. The first comprises the wind-driven currents. These currents are most intense near the surface of the ocean, and their principal role is to transport warm equatorial waters toward the polar regions. The second group comprises currents driven by density contrasts with the surrounding waters. Among these are the deep, or abyssal, currents flowing in narrow bands along or near the bottom of the oceans. Their principal role is to transport cold, dense water produced in the polar regions toward the equator.

My research group is working toward understanding the dynamics of these abyssal currents. In particular, we have focused on developing innovative mathematical and computational models to describe the evolution, including the transition to instability and interaction with the surrounding ocean and bottom topography, of these currents. The goal of this research is to better understand the temporal variability in the planetary scale dynamics of the ocean climate system. Our work can be seen as "theoretical" in the sense that we attempt to develop new models to elucidate the most important dynamical balances at play and "process-oriented" in the sense that we attempt to use these models to make concrete predictions about the evolution of these flows. As such, our work is a blend of classical applied mathematics, high-performance computational science and physical oceanography.

In this talk, we will attempt to give an overview of our work in this area.

Transition pathways in complex systems: throwing ropes over rough mountain passes, in the dark

David Chandler

University of California

Date: October 28, 2002

Location: UBC

Abstract

This lecture describes the statistical mechanics of trajectory space and examples of what can be learned from it. These examples include numerical algorithms for studying rare but important events in complex systems -- systems where transition states are not initially known, where transition states need not coincide with saddles in a potential energy landscape, and where the number of saddles and other features are too numerous and complicated to enumerate explicitly. This methodology for studying trajectories is called "transition path sampling." Extensive material on this topic can be found at the website gold.cchem.berkeley.edu.
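A cartoon of the path-sampling idea can be sketched in a few dozen lines (an illustrative toy, not the speaker's code: the double-well potential, basin definitions, and all parameter values below are our own choices). Paths of overdamped Langevin dynamics are sampled with a forward-shooting move: keep the path up to a random slice, regenerate the remainder with fresh noise, and accept the trial only if it still connects the reactant and product basins.

```python
import math
import random

def force(x):
    # Double-well potential V(x) = (x^2 - 1)^2, minima at x = -1 and x = +1
    return -4.0 * x * (x * x - 1.0)

def propagate(x0, nsteps, dt=0.01, beta=1.0, rng=random):
    """One overdamped-Langevin trajectory segment started at x0."""
    sigma = math.sqrt(2.0 * dt / beta)
    xs = [x0]
    for _ in range(nsteps):
        x = xs[-1]
        xs.append(x + force(x) * dt + sigma * rng.gauss(0.0, 1.0))
    return xs

in_A = lambda x: x < -0.7          # reactant basin
in_B = lambda x: x > 0.7           # product basin
reactive = lambda p: in_A(p[0]) and in_B(p[-1])

def shooting_move(path, rng=random):
    """Forward-shooting move: keep the path up to a random slice index,
    regenerate the tail with fresh noise, accept iff still reactive."""
    i = rng.randrange(1, len(path) - 1)
    trial = path[:i + 1] + propagate(path[i], len(path) - 1 - i, rng=rng)[1:]
    return (trial, True) if reactive(trial) else (path, False)

rng = random.Random(0)
n = 400
# Artificial initial reactive path: a straight line from one well to the other
path = [-1.0 + 2.0 * k / n for k in range(n + 1)]
accepted = 0
for _ in range(200):
    path, ok = shooting_move(path, rng)
    accepted += ok
# The current path remains reactive by construction; `accepted` counts moves
print(reactive(path), len(path) == n + 1)
```

Because acceptance requires only that the trial trajectory remain reactive, no transition state need be known in advance; this is the sense in which ropes are thrown over the pass "in the dark."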

Spatial complexity in ecology and evolution

Ulf Dieckmann

The International Institute for Applied Systems Analysis

Date: December 2, 2002

Location: UBC

Abstract

The field of spatial ecology has expanded dramatically in the last few years. This talk gives an overview of the many intriguing phenomena arising from spatial structure in ecological and evolutionary models. While traditional ecological theory sadly fails to account for such phenomena, complex simulation studies offer but limited insight into the inner workings of spatially structured ecological interactions. The talk concludes with a survey of some novel methods for simplifying spatial complexity that offer a promising middle ground between spatially ignorant and spatially explicit approaches.

Turbulence and its Computation

Parviz Moin

Center for Turbulence Research, Stanford University and NASA Ames Research Center

Date: January 13, 2003

Location: UBC

Abstract

Turbulence is a common state of fluid motion in engineering applications and at geophysical and astrophysical scales. Prediction of its statistical properties and the ability to control turbulence are of great practical significance. Progress toward a rigorous analytic theory has been prevented by the fact that turbulence is a mixture of high-dimensional chaos and order, and turbulent flows possess a wide range of temporal and spatial scales with strong nonlinear interactions. With the advent of supercomputers it has become possible to compute some turbulent flows from basic principles. The data generated from these calculations have helped to elucidate the nature and mechanics of turbulent flows in some detail. Recent examples from large-scale computations of turbulent flows and novel numerical experiments used to study turbulence will be presented. These span a wide range of complexity, from decaying turbulence in a box to turbulent combustion in the combustor of a real jet engine. The hierarchy of methods for computing turbulent flows and the problem of turbulence closure will be discussed. Recent applications of optimal control theory to turbulence control for drag and noise reduction will be presented.

Fast accurate solution of stiff PDE

Lloyd N. Trefethen

Oxford University Computing Laboratory

Date: March 17, 2003

Location: UBC

Abstract

Many partial differential equations combine higher-order linear terms with lower-order nonlinear terms. Examples include the KdV, Allen-Cahn, Burgers, and Kuramoto-Sivashinsky equations. High accuracy is typically needed because the solutions may be highly sensitive to small perturbations. For simulations in simple geometries, spectral discretization in space is excellent, but what about the time discretization? Too often, second-order methods are used because higher order seems impractical. In fact, fourth-order methods are entirely practical for such problems, and we present a comparison of the competing methods of linearly implicit schemes, split-step schemes, integrating factors, "sliders", and ETD or exponential time differencing. In joint work with A.-K. Kassam we have found that a fourth-order Runge-Kutta scheme known as ETDRK4, developed by Cox and Matthews, performs impressively if its coefficients are stably computed by means of contour integrals in the complex plane. Online examples show that accurate solutions of challenging nonlinear PDE can be computed by a 30-line Matlab code in less than a second of laptop time.
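The numerical pitfall behind the contour-integral trick can be illustrated with the simplest ETD coefficient, phi_1(z) = (e^z - 1)/z (a Python sketch of the idea; the threshold, contour radius, and number of quadrature points are illustrative choices, not values from the work described above). The direct formula loses digits to cancellation for small |z|; averaging the function over a contour encircling z, which is exact by the mean-value property of analytic functions, does not.

```python
import cmath

def phi1(z, M=32, radius=1.0):
    """Evaluate phi_1(z) = (exp(z) - 1)/z stably.

    The direct formula suffers catastrophic cancellation for small |z|.
    Instead, average the (entire) function over M points on a circle
    centred at z: the mean-value property of analytic functions recovers
    phi_1(z) to near machine precision, with no cancellation.
    """
    if abs(z) > 0.5:
        return (cmath.exp(z) - 1.0) / z
    total = 0.0
    for k in range(M):
        t = z + radius * cmath.exp(2j * cmath.pi * (k + 0.5) / M)
        total += (cmath.exp(t) - 1.0) / t
    return total / M

# phi_1(0) = 1 exactly; the direct formula would divide 0 by 0 here,
# and for z ~ 1e-9 it would lose many significant digits.
print(abs(phi1(0.0) - 1.0))
print(abs(phi1(1e-9) - 1.0))
```

In the ETDRK4 setting the same trick is applied to the matrix (or diagonal-spectral) arguments of each coefficient function, one contour per time step.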

Detached-Eddy Simulation

Philippe R. Spalart

Boeing Corp., Seattle

Date: October 1, 2001

Location: UBC

Abstract

DES is a recent technique devised to predict separated flows at high Reynolds numbers with a manageable cost, for instance the flow past an airplane landing gear or a road vehicle. The rationale is that on one hand, Large-Eddy Simulation (LES) is unaffordable in the thin regions of the boundary layer, and on the other hand, Reynolds-Averaged Navier-Stokes (RANS) models seem permanently unable to attain sufficient accuracy in regions of massive separation.

DES contains a single model, typically with one transport equation, which functions as a RANS model in the boundary layer and as a Sub-Grid-Scale model in separated regions, where the simulation becomes an LES. The approach has spread to a number of groups worldwide, and appears quite stable. A range of examples are presented, from flows as simple as a circular cylinder to flows as complex as a fighter airplane beyond stall. The promise and the limitations of the technique are discussed.
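The RANS/LES switch at the heart of DES can be stated in one line. In the original Spalart-Allmaras-based DES97 formulation, the RANS wall distance d is replaced by min(d, C_DES * Delta), with Delta the largest local grid spacing and C_DES roughly 0.65; a minimal sketch:

```python
def des_length_scale(d_wall, dx, dy, dz, c_des=0.65):
    """DES97 length scale, d_tilde = min(d_wall, C_DES * Delta).

    Near a wall, d_wall is the smaller quantity, so the model reduces to
    its RANS form; far from walls the grid scale Delta = max(dx, dy, dz)
    takes over and the model behaves as an LES subgrid-scale model.
    """
    delta = max(dx, dy, dz)
    return min(d_wall, c_des * delta)

# Attached boundary layer: wall distance wins -> RANS mode
print(des_length_scale(0.001, 0.1, 0.1, 0.1))
# Massively separated region far from walls: grid scale wins -> LES mode
print(des_length_scale(10.0, 0.1, 0.1, 0.1))
```

The single transport equation is otherwise unchanged; the substituted length scale alone decides, cell by cell, which regime the model is in.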

Numerical Simulation of Turbulence

Joel H. Ferziger

Stanford University

Date: November 26, 2001

Location: UBC

Abstract

Turbulence is a phenomenon (or rather a set of phenomena) that is difficult to deal with both mathematically and physically because it contains both deterministic and random elements. However, the equations governing its behavior are well known. After a short discussion of the physics of turbulence, we will discuss the approaches used to deal with it and give an example of the use of simulation techniques to learn about the physics of turbulence and to develop simple models for engineering use.

Approximation Algorithms and Games on Networks

Eva Tardos

Cornell University

Date: March 11, 2002

Location: UBC

Abstract

In this talk we discuss work at the intersection of algorithm design and game theory. Traditional algorithm design assumes that the problem is described by a single objective function. One of the main current trends of work focuses on approximation algorithms, used where computing the exact optimum is too hard. In a number of settings, however, there is an additional difficulty: it is natural to consider algorithmic questions where multiple agents each pursue their own selfish interests. We will discuss problems and results that arise from this perspective.

Algorithms and Software for Dynamic Optimization with Application to Chemical Vapor Deposition Processes

Linda Petzold

University of California at Santa Barbara

Date: November 1, 2000

Location: UBC

Abstract

In recent years, as computers and algorithms for simulation have become more efficient and reliable, an increasing amount of attention has focused on the more computationally intensive tasks of sensitivity analysis and optimal control. In this lecture we describe algorithms and software for sensitivity analysis and optimal control of large-scale differential-algebraic systems, focusing on the computational challenges. We introduce a new software package, DASPK 3.0, for sensitivity analysis, and discuss our progress to date on the COOPT software and algorithms for optimal control. An application from the chemical vapor deposition growth of a thin-film YBCO high-temperature superconductor will be described.

The Mathematics of Reflection Seismology

Gunther Uhlmann

University of Washington

Date: March 6, 2001

Location: UBC

Abstract

Reflection seismology is the principal exploration tool of the oil industry and has many other technical and scientific uses. Reflection seismograms contain enormous amounts of information about the Earth's structure, obscured by complex reflection and refraction effects. Modern mathematical understanding of wave propagation in heterogeneous materials has aided in the unraveling of this complexity. The speaker will outline some advances in the theory of oscillatory integrals which have had immediate practical application in seismology.

Radial Basis Functions - A future way to solve PDEs to spectral accuracy on irregular multidimensional domains?

Bengt Fornberg

University of Colorado

Date: March 27, 2001

Location: UBC

Abstract

It was discovered about 30 years ago that expansions in Radial Basis Functions (RBFs) provide very accurate interpolation of arbitrarily scattered data in any number of spatial dimensions. With both computational cost and coding effort for RBF approximations independent of the number of spatial dimensions, it is not surprising that RBFs have since found use in many applications. Their use as basis functions for the numerical solution of PDEs is, however, surprisingly recent. In this Colloquium, we will discuss RBF approximations from the perspective of someone interested in pseudospectral (spectral collocation) methods, primarily for wave-type equations.
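The basic construction is easy to state: given nodes x_i and data f_i, pick a radial function phi, solve the symmetric linear system A w = f with A_ij = phi(|x_i - x_j|), and evaluate s(x) = sum_j w_j phi(|x - x_j|) anywhere. A self-contained sketch with a multiquadric basis (the node set, shape parameter, and helper names are our own illustrative choices):

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_interpolant(nodes, values, eps=1.0):
    """Multiquadric RBF interpolant of scattered 1-D data."""
    phi = lambda r: math.sqrt(1.0 + (eps * r) ** 2)
    A = [[phi(abs(x - y)) for y in nodes] for x in nodes]
    w = solve(A, values)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, nodes))

nodes = [0.0, 0.3, 0.7, 1.1, 1.6, 2.0]
values = [math.sin(x) for x in nodes]
s = rbf_interpolant(nodes, values)
# the interpolant reproduces the data exactly at the nodes
print(max(abs(s(x) - v) for x, v in zip(nodes, values)))
```

The same code carries over to higher dimensions by replacing |x - y| with the Euclidean distance between points, which is the dimension-independence noted in the abstract; the practical issues (conditioning as eps shrinks, node placement) are the subject of the talk.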

### PIMS PDE/Geometry Seminar

Unusual comparison properties of capillary surfaces

Robert Finn

Stanford University

Date:

Location:

Abstract

This talk will address a question that was raised about 30 years ago by Mario Miranda, as to whether a given cylindrical capillary tube always raises liquid higher over its section than does a cylinder whose section strictly contains the given one. Depending on the specific shapes, the answer can take unanticipated forms exhibiting nonuniformity and discontinuous reversal in behavior, even in geometrically simple configurations. The presentation will be for the most part complete and self-contained, and is intended to be accessible for a broad mathematical audience.

### String Theory Seminar

D-particles with multipole moments of higher dimensional branes

Mark van Raamsdonk

Stanford University

Date: November 28, 2000

Location: UBC

Abstract

N/A

### PIMS-MITACS Seminar on Computational Statistics and Data Mining

A Simple Model for a Complex System: Predicting Travel Times on Freeways

John A. Rice

UC Berkeley

Date: April 26, 2001

Location: UBC

Abstract

A group of researchers from the Departments of EECS, Statistics, and the Institute for Transportation Research at UC Berkeley has been collecting and studying data on traffic flow on freeways in California. I will describe the sources of data and give an overview of the problems being addressed. I will go into some detail on a particular problem: forecasting travel times over a network of freeways. Although the underlying system is very complex and tempting to model, a simple model is surprisingly effective at forecasting.

Some of the work the group is doing appears on these websites:

http://www.dailynews.com/news/articles/0201/20/new01.asp

http://oz.berkeley.edu/~fspe/

http://http.cs.berkeley.edu/~zephyr/freeway/

http://www.its.berkeley.edu/projects/freewaydata/

http://www.path.berkeley.edu/

http://http.cs.berkeley.edu/~pm/RoadWatch/index.html

http://www.path.berkeley.edu/~pettyk/rssearch.html

Robust Factor Model Fitting and Visualization of Stock Market Returns

R. Douglas Martin

University of Washington

Date: January 25, 2001

Location: UBC

Abstract

Modeling stock returns and calculating portfolio risk is almost invariably accomplished by fitting a linear model, called a "factor" model in the finance community, using the sanctified method of ordinary least squares (OLS). However, it is well known that stock returns are often non-Gaussian by virtue of containing outliers, and that OLS estimates are not robust toward outliers. Modern robust regression methods are now available that are not unduly influenced by outliers. We illustrate with a factor model fit of stock returns using firm size and book-to-market as the factors, where we show that OLS gives a misleading result. Then we show how Trellis graphics displays can be used to obtain quick, penetrating visualization of stock returns factor model data, and to obtain convenient comparisons of OLS and robust factor model fits. Last but not least, we point out that robust factor model fits and Trellis graphics displays are in effect powerful "data mining tools" for better understanding of financial data. Our examples are constructed using a new S-PLUS Robust Methods library and S-PLUS Trellis graphics displays.

### PIMS-MITACS Financial Seminar Series

Levy Processes in Financial Modeling

Dilip Madan

University of Maryland

Date: March 9, 2001

Location: UBC

Abstract

We investigate the relative importance of diffusion and jumps in a new jump diffusion model for asset returns. In contrast to the standard modelling of jumps for asset returns, the jump component of our process can display finite or infinite activity, and finite or infinite variation. Empirical investigations of time series indicate that index dynamics are essentially devoid of a diffusion component, while this component may be present in the dynamics of individual stocks. This result leads to the conjecture that the risk-neutral process should be free of a diffusion component for both indices and individual stocks. Empirical investigation of options data tends to confirm this conjecture. We conclude that the statistical and risk-neutral processes for indices and stocks tend to be pure jump processes of infinite activity and finite variation.

### PIMS Distinguished Lecture Series

Systems of Nonlinear PDEs arising in economic theory

Ivar Ekeland

Université Paris-Dauphine

Date: March 22, 2002

Location: UBC

Abstract

Testing the foundations of microeconomic theory leads us into a mathematical analysis of systems of nonlinear PDEs. Some of these can be solved in a C^\infty framework by using the classical Darboux theorem and its recent extensions; others require analyticity and more refined tools, such as the Cartan-Kähler theorem. Care will be taken to explain the economic framework and the tools of differential geometry.

Odd embeddings of lens spaces

David Gillman

UCLA

Date: May 31, 2001

Location: UBC

Abstract

N/A

Colliding Black Holes and Gravity Waves: A New Computational Challenge

Douglas N. Arnold

Institute for Mathematics and its Applications

Date: May 16, 2001

Location: UBC

Abstract

An ineluctable, though subtle, consequence of Einstein's theory of general relativity is that relatively accelerating masses generate tiny ripples on the curved surface of spacetime which propagate through the universe at the speed of light. Although such gravity waves have not yet been detected, it is believed that technology is reaching the point where detection is possible, and a massive effort to construct a worldwide network of interferometric gravity-wave observatories is well underway. They promise to be our first window on the universe outside the electromagnetic spectrum, and so, to astrophysicists and others trying to fathom a universe composed primarily of electromagnetically dark matter, the potential payoff is enormous.

If gravitational wave detectors are to succeed as observatories, we must learn to interpret the wave forms which are detected. This requires the numerical simulation of the violent cosmic events, such as black hole collisions, which are the most likely sources of detectable radiation, via the numerical solution of the Einstein field equations. The Einstein equations form a system of ten second order nonlinear partial differential equations in four-dimensional spacetime which, while having a very elegant and fundamental geometric character, are extremely complex. Their numerical solution presents an enormous computational challenge which will require the application of state-of-the-art numerical methods from other areas of computational physics together with new ideas. This talk aims to introduce some of the scientific, mathematical, and computational problems involved in the burgeoning field of numerical relativity, discuss some recent progress, and suggest directions of future research.

Chow Forms and Resultants - old and new

David Eisenbud

Mathematical Science Research Institute (Berkeley)

Date: April 12, 2001

Location: UBC

Abstract

N/A

The Mandelbrot Set, the Farey Tree, and the Fibonacci Sequence

Robert L. Devaney

Boston University

Date: October 20, 2000

Location: UBC

Abstract

In this lecture several folk theorems concerning the Mandelbrot set will be described. It will be shown how one can determine the dynamics of the corresponding quadratic maps by visualizing tiny regions in the Mandelbrot set, as well as how the size and location of the bulbs in the Mandelbrot set are governed by Farey arithmetic.
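One of these folk correspondences is easy to probe numerically: a parameter c drawn from a bulb of the Mandelbrot set yields a quadratic map z -> z^2 + c with an attracting cycle whose period identifies the bulb. A toy sketch (the sample parameters below sit near well-known bulb centers; the tolerance and iteration counts are arbitrary choices):

```python
def bulb_period(c, warmup=2000, tol=1e-6, max_period=20):
    """Estimate the period of the attracting cycle of z -> z^2 + c.

    For c inside a hyperbolic component ("bulb") of the Mandelbrot set,
    the critical orbit settles onto an attracting cycle; its period is
    the quantity that the Farey arithmetic of the bulbs keeps track of.
    """
    z = 0.0 + 0.0j                    # iterate the critical point
    for _ in range(warmup):
        z = z * z + c
        if abs(z) > 2.0:
            return None               # escaped: c lies outside the set
    orbit = [z]
    for _ in range(max_period):
        z = z * z + c
        orbit.append(z)
    for p in range(1, max_period + 1):
        if abs(orbit[p] - orbit[0]) < tol:
            return p
    return None

print(bulb_period(0.0))                  # main cardioid: period 1
print(bulb_period(-1.0))                 # period-2 bulb
print(bulb_period(-0.1226 + 0.7449j))    # period-3 bulb atop the cardioid
```

Zooming the same experiment around a bulb's root point is the numerical counterpart of "visualizing tiny regions" of the set.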

A Computational View of Randomness

Avi Wigderson

Hebrew University of Jerusalem

Date:

Location: UBC

Abstract

The current state of knowledge in Computational Complexity Theory suggests two strong empirical "facts" (establishing whose truth constitutes the two major open problems of this field).

1. Some natural computational tasks are infeasible (e.g. it seems so for computing the functions PERMANENT, FACTORING, CLIQUE, SATISFIABILITY ...)

2. Probabilistic algorithms can be much more efficient than deterministic ones (e.g. it seems so for PRIMALITY, VERIFYING IDENTITIES, APPROXIMATING VOLUMES...).

As it does with other notions (e.g. knowledge, proof...), Complexity Theory attempts to understand the notion of randomness from a computational standpoint. One major achievement of this study is the following (surprising?) relation between these two "facts" above:

THEOREM: (1) contradicts (2). In words: if ANY "natural" problem is "infeasible", then EVERY probabilistic algorithm can be "efficiently" "derandomized".

I plan to explain the sequence of important ideas, definitions, and techniques developed in the last 20 years that enable a formal statement and proof of such theorems. Many of them, such as the formal notions of "pseudo-random generator", and "computational indistinguishability" are of fundamental interest beyond derandomization; they have far reaching implications on our ability to build efficient cryptographic systems, as well as our inability to efficiently learn natural concepts and effectively prove natural mathematical conjectures (such as (1) above).

### Thematic Programme on Inverse Problems and Applications

Reconstructing the Location and Magnitude of Refractive Index Discontinuities from Truncated Phase-Contrast Tomographic Projections

Mark Anastasio

Illinois Institute of Technology

Date: August 4, 2003

Location: UBC

Abstract

Joint work with Daxin Shi, Yin Huang, and Francesco De Carlo.

I. INTRODUCTION: In recent years, much effort has been devoted to developing imaging techniques that rely on contrast mechanisms other than absorption. Phase-contrast computed tomography (CT) is one such technique that exploits differences in the real part of the refractive index distribution of an object to form an image using a spatially coherent light source. Of particular interest is the ability of phase-contrast CT to produce useful images of objects that have very similar or identical absorption properties. In applications such as microtomography, it is imperative to reconstruct an image with high resolution. Experimentally, the demand for increased resolution can be met by highly collimating the incident light beam and using a microscope optic to focus the transmitted image, formed on a scintillator screen, onto the detector. When the object is larger than the field-of-view (FOV) of the imaging system, the measured phase-contrast projections are necessarily truncated and one is faced with the so-called local CT reconstruction problem. To circumvent the non-local nature of conventional CT, local CT algorithms have been developed that aim to reconstruct a filtered image containing detailed information regarding the location of discontinuities in the imaged object. Such information is sufficient for determining the structural composition of an object, which is the primary task in many biological and materials science imaging applications.

II. METHODS: A. Theory of Local Phase-Contrast Tomography: We have recently demonstrated that the mathematical theory of local CT, which was originally developed for absorption CT, can be applied naturally to the problem of reconstructing the location of image boundaries (i.e., discontinuities) from truncated phase-contrast projections. Our analysis suggested the use of a simple backprojection-only algorithm for reconstructing object discontinuities from truncated phase-contrast projection data that is simpler and more theoretically appropriate than use of the FBP algorithm or of the exact reconstruction algorithm for phase-contrast CT recently proposed by Bronnikov [1]. We demonstrated that the reason this simple backprojection-only procedure represents an effective local reconstruction algorithm for phase-contrast CT is that the filtering operation that must be explicitly applied to the truncated projection data in conventional absorption CT is implicitly applied to the phase-contrast projection data (before they are measured) by the act of paraxial wavefield propagation in the near-field. In this talk, we review the application of local CT reconstruction theory to the phase-contrast imaging problem. Using concepts from microlocal analysis, we describe the features of an object that can be reliably reconstructed from incomplete phase-contrast projection data. In many applications, the magnitude of the refractive index jump across an interface may provide useful information about the object of interest. For the first time, we demonstrate that detailed information regarding the magnitude of refractive index discontinuities can be extracted from the phase-contrast projections. Moreover, we show that these magnitudes can be reliably reconstructed using adaptations of algorithms originally developed for absorption local CT. B. Numerical Results: We will present extensive numerical results to corroborate our theoretical assertions. Both simulation data and experimental coherent X-ray projection data acquired at the Advanced Photon Source (APS) at Argonne National Laboratory will be utilized. We will compare the ability of the available approximate and exact reconstruction algorithms to provide images that contain accurate information regarding the location and magnitude of refractive index discontinuities. The stability of the algorithms to data noise and inconsistencies will be reported. In Fig. 1, we show some examples of phase-contrast images reconstructed from noiseless simulation data.

III. SUMMARY: In this talk, we address the important problem of reconstructing the location and magnitude of refractive index discontinuities in phase-contrast tomography. We theoretically investigate existing and novel reconstruction algorithms for reconstructing such information from truncated phase-contrast tomographic projections and numerically corroborate our findings using simulation and experimental data.

IV. REFERENCES: [1] A. Bronnikov, "Theory of quantitative phase-contrast computed tomography," Journal of the Optical Society of America A, vol. 19, pp. 472-480, 2002.

Inverse Problems and Nuclear Medicine

Anna Celler

Medical Imaging Research Group, Division of Nuclear Medicine, Vancouver Hospital and Health Sciences Centre

Date:

Location:

Abstract

Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) are two nuclear medicine (NM) imaging techniques that visualize in 3D the distributions of radiolabeled tracers inside the human body. Since the concentration of the tracer at each location in the body reflects its physiology, these techniques constitute powerful diagnostic tools to investigate organ function and changes in metabolism caused by disease processes. Currently, however, clinical studies image only stationary activity distributions, and the analysis of the results remains mainly qualitative. As it is believed that absolute quantitation of the data would greatly enhance the diagnostic accuracy of the tests, a lot of research effort is directed toward this goal. Reconstructions that create 3D tomographic images from data acquired around the patient are an example of an inverse problem application. In recent years this area has undergone rapid development, but important questions persist. The data are incomplete, noisy, and altered by physical phenomena and the acquisition process. This makes the problem ill-posed, so that even small changes in the data can produce large effects in the solution. The talk will present the basic principles of NM data acquisition and image creation and will relate them to the underlying physics. A discussion of the most important factors that limit quantitation and a short overview of correction methods will follow. Different approaches to dynamic imaging will be presented.

Reconstruction Methods in Optical Tomography and Applications to Brain Imaging

Dr. Simon Arridge

University College London

Date: August 7, 2003

Location: UBC

Abstract

In the first part of this talk I will discuss methods for reconstructing spatially varying optical absorption and scattering images from measurements of light transmitted through highly scattering media. The problem is posed in terms of non-linear optimisation, based on a forward model of diffusive light propagation, and the principal method is linearisation using the adjoint field method. In the second part I will discuss the particular difficulties involved in imaging the brain. These include:

- accounting for non- or weakly-scattering regions that do not satisfy the diffusion approximation (the void problem)
- accounting for anisotropic scattering regions
- constructing realistic 3D models of the head shape
- dynamic imaging incorporating temporal regularisation

Fast Hierarchical Algorithms for Tomography

Yoram Bresler

University of Illinois at Urbana-Champaign

Date: August 8, 2003

Location: UBC

Abstract

The reconstruction problem in practical tomographic imaging systems is recovery from samples of either the x-ray transform (set of the line-integral projections) or the Radon transform (set of integrals on hyperplanes) of an unknown object density distribution. The method of choice for tomographic reconstruction is filtered backprojection (FBP), which uses a backprojection step. This step is the computational bottleneck in the technique, with computational requirements of O(N^3) for an NxN pixel image in two dimensions, and at least O(N^4) for an NxNxN voxel image in three dimensions. We present a family of fast hierarchical tomographic backprojection algorithms, which reduce the complexities to O(N^2 log N) and O(N^3 log N), respectively. These algorithms employ a divide-and-conquer strategy in the image domain, and rely on properties of the harmonic decomposition of the Radon transform. For image sizes typical in medical applications or airport baggage security, this results in speedups by a factor of 50 or greater. Such speedups are critical for next-generation real-time imaging systems.
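The O(N^3) bottleneck is visible directly in code: each of the O(N) view angles deposits a value into every one of the N^2 pixels. A naive, unfiltered backprojection sketch in two dimensions (the geometry conventions and test data are our own illustration, not the talk's algorithm):

```python
import math

def backproject(sinogram, angles, N):
    """Naive (unfiltered) backprojection onto an N x N grid.

    sinogram[a][k] is the k-th detector bin of the projection taken at
    angle angles[a].  Cost: len(angles) * N * N operations, i.e. O(N^3)
    when the number of angles and bins both scale with N.
    """
    nbins = len(sinogram[0])
    img = [[0.0] * N for _ in range(N)]
    for angle, proj in zip(angles, sinogram):
        c, s = math.cos(angle), math.sin(angle)
        for i in range(N):
            for j in range(N):
                # signed distance from the pixel centre to the central ray
                t = (j - N / 2) * c + (i - N / 2) * s
                k = int(round(t)) + nbins // 2
                if 0 <= k < nbins:
                    img[i][j] += proj[k]
    return img

# A constant sinogram smears the same value back along every ray,
# so the centre pixel accumulates one contribution per view angle.
angles = [k * math.pi / 8 for k in range(8)]
sino = [[1.0] * 8 for _ in angles]
img = backproject(sino, angles, 8)
print(img[4][4])
```

Roughly speaking, the hierarchical algorithms reorganize exactly this triple loop, recursively combining backprojections of angularly decimated sinograms over subimages to reach O(N^2 log N).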

How Medical Science will Benefit from Mathematics of the Inverse Problem

Thomas F. Budinger

Lawrence Berkeley National Laboratory and University of California Berkeley and San Francisco

Date: August 4, 2003

Location: UBC

Abstract

Selection of in-vivo imaging modalities (i.e x-ray, MRI, PET, SPECT, light absorption, fluorescence and luminescence, current source and electrical potential) can be logically approached by evaluating biological parameters relative to the biomedical objective (e.g. cardiac apoptosis vs cardiac stem cell trafficking and vs plaque composition vs plaque surface chemistry). For that evaluation, contrast resolution, of highest importance for modality selection in most cases, is defined as the signal to background for the desired biochemical or physiological parameter. But a particular modality which has exquisite biological potential (e.g. MRI and SPECT for atherosclerosis characterization) might not be deployed in medical science because appropriate algorithms are not available to deal with problems of blurring, variable point spread function, background scatter, detection sensitivity, attenuation and refraction. Trade-offs in technique selection frequently pit contrast resolution against intrinsic instrument resolution (temporal and spatial) and depth or size of the object. For example, imaging vulnerable carotid plaques using a molecular beacon with 5:1 signal to background and with 7 mm resolution in the human neck can be argued as superior to imaging tissue characteristics with 1:3:1 signal to background at 0.5 mm resolution with MRI. Another example is the use of the multidetector CT (helical) due to its relative speed instead of MRI to characterize coronary plaques even though MRI has much better intrinsic contrast mechanisms. The superior speed of modern CT argues for its preferred use. Some old examples of how mathematics of the inverse problem have enabled medical science advances include incorporation of attenuation compensation in SPECT imaging which brought SPECT to a quantitative technique, light transmission and fluorescence emission tomography, iterative reconstruction algorithm for all methods, and incorporation of phase encoding for MRI reconstruction. 
Current work on new mathematical approaches includes endeavors to improve resolution, increase sampling speed, decrease background and achieve reliable quantitation. Examples are RF exposure reduction in MRI by selective radio-frequency pulses requiring low peak power, dose reduction by iterative reconstruction schemes in X-ray CT, implementation of coded-aperture models for emission tomography, 3D and time-reversal ultrasound, a multitude of transmission and stimulated-emission methods for wavelengths from 400 nm to 3 cm, and electrical potential and electric source imaging. Many of these subjects will be discussed at this workshop, and all rely on innovations in mathematics applied to the inverse problem.

New Multiscale Thoughts on Limited-Angle Tomography

Emmanuel Candes

California Institute of Technology

Date: August 4, 2003

Location: UBC

Abstract

This talk is concerned with the problem of reconstructing an object from noisy limited-angle tomographic data---a problem which arises in many important medical applications. Here, a central question is to describe which features can be reconstructed accurately from such data and how well, and which features cannot be recovered.

We argue that curvelets, a recently developed multiscale system, may have great potential in this setting. Conceptually, curvelets are multiscale elements with a useful microlocal structure which makes them especially adapted to limited-angle tomography. We develop a theory of optimal rates of convergence which quantifies the fact that features which are microlocally in the "good" direction can be recovered accurately, and which shows that adapted curvelet-biorthogonal decompositions with thresholding can achieve quantitatively optimal rates of convergence. We hope to report on early numerical results.

Computed Imaging for Near-Field Microscopy

P. Scott Carney

University of Illinois at Urbana-Champaign

Date: August 7, 2003

Location: UBC

Abstract

Near-field optics provides a means to observe the electromagnetic field intensity in close proximity to a scattering or radiating sample. Modalities such as near-field scanning optical microscopy (NSOM) and photon scanning tunneling microscopy (PSTM) accomplish these measurements by placing a small probe close to the object (in the "near zone") and precisely controlling its position. The data are usually plotted as a function of probe position, and the resulting figure is called an image. These modalities provide a means to circumvent the classical Rayleigh-Abbe resolution limits, providing resolution on scales of a small fraction of a wavelength.

There are a number of problems associated with the interpretation of near-field images. If the probe is slightly displaced from the surface of the object, the image quality degrades dramatically. If the sample is thick, the subsurface features are obscured. The quantitative connection between the measurements and the optical properties of the sample is unknown. To resolve all these problems it is desirable to solve the inverse scattering problem (ISP) for near-field optics. The solution of the ISP provides a means to tomographically image thick samples and assign quantitative meaning to the images. Furthermore, data taken at distances up to one wavelength from the sample may be processed to obtain a focused, or reconstructed image of the sample at subwavelength scales.

Preferred Pitches in Multislice Spiral CT from Periodic Sampling

Adel Faridani

Oregon State University

Date: August 4, 2003

Location: UBC

Abstract

Joint work with Larry Gratton. Applications of sampling theory in tomography include the identification of efficient sampling schemes; a qualitative understanding of some artifacts; numerical analysis of reconstruction methods; and efficient interpolation schemes for non-equidistant sampling. In this talk we present an application of periodic sampling theorems in three-dimensional multislice helical tomography, shedding light on the question of preferred pitches.

Spherical Means and Thermoacoustic Tomography

David Finch

Oregon State University

Date: August 6, 2003

Location: UBC

Abstract

In thermoacoustic tomography, impinging radiation causes local heating which generates sound waves. These are measured by transducers, and the problem is to recover the density of emitters. This may be modelled as the recovery of the initial value of the time derivative of the solution of the wave equation from knowledge of the solution on (part of) the boundary of the domain. This talk, in conjunction with the talk by Sarah Patch, will report on recent work by the author, S. Patch and Rakesh on uniqueness and stability and an inversion formula, in odd dimensions, for the special case when measurements are taken on an entire sphere surrounding the object. The well-known relation between spherical means and solutions of the wave equation then implies results on recovery of a function from its spherical means.
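The link between functions and their spherical means is easy to probe numerically. As a minimal illustrative sketch (not the authors' inversion formula), the following Monte Carlo routine estimates the mean of a function over a sphere in three dimensions; the function and parameters are hypothetical examples:

```python
import numpy as np

def spherical_mean(f, center, r, n=20000, rng=None):
    """Monte Carlo estimate of the average of f over the sphere of
    radius r about `center` in R^3."""
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform unit directions
    return f(center + r * v).mean()

# Sanity check: f(x) = |x|^2 is constant on spheres about the origin,
# so its spherical mean at radius r is exactly r^2.
f = lambda p: (p ** 2).sum(axis=1)
m = spherical_mean(f, np.zeros(3), r=2.0)
```

Recovering a function from a family of such means, taken over spheres centered on the measurement surface, is the inverse problem discussed in the talk.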

Transient Elastography and Supersonic Shear Imaging

Mathias Fink

Laboratoire Ondes et Acoustique ESPCI, Paris

Date: August 6, 2003

Location: UBC

Abstract

Palpation is a standard medical practice which relies on qualitative estimation of the tissue Young's modulus E. In soft tissues the Young's modulus is directly proportional to the shear modulus μ (E = 3μ). This explains the great interest in developing quantitative imaging of the shear modulus distribution map. This can be achieved by observing with NMR or with ultrasound the propagation of low-frequency shear waves (between 50 Hz and 500 Hz) in the body. The celerity of these waves is relatively low (between 1 and 10 m/s), and the waves can be produced either by vibrators coupled to the body or by ultrasonic radiation pressure. We have developed an ultra-high-rate ultrasonic scanner that can give 10,000 ultrasonic images per second of the body. With such a high frame rate we can follow in real time the propagation of transient shear waves, and from the spatio-temporal evolution of the displacement fields we can use inversion algorithms to recover the shear modulus map. New inversion algorithms can be used that are no longer limited by diffraction limits. In order to obtain an unbiased shear elasticity map, different configurations of shear sources induced by the radiation pressure of focused transducer arrays are used. A very interesting configuration that induces quasi-plane shear waves will be described. It uses a sonic shear source that moves at supersonic velocity, created by a very peculiar beam forming in the transmit mode. In vitro and in vivo results will be presented that demonstrate the interest of this new transient elastographic technique.
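The quoted celerities translate directly into moduli: in linear elasticity the shear wave speed satisfies c = sqrt(μ/ρ), so μ = ρc², and the abstract's proportionality then gives E = 3μ. A minimal sketch, assuming a soft-tissue density of ρ ≈ 1000 kg/m³ (an assumed value, not stated in the abstract):

```python
RHO = 1000.0  # kg/m^3, assumed soft-tissue density

def moduli_from_celerity(c):
    """Shear modulus mu = rho * c**2 (Pa) from shear wave celerity c (m/s),
    and Young's modulus via E = 3 * mu for soft tissue."""
    mu = RHO * c ** 2
    E = 3.0 * mu
    return mu, E

for c in (1.0, 10.0):  # the 1-10 m/s range quoted in the abstract
    mu, E = moduli_from_celerity(c)
```

The 1-10 m/s range quoted above thus spans shear moduli from about 1 kPa to 100 kPa, which is why wave-speed maps serve as quantitative stiffness maps.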

Effects of Target Non-localization on the Contrast of Optical Images: Lessons for Inverse Reconstruction

Amir Gandjbakhche

NIH

Date: August 7, 2003

Location: UBC

Abstract

N/A

A general inversion formula for cone beam CT

Alexander Katsevich

University of Central Florida

Date: August 4, 2003

Location: UBC

Abstract

Given a rather general weight function n, we derive a new cone beam transform inversion formula. The derivation is explicitly based on Grangeat's formula and the classical 3D Radon transform inversion. The new formula is theoretically exact and is represented by a two-dimensional integral. We show that if the source trajectory C is complete (and satisfies two other very mild assumptions), then substituting the simplest uniform weight n gives a convolution-based filtered back-projection algorithm. However, this easy choice is not always optimal from the point of view of practical applications. Uniform weight works well for closed trajectories, but the resulting algorithm does not solve the long object problem if C is not closed. In the latter case one has to use the flexibility in choosing n and find the weight that gives an inversion formula with the desired properties. We show how this can be done for spiral CT. It turns out that the two inversion algorithms for spiral CT proposed earlier by the author are particular cases of the new formula. For general trajectories the choice of weight should be done on a case by case basis.

The Green's Function for the Radiative Transport Equation

Arnold Kim

Stanford University

Date: August 7, 2003

Location: UBC

Abstract

N/A

Reconstruction of conductivities in the plane

Kim Knudsen

Aalborg University

Date: August 5, 2003

Location: UBC

Abstract

Joint work with Jennifer Mueller, Samuli Siltanen and Alex Tamasan. In this talk I will consider the mathematical problem behind Electrical Impedance Tomography, the inverse conductivity problem. The problem is to reconstruct an isotropic conductivity distribution in a body from knowledge of the voltage-to-current map at the boundary of the body. I will discuss the two-dimensional problem and give a reconstruction algorithm, which is direct and mathematically exact. The method is based on the so-called dbar-method of inverse scattering. Both theoretical validation of the algorithm and numerical examples will be given.

Inverse scattering problem with a random potential

Matti Lassas

Rolf Nevanlinna Institute

Date: August 6, 2003

Location: UBC

Abstract

In this talk we consider scattering from random media and the corresponding inverse problem. As a stereotype of inverse scattering problems, we consider the Schrödinger equation $$ (\Delta+q+k^2)u(x,y,k)=\delta_y $$ with a random potential $q(x)$. We also briefly discuss the relation of this problem to medical imaging. The potential $q(x)$ is assumed to be a Gaussian random function whose covariance function $E(q(x)q(y))$ is smooth outside the diagonal. We show how the realizations of the amplitude of the scattered field $|u_s(x,y,k)|$, averaged over the frequency parameter $k>1$, can be used to determine stochastic properties of $q$, in particular the principal symbol of its covariance operator. This corresponds to finding the correlation length function of the random medium. In contrast to the applied literature, we approach the problem with methods that do not require approximations. From a technical point of view, we analyze the scattering from the random potential by combining methods of harmonic and microlocal analysis with stochastics, in particular the theory of ergodic processes.

Interior Elastodynamics Inverse Problems: Recovery of Shear Wavespeed in Transient Elastography

Dr. Joyce McLaughlin

RPI

Date: August 6, 2003

Location: UBC

Abstract

For this new inverse problem the data are the time- and space-dependent interior displacement measurements of a propagating elastic wave. The medium is initially at rest, with a wave initiated at the boundary or at an interior point by a broadband source. A property of the wave is that it has a propagating front. For this new problem we present well-posedness results and an algorithm that recovers the shear wavespeed from the time- and space-dependent position of the propagating wavefront. We target the application to transient elastography, where images are created of the shear wavespeed in biological tissue. The goal is to create a medical diagnostic tool in which abnormal tissue is identified by its abnormal shear stiffness characteristics. Included in our presentation are images of stiffness changes recovered by our algorithms using data measured in the laboratory of Mathias Fink.
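A one-dimensional caricature conveys the arrival-time idea (this is an illustrative toy under assumed notation, not the authors' algorithm): if T(x) is the time at which the front reaches position x, then the local wavespeed is c(x) = 1/|T'(x)|, recoverable by finite differences:

```python
import numpy as np

def wavespeed_from_arrival(x, T):
    """Recover the local wavespeed c(x) = 1 / |dT/dx| from arrival
    times T of the wavefront sampled on a 1D grid x."""
    dTdx = np.gradient(T, x)
    return 1.0 / np.abs(dTdx)

# Synthetic check: a homogeneous medium with c = 2 m/s has T(x) = x / 2.
x = np.linspace(0.0, 1.0, 101)
T = x / 2.0
c = wavespeed_from_arrival(x, T)
```

In higher dimensions the same relation becomes the eikonal equation |∇T| = 1/c, and the stability of differentiating measured arrival times is where the well-posedness analysis matters.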

Reconstructions of Chest Phantoms by the D-Bar Method for Electrical Impedance Tomography

Jennifer Mueller

Colorado State University

Date: August 5, 2003

Location: UBC

Abstract

In this talk a direct (noniterative) reconstruction algorithm for EIT in the two-dimensional geometry is presented. The algorithm is based on the mathematical uniqueness proof by A. Nachman [Ann. of Math. 143 (1996)] for the 2-D inverse conductivity problem. Reconstructions from experimental data collected on a saline-filled tank containing agar heart and lung phantoms are presented, and the results are compared to reconstructions by the NOSER algorithm on the same data.

3D Emission Tomography via Plane Integrals

Frank Natterer

University of Münster

Date: August 8, 2003

Location: UBC

Abstract

In emission tomography one reconstructs the activity distribution of a radioactive tracer in the human body by measuring the activity outside the body using collimated detectors. Usually the collimators single out lines along which the measurements are taken. In a novel design (the Solstice camera), weighted plane integrals are measured instead. By a statistical error analysis it can be shown that the Solstice concept is superior to the classical line scan for high resolution, making Solstice attractive for small animal imaging. By a suitable approximation of the weight function we can reduce the reconstruction problem to Marr's two-stage algorithm for the 3D Radon transform, leading to an efficient algorithm. In order to account for attenuation we approximate the 3D problem by the 2D attenuated Radon transform, which can be inverted by Novikov's algorithm. We show reconstructions from simulated and measured data.

Information Geometry, Alternating Minimizations, and Transmission Tomography

Joseph A. O'Sullivan

Washington University in St. Louis

Date: August 8, 2003

Location: UBC

Abstract

Many medical imaging problems can be formulated as statistical inverse problems to which estimation theoretic methods can be applied. Statistical likelihood functions can be viewed in information-theoretic terms as well. Maximizations of statistical likelihood functions for several image estimation problems, including emission and transmission tomography, can be reformulated as double minimizations of information divergences. Properties of minimizations of I-divergences are studied in information geometry. This more general viewpoint yields new characterizations of algorithms and new algorithms for transmission tomography. These new algorithms are described in detail as are medical imaging applications of transmission tomography in the presence of metal.

Imaging in Clutter

George Papanicolaou

Stanford University

Date: August 6, 2003

Location: UBC

Abstract

Array imaging, like synthetic aperture radar, does not produce good reflectivity images when there is clutter, or random scattering inhomogeneities, between the reflectors and the array. Can the blurring effects of clutter be controlled? I will discuss this issue in some detail and show that if bistatic array data is available and if the data is suitably preprocessed to stabilize clutter effects then a good deal can be done to minimize blurring.

Thermoacoustic Tomography - An Inherently 3D Generalized Radon Inversion Problem

Sarah Patch

GE Medical Systems

Date: August 6, 2003

Location: UBC

Abstract

Joint work with D. Finch and Rakesh. A hybrid imaging technique using RF excitation measures ultrasound (US) data. Cancerous tissue is hypothesized to preferentially absorb RF energy, heating more and expanding faster than surrounding healthy tissue. Pressure waves therefore emanate from cancerous inclusions and are detected by US transducers located on the surface of a sphere surrounding the object being imaged. A formula for the contrast function is derived in terms of data measured over the entire imaging surface. Existence and uniqueness for the inverse problem also hold when the transducers cover only a hemisphere. However, explicit inversion for this clinically realizable case remains an open problem.

Limited Data Tomography in science and industry

Eric Todd Quinto

Tufts University

Date: August 7, 2003

Location: UBC

Abstract

Tomography has revolutionized diagnostic medicine, scientific testing, and industrial nondestructive evaluation, and some of the most difficult problems involve limited data, in which some data are missing. This talk will describe two practical problems and give the mathematical background. The first problem, in industrial nondestructive evaluation (joint with Perceptics, Inc.), uses limited-angle exterior CT to reconstruct a rocket mockup. The second, in electron microscopy (joint with Sidec Technologies), uses limited angle local CT to reconstruct RNA and a virus.

ECGI : A Noninvasive Imaging Modality for Cardiac Electrophysiology and Arrhythmias

Yoram Rudy

Case Western Reserve

Date: August 4, 2003

Location: UBC

Abstract

N/A

Nonlinear image reconstruction in optical tomography using an iterative Newton-Krylov method

Martin Schweiger

University College London

Date: August 7, 2003

Location: UBC

Abstract

Image reconstruction in optical tomography can be formulated as a nonlinear least squares optimisation problem. This paper describes an inexact regularised Gauss-Newton method to solve the normal equation, based on a projection onto the Krylov subspaces. The Krylov linear solver step addresses the Hessian only in the form of matrix-vector multiplications. We can therefore utilise an implicit definition of the Hessian, which only requires the computation of the Jacobian and the regularisation term. This method avoids the explicit formation of the Hessian matrix which is often intractable in large-scale three-dimensional reconstruction problems. We apply the method to the reconstructions of 3-D test problems in optical tomography, whereby we recover the volume distribution of absorption and scattering coefficients in a heterogeneous highly scattering medium from boundary measurements of infrared light transmission. We show that the Krylov method compares favourably to the explicit calculation of the Hessian both in terms of memory space and computational cost.
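The matrix-free structure described above can be sketched generically. The following is an illustrative sketch under assumed names (`r`, `jvp`, `vjp` are hypothetical handles for the residual and Jacobian-vector products), not the authors' implementation: each Gauss-Newton step solves the regularised normal equation by conjugate gradients, touching the Hessian only through products J v and Jᵀ w.

```python
import numpy as np

def gauss_newton_cg(r, jvp, vjp, p0, lam=1e-3, outer=20, inner=50):
    """Inexact regularised Gauss-Newton: solve (J^T J + lam I) d = -J^T r
    by conjugate gradients, using only Jacobian-vector products."""
    p = p0.copy()
    for _ in range(outer):
        b = -vjp(p, r(p))              # right-hand side -J^T r(p)
        if np.linalg.norm(b) < 1e-12:  # already at a stationary point
            break
        d = np.zeros_like(p)
        g = -b                         # CG residual A d - b, with d = 0
        s = b.copy()                   # first search direction
        for _ in range(inner):
            As = vjp(p, jvp(p, s)) + lam * s   # implicit Hessian action
            alpha = (g @ g) / (s @ As)
            d = d + alpha * s
            g_new = g + alpha * As
            if np.linalg.norm(g_new) < 1e-12:
                break
            beta = (g_new @ g_new) / (g @ g)
            g = g_new
            s = -g + beta * s
        p = p + d
    return p

# Toy usage: fit y = p0*x + p1, so J has columns [x, 1].
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0
r = lambda p: p[0] * x + p[1] - y
jvp = lambda p, v: v[0] * x + v[1]                  # J v
vjp = lambda p, w: np.array([x @ w, w.sum()])       # J^T w
p = gauss_newton_cg(r, jvp, vjp, np.zeros(2))
```

The point of the sketch is that the Hessian approximation JᵀJ is never formed: only its action on a vector is needed, which is what makes the approach viable for large 3-D problems.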

Inversion of the Bloch Equation

Meir Shinnar

Rutgers University of Medicine and Dentistry of New Jersey

Date: August 5, 2003

Location: UBC

Abstract

N/A

The Inverse Polynomial Reconstruction Method for Two Dimensional Image Reconstruction

Bernie Shizgal

University of British Columbia

Date: August 8, 2003

Location: UBC

Abstract

N/A

Three-dimensional X-ray imaging with few radiographs

Samuli Siltanen

Gunma University

Date: August 6, 2003

Location: UBC

Abstract

In medical X-ray tomography, the three-dimensional structure of tissue is reconstructed from a collection of projection images. In many practical imaging situations only a small number of truncated projections is available, from a limited range of view. Traditional reconstruction algorithms, such as filtered backprojection (FBP), do not give satisfactory results when applied to such data. Instead of FBP, Bayesian inversion is suggested for reconstruction. In this approach, a priori information is used to compensate for the incomplete information in the measurement data. Examples with in vitro measurements from dental radiology and surgical imaging are presented.
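A minimal sketch of the Bayesian idea on a toy linear problem (a random matrix stands in for a sparse-view projection geometry, and a zero-mean Gaussian prior is the simplest choice; practical work uses more informative priors): with Gaussian noise and prior, the maximum a posteriori estimate reduces to a regularised least-squares solve even when the system is badly underdetermined.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 15                   # 40 unknowns, only 15 measurements
A = rng.normal(size=(m, n))     # stand-in for a few-view projection matrix
x_true = np.zeros(n)
x_true[10:20] = 1.0             # a simple "inclusion" to recover
y = A @ x_true                  # noiseless sparse-view data

# MAP estimate under a zero-mean Gaussian prior (Tikhonov form):
# x_map = argmin ||A x - y||^2 + lam * ||x||^2
lam = 1e-2
x_map = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

With m < n the data alone cannot determine x; the prior supplies the missing information and selects one reconstruction among the infinitely many that fit the measurements.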

Applications of Diffusion MRI to Electrical Impedance Tomography

David Tuch

MIT

Date: August 5, 2003

Location: UBC

Abstract

Diffusion MRI measures the molecular self-diffusion of the endogeneous water in tissue. In this talk, I will discuss various applications of diffusion MRI to electrical impedance tomography (EIT). In particular, I will discuss (i) how the anisotropy information from diffusion tensor imaging (DTI) can inform the EIT forward model, and (ii) how particular transport conservation principles measured with DTI can provide priors or hard constraints for the EIT inverse problem. I will also discuss some recent work on mapping non-tensorial diffusion using spherical tomographic inversions of the diffusion signal.

### Cascade Topology Seminar

The best picture of Poincare's homology sphere

David Gillman

UCLA

Date: November 2, 2002

Location: UBC

Abstract

N/A

Homotopy self-equivalences of 4-manifolds

Ian Hambleton

McMaster University

Date: November 2, 2002

Location: UBC

Abstract

N/A

Skein theory in knot theory and beyond

Vaughan Jones

University of California, Berkeley

Date: November 3, 2002

Location: UBC

Abstract

N/A

New perspectives on self-linking

Dev Sinha

University of Oregon

Date: November 3, 2002

Location: UBC

Abstract

N/A

Topological robotics; topological complexity of projective spaces

Sergey Yuzvinsky

University of Oregon

Date: November 2, 2002

Location: UBC

Abstract

N/A

### Thematic Programme on Asymptotic Geometric Analysis

Entropy increases at every step

Shiri Artstein

Tel Aviv University

Date: July 9, 2002

Location: UBC

Abstract

N/A

Convolution Inequalities in Convex Geometry

Keith Ball

University College London

Date: July 4, 2002

Location: UBC

Abstract

The talk presents a new approach to entropy via a local reverse Brunn-Minkowski inequality. Applications will be presented by other speakers.

Optimal Measure Transportation

Franck Barthe

Université de Marne-la-Vallée

Date: July 9, 2002

Location: UBC

Abstract

N/A

Almost sure weak convergence and concentration for the circular ensembles of Dyson

Gordon Blower

Lancaster University

Date: July 12, 2002

Location: UBC

Abstract

N/A

On risk aversion and optimal terminal wealth

Christer Borell

Chalmers University

Date: July 11, 2002

Location: UBC

Abstract

N/A

Density and current interpolation

Yann Brenier

CNRS, Nice

Date: July 12, 2002

Location: UBC

Abstract

We discuss different ways of interpolating densities, including the Moser lemma and the Monge-Kantorovich method. Natural extensions to the interpolation of currents will be addressed.

Asymptotic behaviour of fast diffusion equations

Jose A. Carrillo

Universidad de Granada

Date: July 11, 2002

Location: UBC

Abstract

N/A

Fast Diffusion to self-similarity: complete spectrum, long-time asymptotics and numerology

Jochen Denzler

University of Tennessee

Date: July 11, 2002

Location: UBC

Abstract

N/A

Measure Concentration, Transportation Cost, and Functional Inequalities

Michel Ledoux

University of Toulouse

Date: July 8, 2002

Location: UBC

Abstract

We present a triple description of the concentration of measure phenomenon: geometric (through Brunn-Minkowski inequalities), measure-theoretic (through transportation cost inequalities) and functional (through logarithmic Sobolev inequalities), and investigate the relationships between these various viewpoints by means of hypercontractive bounds. This expository introduction, directed at students and newcomers to the field, was already delivered at the Edinburgh ICMS meeting last April.
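Concentration of measure is easy to observe numerically. As an illustrative sketch (an example of ours, not from the lectures): the 1-Lipschitz coordinate function f(x) = x₁ on the unit sphere S^{n-1} concentrates around its median 0, with fluctuations of order 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_samples(n, k):
    """k points drawn uniformly from the unit sphere S^{n-1}."""
    g = rng.normal(size=(k, n))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

# Empirical spread of f(x) = x_1 in dimensions 10 and 1000: the
# fluctuations shrink like 1/sqrt(n) as the dimension grows.
spreads = {n: sphere_samples(n, 5000)[:, 0].std() for n in (10, 1000)}
```

In dimension 1000 the spread is already about 0.03: almost all of the sphere's mass sits in a thin band around the equator, which is the phenomenon the three descriptions above quantify.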

Nonlinear diffusion to self-similarity: spreading versus shape via gradient flow

Robert McCann

University of Toronto

Date: July 11, 2002

Location: UBC

Abstract

N/A

Geometric inequalities of hyperbolic type

Vitali Milman

Tel Aviv University

Date: July 10, 2002

Location: UBC

Abstract

N/A

Entropy jumps in the presence of a spectral gap

Assaf Naor

Microsoft Corporation

Date: July 9, 2002

Location: UBC

Abstract

N/A

Free probability and free diffusion

Roland Speicher

Queen's University

Date: July 12, 2002

Location: UBC

Abstract

N/A

Concentration of non-Lipschitz functions and combinatorial applications

Van Vu

University of California at San Diego

Date: July 11, 2002

Location: UBC

Abstract

We survey recent results concerning the concentration of functions with large Lipschitz coefficients and their applications in combinatorial settings.

Optimal paths related to transport problems

Qinglan Xia

Rice University

Date: July 10, 2002

Location: UBC

Abstract

N/A

(n,d,lambda)-graphs in Extremal Combinatorics

Noga Alon

Tel Aviv University

Date: July 18, 2002

Location: UBC

Abstract

N/A

Sylvester's Question, Convex Bodies, Limit Shape

Imre Barany

University College London

Date: July 19, 2002

Location: UBC

Abstract

N/A

Transportation versus Rearrangement

Franck Barthe

Université de Marne-la-Vallée

Date: July 15, 2002

Location: UBC

Abstract

N/A

How to Compute a Norm?

Alexander Barvinok

University of Michigan

Date: July 19, 2002

Location: UBC

Abstract

N/A

Phase Transition for the Biased Random Walk on Percolation Clusters

Noam Berger

University of California, Berkeley

Date: July 17, 2002

Location: UBC

Abstract

N/A

Phase Transition in the Random Partitioning Problem

Christian Borgs

Microsoft Research

Date: July 17, 2002

Location: UBC

Abstract

N/A

New Results on Green's Functions and Spectra for Discrete Schroedinger Operators

Jean Bourgain

Institute for Advanced Study

Date: July 22, 2002

Location: UBC

Abstract

N/A

On Optimal Transportation Theory

Yann Brenier

CNRS

Date: July 15, 2002

Location: UBC

Abstract

N/A

Recent Results in Combinatorial Number Theory

Mei-Chu Chang

University of California, Riverside

Date: July 17, 2002

Location: UBC

Abstract

N/A

Graphical Models of the Internet and the Web

Jennifer Chayes

Microsoft Research

Date: July 17, 2002

Location: UBC

Abstract

N/A

Random Sections and Random Rotations of High Dimensional Convex Bodies

Apostolos Giannopoulos

University of Crete

Date: July 19, 2002

Location: UBC

Abstract

N/A

On the Sections of Product Spaces and Related Topics

Efim Gluskin

Tel Aviv University

Date: July 15, 2002

Location: UBC

Abstract

N/A

The Poisson Cloning Model for Random Graphs with Applications to k-core Problems, Random 2-SAT, and Random Digraphs

Jeong Han Kim

Microsoft Research

Date: July 16, 2002

Location: UBC

Abstract

We will introduce a new model for random graphs, called the Poisson cloning model, in which all degrees are i.i.d. Poisson random variables. After showing how close this model is to the usual random graph model G(n; p), we will prove threshold phenomena for the k-core problem. The k-core problem is the question of when the random graph G(n; p) contains a k-core, where the k-core of a graph is its largest subgraph with minimum degree at least k. This, in particular, improves earlier bounds of Pittel, Spencer & Wormald. The method can be easily generalized to prove similar results for random hypergraphs. If time allows, we will also discuss the scaling window of random 2-SAT and/or the giant (strong) component of random digraphs.
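The peeling procedure underlying the k-core is simple to state in code. The following is an illustrative sketch (graph size and edge density are our own choices, not from the talk): repeatedly delete vertices of degree below k; the survivors form the k-core. For k = 3 a giant 3-core appears in G(n, p) with average degree c only once c passes a critical threshold (roughly 3.35), which is the kind of threshold phenomenon the Poisson cloning model analyzes.

```python
import random

def k_core(adj, k):
    """Peel off vertices of degree < k until none remain; the survivors
    form the k-core, the maximal subgraph with minimum degree >= k."""
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    stack = [v for v in adj if deg[v] < k]
    while stack:
        v = stack.pop()
        if v not in alive:
            continue
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                if deg[u] < k:
                    stack.append(u)
    return alive

# Sample G(n, p) with p = c/n, i.e. average degree c, then peel.
rng = random.Random(0)
n, c = 2000, 5.0
adj = {v: set() for v in range(n)}
for v in range(n):
    for u in range(v + 1, n):
        if rng.random() < c / n:
            adj[v].add(u)
            adj[u].add(v)
core = k_core(adj, 3)
```

At c = 5, well above the threshold, the peeling leaves a large 3-core; just below the threshold the same procedure cascades and empties the graph.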

Results and Problems around Borsuk's Conjecture

Gil Kalai

Hebrew University

Date: July 19, 2002

Location: UBC

Abstract

N/A

Random Submatrices of a Given Matrix

Ravindran Kannan

Yale University

Date: July 23, 2002

Location: UBC

Abstract

N/A

The Regularity Lemma for Sparse Graphs

Yoshiharu Kohyakawa

University of São Paulo

Date: July 18, 2002

Location: UBC

Abstract

One of the fundamental tools in asymptotic graph theory is the well-known regularity lemma of Szemerédi. In essence, the regularity lemma tells us that any large graph may be decomposed into a bounded number of quasi-random, induced bipartite graphs. Thus, this lemma is a powerful tool for detecting and making transparent the random-like behaviour of large deterministic graphs. Furthermore, in general, the quasi-random structure that the lemma provides is amenable to deep analysis, and this makes the lemma a very important tool.

The quasi-random bipartite graphs that Szemerédi's lemma uses in its decomposition are certain graphs in which the edges are uniformly distributed. The measure of uniformity is such that this concept becomes trivial for graphs of vanishing density. To handle sparse graphs, one may adjust this notion of uniform edge distribution in a natural way, and it is a routine matter to check that the original proof extends to this notion, provided we restrict ourselves to graphs of vanishing density that do not contain `dense patches'.

However, the quasi-random structure that the lemma reveals in this case is not too informative, and this puts into question the applicability of this variant of the lemma to `sparse graphs'. Nevertheless, there have been some successful applications of the lemma in this context. In this talk, we shall concentrate on the difficulties one faces and how one can overcome them in certain situations.

Algorithmic Applications of Graph Eigenvalues and Related Parameters

Michael Krivelevich

Tel Aviv University

Date: July 23, 2002

Location: UBC

Abstract

N/A

Tiling Problems and Spectral Sets

Izabella Laba

University of British Columbia

Date: July 22, 2002

Location: UBC

Abstract

N/A

Some Estimates of Norms of Random Matrices (non iid case)

Rafal Latala

Warsaw University

Date: July 22, 2002

Location: UBC

Abstract

N/A

Discrete Analytic Functions and Global Information from Local Observation

Laszlo Lovasz

Microsoft Research

Date: July 23, 2002

Location: UBC

Abstract

N/A

Concentration and Random Permutations

Colin McDiarmid

Oxford University

Date: July 15, 2002

Location: UBC

Abstract

N/A

Some phenomena of large dimension in Convex Geometric Analysis

Vitali Milman

Tel Aviv University

Date: July 16, 2002

Location: UBC

Abstract

N/A

Metric Ramsey-Type Phenomena

Assaf Naor

Microsoft Corporation

Date: July 19, 2002

Location: UBC

Abstract

In this talk we will discuss the problem of finding lower bounds for the distortion required to embed certain metric spaces in Hilbert space. We will show that these problems are intimately connected to certain Poincare type inequalities on graph metrics, and we will discuss recent developments which are based on the analysis of the behavior of Markov chains in metric spaces. These new methods allow us to strengthen known results by showing that large subsets of certain natural graphs must be significantly distorted if one wishes to embed them in Hilbert space.

On a Non-symmetric Version of the Khinchine-Kahane Inequality

Krzysztof Oleszkiewicz

Warsaw University

Date: July 16, 2002

Location: UBC

Abstract

N/A

Some Large Dimension Problems of Mathematical Physics

Leonid Pastur

Université Pierre et Marie Curie

Date: July 16, 2002

Location: UBC

Abstract

N/A

Crayola and Dice: Graph Colouring via the Probabilistic Method

Bruce Reed

McGill University

Date: July 18, 2002

Location: UBC

Abstract

We survey recent results on graph colouring via the probabilistic method. Tools used are the Local Lemma and Concentration inequalities.

Ramsey Properties of Random Structures

Andrzej Rucinski

Adam Mickiewicz University

Date: July 18, 2002

Location: UBC

Abstract

N/A

Distances between Sections of Convex Bodies

Mark Rudelson

University of Missouri

Date: July 19, 2002

Location: UBC

Abstract

N/A

Probabilistically Checkable Proofs (PCP) and Hardness of Approximation

Shmuel Safra

Tel Aviv University

Date: July 23, 2002

Location: UBC

Abstract

N/A

<2, well embed in l_1^{an}, for any a >1

Gideon Schechtman

The Weizmann Institute

Date: July 19, 2002

Location: UBC

Abstract

I'll discuss a recent result of Johnson and myself, a particular case of which is the statement in the title.

Introduction to the Szemerédi Regularity Lemma

Miklos Simonovits

Hungarian Academy of Science

Date: July 18, 2002

Location: UBC

Abstract

N/A

The Percolation Phase Transition on the n-cube

Gordon Slade

University of British Columbia

Date: July 17, 2002

Location: UBC

Abstract

N/A

Zeroes of Random Analytic Functions

Mikhail Sodin

Tel Aviv University

Date: July 22, 2002

Location: UBC

Abstract

N/A

On the Largest Eigenvalue of a Random Subgraph of the Hypercube

Alexander Soshnikov

University of California at Davis

Date: July 22, 2002

Location: UBC

Abstract

N/A

On the Ramsey- and Turan-type Problems

Benjamin Sudakov

Princeton University

Date: July 18, 2002

Location: UBC

Abstract

N/A

On Pseudorandom Matrices

Stanislaw Szarek

Université Paris VI

Date: July 22, 2002

Location: UBC

Abstract

N/A

Families of Random Sections of Convex Bodies

Nicole Tomczak-Jaegermann

University of Alberta

Date: July 16, 2002

Location: UBC

Abstract

N/A

Expander Graphs - where Combinatorics and Algebra Compete and Cooperate

Avi Wigderson

Institute for Advanced Study

Date: July 23, 2002

Location: UBC

Abstract

Expansion of graphs can be given equivalent definitions in combinatorial and algebraic terms. This is the most basic connection between combinatorics and algebra illuminated by expanders and the quest to construct them. The talk will survey how fertile this connection has been for both fields, focusing on recent results.

There are infinitely many irrational values of the zeta function at the odd integers

Keith Ball

University College London

Date: July 24, 2002

Location: UBC

Abstract

N/A

Applications of zonoids to Asymptotic Geometric Analysis

Yehoram Gordon

Haifa

Date: July 24, 2002

Location: UBC

Abstract

N/A

The Kakeya conjecture (Part 1)

Izabella Laba

University of British Columbia

Date: July 25, 2002

Location: UBC

Abstract

N/A

The Kakeya conjecture (Part 2)

Izabella Laba

University of British Columbia

Date: July 25, 2002

Location: UBC

Abstract

N/A

Stability of uniqueness results for convex bodies

Rolf Schneider

Freiburg

Date: July 25, 2002

Location: UBC

Abstract

N/A

Minkowski's existence theorem and some applications

Rolf Schneider

Freiburg

Date: July 24, 2002

Location: UBC

Abstract

N/A

Random Matrices: Gaussian Unitary Ensemble and Beyond (Part 1)

Alexander Soshnikov

Davis

Date: July 24, 2002

Location: UBC

Abstract

N/A

Random Matrices: Gaussian Unitary Ensemble and Beyond (Part 2)

Alexander Soshnikov

Davis

Date: July 25, 2002

Location: UBC

Abstract

N/A

Random Matrices: Gaussian Unitary Ensemble and Beyond (Part 3)

Alexander Soshnikov

Davis

Date: July 26, 2002

Location: UBC

Abstract

N/A

Noncommutative M-structure and the interplay of algebra and norm for operator algebras

David Blecher

Houston

Date: August 6, 2002

Location: UBC

Abstract

We report on a recent joint paper with Smith and Zarikian, following on from work of the author, Effros and Zarikian on noncommutative M-structure. Certain nonlinear but convex equations play a role. We discuss some extensions of these results, and some related ideas.

Operator spaces as `quantized' Banach spaces

Edward Effros

UCLA

Date: August 6, 2002

Location: UBC

Abstract

In the beginning it appeared that linear spaces of operators would have a theory much like that for Banach spaces. This misperception grew out of a series of remarkable discoveries, such as Arveson's version of the Hahn-Banach Theorem, Ruan's axiomatization of the operator spaces, and the theory of projective and injective tensor products. The problems of using Banach space theory as one's sole guide became apparent when one considered such classical notions as local reflexivity. Owing to the availability of modern operator algebra theory, researchers have made great strides in understanding the beautiful and unexpected nature of these spaces.

Random Matrices and Magic Squares

Alexander Gamburd

Stanford

Date: August 6, 2002

Location: UBC

Abstract

Characteristic polynomials of random unitary matrices have been intensively studied in recent years: by number theorists in connection with the Riemann zeta-function, and by theoretical physicists in connection with Quantum Chaos. In particular, Haake and collaborators have computed the variance of the coefficients of these polynomials and raised the question of computing the higher moments. The answer, obtained in recent joint work with Persi Diaconis, turns out to be intimately related to counting integer stochastic matrices (magic squares).
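
As a small numerical illustration of the object in this abstract (not code from the talk): the sketch below samples Haar-distributed unitary matrices via the standard QR-with-phase-correction recipe and Monte Carlo estimates the second moment E|c_j|^2 of a secular coefficient c_j of the characteristic polynomial, the quantity whose higher moments the abstract relates to magic squares. All function names and the chosen sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Sample a Haar-distributed n x n unitary: QR of a complex Gaussian,
    with the diagonal phase correction that makes the distribution Haar."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def secular_coefficient_sq(n, j, trials, rng):
    """Monte Carlo estimate of E|c_j|^2, where the c_j are the coefficients
    of the characteristic polynomial of a Haar unitary matrix."""
    total = 0.0
    for _ in range(trials):
        u = haar_unitary(n, rng)
        coeffs = np.poly(u)           # char. poly coefficients, highest degree first
        total += abs(coeffs[j]) ** 2  # |coeffs[j]| = |e_j(eigenvalues)|
    return total / trials

est = secular_coefficient_sq(n=8, j=3, trials=400, rng=rng)
```

For this second moment the known value is 1 (the higher moments are where the magic-square counts appear), so the estimate should land near 1 up to Monte Carlo error.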

Free Entropy Dimension and Hyperfinite von Neumann algebras

Kenley Jung

Berkeley

Date: August 7, 2002

Location: UBC

Abstract

I will give a general introduction to Voiculescu's notions of free entropy and free entropy dimension and then discuss what they have in store for the most tractable of von Neumann algebras: those which are hyperfinite and have a tracial state.

The central limit procedure for noncommuting random variables and applications

Marius Junge

Urbana

Date:

Location: UBC

Abstract

We investigate the algebra of central limits of a fixed set of random variables in the (commutative and) noncommutative context, and matrix-valued versions thereof. In the noncommutative framework, states instead of traces provide new examples of complex Gaussian variables for which the real part no longer commutes with the imaginary part. Using this procedure, we may embed the operator Hilbert space (a central object in the theory of operator spaces introduced by Pisier) in a noncommutative L1 space and calculate the operator space analogue of the projection constant of the n-dimensional Hilbert space.

A Good formula for noncommutative cumulants

Franz Lehner

Graz

Date: August 7, 2002

Location: UBC

Abstract

Cumulants linearize convolution of measures. We use a formula of Good to define noncommutative cumulants. It turns out that the essential property needed is exchangeability of random variables. This provides a simple unified method to derive the known examples of cumulants, like free cumulants and various q-cumulants, and will hopefully lead to interesting new examples.
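
The opening claim that cumulants linearize convolution can be checked concretely in the classical (commutative) case, which is the baseline the noncommutative cumulants of the talk generalize. This minimal sketch (the moment choices and function names are illustrative assumptions, not from the talk) converts raw moments to cumulants by the standard recursion and verifies that cumulants of a sum of independent variables are the sums of the cumulants; Poisson(1) is used because its raw moments are the Bell numbers and all its cumulants equal 1.

```python
from math import comb

def cumulants_from_moments(m):
    """Raw moments m[0..n] (with m[0] = 1) -> cumulants k[1..n],
    via the classical recursion k_j = m_j - sum C(j-1, i-1) k_i m_{j-i}."""
    n = len(m) - 1
    k = [0.0] * (n + 1)
    for j in range(1, n + 1):
        k[j] = m[j] - sum(comb(j - 1, i - 1) * k[i] * m[j - i] for i in range(1, j))
    return k

def moments_of_sum(mx, my):
    """Raw moments of X + Y for independent X, Y: binomial convolution."""
    n = len(mx) - 1
    return [sum(comb(j, i) * mx[i] * my[j - i] for i in range(j + 1)) for j in range(n + 1)]

mx = [1, 0.5, 0.5, 0.5, 0.5]   # Bernoulli(1/2) raw moments
my = [1, 1, 2, 5, 15]          # Poisson(1) raw moments (Bell numbers)
kx, ky = cumulants_from_moments(mx), cumulants_from_moments(my)
ks = cumulants_from_moments(moments_of_sum(mx, my))
```

Here `ks[j]` agrees with `kx[j] + ky[j]` for j = 1..4, which is exactly the linearization property.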

Holomorphic functional calculus and square functions on non-commutative $L_p$-spaces

Christian Le Merdy

Besançon

Date: August 7, 2002

Location: UBC

Abstract

N/A

2-point functions for multi-matrix models, and non-crossing partitions in an annulus

Alexandru Nica

University of Waterloo

Date: August 8, 2002

Location: UBC

Abstract

N/A

Hilbertian Operator spaces with few completely bounded maps

Eric Ricard

Paris 6

Date: August 6, 2002

Location: UBC

Abstract

N/A

Can non-commutative $L^p$ spaces be renormed to be stable?

Haskell Rosenthal

Austin

Date: August 6, 2002

Location: UBC

Abstract

N/A

On Real Operator Spaces

Zhong Jin Ruan

Urbana

Date: August 8, 2002

Location: UBC

Abstract

N/A

The Role of Maximal $L_p$ Bounds in Quantum Information Theory

Mary Beth Ruskai

Lowell

Date: August 7, 2002

Location: UBC

Abstract

N/A

Determinantal Random Point Fields

Alexander Soshnikov

University of California, Davis

Date: August 8, 2002

Location: UBC

Abstract

The purpose of the talk is to give an introduction to determinantal random point fields. Determinantal random point fields appear naturally in random matrix theory, probability theory, quantum mechanics, combinatorics, representation theory and some other areas of mathematics and physics. The first part of the talk will be devoted to some general results (i.e. existence and uniqueness theorem) and examples. In the second part we will concentrate on the CLT type results for the linear statistics and ergodic properties of the translation-invariant determinantal point fields.
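
One of the random-matrix examples mentioned in the abstract can be simulated directly: the eigenvalues of a GUE matrix form a determinantal random point field, and a hallmark of determinantal processes is that counts in an interval are sub-Poissonian (variance bounded by the mean). A minimal Monte Carlo sketch, with illustrative sizes and interval choice:

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_eigenvalues(n, rng):
    """Eigenvalues of an n x n GUE matrix: a determinantal point process on R."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / 2
    return np.linalg.eigvalsh(h)

n, trials = 50, 300
# Count eigenvalues in a fixed central interval, over many independent draws.
counts = np.array([np.sum(np.abs(gue_eigenvalues(n, rng)) < 2.0)
                   for _ in range(trials)])
# For a determinantal process, Var N(I) <= E N(I); a Poisson process would
# have Var N(I) = E N(I), so the empirical variance should fall well below the mean.
```
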

Maximization of free entropy

Roland Speicher

Queen's University

Date: August 9, 2002

Location: UBC

Abstract

N/A

On the maximality of subdiagonal algebras

Quanhua Xu

Université de Franche-Comté

Date: August 9, 2002

Location: UBC

Abstract

We consider the open problem on the maximality of subdiagonal algebras posed by Arveson in 1967. We prove that a subdiagonal algebra is maximal if it is invariant under the modular automorphism group of a normal faithful state.

The method of minimal vectors

George Androulakis

University of South Carolina

Date: August 15, 2002

Location: UBC

Abstract

N/A

An introduction to the uniform classification of Banach spaces

Yoav Benyamini

Technion

Date: August 12, 2002

Location: UBC

Abstract

N/A

Baire-1 functions and spreading models

Vassiliki Farmaki

Athens University

Date: August 14, 2002

Location: UBC

Abstract

N/A

Selecting unconditional basic sequences

Tadek Figiel

Polish Academy of Sciences

Date: August 14, 2002

Location: UBC

Abstract

N/A

The Banach envelope of Paley-Wiener type spaces Ep for 0<p<1

Mark Hoffman

University of Missouri

Date: August 14, 2002

Location: UBC

Abstract

N/A

Weak topologies and properties that are fulfilled almost everywhere

Tamara Kuchurenko

University of Missouri

Date: August 12, 2002

Location: UBC

Abstract

N/A

On Frechet differentiability of Lipschitz functions, part I

Joram Lindenstrauss

The Hebrew University of Jerusalem

Date: August 12, 2002

Location: UBC

Abstract

N/A

On Frechet differentiability of Lipschitz functions, part II

Joram Lindenstrauss

The Hebrew University of Jerusalem

Date: August 12, 2002

Location: UBC

Abstract

N/A

The structure of level sets of Lipschitz quotients

Beata Randrianantoanina

Miami University

Date: August 12, 2002

Location: UBC

Abstract

N/A

How many operators do there exist on a Banach space?

Thomas Schlumprecht

Texas A & M University

Date: August 13, 2002

Location: UBC

Abstract

N/A

Lambda_p sets for some orthogonal systems

Lior Tzafriri

The Hebrew University

Date: August 15, 2002

Location: UBC

Abstract

N/A

Sigma shrinking Markushevich bases and Corson compacts

Vaclav Zizler

University of Alberta

Date: August 13, 2002

Location: UBC

Abstract

N/A

### Frontiers of Mathematical Physics, Brane-World and Supersymmetry

Physics with Large Extra Dimensions (lecture 1)

Ignatios Antoniadis

CERN

Date: July 24, 2002

Location: UBC

Abstract

N/A

Physics with Large Extra Dimensions (lecture 2)

Ignatios Antoniadis

CERN

Date: July 25, 2002

Location: UBC

Abstract

N/A

Fixing Runaway Moduli

Cliff Burgess

McGill University

Date: July 22, 2002

Location: UBC

Abstract

N/A

Radius-dependent Gauge Coupling Renormalization in AdS5

Kiwoon Choi

KAIST

Date: July 24, 2002

Location: UBC

Abstract

N/A

Gauge Theories of the Symmetric Group in the large N Limit

Alessandro D'Adda

INFN, Torino

Date: July 22, 2002

Location: UBC

Abstract

N/A

Shape versus Volume: Rethinking the Properties of Large Extra Dimensions

Keith Dienes

University of Arizona

Date: August 1, 2002

Location: UBC

Abstract

N/A

Solving the Hierarchy Problem without SUSY or Extra Dimensions: An Alternative Approach

Keith Dienes

University of Arizona

Date: August 2, 2002

Location: UBC

Abstract

N/A

Universal Extra Dimensions

Bogdan Dobrescu

Yale University

Date: July 29, 2002

Location: UBC

Abstract

N/A

Deconstructing Warped Gauge Theory and Unification

Hyung Do Kim

KIAS

Date: July 29, 2002

Location: UBC

Abstract

N/A

Adding Flavour to AdS/CFT

Andreas Karch

University of Washington

Date: July 23, 2002

Location: UBC

Abstract

N/A

Little Higgses

Emanuel Katz

University of Washington

Date: August 2, 2002

Location: UBC

Abstract

N/A

Twisted superspace and Dirac-Kaehler Fermions

Noboru Kawamoto

Hokkaido University

Date: August 2, 2002

Location: UBC

Abstract

N/A

What can neutrino oscillation tell us about the possible existence of an extra dimension?

C.S. Lam

McGill University

Date: July 23, 2002

Location: UBC

Abstract

N/A

Limitation of Cardy-Verlinde Formula on the Holographic Description of Brane Cosmology

Y.S. Myung

Inje University

Date: July 31, 2002

Location: UBC

Abstract

N/A

Instanton effects in 5d Theories and Deconstruction

Erich Poppitz

University of Toronto

Date: July 31, 2002

Location: UBC

Abstract

N/A

A New Non-Commutative Field Theory

Konstantin Savvidis

Perimeter Institute

Date: July 26, 2002

Location: UBC

Abstract

N/A

Conformal Invariant String with Extrinsic Curvature Action

George Savvidy

National Research Center Demokritos

Date: July 31, 2002

Location: UBC

Abstract

N/A

Nonplanar Corrections to PP-wave Strings

Gordon Semenoff

University of British Columbia

Date: August 1, 2002

Location: UBC

Abstract

N/A

Cosmological Constant Problem in Infinite Volume Extra Dimensions: a Possible Solution

Mikhail Shifman

University of Minnesota

Date: July 25, 2002

Location: UBC

Abstract

N/A

Topological Effects in Our Brane World from Extra Dimensions

Mikhail Shifman

University of Minnesota

Date: July 26, 2002

Location: UBC

Abstract

N/A

Brane World Cosmology: From Superstring to Cosmic Strings

Henry Tye

Cornell University

Date: July 30, 2002

Location: UBC

Abstract

N/A

Supersoft Supersymmetry Breaking

Neal Weiner

University of Washington

Date: July 30, 2002

Location: UBC

Abstract

N/A

### International Conference on Robust Statistics (ICORS 2002)

Dimension Reduction and Nonparametric Regression: A Robust Combination

Claudia Becker

University of Dortmund

Date: May 16, 2002

Location: UBC

Abstract

In modern statistical analysis, we often aim at determining a functional relationship between some response and a high-dimensional predictor variable. It is well-known that estimating this relationship from the data in a nonparametric setting can fail due to the curse of dimensionality. But a lower dimensional regressor space may be sufficient to describe the relationship of interest.

In the following, we consider the two main steps of a combined procedure in this setting: the dimension reduction step and the step of estimating the functional relation in the reduced space. The occurrence of outliers can disturb this process in several ways. When finding the reduced regressor space, the dimension may be wrongly determined. If the dimension is correctly estimated, the space itself may not be found correctly. As a consequence, it may happen that the functional relationship cannot be found, or an incorrect relation is determined. If both dimension and space are correctly identified, outliers may still influence the function estimation. Hence, we aim at constructing robust methods which are able to detect irregularities such as outliers in the data and at the same time can adjust the dimension and estimate the function without being affected by such phenomena.

Robust Inference for the Cox Model

Tadeusz Bednarski

University of Zielona Gora

Date: May 15, 2002

Location: UBC

Abstract

N/A

Robust Estimators in Partly Linear Models

Graciela Boente

University of Buenos Aires

Date: May 14, 2002

Location: UBC

Abstract

N/A

John Tukey and "Troubled" Time Series Data

David Brillinger

University of California, Berkeley

Date: May 13, 2002

Location: UBC

Abstract

On various occasions when discussing time series analysis John Tukey made reference to the use of robust methods. In this talk we will mention those remarks of his that we have found and discuss some other methods as well.

On the Bianco-Yohai Estimator for High Breakdown Logistic Regression

Christophe Croux

University of Leuven

Date:

Location: UBC

Abstract

Bianco and Yohai (1996) proposed a highly robust procedure for estimation of the logistic regression model. The results they obtained were very promising. We complement their study by providing a fast and stable algorithm to compute this estimator. Moreover, we discuss the problem of the existence of the estimator. We make a comparison with other robust estimators by means of a simulation study and examples. A discussion of the breakdown point of robust estimators for the logistic regression model will also be given.

Breakdown and groups

Laurie Davies

University of Essen

Date: May 14, 2002

Location: UBC

Abstract

The concept of breakdown point was introduced by Hodges (1967) and Hampel (1968, 1971) and still plays an important, though at times controversial, role in robust statistics. In practice its use is confined to location, scale and linear regression problems and to functionals which have the appropriate equivariance structure. Attempts to extend the concept to other situations have not been successful. In this talk we clarify the role of the group structure in determining the maximal breakdown point of functionals which have the equivariance structure induced by the group. The analysis suggests that if a problem does not have a sufficiently rich group of transformations under which it remains invariant then there is no canonical definition of breakdown and the highest possible breakdown point will be 1.
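
The location case the abstract alludes to can be made concrete: for translation-equivariant functionals the maximal breakdown point is 1/2, attained by the median, while the mean breaks down under a single contaminated observation. A small numerical sketch (illustrative, not from the talk; the helper name and contamination value are assumptions):

```python
import numpy as np

def max_shift(estimator, x, n_bad, bad_value=1e9):
    """How far the estimate moves when n_bad observations are replaced
    by an arbitrarily extreme value."""
    y = x.copy()
    y[:n_bad] = bad_value
    return abs(estimator(y) - estimator(x))

x = np.arange(1.0, 22.0)                                # 21 clean observations
shift_mean = max_shift(np.mean, x, n_bad=1)             # mean: ruined by 1 point
shift_median = max_shift(np.median, x, n_bad=1)         # median: barely moves
shift_median_majority = max_shift(np.median, x, n_bad=11)  # >50% bad: median breaks too
```

The third line illustrates why 1/2 is the ceiling: once a majority of the sample is contaminated, no equivariant estimator can resist.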

Robust Factor Analysis

Peter Filzmoser

Vienna University of Technology

Date: May 16, 2002

Location: UBC

Abstract

Two robust approaches to factor analysis are presented and compared. The first one uses a robust covariance matrix for estimating the factor loadings and the specific variances. The second one estimates factor loadings, scores and specific variances directly, using the alternating regression technique.

Straight Talks about Robust Methods

Xuming He

University of Illinois at Urbana-Champaign

Date: May 14, 2002

Location: UBC

Abstract

Instead of presenting another research result, I wish to use this opportunity to initiate some discussions on the views and uses of modern robust statistical methods. They will reflect some of the questions and concerns that have been nagging me for years, such as

1. Do we tend to be too demanding when we evaluate a robust procedure?

2. Is computational complexity a major hurdle or is there something more serious?

3. Do asymptotic properties matter?

4. Is the breakdown point a really pessimistic measure of robustness?

5. Should we promote the use of robust methods in exploratory or confirmatory data analysis?

6. Are robust methods needed to handle huge data sets with many variables?

I may augment the discussion with my own consulting experience, where awareness of robustness often plays a very positive role. Please join me in examining those issues with an open mind and maybe we will agree to disagree.

Statistical Analysis of Microarray Data from Affymetrix Gene Chips

Karen Kafadar

University of Colorado

Date: May 16, 2002

Location: UBC

Abstract

Data obtained from experiments using Affymetrix gene chips are processed and analyzed using the statistical algorithms provided with the product. The details of the algorithms, including the calculations and parameters, are described in mechanical terms in the Appendices to their user's manual (Affymetrix Microarray Suite User Guide, Version 4.0, 2000). I will describe these details using a statistical framework, compare the algorithm with others that have been proposed (Li and Wong, 2002; Efron et al. 2001), and offer modifications that may provide more robust analyses and thus more insightful interpretations of the data.

Approaches to Robust Multivariate Estimation Based on Projections

Ricardo Maronna

Universidad Nacional de La Plata

Date: May 15, 2002

Location: UBC

Abstract

Projections are a useful tool to construct robust estimates of multivariate location and scatter with interesting theoretical and practical properties. In particular: the estimate proposed by Stahel (1981) and Donoho (1982), which was the first equivariant estimate with a high breakdown point for all dimensions; estimates with a maximum bias independent of the dimension, proposed by Maronna, Stahel and Yohai (1992) for scatter and by Tyler (1994) for location, also studied by Adrover and Yohai (2002); and two recent fast proposals for high-dimensional data: one by Peña and Prieto (2001) based on the kurtosis of projections, and another by Maronna and Zamar (2002) based on pairwise robust covariances. Results and relationships among these estimates will be reviewed.
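
To fix ideas, here is a hedged sketch of the projection idea behind the Stahel-Donoho estimate: each point's outlyingness is its worst-case robust z-score (median/MAD) over projection directions. The original proposal maximizes over all directions; random directions, used below, are a common practical approximation. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sd_outlyingness(X, n_dir=500, rng=rng):
    """Stahel-Donoho-style outlyingness via random projections:
    max over directions d of |x'd - median| / MAD of the projected data."""
    n, p = X.shape
    dirs = rng.standard_normal((n_dir, p))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = X @ dirs.T                                  # shape (n, n_dir)
    med = np.median(proj, axis=0)
    mad = np.median(np.abs(proj - med), axis=0) + 1e-12
    return np.max(np.abs(proj - med) / mad, axis=1)

X = rng.standard_normal((100, 4))
X[0] = 6.0                     # planted multivariate outlier
out = sd_outlyingness(X)       # out[0] should dominate the clean scores
```

Downweighting points by a function of this outlyingness is what yields the equivariant, high-breakdown location/scatter estimate mentioned in the abstract.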

Robust Statistics in Portfolio Optimization

Doug Martin

University of Washington and Insightful

Date: May 15, 2002

Location: UBC

Abstract

In this talk we discuss several applications of robust statistics in portfolio optimization, some of which have been only partially developed or are merely ideas of areas for future work. The primary focal points will be (a) the use of influence functions in connection with optimal portfolio quantities of interest, e.g., global minimum variance and associated mean return, tangency portfolio mean and variance, and Sharpe ratio, (b) the use of robust covariance matrix and mean vector estimates in Markowitz optimal portfolios, and (c) robustification of the new conditional value-at-risk (CVaR) portfolio theory due to Rockafellar and Uryasev. A brief tutorial on the CVaR optimality theory will be provided, along with discussion of critical questions related to robustifying this approach.

The Multihalver

Stephan Morgenthaler

École Polytechnique Fédérale de Lausanne

Date: May 13, 2002

Location: UBC

Abstract

N/A

Robust Space Transformations for Distance-based Outliers

Raymond Ng

University of British Columbia

Date: May 17, 2002

Location: UBC

Abstract

In the first part of this talk, we will present the notion of distance-based outliers. This is a nonparametric approach, and is particularly suitable for high dimensional data. We will show a case study based on video trajectory surveillance.

For distance-based outlier detection, there is an underlying multi-dimensional data space in which each tuple/object is represented as a point in the space. We observe that in the presence of variability, correlation, outliers and/or differing scales, we may get unintuitive results if an inappropriate space is used. The fundamental question addressed in the second half of this talk is: "What then is an appropriate space?". We propose using a robust space transformation called the Donoho-Stahel estimator. We will focus on the computation of the transformation.
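
For readers unfamiliar with the notion, a minimal sketch of distance-based outlier scoring (the k-nearest-neighbour-distance variant, a standard formulation of the distance-based definition; the names, data, and parameter choices below are illustrative assumptions, not the talk's method):

```python
import numpy as np

rng = np.random.default_rng(3)

def knn_outlier_scores(X, k=5):
    """Distance-based outlier score: each point's distance to its k-th
    nearest neighbour. Points far from all others get large scores."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-distances
    return np.sort(d, axis=1)[:, k - 1]

X = rng.standard_normal((200, 3))
X[0] = 8.0                                 # planted outlier far from the bulk
scores = knn_outlier_scores(X)
```

The abstract's point is that these distances are only meaningful in an appropriate space; the robust transformation step (Donoho-Stahel) is applied before computing them.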

Multivariate Outlier Detection and Cluster Identification

David Rocke

University of California, Davis

Date: May 13, 2002

Location: UBC

Abstract

We examine relationships between the problem of robust estimation of multivariate location and shape and the problem of maximum likelihood assignment of multivariate data to clusters. Recognition of the connections between estimators for clusters and outliers immediately yields one important result that we demonstrate in this paper; namely, outlier detection procedures can be improved by combining them with cluster identification techniques. Using this combined approach, one can achieve practical breakdown values that approach the theoretical limits. We report computational results that demonstrate the effectiveness of this approach. In addition, we provide a new robust clustering method.

Resistant Parametric and Nonparametric Modelling in Finance

Elvezio Ronchetti

University of Geneva

Date: May 16, 2002

Location: UBC

Abstract

We discuss how resistant parametric and nonparametric techniques can be used in the statistical analysis of financial models. As an illustration we re-examine the empirical evidence concerning one factor models for the short rate process and we focus on the estimation of the drift and the volatility.

Standard classical parametric procedures are highly unstable in this application. On the other hand, robust procedures deal with deviations from the assumptions on the model and are still reliable and reasonably efficient in a neighborhood of the model.

Finally, we show that resistant techniques are necessary also in the nonparametric framework, in particular for reliable bandwidth selection.

This is joint work with Rosario Dell'Aquila and Fabio Trojani, University of Southern Switzerland, Lugano.

Robustness Against Separation and Outliers in Binary Regression

Peter Rousseeuw

University of Antwerp

Date: May 14, 2002

Location: UBC

Abstract

The logistic regression model is commonly used to describe the effect of one or several explanatory variables on a binary response variable. Here we consider an alternative model under which the observed response is strongly related but not equal to the unobservable true response. We call this the hidden logistic regression (HLR) model because the unobservable true responses act as a hidden layer in a neural net. We propose the maximum estimated likelihood method in this model, which is robust against separation unlike all existing methods for logistic regression. We then construct an outlier-robust modification of this estimator, called the weighted maximum estimated likelihood (WEMEL) method, which is robust against both problems.

Estimating the p-values of Robust Tests for the Linear Model

Matias Salibian-Barrera

Carleton University

Date: May 17, 2002

Location: UBC

Abstract

There are several proposals of robust tests for the linear model in the literature (see, for example, Markatou, Stahel and Ronchetti, 1991). The finite-sample distributions of these test statistics are not known and their asymptotic distributions have been studied under the assumption that the scale of the errors is known, or that it can be estimated without affecting the asymptotic behaviour of the tests. This is in general true when the errors have a symmetric distribution.

Bootstrap methods can, in principle, be used to estimate the distribution of these test statistics under less restrictive assumptions. However, robust tests are typically based on robust regression estimates which are computationally demanding, specially with moderate- to high-dimensional data sets. Another problem when bootstrapping potentially contaminated data is that we cannot control the proportion of outliers that might enter the bootstrap samples. This could seriously affect the bootstrap estimates of the distribution of the test statistics, specially in their tails. Hence, the resulting p-value estimates may be critically affected by a relatively small amount of outliers in the original data.

In this paper we propose an extension of the Robust Bootstrap (Salibian-Barrera and Zamar, 2002) to obtain a fast and robust method to estimate p-values of robust tests for the linear model under less restrictive assumptions.

Computational Issues in Robust Statistics

Arnold J. Stromberg

University of Kentucky

Date: May 17, 2002

Location: UBC

Abstract

Hundreds, and perhaps thousands, of papers have been published in the area of robust statistics, yet robust methods are still not used routinely by most applied statisticians. An important reason for this is the many computational issues in robust statistics.

Most applied statisticians agree conceptually that robust methods are a good idea, but they fail to use them for a number of reasons. Often, software is not available. Other times, like in linear regression, there are so many choices, it is not clear which estimator to use. In still other situations, the data sets are too big for robust techniques to handle. This paper discusses these issues and others.

High Breakdown Point Multivariate M-Estimation

David Tyler

Rutgers University

Date: May 17, 2002

Location: UBC

Abstract

In this talk, a general study of the properties of the M-estimates of multivariate location and scatter with auxiliary scale proposed in Tatsuoka and Tyler (2000) is presented. This study provides a unifying treatment for some of the high breakdown point methods develop for multivariate statistics, as well as a unifying framework for comparing these methods. The multivariate M-estimates with auxiliary scale include as special cases the minimum volume ellipsoid estimates [Rousseeuw (1985)], the multivariate S-estimates [Davies (1987)], the multivariate constrained M-estimates [Kent and Tyler (1996)], and the recently introduced multivariate MM-estimates [Tatsuoka and Tyler (2000)]. The results obtained for the multivariate MM-estimates, such as its breakdown point, its influence function and its asymptotic distribution, are entirely new. The breakdown points of the M-estimates of multivariate location and scatter for fixed scale are also derived. This generalizes the results on the breakdown points of the univariate redescending M-estimates of location with fixed scale given by Huber (1984).

Semiparametric Random Effects Models for Longitudinal Data

Jane-Ling Wang

University of California, Davis

Date: May 13, 2002

Location: UBC

Abstract

A class of semiparametric regression models to describe the influence of covariates on a longitudinal (or functional) response is described. The model includes indices, which are linear functions of the covariates, unknown random functions of the indices, and unknown variance functions. They are thus semiparametric random effects models with many parsimonious submodels. The parametric components of the indices are estimated via quasi-score estimating equations, and the unknown smooth random and variance functions are estimated nonparametrically. Consistency of the procedures is obtained, and the procedure is illustrated with fecundity data for 1000 female Mediterranean fruit flies.

Robust, Sequential Design Strategies

Doug Wiens

University of Alberta

Date: May 16, 2002

Location: UBC

Abstract

N/A

High Breakdown Point Robust Regression with Censored Data

Victor Yohai

University of Buenos Aires

Date: May 13, 2002

Location: UBC

Abstract

N/A

Robustness Issues for Confidence Intervals

Julie Zhou

University of Victoria

Date: May 14, 2002

Location: UBC

Abstract

In many inference problems, it is of interest to compute confidence intervals or regions for the parameters of interest in the model under consideration. As with point estimation, it is important to know about the robustness of the confidence intervals. This involves evaluating the performance of the interval in terms of coverage and length in the face of small perturbations of the data or the model. Ideally we would like a procedure which gives efficient intervals and accurate coverage in the neighborhood of the model. In this talk, we will address the issues of robustness for confidence intervals and assess the robustness of some particular intervals. We will propose several measures including empirical influence function, gross-error sensitivity, and finite-sample breakdown point to study the robustness of confidence intervals. Those measures are applied to examine the robustness of unconditional intervals in the regression model for both the regression parameters and the scale and conditional intervals.

### Pacific Northwest String Theory Seminar

Non-commutative Space And Chan-Paton Algebra in Open String Field Algebra

Kazuyuki Furuuchi

PIMS, University of British Columbia

Date: 2002

Location: UBC

Abstract

N/A

Adding Flavor to AdS/CFT

Andreas Karch

University of Washington

Date: 2002

Location: UBC

Abstract

N/A

Localized Closed String Tachyons

David Kutasov

University of Chicago

Date: 2002

Location: UBC

Abstract

N/A

Extension of Boundary String Field Theory on Disc and RP2 Worldsheet Geometries

Shin Nakamura

KEK

Date: 2002

Location: UBC

Abstract

N/A

^{
Comments on Vacuum String Field Theory
Kazumi Okuyama
University of Chicago
Date: 2002
Location: UBC
Abstract
N/A
}

^{
Wilson Loops in N=4 Super Yang-Mills Theory
Jan Plefka
AEI, Potsdam
Date: 2002
Location: UBC
Abstract
N/A
}

^{
The Hierarchy Unification and the Entropy of de Sitter Space
Lisa Randall
Harvard University
Date: 2002
Location: UBC
Abstract
N/A
}

^{
Nonperturbative Nonrenormalization in a Non-supersymmetric Nonlocal String Theory
Eva Silverstein
Stanford
Date: 2002
Location: UBC
Abstract
N/A
}

^{
Index Puzzles in SUSY gauge mechanics
Matthias Staudacher
AEI, Potsdam
Date: 2002
Location: UBC
Abstract
N/A
}

^{
Quantum Gravity in dS-Space?
Leonard Susskind
Stanford
Date: 2002
Location: UBC
Abstract
N/A
}

### Thematic Programme on Nonlinear Partial Differential Equations

Recent Progress in Complex Geometry - Part 1 (unavailable),
Part 2,
Part 3,
Part 4

Gang Tian

Massachusetts Institute of Technology

Date: August 14-16, 2001

Location: UBC

Abstract

N/A

Geometric Variational Problems -
Part 1,
Part 2,
Part 3,
Part 4

Richard Schoen

Stanford University

Date: August 8-10, 2001

Location: UBC

Abstract

N/A

Variational problems in relativistic quantum mechanics: Dirac-Fock equations -
Part 1,
Part 2,
Part 3,
Part 4

Eric Séré

Université Paris IX

Date: August 2, 4, 7, 2001

Location: UBC

Abstract

N/A

Energy minimizers of the copolymer problem -
Part 1,
Part 2,
Part 3,
Part 4

Yann Brenier

CNRS Nice, on leave from Universite Paris 6

Date: July 30, 31, 2001

Location: UBC

Abstract

N/A

Variational problems related to operators with gaps and applications in relativistic quantum mechanics -
Part 1,
Part 2,
Part 3

Maria Esteban

Université Paris IX

Date: July 30,31 and August 1, 2001

Location: UBC

Abstract

N/A

On De Giorgi's conjecture in dimensions 4 and 5

Nassif Ghoussoub

Pacific Institute for the Mathematical Sciences

Date: August 1, 2001

Location: UBC

Abstract

N/A

Dynamics of Ginsburg-Landau and related equations -
Part 1,
Part 2,
Part 3,
Part 4

Fang Hua Lin

Courant Institute

Date: July 24-27, 2001

Location: UBC

Abstract

N/A

Diffusions, cross-diffusions, and their steady states -
Part 1,
Part 2

Changfeng Gui

University of British Columbia

Date: July 23 - 24, 2001

Location: UBC

Abstract

N/A

Diffusion & Cross Diffusion in Pattern Formation -
Part 1,
Part 2

Wei-Ming Ni

University of Minnesota

Date: July 20-21, 2001

Location: UBC

Abstract

N/A

About the De Giorgi conjecture in dimensions 4 and 5

Changfeng Gui

University of British Columbia

Date:

Location: UBC

Abstract

N/A

Propagation of fronts in excitable media -
Part 1,
Part 2,
Part 3,
Part 4

Henri Berestycki

Université Paris VI

Date: July 12-16, 2001

Location: UBC

Abstract

N/A

Ergodicity, singular perturbations, and homogenization in the HJB equations of stochastic control

Martino Bardi

University of Padua

Date: July 3, 2001

Location: UBC

Abstract

N/A

Fully nonlinear stochastic partial differential equations - Theory and Applications -
Part 1,
Part 2,
Part 3,
Part 4

Panagiotis Souganidis

University of Texas at Austin

Date: July 3 - 4, 2001

Location: UBC

Abstract

N/A

### Frontiers of Mathematical Physics, Particles, Fields and Strings

Noncommutative Supersymmetric Tubes

Dongsu Bak

University of Seoul

Date: July 19, 2001

Location: SFU

Abstract

N/A

D-branes on Orbifolds: The Standard Model

Robert Leigh

University of Illinois

Date: July 16, 2001

Location: SFU

Abstract

N/A

Orientifolds, Conifolds and Quantum Deformations

Soonkeon Nam

Kyung Hee University

Date: July 16, 2001

Location: SFU

Abstract

N/A

### PIMS-MITACS Workshop on Inverse Problems and Imaging

Sturm-Liouville problems with eigenvalue dependent and independent conditions

Paul Binding

University of Calgary

Date: June 10, 2001

Location: UBC

Abstract

We consider Sturm-Liouville problems with boundary conditions affinely dependent on the eigenvalue parameter. These are classified into three types, one being the standard case where the eigenvalue does not appear explicitly. We exhibit transformations between problems with these different types of boundary condition, preserving all eigenvalues and norming constants, except possibly two. In consequence, we extend some standard inverse Sturm-Liouville results to cases with eigenvalue dependent boundary conditions.

Wavetracing: Ray tracing for the propagation of band-limited signals for traveltime tomography

Kenneth P. Bube

University of Washington

Date:

Location: UBC

Abstract

Many seismic imaging techniques require computing traveltimes and travel paths. Methods to compute raypaths are usually based on high frequency approximations. In some situations, like head waves, these raypaths minimize traveltime but are not paths along which most of the energy travels. We present an approach to computing raypaths, using a modification of ray bending which we call "wavetracing," that computes raypaths and traveltimes that are more consistent with the paths and times for the band-limited signals in real seismic data. Wavetracing shortens the raypath, while still keeping the raypath within the Fresnel zone for a characteristic frequency of the signal. This is joint work with John Washbourne of TomoSeis, Inc.

Synthetic Aperture Radar

Margaret Cheney

Department of Mathematical Sciences

Date: June 10, 2001

Location: UBC

Abstract

In Synthetic Aperture Radar (SAR) imaging, a plane or satellite carrying an antenna flies along a (usually straight) flight track. The antenna emits pulses of electromagnetic radiation; this radiation scatters off the terrain and is received back at the same antenna. These signals are used to produce an image of the terrain. The problem of producing a high-resolution image from SAR data is very similar to problems that arise in geophysics and tomography; techniques from seismology and X-ray tomography are now making their way into the SAR community. This talk will outline a mathematical model for the SAR imaging problem and discuss some of the associated problems.

Optimal Linear resolution and conservation of information

Keith S. Cover

University of British Columbia

Date: June 9, 2001

Location: UBC

Abstract

In linear inverse theory, when trying to estimate a model from data, it is widely advocated in the literature that finding a model which fits the data is the method of choice. However, several common algorithms yield estimates with optimal or near-optimal linear resolution that do not fit the data. Prominent examples are the windowed discrete Fourier transform and algorithms following the Backus and Gilbert method. The Backus and Gilbert algorithms are often avoided because of uncertainties about how to interpret estimates that do not fit the data. It is shown that algorithms with linear resolution, provided they can be expressed as an invertible matrix multiplication, produce an estimate which, along with its resolution functions and noise statistics, is a complete summary of all the models that fit the data. Such estimates also completely conserve the information provided by the data. If the resulting linear resolution of the algorithm is optimal or near optimal, such estimates also effectively communicate the inherent nonuniqueness of the solution to an interpreter. This simple but novel theoretical finding will provide a valuable framework in which to interpret the results of linear inversion algorithms, including those of the Backus and Gilbert type.

Microlocal Analysis and Seismic Inverse Scattering in Anisotropic Elastic Media

Maarten V. de Hoop

Colorado School of Mines

Date: June 9, 2001

Location: UBC

Abstract

N/A

A level set method for shape reconstruction in electromagnetic cross-borehole tomography

Oliver Dorn

UBC

Date: June 9, 2001

Location: UBC

Abstract

In geophysical applications, it is often the case that the shapes of some obstacles in the earth (e.g. pollutant plumes) have to be monitored from electromagnetic data. These problems can be considered as (ill-posed) nonlinear inverse problems, where typically iterative solution techniques and some regularization are required. Starting from some simple initial guess for the shapes, these shapes evolve during the reconstruction process in order to minimize a suitably chosen cost functional. Since the geometries of the hidden objects can be quite complicated and are not known a priori, a solution algorithm has to be able to model changes in the geometries and in the topologies of these objects during the reconstruction process reliably. We have developed a shape reconstruction algorithm which uses a level set representation for modelling the evolving shapes during the reconstructions. The algorithm, as well as the results of various numerical experiments, are discussed in the talk.
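The level set representation described above can be illustrated in miniature: a shape is stored as the region where a function φ is negative, and the front {φ = 0} is evolved by the level set equation φ_t + F|∇φ| = 0. The sketch below (grid size, time step, and a constant speed F are illustrative assumptions, not the talk's algorithm) expands a circle; in the reconstruction algorithm, F would instead come from the gradient of the cost functional, which is what lets the topology change automatically.

```python
import numpy as np

# Minimal level-set evolution sketch: the shape is {phi < 0}, and the front
# {phi = 0} moves with normal speed F via phi_t + F * |grad phi| = 0.
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5      # zero level set: circle of radius 0.5
dt, F = 0.01, 1.0                     # illustrative step size and speed

for _ in range(20):
    gx, gy = np.gradient(phi, x, x)   # spatial gradient on the grid
    phi -= dt * F * np.sqrt(gx**2 + gy**2)

# After time 0.2 the circle's radius has grown from 0.5 to about 0.7.
area = (phi < 0).mean() * 4.0         # the domain [-1,1]^2 has area 4
```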

Applications of Sampling Theory in Tomography

Adel Faridani

Oregon State University

Date: June 9, 2001

Location: UBC

Abstract

Computed tomography produces images of opaque objects by reconstructing a density function f from measurements of its line integrals. We describe how Shannon Sampling Theory can be utilized to find the minimum number of measurements needed to achieve a desired resolution in the reconstructed image. An error analysis and numerical experiments are presented showing how to achieve high quality images from a minimal amount of data.

Geometric singularities in tomography

David Finch

Oregon State University

Date: June 9, 2001

Location: UBC

Abstract

N/A

Statistical Estimation of the Parameters of a PDE

Colin Fox

University of Auckland

Date: June 10, 2001

Location: UBC

Abstract

Non-invasive imaging remains a difficult problem in those cases where the forward map can only be adequately simulated by solving the appropriate partial differential equation (PDE) subject to boundary conditions. However, in those problems, the inherent uncertainty in images recovered from actual measurements may be quantified using knowledge of the forward map and the measurement process. We demonstrate image recovery for the problem of electrical conductivity imaging by sampling the distribution of all possible images and calculating summary statistics. This route to solving inverse problems has a number of advantages, including the ability to quantify the accuracy of the recovered image, and a straightforward way to include model complexity such as complete descriptions of real electrodes.
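The sampling approach in the abstract can be sketched on a toy problem. Here a small linear map G stands in for the PDE solve, and a random-walk Metropolis sampler draws from the posterior over models; every size, noise level, and step size is an illustrative assumption, not part of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: data d = G m + noise, with G standing in for the forward map.
G = rng.normal(size=(5, 2))
m_true = np.array([1.0, -0.5])
s = 0.1                               # assumed noise standard deviation
d = G @ m_true + s * rng.normal(size=5)

def log_post(m):
    """Unnormalized Gaussian log-posterior (flat prior for simplicity)."""
    r = G @ m - d
    return -0.5 * (r @ r) / s**2

# Random-walk Metropolis: sample the distribution of all models that fit.
m, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for _ in range(20000):
    prop = m + 0.05 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        m, lp = prop, lp_prop
    samples.append(m.copy())

post = np.array(samples[5000:])        # discard burn-in
post_mean = post.mean(axis=0)          # summary statistic: posterior mean
post_std = post.std(axis=0)            # quantifies uncertainty in the image
```

The posterior standard deviation is the payoff the abstract emphasizes: the recovered "image" comes with its own uncertainty estimate.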

Geophysical Inversion in the new millennium

Larry Lines

University of Calgary

Date: June 9, 2001

Location: UBC

Abstract

Geophysicists have been working on solutions to the inverse problem since the dawn of our profession. This presentation is an evaluation of inversion's present state and abbreviates an evaluation given by the authors in the January 2001 issue of Geophysics. Geophysical interpreters currently infer subsurface properties on the basis of observed data sets, such as seismograms or potential field recordings. A rough model of the process that produces the recorded data resides within the interpreter's brain; the interpreter then uses this rough mental model to reconstruct subsurface properties from the observed data. In modern parlance, the inference of subsurface properties from observed data is identified with the solution of a so-called "inverse problem". The currently used geophysical processing techniques can be viewed as attempts to solve the ubiquitous inverse problem: we have geophysical data, we have an abstract model of the process that produces the data, and we seek algorithms that allow us to invert for the model parameters. The theoretical and computational aspects of inverse theory will gain importance as geophysical processing technology continues to evolve. Iterative geophysical inversion is not yet in widespread use in the exploration industry today because the computing resources are barely adequate for the purpose. After all, it is only now that 3-D prestack depth migration has become economically feasible, and the day will surely not be far off when the inversion algorithms described above will come into their own, enabling the geophysicist to invert observations not only for a structure's subsurface geometry, but also for a growing number of detailed physical, chemical, and geological features. The day that such operations become routine will also be the day that geophysical inverse theory has come into its own in both mineral and petroleum exploration. Coauthor: Sven Treitel.

Approximate Fourier integral wavefield extrapolators for heterogeneous, anisotropic media

Gary Margrave

University of Calgary

Date: June 10, 2001

Location: UBC

Abstract

Seismic imaging uses wavefield data recorded on the earth's surface to construct images of the internal structure. A key part of this process is the extrapolation of wavefield data into the earth's interior. Most commonly, wavefield extrapolation is based on ray theory and incorporates a high-frequency approximation that allows the development of analytic expressions. This leads to computationally efficient imaging algorithms that incorporate both the advantages and the limitations of raytracing. An alternative approach is to perform a plane-wave decomposition of the recorded data and extrapolate each plane wave independently. For homogeneous media, the Fourier transform can be used for the plane-wave decomposition and phase shifts propagate the plane waves. We explore an approximate extension of this concept to heterogeneous media that uses pseudodifferential operator theory. In heterogeneous media, a plane wave does not remain planar as it propagates, so there is not a one-to-one correspondence between plane-wave spectra at two different depth levels. A Fourier integral operator that performs the appropriate plane-wave mixing can be developed from pseudodifferential operator theory applied to the variable-coefficient scalar wave equation. We discuss the derivation of the operator and its basic properties. In particular, we demonstrate that the transpose of the operator is also a viable Fourier integral wavefield extrapolator with a first order error that opposes the original operator. Thus a simple symmetric operator, the average of our first extrapolator and its transpose, is more accurate. We show that our first operator performs a spatially nonstationary phase shift that is simultaneous with the inverse Fourier transformation. The transpose operator also performs a nonstationary phase shift but simultaneously with the forward Fourier transform.
We present both numerical experiments and theoretical arguments to characterize our results and discuss their possible extensions. Coauthor: Michael Lamoureux, Department of Mathematics and Statistics, University of Calgary.
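The homogeneous-medium building block that the abstract generalizes — Fourier decomposition into plane waves, then a phase shift per wavenumber — can be sketched as follows. This is the classic phase-shift method, not the authors' Fourier integral operator; the handling of evanescent components (exponential decay) is one common choice.

```python
import numpy as np

def phase_shift_extrapolate(u, dx, dz, omega, c):
    """Downward-continue a monochromatic wavefield u(x) at angular frequency
    omega through a homogeneous layer of thickness dz and velocity c:
    FFT over x, multiply each plane wave by exp(i * kz * dz), inverse FFT."""
    nx = u.size
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)       # horizontal wavenumbers
    kz2 = (omega / c) ** 2 - kx ** 2
    # Propagating components (kz2 >= 0) get a pure phase shift;
    # evanescent components (kz2 < 0) are damped exponentially.
    kz = np.sqrt(np.maximum(kz2, 0.0))
    decay = np.where(kz2 < 0, np.exp(-np.sqrt(np.maximum(-kz2, 0.0)) * dz), 1.0)
    U = np.fft.fft(u)
    return np.fft.ifft(U * np.exp(1j * kz * dz) * decay)

# A single propagating plane wave keeps unit amplitude under extrapolation.
nx, dx = 64, 1.0
k0 = 2 * np.pi * 4 / (nx * dx)                      # on-grid wavenumber
u = np.exp(1j * k0 * np.arange(nx) * dx)
out = phase_shift_extrapolate(u, dx, dz=10.0, omega=1.0, c=1.0)
```

In heterogeneous media the talk's operator replaces the single multiplier above with a spatially varying ("nonstationary") phase shift applied during the inverse transform.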

Simulation studies on Bioelectric and Biomagnetic Reconstruction of Currents on Curved surfaces and in Spherical Volume conductors

Ceon Ramon

University of Washington

Date: June 9, 2001

Location: UBC

Abstract

Reconstruction and resolution enhancement of the current distribution on curved surfaces and in volume conductors from bioelectric or biomagnetic data is proposed. Applications will be in the reconstruction of the current distribution in the heart wall or the localization of sources in the brain. Our image reconstruction procedure is divided into two steps. First, the bioelectric or biomagnetic inverse problem is solved by use of weighted pseudo-inverse techniques to reconstruct an initial image of the current distribution on a curved surface or in a volume conductor from a given electric potential or magnetic field profile. The current distribution thus obtained has poor resolution; it can barely resemble the original shape of the current distribution. The second step improves the resolution of the reconstructed image by using the method of alternating projections. The procedure assumes that images can be represented by line-like elements and involves finding the line-like elements based on the initial image and projecting back onto the original solution space. Simulation studies were performed on a set of parallel conductors on a curved surface; the reconstructed images closely resembled the original shape of the conductors. Simulation studies were also performed for distributed dipolar sources in a spherical volume conductor. Resolution enhancement was performed with a 3-D alternating projection technique developed by us. The positions of the reconstructed dipoles matched closely with the original dipoles; however, slight error was found in matching the dipolar intensity. Coauthors: Akira Ishimaru, Dept. of Electrical Engineering, University of Washington; Robert Marks, Dept. of Electrical Engineering, University of Washington; Joreg Schrieber, Biomagnetics Center, F. S. University, Jena, Germany; Jens Haueisen, Biomagnetics Center, F. S. University, Jena, Germany; Paul Schimpf, Dept. of Computer Science and Electrical Engineering, Washington State University.

Wave equation least-squares Migration/Inversion

Mauricio D. Sacchi

University of Alberta

Date: June 10, 2001

Location: UBC

Abstract

Least-squares (LS) migration based on Kirchhoff modeling/migration operators has been proposed in the literature to account for uneven subsurface illumination and to reduce imaging artifacts due to irregularly and/or coarsely sampled seismic wavefields (Nemeth et al., 1999; Duquet et al., 2000). In this presentation we show that least-squares migration can also be used to improve the performance of generalized phase-shift pre-stack Double-Square-Root (DSR) migration. In this case, rather than estimating an image of the subsurface by downward propagating wavefields measured at z=0, the image is estimated by solving a linear inverse problem. The solution of this problem requires the specification of two operators: a forward (modeling) operator and its adjoint (migration). The image can be retrieved using the method of conjugate gradients with different regularization schemes. In particular, we have developed a regularization strategy that operates on common angle images. Simulations with complete and incomplete data were used to test the feasibility of the proposed algorithm. Coauthor: Henning Kuehl, Department of Physics, University of Alberta.

Duquet, B., Marfurt, J.K., and Dellinger, J.A., 2000, Kirchhoff modeling, inversion for reflectivity, and subsurface illumination, Geophysics, 65, 1195-1209. Nemeth, T., Wu, C., and Schuster, G.T., 1999, Least-squares migration of incomplete reflection data, Geophysics, 64, 208-221.
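The inversion machinery the abstract describes — an image estimated from only a forward (modeling) operator and its adjoint (migration) via conjugate gradients — can be sketched generically. This is an unregularized CGLS sketch with a dense matrix standing in for the DSR operators; the talk's common-angle regularization is not modeled.

```python
import numpy as np

def cgls(forward, adjoint, d, n, niter=50, tol=1e-12):
    """Conjugate-gradient least squares: minimize ||L m - d||^2 using only
    the forward operator L (modeling) and its adjoint L^T (migration)."""
    m = np.zeros(n)
    r = d.astype(float).copy()        # residual d - L m
    s = adjoint(r)                    # gradient of the misfit
    p = s.copy()
    gamma = s @ s
    gamma0 = gamma
    for _ in range(niter):
        if gamma <= tol * gamma0:     # converged
            break
        q = forward(p)
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = adjoint(r)
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# Sanity check on a small dense system standing in for modeling/migration.
rng = np.random.default_rng(1)
L = rng.normal(size=(20, 10))
m_true = rng.normal(size=10)
m_hat = cgls(lambda v: L @ v, lambda v: L.T @ v, L @ m_true, 10)
```

Because only operator applications are needed, the same loop runs unchanged when `forward`/`adjoint` are wavefield extrapolators rather than matrices.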

### PIMS Pacific Northwest Seminar on String Theory

Tachyon condensation in open string field theory

Washington Taylor

MIT

Date: March 17, 2001

Location: UBC

Abstract

N/A

Holographic renormalization

Kostas Skenderis

Princeton University

Date: March 17, 2001

Location: UBC

Abstract

N/A

String theoretic mechanisms for spacetime singularity resolution

Amanda Peet

University of Toronto

Date: March 17, 2001

Location: UBC

Abstract

N/A

D-branes as noncommutative solitons: an algebraic approach

Emil Martinec

University of Chicago

Date: March 17, 2001

Location: UBC

Abstract

N/A

Strings in AdS_3 and the SL(2,R) WZW model

Hiroshi Ooguri

Caltech

Date: March 17, 2001

Location: UBC

Abstract

N/A

### Thematic Programme on Graph Theory and Combinatorial Optimization

Random Homomorphisms

Peter Winkler

Bell Labs

Date: July 20, 2000

Location: SFU

Abstract

Let H be a fixed small graph, possibly with loops, and let G be a (possibly infinite) graph. Let f be chosen from the set Hom(G,H) of all homomorphisms from G to H.

If H is K_n, f is a proper coloring of G; if H consists of two adjacent vertices, one of which is looped, f is (in effect) an independent set in G. These and other H give rise to "hard constraint" models of interest in statistical mechanics. One way to phrase the physicists' key question is: when G is infinite, is there a canonical way to pick f uniformly at random?

When G is a Cayley tree, f can be generated by a branching random walk on H, and using this approach we are able to characterize the H for which Hom(G,H) always has a unique "nice" probability distribution. We will sketch the proof but spend equal time illustrating the bizarre things that can happen when H is not so well behaved.
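The correspondence between homomorphisms, proper colorings, and independent sets is easy to check on small finite graphs; the brute-force counter below is purely illustrative (all graphs and names are examples, not from the talk).

```python
from itertools import product

def homomorphisms(G_edges, n_G, H_adj):
    """Count maps f: V(G) -> V(H) such that every edge uv of G
    maps to an edge f(u)f(v) of H (loops in H allowed on the diagonal)."""
    count = 0
    for f in product(range(len(H_adj)), repeat=n_G):
        if all(H_adj[f[u]][f[v]] for (u, v) in G_edges):
            count += 1
    return count

path = [(0, 1), (1, 2)]                 # G: path on 3 vertices

# H = K3 (loopless triangle): homomorphisms are exactly proper 3-colorings.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(homomorphisms(path, 3, K3))       # 3*2*2 = 12 proper 3-colorings

# H = looped vertex adjacent to an unlooped vertex: homomorphisms
# correspond to independent sets of G (preimages of the unlooped vertex).
H_ind = [[1, 1], [1, 0]]                # vertex 0 looped, vertex 1 unlooped
print(homomorphisms(path, 3, H_ind))    # 5 independent sets in the path
```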

Reference: Graham R. Brightwell and Peter Winkler, Graph homomorphisms and phase transitions, J. Comb. Theory Series B (1999) 221--262.

Acyclic coloring, strong coloring, list coloring and graph embedding

Noga Alon

Tel Aviv University

Date: July 19, 2000

Location: SFU

Abstract

I will discuss various coloring problems and the relations among them. A representative example is the conjecture, studied in a joint paper with Sudakov and Zaks, that the edges of any simple graph with maximum degree d can be colored by at most d+2 colors with no pair of adjacent edges of the same color and no 2-colored cycle.
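The condition in the conjecture — a proper edge coloring with no 2-colored cycle — is known as an acyclic edge coloring, and it is equivalent to asking that the union of any two color classes be a forest. A small checker (illustrative only, not from the talk) makes the definition concrete:

```python
from itertools import combinations

def is_acyclic_edge_coloring(n, colored_edges):
    """colored_edges: list of (u, v, color) on vertices 0..n-1.
    Checks properness (adjacent edges get distinct colors) and that the
    union of any two color classes is acyclic (no 2-colored cycle)."""
    # Properness: no vertex may have two incident edges of the same color.
    seen = set()
    for u, v, c in colored_edges:
        if (u, c) in seen or (v, c) in seen:
            return False
        seen.add((u, c))
        seen.add((v, c))
    # For each pair of colors, the induced subgraph must be a forest
    # (checked with union-find: a repeated root means a cycle).
    colors = {c for _, _, c in colored_edges}
    for c1, c2 in combinations(colors, 2):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v, c in colored_edges:
            if c in (c1, c2):
                ru, rv = find(u), find(v)
                if ru == rv:
                    return False      # 2-colored cycle found
                parent[ru] = rv
    return True

# Triangle with 3 distinct colors: proper and acyclic.
print(is_acyclic_edge_coloring(3, [(0, 1, 0), (1, 2, 1), (2, 0, 2)]))   # True
# 4-cycle alternating 2 colors: proper, but the whole cycle is 2-colored.
print(is_acyclic_edge_coloring(4, [(0, 1, 0), (1, 2, 1),
                                   (2, 3, 0), (3, 0, 1)]))              # False
```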

A three-color theorem for some graphs evenly embedded on orientable surfaces

Joan Hutchinson

Macalester College

Date: July 19, 2000

Location: SFU

Abstract

The easiest planar graph coloring theorem states that a graph in the plane can be 2-colored if and only if every face is bounded by an even number of edges; call such a graph "evenly embedded." What is the chromatic number of evenly embedded graphs on other surfaces? Three, provided the surface is orientable and the graph is embedded with all noncontractible cycles sufficiently long. We give a new proof of this result, using a theorem from Robertson-Seymour graph minors work and a technique of Hutchinson, Richter, and Seymour in a proof of a related 4-color theorem for Eulerian triangulations.

Colourings and orientations of graphs

Adrian Bondy

Université Claude Bernard

Date: July 18, 2000

Location: SFU

Abstract

To each proper colouring c:V -> {1,2,...,k} of the vertices of a graph G, there corresponds a canonical orientation of the edges of G, edge uv being oriented from u to v if and only if c(u) > c(v). This simple link between colourings and orientations is but the tip of the iceberg. The ties between the two notions are far more profound and remarkable than are suggested by the above observation. The aim of this talk is to describe some of these connections.
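The canonical orientation described above is simple to make concrete. The sketch below is illustrative (the graph and colouring are examples); note that the resulting digraph is always acyclic, since colours strictly decrease along every arc.

```python
def canonical_orientation(edges, c):
    """Orient each edge uv from u to v iff c(u) > c(v),
    where c is a proper colouring of the vertices."""
    oriented = []
    for u, v in edges:
        assert c[u] != c[v], "colouring is not proper on edge (%s, %s)" % (u, v)
        oriented.append((u, v) if c[u] > c[v] else (v, u))
    return oriented

# Triangle coloured 1, 2, 3: every arc points from the higher colour.
print(canonical_orientation([(0, 1), (1, 2), (0, 2)], {0: 1, 1: 2, 2: 3}))
# -> [(1, 0), (2, 1), (2, 0)]
```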

Integral polyhedra related to even-cycle and even-cut matroids

Bertrand Guenin

University of Waterloo

Date: July 11, 2000

Location: SFU

Abstract

N/A

Amalgamations of Graphs - Lecture 1, Part 1,
Part 2, Lecture 2,
Part 1,
Part 2

Chris Rodger

Auburn University

Date: June 19 - June 30, 2000

Location: SFU

Abstract

N/A

TBA - Lecture 1 Part 1,
Part 2,
Lecture 2

Ron Gould

Emory University

Date: June 19 - June 30, 2000

Location: SFU

Abstract

N/A


For any audio recordings that are not listed above, please click here.