# Dynamic light scattering

Dynamic light scattering (also known as photon correlation spectroscopy) is a technique in physics that can be used to determine the size distribution profile of small particles in solution.

## Description

When light hits small particles, it scatters in all directions (Rayleigh scattering), provided the particles are small compared to the wavelength (below about 250 nm). If the light source is a laser, and thus monochromatic and coherent, a time-dependent fluctuation in the scattering intensity is observed. These fluctuations arise because the particles in solution undergo Brownian motion, so the distances between the scatterers are constantly changing with time. The light scattered by neighbouring particles then interferes constructively or destructively, and these intensity fluctuations contain information about the time scale of the scatterers' movement.

There are several ways to derive dynamic information about the movement of particles in solution by Brownian motion. One such method is dynamic light scattering, also known as quasi-elastic laser light scattering. The dynamic information about the particles is derived from an autocorrelation of the intensity trace recorded during the experiment. The second-order autocorrelation curve is generated from the intensity trace as follows:

$g^{2}(q,\tau )=\langle I(t)\,I(t+\tau )\rangle /\langle I(t)\rangle ^{2}$

where $g^{2}(q,\tau )$ is the autocorrelation function at a particular wave vector $q$ and delay time $\tau$, and $I$ is the intensity. At short delay times the correlation is high, because the particles have not had a chance to move far from their initial positions; the two signals are thus essentially unchanged when compared after only a very short time interval. As the delay times become longer, the correlation decays exponentially to zero, meaning that after a long period (relative to the motion of the particles) there is no correlation between the scattered intensity of the initial and final states. This exponential decay is related to the motion of the particles, specifically to the diffusion coefficient. To fit the decay (i.e. the autocorrelation function), numerical methods based on assumed distributions are used. If the sample were monodisperse, the decay would simply be a single exponential. The Siegert equation relates the second-order autocorrelation curve to the first-order autocorrelation function $g^{1}(q,\tau )$ as follows:

$g^{2}(q,\tau )=1+A\,[g^{1}(q,\tau )]^{2}$

where $A$ is a correction factor that depends on the geometry and alignment of the laser beam in the light scattering setup. Once the autocorrelation curve has been generated, different mathematical approaches can be employed to fit the curve and thus determine the z-averaged translational diffusion coefficient. The simplest approach is to treat the first-order autocorrelation function as a sum of single exponential decays with fractions $G(\Gamma )$, where $\Gamma$ is the decay rate:

$g^{1}(q,\tau )=\sum _{i=1}^{n}G_{i}(\Gamma _{i})\exp(-\Gamma _{i}\tau )=\int G(\Gamma )\exp(-\Gamma \tau )\,d\Gamma .$

## Data analysis
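Before any fitting, the measured quantity is the intensity trace and its second-order autocorrelation; the first-order function is then recovered through the Siegert relation. A minimal sketch in Python with NumPy, using a synthetic complex Gaussian field (so that the Siegert relation holds with $A \approx 1$) rather than real detector data:

```python
import numpy as np

def g2(intensity, max_lag):
    """Second-order autocorrelation g2(tau) = <I(t) I(t+tau)> / <I(t)>^2."""
    I = np.asarray(intensity, dtype=float)
    norm = I.mean() ** 2
    return np.array([np.mean(I[: len(I) - lag] * I[lag:]) / norm
                     for lag in range(max_lag)])

# Synthetic scattered field: a complex AR(1) (Ornstein-Uhlenbeck-like) process
# standing in for the fluctuating speckle field; intensity is |E|^2.
rng = np.random.default_rng(0)
n = 20000
rho = 0.95                      # per-step field correlation
e = np.empty(n, dtype=complex)
e[0] = rng.normal() + 1j * rng.normal()
z = rng.normal(size=n) + 1j * rng.normal(size=n)
for i in range(1, n):
    e[i] = rho * e[i - 1] + np.sqrt(1.0 - rho**2) * z[i]
trace = np.abs(e) ** 2          # intensity trace

curve = g2(trace, 200)

# Recover g1 via the Siegert relation; A is estimated from the
# short-time intercept, as is done in practice.
A = curve[0] - 1.0
g1 = np.sqrt(np.clip((curve - 1.0) / A, 0.0, None))
print(curve[0], g1[10])
```

For this Gaussian field the intercept $g^{2}(0)$ is close to 2, and the recovered $g^{1}(\tau)$ decays from 1 toward 0 as the field decorrelates.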

### Cumulant method

One of the most common methods is the cumulant method, from which, in addition to the sum of the exponentials above, more information can be derived about the variance of the system:

$g^{1}(q,\tau )=\exp(-{\bar {\Gamma }}\tau +(\mu _{2}/2!)\tau ^{2}-(\mu _{3}/3!)\tau ^{3}+\cdots )$

where ${\bar {\Gamma }}$ is the average decay rate and $\mu _{2}/{\bar {\Gamma }}^{2}$ is the second-order polydispersity index (an indication of the variance). A third-order polydispersity index may also be derived, but this is necessary only if the particles of the system are highly polydisperse. The z-averaged translational diffusion coefficient $D_{z}$ may be derived at a single angle or at a range of angles depending on the wave vector $q$.
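In practice the cumulant fit amounts to fitting a low-order polynomial to $\ln g^{1}(\tau)$ at short delays. A sketch on synthetic data (the two decay rates and equal weights below are made-up values for a slightly polydisperse sample, with true ${\bar {\Gamma }} = 100$ and polydispersity index $0.04$):

```python
import numpy as np

# Synthetic first-order correlation function: an equal mixture of two
# decay rates, 80/s and 120/s, giving a nonzero second cumulant.
tau = np.linspace(1e-4, 0.01, 60)          # short delays (cumulant regime)
g1 = 0.5 * np.exp(-80.0 * tau) + 0.5 * np.exp(-120.0 * tau)

# Fit ln g1 with a quadratic: ln g1 ~ -Gamma_bar*tau + (mu2/2!)*tau^2
coeffs = np.polyfit(tau, np.log(g1), 2)
gamma_bar = -coeffs[1]                     # average decay rate (1/s)
mu2 = 2.0 * coeffs[0]                      # second cumulant
pdi = mu2 / gamma_bar**2                   # polydispersity index
print(gamma_bar, pdi)                      # true values: 100 and 0.04
```

Note that the fit is restricted to ${\bar {\Gamma }}\tau \lesssim 1$; extending it to long delays is exactly the overfitting regime the cumulant expansion is not valid in.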

${\bar {\Gamma }}=q^{2}D_{z}$

with

$q=(4\pi n_{0}/\lambda )\sin(\theta /2)$

where $\lambda$ is the incident laser wavelength, $n_{0}$ is the refractive index of the sample and $\theta$ is the angle at which the detector is located with respect to the sample cell.
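For concreteness, the following evaluates $q$ and $D_{z}$ for an assumed but typical setup: a 633 nm He-Ne laser, water as the dispersant, and a detector at 90 degrees; the decay rate is a hypothetical example value, not a measured one:

```python
import math

wavelength = 633e-9          # incident laser wavelength in vacuum (m)
n0 = 1.33                    # refractive index of water
theta = math.radians(90.0)   # detector angle

# Wave vector q = (4*pi*n0/lambda) * sin(theta/2), in 1/m
q = 4.0 * math.pi * n0 * math.sin(theta / 2.0) / wavelength

gamma_bar = 1.87e3           # hypothetical average decay rate (1/s)
D_z = gamma_bar / q**2       # z-averaged diffusion coefficient (m^2/s)
print(q, D_z)
```

For this geometry $q$ is about $1.9\times 10^{7}\ \mathrm{m^{-1}}$, and the example decay rate corresponds to $D_{z}$ of a few $10^{-12}\ \mathrm{m^{2}/s}$, a typical colloidal value.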

Depending on the anisotropy and polydispersity of the system, a plot of $\Gamma /q^{2}$ vs. $q^{2}$ may or may not show an angular dependence. Spherical particles show no angular dependence, hence no anisotropy, and the plot is a horizontal line. Particles with a shape other than a sphere show anisotropy and thus an angular dependence in the plot of $\Gamma /q^{2}$ vs. $q^{2}$. In either case the intercept is $D_{z}$, which can be converted to the hydrodynamic radius of an equivalent sphere through the Stokes-Einstein equation.
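The Stokes-Einstein conversion, $R_{h}=k_{B}T/(6\pi \eta D_{z})$, can be sketched as follows; the diffusion coefficient here is an illustrative value, and water at 25 °C is assumed:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant (J/K)
T = 298.15           # temperature (K)
eta = 0.89e-3        # viscosity of water at 25 C (Pa*s)
D_z = 4.0e-12        # illustrative z-averaged diffusion coefficient (m^2/s)

# Stokes-Einstein: hydrodynamic radius of the equivalent sphere
R_h = k_B * T / (6.0 * math.pi * eta * D_z)
print(R_h * 1e9)     # hydrodynamic radius in nm
```

Note that $R_{h}$ is the radius of a hypothetical hard sphere diffusing at the same rate, which generally differs from any geometric radius of the particle.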

One must note that the cumulant method is valid only for small $\tau$ and sufficiently narrow $G(\Gamma )$. Parameters beyond $\mu _{3}$ should seldom be used, because overfitting the data with many parameters in a power-series expansion renders all the parameters, including ${\bar {\Gamma }}$ and $\mu _{2}$, less precise.

### CONTIN algorithm

An alternative method for analyzing the autocorrelation function is the inverse Laplace transform approach known as CONTIN, developed by Steven Provencher. CONTIN analysis is well suited to heterodisperse, polydisperse and multimodal systems that cannot be resolved with the cumulant method. The resolution for separating two different particle populations is approximately a factor of five or greater, and the difference in relative intensities between two different populations should be less than $1:10^{-5}$.
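CONTIN itself is a sophisticated constrained regularization program; a much simplified sketch of the same underlying idea (a Tikhonov-regularized non-negative least-squares inversion of the Laplace kernel on a discretized decay-rate grid, applied to synthetic bimodal data) looks like this:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic bimodal g1: equal-weight populations with decay rates 50/s
# and 500/s, a factor of 10 apart (above the ~5x resolution limit).
tau = np.logspace(-5, -0.5, 80)
g1 = 0.5 * np.exp(-50.0 * tau) + 0.5 * np.exp(-500.0 * tau)

# Discretize the decay-rate axis and build the Laplace kernel
gammas = np.logspace(0, 4, 60)
K = np.exp(-np.outer(tau, gammas))

# Tikhonov-regularized non-negative least squares: augment the system
# with lam*I rows so large, oscillatory solutions are penalized.
lam = 0.01
A = np.vstack([K, lam * np.eye(len(gammas))])
b = np.concatenate([g1, np.zeros(len(gammas))])
G, _ = nnls(A, b)

# Mass should split roughly evenly between the two true populations.
mass_low = G[gammas < 160].sum()
mass_high = G[gammas >= 160].sum()
print(mass_low, mass_high)
```

Real CONTIN additionally chooses the regularization weight automatically and applies further constraints; the fixed `lam` above is an arbitrary illustrative choice.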

### Maximum entropy method

The maximum entropy method is an analysis method with great development potential. The method is also used for the quantification of sedimentation-velocity data from analytical ultracentrifugation. It involves a number of iterative steps to minimize the deviation of the fitted data from the experimental data, thereby reducing the $\chi ^{2}$ of the fit.
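A toy illustration of the idea, not any production algorithm: fit a distribution of decay rates to synthetic data by minimizing $\chi^{2}$ minus an entropy term $S=-\sum G\ln G$, so that among all distributions fitting the data the smoothest (highest-entropy) one is preferred. The entropy weight `alpha` is an arbitrary fixed value here; real implementations adjust it iteratively:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic first-order correlation function (single population, 100/s)
tau = np.logspace(-5, -1, 50)
g1 = np.exp(-100.0 * tau)

gammas = np.logspace(0, 4, 40)
K = np.exp(-np.outer(tau, gammas))   # Laplace kernel

alpha = 1e-4                         # entropy weight (illustrative)

def objective(G):
    chi2 = np.sum((K @ G - g1) ** 2)       # misfit to the data
    entropy = -np.sum(G * np.log(G))       # Shannon entropy of G
    return chi2 - alpha * entropy          # maximize entropy, minimize chi2

G0 = np.full(len(gammas), 1.0 / len(gammas))   # flat starting distribution
res = minimize(objective, G0, method="L-BFGS-B",
               bounds=[(1e-10, None)] * len(gammas))
G = res.x
residual = np.linalg.norm(K @ G - g1)
print(residual)
```

The positivity bounds keep the entropy term defined; the iteration drives the residual down while the entropy term resists spurious sharp structure in $G(\Gamma)$.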