Unfold with unfoldr

unfoldr is the second program of my complex systems software suite that I open-source. (histogramr was the first one.) For those of you who study the spectral features of complex systems such as random networks, I think unfoldr will prove quite useful. Given the eigenvalues of an ensemble of random matrices, unfoldr calculates the nearest-neighbor level spacings of the unfolded spectrum, either as a whole or for slices of it. You can specify how you want to cut the spectrum into slices (linearly or logarithmically), and unfoldr will calculate the level spacings for each slice individually. With this you can study how the level spacing statistics change with energy and whether or not there is a phase transition in the spectrum. I have used this technique to pinpoint the localization transition in the energy spectrum of an ultra-cold Rydberg gas (Scholak, Wellens, and Buchleitner 2014). I comment on the purpose and the scientific value of the nearest-neighbor level spacing statistics below.

Like histogramr, unfoldr reads and writes HDF5 files. HDF5 files are organized in a hierarchical, file-system-like data structure. HDF5 is commonly used in scientific environments. It is designed to handle big scientific data sets. Please see my earlier blog post for a description of HDF5 and how it is used.

unfoldr and histogramr are designed to work in tandem. Output from unfoldr can (but doesn't have to) be processed by histogramr. unfoldr produces an HDF5 file that contains the level spacings in a single data set:

$ unfoldr -d "spectrum" -m "eigval" -o nnls.h5 spectra.h5

And histogramr then creates a discretized probability density function from it:

$ histogramr -d "inf" -m "spacing" -b .1 -l 0,10 -o nnlsd.h5 nnls.h5

Boom! You've got the nearest-neighbor level spacing distribution stored nicely in nnlsd.h5. Then you can do this:

import numpy as np
import h5py
import plotly.plotly as py
from plotly.graph_objs import *

# reference curves: the Poisson distribution and Wigner's surmise (GOE),
# tabulated as (s, f(s)) pairs on a grid of 100 spacings between 0 and 4
fS_Poisson = np.array((lambda x: [x, np.exp(-x)])(np.linspace(0, 4, 100))).transpose()
fS_GOE = np.array((lambda x: [x, np.pi*x*np.exp(-np.pi*x**2./4.)/2.])(np.linspace(0, 4, 100))).transpose()

# load the discretized NNLS density produced by histogramr
h5file = h5py.File('nnlsd.h5','r')
fS_unfoldr = np.array(h5file['probability density'])
h5file.close()

trace_Poisson = Scatter(x=fS_Poisson[:,0],
                        y=fS_Poisson[:,1],
                        mode='lines',
                        name=u"Poisson distribution")
trace_GOE = Scatter(x=fS_GOE[:,0],
                    y=fS_GOE[:,1],
                    mode='lines',
                    name=u"Wigner's surmise")
trace_unfoldr = Scatter(x=fS_unfoldr[:,0],
                        y=fS_unfoldr[:,1],
                        mode='markers',
                        name=u'unfoldr data',
                        marker=Marker(symbol='square'))
data = Data([trace_Poisson, trace_GOE, trace_unfoldr])
layout = Layout(title=u'Nearest-neighbor level spacing distribution')
fig = Figure(data=data, layout=layout)

py.iplot(fig, filename="unfoldr-demonstration")

The output is:

[Figure: Nearest-neighbor level spacing distribution]

It looks like the data (squares) agrees with Wigner's surmise!

Get unfoldr today from GitHub!

You can download unfoldr from GitHub. In Linux or Mac OS X, open a terminal session and run:

$ git clone git@github.com:tscholak/unfoldr.git

(get Git here). Since I released unfoldr under the GPLv3, you can use, study, share, and modify it for free (as long as you retain these rights for others). unfoldr is written in Python and uses setuptools. From within the source code directory, run:

$ python setup.py install

Make sure your Python's bin directory is in your $PATH. On Mac OS X, if you use MacPorts and haven't done so already, add

export PATH=/opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin:$PATH

to your ~/.bash_login or ~/.bash_profile file. Once that is done, you can run unfoldr directly from the command line; unfoldr -h will show you the available command line arguments.

What exactly is unfoldr good for?

To understand the use cases of unfoldr, I first have to review some complex system theory. If you know all that stuff, skip ahead.

Suppose you are dealing with a complex system, i.e. a large, complicated, but deterministic system or a system whose complexity comes from some intrinsic randomness. Suppose further that you do not seek a full and exact theory describing all phenomena that could possibly manifest in your complex system. Maybe you tried to derive such a theory, but you realized the impossibility of such a task. In fact, your resentment of the task has made you become ignorant of all the details of the system that were once so precious to you. Believe it or not, this is not such a bad thing. There is a way out of your dilemma, and it involves replacing your model — the Hamiltonian, the weighted adjacency matrix, Laplacian matrix, or whatever description of your system you have — by a random matrix (or an ensemble of random matrices) that has the same invariances and symmetry properties as your system. As radical as it may sound, in most cases, it works surprisingly well. The theory that makes all this possible is called random matrix theory (RMT) (Wigner 1967; Dyson 1970; Mehta 1991; Guhr, Müller–Groeling, and Weidenmüller 1998; Stöckmann 1999).

Random matrix theory and Gaussian ensembles

RMT is concerned with the statistical properties (particularly, of eigenvalues and eigenvectors) of large $N \times N$ matrices $M$ with random elements $M_{i j}$. Within RMT, all results are derived from the probability density function $f_{M}$ of $M$. RMT is relevant in math (Tao 2012), theoretical physics (think complex nuclei, quantum and microwave billiards, metals with randomly distributed impurities, Rydberg gases (Scholak, Wellens, and Buchleitner 2014), Boson sampling (Walschaers et al. 2014), ...), finance (Bouchaud and Potters 2009), and the study of networks and communities (social networks, computer networks, etc.). In physics, for instance, $M$ can be the matrix representation of a realization of the system's Hamiltonian.

Of course, for all interesting complex systems, the density $f_{M}$ is highly nontrivial, impossible to obtain from first principles, and thus simply not known. The first step in an RMT treatment of a complex system is therefore to attempt to simplify or to guesstimate $f_{M}$. As mentioned already above, these attempts are guided by the invariance properties of the complex system. There are three seminal random matrix ensembles that result from such considerations:

  • the Gaussian orthogonal ensemble (GOE),
  • the Gaussian unitary ensemble (GUE) and
  • the Gaussian symplectic ensemble (GSE).

As the names suggest, the densities $f_{M}$ of these ensembles are invariant under orthogonal, unitary, or symplectic similarity transformations, respectively... ;) In other words, the core assumption for these ensembles is that all real (complex, quaternion) basis sets are equally well suited to describe the system. Furthermore, all matrix elements $M_{i j}$ are independent, and the joint probability density $f_{M}$ decomposes into the product of its marginals, $f_{M} = f_{M_{1,1}} f_{M_{1,2}} \cdots f_{M_{N,N}}$. The matrix elements are normally distributed with zero mean and fixed variance. For the GOE, the marginals read \[ \begin{align*} f_{M_{i i}}(m) & = \frac{1}{\sqrt{2 \pi \sigma^2}} \, \mathrm{e}^{- \frac{m^2}{2 \sigma^2}}, \\ f_{M_{i j}}(m) & = \frac{1}{\sqrt{\pi \sigma^2}} \, \mathrm{e}^{- \frac{m^2}{\sigma^2}} \quad (i \neq j), \end{align*} \] where $\sigma^2$ is the variance of the diagonal elements $M_{i i}$ (the off-diagonal elements have variance $\sigma^2/2$).
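To make this concrete, here is a minimal NumPy sketch (my own toy code, not part of unfoldr) that samples GOE matrices with exactly these marginals and collects their eigenvalues; the function name, the seed, and the choice $\sigma = 1$ are mine:

import numpy as np

rng = np.random.default_rng(0)

def sample_goe(n, sigma=1.0):
    # independent Gaussian entries; symmetrizing via M = A + A^T yields
    # Var(M_ii) = sigma**2 on the diagonal and Var(M_ij) = sigma**2 / 2 off it
    a = rng.normal(scale=sigma / 2.0, size=(n, n))
    return a + a.T

# a small ensemble of spectra, one sorted spectrum per realization
n, realizations = 200, 50
spectra = np.array([np.linalg.eigvalsh(sample_goe(n)) for _ in range(realizations)])
print(spectra.shape)  # (50, 200)

Spectra like these, written to an HDF5 file, are the kind of ensemble data unfoldr is meant to digest.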

The three Gaussian random matrix ensembles have been studied ad nauseam. The eigenvalue density is known (it's the Wigner semi-circle), the eigenstate localization properties are known (they are extended (Erdős, Schlein, and Yau 2007), i.e. every eigenvector extends over a finite fraction of the system, that is over a number of components that scales with the system size $N$), the level-spacing distribution is known (it's Wigner's surmise, see below), etc. The importance of the Gaussian ensembles stems from the fact that they are universal. If a closed system's behavior is chaotic in the classical limit, then, statistically speaking, its corresponding wave or quantum behavior coincides with one of the Gaussian ensembles (Mehta 1991). Integrable (non-chaotic) systems, on the other hand, have a completely different statistical fingerprint. If your complex system turns out to coincide with universal Gaussian statistics, then you are done. There is nothing left to solve or discover, because everything you possibly would like to know (and can be known) about your system has already been discovered. And if your complex system turns out to be different, then there is still a good chance that its statistics are described by one of the other important random matrix ensembles:

  • Lévy matrices (Cizeau and Bouchaud 1994),
  • power-law random banded matrices (Mirlin et al. 1996),
  • Euclidean random matrices (Goetschy and Skipetrov 2013),
  • adjacency matrices of random graphs (Bollobás 2001),

you name it.

Eigenvalue correlations and level repulsion

At this point, you may wonder: How can I find out about the statistics of my complex system? How do I get its statistical fingerprint? How can I compare my system with the random matrix ensembles? unfoldr helps you with that. It turns out you can learn a lot from the data:

One of the most useful statistical metrics for random matrix ensembles is the nearest-neighbor level spacing (NNLS) density, denoted here and in the following by $f_{S}$. What does $f_{S}$ tell you? Formally, \[ f_{S}(s) \, \mathrm{d}s = \overline{\delta(s - S_{\nu})} \, \mathrm{d}s \] is the ensemble averaged probability to sample two adjacent eigenvalues (also called eigenenergies) $\Lambda_{\nu}$, $\Lambda_{\nu+1}$ that are separated by an energy difference $S_{\nu} = \Lambda_{\nu+1} - \Lambda_{\nu}$ between $s$ and $s + \mathrm{d}s$. This definition makes sense as long as you sort your eigenvalues $\Lambda_{\nu}$ in ascending order for each realization of $M$, that is $\Lambda_{1} \le \Lambda_{2} \le \ldots \le \Lambda_{N}$.
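In code, the raw (not yet unfolded) spacings of a single realization are simply the differences of the sorted eigenvalues. A minimal sketch (my own variable names, nothing unfoldr-specific):

import numpy as np

# eigenvalues of one realization; random numbers here, just for illustration
eigvals = np.random.default_rng(0).normal(size=1000)

eigvals = np.sort(eigvals)       # ascending order, as required above
raw_spacings = np.diff(eigvals)  # S_nu = Lambda_{nu+1} - Lambda_nu, length N - 1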

The NNLS density is important, because its shape depends on the correlations and interactions between all eigenvalues of the random matrix. It is a fingerprint of the eigenvalue correlations. If you have the eigenvalues (or eigenenergies) of your system, unfoldr can compute the level spacings for you. Afterwards, you can use histogramr to get the NNLS density and thus the correlation fingerprint of your system. That fingerprint can then be compared to the fingerprints of, e.g., the universal Gaussian matrices.

The eigenvalue correlations have a deterministic system-dependent part and a random universal part. It is important to note that only the latter part may be used for comparisons between different systems or matrix ensembles. To extract the universal part, the spectrum of your complex system must be unfolded. Spectral unfolding is a transformation that maps the ensemble averaged level spacing, \[ \overline{S} = \int_{0}^{\infty} s \, f_{S} (s) \, \mathrm{d}s, \] to a constant, i.e. $\overline{S} = 1$. unfoldr will do that for you (hence the name). So you don't have to worry about it.

If there are no dependencies between the eigenvalues, all distances between them will be uncorrelated, and $f_{S}$ will thus be identical to the Poisson distribution, \[ f^{\mathrm{P}}_{S}(s) = \mathrm{e}^{-s}. \] By contrast, the spectra of the universal Gaussian ensembles (GOE, GUE, GSE) are correlated. Their level-spacing statistics are not described by $f^{\mathrm{P}}_{S}$. For instance, the NNLS statistics of the GOE are well described by Wigner's surmise, i.e. by the Rayleigh distribution \[ f^{\mathrm{GOE}}_{S}(s) = \frac{\pi}{2} \, s \, \mathrm{e}^{-\frac{\pi}{4} s^2}. \] The densities of the GUE and GSE, on the other hand, follow \[ \begin{align*} f^{\mathrm{GUE}}_{S}(s) & = \frac{32}{\pi^2} \, s^2 \, \mathrm{e}^{-\frac{4}{\pi} s^2}, \\ f^{\mathrm{GSE}}_{S}(s) & = \frac{2^{18}}{3^6 \pi^3} \, s^4 \, \mathrm{e}^{-\frac{64}{9 \pi} s^2}, \end{align*} \] respectively, to very good approximation (especially when $N$ is large). The correlations between the eigenvalues are manifested in the frequency of occurrence of small energy differences $s$. A negative deviation from Poissonian statistics $f^{\mathrm{P}}_{S}$ at small spacings signifies level repulsion (which coincidentally indicates eigenvector delocalization (Izrailev 1990)). When we look at Wigner's surmise $f^{\mathrm{GOE}}_{S}(s)$, we see that it increases linearly for small spacings $s$. In other words, the probability of sampling two neighboring eigenvalues that are separated by a small distance between $s$ and $s + \mathrm{d}s$ increases with $s$: very small spacings are suppressed. Not only do eigenvalues not cluster, they avoid each other! The repulsion is even stronger for the other universal ensembles, since $f^{\mathrm{GUE}}_{S}(s) \sim s^2$ and $f^{\mathrm{GSE}}_{S}(s) \sim s^4$ for small $s$.
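As a quick sanity check on these formulas, the following sketch (using SciPy, which unfoldr itself does not need) verifies numerically that each of the densities integrates to one and has mean spacing one, which is exactly the normalization an unfolded spectrum satisfies (see below):

import numpy as np
from scipy.integrate import quad

densities = {
    "Poisson": lambda s: np.exp(-s),
    "GOE": lambda s: np.pi / 2 * s * np.exp(-np.pi * s**2 / 4),
    "GUE": lambda s: 32 / np.pi**2 * s**2 * np.exp(-4 * s**2 / np.pi),
    "GSE": lambda s: 2**18 / (3**6 * np.pi**3) * s**4 * np.exp(-64 * s**2 / (9 * np.pi)),
}

for name, f in densities.items():
    norm = quad(f, 0, np.inf)[0]                   # should be 1
    mean = quad(lambda s: s * f(s), 0, np.inf)[0]  # should be 1
    print(name, norm, mean)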

Let's assume now you have calculated the NNLS density using unfoldr and histogramr and you are staring at the result. What can you conclude?

  • If it looks like the Poisson distribution $f^{\mathrm{P}}_{S}$, then you know that your eigenvalues are uncorrelated. It is also quite likely that your eigenstates are localized, either strongly or weakly (Izrailev 1990).
  • If it looks like Wigner's surmise, like $f^{\mathrm{GUE}}_{S}$, or like $f^{\mathrm{GSE}}_{S}$, then you know that your system agrees with universal Gaussian statistics. This is a big deal and can help you with a lot of things. (In physics, everything nice is Gaussian.) You also know now that your eigenvectors are delocalized.
  • If it looks like a mix between Poisson and universal Gaussian statistics, then you may have a blend of two or more statistically independent subspectra that correspond to either statistics. You can check that by estimating $\lim_{s \to 0} f_{S}(s)$. If this limit is finite, then this is consistent with such a blend (Berry and Robnik 1984). However, if your $f_{S}(s) \sim s^\alpha$ with a fractional exponent $\alpha$ for small $s$, then there is most likely interaction between the subspectra (Bäcker et al. 2011). In other words, your subspectra are not independent, and their blend results in something that is different from either of them. You may have a localization transition in your system. (See the sketch below for one way to estimate the small-$s$ behavior from histogramr output.)
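To tell these cases apart in practice, you can look at the small-$s$ part of the histogramr output and fit a power law. The following is a rough sketch, not an unfoldr feature; the file and data set names follow the example above, and the fitting range $s < 0.5$ is an arbitrary choice you will want to adapt to your data:

import numpy as np
import h5py

# load the discretized NNLS density produced by histogramr (see above)
with h5py.File('nnlsd.h5', 'r') as h5file:
    fS = np.array(h5file['probability density'])

s, f = fS[:, 0], fS[:, 1]
print("density in the first bin (rough estimate of the s -> 0 limit):", f[0])

# crude power-law fit f_S(s) ~ s**alpha on the small-s bins
mask = (s < 0.5) & (f > 0)
alpha, _ = np.polyfit(np.log(s[mask]), np.log(f[mask]), 1)
print("small-s exponent alpha is roughly", alpha)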

If you think you have a localization transition in your system (or maybe just a transition between Poisson and Wigner-Dyson statistics under variation of the energy), you can get further insight into this by letting unfoldr calculate the NNLSs for individual slices through the spectral data. For instance,

$ unfoldr -d "spectrum" -m "eigval" -l -10,10 -b .5 -o nnls.h5 spectra.h5
$ histogramr -d "-9.75" -m "spacing" -b .1 -l 0,10 -o nnlsd_-9.75.h5 nnls.h5
$ histogramr -d "-9.25" -m "spacing" -b .1 -l 0,10 -o nnlsd_-9.25.h5 nnls.h5
...
$ histogramr -d "9.75" -m "spacing" -b .1 -l 0,10 -o nnlsd_9.75.h5 nnls.h5

calculates not one, but forty NNLS densities that are based exclusively on the spectral data in the intervals $[-10, -9.5)$, $[-9.5,-9)$, $\ldots$, $[9.5,10)$. With this data you can easily follow the change of the NNLS statistics with the energy (Scholak, Wellens, and Buchleitner 2014).
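Running (and then inspecting) forty histogramr jobs by hand gets tedious, so here is a small helper sketch that collects the per-slice densities for plotting. The file naming convention nnlsd_<center>.h5 simply mirrors the commands above; everything else (the grid of slice centers, the dictionary) is my own choice:

import numpy as np
import h5py

# slice centers as in the example above: -9.75, -9.25, ..., 9.75
centers = np.arange(-9.75, 10.0, 0.5)

densities = {}
for c in centers:
    fname = 'nnlsd_{:g}.h5'.format(c)   # e.g. nnlsd_-9.75.h5
    with h5py.File(fname, 'r') as h5file:
        densities[c] = np.array(h5file['probability density'])

# densities[c][:, 0] holds the spacings s and densities[c][:, 1] the density
# f_S(s) of the slice centered at energy c; plot them to follow f_S across the spectrum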

Unfolding — theory and numerical implementation

Below I first explain how unfolding works (Guhr, Müller–Groeling, and Weidenmüller 1998) and then how it is implemented in unfoldr. As an auxiliary tool, I first define the spectral staircase function \[ \mathcal{N}(\lambda) = \sum_{\nu = 1}^N \Theta\left(\lambda - \Lambda_{\nu}\right) = \#\left\{\nu \middle| \Lambda_{\nu} < \lambda\right\}, \] where $\Theta(\lambda) = \int_{-\infty}^\lambda \delta(\lambda') \, \mathrm{d}\lambda'$ is the Heaviside step function. $\mathcal{N}(\lambda)$ is the number of eigenvalues $\Lambda_{\nu}$ that are smaller than $\lambda$. We separate $\mathcal{N}(\lambda)$ into a smooth part, \[ \overline{\mathcal{N}}(\lambda) = N F_{\Lambda}(\lambda) = N \int_{-\infty}^{\lambda} f_{\Lambda}(\lambda') \, \mathrm{d}\lambda', \] and a fluctuating part, \[ \mathcal{N}_{\mathrm{osc}}(\lambda) = \mathcal{N}(\lambda) - \overline{\mathcal{N}}(\lambda), \] i.e. into the ensemble average given by the cumulative energy level distribution function $F_{\Lambda}$ and the difference between the realization's value of $\mathcal{N}(\lambda)$ and the ensemble average, respectively. $f_{\Lambda}$ is the density of states, defined as \[ f_{\Lambda}(\lambda) = \frac{1}{N} \, \overline{\operatorname{Tr} \delta \left(\lambda - M\right)}. \] $f_{\Lambda}(\lambda) \, \mathrm{d}\lambda$ is the ensemble averaged probability to find an eigenvalue $\Lambda$ of $M$ in the interval $\left[\lambda, \lambda + \mathrm{d}\lambda\right]$.
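For a single realization, the staircase function is easy to evaluate numerically; counting eigenvalues below $\lambda$ is a sorted search. A tiny sketch in plain NumPy (not taken from unfoldr):

import numpy as np

# sorted eigenvalues of one realization; random numbers, just for illustration
eigvals = np.sort(np.random.default_rng(0).normal(size=1000))

def staircase(lam, eigvals):
    # number of eigenvalues strictly below lam, i.e. N(lam)
    return np.searchsorted(eigvals, lam, side='left')

print(staircase(0.0, eigvals))  # roughly 500 for this symmetric toy spectrum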

Now, unfolding is defined as the mapping \[ \Lambda_{\nu} \mapsto \tilde{\Lambda}_{\nu} = \overline{\mathcal{N}}\left(\Lambda_{\nu}\right). \] This is done for all eigenvalues ${\Lambda_{\nu}}$ of a given realization. Thus, by construction, the ensemble averaged spectral staircase function of the unfolded spectrum $\left\{\tilde{\Lambda}_{\nu}\right\}$ evaluated for the scaled energy $\lambda$ is equal to $\lambda$ itself, \[ \overline{\tilde{\mathcal{N}}}(\lambda) = \overline{\#\left\{\nu \middle| \tilde{\Lambda}_{\nu} < \lambda\right\}} = \lambda, \] and the corresponding unfolded mean level spacing equals one, $\overline{\tilde{S}} = 1$, such that the level spacing density $f_{\tilde{S}}$ of the unfolded eigenvalues satisfies \[ \begin{align*} \int_0^\infty f_{\tilde{S}}(s) \, \mathrm{d}s & = 1, \\ \int_0^\infty s \, f_{\tilde{S}}(s) \, \mathrm{d}s & = \overline{\tilde{S}} = 1. \end{align*} \] How is unfolding done in unfoldr? When you load your spectral data with unfoldr, two-thirds of the spectra are set aside to populate what I call the unfolding pool. The unfolding pool is a flat, sorted array of eigenvalues. I use it to estimate the ensemble averaged spectral staircase function $\overline{\mathcal{N}}$ and to do the transformation $\Lambda_{\nu} \mapsto \tilde{\Lambda}_{\nu}$. The following Python pseudo code shows how this is implemented:

i = 0
for j in range(number of eigenvalues):
  # advance i to the first pool eigenvalue that is >= eigenvalue[j];
  # i / (number of spectra in the unfolding pool) then estimates the ensemble
  # averaged staircase at eigenvalue[j], i.e. the unfolded eigenvalue
  while unfolding_pool[i] < eigenvalue[j]:
    i += 1
    if i >= length of the unfolding pool:
      break
  # accumulate the unfolded spacings; level_spacing starts out as an array of zeros,
  # and level_spacing[j] ends up as the difference of the unfolded eigenvalues j+1 and j
  if j + 1 < number of eigenvalues:
    level_spacing[j] -= i / (number of spectra in the unfolding pool)
  if j > 0:
    level_spacing[j-1] += i / (number of spectra in the unfolding pool)

This calculation is done for each of the remaining spectra (the third that doesn't go into the unfolding pool). The above piece of code not only unfolds the spectrum, but also calculates the level spacings. The code is compiled with Cython, so it should be fast.
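If you want to play with this idea outside of unfoldr, the same computation can be written in a few lines of vectorized NumPy. The following is only a sketch of the method described above, not the actual unfoldr code, and the toy input data is my own:

import numpy as np

rng = np.random.default_rng(0)

# toy ensemble: 90 spectra of 200 sorted eigenvalues each, standing in for real data
spectra = np.sort(rng.normal(size=(90, 200)), axis=1)

# two-thirds of the spectra populate the unfolding pool, a flat sorted array
n_pool = 2 * len(spectra) // 3
pool = np.sort(spectra[:n_pool].ravel())
remaining = spectra[n_pool:]

def unfolded_spacings(eigvals, pool, n_pool_spectra):
    # the ensemble averaged staircase at each eigenvalue is estimated by the number
    # of pool eigenvalues below it, divided by the number of spectra in the pool
    unfolded = np.searchsorted(pool, eigvals, side='left') / n_pool_spectra
    return np.diff(unfolded)

spacings = np.concatenate([unfolded_spacings(e, pool, n_pool) for e in remaining])
print(spacings.mean())  # should be close to 1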

You can control the size of the unfolding pool with the command line argument -p. The default value is 2. That means that the number of spectra in the unfolding pool is twice as large as the number of spectra that are unfolded. The default value should work in most cases. However, if you see oscillations or weird artifacts in your level spacing densities, then try a larger number.


References

  1. Scholak, Torsten, Thomas Wellens, and Andreas Buchleitner, 2014, “Spectral Backbone of Excitation Transport in Ultracold Rydberg Gases,” Phys. Rev. A 90 (American Physical Society), 063415. doi:10.1103/PhysRevA.90.063415.

    Studied spectral structure underlying excitonic energy transfer in ultracold Rydberg gases ⋅ Found evidence for a critical energy that separates delocalized eigenstates from states that are localized at pairs or clusters of atoms separated by less than the typical nearest-neighbor distance ⋅ Discovered that the dipole blockade effect in Rydberg gases can be leveraged to manipulate the localization transition.

    The spectral structure underlying excitonic energy transfer in ultracold Rydberg gases is studied numerically, in the framework of random matrix theory, and via self-consistent diagrammatic techniques. Rydberg gases are made up of randomly distributed, highly polarizable atoms that interact via strong dipolar forces. Dynamics in such a system is fundamentally different from cases in which the interactions are of short range, and is ultimately determined by the spectral and eigenvector structure. In the energy levels’ spacing statistics, we find evidence for a critical energy that separates delocalized eigenstates from states that are localized at pairs or clusters of atoms separated by less than the typical nearest-neighbor distance. We argue that the dipole blockade effect in Rydberg gases can be leveraged to manipulate this transition across a wide range: As the blockade radius increases, the relative weight of localized states is reduced. At the same time, the spectral statistics, in particular, the density of states and the nearest-neighbor level-spacing statistics, exhibits a transition from approximately a 1-stable Lévy to a Gaussian orthogonal ensemble. Deviations from random matrix statistics are shown to stem from correlations between interatomic interaction strengths that lead to an asymmetry of the spectral density and profoundly affect localization properties. We discuss approximations to the self-consistent Matsubara-Toyozawa locator expansion that incorporate these effects.

    @article{scholak2014spectral,
      annote = {Studied spectral structure underlying excitonic energy transfer in ultracold Rydberg gases ⋅ Found evidence for a critical energy that separates delocalized eigenstates from states that are localized at pairs or clusters of atoms separated by less than the typical nearest-neighbor distance ⋅ Discovered that the dipole blockade effect in Rydberg gases can be leveraged to manipulate the localization transition.},
      author = {Scholak, Torsten and Wellens, Thomas and Buchleitner, Andreas},
      date-added = {2015-05-06 20:28:24 +0000},
      date-modified = {2015-05-06 20:28:24 +0000},
      doi = {10.1103/PhysRevA.90.063415},
      journal = {Phys. Rev. A},
      month = dec,
      number = {6},
      pages = {063415},
      publisher = {American Physical Society},
      title = {Spectral backbone of excitation transport in ultracold Rydberg gases},
      volume = {90},
      year = {2014},
      bdsk-url-1 = {http://dx.doi.org/10.1103/PhysRevA.90.063415}
    }
    
  2. Walschaers, M., J. Kuipers, J.-D. Urbina, K. Mayer, M. C. Tichy, K. Richter, and A. Buchleitner, 2014, “A Statistical Benchmark for BosonSampling,” ArXiv e-prints.
    @article{Walschaers2014A-Statistical-B,
      archiveprefix = {arXiv},
      author = {Walschaers, M. and Kuipers, J. and Urbina, J.-D. and Mayer, K. and Tichy, M. C. and Richter, K. and Buchleitner, A.},
      date-added = {2015-05-07 14:10:59 +0000},
      date-modified = {2015-05-07 19:00:17 +0000},
      eprint = {1410.8547},
      journal = {ArXiv e-prints},
      primaryclass = {quant-ph},
      title = {A Statistical Benchmark for BosonSampling},
      year = {2014}
    }
    
  3. Goetschy, A., and S. E. Skipetrov, 2013, “Euclidean Random Matrices and Their Applications in Physics,” ArXiv e-prints.
    @article{Goetschy2013Euclidean-rando,
      archiveprefix = {arXiv},
      author = {Goetschy, A. and Skipetrov, S. E.},
      date-added = {2015-05-06 20:44:37 +0000},
      date-modified = {2015-05-07 19:00:21 +0000},
      eprint = {1303.2880},
      journal = {ArXiv e-prints},
      primaryclass = {math-ph},
      title = {Euclidean random matrices and their applications in physics},
      year = {2013}
    }
    
  4. Tao, T., 2012, Topics in Random Matrix Theory. Graduate Studies in Mathematics (American Mathematical Society).
    @book{Tao2012Topics-in-Rando,
      author = {Tao, T.},
      date-added = {2015-05-06 20:15:35 +0000},
      date-modified = {2015-05-07 18:58:25 +0000},
      isbn = {9780821874301},
      lccn = {2011045194},
      publisher = {American Mathematical Society},
      series = {Graduate studies in mathematics},
      title = {Topics in Random Matrix Theory},
      year = {2012}
    }
    
  5. Bäcker, Arnd, Roland Ketzmerick, Steffen Löck, and Normann Mertig, 2011, “Fractional-Power-Law Level Statistics Due to Dynamical Tunneling,” Phys. Rev. Lett. 106, 024101. doi:10.1103/PhysRevLett.106.024101.
    @article{Backer2011Fractional-Powe,
      author = {B\"acker, Arnd and Ketzmerick, Roland and L\"ock, Steffen and Mertig, Normann},
      date-added = {2015-05-07 17:56:34 +0000},
      date-modified = {2015-05-07 19:01:41 +0000},
      doi = {10.1103/PhysRevLett.106.024101},
      journal = {Phys. Rev. Lett.},
      pages = {024101},
      title = {Fractional-Power-Law Level Statistics Due to Dynamical Tunneling},
      volume = {106},
      year = {2011},
      bdsk-url-1 = {http://dx.doi.org/10.1103/PhysRevLett.106.024101}
    }
    
  6. Bouchaud, J. P., and M. Potters, 2009, “Financial Applications of Random Matrix Theory: a Short Review,” ArXiv e-prints.
    @article{Bouchaud2009Financial-Appli,
      archiveprefix = {arXiv},
      author = {Bouchaud, J. P. and Potters, M.},
      date-added = {2015-05-06 20:14:06 +0000},
      date-modified = {2015-05-07 19:02:16 +0000},
      eprint = {0910.1205},
      journal = {ArXiv e-prints},
      primaryclass = {q-fin.ST},
      title = {Financial Applications of Random Matrix Theory: a short review},
      year = {2009}
    }
    
  7. Erdős, L., B. Schlein, and H.-T. Yau, 2007, “Semicircle Law on Short Scales and Delocalization of Eigenvectors for Wigner Random Matrices,” ArXiv e-prints.
    @article{Erdos2007Semicircle-law-,
      archiveprefix = {arXiv},
      author = {{Erdős}, L. and {Schlein}, B. and {Yau}, H.-T.},
      date-added = {2015-05-07 15:31:02 +0000},
      date-modified = {2015-06-24 16:04:45 +0000},
      eprint = {0711.1730},
      journal = {ArXiv e-prints},
      primaryclass = {math-ph},
      title = {{Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices}},
      year = {2007}
    }
    
  8. Bollobás, B., 2001, Random Graphs (Cambridge University Press).
    @book{Bollobas2001Random-Graphs,
      author = {Bollobás, B.},
      date-added = {2015-05-06 20:24:48 +0000},
      date-modified = {2015-05-07 18:56:23 +0000},
      isbn = {9780521797221},
      publisher = {Cambridge University Press},
      title = {Random Graphs},
      title1 = {Cambridge Studies in Advanced Mathematics},
      year = {2001}
    }
    
  9. Stöckmann, H.-J., 1999, Quantum Chaos: An Introduction (Cambridge University Press, Cambridge, United Kingdom).
    @book{Stockmann1999Quantum-chaos,
      address = {Cambridge, United Kingdom},
      author = {St\"{o}ckmann, H.-J.},
      date-added = {2015-05-06 19:30:30 +0000},
      date-modified = {2015-05-07 18:42:13 +0000},
      publisher = {Cambridge University Press},
      title = {Quantum chaos: An introduction},
      year = {1999}
    }
    
  10. Guhr, Thomas, Axel Müller–Groeling, and Hans A. Weidenmüller, 1998, “Random-Matrix Theories in Quantum Physics: Common Concepts,” Phys. Rep. 299, 189.
    @article{Guhr1998Random-matrix-t,
      author = {Guhr, Thomas and Müller--Groeling, Axel and Weidenmüller, Hans A.},
      date-modified = {2015-05-07 18:48:01 +0000},
      issn = {0370-1573},
      journal = {Phys. Rep.},
      number = {4--6},
      pages = {189},
      title = {Random-matrix theories in quantum physics: common concepts},
      volume = {299},
      year = {1998}
    }
    
  11. Mirlin, Alexander D., Yan V. Fyodorov, Frank-Michael Dittes, Javier Quezada, and Thomas H. Seligman, 1996, “Transition from Localized to Extended Eigenstates in the Ensemble of Power-Law Random Banded Matrices,” Phys. Rev. E 54, 3221.
    @article{Mirlin1996Transition-from,
      author = {Mirlin, Alexander D. and Fyodorov, Yan V. and Dittes, Frank-Michael and Quezada, Javier and Seligman, Thomas H.},
      date-added = {2015-05-06 20:37:05 +0000},
      date-modified = {2015-05-07 18:45:52 +0000},
      journal = {Phys. Rev. E},
      pages = {3221},
      title = {Transition from localized to extended eigenstates in the ensemble of power-law random banded matrices},
      volume = {54},
      year = {1996}
    }
    
  12. Cizeau, P., and J. P. Bouchaud, 1994, “Theory of Lévy Matrices,” Phys. Rev. E 50, 1810.
    @article{Cizeau1994Theory-of-Levy-,
      author = {Cizeau, P. and Bouchaud, J. P.},
      date-added = {2015-05-06 20:22:03 +0000},
      date-modified = {2015-05-07 18:45:26 +0000},
      journal = {Phys. Rev. E},
      pages = {1810},
      title = {Theory of Lévy matrices},
      volume = {50},
      year = {1994}
    }
    
  13. Mehta, M. L., 1991, Random Matrices. 2nd ed (Academic Press, San Diego, California).
    @book{Mehta1991Random-matrices,
      address = {San Diego, California},
      author = {Mehta, M. L.},
      date-added = {2015-05-06 19:31:11 +0000},
      date-modified = {2015-05-07 18:41:46 +0000},
      edition = {2nd},
      publisher = {Academic Press},
      title = {Random matrices},
      year = {1991}
    }
    
  14. Izrailev, Felix M., 1990, “Simple Models of Quantum Chaos: Spectrum and Eigenfunctions,” Phys. Rep. 196, 299.
    @article{Izrailev1990Simple-models-o,
      author = {Izrailev, Felix M.},
      date-added = {2015-05-07 17:02:27 +0000},
      date-modified = {2015-05-07 18:57:00 +0000},
      journal = {Phys. Rep.},
      number = {5--6},
      pages = {299},
      title = {Simple models of quantum chaos: Spectrum and eigenfunctions},
      volume = {196},
      year = {1990}
    }
    
  15. Berry, M V, and M Robnik, 1984, “Semiclassical Level Spacings When Regular and Chaotic Orbits Coexist,” J. Phys. A: Math. Gen. 17, 2413.

    The authors calculate semiclassical limiting level spacing distributions P(S) for systems whose classical energy surface is divided into a number of separate region in which motion is regular or chaotic. In the calculation it is assumed that the spectrum is the superposition of statistically independent sequences of levels from each of the classical phase-space regions, sequences from regular regions, having Poisson distributions and those from irregular regions having Wigner distributions. The formulae for P(S) depend on the sum of the Liouville measures of all the classical regular regions, and on the separate Liouville measures of the significant chaotic regions.

    @article{Berry1984Semiclassical-l,
      author = {Berry, M V and Robnik, M},
      date-added = {2015-05-07 17:56:26 +0000},
      date-modified = {2015-05-07 18:46:27 +0000},
      journal = {J. Phys. A: Math. Gen.},
      number = {12},
      pages = {2413},
      title = {Semiclassical level spacings when regular and chaotic orbits coexist},
      volume = {17},
      year = {1984}
    }
    
  16. Dyson, Freeman J., 1970, “Correlations between Eigenvalues of a Random Matrix,” Comm. Math. Phys. 19, 235.
    @article{Dyson1970Correlations-be,
      author = {Dyson, Freeman J.},
      date-added = {2015-05-06 20:33:42 +0000},
      date-modified = {2015-05-07 18:42:56 +0000},
      journal = {Comm. Math. Phys.},
      number = {3},
      pages = {235},
      title = {Correlations between eigenvalues of a random matrix},
      volume = {19},
      year = {1970}
    }
    
  17. Wigner, E., 1967, “Random Matrices in Physics,” SIAM Rev. 9, 1.
    @article{Wigner1967Random-Matrices,
      author = {Wigner, E.},
      date-added = {2015-05-06 20:33:33 +0000},
      date-modified = {2015-05-07 18:57:26 +0000},
      journal = {SIAM Rev.},
      number = {1},
      pages = {1},
      title = {Random Matrices in Physics},
      volume = {9},
      year = {1967}
    }