@[toc](Contents)
-----
# Overview
This repository contains supporting materials for the paper *Multifractal Mice: Measuring Task Engagement and Readiness-to-hand via Hand Movement*.
There are two sub-folders, for experiments 2 and 3 (experiment 1 being a replication of Dotov et al.'s study in [Cognitive and movement measures reflect the transition to presence-at-hand][1]). Each folder contains the software materials required to replicate our experiments (the "sheep game"), as well as our data and Jupyter notebooks for processing that data. Observation test data are in the analysis zip file in the experiment 2 folder.
The first of these experiments (experiment 2) looks at the relationship between the multifractality of hand movement and several aspects of readiness-to-hand: familiarity with tool and task, adequacy-of-tool, and locus of attention (the degree to which the user's attention is focused on the tool).
The second (experiment 3) looks at the relationship between the multifractality of hand movement and another dimension closely related to readiness-to-hand: task engagement.
Different game software is provided for each experiment, though both versions build on the same basic code.
---------
# Ethics
All experiments were submitted in advance to the university ethics committee and approved. Experiment 2 involved deception. This necessitated a more rigorous approval process, the taking of consent both before and after the experiment, and a final debrief explaining the deception.
----------
# The Game
Both versions of the sheep game (one per experiment, in the relevant folder) have been compiled to run on 64-bit Ubuntu. A readme file is included with each. The source code is also provided, so it can be recompiled or altered. It is written in C++ and requires [OpenFrameworks][2] to compile.
The object behaviour is given by the following dynamical mappings, with "sheep" $u_k$, $k \in \{1,2,3\}$, cursor $w$, and axes $x,y$ indexed by $j \in \{1,2\}$:
\begin{equation*}
u_{kj}[n+1] = u_{kj}[n]+dt(a(u_{kj}[n]-w_j[n])+bv_{kj}[n]+c\eta_{kj}[n])
\end{equation*}
\begin{equation*}
v_{kj}[n]=(u_{kj}[n]-\frac{1}{3}\sum_{i=1}^{3}u_{ij}[n])^3
\end{equation*}
\begin{equation*}
w_j[n+1] = w_j[n] + dt(g(F_j[n] - w_j[n]))
\end{equation*}
Here $a$ scales the repulsive force from the cursor on the sheep, $b$ controls how tightly the sheep group together, $c$ scales the noise $\eta$ added to the sheep, $F$ is the vector of mouse coordinates, and $g$ controls how quickly the cursor tracks the mouse. More details of the game are available in [Dotov et al.'s paper][1].
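As a sketch of these mappings (the game itself is the C++/OpenFrameworks code in each folder; this Python fragment is only illustrative, and the parameter values below are placeholders, not the ones used in our experiments):

```python
import numpy as np

def step(u, w, F, a, b, c, g, dt, rng):
    """One Euler update of the sheep-game mappings above.
    u: (3, 2) sheep positions, w: (2,) cursor, F: (2,) mouse coordinates."""
    v = (u - u.mean(axis=0)) ** 3           # cubic coupling to the flock centroid
    eta = rng.standard_normal(u.shape)      # noise term added to the sheep
    u_next = u + dt * (a * (u - w) + b * v + c * eta)
    w_next = w + dt * g * (F - w)           # cursor eases towards the mouse position
    return u_next, w_next

# run a short illustrative simulation
rng = np.random.default_rng(0)
u, w = np.zeros((3, 2)), np.zeros(2)
for _ in range(100):
    u, w = step(u, w, F=np.array([0.5, 0.5]), a=1.0, b=-1.0, c=0.1,
                g=5.0, dt=0.01, rng=rng)
```

Note that the cursor $w$ is a low-pass filtered version of the raw mouse position $F$, with $g$ setting the tracking rate.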
The readme files for the games are currently minimal: they explain how to set each of the parameters described above, and how the game is executed. More step-by-step information will come before paper publication. In the meantime, functioning should be fairly clear from a quick look at the readme, and at ofApp.cpp in the src folder.
----------
# Data Analysis
The zip file also contains data from our experiments, both the raw data and the results of the multifractal analysis. Intermediary files have been excluded because of their considerable storage requirements, and because they are readily recreated using the code provided.
Code for data analysis is provided in two Jupyter notebooks per experiment: one for processing and one for analysis. These have been tested on Windows 10 and should transfer fairly painlessly to other operating systems, though minor troubleshooting may be required.
To explore our data post-WTMM analysis, and to run our final analysis routines, look at the file "...part_1_Analysis" in each experiment subfolder; nothing more should be required.
To rerun the WTMM processing, look at the file "...part_0_Processing" in each experiment subfolder. It will be necessary to compile and install [PhysioNet's multifractal analysis software][4] to run this analysis. The WTMM processing is very slow due to the need to create and process multiple "surrogate" copies of the original data, and to repeat all of this for 7 orders of analysis. In our case it took over 24 hours on a good machine (recent Intel i5, 16 GB RAM). If the method were adapted to online measurement, nearly all of this could be discarded: the surrogate analysis would become a preliminary calibration step, only one user signal would be processed, at a single order, and the need for repeated hard disk access could easily be circumvented.
----------
# Estimating the Singularity Spectrum with WTMM
In this work we measure changes in the multifractality of hand movement during tasks. In our paper we give a quite *high-level* overview of how we analyse hand movement to arrive at an estimate of multifractality, while the notebooks in this repository give the implementation of that process. This section is intended to fill the gap in the middle ground between those resources. It gives a more detailed step-by-step account, yet one that still aims to be an introduction and to guide engagement with our paper and the resources we provide, rather than to give the details needed to implement the algorithms directly. More mathematically dense and detailed accounts, which are better suited to that and deal with a wider range of issues, are listed in our paper. Our approach is to begin with quite high-level descriptions, including a number of technical terms which we describe only briefly at first, to serve as place-holders, and then to dig into those place-holder terms in progressive detail. This is our own preferred way of organising ideas, and for similarly inclined readers we hope it will help. For differently inclined readers, other tutorials which take different strategies are listed in our paper. Our personal experience has been that triangulating back and forth between multiple accounts of the same ideas is often what helps us learn best.
It is best to begin by describing the process in overview.
We arrive at a measure of the multifractality of hand movement, by
1. Recording the hand movement via an accelerometer
2. Performing some initial signal processing to prepare for analysis
3. Estimating the singularity spectrum of the signal (the set of exponents which describe the way variance in the signal is distributed over scales of analysis), using the Wavelet Transform Modulus Maxima method and the Legendre transform
4. Calculating the measure for the strength of multifractality in the signal: the “width” of the singularity spectrum (the difference between the largest and smallest exponents)
5. Validating that our results follow from multifractality and complexity in the signal, and not from other, linear, non-complex, sources - such as a shift in the Fourier spectrum, or a difference in the probability density function (things that could be due to simple factors such as straightforwardly faster or more jerky hand movement)
Parts 1 and 2 are sufficiently familiar that we believe they are easy to follow between the description in our paper and the code in our notebooks. This document focuses on steps 3-5: the estimation of the singularity spectrum, the derivation of the strength measure, and the validation of the result.
## Estimating the singularity spectrum of the signal with WTMM
To estimate the multifractal spectrum we use an approach based on the Wavelet Transform Modulus Maxima (WTMM) method, a commonly used method of multifractal analysis. Alongside our description of this process below, readers might also want to look at the descriptions by [Dotov et al.][1] and [Amaral][4]; our account here draws strongly on both.
### Multifractality and The Singularity Spectrum
First it is worth giving a recap of what multifractality is and how the singularity spectrum relates to it. This is already covered in our paper, but here it serves as a reminder, and perhaps also as a different view on the same concept to aid learning.
#### Scale
Multifractality, in a signal, means that the variance in the signal is distributed in a particular way over *scales* of analysis (in our case *time*-scales of analysis), so it is worth spending a while talking about scale. Some natural processes are scale-invariant, showing the same patterns of variation regardless of the scale of analysis; many others change significantly across scales, as different local factors become dominant, and these reveal interesting variation if we change the scale at which we analyse them. When looking at temperature variation in a room, for example, we may see quite different features if we move from analysing patterns at the scale of the seasons, to patterns at the scale of a week, to patterns at the scale of a day. This is because different processes drive temperature change at those different scales: the orbit of the planet, meteorological processes, and the rotation of the planet. In many natural phenomena there are complex interactions between processes at different scales - slower processes can influence faster processes, and the faster processes in turn influence the slower ones. This is often the case in chemical, biological and ecological systems, for example in chemical reactions which generate outputs that themselves influence how the reaction behaves. When this happens, the interaction between these processes shows up in the patterns at each scale: patterns of correlation and nesting arise. These patterns are *multifractal*: the variance - or fluctuation - in the signal is distributed such that fluctuations nest within one another, with variation in that nesting across scales. Analysing these patterns of variation can thus give clues to the way processes across different scales interact.
#### Singularities
The other key concept for understanding multifractality in a signal is the notion of a singularity. At root the concept is very simple: a singularity is a point of discontinuity, or a sharp turning point, in a waveform. In mathematical terms it is a place where the derivative of the waveform is not defined. In the figure below, three such singularities have been drawn over a waveform.
![Singularities in a waveform (representation)][5]
We can understand the strength of an individual singularity in a signal - how sharp it is - by fitting a curve to it (a Taylor series expansion). Such a curve will be defined as a function of time, and its strength will be defined as a power of time. For a weak singularity, with a smooth curve, this will be an integer power of time. For a stronger singularity, with a sharp discontinuity, it will be a non-integer power of time. The strength of a singularity at location $t_i$ is given by $f(t)=s(t-t_i)^h$, where $s$ is the scale and $h$ - the *Hölder exponent* - is the strength of the local trend. Three such singularities, with different scales and strengths, have been picked out on the diagram immediately above (note: this is drawn by eye and the numbers are guesstimates).
When a curve is fractal it will contain singularities at every point and scale, with low-amplitude singularities fitting within larger-amplitude singularities. On the curve above, if we look at the region picked out by the green ($h=0.3$) singularity, similar shapes appear nested within it at smaller scales. In a monofractal only one such singularity strength is found - the same value of $h$ holds at every point and scale of the waveform. In a multifractal, to capture the variation across time and across scales of analysis, multiple singularities, and therefore multiple values of $h$, are required - the singularity spectrum. This singularity spectrum is what we estimate using WTMM or MFDFA. Since in our work we have used WTMM, we describe that algorithm below.
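As a toy illustration (not drawn from our data), singularities of different Hölder strength can be generated directly from the expression above; the smaller the exponent, the sharper the cusp:

```python
import numpy as np

def cusp(t, t_i, h, s=1.0):
    # f(t) = s * |t - t_i|**h : a singularity of Hölder strength h at t_i
    return s * np.abs(t - t_i) ** h

t = np.linspace(-1.0, 1.0, 2001)
sharp = cusp(t, 0.0, 0.3)    # strong singularity: non-integer h < 1, sharp cusp
smooth = cusp(t, 0.0, 2.0)   # weak singularity: integer h, smooth and differentiable

# the local slope near t_i blows up for the sharp cusp but not the smooth one
print(np.abs(np.diff(sharp)).max() > np.abs(np.diff(smooth)).max())  # → True
```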
### The WTMM Analysis
#### Overview
Wavelet Transform Modulus Maxima (WTMM) is an algorithm for estimating the singularity spectrum of a signal. It uses a series of continuous wavelet transforms to isolate local singularities across multiple scales of analysis, while at the same time removing trends from the data that would skew results. We describe the process in overview below, before giving more detail on some aspects.
![Overview of WTMM][3]
1. First the WTMM algorithm is applied to the prepared signal from the accelerometer: a wavelet decomposition is carried out which quantifies the energy at every point in the waveform, at multiple scales of analysis, and a modulus maxima operation is then performed on this decomposition to identify the locations and strengths of singularities. This yields a “partition function” for the signal: a function which characterises the distribution of singularities at different scales of analysis.
2. Next, the (log-log) gradients of this partition function are taken. This gives a set of “multiscaling exponents”: real-valued numbers which describe how the distribution of singularities changes across scales of analysis.
3. Finally, this set of multiscaling exponents is subjected to a mathematical transformation - the Legendre transform - which gives us the final singularity spectrum: the estimate of the “Hölder exponents” in the signal. The Hölder exponents characterise the distribution of singularities in the signal across scales and across time.
4. The width of this spectrum of Hölder exponents (the distance between the smallest and largest Hölder exponents) is perhaps the most commonly used measure of the strength of multifractality. If there is a wider range of exponents, then the signal is more strongly multifractal. As the range of exponents tends towards a single value, the signal tends towards being monofractal (or, if that single value is an integer, non-fractal).
#### Wavelet Decomposition
The WTMM is built on the continuous wavelet transform (CWT), a formal tool related to the Fourier transform. The Fourier transform is not sensitive to scale and location, decomposing the signal into non-localised sinusoidal functions of theoretically infinite length. The CWT by contrast gives a transform of a signal which is both localised and scale-based - that is, descriptive of both the scales and the positions of features in the analysed signal. This scale-sensitive analysis is useful for understanding multifractality since, as discussed above, multifractality is defined by the way fluctuations in the signal nest and correlate across scales. Where the Fourier transform uses an infinite-length sine wave as its basic unit of analysis, the CWT achieves its sensitivity to position and scale by replacing this with a finite function which we call a *wavelet*. Since the wavelet is finite, it can be resized on its time axis to be descriptive of different time-scales. The CWT involves convolving this selected wavelet with the original signal, shifting the wavelet in time, and scaling its amplitude. This results in the time-scale decomposition of the waveform.
The wavelet that is chosen for this convolution process is important. [Dotov et al.][1] note that it is conventional to use a derivative of the Gaussian function. The order of this function determines which polynomial trends in the waveform will be rejected. For a Gaussian wavelet of order $o$ all polynomial trends below $o$ will be rejected. This can be useful since it removes spurious contributions to the multifractal spectrum from linear trends. In our paper we note that it is difficult to know in advance which trends should be rejected and describe an empirical approach to selecting the wavelet.
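A minimal numpy sketch of a CWT using a second-derivative-of-Gaussian ("Mexican hat") wavelet may make the shift-and-scale convolution concrete. This is an illustration, not the implementation used in our notebooks:

```python
import numpy as np

def mexican_hat(t, s):
    # second derivative of a Gaussian, stretched to time-scale s
    x = t / s
    return (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0) / np.sqrt(s)

def cwt(signal, scales):
    """Continuous wavelet transform: convolve the signal with the wavelet,
    shifted over time and stretched to each requested scale."""
    coeffs = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        half = int(np.ceil(5 * s))                      # truncate the wavelet support
        psi = mexican_hat(np.arange(-half, half + 1), s)
        coeffs[i] = np.convolve(signal, psi, mode="same")
    return coeffs

# a lone spike: the coefficient modulus is largest at the spike's location
sig = np.zeros(400)
sig[200] = 1.0
W = cwt(sig, scales=[2, 4, 8])
print(int(np.argmax(np.abs(W[0]))))  # → 200
```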
#### Finding and Summarising the Singularities
Having decomposed the waveform by this process of repeatedly scaling, shifting and fitting a wavelet to the signal, we now need to use these results to estimate the distribution of local singularities. Naturally some of these transformations of the wavelet will have proven a better fit to local features than others. The better the fit of the scaled wavelet to the waveform at a particular point, the higher the energy returned by the convolution operation. By taking the local modulus maxima of these energy values, we can find the locations and strengths of local singularities - this gives us a “partition function” describing the distribution of singularities in the waveform at a particular scale.
$$Z(q,s)=\sum_{i=1}^{i_{max}} \mid W(n_i , s ) \mid^q$$
$W$ is the time-scale decomposition, the $n_i$ are the time positions of the modulus maxima, and $s$ is the scaling factor for the wavelet. We repeat this process multiple times for different values of $q$ - a scaling exponent which biases the analysis towards smaller or larger fluctuations. Negative values of $q$ stress the scaling of small fluctuations whereas positive values stress large fluctuations. The range of $q$ values is another important parameter which must be selected for the analysis. Excessively large values of $q$ - either negative or positive - will result in the inclusion of scales which may not be relevant, which will both slow the analysis and may yield a poor estimate. In addition, large negative values of $q$ will result in errors, both due to the amplification of inadequate measurement at small scales and potentially due to inadequate numeric resolution (Ihlen and Vereijken, 2010).
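A sketch of the partition function computation, assuming a wavelet decomposition `W` as in the equation above (the values here are toy inputs, and a full WTMM implementation also chains maxima into lines across scales, which we omit):

```python
import numpy as np

def local_maxima(row):
    # indices where the modulus of the coefficients is a strict local maximum
    m = np.abs(row)
    return np.where((m[1:-1] > m[:-2]) & (m[1:-1] > m[2:]))[0] + 1

def partition_function(W, qs):
    """Z(q, s) = sum over the modulus maxima of |W(n_i, s)|**q.
    W: (n_scales, n_times) wavelet coefficients; returns (len(qs), n_scales)."""
    Z = np.empty((len(qs), W.shape[0]))
    for j, row in enumerate(W):
        mod = np.abs(row[local_maxima(row)])
        for i, q in enumerate(qs):
            Z[i, j] = np.sum(mod ** q)
    return Z

# toy decomposition: maxima of modulus 1 and 3 at one scale, 2 and 2 at another
W = np.array([[0.0, 1.0, 0.0, 3.0, 0.0],
              [0.0, 2.0, 0.0, 2.0, 0.0]])
print(partition_function(W, qs=[2]))  # → [[10.  8.]]
```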
Finally this partition function is transformed to give the multiscaling exponents $\tau(q)$ - the log-log gradients of $Z(q,s)$ against $s$ - and then the singularity spectrum, via the Legendre transform. The Hölder exponents $h(q)$ for each value of $q$ are given by $$\tau(q)=qh(q)-1$$ We then take the width of the singularity spectrum - the difference between the smallest and largest exponents - as the strength of multifractality in the signal.
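Using the relation above, the final width measure can be sketched as follows. Here `tau` stands in for the multiscaling exponents that would be obtained by fitting log-log slopes of $Z(q,s)$ against $s$; the values below are synthetic:

```python
import numpy as np

def spectrum_width(qs, tau):
    # h(q) from the relation tau(q) = q*h(q) - 1 stated in the text
    # (q = 0 must be excluded to avoid division by zero)
    qs = np.asarray(qs, dtype=float)
    h = (np.asarray(tau, dtype=float) + 1.0) / qs
    return h.max() - h.min()

qs = np.array([-3.0, -2.0, -1.0, 1.0, 2.0, 3.0])

# monofractal: tau is linear in q, so h(q) is constant and the width is ~0
w_mono = spectrum_width(qs, 0.7 * qs - 1.0)

# multifractal: tau curves, h(q) varies across q, and the width is positive
w_multi = spectrum_width(qs, 0.7 * qs - 0.05 * qs ** 2 - 1.0)
print(w_mono < 1e-9, round(w_multi, 3))
```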
## Validating the Source of the Multifractal Spectrum
Ihlen and Vereijken (2010) note that there are limitations to the results of the WTMM-based process above. They note that while the estimated multifractal spectra *may* be indicators of multiplicative interactions across scales, and thereby of genuine multifractality, they may also follow from other, independent features of the measured signal: a non-Gaussian probability density function (NGPDF), or scaling in the Fourier spectrum of the signal. In the case of our data these could result from increased speed of hand movement, or changes in jerkiness. As such they argue that it is necessary to take steps to verify that a particular estimated spectrum does indeed result from autocorrelation in the measured signal, and that it can therefore be taken as a reliable indicator of interaction dominance.
The validation technique, which we reuse directly from Ihlen and Vereijken, proceeds as follows. We begin with a recorded signal X, and a multifractal spectrum s(X) estimated from X using the WTMM technique described above. We then generate 30 "surrogates" of X: versions of that signal in which the Fourier spectrum and NGPDF are preserved, while the interactions between the multiple scales are eliminated. These are generated using a technique called the Iterated Amplitude Adjusted Fourier Transform (IAAFT; Schreiber and Schmitz, 1996). Having generated these surrogates, each is also analysed via WTMM, to estimate a spectrum width for each surrogate. Finally we calculate the 95% confidence interval of these widths and compare it to the width of the original spectrum s(X). The original spectrum width is judged to be significantly due to interaction dominance in the underlying system if that width lies outside the 95% confidence interval of the surrogate population (in which, to recap, autocorrelation structures have been disrupted).
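A minimal numpy sketch of IAAFT surrogate generation (a simplified version of the published algorithm: convergence checking and the choice of final rank- versus spectrum-matching step are omitted, and the fixed iteration count is an assumption):

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """One IAAFT surrogate of x: preserves the amplitude (Fourier) spectrum
    and the value distribution of x, while scrambling its phase structure."""
    rng = np.random.default_rng(seed)
    amp = np.abs(np.fft.rfft(x))      # target Fourier amplitudes
    sorted_x = np.sort(x)             # target value distribution
    y = rng.permutation(x)            # random shuffle as a starting point
    for _ in range(n_iter):
        # impose the target amplitude spectrum, keeping the current phases
        phases = np.angle(np.fft.rfft(y))
        y = np.fft.irfft(amp * np.exp(1j * phases), n=len(x))
        # restore the original value distribution by rank ordering
        y = sorted_x[np.argsort(np.argsort(y))]
    return y

x = np.sin(np.linspace(0.0, 20.0, 256)) ** 3
s = iaaft(x)
print(np.allclose(np.sort(s), np.sort(x)))  # → True: distribution preserved
```

In the validation step, each of the 30 surrogates would be run through the same WTMM pipeline, and the original spectrum width compared against the 95% confidence interval of the surrogate widths.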
--------------------------------------------------
[1]: https://www.sciencedirect.com/science/article/abs/pii/S0732118X16300691 "Cognitive and movement measures reflect the transition to presence-at-hand"
[2]: https://openframeworks.cc "OpenFrameworks"
[3]: https://files.osf.io/v1/resources/2hm9u/providers/osfstorage/5f6a243a9e9a3d012e6e4a05?mode=render
[4]: https://physionet.org/content/multifractal/1.0.0/
[5]: https://files.osf.io/v1/resources/2hm9u/providers/osfstorage/5f6a23bf46080901211ae03a?mode=render