# README
This page contains supplementary material for the following two articles.
- Lonni Besançon, Amir Semmo, David Biau, Bruno Frachet, Virginie
Pineau, El Hadi Sariali, Rabah Taouachi, Tobias Isenberg, and Pierre
Dragicevic. [**Reducing Affective Responses to Surgical Images
through Color Manipulation and
Stylization**](https://hal.inria.fr/hal-01795744/document).
Proceedings of the Joint Symposium on Computational Aesthetics,
Sketch-Based Interfaces and Modeling, and Non-Photorealistic
Animation and Rendering, August 2018, Victoria, Canada, pp. 4:1--4:13.
- Lonni Besançon, Amir Semmo, David Biau, Bruno Frachet, Virginie
Pineau, El Hadi Sariali, Marc Soubeyrand, Rabah Taouachi, Tobias
Isenberg, and Pierre Dragicevic. [**Reducing Affective Responses to
Surgical Images and Videos through Stylization**](https://hal.inria.fr/hal-02381513/file/besancon-surgery-inpress.pdf)
## Authors and licenses
* Experiment code by Lonni Besançon and Pierre Dragicevic, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
* Abstool © 2018 Amir Semmo and Jan Eric Kyprianidis.
* R code by Pierre Dragicevic, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
* Ethics documents by Pierre Dragicevic and Lonni Besançon, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
* picture6.jpg, picture7.jpg, picture8.jpg, picture9.jpg, picture10.jpg, 61.jpg, 71.jpg, 81.jpg, 91.jpg, 101.jpg, example.jpg are public domain.
* Lasagna photos by Pierre Dragicevic and Yvonne Jansen, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
* All other material by Lonni Besançon, Pierre Dragicevic, Amir Semmo, and Tobias Isenberg, [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
## Important note
The pictures we used in our experiment were taken from the IAPS and NAPS research databases, and we are not allowed to redistribute them publicly. In this supplemental material, we use surrogate pictures instead. If you want to try the original experiment or need to replicate it, you will need to obtain the IAPS and NAPS databases, extract the pictures, and run them through our filters. See **How to regenerate the original experiment stimuli** below. Please contact us if you have any problem.
Similarly, the pictures used for our two sets of interviews are taken from real surgical procedures, and we are not allowed to redistribute them publicly. In the directory containing the code we used to run the second set of interviews with three surgeons, we replaced the original images and videos. We also reduced the material to a single example video and a single example image to save storage space. The code indicates which variable to change in order to obtain the original program we used during the interviews; see also **How to run the program used for the second set of interviews** below.
Our preregistered analysis is available at [osf.io/34vzj][1].
## OSF Storage Content
Directory | Content
------------ | -------------
``surgeon interviews/`` | Notes and data from interviews with surgeons
``experiment/`` | Material for the controlled experiment (contains the subdirectories listed below)
``ethics documents/`` | Participant information sheet, consent form, etc.
``pictures/`` | Surrogates for the 10 pictures used in the experiment
``stimuli/`` | The processed pictures
``experiment code/`` | The HTML and JavaScript code for running the experiment
``R code/`` | Experiment data + code for analyzing the data
``R code (pregistered)/`` | Older version of the above directory, see https://osf.io/34vzj
``abstool/`` | Image processing tool for the techniques used in the experiment
``surgeon interviews 2/`` | Notes and data from the second series of interviews with surgeons (journal article only)
## How to run the R code
Make sure you have RStudio installed (https://www.rstudio.com/). Import the ``R code`` directory into RStudio by creating a new project from an existing directory, then open the file ``analysis.R`` in the editor and press ``Shift+Cmd+S`` (``Shift+Ctrl+S`` on Windows/Linux) to source the script.
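If you prefer the command line, the script can also be run with ``Rscript`` (a minimal sketch, untested, assuming R is on your PATH and that ``analysis.R`` does not rely on RStudio-specific features):
```bash
cd "R code"          # the directory shipped with the OSF material
Rscript analysis.R   # run the analysis script non-interactively
```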
## How to run the experiment
If you only want to view the instructions, you can open the ``experiment code/index.html`` file directly in a browser. However, you need to set up a web server to see the experiment stimuli correctly. On macOS, you can install a simple server as follows (this requires Node.js/npm):
```bash
sudo npm install -g http-server
```
Then, cd to the ``experiment code`` directory and type:
```bash
http-server -c-1
```
Then, open the URL http://127.0.0.1:8080/ in your browser (preferably Google Chrome, so that the final CSV of results can be downloaded), and enter a number between 1 and 30 as the participant ID. At the end of the experiment, a CSV file will be generated. This CSV file can be read by the analysis scripts shared in the ``R code`` directory.
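If npm is not available, Python's built-in web server is an alternative (a sketch, assuming Python 3 is installed; unlike ``http-server -c-1`` it does not explicitly disable caching, so force-reload the page if you change any files):
```bash
# Serve the "experiment code" directory on the same port as above
cd "experiment code"
python3 -m http.server 8080
```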
Notes:
- The experiment has been written for a Retina display. It will work on a non-Retina display, but the images will be shown at half their resolution.
- Remember that these are not the actual stimuli from our experiment. See the note at the top of this page.
## How to regenerate the original experiment stimuli
### Step 1 — Getting the NAPS and IAPS pictures
You first need to obtain the NAPS and IAPS picture sets. The process takes a few days for NAPS and about a month for IAPS. You need to be affiliated with an academic institution.
IAPS picture set:
1. Go to http://csea.phhp.ufl.edu/media/requestform.html and fill in your info,
2. Wait for their email, print it, sign it, scan it and send it back to them,
3. About a month later, you will receive an email with a URL, a username, and a password to download the picture set. These credentials are valid for one week.
NAPS picture set:
1. Go to http://exp.lobi.nencki.gov.pl/dnaps and fill in your info,
2. A few days later, you will receive a download link that will remain valid for one year.
### Step 2 — Preparing the pictures
Once you have downloaded the NAPS and IAPS picture sets, find the following .jpg files, copy them into a temporary directory, and rename them as follows (a scripted version of this step is sketched right after the table):
From | To
------------ | -------------
``NAPS_H/People_116_h.jpg`` | ``example.jpg``
``NAPS_H/People_221_h.jpg`` | ``picture1.jpg``
``IAPS 1-20 Images/3213.jpg`` | ``picture2.jpg``
``NAPS_H/People_216_h.jpg`` | ``picture3.jpg``
``NAPS_H/People_202_h.jpg`` | ``picture4.jpg``
``IAPS 1-20 Images/3212.jpg`` | ``picture5.jpg``
``NAPS_H/People_044_h.jpg`` | ``picture6.jpg``
``NAPS_H/People_176_h.jpg`` | ``picture7.jpg``
``NAPS_H/People_179_h.jpg`` | ``picture8.jpg``
``NAPS_H/People_180_h.jpg`` | ``picture9.jpg``
``NAPS_H/People_192_h.jpg`` | ``picture10.jpg``
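If you prefer to script this copy-and-rename step, here is a minimal sketch; it assumes the NAPS and IAPS archives are extracted in the current directory under the directory names shown in the table, and uses a temporary directory called ``tmp/`` (an arbitrary name):
```bash
mkdir tmp
cp "NAPS_H/People_116_h.jpg"   tmp/example.jpg
cp "NAPS_H/People_221_h.jpg"   tmp/picture1.jpg
cp "IAPS 1-20 Images/3213.jpg" tmp/picture2.jpg
cp "NAPS_H/People_216_h.jpg"   tmp/picture3.jpg
cp "NAPS_H/People_202_h.jpg"   tmp/picture4.jpg
cp "IAPS 1-20 Images/3212.jpg" tmp/picture5.jpg
cp "NAPS_H/People_044_h.jpg"   tmp/picture6.jpg
cp "NAPS_H/People_176_h.jpg"   tmp/picture7.jpg
cp "NAPS_H/People_179_h.jpg"   tmp/picture8.jpg
cp "NAPS_H/People_180_h.jpg"   tmp/picture9.jpg
cp "NAPS_H/People_192_h.jpg"   tmp/picture10.jpg
```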
You then need to resize all 11 images to 1024x768. You can proceed like this:
1. Open a Cygwin or Unix shell and make sure you have ImageMagick's ``mogrify`` command installed. If not, install ImageMagick from http://www.imagemagick.org/
2. Move to the directory where the 11 images are located and type the following two commands:
```bash
mkdir resized
mogrify -resize 1024x768 -quality 90 -path resized *.jpg
```
You will see a new ``resized`` subdirectory containing the same images resized to 1024x768.
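To double-check the result, you can print the dimensions of the resized files with ImageMagick's ``identify`` command (installed together with ``mogrify``):
```bash
# Print "filename widthxheight" for every resized image
identify -format "%f %wx%h\n" resized/*.jpg
```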
### Step 3 — Processing the pictures
Here you will need to use our custom image processing tool, which is currently only available as an MS Windows binary. To be able to run it, you will need:
* Windows 7 or newer,
* An NVIDIA graphics card with CUDA support and Compute Capability 3.0+ (i.e., a Kepler GPU or newer, see https://en.wikipedia.org/wiki/CUDA),
* An up-to-date NVIDIA graphics driver that supports CUDA.
Go to the ``experiment/abstool/`` directory and run ``ozviewer.exe``. To batch-process all of the previously resized images, proceed as follows:
1. Use ``File`` -> ``Batch by Setting List``. File dialogs will appear that will ask you to specify the following files/directories:
* ``Open Settings List``: Select ``experiment/abstool/settings/settings_batch_experiment.txt``
* ``Choose input directory``: Select the directory that includes the resized images from Step 2.
* ``Choose output directory``: Select an output directory where the processed images should be saved. This can also be the same folder as the input directory (input images will NOT be overwritten).
2. Wait until all processed images are saved to the specified output directory.
In the experiment, we also provide two example images: one is unprocessed, while the other is processed with a technique that is not used later in the experiment, called the "extended difference-of-Gaussians" effect.
To perform the example image processing with the extended difference-of-Gaussians effect, repeat steps 1-2 but using ``experiment/abstool/settings/settings_batch_example.txt`` as the settings list.
### Step 4 — Renaming and copying the files
Almost done! Now you just need to rename all the files.
First, gather all unprocessed and processed images in the same folder.
If you have followed all of the previous steps, you should have a directory that looks like this (ordered by name):
* ``example-xdog_4.jpg``
* ``example.jpg``
* ``picture1-apparentGray_1.jpg``
* ``picture1-flowabs_1.jpg``
* ``picture1-hue_shift_2.jpg``
* ``picture1-ivacef_1.jpg``
* ``picture1-msakf_1.jpg``
* ``picture1-ssia_2.jpg``
* ``picture1.jpg``
* [and so on with ``picture2`` to ``picture10``]
If that is the case, you can start renaming the images so that they can be used by the experiment code.
1. Open a Cygwin or Unix shell and make sure you have the Perl ``rename`` command installed. If not, you can install it on macOS with Homebrew using the two following commands (this may take a while; if the Ruby-based installer below no longer works, see https://brew.sh for the current install command):
```bash
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install rename
```
2. Move to the directory that contains all images, and copy/paste the following commands in your terminal:
```bash
# In the replacements, \x31..\x37 are the ASCII codes of the digits 1..7, which encode
# the technique: 2 = apparentGray, 3 = flowabs, 4 = hue_shift, 5 = ivacef, 6 = msakf,
# 7 = ssia; the last rule tags the remaining unprocessed pictureN.jpg files with 1.
rename 's/picture(\d+)-apparentGray_1/$1\x32/' *.jpg
rename 's/picture(\d+)-flowabs_1/$1\x33/' *.jpg
rename 's/picture(\d+)-hue_shift_2/$1\x34/' *.jpg
rename 's/picture(\d+)-ivacef_1/$1\x35/' *.jpg
rename 's/picture(\d+)-msakf_1/$1\x36/' *.jpg
rename 's/picture(\d+)-ssia_2/$1\x37/' *.jpg
rename 's/picture(\d+)\./$1\x31./' *.jpg
```
Once this is done, your directory should contain:
* ``11.jpg``
* ``12.jpg``
* ``13.jpg``
* ``14.jpg``
* ``15.jpg``
* ``16.jpg``
* ``17.jpg``
* […and so on until ``107.jpg``]
* ``example-xdog_4.jpg``
* ``example.jpg``
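As a quick sanity check, you can count the files (assuming the directory contains only the images listed above, the total should be 10 pictures x 7 versions plus the two example images):
```bash
ls *.jpg | wc -l   # expect 72
```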
Now move or copy all those images to ``/experiment/experiment code/images``. The directory already contains some images that will be replaced, plus other images that do not need to be changed: ``Mask.png``, ``Screen1.png``, ``Screen2.png``, and ``Screen3.png``.
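For example (a sketch; adjust the destination to wherever you placed the OSF material):
```bash
# Copy the renamed stimuli and the two example images into the experiment's images directory
cp *.jpg "/path/to/experiment/experiment code/images/"
```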
You can now run the experiment as indicated in **How to run the experiment**.
## How to run the program used for the second set of interviews
You need to set up a web server to see the stimuli correctly (as for the main experiment, this requires Node.js/npm). On macOS, you can do as follows:
```bash
sudo npm install -g http-server
```
Then, cd to the ``surgeon interview code`` directory and type:
```bash
http-server -c-1
```
Then, open the URL http://127.0.0.1:8080/ in your browser (preferably Google Chrome, so that the final CSV of results can be downloaded), and enter 1 as the participant ID. At the end of the session, a CSV file will be generated.
Notes:
- The program has been written for a Retina display. It will work on a non-Retina display, but the images will be shown at half their resolution.
- Remember that these are not the actual images and videos from our interviews. See the note at the top of this page.
- Remember that, to save storage space, we only use one example video and one example image here. If you want to obtain the original program, change ``var nbOfVideos = 1`` to ``var nbOfVideos = 2`` in ``indexVideos.html``, and ``var nbOfImages = 1`` to ``var nbOfImages = 2`` in ``index.html``.
## Arkangel: the Google Chrome Extension
Based on our initial work, we have implemented a Google Chrome extension that automatically processes images while browsing the web. The code for this extension is [available on GitHub](https://github.com/lonnibesancon/Arkangel).
[1]: https://osf.io/34vzj/