We have a set of Neuropixels recordings in which the location of each recording site was histologically validated by _post-hoc_ track reconstruction. <br>
[on DENMANLAB NAS: /s1/localization/M310016]
Using these data as a training set, we have tested several classification methods (linear SVM, deep networks) with the goal of identifying the brain region each electrode is in.
From the data, we extract a multi-dimensional vector of properties at each recording site:
![schematic param][1]
| Property | Band |
| --------------- | ---- |
| Gamma power | LFP |
| Alpha power | LFP |
| Delta power | LFP |
| Beta power | LFP |
| Event rate | AP |
| Event amplitude | AP |
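
As a rough illustration (not the lab pipeline itself), these properties might be computed per channel along the following lines. The band edges, Welch parameters, and event threshold are assumptions:

```python
import numpy as np
from scipy.signal import welch

# Canonical band edges in Hz -- an assumption; the project's exact
# definitions may differ.
BANDS = {"delta": (1, 4), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 80)}

def lfp_band_powers(lfp, fs_lfp=2500.0):
    """Mean PSD power in each band for one channel's LFP trace."""
    freqs, psd = welch(lfp, fs=fs_lfp, nperseg=int(2 * fs_lfp))
    return {f"{name}_power": psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def ap_event_features(ap, fs_ap=30000.0, n_sd=4.0):
    """Event rate (Hz) and mean amplitude from negative threshold crossings."""
    thresh = n_sd * np.std(ap)
    # sample indices where the trace crosses below -thresh
    crossings = np.flatnonzero((ap[:-1] > -thresh) & (ap[1:] <= -thresh))
    rate = crossings.size / (ap.size / fs_ap)
    amp = float(np.abs(ap[crossings + 1]).mean()) if crossings.size else 0.0
    return {"event_rate": rate, "event_amplitude": amp}
```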
We have a pipeline that goes from a path to the raw data to a `pandas` DataFrame with the properties in the table above (Phil Lee's notebook repo).
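
A hypothetical skeleton of that path-to-DataFrame step, using the helpers sketched above; `load_lfp`/`load_ap` are placeholder loaders, not functions from the actual repo:

```python
import pandas as pd

def build_feature_table(path, n_channels=384):
    """Assemble one row of properties per recording channel."""
    rows = []
    for ch in range(n_channels):
        lfp = load_lfp(path, ch)  # placeholder: one channel's LFP trace
        ap = load_ap(path, ch)    # placeholder: one channel's AP-band trace
        rows.append({"channel": ch,
                     **lfp_band_powers(lfp),
                     **ap_event_features(ap)})
    return pd.DataFrame(rows).set_index("channel")
```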
To date, all approaches have classified on a per-channel basis, either using linear classification:
@[osf](arzvu)
or a [deep network][2]:
@[osf](mv5ef)
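
For reference, a minimal per-channel baseline along these lines, assuming `df` is the feature table above with an added `region` column holding the histology labels (this is a sketch, not the exact code behind the linked results):

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

FEATURES = ["gamma_power", "alpha_power", "delta_power", "beta_power",
            "event_rate", "event_amplitude"]
X = df[FEATURES].to_numpy()
y = df["region"].to_numpy()  # histologically validated label per channel

# Standardize the features, then fit a linear SVM; score with 5-fold CV.
clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10_000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"per-channel accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```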
We would like to explore classification that takes multiple channels, and the distance between them, into account. [i.e., spatial information]
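
One hedged sketch of what that could look like (an assumption, not a settled approach): concatenate each channel's features with those of its `k` nearest neighbors along the probe, plus the signed depth offsets to those neighbors, and feed the windows to any of the classifiers above.

```python
import numpy as np

def windowed_features(X, depths, k=2):
    """X: (n_channels, n_features); depths: position along the probe (um).
    Returns (n_channels, (2k+1)*n_features + 2k): each row holds a channel's
    features, its 2k neighbors' features, and the signed depth offsets."""
    order = np.argsort(depths)
    Xs, ds = np.asarray(X)[order], np.asarray(depths)[order]
    n = len(ds)
    rows = []
    for i in range(n):
        # clip at the probe ends, duplicating the edge channel as padding
        idx = np.clip(np.arange(i - k, i + k + 1), 0, n - 1)
        offsets = np.delete(ds[idx] - ds[i], k)  # distances to the 2k neighbors
        rows.append(np.concatenate([Xs[idx].ravel(), offsets]))
    return np.vstack(rows), order
```

The same windows could instead be reshaped to `(n_channels, 2k+1, n_features)` and passed to a 1-D convolutional network over the channel axis, which is one natural deep-network analogue.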
<br>
<br>
<br>
**TODO: a set of recordings from more brain areas with DiI tracks, aligned into HERBS.**
This project has been developed by Jeffery Ni (while at UW, Allen Institute for Brain Science), Aidan Armstrong (while at CU Anschutz), and Phil Lee (while at Dartmouth).
[1]: https://files.osf.io/v1/resources/bvfj5/providers/osfstorage/6363d00f3b5b3900b6b3dc22?mode=render
[2]: https://docs.google.com/presentation/d/1n0w6pWE-nA8bbMyNI6H83gfrY8j527PucjNFSAdHFEE/edit#slide=id.p