Commit

I think final version for revision 2

thesamovar committed Jan 23, 2025
1 parent ed4995f commit 1a88ef9
Showing 6 changed files with 27 additions and 40 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/deploy-myst.yml
@@ -38,7 +38,7 @@ jobs:
- name: Install Typst for PDF builds
uses: typst-community/setup-typst@v3
- name: Build HTML Assets
run: myst build --html --typst
run: myst build --html # --typst # removed --typst for now because it doesn't fully support everything
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
22 changes: 17 additions & 5 deletions ReadMe.md
@@ -14,16 +14,28 @@ This is the repository behind the _SNN Sound Localization_ project. Check out [t

To edit and build locally, [install node and myst following these instructions](https://mystmd.org/guide/quickstart).

To build the paper, run the following from the root directory:
To edit locally:

```
myst build paper/paper.md --pdf
myst start
```

To edit locally:
Then click the link in the terminal (probably ``http://localhost:3000``).

### LaTeX build

At the moment, the Typst build has some problems with equations. Use

```
myst start
myst build paper/paper.md --tex
```

Then click the link in the terminal (probably ``http://localhost:3000``).
and then edit the generated LaTeX in ``_build/exports/paper_tex`` as follows:

1. Change the first line to ``\documentclass[a4paper,11pt]{article}``. We export with the book preprint template from mystmd because it works best at the time of writing, but we don't want a book.
2. Remove the ``\frontmatter`` and ``\mainmatter`` commands, since the document is not a book.
3. If desired, remove the formatting from ``\title`` and ``\author``.
4. In ``\author{}``, replace commas and ``and`` with ``\and`` so the names spread across multiple lines correctly.
5. For each section's author table, add ``[!h]`` to the end of the table declaration; otherwise LaTeX puts it at the top of the page.

It should then build with a standard ``pdflatex-bibtex-pdflatex*2`` build.
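Steps 1 and 2 above are mechanical and can be scripted. A minimal sketch, assuming the export lands at ``_build/exports/paper_tex/paper.tex`` (the path, and GNU sed syntax, are assumptions):

```shell
#!/bin/sh
# Sketch of steps 1-2 above. Assumes GNU sed; on macOS/BSD use
# `sed -i ''` instead of `sed -i`. The output path is an assumption.
TEX=_build/exports/paper_tex/paper.tex

# Step 1: replace the first line with the article documentclass.
sed -i '1s/.*/\\documentclass[a4paper,11pt]{article}/' "$TEX"

# Step 2: delete the book-only \frontmatter and \mainmatter commands.
sed -i '/^\\frontmatter$/d; /^\\mainmatter$/d' "$TEX"
```

Steps 3–5 involve judgement calls (author formatting, float placement), so they are left as manual edits.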
35 changes: 5 additions & 30 deletions paper/paper.bib
@@ -3,7 +3,6 @@ @article{Cao2007
title = {Voltage-sensitive conductances of bushy cells of the Mammalian ventral cochlear nucleus},
journal = {Journal of Neurophysiology},
year = {2007},
month = {Jun},
volume = {97},
number = {6},
pages = {3961--3975},
@@ -21,7 +20,7 @@ @book{Gelfand2010
title = {Hearing : an introduction to psychological and physiological acoustics},
year = {2010},
}
@misc{Grothe2014,
@article{Grothe2014,
abstract = {Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.},
author = {Benedikt Grothe and Michael Pecka},
doi = {10.3389/fncir.2014.00116},
@@ -31,7 +30,6 @@ @misc{Grothe2014
keywords = {Archosaurs,Binaural hearing,Birds,Evolution,GABA,Glycine,LSO,MSO},
month = {10},
pmid = {25324726},
publisher = {Frontiers Media S.A.},
title = {The natural history of sound localization in mammals-a story of neuronal inhibition},
volume = {8},
year = {2014},
@@ -45,7 +43,6 @@ @article{Jeffress1948
DOI = {10.1037/h0061495},
number = {1},
journal = {Journal of Comparative and Physiological Psychology},
publisher = {American Psychological Association (APA)},
author = {Jeffress, Lloyd A.},
year = {1948},
pages = {35–39}
@@ -58,7 +55,6 @@ @article{Myoga2014
journal = {Nature Communications},
month = {5},
pmid = {24804642},
publisher = {Nature Publishing Group},
title = {Glycinergic inhibition tunes coincidence detection in the auditory brainstem},
volume = {5},
year = {2014},
@@ -130,12 +126,11 @@ @article{McAlpine2003
month = {7},
pages = {347-350},
pmid = {12850430},
publisher = {Elsevier Ltd},
title = {Sound localization and delay lines - Do mammals fit the model?},
volume = {26},
year = {2003},
}
@misc{Fettiplace2023,
@article{Fettiplace2023,
abstract = {A ubiquitous feature of the auditory organ in amniotes is the longitudinal mapping of neuronal characteristic frequencies (CFs), which increase exponentially with distance along the organ. The exponential tonotopic map reflects variation in hair cell properties according to cochlear location and is thought to stem from concentration gradients in diffusible morphogenic proteins during embryonic development. While in all amniotes the spatial gradient is initiated by sonic hedgehog (SHH), released from the notochord and floorplate, subsequent molecular pathways are not fully understood. In chickens, BMP7 is one such morphogen, secreted from the distal end of the cochlea. In mammals, the developmental mechanism differs from birds and may depend on cochlear location. A consequence of exponential maps is that each octave occupies an equal distance on the cochlea, a spacing preserved in the tonotopic maps in higher auditory brain regions. This may facilitate frequency analysis and recognition of acoustic sequences.},
author = {Robert Fettiplace},
doi = {10.1002/bies.202300058},
@@ -145,7 +140,6 @@ @misc{Fettiplace2023
keywords = {BMP7,amniote,cochlea,hair cell,morphogen,resonance},
month = {8},
pmid = {37329318},
publisher = {John Wiley and Sons Inc},
title = {Cochlear tonotopy from proteins to perception},
volume = {45},
year = {2023},
@@ -174,7 +168,6 @@ @software{spreizer_2022_6368024
title = {NEST 3.3},
month = may,
year = 2022,
publisher = {Zenodo},
version = {3.3},
doi = {10.5281/zenodo.6368024},
url = {https://doi.org/10.5281/zenodo.6368024}
@@ -189,7 +182,6 @@ @article{Yin2019
month = {10},
pages = {1503-1575},
pmid = {31688966},
publisher = {Wiley-Blackwell Publishing Ltd},
title = {Neural mechanisms of binaural processing in the auditory brainstem},
volume = {9},
year = {2019},
@@ -234,7 +226,6 @@ @misc{paszke_pytorch_2019
doi = {10.48550/arXiv.1912.01703},
abstract = {Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.},
urldate = {2024-06-20},
publisher = {arXiv},
author = {Paszke, Adam and Gross, Sam and Massa, Francisco and Lerer, Adam and Bradbury, James and Chanan, Gregory and Killeen, Trevor and Lin, Zeming and Gimelshein, Natalia and Antiga, Luca and Desmaison, Alban and Köpf, Andreas and Yang, Edward and DeVito, Zach and Raison, Martin and Tejani, Alykhan and Chilamkurthy, Sasank and Steiner, Benoit and Fang, Lu and Bai, Junjie and Chintala, Soumith},
month = dec,
year = {2019},
@@ -249,7 +240,6 @@ @misc{zenke_spytorch_2019
url = {https://zenodo.org/records/3724018},
abstract = {Tutorial-style PyTorch code packed into a bunch of Jupyter notebooks that give you everything you need to start with surrogate gradient learning in spiking neural networks.},
urldate = {2024-06-20},
publisher = {Zenodo},
author = {Zenke, Friedemann},
month = mar,
year = {2019},
@@ -276,20 +266,18 @@ @Article{ harris2020array
number = {7825},
pages = {357--362},
doi = {10.1038/s41586-020-2649-2},
publisher = {Springer Science and Business Media {LLC}},
url = {https://doi.org/10.1038/s41586-020-2649-2}
}
@Article{Hunter2007,
Author = {Hunter, J. D.},
Title = {Matplotlib: A 2D graphics environment},
Journal = {Computing in Science \& Engineering},
Journal = {Computing in Science and Engineering},
Volume = {9},
Number = {3},
Pages = {90--95},
abstract = {Matplotlib is a 2D graphics package used for Python for
application development, interactive scripting, and publication-quality
image generation across user interfaces and operating systems.},
publisher = {IEEE COMPUTER SOC},
doi = {10.1109/MCSE.2007.55},
year = 2007
}
@@ -333,7 +321,6 @@ @misc{beniaguev_dendro_plexing_2024
abstract = {A cortical neuron typically makes multiple synaptic contacts on the dendrites of its postsynaptic target neuron. The functional implications of this apparent redundancy are unclear. Due to dendritic cable filtering, proximal dendritic synapses generate brief somatic postsynaptic potentials (PSPs) whereas distal synapses give rise to broader PSPs. Consequently, with multiple synaptic contacts, a single presynaptic spike results in a somatic PSP composed of multiple temporal profiles. We developed a “Filter-and-Fire” (F\&F) neuron model that incorporates multiple contacts and cable filtering; it demonstrates threefold increase in memory capacity as compared to a leaky Integrate-and-Fire (I\&F) neuron, when trained to emit precisely timed spikes for specific input patterns. Furthermore, the F\&F neuron can learn to recognize spatio-temporal input patterns, e.g., MNIST digits, where the I\&F model completely fails. We conclude that “dendro-plexing” single input spikes by multiple synaptic contacts enriches the computational capabilities of cortical neurons and can dramatically reduce axonal wiring.},
language = {en},
urldate = {2024-06-22},
publisher = {bioRxiv},
author = {Beniaguev, David and Shapira, Sapir and Segev, Idan and London, Michael},
month = feb,
year = {2024},
@@ -383,11 +370,10 @@ @article{LAJ1948
}
@article{KLWH2001,
author = {Richard Kempter and Christian Leibold and Hermann Wagner and and J. Leo van Hemmen},
author = {Richard Kempter and Christian Leibold and Hermann Wagner and J. Leo van Hemmen},
title = {Formation of temporal-feature maps by axonal propagation of synaptic learning},
journal = {J. Comp. Physiol.},
volume = "98",
number = "",
pages = "4166-71",
year = "2001",
DOI = "https://doi.org/10.1073/pnas.061369698"
@@ -435,7 +421,7 @@ @article{HKTI2016
DOI = "https://doi.org/10.1371/journal.pone.0146044"
}
@article{MAVT2017,
author = {Mojtaba Madadi Asl and Alireza Valizadeh and Peter A. Tass },
author = {Mojtaba Madadi Asl and Alireza Valizadeh and Peter A. Tass},
title = {Dendritic and Axonal Propagation Delays Determine Emergent Structures of Neuronal Networks with Plastic Synapses},
journal = {Sci. Rep.},
volume = "7",
@@ -477,17 +463,6 @@ @article{TM2017
DOI = "https://doi.org/10.3389/fncom.2017.00104"
}
@article{TM2017,
author = {Takashi Matsubara},
title = {Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns},
journal = {Front. Comput. Neurosci.},
volume = "11",
number = "",
pages = "104",
year = "2017",
DOI = "https://doi.org/10.3389/fncom.2017.00104"
}
@article{ITT2023,
author = {Ismail Khalfaoui-Hassani and Thomas Pellegrini and Timothée Masquelier},
title = {Dilated convolution with learnable spacings},
5 changes: 3 additions & 2 deletions paper/paper.md
@@ -98,9 +98,10 @@ authors:

license: CC-BY-4.0
export:
- format: docx
- format: tex
template: lapreprint
template: plain_latex_book
# - format: tex+pdf
# template: plain_latex_book
- format: pdf
id: paper
template: lapreprint-typst
2 changes: 1 addition & 1 deletion paper/sections/basicmodel/basicmodel.md
@@ -83,7 +83,7 @@ Hidden neuron firing rates, with the same setup as in [](#confusion-matrix).
Analysis of the trained networks shows that they use an unexpected strategy. Firstly, the hidden layer neurons might have been expected to behave like the encoded neurons in Jeffress' place theory, and like recordings of neurons in the auditory system, with a low baseline response and an increase for a preferred phase difference (best phase). However, they very reliably find the inverse strategy: a high baseline response with a reduced response at a least preferred phase difference ({ref}`tuning-curves-hidden`). Note that the hidden layer neurons have been reordered by their least preferred delay to highlight this structure. These shapes are consistently learned, but the ordering is random. By contrast, the output neurons have the expected shape ({ref}`tuning-curves-output`). Interestingly, the tuning curves are much flatter at the extremes close to an IPD of $\pm \pi/2$. We can get further insight into the strategy found by plotting the weight matrices $W_{ih}$ from input to hidden layer, and $W_{ho}$ from hidden layer to output, as well as their product $W_{io}=W_{ih}\cdot W_{ho}$, which would give the input-output matrix for a linearised version of the network ({ref}`basic-weights`).
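The linearised input-output matrix is just a matrix product; a minimal sketch with random placeholder weights (the layer sizes follow the caption parameters, but the weights here are illustrative stand-ins, not the trained network):

```python
import numpy as np

# Placeholder layer sizes and weights (illustrative only, not the trained network).
n_in, n_hidden, n_out = 100, 8, 12
rng = np.random.default_rng(0)
W_ih = rng.standard_normal((n_in, n_hidden))   # input -> hidden
W_ho = rng.standard_normal((n_hidden, n_out))  # hidden -> output

# Input-output matrix of the linearised network.
W_io = W_ih @ W_ho
print(W_io.shape)  # (100, 12)
```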

```{figure} sections/basicmodel/tuning-hidden.png
:label: tuning-curves-hidden
:label: tuning-curves-hidden
:width: 100%
Tuning curves of hidden neurons. Each plot shows the interaural phase difference (IPD) tuning curve of one of the eight hidden layer neurons in the model. The x-axis shows the IPD and the y-axis the normalised firing rate. The black curves show the results for the trained spiking neural network. The orange curves show the best fit by a translated and scaled Gaussian curve. The blue curves show the fit for a rate-based approximation where spike times are ignored. Parameters are as in [](#confusion-matrix): $f=50$ Hz, $\tau=2$ ms, $N_\psi=100$, $N_h=8$, $N_c=12$.
```
1 change: 0 additions & 1 deletion paper/sections/contributor_table.md
@@ -1,7 +1,6 @@
(contributor-table)=
```{list-table} Contributors, ordered by GitHub commits as of 2024-07-16.
:header-rows: 1
:label: contributor-table
* - Name
- GitHub
