- Montreal, CA
- hguimaraes.me
- @hguimaraes3
Stars
Various Slurm utility scripts for Compute Canada staff and users.
[Research] Code and artifacts for the papers related to RobustDistiller (Robust and Efficient S3RL models)
A library built for easier audio self-supervised training and downstream task evaluation.
The official repository of Dynamic-SUPERB.
This repository surveys papers focusing on prompting and adapters for speech processing.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
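As a quick illustration of the PEFT library listed above, the sketch below attaches a LoRA adapter to a pretrained Transformer; the checkpoint name, task head, and LoRA hyperparameters are illustrative assumptions, not values taken from this list.

```python
# Hedged sketch: wrapping a pretrained model with a LoRA adapter via 🤗 PEFT.
# The checkpoint and hyperparameters below are illustrative, not prescribed here.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed checkpoint/task for the example
)
lora_cfg = LoraConfig(
    r=8,                                # low-rank dimension
    lora_alpha=16,                      # scaling factor
    target_modules=["query", "value"],  # attention projections to adapt
    lora_dropout=0.1,
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()      # only the adapter weights are trainable
```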
Audio Codec Speech processing Universal PERformance Benchmark
Code for the Interspeech 2021 paper "AST: Audio Spectrogram Transformer".
TorchEEG is a library built on PyTorch for EEG signal analysis.
A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
This repository contains the SpeechBrain Benchmarks
Fast and memory-efficient exact attention
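For the FlashAttention entry above, a minimal sketch of calling the fused kernel directly follows; tensor shapes follow the library's (batch, seqlen, nheads, headdim) half-precision CUDA convention, and the sizes are arbitrary examples.

```python
# Hedged sketch: calling the flash-attn fused kernel directly on CUDA.
# Shapes are (batch, seqlen, nheads, headdim); fp16/bf16 tensors on GPU are required.
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Causal self-attention; the output has the same shape as q.
out = flash_attn_func(q, k, v, causal=True)
```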
Code for "SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks" (speech processing with the prompting paradigm).
DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning
The simplest, fastest repository for training/finetuning medium-sized GPTs.
INTERSPEECH 2023: "DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models"
FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning (INTERSPEECH 2022)
Awesome Knowledge-Distillation: a categorized collection of knowledge distillation papers (2014-2021).
Making large AI models cheaper, faster and more accessible
Implementation of Nougat: Neural Optical Understanding for Academic Documents.
A framework for generating labeled audio recordings of single-spoken keywords via automatic forced alignment.
ICASSP 2023-2024 Papers: A complete collection of influential and exciting research papers from the ICASSP 2023-24 conferences. Explore the latest advancements in acoustics, speech and signal processing.
Fast audio data augmentation in PyTorch. Inspired by audiomentations. Useful for deep learning.
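To illustrate the torch-audiomentations entry above, the sketch below composes two batched waveform augmentations; the chosen transforms, probabilities, and tensor sizes are illustrative assumptions.

```python
# Hedged sketch: batched waveform augmentation with torch-audiomentations.
# The transforms and parameters are example choices, not recommendations from this list.
import torch
from torch_audiomentations import Compose, Gain, PolarityInversion

augment = Compose(
    transforms=[
        Gain(min_gain_in_db=-6.0, max_gain_in_db=6.0, p=0.5),
        PolarityInversion(p=0.5),
    ]
)

# Expected input layout: (batch_size, num_channels, num_samples).
audio = torch.randn(4, 1, 16000)
augmented = augment(audio, sample_rate=16000)
```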
JUCE is an open-source cross-platform C++ application framework for desktop and mobile applications, including VST, VST3, AU, AUv3, LV2 and AAX audio plug-ins.
A collection of papers related to speech model compression
Defending against Adversarial Audio via Diffusion Model (ICLR 2023)
A PyTorch knowledge distillation library for benchmarking and extending works in the domains of knowledge distillation, pruning, and quantization.
Collection of recent methods on (deep) neural network compression and acceleration.