Provides helper functions and demonstration vignettes of increasing depth that show how to construct the Self-Attention algorithm. Based on Vaswani et al. (2017) <doi:10.48550/arXiv.1706.03762>, Dan Jurafsky and James H. Martin (2022, ISBN:978-0131873216) "Speech and Language Processing (3rd ed.)" <https://web.stanford.edu/~jurafsky/slp3/>, and Alex Graves (2020) "Attention and Memory in Deep Learning" <https://www.youtube.com/watch?v=AIiwuClvH6k>.
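The vignettes build scaled dot-product self-attention (Vaswani et al., 2017) step by step. Below is a minimal base-R sketch of that computation for orientation; the random projection matrices, the dimensions, and the softmax helper are illustrative assumptions, not the package's exported API.

```r
# Minimal single-head self-attention in base R (illustrative sketch only;
# see the package vignettes for the full construction).

softmax <- function(x) {
  e <- exp(x - max(x))   # subtract max for numerical stability
  e / sum(e)
}

set.seed(1)
n <- 4   # number of tokens (assumed)
d <- 3   # embedding / key dimension (assumed)
X <- matrix(rnorm(n * d), nrow = n)   # token embeddings, one row per token

# projection matrices (random here, learned in practice)
W_Q <- matrix(rnorm(d * d), nrow = d)
W_K <- matrix(rnorm(d * d), nrow = d)
W_V <- matrix(rnorm(d * d), nrow = d)

Q <- X %*% W_Q
K <- X %*% W_K
V <- X %*% W_V

# scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
scores  <- Q %*% t(K) / sqrt(d)
weights <- t(apply(scores, 1, softmax))  # row-wise softmax
output  <- weights %*% V                 # attention-weighted values
```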
Version: 0.2.0
Suggests: covr, knitr, rmarkdown, testthat (≥ 3.0.0)
Published: 2022-07-12
Author: Bastiaan Quast [aut, cre]
Maintainer: Bastiaan Quast <bquast at gmail.com>
License: GPL (≥ 3)
NeedsCompilation: no
Materials: README, NEWS
CRAN checks: attention results
Reference manual: attention.pdf
Vignettes: Complete Self-Attention from Scratch, Simple Self-Attention from Scratch
Package source: attention_0.2.0.tar.gz
Windows binaries: r-devel: attention_0.2.0.zip, r-release: attention_0.2.0.zip, r-oldrel: attention_0.2.0.zip
macOS binaries: r-release (arm64): attention_0.2.0.tgz, r-oldrel (arm64): attention_0.2.0.tgz, r-release (x86_64): attention_0.2.0.tgz, r-oldrel (x86_64): attention_0.2.0.tgz
Old sources: attention archive
Reverse imports: rnn
Please use the canonical form https://CRAN.R-project.org/package=attention to link to this page.