Algoliterary Encounters
Revision as of 14:35, 25 October 2017
Start of the Algoliterary Encounters catalog.
Introduction
Algoliterary works
- Oulipo recipes
- i-could-have-written-that
- Obama, model for a politician
- In the company of CluebotNG
Algoliterary explorations
What the Machine Writes: a closer look at the output
- CHARNN text generator
- You shall know a word by the company it keeps - five word2vec graphs, each containing the words 'collective', 'being' and 'social'
How the Machine Reads: Dissecting Neural Networks
Datasets
- Many many words - an introduction to the datasets, with a calculation exercise
- The data (e)speaks - espeak installation
From words to numbers
Different views on the data
Creating word embeddings using word2vec
- Crowd Embeddings - case studies (still needs fine-tuning)
- word2vec_basic.py - in piles of paper
- softmax annotated - see the annotated sketch after this outline
- Reverse Algebra
How a Machine Might Speak
Sources
Algoliterary Toolkit
Bibliography
- Algoliterary Bibliography - Reading Room texts
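
The entries 'Creating word embeddings using word2vec' and 'softmax annotated' both point to the softmax step that word2vec uses to turn similarity scores between word vectors into probabilities. Below is a minimal illustrative sketch in Python; the vocabulary, the vector size and the random vectors are assumptions made up for the example and are not taken from the catalog or from word2vec_basic.py.

import numpy as np

def softmax(scores):
    # Subtract the maximum score for numerical stability,
    # then normalise the exponentials so they sum to 1.
    shifted = scores - np.max(scores)
    exps = np.exp(shifted)
    return exps / exps.sum()

# Toy setup (hypothetical): a small vocabulary and random 5-dimensional
# vectors stand in for the embeddings that word2vec would learn.
rng = np.random.default_rng(0)
vocabulary = ["collective", "being", "social", "machine"]
context_vectors = rng.normal(size=(len(vocabulary), 5))
target_vector = rng.normal(size=5)

# Dot products measure how close each vocabulary word is to the target word;
# softmax turns those scores into a probability distribution over the vocabulary.
scores = context_vectors @ target_vector
probabilities = softmax(scores)

for word, p in zip(vocabulary, probabilities):
    print(word, round(float(p), 3))

In word2vec itself these probabilities drive the training objective: the vectors are adjusted so that words which appear in each other's company receive higher probability, which is what the title 'You shall know a word by the company it keeps' alludes to.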