Growing a tree

From Algolit
 
Latest revision as of 15:41, 13 September 2019

by Algolit

[https://gitlab.constantvzw.org/algolit/mundaneum/tree/master/exhibition/5-Readers/growing_a_tree Sources on Gitlab]

Parts-of-speech are the categories of words that we learn at school: noun, verb, adjective, adverb, pronoun, preposition, conjunction, interjection, and sometimes numeral, article, or determiner.

In Natural Language Processing (NLP) there exist many tools that allow sentences to be parsed. This means that an algorithm can determine the part-of-speech of each word in a sentence. 'Growing a tree' uses this technique to find all nouns in a given sentence. Each noun is then replaced by its definition, which allows the sentence to grow autonomously and infinitely. The recipe of 'Growing a tree' was inspired by Oulipo's constraint of 'littérature définitionnelle', invented by Marcel Benabou in 1966: in a given phrase, one replaces every significant element (noun, adjective, verb, adverb) by one of its definitions in a given dictionary; one then reiterates the operation on the resulting phrase, and so on.

The dictionary of definitions used in this work is Wordnet, a combination of a dictionary and a thesaurus that can be read by machines. According to Wikipedia, it was created in the Cognitive Science Laboratory of Princeton University, starting in 1985. The project was initially funded by the US Office of Naval Research and later also by other US government agencies, including DARPA, the National Science Foundation, the Disruptive Technology Office (formerly the Advanced Research and Development Activity), and REFLEX.
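The growing procedure can be sketched in a few lines of Python. The mini-dictionary below is a hypothetical stand-in for Wordnet (the actual work looks definitions up in Wordnet itself, for instance via NLTK's `wordnet.synsets(word)[0].definition()`), and dictionary membership stands in for a real part-of-speech tagger deciding which words are nouns:

```python
import re

# Hypothetical mini-dictionary standing in for Wordnet: a few nouns
# mapped to their definitions. Membership in this dictionary stands in
# for a part-of-speech tagger deciding which words are nouns.
DEFINITIONS = {
    "tree": "a tall plant with a trunk and branches made of wood",
    "plant": "a living organism that grows in the earth",
    "trunk": "the main stem of a tree",
}

def grow(sentence, steps=1):
    """Apply 'littérature définitionnelle': replace every noun found in
    the dictionary by its definition, and repeat the operation `steps` times."""
    def replace(match):
        word = match.group(0)
        return DEFINITIONS.get(word.lower(), word)
    for _ in range(steps):
        sentence = re.sub(r"[A-Za-z]+", replace, sentence)
    return sentence

print(grow("I see a tree", steps=1))
# → I see a a tall plant with a trunk and branches made of wood
```

A second step would expand 'plant' and 'trunk' in turn, so each iteration makes the sentence longer; with a dictionary as large as Wordnet, the growth never stops.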


---------------------------------------------------

Concept, code & interface: Anaïs Berck & Gijs de Heij

Recipe: Marcel Benabou (Oulipo)

Texts: a collection of sentences that mention the word 'tree'

Technique: Wordnet