In this repository we store all the little tests and experiments we developed during the ongoing process of a.l.p.h.a.
a.l.p.h.a is an interactive installation in which an A.I. (the GPT-Neo model) generates poems based on the visitor's appearance. This is done using computer vision and machine learning algorithms such as pose, object, and gender recognition.
To engage with the system, visitors are encouraged to step into a.l.p.h.a's field of vision. To further influence the generated text, they can choose an item from a collection of objects. This data is then sent to the language model GPT-Neo, which generates a personalized poem that is subsequently printed for the visitors to take home.
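As a rough illustration of that last step, the poem generation can be sketched with the Hugging Face `transformers` implementation of GPT-Neo; the model size, prompt wording and sampling parameters below are assumptions for the example, not the exact ones used in the installation.

```python
# Minimal sketch: turning detected visitor attributes into a poem with GPT-Neo.
# Model size, prompt and sampling parameters are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# Hypothetical attributes coming out of the computer vision stage
attributes = "a standing person holding a red umbrella"
prompt = f"Write a short poem about {attributes}:\n"

result = generator(prompt, max_length=120, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```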
The first exhibition with a.l.p.h.a. can be seen here: https://www.youtube.com/watch?v=XbgmyMMmd_Y
In this process of exploration we mainly use cv2 (OpenCV) for the computer vision and GPT-Neo for the text generation (now that GPT-3 has been released, this may change). Each of these Jupyter Notebooks has its own purpose, either combining the use of two or more models or trying out a specific feature such as OSC for communicating between the interface (made with TouchDesigner) and the actual ML process.
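Below is a minimal sketch of that OSC link, using cv2 to grab a frame and the `python-osc` package to send a placeholder detection result to TouchDesigner; the OSC address, port and message content are assumptions made for the example.

```python
# Sketch of the cv2 -> OSC -> TouchDesigner link. The OSC address ("/alpha/detection"),
# port (7000) and message value are placeholders, not the installation's actual ones.
import cv2
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # TouchDesigner assumed to listen on UDP 7000

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # In the notebooks an ML model would run on the frame here;
    # we just send a placeholder label to the interface.
    client.send_message("/alpha/detection", "person_detected")
cap.release()
```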
This repo still needs some organising and commenting work.
For training on body language see:
- AI_BodyLanguageRecognition.ipynb (see the landmark-extraction sketch below)
For the implementation of the items see:
- Computervision+TextGeneration.ipynb
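For reference, the kind of input the body-language notebook trains on can be obtained from MediaPipe pose landmarks, roughly as sketched below; the notebook's actual pipeline may differ, so treat this only as an illustration.

```python
# Sketch: extracting pose landmarks with cv2 + MediaPipe as features for
# body-language training. Shows the general idea, not the notebook's exact code.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    ret, frame = cap.read()
    if ret:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Flatten the 33 landmarks into one feature row (x, y, z, visibility each)
            row = [v for lm in results.pose_landmarks.landmark
                     for v in (lm.x, lm.y, lm.z, lm.visibility)]
            print(f"{len(row)} features extracted")
cap.release()
```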
This project is a collaboration between:
- Casper Westhausen
- Julian Moreno
- Jeffrey Van Der Geest
A lot of this is based on tutorials by Nicholas Renotte. If you are interested in ML, Python and computer science in general, check out his channel: https://www.youtube.com/channel/UCHXa4OpASJEwrHrLeIzw7Yg