Digital art based on deep learning
http://ml4a.github.io/classes/itp-F18/
https://nips2017creativity.github.io/
https://janhuenermann.com/blog/abstract-art-with-ml
Machine-learning extrapolation of art: http://extrapolated-art.com/
ASCII art with deep CNN https://github.com/OsciiArt/DeepAA
Random Pics Combined Using Neural Network
Neural doodle
Colorize B&W pictures: http://demos.algorithmia.com/colorize-photos/
Make photo from segmentation: http://prostheticknowledge.tumblr.com/post/169038480796/uncanny-rd-project-by-anastasis-germanidis-and
BigGAN https://mobile.twitter.com/neuroecology/status/1073291777321381888
make.girls.moe / Crypko
https://dena.com/intl/anime-generation/
Inceptionism: Going Deeper into Neural Networks – Going Deeper with Convolutions
How ANYONE can create Deep Style images
https://twitter.com/quasimondo https://twitter.com/mtyka https://twitter.com/alexjc https://twitter.com/Salavon https://twitter.com/genekogan https://twitter.com/chrisrodley https://twitter.com/elluba https://twitter.com/dh7net https://twitter.com/kostiumas
https://twitter.com/artwithMI https://twitter.com/ml4a_
https://twitter.com/hardmaru https://twitter.com/samim https://twitter.com/algoritmic https://twitter.com/prostheticknowl https://twitter.com/fchollet https://twitter.com/zachlieberman https://twitter.com/bitcraftlab
Style Transfer for Headshot Portraits
Deep Convolutional Inverse Graphics Network
Image based relighting using neural networks
pix2pix
https://magenta.tensorflow.org/welcome-to-magenta
https://mobile.twitter.com/chrisdonahuey/status/1073387592161193984 Music transformer https://magenta.tensorflow.org/music-transformer
https://www.youtube.com/watch?v=HANeLG0l2GA
Composing Music With Recurrent Neural Networks – https://affinelayer.com/sidgen/
The long tail (power-law distribution) is particularly long in music.
Diversity is valuable in the long term.
Deep content-based music recommendation.
Model with latent factors. Map from the raw audio signal to a latent space; map users into the same latent space and recommend the songs nearby. A ConvNet implements the map from audio signal to latent space.
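The recommendation step above can be sketched in plain numpy; the song vectors here are random stand-ins for the ConvNet outputs described in the note (the names `song_latent` and `user_latent` are illustrative, not from the paper):

```python
import numpy as np

# Stand-in latent representations: in the paper, song vectors come from a
# ConvNet applied to raw audio; here they are random placeholders.
rng = np.random.default_rng(0)
n_songs, dim = 1000, 40
song_latent = rng.normal(size=(n_songs, dim))
user_latent = rng.normal(size=dim)

# Score every song by its inner product with the user vector and
# recommend the top-k nearest songs in latent space.
scores = song_latent @ user_latent
top_k = np.argsort(scores)[::-1][:10]
```

The inner product plays the role of the "probability of having listened" score from the weighted-factorization note.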
Weighted matrix factorization (uses a confidence matrix) to find the latent factors (see the [[Compressed sensing]] video). Basically implements: probability of a user having listened to a song as the inner product between the user and song latent representations.
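A minimal sketch of weighted matrix factorization via alternating least squares, assuming the usual implicit-feedback confidence weighting c = 1 + alpha * plays (the function name and all parameters here are illustrative, not taken from the paper):

```python
import numpy as np

def wmf_als(plays, n_factors=4, alpha=40.0, reg=0.1, n_iters=10, seed=0):
    """Weighted matrix factorization for implicit feedback.

    plays: (n_users, n_songs) non-negative play counts.
    Preference p = 1 where plays > 0; confidence c = 1 + alpha * plays.
    Alternating least squares on the confidence-weighted squared error.
    """
    rng = np.random.default_rng(seed)
    n_users, n_songs = plays.shape
    P = (plays > 0).astype(float)          # binary preferences
    C = 1.0 + alpha * plays                # confidence matrix
    U = rng.normal(scale=0.1, size=(n_users, n_factors))
    V = rng.normal(scale=0.1, size=(n_songs, n_factors))
    I = reg * np.eye(n_factors)
    for _ in range(n_iters):
        for u in range(n_users):           # solve each user's ridge problem
            Cu = np.diag(C[u])
            U[u] = np.linalg.solve(V.T @ Cu @ V + I, V.T @ Cu @ P[u])
        for s in range(n_songs):           # then each song's
            Cs = np.diag(C[:, s])
            V[s] = np.linalg.solve(U.T @ Cs @ U + I, U.T @ Cs @ P[:, s])
    return U, V

# Tiny synthetic example: 6 users, 8 songs, block-structured play counts.
plays = np.zeros((6, 8))
plays[:3, :4] = 3.0   # first half of users listen to the first half of songs
plays[3:, 4:] = 3.0
U, V = wmf_als(plays)
pred = U @ V.T        # predicted preference for every (user, song) pair
```

Observed (user, song) pairs get high confidence and are pulled toward preference 1, while unobserved pairs keep confidence 1 and drift toward 0.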
Mean squared error is rotationally invariant, and so are the factorizations: rotating both latent factor matrices by the same orthogonal matrix leaves the reconstruction unchanged.
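This invariance is easy to check numerically: if U Vᵀ fits the data, so does (UR)(VR)ᵀ for any orthogonal R, so the squared-error objective cannot pin down a unique rotation of the latent space.

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(5, 3))   # user factors
V = rng.normal(size=(7, 3))   # song factors
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix

# Rotating both factor matrices leaves the reconstruction unchanged,
# hence the mean squared error is unchanged too.
same = np.allclose(U @ V.T, (U @ R) @ (V @ R).T)
```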
Latent factors split into those predictable from audio and those predictable only from metadata.
Datasets: the Million Song Dataset, The Echo Nest.
Still performs much worse than collaborative filtering, in general.
Can visualize the song latent space using t-distributed stochastic neighbour embedding (t-SNE) and identify some genres.
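A sketch of that visualization step, assuming scikit-learn's `TSNE` is available; the latent vectors here are synthetic stand-ins with two loose clusters playing the role of genres:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in song latent factors: two clusters as fake "genres".
latent = np.vstack([rng.normal(0, 1, size=(50, 40)),
                    rng.normal(4, 1, size=(50, 40))])

# t-SNE maps the 40-d latent vectors to 2-d for plotting; points that are
# nearby in latent space tend to stay nearby in the embedding.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(latent)
```

Plotting `emb` (e.g. with matplotlib, colored by genre label) would show whether the latent space separates genres.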