Read about interesting models and framings for AI safety here: https://docs.google.com/document/d/145yJBoNTYHOJ_FMOO2hO-x2KnJQT45hxhtd0I84HVLE/edit
OpenAI on AI safety via debate: https://blog.openai.com/debate/
OpenAI – "Concrete Problems in AI Safety" paper
See here for a David Deutsch comment (linked at a specific time): https://vimeo.com/22099396#t=2758s
The Malicious Use of Artificial Intelligence report: https://maliciousaireport.com/
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec – comments here: https://www.facebook.com/guillermovalleperez/posts/10156139287091223
Excellent article, not only on what may be the most tangible current AI risk, but more importantly on its possible solution. The choice forks the future into either a potential dystopia or a more humane and enriched society.
https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704
https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/
What is the simplest way to attack a model? Justin Gilmer. The security of an ML model is essentially about test error: defending against attackers who try random inputs to fool the model. VIDEO
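The "attackers trying random stuff" idea in the note above can be sketched as a random-search attack against a toy classifier. Everything here (the linear model, the input points, the epsilon budget) is a made-up assumption for illustration, not from the talk:

```python
# Sketch of a random-search adversarial attack on a toy linear classifier.
# The model, data points, and epsilon are hypothetical, chosen only to
# illustrate the idea that model security reduces to test error under
# attacker-chosen inputs.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a fixed linear classifier on 2-D inputs.
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    """Return class 0 or 1 for input x."""
    return int(x @ w + b > 0)

def random_attack(x, epsilon=0.5, trials=1000):
    """Try random perturbations within an L-infinity ball of radius epsilon;
    return the first perturbed input that flips the prediction, or None."""
    original = predict(x)
    for _ in range(trials):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x + delta) != original:
            return x + delta
    return None

# A point near the decision boundary is easy to flip with random noise...
x_near = np.array([0.1, 0.0])
adv = random_attack(x_near)

# ...while a point far from the boundary resists this epsilon budget.
x_far = np.array([5.0, 0.0])
robust = random_attack(x_far)
```

The point of the sketch is that even this crude attack succeeds wherever the model's decision boundary passes close to real inputs, which is why robustness is a statement about error rates on attacker-chosen inputs rather than about any one clever attack.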