See Machine learning, Intelligence
Computational intelligence - Scholarpedia
Data on AI progress: http://aiindex.org/
Oxford course (with videos) on Deep learning.
YouTube playlist by mathematicalmonk
Hugo Larochelle's YouTube videos
Read the Neural Turing Machines paper
See also: Evolutionary computing, Bio-inspired computing, Sloppy systems
Artificial Intelligence At Work (http://www.vicarious.com/)
Drop Everything to Work on Artificial Intelligence - Foresight Institute
Creating Human-Level AI: How and When - Ray Kurzweil
Artificial intelligence (AI) has the overall goal of understanding and engineering intelligence: behaviour that involves understanding and higher cognitive functions. It is a broad and very interdisciplinary field, feeding to and from Machine learning, Logic, Cognitive science, Neuroscience, etc.
Synthetic biology routes to bio-artificial intelligence
Complex Systems, Artificial Intelligence and Theoretical Psychology
Universal AI theory
Integrating symbols into deep learning
Program induction, Neural networks with memory, Intelligence
Thinking, perception, action, and the loops between them.
Oxford's society OxAI
Machine intelligence is essentially a synonym of AI, but with the connotation of using machines and computers to create and understand intelligence. The biggest part of it, Machine learning, deals with the problem of learning a model from data so as to solve (mostly) predictive tasks.
Miscellaneous notes from Nando's first deep learning lecture
Challenges: One-shot learning, multi-task & transfer learning, scaling and energy efficiency, ability to generate data (e.g. vision as inverse graphics), architectures for AI.
See more at Machine learning
Why do Deep Learning models perform so well?
Seems to be a result of:
I see many examples of trivial deep learning algorithms beating humans at "hard" scientific problems. Beating humans at videogames is really hard, though. Makes sense: games are designed for humans. The real world is much weirder, and so we humans and our biases aren't that special any more. https://www.facebook.com/guillermovalleperez/posts/10156790132406223
16/03/2019
Thoughts on Sutton's bitter lesson
Eric Drexler - A Cambrian explosion in Deep learning
A Gradient Descent Method for a Neural Fractal Memory
https://www.oreilly.com/ideas/the-current-state-of-machine-intelligence-2-0
https://medium.com/machine-learnings/a-humans-guide-to-machine-learning-e179f43b67a0#.gumb86nos
Yes, the idea is not that humans will not do anything. The idea is that we set goals, and make individual and collective decisions, just as today, but with higher prowess. As a result, most people will work less, in the sense that they spend almost all their time in leisure and almost none in rote activities. In car factories, for instance, you will need fewer and fewer people, as the work becomes closer and closer to just deciding the overall high-level goals. The remaining jobs will also become quite close to leisure anyway, with almost all rote aspects removed. You know, people will still make cars and other things by hand for fun, and with different degrees of automation. A big thing, I believe, will be people and machines working together as a team, each benefiting from the other's strengths. Later on, we will also become hybridized with machines in more direct ways, until "we become one". Or, as I prefer to put it, we become many (in the sense that the diversity of forms of sentient beings/people will grow drastically, with a huge spectrum of personal choice/journey to explore and expand)
http://www.artificialbrains.com/
Prof. Schmidhuber - The Problems of AI Consciousness and Unsupervised Learning Are Already Solved
Yeah, I'm not saying people won't have things to do. But yeah mostly: either personal projects, or leisure, in the case that for some reason you don't want to leave it to machines.
But "understanding" may become not so "important", just like calculating isn't. You can always ask your understanding AI, for advice, if you need some of that understanding xD.
That seems extremely lazy. Although it's not hard to imagine people doing that.
In a time before computers, knowing your multiplication tables was actually very important.
So, to say what is considered "important" (in the sense of important for humans), we'll need to know our role in society. Right now, computers are doing the computing, and we do the understanding. Perhaps next, computers will also do the understanding, and we'll just do the objective-setting, to different degrees of laziness. Society may just become a hedonistic dream, like in WALL-E (but not as cartoonishly dystopian...)
Until we merge with the machines, and we do everything again, including multiplying of course :) Multiplying and computing are actually extremely important. It's just that humans aren't the best at it now, so we just say it's not important, so we can feel better about not doing it.
Once we merge, we'll rightfully recognize the importance of all the little things that matter :)
Some quick notes to self on designing cognitive systems (AIs) (see the sketch below):
• A system that sees and acts on the world. It creates models of the world and of itself acting on it. Meanwhile it decides what to do, based on its internal models and some sort of reward mechanism.
• It can decide to act on the world, or run internal simulations (thinking, consciousness), or both, and can organically alternate between the two. Active and passive/reflective modes. Fast and slow thinking.
• Creating models <–> Unsupervised learning. Promising approaches: GANs, DBNs. Need more advances in this area.
• Models that include the acting agent itself. Juergen Schmidhuber seems to have done work on this, but I'm not sure what his approach is.
• The best reward mechanisms are unclear. Also, the amount of pre-existing structure the system needs to work appropriately is unclear <> the rebirth of the nurture vs nature problem, now from the "Creator"'s perspective.
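A minimal runnable sketch of that loop, with toy stand-ins for everything (the environment, model, reward, and 50/50 mode switch are all hypothetical placeholders, not a proposed architecture):

```python
import random

class WorldModel:
    """Toy stand-in for a learned generative model of the world (and the agent in it)."""
    def __init__(self):
        self.last_obs = 0.0

    def update(self, obs):
        self.last_obs = obs  # "learning": just remember the last observation

    def simulate(self, action):
        # imagined next observation (thinking), with some model uncertainty
        return self.last_obs + action + random.gauss(0, 0.1)

class ToyEnv:
    def __init__(self):
        self.state = 0.0

    def step(self, action):
        self.state += action
        return self.state, -abs(self.state - 1.0)  # reward for staying near 1.0

def cognitive_loop(steps=200):
    env, model = ToyEnv(), WorldModel()
    obs, best_action = 0.0, 0.0
    for _ in range(steps):
        model.update(obs)
        if random.random() < 0.5:
            # passive/reflective (slow) mode: run internal simulations to pick an action
            candidates = [random.uniform(-1, 1) for _ in range(10)]
            best_action = max(candidates, key=lambda a: -abs(model.simulate(a) - 1.0))
        else:
            # active (fast) mode: act on the real world
            obs, _ = env.step(best_action)
    return obs

print(cognitive_loop())  # should end up near 1.0
```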
https://www.vicarious.com/2017/10/26/common-sense-cortex-and-captcha/
more notes from GKeep....
–> Try DeepGA for a supervised learning problem! Check how robust the resulting nets are against adversarial examples
– try actinf code combining the loss function! – use World models models/code perhaps, https://worldmodels.github.io/ – "We can now use our trained V model to pre-process each frame at time t into z_t to train our M model." – They train the models one after the other, given the dependencies... Makes sense. "In principle, we can train both models together in an end-to-end manner, although we found that training each separately is more practical, and also achieves satisfactory results. Training each model only required less than an hour of computation time using a single NVIDIA P100 GPU. We can also train individual VAE and MDN-RNN models without having to exhaustively tune hyperparameters."
"Furthermore, we see that in making these fast reflexive driving decisions during a car race, the agent does not need to plan ahead and roll out hypothetical scenarios of the future. Since h_t contains information about the probability distribution of the future, the agent can just query the RNN instinctively to guide its action decisions. Like a seasoned Formula One driver or the baseball player discussed earlier, the agent can instinctively predict when and where to navigate in the heat of the moment. "
"In this simulation, we don’t need the V model to encode any real pixel frames during the hallucination process, so our agent will therefore only train entirely in a latent space environment."
Avoiding catastrophic forgetting by learning loss functions with a neural net, one for each of the tasks, and using them as part of the loss when doing new tasks.. Hmm, probably won't work.. Could just save the networks trained on the previous tasks and use them as extra labels.. Nah, because inputs are likely to be from a different distribution..
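For concreteness, a minimal sketch of the "previous networks as extra labels" (distillation) variant dismissed above, with hypothetical tiny nets and fake data:

```python
import torch
import torch.nn as nn

# old_net was trained on task 1 and is frozen; net is now training on task 2,
# with a distillation term pulling its outputs toward old_net's.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
old_net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
for p in old_net.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(64, 10), torch.randint(0, 5, (64,))   # a fake task-2 batch

task_loss = nn.functional.cross_entropy(net(x), y)
# The note's caveat: x comes from task 2's distribution, so old_net's outputs
# here may be meaningless off-distribution, which is why it "probably won't work".
distill_loss = nn.functional.mse_loss(net(x), old_net(x))
(task_loss + 0.1 * distill_loss).backward()
opt.step()
```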
Best way of evolving hypernets?
Evolve GANs!!
Mutate the seeds (roughly one per mutation, and preferentially mutate the most recent ones).
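A sketch of what this could look like with a DeepGA-style compact encoding (assuming the genotype is a list of random seeds and the phenotype is reconstructed by replaying them; all names and sizes are made up):

```python
import random
import numpy as np

def phenotype(seeds, n_params=1000, sigma=0.01):
    """Rebuild the parameter vector by replaying the seed list: the first seed
    gives the initialization, each later seed regenerates one mutation."""
    theta = np.random.RandomState(seeds[0]).randn(n_params)
    for s in seeds[1:]:
        theta += sigma * np.random.RandomState(s).randn(n_params)
    return theta

def mutate(seeds):
    """One new seed per mutation, appended at the tail."""
    return seeds + [random.randrange(2**31)]

def tweak_recent(seeds):
    """Preferentially re-mutate the most recent seeds (tail-biased index)."""
    k = min(int(random.expovariate(1.0)), len(seeds) - 1)
    new = list(seeds)
    new[len(seeds) - 1 - k] = random.randrange(2**31)
    return new

genome = [42]                      # the genotype is just a list of ints
for _ in range(5):
    genome = mutate(genome)
theta = phenotype(genome)          # reconstructed phenotype (network weights)
```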
self-referential nets..
Capsule networks functional maps manifolds. capsule networks inverse graphics <> DrawRNN
codewords to learn text representations
Predictive learning. LeCun agreed that reward needs to be intrinsic and rich – rather than learning from occasional task-specific rewards, AI systems should learn by constantly predicting "everything from everything", without requiring training labels or a task definition. Once you understand the world (unsupervised/predictive learning), it's much easier to learn how to act (policy/RL)
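A toy rendering of "predicting everything from everything" (linear models, no labels; just to make the idea concrete, not how LeCun would implement it):

```python
import numpy as np

# For each feature i, fit a linear predictor of feature i from all the others,
# with no labels or task definition. The learned predictors capture the data's
# structure, which is the "understanding" that should make acting easier.
X = np.random.randn(500, 6) @ np.random.randn(6, 6)   # correlated fake data
predictors = []
for i in range(X.shape[1]):
    others = np.delete(X, i, axis=1)
    w, *_ = np.linalg.lstsq(others, X[:, i], rcond=None)
    predictors.append(w)
```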
Measure complexity of policies.
deform graph and reuse spectral representations, to navigate in similar graphs?
Is the parameter–function map in RNNs Turing complete? Hmm, no. But with hyperrecurrent networks, maybe yes! Giving input to the network and giving it parameters merge into a single thing! –> hyperrecurrent network
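One speculative way to cash out "hyperrecurrent" (everything here is a made-up toy): the recurrent weight matrix is itself generated from the hidden state, so parameters and inputs blur into one thing.

```python
import numpy as np

def hyper_rnn_step(h, x, hyper_W):
    """The recurrent weight matrix W is generated from the hidden state itself,
    so 'giving the net input' and 'giving it parameters' become the same act."""
    n = len(h)
    W = np.tanh(hyper_W @ h).reshape(n, n)   # state-dependent weights
    return np.tanh(W @ h + x)

n = 8
hyper_W = 0.1 * np.random.randn(n * n, n)   # fixed hypernetwork weights
h, x = np.zeros(n), np.random.randn(n)
for _ in range(10):
    h = hyper_rnn_step(h, x, hyper_W)
```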
Evolutionary algos for GANs. They are very unstable, indicating the gradient information is perhaps not that useful.. Mutations that change only the generator or only the discriminator. Unsupervised learning is often about self-supervision, but because of the lack of external supervision, gradients are probably not as useful, and exploration + Occam's razor is better? Hmm. Wolfram agrees that something closer to random search (in program space!) may be better. Demis thinks AGI solutions may be very sparse, so it's better to get as much headway as possible from the brain.
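A sketch of the mutation operator suggested above, where each mutation touches either the generator or the discriminator but never both (flat parameter vectors, purely illustrative):

```python
import random
import numpy as np

def mutate_gan(g, d, sigma=0.02):
    """Gaussian mutation applied to exactly one of the two players."""
    if random.random() < 0.5:
        g = g + sigma * np.random.randn(*g.shape)
    else:
        d = d + sigma * np.random.randn(*d.shape)
    return g, d

g, d = np.zeros(100), np.zeros(100)   # fake generator/discriminator params
g, d = mutate_gan(g, d)
```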
Evolutionary strategies with spiking or more general nets. neuroevolution
easily programmable nets. They need to learn concepts/abstract knowledge first #DeepMind
Try random GANs RNNs, RL
RG..
Supervised learning can use gradients very effectively because of the supervision.
try plugging a trained GAN into a trained CNN and optimizing to generate images of particular categories –> seems to work for MNIST. Try multi-objective. Otherwise, just randomly search GAN inputs; as there aren't that many outputs, it's easy to find any of them by random search.. so not that interesting. But with multi-objective (cat+bicycle or whatever) the space grows combinatorially (see the sketch below)
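A sketch of the multi-objective latent search (tiny stand-in nets here; in practice you would load the trained MNIST GAN and CNN from the note):

```python
import torch
import torch.nn as nn

# Stand-ins for the trained GAN generator and CNN classifier; both are frozen,
# and only the latent z is optimized.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
for p in list(G.parameters()) + list(clf.parameters()):
    p.requires_grad_(False)

z = torch.randn(1, 64, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
targets = [3, 8]                                # multi-objective: e.g. "3" and "8"
for _ in range(200):
    opt.zero_grad()
    logits = clf(G(z).view(1, 1, 28, 28))
    loss = -logits[0, targets].sum()            # maximize all target logits jointly
    loss.backward()
    opt.step()
```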
When you start making a list of successively harder tasks for AI, and you realize you are writing about the history of life
Tasks:
• Navigate (change location). Needs a basic understanding of space. Follow gradients, for instance chemotaxis..
• Forage. Find food/water. Needs memory for this. Needs more understanding of space and time. Grid and place cells, hippocampus.
• Flee. Needs very good perception, outlier detection, super good navigation, basics of causality. Attention mechanisms.
• Attack/hunt. Needs even better navigation. Stealth, super basic theory of mind (I don't want to be heard).
• Build (change arrangement of objects). Basic world modeling.
• Socialize. More advanced world modeling, and theory of mind.
• Communicate. More advanced socialization.
• Make tools. More advanced world modeling, planning-oriented.
• Make more tools.
• Make art. Tools for thinking. Thoughts that cause actions that change the world in ways that evoke new thoughts. You start seeing your own thoughts. Metacognition. Self-awareness. Because you realize you can control thoughts, you start becoming aware of them!
• Writing. Realize the above can be useful for communication.
• First technological singularity
• ..."history"...
• Try to overcome the limits of biology by reverse engineering yourself.
DeepMind's idea is that you can't just make AI by working on natural language processing (akin to jumping to the communicate, or writing parts), because that thing is built on top of the previous machinery. My attempt above was to make a list of things that only rely on things before, so that it makes sense to work on them in that order.
– make mini brain RL nn, plug to the web
Why is training so much worse when the input is coded as +/-1 instead of 0/1??
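A minimal way to probe this question: train the same model on identical data under the two input codings and compare final losses (a logistic-regression stand-in with fake data; swap in the actual network and data to reproduce the observation):

```python
import numpy as np

rng = np.random.default_rng(0)
X01 = rng.integers(0, 2, size=(1000, 20)).astype(float)
Xpm = 2 * X01 - 1                                # same bits, +/-1 coding
y = (X01.sum(axis=1) > 10).astype(float)

def final_loss(X, y, lr=0.1, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)         # gradient descent step
    p = 1 / (1 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

print("0/1:", final_loss(X01, y), "+/-1:", final_loss(Xpm, y))
```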