Agenda
Introduction
Example: How to build a machine to play chess?
Artificial Intelligence Subfields
Feature Extraction for Classification
Artificial Neural Networks
Future of Artificial Intelligence
Easy?
The general idea is to build a tree representing the different possible paths, where each path represents a sequence of choices by you and your opponent. The tree helps the computer choose its next move. This is called a Game Tree.
Min-Max Algorithm
Two agents are playing a Zero-Sum Game
Any loss to you is an equal win for your opponent
Assume that you are playing against a rational player who will try to maximize his profit. At each step, you want to maximize your profit, assuming that the opponent will try to minimize it.
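The maximize/minimize alternation described above can be sketched as a short recursive function. This is a minimal illustration, not a full chess engine: the game tree here is a hypothetical hand-built dictionary whose leaves hold the utility for the maximizing player.

```python
# A toy game tree: internal nodes map to their children,
# leaves are plain numbers holding the maximizer's utility.
TREE = {
    "root": ["A", "B"],
    "A": [3, 5],
    "B": [2, 9],
}

def minimax(node, maximizing):
    """Best achievable value for the player to move at this node."""
    if isinstance(node, int):       # a leaf: utility is known
        return node
    values = [minimax(child, not maximizing) for child in TREE[node]]
    # The maximizer picks the largest value; the minimizer the smallest.
    return max(values) if maximizing else min(values)
```

At the root, the maximizer assumes the opponent will answer with the minimizing move in each subtree, so it compares min(3, 5) = 3 against min(2, 9) = 2 and plays toward A.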
Min-Max Algorithm
Is it feasible to create the whole tree? Is it finite? What about the depth? Will we be forced to choose random moves? How do we evaluate each state?
Heuristic function (e.g. number of safe pieces)
Heuristic Function
Can we say that the computer is now intelligent because it can defeat humans in chess?
There are still many tasks that humans do easily but that are very hard for machines. Can you name a few?
Chinese Room
Human
Easy Tasks: Recognize faces, Recognize voices, Understand concepts
Hard Tasks: Multiply 4878 * 1254, Play chess

Computer
Easy Tasks: Multiply 4878 * 1254, Play chess
Hard Tasks: Understand concepts, Recognize faces, Recognize voices
Computer Vision
Teach machines how to recognize the visual world
Speech Recognition
Teach machines how to recognize human voices
Face Detection
Image Classification
Feature Representation
Audio Features
NLP Features
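As one concrete instance of an NLP feature representation, text is often turned into word-count vectors over a fixed vocabulary (a bag-of-words). This is a minimal sketch with a made-up three-word vocabulary:

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """Represent a sentence as word counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

# Hypothetical vocabulary chosen for illustration.
vocab = ["chess", "machine", "plays"]
vec = bag_of_words("The machine plays chess and the machine wins", vocab)
```

Words outside the vocabulary are simply ignored, which is why the choice of features (here, the vocabulary) matters so much in the classical pipeline.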
Neuron
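An artificial neuron computes a weighted sum of its inputs plus a bias, then passes the result through a nonlinearity. A minimal sketch using the sigmoid activation (the function names and sample numbers are illustrative):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With a weighted sum of exactly zero, the sigmoid outputs 0.5.
out = neuron([1.0, 2.0], [0.5, -0.25], 0.0)
```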
Neural Networks
Backpropagation
A network with more than one hidden layer is called a deep network. Training a deep network using plain Backpropagation is not easy.
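In the simplest (single-neuron) case, backpropagation reduces to applying the chain rule and taking a gradient step. A minimal sketch, assuming squared-error loss and a sigmoid activation, trained here on the AND function:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, lr=0.5, epochs=2000):
    """Fit one sigmoid neuron by gradient descent (backprop on a single layer)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # Chain rule for squared error: dLoss/dz = (y - target) * y * (1 - y)
            delta = (y - target) * y * (1 - y)
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(AND)
```

In a multi-layer network the same delta terms are propagated backwards layer by layer; with many layers the gradients tend to shrink, which is one reason training deep networks with plain backpropagation is hard.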
Demo
Classical Approach
Train a classifier using labeled data; then test the classification accuracy on separate test data. Raw data can be images, videos, audio, or natural language. We start by extracting appropriate features. Hand-crafted feature extraction differs according to the nature of the data.
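The train-then-test pipeline above can be sketched with a nearest-centroid classifier. The feature names and toy numbers here are invented for illustration; a real system would extract hand-crafted features from the raw images, audio, or text:

```python
def centroid(points):
    """Mean feature vector of one class."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def fit(X, y):
    """Training: store one centroid per class label."""
    return {c: centroid([x for x, label in zip(X, y) if label == c])
            for c in set(y)}

def predict(model, x):
    """Testing: assign the label of the nearest centroid."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda c: dist(model[c], x))

# Hypothetical hand-crafted features: [average brightness, edge count]
X_train = [[0.1, 2], [0.2, 3], [0.9, 8], [0.8, 9]]
y_train = ["cat", "cat", "car", "car"]
model = fit(X_train, y_train)

X_test, y_test = [[0.15, 2], [0.85, 9]], ["cat", "car"]
accuracy = sum(predict(model, x) == t
               for x, t in zip(X_test, y_test)) / len(y_test)
```

The key point of the classical approach is that the classifier only ever sees the feature vectors, so the quality of the hand-crafted features bounds the achievable accuracy.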
Sparse Autoencoders
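A sparse autoencoder augments the usual reconstruction loss with a penalty that keeps each hidden unit's average activation near a small target value ρ, commonly a KL-divergence term. A minimal sketch of that penalty (the function name is ours):

```python
import math

def kl_sparsity_penalty(rho, rho_hat):
    """KL divergence between the target activation rho and the
    observed mean activation rho_hat of a hidden unit."""
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))
```

The penalty is zero when the unit's mean activation matches the target and grows as it drifts away, pushing most hidden units to stay inactive on any given input.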
Motivating Results
Unsupervised feature extraction outperformed hand-crafted features on a set of popular machine recognition tasks:
Human Activity Recognition
Handwritten Digit Recognition
Audio Classification
ImageNet Classification
Biological Mapping
An interesting paper from 2005, "Invariant visual representation by single neurons in the human brain," showed that some neurons respond with high selectivity to, for instance, images, the sound, and the name of the actress Halle Berry!
Neuroscientists could lead an AI revolution if they come closer to understanding how the brain works.
Acknowledgment
Some slides are taken from Andrew Ng's deep learning tutorial. Images used here are for educational purposes; any further use should be referred back to the original copyright holders.
Thank You
Questions?