Artificial intelligence (AI) is a rich field spanning formal systems for representing and processing symbolic information, computational models of human cognition, and a range of techniques for learning, planning, and reasoning under uncertainty. In this course, students will gain an appreciation of what “artificial intelligence” means and how it can be usefully applied to real-world problems. Students will learn the intricacies of state-space search and constraint programming. Through an in-depth treatment of knowledge representation via propositional and first-order logic, students will understand how expert knowledge can be encoded in and processed by modern computers. In addition, students will acquire skills in using planning algorithms to find solutions to optimization problems, and understand how to use probabilistic reasoning to draw inferences in uncertain environments.
Prerequisites:
- 50.007 Machine Learning / 40.319 Statistical & Machine Learning
- 50.034 Introduction to Probability and Statistics / 30.003 Introduction to Probability and Statistics / 40.001 Probability + 40.004 Statistics
- [NEW] Modeling Uncertainty
By the end of the course, students will be able to:
- Define the term “artificial intelligence”.
- Name examples of AI systems that succeed at real-world problems.
- Describe the strengths and limitations of various state-space search algorithms, and choose the appropriate algorithm for a problem.
- Formulate and solve problems in the framework of constraint satisfaction problems.
- Formulate and solve planning problems.
- Use probabilistic modelling techniques to solve problems with noise, incomplete information, and uncertainty.
- Summarize the essential components of gradient-based optimization in supervised learning problems.
- Explain the impact of step sizes in gradient-based optimization.
- Recognize the difference between batch and stochastic/mini-batch gradient descent, and apply improved weight-update methods such as momentum, RMSProp, and Adam.
- Train neural networks in TensorFlow and run the trained networks.
- Investigate the impact of essential hyperparameters such as step size, batch size, and the number of training iterations when training a neural network.
- Discuss the use of different neural network structures, such as fully connected, convolutional, and pooling layers.
- Recall state-of-the-art neural network components such as batch normalization layers and residual connections.
- Recall the basic ideas behind neural networks used for machine translation and sequence-to-sequence learning.
- Construct input samples that can fool neural networks.
- Explain the difference between training from scratch and fine-tuning, and run fine-tuning of neural networks.
- List current useful real-world applications of AI.
- Implement state-space search algorithms for a variety of problems.
- Solve constraint programming problems.
- Infer new information from provided knowledge.
- Use planning algorithms to find optimal solutions.
- Solve problems with noise and uncertainty using probabilistic techniques.
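To give a flavor of the state-space search outcomes above, here is a minimal sketch of breadth-first search on a toy shortest-path problem. The grid problem and all names (`bfs`, `grid_neighbors`) are illustrative assumptions, not course material.

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Return a shortest path (list of states) from start to goal, or None."""
    frontier = deque([start])
    parent = {start: None}              # also serves as the visited set
    while frontier:
        state = frontier.popleft()
        if state == goal:
            path = []
            while state is not None:    # walk parent links back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None                         # goal unreachable

# Toy problem: a 3x3 grid where states are (row, col) and moves go
# up/down/left/right within the grid bounds.
def grid_neighbors(state):
    r, c = state
    steps = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in steps if 0 <= nr < 3 and 0 <= nc < 3]

print(bfs((0, 0), (2, 2), grid_neighbors))  # a shortest 5-state path
```

Because BFS expands states in order of distance from the start, the first time it dequeues the goal it has found a shortest path; this is the kind of strength/limitation trade-off (optimality vs. memory use) the search outcomes refer to.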
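The gradient-based optimization outcomes can likewise be sketched in a few lines. The following is a hypothetical example, not the course's prescribed setup: it minimizes a least-squares objective with mini-batch gradient descent plus a momentum term, and the names (`X`, `y`, `lr`, `beta`, `batch_size`) are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # toy design matrix
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                          # noiseless targets

w = np.zeros(3)                         # parameters to learn
v = np.zeros(3)                         # momentum buffer
lr, beta, batch_size = 0.05, 0.9, 20    # step size, momentum, batch size

for step in range(500):
    idx = rng.choice(len(X), size=batch_size, replace=False)  # mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size  # gradient of mean squared error
    v = beta * v + grad                 # momentum: accumulate past gradients
    w -= lr * v                         # weight update

print(np.round(w, 2))                   # approaches true_w = [1.0, -2.0, 0.5]
```

Sampling a fresh mini-batch each step is what distinguishes stochastic/mini-batch from batch gradient descent, and the `v` buffer is the simplest of the improved update rules mentioned above; RMSProp and Adam additionally rescale each coordinate by running gradient statistics.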