Course Outline
DAY 1 - ARTIFICIAL NEURAL NETWORKS
Introduction to ANN Architecture
- Comparison of biological and artificial neurons.
- Structural modeling of ANNs.
- Utilization of activation functions within ANNs (a short sketch follows this list).
- Overview of common network architecture classifications.
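To give a concrete feel for the activation-function topic above, here is a minimal NumPy sketch (illustrative only, not course material; the choice of functions and inputs is ours):

```python
import numpy as np

def sigmoid(z):
    # Logistic sigmoid: squashes inputs into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    # Hyperbolic tangent: squashes inputs into (-1, 1), zero-centred.
    return np.tanh(z)

def relu(z):
    # Rectified linear unit: passes positives, zeroes out negatives.
    return np.maximum(0.0, z)

z = np.linspace(-3, 3, 7)
print(sigmoid(z))
print(tanh(z))
print(relu(z))
```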
Mathematical Foundations and Learning Mechanisms
- Review of vector and matrix algebra.
- Understanding state-space concepts.
- Principles of optimization.
- Error-correction learning techniques.
- Memory-based learning approaches.
- Hebbian learning principles (see the sketch after this list).
- Competitive learning strategies.
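As a taste of the Hebbian material above, here is a minimal sketch of the plain Hebbian update for a single linear neuron y = w·x (all parameters and data are illustrative assumptions):

```python
import numpy as np

def hebbian_step(w, x, eta=0.01):
    # Plain Hebbian rule: weights grow where input and output are
    # co-active (delta_w = eta * y * x, with y = w . x).
    y = w @ x
    return w + eta * y * x

rng = np.random.default_rng(0)
w = rng.normal(size=3)
for _ in range(100):
    x = rng.normal(size=3)
    w = hebbian_step(w, x)
print(w)
```

The printout makes the classic caveat visible: unmodified Hebbian updates grow without bound, which is what normalised variants such as Oja's rule address.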
Single Layer Perceptrons
- Architecture and learning algorithms for perceptrons (a sketch follows this section).
- Introduction to pattern classification and Bayes' classifiers.
- Utilizing perceptrons as pattern classifiers.
- Convergence properties of perceptrons.
- Identifying limitations of perceptron models.
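Here is a minimal sketch of Rosenblatt's perceptron learning rule on hand-made separable data (Python/NumPy assumed; the dataset and learning rate are illustrative):

```python
import numpy as np

def train_perceptron(X, y, epochs=50, eta=1.0):
    # Rosenblatt's rule: on each misclassified sample, nudge the
    # weights toward the correct side of the hyperplane.
    # Labels y are expected in {-1, +1}.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:   # misclassified
                w += eta * yi * xi
                b += eta * yi
                errors += 1
        if errors == 0:                  # converged (separable data)
            break
    return w, b

# Linearly separable toy data: class = sign of x1 + x2.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(w, b, np.sign(X @ w + b))
```

On linearly separable data such as this, the loop provably terminates (the perceptron convergence theorem); on non-separable data such as XOR it never does, which is the limitation noted above.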
Feedforward Artificial Neural Networks
- Structures of multi-layer feedforward networks.
- The backpropagation algorithm (illustrated in the sketch after this list).
- Training and convergence in backpropagation.
- Functional approximation using backpropagation.
- Practical considerations and design challenges in backpropagation learning.
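To make the backpropagation discussion concrete, here is a from-scratch sketch for a one-hidden-layer network on XOR (the layer sizes, seed, and learning rate are illustrative choices, not prescriptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: the classic task a single perceptron cannot solve,
# but a one-hidden-layer network trained with backprop can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

eta = 1.0
for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule applied layer by layer
    # (squared-error loss; sigmoid derivative is s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_out;  b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_h;    b1 -= eta * d_h.sum(axis=0)

print(out.round(2))  # typically approaches [0, 1, 1, 0]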
Radial Basis Function (RBF) Networks
- Pattern separability and interpolation methods.
- Foundations of Regularization Theory.
- Applying regularization to RBF networks.
- Design and training processes for RBF networks (see the sketch below).
- Approximation capabilities of RBF networks.
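A compact sketch of one common RBF design: fix Gaussian centres, then solve a lightly regularised least-squares problem for the output weights, which echoes the regularisation theory above. All data and hyperparameters here are illustrative:

```python
import numpy as np

def rbf_design(X, centres, gamma):
    # Gaussian radial basis features: phi_ij = exp(-gamma * ||x_i - c_j||^2).
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Fit sin(x) on [0, 2*pi] with 10 fixed centres and a small ridge term.
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(60, 1))
y = np.sin(X).ravel()
centres = np.linspace(0, 2 * np.pi, 10)[:, None]

Phi = rbf_design(X, centres, gamma=2.0)
lam = 1e-6  # regularisation strength (illustrative)
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(10), Phi.T @ y)

X_test = np.linspace(0, 2 * np.pi, 5)[:, None]
print(rbf_design(X_test, centres, 2.0) @ w)  # close to sin at the test points
print(np.sin(X_test).ravel())
```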
Competitive Learning and Self-Organizing ANNs
- General clustering methodologies.
- Learning Vector Quantization (LVQ).
- Architectures and algorithms for competitive learning.
- Self-organizing feature maps (a sketch follows this list).
- Characteristics and properties of feature maps.
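Here is a minimal one-dimensional self-organising map sketch (the grid size, decay schedules, and data are illustrative assumptions):

```python
import numpy as np

def train_som(data, n_units=10, epochs=200, eta0=0.5, sigma0=3.0):
    # 1-D self-organising map: the winning unit and its grid neighbours
    # are pulled toward each input (competitive learning plus a
    # neighbourhood function that shrinks over time).
    rng = np.random.default_rng(0)
    weights = rng.uniform(data.min(), data.max(), size=(n_units, data.shape[1]))
    grid = np.arange(n_units)
    for t in range(epochs):
        eta = eta0 * (1 - t / epochs)
        sigma = max(sigma0 * (1 - t / epochs), 0.5)
        for x in rng.permutation(data):
            winner = np.argmin(((weights - x) ** 2).sum(axis=1))
            h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))
            weights += eta * h[:, None] * (x - weights)
    return weights

# Map a 1-D feature onto 10 ordered units.
data = np.random.default_rng(1).uniform(0, 1, size=(100, 1))
print(train_som(data).ravel())
```

After training, neighbouring units hold neighbouring weight values, which is the topology-preserving property discussed above.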
Fuzzy Neural Networks
- Neuro-fuzzy system integration.
- Theoretical background on fuzzy sets and logic (see the sketch after this section).
- Designing fuzzy systems.
- Designing fuzzy Artificial Neural Networks.
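To ground the fuzzy-set material, here is a small sketch of triangular membership functions and the standard min/max connectives (the temperature partition and its breakpoints are hypothetical):

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function: rises from a to a peak at b,
    # falls back to zero at c.
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Fuzzy partition of a temperature axis (hypothetical breakpoints).
x = 22.0
cold = tri(x, 0, 10, 20)
warm = tri(x, 10, 20, 30)
hot  = tri(x, 20, 30, 40)
print(cold, warm, hot)  # degrees of membership, not a hard class label

# Classic fuzzy connectives: AND as min, OR as max, NOT as complement.
print(min(warm, hot), max(warm, hot), 1 - warm)
```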
Applications
- Discussion of selected Neural Network application examples, highlighting their advantages and associated challenges.
DAY 2 - MACHINE LEARNING
- The PAC Learning Framework
- Guarantees for finite hypothesis sets: consistent scenarios
- Guarantees for finite hypothesis sets: inconsistent scenarios
- General Principles
- Deterministic vs. Stochastic environments
- Bayes error and noise
- Errors in estimation and approximation
- Model selection strategies
- Rademacher Complexity and VC Dimension
- The Bias-Variance Tradeoff
- Regularization Techniques
- Addressing Overfitting
- Validation Methods
- Support Vector Machines (a worked sketch follows this list)
- Kriging (Gaussian Process Regression)
- PCA and Kernel PCA
- Self-Organizing Maps (SOM)
- Kernel-induced Vector Spaces
- Mercer Kernels and Kernel-induced similarity metrics
- Reinforcement Learning
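As one worked example from this list, here is a kernel SVM on data that is not linearly separable in input space. The sketch assumes scikit-learn is available; the dataset and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# Two interleaved half-moons: not linearly separable in input space,
# but an RBF (Mercer) kernel induces a feature space where they are.
X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma=2.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
print("support vectors per class:", clf.n_support_)  # sparsity of the solution
```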
DAY 3 - DEEP LEARNING
This module builds upon the concepts covered on Days 1 and 2.
- Logistic and Softmax Regression (see the sketch after this list)
- Sparse Autoencoders
- Vectorization, PCA, and Whitening
- Self-Taught Learning
- Deep Network Architectures
- Linear Decoders
- Convolution and Pooling Layers
- Sparse Coding
- Independent Component Analysis
- Canonical Correlation Analysis
- Demonstrations and Real-world Applications
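To connect back to the first item in this list, here is a from-scratch softmax-regression sketch trained by batch gradient descent on synthetic blobs (the data, step size, and iteration count are illustrative):

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Three Gaussian blobs, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(50, 2)) for m in ((0, 0), (4, 0), (2, 3))])
y = np.repeat(np.arange(3), 50)
Y = np.eye(3)[y]                      # one-hot targets

W = np.zeros((2, 3)); b = np.zeros(3)
for _ in range(500):                  # batch gradient descent on cross-entropy
    P = softmax(X @ W + b)
    G = P - Y                         # gradient of cross-entropy w.r.t. the logits
    W -= 0.1 * X.T @ G / len(X)
    b -= 0.1 * G.mean(axis=0)

print("train accuracy:", (softmax(X @ W + b).argmax(axis=1) == y).mean())
```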
Requirements
- A solid grasp of mathematical principles.
- A strong understanding of fundamental statistics.
- Basic programming proficiency (recommended but not mandatory).
Testimonials
Working from first principles in a focused way, then moving on to applied case studies within the same day
Maggie Webb - Department of Jobs, Precincts and Regions
Course - Artificial Neural Networks, Machine Learning, Deep Thinking
It was very interactive and more relaxed and informal than expected. We covered lots of topics in the time and the trainer was always receptive to talking more in detail or more generally about the topics and how they were related. I feel the training has given me the tools to continue learning as opposed to it being a one off session where learning stops once you've finished which is very important given the scale and complexity of the topic.