Top 20 AI Tutorials & Guides
The following table summarizes the top 20 machine learning (ML) tutorials and guides, all published on the RBC Borealis research blog:
| # | Tutorial Topic | Summary | Link Source |
|---|---|---|---|
| 1 | Bias and Fairness in AI | Discusses how bias is introduced into the machine learning pipeline, what constitutes a fair decision, and methods to remove bias and ensure fairness. | Source: rbcborealis.com |
| 2 | Few-Shot Learning and Meta-Learning I | Describes few-shot and meta-learning problems, introduces a classification of methods, and discusses methods that use training tasks to learn prior knowledge about class similarity/dissimilarity. | Source: rbcborealis.com |
| 3 | Few-Shot Learning and Meta-Learning II | Discusses methods that incorporate prior knowledge about how to learn models and about the data itself, including "learning to initialize," "learning to optimize," and "sequence methods." | Source: rbcborealis.com |
| 4 | Auxiliary Tasks in Deep Reinforcement Learning | Focuses on using auxiliary tasks to improve the speed of learning in deep reinforcement learning (RL) by generating a more consistent learning signal for a shared representation. | Source: rbcborealis.com |
| 5 | Variational Autoencoders | Discusses latent variable models, the non-linear latent variable model, maximizing the lower bound on likelihood using the autoencoder architecture, and the reparameterization trick (a minimal sketch of the trick follows this table). | Source: rbcborealis.com |
| 6 | Neural Natural Language Generation – Decoding Algorithms | Covers neural natural language generation (NNLG): producing coherent, intelligible text with neural networks, assuming the text is conditioned on an input (e.g., a dialogue response or a summarization); a top-k sampling sketch follows this table. | Source: rbcborealis.com |
| 7 | Neural Natural Language Generation – Sequence Level Training | Considers alternative training approaches that compare the complete generated sequence to the ground truth at the sequence level, including fine-tuning with reinforcement learning and minimum risk training. | Source: rbcborealis.com |
| 8 | Bayesian Optimization | Dives into Bayesian optimization, its key components, applications, and the core idea of building a model of the entire function being optimized (including its uncertainty) to choose the next sampling point; see the sketch after this table. | Source: rbcborealis.com |
| 9 | SAT Solvers I: Introduction and Applications | Concerns the Boolean satisfiability (SAT) problem, aiming to establish whether binary variables connected by logical relations can be set so the formula evaluates to true. | Source: rbcborealis.com |
| 10 | SAT Solvers II: Algorithms | Focuses exclusively on SAT solver algorithms, introducing two ways to manipulate Boolean logic formulae and concluding with conflict-driven clause learning (a toy DPLL sketch follows this table). | Source: rbcborealis.com |
| 11 | SAT Solvers III: Factor Graphs and SMT Solvers | Divided into two sections: solving satisfiability problems based on factor graphs, and methods that apply SAT machinery to problems with continuous variables. | Source: rbcborealis.com |
| 12 | Differential Privacy I: Introduction | Discusses definitions of privacy in data analysis and covers the basics of differential privacy; a Laplace-mechanism sketch follows this table. | Source: rbcborealis.com |
| 13 | Differential Privacy II: Machine Learning and Data Generation | Presents recent methods for making machine learning differentially private and discusses differentially private methods for generative modeling. | Source: rbcborealis.com |
| 14 | Transformers I: Introduction | Introduces self-attention, the core mechanism of the transformer architecture (a sketch follows this table), and describes how transformers can be used as encoders, decoders, or encoder-decoders (e.g., BERT and GPT-3). | Source: rbcborealis.com |
| 15 | Parsing I: Context-Free Grammars and the CYK Algorithm | Reviews earlier work modeling grammatical structure and introduces the CYK algorithm, which finds the underlying syntactic structure of sentences (a recognizer sketch follows this table). | Source: rbcborealis.com |
| 16 | Transformers II: Extensions | Focuses on two families of modifications that address limitations of the basic transformer architecture, and draws connections between transformers and other models. | Source: rbcborealis.com |
| 17 | Transformers III: Training | Discusses the challenges with transformer training dynamics and introduces tricks practitioners use to get transformers and similar models to converge. | Source: rbcborealis.com |
| 18 | Parsing II: WCFGs, the Inside Algorithm, and Weighted Parsing | Introduces weighted context-free grammars (WCFGs) and presents two variations of the CYK algorithm: the inside algorithm and the weighted parsing algorithm. | Source: rbcborealis.com |
| 19 | Parsing III: PCFGs and the Inside-Outside Algorithm | Covers probabilistic context-free grammars (PCFGs) and describes algorithms to learn their parameters for both supervised and unsupervised cases, leading to the inside-outside algorithm. | Source: rbcborealis.com |
| 20 | Understanding XLNet | Provides an overview of XLNet, an auto-regressive language model that combines the transformer architecture with recurrence for bidirectional context learning. | Source: rbcborealis.com |
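
A few of the core ideas above can be illustrated with short, self-contained Python sketches. These are minimal illustrations written for this summary (names and example values are invented), not code taken from the tutorials.

For tutorial 5, the reparameterization trick rewrites sampling z ~ N(mu, sigma²) as z = mu + sigma · eps with eps ~ N(0, I), so the sample becomes a differentiable function of mu and sigma. A minimal sketch, assuming the encoder outputs a mean and log-variance:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Isolating the randomness in eps makes the sample a
    deterministic, differentiable function of mu and log_var.
    """
    eps = rng.standard_normal(mu.shape)
    sigma = np.exp(0.5 * log_var)   # log-variance -> standard deviation
    return mu + sigma * eps

rng = np.random.default_rng(0)
mu = np.zeros(4)         # hypothetical encoder output
log_var = np.zeros(4)    # hypothetical encoder output
print(reparameterize(mu, log_var, rng))
```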
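
For tutorial 6, one of the simplest decoding strategies is top-k sampling: restrict the next-token distribution to the k most likely candidates and renormalize. A sketch over a made-up logit vector (the tutorial surveys a broader family of decoding algorithms):

```python
import numpy as np

def top_k_sample(logits, k, rng):
    """Sample a token id from the k highest-scoring candidates."""
    top = np.argsort(logits)[-k:]               # indices of the k largest logits
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                        # softmax over the top k only
    return rng.choice(top, p=probs)

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5, -1.0, 3.0])   # hypothetical model output
print(top_k_sample(logits, k=3, rng=rng))
```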
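
For tutorial 8, the core loop of Bayesian optimization is: fit a probabilistic model of the objective, then pick the next sample point by maximizing an acquisition function. A sketch using a Gaussian-process posterior and expected improvement on a 1-D grid (the kernel, objective, and hyperparameters are all invented for illustration):

```python
import numpy as np
from scipy.stats import norm

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between 1-D point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-6):
    """GP posterior mean and standard deviation on x_grid."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_grid)
    sol = np.linalg.solve(K, np.column_stack([y_obs, Ks]))
    mean = Ks.T @ sol[:, 0]
    cov = rbf(x_grid, x_grid) - Ks.T @ sol[:, 1:]
    return mean, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def expected_improvement(mean, std, best):
    """Expected improvement over the best observed value (maximization)."""
    z = (mean - best) / std
    return (mean - best) * norm.cdf(z) + std * norm.pdf(z)

f = lambda x: np.sin(3 * x) * x          # hypothetical black-box objective
x_grid = np.linspace(0.0, 2.0, 200)
x_obs = np.array([0.3, 1.5])             # initial design points
y_obs = f(x_obs)
for _ in range(5):
    mean, std = gp_posterior(x_obs, y_obs, x_grid)
    x_next = x_grid[np.argmax(expected_improvement(mean, std, y_obs.max()))]
    x_obs, y_obs = np.append(x_obs, x_next), np.append(y_obs, f(x_next))
print("best x:", x_obs[np.argmax(y_obs)], "best f:", y_obs.max())
```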
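
For tutorial 10, modern solvers rely on conflict-driven clause learning, but the underlying search is easiest to see in its simpler ancestor, DPLL: simplify the formula under the current assignment, propagate unit clauses, and branch. A toy sketch over clauses encoded as sets of signed integers (the encoding is my own, not the tutorial's):

```python
def dpll(clauses, assignment=frozenset()):
    """Return a satisfying set of literals, or None if unsatisfiable.

    Clauses are sets of integer literals; -x means "x is false".
    """
    simplified = []
    for clause in clauses:
        if clause & assignment:
            continue                                  # clause already satisfied
        reduced = {l for l in clause if -l not in assignment}
        if not reduced:
            return None                               # empty clause: conflict
        simplified.append(reduced)
    if not simplified:
        return assignment                             # every clause satisfied
    for clause in simplified:                         # unit propagation
        if len(clause) == 1:
            (lit,) = clause
            return dpll(simplified, assignment | {lit})
    lit = next(iter(simplified[0]))                   # branch on a literal
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([{1, 2}, {-1, 3}, {-2, -3}]))
```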
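
For tutorials 12–13, the canonical building block of differential privacy is the Laplace mechanism: perturb a query answer with noise calibrated to the query's sensitivity and the privacy budget ε. A sketch with an invented counting query:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace(sensitivity / epsilon) noise.

    Gives epsilon-differential privacy for a query whose answer changes
    by at most `sensitivity` when one individual's record changes.
    """
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(0)
ages = np.array([34, 29, 41, 52, 38])    # hypothetical dataset
# A counting query has sensitivity 1: one person changes the count by at most 1.
print(laplace_mechanism(np.sum(ages > 30), sensitivity=1.0,
                        epsilon=0.5, rng=rng))
```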
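
For tutorial 14, self-attention computes, for every position, a weighted average of value vectors, with weights given by scaled query–key dot products. A single-head sketch with random toy weights (the dimensions are arbitrary):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model). Each output row mixes all value vectors,
    weighted by the similarity of its query to every key.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len)
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 8, 4
X = rng.standard_normal((seq_len, d_model))              # toy embeddings
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)               # (5, 4)
```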
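
For tutorial 15, the CYK algorithm fills a triangular table bottom-up: a nonterminal covers a span if some binary rule combines nonterminals covering the two halves of a split. A recognizer sketch for a tiny hypothetical grammar in Chomsky normal form:

```python
from itertools import product

def cyk(words, lexical, binary, start="S"):
    """CYK recognition for a grammar in Chomsky normal form.

    lexical: {terminal: set of nonterminals}; binary: {(B, C): set of A}
    for rules A -> B C. Returns True iff `start` derives the sentence.
    """
    n = len(words)
    # table[i][j] holds the nonterminals that derive words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexical.get(w, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                    # split point
                for B, C in product(table[i][k], table[k + 1][j]):
                    table[i][j] |= binary.get((B, C), set())
    return start in table[0][n - 1]

lexical = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
binary = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cyk("she eats fish".split(), lexical, binary))  # True
```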