ML 03: “Learning” in Machine Learning (ML) – Part II

As stated in the previous episode on "learning" in ML, machine learning can broadly be categorized into four (4) learning types. However, there are other learning styles and techniques applied in ML to build robust models. Note that these other forms of learning are applied based on the task at hand.

Other forms of learning

  • Hybrid Learning
  • Statistical Inference

Hybrid Learning

Semi-Supervised Learning

  • Uses a combination of labeled and unlabeled data
  • Use case: a small labeled dataset together with a large unlabeled dataset
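
The idea can be sketched with scikit-learn's `SelfTrainingClassifier`, which wraps an ordinary classifier and iteratively pseudo-labels the unlabeled points (marked with `-1`) from its own confident predictions. A minimal sketch; the dataset and parameters are made up for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Pretend only ~10% of the labels are known; hide the rest with -1.
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) > 0.1] = -1

# The base classifier is self-trained on its confident pseudo-labels.
model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y_partial)
accuracy = model.score(X, y)  # evaluated against the full ground truth
```

Despite seeing only a small fraction of true labels, the self-trained model recovers most of the decision boundary.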

Self-Supervised Learning

  • Does not rely on expert-annotated datasets to learn data representations
  • Example: removing parts of an image and having the model predict the missing parts (e.g., face inpainting)
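
The "predict the missing part" pretext task can be illustrated in miniature, assuming NumPy and scikit-learn: hide half of each feature vector (standing in for masked image regions) and train a model to reconstruct the hidden half from the visible half, with no human labels involved. All data here is synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 16))                    # 500 samples, 16 "pixels"
data[:, 8:] = data[:, :8] @ rng.normal(size=(8, 8))  # make halves correlated

# The mask itself defines the supervision signal: visible -> masked.
visible, masked = data[:, :8], data[:, 8:]
model = LinearRegression().fit(visible, masked)      # predict the missing part
reconstruction_error = np.mean((model.predict(visible) - masked) ** 2)
```

Because the labels come from the data itself, arbitrarily large unlabeled corpora can be used for this kind of pre-training.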

Multi-Instance Learning

  • A variation of supervised learning
  • Uses sets of labeled "bags" or groups, each of which contains many instances
  • Example: predicting the target class of an image based on its visual content.
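
One simple multi-instance approach is to pool each bag's instances into a single bag-level feature vector and train an ordinary classifier on bags. A minimal sketch, assuming NumPy and scikit-learn, with synthetic bags of varying size:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
bags, labels = [], []
for label in (0, 1):
    for _ in range(50):
        n = rng.integers(3, 8)                       # instances per bag vary
        bags.append(rng.normal(loc=label * 2.0, size=(n, 4)))
        labels.append(label)                         # label applies to the bag

# Mean-pool the instances so each bag becomes one feature vector.
bag_features = np.array([bag.mean(axis=0) for bag in bags])
clf = LogisticRegression().fit(bag_features, labels)
accuracy = clf.score(bag_features, labels)
```

Only the bag carries a label; the individual instances inside it are never labeled, which is what distinguishes this from plain supervised learning.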

Statistical Inference

Inductive Learning

  • Learns "general rules" from a set of specific examples
  • Goal: learn the function f(x) so it can be applied to new data x.
  • Example: credit risk assessment, where x contains the properties of a customer and f(x) is whether credit is approved or not.
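
The credit example can be sketched with scikit-learn: a general rule f is induced from specific labeled customers, then applied to a new, unseen customer. The feature values and thresholds below are made up for illustration:

```python
from sklearn.tree import DecisionTreeClassifier

# Specific examples: [income_in_thousands, debt_ratio]; 1 = credit approved.
X_train = [[60, 0.1], [80, 0.2], [25, 0.6], [30, 0.5], [70, 0.3], [20, 0.7]]
y_train = [1, 1, 0, 0, 1, 0]

# Induce a general rule f from the specific examples.
f = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

new_customer = [[65, 0.15]]          # unseen x
approved = f.predict(new_customer)[0]  # f(x): approve or not
```

The learned tree generalizes from the six training customers to any new applicant with the same features.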

Deductive Learning

  • Learns from general rules to determine specific outcomes.
  • It can also be seen as the opposite of inductive learning.
  • Example: A Deductive Decision Tree
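
Deduction runs in the opposite direction: start from general rules and derive the outcome for a specific case. A toy sketch in plain Python, with hand-written (illustrative, made-up) credit rules standing in for the general knowledge:

```python
def approve_credit(income: float, debt_ratio: float) -> bool:
    """Apply general rules to deduce a specific outcome."""
    # General rule 1: sufficient income is required.
    if income < 30_000:
        return False
    # General rule 2: heavy existing debt disqualifies the applicant.
    if debt_ratio > 0.4:
        return False
    return True

# Deduce the outcome for one specific applicant.
decision = approve_credit(income=45_000, debt_ratio=0.2)
```

Nothing is learned from data here; the rules are given up front, and only their consequences for specific inputs are derived.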

Transductive Learning

  • The model predicts specific examples directly from other specific examples in the domain, rather than first learning a general rule
  • Example: K-Nearest Neighbors (KNN)
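
KNN illustrates the transductive flavour well: the "model" is essentially the stored training examples themselves, and each prediction is reasoned directly from the specific neighbouring points. A minimal scikit-learn sketch with made-up 1-D data:

```python
from sklearn.neighbors import KNeighborsClassifier

# Two well-separated clusters of specific examples.
X = [[0.0], [0.5], [1.0], [5.0], [5.5], [6.0]]
y = [0, 0, 0, 1, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# The prediction is decided by the 3 closest stored examples,
# not by an explicit general rule.
prediction = knn.predict([[0.8]])[0]
```

Contrast this with the decision tree above, which compiles the examples into an explicit general rule before ever seeing a query.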

Some Learning Techniques

  • Active learning
  • Multi-task learning
  • Ensemble learning
  • Transfer learning
  • Meta learning

Active learning

Learns by interactively querying a user (an "oracle") to label data with the desired outputs.
Similar to semi-supervised learning.
Best used when:

  • The labeled dataset is very small, or the pool of unlabeled data is very large.
  • Annotation of the unlabeled dataset is very costly.
  • Processing power is limited.
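
A common concrete strategy is uncertainty sampling: the learner repeatedly asks the oracle to label only the unlabeled points it is least sure about. A minimal sketch, assuming scikit-learn, where the hidden ground-truth labels stand in for the oracle:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=1)

# Start with 5 labeled examples per class; the rest form the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(300) if i not in labeled]

model = LogisticRegression()
for _ in range(5):                               # five query rounds
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    uncertainty = 1 - probs.max(axis=1)          # least-confident sampling
    query = [pool[i] for i in np.argsort(uncertainty)[-5:]]
    labeled += query                             # the "oracle" labels 5 points
    pool = [i for i in pool if i not in query]

accuracy = model.fit(X[labeled], y[labeled]).score(X, y)
```

After five rounds the model has asked for only 25 extra labels, chosen where they are most informative, instead of paying to annotate the whole pool.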

Multi-task learning

Enables multiple tasks to be learned simultaneously by a shared model, or by a single model with a split head/branch.
It improves data efficiency.
Reduces overfitting through shared representations.
Improves learning by leveraging auxiliary information.
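
The shared-model idea can be sketched with scikit-learn's `MLPRegressor`: one hidden layer is shared across two regression tasks, and the output layer has one unit per task (the "split head"). The two tasks below are made-up noisy functions of the same underlying signal:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
signal = X @ rng.normal(size=5)            # structure shared by both tasks

# Two related targets, stacked as columns: one output unit per task.
Y = np.column_stack([
    signal + 0.1 * rng.normal(size=400),       # task 1
    2 * signal + 0.1 * rng.normal(size=400),   # task 2
])

# A single network: shared hidden layer, two-headed output.
shared = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, Y)
r2 = shared.score(X, Y)                    # joint R^2 across both tasks
```

Because both heads backpropagate into the same hidden layer, each task's data helps shape the representation the other task uses.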

Ensemble learning

Combines the decisions of multiple models into an ensemble, which typically performs better than any single model on its own.
Some ensemble learning techniques include:

  • Bagging
  • Max voting
  • Boosting
  • Averaging
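
Max voting, for instance, can be sketched with scikit-learn's `VotingClassifier`: three different base models each cast a vote, and the majority class wins. The dataset is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Three diverse base models; "hard" voting takes the majority class
# ("soft" would average predicted probabilities instead).
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("tree", DecisionTreeClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="hard",
).fit(X, y)
accuracy = ensemble.score(X, y)
```

The ensemble is wrong on a sample only when a majority of its base models are wrong together, which is why diverse base models help.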

Transfer learning

Applies knowledge gained by a pre-trained model from one task to a different but related task.
Example:
A simple classifier trained to predict whether an image contains a dog could reuse its training knowledge to help identify another object, such as a cat, in other images.
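
One common recipe, feature extraction, can be sketched with scikit-learn: a small network is trained on a source task (digits 0-4), then its learned hidden layer is reused as a fixed feature extractor for a different but related target task (digits 5-9). The architecture and split are illustrative choices:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
source, target = y < 5, y >= 5              # two related tasks

# "Pre-train" on the source task only.
pretrained = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=0).fit(X[source], y[source])

def extract(features):
    # Reuse the pre-trained hidden layer (ReLU, matching MLPClassifier's
    # default activation) as a fixed feature extractor.
    return np.maximum(0, features @ pretrained.coefs_[0]
                      + pretrained.intercepts_[0])

# Train only a light classifier head on the target task's extracted features.
head = LogisticRegression(max_iter=1000).fit(extract(X[target]), y[target])
accuracy = head.score(extract(X[target]), y[target])
```

The target task never retrains the transferred layer; it only fits a small head on top, which is the cheapest form of transfer.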

Meta learning

Also known as "learning to learn".
The goal is to design models that can learn new concepts and skills quickly from only a few training examples.

Each task is associated with its own dataset.
This looks very similar to a usual learning task, except that each whole dataset is treated as one data sample.
