
I would like to share some of my experience with AI projects.


Questions:

How to train model to add new classes?

How to add a new class to an existing classifier in deep learning?

Adding new Class to One Shot Learning trained model

Is it possible to train a neural network as new classes are given?

Merging several models into a single detection system for all these tasks.


Answer 1:

There are several ways to add new classes to a trained model that require training only on the new classes:

  • Incremental training (GitHub)

  • continuously learn a stream of data (GitHub)

  • online machine learning (GitHub)

  • Transfer Learning Twice

  • Continual learning approaches (Regularization, Expansion, Rehearsal) (GitHub)

Answer 2:

Online learning is a term used to refer to a model which takes a continual or sequential stream of input data while training, in contrast to offline learning (also called batch learning), where the model is pre-trained on a static predefined dataset.

Continual learning (also called incremental, continuous, lifelong learning) refers to a branch of ML working in an online learning context where models are designed to learn new tasks while maintaining performance on historic tasks. It can be applied to multiple problem paradigms (including Class-incremental learning, where each new task presents new class labels for an ever expanding super-classification problem).

Do I need to train my whole model again on all four classes or is there any way I can just train my model on new class?

Naively re-training the model on the updated dataset is indeed a solution. Continual learning seeks to address contexts where access to historic data (i.e. the original 3 classes) is not possible, or where retraining on an increasingly large dataset is impractical (for efficiency, storage, privacy, and other concerns). Multiple such models using different underlying architectures have been proposed, but almost all examples deal exclusively with image classification problems.

Answer 3:

You could use transfer learning to achieve that: take the pre-trained model, change its last layer to accommodate the new classes, and re-train this slightly modified model, maybe with a lower learning rate. However, transfer learning does not necessarily attempt to retain any of the previously acquired information (especially if you don't use very small learning rates, you keep on training, and you do not freeze the weights of the convolutional layers); its goal is rather to speed up training, or to cope with a new dataset that is not big enough, by starting from a model that has already learned general features that are supposedly similar to the features needed for your specific task. There is also the related domain adaptation problem.

There are more suitable approaches to perform incremental class learning (which is what you are asking for!), which directly address the catastrophic forgetting problem. For instance, you can take a look at the paper Class-incremental Learning via Deep Model Consolidation, which proposes the Deep Model Consolidation (DMC) approach. There are other continual/incremental learning approaches, many of which are described here or in more detail here.
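
As a concrete illustration of the transfer-learning route, here is a minimal PyTorch sketch (assuming a recent torchvision, a hypothetical ResNet-18 backbone, and 3 original classes plus 1 new one): replace the final layer, optionally freeze the backbone, and fine-tune with a small learning rate.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical class counts: 3 original classes plus 1 new class.
num_old, num_new = 3, 1

# Pre-trained backbone (torchvision >= 0.13 weights API assumed).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Optionally freeze the convolutional backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with one sized for all classes.
model.fc = nn.Linear(model.fc.in_features, num_old + num_new)

# Fine-tune with a small learning rate; only the new head has requires_grad=True.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```

Whether to freeze the backbone is a trade-off: freezing preserves the previously learned features but limits adaptation, while unfreezing with a very small learning rate adapts more at the risk of forgetting.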

Answer 4:

Use continual learning approaches to train on the new classes without losing the original ones. They fall into three categories:

Regularization

Expansion

Rehearsal
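
As a rough sketch of the rehearsal category, the snippet below keeps a small replay buffer of old-class examples and mixes it into the new-class training data; the datasets here are random stand-ins and the buffer size is arbitrary.

```python
import random
import torch
from torch.utils.data import ConcatDataset, DataLoader, Subset, TensorDataset

# Stand-in datasets (random tensors) just to make the sketch runnable:
# `old_dataset` plays the role of the original classes, `new_dataset` of the new class.
old_dataset = TensorDataset(torch.randn(2000, 3, 32, 32), torch.randint(0, 3, (2000,)))
new_dataset = TensorDataset(torch.randn(600, 3, 32, 32), torch.full((600,), 3))

buffer_size = 500  # how many old examples to keep as a replay memory

# Keep a small random "memory" of old-class samples (the rehearsal buffer).
memory = Subset(old_dataset, random.sample(range(len(old_dataset)), buffer_size))

# Each training batch then mixes new-class data with rehearsed old-class data,
# so the model keeps seeing the original classes while learning the new one.
rehearsal_loader = DataLoader(ConcatDataset([new_dataset, memory]),
                              batch_size=32, shuffle=True)
```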

Answer 5:

If you have access to the dataset, you can download it and add all your new classes, so that you end up with 'N' COCO classes + 'M' new classes.

After that you can fine-tune the model on the new dataset. You do not need the whole dataset; roughly the same number of images per class is enough.
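
A small sketch of that per-class balancing step, assuming a hypothetical list of (image, label) pairs; it simply samples the same number of images from every class before fine-tuning.

```python
import random
from collections import defaultdict

# Hypothetical list of (image_path, class_name) pairs covering
# the N COCO classes plus the M new classes.
samples = [("img_000.jpg", "person"), ("img_001.jpg", "my_new_class")]  # ...

per_class = defaultdict(list)
for path, label in samples:
    per_class[label].append(path)

# Use the same number of images for every class (capped by the rarest class).
images_per_class = min(len(paths) for paths in per_class.values())

balanced = []
for label, paths in per_class.items():
    balanced.extend((p, label) for p in random.sample(paths, images_per_class))
```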




https://learnopencv.com/stanford-mrnet-challenge-classifying-knee-mris/


Before starting your machine learning project, ask these questions as part of the preparation: What is your inference hardware? What is the use case? What is the model interface? How will we monitor performance after deployment? How can we approximate post-deployment monitoring before deployment? How will you build a model and iteratively improve it? How will you deploy the model at the end? How will you monitor performance after deployment? What is your metric? How do you split your data (training and validation)?

Preparation ML Project Workflow

  • What is your inference hardware?

  • specify the use case

  • specify model interface

  • how would we monitor performance after deployment?

  • how can we approximate post-deployment monitoring before deployment?

  • build a model and iteratively improve it

  • deploy the model

  • monitor performance

    • What is your metric?

    • How do you split your data?
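
For the last question, a common starting point is a stratified train/validation split; the sketch below uses scikit-learn on hypothetical arrays.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix and labels.
X = np.random.rand(1000, 32)
y = np.random.randint(0, 4, size=1000)

# 80/20 split, stratified so every class keeps the same proportion in both sets.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```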

Before Training deep learning model

  • use a large model for training because

    • it trains faster, converges faster, and overfits less

    • it is easier to compress more aggressively in the final stage

      • model compression and acceleration: reducing parameters without significantly decreasing model performance

  • Data: how to build and enhance a good dataset for your deep learning project: use the same configuration and data for training and inference, remove redundant data you don't need, get more data, handle missing data, use data augmentation techniques or GANs to generate more data, re-scale/balance the data, transform your data (change data types), and select features based on the dataset and use case

      • The data you don't need: removing redundant samples

      • get more data

      • Invent more data

        • data augmentation (see the sketch after this list)

      • Re-scale data

        • balance datasets

      • Transform your data

      • Feature selection based on dataset and use case

      • ML-augmented video object tracking: applying and evaluating multiple algorithmic models improves the ability to scale object tracking in high-density video scenes
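
A minimal data-augmentation sketch, assuming torchvision; the particular transforms and parameters are illustrative choices, and the validation pipeline deliberately uses the same resizing/normalization but no random augmentation (same config for training and inference).

```python
from torchvision import transforms

# Illustrative augmentation pipeline for image classification training.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),        # random scale/crop
    transforms.RandomHorizontalFlip(),        # mirror images half the time
    transforms.ColorJitter(0.2, 0.2, 0.2),    # small brightness/contrast/saturation changes
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Validation/inference: same resizing and normalization, no random augmentation.
val_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```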

Training deep learning model

  • automated hyper-parameter tuning (see the sketch after this list)

    • Using Hyperparameter tuning / Hyperparameter optimization tools

    • AutoML

    • genetic algorithm

    • population based training

    • bayesian optimization

  • You need to set several parameters and configuration options for training:

      • Diagnostics

      • Weight Initialization

      • Learning rate

      • Activation function

      • Network Topology

      • Batches and Epochs

      • Regularization

      • Optimization and Loss

      • Early Stopping
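
As a sketch of automated hyper-parameter tuning, here is a plain random search over learning rate and batch size; train_and_evaluate is a hypothetical stand-in for your training loop, and in practice you would likely use a library such as Optuna or Ray Tune.

```python
import math
import random

def train_and_evaluate(lr: float, batch_size: int) -> float:
    """Hypothetical stand-in: train a model with these settings and return validation accuracy."""
    # Replace with a real training run; here we just fake a score to keep the sketch runnable.
    return random.random()

best_score, best_config = -math.inf, None
for _ in range(20):  # 20 random trials
    config = {
        "lr": 10 ** random.uniform(-5, -2),          # log-uniform learning rate
        "batch_size": random.choice([16, 32, 64, 128]),
    }
    score = train_and_evaluate(**config)
    if score > best_score:
        best_score, best_config = score, config

print("best config:", best_config, "val accuracy:", best_score)
```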

Continuous delivery

  • evolve with latest detection models

  • more data (no labels)

    • semi-supervised learning: big self-supervised models are strong semi-supervised learners

After Training deep learning model

  • Parameter pruning

    • model pruning: reducing redundant parameters which are not sensitive to the performance.

      • aim: remove all connections with absolute weights below a threshold (see the sketch after this list)

  • Quantization

    • compresses by reducing the number of bits used to represent the weights

    • quantization effectively constraints the number of different weights we can use inside our kernels

    • per-channel quantization of weights improves performance through model compression and latency reduction

  • Low rank matrix factorization (LRMF)

    • there exist latent structures in the data; by uncovering them we can obtain a compressed representation of the data

    • LRMF factorizes the original matrix into lower rank matrices while preserving latent structures and addressing the issue of sparseness

  • Compact convolutional filters (Video/CNN)

    • designing special structural convolutional filters to save parameters

    • replace over-parameterized filters with compact filters to achieve an overall speedup while maintaining comparable accuracy

  • Knowledge distillation

    • training a compact neural network with distilled knowledge of a large model

    • distillation (knowledge transfer) from an ensemble of big networks into a much smaller network that learns directly from the cumbersome model's outputs and is lighter to deploy

  • Binarized Neural Networks (BNNs)

  • Apache TVM (incubating) is a compiler stack for deep learning systems

  • Neural Networks Compression Framework (NNCF)
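
A minimal magnitude-pruning sketch in PyTorch, zeroing out all weights whose absolute value is below a threshold (the simplest form of the pruning idea above); the toy model and threshold are arbitrary, and a real setup would more likely use torch.nn.utils.prune with masks, schedules, and structured pruning.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, threshold: float = 1e-2) -> None:
    """Zero out all weights whose absolute value is below `threshold`."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                mask = module.weight.abs() >= threshold
                module.weight.mul_(mask)  # pruned connections become exactly 0

# Example on a toy model (stand-in for a trained network).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
magnitude_prune(model, threshold=0.05)

sparsity = sum((p == 0).sum().item() for p in model.parameters()) / \
           sum(p.numel() for p in model.parameters())
print(f"overall sparsity: {sparsity:.1%}")
```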

Deep learning model in production

  • security: controls access to model(s) through secure packaging and execution

  • Test

  • auto training

  • use parallel processing and libraries such as GStreamer

Technology

Docker

AWS

Flask

Django

My Keynote (February 2021)

  1. introduction

  2. Machine Learning/ Deep Learning

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.

  1. supervised Machine Learning

    1. Deep Convolutional Neural Networks (DCNN) Architecture

    2. Visualizing and Understanding Convolutional Networks

    3. Object Detection by Deep Learning

    4. Video Tracking

    5. Style Transfer

  2. semi-supervised Machine Learning/ Deep Reinforcement learning (DRL)

    1. Google

    2. Deep Reinforcement learning (DRL)

  3. unsupervised Machine Learning

    1. Auto Encoder

  4. Generative Adversarial Networks (GANs)

  5. Tools

  6. Pre trained model

  7. Effect of Augmented Datasets to Train DCNNs

  8. Training for more classes

  9. Optimization

  10. Hardware

  11. Production setup

  12. post development

  13. business, Gartner Hype Cycle for Emerging Technologies, 2025

Advanced and practical

  1. Inside CNN

    1. Deep Convolutional Neural Networks Architecture

    2. Convolution

    3. Convolution Layer

    4. Conv/FC Filters

    5. Activation Functions

    6. Layer Activations

    7. Pooling Layer

    8. Dropout; L2 pooling

    9. Why

      1. Max-pooling is useful

      2. How to see inside each layer and find important features

  1. Hands on python for deep learning

  2. Fundamental deep learning

  3. Installation: TensorFlow, PyTorch

  4. Using PC+eGPU for training video tracking

Summary of the summit

Face

  • Effective and precise face detection based on color and depth data

    • https://www.sciencedirect.com/science/article/pii/S221083271400009X

      • containing or not containing a face

      • Eigenface, Fisherface, waveletface, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), Haar wavelet transform, and so on.

      • Viola–Jones detector

      • illumination changes and occlusion

      • depth information is used to filter the regions of the image where a candidate face region is found by the Viola–Jones (VJ) detector

      • the first filtering rule is defined on the color of the region; since some false positives have colors not compatible with a face (e.g. shadows on jeans), a skin detector is applied to remove the candidate face regions that do not contain skin pixels;

      • the second filtering rule is defined on the size of the face: using the depth map it is quite easy to calculate the size of the candidate face region, which is useful to discard the smallest and largest faces from the final result set;

      • the third filtering rule is defined on the depth map to discard flat objects (e.g. candidate faces found on a wall) or uneven objects (e.g. candidate faces found in the leaves of a tree). Combining color and depth data, the candidate face region can be extracted from the background, and measures of depth and regularity are used for filtering out false positives.

      • The size criterion simply removes the candidate faces not included in a fixed size range ([12.5, 30] cm). The size of a candidate face region is extracted from the depth map according to the following approach (see the sketch after this list).

      • (figure from the paper not reproduced here)

  • Gaussian mixture 3D morphable face model

  • Face Synthesis for Eyeglass-Robust Face Recognition

  • GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data

  • FacePoseNet: Making a Case for Landmark-Free Face Alignment

  • Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision

  • Unsupervised Eyeglasses Removal in the Wild

  • How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)

    • https://arxiv.org/pdf/1703.07332v3.pdf

    • (a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets.

    • (b) We create a guided-by-2D-landmarks network which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date (~230,000 images).

    • (c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W.

    • (d) We further look into the effect of all “traditional” factors affecting face alignment performance like large pose, initialization and resolution, and introduce a “new” one, namely the size of the network.

    • (e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used.

    • Training and testing code as well as the dataset can be downloaded from https://www.adrianbulat.com/face-alignment/
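
The paper's exact size computation is not reproduced here; as a rough stand-in, the sketch below estimates the physical width of a candidate face region from its pixel width and depth with a pinhole camera model and applies the [12.5, 30] cm range. The focal length and example values are assumptions.

```python
# Rough illustration of the size filtering rule: estimate the physical width of a
# candidate face region from its pixel width and its depth, using a pinhole camera
# model. Only the [12.5, 30] cm range comes from the paper; the focal length and
# the example values are hypothetical.

def face_width_cm(pixel_width: float, depth_cm: float, focal_length_px: float) -> float:
    """Physical width ~= pixel width * depth / focal length (pinhole camera model)."""
    return pixel_width * depth_cm / focal_length_px

def passes_size_filter(pixel_width: float, depth_cm: float,
                       focal_length_px: float = 580.0,  # typical depth-camera value, assumed
                       min_cm: float = 12.5, max_cm: float = 30.0) -> bool:
    width = face_width_cm(pixel_width, depth_cm, focal_length_px)
    return min_cm <= width <= max_cm

# Example: a 90-pixel-wide candidate at 120 cm from the camera.
print(passes_size_filter(pixel_width=90, depth_cm=120))  # True (about 18.6 cm wide)
```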





19.Sep.2021


Medium

https://fi.co/madlibs

https://orcid.org/0000-0001-8382-1389



Dreyer's English (a book on writing English)

#book story

Greek Mythology Explained: A Deeper Look at Classical Greek Lore and Myth

**Papers:**

CALTag: High Precision Fiducial Markers for Camera

Diatom Autofocusing in Brightfield Microscopy: a Comparative Study (implementation of a variation of the Laplacian)

Analysis of focus measure operators in shape-from-focus: why Laplacian? Blur detection? Iqaf?

Optical flow modeling and computation: A survey

Toward general type 2 fuzzy logic systems based on zSlices

--------------------------------------------------------------------

Lost in space

The OA

Film: https://en.wikipedia.org/wiki/Shark_Tank

TV series: Billions

TV series: Monk


Python async

Highly decoupled microservice

edX: RISC-V, self-driving cars

RISC-V Magazine

Road map


Game: over/under

https://www.sporcle.com/games/Hejman/underwhelmed





--------------------------------------------------------------------



GDPR in IoT

The EU General Data Protection Regulation (GDPR) and Face Images in IoT

The GDPR (General Data Protection Regulation), taking effect in May 2018, introduces strict requirements for personal data protection and the privacy rights of individuals. The EU regulation will set a new global standard for privacy rights and change the way organizations worldwide store and process personal data. The GDPR brings the importance of preserving the privacy of personal information to the forefront, yet the importance of face images within this context is often overlooked. The purpose of this paper is to introduce a solution that helps companies protect face images in IoT devices that record or process images with a camera, in order to strengthen compliance with the GDPR.

Our Face is our Identity

Our face is the most fundamental and highly visible element of our identity. People recognize us when they see our face or a photo of our face.

Recent years have seen exponential increase in the use, storage and dissemination of face images in both private and public sectors - in social networks, corporate databases, IoT, smart-city deployments, digital media, government applications, and nearly every organization’s databases.

---------------------

$(aws-okta env stage)

aws s3 cp s3://dataset/archive.tar.gz /Users/a.zip

aws s3 ls images | tail -n 100

aws s3 cp s3://staging-images/test.jpg /Users/test.jpg

---------------------

screen -rD

k get pods

Docker

RUN chmod +x /tmp/run.sh

You can run Docker interactively in a terminal and execute commands line by line:

docker run -it --rm debian:stable-slim bash

apt-get update

apt-get install -y

--------------------------------

brew install awscli aws-okta kubectx kubernetes-cli tfenv

touch ~/.aws/config

--------------------------------------------------------------------

docker image rm TETSTDFSAFDSADF

docker image ls

docker system prune

docker run -p 5000:5000 nameDocker:latest

docker build . -t nameDocker:latest

docker container stop number-docker-name

docker container ls

  • docker pull quay.io/test:v0.0.1

    • docker run --rm -p 5000:5000 -it quay.io/test:v0.0.1

    • curl --header "Content-Type: application/json" --request POST --data '[{"fixed":7.4, "a":0, "b":0.56, "c":9.4}]' http://127.0.0.1:5000/predict

  • docker run --rm -v /home/.aws/credentials:/root/.aws/credentials -it quay.io/test /bin/sh aws s3 ls --profile=test


--------------------------------

Cloud software engineer and consultant focusing on building highly available, scalable and fully automated infrastructure environments on top of Amazon Web Services and Microsoft Azure clouds. My goal is always to make my customers happy in the cloud.

----------------

Search Google for "tiger" in 3D; on an iPhone it shows an AR/VR view.

---------------

brew install youtube-dl

----------------------------

List: collection buckets: 1 for this week, 2 for this month, 3 for the future

--------------------------------------------------------------------

**• Per frame operation**

– Detection

– Classification

– Segmentation

– Feature extraction

– Recognition

**• Across frames**

– Tracking

– Counting

**• High level**

– Intention

– Relations

– Analyzing

=============================

Deep compression

Pruning deep learning

Hash table neural network

DL compression


===================================

Mini PCI-e slot

  • What have I learned so far:

    • Problem-based learning

    • real life scenarios

    • index card (answer , idea)

    • Think-Pair-Share

    • Leverage flip charts

    • Summarizing



--------------------------------------------------------------------

Self


Advancing Self-Supervised and Semi-Supervised Learning with SimCLR \cite{Chen2020}

%https://github.com/google-research/simclr

first pretraining on a large unlabeled dataset and then fine-tuning on a smaller labeled dataset

pretraining on large unlabeled image datasets, as demonstrated by Exemplar-CNN, Instance Discrimination, CPC, AMDIM, CMC, MoCo and others.

“A Simple Framework for Contrastive Learning of Visual Representations”, 85.8\% top-5 accuracy using 1\% of labeled images on the ImageNet dataset

contrastive learning algorithms

linear evaluation protocol (Zhang et al., 2016; Oord et al., 2018; Bachman et al., 2019; Kolesnikov et al., 2019)

unsupervised learning benefits more from bigger models than its supervised counterpart.
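
A minimal sketch of the contrastive (NT-Xent) loss that SimCLR is built on, in PyTorch; the batch construction, embedding size, and temperature are illustrative, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive NT-Xent loss for embeddings of two augmented views.

    z1[i] and z2[i] are embeddings of two augmentations of the same image (a positive pair);
    every other embedding in the combined 2N batch acts as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2N, d), unit length
    sim = z @ z.t() / temperature                             # scaled cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                # exclude self-similarity
    # The positive of sample i is i+N (and of i+N is i).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage with random "projection head" outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```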




--------------------------------------------------------------------





Some optimization algorithms

========================

Swarm Algorithm

===============

1. Ant Colony Optimization (ACO) was inspired by the research on the behavior of ant colonies

2. Firefly Algorithm is based on insects called fireflies

3. Marriage in Honey Bees Optimization Algorithm (MBO) is inspired by the reproduction process of honey bees

4. Artificial Bee Colony Algorithm (ABC) is based on the nectar-collecting (foraging) behavior of honey bees

5. Wasp Swarm Algorithm was inspired by parasitic wasps

6. Bee Collecting Pollen Algorithm (BCPA)

7. Termite Algorithm

8. Mosquito swarms Algorithm (MSA)

9. zooplankton swarms Algorithm (ZSA)

10. Bumblebees Swarms Algorithm (BSA)

11. Fish Swarm Algorithm (FSA)

12. Bacteria Foraging Algorithm (BFA)

13. Particle Swarm Optimization (PSO) (see the sketch after this list)

14. Cuckoo Search

15. Bat Algorithm (BA)

16. Accelerated PSO

17. Bee System

18. Beehive Algorithm

19. Cat Swarm

20. Consultant-guided search

21. Eagle Strategy

22. Fast Bacterial swarming algorithm

23. Good lattice swarm optimization

24. Glowworm swarm optimization

25. Hierarchical swarm model

26. Krill Herd

27. Monkey Search

28. Virtual ant algorithm

29. Virtual bees

30. Weighted Swarm Algorithm

31. Wisdom of Artificial Crowd algorithm

32. Prey-predator algorithm

33. Memetic algorithm

34. Lion Optimization Algorithm

35. Chicken Swarm Optimization

36. Ant Lion Optimizer

37. Compact Particle Swarm Optimization

38. Fruit Fly Optimization Algorithm

39. marine propeller optimization algorithm

40. The Whale Optimization Algorithm

41. virus colony search algorithm

42. Slime mould optimization algorithm
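
To make at least one of these concrete, here is a minimal Particle Swarm Optimization sketch (item 13 above) minimizing a toy sphere function; the inertia and acceleration coefficients are common textbook values, not tuned.

```python
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimal Particle Swarm Optimization: minimizes `objective` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros((n_particles, dim))                     # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()             # best position found so far

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    return gbest, pbest_val.min()

# Toy usage: minimize the sphere function; the optimum is at the origin.
best_x, best_f = pso(lambda p: float(np.sum(p ** 2)))
print(best_x, best_f)
```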

Ecology Inspired Algorithm

==========================

1. Biogeography-based Optimization

2. Invasive Weed Optimization

3. Symbiosis-Inspired Optimization - PS2O

4. Atmosphere Clouds Model

5. Brain Storm Optimization

6. Dolphin echolocation

7. Japanese Tree Frog Calling algorithm

8. Eco-inspired evolutionary algorithm

9. Egyptian Vulture

10. Fish School search

11. Flower Pollination algorithm

12. Gene Expression

13. Great Salmon Run

14. Group Search Optimizer

15. Human Inspired Algorithm

16. Roach Infestation algorithm

17. Queen-bee algorithm

18. Shuffled frog leaping algorithm

19. Forest Optimization Algorithm

20. coral reefs optimization algorithm

21. cultural evolution algorithm

22. Grey Wolf Optimizer

23. probabilistic pso

24. omicron aco algorithm

25. shark smell optimization

26. social spider algorithm

27. social insects behavior algorithm

28. sperm whale algorithm

Evolutionary Optimization

=========================

1. Genetic Algorithm

2. Genetic Programming

3. Evolutionary Strategies

4. Differential Evolution

5. Paddy Field Algorithm

6. Queen-bee Evolution

7. Quantum Inspired Social Evolution

Physic and Chemistry inspired algorithm

=======================================

1. Big bang-Big Crunch

2. Black hole algorithm

3. Central force optimization

4. Charged System search

5. Electro-magnetism optimization

6. Galaxy based search algorithm

7. Gravitational search

8. Harmony search algorithm

9. Intelligent water drop algorithm

10. River formation algorithm

11. Self-propelled dynamics

12. Simulated Annealing

13. Stochastic diffusion search

14. Spiral optimization

15. Water Cycle algorithm

16. Artificial Physics optimization

17. Binary Gravitational search algorithm

18. Continuous quantum ant colony optimization

19. Extended artificial physics optimization

20. Extended Central force optimization

21. Electromagnetism-like heuristic

22. Gravitational Interaction optimization

23. Hysteretic Optimization algorithm

24. Hybrid quantum-inspired GA

25. Immune gravitational inspired algorithm

26. Improved quantum evolutionary algorithm

27. Linear programming

28. Quantum-inspired bacterial swarming

29. Quantum-inspired evolutionary algorithm

30. Quantum-inspired genetic algorithm

31. Quantum-behaved PSO

32. Unified big bang-chaotic big crunch

33. Vector model of artificial physics

34. Versatile quantum-inspired evolutionary algorithm

35. Space Gravitational Algorithm

36. Ion Motion Algorithm

37. Light Ray Optimization Algorithm

38. Ray Optimization

39. Photosynthetic Algorithms

40. floorplanning algorithm

41. Gases Brownian Motion Optimization

42. gradient-type optimization

43. mean-variance optimization

44. Mine blast algorithm

45. moth flame optimization

46. multi battalion search algorithm

47. music inspired optimization

48. no free lunch theorems algorithm

49. Optics inspired optimization

50. runner-root algorithm

51. sine cosine algorithm

52. pitch tracking algorithm

53. Stochastic Fractal Search algorithm

54. stroke volume optimization

55. Stud krill herd algorithm

56. The Great Deluge Algorithm

57. Water Evaporation Optimization

58. water wave optimization algorithm

59. Island model algorithm

60. Steady State model