I would like to share some of my experience with AI projects.
I am thrilled to announce the launch of my new service! As a computer vision and machine learning consultant, I provide end-to-end research and development solutions for cutting-edge artificial intelligence projects. My services encompass custom software implementation, MLOps, and project management, ensuring clients receive top-quality results. If you're looking to enhance your AI capabilities, I'm here to help. Contact me to learn more.
Are you looking for expert analysis of your project, eager for professional feedback, or in need of a comprehensive execution plan? Would you like to gain insights from industry leaders? I offer a free 15-minute consultation to help you achieve your goals, whether that means improved performance, reduced costs, or increased customer satisfaction.
Are you in search of a partner who aligns with your project requirements? As a freelance data analyst with over 10 years of experience in the industry, I understand that finding the right partner can be challenging.
To help businesses overcome this hurdle, I offer best-in-class analysis tools such as regression analysis, reliability tools, hypothesis tests, and a graph builder feature for scientific data visualization. Additionally, I use predictive modeling to forecast future market conditions, making it easier for businesses to plan and strategize.
I understand that budget limitations can exist, and gaining buy-in for new tools and systems can be burdensome. That's why my services are designed to be user-friendly, with no coding required and built-in context-sensitive help systems. My tools and systems are also designed for non-statisticians, making it easier for businesses to understand and implement them.
With many industry use cases available online, my services are proven to deliver results. Let's partner up to take your project to the next level!
pip install mlc-ai-nightly -f https://mlc.ai/wheels
https://mlc.ai/
https://mlc.ai/summer22/
Day 1:
Introduction to Unity: TVMScript
Introduction to Unity: Relax and PyTorch
TVM BYOC in Practice
Get Started with TVM on Adreno GPU
Introduction to Unity: Metaschedule
How to Bring microTVM to a custom IDE
Day 2:
Community Keynote
PyTorch 2.0: the journey to bringing compiler technologies to the core of PyTorch
Support QNN Dialect for TVM with MediaTek Neuron and Devise the Scheduler for Acceleration
On-Device Training Under 256KB Memory
AMD Tutorial
TVM at TI: Accelerating inference using the C7x/MMA
Adreno GPU: 4x speed-up and upstreaming to TVM mainline
Transfer-Tuning: Reusing Auto-Schedules for Efficient Tensor Program Code Generation
Improvement in the TVM OpenCL codegen to autogenerate optimal convolution kernels for Adreno GPUs
TVM Unity: Pass Infrastructure and BYOC
Renesas Hardware accelerators with Apache TVM
Introduction on 4th Gen Intel Xeon processor and BF16 support with TVM
Hidet: Task Mapping Programming Paradigm for Deep Learning Tensor Programs
Towards Building a Responsible Data Economy
Optimizing SYCL Device Kernels with AKG
Adreno GPU Performance Enhancements using TVM
Improvements to CMSIS-NN integration in TVM
UMA: Universal Modular Accelerator Interface
Day 3:
TVM Unity for Dynamic Models
Empower Tensorflow serving with backend TVM
Enabling Conditional Computing on Hexagon target
Decoupled Model Schedule for Large Deep Learning Model Training
Using TVM to bring Bayesian neural networks to embedded hardware
Efficient Support of TVM Scan OP on RISC-V Vector Extension
Improvements to Ethos-U55 support in TVM including CI on Alif Semiconductor boards
Compiling Dynamic Shapes
TVM Packaging in 2023: delivering TVM to end users
Cross-Platform Training Using Automatic Differentiation on Relax IR
AutoTVM: Reducing tuning space by cross axis filtering
SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning
Analytical Tensorization and Fusion for Compute-intensive Operators
CUTLASS 3.0: Next Generation Composable and Reusable GPU Linear Algebra Library
Enabling Data Movement and Computation Pipelining in Deep Learning Compiler
Automating DL Compiler Bug Finding with NNSmith
TVM at NIO
TVM at Tencent
Integrating the Andes RISC-V Processors into TVM
Alpa: A Compiler for Distributed Deep Learning
ACRoBat: Compiler and Runtime Techniques for Efficient Auto-Batching of Dynamic Deep Learning Computations
Channel Folding: a Transform Pass for Optimizing Mobilenets
========================================================================
Day 1:
************************ Introduction to Unity: TVMScript
https://github.com/cyx-6/TVM-Demo/blob/main/tvmscript.ipynb
GANs can show us hidden patterns in historical data that we could not see before.
“I always have a slip of paper at hand, on which I note down the ideas of certain pages. On the backside I write down the bibliographic details. After finishing the book I go through my notes and think how these notes might be relevant for already written notes in the slip-box. It means that I always read with an eye towards possible connections in the slip-box.” (Luhmann et al., 1987, 150)
Deep representation learning
Model evaluation.
Cameras are cheaper than lidar.
Point clouds are used because we need 3D information.
Capturing reality
1. Add/Commit All
Standard way: git add .
git commit -m "Message"
Another way: git commit -a -m "Message"
2. Aliases
With aliases, you can write your own Git commands that do anything you want.
Eg: git config --global alias.ac '!git add -A && git commit -m'
(alias called ac, git add -A && git commit -m will do the full add and commit)
3. Revert
The revert command creates a new commit that undoes the changes introduced by any given commit on the current branch.
Eg: git revert 486bdb2
Another way: git revert HEAD (to revert the most recent commit)
4. Reflog
This command lets you easily see the recent commits, pulls, resets, pushes, etc on your local machine.
Eg: git reflog
5. Pretty Logs
Gives you the ability to print out a pretty log of your commits/branches.
Eg: git log --graph --decorate --oneline
6. Searching Logs
One can also use the log command to search for specific changes in the code.
Eg: git log -S "A promise in JavaScript is very similar"
7. Stash
This command stashes (stores locally) all your code changes without actually committing them.
Eg: git stash
8. Remove Dead Branches
This command will delete all the tracking information for branches that are on your local machine that are not in the remote repository, but it does not delete your local branches.
Eg: git remote update --prune
9. Bisect
Binary-searches the commit history to find which commit introduced a bug.
Eg: git bisect start
git bisect bad
git bisect good 48c86d6
10. Destroy Local Changes
Wipes out all changes on your local branch, resetting it to exactly what is on the remote branch.
Eg: git reset --hard origin/main
Don't blindly trust IoT devices; software and hardware must work together for better business.
A newsletter on investing, every 3 months.
1. Prototyping. Newbie stage.
2. Patent. Website. (list of investors)
3. Pre-seed. First funding: 1M from VCs, institutions, angel capital; 400,000 at pre-seed. Convertible equity round, convertible note agreement template. Convertible loan.
1. German standards institute
2.
4. Equity. Venture builder: 20% for 200,000.
5. 100,000 per year to become a unicorn in less than 10 years.
6. Soonicorn at 100k, unicorn at 1M.
7. 360 euro per year for a database of investors.
8. Convertible loan: pays an interest rate of 5% to 8%; about 18 months later (2M raised at a 10M valuation) it converts on that basis.
9. An investor never acts as a co-founder; a full-time co-founder gets 20%.
10. Project profit.
11. Go full-time after fundraising.
Make a plan for your business; take your time to make calculations by defining a target audience. Your target audience determines how you approach your business plan. By studying your target audience, you are doing empirical research and collecting information from them. Then secure a good partnership if need be, and get enough capital to start up.
* What the people need
* Why people need it
* When the people need it
* Its affordability
* Its ease of use
* Its maintenance and revenue
Pair programming
The SB7 Framework harnesses the influence of stories. The structure describes the 7 most common story elements:
• Character
• Problem
• Guide
• Plan
• Calls to action
• Failure
• Success
Dear [Hiring Manager’s Name],
I am writing to apply for the position of computer vision for IoT and cloud at [Company Name]. I am a highly skilled and experienced computer vision engineer with a strong background in IoT and cloud technologies. I believe that my skills and experience make me an ideal candidate for this position and I am excited about the opportunity to contribute to the success of your organization.
I have a solid understanding of computer vision algorithms and techniques, as well as experience in developing and implementing computer vision systems. I am proficient in programming languages such as Python, C++, and Java, and have experience with popular computer vision libraries such as OpenCV, TensorFlow, and PyTorch.
In addition, I have a strong background in IoT and cloud technologies, including experience with IoT platforms such as AWS IoT, Azure IoT, and Google Cloud IoT. I am familiar with cloud computing technologies such as AWS, Azure, and Google Cloud, and have experience with deploying and managing computer vision systems on these platforms.
I am also a team player and have excellent communication skills. I am able to work with cross-functional teams and can effectively communicate with both technical and non-technical stakeholders. I am also highly motivated, and I am always looking for ways to improve my skills and stay up-to-date with the latest technologies.
I am excited about the opportunity to join [Company Name] and to contribute to the development of cutting-edge computer vision systems for IoT and cloud. I am confident that my skills and experience make me a strong candidate for this position, and I look forward to discussing how I can contribute to your organization.
Thank you for considering my application. I look forward to hearing from you soon.
Sincerely,
Title: "Unlocking the Power of Computer Vision for IoT and Cloud"
Introduction:
* Hi, and welcome to our video on the topic of computer vision for IoT and cloud. In this video, we're going to explore how computer vision technology can be used to enhance IoT and cloud-based systems, and how it can be used to unlock new possibilities for businesses and consumers alike.
Body:
* First, let's talk about what computer vision is and how it works. Essentially, computer vision is the technology that enables computers to understand and interpret visual information from the world around us. This can include things like images, videos, and even 3D models.
* One of the key ways that computer vision can be used to enhance IoT and cloud-based systems is by enabling devices to better understand and interact with their environment. For example, a computer vision-enabled camera could be used to monitor a manufacturing facility and identify when a machine is in need of maintenance or when an employee is working in an unsafe manner.
* Another way that computer vision can be used to enhance IoT and cloud-based systems is by enabling devices to better understand and interact with people. For example, a computer vision-enabled security camera could be used to identify individuals and track their movements, or a computer vision-enabled smart home system could be used to detect when someone is in the room and adjust the lighting or temperature accordingly.
* Additionally, computer vision can also be used to enhance cloud-based systems by providing more accurate data and insights. For example, a computer vision-enabled drone could be used to collect data on crops and provide farmers with more accurate information about the health and growth of their crops.
Conclusion:
* Overall, computer vision technology has the potential to unlock new possibilities for businesses and consumers alike, by enabling IoT and cloud-based systems to better understand and interact with their environment and people. We hope this video has provided you with a better understanding of the potential of computer vision for IoT and cloud, and we look forward to seeing the new possibilities that will be created as this technology continues to evolve.
Excited to share my latest project using computer vision and IoT to improve efficiency in manufacturing. I used a combination of machine learning algorithms and cloud computing to analyze data from cameras and sensors in real-time, resulting in a 20% increase in production speed. This was a challenging project but I enjoyed every step of it!
I am always looking for new opportunities to apply my skills in computer vision and IoT to help companies improve their operations. Let's connect if you are working on a similar project or if you are looking for a developer with these skills. #computervision #IoT #cloudcomputing #manufacturingefficiency #machinelearning #developer
In this post, you briefly mention your experience and skills in computer vision and IoT, and you provide a specific example of a project you worked on that demonstrates your abilities. You also make it clear that you are open to new opportunities, and you invite others to connect with you. Using relevant hashtags such as #computervision #IoT #cloudcomputing can help your post reach a wider audience.
Exciting news! I just published a paper on a new object detection algorithm that I developed. The algorithm uses a combination of deep learning and computer vision techniques to improve accuracy and speed of object detection in real-world scenarios. This is a big step forward in the field of computer vision and I am proud to have contributed to it.
I will be presenting my research at the Computer Vision Conference next month, if you're attending be sure to stop by and say hi! #computervision #objectdetection #deeplearning #research
In this post, you briefly explain the main findings and contributions of your research, and you express your excitement and pride in your work. You also mention the upcoming conference where you will be presenting your research, inviting your friends and colleagues to meet you in person. Also using relevant hashtags such as #computervision #objectdetection #deeplearning can help reach a wider audience interested in the field.
Feature stores
1. Car parts detection
2. Resize, keeping aspect ratio
3.1 Perform damage detection
3.2 Semantic segmentation
4. Transfer back to original coordinates
1. Class imbalance
2. Class definition (maybe a class in between)
3. Inconsistent annotations
Color augmentation
1. RGB shift
2. Random brightness and contrast
3. Sharpen
4. Hue saturation value
Why augment data manually?
Because it keeps control over the data, e.g. not rotating or changing something that should stay fixed.
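A minimal sketch of the four color augmentations above, assuming the Albumentations library is the intended tool (the probabilities and the file name are placeholders):

import albumentations as A
import cv2

# Illustrative color-augmentation pipeline; p values are arbitrary choices
transform = A.Compose([
    A.RGBShift(r_shift_limit=20, g_shift_limit=20, b_shift_limit=20, p=0.5),
    A.RandomBrightnessContrast(p=0.5),
    A.Sharpen(p=0.3),
    A.HueSaturationValue(p=0.5),
])
image = cv2.cvtColor(cv2.imread("car.jpg"), cv2.COLOR_BGR2RGB)  # "car.jpg" is a placeholder
augmented = transform(image=image)["image"]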
Photogrammetry model
Neural radiance fields (NeRF)
NeRF in the wild
Yocto and Machine Learning + OpenCV: https://www.yoctoproject.org
https://www.hackster.io/monica/running-machine-learning-on-maaxboard-s-yocto-image-part-1-6a4796
Bard Google: https://blog.google/technology/ai/bard-google-ai-search-updates/
https://mustang.ir/questions/question/راه-اندازی-پروژه-های-گیت-هاب-با-git-pages
Book: Project Management for Non-Project Managers
https://fa.wikipedia.org/wiki/علی_اکبرپور
https://www.kingorama.com (3D Shahnameh)
Parts of the book Refactoring (free edition)
Performance Notes Of PyTorch Support for M1 and M2 GPUs - Lightning AI
HandBrake updated with AV1 and VP9 10-bit video encoding
How to Start Your Sole Proprietorship in 6 Simple Steps
Challenges of producing content for the European and American markets - YouTube
PyTorch for Deep Learning & Machine Learning – Full Course - YouTube
Why passive investing makes less sense in the current environment | Financial Times
Bayesian Neural Networks and Variational Dropout
One machine learning question every day - bnomial
git remote add origin
Asynchronous operation
Anomaly detection
User experience. Personalization.
Prediction to manage societal mobility
Personalization
Covenant
Platform.
OpenMMLab
Wordtune - AI-powered Writing Companion
tree -v -I '*.png' -I '*.jpg' --charset utf-8 >list2.txt
A 3D object represented as a triangular mesh needs vertices.
A point cloud samples the underlying surface of a 3D object and is faster to process.
Definition of Done
User Story complete
Code/Implementation complete
Code/Implementation peer review(s) approved
Unit tests complete (if required)
Testing Notes complete (if required)
User Story Acceptance criteria defined and verified
Backend: Python, Redis, Postgres, Celery
Frontend: React, Redux, TypeScript
DevOps: Terraform, Kubernetes, GitHub, Docker, AWS
Data: Python (Data Science), Kafka, Fastapi, MLFlow, AWS SageMaker
ML: Seldon Core, Kubeflow, …
Sharpness, Noise, Dynamic range, Tone reproduction, Contrast, Color, Distortion, DSLR lenses, Vignetting, Exposure, Lateral chromatic aberration (LCA), Lens flare, Artifacts
1. To select a word, double-tap on it.
2. To select a whole paragraph, tap four times on it.
3. Place one finger at the start and another at the end of a range and hold briefly; the text between the two fingers will be selected.
4. Double-tap at the start of the desired range and immediately drag to extend the selection pin (do not lift your finger after the second tap).
5. To select a whole paragraph, besides method 2, you can tap once with two fingers.
Video stabilization involves three major stages, namely motion estimation, motion smoothing, and image warping. Motion estimation algorithms often use a similarity transform to handle camera translations, rotations, and zooming. The tricky part is getting these algorithms to lock onto the background motion.
0. Video frames captured during fast motion are often blurry. Their appearance can be improved either using deblurring techniques (Section 10.3) or stealing sharper pixels from other frames with less motion or better focus (Matsushita, Ofek, Ge et al. 2006). Exercise 8.3 has you implement and test some of these ideas.
1. Background subtraction
2. Motion estimation
3. Motion smoothing
4. Image warping. image warping can result in missing borders around the image, which must be cropped, filled using information from other frames, or hallucinated using inpainting techniques (Section 10.5.1).
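A rough OpenCV sketch of this pipeline, with my own implementation choices (similarity-like motion via estimateAffinePartial2D, moving-average trajectory smoothing); it is not taken from the source text:

import cv2
import numpy as np

def stabilize(frames, smooth_radius=15):
    # 1) Estimate per-frame motion (dx, dy, dtheta) from tracked features
    transforms = []
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.ravel() == 1
        m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
        if m is None:
            m = np.float32([[1, 0, 0], [0, 1, 0]])  # fall back to identity
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev_gray = gray
    # 2) Smooth the accumulated camera trajectory with a moving average
    trajectory = np.cumsum(transforms, axis=0)
    kernel = np.ones(2 * smooth_radius + 1) / (2 * smooth_radius + 1)
    smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same")
                                for i in range(3)])
    # 3) Warp each frame by the corrected motion
    out = [frames[0]]
    for i, frame in enumerate(frames[1:]):
        dx, dy, da = np.asarray(transforms[i]) + (smoothed[i] - trajectory[i])
        m = np.float32([[np.cos(da), -np.sin(da), dx],
                        [np.sin(da),  np.cos(da), dy]])
        h, w = frame.shape[:2]
        out.append(cv2.warpAffine(frame, m, (w, h)))
    return out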
Video stabilization
There is much recent work on
Multi-view 3D reconstruction is a central research topic in computer vision that is driven in many different directions
There are many available methods that can handle the noisy image completion problem
In the case of surveillance using a fixed camera, there is no desired motion. In the case of most robotic applications, horizontal and vertical motions are desired, but rotation is not. In some cases of ground vehicles where the terrain is known to have many incline changes, or with aerial vehicles undergoing complicated maneuvers where the vehicle’s body is meant to be in varying orientations, rotation might be desired as the robot is meant to be at an angle at times.
In robotics applications, computational complexity is extremely important due to the need for real-time operation. Also, it is likely that the center of rotation will not lie in the center of the image frame because the camera is rarely mounted at the robot’s center of mass.
This first assumption is made in many video stabilization algorithms, and is a convenient way to seed the correct features with higher trust values. It is not an unreasonable assumption to make. Depending on the application, there is often a large portion of frames where local motion does not occur. In some situations, such as monitoring of steady traffic, there is no guarantee that local motion will not occur. This situation has not been tested, nor has our algorithm been designed to handle it. The second assumption comes from a combination of common sense, and the experience of many computer vision researchers. It makes sense that an object in the scene which does not move will be recognized more easily and more often. Being recognized consistently and consecutively is considered stable. On the other hand, objects which have local motion are less likely to be recognized as often. They might move through shadows, change orientation, or even move completely out of the scene. These possibilities all lead to a less stable class of features. It is likely that, more often than not, there are more background features than foreground features. Moving objects generally cover a small portion of the screen, which usually yields fewer features. Although uncommon, we did not want to make the assumption that this would occur in every frame. Certain scenes will consist of a large portion of local motion, or an object will move very close to the camera, consuming a much larger portion of the scene than the background. As long as some background features are discovered in each frame, our stabilization algorithm should succeed.
Image processing tips:
The image size and the kernel size should depend on each other; the best way is to use one variable to define the size of the image and the kernel together.
The coordinate origin of an image is at the top left of the image/display.
In order to change it to a conventional coordinate system:
build a grid of points (two matrices for the X and Y coordinates),
subtract half of W and H from X and Y to get a centered coordinate system for the image,
and now we have Cartesian coordinates.
Cartesian coordinates can then be converted to polar coordinates.
Converting from Cartesian to polar space is useful in many image-processing programs; it can also be used for finding thresholds.
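A small NumPy sketch of the coordinate shift and the Cartesian-to-polar conversion described above (the 256x256 size and the array names are my own):

import numpy as np

H, W = 256, 256
X, Y = np.meshgrid(np.arange(W), np.arange(H))  # pixel grid, origin at top-left
Xc, Yc = X - W / 2, Y - H / 2                   # origin moved to the image center
R = np.sqrt(Xc**2 + Yc**2)                      # polar radius
Theta = np.arctan2(Yc, Xc)                      # polar angle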
In MATLAB we can use ":", for example MatrixA(:), which means every entry of the matrix no matter how many dimensions it has; to implement the same thing in Python we can use numpy.flatten().
In MATLAB, rounding behaves differently from Python; if you want the same result you need to implement the rounding function yourself.
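For example, MATLAB rounds halves away from zero while NumPy rounds halves to even; a tiny helper of my own (not a library function) reproduces the MATLAB behavior:

import numpy as np

def matlab_round(x):
    # MATLAB rounds 0.5 away from zero; np.round(0.5) == 0.0 (round-half-to-even)
    return np.sign(x) * np.floor(np.abs(x) + 0.5)

assert matlab_round(0.5) == 1.0    # np.round(0.5) == 0.0
assert matlab_round(-2.5) == -3.0  # np.round(-2.5) == -2.0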
imge_mask = np.ones_like(image_source) * 255
imge_mask = imge_mask.astype(np.uint8)
imge_mask = imge_mask.flatten()  # .ravel() does the same but returns a view when possible
np.asarray  # converts its input to an ndarray, avoiding a copy when possible
np.logical_and(1, 2)  # element-wise logical AND
indexes = [index for index in range(len(array1)) if array1[index] == True]
cv2.bitwise_not(yyy)  # inverts a mask/image (yyy is a placeholder)
"olive" editor remove silence
Questions:
How to train model to add new classes?
How to add a new class to an existing classifier in deep learning?
Adding new Class to One Shot Learning trained model
Is it possible to train a neural network as new classes are given?
Merging several models into one detection system that handles all these tasks.
Answer 1:
There are several ways to add new classes to a trained model that require training only on the new classes:
Incremental training (GitHub)
continuously learn a stream of data (GitHub)
online machine learning (GitHub)
Transfer Learning Twice
Continual learning approaches (Regularization, Expansion, Rehearsal) (GitHub)
Answer 2:
Online learning is a term used to refer to a model which takes a continual or sequential stream of input data while training, in contrast to offline learning (also called batch learning), where the model is pre-trained on a static predefined dataset.
Continual learning (also called incremental, continuous, lifelong learning) refers to a branch of ML working in an online learning context where models are designed to learn new tasks while maintaining performance on historic tasks. It can be applied to multiple problem paradigms (including Class-incremental learning, where each new task presents new class labels for an ever expanding super-classification problem).
Do I need to train my whole model again on all four classes or is there any way I can just train my model on new class?
Naively re-training the model on the updated dataset is indeed a solution. Continual learning seeks to address contexts where access to historic data (i.e. the original 3 classes) is not possible, or when retraining on an increasingly large dataset is impractical (for efficiency, space, privacy etc concerns). Multiple such models using different underlying architectures have been proposed, but almost all examples exclusively deal with image classification problems.
Answer 3:
You could use transfer learning (i.e. use a pre-trained model, then change its last layer to accommodate the new classes, and re-train this slightly modified model, maybe with a lower learning rate) to achieve that, but transfer learning does not necessarily attempt to retain any of the previously acquired information (especially if you don't use very small learning rates, you keep on training and you do not freeze the weights of the convolutional layers), but only to speed up training or when your new dataset is not big enough, by starting from a model that has already learned general features that are supposedly similar to the features needed for your specific task. There is also the related domain adaptation problem.
There are more suitable approaches to perform incremental class learning (which is what you are asking for!), which directly address the catastrophic forgetting problem. For instance, you can take a look at this paper Class-incremental Learning via Deep Model Consolidation, which proposes the Deep Model Consolidation (DMC) approach. There are other continual/incremental learning approaches, many of them are described here or in more detail here.
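A minimal PyTorch sketch of the transfer-learning recipe from Answer 3 (replace the last layer, optionally freeze the backbone, use a small learning rate); the class count (3 old + 4 new = 7) is illustrative:

import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False  # freeze the backbone (omit this to fine-tune everything)
model.fc = torch.nn.Linear(model.fc.in_features, 7)  # new head for 7 classes
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # small LR, as suggested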
Answer 4:
By using continual-learning approaches, the model can be trained without losing the original classes. There are 3 categories:
Regularization
Expansion
Rehearsal
Answer 5:
If you have access to the dataset, you can download it and add all your new classes, so that you have 'N' COCO classes + 'M' new classes.
After that you can fine-tune the model on the new dataset. You do not need the whole dataset; roughly the same number of images for every class is enough.
https://learnopencv.com/stanford-mrnet-challenge-classifying-knee-mris/
Before starting your machine learning project, ask these questions as preparation: What is your inference hardware? Specify the use case. Specify the model interface. How would we monitor performance after deployment? How can we approximate post-deployment monitoring before deployment? Build a model and iteratively improve it. How will the model be deployed at the end? Monitor performance after deployment. What is your metric? How do you split your data (training and validation)?
Preparation ML Project Workflow
specify the use case
specify model interface
how would we monitor performance after deployment?
how can we approximate post-deployment monitoring before deployment?
build a model and iteratively improve it
deploy the model
monitor performance
what is your metric?
How do you split your data?
Before Training deep learning model
Use a large model for training because:
it is faster to train, overfits less, and converges faster due to better training dynamics
it is easier to compress, and compresses further, in the final stage
model compression and acceleration: reducing parameters without significantly decreasing the model performance
Data: how to have good data for training deep learning models; how to build and enhance a good dataset for your deep learning project: use the same config and data for training and inference; remove redundant data (delete data you don't need); get more data; handle missing data; use data augmentation techniques or GANs to generate more data; re-scale/balance data; transform your data (change data types); select features based on the dataset and use case.
The data you don't need: removing redundant samples
get more data
Invent more data
data augmentation
Re-scale data
balance datasets
Transform your data
Feature selection based on dataset and use case
ML-Augmented Video Object Tracking: by applying and evaluating multiple algorithmic models, we enhanced the ability to scale object tracking in high-density video compositions.
Training deep learning model
Automated hyper-parameter search
Using Hyperparameter tuning / Hyperparameter optimization tools
AutoML
genetic algorithm
population based training
bayesian optimization
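As one concrete example, a Bayesian-style search with the Optuna library might look like this (train_and_evaluate and the search space are assumptions, not from the source):

import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    batch_size = trial.suggest_categorical("batch_size", [16, 32, 64])
    return train_and_evaluate(lr=lr, batch_size=batch_size)  # assumed: returns validation loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)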
You need to set some parameters and config for training
Diagnostics
Weight Initialization
Learning rate
Activation function
Network Topology
Batches and Epochs
Regularization
Optimization and Loss
Early Stopping
Continuous delivery
evolve with latest detection models
more data (no labels)
semi-supervised learning: big self-supervised models are strong semi-supervised learners
After Training deep learning model
Parameter pruning
model pruning: reducing redundant parameters which are not sensitive to the performance.
aim: remove all connections with absolute weights below a threshold
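A minimal sketch using PyTorch's pruning utilities, with L1-magnitude unstructured pruning as a stand-in for the threshold rule above (the layer and the 50% amount are illustrative):

import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(128, 64)
prune.l1_unstructured(layer, name="weight", amount=0.5)  # zero the 50% smallest |w|
prune.remove(layer, "weight")  # bake the mask in, making the pruning permanent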
Quantization
compresses by reducing the number of bits used to represent the weights
quantization effectively constrains the number of different weights we can use inside our kernels
per-channel quantization for weights, which improves performance by model compression and latency reduction.
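A minimal PyTorch sketch of post-training dynamic quantization to 8-bit weights (the model here is a placeholder):

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
# Quantize the Linear layers' weights to int8; activations are quantized dynamically
qmodel = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)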
Low rank matrix factorization (LRMF)
there exist latent structures in the data; by uncovering them we can obtain a compressed representation of the data
LRMF factorizes the original matrix into lower rank matrices while preserving latent structures and addressing the issue of sparseness
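A small NumPy sketch of low-rank factorization via truncated SVD (the matrix shape and rank k are illustrative):

import numpy as np

W = np.random.randn(256, 512)  # stand-in for a weight matrix
k = 32
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_low = (U[:, :k] * S[:k]) @ Vt[:k]  # rank-k approximation of W
# Storing U[:, :k] * S[:k] and Vt[:k] costs k*(256+512) parameters instead of 256*512.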
Compact convolutional filters (Video/CNN)
designing special structural convolutional filters to save parameters
replace overparameterized filters with compact filters to achieve an overall speedup while maintaining comparable accuracy
Knowledge distillation
training a compact neural network with distilled knowledge of a large model
distillation (knowledge transfer) from an ensemble of big networks into a much smaller network which learns directly from the cumbersome model's outputs, that is lighter to deploy
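A minimal sketch of a distillation loss in PyTorch (the temperature T and mixing weight alpha are illustrative hyper-parameters):

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets from the temperature-softened teacher, plus the usual hard-label loss
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # scale by T^2 to keep gradient magnitudes comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard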
Binarized Neural Networks (BNNs)
Apache TVM (incubating) is a compiler stack for deep learning systems
Neural Networks Compression Framework (NNCF)
Deep learning model in production
security: controls access to model(s) through secure packaging and execution
Test
auto training
using parallel processing and libraries such as GStreamer
Technology
Docker
AWS
Flask
Django
My Keynote (February 2021)
introduction
Machine Learning/ Deep Learning
Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed
supervised Machine Learning
Deep Convolutional Neural Networks (DCNN) Architecture
Visualizing and Understanding Convolutional Networks
Object Detection by Deep Learning
Style Transfer
semi-supervised Machine Learning/ Deep Reinforcement learning (DRL)
unsupervised Machine Learning
Auto Encoder
Generative Adversarial Networks (GANs)
Tools
Pre trained model
Effect of Augmented Datasets to Train DCNNs
Training for more classes
Optimization
Production setup
post development
Business: Gartner, Hype Cycle for Emerging Technologies, 2025
Advanced and practical
Inside CNN
Deep Convolutional Neural Networks Architecture
Convolution
Convolution Layer
Conv/FC Filters
Activation Functions
Layer Activations
Pooling Layer
Dropout; L2 pooling
Why
Max-pooling is useful
How to see inside each layer and find important features
Visualizing and Understanding Convolutional Networks
Hands on python for deep learning
Fundamental deep learning
Installation: TensorFlow, PyTorch
Summary of the summit
AI Hardware Europe Summit (July 2020)
Apache TVM And Deep Learning Compilation Conference (December 2020)
Face
Effective and precise face detection based on color and depth data
https://www.sciencedirect.com/science/article/pii/S221083271400009X
containing or not containing a face
Eigenface, Fisherface, waveletface, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), Haar wavelet transform, and so on.
Viola–Jones detector
illumination changes and occlusion
depth information is used to filter the regions of the image where a candidate face region is found by the Viola–Jones (VJ) detector
- the first filtering rule is defined on the color of the region; since some false positives have colors not compatible with the face (e.g. shadows on jeans) a skin detector is applied to remove the candidate face regions that do not contain skin pixels;
- the second filtering rule is defined on the size of the face: using the depth map it is quite easy to calculate the size of the candidate face region, which is useful to discard the smallest and largest faces from the final result set;
- the third filtering rule is defined on the depth map to discard flat objects (e.g. candidate faces found in a wall) or uneven objects (e.g. a candidate face found in the leaves of a tree). Combining color and depth data the candidate face region can be extracted from the background, and measures of depth and regularity are used for filtering out false positives.
The size criteria simply remove the candidate faces not included in a fixed size range ([12.5, 30] cm). The size of a candidate face region is extracted from the depth map according to the following approach.
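A small sketch of that size filter under a pinhole-camera assumption (real width is roughly pixel width times depth divided by focal length); the function names and the focal-length parameter are my own, not from the paper:

def face_width_cm(bbox_width_px, depth_cm, focal_length_px):
    # pinhole model: physical size = pixel size * depth / focal length
    return bbox_width_px * depth_cm / focal_length_px

def plausible_face(bbox_width_px, depth_cm, focal_length_px):
    # keep only candidates inside the fixed [12.5, 30] cm range from the paper
    return 12.5 <= face_width_cm(bbox_width_px, depth_cm, focal_length_px) <= 30.0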
Gaussian mixture 3D morphable face model
Face Synthesis for Eyeglass-Robust Face Recognition
GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data
FacePoseNet: Making a Case for Landmark-Free Face Alignment
Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision
Unsupervised Eyeglasses Removal in the Wild
How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)
(a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets.
(b) We create a guided-by-2D-landmarks network which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date (~230,000 images).
(c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W.
(d) We further look into the effect of all “traditional” factors affecting face alignment performance like large pose, initialization and resolution, and introduce a “new” one, namely the size of the network.
(e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used.
Training and testing code as well as the dataset can be downloaded from https://www.adrianbulat.com/face-alignment/
19.Sep.2021
https://orcid.org/0000-0001-8382-1389
Dreyer's English (learn to write English)
#book story
Greek Mythology Explained: A Deeper Look at Classical Greek Lore and Myth
**Papers:**
CALTag: High Precision Fiducial Markers for Camera Calibration
Diatom Autofocusing in Brightfield Microscopy: a Comparative Study (implements a variation of the Laplacian)
Analysis of focus measure operators in shape-from-focus: why Laplacian? Blur detection? Iqaf?
Optical flow modeling and computation: A survey
Toward general type 2 fuzzy logic systems based on zSlices
--------------------------------------------------------------------
Lost in space
The OA
Film: https://en.wikipedia.org/wiki/Shark_Tank
TV series: Billions
TV series: Monk
Python async
Highly decoupled microservice
edX: RISC-V, self-driving cars
RISC-V Magazine
Road map
Game: over/under
https://www.sporcle.com/games/Hejman/underwhelmed
--------------------------------------------------------------------
GDPR in IoT
The EU General Data Protection
Regulation (GDPR) and Face Images in IoT
The GDPR (General Data Protection Regulation), taking effect in May 2018, introduces strict requirements for personal data protection and the privacy rights of individuals. The EU regulations will set a new global standard for privacy rights and change the way organizations worldwide store and process personal data. The GDPR brings the importance of preserving the privacy of personal information to the forefront, yet the importance of face images within this context is often overlooked. The purpose of this paper is to introduce a solution that helps companies protect face images in IoT devices which record or process images via camera, to strengthen compliance with the GDPR.
Our Face is our Identity
Our face is the most fundamental and highly visible element of our identity. People recognize us when they see our face or a photo of our face.
Recent years have seen an exponential increase in the use, storage and dissemination of face images in both private and public sectors - in social networks, corporate databases, IoT, smart-city deployments, digital media, government applications, and nearly every organization’s databases.
---------------------
$(aws-okta env stage)
aws s3 cp s3://dataset/archive.tar.gz /Users/a.zip
aws s3 ls images | tail -n 100
aws s3 cp staging-images/test.jpg /Users/test.jpg
---------------------
screen -rD
k get pods
Docker
RUN chmod +x /tmp/run.sh
Can run docker in terminal and run code line by line
docker run -it --rm debian:stable-slim bash
apt-get update
apt-get install -y
--------------------------------
brew install awscli aws-okta kubectx kubernetes-cli tfenv
touch ~/.aws/config
--------------------------------------------------------------------
docker image rm TETSTDFSAFDSADF
docker image ls
docker system prune
docker run -p 5000:5000 nameDocker:latest
docker build . -t nameDocker:latest
docker container stop number-docker-name
docker container ls
docker pull quay.io/test:v0.0.1
docker run --rm -p 5000:5000 -it quay.io/test:v0.0.1
curl --header "Content-Type: application/json" --request POST --data '[{"fixed":7.4, "a":0, "b":0.56, "c":9.4}]' http://127.0.0.1:5000/predict
docker run --rm -v /home/.aws/credentials:/root/.aws/credentials -it quay.io/test /bin/sh aws s3 ls --profile=test
--------------------------------
Cloud software engineer and consultant focusing on building highly available, scalable and fully automated infrastructure environments on top of Amazon Web Services and Microsoft Azure clouds. My goal is always to make my customers happy in the cloud.
----------------
Search Google for "tiger" in 3D; the iPhone shows it in AR/VR.
---------------
brew install youtube-dl
----------------------------
List: collection buckets: 1 for this week, 2 for this month, 3 for the future
--------------------------------------------------------------------
**• Per frame operation**
– Detection
– Classification
– Segmentation
– Feature extraction
– Recognition
**• Across frames**
– Tracking
– Counting
**• High level**
– Intention
– Relations
– Analyzing
=============================
Deep compression
Pruning deep learning
Hash table neural network
Dl compression
===================================
Mini PCI-e slot
What have I learned so far:
Problem-based learning
real life scenarios
index card (answer , idea)
Think-Pair-Share
Leverage flip charts
Summarizing
--------------------------------------------------------------------
Self
Advancing Self-Supervised and Semi-Supervised Learning with SimCLR \cite{Chen2020}
%https://github.com/google-research/simclr
first pretraining on a large unlabeled dataset and then fine-tuning on a smaller labeled dataset
pretraining on large unlabeled image datasets, as demonstrated by Exemplar-CNN, Instance Discrimination, CPC, AMDIM, CMC, MoCo and others.
“A Simple Framework for Contrastive Learning of Visual Representations”, 85.8\% top-5 accuracy using 1\% of labeled images on the ImageNet dataset
contrastive learning algorithms
linear evaluation protocol (Zhang et al., 2016; Oord et al.,2018; Bachman et al., 2019; Kolesnikov et al., 2019)
unsupervised learning benefits more from bigger models than its supervised counterpart.
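A compact sketch of the NT-Xent contrastive loss at the heart of SimCLR, written from the paper's description rather than the official code (z1, z2 are the projected embeddings of two augmented views of the same batch):

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit-norm rows
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])    # positive = other view
    return F.cross_entropy(sim, targets)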
--------------------------------------------------------------------
Some optimization algorithms
========================
Swarm Algorithm
===============
1. Ant Colony Optimization (ACO) was inspired by the research on the behavior of ant colonies
2. Firefly Algorithm based on insects called fireflies
3. Marriage in Honey Bees Optimization Algorithm (MBO algorithm) is inspired by the process of reproduction of Honey Bee
4. Artificial Bee Colony Algorithm (ABC) is based on the foraging behavior of honey bees
5. Wasp Swarm Algorithm was inspired by parasitic wasps
6. Bee Collecting Pollen Algorithm (BCPA)
7. Termite Algorithm
8. Mosquito swarms Algorithm (MSA)
9. zooplankton swarms Algorithm (ZSA)
10. Bumblebees Swarms Algorithm (BSA)
11. Fish Swarm Algorithm (FSA)
12. Bacteria Foraging Algorithm (BFA)
13. Particle Swarm Optimization (PSO); see the sketch after this list
14. Cuckoo Search
15. Bat Algorithm (BA)
16. Accelerated PSO
17. Bee System
18. Beehive Algorithm
19. Cat Swarm
20. Consultant-guided search
21. Eagle Strategy
22. Fast Bacterial swarming algorithm
23. Good lattice swarm optimization
24. Glowworm swarm optimization
25. Hierarchical swarm model
26. Krill Herd
27. Monkey Search
28. Virtual ant algorithm
29. Virtual bees
30. Weighted Swarm Algorithm
31. Wisdom of Artificial Crowd algorithm
32. Prey-predator algorithm
33. Memetic algorithm
34. Lion Optimization Algorithm
35. Chicken Swarm Optimization
36. Ant Lion Optimizer
37. Compact Particle Swarm Optimization
38. Fruit Fly Optimization Algorithm
39. marine propeller optimization algorithm
40. The Whale Optimization Algorithm
41. virus colony search algorithm
42. Slime mould optimization algorithm
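To make at least one of these concrete: a minimal NumPy sketch of Particle Swarm Optimization (item 13 above); the bounds, coefficients, and the sphere test function are illustrative:

import numpy as np

def pso(f, dim=2, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = np.random.uniform(-5, 5, (n_particles, dim))  # particle positions
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                  # personal bests
    pbest_val = np.apply_along_axis(f, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.apply_along_axis(f, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda x: np.sum(x**2))  # sphere function as a quick test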
Ecology Inspired Algorithm
==========================
1. Biogeography-based Optimization
2. Invasive Weed Optimization
3. Symbiosis-Inspired Optimization - PS2O
4. Atmosphere Clouds Model
5. Brain Storm Optimization
6. Dolphin echolocation
7. Japanese Tree Frog Calling algorithm
8. Eco-inspired evolutionary algorithm
9. Egyptian Vulture
10. Fish School search
11. Flower Pollination algorithm
12. Gene Expression
13. Great Salmon Run
14. Group Search Optimizer
15. Human Inspired Algorithm
16. Roach Infestation algorithm
17. Queen-bee algorithm
18. Shuffled frog leaping algorithm
19. Forest Optimization Algorithm
20. coral reefs optimization algorithm
21. cultural evolution algorithm
22. Grey Wolf Optimizer
23. probabilistic pso
24. omicron aco algorithm
25. shark smell optimization
26. social spider algorithm
27. social insects behavior algorithm
28. sperm whale algorithm
Evolutionary Optimization
=========================
1. Genetic Algorithm
2. Genetic Programming
3. Evolutionary Strategies
4. Differential Evolution
5. Paddy Field Algorithm
6. Queen-bee Evolution
7. Quantum Inspired Social Evolution
Physic and Chemistry inspired algorithm
=======================================
1. Big bang-Big Crunch
2. Black hole algorithm
3. Central force optimization
4. Charged System search
5. Electro-magnetism optimization
6. Galaxy based search algorithm
7. Gravitational search
8. Harmony search algorithm
9. Intelligent water drop algorithm
10. River formation algorithm
11. Self-propelled dynamics
12. Simulated Annealing
13. Stochastic diffusion search
14. Spiral optimization
15. Water Cycle algorithm
16. Artificial Physics optimization
17. Binary Gravitational search algorithm
18. Continuous quantum ant colony optimization
19. Extended artificial physics optimization
20. Extended Central force optimization
21. Electromagnetism-like heuristic
22. Gravitational Interaction optimization
23. Hysteretic Optimization algorithm
24. Hybrid quantum-inspired GA
25. Immune gravitational inspired algorithm
26. Improved quantum evolutionary algorithm
27. Linear programming
28. Quantum-inspired bacterial swarming
29. Quantum-inspired evolutionary algorithm
30. Quantum-inspired genetic algorithm
31. Quantum-behaved PSO
32. Unified big bang-chaotic big crunch
33. Vector model of artificial physics
34. Versatile quantum-inspired evolutionary algorithm
35. Space Gravitational Algorithm
36. Ion Motion Algorithm
37. Light Ray Optimization Algorithm
38. Ray Optimization
39. Photosynthetic Algorithms
40. floorplanning algorithm
41. Gases Brownian Motion Optimization
42. gradient-type optimization
43. mean-variance optimization
44. Mine blast algorithm
45. moth flame optimization
46. multi battalion search algorithm
47. music inspired optimization
48. no free lunch theorems algorithm
49. Optics inspired optimization
50. runner-root algorithm
51. sine cosine algorithm
52. pitch tracking algorithm
53. Stochastic Fractal Search algorithm
54. stroke volume optimization
55. Stud krill herd algorithm
56. The Great Deluge Algorithm
57. Water Evaporation Optimization
58. water wave optimization algorithm
59. Island model algorithm
60. Steady State model