Full Stack Deep Learning (fullstackdeeplearning.com)
Notes for Weeks 1–12 of Full Stack Deep Learning (Spring 2021)
Reference: https://fullstackdeeplearning.com/spring2021
Learn More: https://www.tiziran.com/
Common solutions for both under-fitting and over-fitting: check the dataset, perform error analysis, try a different model architecture, and tune hyper-parameters.
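Before applying any remedy, it helps to identify which regime you are in by comparing training and validation loss. A minimal sketch (the helper name and both thresholds are illustrative assumptions, not from the course):

```python
def diagnose_fit(train_loss: float, val_loss: float,
                 high_loss: float = 1.0, gap: float = 0.2) -> str:
    """Rough heuristic; `high_loss` and `gap` are illustrative thresholds."""
    if train_loss > high_loss:
        return "under-fitting: training loss is still high (high bias)"
    if val_loss - train_loss > gap:
        return "over-fitting: validation loss well above training loss (high variance)"
    return "reasonable fit"

print(diagnose_fit(train_loss=0.15, val_loss=0.55))  # over-fitting
print(diagnose_fit(train_loss=1.40, val_loss=1.45))  # under-fitting
```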
Under-fitting (reducing bias); see the sketch below the list:
⬆️ bigger model
⬇️ reduce regularization
🤔 error analysis
🤔 different model architecture
🤔 tune hyper-parameters
⬆️ add features
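As a concrete illustration of the first two bias-reducing steps, here is a minimal PyTorch sketch; the layer sizes and the weight-decay value are illustrative assumptions:

```python
import torch.nn as nn
import torch.optim as optim

# ⬆️ bigger model: more width and an extra hidden layer than the small baseline
small = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))
bigger = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# ⬇️ reduce regularization: drop weight decay to zero while fighting bias
optimizer = optim.AdamW(bigger.parameters(), lr=1e-3, weight_decay=0.0)
```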
Over-fitting (reducing variance); see the sketch below the list:
⬆️ add more training data
⬆️ add normalization (batch norm, layer norm)
⬆️ add data augmentation
⬆️ increase regularization (dropout, L2, weight decay)
🤔 error analysis
🤔 different model architecture
🤔 tune hyper-parameters
⬇️ early stopping
⬇️ remove features
⬇️ reduce model size
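Several of these variance-reducing steps can be shown together in one short PyTorch sketch. All hyper-parameter values, and the toy validation-loss stub driving the early-stopping loop, are illustrative assumptions rather than course-prescribed settings:

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import transforms

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.BatchNorm1d(256),   # ⬆️ add normalization (batch norm)
    nn.ReLU(),
    nn.Dropout(p=0.5),     # ⬆️ increase regularization (dropout)
    nn.Linear(256, 10),
)
# ⬆️ increase regularization (L2 via weight decay)
optimizer = optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

# ⬆️ add data augmentation (for image inputs)
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(28, padding=2),
    transforms.ToTensor(),
])

def run_one_epoch(epoch: int) -> float:
    """Placeholder for one epoch of train + validate; the toy curve is an assumption."""
    return 1.0 / (epoch + 1) + 0.02 * epoch

# ⬇️ early stopping: halt once validation loss stops improving for `patience` epochs
best_loss, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    val_loss = run_one_epoch(epoch)
    if val_loss < best_loss:
        best_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
print(f"stopped at epoch {epoch}, best validation loss {best_loss:.3f}")
```

Note the icons carry over from the list: ⬆️ steps add capacity-constraining signal or data, while ⬇️ steps cut back the model or the training run.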