compile MOT

Test for 2021

3D Multi-Object Tracking: A Baseline and New Evaluation Metrics (IROS 2020, ECCVW 2020) https://github.com/xinshuoweng/AB3DMOT

Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild https://github.com/elliottwu/unsup3d

This repository contains the public release of the Python implementation of our Aggregate View Object Detection (AVOD) network for 3D object detection. https://github.com/kujason/avod

πš™πš’πš™ πš’πš—πšœπšπšŠπš•πš• πš”3𝚍


Run on Ubuntu PC + eGPU



apt search nvidia-driver

apt-cache search nvidia-driver

sudo apt update

sudo apt upgrade

sudo apt install nvidia-driver-455

sudo reboot

nvidia-smi

Download cuDNN v7.6.5 (November 5th, 2019) for CUDA 10.0, then install it from the tarball:

  • tar -xzvf cudnn-10.0-linux-x64-v7.6.5.32.tgz

    • sudo cp cuda/include/cudnn*.h /usr/local/cuda/include

    • sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64

    • sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*


Alternatively, install the .deb packages:

  • sudo dpkg -i libcudnn7_7.6.5.32-1+cuda10.0_amd64.deb

  • sudo dpkg -i libcudnn7-dev_7.6.5.32-1+cuda10.0_amd64.deb

  • sudo dpkg -i libcudnn7-doc_7.6.5.32-1+cuda10.0_amd64.deb
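To confirm cuDNN is picked up, a quick check from Python (this assumes a CUDA-enabled PyTorch build is already installed in the active environment):

import torch

print(torch.cuda.is_available())            # True if the driver and CUDA runtime are visible
print(torch.backends.cudnn.is_available())  # True if cuDNN can be loaded
print(torch.backends.cudnn.version())       # e.g. 7605 for cuDNN 7.6.5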




sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo apt-key fingerprint 0EBFCD88

sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io


Make sure you have installed the NVIDIA driver and the Docker engine for your Linux distribution. Note that you do not need to install the CUDA Toolkit on the host system; only the NVIDIA driver is required.



distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
    && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
    && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list


curl -s -L https://nvidia.github.io/nvidia-container-runtime/experimental/$distribution/nvidia-container-runtime.list | sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list



sudo apt-get install -y nvidia-docker2

sudo systemctl restart docker

sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi






pip install cython; pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
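A quick way to verify the cocoapi install is to load a COCO-style annotation file; the path below is a placeholder, point it at any COCO-format JSON:

from pycocotools.coco import COCO

# placeholder path: any COCO-format annotation file works here
coco = COCO('annotations/instances_val2017.json')
img_ids = coco.getImgIds()
print(len(img_ids), 'images')
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[:1]))
print(len(anns), 'annotations in the first image')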

CenterTrack_ROOT=/home/farshid/code/CenterTrack

git clone --recursive https://github.com/xingyizhou/CenterTrack $CenterTrack_ROOT

cd $CenterTrack_ROOT

pip install -r requirements.txt

cd $CenterTrack_ROOT/src/lib/model/networks/

git clone https://github.com/CharlesShang/DCNv2/

cd DCNv2

./make.sh
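After ./make.sh finishes, the build can be sanity-checked by constructing a DCN layer from the compiled extension. This is a sketch, assuming the dcn_v2 module produced by the repo and a CUDA-capable GPU; run it from inside the DCNv2 directory:

import torch
from dcn_v2 import DCN  # module built by ./make.sh

# small deformable conv layer on random input, just to exercise the CUDA kernels
layer = DCN(64, 64, kernel_size=(3, 3), stride=1, padding=1, deformable_groups=2).cuda()
x = torch.randn(2, 64, 32, 32).cuda()
out = layer(x)
print(out.shape)  # expected: torch.Size([2, 64, 32, 32])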


https://github.com/xingyizhou/CenterTrack/blob/master/readme/MODEL_ZOO.md




cvat

sudo groupadd docker

sudo usermod -aG docker $USER

sudo apt-get --no-install-recommends install -y python3-pip python3-setuptools

sudo python3 -m pip install setuptools docker-compose

sudo apt-get --no-install-recommends install -y git

git clone https://github.com/opencv/cvat

cd cvat

sudo docker-compose build

sudo docker-compose up -d

sudo docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'

http://localhost:8080/
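Once the containers are up, a minimal reachability check for the CVAT server (assuming the default port above):

import requests

resp = requests.get('http://localhost:8080/')
print(resp.status_code)  # expect 200 once the containers are running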

Towards-Realtime-MOT

  • conda activate cuda100

  • pip install motmetrics

  • pip install cython_bbox

  • conda install -c conda-forge ffmpeg
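A small motmetrics sketch showing how MOT metrics are accumulated frame by frame; the IDs and positions below are made-up toy values:

import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)

# one toy frame: ground-truth objects [1, 2] vs. hypotheses [1, 2, 3]
dists = mm.distances.norm2squared_matrix(
    np.array([[0.0, 0.0], [1.0, 1.0]]),                 # ground-truth positions
    np.array([[0.1, 0.0], [1.0, 0.9], [5.0, 5.0]]))     # hypothesis positions
acc.update([1, 2], [1, 2, 3], dists)

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['num_frames', 'mota', 'motp'], name='toy')
print(summary)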



https://openaccess.thecvf.com/content_CVPR_2020/papers/Xu_How_to_Train_Your_Deep_Multi-Object_Tracker_CVPR_2020_paper.pdf

https://gitlab.inria.fr/yixu/deepmot

git clone https://gitlab.inria.fr/yixu/deepmot.git

sudo apt-get install libpng-dev

sudo apt install libfreetype6-dev

pip install -r requirements.txt


Running deepmot on a recent PyTorch fails with: ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead. The workaround is to use the older PyTorch 0.4.1 in a dedicated environment:


conda create -y --name cuda92 python=3.6

conda activate cuda92

source activate cuda92

conda install pytorch==0.4.1 torchvision==0.2.0 cudatoolkit=9.2 -c pytorch
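A quick check that the cuda92 environment actually picked up the old PyTorch and sees the GPU:

import torch

print(torch.__version__)          # expect 0.4.1
print(torch.cuda.is_available())  # expect True once the driver is installed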

conda install -c conda-forge ffmpeg

  • conda create -n cuda100

  • conda activate cuda100

conda install pytorch torchvision cudatoolkit=10.0 -c pytorch


AWS

FairMOT

  • conda create -n FairMOT

  • conda activate FairMOT

  • conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch

  • cd ${FAIRMOT_ROOT}

  • pip install -r requirements.txt

  • conda install -c conda-forge ffmpeg



MOTS: Multi-Object Tracking and Segmentation

  • Paper: https://arxiv.org/pdf/1902.03604

  • Dataset: https://motchallenge.net/data/MOTS/

  • This benchmark extends the traditional Multi-Object Tracking benchmark to a new benchmark defined on a pixel-level with precise segmentation masks. We annotated 8 challenging video sequences (4 training, 4 test) in unconstrained environments filmed with both static and moving cameras. Tracking, segmentation and evaluation are done in image coordinates. All sequences have been annotated with high accuracy on a pixel level, strictly following a well-defined protocol.
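A sketch for reading the MOTS txt annotations; it assumes the documented per-line layout (frame, object id, class id, image height, image width, RLE mask) and uses pycocotools to decode the masks:

from pycocotools import mask as rletools

def load_mots_txt(path):
    """Parse a MOTS txt annotation file into per-frame object lists."""
    frames = {}
    with open(path) as f:
        for line in f:
            fields = line.strip().split(' ')
            frame, obj_id, class_id = int(fields[0]), int(fields[1]), int(fields[2])
            h, w, rle = int(fields[3]), int(fields[4]), fields[5]
            mask = rletools.decode({'size': [h, w], 'counts': rle.encode('utf-8')})
            frames.setdefault(frame, []).append(
                {'id': obj_id, 'class': class_id, 'mask': mask})
    return frames

# usage (placeholder path):
# frames = load_mots_txt('instances_txt/0002.txt')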


https://github.com/xingyizhou/CenterTrack

Setup:

  • cd /media/farshid/exfat128/code

  • Conda

    • conda create --name CenterTrack36cuda10 python=3.6

    • conda activate CenterTrack36cuda10

  • conda install pytorch torchvision -c pytorch

  • pip install cython; pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

  • CenterTrack_ROOT=/media/farshid/exfat128/code/CenterTrack

  • git clone --recursive https://github.com/xingyizhou/CenterTrack $CenterTrack_ROOT

  • cd $CenterTrack_ROOT

  • pip install -r requirements.txt

  • cd $CenterTrack_ROOT/src/lib/model/networks/

  • git clone https://github.com/CharlesShang/DCNv2/

  • cd DCNv2

  • ./make.sh

  • Download pretrained models for monocular 3D tracking, 80-category tracking, or pose tracking and move them to $CenterTrack_ROOT/models/. More models can be found in the Model Zoo.


AWS (11 December 2020)

https://github.com/xingyizhou/CenterTrack


  • Conda

    • conda create --name CenterTrack36cuda10 python=3.6

    • conda activate CenterTrack36cuda10

  • conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

  • conda install -c conda-forge ffmpeg

  • pip install cython; pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'

  • CenterTrack_ROOT=/

  • git clone --recursive https://github.com/xingyizhou/CenterTrack $CenterTrack_ROOT

  • cd $CenterTrack_ROOT

  • pip install -r requirements.txt

  • cd $CenterTrack_ROOT/src/lib/model/networks/

  • git clone https://github.com/CharlesShang/DCNv2/

  • cd DCNv2

  • ./make.sh

  • Download pretrained models for monocular 3D tracking, 80-category tracking, or pose tracking and move them to $CenterTrack_ROOT/models/. More models can be found in the Model Zoo.

  • Training

  • cd $CenterTrack_ROOT/src/tools/

  • bash get_mot_17.sh
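Once get_mot_17.sh finishes, the downloaded ground truth can be inspected with a small parser; this assumes the standard MOTChallenge gt.txt layout (frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility) and a placeholder path:

import csv

def load_mot_gt(path):
    """Read a MOTChallenge gt.txt file into a list of per-box dicts."""
    boxes = []
    with open(path) as f:
        for row in csv.reader(f):
            frame, track_id = int(row[0]), int(row[1])
            x, y, w, h = map(float, row[2:6])
            conf, cls, vis = float(row[6]), int(row[7]), float(row[8])
            boxes.append({'frame': frame, 'id': track_id,
                          'bbox': (x, y, w, h),
                          'conf': conf, 'class': cls, 'visibility': vis})
    return boxes

# usage (placeholder path):
# gt = load_mot_gt('MOT17/train/MOT17-02-FRCNN/gt/gt.txt')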