FAQ

Toward Scalable Verification for Safety-Critical Deep Networks

https://arxiv.org/pdf/1801.05950.pdf

"Verifying that neural networks behave as intended may soon become a limiting factor in their applicability to real-world, safety-critical systems such as those used to control autonomous vehicles." The paper is about establishing safety and reliability guarantees for such systems by verifying properties of DNNs. A major challenge in verifying properties of DNNs with satisfiability modulo theories (SMT) solvers is handling the networks' activation functions; Reluplex addresses this with a domain-specific theory solver that handles ReLU constraints lazily. Two directions are emphasized: 1) devising scalable verification techniques, and 2) identifying design choices that make networks more amenable to verification.

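A minimal sketch of what an SMT encoding of a ReLU network looks like, using the z3 solver's Python bindings. This is not the Reluplex algorithm itself (Reluplex adds a dedicated theory solver to avoid eager case splitting); the tiny network, its weights, and the property bound are all made up for illustration.

```python
# pip install z3-solver
from z3 import Real, Solver, If, sat

# Hypothetical 2-input, single-ReLU "network": y = ReLU(1.0*x1 - 2.0*x2 + 0.5)
x1, x2 = Real("x1"), Real("x2")
pre = 1.0 * x1 - 2.0 * x2 + 0.5             # weighted sum (weights are made up)
y = If(pre >= 0, pre, 0)                    # each ReLU introduces a case split

s = Solver()
s.add(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1)   # input domain
s.add(y > 1.6)                              # negation of the property "y <= 1.6"

res = s.check()                             # unsat => property holds on the domain
print(res)
if res == sat:
    print("counterexample:", s.model())
```

With many ReLU neurons the number of such case splits grows exponentially, which is exactly the scalability problem the paper targets.
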
Each neuron of a neural network computes a weighted sum of its inputs according to learned weights. It then passes that sum through an activation function to produce the neuron’s final output. Typically, the activation functions introduce nonlinearity to the network, making DNNs capable of learning arbitrarily complex functions, but also making the job of automated verification tools much harder.

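As a concrete illustration of the paragraph above, here is a toy neuron in NumPy; the weights, bias, and inputs are arbitrary values, not taken from any real network.

```python
import numpy as np

def relu(z):
    # Piecewise-linear activation: its two linear pieces are what force
    # case splits in SMT-based verifiers.
    return np.maximum(z, 0.0)

x = np.array([0.2, -1.3, 0.7])   # example inputs (arbitrary)
w = np.array([0.5, 0.1, -0.4])   # learned weights (arbitrary)
b = 0.05                         # learned bias (arbitrary)

pre_activation = w @ x + b       # weighted sum of the inputs
output = relu(pre_activation)    # neuron's final output

print(pre_activation, output)
```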


Driver Assistance Systems and Vision-Based Driver Monitoring

Vision-based convolutional neural network system detects phone usage, eating, and drinking; cameras with active infrared lighting run at 30 Hz and deliver 8-bit grayscale images at 1280 × 1024 pixel resolution; classifier: ResNeXt-34 (see the sketch below).
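
A rough PyTorch sketch of how such a driver-monitoring classifier could be wired up. torchvision does not ship a ResNeXt-34, so a ResNet-34 backbone is used here as a stand-in; the class list, input resizing, and all other details are assumptions for illustration, not taken from the cited system.

```python
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["attentive", "phone_usage", "eating", "drinking"]   # illustrative labels

model = models.resnet34(num_classes=len(CLASSES))
# Adapt the stem to single-channel (8-bit grayscale) frames instead of RGB.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.eval()

# One 1280 x 1024 grayscale frame (random stand-in), downscaled before inference.
frame = torch.rand(1, 1, 1024, 1280)
frame = nn.functional.interpolate(frame, size=(256, 320))
with torch.no_grad():
    logits = model(frame)
print(CLASSES[logits.argmax(dim=1).item()])
```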

Video-based driver assistance systems, such as automated driving, require resilient object detection and tracking; camera: ±50° field of view (horizontal); +27°/−21° field of view (vertical); >150 m detection range; 2.6 MP resolution.

multi-path approach:

  1. classifier: pattern recognition with preprogrammed algorithms; resilient object detection

  2. dense optical flow and structure from motion (SfM): detect static, raised objects and approximate their 3D structure (see the sketch after this list)

  3. deep learning: classify objects, road surface, road edges, and orientation via semantic segmentation
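
A minimal sketch of the building block behind path 2, dense optical flow, using OpenCV's Farnebäck implementation on two consecutive grayscale frames (file names and parameter values are placeholders); the structure-from-motion step that turns the flow into 3D structure is not shown.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames from the front camera (placeholder file names).
prev = cv2.imread("frame_0000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: one 2-D motion vector per pixel.
# Positional args: flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
# Consistently moving pixel tracks can then be triangulated (structure from motion)
# to approximate the 3-D shape of raised objects such as curbs or barriers.
print("mean flow magnitude:", float(np.mean(magnitude)))
```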

"Operation principle of the multi purpose camera: During assisted and automated driving, the vehicle must know what is happening in its surroundings at all times. It must reliably detect objects and people, and be able to react to these appropriately. Here, the latest generation of the front video camera from Bosch plays a crucial part: The multi purpose camera for assisted and partially automated driving utilizes an innovative, high-performance system-on-chip (SoC) with a Bosch microprocessor for image-processing algorithms. Its unique multipath approach combines classic image-processing algorithms with artificial-intelligence methods for comprehensive scene interpretation and reliable object detection. With its algorithmic multipath approach and the innovative system-on-chip, this camera generation has been specially developed for high-performance driver assistance systems.

In line with this approach, the multi purpose camera uses for example the following technical paths at once for image processing: The first of these is the conventional approach already in use today. Via preprogrammed algorithms, the cameras recognize the typical appearance of object categories such as vehicles, cyclists, or road markings.

The second and third paths are new, however. For the second path, the camera uses the optical flow and the structure from motion (SfM) to recognize raised objects along the roadside, such as curbs, central reserves, or safety barriers. The motion of associated pixels is tracked. A three-dimensional structure is then approximated based on the two-dimensional camera image.

The third path relies on artificial intelligence. Thanks to machine-learning processes, the camera has learned to classify objects such as cars parked by the side of the road. The latest generation can differentiate between surfaces on the road and those alongside the road via neuronal networks and semantic segmentation.

Additional paths are used as required: These include classic line scanning, light detection, and stereo disparity." Link

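The third path in the quote boils down to per-pixel classification of the scene (road surface vs. roadside vs. objects) with a neural network. Below is a hedged sketch using an off-the-shelf semantic segmentation model from torchvision (API of torchvision >= 0.13); the three-class split and the untrained DeepLabV3 model are stand-ins for illustration, not Bosch's actual network.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Illustrative classes: 0 = roadside/background, 1 = drivable road, 2 = raised object.
# weights=None / weights_backbone=None: random init, so nothing is downloaded.
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=3)
model.eval()

# One RGB front-camera frame (random stand-in data).
frame = torch.rand(1, 3, 512, 1024)
with torch.no_grad():
    scores = model(frame)["out"]      # per-pixel class scores: (1, 3, 512, 1024)
labels = scores.argmax(dim=1)         # semantic segmentation map: (1, 512, 1024)
print(labels.shape, labels.unique())
```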


Face recognition/attributes