FQA

The present invention relates to a system for providing advertisement contents based on facial analysis. The system consists of an image acquisition device, a face detection module, an analysis module, a classification module, a database, a computation module, a matching module and a display device. The image acquisition device acquires an image of a user; the face detection module detects the user's face in the image; the analysis module analyses the facial features statistically using classification models; the database stores matching rules, weighted advertisements and a plurality of advertisement contents; and the display device displays the advertisement contents. The computation module computes a weighted image of the user, and the matching module matches the weighted image of the user with a weighted advertisement to select an advertisement content based on facial analysis of the user. The system aims to provide advertisement contents via a digital standee by extracting salient demographic information from the user, thereby indirectly obtaining user information and behavioral preferences.

The present invention relates to a system and method for providing advertisement contents based on facial analysis using a digital standee. The system (100) is embedded in the digital standee and comprises an image acquisition device, a face detection module, a classification module, a data analysis module, a computation module, a database and a matching module. The image acquisition device is configured to acquire an image of a user, and the face detection module uses deep learning technology to detect the user's face in the image. The classification module classifies the user's facial features into a plurality of classification models, such as gender, age range, emotion, style and attention. The data analysis module obtains behavioral preference and information of the user by analyzing the classified facial features. The matching module matches the information with types of businesses to provide suitable advertisement contents to the user based on rules set by the advertisement provider. The advertisement contents are displayed on a display device in the system.
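A minimal sketch of the acquisition → detection → classification pipeline described above, in Python. The patent uses a deep-learning face detector; here OpenCV's bundled Haar cascade stands in for it, and the attribute models (gender, age range, emotion, style, attention) are stubbed placeholders rather than trained classifiers.

```python
import cv2
import numpy as np

# Hypothetical stand-ins for the patent's trained classification models
# (gender, age range, emotion, style, attention). Real models would be
# loaded from disk; these stubs return fixed labels for illustration only.
ATTRIBUTE_MODELS = {
    "gender": lambda face: "female",
    "age_range": lambda face: "25-34",
    "emotion": lambda face: "neutral",
    "style": lambda face: "casual",
    "attention": lambda face: "looking_at_display",
}

def acquire_image(camera_index: int = 0) -> np.ndarray:
    """Image acquisition device: grab one frame from a camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera frame could not be acquired")
    return frame

def detect_faces(frame):
    """Face detection module: return bounding boxes of detected faces.
    The patent uses a deep-learning detector; OpenCV's bundled Haar
    cascade is used here only as a lightweight stand-in."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def classify_attributes(frame, box) -> dict:
    """Classification module: run each attribute model on the face crop."""
    x, y, w, h = box
    face = frame[y:y + h, x:x + w]
    return {name: model(face) for name, model in ATTRIBUTE_MODELS.items()}

if __name__ == "__main__":
    image = acquire_image()
    for box in detect_faces(image):
        print(classify_attributes(image, box))
```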

This patent describes a system for providing advertisements based on facial analysis. The system consists of an image acquisition device, a face detection module, an analysis module, a computation module, a matching module, a database, and a display device. The image acquisition device captures an image of the user; the face detection module identifies the user's face and facial features; the analysis module analyses the facial features using statistical parameters and classification models; the computation module computes a weighted image of the user; the matching module matches the weighted image of the user with weighted advertisements; and the display device displays the advertisement contents. The system operates in real time and updates the classification models continuously. The advertisement contents are based on the user's age, gender, emotion, style, and attention, and are provided by advertisement providers together with matching rules.

The process described in this patent matches a user's weighted image with a weighted advertisement according to matching rules established by the advertisement providers. The matching rules may specify an order of features, the most similar features, the most important features, or the nearest similar features; matching proceeds from left to right along the binary sequence. The selected advertisement content is then shown on the display device. The terms used in the patent are defined as specified, and the invention remains open to changes in form and detail.
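The patent does not spell out the encoding, so the following sketch assumes one-hot bits concatenated in the order given by a matching rule, and scores each candidate by the length of its left-to-right (prefix) agreement with the user's binary sequence. The attribute vocabularies and example advertisements are made up for illustration.

```python
# Minimal sketch of left-to-right matching of binary sequences.
# Encoding and prefix scoring are assumptions; the patent only states
# that matching proceeds from left to right of the binary sequence.

FEATURE_VALUES = {             # hypothetical attribute vocabularies
    "gender": ["male", "female"],
    "age_range": ["<18", "18-24", "25-34", "35-54", "55+"],
    "emotion": ["happy", "neutral", "sad"],
}

def to_binary_sequence(profile: dict, feature_order: list) -> str:
    """One-hot encode each attribute and concatenate in rule order."""
    bits = []
    for feature in feature_order:
        for value in FEATURE_VALUES[feature]:
            bits.append("1" if profile.get(feature) == value else "0")
    return "".join(bits)

def prefix_match_length(a: str, b: str) -> int:
    """Length of the common prefix, i.e. left-to-right agreement."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def select_advertisement(user: dict, ads: dict, feature_order: list) -> str:
    """Pick the ad whose binary target profile agrees with the user's
    sequence for the longest run, scanning left to right."""
    user_seq = to_binary_sequence(user, feature_order)
    return max(
        ads,
        key=lambda name: prefix_match_length(
            user_seq, to_binary_sequence(ads[name], feature_order)),
    )

# Example: the rule puts gender first, so gender dominates the match.
user = {"gender": "female", "age_range": "25-34", "emotion": "happy"}
ads = {
    "cosmetics": {"gender": "female", "age_range": "25-34", "emotion": "neutral"},
    "power_tools": {"gender": "male", "age_range": "35-54", "emotion": "neutral"},
}
print(select_advertisement(user, ads, ["gender", "age_range", "emotion"]))
```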

The system (100) is a device for providing advertisement contents based on facial analysis. It consists of an image acquisition device (10) to acquire an image of a user, a face detection module (20) to detect the face and obtain facial features, an analysis module (40) to analyze the facial features statistically using classification models, a database (60) to store matching rules and advertisements, and a display device (80) to display the selected advertisement content. The system also has a computation module (50) to compute a weighted image of the user from the analyzed facial features, and a matching module (70) to match the weighted image of the user with the weighted advertisement to select the advertisement content. The system can work for a single user or a group of users. The method (200) of providing advertisement content follows the same steps as the system (100): acquiring an image of the user, detecting the face, analyzing the facial features, computing a weighted image of the user, obtaining matching rules, and matching the weighted image with the weighted advertisement. The method also includes steps of training the classification models and displaying the selected advertisement content.
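A rough interpretation of the computation and matching steps: the "weighted image" is treated as a vector of per-attribute confidence weights, and a weighted advertisement is matched by cosine similarity. Both choices, and the attribute list, are illustrative assumptions, not the patent's definition.

```python
import numpy as np

# Sketch of the computation module (weighted image) and matching module,
# under the assumptions stated above.

ATTRIBUTES = ["male", "female", "young", "adult", "senior", "happy", "neutral"]

def weighted_image(confidences: dict) -> np.ndarray:
    """Computation module: arrange classifier confidences into a vector."""
    return np.array([confidences.get(a, 0.0) for a in ATTRIBUTES])

def match(user_vec: np.ndarray, weighted_ads: dict) -> str:
    """Matching module: pick the ad whose weight vector is most similar."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(weighted_ads, key=lambda name: cosine(user_vec, weighted_ads[name]))

user = weighted_image({"female": 0.9, "adult": 0.7, "happy": 0.6})
ads = {   # hypothetical weighted advertisements stored in the database
    "spa_resort": np.array([0.1, 0.8, 0.1, 0.7, 0.2, 0.6, 0.2]),
    "gaming_pc": np.array([0.8, 0.2, 0.9, 0.3, 0.0, 0.3, 0.5]),
}
print(match(user, ads))  # expected: "spa_resort"
```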

https://arxiv.org/pdf/1801.05950.pdf 

"Verifying that neural networks behave as intended may soon become a limiting factor in their applicability to real-world, safetycritical systems such as those used to control autonomous vehicles safety and reliability on DNNs. verify properties of DNNs. A major challenge of verifying properties of DNNs with satisfiability modulo theories (SMT) solvers is in handling the networks’ activation functions such as, Reluplex (domain-specific theory solvers; through a lazy approach). 1)devising scalable verification techniques. 2)identifying design choices -> amenable to verification. "

Each neuron of a neural network computes a weighted sum of its inputs according to learned weights. It then passes that sum through an activation function to produce the neuron’s final output. Typically, the activation functions introduce nonlinearity to the network, making DNNs capable of learning arbitrarily complex functions, but also making the job of automated verification tools much harder.
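For example, a single neuron with a ReLU activation (the weights and inputs are arbitrary example values):

```python
import numpy as np

def relu(z: float) -> float:
    """ReLU activation: the nonlinearity that complicates verification."""
    return max(0.0, z)

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    z = float(np.dot(weights, inputs) + bias)   # weighted sum of inputs
    return relu(z)                              # nonlinear activation

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.8, 0.3, -0.5])
print(neuron(x, w, bias=0.1))   # ReLU(0.4 - 0.3 - 1.0 + 0.1) = 0.0
```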



Driver Assistance Systems and Vision-Based Driver Monitoring

A vision-based convolutional neural network system detects phone usage, eating, and drinking. It uses cameras with active infrared lighting, running at 30 Hz and delivering 8-bit grayscale images at 1280 × 1024-pixel resolution; the classifier is a ResNeXt-34.
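A minimal sketch of such a frame classifier. torchvision does not ship a ResNeXt-34, so resnext50_32x4d is used as a stand-in, with the first convolution adapted to single-channel (grayscale IR) input; the class names and input handling are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical class set for driver-distraction monitoring.
CLASSES = ["attentive", "phone_usage", "eating", "drinking"]

# ResNeXt-50 stand-in for the paper's ResNeXt-34; first conv changed to
# accept one channel (grayscale IR), final layer resized to our classes.
model = models.resnext50_32x4d(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))
model.eval()

# A fake 8-bit grayscale frame; the real camera delivers 1280 x 1024
# pixels, but a reduced size keeps the example light.
frame = torch.randint(0, 256, (1, 1, 256, 320), dtype=torch.uint8)
with torch.no_grad():
    logits = model(frame.float() / 255.0)
print(CLASSES[int(logits.argmax(dim=1))])
```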

Video-based driver assistance systems, such as automated driving, require resilient object detection and tracking. Camera specifications: ±50° field of view (horizontal), +27°/−21° field of view (vertical), >150 m detection range, 2.6 MP resolution.

Multipath approach:

" Operation principle of the multi purpose camera: During assisted and automated driving, the vehicle must know what is happening in its surroundings at all times. It must reliably detect objects and people, and be able to react to these appropriately. Here, the latest generation of the front video camera from Bosch plays a crucial part: The multi purpose camera for assisted and partially automated driving utilizes an innovative, high-performance system-on-chip (SoC) with a Bosch microprocessor for image-processing algorithms. Its unique multipath approach combines classic image-processing algorithms with artificial-intelligence methods for comprehensive scene interpretation and reliable object detection. With its algorithmic multipath approach and the innovative system-on-chip, this camera generation has been specially developed for high-performance driver assistance systems. In line with this approach, the multi purpose camera uses for example the following technical paths at once for image processing: The first of these is the conventional approach already in use today. Via preprogrammed algorithms, the cameras recognize the typical appearance of object categories such as vehicles, cyclists, or road markings. The second and third paths are new, however. For the second path, the camera uses the optical flow and the structure from motion (SfM) to recognize raised objects along the roadside, such as curbs, central reserves, or safety barriers. The motion of associated pixels is tracked. A three-dimensional structure is then approximated based on the two-dimensional camera image. The third path relies on artificial intelligence. Thanks to machine-learning processes, the camera has learned to classify objects such as cars parked by the side of the road. The latest generation can differentiate between surfaces on the road and those alongside the road via neuronal networks and semantic segmentation. Additional paths are used as required: These include classic line scanning, light detection, and stereo disparity.  " Link



Face recognition/attributes 

I use a citation plugin.