Edge AI Summit

Short summary of the Edge AI Summit, 18-20 November 2020

Best of Wednesday, November 18, 2020: tinyMLPerf, Breaking the Barriers to Deploy DNNs on Low-Power Hardware, and Optimizing ML Models At The Edge Made Simple

Thursday, November 19, 2020

  • 8:00 AM - 8:30 AM (PST) KEYNOTE PRESENTATION: Developing Edge AI Solutions For A Post-Pandemic Society Sastry Malladi - FogHorn Systems

  • 8:35 AM - 9:05 AM (PST) PRESENTATION: The Evolving Landscape of Edge AI Ajay Nair - Google

  • 9:05 AM - 9:20 AM (PST) Comfort Break

  • 9:20 AM - 9:45 AM (PST) PRESENTATION: InferX X1, The Fastest and Most Efficient Edge Inference Accelerator Cheng Wang - Flex Logix Technologies Inc.

  • 9:50 AM - 10:20 AM (PST) PRESENTATION: Implementing Edge Technologies in Retail: Walmart Case Study Alex Sabatier - Nvidia

  • 10:20 AM - 10:35 AM (PST) Comfort Break

  • 10:35 AM - 11:20 AM (PST) Meet the Speakers!

  • 11:20 AM - 11:50 AM (PST) PRESENTATION: The Era of Analog AI Compute Is Here Mike Henry - Mythic

  • 11:55 AM - 12:25 PM (PST) PRESENTATION: Using Edge AI To Detect Repetitive Motion Marcellino Gemelli - Bosch Sensortec

  • 12:30 PM - 2:30 PM (PST) NETWORKING - Dedicated Networking 2 hours for 1-2-1 Video Meetings

Friday, November 20, 2020

  • 8:00 AM - 8:30 AM (PST) PRESENTATION: Spatial Computing: A Collision of Edge and Cloud-Based Computing Ashwin Swaminathan - Magic Leap

  • 8:35 AM - 9:05 AM (PST) PRESENTATION: Building An Autonomous Network For IoT and Edge Applications Anshul Bhatt - Rakuten Mobile

  • 9:05 AM - 9:20 AM (PST) Comfort Break

  • 9:20 AM - 9:45 AM (PST) PRESENTATION: Practical Edge Inferencing: Enabling fastest AI inferencing per Watt leveraging sparsity Mahesh Makhijani - GrAI Matter Labs

  • 9:50 AM - 10:20 AM (PST) PRESENTATION: Large Scale Deep Learning and AI models on the Edge Chandra Khatri - Got It AI

  • 10:20 AM - 10:35 AM (PST) Comfort Break

  • 10:35 AM - 11:20 AM (PST) NETWORKING: Interest Groups (18 people per room, topic-specific discussions)

  • 11:20 AM - 11:50 AM (PST) PANEL DISCUSSION: The Symbiotic Relationship between 5G and Edge AI Sami Badri - Credit Suisse, Christos Kolias - Orange, Rima Raouda - Independent

  • 11:55 AM - 12:25 PM (PST) PANEL DISCUSSION: Investment Trends & Dynamics Panel Rashmi Gopinath - B Capital Group, Yvonne Lutsch - Bosch Venture Capital, Eileen Tanghal - In-Q-Tel, Albert Wang - Qualcomm Ventures

  • 12:30 PM - 12:50 PM (PST) PRESENTATION: The Edge: The Hottest Market for AI Accelerator Chips - Introducing the Kisaco Leadership Chart on AI Hardware Accelerators 2020-21: Edge and Automotive Michael Azoff - Kisaco Research


Wednesday, November 18, 2020

  • A Software Solution Enabling Predictive Maintenance at the Sensor Level

    • SensiML Toolkit enables AI for a broad array of resource constrained time-series sensor endpoint applications. These include a wide range of consumer and industrial sensing applications.

    • The problem: machine learning engineers typically lack embedded-systems experience, so moving a model onto an embedded target takes a long time.

    • SensiML offers AutoML for embedded targets; the model search and training run in the cloud.

    • The generated code is then built with the compiler/toolchain for the target device (see the sketch at the end of these notes).

    • Cost trade-off between edge and cloud: working in the cloud is easy, but streaming sensor data to the cloud is difficult; inference is faster when it runs on the edge.

    • TinyML addresses battery-powered operation, limited internet connectivity, security/privacy, latency, and cost.

    • https://sensiml.com/products/#process
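
    • A minimal sketch of the generic "trained model -> embedded target" handoff that tools like this automate (standard TensorFlow Lite flow, not SensiML's own toolchain; the Keras model below is a placeholder):

      ```python
      # Hedged sketch, assuming TensorFlow is installed; "sensor_model" is a
      # hypothetical trained Keras model, not a SensiML artifact.
      import tensorflow as tf

      sensor_model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(128, 3)),   # e.g. 128 samples x 3-axis accelerometer
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(16, activation="relu"),
          tf.keras.layers.Dense(4, activation="softmax"),
      ])

      # Convert to a TFLite flatbuffer, then emit it as a C array that the
      # device compiler/toolchain can link into MCU firmware.
      converter = tf.lite.TFLiteConverter.from_keras_model(sensor_model)
      tflite_model = converter.convert()

      with open("sensor_model.tflite", "wb") as f:
          f.write(tflite_model)

      c_array = ", ".join(str(b) for b in tflite_model)
      with open("sensor_model_data.h", "w") as f:
          f.write(f"const unsigned char g_sensor_model[] = {{{c_array}}};\n")
          f.write(f"const unsigned int g_sensor_model_len = {len(tflite_model)};\n")
      ```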

  • Helping Fish Farmers Feed The World With Deep Learning

    • https://s3-us-west-1.amazonaws.com/aquabyte-static/videos/welcome_to_aquabyte_subtitled.mp4

    • Count sea lice and accurately measure biomass in real-time while reducing cage furniture. Our experts‑in‑the‑loop ensure that every single prediction is correct.

    • Aquabyte is seeking a Machine Learning Platform Engineer to drive the development, testing, and delivery of machine learning models that enable cutting-edge analytics and automation of fish farms around the world.

    • Aquabyte is on a mission to revolutionize the sustainability and efficiency of aquaculture. It is an audacious, and incredibly rewarding mission. By making fish farming cheaper and more viable than livestock production, we aim to mitigate one of the biggest causes of climate change and help prepare our planet for impending population growth. Aquaculture is the single fastest growing food-production sector in the world, and now is the time to define how technology is used to harvest the sea for generations to come.

    • We are currently focused on helping Norwegian salmon farmers better understand their fish populations and make environmentally-sound decisions. Through custom underwater cameras, computer vision, and machine learning we are able to quantify fish weights, detect sea lice infestations, and generate optimal feeding plans in real time. Our product operates at three levels: on-site hardware for image capture, cloud pipelines for data processing, and a user-facing web application. As a result, there are hundreds of moving pieces and no shortage of fascinating challenges across all levels of the stack.


  • tinyMLPerf: Benchmarking Ultra-low Power Machine Learning Systems

    • https://github.com/mlperf/tiny

    • tinyMLPerf Deep Learning Benchmarks for Embedded Devices

    • The goal of TinyMLPerf is to provide a representative set of deep neural nets and benchmarking code to compare performance between embedded devices. Embedded devices include microcontrollers, DSPs, and tiny NN accelerators. These devices typically run at between 10 MHz and 250 MHz, and can perform inference using less than 50 mW of power. TinyMLPerf submissions will allow device makers and researchers to choose the best hardware for their use case, and allow hardware vendors to showcase their offerings. TinyMLPerf is primarily intended to benchmark hardware rather than new network architectures or embedded neural net runtimes. The reference benchmarks are provided using TensorFlow Lite for Microcontrollers (TFLM). Submitters can use TFLM directly, although they are encouraged to use the software stack that works best on their hardware. (A minimal latency-timing sketch follows the benchmark list below.)

    • anomaly detection benchmark, visual wake words benchmark,
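
    • A minimal latency-timing sketch in the spirit of these benchmarks, run with the plain TFLite interpreter on a host machine (the real tinyMLPerf harness and models live in the repository above; the model path is a placeholder):

      ```python
      # Hedged sketch: times repeated invocations of a TFLite model on the host.
      # The actual benchmark runs TFLM on the target device; this only shows the idea.
      import time
      import numpy as np
      import tensorflow as tf

      interpreter = tf.lite.Interpreter(model_path="visual_wake_words.tflite")  # placeholder
      interpreter.allocate_tensors()
      inp = interpreter.get_input_details()[0]
      interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))

      interpreter.invoke()                      # warm-up
      runs = 100
      start = time.perf_counter()
      for _ in range(runs):
          interpreter.invoke()
      elapsed = time.perf_counter() - start
      print(f"mean latency: {1000 * elapsed / runs:.2f} ms")
      ```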

  • Ultra-low power neuromorphic intelligence for the sensor edge

    • Innatera Nanosystems BV (Innatera, innatera.com) is a rapidly-growing Dutch semiconductor company that develops ultra-efficient neuromorphic processors for AI at the edge. These microprocessors mimic the brain’s mechanisms for processing fast data streams from sensors, enabling complex turn-key sensor analytics functionalities, with 10,000x higher performance per watt than competing solutions. Innatera's technology serves as a critical enabler for next-generation use-cases in the IoT, wearable, embedded, and automotive domains.

  • How is AI affecting hearables and sensors?

    • https://github.com/greenwaves-technologies/nn_menu

    • The Neural Network Menu is a collection of software that implements neural networks on GreenWaves Application Processors (GAP). This repository contains common mobile and edge NN architecture examples, NN sample applications, and full-fledged reference designs. Our tools map a TFLite model (quantized or unquantized) onto GAP (a quantization sketch follows at the end of this subsection). There is also a flow in the ingredients directory showing how to hand-map a PyTorch model onto GAP.

    • https://greenwaves-technologies.com/store/

    • GAPPoc-A is a Proof of Concept Board that can be used for demonstration of battery-operated, edge computer vision applications based on GAP8.

    • It incorporates GAPmod, a surface-mount module that implements all the layout sensitive portion of a GAP8 design, along with a VGA image sensor and a Bluetooth Low Energy radio.

    • The GAPPoc-A board enables battery-operated applications developed around algorithms such as people counting, face-identification and many others to be quickly assembled and evaluated in the field.

    • https://riscv.org/blog/2019/08/risc-v-emea-roadshow-spotlight-greenwaves-technologies/
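
    • Because the GAP flow consumes a TFLite model (quantized or unquantized), here is a minimal post-training int8 quantization sketch using stock TensorFlow (the model and calibration data are placeholders; this is not the GAP toolchain itself):

      ```python
      # Hedged sketch: produces the kind of int8 .tflite artifact a GAP mapping
      # tool would take as input.
      import numpy as np
      import tensorflow as tf

      model = tf.keras.Sequential([                      # hypothetical audio-feature model
          tf.keras.layers.Input(shape=(49, 10, 1)),
          tf.keras.layers.Conv2D(8, 3, activation="relu"),
          tf.keras.layers.GlobalAveragePooling2D(),
          tf.keras.layers.Dense(12, activation="softmax"),
      ])

      def representative_data():
          # Calibration samples determine the int8 scales and zero points.
          for _ in range(100):
              yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

      converter = tf.lite.TFLiteConverter.from_keras_model(model)
      converter.optimizations = [tf.lite.Optimize.DEFAULT]
      converter.representative_dataset = representative_data
      converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
      converter.inference_input_type = tf.int8
      converter.inference_output_type = tf.int8

      with open("model_int8.tflite", "wb") as f:
          f.write(converter.convert())
      ```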

  • Breaking the Barriers to Deploy DNNs on Low-Power Hardware

    • Deeplite, named to the 2020 CB Insights AI100 List of Most Innovative Artificial Intelligence Startups, is devoted to making fundamental advancements in accessible and efficient deep learning. Our solution helps deep learning engineers and experts automatically create faster, smaller and more energy-efficient deep neural networks. Industry leaders in computer vision, augmented reality and autonomous driving use our technology to unlock new possibilities for deep learning in the real world. At Deeplite, our vision is to create a lightweight intelligence that’s accessible for daily life.

    • https://www.deeplite.ai/

    • At Deeplite, we are tackling inference optimization of deep neural networks, making them faster and energy-efficient from cloud to edge computing. Our solution leverages state-of-the-art technology from elite universities to make deep neural networks applicable for any device, and our team works hard on the iterative evolution of the science behind deep neural networks to directly improve daily life.

    • Reduces model size by up to 40x (see the back-of-the-envelope sketch below).
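
    • A back-of-the-envelope sketch of how a roughly 40x reduction can compose (illustrative arithmetic only; the parameter count, quantization, and sparsity figures are assumptions, not details from the talk):

      ```python
      # Hypothetical numbers: fp32 -> int8 quantization gives ~4x, and pruning 90%
      # of the weights gives ~10x more (ignoring sparse-index overhead), so ~40x.
      params = 25_000_000
      fp32_mb = params * 4 / 1e6              # ~100 MB baseline

      int8_mb = params * 1 / 1e6              # 4x from 8-bit quantization
      sparsity = 0.90
      pruned_mb = int8_mb * (1 - sparsity)    # ~10x from 90% pruning

      print(f"{fp32_mb:.0f} MB -> {pruned_mb:.1f} MB ({fp32_mb / pruned_mb:.0f}x smaller)")
      ```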

  • Optimizing ML Models At The Edge Made Simple

    • https://octoml.ai/

    • OctoML is an energetic new company changing how developers optimize and deploy machine learning models for their AI needs. We’re a team of machine learning systems leaders focused on making ML more efficient and easier to deploy by… applying machine learning to it!

    • OctoML is leveraging the power and traction of Apache TVM, an open source project originated by our founding team, to enable companies of every size to harness the power of deep learning without the expensive heavy lifting of tuning and securing models to each hardware configuration that a customer might need (a minimal TVM compile sketch follows below).

    • Apache TVM and Deep Learning Compilation Conference, Wed-Fri, December 2nd-4th 2020, Free Virtual Event.
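
    • A minimal sketch of compiling and running a model with Apache TVM's open-source Relay flow ("model.onnx" and the input name/shape are placeholders, and this is the upstream project, not OctoML's hosted service):

      ```python
      # Hedged sketch: import an ONNX model into Relay, compile it for a CPU
      # target, and run one inference through the graph executor.
      import onnx
      import numpy as np
      import tvm
      from tvm import relay
      from tvm.contrib import graph_executor

      onnx_model = onnx.load("model.onnx")                 # placeholder model file
      shape_dict = {"input": (1, 3, 224, 224)}             # hypothetical input name/shape
      mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

      target = "llvm"   # swap for e.g. "cuda" or a cross-compile triple for an edge CPU
      with tvm.transform.PassContext(opt_level=3):
          lib = relay.build(mod, target=target, params=params)

      dev = tvm.cpu(0)
      module = graph_executor.GraphModule(lib["default"](dev))
      module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
      module.run()
      print(module.get_output(0).shape)
      ```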

Thursday, November 19, 2020

  • Developing Edge AI Solutions For A Post-Pandemic Society

    • https://www.foghorn.io/

    • FogHorn’s Lightning™ Edge AI platform brings a groundbreaking dimension to IIoT and edge computing by embedding AI as close to the source of streaming sensor data as possible. The Edge AI software platform is a highly compact, advanced and feature-rich edge solution that delivers unprecedented low latency for onsite data processing, real-time analytics, ML and AI capabilities. It delivers the industry’s lowest total cost for computing requirements, communications services, and cloud processing and storage.

    • temperature detection, social distancing, cough detection, PPE/Mask detection

    • Flexible, customizable, integrated, actionable

  • The Evolving Landscape of Edge AI

    • https://www.coral.ai/examples/

    • Coral’s local AI technology enables new possibilities across almost any kind of industry

    • The Coral Dev Board is a single-board computer that contains an Edge TPU coprocessor. It's ideal for prototyping new projects that demand fast on-device inferencing for machine learning models. This page is your guide to get started. The setup requires flashing Mendel Linux to the board, and then accessing the board's shell terminal. Once you have terminal access and update some of the software, we'll show you how to run an image classification model on the board. If you want to learn more about the hardware, see the Dev Board datasheet. (A minimal Edge TPU inference sketch follows these notes.)

    • TPU v3, 32 to 512 TOPS, Q2 2021
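
    • A minimal on-device inference sketch for the Dev Board using the tflite_runtime interpreter with the Edge TPU delegate (the model file name and input are placeholders; Coral's examples at the link above are more complete):

      ```python
      # Hedged sketch: loads an Edge-TPU-compiled model through the delegate and
      # runs one classification on dummy uint8 input.
      import numpy as np
      from tflite_runtime.interpreter import Interpreter, load_delegate

      interpreter = Interpreter(
          model_path="mobilenet_v2_quant_edgetpu.tflite",          # hypothetical model
          experimental_delegates=[load_delegate("libedgetpu.so.1")])
      interpreter.allocate_tensors()

      inp = interpreter.get_input_details()[0]
      out = interpreter.get_output_details()[0]

      # Dummy image in the expected uint8 NHWC layout, just to exercise the pipeline.
      image = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
      interpreter.set_tensor(inp["index"], image)
      interpreter.invoke()
      scores = interpreter.get_tensor(out["index"])[0]
      print("top class:", int(np.argmax(scores)))
      ```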

  • InferX X1, The Fastest and Most Efficient Edge Inference Accelerator

    • InferX X1: World's fastest and most efficient Edge Inference Accelerator. We have just launched our first inference chip and it is the best in the world for edge inference. We are bringing up neural network models now and moving forward on the steps required for Q2/2021 chip and board production and Inference Compiler availability.

    • Embedded FPGA, or eFPGA, enables your SoC to have flexibility in critical areas where algorithm, protocol or market needs are changing. FPGA can also accelerate many workloads faster than processors: Microsoft Azure uses one FPGA accelerator for every 2 Xeons. Flex Logix provides eFPGA cores which have density and performance similar to leading FPGAs in the same process node. Our EFLX eFPGA is silicon proven in 40nm, 28/22nm, 16nm and 12nm; 6/7nm EFLX eFPGA is planned. Our eFPGA is based on a "tile" called EFLX 4K, which comes in two versions: all logic, or mostly logic with some MACs (multiply-accumulators). The programmable logic consists of LUTs (look-up tables) that can implement any Boolean function. EFLX 4K Logic has 4,000 LUT4 equivalents; EFLX 4K DSP has 3,000 LUT4s and 40 multiplier-accumulators (MACs): each MAC has a 22-bit pre-adder, a 22×22 multiplier and a 48-bit post adder/accumulator. MACs can be combined or cascaded to form fast DSP functions. (For 40nm-180nm we offer an EFLX 1K tile.) A toy model of the MAC datapath follows below.

    • depth-wise conv2d
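
    • A toy Python model of the MAC datapath described above (bit widths follow the notes; the wrap-around masking and calling convention are assumptions for illustration, not Flex Logix RTL):

      ```python
      def eflx_mac(acc, a, b, c, pre_bits=22, mul_bits=22, acc_bits=48):
          """Toy model: a pre-adder feeds a multiplier, which feeds a post
          adder/accumulator (22-bit pre-add, 22x22 multiply, 48-bit accumulate)."""
          pre = (a + b) & ((1 << pre_bits) - 1)        # 22-bit pre-adder
          prod = pre * (c & ((1 << mul_bits) - 1))     # 22x22 multiplier
          return (acc + prod) & ((1 << acc_bits) - 1)  # 48-bit post adder/accumulator

      # Accumulate a small dot product, with the pre-adder's second input held at 0.
      acc = 0
      for a, c in zip([3, 5, 7], [2, 4, 6]):
          acc = eflx_mac(acc, a, 0, c)
      print(acc)   # 3*2 + 5*4 + 7*6 = 68
      ```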

  • Implementing Edge Technologies in Retail: Walmart Case Study

    • Nvidia

  • The Era of Analog AI Compute Is Here

    • Mythic products are based on a unique tile-based AI compute architecture that features three fundamental hardware technologies – Compute-in-Memory, Dataflow Architecture, and Analog Computing. For AI developers, the Mythic SDK streamlines the preparation of trained neural networks for edge and low-latency datacenter deployments, and also performs automatic optimization and compilation of dataflow graphs for our unique architecture.

    • low power consumption, ultra-low latency, high ai performance, large weight capacity, small form factor, cost effective solution

  • Using Edge AI To Detect Repetitive Motion

    • Bosch Sensortec develops and markets a wide portfolio of MEMS sensors and solutions for applications in smartphones, tablets, wearables, AR/VR devices, drones, robots, smart home and the Internet of Things. Striving to meet the demanding requirements of the consumer electronics market, we provide best-in-class sensing solutions in terms of customer focus, quality and reliability, performance, sustainability and competitiveness.

    • https://github.com/BoschSensortec

Friday, November 20, 2020

  • Spatial Computing: A Collision of Edge and Cloud-Based Computing

    • https://github.com/magicleap

    • instance/semantic segmentation, contextual computing

    • spatial computing

    • SLAM: tracking/localization, mapping:

      • latency is critical for see through displays

      • weight is critical: cannot compensate for lack of compute with more sensors

      • thermal is critical: more sensors and more compute lead to heat

      • rigidity leads to weight; our device should be light

      • very stringent requirements for MR

    • why build a map: drift correction, robustness (pose recovery), persistence

    • feature descriptors

      • matching across large baselines and illumination changes is challenging

      • most of the SOTA methods are based on deep learning and are not feasible within the compute budget

      • our deep descriptor is optimized for SLAM and provides the best trade-off between performance and compute (a classical matching baseline is sketched after this list)

      • semantic segmentation 3d point cloud
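
    • A classical matching baseline for comparison (ORB features plus a Lowe ratio test via OpenCV; the deep descriptor itself is not public here, so this only illustrates the matching step a SLAM front end needs, and the image paths are placeholders):

      ```python
      # Hedged sketch: detect ORB features in two frames and keep only
      # unambiguous matches via the ratio test.
      import cv2

      img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
      img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

      orb = cv2.ORB_create(nfeatures=1000)
      kp1, des1 = orb.detectAndCompute(img1, None)
      kp2, des2 = orb.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
      knn = matcher.knnMatch(des1, des2, k=2)

      # Lowe ratio test: keep a match only if its best candidate is clearly
      # better than the second best, which suppresses ambiguous correspondences.
      good = [pair[0] for pair in knn
              if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
      print(f"{len(good)} putative matches for pose estimation")
      ```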

  • Building An Autonomous Network For IoT and Edge Applications

    • 5G + AI

  • Practical Edge Inferencing: Enabling fastest AI inferencing per Watt leveraging sparsity

    • https://www.graimatterlabs.ai/

    • The world’s first sparsity-enabled AI processor optimized for ultra-low latency and low power processing at the edge.

    • GrAI One drastically reduces application latency; for instance, it reduces the end-to-end latencies for deep learning networks such as PilotNet to the order of milliseconds. The GrAI One chip is based on GML’s innovative NeuronFlow™ technology that combines the dynamic Dataflow paradigm with sparse computing to produce massively parallel in-network processing (a toy sparsity sketch follows these notes).

    • GrAI Matter Labs (www.graimatterlabs.ai), a fabless semiconductor company specialized in brain-inspired technology, designs and develops fully programmable ultra-low power neuromorphic HW for sensor analytics and machine learning. The company has offices in Eindhoven (NL), Paris (FR) and San Jose (USA) and has strong relations with top-ranking research groups on neuroscience, human vision and natural computation.
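
    • A toy illustration of where sparse, event-driven processing saves work (generic activation-sparsity math, not GML's NeuronFlow implementation; sizes are arbitrary):

      ```python
      # A layer only recomputes the contribution of inputs that changed since the
      # previous frame, so the update cost scales with the number of events.
      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.standard_normal((64, 256)).astype(np.float32)   # hypothetical layer weights
      x_prev = rng.standard_normal(256).astype(np.float32)
      y = W @ x_prev                                           # dense compute on the first frame

      # Next frame: only 8 of 256 inputs change (a sparse delta / event stream).
      x_new = x_prev.copy()
      changed = rng.choice(256, size=8, replace=False)
      x_new[changed] += rng.standard_normal(8).astype(np.float32)

      delta = x_new - x_prev
      y += W[:, changed] @ delta[changed]                      # touch 8 columns instead of 256
      print("max error vs dense recompute:", float(np.abs(y - W @ x_new).max()))
      ```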

  • Large Scale Deep Learning and AI models on the Edge

    • deployment pipelines

      • there are several steps involved in the AI/ML life-cycle

      • several tools to help simplify the whole process

      • TensorFlow Extended (TFX): an end-to-end platform for deploying production ML pipelines

      • MLflow (other option: Michelangelo): an open-source platform for the end-to-end machine learning life cycle (a minimal tracking sketch follows this list)

      • Apache Airflow (other option: Kubeflow): an open-source workflow management platform

      • Dataiku Data Science Studio (DSS): a collaborative data science software platform for teams of data scientists, data analysts, and engineers to explore, prototype, build, and deliver
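
    • A minimal MLflow tracking sketch (generic; the talk only named the tools, so the sklearn model, parameter, and metric here are illustrative):

      ```python
      # Hedged sketch: log one training run's parameter, metric, and model artifact.
      import mlflow
      import mlflow.sklearn
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      X, y = load_iris(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      with mlflow.start_run(run_name="edge-model-candidate"):
          clf = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
          mlflow.log_param("max_iter", 200)
          mlflow.log_metric("test_accuracy", clf.score(X_te, y_te))
          mlflow.sklearn.log_model(clf, "model")
      ```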

  • The Edge: The Hottest Market for AI Accelerator Chips - Introducing the Kisaco Leadership Chart on AI Hardware Accelerators 2020-21: Edge and Automotive