Camera-based smart IoT sensors are soon going to be everywhere. The recent success of Deep Neural Networks (DNNs) has opened the door to new computer vision and AI applications. While initial deployments use high-end server-class hardware with expensive and power-hungry GPUs, optimizations and algorithmic improvements will soon make running the inference side of DNNs on low-cost Edge Computing devices commonplace.
Today, a $3 ARM SoC comes with four 64-bit cores running at 1.2 GHz, which, with the right software, provides plenty of capacity for hosting advanced deep convolutional neural networks for tasks such as person detection for facility management or security, or face recognition for employee locating or security. Technology trends, driven by mass-consumer markets like smartphones, tablets, and set-top boxes, dictate that ever more processing power will come in ever-smaller packages. These sensors will need software, and this software needs to be continually updated, both to keep pace with the rapid development of machine learning/AI methods and datasets, and to keep their operating system and middleware installations tamper-proof and secure. A camera sensor that tracks meeting room occupancy or reports burglars is useful; a spy camera that transcribes overhead presentations and emails them to competitors is a catastrophe. The Vertigo.ai Smart Camera Platform allows seamless deployment, monitoring, and updating of hundreds or thousands of sensors, at multiple physical locations, controlled by a logically centralized, cloud-hosted control plane. On top of this platform, we provide cutting-edge, purpose-trained deep neural networks for applications such as people counting, person tracking, face recognition, and vehicle identification.
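As a rough back-of-envelope illustration of why such a SoC has the headroom for on-device inference, the sketch below estimates achievable frame rate. All numbers here are assumptions for illustration (NEON issue width, attainable efficiency, and the workload of a MobileNet-class detector), not benchmarks of any particular chip or model:

```python
# Back-of-envelope estimate of DNN inference throughput on a small ARM SoC.
# Every constant below is an illustrative assumption, not a measurement.

cores = 4                     # quad-core SoC
clock_hz = 1.2e9              # 1.2 GHz per core
flops_per_cycle = 8           # assumed: 128-bit NEON fused multiply-add on fp32
peak_flops = cores * clock_hz * flops_per_cycle

efficiency = 0.25             # assumed fraction of peak reached by tuned kernels

model_macs = 0.57e9           # assumed: MobileNet-class detector, ~0.57 GMACs/frame
model_flops = 2 * model_macs  # one multiply-accumulate = 2 FLOPs

fps = (peak_flops * efficiency) / model_flops
print(f"estimated throughput: ~{fps:.1f} frames/second")
```

Even with these conservative assumptions the estimate lands at several frames per second, which is ample for occupancy counting or security alerts that do not require full video frame rates.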
Dumb IP cameras that communicate back unfiltered video streams will soon be either replaced by Smart Cameras that run vision algorithms directly on their built-in SoC hardware and communicate back higher-level concepts, or augmented with Smart PoE switches that turn groups of dumb cameras into smart cameras. Where today's large camera installations, such as shopping malls or airports, gather all video streams into a single high-end server and storage system (from enterprise IT providers like Hitachi), future installations will look more like distributed sensor networks: essentially hundreds or thousands of small camera-coupled embedded Linux systems, with the UI and storage functionalities often deferred to the cloud.
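To make "higher-level concepts" concrete: instead of streaming megabits per second of video, a smart camera can emit a few hundred bytes of structured data per event. A minimal sketch of such an event payload follows; the field names and schema are hypothetical illustrations, not part of any actual sensor API:

```python
import json
import time

def detection_event(sensor_id, label, value, confidence):
    """Package an on-device inference result as a compact event.

    A few hundred bytes of JSON per detection replaces a continuous
    unfiltered video stream, and the raw frames never leave the device.
    """
    return {
        "sensor": sensor_id,
        "timestamp": time.time(),
        "event": label,        # e.g. "person_count"
        "value": value,
        "confidence": confidence,
    }

payload = json.dumps(detection_event("cam-042", "person_count", 3, 0.91))
print(payload)
```

Beyond bandwidth savings, this design keeps raw imagery on the device, which narrows the attack surface and simplifies privacy compliance compared with shipping full video to a central server.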
This new development creates new technical challenges, because installing and maintaining hundreds or thousands of small Linux machines, and keeping their system software and DNN models up to date, requires a skillset that the typical IT admin will not possess. But it also presents an opportunity for newcomers who understand both the embedded AI and large-scale distributed systems problem domains well, and who can deliver fully integrated solutions comprising everything from neural network training, through deployment to diverse embedded platforms, to management at scale.
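At its core, keeping system software and DNN models current across a fleet is a reconciliation problem: each sensor reports its installed component versions, and the control plane compares them against a desired state to decide what to push. A minimal sketch of that decision step, assuming semantic "major.minor.patch" version strings (all names and versions here are hypothetical):

```python
def parse_version(v):
    """Parse a 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def pending_updates(installed, desired):
    """Compare a sensor's installed components against the control
    plane's desired state; return only the components needing an update.
    Components the sensor has never installed default to version 0.0.0."""
    return {
        name: version
        for name, version in desired.items()
        if parse_version(installed.get(name, "0.0.0")) < parse_version(version)
    }

# Example: one OS patch pending, one model current, one model missing.
installed = {"os": "1.4.2", "face_model": "2.0.0"}
desired = {"os": "1.4.3", "face_model": "2.0.0", "person_model": "1.1.0"}
print(pending_updates(installed, desired))
# -> {'os': '1.4.3', 'person_model': '1.1.0'}
```

A real control plane would layer signing, staged rollouts, and rollback on top of this comparison, but the reconciliation loop itself stays this simple, which is what makes managing thousands of devices from one logically centralized point tractable.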