Before you start reading, you might want to check out another article we wrote about Artificial Intelligence. If you are already versed in AI and don't need to go through the basics, feel free to skip it. This article will focus on a particular use of AI: Artificial Intelligence for embedded systems, also known as Embedded Artificial Intelligence (EAI).
In the following lines, we will begin by defining an embedded system, providing some examples, and describing its evolution within the industry. Next, we go through AI's computational demands and try to clear up the misconception that AI always needs vast amounts of processing power, e.g., for neural networks and similar systems.
We then explain how AI is implemented inside embedded systems, define embedded AI, and conclude with different use cases of embedded AI, both from across the industry and from our own developments, i.e., the UltraHUB/InfraHUB. With no further ado, let's get started!
The word embedded, in general, means something that is attached to another thing. In technological terms, an embedded system is a microprocessor or microcontroller that sits inside a device and can take in and store data through the sensors built into the hardware. The collected data is used to execute specific functions coded in its software, which the system achieves by sending particular, precisely timed signals.
These embedded systems can be standalone, mobile, network, or real-time systems, and they are all based on two equally crucial components: hardware and software. To better understand the different types mentioned, we will describe each and give examples. Before we do, it is essential to know that these are overlapping characteristics, so a system can be, for example, network, standalone, and real-time at the same time.
Standalone embedded systems are labeled as such because they don't require a connection to a host computer to function. In other words, they can produce outputs independently; examples include digital cameras, wristwatches, household appliances, and calculators. Embedded systems that cannot be described as standalone are an integrated part of larger mechanical, electrical, or electronic systems, e.g., an adaptive cruise control (ACC) system depends on a vehicle.
Mobile embedded systems, as the name indicates, can operate on the go, and as you might have deduced, they overlap with standalone embedded systems. So why not call both mobile or standalone, you might be wondering? Because not all standalone systems are mobile. Some examples will make the difference clear: digital cameras are both standalone and portable, but household appliances like refrigerators or washing machines, although standalone, cannot function on the go. Therefore, household appliances can't be categorized as mobile embedded systems, while cameras can be both.
A network embedded system works via a wired or wireless network and communicates with web servers to generate output. Home and office security systems, automated teller machines (ATMs), and point-of-sale (POS) terminals are network embedded systems; a security system, for example, relies on a network of cameras, sensors, and alarms to function correctly. Thus, embedded systems that rely on networks of other devices are classified as network embedded systems.
Real-time embedded systems must execute their assigned tasks and deliver output within strict deadlines. Therefore, real-time behavior is critical for systems like aircraft controls, aerospace and defense, and autonomous or semi-autonomous vehicles. Real-time embedded systems are further divided into soft and hard, according to how critical the output deadline is. An example of a soft real-time system is a temperature and humidity monitor; a hard real-time example is a missile defense system.
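The soft/hard distinction above boils down to how a deadline miss is handled. A minimal sketch, with a hypothetical deadline check and simulated sensor task (not any specific real-time framework):

```rust
use std::time::{Duration, Instant};

// A real-time task must finish before its deadline. In a soft real-time
// system a miss only degrades quality; in a hard real-time system a miss
// is treated as a failure and triggers a fail-safe.
fn meets_deadline(start: Instant, deadline: Duration) -> bool {
    start.elapsed() <= deadline
}

fn main() {
    let start = Instant::now();
    // Simulated fast sensor task (e.g., a temperature sample).
    let reading = 21.5_f64;
    let ok = meets_deadline(start, Duration::from_millis(100));
    // Soft real-time: log the miss and continue.
    // Hard real-time: a miss would abort into a safe state instead.
    println!("reading={} within_deadline={}", reading, ok);
}
```

The 100 ms budget is illustrative; a missile-defense controller would run on microsecond budgets enforced by a real-time OS rather than a plain check like this.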
There has been a long process to get to the current uses of embedded systems, one that started in the 1960s. During this decade, the space race between the U.S. and the former Soviet Union (U.S.S.R.) set the stage for significant advancements in technology, and one of these was the embedded system.
Dr. Charles Stark Draper created the first real-time embedded computing system, the Apollo Guidance Computer, at the Massachusetts Institute of Technology (MIT) for the Apollo Program. It was designed for autonomous data collection and to provide mission-critical calculations for the Apollo Command Module and Lunar Module.
It was not until the 1970s that embedded systems entered the commercial sphere, with Intel's first microprocessor unit released in 1971. In the 1980s, memory, input, and output components were integrated onto the same chip as the processor, forming the microcontroller.
From that point onwards, microcontroller-based embedded systems entered consumers' daily lives, e.g., credit card readers, cell phones, traffic lights, and thermostats. Today, with the rise of the Internet of Things (IoT), embedded systems are becoming increasingly sophisticated and moving into the realm of AI.
It is essential to understand where we stand today regarding computing power limitations in the field of artificial intelligence. It is common knowledge that AI requires immense processing power to obtain results; hence, it has long been developed within a "petit comité." Only a handful of big tech corporations had the economic power to run these massive AI training workloads, e.g., IBM, Google, Amazon, and Microsoft.
Although the exponential cost of AI training is real, the arrival of the Internet of Things (IoT) has led to exciting breakthroughs for AI. Today, IoT revolves around embedded systems located inside IoT devices that are, in most cases, as presented above, precise tools focused primarily on single tasks. These devices are not built to deliver vast processing power, and their bandwidth is hugely reduced.
On the other hand, AI is a vital part of the evolution of IoT devices, as it provides numerous advantages, including predictive maintenance, autonomy, and cost-efficiency. To overcome training costs that only a few tech companies can sustain over time, tech innovators have found ways to add AI to embedded systems without taking away the advantages an embedded system offers.
This achievement is possible thanks to Machine Learning (ML) and its fields of study, like neural networks and similar systems. These types of AI only require large amounts of processing power during the training stage, and the great benefit we have now is that off-the-shelf pre-trained models and libraries can be moved to lower-powered systems. All these advantages have resulted in what is known as Embedded AI or Embedded Artificial Intelligence (EAI).
Let's start by defining what EAI is. A simple description of Embedded AI is AI at the device level instead of at the application, cloud, or server level. In other words, the AI works inside the device, receiving, processing, and analyzing data and turning it into information at the source. This capacity is possible thanks to pre-trained libraries deployed to these edge devices. Therefore, the requirement for enormous processing power is mitigated, and the AI can perform based on the embedded libraries.
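To make "inference at the device level" concrete, here is a minimal sketch in Rust: a tiny linear classifier whose weights were (hypothetically) produced by an offline training run and then baked into the device firmware. The weights, threshold, and sensor values are invented for illustration and do not come from any real model.

```rust
// Hypothetical pre-trained weights and bias, "baked into" the firmware.
// On a real device these would be exported from an offline training run.
const WEIGHTS: [f32; 3] = [0.8, -0.5, 0.3];
const BIAS: f32 = 0.1;

// Run inference locally: raw (normalized) sensor data in, a decision out.
// No network round-trip to a server is involved.
fn infer(features: &[f32; 3]) -> bool {
    let score: f32 = features
        .iter()
        .zip(WEIGHTS.iter())
        .map(|(x, w)| x * w)
        .sum::<f32>()
        + BIAS;
    score > 0.0 // e.g., "anomaly detected"
}

fn main() {
    // Hypothetical sensor readings, already scaled to [0, 1].
    let normal = [0.1, 0.9, 0.2];
    let anomalous = [1.0, 0.0, 0.5];
    println!("normal -> {}", infer(&normal));
    println!("anomalous -> {}", infer(&anomalous));
}
```

Only the boolean result (or nothing at all) ever needs to leave the device, which is exactly the data-at-the-source property discussed next.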
The advantages AI provides at the embedded-system level, which are proving to be a game-changer and are thus being applied across practically all industry verticals, all stem from the capacity to keep data at the source and can be summarized as follows:
· Security and privacy - A common trait of cyberattacks is that they exploit centralized infrastructure. With embedded AI, data doesn't need to move to server centers, eliminating these centralized risks and keeping data safe and private.
· Reduction in data transfer - Removing the step of sending data to server centers to be analyzed by server-side AI reduces data transfer. EAI can, if necessary, send only the results to the cloud.
· Real-time responsiveness - If the need to send data to the cloud is eliminated, the travel time from the source to the cloud and back no longer applies. The result is real-time data-to-information, which is vital in some industries, e.g., autonomous vehicles or safety systems.
· Cost-efficiency - Again, removing data transfer from the equation means the vast costs that server centers charge for AI functionality are no longer an issue. This brings into the AI scene not just big tech companies: startups and small companies can also use the power of AI in their business.
We have been talking about pre-trained libraries that move into low-power processing units, i.e., IoT devices. Let’s look more in detail at the recent innovations that have made this possible - TinyML and TensorFlow Lite.
As we have already mentioned, one of the significant drawbacks of current centralized AI solutions is that they require large amounts of energy and bandwidth and introduce lag, an obstacle tackled by TinyML (Tiny Machine Learning). Without going too deep into its technicalities, TinyML is a field at the intersection of machine learning and embedded systems dedicated to exploring which machine-learning applications can run on small devices like microcontrollers.
The procedure first reduces and optimizes the AI model and then integrates it; based on the results attained, a decision is made about which machine-learning applications can run on a given IoT device. The benefits are the aforementioned real-time data, security and privacy, lower internet bandwidth, and low power requirements.
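One widely used form of the "reduction" step is post-training quantization: 32-bit floating-point weights are mapped to 8-bit integers plus a scale factor, cutting weight storage roughly fourfold at a small cost in precision. A minimal sketch (the weight values and scale are hypothetical, and real toolchains do this per-layer with calibrated scales):

```rust
// Quantize f32 weights to i8 using a single shared scale factor.
fn quantize(weights: &[f32], scale: f32) -> Vec<i8> {
    weights
        .iter()
        .map(|w| (w / scale).round().clamp(-127.0, 127.0) as i8)
        .collect()
}

// Recover approximate f32 values for use during inference.
fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let weights = [0.50_f32, -0.25, 0.10, 0.75];
    let scale = 0.01; // maps roughly [-1.27, 1.27] onto the i8 range

    let q = quantize(&weights, scale);
    let restored = dequantize(&q, scale);

    // 4 bytes per f32 vs 1 byte per i8: ~4x smaller weight storage.
    println!("f32 bytes: {}", weights.len() * 4);
    println!("i8 bytes:  {}", q.len());
    println!("restored:  {:?}", restored);
}
```

This kind of shrinking is what lets a model trained on powerful servers fit into the kilobytes of RAM a microcontroller offers.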
Another software package used for machine learning is TensorFlow Lite, a deep-learning framework designed for local inference, specifically on low-compute hardware, that runs ML models on compatible devices and IoT hardware. Before starting to use TensorFlow Lite, however, you have to select a suitable model for the use case.
Some use cases in which TinyML and TensorFlow Lite are providing tremendous benefits, such as improving processes, reducing costs, and increasing quality of life, include:
· Predictive maintenance
· Building automation
· Vision, motion, and gesture recognition
· Pharmaceutical development and testing
· Audio analytics for child and elderly care
· Hospital-acquired infection (HAI) identification and prevention
· Crop status monitoring
We have provided a succinct presentation of embedded systems and how AI/ML is incorporated into IoT devices. We'd now like to present our approach to this innovative breakthrough, one that can help IoT reach its full potential and enter its next step: the Internet of Everything (IoE).
Internet of Everything Corporation (IoE Corp) has developed technology based on security-first and sustainable-computing-first thinking to help IoT evolve into IoE. To achieve this feat, our tech team's core knowledge of the Internet has been fundamental and has led us to develop a decentralized software infrastructure: a groundbreaking approach that complies with sustainable computing standards and is built with the Rust programming language.
IoE Corp's reasons for choosing Rust are various, but for the case at hand, embedded AI, Rust offers embedded devices:
· Powerful static analysis
· Flexible memory management
· Fearless concurrency
· Community driven
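"Fearless concurrency" deserves a quick illustration: Rust's type system rejects unsynchronized access to shared data at compile time, so data races simply don't compile. A minimal sketch with hypothetical sensor threads pushing readings into shared storage:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Several "sensor" threads push readings into shared storage. The
// compiler forces the shared Vec behind Arc<Mutex<..>>; removing the
// Mutex would be a compile-time error, not a runtime race.
fn collect_readings() -> Vec<u32> {
    let readings = Arc::new(Mutex::new(Vec::new()));
    let mut handles = Vec::new();

    for sensor_id in 0..4u32 {
        let readings = Arc::clone(&readings);
        handles.push(thread::spawn(move || {
            // Simulated measurement from this sensor.
            let value = sensor_id * 10;
            readings.lock().unwrap().push(value);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    let mut out = Arc::try_unwrap(readings).unwrap().into_inner().unwrap();
    out.sort(); // thread completion order is nondeterministic
    out
}

fn main() {
    println!("readings: {:?}", collect_readings());
}
```

On a device juggling many sensors at once, this compile-time guarantee removes a whole class of bugs that are notoriously hard to reproduce in the field.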
With Rust's benefits, we have developed our embedded AI product, which focuses on specialized nodes: the UltraHUB/InfraHUB.
Our HUB solution is an embedded system designed to provide IoE Informed Infrastructure to smart cities and homes. It comes with a sensor pack, tailored to the situation, whose data is processed directly on the device and exposed to an AI that refines the raw data into usable information. The difference between the two resides in their location: the InfraHUB is used outdoors and the UltraHUB indoors.
Different sensors can be fitted to the Infra and Ultra HUBs, such as temperature sensors (for fridges, or even ambient temperature), vibration sensors, infrared sensors, and cameras. The AI is built using Rust and simplified machine-learning libraries. It runs on the local cluster where the sensor pack is located, so the data doesn't need to leave the acquisition area, and the required processing power is significantly reduced.
You can learn more about our decentralized technology, the Eden System, or you can schedule a meeting with our AI expert team by filling out the contact form.