Incremental Learning for Edge AI

Incremental Learning

According to Wikipedia, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It is a dynamic technique of supervised or unsupervised learning that can be applied when training data becomes available gradually over time, or when its size exceeds system memory limits. Algorithms that facilitate incremental learning are known as incremental machine learning algorithms.

Incremental Learning for Edge AI

Edge AI is a system that uses Machine Learning algorithms to process data generated and held locally by a hardware device. A constant internet connection is not required to process such data. The device makes decisions in real time, in a matter of milliseconds, so communication costs are considerably lower than in the cloud model. In other words, Edge AI brings the data and its processing to the closest point of interaction with the user, whether that is a computer, an IoT device, or an Edge server.

Edge AI- Incremental Machine Learning and IoT

In recent years, applications of Artificial Intelligence have increased exponentially around the world. With the growth of corporate activity and now the pandemic, cloud computing has become a central part of AI's evolution. In addition, as more customers become aware of new technology and the number of people using connected devices grows, businesses increasingly see the need to bring technology to those devices, closer to customers, to serve their needs more efficiently. This is why the Edge computing market is expected to continue growing in the coming years.


Edge AI Software Market, By Region (USD Million)

Alexa- an example of Edge AI

Alexa, Google Home, and Apple HomePod are classic examples of systems using Edge AI. Alexa's speakers learn words and phrases through Machine Learning and store them locally on the device. When the user says something to an assistant such as Siri or Alexa, the voice recording is sent to an Edge network, where it is converted to text via AI and a response is generated. The response time is reduced to less than 400 milliseconds, compared with the response time of a system without Edge AI.

How Alexa (Amazon Echo) works

Working of an Edge AI System

Edge computing works by pushing data, applications, and computing power away from the centralized network to its edges, so that fragments of information lie scattered across a distributed network of servers. Its target users are any internet clients using commercial internet application services. Once available only to large organizations, it is now within reach of small and medium organizations thanks to cost reductions in large-scale implementations.

Technologies Used

  • Mobile Edge Computing: Mobile edge computing, or multi-access edge computing, is a network architecture that places computational and storage resources within the radio access network (RAN) to enhance network efficiency and the delivery of results to end-users.
  • Fog Computing: This is a term used to describe a decentralized computing infrastructure that extends cloud computing to the edge of a network while also placing data, compute, storage, and applications in the most logical and efficient place between the cloud and the origin of the data. This is sometimes known as being placed “out in the fog.”
  • Cloudlets: These are mobility-enhanced, small-scale cloud data centers present at the edge of a network and represent the second tier in a three-tier hierarchy: Mobile or smart device, Cloudlet, and Cloud. The purpose of cloudlets is to improve resource-intensive and interactive mobile applications by giving more efficient computing resources with lower latency to mobile devices within a close geographical range.
  • Micro Data Centers: These are smaller, rack-level systems that provide all the essential components of a traditional data center. It is estimated that micro data centers will be most beneficial to small industries.


The 4-tier Fog Computing Architecture example – Smart City


Benefits of Incremental Learning

The unique ability of an edge AI system to do incremental learning, i.e. to continuously train the ML model on streaming data, makes it possible to learn an individual model for each device. This simplifies the learning task, since each model only needs to explain what is normal for itself, not how it varies compared with all other devices.
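A minimal sketch of what "one model per device" can look like, using Welford's online mean/variance algorithm as a stand-in for a real ML model (the device names and sensor readings below are invented for the example):

```python
import math
from collections import defaultdict

class RunningStats:
    """Welford's online algorithm: mean/variance updated one sample at a time."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def std(self):
        return math.sqrt(self.m2 / self.n) if self.n > 1 else 0.0

    def is_anomaly(self, x, k=3.0):
        # "Abnormal" is judged only against THIS device's own history.
        return self.n > 10 and self.std > 0 and abs(x - self.mean) > k * self.std

models = defaultdict(RunningStats)  # one incremental model per device

# Invented stream: pump-A runs near 20, pump-B near 82.
stream = [("pump-A", 20.0 + (0.1 * i) % 1) for i in range(50)] + \
         [("pump-B", 80.0 + i % 5) for i in range(50)]

for device_id, reading in stream:
    models[device_id].update(reading)  # each model sees only its own data

# The same reading is abnormal for one device yet normal for another.
print(models["pump-A"].is_anomaly(85.0), models["pump-B"].is_anomaly(85.0))
```

Each model only has to describe its own device's normal behaviour, so a reading of 85.0 is flagged for pump-A but not for pump-B, without any cross-device comparison.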

Edge Learning comes with several benefits:

  • Learning is done individually, suited to the use of each device/asset.
  • You can learn at work, continuously, after deployment – Tamagotchi-style.
  • Little or no data is required to start training the ML model.
  • The ML model is trained on real-time, high-frequency sensor data – valuable for automation.
  • Less data is stored both in the cloud and at the edge, which saves CPU cycles and battery through less communication and fewer read/writes.
  • The model can adapt to changing configurations or environments.

IoT devices are often mobile and may therefore operate in very different environments; incremental learning lets the model adapt to a new site or a new rental customer.
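One simple way such adaptation can be sketched is with an exponentially weighted moving average, which gradually forgets the old environment as new data arrives (the sites, readings, and smoothing factor below are illustrative, not a real deployment):

```python
class EWMAModel:
    """Exponentially weighted mean: recent data dominates, so the model
    drifts toward a new environment instead of staying anchored to the old one."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha  # higher alpha = faster forgetting
        self.mean = None

    def update(self, x):
        self.mean = x if self.mean is None else (1 - self.alpha) * self.mean + self.alpha * x

model = EWMAModel(alpha=0.1)

for _ in range(200):   # deployed at site A: sensor reads around 20
    model.update(20.0)
site_a_mean = model.mean

for _ in range(200):   # device moved to site B: sensor reads around 50
    model.update(50.0)

# The model's notion of "normal" has shifted from site A to site B.
print(round(site_a_mean, 1), round(model.mean, 1))
```

After relocation, no manual retraining is needed: the continuing stream of readings alone pulls the model's estimate from the old site's normal to the new one's.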


Technical writer at Aipoint. She has extensive knowledge of Deep Learning and Python.