AI on EDGE: What Is It and How Do You Use It?

AI on EDGE: Enhancing existing systems with intelligent processing

In this article, we’ll explore the answers to the following questions:

  • What is AI on EDGE?
  • If we already have an existing software or hardware system, can we add AI capabilities to it? If yes, how?
  • Is it possible to implement AI functionality in legacy systems?
  • How can AI functionality be integrated into a closed system that can’t be modified? How can we enhance a system if we don’t have access to the source code, or if the system belongs to another company?
  • How can AI functionality be developed if the tech stack of my software doesn’t align with the technology standards of AI solutions?

I intend to give basic answers to these questions without diving deeply into the technical aspects of their implementation. Each of these questions can serve as a foundation for a separate article in the future. The content in this article isn’t the ultimate truth. It is based on Softacom’s experience gained from participating in numerous IT projects across various sectors of the economy and working with different technologies and frameworks.

All the articles and materials we prepare can be divided into 3 categories: Academic, Tech & Development, and C-level. The first type, Academic, consists of content with a scientific focus. It is aimed at those who create technology and develop scientific or mathematical methods for future practical applications. The Tech & Development category is for developers and technical specialists who design and build software systems. The C-level category is for executives who sit at the intersection of technology and business. They seek to understand how a business can benefit from using this or that technology.

This article is tailored for C-level executives, providing an overview of potential ways to implement AI solutions within existing software and hardware systems. Our goal is to explain complex AI solutions in simple terms. We focus on tasks related to computer vision and predictive models. But the principles discussed can also be relevant for generative models used to create texts, audio, images, or videos. 

AI on EDGE

 “AI on EDGE,” “AI on the EDGE,” “EDGE AI” and similar terms refer to the use of specialized devices that handle AI-related tasks alongside an existing software or hardware system (such as one running on personal computers) without disrupting it. These devices help overcome the computing limitations of the existing system. ChatGPT can provide a more detailed explanation of what “AI on EDGE” means in different languages. 

To understand this, let’s consider an example. Imagine we have an existing video surveillance system with built-in functionality. It was purchased two years ago (and designed five years ago, long before the rise of GenAI). The system itself doesn’t need modernization: it operates as intended, providing the features implemented by the manufacturer (including video surveillance cameras, software, and specialized hardware). But what if we want to extend its capabilities beyond what was originally built into the system? 

What if a client company’s security service wants to detect weapons being brought onto the company premises, but the existing system lacks this functionality? Or suppose an organization wants to implement an anti-drone defense system using its current infrastructure – detecting drones in the air and alerting security guards.

There is also a much more interesting example: detecting people smoking at a gas station or, worse, on the premises of an oil refinery. Another scenario is identifying cigarettes and lighters in employees’ personal belongings as they pass through an X-ray security checkpoint at a facility.  

In such cases, we can enhance the system by integrating AI on EDGE devices between video cameras (or other devices that generate photo or video streams, such as X-ray scanners) and the video surveillance system servers. These devices process video signals, apply computer vision models for analysis, generate relevant insights, and transmit the necessary information in the appropriate format to the right destination. For example, this could involve triggering an alarm and displaying a real-time video feed from a specific camera on a security monitoring station.
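To make the idea concrete, here is a minimal sketch of the processing loop such a device runs. Everything in it is illustrative: the detection model is a stand-in (a real device would run inference on hardware like an NVIDIA Jetson), and the frame format, labels, and threshold are assumptions, not part of any actual surveillance system.

```python
# Hypothetical sketch of an AI on EDGE processing loop: the device sits
# between the cameras and the surveillance server, runs a detection model
# on each frame, and forwards an alert only when a target object appears.
# The model and the frame representation here are stand-ins, not a real API.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "weapon", "drone", "cigarette"
    confidence: float   # model score in [0.0, 1.0]
    camera_id: str

def detect_objects(frame, camera_id):
    """Stand-in for a computer-vision model running on the edge device."""
    # A real implementation would run neural-network inference here.
    return [Detection(label, conf, camera_id) for label, conf in frame]

def process_frame(frame, camera_id, alert_labels, threshold=0.8):
    """Return detections the security system should act on."""
    alerts = []
    for det in detect_objects(frame, camera_id):
        if det.label in alert_labels and det.confidence >= threshold:
            alerts.append(det)  # e.g. trigger alarm, show the live feed
    return alerts

# Example: a "frame" encoded as (label, confidence) pairs from the model.
frame = [("person", 0.97), ("weapon", 0.91), ("bag", 0.55)]
alerts = process_frame(frame, "cam-07", alert_labels={"weapon", "drone"})
print([a.label for a in alerts])  # prints ['weapon']
```

The key design point is that the existing surveillance system is untouched: the edge device only adds a filtering and alerting layer on top of the video streams it already receives.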

AI on EDGE devices are specialized, compact computers designed for efficient AI processing: running trained models at high speed (in our case, detecting objects like drones or weapons in images in real time) and, on more capable devices, training or fine-tuning neural networks (models). 

A great example of AI on EDGE devices is NVIDIA Jetson solutions (https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/). With these devices, there is no need to upgrade existing computing resources (such as computers or servers) that are not optimized for AI workloads. Instead, all AI functionality runs directly on AI on EDGE devices. 

Also, we don’t need to use cloud computing, which can be slower and more expensive. Instead, we deploy our AI model directly on an AI on EDGE device, where it performs tasks such as detection, classification, and prediction. We don’t run it on ordinary computers or servers, which lack the GPU power that AI processing requires. In a way, this is similar to specialized cryptocurrency mining hardware, which uses GPU-based architectures instead of CPUs that are not optimized for intensive computations.

Read how Softacom explores AI in computer vision and video processing.

If we have an existing software or hardware system, can we add AI functionality to it? And if yes, how?

Part of this question has already been answered above. But to give a clear answer – YES, it is possible.

We can develop an AI solution as a standalone system, deploy it alongside the existing infrastructure, and use the same data streams the current system processes. The AI solution can then analyze the data and either integrate with the existing system through APIs or SDKs or transmit its results to a third-party system. 

How can AI functionality be implemented in a closed system that cannot be modified, especially if there is no access to the source code and the system belongs to another company?

In this case, AI on EDGE architecture is a suitable solution. As mentioned earlier, we can deploy a separate system with AI functionality that runs in parallel with the existing one and integrate the two using available APIs and SDKs – if such interfaces exist. If they don’t, the challenge becomes more complex. One of the options is to contact the manufacturer of the existing system to explore integration options and ensure compliance with the licensing agreement.

It should be noted that developing AI functionality for object detection, defect identification, or user behavior prediction is not just about finding a pre-trained model and deploying it on a device like NVIDIA Jetson using its official documentation. This is a complex process that includes the following stages:

  • The first step is to conduct a business analysis and describe the user story or concept that we want to implement in our system. After that, it becomes clear which neural networks (models) we can use, how they will be trained, what data will be used, and what technology stack is required.

    At Softacom, we call this approach Proved AI Architecture. We don’t experiment on our clients. We invest in internal research and development to find the most effective AI technologies and architectures tailored to particular tasks. 
  • After the specification is clear, we move on to developing a Proof of Concept (PoC). This approach minimizes the resources and budget spent on building a full system and infrastructure until we validate that the AI core works as expected with the required accuracy. During this process, we might discover that the required functionality cannot be implemented with the available resources – for example, because no suitable dataset is accessible, or because the cost of computing resources could run into hundreds of thousands of dollars. This step is broken down into the following substeps:
    • Finding a pre-trained model that fits our task, or a generic pre-trained model with a suitable architecture (for example, one intended for GenAI versus one intended for computer vision). We can modify the model’s architecture if necessary (and if possible);
    • Collecting and preparing a dataset if we are going to train the model. We may use an open-source dataset as a foundation or purchase a licensed dataset. If our client or company already has a relevant dataset, we check if it meets the task’s requirements. If necessary, we also generate synthetic images to expand our dataset;
    • Using our in-house Softacom AI Lab, we perform image annotation;
    • Within Softacom AI Lab, using either Softacom’s or the client’s computing resources, we perform the model retraining;
    • The most interesting step is to verify how well the model works. To do this, we use a feature called Runs, also within Softacom AI Lab. We select a trained model, or multiple models if we want to run processing in multi-model mode, choose test videos, and run an emulation. As the uploaded videos play one after another, we can see how our future system will work;
    • If we are not satisfied with something, we repeat the cycle – expanding the dataset, performing additional training, and testing the result again;
    • In almost all cases, it is not enough to rely on the model alone; post-processing with various algorithms is required, such as eliminating detection noise, confirming results based on movement direction, and more. To achieve this, we either use existing algorithms from Softacom AI Lab or develop new ones. Later, these same algorithms will be integrated into the current system;
    • Once we are happy with the results – we move on to the implementation phase. Validating and refining the model beforehand significantly reduces costs and speeds up AI deployment;
  • Selecting the appropriate AI on EDGE solution and hardware, deploying our solution onto the AI on EDGE device, and developing the functionality needed to integrate its logic into a decision-making system or a visualization tool;
  • Developing or refining the software solution to process the results we get from the AI on EDGE device;
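The post-processing mentioned in the steps above can take many forms. As one illustration (not Softacom's actual algorithm), here is a common technique for suppressing detection noise: confirming an object only when the model reports it in at least k of the last n frames. The class name and parameters are invented for this sketch.

```python
# Illustrative noise-suppression sketch: a detection is "confirmed" only
# when it appears in at least min_hits of the last `window` frames, so a
# single spurious model output never triggers an alarm on its own.

from collections import deque

class TemporalFilter:
    def __init__(self, window=5, min_hits=3):
        self.window = deque(maxlen=window)  # raw labels per recent frame
        self.min_hits = min_hits

    def update(self, labels_in_frame):
        """Feed one frame's raw labels; return the confirmed labels."""
        self.window.append(set(labels_in_frame))
        counts = {}
        for labels in self.window:
            for label in labels:
                counts[label] = counts.get(label, 0) + 1
        return {l for l, c in counts.items() if c >= self.min_hits}

f = TemporalFilter(window=5, min_hits=3)
frames = [{"drone"}, set(), {"drone"}, {"drone"}, {"bird"}]
results = [f.update(frame) for frame in frames]
print(results[-1])  # prints {'drone'}: seen 3 times; "bird" only once
```

Confirming results based on movement direction, also mentioned above, works on a similar principle but tracks object positions across frames rather than just label counts.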

This is just one part of the overall process we follow when developing AI solutions.

Is it possible to implement AI functionality for legacy systems?

It is possible in almost 100% of cases. The approach is similar to the one used for closed systems, but if our client owns the software and has the legacy system’s source code, the task becomes much simpler.

Softacom is an expert in modernizing and migrating legacy systems, and therefore we can integrate AI functionality into the existing legacy system at the required level and depth. This is possible if the client’s system is built using programming languages and development environments such as Object Pascal/Delphi, C#/Visual Studio, C++/C++ Builder, and others. New AI functionality can become a fully integrated part of the existing system.

How can I develop AI functionality if the technology stack of my software product does not match the standard of the AI tech stack?

It is common for a system developed 15 years ago to still work successfully, satisfy the requirements of customers and end users, ensure security, and bring profit to its owner. But it may have been built with a programming language or framework that differs from the much more recent technologies used for AI development.

Most AI on EDGE devices run on Python, with some using C/C++ to a lesser extent. The core idea behind AI on EDGE devices is to bring AI functionality closer to the hardware, or “on the edge.” Thus, we can create a bridge between an AI on EDGE device and an existing system (for example, one developed in .NET/C#/Visual Studio or Object Pascal/Delphi) by using a technology-independent communication protocol and interface, such as TCP/IP, WebSockets, XML, JSON, REST API, and others.
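The bridge idea above can be sketched very simply: the edge device serializes its results into a technology-neutral JSON message, and the legacy system, whatever its language, parses that message with its own libraries. The field names below are illustrative assumptions, not a real protocol.

```python
# A minimal sketch of the "bridge": the edge device encodes a detection
# result as JSON, which a legacy system (Delphi, C#, etc.) can receive
# over TCP/IP, WebSockets, or a REST API and parse on its own side.

import json

def build_alert_message(camera_id, label, confidence):
    """Edge-device side: encode a detection result for transport."""
    return json.dumps({
        "type": "alert",
        "camera_id": camera_id,
        "object": label,
        "confidence": round(confidence, 3),
    })

def parse_alert_message(raw):
    """Receiving side: decode the message, regardless of tech stack."""
    msg = json.loads(raw)
    return msg["camera_id"], msg["object"], msg["confidence"]

raw = build_alert_message("cam-07", "drone", 0.912)
print(parse_alert_message(raw))  # prints ('cam-07', 'drone', 0.912)
```

Because the contract is just a text message over a standard protocol, neither side needs to know what language or framework the other is written in.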

In conclusion, the development of AI solutions using AI on EDGE devices is currently a promising and, most importantly, cost-effective approach for advancing computer vision, generative AI, predictive analytics, and other applications.

