As each new technology arrives on the scene, it brings its own demands for performance, bandwidth, latency, availability, and connectivity. The result is a mish-mash of priorities and a need to manage each protocol, or even each device, individually in order to deliver the necessary performance.
An example of the difficulties facing a network administrator can be seen when comparing the needs of a large database with those of a group of Internet of Things (IoT) devices. The database is expected to deliver fast access times and maximum uptime during peak usage periods to serve web apps and other users.
This demands a solid connection and efficient network paths, both of which must be constantly monitored. Any drop in availability is serious enough to trigger an alert that must be addressed and resolved as quickly as possible.
An IoT device, on the other hand, doesn't need to be connected all the time. It may run a low-power Wi-Fi adapter on batteries, or it may even be mobile. Such a device connects only periodically, typically when it has data to transmit, and may drop off the network the rest of the time.
Applying a single flavor of alerting to both types of devices would mean either missing outages on the database or receiving a constant stream of false alarms from the IoT devices. It is certainly possible to tweak the alert settings on each device, but doing so is time-consuming, detail-heavy, repetitive work.
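One way to picture the adaptive alternative is a rule that learns each device's normal check-in rhythm from its own history instead of relying on a hand-tuned threshold. The Python sketch below is purely illustrative: the function names and the three-standard-deviation safety margin are assumptions for this example, not a reference to any particular monitoring product.

```python
import statistics
import time

def learn_checkin_baseline(checkin_times):
    """Learn a device's normal silence window from its past check-in
    timestamps: the mean gap between check-ins plus a margin of three
    standard deviations (an illustrative choice, not a standard)."""
    gaps = [b - a for a, b in zip(checkin_times, checkin_times[1:])]
    mean_gap = statistics.mean(gaps)
    spread = statistics.stdev(gaps) if len(gaps) > 1 else 0.0
    return mean_gap + 3 * spread

def should_alert(last_checkin, baseline, now=None):
    """Alert only when the current silence exceeds what history says
    is normal for *this* device: seconds for a database heartbeat,
    hours for a battery-powered IoT sensor."""
    now = now if now is not None else time.time()
    return (now - last_checkin) > baseline

# A database heartbeats every ~30 seconds; an IoT sensor reports hourly.
db_history = [0, 30, 61, 90, 121, 150]       # timestamps in seconds
iot_history = [0, 3600, 7150, 10900, 14400]  # timestamps in seconds

db_baseline = learn_checkin_baseline(db_history)
iot_baseline = learn_checkin_baseline(iot_history)

# Two minutes of silence: an outage for the database, routine for IoT.
print(should_alert(last_checkin=150, baseline=db_baseline, now=270))       # True
print(should_alert(last_checkin=14400, baseline=iot_baseline, now=14520))  # False
```

The same two lines of alerting logic serve both devices because the threshold is derived from each device's observed behavior rather than set by hand.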
The AI/machine learning approach to this problem is to deploy an application that adjusts to changing conditions and develops a performance metric on its own. That metric is automatically customized to the specific application and its needs. A large database may need more bandwidth or more processing power during periods of high demand, and an AI program can measure that demand and use predictive analytics to change the database's resource allocation when needed.
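As a rough illustration of the predictive side, the sketch below forecasts upcoming demand with a weighted moving average and recommends capacity ahead of the peak. The `PredictiveScaler` class, its window size, and the 25% headroom figure are all hypothetical choices for this example; a production system would use a far richer forecasting model.

```python
from collections import deque

class PredictiveScaler:
    """A toy predictive scaler: forecast next-interval demand as a
    weighted moving average of recent samples and resize the resource
    pool ahead of the peak instead of reacting after it."""

    def __init__(self, window=6, headroom=1.25):
        self.samples = deque(maxlen=window)  # recent demand readings
        self.headroom = headroom             # provision 25% above forecast

    def observe(self, demand):
        self.samples.append(demand)

    def forecast(self):
        # Weight recent samples more heavily than older ones.
        if not self.samples:
            return 0.0
        weights = range(1, len(self.samples) + 1)
        total = sum(w * s for w, s in zip(weights, self.samples))
        return total / sum(weights)

    def recommended_capacity(self):
        return self.forecast() * self.headroom

# Feed the scaler a ramp toward a daily peak, in queries per second.
scaler = PredictiveScaler()
for qps in [200, 240, 310, 420, 560, 700]:
    scaler.observe(qps)

# Capacity to provision ahead of the peak, in queries per second.
print(round(scaler.recommended_capacity()))
```

Because recent samples carry more weight, the recommendation rises while demand is still climbing, which is the essence of adjusting allocation predictively rather than reactively.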