March 1, 2023
Cloud computing has largely hit the mainstream. Your mom knows about it (at least vaguely — she’s probably asking you to help her put her pictures in the cloud). But IT progress continues to march on, and a new model of information processing is beginning to take shape: fog computing.
As smartphones proliferated and smart devices began to take off in earnest, cloud servers became integral to application service delivery, allowing processing to happen remotely rather than locally. At the same time, business applications embraced the cloud for its flexibility, its freedom from hardware management, its guaranteed uptime, and its greater connectivity.
So where does fog take over from cloud? When an army of connected devices requires constant processing power and connectivity. The Internet of Things is coming fast. According to IDC, the IoT will expand by 2020 to include 4 billion people online, 25 million or more apps, 25 billion embedded and intelligent systems, and 50 trillion gigabytes of data.
Fog computing is a way to manage some of those bandwidth and processing power demands by splitting the duties between the local device and remote data centers. It should sound familiar if you know the hybrid cloud model, which balances onsite virtualization with hosted services from public cloud providers.
The biggest problems fog computing solves are bandwidth and latency. There are applications where even minuscule delays in transferring data over the network can have significant negative effects. Any industrial application involving a malfunctioning piece of equipment could fall under this category: in the time it takes for data to travel to a central cloud, be processed and analyzed, and then return to the origin facility, a batch of product could be ruined.
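As a rough illustration of that split, here is a minimal sketch of how a fog node sitting near the equipment might behave. The names (read_sensor, shut_down_equipment, send_summary_to_cloud, VIBRATION_LIMIT) are hypothetical stand-ins rather than any particular product's API; the point is that the time-critical check happens locally, and only a compact summary travels on to the cloud.

```python
import random
import statistics
import time

VIBRATION_LIMIT = 4.0   # hypothetical safety threshold for this example
BATCH_SIZE = 60         # forward one summary upstream per 60 readings

def read_sensor():
    # Stand-in for a real sensor read on the factory floor.
    return random.uniform(0.0, 5.0)

def shut_down_equipment():
    # Stand-in for a local control action; no network round trip needed.
    print("Shutting down equipment")

def send_summary_to_cloud(summary):
    # Stand-in for an upload to the central cloud for long-term analysis.
    print("Uploading summary:", summary)

def fog_node_loop():
    buffer = []
    while True:
        reading = read_sensor()

        # The latency-critical decision is made here, on the local fog node,
        # instead of waiting for a round trip to a distant data center.
        if reading > VIBRATION_LIMIT:
            shut_down_equipment()

        # Only a compact summary travels upstream, which also saves bandwidth.
        buffer.append(reading)
        if len(buffer) >= BATCH_SIZE:
            send_summary_to_cloud({
                "mean": statistics.mean(buffer),
                "max": max(buffer),
                "timestamp": time.time(),
            })
            buffer.clear()
```

The shutdown decision never leaves the building, while the cloud still gets enough aggregated data to do the heavy, non-urgent analysis it is good at.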
Other applications store some data locally in case the network connection is spotty, like Uber, which keeps some information on drivers' phones so service is not interrupted.
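A minimal sketch of that local-first pattern is below. The on-device queue, file name, and upload function are invented for illustration, not drawn from Uber's actual app: events are written to local storage first, so a dropped connection only delays the sync rather than interrupting service.

```python
import json
import os

QUEUE_FILE = "pending_events.json"   # hypothetical on-device queue

def upload(event):
    # Stand-in for a network call; a real one would raise an error
    # (caught below) whenever the connection drops.
    print("Uploading:", event)

def record_event(event):
    # Write the event to local storage first, then attempt to sync.
    # If the network is spotty, the event simply stays queued on the
    # device and the app keeps working.
    queue = []
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            queue = json.load(f)
    queue.append(event)
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f)
    sync_pending()

def sync_pending():
    # Retry anything still queued; keep whatever fails for the next attempt.
    if not os.path.exists(QUEUE_FILE):
        return
    with open(QUEUE_FILE) as f:
        queue = json.load(f)
    remaining = []
    for event in queue:
        try:
            upload(event)
        except OSError:
            remaining.append(event)
    with open(QUEUE_FILE, "w") as f:
        json.dump(remaining, f)
```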