1.20.2017

The History of Cloud Computing: How Did We Get to Google Apps and IaaS?

Last updated: 9.16.2020

The term “cloud” may only have reached our collective consciousness in the past few years, but the concepts behind cloud computing date back decades. From utility computing to virtualization and grid computing, distributing compute resources has long been a way to minimize the costs of IT infrastructure.

Let’s see how we moved from the mainframe to Salesforce with this quick history of cloud computing.


The Intergalactic Computer Network

Networks are obviously a vital component of any type of cloud resource. Without an internet connection, or at least some type of network connection, there is no way to access remote servers. This general idea dates all the way back to ARPANET, a predecessor to the internet and the first operational packet-switching network; it later became the first network to adopt the TCP/IP protocol suite.

J.C.R. Licklider, who envisioned what he called the Intergalactic Computer Network, inspired the concept. He imagined it as “a ‘thinking center’ that will incorporate the functions of present-day libraries with anticipated advances in information storage and retrieval…a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services.”

That sure sounds an awful lot like modern data centers.


The Birth of Virtualization

[Image: mainframe computers paved the way for shared computing]

In the 1950s, even before these concepts inspired research teams to develop ARPANET, users connected to mainframes via terminals that had no real processing power of their own. Mainframes were incredibly expensive and notoriously difficult to set up and administer, so most universities and scientific organizations had only one. This shared processing center can be considered a conceptual predecessor to the cloud.

From the 1960s into the ’70s, IBM, MIT, GE, and others focused on creating “time-sharing” solutions, another way to share the processing power of mainframes among a multitude of users. Time-sharing divided compute resources among different users, but each user was still limited by the fact that the mainframe could run only one job at a time.

During this period, IBM developed CP-40, a mainframe operating system considered the first instance of virtualization. It was also among the first systems to offer direct, interactive user access; previously, users fed instructions into the mainframe as batch jobs and waited for the results on a printout or screen. CP-40 accomplished this in part by giving each user a fresh instance of the operating system, along with dedicated memory and other resources.

In 1987, Insignia Solutions developed SoftPC, which could run DOS applications on Unix workstations, greatly reducing the cost of DOS use and development. A few years later, the software could also run on Macs and support Windows. A decade after that, Connectix released Virtual PC, which virtualized Windows on Macs.

Around the same time, VMware was founded and began selling VMware Workstation, which evolved into ESX Server by 2001. ESX Server did not require a host operating system; it ran directly on bare hardware, allowing a single server to host many virtual machine instances with better performance.

VMware is now the market leader in virtualization (and the basis for the gBlock Cloud platform). In the early years of virtual servers, however, they were used almost exclusively for internal projects; renting VMs from service providers only came later.

Grid computing, utility computing, and “as-a-Service”

The concept of renting computing power also goes back to the mainframe era. John McCarthy, who researched computing and taught at Stanford, MIT, Dartmouth, and Princeton, said in 1961 that “the computer utility could become the basis of a new and important industry.”

IBM, GE, and others provided compute power for rent throughout the mainframe era, often with a focus on time-sharing. As the industry developed, important advances in control centers, security, and the metering of computing consumption were made. But this model became less popular with the advent of the affordable personal computer and less expensive servers.

Grid computing became a popular term in the early 1990s. It referred to making compute resources work like a power grid, with simple access and metered billing, continuing the “computer utility” idea demonstrated throughout the mainframe era. By the late ’90s, projects like SETI@home were using CPU scavenging and volunteer computing to harvest spare processing power from personal computers, kind of like a cloud in reverse.

[Image: an early HP Utility Data Center advertisement]

As virtualization began to rise in the late 1990s, InsynQ, Inc. started offering hosted applications and desktops. One year later, HP got into the game, and by 2001 it was offering the “Utility Data Center,” one year after Sun Microsystems announced the Sun Cloud. The Sun Cloud wouldn’t become available until 2006, however, and HP’s Utility Data Center was actually based on allocating bare metal servers rather than virtualized ones. Amazon also launched its cloud in 2006 with the creation of Amazon Web Services, which revolutionized the delivery of virtual servers over the internet.

Salesforce debuted in 1999 and pioneered delivering software from a central data center over a network, paving the way for the rise of Software as a Service. Businesses gradually got on board, but it wasn’t until Google Apps and other enterprise applications hit the market with Web 2.0 in the late 2000s that the cloud really took off in the public consciousness.

What's next?

High-speed internet connections and advances in virtualization have allowed cloud computing to morph, spread, and innovate. Interoperability between software platforms through APIs and other links has also enabled the cloud to proliferate, as it becomes easier to use the software and operating systems you want on the platform of your choosing. Meanwhile, bare metal servers continue to fill their own niche, calling back to the days of mainframes and utility computing.

The rest, as they say, is history…except, of course, that the cloud continues to evolve and change. Software-defined technologies and hyperconverged infrastructure are making automation easier than ever. Will the cloud of 2027 look similar to that of 2017? We’re excited to find out.
