Limitations and Pitfalls of Cloud Computing

Cloud computing has become commonplace. Business people recognize that cloud-based software and services make it possible to use computing resources more efficiently. Large-capacity servers in massive server farms can run applications and services with good performance. The cloud is no longer scary, and nearly everyone uses it.

Even so, cloud services have their limitations and pitfalls.

For example, some applications, unless heavily modified, do not cope well with high latency. Other applications have network requirements so large that they do not fit the "cloud" model well. Applications where the computation sits far from the data can suffer excessive communication costs and long latencies.

Consider the case of an application developed to use artificial intelligence to recognize people on a bus. A webcam was placed at the front of the bus, and the application tracked people entering and leaving, allowing it to calculate the number of empty seats.

The application was modeled two ways. The first had the entire application living in the cloud, with data streaming from the webcam on the bus to the remote application. The second ran the program on a small single-board computer on the bus itself; it communicated with the cloud only when the last seat filled or when seats became available again after someone left the bus.
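To make the second model concrete, here is a minimal sketch of the on-bus logic. The endpoint, the seat count, and every name below are invented for illustration; the point is simply that video never leaves the bus, only tiny state-change messages do.

    # Sketch of the edge model: process video locally, report only
    # the transitions described above (all names are hypothetical).
    import random
    import time

    import requests  # assumed available on the single-board computer

    TOTAL_SEATS = 40                                # assumed bus capacity
    CLOUD_URL = "https://cloud.example.com/bus/42"  # hypothetical endpoint

    def count_passengers() -> int:
        # Stand-in for the on-board AI watching the webcam feed.
        return random.randint(0, TOTAL_SEATS)

    last_free = None
    while True:
        free = TOTAL_SEATS - count_passengers()
        # Send a message only when the bus fills up or when seats
        # open again after someone leaves a full bus.
        if free != last_free and 0 in (free, last_free):
            requests.post(CLOUD_URL, json={"free_seats": free}, timeout=5)
        last_free = free
        time.sleep(1)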

Estimates of both approaches showed that, in network savings alone, installing the small computer paid for itself in one day. In addition, interruptions in network traffic due to the roaming of the bus caused far fewer problems, since the processing happened locally. Multiply this by a fleet of buses and you see real savings.
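A rough back-of-envelope comparison makes the scale of those savings plausible. The figures below are illustrative assumptions, not the project's actual numbers.

    # Illustrative traffic comparison between the two models.
    STREAM_KBPS = 1_000        # assumed webcam stream bitrate (~1 Mbit/s)
    HOURS_IN_SERVICE = 16      # assumed daily hours of operation
    EVENTS_PER_DAY = 200       # assumed full/empty transitions per day
    EVENT_BYTES = 200          # assumed size of one status message

    stream_mb = STREAM_KBPS / 8 * 3600 * HOURS_IN_SERVICE / 1024  # MB/day
    event_mb = EVENTS_PER_DAY * EVENT_BYTES / 1024 / 1024         # MB/day

    print(f"Cloud model: ~{stream_mb:,.0f} MB of cellular data per day")
    print(f"Edge model:  ~{event_mb:.2f} MB per day")

At cellular data prices, several gigabytes of avoided traffic per bus per day can easily exceed the cost of a small single-board computer.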

“Big Cloud” vendors are now reaching out with new IoT (Internet of Things) solutions, while avoiding discussion of the Internet traffic volumes, latencies, loss of control, and potential security problems that may come with these offerings.

Even with common and established cloud applications, issues such as multi-country jurisdiction, the inability to guarantee privacy, the fact that no vendor has servers in all of the world's two-hundred-plus countries, and the lack of control over where data is stored and where processes run all factor into what data processing you allow into the cloud.

An alternative approach is to create private clouds to do the initial processing of IoT data in such a way as to limit transport and exposure.

Peer-to-peer cloud software

Peer-to-peer cloud software allows systems administrators to set up their own cloud among the computers or servers processing the data and the "Things" supplying that data. If the Things have even the tiniest networking capability, they can become a legitimate part of the cloud, allowing any application that can authenticate to them to access the Thing.

This would help keep the network traffic and processing local to the Things, thereby reducing networking costs and often improving the latency of processing the raw data.
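As a sketch of what "a legitimate part of the cloud" might look like, here is a Thing exposing one authenticated endpoint. The shared token, port, and sensor payload are placeholders; a real deployment would add TLS and proper key management.

    # A Thing answering only callers that can authenticate to it.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SHARED_TOKEN = "example-token"  # placeholder; provision per device

    class ThingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Refuse any application that cannot authenticate.
            if self.headers.get("Authorization") != f"Bearer {SHARED_TOKEN}":
                self.send_response(401)
                self.end_headers()
                return
            body = json.dumps({"sensor": "temperature", "value": 21.5}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8080), ThingHandler).serve_forever()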

Ideally, this cloud software would be Open Source. Many pundits are anxious about Things being used to help spread viruses, aid in denial-of-service attacks, and perform other dastardly goings-on. Making sure the Thing software is Open Source means the source code will be available to fix the inevitable problems of rampant Things far into the future. Of course, all software and networking the Thing uses should also include good encryption and authorization.

Other uses of peer-to-peer cloud software

While peer-to-peer cloud software is exceptionally well suited to IoT, it is also useful for client/server cloud functions. By setting up clouds internal to your own organization or community, you make more efficient use of existing hardware. Using peer-to-peer clouds in conjunction with Big Cloud vendors, an arrangement called a hybrid cloud, can reduce overall cloud costs.

Now add to this the ability to actually buy, rent, and sell additional resources that you may have or need as the situation warrants, all done automatically once the criteria for exchange have been set up.

Cloud providers often have Service Level Agreements (SLAs) that state what level of performance you will get from the resource provider. These include, but are not limited to, the percentage of time your resources will be available (often measured in "nines", e.g. 99.999%), the type of security provided, and whether and when data is backed up. These are all things people look for when choosing a provider.
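For a sense of what those "nines" mean in practice, the arithmetic is simple: each additional nine cuts the allowed downtime by a factor of ten.

    # Converting "nines" of availability into allowed downtime per year.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for nines in range(2, 6):
        availability = 1 - 10 ** -nines        # e.g. 5 nines -> 0.99999
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability:.{max(nines - 2, 0)}%} uptime -> "
              f"about {downtime:,.1f} minutes of downtime per year")

Five nines (99.999%) allows only about five minutes of downtime per year.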

Likewise, accompanying contracts state how much the services cost and what happens if the SLA is not met.

Standardize these things, make the system electronic, and use some form of electronic currency, and the systems can automatically find the resources needed and balance for the best possible fit. This frees the purchaser from having to constantly evaluate which supplier will provide the resources for their cloud needs.
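A minimal sketch of that matching step might look like the following, with invented offer data: given standardized, machine-readable offers, automatically pick the cheapest one that meets the buyer's SLA criteria.

    # Picking the cheapest offer that satisfies the buyer's SLA criteria.
    offers = [
        {"provider": "A", "nines": 4, "backups": True,  "price_per_hour": 0.12},
        {"provider": "B", "nines": 5, "backups": True,  "price_per_hour": 0.20},
        {"provider": "C", "nines": 5, "backups": False, "price_per_hour": 0.09},
    ]

    required_nines, required_backups = 5, True  # the buyer's criteria

    acceptable = [o for o in offers
                  if o["nines"] >= required_nines
                  and o["backups"] == required_backups]
    best = min(acceptable, key=lambda o: o["price_per_hour"])
    print(f"Selected provider {best['provider']} "
          f"at ${best['price_per_hour']}/hour")

In a real exchange, of course, the settlement and the penalty clauses of the accompanying contract would be automated as well.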

Thank you very much,

Jon “maddog” Hall

About Jon "maddog" Hall:

Jon "maddog" Hall is the Board Chair Emeritus of the Linux Professional Institute. Since 1969, Mr. Hall has been a programmer, systems designer, systems administrator, product manager, technical marketing manager, author and educator, currently working as an independent consultant. Mr. Hall has concentrated on Unix systems since 1980 and Linux systems since 1994, when he first met Linus Torvalds and correctly recognized the commercial importance of Linux and Free and open source Software. As the Executive Director of Linux International(TM), Mr. Hall has traveled the world speaking on the benefits of open source Software having received his BS in Commerce and Engineering from Drexel University, and his MSCS from RPI in Troy, New York.
