Paper submission deadline: January 24, 2017, AoE (anywhere on earth) (extended from January 10)
Notification of acceptance: February 27, 2017
Camera-ready papers due: March 12, 2017 (revised from March 17)
Most of the focus in public cloud computing technology over the last 10 years has been on deploying massive, centralized data centers with thousands or hundreds of thousands of servers. These data centers are typically replicated in a few instances on a continent-wide scale, in semi-autonomous zones. This model has proven quite successful at economically scaling cloud services, but it has some drawbacks. Failure of a zone can lead to service dropout for tenants if they do not replicate their services across zones. Some applications may need finer-grained control over network latency than a connection to a large centralized data center provides, or may benefit from being able to specify location as a parameter in their deployment. Nontechnical issues, such as the availability of real estate, power, and bandwidth for a large mega data center, also enter into consideration.
Another model that may be useful in many cases is to have many micro or even nano data centers, interconnected by medium- to high-bandwidth links, with the ability to manage these data centers and their interconnecting links as if they were one larger data center. This distributed cloud model is perhaps a better match for private enterprise clouds, which tend to be smaller than the large, public mega data centers, and it is also attractive for public clouds run by telecom carriers, which have facilities in geographically diverse locations with power, cooling, and bandwidth already available. It is attractive for mobile operators as well, since it provides a platform on which applications that could benefit from locality and tighter coupling to the access network can be deployed and easily managed. Applications with latency constraints, or with too much data to backhaul to a large mega data center, can benefit from distributed processing. The two models are not mutually exclusive: for instance, a public cloud operator with many large data centers distributed internationally could manage its network of data centers as a distributed cloud. What distinguishes the distributed cloud from federated clouds is that the component data centers are more tightly integrated, especially with respect to authentication and authorization, so that computation, storage, and networking resources are managed as tightly as if they were in a single large data center.
The International Workshop on Distributed Cloud Computing (DCC) is interdisciplinary, touching distributed systems as well as networking and cloud computing. It is intended as a forum where people with different backgrounds can learn from each other's fields and expertise. We want to attract both industry-relevant papers and papers from academic researchers working on the foundations of the distributed cloud.
DCC 2017 accepts high-quality papers related to the distributed cloud that fall into at least one of the following categories:
Foundations and principles of distributed cloud computing
Optimization and algorithms
Economics and pricing
Experience with and performance evaluation of existing deployments and measurements (public, private, hybrid, federated environments)
Architectural models, prototype implementations and applications
Virtualization technology and enablers (network virtualization, software-defined networking)
Service and resource specification, languages, and formal verification