Table of Contents
Overview - Service Providers Have the Bandwidth to Build a Better Cloud
By Dor Skuler, Vice President, Cloud Solutions, Alcatel-Lucent

Opportunities in the Cloud
By David Frattura, Senior Director of Strategy, Cloud Solutions Enablement, Alcatel-Lucent

Reliable Cloud Computing: Key Considerations
By Eric Bauer, Reliability Engineering Manager; Randee Adams, Consulting Member of Technical Staff; Alcatel-Lucent

The Cloud Declaration of Independence
By Cindy Bergevin, Head of Cloud Solution Marketing, Alcatel-Lucent

Cloud Research Results Identify Fears and Opportunities in Cloud Services Adoption
By Susan J. Campbell, TMCnet Contributing Editor

Enterprise Cloud Services Come of Age
By Xavier Martin, Vice-President, Messaging and Communications; Annie Ohayon-Dekel, Director, Business Strategy and Product Marketing; Alcatel-Lucent Enterprise

How Service Providers Can Capitalize on the Enterprise Cloud Market
By Beecher Tuttle, TMCnet Contributor

A Classroom in the Cloud
By Debbie Bradshaw, Alcatel-Lucent; Mohit Bhargava, President, LearningMate

LTE Cloud Enhances Public Safety Communications
By Kevin Wendt, Global Director, Network Management Services and Solutions, Alcatel-Lucent

Distributed Denial of Service Protection: The Alcatel-Lucent Cloud-based Approach
By Susan J. Campbell, TMCnet Contributing Editor

More Information
eBook
Opportunities in the Cloud

By David Frattura, Senior Director of Strategy, Cloud Solutions Enablement, Alcatel-Lucent

Communications service providers are well positioned to offer a new class of cloud. The carrier cloud brings the attributes of the service provider's carrier-grade network to cloud computing and cloud communications, delivering next-generation cloud services with guaranteed performance and availability. "Carrier cloud" is becoming the accepted industry term for this class of cloud, used by analysts and other vendors alike. By leveraging the network in a dynamic way for cloud services, and building a cloud "inside" the network, the industry can combine the benefits of the public cloud with those of a managed network that provides high-end performance, including low latency, guaranteed bandwidth and VPNs. Today, service provider networks are still built as silos of dedicated equipment, with each silo providing a specific telecommunication service, such as voice, SMS or video streaming. Each silo is provisioned for peak demand; in other words, most of the time the system is running at less than 20% of capacity. Furthermore, each service is expensive and time-consuming to set up, with typical implementations of new services taking 18 to 24 months.
With the move to cloud technology, services are no longer delivered from segregated silos that require dedicated equipment, applications and resources. Instead, all services are delivered from a single, elastic environment. Equipment and applications are virtualized. Processes and operations are simplified. New services can be added quickly, easily and cost-effectively. These efficiencies are important because service providers have complex operations. They have all of the typical enterprise applications. In addition, they have to support all of the equipment and applications needed to operate and manage their networks and services. Service provider applications are much more demanding than typical enterprise applications. They require careful cloud deployments that define compute and storage characteristics as well as strict key performance indicators. Here, the network becomes a crucial asset. Because service providers have granular control of their networks, they can use that power to virtualize their operations and transform services. Service providers that make the effort to move to the cloud will reap the benefits. Alcatel-Lucent Bell Labs researchers found that moving to a cloud environment reaches deep into service providers' network and service delivery layers, affecting up to 80 percent of their network equipment and software. They also found that moving to the cloud reduces both capital and operating expenditures (CAPEX and OPEX).
Service providers come to the cloud services market with deep service delivery experience, strong customer relationships and the right infrastructure. Together, these advantages set service provider cloud offerings very clearly apart from data center offerings. They also help service providers make the most of the incremental revenue opportunities in the cloud.
Many service providers have become important partners to their enterprise customers and a trusted source for business communications services. The cloud is an opportunity to sell new products to a well-established customer base. Cloud services are also a natural extension of the leased line, business virtual private network (VPN), and wavelength services they're already providing to enterprises.
Physical footprint
Footprint is a huge natural advantage for service providers. With central and distributed resources that cover broad territories, service providers can place services in the location that offers the best performance for cost. For example, by adding IT infrastructure components to their distributed central offices, service providers can take the cloud much closer to customers. Latency drops. Bandwidth costs drop. And customers can easily visit and inspect their local data center when needed. In some cases, using distributed resources to provide cloud services will make the most sense. In other cases, using centralized resources will be the better approach. The key is that service providers can smartly align their assets and costs with their customers' demands and willingness to pay for guaranteed performance. Because they also own the access network, service providers can guarantee service quality and performance from the virtual machine in the data center all the way to the customer premises. To learn more about the carrier cloud, read the full white paper.
Reliable Cloud Computing: Key Considerations

By Eric Bauer, Reliability Engineering Manager; Randee Adams, Consulting Member of Technical Staff; Alcatel-Lucent

Cloud-based services offer greater flexibility and economy than many traditional information services. But can they meet, or even exceed, end users' expectations for reliability and availability? Cloud computing offers a compelling business model for information services. Consequently, many new applications are being developed explicitly for cloud deployment, while many traditional applications will eventually evolve to the cloud. End users want these cloud-based services to be at least as reliable and available as traditional offerings. To meet these expectations, cloud service providers and cloud consumers need to gain a solid understanding of the unique challenges of cloud computing and learn how to mitigate the risks. The new challenges are primarily related to virtualization, rapid elasticity and resource sharing. These capabilities enable a new level of flexibility, convenience and economy, but they also make cloud computing inherently more complicated than traditional computing, and this complexity adds more areas for potential failures. Delivering reliable and available cloud-based services must start with an awareness of how operations have changed in the cloud, including recognition of where new points of vulnerability lie. For example, load distribution, overload control and data management are all more complex in the cloud, and new usage models enabled by cloud computing can increase the impact of a site or server failure. After carefully identifying these issues, cloud service providers and cloud consumers can take advantage of architectural opportunities for mitigating the risks. When this approach is backed by traditional engineering diligence, cloud-based services have the potential to meet or exceed the service reliability and availability requirements of traditional deployments.
Satisfying these requirements can be crucial for all players in the cloud environment, where accountability is often split between cloud service providers and cloud consumers and where standards bodies are still working to establish clear outage measurement rules.
To benefit from elastic growth and other new capabilities offered by cloud computing, many traditional applications will be evolved to a cloud environment over several releases. The following usage scenarios, organized from the simplest to the most complex, illustrate a variety of advantages following virtualization.
While the new usage scenarios of the cloud deliver important benefits, they also present new challenges; for example:

Co-residency: This server consolidation usage model makes it more difficult to predict application performance, and vulnerability to service impairments due to "noisy neighbor" applications is greater. These challenges can be mitigated with a fully tested, high-availability architecture that supports failure containment and recovery of each of the applications.

Multi-tenancy: Multi-tenancy has the same cost benefits and challenges as co-residency. But the challenges are more pronounced because failures may impact multiple user populations. Multi-tenancy also has an increased security challenge, as user populations must be kept completely separate. To mitigate these challenges, a high-availability architecture is required. It should support rigid failure containment and independent service recovery. Workflows should be tested under various failure scenarios. Robustness testing must ensure that each tenant is appropriately isolated; security testing should make sure that there is no cross-tenant access to applications or resources.
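As a minimal illustration of the cross-tenant isolation testing described above, the following Python sketch models a tenant-scoped store and checks that one tenant cannot read another tenant's data. The class and function names are hypothetical, not part of any Alcatel-Lucent product:

```python
class TenantScopedStore:
    """Toy multi-tenant store: every read and write is scoped to a tenant ID."""

    def __init__(self):
        self._data = {}  # keyed by (tenant_id, key)

    def put(self, tenant_id, key, value):
        self._data[(tenant_id, key)] = value

    def get(self, tenant_id, key):
        # A tenant can only see keys written under its own tenant ID.
        if (tenant_id, key) not in self._data:
            raise KeyError(f"{key!r} not visible to tenant {tenant_id!r}")
        return self._data[(tenant_id, key)]


def check_isolation(store):
    """Security-test style check: tenant B must not read tenant A's data."""
    store.put("tenant-a", "secret", "a-only")
    try:
        store.get("tenant-b", "secret")
        return False  # isolation violated
    except KeyError:
        return True   # access correctly denied
```

A real robustness suite would run checks like `check_isolation` against every shared resource (applications, storage, configuration), not just a key-value store.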
In a cloud environment, service load can potentially be distributed seamlessly across multiple servers, locations and cloud providers, with the assistance of load balancing mechanisms and policies. The challenge is to satisfy wide-ranging requirements, such as subscriber affinity, redundancy, latency, availability, security, capacity and even regulatory issues. For example, an appropriate load distribution architecture should consider the number of application instances, their proximity to end users, and application and data redundancy. Policies must also be clearly defined, so service distribution can be managed in accordance with latency, regulatory and security requirements. The distance between data centers should be considered, too, particularly when data exchanges are frequent and high transactional reliability is required.

Overload control: To handle overload events, traditional systems set capacity thresholds, then shed or reject traffic as needed to keep the system from crashing. Cloud management mechanisms, however, can add new instances of the application to share the growing traffic load. For example, rapid elasticity can be used to address traffic spikes and shorten the time a system is in overload as extra service capacity is brought online. Native overload control mechanisms should also be present to handle any excess traffic during the interval before scaling activates and the new instances are sharing the traffic. In addition, mechanisms should exist to manage traffic when the offered load exceeds maximum elastic capacity (for example, license or policy limits).

Rapid elasticity: Besides supporting overload control, this powerful mechanism enables more efficient use of hardware resources. It can automatically increase (or decrease) the resources of a virtual machine (vertical growth) or expand (or reduce) the number of virtual machines (horizontal growth).
Horizontal growth can occur within the limits of a single data center or grow into an additional data center. Outgrowth expands capacity by adding resources in other cloud data centers. Effective use of rapid elasticity is based on resource monitoring, policies and thresholds. Hysteresis (that is, different growth and shrink thresholds) should be used to prevent capacity oscillations. To mitigate the risks associated with rapid elasticity, systems must be thoroughly tested and cloud-based applications must be designed to:

Manage scaling and descaling
Accurately monitor resource utilization and performance
Support well-defined policies, backed by robust trigger mechanisms to control growth and contraction
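The hysteresis idea above can be sketched in a few lines of Python. The thresholds and instance limits here are illustrative assumptions, not values from the text:

```python
def scaling_decision(utilization, instances,
                     grow_above=0.75, shrink_below=0.40,
                     min_instances=2, max_instances=16):
    """Return the new instance count for one evaluation cycle.

    Separate grow and shrink thresholds (hysteresis) leave a dead band
    between them, so capacity does not oscillate when utilization
    hovers near a single threshold.
    """
    if utilization > grow_above and instances < max_instances:
        return instances + 1   # horizontal growth
    if utilization < shrink_below and instances > min_instances:
        return instances - 1   # contraction
    return instances           # inside the dead band: hold steady
```

With these example thresholds, a fleet at 60% utilization neither grows nor shrinks, while utilization above 75% triggers growth, up to the policy limit of 16 instances (the "license or policy limits" case mentioned earlier).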
For service reliability, all data must be redundantly stored and managed to survive the failure of a component. In addition, data synchronization presents new challenges, because cloud transactions can span multiple application instances and be stored in several locations. ACID and BASE mechanisms are typically used to keep data synchronized. ACID (atomicity, consistency, isolation, durability) properties are essential for transactional reliability and immediate consistency. However, they can be resource intensive and introduce latency into transactions. BASE (basically available, soft state, eventual consistency) properties enable simpler solutions that are less resource intensive. They are appropriate when data consistency can be achieved over longer time periods. For example, they are well suited to many web services, such as e-mail.
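To make the ACID-versus-BASE trade-off concrete, this toy Python model (entirely illustrative, not from the text) contrasts a synchronous write, where every replica is updated before the call returns, with an asynchronous write that only becomes consistent after a background replication step catches up:

```python
class ReplicatedKV:
    """Toy primary/replica key-value store for contrasting consistency models."""

    def __init__(self, n_replicas=2):
        self.primary = {}
        self.replicas = [{} for _ in range(n_replicas)]
        self.pending = []  # replication backlog for asynchronous writes

    def write_sync(self, key, value):
        # ACID-style immediate consistency: all copies updated before returning.
        # Simple, but each write pays the latency of updating every replica.
        self.primary[key] = value
        for replica in self.replicas:
            replica[key] = value

    def write_async(self, key, value):
        # BASE-style: primary updated now, replicas updated later.
        self.primary[key] = value
        self.pending.append((key, value))

    def replicate(self):
        # Background step that drains the backlog (eventual consistency).
        for key, value in self.pending:
            for replica in self.replicas:
                replica[key] = value
        self.pending.clear()

    def consistent(self, key):
        """True when every replica agrees with the primary for this key."""
        return all(r.get(key) == self.primary.get(key) for r in self.replicas)
```

After a `write_async`, a read from a replica can briefly return stale data, which is acceptable for services like e-mail but not for transactions requiring immediate consistency.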
Cloud services should be redundant at the software and hardware levels and incorporate high availability mechanisms at their foundation, including automatic failure detection, reporting and recovery mechanisms. To enhance the internal mechanisms, the virtualization platform can provide an additional layer of failure detection and recovery at the virtual machine level. One must ensure that the two mechanisms can coexist peacefully and do not collide during failure recovery.
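As a minimal sketch of the automatic failure detection described above, the following hypothetical heartbeat monitor declares an instance failed once heartbeats stop arriving within a timeout. The timeout value is an assumption chosen for illustration:

```python
class HeartbeatMonitor:
    """Declare an instance failed if no heartbeat arrives within `timeout` seconds."""

    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_seen = {}  # instance_id -> timestamp of last heartbeat

    def heartbeat(self, instance_id, now):
        """Record a heartbeat from an instance at time `now`."""
        self.last_seen[instance_id] = now

    def failed_instances(self, now):
        """Return instances whose last heartbeat is older than the timeout."""
        return [iid for iid, t in self.last_seen.items()
                if now - t > self.timeout]
```

A recovery layer would restart or replace any instance this monitor reports; the coordination point the text warns about is making sure such a monitor at the virtual machine level does not trigger a second recovery while the application's own high-availability mechanism is already recovering.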
For isochronal applications like video calling, it's crucial to prevent latencies that disrupt service quality. But with virtualized configurations, resource contention, real-time notification latency and virtualization overhead can all add latency. To address these issues, architects need to take the following actions:

Carefully identify the real-time isochronal expectations for a virtualized platform. For example, the maximum notification latency must explicitly state how late a real-time notification interrupt can be.
Determine whether the target platform or infrastructure service can actually meet the identified requirements.
Establish a recommended architecture and configuration for optimal isochronous performance on the specified platform or infrastructure service.
Prototype and test the service to validate whether it is technically feasible to meet its requirements on a virtualized platform.

For an in-depth analysis of these challenges, with recommendations for mitigating risks, see Reliability and Availability of Cloud Computing, published by Wiley-IEEE Press, 2012.
Cloud computing introduces new technologies with unique benefits and risks. But it does not change the basic structure or importance of the engineering diligence required to maintain reliability and availability. The process of maintaining this diligence can be summarized in the following steps:

Clearly define service reliability and availability requirements.
Model and analyze the overall solution architecture to ensure that it is capable of meeting reliability requirements over the long term.
Carry out reliability diligence on individual components to make sure they can meet the overall solution requirements.
Test the solution thoroughly and make sure that automated methods of failure detection and recovery work effectively.
Track the performance of the solution in the field and follow up with corrective actions as needed.

When this diligence process is applied to mitigate both traditional risks and the new challenges of the cloud, cloud-based services have the potential to meet or exceed the service reliability and availability requirements of traditional deployments.
The Cloud Declaration of Independence

By Cindy Bergevin, Head of Cloud Solution Marketing, Alcatel-Lucent

Cloud computing is galvanizing the information and communication industry. Enterprises expect dramatically lower cost and higher agility in their IT operations. Communication service providers (CSPs) are looking for new cloud-based architectures that allow them to become a new type of cloud provider and to virtualize their own network and IT infrastructure, enabling the rapid introduction of services with new types of business models. These cloud models differ from classical information and communication technology (ICT) architectures, where each application requires dedicated resources, and particular parts of the application (compute tier, database tier) are allocated to specific blades or servers. Software upgrades require complex in-place upgrade procedures, making it difficult to limit service interruptions. The dependency on particular server hardware threatens long-lived business-critical applications when that hardware reaches end of life, and prevents these applications from benefiting from the ongoing technology evolution.
Figure 1: Cloud computing overcomes the complexity and cost associated with diverse hardware dedicated to specific applications. The cloud is based on highly standardized compute, storage and network nodes.

Cloud computing technology can overcome many of these issues. With cloud computing, applications become essentially independent from specific physical equipment. In recent years, two technologies have been developed that are at the foundation of cloud computing: virtualization and high-speed networking. With virtualization, a physical computer is separated into multiple virtual machines, giving each application its own environment and operating system as if it were the sole user of the computer. Moreover, virtual machines can be migrated at run time from one physical machine to another, or even from one data center to another. In this way, cloud applications can follow their users, optimizing service experience and resource utilization. The rapid advance of high-speed network technology is a second essential enabler for cloud computing. Thanks to these high-speed networks, application workloads can be placed remotely in cost-effective data centers of a cloud provider without causing performance or latency issues. In this way, cloud computing is liberating application providers from physical hardware and geographical constraints. For many, this flexibility will be of higher value than savings from better utilization of physical data center resources. To learn more about why all clouds are not created equal, read the full white paper.
By Eric Bauer, Reliability Engineering Manager; Randee Adams, Consulting Member of Technical Staff; Alcatel-Lucent

In today's rapidly evolving and complex cloud computing environment, who is responsible in the event of a service outage? Cloud computing introduces a paradigm shift in providing and supporting software services. With this shift come new questions of responsibility and accountability, which are highlighted particularly when there is a service outage. Traditionally, applications have been offered by vertically integrated service providers and enterprises. Hardware and software were bundled together and could be purchased from a single supplier, who was accountable if a service was impaired. The cloud computing environment, however, breaks up this integrated approach. By decoupling the software from the underlying hardware resources, computing resources can be pooled by the cloud service provider to enable greater efficiency, convenience and economy, and can be offered to multiple cloud consumers. In the case of an outage of a cloud-based service, who is then responsible? The infrastructure provider, the cloud consumer who purchases an infrastructure service, the software provider, one of the network service providers carrying IP traffic to the end user's device, or even the end user's equipment? Accountability has clearly undergone some shifts in the cloud, and standards bodies are working to establish outage measurement rules that address the new paradigm. But the rules are not yet in place. For now, existing distributions of service accountability can be adapted to this new environment. To do so, all parties operating in the cloud need to be aware of, and aligned on, the basic principles involved. Then, service level agreements (SLAs) can be established that clarify accountability, as well as the methods for measurement.
To contribute to the clarification process, this article provides a brief overview of cloud business models and roles, along with suggested models for identifying responsibilities and measuring service availability.
The United States National Institute of Standards and Technology (NIST) formally defines cloud computing as a model for providing ubiquitous, on-demand access to shared, configurable computing resources, such as networks, servers, storage, applications and services. The NIST model offers major advantages. Cloud service providers can manage pooled computing resources. The resources are elastic and can be expanded and reduced on demand or triggered automatically by changes in traffic patterns. Convenient access to resources can be offered from any IP device, at any time. And measured service capabilities allow simple, usage-based pricing.
NIST defines the following three service models, which rely on the cloud's shared computing resources. As shown in Figure 2, these models logically sit above the IP networking infrastructure used to link end users with applications hosted in the cloud.

Software as a Service (SaaS) allows consumers to use the provider's applications, such as e-mail and customer relationship management (CRM) applications, which run on cloud infrastructure.
Platform as a Service (PaaS) offers middleware and operating systems that facilitate application deployment.
Infrastructure as a Service (IaaS) provides the virtualization hypervisor and hardware, such as compute, memory, storage and networking resources.
Primary roles
Cloud computing opens up interfaces between the application, platform, infrastructure and network layers. As a result, the layers can be offered by different players, who may be expanding beyond their usual business activities. As Figure 2 shows, in the cloud:

Suppliers develop the equipment and software used for IP networking and all cloud business models. They also provide integration services.
IP network providers own and operate the networks and equipment used to deliver a service to end users.
Cloud service providers own and operate the computing solutions, systems and equipment used to deliver a service to end users.
Cloud consumers offer specific applications to end users. They pay service providers for whatever cloud resources they consume in this process.
End users use software applications hosted in the cloud, relying on their own equipment and an IP network for access.
Service impairments can be the result of vulnerabilities in software, hardware, power, environment, application payload, IP networking, operational policies, or service, application and user data, as well as natural disasters and human error. The telecommunications industry has traditionally identified three categories for outage accountability. As described in Figure 3, they include:

Product-attributable service outages associated with hardware or software
Customer- or service provider-attributable outages
External or force majeure attributable outages, such as a natural disaster or a malicious act
The nature of the cloud now makes accountability more complex. For example, it is often split between the cloud consumer and the service provider, and many more service providers can be involved in the service delivery. The following list offers a starting point for considering accountability on an element-by-element basis, with additional factors discussed in the Service measurements section that follows.
For critical enterprise applications, measuring performance is crucial. But the cloud environment is widely dispersed, and each end user may be served by a different combination of resources, including different IP networks and end-user devices. Consequently, one key service measurement challenge is choosing where to collect data in the service delivery path. Figure 5 shows four natural points for measuring a cloud-hosted application's performance, whether focusing on availability, reliability, latency or other aspects of service quality. Data from these measurement points can help determine accountability.
Measurement Point 1 (MP 1) examines how each key component in the data center affects service availability. To eliminate all impairments not associated with the application, this measurement is taken with minimal IP routing, switching and facility infrastructure between the measurement point and the server hosting the application. Separate MP 1 ratings can be calculated for routers, security appliances, load balancers and other infrastructure configurations. MP 1 does not consider geo-redundancy. Measurement Point 2 (MP 2) considers how service availability is affected by the data center environment. That is, it measures the performance of individual application instances, along with the hosting data center. But it does not consider geo-redundancy.
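As a simple illustration of turning probe data from a measurement point into an availability figure, the following sketch computes an availability ratio and the annual downtime it implies. The probe counts in the usage note are invented for the example:

```python
def availability(successful_probes, total_probes):
    """Fraction of probe attempts that succeeded at a measurement point."""
    return successful_probes / total_probes


def downtime_minutes_per_year(avail):
    """Convert an availability ratio into expected annual downtime in minutes."""
    minutes_per_year = 365 * 24 * 60
    return (1.0 - avail) * minutes_per_year
```

For example, if an MP 2 probe run recorded 999,990 successes out of 1,000,000 attempts, availability is 99.999%, the classic "five nines," which corresponds to roughly 5.3 minutes of downtime per year. Comparing the same calculation across MP 1 and MP 2 data helps attribute impairments to the application itself versus the hosting data center.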
Accountability in the cloud has not yet been clearly defined. At least one standards organization is working to create standards that will establish exactly what each player in the cloud is responsible for. But until these industry guidelines are formally adopted, SLAs or other agreements need to make clear to all parties who is responsible for preventing and remedying outages, and how those outages are identified and measured.
Cloud Research Results Identify Fears and Opportunities in Cloud Services Adoption
By Susan J. Campbell, TMCnet Contributing Editor

The hype surrounding innovations in the cloud has drawn a number of companies to consider migration. The attraction is the variety of benefits afforded by subscription-based pricing models, easy access to advanced capabilities, and the elimination of costly updates and system maintenance. For others, however, fears remain regarding performance, security, ease of use and actual costs. To examine how these fears have affected adoption, Alcatel-Lucent researchers took a closer look. In-Country Cloud Research Results summarizes the Alcatel-Lucent findings from a survey of nearly 4,000 IT decision makers in seven countries: the U.K., U.S., India, France, South Korea, Hong Kong and Taiwan. The study sheds light on the weaknesses of today's public cloud and the opportunity that exists for service providers (SPs) to extend into the cloud services market and capture a significant portion of cloud-based value-added revenues by providing the same guaranteed service level agreements (SLAs) routinely offered with business services. The primary concerns inhibiting the adoption of cloud services by enterprises throughout the global market tend to center on:

Response time
Stability
End-to-end availability

For decision makers, the element of cloud services most in need of improvement is performance. The stakes are high: the global cloud services market is expected to grow to a staggering $177 billion by 2015. Cloud services, according to the study, are being rapidly adopted by major enterprises across an array of market sectors. Still, weaknesses exist within public cloud services. At the top of these weaknesses are the risks associated with availability and quality of service. These shortcomings are causing a gap in the availability and use of cloud services.
In fact, according to the study, two-thirds of IT decision makers are not relying on the cloud for essential business applications because they fear service outages. The study also revealed that 46% of the participants called current delays in cloud service unacceptable. Likewise, one in four complained that no simple resolution path exists when SLAs are not met. And two out of five IT decision makers have already experienced either frequent or lengthy outages in service. Still, 44% are optimistic that weaknesses in the carrier cloud will be resolved and have built expansion into the cloud into their strategies for the next three years. Dor Skuler, Alcatel-Lucent Vice President of Cloud Solutions, stated: "Not all clouds are created equal. For example, a typical large enterprise supports between 250 and 750 IT applications, so before it decides to move them to the cloud it must be confident of a smooth migration. It needs to ensure that there are substantial efficiencies to be gained, risks to its operations are minimal, it is easy to use, and cloud performance is guaranteed with service level agreements."
"Communications service providers can meet those expectations. By orchestrating and optimizing the assets within their networks and the network itself, they can meet the stringent cloud service delivery demands of consumers and businesses," added Skuler. The demand for cloud services is not expected to wane, as these cloud research results reveal that IT decision makers are willing to pay for a cloud solution that is not only next generation, but also delivers high performance. In fact, carrier cloud services are four times more attractive, bringing the potential to generate 10 times more revenue. To fully capitalize on these opportunities, however, service providers must be able to overcome fears and performance perceptions to deliver on true expectations.
Enterprise Cloud Services Come of Age

By Xavier Martin, Vice-President, Messaging and Communications; Annie Ohayon-Dekel, Director, Business Strategy and Product Marketing; Alcatel-Lucent Enterprise

Enterprises are now looking seriously at the cost and efficiency benefits of using cloud computing, as new business models make it an attractive proposition. JMP Securities1 estimates that the three main cloud services market segments will enjoy 15%, 61% and 27% compound annual growth rates (CAGR), respectively, over the next 10 years. Those segments are:

Software as a Service (SaaS), also known as Application as a Service (AaaS): Applications are hosted by the communication service provider (CSP), which allows the service to be accessed from anywhere, at any time. The service can be any software-based application, including communications, real time or not.
Platform as a Service (PaaS): A set of software and product development resources hosted on the CSP's infrastructure. Developers can create applications on the platform over the Web using APIs, Web site portals, or gateway software installed on the customer's computer (for example, force.com).
Infrastructure as a Service (IaaS): Provides virtual server instances with unique IP addresses and blocks of storage on demand. This is sometimes referred to as utility computing.
Cloud services offer economies of scale to both the enterprise user and the CSP. The benefits to the user stem from the off-site nature of the service proposition:

Services are available on demand, with no upfront investment or commitment.
Resources are elastic: Consumption can be increased or decreased at any time, as needed.
The infrastructure and services are fully managed by the cloud provider.

From a technical standpoint, enterprises (and CSPs) benefit from the flexibility of the architecture that underpins the cloud:

Services are supported by virtualized, scalable resources.
The architecture is multi-tenant/multi-instance.
Based on Internet technologies, cloud services are quick and easy to deploy and scale.
Even though the notion of the cloud originally referred to services available from the Internet, some variants have emerged, offering alternative models for the adoption of cloud-based architecture or services. These are:

Public cloud: A multi-tenant environment, which can serve a combination of multiple companies (shared public cloud) or multiple individual users and companies (public and community clouds).
Private cloud: A private network, run by the customer, which often sits behind a corporate firewall. Both user access and network usage are restricted (for example, only accessible through a VPN or on the customer's premises). When offered by a cloud services provider, this is known as a virtual private cloud.
Hybrid cloud: The hybrid cloud offers technology to manage complexity and performance, as well as security and privacy concerns. It delivers some services from the cloud, while sensitive data remains on the premises of the enterprise.
1. At your Service: The Rise of Computing as a Utility, JMP Securities, May 2010.
According to the independent research firm Yankee Group,2 cloud and mobility are converging: The CAGR for cloud computing from 2008 to 2014 will be around 30%, while smartphone penetration will grow at a 19% CAGR and tablet computers at 50%. As far as enterprise communications are concerned, increasing smartphone penetration, the proliferation of connected devices, and increased mobile bandwidth driven by 3G and 4G/LTE are building the case for adopting cloud services.
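To make these growth figures concrete: a compound annual growth rate compounds multiplicatively, so a 30% CAGR sustained from 2008 to 2014 (six compounding years) multiplies the starting value nearly fivefold. A quick check:

```python
def compound_growth(start, cagr, years):
    """Project a value forward at a compound annual growth rate."""
    return start * (1 + cagr) ** years

# 30% CAGR over the six years from 2008 to 2014:
factor = compound_growth(1.0, 0.30, 6)  # roughly 4.8x the starting value
```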
2. Yankee Group, Webinar: Bringing Cloud Services to the Enterprise, March 2011.
Given the proliferation of personal and enterprise video endpoints, Video Infrastructure as a Service (VIaaS) offers a good illustration of the benefits of a cloud-based architecture. The VIaaS solution is set to become the preferred mechanism for supporting soft clients and the dominant form of video infrastructure. It will include emerging video services, as well as alternative architectures delivered from the cloud. According to research by Gartner, this market (or rather, delivery model) is expected to grow at a 45% CAGR through 2015.3
Communication service providers are ideally positioned to deliver such cloud-based services, because they can intelligently blend platforms for immersive communications with network-based services (such as prioritization and caching of applications), based on their business criticality. They can build Virtual Private Clouds for enterprise customers, or simply become pure cloud providers, by syndicating resources to serve many organizations from the same infrastructure.
Although the overall response to the cloud was positive, respondents noted a few areas of concern where improvements are necessary. As the chart below shows, the chief demand is for solutions that offer a higher level of performance, including greater security, faster response time and end-to-end availability. IT decision makers (ITDMs) are also looking for solutions that provide improved data security.
Figure 1: Performance and data security are the top demands for cloud services
Researchers discovered that service providers are well positioned to take advantage of the booming enterprise cloud market. Respondents noted that they don't necessarily need to partner with a current leader in cloud technologies, but rather are looking for a trusted provider that can offer secure solutions for mission-critical applications. More importantly, enterprises acknowledged that they are willing to pay for quality: Carrier-grade solutions were found to be four times more attractive than less robust solutions and capable of producing 10 times the revenue.

To fully capitalize on this opportunity, service providers must provide a cloud solution that can support complex network topologies and offer bandwidth and latency guarantees as well as encrypted storage options, says Alcatel-Lucent. The ideal solution should also enable user-configured redundancy options and rapid virtual machine instantiation.

The Alcatel-Lucent CloudBand portfolio comprises two key elements:

- CloudBand Management System, which delivers orchestration and optimization of services between the communications network and the cloud
- CloudBand Node, which provides the computing, storage and networking hardware, and the associated software, to host a wide range of cloud services

The entire portfolio enables service providers to capitalize on enterprises' growing move to cloud-based solutions.
By Debbie Bradshaw, Alcatel-Lucent; Mohit Bhargava, President, LearningMate

A new market opportunity for service providers exists at the intersection of two categories of compelling market drivers. On the one hand, educators are striving to exploit new teaching resources such as digital textbooks and interactive rich media content. Targeted education-sector funding and policy initiatives such as Race to the Top and Bring Your Own Device are making such resources more accessible. On the other hand, an array of technology drivers, such as the proliferation of mobile devices and the rich digital media that plays on them, cloud-based delivery environments, open standards and interoperability, and flexible subscriber models, makes it possible for service providers to address tens of millions of new subscribers with a comprehensive enterprise service model.

Together, these market forces could change the education experience for the next generation of students, from kindergarten all the way through college and university:

- Departments of education will reduce the cost of lesson plan delivery while acquiring greater insight into how students and teachers are performing.
- Schools and teachers will gain a more streamlined lesson plan workflow and a better understanding of what their students are actually learning.
- Publishers of educational materials will gain a new standards-based marketplace for their products, new opportunities to upsell into higher value content, and analytics that will better inform their product development and marketing decisions.
- Parents will have real-time access to class curriculum and to their children's performance, making them more engaged participants in their children's education.
- Students will gain access to rich, immersive content delivered on the devices they want to use, with the ability to share, annotate and collaborate, all the while maintaining a permanent record of their learning.
The education sector in the United States is a huge marketplace. According to the U.S. National Center for Education Statistics, there were 50 million kindergarten to Grade 12 students1 enrolled in almost 100,000 public schools in 2012, and another 19.7 million students2 in about 4,400 degree-granting institutions of higher education. Per-pupil expenditure on public elementary and secondary students was $10,499 in 2009,3 according to the U.S. Census Bureau.
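A back-of-the-envelope multiplication of the figures above suggests the scale involved. Note that the enrollment and spending figures come from different reference years (2012 and 2009), so this is an order-of-magnitude estimate only, not a figure from the source:

```python
# Rough scale of U.S. public K-12 spending, from the figures cited above.
students = 50_000_000      # K-12 public school enrollment, 2012
per_pupil = 10_499         # per-pupil expenditure, 2009 (USD)
total_spend = students * per_pupil  # on the order of $525 billion
```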
1. U.S. National Center for Education Statistics, as cited in the Statistical Abstract of the United States: 2011, table 215.
2. U.S. National Center for Education Statistics, as cited in the Statistical Abstract of the United States: 2011.
3. U.S. Census Bureau, Public Education Finances: 2009.

Mobile in the Classroom
The education sector is a complicated marketplace of countless diverse players at the federal, state and local education agency levels, and even at the level of individual institutions. Additionally, it is a sector uniquely governed by standards and pedagogical concerns intended to ensure consistent, evidence-based outcomes. Because of these and other factors, the adoption of technology in classrooms has proceeded far more slowly than in other sectors. Although the U.S. education and health care sectors are roughly equivalent at about $1.2 trillion in annual spending, IT expenditures are 10 times higher in health care than in education, according to a study by BCG Perspectives.5

The adoption of mobile technologies in the classroom has significantly lagged behind other sectors, notwithstanding the considerable promise that mobile holds. Handheld devices, the applications that run on them and the network infrastructure that supports them are delivering proven benefits in many other areas of the economy, and the education sector is keen to reap the same. Funding bodies at the federal and state levels have recognized this, and there are several new initiatives that support the broader adoption of mobile technologies.

Even where mobile has been embraced, however, significant challenges must be overcome. When students bring their own devices to school, the network infrastructure often cannot handle the increased load, and network crashes occur. Schools have the usual concerns of security and maintenance when it comes to the use of networked devices by their students, along with distinct issues such as ensuring an equitable playing field for all students and being able to determine that students are following lesson plans rather than surfing elsewhere.
A three-way partnership that brings service provider network assets and subscriber business models together with Alcatel-Lucent technology and LearningMate's GoClass application can deliver a potent digital content and workflow platform, with benefits for every stakeholder in the educational marketplace. Download the full white paper to learn more.
4. Greaves, T., Hayes, J., Wilson, L. and Gielniak, M., The Technology Factor: Nine Keys to Student Achievement and Cost Effectiveness, The Greaves Group, The Hayes Connection, One-to-One Institute, 2010.
5. Bailey, A., Henry, T., McBride, L. and Puckett, J., Unleashing the Potential of Educational Technology, BCG Perspectives, 2011.
By Kevin Wendt, Global Director, Network Management Services and Solutions, Alcatel-Lucent Long Term Evolution (LTE) enables faster, more efficient communications and new applications that improve safety for the public and public safety personnel. Higher speeds accelerate data sharing. New applications, such as real-time video and machine-to-machine (M2M) communications, accelerate access to people, documents and information. Public safety agencies can be more responsive. Their situational awareness improves, and interoperability between agencies and across jurisdictions is enhanced. 4G LTE is increasingly recognized as the right wireless technology to evolve and standardize public safety communications. In January 2011, the Federal Communications Commission (FCC) in the United States selected LTE as the data standard for a nationwide public safety network. The ability of LTE to enable nationwide interoperability was a key factor in the FCC decision.
Many public safety agencies like the capabilities LTE can deliver, but are concerned about equipment costs and their lack of experience with the technology. Moving to an LTE cloud infrastructure for data communications and applications brings benefits that alleviate many of these concerns. The LTE cloud is an end-to-end LTE network. It gives each public safety agency access to a private wireless broadband network within an overall cloud infrastructure that is shared among multiple agencies and jurisdictions. Agencies pay a monthly fee to access the LTE data network and applications. Land-mobile radio (LMR) voice communications remain separate, but in the future could evolve to voice over LTE (VoLTE).
Figure 1: The LTE cloud, an end-to-end wireless broadband network for public safety
As a first step, public safety agencies considering a move to LTE cloud infrastructure should evaluate their current platform and consider:

- Where do they want to be in 1, 5 and 10 years?
- What capabilities do they wish they had?
- What gaps in capabilities could be hindering their ability to perform?
- What areas in their current wireless communications would they most like to improve?

Agencies should also talk to their peers, particularly early LTE adopters, to learn more about their motivation and approach. Getting involved in national organizations that are working on LTE communications for public safety, and attending industry events, also helps agencies learn more about broader considerations and requirements, including:

- Urban versus rural considerations
- Application roadmaps
- Investment and cost factors that influence timing decisions
When choosing an LTE cloud provider, the first priority for every public safety agency is to ensure that their mission-critical communications needs are met. As a result, they should look for an LTE cloud provider that:

- Understands both LTE technology and mission-critical public safety communications requirements. The provider must have a proven ability to operate and manage an end-to-end, multivendor network for public safety. This expertise improves service uptime and quality. It also simplifies and accelerates deployments and problem resolution. Off-loading of problems to other vendors, and finger pointing, should never be an issue.
- Ensures security and privacy. Encryption locally in the jurisdiction, as well as at the point where information traverses the network into the core, is crucial. A dedicated public safety platform is also important: It ensures there is no contention for network resources, and no security or privacy issues, with businesses or residences sharing the platform.
- Works with a strong partner ecosystem. The best LTE cloud providers work closely with application partners to develop and deliver the right public safety applications at the right time. They use their knowledge of the public safety market and agency requirements to influence application development.
- Offers different levels of network control. The LTE cloud provider should offer each agency a level of control that matches its comfort level. This could range from no involvement at all, to the ability to add and remove users, to full visibility of network status and real-time updates on network issues.
- Offers clear, measurable and relevant service level agreements (SLAs). SLAs define vendor performance levels and deliverables. Agencies should look for a well-defined project management structure to ensure the platform is successfully deployed. They should also look for well-defined operations processes and a clear and consistent plan for ongoing communications between themselves and the provider.
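A "measurable" SLA commitment ultimately reduces to arithmetic on uptime. A minimal sketch, using a 99.99% ("four nines") availability target as a common example rather than a figure from the text:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability(downtime_minutes, period_minutes=MINUTES_PER_YEAR):
    """Measured availability as a fraction of the service period."""
    return 1 - downtime_minutes / period_minutes

def meets_sla(downtime_minutes, target=0.9999):
    """A 99.99% annual target allows about 52.6 minutes of downtime
    per year (525,600 * 0.0001)."""
    return availability(downtime_minutes) >= target
```

Turning an SLA into a check like this is what makes it "measurable": both parties can compute the same number from the same outage log.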
Trust is another crucial factor. Public safety agencies need to feel comfortable that the provider has the infrastructure, people, processes and tools to deliver the services they are promising on day 1 and in year 5.
LTE is a relatively new technology that will progress over time. Providers that offer a dedicated platform for multiple public safety agencies will have to ensure that applications and services are kept current. Providers that offer a single-tenant platform, or a platform that is not dedicated to public safety, won't necessarily have the same motivation. Public safety agencies that adopt the LTE cloud on a dedicated, multi-tenant platform will have better opportunities to advance their communications in step with the technology.
More Information
Documents
A Classroom in the Cloud
Why All Clouds Are Not Created Equal
Alcatel-Lucent Integrated DDoS Protection Solution
Cloud Clout with Open APIs
Soaring into the Cloud
The Carrier Cloud: Driving Internal Transformation and New Cloud Revenue
Virtual Desktop Performance and Quality of Experience
Cloud Networking Report
Videos
Alcatel-Lucent Global Cloud Initiative
What if there was a better class of cloud?
Articles
Webpages
CloudBand Data Center Connect solution
Integrated DDoS Protection: Secure the carrier cloud with Denial of Service Protection
The Network Makes the Cloud
Understanding the market opportunity for carrier cloud services
Subscriptions
Next Generation Communications eNewsletter
TechZine
Bell Labs Technical Journal
Alcatel-Lucent Publications
Note: Wi-Fi, Wi-Fi Alliance and the Wi-Fi logo are registered trademarks of the Wi-Fi Alliance.