Convergence Shifting Gears
EQUITY RESEARCH
INDUSTRY UPDATE
February 13, 2015
TECHNOLOGY/COMM. TECH.
SUMMARY
Data centers are under constant pressure to quickly support new workloads (cloud,
analytics, etc.) while dealing with rising bandwidth and storage needs. At the same
time, network, server, and storage vendors are under pressure to deliver lower
priced, more efficient, and easier to use/deploy solutions. The result is a wider
adoption of software-defined environments and increased integration/convergence
across functional planes. The first wave (2010) of integrated solutions (VCE's Vblock,
NetApp's FlexPod, etc.) has gained significant traction. Yet, we expect the value
proposition to be pushed further with "hyperconverged" solutions that also integrate
the software layer (virtualization and management) into a single appliance. We see
new vendors such as Nutanix and SimpliVity at the forefront of this transition and
expect traditional IT vendors to either attempt to slow down the transition, partner
(with VMware or hyperconverged vendors), or be forced into strategic action.
KEY POINTS
■
Convergence 1.0. The adoption of integrated platforms has been strong in recent
years and driven by collaboration among traditional IT vendors such as Cisco,
IBM, HP, NetApp, EMC and VMware. Most solutions were primarily about pre-packaging and configuring existing blocks to simplify purchasing, and although
they offer shorter time-to-market and some cost savings, they still require a big
upfront rack-scale deployment and have operational and scaling limitations.
■
Hyperconvergence 2.0. A new class of vendors is now focused on deeper
integration of software/hardware—"hyperconverged" solutions that tightly couple
the hypervisor and management system with a single compute/networking/
storage appliance offering enterprise-class storage features. These systems are
much different from integrated solutions and offer greater procurement flexibility,
step-by-step scalability, and better management and resource utilization.
■
Market size. IDC forecasts the integrated/converged market could reach ~$7.8B
in 2014 driving potential sales of $13.4B in 2017 (19.7% CAGR). Most of these
sales currently come from integrated solutions such as VCE (~26% 3Q14 share)
and NetApp/FlexPod (~20%) with hyperconverged only starting to ramp (<10%
share in 2014). We expect much broader hyperconverged adoption by 2017
pushing this opportunity to $2.7B-$3.4B (~20%-25% of the market) or higher.
■
New leadership. Start-ups with no commitment to legacy architectures, such as
Nutanix and SimpliVity, are quickly gaining mind share in SMB and enterprise
environments as their hyperconverged products mature and storage/management
feature sets expand. Meanwhile, software-based solutions such as VMware's
VSAN/EVO:RAIL are making their first steps into the market.
■
Forced to adapt. While traditional storage vendors (EMC and NetApp) are the
early leaders of integration, they have the most to lose as hyperconvergence
vendors broaden their storage feature sets. Initially, we expect tactical moves
(pricing/partnering) to slow hyperconvergence down, but over time roadmaps
could shift, driven by internal R&D and potentially by technology acquisitions.
Oppenheimer & Co. Inc. does and seeks to do business with companies covered
in its research reports. As a result, investors should be aware that the firm may
have a conflict of interest that could affect the objectivity of this report. Investors
should consider this report as only a single factor in making their investment
decision. See "Important Disclosures and Certifications" section at the end of
this report for important disclosures, including potential conflicts of interest. See
"Price Target Calculation" and "Key Risks to Price Target" sections at the end of
this report, where applicable.
Ittai Kidron
212-667-6292
[email protected]
Michael Leonard
212-667-5522
[email protected]
George Iwanyc
415-399-5748
[email protected]
Oppenheimer & Co Inc. 85 Broad Street, New York, NY 10004 Tel: 800-221-5588 Fax: 212-667-8229
Convergence, Evolving
Massive changes continue to stretch the data center as new delivery and
consumption models, such as public and private cloud, see greater adoption; as
analytics tools become an important vehicle to maximize business opportunities;
and as social and mobile drive the need for faster application development and
delivery. While the environment is dynamic, the core functional building blocks of
the data center (compute, networking and storage) have not changed although
they have been pushed to deliver higher performance, increased flexibility, and
more comprehensive feature sets and control, all at lower price points.
These rising currents have led enterprises to look for ways to reduce cost,
simplify data center architectures and accelerate resource and application
deployment time. Consequently, openness to "software-defined" architectures
has increased, as has interest in product integration to cut deployment
time and cost.
A notable outcome of the dynamic pace of change in the data center is the
embrace of "convergence." The first steps were simple with "best-of-breed"
hardware and software vendors teaming up with IT leaders such as Cisco, IBM,
HP, EMC and VMware to deliver pre-packaged solutions to accelerate
application deployment time and cut cost. This quickly evolved to physical
integration of the different data center functional elements into pre-packaged and
pre-tested data center reference platforms that could be quickly deployed.
Many vendors are now moving beyond simple product packaging into
"hyperconvergence" where compute, storage and networking hardware blocks,
as well as the hypervisor, optimized file systems and other management software
are combined into a single appliance. This hyperconvergence is in its early
stages, but it is evident that with the maturity of platforms from emerging vendors
such as Nutanix and SimpliVity and from virtualization leader VMware, CTOs and
IT managers have a growing list of options outside of the traditional IT vendor
eco-system. And as the scale and feature set of these hyperconverged platforms
improves, more enterprises are likely to consider these solutions.
In this note we present a primer on convergence focusing on the next wave of
development, namely hyperconvergence and the quest for higher scalability and
a broader feature set. Our discussion includes an overview of how convergence
has taken hold, why hyperconvergence could be a disruptive force in the data
center, and a brief review of the key vendors driving the disruption. Our net
conclusion is that hyperconvergence is likely to become a key component in data
center architectures, opening the door for a new breed of vendors, and forcing
traditional vendors to rethink their solution sets and go-to-market strategies.
The Challenge
Historically, data center workloads were limited to a single server which
interacted with the network to facilitate the data movement to and from the
storage layer. Each of these data center components was effectively an
independent operating unit individually procured, configured and managed in a
relatively simple and predictable network layout. However, with the gradual
growth in data and application use, discrete compute resources became
underutilized, and the cost of deployment and scaling mounted.
The move to virtualization, which enabled the sharing of compute resources
between multiple applications, offered a more cost-effective way to scale and
manage in the age of workload and application growth. At the same time, it
introduced new challenges in networking and more noticeably in the storage
layer ranging from the rise in random IO, through LUN sharing, to more
complicated backup and restore processes.
More recently, we have witnessed the adoption of desktop virtualization (VDI)
and the explosion in application growth whether it be for business purposes
(web-based or analytics) or for employee purposes (social networking or BYOD).
These trends have further strained and complicated the relationship between the
various data center building blocks. Fundamentally, these applications and use
cases demand more scale, have different data traffic patterns, and require
dramatically greater bandwidth, storage support and faster response time. And
as performance requirements rise and timetables for deployment shorten, the
effectiveness of discrete architectures comes into question, and their ability to
address needs and quickly adapt falls short.
Virtualization concepts have spread further into the network with a broader push
toward software-defined networking (SDN) and software-defined storage (SDS).
The goal is similar: cut product and operating costs, improve flexibility and
efficiency, enable higher performance and scalability, simplify management and
allow more control and performance analytics. These advancements have in turn
pushed the need for tighter integration across data center silos. A visual
representation of some of the forces involved is highlighted in the following
exhibit from IDC.
Exhibit:
Source: IDC (2014), Oppenheimer & Co. Inc.
IDC’s August 2014 survey of enterprise users gives further color on the
perceived benefits enterprises see in integrated systems. The survey results
show that over 50% of the respondents expect integration to reduce downtime,
increase productivity and efficiency, and enable faster application provisioning.
Improved utilization of compute and storage resources, as well as faster time-to-market, were also common benefits expected by a large number of enterprises.
In conjunction with these operational benefits, enterprises also view integration
as a way to reduce costs.
Exhibit: Expected Benefits from Integrated Systems (percent of respondents, 0%-70%)
Benefits cited: improved IT staff productivity; faster infrastructure, workload and application
provisioning; reduced downtime/improved performance; improved utilization of compute
resources; reduced cost of data center facilities, power and cooling; improved agility;
improved utilization of storage resources; and faster time to market.
Source: IDC (August 2014), Oppenheimer & Co. Inc.
The First Integrated Solutions
The first integrated data center solutions were primarily about packaging and pre-testing. Traditional IT vendors such as Cisco, IBM, HP, EMC and others
leveraged their data center building blocks along with solutions from a select
number of partners to assemble predefined and prequalified packages that
include hardware, software and support services. This helped vendors maintain
the value of their installed bases and core technologies, while streamlining and
simplifying the purchasing and deployment process for their customers.
The informal teaming up became formal as partnerships and joint ventures were
formed to put branding power behind integration. The most visible and arguably
first real public-facing effort at forming a converged company and product
portfolio was VCE, the Virtual Computing Environment company. VCE was a joint
venture formed by Cisco and EMC in 2009 with additional investments made by
VMware and Intel. VCE, which is now majority owned by EMC (Cisco retains
10% equity interest), approached convergence by building integrated Vblock
Systems using Cisco's Unified Computing System (UCS) server blades and
networking equipment, EMC's storage, and VMware's virtualization software.
Since VCE began shipping Vblock systems in 3Q10 its sales have scaled
quickly. Over the last seven quarters VCE has exceeded 50% YoY demand
growth (through 4Q14) and already topped a $2 billion annualized demand run-rate exiting 3Q14. Reasons for the strong acceptance include the robust
support and instant brand recognition it received from its founding partners Cisco,
EMC and VMware, as well as the ongoing effort to expand its ISV relationships
within the platform to add more value. The positive customer reaction to VCE’s
early to market best-of-breed approach and the general increased customer
acceptance of integrated systems have also been important factors in the quick
uptake.
Another early attempt to streamline and simplify the purchasing and deployment
process for enterprises has come from EMC's direct competitor NetApp.
Following a similar product integration methodology, NetApp released its
FlexPod solutions in 2010, which combine its FAS-series storage arrays with
Cisco's UCS servers and networking switches. The FlexPod reference
architectures require more work from customers than VCE's in assembling the
product components from approved partners, but the overall blueprint ensures a
product components from approved partners, but the overall blueprint ensures a
cohesive and high-performing, pre-tested solution that offers a similar value
proposition to that built into a VCE solution.
Not to be outdone, EMC has also provided its customers with a specification
framework similar to FlexPod that is an alternative even to its own VCE. VSPEX
is EMC’s reference platform, but in contrast to the VCE solution that’s solely
based on Cisco/EMC equipment, VSPEX allows customers to assemble a stack
using multiple vendors including Cisco, Brocade, Dell, IBM, HP, Microsoft and
Citrix. The most recent version of VSPEX is VSPEX BLUE, which was unveiled
in February 2015. VSPEX BLUE is a hyperconverged solution using VMware’s
VSAN (first edition) and EVO:RAIL platform (more on this later) and EMC’s own
value-added software. With EMC continuing to add options to its integrated and
hyperconverged portfolio and Cisco making its own small moves into storage
(such as its Whiptail acquisition) and increased partnering with other storage
vendors such as NetApp (FlexPod), Nimble (SmartStack), Hitachi (UCP), and
IBM (VersaStack), it appears these two industry leaders are increasingly walking
a fine line between cooperation and competition.
Exhibit: An Example of a VCE Vblock Platform
Source: VCE, Oppenheimer & Co. Inc.
The vendors’ embrace of platform integration has been widespread. As
mentioned, Hitachi Data Systems has made a move into the integrated market
with its Unified Compute Platform (UCP), including integrating products from its
relationship with Cisco and with other leaders such as VMware and Microsoft.
IBM offers its PureFlex single chassis converged solution leveraging its own
Storwize storage and x86 and Power system hardware. IBM also recently
announced its VersaStack products based on a partnership with Cisco UCS and
its own Storwize storage products. HP offers several integrated platforms (some
of which pre-date VCE's launch), including several built on its own 3PAR storage and
blade server technology. Looking to streamline its go-to-market approach, HP has
more recently focused its integration nomenclature on the ConvergedSystems
brand identity. Dell is another vendor involved early in integration, leveraging its
various internal systems including compute, storage (Compellent Technologies
and EqualLogic) and networking (Force10). Like others, Dell has focused its
integration down to a few branding options including PowerEdge VRTX and
Active System to simplify the buying proposition.
Emerging companies are also entering the segment, such as the recent entry by
Pure Storage (founded in 2009, launched in 2014) with its FlashStack CI and a
partnership with Cisco and VMware. Other vendors offering integrated systems
solutions, reference architectures, or both include Fujitsu, Huawei, Teradata and
Oracle.
Exhibit: Integrated Vendor Market Penetration
3Q14 WW Integrated Infrastructure Sales per IDC ($1.5B): VCE 26%, Cisco/NetApp 20%, HP 17%, Others 37%
Source: Company data, Oppenheimer & Co. Inc.
Not All Milk and Honey
As we've already discussed, pre-configured integrated solutions address several
concerns for IT managers including time-to-deployment challenges as well as the
high total cost of ownership from purchasing, deploying and managing discrete
data center components. But at the same time, they are not without limitations
and at times they present new challenges.
The first wave of pre-packaged integration still has many of the same functional
and deployment limitations of discretely purchased infrastructure. For example, in
many cases enterprises are still looking at big upfront purchases of rack scale
solutions that are often over-provisioned for their needs. Purchasing big can
provide a cost benefit if it meets current application needs, but for many
users this is an inefficient way to buy equipment, as they buy ahead of need,
growing into the capacity over an unknown period of time. Many users also never
reach the scale needed to make this type of purchase cost-effective. Small and
medium-sized enterprises with limited workloads and branch offices of larger
users may never scale to rack-sized deployments, but for security or compliance
reasons may still need to own their equipment instead of moving to the cloud.
Exhibit: Scaling Comparison - Hyperconverged vs. Integrated Systems
Integrated systems: infrastructure is added in large steps well ahead of workload needs (bigger upfront cost and less efficient). Hyperconverged systems: infrastructure is added in small steps that track workload needs (cost and deployment better match need).
Source: John Wiley & Sons (2014), Oppenheimer & Co. Inc.
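To make the scaling contrast concrete, the short illustrative Python sketch below (our own hypothetical demand curve and step sizes, not vendor data) tallies how much idle, pre-purchased capacity accumulates when infrastructure is bought in large rack-scale steps versus small node-sized increments.

    # Illustrative comparison of rack-scale vs. node-at-a-time provisioning.
    # All numbers (demand curve, step sizes) are hypothetical, not vendor data.

    demand = [40, 55, 75, 90, 120, 150, 170, 200]  # workload need per quarter (e.g., VMs)

    def provision(demand, step):
        """Buy capacity in fixed 'step'-sized increments, never dropping below demand."""
        capacity, idle = 0, 0
        for need in demand:
            while capacity < need:
                capacity += step
            idle += capacity - need          # capacity bought but not yet used
        return capacity, idle

    rack_capacity, rack_idle = provision(demand, step=120)   # big upfront rack-scale block
    node_capacity, node_idle = provision(demand, step=25)    # small hyperconverged node

    print(f"Rack-scale: final capacity {rack_capacity}, cumulative idle {rack_idle}")
    print(f"Node-at-a-time: final capacity {node_capacity}, cumulative idle {node_idle}")

Under these assumed figures the node-at-a-time path ends with less total capacity purchased and far less cumulative idle capacity, which is the intuition behind the exhibit above.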
Pre-configured integrated systems also at times suffer from the same operational
and scaling limitations that discrete solutions possess. Servers, networking and
storage controllers are bottlenecks that can fail in a way that makes parts of the
system unusable. Scaling also may not extend the integration benefits
beyond the pre-configured block itself, requiring work to add new equipment,
especially if it is from a vendor that's not pre-qualified. And lower-level software
management of the individual components is still distributed, so the potential
benefits of tighter functional integration and lower propagation delays might not
be fully realized.
While integration helps ensure functional interoperability and easier and faster
deployment, it also removes or limits some of the best-of-breed benefits of truly
discrete solutions. Integration by its nature locks users into pre-approved
buying choices, which is positive for the vendor but could limit flexibility and
raise costs for the user. So while integration addresses some of the pain points
of buying discretely and the pressing challenge of accelerating deployment time, it
still leaves room for improvement.
New Vendors Up the Ante with Hyperconvergence
Packaged product integration in itself isn't a departure from historical data center
design. One could argue that a logical next step to orchestrating compute,
network and storage resources under a single go-to-market effort could be in the
form of a single appliance with a management layer and a file system
designed from the ground up to optimize performance, acting as a mini data
center of sorts.
By physically consolidating the components into a single appliance (instead of
multiple discrete components sitting next to each other), even greater efficiencies
and opportunities to accelerate and manage performance are available. This is
especially true if the virtualization and software layer are added to the mix as
management overhead is reduced and as propagation latency is minimized. This
tightly coupled hardware and software approach has come to be known as
hyperconvergence with Nutanix, SimpliVity and others as early evangelists of the
new approach. Hyperconvergence effectively delivers a single appliance and
software management platform that fully incorporates compute, storage and
networking services, as well as the hypervisor (as a proprietary bundle or an
open platform), and purpose-built system and storage management
features.
We view hyperconvergence as a significant change to the traditional network
hierarchy, and we believe it has the potential to be competitively disruptive,
opening the door to new vendors. As already referenced, the early leaders in the
hyperconverged space have been start-up "visionary" companies (as labeled by
Gartner) with no legacy installed base. These include Nutanix (founded in 2009
with products launched in 2011) and its Virtual Computing Platform; SimpliVity
(founded in 2009 with products launched in 2013) and its OmniCube platform;
and Scale Computing (founded in 2006 with first storage products in 2009) and
its HC3 platform. Other emerging vendors with their own take on
hyperconvergence include Atlantis Computing, Maxta, StorMagic and Zadara.
To explore the hyperconvergence approach further we take a closer look at
Nutanix's solution, but some of the basic tenets of its approach hold true to
varying degrees for others as well. Nutanix's basic principle is to run
applications in a hybrid compute and storage environment in a single appliance
with "web-scale" elements built into the design from inception. Its Virtual
Computing Platform was designed from the ground up to purposely enable a fully
functioning virtual infrastructure environment that moves compute closer to
where the data is stored in a peer-to-peer architecture. This allows non-storage
compute workloads like virtualization to run in conjunction with storage services
workloads as sharable resources within each Nutanix box, as well as across a
cluster of appliances. The design is based on five core elements:

Hyperconvergence. Integrating compute and storage to eliminate
complexity and performance drag, cutting latency.

Distributed. All data, metadata and operations are distributed across the
entire cluster, eliminating resource contention and enabling predictable
scalability.

Software defined. Eliminates the reliance on special-purpose hardware for
features such as resilience and performance, allowing new
capabilities without hardware upgrades.

Self-healing systems. Designed to tolerate component failures through
fault isolation and automatic recovery without bringing down the overall
system.

Automation. Extensive automation and system-wide monitoring for data-driven
efficiency, and REST-based programmatic interfaces for integrated
datacenter management.
This approach potentially offers several advantages, including better efficiency
through reduced storage requirements, effectively eliminating the need for a
storage area network (SAN); improved bandwidth and IOPS (Input/Output
Operations Per Second); tighter security through the integration of network
elements; and cost benefits from the use of industry-standard x86 components.
And by designing the software to manage the hardware as modular building
blocks, a hyperconverged system can be built to easily scale to large
deployments while still maintaining the functionality and simple management of a
single system and resource pool. This allows customers to start small and scale
up to fit an application's needs as it grows.
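As a rough conceptual illustration of the "modular building blocks behind one resource pool" idea, the Python sketch below aggregates per-node compute and storage into a single logical pool that grows one appliance at a time. It is our own simplified model with hypothetical node sizes, not any vendor's actual software.

    # Conceptual model of a hyperconverged cluster as a single resource pool.
    # Node sizes are hypothetical; this is not any vendor's implementation.
    from dataclasses import dataclass

    @dataclass
    class Node:
        cpu_cores: int
        ssd_tb: float
        hdd_tb: float

    class Cluster:
        def __init__(self):
            self.nodes = []

        def add_node(self, node):
            """Scale out one appliance at a time; the pool grows incrementally."""
            self.nodes.append(node)

        @property
        def pool(self):
            """Present all node resources as one logical pool."""
            return {
                "cpu_cores": sum(n.cpu_cores for n in self.nodes),
                "ssd_tb": sum(n.ssd_tb for n in self.nodes),
                "hdd_tb": sum(n.hdd_tb for n in self.nodes),
            }

    cluster = Cluster()
    cluster.add_node(Node(cpu_cores=16, ssd_tb=1.6, hdd_tb=20))  # start small
    cluster.add_node(Node(cpu_cores=16, ssd_tb=1.6, hdd_tb=20))  # grow as the workload grows
    print(cluster.pool)  # {'cpu_cores': 32, 'ssd_tb': 3.2, 'hdd_tb': 40}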
Exhibit: Nutanix Hyperconverged Platform Architecture
Source: Nutanix, Oppenheimer & Co. Inc.
Nutanix's platform is based on its Distributed File System (NDFS), which was
specifically designed from the ground up to connect storage, compute resources,
controller VM (virtual machine), and the hypervisor. NDFS runs on every Nutanix
node and manages direct-attached storage resources across all servers, thus
eliminating the need for traditional centralized SAN or NAS storage. It was designed to be
fault-resilient to provide data availability in the event of a node/disk failure
(machines within the network contribute to rebuilding the lost data) and has built-in backup and disaster recovery. Writes to the platform are logged in a high-
performance SSD tier and are replicated to another node before the write is
committed and acknowledged to the hypervisor.
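The write path described above (log to flash, replicate to a peer node, then acknowledge) can be sketched in simplified form as follows. This is a generic illustration of the replicate-before-acknowledge pattern, not Nutanix NDFS code; all names are hypothetical.

    # Simplified sketch of a replicate-before-acknowledge write path.
    # Illustrative only; not Nutanix NDFS source code. Names are hypothetical.

    class NodeStore:
        """Stands in for a node's local SSD-backed write log."""
        def __init__(self, name):
            self.name = name
            self.log = {}

        def write_to_ssd_log(self, block_id, data):
            self.log[block_id] = data
            return True

    def handle_write(local, peer, block_id, data):
        """Commit a guest VM write: log locally, replicate to a peer, then ack."""
        if not local.write_to_ssd_log(block_id, data):
            return False
        if not peer.write_to_ssd_log(block_id, data):   # synchronous replication
            return False                                 # no ack until the replica is durable
        return True                                      # only now is the hypervisor acknowledged

    node_a, node_b = NodeStore("node-a"), NodeStore("node-b")
    acked = handle_write(node_a, node_b, block_id=42, data=b"vm-disk-block")
    print("write acknowledged:", acked)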
Nutanix offers a tiered product portfolio (low- to high-performance), as well as an
option for partners to license the management software. The feature set is now
broad and stable and includes data deduplication, tiering, snapshots,
compression, and thin provisioning. The company's scale-out solution uses
standard x86 server blocks (using Super Micro servers) with flash solid-state
storage and traditional hard disk drives. NDFS runs on each Nutanix node with
an industry standard hypervisor and manages direct-attached storage resources
across all servers, eliminating the need for traditional centralized storage.
Nutanix's platform is designed to fit into existing network infrastructure, such as
1GbE and 10GbE. It is also modular in nature to allow the needed compute and
storage capacity to scale-up and scale-out in small, discrete increments to meet
various workload sizes without disrupting the network. This also allows for the
systems to have no single point of failure. Effectively this creates a low-cost,
highly scalable and fault-tolerant solution that can scale from individual enterprise
needs through cloud-scale implementations. In each individual case, the
equipment can be installed to meet the performance metrics needed by the
specifically targeted application environment.
SimpliVity's OmniCube platform is similar to Nutanix's solution, offering the gains of
merging industry-standard x86 compute, SSD and HDD storage services and
network functionality into a single box. It also builds in WAN optimization, unified
global management, primary storage deduplication, backup deduplication,
caching, and global scale-out. When deployed, two or more OmniCube systems
create an OmniCube Global Federation with an easily scalable pool of shared
resources. The system is also interoperable with existing servers running virtual
machines. One of its notable differences, at least initially, is that SimpliVity
implements a more proprietary approach with its OmniStack file system
incorporating its own virtualization software.
Exhibit: An Example of a SimpliVity Hyperconverged Platform
Source: SimpliVity, Oppenheimer & Co. Inc.
Exhibit: Hyperconverged Vendor Platform Comparison

                          Nutanix                           SimpliVity                     Scale Computing
Platform                  NX Series                         OmniCube                       HC3
Key Products              NX-1000 / NX-3000 / NX-6000 /     CN-2000 / CN-3000 / CN-5000    HC1000 / HC2000 / HC4000
                          NX-7000 / NX-8000 / NX-9000                                      (3-node Starter System)
Compute                   6 to 24 CPU cores (per node)      8 to 24 CPU cores (per node)   12 to 36 CPU cores (3-node Starter System)
Raw Storage - SSD         200 GB to 6 x 1.6 TB              2 x 400 GB to 6 x 800 GB       n/a
Raw Storage - HDD         2 x 1 TB to 20 x 1 TB             8 x 1 TB to 18 x 1.2 TB        12 x 500 GB to 24 x 1.2 TB
Hypervisors               vSphere, Hyper-V, KVM             vSphere                        KVM

Source: Company data, Oppenheimer & Co. Inc.
We are positive on the potential for hyperconverged solutions like Nutanix and
SimpliVity, but we caution that there are still points to be considered. The first is
that hyperconvergence is still a relatively immature market with a lack of
commonality across the various platforms. Each of the start-ups essentially took
a development approach of delivering a comprehensive "infrastructure-in-a-box"
solution, leveraging off-the-shelf commodity components to keep costs low.
However, sitting on top of the standard off-the-shelf hardware are new
distributed file systems that are unique to the individual vendors. For small
enterprises looking for an easy to use and deploy system, hyperconverged
solutions may be a good fit. For mid-sized and larger enterprises, the comfort
level for entrusting their workloads and infrastructure (servers, networking and
storage) to an upstart vendor with a relatively limited development and support
track record could at first be low. We expect progress here to accelerate as more
proof points are available, and we believe that Nutanix has already secured a
solid number of large enterprise customers, pointing to greater acceptance.
Another factor to consider with hyperconverged hardware is its flexible resource
scalability. First-generation solutions required compute resources to be added in
tandem with storage capacity. For applications that typically scale linearly (for
example VDI), this isn't a problem, but not all applications scale linearly, which
means there could be over-provisioning within the box or either compute or
storage resources. Steps are being taken to ease the problem as more
modularity is worked into the hardware. For example, the recently introduced NX8000 series from Nutanix is a more configurable box with users able to pick the
compute, HDD and SDD capacities independently. There may also be some
hesitancy with some enterprises still preferring to add compute and storage
resources independently and maintain dedicated teams and vendor support for
the different resources to spread risk (which can be both good and bad).
Established Vendors Playing Catch-Up
While much of the hyperconvergence attention has been on start-up
development activity, larger established vendors are launching similar products.
HP recently introduced its HP ConvergedSystem 200-HC StoreVirtual, which like
the startups' offerings combines all the required hardware and software management
into a single scale-out system. The system includes HP's OneView InstantOn,
HP OneView for VMware vCenter, and HP StoreVirtual VSA technology for
simple self-installation and management of fully clustered, highly available
servers and storage.
EMC’s acquisition of ScaleIO in 2013 is another example of a traditional storage
vendor trying to broaden its convergence approach. ScaleIO offered a software-only
solution (hypervisor-neutral and supporting multiple file systems) for creating
a low-cost, but high-performance and scalable virtual storage area network
(SAN). To accomplish this, ScaleIO's software is installed on a host (hardware-neutral)
to present server-side storage as a pool of common, shared resources.
With ScaleIO's converged ECS solution, there's no need for an external storage
array. The solution is very scalable (thousands of nodes) and creates storage
flexibility by allowing servers to add and move capacity on-demand depending on
specific IO requirements. While ScaleIO fills a niche in EMC's portfolio, EMC's
interest in adding the capability validates the growing importance of convergence
within the overall storage landscape.
Licensing and partnerships are also routes established hardware vendors are
using to enter the hyperconverged market. For example, Dell recently introduced
its Dell XC Series of web-scale converged appliances, which combine
networking, storage, and processing. However, instead of developing its own file
system, Dell has partnered with Nutanix and is using its distributed file system
software. By partnering, Dell has accelerated its time to market with a fully
featured hyperconverged virtualization platform for virtual desktop infrastructure,
high-performance server virtualization, and data centers. At the same time,
Nutanix expands its reach and gains access to a channel it was unlikely to
tap at this stage of its company lifecycle.
Other forms of software-defined storage convergence have also entered the
market since the initial wave of product-oriented convergence. Most notably,
VMware announced at VMworld 2013 the public beta of its Virtual SAN (VSAN)
storage platform. VMware introduced its first production version of VSAN in
March 2014 and has continued to enhance the platform with new features. Most
recently it added new features with VSAN 6 (February 2015), including all-flash
support in a two-tier architecture (adding SSD as a long-term storage option),
improved performance of up to 7 million IOPS per cluster, increased scalability to
64 nodes per cluster (2x previous limits) and 8 PB of storage capacity, and
added enterprise-grade data services such as snapshots and cloning. We expect
the feature set to broaden over time, and thus far the company has secured over
1,000 paying customers.
In addition to VSAN, VMware also offers its own hyperconverged qualified
hardware platform EVO:RAIL. EVO:RAIL is built around VSAN and vSphere
software, including Virtual Volumes (VVOLs) support, and hardware from
qualified partners including EMC, Dell, Fujitsu, HP, Hitachi Data Systems,
NetApp, SuperMicro and others (9 total). The participation of hardware vendors
could help VMware in its marketing push, although we note that all of the current
solutions (including EMC’s recently announced VSPEX BLUE) are based still on
the feature-light first-generation VSAN platform. We expect VSAN 6-based
solutions to come to market only in 2H15. This makes the more fully featured
VSAN 6-based EVO:RAIL products more of a factor in 2016 market adoption.
VMware's converged approach leverages the ability of its hypervisor to sit
between the applications and the underlying infrastructure (integrating the
hypervisor and lower level hardware management). VSAN effectively allows for
integration between the storage and compute tiers while running on commodity
hardware from its partners. This effectively enables the aggregation of internal
drives in physical hosts into a shared pool of storage resources, removing the
need for RAID and NAS systems. From a top-level perspective this makes VSAN
effectively a competitor not only to hyperconverged solutions, but also to
traditional storage solutions offered by VMware's parent company EMC.
That said, VSAN 6 is still a newer, immature platform that's not fully featured from
a storage perspective. For example, it has yet to add features such as in-line
compression, deduplication and stretch clusters that are common to other
converged storage platforms. It also could suffer from a concentration risk and
lack of flexibility as some vendors may not want to put all of their software
management into the hands of VMware. In sum, VMware still has work to do to close
the feature/performance gap vs. hyperconverged vendors such as Nutanix.
Exhibit: Hyperconverged Vendors vs. VMware's EVO:RAIL

                          Nutanix                           SimpliVity                       VMware
Platform                  NX Series                         OmniCube                         EVO:RAIL
Hardware Options          Nutanix/Supermicro, Dell          SimpliVity, Cisco UCS, Dell      Dell, EMC, Fujitsu, HP,
                                                                                             Hitachi HDS, Inspur, Net One,
                                                                                             NetApp, Supermicro
Server per Node/Block     1, 2 or 4                         1                                4
Min. Configuration        3 nodes                           1 node                           1 block with 4 nodes
Max. Configuration        Theoretically no limit, but       8 OmniCubes in a data center /   16 blocks (64 nodes)
                          recommended limit ~64 nodes       32 in a federation
Scaling Method            1 node at a time                  1 node at a time                 1 block at a time (4 nodes)
Hypervisors               vSphere, Hyper-V, KVM             vSphere                          vSphere

Source: Company data, Data Zombie (2014), Oppenheimer & Co. Inc.
Exhibit: Hypervisor Virtualization Approach
Source: VMware, Oppenheimer & Co. Inc.
Evaluating the Integrated and Hyperconverged Markets
Gartner's analysis of the integrated systems vendor landscape is presented in its
“Magic Quadrant for Integrated Systems” evaluation matrix presented in the
exhibit below. The analysis gives good perspective on the current vendor
landscape and includes both integrated and converged vendor positioning.
Gartner highlights VCE and the NetApp/Cisco partnership as integrated systems
segment "Leaders." Identifying these companies as leaders isn't surprising given
the characteristics we've already touched on, early presence in the market, and
their current high market share. It also highlights Oracle, which has taken a
different approach toward integration by tightly integrating the software stack and
focusing on Oracle software workloads, instead of focusing on integrating
internally available hardware products like the other vendors.
Exhibit: Integration Vendor Landscape
Source: Gartner (June 2014); IDC (Dec. 2014), Oppenheimer & Co. Inc.
The more interesting quadrant, in our opinion, is the "Visionaries" segment. What
stands out is the position of emerging vendors Nutanix and SimpliVity. Nutanix
and SimpliVity are the only start-up companies on the list and aren't integrating
legacy systems in the same way most of the others are. Instead, they are coming
to market with a fresh take on convergence that isn't hamstrung by older software
and an installed base to manage. We suspect that as they grow and execute,
they will gradually climb into the "Leaders" quadrant. IDC, which offers its own
analysis of the landscape, already characterizes Nutanix and SimpliVity as
leaders in the hyperconverged systems market.
Given this perspective on the vendor landscape, it's equally important to get
some perspective on the market opportunity. The overall integrated market,
which by IDC's definition includes the elements of networking, servers, storage
and basic systems management software in a pre-integrated, vendor-certified
system, is already relatively large. IDC estimates total integrated sales (for
networking, server and storage) were roughly $5.4 billion in 2013, up about 60%
YoY, with storage the largest contributor to the total at roughly $2.9 billion (or a
little over half the total at 53%). IDC expects the strong growth to continue and
has the market pegged at a 2014 to 2017 CAGR of ~20% with total integrated
sales reaching $13.4 billion in 2017.
Given its much smaller base and still emerging status, we believe the
hyperconverged market (IDC forecasts $363 million for 2014, which would be
less than 10% of total integrated systems sales) will significantly outpace the
growth rate of the much larger integrated market over the next several
years. The potential cost savings apparent with hyperconvergence are appealing
and likely to drive adoption either as a replacement for existing equipment in
some applications or in new business opportunities and with greenfield network
builds. So while hyperconverged represents a small portion of the current market,
we expect it to outpace overall market growth and by 2017 potentially account
for 20% to 25% (and potentially more) of the $13.4 billion in total integrated
infrastructure equipment sales.
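For reference, the arithmetic behind these figures is straightforward; the short Python snippet below simply recomputes the implied roughly 20% CAGR from the cited 2014 and 2017 totals and the dollar range implied by a 20%-25% hyperconverged share.

    # Recompute the implied growth rate and hyperconverged dollar range from the
    # figures cited in the text (IDC: ~$7.8B in 2014, $13.4B in 2017).

    total_2014 = 7.8    # $B, integrated/converged systems
    total_2017 = 13.4   # $B

    cagr = (total_2017 / total_2014) ** (1 / 3) - 1
    print(f"Implied 2014-2017 CAGR: {cagr:.1%}")          # ~19.8%, i.e., roughly 20%

    low, high = 0.20 * total_2017, 0.25 * total_2017
    print(f"Hyperconverged at 20%-25% of 2017 total: ${low:.1f}B-${high:.1f}B")  # ~$2.7B-$3.4B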
Exhibit: Integrated Systems Revenue, 2012-2018E ($ in millions; total revenue and YoY growth)
Source: IDC (Nov. 2014), Oppenheimer & Co. Inc.
Vendor Implications
So far many of the early adopters of hyperconverged solutions have been small
and medium-sized enterprises willing to look past the short track record involved
with a new technology from an emerging company in exchange for cost benefits
and accelerated deployments. In most small firms, decision-making is at a level
(IT manager/storage administrator) where an emerging company and its channel
partners can get direct access to the decision-maker and make their pitch on
price/performance and operational gains.
That said, as hyperconverged platforms mature, feature sets expand and the
vendor base reaches critical mass, we expect more enterprises to consider
hyperconverged solutions in the appropriate use cases. The ongoing large
financial bet made by the venture capital community on hyperconvergence also
raises visibility throughout the industry. And with the increased visibility, we
expect more mid-sized and large enterprise customers to trial and over time
more broadly adopt hyperconverged solutions across more of their workloads.
There is strong evidence to show this wider adoption is happening. Our checks
and third-party market data show a growing number of enterprises are now
deploying hyperconverged solutions. For example, we believe Nutanix now has
strong mindshare in large deployments supported by a customer base of over
1,200 customers and over 50% market share, while SimpliVity recently
highlighted a nearly 500% increase in customer acquisitions during 2014. Most of
the deployments for a company like Nutanix begin with a single use case or
department, but as more experience is gained by the customer, Nutanix’s
footprint often expands into more departments and larger, more important tier-1
workloads, supporting the validity of their “land and expand” growth strategy and
the willingness of customers to move away from traditional architectures.
While we expect hyperconverged product adoption to rise as the emerging
vendors become more widely known and established and enterprise adoption
broadens, we believe traditional storage and server vendors will be forced to
make some difficult decisions on how to participate in this opportunity given
portfolio gaps and tie-ins to legacy architectures.
We view this hyperconvergence transformation as an easier transition for server
companies (most notably Cisco) as they embrace a market opportunity that is
largely incremental to their business and margin profile. The entry path is also
relatively painless as server companies can choose to continue along the current
path and partner with established hypervisor vendors such as VMware
(VSAN/EVO:RAIL) or embrace the new platforms and partner with Nutanix or
SimpliVity as we’ve seen with Dell (a Nutanix partner).
We believe convergence could be more disruptive and threatening for traditional
storage vendors as the features available on hyperconverged systems broaden
and these systems are increasingly adopted to support high-performance tier-1 workloads. Already
today Nutanix’s storage features and capabilities are robust, and its systems are
in live production environments supporting key workloads such as VDI,
databases, Oracle, SAP, etc.
Initially, we believe storage vendors are likely to aggressively defend their
storage footprints and look to slow down adoption with incremental performance
improvements and aggressive pricing (as we’ve already seen with other
disruptive changes in storage, such as with hybrid and all-flash adoption).
Longer term, we see a wider range of scenarios for the traditional vendors
including a ramp-up in internal R&D to embrace hyperconvergence, partnerships
with the hyperconverged vendors, or strategic technology acquisitions (both
server and storage vendors could take this route). Several storage vendors could
also take an incremental approach and lean on vendors like VMware that are
pushing the software-defined networking and storage transitions. Ultimately, we
believe it’s clear that converged architectures will grow in importance and open
the door for several new vendors to gain share, while forcing legacy vendors to
evolve and adapt.
A Brief Overview of the Key Start-ups
Nutanix
Founded in 2009 and headquartered in San Jose, California, Nutanix is an early
leader in the hyperconverged infrastructure system space, having released its
first product in November 2011. Nutanix's solution converges computing (industry
standard x86 servers) and storage (SSD and HDD) on the same node and allows
customers to scale their storage layer (to petabyte scale deployments) without
investing in a SAN or NAS. Nutanix's Complete Cluster approach was designed
to ensure resiliency and manageability, while allowing users to scale computing
and storage simultaneously with intelligent load balancing across the entire
platform (with no single performance bottleneck or point of failure). Storage
performance is accelerated as the data is housed closer to the compute than if it
were in a standard SAN or NAS setup (and bringing functionality closer to the
application). Key use cases include VDI, server virtualization, disaster recovery
and big data applications and analytics with the platform targeting all sizes of
customers (small and medium enterprise through large enterprise, as well as
webscale and cloud deployments). Increasingly Nutanix is seeing traction with
larger enterprises and more tier-1 workloads such as databases, SAP and
Oracle, which validates its growing presence in enterprise deployments.
Nutanix's overall solution was designed to be fault-resilient with a streamlined
restore process to ensure data availability in the event of a node/disk failure
(including built-in backup and disaster recovery). Writes to the platform are
logged in the high-performance SSD tier and are replicated to another node
before the write is committed and acknowledged to the hypervisor. NDFS also
ensures the storage is made available to all hosts, replacing the need for
centralized storage. The company currently offers three tiers of software
management (starter, pro, and ultimate) and allows a high degree of
manageability with policies set at a granular level. Nutanix continues to build out
its feature set throughout the platform (hardware and software). Recent additions
include Cloud Connect and Stretch Cluster, both of which are offered to Ultimate-tier
customers. The platform currently supports VMware vSphere, KVM, and
Microsoft Hyper-V hypervisors.
The company's product offerings include the NX-1000, NX-3000, NX-6000, NX-7000, NX-8000 and NX-9000 series platforms, which can be deployed in small
three-node configurations or scaled to power deployments with thousands of
nodes. The company's biggest deployment to date includes roughly 2,000
servers supporting 6 to 8 terabytes of storage per server. The various
platforms allow different price performance options, and each of the nodes can
be added to a cluster (mixed and matched within a cluster) to meet workload
performance and capacity requirements. The 8000 and 9000 series products are
recent additions to the portfolio. The 8000 is the first class of products where
Nutanix behaves more like a server company, offering more configurability for
compute performance and storage configurations (HDD and SSD), allowing
the hardware profile to better match the specific needs of the workload. With the
9000 series Nutanix is offering a high-performance option for those that need
maximum IOPS.
Exhibit: Nutanix NX Series Products
Source: Nutanix, Oppenheimer & Co. Inc.
Nutanix announced its first OEM deal in November 2014 with Dell. The non-exclusive deal lends credibility to Nutanix's value proposition and likely helps accelerate
adoption with larger customers more comfortable dealing with a well-established
vendor. Dell licensed Nutanix software and is packaging the solution with its own
hardware. The new products are co-branded and have already started to ship to
customers. Given Nutanix's still emerging brand, the arrangement with Dell could
offer a boost with larger customers and in currently untapped regions.
Nutanix has revealed that it has already exceeded $100 million in recognized
cumulative revenue (in the first two years of product shipments) and that it
already has over 1,200 customers worldwide (across verticals and company
sizes). Recent comments from the company suggest an even higher sales rate
with sales bookings roughly around a $300 million annualized run rate (January
2015 quarter). And based on IDC's estimates of the hyperconverged market,
Nutanix has a market-leading position, accounting for 52% of all global
hyperconverged sales during the first half of 2014. We believe the bulk (on the
order of two-thirds of the total) of Nutanix's wins come in displacing traditional
discrete server and storage architectures, although it has also seen traction in
displacing other integrated (first-generation) and more immature converged
systems.
Nearly all of the company's R&D focus is on software development (with about
50 patents supporting the platform). The company's average deal size is around
$130,000. Customers include the likes of eBay, PricewaterhouseCoopers,
McKesson, Concur and the US government (across over 57 agencies to date
and accounting for over 20% of its total revenue). The company has quickly
ramped its channel reach (approaching 1,000 partners) and roughly one-third of
its revenue comes from international customers.
SimpliVity
Massachusetts-based SimpliVity was established in late 2009 with the intent to
simplify IT infrastructure with an all-in-one, assimilated infrastructure solution.
The company's first products launched in early 2013 under the OmniCube brand
and are based on industry standard components (x86 servers, HDD/SSD
storage). The company's proprietary OmniStack file system and virtualization
software power the systems, enabling enterprise-grade computing, storage
services and network functionality in a single box. OmniCube currently supports
VMware, but is adding support for Hyper-V and KVM hypervisors as well.
The three key components of the OmniCube platform are:

Hyperconvergence: OmniStack provides a single software stack that
combines IT infrastructure functions in a single shared x86 resource pool.

Data Virtualization Platform: The core technology supports in-line data
deduplication, compression and optimization on all data at inception. The
Data Virtualization Platform is powered by the OmniCube Accelerator, a
PCIe card that offloads compute-intensive tasks.

Global Unified Management: The collective OmniCube systems form an
intelligent network of collaborative systems. It enables data movement and
sharing in multi-node and multi-site environments, as well as global VM-centric
management. A single administrator manages the entire global
infrastructure.
OmniCube targets high-performance applications and can grow in small
increments to address small to large application needs. The platform was
designed to support public cloud integration and offers a wide feature set
including WAN optimization, copy data services, unified global management,
deduplication, and caching. As highlighted, the company also has a patented
approach to real-time compression delivered through its OmniCube Accelerator,
which performs the compression before the data is initially written to storage.
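As a generic illustration of the inline (pre-write) deduplication concept, the Python sketch below fingerprints each block before it is written and stores duplicate content only once. It is not SimpliVity's actual OmniStack implementation, which offloads this work to the PCIe accelerator; block size and data structures here are hypothetical.

    # Generic illustration of inline (pre-write) block deduplication.
    # Not SimpliVity's implementation; block size and structures are hypothetical.
    import hashlib

    BLOCK_SIZE = 4096
    block_store = {}   # fingerprint -> block data (written once)
    vm_disk_map = []   # logical block index -> fingerprint

    def write(data: bytes):
        """Deduplicate at inception: fingerprint each block before it hits storage."""
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fingerprint = hashlib.sha256(block).hexdigest()
            if fingerprint not in block_store:   # only unique content is written
                block_store[fingerprint] = block
            vm_disk_map.append(fingerprint)      # duplicates just add a reference

    write(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE)   # three identical blocks + one unique
    print(len(vm_disk_map), "logical blocks,", len(block_store), "unique blocks stored")
    # -> 4 logical blocks, 2 unique blocks stored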
Exhibit: SimpliVity OmniCube Products
Source: SimpliVity, Oppenheimer & Co. Inc.
SimpliVity currently uses a 100% indirect business model, leveraging a network
of worldwide resellers and distributors (including over 400 partners globally). The
company's channel partners are supported by a global sales base including
offices throughout the United States, Europe, the Middle East, Africa, and Asia.
In addition to the channel partners, SimpliVity also has a partnership with Cisco
that allows VARs to easily pair SimpliVity's solution with Cisco's UCS platform in
a pre-validated solution. The company's installed base includes well over 1,000
OmniCube deployments, with the company recently highlighting that it shipped 1,500
OmniCube and OmniStack licenses in 2014 and saw its customer acquisitions
increase nearly 500% during 2014.
Scale Computing
Scale Computing, founded in 2006 and delivering products since 2009, is a
developer and manufacturer of complete end-to-end clustered hyperconverged
and storage solutions. The company's hyperconverged HC3 systems combine
server, storage and the hypervisor into one cluster. The fully integrated platform
is optimized by Scale's ICOS (Intelligent Clustered Operating System)
technology, which delivers the software intelligence of the hypervisor and
automated management without additional licensing requirements to lower
operating/licensing costs (for the hypervisor and third-party software). Scale's
platform was initially designed with Red Hat KVM as the hypervisor and IBM
GPFS as the distributed storage platform, but it has replaced GPFS with its
proprietary object storage platform known as "Scribe" to improve scalability.
While operating costs for Scale's KVM-based solution can be lower, the lack of
support for more established hypervisors from VMware and Microsoft could also
be viewed as a competitive disadvantage.
Exhibit: Scale Computing's HC3 Products
Source: Scale Computing, Oppenheimer & Co. Inc.
Scale Computing's cluster line of products also allows IT managers to build out
storage clusters from single-digit terabytes up to multiple petabytes on a single
file system using commodity hardware. The storage nodes are available in two
basic performance configurations: the S-Series, designed for SMB use cases
such as archiving, virtualization, virtual desktops, and disk-based backup, and
the M-Series, designed for the more high-performance needs of mid-sized
enterprises with high-activity applications and virtual environments. Deployments
can range from initial three-node configurations scaling up to eight-node
systems. The company's installed base includes over 1,000 deployments.
Atlantis Computing
Mountain View, California-based Atlantis Computing was founded in 2006 with a
focus on optimizing storage solutions for virtual data centers. The company's
converged USX solution is a software-only approach that enables any application
to run on-demand with any storage option and effectively decouples the
application from storage. USX customers can combine different server and disk
(SSD/HDD) configurations using new or existing servers, as well as existing
SAN/NAS/DAS storage with the solution offering policy-based management of
the storage resources, storage pooling and automation of storage functions.
Deduplication and compression are also built into the solution. Atlantis has a
partnership with Citrix focused on extending the reach of its USX software, but in
addition to working with Citrix's XenServer hypervisor USX also works with
VMware's hypervisor. Support for additional hypervisors (such as Microsoft's
Hyper-V) is expected to be added as well.
The USX solution was developed from Atlantis' ILIO solution virtual desktop
software. ILIO enables both Virtual Desktop Infrastructure (VDI) and virtualized
XenApp to run entirely in-memory without physical storage. And its Atlantis ILIO
Persistent VDI 4.0 enables Citrix XenDesktop and VMware View customers to
deliver persistent virtual desktops. Atlantis has over 500 mission-critical
deployments with more than 600,000 licenses sold cumulatively.
Maxta
Founded in 2009 and based in Sunnyvale, California, Maxta offers a software-defined,
VM-centric storage platform as both a software-only solution and a
reference architecture. The company's software product is its Maxta Storage
Platform (MxSP), which offers a hypervisor-agnostic, highly resilient, software-only
solution. MxSP eliminates the need for storage arrays (either SAN or NAS)
by aggregating dispersed storage resources from multiple servers. The platform
is designed to scale server virtualization and storage independently on demand
(one server at a time) and supports any form of storage (SSD or HDD). The
enterprise-class solution supports live migration of virtual machines, dynamic
load balancing, high availability, unlimited snapshots, in-line compression,
deduplication, data protection and disaster recovery. The solution has been in
customer production environments since late 2012.
Maxta expanded its portfolio in 2014 with a set of reference architectures known
as MaxDeploy. The MaxDeploy platforms provide the same software-defined,
convergence and scale benefits of MxSP, but by working with Maxta's pre-
selected partners and configurations customers can accelerate their time to
market with an easy to deploy turnkey solution.
Pivot3
Founded in 2003 and based in Austin, Texas, Pivot3 offers hyperconverged
appliances that run on x86 commodity hardware and embed server virtualization
into a shared storage layer. The company's vSTAC (Virtual Storage and
Compute) products deliver unified, highly available shared storage and virtual
server appliances that are purpose-built for virtual server and big data workloads.
vSTAC is based on the company's purpose-built operating system, which is
optimized for streaming write I/Os and delivering high-performance load
balancing. The OS architecture ensures parity bits are distributed across storage
nodes in a cluster (differing from traditional RAID configurations) and allows
resources to be shared across the nodes.
Pivot3's product portfolio includes pre-configured and tuned vSTAC appliances
for different target applications including video surveillance, virtualization (mostly
VMware environments) and federated database. The company's installed base
already includes over 1,200 customer locations deploying vSTAC unified storage
and compute appliances. It uses a 100% channel focused go-to-market strategy
and it has a close relationship with Dell.
Pure Storage
Founded in 2009, Pure Storage started with a focus on enterprise storage and
all-flash storage arrays, but recently added a converged product to its product
portfolio. Pure Storage’s core storage products accelerate random I/O-intensive
applications like server virtualization, desktop virtualization (VDI), databases, and
cloud computing. The company’s flagship product is the Pure Storage FlashArray
(the third-generation product was the FA-400 introduced in 2013), which is built
on a proprietary architecture (using commodity hardware components) designed
to allow Pure Storage solutions to scale from a single application to consolidated
cross-data center deployments. The company's arrays run the Purity Operating
Environment and are known for their data efficiency and specifically
deduplication (including variable block size deduplication). Configurations can
range from 10s to 100s of usable TBs of flash storage, for both HA and non-HA
configurations.
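Variable block size deduplication generally splits data at content-defined boundaries, so a small edit does not shift every subsequent block. The Python sketch below illustrates that generic technique only; it is not a representation of the Purity Operating Environment's implementation, and all function names are hypothetical.

# Illustrative sketch of generic variable-block (content-defined) deduplication.
# A simplified teaching example, not Pure Storage's Purity implementation.

import hashlib

def chunk_boundaries(data: bytes, window: int = 16, divisor: int = 64) -> list[bytes]:
    """Split data where a rolling fingerprint hits a boundary condition,
    so chunk edges depend on content rather than fixed offsets."""
    chunks, start = [], 0
    for i in range(window, len(data)):
        fingerprint = hash(data[i - window:i])   # stand-in for a true rolling hash
        if fingerprint % divisor == 0 and i - start >= window:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def dedupe(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Store each unique chunk once; return the list of chunk digests (the 'recipe')."""
    recipe = []
    for chunk in chunk_boundaries(data):
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)          # new chunks are stored, duplicates are not
        recipe.append(digest)
    return recipe

store: dict[str, bytes] = {}
a = dedupe(b"virtual machine image " * 50, store)
b = dedupe(b"virtual machine image " * 50 + b"one small edit", store)
print(len(a), len(b), "unique chunks stored:", len(store))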
Complementing its storage product is its newly introduced FlashStack CI
converged infrastructure product, which integrates Pure Storage's FlashArray
series of products with Cisco UCS blade servers and Nexus switches
and VMware's vSphere 5 and Horizon 6 software stack. The stack is designed for
virtual server and desktop implementations with the goal of shaping next-generation
cloud and web-scale storage systems. Pure is also providing a
converged reference architecture.
StorMagic
Founded in 2006, StorMagic provides storage management focused on
simplifying and automating the management of data storage in VMware server
virtualization environments. Its core product, SVSAN, is a software solution that
creates a storage virtual appliance that allows small to mid-sized organizations to
build a cost-effective virtual SAN. The company's typical customer has anywhere
between 10 and 10,000 edge sites, where local IT resources are not available.
Customers include Oxford University, Weiss Group Inc., University of Illinois and
Wisconsin University.
StorMagic's SVSAN is designed to be deployed on industry standard storage
solutions and is installed as a Virtual Storage Appliance (VSA) requiring minimal
server resources. The platform then creates the shared storage necessary to
enable features such as High Availability/Failover Clustering, vMotion/Live Migration
and Distributed Resource Scheduler (DRS)/Dynamic Optimization. SVSAN
currently supports VMware's vSphere and Microsoft's Hyper-V hypervisors.
StorMagic is a VMware Select Technology Alliance Partner.
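As a rough model of what a two-node virtual SAN provides, the sketch below (hypothetical Python, not SVSAN code) acknowledges each write only after it is mirrored to two virtual storage appliances, so the shared datastore remains readable if one node fails; all class and site names are illustrative.

# Illustrative sketch only: synchronous mirroring across two virtual storage
# appliances so the datastore survives a single node failure. Not SVSAN code.

class VirtualStorageAppliance:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

    def read(self, lba: int) -> bytes:
        return self.blocks[lba]

class MirroredDatastore:
    """Acknowledge a write only after both appliances hold it (synchronous mirror)."""
    def __init__(self, primary: VirtualStorageAppliance, secondary: VirtualStorageAppliance):
        self.replicas = [primary, secondary]

    def write(self, lba: int, data: bytes) -> None:
        for vsa in self.replicas:
            vsa.write(lba, data)

    def read(self, lba: int, failed=frozenset()) -> bytes:
        for vsa in self.replicas:
            if vsa.name not in failed:
                return vsa.read(lba)
        raise RuntimeError("no surviving replica")

ds = MirroredDatastore(VirtualStorageAppliance("site-a"), VirtualStorageAppliance("site-b"))
ds.write(0, b"vm-boot-block")
print(ds.read(0, failed=frozenset({"site-a"})))   # still readable after one node fails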
Zadara Storage
Zadara Storage was founded in 2011, with headquarters in Irvine, California. The
company offers a different approach to convergence with its enterprise storage-as-a-service (STaaS) model. Zadara offers high-performance, highly available
and predictable (QoS) file and block storage in a pay-as-you-go model for on-premises deployment. The solution is based on cloud block storage software that
runs on industry standard x86 hardware. In addition to standard block storage,
users can create software-defined Virtual Private Storage Arrays (VPSA). Each
VPSA has dual controllers and both solid-state and traditional hard disk drives.
The platform provides performance isolation between users, security with data
encryption and built-in billing capability. It can also manage both iSCSI block
(SAN) and CIFS/NFS file (NAS) storage simultaneously, allowing each database
and application access to the ideal type of storage for its needs and resulting in
better reliability, availability, serviceability, and performance. To further improve
reliability and availability, users can distribute their application servers across different
zones while retaining storage access within the same region. Zadara's storage is
available across 36 public storage clouds, including Amazon Web Services (5
regions), Microsoft Azure (3 regions), and Dimension Data (3 regions). Zadara is
also adding support for Microsoft Volume Shadow Copy Service.
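To make the VPSA concept concrete, the sketch below models, in hypothetical Python (field names and rates are illustrative, not Zadara's API or pricing), a per-tenant array with its own drives, an IOPS cap for performance isolation, and a simple pay-as-you-go charge based on provisioned capacity.

# Hypothetical sketch of a per-tenant Virtual Private Storage Array definition.
# Field names and pricing are illustrative; this is not Zadara's API or rate card.

from dataclasses import dataclass

@dataclass
class VPSA:
    tenant: str
    ssd_drives: int
    hdd_drives: int
    iops_limit: int                        # QoS cap providing per-tenant performance isolation
    protocols: tuple = ("iSCSI", "NFS")    # block and file served from the same array

    def provisioned_tb(self, ssd_tb: float = 0.8, hdd_tb: float = 4.0) -> float:
        return self.ssd_drives * ssd_tb + self.hdd_drives * hdd_tb

    def monthly_charge(self, usd_per_tb: float) -> float:
        """Pay-as-you-go billing on provisioned capacity (illustrative rate)."""
        return round(self.provisioned_tb() * usd_per_tb, 2)

array = VPSA(tenant="acme-db", ssd_drives=2, hdd_drives=6, iops_limit=20_000)
print(array.provisioned_tb(), "TB,", array.monthly_charge(usd_per_tb=30.0), "USD/month")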
Stock prices of other companies mentioned in this report (as of 02/11/15):
Fujitsu (FJTSY-OTC, $32.07, Not Covered)
Hewlett-Packard (HPQ-NYSE, $38.18, Not Covered)
Hitachi (HTHIY-OTC, $67.00, Not Covered)
IBM (IBM-NYSE, $158.20, Not Covered)
Teradata (TDC-NYSE, $43.83, Not Covered)
Unisys (UIS-NYSE, $23.09, Not Covered)
Important Disclosures and Certifications
Analyst Certification - The author certifies that this research report accurately states his/her personal views about the
subject securities, which are reflected in the ratings as well as in the substance of this report. The author certifies that no part
of his/her compensation was, is, or will be directly or indirectly related to the specific recommendations or views contained
in this research report.
Potential Conflicts of Interest:
Equity research analysts employed by Oppenheimer & Co. Inc. are compensated from revenues generated by the firm
including the Oppenheimer & Co. Inc. Investment Banking Department. Research analysts do not receive compensation
based upon revenues from specific investment banking transactions. Oppenheimer & Co. Inc. generally prohibits any research
analyst and any member of his or her household from executing trades in the securities of a company that such research
analyst covers. Additionally, Oppenheimer & Co. Inc. generally prohibits any research analyst from serving as an officer,
director or advisory board member of a company that such analyst covers. In addition to 1% ownership positions in covered
companies that are required to be specifically disclosed in this report, Oppenheimer & Co. Inc. may have a long position
of less than 1% or a short position or deal as principal in the securities discussed herein, related securities or in options,
futures or other derivative instruments based thereon. Recipients of this report are advised that any or all of the foregoing
arrangements, as well as more specific disclosures set forth below, may at times give rise to potential conflicts of interest.
Important Disclosure Footnotes for Companies Mentioned in this Report that Are Covered by
Oppenheimer & Co. Inc:
Stock Prices as of February 13, 2015
Brocade Communications (BRCD - NASDAQ, $12.06, PERFORM)
Cisco Systems (CSCO - NASDAQ, $29.46, OUTPERFORM)
EMC Corporation (EMC - NYSE, $27.87, OUTPERFORM)
Intel Corp. (INTC - NASDAQ, $34.13, PERFORM)
Microsoft Corporation (MSFT - NASDAQ, $43.09, OUTPERFORM)
NetApp, Inc. (NTAP - NASDAQ, $36.90, PERFORM)
Oracle Corporation (ORCL - NASDAQ, $43.89, PERFORM)
Teradata Corp. (TDC - NYSE, $46.02, PERFORM)
VMware, Inc. (VMW - NYSE, $83.62, OUTPERFORM)
All price targets displayed in the chart above are for a 12- to 18-month period. Prior to March 30, 2004, Oppenheimer & Co.
Inc. used 6-, 12-, 12- to 18-, and 12- to 24-month price targets and ranges. For more information about target price histories,
please write to Oppenheimer & Co. Inc., 85 Broad Street, New York, NY 10004, Attention: Equity Research Department,
Business Manager.
Oppenheimer & Co. Inc. Rating System as of January 14th, 2008:
Outperform (O) - Stock expected to outperform the S&P 500 within the next 12-18 months.
Perform (P) - Stock expected to perform in line with the S&P 500 within the next 12-18 months.
Underperform (U) - Stock expected to underperform the S&P 500 within the next 12-18 months.
Not Rated (NR) - Oppenheimer & Co. Inc. does not maintain coverage of the stock or is restricted from doing so due to a potential conflict
of interest.
Oppenheimer & Co. Inc. Rating System prior to January 14th, 2008:
Buy - anticipates appreciation of 10% or more within the next 12 months, and/or a total return of 10% including dividend payments, and/or
the ability of the shares to perform better than the leading stock market averages or stocks within its particular industry sector.
Neutral - anticipates that the shares will trade at or near their current price and generally in line with the leading market averages due to a
perceived absence of strong dynamics that would cause volatility either to the upside or downside, and/or will perform less well than higher
rated companies within its peer group. Our readers should be aware that when a rating change occurs to Neutral from Buy, aggressive
trading accounts might decide to liquidate their positions to employ the funds elsewhere.
Sell - anticipates that the shares will depreciate 10% or more in price within the next 12 months, due to fundamental weakness perceived
in the company or for valuation reasons, or are expected to perform significantly worse than equities within the peer group.
Distribution of Ratings/IB Services Firmwide
                                        IB Serv/Past 12 Mos.
Rating                Count   Percent   Count   Percent
OUTPERFORM [O]          323     55.69     146     45.20
PERFORM [P]             250     43.10      93     37.20
UNDERPERFORM [U]          7      1.21       0      0.00
Although the investment recommendations within the three-tiered, relative stock rating system utilized by Oppenheimer & Co. Inc. do not
correlate to buy, hold and sell recommendations, for the purposes of complying with FINRA rules, Oppenheimer & Co. Inc. has assigned
buy ratings to securities rated Outperform, hold ratings to securities rated Perform, and sell ratings to securities rated Underperform.
Company Specific Disclosures
Oppenheimer & Co. Inc. makes a market in the securities of BRCD, CSCO, INTC, MSFT, NTAP and ORCL.
Additional Information Available
Please log on to http://www.opco.com or write to Oppenheimer & Co. Inc., 85 Broad Street, New York, NY 10004, Attention: Equity
Research Department, Business Manager.
Other Disclosures
This report is issued and approved for distribution by Oppenheimer & Co. Inc. Oppenheimer & Co. Inc. transacts business on all principal
exchanges and is a member of SIPC. This report is provided, for informational purposes only, to institutional and retail investor clients of
Oppenheimer & Co. Inc. and does not constitute an offer or solicitation to buy or sell any securities discussed herein in any jurisdiction
where such offer or solicitation would be prohibited. The securities mentioned in this report may not be suitable for all types of investors.
This report does not take into account the investment objectives, financial situation or specific needs of any particular client of Oppenheimer
& Co. Inc. Recipients should consider this report as only a single factor in making an investment decision and should not rely solely
on investment recommendations contained herein, if any, as a substitution for the exercise of independent judgment of the merits and
risks of investments. The analyst writing the report is not a person or company with actual, implied or apparent authority to act on behalf
of any issuer mentioned in the report. Before making an investment decision with respect to any security recommended in this report,
the recipient should consider whether such recommendation is appropriate given the recipient's particular investment needs, objectives
and financial circumstances. We recommend that investors independently evaluate particular investments and strategies, and encourage
investors to seek the advice of a financial advisor. Oppenheimer & Co. Inc. will not treat non-client recipients as its clients solely by virtue
of their receiving this report. Past performance is not a guarantee of future results, and no representation or warranty, express or implied,
is made regarding future performance of any security mentioned in this report. The price of the securities mentioned in this report and
the income they produce may fluctuate and/or be adversely affected by exchange rates, and investors may realize losses on investments
in such securities, including the loss of investment principal. Oppenheimer & Co. Inc. accepts no liability for any loss arising from the
use of information contained in this report, except to the extent that liability may arise under specific statutes or regulations applicable to
Oppenheimer & Co. Inc. All information, opinions and statistical data contained in this report were obtained or derived from public sources
believed to be reliable, but Oppenheimer & Co. Inc. does not represent that any such information, opinion or statistical data is accurate or
complete (with the exception of information contained in the Important Disclosures section of this report provided by Oppenheimer & Co.
Inc. or individual research analysts), and they should not be relied upon as such. All estimates, opinions and recommendations expressed
herein constitute judgments as of the date of this report and are subject to change without notice. Nothing in this report constitutes legal,
accounting or tax advice. Since the levels and bases of taxation can change, any reference in this report to the impact of taxation should
not be construed as offering tax advice on the tax consequences of investments. As with any investment having potential tax implications,
clients should consult with their own independent tax adviser. This report may provide addresses of, or contain hyperlinks to, Internet web
sites. Oppenheimer & Co. Inc. has not reviewed the linked Internet web site of any third party and takes no responsibility for the contents
thereof. Each such address or hyperlink is provided solely for the recipient's convenience and information, and the content of linked third
party web sites is not in any way incorporated into this document. Recipients who choose to access such third-party web sites or follow
such hyperlinks do so at their own risk.
This research is distributed in the UK and elsewhere throughout Europe, as third party research by Oppenheimer Europe Ltd, which is
authorized and regulated by the Financial Conduct Authority (FCA). This research is for information purposes only and is not to be construed
as a solicitation or an offer to purchase or sell investments or related financial instruments. This research is for distribution only to persons
who are eligible counterparties or professional clients and is exempt from the general restrictions in section 21 of the Financial Services
and Markets Act 2000 on the communication of invitations or inducements to engage in investment activity on the grounds that it is being
distributed in the UK only to persons of a kind described in Article 19(5) (Investment Professionals) and 49(2) (High Net Worth companies,
unincorporated associations etc.) of the Financial Services and Markets Act 2000 (Financial Promotion) Order 2005 (as amended). It is not
intended to be distributed or passed on, directly or indirectly, to any other class of persons. In particular, this material is not for distribution to,
and should not be relied upon by, retail clients, as defined under the rules of the FCA. Neither the FCA’s protection rules nor compensation
scheme may be applied.
Distribution in Hong Kong: This report is prepared for professional investors and is being distributed in Hong Kong by Oppenheimer
Investments Asia Limited (OIAL) to persons whose business involves the acquisition, disposal or holding of securities, whether as principal
or agent. OIAL, an affiliate of Oppenheimer & Co. Inc., is regulated by the Securities and Futures Commission for the conduct of
dealing in securities, advising on securities, and advising on Corporate Finance. For professional investors in Hong Kong, please contact
[email protected] for all matters and queries relating to this report. This report or any portion hereof may not be reprinted, sold, or
redistributed without the written consent of Oppenheimer & Co. Inc.
This report or any portion hereof may not be reprinted, sold, or redistributed without the written consent of Oppenheimer & Co. Inc. Copyright
© Oppenheimer & Co. Inc. 2015.