Kazuhm Welcomes Distributed Computing Pioneer Dr. Larry Smarr as Technology Evangelist

UCSD Distinguished Professor Emeritus Brings Decades of Experience and Technical Expertise

August 19, 2020—Kazuhm, a leader in technology and tools for maximizing IT efficiency, today announced that Dr. Larry Smarr will provide support to the Kazuhm leadership team as Technology Evangelist. With over 40 years of experience driving information technology innovation in academia, government agencies, and private industry, Dr. Smarr brings a practical vision for the ever-broadening use of distributed computing. 

“We are thrilled to have Dr. Smarr with his unmatched knowledge and experience contributing to the Kazuhm team’s foundational expertise and outreach,” said Andreas Roell, Kazuhm CEO.  “Containerization, when applied to distributed computing, represents a paradigm shift in the way organizations process their workloads and store data, and there is no one better for Kazuhm to partner with than Dr. Smarr to deliver this message to the IT community.”

Dr. Smarr is Distinguished Professor Emeritus at the University of California, San Diego. From 2000-2020, he served as the founding Director of the California Institute for Telecommunications and Information Technology (Calit2), a UC San Diego/UC Irvine partnership, and held the Harry E. Gruber professorship in UCSD’s Department of Computer Science and Engineering. Before that (1985-2000) he was the founding director of the National Center for Supercomputing Applications (NCSA) at UIUC. He received his Physics Ph.D. in 1975 from the University of Texas at Austin and did postdoctoral research at Princeton, Harvard, and Yale, before becoming a Professor of Physics and of Astronomy at UIUC in 1979.

Additionally, Dr. Smarr has supported government agencies at the state and federal levels, including eight years as a member of the Advisory Committee to the NIH Director, serving three directors. He served on the NASA Advisory Council under four NASA Administrators and chaired both the NASA Information Technology Infrastructure Committee and the NSF Advisory Committee on Cyberinfrastructure. He also served on Governor Schwarzenegger’s California Broadband Taskforce in 2007 and currently serves on the Advisory Board to the Director of the Lawrence Berkeley National Laboratory. He continues to provide national leadership in advanced cyberinfrastructure (CI), currently serving as Principal Investigator on three NSF CI research grants: the Pacific Research Platform; the Cognitive Hardware and Software Ecosystem Community Infrastructure; and Toward a National Research Platform.

Among numerous honors and awards, Dr. Smarr is a member of the National Academy of Engineering and a Fellow of the American Physical Society, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences. He received the IEEE Computer Society Tsutomu Kanai Award for lifetime achievement in distributed computing systems in 2006 and the Golden Goose Award in 2014.

“Capital investments in on-premise computers are often underutilized because of the lack of a secure and flexible software infrastructure that can make full use of the capability of today’s distributed systems,” said Dr. Smarr. “I am looking forward to partnering with Kazuhm to help the IT community across sectors more fully utilize the hardware assets they have paid for, as well as to extend to external cloud resources, to meet the growing demand for compute capacity.”

Dr. Smarr gives frequent keynote addresses at professional conferences and to popular audiences. His views have been quoted in Science, Nature, the New York Times, Wall Street Journal, Time, Newsweek, Atlantic, New Yorker, Wired, MIT Technology Review, Fortune, Business Week, CBS, and the BBC.

 

 


Kazuhm Launches Industry’s First SaaS-Enabled Distributed Computing Platform

New feature provides IT departments with self-service installation for over 100,000 applications

April 29, 2020—Kazuhm today announced the launch of its “bring-your-own-app” functionality, allowing users to independently run any application with a Docker Compose file within the Kazuhm distributed computing environment. This new feature enables Kazuhm users to choose from more than 100,000 containerized applications in the Docker Hub library and quickly and easily deploy those apps across their Kazuhm-enabled corporate assets, including servers, desktops, laptops, and multi-cloud resources.

Kazuhm is the only distributed computing platform available as an enterprise-grade product, allowing organizations to take full advantage of the computing power they already own across all their devices. With the latest Kazuhm release, customers can upload and edit Docker Compose files, then configure and deploy applications with just a few clicks. This eliminates the need for extensive command-line capabilities and facilitates the use of many popular applications within the Kazuhm platform. Among the 100,000+ applications available as Docker Compose files are MySQL, WordPress, the Elastic Stack (ELK), Redis, and Cassandra.
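
To make the bring-your-own-app flow concrete, here is a minimal sketch of the underlying Docker step: take an application’s Docker Compose file and bring its services up on a host. The file name and project name are hypothetical, and the standard `docker compose` CLI stands in for Kazuhm’s own point-and-click deployment, which the platform drives for you across enrolled hosts.

```python
import subprocess
from pathlib import Path


def deploy_compose_app(compose_file: str, project_name: str) -> None:
    """Bring up every service defined in a Docker Compose file.

    Generic, standalone illustration of the step Kazuhm wraps behind its
    drag-and-drop interface; this is not the platform's own API.
    """
    compose_path = Path(compose_file)
    if not compose_path.is_file():
        raise FileNotFoundError(f"Compose file not found: {compose_path}")

    # 'docker compose up -d' pulls the referenced images from Docker Hub
    # (if they are not already local) and starts the services detached.
    subprocess.run(
        ["docker", "compose", "-f", str(compose_path),
         "-p", project_name, "up", "-d"],
        check=True,
    )


if __name__ == "__main__":
    # Hypothetical compose file describing a single Redis service.
    deploy_compose_app("redis-compose.yml", project_name="byoa-demo")
```

The same pattern applies to any of the Compose-packaged applications named above, from MySQL to the Elastic Stack.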

“Since our inception, our vision has been to provide corporations with the industry’s first SaaS-enabled distributed compute platform. With the launch of our bring-your-own-app functionality, we have now reached this significant milestone,” says Gregg Holsapple, vice president of product at Kazuhm. “Kazuhm allows corporations of any size to build a powerful compute fabric using resources they already own, lowering IT costs, and improving application performance. And the beauty behind it is that our drag-and-drop approach does not require any command-line capabilities, making it quick to establish and easy to maintain while saving hours of IT staff time.”

Also available in the current release are expanded scheduling and control features, deeper insights into host CPU and memory usage by deployed applications, and more visibility and automation for Docker installation on Windows and Linux devices.

 

Kazuhm COVID-19 Response

In related news, Kazuhm announced on March 23 their AI-driven distributed computing solution will be provided free to any organization fighting the coronavirus pandemic.  University and private sector research labs, test kit manufacturers, companies providing free or reduced-cost video conferencing solutions, and companies who have quickly ramped up manufacturing for items such as hand sanitizer, gloves, and face masks can all take advantage of Kazuhm to increase their computing capacity.  This offer is available to any organization across the globe that meets the criteria of contributing to the fight against COVID-19. Visit https://www.kazuhm.com/covid-19-response/ for more information on this offer.

 

 


Kazuhm Expands AI-Driven User Insights and Controls with Latest Distributed Computing Solution

Enterprise-grade distributed computing solution enables IT cost savings and  intelligent compute resource management

SAN DIEGO—April 3, 2020—Kazuhm today announced the availability of advanced AI-driven insights and controls in the latest release of its distributed computing platform. Newly added functionality uses artificial intelligence algorithms to forecast available computing capacity across corporate assets, including servers, desktops, laptops, and multi-cloud resources. Additionally, users can now more precisely control when any given compute resource is used, based on configurable exclusion windows and usage limits.
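
The exclusion-window idea is easy to picture with a small sketch. The example below is a hypothetical, standalone illustration (not Kazuhm’s scheduler code) of how a host might be judged available for work only outside configured exclusion windows and under a CPU usage limit.

```python
from dataclasses import dataclass
from datetime import datetime, time
from typing import List, Tuple


@dataclass
class HostPolicy:
    """Hypothetical per-host policy; illustrative only, not Kazuhm's data model."""
    exclusion_windows: List[Tuple[time, time]]  # (start, end) in local time
    max_cpu_percent: float                      # usage limit for platform work


def in_exclusion_window(policy: HostPolicy, now: datetime) -> bool:
    """Return True if 'now' falls inside any configured exclusion window."""
    t = now.time()
    for start, end in policy.exclusion_windows:
        if start <= end:
            if start <= t <= end:
                return True
        elif t >= start or t <= end:  # window wraps past midnight, e.g. 22:00-06:00
            return True
    return False


def host_can_accept_work(policy: HostPolicy, now: datetime,
                         current_cpu_percent: float) -> bool:
    """A host takes new work only outside its exclusion windows and while
    its current usage stays under the configured limit."""
    return (not in_exclusion_window(policy, now)
            and current_cpu_percent < policy.max_cpu_percent)


# Example: block platform work during business hours, cap CPU usage at 60%.
policy = HostPolicy(exclusion_windows=[(time(8, 0), time(18, 0))],
                    max_cpu_percent=60.0)
print(host_can_accept_work(policy, datetime(2020, 4, 3, 21, 30), 12.0))  # True
```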

The Kazuhm platform represents the first time distributed computing is available as an enterprise-grade product enabling organizations to take full advantage of the computing power they already own across all their devices from desktops, laptops and tablets to servers and multi-cloud environments.  Kazuhm allows customers to quickly and easily unify their resources and run enterprise applications faster, more securely, and at a lower cost.

“Kazuhm’s distributed compute software helps True run transcoding on compute assets we already own and save hundreds of thousands of dollars on compute and hardware costs,” said Reed Anderson, CTO, True Corporation. “We are excited about the artificial intelligence-driven insights we will get from this latest release, enabling even more optimization of our resources and therefore cost savings.”

Available to customers in the latest product release, Kazuhm now offers the following features:

  • Control features include the ability to set exclusion windows for each Kazuhm-enabled device on your network, pause and resume work on those devices, add additional resources to an existing host group, and manage storage from within the Kazuhm platform.

 

[Screenshot: Kazuhm Scheduler]

 

  • Monitoring features include a notification center that displays timely and critical information about your Kazuhm-enabled resources, information on Kazuhm-specific CPU usage across devices, and cloud resource status.

 

[Screenshot: Kazuhm Notifications Center]

 

  • Usability features include a newly streamlined Windows installation process and Google Cloud Platform provisioning capability.

 

[Screenshot: Kazuhm Windows Installer]

 

Kazuhm is applicable across a wide range of end-user devices, processor platforms, and operating systems, including Linux, macOS, and Windows. Companies interested in containing their IT costs while improving compute capacity and performance can request a free trial of the Kazuhm platform at https://www.kazuhm.com/.

Kazuhm COVID-19 Response

In related news, Kazuhm announced on March 23 their AI-driven distributed computing solution will be provided free to any organization fighting the coronavirus pandemic.  University and private sector research labs, test kit manufacturers, companies providing free or reduced cost video conferencing solutions, and companies who have quickly ramped up manufacturing for items such as hand sanitizer, gloves, and face masks can all take advantage of Kazuhm to increase their computing capacity.  This offer is available to any organization across the globe that meets the criteria of contributing to the fight against COVID-19. Visit https://www.kazuhm.com/covid-19-response/ for more information on this offer.

 


Kazuhm Provides Free Compute Capacity to Organizations Fighting the COVID-19 Pandemic

Enterprise-grade distributed computing platform enables immediate increase
in compute capacity and storage at no cost

SAN DIEGO—March 23, 2020—Kazuhm today announced their AI-driven distributed computing solution will be provided free to any organization fighting the coronavirus pandemic.  University and private sector research labs, test kit manufacturers, companies providing free or reduced-cost video conferencing solutions, and companies who have quickly ramped up manufacturing for items such as hand sanitizer, gloves, and face masks can all take advantage of Kazuhm to increase their computing capacity.  This offer is available to any organization across the globe that meets the criteria of contributing to the fight against COVID-19.

“COVID-19 is bringing people and organizations around the world together with a singular goal of beating the pandemic and saving lives,” said Rick Valencia, interim CEO of Kazuhm. “The shift in resources, the changeover in manufacturing lines, the ramp up of research labs all takes immense application processing power, and Kazuhm is dedicated to ensuring these teams are getting that processing power as easily, efficiently, and securely as possible.”

Universities, research labs, non-profits, hospitals, and enterprises across sectors are struggling to meet the sudden demand for everything from hand sanitizer to respirators to online learning and meeting solutions. The Kazuhm platform can help these organizations quickly and easily unify the compute resources they already own to maximize distributed storage and application processing power.

In many organizations, compute resources such as desktops, laptops, and servers go unused approximately 70% of the time.  Kazuhm represents the first time distributed computing is available as an enterprise-grade product, enabling organizations to take full advantage of the computing power they already own across all their devices, from Linux, macOS, and Windows desktops, laptops, and tablets to servers and multi-cloud environments.  Industry-proven in applications for genomics research, data analytics, and image and video processing, Kazuhm allows customers to quickly and easily unify their resources and run any containerized application faster, more securely, and at a lower cost.  A user-friendly interface and integrated dashboards enable simple setup, AI-driven insights, and complete control.

Companies interested in containing their IT costs as well as improving compute capacity and performance while fighting the COVID-19 pandemic can sign up for this free offer at https://www.kazuhm.com/covid-19-response/.

 


Rick Valencia Joins Analytics Ventures as Operating Partner

Former Qualcomm Executive Brings Vast Experience in Driving Operational Growth to Kazuhm

SAN DIEGO—Dec. 18, 2019—Analytics Ventures, a fund dedicated to creating and building venture companies that harness the power of artificial intelligence (AI), announced today that Rick Valencia has joined the Analytics Ventures leadership team, taking on the role of operating partner. Mr. Valencia’s primary function will be to help Analytics Ventures-backed companies transition from successful startups to independent, fast-growth technology companies.

“Rick Valencia is an astute investor, entrepreneur and diligent operations executive capable of rapidly scaling ventures with exceptional technology and early traction,” said Navid Alipour, managing partner at Analytics Ventures. “His technical and operational acumen in combination with his vast network of business and technology relationships will be key to the future success of our ventures.”

Before joining Analytics Ventures, Mr. Valencia was an SVP at Qualcomm and served as President of Qualcomm Life, Inc. after spearheading its formation in 2012. As President of Qualcomm Life, he was also responsible for overseeing Qualcomm’s healthcare venture funds, dRx Capital and the Qualcomm Life Fund. Prior to founding Qualcomm Life, he founded ProfitLine, Inc., a telecommunications service management provider, and served as its Chief Executive Officer from 1992 until the sale of the company in 2009. Mr. Valencia also serves on the Board of Directors of Tandem Diabetes Care (NASDAQ: TNDM) and is the Executive Chairman of TrekIT Health.

“Having successfully launched multiple AI companies over the past two years with their venture studio model, I feel that Analytics Ventures has a proven approach to leverage the power of AI for company formation across multiple industry verticals,” said Mr. Valencia. “I am excited to join this team of visionary business leaders and exceptional artificial intelligence scientists and look forward to being a catalyst for operational excellence and growth for our venture companies.”

Mr. Valencia’s initial focus as operating partner at Analytics Ventures will be to assist portfolio company Kazuhm in commercializing its innovative hybrid compute offering by assuming the role of executive chairman and interim CEO.  Kazuhm is an IT technology company enabling next-generation hybrid computing; it has built an enterprise-grade compute platform that allows application and hardware providers to let their users process more data faster, more securely, and at a lower overall cost. It does this by intelligently unifying a company’s existing, yet underutilized, enterprise compute resources, including desktops, servers, and cloud. Mr. Valencia will take on this role with the objective of building out and scaling Kazuhm’s operational framework to address the growing demand from enterprise customers.

About Analytics Ventures

Analytics Ventures is a venture studio providing end-to-end infrastructure to ideate, form, launch, and fund brand-new companies in artificial intelligence. With its own in-house AI lab, technology, back-office, and marketing setup, Analytics Ventures takes companies from formation to public launch in as little as six months. Winner of the Awards.AI Venture Capital Firm of the Year award two years in a row, the fund’s ecosystem is supported by a large network of corporate and academic partnerships, as well as other venture funds. To learn more about Analytics Ventures, visit www.analyticsventures.com.


Cloud today gone tomorrow?

by Kevin Hannah, Director of Product Operations, Kazuhm

The Changing Definition of “Cloud”

The message of achieving IT nirvana by moving to the cloud continued to ring loud throughout 2018. But in the face of practical realities that included overrunning budgets1, security concerns2, performance issues due to network latency3, and an ever-increasing skills gap, the emphasis on public cloud shifted to hybrid cloud, where organizations were encouraged to take advantage of both public and private deployments, with 80% “repatriating workloads back to on-premise systems”4. The public cloud providers have been forced to embrace this reality, as evidenced by Amazon announcing Outposts to bring its hardware into customer data centers, more recently followed by Google with Anthos, and by the further recognition that “some customers have certain workloads that will likely need to remain on-premises for several years”5.

The number of “cloud” options has continued to increase and there is no one-size-fits-all, so what we were really talking about at the end of last year was some variant of xyz cloud (public, private, multi, or hybrid).

But wait. The “fog” is rolling in. Or, as Gartner would say, “the Edge will eat the Cloud”6. The tsunami of Edge and Internet-of-Things (IoT) deployment behind both of these statements is driving organizations away from a single-threaded focus on “cloud” and requires another rethinking of our definitions. Add the ability to run workloads on desktops, along with the truly disparate constituent parts of this ever-expanding compute continuum, and xyz cloud just doesn’t cut it anymore.

Adding a version number, e.g. Cloud 2.0, is lackluster. And although the use of “3rd platform” by IDC7 builds on an evolution of mainframe/greenscreen, through client/server, to cloud/browser, and comes somewhat closer, I see it as muddying the waters by weaving in social business and big data analytics that are not intrinsically part of a compute continuum.

Is it Cloud, is it Edge, or is it both? I believe we need new terminology, one best characterized in a Next Generation Grid of heterogeneous, connected, compute resources.

 

Containers as the “Life Blood” of Digital Transformation need a Heart

Despite the hybrid/multi-cloud push in 2018 and the lauded growth rates in spend and adoption, the reality is somewhat different and “the so-called rush to the cloud is not, at present, much of a stampede, at all”; by 2021 only 15% of IT budgets will be going to the (public) cloud8.

Cloud this year is “still only used for around 10-20% of applications and workloads,” according to 451 Research9, and this figure doesn’t even differentiate between production and non-production workloads.

The drip became a trickle in 2018, but reaching flood stage will require the ability for workloads to move freely across the entire compute continuum: from desktop, to legacy server, to private cloud, to public cloud, to the Edge and the IoT beyond. In other words, Containers. So it is no surprise that Forrester predicts “2019 will be the year that enterprises widely adopt container platforms as they become a key component in digital transformation initiatives”10. A recent Kazuhm survey of IT professionals supports this, with 75% of respondents predicting they would increase their use of containers in 2019.

However, it is not just a case of organizations simply rolling out containerized application workloads. It matters that the right workloads are deployed onto the right resources for the right reasons (including cost, performance, security/compliance, and even more esoteric vectors such as “data gravity”11, which anchors processing to the location of the data). In other words, Optimal Workload Placement. We have already explored the breadth of resources, but adding myriad workload types and business reasons compounds the complexity exponentially.
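
As a rough illustration of what “right workload, right resource, right reason” can mean in practice, the sketch below scores candidate resources against a workload’s cost, latency, and compliance requirements. The resource names, rates, and scoring weights are invented for the example; this is not any vendor’s placement algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Resource:
    name: str
    cost_per_hour: float      # assumed dollars per hour
    latency_ms: float         # round-trip latency to the data source
    compliant: bool           # meets the workload's security/compliance needs


@dataclass
class Workload:
    name: str
    max_latency_ms: float
    requires_compliance: bool
    cost_weight: float = 1.0  # how strongly cost matters vs. latency
    latency_weight: float = 1.0


def place(workload: Workload, resources: List[Resource]) -> Optional[Resource]:
    """Pick the cheapest/fastest resource that satisfies hard constraints.

    Hard constraints (compliance, latency ceiling) filter candidates;
    the remaining soft preferences are folded into a single score.
    """
    candidates = [r for r in resources
                  if r.latency_ms <= workload.max_latency_ms
                  and (r.compliant or not workload.requires_compliance)]
    if not candidates:
        return None  # nothing suitable: the workload stays where it is
    return min(candidates,
               key=lambda r: workload.cost_weight * r.cost_per_hour
                           + workload.latency_weight * r.latency_ms / 100.0)


resources = [
    Resource("idle-desktop", cost_per_hour=0.00, latency_ms=2, compliant=True),
    Resource("public-cloud", cost_per_hour=0.10, latency_ms=40, compliant=False),
    Resource("edge-node",    cost_per_hour=0.05, latency_ms=5, compliant=True),
]
job = Workload("image-analytics", max_latency_ms=20, requires_compliance=True)
print(place(job, resources).name)  # -> idle-desktop
```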

The use of AI and the cloud have seen parallel growth, the latter acting as an enabler by collecting, storing, processing, and analyzing the vast volumes of rich data needed to feed AI algorithms. But again, AI at the Edge is set to take center stage as issues with latency, bandwidth, and persistent connectivity (reliability) compound the problems the cloud already has with privacy, security, and regulatory concerns and with economics. What were we saying about cloud being inadequate as an overarching term…

That aside, now is the time to apply AI inward; I believe 2019 will be marked as the start of the evolution of AI-enabled Orchestration of container workloads, the pumping heart of digital transformation.

The future is AI-enabled Orchestration for Optimal Workload Placement on the Next Generation Grid.

 

You hear that Mr. Anderson?… that is the sound of inevitability…

My parting thought for this future: “AWS wants to rule the world”12. So did IBM, the biggest American tech company by revenue in 1998; 20 years later, it is not even among the top 30 companies in the Fortune 500. The cycle of technology change continues to turn, but at an even faster pace. Perhaps Cloud today gone tomorrow?

 

References

1 Source: Cloud trends in 2019: Cost struggle, skills gap to continue https://searchitchannel.techtarget.com/feature/Cloud-trends-in-2019-Cost-struggle-skills-gap-to-continue

2 Source: What’s Coming for Cloud Security in 2019? https://www.meritalk.com/articles/whats-coming-for-cloud-security-in-2019/

3 Source: Cloud 2.0: What Does It Mean for Your Digital Strategy? https://www.forbes.com/sites/riverbed/2018/10/11/cloud-2-0-what-does-it-mean-for-your-digital-strategy/

4 Source: Businesses Moving from Public Cloud Due To Security, Says IDC Survey https://www.crn.com/businesses-moving-from-public-cloud-due-to-security-says-idc-survey

5 Source: Amazon Web Services Announces AWS Outposts https://www.businesswire.com/news/home/20181128005680/en/Amazon-Web-Services-Announces-AWS-Outposts

6 Source: Gartner, The Edge will Eat the Cloud https://www.delltechnologies.com/en-us/perspectives/the-edge-will-eat-the-cloud-a-gartner-report/

7 Source: IDC https://www.idc.com/promo/thirdplatform

8 Source: ‘Big four’ set for assault on cloud market https://techhq.com/2018/11/big-four-set-for-assault-on-cloud-market/

9 Source: Sky’s the limit in global race to adopt cloud https://www.raconteur.net/technology/skys-the-limit-in-global-race-to-adopt-cloud

10 Source: Predictions 2019: What to Expect in the Cloud/Container World https://www.eweek.com/development/predictions-2019-what-to-expect-in-the-cloud-container-world

11 Source: Defying data gravity: How can organizations escape cloud vendor lock-in? https://www.cloudcomputing-news.net/news/2018/nov/23/defying-data-gravity-how-can-organisations-escape-cloud-vendor-lock-/

12 Source: AWS wants to rule the world https://techcrunch.com/2018/12/02/aws-wants-to-rule-the-world/


The Triple DNA Helix of AI at the Edge

In the blog below and in Amelia Dalton’s ‘fish fry’ podcast from the EE Journal, Kevin Hannah, Director of Product Operations for Kazuhm, explores why artificial intelligence should take center stage at the edge, and why our ability to process the tsunami of information coming at us from 5G, IoT, and Big Data will depend on how successfully artificial intelligence is deployed there.

As Neo was told, “that is the sound of inevitability”; so too are organizations being told when it comes to both AI and the Edge. But inevitable as it is, if we are to see the delivery of tangible business value rather than just continuing to read articles espousing lofty promises of what will be, we need to understand the three complementary, entwined strands that make AI at the Edge both possible and, more importantly, financially viable.

AI Applications are the obvious end-user manifestation of AI at the Edge. But why focus on AI rather than one, or many, of the other technology darlings such as AR, VR, and Autonomous Driving? All are perceived to deliver value at the Edge based on their need for low-latency performance, reduced movement of data (whether for bandwidth reduction or for compliance with jurisdiction/sovereignty requirements), survivability, and reliability.

The business case for AI is simply an extension of the tidal wave of Business Intelligence and Analytics associated with all things Big Data. And that is the key: the massive data volumes generated by next-generation connected Internet of Things (IoT) devices continue to grow exponentially.

AR/VR are cool to demonstrate but have offered little to organizations in terms of real revenue gain, and Autonomous Driving is going to face numerous uphill struggles with regulatory adoption.

But the use of AI, trained using Machine Learning (ML) algorithms, on data at the Edge is easy to grasp in terms of immediate business benefit: insights are generated, and immediate actions taken, where the data is produced, rather than having to rely on distant, centralized cloud resources. This is nowhere more evident than in Manufacturing, where high-precision manufacturing and robotics require AI located on premises to ensure real-time responsiveness, while connected machines and sensors provide new insights into predictive maintenance and energy efficiency across disparate geographic locations in pursuit of improved operating profit.

However, the Edge is a continuum stretching from the IoT device layer, through the Access Edge “last mile” layer, to the Infrastructure Edge data center layer, with ML on aggregated data seamlessly picking up where work at the device leaves off. Ultimately, this provides the opportunity to improve scalability and performance by placing AI at the optimal location in the Edge topology.

And it is this AI-as-a-Service sitting at the network edge that represents a key monetization opportunity for Communication Service Providers (CSPs).  It allows them to move away from selling undifferentiated basic bandwidth services, become relevant in the developing AI Application ecosystem, and drive new revenue. This is a time-sensitive endeavor as the major public cloud providers look to extend their reach in reaction to the “edge will eat the cloud” (Gartner).

Edge Infrastructure is the domain of the CSPs who, as we have discussed, are leveraging their network infrastructure as a rich, functional platform for AI applications. Ownership of access networks and edge cloud infrastructure gives them a competitive advantage over public cloud providers, particularly in the 5G era. And without 5G there will be network problems, not only in providing connectivity for the billions of anticipated IoT devices but also in transmitting the huge volumes of data that will be generated.

Out of 5G come Software-Defined Networking (SDN), designed to make networks more flexible and agile through Network Function Virtualization (NFV), and Mobile Edge Computing or Multi-Access Edge Computing (MEC), which is essentially a cloud-based IT service environment at the edge of the network.

A set of standardized compute resources, both CPU and GPU, is provided, running cloud-native applications and orchestration to mimic the platform simplicity, API familiarity, and developer comfort of the cloud. But within the 5G networks, these resources reside on a playing field differentiated by location… a game the CSP can win.

So, with companies such as NVIDIA looking to Edge-located GPUs to support AR, VR, and Connected Gaming over this standardized 5G infrastructure (although not a direct use for AI, as mentioned earlier), these resources can be recaptured when idle as powerful accelerators for AI training algorithms.

And back to the billions of anticipated IoT devices, such as mobile phones, whose onboard compute resources are becoming increasingly powerful. They can now enable Federated Learning, a privacy-preserving mechanism that leverages these decentralized compute resources to train ML models, coordinated through the Edge-located ML resources described above.
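
The aggregation step at the heart of Federated Learning is simple enough to sketch. The toy example below assumes a plain federated-averaging (FedAvg) scheme with simulated devices and synthetic data; it is illustrative only, not any CSP’s or vendor’s implementation. Each device trains on data that never leaves it, and a coordinator averages the resulting weights.

```python
import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training: a few gradient steps of linear
    regression on data that never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


def federated_average(client_weights, client_sizes):
    """Coordinator step: average client models, weighted by how much
    data each client trained on (the FedAvg aggregation rule)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three simulated devices, each with its own private data.
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(10):  # federated training rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(global_w)  # approaches [2.0, -1.0] without ever pooling raw data
```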

A complete, connected, ecosystem hosting AI stacks for both the CSP and their clients/partners offers the opportunity to rethink business models and how to participate in value creation, value distribution and value capture. Here, effective participation is the key to monetizing network infrastructure.

AI-Enablement is the use of the AI stack by the CSP for automated workload orchestration, the underpinning for provisioning and managing services and applications at the Edge.

This means the Edge itself becomes more intelligent, making it not only relevant for low-latency applications but also able to unlock highly intelligent and secure opportunities: data transmission efficiencies, traffic steering, zero-touch service management, and optimal workload placement (including Virtual Network Function, or VNF, placement); a smart way to handle the right workload, on the right resource, for the right reason, whether that be cost, performance, security/compliance, routing, or even reliability.

AI will be critical to network automation and optimization, with real-time decisions needed in support of traffic characterization, meeting end-to-end quality of service, and in particular – Dynamic Network Slicing that allows CSPs to monetize their infrastructure by offering multiple service tiers at different price points. For example, a slice of the network to handle certain floor robotics that rely on ultra-low latency may garner a higher price than a parallel slice for less time-sensitive edge compute.

The DNA of AI at the Edge is now starting to form. Time will tell as to who will endure (through financial success) to pass theirs to a next generation where AI functionality is so completely decoupled and disseminated so broadly that it will seem to disappear altogether.

Want to hear more? Listen to Amelia Dalton’s podcast ‘fish fry’ from the EE Journal featuring Kevin Hannah, Director of Product Operations for Kazuhm, at the link below.

The Curious Case of the Critical Catalyst – Why Artificial Intelligence will be the Darling of the Edge


Webcasts

View our webcast playbacks or sign up for upcoming webcasts.  Learn more about the Kazuhm platform and how our customers are using it to accelerate their “cloud-smart” strategy.

See all webcasts here: https://www.kazuhm.com/webcasts/


2019 IT Industry Opinion Survey

This report highlights important findings from Kazuhm’s December 2018 survey of IT industry professionals, who offered opinions on their challenges and expectations for 2019.

Survey Overview

During the fourth quarter of 2018, the Kazuhm team saw a rapid increase in demand for compute workload processing, driven by growth in compute-intensive applications fueled by the massive increase in data received via the Internet of Things (IoT) and the application of that data to Artificial Intelligence (AI) initiatives.  We launched our survey to find out how IT professionals are coping with this increased demand; the following blog post summarizes the results.  Key findings include:

  • Among several IT initiatives cited, 60% of respondents said they were planning edge computing/IoT initiatives in 2019; 56% said they were planning AI/ML initiatives in 2019.
  • 86% said they planned to increase use of the public cloud, yet only 42% were confident their data was safe in the public cloud.
  • 75% said they would increase their use of containers in 2019.
  • 27% said they would purchase new laptops/desktops to increase capacity.
  • 43% said they would purchase new servers to increase capacity.

More Data-Driven Applications Mean More Demand for Compute Resources

The majority of respondents report that their companies will launch IoT/Edge computing initiatives during 2019.  Likewise, the majority of respondents report their companies will launch artificial intelligence or machine learning initiatives.

This holds true when responses are broken down by company size.  Only the smallest companies (50 employees or less) have a majority of respondents reporting they are NOT planning AI or IoT initiatives this year.  Companies with between 1000 and 5000 employees had the largest percentages reporting plans for IoT and AI initiatives at 68% and 70% respectively.

These initiatives drive demand for compute resources.  Respondents reported plans to meet this demand in several ways: 86% plan to increase work done in the public cloud and 75% will increase containerization of workloads.  The most popular container technology among respondents is Docker, with just over 30% saying they use Docker containers to process production workloads. In addition, 83% said they plan to purchase new laptops and desktops, with 27% reporting they will do so to increase capacity, and 70% said they will purchase new servers, with 43% reporting they will do so to increase capacity.

Are companies using all their current capacity?

While companies are planning to increase capacity through purchases, much of their current capacity has significant idle time.  Laptops and desktops spend nights and weekends powered off while servers operate under capacity.  Most respondents estimate that at least 50% of their laptops and desktops are idle or powered off during nights and weekends.  More than 70% reported their servers are idle more than 25% of the time on nights and weekends.

 

So not only is new capacity being purchased, but there is already capacity within most organizations today that is unused or underused and can be tapped as companies ramp up their new AI/ML or IoT/Edge initiatives, saving significantly on public cloud costs.

A desktop that is 80% idle, for example, equates to a $1000 value, as compared to the equivalent AWS EC2 and Google Compute Engine cost, when recaptured for container-based workloads. The same is true for underused server resources. For example, where such a server is 40% idle, it can be given a $1000+ value when that underutilized capacity is recaptured for new digital workloads. The value of recapturing unused/underused resources scales very quickly for the typical numbers of desktops and servers in most organizations.
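
As a rough sketch of the arithmetic behind these figures, the value of recaptured capacity is the idle hours a machine can contribute multiplied by what an equivalent cloud instance would cost for those hours. The hourly rates below are assumptions chosen only to show the shape of the calculation, not quoted AWS EC2 or Google Compute Engine prices.

```python
def recaptured_value(idle_fraction: float,
                     cloud_rate_per_hour: float,
                     hours_per_year: int = 8760) -> float:
    """Annual value of idle capacity, priced at an equivalent cloud rate.

    idle_fraction       -- share of the year the machine sits unused
    cloud_rate_per_hour -- assumed cost of a comparable cloud instance
    """
    return idle_fraction * hours_per_year * cloud_rate_per_hour


# Hypothetical rates chosen only to illustrate the calculation.
desktop = recaptured_value(idle_fraction=0.80, cloud_rate_per_hour=0.15)
server = recaptured_value(idle_fraction=0.40, cloud_rate_per_hour=0.30)
print(f"Desktop, 80% idle: ~${desktop:,.0f} per year")  # ~ $1,051
print(f"Server, 40% idle:  ~${server:,.0f} per year")   # ~ $1,051
```

Multiplied across the typical number of desktops and servers in an organization, the total scales quickly, as noted above.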

Security

Keeping data secure is often cited as a top concern of IT professionals and the respondents in this survey are no different.  As noted above, the majority plan to increase their work in the public cloud, but at the same time, public cloud security remains a concern. 

58% of respondents are not confident their data is secure in the public cloud. Security concerns can be mitigated using an IT resource recapture strategy that keeps your data in-house on your own laptops and servers.

Conclusion

As IT professionals prepare their organizations for the demands of IoT/Edge computing and AI/ML initiatives, cost, performance, and security are top of mind.  Cloud computing is doubtlessly an enabler, but while many prepare to increase their IT spending in the cloud, concerns remain regarding data security.  Workload containerization combined with the recapture of underused resources offers a secure, performant, cost-effective alternative.  Given the levels of IT resources, including desktops and servers that have unused/underused capacity, IT professionals have the opportunity to build a thoughtful strategy around the rollout of data-heavy applications where they choose the right resource for the right reason. 

Download the sharable infographic here.

[Infographic: Kazuhm 2019 IT Industry Outlook]

Survey Methodology

The survey was built and conducted using the Qualtrics platform.  Responses were 100% anonymous and were received from more than 540 IT professionals, including Chief Information Officers (CIOs), Chief Technology Officers (CTOs), IT systems administrators, IT systems analysts, IT managers, IT directors, and purchasing managers across a wide array of sectors. Opinions were collected over a two-week period ending in December 2018.

About Kazuhm

Kazuhm is a next-generation workload optimization platform that empowers companies to maximize all compute resources, from desktop to server to cloud. Founded on the belief that organizations have become too dependent on cloud computing while disregarding the untapped resources that already exist within their organizations today, Kazuhm securely and efficiently recaptures unused processing power to boost productivity and minimize unnecessary IT investment. As the first fully connected, commercial-grade compute ecosystem, it allows organizations worldwide to process workloads at a fraction of the cost. Global IT managers and leaders have adopted Kazuhm’s easy, centralized install process, which puts resource control back into their hands. Learn more at www.kazuhm.com or sign up for a Free Trial today!

 


TechRadar Highlights the Future of Cloud Computing

What will the future of cloud computing look like as we enter 2019?  Changes are afoot in the world of cloud computing, and journalists and editors are starting to take note. Cloud computing has been the darling of the media for the last several years, supported by the meteoric rise of services like AWS, Google Cloud, and Microsoft Azure. As this market begins to mature and the hype cycle levels off, the weaknesses of cloud offerings become more apparent, and complementary or alternative solutions will begin to take hold. Common issues with cloud computing include security, vendor lock-in, rising cloud costs, and poor performance.  For example, according to Kazuhm’s recent survey of more than 500 IT professionals across sectors, while many plan to increase their use of the public cloud, only 42% feel their work done in the public cloud is completely secure.  In 2019, customers will begin to demand solutions to these issues, which will open up opportunities for new companies and products to enter the market.  What’s more, the advent of the cloud and cloud-native applications is driving changes in the demand for and availability of IT skills, from the system admin level all the way to the CIO.

In the linked article, Kazuhm CEO Tim O’Neal answers TechRadar’s questions and shares his vision of The Future of Cloud Computing in 2019.  The article explores topics such as how to maximize your compute resources, including cloud as well as on-premises servers and desktops; the top mistakes companies make when moving to the cloud; which workloads may not be well suited to the cloud; the IT talent gap associated with the rise of the cloud; and more.  It makes a good read for those purchasing IT resources, planning 2019 cloud migrations, or simply staying current on IT career planning.
