OpenDrives Partners with Kazuhm to Power High-Performance, Low-Cost Storage and Compute

March 24, 2021 — OpenDrives, the global provider of enterprise-grade, hyper-scalable network-attached storage (NAS) solutions, announced today that it has partnered with Kazuhm, the distributed computing technology leader that enables IoT and other enterprise data to be processed with ultra-low latency and cost. Joining the OpenDrives containerization marketplace, also announced today as a new feature of OpenDrives’ centralized management software platform (Atlas), Kazuhm allows end-users to activate and integrate compute nodes on their network regardless of operating system or type of device.

“At OpenDrives, we’ve taken a storage-first approach to compute by bringing containerized applications and automation directly into the storage infrastructure. The introduction of ‘pods’ and ‘recipes’ simplifies deployment and management of these containerized applications, thus reducing complexity and increasing performance,” said Sean Lee, Chief Product and Strategy Officer at OpenDrives. “By partnering with best-in-class distributed compute technology leaders like Kazuhm, who share our commitment to ultra-low latency and low cost, enterprise-scale organizations can run multiple workloads in isolated virtual environments as close as possible to the data to optimize performance.”

Kazuhm runs secondary workloads that never interfere with a node’s primary function, enabling secure, low-cost, low-latency compute across a variety of enterprise applications. Designed to run on the OpenDrives storage device, Kazuhm enables OpenDrives’ complete storage hardware and compute offering to leverage any customer IT asset, providing a flexible storage-compute alternative to rigid on-premise private cloud solutions that require dedicated hardware assets.

“We are extremely pleased to have the opportunity to be the first OpenDrives partner to deliver our distributed computing capabilities as an integrated feature of the Atlas platform,” said Andreas Roell, CEO of Kazuhm. “OpenDrives’ containerization marketplace enables us to expand our reach to a new set of customers that have not yet experienced the cost and performance benefits of running containerized workloads on their own on-premise devices.”

Kazuhm is available through the OpenDrives containerization marketplace, a rapidly growing ecosystem that, through a robust API, allows customers to load recipes on the compute module without interfering with storage performance and further power OpenDrives’ scale-up, scale-out infrastructure. OpenDrives’ software dashboard, also included in the latest Atlas software release, unlocks deep insights into all analytics running on the OpenDrives system, in real-time, with an easy-to-use graphical interface.
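To make the recipe-loading workflow concrete, here is a minimal sketch of what deploying a marketplace recipe through a management API of this kind could look like. Neither the announcement nor this page documents the actual Atlas marketplace API, so the endpoint path, payload fields, and authentication shown below are hypothetical illustrations, not the real interface.

```python
# Hypothetical sketch: loading a containerized "recipe" via a REST API.
# The host, endpoint, payload fields, and token are illustrative assumptions;
# the actual Atlas marketplace API may look quite different.
import requests

ATLAS_HOST = "https://atlas.example.internal"  # assumed management endpoint
API_TOKEN = "REPLACE_WITH_REAL_TOKEN"

def deploy_recipe(name: str, image: str, cpu_limit: float, mem_gb: int) -> dict:
    """Ask the storage node's compute module to launch a containerized recipe."""
    payload = {
        "recipe": name,          # e.g. "grafana-analytics"
        "image": image,          # container image to pull and run
        "resources": {"cpu": cpu_limit, "memory_gb": mem_gb},
    }
    resp = requests.post(
        f"{ATLAS_HOST}/api/v1/recipes",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(deploy_recipe("grafana-analytics", "grafana/grafana:latest", 2.0, 4))
```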

Additional applications in the OpenDrives containerization marketplace include the DaVinci Resolve Project Server, as well as common agents for services like Grafana Analytics and Splunk.

To learn more about OpenDrives’ latest solutions or to see how you can join the OpenDrives containerization marketplace, email [email protected] or visit www.opendrives.com. To learn more about Kazuhm, visit www.kazuhm.com.

About OpenDrives

OpenDrives is a global provider of enterprise-grade, hyper-scalable network-attached storage (NAS) solutions. Founded in 2011 by media and entertainment post-production professionals, OpenDrives is built for the most demanding workflows, from Hollywood to healthcare, and businesses large and small. OpenDrives delivers the highest-performing solutions to match individual performance needs, even for the most robust, complex, and mission-critical projects, on-premises and in the cloud. OpenDrives is headquartered in Los Angeles, CA. To learn more about OpenDrives, visit www.opendrives.com.


Cloud today, gone tomorrow?

by Kevin Hannah, Director of Product Operations, Kazuhm

The Changing Definition of “Cloud”

The message of achieving IT nirvana by moving to the cloud continued to ring loud throughout 2018. But in the face of practical realities that included overrunning budgets [1], security concerns [2], performance issues due to network latency [3], and an ever-increasing skills gap, the emphasis on public cloud shifted to hybrid cloud, where organizations were encouraged to take advantage of both public and private deployments; 80% reported “repatriating workloads back to on-premise systems” [4]. The public cloud providers have been forced to embrace this fact, as evidenced by Amazon announcing Outposts to bring its hardware into customer data centers, more recently followed by Google with Anthos, and by the further recognition that “some customers have certain workloads that will likely need to remain on-premises for several years” [5].

The number of “cloud” options has continued to increase, and there is no one-size-fits-all. So what we were really talking about at the end of last year was any variant on xyz cloud (public, private, multi, and hybrid).

But wait. The “fog” is rolling in. Or, as Gartner would say, “the Edge will eat the Cloud” [6]. The future tsunami of Edge and Internet-of-Things (IoT) deployments behind both these statements is driving organizations away from a single-threaded focus on “cloud” and requires another rethinking of our definitions. Add the ability to run workloads on the desktop, plus the truly disparate constituent parts of this ever-expanding compute continuum, and xyz cloud just doesn’t cut it anymore.

Adding a version number, e.g. Cloud 2.0, is lackluster. And although IDC’s “3rd platform” [7] builds on an evolution from mainframe/greenscreen, through client/server, to cloud/browser, and comes somewhat closer, I see it as muddying the waters by weaving in social business and big data analytics that are not intrinsically part of a compute continuum.

Is it Cloud, is it Edge, or is it both? I believe we need new terminology, best characterized as a Next Generation Grid of heterogeneous, connected compute resources.


Containers as the “Life Blood” of Digital Transformation need a Heart

Despite the hybrid/multi-cloud push in 2018 and the lauded growth rates in spend and adoption, the reality is somewhat different: “the so-called rush to the cloud is not, at present, much of a stampede, at all”; by 2021 only 15% of IT budgets will be going to the (public) cloud [8].

Cloud this year is “still only used for around 10-20% of applications and workloads”, according to 451 Research [9], and this doesn’t even differentiate between production and non-production.

The drip became a trickle in 2018, but to reach flood stage will require the ability to move workloads freely across the entire compute continuum, from desktop, to legacy server, to private cloud, to public cloud, to the Edge and the IoT beyond. In other words, Containers. So it is no surprise that Forrester predicts “2019 will be the year that enterprises widely adopt container platforms as they become a key component in digital transformation initiatives” [10]. A recent survey of IT professionals by Kazuhm supports this, with 75% of respondents predicting they would increase their use of containers in 2019.

However, it is not just a case of organizations simply rolling out containerized application workloads. It matters that the right workloads are deployed onto the right resources for the right reasons (including cost, performance, security/compliance, and even more esoteric vectors such as “data gravity” [11] that root the location of processing). In other words, Optimal Workload Placement. We have already explored the breadth of resources, but adding a myriad of workload types and business reasons compounds the complexity exponentially.
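As a toy illustration of what “right workload, right resource, right reason” could look like in code, the sketch below scores candidate resources against a workload’s constraints and picks the cheapest feasible one. This is my own simplified sketch, not Kazuhm’s placement algorithm; all names, weights, and numbers are invented.

```python
# Toy sketch of optimal workload placement: score each candidate resource
# against a workload's requirements and pick the best. Illustrative only;
# not Kazuhm's actual placement logic.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cost_per_hour: float   # $ per hour
    latency_ms: float      # network latency to the data
    secure_zone: bool      # satisfies compliance requirements

@dataclass
class Workload:
    name: str
    max_latency_ms: float
    needs_secure_zone: bool

def placement_score(w: Workload, r: Resource) -> float:
    """Lower is better; infeasible placements score infinity."""
    if w.needs_secure_zone and not r.secure_zone:
        return float("inf")
    if r.latency_ms > w.max_latency_ms:
        return float("inf")
    # Weight cost and latency; "data gravity" would add a data-movement term.
    return r.cost_per_hour + 0.01 * r.latency_ms

def place(w: Workload, resources: list[Resource]) -> Resource:
    return min(resources, key=lambda r: placement_score(w, r))

if __name__ == "__main__":
    pool = [
        Resource("public-cloud", 0.90, 45.0, False),
        Resource("on-prem-server", 0.20, 2.0, True),
        Resource("edge-node", 0.35, 0.5, True),
    ]
    job = Workload("video-transcode", max_latency_ms=10.0, needs_secure_zone=True)
    print(place(job, pool).name)  # -> "on-prem-server" (cheapest feasible option)
```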

The use of AI and the cloud have grown in parallel, the latter an enabler for collecting, storing, processing, and analyzing the vast volumes of rich data necessary to feed AI algorithms. But again, AI at the Edge is set to take center stage as issues with latency, bandwidth, and persistent connectivity (reliability) compound the problems the cloud already has with privacy, security, and regulatory concerns, and with economics. What were we saying about cloud being inadequate as an overarching term…

That aside, now is the time to apply AI inward; I believe 2019 will be marked as the start of the evolution of AI-enabled Orchestration of container workloads, the pumping heart of digital transformation.

The future is AI-enabled Orchestration for Optimal Workload Placement on the Next Generation Grid.


You hear that, Mr. Anderson?… That is the sound of inevitability…

My parting thought for this future: “AWS wants to rule the world” [12]. As did IBM, the biggest American tech company by revenue in 1998. Now, 20 years later, it is not even among the top 30 companies in the Fortune 500. The cycle of technology change continues to turn, but at an ever faster pace. Perhaps Cloud today, gone tomorrow?


References

[1] Cloud trends in 2019: Cost struggle, skills gap to continue. https://searchitchannel.techtarget.com/feature/Cloud-trends-in-2019-Cost-struggle-skills-gap-to-continue

[2] What’s Coming for Cloud Security in 2019? https://www.meritalk.com/articles/whats-coming-for-cloud-security-in-2019/

[3] Cloud 2.0: What Does It Mean for Your Digital Strategy? https://www.forbes.com/sites/riverbed/2018/10/11/cloud-2-0-what-does-it-mean-for-your-digital-strategy/

[4] Businesses Moving from Public Cloud Due To Security, Says IDC Survey. https://www.crn.com/businesses-moving-from-public-cloud-due-to-security-says-idc-survey

[5] Amazon Web Services Announces AWS Outposts. https://www.businesswire.com/news/home/20181128005680/en/Amazon-Web-Services-Announces-AWS-Outposts

[6] Gartner, The Edge Will Eat the Cloud. https://www.delltechnologies.com/en-us/perspectives/the-edge-will-eat-the-cloud-a-gartner-report/

[7] IDC, The 3rd Platform. https://www.idc.com/promo/thirdplatform

[8] ‘Big four’ set for assault on cloud market. https://techhq.com/2018/11/big-four-set-for-assault-on-cloud-market/

[9] Sky’s the limit in global race to adopt cloud. https://www.raconteur.net/technology/skys-the-limit-in-global-race-to-adopt-cloud

[10] Predictions 2019: What to Expect in the Cloud/Container World. https://www.eweek.com/development/predictions-2019-what-to-expect-in-the-cloud-container-world

[11] Defying data gravity: How can organizations escape cloud vendor lock-in? https://www.cloudcomputing-news.net/news/2018/nov/23/defying-data-gravity-how-can-organisations-escape-cloud-vendor-lock-/

[12] AWS wants to rule the world. https://techcrunch.com/2018/12/02/aws-wants-to-rule-the-world/


The Triple DNA Helix of AI at the Edge

Kevin Hannah, Director of Product Operations for Kazuhm, explores why artificial intelligence should take center stage at the edge, and why our ability to process the tsunami of information coming at us from 5G, IoT, and Big Data will depend on how successfully artificial intelligence is deployed there, in the blog below and in Amelia Dalton’s ‘fish fry’ podcast from the EE Journal.

As Neo was told, “that is the sound of inevitability”, so too are organizations when it comes to both AI and the Edge. But inevitable as it is, if we are to see the delivery of tangible business value, rather than just continuing to read articles espousing lofty promises of what will be, we need to understand the three complementary, entwined strands that make AI at the Edge both possible and, more importantly, financially viable.

AI Applications are the obvious end-user manifestation of AI at the Edge. But why focus on AI rather than one, or many, of the other technology darlings such as AR, VR, and Autonomous Driving? All are perceived to deliver value at the Edge based on their needs for low-latency performance, reduced movement of data (whether for bandwidth reduction or for compliance with jurisdiction/sovereignty requirements), survivability, and reliability.

The business case for AI is simply an extension of the tidal wave of Business Intelligence and Analytics associated with all things Big Data. And that is the key: the massive data volumes generated by next-generation connected Internet of Things (IoT) devices continue to grow exponentially.

AR/VR are cool to demonstrate but have offered little to organizations in terms of real revenue gain, and Autonomous Driving is going to face numerous uphill struggles toward regulatory adoption.

But the use of AI, trained using Machine Learning (ML) algorithms, on data at the Edge is easy to grasp in terms of immediate business benefit: insights generated, and immediate actions taken, where the data is produced, rather than having to rely on distant, centralized cloud resources. Nowhere is this more evident than in Manufacturing, where high-precision manufacturing and robotics require AI located on premises to ensure real-time responsiveness, while connected machines and sensors provide new insights into predictive maintenance and energy efficiency across disparate geographic locations in pursuit of improved operating profit.

However, the Edge is a continuum stretching from the IoT device layer, through the Access Edge “last mile” layer, to the Infrastructure Edge data center layer, with ML on aggregated data seamlessly picking up where work at the device leaves off. Ultimately, this provides the opportunity to improve scalability and performance by placing AI at the optimal location in the Edge topology.

And it is this AI-as-a-Service sitting at the network edge that represents a key monetization opportunity for Communication Service Providers (CSPs). It allows them to move away from selling undifferentiated basic bandwidth services, become relevant in the developing AI Application ecosystem, and drive new revenue. This is a time-sensitive endeavor as the major public cloud providers look to extend their reach in reaction to the “edge will eat the cloud” prediction (Gartner).

Edge Infrastructure is the domain of the CSPs who, as we have discussed, are leveraging their network infrastructure as a rich, functional platform for AI applications. Ownership of access networks and edge cloud infrastructure gives them a competitive advantage over public cloud providers, particularly in the 5G era. And without 5G there will be network problems, not only in providing connectivity for the billions of anticipated IoT devices but also in transmitting the huge volumes of data they will generate.

Out of 5G come Software-Defined Networking (SDN), designed to make networks more flexible and agile through Network Function Virtualization (NFV), and Mobile Edge Computing or Multi-Access Edge Computing (MEC), essentially a cloud-based IT service environment at the edge of the network.

A set of standardized compute resources is provided, both CPU and GPU, running cloud-native applications and orchestration to mimic the platform simplicity, API familiarity, and developer comfort of the cloud. But within the 5G networks, these resources reside on a playing field differentiated by location… a game the CSP can win.

So, with companies such as NVIDIA looking to Edge-located GPUs to support AR, VR, and Connected Gaming over this standardized 5G infrastructure, these resources, although not a direct use for AI as mentioned earlier, can be recaptured when idle as a powerful accelerator of AI training algorithms.

And back to the billions of anticipated IoT devices, such as mobile phones, whose internal compute resources are becoming increasingly powerful. They can now enable Federated Learning, a privacy-preserving mechanism that effectively leverages these decentralized compute resources to train ML models, coordinated through the other Edge-located ML resources.
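For readers unfamiliar with the mechanics, below is a minimal Federated Averaging (FedAvg) sketch: each simulated device takes a local training step on its private data, and a coordinator averages only the resulting model weights, so raw data never leaves the device. It is illustrative only; a production deployment would add device sampling, secure aggregation, and a real communication layer.

```python
# Minimal federated averaging (FedAvg) sketch with NumPy: devices train
# locally and share only model weights, never raw data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w: np.ndarray, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Each simulated device holds private data that never leaves it.
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.05 * rng.normal(size=50)
    devices.append((X, y))

w_global = np.zeros(2)
for _round in range(20):
    # Devices compute updates locally; the coordinator averages weights only.
    local_weights = [local_step(w_global, X, y) for X, y in devices]
    w_global = np.mean(local_weights, axis=0)

print(w_global)  # approaches [2.0, -1.0] without centralizing any raw data
```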

A complete, connected ecosystem hosting AI stacks for both the CSP and their clients/partners offers the opportunity to rethink business models and how to participate in value creation, value distribution, and value capture. Here, effective participation is the key to monetizing network infrastructure.

AI-Enablement is the use of the AI stack by the CSP for automated workload orchestration, the underpinning for provisioning and managing services and applications at the Edge.

This means the Edge itself becomes more intelligent, making it not only relevant for low-latency applications but also able to unlock highly intelligent and secure opportunities: data transmission efficiencies, traffic steering, zero-touch service management, and optimal workload placement (including Virtual Network Function, VNF, placement); a smart way to handle the right workload, on the right resource, for the right reason, whether that be cost, performance, security/compliance, routing, or even reliability.

AI will be critical to network automation and optimization, with real-time decisions needed to support traffic characterization, end-to-end quality of service, and in particular Dynamic Network Slicing, which allows CSPs to monetize their infrastructure by offering multiple service tiers at different price points. For example, a slice of the network handling floor robotics that rely on ultra-low latency may garner a higher price than a parallel slice for less time-sensitive edge compute.
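As a back-of-the-envelope illustration of that pricing logic, the sketch below maps each workload to the cheapest slice tier whose latency guarantee still satisfies it. The tier names, latency bounds, and prices are invented for the example.

```python
# Hypothetical sketch of tiered network slicing: map each edge workload to
# the cheapest slice whose latency guarantee meets its needs. All numbers
# and tier names are invented for illustration.
SLICE_TIERS = [
    # (tier name, guaranteed latency in ms, price per GB in $)
    ("ultra-low-latency", 1, 0.25),   # e.g. floor robotics
    ("low-latency", 10, 0.10),        # e.g. interactive analytics
    ("best-effort", 100, 0.02),       # e.g. batch telemetry uploads
]

def choose_slice(required_latency_ms: float) -> tuple:
    """Pick the cheapest tier that still satisfies the latency requirement."""
    feasible = [t for t in SLICE_TIERS if t[1] <= required_latency_ms]
    if not feasible:
        raise ValueError("no slice can meet this latency requirement")
    return min(feasible, key=lambda t: t[2])

print(choose_slice(1))    # robotics        -> ('ultra-low-latency', 1, 0.25)
print(choose_slice(50))   # sensor batches  -> ('low-latency', 10, 0.10)
```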

The DNA of AI at the Edge is now starting to form. Time will tell as to who will endure (through financial success) to pass theirs to a next generation where AI functionality is so completely decoupled and disseminated so broadly that it will seem to disappear altogether.

Want to hear more? Listen to Amelia Dalton’s podcast ‘fish fry’ from the EE Journal featuring Kevin Hannah, Director of Product Operations for Kazuhm, at the link below.

The Curious Case of the Critical Catalyst – Why Artificial Intelligence will be the Darling of the Edge


Kazuhm Wins 2019 NAB Show Product of the Year Award

Next-Generation Workload Processing Platform Recognized for Achievements and Innovation in IT Networking/Infrastructure and Security 

San Diego, CA – April 17, 2019—Kazuhm today announced that it received a Product of the Year award at the 2019 NAB Show. The awards program aims to recognize the most significant and promising new products and technologies showcased by exhibitors, and Kazuhm was specifically recognized in the IT Networking/Infrastructure and Security category.

Kazuhm is a workload processing platform that allows organizations to recapture existing IT resources and intelligently manage work across a fabric of desktops, data centers, cloud, and edge/Internet of Things (IoT). The intuitive user interface (UI) is easy to operate, helping to limit an organization’s dependence on IT support. Enterprise customers typically see benefits such as lower IT and cloud costs, enhanced security, improved performance, and reduced latency, by enabling the right work to be processed on the right resource for the right reason.

“Since our company’s inception, we’ve seen tremendous excitement and opportunity to apply our best-in-class solution to telecommunications, media and entertainment. These industries often require low-latency solutions for delivering entertainment services, or compute resource-heavy applications to support transcoding and rendering,” said Tim O’Neal, Kazuhm CEO. “Regardless of the use case, Kazuhm is well-equipped to meet companies’ growing needs in the space, as our secure, AI-enabled platform delivers optimal compute workload placement and processing across compute resources.”

Adds O’Neal, “We’re honored to be among the first class of technology providers to win an NAB Show Product of the Year award, and hope to harness the present momentum to serve this robust and evolving market.”

NAB Show Product of the Year award winners were selected by a panel of industry experts in 16 categories and announced at an awards ceremony and cocktail reception at the Westgate Las Vegas Resort on April 10. To be eligible for an award, nominated products and technologies needed to be on display at the 2019 NAB Show for the first time and available for delivery in calendar year 2019. Additional details can be found here.

“Nominees like Kazuhm are revolutionizing the way people experience media and entertainment,” said NAB Executive Vice President of Conventions and Business Operations, Chris Brown. “The 2019 NAB Show Product of the Year Awards highlight the best of what’s new at the premier launchpad for breakthroughs at the intersection of media, entertainment and technology.”

To learn more about Kazuhm, including its proprietary video transcoding application, which debuted at NAB this year, please visit www.Kazuhm.com.


Kazuhm Named a 2019 “Cool Company” by San Diego Venture Group

First-of-its-Kind Workload Processing Platform Joins 32 of the Fastest Growing, Most Exciting Startups in Southern California

San Diego, CA – April 2, 2019—Kazuhm, a next-generation workload processing platform, today announced that it has been recognized as one of only 33 “Cool Companies” for 2019 by San Diego Venture Group. Kazuhm was selected from more than 250 applicants.

San Diego Venture Group (SDVG) promotes the formation, funding, and development of innovative new ventures in the San Diego community. SDVG’s Cool Companies list highlights the fastest-growing, most exciting startups in Southern California.

“We are grateful for this recognition from San Diego Venture Group,” said Kazuhm CEO Tim O’Neal. “We are proud to call ourselves a member of the San Diego startup community, which we strongly feel is one of the most active, thriving and important startup communities in the United States today.”

Launched in October of 2018, Kazuhm is a commercial-grade distributed computing workload processing platform that empowers organizations to maximize all compute resources across desktop, server, and cloud, all the way to the Edge. The platform enables organizations to recapture and use all existing compute nodes to process containerized workloads, saving IT costs and enhancing performance and security.  Its desktop recapturing technology is used by organizations across telecom, healthcare, retail, financial services, higher education and more.

“We were happy to name Kazuhm a 2019 Cool Company,” said SDVG President Mike Krenn. “One of the reasons why we love doing the annual ‘Cool Companies’ list is because it shows how the extremely diverse San Diego tech ecosystem is now a hotbed for all kinds of innovation, in an array of key areas.”  

This news comes on the heels of Kazuhm’s recent announcement that it has joined the NVIDIA Inception Program. Inception nurtures dedicated and exceptional startups who are revolutionizing industries with advances in AI and data science.

Kazuhm was also recently named an official nominee in the first-ever NAB Show “Product of the Year” Awards. NAB Show is the world’s largest event focused on the intersection of technology, media, and entertainment. Kazuhm will be exhibiting at NAB (N2739 – The Startup Loft) from Saturday, April 6 through Thursday, April 11.

This year’s Cool Companies event will be on April 30, at the Belly Up Tavern in Solana Beach, and will give participants, like Kazuhm, an opportunity to meet with more than 60 venture capital firms and 20 local investors.


Kazuhm Joins NVIDIA Inception Program

Next-Generation Workload Processing Platform to Receive Industry-leading GPU Tools, Technology and Deep Learning Expertise to Catalyze Business Growth and Success

SAN DIEGO—March 5, 2019—Kazuhm™, the next-generation workload processing platform, today announced it has joined the NVIDIA Inception program, a virtual accelerator program that is designed to nurture startups during critical stages of product development, prototyping and deployment, to revolutionize industries with advancements in artificial intelligence (AI) and data science.

Kazuhm is a first-of-its-kind, commercial-grade distributed computing workload platform that empowers organizations to maximize all compute resources across desktop, server, and cloud,  all the way to the edge. Its technology enables enterprises across industries—including telecom, healthcare, retail, financial services, higher education and more—to efficiently recapture unused processing power to boost productivity, minimize unnecessary IT investment and improve security.

As a member of Inception, Kazuhm will strengthen its deep learning expertise through powerful graphics processing unit (GPU) tools, hardware grants and best-in-class training in neural-network machine learning applications, among other benefits.

“Through Kazuhm’s collective industry acumen and our proprietary research, we know the demand for compute resources is growing, and is primarily driven by increases across the Internet of Things (IoT)/edge, and AI and machine learning projects,” said Tim O’Neal, CEO of Kazuhm. “To meet this need head-on, we recognize the value in working alongside industry pioneers, like NVIDIA through its Inception program, to improve our insights and access to game-changing technology and resources. Ultimately, we know this will benefit our customers and our growth trajectory.”

Kazuhm recently announced the results of its 2019 IT Industry Outlook Report, which surveyed IT professionals’ current and anticipated workload processing and usage habits, sentiments and predictions for the year.

Kazuhm will be presenting a Tech Talk at NVIDIA’s GPU Technology Conference in Silicon Valley on March 18-21, 2019. Attendees are also encouraged to stop by Kazuhm’s booth during the show (booth #334).


Kazuhm Announces Results of 2019 IT Industry Outlook Report

Survey Analyzed IT Professionals’ Current and Anticipated Workload Processing and Usage Habits, Sentiments and Predictions for This Year

 Visit Kazuhm at Developer Week SF Bay Area Booth #617

SAN DIEGO – Feb. 19, 2019 – Kazuhm™, the next generation workload processing platform, today released the findings of its 2019 IT Industry Outlook report. More than 540 IT professionals—including Chief Information Officers (CIO), Chief Technology Officers (CTO), IT systems administrators, IT systems analysts, IT managers, IT directors, and purchasing managers—participated in the survey, which explored cloud and container usage; sentiment relative to security in the public cloud; and the percentage of on-premise hardware that operates in idle mode after-hours, among other topics.

Key takeaways derived from the report include:

Compute Resource Demand

  • The demand for compute resources is growing, driven primarily by increases to IoT/Edge (60 percent) and AI/ML projects (56 percent)
  • To meet this resource demand, 86 percent of organizations plan to expand use of the cloud; 75 percent plan to increase production work using containers; 27 percent plan to purchase new desktops to increase capacity; and 43 percent plan to purchase new servers to increase capacity

 Public Cloud

  • 47 percent of respondents said more than 50 percent of their production work is done using the public cloud
  • 86 percent expect their use of public cloud for production work to increase in 2019
  • Only 42 percent feel confident their work done in the public cloud is completely secure

Container Usage

  • 73 percent of respondents said that greater than 25 percent of their production work is containerized, and 75 percent expect their use of containers to increase in 2019

Idle Hardware

  • 80 percent said greater than 25 percent of their desktops and laptops are idle or powered off at night; 32 percent said that greater than 75 percent of their desktops or laptops are idle or powered off at night
  • Nearly three-quarters of respondents said that their in-house or co-located servers are greater than 25 percent idle at night

“Kazuhm was founded on the notion that we could help companies in any industry discover and exploit existing compute nodes within organizations. Underlying this mission is a desire to simplify and streamline the increasingly complex IT landscape,” said Tim O’Neal, CEO of Kazuhm. “Armed with the insights we gleaned from this survey, we not only have a better understanding of the challenges facing IT professionals today, but we can more intelligently engineer the Kazuhm platform to meet the changing needs of this audience tomorrow—whether it’s helping to prioritize a secure environment for workload processing or designing new solutions that complement cloud and containerized workloads.”

The full results of Kazuhm’s 2019 IT Industry Outlook report are available at https://www.kazuhm.com/2019-it-industry-opinion-survey-results.

The survey was commissioned through Qualtrics and ran from Nov. 28, 2018 through Dec. 12, 2018.


TechRadar Highlights the Future of Cloud Computing

What will the future of cloud computing look like as we enter 2019? Changes are afoot in the world of cloud computing, and journalists and editors are starting to take note. Cloud computing has been the darling of the media for the last several years, supported by the meteoric rise of services like AWS, Google Cloud, and Microsoft’s Azure. As this market begins to mature and the hype cycle levels off, the weaknesses of cloud offerings become more apparent, and complementary or alternative solutions will begin to take hold. Common issues with cloud computing include security, vendor lock-in, rising cloud costs, and poor performance. For example, according to Kazuhm’s recent survey of more than 500 IT professionals across sectors, while many plan to increase their use of the public cloud, only 42% feel their work done in the public cloud is completely secure. In 2019 customers will begin to demand solutions to these issues, which will open up opportunities for new companies and/or products to enter the market. What’s more, the advent of the cloud and cloud-native applications is driving changes in the demand for and availability of IT skills, from the system admin level all the way to the CIO.

In the linked article, Kazuhm CEO Tim O’Neal answers TechRadar’s questions and shares his vision of the future of cloud computing in 2019. Topics explored include how to maximize your compute resources, including cloud as well as on-premises servers and desktops; the top mistakes companies make when moving to the cloud; which workloads may not be well suited to the cloud; and the IT talent gap associated with the rise of the cloud. The article is a good read for those purchasing IT resources, planning 2019 cloud migrations, or simply staying current on IT career planning.


Analytics Ventures Launches Kazuhm™ for Next Generation Workload Optimization

The First-of-its-Kind Commercial-Grade Platform Recaptures Desktop, Server and Cloud Compute Resources to Optimize Operations and Improve Security at a Fraction of the Cost

 Kazuhm Builds Community Around Cloud Alternative Resources at www.GetYourHeadOutOfTheCloud.com

SAN DIEGO – Oct. 16, 2018 – Analytics Ventures, a fund dedicated to creating and building venture companies that harness the power of artificial intelligence technologies, today announced a new addition to its venture portfolio: Kazuhm™, a commercial-grade distributed computing workload platform that empowers organizations to maximize all compute resources—from desktop, to server, to cloud—thereby saving time and money, while improving security. Currently compatible across Windows and Linux operating systems, Kazuhm offers simple, central installation to securely and efficiently recapture unused processing power to boost productivity and minimize unnecessary IT investment.

The IT industry has always been driven by a need to innovate. While that approach has undoubtedly served the market well, in recent years it has also led many organizations to overlook the opportunity right under their noses: their on-premise hardware. This has created a scenario whereby systems and technology managers, and their C-suite counterparts, have critically underutilized hardware. Rather than fully tapping into available on-premise resources, they have invested time and money in cloud computing and storage, frequently compromising data and system security.

“Today, cloud costs are soaring, demand for high-performance computing (HPC) is increasing and there are more security risks than ever,” said Tim O’Neal, CEO of Kazuhm. “Against this backdrop, tensions between IT teams and other members within their organization who depend on workloads have reached an all-time high due to processing bottlenecks. The answer is not always investing in more, newer cloud resources. The answer is using available compute resources more efficiently. The answer is Kazuhm.”

Kazuhm helps companies across sectors discover and exploit existing compute nodes within their organizations. Specifically, Kazuhm addresses the following three problems that most commonly plague IT personnel today:

  • Out-of-control Cloud Costs—According to Gartner and Goldman Sachs, cloud computing costs are expected to reach $72 billion in 2019. Kazuhm allows users to harness their own internal compute resources to process workloads instead of sending them to the cloud, thereby reducing cloud costs.
  • Heightened Demand for HPC—Thanks to the advent of artificial intelligence and the expansion of the Internet of Things (IoT), the HPC server market is forecast to grow from $12.4 billion this year to nearly $20 billion in 2022. Kazuhm helps organizations minimize incremental spend by enabling them to make the most of their on-premise compute assets to efficiently process HPC workloads.
  • Increased Security Risk in the Cloud—Cybersecurity Insiders reports that 91 percent of companies are concerned about cloud security, and 18 percent have identified a cloud security incident in the last 12 months. Because Kazuhm enables data to be processed on an organization’s own desktops and servers within their own facility, data and valuable IP never leave the premises and are therefore more secure.

“Distributed computing in heterogeneous environments, and leveraging unused computer assets isn’t necessarily new, but the technology has finally caught up and is well within the means of implementation thanks to commercial organizations like Kazuhm,” said Reed Anderson, CTO, True Digital. “We’re currently using Kazuhm for transcoding, which allows us, as a mobile provider, to personalize screen resolution and experience for consumers based on whether they’re using Wi-Fi, mobile or broadband. This process can take a lot of time, so being able to tap the unused processing power of idle machines across our physical assets and data centers is a significant competitive advantage. It allows us to save time and money over alternative solutions such as a dedicated system or the cloud.”

The fully-connected compute ecosystem activated by Kazuhm puts control back in the hands of IT managers and leaders. Easily and centrally installed, it allows organizations worldwide to process workloads at a fraction of the cost, with increased agility and higher security.

“In the next two years, enterprise organizations are expected to move half their public cloud applications to a private cloud or non-cloud environment, meaning there is an inherent and immediate gap in the market that Kazuhm can fill seamlessly,” said Navid Alipour, managing partner, Analytics Ventures. “We’re confident that Kazuhm, as our most recent venture launch, will change the way workloads are processed forever. We believe we’re on the cusp of something monumental, as we help IT leaders ‘take charge’ of their compute resources in unprecedented ways.”

Kazuhm can be implemented in an easy, centralized manner across any number and type of corporate nodes in an organization’s on-premise hardware infrastructure. Organizations interested in using Kazuhm may visit https://www.kazuhm.com to learn more.

Founded on the belief that the use of cloud computing has gotten out of control, Kazuhm considers it its mission to create awareness in the IT community that it is time to become smarter about how the cloud is used as a function of the entire compute ecosystem. As such, Kazuhm has also launched an industry platform called GetYourHeadOutofTheCloud.com to help educate and bring together IT professionals for healthy discussions about all topics related to a more balanced utilization of cloud computing. Users can join the “Cloud Busters,” a group of IT experts becoming active for the cause.

About Kazuhm

Kazuhm is a next generation workload optimization platform that empowers companies to maximize all compute resources from desktop to server, to cloud. Founded with a belief that organizations have become too dependent on cloud computing, while disregarding the untapped resources that already exist within their organizations today, Kazuhm securely and efficiently recaptures unused processing power to boost productivity and minimize unnecessary IT investment. As the first fully-connected, commercial grade compute ecosystem, it allows organizations worldwide to process workloads at a fraction of the cost. Global IT managers and leaders have adopted Kazuhm’s easy, centralized install process that puts resource control back into their hands. Learn more at www.kazuhm.com.

About Analytics Ventures

Analytics Ventures is a venture studio fund providing end-to-end infrastructure to ideate, form, launch, and fund brand-new companies in artificial intelligence (AI). With its own in-house AI lab, technology, back-office, and marketing setup, Analytics Ventures takes companies from formation to public launch in as little as six months. Supported by a large network of corporate and academic partnerships, as well as other venture funds, Analytics Ventures has launched leading AI ventures ranging from financial services, to healthcare, advertising and more. To learn more about Analytics Ventures, visit www.analytics-ventures.com.
