OpenDrives Partners with Kazuhm to Power High-Performance, Low-Cost Storage and Compute

March 24, 2021 — OpenDrives, the global provider of enterprise-grade, hyper-scalable network-attached storage (NAS) solutions, today announced that it has partnered with Kazuhm, the distributed computing technology leader whose platform enables IoT and other enterprise data to be processed with ultra-low latency and at low cost. Joining the OpenDrives containerization marketplace, also announced today as a new feature of OpenDrives’ centralized management software platform (Atlas), Kazuhm allows end-users to activate and integrate compute nodes on their network regardless of operating system or type of device.

“At OpenDrives, we’ve taken a storage-first approach to compute by bringing containerized applications and automation directly into the storage infrastructure. The introduction of ‘pods’ and ‘recipes’ simplifies deployment and management of these containerized applications, thus reducing complexity and increasing performance,” said Sean Lee, Chief Product and Strategy Officer at OpenDrives. “By partnering with best-in-class distributed compute technology leaders like Kazuhm, who share our commitment to ultra-low latency and low cost, enterprise-scale organizations can run multiple workloads in isolated virtual environments as close as possible to the data to optimize performance.”

Kazuhm runs secondary workloads that never interfere with a node’s primary function, enabling secure, low-cost, low-latency compute across a variety of enterprise applications. Designed to run on OpenDrives storage devices, Kazuhm allows OpenDrives’ combined storage hardware and compute offering to leverage any customer IT asset, providing a flexible storage-and-compute solution in contrast to rigid on-premises private cloud offerings that require dedicated hardware.

“We are extremely pleased to have the opportunity to be the first OpenDrives partner to deliver our distributed computing capabilities as an integrated feature of the Atlas platform,” said Andreas Roell, CEO of Kazuhm. “OpenDrives’ containerization marketplace enables us to expand our reach to a new set of customers that have not yet experienced the cost and performance benefits of running containerized workloads on their own on-premise devices.”

Kazuhm is available through the OpenDrives containerization marketplace, a rapidly growing ecosystem that, through a robust API, allows customers to load recipes onto the compute module without interfering with storage performance, further powering OpenDrives’ scale-up, scale-out infrastructure. OpenDrives’ software dashboard, also included in the latest Atlas software release, unlocks deep insights into all analytics running on the OpenDrives system, in real time, through an easy-to-use graphical interface.

Additional applications in the OpenDrives containerization marketplace include DaVinci Resolve Project Server, as well as common agents for services like Grafana Analytics and Splunk.

To learn more about OpenDrives’ latest solutions or to see how you can join the OpenDrives containerization marketplace, email [email protected] or visit www.opendrives.com. To learn more about Kazuhm, visit www.kazuhm.com.

About OpenDrives

OpenDrives is a global provider of enterprise-grade, hyper-scalable network-attached-storage (NAS) solutions. Founded in 2011 by media and entertainment post-production professionals, OpenDrives is built for the most demanding workflows, from Hollywood to healthcare, and businesses large and small. OpenDrives delivers the highest performing solutions to match individual performance needs, even for the most robust, complex and mission-critical projects, on-premises and into the cloud. OpenDrives is headquartered in Los Angeles, CA. To learn more about OpenDrives, visit www.opendrives.com.


Kazuhm Welcomes Distributed Computing Pioneer Dr. Larry Smarr as Technology Evangelist

UCSD Distinguished Professor Emeritus Brings Decades of Experience and Technical Expertise

August 19, 2020—Kazuhm, a leader in technology and tools for maximizing IT efficiency, today announced that Dr. Larry Smarr will provide support to the Kazuhm leadership team as Technology Evangelist. With over 40 years of experience driving information technology innovation in academia, government agencies, and private industry, Dr. Smarr brings a practical vision for the ever-broadening use of distributed computing. 

“We are thrilled to have Dr. Smarr with his unmatched knowledge and experience contributing to the Kazuhm team’s foundational expertise and outreach,” said Andreas Roell, Kazuhm CEO.  “Containerization, when applied to distributed computing, represents a paradigm shift in the way organizations process their workloads and store data, and there is no one better for Kazuhm to partner with than Dr. Smarr to deliver this message to the IT community.”

Dr. Smarr is Distinguished Professor Emeritus at the University of California, San Diego. From 2000-2020, he served as the founding Director of the California Institute for Telecommunications and Information Technology (Calit2), a UC San Diego/UC Irvine partnership, and held the Harry E. Gruber professorship in UCSD’s Department of Computer Science and Engineering. Before that (1985-2000) he was the founding director of the National Center for Supercomputing Applications (NCSA) at UIUC. He received his Physics Ph.D. in 1975 from the University of Texas at Austin and did postdoctoral research at Princeton, Harvard, and Yale, before becoming a Professor of Physics and of Astronomy at UIUC in 1979.

Additionally, Dr. Smarr has supported government agencies at the state and federal levels, including eight years as a member of the Advisory Committee to the NIH Director, serving three directors. He served on the NASA Advisory Council under four NASA Administrators, chaired the NASA Information Technology Infrastructure Committee, and served on the NSF Advisory Committee on Cyberinfrastructure. He also served on Governor Schwarzenegger’s California Broadband Taskforce in 2007 and currently serves on the Advisory Board to the Director of the Lawrence Berkeley National Laboratory. He continues to provide national leadership in advanced cyberinfrastructure (CI), currently serving as Principal Investigator on three NSF CI research grants: the Pacific Research Platform, Cognitive Hardware and Software Ecosystem Community Infrastructure, and Toward a National Research Platform.

Among numerous honors and awards, Dr. Smarr is a member of the National Academy of Engineering, as well as a Fellow of the American Physical Society, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences. In 2006 he received the IEEE Computer Society Tsutomu Kanai Award for his lifetime achievements in distributed computing systems, and in 2014 the Golden Goose Award.

“Capital investments in on-premise computers are often underutilized in many organizations because of the lack of a secure and flexible software infrastructure that can make full use of the capability of today’s distributed systems,” said Dr. Smarr.  “I am looking forward to partnering with Kazuhm to help the IT community across sectors more fully utilize the hardware assets they have paid for, as well as to extend to external cloud resources, to meet the growing demand for compute capacity.”

Dr. Smarr gives frequent keynote addresses at professional conferences and to popular audiences. His views have been quoted in Science, Nature, the New York Times, Wall Street Journal, Time, Newsweek, Atlantic, New Yorker, Wired, MIT Technology Review, Fortune, Business Week, CBS, and the BBC.



Kazuhm Launches Industry’s First SaaS-Enabled Distributed Computing Platform

New feature provides IT departments with self-service installation for over 100,000 applications

April 29, 2020 — Kazuhm today announced the launch of its “bring-your-own-app” functionality, which allows users to independently run any application that has a Docker Compose file within the Kazuhm distributed computing environment. This new feature enables Kazuhm users to choose from more than 100,000 containerized applications in the Docker Hub library and quickly and easily deploy those apps across their Kazuhm-enabled corporate assets, including servers, desktops, laptops, and multi-cloud resources.

Kazuhm is the only distributed computing platform available as an enterprise-grade product allowing organizations to take full advantage of the computing power they already own across all their devices. With the latest Kazuhm release, customers can upload and edit Docker Compose files, then configure and deploy applications with just a few clicks. This eliminates the need for extensive command-line capabilities and facilitates the use of many popular applications within the Kazuhm platform. Among the 100,000+ applications available as Docker Compose files are MySQL, WordPress, Elastic Stack (ELK), Redis, Cassandra, and many more.
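
For readers unfamiliar with Compose-based deployment, the sketch below shows roughly what launching a Compose-defined application involves when driven from Python with standard Docker tooling. It is purely illustrative: the file name and project name are placeholder assumptions, and it stands in for, rather than reproduces, Kazuhm’s own drag-and-drop workflow.

```python
# Illustrative sketch only: deploying an application described by a Docker
# Compose file with standard Docker tooling. Kazuhm's platform abstracts these
# steps behind its UI; this simply shows the underlying Compose workflow.
import pathlib
import subprocess


def deploy_compose_app(compose_file: str, project_name: str) -> None:
    """Validate a Compose file, then start its services in the background."""
    path = pathlib.Path(compose_file)
    if not path.is_file():
        raise FileNotFoundError(f"Compose file not found: {compose_file}")

    # 'docker compose config --quiet' parses the file and fails on syntax errors.
    subprocess.run(
        ["docker", "compose", "-f", str(path), "config", "--quiet"],
        check=True,
    )

    # Start the application's services detached, under a named project.
    subprocess.run(
        ["docker", "compose", "-f", str(path), "-p", project_name, "up", "-d"],
        check=True,
    )


if __name__ == "__main__":
    # Hypothetical example: a stock Redis Compose file pulled from Docker Hub.
    deploy_compose_app("docker-compose.yml", project_name="redis-demo")
```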

“Since our inception, our vision has been to provide corporations with the industry’s first SaaS-enabled distributed compute platform. With the launch of our bring-your-own-app functionality, we have now reached this significant milestone,” says Gregg Holsapple, vice president of product at Kazuhm. “Kazuhm allows corporations of any size to build a powerful compute fabric using resources they already own, lowering IT costs, and improving application performance. And the beauty behind it is that our drag-and-drop approach does not require any command-line capabilities, making it quick to establish and easy to maintain while saving hours of IT staff time.”

Also available in the current release are expanded scheduling and control features, deeper insights into host CPU and memory usage by deployed applications, and more visibility and automation for Docker installation on Windows and Linux devices.


Kazuhm COVID-19 Response

In related news, Kazuhm announced on March 23 that its AI-driven distributed computing solution will be provided free to any organization fighting the coronavirus pandemic. University and private-sector research labs, test kit manufacturers, companies providing free or reduced-cost video conferencing solutions, and companies that have quickly ramped up manufacturing of items such as hand sanitizer, gloves, and face masks can all take advantage of Kazuhm to increase their computing capacity. This offer is available to any organization across the globe that meets the criteria of contributing to the fight against COVID-19. Visit https://www.kazuhm.com/covid-19-response/ for more information on this offer.


Kazuhm Expands AI-Driven User Insights and Controls with Latest Distributed Computing Solution

Enterprise-grade distributed computing solution enables IT cost savings and intelligent compute resource management

SAN DIEGO, Ca.—April 3, 2020—Kazuhm today announced the availability of advanced AI-driven insights and controls in the latest release of its distributed computing platform. Newly added functionality uses artificial intelligence algorithms to forecast available computing capacity across corporate assets, including servers, desktops, laptops, and multi-cloud resources. Additionally, users can now more precisely control when any given compute resource is used, based on configurable exclusion windows and usage limits.
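
As a rough illustration of how an exclusion window and a usage limit might gate placement decisions, here is a minimal, hypothetical sketch; the policy fields, thresholds, and logic are assumptions made for the example, not Kazuhm’s actual scheduler.

```python
# Hypothetical sketch of the exclusion-window / usage-limit idea described
# above. Field names, thresholds, and logic are illustrative assumptions,
# not Kazuhm's actual scheduler.
from dataclasses import dataclass
from datetime import datetime, time


@dataclass
class HostPolicy:
    exclusion_start: time   # start of the window when the device is off-limits
    exclusion_end: time     # end of that window
    max_cpu_percent: float  # cap on CPU that secondary workloads may consume


def can_schedule(policy: HostPolicy, now: datetime, current_cpu_percent: float) -> bool:
    """Return True if a secondary workload may be placed on this host right now."""
    t = now.time()
    if policy.exclusion_start <= policy.exclusion_end:
        in_window = policy.exclusion_start <= t <= policy.exclusion_end
    else:  # window wraps past midnight, e.g. 22:00-06:00
        in_window = t >= policy.exclusion_start or t <= policy.exclusion_end
    if in_window:
        return False  # leave the machine to its primary users
    return current_cpu_percent < policy.max_cpu_percent


# Example: keep an office desktop free from 9:00 to 17:00, and only place work
# outside that window while its CPU usage is below 60%.
office_desktop = HostPolicy(time(9, 0), time(17, 0), max_cpu_percent=60.0)
print(can_schedule(office_desktop, datetime(2020, 4, 3, 20, 30), current_cpu_percent=35.0))  # True
```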

The Kazuhm platform represents the first time distributed computing is available as an enterprise-grade product, enabling organizations to take full advantage of the computing power they already own across all their devices, from desktops, laptops, and tablets to servers and multi-cloud environments. Kazuhm allows customers to quickly and easily unify their resources and run enterprise applications faster, more securely, and at a lower cost.

“Kazuhm’s distributed compute software helps True run transcoding on compute assets we already own and save hundreds of thousands of dollars on compute and hardware costs,” said Reed Anderson, CTO, True Corporation. “We are excited about the artificial intelligence-driven insights we will get from this latest release, enabling even more optimization of our resources and therefore cost savings.”

Available to customers in the latest product release, Kazuhm now offers the following features:

  • Control features include the ability to set exclusion windows for each Kazuhm-enabled device on your network, pause and resume work on those devices, add additional resources to an existing host group, and manage storage from within the Kazuhm platform.

Kazuhm Scheduler

  • Monitoring features include a notification center that displays timely and critical information about your Kazuhm-enabled resources, information on Kazuhm-specific CPU usage across devices, and cloud resource status.

Kazuhm Notifications Center

  • Usability features include a newly streamlined Windows installation process and Google Cloud Platform provisioning capability.

Kazuhm Windows Installer

Kazuhm is applicable across a wide range of end user devices, processor platforms, and operating systems including Linux, MacOS, and Windows. Companies interested in containing their IT costs while improving compute capacity and performance can request a free trial of the Kazuhm platform at https://www.kazuhm.com/.

Kazuhm COVID-19 Response

In related news, Kazuhm announced on March 23 that its AI-driven distributed computing solution will be provided free to any organization fighting the coronavirus pandemic. University and private-sector research labs, test kit manufacturers, companies providing free or reduced-cost video conferencing solutions, and companies that have quickly ramped up manufacturing of items such as hand sanitizer, gloves, and face masks can all take advantage of Kazuhm to increase their computing capacity. This offer is available to any organization across the globe that meets the criteria of contributing to the fight against COVID-19. Visit https://www.kazuhm.com/covid-19-response/ for more information on this offer.


Kazuhm Provides Free Compute Capacity to Organizations Fighting the COVID-19 Pandemic

Enterprise-grade distributed computing platform enables immediate increase in compute capacity and storage at no cost

SAN DIEGO—March 23, 2020—Kazuhm today announced that its AI-driven distributed computing solution will be provided free to any organization fighting the coronavirus pandemic. University and private-sector research labs, test kit manufacturers, companies providing free or reduced-cost video conferencing solutions, and companies that have quickly ramped up manufacturing of items such as hand sanitizer, gloves, and face masks can all take advantage of Kazuhm to increase their computing capacity. This offer is available to any organization across the globe that meets the criteria of contributing to the fight against COVID-19.

“COVID-19 is bringing people and organizations around the world together with a singular goal of beating the pandemic and saving lives,” said Rick Valencia, interim CEO of Kazuhm. “The shift in resources, the changeover in manufacturing lines, and the ramp-up of research labs all take immense application processing power, and Kazuhm is dedicated to ensuring these teams get that processing power as easily, efficiently, and securely as possible.”

Universities, research labs, non-profits, hospitals, and enterprises across sectors are struggling to meet the sudden demand for everything from hand sanitizer to respirators to online learning and meeting solutions. The Kazuhm platform can help these organizations quickly and easily unify the compute resources they already own to maximize distributed storage and application processing power.

In many organizations, compute resources such as desktops, laptops, and servers go unused approximately 70% of the time. Kazuhm represents the first time distributed computing is available as an enterprise-grade product, enabling organizations to take full advantage of the computing power they already own across all their devices, from Linux, MacOS, and Windows desktops, laptops, and tablets to servers and multi-cloud environments. Industry-proven in applications for genomics research, data analytics, and image and video processing, Kazuhm allows customers to quickly and easily unify their resources and run any containerized application faster, more securely, and at a lower cost. A user-friendly interface and integrated dashboards enable simple setup, AI-driven insights, and complete control.

Companies interested in containing their IT costs as well as improving compute capacity and performance while fighting the COVID-19 pandemic can sign up for this free offer at https://www.kazuhm.com/covid-19-response/.


Rick Valencia Joins Analytics Ventures as Operating Partner

Former Qualcomm Executive Brings Vast Experience in Driving Operational Growth to Kazuhm

SAN DIEGO—Dec. 18, 2019—Analytics Ventures, a fund dedicated to creating and building venture companies that harness the power of artificial intelligence (AI), announced today that Rick Valencia has joined the Analytics Ventures leadership team, taking on the role of operating partner. Mr. Valencia’s primary function will be to help Analytics Ventures-backed companies transition from successful startups to independent, fast-growth technology companies.

“Rick Valencia is an astute investor, entrepreneur and diligent operations executive capable of rapidly scaling ventures with exceptional technology and early traction,” said Navid Alipour, managing partner at Analytics Ventures. “His technical and operational acumen in combination with his vast network of business and technology relationships will be key to the future success of our ventures.”

Before joining Analytics Ventures, Mr. Valencia was an SVP at Qualcomm and served as President of Qualcomm Life, Inc. after spearheading its formation in 2012. As President of Qualcomm Life, Mr. Valencia was also responsible for overseeing Qualcomm’s healthcare venture funds, dRx Capital and Qualcomm Life Fund. Prior to founding Qualcomm Life, he founded ProfitLine, Inc., a telecommunications service management provider, and served as its Chief Executive Officer from 1992 until the sale of the company in 2009. He also serves on the Board of Directors of Tandem Diabetes Care (NASDAQ: TNDM) and is the Executive Chairman of TrekIT Health.

“Having successfully launched multiple AI companies over the past two years with their venture studio model, I feel that Analytics Ventures has a proven approach to leverage the power of AI for company formation across multiple industry verticals,” said Mr. Valencia. “I am excited to join this team of visionary business leaders and exceptional artificial intelligence scientists and look forward to being a catalyst for operational excellence and growth for our venture companies.”

Mr. Valencia’s initial focus as operating partner at Analytics Ventures will be to assist portfolio company Kazuhm in commercializing its innovative hybrid compute offering by assuming the role of executive chairman and interim CEO. Kazuhm is an IT technology company enabling next-generation hybrid computing; it has built an enterprise-grade compute platform that allows application and hardware providers to let their users process more data faster, more securely, and at a lower overall cost. It does this by intelligently unifying a company’s existing, yet underutilized, enterprise compute resources, including desktops, servers, and cloud. Mr. Valencia will take on this role with the objective of building out and scaling Kazuhm’s operational framework to address growing demand from enterprise customers.

About Analytics Ventures

Analytics Ventures is a venture studio providing front-to-end infrastructure to ideate, form, launch, and fund brand-new companies in artificial intelligence. With its own in-house AI lab, technology, back-office, and marketing setup, Analytics Ventures takes companies from formation to public launch in as little as six months. Winner of the Awards.AI Venture Capital Firm of the Year award two years in a row, the fund is supported by a large ecosystem of corporate and academic partnerships, as well as other venture funds. To learn more about Analytics Ventures, visit www.analyticsventures.com.


Kazuhm Expands Support for Microsoft Azure and Windows for Distributed Computing Platform

Containerized workloads run seamlessly across Windows Desktops, Servers, and Azure Cloud

SAN DIEGO, Ca.—October 1, 2019—Kazuhm, a container-based distributed computing platform, today announced expanded support for container-based distributed computing across Microsoft Windows desktops and servers, as well as Microsoft Azure cloud. Using the Kazuhm platform, customers can now connect all their Microsoft-based resources into a powerful compute fabric able to process containerized workloads, from multimedia applications such as transcoding to data science applications such as Apache Spark.

As of August 2019, Microsoft Windows held more than 78% of the desktop OS market, according to Statcounter. However, most container management solutions do not address this massive pool of compute resources in distributed computing environments—specifically, they do not allow users to run workloads on desktops. Furthermore, Microsoft reported 41% growth in the Azure commercial cloud business in the first three months of 2019. Kazuhm has extended its support for distributed workload processing on Microsoft servers and desktops to include Azure cloud, enabling unified containerized workload processing from desktop to server to cloud.

“Digital transformation means not only the ability to incorporate cloud-based compute resources, but the ability to optimize all the resources of an enterprise, whether those resources are desktops, servers, or even edge devices,” said Tim O’Neal, Kazuhm co-founder and CEO. “Microsoft’s dominance in the enterprise is extending from Windows desktops and servers towards Azure cloud services, and Kazuhm allows enterprise customers to take advantage of any part of their portfolio of Microsoft assets to process workloads in the most efficient way possible.”

For more information on Kazuhm support for enterprise Microsoft users, see https://signup.kazuhm.com/ms.


Cloud today gone tomorrow?

by Kevin Hannah, Director of Product Operations, Kazuhm

The Changing Definition of “Cloud”

The message of achieving IT nirvana by moving to the cloud continued to ring loud throughout 2018. But in the face of practical realities that included overrunning budgets [1], security concerns [2], performance issues due to network latency [3], and an ever-increasing skills gap, the emphasis on public cloud shifted to hybrid cloud, with organizations encouraged to take advantage of both public and private deployments and 80% “repatriating workloads back to on-premise systems” [4]. The public cloud providers have been forced to embrace this fact, as evidenced by Amazon announcing Outposts to bring its hardware into customer data centers, more recently followed by Google with Anthos, and by the further recognition that “some customers have certain workloads that will likely need to remain on-premises for several years” [5].

The number of “cloud” options has continued to increase, there is no one-size-fits-all, and so what we were really talking about at the end of last year was any variant on xyz cloud (public, private, multi, and hybrid).

But wait. The “fog” is rolling in. Or, as Gartner would say, “the Edge will eat the Cloud” [6]. The future tsunami inherent in Edge and Internet-of-Things (IoT) deployment, which lies behind both these statements, is driving organizations away from a single-threaded focus on “cloud” and requires another rethinking of our definitions. Add the ability to run workloads on desktops and the truly disparate constituent parts of this ever-expanding compute continuum, and xyz cloud just doesn’t cut it anymore.

Adding a version number, e.g. Cloud 2.0, is lackluster. And although the use of “3rd platform” by IDC [7] builds on an evolution from mainframe/greenscreen, through client/server, to cloud/browser, and comes somewhat closer, I see it as muddying the waters by weaving in social business and big data analytics that are not intrinsically part of a compute continuum.

Is it Cloud, is it Edge, or is it both? I believe we need new terminology, best characterized as a Next Generation Grid of heterogeneous, connected compute resources.


Containers as the “Life Blood” of Digital Transformation need a Heart

Despite the hybrid/multi-cloud push in 2018 and the lauded growth rates in spend and adoption, the reality is somewhat different and “the so-called rush to the cloud is not, at present, much of a stampede, at all”; by 2021 only 15% of IT budgets will be going to the (public) cloud [8].

Cloud this year is “still only used for around 10-20% of applications and workloads”, according to 451 Research [9], and this doesn’t even differentiate between production and non-production.

The drip became a trickle in 2018, but to reach flood stage will require the ability to have workloads move freely across the entire compute continuum, from desktop, to legacy server, to private cloud, to public cloud, to the Edge and the IoT beyond. In other words, Containers. So it is not a surprise that Forrester predicts “2019 will be the year that enterprises widely adopt container platforms as they become a key component in digital transformation initiatives” [10]. A recent survey of IT professionals conducted by Kazuhm supports this, with 75% of respondents predicting they would increase their use of containers in 2019.

However, it is not just a case of organizations simply rolling out containerized application workloads. It matters that the right workloads are deployed onto the right resources for the right reasons (including cost, performance, security/compliance, and even more esoteric vectors such as “data gravity” [11], which roots processing to the location of the data). In other words, Optimal Workload Placement. We have already explored the breadth of resources, but adding a myriad of workload types and business reasons compounds the complexity exponentially.
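
To make the placement idea concrete, here is a toy sketch that scores candidate resources on a couple of the vectors mentioned above. Every attribute, weight, and number in it is invented for illustration; a real placement engine would weigh far more factors.

```python
# A toy, hypothetical illustration of "right workload, on the right resource,
# for the right reason". Attributes, weights, and numbers are invented for the
# example; real placement engines weigh far more factors (data gravity,
# jurisdiction, reliability, and so on).
from dataclasses import dataclass


@dataclass
class Resource:
    name: str
    cost_per_hour: float    # USD per hour of compute
    latency_ms: float       # round-trip latency to the data source
    meets_compliance: bool  # e.g. the data must stay on-premises


def placement_score(r: Resource, w_cost: float = 0.5, w_latency: float = 0.5) -> float:
    """Lower is better; non-compliant resources are ruled out entirely."""
    if not r.meets_compliance:
        return float("inf")
    return w_cost * r.cost_per_hour + w_latency * (r.latency_ms / 100.0)


candidates = [
    Resource("idle office desktop", cost_per_hour=0.00, latency_ms=2.0, meets_compliance=True),
    Resource("on-prem server", cost_per_hour=0.05, latency_ms=1.0, meets_compliance=True),
    Resource("public cloud VM", cost_per_hour=0.40, latency_ms=35.0, meets_compliance=False),
]

best = min(candidates, key=placement_score)
print(f"Place workload on: {best.name}")  # -> idle office desktop
```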

The use of AI and the cloud have seen parallel growth, the latter acting as an enabler by collecting, storing, processing, and analyzing the vast volumes of rich data necessary to feed AI algorithms. But again, AI at the Edge is set to take center stage as issues with latency, bandwidth, and persistent connectivity (reliability) compound the problems the cloud already has with privacy, security, and regulatory concerns, as well as economics. What were we saying about cloud being inadequate as an overarching term…

That aside, now is the time to apply AI inward, with 2019, I believe, marking the start of the evolution of AI-enabled Orchestration of container workloads, the pumping heart of digital transformation.

The future is AI-enabled Orchestration for Optimal Workload Placement on the Next Generation Grid.


You hear that Mr. Anderson?… that is the sound of inevitability…

My parting thought for this future: “AWS wants to rule the world” [12]. So did IBM, the biggest American tech company by revenue in 1998. Twenty years later, it is not even among the top 30 companies in the Fortune 500. The cycle of technology change continues to turn, but at an ever faster pace. Perhaps Cloud today, gone tomorrow?


References

1 Source: Cloud trends in 2019: Cost struggle, skills gap to continue https://searchitchannel.techtarget.com/feature/Cloud-trends-in-2019-Cost-struggle-skills-gap-to-continue

2 Source: What’s Coming for Cloud Security in 2019? https://www.meritalk.com/articles/whats-coming-for-cloud-security-in-2019/

3 Source: Cloud 2.0: What Does It Mean for Your Digital Strategy? https://www.forbes.com/sites/riverbed/2018/10/11/cloud-2-0-what-does-it-mean-for-your-digital-strategy/

4 Source: Businesses Moving from Public Cloud Due To Security, Says IDC Survey https://www.crn.com/businesses-moving-from-public-cloud-due-to-security-says-idc-survey

5 Source: Amazon Web Services Announces AWS Outposts https://www.businesswire.com/news/home/20181128005680/en/Amazon-Web-Services-Announces-AWS-Outposts

6 Source: Gartner, The Edge will Eat the Cloud https://www.delltechnologies.com/en-us/perspectives/the-edge-will-eat-the-cloud-a-gartner-report/

7 Source: IDC https://www.idc.com/promo/thirdplatform

8 Source: ‘Big four’ set for assault on cloud market https://techhq.com/2018/11/big-four-set-for-assault-on-cloud-market/

9 Source: Sky’s the limit in global race to adopt cloud https://www.raconteur.net/technology/skys-the-limit-in-global-race-to-adopt-cloud

10 Source: Predictions 2019: What to Expect in the Cloud/Container World https://www.eweek.com/development/predictions-2019-what-to-expect-in-the-cloud-container-world

11 Source: Defying data gravity: How can organizations escape cloud vendor lock-in? https://www.cloudcomputing-news.net/news/2018/nov/23/defying-data-gravity-how-can-organisations-escape-cloud-vendor-lock-/

12 Source: AWS wants to rule the world https://techcrunch.com/2018/12/02/aws-wants-to-rule-the-world/


The Triple DNA Helix of AI at the Edge

Kevin Hannah, Director of Product Operations for Kazuhm, explores why artificial intelligence should take center stage at the edge, and why our ability to process the tsunami of information coming at us from 5G, IoT, and Big Data will depend on how successfully artificial intelligence is deployed there, in the blog below and in Amelia Dalton’s ‘fish fry’ podcast from the EE Journal.

As Neo was told, “that is the sound of inevitability”, and so it is for organizations when it comes to both AI and the Edge. But inevitable as it is, if we are to see the delivery of tangible business value rather than just continuing to read articles espousing lofty promises of what will be, we need to understand the three complementary, entwined strands that make AI at the Edge both possible and, more importantly, financially viable.

AI Applications are the obvious end-user manifestation of AI at the Edge. But why focus on AI rather than one, or many, of the other technology darlings such as AR, VR, and Autonomous Driving? All are perceived to deliver value at the Edge based on their need for low-latency performance, reduced movement of data (whether for bandwidth reduction or for compliance with jurisdiction and sovereignty requirements), survivability, and reliability.

The business case for AI is simply an extension of the tidal wave of Business Intelligence and Analytics associated with all things Big Data. And that is the key: the massive data volumes generated by next-generation connected Internet of Things (IoT) devices continue to grow exponentially.

AR/VR are cool to demonstrate but have offered little to organizations in terms of real revenue gain, and Autonomous Driving is going to face numerous uphill struggles with regulatory adoption.

But the use of AI, trained using Machine Learning (ML) algorithms, on data at the Edge is easy to grasp in terms of immediate business benefit: insights generated, and immediate actions taken, where the data is produced rather than having to rely on distant, centralized cloud resources. This is nowhere more evident than in manufacturing, where high-precision manufacturing and robotics require AI located on premises to ensure real-time responsiveness, while connected machines and sensors provide new insights into predictive maintenance and energy efficiency across disparate geographic locations in pursuit of improving operating profit.

However, the Edge is a continuum stretching from the IoT device layer, through the Access Edge “last mile” layer, to the Infrastructure Edge data center layer, with ML on aggregated data seamlessly picking up where work at the device leaves off. Ultimately, this provides the opportunity to improve scalability and performance by placing AI at the optimal location in the Edge topology.

And it is this AI-as-a-Service sitting at the network edge that represents a key monetization opportunity for Communication Service Providers (CSPs).  It allows them to move away from selling undifferentiated basic bandwidth services, become relevant in the developing AI Application ecosystem, and drive new revenue. This is a time-sensitive endeavor as the major public cloud providers look to extend their reach in reaction to the “edge will eat the cloud” (Gartner).

Edge Infrastructure is the domain of the CSPs who, as we have discussed, are leveraging their network infrastructure as a rich, functional platform for AI applications. Ownership of access networks and edge cloud infrastructure gives them a competitive advantage over public cloud providers, particularly in the 5G era. And without 5G there will be network problems, not only in providing connectivity for the billions of anticipated IoT devices but also in transmitting the huge volumes of data that will be generated.

Out of 5G are born software-defined networking (SDN), designed to make networks more flexible and agile through Network Function Virtualization (NFV), and Mobile Edge Computing or Multi-Access Edge Computing (MEC), in the form of what is essentially a cloud-based IT service environment at the edge of the network.

A set of standardized compute resources are provided, both CPU and GPU, running cloud native applications and orchestration to mimic the platform simplicity, API familiarity, and developer comfort of the cloud. But within the 5G networks, these resources reside on a playing field differentiated by location… a game the CSP can win.

So, with companies such as NVIDIA looking to Edge-located GPUs to support AR, VR, and Connected Gaming over this standardized 5G infrastructure (uses that, as mentioned earlier, are not directly AI), these resources can be recaptured when idle as powerful accelerators for AI training algorithms.

And back to the billions of anticipated IoT devices, such as mobile phones, whose internal compute resources are becoming increasingly powerful. These devices can now enable Federated Learning, a privacy-preserving mechanism that effectively leverages decentralized compute resources to train ML models, coordinated through the other Edge-located ML resources.
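
For readers who have not met the technique, the sketch below is a minimal, generic illustration of federated averaging: each device fits a model on its own data, and only model parameters travel back to a coordinator for combination. It is not tied to any particular product, and the linear-regression setup is chosen purely to keep the example small.

```python
# A minimal, generic sketch of federated averaging (FedAvg): each device trains
# on its own data locally, and only model parameters, never the raw data, are
# sent back to be combined. Purely illustrative and not tied to any product.
import numpy as np


def local_update(weights, x, y, lr=0.1, epochs=5):
    """One device's pass: plain gradient descent for a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w


def federated_average(device_weights, device_sizes):
    """Coordinator step: average models, weighted by each device's data size."""
    total = sum(device_sizes)
    return sum(w * (n / total) for w, n in zip(device_weights, device_sizes))


rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Four simulated devices, each holding its own private dataset.
devices = []
for _ in range(4):
    x = rng.normal(size=(50, 3))
    y = x @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((x, y))

global_w = np.zeros(3)
for _ in range(10):  # federated rounds
    updates = [local_update(global_w, x, y) for x, y in devices]
    global_w = federated_average(updates, [len(y) for _, y in devices])

print(np.round(global_w, 2))  # approaches true_w without ever pooling the raw data
```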

A complete, connected, ecosystem hosting AI stacks for both the CSP and their clients/partners offers the opportunity to rethink business models and how to participate in value creation, value distribution and value capture. Here, effective participation is the key to monetizing network infrastructure.

AI-Enablement is the use of the AI stack by the CSP for automated workload orchestration, the underpinning for provisioning and managing services and applications at the Edge.

This means the Edge itself becomes more intelligent, making it not only relevant for low-latency applications but also able to unlock highly intelligent and secure opportunities: data transmission efficiencies, traffic steering, zero-touch service management, and optimal placement of workloads (including Virtual Network Functions, VNFs); a smart way to handle the right workload, on the right resource, for the right reason, whether that be cost, performance, security/compliance, routing, or even reliability.

AI will be critical to network automation and optimization, with real-time decisions needed in support of traffic characterization, meeting end-to-end quality of service, and in particular – Dynamic Network Slicing that allows CSPs to monetize their infrastructure by offering multiple service tiers at different price points. For example, a slice of the network to handle certain floor robotics that rely on ultra-low latency may garner a higher price than a parallel slice for less time-sensitive edge compute.

The DNA of AI at the Edge is now starting to form. Time will tell as to who will endure (through financial success) to pass theirs to a next generation where AI functionality is so completely decoupled and disseminated so broadly that it will seem to disappear altogether.

Want to hear more? Listen to Amelia Dalton’s podcast ‘fish fry’ from the EE Journal featuring Kevin Hannah, Director of Product Operations for Kazuhm, at the link below.

The Curious Case of the Critical Catalyst – Why Artificial Intelligence will be the Darling of the Edge
