Friday, September 13, 2024

Three Things You Should Know about AI Applications


Lori MacVittie, F5 Distinguished Engineer


There are probably more than three things you should know, but let’s start with these three and go from there, shall we? 

First, it’s important to note that AI is real. Yes, it’s over-hyped. Yes, entire portfolios are being “AI-washed” in the same way everything suddenly became a “cloud” product over a decade ago. But it’s real according to the folks who know, which is to say decision makers in our 2024 State of AI Application Strategy research.

While most organizations (69%) are conducting research on technology and use cases, 43% say they have already implemented AI, whether generative or predictive, at scale.

Somewhat disconcerting is the finding that 47% of those already implementing AI of some kind have no—zero, nada, zilch—defined strategy for AI. If we’ve learned anything from the rush to public cloud, it should be that jumping in without a strategy is going to cause problems down the road. 

To help you define that strategy—especially when trying to understand the operational and security implications—we’ve put together a list of three things you should consider. 

1. AI applications are modern applications

It shouldn’t need to be said, but let’s say it anyway. AI applications are modern applications. While the core of an AI application is the model, there are many other components—inferencing server, data sources, decoders, encoders, etc.—that make up an “AI application.” 

These components are typically deployed as modern applications; that is, they leverage Kubernetes and its constructs for scalability, scheduling, and even security. Because different components have different resource needs—some workloads will benefit from GPU acceleration and others just need plain old CPUs—deployment as a modern application makes the most sense and allows for greater flexibility in ensuring each of the workloads in an AI application is deployed and scaled optimally based on its specific computing needs. 
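As a minimal sketch of what this looks like in practice (the workload names, labels, and images below are hypothetical, and the GPU request assumes the NVIDIA device plugin is installed on the cluster), Kubernetes lets each component of an AI application declare its own resource needs, so the scheduler places a GPU-hungry inferencing server on accelerated nodes while a companion component requests only CPU and memory:

```yaml
# Hypothetical inferencing-server Deployment: requesting the
# nvidia.com/gpu extended resource steers the pod onto a GPU node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inference-server        # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: inference-server
  template:
    metadata:
      labels:
        app: inference-server
    spec:
      containers:
        - name: server
          image: example.com/inference-server:latest  # placeholder image
          resources:
            limits:
              nvidia.com/gpu: 1   # GPU-accelerated workload
---
# A companion component (e.g., an encoder) that only needs plain CPUs,
# so it can be scheduled and scaled independently on ordinary nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: encoder                 # hypothetical name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: encoder
  template:
    metadata:
      labels:
        app: encoder
    spec:
      containers:
        - name: encoder
          image: example.com/encoder:latest  # placeholder image
          resources:
            requests:
              cpu: "500m"
              memory: 512Mi
```

Because the two Deployments declare different resource requirements, Kubernetes can schedule and scale each independently, which is exactly the flexibility the modern-application model provides.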

What this means is that AI applications face many of the same challenges as any other modern application. The lessons you’ve learned from scaling and securing existing modern applications will help you do the same for AI applications. 

Strategic Takeaway:

Leverage existing knowledge and practices for application delivery and security, but expand them to account for the varying resource needs of AI application components: GPU acceleration for compute-intensive tasks, plain CPU resources for the rest. Modern application deployments provide the flexibility to allocate resources based on each component’s specific requirements, optimizing for both performance and cost efficiency.


3. Different AI applications will use different models

Like the eventual reality that is multicloud, it’s highly unlikely organizations will standardize on a single AI model. That’s because different models can be a better fit for certain use cases. 

That’s why we are unsurprised to learn that the average enterprise is already using almost three (2.9) distinct models, inclusive of open-source and proprietary models. When we look at model use by use case, a pattern emerges. For example, in use cases that rely heavily on sensitive corporate data or ideas, such as security ops and content creation, we see significant trends toward open-source models. For automation use cases, on the other hand, Microsoft is gaining share, largely due to its ability to integrate with the tools and processes already in use at many organizations.

This is important to understand because the practices, tools, and technologies needed to deliver and secure a SaaS-managed AI model differ from those for a cloud-managed AI model, which in turn differ from those for a self-managed AI model. While there are certainly similarities, especially for security, there are significant differences that will need to be addressed for each deployment pattern used.

Strategic Takeaway: 

Analyze the use cases within your organization and identify patterns in the adoption of different AI models. Consider factors such as data sensitivity, integration capabilities, and alignment with existing tools and processes. Tailor your approach to deployment and security based on the specific characteristics of each deployment pattern.

There are a lot of considerations for building, operating, and securing AI applications, not the least of which is all the new requirements for model security and scalability. But many of the lessons learned from deploying modern applications across core, cloud, and edge for the past decade will serve organizations well. The core challenges remain the same, and applying the same level of rigor to scaling and securing AI applications will go a long way toward a successful implementation. 

But forgoing attention to the differences and leaping in without at least a semi-formal strategy for addressing delivery and security challenges is bound to lead to disappointment down the road. 
