Charging for Vertex AI Models
Now THIS is how you show your executives the money!
Over the last two weeks we’ve covered what you’ll be monitoring and why - crucial elements of keeping the lights on! As we close out our series on Vertex AI, we highlight what executives really want - how to make AI make financial sense. Once you can illustrate how you’re using AI to make or save the organization money, you’ll look like an IT Legend in the eyes of your executive team.
Let’s break down the five ways I see organizations charging for Vertex AI…
They’ll form lines out the door to get their hands on your solution…
1. Per User, Per Month
If the problem you solve is linear or helps individual end users, this might be a fit.
Pros
Easy-to-understand cost structure in today’s world, where SaaS dominates.
Predictable revenue; forecasting can be automated (see the sketch after this list).
Cost structure likely scales in line with growth.
Cons
Smaller organizations are a challenge: low user counts will likely struggle to justify the cost, while you struggle to justify the cost of supporting them.
Capped revenue potential! If what you solve is worth more to an organization than your MSRP, you’ve left money on the table.
Predictability problems arise for Customers when you either meter - and charge for - usage as it occurs, or cap available user counts and force additional purchases before additional users can use your solution.
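To make the forecasting point above concrete, here’s a minimal Python sketch of a per-user, per-month revenue projection. The forecast_mrr helper and every figure in it (user count, price, growth rate) are hypothetical placeholders - swap in your own numbers.

```python
# Minimal sketch: projecting revenue for a per-user, per-month model.
# All figures are illustrative placeholders, not real pricing.

def forecast_mrr(starting_users: int, price_per_user: float,
                 monthly_growth: float, months: int) -> list[float]:
    """Project monthly recurring revenue, month by month."""
    users = starting_users
    projection = []
    for _ in range(months):
        projection.append(users * price_per_user)
        users = round(users * (1 + monthly_growth))
    return projection

# Example: 200 users at $30/user/month, growing 5% per month, over a year.
print(forecast_mrr(starting_users=200, price_per_user=30.0,
                   monthly_growth=0.05, months=12))
```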
What an impressive project! This must have set somebody back a pretty penny…
2. Fixed Price Per Solution (Recurring)
If your solution (built in your environment, consumed by the Customer) delivers organization-wide value rather than scaling with each end user, this may be a valid fit.
Pros
This is still an easy-to-understand pricing structure for Customers.
The recurring component makes this revenue easy to predict and forecast.
Cons
Again, this model may either frustrate Customers or leave money on the table: organizations are either priced out immediately, or the value you deliver is SO great that Customers view the price as a steal.
This requires careful cost tuning - surging usage would require either throttling (a poor user experience) or a justification to Finance (our consumption expectations were off…), possibly followed by pressure to up-sell before the renewal (a rough sketch of the math is below).
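Here’s what that margin squeeze might look like in a quick Python sketch. The fixed price, per-request cost, and request volumes are assumptions for illustration, not Vertex AI list prices, and monthly_margin is a hypothetical helper.

```python
# Rough illustration of the cost-tuning problem behind a fixed recurring price.
# Every figure below is a made-up placeholder.

FIXED_MONTHLY_PRICE = 5000.00   # what you charge the Customer each month
COST_PER_1K_REQUESTS = 3.50     # your blended model + infrastructure cost

def monthly_margin(requests: int) -> float:
    """Margin left after consumption costs at a given monthly request volume."""
    consumption = (requests / 1000) * COST_PER_1K_REQUESTS
    return FIXED_MONTHLY_PRICE - consumption

# Compare the volume you priced for against a usage surge.
for volume in (200_000, 800_000, 1_500_000):
    print(f"{volume:>9,} requests -> margin of ${monthly_margin(volume):,.2f}")
```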
3. Fixed Price Per Solution (One-Time)
If your solution (built in your Customer’s environment) delivers organization-wide value rather than scaling with each end user, this may be a valid fit.
Pros
Just as with the scenario above, this is still an easy-to-understand pricing structure for Customers.
Again, this approach makes forecasting easy - the revenue just happens once.
This approach makes it easy to attach a Support contract, as the project’s deployment has a clear end date.
Ongoing cost controls become a Customer decision - throttling or resource expansion is a call they make.
Cons
As with the recurring model, this approach may either frustrate Customers or leave money on the table: organizations are either priced out immediately, or the value you deliver is SO great that Customers view the price as a steal.
Scope creep is a very real concern, as Vertex AI is new and organizations may have emerging needs or considerations. This makes project management and project wrap-up a challenge, especially if you quantify and track hours on deployment projects.
It’s highly likely that you’d have to sell another model implementation in order to earn more from this engagement.
The meter is running…
4. Metered Usage
Admittedly, I struggle to think about who this pricing model would be for. The most common scenarios would be either Customers that are extremely good at negotiating (example: a modest flat fee, plus metered charges above a certain threshold) or ISVs (now sometimes referred to as SDCs). Providing a Vertex AI model to these extremely cloud-literate Customers may be exactly what they’re looking for, but may miss the mark with every other audience.
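For the negotiated structure mentioned above (a modest flat fee plus metered charges past a threshold), a minimal sketch might look like the following. The fee, the included quota, the overage rate, and the metered_invoice helper are all assumptions for illustration.

```python
# Minimal sketch of a "flat fee plus metered overage" charge.
# All figures are illustrative placeholders.

FLAT_MONTHLY_FEE = 1000.00       # assumed base fee
INCLUDED_REQUESTS = 250_000      # usage covered by the flat fee
OVERAGE_PER_1K_REQUESTS = 2.00   # rate applied above the threshold

def metered_invoice(requests: int) -> float:
    """Return the monthly charge for a given request volume."""
    overage = max(0, requests - INCLUDED_REQUESTS)
    return FLAT_MONTHLY_FEE + (overage / 1000) * OVERAGE_PER_1K_REQUESTS

print(metered_invoice(180_000))  # under the threshold: flat fee only
print(metered_invoice(400_000))  # 150k over: flat fee plus $300 in overage
```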
Pros
Customers may love this truly flexible model.
This model is likely easy to sell to these types of organizations.
There isn’t much to negotiate here - you have a pretty hard floor you can’t go beneath.
Cons
Customers may not understand this truly flexible model.
You might be leaving substantial money on the table. Compare this to a scenario where a VM running an application and another running a database carry various agents and your support charges, for a total cost of $900 per month. If you charged a markup on that, you might get $1,200 per month. If you instead pitched the solution as <Application Name> Management, the value to the Customer is tied to how important that application is to them, not to the cost of a server and your agents on it (a quick sketch of the math follows).
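Here’s the back-of-the-envelope version of that comparison. The $900 cost and the roughly $1,200 marked-up price come from the example above; the value-based figure is an assumption standing in for whatever the application is actually worth to the Customer.

```python
# Cost-plus versus value-based pricing, using the example above.

monthly_cost = 900.00                  # VMs, agents, and your support charges
cost_plus_price = monthly_cost * 1.33  # roughly the $1,200/month markup
value_based_price = 3000.00            # assumed value of "<Application Name> Management"

print(f"Cost-plus:   ${cost_plus_price:,.2f}/month")
print(f"Value-based: ${value_based_price:,.2f}/month")
print(f"Left on the table: ${value_based_price - cost_plus_price:,.2f}/month")
```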
Maybe you just… don’t charge.
5. Just Don’t Charge For It
If you’re building a chatbot, that’s a pretty tough one to charge for. If it’s a value-add that your competitors don’t have, maybe you don’t charge for it at all and use it as a differentiator instead.
Pros
It’s hard to beat free…
Adding an AI element may make end-user adoption much simpler.
There isn’t much to negotiate here - free is free.
Cons
It might be tough to get buy-in from Sales if this isn’t something they can monetize.
There’s an off chance that adding an AI element to your solution may make adoption more complex.
Your pricing was probably set before the advent of AI, so even if the associated costs are small, they come straight out of your solution’s bottom line.
How you build and support your models is a critical factor here, but if I had to pick my preferred approaches I would say 1) per project, deployed into the Customer’s infrastructure (with a support contract!) and 2) per project, deployed into your infrastructure. These give you the maximum amount of flexibility to meet your Customers’ needs without leaving any money on the table. You’ll need to clear each engagement with Finance, but it’s extremely worth it and your Customers will thank you for tailoring the solution specifically to them.
Stay tuned for what’s next - we’ll be doing the same A-to-Z (hint, hint) coverage for another hyperscaler starting next week!