Data Science is the inter-disciplinary combination of statistics, computer science, and business acumen (Loukides, 2010). MLOps is, roughly, a set of practices that automates and standardizes the lifecycle of machine learning models, from development and training to deployment, monitoring, and maintenance in production.

We invest in data governance to increase trust in the data.

We invest in MLOps to increase trust in the judgement of the model.

And while there is presently a vigorous discussion about the distinction between AIOps, which is MLOps using AI, and MLOps, which already used AI, I’m sticking with MLOps for this post.

Three more ideas to express: descriptive, predictive, and prescriptive data products.

Concretely and briefly:

  • Descriptive is something like “here’s a line chart of something over the past 12 months”.
  • Predictive is something like “here’s a line chart of something over the next 12 months”.
  • Prescriptive is something like “given how this line over the past 12 months moves with this other line over the past 24 months, the system will do this over the next 3 days to achieve some outcome over the next 24 months.”

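A toy sketch makes the distinction concrete. The series, the naive linear extrapolation, and the spend rule below are all invented for illustration, not a real forecasting method:

```python
def descriptive(series):
    """Descriptive: report what happened over the past 12 months."""
    last_12 = series[-12:]
    return {"min": min(last_12), "max": max(last_12),
            "mean": sum(last_12) / len(last_12)}

def predictive(series, horizon=12):
    """Predictive: extrapolate the next `horizon` months with a naive linear trend."""
    slope = (series[-1] - series[0]) / (len(series) - 1)
    return [series[-1] + slope * (i + 1) for i in range(horizon)]

def prescriptive(series, target):
    """Prescriptive: turn the prediction into a recommended action."""
    forecast = predictive(series, horizon=3)
    gap = target - forecast[-1]
    return "increase spend" if gap > 0 else "hold spend"

sales = [100 + 2 * m for m in range(24)]  # 24 months of synthetic data
print(descriptive(sales))        # what happened
print(predictive(sales)[:3])     # what will happen
print(prescriptive(sales, 160))  # what the system should do
```

The point is the shape of each product, not the math: the same series yields a report, a forecast, or an action depending on which question the product answers.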
I haven’t seen it in the wild yet, but adding a fourth, generative, seems unnecessary and … sadly … inevitable. Somebody is going to try [1].

Generative is prescriptive because next best token is a prescriptive product. Most users never experience the probability distribution over each candidate token produced by the layered transformers; they’re simply served the recommendation prescriptively, as text in a chat interface, never having seen the underlying distributions of what could have been said. The winning set of tokens represents a prescriptive treatment. And there’s most certainly an intended effect on the recipient.
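
A minimal sketch of that point, with a made-up three-word vocabulary and made-up logits standing in for a real model’s output:

```python
import math

# Hypothetical logits over a tiny invented vocabulary; a real model emits
# one such distribution per generated token.
vocab = ["yes", "no", "maybe"]
logits = [2.0, 1.0, 0.5]

# Softmax: the full probability distribution over candidate tokens.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# What the user is actually served: the single winning token (greedy decoding).
winner = vocab[probs.index(max(probs))]
print(dict(zip(vocab, [round(p, 2) for p in probs])))  # the distribution nobody sees
print(winner)  # the prescriptive treatment everyone sees
```

The distribution exists at every step; the interface serves only the winner.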

Adding Generative as a fourth type is a complication that is highly unlikely to alleviate the lag, friction, and conflicts pervading data science and MLOps alike.

The lag, friction, and conflicts could be different this time around, though.

Here’s why.

If only they would invest in Data Governance and MLOps

The rate limiter is culture.

Investments in data governance tend to lag where trust isn’t an essential quality of the survival (or flourishing) of the organization. When the few who are empowered to make any decision about anything don’t need data to inform any decisions, then there is no point in investing in data. Data quality is an unnecessary quality attribute, and the persistent state of dirty data further justifies its irrelevance. It sets up a convenient script, by which one can complain that the data can’t be trusted, so one must always rely on one’s excellent dead-reckoning judgement alone. Vast parts of the global economy operate this way. It doesn’t matter.

It follows that investments in MLOps will lag where trust isn’t an essential quality. The firm isn’t going to tolerate prescriptive data products like recommendations or automated next-best-offer delivery in the first place, so why bother with managing the model itself?

Arguing that a firm suddenly has to invest in data governance in order to enable Generative AI is unlikely to be persuasive. The firm isn’t committed to generative AI. It’s Feldman (1989) all the way down. The first signs of difficulty are evident in automated CX support agents, and in the flow of problems that arises where what the firm wishes were true and what customers wish were true interact. This will soon be a meme.

The core rate limiter lies in the underlying culture, typically shaped by what the competitive environment tolerates. It is not the technology itself.

Don’t despair. There are plenty of points of potential GDP latent in the tech.

Where does the Data Science of MLOps matter?

In competitive environments.

The higher the competitive pressure, the greater the demand for differential edge. Red Queen dynamics apply. If you compete on judgement, then you compete on improving judgement. MLOps matters because it’s existential.

MLOps hit the quaternary sector long ago. The implication of competing on judgement is surprisingly variable in the services sector. There’s considerable variance in the secondary and primary sectors. For instance, some farms have deployed vision models to microtarget pests and fungus as the tractor drives across the field, while other farms remain unmechanized. Yield. Yield. Yield.

Manufacturing productivity peaked in the mid-1960s. We won manufacturing as a domain back then. The application of MLOps is going at exactly the appropriate speed in the secondary sector.

The lag, friction, and conflicts pervading Data Science and MLOps

Whether it was in response to recsys and narrow ML in the early 2010s, or the reaction to generative AI in the mid-2020s, the preservation of hard-won optimizations and risk management pops up in the OR literature.

Dai et al. (2025) offer: “First, flow-based generative models frame generation as deterministic transport characterized by an ordinary differential equation, enabling auditability, constraint-aware generation, and connections to optimal transport, robust optimization, and sequential decision control. Second, operational safety is formulated through an adversarial robustness lens: decision rules are evaluated against worst-case perturbations within uncertainty or ambiguity sets, making unmodeled risks part of the design.”
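
The adversarial-robustness idea in that quote can be sketched minimally. The newsvendor-style cost and the box-shaped uncertainty set below are invented for illustration, not the paper’s construction: evaluate each candidate decision against the worst case in the set, then choose the decision that minimizes that worst case.

```python
# Illustrative only: a decision rule ordering quantity q against demand d,
# with cost = holding cost on overage + penalty cost on underage.
def cost(q, d):
    return 1.0 * max(q - d, 0) + 4.0 * max(d - q, 0)

def worst_case_cost(q, nominal_demand, radius, steps=50):
    """Evaluate q against the worst demand in the box uncertainty set
    [nominal - radius, nominal + radius]."""
    grid = [nominal_demand - radius + 2 * radius * i / steps
            for i in range(steps + 1)]
    return max(cost(q, d) for d in grid)

# A robust choice minimizes the worst-case cost over the set,
# rather than the cost at the nominal point.
nominal, radius = 100.0, 20.0
candidates = [80 + i for i in range(41)]  # q in [80, 120]
robust_q = min(candidates, key=lambda q: worst_case_cost(q, nominal, radius))
print(robust_q, worst_case_cost(robust_q, nominal, radius))
```

The robust order sits above the nominal demand because underage is penalized more heavily than overage; unmodeled risk shapes the design, exactly the lens the quote describes.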

It’s almost like how the website redesign is treated as an episodic event, like an eclipse, as opposed to an ongoing continuously improved product. In environments where prescriptive Machine Learning was litigated but never deployed, Generative represents another exciting threat to the status quo to be hedged against. It rhymes with the 2010 website redesign we did because of the mobile web.

In environments where Machine Learning was never attempted, Generative represents something that is harder to ignore simply because it has captured the imagination of so many. Will it be different this time? Why? Even if the same people who fended off the technology the last time aren’t there anymore, the narratives they left linger behind.

Change those narratives and the incentives, mean it, and change the outcome.

Trust in the output is the key fault line. Which is what Data Governance and MLOps address! A paradox!

The choice to rely entirely on LLMs to execute MLOps for managing LLM output is an intriguing one. If one has blind trust and maximum faith in LLMs to operate in a trustworthy manner, then there isn’t a problem. But some know that LLMs can be manipulative. Which is a challenge. The debate tends to be long and typically ends either in gnashed teeth and disengagement, or with adherents of the extreme maximalist view grudgingly coming around to the idea that maybe the LLMs will eventually become good enough, just not right now, which doesn’t mean they should stop trying or anything like that.

The Data Scientist re-emerges from the snack counter, latte in hand, and says, “What if we took a predictive approach to MLOps and AIOps then while we work towards the prescriptive, self-reinforcing, vision?”

They’re fun at parties.

What could be different this time?

In the event that the Data Science / MLOps feedback loop doesn’t tightly seal to exclude humans in the loop, the humans in the loop will continue to be extremely predictive of outcomes.

It’s very likely that the organizations with smaller, empowered, teams of five, armed with an agentic stack, high data quality, high strategic clarity, strong subject matter expertise, and competent data science and MLOps capabilities, will accrue significant competitive advantages.

Those competitive advantages will compound over time, reinforcing on reinforcement learning, and sum up to un-ignorable competitive pressures.

This position represents neither the minimalist view (no data science, no MLOps, no data for anyone!) nor the maximalist view (no human in the loop, max-sealing, no data for anyone!).

Keep the best humans in the loop running with the best machines in the loop. Empower. Prosper.

It really could be different this time.

References And Notes

[1] Why? It’s the newest thing out there! Because if your business isn’t already taking a Generative Agentic Approach to AIOps leveraging MLOps for your AIOps, you’re already dead in the water!

Cheng et al. (2026). Cultural Compass: A Framework for Organizing Societal Norms to Detect Violations in Human-AI Conversations.

Dai, T., Simchi-Levi, D., Wu, M. X., & Xie, Y. (2025). Assured Autonomy: How Operations Research Powers and Orchestrates Generative AI Systems. arXiv preprint arXiv:2512.23978.

Feldman, M. S. (1989). Order without design: Information production and policy making (Vol. 231). Stanford University Press.

Loukides, M. (2010). What is data science? O’Reilly Radar. https://www.oreilly.com/radar/what-is-data-science/