I'd love to touch on AI. At the very least, we know that every customer experience is ultimately going to have some level of machine-learning-oriented adaptive behavior.
Most of that is evidenced by the consumer experiences we have in retail today with Amazon, Netflix, Uber, and other vendors that have truly embraced machine-learning-based recommender systems. Within the data ecosystem, I'm really curious about your vision on how AI is both a consumer of and a contributor to well-functioning, sustainable data environments.
How does machine learning come into play when considering data architectures and strategies? Sharing your vision on that, along with any examples or anecdotes about using machine learning today, might help our audience form some ideas of their own.
Sure. Absolutely. Let me preface my whole response by saying that you've already done the work of creating a single version of the data truth: cleansing, data quality, harmonization, making sure that business rules are applied to your data and you're compliant. Once you've done that, you've got high-quality, standardized data that you can deliver to AI.
From my perspective, when I look at AI, I approach it with three relevant working principles, and who knows what's going to happen down the line.
The first is automation, and that's a pretty simple thing: if something is done manually and you can automate it, that's great. It doesn't necessarily mean you're replacing someone's manual human job. Even in data operations and data engineering, there are certain things that people tend to do by hand, and I don't think you need to do that by hand. You should always be thinking about how to automate it.
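To make the automation principle concrete, here is a minimal sketch, with a hypothetical function and sample data, of automating a check that a data engineer might otherwise do by hand: counting missing values per field in a batch of records.

```python
# Hypothetical sketch: automating a manual data-operations task.
# Counting missing values per field is something engineers often do
# by hand with ad hoc queries; a small routine does it the same way
# every time, with no human in the loop.

def null_report(rows):
    """Return a {field: missing_count} report for a list of dict rows."""
    report = {}
    for row in rows:
        for field, value in row.items():
            if value is None or value == "":
                report[field] = report.get(field, 0) + 1
            else:
                report.setdefault(field, 0)
    return report

rows = [
    {"id": 1, "email": "a@example.com", "country": ""},
    {"id": 2, "email": None, "country": "US"},
]
print(null_report(rows))  # {'id': 0, 'email': 1, 'country': 1}
```

Once a check like this runs on every batch instead of on request, nobody has to remember to do it by hand.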
Now, the second principle is amplify, and amplify means exactly what it sounds like. When you have an amplifier, what does it do? It enhances the volume of the sound. Today you might have a single person doing something manually while thousands of people wait in line for that task to complete, when really you need it done for a thousand people at once.
With amplification, if I have a thousand requests today, I can run a thousand jobs to do that something, whatever that something is.
And the third aspect is to simplify, which is about how you take a very complex problem or task and break it into manageable pieces. For lack of a better term, maybe it even has a built-in Gantt chart with dependencies that tells you which piece to work on first. Then, in the process of working on it, if it needs scale, you amplify it. And of course, you've already automated it; that's how you got to the simplification part.
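The "built-in Gantt chart" idea above can be sketched with a topological sort: break the complex task into pieces, declare which piece depends on which, and let the ordering fall out. The task names here are hypothetical; the sketch uses Python's standard `graphlib` module.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline pieces: each task lists the tasks it depends on,
# like a built-in Gantt chart telling us what to work on first.
dependencies = {
    "cleanse":     {"ingest"},
    "harmonize":   {"cleanse"},
    "apply_rules": {"cleanse"},
    "deliver":     {"harmonize", "apply_rules"},
}

# static_order() yields every task only after all of its dependencies.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Tasks with no path between them (here `harmonize` and `apply_rules`) can also be dispatched in parallel via the sorter's `get_ready()`/`done()` interface, which is where the amplify principle kicks in on top of simplify.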
So I think these three working principles are relevant even in data engineering. My thought process on this is: today we are building serverless data integration hubs where there is still a bit of a manual touchpoint, namely incorporating the data rules and making sure we have the right routines. It's possible that in the future, once AI has been trained on which data quality issues to be aware of and go look at, it will be able to go find those issues and also build new quality rules based on transitive relationships it has figured out, which makes the quality of data even higher than it is today.
Because today we just start with a set of simple rules, and they're all individual rules. But sometimes these rules have relationships, latent, hidden, inherent relationships that we are not aware of.
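The transitive-relationship idea could be sketched like this: if one existing rule maps a raw field to a standardized value, and a second rule validates the standardized value, then a third rule on the raw field follows from composing the two. The field names and rules here are hypothetical, a minimal sketch rather than any particular product's behavior.

```python
# Hypothetical sketch: deriving a new data quality rule from the
# transitive relationship between two existing, individual rules.

# Rule 1: normalize a raw country value to a standardized code.
def normalize_country(raw):
    return {"united states": "US", "germany": "DE"}.get(raw.strip().lower())

# Rule 2: a standardized country code must be a known two-letter code.
def valid_code(code):
    return code in {"US", "DE"}

# Derived rule: composing the two yields a new check on the raw input
# that nobody wrote by hand; the transitive relationship produced it.
def valid_raw_country(raw):
    code = normalize_country(raw)
    return code is not None and valid_code(code)

print(valid_raw_country(" Germany "))  # True
print(valid_raw_country("Atlantis"))   # False
```

A system that can discover such chains across thousands of individual rules would surface quality checks that no single rule author ever spelled out.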
I think that is where AI will come into play. And this whole aspect of automation, amplification, and simplification is relevant across the board in every vertical and every market segment, but it's also relevant in the horizontal, which is data. I think that's something that will change the way we manage data in the future.