[A] Podcast

AI and ML: Automate, Amplify, and Simplify using Big Data



Interview With Gaja Vaidyanatha

Learn how Artificial Intelligence (AI) and Machine Learning (ML) enhance workflows and data architecture with automation, amplification and simplification.



Gaja Krishna Vaidyanatha is a seasoned data practitioner with a 28+ year track record of managing and integrating large data footprints, on-premises and in the Cloud. He is passionate about building serverless data hubs that deliver high-quality data for Analytics, Machine Learning, and AI. His data philosophy: good data creates good business insights, which lead to great business outcomes. He is the Principal at CloudData LLC, a specialty consulting firm based in Austin, Texas, USA. CloudData enables customers with the design, architecture, and building of Serverless Data Integration Hubs in the Cloud.



Podcast Bonus Material

With AI, work with these three relevant principles: automate, amplify, and simplify


I'd love to touch on AI. At the very least, we know that every customer experience is ultimately going to have some level of machine-learning-oriented adaptive behavior.

Most of that is evidenced by the consumer experiences we have in retail today with Amazon, Netflix, Uber, and other vendors that have truly embraced machine-learning-based recommender systems. Within the data ecosystem, I'm really curious about your vision on how AI is both a consumer of and a contributor to well-functioning, sustainable data environments.

How does machine learning come into play when considering data architectures and strategies? Your vision on that, along with examples or anecdotes about using machine learning today, might help our audience form some ideas of their own.

Sure. Absolutely. Let me preface my response by saying that you've already done the work of creating a single version of the data truth: cleansing, data quality, harmonization, making sure that business rules are applied to your data and you're compliant. Once you've done that, you've got high-quality, standardized data that you can deliver to AI.

From my perspective, when I look at AI, I work with three relevant working principles: automate, amplify, and simplify. If you can deliver those three characteristics, that's a good starting point with AI, and who knows what's going to happen down the line.

Automation is a pretty simple thing: if something is done manually and you can automate it, that's great. It doesn't necessarily mean you're replacing someone's job. Even in data operations and data engineering, there are certain things that people tend to do by hand, and I don't think you need to. You should always be thinking about how to automate them.

Now, the second principle is amplify, and amplify means exactly what it says. When you have an amplifier, what does it do? It enhances the volume of the sound. In the context of AI, we're talking about taking the same thing you just automated and doing it a thousand times faster, because you need to do it for a thousand people. Otherwise, you have a single person doing something manually while thousands of people wait in line for that task to complete.

You need something to be automated and amplified to the point where it can scale: today I have a thousand requests, so I run a thousand jobs to do that something, whatever it is.
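As a minimal sketch of the first two principles, the snippet below automates a hypothetical manual task (validating a customer record) and then amplifies it by fanning the same check out across a thousand records concurrently. The record fields and validation rule are illustrative assumptions, not anything from the episode:

```python
from concurrent.futures import ThreadPoolExecutor

# Automate: a check someone might otherwise perform by hand,
# captured as a function. The fields here are hypothetical.
def validate_record(record: dict) -> bool:
    return bool(record.get("email")) and record.get("amount", 0) >= 0

records = [{"email": f"user{i}@example.com", "amount": i} for i in range(1000)]

# Amplify: run the automated check across a thousand records at once,
# instead of one person working through the queue serially.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(validate_record, records))

print(sum(results))  # count of records that passed validation
```

In practice the pool of workers might be a fleet of serverless functions rather than local threads, but the shape is the same: one automated unit of work, scaled out.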

And the third aspect is to simplify, which is about taking a very complex problem or task, breaking it down into smaller chunks, and then solving them one by one in a prioritized fashion. For lack of a better term, maybe you even have a built-in Gantt chart with dependencies that determines which chunk to work on first. Then, in the process of working through it, if a chunk needs scale, you amplify it. And of course, you've already automated it; that's how you got to the simplification part.
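The "built-in Gantt chart with dependencies" described above can be sketched as a dependency graph that is resolved into a working order. The stage names below are hypothetical pipeline chunks, assumed for illustration:

```python
from graphlib import TopologicalSorter

# Simplify: one complex pipeline task broken into smaller chunks,
# with explicit dependencies (which chunk must finish first).
deps = {
    "load":      set(),
    "cleanse":   {"load"},
    "harmonize": {"cleanse"},
    "publish":   {"harmonize"},
}

# Resolve the dependencies into an execution order; any chunk that
# turns out to be a bottleneck is the one you amplify.
order = list(TopologicalSorter(deps).static_order())
print(order)  # dependencies always appear before their dependents
```

A real orchestrator (Airflow, Step Functions, and the like) does this same topological resolution under the hood; the point here is only the decompose-then-order pattern.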

So I think these three working principles are relevant even in data engineering. My thought process on this is: today we are building serverless data integration hubs where there is still a bit of a manual touchpoint, as far as incorporating the data rules and making sure we have the right routines. It is possible that in the future, once AI has been trained on the data quality issues it needs to be aware of and go look for, it will be able to find those issues and also build new quality rules based on transitive relationships it has figured out, which makes the quality of data even higher than it is today.

Because today we just start with a set of simple, individual rules. But sometimes these rules have relationships: latent, hidden, inherent relationships that we are not aware of.
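One way to picture those transitive relationships: if rule A failing puts rule B at risk, and B puts C at risk, then A transitively implies C, even though no one wrote that rule down. The sketch below computes that transitive closure over a toy set of hypothetical rule names (everything here is an assumed example, not a CloudData implementation):

```python
# Hypothetical pairwise relationships between individual quality rules:
# "if this rule is violated, that rule is also implicated".
implies = {
    "valid_country": {"valid_region"},
    "valid_region":  {"valid_timezone"},
}

def transitive_closure(graph: dict) -> dict:
    """Derive the latent rule relationships by repeatedly following edges."""
    out = {k: set(v) for k, v in graph.items()}
    changed = True
    while changed:
        changed = False
        for targets in out.values():
            for t in list(targets):
                for extra in out.get(t, ()):
                    if extra not in targets:
                        targets.add(extra)
                        changed = True
    return out

derived = transitive_closure(implies)
print(derived["valid_country"])  # now also includes "valid_timezone"
```

An ML system trained on quality incidents would be learning edges like these from data rather than having them declared, but the derived-rule idea is the same.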

I think that is where AI will come into play. This whole aspect of automation, amplification, and simplification is relevant across the board, in every vertical and every market segment, but it's also relevant in the horizontal, which is data. And I think that's something that will change the way we manage data in the future.

