Snowflake VP of AI Baris Gultekin: No AI Strategy Without a Data Strategy & Why Skills Are the New Apps

The following is a conversation between Alp Uguray and Baris Gultekin.

Summary

In this episode of Masters of Automation, Alp Uguray sits down with Baris Gultekin, Vice President of AI at Snowflake, to explore the architecture of enterprise intelligence and why the agentic future will be shaped by data gravity — not model size.

Baris brings a rare cross-layer perspective. He spent 16 years at Google, where he co-created Google Now — the first AI assistant that predicted what you needed before you asked — and helped grow Google Assistant from 10 million to 500 million monthly users. He then co-founded nxyz, a blockchain data infrastructure company backed by Paradigm, Sequoia, and Greylock, which was acquired by Snowflake in 2023. Now he leads the development of Snowflake's entire Cortex AI product portfolio — including the recently GA'd Cortex Analyst, multi-model inference, and agentic capabilities.

The conversation opens with Baris's decision to leave Google at the height of his career — walking away from equity, impact, and safety — to build in the crypto winter. From there, Alp and Baris trace the evolution from the brittle, hand-built use cases of the Google Assistant era to today's reasoning-capable AI agents. They unpack Snowflake's core thesis — "bring AI to the data" — and why that architecture isn't just about security, but about unlocking the semantic understanding that makes agents trustworthy at enterprise scale.

They go deep on what happens when agents replace traditional application UIs, how the $500 billion SaaS economy evolves when natural language becomes the interface, and why skills — modular instructions and scripts that give agents new capabilities — represent the most exciting frontier in enterprise AI. Baris draws a vivid analogy: loading a skill into an agent is like Neo learning karate in The Matrix — instant, composable, and exponentially expandable.

The episode also tackles the economics of intelligence, agent-to-agent protocols like MCP, the accountability question when autonomous agents act on enterprise data, and why Baris believes the current moment in AI UX is equivalent to the terminal era of computing — with visual, voice, and multi-modal interfaces about to transform how we interact with intelligence.

Guest Bio

Baris Gultekin is Vice President of AI at Snowflake, where he leads the Cortex AI product portfolio and drives the company's AI product roadmap and strategy. He built Snowflake's AI product suite from the ground up after joining in 2023 through the acquisition of nxyz, the blockchain data infrastructure company he co-founded and served as CEO. Previously, Baris spent 16 years at Google, where he co-created Google Now and served as Product Director for Google Assistant, growing it from 10M to 500M monthly users. He holds an M.S. in Electrical Engineering from Cornell University and an MBA from Stanford.

Takeaways

  • The assistant era taught a hard lesson: when you make AI feel natural, users expect human-like intelligence — and when it fails on most use cases, trust breaks down completely.

  • "Bring AI to the data" isn't just a security play: it enables governance inheritance, semantic understanding of business logic, and dramatically simpler architectures for enterprise AI.

  • There is no AI strategy without a data strategy: companies must break down data silos, bring business semantics onto their data platform, and make their knowledge base AI-accessible before building agents.

  • Skills are the new frontier: modular instructions and scripts that agents can load on-demand create an exponentially expanding surface of capabilities — from drafting marketing campaigns to pulling contract data.

  • The SaaS stack won't disappear, but the interface will: enterprise platforms like Salesforce and SAP will evolve so that the majority of interactions are driven by agents rather than humans, changing UIs, not eliminating vendors.

  • Agent-to-agent protocols are emerging fast: MCP for context retrieval and new commerce/negotiation protocols will enable agents to transact across systems — and natural language makes adoption faster than prior protocol shifts.

  • Trust is the gating factor for autonomy: low-stakes automations are already running without humans in the loop; high-stakes cases still need oversight, but what once took weeks now takes minutes.

  • We're in the terminal era of AI UX: today's text-based interactions are the command line of the AI age — visual, voice, and multi-modal interfaces will rapidly evolve to match how humans naturally process information.

  • The economics of intelligence are democratizing: both data access (via natural language) and intelligence (via cheaper, better models) are being democratized — and the pie is growing so fast it's lifting all boats.

  • Startup advice hasn't changed: the best founders solve problems they feel passionately about and deeply understand — not problems they identified through market analysis.

Chapters

  • 00:00 Introduction — Walking away from Google after 16 years

  • 03:01 From Google Now to Google Assistant: why the technology wasn't ready then

  • 05:24 Bring AI to the data: Snowflake's core architecture thesis

  • 06:07 Data as context: why agents are only as good as their data foundation

  • 08:34 The application layer evolves: agents, SaaS, and natural language interfaces

  • 11:27 Agent-to-agent protocols: MCP, commerce, and the agentic web

  • 13:02 The Matrix moment: skills as the breakthrough for enterprise agents

  • 15:48 Hyper-personalization: internal tools, custom apps, and the long tail of automation

  • 18:04 UX evolution: why we're in the terminal era and where voice and vision take us

  • 18:57 The oracle problem: source of truth when agents negotiate

  • 20:07 The economics of intelligence: democratization, value accrual, and the growing pie

  • 24:57 Jobs and industries: rapid change, education, healthcare, and universal impact

  • 27:18 Kids and AI: flashcards, personalized learning, and a different world

  • 30:22 Enterprise alignment: accountability, guardrails, and scoped agents

  • 34:00 Healthcare data: regulation, fragmentation, and the weekend project to reclaim your own records

  • 36:40 Reactive vs. autonomous agents: what's blocking full enterprise autonomy

  • 39:10 System of records, system of agents: how the data layer changes with AI

  • 44:14 Semantic models and the ontology of enterprise knowledge

  • 46:50 Startup advice: passion, depth, and the burning desire to make it happen

  • 49:05 Snowflake's partner ecosystem: why access to data is the biggest startup unlock

  • 53:16 General intelligence: what we overestimate, what we underestimate, and why tools matter most

These agents, it’s like in The Matrix — ‘I want to know karate’ and I know karate. Being able to tap into these skills and then use it for various purposes expands the surface on which you can take action.
— Baris Gultekin, VP of AI at Snowflake
There is no AI strategy without a data strategy. Ultimately, you still need to think about how you bring all of your data together so that your data assets can be joined, can be combined when you’re trying to bring insights across your data.
— Baris Gultekin, VP of AI at Snowflake
When you make something more natural, there are expectations of human-like intelligence. When it works in a couple of your use cases but not most, that expectation breaks down. That was the biggest challenge with the Google Assistant era.
— Baris Gultekin, VP of AI at Snowflake

Transcript

Alp Uguray (00:01): Hi Baris, thanks for joining me today.

Baris Gultekin (00:05): Thanks for having me.

Alp Uguray (00:07): Baris, I want to start with a moment. It is 2022. You've been at Google for 16 years — one of the longest tenures of anyone at that level. You've built products used by hundreds of millions of people. You have equity, you have impact, you have safety — and you walk away from that. And you walk away to start nxyz, a blockchain data infrastructure company, in the middle of what some would say is the crypto winter. And 18 months later, you sell that business to Snowflake and take on the most ambitious AI infrastructure challenge in the enterprise world.

So my question is: what led to your decision to jump into trying to accomplish what most see as impossible?

Baris Gultekin (01:05): It's a great, deep question to start with. I'd been at Google for a while at that point, and I loved my tenure there. Google is a phenomenal company. I got to work on really interesting things. Just loved it. And at the same time, I've always wanted to do something on my own. This is something that's always been ingrained in me: to just try it out. And I had a lot of ideas. I was really fortunate to be able to start a lot of new products at Google.

Around that time, I'd been thinking about what's next. I'd spent a lot of time in the AI space before AI was hot. And at the time, it didn't quite work. So I felt like maybe it was time to try something else. I was really fortunate to have worked with Sridhar Ramaswamy, who's now the CEO of Snowflake, at Google when he was running Google Ads. I really enjoyed working with him, and I've kind of seen him as a mentor.

So when I was talking to him, he suggested — he'd been thinking about blockchain as well. And I was really interested in that space. So it was just the right time to try something else, something new, work with amazing people in a startup environment. That's what I got to do.

And again, I was really fortunate to work with this awesome group of people. As we did this, the AI boom started with ChatGPT launching and the rest was history. So Snowflake saw that as an opportunity to double down on AI and acquired our company with lots of AI experts to come join and build out AI product at Snowflake.

Alp Uguray (03:01): And then that begins the Snowflake journey and starting the product. You've seen Google Now, the assistants, the voice agents back then. And now they've taken a different form and a different shape. How do you see that right now — especially seeing what it was like before, and now that it works?

Baris Gultekin (03:29): "Now that it works" is the interesting thing, right? Because Google Now was a precursor to Google Assistant and it launched with a lot of fanfare. People loved the idea of getting information proactively before they asked for it — very helpful tips and information. Then came Google Assistant, and there were a lot of expectations with Siri, then Google Assistant, and Alexa — about having intelligence at your fingertips on your mobile device.

The challenge was that the technology wasn't quite ready. Each use case had to be built by hand, one by one. When somebody says this, then do that. And that is by nature brittle. When you talk about voice, you're making intelligence more natural. And when you make something more natural, there are expectations of human-like intelligence, human-like behavior. So when it works in a couple of your use cases but not in most of your use cases, that expectation breaks down.

That was the biggest challenge with the Google Assistant era. And now with AI being as capable as it is, we're in a completely different world — because there is clearly a deeper understanding of what's being asked and the capabilities are much more robust. It feels a lot more natural and it kind of matches the expectation.

And what's also interesting is that the likes of Google Assistant started with consumers first. ChatGPT is still a very consumer-oriented product, but there's a ton of opportunity in the enterprise space, which is relatively unique. Watching enterprise adoption almost exceed the pace of innovation is fascinating to see.

Alp Uguray (05:24): From the workflow aspect too: before, you had to design all of the logic and the business workflow yourself. Now LLMs can understand that through skills and training data. One difference in being at Snowflake is that the AI is coming to the data versus the other way around. So how is that architecture critical for the future of generative AI applications? Is it purely governance, or are there different advantages that aren't obvious to people today?

Baris Gultekin (06:07): Maybe if I zoom back a little — ultimately, agents are incredibly powerful, but they are only as good as the context that's given to them. So data is incredibly critical and data is the most important asset of any company. Therefore, platforms like Snowflake, where you're starting with a solid data foundation, allow companies to build on that foundation.

When we started, our customers essentially told us: we want to bring AI to run next to data versus bringing massive amounts of data to AI. As you called out, one of the biggest benefits is security and governance. When you do that, all of the governance that you put on your data — all of your access policies that are already set — are by nature respected, versus you having to replicate it in multiple places. So it's substantially simpler to build, as well as more secure and governed.

A huge benefit also is that by being close to data, we're helping our customers build AI-ready data so that all of their data is much more easily usable with AI. And then we help them do that by bringing the semantics of their business onto their data platform — so that AI can understand how their data is laid out, what are the things that they care about, what are the metrics, what are the terms and synonyms used in their business.

At a high level, ultimately it's all about trust. For companies to be able to scale AI in their organizations and with their customers, trust is incredibly important. Trust shows up in two ways. First: are you governing the data? Are only the people who have access to this data able to use it? And second: are you able to build high-quality AI solutions, high-quality agents? That boils down to a deep understanding of their data and business.

I feel very fortunate that Snowflake, being very close to data, is very uniquely positioned to take advantage of this opportunity with our customers.

Alp Uguray (08:34): It's also very interesting because customers have different systems of record that they've signed up for over the years. They had applications built for the Salesforces, SAPs, and Workdays of the world. Each of them has its own database, UI, and custom business logic, hidden and unable to speak to one another.

In an agentic future, do you think the application layer kind of disappears somewhat? Instead, we'll have agents directly query a unified data layer, execute actions — and maybe the agent of one system of record speaks to the agent of another system of record provider, and that builds the agentic future. How do you think that's going to evolve?

Baris Gultekin (09:30): It's a good question, and it's a question that's top of mind for everyone. How does the world change? How do these platforms evolve? I absolutely believe all of these platforms will evolve to a point where the majority of interactions are done by AI versus humans. Today, these platforms are built for humans, but that is absolutely going to change. And in that world, when those interactions are mostly driven by agents, the interfaces change.

You've made another assumption in that question — do these companies disappear altogether? I don't necessarily see that to be the case. If you're using Salesforce, you might interact with Salesforce differently. You might interact with an agent that captures that information there. And then an agent can talk to a Salesforce agent and so forth.

However, each of these solutions, each of these providers will evolve to support this new paradigm. I am sure there are going to be a ton of new opportunities that will emerge as well. When you're able to break the barriers a little bit so that there is a level playing field in natural language for agents to interact with one another, those silos will absolutely start breaking down. There's tremendous opportunity for consumers to more easily get work done across multiple different platforms and solutions.

It's hard to imagine exactly how that's going to evolve. Absolutely it will evolve.

Alp Uguray (11:27): And it's moving very fast. I've seen the multi-agent frameworks online — the agents people are downloading, running on Claude and on Mac. In that future, let's say I have an agent and there's another agent someone else has — unleash my agent to negotiate meeting times, approve purchases, manage shopping behavior. What do you think will be the protocol layer for that agentic future? Do you think that's still going to happen as API calls, or are payments managed differently?

Baris Gultekin (12:21): What's happening is the LLMs are now conversant in natural human language. There are a lot of benefits to a series of APIs, but there are great protocols emerging that are making this very easy for agents to talk to one another. You have MCP for just getting context from various places. You have other agent-to-agent protocols emerging for agents to talk to one another. You have protocols for commerce emerging.

I think it's in the early days — protocols will emerge. And the beauty of all of this is because it's natural language, I think the adoption will be much faster.
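
As a loose illustration of what agents talking to one another over a shared protocol can look like, here is a minimal sketch of the JSON-RPC message shape used by MCP-style tool calls. The calendar tool and its arguments are hypothetical, and the exact payload fields should be checked against the MCP specification rather than taken from this sketch.

```python
import json

# Rough illustration of the JSON-RPC 2.0 message shape that MCP-style protocols
# use when an agent asks a server to run one of its tools. Treat the exact
# field names as an approximation of the public spec, not a reference.

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC request asking a server to invoke one of its tools."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",      # MCP-style method for invoking a tool
        "params": {
            "name": tool_name,       # which tool the agent wants to run
            "arguments": arguments,  # structured inputs for that tool
        },
    }
    return json.dumps(request)

# Example: an agent asking a hypothetical calendar server for free meeting slots.
print(make_tool_call(1, "find_free_slots",
                     {"attendees": ["alp", "baris"], "duration_minutes": 30}))
```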

Alp Uguray (13:02): Yeah, and also in a voice that we understand. I can speak every single language around the world, and that makes it much more accessible. Among this, what is one case that's exciting you the most that you see day to day?

Baris Gultekin (13:25): Right now, it's not quite a single use case, but I am really excited about the power of these reasoning models when they meet skills. Just to expand on this — skills are essentially instructions and scripts to take certain actions. And you can imagine many, many skills that will get developed, that will get open-sourced.

For these agents, it's like in The Matrix — I'll say, "now I want to know karate and I know karate." Being able to tap into these skills and then use it for various purposes expands the surface on which you can take action. So I'm incredibly excited about the power of that.

More concretely, in the enterprise space — for Snowflake, we built our own agent for our sales team. It's a very powerful agent — being able to look up any customer's information, any open issues they have, whether they have any renewals, what are the use cases they're working on. Really powerful. But you could imagine expanding it: you can add a skill to pull up information from some repository, add a skill to draft marketing campaigns, and so forth. Each of those can be really well customized. Excited about that potential.
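
To make the idea of skills concrete, here is a minimal Python sketch of an agent loading modular skills on demand. The Skill structure, the registry, and the two example skills are illustrative assumptions for this write-up, not Snowflake's or any vendor's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Skill:
    """A modular capability: instructions the model reads plus a script it can run."""
    name: str
    instructions: str          # natural-language guidance for the agent
    run: Callable[..., str]    # the executable part of the skill

@dataclass
class Agent:
    skills: Dict[str, Skill] = field(default_factory=dict)

    def load(self, skill: Skill) -> None:
        """The 'I know karate' moment: the capability becomes available instantly."""
        self.skills[skill.name] = skill

    def use(self, name: str, **kwargs) -> str:
        return self.skills[name].run(**kwargs)

# Hypothetical skills for an internal sales agent.
lookup_renewals = Skill(
    name="lookup_renewals",
    instructions="Given an account name, report upcoming contract renewals.",
    run=lambda account: f"(stub) renewals for {account}",
)
draft_campaign = Skill(
    name="draft_campaign",
    instructions="Draft a short campaign for a given product and audience.",
    run=lambda product, audience: f"(stub) campaign for {product} aimed at {audience}",
)

agent = Agent()
agent.load(lookup_renewals)    # each loaded skill expands what the agent can act on
agent.load(draft_campaign)
print(agent.use("lookup_renewals", account="Acme Corp"))
```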

Alp Uguray (15:09): And it's huge because in each skill, customers can include their business logic that is very niche to them. And then there will be a different AI — so personalized and tailored — just living to automate their processes.

Baris Gultekin (15:28): Exactly. And that applies to both processes within the company, but you could even personalize it for your own processes, your own workflows, automating many of your own needs as well. And because this is super easy to create, I expect a lot of hyper-personalization to happen here too.

Alp Uguray (15:48): Looking back at the Google Assistant time and user behavior of interacting with AI agents — especially at the time it was very voice-driven. Do you think that as UIs change, we will adopt maybe one agent that has thousands of skills — that can do my legal, my accounting, and I just say it to the agent?

Baris Gultekin (16:21): I definitely think that the user experience is very nascent right now. It's all text-based. If you go back to how computing evolved, it started with the terminal and it was all text, which I feel like we're now back to. We love using the terminal for coding assistants and so forth. But from that, we evolved to a graphical user interface, which was more natural. Vision carries a lot; you can pack a lot more information into it.

Then other modalities are coming. So I do believe that interaction will absolutely have a lot more visual components than what we have today. Like being able to just ask a question and rather than answering one question at a time, just seeing a form — quickly tapping on something to fill out a form — that feels faster. Stuff like that will evolve.

Voice has already made a comeback. It's quite popular. I love talking to my AI agent. You can brainstorm, you can talk while you're driving. I think that is getting much more natural. Whenever things are natural, they'll get adopted more. So voice is an important modality, but the part that is going to develop more is the user experience of interacting with agents through visual interactions.

Alp Uguray (18:04): Each of them makes me think about the data layer as well. Each UI will be connected to an agent, and that agent will use skills and use the data to make a judgment or decision based on the agency we give to it. How do you ensure that each of the agents sees the same source of truth?

For example, I may ask for it to negotiate pricing for me and my agent grabs a price from somewhere, and another agent grabs a price elsewhere. There could be discrepancy. How do you see that same source of truth be accomplished at the data layer?

Baris Gultekin (18:57): We're talking about these agents getting more and more human-like in their capabilities. So we could absolutely look at how we do it ourselves and expect that to be one way agents do it. When you're negotiating a price, you have a sense of value for yourself, and you have a sense of the price of similar items in the market. Those are things that agents can absolutely have access to.

In terms of common grounding, I'm not sure if that's necessary. You have the common grounding which is the world knowledge and the web. And then increasingly, what I expect will happen is my agent will have a lot more knowledge and information about my own needs and preferences — and that will be different from your agent by nature. But both of those will also be grounded on all of the accessible information that's out there.

Alp Uguray (20:07): Each agent will defend in a way their master's outcome. I'd love to talk about the economics of intelligence — because in a way, we are now in an era similar to renting compute: we are renting intelligence. Training a frontier model costs millions of dollars and inference at scale costs a lot as well. How do you see that evolve over time, especially architecting a world where compute happens where the data already lives? Does that data-proximate compute model democratize AI, or how does the shift of power from cloud providers to data platform providers like Snowflake evolve over time?

Baris Gultekin (21:15): Clearly there is democratization of data as well as democratization of intelligence happening. Democratization of data is happening because you can now access massive amounts of data with natural language, sift through it, get insights, and use it. And democratization of intelligence is happening because these models are getting both substantially better and cheaper over time. At the same time, because capabilities are increasing, people are using them more and more, so overall cost and usage is increasing.

In terms of balance of power — I don't really think about it as a balance between data platforms and AI platforms. There's a clear stack. I also see that everyone is going up the stack as well as down the stack at the same time.

When you look at where the value has been accruing, look at the valuation of NVIDIA, for instance. Huge amounts of value are accruing at the infrastructure layer. You have hardware providers like NVIDIA. Then the LLM providers are capturing a lot of value and creating a lot of value. Then you have the data layer, then the application layer, and so forth. Increasingly, many companies are focusing more on the application layer, because ultimately that's where you create a lot of consumer value, a lot of customer value.

The pie is growing so much that it is not a fixed pie we need to protect. Everyone is running to build high-value, high-quality products to serve and capture this massive potential. I see the tide rising all boats versus some tug of war between platforms.

Alp Uguray (23:50): And in a way, everyone is collaborating with each other as well — so that improves the footprint everywhere. The UI is changing, user adoption is changing, people do want it more. Costs are going down because we're able to produce better models, closer to data. But user behavior is changing as well — the way jobs and enterprises are evolving over time. I'm not a big buyer of "jobs are going away." There will be jobs, and there will be even more. Looking from your lens — what do you see?

Baris Gultekin (24:57): There is one thing to pay attention to: I absolutely believe that there's going to be a ton of jobs and opportunities. At the same time, I don't think change has happened this rapidly before. So there is going to be a period of adjustment, and that period of adjustment may be felt in a way that is uncomfortable. I believe the opportunity is massive, however.

In terms of how this is all going to shape up, I see everything evolving. Look at education. The way my kids are learning is changing and should change. The types of tools they have access to, how they use these solutions to learn more deeply, to understand, to customize and personalize their education: it's a massive opportunity. How much that levels the playing field in education across the world is another massive opportunity.

So you have education on one end. You have clearly a lot of transformation happening in technology on another end. Healthcare is changing — now everyone has an AI solution they can quickly ask questions to about health-related issues. AI is incredibly good at giving you information that is helpful.

The change is not really focused on one industry because we're talking about intelligence. It's a very human term. It affects many industries — all industries.

Alp Uguray (26:51): From the education aspect, that's very interesting. When I was growing up, I was still connecting to dial-up internet over a regular phone line. My parents couldn't use the phone because I was on the internet. And right now, kids have ChatGPT, which can do so many things. How are your kids, for example, using AI?

Baris Gultekin (27:18): I feel like all of our sci-fi movies became real within a span of a couple of years. You have self-driving cars on the streets, you talk to your computer. My daughter, for instance, will go and practice a topic to prepare for an exam. She'll say, "Hey, I'm learning this — give me a series of flashcards." And it gives her a series of flashcards, she practices on that.

Whenever there's something she doesn't understand, she asks questions on that specific topic and immediately gets answers. This is a different world, versus going through your notes and your books to try to understand something, and having to find a human to ask when you don't understand.

Alp Uguray (28:11): In a way, they only learn what they need to learn. And how it's portrayed and interpreted is perfectly designed — which is amazing.

Baris Gultekin (28:29): There's also this hyper-personalization that will get realized in all sorts of different ways. If I want something done, now I can just go create a custom app just for myself — within tens of minutes. And now I have exactly what I need. You could apply that to everything. You can have very hyper-personalized education just for your needs. Apply it to every single situation.

Alp Uguray (28:57): Do you think that will result in more internal tools and internal applications getting built? It could be even a student building a flashcard application — or an enterprise building their own expense management software by live coding over a weekend and managing it.

Baris Gultekin (29:18): Yeah, certainly. I definitely believe in that. And these products are also easy to stitch together, as you called out earlier. Because the interfaces are mostly natural language-based, you could imagine: I've just built this, and I'll have it talk to this other system that somebody else built. You get an exponential set of capabilities because you're constantly building on top of one another.

Alp Uguray (29:54): And you have full control on the design and improvement of the process. In terms of alignment — the foundation model companies do a ton of work on post-training, trying to align the model to human values. When it comes to enterprises, I think it's a little different because every enterprise is in a competitive world and everyone is trying to win for themselves.

Aligning enterprise AI with corporate values is an interesting space. If an agent optimizes based on profit but accidentally does something — like forgets there was an agreement, or the legal side of things — who's responsible? Is that the model provider who couldn't do the post-training well? Is it the enterprise that didn't do the agent configuration well? Or is it the CEO who needed to think about the accountability side of it?

Baris Gultekin (31:25): Building systems with strong controls, guardrails, and evaluations is a very hot, rapidly developing topic for anyone building production-ready agents. You need to have a set of guardrails. You need to make sure those guardrails are followed by the agent. You need to have a series of evaluation datasets and know exactly where the agent trips up on the task you built the agent to do.

Usually the recommendation — at least for where technology is today — is to have scoped agents for which you know exactly what the skills are, what the tests are that you want the agent to complete, and build out those evaluations. Know exactly where the agent is good and where it's not good.
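
As a rough sketch of the scoped evaluation and guardrails described above, the snippet below runs an agent over a small evaluation set and applies a trivial guardrail check. The agent function, the example cases, and the blocked-phrase list are hypothetical placeholders, not a production harness.

```python
from typing import Callable, List, Tuple

BLOCKED_PHRASES = ["drop table", "delete all"]  # toy guardrail list, purely illustrative

def violates_guardrails(response: str) -> bool:
    """A deliberately simple guardrail: reject responses containing blocked phrases."""
    lowered = response.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

def evaluate(agent_fn: Callable[[str], str], cases: List[Tuple[str, str]]) -> None:
    """Run the agent over (question, expected substring) pairs and report failures."""
    failures = 0
    for question, expected in cases:
        answer = agent_fn(question)
        if violates_guardrails(answer) or expected.lower() not in answer.lower():
            failures += 1
            print(f"FAIL: {question!r} -> {answer!r}")
    print(f"{len(cases) - failures}/{len(cases)} cases passed")

# Hypothetical scoped agent that only answers renewal questions.
def toy_renewals_agent(question: str) -> str:
    return "Acme Corp has 2 renewals this quarter."

evaluate(toy_renewals_agent,
         [("How many renewals does Acme Corp have?", "2 renewals")])
```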

In terms of where responsibility lies — I think responsibility lies with the company and the person using these products as tools. Ultimately, these are very powerful tools, and the judgment is still on the company, on the developer, on the person to figure out how to create it, how to tune it, where to apply it, where to use it.

Alp Uguray (32:47): It's just like having access to a lighter: a lighter can light a candle, or it can burn an entire building down by setting the curtains on fire. It depends on who uses the tool and whether they know exactly what to do with it. Setting those guardrails is really important.

It's an exciting space — everything is getting rebuilt from scratch. We mentioned education, we mentioned healthcare that is evolving. There's also the aspect in healthcare where the data has always been so problematic — always stuck somewhere in some data warehouse and on premises. It's very valuable data and it's being used. Maybe a model can learn from it and help thousands. How do you see that space evolve to break those barriers to entry?

Baris Gultekin (34:00): Healthcare is a regulated industry for a good reason. The data is valuable. But at the same time, regulation is there for a good reason. Development here needs to very clearly respect all of those regulations and the guardrails that are put in place.

I'm not a healthcare expert, so I'll give a more personal example. Just recently, I wanted to collect all of my health information in one place so that I could do some analysis. And it turns out it's very, very difficult to do. I've been seeing various practices, and my records are captured in multiple different systems that do not talk to one another.

But AI is now at a place where I can just go tell Claude to use computer use, go to all of these interfaces, scrape my own data, put it in my own computer so I can run my own analysis. If I were to do this myself manually, I'd have to click hundreds of times to try to collect all this data. It would take a long time. Whereas when I instruct the computer to do it — this was a weekend project — I was able to do that within an hour or so. And it was quite valuable data.

Alp Uguray (35:41): It's in a way very UI-driven — an agent navigates through the UI, downloads that data, and makes it accessible. I did a weekend project on this too — I was getting a lot of spam calls, but they were voice agents that were spam calling me. So I was looking to build a voice agent that would also spam call back the other voice agent and see which one would hang up first.

Baris Gultekin (36:13): That's great.

Alp Uguray (36:40): So there's talk about reactive agents and autonomous agents. In enterprises, we haven't seen a fully autonomous agent taking ownership of the full end-to-end business workflow yet. What do you think is the limiting factor — other than control?

Baris Gultekin (37:07): I think it really does boil down to trust. Now the technology is at a place where you can automate a lot of things. And we do see with our customers a ton of automation happening. In the majority of these cases, there is a human in the loop — for good reasons. You want some validation.

Increasingly, as quality gets to a level where error rates fall below even human error rates, the automation will not necessarily require humans to be in the loop for those things. We see this with low-stakes automations right now: there is definitely automation happening where humans aren't in the loop. In high-stakes cases, there are still humans in the loop, looking at things and approving things.

But even in those situations — what used to take weeks now takes only minutes because of the efficiencies we're able to get.
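
A minimal sketch of the low-stakes versus high-stakes routing described here: the stakes scale, the confidence score, and the threshold are assumptions invented for illustration, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    stakes: float       # 0.0 = trivial, 1.0 = business-critical (illustrative scale)
    confidence: float   # the agent's own confidence in the action

APPROVAL_THRESHOLD = 0.7  # hypothetical cutoff; a real system would tune this

def route(action: ProposedAction) -> str:
    """Decide whether an action can run autonomously or needs a human approver."""
    if action.stakes < 0.3 and action.confidence > APPROVAL_THRESHOLD:
        return f"AUTO-EXECUTE: {action.description}"
    # High-stakes or low-confidence work goes to a person, with context attached.
    return f"QUEUE FOR HUMAN REVIEW (with summary): {action.description}"

print(route(ProposedAction("Reply to a routine support ticket", stakes=0.1, confidence=0.9)))
print(route(ProposedAction("Issue a $50,000 refund", stakes=0.9, confidence=0.8)))
```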

Alp Uguray (38:17): Even in customer support — when a ticket comes through, the agent can manage the entirety of the conversation and go ahead and resolve it.

Baris Gultekin (38:28): Right. And we see that happening. In cases where it doesn't resolve, it can fall back to a human — but with a summary and a set of resources for that human to make a judgment call.

Alp Uguray (38:41): When agents can make changes in the systems of record — if I have one agent able to change the database system, send an email, be my management — essentially be my AI manager managing my tech stack — it will interact with many different application systems.

From the other system side, similar things are happening. An application provider is giving their agent and enabling me to access those systems of record. So the system of record is going to stay for a long time. And the data layer is still very important.

How do you think the data layer will change with AI? I know there's Cortex now as well; it's able to speak to the Snowflake platform in a way and let me understand the data. That's a huge change in how users interact on the data side.

Baris Gultekin (40:24): I think massive amounts of new data just got unlocked. That alone has implications. In terms of use cases — you could imagine, we have customers that have tens of millions of documents. And you can go and talk to those tens of millions of documents and ask questions — which is now possible, which was absolutely not possible before.

But whenever you want to do something like, "Tell me how many of my contracts are expiring tomorrow" — if you haven't done the pre-processing before, it would not be a good architecture to figure this out in real time. Instead, what people do is run an extraction job, figure out all the different fields, extract them, put it together so you can look at data, aggregate it, get insights.
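
To illustrate the pre-process-then-aggregate pattern described above, here is a rough Python sketch. The call_llm function is a hypothetical stand-in for whatever model endpoint you use, and the contract fields are invented for the example; in practice the extraction would run as a batch job and land in a table on your data platform.

```python
from datetime import date
import json

def call_llm(prompt: str) -> str:
    """Hypothetical model call; swap in your actual inference endpoint."""
    raise NotImplementedError

def extract_contract_fields(document_text: str) -> dict:
    """Batch step: pull structured fields out of an unstructured contract once."""
    prompt = (
        "Extract the counterparty and expiration date (YYYY-MM-DD) from this "
        "contract and return JSON with keys 'counterparty' and 'expires':\n"
        + document_text
    )
    return json.loads(call_llm(prompt))

def contracts_expiring_by(extracted_rows: list, cutoff: date) -> list:
    """Aggregate step: a plain filter over the pre-extracted rows, no model needed."""
    return [
        row for row in extracted_rows
        if date.fromisoformat(row["expires"]) <= cutoff
    ]

# In practice you would run extract_contract_fields over millions of documents
# offline, store the results in a table, and then answer questions like
# "how many contracts expire tomorrow?" with a cheap query over that table
# instead of calling the model at question time.
```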

What I see happening from a tech stack evolution perspective is a couple of things. One: all these different data sources that did not talk to one another before are now becoming available for an agent to connect to and reason with. MCP is a big unlock from that perspective. I could imagine there are different data sources, I have a question, and the question requires me to look at multiple different data sources. A protocol like MCP allows the model to go take that information and give you an answer.

That's great for real-time lookups. For something that requires you to reason over massive amounts of data, that breaks down. So you still need a data platform like Snowflake to have access to that data in the first place — so you can run large-scale analytics and get insights.

What's happening is that the capabilities of these platforms are emerging. What you can do with massive amounts of data is changing and increasing. One example, super simple: let's say you have lots of customer reviews. Being able to just say, "Summarize all the customer reviews for me." Now someone can literally get an answer from massive amounts of reviews in 10 seconds: glean insights, understand what's working, what's not working, what people are complaining about. That capability just got unlocked.

You can imagine many more such capabilities emerging. That requires both these systems to talk to one another. In many cases, it also requires the core principles: we like saying at Snowflake, "There is no AI strategy without a data strategy." Ultimately, you still need to think about how you bring all of your data together so that your data assets can be joined, can be combined when you're trying to bring insights across your data.

Alp Uguray (43:40): In a way, the summaries or even the sentiments on customer reviews, that's AI essentially generating data. And that generation is already an increase in the data layer's value. And then there's the semantic layer to be able to retrieve that data from tables, which is really interesting in the context of millions of rows.

Baris Gultekin (44:14): Semantic models, semantic layers are becoming very crucial parts of the data stack. There's a lot of conversation happening these days around the ontology of a company's knowledge. You have your own knowledge base with lots of documents. You have your structured data — and that structured data has your business semantics: what your metrics are, how you define profits, how you define whatever metric you care about.

All of those are crucial parts of enabling data for AI. So as companies think about how to get their company AI-ready, it starts with getting your data in the right place. Break down all your data silos, make sure all your data is secured and governed so only the right people have access. Then bring the semantics of your business onto that data. Bring your knowledge base of all your documents accessible. Then build out your solutions.

That alone is a massive space and it's only going to mature as more capabilities get online.
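
As a loose illustration of what bringing business semantics onto the data might look like, here is a hypothetical semantic-model structure in Python. The field names and the example metric are invented for this sketch and are not Snowflake's actual semantic model format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    synonyms: List[str]       # terms people actually use when they ask questions
    sql_expression: str       # how the business defines the number

@dataclass
class SemanticModel:
    table: str
    description: str
    metrics: List[Metric] = field(default_factory=list)

# Hypothetical example: teaching an agent what "profit" means in this business.
orders_model = SemanticModel(
    table="analytics.orders",
    description="One row per customer order, net of cancellations.",
    metrics=[
        Metric(
            name="gross_profit",
            synonyms=["profit", "margin dollars"],
            sql_expression="SUM(order_total) - SUM(cost_of_goods)",
        ),
    ],
)

# An agent can resolve "what was our profit last quarter?" to the right expression
# because the synonyms and the definition travel with the data.
print(orders_model.metrics[0].sql_expression)
```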

Alp Uguray (45:43): At the end of it, enterprises operate on their data — and the more clean it is, the better the agent understands, reasons, and gives them a response. I'd like to ask about the startup world — it's booming right now. So many opportunities, a new startup coming out to disrupt so many verticals from healthcare to supply chain.

At the time when you started your blockchain startup, that was also taking a path that many would be fearful to take. If you were to go back to that time, and let's say that time is now, what would be one problem you would want to tackle first?

Baris Gultekin (46:50): With startups, the idea of "let me think of ideas, let me look at it" — it usually does not work. Ultimately, you're trying to really deeply solve a problem. And for you to really deeply solve a problem, you need to have depth on that problem. It's either something you want to solve for yourself, or you really deeply understand.

Not only that, but you also need to feel really passionately about that problem, because the startup world is a challenging ride. You're a small fish trying to grow. Every day is a countdown until the next event: either the next raise or the countdown to profitability.

Given that, you really need to feel passionately about that space, otherwise it doesn't work. If I were trying to do a startup now — I don't know. I would start with a problem that I feel really passionately about and think about how that problem is going to get solved in the best way.

The answer is not always "I need to go start this company because no one else is solving it." Usually, people who build successful products are ones that start these companies because they cannot not do it. There is this burning desire to go make it happen. And then you go make it happen.

Alp Uguray (48:36): It is. And then one problem that a founder tries to solve turns out to be hundreds more problems within that problem. When it comes to the ecosystem evolving — because now there's the concept of being a "wrapper" and everyone is being a wrapper of something — how do you see Snowflake supporting that ecosystem? There's the Postgres aspect, the data storage, the agentic aspect that startups can take and build an application layer on top of.

Baris Gultekin (49:19): Snowflake has always been a very partner-friendly company and the platform to build on top of. What we do with our partners — there's a series of products that might be helpful. Startups that build on Snowflake are building for a couple of reasons.

One: Snowflake makes it very easy for our partners to bring their solutions to Snowflake's customers. The tricky thing is access to data is usually the biggest blocker for startups — and Snowflake creates a secure environment for Snowflake's customers to provide that access to other partners. So that's a very attractive platform for others to come and build on.

Increasingly, Snowflake offers inference that's super easy to use across all model providers that a startup can use easily. The benefit is it's running within the Snowflake security boundary — so it's attractive for Snowflake's customers if you're selling to those customers.

You can build agents on the data and then make those agents available for customers on Snowflake's platform. Again, super powerful — being able to do more complex reasoning tasks with data and then making it accessible and shareable. You called out Postgres as a new capability that's now possible. And then as startups grow, Snowflake's own analytics platform at some point also becomes interesting.

Overall, the idea is that partners working within Snowflake's platform make their products much more accessible to a large audience of Snowflake customers.

Alp Uguray (51:34): So it's building on top of the platform — there are so many advantages. As the platform scales, they scale with it. I'd like to ask one last question — around coding agents versus general agents. There's the conversation with Dario Amodei, the CEO of Anthropic, saying that coding agents would be the path to lead to a very general intelligence system.

We've seen it with Claude Code, Claude, and Cowork coming up as well. On the other hand, there are vision models and world models. So much blending is happening. As for enterprises, they are adopting at the later part of the curve. Based on what you're seeing at the data layer, and your experience from startups to enterprise adoption and leading Snowflake's AI, what do you think will unlock that general intelligence as the next bet?

Baris Gultekin (53:16): I think we both sometimes overestimate what AI can do and then underestimate it in some cases. When we talk about general intelligence, human-like intelligence — I don't think we're anywhere close to it. And I don't think our current architecture's trajectory is necessarily going to get us there in the next one or two years.

At the same time, this current trajectory is incredibly valuable and incredibly powerful and will solve a ton of problems and move humanity forward. It's a bit of an academic thing to figure out: do we have human-like intelligence? Are we going to get general intelligence and when? To me, that's a less interesting question.

The more interesting question is: where are we in terms of capabilities, and are we making the most use of it? Where is the technology going? How much more opportunity gets unlocked? How does life change as these capabilities become available?

I see them more as tools, and these tools are very helpful. I'm incredibly excited about these tools becoming more helpful, cheaper, and more accessible.

Alp Uguray (54:37): At the end of the day, it helps us solve some problems and be tools and be helpful to us. That's it. Thank you very much for joining me today. It was great.

Baris Gultekin (54:51): Thanks Alp, thanks for having me.

Founder, Alp Uguray

Alp Uguray is a technologist and advisor, a five-time UiPath Most Valuable Professional (MVP) award recipient, and a globally recognized expert on intelligent automation, artificial intelligence (AI), RPA, process mining, and enterprise digital transformation.

https://themasters.ai