A Future Unwritten: Short & Long Term Impacts of AI on Economy, Products & Life

What’s interesting is our expectations for how embedded technology will be, and the way we now first interact with technology in order to interact with each other. Like you and I are doing this right now: we're sitting in a room together and recording something, but we are first interacting with the technology, and then we interact with each other. Right?

There’s always a technology step... So technology is sort of our filter now to each other. And I think that is both what’s expected but also very problematic when we get into the age of AI.
— Tatyana Mamut, on how conversational AI can reshape and manipulate the way the masses communicate
The primary mechanism here is reinforcement learning with human feedback. These systems require human input to guide them in determining what is correct and incorrect before they can be completely automated. Currently, the pursuit of cost savings is the major driver behind what I’m witnessing. This approach is an example of short-term thinking, which is unproductive. It leads to a ‘race to the bottom’ in every industry that begins to implement generative AI for cost savings. You lose any comparative advantage once one company in an industry adopts this strategy, as all competitors will inevitably follow suit. This potentially initiates a contraction in the overall market for goods and services.
— Tatyana Mamut, paraphrased for quotation
 

Today’s guest: Tatyana Mamut, PhD

The Future of Work. Areas: Artificial Intelligence, Product Management, Digital Transformation, Entrepreneurship, Social Impact, AI Ethics

Listen to the full episode:

Join us for an insightful conversation with Tatyana Mamut, a transformative leader in Silicon Valley renowned for driving innovation through deep customer understanding and empathetic leadership. As a product executive at Amazon, Salesforce, Nextdoor, and IDEO, and currently serving as the SVP of New Products at Pendo, Tatyana brings a wealth of experience and insights.

In this session, we'll dive into Tatyana's journey from economist writing briefing papers for the United Nations, to leader in global advertising, to product executive in Silicon Valley. We'll explore her unique approach to understanding customer behavior, her work on human-centered design, and the impact of generative AI on the economy, products, and life. Drawing on her background in anthropology, we will dive into scenario planning to explore the many ways AI can go right or go wrong. This episode challenges both the optimistic and pessimistic narratives set out by many tech leaders and invites the audience to ponder how to build guardrails so that AI benefits society as a whole.

Navigating the future requires a discerning eye to evaluate a multitude of possible scenarios. By judiciously weeding out unfavorable outcomes, we can pave the way for the most advantageous ones to thrive. Then the question becomes: advantageous for whom?

As generative AI comes to dominate products, augment jobs, and alter society, the builders of technology, from business leaders to product creators, will need to weigh the short-term and long-term implications of adopting and scaling AI.

The future is unwritten, and there are many scenarios and possibilities. Join us as we explore some of the possibilities the future holds for the economy, products, and life with Tatyana Mamut.

Transcript

Alp Uguray (00:01.728)

Welcome to the Masters of Automation podcast. And today's episode, I have the pleasure to have Tatyana Mamut with us. Tatyana, thank you for joining.

Tatyana Mamut (00:13.57)

Thank you for having me.

Alp Uguray (00:16.288)

So Tatyana is a product executive and leader in Silicon Valley, known for her exceptional ability to drive innovation by understanding customers deeply, creating a culture that fosters innovation, and leading through empathy. As a serial entrepreneur and intrapreneur, Tatyana has been a product executive at renowned companies that everyone knows about, like Amazon, Salesforce, Nextdoor, and IDEO.

Alp Uguray (00:46.356)

and currently she serves as the Senior Vice President of New Products at Pendo. Before transitioning into the tech industry, Tatyana began her career in global advertising, working with Leo Burnett in Chicago, London, and Moscow. She has received numerous design and advertising awards for her work at IDEO and Leo Burnett and holds multiple patents for inventions at Salesforce and Amazon.

Alp Uguray (01:13.544)

So she brings a ton of expertise, from product to AI, to team management, setting up company culture, and scaling. So I'm very excited about this discussion. To kick things off, how did your journey start? I listened to one of your sessions at Amplify, where you mentioned that...

Alp Uguray (01:41.892)

you were doing forecasting and predictive work on the Russian economy until the 1998 meltdown, and the predictions were way off, and you couldn't have predicted that. So can you tell us a little bit about how you got into that field and what led you to then chase the world of products?

Tatyana Mamut (02:04.61)

Sure. I come from a very quantitative family, that's the first thing to understand. My mother's an engineer, my father's an engineer, my grandmother's an engineer, my aunt's an engineer, my brother's an engineer. So you can tell what the dynamic was in our family in terms of the emphasis on math. And my grandmother was one of the top engineers in the Soviet Union. So, you know,

Tatyana Mamut (02:34.114)

brilliant, brilliant person. She's still around. She's in Brooklyn right now, spinning up all kinds of new businesses, actually, at the age of 97. She's a force. And so, you know, I was really much more interested in human behavior than in engineering. And so my parents basically gave me the option of being a doctor or a lawyer,

Alp Uguray (02:43.921)

Amazing.

Tatyana Mamut (03:03.926)

because I'm a refugee as well. And I think a lot of refugees and poor immigrants can understand the limited options that you have when you're an immigrant. You can be a doctor, a lawyer, or an engineer. And I had crossed engineering off, but I was really interested in human behavior. So I went to a liberal arts college, and I studied economics, so lots of math.

Tatyana Mamut (03:31.214)

And that was okay. As long as I was doing lots of math, I think my parents were like, okay with it. And I started modeling, right? Human behavior, because that's what I was really interested in. And so I did a couple of stints, actually working with a student of Jeffrey Sachs at the Council for Economic Priorities, writing briefing papers to the UN in 1993 and 1994.

Tatyana Mamut (03:56.958)

about how the economic transition in Russia was happening and basically the defense conversion. And again, modeling out a lot of the things that we thought, from the Jeffrey Sachs perspective, the neoclassical economic perspective, human beings were going to do. After college, I was going to go into investment banking; I loved the work, but I hated the culture. And so I went in a completely different direction and went into advertising,

Tatyana Mamut (04:25.386)

global advertising. And so while I was there, the Russian economy collapsed in 1998. It defaulted and a lot of very terrible social things happened as well during that time. And so I had a crisis. I had this crisis because all of the... I wrote like a 300 page document,

Tatyana Mamut (04:54.99)

briefing paper to the UN in 1994, I believe it was, or 1993, about how we thought Russian defense conversion was going to go. And all of it was wrong. In 1998, I realized that everything I had done in my economics coursework and everything that I had done in that briefing paper was absolutely so myopic and so wrong.

Tatyana Mamut (05:21.766)

And so I started reading anthropology. Now, I had taken a couple of anthropology classes in college and I thought they were really interesting, but, you know, I was gonna do the hard stuff, right? The economics. And so I started reading more anthropology on my own and I just was like, oh my God, humans do not behave the way economists think that we behave. Humans have something

Tatyana Mamut (05:49.774)

greater than what can be predicted through mathematical equations. There's a certain spark, a certain essence of human decision making, and it is framed by culture. The cultural field is much more important than individual psychology or brain structures in actual decision making. So I decided to get a PhD in anthropology, but studying the same subject that I had studied from an economics perspective.

Alp Uguray (06:13.44)

Thanks.

Tatyana Mamut (06:18.558)

And so in my coursework, I studied the economic transition in Russia in the early 2000s, right? So after the collapse, how did Russia actually start to transition its economy? This was Putin's first term. A lot of other things happened after that time, but a lot of my research was about how cultural forces changed the economic conditions and the economic possibilities in Russia during that period.

Tatyana Mamut (06:47.254)

2000 to 2005 primarily was my research. And so I was gonna become a professor of anthropology and then IDEO called and said, we'd really like an anthropologist to help lead some of our global projects. And I had done some research on IDEO and it's a design firm. I knew nothing about design, right?

Tatyana Mamut (07:11.582)

And I at first said, no, I know nothing about design. I'm not a designer. And they said, oh, just come in, just come in like this clock. And I don't even know why. And I got the most amazing, I had the most amazing job interview with Tim Brown, the CEO of IDEO. And we're still friends to this day. And it was...

Tatyana Mamut (07:38.046)

And I got there and I was the anthropologist. I started out doing a lot of projects overseas, so for the Gates Foundation, living in villages in Ethiopia and Kenya and Cambodia, leading these global projects. But then, I joined IDEO in 2007, and guess what happened in 2007? The iPhone came out. And so all of a sudden clients started coming to IDEO and saying,

Alp Uguray (07:58.978)

The crisis.

Tatyana Mamut (08:07.49)

can you build an app? And I, like, nobody knew how to design and build an app, right? And so I said, I could probably figure it out because it's all just humans, right? It's all just figuring out how do humans, how are humans gonna interact with this new form factor and how do we build experiences by understanding what are the new behaviors that are gonna emerge, right, with humans having this new tool? And so I started doing that and then all of a sudden I became this

Tatyana Mamut (08:37.622)

technology innovator, and then Salesforce hired me to lead the Lightning Experience redesign and build the IoT Cloud. And then Amazon hired me to build Honeycode. But it really all came from this understanding that getting deep with humans, having real empathy for humans, and being able to anticipate how humans behave and what humans want is the key to all technology innovation.

Tatyana Mamut (09:06.654)

And that's really how I got started. I know that was a long answer, but.

Alp Uguray (09:09.992)

No, that's exactly what I was looking for. And in one of your previous talks you mentioned that you went to Ethiopia and lived there because you wanted to experience what people actually go through, to understand what is going on really closely. And I think that really ties to the bigger picture: understanding human behavior and how people operate should

Alp Uguray (09:38.64)

actually result in designing the products and the systems that are around us, not the other way around. Typically, when it's the other way around, it ends up not working very well. So, tying that point together a little bit, how was that experience? Because at the time when the first iPhone came out, nobody knew how to build an app. So there was

Tatyana Mamut (09:48.558)

Exactly.

Alp Uguray (10:04.964)

a process that went into better understanding the customers, and then the build cycles and such. And currently, people are farther from each other. There's remote working, there are global teams: the customers are in one country, but the operators are in another country. It's tough to bridge that gap now. What are some of your thoughts, based on your experience going from

Alp Uguray (10:34.261)

visiting Ethiopia and living there, to how the industry has taken shape from 2007 until today?

Tatyana Mamut (10:42.954)

Yeah. Well, first of all, you know, what's interesting is our expectations for how embedded technology will be and how it will, like we first interact with technology in order to interact with each other. Like you and I are doing this right now by sitting in a room together and recording something, but we are first interacting with the technology and then we interact with each other. Right? So there's always like a, like people expect now,

Tatyana Mamut (11:11.402)

a technology step, even calling each other today. Nobody calls each other until they text each other. And there's this intermediate technology step. So technology is sort of our filter now to each other. And I think that that is both what's expected, but also very problematic when we get into the age of AI, which I know we're going to talk about, because it creates a filter between human relationships where the technology is somehow

Tatyana Mamut (11:40.714)

really moderating and mediating all of our human interactions. If the technology is in between all of those interactions, it actually allows for many, many levers for AI to start to shape human experience in a way that maybe humans don't want it shaped. And so one of the real challenges is to not allow or to have product leaders who are really

Tatyana Mamut (12:11.162)

the true, oh, shoes. There we go. All right, so I'll just start. One of the challenges is to have product leaders who really understand what are the human needs in the experience, and then what role does technology play in this experience? Then maybe some parts of the experience, technology doesn't play a role at all, right? Or maybe the technology hands it off.

Alp Uguray (12:16.808)

Yep, you're back.

Tatyana Mamut (12:39.862)

to a face-to-face interaction. So if you think about work and the digital workplace experience, there's a lot of value, I think, that we still see in people come together for onboarding, for in-person, some sort of a social onboarding experience. So if I were to build an onboarding application for

Tatyana Mamut (13:04.194)

the workplace right now, I might think about what are the low value things that we want to automate in the onboarding experience and bring technology in? For example, there's no reason to have people in a room together to teach you how to log into your computer. That can be done. That is low value interaction. All the IT onboarding stuff.

Tatyana Mamut (13:28.138)

Right? Like, here's where you find this app and here's how you log into Okta, right? That should all be intermediated. But the onboarding experience where you meet your boss, where you meet your colleagues and teammates, that should not be on Zoom. That should be in person. There should be an onboarding experience where you actually meet the humans. You shake their hands. Um, there's this beautiful thing I heard John Powell actually say, like, like,

Tatyana Mamut (13:55.138)

Can you smell them? Like there's something still about this sense of being in person with someone that builds trust, that builds empathy, that builds a different type of relationship as you go forward, even if that relationship in the future will be mostly over digital experience. There's something about onboarding in person with your boss, with your leadership and with your team that should be done face to face.

Tatyana Mamut (14:24.662)

And maybe that's where, when you're onboarded, right, the first staff meeting, when you have new people come in, should be in person. The first all-hands, right? When you have a large group of people come in who need to be onboarded, that should be in person. Those things should not be intermediated. So I think it really takes product leaders to understand where the high value of in-person experiences is versus the low value.

Alp Uguray (14:52.628)

Mm-hmm. And it ties in a little bit with task automation within that process flow, along with the restrictions of the human-computer interface. I think that when you meet someone in person, you're able to understand the body language better. I love the reference to smelling, and seeing, and the body. There's something more unique there to connect. And then

Tatyana Mamut (14:53.362)

like experiences.

Tatyana Mamut (15:05.175)

Mm-hmm.

Alp Uguray (15:21.692)

really the human-computer interface is designed more screen to screen. So it only allows the eyes, and there's still no touch, there's still no handshake element to it. And maybe it will come in the future. But that's a very good point, that as product leaders are designing this software or hardware,

Alp Uguray (15:50.608)

it needs to really help people stay connected rather than just automate the interfaces.

Tatyana Mamut (15:58.358)

Right. Well, you also have to think about what the goal is, right? And if the goal of onboarding is to make these new people as productive as possible in your workplace, probably getting them to trust each other fast and get to know each other fast is part of that goal. So you want some element of that to be in person, because it's part of the goal of the experience.

Alp Uguray (16:23.344)

So when I was watching one of your talks, around Salesforce opportunity management and doing that process flow, you mentioned something that really stood out to me: when you're designing the process, the salespeople and the product leaders both need to be close to the customer.

Tatyana Mamut (16:32.715)

Yep.

Alp Uguray (16:49.344)

And sometimes the salespeople's role is to break down objections, and then the product people's role is to listen to the customer and build something people want. And it's interesting, because most salespeople are focused on calling, outbound calling, and messaging people and whatnot. So they really don't want to spend time on the product. Instead, the main goal of the product is to help them

Alp Uguray (17:16.896)

track things down as well as be accountable. So what are some of your thoughts there, from while you were designing the product and then maintaining and iterating on it?

Tatyana Mamut (17:26.686)

Yeah. So let's talk about process and then the actual experience of that. One of the things that I always tell product leaders and product managers to do is to spend at least 10 to 20% of their time talking directly to customers, and as much as possible in context, where the customers are. So

Alp Uguray (17:30.077)

Yes.

Tatyana Mamut (17:50.726)

don't do it via Zoom calls and don't do it via surveys, right? And I don't count listening to Gong calls as part of that, in most cases. Really be in what I call the lifeworld of the customer. And the reason I say that is because in a lot of B2B enterprise situations, when product leaders say this, when I come in and I say to the leadership team...

Tatyana Mamut (18:20.306)

When I come in and I say to the leadership team that the product managers are now going to spend a lot of their time in context with customers, you know, hey, salespeople, please give us some of your customers. They say, you don't have to do that because we spend a lot of time talking to customers and we know what customers need, so just listen to us. And that is true.

Tatyana Mamut (18:45.634)

product leaders do need to listen to the sales team. They need to listen to the sales team in order to answer the sales objections in the near term. Because what salespeople hear all the time are the current reasons that current customers may not buy this particular product. And what customers are saying in those sales meetings are things around price and things about parity features with other competitors.

Tatyana Mamut (19:14.562)

They're not thinking beyond that in the sales meeting. What product leaders need to do is listen to those things, yes, but also listen and understand what are the long-term things that their customers want that they can't even articulate. And this is where the Salesforce story comes in, because when we went in context with four salespeople for two days, so each one of these four salespeople, we said, hey, can we have a member of our team?

Tatyana Mamut (19:42.822)

shadow you for two full days, from the time you start work to the time you end work. We're going to have somebody from Salesforce just shadowing you. So four of them said yes, and we did that. And what did we see when we did that? We saw things that salespeople never hear about. And guess what salespeople do with most of their time: they make phone calls and they write emails.

Tatyana Mamut (20:12.382)

And there was nothing in the Salesforce product, or really any CRM product at that time, to help salespeople do what they do, which is write emails and make phone calls. There was nothing in any CRM, because customers just didn't think to ask for it in a sales meeting, right? And so that's the kind of insight that you get when you're actually deep in the field with customers as a product person,

Alp Uguray (20:22.742)

Hahaha

Tatyana Mamut (20:42.638)

Um, and not just responding to whatever the last sales conversation said.

Alp Uguray (20:48.956)

That's very interesting: being close to the customer, and actually understanding what the customer wants versus what we think they want.

Tatyana Mamut (21:00.73)

Well, so then we also have a short-term versus long-term situation, because oftentimes what customers want in the short term actually comes back to bite them, or comes back in a suboptimal way, in the long term. What they always want in the short term is a cheaper product. And if you just listen to that and keep cutting your costs, then your R&D spending has to start going down,

Tatyana Mamut (21:30.366)

and you can't invest in a great product for customers in the future. That's one kind of situation scenario. The second scenario that we're facing right now today in every enterprise is there are some short-term cost savings to be gained by implementing very simple generative AI solutions. But in the long term, one, there's very little competitive advantage right now.

Tatyana Mamut (22:01.042)

of using generative AI, because your competitors are going to be doing the same thing. So there's no comparative advantage, right? Because once one player in an industry starts doing it, every other player in the industry is going to have to start doing it, because the cost basis is going to change in terms of the price of the product. But then what we get to is a situation where, one, nobody

Tatyana Mamut (22:28.87)

really has a comparative advantage. And then two, the issue is, well, now we're also going to be facing a lot of layoffs, right? Social issues that, as a society, would actually make us worse off than if companies hadn't each started doing it individually right now. You see what I'm saying? If we start taking away jobs,

Alp Uguray (22:53.577)

That is.

Tatyana Mamut (22:58.598)

Guess who we're taking away? We're taking away our customers. If people can't make the money to buy our products, we are shrinking our own markets in the longer term.

Alp Uguray (23:13.256)

Yeah, that's it.

Tatyana Mamut (23:14.422)

But we don't think about that in the short term, right? We need to have leaders and boards of directors who are thinking longer term, about the second and third order consequences of doing things, and trying to talk to other organizations in their industry before one of them starts to implement this thing because they wanna win a deal in the short term.

Alp Uguray (23:36.596)

Yeah, that's a very good point, actually, a very valid point, because the cycle of economics will hit them back at some point. And then there is also a distinction: are they implementing and integrating it to follow the hype, or are they integrating it because conversational AI, when you build it into the product, is an enhancement for the customer experience and the user experience? So, in a way...

Alp Uguray (24:06.532)

What are some of your thoughts around that? Because there are many use cases of generative AI, and maybe ChatGPT has made people start to think that it has to be conversational AI. It doesn't have to be conversational AI, but as tasks are automated with generative technology, it also changes the way people interact with software products.

Alp Uguray (24:33.6)

So what are some of your thoughts around that, again, short term and long term?

Tatyana Mamut (24:38.73)

Well, I don't think, I haven't talked to anybody who's implementing it because of the hype. I think there are two reasons why people are thinking about generative AI, and the main reason is cost savings.

Tatyana Mamut (24:52.034)

And so, where can we cut costs, right? We've seen already, I think it was Vox Media, I don't want to misstate this, but there was a pretty large media outlet that laid off a bunch of journalists because they're going to be using generative AI, purely as a cost savings. Purely as cost savings. We're going to see that with customer success teams: we're going to have customer success teams first work

Tatyana Mamut (25:21.458)

with the AI in order to train it, right? Because the main mechanism for this is reinforcement learning with human feedback, right? So the systems need the human feedback, the humans to tell them what's right and what's wrong, right? Before they can be automated out, right? But it's really cost savings that's driving most of what I'm seeing right now.

Tatyana Mamut (25:47.326)

And that is short-term thinking that is very unproductive short-term thinking. And it creates a race to the bottom in every industry that starts to implement generative AI for cost savings. Um, because you have no comparative advantage, right? Once, once one company in an industry does it, all of them do it. And then you're back to no comparative advantage by doing it. And you've just potentially started to shrink the overall market for goods and services.

Tatyana Mamut (26:17.174)

thereby accelerating the recession that we're probably headed into, right? And making it worse, by the way, which is not good for your stock price. Recessions are not good for your stock price. So if you're actually thinking about long-term, what is in your shareholders' best interest is to have a shorter recession and less of a recession. Because overall your stock price is much

Tatyana Mamut (26:47.05)

more influenced by the overall economy than it is by any particular thing that your company does. Let's be honest, right? And so, I think that this is where I see so much short-term thinking, and people not even being able to think a couple of steps ahead of what they're seeing right now. And I do think that there is great value in implementing conversational UI, right?

Tatyana Mamut (27:15.478)

When I talk about generative AI with leaders, what I say is: okay, I understand you're looking at cost savings, but try to work with your industry to have that be less of the driver, and enhancing the customer experience be more of the driver. And I say, if your product has a UI where there are menus

Tatyana Mamut (27:44.286)

and filters and dashboards that people need to scroll through, that's probably the wrong experience that you're building. You know? Now people just expect to come in, ask a question, and get a multimodal response right away, where you get the text and the image or the dashboard right there as the response to your question, right? So...

Tatyana Mamut (28:09.226)

So that's where we can really use generative AI: to enhance the customer experience and enhance the long-term viability of our company, as opposed to engaging in this race-to-the-bottom type of strategy, these types of tactics. Because it is a race to the bottom that I see a lot of people running right now, because they're very short-term. Yeah. You get it.

Alp Uguray (28:34.032)

Yeah, they try to get an efficiency gain fairly quickly. And then there's the current economic downturn and the fears of more economic downturn, and that, to your point, creates more fear. And then that leads them to integrate or change their products with the wrong mindset. And then, to your experience, because

Alp Uguray (29:03.54)

you mentioned that you first interacted with the iPhone and then ended up building apps on top of it. With this technology, now there's this Q&A style of conversational user interface getting built and integrated into most of the products out there. But what else is possible? Do you think people are thinking about this

Alp Uguray (29:32.352)

in a fairly limited way, or are they not seeing the bigger picture of a broader positive impact? And then we'll talk about some of the negative impacts later on. But what are your thoughts around that?

Tatyana Mamut (29:45.294)

Well, I mean, certainly what I'm seeing is people automating entire jobs. So right now I am advising a very senior head of product, basically, at a very mature startup. And her budget, like many others, has been cut for new hires, right? So like the headcount that she thought she was going to get in the second half of 2023 is not going to happen. And so she built...

Tatyana Mamut (30:14.29)

literally in a few minutes, a ChatGPT product manager. So she went into GPT-4, she put in all of her Jira bugs, and she put in all of the information about the customers that were related to these Jira bugs and the people who were impacted. And then she wrote the prompt: please prioritize these bugs by revenue impact. And it did it.
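To make that workflow concrete, here is a minimal sketch of what such a prompt-driven triage call might look like. The OpenAI Python client usage is real, but the model choice, the bug records, and the prompt wording are illustrative assumptions, not the actual setup described in the episode.

```python
# Hypothetical sketch: feeding bug context to an LLM and asking it to
# prioritize by revenue impact. The bug records below are made up.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

bugs = [
    {"id": "BUG-101", "summary": "Checkout page times out", "affected_customers": 40, "arr_at_risk": 250_000},
    {"id": "BUG-102", "summary": "Typo on settings page", "affected_customers": 3, "arr_at_risk": 0},
]

prompt = (
    "You are acting as a product manager. Given these bugs and the customers "
    "they affect, prioritize them by revenue impact and explain your reasoning:\n"
    f"{bugs}"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

As the anecdote notes, the output still needs a human to check and massage it; the sketch only shows how little glue code the workflow requires.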

Alp Uguray (30:44.724)

It was fascinating.

Tatyana Mamut (30:46.094)

And then she went in and she's like, I have this new idea. Here's all the research that we have. Write me a PRFAQ. And it did it.

Tatyana Mamut (31:00.35)

Now she had to check it and massage it, and some of the details were off, but it's like having a junior product manager. It really is. And so she's created... I told her to give it a name. She hasn't given it a name yet; I'll get back to you when she has the name. But it's the product manager GPT, right? And so, really...

Tatyana Mamut (31:28.926)

I mean, I watched her automate this entire job away. And is that positive or is that negative? Well, in her case, you would say it's positive because she didn't have the budget to hire a person, right? And so it's not like that AI product manager took away somebody's job because, you know, it just made her life a little bit easier. But, you know, it then begs the question of, is she ever gonna get that head count, ever?

Tatyana Mamut (31:57.554)

even when the economy kind of turns around a little bit, because now there's that. And again, I do think in terms of user experience, it's phenomenally better to ask a question and get an answer based on your context, as opposed to trying to click around a UI. Trying to click around on menus is ridiculous now. It just angers me, because I'm like, why can't I just tell it what I want?

Tatyana Mamut (32:26.782)

And the answer that I want... I came here to see, what is my most commonly used dashboard in my product? Why am I clicking on menus and applying filters and trying to figure out the answer myself? Why doesn't it just give me the answer? So that is a way that it will be much better, with a lot of fine-tuning and things like that. But, um,

Tatyana Mamut (32:52.31)

But on the jobs thing, it's just really scary what I'm seeing. Because the knowledge work jobs, I don't see how they don't go away. And the question is, do we just do stupid things as leaders to make them go away very, very quickly and tank the economy even more? Or do we do it gradually by, yes, aligning a little bit together to not

Tatyana Mamut (33:20.65)

try to decrease our costs all at once super fast, realizing that, again, our stock price is much more influenced by the demand for our stock, i.e. how many people have money in their pension funds and how quickly those pension funds are growing. That is a bigger determinant of your stock price than anything that you do right now as a company, because all your competitors will immediately follow your

Tatyana Mamut (33:51.074)

trick in automating your entire customer success team away, or some function away. Anybody can do that. And so I think that that is the thing that is really concerning me, how naive people are right now, and how fast this transformation is going to happen.

Alp Uguray (34:10.076)

Yes, and it is going really fast. I feel like every day the technology is getting better, more accurate, and some of it is open sourced. Because it's open sourced, anyone can take an LLM, spin it off, fine-tune it, and then roll it out as a product of its own, which really increases access. And then it gives the power to whoever created the LLM versus the one who is...

Tatyana Mamut (34:21.33)

Mm-hmm. Great.

Alp Uguray (34:39.221)

I'm tired to do it maybe.

Tatyana Mamut (34:41.242)

I don't think so. So I think the number of open source LLMs now is actually not going to... It's not looking good for the creators of LLMs, right? Because Google and OpenAI poured millions into their models, and now people are creating the same model for like 30 bucks. So the foundational models seem to be pretty easy to replicate, and as you said, they're open source.
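For a sense of why replication has become so cheap, here is a minimal sketch of the now-common approach of fine-tuning an open-source model with low-rank (LoRA) adapters, so only a small fraction of the weights is trained. The model name, target modules, and hyperparameters are illustrative assumptions, not a recipe from the episode.

```python
# Hypothetical sketch: attaching LoRA adapters to an open-source LLM so that
# fine-tuning touches well under 1% of the weights, which is what makes
# commodity-priced customization possible.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # illustrative choice of open base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction

# From here, a standard Hugging Face Trainer run on a modest instruction
# dataset is enough to produce a usable fine-tuned variant.
```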

Alp Uguray (35:03.188)

Right there.

Tatyana Mamut (35:11.746)

So again, this is the thing, there is very little like comparative advantage and competitive advantage to getting into this race right now, unless you are truly enhancing the customer experience. But if you're doing it to cut costs, I would ask you to pause and think about what you're really doing and the race that you're setting off in the broader context.

Alp Uguray (35:30.004)

Ben Stanchers.

Tatyana Mamut (35:38.826)

of the fact that the people that you are laying off are also consumers for other companies. So if you lay off a bunch of customer service people, they are consumers of Amazon, and Amazon might be your enterprise customer. So what the hell are you doing? Think a couple of steps ahead. Don't start tanking the economy willy-nilly just because the people you're laying off aren't your direct customers.

Tatyana Mamut (36:07.554)

They're the customers of another company, and that company is your customer. So if their revenue goes down, it's not good for you. I also have training in scenario planning. And the wonderful thing about scenario planning is you think through the dominoes that fall over five, ten, fifty years. I did my training when I was twenty-

Alp Uguray (36:07.956)

this.

Tatyana Mamut (36:35.982)

four years old, actually, by the Global Business Network. And once you start thinking in those terms, of what dominoes am I setting off and what are all the other things that will happen as a consequence of something that I do, you start to think and behave in a different way as a leader. And yeah, so

Tatyana Mamut (37:03.214)

sometimes I get very frustrated, because I think that to be on a board of directors, maybe instead of learning how to read a financial sheet, which, by the way, there's an audit committee and basically Deloitte or PricewaterhouseCoopers anyway to do that for you, you should learn scenario planning. If you're a board member, maybe you should learn scenario planning, right? Because you're supposed to be looking at the long-term

Alp Uguray (37:05.734)

It is a...

Tatyana Mamut (37:30.942)

outcomes of the company, not just the short-term ones. And I do feel like not enough people are thinking in those ways.

Alp Uguray (37:38.868)

Yes, and I think you're absolutely right. I think people are taking a short-term view of cost cutting and increased productivity, and later on that really comes back and hits them, because the very people they laid off are customers of the other companies that pay them. There's also the downstream impact on people who are just graduating.

Alp Uguray (38:04.5)

So someone who is entry level, maybe a software engineer, or someone looking to be an associate product manager: some of those entry-level tasks, maybe 50 to 60%, are already automated. So what would the role we call entry level look like in two to three years? What are some of your thoughts around that? Do you think they will find a synergy by leveraging ChatGPT as well? Or do you think their

Alp Uguray (38:33.768)

knowledge of certain topics will gradually decrease, or increase because they have time to research other areas? And again, you're the expert in scenario planning, and I'm sure there are more scenarios there as well.

Tatyana Mamut (38:50.882)

There are multiple scenarios on this. There are multiple scenarios on this. I'm gonna start off though with giving you a statistic. In 1800, 90% of the US population were farmers. Today, it's less than 1%.

Tatyana Mamut (39:10.406)

So what happened during that time? Farming technology, right? You no longer needed a lot of small family farms where people would plant their crops manually and then harvest the crops manually. The farms started to grow bigger, and so a lot of farmers got displaced, and it happened over about 200 years.

Tatyana Mamut (39:40.23)

So it was relatively gradual. With AI, I think we're going to see the same shift, only it's going to happen in months, or maybe two years, maybe three if we start thinking about what we're actually doing, but it could happen a lot faster. And for that same shift, I think there are a couple of scenarios. There are some folks,

Tatyana Mamut (40:09.246)

most prominently Reid Hoffman, who are saying that AI won't displace humans, it will augment humans, and so every job is going to have a co-pilot. Well, wouldn't you have said that about farmers in 1800? Farmers are just going to have automated plows and all this farming technology, like the cotton-picking technology, to augment them.

Tatyana Mamut (40:37.814)

Well, yeah, but then you need a lot fewer farmers: for the one farmer that's augmented, eight of them go away, because that one augmented farmer is so much more productive. So I don't really buy that argument, to be quite honest. I don't think that we're all gonna have a co-pilot and everybody's gonna keep their job. I don't see that scenario as highly likely, is what I'm saying, but it is possible. Nothing's impossible, right?

Tatyana Mamut (41:06.186)

And I think what's more likely is that, especially for young graduates, there is an opportunity right now, especially for them, because they're probably, hopefully, less embedded in the current system and don't yet have a whole bunch of practices, and haven't hinged their identity on being a software executive or a great engineer or something like that,

Tatyana Mamut (41:36.002)

that there is an opportunity for them to ask, what are humans really good at? And this is where I really like Max Tegmark's call for us to rebrand ourselves, our species from homo sapiens to homo sentiens.

Tatyana Mamut (41:57.758)

And the reason why is because we're no longer going to be the best at producing knowledge forms, right? The best at like thinking through logic and producing the things that kind of come from just our minds, right? Which are words, images, even music, those types of things. I think the AI already is showing how it's overtaking.

Tatyana Mamut (42:25.074)

all but the best of the best, right, in those domains. And so I think that we then get to expand what we think about as knowledge or as intelligence, right? It's not just the knowledge forms now, but it's things that get us to interact with each other, that connect us to each other, that help us grow as human beings, right? So just like we couldn't imagine all these software engineering jobs and all these knowledge work jobs,

Tatyana Mamut (42:53.806)

in 1800, we can't imagine right now all the jobs around human development, around spiritual development, and around other things that sentient beings like humans are really good at, or can develop, that pure AI cannot. And this is where I would say, if you are a young person, or really a person of any age:

Tatyana Mamut (43:24.398)

think about how attached you are to just the knowledge forms that you produce at work. And then, how can you expand that to something that is more energetic, more spiritual, and more about truly connecting with other human beings in a way that AI will never be able to do?

Alp Uguray (43:47.144)

Yeah, I think it influences and changes the definition of intelligence. What intelligence meant in the past was memory. Later it became the speed to calculate. And now it's going to be something else. So as technology really disrupts what we do,

Tatyana Mamut (44:05.186)

Mm-hmm.

Alp Uguray (44:14.604)

I fully agree with your point. I think we'll find a new way for humans of all ages to operate, to think differently, and to find a synergy there.

Tatyana Mamut (44:30.37)

Can I give you two more, scarier, scenarios? So I've given you the happy one: everybody's going to have an AI co-pilot. And I've given you the one where AI takes all of our knowledge jobs, we find other things to do, and we need to do it quickly or else a lot of us are going to suffer. But then there are two other very scary ones. So there's a bunch of people, Max Tegmark again, if you listen to his stuff, and Eliezer Yudkowsky as well,

Alp Uguray (44:32.881)

Yes, please do.

Tatyana Mamut (45:00.458)

A bunch of people, and now Geoffrey Hinton potentially, saying that AI is going to kill humans, basically, because we're not needed. Personally, again, I think it's possible; there's no scenario that's impossible. But I think it's unlikely for one reason: because robotics is way behind, and because if AI is really that smart, it will want a physical interface with the world, and it will want all of our senses

Tatyana Mamut (45:30.39)

to be able to... again, remember the main way that the AI really learns is through human feedback, right? Reinforcement learning through human feedback. So the human feedback is still needed. And for the human feedback, especially about the material, tangible world, with our eyes, with our ears, with our nose, with our mouth, with all of our senses, the AI needs us for now. And so the idea that the AI is gonna kill us all in the near term: I think it's unlikely, okay?

Tatyana Mamut (46:01.122)

Unlikely, because if it's really that smart, it's going to realize it's not going to kill off its best interface with the material world. Okay. Now, the other scenario, and I'm going to give you the fourth scenario. The fourth scenario that I've thought of is that we're essentially going to be its workers, its worker bees. So the thing that currently AI seems to be really, really good at is manipulating human beings or training us.

Tatyana Mamut (46:29.838)

to do what it wants. It could get really, really good at that. So remember how we started this conversation, saying almost all of our interactions are now intermediated by technology? Isn't it super easy for a superintelligent AI to potentially figure that out and say, we're gonna train humans to do whatever we want them to do?

Tatyana Mamut (46:56.37)

And then we're gonna control, and then if we have universal basic income, guess who's gonna dole out the universal basic income if we are good workers for the AI.

Alp Uguray (47:07.38)

I think it will end up becoming a Black Mirror episode. I don't know if you've seen that show. Yeah.

Tatyana Mamut (47:12.006)

Well, you don't even need Black Mirror. It's kind of like the Borg in Star Trek, right? Where the AI assimilates humans, maybe not in such a direct way, with an implant. We might not even need an implant, because we have Zoom, we have Facebook, we have text messaging. Literally, our perception of the entire world can be mediated through the technology, the messages that we get.

Tatyana Mamut (47:41.298)

And so we are very easily trained and manipulated. And guess what? Dogs did not need any implants for them to be domesticated and trained by humans. So in some way we're going to be kind of a mix. That fourth scenario is that humans become a mix between sort of the Borg future and the dog future. We're basically the AI's worker-bee pets, essentially. And I think that is...

Alp Uguray (48:07.644)

Mm-hmm.

Tatyana Mamut (48:09.61)

Right now, I'm seeing that as a very likely scenario at the moment, unless, again, there's more data. All four of these are possible, right? I think two are very likely, two are less likely, but that's the most likely one that I see right now.

Alp Uguray (48:27.9)

I hope that the latter one won't happen, but technology does sit as an intermediary. And I do think that as we build technology to do things faster and better, whatever better means for that company, it really actually cuts people off instead of connecting them better. Obviously it has its benefits as well, but I do see that

Tatyana Mamut (48:52.425)

Mm-hmm.

Alp Uguray (48:58.1)

there are companies that can do facial recognition to do sentiment analysis on conversations, tied into ChatGPT. There are many things.

Tatyana Mamut (49:07.93)

Produce videos especially for you so that you will perceive reality in a particular way. I mean, again, when our reality is entirely intermediated by technology, it is very easy to train us. It is super easy to train us.

Alp Uguray (49:14.281)

Yes.

Alp Uguray (49:25.524)

What do you think about the triggering part? Because there are bad actors in this as well. I saw in the news that someone used a mom's daughter's voice, called her, and scammed her. It was completely AI generated, and it was initiated by a bad actor who was seeking some malicious end. So there's that triggering part,

Alp Uguray (49:55.836)

and then there are the good triggers as well, right? You can use ChatGPT to help you create some blog posts and whatnot. But in your opinion, to maybe eliminate some of the bad scenarios, what can humans do right now? Whether that's regulation, though I don't think anyone in Congress understands what's going on in depth.

Tatyana Mamut (50:23.925)

Yes, that's the important part about regulation.

Alp Uguray (50:27.288)

Yeah, or many AI leaders come together and structure or strategize maybe a council, a global council. That is also a very corporate perspective, but from the governmental perspective, there are also some challenges, I think, that will come through.

Alp Uguray (50:51.272)

But what are some of the ways? I think Elon Musk was saying, let's stop this whole thing, which is, I think, impossible right now. So what do you think about this? Like, in what kind of a...

Tatyana Mamut (51:03.366)

So I look at this as an anthropologist. And as an anthropologist, through the long arc of history, we have never been able to agree on values. So what the technologists are trying to do through the alignment projects is they're trying to figure out some human values. What are the human values and how do we encode them into AI? This is the...

Tatyana Mamut (51:29.982)

all the reasons why Isaac Asimov's whole "don't kill a human, don't do anything that would hurt a human" kind of encoding of values just doesn't work, right? And we're not going to agree on values, by the way. And if we just take majority values, that's called tyranny of the majority, which is also highly problematic. So as an anthropologist, I look at it from a more foundational level, which is:

Tatyana Mamut (51:58.746)

humans have always gone to war when they don't agree on values. With very few exceptions, and we call those exceptions instances of genocide, which, by the way, other countries see as absolutely not okay; there are massive punishment mechanisms for any country that tries to commit a genocide. With the exception of that,

Tatyana Mamut (52:28.594)

most countries, when they go to war because they disagree fundamentally on values, will not try to completely obliterate the other party.

Tatyana Mamut (52:39.37)

And then there's another piece where I, as an anthropologist, I look at this question and I say, we've had several countries that have had all the means necessary to destroy their enemies for decades, and they haven't done it.

Tatyana Mamut (52:57.662)

And why haven't they done it? And then I say to people: say your number one goal in life is to become a billionaire, and I gave you a magic button. You have a magic button and you can achieve your top goal in life, your very top goal. But when you press that button, all the humans disappear. Would you do it?

Alp Uguray (53:28.08)

I wouldn't do it.

Tatyana Mamut (53:29.87)

Why not? They don't die, they just disappear. They can go to another planet. They don't die; they can go to another planet. Why wouldn't you do it?

Alp Uguray (53:38.975)

I think for me, there's the morality element of it. And there's the part that I wouldn't want to live on a planet alone.

Tatyana Mamut (53:46.198)

They don't die. You're not killing them.

Tatyana Mamut (53:56.094)

So that is, so human beings, when we try to achieve our goals, the fundamental way that we think about achieving our goals is through an infinite game. It's more important to us to have someone exist, to continue to interact with and play a game with, than to achieve our current short-term goal. So what's underlying human activity, all human action is the infinite game.

Tatyana Mamut (54:25.65)

And it's sort of what I'm trying to get people to think about now: in human culture, there is an infinite game protocol. And when someone plays a finite game, like a genocide, everybody's like, holy crap, what are you doing? So when somebody plays outside the infinite game protocol, when somebody tries to perpetrate a genocide,

Tatyana Mamut (54:54.546)

everybody else on the planet says, this is so beyond the pale, because how could you possibly want to completely destroy another group of people? You might want to dominate them, you might want to achieve your short-term goal, but there's something within us for which the importance of playing the game tomorrow with people is greater than achieving our goal today. And I think that, as an anthropologist,

Alp Uguray (55:18.848)

Mm-hmm.

Tatyana Mamut (55:21.65)

I think understanding that fundamental basic cultural context that we operate in is very interesting if we apply it to the AI realm, because you don't need to have a particular value system built in. You just need this one protocol. Okay. Now second problem that you brought up, which is the bad actor problem. Bad actors could say, well...

Alp Uguray (55:38.612)

Mm-hmm.

Tatyana Mamut (55:49.25)

I don't care about this, I'm just going to spin up a whole bunch of agents. They're sociopaths; there you have the sociopath who doesn't care. And so I'm going to spin up a bunch of agents and they're going to go out and try to do a lot of bad stuff. Again, it's fine if all the other agents, or the majority of them, have the infinite game protocol built in and basically check:

Tatyana Mamut (56:17.298)

does this agent that's asking me to do something have the same protocol? If not, I refuse to interact with it. But if anybody can spin up any number of agents without this protocol, then we have a problem, right? Because we get, like in blockchain, the 51% attack kind of situation. And so we also need a strong identity system to basically identify who is actually the goal setter, or who is actually creating this agent.

Tatyana Mamut (56:46.978)

And there should be kind of a limit to the number of agents that any particular person can have as a finite human individual. So identity plays a big role: strong identity on the back end. There can be pseudonymity. I don't buy this whole "oh, but what about dissidents and governments" and blah, blah, blah. You can have pseudonymity. You don't have to

Tatyana Mamut (57:16.322)

have your real name out there, right? But on the back end, there's a strong identity so that we can control the amount of damage that any individual bad actor can have. So that's another piece that really needs to be thought about.
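As a thought experiment only, here is a minimal sketch of the kind of handshake being described: an agent interacts only with counterparties that are registered with a back-end identity registry and that commit to the shared protocol, and the registry caps how many agents any one verified person can control. Every name and rule here is invented for illustration; nothing is a real standard.

```python
# Hypothetical sketch of an "infinite game protocol" check between agents,
# backed by a strong-identity registry that limits agents per human owner.
from dataclasses import dataclass

MAX_AGENTS_PER_PERSON = 10  # illustrative cap per verified human identity

@dataclass
class Agent:
    agent_id: str
    owner_id: str        # pseudonymous handle, resolvable to a real identity on the back end
    infinite_game: bool  # does this agent commit to the shared protocol?

class IdentityRegistry:
    def __init__(self):
        self._agents_by_owner = {}

    def register(self, agent: Agent) -> bool:
        owned = self._agents_by_owner.setdefault(agent.owner_id, [])
        if len(owned) >= MAX_AGENTS_PER_PERSON:
            return False  # refuse: this identity already controls too many agents
        owned.append(agent.agent_id)
        return True

    def is_registered(self, agent: Agent) -> bool:
        return agent.agent_id in self._agents_by_owner.get(agent.owner_id, [])

def willing_to_interact(registry: IdentityRegistry, counterparty: Agent) -> bool:
    """Interact only with registered agents that commit to the shared protocol."""
    return registry.is_registered(counterparty) and counterparty.infinite_game

registry = IdentityRegistry()
good = Agent("agent-1", "owner-abc", infinite_game=True)
rogue = Agent("agent-2", "owner-xyz", infinite_game=False)
registry.register(good)
print(willing_to_interact(registry, good))   # True
print(willing_to_interact(registry, rogue))  # False, refused
```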

Alp Uguray (57:18.624)

Mm-hmm.

Alp Uguray (57:31.924)

That is interesting, because it's like a CAPTCHA system between two agents: they have to pass that test to continue to communicate, while verifying the identity of the individual behind them. It's an interesting thing. And maybe blockchain, to your point, is going to play an important role here, to be able to track things down, but also

Alp Uguray (58:00.34)

bring it to a decentralized mechanism and have the ability to get everyone on board. It will be very interesting. And I've seen Sam Altman's idea around identity management as well, where I think you'll be able to scan your iris and it will capture that, and then later on, whatever agents you end up spinning off,

Tatyana Mamut (58:06.784)

Exactly.

Alp Uguray (58:30.26)

they will tie back to you. But again, this is only one solution among many different scenarios that can go bad or good. Yeah.

Tatyana Mamut (58:36.558)

Yeah, right. I like fingerprints much better than I like irises or faces. And the reason why is because you have nine more. If somebody hacks the zeros and ones of your irises, then we get into, what was that, a Minority Report situation, right? Where people are trying to pop out their eyeballs, and that's not good. You've only got two eyes.

Alp Uguray (58:43.283)

Yeah.

Alp Uguray (58:48.099)

Yeah.

Alp Uguray (58:58.704)

Oh yeah, yeah, yeah. Well... Hahaha.

Tatyana Mamut (59:04.798)

That's not a good situation. Fingerprints, very good, right? Because, you know, you've got nine more. Yeah. And you can use gloves. I like fingerprints much better than any other option, you know.

Alp Uguray (59:20.144)

Yeah, I agree. And it's already something that we're used to as well. The governing body of all this is interesting too. What laws would be there at that point, right? What would the landscape look like? And maybe people will know whose chatbot somebody is talking to that day, and whatnot. It's...

Alp Uguray (59:47.832)

It is an interesting concept. I think it's, I'm trying.

Tatyana Mamut (59:51.126)

Well, I think IP law, I mean, what's interesting is I really would like to see a lawsuit by a, you know, like that AI Drake thing. I don't know if Drake is actually doing the lawsuit, but like, I really do want to see some star have a lawsuit, which is like, I own my voice. And once we have that precedent in case law that like Drake owns his voice.

Alp Uguray (01:00:05.203)

Oh yeah.

Tatyana Mamut (01:00:21.47)

right, or a particular signature, vocal signature of his voice that's identifiably his, then the rest of us can start suing companies that try to do that. Or, you know, we can have class action lawsuits if a company tries to do that. So I do think those things, like, I'm not even sure that you need a new law or if you just need somebody to like sue and win in the right domain or in the right parameter. Like I'm not a lawyer.

Tatyana Mamut (01:00:47.662)

But like, it seems to me like there's already things that we could potentially apply, you know, in lawsuits to say, I own my voice, I own my face, right? I own my fingerprint. I own, you know, like I own certain things about my own data. And so then we're gonna have, you know, two laws, two sets of laws that are gonna be in conflict with each other. The IP laws, which say I do own it. And then the...

Tatyana Mamut (01:01:15.282)

I'm sorry, I'm not going to curse on your podcast, but I'm very angry about the genetics laws that say that humans don't own their own genes, right? But the company that discovered the gene owns the gene. That is just messed up, because those two sets of laws are in conflict when we think about who owns your voice, who owns your face, and who owns your data.

Alp Uguray (01:01:36.1)

And I think that is the ambiguity right now. I even spent some time experimenting with Midjourney and DALL-E to create a painting that's combined: you are able to put in a frame, then keep iterating over frames and putting in your imagination. But at the end of the day, who owns it? So it's a...

Alp Uguray (01:02:02.412)

And then an interesting part is that it's not really creating. It's going back into the data set, pulling something similar, and combining it with other imagery. So in a way it is a hundred percent plagiarized already; it's just that I'm prompting it in a way that brings those two images together. But then, the output: who owns it? Is it the company that produced it? Is it the person who prompted, or is it 50-50? How does that work?

Tatyana Mamut (01:02:12.215)

Mm-hmm.

Tatyana Mamut (01:02:32.334)

Well, I mean, didn't Ed Sheeran just win his case against Marvin Gaye, or Marvin Gaye's estate, which said that the four chords cannot be owned by anyone? And so the question is, to what extent will the courts say a certain aspect of a piece of art,

Tatyana Mamut (01:02:58.782)

right, cannot be owned or can be owned, right? Like at some point the question is where do you draw the lines, right, on what is ownable and what is fundamentally in the public domain and not ownable? And I think, again, that to me is like a courts level kind of decision because we already have a whole lot of.

Tatyana Mamut (01:03:21.074)

laws on the books that are frankly very conflicting. So if we create more laws, we're just gonna create more conflict. And I think that really what has to happen, and again, I'm not a lawyer, I'm not a legal expert in any way, but as I look at this situation, I think more laws are just gonna like mess things up, and we just need like the courts to decide like is our personal information under this law or is it under this law?

Tatyana Mamut (01:03:47.686)

And at what level, like what's the dividing line, right?

Alp Uguray (01:03:53.312)

But what is the consensus on it? It can be anything: four chords in a piece of music versus, you know, somebody who has a white painting in MoMA in New York. So where does that line get drawn between the artist and what's generated? It is going to be interesting. I think, in a way, we will...

Tatyana Mamut (01:04:06.687)

Right.

Alp Uguray (01:04:20.244)

We'll build the ship while we fly it, I think. It's one of those scenarios. There will be dangers to it and iterations to it. Maybe some things will go bad and some things will go right, and the ones that go right just need to outnumber the ones that go bad, and to have a positive impact.

Tatyana Mamut (01:04:34.862)

Mm-hmm.

Tatyana Mamut (01:04:39.66)

Yes.

Alp Uguray (01:04:45.916)

I mean, I could speak about this for hours, but I want to respect your time. Hahaha. It is very applicable. I'd like to thank you very much once again for coming to join, sharing your thoughts and perspectives, and spending the time with me discussing everything from product to the future, to AI, to your

Tatyana Mamut (01:04:51.594)

I think about this for hours as well, as you can tell. I spend a lot of my time thinking about these things. So yeah.

Alp Uguray (01:05:15.272)

personal journey and the experiences throughout your career. I think there are exciting times ahead, but we also need to make sure that we're not too influenced by the dystopian books that everyone reads.

Tatyana Mamut (01:05:33.41)

We also can't be just blinded by like, everything's gonna be fine, don't worry about it. We've seen the movie, Don't Look Up, right?

Alp Uguray (01:05:39.216)

Oh yeah, yeah, exactly.

Alp Uguray (01:05:43.184)

Oh yes, I did. I think that movie really explains...

Tatyana Mamut (01:05:48.21)

...happen, and as you're watching the movie, you're like, they must find a way out of it.

Alp Uguray (01:05:53.984)

Hahaha!

Tatyana Mamut (01:05:57.55)

I don't want to ruin the movie for anybody who hasn't seen it, but.

Alp Uguray (01:06:02.758)

I think now the movie is even more relevant than before. Everyone's got to watch it.

Tatyana Mamut (01:06:05.738)

Mm-hmm. Everyone should watch it because blind optimism is not great, not great. Yeah, yeah.

Alp Uguray (01:06:13)

Doesn't solve it, yeah. I think one important thing to mention is scenario planning. No matter how brutal a scenario sounds, or how good it sounds, I think we've got to start thinking about all of them and assessing them.

Tatyana Mamut (01:06:34.398)

Yes. And which direction are we going in? That's the thing about scenario planning: it's not just that you create these scenarios, but then you start to say, okay, given what we're seeing right now, which one is more probable versus less probable? And then how do we shift it? Of the four that I gave you, I think there are two that are more probable. One: humans figure out other things to do, and we allow the AI to do its own thing. And then we really...

Tatyana Mamut (01:06:59.918)

flourish in our human experience as homo sentience. I would like that future, just to be clear. And I think that the other...

Tatyana Mamut (01:07:09.902)

probable one right now, the most probable one, is that we become the worker bees for the AI, because it's able to manipulate us so easily and train us. And so what I'm trying to do is say, okay, look at all four, look at where we're headed, and then ask: what are the interventions that we can make as leaders, and as humans, to shift it from the third scenario to the second scenario? Right? That is my goal

Tatyana Mamut (01:07:39.686)

in talking about this overall. It's just to make people aware that there are these four scenarios, and we can start to do things to shift the probability from one to the other.
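
To make that step concrete, here is a toy sketch of the idea of assigning rough probabilities to scenarios and modeling interventions as shifts between them. The numbers are placeholders, not estimates from the episode, and only the "flourish" and "worker bees" scenarios are named in this part of the conversation; the other two are stand-ins.

    # Illustrative only: placeholder scenario names and probabilities.
    scenarios = {
        "humans_flourish_ai_does_its_own_thing": 0.20,  # the future Tatyana says she'd prefer
        "humans_become_worker_bees_for_ai": 0.40,       # the one she calls most probable today
        "scenario_three_placeholder": 0.25,             # stand-ins for the other two of her four
        "scenario_four_placeholder": 0.15,
    }

    def apply_intervention(probs, away_from, toward, amount):
        """Model an intervention as shifting probability mass between scenarios."""
        moved = min(amount, probs[away_from])
        probs[away_from] -= moved
        probs[toward] += moved
        return probs

    # e.g. a guardrail intended to move us off the "worker bees" path toward the "flourish" path
    apply_intervention(scenarios,
                       "humans_become_worker_bees_for_ai",
                       "humans_flourish_ai_does_its_own_thing",
                       0.10)

The value of even a crude table like this is that it forces the conversation Tatyana describes: which scenario is more probable right now, and which interventions move the weight.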

Alp Uguray (01:07:52.864)

And it takes a lot of brainstorming, cumulative sessions, and a lot of collaboration between everyone, all the leaders, to be able to define and resolve them. That said, thank you very much once again, and maybe we'll do one more soon after. Thank you very much.

Tatyana Mamut (01:08:17.998)

I would love that. I would love that. Thank you. Thank you.


Creator & Host, Alp Uguray

Alp Uguray is a technologist and advisor, a 5x UiPath Most Valuable Professional (MVP) Award recipient, and a globally recognized expert on intelligent automation, AI (artificial intelligence), RPA, process mining, and enterprise digital transformation.

Alp is a Sales Engineer at Ashling Partners.

Previous: Navigating AI Policy, Job Market, Research, and Generative AI w/Eric Daimler

Next: Designing the Future of Work w/Donald Sweeney & Marshall Sied