Building A Culture of Radical Candor: Innovative, Creative & Collaborative Teams w/Jason Rosoff

Don’t be afraid to focus on things that don’t scale, especially if you’re not sure that you totally understand a product or a product area. It’s really, really helpful to get down to the metal and see what’s actually happening.

I think there’s a very strong element there, because in a lot of businesses things get siloed, and within those silos of operation people don’t realize how the product or service actually serves the users and customers.
— Jason Rosoff on understanding the operations and execution of a business in a way that breaks silos
I’m specifically excited about the ability to practice difficult skills, especially social and interpersonal skills, using AI: like a conflict prosthesis that we can put on when we know we’re gonna be in an argument, and it actually makes us better.
— Jason Rosoff on the intersection between AI and communication
 

Today’s guest: Jason Rosoff

The Future of Work Area: The Future of Company Culture and Communication

Listen to the full episode:

Jason is the CEO and Co-founder of Radical Candor. Over the last four years, he’s helped all kinds of organizations, from tiny startups to the giants in the Fortune 100, realize the power of creating a more Radically Candid culture. Through this work, he’s helped hundreds of companies develop real human relationships between team members and through those relationships, achieve amazing results collaboratively.

In his past lives, Jason received undergraduate and graduate degrees in business from New York University. He also worked as a product design team lead at Fog Creek Software, the small-but-mighty New York-based software company that created well-loved products like Trello and Stack Overflow. In 2010, he moved to California and helped launch Khan Academy, the world-renowned educational tech non-profit. Over the next seven years, he helped it grow from three people to a few hundred and reach over 100 million students around the world as both chief people officer and chief product officer.

During the interview with Alp Uguray, Jason Rosoff provided several key insights about Large Language Models (LLMs), Artificial Intelligence (AI), and their convergence to help people with coaching. Firstly, Rosoff suggested that LLMs have the potential to be used as conflict resolution experts, acting as a more neutral intermediary between two parties in conflict. This could be particularly useful when emotions are high and an unbiased perspective is needed. LLMs can detect both the content and the emotion of what is being said and can provide feedback to the participants based on that analysis. Secondly, we discussed that AI has the potential to provide coaching and access to the knowledge and teachings of experts, making them more accessible to a broader audience. This means that more people can benefit from the expertise of professionals in various fields, regardless of their geographical location or financial constraints. Furthermore, AI can assist individuals in practicing complex social and interpersonal skills. By simulating conversations and interactions, AI can help people improve their communication abilities and provide feedback to aid in their development.

This can be particularly valuable for individuals who struggle with social interactions or need to enhance their conversational skills. Additionally, AI can create safe practice spaces for individuals to engage in challenging conversations, such as those involving radical candor. These safe spaces allow individuals to practice and refine their communication techniques without fearing negative consequences or judgment. In conclusion, the key insights Jason Rosoff shared during the interview with Alp Uguray highlight the potential of LLMs and AI in the coaching realm. LLMs can act as conflict resolution experts, detect content and emotion, and provide feedback. AI can provide coaching and access to expert knowledge, help individuals practice social skills, and create safe practice spaces for difficult conversations. These insights point to a transformative potential for LLMs and AI in coaching and communication that we have yet to fully explore.
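The conflict-mediation flow Rosoff describes later in the interview (an LLM reflects back the content and emotion of a message, the sender approves that reflection, and only then is a neutral restatement relayed to the other party) can be outlined as a small program. This is purely an illustrative sketch: `detect_content_and_emotion` is a hypothetical stand-in, here a trivial keyword heuristic, for what would in practice be a call to a language model, and `mediate` and `Reflection` are names invented for the sake of the example, not part of any real product.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Reflection:
    content: str  # neutral restatement of what was said
    emotion: str  # emotion detected in the message


def detect_content_and_emotion(message: str) -> Reflection:
    """Hypothetical stand-in for an LLM call.

    A real implementation would prompt a language model to paraphrase
    the message neutrally and name the emotion it detects; a trivial
    keyword heuristic keeps this sketch self-contained.
    """
    markers = ("never", "always", "!")
    emotion = "frustration" if any(m in message.lower() for m in markers) else "calm"
    return Reflection(content=message.strip().rstrip("!?.").strip(), emotion=emotion)


def mediate(message: str, sender_approves: Callable[[str], bool]) -> Optional[str]:
    """Relay a message only after the sender approves the LLM's reflection."""
    reflection = detect_content_and_emotion(message)
    draft = (
        f'Here is what I think you said: "{reflection.content}". '
        f"The emotion I detected was {reflection.emotion}. Send this?"
    )
    if sender_approves(draft):
        # The recipient sees a neutral restatement, not the raw message.
        return (
            f'What felt important to the sender: "{reflection.content}". '
            f"How they are feeling: {reflection.emotion}."
        )
    return None  # sender revises instead of sending
```

The key design point Rosoff emphasizes survives even in this toy version: the sender keeps approval over what is relayed, so the AI acts as an intermediary rather than an autonomous mouthpiece.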

Transcript

Alp Uguray (00:03.874)

Hello everyone, welcome to Masters of Automation podcast series. And today I have the honor of having Jason Rosoff. Jason, welcome to the show.

Jason (00:15.614)

Thanks so much, excited to be here.

Alp Uguray (00:17.986)

So Jason is the CEO and co-founder of Radical Candor. Over the last four years, he's helped all kinds of organizations from tiny startups to giants in the Fortune 100 realize the power of creating a more radically candid culture. Through his work, he's helped hundreds of companies develop real human relationships between team members and through those relationships achieve amazing results collaboratively.

In his past lives, Jason received undergrad and grad degrees in business from NYU. He also worked as a product design team lead at Fog Creek Software, the small but mighty New York based software company that created well loved products like Trello and Stack Overflow. In 2010, he moved to California and helped launch Khan Academy, the world renowned educational tech nonprofit.

Over the next seven years, he helped it grow from three people to a few hundred and reach over 100 million students around the world as both chief people officer and chief product officer. And now Jason lives in a small town on the coast of Connecticut with his partner, Jillian, and dog, Jack. So I'm pretty close as well. I'm up in Boston, Massachusetts.

Jason (01:43.475)

Oh yeah.

Alp Uguray (01:46.07)

I mean, you have an incredible portfolio, and you contributed to building some of the products that I use almost daily. Like, I'm an early adopter of Khan Academy, I've been using Stack Overflow, and obviously I read Radical Candor to create collaborative cultures at the companies that I was part of. But to kick things off, and obviously we will get to each of them and your experiences and learnings from them:

What led you to this field? What was Jason up to when he was young and then that resulted in this career trajectory?

Jason (02:26.834)

We're going way, way back. Well, first of all, I mean, I appreciate the question and having an opportunity to tell a little bit of my story. And one of the reasons why I appreciate the opportunity is because I think I've come to appreciate that I didn't have a path. I only saw the path in retrospect. You know what I'm saying? Like, I feel like I could tell you a story right now about

how I took a computer apart for the first time when I was like nine years old. It was my dad's work computer. It cost $5,000. He was like pretty nervous, but I also managed to put that computer back together and made it work. That I learned the basics of programming, literally using BASIC, the programming language. When I was 11, and I went on to take various sort of like engineering related courses.

Hang on just one second.

Alp Uguray (03:27.839)

Sure.

Jason (03:47.858)

So yeah, dog was barking, dog is no longer barking. Dog is now staring at me.

Alp Uguray (03:48.011)

All good. Dog was barking.

Jason (03:57.562)

It's all true. Everything I just said is true. And it sort of like put me on this trajectory to be interested in like engineering as a discipline. And one thing that I, and my dad has a background in mechanical engineering. So like I had some like familial, you know, footsteps that I could be following in and focusing on these technical things. But I wound up making the shifts from that to.

sort of hybrid of business and technology because one of the things that I felt strongly about

coming up, having my formative years, I went to college right at the end of the dot com boom. So I graduated college in 2001, undergraduate in 2001. And that was like right as the world was falling apart from like, from a tech perspective. The, and what I realized as I went into school was the writing was already on the wall. I think the sort of wheels were coming off the bus and like, it was gonna be really hard to maintain this level of

growth in this industry. And I realized that was in part because of shady business practices. And it was like hard. The world, the environment made it hard to bring ideas to life because people didn't think creatively.

they were sort of like focused almost entirely on making money and not focused so much on the human side of things, both of like the people working in the companies, as well as the customers that they were serving. And so I went to work for a very, my first job was working for this like tiny like two person consulting company in New York City. And I was like building networks and.

Jason (05:45.554)

doing like very basic IT administration type of stuff. My second job was working in operations, technology operations for this company that did something that I thought was really great. So they help people turn their photos into books, which now sounds very quaint, like it's a thing that happened in the past, but at the time it was pretty revolutionary, right? It was like the transition to digital cameras and people were just getting to this place where.

They're like, wow, we can do more than just like make a photo album. Like we can tell a story with these photos. And I was very enamored with this idea of helping technology that made a real impact in people's lives succeed. And I think that's been the story of the rest of my career is I realized like there's a lot that I can do. I'm like pretty, um, I'm like a pretty decent hack, like computer hack. Like I can.

write code and I can build stuff on my own. I'm not, I wouldn't say I'm an engineer or developer. I don't have the discipline to write, you know, highly efficient code. But I can like put things together. And I really care about the both the what that we're putting together and the how we accomplish it. And I think that has opened a lot of doors for me and is the reason why Kim wanted to start this company with me.

Alp Uguray (07:09.014)

I mean, that is, I think, you know, in the tech sector, like you said, a lot of people want to make quick bucks or obsess so much over the technical aspects of things that they miss seeing the social impact of the technology they're building. And being part of the nonprofits and the products that you've been building over time is, I think, a testament to the world that business plus tech plus social impact is where the main impact is.

And my first question is: you've obviously been working very closely with Kim Scott, and through the implementation of the book and the adoption of the strategies in the book, what would you say was the most transformative lesson or realization you had during the foundational days of Radical Candor?

Jason (08:10.418)

Well, I think there's like multiple layers here. So for the folks that are in your audience that are entrepreneurs, I would say that one of the most important lessons that I learned coming from an organization that was very focused on scale was that sometimes the best way to learn a new industry or to figure out if what you're doing makes any sense at all is to do things that don't scale. And so when we started the business, Kim and I made this commitment of like,

I have to be in the classroom. I have to teach this stuff. I have to see people as they try to put this into practice. I need to build relationships in order to understand, one, what works and doesn't work today, and two, what might be needed in the future in order to help people be successful. And that was very different work. I went from being responsible for a team of almost 200 people.

between people operations and product to responsible for a team of one and a quarter, if you count Kim, because Kim spends most of her time writing. So she wasn't operating the company with me. She was my IP and business partner, but not operating. And that first year was really like rolling up my sleeves and getting out there and sitting down with people and trying to help them put radical candor into practice.

And I feel like I've never stopped doing that. Still to this day, like this year, I've done 10 or 15 client engagements, even though now we have a team of 10 people who can go out and do these client engagements on our behalf. So.

That's like thing number one is don't be afraid to focus on things that don't scale, especially if you're not sure that you totally understand a product or a product area. It's really, really helpful to like get down to the metal and see what's actually happening. Um, and so I think that's a, that was an intuition that I had going into this. And it's a lesson that I'm taking coming out of building this business. It's like that, that is a, it's an important thing that I'll, I'll carry with me.

Alp Uguray (10:23.154)

And I think that "do things that don't scale" is also part of Y Combinator's motto as well. And I think there's a very strong element there, because in a lot of businesses things get siloed, and within those silos of operation they don't realize how the product or service actually serves the users, serves the customers.

Jason (10:47.73)

or how is it failing to serve them? And as a person with a background in software, like one of the most...

Alp Uguray (10:50.818)

I was failing.

Jason (10:59.518)

painful things that you can do is sit down with a power user of your software and watch them use it. Because what you'll realize is that they've figured out like a whole bunch of workarounds and all this other stuff to like get around your janky software in order to like make their jobs easier. And so this is not a novice user, like watch a power user use your software and be embarrassed about how difficult you've made their life by not understanding where you are serving or not serving them.

I feel the same way about radical candor and the sort of skill set and mindset that is associated with it. I also want to mention that the other reason why I was encouraged to take this things-that-don't-scale approach is that one of the fundamental tenets of radical candor is that relationships don't scale, human beings don't scale, right? Which makes each relationship that we have very important. Like, if we're going to invest in a relationship, we want to be sure that relationship is

valuable to us and to the other party, that both of us are getting something out of it. And I think it can be very tempting sometimes to focus on the macro and forget the basics of being a human, of being a partner in somebody's journey. And so I think a lot, because so much of radical candor is about things that are invisible.

that are not observable, right? They're like the attitudes that we carry as we go into a conversation, and the feelings that we have about a conversation. Like we all wish, we all like to pretend or imagine that we're master observers of human behavior and we can tell how somebody is feeling. But one of the other lessons that I've taken away from radical candor is you do not know what someone is going through. You have no idea what someone is experiencing. The only way...

you have even a hope of knowing is by asking them and them telling you.

Alp Uguray (13:01.007)

That's very well said. And through those conversations, I think... I read somewhere that at the end of the day, all businesses are formed of people, and people's relationships are what bring them forward or backward at the end of the day. And in your experience that led you to Radical Candor, what were some of the things, now looking backwards,

where you thought: oh wait, if we had done this differently, if we had these communications well established, or if we had understood one another better within operations or product teams, things could have been better? Do you remember a few experiences like that you can share?

Jason (13:45.638)

Yeah, so one of them is a lesson that I keep learning over and over, and every time I learn it I remember to reteach it to my team, which is that when you are having a very difficult to resolve disagreement with your teammates, it is highly likely that the dependent variable is time. And what I mean by this is that...

most of the worst sort of knockdown drag out fights that I've had with my team members really boiled down to me saying something should happen fast and them saying something should happen slow. Or vice versa, me saying something should be slow and them saying something should be fast. And

Alp Uguray (14:23.841)

Yeah.

Jason (14:30.922)

And what's interesting about it is that we very rarely articulate that particular part of our point of view very clearly. So for example, and fast and slow are also relative terms. So maybe I'm saying fast and I mean three to six months from now, but I say fast and my team hears three weeks from now. Or maybe the team is saying slow and I hear

you know, six months from now, but the team means they're really busy for the next two weeks, and could we get to this starting week three? And what's embarrassing about it is, like, so many of those disagreements could have been avoided by just saying, hey, I wonder if we can just talk about time scale and what you think is reasonable, like what would be a reasonable time scale to solve this problem, and then seeing if you can come to some level of agreement.

The term I've heard for this is sort of walking down the ladder of inference. So like in a conversation, we infer a lot of things about what the other person is thinking. Once we start to make assumptions on top of the inferences that we've made, we're climbing up the ladder of inference, right, making it harder and harder for us to really understand what each other are saying because our assumptions are layered on top of assumptions, on top of assumptions. But if we climb down the ladder and we unpack an assumption and we get to what the person actually means,

right, when they say slow or fast, we make it a lot easier to move forward. And so now, at least when I find myself in this situation where I'm like, why is this so difficult? Like, why can we seem not to come to agreement? I at least remember that one possible answer is that we need to step down a ladder, a rung on the ladder of inference and talk about time scale.

Alp Uguray (16:16.322)

Yeah, I think one aspect of it is the cultural aspect, the cultural background of how different cultures and countries express themselves differently, and that can lead to a lot of misunderstandings in global teams. There's also the aspect of knowing what the other person means, like maybe, as you said, the perception of big and small.

Jason (16:32.426)

Sure. Yep.

Alp Uguray (16:44.878)

To that point, especially tying to the perception of how someone feels: I think in radical candor there's the care personally axis that sheds a little bit of light into how to establish it. But how would you clear those misconceptions with someone that you maybe don't know very closely?

Like in a remote team, where someone is in Australia or somewhere else that you're working with, and you don't have the opportunity to maybe grab a coffee or see each other in person.

Jason (17:24.762)

Yeah, it's such a great question. I think.

similar to how disagreements can be fueled by cultural differences. I think what it means to care personally about somebody else is culturally specific, but even more importantly, it's individually specific. You know what I mean? Like what Alp wants and what Jason wants, like what it means to care about us personally, maybe they're similar, but I highly suspect that there are some significant and important differences in that. So part of this is sort of being aware.

of the other person. But you asked this question: what do you do in a situation where you don't know the other person particularly well, and you don't have the opportunity to be aware of what care personally means to them? And I think one of the things that we can do in those situations is we can be vulnerable. We can share out what we need and how we are feeling. And so one of the things that frequently happens as I'm coaching someone through, like, how to have a radically candid conversation,

is they start to experience frustration. And their reaction to experiencing frustration, maybe the other person is dismissing what they have to say, or they're sort of debating with them. So they start to feel frustrated. And the reaction to feeling frustrated is to get angry, to raise their voice, or to try to argue against what the other person is saying, for example, to say, well, you said this, and I don't really think that's true, and this is this other thing.

Alp Uguray (18:52.686)

Thank you.

Jason (18:56.534)

And often what has happened in those moments is that people are thinking, but they're not thinking more than maybe two steps ahead in the conversation. And that's in part because of an unavoidable, like, hardware problem that human beings have, which is like,

When we start to become emotional, there's a biofeedback loop, which makes it harder and harder to think clearly. Great evolutionary adaptation if you're constantly in physical danger, not the greatest if you're very rarely in physical danger. And what's triggering those biochemical responses is feeling scared at work. And...

my observation is one of the easiest ways out in those moments is to slow down and to actually say, hey, I need to admit, I'm feeling frustrated in this conversation. Like, how are you feeling about it? And often, and it seems so simple, but it is really vulnerable. It's a scary thing to do, to like say how you really feel to somebody else, maybe that you don't know terribly well when things aren't going the greatest. But instead of making it about

solving the problem, it's talking about how can we relate better to each other in this moment. And I think one of the only ways to do that is by revealing a little bit of ourselves and encouraging the other person to reveal a little bit of themselves. Like that is the sort of...

Alp Uguray (20:25.026)

Mm-hmm.

Jason (20:35.41)

That's what we need. Those are the elements of building a foundation of trust inside of a conversation. And it doesn't have to be a huge risk, right? Saying, hey, I'm feeling frustrated. Like, that's not, you know, "I had a really tough childhood and my mom was really mean to me." You know what I'm saying? There's depth to this dimension. You don't have to go all the way to the top of the care personally axis. But I think showing a little bit allows us to build connection in those moments. And so as opposed to,

Alp Uguray (20:48.159)

Oh yeah.

Jason (21:05.774)

The other thing that people do is instead of saying, hey, I'm feeling frustrated, they'll say, Alp, you seem really frustrated. And maybe you were feeling frustrated and you feel embarrassed that you got called out on feeling frustrated, or maybe you weren't feeling frustrated and now you feel like this person doesn't understand me at all and they don't know what I'm feeling.

Alp Uguray (21:15.276)

I'm gonna go.

Jason (21:28.038)

And so instead, I think focusing on that self-disclosure is an interesting way to build connection in the moment, even if you don't have a long-standing relationship. And so the reverse is also true. If it's going really well in that conversation, you could say, you know, I'm really enjoying this conversation. I find myself really enjoying this conversation. I appreciate how open you are to the ideas that I'm sharing. And that's so nice. I don't often get a chance to talk to you, and it's really nice to have this opportunity.

But like, we also don't do that. You know what I'm saying? Like, that's the easiest thing in the world to do. It's so low risk. Like, there's very little risk to doing it, but we forget to do it. And those are the ways that I think about building connection, even if you don't have a ton of time. Like talking, if we're feeling, if the emotion is negative, talking about our feeling, like how we're feeling. And if the emotion is positive, talking to the other person about how they're bringing positivity to the conversation.

Alp Uguray (22:25.562)

I think creating a medium where both sides are empathizing with each other and trying to read and respond is important. And then something I noticed with remote working, or just working in global team settings, is that people are very meeting-oriented. When the team meeting ends, they're like, okay, I'm on to the next one. And by the end of...

10 to 15 minutes in, they're ready to jump to the other one. And then the other portion of it is separate virtual conversations. So for example, and that I find fascinating, right? Like everyone in a separate discussion, having a separate session, like two people, three people, four people, whether you're included or excluded, right? Like all those sorts of aspects. And I always wonder, like, about creating

Jason (23:02.272)

Mm. Yeah.

Alp Uguray (23:21.89)

a space where someone can genuinely share maybe, like, level one, not level four, like you said, of intimacy. How can teams build that trust for a large team like that? And in the case where everyone is focused on high productivity, but in the meantime, like...

Jason (23:29.726)

Mm-hmm.

Alp Uguray (23:45.65)

would people have breakout rooms? Or, like, at some point there were virtual events, there were magicians coming on the Zoom calls. How would you tackle an issue like that? And then next I would like to tie it a bit to AI afterwards.

Jason (23:53.85)

Yeah.

Jason (24:03.486)

Sure, sure. So.

Relationships are the sum of the interactions that you have with a person and the feeling that gets left with you after those interactions are over.

Jason (24:21.426)

That means you need to interact. Like, there's no substitute for actually interacting with people. What you were describing, I think...

Jason (24:36.17)

often what teams need is to stop doing things as opposed to start doing a whole bunch of stuff. So for example, if it is common for, once a meeting concludes, for like a meeting after the meeting to appear, where like some subset of the people who are in that meeting go into a discussion and that's where the real like wheeling and dealing is actually being done. You know what I'm saying? Those are where the real decision's being made and the power's being brokered.

Well, stop that. Like, I know it seems trivial, stop doing it. If you're involved in a meeting like that, say this was a decision that should have been made in the full meeting. Like, why are we having this conversation now? Why didn't we have this conversation in the full meeting when, like, that's what we were there to discuss. I do think it seems really risky to do that if you're the only one doing it. But as a company, if you said, hey, look,

No blame, no shame. We've just noticed that this is a thing that's started to happen. And we wanna see what would happen if we said, look, if anybody ever feels like we're having a meeting that should have happened in public, but is happening in private, that they have the right to raise their hand and say, can we bring this conversation back to the full group? It's about stopping, like sometimes about stopping the behaviors that are deteriorating trust. The same thing with like,

allowing your colleagues to vent to you about somebody else. Right, it seems so trivial, but if you got off this call and you talked to your colleague and you're like, man, that Jason, he was really full of himself, and I got wind that you had that conversation behind my back... There's nothing worse for trust than feeling like people are talking about you behind your back. So I think when we find ourselves in those moments,

Alp Uguray (26:31.479)

Yeah.

Jason (26:34.686)

you know, it feels sort of good. We get this like dopamine hit because we're like in the, like we're in the in-group. This person's telling me their sort of secret. Like there's like a positive, even though like the social impact is negative, there's like a positive emotional feedback loop in those conversations. And we need to, we actually need to like call those things out and stop doing them. And I actually think that goes a really long way to encouraging people to recommend things that would help you build trust.

Alp Uguray (26:49.3)

Mm-hmm.

Jason (27:04.434)

That being said, that is like the meta idea, which is: how can we stop doing things that deteriorate trust and bring things back to clear lines of collaboration where people feel like their voices are heard, respected, and understood. A minor idea is you can start your meetings five minutes late.

And in the first five minutes, basically, everybody knows that the meeting starts at :35 or :05, five minutes past the hour or five minutes past the half hour. That's the rule: we always start meetings then, but you're welcome to join before. And those five minutes before, that's time for socializing. Like, every 30-minute meeting that I've ever been a part of could be a 25-minute meeting. Same thing with an hour-long meeting; it could have been a 55-minute meeting.

But if you say like, the whole idea is we're gonna take those five minutes at the top of those meetings, and that's a time for us to just check in with each other. It's like free forum socializing. The same thing that you would do if you kind of ran into somebody grabbing a glass of water or something like that in between meetings. Like you can create a habit of creating that space inside your meetings.

Alp Uguray (28:26.966)

I think that is like a great organizational habit. And it's like, if somebody is told to do something in one year versus six months, they may be able to do the same thing, but it's just, I think, the mindset around it. And to that end, especially right now, with your experience at Khan Academy and their partnership with OpenAI on student GPT and teacher GPT.

Jason (28:38.298)

Yeah.

Jason (28:53.29)

Mm-hmm.

Alp Uguray (28:54.782)

I think we're seeing right now a lot of communications being intermediated by, in a way, AI, right? Like, I think emails are getting summarized or written; both the input and the output at that point are mediated by it. And obviously this is going to keep scaling across the communications in a company.

How do you see some of these technological advancements reshaping organizational dynamics, maybe in terms of organizational behaviors or employee interactions? Based on your experience running Radical Candor, and then seeing a startup company, a nonprofit that's out there to teach people, and obviously the interactions between

a student and a teacher, or a manager and an employee, or a team member and another team member. And, kind of, what are some of your thoughts as this industry is evolving and intermediating communications?

Jason (30:02.502)

Yeah, so I think...

Jason (30:09.702)

We're definitely not at peak large language model. I think the reason we're having this conversation is specifically because of one type of AI, which is large language models, which allow for...

Jason (30:23.622)

which make it much easier to interact with a machine in natural language and to have the machine output natural language. And I think the...

Jason (30:39.262)

The reason why I started by saying this is not peak LLM is because I think they're going to continue to improve, but I've already started to see like some of the rough edges of LLMs and some of the specialization that's going to need to be done in order to make them feel really helpful. One thought that I had as you were describing, like what is the situation is like, I think it would be really interesting to have an LLM

be a conflict resolution expert. And the reason why it would be really interesting is because, instead of you hearing my voice, I could say whatever I wanted to the thing, and I would have approval over what it sent back to you. What the LLM would send back, and would show me first before sending it to you, would be something like:

Alp Uguray (31:13.611)

Resolution.

Jason (31:35.838)

here's what I think you said, here's the content of what you said, and here is the emotion that I detected in that content. Did I get those things right? And if what the AI sent to you as the person I was in conflict with was like, here are the things that Jason said that felt important to him, and here's how he's feeling.

I feel like that would be a much easier place to start than if I just said the thing angrily to you to begin with because then you might be triggered and feel angry. You know what I'm saying? I think there's some benefits, especially when our sort of hardware and software are likely to cause us to fail to communicate clearly where LLMs might really be helpful as an intermediary. So I think they're...
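The intermediary flow Jason sketches can be written down concretely. This is a hedged illustration, not anything Radical Candor has built: `summarize_for_mediation` is a hypothetical stand-in for an LLM call, and the field names are assumptions for the sake of the example.

```python
from typing import Optional

# Minimal sketch of an LLM-mediated conflict loop: the sender's raw
# message is condensed into content + detected emotion, and nothing is
# relayed to the other party until the sender approves the summary.

def summarize_for_mediation(raw_message: str) -> dict:
    """Hypothetical stand-in for an LLM call; a real version would
    prompt a model to extract the substance of the message and the
    emotion it detects in it."""
    return {
        "content": f"Here's what I think you said: {raw_message}",
        "emotion": "frustration",  # placeholder for model-detected emotion
    }

def mediate(raw_message: str, sender_approves) -> Optional[str]:
    """Show the draft to the sender first; relay only on approval."""
    summary = summarize_for_mediation(raw_message)
    draft = (
        f"{summary['content']}\n"
        f"Detected emotion: {summary['emotion']}. Did I get that right?"
    )
    if sender_approves(draft):
        return draft  # this, not the raw message, reaches the other party
    return None       # sender revises; nothing is relayed

# The raw, angry version never reaches the recipient directly.
relayed = mediate("This deadline is impossible!", sender_approves=lambda d: True)
blocked = mediate("This deadline is impossible!", sender_approves=lambda d: False)
```

The key design point from the conversation is the approval gate: the sender always sees and signs off on the softened version before the other person does.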

Alp Uguray (32:20.544)

Mm-hmm.

Jason (32:25.37)

That said, I still think human interaction with them is necessary. I think it's a very long time away, or potentially impossible, for us to fully trust an AI to share what we're thinking and feeling. Without that human interaction, I think we go into this dystopian place of, like, you know, what did it actually say to Alp, and how is Alp, anyway. So I think some element of that is always gonna be necessary. I also think, like,

essentially, AI as a dispassionate observer of human behavior, especially behavior that's communicated through language, is, I think, exciting. I think it opens up possibilities for building self-awareness and relational awareness and all of these things. So I think there's stuff that's positive there. At the end of the day, I don't think...

in my lifetime, we're going to see a world completely mediated by AI, meaning a world where there's no human-to-human communication, where people don't get on, you know, Riverside and have a chat with one another. Which means that we still need to work on improving the software, like the human software, you know what I'm saying? We still need to continue to upgrade the human software. And I think in

Alp Uguray (33:30.155)

Mm-hmm.

Jason (33:46.138)

in an ideal world, AI as this dispassionate observer is there to give us feedback, to help us pick apart like thoughts and feelings, for example, to do some of the things that it's really hard to do, especially when we're emotional and our software might be sending mixed signals to our brain. That I think is like big possibility and really exciting.

Alp Uguray (34:11.878)

The more I think about it, I think using the technology with good intent, for the better, keeps it from being a Black Mirror episode. At that point, I think it'd be interesting to have AI as an intermediary, like you said; I didn't think about that before, to have it be more like a referee, right? Like,

Jason (34:24.984)

Yeah.

Jason (34:39.506)

Yeah, exactly. Referee is the perfect description. Yep.

Alp Uguray (34:44.298)

to be able to say, like, right now, here's what Jason is thinking, here's what Alp is thinking, and here are some points where you guys may be misunderstanding each other.

Jason (34:54.362)

Yeah, yeah. Or like, here's what you agree on, right? Sometimes the hardest thing in a conflict is to just see that there are places where you agree: you and Alp seem to be saying the same thing about this. This is what great mediators do, right? Mediators have trained their brains, have improved their software enough, to allow them to be dispassionate observers inside of a conflict and not get as caught up in the emotion as the participants. So I think there's something there. I also think there's something in the sort of like,

the way that AI has the potential of raising the floor for communication. And I think this should not be undersold. There are a lot of people who have a lot to say, but have not learned how to express themselves effectively. And I think that AI, in many ways, there's a very positive version of the future where there's a...

you build a relationship with an AI that is helping to teach you how you can communicate what you want to say. And I think this happens for a lot of reasons. Like, maybe this is not your first language, for example, and you're trying to learn to communicate in English. But imagine talking to the AI in your native language, and the AI helping you understand, like, oh, you're doing something subtle linguistically, like you've chosen this word, and there's not really a great word for that

in English, you know what I'm saying? I think there's some power in, again, what I described as raising the floor for people, so that ideas can be expressed in a way that is collectively more easily understood, or more clearly expressed. And I think what that means, clearly understood or clearly expressed, is going to be culturally specific, both like,

macro culture as well as, like, team culture: what are the sort of hallmarks of good communication within this relationship, or on this team, or in this company, or, you know, writ large in society? I think those things are all TBD. But I think a lot of the AI stuff is moving in the right direction in terms of allowing people to fine-tune all of these models, to allow them to sort of point the AI in the right direction.

Jason (37:21.806)

I will share just one quick anecdote, which is I had a friend over and he was saying that he went to a barbecue with his family and there was a bunch of people in their 70s and 80s and they started to talk about AI and so his ears sort of pricked up and he was like, oh, I wonder what they're going to say. And what they started to talk about was this fear that they had that you wouldn't, because AI was getting so good, both like...

visual, sort of image-related AI, and sound-related AI were getting so good, there's this fear that you can't trust what you see. Like we can't trust the information. Yeah, yeah, exactly. And I think that notion felt sort of quaint to me, because I was like, I don't trust anything already. That's already in the past for me. But then I realized that there are a lot of people

Alp Uguray (37:56.537)

Yeah, like deep fakes, like having a lot of deep fakes.

Jason (38:15.33)

you know, who don't. I mean, part of this was training: especially in my undergraduate degree, there were a lot of, essentially, media skepticism courses, like how to interpret information from multiple sources, how to look at things that disagree, and when to be cautious about the validity of what a particular source is saying.

I feel like I got a reasonably significant amount of training in taking a skeptical view, but there are a lot of people who either didn't, or don't remember those things. And I want to acknowledge that I think this is really scary for a lot of people.

Alp Uguray (39:00.382)

Yeah, and that's the ability to distill information and be cautious about what we read and what we hear, doing cross-verification and double-checking. And then I think that turns a curious mind into more of a skeptical mind, but in a good way, right? Because of verification across the board. And then you had a very good point about...

the communication aspect of it as well. I think when two parties are communicating with each other, obviously still reacting and better understanding those assumptions is going to be really key to that. And I was thinking about the ruinous empathy and obnoxious aggression as the quadrants.

I was thinking, if AI intermediates a conversation, it would be interesting if it flagged that, for example, you are showing ruinous empathy right now, or gave flags like that. I'm taking it to the next step a little bit from the AI discussion, even though I could talk about this for hours, like everyone. And so for the...

There's a great concept of rock stars and superstars in the book. From your experience, and also just looking back at the previous companies that you worked at, or the companies you're analyzing, how do companies react to that categorization? Do they find it challenging?

Like, between the two, am I a rock star or a superstar? And how does that tie into the people aspect of it?

Jason (40:59.79)

Yeah, yeah. So I think this was in the second edition of the book: Kim actually changed the language there a little bit. She changed it to rock star mode and superstar mode, because I think it is easy to turn those terms into a judgment, potentially, although they're both meant to be quite positive. But

For listeners who may not be familiar with the terms, a quick definition. People in rock star mode are the people who provide stability to your team. These are the people who understand the code inside and out, are able to really quickly fix bugs, and enjoy the work of making sure that the code base

doesn't need to be rewritten every six months; they're building on fundamentally good principles. They enjoy teaching others how to do that. That's a person in rock star mode, in the software engineering context. A person in superstar mode, in the software engineering context, is the person who, you know, goes off on the weekends and builds out a prototype of a thing that could potentially change the business dramatically,

and gets really excited about the possibilities of entirely new things. They may also be incredibly ambitious, meaning that they want to be promoted, they want new opportunities. The same is not good enough for them, they don't care about refining and refining, what they care about is digging a new mine someplace else. They're not going to like...

chip down to the diamond until they get the perfect beautiful thing. They're there. They just want to like blow a hole in the side of that mountain and go looking for something new. And it's clear when you describe it that way that you need both of those people, right? You need people with both of those attitudes. And one thing that Kim tried to make clear in the book is like, at different points in our career, we need different things. So someone who is in superstar mode today might be in rock star mode a year from now.

Jason (43:11.046)

And maybe they're in rock star mode because their attitude about the kinds of projects they like changes or maybe because they get really inspired by like some technical part of the product and they wanna spend time there. Or maybe they shift from superstar mode into rock star mode because they had a kid and you know, like they're not as ambitious for the next big opportunity. And what they want right now is some stability in their life. And so I think when you put it that way, I think people appreciate that

One, I can change modes throughout the course of my career, so I'm not stuck, I'm not in a box. And two, that it's actually really important. I think people accept that it's important to have both of these forces on your team and to not be too heavily weighted toward one or the other.

Alp Uguray (43:55.806)

No, that makes absolute sense. And I think I read the second edition; I think that's what I was referring to. That absolutely makes sense. So, Jason, I just want to ask about this question to make it clear, because it does make sense that someone's performance changes over time, and their expectations and life situations change, and then that enables them to act differently and perform differently. From...

So, tying to our previous point, because that sparked some thoughts in my mind: there's obviously mimicking someone on video, or mimicking the voice of someone, and I think that is probably one of the biggest dangers of AI right now, though obviously it could be a good thing as well. I mean, everyone's thinking about the fake part of it, which is really scary.

There's also the part where somebody can give the same presentation in 10 different rooms with a different voice, like from a transcript, for example. So to that end, what are some of the things that... please go ahead.

Jason (45:02.278)

Yep.

Jason (45:07.706)

Or just to build on that, or someone who has a degenerative condition where they're losing their voice could have their voice preserved. I think it is scary, but I also think it opens up, in some ways all the things we've been talking about are sort of like various forms of prostheses. If you could imagine.

So like a conflict prosthesis that we like put on when we know we're gonna be in an argument and it actually makes us better. You know what I'm saying? It stabilizes us and it makes us better at having those conversations. The same thing with the voice idea. Sorry, I just wanted to throw that out.

Alp Uguray (45:44.386)

No, that was great, because I think Bruce Willis was also diagnosed with dementia, and now he's given away his IP to the studios, I believe, so that they can still do Die Hard movies. It's awesome; that means a whole new generation will be able to watch those movies in different forms. And then, looking at some of this stuff, from student GPT to teacher GPT and

Jason (45:58.651)

Yeah, yeah.

Alp Uguray (46:14.83)

these developments that are happening: what are some of the trends that you see that can help Radical Candor, that can influence Radical Candor's coaching methods or teaching methods?

Jason (46:30.462)

So one of the things I'm most excited about is creating a safe practice space that is reasonably realistic. Between good prompt engineering, the Whisper API, and GPT-4, you can have a pretty realistic conversation with a bot. And one of the hardest things about Radical Candor is, like:

how am I gonna have this conversation? Like, what do I say? It's the most common question that we get. And we're like, well, I'll do you one better: why don't you try saying it? And the idea is, what are you worried might happen? Right? Well, I'm worried that Alp is gonna get... you don't seem like the type to get really angry, but let's say I'm worried that Alp is gonna get really angry. He's gonna yell at me. I'm gonna try to give him this feedback and he's gonna get pissed off at me. So I can tell the bot: get pissed off.

Like, I wanna try to give you this feedback, and the reaction that I want from you is defensive and angry. Or maybe I'm worried that Alp doesn't really respect my opinion, and he's gonna try to sort of brush off this feedback. He's gonna say it's not a big deal and not to worry about it. Well, you can practice that also. I feel like we're not that far away from the ability...

Jason (48:00.506)

to create safe spaces to practice some of these really difficult things. And that is super exciting to me. In our workshops, the best facsimile that we have of that is to encourage people to role-play. But a lot of people are really uncomfortable with role-play. And now I'm like, well, how cool is this? You could actually have

all of the people talking to the AI; they could collaboratively try to move the conversation forward and see what it's like. You know what I'm saying? Experiment with different things and see what reaction they get. And while it's not a perfect facsimile of the real conversation you're going to have, it starts to help you build a little bit of endurance for dealing with resistance, or being brushed off, or disrespected, or whatever it is. I can see it, you know what I'm saying? It's sort of like:

The only way to learn to hit a baseball is to swing the bat a whole bunch of times. And the problem with conversations is that especially where we might disagree or like I might be offering a criticism of somebody else, we don't actually get that many chances at it. Yeah.
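The practice bot Jason describes can be approximated with straightforward prompt engineering. Below is a minimal sketch, assuming a chat-style API like GPT-4's; the `build_practice_session` helper and the persona wording are illustrative, and the actual model call is intentionally left out.

```python
# Sketch of prompt engineering for a feedback role-play bot: the system
# message pins the bot to a persona and to the reaction the learner is
# worried about, so they can rehearse against it safely.

def build_practice_session(persona: str, reaction: str, scenario: str) -> list:
    """Return a chat-style message list (role/content dicts) in the shape
    most LLM chat APIs accept; sending it to a model is out of scope here."""
    system_prompt = (
        f"You are role-playing as {persona} in a workplace conversation. "
        f"The user will practice giving you difficult feedback. "
        f"React in a {reaction} way so they can rehearse handling it. "
        f"Scenario: {scenario}"
    )
    return [{"role": "system", "content": system_prompt}]

def add_user_turn(messages: list, text: str) -> list:
    """Append the learner's attempt at the feedback conversation."""
    return messages + [{"role": "user", "content": text}]

# Rehearsing against the feared reaction ("defensive and angry"):
messages = build_practice_session(
    persona="a direct report",
    reaction="defensive and angry",
    scenario="their last two deliverables missed the agreed deadline",
)
messages = add_user_turn(messages, "I want to talk about the missed deadlines.")
```

Swapping the `reaction` string, say to "dismissive, brushing the feedback off as no big deal", lets the same setup cover the other worry Jason mentions, without changing anything else.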

Alp Uguray (48:55.55)

Yeah.

Alp Uguray (49:13.122)

Yes. And it's a skill that I think people develop over time by reading, learning, and just the experience of interacting with one another. I really like the idea of having, let's say, a virtual coach: I can practice my pitch or practice certain things, and it can also provide some feedback on body posture, or on the

tone of certain words, so that I don't come across as negative, or offend someone just through my accent or something like that. There are all those aspects that can feed into it; that opens doors for...

Jason (49:55.526)

It could identify bias in our language choices, like if we're using language that is exclusionary. There are so many possibilities. And again, the virtual coach idea, I think, is really interesting. But this idea of highly specialized coaches, like "I can help you with a very specific set of conversations," is really exciting, because

this is another raise-the-floor situation, because most people will never have access to Kim as a coach, you know what I'm saying? They have access to our words, but what if we could get the bot to be 80% as good as Kim? Well, that's a thousand times better than what the vast majority of people have access to now, which is nothing. They have no coach, no support whatsoever. So anyway, I think I'm...

specifically excited about the ability to practice difficult, especially social and intersocial skills using AI.

Alp Uguray (51:02.25)

And that sounds amazing. And for any entrepreneurs out there listening, maybe they can start building some of these coaching features. But to your point, it enables access to the knowledge and teachings of someone, and helps it scale to so many people. And that's what I'm excited about as well. And...

Jason (51:21.5)

Mm-hmm.

Alp Uguray (51:30.59)

Even with our conversation today, I'm learning a lot from you and our discussion. I'm just imagining that if someone had the opportunity to tap into your knowledge, or Kim's knowledge, or any other experts out there, that would be very valuable for helping them move forward positively. I know that we're almost out of time, and I want to be respectful of your time.

Jason (51:35.315)

Same.

Alp Uguray (51:59.662)

First, I would like to thank you for the contribution, your thoughts and the knowledge that you shared today. I found it really valuable and I'm sure everyone in the audience does as well. Thank you very much for your time and conversation today.

Jason (52:19.442)

Yeah, I really appreciate the chance to chat with you about this. I really appreciated the way you framed all of these questions; I feel like it made me think about things in a different way. And I hope that I was able to provide some answers that are enlightening to your audience. I would encourage folks to share feedback with you about the episode, and ask you to pass it on to me if there's anything I could have said or done

to make things clearer. I'm always interested in that.

Alp Uguray (52:51.574)

Yes, of course. Of course, we will share a feedback form and then I'll be happy to share all those with you as well right after.

Jason (52:58.398)

That'd be fantastic.

Alp Uguray (53:00.482)

Thank you very much and thank you everyone for listening.

Jason (53:04.319)

Take care.


Creator & Host, Alp Uguray

Alp Uguray is a technologist and advisor, a 5x recipient of the UiPath Most Valuable Professional (MVP) award, and a globally recognized expert on intelligent automation, AI (artificial intelligence), RPA, process mining, and enterprise digital transformation.

Alp is a Sales Engineer at Ashling Partners.

Next

Navigating AI Policy, Job Market, Research, and Generative AI w/Eric Daimler