Stephen Wolfram: Computation, AGI, Language & the Future of Reasoning.

Today’s guest: Stephen Wolfram

Stephen Wolfram is a computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, the company behind Wolfram|Alpha, the Wolfram Language, and the Wolfram Physics and Metamathematics projects.

The following is a conversation between Alp Uguray and Stephen Wolfram.

Summary

In this conversation, Alp Uguray hosts Stephen Wolfram to discuss the intersection of computation, AI, and human intelligence. They explore the differences between large language models and formal computation, the concept of the Ruliad, and the limitations of AI in understanding complex mathematical proofs. The discussion also delves into the future of AI, the nature of communication and knowledge transfer among AI systems, and the implications of computational processes in the natural world. Later in the conversation, Wolfram turns to the nature of sensory data in AI, the implications of quantum mechanics for human cognition, and the future of education, with a focus on computational thinking. He emphasizes the importance of foundational understanding in entrepreneurship and the need for adaptability in business, advocating a shift from specialized skills toward a more generalized approach to learning and thinking.

Takeaways

  • Computation allows for a level of understanding beyond unaided human capabilities.

  • Large language models (LLMs) mimic human-like reasoning but lack formal structure.

  • The Ruliad encompasses all possible computations, but LLMs struggle to navigate it.

  • Human mathematics is shaped by our sensory experiences and historical context.

  • AI's ability to reason is fundamentally different from human reasoning.

  • The efficiency of computation contrasts with the inefficiency of pure reasoning.

  • AI could develop a richer language for communication beyond human languages.

  • Understanding the computations in nature is a challenge for both humans and AI.

  • The evolution of AI communication may lead to new forms of knowledge transfer.

  • The future of AI may involve intelligences that are alien to human understanding.

  • The sensory data we receive shapes our understanding of the world.

  • AI's perception differs significantly from human sensory experiences.

  • Quantum mechanics introduces the concept of multiple paths of history.

  • Human cognition seeks definite answers, contrasting with quantum uncertainty.

  • Education should focus on computational thinking rather than just programming skills.

  • The future of programming may resemble the decline of hand trades.

  • Generalized knowledge will be more valuable than specialized skills.

  • Conviction in entrepreneurship stems from a solid foundational understanding.

  • Successful entrepreneurs often pivot their plans based on real-time feedback.

  • Computational thinking enhances our ability to understand and innovate.

Computational thinking is a tool for general thinking—like philosophy, but with computers as your amplifier. The craft of low‑level programming is heading the way of assembly language—useful, but no longer the main act.
— Stephen Wolfram
We humans have sampled only a pinpoint of the computational universe—and decided that is reality. LLMs widen the lens, but formal computation lets us build the telescope.
— Stephen Wolfram (03:51)
Out of the infinite mathematical landscape, humans have published maybe four million theorems. That tiny path is our humanized slice of the Ruliad.
— Stephen Wolfram (12:58)

Chapters

00:00 - Introduction to Computation and AI

02:59 - The Role of Large Language Models

06:13 - Exploring the Ruliad and Human Mathematics

09:05 - The Nature of Computation vs. Human Intelligence

12:01 - The Future of AI and AGI

15:04 - Communication and Knowledge Transfer in AI

17:47 - The Limits of Human Understanding and AI

21:06 - The Evolution of AI Communication

24:02 - Conclusion: The Future of AI and Human Interaction

34:44 - The Nature of Sensory Data and AI

39:02 - Quantum Mechanics and Human Cognition

46:45 - The Future of Education and Computational Thinking

52:06 - Building Conviction in Entrepreneurship

01:05:14 - Adapting and Pivoting in Business


Transcript

Alp Uguray (00:02.292)

Hi everyone. I have the pleasure today of hosting Stephen Wolfram. Stephen, nice to meet you, and thank you very much for taking the time to be with me today. So I will go right in and then ask you: I have a lot of questions, especially around automation as well as computation,

Stephen Wolfram (00:12.11)

Nice to meet you too.

Alp Uguray (00:27.508)

and how today's large language models are changing, how work is done, but also how language is interpreted and what we think we understand about them. To go deeper from your perspective, what are the key differences, both in some philosophically and technically, between some of the capabilities of the two systems of large language models and

the giant computational infrastructure that you build that is more Wolfram.

Stephen Wolfram (01:04.628)

So, I mean, you know, for the last 40 years or so, I've been trying to formalize the world in terms of computation, kind of building a computational language for describing things in the world and computing things about the world. That's now our Wolfram Language technology stack. And kind of the idea there is to be able to make sort of a precise formalization so that you can kind of build

an arbitrarily tall tower of computation. That's kind of the direction that I've been going: to formalize things so you can kind of build up a big tower of computation. What large language models and current AI are really concentrating on is doing things that are a bit more like what humans, unaided humans, tend to do, which is something

Stephen Wolfram (02:02.36)

broad but quite shallow. I mean, we can talk about things, we can use natural language and so on, but if you ask us to run code in our minds, we really can't do that. It's a different type of thing. Now, if you go back to, I don't know, the 1600s or something, before the formalization of science through mathematics and so on, then if we'd gone straight from that to large language models,

I don't know, we might, I'm not sure anybody would have discovered, you know, calculus and the sort of mathematical approach to science and, in the end, computation. But we do have that. And that allows us to do lots of kinds of things that the unaided human can't do. You know, I guess I've spent a large part of my life kind of living the AI dream of being able to go from sort of a thing that I imagine to a thing that I can actually work out computationally,

pretty efficiently. that's, you know, that, that, that computation piece is sort of a superpower that lets one go beyond the things which one can do with one's kind of unaided mind. And I think the, the LLMs are doing kind of the kinds of things that we easily do with our unaided minds. They can do it faster. They can do it on broader amounts of data and so on, but it's sort of the same, the same general idea. It's a different idea from this idea of sort of systematic formalized computation.

Alp Uguray (03:28.672)

And in your metamathematics work, you describe the Ruliad, the limit of all possible computations that could happen.

And if the Ruliad is a vast mathematical landscape, where do LLMs fit in that space?

Stephen Wolfram (03:51.618)

Well, yeah, I mean, you know, I don't think LLMs have a great place there, in the sense that the Ruliad is this kind of thing that represents all of these kind of computational processes, all these things that one can do with computation. I think the LLMs are really kind of things that recognize kind of human-like patterns.

most of what's out there in the Rulliad, most of what's out there in meta-mathematical space, in the space of all possible mathematical derivations, most of what's out there is incredibly alien, incredibly non-human. It's really very unsuitable for LLMs. The parts that LLMs can do are to get kind of the human texture, perhaps human choices, but certainly human texture of what's done, for example, in mathematics. So it's very easy to get an LLM these days.

to produce something that sort of walks and talks like a math paper, even though it's actually all nonsense. But the texture of it, it looks like a math paper. It doesn't have that kind of formal structure that you need to kind of get math right, but it does have the kind of human-level gloss that makes it kind of look like mathematics. So when it comes to

the Ruliad, it's an interesting question to what extent the kinds of things that we humans pick out as being interesting, out of the infinite space of all mathematical possibilities, whether the things we pick out as interesting are things that an LLM could also say, yes, I recognize that that's kind of something that's likely to be interesting to humans. I have to say my experiments along those lines have not been terribly successful. That is,

the Ruliad and mathematical space are sufficiently alien that I don't think the LLM gets much of a handhold in the whole thing. It just sort of flails around. You know, occasionally it can see something that's very much like what it's used to seeing from having kind of ground up, you know, human-written papers about mathematics and so on; it'll see some little expression that's kind of reminiscent of this and it'll kind of latch on to that. I mean,

Stephen Wolfram (06:13.942)

I tried an experiment recently. I was interested in a proof that I did with automated theorem proving 25 years ago, actually. It's kind of the only known example of a theorem that's been proved by automated theorem proving for the first time that wasn't a theorem that was already at least very much believed to be true as a matter of sort of traditional mathematics. It's a theorem about what the simplest possible axiom for Boolean algebra, for logic, is. And the proof

that I found with automated theorem proving is a long, complicated proof. It's got hundreds of steps; if you unroll it, it's like 83,000 steps long. It's completely human-incomprehensible. I mean, I've tried to understand it a bit. I'm not the best person to try to do that, but a bunch of other people have tried over the last 25 years to understand this proof. You can't get anywhere. By step three of what the automated theorem proving system did, you've got incomprehensible kinds of lemmas. So

the question that I asked was, could an LLM somehow help me understand this proof? Could it go into this sort of path in metamathematical space that had been carved out by the automated theorem prover? And could it sort of humanize that path? Could it kind of make that path something that had some kind of human narrative behind it? Well, that was a complete flop. I mean, it found some trivial things. I thought maybe it had found something interesting, because it told me

that, you know, this particular lemma was a bit like some things called Moufang loops. So I go look up Moufang loops, and, yeah, it's not really right. If you kind of looked at it from a distance, you know, with slightly fuzzy vision, it looked roughly right, but when you asked whether it was precisely right, no, it wasn't. Now, I think this is, you know...
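For readers who want to poke at this kind of automated derivation themselves, Wolfram Language exposes it through FindEquationalProof. The sketch below is only illustrative, assuming the standard FindEquationalProof calling convention: it states the single NAND-style axiom and asks for a proof of one small consequence (commutativity), not the enormous result discussed here, and the proof-object property names may vary across versions.

```wolfram
(* The single NAND-style axiom for Boolean algebra that the automated theorem
   prover worked from, with CenterDot as an uninterpreted binary operator. *)
axiom = ForAll[{p, q, r},
   CenterDot[CenterDot[CenterDot[p, q], r],
     CenterDot[p, CenterDot[CenterDot[p, r], p]]] == r];

(* Ask for a machine-generated proof of a small consequence: commutativity. *)
proof = FindEquationalProof[CenterDot[a, b] == CenterDot[b, a], axiom];

(* Inspect the proof object; the individual steps are typically human-opaque. *)
proof["ProofGraph"]
```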

Alp Uguray (07:53.716)

Okay.

Stephen Wolfram (08:08.608)

What can the LLMs capture from the three or four million papers that have come out in the history of human mathematics? They certainly can capture the overall sort of texture of those papers. Can they capture the kind of deductive sequences of those papers? The thing is, doing computation is incredibly efficient, and doing things by sort of pure reasoning is very inefficient. I mean, it's like when Wolfram Alpha came out in 2009.

It was kind of at a low point in the history of AI, so people just said, well, AI is never going to work. There were nevertheless a few people who were trying to sort of build common sense reasoning systems and so on. And when Wolfram Alpha came out, they pretty much said, OK, we give up. This is not, you know... because Wolfram Alpha could just grind through and do the things that they had tried to make common sense reasoning systems do. We could just

sort of blast through the answer. I mean, the way I put it to them is that what they had done was kind of the natural philosophy of the world. They had kind of tried to get to the point where you were working out what happens in the world by pure reason, so to speak, to figure out what happens in some physics problem. You would reason that this moves, that moves this, and that goes down and this thing goes up and so on. We were kind of cheating because we just turned that question into a math problem.

just grind through, kind of blast through the math using algorithms and using computation, and then get to the answer. So it's a very different path. The LLMs get to do the reasoning type path. Now, you know, what patterns of reasoning are there that may be relevant, for example, to mathematics? One thing that people were pretty surprised about when LLMs started to, you know, after the ChatGPT moment, when LLMs really started to work,

One of the things that people found very surprising is that LLMs could kind of do logic. I think the reason is because LLMs kind of saw patterns of explanation and they did the same kind of thing that Aristotle had done, which was to notice that there were certain patterns of explanation that were repeated and those became kind of the syllogisms of logic that could then be applied as sort of templates to arguments in lots of other places. So that's sort of how the LLM kind of learned logic.

Stephen Wolfram (10:33.078)

Now, what are the higher-level kind of logic-like flows that we can think of in mathematics? We certainly know low-level ones, which are very efficiently implemented computationally. Are there kind of bigger arcs of kind of mathematical, of sort of humanized mathematics, that maybe we haven't recognized so well? I mean, that's certainly what seems to have happened with language, in the sense that, you know, why did ChatGPT work as well as it did? I think the number one reason is because it picked up certain kinds of regularities

associated with language that we didn't readily know. I mean, we knew that there was grammar that said that, you know, in English most sentences go noun, verb, noun, but we didn't have sort of more semantic tagging rules for that. We didn't know which kind of noun goes with which kind of verb goes with which kind of noun, and ChatGPT, kind of the LLMs, had sort of found a pattern for those things, which they could then use in actually producing sort of meaningful text.

I think it is perfectly possible that with sort of vast amounts of kind of mathematical activity and reasoning, maybe there are some sort of arcs of mathematical structure that we haven't recognized yet. I mean, you know, we are the number one source of kind of mathematical training data, and, you know, we've generated lots and lots of it. It's quite nontrivial: you both have to be able to solve the math,

and you have to be able to figure out what math is, kind of what the distribution of math that's interesting to look at is. But that's something we've done. And, you know, I don't know what's going to happen. I mean, people have been using our training data. We'll see what comes out of it, and whether there's sort of a discovery of a sort of arc of mathematics in the same way that logic is sort of a discovery of the arc of reasoning for those kinds of things. I think that's one of

the sort of interesting questions there. I've sort of studied the foundations of mathematics quite a lot. And I think, I mean, this question that we're talking about now is really a question about human mathematics. It's not a question about mathematics as mathematics could conceivably and formally be. There, sort of all possible paths exist; it's not something where these sort of humanized paths can be identified.

Alp Uguray (12:53.244)

Mm hmm.

Stephen Wolfram (12:58.464)

What could potentially be identified is the sort of empirical metamathematics: that is, of all possible mathematical theorems, of which there are an infinite number, the three or four million that humans have chosen to publish in papers about mathematics. How do we pick those particular ones? What's special about those particular ones? That's something that we could imagine having some statements about. Now, you know, I think

I've certainly looked for many years, probably 30 years or so, for kind of signatures of kind of the human aspect of why did we pick these axiom systems and mathematics and not other ones? And there's a little bit I figured out. But mostly, I think it has been, it's sort of complex accidents of both history of mathematics to some extent, and our experience of the physical world.

Stephen Wolfram (13:55.99)

which in turn depends on the particular sensory systems that we have. And it's kind of, it's very much on us, so to speak. But exactly being able to draw the line from the way we are sort of as biological entities, so to speak, to how we got the particular mathematics we got, with arithmetic and geometry and things like that: one can get a certain distance towards that. And I think, I mean, there's

Alp Uguray (14:22.571)

Okay.

Stephen Wolfram (14:25.112)

There's a certain amount one could say about what kind of mathematics it would be conceivable that mathematical observers like us could possibly explore. But I think, is there a more detailed characterization of math that humans pick out of all the infinite number of possible theorems, the ones that we choose to study? I don't know yet.

Alp Uguray (14:41.14)

I think there is the, like your point of we are part of the physical world and the way we reason is based on the fundamentals that

that surround us, so we could look at the stars and study them, we could look at physical space and study it. And with LLMs, and I read your paper on ChatGPT on your website as well, it's very much just one-token-at-a-time prediction based on the corpus of the training data set. So, like, is it...

In a way, it is smart, but it's not intelligent to the extent that it scales to every reality of the world. And I thought that was really interesting, because what would it take for it to be able to scale, to reach that point of AGI, especially...

better understanding of computation, better understanding of reasoning, and then the ability to solve proofs, when right now it's unable to solve even a very tough coding challenge. So without having it in its corpus, how can it become more intelligent, with the capability of computation involved?

Stephen Wolfram (16:22.006)

I mean, you know, computation is this thing that isn't intelligent like humans are intelligent. Computation is a very alien form of intelligence. I mean, humans have certain kinds of intelligence characteristics that LLMs, you know, have comparable versions of. And when it comes to, you know, imagine an LLM, or imagine a system, a computational system, that's just doing all kinds of computation...

I mean, I've spent a large part of my life studying sort of what happens in the computational universe. If you just essentially pick a tiny program, even at random, for example, what does it typically do? One might have assumed as I did in the kind of beginning of the 1980s, that if the program is sufficiently simple, the behavior that it produces will be somehow correspondingly simple. Turns out that isn't true. It was a big surprise to me. I think, you know, I gradually watch people sort of...

Alp Uguray (17:00.584)

Mm hmm.

Stephen Wolfram (17:18.518)

becoming more and more used to the idea that that's really the way things work. And it's kind of charming, because there are always people, you can see their intuition just hasn't grokked that particular point, and they keep on saying: but look, to get this kind of behavior you have to make the program more complicated, or, you know, this thing we're seeing, it has to be something where there's something very complicated, and what if you made more complicated rules, what if you did this and that. You don't need any of that stuff.

I noticed that, I don't know whether it's partly a younger generation or something, but there are people to whom things like the phenomenon of computational irreducibility that I started talking about 40 years ago are obvious. I wrote this big book about 20 years ago now called A New Kind of Science. And in the preface to that book, I kind of said, a lot of what I've said in this book will seem completely shocking and implausible to people now, but in time it will come to seem completely obvious.
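A concrete way to see the point about simple programs producing complex behavior is rule 30, the elementary cellular automaton discussed at length in A New Kind of Science. A minimal Wolfram Language sketch:

```wolfram
(* Rule 30: an elementary cellular automaton with an almost trivially simple rule.
   Evolve it for 200 steps from a single black cell and plot the result; despite
   the simplicity of the rule, the pattern shows no obvious regularity. *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]
```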

there's now sort of a subpopulation to whom these things are completely obvious. Which is nice, that's the way progress gets made. I think that the, but this, to this point about sort of well, what happens when you're doing lots of computation, lots of complicated things happen, those things are very non-human. They're the things that are out there in the computational universe. The things that we humans have sampled in the computational universe, we've sampled

Alp Uguray (18:20.671)

Mm-hmm.

Stephen Wolfram (18:45.666)

just very, very tiny parts of the computational universe and decided those are the things we care about. But there's an awful lot else out there. And if you say, well, you know, take this sort of super AI that's computationally, you know, enhanced, it can go off and explore all these parts of the computational universe that have not been humanized, so to speak. So what happens when it does that? Well, you know, we've got this thing that's going off and computing lots of things.

that are not really things that resonate with the things we humans think about or the things we humans can kind of fit in our minds. But that's very much like what we already see happening in the natural world. The natural world is, in a sense, full of computational processes that are very non-human. I mean, sort of notably, people say, you know, the weather has a mind of its own. It's those processes of fluid dynamics in the atmosphere; they're doing all kinds of computations,

just like the neurons in our brains are doing all kinds of computations that happen to be based on electrochemistry rather than fluid dynamics. But it's really at some computational level, it's very much the same kind of thing. And I think that's the limiting case is when you amp up the computational capability, you go off into the computational universe and you've got things that are essentially alien intelligences, not well aligned with human ones. And I think the...

Alp Uguray (19:51.412)

Okay.

Stephen Wolfram (20:09.688)

The chasing of AGI, while charming, is, you know, it's kind of interesting to me, the things people say about it and the risks it has and all this kind of thing. If you read what people wrote in the early 1960s, it is basically word for word what's being said today. The only difference, and it's really charming: you take paragraphs that were written in the early 1960s about how, you know, our species is merely a stepping stone to the ultimate, you know, digital AI future type thing.

The only thing that's different, the only thing that tells you the age is that the kind of sensibilities about society and the kinds of things that people say about sort of the structure of society, they're a bit different back in the early 1960s than they are today. But that's the only way you can date it. It's just like, you know, when you see a movie or something, you can date it by the computers that you see in the movie type thing. So in this case, you can date the things that are said by sort of the quirks of their time, so to speak.

but the main sentiments are very much the same. And when you say, well, what does it take to make sort of the super AI? Well, you'll keep on saying, well, that super AI isn't quite like a human, you know, because it has this feature that's wrong: it doesn't have mortality, it doesn't have children, it doesn't do this or that. And the only thing that will satisfy your criterion of being exactly like a human is pretty much a human. Now, as soon as you go away from that,

Alp Uguray (21:26.787)

Okay.

Stephen Wolfram (21:36.706)

you're getting to things which, for example, have attributes that are deeply non-human, like being able to do sophisticated computations, which we humans can't do. And I remember when Wolfram Alpha came out, people were using it to kind of fuel Turing test systems. And the total giveaway for those Turing test systems, you ask a bunch of random questions about all kinds of different things. If it could answer those questions, it wasn't a human, which is kind of the opposite way around that you might think, where...

You think the human is the pinnacle and that's what you're looking for. So I think the thing to understand about what happens when you kind of amp up the computational capability, where do you go? Well, you go to things which are in part a kind of like sort of the generic computational universe, which are a bit like the generic physical universe. Lots of processes that we humans

just don't care about. Where we say, yes, there's something sophisticated and computational going on in that waterfall or something like that. But the details of it are not things that we engage with in our minds. We don't really care about those details. Now, you know, I have been curious recently actually about the following question. If you look at, you know, cats and dogs versus us, the sizes of their brains are maybe 10 to 100 times smaller than ours. And somehow, just as we saw in the development of LLMs,

sort of a threshold at which things start to happen. Like you start to be able to get compositional language, which presumably the cats and dogs don't have, and we do have, and we're able to take our thoughts as they are in our brains and sort of package them up and turn them into language and transmit them to another brain which can then unpack them and get something which is a reasonable approximation to the original thought in the first brain, so to speak.

you know, at what point do we get to compositional language? It's something our species invented, you know, maybe some other species maybe have partial versions of it, but we're the only species we know that's really gone the distance with compositional language. And there is a question if we had, you know, hundred times the number of neurons in our brains, is there another level of kind of sophistication or abstraction that we can get to? What would it be like? I think the downer with that whole story is,

Alp Uguray (23:48.717)

Mm-hmm.

Stephen Wolfram (24:02.538)

it will be like something that we humans won't understand or appreciate. It's like the natural world, which is full of computation that we humans don't understand. I mean, in a sense, science, natural science, is this attempt to find a bridge from what actually happens in the natural world to what fits in our kind of finite minds. And there are certain kinds of things we've successfully bridged. We've successfully gotten kind of human-level narratives about some of the things that happen in the natural world, but others we have not.

This whole idea of computational irreducibility tells us that there are lots of things in the natural world where we just won't ever get that sort of short narrative description of what's going on. I mean, I think that's kind of a thing where, yes, you can have all sorts of sophisticated things happening that are far beyond us humans, in the same way that nature is beyond us humans, but

It isn't quite as exciting as you might think it would be, so to speak. It's just something, you know, it's like we're already probably in the position where there's a civilization of AIs out in our world that are kind of autonomously doing lots of stuff, whether they're selling us ads or whether they're bidding on ads or whether they're doing something more sophisticated and more interesting than that. I don't know. But, you know, there's already sort of the civilization of AIs that's out there running, many aspects of which we don't understand.

Alp Uguray (25:02.495)

Mm-hmm.

Mm-hmm.

Stephen Wolfram (25:27.33)

You know, we look inside, we can say, well, there's this distribution of activations in this neural net. What does it mean? Do we understand it? No, we don't. Does it matter that we don't understand it? Not really. I mean, it's the same thing. You know, we've lived through a period of history, about a hundred and something years, where we sort of did understand how the machines we built work, or the devices we use work. I mean, back, you know, before the industrial revolution,

Alp Uguray (25:37.524)

Okay.

Stephen Wolfram (25:54.688)

If you wanted to get from here to there, a horse was a pretty good bet. And you could operate a horse without knowing how a horse works inside. And that's how people did it. Later on, for some brief period, you get to, well, yes, you can understand how the, how the steam engine or the car works. And that's a, that's a period in history where we could sort of understand those things. But I think that period in history is kind of coming to an end. And I'm not sure that there's anything

so dramatic about that phenomenon.

Alp Uguray (26:25.956)

And that's very interesting because we do not fully understand the computations that happen around us in reality.

And we also do not understand the computation that we surround ourselves with, that we create some parts of. Like, to an extent, if autonomous AI agents are loose and they speak in a different language between themselves, because they don't technically need to speak and communicate in English or a human native language.

They could transfer knowledge between different systems with a different type of mechanism that they could invent over time. How likely do you think it is that that sort of creation of intelligence is possible with these types of systems, given that they have access to more of each other's knowledge?

Stephen Wolfram (27:24.908)

Well, you know, there's a question of how do humans transfer knowledge from one brain to another? You know, each brain has a certain pattern of, you know, weights and activations and who knows what else that encodes knowledge in that brain. Somehow that, you know, brains aren't identical. You can't just pick up, you know, the pattern of activations from one brain and stick it another. You can't even do that with computers. know, computers, if you took a computer that's been running, you know, it's been running its operating system for a while.

Alp Uguray (27:53.556)

Yeah.

Stephen Wolfram (27:53.57)

and you say, let me do a direct memory transplant from one computer to another, it's not going to work. And nor is it with brains, which have had different experiences, et cetera, et cetera. So then the question is, what's the transfer protocol between brains? And the answer is, the only transfer protocol we really know right now is language. And what's happening in language? You're taking all those details, our hundred billion neurons firing in all kinds of ways,

And somehow you're organizing those into this stream of fixed set of tokens, know, 50,000 words in typical language. You're taking all of those 100 billion neurons and all their detailed firings and you're sort of, putting them through this thin path of this limited set of words and so on. But that's important. You have to do that because there's no way to do a sort of direct transfer from your 100 billion neurons to my 100 billion neurons.

It has to be done through some kind of encoding that can be encoded at one side and decoded at the other side. Now, could you imagine a million-word language being used by the AIs? Yes, you could. And in fact, that's a thing: what would it look like to have a million-word language? Well, things are a bit shallower. In English, I might have to say "the big red dog" or something,

but in a million word language, there might be a single word for that. So what you end up with is sort of less thinking, less processing, more kind of just sort of the syntax is richer, so you can kind of describe more. But it's still the case that insofar as the world is an infinitely varied kind of computationally irreducible place, you will run out of sort of descriptors fairly quickly, even if you have a million words.

Alp Uguray (29:35.935)

Okay.

Stephen Wolfram (29:50.584)

there'll still be plenty of things that are, you that particular shape of tree. You're not going to have a word for every possible shape of tree, every possible shape of cloud, something like this. You might have words for, you know, and we typically have words for, I don't know, 10 different kinds of clouds. And maybe, you know, you'd have words for 10,000 different kinds of clouds, but there's an infinite number of possibilities. So it doesn't get you that far to do that. Now, I mean, there are other things you can do to transfer kind of

Alp Uguray (30:02.611)

Yeah.

Stephen Wolfram (30:18.318)

from one mind to another. I mean, another one that's kind of fun to think about is: we form mental images, but we don't have a way to transfer those mental images, other than by kind of describing them in words, or maybe, if we're really good at drawing, we can draw it and show somebody the picture. That's a very slow process. It's about as slow as if you had a dog who wants to communicate in English and they're typing with their paws on some big, you know,

Alp Uguray (30:35.344)

Yeah.

Stephen Wolfram (30:46.894)

pad or something. So, but you could imagine, you know, for an AI, it has the ability to, you know, it can, it can generate an image and we can imagine a situation maybe even from brains where, you know, we're picking up neural signals from our brains and producing, you know, a little display of this is, this is my mental image, so to speak. And that's a, that's probably a much higher bandwidth communication channel with another brain.

Alp Uguray (30:50.408)

Mm-hmm.

Stephen Wolfram (31:16.27)

than our one dimensional languages. I'm not sure exactly how that will work because it's kind of a thing where, you know, there's a way that, you know, people say a picture is worth a thousand words or something. And maybe this is a way in which you can get sort of a higher bandwidth communication, but still you have to sort of actualize things in some, you have to have some way in which you have taken all of the detail of the world.

which could be encoded in different ways in different minds, and somehow standardize that so that you can get something that can be sort of transported to another mind whose structure is different. So yes, I mean, it's perfectly possible that the AIs can invent their million-word language and, you know, maybe they can invent something that goes beyond

Alp Uguray (32:02.94)

Thanks.

Stephen Wolfram (32:14.914)

kind of, I mean, there are creatures, like the cuttlefish, for example: cuttlefish have chromatophores on their skins and they have patterns. The cuttlefish presumably communicate through something a little bit like mental images. We don't understand what they're saying to each other, but you can see a cuttlefish has all these waves of color and so on, and other cuttlefish respond to those waves of color. And that's a...

That's a sort of different form of communication. don't know whether that's, I'll be a little embarrassed if cuttlefish are being more sophisticated in that communication than we humans are being in all the kinds of things we've developed with language, but I don't think it's easy to tell.

Alp Uguray (32:52.084)

That is very true. I think communication also develops in the reality that we are exposed to, the physical reality that's around us. The fish adapts to communicate in that way, and we adapted to communicate because of our sound and our ability to make sound.

And the same goes, I assume, for the AI's access to certain data and the way it can compute certain things within the realm and boundaries of computers, and potentially even the quantum as well, quantum computing.

Stephen Wolfram (33:46.028)

Well, that's a different issue. mean, look, the one thing about, you know, our description of our world is very much based on our senses. And, you know, there are many aspects of our senses that are pretty specific to us. Like, for example, you know, our brains think in kind of millisecond or, you even even 100 millisecond timeframes. But, you know, the speed of light is very fast for the distances we're dealing with in our typical environment. The speed of light is very fast relative to that.

So in a typical situation, we are kind of seeing the structure of space around us in one gulp before we can think about it. If you're a piece of digital electronics running a million times faster, then that image of the scene doesn't necessarily have to come in all in one gulp. You can be seeing those individual photons arriving and it's really just a matter of the human kind of presentation of the scene.

that we choose to make it into a single image rather than just, this is a bunch of photons arriving. And I think, you know, it's the question of the sort of sensory data that we get. You know, if you're an AI that's being fed by a bunch of IoT devices, you've got a million IoT devices from different parts of the world that are kind of feeding you information; I don't know what you conclude, what you discuss, about what happens in the world. You know, it's not quite the same. It's not...

The sensory system is very different. You know, we have some number of, you know, tens of millions of kind of touch sensors on our skin and so on. But imagine that you are, you know, an AI that has, I don't know, a hundred million IoT sensors in all different places. You know, there may very well be things that would be your

Alp Uguray (35:36.316)

Mm-hmm.

Stephen Wolfram (35:42.648)

kind of words for things that happen in the world where you have a word that means kind of, I don't know, some kind of strange correlation between the observations of different IoT sensors in different places, something that we don't have any experience of and don't have any way to, have not chosen to have any way to talk about. But in terms of the quantum side of things, it's sort of interesting because sort of the essence of quantum phenomena is that

instead of having this sort of one path of history where definite things happen like in classical physics, in quantum mechanics, sort of the big idea is that there are many paths of history and we typically only get to see sort of the some kind of probability of different paths being followed. But more specifically what happens is inside a quantum processor or a quantum computer for that matter, there are sort of these many threads of history that are being followed.

The thing is, and the thing that I think makes quantum computers not a very practical possibility, is that we humans want a definite answer. It isn't adequate to us. Our brains are actually quite architected so that we try and aggregate all the different things that are going on in our brains to get definite answers. You know, probably maybe 10 times a second or something, our brains, you know, all these nerve fibers go to these basal

ganglia and so on, and they seem to be aggregating the thoughts that are coming from all those different nerve cells, reaching some kind of consensus about what to do, maybe 10 times a second. And that's kind of how brains are architected, and how our experience of the world works: we think definite things happen, one thing happens, then another, then another. In quantum mechanics, many things are happening in parallel. One could imagine a quantum brain

where it could have many thoughts in parallel. That brain isn't our brain; our brain doesn't work that way. And, you know, could we imagine kind of an AI that operates where it's sort of thinking about many things in parallel? Sure. That's sort of the story of distributed computing. Interestingly, distributed computing is something that's very hard for people to wrap their brains around. I mean, distributed computing, when it's just like, well, I'm just going to distribute,

Stephen Wolfram (38:04.332)

you know, I'm going to use ParallelTable in Wolfram Language or something, and just do a table of values where each element of the table is being computed on a different core, well, that's, you know, that's reasonably easy to understand. But when things get more complicated, we humans are remarkably bad at understanding the kind of parallel distributed computation going on. And I think that's very closely related to the fact that we have this

Alp Uguray (38:27.78)

Yes.

Stephen Wolfram (38:33.102)

this fundamental kind of design of our brains that tends to sort of sequentialize everything. And that's really the way that we think about the world: as a sequence of events through time, so to speak. That's something, yes, the AIs can avoid, but the things that they're doing when they avoid that are things that are very non-human. They're things that are probably as unrecognizable as the processes in nature.
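The ParallelTable case mentioned above is the easy kind of parallelism: each element is computed independently, with no interaction between the pieces. A minimal Wolfram Language sketch:

```wolfram
(* Start local worker kernels if they are not already running *)
LaunchKernels[];

(* Each table element is computed independently, potentially on a different core;
   this "embarrassingly parallel" case is the one that is easy to reason about. *)
ParallelTable[PrimeQ[2^n - 1], {n, 1, 500}]
```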

Alp Uguray (38:49.268)

And then there's the approach of making AI more like us, because in a way we are kind of creating it. And then there's also approach where it can go.

do its own things. And I want to maybe tie it to education as well, going from computational thinking to the traditional hand calculation that we have in schools. What are some of your thoughts about that? And I saw that there's this separation of, like, a College of Computer Science and a College of

Computing, I think, that's happening. I'd love to learn more about what that really means for the future and for students.

Stephen Wolfram (39:58.146)

I don't have a clue. I would think that was some piece of university politics that caused that to happen, so that's probably not a significant intellectual thing. But there is something which I think is more intellectually significant, which is that computer science has a funny history. I mean, it has become kind of a trade; for the most part, people study computer science

Alp Uguray (40:02.982)

Already. Okay.

OK.

Stephen Wolfram (40:27.694)

as for the trade of programming, to learn the trade of programming, so to speak. I think that's a trade which in many ways is going to be a trade that goes the way that many hand trades have gone, which is it kind of goes extinct, except for a small number of practitioners. I mean, back when I started doing computing, assembly language was the language anybody serious used. And I remember people telling me back in the 70s, oh, you know,

Stephen Wolfram (40:56.93)

there's this newfangled language called C, at the time, and people telling me, it's never going to catch on; assembly language is the only thing you can really use to program a serious application. Well, you're smiling because that's ancient history and it obviously didn't play out that way. We're in a situation today where a lot of programming gets done kind of in a very step-by-step way with kind of standard low-level programming languages.

You know, a large part of my life's work has been automating that process: figuring out what are the lumps of computational work that are repeatedly needed, just like figuring out the words that one would need in a human language to express things, figuring out the lumps of computation that one needs to sort of express the kinds of things one wants to express in thinking about the world computationally, and just implementing those things, and doing it in a sort of coherent way.
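As one small illustration of what such a lump of computation looks like in practice, here is one possible Wolfram Language one-liner (chosen here as an example; it is not one Wolfram mentions) that composes several built-in pieces, fetching a sample text, stripping stopwords, and rendering a word cloud, work that would otherwise sprawl across a fair amount of low-level code:

```wolfram
(* Three high-level "lumps of computation" composed in a single expression:
   a built-in sample text, stopword removal, and a word-cloud visualization. *)
WordCloud[DeleteStopwords[ExampleData[{"Text", "AliceInWonderland"}]]]
```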

Stephen Wolfram (41:56.014)

That's kind of the story of Wolfram Language: the story of trying to make sort of as high-level as possible a language for expressing things computationally, so that once you can think about things computationally, you then have a medium for expressing those things. It's kind of the analog of what happened with mathematical notation back 500 years ago, where mathematics went from being expressed with natural language words to being expressed in a kind of streamlined notation.

That's what launched algebra and calculus and basically all the modern mathematical sciences. So what we're trying to do with Wolfram Language is provide kind of the notation that allows you to talk about the world computationally, and kind of launch computational X for all X, so to speak. And I think, you know, learning how to sort of think computationally about things, and learning how to formulate things so that you can express them

in our computational language, that's a very valuable thing to do. That's an important thing to do. Learning how to write low-level Java code or Python code or whatever else to do something is one of those activities whose days are numbered, I think, so to speak. I mean, I see that from what we've done. You know, I see CEOs and CTOs of lots of tech companies who use our technology all the time; the people in the trenches just writing code

aren't using our technology. It's a very bizarre kind of inverted situation, which I finally understand, because the typical programmer doesn't expect that they have to do serious computational thinking. The typical programmer is doing: I'm doing this, I'm writing this piece of code, I'm writing another piece of code. It's, you know, I'm writing it according to the spec. The formation of the spec is a piece of computational thinking, but that's not what the programmer does. The programmer is

kind of the artisan or something who's actually, you know, grinding the lenses or whatever, not the person figuring out how the optical system is supposed to work. And so what you see is this sort of weird phenomenon where people can, you know, use our computational language to do all sorts of things in a few hours, and then sort of the programmers come in and they take months to do the same thing, which is...

Stephen Wolfram (44:19.778)

which is kind of just, you know, a precursor of the future, which is that the low-level programming function just doesn't need to be there. It can be automated away. It helps now that we have LLMs, that you can use something like a notebook assistant system to really go from sort of a vague natural language description to something which is like: here's a possible meaning of what you said,

in computational language. You can look at that and say, yeah, that's what I meant. Or you can say, you missed the point, try again, or you can adjust it yourself. But that seems to be a very productive way to work: you go from even a fairly vague understanding of what you're trying to do to something where the notebook assistant, the LLM, is producing kind of this sort of precise version of it in our computational language, which is a language that is set up

Alp Uguray (45:13.428)

Mm-hmm.

Stephen Wolfram (45:17.834)

specifically to be sort of readable by humans as well as writable by humans. And so I think, you know, in terms of the sort of direction of education: learning sort of computational thinking, very important idea; learning the craft of programming, you know, in standard programming languages, I think that's gonna go the same way that assembly language programming went, you know, back in the day.

Believe me, at the beginning of the 80s, the first big computer system I built, which started my first company and so on, was a big system written in C. And people were saying, you can't be serious; it has to be in assembly language; that's not a realistic way to set things up. Now, to be fair, Unix had already been written in C by that time, so that was sort of a silly thing to say,

but people say silly things all the time. But I think, you know, what we're seeing right now is, from an educational point of view, learning how to think computationally about things, which is a lot about kind of getting a sense of how one thinks about lots of different kinds of things in the world computationally, whether that's thinking about images or sounds or geography or whatever else computationally. That's the really valuable thing to do

Alp Uguray (46:17.968)

Okay.

Mm-hmm.

Stephen Wolfram (46:45.614)

I think, at an educational level, not kind of the craft details of writing, oh, I don't know, object-oriented programming methods for this, that, and the other thing. I think that will go the way of assembly language, so to speak. But I think that the idea...

Alp Uguray (47:10.692)

It's beautiful.

Stephen Wolfram (47:12.716)

It's kind of nice, I didn't plan it this way, but we've kind of arrived at this sort of unique place in the technology universe with the technology stack that we built, because we have this language that is really expressing the world computationally. It's not talking about, you know, how to set variables. You can set variables if you want to, but that's not the thing people mostly do. It's kind of expressing at a high level, expressing things about the world and things that you're thinking about in a computational way.

And so, for me, it's nice to see how that dovetails so successfully with kind of the world of LLMs, where the LLMs provide this sort of linguistic interface to this underlying kind of computational representation of the world.

Alp Uguray (48:00.193)

In a way, they complement one another really well.

And for someone who is a student right now, I don't know, a freshman in college or in high school or earlier, how can they think differently when it comes to adopting the skills to think computationally, and not be stuck just being focused on that systematic, repetitive

coding or the programming language itself? Like, how can they shape the way they think to stay relevant?

Stephen Wolfram (48:42.178)

Yeah.

Stephen Wolfram (48:47.884)

Right. I mean, you know, I've tried to build a bunch of tech to make that possible. I think it's unfortunately fairly unique in the world. So it's kind of like, you know, the message has to be: learn our tech, because that's the best path to get to sort of computational thinking. Now, in terms of what to learn, you know, what to study in college, for example, I think there has been a tradition for decades that the best thing to do is to be extremely specialized:

to learn a very particular kind of set of almost mechanical skills and learn this very tall tower of mechanical skills. That has been a way to be sort of valuable in the world for quite a while. I think it's going away. I think that those things which are those sort of tall towers of specific sort of personal functionality are the things which are readily automatable. And...

increasingly will be automated. There were sort of blocks to automating those things that came because we didn't have LLMs. We didn't have kind of a linguistic interface; we didn't have this way of operating that had this sort of humanizing layer. And so I think that, you know, my advice these days to people who are going to college and so on is: learn about a bunch of different things. Try and learn as broadly as possible about

different ways of thinking, the ways of thinking that might exist in pure mathematics versus biology versus philosophy versus history. All these different fields have different ways of thinking. It's very useful to learn all those kinds of things as well as to learn the facts of a lot of these different fields. Because I think the future is much more generalized. It's much more about generalism than specialization. I think that, and I think

computational thinking is a tool for general thinking. It's a tool for general thinking in the same kind of way that philosophy, for example, is a tool for general thinking, or logic is a tool for general thinking. I think it's a very broad tool, and it has the additional feature that computers can help you do it. If you're doing philosophical thinking, so to speak, you're kind of on your own for the most part, until very modern times. But with computational thinking,

Stephen Wolfram (51:11.166)

once you've formulated your thoughts in that kind of precise computational way, then you're off and running, able to get a computer to help you, sort of to give you a kind of superpower to go beyond what you can merely do with your mind.

Alp Uguray (51:18.804)

When it comes to, especially, when you started your company, and looking at it from the lens of entrepreneurship, and building conviction while everyone says no, but saying, no, I'm sure this is the way:

What are some things that you've experienced over time that helped you build that conviction? I think that would be very valuable for other entrepreneurs out there to hear too.

Stephen Wolfram (52:06.166)

Well, you know, it helps to have been successful in the things you've done, because after you've been successful, you see a pattern of: yeah, it worked, even though people said it couldn't possibly work. And the things I've tried to do have gotten bigger over time. You know, I was lucky enough to be pretty successful as a teenage particle physicist, which is kind of a weird thing to do. And then, you know, I built my first big computer system

right at the beginning of my 20s, and that turned into my first company, in which I made many mistakes. You know, I was a pretty successful academic, so those were all kind of: yeah, I can do those things and these things work out fairly well, and that gives one the confidence to sort of say, yeah, well, this thing I'm thinking about doing, you know, I don't really care what other people say, because I'm pretty sure I know what I'm talking about. Now, at some level, you know, my own way of thinking tends to be very kind of foundationally based.

That is, I try to really get to the root cause of things, the kind of foundations of how to think about things. And one feature of that is, if you can do that, you're sort of always on bedrock. You know, it's not the case that somebody is going to come along and say, wait a minute, you misunderstood that, it doesn't work that way, because you built your thinking on this very, very solid foundation. At the very beginning, I don't think I was doing that. I think it was more, you know, it was more important

that I had sort of practical success in doing the things that I was doing and built sort of confidence from that. But I think now, you know, I'm doing things which are sufficiently far outside kind of the things that there's really a kind of an ambient understanding of that the only way I can have any belief that what I'm doing makes sense is if it's sort of built on bedrock. And for me also, you know, I do a fair put a fair amount of effort into explaining what I'm doing.

and providing exposition of various kinds. That's very useful to me, because part of the way that I can tell whether I know what I'm talking about is: can I explain it to people? If I can't, well, that's not a good sign. So it's like: get the foundational understanding, be able to explain it to people, get to the point where you're confident in what you're saying, so to speak. And people will say all kinds of crazy things. And people...

Stephen Wolfram (54:33.664)

It depends on how far out from what's been done before the things you're doing are. The further out they are, the less likely you are to find people who will really appreciate them, because getting to the point of thinking the way you're thinking is often quite a bit of effort. And if you just say to somebody, hey, what do you think of this? Well, you know, they just don't have that background.

Now, it's also the case that I think it's worth listening to what people say, because even when they say something that on the face of it seems like they've missed the point, sometimes that thing, in my experience at least, gets you to think about something in a slightly different way. How can I explain? That's a confusion I've never heard before, so to speak. How can I possibly explain something in a way that addresses that confusion? And as you try to do it,

you realize, wait a minute, this is actually interesting. You know, I now understand something better than I understood before. But it has to be said that for somebody like me, you know, I've done a bunch of things which I suppose from the outside look like high-risk propositions. From the inside, nothing I've ever done has ever looked like it had any significant risk. So in other words, for me it's like, I'm going to do this project, and I kind of see my way through this project, and

I think it's gonna work. You know, I kind of have this conviction that it's gonna work, and I'm not worried about whether it's not gonna work. Now, that doesn't mean that I don't look and say, well, here are the 10 things that could go wrong, let me try and figure out how to avoid those 10 things that could go wrong. But I have no doubt that the project is gonna work. Yes, there could be some bumps along the way, and there could be things that, you know,

come in and try to block it, and okay, I'm gonna get rid of those things, I'm gonna figure out how to push past those things. But I think that's kind of my approach. It helps, when you're doing entrepreneurial kinds of things, that I mostly build products that I want myself. I'm user number one for most of the things I've built. Wolfram Language, for example, that's the thing, that's my kind of...

Stephen Wolfram (56:55.426)

computational superpower, so to speak: I can use this thing that I've been building for the last 40 years, and it makes me really efficient. It's kind of a funny situation, because back in the 70s, when I was a kid and started using computers, theoretical scientists pretty much didn't use computers. And so that was a weird kind of secret weapon that I had.

And, you know, it's only years later that I kind of realized how surprised I should have been that other people were not using these tools. And then I started building tools, and, you know, I think we're in this situation where the tools I've built are available to everybody. I mean, I get to run a version that's a few months ahead of other people, but that's about it. Other than that, it's kind of available to everybody. And it's not clear people would want to run the pre-release builds

Alp Uguray (57:32.34)

Yeah.

Stephen Wolfram (57:53.646)

of the system. I was just running one this afternoon, and it froze horribly on me. You know, one shouldn't wish for that. But still, really, the most important thing for me is this sort of superpower tower of capabilities that I've built, or gotten built, for myself, that really allows me to

Alp Uguray (58:02.718)

Yeah.

Stephen Wolfram (58:22.306)

get a long way by using those tools; you know, it's kind of like, use the tools. Now, having said that, in terms of building companies and things like this, you have to be capable, I think, of thinking about a broad range of things, which comes back to this whole education question of, you know, do you become a specialist in some very particular thing, or do you just learn to think in general? And, you know, I think

the thing is, you run companies and things, and lots of different stuff comes up, and the question is: can you apply whatever intellectual or analytical skills you have to all those different things that come up, or do you say, well, I know about this, but not about that, so I'm just going to delegate that because I don't really understand it? You know, in the history of my company, and I started my current company, what is it, 39 years ago now, and

you know, there are areas of the company which I just was never that interested in, things about the operations side of the company and so on, and some of the commercial sides of the company. And, you know, I discover in the end that anything I don't understand isn't done very well. And that's not because of the people, it's just that you can't really manage it well unless you really understand what's going on. At least that's my experience, you know, and I think

Alp Uguray (59:47.512)

Mm hmm.

Stephen Wolfram (59:50.9)

I kind of really feel for people who try to run, you know, technology-intensive companies and don't understand technical kinds of things. Because people will tell them, people tell me all the time, you know, that's impossible, etc., etc. And it's like, well, you know, why? Because of this or that technical thing, blah, blah, blah. And I'm like, well, did you think about this? And they're like, oh, that's an interesting idea. Or

you know, sometimes it's like, we just, you know, this just doesn't work. And eventually, for me, it's like, well, show me the code. And I have to say, at the scale of company that we have, with the show-me-the-code thing, I never actually end up seeing the code, because before I see the code I'm asking all these kinds of questions about how it actually works. And there's usually this moment where it's, well, we didn't think of that.

Alp Uguray (01:00:34.928)

Okay.

Stephen Wolfram (01:00:49.608)

And you never actually have to open up the code, but sometimes it's useful as a backdrop to have the code on the screen, because that gives some reality to the question of what this actually does. Often it's complicated enough that you can't possibly see what it does just by looking at it on the screen, but it still sort of grounds the discussion about the general picture of what's going on.

Alp Uguray (01:00:50.078)

Okay.

It's still the way to think, and then...

And then your questions are actually unfolding what's in their mind, and then they realize where the problem is: they didn't ask the right questions about the problem to go down deep. And then, to your point about conviction, it is very powerful to have that strong bedrock,

Stephen Wolfram (01:01:26.488)

Right.

Alp Uguray (01:01:49.64)

because then that makes you less fearful. I think a lot of entrepreneurs fear failure, and then that fear of failure leads to the failure. Whereas if you already know, I know my foundations are right at first principles, then that really builds the conviction. For...

Stephen Wolfram (01:02:16.93)

I mean, let's be realistic. I mean, you know, somebody like me is something of an optimist, right? I always believe that this project can be done, that such and such a project can be done, even though the objective information does not necessarily support that. It's just because I think I can see how to do it, and I think I'm confident enough that if there's some bump, I can overcome it, so to speak. And I think that's...

Alp Uguray (01:02:45.64)

Thank you.

Stephen Wolfram (01:02:46.654)

You know, that is partly personal psychology, partly experience from doing things. But I would say that, you know, I suppose at some level it's kind of, I don't know, what does it matter? From some point of view, right, I do projects, I'm pretty sure they're gonna succeed, but at some level,

it's like, well, if it isn't succeeding, I'm going to figure it out. I mean, I guess I've never really had a project that's just gone splat in my whole life. I guess there's one that I did once, a long time ago, that was a kind of stupid project I should never have started, and that I delegated a large part of to other people. You know, eventually I just said, this is not working, and pulled the plug. But usually, you know, the major projects I've done,

they've always ended up getting to the finish line. Now, what the finish line looked like, relative to what I thought the finish line would be like, is not always the same thing. In other words, they got to a finish line, but not necessarily precisely the one that I thought they'd get to. Often, the one they get to is much better than the one I thought they'd get to. Sometimes I'm really putting a lot of effort into one aspect of the thing, the product we're building or whatever else, and it turns out, as we build it, we realize there's this other thing.

Alp Uguray (01:03:46.347)


So,

Stephen Wolfram (01:04:14.36)

And then we realize people are really going to care about that other thing, and the first thing that we were initially building it for, nobody actually cares about that. That wasn't the important thing, but we couldn't see the thing that people really do care about without having built up kind of a level of capability. I mean, actually, even this afternoon I was doing a design review, which we livestreamed, as we do most of our design reviews, about a fairly abstruse area

Alp Uguray (01:04:22.631)

Mm-hmm.

Stephen Wolfram (01:04:44.654)

of systems engineering, which has caused us to realize that there are some more general things worth building that are much more important, I think, than the sort of systems engineering details this started from. So that's kind of an example of how that works. But I think, sort of,

Alp Uguray (01:04:48.276)

Okay.

Stephen Wolfram (01:05:13.678)

the trick is to have a belief that there's a path to succeed, and to be able to divert in real time. I mean, sometimes you'll see with companies, it's like, we have a plan. Well, it turns out the plan isn't a very good plan, and you discover that halfway through. And if it's just, we have a plan, we're going to follow our plan, that's not a good thing. I mean, in my company over the years,

Alp Uguray (01:05:30.359)

Yeah.

Stephen Wolfram (01:05:41.122)

I think we've developed a fairly good culture of being able to sort of pivot plans. It's always, I mean, I'm doing this right now with a couple of projects, actually, it's always a, we can't pivot that, we were on this path. But you have to put a certain amount of pressure and thought into the whole thing to make that work.

Alp Uguray (01:05:47.028)

So that can aid the scenario planning as well. And I know we are almost at time, but I would like to thank you very much for your time. This has been...

This has been wonderful to hear your thoughts about AI, about entrepreneurship, about your journey, and about the world around us. It's been a pleasure. Thank you very much.

Stephen Wolfram (01:06:27.522)

Thank you.

Great.

Founder, Alp Uguray

Alp Uguray is a technologist and advisor, a five-time UiPath Most Valuable Professional (MVP) Award recipient, and a globally recognized expert on intelligent automation, AI (artificial intelligence), RPA, process mining, and enterprise digital transformation.

https://themasters.ai