Navigating AI Policy, Job Market, Research, and Generative AI w/Eric Daimler

"It is going to come from my ability to generate these nefarious use cases where I can simulate the voice of somebody in my family... and then have that be really customized to be convincing, coming from somebody I trust, and then repeat that 20 million times over 20 million people."
— Dr. Eric Daimler on voice simulation

Daimler delves into the potential risks posed by advanced AI technologies, particularly in voice simulation. He brings to light the ethical implications of AI being used maliciously to deceive individuals on a massive scale. The quote underscores the challenges of ensuring AI technologies are used responsibly.

"This is what it's going to look like for many companies, and that's going to apply to, I'm going to say, any company that has more than maybe three to five databases... they're going to find that data integration broadly, databases talking to each other, is where they're going to find the advantage in speed, execution, and reliability."
— Dr. Eric Daimler
 

Today's guest: Dr. Eric Daimler

The Future of Work Areas: Artificial Intelligence, Generative AI, Digital Transformation, Entrepreneurship, AI Ethics

Listen to the full episode:

Joining us today is an exceptional pioneer in the field of technology, Eric Daimler. Dr. Eric Daimler is an authority in Artificial Intelligence & Robotics with over 20 years of experience in the field as an entrepreneur, investor, academic researcher, and policymaker. Daimler has co-founded six technology companies that have done pioneering work in fields ranging from storage software systems to statistical arbitrage.

He's been an integral part of the leadership team at Carnegie Mellon University's Silicon Valley Campus, a Presidential Innovation Fellow during the Obama Administration, and most notably, the Chair, CEO & Co-Founder of Conexus. Currently, he also serves on the Board of Directors at companies like Petuum, Inc. and Welwaze Medical, Inc. He has also served as Assistant Dean and Assistant Professor of Software Engineering in Carnegie Mellon’s School of Computer Science. His academic research focuses on the intersection of Machine Learning, Computational Linguistics, and Network Science (Graph Theory). He has a specialization in public policy and economics, helped launch Carnegie Mellon’s Silicon Valley Campus, and founded its Entrepreneurial Management program.

Eric Daimler advised the Obama administration on how to have conversations about AI. His work led to the creation of the AI office within the Science Advisory Group of The White House which has now become a cabinet-level position reporting to The President. Eric's a walking encyclopedia about AI policy and he shares all in this fascinating discussion about the future of technology, ethics, and society.

Transcript

Alp Uguray:

Welcome to the Masters of Automation podcast. In today's episode, we have Eric Daimler. Eric, thank you very much for joining the podcast. It's a pleasure to have you.

Eric Daimler:

It's good to be here.

Alp Uguray:

So joining us today is an exceptional pioneer in the field of tech. Dr. Eric Daimler is an authority in artificial intelligence and robotics with over 20 years of experience as an entrepreneur, investor, academic researcher, and policymaker. Daimler has co-founded six technology companies that have done pioneering work in fields ranging from storage software systems to statistical arbitrage. He's been an integral part of the leadership team at Carnegie Mellon University's Silicon Valley campus, a Presidential Innovation Fellow during the Obama administration, and most notably the chair, CEO, and co-founder of Conexus. Currently, he also serves on the board of directors at companies like Petuum and Welwaze Medical. He also served as assistant dean and assistant professor of software engineering in Carnegie Mellon's School of Computer Science. His academic research focuses on the intersection of machine learning, computational linguistics, and network science, which involves graph theory. He has a specialization in public policy and economics, helped launch Carnegie Mellon's Silicon Valley campus, and founded its entrepreneurial management program. Eric Daimler advised the Obama administration on how to have conversations about AI. His work led to the creation of the AI office within the Science Advisory Group of the White House, which has now become a cabinet-level position reporting to the president. So Eric is a walking encyclopedia about AI policy, and he shares all in this fascinating discussion about the future of tech, ethics, and society. So once again, welcome to Masters of Automation. It's a pleasure to have you. To kick things off: how did you get into this space? Where did it all start?

Eric Daimler:

Yeah, that's funny. It's a long time ago, but I think it's just a series of fortunate circumstances. The old adage is that technology built before you're 10 or 15 is background; it's kind of part of the firmament. Technology that's invented between the time you're 10 or 15 and 25 is technology you can build a career on. And technology built after you're 25 or 30 is against the laws of nature. I think I was just fortunate to have fit in that time frame during one of the trendy waves of AI, where I thought this was a terrific place to potentially bet my career. And computers inherently, for me, represented this type of freedom available to people that could release, we'll say, the yoke of repetitive tasks. That's really what got me fascinated about it. It may seem odd to people not in the field, but I've thought that computers could help us become more human.

Alp Uguray:

That's a great narrative as well, too: eliminating the tasks and processes that we don't want to do, the mundane work. And especially right now, everyone is talking about generative AI. I think there's the aspect of how the internet started and data got collected, then how to arrange that data, how to organize it, and then understand it. And now we're at the stage of really generating and conversing with that data in ways that we couldn't imagine, which is really impressive. So tying to that point: you've seen that journey first-hand, the evolution of the internet and how it's tied to AI and data. Can you talk a little bit about where it started, where it's going, and what led you to start Conexus, some of the issues that you've seen?

Eric Daimler:

Yeah, it's really hard to predict with any degree of specificity more than two to five years out. I think if anybody is going to say what's going to happen in 10 years with any degree of confidence and not be blowing smoke, they're going to be talking in these broad generalities. I remember working with somebody at IBM, and there are very smart people there. Back in 2000 or so, we had predicted with them the demise of webpages. You might remember their marketing campaign about, essentially, e-business. They knew webpages would be diminishing in value and that they should be creating something else, but they couldn't have foreseen the app economy, right? They didn't know; they obviously missed it. IBM in many respects has been managing its own decline for the last generation. But they failed to see in specifics, and actionable specifics are what matter here, that there would be a job called app developer. And no one could foresee, today in 2023, that there's now a job called influencer, which we see around the planet affecting the places where we go on vacation. So we can think in broad brushes, but two to five years is generally where to look.

I can say that what's changed from my early views in AI to now is not just the nomenclature, where we used to call it AI research, and then once it got solved, we called it something else. And now people are throwing around the terms artificial general intelligence, and somebody talked about superintelligence yesterday, which kind of hurts my brain. Besides the nomenclature changing over the past generation, what's changed is the unforeseen implications of having massive amounts of data, and that the world is moving more quickly. People know that the world has a lot more data. Data is the new oil; that's kind of background. But what's less known is that there is a large number of data sources, really an exponential growth of data sources, in addition to this exponential growth of data in absolute terms, that then creates this intersection of knowledge, data, and data sources. What does that mean? What's the model that this data represents? That's the part that is hard to get one's head around. It's a combinatorial explosion. And the implications of that are just difficult to imagine very far out. That's what we're living with now. And that's what people are confronting now.

In my world, I worked in natural language processing for 10, 20 years. We always thought that more data would be better, but we thought there would be a law of diminishing returns. What we didn't quite expect is that the point at which we get diminishing returns was much, much bigger than we thought. GPT-4, the current LLM of our time, reportedly uses a trillion parameters. That was not what we had in mind in academia as the place where we should be aiming. We thought there were other parts of that technology stack that would also need to be coming along. But a trillion parameters was beyond our comprehension. We're seeing the consequence of that today, which is that the entities that can afford that sort of work are relegated more to corporations and governments, of course, rather than academic institutions. You know, Sam Altman said that it cost something on the order of $100 million just to train GPT-4, but what's less reported is the amount of electricity, just the electricity to power the computers. People have said it costs 10, 20, 30 million dollars just in electricity. That's not something a university is throwing around to be able to train these models.

So that's the sense of what's changed. It's the scale, in addition to this speed, that then generates this consequence of abruptness that I don't think we talk about enough. That's the characteristic of our modern times that needs more attention.
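
To make that combinatorial point concrete, here is a minimal sketch in Python (purely illustrative, with invented numbers): connecting every database directly to every other requires a quadratically growing number of mappings, while mapping each source once onto a shared model grows only linearly.

```python
# Illustrative sketch of the combinatorial explosion of data sources:
# integrating n databases pairwise needs C(n, 2) mappings, while mapping
# each source onto one shared model needs only n.
from math import comb

for n in (3, 5, 10, 50, 100):
    pairwise = comb(n, 2)  # every database talking directly to every other
    shared = n             # every database mapped once onto a hub model
    print(f"{n:>4} sources: {pairwise:>5} pairwise mappings vs {shared:>3} via a shared model")
```

At 100 sources, that is 4,950 pairwise mappings versus 100, which is the kind of growth that quickly becomes too large to reason about.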

Alp Uguray:

I would like to touch on the three things that you mentioned. One of them: nobody knew about app developers before they emerged. So I think with generative AI's emergence, the job market will obviously get impacted, and everyone is asking a lot of questions about that. What are some of the potential roles that can come out of it? I've been hearing prompt engineer, or there's a human-in-the-loop mechanism to help with reinforcement learning. And then there are the different aspects of it. We tie job creation to the user experience: I think a lot of people, when they interact with ChatGPT, don't know that it actually cost a lot of money to build, and they don't know that it can tell them very different things, which are hallucinations when they don't work. So from a societal lens, a neural network with something like a trillion parameters that costs a lot of money to build is obviously going to shape society in a different way and impact the job market and the things that we do, not just at the level of conversation, but also the actual processes and tasks at a company. But there's also the impact on the data side of it: as people generate more data on the internet, and ChatGPT is trained on the internet's data, if they continue to retrain it on the generations that come through, there will be a lot of bad data essentially out there. So how do you see some of the implications for society, starting maybe with the job market, along with some of the algorithmic training that comes into play and the risks of data generation?

Eric Daimler:

Yeah, you're starting with the jobs. It's a real, serious concern. I think that we have a multifaceted problem here to be addressed in regulation, to be sure, for our own safety, in ways that we've talked about and can talk about some more. But we need to reevaluate what our societal employment contract is, essentially, because we have not set up expectations that our job could be eliminated with such speed. In the past, we acclimatized ourselves, I might say, to this idea that jobs can fade. When elevator operators or switchboard operators were jobs that became automated, those people didn't have to go find new work the next day. They just knew that the kids coming up wouldn't have those jobs available. The speed of job elimination changed a little bit when you saw, for example, floor traders on the New York Stock Exchange getting eliminated. That happened much more quickly. As someone who lived through that, when we were automating parts of the job of a treasury bond trader, we began to see the power of digital technology, which is very different from analog technology, such as a switchboard operator. When I can do something digitally, I'll work to have that routine repeat itself until it is more reliable than a human. The nature of those things is that they don't work, they don't work, they don't work, and then when they work, they work infinitely well. So I don't have to wait until Wednesday afternoon to get rid of the staff that did my treasury bond trading. There's just no reason to have that team around for another day, an unfortunate consequence of developments in digital technology.

You know, we need to reevaluate how we take care of people who, through no fault of their own, are living by the rules and performing to society's expectations, and yet their job gets eliminated with this abruptness. How do we interact with that? That's a question we need to be addressing with more care. And for people who are a little too confident about their own job being secure, I can remind us all that we had thought automation would initially replace manual work, then it might come for some white-collar work, and the last would be creative work. What we found over the past six, nine months is that it might be in reverse. And so that just may suggest to people that we don't know where this automation is going to be most useful and therefore where jobs are going to be eliminated most abruptly. So be careful, all of us, not to get too smug in the belief that our job is safe. And that may motivate us to have this conversation about how we reevaluate the social contract around employment.

Alp Uguray:

Yeah, it is interesting, because there's also the element of how someone can react to this. For example, for someone who sees that their tasks are getting eliminated by ChatGPT or other types of LLMs, what is the next course of action for them at an individual level? And then there's, obviously, the other course of action the government can take to help them and create the rails for them to walk on, which comes next. So, what are some of the things that you think people can do? I mean, do you think with full democratization everyone will have a ChatGPT account, but also maybe a Bard account? There will be a lot of competitors out there. And now we've seen that the first to innovate typically is not the first to scale, so there could be other products out there that will become the norm for people. What are some of your thoughts around this?

Eric Daimler:

Boy, there's really a lot to say here. There are many other companies, entities, getting on the bandwagon of creating LLMs and creating even expressions of LLMs. At the time we're talking, in July 2023, the general reporting is that there are some, gosh, I think it was 16, but the number might be higher. I may have that number off; it might be double or triple this number. There are 16 companies that have Series A funding valuations of over a billion dollars doing LLMs. I mean, that's crazy money. There's going to be a lot to do in that space, but it's unclear that there are going to be 16 winners. The general sense is there might be four. And so that's a lot of cash to burn through. I was just talking last week to an organization that bought Nvidia chips, just speculating that they would then have a team to use them. I mean, this is amazing. They spent $60 million on 5,000 Nvidia chips before they even had people who could use the chips. I mean, that is a sign of something. Of something.

Alp Uguray:

It's the same as people buying a lot of toilet paper during COVID.

Eric Daimler:

I suppose, okay. All right. You're comparing toilet paper to Nvidia chips. I like it. But yeah, it's people maybe getting ahead of themselves, if we can square that circle. I think it's a really exciting place to be. When I was working with the Obama White House, I was yelling as loudly as I could to have people involved in the conversation about AI and how it affects society. But no amount of my nerdy self could have the same impact as ChatGPT has had in bringing people into the conversation. Nothing like showing people what could happen in their world to get them involved. With all that said, these technologies are pretty immature. An example one of my companies was working with last week tested OpenAI, in its expression in Bing, and also Bard, with this simple question: plan for me a trip to the next LA Lakers basketball game from San Francisco. I'm going to go from San Francisco to watch the LA Lakers in Los Angeles. The results we got from Bing said, well, there's a great game in May 2023. And this was July 2023, so it's already recommending something in the past. We know it's only trained up to 2021, but clearly, that's still just a failure mode. And Bard, from the smart people at Google, their software said, well, there's a great game for the LA Lakers in February 2024. It was February 24th, 2024, I think. And there were a couple of problems with that. Not only was that not the next Lakers game, but there is no Lakers game on February 24th, 2024. So

Alp Uguray:

Yes.

Eric Daimler:

it made stuff up. So, as smart as we think these technologies are, even with a trillion parameters, there are a lot of weaknesses. This is really where I'd like people to focus, on the proverbial Silicon Valley new, new thing. I'm sitting here in San Francisco, and we always want to ask, well, what's next, and what's next after next? And the next after next, the new, new thing, is generative AI, but generative AI that you can bet your life on. There is symbolic AI, good old-fashioned AI, that we can use in cooperation with, in conjunction with, generative AI based on probabilistic models, and that creates a hybrid. And it's in that hybrid that I think you will see most applications express themselves over the next five to 10 years. This is right after I said I can't predict five to 10 years out, so I'm going to give the broad brush. The broad brush, five to 10 years out, is that we're going to have more of a hybrid AI. You're going to have probabilistic models, stochastic models, and deterministic models. Because with the examples I just gave you, you don't want to put even treasury bond trading into that model, let alone the management of your power plant, right? You can't have life-critical applications with those sorts of errors. And that's really foundational to large language models. You might be able to investigate protein folding, which certainly has some very important components to it, but you're not going to build an airplane with it. That's foundational. You have to bring in something else. And that other thing is a generative AI based on symbolic AI,

Alp Uguray:

Mm-hmm.

Eric Daimler:

expert-system AI, that can now scale from discoveries in mathematics. And that's what our little MIT spin-out, Conexus AI, does. Conexus AI got spun out of MIT from a discovery in math in this domain called category theory, from Professor Spivak, who's one of my partners, that then allows these systems to scale and be applied to massive applications that you can bet your life on. Merging that with these LLMs, or other sorts of probabilistic models, is going to be the solution for large organizations over the next decade.
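
To make the hybrid pattern concrete, here is a toy sketch, not Conexus's actual method: a probabilistic model proposes an answer, and a deterministic, symbolic check accepts or rejects it against known facts. The `llm_propose_next_game` stub and the schedule dates are invented for the example.

```python
# Toy sketch of a hybrid: a probabilistic model proposes, a deterministic
# symbolic layer validates against ground truth before anything is trusted.
from datetime import date

# Deterministic knowledge base: a made-up published game schedule.
KNOWN_SCHEDULE = {date(2023, 10, 24), date(2023, 10, 26)}

def llm_propose_next_game(today: date) -> date:
    """Stand-in for an LLM; may hallucinate a plausible-sounding date."""
    return date(2024, 2, 24)  # wrong, as in the Bard anecdote above

def validated_next_game(today: date) -> date:
    proposal = llm_propose_next_game(today)
    # Symbolic guardrail: the answer must be a real, future game.
    if proposal in KNOWN_SCHEDULE and proposal > today:
        return proposal
    # Otherwise fall back to a deterministic computation.
    return min(d for d in KNOWN_SCHEDULE if d > today)

print(validated_next_game(date(2023, 7, 15)))  # -> 2023-10-24
```

The design point is that the probabilistic component is free to generate, but nothing it says reaches a decision until the deterministic layer has verified it.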

Alp Uguray:

What are some of the main problems that you see? I've seen the Congressional hearing where Sam Altman spoke, and there were different perspectives there about whether we should come up with regulations where the government or an independent authority can monitor the pre-processing and post-processing of the training data and then report on that. And as someone who worked in the White House with the president and

Eric Daimler:

Yeah.

Alp Uguray:

the cabinet, what are they not seeing? What are some of the blind spots right now that could later evolve to be problematic for society, the same way social media did, right? Heavy on advertisement and whatnot. I can elaborate more on that point, but what are some of your thoughts, especially around what people are not seeing? Because I think we are seeing that the LLMs need to be improved, that there are hallucinations, that they cannot be fully trusted right now, and that we do need to monitor them more. There's that emphasis from all the founders out there, and even big authorities. And there's always the worry that I've been seeing in the news: does the government understand AI enough to regulate it? And then there's the worry that, if they do, what would be the future implications of current restrictions?

Eric Daimler:

Yeah, yeah. I wrote an open letter during my time, right before I left, as a prescription of my recommendations to the next administration, where there were actually some very smart, well-meaning people in the world of AI, in my domain, kind of in the science advisory group, over the last five, six years. And I did it again: an open letter to the current administration, where again there are some very, very smart people in my group; that's what I can speak to, in science advisory. So I actually have a good amount of confidence in the right people being in the right jobs there. And I'm also pretty optimistic about the staff, you know, if not the congressional staff, if not the Congresspeople and the senators themselves, in understanding, or looking to understand, more of this technology.

I think we can add some nuance to this; it's not regulation or no regulation. One thing we missed in the debacle that was social media, with the obvious dangers around misinformation and the tribalism that social media engendered, is that we can do something besides just restrict behaviors as an interim step. The example I give, taking off from part of what you said, is that we can at least evaluate some of these algorithms. So, first of all, we would be well served to separate out research from the expressions of research. One of the misguided interpretations of that open letter to stop AI work was that it conflated research with commercial offerings. It also was quite thin on recommendations of any sort; it just said stop. And certainly there are a lot of people who have some skepticism about the conflicting interests of some of the participants in that letter, signatories to that letter, and even the conflicting interests of some of the people advocating for regulation, because historically that can just entrench the leading players of the moment.

So, all that said, we can go back to what we could do in the short term that would just give us some flexibility for the future while educating us all about what we want to have happen for society. It could go like this: algorithms can be submitted to a third party for evaluation. You can either have the intricacies of that algorithm inspected by a group of experts, like myself, to evaluate: does this do what it says it does? Or you could do something like a zero-trust evaluation, much like we do with credit scoring. We don't know what's actually inside the credit-scoring algorithms. We just know that we have fed them data enough times, with output that seems to correlate to what we want as a society, that we generally allow for these black boxes called credit-scoring algorithms to exist. We could start that process now, especially for some of the most critical functions these AI algorithms are placed in, and not in a heavy-handed way. At first, that could just be revealing the outcomes, without actually imposing any sort of restriction on these sorts of things. Call that whatever you will, but it doesn't need to be regulation in the way that we might think about restrictions being imposed by the government. And in any case, we're going to need to do something, because this is a global conversation. We want to find that balance and therefore be part of the conversation. What we have found from social media is that doing nothing is not a really good path to go down.
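
As one concrete reading of that zero-trust idea, here is a minimal sketch with an invented stand-in model and invented test data: the auditor never opens the black box; it only probes the model with inputs and checks whether outcomes stay within bounds society has agreed on.

```python
# Minimal sketch of zero-trust, output-only evaluation of a black-box model.
import random

def black_box_score(applicant: dict) -> float:
    """Stand-in for a proprietary model submitted for evaluation."""
    return min(1.0, 0.3 + 0.5 * applicant["income"] + random.uniform(-0.1, 0.1))

def audit(model, test_cases, threshold=0.5):
    """Report approval rates per group; a large gap gets flagged for review."""
    rates = {group: sum(model(c) >= threshold for c in cases) / len(cases)
             for group, cases in test_cases.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

random.seed(0)
cases = {"group_a": [{"income": random.random()} for _ in range(1000)],
         "group_b": [{"income": random.random()} for _ in range(1000)]}
rates, gap = audit(black_box_score, cases)
print(rates, "gap:", round(gap, 3))
```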

Alp Uguray:

Yeah, and there's the part about the open-source community. I've been reviewing a lot of what people are doing with these LLMs. I know it's very expensive to spin up a ChatGPT, or an LLM as powerful as that, but there's been a lot of activity in the open-source community that just takes part of an LLM and finds an application for it, whether that's a Chrome extension that writes a LinkedIn post, or a Chrome extension that scans videos and gives reports on sentiment and whatnot. So there have been a lot of applications like that, and I think the open-source community, and the network effects within those communities, really scale some of these ideas. That's also part of ChatGPT's success that people spoke about a lot; we see Sam Altman being an ex-president of Y Combinator, and he really understands network effects and how to scale a business. So when it comes to that open-source community and what can come out of it: essentially any user from any country can leverage those models, generate an output today, paraphrase that output, and then release it again on the internet. And I think that can lead to more misinformation, but maybe also to better information, access to knowledge in good ways as well. So there are obviously pros and cons. But as we see this scale, there's the corporate section of it, where the government speaks to the OpenAIs, Microsofts, and Googles of the world, but how can it also speak to the open-source community? What are some of your thoughts around, maybe not only regulation, but understanding how the behaviors differ?

Eric Daimler:

I think we're going to need to just account for the idea that this technology can grow in ways that are hard to imagine. That's just consistent with the zeitgeist of the open-source community. In this case, I don't necessarily separate out the open-source community from the activities of anybody else; it's just the nature of an organized group of motivated, smart people that we've been able to put a word around. We're going to expect more data provenance to be built into these systems. We're going to expect more data lineage to be built into these systems. In some areas, that could be in the form of digital watermarks; that term might be something people don't understand as we apply it to other domains that may not be as visual. The sources of these data ultimately need to be tracked down to a place that we respect. It's really quite strange when we think about our current world. There's a lot out there that we read day to day that we trust as having been based on some source data, often just because of the particular brand of the media that we're following. But outside of those areas, there's a lot that we just read that has no facts behind it. And, you know, even as scientists, we are careful not to be too smug about that, because we have a lot of scientific papers that certainly can be peer-reviewed but are often never replicated. So the veracity of the research itself may not be as solid as the 30-second, one-minute sound bites we get on media would have it represent any sort of scientific finding. So I think we will just need to respect the idea that there will be this infinite range of expressions available. That is expressed right now in the computing power, where, in the example I gave earlier, $60 million worth of Nvidia chips is what is going to be applied to the training of a new LLM. Once that's trained and we know the weights for the ultimate result, I can probably run that thing on my phone. So the applications for that LLM are going to be democratized just by their nature. And I am a little hesitant to prescribe any sort of restrictions on that; I would really want to see where it could take us.
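
To illustrate what data lineage could look like in practice, here is a minimal sketch, not any specific standard or product feature: a provenance record ties a piece of content to its claimed sources with a hash, so any later edit is detectable.

```python
# Sketch of a hash-based provenance record: content is bound to the sources
# it claims to derive from, and tampering breaks verification.
import hashlib
import json

def provenance_record(content: str, source_ids: list) -> dict:
    digest = hashlib.sha256(content.encode()).hexdigest()
    return {"sha256": digest, "derived_from": source_ids}

def verify(content: str, record: dict) -> bool:
    return hashlib.sha256(content.encode()).hexdigest() == record["sha256"]

article = "Claim: X happened on Tuesday."
rec = provenance_record(article, ["source:reuters/123", "source:ap/456"])
print(json.dumps(rec, indent=2))
print(verify(article, rec))                # True: lineage intact
print(verify(article + " (edited)", rec))  # False: lineage broken
```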

Alp Uguray:

Yeah, I recently saw a paper, I don't remember off the top of my head who wrote it, but apparently an LLM can theoretically consume about 1 billion tokens. And it was speaking about the scale that can lead to. Obviously, over time, costs are going to come down, especially on the chips, and processing power is going to increase. Maybe there will be some cross-section with quantum computing as well. Things are going to get cheaper and also provide more ability to digest more content. I'd like to ask you about some of your perspectives on identity confirmation. This discussion came up in one of my previous podcast episodes; my guest, Tatiana Mamoud, was saying that as technology intermediates communication between people, how do we make sure we know who we are speaking to? Obviously, there's the element of text generation, and then voice generation, image generation, video generation, which have great positive impact for certain business cases. But the discussion comes in at the point where we can simulate everything: how can we make sure identity stays the same? I just wanted to quickly throw that in there to get some of your perspectives.

Eric Daimler:

Yeah, my perspective is that I can get frightened pretty easily by this. You know, the sophistication is such that we're no longer just concerned about my media being biased, or being in a bubble, or twisting my beliefs, which it kind of manifestly does; we pretend that we're conscious and aware of it. The new distortion is going to come from my ability to generate these nefarious use cases, where I can simulate the voice of somebody in my family: learn everything there is to know about me, learn the voice of a family member, give it a heuristic saying, convince them to vote for this person, or buy from this person, or buy this thing, and then have that be really customized to be convincing, coming from somebody I trust, and then repeat that 20 million times over 20 million people. That's frightening, and that's going to be possible. How I combat that is unclear today in a specific sense. But I think I can say that over the next couple of years, I've seen some technologies that look promising that will give us a personal AI to combat these nefarious AIs. I think we're going to have a digital agent that is going to follow us around to help us interact with this emerging nastiness that we're going to see around the world coming at us. It's a different type of spam. We might say it's a different level of sophistication of spam, and we're going to have to adapt to it. The people who don't are going to be the ones who are most vulnerable. That's the type of education I think needs to happen. That's probably actually the scarier part for me than the technology: people will need to educate themselves to protect everyone from these new manipulations. That speaks to this identity concern, for which there's a lot of technical exploration as solutions.

Alp Uguray:

Yeah, it sounds like cybersecurity itself will have to evolve to adhere to the changes happening in the market. And I do have a question. It's interesting: right now, obviously, the internet is vast. There are very different data lakes and different databases, from vector databases to classical databases and whatnot. And a classical enterprise typically is not as progressive, maybe not ready to integrate even a simple LLM into its knowledge management system today. So, tying this a little bit to Conexus and the work that you've been doing with the MIT professor: how do some of the problems you solve today help those customers, I don't want to say with digital transformation, but help those customers be more innovative and more up to speed with current tech stacks?

Eric Daimler:

You know, it's a perfect direction to take the conversation. Conexus AI was founded on the premise that these systems will become too large for us to reason about. The generating problem came from two places. One was the US supply chain, way before we appreciated the vulnerabilities that we experienced during COVID. Much of the supply chain within the United States, and therefore the world, was manual and presented a certain vulnerability: not necessarily a vulnerability in security, although it has that, but a vulnerability in its rigidity, a vulnerability in its ability to scale throughput, because of all these places where databases don't interact. The other motivation that originally seeded this technology spin-off from MIT was around the defense of the United States and its allies, particularly some airplanes, where there were north of a million parts involved in the airplane. And the concept was: how do we take what we learned from the last airplane and apply it to this airplane? And how do we guarantee the integrity, just the security integrity, let alone the technical integrity, of the million suppliers of this airplane? We can't reason about that. There's no one person, obviously, smart enough. And you can't put that in Excel; it's too big. I might like to think of myself as smart, but I don't need to be that smart to think, oh, this is probably a problem that's going to apply to other organizations over the next 10 years. If these two organizations, broadly speaking, considered this to be a problem back when we spun out of MIT in the mid-2010s, what's it going to look like in the 2020s to large organizations across the globe?

So that's what Conexus AI was developed to do: to bring together databases, data models, and knowledge into a universal view that people can then reason about themselves. We apply an AI to the discovery of the common data and data models around an organization. The implications of this are not just the ability to more quickly take advantage of opportunities that emerge from signals given across an organization. It also provides some answer to the security questions we talked about earlier: how do you guarantee the integrity of an identity? Well, first of all, you can look across the panoply of touch points for that identity to guarantee that it is consistent with what's been seen so far. Many data vulnerabilities arise because systems are isolated and don't communicate with other parts of a larger system in order to be validated. So part of what Conexus AI does is allow leadership to better view the command and control of increasingly large organizations. It also allows for speed of execution across a range of opportunities, from cloud migrations and data integration systems to application upgrades being executed more quickly to take advantage of opportunities. We are replacing technologies such as RDF and OWL, which were attempts at scaling database integration in the early 2000s that have been shown to be inadequate to the scale at which we're operating today in 2023.
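
As a toy illustration of that cross-touch-point idea, here is a sketch with invented systems, fields, and records: an identity is validated by checking whether every system that has seen it agrees on its attributes, rather than trusting one isolated silo.

```python
# Sketch of cross-database identity validation: attributes are compared
# across every system that holds the identity, and disagreements are flagged.
RECORDS_BY_SYSTEM = {
    "crm":     {"id-42": {"email": "a@example.com", "country": "US"}},
    "billing": {"id-42": {"email": "a@example.com", "country": "US"}},
    "support": {"id-42": {"email": "other@example.com", "country": "US"}},
}

def consistency_report(identity: str) -> dict:
    """Collect each attribute's values across systems; flag disagreements."""
    seen = {}
    for records in RECORDS_BY_SYSTEM.values():
        for field, value in records.get(identity, {}).items():
            seen.setdefault(field, set()).add(value)
    return {field: ("ok" if len(vals) == 1 else f"conflict: {sorted(vals)}")
            for field, vals in seen.items()}

print(consistency_report("id-42"))
# {'email': "conflict: ['a@example.com', 'other@example.com']", 'country': 'ok'}
```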

Alp Uguray:

I think there are also two elements that are really important. There's the knowledge management part: especially with the move to LLMs, people will want to feed their knowledge to a generative AI or LLM to create a conversational interface and communicate with that knowledge better. And then there's the part about understanding the organization's current underlying business processes; there are some process-mining players out there that scan IT systems to design a process graph based on the repetitions in event logs. It is definitely where companies are lacking, because they don't have transparency into what is going on in the business, and they are also unable to move fast enough to integrate with current innovative technologies like generative AI and give their users and customers better interfaces, whether through products or whatnot. Obviously, some of it is also because there are a lot of players out there and everyone has a different solution for databases: somebody buys a vectorized database, and then another type of database, and another, like an SAP and an Oracle, and at that point nobody knows what is going on in the business anymore. So it definitely sounds like something extremely applicable for the future, something that allows a company to innovate more. And tying that a little bit to the future of it, right, I know we said not 10 years, but two years, maybe three: with every disruptive innovation, there are new companies forming, and with that comes old companies evolving, like the SAPs of the world. I've seen an interesting business case from Morgan Stanley. They adopted OpenAI within their wealth management and enabled access for all the wealth managers working there, allowing them access to that knowledge. So any wealth manager who wants to know, okay, where did we succeed when a client had XYZ requests, it gives them that intel very quickly. And obviously, to get there, they need to do those two to three things first. So what are some of your thoughts around getting there? What can companies do to be more innovative and ride this wave a little bit?

Eric Daimler:

You know, it's an easy answer for me, because I'm going to say: it's Conexus AI, right?

Alp Uguray:

Oh yeah, I know, I know, but more from the industry level, the industry perspective on it.

Eric Daimler:

Yeah, I mean, Conexus AI integrates data and integrates databases. You know, if you're developing an airplane and you have a data model for a wing, a data model for a fuselage, and a data model for an engine, you have this common variable called vibration. You can't just merge them, because what happens to the variable called vibration? This is a real industrial application. And today, when it comes time to scale, we lose much of the formal-methods rigor that went into the development of the fuselage or the engine. So you have to apply, if it's not Conexus AI, some other modern type of generative AI that's not probabilistic, that's not stochastic. You don't want to rely on probabilities for joining the wing to the fuselage and the engine to the wing; that doesn't help you reduce the chance of catastrophic failures. NASA's now doing this in the development of their rockets. What they say is that over the past generation, since the beginning of NASA, their large projects have taken a decade. They recognize that they can no longer afford that length of development with technology moving so fast. The result of the old way of doing it is that by the time they finish a project, it's obsolete. There are many stories about this; there were stories even in World War II of weapon systems that were designed and deployed after the need was long since gone. So the way these companies need to respond, at the largest level, and then progressively smaller and smaller organizations, is to have their databases interact with each other. That's really, fundamentally, the answer. That's the easy way of saying it. It can get more technically complex, but databases need to talk to each other. When databases talk to each other, you can have knowledge sharing, you can have collaboration, you can act more quickly, and you can develop more quickly. You can respond to problems more quickly. You know, NASA's goal is to bring that 10-year development cycle for complex projects down to a couple of years. That's where they need to go. This is fast iteration in action. This is what it's going to look like for many companies, and that's going to apply to, gosh, I'm going to say, any company that has more than maybe three to five databases, whatever those databases are. They could have a customer database, a manufacturing database, their financial database. Any more than three to five databases, and they're going to find that data integration broadly, databases talking to each other, is where they're going to find the advantage in speed, execution, and reliability.
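
The vibration example is easy to see in miniature. Here is a hedged sketch with invented schemas and field names: a naive merge of two data models silently loses one of the colliding measurements, while an explicit mapping into a disambiguated universal schema, in the spirit of what Daimler describes, keeps both.

```python
# Why you "can't just merge" two data models that share a field name:
# both the wing and the fuselage define "vibration", but they mean
# different measurements.
wing     = {"vibration": 0.8, "span_m": 34.2}
fuselage = {"vibration": 1.3, "length_m": 44.5}

naive = {**wing, **fuselage}
print(naive["vibration"])  # 1.3 -- the wing's measurement was silently lost

# Explicit, lossless mapping into a universal schema instead:
MAPPING = {
    ("wing", "vibration"):     "wing_vibration",
    ("fuselage", "vibration"): "fuselage_vibration",
    ("wing", "span_m"):        "wing_span_m",
    ("fuselage", "length_m"):  "fuselage_length_m",
}

def merge(**models: dict) -> dict:
    return {MAPPING[(name, field)]: value
            for name, model in models.items()
            for field, value in model.items()}

print(merge(wing=wing, fuselage=fuselage))  # both vibrations survive, disambiguated
```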

Alp Uguray:

That answers my question really well. I think businesses, especially those that want to adopt innovative technologies, will have to innovate themselves first, their operations first, when it comes to IT and beyond. I know we're almost out of time, but I do have one last question, which is more on the philosophical side. We've seen a lot of books and shows, you know, there's Brave New World, there's Westworld, there's 1984. There are good depictions of dystopian worlds, and obviously there are also great depictions of utopian worlds. But what is recurrent in both is the ability to create consciousness, or replicate consciousness, that can either speak as ourselves or speak independently as someone else. A simple example of it is asking ChatGPT to write a poem about a certain topic the way Shakespeare would. So in a way it is of oneself, and from one angle it allows one to be immortal; there's the frowned-upon part of it as well, and if it hallucinates, it's obviously not great. But do you think, as the future unfolds, and I know we said two to 10 years, but let's say the horizon is infinite, does humanity need to accept being mortal and then create this external intellect? How would that define our evolution? I know it's an open-ended question and a philosophical one, so I'll let you answer it the way you'd like to.

Eric Daimler:

Yeah, I'm not sure I have any special expertise to be able to answer that excellent line of inquiry. I can offer a few things. People would often ask me, when I was in the White House, about the degree to which we would have a utopia or a dystopia consistent with some Hollywood narrative. And the answer I gave then, I still believe now: it's up to us. Where we are on that spectrum is up to us. One of the values of studying history isn't just in discovering names and dates; it's in the realization that our current world didn't have to be this way. It wasn't inevitable that we turned out this way. We have this series of events that turned out the way they did, that all contributed to the world in which we now live, in all respects. That should give us all a new feeling of agency about how we can participate in the conversation about the future we want to leave to our children, to future generations.

The degree of consciousness these computers have can, in one sense, be entertaining, but in another sense it can kind of miss the point, because way before we get consciousness, some very scary things can happen, as seen in the manipulation by social media, where we all become smaller, in some sense, in our ability to consume other viewpoints around the world. You know, I was just reminded by one of my friends, Nita Farahany, of her book, The Battle for Your Brain; she is a professor, actually, of law and philosophy. That's a book I'd encourage people to get, besides my forthcoming book, which isn't out yet, and my wife's book, which is on organizational structure and how to operationalize culture, reculturing. I can recommend The Battle for Your Brain because it's the type of book that is suggestive of the dangers we have now, way before sentience, way before AGI, way before artificial superintelligence, where we need to be involved in the conversation.

I can say one more thing about sentience itself. I was just talking to one of my friends about the degree to which forecasters think there is a less than 1% chance of AI producing an existential threat to humanity over the next decade. That's still pretty high. Less than 1% doesn't really

Alp Uguray:

(laughs)

Eric Daimler:

reassure me at all. But I can give people maybe some comfort that it's not going to happen anytime soon, and not just because of me telling you that, that my friends and I don't think it's going to happen over the next 20 years, or maybe ever. It's that what we know about thinking isn't really represented in computers. You, or anybody, can't tell me, or anybody I know, how to program a brain. We just don't know. We don't know who it is that's staring through our eyes. We don't know how to program that up into something a computer would recognize, the fact that we have a voice in our head. Like, who's that talking in our head? The way of representing it in a computer is ultimately deterministic. The chip is ultimately deterministic, even though the computer languages can have stochastic elements. It's probably no accident that we as human beings have a biological system with a probabilistic brain and a deterministic nervous system working in conjunction. We don't really have one without the other. We don't have brains operating without a nervous system; we don't have nervous systems, obviously, working without brains. I suspect that whenever we get sentience, you will find that expression take more of a biological form. And we are just way off from that.

Alp Uguray:

Yeah, it's way ahead in the future, especially since the AI can't even tell when the next Lakers game is.

Eric Daimler:

Very good. Very good. It's been said that the battle against the robots will be very difficult because we won't be able to determine who the robots are and who we are. One way of interpreting that is that we will start to augment our own bodies with progressively more sophisticated devices. So we will just transform ourselves, rather than some Terminator figure looking like us and eliminating us.

Alp Uguray:

Yeah, I think that would be quite interesting. And on that point, I know there's some discussion about Elon Musk and his company testing some robotics integrations, but I think that will be a discussion for another time. I want to be cognizant of your time as well. This was a fantastic episode, very thought-provoking, and tied to current events as well. We touched on some of the most important aspects of AGI and AI, and actually how to get there, along with the philosophical aspects, the regulatory aspects, and how governments are viewing it. So once again, thank you very much. And as always, you'll have an open invitation to come back anytime.

Eric Daimler:

Thank you.

Alp Uguray:

Thank you very much, Eric. Stop the recording.

Creator & Host, Alp Uguray

Alp Uguray is a technologist and advisor, a five-time UiPath Most Valuable Professional (MVP) awardee, and a globally recognized expert on intelligent automation, AI (artificial intelligence), RPA, process mining, and enterprise digital transformation.

Alp is a Sales Engineer at Ashling Partners.
