Innovation Stories

Rethinking Value: When AI Reprices Human Work with Doug Shannon

Daniele Di Veroli



What if AI didn’t just complete tasks, but managed goals—and did it across a team of specialized agents that understand your company’s language? We sit down with an enterprise AI practitioner to map the shift from lone assistants to orchestrated multi-agent systems, where an “orchestrator” breaks objectives into parts, routes work, and explains the why behind every step. Along the way, we dig into the governance that keeps agents safe, the incentives that quietly shape behavior, and the practical guardrails leaders need before turning software loose on real processes.

So much of AI’s promise begins and ends with context. We talk candidly about messy datasets, the limits of RAG when your storage is a swamp, and why automation is the quiet “AI factory” that stabilizes inputs and outputs. Once you have reliable processes, models can reason over reality instead of brittle checklists. That’s also where ontologies shine. By codifying how different teams use the same words, a living ontology gives agents the nuance to interpret intent, translate across functions, and avoid expensive misfires. In media, that same machinery enables hyper-personalized content and product placement—powerful and risky—raising hard questions about consent, biometrics, and pricing games.

The conversation tackles bias and alignment head-on. Bias lives in data, not in math, so we model better by seeking multiple viewpoints, demanding citations, and resisting echo chambers. Alignment means deciding where humans must stay in the loop, when autonomy is acceptable, and how systems justify actions. We also challenge the default layoff narrative with a simpler ROI: don’t fire, stop hiring. Keep institutional memory, upskill teams, and let AI multiply the people who already know the work. As knowledge work gets repriced, the companies that retain context and build explainable, orchestrated systems will move faster and break less.

Support the show

Visit my website: https://danielediveroli.com

If you'd like to discuss a project, brainstorm an idea, or anything else, schedule a free consultation video call.

https://danielediveroli.as.me/

Meet The Enterprise AI Practitioner

SPEAKER_00

Doug, nice to meet you.

SPEAKER_01

I'm an AI leader, a practitioner in the space, doing this for companies and enterprises: we've been doing automation for almost eight years, AI for a little over three, and GenAI for the last three. So I come at this from an enterprise background, from understanding where the hurdles are, what the ask, demand, or pain point is, and then taking that from a very human perspective, a very human-first approach of saying: how do we drive this? What does it look like? And how do we enable, empower, and embolden the users around us to really drive what that next step looks like? Because we don't want to leave people behind. But we can dive into that.

Will Agents Replace Jobs

SPEAKER_00

I'd like to touch on a very different point: for example, how agents will change the workplace, or jobs in general, because in my opinion there will be a lot of layoffs because of that. Things are moving very fast.

Why GenAI Changed Adoption

SPEAKER_01

As of last year, in 2025, we were seeing that every two years of AI progress was essentially one year of human progress. Large language models, and the generative AI that started with LLMs, make it an easier transition for humans to actually utilize the technology, because we can talk to it, we can talk to it in different languages, and we can talk to different people in different languages, because the world we live in is very diverse. It's fantastic. That's why generative AI helped to really drive that connectivity. Traditional AI was very much: you have to name the things, you have to build the ML, and knowledge graphs would go stale every 30 to 90 days, and you'd have to redo them all because the knowledge changed, the way we call things changed, the nuance changed. Now we can actually keep up with a lot of that.

Limits Of LLMs And Conscious Use

SPEAKER_00

LLMs mimic human behavior, but they don't have the same insight. They are very good at mimicking human beings, but they are not dreaming yet. What do you think about that? About the relationship between human beings and AI, and how people use AI, whether they use it in a conscious way or not.

Risks And Governance For Agents

From Single Agents To Orchestrated Systems

Chain Of Thought, Custody, And Reasoning

Philosophy And Critical Thinking Return

SPEAKER_01

Yeah, I think one of the biggest risks is just telling your agent what to do and letting it keep going 24 hours a day; of course it's not going to say no, and that's highly dangerous. So if you're going to do it, make sure you control the incentives that you're giving your agents, and control the overall governance. And if you're going to allow other people to have that control, there's risk. So I think the biggest thing there is just understanding that that's the nature of it. Agents and LLMs, and even agentic AI, are a step in the right direction, not the end-all-be-all. We're seeing, and I've talked about this online for years now, that multi-agent systems are the future. But it's not just multi-agent systems; it's the orchestration of those agents. And not just an agent with other peers, but an actual orchestration agent that doesn't just take what you wanted to do and go take action on your tasks. No, it handles the goal. Like: hey, I want to do a thing, I want to drive to that goal, and here's what I need to know about it. That orchestrator agent can take the goal and go, okay, here's how we break this down, and I can send it to another agent to break that question down further. It's kind of like what we have right now with LLMs and chain of thought. Chain of thought makes it easier for all of us to interact with these systems, not just because of language, but because if we don't know how to say something, we go, man, I really want to understand this. How can I look at that differently? How do I prompt this to get a better response? And chain of thought goes, oh hey, here's how you do that prompt. Oh, so I don't need to learn prompt engineering? That's fantastic. It's already built in.
I don't want to say that too blanketly, because at the design level, the system level, the non-end-user level, yes, prompt engineering is still a thing. But for everyone else, it's not. And then you have the chain-of-custody side. You ask the agent or the LLM a question, and it gives you some information. You should never blindly trust that information, and not only because of hallucinations, but because of the way AI works: when it's going through its chain of thought, it's thinking a thousand different things, but it's only serving you up one. You should ask why. People should think more critically: what is this giving to me, and why is it giving it to me? Chain of custody gives you a little insight; it says, I pulled these articles, I pulled these things, here's where it came from. Okay, so I asked you what to do, you gave me some insight, now I can do some reasoning. And then the chain-of-reasoning side beyond that is: here's why the AI thought what it did. We're starting to see more of that. The last GPT model gave us a little insight there, and a lot of the coding models give us some of it: how did you write that code? What is this line? And then we go, that's not good code; hold on, that reasoning didn't work because of X, Y, and Z. That's what we're getting to now, that whole reasoning dynamic. So when it comes to agents, that's what I have to say: think about it from a very human angle. Don't always trust the output. Definitely ask why, and think critically. And this is why a lot of people I've talked to are saying that the future humans should really rely on and drive forward is getting back to philosophy.
We should get back to understanding humans, understanding why we think the way we do. When the apple fell out of the tree and hit Isaac Newton on the head, he went, hmm, gravity, that's interesting. Let's go do the math behind that. Let's think critically about why this happens. The tree didn't drop the apple just to drop the seed; there are other forces and materials at play that we should be curious about. Yeah.
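The goal-level orchestration Doug describes, an orchestrator that decomposes a goal, routes pieces to specialized agents, and keeps a "why" and a source trail for every step, can be sketched roughly like this. All class and function names here are hypothetical; a real system would use an LLM for decomposition rather than the naive keyword matching shown:

```python
# Minimal sketch of goal-level orchestration across specialized agents.
# Names are illustrative; real decomposition/routing would be LLM-driven.

from dataclasses import dataclass, field

@dataclass
class Result:
    output: str
    why: str                                      # chain of reasoning
    sources: list = field(default_factory=list)   # chain of custody

class Agent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def run(self, task):
        # A real agent would call an LLM or a tool here.
        return Result(
            output=f"{self.name} handled: {task}",
            why=f"routed because task matched skill '{self.skill}'",
            sources=[f"{self.skill}-knowledge-base"],
        )

class Orchestrator:
    """Takes a goal, breaks it into tasks, routes each to the right agent."""
    def __init__(self, agents):
        self.agents = {a.skill: a for a in agents}

    def decompose(self, goal):
        # Stand-in for LLM-driven decomposition: naive keyword matching.
        return [(skill, f"{skill} portion of '{goal}'")
                for skill in self.agents if skill in goal.lower()]

    def achieve(self, goal):
        return [self.agents[skill].run(task)
                for skill, task in self.decompose(goal)]

orch = Orchestrator([Agent("HRBot", "hr"), Agent("FinBot", "finance")])
for r in orch.achieve("update hr policy and report finance impact"):
    print(r.output, "|", r.why)
```

The point of the sketch is the shape, not the routing logic: the orchestrator owns the goal, and every sub-result carries its own explanation and provenance, which is what makes the system auditable.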

Black Box Or Not And Model Differences

SPEAKER_00

Yeah, I totally agree with you. I have another question about LLMs. I've heard a statement about the fact that we really don't know how or why LLMs behave the way they do. I'd like to have your opinion about that. What do you think?

SPEAKER_01

I think the best way for everyone to think about it is: there's the agent side, the software side, and then robots as the physical side, and we have to put them in two different categories. The reason why is that every time NVIDIA comes out with anything, they say, here's our roadmap, here's our moonshot, and it's all about physical AI, out in every robot. They say, we're building the backbone; they're literally going to put it on cell phone towers. So that becomes the layer of infrastructure to track all of these robots: physical AI. And then you have the software AI that we're dealing with. Going back to LLMs, again, it's language, pseudo-brains. It's how we communicate, because with coding and everything we've done before this, it was all assembly, all putting it together. Now we can just talk to it and it can write code, and this is why we have agents taking actions and having agency to take those actions. Now, at the same time, when people say AI is a black box, it is and it isn't. It isn't, because Anthropic has shown that it's mathematics underneath, and that it thinks backwards first. If we ask it for a story where the rabbit gets the carrot and jumps over the log, it's working out: what jumped over the log? It was a rabbit. What did the rabbit look like? Was it colorful? What kind of story are we looking at, and how does the narrative go? It's as if two different moderators are working together. And then you have OpenAI's secret sauce, however they do the weights. That's the stuff we don't know. We don't know who has what weights. We know that Anthropic uses constitutional governance, and they're trying to be more open with things. Fantastic.
OpenAI is being less open about things, even though they have "open" in the name. So we have to be mindful of what we're using and when. And really, that's why a lot of companies are being careful about what they use where, because you don't want to give out too much information. That's what I would say to that.

SPEAKER_00

Yeah, coming back: you talked about enterprises. First of all, in my experience, the biggest issue that I've experienced working with enterprises is that datasets are messy.

unknown

Okay.

Messy Data, Context, And RAG

SPEAKER_00

How important is it for an enterprise, for a company, to have its datasets organized in a very clever way? Before introducing AI models and ontologies, you need to clean up the whole dataset in order to avoid any issues.

Don’t Fire, Stop Hiring As ROI

Automation As The AI Factory

SPEAKER_01

So if you have the wrong information, or you're pulling the wrong nuance, or you zero-shot prompt something with no context, it goes wrong. As humans, our job is to provide context, because we are the ones in control. We have the known knowledge, whether it's general or narrow. Even among humans, we have narrow specialists who are very good at one thing, and generalists who are very good at lots of things and have had a lot of experience. It works the same way: the more context we give it, the better answers we get. This is why RAG came in very quickly and said, let's build context, let's build vector storage, put it all together, and now we have all this information. That's great, and it does narrow things down, yet it doesn't cover all the instances that could be asked; it only covers whatever you stored in it, whether that's your HR data or certain aspects of your company. So that's where we move into: how do we do this better? How do we drive this context? You have the Microsofts of the world, and most leaders, saying: categorize your data, filter your data, tag it. What is this about? Is this an HR thing? An IT thing? A business thing? Is this downstream, or sales? Those kinds of things are great. But when you tell companies that, they go: what do I have to do? How do I do that? Where do I start? Because we tried this in the DevOps days, ten years back if not longer: let's get our data in line, let's do our CI/CD pipeline, let's deliver it into a data store.
It became a data lake, and then those data lakes became data swamps, because the data was never really governed on the way in, and now you've got this wonky data sitting around. The nuance has to come from somewhere, and that goes back to the original question: if companies are getting rid of people, where is the context going? When those people go, there goes the context. Why does that happen? Because of the MBA training a lot of these leaders have, where they learn that humans cost the most money, since we have to pay them, they tend to get rid of people first. But those people have the context, they have the understanding. And this is where I give my answer to the ROI question. As humans, we're all very biased creatures by nature, and we're very short-sighted. We don't see very far into the future, we don't think ahead very far. I don't know why, but we just don't. So since we don't, the ROI answer is: don't fire, just stop hiring. Don't fire people just because you wanted to cut headcount. Let go of the ones you truly have to, if you must, but don't go getting rid of 10,000 people, because you have no idea how much context you just let go. You may have to hire them back at a higher rate, and they may not want to come back, because people don't want to be rehired by a company that told them to go away. But if you stop hiring, you can say: great, we're going to make more money with the people we have. And this is where the whole conversation these leaders are having falls on its face. They're always saying AI is going to make us more money, the company is going to grow X amount. Great.
Well, in the past, you needed more people to meet that growth. If we get rid of people, do we still grow? Sure, but if you get rid of people, that context goes away. And if you use AI to drive more business, those people can do more with it than they could by themselves. So if you enable them, if you empower them, and they become emboldened to teach everybody else, 100% you're going to make more money. And you don't need to get rid of anybody. If you're not hiring new people, the instant ROI is that you're not training people only to have them leave for a different or better job. Also, by not firing and taking the not-hiring approach, your internal people care more about your business, because your internal employees are your first customer. If you can keep them happy and say, hey, we're going to help train you, upskill you, show you the way, empower you, embolden you, they're going to be a lot happier working for it. And when you get to the point where you're saying, dang, we built this AI, we have this agent, it does these things: you also need automation in there, because automation is the groundwork of how you build processes, the processes that AI can then run on and bolt onto. Those aspects have to be in play, and if they're not, you're risking a lot. I don't know if what some people are doing is operational theater, or a mixed bag of playing with numbers to show value because of market share, because the markets do play a big role in this, and people have stakeholders, and even leaders have stakeholders. There are aspects of that that are very concerning. I don't think that jobs should be going away. I think we should be enabling people to drive more, do more, and see what the end result of that looks like.
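The retrieval step behind the RAG and vector-storage discussion above can be sketched in a few lines. This is a toy: the bag-of-words "embeddings" and the document names are invented stand-ins for a real embedding model and vector store, but the mechanics are the same, and the last comment captures Doug's caveat that retrieval only covers what you stored:

```python
# Bare-bones sketch of the retrieval step in RAG: "embed" documents,
# then pull the closest ones in as context before asking the model.
# Toy bag-of-words vectors stand in for real embeddings.

import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = {
    "hr-handbook": "vacation policy and sick leave rules for employees",
    "it-runbook": "server restart procedure and incident escalation",
    "sales-playbook": "discount approval process for enterprise deals",
}
index = {name: embed(text) for name, text in documents.items()}

def retrieve(query, k=1):
    """Return the k stored documents closest to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda n: cosine(q, index[n]), reverse=True)
    return ranked[:k]

# The retrieved chunk becomes the context the model reasons over;
# if the relevant document was never stored (or is swamp data),
# retrieval and the answer degrade together.
print(retrieve("how much vacation leave do employees get"))
```

Swapping `embed` for a real embedding model and `index` for a vector database gives the usual production shape without changing the retrieval logic.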

SPEAKER_00

Yeah, your point is very interesting. First of all, you talked about biases. Do you think AI could be biased? Basically, could models be biased?

What Ontologies Actually Are

Personalized Media And Ethics

SPEAKER_01

Yes. But the reason why is not a blanket "AI is biased," because that's not correct. It's the data going into the model that's biased. It's biased data, because we are biased. AI is taking our data: things we write, things we put out there, perspectives, stuff from the media, and the media is 100% biased depending on what channel you listen to. So is AI biased? Not in its nature; the technology itself is not biased. The data being put into it is biased, and as it learns with nuance, as it learns by ingesting all these data points, yeah, it's going to give you biased answers because of that. That's why you have to add the context. If an answer comes back super biased, don't just leave it there. Give it more context: say, go find different viewpoints and then give me a synopsis or summary, and then let's actually have a conversation about what this might look like to me or to others, because this is the stance that I take. But be careful at the same time, because depending on how you do it, you may be pushing too much of your own bias and creating your own echo chamber with AI, and then nobody wins. There were some companies here in the Bay Area that I talked to early on, and they said, we're going to make media where people choose what they want to ingest, how they want to take it in, and only connect with the people they want to connect with. I said, you want to create echo chambers? Hold on. This is where humans should say no. It's okay to not agree with everybody. It's okay to have a different opinion. We should be able to talk about that civilly and say, hey, I don't agree with this, you don't agree with that.
But down the middle, we at least find something we agree with, and that's the way forward. We say, okay, I'll give you that, and yeah, you're probably right about this, but let's go find the path to make the next step forward. That's where humans who really collaborate with AI will hopefully find a better path forward, precisely because of the bias. Every human carries around something like 50 different biases at all times, whether you're walking down the street looking at somebody's shoes or anything else; that's just how we are. If you look at humans, and I've studied a lot about this, so I'm not just talking randomly, we act like our own AI, because AI is mimicked off of us. We have our own models in our heads, built from our life experiences. We've seen someone in the past with unkempt hair or horrible shoes, and we think, man, that person was mean to me before. But it wasn't that person; it's a new person. Yet the model in your head says it's the same. Give people the benefit of the doubt. I think AI is going to help us see that better as it holds a mirror up to us: are you being biased? How can you be better? Maybe look at it from this perspective. And again, be curious. Ask it: hey, I think this. A few times I've put something in and realized, oh, I'm thinking about that incorrectly. I didn't realize that my tone, the way I wrote those sentences, might have been emotionally loaded. Even with all the dating stuff that's coming out around AI, just be mindful and smart about it. Be a better human.

SPEAKER_00

Yeah, yeah. I think the human being must be at the center. It's very important. I think that AI will help us to rethink ourselves as a species, to work all together.

SPEAKER_01

That's one of the biggest things: getting people to start thinking a little bit differently. Especially me being from the United States: I know some languages, but I'm not fluent in many yet. People from Italy are amazing; they know multiple languages, two on average. You guys kill it compared to us. And yet technology is now breaking down that barrier. And language itself has its own nuance: if you break down different languages, in the United States, if we see a car, we treat it femininely, while in Germany it's treated as very masculine, and so on. So depending on your own cultural language, you're seeing AI pick up on that, and you have to be mindful of that too.

SPEAKER_00

And do you think different LLMs differ in this respect? For example, I've experienced Claude, ChatGPT, and Gemini behaving in very different ways when we talk about the same topic.

Alignment With Human Values

SPEAKER_01

A lot of the enterprises and companies trying to pull AI in are very agnostic about it. Sometimes there are companies looking at it like, let's bring in ChatGPT to drive the language side and help these other aspects, because that in itself becomes another auditable game, like business process mapping or business process management, to drive the overall process. Right now, in enterprises all around the world, and I've talked to people and companies everywhere, the biggest issue I see is that we're all looking for processes, but we're only getting tasks and workflows, and those aren't processes; they're very segmented. I don't know if it's because of OKRs and KPIs, but we've kind of lost track of what people were doing. We took away some of the authority and some of the understanding that middle management had. Middle management was there for a reason, but when they're just supposed to report out numbers, they can't really manage the teams, and then you lose that context, you lose that nuance. Now we're just focused on what you deliver. And if you don't deliver, what did you do this week, or last week, or that month? If you can't prove it with your KPIs, what are you doing? It comes down to everyone having to report at a very linear level, and we're doing swivel-chair work that we solved back in the automation days, six to eight years ago. That's the issue we're coming up against. AI is helping to drive that understanding and nuance, but a lot of companies are missing the fact that automation does it better for certain aspects.
So use the right tool. Use AI, and use automation as your AI factory to normalize all your reporting, get everything in line, take your upstream, understand your downstream. Once you have some real processes, that's when AI becomes highly powerful, because it can pull those processes, pull their outputs, gain the context, and then everybody wins.

SPEAKER_00

Yeah. Let's talk about ontologies. In my experience, I usually work with production companies, companies that work mainly in entertainment. But when I talk about ontologies, I don't really know what I'm talking about. So, first of all, I'd like to ask if you can explain to our audience what ontologies are.

Are We Near AGI

Play, Sci‑Fi, And Learning

Human Value Is Being Repriced

SPEAKER_01

They are very important, but nobody asks about them. It's almost like inferencing: nobody knows what inferencing is, but it's just reusing a tokenized word that's been used before. And just to talk about inferencing since I brought it up: when Sam Altman said stop saying thank you to the AI because it's burning down the data centers, that's all BS. It's not true. If you use inferencing, that's a non-issue; it actually saves you money on your token cost, if that's a problem. So, how do you inference? That goes into the ontology. An ontology is the what, where, and why of the words being used by your company, by the teams inside it, by the organization itself, and by the market or vertical it's in. That's a lot. The best way to understand it: take a doctor and a lawyer. Say they both speak English, or really any shared language; the point is that the same words mean different things to each person. Something on the doctor's side means they need to triage, or that somebody's choking, which the lawyers don't understand. And the lawyers are dealing with family law, or litigation, or all these different things. Same language, multiple different aspects, different understandings. So the way the ontology works, the way it needs to be implemented, is to feed the nuance of AI: to help it understand what your company does, where it plays, what you're good at, what each team does. Because, like I mentioned previously, in the earlier AI days we had knowledge graphs. Knowledge graphs were kind of like the word salad of the day: how do we normalize that across the company, and which things jump out most? Wow, we're doing those things really well, because they show up a lot. But it changes.
And so this is where we go into living ontologies, or living knowledge graphs. How do you make them living? You track them at the team, organization, and overall enterprise level, because as people move, leave, or join, the words being used are probably going to change. The more we understand what those words are, and which group, team, or organization inside your enterprise is using them and how, the better our understanding. Now the AI has context, living context it can pull from: oh, this organization is always talking to these other guys, and they always lose this nuance; I know how to help them build that better, understand that better. And as we get to multi-agent systems, or orchestration of agents, if your orchestrator agent has that understanding, that ontology, it can use it to kick off the other agents down the line, the task-based agents or the workflow agents, with that same nuance and understanding. Now we're cooking with grease and doing some cool stuff. We have the same kind of thing in our own heads, right? When we join a company, it takes a little while to learn the wording, the lingo, the what and the why. Ontology is that. Even Microsoft said, about two years ago now: we're going to start capturing all of your ontology, and here's why, because we're going to sell it back to you. Should you capture it yourself? Yes. Should you be looking at this? 100%. Are you going to get better results if you use it? I believe so. There are some companies out there doing it now.
There are some startups doing it now, and there are some companies that focus purely on ontology, on the idea that if you understand everybody who works at your company, then when your company says, we want to go do this deal or jump into this category, you know everybody's background and you know exactly who to hit up: let's go do this project together. Now you're choosing the right people to get the job done, versus randomly putting people together who may not work well because they don't have the context themselves. So again, it goes back to the human side: do we have the right context, or do we just guess?
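The doctor-and-lawyer example above, the same word carrying different meanings per team, plus the "living" update as usage drifts, can be sketched as a small data structure. The teams, terms, and meanings below are invented for illustration; a production ontology would live in a graph store, not a dict:

```python
# Sketch of a living ontology: the same word mapped to team-specific
# meanings, so an agent can interpret intent based on who is asking.
# All teams, terms, and meanings here are invented for illustration.

ontology = {
    "discharge": {
        "clinical": "release a patient from care",
        "legal": "release a party from an obligation",
    },
    "brief": {
        "clinical": "short clinical summary",
        "legal": "written legal argument filed with a court",
    },
}

def interpret(term, team):
    """Resolve a term to its meaning for a given team, flagging gaps."""
    meanings = ontology.get(term, {})
    return meanings.get(team, f"unknown: add '{term}' for team '{team}'")

def learn(term, team, meaning):
    """The 'living' part: teams update the ontology as usage drifts."""
    ontology.setdefault(term, {})[team] = meaning

print(interpret("discharge", "clinical"))  # release a patient from care
print(interpret("discharge", "legal"))     # release a party from an obligation
learn("triage", "clinical", "prioritize patients by urgency")
print(interpret("triage", "clinical"))     # prioritize patients by urgency
```

An orchestrator agent holding this structure can translate a request from one team's vocabulary into another's before routing the work, which is exactly the misfire-avoidance role described above.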

SPEAKER_00

Let's dive a little into my field, entertainment. We talked about ontologies, and I know that ontologies will be applied to streaming platforms like Amazon Prime and Netflix, because they want to use them, for example, to do product placement in movies made with AI. So, first of all, I'd like to know your opinion on what's going on in the generative AI field, where you can create anything from scratch, something that might not be meaningful, but is pleasant to view. So, what's your opinion about that?

SPEAKER_01

That's the the other side of ontology plays different roles in different markets or verticals. In the media side of special, we we've seen this. I guess one way to look at it is is your social, not your social media, but like your streaming media, like Netflix and some of these other ones in the United States, in my geographical area, I get this subset of shows. Yet on the other coast of the United States, it's a whole other set of shows. And then over in Europe, it's a whole other set, not because of language. It's not, but but it is cultural sometimes. But when we have AI having the ability to to change the culture at scale, to to change how it represents itself to different subsets or different regions or different aspects, or again, understanding like what if it had ontology of go-what are the things that Americans like, you know, used to be baseball, hot dogs, and you know, everything else and hamburgers and stuff like that, and wearing blue jeans. But that's changed as well. Like, you know, in the cities, you we have a different myopic and demographic and people they're outside the regions that are more rural, again, they have a different aspect and they want certain aspects. They they like to keep a cultural about that. And so both are great, yet both are very different. If a company that's a media company, like say, say to your example, if AI come, I mean, we're seeing AI now doing some amazing videos, just get on social media, you're like, wow, uh, Tom Cruise and Brad Petter fighting, they never fought like that. Like this is crazy, or like Godzilla and a cat turned into a giant Godzilla cat, and like, wow, that luxury looks really good and funny. But what if what if they could change it enough where all the media that comes out becomes to the individual level? And that's where we're going with it. Like, yeah, we're gonna hit countries, we're gonna do regions, and all that. 
But what if it's: you watch a lot of action stars, do you like action? Oh yeah, I like action. What if I could put you in the movie? And you go, what? Take a picture of yourself, do a side-by-side, and give me some action lines, say this and say that. It's like karaoke in a way, where you follow along and try to match the intonation of the scene. And now you watch that movie, and you're in the movie. That's what's happening. It becomes highly tailored, and the same goes for kids' films: you and your kids can be in them. It's doable, it's coming, and it looks very interesting. So I think ontology plays a big role there as well, because the AI needs the nuance of understanding what, where, and why.
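The idea of an ontology giving a system region-aware nuance can be sketched very roughly in code. This is a purely hypothetical toy, not any platform's real system: one generic concept resolves to different concrete preferences depending on the audience it is interpreted for.

```python
# Toy "ontology": the same generic concept ("action") resolves to
# different concrete genres depending on the region interpreting it.
# All region names and genre lists are hypothetical illustrations.

REGIONAL_ONTOLOGY = {
    "action": {
        "us_urban": ["heist thrillers", "superhero films"],
        "us_rural": ["westerns", "survival dramas"],
        "europe":   ["spy thrillers", "crime sagas"],
    },
}

def interpret(concept: str, region: str) -> list[str]:
    """Resolve a generic concept into region-specific preferences.

    Returns an empty list when the concept or region is unknown,
    rather than guessing -- the 'expensive misfire' case.
    """
    return REGIONAL_ONTOLOGY.get(concept, {}).get(region, [])

print(interpret("action", "europe"))    # Europe's reading of "action"
print(interpret("action", "us_rural"))  # a different reading of the same word
```

The point of the sketch is only the shape: one shared vocabulary, many context-dependent meanings, looked up rather than hard-coded into each recommender.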

SPEAKER_00

Yeah, but do you think it's good or not?

SPEAKER_01

How would I answer that? It's good and bad. It's good because I want to see it; I think it'd be great to be in an action movie myself and go, wow, I don't know if I can swing from that vine, but I'll try. So the good side is that it's amazing and fun. The bad side is that it can be used for anything. Here's how I explain the data question: what happens when you upload your face? Say you want to do this, and it's going to happen, so we upload our faces. Sometimes we do this just to get on social media, with the whole look-up, look-down scan. When they scan that biometric data in, there is a difference between first-touch, second-touch, third-touch, and fourth-touch data. I say "touches" because that's the easiest way to explain it; the terminology is nebulous on purpose. First-touch data is the information I put in myself: my first name, last name, address, social security number, all the things that say who I am, and I give that to a company. End-user license agreements say, we will always give you back your first-touch data. Great, I like that, I feel safer. But when they scan in biometrics, there is now a digital image of yourself, and that digital image, not all the time but often, becomes theirs. They can use that image to, say, watch how you age over time; there are platforms that actually do this. It seems very cool at first, it seems fine. But the fact that they keep that digital data, and what they do with it, is not always clear. So that is a risk. There's a bad side to this.
Or there's the other bad side: if our faces are there in grocery stores, or the local bodega down the road where we shop, and the system knows we always buy certain things, there's a risk it says, Doug's coming back, he's going to get that one thing he always buys because he really likes, I don't know, potato chips. It's going to change the price on me: well, he'll pay more, because he always buys it, it's a habit. That's a way the system can be gamified, a way to get more money by charging him more. So there are bad sides, and they come from the bad parts of humanity: the greed, the getting one up on each other. We need to find a way to be better. We need to get back to the golden rule: treat others as you want to be treated. The same way enterprises should treat their employees as their first customer, enabling, empowering, and emboldening them, so should we with our citizens. The citizens of the world should be enabled to use these technologies, and they should be empowered to know that it's safe to actually use them, to feel safe using them with their kids and in their homes. Because everybody wins: if everybody can feel safe about it, the entertainment industry goes wild, schools go wild, everything gets better, to an extent, if we're doing it with the right incentives. And the empowerment end of that is that we teach our kids and we teach them to ask why. I do it with my kids all the time: think about it, don't just take it at face value, ask the questions.
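The habit-based pricing risk described above is easy to make concrete. This is a minimal hedged sketch of the mechanism, not any retailer's real system; the item names, threshold, and markup are all hypothetical.

```python
# Sketch of habit-based dynamic pricing: if a shopper's purchase
# history shows they reliably buy an item, an unscrupulous system
# could quietly charge them more for exactly that item.
# All names and numbers here are hypothetical illustrations.

from collections import Counter

def habit_price(base_price: float, purchases: list[str], item: str,
                habit_threshold: int = 3, markup: float = 0.15) -> float:
    """Return the price shown to this shopper for this item.

    Habitual buyers (>= habit_threshold past purchases) get a
    markup; everyone else sees the base price.
    """
    count = Counter(purchases)[item]
    if count >= habit_threshold:           # buyer is "locked in"
        return round(base_price * (1 + markup), 2)
    return base_price

history = ["potato chips"] * 4 + ["soda"]
print(habit_price(2.00, history, "potato chips"))  # habitual buyer pays more
print(habit_price(2.00, history, "soda"))          # occasional buyer pays base
```

The code fits in a dozen lines, which is part of the point: nothing technically sophisticated separates personalization from exploitation here, only the incentives behind it.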
And even if AI says this is the answer, is it? Because I've actually walked my kids down some paths where that's not the answer; it sure as heck said it was, but it really wasn't. That's where we have to get to. And I do think, especially after some conversations with the think tanks I'm involved in about the future of theory and practice, and really the philosophy of why we do what we do, that maybe we get an AI renaissance where we dig back into this and actually start being better humans because of it. That's the hope, because the bad stuff happens if we incentivize the wrong things.

SPEAKER_00

Yeah, I totally agree with you. And you also talked about parenting and changing minds, in the sense that parenting with human beings is very important. But some people, like Harari, have said that AI is now in an amoeba stage, like a newborn child. What do you think about the alignment problem, the fact that we need to align AI to human values?

SPEAKER_01

I think it needs to be talked about more, and it's not being talked about; I was actually surprised you brought it up. Let's explain the alignment problem, because not a lot of people understand it. Alignment does go hand in hand with incentivization, but alignment really asks: does the system align with humanity? In Europe, AI is more aligned with humanity than it is in the United States. We have to understand that the companies in play are building very fast, and they have different models already being built right now. There's stuff in the labs, behind the scenes, that we don't know is coming. There's stuff already built that would blow all of our minds, and they can't release it because it would scare us, and nobody wants everyone to be scared. On the alignment side, luckily there is some testing, but in the United States there's less of it, and the attitude is: if you put too much restraint on it, innovation slows down, and we can't slow down because of other world actors, who's going to win what, what power is being generated behind the scenes. There's a whole lot of global stuff happening. But at the same time, how do we drive that forward? By understanding that we all need to talk about it more as humans: what do we want out of this? How do we want to show up? Do we want AI dictating our health? There are yeses and nos to that. The yes comes as IoT devices arrive, and as these LLMs, now foundational models, move toward world models and start involving symbolic AI and other approaches. These devices will watch every day, all day, notice what's going on, and make a call. But it should keep you in the loop. Take the example of playing soccer with the kids, and I fall down.
It asks, do you need emergency assistance? You fell down. I'm like, yeah, I'm fine, don't worry. But at least it asked me. An ambulance doesn't just show up and charge me $2,000 or $8,000 with, well, your watch told me you needed help, when I didn't need it. But if I'm in a car wreck and I'm unconscious, then 100%, take over, because I obviously need help. We need to have those conversations. We need our own human guardrails: when is it okay for it to take over? When is it okay for it to be my boss, to tell me what I need to do and how? And if I want to make a bad decision, heaven forbid I want to smoke, or drink too much, or make a bad choice, I'm a human, I can make that choice. That's the caveat. Right now we're going down a path where you might not get the choices you thought you once had. I don't know how to feel about it, but it is showing up, and that's why a lot of these leaders, Geoffrey Hinton and all these other guys, are issuing big warnings. The push behind some of those warnings is to get people involved. Because again, all of us humans are very short-sighted; we don't take action until something happens or something forces us to. Especially here in the United States, we're classically known for it: we don't put up a stoplight or a stop sign until somebody gets hit by a car or, heaven forbid, somebody dies. We're very reactive. So for these leaders to say something so polarizing, hey, get your head out of the sand, we need to talk about this stuff, even though they don't really tell us where we can talk about it, means we need to actually be engaged. I think we need more groups to bring that out, have these conversations, hold public forums, take a consensus, and talk about this at a higher level.
That's the next step: we need to be less reactive and more proactive. And AI is going to help us do that as we collaborate with it and put more information into it. That's the future I hope we have, a more proactive, more human approach, so that we can grow with this and get better for it. Because the end-all-be-all is the classic sci-fi vision: all of us working together to get off this planet, doing the work when it needs doing and getting it done. Eventually everything else will just be autonomous and work with us, and we teach our kids: if everything goes away, how do we survive? How do we still grow plants? What are the basic foundations of how civilizations work? If we can do that, then we're doing amazing.

SPEAKER_00

Yes. Do you think we are near AGI or not? Also, the concept of AGI itself is something that is not really clear.

SPEAKER_01

Yeah, artificial general intelligence is interesting, because after artificial intelligence you have AGI and then artificial superintelligence. The best way to answer this is: I don't think AGI is going to come from a large language model; that's just language. I think AGI is going to come from multiple different things combined: a large language model we can interact with, a symbolic model that understands shapes and relationships, the stuff Meta is doing with large concept models, concepts and figures and actual grounded truths, and even the large geospatial models we see coming out that understand the world around us. Once all of that intelligence becomes one model, and who knows if we even have the data centers to do that, maybe that's why we need to build more of them. The thing is, even if some of these companies have potentially gotten to AGI, I can guess it is probably not aligned. So, are we close, or will we have it? The answer would be no. I think most people, by consensus, would say no, because if it's not an aligned AGI, we don't want it. It's dangerous.

SPEAKER_00

Yeah. Three books about AI that everyone needs to read, three books you can't miss.

SPEAKER_01

Books about AI. I like to try to have fun, and play is where people learn best as humans. That's why we see things like OpenClaw and all this stuff, and people are all about it. It is interesting, it is fun to play with, but it's play, and it's play because we're learning. And I think we have to keep an open mind about things. People may think differently about my book pick here, but I think the Bobiverse series, the first book is called We Are Legion (We Are Bob), was pretty fantastic. I've listened to it even with my own kids, telling them, this isn't the future, this is just a sci-fi book, but the way it's explained has some interesting correlations to the things around us, especially from the entertainment angle, where things are shifting in how we think and how these technologies mimic us over time. If AI is in the amoeba state and learning us more and more, and we are hopefully trying to become a spacefaring race at some point, for all the reasons we don't have to name, just ask your AI why we should become a spacefaring race rather than stay stuck on one planet, and it'll tell you all the reasons. It's interesting. And the funny thing is, we're talking about Dyson spheres now. Dyson spheres have been around for a long time in the sci-fi realm, and this book talks about them and how to get energy from them, and that's the direction many of today's scientists point: if you want power, there's a way to do that. But that's very, very far off.
SPEAKER_00

Is there any other topic that you would like to talk about?

SPEAKER_01

The one thing I've been talking about lately, because it helps people understand the dynamic of job loss and all the rest, which is the biggest thing on everybody's mind. All of us become our jobs in a way; especially here in the United States, we're horrible about this. Whatever job we have, that is us, that's the person. One of the first things most people in the United States ask is, what do you do? Luckily, Europeans like you actually live your lives, and we need to take notes and be better. As things come together, we need to find that mix, that better way to live, and we don't have it yet. But the point is what we're seeing right now in the companies and enterprises, and how they're laying people off. A lot of people on this planet are knowledge workers: we're building stuff, putting stuff together, using our brains to build the next thing, explain it on dashboards, and get the job done. Knowledge-worker-wise, what we're seeing is that human value is being repriced. And it's being repriced because, as intelligent systems come in and we see what they can actually take action on, we don't have to take those actions anymore. We're in the middle of being repriced on certain aspects, but we don't yet know which ones. I think a lot of companies are going to lean into that as they try to prove out: where do we get rid of people, where is the value, how do we leverage this?
If we measure how much value is produced per person, companies can lean on that as a new metric, and I see that coming, because I see a lot of companies talking about it. So that's just something to be aware of that I want to give back to other humans. Hopefully we all keep having these conversations. Feel free to reach out to me on LinkedIn, and here's hoping for a better future for all of us.

SPEAKER_00

Thank you very much, Doug, and thank you very much to our audience. We will see you in the next episode. Thank you very much to everybody. Cheers.
