Automatic transcription; it may contain errors.
Alessandro Oppo (00:00)
Welcome to another episode of the Democracy Innovator Podcast. Our guest today is Stefaan Verhulst. So, thank you for your time.
Stefaan Verhulst (00:09)
Thanks for having me, Alessandro.
Alessandro Oppo (00:11)
And yeah, as I said, I don't have a very specific question to start with, because you have so much experience that I didn't really know where to begin. So the first question is: how did it all start?
Stefaan Verhulst (00:27)
Yeah, so the genesis question, Alessandro, which is a big question. But I would say that, especially as it relates to the GovLab, which is of course the organization I co-founded, for me it started when I was working as head of research at a foundation in New York that was really focused on how we leverage information technology to address critical public needs. And the key realization from doing that work was that, to a large extent, as a society we are actually very arcane and archaic when it comes to decision-making. A lot of the public problems that we have cannot be solved without also changing and transforming the way we go about making decisions, and especially changing the way we make decisions in both a legitimate and an effective manner. Because it turns out that one of the key challenges is that there has been a lot of effort, at least in the past, to make decisions in the public interest more effective, but then quite often they were not legitimate, or not as legitimate as one would want them to be. And then there were a lot of efforts to make decisions more legitimate, but then it turned out they were not effective, right? Quite often you can be as inclusive as you want, but never make a decision. And so the real challenge from my perspective was: how do we actually advance decision-making, which we can also call governance, in society? How do we advance decision-making in ways that are both legitimate and effective at the same time? And how do we do this by leveraging new methods and new assets that we quite often ignore are there? Because, as I said, a lot of the current decision-making cycle is still based on nineteenth-century, sometimes twentieth-century, practices. And then the question was, of course: okay, so what are the new assets and what are the new
methods that we can start leveraging? And here we came to the realization, and again this has evolved somewhat, that there are two important assets that, if you leverage them in new ways, can actually change the way you go about making decisions, making them more legitimate and more effective. The first one is people, right? And so here the question is not only how do you engage with people, but also how do you know who knows what in society, and how do you bring that kind of expertise and wisdom to the table when you make decisions, which is fundamentally underdeveloped, from my perspective, in society. Typically, when decision makers seek to make decisions, they either consult the usual suspects or they assume that they need to hire a consultancy firm, but they never really take stock of the level of expertise that already resides in my city, or in my organization, or in my region, for instance. And so if we can find ways to tap into what we call this collective intelligence in new ways, then the assumption is that you would also change, i.e. transform, i.e. improve, the way you make decisions. So that's the first area that we started to explore. And then the second area, surprise, surprise, is where I've spent more and more time, though of course the first area is very closely aligned with it: data. Because while society has become more complex, society has also become vastly more datafied, right? So we have this datafication happening at the same time, which basically is a result of digitalization, meaning that every time there's a digital interaction, like the one we are having right now, or you pay with a credit card, which is a digital transaction, or you make a phone call, which is a digital transaction these days, there's a data trail. The question is, of course: A, how do we make sure that we don't have a datafication that leads to surveillance? That's one key question. But the other question, which is the one that I have focused on, is: how do we actually start leveraging this data to improve society? And how do we start leveraging this data in ways that can make decisions more legitimate and effective at the same time? And so, coming back to your question, Alessandro, the realization was that unless we change the way we make decisions, we're not going to make much progress in terms of solving public problems, but we're also not going to make progress in maintaining and sustaining democracy as we knew it. So that was the first realization. And then the question was: okay, so how do we make progress? Because it's not enough to diagnose the problem; it's also about how we actually start leveraging new ways of doing things. And that then focused on people and data. And of course that then leads to AI, and to blockchain, which is basically a data disclosure methodology, and so on.
Alessandro Oppo (06:36)
Okay, and I see that all those things are connected, because you mentioned critical public needs, and then how do you know who knows what? Yeah, because if you want to fix something that is not working, you need that skill, that competence, and then also data governance, because data, as you said,
Stefaan Verhulst (06:49)
Who knows what?
Alessandro Oppo (07:03)
can be used for surveillance or as something for collective intelligence. And so, in a more practical way, how do you think this collective intelligence can be used, let's say, also inside the institutions, without maybe... Because I wonder: do we change the institutions now and say, okay, from tomorrow there will be AI and we will use collective intelligence? Or maybe there should be a sort of transition period, I don't know. And I'm very curious about that.
Stefaan Verhulst (07:48)
Yeah, well, one area that I have focused on, Alessandro, that might be relevant to your question, and also relevant to how you start leveraging collective intelligence, and even collective intelligence that can then inform the use of AI as well, is really this issue of agenda setting and prioritization. What are the needs that you seek to address, or, as we have done: what are the questions that you should seek to answer in order to make progress vis-a-vis a problem? And my view is that quite often decisions are poorly developed, or evidence is poorly gathered, or projects are poorly implemented, mainly because the question that you seek to answer has been poorly defined. And it turns out that one of the key challenges, and at the same time key capabilities, in society is directly related to how we formulate the questions that we should prioritize, and how we then start structuring the quest to answer those questions. And that turns out to be underexplored. It's part of, of course: how do you set the agenda? How do you prioritize?
Because to a large extent, whatever question you define and prioritize will also define what gets funded, what data is collected and used, and what kinds of decisions you will eventually make. And so in that space, what I've been trying to do is to actually leverage collective intelligence to improve the questions we seek to answer in society, or the questions that, if answered, can inform decisions in a much better way. And so that's an initiative that I launched, which, by the way, talking about institutionalization, every organization can run, and in which, from my perspective, every government department should engage. The initiative was called the 100 Questions Initiative: what are the 100 questions that matter, the questions that, if answered, would let us make progress on particular public needs or public problems? And we probably can answer them, because there's a lot of data; we just have not made the data accessible, because we never prioritized the question in the first place, right? And so by doing this work of the 100 Questions, we developed a methodology that is a collective intelligence methodology, where we first start from a particular problem, like the one we just did, Alessandro, on women's health, right? Which is very interesting, because women's health is a domain that clearly suffers from question inequities, by which I mean that women's health is the least developed medical field, mainly because the questions women had were never taken seriously; you can actually have question inequity, questions that never get taken seriously. Now, what we did in the field of women's health, which we did together with SEPs and the Gates Foundation, was first of all to define what women's health is. And that's what I call developing a gestalt of a particular problem or a particular domain, because women's health is more than just, for instance, one particular aspect as it relates to a medical condition. It's also linked to research, it's linked to gender discrimination, and so on. And so we need a broader perspective, what I call a gestalt, of a particular domain. Now, this gestalt can already be developed through collective intelligence, right? You can actually do
a public mapping of all the issues, including capturing lived experiences, which makes it real as well. The second thing we did was to basically bring together and create a cohort of what we call bilinguals. And this is very much informed by what Phil Tetlock is doing in collective intelligence. I don't know, Alessandro, whether you know Phil Tetlock, but what he has been doing is trying to develop cohorts of what he calls superforecasters, right? People that have a kind of innate capacity to forecast on particular topics. He typically brings those superforecasters together and then develops forecasts that turn out to be much better than any other method. So what I'm interested in, in the questions effort that we are running, is: who are the super questioners, right? And that goes back, Alessandro, to the collective intelligence question of how we know who knows what. And here it's about how we know who has particular capacities that can inform the questioning process. And so we develop these cohorts of bilinguals, which are people that have a domain expertise, in this case women's health or an aspect of women's health, but quite often also a certain other expertise, whether it's data or AI, and, importantly, a characteristic of being open to working with questions, right? Because it's also a kind of innate characteristic to be a super questioner. And we bring them together. So typically we have around 100 people that we have curated, right? Then we source the questions, and ultimately we come up with something like 200 questions. The cohort prioritizes those questions, and then we have a public conversation and a public vote on those questions as well, which then allows us to really prioritize a few areas that, from my perspective, will enable
a more legitimate outcome, right? Because it has gone through a process where it is not just someone who happens to be close to the research funder who decides that this is the question that matters. It has gone through a legitimate process to identify what the questions are. And it's also far more effective, because now we know which ones we should focus on first in order to make progress on a particular issue. So that's a collective intelligence example, Alessandro, that allows us to do agenda setting in a new way. And by the way, combine that with AI, and we can already start using AI to rapidly prototype answers to some of those questions and see: what does Claude say about that question? So you could actually have a hybrid way of scoping out the quest of answering particular questions. It can also inform better prompts. And, by the way, it also allows you to understand what data you need, and you can start developing data commons around particular questions that provide you access to high-quality data, so that you can then also produce high-quality answers to some of those questions, with or without AI involved.
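As an aside for readers who want to experiment, the flow described here (source candidate questions, have the curated bilingual cohort rate them, then run a public vote and keep a shortlist) can be sketched in a few lines of code. Everything below, the names, the rating scale, and the equal weighting of cohort and public signals, is a hypothetical illustration, not the 100 Questions Initiative's actual tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    """A sourced question carrying both prioritization signals."""
    text: str
    expert_scores: List[float] = field(default_factory=list)  # cohort ratings on a 0-1 scale (assumed)
    public_votes: int = 0                                     # votes from the public round

def shortlist(questions: List[Question], top_n: int = 3, expert_weight: float = 0.5) -> List[Question]:
    """Blend the cohort's average rating with normalized public votes,
    then keep the top-N questions (source -> cohort -> public vote -> shortlist)."""
    max_votes = max((q.public_votes for q in questions), default=0) or 1

    def score(q: Question) -> float:
        avg = sum(q.expert_scores) / len(q.expert_scores) if q.expert_scores else 0.0
        return expert_weight * avg + (1 - expert_weight) * q.public_votes / max_votes

    return sorted(questions, key=score, reverse=True)[:top_n]
```

Usage would be to build `Question` objects from the sourced questions and call `shortlist(questions, top_n=...)`; the blending weight is a design choice the real process makes deliberatively rather than numerically.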
Alessandro Oppo (16:31)
This is very interesting. So a sort of citizens' assembly, maybe with people who are very able to forecast a particular...
Stefaan Verhulst (16:40)
Yeah, yeah. So we can do this in a variety of formats, right? For instance, we're going to do another one here during Open Data Week in New York, which is in March. We're basically going to bring in folks from a particular part of Brooklyn, because the open data community is focused on the supply of data, right? Let's get data open. The question that we try to answer is: okay, but what are the questions for which you need data? And who should define those questions? And so in a city, you could have, like for instance in Brooklyn, exactly that kind of citizens' assembly, which invites people from a particular part of the city. It could be community assemblies, it could be community meetings for that matter; it doesn't have to be that complicated. But you could actually start understanding what some of the questions are that would make a difference, right? And then you know what your open data program is, what your open data program should be, and how it currently scores vis-a-vis the questions that people actually have. So that's one way to do it. But you could also do it in universities, right? I mean, it's remarkable to me, Alessandro, that
universities, and again this is all kind of assumed, expect that everyone will figure out their own question, right? And typically researchers don't share their question, because if they share it, others might run with it and then they don't get any funding. So everyone is focusing on their own little domain. But if a university would say, okay, what are the questions that we as a university should prioritize, and then align ourselves to make progress vis-a-vis those questions, you would have, well, my assumption would be, a much more effective alignment of the collective wisdom that resides within the university to answer some of those questions, as opposed to assuming that everyone will come up with a question that will make a difference in the society they are working in. And so it can happen at multiple levels. And it is surprising to me, Alessandro, that we as a society have not come up with a better way to do this kind of agenda setting, right? And especially in a more legitimate way, one that is not driven just by a few who happen to be there when the questions are formulated.
Alessandro Oppo (19:27)
I was thinking before, when you were talking about the questions, about The Hitchhiker's Guide. I don't know if you were referring to it, because also there... The Hitchhiker's Guide, I don't know how to pronounce it, the famous...
Stefaan Verhulst (19:38)
Oh yeah, The Hitchhiker's Guide. Yeah, 42. Yeah, exactly.
I give presentations and I start with 42, right? Because that is basically the situation we are in with AI these days: we get an answer, and everyone is left wondering, okay, what was the question? We are obsessed with answers. I always say we are answer-rich, especially in today's age, where we can get any answer we want, right, to any prompt we formulate; whether it's a correct answer or a hallucinated one doesn't matter. But then the questions remain: is this the question that matters? And is this a question that will actually make a difference if answered? Right.
Alessandro Oppo (20:35)
I was thinking: many times I have asked the guests of the podcast why people do not participate, but then I thought maybe that's not the right question, and I still don't know whether it is the right question or not. And I also wanted to ask you: which open questions do you have in mind now?
Stefaan Verhulst (20:58)
Yeah, all right. So you throw the ball back to me with questions. Well, there are many. And, first of all: on what topic, right? So that's a question in itself. But on the topic of questions, I think there are a lot of meta questions that we need to start answering. One meta question is, of course: what is a good question? Again, there are no real criteria, by the way, for what that might look like, and it will be different for different contexts, for different people, and for different cultures for that matter. But then it's also about who are good questioners, right? Are there certain characteristics that define whether you are more of a questioner than an answer-taker, to a large extent? And it would be interesting; even there, Alessandro, my hunch is that you could actually develop a map of the world in terms of societies where questioning is permitted and actually promoted, and other societies where questioning is seen as a threat. And then the question is: how does that translate into societal advances? And the other question that I'm interested in, Alessandro, is: what are the questions that we should not ask? Because questions get very political quite often, and some questions are harmful to society as well. So not all questions should be asked, or answered for that matter. And so these are the meta questions that I'm trying to develop. But ultimately it's about getting a more scientific approach to questioning, and then also translating that into new methods that are more legitimate, right? And again, collective intelligence, and more and more AI as well, can be leveraged towards that end.
Alessandro Oppo (23:25)
Which questions do you think can be dangerous and should not be discussed?
Stefaan Verhulst (23:37)
There are many questions that go to the heart of identity, and I think some of those questions should not be answered, or should not be posed, right? Because they are divisive, and they are also quite often offensive. You could say, well, you can ask all the questions, but I think some of those questions basically don't contribute to a healthy society; they are actually questions that would be divisive. And also, again: what are you going to do with the answers, right? It doesn't make you a better society.
Alessandro Oppo (24:19)
As you were saying before, we should think about what to ask. That is the important thing. I think that a lot of times we ask things when we would like to ask different things. Also, at this moment, maybe I would have liked to ask a precise thing, but then I'm asking something else.
Stefaan Verhulst (24:54)
Yeah, no. And again, look, of course there are different situations, right? But I think we have taken it as a given that all questions are good questions. And we've also taken it as a given that everyone can ask questions, right? Because, anyway, when we grew up...
Stefaan Verhulst (25:16)
And again, there are all kinds of studies, right? When we were younger, we were asking questions all the time, because questioning was then a device for learning. But then, of course, we forgot about questions. And so now, as a society, we are actually pretty unsophisticated when it comes to questioning. And we are pretty unsophisticated when it comes to prioritization, because you can make the argument that all questions should be answered, but some questions are more important than others. And so how do we start prioritizing a few of them? And how do we then structure what I call the inquiry? How do we structure the quest? The interesting thing about questions, Alessandro, is the first part, right? It's "quest". One question leads to another question. Anyway, you started off with the genesis question. Well, what is the genesis question of a certain topic, right? And in some cases, we will know that certain questions cannot be answered unless we answer another question first. And so this is where a taxonomy of questions comes in, where we have descriptive, diagnostic, predictive, prescriptive, and so on, right? I think even having a questions fluency, understanding that there are different types of questions and that you actually need to structure a quest, matters; for evidence-based policy making, for instance, it is not going to be one question, you have to structure the quest. That turns out to be poorly understood, Alessandro. And again, that is because the assumption is that everyone can formulate questions, which is true, but it doesn't mean that everyone has a fluency in questioning. And that is what I feel, especially in the age of AI, when any question can get an answer: we need to start prioritizing. And this is also where I think collective intelligence can help.
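For readers who like structure made explicit, the taxonomy mentioned here (descriptive, diagnostic, predictive, prescriptive) can be sketched as a tiny ordered chain; the phrasing of each stage is an illustrative assumption, not a canonical definition:

```python
from enum import Enum

class QuestionType(Enum):
    """The four stages, in the order one question builds on the previous one."""
    DESCRIPTIVE = "what is happening?"
    DIAGNOSTIC = "why is it happening?"
    PREDICTIVE = "what will happen next?"
    PRESCRIPTIVE = "what should we do about it?"

def structure_quest(topic: str) -> list:
    """Expand a topic into an ordered quest: each question type presupposes an
    answer to the one before it (Enum iteration preserves definition order)."""
    return [f"{topic}: {qt.value}" for qt in QuestionType]
```

For a topic like air quality, the quest would run from "what is happening?" down to "what should we do about it?", mirroring the point that evidence-based policy making is never a single question.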
Alessandro Oppo (27:25)
And another question is... No, I mean, actually, interviewing is sort of like asking questions, so...
Stefaan Verhulst (27:27)
Yeah, I know. Sorry for going on this question trajectory. We just kind of got distracted.
Well, there are, of course, professions that require a certain questions fluency, right? Journalism, of course, is one. Podcasting is another. Lawyers are trained to ask questions, right? And so there are clear professions that depend on questioning as a skill. But then, why is this not part of other professions? Like, for instance, in the policy community, why is questioning not a skill that gets more focus? And I think that's what we need to learn from other professions: how you go about questioning, for instance.
Alessandro Oppo (28:33)
And maybe it's also very related to what we do, to what we study. And so it's such an issue that...
Stefaan Verhulst (28:47)
Yeah, yeah, definitely. Well, again, that's why I think it would be interesting to understand, and we don't really have much research in that space, right: who formulates what question? What are the variables and determinants of the questions that one asks? I think that would already be interesting. Anyway, you studied history, right? Which is all about big questions as well. In history you have topics like the Polish question, right, which was basically an area of study, or even an area of history, that is still unresolved, by the way, but that at least is a way to address a particular development in society. And I think, again, even understanding questions as different devices in different disciplines and different areas would be interesting.
Alessandro Oppo (29:52)
Yeah, and also... I had other questions here, but because we are talking about meta questions: it seems that we as humans are not really able to deal with questions that do not have an answer. So often we create something, which could be believing in a religion, in God, in another God, in another...
Stefaan Verhulst (30:14)
Yeah.
Alessandro Oppo (30:20)
an ideology.
Stefaan Verhulst (30:23)
Yeah, yeah. And again, we are moving into the big questions, right? What's the meaning of life, and so on, and how was the world created? But exactly right: it makes us very uncomfortable. I think being able to engage with such a question requires a certain kind of sophistication, or a certain kind of openness, right? And that is quite often not there. And again, of course, multiple answers have been developed, and who are we to evaluate those, as long as they are not harmful to society at large. But exactly, there is a lot of discomfort with questions that are not answered. Which is also, by the way, my assumption, Alessandro, for why it is so easy in the current world of dis- and misinformation to be misinformed or misled by conspiracies: because we like answers. I think if we were more critical and asked, "but what was the question behind what is being presented to me?", we would feel less of the impact of the current misinformation ecosystem that we are in, right? Because we are bombarded with answers, bombarded with statements, without even reflecting on what the question is here, and whether it is the question that matters to the problem we seek to solve or the state of the world we are in. We just get bombarded with information, and in most cases now misinformation. And that's why I think the misinformation pandemic, or epidemic, that we are having is also partly the result of our lack of question fluency, from my perspective.
Alessandro Oppo (32:38)
How do you imagine society in 5, 10, or 20 years if, let's say, we are able to give a good answer to the questions that we have now? Because I see technology evolving very fast, at a pace that we have never seen in history. And so even an expert in the field can be overwhelmed: every week there is a new AI model, a new AI platform, and so on. So I don't know, how do you see it? Are you optimistic, pessimistic? I'm pessimistic in the short term and optimistic in the long term.
Stefaan Verhulst (33:34)
Yeah, well, it's always like this: we underestimate the impact in the long term and overestimate the impact in the short term, I guess, somewhat. I mean, it remains to be seen, right? A lot of the work that we've been doing is about democratizing access to knowledge, and that is something I have worked on for a long time. Of course that was also, by the way, the excitement when the internet arrived, right? That you would have access to knowledge as never before, which is of course true. Unfortunately, what has happened, as I said, is that knowledge has become weaponized. And I'm not that worried about the models as such. I'm worried about what happens if the models get either monetized or weaponized. That's what happened with the internet: the internet got monetized. It was all about advertising. The advertising business model is the only one that has been promoted en masse and with massive investment, which is why we have the internet that we have today, which is basically an ad-based internet. And that then leads to all kinds of behaviors. And of course, it has also become weaponized in ways that we didn't anticipate, because we forgot to govern it. And so my worry, Alessandro, is less about the models themselves; it's more about the fact that there is this massive investment in AI at the moment, right? Eventually, they're going to have to show a return on investment. And my worry is that the only return on investment is going to be, again, an ad-based model, right? And that's going to skew the whole ecosystem. Because at the moment, I would say everyone is worried, but I think we are probably in the honeymoon phase of AI, where many of those models are freely accessible. They are actually not that bad, and they are definitely not yet weaponized, right? Not yet skewing certain answers because of certain forces behind them. My worry is that this honeymoon period might end at a certain point, and that the models might become monetized and weaponized
very rapidly, right? And I think that's what I'm worried about. And I think...
that's where governance comes in. But unfortunately, most of the governance effort has basically ignored that aspect. It's all about output evaluation, not about the broader funding, business model, and governance system that sits behind it, right? And so that's what I'm worried about.
Alessandro Oppo (37:02)
And I mean, I always see this tension between, let's say, policies and code, because I saw how the internet was used, let's say, to make money and also to influence, to control, and so on. So I wonder, if we want to, let's say, build something and protect what we build, whether we should use law or code. Because sometimes I see, let's say, lawyers who want a law about everything, but at the same time I see the technology, the code. There is a parallel between code and law. What I mean is that sometimes, if you write in the code that something cannot be done, then it cannot be done. But if in the code it can be done, while by law it cannot be done, then you can decide whether to do it or not.
Yeah.
Stefaan Verhulst (38:25)
Yeah, no, that's it. Of course, Larry Lessig said many years ago that code is law. But one of the things that I worked on was exactly, Alessandro, to show that law is code as well. And we've seen this in many places, like financial services and health services, where clearly law determines what the code is. And I think we need to start developing that
more and more. But again, I think it's also about alternatives. There are a lot of people working on a public interest AI ecosystem, and it would be interesting to see: what is an alternative that prevents this kind of weaponization, that prevents this kind of monetization, right? At the moment, for instance, why is social media so harmful, to a large extent, in the current environment? Because there is no alternative to the platforms we have that is actually scalable, right? And so the only thing we can do is ban all social media, as opposed to saying: let's have social media as we anticipated it, which was all about connecting people and sharing updates in a way that is not polarizing, in a way that does not discriminate or lead to hate speech, for instance. But we don't have a public social media that provides an alternative. Of course, there are some efforts there, but they are not at the level of investment, nor at the level of use, right? And so what I'm interested in is: okay, are we going to make the same mistake with AI, where we're going to have a few bad options and no option that provides more of a public interest alternative? We see efforts around that, but they cannot compete with the scale of the investments that are going on. So if there is one area to focus on: it's not just the code, it's also about investment. Who oversees the investment?
and who determines what is done with the investment? If you attach conditions to the investment, then you have far more powerful leverage. It's like: okay, we're going to invest, but you can only use it for these purposes, and if we see that it's used for other purposes, we pull back, for instance, right? Or you have to pay it back. I mean, these are stronger levers, from my perspective, than
any law will be able to provide: making it conditional in terms of investments. And this, of course,
is something we started working on a little bit in the ESG context, right? Where, if we invest, we have to show an environmental footprint. Now this has come to be seen as too woke, so no one talks about it anymore. But we could ask: what are the ESG criteria for investments in AI? I think that would be worthwhile exploring, to ensure that decisions with regard to code, and decisions with regard to the business model
behind models, get steered in a way that is societally beneficial.
Alessandro Oppo (42:08)
And I think we have time just for one last question. No, I mean... if you have more time we can also continue... I mean, just a message for the people in the space who are working on whatever it could be: new software, new ways of using collective intelligence.
Stefaan Verhulst (42:15)
Yeah, I've been going on and on, yeah. No, unfortunately I have to.
Alessandro Oppo (42:36)
Also a message, maybe, about questions and what to prioritize, I don't know.
Stefaan Verhulst (42:42)
For me, on questions? Yeah, well, I think we need a lot more innovation in the question space. That's the point I made earlier, right? Let's take questions seriously. Let's develop a science behind them,
and also make this more participatory, right? We can put a lot more effort into not only improving the questions, but also making the process more participatory and democratizing it. The other thing, which we didn't talk about, Alessandro, is that we see massive asymmetries
in today's world, and quite often this is a result of both data and information asymmetries. And so I think if we really want to make AI more inclusive, and if we want to actually have AI that works for the majority of the world, we're also going to have to start unlocking data
in ways that are less extractive and more inclusive. And that's another big topic I work on: this whole notion of how you set up data commons, which deals with the problem of extraction, but also with the fact that we actually do need access to data. Because if AI becomes our way to learn about society, well,
large parts of society are not in AI. So we will have invisibles, and we will have massive asymmetries between those who have access to tools and those who don't, or those who have access to tools that speak their language and those who don't. And I think that's a big asymmetry that is emerging as well.
And so one message would be: don't just focus on AI governance in terms of risk profiles, but also focus on the asymmetries that currently exist, where the risk is not having access to the tools. That is an equally important risk as the risk that the tool will hallucinate or be misused. Actually, having no access to the tools is,
for many, a much bigger risk. And you see this also playing out, as we speak, of course, at the AI summit in India. The global majority, I mean, they want access to AI, right? And I think the biggest risk they see is that they're going to be left behind, as in many other contexts. And I think that's an area we need to address more.
Alessandro Oppo (45:45)
Ok, now... yeah, sorry, I got disconnected on the last word.
Stefaan Verhulst (45:52)
Just when I had my best moment of the whole podcast. No, I'm joking.
Alessandro Oppo (46:23)
Yeah, I mean... So, thank you a lot, Stefaan.
Stefaan Verhulst (46:30)
I hope this was of some interest, Alessandro, to your efforts. So keep me in the loop.
Alessandro Oppo (46:33)
Yeah,
absolutely.