
From AI working alongside other AI to the power of patience and perspective, this episode is an invitation to slow down just enough to see what really matters as change unfolds.
To catch full episodes of all The Motley Fool's free podcasts, check out our podcast center. When you're ready to invest, check out this top 10 list of stocks to buy.
Will AI create the world's first trillionaire? Our team just released a report on the one little-known company, called an "Indispensable Monopoly," providing the critical technology Nvidia and Intel both need.
A full transcript is below.
When our analyst team has a stock tip, it can pay to listen. After all, Stock Advisor’s total average return is 920%* — a market-crushing outperformance compared to 196% for the S&P 500.
They just revealed what they believe are the 10 best stocks for investors to buy right now, available when you join Stock Advisor.
*Stock Advisor returns as of February 11, 2026.
This podcast was recorded on Feb. 04, 2026.
David Gardner: About this week's guest, founding editor of WIRED Magazine Kevin Kelly, the longtime technology journalist David Pogue had this to say: "Anyone can claim to be a prophet, a fortune teller, or a futurist, and plenty of people do. What makes Kevin Kelly different is that he's right." [LAUGHTER] Kevin has spent decades paying attention to what most of us rush past. This week, he returns with that same calm, long-view perspective: unhurried, curious, quietly provocative. What does the future hold? Only on this week's Rule Breaker Investing.
Welcome back to Rule Breaker Investing. Well, I'm absolutely delighted to welcome back Kevin Kelly to this podcast. I've had Kevin on a couple of times before, and each time it's such an enjoyable conversation. Well, for me, anyway; I hope for our listeners, too. I don't know whether Kevin enjoys it or not, but it's such an enjoyable conversation. I do want to mention that Kevin started blogging again. He's written so many things over the years, but he started on Substack in August of 2025. I noted it months ago, and I've just been reading each of his essays as they come out every week or so, and in conjunction with this week's podcast, I decided to be the guy who's not the cheap guy reading stuff for free on the Internet, which is what so many of us are. I'm actually now subscribing and paying at kevinkelly.substack.com, and I truly believe, dear listener, that if you enjoy our conversation this week, you should consider that as well. At least, I would suggest, read for free, but maybe it's worth subscribing to. For me, it is. Kevin Kelly is Senior Maverick at WIRED magazine, which he cofounded in 1993 and where he served as executive editor for its first seven years. He's the author of multiple best-selling books about technology and the future, including, and I know we're going to go there once again here 10 years later, his 2016 book The Inevitable. He's also a longtime board member of the Long Now Foundation, an organization dedicated to one of my favorite things, long-term thinking, and as of August 2025, as I mentioned, he's blogging again, now on Substack. He is known for his radical optimism about the future, and that is shared by many listeners today. Kevin, welcome back.
Kevin Kelly: Thank you, David, and it is a joy. I do enjoy our conversations. [LAUGHTER] It's a delight to be here, and I am honored that you are sharing your attention with me, which is the most precious thing that we have.
David Gardner: Thank you very much. I was reminded of Ralph Waldo Emerson. This was a question that he would ask a friend he hadn't seen for a few years, and I thought I'd start our conversation this way. Kevin Kelly, what has become clear to you, as Emerson would ask his friends? What has become clear to you since we last met?
Kevin Kelly: It's a great, fabulous question that I hope to steal and use on others. My answer is that it's very clear that the US is no longer the sole superpower, and China is just about a peer. In other words, there's a duopoly; we're no longer alone in trying to exert power in the world. That was coming along for a while, but it's very clear that we're at this moment. We're in this transition period of time. I think there's no doubt about that. That's what's become clear to me.
David Gardner: Do you view that optimistically? I'm curious. One thing I've said to friends occasionally at a cocktail party or happy hour, talking about the market and the world markets, is that the US and China share many things. I realize we have many different views. But one of the things that we share is an appreciation, generally, of the status quo, because it is sort of a duopoly. We are the big dogs, and therefore there's a shared interest in maintaining that and having peace and calm. Do you feel that? Because a lot of people would feel somewhat threatened. Maybe Americans would feel threatened by what you just said.
Kevin Kelly: Well, first of all, culturally, I think there's almost no other culture that's as close to Americanism as the Chinese. There's huge overlap in the personalities. In fact, I think the Chinese have a sense of humor that's closest to the Americans'. Secondly, I think America was built from this immigrant energy, and China has that, too, but it's all internal immigration. You have people who don't speak the same languages coming to mix in the cities, and it's a vast continent, basically. There are many shared things, including the self-identity of being a powerful entity in the world. I think it's good to have competition. If we can structure our relationship to be competitors rather than adversaries, it's good for the world and good for us, in the same way that if you're entering a new business, you actually, at some point, want to have competitors. Yes, I think it's good for the world, and, of course, it's good for the Chinese. The key thing is not to let this slide into an adversarial relationship.
David Gardner: Thank you. Let's go in a different direction, because a week or two ago I was reading something you published last month on your Substack. The essay is entitled "How Will the Miracle Happen Today." You introduced this idea, your phrase here, of being "kinded": of unexpected kindness arriving as a feature of the world, not a fluke. What does it mean, Kevin, to live as if that's true? Could you take us back maybe to your 20s and tell the story of being kinded, the discovery, and what you learned?
Kevin Kelly: There are lots of sermons on the value of giving and helping strangers, and fabulous stories of people who are the recipients of that. I came to a slightly different viewpoint in my 20s, when I hitchhiked to work every day on a regular basis, and I was never late. The punchline was that I came to depend on the kindness of strangers. I wasn't doing it as a stunt. I literally had no money. It was a little too far to bicycle. There wasn't a bus. I didn't have a car. That was the best way for me to get there, and it worked. Then later on, I went on to travel around Asia. Even though I was in a much more privileged position, I was still the recipient of great acts of kindness. I learned that there was a certain grace in being able to receive it, in the same way that giving is this weird, universal principle that the more you give, the more you get. I think the most selfish thing in the world you could do is to be generous. It's an oxymoron, but it works. To complete the other half of that, somebody has to receive, and I think you can make that transaction better, more important for both halves of the exchange, by becoming a gracious receiver, someone who's been kinded. The essay was exploring the idea of what it takes to actually be able to receive gifts, and to do it well and gracefully and in a way that honors the people who are giving and makes them better, too. That was a little shift in the perspective. When I was hitchhiking, the question I asked myself was, how will the miracle happen today? It was a miracle, but I knew it was going to happen. I didn't know how, or through whom, but the miracle was going to happen. There was this idea of surrendering, and there was a term that my friend John Barlow, I think, coined, which is the opposite or the inverse of paranoia. Paranoia is suspecting the entire world is out to get you.
Pronoia is suspecting that the entire world is conspiring to help you. That's a little bit of the pronoia that I have.
David Gardner: It's a great word, and I love the concept, and I feel a lot more pronoia in my life than paranoia. At the age of 59, it remains the case, and it's just a delightful thing to lean on. That's a little bit of the point you're making in the essay: it actually will show up. It does show up. Now, for a lot of us, it is easier to give than to receive. Part of what I received from your essay was a thought or two, or a tool, or just the expectation that we should be able to graciously receive. It takes humility. Sometimes it might even take shame, to the point of guilt, to receive, depending on who's giving what when we need it. But I appreciated that point: being kinded, and how to approach and receive that, is its own talent.
Kevin Kelly: I do want to make a caveat: I have seen this phenomenon around the world, and not just in people like me who are very privileged, who are very lucky in where and when and how we were born. I've seen this happen in places where people don't have much to share, where they are impoverished, and it's still true there that people who embrace the pronoia gift of treating the world as if it were conspiring to help them often do better than with the paranoia view. I just want to say that it happens even without privilege.
David Gardner: I really appreciate you saying that. I can think of stories that I've told elsewhere on this podcast, I won't do it now, from my own life, which are perfectly illustrative of what you just said. There is obviously an optimism at the root of that. There's a belief that things are for us, and that, whether you think you can or whether you think you can't, as Henry Ford was purported to have said, you're right. That comes through, Kevin, not just in the story that you told, but in so much of your writing and speaking, and indeed, I would say, by example in the life that you're leading. Now, any unapologetic optimist continually has to confront those who think that optimism might just be naivete or wishful thinking. When you're confronted by that perspective, what do you say back?
Kevin Kelly: I acknowledge the horrible problems of the world. I go even further, to say that the current set of technologies that we are making will produce problems beyond anything we've ever seen. AI, for instance, is going to make the biggest, most gnarly problems that we've ever seen as a society. My claim, though, is that while the problems are large and real, we only need a small percentage of betterment to overcome them; while we are inventing new problems that are worse, we are also growing our capacity to solve problems a little tiny bit faster. Even if we destroy 49% of the world every year and make it harmed and crappy, if we can make 51% good, then that 2% delta, compounded year by year, is what we call civilization. The world can be better, but only by a little bit, and it's that little bit that is not really visible, because you see the 49% crap and horror and everything else. I acknowledge the terrible state. But I say the good is harder to see in many ways because it's boring. A lot of what progress is about is what did not happen to you today. [LAUGHTER] It's about the fact that you didn't die, your kid didn't die, you weren't robbed on the way to work. It's all the things that didn't happen that are really the mark of progress. We don't see that. We see mostly the bad stuff, and the bad stuff happens fast; the good stuff happens slow. If we ask what's happened in the last five minutes, it's going to be bad. We have this bias built into the world that makes the good stuff a little bit less visible. For me, optimism is primarily a choice. I choose to be optimistic. I choose to be as optimistic as I possibly can. I choose to be more optimistic this year than last year. It's a choice. It's a stance. It's saying, I will see the best. I will seek out that which is good. I will try to promote that which is good, because it is a million times easier to imagine what breaks down. That's entropy.
That's the whole point of entropy: it's the easy, natural course. It's much more difficult to see how everything works out for the good, but I believe that we have to imagine it first in order to make it happen. I think that's why everything we have in the world today that is good was made by some crazy, unreasonable optimist who believed against all odds that it could happen. We are living in a museum of optimistic passion projects. If you want to be someone who's going to shape the future, you should be as optimistic as you possibly can.
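Kelly's "2% delta" arithmetic can be made concrete with a few lines of code. This is a minimal sketch of the compounding claim; the function name and the 100-year horizon are illustrative choices, not anything from the conversation:

```python
# Sketch of the "51% good vs. 49% bad" argument: a net 2% annual
# improvement, compounded year over year, grows dramatically.
def compounded_progress(annual_delta: float, years: int) -> float:
    """Return the cumulative growth factor after compounding
    `annual_delta` (e.g. 0.02 for 2%) over `years` years."""
    return (1 + annual_delta) ** years

# A barely visible 2% net gain each year, over a century:
factor = compounded_progress(0.02, 100)
print(f"After 100 years: {factor:.1f}x better")  # → After 100 years: 7.2x better
```

The point of the sketch is that the yearly delta is nearly invisible (1.02x looks like noise next to the 49% of visible bad news), while the compounded result over generations is what Kelly calls civilization.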
David Gardner: A museum of new possibilities fashioned, I would say, by crazy rule breakers from the past. Let's veer in that direction now, Kevin, from optimism, and I so appreciate your point that it is a choice. Love that. I want to veer into another really important word to me, and I hope to you too: foolishness, which by definition, in this context, involves challenging conventional wisdom. I would say we've already indulged in some foolishness here in just our first 10 minutes or so together. But let's take a look through your eyes, Kevin, at the received mores. I dropped you a note ahead of time, just saying, could you think about three to five examples where you might pull on your jester cap and look at our world today, our cultures, our technologies, our perspectives. I want to give you leave to speak your mind. Shakespeare once put it this way: that you may through and through cleanse the foul body of the infected world. Well, maybe not that aggressively. Those were Jaques' words in the play As You Like It, but you get the point. Let's take a look at your list as you look at the conventional wisdoms that maybe could be punctured or challenged. What's number one?
Kevin Kelly: I hadn't really thought about ranking them. I have a slight refinement on that assignment, which may or may not help, but it's how I think. I ask myself and others: what is it that I believe, or that you believe, not that most people don't believe, but that the people I most respect don't believe? It's a little bit higher level, but I think it's more genuine in a sense, because there are lots of things I believe that the general population doesn't, but the people I respect will probably have those in common with me. What I would like to suggest is maybe something that I believe that you don't believe. Hey, what's this? Because I respect you and admire you. So that's a harder bar.
David Gardner: That is a higher bar.
Kevin Kelly: That's a higher bar.
David Gardner: I love it.
Kevin Kelly: Again, we can go down this path, but speaking here as an American, I believe that for every American, one single number should be public, which is the amount of total tax they pay the federal government. The amount of money that you pay into the Commonwealth should be public. Nothing else about your tax returns needs to be. Just how much you are paying to the Commonwealth. I think that would do a lot to, again, bring confidence into the world. It would maybe put peer pressure on those who obviously aren't pulling their weight, or maybe they would try to outdo each other in showing how much they are paying. I think that transparency would be good for the world. A lot of people find that a little scary because it seems like an invasion. In the same vein of transparency, and this is far more controversial, I believe that your DNA should be public. I think that the benefits to medicine would be so overwhelmingly great that the few cases where there was a real danger would be outweighed. Usually it's insurance concerns that people have. I think we could figure out a system that would not penalize people for their DNA and yet still share it publicly, in the same way that the most personal thing about you is probably your face, but it's also the most public thing about you. Actually, we require your face to be public, because it's your password, in some ways; it's your biometric. I think DNA is in the same league as your face. It is, yes, the most personal thing about you, but it should be, in some ways, a public thing about you, because the benefits that we would have as a society from all the medical knowledge known and linked to it would be so immense that everybody would benefit.
David Gardner: I sometimes just operate under the assumption that it's already out there. As somebody who used 23andMe 10 years ago, and then not a month goes by, it seems, where all of our passwords haven't been stolen from our bank or somewhere else. There's the dark web, as well. I'm saying that half joking, but I'm half serious about it. The half-serious part of me thinks that it's almost unstoppable, and, one of your favorite words, inevitable, that those things would end up being the case. It wouldn't be hard to lift somebody's fingerprint, lift somebody's DNA, if you actually cared to do so. That might be an illegal breach of privacy right now, but AI, robots, that's the world we live in. It can't be stopped.
Kevin Kelly: Yeah, it's actually unclear whether or not if you did it publicly, if you left a fingerprint on the street, we haven't really decided as a society what we think about that. But yeah, so I think it is in the general direction of something that's inevitable, that transparency. I think transparency in general is always better for the society and eventually for the individual.
David Gardner: We certainly favor that as stock market investors. At least those of us who are investing, becoming part owners of businesses. There was a little bit of a debate: would we change the reporting requirements so companies wouldn't have to report earnings quarterly, but rather maybe just two times a year? I opposed that. I'm a big fan of companies being transparent with their results and of understanding how they're doing. We all benefit as co-owners, and so that would be one example. But transparency, certainly for us at the Motley Fool, is something that we've been looking for to be better investors from day one. Those are a couple of good examples. I love the way you're reframing that: what is something that you believe that people you respect don't believe? The way I've sometimes phrased it, or heard it phrased, and this is a half step away, but it continues the conversation, is: what is something you believe that most people don't believe? By the way, if you're right and you can turn that into a product or a service, those are often the great rule breakers of any generation in terms of investments, et cetera.
Kevin Kelly: It's true.
David Gardner: Thinking about these things, it's a fun water-cooler question for almost anybody, or a walk-and-talk question. What is something you say to a stranger or a friend? What is something you believe that most people don't believe? I encountered something like that recently when I read your wonderful and provocative essay, Kevin, again on your Substack, this one from December 1: 12 Assumptions for Extraterrestrial Life. Now, we won't go through all 12, but I'm the one in my friend group who's like, it's so obvious, the universe is teeming with life in all forms, and you obviously also agree with that. I also learned from you, and it makes a lot of sense, that while the math is so clearly suggestive that there's a huge amount of life out there, the math would also say, I learned from your essay, that it's still pretty rare if you were to bump into any one solar system or another; it's only when you aggregate it all across numbers that just boggle the human mind. But what surprised me as we went down the list of 12 was somewhere down there, yes, I'm looking at the essay right now, number 11, where you say this, and it was an eye opener for me, though I'm open to the thought: "Every day, a few probes of the billions of interstellar civilizations out there visit our planet, scoping out our technological state. These technological probes appear briefly in order to see us and disappear once they've inspected our inventory. So far, we have little to offer, nothing that can't be found on millions of other planets." Kevin, that one jumps out because that is an assumption that you have made. Do tell.
Kevin Kelly: I thought about what technology we would really need to do interstellar travel. It's very, very daunting. The distances are so great, you have to find some way through some interdimensional something or other.
David Gardner: It's all wormholes.
Kevin Kelly: All wormholes. But that's a long way off. To get to that level, I figure that by that time, a civilization will be able to basically make whatever they want. What would the rest of the universe even have to offer them? There's not going to be spice on this planet Dune, because they can make spice at home. They're just going to make the thing. You don't need to travel a gazillion miles to find the spice. You're just going to make it. I thought the only thing that might make it worth trying to visit and travel throughout the universe would be if you could find ideas or technology that you could not come up with or hadn't thought of yourself. What else? Why else? You're not going for a vacation. You could just make that; you could just produce that anywhere you want. This idea of visiting other places: you're looking for ideas. You're looking for technology, something that you haven't thought of. For most of the places where these probes are landing, there's not going to be any, and so why would they stick around? There's nothing there for them. Just making more probes on a planet somewhere? Why? There's no reason. They're looking for ideas and information and technologies that don't exist. We don't have that, and so they're gone. I think in order to see, you have to be seen, briefly. In other words, you have to come out of it, whatever it is. In order to actually pick up those light photons, you have to have something that can be seen. There are these brief moments where they appear while they're looking, before they take off. That's my solution to the Fermi Paradox, which is: if there are trillions of civilizations, how come we haven't seen them? The answer is, we are seeing them, very, very briefly. They're bored. There's nothing here for them, and so they move on.
David Gardner: Really provocative.
Kevin Kelly: By the way, I'm not going to die on that hill.
David Gardner: No, I know. That's not a hill that you're going to die on.
Kevin Kelly: I'm not trying to convince you of this.
David Gardner: Not at all. No. It's the exploratory nature of your mind and your work that a lot of us are fans of, me included.
Speaker 2: Could AI help you do more of what you love? Workday is the next-gen ERP powered by AI that actually knows your business. We help you handle the have-to-dos, so you can focus on the can't-wait-to-dos. It's a new Workday.
David Gardner: Let's stick with science fiction, because another of your essays, another one you published last month, in fact, is entitled The Unpredicted versus the Over-expected. Now, this one is a little bit more grounded for most of us. Science fiction, you aver, basically did not predict the Internet. There aren't a lot of movies or shows in the popular culture of the past that were expecting the Internet. The opposite case is when our future is over-expected. I'd love to hear you, at the start, on how and why the Internet surprised us. But then, what technologies today, Kevin, might we be over-obsessing about?
Kevin Kelly: Sure. By the way, the genesis came from a little chart, a very throwaway [inaudible] that the great science fiction author Arthur C. Clarke made. He was noticing that there were expected and unexpected technologies: X-rays were one of the unexpected, whereas flying machines were long expected. He wrote that pre-Internet, although Clarke himself was one of the closest to come to it; he imagined satellites around the planet.
David Gardner: Yeah, I've seen the videos. He was such a genius.
Kevin Kelly: Yeah, he was very prescient, but it is weird that we don't have a lot. There are a few little glimmers and hints of it, but we don't really have a good corpus of expectations or predictions about the Internet. But we do have a century of expectations about AI. I think there's a correlation between how much we worry about AI versus how much we worried about the Internet. The Internet came so fast there wasn't time to worry. It was suddenly there, and then people were using it. But the AIs, we've been worried about for a century, and most of the images and stories that we have about them are negative. They're downers, because, again, it's much, much easier to think about how this goes wrong than to think about how it goes right. That's just entropy. Most of the stories we have are the lazy ones, of it not ending well. We have a larger cultural uphill climb to get over that. We need a lot more convincing that this is going to be good. What I'm trying to do is say the primary way we want to do this is to be evidence-based. Let's just keep looking at the actual evidence of actual harm versus accounts of the imaginary harm of the friend of a friend of a friend that we could imagine getting harmed, because if we just follow our imaginations, we're not going to be ready for the good stuff. Let's look at what actually happens. We're at the beginning of 2026; how many people to date have been fired, have lost their jobs, because of AI? The answer is hundreds, maybe thousands, but not much more than that. There is no actual evidence of massive unemployment. Now, people say, we can imagine it. Yeah, I know, we certainly can imagine it.
That's the problem. We're just imagining it. But let's look at the actual data. Let's look at the actual data of people who are actually harmed by AI. How many people have been harmed by self-driving cars? Someone did this calculation: imagine that every day, eight jumbo jets fell out of the sky and everybody on board was killed, no survivors, every day. We wouldn't let that go on for a single day before we put a stop to it. But that's how many people are dying in automobile deaths every day, and it's like, we're OK with that. New technologies are often judged by an unfair double standard against existing technologies. One of the things we always have to ask is: compared to what? Whatever harms there may be, we also, in addition, want to say: compared to what? Same thing with nuclear. How many deaths, compared to what? To how many people die from coal plants or natural gas power plants, explosions, whatever it is? If we look at the actual data for AI, it hasn't been harmful. At the same time, it hasn't really transformed people's lives yet, either. The one place where there's total embrace of AI is Waymo, the self-driving taxis. If you've been in a Waymo, everybody wants more Waymo. It's like, OK, that works, that's good. We're in this period right now where we don't have enough of the actual change-your-life stuff, because it's still very, very early, and we don't have an actual it's-really-going-to-harm-you. We have this period where our imaginations are running wild, and imagination is great. But let's imagine how it could turn out to be the best thing for us.
David Gardner: What if things actually went right?
Kevin Kelly: Exactly, that is the assignment.
David Gardner: Medieval mapmakers, as has been pointed out before, when they didn't know what was up there in the corner, Ultima Thule, would just write, "there be dragons." That, to me, has always been a reminder, as an investor and a fellow liver of life and entrepreneur as well, that many of us tend to fear the unknown. That is the default setting. It's in our DNA; it has been replicated by ancestors who survived because they feared the downside, and so it's understandably human. But if you are one of those rule breakers, somebody who thinks, I want to be part of that future museum of crazy ideas that ends up being the best answer for the future, you probably should start to get in touch with the possibility that things could go right. You know, Kevin, one of the things you just pointed out is, if you have decades of literature and stories saying this is going to be bad, then when it starts getting near, people think, this is going to be bad.
Kevin Kelly: It's the only story they have; they don't have any other pictures. In every Hollywood movie, the AI is the villain, except for maybe Star Wars and R2-D2 and C-3PO. They didn't kill anybody. But most of the time, HAL and beyond, it's a tragedy if you give AIs any autonomy. We do want to change that valence and try to at least offer some other versions, and that's sort of what I'm trying to do: to say it doesn't have to go down that path. This is something that I claim: that AIs and robots will help us to become better humans, and I truly believe it. I think that we can use them to become better humans, and so that's an alternative vision. It's less of a prediction and more of a scenario of possibility.
David Gardner: I really find that provocative, and I believe it's true. I think the Internet has made us better humans. Obviously, there are downsides; we can all see some negative uses of the net, and social media, people are really taking that to task, and it's being portrayed as if it's a horrific thing. I think, net, it's a gain, but we can see the things that don't work out. Anyway, your essay from November 10th of last year, which happens to be my wife's birthday, Robots Will Make Us Better Humans, I would also recommend to anybody listening who wants to hear more from Kevin on that one. You also wrote, on October 7th of last year, another eye opener. This one was entitled Paying AIs to Read My Books. Here you are flipping the usual intellectual property debate on its head. Could you walk us through that inversion?
Kevin Kelly: Very, very briefly: the LLM AIs that we have right now, the chatbots, are language- and text-based, and they've been trained on the Internet, primarily, some books, social media, and some videos and stuff. No one really knows exactly. But a lot of people were very concerned about losing their jobs, and a lot of creative people were concerned they were going to lose their jobs to these AIs that were basically trained on the stuff that they were making. They're a creator, they're a book writer, they're an author, and here we have this thing reading their book and then maybe replacing them. That was the fear: it being able to write something as good as them, or in their style. There was a movement among them to demand that they be paid, and the only way they could really get this done was to sue the AI companies for copyright infringement, which is a workaround, a loophole, saying, well, the text that you had, you had illegally; you didn't buy all those books. The Authors Guild sued Anthropic, and they won, because Anthropic had a shadow library. It turned out that they actually didn't use it, but they had it, and that was enough for guilt. There was this moment of thinking, well, maybe in the future the AI companies would be paying the humans, the painters and the photographers and everybody else, for their work. I was saying, no, I think you have this backwards. Because what's going to happen is the AIs, as they digest all this, are like the ultimate readers. When the AIs are reading my books, which have tons of footnotes and end notes and bibliographies, and there are lots of words, they're the best readers I know, because they literally read every word and they digest the whole thing. They're becoming much more the answer giver, the oracle. If you have a question, you're going to them, and whatever they say is what is learned.
Therefore, if I want to influence the world with my ideas, I want to make sure that the AIs are reading my books. When Anthropic revealed, in the disclosure of the lawsuit, all 500,000 books that were in that library, I was going to be really disappointed if my books weren't in it. But happily, four of my books were in it, and I realized that I would be willing to pay something. It's like the marketing fees that you pay in a bookstore to have your book put up front near the cash register. I'd be willing to pay to make sure that the AIs read my books and really pay attention to them and maybe even favor them. I think that dynamic is going to flip in the future, where people are going to really, really want to make sure that their stuff is what the AIs are trained on. It's like, you want to be in the classroom when they're training the AIs, and you want to make sure your books are the textbooks. I think that's where we're headed: this understanding that you want the AIs to read your work. There's an economist at George Mason, Tyler Cowen, who's been writing his memoirs for the AIs. He's just writing them and posting them up with the idea that the AIs will read them. No humans are interested in this, but the AIs are going to read it, and they will have his biography in the end. More and more, there's a shift to the AIs becoming the audience of things. There are companies right now in Silicon Valley rewriting their software from the ground up so that it is much more accessible to the AI coders, the Claude Codes. They're basically rewriting their code for the audience of the AI coders, and so there is this shift. There's at least a secondary audience, maybe it becomes a primary one, but you want to have the AIs in mind when you're creating things.
David Gardner: Such a contrary take, which is what I appreciate about it. I also want to just point out that for a lot of people, hearing that we might end up having to pay the AIs for our own content creation, and they already didn't like AI, that would sound even worse to them. But this comes from somebody who, on the other hand, is saying they're going to make us better, and the world will be better. Reconciling those thoughts, F. Scott Fitzgerald said that if you can keep two opposed thoughts in mind at once, that's genius. That's what you're asking of us, I think, and I think it's kind of a genius point.
Kevin Kelly: Exactly. I am an author, I write content for a living, and I do want that because it's also useful to me. I've taken all my writings, digitized them, and fed them into the AIs so that I can even query what I've written in the past, because I've written so much I don't even remember everything.
David Gardner: I wrote that.
Kevin Kelly: I know, and so it's useful to me, even directly. I think we're just at the beginning, too. By the way, 30 years from now, they'll look back to this moment, 2026, and they'll say, you didn't even have AI. What were you talking about? There are no AI experts here now, and so we're at Day 1 of this, and we have no idea what intelligence is. We don't know what our own intelligence is; we don't know how it works. Here we're trying to make a synthetic version of it. We've got a long way to go. I don't adhere to this fast takeoff idea; I just don't see the evidence for that. I think it's going to be a long slog with detours and backtracking, maybe some periods of winter, some bubbles along the way. This idea that in 2027, next year, this is all going to take off, I am skeptical.
David Gardner: I first had you on Rule Breaker Investing to discuss your 2016 book, which we've referenced, The Inevitable, where you were speaking of 12 formidable forces that will shape our future, maybe I should say, inevitably shape our future, and I love the book. I still go back and reread little sections here and there. Sounds like the AIs do, too, along with me. I want to name, once again, your 12 forces; each is a single word. As I do so, Kevin, I would love for you to think about which is most compelling and showing its stripes here in 2026, and which maybe feels most distant or most presently unimportant.
Kevin Kelly: Fair enough.
David Gardner: This is especially a love note to fans of this book, who will know what I'm talking about. For a lot of others, they're about to hear a bunch of words, but you're going to help us make sense of it. Here are the 12 formidable forces that will shape technology and our future. Here we go, becoming cognifying flowing, screening, accessing, sharing, filtering, remixing, interacting, tracking, questioning, beginning. Which one as you think about that list, again, which one do you want to underline and say, there's something really important going on with that one?
Kevin Kelly: Well, it's very clear that cognifying has come into its own in a way that surprises many people. The one that I think has not gone as far as I would have imagined was accessing. The idea there was that with increasing digitization of things, you would need to own less because you could have access to it. If you could have access to every movie that ever was, why would you own a movie? That has happened, and all the digital assets have gone that way. People don't really own music anymore; they access it. But it hasn't really gone beyond that. I was trying to imagine it extending even to people renting clothes, borrowing clothes. Uber and Airbnb are examples of that, but it didn't go as far. Anyway, that was one that maybe stalled, and it still might change. The other one was filtering.
David Gardner: I think about that a lot.
Kevin Kelly: That also, I think, has this downside, too, because we have people basically filtering news sources to just give you what you want, and a little bit of polarization in that. That was one of the things I was using the chapter to wrestle with: if you had the ultimate news source, how much of it do you want to be stuff that you agree with, and how much do you want to see of things that you had not thought of, disagreed with, or didn't know anything about? That's an interesting balance that we don't have much education with; we don't have much experience with. That whole field hasn't gone as far as I think it could go, and maybe it's been waiting for more of AI to play a role in making that work. That would be one that's, again, behind what I was expecting. Screening, though, is taking over.
David Gardner: You're people of the screen.
Kevin Kelly: People of the screen. For better or worse, we're no longer people of the book; we're people of the screen. Screening is dominating, and the screen has its own dynamics of liquid, fluid, ephemeral versus the monumental fixity of print. I think people are finally recognizing that the center of gravity for our culture now is the things we have right in front of us right now, and so that's in full swing. Again, I think the repercussions and consequences of it haven't been fully understood or even realized; we still have further to go. By the way, I see virtual reality as another version of the screen, basically screens in front of your eyeballs, in 3D games and all that stuff; the virtual environments are more of the same screening. Screening and cognifying are in full force. Then there's becoming. I want to emphasize not only that we are in transition and change, but, what's the word I want, not just to embrace the change. I think people still have this idea that human nature is fixed and sacred, and I think part of the idea of becoming is that we have been in the process of making up humans, and we're still inventing ourselves, and that's why the AIs can have influence, because we are remaking ourselves. I want to emphasize the fact that we are still in the process of inventing who we are, and it isn't done. This identity crisis that we're moving into, brought about by these artificial aliens that we're making with the AIs, causes us to say, well, who are we? What are we good for? That's part of this process of becoming; that's part of the thing of we're in the process of changing who we are and who we think we are. The AIs are part of that process of challenging us. Well, you thought you were the only creative thing. You thought you were about creativity; you thought you were about making tools. If that's not true, then what are you about? That becoming is also, I think, in full force.
David Gardner: Really well said. Early days of the web: you founded WIRED in 1993. We founded The Motley Fool in 1993. The World Wide Web wasn't a phrase in the vernacular yet. It was still AOL, CompuServe, Prodigy; well, early days for things. But when I think about just how abstract the web was initially, and then later it translates into visceral, real-world things like Airbnb or Quipper. Those are all web-driven, but you couldn't have seen it at the start. It was email, it was HTML and chat rooms. I think we're probably, as you said, at day one with AI; we're not seeing it out there yet. Now, as we move toward the end of our conversation, I'm pretty sure that a really big industry that's coming is robotics and robots. As I think about AI being so abstract and chat-room oriented right now, then imagining it going forward, becoming tangible, becoming kinetic, and becoming mobile, that's when maybe, as you once wrote, the robot takeover comes, which is not supposed to be a fear-inducing phrase. Let me just quote you from The Inevitable, because here it is. I highlighted this one when I first read your book, and I'm going to quote: "This is the greatest genius of the robot takeover. With the assistance of robots and computerized intelligence, we already can do things we never imagined doing 150 years ago. We can today remove a tumor in our gut through our navel, make a talking-picture video of our wedding, drive a cart on Mars, print a pattern on fabric that a friend mailed to us as a message through the air." I've sometimes mused about how interesting it would be to try to explain today's technology to a 10th-century Viking. In some ways, I think it would be easier to explain robots to that Viking, by the way, than, say, Wi-Fi. I wanted to go there, robots, here as we start closing down the conversation. General thoughts?
Kevin Kelly: You'll notice that even though we have these LLMs, these AIs that are as smart as PhDs, we don't have robots, and there's a reason for that. The reason is that this spectacular flourish of AI that we have is all based on language and text. They have been trained on the text of the world. Robots need to be trained on the actual world, the real world. They need what we call not large language models, but large world models. They have to be trained on the actual data from physics and chemistry and everything, not what the language says about them. There is a feeling among researchers, and I am among this clan, that we need something else in addition to LLMs before we're going to have robots. Also, we need them to be small, to fit inside the robot, because we don't want the latency of going back up to the cloud and back. You want something that's embodied; it should be mind embodied, the dualistic thing that we have inside our body. The point is that we don't have that yet. We don't have that missing piece. There are attempts at robots, and Waymo is a great example, but Waymo, interestingly, is classic AI. Tesla has gone the other route, trying to make a self-driving car with the LLM view of the world, which is bottom up. You have no structure. You just give it a billion trillion hours of video of cars driving, and you teach it that way. It remains to be seen; so far, the classical way has been better, because we don't have fully autonomous Tesla cars yet. The researchers aren't waiting, but society is waiting for that intelligence that has been trained on the world, that has a world model, and that's going to be small enough to fit into the devices. We know that this is possible because we have an existence proof in ourselves. We're running on 25 watts, and we only need 12 examples of the difference between a cat and a dog. We don't need a million.
We know it can be done, and so that's what's happening right now. There will be lots of attempts to try to make it with the existing things, but I don't think we're going to see that until we have another big breakthrough that's supplemental. The LLMs are good; the neural nets are going to be the basis, but you need something extra in addition to them to have long-term planning. You need continuous learning, which we don't have in LLMs. There's a whole bunch of things which we don't have yet that we're pretty sure we're going to need to have a robot deal with the complexity of the real world, and not just with what people write about the real world. That's all ahead of us, and if you're a young entrepreneur, head west.
David Gardner: Well, just as you've taught us that there are many AIs, so just saying the word AI doesn't really describe real life, because there are probably now millions of different types of AIs. Similarly, there's not just one Optimus robot or one Roomba. There are going to be millions, probably, of types of robots. We're not there yet.
Kevin Kelly: We didn't get to one of the 12, which was remixing, but one of the lessons from the current economy, particularly among creators, is that the niches are the riches, being very specific. There is a movement among the biggest companies in the world that believe that general intelligence is the way to go. I would not bet on that. I think we're going to specialize very quickly, in the same way we don't have a general motor. Even though we have a company called General Motors, we don't have a general motor. We have specific motors that are all made for each different purpose, and we're going to have specialized AIs: this one does translation, this one does driving, this one does warehouse work. Each of those is going to be fine-tuned for that particular domain. Billions and billions of dollars are being aimed toward the general, centralized version of things, and I think that's not going to go away, but I don't think that's where the real frontier is. In the consumer world, in the actual world, I think it's much more likely to be in the niches, specialized intelligences to do this or that, and the centralized one would be like a Swiss Army knife, which is really cool in concept, but not really used.
David Gardner: You know what's interesting is that for most of us, the smart technology that we use most every day is our phone, which in many ways has been a general technology that has replaced so many other technologies. Our analog for understanding the future might be the thing that does so many different things, like The Hitchhiker's Guide to the Galaxy.
Kevin Kelly: Right away.
David Gardner: But I was looking back at your excellent advice for living, which is how I want to close, by the way. I want you to premeditate: I'm going to ask you for one of your excellent advice for living points for my Rule Breaker Investing listeners as we close, but I'm going to give one right now, because it pertains to the conversation we're having. One of your excellent advices for living is sitting right there; I'm not sure which page, but it was this. This is in a book from somebody who is wise, whom we enjoy, an author who feeds his stuff into AIs, and I love the guy, and I love having these conversations. This is what that page says: a balcony or porch needs to be at least six feet, two meters, deep, or it won't be used. Here we are talking about what is actually going to be used. How do we design things so they work, so those billions of dollars of R&D pay off and don't end up equaling zero? Sometimes it's as simple as six feet, two meters, deep.
Kevin Kelly: That wisdom was only won by lots of people making lots of mistakes, by people trying things, by hard-won experience. It was not something that you arrived at by thinking about it. You can think about balconies, but it would never, ever occur to you that one needed to be deeper than six feet. The same thing with AI and stuff: there's something I call thinkism, which is people thinking about things, and that's not going to solve things. We actually have to do stuff. We're not going to figure out how this works by thinking about it. We're not going to figure out whether AI is going to replace humans by thinking about them. We actually have to use them every day and see the evidence of actual use. So we need to embrace them through use. It's through use that we steer technology, not through thinking about it. To the balcony example, I would extend that and say what you want to do is have as much experience as you possibly can with AI on a daily basis, because you're much more likely to arrive at the insight that you need than just by thinking about it or listening to people like me talk about it. Don't do that. Go and actually open up Claude, put eight tabs on it, get it going, be the manager trying to use it for every possible thing you can think of. You'll learn far more than anybody else. That is what you want to be doing right now: using it as much as possible.
David Gardner: Get your feet wet, get your hands dirty. Kevin Kelly, I'm going to ask you for one excellent advice for living; that's going to be the grand finale. But before I do, I was just looking through the index of The Inevitable, and this word never appears in it. I'm curious, without putting you on the spot, as to your thoughts about cryptocurrency today. Is this an important use case? Is this all early stage for something really important, or is this a distraction?
Kevin Kelly: I'm surprised that crypto's not in The Inevitable; I think I had blockchain. I've been saying that, basically, blockchain and the crypto-adjacent currencies were fabulous inventions waiting to find a problem. They were solutions waiting for a problem to really solve. I don't think currency for humans was the solution or the answer. But I'm very excited about the idea of having a stablecoin at the L1 level that is actually cheap enough to do microtransactions, because what you get from that is money for AI agents. I think the scale of the agent-to-agent economy, the agentic economy, will dwarf the human economy very quickly. I know that companies like Stripe are gearing up for this, and they were the ones who sponsored something called Tempo, which is a protocol based around stablecoins to function, basically, as a way to have micropayments for agents, what I call money for AIs. If you have that, I don't think there are very many people who are going to be spending these stablecoins. I don't think it's that useful for humans, but I think it's incredibly useful for AI agents who want to do work. If we can equip the agentic world with money and micropayments that are really cheap and really secure and work, and solve all these other trust issues, which are phenomenally important and unsolved, to have an agentic world, I think we have the possibility of really accelerating the creation of this agentic economy, which, as I say, I believe is going to dwarf the human economy. I think there is a role for cryptocurrencies in the world.
David Gardner: Thank you. We will leave the agentic economy to our next conversation. Let's close this one, Kevin. Throw out, if you will, an excellent advice for living for Rule Breaker Investing listeners here as we close out this podcast in the first week of February 2026.
Kevin Kelly: One of the bits of advice I have for young people, which may apply here, is to try to work in an area where there's no language and no names for what it is that you're doing. Try to be out in front of language, because that means you are on a real frontier. Try to work on something where it takes you 20 minutes to explain to your parents what you're actually doing. That's a good sign. Now, it doesn't guarantee success, but it does guarantee that you are breaking the rules. It does guarantee that you are out there on the frontier, because that's the definition of a frontier: we don't have the words or the language for it. I'm reminded of the people I knew who were doing this thing 10 or 15 years ago, and they had to explain it to their parents. They'd say, it's like radio, but it's not radio, because it's recorded and it goes on the Internet. It's called podcasting. I was like, what's that? You had to make up new words, like streaming, for what it is that you're doing. Try to work in an area where they don't have words or language for what it is that you're doing.
David Gardner: Wonderful note to end on. Kevin, you've been once again so generous, not just with your time, but much more importantly, your insight, your challenge. You're never trying to be a provocateur, but you are, by your very nature, a provocateur toward the good, and a fellow Rule Breaker, and I really appreciate this conversation. I wish you the best year in 2026.
Kevin Kelly: It's been a pleasure. Again, I appreciate everybody listening. Our time is the most precious asset that we have in the entire world, and you've given me some of your time, and I appreciate that. Thank you.
David Gardner has positions in Tesla. The Motley Fool has positions in and recommends Tesla. The Motley Fool recommends General Motors. The Motley Fool has a disclosure policy.