Tag: AI

  • “Big Ideas and Bold Futures: Reflections on AI, Innovation, and Avoiding Stupidity”


    (Image: a Japanese proverb)

    Table of Contents

    • 1 Getting Visual…
    • 2 If You Read One Thing Today – Make Sure it is This…
    • 3 Consequential Thinking about Consequential Matters…
    • 4 Big Ideas…
    • 5 Big thinking…
    • 6 Patience…

    1 Getting Visual…

    What makes up the world around us? Go find out the details of the biomass in this interesting interactive: https://biocubes.net/ 

    Wild World – Stable (forecasted) Economy – The IMF produces forecasts for 196 countries, and its latest forecast shows that a record-low share of countries is expected to be in recession in 2025 and 2026.

    Make your country great again – the rise of domestic agendas – In the political realm, I see a reversion to the mean. After decades of globalism and cosmopolitan thinking, many countries—from the US to Argentina, Canada to India—are pivoting inward, focusing on nationalist or populist policies. That isn’t inherently surprising; if you zoom out, these pendulum swings happen – although we’re at the highest such level since the 1930s.

    “When I look back on our recent history—from around 2014 to today—I see an era that future historians will call an inflection point. Much as the 1920s and 1930s fundamentally differed from the world of the late 1800s, we’re going through a similarly significant transition now. We have game-changing technologies and volatile geopolitics. In my view, this period will be studied as the moment when economic and political models were reshaped. I wrote a book about this in 2022; if you’re new to my work you can pick it up here. I draw heavily on the work of economist Carlota Perez who outlined how the changing underlying technological paradigm brings with it financial speculation, bubbles and wealth accumulation. It requires an institutional recomposition which fundamentally heralds a golden age. Things are breaking apart, but also recombining in new ways. Yes, it is about tech but it’s also about deeper social, cultural and economic realignments.” – Azeem Azhar

    Global prices for large-scale energy storage systems have plunged 73% since 2017, according to BNEF. China, which requires batteries to be installed at new solar or wind farms, overtook the US as the world’s biggest energy storage market in 2023 and was expected to add 36 gigawatts of batteries in 2024, equivalent to the output of 36 nuclear reactors. The US, in contrast, was on track to add almost 13GW in 2024, according to BNEF research, with an additional 14GW coming in 2025.

    Exponential Power – Human ingenuity drives progress. A look at DNA sequencing costs is a powerful example…

    Learn more here: https://www.genome.gov/about-genomics/fact-sheets/DNA-Sequencing-Costs-Data 

    Disaster – As private insurance pulls out of insuring homes in the most disaster-prone American states, public last-resort backstops are absorbing the risks, so far to the tune of $1 trillion. According to a 2018 study by the University of Cambridge and Munich Re, if a Category 5 hurricane hit Miami and the Florida coast, it could cause a staggering $1.35 trillion in damages, more than $60,000 for every person in the state. … Fire damage from 2017 and 2018 wiped out more than twice the previous 25 years’ worth of underwriting profits for the California insurance market.

    On the Move – There were roughly 41,000 Americans living in Spain in June, a 39% increase from three years ago and double the number from 2014. Nearly 15,000 golden visas have been issued in Spain since the program’s launch in 2013, according to the most recent annual government figures. Spain recently announced it would officially end its golden visa program on April 3, roughly a year after the change was initially proposed. But that hasn’t stopped Americans from seeking ways to gain residency in the country. And there’s been increased interest in the aftermath of the contentious presidential election. “It’s been non-stop requests,” said Matt Anderson, an American who works as a real estate agent in Mallorca. He said there was a 12% increase in US buyers on the Spanish island over the last year and cited the quality of international schools and warm weather as key reasons driving many Americans to relocate.

    2 If You Read One Thing Today – Make Sure it is This…

    Brian Klaas, author of the interesting book Fluke: Chance, Chaos and Why Everything We Do Matters, shares the essays he most enjoyed writing in 2024.

    Plenty of interesting perspectives, and his short video ‘The Edge of Chaos’ in the intro is a good starting point – it’s something to think about. Go do it here:

    https://www.forkingpaths.co/p/the-highlights-of-2024

    https://www.youtube.com/watch?v=TLm6dC34gYk

    Depending on which scientist you ask, Homo sapiens emerged between 200,000 and 315,000 years ago. Let’s split the difference, go with 257,000, and say that there have been about 9,500 generations of humans (the average generation throughout human history lasts 26.9 years).

    Our way of life was dominated by hunting and gathering for 9,100 of those 9,500 generations—96 percent of the existence of our species.

    Then, about 10,000 years ago, agrarian societies formed. Farming replaced hunting and gathering. (It was disastrous for our health). In the 18th century, society shifted again, toward industrialization. And now, in the last thirty years or so, we’ve ushered in another shift, with computerization, the internet, and artificial intelligence, a society defined by interconnected information and an unprecedented flow of ideas.

    More than half of the world’s population is under the age of 30, meaning that more than half of us have only lived in those 11 seconds—an era that is, without question, the weirdest period in human history.” 

    Consider, for example, this extraordinary fact, conveyed by the Dutch sociologist Ruut Veenhoven:

    “the average citizen lives more comfortably now than kings did a few centuries ago.” 

    We, not them, are the weird ones.

    Read in full here: https://www.forkingpaths.co/p/we-are-different-from-all-other-humans 
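
    A quick aside on the arithmetic in the excerpt above: it holds up. The Python sketch below reproduces the figures; the 24-hour compression behind the “11 seconds” line is my own assumption, since the excerpt doesn’t spell that mapping out.

```python
# Back-of-the-envelope check of the numbers in the excerpt above.
SPECIES_AGE_YEARS = 257_000        # roughly the midpoint of the 200,000-315,000 range
GENERATION_YEARS = 26.9            # average generation length cited in the excerpt
FARMING_START_YEARS_AGO = 10_000   # rough onset of agrarian societies
MODERN_ERA_YEARS = 30              # the computerization / internet / AI era

generations_total = SPECIES_AGE_YEARS / GENERATION_YEARS          # ~9,554, quoted as ~9,500
generations_farming = FARMING_START_YEARS_AGO / GENERATION_YEARS  # ~372
generations_foraging = generations_total - generations_farming    # ~9,182, quoted as ~9,100

print(f"hunting and gathering: {generations_foraging / generations_total:.0%} of all generations")  # ~96%

# Assumption: the "11 seconds" compresses the species' whole history into a single 24-hour day.
seconds_per_day = 24 * 60 * 60
modern_seconds = MODERN_ERA_YEARS / SPECIES_AGE_YEARS * seconds_per_day
print(f"the modern era occupies ~{modern_seconds:.0f} seconds of that day")  # ~10 seconds
```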

    Sadly, so much of our discourse around intelligence and stupidity gets hijacked by pseudoscience, racism, and debates over whether arbitrary measurements like IQ are valid. We ignore more interesting questions about intelligence and stupidity – ones we can answer not by studying ourselves, but other species. In particular:

    Pondering these questions requires going on a bit of a wild ride, exploring fascinating animal worlds from chimpanzees to cephalopods, as we begin to understand our own cleverness—and stupidity—through the eyes of an octopus, the closest thing to alien intelligence on Earth.

    Read it here in full: https://www.forkingpaths.co/p/the-evolution-of-stupidity-and-octopus 

    In 2028, if all goes according to plan, a six foot five narcoleptic scientist with a bushy white beard will resurrect the first living woolly mammoth in 4,000 years. And that mammoth—if the science works—may soon be lumbering, in all its hairy glory, across the frozen plains of North Dakota.

    This is the story of the science, the ethics, and the risks and rewards of the emerging field of de-extinction—the revival of species that no longer exist.

    Humans may soon be able to bring species back from the dead. But should we?

    Read it here in full: https://www.forkingpaths.co/p/de-extinction-and-the-resurrection 

    3 Consequential Thinking about Consequential Matters…

    WEF has dropped 104 pages on the Global Risks for ‘Davos Man’ to worry about for 2025 – as much as it has been maligned (often rightly so), the WEF does provide good insights, and I like its exercise of thinking out 2 years and then 10 years. There is some Consequential Thinking about Consequential Matters in this report – go take a look: https://reports.weforum.org/docs/WEF_Global_Risks_Report_2025.pdf

    “The multi-decade structural forces highlighted in last year’s Global Risks Report – technological acceleration, geostrategic shifts, climate change and demographic bifurcation – and the interactions they have with each other have continued their march onwards. The ensuing risks are becoming more complex and urgent, and accentuating a paradigm shift in the world order characterized by greater instability, polarizing narratives, eroding trust and insecurity. Moreover, this is occurring against a background where today’s governance frameworks seem ill-equipped for addressing both known and emergent global risks or countering the fragility that those risks generate.” 

    “Concerns about state-based armed conflict and geoeconomic confrontation have on average remained relatively high in the ranks over the last 20 years, with some variability. Today, geopolitical risk – and specifically the perception that conflicts could worsen or spread – tops the list of immediate-term concerns.”

    “An estimated two-thirds of the world’s population – 5.5 billion people – is online and over five billion people use social media. The increasing ubiquity of sensors, CCTV cameras and biometric scanning, among other tools, is further adding to the digital footprint of the average citizen. In parallel, the world’s computing power is increasing rapidly. This is enabling fast-improving AI and GenAI models to analyse unstructured data more quickly and is reducing the cost to produce content. With Societal polarization ranking #4 in the GRPS two-year ranking, the vulnerabilities associated with citizens’ online activities look set to continue deepening hand in hand with societal and political divisions. Taken as a whole, these developments threaten to fundamentally undermine individuals’ trust in information and institutions.

    Like last year, Misinformation and disinformation tops this year’s GRPS two-year ranking. The amount of false or misleading content to which societies are exposed continues to rise, as does the difficulty that citizens, companies and governments face in distinguishing it from true information. The interplay of rising Misinformation and disinformation with political and Societal polarization creates greater scope for algorithmic bias. If human, institutional and societal biases are not addressed, and/or best practices in modelling are neglected, the conditions will be ripe for algorithmic bias to become more prevalent. Such bias, whether inherent in data, models or their creators, can lead to unjust outcomes.” 

    Super-ageing societies 

    Countries are termed “super-ageing” or “super-aged” when over 20% of their populations are over 65 years old.

    Several countries have already exceeded that mark, led by Japan and including some countries in Europe. Many more countries, across Europe and Eastern Asia in particular, are projected to do so by 2035. Globally, the number of people aged 65 and older is expected to increase by 36%, from 857 million in 2025 to 1.2 billion in 2035.
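
    A quick consistency check on those figures – my arithmetic, not the report’s – sketched below: growing 857 million by 36% lands at roughly 1.17 billion, which is the quoted 1.2 billion once rounded.

```python
# Consistency check of the 65+ projection quoted above (my arithmetic, not the report's).
base_2025 = 857e6   # people aged 65 and older in 2025
growth = 0.36       # quoted increase to 2035

projected_2035 = base_2025 * (1 + growth)
print(f"{projected_2035 / 1e9:.2f} billion")  # ~1.17 billion, i.e. the quoted "1.2 billion" after rounding
```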

    By 2035, populations in super-ageing societies could be experiencing a set of interconnected and cascading risks that underscore the GRPS finding that the severity – albeit not the ranking – of the risk of Insufficient public infrastructure and social protections is expected to rise from the two-year to the 10-year time horizon. An ongoing concern is that government funding for public infrastructure and social protections gets diverted during short-term crises.

    Some super-ageing societies could be facing crises in their state pension systems as well as in employer and private pensions, leading to more financial insecurity in old age and exacerbated pressure on the labour force, which includes a growing number of unpaid caregivers. Indeed, super-ageing societies by 2035 are likely to face labour shortages.

    The long-term care sector will be especially affected by labour shortages. Care occupations are expected to see significant demand growth globally by 2030. Care systems – health care and social care – in super-ageing societies are already under clear and immediate strain. They will struggle to serve a fast-growing population over 60 years of age that has additional care needs while recruiting and retaining enough care workers. Care systems are, in great part, funded by governments and account for about 381 million jobs globally – 11.5% of total employment. The accumulation of debt and competing spending needs on, for example, security and defense are likely to constrain the reach and sustainability of public expenditure on care systems over the next decade. Without increased public or blended investment, care demand will continue to be unmet.

    Economies already experiencing this challenge are resorting to stop-gap measures, including attracting migrant care workers from other economies. But if this turns into a talent drain from countries with more youthful societies, those countries may then struggle to reap the benefits of their demographic dividend and will, several decades from now, run into super-ageing society challenges of their own.

    There will be no easy solutions to this problem set, given the sustained strength to 2035 of the two underlying trends generating higher average dependency ratios, not only across super-ageing societies, but at the global level: declining fertility rates and rising life expectancy, though not necessarily in better health.

    4 Big Ideas…

    Upon the second birthday of ChatGPT, Sam Altman took to his blog to post some ‘reflections’ – they hold a few interesting ideas and insights to ponder on GenAI, tech and building a startup company – go read it here: https://blog.samaltman.com/reflections

    In 2022, OpenAI was a quiet research lab working on something temporarily called “Chat With GPT-3.5”. (We are much better at research than we are at naming things.) We had been watching people use the playground feature of our API and knew that developers were really enjoying talking to the model. We thought building a demo around that experience would show people something important about the future and help us make our models better and safer.

    We ended up mercifully calling it ChatGPT instead, and launched it on November 30th of 2022.

    We always knew, abstractly, that at some point we would hit a tipping point and the AI revolution would get kicked off. But we didn’t know what the moment would be. To our surprise, it turned out to be this.

    The launch of ChatGPT kicked off a growth curve like nothing we have ever seen—in our company, our industry, and the world broadly. We are finally seeing some of the massive upside we have always hoped for from AI, and we can see how much more will come soon.

    The road hasn’t been smooth and the right choices haven’t been obvious.

    In the last two years, we had to build an entire company, almost from scratch, around this new technology. There is no way to train people for this except by doing it, and when the technology category is completely new, there is no one at all who can tell you exactly how it should be done.

    Building up a company at such high velocity with so little training is a messy process. It’s often two steps forward, one step back (and sometimes, one step forward and two steps back). Mistakes get corrected as you go along, but there aren’t really any handbooks or guideposts when you’re doing original work. Moving at speed in uncharted waters is an incredible experience, but it is also immensely stressful for all the players. Conflicts and misunderstanding abound.

    Nine years ago, we really had no idea what we were eventually going to become; even now, we only sort of know. AI development has taken many twists and turns and we expect more in the future.

    Some of the twists have been joyful; some have been hard. It’s been fun watching a steady stream of research miracles occur, and a lot of naysayers have become true believers. We’ve also seen some colleagues split off and become competitors. Teams tend to turn over as they scale, and OpenAI scales really fast. I think some of this is unavoidable—startups usually see a lot of turnover at each new major level of scale, and at OpenAI numbers go up by orders of magnitude every few months. The last two years have been like a decade at a normal company. When any company grows and evolves so fast, interests naturally diverge. 

    “Our vision won’t change; our tactics will continue to evolve.” 

    We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

    We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

    This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important.

    5 Big thinking…

    Explore Inversion and The Power of Avoiding Stupidity in this excellent short exploration from the good people at the FS Blog: https://fs.blog/inversion/ 

    “It is not enough to think about difficult problems one way. You need to think about them forwards and backward. Inversion often forces you to uncover hidden beliefs about the problem you are trying to solve.” 

    “Many problems can’t be solved forward.” 

    “Say you want to improve innovation in your organization. Thinking forward, you’d think about all of the things you could do to foster innovation. If you look at the problem by inversion, however, you’d think about all the things you could do that would discourage innovation. Ideally, you’d avoid those things. Sounds simple right? I bet your organization does some of those ‘stupid’ things today.

    Another example, rather than think about what makes a good life, you can think about what prescriptions would ensure misery.” 

    “Avoiding stupidity is easier than seeking brilliance.” 

    “While both thinking forward and thinking backward result in some action, you can think of them as additive vs. subtractive.

    Despite our best intentions, thinking forward increases the odds that you’ll cause harm (iatrogenics). Thinking backward, call it subtractive avoidance or inversion, is less likely to cause harm.

    Inverting the problem won’t always solve it, but it will help you avoid trouble. You can think of it as the avoiding stupidity filter. It’s not sexy but it’s a very easy way to improve.” 

    “Spend less time trying to be brilliant and more time trying to avoid obvious stupidity.”

    6 Patience…

    Have a Great Weekend when you get to that stage…

    Sune Hojgaard Sorensen



  • “The Intersection of AI, Business Strategy, and Learning: Preparing for Tomorrow”


    Table of Contents

    • 1 Getting Visual…
    • 2 If You Read One Thing Today – Make Sure it is This…
    • 3 Consequential Thinking about Consequential Matters…
    • 4 Big Ideas…
    • 5 Big thinking…
    • 6 The Hierarchy of Thinking Styles…

    1 Getting Visual…

    Learn more here: https://www.visualcapitalist.com/the-value-of-the-global-semiconductor-industry-in-one-giant-chart/ 

    Take a tour of the ‘Heart of the Internet’ – Northern Virginia has far more data centers than anywhere else on earth.

    Watch a short video covering the background and current developments here: https://www.youtube.com/watch?v=td-7WGAQKgA

    BIG Trend – The surge in BIG TECH capex is one of the most dramatic trends in the US economy in the last 15 years…

    Digital Reality – The typical person is awake for about 900 minutes a day. American kids and teenagers spend, on average, about 270 minutes on weekdays and 380 minutes on weekends gazing into their screens, according to the Digital Parenthood Initiative. By this account, screens occupy more than 30 percent of their waking life.
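
    A rough sketch of how the “more than 30 percent” figure falls out of those numbers; weighting weekdays and weekends 5 to 2 is my assumption about how they combine.

```python
# Rough check of the screen-time share quoted above.
waking_minutes_per_day = 900
weekday_screen, weekend_screen = 270, 380  # minutes of screen time

avg_screen = (weekday_screen * 5 + weekend_screen * 2) / 7  # simple 5/2 weekly weighting (assumption)
share = avg_screen / waking_minutes_per_day
print(f"~{avg_screen:.0f} min/day on screens, ~{share:.0%} of waking life")  # ~301 min, ~33%
```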

    The BIG Cost of Building public infrastructure in the US & UK…

    Risk Assessment – Uninsurable areas of the US are growing fast – Private insurers had already begun to pull out of the California fire market. Farmers Insurance has the most exposure with 6.9% of the state’s commercial fire market, while Berkshire Hathaway is second at 6.6% and Travelers is third at 6.3%, according to trade publication Business Insurance. Zoom out: California’s fate may resemble the flood insurance market in Florida, where the state has become the insurer of last resort for a mounting number of parcels. The U.S. saw 28 weather and climate disasters costing at least $1 billion in 2023 — the highest on record, Axios’ Erica Pandey reports. Damages totaled $93 billion. 2024 disaster data is not yet out, though it’s expected to follow the trend.

    The Long View – The changing nature of US employment…

    Who has the Skills? Compared with the last set of assessments a decade earlier, the trends in literacy skills were striking. Proficiency improved significantly in only two countries (Finland and Denmark), remained stable in 14, and declined significantly in 11, with the biggest deterioration in Korea, Lithuania, New Zealand and Poland. Among adults with tertiary-level education (such as university graduates), literacy proficiency fell in 13 countries and only increased in Finland, while nearly all countries and economies experienced declines in literacy proficiency among adults with below upper secondary education. Singapore and the US had the biggest inequalities in both literacy and numeracy. “Thirty per cent of Americans read at a level that you would expect from a 10-year-old child,” Andreas Schleicher, director for education and skills at the OECD, told me — referring to the proportion of people in the US who scored level 1 or below in literacy. “It is actually hard to imagine — that every third person you meet on the street has difficulties reading even simple things.”

    2 If You Read One Thing Today – Make Sure it is This…

    Cembalest and the JPM Research Crew look into the brew of ‘The Alchemists’ in their Outlook 2025.

    The Alchemists: deregulation, deportations, tariffs, tax cuts, cost cutting, oil & gas, crypto, medical freedom and Agency purges. What could possibly go wrong?

    It raises some interesting questions worth thinking about – get started here:

    https://assets.jpmprivatebank.com/content/dam/jpm-pb-aem/global/en/documents/eotm/the-alchemists.pdf

    Policies and statements from Trump nominees (both cabinet-level and those not requiring Senate confirmation) indicate that the Alchemists aim to “break” something, whether it’s globalization, the Federal bureaucracy, the IRS, the FBI, Medicare, US vaccine policy, lax US border policies, its “Deep State” opponents or something else. Whatever the goals, I take the Alchemists at their word: they are going to break something, I just don’t know what.

    For investors, there’s little room for error with valuations this high and since valuations are now driving markets just as much as earnings growth. Also: the S&P 500 just registered two 20%+ years in a row, something which has occurred just ten times since 1871. Only during the 1990s bull market and the Roaring Twenties did the good times continue for another two years. I expect a 10%-15% correction at some point in 2025 as the Alchemists apply their elixirs to the US economy.

    Plan accordingly: US equity markets should end the year higher than they began but be sure to have plenty of liquidity to take advantage of what might be a volatile year. Ultimately, the 10-year Treasury will be the best barometer of the new administration. If the supply side benefits from deregulation and tax cuts overpower the inflationary impacts of tariffs, a shrinking labor supply and large budget deficits, the 10-year Treasury should remain in the range of 4.5% to 5.0%.

    3 Consequential Thinking about Consequential Matters…

    We are bombarded by messages telling us to worship the gods of efficiency and optimization, life hacking our way to prosperity. It’s a trap. Resilience is a smarter, sturdier goal. Brian Klaas makes the case for some slack…go explore it here in full…it’s well written and it will make you think…

    https://www.forkingpaths.co/p/against-optimization

    “…Weber argued, the truest path to revealing godliness was through endless toil, a daily bodily prostration to the divine.

    Today, religiosity has plummeted across a variety of Western countries, but that “spirit of capitalism” dominates most modern lives. However, these days, it’s not enough to work hard; to get ahead, you have to work smart. Maximize your output. Minimize your inefficiency. Optimize your life.” 

    “The most recent episode of one of America’s top podcasts—the Huberman Lab—is titled “Optimize your learning and creativity with science-based tools,” as though creativity is a machine. Oil your brain with the right mental lubricant, feed it a diet of a few peer-reviewed papers, and presto: become the next Picasso, Shakespeare, or Einstein. (Or, more realistically, someone who tells disembodied heads in rectangular boxes on a Microsoft Teams meeting about an innovative brainwave for better “synergy” to boost profits).

    These dystopian social tendencies have worsened through the quantification of everything, in which we most value what can be measured and translated into data. From Q3 benchmarks to sleep scores and step counts, data can help guide us, but too often we live in a world where abstract passions are discounted in favor of metrics.” 

    “…slathering the internet with a thick coating of “life hack” articles, it is clear that many people are devoted to a new religion.

    The new god is called Optimization—and the disciples are legion.” 

    “The problem, however, is that there’s a trade-off being ignored. What one person mistakes as inefficiency may actually be resilience. Rather than a demon to be slayed by a McKinsey exorcism, social slack is required for robustness. From modern social systems to our individual lives, we are over-optimized, courting disaster because we are deliberately slicing away the sinews that make ourselves and our world sturdier.” 

    “Criticality and Slack, from the Suez to Estonia

    In 2021, a gust of wind hit a container ship nearly twice as long as the Titanic. The ship twisted sideways, both its ends wedged awkwardly into the banks of the Suez Canal. The Ever Given was stuck—and, in an instant, the shipping pipeline for 12 percent of global trade was blocked.

    For six days, supply chains broke down, products were spoiled, and delays piled up, sending shockwaves across the global economy. One analysis suggested that the total economic damage from that one boat twisting sideways was $73 billion. 

    Never before in human history could one boat cause a global economic calamity, clear evidence of a system optimized to the point of fragility.” 

    “More recently, in late December 2024, the Estlink 2 power cable connecting Estonia and Finland was severed, most likely by a vessel operated under Russia’s “Shadow Fleet.” It was a major setback, damage that will take months to fully repair. But there were no blackouts. Unlike the Suez Canal incident, the Estonian power grid was resilient, able to withstand an unexpected blow.

    When you consider these two systems, it’s obvious there’s a direct trade-off between optimization and resilience.” 

    “When one node fails, the network adapts, creating resilience through redundancy. Inevitably, that will introduce some inefficiency into the system.

    We often call such inefficiency slack. But the obvious lesson we too easily ignore in the modern world is this: slack is often both necessary and desirable. 

    Our ability to predict the future is limited, and in an era of hyper-uncertainty, relying on ever-more precise past data only yields ever-more misguided certainty that the patterns of the past will be a reliable guide to our future.” 

    “In complex systems—which includes pretty much everything humans arrange themselves to do in large groups—we can inadvertently reach a state of criticality. (In more conversational English, this is often referred to as a “tipping point” or a “phase transition” or an “avalanche”). 

    These terms highlight a simple but important dynamic: a system that’s put under too much strain can rapidly change, sometimes collapsing—with catastrophic consequences.” 

    “…when buffers are razor-thin, even minor delays in a system can cascade across nodes in a network, compounding and creating cataclysmic “shocks”.

    But “shock” is a misnomer; it’s not some unexpected bolt from the blue, but too often the inevitable result of a system designed to be brittle, precisely because it’s over-optimized, made that way by people who obsess over inefficiency but neglect resilience.” 

    “Confusion arises because the same word—optimization—is used to mean different things in different contexts. It’s useful, then, to categorize three particularly relevant kinds of optimization, as only two of them tend to come with major risks attached.

    The Suez Canal debacle was created by an over-optimization on efficiency. The attempt to eliminate any buffer in the system in pursuit of slightly higher profit margins created a cascading disaster from one minor mistake. 

    Optimizing for efficiency often comes with a tradeoff: less resilience and reduced ability to adapt to uncertain, fluctuating conditions. At a moment when the world is changing faster than ever before, banking on a system that is tailor-made to break down in the face of the uncertain and the unexpected is a terrible bet for humanity.” 


    Optimizing for goals rather than pure efficiency can be wise, but also comes with risks—particularly when the wrong goal is fetishized because it can be easily measured, or when goal setting eclipses the intrinsic purpose of the activity itself.” 

    “The Barren Desolation of Optimization Culture

    Beyond social systems, over-optimizing for efficiency or goals on an individual level can suck the joy out of existence, reducing life’s dazzling unquantifiable flourishes into mere “inefficiencies” to be excised. As I previously wrote, on the relentless goal setting of “hustle culture”:

    Hustle culture is the pinnacle of what I call a checklist existence, the ultimate form of a world in which every box we tick gets replaced by yet another one, the same way that e-mail inboxes are the ever-regenerating many-headed hydras that plague our daily lives. 

    We slay them endlessly, hoping that each slash of the delete key or reply button will get us closer to that mythic allure of “inbox zero,” a profoundly dystopian goal. In that never-ending battle, which we always lose, the checklist itself becomes the achievement, an utterly bizarre, tragicomic approach to living—and yet one that, like most of us, I struggle to resist. 

    We are, too often, chained to our checklists, inmates held inside our own inboxes.” 

    “I stopped trying to create who I was… then I stopped trying to discover who I was… and now I allow who I am.”

    This isn’t an invitation to resigned complacency, but rather a corrective compass: a reminder that personal striving should be guided by internal motivation, not to satisfy some unicorn-like social fantasy about the perfectly optimized life—astonishingly efficient, ruthlessly goal-oriented, and utterly nightmarish.

    We love to achieve greatness through hard work and passion, but many of the pinnacles of human emotion emerge unexpectedly, often within the slack that we allow for ourselves to truly live. As with social systems, we all need a buffer.” 

    “Topple the churches to the god of Optimization. Replace them with shrines to a wiser, more caring deity: Resilience.

    To see why, we need to draw on lessons from unexpected places: the shells of molluscs, the carefully engineered robustness of ant colonies, and by debunking the mistaken interpretations of evolutionary biology that have infected the dominant—but incorrect—view as to how our world works.

    The popular reduction of evolutionary principles to “survival of the fittest”—with overtones of relentless, flawless optimization—is a tragic mistake. (Many incorrectly attribute the phrase to Charles Darwin, but it was first coined by Herbert Spencer). 

    While it is true that evolution does often fine-tune species to greater fitness over time through natural selection, the ultimate engine of evolution is survival and reproduction—which often requires robustness and the ability to adapt to uncertainty.” 

    “A hyper-optimized species that can only survive in one environment will get wiped out if that environment changes” – one reason why evolution routinely works in unexpected ways, through what the brilliant evolutionary biologist Zachary Blount calls “the genomic junk drawer.”

    “The specific evolutionary path that a species took—along with plenty of accidental, contingent events along the way—leaves extra stuff in the genome that might at first appear to be junk.

    The awe-inspiring genius of our natural world is that evolution provides a mechanism to repurpose that genomic “slack” into something more useful when the environment changes. It’s the evolutionary wizardry of resilient adaptation.” 

    It’s not survival of the perfectly optimized, but survival of the resilient, as only the most robust inherit the Earth.” 

    “One of nature’s overarching lessons is this: what may look to a naive human eye as waste, or inefficiency, or under-optimized slack is often evolution’s secret weapon, providing the adaptive resilience to survive in an ever-changing world.” 

    “The attempted assertion of perfect control over an uncontrollable world isn’t just a fool’s errand that will always end in disappointment; it’s also a blueprint for a miserable life.” 

    “It is only in encountering the uncontrollable that we really experience the world. Only then do we feel touched, moved, alive,” Rosa says. That doesn’t mean abandoning striving, or giving up, or passively floating through life. But when everything becomes an instrumental goal to be optimized, intrinsic passion falls by the wayside, an infinite regress where each task and goal gives way to yet another—until you die. Worse, along the way, personal resilience is jettisoned, a casualty of a false belief that life hacks are gospel and life slack is waste.

    Social systems, like individual lives, are made fragile by the optimization creed, sacrificed to the God of Efficiency. Goal setting unlocks achievement, efficiency is mostly good, and striving for a better self and a better world forges progress. But when we take these ambitions to their limit, we stretch to the breaking point, engineering fragility. And that leaves us with a potent reminder that you’ll rarely see mentioned in the productivity industry, which seeks to cash in on transforming the unruly beauty of humanity into optimized metric-driven drones:

    Few humans on their deathbed have celebrated their achievement of “Inbox Zero.”

    4 Big Ideas…

    Unzip.Dev shares an exploration of AI’s path ahead and the impacts on businesses. Rethink your MOAT and those of the businesses you are invested in; start by giving this a read in full – do it here:

    https://unzip.dev/0x01f-ai-and-startup-moats/

    The curve

    I think we can boil the possible futures down into two outcomes when it comes to AI:

    The blue indicates AI eventually plateauing, becoming great at regurgitating outputs based on its training, and the red line indicates AI being able to improve itself and create results it hasn’t been trained on. Most people believe in one of these two potential futures, with each camp often being very emotional about their stance (since it touches on touchy things, like job security, business resilience, the fight between humans and machines etc…). 

    Because of all of these feelings flying around, I think it’s important to try to be objective and face reality head on – things can progress faster than you might allow yourself to believe, and you better be ready.

    I am leaning towards the red camp, but I’m not here to convince you of that as the arc prize results should be sufficient on their own (tl;dr: o3 managed to solve a problem it wasn’t trained on, with orders of magnitude better performance than other state of the art models).” 

    “The reason I am showing you this graph is to say that even in the blue scenario (slowing growth), things are about to change drastically. Even if we’re being super conservative, the current capabilities of AI – like Claude 3.5, GPT-o1 – are already powerful enough to disrupt nearly every industry we know.

    Let’s assume for a minute that reality lies with the blue curve. Even in that case, I can still confidently say that AI will:

    This means that all the tasks we already see AI making strides in will get even better, including in many creative professions like designing, writing, coding and the like. 

    Whether you’re riding the red curve or the blue curve, we all need to prepare for what is already headed our way. Let’s start defining some terms and assumptions before we kick things off more properly.”

    We can’t make any predictions without agreeing on a few base assumptions. I think Bezos nailed it on this topic:

    “I very frequently get the question: ‘What’s going to change in the next 10 years?’ And that is a very interesting question; it’s a very common one. I almost never get the question: ‘What’s not going to change in the next 10 years?’ And I submit to you that that second question is actually the more important of the two – because you can build a business strategy around the things that are stable in time. … [I]n our retail business, we know that customers want low prices, and I know that’s going to be true 10 years from now. They want fast delivery; they want vast selection. It’s impossible to imagine a future 10 years from now where a customer comes up and says, ‘Jeff I love Amazon; I just wish the prices were a little higher,’ [or] ‘I love Amazon; I just wish you’d deliver a little more slowly.’ Impossible. And so the effort we put into those things, spinning those things up, we know the energy we put into it today will still be paying off dividends for our customers 10 years from now. When you have something that you know is true, even over the long term, you can afford to put a lot of energy into it.”

    You should consider what won’t change, and the following is a (non-exhaustive) list of things that I think won’t change:

    • I believe AI is gaining, and will continue to gain, intelligence, even if we don’t consider it traditional human intelligence. Wait But Why wrote a great piece about this topic circa 2015, and if you haven’t already read it, you’re in for a treat: Wait but why – AI revolution.
    • We will still want cheaper, better, faster products.
    • AGI will not eliminate economics and capitalism – we can have a big philosophical discussion here, but honestly, this is a bit over my current grasp of what is possible – so let’s limit this to something tangible.
    • Resources are finite and limited.
    • Accountability – we will need to blame someone when things don’t go well, and blaming AI will not work, at least not right out of the box.
    • We’ll still be here – no apocalyptic scenarios in this thought experiment.

    And a few things that are already changing:

    • R&D: With AI taking over more and more programming work, and with AGI lurking around the corner, many traditional moats around R&D seem to be in question.
      • Remember those 6+ person ML teams a few years back, working full-time on outcomes that one LLM call could achieve today? Who says it won’t continue?
      • What if that SaaS product that took you 10 developers and 2 years to build can be copied in 2 months by a 2-person team?
    • Traditional Costs: Much traditional human (computer-facing) work will be replaced with LLMs, done at a fraction of the cost and in a fraction of the time.
    • New costs: There was an assumption that with this new wave of AI, hardware and inference costs would go down, but what we are seeing with o3 is that it might be the opposite: it seems like we might pay for “more intelligence” (see chart x-axis), which is constrained by hardware that currently isn’t widely available.”

    ⚠️ If your business relies heavily on one of these moats, I’d strongly suggest re-evaluating your strategy to mitigate potential vulnerabilities.

    • “Better product”: We need to define “better” clearly, but if you’re basing this off your R&D efforts, I would very much fear the competition coming my way. If someone can use enough compute to copy you and use AGI to make a product better than what you currently have, is it still “better”?
    • R&D Team Size: Traditionally, big corporations had the advantage of capital, which was used on R&D labor to produce more and better products. Today that might mean you are slowing yourself down: the bigger the team, the slower you are.
    • Superior Customer Support: What if agents could provide 99.9% or better customer support than you currently have, 24/7 at a fraction of the cost? We aren’t there yet, but I don’t see a reason we can’t get there eventually.
    • Superior UI/UX: With tools like v0, Figma AI, etc., your competitors could copy many UI/UX components you have pretty rapidly – unless they require heavy backend R&D work tied to the UX (architecture, scalability decisions that aren’t copied easily).
    • Personalization: I would argue there aren’t many successful personalization products out there, but now this will be a commodity with things like GenUI.”

    1. Physical world: Anything digital will be replaced much faster, but physical industries like robots, defense, biology, construction, and the like will take a bit more time to disrupt (as translating LLM outputs to the real world is harder than moving bits). I think this is going to be one of the next big frontiers to tackle.
    2. Business operations: Being able to codify your business with strong processes, specifically with textual documentation, will make your company more suitable for AI “handover.” You are laying the groundwork to make parts of your business automated. Just make sure documentation and those processes don’t slow you down too much – those who fail to act now risk falling far behind.
    3. Access to capital: Having more cash to spend on more compute, better operations, and distribution seems like a no-brainer as a strong moat. If before, being a scrappy startup was a plus, over time AI could use those resources to optimize things, lower costs, dominate the market by deploying these resources in a smarter way, and be creative. Throwing more money into compute is going to be a big one. If I can run a team of 100 agents, and you can only pay for one, I’ll have the advantage. The reason this isn’t further down the list is that many players have capital, so this isn’t unique.
    4. Physical resource dependent: If you are in an industry that relies on finite physical resources, that will be a moat – think land in real estate, satellite lanes, or lithium for electric cars. If you have access to those resources and others don’t, that’s an advantage.
    5. Partnerships: Having big players helping you with resources, data, and distribution will give you a leg up. Note: this is a great time to solidify these relationships while those big incumbents are looking to not lose their crown.
    6. Regulation: If you have the regulator on your side, you still have a strong moat. This has a people and bureaucracy bottleneck and takes years and years to change.
    7. Data: Having data others don’t have, that is valuable, will be a strong moat – I don’t see this changing any time soon. Bonus points for data that you have exclusive rights to, is private, and can’t be obtained after the fact. Don’t confuse this with cleaning data and processing it – that will become a commodity.
    8. Supply chain control: This can be a strong moat due to its reliance on agreements and processes.
    9. Accountability: Industries that require accountability will create a moat – will an AI bot be accountable for a mistake? Find places where accountability plays a role, like legal, insurance, governance, healthcare, and defense – those industries will have higher barriers to entry even if others can use AI.
    10. Network effects: Think about how adding more users/data/partners makes your business better and would make it harder for others to compete. Exclusivity rights play a big role here.
    • Evaluate your moats: Are you holding onto a dying moat? Identify a better one and move there.
    • Use and learn about AI: Don’t stick to your ego, try new tools, and stay ahead.
    • Systematize your business: Document and add proper processes to everything you can – it’s a quick win that sets you up for automation, improvement, and scalability down the line.
    • Move quickly
      • Smaller teams (fewer meetings, fewer things to be agreed upon)
      • Better tools (debugging, easy to use, etc.)
      • Faster response times (to new models and techniques)
      • Better teams (smarter people, good at getting things done as a team)
      • Less technical debt (that slows you down)
    • Strong decision-making: Try to operate from first principles to increase your odds when everything changes so quickly.
    • Forecasting skills: Work on your ability to predict the future – being proactive rather than reactive gives you a huge advantage – which basically improves your speed.

    5 Big thinking…

    FS Blog – a site to revisit at least once a month, always something to take away and ponder – shares some perspectives on the art/science of learning in this piece focused on the ‘Feynman Technique’ – go spend 2 minutes to read it in full and then go take a long walk in nature to think about it…it will be worth it. Think about it: a 2-minute read worth a lifetime of knowledge…

    “Complexity and jargon often mask a lack of understanding.” 

    Feynman’s learning technique comprises four key steps:

    “Feynman’s secret lay in understanding the true essence of a concept rather than merely knowing its name, leading to his remarkable achievements.” 

    “The person who says he knows what he thinks but cannot express it usually does not know what he thinks.” – Mortimer Adler 

    Select a concept and map your knowledge

    “Start with a blank page. Write everything you know about your chosen topic, using a different color pen for new information as you learn. This creates a visual map of your growing understanding.” 

    “You haven’t grasped it fully if you can’t explain it simply.” 

    “Writing things down helps in a lot of ways. First, it encourages better thinking. Second, you can organize your thoughts. Third, clear writing reveals gaps in understanding; it’s hard to ignore mistakes when you see them so clearly on the page. Fourth, writing will allow you to review and refine your work.”

    “Anyone can make a subject complicated but only someone who understands can make it simple.” 

    “When you find weak spots, return to the source material. Study those sections until you can explain them simply. When you realize your understanding has improved and the section could be improved, re-write it.” 

    “Test your understanding by teaching someone else.”

    6 The Hierarchy of Thinking Styles…

    “One of the clearest signs of learning is rethinking your assumptions and revising your opinions.” – Adam Grant

    Have a Great weekend when you get to that stage,

    Sune Hojgaard Sorensen