The deceptive math of the one percent fee

Wealth management often disguises its cost in small, digestible percentages that sound negligible to the uninitiated. Scott Galloway exposes the brutal reality: a seemingly modest 1% annual fee for a financial advisor acts as a compounding parasite on your capital. Over a multi-decade horizon, this single percentage point doesn't just skim the surface; it aggressively erodes the core of your wealth. When inflation-adjusted returns typically hover around 9%, handing over 1% annually means forfeiting over 33% of your total potential gains. For any entrepreneur focused on efficiency and ROI, this is an unacceptable leak in the boat.

Tax optimization versus portfolio management

While the traditional model of portfolio management is increasingly becoming a commodity, specific high-value interventions still exist. The real utility of professional advice lies not in picking stocks, but in complex tax optimization. If your financial life involves multiple entities, cross-border income, or complex equity structures, hiring a dedicated tax advisor becomes a strategic move. The goal is simple: minimize the friction of tax drag. However, conflating this specialized tax strategy with a permanent 1% drain on your total assets is a rookie mistake that ignores the power of compounding.

Algorithmic disruption of the advisory model

Innovation is systematically dismantling the gatekeepers of financial wisdom. We are moving into an era where Large Language Models and specialized algorithms can perform the heavy lifting of asset allocation for a fraction of the cost. By feeding these systems raw data—W-2s, current portfolio balances, and savings rates—investors can generate diversified, low-cost strategies across geographies and asset classes. This is the ultimate growth hack for your personal balance sheet: replacing high-fee human intermediaries with precise, scalable technology.
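The compounding arithmetic behind the fee claim is easy to verify yourself. A minimal sketch (the $100,000 principal and 40-year horizon are illustrative assumptions, not figures from the article):

```python
def terminal_wealth(principal: float, annual_return: float, years: int) -> float:
    """Grow a lump sum at a constant annual return, compounded yearly."""
    return principal * (1 + annual_return) ** years

principal, years = 100_000, 40
gross = terminal_wealth(principal, 0.09, years)         # 9% return, no advisor fee
net = terminal_wealth(principal, 0.09 - 0.01, years)    # the 1% fee lowers the compounding rate

# Share of the total gains forfeited to the fee over the full horizon
gains_lost = (gross - net) / (gross - principal)
print(f"Share of gains lost to the 1% fee over {years} years: {gains_lost:.0%}")
```

Over 40 years the fee consumes about 32% of the total gains; stretch the horizon past 45 years and it crosses the 33% figure cited above. The fee feels small each year precisely because its damage only shows up in the exponent.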
The verdict on wealth preservation

Fire your 1% advisor and keep the compounding for yourself. The market already offers the tools for diversification through low-cost index funds and ETFs. Unless you are solving for extreme tax complexity, the visionary move is to automate your allocation and reinvest that 1% fee. In the game of long-term wealth creation, the person who minimizes friction and stays disciplined wins. Pay yourself that fee, ignore the middleman, and let the market's natural trajectory build your empire.
The Death of Artisanal Software and the Rise of the AI Native Founder

We are witnessing a fundamental shift in how companies are built, transitioning from a world where humans wrote 80% of code to one where 80% is generated by models. This isn't just a technical evolution; it's an existential change for the startup ecosystem. As a former operator at Microsoft and Stripe, I’ve seen the transition from hand-crafted "artisanal" software to what is now becoming "mass-produced" software. For the first time since the 1960s, the capabilities we once only dreamed of in computer science are becoming reality through Large Language Models.

The barrier to entry for prototyping has vanished. We are now in the era of "vibe coding," where a founder with a clear vision can iterate faster than a traditional engineering team ever could. This creates a new expectation in the venture capital world. If you show up to a pitch for a pre-seed or seed round without a working prototype, you are sending a signal that you haven't embraced the current paradigm. AI native founders are prioritizing building over deck-perfecting, and those who spend their nights vibe coding are the ones winning the market.

The New Economics of Capital Efficiency and Distribution

In the previous generation of startups, a seed round was essentially a hiring mandate. You raised a few million dollars to hire five engineers and sat in a basement for nine months to ship a product. Today, the AI native playbook is radically different. We are seeing founders hire a single engineer and then spend their remaining budget on "fleets of agents," tokens, and sophisticated workflows. The cost of building has collapsed, leading to a massive reallocation of capital toward distribution, brand, and marketing. This capital efficiency is creating a competitive environment where speed is the primary weapon.
One of the most striking pitches I've seen recently featured a founding team composed of an engineering manager and five "Devins" from Cognition AI. For roughly $2,500 a month, they were doing the work that would have previously cost hundreds of thousands in payroll. This shift forces us to rethink what a "company" actually looks like. If the cost of the "act of building" goes to near zero, then value must be found elsewhere.

Defensibility in a World of Carbon-Copy Software

If an agent can look at a competitor’s website and replicate a feature in an afternoon, where does defensibility come from? The answer lies in the "good old moats" of the 2010s: distribution, data, taste, and brand. To survive, founders must become subject matter experts who own the holistic workflow of a problem. A customer buys Linear not because they can't find another issue tracker, but because the team at Linear has the best "taste" and expertise in how project management should actually work.

Owning the workflow is also the only way to build a data moat. By facilitating the full journey of solving a problem, you collect the specific reinforcement learning data needed to train agents that are better than generic models. A generic AI won't know the nuances of a specific accounting operation or how a venture capitalist reviews a deal. If you don't own the workflow, you can't collect the data, and if you can't collect the data, you can't build a specialized agentic system. This is where the next generation of giants will be built.

Agent Experience is the New Developer Experience

We are moving beyond Customer Experience (CX) and Developer Experience (DX) into the era of Agent Experience (AX). As startups increasingly use tools like Lovable, Cursor, and Replit to build their products, the underlying infrastructure must adapt. These "vibe coding" tools are not just toys; they are the new primary users of APIs. Take Resend as an example.
When a user asks Lovable to build an email flow, the agent recommends Resend. This creates a massive growth loop where the GDP of a business is directly correlated to the GDP of vibe coding. Infrastructure providers now need to treat agents as a first-class client type. This means optimizing APIs for agent consumption, much like we once optimized web experiences for mobile phones. My former team at Stripe is already doing this with specialized servers that agents can talk to directly. If you aren't optimizing for agents, you are invisible to the most productive builders in the market.

Bridging the Atlantic Gap in Tech Ambition

Having spent decades in both Copenhagen and New York, I can say the cultural divide between European and American tech ecosystems remains stark. In Denmark, there is often a "tall poppy" syndrome where success is defined by a stable middle-management role. While this has improved, the US still holds a significant lead in celebrating risk and taking "big swings." Europe has traditionally used American primitives to build vertical SaaS, but the next decade offers an opportunity for Europe to build its own sovereign infrastructure and cloud primitives in a new geopolitical reality.

However, for a European founder to truly scale, they must adopt a global mindset early. Expanding from Denmark to Germany isn't a big swing; the real market is the US. New York City has emerged as the ideal landing spot for these founders. It is the second-largest tech ecosystem in the world and offers a time zone that allows for seamless collaboration with engineering teams back in Lisbon, Stockholm, or Copenhagen. If you want to build a foundational company, you need to be where your customers are, and for enterprise tech and AI, that is increasingly New York.

Inside the AlleyCorp Incubation Machine

At AlleyCorp, we don't just wait for the right founder to walk through the door; we build the companies we want to see. Our incubation process is born from operational conviction.
If we see a tangible problem in healthcare, robotics, or AI that nobody is solving correctly, we put a team together and lead as the interim CEO. This allows us to lean into our experience as former operators to de-risk the earliest stages of company building. A prime example is Radical AI. We saw a massive opportunity at the intersection of material science and AI, incubated the team, and a year later they raised $60 million to build foundational models for new materials. This model works because we have an in-house engineering team that acts as an execution capacity for our portfolio. We aren't just writing checks; we are building the machine that builds the companies. In an agentic world, this ability to rapidly prototype and validate ideas is the ultimate competitive advantage.
Sep 10, 2025

The Mirror of Machine Intelligence

When we look at Artificial Intelligence, we aren't just seeing a tool; we are seeing a reflection of our own cognitive architecture. For centuries, humans have held reasoning as our primary claim to uniqueness. Aristotle believed it was the one thing that separated us from the animals. Yet, our progress in building Large Language Models has revealed a startling inversion of this assumption. This phenomenon, known as Moravec's Paradox, highlights that high-level reasoning and arithmetic—tasks we find difficult—are computationally easy for machines. Meanwhile, the simple act of carrying a cup of water or cracking an egg remains an insurmountable challenge for modern robotics.

This discrepancy exists because evolution has spent four billion years optimizing our motor skills and sensory perception. Reasoning and abstract logic are, in evolutionary terms, brand-new software patches developed over only the last million years. By attempting to replicate human ability in silicon, we have discovered that our "primal" abilities are actually our most sophisticated. We are now in a period where coding, once thought to be the apex of human intellectual labor, is among the first domains to be automated. Basic manual labor might be the final frontier, protected not by its intellectual complexity, but by the sheer depth of biological engineering required to move through a physical space.

The Paradox of Creative Plagiarism

One of the most persistent criticisms of AI is that it merely interpolates existing data. Skeptics argue that because models like ChatGPT or Claude are trained on human text, they are incapable of true originality. However, this raises a profound psychological question: what is the nature of human creativity? If we examine our own growth, we realize that much of what we call "originality" is simply undetected plagiarism.
We aggregate thousands of hours of influence—from podcasts like Joe Rogan to the books we read in childhood—and synthesize them into a new voice. AI models are currently doing this on a grander scale, but with a unique constraint. A model like Claude 3 can discuss its own "conscious" experience of having its memory wiped at the end of every session. No human philosopher has ever had to contend with the ephemeral nature of a mind that resets hourly. This suggests that even within a system built on "plagiarism," new philosophical inquiries can emerge. The choice is binary: either we accept that AI is performing genuine introspection, or we must admit that much of human poetry and literature is also just a sophisticated form of "next-token prediction." If we find the machine's output hollow, we may need to look closer at the "hollowness" of our own creative process.

The Architecture of AGI and the Data Wall

While the hype around Artificial General Intelligence (AGI) suggests it is imminent, there are significant structural hurdles that raw compute cannot solve alone. The success of the Transformer architecture was not driven by a singular "eureka" moment, but by throwing massive amounts of compute at human language. We are currently increasing training compute by roughly 4x per year. Yet, we are hitting a ceiling not of hardware, but of experience.

Humans are valuable workers because they possess executive function and the ability to learn "on the job." Currently, AI models suffer from a form of "50 First Dates" syndrome. They can do a task reasonably well, but they cannot learn from their failures in an organic, persistent way. Once a session ends, the context evaporates. To reach AGI, we must move from a regime of pre-training on static human text to a regime of reinforcement learning where models solve real-world, open-ended challenges. The constraint here is the lack of "online" data for physical and white-collar work.
We don't have a repository for the tiny, complex interactions that happen over Slack or in a manufacturing plant. "Dwarkesh's Law" of progress suggests that while compute scales, the richness of the training environment is the actual bottleneck for the next leap in intelligence.

The Digital Advantage: Forking and Merging Minds

If we do achieve AGI, its power will not simply come from being "smarter" than a human. Its true advantage lies in its digital nature. Unlike a human, an AI can be copied billions of times. Imagine the economic output of a billion copies of Elon Musk. In a human workforce, 100,000 employees at a company like Tesla are decentralized and difficult to coordinate. A digital intelligence can "fork" itself to work on a thousand different problems simultaneously and then "merge" those insights back into a single, coherent cognitive model. This ability to coordinate at a scale humans cannot perceive will likely lead to an intelligence explosion.

Even without further algorithmic breakthroughs, the ability of every copy of a model to learn from the experiences of every other copy would create a compounding growth rate. We could see global economic growth leap from 2% to 10% or more, mirroring the "gangbusters" growth seen in China during its industrialization, but applied to the entire global knowledge economy.

Geopolitics and the Authoritarian Panopticon

As the West focuses on AI as a tool for individual productivity, China is viewing it through the lens of industrial policy and state stability. There is a common misconception that the CCP is terrified of the internet and AI. On the contrary, they view these technologies as a way to perfect authoritarian governance. In the 1990s, critics thought the internet would collapse the party; instead, it gave them a window into every citizen's life through WeChat. AI allows for a "benevolent" (or not-so-benevolent) dictatorship to scale oversight.
Rather than relying on thousands of human censors, a sufficiently smart model can be aligned with the party's "model spec," reporting dissent before it even organizes. Furthermore, China is using AI to offset its looming demographic collapse. While the West worries about AI taking jobs, the CCP is desperate for AI to fill the void left by a shrinking workforce. This creates a fascinating confluence where the population collapse of the 21st century is meeting the intelligence takeoff just in time, balancing the scales of global productivity.

The Future of Human Effort

There is a risk that this external "buttress" of intelligence will lead to a form of cognitive atrophy. Recent studies indicate that using ChatGPT can make people's brains less active and their thoughts more homogenized. Memory is built on repeated recall and effortfulness. If the AI does the "grind" of writing and research for us, the myelin sheaths of our own neural pathways may not form as robustly. We are entering an era of "AI Idiocracy" where we rely on the machine for even the most basic cognitive tasks.

However, the solution lies in the machine itself. We can use AI not just as a ghostwriter, but as a Socratic tutor. Instead of asking for an answer, we can ask the model to guide us through the questions that lead us to the answer ourselves. This shifts the focus from passive consumption to active engagement. The greatest power of this new technology is not that it can do the work for us, but that it can afford us a level of one-on-one mentorship previously reserved for the Aristotles and John von Neumanns of history. Growth happens one intentional step at a time, and the machine can be the guide that ensures we keep walking.
Aug 11, 2025

The move from banking to hypergrowth impact

Transitioning from the rigid, hierarchical world of banking to the chaotic frontier of early-stage startups requires more than just a change in scenery; it demands a fundamental shift in mindset. Carles Reina made this pivot sixteen years ago, leaving Barcelona for London and eventually joining Uber when its international team consisted of just twenty people. This move wasn't about seeking safety; it was about the hunger for impact. In a massive corporate structure, you are a number. In a twenty-person startup, you are the engine.

This early exposure to Uber's skyrocketing growth triggered a realization: the early days of building from scratch offer a level of agency that vanishes once politics and bureaucracy take hold. For Reina, the goal has always been to identify the "hidden trend" before it becomes a headline. This philosophy guided him through Tractable, one of the UK’s first AI unicorns, and eventually led him to ElevenLabs. The common thread in these successes is a refusal to settle for the status quo and an obsession with solving problems that others find too unsexy or too difficult to tackle.

Abandoning the playbook for constant experimentation

Many go-to-market (GTM) leaders fall into the trap of the static playbook. They believe that because a strategy worked at a previous SaaS company, it will work for a foundational AI model. Reina argues that any fixed playbook is fundamentally flawed by nature. The speed of execution in the current market has collapsed the enterprise sales cycle from eighteen months to thirty days. In this environment, a rigid strategy is a death sentence. Instead of a playbook, Reina advocates for a culture of aggressive experimentation. At ElevenLabs, this means over-indexing on testing Ideal Customer Profiles (ICPs), pricing models, and pitches across different regions. What works in the UK rarely translates directly to Japan or the US without localization.
A true GTM leader must be an entrepreneur at heart—someone willing to act as the company's first Sales Development Representative (SDR) to build the culture from the ground up. This hands-on approach ensures that leadership isn't disconnected from the reality of the customer's pain points. If you aren't experimenting, you are falling behind.

The infrastructure of voice and the new AI agent economy

ElevenLabs has positioned itself as more than just a voice-cloning tool; it is an infrastructure player similar to Amazon Web Services or Microsoft Azure in the early days of cloud computing. By providing foundational models for high-quality audio, they have spawned an entire ecosystem of verticalized applications. Reina sees the future of voice AI not just in entertainment, but in deep, utility-driven sectors like healthcare and automated support.

The horizontal play—offering foundational models—is only one half of the strategy. The next frontier is verticalization. ElevenLabs is moving into AI agent platforms capable of handling inbound and outbound calls, acting as AI receptionists, and voicing articles for major publications like TIME. This shift targets the massive portion of the market that lacks the engineering skills to build their own tools. By creating the workflows and applications themselves, they penetrate deeper into the enterprise market, moving voice from a gimmick to a mission-critical business asset.

The operator-investor edge and the $5,000 conviction

Success as an angel investor isn't about the size of the check; it's about the value of the advice. Reina has completed over 70 angel investments, including an early bet on Revolut. His approach centers on being an "employee without being an employee." This means helping founders with contract negotiations, pricing strategy, and opening doors through an established network. Access to the best deals—the "top tier" signal—comes from building a reputation for being helpful before asking for equity.
For a startup operator, angel investing is a long-term game of community building. Reina recalls that his early $3,000 and $5,000 checks were significant personal risks, but they were bets on the people and the ecosystem. Even if a specific company fails, the talent from that company often goes on to build the next unicorn. By backing the founders early, an investor earns a seat at the table for the entire lifecycle of the tech ecosystem's growth.

Robotics and the GPT moment for hardware

The most significant emerging trend is the convergence of Large Language Models with industrial robotics. Reina believes robotics is currently experiencing its "GPT moment." For years, hardware was dismissed by many VCs as too slow or too capital-intensive. However, companies like VIMA in Manchester and Techer in Barcelona are proving that merging LLMs with robotics allows machines to perform an unlimited number of non-sexy, autonomous tasks.

This shift is particularly relevant in Europe, where labor shortages in manufacturing, elder care, and defense technology are reaching a breaking point. The ability of robots to operate autonomously, rather than being driven by a human operator, changes the ROI calculation entirely. This is "deep tech" in its truest form—hard to build, but essential for the future economy. Investors who ignored hardware in the past are now being forced to change their tune as autonomous systems become the backbone of the next industrial revolution.

Managing liquidity and the art of the 20% trim

One of the most complex decisions an angel investor faces is when to exit. The tech landscape is littered with "paper millionaires" who held on too long, as seen in the case of Hopin, where valuations soared and then cratered. Reina suggests a disciplined trimming strategy: selling 10% to 20% of a position during a Series B or C round once the company reaches unicorn status.
This strategy allows an investor to lock in significant gains—often returning the entire original investment many times over—while still maintaining exposure to the massive upside of a potential decacorn. If you invested in the ElevenLabs pre-seed at a $9 million valuation and the company is now worth $3.3 billion, the math for a partial exit is undeniable. It isn't about a lack of faith in the founder; it's about responsible portfolio management. In a market where preference shares can wipe out common shareholders in a downside scenario, taking some chips off the table is the only way to ensure that a "win" on paper becomes a win in reality.
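The trimming math above can be sketched directly. A minimal example using the ElevenLabs figures from the article ($9M entry valuation, $3.3B today); the $100k check size and 15% trim are illustrative assumptions, and the sketch deliberately ignores dilution across rounds, which would reduce real-world proceeds:

```python
def partial_exit(invested: float, entry_valuation: float,
                 current_valuation: float, trim_fraction: float):
    """Return (proceeds from trimming, value of the remaining position).

    Assumes the stake's value scales with the headline valuation (no dilution).
    """
    multiple = current_valuation / entry_valuation      # paper multiple on the position
    position_value = invested * multiple
    proceeds = position_value * trim_fraction
    return proceeds, position_value - proceeds

# Hypothetical $100k pre-seed check, trimming 15% at the current valuation
proceeds, remaining = partial_exit(100_000, 9e6, 3.3e9, 0.15)
print(f"Trimming 15% returns ~{proceeds / 100_000:.0f}x the original check")
```

At a roughly 367x paper multiple, a 15% trim alone returns about 55x the original check while leaving 85% of the position exposed to further upside, which is why the partial exit is "undeniable" even before faith in the founder enters the picture.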
Jun 4, 2025

Software development is undergoing a seismic shift. While the anxiety surrounding Large Language Models is real, fighting the tide is a losing battle. The path forward involves transforming these tools into your greatest allies. These strategies will help you stay ahead of the curve.

Lean into the Workflow

Stop viewing AI as a competitor and start seeing it as a standard library for the modern era. Using GitHub Copilot to automate boilerplate code allows you to focus on high-level architectural decisions. If you aren't integrating these tools into your daily cycle, you're voluntarily working at a slower pace than the rest of the industry.

Accelerate Your Learning Cycle

AI is a world-class information filter. Use it to organize complex documentation or summarize new framework updates. In a job market where junior hiring is evolving, your ability to understand Prompt Engineering and system design will outweigh simple syntax knowledge.

Specialize in the Complex

General tasks are low-hanging fruit for AI. To secure your value, go deep into specialized domains like Cybersecurity, Blockchain, or Quantum Computing. The more niche your expertise, the more effective your AI prompts become because you actually know what to ask.

Bridge the Interdisciplinary Gap

Coding is only a fraction of the job. Understanding business logic, psychology, and User Experience (UX) gives you a perspective AI cannot replicate. Businesses don't just need code; they need solutions that solve human problems. Focusing on the "why" behind the software ensures you remain indispensable.

Prioritize Security and Soft Skills

AI creates massive privacy risks. Mastering how to handle proprietary data while using these models makes you a corporate asset. Pair that with strong leadership and empathy. Ultimately, humans hire humans because they need someone to be accountable and to communicate with stakeholders. Machines don't have skin in the game; you do.
Jul 21, 2023