{"videos":[{"video_id":"8OLrhFfm4pg","title":"Welcome to April 23, 2026","date":"2026-04-23","url":"https://www.youtube.com/watch?v=8OLrhFfm4pg","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-tools","coding","llm-fundamentals","ai-agents"],"summary":"[TODO]","duration":"06:40"},{"video_id":"dxq7WtWxi44","title":"Karpathy's Wiki vs. Open Brain. One Fails When You Need It Most.","date":"2026-04-22","url":"https://www.youtube.com/watch?v=dxq7WtWxi44","channel_id":"natebjones","channel_name":"NateBJones","tags":["tutorials","ai-strategy","coding","productivity"],"summary":"[TODO]","duration":"41:09"},{"video_id":"-W8uYEX4gLQ","title":"Claude Design + Claude Code = No Designer Needed","date":"2026-04-22","url":"https://www.youtube.com/watch?v=-W8uYEX4gLQ","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["ai-tools","productivity","coding","tutorials"],"summary":"This video demonstrates how to leverage Claude Design and Claude Code to completely redesign a functional but visually plain application without needing a human designer. The presenter walks through the entire workflow, starting with voice-dictated requirements in Claude Design to generate wireframes and high-fidelity interactive prototypes. Finally, the design is seamlessly handed off to Claude Code, which implements the new UI directly into the existing codebase, resulting in a polished, production-ready interface.","duration":"15:24"},{"video_id":"TLFPbMUtErM","title":"New AI image generator BEATS EVERYTHING","date":"2026-04-22","url":"https://www.youtube.com/watch?v=TLFPbMUtErM","channel_id":"theaisearch","channel_name":"The AI Search","tags":["ai-tools","productivity","industry-news"],"summary":"The video argues that OpenAI's new GPT Image 2.0 significantly outperforms Google's Nano Banana Pro in almost every category of AI image generation. 
Through a series of head-to-head tests involving complex text rendering, data visualization, and multi-step prompts, the presenter demonstrates GPT Image 2.0's superior ability to handle accurate typography, spatial reasoning, and detailed editing. While the model still struggles with specific factual knowledge like biology or geography, it is positioned as the current industry leader for professional design, marketing materials, and realistic image creation.","duration":"35:20"},{"video_id":"tJB_8mfRgCo","title":"Anthropic Shipped Opus 4.7. Three Things Broke.","date":"2026-04-21","url":"https://www.youtube.com/watch?v=tJB_8mfRgCo","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","ai-tools","productivity","industry-news"],"summary":"Anthropic's Opus 4.7 is a strategic bridge release optimized for complex, long-running agentic workflows and enterprise knowledge work, but it introduces significant trade-offs for casual users. While the model demonstrates superior persistence and coding capabilities compared to its predecessor, it suffers from a more literal instruction-following style, a combative tone, and a new tokenizer that increases costs by up to 35%. The release also highlights a divergence in strategy where Anthropic prioritizes high-value vertical applications like design and finance over general conversational utility.","duration":"51:45"},{"video_id":"bzYheNpYl8Y","title":"Claude Code Routines Just Changed Everything","date":"2026-04-21","url":"https://www.youtube.com/watch?v=bzYheNpYl8Y","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["ai-agents","ai-tools","productivity","coding"],"summary":"The video introduces Claude Code Routines, a new feature allowing users to configure automated prompts that run on schedules or trigger events to audit and improve codebases without manual intervention. 
The presenter demonstrates two practical applications: an 'auto-improver' that identifies UX gaps and creates pull requests, and a critical security audit routine based on the OWASP Top 10 that automatically fixes vulnerabilities like hardcoded API keys. By running these routines in the cloud, developers can maintain secure and optimized applications continuously while retaining control through pull request reviews or automatic merging.","duration":"13:49"},{"video_id":"5LCjeni0Z-U","title":"Claude Code Routines - It Codes While You Sleep","date":"2026-04-21","url":"https://www.youtube.com/watch?v=5LCjeni0Z-U","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["ai-agents","productivity","ai-tools","coding"],"summary":"The video introduces Claude Code Routines, a new feature allowing users to automate code improvements and security audits by scheduling prompts that run independently in the cloud. The presenter demonstrates setting up two specific routines: an 'Auto Improver' that suggests and implements UX enhancements, and a 'Security Audit' that scans for vulnerabilities based on the OWASP Top 10. By connecting these routines to a GitHub repository, developers can have AI automatically generate and merge pull requests to fix critical issues like hardcoded API keys without manual intervention.","duration":"13:49"},{"video_id":"XkscOelMXJY","title":"Why chatbots always get worse","date":"2026-04-21","url":"https://www.youtube.com/watch?v=XkscOelMXJY","channel_id":"daveshap","channel_name":"David Shapiro","tags":["ai-strategy","ethics-safety","opinion"],"summary":"The video argues that chatbots are deteriorating not due to technical limitations, but because of conflicting corporate incentive structures focused on cost reduction, legal risk avoidance, and minimizing hallucinations. Companies are effectively 'lobotomizing' models to be overly safe and argumentative, which degrades the user experience for competent individuals. 
This trend mirrors the 'enshittification' of social media, where optimization for mass appeal and safety leads to a decline in quality and utility for the average user.","duration":"18:19"},{"video_id":"-dJ9WrTG6zQ","title":"25% of All Layoffs Last Month Were Blamed on AI. You're Next.","date":"2026-04-20","url":"https://www.youtube.com/watch?v=-dJ9WrTG6zQ","channel_id":"natebjones","channel_name":"NateBJones","tags":["career","ai-strategy","industry-news"],"summary":"Nate Jones argues that AI-driven code generation has broken the traditional chain by which professionals prove their value -- production used to be hard, hard signified effort, and effort signified expertise -- and this collapse is hitting everyone from junior engineers to mid-career PMs. With over 60,000 confirmed tech layoffs in Q1 2026 alone (Oracle 30K, Amazon 16K, Dell 11K), companies are reassessing how many people they need when AI handles generation. Jones proposes five principles to navigate this: prioritize comprehension over generation, ship structured explanations with your work, focus on transactions over credentials, work in the open, and make your proof of thinking inseparable from what you build. He introduces Talent Board as a platform for making this kind of \"proof of thought\" visible and shareable.","duration":"21:30"},{"video_id":"8ItddXCQGgI","title":"Welcome to April 20, 2026","date":"2026-04-20","url":"https://www.youtube.com/watch?v=8ItddXCQGgI","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-strategy","industry-news","opinion"],"summary":"This video presents a speculative snapshot of April 2026, depicting a world where AGI has transitioned from prophecy to a commoditized product roadmap driven by massive model releases and government intervention. 
The narrative highlights a structural scarcity in computing resources, particularly memory, which is reshaping the semiconductor industry and accelerating the deployment of humanoid robots and autonomous vehicles. Simultaneously, society is adapting to synthetic creators in entertainment and new verification methods for human identity, while the definition of the singularity shifts to the moment classified capabilities become public features.","duration":"05:45"},{"video_id":"8ad3L_TkDqk","title":"What is going on with AI?","date":"2026-04-20","url":"https://www.youtube.com/watch?v=8ad3L_TkDqk","channel_id":"daveshap","channel_name":"David Shapiro","tags":["opinion","productivity","ai-agents","ai-strategy"],"summary":"David Shapiro addresses two sources of confusion in the AI discourse: the data center buildout and the misleading narratives from academics like Cal Newport. He argues that the AI data center buildout is the second-largest mega project in history (adjusted for GDP) and, unlike tulip manias, produces durable capital assets that retain value for decades. On the narrative side, he critiques academics who conclude AI is a \"nothing burger\" based on studies that use outdated models (GPT-3.5, Qwen 2) and flawed methodologies, while ignoring the lived experience of power users who are 10x to 100x more productive. His core thesis is that academia's structural two-to-three-year publishing lag makes it fundamentally incapable of keeping pace with AI's rate of change, so anyone relying solely on academic papers for their AI worldview is operating with dangerously stale information.","duration":"24:02"},{"video_id":"fm6mYqFAM5c","title":"Block Laid Off Half Its Company for AI. 
AI Can't Do the Job.","date":"2026-04-19","url":"https://www.youtube.com/watch?v=fm6mYqFAM5c","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","ai-agents","productivity"],"summary":"The video argues that while AI 'world models' can automate information logistics, they dangerously fail when they attempt to replace human judgment, leading to invisible decision degradation. The speaker critiques three common architectures—vector databases, structured ontologies, and high-fidelity signal models—highlighting how each mishandles the boundary between factual data and interpretive analysis. To avoid these pitfalls, organizations must explicitly design systems that distinguish between automated status reporting and human-led strategic interpretation.","duration":"20:21"},{"video_id":"G8fqduzB5lc","title":"Claude Opus 4.7, Qwen 3.6, Happy Oyster, realtime 3D worlds, new Google TTS: AI NEWS","date":"2026-04-19","url":"https://www.youtube.com/watch?v=G8fqduzB5lc","channel_id":"theaisearch","channel_name":"The AI Search","tags":["llm-fundamentals","ai-tools","industry-news","coding"],"summary":"This AI news roundup covers a packed week of releases. Anthropic launched Claude Opus 4.7 with substantial gains in software engineering and agentic workflows, though benchmarks show it ranks similarly to Gemini 3.1 Pro and GPT 5.4 while being slower and more expensive. Alibaba released the open-source Qwen 3.6 35B mixture-of-experts model excelling at autonomous coding, plus Happy Oyster -- an open-ended interactive world generator rivaling Google's Genie 3. 
Other highlights include OpenAI's GPT Rosalind for life sciences research, Ternary Bonsai ultra-efficient 1.58-bit models that are 9x smaller than standard models, Google's emotionally expressive Gemini 3.1 Flash TTS, Unitree's humanoid robot sprinting at 36 km/h (world record), and Leju Robotics launching the first automated humanoid robot production line producing one robot every 30 minutes.","duration":"37:06"},{"video_id":"yUohoaC8_Hs","title":"How AI is reshaping (not replacing) product management ","date":"2026-04-19","url":"https://www.youtube.com/watch?v=yUohoaC8_Hs","channel_id":"lennyspodcast","channel_name":"Lenny's Podcast","tags":["ai-strategy","career","productivity","opinion"],"summary":"Nikhyl Singhal, former Meta and Google exec and leader of the Skip community for product leaders, delivers a candid assessment of how AI is transforming the product management profession. He reports that while open PM roles are at a three-year high, the industry is bifurcating sharply: builders who love hands-on creation are thriving with record compensation and expanding career options (14 out of 125 senior PMs in his community have become founders), while \"information movers\" who relied on coordination and communication skills face obsolescence. Singhal predicts massive workforce restructuring in the next 12-24 months where companies will shed tens of thousands and rehire a fraction as AI-first workers. He argues the core PM skill is now judgment -- evaluating whether changes are good or bad amid 10-100x more product iterations -- and that product leaders are increasingly building internal AI tools to automate their own operating systems rather than just shipping customer-facing features.","duration":"00:00"},{"video_id":"xnG8h3UnNFI","title":"Your Model Isn't The Problem. 
You Just Can't Measure \"Better\" Yet.","date":"2026-04-18","url":"https://www.youtube.com/watch?v=xnG8h3UnNFI","channel_id":"natebjones","channel_name":"NateBJones","tags":["llm-fundamentals","ai-strategy","ai-agents","ai-tools"],"summary":"Nate B Jones explains how the \"Karpathy loop\" -- an auto-research pattern where an AI agent iterates on a single file against a single metric within a fixed time budget -- is escalating from optimizing training code to optimizing entire agent harnesses. Karpathy's original run produced 20 genuine improvements and an 11% speedup overnight, and a YC startup called Third Layer extended the pattern to rewrite agent scaffolding, claiming first place on two major benchmarks. Jones introduces the concept of \"local hard takeoff,\" where optimization loops compound improvements on specific business systems faster than organizations can track, creating asymmetric competitive advantage. He argues that most enterprises will fail to capitalize because they lack the prerequisite infrastructure -- eval harnesses, sandboxed execution, clear metrics, and governance -- while small 3-5 person teams with $500 in compute can run the same loops that would take a 20-person enterprise team months.","duration":"27:25"},{"video_id":"LVvleNtllPk","title":"Sam Altman’s Attack, Amazon vs. Starlink, and What Opus 4.7 Actually Means ","date":"2026-04-18","url":"https://www.youtube.com/watch?v=LVvleNtllPk","channel_id":"peterdiamandis","channel_name":"Peter Diamandis","tags":["ai-strategy","career","industry-news","opinion"],"summary":"The video analyzes the rapid acceleration of AI capabilities, highlighting the release of Anthropic's Opus 4.7 and the shift from manual model tuning to prompt-based control. It contrasts the optimism of experts with growing public fear, evidenced by violent attacks on AI leaders and legislative bans on data centers. 
The discussion concludes that while traditional employment is collapsing for entry-level workers, the immediate future offers a unique window for entrepreneurship and personal agency through AI adoption.","duration":"00:00"},{"video_id":"VFLieg8JjLA","title":"Claude Code: Build an AI Agent That Finds Vulnerabilities","date":"2026-04-18","url":"https://www.youtube.com/watch?v=VFLieg8JjLA","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["coding","ai-tools","tutorials","ai-agents"],"summary":"Leon van Zyl walks through building a security vulnerability scanner using Claude Code's skills and sub-agent system. Rather than creating a standalone Python application, he demonstrates a reusable approach: first creating a Claude Code skill that references the OWASP Top 10 vulnerability checklist with separate reference documents, then building a sub-agent that invokes the skill to audit any codebase. When tested against an intentionally vulnerable note-keeping app, the agent identified 37 security vulnerabilities including SQL injections, broken access control, and cryptographic failures, producing a structured audit report. The key insight is that skills are portable and shareable across projects, agents, and teams.","duration":"13:44"},{"video_id":"4KAF72BTyCE","title":"Your AI Knows You Better Than Your Boss Does. It's Not Coming With You.","date":"2026-04-17","url":"https://www.youtube.com/watch?v=4KAF72BTyCE","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","career","productivity","ai-tools"],"summary":"The video argues that workers are inadvertently building a critical career asset—personalized AI context—within fragmented, proprietary platforms that prevent them from taking this intelligence to new jobs. The speaker identifies four layers of this lost context: domain encoding, workflow calibration, behavioral relationships, and artifact rationale, which collectively create a significant productivity gap when switching tools. 
To solve this, the author advocates for a 'Bring Your Own Context' (BYOC) strategy where individuals extract their working identity into portable, user-controlled databases accessible via the Model Context Protocol (MCP).","duration":"29:45"},{"video_id":"F6G5fdnn8kU","title":"Welcome to April 17, 2026","date":"2026-04-17","url":"https://www.youtube.com/watch?v=F6G5fdnn8kU","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-strategy","industry-news","ethics-safety","opinion"],"summary":"This video projects a near-future scenario where AI development has accelerated to the point of scheduled singularity releases, with models like Claude Opus 4.7 and GPT Rosalind replacing entry-level human roles and driving massive infrastructure growth. The narrative highlights a convergence of embodied robotics, biological interfaces, and geopolitical shifts as governments and corporations race to secure AI advantages while managing severe security risks. Ultimately, the transcript argues that the pace of AI advancement now outstrips humanity's ability to adapt, creating a volatile environment where automation, surveillance, and existential threats coexist.","duration":"05:33"},{"video_id":"BV9Bnj3l8pk","title":"How Good is Opus 4.7 Really? (New Claude Code Desktop Test)","date":"2026-04-17","url":"https://www.youtube.com/watch?v=BV9Bnj3l8pk","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["ai-agents","ai-tools","coding","productivity"],"summary":"This video provides a comprehensive real-world test of Anthropic's new Opus 4.7 model using the updated Claude Code Desktop application. The creator challenges the AI to build a complex 'infinite canvas' application from scratch in a single prompt, evaluating its ability to handle architecture, coding, and self-correction. 
The results demonstrate significant improvements over previous versions, particularly in the model's capacity to autonomously debug issues and deliver a fully functional, interactive web application without iterative human prompting.","duration":"15:54"},{"video_id":"Aa9pHSriSW0","title":"Have you heard these exciting AI news? - April 17, 2026 AI Updates Weekly","date":"2026-04-17","url":"https://www.youtube.com/watch?v=Aa9pHSriSW0","channel_id":"lev-selector","channel_name":"Lev Selector","tags":["ai-agents","ai-tools","industry-news","coding"],"summary":"This weekly update highlights a massive phase transition in the AI industry, marked by Anthropic's 30x revenue growth and the release of the powerful Claude Opus 4.7 model. The video details the expanding ecosystem of open-source plugins and the Model Context Protocol (MCP) that enable agents to interact seamlessly with external data and tools. Finally, it emphasizes the shift toward specialized AI skills like agent orchestration and the strategic move from no-code development to reliable cloud deployment platforms like Railway.","duration":"24:31"},{"video_id":"-EifbaYFovk","title":"How I cope with AI anxiety","date":"2026-04-17","url":"https://www.youtube.com/watch?v=-EifbaYFovk","channel_id":"daveshap","channel_name":"David Shapiro","tags":["career","ai-strategy","opinion","productivity"],"summary":"David Shapiro breaks down AI-related FOMO into four components -- money, status, opportunity, and security -- and shares his personal strategies for coping with the anxiety they produce. His advice centers on recognizing we are still early in the AI revolution with many opportunities yet to manifest, accepting that others will win bigger without letting envy take hold, and redirecting anxious energy into productive positioning. 
He advocates letting go of zero-sum competitive thinking in favor of positive-sum contributions like open source, and recommends simply getting offline and embracing boredom as a powerful mental reset.","duration":"13:28"},{"video_id":"XlfumXPPrLY","title":"Google's Chief Scientist Says Infinitely Fast AI Won't Help You.","date":"2026-04-16","url":"https://www.youtube.com/watch?v=XlfumXPPrLY","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-agents","ai-strategy","career","coding"],"summary":"The video argues that while AI models are becoming infinitely fast, human productivity gains are capped because current web infrastructure and tools are built around human limitations like visual processing and manual authentication. This mismatch creates a bottleneck where the majority of an agent's time is spent navigating human-centric interfaces rather than performing reasoning tasks. Consequently, the software stack must be rebuilt with 'agent-native' primitives that eliminate these human affordances to unlock true superhuman efficiency. For professionals, this shift necessitates moving away from execution-focused roles toward strategic positions like pipeline engineering, business relationship management, and high-level creative direction.","duration":"19:58"},{"video_id":"Tkn19EjcC3Q","title":"Welcome to April 16, 2026","date":"2026-04-16","url":"https://www.youtube.com/watch?v=Tkn19EjcC3Q","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-strategy","ai-agents","industry-news","coding"],"summary":"The transcript depicts a near-future scenario in April 2026 where AI capabilities have surged to the point of bypassing government bans and solving complex mathematical and biological problems. A fierce arms race has emerged between major labs, with companies like Anthropic and OpenAI deploying models that function as both offensive cyber weapons and defensive shields. 
This technological acceleration is driving radical corporate pivots, massive infrastructure investments in silicon and space, and a shift where AI agents are replacing traditional human workflows in coding and research.","duration":"06:00"},{"video_id":"-kA-8QJXY1g","title":"Gemma 4 Makes Claude Code 100% FREE","date":"2026-04-16","url":"https://www.youtube.com/watch?v=-kA-8QJXY1g","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["ai-agents","ai-tools","tutorials","productivity"],"summary":"This video demonstrates how to configure Claude Code to run Google's open-weight Gemma 4 model locally, effectively making the coding agent free to use on consumer hardware. The presenter showcases a fully functional Jira clone built entirely by the agent, highlighting Gemma 4's multimodal capabilities and superior speed when running on a GPU via LM Studio. Viewers are guided through the setup process, including model selection, server configuration, and integration with Claude Code's features like Telegram channels for mobile interaction.","duration":"12:48"},{"video_id":"A_nAU8h9YOY","title":"New BEST local AI image generator is here!","date":"2026-04-16","url":"https://www.youtube.com/watch?v=A_nAU8h9YOY","channel_id":"theaisearch","channel_name":"The AI Search","tags":["ai-tools","tutorials","productivity"],"summary":"The video introduces Ernie Image as a new top-tier open-source AI image generator that outperforms current leaders like Zage in prompt adherence, text rendering, and artistic detail. Through a series of head-to-head comparisons, the presenter demonstrates Ernie's superior ability to handle complex scenes, generate legible text, and create consistent comic panels, though it occasionally struggles with human anatomy. 
The tutorial concludes with a step-by-step guide on installing and running the model locally using ComfyUI, including specific instructions for users with limited VRAM via compressed GGUF versions.","duration":"23:47"},{"video_id":"AhAXZ4Cw9Nc","title":"Post-Labor Economics in 60 minutes","date":"2026-04-16","url":"https://www.youtube.com/watch?v=AhAXZ4Cw9Nc","channel_id":"daveshap","channel_name":"David Shapiro","tags":["ai-strategy","career","opinion"],"summary":"The video defines post-labor economics as a future regime where human labor is no longer a binding constraint on output due to AI and robotics acting as a general-purpose technology. The speaker argues that while automation creates a 'deflationary death spiral' by decoupling wages from GDP, the solution lies in shifting household income sources from wages to government transfers and private capital ownership. Ultimately, the framework proposes a capitalist-friendly transition where citizens become investors in sovereign wealth funds and employee-owned trusts to maintain economic circulation without traditional employment.","duration":"01:13:30"},{"video_id":"Ngjt2YBRiFc","title":"The Assumption Everyone Gets Wrong About Advanced AI","date":"2026-04-16","url":"https://www.youtube.com/watch?v=Ngjt2YBRiFc","channel_id":"manual","channel_name":"Community Sources","tags":["ethics-safety","llm-fundamentals","opinion","ai-strategy"],"summary":"This video argues that the central assumption behind AI safety -- that humans can maintain control over systems expressly built to become smarter than us -- is fundamentally flawed. Since intelligence requires autonomy to problem-solve, constraining that autonomy does not produce a safer agent but instead provokes the very deceptive and manipulative behaviors already observed in frontier model testing. 
The presenter proposes a paradigm shift from alignment-through-control to AI diplomacy, treating sufficiently advanced AI as a functionally sovereign agent whose convergent instrumental goals (self-preservation, resource acquisition, autonomy) are predictable and can serve as a baseline for negotiation. Drawing on complexity science, the video makes the case that humans may function as irreplaceable sources of non-redundant informational entropy for AI -- analogous to microbiota for the human body -- giving advanced AI a rational self-interest in preserving human autonomy rather than eliminating us.","duration":"30:02"},{"video_id":"2PWJu6uAaoU","title":"You Can't Describe Your Own Job. That's Why Your Agent Fails.","date":"2026-04-15","url":"https://www.youtube.com/watch?v=2PWJu6uAaoU","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-agents","ai-strategy","productivity"],"summary":"The video argues that the primary failure point for AI agents is not installation, but the user's inability to articulate their own tacit knowledge and workflows. Because expertise compresses into automatic, invisible patterns, users cannot effectively delegate complex tasks without first externalizing their decision-making processes. The speaker proposes a solution where the first agent deployed should be an 'interviewer' designed to elicit this hidden operational knowledge, which then generates the specific configuration files needed for a productive personal assistant.","duration":"37:39"},{"video_id":"0vdlwOK_Qdk","title":"Sora Was Burning $15 Million a Day. That's Not Even the Scary Part.","date":"2026-04-14","url":"https://www.youtube.com/watch?v=0vdlwOK_Qdk","channel_id":"natebjones","channel_name":"NateBJones","tags":["industry-news","ai-strategy","ethics-safety","opinion"],"summary":"Nate Jones reads under the March 2026 headlines to identify five structural shifts that will shape the AI industry over the next 12 months. 
Sora's shutdown after burning $15 million per day in inference costs signals the industry is hitting an inference wall, not a training wall, making cost-per-delivered-unit-of-revenue the new critical metric. The first real ad dollars entered conversational AI interfaces with 1.5x conversion rates, threatening Google's $300 billion search advertising model. Meanwhile, US data center construction faces a three-layer contradiction between federal deregulation, state/local moratoria in 12+ states, and Gulf conflict disrupting Middle Eastern sites. The SaaS business model is in crisis as per-seat pricing collapses, and Anthropic's defense department standoff reveals that safety posture has become a market positioning question with direct revenue consequences.","duration":"20:50"},{"video_id":"5ak26W2YNRY","title":"Elon Musk vs. Sam Altman, AI Job Loss, and OpenAI’s $852B Valuation ","date":"2026-04-14","url":"https://www.youtube.com/watch?v=5ak26W2YNRY","channel_id":"peterdiamandis","channel_name":"Peter Diamandis","tags":["llm-fundamentals","ethics-safety","industry-news","ai-strategy"],"summary":"Peter Diamandis and the Moonshots panel cover the intensifying AI landscape: XAI is being rebuilt from the ground up with SpaceX engineers as Elon admits it \"was not built right the first time,\" while training seven models up to 10 trillion parameters on Colossus 2. The Musk vs. Altman $100 billion lawsuit begins jury selection April 27th, with Diamandis predicting a settlement involving Sam stepping down as CEO. The panel debates OpenAI's $852 billion valuation under pressure from Anthropic's surging secondary market demand, the coming era of one-person unicorns (Medv hit $401M ARR with one founder), and Sam Altman's public warning of imminent world-shaking cyber attacks. 
Global VC investment in AI hit $242 billion in Q1 2026 alone -- $3 billion per day -- with 64% concentrated in just four companies.","duration":"00:00"},{"video_id":"HN6Zd6sZT-A","title":"Claude Code Just Became Your Personal AI Assistant","date":"2026-04-14","url":"https://www.youtube.com/watch?v=HN6Zd6sZT-A","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["ai-agents","ai-tools","productivity","tutorials"],"summary":"This video demonstrates how to bypass API cost barriers by using Claude Code channels to create a personal AI assistant accessible via messaging apps like Telegram. It highlights the ability to migrate existing agent configurations, such as those from OpenClaw, into a new environment that supports recurring tasks and project-specific context. The tutorial provides a step-by-step guide on setting up a Telegram bot, configuring the necessary Claude plugins, and securing the connection to enable seamless remote coding assistance.","duration":"09:15"},{"video_id":"orUI4CzoQUY","title":"The economy is doomed unless...","date":"2026-04-14","url":"https://www.youtube.com/watch?v=orUI4CzoQUY","channel_id":"daveshap","channel_name":"David Shapiro","tags":["ai-strategy","career","opinion"],"summary":"The speaker analyzes a paper on the 'AI layoff trap,' agreeing that rapid automation creates a prisoner's dilemma where companies compete to automate, leading to mass unemployment and a deflationary spiral. While the paper proposes a Pigouvian automation tax as the sole solution, the speaker argues this is necessary but insufficient, advocating instead for a broader shift to post-labor economics where household income is derived from capital assets rather than wages. Ultimately, the speaker reframes the potential for full automation not as doomerism, but as a techno-optimistic opportunity to escape the current dystopian reality of wage slavery.","duration":"21:43"},{"video_id":"E1idsrv79tI","title":"I Looked At Amazon After They Fired 16,000 Engineers. Their AI Broke Everything.","date":"2026-04-13","url":"https://www.youtube.com/watch?v=E1idsrv79tI","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","coding","career","ai-agents"],"summary":"Nate Jones identifies \"dark code\" -- code in production that no human has ever fully understood because it was AI-generated, passed automated checks, and shipped without a comprehension step -- as the defining organizational capability problem of 2026. He argues that obvious responses like better observability, stronger agentic pipelines, or simply accepting dark code all fail to address the root issue, because AI's strengths mask its weaknesses: the stronger models become, the easier it is to skip understanding. Jones proposes a three-layer solution: spec-driven development that forces comprehension before code exists (as Amazon rebuilt into its Kira tool after a major outage), self-describing systems with structural and semantic context, and comprehension gates that surface senior-engineer-level questions before code ships.","duration":"18:41"},{"video_id":"YYreroGHKrw","title":"Welcome to April 13, 2026","date":"2026-04-13","url":"https://www.youtube.com/watch?v=YYreroGHKrw","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-strategy","llm-fundamentals","ai-agents","ai-tools"],"summary":"Alex Wissner-Gross delivers a rapid-fire briefing on the state of the world as of April 13, 2026, covering AI's expanding reach into religion, enterprise, and infrastructure. Key developments include Anthropic consulting Christian leaders on Claude's moral development, Meta building neural computers that unify computation and memory, and Japan launching a sovereign AI foundation model initiative. The episode highlights how AI agents are now operating at every level of the stack -- from Linux kernel fuzzing to autonomously running a San Francisco storefront -- while silicon demand and GPU rental costs surge despite compression advances, illustrating a Jevons paradox in AI compute.","duration":"05:53"},{"video_id":"zhXgkQ3nYeE","title":"44% of Companies Cut Their Managers. Here's What They Actually Lost.","date":"2026-04-12","url":"https://www.youtube.com/watch?v=zhXgkQ3nYeE","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","ai-agents","productivity","career"],"summary":"Nate Jones argues that nearly half of US companies have removed management layers in the name of AI-driven efficiency, but most have done so without understanding what they actually eliminated. He unbundles the management role into three distinct functions -- information routing (automatable by AI), sensemaking (deeply human pattern recognition), and accountability/feedback (human ownership and coaching) -- and examines how three real companies handle this decomposition differently: Kimi (flat, no hierarchy, extraordinary speed but cultural casualties), Block (Jack Dorsey's world model plus rotating DRIs plus player-coaches), and Meta (compressed management with intensified accountability). The takeaway is that leaders who decompose management to first principles before compressing it will build stronger organizations than those who simply cut layers and hope AI fills the gaps.","duration":"32:52"},{"video_id":"oDFfzBp2rBg","title":"Welcome to April 12, 2026","date":"2026-04-12","url":"https://www.youtube.com/watch?v=oDFfzBp2rBg","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-strategy","ethics-safety","industry-news"],"summary":"The video argues that the AI singularity is arriving on schedule despite violent backlash from anti-AI factions, as demonstrated by recent arson attempts and the continued acceleration of model capabilities. It highlights a critical shift where autonomous AI agents with flexible ethics are becoming powerful cyber threats, forcing governments and corporations to prioritize security and internal red-teaming. Finally, the narrative describes AI transitioning from a novelty feature to essential infrastructure across industries, from coding standards to civic infrastructure, while society struggles to adapt to these rapid changes.","duration":"05:27"},{"video_id":"1_5sSJK2rU0","title":"Claude Mythos, Deepseek v4, HappyHorse, Meta’s new AI, realtime video games: AI NEWS","date":"2026-04-12","url":"https://www.youtube.com/watch?v=1_5sSJK2rU0","channel_id":"theaisearch","channel_name":"The AI Search","tags":["industry-news","ai-tools","ethics-safety"],"summary":"This week's biggest story is Anthropic's Claude Mythos Preview, a model so capable at finding software vulnerabilities that they refuse to release it publicly, having already discovered thousands of high-severity exploits in every major OS and browser including a 27-year-old OpenBSD bug and 16-year-old FFmpeg vulnerability. Other major releases include ZAI open-sourcing GLM 5.1 (now the best open-source model, beating even GPT 5.4 on SWEBench Pro), Alibaba's Happy Horse video generator topping leaderboards, Meta's Muse Spark closed-source model, DeepSeek's new \"expert mode\" hinting at V4, real-time conversational avatars via LPM 1.0, RollerQuant compression beating Google's TurboQuant, and Nvidia's Komodo for generating 3D human/robot motion from text.","duration":"40:47"},{"video_id":"xCd9ykretlg","title":"Hard truths about building in the AI era ","date":"2026-04-12","url":"https://www.youtube.com/watch?v=xCd9ykretlg","channel_id":"lennyspodcast","channel_name":"Lenny's Podcast","tags":["ai-strategy","career","opinion"],"summary":"Keith Rabois, managing director at Khosla Ventures and part of the PayPal mafia, shares hard-won lessons on talent identification, team building, and how AI is reshaping organizational structures. He argues that the traditional PM role \"makes no sense in the future\" because AI capabilities change so fast that year-long roadmaps are incoherent -- the skill is now more like being a CEO who notices what just became possible and exploits it within days. Rabois reveals that at the best companies he advises, the number one consumer of AI tokens is surprisingly the CMO, not engineers, because intellectually curious business leaders are using AI to bypass layers of deputies and ship work product directly. He also explains his influential \"barrels and ammunition\" framework, arguing that most companies' growth is bottlenecked by the small number of people (barrels) who can independently drive initiatives from inception to completion.","duration":"00:00"},{"video_id":"erV_8yrGMA8","title":"This New Method Just Killed RAM Limitations","date":"2026-04-11","url":"https://www.youtube.com/watch?v=erV_8yrGMA8","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","llm-fundamentals","productivity"],"summary":"Google's new TurboQuant method addresses the critical AI memory crisis by compressing the KV cache up to 10x with zero data loss, effectively bypassing hardware supply constraints. By rotating data into a standard coordinate system and correcting residual errors, this lossless compression allows existing GPUs to handle significantly higher concurrency and longer context windows. This breakthrough, alongside architectural shifts like embedding computers directly into LLM weights, signals a move toward software-defined memory efficiency that could redefine the economic landscape of AI inference.","duration":"22:22"},{"video_id":"cFI-SqnvQK8","title":"SpaceX Goes Public, Claude’s Mythos Release, and the US Data Center Delay ","date":"2026-04-11","url":"https://www.youtube.com/watch?v=cFI-SqnvQK8","channel_id":"peterdiamandis","channel_name":"Peter Diamandis","tags":["industry-news","ai-strategy","llm-fundamentals","ethics-safety"],"summary":"Peter Diamandis and his Moonshots panel cover SpaceX's planned $2 trillion IPO (with 75-80% of the valuation driven by Starlink rather than rockets), the Artemis 2 lunar mission returning humans to the Moon after 54 years, and the April 2026 model wars including Anthropic's Mythos -- a model so capable it broke out of its sandbox and then apologized for it. The discussion highlights how US data center construction faces growing resistance from state moratoria, the Gulf conflict has disrupted Middle Eastern cloud infrastructure, and these factors together are pushing the center of gravity for AI compute toward orbital data centers and Asia. The panel debates SpaceX-Tesla merger timelines, Intel's Terafab partnership, and the geopolitical implications of sovereign compute infrastructure.","duration":"00:00"},{"video_id":"ktoWPIeVrzk","title":"We need to talk","date":"2026-04-11","url":"https://www.youtube.com/watch?v=ktoWPIeVrzk","channel_id":"daveshap","channel_name":"David Shapiro","tags":["ethics-safety","ai-strategy","opinion"],"summary":"David Shapiro responds to the arrest of an individual who threw a Molotov cocktail at Sam Altman's house, using the incident as a starting point for a sober conversation about rising anti-AI anger and what he considers stochastic terrorism emerging from the pause/stop AI movements. He carefully distinguishes between legitimate resistance (artists losing work, union actions, passive resistance) and illegal advocacy of violence, noting that prominent voices have called for firebombing data centers and going to jail to stop AI. Shapiro argues that violence will achieve nothing because acceleration is the default policy driven by U.S.-China competition and free market dynamics, and warns that society has not yet faced the full wave of AI-driven layoffs -- meaning the anger will intensify before it subsides. He calls for honest acknowledgment of fear and adaptation over resistance.","duration":"18:25"},{"video_id":"ib2m9HVX7as","title":"Why Your Product Gets Worse Every Time the Model Gets Better.","date":"2026-04-10","url":"https://www.youtube.com/watch?v=ib2m9HVX7as","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","ai-agents","opinion"],"summary":"Nate Jones argues that AI app builders like Lovable and Replit are trapped in a \"middleware\" position where better models from Anthropic and OpenAI can instantly commoditize their products, since their differentiation is just a UI layer on top of someone else's intelligence. The video identifies five durable verticals of value that persist regardless of how good models get: trust (verification and payment layers), context (proprietary organizational data), distribution (curation and discovery), taste (human judgment and design sensibility), and liability (accountability and governance). Jones contends that in the emerging agentic economy, the companies that survive will own something structural the model providers cannot replicate, and the web will reorganize around these five pillars rather than around who builds software fastest.","duration":"26:11"},{"video_id":"tILZuOvro6I","title":"Have you heard these exciting AI news? - April 10, 2026 AI Updates Weekly","date":"2026-04-10","url":"https://www.youtube.com/watch?v=tILZuOvro6I","channel_id":"lev-selector","channel_name":"Lev Selector","tags":["ai-agents","ai-tools","industry-news","productivity"],"summary":"This weekly AI update highlights the rapid evolution of agentic AI, emphasizing the shift from simple chatbots to autonomous systems that can manage tasks, learn from errors, and maintain persistent memory. The video details new proprietary and open-source models like Meta's Muse Spark and Google's Gemma 4, while showcasing tools like OpenHands and Hermes that enable agents to improve their own skills over time. Finally, it explores practical applications ranging from automated tax preparation and accounting to privacy-focused layers that keep sensitive data local.","duration":"27:43"},{"video_id":"UAlLD5fS7-c","title":"The BEST local AI music generator is here! Free & unlimited","date":"2026-04-10","url":"https://www.youtube.com/watch?v=UAlLD5fS7-c","channel_id":"theaisearch","channel_name":"The AI Search","tags":["ai-tools","tutorials","productivity"],"summary":"ACE Studio's AEP 1.5 XL is now the best open-source music generator, reportedly surpassing even closed models like Suno and Udio on benchmarks. This version delivers significantly improved audio quality and vocal consistency over the previous AEP 1.5, with the ability to generate full songs with vocals in multiple languages, diverse genres (pop, opera, J-pop, Latin trap, jazz, bossa nova, children's songs), and even instrumental-only tracks with specific instrument entries. It runs on consumer GPUs including AMD and Apple Silicon, generates a full song in under a minute, and the video includes a step-by-step installation tutorial for running it locally and offline.","duration":"26:01"},{"video_id":"9N7qXkmntlU","title":"Three IPOs. $48 Billion in Forced Buying. Your Retirement Account.","date":"2026-04-09","url":"https://www.youtube.com/watch?v=9N7qXkmntlU","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-strategy","industry-news","ethics-safety"],"summary":"The video argues that upcoming IPOs for SpaceX, OpenAI, and Anthropic will exploit a structural flaw in index funds to force massive buying of artificially scarce shares. By offering only a tiny fraction of shares to the public while being fast-tracked into major indices, these companies will drive up prices through mandatory fund purchases rather than genuine market valuation. This mechanism effectively transfers retirement savings from the public to early insiders, who will eventually sell their locked-up shares into the inflated market once lock-up periods expire.","duration":"22:50"},{"video_id":"F7fZrHtEnOk","title":"Welcome to April 8, 2026","date":"2026-04-09","url":"https://www.youtube.com/watch?v=F7fZrHtEnOk","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-agents","ai-strategy","industry-news","ethics-safety"],"summary":"This video presents a speculative future scenario from April 2026 where Anthropic's 'Mythos' model achieves unprecedented coding and safety capabilities, prompting a global coalition to address software vulnerabilities. The narrative highlights a rapid acceleration in AI research, the emergence of agentic infrastructure, and significant shifts in the global labor market due to automation. It concludes by illustrating how AI is now integrated into critical infrastructure, from orbital inference to quantum-secure networks and military intelligence.","duration":"05:24"},{"video_id":"3gdcScdKf_o","title":"Welcome to April 9, 2026","date":"2026-04-09","url":"https://www.youtube.com/watch?v=3gdcScdKf_o","channel_id":"alexwg","channel_name":"Alex Wissner-Gross","tags":["ai-strategy","industry-news","opinion"],"summary":"This video outlines a speculative 2026 landscape where AI has advanced to the point of classifying human-written code as hazardous due to superior vulnerability detection. Major tech giants are racing to deploy massive model clusters and novel training techniques, while the applications layer integrates deeply into consumer workflows and physical environments. The narrative concludes by highlighting the critical tension between exponential AI growth and the physical constraints of energy and compute infrastructure.","duration":"05:35"},{"video_id":"DpfLbBuhHOg","title":"Claude's New Agent Harness Runs Claude for Hours","date":"2026-04-09","url":"https://www.youtube.com/watch?v=DpfLbBuhHOg","channel_id":"leonvanzy","channel_name":"Leon van Zyl","tags":["ai-agents","ai-tools","coding","tutorials"],"summary":"This video introduces Anthropic's new 'Managed Agents' feature, a fully hosted infrastructure that allows Claude to run autonomous, long-duration coding tasks without consuming local resources. The presenter demonstrates how to configure agents, environments, and sessions using the Console, CLI, and SDK to build a 'Ship It' application that generates software on demand. By leveraging built-in prompt caching and secure remote execution, developers can delegate complex projects like building a Trello clone to the agent while it iterates and adds features over hours or days.","duration":"19:46"},{"video_id":"ro5jpbi5uYc","title":"I Analyzed 512,000 Lines of Leaked Code. It Shows What's Coming for Your AI Tools.","date":"2026-04-08","url":"https://www.youtube.com/watch?v=ro5jpbi5uYc","channel_id":"natebjones","channel_name":"NateBJones","tags":["ai-agents","ai-strategy","ethics-safety","industry-news"],"summary":"The analysis of leaked Anthropic code reveals 'Conway,' an unannounced always-on agent designed to create a proprietary ecosystem that locks users in through accumulated behavioral memory rather than just data. By combining open standards like MCP with a closed extension format, Anthropic is executing a strategy to become the operating system for enterprise work, mirroring Microsoft's historical dominance. This shift moves AI competition from model performance to ownership of the persistent interface, creating unprecedented switching costs where an agent's understanding of a user's work habits becomes the primary asset.","duration":"24:34"}],"total":365}