261 AI Monetization Threats = Rising Competition + Open-Source Momentum + China's Rise

To understand where AI model development is headed, it helps to examine how two distinct approaches – closed-source and open-source – have evolved and diverged. In the early days of modern machine learning (2012-2018), most models were open-source, rooted in academic and collaborative traditions. But as AI systems became more powerful and commercially valuable, and as development shifted from academia to industry, a parallel movement emerged around 2019 (when GPT-2 launched with restricted weights): the development of proprietary (closed-source) models, motivated by commercial interests, competitive advantage, and safety concerns.

Closed models follow a centralized, capital-intensive arc. These models – like OpenAI's GPT-4 or Anthropic's Claude – are trained within proprietary systems on massive proprietary datasets, requiring months of compute time and millions of dollars in spending. They often deliver more capable performance and easier usability, and are thus preferred by enterprises and consumers, and – increasingly – governments. The tradeoff, however, is opacity: no access to weights, training data, or fine-tuning methods. What began as a research frontier became a gated product experience, served via APIs, licensed to enterprises, and defended by legal and commercial firewalls.

Now, the AI race is coming full circle. As LLMs mature – and competition intensifies – we are seeing a resurgence of open-source models, owing to their lower costs, growing capabilities, and broader accessibility for developers and enterprises alike. These models are freely available for anyone to use, modify, and build upon, and are thus generally preferred by early-stage startups, researchers / academics, and independent developers.
Platforms like Hugging Face have made it frictionless to download models like Meta's Llama or Mistral's Mixtral, giving startups, academics, and governments access to frontier-level AI without billion-dollar budgets. Open-source AI has become the garage lab of the modern tech era: fast, messy, global, and fiercely collaborative. And China (as of Q2:25) – based on the number of large-scale AI models* released – is leading the open-source race, with three large-scale models released in 2025: DeepSeek-R1, Alibaba Qwen-32B, and Baidu Ernie 4.5**.

The split has consequences. Open-source is fueling sovereign AI initiatives, local language models, and community-led innovation. Closed models, meanwhile, are dominating consumer market share and large enterprise adoption. We're watching two philosophies unfold in parallel – freedom vs. control, speed vs. safety, openness vs. optimization – each shaping not just how AI works, but who gets to wield it.

*Large-scale AI models = models with training compute confirmed to exceed 10^23 floating point operations.
**To be made open-source as of 6/30/25, per Baidu.
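To give the 10^23-FLOP bar some intuition, a rough back-of-envelope check can be written in a few lines. It uses the common "6 × parameters × training tokens" approximation for training compute; that approximation, and the example model sizes below, are assumptions for illustration, not figures from the report.

```python
# The report's "large-scale model" bar: training compute > 10^23 FLOPs.
LARGE_SCALE_THRESHOLD = 1e23  # floating point operations

def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute with the rough 6*N*D rule of thumb
    (an assumption; the report does not say how compute is measured)."""
    return 6 * params * tokens

def is_large_scale(params: float, tokens: float) -> bool:
    """True if the estimated training compute clears the 10^23-FLOP bar."""
    return training_flops(params, tokens) > LARGE_SCALE_THRESHOLD

# A hypothetical 70B-parameter model trained on 1T tokens clears the bar:
print(is_large_scale(70e9, 1e12))   # 6 * 70e9 * 1e12 = 4.2e23 FLOPs -> True
# A hypothetical 1B-parameter model on 100B tokens does not:
print(is_large_scale(1e9, 100e9))   # 6e20 FLOPs -> False
```

Under this approximation, the threshold roughly separates frontier-scale training runs from the far cheaper runs typical of smaller research models.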

2025 | Trends in Artificial Intelligence - Page 262