The DeepSeek-R1 Effect and Web3-AI

The artificial intelligence (AI) world was taken by storm a few days ago with the release of DeepSeek-R1, an open-weights reasoning model that matches the performance of top foundation models while claiming to have been built using a remarkably low training budget and novel post-training techniques. The release of DeepSeek-R1 not only challenged the conventional wisdom surrounding the scaling laws of foundation models – which traditionally favor massive training budgets – but did so in the most active area of research in the field: reasoning.
The open-weights (as opposed to open-source) nature of the release made the model readily accessible to the AI community, leading to a surge of clones within hours. Moreover, DeepSeek-R1 left its mark on the ongoing AI race between China and the United States, reinforcing what has been increasingly evident: Chinese models are of exceptionally high quality and fully capable of driving innovation with original ideas.
Unlike most advancements in generative AI, which seem to widen the gap between Web2 and Web3 in the realm of foundation models, the release of DeepSeek-R1 carries real implications and presents intriguing opportunities for Web3-AI. To assess these, we must first take a closer look at DeepSeek-R1’s key innovations and differentiators.
Inside DeepSeek-R1
DeepSeek-R1 was the result of introducing incremental innovations into a well-established pretraining framework for foundation models. In broad terms, DeepSeek-R1 follows the same training methodology as most high-profile foundation models. This approach consists of three key steps (sketched in code after the list):
Pretraining: The model is initially pretrained to predict the next word using massive amounts of unlabeled data.
Supervised Fine-Tuning (SFT): This step optimizes the model in two critical areas: following instructions and answering questions.
Alignment with Human Preferences: A final fine-tuning phase is conducted to align the model’s responses with human preferences.
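To make the recipe concrete, here is a deliberately simplified schematic in Python. The stages are stubs standing in for cluster-scale training runs; every function name and toy input is mine for illustration, not DeepSeek's code.

```python
# Schematic of the three-stage recipe above; stages are stubs, inputs are toys.

def pretrain(corpus: list[str]) -> dict:
    # Stage 1: next-word prediction over massive amounts of unlabeled text.
    return {"stage": "pretrained", "tokens_seen": sum(len(t.split()) for t in corpus)}

def supervised_fine_tune(model: dict, pairs: list[tuple[str, str]]) -> dict:
    # Stage 2: (instruction, answer) pairs teach instruction following and QA.
    return {**model, "stage": "sft", "sft_examples": len(pairs)}

def align_with_preferences(model: dict, prefs: list[tuple[str, str, int]]) -> dict:
    # Stage 3: preference triples (answer A, answer B, human pick) shape outputs.
    return {**model, "stage": "aligned", "pref_examples": len(prefs)}

base = pretrain(["the cat sat on the mat"])   # stands in for web-scale data
tuned = supervised_fine_tune(base, [("Summarize X.", "X is ...")])
final = align_with_preferences(tuned, [("answer A", "answer B", 0)])
print(final)  # {'stage': 'aligned', 'tokens_seen': 6, 'sft_examples': 1, 'pref_examples': 1}
```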
Most major foundation models – including those developed by OpenAI, Google, and Anthropic – adhere to this same general process. At a high level, DeepSeek-R1’s training procedure does not appear significantly different. However, rather than pretraining a base model from scratch, R1 leveraged the base model of its predecessor, DeepSeek-v3-base, which boasts an impressive 671 billion parameters.
In essence, DeepSeek-R1 is the result of applying SFT to DeepSeek-v3-base with a large-scale reasoning dataset. The real innovation lies in the construction of these reasoning datasets, which are notoriously difficult to build.
First Step: DeepSeek-R1-Zero
One of the most important aspects of DeepSeek-R1 is that the process did not produce just a single model but two. Perhaps the most significant innovation of DeepSeek-R1 was the creation of an intermediate model called R1-Zero, which is specialized in reasoning tasks. This model was trained almost entirely using reinforcement learning, with minimal reliance on labeled data.
Reinforcement learning is a technique in which a model is rewarded for generating correct answers, enabling it to generalize knowledge over time.
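A toy caricature of this loop, with an exact-match verifier standing in for the real reward signal, looks something like the following. This is not DeepSeek's actual GRPO setup; the numbers, the two-answer "policy," and the update rule are all invented for illustration.

```python
import random

# Toy RL-with-verifiable-reward loop: the "policy" is just the probability
# of emitting the correct answer; correct samples earn reward 1 and nudge
# the policy toward themselves, while zero-reward samples leave it untouched.

def reward(answer: str) -> float:
    return 1.0 if answer == "42" else 0.0   # exact-match verifier, e.g. a math answer

p_correct = 0.1   # initial probability the policy emits the right answer
lr = 0.05
for _ in range(500):
    answer = "42" if random.random() < p_correct else "7"
    if reward(answer) > 0:
        # positive reward reinforces the sampled behavior
        p_correct += lr * (1.0 - p_correct)

print(f"P(correct answer) after training: {p_correct:.2f}")  # drifts toward 1.0
```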
R1-Zero is quite impressive, as it was able to match OpenAI’s o1 in reasoning tasks. However, the model struggled with more general tasks such as question-answering and readability. That said, the purpose of R1-Zero was never to create a generalist model but rather to demonstrate that it is possible to achieve state-of-the-art reasoning capabilities using reinforcement learning alone – even if the model does not perform well in other areas.
Second Step: DeepSeek-R1
DeepSeek-R1 was designed to be a general-purpose model that excels at reasoning, meaning it needed to outperform R1-Zero. To achieve this, DeepSeek started once again with its v3 model, but this time, it fine-tuned it on a small reasoning dataset.
As mentioned earlier, reasoning datasets are difficult to produce. This is where R1-Zero played a crucial role. The intermediate model was used to generate a synthetic reasoning dataset, which was then used to fine-tune DeepSeek v3. This process resulted in another intermediate reasoning model, which was subsequently put through an extensive reinforcement learning phase using a dataset of 600,000 samples, also generated by R1-Zero. The final outcome of this process was DeepSeek-R1.
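A hedged sketch of that distillation loop might look like the code below: a stub stands in for R1-Zero as the teacher, simple arithmetic problems allow exact verification, and only verified traces survive as fine-tuning data. Nothing here reflects DeepSeek's actual tooling.

```python
import random

# Rejection-sampling sketch: a reasoning-strong teacher (R1-Zero's role)
# produces traces, a verifier keeps only correct ones, and the survivors
# become supervised fine-tuning data for the base model.

def teacher_generate(problem: str) -> tuple[str, str]:
    # Stand-in teacher that reasons and answers, right ~80% of the time.
    trace = f"<think>reason about {problem}</think>"
    answer = str(eval(problem)) if random.random() > 0.2 else "wrong"
    return trace, answer

def verified(problem: str, answer: str) -> bool:
    return answer == str(eval(problem))   # math problems allow exact checking

dataset = []
for p in ["2+2", "3*7", "10-4"]:
    for _ in range(4):                    # sample several traces per problem
        trace, ans = teacher_generate(p)
        if verified(p, ans):              # keep only verified-correct traces
            dataset.append({"prompt": p, "completion": trace + ans})

print(len(dataset), "verified reasoning examples ready for fine-tuning")
```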
While I have omitted several technical details of the R1 training process, here are the two main takeaways:
R1-Zero demonstrated that it is possible to develop sophisticated reasoning capabilities using basic reinforcement learning. Although R1-Zero was not a strong generalist model, it successfully generated the reasoning data necessary for R1.
R1 expanded the traditional pretraining pipeline used by most foundation models by incorporating R1-Zero into the process. Additionally, it leveraged a significant amount of synthetic reasoning data generated by R1-Zero.
As a result, DeepSeek-R1 emerged as a model that matched the reasoning capabilities of OpenAI’s o1 while being built using a simpler and likely significantly cheaper training process.
Everyone agrees that R1 marks an important milestone in the history of generative AI, one that is likely to reshape the way foundation models are developed. When it comes to Web3, it will be interesting to explore how R1 influences the evolving landscape of Web3-AI.
DeepSeek-R1 and Web3-AI
Until now, Web3 has struggled to establish compelling use cases that clearly add value to the creation and utilization of foundation models. To some extent, the traditional workflow for pretraining foundation models appears to be the antithesis of Web3 architectures. However, despite being in its early stages, the release of DeepSeek-R1 has highlighted several opportunities that could naturally align with Web3-AI architectures.
1) Reinforcement Learning Fine-Tuning Networks
R1-Zero demonstrated that it is possible to develop reasoning models using pure reinforcement learning. From a computational standpoint, reinforcement learning is highly parallelizable, making it well-suited for decentralized networks. Imagine a Web3 network where nodes are compensated for fine-tuning a model on reinforcement learning tasks, each applying different strategies. This approach is far more feasible than other pretraining paradigms that require complex GPU topologies and centralized infrastructure.
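As a purely speculative illustration of that idea, a coordinator for such a network might dispatch RL fine-tuning jobs to nodes trying different strategies and split rewards in proportion to verified improvement. The payout rule and every name below are invented.

```python
import random

# Speculative coordinator sketch: nodes run RL jobs with different strategy
# seeds, report a measured reward gain, and share a fixed budget pro rata.

def run_rl_job(strategy_seed: int) -> float:
    random.seed(strategy_seed)
    return random.uniform(0.0, 1.0)   # stand-in for a verified reward gain

nodes = {f"node-{i}": i for i in range(5)}
results = {node: run_rl_job(seed) for node, seed in nodes.items()}

budget = 100.0                         # tokens to distribute this round
total = sum(results.values())
payouts = {node: budget * gain / total for node, gain in results.items()}
for node, pay in sorted(payouts.items()):
    print(f"{node}: reward gain {results[node]:.2f} -> payout {pay:.1f} tokens")
```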
2) Synthetic Reasoning Dataset Generation
Another key contribution of DeepSeek-R1 was showcasing the importance of synthetically generated reasoning datasets for cognitive tasks. This process is also well-suited for a decentralized network, where nodes execute dataset generation jobs and are compensated as these datasets are used for pretraining or fine-tuning foundation models. Since this data is synthetically generated, the entire network can be fully automated without human intervention, making it an ideal fit for Web3 architectures.
3) Decentralized Inference for Small Distilled Reasoning Models
DeepSeek-R1 is a massive model with 671 billion parameters. However, almost immediately after its release, a wave of distilled reasoning models emerged, ranging from 1.5 to 70 billion parameters. These smaller models are significantly more practical for inference in decentralized networks. For example, a 1.5B–2B distilled R1 model could be embedded in a DeFi protocol or deployed within nodes of a DePIN network. More broadly, we are likely to see the rise of cost-effective reasoning inference endpoints powered by decentralized compute networks. Reasoning is one domain where the performance gap between small and large models is narrowing, creating a unique opportunity for Web3 to efficiently leverage these distilled models in decentralized inference settings.
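As a rough illustration, serving one of the published distillations could be as simple as the sketch below, assuming the Hugging Face transformers library; a real DePIN node would wrap this in its own job and payment protocol.

```python
# Minimal inference sketch for a small distilled reasoning model.
# Assumes: pip install transformers torch; weights download on first run.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "If a pool pays 5% APR compounded daily, what is the APY? Think step by step."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))  # includes the reasoning trace
```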
4) Reasoning Data Provenance
One of the defining features of reasoning models is their ability to generate reasoning traces for a given task. DeepSeek-R1 makes these traces available as part of its inference output, reinforcing the importance of provenance and traceability for reasoning tasks. The internet today primarily operates on outputs, with little visibility into the intermediate steps that lead to those results. Web3 presents an opportunity to track and verify each reasoning step, potentially creating a “new internet of reasoning” where transparency and verifiability become the norm.
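One minimal way to make a trace verifiable is to commit its steps to a hash chain whose final digest could be anchored on-chain. The sketch below shows only the local chaining and is my illustration, not an existing protocol.

```python
import hashlib

# Commit each reasoning step to a hash chain: recomputing over the same
# steps reproduces the digest, while any edit to any step breaks it.

def chain(steps: list[str]) -> str:
    digest = b"genesis"
    for step in steps:
        digest = hashlib.sha256(digest + step.encode()).digest()
    return digest.hex()

trace = [
    "restate the problem",
    "derive intermediate result",
    "check result against constraints",
    "final answer: 42",
]
commitment = chain(trace)
print("trace commitment:", commitment)
assert chain(trace) == commitment   # verification is just recomputation
```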
Web3-AI Has a Chance in the Post-R1 Reasoning Era
The release of DeepSeek-R1 has marked a turning point in the evolution of generative AI. By combining clever innovations with established pretraining paradigms, it has challenged traditional AI workflows and opened a new era in reasoning-focused AI. Unlike many previous foundation models, DeepSeek-R1 introduces elements that bring generative AI closer to Web3.
Key aspects of R1 – synthetic reasoning datasets, more parallelizable training and the growing need for traceability – align naturally with Web3 principles. While Web3-AI has struggled to gain meaningful traction, this new post-R1 reasoning era may present the best opportunity yet for Web3 to play a more significant role in the future of AI.
What’s Next for Bitcoin and Ether as Downside Fears Ease Ahead of Fed Rate Cut?

Fears of a downside for bitcoin (BTC) and ether (ETH) have eased substantially, according to the latest options market data. However, the pace of the next upward move in these cryptocurrencies will largely hinge on the magnitude of the anticipated Fed rate cut scheduled for Sept. 17.
BTC’s seven-day call/put skew, which measures how implied volatility is distributed across calls versus puts expiring in a week, has recovered to nearly zero from the bearish 4% a week ago, according to data source Amberdata.
The 30- and 60-day option skews, though still slightly negative, have rebounded from last week’s lows, signaling a notable easing of downside fears. Ether’s options skew is exhibiting a similar pattern at the time of writing.
The skew shows the market’s directional bias, or the extent to which traders are more concerned about prices rising or falling. A positive skew suggests a bias towards calls or bullish option plays, while a negative reading indicates relatively higher demand for put options or downside protection.
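For readers unfamiliar with the metric, one common convention computes it roughly as follows; the figures are illustrative, not Amberdata's exact series.

```python
# Call/put skew as the spread between implied vols of similarly
# out-of-the-money calls and puts (illustrative numbers).

call_iv = 0.46   # 25-delta call IV, 7-day expiry
put_iv = 0.50    # 25-delta put IV, 7-day expiry

skew = (call_iv - put_iv) * 100   # in vol points; negative = put demand
print(f"7-day call/put skew: {skew:+.1f} (bearish when negative)")  # -4.0
```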
The reset in options comes as bitcoin and ether prices see a renewed upswing in the lead-up to Wednesday’s Fed rate decision, where the central bank is widely expected to cut rates and lay the groundwork for additional easing over the coming months. BTC has gained more than 4% in seven days to trade above $116,000, while ether has risen nearly 8% to $4,650, according to CoinDesk data.
What happens next largely depends on the size of the impending Fed rate cut. According to CME’s Fed funds futures, traders have priced in over 90% probability that the central bank will cut rates by 25 basis points (bps) to 4%-4.25%. But there is also a slight possibility of a jumbo 50 bps move.
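As a back-of-the-envelope illustration of how such odds are read from futures prices: the real CME FedWatch methodology adjusts for the meeting's timing within the month, and the inputs below are hypothetical.

```python
# Simplified cut-odds arithmetic from fed funds futures (illustrative inputs).

current_rate = 4.375    # midpoint of the 4.25%-4.50% target range
rate_small = 4.125      # midpoint after a 25 bps cut (4.00%-4.25%)
rate_jumbo = 3.875      # midpoint after a 50 bps cut

futures_implied = 4.115  # hypothetical post-meeting rate implied by futures
# Assuming only the 25 vs 50 bps outcomes are on the table:
p_jumbo = (rate_small - futures_implied) / (rate_small - rate_jumbo)
print(f"implied odds of 50 bps: {p_jumbo:.0%}, of 25 bps: {1 - p_jumbo:.0%}")
```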
BTC could go berserk if the Fed delivers a surprise 50 bps move.
“A surprise 50 bps rate cut would be a massive +gamma BUY signal for ETH, SOL and BTC,” Greg Magadini, director of derivatives at Amberdata, said in an email. “Gold will go absolutely nuts as well.”
Note that the Deribit-listed SOL options already exhibit a strong bullish sentiment, with calls trading at a 4-5 volatility-point premium to puts.
Magadini explained that if the decision comes in line with expectations for a 25 bps cut, then a continued calm «grind higher» for BTC looks likely. ETH, meanwhile, may take another week or so to retest all-time highs and convincingly trade above $5,000, he added.
Asia Morning Briefing: Native Markets Wins Right to Issue USDH After Validator Vote

Good Morning, Asia. Here’s what’s making news in the markets:
Welcome to Asia Morning Briefing, a daily summary of top stories during U.S. hours and an overview of market moves and analysis. For a detailed overview of U.S. markets, see CoinDesk’s Crypto Daybook Americas.
Hyperliquid’s validator community has chosen Native Markets to issue USDH, ending a weeklong contest that drew proposals from Paxos, Frax, Sky (ex-MakerDAO), Agora, and others.
Native Markets, co-founded by former Uniswap Labs president MC Lader, researcher Anish Agnihotri, and early Hyperliquid backer Max Fiege, said it will begin rolling out USDH “within days,” according to a post by Fiege on X.
According to onchain trackers, Native Markets’ proposal took approximately 70% of validators’ votes, while Paxos took 20%, and Ethena came in at 3.2%.
The staged launch starts with capped mints and redemptions, followed by a USDH/USDC spot pair before caps are lifted.
USDH is designed to challenge Circle’s USDC, which currently dominates Hyperliquid with nearly $6 billion in deposits, or about 7.5% of USDC’s total supply. USDC and other stablecoins will remain supported if they meet liquidity and HYPE staking requirements.
Most rival bidders had promised to channel stablecoin yields back to the ecosystem: Paxos via HYPE buybacks, Frax through direct user yield, and Sky with a 4.85% savings rate plus a $25 million “Genesis Star” project.
Native Markets’ pitch instead stressed credibility, trading experience, and validator alignment.
Market Movement
BTC: BTC has recently reclaimed the $115,000 level, helped by inflows into ETFs, easing U.S. inflation data, and growing expectations for interest rate cuts. Also, technical momentum is picking up, though resistance sits around $116,000, according to CoinDesk’s market insights bot.
ETH: ETH is trading above $4,600, buoyed by strong ETF inflows.
Gold: Gold continues to trade near record highs as traders eye dollar weakness on expected Fed rate cuts.
BitMEX Co-Founder Arthur Hayes Sees Money Printing Extending Crypto Cycle Well Into 2026

Arthur Hayes believes the current crypto bull market has further to run, supported by global monetary trends he sees as only in their early stages.
Speaking in a recent interview with Kyle Chassé, a longtime bitcoin and Web3 entrepreneur, the BitMEX co-founder and current Maelstrom CIO argued that governments around the world are far from finished with aggressive monetary expansion.
He pointed to U.S. politics in particular, saying that President Donald Trump’s second term has not yet fully unleashed the spending programs that could arrive from mid-2026 onward. Hayes suggested that if expectations for money printing become extreme, he may consider taking partial profits, but for now he sees investors underestimating the scale of liquidity that could flow into equities and crypto.
Hayes tied his outlook to broader geopolitical shifts, including what he described as the erosion of a unipolar world order. In his view, such periods of instability tend to push policymakers toward fiscal stimulus and central bank easing as tools to keep citizens and markets calm.
He also raised the possibility of strains within Europe — even hinting that a French default could destabilize the euro — as another factor likely to accelerate global printing presses. While he acknowledged these policies eventually risk ending badly, he argued that the blow-off top of the cycle is still ahead.
Turning to bitcoin, Hayes pushed back on concerns that the asset has stalled after reaching a record $124,000 in mid-August.
He contrasted its performance with other asset classes, noting that while U.S. stocks are higher in dollar terms, they have not fully recovered relative to gold since the 2008 financial crisis. Hayes pointed out that real estate also lags when measured against gold, and only a handful of U.S. technology giants have consistently outperformed.
When measured against bitcoin, however, he believes all traditional benchmarks appear weak.
Hayes’ message was that bitcoin’s dominance becomes even clearer once assets are viewed through the lens of currency debasement.
For those frustrated that bitcoin is not posting fresh highs every week, Hayes suggested that expectations are misplaced.
In his telling, investors from the traditional world and those in crypto actually share the same premise: governments and central banks will print money whenever growth falters. Hayes says traditional finance tends to express this view by buying bonds on leverage, while crypto investors hold bitcoin as the “faster horse.”
His conclusion is that patience is essential. Hayes argued that the real edge of holding bitcoin comes from years of compounding outperformance rather than short-term speculation.
Coupled with what he sees as an inevitable wave of money creation through the rest of the decade, he believes the present crypto cycle could stretch well into 2026, far from exhausted.