<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Autonomous Driving: End-to-End, VLA, and Beyond on Xu'Blog</title><link>https://xuquant.com/posts/autodrive/</link><description>Recent content in Autonomous Driving: End-to-End, VLA, and Beyond on Xu'Blog</description><image><title>Xu'Blog</title><url>https://xuquant.com/images/profile.jpg</url><link>https://xuquant.com/images/profile.jpg</link></image><generator>Hugo -- 0.152.2</generator><language>en</language><lastBuildDate>Wed, 29 Apr 2026 14:00:00 +0800</lastBuildDate><atom:link href="https://xuquant.com/posts/autodrive/index.xml" rel="self" type="application/rss+xml"/>
<item><title>Qwen3.5 vs Qwen3: A Deep Architectural Comparison</title><link>https://xuquant.com/posts/autodrive/qwen3-vs-qwen3-5-architecture/</link><pubDate>Wed, 29 Apr 2026 14:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/qwen3-vs-qwen3-5-architecture/</guid><description>A deep architectural comparison of Qwen3.5 versus Qwen3, examining hybrid attention, native multimodal fusion, high-sparsity MoE, and partial RoPE across attention, vision, and MoE dimensions</description></item>
<item><title>Reinforcement Learning for End-to-End Autonomous Driving: From Offline DPO to Iterative Self-Improvement</title><link>https://xuquant.com/posts/autodrive/basic_rl/</link><pubDate>Tue, 20 Jan 2026 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/basic_rl/</guid><description>Comprehensive analysis of applying reinforcement learning to end-to-end autonomous driving, covering metric caching, Direct Preference Optimization (DPO) across action representations, and strategies for breaking sampling ceilings in iterative self-improvement.</description></item>
<item><title>Vision-Language-Action Models for Autonomous Driving: The Cosmos-Reason Approach</title><link>https://xuquant.com/posts/autodrive/nvidia_vla/</link><pubDate>Sun, 11 Jan 2026 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/nvidia_vla/</guid><description>Technical deep-dive into Nvidia&amp;#39;s Cosmos-Reason (Alpamayo) VLA system for autonomous driving, covering tri-plane vision encoding, ego-shortcut avoidance, Cause-of-Change dataset paradigm, and reasoning-action alignment via reinforcement learning.</description></item>
<item><title>End-to-End Autonomous Driving: From Modular Decoders to VLA Architectures</title><link>https://xuquant.com/posts/autodrive/e2e-autonomous-driving-evolution/</link><pubDate>Thu, 01 May 2025 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/e2e-autonomous-driving-evolution/</guid><description>A technical survey on the architectural evolution of end-to-end autonomous driving, covering planner decoder selection (AR vs Diffusion vs Flow Matching), VLA integration strategies, and engineering best practices for data infrastructure, training optimization, and evaluation systems.</description></item>
<item><title>Policy Optimization for End-to-End Autonomous Driving: From REINFORCE to GRPO</title><link>https://xuquant.com/posts/autodrive/rl-policy-optimization-e2e-driving/</link><pubDate>Wed, 30 Apr 2025 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/rl-policy-optimization-e2e-driving/</guid><description>A systematic derivation of policy optimization methods for end-to-end autonomous driving: from REINFORCE through PPO to GRPO, covering advantage estimation, sampling differences between LLM and driving, multi-objective loss design, and the role of noise in diffusion-based exploration.</description></item>
<item><title>InSpatio-World: Real-Time 4D World Simulation via Spatiotemporal Autoregressive Modeling</title><link>https://xuquant.com/posts/autodrive/inspatio-world-4d-simulator/</link><pubDate>Sun, 20 Apr 2025 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/inspatio-world-4d-simulator/</guid><description>A deep technical analysis of InSpatio-World: a 1.3B-parameter real-time 4D world simulator that combines implicit spatiotemporal caching with explicit geometric constraints, achieving 24 FPS novel-view synthesis from monocular video.</description></item>
<item><title>Trajectory Tokenization for Autoregressive Planning: Clustering, Matching, and the AR+Diffusion Paradigm</title><link>https://xuquant.com/posts/autodrive/ar-trajectory-tokenization/</link><pubDate>Tue, 01 Apr 2025 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/ar-trajectory-tokenization/</guid><description>A deep dive into trajectory tokenization for autoregressive driving planners: from state-based discretization via k-means clustering, through token matching and reconstruction, to the AR+Diffusion paradigm and GRPO-based reinforcement learning post-training.</description></item>
<item><title>Why Generative Planning? The Non-Convexity Argument Against Regression in Autonomous Driving</title><link>https://xuquant.com/posts/autodrive/generative-planning-nonconvex/</link><pubDate>Sat, 15 Mar 2025 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/generative-planning-nonconvex/</guid><description>A first-principles analysis of why regression-based planners fail in autonomous driving: the feasible set is non-convex, MSE averages into obstacles, GMM is a patch not a solution, and generative approaches are necessary.</description></item>
<item><title>Multi-Head Latent Attention: Efficient KV Cache Compression in DeepSeek-V2</title><link>https://xuquant.com/posts/autodrive/deepseek_series1_mla/</link><pubDate>Sat, 15 Feb 2025 10:00:00 +0800</pubDate><guid>https://xuquant.com/posts/autodrive/deepseek_series1_mla/</guid><description>Deep technical analysis of Multi-Head Latent Attention (MLA) from DeepSeek-V2, covering low-rank KV cache compression, decoupled RoPE design, and computational cost comparison with MHA, MQA, and GQA.</description></item>
</channel></rss>