Tejas Networks' 100,000-site 6G sensing deployment accelerates AI-native RAN race
The convergence of telecom infrastructure spending and AI workloads is creating a new battleground where 6G sensing capabilities determine network dominance.
The Incident / Core Event
Tejas Networks has deployed its AI-integrated mobile radio technology across 100,000 sites over the past two years, establishing one of the largest operational footprints for 6G sensing capabilities in the telecommunications industry. This deployment coincides with a $45 million Series A funding round for ORAN Development Corporation (ODC) to scale its AI-native radio access network platform, and Ericsson's partnership with Forschungszentrum Jülich supercomputing center to design AI solutions for 6G networks. These developments signal an industry-wide shift from theoretical 6G research to practical deployment of AI-integrated telecommunications infrastructure.
The Catalyst
Telecom operators are confronting explosive growth in AI workloads that demand ultra-low-latency processing at the network edge. Autonomous systems, industrial automation, and agentic AI applications require sub-millisecond response times that centralized cloud processing alone cannot guarantee. This imperative is forcing telecommunications vendors to rethink network architecture, recasting networks from passive connectivity conduits into active AI computing platforms that can sense, process, and act on data in real time.
Capital & Control Shifts
The financial backing reveals a clear alignment of interests: ODC's $45 million round is backed by a consortium including Nvidia, Cisco, Nokia, major telecom operators (MTN, Telecom Italia, AT&T), consultancy Booz Allen Hamilton, and investment firms Phoenix Venture Partners and Cerberus Capital Management. This consortium represents both the semiconductor industry's push for AI at the edge and telecommunications operators seeking new revenue streams beyond traditional connectivity. Tejas Networks' deployment of 6G sensing and pragmatic Open RAN technology across 100,000 sites demonstrates commercial validation of this approach, while Ericsson's supercomputer partnership indicates a shift from theoretical modeling to AI-hardware co-design for next-generation networks.
Technical Implications
The structural comparison reveals three distinct but complementary approaches to AI-native RAN development. Tejas Networks leads with field-proven deployment at scale (100,000 sites), ODC represents the venture-funded platform stage for broader commercialization, and Ericsson represents the research partnership pushing technological boundaries. Nvidia's Aerial RAN technology integration enables the real-time, low-latency processing essential for autonomous systems and industrial applications. Meanwhile, neuromorphic (neuro-inspired) computing approaches being explored by Ericsson and Forschungszentrum Jülich promise breakthroughs in energy efficiency for complex network-processing tasks. The emerging architecture shows AI workloads flowing from devices to edge RAN nodes, then to core networks and cloud infrastructure, with sensing data creating feedback loops that continuously optimize network performance.
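The feedback loop described above can be sketched in miniature. This is a hypothetical illustration, not any vendor's actual API: an edge RAN node logs sensing measurements, adapts a toy radio parameter from them, and decides whether a workload is handled locally or forwarded upstream. All class names, thresholds, and the SNR-based tuning rule are illustrative assumptions.

```python
from collections import deque

class EdgeRANNode:
    """Toy model of an AI-native edge RAN node with a sensing feedback loop.
    All behavior here is illustrative, not a real vendor implementation."""

    def __init__(self):
        self.sensing_log = deque(maxlen=100)  # recent sensing measurements (SNR, dB)
        self.beam_gain = 1.0                  # toy RAN parameter tuned by the loop

    def handle(self, latency_budget_ms: float, sensed_snr_db: float) -> str:
        self.sensing_log.append(sensed_snr_db)
        self._optimize()
        # Latency-critical work is inferred at the edge; the rest goes upstream.
        return "edge-inference" if latency_budget_ms < 10 else "forward-to-cloud"

    def _optimize(self):
        # Feedback loop: adapt the radio parameter from recent sensing data.
        avg_snr = sum(self.sensing_log) / len(self.sensing_log)
        self.beam_gain = 1.0 if avg_snr > 20 else 1.5

node = EdgeRANNode()
print(node.handle(2.0, 18.0))    # latency-critical -> handled at the edge
print(node.handle(200.0, 25.0))  # latency-tolerant -> forwarded to cloud
```

The point of the sketch is the shape of the loop, not the tuning rule: sensing data accumulates at the node and continuously feeds back into local radio configuration while routing decisions split traffic between edge and cloud.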
The Core Conflict
The fundamental tension lies between centralized cloud AI processing and distributed edge AI inference. Cloud providers (AWS, Azure, GCP) continue to promote centralized AI models that leverage their massive scale and resource efficiency. In contrast, telecommunications vendors (Nokia, Ericsson, Tejas Networks) are advocating for edge AI architectures that leverage their unique ownership of last-mile connectivity and ability to process data at the point of origin. This conflict mirrors earlier battles in computing history between centralized mainframes and distributed client-server architectures, but with far higher stakes given the real-time requirements of autonomous systems.
Structural Obsolescence
Several technological approaches face imminent obsolescence as a consequence of this shift. Standalone AI chips lacking integrated sensing and connectivity capabilities will become unsuitable for telecom use cases where environmental awareness is critical. Traditional RAN vendors clinging to legacy architectures without AI-native capabilities will lose relevance in the 6G rollout. Most significantly, network operators continuing to treat AI as purely a cloud workload will miss substantial edge computing revenue opportunities and risk disintermediation by specialized edge providers.
The New Power Dynamic
The winners in this structural shift are clear: telecommunications vendors with proven 6G sensing capabilities. These companies gain a formidable structural moat by owning both the connectivity infrastructure and the AI processing capabilities riding on it. Their advantage stems from physics: speed-of-light limits prevent cloud providers from guaranteeing sub-millisecond latency for distributed autonomous systems. The losers are pure-play cloud AI providers who, despite their algorithmic strengths, cannot overcome the fundamental latency constraints of long-distance data transmission without owning the last-mile infrastructure.
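The physics argument is easy to verify with a back-of-envelope calculation. Light in optical fiber travels at roughly c divided by the fiber's refractive index (about 1.47), or around 204 km per millisecond; the distances below are illustrative assumptions, not figures from any deployment.

```python
# Round-trip propagation delay over optical fiber, propagation only.
# Switching, queuing, and processing overhead would only add latency.
SPEED_OF_LIGHT_KM_S = 299_792.458
FIBER_INDEX = 1.47  # typical refractive index of silica fiber (assumption)
KM_PER_MS = SPEED_OF_LIGHT_KM_S / FIBER_INDEX / 1000  # ~204 km per ms

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip delay in milliseconds over a fiber path."""
    return 2 * distance_km / KM_PER_MS

for label, km in [("edge RAN node", 5),
                  ("metro data center", 100),
                  ("regional cloud region", 1000)]:
    print(f"{label:>22}: {round_trip_ms(km):.3f} ms")
```

Even before any compute time, a 1,000 km round trip costs roughly 10 ms in propagation alone, while a node 5 km away stays well under 0.1 ms, which is why sub-millisecond guarantees effectively require processing at or near the radio site.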
The Unspoken Reality
Industry discourse remains fixated on 6G as primarily a speed upgrade, overlooking that the true battleground is sensing capabilities that enable agentic AI to interact with the physical world. Current AI infrastructure investments largely miss that telecommunications networks are poised to become the primary distributed AI computing platform, not merely a conduit for cloud-bound AI workloads. This represents a fundamental misunderstanding of where value will accrue in the AI-native network era.
The Foreseeable Future
In the short term (0-6 months), telecommunications vendors will accelerate AI-native RAN deployments to capture early agentic AI use cases in industrial automation, smart infrastructure, and autonomous vehicle networks. In the mid term (6-24 months), telecom companies will begin reporting AI infrastructure revenue that surpasses income from traditional connectivity services as their networks transform into the primary AI compute fabric for enterprise and industrial applications. This shift will redefine telecommunications from a utility service to an essential AI infrastructure layer, creating new valuation paradigms and competitive dynamics that favor owners of physical network assets over pure software players.
Strategic Directives
Enterprise technology leaders must immediately audit their AI workloads for latency sensitivity, identifying specific processes requiring sub-10ms response times within the next 30 days. Organizations should then partner with telecom vendors offering proven 6G sensing capabilities for edge AI deployment within 60 days, particularly for industrial IoT and autonomous systems applications. Finally, enterprises must redesign their industrial IoT architectures within 6 months to leverage telecom edge computing rather than relying solely on private 5G networks, recognizing that the telecommunications providers owning the infrastructure will capture the majority of long-term value in the AI-native network era.
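A latency-sensitivity audit of the kind described above can start as simply as tagging each workload with its worst-case response-time budget and sorting against the sub-10 ms threshold. The sketch below is a minimal, hypothetical example; the workload names and budgets are invented for illustration.

```python
from dataclasses import dataclass

EDGE_THRESHOLD_MS = 10.0  # sub-10 ms budgets cannot ride on distant cloud regions

@dataclass
class Workload:
    name: str
    latency_budget_ms: float  # worst-case response time the process tolerates

def placement(w: Workload) -> str:
    """Classify a workload: sub-10 ms budgets need edge compute,
    everything else can remain on centralized cloud."""
    return "edge" if w.latency_budget_ms < EDGE_THRESHOLD_MS else "cloud"

# Hypothetical audit inventory for an industrial site.
audit = [
    Workload("robotic-arm control loop", 2.0),
    Workload("AGV fleet coordination", 8.0),
    Workload("batch quality analytics", 500.0),
]
for w in audit:
    print(f"{w.name}: {placement(w)}")
```

In practice the budget column would come from measured requirements rather than estimates, but even this crude split identifies which processes are candidates for telecom edge computing versus ordinary cloud hosting.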