At ISSCC 2026, Samsung Foundry unveiled a new temperature sensor IP designed to address one of the major challenges in advanced nodes: better heat management without sacrificing useful chip area. The core idea is to relocate the sensor from the FEoL, where the transistors reside, to the upper interconnect layers (the BEoL), using a metallic resistor with a low temperature coefficient of resistance (TCR). In the official conference program, Samsung describes the solution as a "fully stacked RC-based temperature sensor" for its 2 nm Gate-All-Around (GAA) process technology, with a 625 μm² footprint, 0.6 V operation, and a focus on improving the balance between accuracy, power consumption, and area.
The significance of this move revolves around where sensors sit within the chip. In very advanced nodes, every bit of FEoL space is precious, competing directly with transistors, caches, and logic circuits. Positioning the sensor in the BEoL frees this critical area while still enabling thermal monitoring inside the die. In practice, this opens the possibility of adding more measurement points without significantly impacting computational area. The ISSCC abstract explicitly states that Samsung developed this design within a 2 nm GAA process, tying it directly to the company's most strategic node.
The technical background is well known across the industry. As nodes shrink, thermal density increases and leakage problems worsen. In this scenario, simply having a generic thermal sensor isn't enough: its placement, count, response time, and accuracy are all crucial. Interest in deploying sensors in the BEoL isn't new, but historically there has been a challenging trade-off between precision and conversion time. Samsung's presentation suggests this trade-off can be mitigated enough to enable serious adoption in advanced designs. This interpretation is supported by the technical description in the paper and by analysis from multiple Korean media outlets regarding its strategic relevance for foundry processes.
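To make the precision-versus-conversion-time trade-off concrete, here is a generic, simplified illustration of how a resistor-based RC readout works — not Samsung's actual circuit, which is not disclosed in detail. The resistance shifts linearly with temperature (its TCR), so the RC time constant encodes temperature; counting more clock cycles per conversion improves resolution but lengthens conversion time. All component values and the clock frequency below are arbitrary assumptions.

```python
# Simplified, generic model of an RC-delay temperature readout.
# R0, TCR, C, T0, and F_CLK are illustrative assumptions, not Samsung's values.

R0 = 10_000.0   # resistance at reference temperature T0, in ohms (assumed)
TCR = 3.0e-3    # temperature coefficient of resistance, 1/K (assumed)
C = 1.0e-12     # capacitance, in farads (assumed)
T0 = 25.0       # reference temperature, in deg C
F_CLK = 100e6   # counter clock frequency, in Hz (assumed)

def tau(temp_c: float) -> float:
    """RC time constant at a given temperature (linear TCR model)."""
    r = R0 * (1.0 + TCR * (temp_c - T0))
    return r * C

def counts(temp_c: float, cycles_per_conversion: int = 1000) -> int:
    """Digitize the delay: averaging over more RC periods raises resolution
    at the cost of a longer conversion -- the classic trade-off."""
    return round(tau(temp_c) * cycles_per_conversion * F_CLK)

def temp_from_counts(n: int, cycles_per_conversion: int = 1000) -> float:
    """Invert the linear model to recover temperature from a count value."""
    t = n / (cycles_per_conversion * F_CLK)  # measured time constant
    return T0 + (t / C - R0) / (R0 * TCR)
```

For example, a die region at 85 °C yields a distinct count from one at 25 °C, and the conversion inverts cleanly under the linear model; a real implementation must additionally calibrate out resistor process variation.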
Beyond the sensor itself, the announcement arrives at a moment when Samsung needs every competitive advantage it can find at 2 nm. In its Q4 2025 results, the company stated that in 2026 the Foundry division plans to scale up second-generation 2 nm production and continue enhancing competitiveness with integrated logic, memory, and advanced packaging solutions. This roadmap suggests that improvements in thermal monitoring and area utilization aren't mere technical details but part of a broader effort to make its advanced nodes more attractive to high-performance and AI customers.
This also aligns with Samsung’s official timeline for 2 nm. The company previously indicated that mass production of 2 nm for mobile would commence in 2025, extend to HPC in 2026, and automotive applications by 2027. Therefore, any IP capable of enhancing density, thermal stability, or energy efficiency at this node could impact not only smartphone chips but also high-power, complex processors where hotspots are even more problematic.
One of the most intriguing aspects is the potential application of this technology to Samsung’s internal products, particularly Exynos. However, caution is warranted: there is no official confirmation that this sensor will be integrated into a specific SoC. The expectation, widely discussed in Korean media, is that such a solution would be meaningful in chips where leakage control and thermal efficiency remain critical areas. Nonetheless, referring to Exynos as the final deployment target is an industry hypothesis rather than an official Samsung statement.
The key takeaway is that this innovation aims not only to measure temperature but to enable a much denser thermal map within the chip. With the sensor consuming no valuable FEoL area, deploying dozens or more measurement points across different die regions becomes feasible. This improves real-time hotspot detection and allows more accurate adjustments of voltage, frequency, or protective policies. In modern chips — especially in AI, premium mobile, and HPC — such precision can be the difference between stable performance and premature throttling. This conclusion follows from the shift to the BEoL and the design goals outlined in the paper, though Samsung does not spell it out in those terms in the conference program.
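As a hypothetical sketch of why a denser thermal map matters for throttling decisions, consider a policy that reacts to the hottest zone among many sensor readings rather than a single coarse die-average. The zone names and thresholds below are illustrative assumptions, not values from Samsung's paper.

```python
# Hypothetical multi-zone throttling policy enabled by many on-die sensors.
# Thresholds and zone names are illustrative, not from Samsung's paper.

WARN_C = 85.0    # begin stepping voltage/frequency down (assumed threshold)
TRIP_C = 100.0   # hard protective limit (assumed threshold)

def throttle_decision(zone_temps: dict) -> str:
    """With many BEoL sensors, the policy can target the hottest zone
    instead of reacting late to a smoothed die-average reading."""
    hottest_zone = max(zone_temps, key=zone_temps.get)
    t = zone_temps[hottest_zone]
    if t >= TRIP_C:
        return f"trip:{hottest_zone}"      # emergency clock/power gating
    if t >= WARN_C:
        return f"throttle:{hottest_zone}"  # step down V/f for that region
    return "nominal"
```

With a single averaged reading, a localized 92 °C GPU hotspot amid cool CPU cores could go unnoticed; per-zone readings let the policy throttle only the affected region.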
Ultimately, Samsung hasn’t just presented another sensor; at ISSCC 2026, it has showcased a way to make thermal monitoring less invasive for the chip’s useful area at a time when competition in 2 nm manufacturing is intensifying. If Samsung effectively integrates this technology into its design ecosystems and process tools, it could become a subtle yet significant competitive advantage in the next generation of advanced chips.
Frequently Asked Questions
What exactly did Samsung present at ISSCC 2026?
Samsung introduced a fully stacked RC-based temperature sensor design for its 2 nm GAA process technology, based on a low-TCR metallic resistor in the BEoL, aiming to improve area efficiency and thermal monitoring capabilities.
Why is shifting the sensor from FEoL to BEoL important?
Because FEoL is where transistors and other critical compute blocks are located. Moving the sensor to the BEoL frees up this space and allows better utilization of the internal chip area.
Has Samsung confirmed that this technology will be used in Exynos?
No. There is currently no official confirmation or announcement about integration into a specific SoC. The possibility in Exynos is based on industry and media speculation, not an official Samsung statement.
How does this fit into Samsung Foundry’s 2 nm strategy?
Samsung has stated that in 2026 it plans to increase production of second-generation 2 nm devices and further strengthen its position in advanced nodes. This IP aligns with their technical differentiation approach within that strategic framework.
via: zdnet.co.kr

