
AI Puts HBM and Functional Safety in the Driver's Seat

June 13, 2025

Functional safety, as defined by the ISO 26262 standard and covered in multiple blog posts, has only recently been supported in low-power double data rate 5 (LPDDR5) automotive memory. It is reasonable to assume that, going forward, functional safety will also be supported in storage devices. Make no mistake: both memory and storage are complex devices with many different safety elements. Without safety mechanisms to detect and flag a failure in these elements, unpredictable and perhaps catastrophic results can occur.

A simple example of an otherwise undetectable error is a failed address decoder. While the host believes it is reading from or writing to a specific memory location, a failed decoder silently routes the access to the wrong location, leading to severe data corruption and unpredictable system-level behavior. A safety mechanism that detects an addressing failure and raises a failure flag allows the system to take appropriate action, ranging from disengaging the advanced driver assistance system (ADAS) to deliberately limiting the vehicle's operation. The point, again, is that the adoption of state-of-the-art technologies is being driven by the automotive industry.
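To make the failure mode concrete, here is a toy software model of one common safety mechanism: tagging each stored word with the address the host intended to write, so that decoder faults surface as tag mismatches on read-back. This is an illustrative sketch only; real automotive memories implement such checks in hardware (e.g., on-die ECC and address/command parity), and the class and method names here are invented for the example.

```python
class SafeMemory:
    """Toy memory that stores an address tag alongside each data word.

    A faulty decoder makes two host addresses alias to one physical
    cell; the stale tag exposes the fault on the next read.
    """

    def __init__(self, stuck_bit=None):
        self.cells = {}             # physical storage: phys_addr -> (tag, data)
        self.stuck_bit = stuck_bit  # simulated fault: this address bit is stuck at 1

    def _decode(self, addr):
        # The (possibly faulty) internal address decoder.
        if self.stuck_bit is None:
            return addr
        return addr | (1 << self.stuck_bit)

    def write(self, addr, data):
        # Tag the cell with the address the host *intended* to use.
        self.cells[self._decode(addr)] = (addr, data)

    def read(self, addr):
        tag, data = self.cells[self._decode(addr)]
        if tag != addr:
            # Safety mechanism fires: this cell was last written under
            # a different host address, so the decoder is misrouting.
            raise RuntimeError(
                f"address fault: requested {addr:#x}, cell tagged {tag:#x}")
        return data


mem = SafeMemory(stuck_bit=4)   # address bit 4 stuck at 1
mem.write(0x00, 0xAA)
mem.write(0x10, 0xBB)           # silently lands in the same physical cell
# mem.read(0x00) would now raise: the cell's tag says it holds address 0x10
```

Without the tag, the read of `0x00` would simply return `0xBB` — silently corrupted data — which is exactly the "otherwise undetectable" case described above.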

A relative newcomer to the memory market, high bandwidth memory (HBM), is also finding its way into the automobile as multi-modal generative AI is employed to implement context-aware navigation. This class of navigation takes ADAS well beyond recognizing street signs, pedestrians, or cyclists, and beyond basics such as lane keeping or automatic emergency braking.

Context-aware navigation relies on a class of neural networks referred to as large language models (LLMs), which demand extreme levels of compute performance. Ultimately, through real-time understanding of the environment, the vehicle can make more intelligent driving decisions and exhibit behaviors that mimic those of a human driver. Examples include pulling over to the side of the road when an emergency vehicle approaches with lights and siren engaged, or cautiously entering an intersection or roadway with heavy vehicle or pedestrian traffic. When I was first learning to drive, this was called “defensive driving”: anticipating how a scenario might unfold and acting, or being prepared to act, to avoid an accident.

Multi-modal generative AI refers to the fact that, in addition to text data sets, the LLM can also accept other input data sources – most notably video and even audio. Currently, LLMs are all the rage because they can predict the next word in a sentence with a reasonable degree of accuracy – a concept we are all becoming familiar with as AI continues to expand its reach into just about every facet of our lives. (As I write this, Microsoft Word is trying to predict which words I am going to type – nothing like someone telling you how to think!)

Multi-modal generative AI, when applied to ADAS, can predict possible scenarios and act accordingly – replicating the equivalent of “defensive driving.” Equally important, an ADAS that employs generative AI communicates directly to the driver exactly which actions will be taken and the rationale behind them. This extended communication builds driver and passenger confidence in the operation of the ADAS.

With an appreciation of what multi-modal generative AI brings to the table, it should be apparent that context-aware navigation requires an extreme amount of compute performance. For the past several decades, the bottleneck for compute performance has been, and continues to be, memory bandwidth, not the performance of the CPU or the AI offload engine. This bottleneck drove the relatively recent introduction of HBM, an “in-package” memory solution. “In package” means these devices are not sold in discrete packages; instead, they are tightly integrated into a single package alongside the AI or CPU compute engine.

The latest-generation HBM3E offers more than 1.2 terabytes per second of memory bandwidth per stack, derived from a 1024-bit-wide interface operating at multi-gigabit per-pin signaling rates. Generative AI and LLMs are hot, driving insatiable demand for HBM, to the point where it has been publicly stated that all HBM capacity has sold out through 2025. Here again, the use of LLMs in the automobile is driving adoption of state-of-the-art, oversubscribed HBM.
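The bandwidth figure falls out of simple arithmetic. The sketch below assumes the 1024-bit HBM interface width and a 9.6 Gb/s per-pin signaling rate (actual pin rates vary by vendor and speed grade):

```python
# Back-of-the-envelope HBM3E per-stack bandwidth.
# Assumptions: 1024 data I/Os per stack, 9.6 Gb/s per pin (a commonly
# quoted HBM3E speed grade; real parts vary by vendor and bin).

PINS = 1024          # data I/O width of one HBM stack
GBPS_PER_PIN = 9.6   # per-pin signaling rate in gigabits per second

total_gbps = PINS * GBPS_PER_PIN   # aggregate bandwidth in Gb/s
tb_per_s = total_gbps / 8 / 1000   # bits -> bytes, giga -> tera

print(f"{tb_per_s:.2f} TB/s")      # ~1.23 TB/s per stack
```

For comparison, a single discrete LPDDR5 channel delivers on the order of tens of GB/s, which is why the wide in-package interface is what makes this bandwidth class practical.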

Not only has the automotive industry progressed to the point where it is now seen as a main driver of memory and storage technology, but these technologies have also moved from the back seat to the front seat in realizing the vehicle of today and the future.

Robert Bielby

Automotive System Architecture & Product Planning Consultant
