What does "the architecture permits caching of GPT information in a TLB" mean?

Hello everyone, I am reading chapter D9 "The Granule Protection Check Mechanism" of the A-profile architecture, and I have some questions:

  1. I_PZSYC says: "For implementations that choose to do so for area or performance reasons, the architecture permits caching of GPT information in a TLB." I can understand building a TLB-like cache to accelerate GPT lookups, but is it appropriate to put GPT information directly into the TLB itself? That seems to save a table, but doesn't it defeat the TLB's original purpose of translating VAs to PAs?
  2. What new instructions has Arm added for GPT management, other than TLBI PAALLOS and similar?
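
To make question 1 concrete, here is a toy software model (entirely hypothetical, not Arm's microarchitecture) of one way "caching GPT information in a TLB" could work: each TLB entry is simply widened to hold the granule protection information (GPI) that the GPT walk returned for that granule. The GPT itself is never stored in the TLB, and the VA-to-PA translation role is unchanged; a hit just avoids repeating the GPT walk. All names and GPI values below are illustrative assumptions, not the architectural encodings.

```python
# Toy model (hypothetical, illustrative only): a TLB entry that caches the
# result of a GPT walk (the granule's GPI) alongside the VA->PA mapping.
# Illustrative GPI values, not the architectural encodings.
GPI_NO_ACCESS, GPI_SECURE, GPI_NON_SECURE, GPI_REALM, GPI_ROOT, GPI_ANY = range(6)

class TLBEntry:
    def __init__(self, va_page, pa_page, gpi):
        self.va_page = va_page  # virtual page number
        self.pa_page = pa_page  # physical page number
        self.gpi = gpi          # cached Granule Protection Information

class TinyTLB:
    def __init__(self):
        self.entries = {}

    def fill(self, va_page, pa_page, gpi):
        # On a miss, the hardware walks the translation tables *and* the
        # GPT, then caches both results in a single entry.
        self.entries[va_page] = TLBEntry(va_page, pa_page, gpi)

    def lookup(self, va_page, requested_pas):
        e = self.entries.get(va_page)
        if e is None:
            return None  # TLB miss: walk tables and GPT, then fill
        # The granule protection check uses the cached GPI, so no GPT
        # access is needed on a hit.
        if e.gpi != GPI_ANY and e.gpi != requested_pas:
            raise PermissionError("Granule Protection Fault")
        return e.pa_page

    def invalidate_gpt_info(self):
        # Rough analogue of TLBI PAALL(OS): after the GPT in memory is
        # changed, entries holding stale cached GPT info must be discarded.
        # (A real design could invalidate only the GPT-derived state.)
        self.entries.clear()
```

In this sketch the "cost" of caching GPT information is just a few extra bits per entry, which is the area/performance trade-off the quoted rule I_PZSYC alludes to.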