Page-table lookups, Operating System

Assignment Help:

How exactly is a page table used to look up an address?

The CPU has a page table base register (PTBR) which points to the base (entry 0) of the level-0 page table. Each process has its own page table, so on a context switch the PTBR is updated along with the other context registers. The PTBR contains a physical address, not a virtual address. When the MMU receives a virtual address that it needs to translate to a physical address, it uses the PTBR to reach the level-0 page table. It then uses the level-0 index, taken from the most-significant bits (MSBs) of the virtual address, to find the appropriate table entry, which contains a pointer to the base address of the appropriate level-1 page table. From that base address, it uses the level-1 index to find the appropriate entry. In a 2-level page table, the level-1 entry is a PTE and points to the physical page itself. In a 3-level (or higher) page table, there would be additional levels of indirection before the walk reaches the PTE.
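To make the walk concrete, here is a minimal sketch in C of a software two-level walk, assuming a 32-bit virtual address split into a 10-bit level-0 index, a 10-bit level-1 index, and a 12-bit page offset (4 KiB pages). The helper read_phys_word() and the entry layout (a present bit plus a page-aligned base address) are hypothetical, not any particular architecture's format.

#include <stdint.h>

#define PAGE_SHIFT   12
#define INDEX_BITS   10
#define INDEX_MASK   ((1u << INDEX_BITS) - 1)
#define OFFSET_MASK  ((1u << PAGE_SHIFT) - 1)
#define PTE_PRESENT  0x1u

/* Read one 32-bit word from a physical address (hypothetical helper;
 * in a real kernel this would go through a mapping of physical memory). */
extern uint32_t read_phys_word(uint32_t phys_addr);

/* Translate a virtual address using the level-0 table whose physical
 * base address is in ptbr (exactly what the PTBR register holds).
 * Returns 0 when the mapping is missing, which would raise a page fault. */
uint32_t translate(uint32_t ptbr, uint32_t vaddr)
{
    uint32_t l0_index = (vaddr >> (PAGE_SHIFT + INDEX_BITS)) & INDEX_MASK;
    uint32_t l1_index = (vaddr >> PAGE_SHIFT) & INDEX_MASK;
    uint32_t offset   = vaddr & OFFSET_MASK;

    /* Step 1: the level-0 entry points at the base of a level-1 table. */
    uint32_t l0_entry = read_phys_word(ptbr + l0_index * sizeof(uint32_t));
    if (!(l0_entry & PTE_PRESENT))
        return 0;                      /* no level-1 table: page fault */

    uint32_t l1_base = l0_entry & ~OFFSET_MASK;

    /* Step 2: the level-1 entry is the PTE for the page itself. */
    uint32_t pte = read_phys_word(l1_base + l1_index * sizeof(uint32_t));
    if (!(pte & PTE_PRESENT))
        return 0;                      /* page not resident: page fault */

    uint32_t frame_base = pte & ~OFFSET_MASK;
    return frame_base | offset;        /* final physical address */
}

A 3-level table would simply repeat step 1 once more before reaching the PTE.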

This sounds pretty slow: N page-table lookups for every memory access. But is it necessarily slow? A special cache called a TLB (Translation Lookaside Buffer) caches the PTEs from recent lookups, so if a page's PTE is in the TLB, the multi-level walk is avoided and the translation costs no more than a single fast lookup.
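The effect of the TLB can be sketched as a small fully associative cache consulted before the walk. This is an illustrative software model only: the tlb[] array, the round-robin replacement, and the translate() walk from the sketch above are all assumptions, not how a hardware TLB is built.

#include <stdint.h>

#define TLB_ENTRIES 64
#define PAGE_SHIFT  12

struct tlb_entry {
    uint32_t vpn;     /* virtual page number */
    uint32_t pfn;     /* physical frame number */
    int      valid;
};

struct tlb_entry tlb[TLB_ENTRIES];
static unsigned next_victim;

/* Full multi-level walk from the previous sketch (assumed available). */
extern uint32_t translate(uint32_t ptbr, uint32_t vaddr);

/* Translate through the TLB first; fall back to the multi-level walk
 * on a miss and cache the result for next time. */
uint32_t tlb_translate(uint32_t ptbr, uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    /* Fully associative lookup: hardware checks every entry in
     * parallel; a linear scan models that here. */
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].pfn << PAGE_SHIFT) | offset;   /* TLB hit */
    }

    /* TLB miss: do the full walk, then remember the translation,
     * evicting entries in simple round-robin order. */
    uint32_t paddr = translate(ptbr, vaddr);
    if (paddr != 0) {
        struct tlb_entry *e = &tlb[next_victim];
        next_victim = (next_victim + 1) % TLB_ENTRIES;
        e->vpn = vpn;
        e->pfn = paddr >> PAGE_SHIFT;
        e->valid = 1;
    }
    return paddr;
}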

When a scheduler switches processes, it invalidates all the TLB entries (also known as a TLB shootdown). The new process then starts with a "cold cache" for its TLB, and it takes a while for the TLB to "warm up". The scheduler therefore should not switch between processes too frequently, since a "warm" TLB is critical to making memory accesses fast. This is one reason that threads are so useful: switching threads within a process does not require the TLB to be invalidated, so the new thread starts with a "warm" TLB cache right away.

So what are the drawbacks of TLBs? The main drawback is that they need to be extremely fast, fully associative caches. TLBs are therefore very expensive in terms of power consumption and chip real estate, and additional chip real estate drives up price dramatically. The TLB can account for a significant fraction of the total power consumed by a microprocessor, on the order of 10% or more. TLBs are consequently kept relatively small; typical sizes range from 8 to 2048 entries.
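Continuing the same hypothetical software model, a context switch then amounts to dropping every entry, after which the new process pays for a full multi-level walk on its first access to each page until the TLB warms up again:

#include <stdint.h>

#define TLB_ENTRIES 64

struct tlb_entry {
    uint32_t vpn;
    uint32_t pfn;
    int      valid;
};

/* The software TLB from the sketch above (hypothetical). */
extern struct tlb_entry tlb[TLB_ENTRIES];

/* Called when the scheduler switches to a different process: the old
 * process's translations no longer apply, so every entry is invalidated. */
void tlb_flush_on_context_switch(void)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        tlb[i].valid = 0;
}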


