Sunday, June 23, 2019
Cache coherence Term Paper Example | Topics and Well Written Essays - 2250 words
The problem of memory cache coherence in hardware is mitigated in today's microprocessors through the implementation of various cache coherence protocols. This article reviews the literature on cache coherence, with particular attention to the cache coherence problem and the protocols, both hardware and software, that have been proposed to solve it. Most importantly, it identifies a specific problem associated with cache coherence and proposes a novel solution.

Keywords: microprocessor, latency, cache coherence, bandwidth, multiprocessor, cache coherence protocol, shared memory, multicore processor

I. Introduction

Currently, there is considerable interest in the computer architecture domain in shared-memory multiprocessors. Proposed multiprocessor designs often include a private cache for each processor in the system, which in turn gives rise to the cache coherence problem (Cheng, Carter, & Dai, 2007). In such a design, several caches are allowed to hold simultaneous copies of a given memory location, so a discipline must be in place to ensure that when the contents of that location are changed, all copies remain consistent. Consequently, some systems employ a software mechanism to prevent multiple copies from occurring: shared blocks are labeled so that they are not cached (Chang & Sohi, 2006), and shared data are prohibited or restricted from migrating between caches. Alternatively, all blocks may be cached by all processors, with a cache coherence protocol responsible for ensuring consistency. Various such protocols have been proposed, designed, and described, some intended for a shared bus and others for a general-purpose interconnection network.
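The coherence problem described above can be made concrete with a toy sketch (the addresses and values here are illustrative assumptions, not taken from the paper): with private caches and no coherence mechanism, a write by one processor leaves a stale copy visible to another.

```python
# Toy illustration of the cache coherence problem: two private caches
# hold copies of the same memory location; a write by processor 0 is
# not propagated, so processor 1 reads a stale value.

memory = {0x100: 42}      # shared main memory (address -> value)
cache_p0 = {0x100: 42}    # processor 0's private cache copy
cache_p1 = {0x100: 42}    # processor 1's private cache copy

# Processor 0 writes to its own cached copy (write-back, no coherence):
cache_p0[0x100] = 99

# Processor 1 still hits in its own cache and sees the old value:
stale = cache_p1[0x100]   # 42, not 99: the two caches are incoherent
print(stale)
```

A coherence protocol exists precisely to close this gap, either by updating or by invalidating the other copies on a write.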
There is a substantial difference between shared-bus protocols and general network protocols. First, shared-bus protocols depend on every cache controller monitoring the bus transactions of all the other processors in the system and taking appropriate action to maintain consistency. Second, each block's state is encoded in a distributed manner among the cache controllers. Because the cache controllers monitor the bus for coherence purposes, they are referred to as snooping cache controllers (Kurian et al., 2010). Recently, many studies have focused on shared-memory multiprocessors. They are common mainly because of their transparent programming model, which makes them simple to program: the address space is shared among all processors, enabling them to communicate with one another through a single address space. As noted earlier, a cache coherence problem arises when the same cache block resides in multiple caches (Stenstrom, 1990). Such a scenario does not affect reads; however, when a processor writes to a location, the resulting change must be propagated to all caches. Cache coherence, according to (Archibald & Baer, 1986), therefore refers to all caches holding consistent data in the event of a write.
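The snooping idea can be sketched with a minimal invalidation protocol using the classic Modified/Shared/Invalid (MSI) states. This is a simplified toy model under assumed semantics, not one of the specific protocols the paper surveys: every cache watches all bus transactions and invalidates or writes back its copy as needed.

```python
# Minimal sketch of a snooping write-invalidate protocol (MSI states).
# Each cache snoops every bus transaction issued by the other caches.

MODIFIED, SHARED, INVALID = "M", "S", "I"

class Bus:
    def __init__(self, memory):
        self.memory = memory        # main memory: address -> value
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def bus_read(self, requester, addr):
        for c in self.caches:       # all other controllers snoop the read
            if c is not requester:
                c.snoop_read(addr)
        return self.memory[addr]

    def bus_write(self, requester, addr):
        for c in self.caches:       # all other controllers snoop the write
            if c is not requester:
                c.snoop_write(addr)

class Cache:
    def __init__(self, bus):
        self.state = {}             # address -> MSI state (default Invalid)
        self.data = {}              # address -> cached value
        self.bus = bus
        bus.attach(self)

    def read(self, addr):
        if self.state.get(addr, INVALID) == INVALID:
            self.data[addr] = self.bus.bus_read(self, addr)
            self.state[addr] = SHARED
        return self.data[addr]

    def write(self, addr, value):
        if self.state.get(addr, INVALID) != MODIFIED:
            self.bus.bus_write(self, addr)   # invalidate other copies first
        self.state[addr] = MODIFIED
        self.data[addr] = value

    # Snooping side: react to other processors' bus transactions.
    def snoop_read(self, addr):
        if self.state.get(addr, INVALID) == MODIFIED:
            self.bus.memory[addr] = self.data[addr]  # write back dirty line
            self.state[addr] = SHARED

    def snoop_write(self, addr):
        if self.state.get(addr, INVALID) == MODIFIED:
            self.bus.memory[addr] = self.data[addr]  # write back before drop
        self.state[addr] = INVALID                   # drop our copy

bus = Bus({0x100: 42})
p0, p1 = Cache(bus), Cache(bus)
p1.read(0x100)            # P1 caches the line in Shared state
p0.write(0x100, 99)       # P0's bus write invalidates P1's copy
print(p1.read(0x100))     # P1 misses, re-fetches, and sees 99 (not stale 42)
```

The key property, matching the description above, is that reads are unaffected while a write forces every other copy out of the Shared state, so no processor can observe stale data afterward.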