A Report on Multiprocessors and Multi-computers, in Simplest Language


1.2.3 Non-Uniform Memory Access multiprocessing (NUMA)

NUMA is designed to improve scalability. Instead of a single bus connecting all the processors and other resources, the processors and memory modules are divided into partitions, and each partition is called a node. Each node contains processors and memory modules, and all the nodes are connected by a high-speed interconnect network. The memory in a node is local to that node's processors and non-local (remote) to the processors of other nodes; a processor can access its own local memory faster than non-local memory.
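The local-versus-remote cost difference can be illustrated with a toy model. The latency numbers and two-node layout below are hypothetical, chosen only to show the shape of the idea, not measured from any real machine.

```python
# Toy model of NUMA access costs (illustrative numbers, not measurements).
# Assumes a hypothetical two-node machine where remote access must cross
# the interconnect and is therefore slower than local access.

LOCAL_LATENCY_NS = 100    # hypothetical latency to a node's own memory
REMOTE_LATENCY_NS = 300   # hypothetical latency across the interconnect

def access_latency(cpu_node: int, memory_node: int) -> int:
    """Modelled latency for a CPU on cpu_node reading memory on memory_node."""
    return LOCAL_LATENCY_NS if cpu_node == memory_node else REMOTE_LATENCY_NS

print(access_latency(0, 0))  # local access  -> 100
print(access_latency(0, 1))  # remote access -> 300
```

This is why NUMA-aware operating systems try to place a process's memory on the same node as the processor running it.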

By the late 1970s, processor speeds had begun to outpace memory speeds, so processors increasingly had to stall while waiting for memory accesses to complete. The answer was more high-speed cache memory, but the dramatic growth in the size of operating systems and the applications run on them has generally overwhelmed these cache improvements. Multiprocessor systems make the problem considerably worse: when two processors need to access memory at the same time, one of them has to wait. NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to access the same memory.

1.3 Types of Multi-Processor based on Processor coupling

  • Tightly-coupled
  • Loosely-coupled

1.3.1 Tightly coupled: multiple CPUs are connected at the bus level. These may be SMP systems sharing a common memory, or NUMA systems with both local and shared memory. For example, Intel Xeon and AMD Opteron processors each have on-board caches; the Xeon processors access shared system RAM through a common pipe, while the Opteron processors use independent pathways.

1.3.2 Loosely coupled: multiprocessor systems built from multiple standalone single- or dual-processor computers interconnected via a high-speed communication system such as high-speed Ethernet. These are often called clusters and are explained later.
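In a loosely coupled system, the nodes cooperate only by passing messages over the network. The sketch below simulates two such nodes on one machine using localhost TCP sockets; the message contents and the single-request exchange are illustrative assumptions, not a real cluster protocol.

```python
# Sketch: two "loosely coupled" nodes exchanging a message over TCP,
# simulated here on one machine with localhost sockets.
import socket
import threading

def node_b(server_sock):
    # Node B accepts one connection, reads a task, and acknowledges it.
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ack:" + data)

# Node B listens; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=node_b, args=(server,))
t.start()

# Node A connects over the "interconnect" and sends a unit of work.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"task-1")
    reply = client.recv(1024)
t.join()
server.close()
print(reply.decode())  # ack:task-1
```

Unlike a tightly coupled system, neither node can read the other's memory directly; everything goes through explicit messages like this.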

1.4 Advantages

Multiprocessor machines are not necessarily faster than multi-core or single-core machines: the speedup appears only when the operating system and software support multiple processors. Some applications use only a single processor, and even some high-end applications may not need two. But when running two demanding applications at once, a multiprocessor system will outperform the alternatives.

  • Cost-effective: the processors share many common resources.
  • Reliable: the reliability of the system is increased. The failure of one processor does not bring down the others, though it slows the machine. For example, if one of five processors fails, each of the remaining four takes a share of the failed processor's work; the system keeps running, but at reduced speed.
  • More speed: increasing the number of processors means more work can be done in less time, for workloads that can be split across processors.
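The "more speed" point has a well-known limit: Amdahl's law says the speedup from N processors is bounded by the fraction of the work that can actually run in parallel. A minimal calculation:

```python
# Amdahl's law: speedup from n processors when fraction p of the work
# is parallelizable. Shows why "more processors" does not mean
# proportionally "more speed".

def amdahl_speedup(p: float, n: int) -> float:
    """Overall speedup when fraction p runs on n processors in parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% parallel work, 5 processors give ~3.57x, not 5x,
# and even 100 processors give less than 10x.
print(round(amdahl_speedup(0.9, 5), 2))    # 3.57
print(round(amdahl_speedup(0.9, 100), 2))  # 9.17
```

The serial 10% dominates as n grows, which is why software support (the point made above) matters as much as the hardware.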

1.5 Limitation & Disadvantages

The performance of modern multiprocessor systems is increasingly limited by interconnect delays and the long latencies of memory subsystems. A multiprocessor can also be at a disadvantage compared with a single processor: because it runs more than one application at a time, there is more opportunity for errors should the wrong process be scheduled.

Some other limitations

  • Longer distances and slower bus speeds (compared with a multi-core chip) when shuttling data between the two CPUs.
  • Access to RAM is serialized.
  • The different programming methods generally require two separate code trees to support both uniprocessor and SMP systems with maximum performance.
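The "access to RAM is serialized" point can be pictured with a lock: only one processor can use the shared memory bus at a time, so concurrent accesses queue up. The sketch below models this with two threads and a lock standing in for the bus (an analogy, not real bus hardware).

```python
# Sketch: serialized access to shared memory, modelled with a lock.
# Two threads increment a shared counter; the lock forces the accesses
# to happen one at a time, the way a shared memory bus would.
import threading

counter = 0
bus_lock = threading.Lock()

def worker(increments: int):
    global counter
    for _ in range(increments):
        with bus_lock:          # only one "processor" on the bus at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000
```

The result is correct precisely because the accesses were serialized, but that same serialization is what caps the throughput of an SMP machine as more processors are added.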

1.6 Future Scope

With processor clock speeds having hit a barrier due to heat-dissipation problems, the need for multiprocessors is clear. Moreover, with the release of Windows 7 and other demanding software such as 3ds Max 2009 and the Adobe CS4 Master Collection, the computational power needed by desktop computers is increasing, so the demand for high-end desktop processors is increasing as well.

2.1 Multi-computer

True parallelism can only be achieved with a multi-computer. In simplest language, a multi-computer is two or more complete computers (each with its own CPU, storage, power supply, network interface, etc.) connected by a network, which may be private, public, or the Internet.

It is a computer made up of several computers. The term generally refers to an architecture in which each processor has its own memory, rather than multiple processors sharing one memory. Multi-computers can be used for multiple processor-intensive tasks at once: one computer can do 3D rendering while another runs a game or processes video.

Types of Multi-computer

  • Distributed computing
  • Cluster Computing (Grid computing)
  • Cloud computing
  • desktop clustering

2.2 Distributed computing

Distributed computing refers to a single computer program running on more than one computer at the same time. A distributed system consists of multiple autonomous computers that communicate through a computer network and interact toward a common goal. To its users, a distributed system appears as a single coherent system.

In particular, the different elements and objects of a program are run on different computers' processors. A distributed computing setup has one or more servers that hold the blueprint for the coordinated effort, the information needed to reach the member computers, and the applications that automate distribution of the program's processes when needed. These administrative servers also coordinate and combine the distributed processes, and they are where the program's outputs are generated.

As technology advances, the same system may be characterized as both parallel and distributed: parallel computing can be seen as a tightly coupled form of distributed computing, and distributed computing as a loosely coupled form of parallel computing.

2.2.1 Applications of Distributed computing:

    • When data is produced in one physical location but needed in another.
    • It may be more cost-efficient to obtain the desired level of performance from a cluster of several low-end computers than from a single high-end computer.

2.2.2 Distributed computing Projects

  • Folding@home - simulates protein folding to support research into diseases such as cancer
  • BOINC (Berkeley Open Infrastructure for Network Computing) - a platform hosting many volunteer-computing projects
  • BURP - aims to develop a publicly distributed system for rendering 3D animations
  • FreeHAL@home - parses and converts large open-source semantic nets for use in FreeHAL
  • AQUA@home - predicts the performance of superconducting adiabatic quantum computers
  • SETI@home - a project dedicated to finding signs of extraterrestrial life

2.3 Cluster computing

Cluster computing may be seen as a type of distributed computing that links local computers. It is the technique of linking two or more computers into a network (usually a local area network) in order to take advantage of their combined parallel processing power. Clusters are usually deployed to improve performance and/or availability over a single computer, while typically being much more cost-effective than a single computer of comparable speed or availability. Large-scale cluster computing is called grid computing. A grid may vary in size from small (confined to a network of workstations within a corporation, for example) to large (a public collaboration across many companies and networks).

Cluster computers are used in computational simulations such as weather forecasting and vehicle-crash analysis.

2.3.1 Types of cluster computing

  • High-availability clusters: designed to ensure constant access to service applications. They maintain redundant nodes that can act as backup systems in the event of failure. The most common size for an HA cluster is two nodes, the minimum required to provide redundancy.
  • High-performance clusters: exploit the parallel processing power of multiple nodes. They are most commonly used for tasks in which the nodes must communicate as they work, for instance when calculation results from one node affect future results on another.
  • Load-balancing clusters: route all work through one or more load-balancing front-end nodes, which distribute the workload efficiently among the remaining active nodes.
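The front-end node of a load-balancing cluster can use a very simple policy. The sketch below shows round-robin distribution, the most basic such policy; the node names are hypothetical, and real balancers also track node health and load.

```python
# Minimal sketch of a load-balancing front end: incoming requests are
# handed to back-end nodes in round-robin order.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, nodes):
        self._nodes = cycle(nodes)   # endless rotation over the node list

    def route(self, request):
        """Pick the next node in rotation and assign the request to it."""
        node = next(self._nodes)
        return node, request

lb = RoundRobinBalancer(["node-a", "node-b", "node-c"])
routed = [lb.route(f"req-{i}")[0] for i in range(4)]
print(routed)  # ['node-a', 'node-b', 'node-c', 'node-a']
```

Round-robin spreads work evenly when requests cost about the same; smarter policies (least-connections, weighted) handle uneven workloads.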

