Large-Scale Server Terminology

Bandwidth
The maximum rate at which an interconnect (such as a computer system bus or network) can propagate data once the data enters the interconnect, typically measured in megabytes per second (MB/s). Bandwidth can be increased by making the interconnect wider or by raising its frequency so that more data is transferred per unit time.
Big-Bus SMP
A type of Symmetric Multi-Processing (SMP) architecture that relies on a single bus shared among multiple processing elements (CPUs, memory, and input/output devices). Big-Bus SMP machines offer good scalability up to a few dozen processors (roughly 24 to 36).
Cache
Fast memory that holds copies of the most recently accessed data. The word is taken from the French "cacher," which means "to hide."
Cache Coherence
In multiprocessor servers, each CPU keeps a cache of its most recently used data, so multiple copies of the same data can exist simultaneously. Cache coherence is a hardware and/or software mechanism that guarantees all outstanding copies of a datum are kept identical; if any one processor updates its copy, all other copies are invalidated first.
Cache Hit
When a processor finds a needed data item in its cache.
Cache Miss
When a processor does not find a needed data item in its cache, a cache miss is said to occur, and a request to retrieve the data must be issued to the next-level cache or to main memory. A high miss rate degrades performance, because each miss stalls the processor while the data is fetched.
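The hit/miss distinction above can be sketched with a toy direct-mapped cache. This is an illustrative model only (the class and names are invented for this example, not taken from any real system): each block address maps to exactly one slot, a lookup that finds the block there is a hit, and anything else is a miss that fills the slot from the next level.

```python
# Hypothetical sketch of cache lookup, assuming a direct-mapped cache:
# every block address maps to exactly one slot (addr mod number-of-lines).

class DirectMappedCache:
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = [None] * num_lines   # each slot holds one block tag

    def access(self, block_addr):
        """Return True on a cache hit, False on a miss (filling the line)."""
        index = block_addr % self.num_lines   # the one slot this block can use
        if self.lines[index] == block_addr:
            return True                       # cache hit
        self.lines[index] = block_addr        # miss: fetch from next level
        return False

cache = DirectMappedCache(4)
print(cache.access(10))   # False: first reference misses
print(cache.access(10))   # True: the block is now cached
```

The second access hits because the first one left the block in the cache, which is exactly the payoff the glossary entry describes.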
Capacity Miss
A miss that occurs because the cache is not large enough to hold all the needed memory blocks.
ccNUMA
Cache-Coherent Non-Uniform Memory Access. A variation of the NUMA design, used by NUMA-Q, that features distributed memory tied together to form a single address space. Cache coherence is performed in hardware, not software.
Cluster
A collection of interrelated whole computers, or nodes, that are used as a single, unified computing resource. The nodes of a cluster run independent copies of the operating system and applications, but share other computing resources, such as a pool of storage.
Coherence Miss
A cache miss based on the need to keep the contents of a memory block the same when it is shared by the caches attached to more than one processor. It applies to multiprocessor systems only.
Coherency
At any one time, there is only one possible value, uniquely held or shared among CPUs, for every datum in memory.
COMA
Cache-Only Memory Architecture. A rival to ccNUMA that uses multiple levels of large caches rather than a single large memory. Data coherency is maintained by hardware.
Compulsory Miss
The first time a block of memory is referenced, it will not be in the cache. A compulsory miss occurs on the first reference of the block.
Conflict Miss
A cache miss that occurs because the portion of the cache assigned to a region of memory is not large enough to hold all of that region's blocks. It applies to certain organizations of caches only.
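The three miss types above (compulsory, capacity, conflict) can be seen in a tiny direct-mapped model. In this illustrative sketch (the numbers and layout are invented for the example), two blocks that happen to map to the same slot keep evicting each other even though most of the cache sits empty; after the first compulsory touches, every access is a conflict miss.

```python
# Illustrative sketch: a 4-line direct-mapped cache where blocks 1 and 5
# both map to slot 1 (block mod 4), so they repeatedly evict each other.

NUM_LINES = 4
lines = [None] * NUM_LINES

def access(block):
    idx = block % NUM_LINES
    hit = lines[idx] == block
    if not hit:
        lines[idx] = block        # evict whatever occupied the slot
    return hit

trace = [1, 5, 1, 5]              # alternate between two conflicting blocks
results = [access(b) for b in trace]
print(results)   # [False, False, False, False] -- every access misses,
                 # even though slots 0, 2, and 3 stay empty the whole time
```

A fully associative cache of the same size would hit on the third and fourth accesses, which is what makes these conflict misses rather than capacity misses.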
Directory-Based Cache Coherence (DBCC)
A mechanism that preserves data coherency by keeping the sharing status and location of each data block in exactly one place. That place is called a directory; it may be centralized or distributed across several directories. The SCI standard implements DBCC in a NUMA-Q machine. (Compare to snooping cache coherence.)
Failover Cluster
A cluster of computers with specialized software that automatically moves active applications from one machine to another when one node in the cluster suffers an outage.
Global Shared Memory
A term frequently used with MPPs to describe the collection of memories from each cell or node in the system. The system presents the appearance of one shared memory by using a software layer to fetch data from remote memories on other nodes.
L1 Cache
The first cache searched by the processor for data or code.
L2 Cache
The second cache in the cache hierarchy searched by the processor for data or code. It is only searched if the L1 cache fails to find the requested data.
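The L1-before-L2 search order above can be sketched directly. This is an illustrative model (the function and return strings are invented for the example): the lookup tries L1, falls back to L2, and on a full miss fetches from main memory, promoting the block back up the hierarchy so later accesses hit sooner.

```python
# Illustrative two-level cache lookup (names assumed for this sketch):
# search L1 first, then L2, then main memory, promoting on the way back.

def lookup(addr, l1, l2):
    if addr in l1:
        return "L1 hit"
    if addr in l2:
        l1.add(addr)           # promote into L1 for future accesses
        return "L2 hit"
    l2.add(addr)               # fetched from main memory into both levels
    l1.add(addr)
    return "miss"

l1, l2 = set(), set()
print(lookup(0x10, l1, l2))    # miss
print(lookup(0x10, l1, l2))    # L1 hit
```

Real hierarchies differ in detail (inclusive vs. exclusive levels, line sizes), but the search order shown here is the one the glossary entries describe.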
Latency
The length of time required to retrieve data, starting when the initial request is made and ending when the request is satisfied. It is usually more difficult and expensive to decrease the latency than it is to increase the bandwidth.
Locality
The tendency of multiple code and data accesses to stay within a given address space. There are two types of locality:
    - Spatial locality describes the closeness of the addresses of multiple accesses to each other.
    - Temporal locality is the extent to which references over time request the same addresses.
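Spatial locality shows up directly in loop order. The sketch below is illustrative (the array shape and function names are invented for the example): traversing a row-major 2-D array row by row touches consecutive addresses, while traversing it column by column strides through memory, which is why the first pattern is friendlier to caches.

```python
# Illustrative sketch: element (r, c) of a row-major 2-D array lives at
# linear address r * COLS + c, so traversal order decides spatial locality.

ROWS, COLS = 3, 4

def addresses_row_major():
    # Row-by-row: consecutive addresses, good spatial locality.
    return [r * COLS + c for r in range(ROWS) for c in range(COLS)]

def addresses_column_order():
    # Column-by-column: stride-COLS jumps, poor spatial locality.
    return [r * COLS + c for c in range(COLS) for r in range(ROWS)]

print(addresses_row_major()[:4])     # [0, 1, 2, 3]: neighbors in memory
print(addresses_column_order()[:4])  # [0, 4, 8, 1]: stride-4 jumps
```

Temporal locality is the other axis: any address that appears repeatedly in either trace (a loop counter, a hot variable) benefits from staying cached between uses.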

Manageability
How easy it is to manage a computer system or collection of systems. It includes such diverse tasks as administration, backup, security, performance tuning, disaster recovery, and fault recovery.
Node
A single member of a cluster. A node is a whole computer, running its own copy of the operating system and applications.
Quad
The building block of Sequent's NUMA-Q architecture, consisting of four Pentium Pro processors, two PCI buses with seven PCI slots, memory, and a 500 MB/s system bus.
Replicated Memory Cluster (RMC)
A memory replication or memory transfer mechanism between nodes, plus a software lock-traffic interconnect, that together maintain coherency between copies of shared memory distributed across nodes.
Remote Quad Memory
The portion of the single physical memory that resides on quads other than the one quad that contains the processors seeking memory access.
SCI
Scalable Coherent Interface. An IEEE standard for internal processor-memory connections within computers. A variant is used to implement cache coherence in NUMA-Q. SCI-connected processors do not have to use I/O operations to communicate; they can use ordinary load and store instructions to directly access one another's memory.
Shared-Memory Cluster
A cluster in which SMP nodes are interconnected using very high-bandwidth links (1 GB/s or more). Each SMP node has its own "local" memory as well as peer-to-peer access to memory on other nodes. Memory is managed coherently and appears uniform to the higher-level RDBMS and application layers.
Shared-Memory Model
A logical architecture for parallel computing in which multiple processors run a single copy of the operating system. The operating system presents the illusion of a single, large physical memory (single address space) and a single, very fast processor to all applications running on top of it. Most commercial software is written to this model today. NUMA-Q preserves the shared-memory model, even though it is a distributed-memory implementation.
Snooping Cache Coherence
A mechanism in which every cache that has a copy of a memory block also has a copy of the sharing status of the block. It employs hardware support known as a snoopy bus.
Snoopy Bus
The hardware bus used by most big-bus SMP machines. All CPUs "snoop" all activity on the bus and check to see if transmissions on it affect CPU cache contents. Each CPU is responsible for tracking the contents of only its own cache.
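The snooping behavior above can be sketched in miniature. This is an illustrative model only (the classes and names are invented for the example, and real protocols carry more states): every cache observes each write broadcast on the shared bus and invalidates its own copy of the written block, so no central directory is needed.

```python
# Hedged sketch of bus snooping (illustrative, not a real protocol):
# every cache watches writes on the shared bus and invalidates its own
# copy of any block that another CPU writes.

class SnoopingCache:
    def __init__(self, name, bus):
        self.name = name
        self.blocks = set()              # blocks this cache currently holds
        bus.caches.append(self)

    def snoop(self, writer, block):
        if writer is not self:
            self.blocks.discard(block)   # invalidate our stale copy

class Bus:
    def __init__(self):
        self.caches = []

    def write(self, cache, block):
        for c in self.caches:            # broadcast: every cache snoops
            c.snoop(cache, block)
        cache.blocks.add(block)          # the writer keeps the live copy

bus = Bus()
c0, c1 = SnoopingCache("cpu0", bus), SnoopingCache("cpu1", bus)
c0.blocks.add(0x80)
c1.blocks.add(0x80)          # both caches share block 0x80
bus.write(c0, 0x80)          # cpu0 writes: cpu1's copy is invalidated
print(0x80 in c1.blocks)     # False
print(0x80 in c0.blocks)     # True
```

The contrast with directory-based coherence is visible in `Bus.write`: every cache sees every write, which is cheap on one bus but is exactly what limits big-bus SMP scalability.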


Date of last revision: 17 December 2001.
Extracted from: SolutionsIntegrator magazine dated February 1, 1998, pages 29-32.