Gap Analysis

Published by: ss2315 on Jan 14, 2010
Copyright:Attribution Non-commercial


These represent the heritage of Globus GT2 [Globus-B] and Condor [Condor] in supporting the classic computing model of jobs running on distributed computers that access data stored, in general, on different distributed resources. Commercially this model is of great importance in supporting utility computing or computing-on-demand; the initial Grid Systems InnerGrid product is in this area [GridSystems]. The model is also needed by particle physics (LHC), where the data comprise many petabytes per year of individual events that must be analyzed independently and then examined collectively to find signals of new science or to measure cross-sections. We note that the Globus team calls this a data grid; to avoid confusion with the database-centric applications highlighted in UK e-Science, we choose the compute/file grid label. We will later point out that this style of grid is, in fact, needed by applications such as bioinformatics, which must fetch sequences from a database (an Information Grid) and analyze them on dynamically allocated compute resources. The functional capabilities needed by compute/file grids are well understood and are illustrated by the work of the European Data Grid (EDG) [EDG-A] and the trio of projects in the US Trillium consortium [Trillium]. These capabilities include generally important functionality such as Grid information systems (MDS-2 today), security, and network monitoring, discussed in section 7. Characteristic of this style of Grid is resource brokering, both within collections of computers and via the meta-schedulers and planners that operate between separately managed computer subsystems. Some variant of distributed file and storage (tape) systems is needed, as is the ability to create and manage data replication (caching). Fabric management, the reliable deployment of software on the tens of thousands of resources in such grids, is also very relevant. Finally, such grids need to be very robust (to cope with the inevitable glitches in data analysis on the LHC scale), and so they need good management frameworks to deliver autonomic characteristics. Note that compute/file grids can in fact access their information from databases (as illustrated by the use of Objectivity in some particle physics experiments), but the computing style is still that of the classic file-based model.
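The interplay of replica management and resource brokering described above can be sketched in a few lines. The following is an illustrative Python toy, not the EDG or Globus API: the catalogue contents, site names, and the `broker` function are all hypothetical, and a real broker would of course weigh many more factors (queue lengths, network cost, policy).

```python
# Hypothetical sketch of the resource-brokering pattern: a broker consults
# a replica catalogue (caching) and a resource view (Grid information
# system) to place a job near an existing copy of its input file.

# Replica catalogue: logical file name -> sites holding a physical copy
replica_catalogue = {
    "lfn:/lhc/events-001.dat": {"cern", "ral"},
    "lfn:/lhc/events-002.dat": {"fnal"},
}

# Resource information: site -> number of free worker nodes (illustrative)
site_free_cpus = {"cern": 12, "ral": 40, "fnal": 3}

def broker(input_file, catalogue, free_cpus):
    """Pick the site with the most free CPUs among those already holding
    a replica of the input file; fall back to any site (which would then
    trigger a file transfer, i.e. the creation of a new replica)."""
    candidates = catalogue.get(input_file, set())
    pool = candidates if candidates else set(free_cpus)
    return max(pool, key=lambda site: free_cpus[site])

print(broker("lfn:/lhc/events-001.dat", replica_catalogue, site_free_cpus))
# "ral": it holds a replica and has more free CPUs than "cern"
```

A meta-scheduler between separately managed subsystems would apply the same idea one level up, treating each site's local scheduler as the resource being chosen.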
