"Data-intensive" refers to huge volumes of data, complex patterns of data integration and analysis, and intricate interactions between data and users. Current methods and tools fail to address data-intensive challenges effectively, for several reasons, all of which are aspects of scalability.
Science is witnessing a data revolution. Data are now created by faster and cheaper physical technologies, software tools and digital collaborations; examples include satellite networks, simulation models and social networks. To transform these data successfully into information, then into knowledge, and finally into wisdom, we need new forms of computational thinking. These may be enabled by building "instruments" that make data comprehensible to the "naked mind", much as telescopes reveal the universe to the naked eye.
Date and time:
Tuesday, 9 February, 2010 - 09:30
Seminar Room, Biomedical Systems Analysis, Human Genetics Unit, Medical Research Council, Edinburgh, UK
Presenting the research of the Data-Intensive Research Group as part of a visit of Professor Robin Stanton (Pro Vice-Chancellor) and Professor Lindsay Botten (Director, National Computational Infrastructure), Australian National University, to the UK National e-Science Centre.
This presentation focuses on the computer science research performed at the National e-Science Centre, a joint activity of the University of Edinburgh and the University of Glasgow. Another submission reports on the community support offered by the National e-Science Centre.
To explore, analyse and extract useful information and knowledge from massive amounts of data collected at geographically distributed sites, one must overcome both data-intensive and compute-intensive problems in distributed environments.