Published in Journal of Applied Fire Science, 23, pp 193-204, 2013-14
Problems with Computer Models
Alan N Beard,
Civil Engineering Section,
School of the Built Environment,
Heriot-Watt University, Edinburgh, Scotland, U.K.
ABSTRACT
Computer-based models are widely used today as part of fire safety design. There is, though,
considerable concern about whether or not the use of such models may be leading to
unacceptable options being adopted. This concern covers all types of models, including those
based on computational fluid dynamics. Models should only ever be used as an aid within a
context of fire knowledge and experience and certainly not as a sole source of decision-
making. Models are not the real world. The limitations and conditions of applicability of a
model need to be thoroughly understood by users. This paper* discusses these and related
issues.
INTRODUCTION
Basic questions surround the degree to which models, especially computer-based models,
may or may not have the potential to represent the real world reasonably accurately and the
ways in which such models may be used and results interpreted. Beyond that there are
questions around the role models may play in fire safety decision-making. It is often stated in
research articles that a model has been ‘validated’ and a reader may think that this means that
the model has somehow been ‘proven correct’ and that use of the model will accurately
represent the real world; however, this may be far from the case. The concept of ‘validation’
is discussed later in this paper.
DIFFERENT USERS MAY PRODUCE VERY DIFFERENT RESULTS
Different users of a model may produce very different results for the same case and this may
be so even for a model which has the potential to be valuable. Problems of this kind have
been discussed in references [1,2] where three kinds of comparisons between theoretical
predictions using deterministic models and experimental results have been identified. Brief
descriptions of the types of comparisons are given here:
(1) An a priori comparison is one in which the user has, effectively, not ‘seen’ or used any
results from an experiment being used for comparison.
(2) A blind comparison is one in which the user has, effectively, not ‘seen’ all the results
from an experiment being used for comparison, but some limited data from that
experiment have been used as input, e.g. heat release rate or mass loss rate over time.
(3) An open comparison is one in which data from an experiment being used for comparison
have been seen and possibly used. The user is free to adjust input after initial
comparisons.

*This article is a substantial development from two articles which appeared under the titles:
‘Computer models and their limitations in fire safety design’, Industrial Fire Journal, 74,
pp 38-40, January 2009, and ‘Reliability of computer models in fire safety design’, Industrial
Fire Journal, 71, pp 39-40, 2008.
‘a priori’ COMPARISONS AND ROUND-ROBIN STUDIES
In comparison types (1) and (2) above, the modeller is not free to adjust input after the
comparison. Most of the comparisons in the literature are of type (3); very few are of type
(2), and mention may be made of references [3-7]. Extremely few indeed are of type (1);
examples are given in [8-13]. Comparisons of all three types are needed. (See also the CFD
study carried out as part of the Benelux tunnel fire tests [14], which concluded that,
quantitatively, “substantial deviations” can occur with CFD simulations.) When reporting a
comparison it should be made explicit which type it is, and full details should be given. In using models
for real-world decision-making we are effectively in the realm of type (1); we cannot have the
results from a real fire which is yet to happen, if it does happen. While there is a place for
open comparisons, far more a priori and blind comparisons need to be carried out and
reported in the public scientific literature.
Most comparisons between theory and experiment in the literature are of the ‘open’ kind and
very few are of the ‘blind’ type. However, in real-world design, the user is effectively in an a
priori position (ie type 1) with respect to a proposed facility or building. That is, a building or
facility is being designed and a potential fire has not yet occurred in it. It is crucial, therefore,
to conduct a priori comparisons with well instrumented experimental tests, but very few
indeed have ever been performed. Also, it is very important to carry out ‘round-robin’ studies
in which different model users carry out one or more simulations using their model, for a set
fire test case specified by an independent party. The users would be given details of the set-
up, but not experimental results. Results predicted by the different users would then be
compared with experimental results for that test case. Such a study was conducted by the
CIB (the International Council for Research and Innovation in Building and Construction)
during the late 1990s, and it showed considerable differences between the results predicted
by different users for the same specified case
[10]. As a consequence of the poor showing of model use found in that study the report was
not made widely available; because of this the results are not widely known.
A similar ‘round-robin’ a priori study has been carried out by Edinburgh University in
collaboration with Strathclyde Fire Brigade, centred on the Dalmarnock fire tests [11,12].
The results were presented at a meeting in Edinburgh in November 2007. In these tests a fire
was started on a sofa in a two-bedroomed flat in Dalmarnock, Glasgow. The flat was in a
high-rise block. Model users were given details of the arrangement, materials etc, but not the
experimental results. They were invited to predict the time courses of variables such as heat
release rate, temperature and smoke obscuration. The big question was, as with the CIB
study: how would the predictions by model users compare with each other and with
experimental results? Ten model user teams took part, eight using the same CFD model and
two using a zone model. (There is no reason to think that the general nature of the results
would have been different had these teams used a different CFD model or different zone
model.) The CFD model used was Fire Dynamics Simulator (FDS). (With regard to FDS, see
also reference [15].)
Overall, the predictions were not at all good: there was generally a wide scatter amongst the
predictions by users and, also, predictions usually compared poorly with experimental results.
For example, with regard to temperature, predictions tended to vary from about a 45% over-
prediction to about a 90% under-prediction. (Sometimes a model prediction was close to the
experimentally determined value for that variable, over part of the time range.) The basic
message was clear: a predicted result from a model cannot be assumed to be accurate; ie to
reflect the real world. Further, consistency cannot be assumed; ie that a given model will
consistently over-predict or consistently under-predict. Fuller details may be found in
references [11,12]. (These issues also relate to variability within experimental results, which
is a cause of concern [2,24].)
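By way of illustration only, the short Python sketch below shows how such percentage over- and under-predictions may be expressed relative to a measured value. All numerical values in it are hypothetical and have merely been chosen to span roughly the range quoted above; they are not the Dalmarnock data.

```python
# A minimal sketch, with hypothetical values, of expressing a model
# prediction's deviation from an experimental value as a percentage
# over- or under-prediction. These are NOT the Dalmarnock data.

def percentage_deviation(predicted: float, measured: float) -> float:
    """Positive => over-prediction; negative => under-prediction."""
    return 100.0 * (predicted - measured) / measured

measured_peak_temp_c = 600.0                              # hypothetical measured peak temperature
user_predictions_c = [870.0, 640.0, 450.0, 310.0, 60.0]   # hypothetical predictions by five users

for i, pred in enumerate(user_predictions_c, start=1):
    dev = percentage_deviation(pred, measured_peak_temp_c)
    label = "over" if dev >= 0 else "under"
    print(f"User {i}: predicted {pred:.0f} C, about {abs(dev):.0f}% {label}-prediction")
```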
The general conclusions appear to be similar to those which followed from the CIB study of
the 1990s.
Also, in the CIB study of reference [10] it was found that different users applying different
deterministic models to the same case again produced very different results. This is not
at all surprising, given the fact that different users applying the same model to the same case
have produced very different results, as described above.
VARIABILITY IN RESULTS FROM USING PROBABILISTIC MODELS
As mentioned above, different users may produce quite different results; and this is true of
probabilistic models as well as deterministic ones, even when using the same probabilistic
model and applying it to the same case. In a European study concerning probabilistic
modelling in the oil and gas industry [16] it was found that risk estimates produced by
different users differed by “several orders of magnitude”. As an ‘order of magnitude’
connotes a factor of ten, this implies factors of about one hundred or one thousand. A
similar point has been made about deterministic models earlier in this article: different users
may produce very different results when applying the same model to the same case. For
further fundamental considerations about probabilistic risk assessment see reference [17].
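To make the arithmetic of ‘orders of magnitude’ concrete, the short Python sketch below computes the spread between the largest and smallest of a set of risk estimates. The estimates themselves are hypothetical and are not taken from reference [16].

```python
import math

# A minimal sketch, with hypothetical values, of expressing the spread of
# risk estimates produced by different users as orders of magnitude.
risk_estimates_per_year = [2.0e-3, 5.0e-4, 8.0e-5, 3.0e-6]   # hypothetical frequencies, one per user

spread_factor = max(risk_estimates_per_year) / min(risk_estimates_per_year)
orders_of_magnitude = math.log10(spread_factor)

print(f"Largest / smallest estimate = {spread_factor:.0f}"
      f" (about {orders_of_magnitude:.1f} orders of magnitude)")
```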
USING A MODEL IN A RESPONSIBLE WAY IS NOT EASY
A colleague commented to me at a conference on fire safety modelling that “any fool can
press a button”. The point was that it is generally not difficult, in practical terms, to run a
computer-based model and get results, given the availability of packages today. However, the
implication was that it is very difficult to employ a model responsibly, so that it makes a
genuinely valuable contribution to real-world decision-making, rather than leading to
inaccuracy and inappropriate interpretation. Whatever else may be required in order for a
model to be used responsibly, adequate sensitivity considerations are essential.
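As an indication of what ‘sensitivity considerations’ might involve at their simplest, the sketch below varies each input to a stand-in ‘model’ one at a time about a base case and records the resulting spread in the output. The stand-in function and all input values are hypothetical; in practice the user's actual fire model would be run in its place, and more thorough sensitivity methods may well be needed.

```python
# A minimal, one-at-a-time sensitivity sketch. The "model" below is a trivial
# stand-in function with hypothetical inputs, NOT any real fire model; a user
# would substitute runs of their own model here.

def stand_in_model(heat_release_rate_kw: float, opening_area_m2: float,
                   wall_conductance: float) -> float:
    """Return a notional hot-layer temperature rise (degrees C)."""
    return 6.85 * (heat_release_rate_kw ** 2 /
                   (opening_area_m2 * wall_conductance)) ** (1.0 / 3.0)

base_inputs = {
    "heat_release_rate_kw": 1000.0,
    "opening_area_m2": 2.0,
    "wall_conductance": 30.0,
}
base_output = stand_in_model(**base_inputs)
print(f"Base case output: {base_output:.0f}")

for name, base_value in base_inputs.items():
    outputs = []
    for factor in (0.8, 1.2):                     # vary one input at a time by +/-20%
        perturbed = dict(base_inputs, **{name: base_value * factor})
        outputs.append(stand_in_model(**perturbed))
    low, high = min(outputs), max(outputs)
    print(f"{name}: output ranges from {low:.0f} to {high:.0f} "
          f"({100.0 * (high - low) / base_output:.0f}% of the base case)")
```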
Whether or not a model may be reliably used as part of fire safety decision-making depends
not only upon the conceptual and numerical assumptions in the model itself but also upon
how it is used and how the results are interpreted. Using models as part of decision-making
may be dangerous. It should not be concluded, though, that it is impossible for models to be
employed valuably as part of safety decision-making. Also, it may be the case that a model is
more valuable qualitatively than quantitatively.
It is necessary for a regulatory framework to be constructed which takes into account the
potential for a model to be valuable, the methodology to be followed in using the model and
the user, who must be knowledgeable. A ‘knowledgeable user’ must be capable of using an
acceptable methodology to apply a model which has the potential to be valuable to a
particular case in a comprehensive and exhaustive way, making all assumptions and
procedures explicit, and interpreting results in a justifiable way. These concerns go across the
board and are not just pertinent to particular industries; they are discussed further in reference
[18]. It is known that UK governmental departments, European organizations and the
International Organization for Standardization (ISO) are concerned about these matters, yet, thus far,
little has been done at a UK national or international level, although ISO has published an
initial standard [19]. General guidance has been indicated in [18], and guides have also been
produced by the Society of Fire Protection Engineers (SFPE) [20] and the American Society
for Testing & Materials (ASTM) [21]. Also, a recommendation has been made to the
European Parliament that a framework be established to try to ensure the acceptable use of
models as part of fire safety decision-making [22]. (Although this report is nominally about
tunnel safety, many of the issues and recommendations are of a generic nature, covering all
cases, as indicated in this article.)
VARIABILITY IN RESULTS FROM DIFFERENT MODELS, FOR A SINGLE USER
It may be seen in reference [23] that significant differences may be found by the same user in
applying two different CFD-based models to the same case. This further demonstrates the kinds of
problems which exist in using models as part of fire safety decision-making. Inter alia, the knowledge
and experience of the user become crucial. Whatever else may be required to use a model responsibly,
adequate sensitivity considerations are vital. It is also absolutely essential to be open and explicit
about all assumptions made, both qualitative and quantitative, and the procedure followed.
SOURCES OF ERROR IN MODELLING
Sources of error in modelling may be regarded as falling into the following general categories:
{a} Lack of Reality of the Theoretical and Numerical Assumptions
{b} Lack of Fidelity of the Numerical Solution Procedures
{c} Direct Mistakes in Software
{d} Faults in Computer Hardware
{e} Mistakes in Application
These general categories apply to both deterministic and probabilistic models. Further, there
may be ‘effective errors’ in using a model because of inadequate documentation. For
example, an ability may be implied for a model which it does not possess. This is a crucial
point; model documentation must state clearly and explicitly the conditions for which the
software is suitable or unsuitable. In one particular case, for example, a CFD model was, in
reality, suitable only for smooth walls. However, the documentation did not mention this,
implying that the model could also be applied in the case of rough walls.
These matters have been discussed at greater length elsewhere [24] and will not be
considered further here.
RESULTS FROM A MODEL ARE NO SUBSTITUTE FOR EXPERIMENT
It is evident from what has gone before that, in order to test one theoretical model, it is not
adequate to simply compare the results from it with results from another theoretical model,
eg a CFD model. This, however, is sometimes done and it may be implied that comparing
with CFD results is similar to comparing with the real world. Results from any theoretical
model, no matter how widely used the model may have been, are no substitute for
experimental results. There may be value in comparing the results from one theoretical model
with results from another theoretical model; however, this should most certainly not be seen
as an alternative to comparison with empirical evidence. Also, there is a need for full-scale
and large-scale tests; too much reliance on small-scale tests cannot be regarded as adequate
[25].
Even with results from full-scale experiments, there are questions which arise and some of
these are discussed below.
VARIABILITY IN EXPERIMENTS: REPLICATED TESTS MAY PRODUCE VERY
DIFFERENT RESULTS
A theoretical prediction for, say, a temperature cannot be directly compared with a
temperature in a real fire. This means it is necessary to compare with experimental results;
however, this is more difficult than might be thought. There is a humorous saying that
‘nobody believes a theory except the theoretician; everybody believes an experiment, except
the experimenter’. It is essential to assume a questioning attitude to experimental results as
well as to theoretical predictions. False inferences drawn from the results of an experiment
may be associated with:
(a) Lack of control of the conditions of an experiment. For example, ambient humidities may
vary from day to day leading to different temperatures for ostensibly ‘identical’
experiments. Figure 1 shows temperatures measured in two tests which were intended to
be identical. The particular case is not relevant; however, the results of Figure 1 were not
a result of 'bad science': they came from work in a well-respected laboratory, conducted by
well known and respected scientists. It shows that there is a need for experimental tests to
be repeated in as identical conditions as possible (ie tests should be replicated) and for
distributions of experimental results to be produced, for each given case. This raises
problems: replication of experimental tests is expensive and there is a lack of willingness
to carry out replication because of this. It is essential that it be done, though, and it
strongly suggests the need for collaboration at an international level which is aimed at
producing acceptable data sets and distributions of results from replicated tests. Results
from a single experimental test may well not be at the mean of a distribution of replicated
test results for the same case. Further, there is a need for large-scale experimental tests in
addition to smaller-scale tests. (A brief sketch of how replicated test results might be
summarized as a distribution is given after this list.)
(b) Design of the experiment.
(c) Direct error in measurement.
(d) Raw data processing algorithms.
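As indicated under (a) above, the short Python sketch below illustrates, with hypothetical values, how replicated test results might be summarized as a distribution rather than as a single number, and how an individual test may lie well away from the mean of that distribution.

```python
import statistics

# A minimal sketch, with hypothetical values, of summarising ostensibly
# 'identical' replicated tests as a distribution rather than a single number.
replicate_peak_temps_c = [612.0, 655.0, 580.0, 690.0, 601.0]   # hypothetical peak temperatures

mean_temp = statistics.mean(replicate_peak_temps_c)
stdev_temp = statistics.stdev(replicate_peak_temps_c)
print(f"Mean peak temperature: {mean_temp:.0f} C")
print(f"Sample standard deviation: {stdev_temp:.0f} C")

# Any single test may lie well away from the mean of the distribution.
single_test = replicate_peak_temps_c[3]
print(f"A single test ({single_test:.0f} C) differs from the mean by {single_test - mean_temp:+.0f} C")
```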
These matters have been considered further in references [2, 24] and will not be considered at
greater length here.
THE TERM ‘VALIDATION’
The term 'validation' has become widely used in relation to fire models, especially computer-
based fire models. This is unfortunate, however, because the word is misleading, especially
to those outside the field of fire safety engineering. To say that a model has been 'validated'
implies that it has been 'proven correct'. However, this can never be the case and, indeed, it is
directly contrary to scientific method. The word itself carries a partiality and a more neutral
word, such as ‘testing’, is preferable. It is often heard that a particular model has been
‘validated’ for a particular kind of use and this carries the implication, especially to those
outside the subject area, that the results will be accurate (in the sense of corresponding to the
real world) if applied to that case. It can never, though, be guaranteed that results are
accurate, even if the results from the model for a similar case have been found to be
reasonably close to experiment. Often the word 'validation' is used as effectively equivalent to
'comparison with experiment', for example in the study carried out by Khoudja [26]. Also,
Rhodes, in the chapter on CFD modelling in reference [27], distinguishes two kinds of
‘validation’. According to Rhodes, in relation to CFD: “The first is a very scientific kind,
where detailed measurements of a relatively simple flow are made. The boundary details are
specified in complete detail, and the modeller attempts to reproduce the flow given these
boundary conditions”. Also, from [27]: “The second kind of validation is one where models
are applied to full-scale tests. Such tests do not provide the necessary information to specify
the initial and boundary conditions with complete accuracy and detail. Thus cases like this
should be thought of as ‘verifications’ rather than stringent ‘validations’ of models”. Other
relevant chapters in the handbook of reference [27] are [28-29].
People have also used the word ‘validation’ to mean that a computer program has been
shown to be consistent with its 'formal specification' [30]. This leads to the question of how a
fire model may be ‘formally specified’, if at all. Further, the word 'validation' has been used in
a much wider sense, to mean that a piece of software 'functions properly in a total system
environment' [31]. It would be better for the word ‘validation’ not to be used but for people
to say explicitly and clearly what they mean. For example, if a model user wishes to refer to
'comparison with experiment' then it would be much clearer to use that phrase rather than to
refer to ‘validation’. Then, how such a comparison with experiment was being carried out
should be specified and described carefully; see above in this paper. It would be desirable for
the word ‘testing’ to be used, as appropriate, rather than ‘validation’. At the very least, if the
word ‘validation’ is to be used, then a precise, explicit and comprehensive definition of the
way in which the word is being used by a writer in the particular case under consideration
should be given.
FLEXIBILITY OF REPRESENTATION IN A MODEL
In general, there will be many sources of flexibility in the way a situation may be represented
in a model and uncertainty in conceptual and numerical assumptions. In such conditions it is
sensible to err on the cautious side. Given the lack of specific knowledge about input to a
model which often exists (ie uncertainty), and the flexibility which may exist in the way in
which a specific case may be represented (ie ‘idealized’) in a model, there may well be
considerable scope for an analyst to make conceptual or numerical assumptions which may be
defended as ‘plausible’ and which present the option which a client may prefer (eg the
cheapest) in a favourable light relative to other options. Upon deeper analysis, however,
including sensitivity considerations, this may not correspond to a risk which the general
public would regard as acceptable.
ARCHIVAL SOURCES AND RESEARCH PAPERS: THE NEED TO BE EXPLICIT
Significant new research should be reported in archival sources. An archival source must be
such that it will be available into the indefinite future. It should be possible to obtain an
archival source through, for example, a national ‘library of deposit’, such as the British
Library. Some research publications fall into this category. This is not to say
that an article in a non-archival journal is of no value; such an article may be very valuable if,
for example, it summarizes research published elsewhere and brings it to a wider readership.
Articles in non-archival journals should, however, refer to archival sources for basic research.
For papers published in archival sources, in order to enable replication of the research and to
guard against adverse comments, it is essential that a research paper which reports new
research results should be very explicit about what has been done and how. Full details of all
assumptions made, both qualitative and quantitative, need to be given. This includes all
assumptions made in source codes of any models used.
The cardinal rule for a publication in an archival journal should be that a reader should be
able to replicate the research if they wished to do so. This applies to both theoretical and
experimental work. Sometimes not all details can be given in a journal paper because of
limitations of space. In this case, full references must be given to sources where the details
can be found, and these references must be to sources which are readily available to a reader
without excessive effort and expense. If a charge is to be made for a copy of a reference then it should
be limited to essential costs (eg photo-copying) plus postage. (Books and journals should be
available at ‘reasonable’ cost through, eg, national libraries or similar sources.)
As far as possible, in a research paper, references should be to articles etc in archival sources.
Sometimes necessary information is kept secret, for example for commercial reasons. This is
totally unacceptable. Examples of this are source codes of some models. Also, some source
codes are only available for large sums of money and then under very limited conditions. This
also is totally unacceptable. Research must be open to scrutiny by the public in general and
the scientific community in particular and considerable obstacles must not be placed in the
way of such examination. Thorough scrutiny by others, of all details of research reported on,
is one of the essential parts of science.
CONCLUDING COMMENTS
Considerable problems exist in using computer-based models to aid in fire safety decision-
making; for example, the finding that different people or groups, using the same model and
applying it to the same case, may well produce different results. Indeed, this seems to be the
norm. The employment of computer-based models in decision-making may well lead to
totally unacceptable, or even dangerous, options being accepted for a design. It is not
impossible to use models in a way which may be valuable rather than misleading; however, it
is not an easy matter to do so. Some of the problems have been discussed in this article and
these need to be seriously addressed at an international level. (For more discussion on these
issues see references [24, 32]). Users must have a thorough knowledge of fire behaviour and
a thorough knowledge of the model being employed. Models should only ever be used as an
aid to decision-making and within a context of fire knowledge and experience. Models should
never be seen as sole instruments in decision-making.
REFERENCES
1 Beard, A.N., ‘On a priori, blind and open comparisons between theory and experiment’,
Fire Safety Journal, 35, pp 63-66, 2000
2 Beard, A.N., ‘Problems with Using Models for Fire Safety’, chapter 29 of The Handbook of
Tunnel Fire Safety, eds. Alan Beard and Richard Carvel, 2nd edition, pp615-634, ICE
Publishing, London, 2012. (NB this chapter is general, not restricted to tunnels only.) ISBN
978-0-7277-4153-0
3. Bettis, R.J., Jagger, S.F., Lea, C.J., Jones, I.P., Lennon, S. and Guilbert, P.W., ‘The Use of
Physical and Mathematical Modelling to Assess the Hazards of Tunnel Fires’, 8th
International Symposium on Aerodynamics and Ventilation of Vehicle Tunnels, Liverpool,
6-8 July, 1994. Organized by BHR Group Ltd., Mechanical Engineering Publications Ltd.,
London. ISBN 0-85298-923-7
4 Miloua, H, Azzi, A and Wang, HY, ‘Evaluation of different numerical approaches for a
ventilated tunnel fire’, Journal of Fire Sciences, 29, 403-429, 2011
5 Miles, SD, Kumar, S & Cox, G, ‘Comparisons of blind predictions of a CFD model with
experimental data’, 6th Int Symposium on Fire Safety Science, Poitiers, 5-9 July, 1999
6 Webb, S., ‘Fire Model Evaluation’, Fire and Blast Information Group (FABIG) Newsletter,
Issue 22, May 1998, pp 21-23. Steel Construction Institute, UK
7 Alvares, NJ, Foote, KL, Pagni, PJ. ‘Forced ventilated enclosure fires’, Combustion Science
& Technology, 1984, 39, pp55-81
8. Beard, A.N., 'Evaluation of Fire Models’; 10 reports for the Fire Research &
Development Group of the UK Home Office. Unit of Fire Safety Engineering, University of
Edinburgh, 1990. These reports were used as a basis for guidance for UK fire brigades.
9. Beard, A.N., ‘Evaluation of Deterministic Fire Models; Part 1-Introduction’, Fire Safety
Journal, 19, pp 295-306, 1992
10 Hostikka, S., & Keski-Rahkonen, O., ‘Results of CIB W14 Round Robin for Code
Assessment’. Technical Research Centre of Finland, Espoo, Finland, 1998.
11 ‘The Dalmarnock Fire Tests: Experiments and Modelling’, eds. G. Rein, C. Empis and
R. Carvel; published by The School of Engineering, Edinburgh University, 2007,
ISBN 978-0-9557497-0-4
12 Rein, G. et al, ‘Round-robin study of a priori modelling predictions of the Dalmarnock
Fire Test One’, Fire Safety Journal, 44, pp590-602, 2009
13 Beard, A.N., ‘Major Fire spread in tunnels: Comparison between Theory and
Experiment’, 1st International Tunnel Safety Forum for Road & Rail, Nice, 23-25 April
2007. Organized by Tunnel Management International, UK.
14 ‘Project Safety Test-Report on Fire Tests’, Directorate General for Public works &
Water, Civil Engineering Division, Utrecht, The Netherlands, 2002; (known as the ‘2nd
Benelux Tests’)
15 Coyle, P. & Novozhilov, ‘Further Validation of Fire Dynamics Simulator Using Smoke
Management Studies’, International Journal on Engineering Performance-Based Codes, 9,
pp7-30, 2007
16. Hawker, C. R., ‘Offshore Safety Cases – BG E&P’s Experience’, proceedings of
conference on Safety Cases – Can We Prove the Next Ten Years will be Safer than the Last
Ten?, London, 1995. Organized by IBC Technical Services Ltd and DNV (UK).
17. Ferkl, L. and Dix, A., ‘Risk analysis – from the Garden of Eden to the seven deadly sins’,
14th Int Symposium on Aerodynamics & Ventilation of Tunnels, Dundee, 11-13 May 2011.
Organized by BHR Group, UK.
18 Beard, A.N., ‘Requirements for acceptable model use’, Fire Safety Journal, 40, pp477-
484, 2005
19 ISO/BSI. ‘Fire Safety Engineering-Part 3: Assessment and Verification of Mathematical
Fire Models’, BS ISO TR 13387-3:1999; London
20 Guidelines for Substantiating a Fire Model for a given Application, Society of Fire
Protection Engineers (SFPE), Boston, USA, 2011
21 Standard Guide for Evaluating the Predictive Capability of Deterministic Fire Models,
American Society for Testing & Materials (ASTM), 2005
22 Beard, A.N. & Cope, D., Assessment of the Safety of Tunnels, report prepared for the European
Parliament. Published in 2008 and available on the web-site of the European Parliament under the
rubric of ‘Science & Technology Options Assessment (STOA)’.
23 Tuovinen, H., Holmstedt, G. & Bengtson, S., ‘Sensitivity Calculations of Tunnel Fires
using CFD’ , Fire Technology, 32 (2), pp 99-119, 1996
24 Beard, A. N., ‘Fire Models & Design’, Fire Safety Journal, 28, pp 117-138, 1997
25 Beard, A.N., ‘On comparison between theory and experiment’, Fire Safety Journal, 19,
pp307-308, 1992.
26 Khoudja, N., 'Procedures for Quantitative Sensitivity and Performance Validation Studies
of a Deterministic Fire Model', NBS-GCR-88-544, National Inst of Standards & Technology,
Gaithersburg, USA, 1988
27 Rhodes, N., ‘CFD Modelling of Tunnel Fires’, pp 329-345 of the Handbook of Tunnel
Fire Safety, edited by Alan Beard & Richard Carvel, 2nd edition, ICE Publishing, London,
2012. This chapter contains many general points; it is not just tunnel specific.
28 Beard, A.N., ‘Problems with Using Models for Fire Safety’, pp 615-634 of the Handbook
of Tunnel Fire Safety, edited by Alan Beard & Richard Carvel, 2nd edition, ICE Publishing,
London, 2012. This chapter is general, not just tunnel specific.
29 Beard, A.N., ‘Decision-making and Risk Assessment’, pp 635-647 of the Handbook of
Tunnel Fire Safety, edited by Alan Beard & Richard Carvel, 2nd edition, ICE Publishing,
London, 2012. This chapter is general, not just tunnel specific.
30 Ashley, N., 'Program Validation', contained within Software: Requirements, Specification
and Testing, Edited by T. Anderson, Blackwell Scientific Publications, Oxford, 1985. ISBN
0-632-01309-5.
31 Glass, R.L., Software Reliability Guidebook, Prentice-Hall, Englewood Cliffs, New
Jersey, 1979
32. Beard, A.N., ‘Limitations of Computer Models’, Fire Safety Journal, 18, pp375-391,
1992
Figure 1: Ostensibly ‘identical’ tests; results from two experiments intended to be the same.
Curves smoothed for clarity. © A.N.Beard, 2008