PNWD-3184
J.W. Buck
M.A. Pelton
D.A. Tolle
C.C. Townsend
G. Whelan
M.G. Nishioka
T.J. Mast
V. Kogan
M.S. Peffers
S. Mahasenan
D.P. Evers
K.E. Dorow
R.A. Corley
R.D. Stenner
M.A. Eslinger
D.L. Strenge
J.L. Kirk
Prepared for
Battelle Pacific Northwest Division
This report was prepared by Battelle Memorial Institute
(Battelle) as an account of sponsored research activities. Neither Client nor
Battelle nor any person acting on behalf of either:
MAKES ANY WARRANTY OR REPRESENTATION, EXPRESS OR IMPLIED, with
respect to the accuracy, completeness, or usefulness of the information
contained in this report, or that the use of any information, apparatus,
process, or composition disclosed in this report may not infringe privately
owned rights; or
Assumes any liabilities with respect to the use of, or for
damages resulting from the use of, any information, apparatus, process, or
composition disclosed in this report.
Reference herein to any specific commercial product, process, or service by
trade name, trademark, manufacturer, or otherwise, does not necessarily
constitute or imply its endorsement, recommendation, or favoring by Battelle.
The views and opinions of authors expressed herein do not necessarily state or
reflect those of Battelle.
The purpose of this research is twofold: to design an overarching framework
that is comprehensive, logical, and useful for industrial needs; and, within
that framework, to identify research needs based on gap and sensitivity analyses.
The overarching framework, called the Comprehensive Chemical Exposure
Framework, will be used to house models, algorithms, and databases associated
with micro-environmental exposure modeling. The research needs are defined for
representative high volume compounds that could be involved in home and work
exposure scenarios defined specifically for this study. These exposure
scenarios were used to guide the types of models, algorithms, and databases
required to evaluate each scenario. Model and process flow diagrams were
developed for each exposure scenario, and research
gaps were identified based on publicly available information. Once the gap
analysis was completed for the source, transport, exposure, and health impact
components of each scenario, a qualitative sensitivity analysis of the entire
system was conducted. The Gap Analysis focused on reviewing the process flow
diagrams developed for each of the four example exposure scenarios to identify
models, algorithms, and databases that were missing or unknown. This was done
for the Source, Transport, Exposure, and Impacts
components of the exposure scenarios. In some cases, models existed, but they
were determined to be too simplistic or conservative and were considered a
research gap. In these cases, alternative paths were explored to determine the
type of model or algorithm required to fill the research gap. A qualitative
Sensitivity Analysis was also performed on the models and algorithms identified
for the various compounds and exposure scenarios.
The authors would like to thank the American Chemistry
Council for providing Battelle Pacific Northwest Division an opportunity to submit a design for the CCEF. The authors would also
like to extend their personal gratitude to the following scientists, engineers,
and researchers who, over the years, have helped promote and support the
development of many of the concepts used in the design of the CCEF:
Table of Contents
Title Page
Abstract
Acknowledgments
There is a growing awareness in recent years that a person’s
exposure to particular chemicals occurs via multiple routes from multiple
sources. To adequately evaluate such exposures, the scientific community
requires models that can predict the occurrence of exposures for each potential
combination of pathways and sources, and then accumulate these exposures over
time. Ideally, the models will account for variations in people's
activity patterns that are influenced by age, gender, occupation, and other
demographic factors. These activity patterns should realistically
simulate the movements of representative people through zones defined by
geographic location and micro-environment.
Recently, the Food Quality Protection Act mandated that the exposure
assessment community address the limitations of existing models and provide
improved models that can be used to estimate exposures to agricultural-related
compounds. Although exposure modeling on agricultural pesticides is
clearly moving forward, there is also a pressing need for improved models to
assess exposures to non-agricultural compounds. To this end, the American
Chemistry Council has requested the design of a Comprehensive Chemical Exposure
Framework (CCEF), which is intended to inform and advise the American Chemistry
Council in its effort to identify, facilitate, and communicate generic research
that will characterize people’s exposure to chemicals, especially
non-agricultural chemicals, and raise the confidence and lower the uncertainty
for quantitative estimates of exposure associated with potential human health
effects from chemicals.
The American Chemistry Council's Human Health Exposure Assessment Technical
Implementation Panel funded this research, and Battelle Pacific Northwest
Division and Battelle Columbus Operations staff conducted the work. This
technical report is a final product of this research and is property of the
American Chemistry Council. This report documents the design of the CCEF,
research gaps in exposure modeling, algorithms, and data, and sensitivity of
exposure results to specific models, processes, algorithms, and data.
The overall design of the CCEF consists of developing
framework requirements, architecture design, and data exchange protocols using
a modular approach that allows users the flexibility to construct, combine, and
couple attributes that meet their specific modeling needs. This allows a
variety of models and databases to work within a single construct. There are
various terms that need to be defined to help the reader understand the overall
design of the CCEF documented in this report.
A framework typically consists of a set of modules
that have been specified by the client, an associated framework user
interface, and data exchange protocols. The purpose of the
framework is to:
The framework typically includes a user-friendly
interface to enable the user to access these capabilities easily. Other terms
that are often used in place of framework are 'system' and 'overarching
architecture.'
An object, in the context of this study, is a module, model,
database, or algorithm that is associated with the CCEF. The Framework treats
all these the same, as objects. An object reads data from other objects,
processes those data, and writes data out to other objects. In the case of a
database, the data may not necessarily be processed.
From the Framework's point of view, these objects are all the same and must
follow the data exchange protocols associated with them.
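To make this object view concrete, the following sketch (in Python, with hypothetical class, method, and data-item names; the CCEF design does not prescribe a particular language or interface) shows an object that declares what it consumes and produces and takes from upstream output only the items it needs.

```python
# Hypothetical sketch of the "object" abstraction described above.  Class,
# method, and data-item names are illustrative only; the CCEF design does not
# prescribe a particular language or interface.

class FrameworkObject:
    """A module, model, database, or algorithm treated uniformly by the framework."""

    def __init__(self, name, consumes, produces):
        self.name = name
        self.consumes = consumes   # data items expected from upstream objects
        self.produces = produces   # data items made available to downstream objects

    def read(self, upstream_data):
        """Take only the items this object actually needs from upstream output."""
        return {key: upstream_data[key] for key in self.consumes if key in upstream_data}

    def process(self, inputs):
        """Transform inputs into outputs (stub); a database object might simply pass data through."""
        return {key: inputs.get(key) for key in self.produces}


# Example: a source-term object feeding an air-transport object.
source = FrameworkObject("source_term", consumes=[], produces=["emission_rate_g_s"])
air = FrameworkObject("air_transport", consumes=["emission_rate_g_s"],
                      produces=["air_conc_ug_m3"])

upstream_output = {"emission_rate_g_s": 0.5, "release_height_m": 10.0}
print(air.read(upstream_output))   # {'emission_rate_g_s': 0.5}; extra items are ignored
```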
A module potentially contains three components: the
user interface, the scientific model, and, for those frameworks that
incorporate legacy models, pre-and/or post-processors. Examples of modules
include source term releases, vadose zone transport, saturated zone transport,
surface water transport, air transport, exposure pathway analysis, dose
estimates, health impacts, and sensitivity/uncertainty support tools.
Model
A model is the set of scientific algorithms, calculations, and databases
that define a particular module. Several models have been developed over the
past 10 years by researchers focusing on developing fully integrated,
multi-media, multi-pathway, multi-route modules that allow a more transparent
connection between individual medium-specific models. The grouping of
these models takes a holistic approach to environmental assessment of potential
contaminant impacts as they simulate:
1. Release of contaminants into the environment
2. Transport and fate through various environmental media
(i.e., groundwater, surface water, air, and overland surfaces)
3. Resultant exposures and impacts to an organism
4. Support tools such as sensitivity and uncertainty analyses, graphical
interface systems, and results display.
Module User Interface
The purpose of the module user interface is to make it easy for the user to
collect the data necessary to run the model. Besides gathering the necessary
data, the module user interface often provides online help to the user,
reference storage options for collected data, flexible unit inputs, and other
user support functions.
Module Pre/Post-Processors
As mentioned earlier, time and cost are saved if models can be integrated into
a framework intact. Legacy models that have been tested and reviewed can be
preserved and integrated by the addition of pre- and/or post-processors
to the module. These processors reorganize data into the format specified by
the overall framework, thus allowing models that were initially created for
media-specific analyses to be used in this more holistic approach to
multi-media assessments. Whether a pre/post-processor
is used depends on the needs of the scientific model and the data exchange
protocols of the framework. Models that have been created or modified with the
data exchange protocols predefined will likely not need pre/post-processors
before integration.
Framework attributes for micro-environmental exposure
modeling vary depending on the needs of the research area. The problem set and
client needs will define the requirements, design, and data exchange protocols
for a framework. This section will provide a brief definition and purpose of
the framework requirements, design, and data exchange protocols.
Framework Requirements
A project starts with the definition of the research needs. What problem or
problems must be solved? What kinds of information are needed? Who are the
ultimate users and how can the framework best meet their needs? This definition
begins with the analysis of the needs, a definition of the functional
components of hardware and software, and packaging of the information. The
requirements analysis is based on communication with the client and needs of
the research area of interest and produces a set of requirements for the
framework that describe what the software should do.
The information in the requirements package should, at a minimum, answer the
following questions:
Specific requirements for a framework include:
Framework Design
If the requirements of a framework describe what the software should do, the
framework design describes how the software will implement the stated
requirements. Design of a framework cannot start until a set of requirements
has been developed. Usually an initial set of requirements is developed and
then a design is implemented based on those requirements. In almost all cases,
issues will arise during the design process that will require changes to the
requirements. In this way the definition of requirements and design is an
iterative process. In many cases, design will include prototyping of software
to provide an early look at the software and its functionality.
Data Exchange Protocols
Before a framework is designed, the appropriate databases and the file structures
of the input and output for the framework and modules must be defined. Module
file structure should be consistent with the framework’s data exchange
protocols. Pre/post-processors may be used to aid in the conversion of
legacy code file formats not already meeting those data exchange protocols.
All file formats should be designed to ensure readability and compatibility
with most spreadsheet programs, and to incorporate necessary information to
communicate the use and purpose of each file.
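For illustration only, the short sketch below (in Python) writes one possible spreadsheet-readable exchange file: a comma-separated file whose first few rows are metadata lines communicating the producer, contents, and units. The layout, field names, and file name are assumptions, not the CCEF file specification.

```python
# Hypothetical example of a spreadsheet-readable exchange file with a small
# metadata header; the layout and names are assumptions, not a CCEF specification.
import csv

rows = [
    ("time_hr", "air_conc_ug_m3"),
    (0.0, 0.00),
    (1.0, 12.3),
    (2.0, 9.8),
]

with open("air_transport_output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Metadata lines communicate the use and purpose of the file.
    writer.writerow(["# producer", "air_transport module"])
    writer.writerow(["# contents", "hourly indoor air concentration"])
    writer.writerow(["# units", "hours; micrograms per cubic meter"])
    writer.writerows(rows)   # header row followed by data rows
```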
Within this document, the term "model" will refer
to the software codes inside the Framework, and the Framework will represent
the overall structure linking the models and databases to allow for a seamless
transfer of data between components.
To meet the current and future intent of the CCEF, as stated by the
requirements, six development goals will be considered when formulating the
design of the CCEF:
1. Provide a platform that allows "objects" to access
information generated/produced by other "objects." For
example, a model or database produces a set of data for consumption by another
model. The downstream model accesses the information it needs and expects
from the upstream model or database, but it does not have to accept all of the
information. There is no requirement on the consuming model to utilize
all of the information that is made available to it. A consuming model
should not have to consume information that it does not use.
2. Keep it simple, not simplistic. By definition, simple means
"easy to understand, deal with, use, etc" (Barnhart 1970).
Simplistic refers to "making complex problems unrealistically simple"
(Guralnik 1976). One of the major problems associated with other
frameworks that never became deployable is that they could not capture the
essence of the problem that they were trying to solve. The elegance of a
simple design is that it captures the essence of the problem without burdening
the user with unnecessary extraneous information or components.
3. Make it understandable (shared responsibility). A simple
design allows developers and users to understand the design and structure of
the framework. In addition, to maximize the ability of two disparate components
to communicate seamlessly, each component must share in the responsibility for
communication.
For example, who should be charged with transferring information from a
database to a model: the database owner (i.e., information producer) or
the model owner (information consumer)? It is not the responsibility of
the model owner to understand the structure of the database; likewise, it is
not the database owner’s responsibility to understand the model that is
consuming the data. Therefore, each owner must share some of the burden
to ensure that the appropriate data are transferred to the model in a form that
the model understands. This philosophy is applicable for communication
between all types of components (e.g., models, databases, and other
frameworks).
4. Develop consistent and repeatable protocols. By developing a
conceptual consistency in the design of the system, the design of the framework
is more easily understood by those who use it. For example, all databases
should link to models in the same manner. In addition, since models,
databases, and other frameworks can be considered "objects" in this
system, the conceptual philosophy for linking any object should be
similar. By developing repeatable protocols, software developed for one
aspect of the system can be reused to support other aspects of the
system. Reusing the same software results in a simpler QA/QC program,
easier maintenance, and more universal troubleshooting, as one fix corrects
many problems.
5. Identify the MINIMUM linkage information. All models have
been developed to solve specific problems in a certain way, regardless of how
generic the models are. As such, each model requires "typical"
and "unique" data. Typical data include information expected
from an upstream model or from a database. Unique data tend to be user
supplied and specific to that model. For example, an atmospheric
model may expect emission rates from a source-term model, joint frequency
distributions, and the number and types of surface disturbances from the
user. When linking to an upstream model, a minimum set of information is
traditionally expected to be made available by that upstream model. If
the two models want to communicate, they must agree as to what that minimum set
is, because neither model wants to produce or consume unnecessary
information that is irrelevant to its design. Focusing on the truly
relevant data requirements helps to ensure a higher probability that two models
will be able to communicate (a sketch illustrating this minimum-linkage idea
follows this list).
6. Recognize that this is an iterative process to meet these goals.
A well-designed framework will have the ability to solve today’s problems, yet
contain the flexibility and robustness to be modified to address future
problems. This is not to say that the framework needs to include
unnecessary components, but the design should contain the structure to allow it
to be modified to account for new models, new data, new parameters, etc., yet
maintain backward compatibility for the components existing in the
system. Each problem requires a unique solution; therefore, the framework
needs a design that expects change.
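The sketch below illustrates development goals 1 and 5 under the same hypothetical naming conventions used earlier: the producing and consuming models agree on a minimum linkage set, the producer is free to publish more than that minimum, and the consumer takes only the items it declares it needs.

```python
# Hypothetical illustration of goals 1 and 5: a minimum agreed linkage set,
# with no obligation on the consumer to accept everything the producer offers.
# All item names and values are invented for the example.

MINIMUM_LINKAGE = {"emission_rate_g_s", "release_duration_hr"}   # agreed by both models

producer_output = {
    "emission_rate_g_s": 0.5,
    "release_duration_hr": 8.0,
    "stack_temperature_K": 310.0,   # extra detail the consumer never asked for
}

consumer_needs = {"emission_rate_g_s", "release_duration_hr"}

# Verify that the producer honors the agreed minimum ...
missing = MINIMUM_LINKAGE - set(producer_output)
assert not missing, f"producer is missing required items: {missing}"

# ... then let the consumer take only what it declares it needs.
consumer_inputs = {k: v for k, v in producer_output.items() if k in consumer_needs}
print(consumer_inputs)   # stack_temperature_K is simply ignored
```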
The world is traditionally compartmentalized with the flow of information
from compartment to compartment. The basic conceptualization of the problem
begins with a flow of information from beginning to end. If this happens to be
over the "life cycle" of 1) a person (e.g., conception to death),
2) a process (e.g., production life cycle), 3) an activity (e.g., a certain job
type), or 4) a compound (e.g., a traditional U.S. Environmental Protection
Agency risk assessment), a beginning and end can be defined, and, hence, a flow
diagram can be constructed to link the individual components. Each of the "life
cycles" listed above represents, in effect, a type of framework. To capture the
structural relationship among the principal components required in estimating
human exposure to chemicals, multiple frameworks, representing existing legacy
software, may need to be linked to provide the most scientifically defensible
picture of the impacts of non-agricultural chemical exposures. Because the real
world is compartmentalized, each of these life cycles is also
compartmentalized, meaning that the various life cycles can be linked to
address the demanding questions associated with the identification,
facilitation, and communication of generic research that will characterize
people's exposure to chemicals and raise the confidence and lower the
uncertainty of quantitative estimates of exposure associated with potential
human health effects from chemicals.
An abstract view of these various life-cycle "ribbons" is
illustrated in Figure 1.3.
To meet this requirement for current and future situations, the CCEF design
needs to have a System User Interface that is flexible enough to capture not
only the current linkages of models and databases but also other frameworks
(i.e., the ribbons in Figure 1.3) that provide the essence of relationships in general.
These linkages need to be visual so the analyst can immediately construct and
understand the problem. A visual interface is also very effective in conveying
simulation results to various stakeholders.
A software framework integrates the different components of a modeling
system to provide a consistent and efficient architecture to conduct scientific
research and analyses. The components of a framework are 1) a user interface,
2) module and model types, and 3) system databases. This section will discuss
the different aspects of these framework components.
The conceptualization of the problem is the conceptual site
model, which represents the analyst’s understanding of the problem, problem
components, spatial relationships, and flow of information among components.
The real world is very complicated and the traditional way to approximate the
real world is to simplify and compartmentalize it into more manageable
"pieces." These generally represent what we understand, tending to
group all of the things that we do not know or understand into a selected group
of parameters; hence, our conceptualization of the real world tends to be a function
of what we know and understand. The intent of the CCEF is to design a framework
that allows this conceptualization to change and grow more sophisticated as our
understanding of the real world grows, so we may more accurately estimate the
impacts associated with our anthropogenic activities.
The most effective interfaces, which are currently being used to help
construct a conceptual site model, use a drag & drop approach on a
workspace. With drag & drop, the user double-clicks on icons contained in an
icon palette, the selected icons appear on the workspace, the icons are
rearranged and connected according to the flow of information (i.e., lines with
arrows indicating the direction of data flow), and the user chooses the most
appropriate model from a list of models represented by each icon. This type of
approach is fairly common and is used by a number of successful frameworks
(e.g., Stella, FRAMES, and MMS).
With the Drag & Drop feature, the user will have an icon palette from
which to choose their modeling categories, which contain the choice of models.
The user will have the ability to expand the icon palette to include new types
of models, databases, system-supported software, or unique categories. The icon
palette will also be tiered, so intricate divisions in simulations can be
captured. For example, in transport and fate, surface water modeling can be
divided into rivers, lakes, reservoirs, estuaries, bays, oceans, etc. Exposure
route modeling can be divided into inhalation, oral, and dermal contact. The
icons and icon categories can be changed to meet whatever need is identified.
Typical icon categories for the standard EPA risk assessment paradigm are
presented in Table 1.3.1.
Micro-environmental modeling would have a separate Domain associated with it;
as such, it would have its own Classes, Groups, and SubGroups.
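To make the tiering concrete, the sketch below stores a Domain/Class/Group/SubGroup hierarchy for the icon palette as a nested structure. The category names echo the examples above; the data structure itself is only one possible implementation and is not part of the CCEF specification.

```python
# Hypothetical representation of a tiered icon palette.  The Domain / Class /
# Group / SubGroup entries echo the examples in the text; the structure is
# only one possible way to store such a hierarchy.

icon_palette = {
    "Chemical Life Cycle": {                          # Domain
        "Transport and Fate": {                       # Class
            "Surface Water":                          # Group
                ["River", "Lake", "Reservoir", "Estuary", "Bay", "Ocean"],  # SubGroups
        },
        "Exposure": {
            "Exposure Route": ["Inhalation", "Oral", "Dermal Contact"],
        },
    },
    "Micro-environmental Modeling": {                 # separate Domain with its own tiers
        # Its own Classes, Groups, and SubGroups would be defined here.
    },
}

# Expanding the palette means adding new entries at the appropriate tier.
icon_palette["Chemical Life Cycle"]["Transport and Fate"]["Air"] = ["Indoor", "Outdoor"]
```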
A model or code is the mathematical representation of a
process, device, or concept (Sippl and Sippl 1980) coded into a computer
language for execution on a computer, a definition consistent with commonly
held notions of what a model is intended to represent.
A module consists of a model description, module user interface (i.e.,
input), and the execution code (i.e., run model), which contains pre- and
post-processors for converting the model's input/output for recognition by the
model/system. The following figure illustrates these three basic
components of a module.
The code description specifies what the model is, whom to contact with
questions, what information it consumes and produces, the connection schemes
with other models that it allows, how the model fits into the system, and how
it should be perceived by the system.
Figure 1.3.2 shows
that the Module User Interface, input files, and executable are separate from
each other. By separating these, any Module User Interface associated
with a legacy code can remain unchanged, so the user will see no change from
expectations developed when they used the original code outside of the
framework. The input may come from the user, which is traditionally
associated with the Module User Interface, database(s), and upstream models
supplying boundary conditions. By distinctly separating the input from
the executable, sensitivity/uncertainty analyses on the input data are
possible. As such, batch files can be established for multiple
runs. Finally, the execution code is represented by the legacy code and its pre- and post-processors. The pre-processor
converts the input data that the system recognizes into a format that the model
recognizes, while the post-processor converts the model output into the
standardized system format.
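A simplified sketch of this execution pattern follows: a pre-processor converts system-format input into the legacy code's native format, the unmodified legacy executable is run, and a post-processor converts its output back into the standardized system format. The file formats, field names, and the executable name are placeholders and are not taken from any actual CCEF module.

```python
# Hypothetical sketch of a module's execution code: pre-process, run the
# unmodified legacy executable, post-process.  File formats and the executable
# name ("legacy_model.exe") are placeholders.
import json
import subprocess


def pre_process(system_input_file, legacy_input_file):
    """Convert system-format input (assumed JSON here) into the fixed-format
    text file the legacy code expects."""
    with open(system_input_file) as f:
        data = json.load(f)
    with open(legacy_input_file, "w") as f:
        f.write(f"{data['emission_rate_g_s']:10.4f}{data['release_duration_hr']:10.2f}\n")


def post_process(legacy_output_file, system_output_file):
    """Convert the legacy code's output back into the standardized system format."""
    with open(legacy_output_file) as f:
        concentration = float(f.readline().split()[0])
    with open(system_output_file, "w") as f:
        json.dump({"air_conc_ug_m3": concentration}, f)


def run_module(system_input_file, system_output_file):
    pre_process(system_input_file, "legacy.inp")
    subprocess.run(["legacy_model.exe", "legacy.inp", "legacy.out"], check=True)
    post_process("legacy.out", system_output_file)
```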
Specific databases would be available for user choice, as
multiple databases may be required to meet the needs of a model. Upstream data
are accessed by downstream connections in the conceptual site model (i.e., an
inherent priority is built into the system), and all of the linkage protocols
establish procedures to "access" executables and files containing
data. Because the conceptual site model is only a vehicle for communication,
the modules require information to complete datasets, which may be fulfilled by
multiple databases; as such, the icons are linked to databases to complete a
dataset, although the dataset may only be partially completed with the
available databases.
The user would have the option of listing modules by alphabetical order or
by conceptual site model sequencing. For example, under the Chemical Life Cycle
paradigm, the modules are sequenced and grouped by source, fate and transport,
and human exposure/intake/effects, which follows the intuitive conceptual site
model development structure. Also, when viewing the conceptual site model, the
user would have the option of hiding the connections between the Database icons
and Simulation Modules, to help reduce the number of connecting lines that would
be on the conceptual site model screen. The connections would still exist; the
user could simply choose not to display them.
This section provides a description of how the literature
search was conducted and its results, as well as a discussion of how the
specific compounds were selected for the exposure scenarios. This literature
review was not meant to be exhaustive but to be a general search of current
research and models in this area. The review was done mainly via the Internet,
and only publicly available items were included. The focus was on Frameworks,
Models, Algorithms, and Databases. Wherever possible, hyperlinks are provided,
along with a purpose and description of the object being reviewed.
This section provides a general review of models, databases, and algorithms
that are readily available via the Internet. The review included all types of
models, databases, and algorithms required for detailed micro-environmental
modeling. The following section (Section 2.2 - Exposure and Impact Review)
provides a detailed review of Models and Databases associated with Exposure and
Impact Components of the CCEF.
The list of Models and Databases considered
for the four scenarios is found below.
Table 2.1.3 Exposure and Impacts (including toxicity models)
Cohen Hubal, et al. (children's exposure and microenvironment data)
3MRA - Multimedia, Multi-pathway,
Multi-receptor Exposure and Risk Assessment:
A risk-based strategy was developed to generate constituent-specific exemption
levels for low-risk solid wastes as part of USEPA's Hazardous Waste
Identification Rule (HWIR). The 3MRA framework is designed to perform multiple
site-based risk assessments by considering the various types of land-based
waste management units as the sources of contaminants and computing the
exposures and the resulting national-scale statistical distributions of human
and ecological risks. The regional site-based risk assessments are conducted
with Ni × Nf realizations of exposure scenarios, where Ni is the number of
Monte-Carlo iterations and Nf is the number of sampled facilities associated
with that type of waste management unit.
In each Monte-Carlo realization, fate-and-transport and exposure-and-risk
analyses are conducted for the human and ecological receptors at a given site.
For a given chemical and a given waste management unit type, the process is
repeated for a number of concentrations in the waste of the chemical within the
possible chemical concentration range. The methodology is an extension of a
regional site-based approach, which accounts directly for correlations between
model parameters and utilizes data from actual waste sites across the United States.
A two-dimensional Monte-Carlo simulation procedure is utilized which allows
the separation of variability and uncertainty in the risk assessment and the
quantification of uncertainty associated with the estimates of protection
measures. The results of the Monte-Carlo simulations are compiled in the forms
of risk matrices that are queried to determine regulatory exemption levels that
meet specified protection levels with specified levels of confidence. Preliminary
risk calculations based on a selected pathway (groundwater) are discussed.
(Saleem, et al., 1999)
http://www.epa.gov/
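As a generic illustration of the two-dimensional Monte-Carlo idea described above, and not of the 3MRA implementation itself, the sketch below separates uncertainty (outer loop) from variability (inner loop); the distributions and parameters are invented for the example.

```python
# Generic illustration of two-dimensional Monte-Carlo simulation, separating
# uncertainty (outer loop) from variability (inner loop).  This is NOT the
# 3MRA implementation; the distributions and parameters are invented.
import random

N_UNCERTAINTY = 100    # outer realizations of uncertain model parameters
N_VARIABILITY = 500    # inner realizations of inter-individual variability

percentiles_95 = []
for _ in range(N_UNCERTAINTY):
    # Uncertain (imperfectly known) quantity, held fixed within the inner loop.
    emission_factor = random.lognormvariate(mu=0.0, sigma=0.5)

    doses = []
    for _ in range(N_VARIABILITY):
        # Variable (truly heterogeneous) quantity across the exposed population.
        intake_rate = random.lognormvariate(mu=-1.0, sigma=0.3)
        doses.append(emission_factor * intake_rate)

    doses.sort()
    percentiles_95.append(doses[int(0.95 * N_VARIABILITY)])

percentiles_95.sort()
# Confidence bounds on the 95th-percentile dose reflect parameter uncertainty.
print("95th-percentile dose, 5th-95th confidence bounds:",
      percentiles_95[int(0.05 * N_UNCERTAINTY)],
      percentiles_95[int(0.95 * N_UNCERTAINTY)])
```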
APAC, a DOE sponsored program, developed a very
detailed analysis of the technical strengths/weaknesses of computer models for
the following 6 areas of consequence assessment: chemical & radiological
source term generation, fire analysis (inside and outside), in-facility
transport, ex-facility chemical transport, ex-facility radiological transport,
and energetic events (e.g. explosions, deflagrations, etc.). Many of these
groups, including ex-facility chemical, evaluated and compared the codes
against test problems. Hundreds of chemical dispersion codes were screened and
23 were evaluated with 13 of the codes involved in test problem evaluation
(CALPUFF and INPUFF included).
http://www.wes.army.mil/el/arams/intro.html
CEAM:
The Environmental Protection Agency's (EPA) Center for Exposure Assessment
Modeling (CEAM), part of the Office of Research and Development, distributes
environmental simulation models and databases for urban and rural nonpoint
sources, conventional and toxic pollution of streams, lakes and estuaries,
tidal hydrodynamics, geochemical equilibrium, and aquatic food chain
bioaccumulation. The most relevant models for exposure work are:
http://www.epa.gov/ceampubl/
EPA National Exposure Research Laboratory, Atmospheric Science Modeling Division.
http://www.epa.gov/asmdnerl/modeling.html
DIAS - Dynamic Information Architecture System:
DIAS is a software framework intended to facilitate the holistic management of
information processes, including the construction of federation simulations
from component models according to a user-supplied context. These processes are
modeled in DIAS as interrelated actions caused by and affecting the collection
of diverse objects - which may range from abstract concepts represented by a
simulation, through data sets to real world objects, and to input from
simulators. The domain of DIAS is flexible, determined by the objects available
within DIAS and by the collection of models and other data processing
applications which have been gathered by users to address specific information
processing concerns.
http://www.dis.anl.gov/DEEM/DIAS/diaswp.html
FRAMES - Framework for Risk Analysis in Multimedia
Environmental Systems:
The FRAMES software was created with many features to aid the user in
conducting assessments. These features serve to enhance the user's interaction
with the underlying scientific models used in many assessments. Included in
these features is a pictorial depiction of the Conceptual Site Model, which
ensures that the user's idea of the contaminant analysis flow is transferred
correctly to the modeling scenario. The drag-and-drop environment
enables the user to quickly diagram a contaminant flow and, therefore,
communicate that image to others (i.e. stakeholders, clients, and other
assessment team members). Another timesaving feature is the use of online help.
FRAMES has the ability to encompass many different
environmental models.
A feature designed to aid in quick results assessment and document
preparation is the ability to graphically view data at multiple points
throughout the analysis. The FRAMES software has been modularized based on
media to allow users to view data before and after each media of interest. The
FRAMES platform is a key tool that can be used effectively to analyze
environmental contaminant scenarios, benchmark models, and communicate
scenarios and results to decision-makers, regulators, and the public. (Gelston,
et al., 1998)
http://mepas.pnl.gov:2080/earth/
GoldSim:
GoldSim is a powerful and flexible platform for visualizing and dynamically
simulating nearly any kind of physical, financial or organizational system.
This software offers the power, flexibility and usability necessary to
efficiently deal with the complex issues associated with real-world systems. In
this sense, real world systems a) involve multiple interacting components and
sub-systems, b) are uncertain, and c) include both continuous (gradual)
processes and discrete (sudden) events.
The unique capabilities of GoldSim make it an ideal simulation tool for a
wide variety of real-world applications such as Strategic Planning, Portfolio
Management, Program Planning, Risk Management, Supply Chain Management,
Environmental Modeling, and Engineered Systems Modeling.
http://www.goldsim.com/home/home.asp
HWIR - Hazardous Waste Identification Rule:
The Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES),
Hazardous Waste Identification Rule (HWIR) technology provides the ability to
conduct screening-level risk-based assessment of potential human and ecological
health risks resulting from long term (chronic) exposure to HWIR chemicals
released from land-based waste management units (WMUs) containing currently
listed waste streams. The FRAMES-HWIR model system consists of a series of
components within a system framework.
LifeLine ™, Risk*Assessment, Risk*Works:
Three software packages that model exposures to a defined person as they travel
through various microenvironments, accounting for 100% of a predetermined
timeframe. Developed by The LifeLine Group, LifeLine™ is used to determine the
aggregate and cumulative doses occurring from agricultural exposures in the
home and yard (emphasis on pesticides).
http://www.hrilifeline.org/
Modeling Environment for Total Risk studies is a framework for the
conceptual/theoretical formulation for exposure and dose assessments presented
by Georgopoulos and Lioy (1994), and built on EDMAS (Exposure and Dose Modeling
and Analysis System) developed at EOHSI.
http://eohsi.rutgers.edu/decm/EMA/
MIMS - Multimedia Integrated Modeling System:
The MIMS project is a problem solving software framework to support ecosystem
modeling and environmental health assessment.
http://www.epa.gov/asmdnerl/mims/
http://www.epa.gov/asmdnerl/models3/
MMS - Modular Modeling System:
A Modeling Framework for Multidisciplinary Research and Operational
Applications, MMS is developed to enable a user to selectively couple the most
appropriate process algorithms from applicable models to create an
"optimal" model for the desired application. Where existing
algorithms are not appropriate, new algorithms can be developed and easily
added to the system. This modular approach to model development and application
provides a flexible method for identifying the most appropriate modeling
approaches given a specific set of user needs and constraints.
http://wwwbrr.cr.usgs.gov/projects/SW_precip_runoff/mms/
Multi-domain Framework for Integrating
Models and Measurements of Multimedia Environmental Contaminants:
Multi-domain Framework for Integrating Models and Measurements of Multimedia
Environmental Contaminants is a LBNL work, EPA sponsored. The goal of this
project is to develop and apply models to provide a more complete picture of
both how human exposure comes about and how precisely it can be quantified for
a number of important pollutants. These efforts are being organized around two
research components: (1) an indoor/outdoor model for total human exposure to
particulate matter (PM); and (2) the development and evaluation of
source-to-dose models for persistent pollutants. (T.E. McKone
, W.J. Fisk, A.T. Hodgson, R.G. Sextro)
SEDSS - Sandia Environmental Decision Support System:
SEDSS is a tool for decision-makers that provides a basis for quantitative
analyses in support of qualitative questions such as "Is the monitor well
network adequate?", "How many samples are enough?", etc.
http://www.nwer.sandia.gov/sedss/smeth.html
SHEDs - Stochastic Human Exposure and Dose Simulation:
A physically based stochastic model, SHEDS has been developed to estimate
pesticide exposure and dose to children via dermal residue contact and
non-dietary ingestion. Time-location-activity data are sampled from national
survey results to generate a population of simulated children. For each child,
a sequence of 5-second object contact events is generated probabilistically for
every location-activity combination, yielding sequential micro-level activity
profiles. These profiles are combined with probability distributions for
surface concentrations and exposure factors (e.g., pesticide transfer and
removal efficiency, skin surface area contacted) to yield daily time profiles
for dermal loading, body burden, and eliminated pesticide metabolite.
Population estimates are then generated via
http://www.riskworld.com/Abstract/1999/SRAam99/ab9ab373.htm
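The event-based calculation described above can be illustrated generically as follows (this is not the SHEDS code; the distributions, transfer efficiencies, and contact areas are invented placeholders): for each sampled contact event, dermal loading accumulates as the product of surface concentration, transfer efficiency, and skin area contacted.

```python
# Generic illustration of an event-based dermal exposure calculation of the
# kind described for SHEDS.  This is not the SHEDS code; all numbers and
# distributions below are invented placeholders.
import random

EVENT_SECONDS = 5                     # duration of one object-contact event
SIM_HOURS = 2                         # simulated play period in one location

dermal_loading_ug = 0.0
for _ in range(SIM_HOURS * 3600 // EVENT_SECONDS):
    surface_conc_ug_cm2 = random.lognormvariate(-3.0, 0.8)   # residue on surface
    transfer_efficiency = random.uniform(0.01, 0.1)          # fraction transferred
    skin_area_cm2 = random.uniform(5.0, 20.0)                # area contacted per event
    dermal_loading_ug += surface_conc_ug_cm2 * transfer_efficiency * skin_area_cm2

print(f"Accumulated dermal loading over {SIM_HOURS} h: {dermal_loading_ug:.1f} ug")
```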
TEAHRS:
The aim of the TEAHRS project is to develop and assess methodologies to
determine the acute toxicity of inhalation of fluctuating concentrations of
hazardous substances as a contribution to the improvement of quantitative risk
assessment.
http://www.risoe.dk/rispubl/SYS/ris-r-1208.htm
THERdbASE - Total Human Exposure Risk database and
Advanced Simulation Environment:
The THERdbASE is an integrated database and analytical/modeling software system
for use in exposure assessment calculations and studies. THERdbASE was
developed to provide a collection of frequently used databases and models
related to human exposure assessment all in a single software system. Models
are conveniently linked to databases, or queried subsets of data, on human
activity patterns, U.S. Census data, and related human exposure databases so
that an assessment or analysis can be conveniently run.
http://www.epa.gov/nerlesd1/therd/therd-home.htm
TNRCC:
http://www.tnrcc.state.tx.us/permitting/trrp.htm
TRIM - Total Risk Integrated Methodology:
EPA project to develop models and data for assessing the multimedia residual
health and ecological risk from pollutants released to air sheds.
http://www.pestlaw.com/calendar/1998/EPA-19980407A.html
ADORA is a unique source characterization and
dispersion model for extremely hazardous chemicals in the atmosphere. Existing
atmospheric dispersion models do not treat the chemical reactions and
thermodynamics satisfactorily. In ADORA, the source characterization for
various release scenarios, reacting puff spreading, lift-off and rise, and
transient dispersion under practical meteorological conditions are modeled
based on engineering principles. The complex interactions of buoyant/heavy
cloud turbulent dynamics, multi-phase thermodynamics, and multi-step chemical
reactions for various pollutants are included. The accurate treatment of these
processes allows the application of the model to realistic release scenarios
without being too conservative.
CALMET:
The CALPUFF Modeling System is composed of three basic components: CALMET,
CALPUFF, and CALPOST. CALMET includes a diagnostic wind field model containing
objective analyses and parameterized treatments of slope flows, valley flows, terrain blocking effects, kinematic terrain effects,
lake and sea breeze circulations, and a divergence minimization procedure. An
energy-balance scheme is used to compute sensible and latent heat fluxes and
turbulence parameters over land surfaces. A profile method is used over water.
CALMET contains interfaces to prognostic meteorological models such as the Penn
State/NCAR Mesoscale Model. CALMET was modified to enable use of vertical
profiles of wind and temperature as characterized by the MM4-FDDA (Mesoscale
Model-4 with Four-Dimensional Data Assimilation) meteorological model.
CALPUFF:
CALPUFF is a multi-layer, multi-species non-steady-state puff dispersion
model that simulates the effects of time- and space-varying meteorological
conditions on pollutant transport, transformation, and removal. CALPUFF is
intended for use on scales from tens of meters from a source to hundreds of
kilometers. It includes algorithms for near-field effects such as building
downwash, transitional buoyant and momentum plume rise, partial plume
penetration, subgrid scale terrain and coastal interactions effects, and
terrain impingement, as well as longer-range effects such as pollutant removal
due to wet scavenging and dry deposition, chemical transformation, vertical
wind shear, over-water transport, plume fumigation, and visibility effects of
particulate matter concentrations.
CALPUFF is appropriate for long-range transport (source-receptor distances
of 50 km to 200 km) of emissions from point, volume, area, and line sources. The
meteorological input data should be fully characterized with
time- and space-varying three-dimensional wind and
meteorological conditions using CALMET. The meteorological fields used by
CALPUFF are produced by the CALMET meteorological model.
The CALPUFF modeling system has 3 main components: CALMET (a diagnostic 3-D
meteorological model), CALPUFF (the transport and dispersion model), and
CALPOST (a postprocessing package). Each of these programs has a graphical user
interface (GUI). In addition to these components, there are several other
processors that may be used to prepare geophysical (land use and terrain) data
in many standard formats, meteorological data (surface, upper air,
precipitation, and buoy data), and interfaces to other models such as the Penn
State/NCAR Mesoscale Model (MM5).
CalTOX - Soil Model:
CalTOX is an innovative spreadsheet model that relates the concentration of an
organic chemical in soil to the risk of an adverse health effect for a person living
or working on or near the contaminated soil. It computes site-specific
health-based soil clean-up concentrations given target risk levels or human
health risks given soil concentrations at the site.
http://www.ntis.gov/fcpc/cpn6462.htm
CARES™:
Residential exposure assessment module currently under development by the
American Crop Protection Association and infoscientific.com, Inc. with input
from a variety of stakeholders. Cumulative and Aggregate Risk Evaluation System
(CARES™) will be developed and deployed through a cooperative effort of
stakeholders, including government, industry, and environmental groups. CARES™
will utilize currently accepted and other relevant databases to evaluate
potential risk from dietary, drinking water, and residential sources. Risks
will be calculated deterministically for Tier 1 screening and probabilistically
using Monte-Carlo simulation of individuals for higher tier analyses. CARES
will allow users to estimate doses and risks from acute, short term, intermediate
duration, and lifetime exposures.
http://infoscientific.com/files/concept.pdf
http://alphacares.org/index.htm
CFAST:
CFAST is a zone model capable of predicting the environment in a
multi-compartment structure subjected to a fire. It calculates the
time-evolving distribution of smoke and fire gases and the temperature
throughout a building during a user-specified fire. CFAST is the result of a merger
of ideas that came out of the FAST and the CCFM.VENTS development projects at
NIST. The organization of the CFAST suite of programs is thus a combination of
the two models. Details of the models including algorithm structure, the
physics-based equations, assumptions, and variables descriptions are provided
in such a way as to permit modification and tailoring of the program to indoor
chemical exposure assessment. With this level of detail, researchers not
intimately involved in the development of CFAST should be able to add to the
model in a straightforward manner. Independent or cooperative efforts to
enhance the capabilities of the model are encouraged. Model developers can use
this version of the model for open or proprietary additions to the model or as
the basis for new models. CFAST is a member of a class of models referred to as
zone or finite element models. This means that each room is divided into a
small number of volumes (called layers), each of which is assumed to be
internally uniform. CFAST is based on solving a set of equations that predict
state variables (pressure, temperature and so on) based on the enthalpy and
mass flux over small increments of time. These equations are derived from the
conservation equations for energy, mass, and momentum, and the ideal gas law.
Although it may not be all-inclusive, CFAST has demonstrated the ability to
make reasonably good predictions. Also, it has been subject to close scrutiny
to ensure its correctness. Thus it forms a prototype for what constitutes a
reasonable approach to modeling fire growth and the spread of smoke and toxic
gases.
http://fast.nist.gov/
ChemScreen:
ChemScreen Risk Assessment Software is based on the US Environmental Protection
Agency's RMP Offsite Consequence Analysis Guidance document, dated
Cleek and Bunge:
Cleek and Bunge (1993) developed a model to estimate dermal absorption from
infinite-dose aqueous solutions; it is a very simplified mathematical model with some
QSAR capabilities. (White Paper HHEA-3)
COMIS:
COMIS models the air flow and contaminant distributions in buildings. The
program can simulate several key components influencing air flow: cracks,
ducts, duct fittings, fans, flow controllers, vertical large openings (windows
and/or doors), kitchen hoods, passive stacks, and "user-defined
components."
COMIS allows the user to define schedules describing changes in the indoor
temperature distribution, fan operation, pollutant concentration in the zones,
pollutant sources and sinks, opening of windows and doors, and the weather
data. The flexible time step implemented in COMIS enables the modeling of
events independent of the frequency with which the weather data are provided.
The COMIS air flow calculation is based on the assumption that indoor air
flows reach steady-state at each time step. The contaminant transport is based
on a dynamic model and has its own time step, based on the time constant of the
most critical zone. The two models are coupled. Results for air flows and
contaminant levels are reported in terms of tables by COMIS and in graphical
form by some of the user-interfaces.
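As a generic, single-zone illustration of the kind of dynamic contaminant mass balance such multi-zone programs solve for each zone (this is not COMIS; the volume, air-change rate, and source strength are invented), a well-mixed zone can be stepped forward in time as follows.

```python
# Generic single-zone, well-mixed contaminant mass balance of the kind a
# multi-zone indoor air program solves for each zone.  This is not COMIS;
# the volume, air-change rate, and source strength are invented.

VOLUME_M3 = 50.0         # zone volume
ACH = 0.5                # air changes per hour (ventilation rate)
SOURCE_UG_HR = 1000.0    # constant indoor emission source
C_OUTDOOR = 0.0          # outdoor concentration, ug/m3
DT_HR = 0.01             # time step, hours

conc = 0.0               # indoor concentration, ug/m3
for _ in range(int(8 / DT_HR)):                    # simulate an 8-hour period
    # dC/dt = S/V + ACH * (C_outdoor - C)
    conc += (SOURCE_UG_HR / VOLUME_M3 + ACH * (C_OUTDOOR - conc)) * DT_HR

print(f"Indoor concentration after 8 h: {conc:.1f} ug/m3")
print(f"Steady-state concentration:     {SOURCE_UG_HR / (ACH * VOLUME_M3):.1f} ug/m3")
```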
CONSEXPO - CONSumer EXPOsure Model:
CONSEXPO-3 is a multi-route, single-chemical modeling tool for assessing human
exposure to chemicals emitted from consumer products. It focuses on
non-professional indoor use of consumer products. The exposure routes include
inhalation, dermal, and oral. Because of the wide range of exposures associated
with consumer products, CONSEXPO defines five dermal loading scenarios, plus
one scenario to model dermal exposure to airborne compounds. The five dermal
loading scenarios are: 1) Fixed volume scenario assumes that the product
is well mixed, 2) Diffusion in product scenario assumes that the product
is not well mixed and transport of a chemical compound takes place by means of
diffusion, 3) Migration to skin scenario assumes that dermal exposure is
a result of migration of product to the skin, 4 and 5) Transfer coefficient
scenario and contact rate scenario are similar to the dermal models
used in the EPA’s residential SOPs.
http://www.epa.gov/chemrtk/revmodlr.pdf
An Evaluation of the Potential for Use of Existing
Exposure Software (or Software Currently Under Development) in a Tiered
Approach to the Assessment of Exposures and Risks to Children.
The LifeLine Development Team, 2001. Phase 1 Report:
Findings from Literature Search and Review of Modeling Projects Currently
Available or Under Development. Prepared for the American
Chemistry Council, Comprehensive Chemical Exposure Framework (CCEF) Project
(Agreement #1388).
CONTAMW:
CONTAMW is a multi-zone indoor air quality and ventilation analysis computer
program designed to help predict airflows, contaminant air concentrations, and
personal exposure for buildings.
http://www.bfrl.nist.gov/IAQanalysis/CONTAMWdesc.htm
CPIEM:
Developed by GOMET for the California Air Resources Board, CPIEM uses Monte
Carlo Simulation to determine distributions of daily time-integrated
concentrations of inhalation exposures (and doses) for Californians.
DEPM - Dietary Exposure Potential Model (DEPM):
DEPM is a model and database system that correlates extant food information in
a format for estimating dietary exposure. The resident database system includes
results from government-sponsored food intake surveys and chemical residue
monitoring programs. A special feature of the DEPM is the use of recipes
developed specifically for exposure analysis that link consumption survey data
for prepared foods to the chemical residue information, which is normally
reported for raw food ingredients. Consumption in the model is based on 11 food
groups containing approximately 800 exposure core food types, established from
over 6500 common food items.
The summary databases are aggregated in a fashion to allow analyst selection
of demographic factors such as age/sex groups, geographical regions, ethnic groups,
and economic status. Daily intake is estimated by the model for over 300
pesticides and environmental contaminants. In addition, contributions to total
exposure from exposure core food groups and individual exposure core foods can
also be estimated.
http://www.epa.gov/nerlcwww/depm.htm
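The recipe-based aggregation described above amounts to summing, over the foods consumed, the product of consumption and residue concentration. A generic sketch is given below; it is not the DEPM itself, and the foods, residues, and consumption values are invented.

```python
# Generic sketch of a daily dietary intake calculation of the kind DEPM
# performs: intake = sum over foods of (consumption x residue) / body weight.
# This is not the DEPM; the foods, residues, and consumption values are invented.

daily_consumption_g = {"apple, raw": 150.0, "bread, white": 80.0, "milk, whole": 250.0}
residue_ug_per_g = {"apple, raw": 0.02, "bread, white": 0.005, "milk, whole": 0.001}
BODY_WEIGHT_KG = 70.0

intake_ug = sum(daily_consumption_g[food] * residue_ug_per_g[food]
                for food in daily_consumption_g)
print(f"Daily dietary intake: {intake_ug / BODY_WEIGHT_KG:.4f} ug/kg-day")
```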
E-FAST - Exposure & Fate Assessment Screening
Tool:
E-FAST provides screening-level estimates of the concentrations of chemicals
released to air, surface water, landfills, and from consumer products.
Estimates provided are potential inhalation, dermal and ingestion dose rates
resulting from these releases. Modeled estimates of concentrations and doses
are designed to reasonably overestimate exposures for use in screening level
assessment.
E-FAST calculates appropriate human potential dose rates for a wide variety
of chemical exposure routes and estimates the number
of days per year that an aquatic ecotoxicological concern concentration will be
exceeded for organisms in the water column.
EPI Suite:
The EPI (Estimation Program Interface) Suite™ is a Windows®-based
suite of physical/chemical property and environmental fate estimation models
developed by the EPA's Office of Pollution Prevention and Toxics and Syracuse
Research Corporation (SRC). These properties are the building blocks of
exposure assessment. EPI Suite™ uses a single input to run the
following estimation models: KOWWIN™, AOPWIN™, HENRYWIN™, MPBPWIN™, BIOWIN™,
PCKOCWIN™, WSKOWWIN™, BCFWIN™, HYDROWIN™, STPWIN™, WVOLWIN™, and
LEV3EPI™. This suite of models produces various fate and transport estimates,
such as partition coefficients, rate of volatilization, etc.
EXAMS - The Exposure Analysis Modeling System:
EXAMS simulates an aquatic ecosystem tracing the path and behavior of a toxic
pollutant. It has a database of toxic substances and a command-driven
interface, which allows for the definition of new substances and for
modification of the ecosystem definition.
Each water body may be constituted of up to 32 different segments, for each
of which the balance of up to 28 different substances may be simulated. The
basic phenomena taken into consideration are: accumulation, chemical and
biological transformation, and transport. Environmental conditions may be
constant (in the short or the long term) or varying monthly. It may be used to
conduct rapid evaluations and error analyses of the probable aquatic fate of
synthetic organic chemicals.
EXAMS combines chemical loadings, transport, and
transformation into a set of differential equations using the law of
conservation of mass as an accounting principle. It accounts for all the
chemical mass entering and leaving a system as the algebraic sum of external
loadings, transport processes that export the compound from the system, and
transformation processes within the system that convert the chemical to
daughter products. The program produces output tables and simple graphics
describing chemical exposure, fate, and persistence.
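The mass-balance accounting described above can be illustrated with a single well-mixed aquatic segment (a generic sketch, not the EXAMS code; the loading, outflow, and rate constants are invented): at steady state, the concentration is the loading divided by the sum of export and transformation losses.

```python
# Generic one-segment illustration of the mass-balance accounting described
# for EXAMS.  This is not EXAMS; the loading, flows, and rate constants are invented.
import math

VOLUME_M3 = 1.0e5          # segment volume
LOAD_G_DAY = 500.0         # external chemical loading
OUTFLOW_M3_DAY = 2.0e4     # hydraulic export from the segment
K_TRANSFORM = 0.05         # first-order transformation rate, 1/day

# Total first-order loss rate = export + transformation.
k_total = OUTFLOW_M3_DAY / VOLUME_M3 + K_TRANSFORM          # 1/day

c_steady_g_m3 = LOAD_G_DAY / (VOLUME_M3 * k_total)          # steady-state concentration
persistence_half_life = math.log(2) / k_total               # days, after loading stops

print(f"Steady-state concentration: {c_steady_g_m3 * 1000:.1f} mg/m3")
print(f"Persistence half-life:      {persistence_half_life:.1f} days")
```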
FIRIN/FIRAC:
FIRAC estimates radioactive and non-radioactive source terms and predicts
fire-induced flows and thermal and material transport within the facilities. It
is applicable to any facility with or without ventilation systems. It is a
fast-running code with a user-friendly interface and includes source term
models for fires.
FIRAC is one of a family of codes designed to provide improved safety
analysis methods for the nuclear industry. The basic material transport
capability of FIRAC includes estimates of entrainment, convection, deposition,
and filtration of material. The interrelated effects of filter plugging, heat
transfer, and gas dynamics are also simulated. A ventilation system model
includes elements such as filters, dampers, ducts, and blowers connected at
nodal points to form networks. A one-dimensional, lumped-parameter zone-type
compartment model is incorporated to simulate flow-induced transients within a
facility. No spatial distribution of parameters is considered in this approach,
but an effect of spatial distribution can be approximated by noding. (Gregory,
W.S., et al.,1992)
FIRAC is designed to estimate radioactive and non-radioactive source terms
and predict fire-induced flows and thermal and material transport within the
facilities. Particular focus is on transport through the ventilation system of
these facilities. FIRAC includes a fire compartment module based on the FIRIN
computer code, which was developed at Pacific Northwest National Laboratory
(PNNL). The FIRIN module calculates fuel mass loss rates and energy generation
rates within the fire compartment. It can also calculate the generation rate
and size distribution of radioactive particles that become airborne as a result
of a fire in a nuclear facility. More recently, a second fire module, based on
the CFAST computer code, was added to FIRAC. CFAST was developed by the
National Institute of Standards and Technology (NIST) to model fire growth and
smoke transport in multi-compartment structures. The new combined code is
called FIRAC2.
GASFLOW:
GASFLOW is a computational fluid dynamics model applied to solving internal and
external engineering type flows. The code accounts for turbulent mixing,
combustion, and chemical kinetics of gases and aerosol species, as well as heat
transfer and condensation to walls and structures. It solves the compressible
form of the Navier-Stokes conservation equations using the ICED-ALE numerical
scheme. It is basically a multi-dimensional (3-D) finite volume field code with
the capability to characterize low-speed, buoyancy driven, diffusion-dominated,
or chemically reactive and non-reactive flows within a compartment network. It
can be used to analyze spatially refined flow phenomena such as circulation
patterns; gas stratification; and chemical distribution and kinetics. The code
is written in FORTRAN 90 and is configured for UNIX and LINUX workstations.
A technical assessment of building a virtual building model from a core CFD
building interior model (such as GASFLOW) is given at the following URL http://eande.lbl.gov/BTP/papers/43006.pdf
GEMS - Geographical Exposure Modeling System:
GEMS is a modernization of OPPT’s older Graphical Exposure Modeling System and
PCGEMS tools. GEMS brings together in one system
several EPA environmental fate and transport models and some of the
environmental data needed to run them. GEMS includes
models and data for ambient air, surface water, soil, and ground water and
makes the models much easier to use than their stand-alone counterparts. GEMS
will have graphics and Geographical Information System (GIS) capabilities for
displaying environmental modeling results.
GEMS will have interactive menus to guide the user in selecting models, selecting
and organizing data to be used as input to model runs, executing model runs,
and presenting model outputs. The menus will also provide user help. The new
system will be modular in design so that EPA can easily add other models to the
system in the future. GEMS will also have the capability of retrieving some
data from EPA Oracle databases, such as the TRIS data in the EPA Envirofacts
Data Warehouse.
GENII:
The GENII computer code was developed at Pacific Northwest National Laboratory
(PNNL) to incorporate the internal dosimetry models recommended by the
International Commission on Radiological Protection (ICRP) into updated
versions of existing environmental pathway analysis models. The resulting
second generation of environmental dosimetry computer codes is compiled in the
Hanford Environmental Dosimetry System (Generation II or GENII). The GENII
system was developed to provide a state-of-the-art, technically peer-reviewed,
documented set of programs for calculating radiation doses from radionuclides
released to the environment. Although the codes were developed for use at
HARVARD VI:
The WPI/Fire Codes are derived from the Harvard Computer Fire Codes. Harvard VI
is a control volume code designed for fire analysis, and can be adapted to
chemical transport. It has many of the same modeling capabilities as CFAST and
can be extended to multi-compartment facilities. Technical reference: Gahm,
J.B., 1983. This info was obtained from www.wpi.edu.
The website contains more information on the latest version of this model.
HGSYSTEM:
HGSYSTEM is a suite of programs for assessing dispersion of vapor from gas,
liquid, or 2-phase releases, including multi-component mixtures. HGSYSTEM was
first assembled to model the release of Hydrogen Fluoride (HF) and ideal gases
(Version 1.0), and then extended to include multicomponent mixtures (version
3.0). HGSYSTEM has been developed by Shell Research Ltd with the support and
sponsorship of industry groups. HGSYSTEM/UF6 is a collaborative development of
HGSYSTEM by Lockheed Martin Energy Systems and Earth Technology, sponsored by
the United States Department of Energy (DOE) to predict the dispersion of the
hydrolysis products of Uranium Hexafluoride. HGSYSTEM possesses an advanced
near-field transport model with a spatial resolution of less than 1 meter. It has
modeling options for calculating time-dependent chemical source terms for
pressurized liquids and gases, buoyancy-driven flows, and evaporating pools of
single or multi-component releases. It is written in FORTRAN 77 and is highly
modularized which allows for program modification to address special needs
analysis.
http://www.hgsystem.com/hgweb.html
HPV - HPV Exposure Assessment Screening Tool - HPVScreen:
Physical-chemical properties and fate. This component of HPVScreen will
define the physical-chemical properties the user will need for use with the
models in the other components. This component will also provide the user with
the ability to estimate removal of a chemical by wastewater treatment.
Models for screening-level exposure estimates.
This component provides modeled screening-level estimates of the concentrations
and potential doses of chemicals released to air, surface water, landfills and
from consumer products. The estimates are designed to be conservative (i.e. to
be on the high end of exposure or even to overestimate exposure).
Multimedia modeling programs. This component of HPVScreen provides the user
with easy access to the EQC and Level III fugacity programs developed by the
Canadian Environmental Modelling Centre at Trent University.
IA-NBC-HMAS - Indoor Air-Nuclear, Biological and
Chemical-Health Modeling and Assessment System:
The Indoor Air Nuclear, Biological and Chemical Health Modeling and Assessment
System (HMAS for short) was developed as a health impacts analysis tool for
addressing indoor air contamination concerns. HMAS is a functional health
modeling and assessment system that can be easily tailored to meet specific
building analysis needs. IA-NBC-HMAS is designed to focus on indoor air in
complex buildings. Although it was originally designed for counterterrorism
and chemical/biological agent attack scenarios, it is equally applicable to
modeling and assessing indoor pollution from natural sources (e.g., building
construction materials, process releases, etc.). It is designed to be coupled
with both micro- and macro-scale ambient models so that it can address both
localized indoor concerns and the full picture of ambient and indoor effects
from releases either inside or outside the building that impact indoor air
quality.
IAQX - Simulation Tool Kit for Indoor Air Quality and
Inhalation EXposure:
IAQX includes a variety of stand-alone simulation programs, including a
general-purpose simulation program, VOC emissions from solvent-based indoor
coating products, small-scale solvent spills, VOC emissions from
diffusion-controlled homogeneous slabs, and indoor particulate matter. IAQX
source models can address emission of chemicals from consumer products,
building materials, indoor furnishings, and appliances, including dry sources.
The LifeLine Development Team, 2001. An Evaluation of the Potential for Use of
Existing Exposure Software (or Software Currently Under Development) in a
Tiered Approach to the Assessment of Exposures and Risks to Children. Phase 1
Report: Findings from Literature Search and Review of Modeling Projects
Currently Available or Under Development. Prepared for The American Chemistry
Council, Comprehensive Chemical Exposure Framework (CCEF) Project (Agreement
#1388). http://www.epa.gov/chemrtk/revmodlr.pdf
INPUFF:
INPUFF is a Gaussian puff model, developed by the U.S. EPA. The model is
intended for simulating the atmospheric dispersion of neutrally buoyant or
buoyant chemical releases. The model allows for a vertically oriented stack
(point source) and a release duration that may be either finite or continuous.
INPUFF can account for plume rise due to both buoyancy and momentum. In
addition, the model can include the effects of stack-tip downwash.
INPUFF allows the user to specify the location and dimension of a receptor
grid where concentration estimates will be calculated downwind of the release.
To estimate concentrations, the model uses the Pasquill-Gifford dispersion
coefficients with modifications to account for initial dispersion and
buoyancy-induced dispersion (if applicable) and a user-specified averaging
time.
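As a rough illustration of the Gaussian puff arithmetic that codes of this kind
perform (a generic textbook form, not INPUFF's actual implementation; the sigma
values and the simple ground-reflection term below are assumptions), the
following Python sketch evaluates the contribution of a single puff at a
receptor:

import math

def puff_concentration(q, xr, yr, zr, xc, yc, zc, sx, sy, sz):
    """Concentration (g/m^3) at receptor (xr, yr, zr) from one Gaussian puff of
    mass q (g) centered at (xc, yc, zc) with dispersion sigmas sx, sy, sz (m).
    A simple image-source term approximates reflection off the ground."""
    norm = q / ((2.0 * math.pi) ** 1.5 * sx * sy * sz)
    gx = math.exp(-((xr - xc) ** 2) / (2.0 * sx ** 2))
    gy = math.exp(-((yr - yc) ** 2) / (2.0 * sy ** 2))
    gz = (math.exp(-((zr - zc) ** 2) / (2.0 * sz ** 2))
          + math.exp(-((zr + zc) ** 2) / (2.0 * sz ** 2)))  # ground reflection
    return norm * gx * gy * gz

# Example: a 100 g puff that has drifted 500 m downwind of a 10 m release
# height, evaluated at a breathing-zone receptor 1.5 m above ground.
print(puff_concentration(q=100.0, xr=500.0, yr=0.0, zr=1.5,
                         xc=500.0, yc=0.0, zc=10.0,
                         sx=35.0, sy=35.0, sz=18.0))

In a full puff model, many such puffs are released sequentially, advected with
the wind, grown according to the dispersion coefficients, and summed at each
receptor over the averaging time.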
MCCEM - Multi-Chamber Concentration and Exposure Model:
MCCEM estimates average and peak indoor air concentrations of chemicals
released from products or materials in houses, apartments, townhouses, or other
residences. It also estimates inhalation exposures to these chemicals,
calculated as single day doses, chronic average daily doses, or lifetime
average daily doses.
MCCEM is a user-friendly software product that estimates indoor air
concentrations using a mass balance approach. It maintains a library of
residences containing data on zone volumes, interzonal air flows, and
whole-house air exchange rates, and it allows users to tailor an analysis to a
particular location and to model air concentrations in as many as four zones of
a given residence. It estimates exposure for periods ranging from 1 hour to 1
year, develops seasonal or annual exposure profiles using a long-term model,
and offers several options for dealing with ‘sinks’. A sink is a
material (e.g., carpeting, wallboard) that can absorb chemicals from the air;
the absorption can be either reversible or irreversible.
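A minimal sketch of the mass-balance bookkeeping that multi-zone indoor models
of this kind perform is shown below. It is illustrative only: the zone volumes,
air flows, emission rate, and first-order sink term are assumed values, not
MCCEM library data, and MCCEM's actual numerical scheme is not reproduced.

# Two-zone well-mixed mass balance with a source in zone 1 and a simple
# first-order sink; forward-Euler time stepping. All values are illustrative.
V = [30.0, 270.0]          # zone volumes, m^3 (one room and the rest of the house)
Q12, Q21 = 50.0, 50.0      # interzonal air flows, m^3/h
Q_out = [10.0, 90.0]       # flow from each zone to outdoors, m^3/h
Q_in = [10.0, 90.0]        # make-up air from outdoors to each zone, m^3/h
E = [500.0, 0.0]           # emission rate into each zone, mg/h
k_sink = [0.1, 0.1]        # first-order sink rate in each zone, 1/h
C_out = 0.0                # outdoor concentration, mg/m^3

C = [0.0, 0.0]             # indoor concentrations, mg/m^3
dt = 0.01                  # time step, h
for _ in range(int(24 / dt)):          # simulate one day
    dC1 = (E[0] + Q_in[0] * C_out + Q21 * C[1]
           - (Q12 + Q_out[0]) * C[0]) / V[0] - k_sink[0] * C[0]
    dC2 = (E[1] + Q_in[1] * C_out + Q12 * C[0]
           - (Q21 + Q_out[1]) * C[1]) / V[1] - k_sink[1] * C[1]
    C = [C[0] + dC1 * dt, C[1] + dC2 * dt]

print("Concentrations after 24 h (mg/m^3):", [round(c, 3) for c in C])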
MEPAS - Multimedia Environmental Pollutant Assessment
System:
The Multimedia Environmental Pollutant Assessment System considers chronic exposure
and human health risks resulting from environmental emissions. Physics-based
models of contaminant processes in the air, groundwater, and surface water are
integrated in a system that considers both chemical and radioactive potential
impacts.
The Multimedia Environmental Pollutant Assessment System (MEPAS) software
utilizes sophisticated modeling codes to quickly and easily assess risks from
activities that could impact human health, such as remediating hazardous waste
sites.
The MEPAS software provides physics-based modeling codes for environmental
risk assessment. It quickly integrates results from separate models of
contaminant behavior in various media (air, soil, ground water, surface water)
and for different scenarios, turning a task that could take weeks (or might
never be attempted) into a few hours' work.
MEPAS integrates and evaluates transport and exposure pathways for chemical
and radioactive releases according to their potential human health impacts
(multimedia in this context refers to multiple environmental transport media).
MEPAS takes the nontraditional approach of combining all major exposure
pathways into a multimedia computational tool for public health impact. MEPAS
is a physics-based approach that couples contaminant release, migration and
fate for environmental media (groundwater, surface water, air) with exposure
routes (inhalation, ingestion, dermal contact, external dose) and risk/health
consequences for radiological and non-radiological carcinogens and
non-carcinogens.
MMSOILS - The Multimedia Contaminant Fate, Transport,
and Exposure Model:
MMSOILS estimates the human exposure and health risk associated with releases
of contamination from hazardous waste sites. The methodology consists of a
multimedia model that addresses the transport of a chemical in groundwater,
surface water, soil erosion, the atmosphere, and accumulation in the food
chain. The human exposure pathways considered in the methodology include: soil
ingestion, air inhalation of volatiles and particulates, dermal contact,
ingestion of drinking water, consumption of fish, consumption of plants grown
in contaminated soil, and consumption of animals grazing on contaminated pasture.
For multimedia exposures, the methodology provides estimates of human exposure
through individual pathways and combined exposure through all pathways
considered. The risk associated with the total exposure dose is calculated
based on chemical-specific toxicity data.
The methodology is intended for use as a screening tool. It is critical that
the results are interpreted in the appropriate framework. The intended use of
the exposure assessment tool is for screening and relative comparison of
different waste sites, remediation activities, and hazard evaluation. The
methodology can be used to provide an estimate of health risks for a specific
site. Since the uncertainty of the estimated risk may be quite large (depending
on the site characteristics and available data), MMSOILS addresses these
uncertainties via Monte-Carlo analysis.
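The final step described above, converting a total exposure dose into a risk
estimate with chemical-specific toxicity data, generally reduces to the
standard screening equations sketched below (a generic formulation with
placeholder toxicity values, not MMSOILS code):

# Generic screening-level risk calculation from a total dose estimate.
# The slope factor and reference dose below are placeholders, not values
# taken from MMSOILS or any toxicity database.
def cancer_risk(lifetime_average_daily_dose, slope_factor):
    """Incremental lifetime cancer risk = LADD (mg/kg-day) x CSF ((mg/kg-day)^-1)."""
    return lifetime_average_daily_dose * slope_factor

def hazard_quotient(average_daily_dose, reference_dose):
    """Non-cancer hazard quotient = ADD (mg/kg-day) / RfD (mg/kg-day)."""
    return average_daily_dose / reference_dose

total_dose = 2.0e-4                                       # mg/kg-day, summed over pathways
print("Cancer risk:", cancer_risk(total_dose, slope_factor=0.05))
print("Hazard quotient:", hazard_quotient(total_dose, reference_dose=0.01))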
Models-3/CMAQ - Models-3 and Community Multi-scale
Air Quality:
The CMAQ modeling system represents the air quality component of EPA's MIMS
framework. The Models-3 release contains three types of environmental modeling
systems: meteorological, emission, and chemistry-transport. It also includes a
visualization and analysis system.
http://www.epa.gov/asmdnerl/models3/index.html
OBODM - Open Burn/Open Detonation Model:
OBODM is intended for use in evaluating the potential air quality impacts of
the open burning and detonation (OB/OD) of obsolete munitions and solid
propellants. OBODM uses cloud/plume rise, dispersion, and deposition algorithms
taken from existing models for instantaneous and quasi-continuous sources to
predict the downwind transport and dispersion of pollutants released by OB/OD
operations.
pNEM:
pNEM is an EPA exposure model for particulate matter that has been adapted and
further developed at the University of British Columbia (UBC).
http://hajek.stat.ubc.ca/projects/pnem.html
PRESTO - Prediction of Radiological Effects Due to
Shallow Trench Operations:
PRESTO is a computer model for evaluating radiation exposure from contaminated
soil layers, including waste disposal, soil cleanup, agricultural land
application, and land reclamation. The models in PRESTO are designed to calculate
the maximum annual committed effective dose to a critical population group and
cumulative fatal health effects and genetic effects to the general population
in several exposure scenarios.
The models simulate the transport of radionuclides in air, surface water,
and groundwater pathways, and evaluate exposures through ingestion, inhalation,
immersion and external exposure pathways.
PROMISE - PRObabilistic Methodology for Improving
Solvent Exposure Assessment:
PROMISE© Version 7 is an ongoing modeling project by Silken, Inc., for the
Solvents Council of the American Chemistry Council. The software is designed to
evaluate exposures and doses from single and multiple uses of products that
contain volatile solvents (e.g., adhesives, paints, floor cleaners, etc.).
Appropriate activities for modeling include the use of large volumes of
solvents (open drums or open tanks), use of an applied product, or spills.
The PROMISE© model can be used to investigate products used in the workplace
and the home. PROMISE© can calculate multi-route exposures from dermal,
inhalation (indoor or outdoors), and/or ingestion routes of solvent exposure.
The source models included in the software are labeled constant concentration,
source and ventilation, pure-substance evaporation, open-can evaporation, and
wall or floor liquid application and evaporation. The program’s algorithms
model the release of solvent from mixtures that change over time.
The LifeLine Development Team, 2001. An Evaluation of the Potential for Use of
Existing Exposure Software (or Software Currently Under Development) in a
Tiered Approach to the Assessment of Exposures and Risks to Children. Phase 1
Report: Findings from Literature Search and Review of Modeling Projects
Currently Available or Under Development. Prepared for The American Chemistry
Council, Comprehensive Chemical Exposure Framework (CCEF) Project (Agreement
#1388). http://www.epa.gov/chemrtk/revmodlr.pdf
RISK:
RISK is a computer model sponsored by the EPA for calculating individual
exposure to indoor air pollutants from indoor sources. The model is designed to
calculate exposure due to individual, as opposed to population, activity
patterns and source use. The model also provides the capability to calculate
risk due to the calculated exposure. RISK is the third in a series of indoor
air quality (IAQ) models developed by the Indoor Environment Management Branch
of U.S. EPA's National Risk Management Research Laboratory.
The model uses data on source emissions, room-to-room air flows, air
exchange with the outdoors, and indoor sinks to predict concentration-time
profiles for all rooms. The concentration-time profiles are then combined with
individual activity patterns to estimate exposure. Risk is calculated using a
risk calculation framework developed by Naugle and Pierson (1991). The model
allows analysis of the effects on IAQ and exposure of air cleaners located in
the central air circulating system, in individual rooms, or in both. The
model allows simulation of a wide range of sources including long term steady
state sources, on/off sources, and decaying sources. Several sources are
allowed in each room. The model allows the analysis of the effects of sinks and
sink re-emissions on IAQ. The results of test house experiments are compared
with model predictions. The agreement between predicted concentration-time
profiles and the test house data is good.
http://www.ntis.gov/fcpc/cpn7493.htm
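The core bookkeeping of such a model, combining room-by-room concentration-time
profiles with an individual's activity pattern to obtain exposure, can be
sketched as follows. The concentrations and schedule are hypothetical and are
not output from the RISK model.

# Combine hourly room concentrations with a location schedule to get a
# time-weighted average exposure concentration. Values are hypothetical.
conc = {                       # mg/m^3, one value per hour for four hours
    "kitchen":     [0.10, 0.25, 0.15, 0.05],
    "living_room": [0.02, 0.08, 0.12, 0.06],
    "outdoors":    [0.01, 0.01, 0.01, 0.01],
}
schedule = ["kitchen", "kitchen", "living_room", "outdoors"]   # location each hour

total = sum(conc[room][hour] for hour, room in enumerate(schedule))
twa = total / len(schedule)
print(f"Time-weighted average exposure concentration: {twa:.3f} mg/m^3")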
RESRAD - RESidual RADioactivity:
RESRAD is a computer code developed at Argonne National Laboratory for the U.S.
Department of Energy to calculate site-specific RESidual RADioactive material
guidelines as well as radiation dose and excess lifetime cancer risk to a
chronically exposed on-site resident.
A soil guideline is defined as the radionuclide concentration in soil that
is acceptable if the site is to be used without radiological restrictions. Soil
is defined as unconsolidated earth material, including rubble and debris that
might be present. These guidelines are based on the following principles: (1)
the annual radiation dose received by a member of the critical population group
from the residual radioactive material - predicted by a realistic but
reasonably conservative analysis and calculated as committed effective dose
equivalent - should not exceed 100 mrem/yr, and (2) doses should be kept as low
as reasonably achievable, a concept commonly known as ALARA.
Nine environmental pathways are considered: direct exposure, inhalation of
particulates and radon, and ingestion of plant foods, meat, milk, aquatic
foods, water, and soil.
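In outline, a single-radionuclide soil guideline of this kind follows from
dividing the dose limit by the pathway-summed dose-to-source ratio, as in the
sketch below. The dose-to-source ratio is an assumed illustrative value, not a
RESRAD result; actual ratios are radionuclide-, time-, and site-dependent.

# Back-calculate a single-radionuclide soil guideline from a dose limit.
dose_limit = 100.0           # mrem/yr, the basic dose limit cited above
dose_to_source_ratio = 2.5   # mrem/yr per pCi/g, summed over pathways (assumed value)

soil_guideline = dose_limit / dose_to_source_ratio   # pCi/g
print(f"Soil guideline: {soil_guideline:.1f} pCi/g")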
SCIPUFF - Second-order Closure Integrated Puff:
SCIPUFF model is a Lagrangian puff dispersion model developed by Titan's ARAP
Group that uses a collection of Gaussian puffs to represent an arbitrary,
three-dimensional time-dependent concentration. The turbulent diffusion
parameterization is based on turbulence closure theory, providing a direct
relationship between the predicted dispersion rate and turbulent velocity
statistics of the wind field. In addition to the average concentration value,
the closure model also provides a prediction of the statistical variance in the
concentration field resulting from the random fluctuations in the wind field.
The closure approach also provides a direct representation for the effect of
averaging time. SCIPUFF has been incorporated into the Defense Threat Reduction
Agency's (DTRA) Hazard Prediction and Assessment Capability (HPAC) software.
HPAC is utilized for planning and analysis as well as in the field by military
personnel to rapidly determine consequences of dispersing chemical, nuclear and
biological agents. SCIPUFF has been validated against a number of laboratory
and field experiments, demonstrating its usefulness for non-military
applications. The EPA has recommended it as an alternative model that can be
used on a case-by-case basis for regulatory applications. The publicly
available version of SCIPUFF is the same version incorporated in HPAC except that
the proprietary and developmental features have been disabled. SCIPUFF runs on
a PC with a user-friendly Graphical User Interface (GUI).
SES - Subway Environmental Simulation:
The Subway Environmental Simulation (SES) model was developed by Parsons
Brinkerhoff for the U.S. Department of Transportation (DOT).
WPEM - Wall Paint Exposure Assessment Model:
The WPEM estimates the potential exposure of consumers and workers to the
chemicals emitted from wall paint, which is applied using a roller or a brush.
WPEM is a user-friendly, flexible software product that uses mathematical
models developed from small chamber data to estimate the emissions of chemicals
from oil-based (alkyd) and latex wall paint. This is then combined with
detailed use, workload, and occupancy data (e.g., amount of time spent in the
painted room, etc.) to estimate exposure.
http://www.epa.gov/opptintr/exposure/docs/wpem.htm
BDBR:
A predictive tool used to estimate potential human health risks by describing
and quantifying the key steps in the cellular, tissue and organismal responses
that result from chemical exposure.
ChemSTEER - Chemical Screening Tool For Exposures & Environmental Releases:
Chemical Screening Tool For Exposures & Environmental Releases (ChemSTEER)
estimates occupational inhalation and dermal exposure to a chemical during
industrial and commercial manufacturing, processing, and use operations
involving the chemical. It estimates releases of a chemical to air, water, and
land that are associated with industrial and commercial manufacturing,
processing, and use of the chemical.
It allows users to select predefined industry-specific or chemical
functional use-specific profiles or user-defined manufacturing, processing and
use operations. Using these operations and several chemical-specific and
case-specific parameters and general models, the ChemSTEER computer program
estimates releases and occupational exposures. The methods in ChemSTEER were
developed by the EPA Office of Pollution Prevention and Toxics (OPPT);
Economics, Exposure, and Technology Division; Chemical Engineering Branch.
E-FAST - Exposure & Fate Assessment Screening
Tool:
E-FAST provides screening-level estimates of the concentrations of chemicals
released to air, surface water, landfills, and from consumer products.
Estimates provided are potential inhalation, dermal and ingestion dose rates
resulting from these releases. Modeled estimates of concentrations and doses
are designed to reasonably overestimate exposures for use in screening level
assessment.
E-FAST calculates human potential dose rates for a wide variety of chemical
exposure routes and estimates the number
of days per year that an aquatic ecotoxicological concern concentration will be
exceeded for organisms in the water column.
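Screening-level dose estimates of this kind are typically built from simple
exposure-factor equations; a generic inhalation example is sketched below (the
exposure factors are common defaults assumed for illustration, not E-FAST
parameter values or algorithms):

# Generic screening-level average daily dose (ADD) from inhalation.
def inhalation_add(conc_mg_m3, inhalation_rate_m3_day=20.0,
                   exposure_days_per_year=365, exposure_years=30,
                   body_weight_kg=70.0, averaging_years=30):
    """ADD (mg/kg-day) = (C x IR x EF x ED) / (BW x AT)."""
    averaging_time_days = averaging_years * 365
    return (conc_mg_m3 * inhalation_rate_m3_day * exposure_days_per_year
            * exposure_years) / (body_weight_kg * averaging_time_days)

print(inhalation_add(conc_mg_m3=0.005))   # e.g., a modeled ambient concentration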
GEMS - Geographical Exposure Modeling System:
GEMS is a modernization of OPPT’s older Graphical Exposure Modeling System and
PCGEMS tools. GEMS brings together in one system
several EPA environmental fate and transport models and some of the
environmental data needed to run them. GEMS includes
models and data for ambient air, surface water, soil, and ground water and
makes the models much easier to use than their stand-alone counterparts. GEMS
will have graphics and Geographical Information System (GIS) capabilities for
displaying environmental modeling results.
GEMS will have interactive menus to guide the user in selecting models,
selecting and organizing data to be used as input to model runs, executing
model runs, and presenting model outputs. The menus will also provide user
help. The new system will be modular in design so that EPA can easily add other
models to the system in the future. GEMS will also have the capability of
retrieving some data from EPA Oracle databases, such as the TRIS data in the EPA
Envirofacts Data Warehouse.
GENII:
The GENII computer code was developed at Pacific Northwest National Laboratory
(PNNL) to incorporate the internal dosimetry models recommended by the
International Commission on Radiological Protection (ICRP) into updated
versions of existing environmental pathway analysis models. The resulting
second generation of environmental dosimetry computer codes is compiled in the
Hanford Environmental Dosimetry System (Generation II or GENII). The GENII
system was developed to provide a state-of-the-art, technically peer-reviewed,
documented set of programs for calculating radiation doses from radionuclides
released to the environment. Although the codes were developed for use at
Hanford, they are designed to be applicable to other sites.
IA-NBC-HMAS - Indoor Air-Nuclear, Biological and
Chemical-Health Modeling and Assessment System:
The Indoor Air Nuclear, Biological and Chemical Health Modeling and Assessment
System (HMAS for short) was developed as a health impacts analysis tool for
addressing indoor air contamination concerns. HMAS is a functional health
modeling and assessment system that can be easily tailored to meet specific
building analysis needs. IA-NBC-HMAS is designed to focus on indoor air in
complex buildings. Although it was originally designed for counterterrorism
and chemical/biological agent attack scenarios, it is equally applicable to
modeling and assessing indoor pollution from natural sources (e.g., building
construction materials, process releases, etc.). It is designed to be coupled
with both micro- and macro-scale ambient models so that it can address both
localized indoor concerns and the full picture of ambient and indoor effects
from releases either inside or outside the building that impact indoor air
quality.
IEUBK - Integrated Exposure Uptake Biokinetic Model for
Lead in Children:
The Integrated Exposure Uptake Biokinetic Model for Lead in Children (IEUBK)
attempts to predict blood-lead concentrations (PbBs) for children exposed to
lead in their environment. The model allows the user to input relevant
absorption parameters (e.g., the fraction of lead absorbed from water) as well
as intake and exposure rates. Using these inputs, the IEUBK model rapidly
calculates and recalculates a complex set of equations to estimate the
potential concentration of lead in the blood for a hypothetical child or
population of children (6 months to 7 years of age). The IEUBK model is designed
to predict the probable PbB concentrations for children between 6 months and 7
years of age who have been exposed to lead through environmental media (air,
water, soil, dust, and diet). Integrated Exposure Uptake Biokinetic Model for
Lead in Children (IEUBK) is an EPA model circa 1999. (White Paper HHEA-3)
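Only the exposure-to-uptake step of such a model is easy to sketch; the
biokinetic core that converts uptake into a blood-lead distribution is far more
involved and is not reproduced here. The intakes and absorption fractions below
are illustrative placeholders, not IEUBK defaults.

# Total absorbed lead (uptake, ug/day) aggregated across media.
intakes = {             # lead intake by medium, ug/day (placeholder values)
    "soil_dust": 5.0,
    "diet":      3.0,
    "water":     1.0,
    "air":       0.5,
}
absorbed_fraction = {   # fraction of intake absorbed, by medium (placeholder values)
    "soil_dust": 0.30,
    "diet":      0.50,
    "water":     0.50,
    "air":       0.32,
}
uptake = sum(intakes[m] * absorbed_fraction[m] for m in intakes)
print(f"Total lead uptake: {uptake:.2f} ug/day")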
MEPAS - Multimedia Environmental Pollutant Assessment
System:
The Multimedia Environmental Pollutant Assessment System considers chronic
exposure and human health risks resulting from environmental emissions.
Physics-based models of contaminant processes in the air, groundwater, and
surface water are integrated in a system that considers both chemical and
radioactive potential impacts.
The Multimedia Environmental Pollutant Assessment System (MEPAS) software
utilizes sophisticated modeling codes to quickly and easily assess risks from
activities that could impact human health, such as remediating hazardous waste
sites.
The MEPAS software provides physics-based modeling codes for environmental
risk assessment. It quickly integrates results from separate models of
contaminant behavior in various media (air, soil, ground water, surface water)
and for different scenarios, turning a task that could take weeks (or might
never be attempted) into a few hours' work.
MEPAS integrates and evaluates transport and exposure pathways for chemical
and radioactive releases according to their potential human health impacts
(multimedia in this context refers to multiple environmental transport media).
MEPAS takes the nontraditional approach of combining all major exposure
pathways into a multimedia computational tool for public health impact. MEPAS
is a physics-based approach that couples contaminant release, migration and
fate for environmental media (groundwater, surface water, air) with exposure
routes (inhalation, ingestion, dermal contact, external dose) and risk/health
consequences for radiological and non-radiological carcinogens and
non-carcinogens.
MMSOILS - The Multimedia Contaminant Fate,
Transport, and Exposure Model:
MMSOILS estimates the human exposure and health risk associated with releases
of contamination from hazardous waste sites. The methodology consists of a multimedia
model that addresses the transport of a chemical in groundwater, surface water,
soil erosion, the atmosphere, and accumulation in the food chain. The human
exposure pathways considered in the methodology include: soil ingestion, air
inhalation of volatiles and particulates, dermal contact, ingestion of drinking
water, consumption of fish, consumption of plants grown in contaminated soil,
and consumption of animals grazing on contaminated pasture. For multimedia
exposures, the methodology provides estimates of human exposure through
individual pathways and combined exposure through all pathways considered. The
risk associated with the total exposure dose is calculated based on
chemical-specific toxicity data.
The methodology is intended for use as a screening tool. It is critical that
the results are interpreted in the appropriate framework. The intended use of
the exposure assessment tool is for screening and relative comparison of
different waste sites, remediation activities, and hazard evaluation. The
methodology can be used to provide an estimate of health risks for a specific
site. Since the uncertainty of the estimated risk may be quite large (depending
on the site characteristics and available data), MMSOILS addresses these
uncertainties via Monte-Carlo analysis.
Modeling Benzene Exposures and Absorbed Dose:
A probabilistic model of benzene exposure and absorbed dose applied to the
approximately 40 million people of EPA Region 5 (MacIntosh, 1995).
Modeling Dietary Exposures to Heavy Metals and
Pesticides:
Chronic (1-yr average) dietary exposure to 11 heavy metals and pesticides for a
population of approximately 120,000 US adults (MacIntosh, 1996).
MTHEM - Multi-pollutant Total Human Exposure Model:
MTHEM is an EPA and Stanford University model for simulating patterns of
exposure to air pollutants (ozone, PM, carbon dioxide).
http://www.riskworld.com/Abstract/1996/SRAam96/ab6aa368.htm
http://www.riskworld.com/Abstract/1996/SRAam96/ab6aa329.htm
PBPK & PBPD - Physiologically Based PharmacoKinetic
(PBPK) & Physiologically Based PharmacoDynamic (PBPD):
A PBPK/PBPD model for humans describes the body as a set of interconnected
compartments, or continuous stirred tank reactors. Each compartment can
describe either an organ or a tissue. A PBPK/PBPD model is founded on known
physiological processes (blood flow rates, tissue volumes, breathing rates,
etc.), on chemical-specific processes (partition coefficients, chemical
density, metabolic constants, molecular weight, etc.), and on species-dependent
processes.
http://www.pnl.gov/eshs/cap/cd/pbpk.html
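The compartmental structure described above amounts to a set of coupled
mass-balance differential equations. The sketch below integrates a minimal
flow-limited model (blood, liver, fat, rest of body) after an intravenous
bolus; all flows, volumes, partition coefficients, and the clearance term are
illustrative assumptions, not parameters for any particular chemical.

# Minimal flow-limited PBPK sketch, forward-Euler integration.
dt, hours = 0.001, 24.0                                       # h

V = {"blood": 5.0, "liver": 1.8, "fat": 12.0, "rest": 51.0}   # compartment volumes, L
Q = {"liver": 90.0, "fat": 20.0, "rest": 240.0}               # blood flows, L/h
P = {"liver": 3.0, "fat": 50.0, "rest": 1.5}                  # tissue:blood partition coefficients
CL_int = 30.0                                                 # hepatic clearance, L/h

A = {"blood": 100.0, "liver": 0.0, "fat": 0.0, "rest": 0.0}   # amounts, mg (100 mg IV bolus)

for _ in range(int(hours / dt)):
    C = {k: A[k] / V[k] for k in A}                  # concentrations, mg/L
    Cv = {t: C[t] / P[t] for t in Q}                 # venous concentration leaving each tissue
    dA = {t: Q[t] * (C["blood"] - Cv[t]) * dt for t in Q}
    dA["liver"] -= CL_int * Cv["liver"] * dt         # hepatic metabolism
    dA_blood = sum(Q[t] * (Cv[t] - C["blood"]) for t in Q) * dt
    for t in Q:
        A[t] += dA[t]
    A["blood"] += dA_blood

print({k: round(v, 2) for k, v in A.items()})        # remaining amounts after 24 h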
QSAR Models - Quantitative Structure Activity
Relationship:
Quantitative structure-activity relationship (QSAR) models are used to relate
chemical structure to the physicochemical parameters that are important to
dermal absorption. QSAR studies attempt to model a variety of molecules and
predict their activities based upon their structures. Models are built from
sets of data referred to as descriptors. Such models, when accurately
constructed, allow prediction of the activity of other molecules not originally
used to construct the model. These methodologies have significant potential in
the field of drug discovery. (White Paper HHEA-3)
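A toy example of the approach, fitting an activity to molecular descriptors by
ordinary least squares, is sketched below. The descriptors, activity values,
and the two-descriptor linear form are fabricated purely for illustration and
do not represent any published QSAR.

import numpy as np

descriptors = np.array([      # columns: log Kow, molecular weight (illustrative)
    [1.5,  92.0],
    [2.1, 106.0],
    [3.3, 128.0],
    [0.5,  60.0],
    [2.8, 120.0],
])
activity = np.array([-2.8, -2.5, -2.0, -3.4, -2.2])   # e.g., log Kp (illustrative)

X = np.column_stack([np.ones(len(activity)), descriptors])   # add intercept column
coefs, *_ = np.linalg.lstsq(X, activity, rcond=None)         # least-squares fit

new_chemical = np.array([1.0, 2.5, 110.0])    # intercept term, log Kow, MW of an "unknown"
print("Fitted coefficients:", coefs)
print("Predicted activity:", float(new_chemical @ coefs))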
REHEX - II:
Regional Human Exposure Model was developed to estimate the population's
exposure to PM concentrations.
http://www.eih.uh.edu/publications/99annrep/rifai.htm
http://www.burningissues.org/abstracts/econstdy.htm
RESRAD - RESidual RADioactivity:
RESRAD is a computer code developed at Argonne National Laboratory for the U.S.
Department of Energy to calculate site-specific RESidual RADioactive material
guidelines as well as radiation dose and excess lifetime cancer risk to a
chronically exposed on-site resident.
A soil guideline is defined as the radionuclide concentration in soil that
is acceptable if the site is to be used without radiological restrictions. Soil
is defined as unconsolidated earth material, including rubble and debris that
might be present. These guidelines are based on the following principles: (1)
the annual radiation dose received by a member of the critical population group
from the residual radioactive material - predicted by a realistic but
reasonably conservative analysis and calculated as committed effective dose
equivalent - should not exceed 100 mrem/yr, and (2) doses should be kept as low
as reasonably achievable, a concept commonly known as ALARA.
Nine environmental pathways are considered: direct exposure, inhalation of
particulates and radon, and ingestion of plant foods, meat, milk, aquatic
foods, water, and soil.
RISC - Risk Assessment Model for Soil and Groundwater Applications (referred
to as RISK on its webpage):
RISC is relatively widely established in New Zealand among environmental risk
assessors. RISC models the fate and transport of contaminants in steady state
and transient modes. The model is intuitive to use, and relatively simple.
The present version (3.0) evaluates human health risk only. However, the
updated RISC model will incorporate both aquatic and terrestrial ecological
risk assessment by modeling contaminant concentrations at key receptor areas
and comparing these values with specific acceptance criteria. However, in
developing the new version, the developers have highlighted particular
difficulties associated with modeling the fate and transport of metals in
soils.
The main advantages of RISC are that its human health risk assessment approach
is relatively widely accepted, the model can incorporate New Zealand-specific
data, and investigators are relatively familiar with the software.
RISK:
RISK is a computer model sponsored by the EPA for calculating individual
exposure to indoor air pollutants from indoor sources. The model is designed to
calculate exposure due to individual, as opposed to population, activity
patterns and source use. The model also provides the capability to calculate
risk due to the calculated exposure. RISK is the third in a series of indoor
air quality (IAQ) models developed by the Indoor Environment Management Branch
of U.S. EPA's National Risk Management Research Laboratory.
The model uses data on source emissions, room-to-room air flows, air exchange with
the outdoors, and indoor sinks to predict concentration-time profiles for all
rooms. The concentration-time profiles are then combined with individual
activity patterns to estimate exposure. Risk is calculated using a risk
calculation framework developed by Naugle and Pierson (1991). The model allows
analysis of the effects on IAQ and exposure of air cleaners located in the
central air circulating system, in individual rooms, or in both. The model
allows simulation of a wide range of sources including long term steady state
sources, on/off sources, and decaying sources. Several sources are allowed in
each room. The model allows the analysis of the effects of sinks and sink
re-emissions on IAQ. The results of test house experiments are compared with model
predictions. The agreement between predicted concentration-time profiles and
the test house data is good.
http://www.ntis.gov/fcpc/cpn7493.htm
TEM - Total Exposure Model:
EPA and the US Air Force developed the Total Exposure Model (TEM) to estimate
population and individual exposure to waterborne contaminants. TEM models the
fundamental physical and chemical processes that occur as individuals are
exposed to the contaminated water supply. TEM calculates an individual’s
exposure and dose using finite difference techniques, estimating the mass
transfer of the chemicals from the water to the air during water use activities
(e.g., showering).
Thongsinthusak:
Thongsinthusak et al. (1999) describe an exponential saturation model with lag
time for estimating dermal absorption; it is used for pesticide exposure and
was derived from experimental data sets. (White Paper HHEA-3)
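A plausible form for such a model, assumed here for illustration rather than
taken from the publication, is a lagged exponential approach to a maximum
absorbed fraction:

import math

def absorbed_fraction(t_hours, f_max=0.15, k_per_hour=0.2, lag_hours=0.5):
    """Cumulative dermally absorbed fraction under an exponential-saturation
    model with lag time. Parameter values are assumed for illustration and are
    not those reported by Thongsinthusak et al. (1999)."""
    if t_hours <= lag_hours:
        return 0.0
    return f_max * (1.0 - math.exp(-k_per_hour * (t_hours - lag_hours)))

for t in (0.5, 2.0, 8.0, 24.0):
    print(t, "h:", round(absorbed_fraction(t), 4))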
UCSS - Use Clusters Scoring System:
Use Clusters Scoring System identifies and screens clusters of chemicals
("use clusters") that are used to perform a particular task. A use
cluster is a set of chemicals that may be substituted for one another in
performing a given task. It identifies clusters of potential concern and
provides an initial ranking of chemicals using human and environmental hazard
and exposure data from a number of sources.
For each chemical in a cluster, UCSS allows the user to enter data
indicating the potential for human and ecological exposure and hazard, and the
level of U.S. Environmental Protection Agency (EPA) interest. It calculates
health and ecological risk or toxicity rating scores for each chemical within a
cluster using the information entered and preprogrammed scoring algorithms.
UCSS uses individual chemical scores to calculate an overall cluster score,
which is an indicator of potential risk for the use cluster. It contains data
on nearly 400 use clusters and 4,700 chemicals.
AHS:
The major objective of the Adult Health Study (AHS) was to collect information
on the health condition of the atomic-bomb survivors, especially those health
issues that are not directly related to death. To do this, about 20,000
subjects selected from the RERF Life Span Study sample of atomic-bomb survivors
were followed through biennial health examinations since 1958. About 2,400 Life
Span Study participants and 1,000 in-utero-exposed persons have been added to
the sample since 1978.
http://www.rerf.or.jp/eigo/titles/ahstitle.htm
ATW - Air Toxics Website:
As part of EPA's National Air Toxics Assessment, EPA conducted a national-scale
assessment of 33 air pollutants (a subset of 32 air toxics from the Clean Air
Act's list of 188 air toxics, plus diesel particulate matter (diesel PM)).
The assessment includes four steps that focus on the year 1996. As of May
2002, the results posted for all four steps include revisions based on input
from scientific peer review.
The goal of the national-scale assessment is to identify those air toxics
which are of greatest potential concern in terms of contribution to population
risk. The results will be used to set priorities for the collection of
additional air toxics data (e.g., emissions data and ambient monitoring data).
http://www.epa.gov/ttn/atw
CARB - California Air Resources Board:
The California Air Resources Board is a part of the California Environmental
Protection Agency, an organization which reports directly to the Governor's
Office in the Executive Branch of California State Government.
The mission of CARB is to promote and protect public health, welfare, and
ecological resources through the effective and efficient reduction of air
pollutants while recognizing and considering the effects on the economy of the
state.
CHAD - Consolidated Human Activities Database:
CHAD is a relational database with a graphical user interface that facilitates
queries and report generation. It contains databases from previously existing
human activity pattern studies, which were incorporated in two forms: (1) as
the original raw data and (2) as data modified according to predefined format
requirements. The latter involved development of a common
activity/location code system, compilation of background questionnaire
information, and the application of data quality flags. CHAD is intended
to be an input file for exposure/intake dose modeling and/or statistical
analysis.
http://www.epa.gov/chadnet1/
Cohen Hubal et al.:
Dr. Elaine Cohen Hubal designed studies to evaluate dermal exposure assessment
approaches and to collect exposure factor data in support of the Food Quality
Protection Act (1998 - present). Dr. Cohen Hubal worked
on the development of a modeling platform to predict contaminant fate and
transport of environmental pollutants and to perform exposure assessments in
support of the Hazardous Waste Identification Rule in 1996 - 1997.
She developed and worked with a variety of computational models to describe
the simultaneous mass transport and reaction of inhaled gases in the airway
lining. This work was part of a larger project (1992-1996) designed to reduce uncertainty in risk assessment for
inhaled toxicants, and included research in the area of industrial pollution
prevention. She developed the framework to evaluate environmental impact of
pollution prevention activities which directly relates energy requirements to
process air, water and solid waste emissions. The framework can be used to
facilitate lifecycle analysis.
CPDB - Carcinogenic Potency Database:
The Carcinogenic Potency Database (CPDB) is a widely used resource on the
results of chronic, long-term animal cancer tests. It provides a single,
standardized and easily accessible database that includes sufficient
information on each experiment to permit investigations into many research
areas of carcinogenesis. Both qualitative and quantitative information on
positive and negative experiments are reported, including all bioassays from
the National Cancer Institute/National Toxicology Program (NCI/NTP) and experimental
results from the general literature that meet a set of inclusion criteria.
Analyses of 5152 experiments on 1298 chemicals are presented. For each
experiment, information is included on the species, strain, and sex of the test
animal; features of experimental protocol such as route of administration,
duration of dosing, dose level(s) in mg/kg body weight/day, and duration of
experiment; histopathology and tumor incidence; carcinogenic potency (TD50) and
its statistical significance; shape of the dose-response curve; author's
opinion as to carcinogenicity; and literature citation.
http://potency.berkeley.edu/cpdb.html
CSFII - Continuing Survey of Food Intakes by
Individuals:
Continuing Survey of Food Intakes by Individuals (CSFII) is conducted by US
Department of Agriculture (USDA). The results of the survey can be found at http://www.barc.usda.gov
EPA’s Child-Specific Exposure Factors Handbook:
In April 1997, President Clinton signed an Executive Order to Protect Children
from Environmental Health Risks and Safety Risks. The Order requires all
federal agencies to address health and safety risks to children, coordinate
research priorities on children’s health, and ensure that their standards take
into account special risks to children. To implement the President’s Executive
Order, EPA established the Office of Children’s Health Protection (OCHP), and
offices within EPA increased their efforts to provide a safe and healthy
environment for children by ensuring that all regulations, standards, policies,
and risk assessments take into account risks to children.
In 1997, EPA/ORD/NCEA published the Exposure Factors Handbook (U.S. EPA,
1997b). The Handbook includes exposure factors and related data on both adults
and children. OCHP’s recently issued survey of child-related risk assessment
policy and methodology guidance documents (U.S. EPA, 1999b) highlighted the
Exposure Factors Handbook (U.S. EPA, 1997b) as a source of information on
exposure factors for children.
http://www.epa.gov
EPA Handbook - EPA Exposure Factors Handbook Database:
The Exposure Factors Handbook Database provides a summary of the available
statistical data on various factors used in assessing human exposure. This
Handbook is addressed to exposure assessors inside the U.S. Environmental
Protection Agency as well as outside who need to obtain data on standard
factors to calculate human exposure to toxic chemicals. Recommended values are
for the general population and also for various segments of the population who
may have characteristics different from the general population.
http://www.epa.gov/ncea/exposfac.htm
HazDat:
HazDat, the Agency for Toxic Substances and Disease Registry's Hazardous
Substance Release/Health Effects Database, is the scientific and administrative
database developed to provide access to information on the release of hazardous
substances from Superfund sites or from emergency events and on the effects of
hazardous substances on the health of human populations. The following
information is included in HazDat: site characteristics, activities and site
events, contaminants found, contaminant media and maximum concentration levels,
impact on population, community health concerns, ATSDR public health threat
categorization, ATSDR recommendations, environmental
fate of hazardous substances, exposure routes, and physical hazards at the
site/event. In addition, HazDat contains substance-specific information such as
the ATSDR Priority List of Hazardous Substances, health effects by route and
duration of exposure, metabolites, interactions of
substances, susceptible populations, and biomarkers of exposure and effects.
HazDat also contains data from the U.S. Environmental Protection Agency (EPA)
Comprehensive Environmental Response, Compensation, and Liability Information
System (CERCLIS) database, including site CERCLIS number, site description,
latitude/longitude, operable units, and additional site information.
http://www.atsdr.cdc.gov/hazdat
IRIS - Integrated Risk Information System:
The Integrated Risk Information System (IRIS),
prepared and maintained by the U.S. Environmental Protection Agency (U.S. EPA),
is an electronic data base containing information on human health effects that
may result from exposure to various chemicals in the environment. IRIS was
initially developed for EPA staff in response to a growing demand for
consistent information on chemical substances for use in risk assessments,
decision-making and regulatory activities. The information in IRIS is intended
for those without extensive training in toxicology, but with some knowledge of
health sciences.
The heart of the IRIS system is its collection of computer files covering
individual chemicals. These chemical files contain descriptive and quantitative
information in a number of standard categories.
ITER - International Toxicity Estimates for Risk:
ITER is a free Internet database of human health risk values for over 500
chemicals of environmental concern from several organizations worldwide. ITER
is the only database that provides this data in a table format that allows
side-by-side comparisons of risk values from different organizations. Below the
table is a synopsis that includes an explanation for any differences among the
organizations' values. ITER provides links to these organizations for more
detailed information. ITER currently contains data from Agency for Toxic
Substances and Disease Registry (ATSDR), Health Canada, National Institute of
Public Health and Environment (RIVM), The Netherlands, US Environmental
Protection Agency (EPA), and independent parties whose risk values have
undergone peer review.
http://www.tera.gov/iter
NCHS:
NCHS is the Federal Government's principal vital and health statistics agency.
Since 1960, when the National Office of Vital Statistics and the National
Health Survey merged to form NCHS, the agency has provided a wide variety of
data with which to monitor the Nation's health. Since then, NCHS has received
several legislative mandates and authorities.
The
NCHS data systems include data on vital events as well as information on
health status, lifestyle, and exposure to unhealthy influences, the onset and
diagnosis of illness and disability, and the use of health care. These data are
used by policymakers in Congress and the Administration, by medical
researchers, and by others in the health community.
http://www.cdc.gov/nchs
NHANES - National Health and Nutrition Examination
Survey:
The National Health and Nutrition Examination Survey (NHANES) is a survey
conducted by the National Center for Health Statistics (NCHS) of the Centers
for Disease Control and Prevention (CDC). NHANES provides estimates of the
health of Americans by examining a sample of people who represent the American
population. To accomplish this, medical staff and other professionals travel
across the country in mobile examination centers.
NHAPS - National Human Activity Pattern Survey:
The National Human Activity Pattern Survey (NHAPS) was initiated to fill a need
for updated activity information on a nationwide scale. Several recent exposure
field monitoring studies have shown that human activities play a critical role
in explaining the variation in human exposure because they impact the
frequency, duration, and intensity of exposure to pollutants. Currently,
activity pattern databases with adequate potential pollutant exposure
information are available only for a few cities.
NHEXAS - National Human Exposure Assessment Survey:
The National Human Exposure Assessment Survey (NHEXAS) was developed by the
Office of Research and Development (ORD) of the U.S. Environmental Protection
Agency (EPA) early in the 1990s to provide critical information about
multipathway, multimedia population exposure distribution to chemical classes.
The first phase consisted of three pilot studies.
NHGPUS - National Home and Garden Pesticide Use
Survey:
The National Home and Garden Pesticide Use Survey (NHGPUS) was conducted for
EPA during August and September 1990. The purpose was to collect data on the
use of pesticides in and around homes in the United States.
http://www.epa.gov
OEHHA - Office of Environmental Health Hazard
Assessment:
The mission of the Office of Environmental Health Hazard Assessment (OEHHA) is
to protect and enhance public health and the environment by objective
scientific evaluation of risks posed by hazardous substances.
http://www.oehha.ca.gov
RAIS - Risk Assessment Information System:
Risk Assessment Information System (RAIS) website contains risk assessment
tools and information. The Risk Assessment Tools include: Risk-Based
Preliminary Remediation Goal (PRG) calculations, a Toxicity database, Risk
Calculations, and Ecological Benchmarks. The Tools are designed for use at all
DOE sites and can be customized for site-specific conditions. The RAIS also
includes information, guidance, and risk results applicable to the Oak Ridge
Reservation.
http://risk.lsd.ornl.gov/rap_hp.shtml
RTECS® - Registry of Toxic Effects of Chemical
Substances:
Registry of Toxic Effects of Chemical Substances (RTECS) from the US National
Institute for Occupational Safety and Health (NIOSH) provides toxicological
information with citations on over 140,000 chemical substances. These detailed
profiles include toxicological data and reviews; international workplace
exposure limits; references to US standards and regulations; analytical
methods; and exposure and hazard survey data. The data are compiled into
substance records for ease of use, and updated data are fully integrated.
SRD - Standard Reference Data Group:
The Standard Reference Data Group (SRDG) at NIST has been providing
well-documented numeric data to scientists and engineers for use in technical
problem solving, research, and development. These recommended values are based on
data which have been extracted from the world's literature, assessed for
reliability, and then evaluated to select the preferred values. These data
activities are conducted by scientists at NIST and in university data centers.
The formal existence of the National Standard Reference Data System dates
from 1963, when the Federal Council for Science and Technology asked the
then-National Bureau of Standards (now NIST) to assume primary responsibility
in the Federal Government for promoting and coordinating the critical
evaluation of numerical data in the physical sciences. The program was
conceived as a decentralized national effort with financial support coming from
a variety of government and private sources, but the NBS was responsible for
the overall planning and coordination. In 1968, Congress provided a specific
legislative mandate for the program through passage of Public Law 90-396, the
Standard Reference Data Act. This Act details the policy of Congress to make
reliable, critically evaluated data compilations available to scientists,
engineers, and the general public.
http://www.nist.gov/srd
Superfund:
The EPA administers the Superfund program in cooperation with individual states
and tribal governments. The office that oversees management of the program is
the Office of Emergency and Remedial Response (OERR).
TOMES:
Handling hazardous chemicals is routine in many workplaces. Precautions must be
in place to protect workers and to meet regulatory and safety guideline
compliance. The TOMES Plus® System offers rapid, easy access to the critical
medical and hazard data needed for immediate, effective response to any
situation. Whether reducing the risk of injury or determining safe levels of
exposure, TOMES Plus databases include treatment
guidelines for acute chemical exposures, evacuation procedures, personal
protection procedures, and chemical containment and disposal information.
General areas unrelated to chemicals, such as ergonomics and human health risk
assessment, are addressed. The System's proprietary and government databases
are integrated to ensure quick access to fully reviewed, referenced
information. The TOMES Plus System is intended for workplace and
emergency-response use.
TOXNET - Toxicology and Environmental Health
Information Program:
The Toxicology and Environmental Health Information Program (TEHIP) is
responsible for the Toxicology Data Network (TOXNET®), an integrated system of
toxicology and environmental health databases that are available free of charge
on the web. The following databases are available for searching via TOXNET:
HSDB®, TOXLINE®, ChemIDplus, IRIS , TRI (Toxic
Chemical Release Inventory), CCRIS (Chemical Carcinogenesis Research
Information System), GENE-TOX, and DART®/ETIC (Developmental and Reproductive
Toxicology/Environmental Teratology Information Center). http://www.nlm.nih.gov
US Census:
http://www.census.gov
USEPA Exposure Models Library:
The USEPA Exposure Models Library focuses on standard models, mainly fate and
transport (1996).
http://esm.versar.com/emlimes/emlintro.htm
Dust Resuspension:
A Dust Resuspension model was not identified by the research study. This
means that a model does not exist, or it is not widely used or published for
this process. If this model existed, it would take surface concentrations from
house objects estimated by the indoor air transport model (CONTAMW) and
estimate the atmospheric emission rates of dust from resuspension. This
information would feed back into the indoor air transport model (CONTAMW).
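If such a model were built, its core relationship would likely resemble the
simple resuspension-rate formulation sketched below; the functional form and
all values are assumptions made for illustration, not an identified model.

# Hypothetical dust-resuspension emission estimate: emission rate equals a
# resuspension rate constant times the resuspendable surface loading times
# the disturbed area. All values are assumed.
resuspension_rate = 1.0e-5    # fraction of surface loading resuspended per hour
surface_loading = 50.0        # contaminated dust loading on surfaces, mg/m^2
disturbed_area = 20.0         # floor/object area disturbed by activity, m^2

emission_rate = resuspension_rate * surface_loading * disturbed_area   # mg/h
print(f"Resuspension emission rate passed back to CONTAMW: {emission_rate:.2e} mg/h")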
Fuel Spill Frequency & Volume:
A Fuel Spill Frequency & Volume model was not identified by the
research study. This means that a model does not exist, or it is not widely
used or published for this process. If this model existed, it would take
information related to the fueling of a gasoline-powered lawn mower, lawn
trimmer, or automobile and estimate the frequency and volume of fuel spills.
Mainstream Cigarette Smoke Emission:
A variety of Mainstream Cigarette Smoke Emission models were identified by the
research study. However, it is not known whether these models can take
information related to contaminant inhalation and release by mainstream smoking
and estimate the emission rates of contaminants in mainstream smoke in
different rooms as input to the indoor air transport model (CONTAMW).
Micro-Environmental Air Transport Outdoors:
A Micro-Environmental Air Transport Outdoors
model was not identified by the research study. This means that a model does
not exist, or it is not widely used or published for this process. If this
model existed, it would take emission information related to fueling and
painting spills and estimate the outdoor air concentrations that could impact a
human. This information could also be used as an outdoor source to the indoor
air transport model (CONTAMW).
Paint Spill Frequency and Volume:
A Paint Spill Frequency and Volume model was not identified by the research
study. This means that a model does not exist, or it is not widely used or
published for this process. If this model existed, it would take information
related to indoor painting of walls by a nonprofessional and the potential for
spills. Output would be the number and volume of spills associated with indoor
painting of walls.
Partitioning Between Vapor and Particle Phase:
A Partitioning Between Vapor and Particle Phase
model was not identified by the research study. This means that a model does
not exist, or it is not widely used or published for this process. If this
model existed, it would take air concentrations estimated by the indoor air
transport model (CONTAMW) and partition them into vapor (gaseous) and particle
components. These partitioned air concentrations could then be used by the
indoor air transport model (CONTAMW) to provide refined air concentrations for
more realistic exposure modeling.
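One commonly used form for such a partitioning step is a Kp-based gas/particle
split, sketched below; this is a generic formulation with assumed values, not a
model identified by the study.

# Split a total indoor air concentration into vapor and particle-bound parts
# using a particle-gas partition coefficient Kp and the suspended particle
# load TSP. Kp, TSP, and the total concentration are assumed values.
def particle_fraction(kp_m3_per_ug, tsp_ug_m3):
    """Fraction bound to particles: Kp*TSP / (1 + Kp*TSP)."""
    return kp_m3_per_ug * tsp_ug_m3 / (1.0 + kp_m3_per_ug * tsp_ug_m3)

c_total = 10.0                                       # ug/m^3 total from CONTAMW
phi = particle_fraction(kp_m3_per_ug=0.02, tsp_ug_m3=30.0)
print(f"Particle-bound: {c_total * phi:.2f} ug/m^3, vapor: {c_total * (1 - phi):.2f} ug/m^3")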
Source Emission from Fueling, Combustion, and Service:
Source Emission from Fueling, Combustion, and Service models were not
identified by the research study. This means that such models do not exist or
are not widely used or published for this process. If these models existed,
they would take information related to the fueling, operation, and maintenance
of a gasoline-powered lawn mower, lawn trimmer, or automobile for a typical
household. Output would be emission rates of volatile compounds associated with
fueling, operation, and maintenance of a gasoline-powered lawn mower, lawn
trimmer, or automobile.
VOC Fugitive Emission from Mixing Vessel:
A VOC Fugitive Emission from Mixing Vessel model was not identified by
the research study. This means that a model does not exist, or it is not widely
used or published for this process. If this model existed, it would take
information related to fugitive emission of volatile organic compounds (VOCs)
from a mixing vessel and estimate the emission rates of these compounds as
input to the indoor air transport model (CONTAMW).
This section provides a review of exposure and impact models, databases, and
algorithms that are readily available via the Internet and other forms of open
literature. The review included all types of models, databases, and algorithms
necessary to conduct detailed exposure and micro-environmental exposure
modeling. This review is only meant to be a quick review of information readily
available via the Internet or technical libraries and is not an in-depth
literature review.
PubMed Search Date:
Table 2.2.1 Chemical-Specific PBPK Models for Dose to Embryo/Fetus
Table 2.2.2 Chemical-Specific Biologically Based Dose-Response
Models for Pregnancy
Table 2.2.3 Chemical-Specific PBPK Models for Neonatal
Exposure via Lactation
Table 2.2.4 Basic Biology Data Useful in PBPK Model Development
Table 2.2.5 General Models and Dosimetry Considerations Useful
in PBPK Model Development
Table 2.2.6 Reviews of PK/PD in Developmental Toxicology
Table 2.2.7 Role for kinetics/dosimetry in Children’s Health
Issues
Table 2.2.8 PD Models of the Endocrine System
Table 2.2.9 Dose-Response Modeling for Endocrine Active
Compounds
Table 2.2.10 Metabolizing Enzymes as a Function of Age or
Gestational Development (Reviews)
Table 2.2.11 Standard Kinetics during Pregnancy or Lactation
(only selected articles from a large database captured during PBPK searches)
1. Clarke,
D. O., Elswick, B. A., Welsch, F., and Conolly, R. B. (1993). Pharmacokinetics of 2-methoxyethanol and
2-methoxyacetic acid in the pregnant mouse: a physiologically based
mathematical model. Toxicol. Appl. Pharmacol.
121, 239-252.
2. Clewell, H. J., Gearhart, J. M., Gentry,
P. R., Covington, T. R., VanLandingham, C. B., Crump, K. S., and Shipp, A. M.
(1999). Evaluation of the uncertainty in an oral reference
dose for methylmercury due to interindividual variability in pharmacokinetics.
Risk Anal. 19, 547-558.
3. Clewell, H. J., III, Andersen, M. E.,
Wills, R. J., and Latriano, L. (1997). A physiologically
based pharmacokinetic model for retinoic acid and its metabolites. J.
Am. Acad. Dermatol. 36, S77-S85.
4. Faustman, E.M., Lewandowski, T.A.,
5. Fisher, J. W., Whittaker, T. A., Taylor,
D. H., Clewell, H. J., III, and Andersen, M. E. (1989). Physiologically
based pharmacokinetic modeling of the pregnant rat: a multiroute exposure model
for trichloroethylene and its metabolite, trichloroacetic acid. Toxicol. Appl. Pharmacol. 99, 395-414.
6. Gabrielsson, J. L. and Paalzow, L. K.
(1983). A physiological pharmacokinetic model for morphine
disposition in the pregnant rat. J. Pharmacokinet. Biopharm.
11, 147-163.
7. Gabrielsson, J. L., Paalzow, L. K., and
Nordstrom, L. (1984). A physiologically based pharmacokinetic
model for theophylline disposition in the pregnant and nonpregnant rat. J.
Pharmacokinet. Biopharm. 12, 149-165.
8. Gabrielsson, J. L., Johansson, P., Bondesson,
U., and Paalzow, L. K. (1985). Analysis of methadone
disposition in the pregnant rat by means of a physiological flow model. J.
Pharmacokinet. Biopharm. 13, 355-372.
9. Gabrielsson, J. L., Johansson, P.,
Bondesson, U., Karlsson, M., and Paalzow, L. K. (1986). Analysis
of pethidine disposition in the pregnant rat by means of a physiological flow
model. J. Pharmacokinet. Biopharm. 14,
381-395.
10. Gabrielsson, J. L. and Groth, T. (1988).
An extended physiological pharmacokinetic model of methadone disposition in the
rat: validation and sensitivity analysis. J. Pharmacokinet. Biopharm. 16, 183-201.
11. Gargas, M. L., Tyler, T. R., Sweeney, L.
M., Corley, R. A., Weitz, K. K., Mast, T. J., Paustenbach, D. J., and Hays, S.
M. (2000). A toxicokinetic study of inhaled ethylene glycol
ethyl ether acetate and validation of a physiologically based pharmacokinetic
model for rat and human. Toxicol.
Appl. Pharmacol. 165, 63-73.
12. Gargas, M. L., Tyler, T. R., Sweeney, L.
M., Corley, R. A., Weitz, K. K., Mast, T. J., Paustenbach, D. J., and Hays, S.
M. (2000). A toxicokinetic study of inhaled ethylene glycol
monomethyl ether (2- ME) and validation of a physiologically based
pharmacokinetic model for the pregnant rat and human. Toxicol.
Appl. Pharmacol. 165, 53-62.
13. Gray, D. G. (1995). A
physiologically based pharmacokinetic model for methyl mercury in the pregnant
rat and fetus. Toxicol. Appl.
Pharmacol. 132, 91-102.
14. Hays, S. M., Elswick,
B. A., Blumenthal, G. M., Welsch, F., Conolly, R. B., and Gargas, M. L. (2000). Development of a physiologically
based pharmacokinetic model of 2-methoxyethanol and 2-methoxyacetic acid
disposition in pregnant rats. Toxicol.
Appl. Pharmacol. 163, 67-74.
15. Kim, C. S., Binienda, Z., and Sandberg,
J. A. (1996). Construction of a physiologically based pharmacokinetic model for
2,4-dichlorophenoxyacetic acid dosimetry in the developing rabbit brain.
Toxicol. Appl. Pharmacol. 136, 250-259.
16. O'Flaherty, E. J., Scott, W., Schreiner,
C., and Beliles, R. P. (1992). A physiologically based kinetic model of rat and
mouse gestation: disposition of a weak acid. Toxicol.
Appl. Pharmacol. 112, 245-256.
17. O'Flaherty, E. J., Nau, H., McCandless,
D., Beliles, R. P., Schreiner, C. M., and Scott, W. J., Jr. (1995). Physiologically based pharmacokinetics of methoxyacetic acid: dose-
effect considerations in C57BL/6 mice. Teratology 52, 78-89.
18. Olanoff, L. S. and Anderson, J. M.
(1980). Controlled release of tetracycline--III: A
physiological pharmacokinetic model of the pregnant rat. J.
Pharmacokinet. Biopharm. 8, 599-620.
19. Terry, K. K., Elswick,
B. A., Welsch, F., and Conolly, R. B. (1995). Development of a physiologically
based pharmacokinetic model describing 2-methoxyacetic acid disposition in the
pregnant mouse. Toxicol. Appl.
Pharmacol. 132, 103-114.
20. Ward, K. W., Blumenthal, G. M., Welsch,
F., and Pollack, G. M. (1997). Development of a
physiologically based pharmacokinetic model to describe the disposition of
methanol in pregnant rats and mice. Toxicol.
Appl. Pharmacol. 145, 311-322.
21. Welsch, F., Blumenthal, G. M., and
Conolly, R. B. (1995). Physiologically based pharmacokinetic
models applicable to organogenesis: extrapolation between species and potential
use in prenatal toxicity risk assessments. Toxicol.
Lett. 82-83, 539-547.
22. You, L., Gazi, E., Archibeque-Engle, S.,
Casanova, M., Conolly, R. B., and Heck, H. d’A. (1999). Transplacental and
lactational transfer of p,p'-DDE in Sprague-Dawley
rats. Toxicol. Appl. Pharmacol. 157,
134-144.
23. Lau, C., Andersen, M. E., Crawford-Brown,
D. J., Kavlock, R. J., Kimmel, C. A., Knudsen, T. B., Muneoka, K., Rogers, J.
M., Setzer, R. W., Smith, G., and Tyl, R. (2000). Evaluation of biologically
based dose-response modeling for developmental toxicity: a workshop report. Regul. Toxicol. Pharmacol. 31, 190-199.
24. Lau, C., Mole, M. L., Copeland, M. F.,
Rogers, J. M., Kavlock, R. J., Shuey, D. L., Cameron, A. M., Ellis, D. H.,
Logsdon, T. R., Merriman, J. and Setzer, R. W. (2001). Toward a biologically
based dose-response model for developmental toxicity of 5-fluorouracil in the
rat: Acquisition of experimental data. Toxicol.
Sci. 59, 37-48.
25. Leroux, B. G., Leisenring, W. M.,
Moolgavkar, S. H., and Faustman, E. M. (1996). A
biologically-based dose-response model for developmental toxicology. Risk
Anal. 16, 449-458.
26. Setzer, R. W., Lau, C., Mole, M. L.,
Copeland, M. F.,
27. Shuey, D. L., Lau, C., Logsdon, T. R.,
Zucker, R. M., Elstein, K. H., Narotsky, M. G., Setzer, R. W., Kavlock, R. J.,
and Rogers, J. M. (1994). Biologically based dose-response
modeling in developmental toxicology: biochemical and cellular sequelae of
5-fluorouracil exposure in the developing rat. Toxicol.
Appl. Pharmacol. 126, 129-144.
28. Shuey, D. L., Setzer, R. W., Lau, C.,
Zucker, R. M., Elstein, K. H., Narotsky, M. G., Kavlock, R. J., and Rogers, J.
M. (1995). Biological modeling of 5-fluorouracil
developmental toxicity. Toxicology 102 ,
207-213.
29. Barton, H. A. and Clewell, H. J., III
(2000). Evaluating noncancer effects of trichloroethylene: dosimetry, mode of
action, and risk assessment. Environ. Health Perspect.
108 Suppl 2, 323-334.
30. Begg, E. J. and Atkinson, H. C. (1993). Modeling of the passage of drugs into milk. Pharmacol. Ther.
59, 301-310.
31. Byczkowski, J. Z. and Fisher, J. W.
(1994). Lactational transfer of tetrachloroethylene in rats.
Risk Anal. 14, 339-349.
32. Byczkowski, J. Z., Kinkead, E. R., Leahy,
H. F., Randall, G. M., and Fisher, J. W. (1994). Computer
simulation of the lactational transfer of tetrachloroethylene in rats using a
physiologically based model. Toxicol.
Appl. Pharmacol. 125, 228-236.
33. Byczkowski, J. Z., Gearhart, J. M., and
Fisher, J. W. (1994). "Occupational" exposure of
infants to toxic chemicals via breast milk. Nutrition 10, 43-48.
34. Byczkowski, J. Z. and Fisher, J. W. (1995).
A computer program linking physiologically based pharmacokinetic model with
cancer risk assessment for breast-fed infants. Comput. Methods
Programs Biomed. 46, 155-163.
35. Fisher, J., Mahle, D., Bankston, L.,
Greene, R., and Gearhart, J. (1997). Lactational transfer of
volatile chemicals in breast milk.
36. Fisher, J. W., Whittaker, T. A., Taylor,
D. H., Clewell, H. J., III, and Andersen, M. E. (1990). Physiologically
based pharmacokinetic modeling of the lactating rat and nursing pup: a
multiroute exposure model for trichloroethylene and its metabolite,
trichloroacetic acid. Toxicol. Appl.
Pharmacol. 102, 497-513.
37. Schreiber, J. S. (1993). Predicted infant exposure to tetrachloroethylene in human breast
milk. Risk Anal. 13, 515-524.
38. Shelley, M. L., Andersen, M. E., and
Fisher, J. W. (1988). An inhalation distribution model for
the lactating mother and nursing child. Toxicol.
Lett. 43, 23-29.
39. Barr, M., Jr., Jensh, R. P. and Brent, R.
L. (1969). Fetal weight and intrauterine position in rats.
Teratology 2, 241-246.
40. Barr, M., Jr., Jensh, R. P. and Brent, R.
L. (1970). Prenatal growth in the albino rat: Effects of number, intrauterine
position and resorptions. Am. J. Anat. 128, 413-427.
41. Bolon, B., Branham, W. S., Warbritton, A.
R., Sheehan, D. M., Young, J. F. (1996). Morphometric
analysis of rat embryos during early development using laser scanning confocal
microscopy (LSCM). Teratology 53, 106 (Abstract).
42. Buelke-Sam, J., Byrd, R. A., and Nelson,
C. J. (1983). Blood flow during pregnancy in the rat: III. Alterations
following mirex treatment. Teratology 27, 401-409.
43. Buelke-Sam, J., Holson, J. F., and
Nelson, C. J. (1982). Blood flow during pregnancy in the rat: II. Dynamics of
and litter variability in uterine flow. Teratology 26, 279-288.
44. Buelke-Sam, J., Nelson, C. J., Byrd, R.
A., and Holson, J. F. (1982). Blood flow during pregnancy in the rat: I. Flow
patterns to maternal organs. Teratology 26, 269-277.
45. Campbell, R. M., Fell, B. F. and Mackie,
W. S. (1974). Ornithine decarboxylase activity, nucleic acids
and cell turnover in the livers of pregnant rats. J. Physiol. 241,
699-713.
46. CD(SD)IGS Study
Group, c/o Charles River Japan, Inc. (1998). Biological Reference Data on CD(SD)IGS Rats - 1998, (T. Matsuzawa and H. Inoue, Eds).
Best Printing Co., Ltd.,
47. Gfatter, R., Hackl, P., Braun, F. (1997).
Effects of soap and detergents on skin surface pH, stratum
corneum hydration and fat content in infants. Dermatol.
195, 258-262.
48. Goedbloed, J. F. (1977). Embryonic and postnatal growth of rat and mouse. V. Prenatal
growth of organs and tissues, general principles: allometric growth, absence of
growth, and the genetic regulation of the growth process. Acta Anat. (
49. Goedbloed, J. F. (1980). Embryonic and postnatal growth of the rat and mouse. VI.
Prenatal growth of organs and tissues: individual organs; final remarks on
parts I-VI, phase transitions. Acta Anat. (
50. Goedbloed, J. F. (1972). The embryonic and postnatal growth of rat and mouse. I. The
embryonic and early postnatal growth of the whole embryo. A
model with exponential growth and sudden changes in growth rate. Acta
Anat. 82, 305-336.
51. Goedbloed, J. F. (1974). The embryonic and postnatal growth of rat and mouse. II. The
growth of the whole animal during the first 24 days after birth in two inbred
mouse strains (CPB-S and DBA-2). Acta Anat. (
52. Goedbloed, J. F. (1975). The embryonic and postnatal growth of rat and mouse. III. growth of the whole animal in the puberty, adult, and
senescence phases in two inbred mouse strains (cpb-s and kba/2). Exponential growth, sudden changes in the growth rate, and a model
for the regulation of the mitotic rate. Acta Anat. (
53. Goedbloed, J. F. (1976). The embryonic and postnatal growth of rat and mouse. IV. Prenatal
growth of organs and tissues: age determination and general growth pattern.
Acta Anat. 95, 8-33.
54. Goedbloed, J. F. (1972). The growth of the mouse during the first 24 days after birth.
Acta Morphol. Neerl. Scand.
10, 372-373.
55. Goedbloed, J. F. (1972). The growth of two inbred mouse strains during the first 24 days
after birth. J. Anat. 111, 493.
56. Griffith, D. R. and Turner, C. W. (1961).
Normal growth of rat mammary glands during pregnancy and
early lactation. Proc. Soc. Exp. Biol. Med. 106, 448-450.
57. Hanwell, A. and Linzell, J. L. (1973). The time course of cardiovascular changes in lactation in the rat.
J. Physiol. 233, 93-109.
58. Heymann, M. A. (1972). Interrelations
of fetal circulations and the placental transfer of drugs. Fed. Proc. 31, 44-47.
59. Hoffman, W. E., Miletich, D. J. and
Albrecht, R. F. (1981). Repeated microsphere injections in
rats. Life Sci. 28, 2167-2172.
60. Hou, P.-C. L. and
Burggren, W. W. (1989). Interaction of allometry and development in the
mouse Mus musculus: Heart rate and hematology. Respir.
Physiol. 78, 265-280.
61. International Commission on Radiation
Protection. (1975). Report of the task group on Reference
Man, ICRP Publication 23 (W.S. Snyder et al. Eds).
62. Ishise, S., Pegram, B. L., Yamamoto, J.,
Kitamura, Y. and Fohlich, E. D. (1980). Reference sample microsphere method:
Cardiac output and blood flows in conscious rats. Am. J.
Physiol. 239, H443-449.
63.
64. Jansky, L. and Hart, J. S. (1968).
Cardiac output and organ blood flow in warm- and cold-acclimated rats exposed
to cold.
65. Kalia, Y. N., Nonato, L. B., Lund, C. H.
and Guy, R. H. (1998). Development of skin barrier function
in premature infants. J. Invest. Dermatol. 111,
320-326.
66. Kimmel, C. A., Kimmel, G. L., White, C.
G., Grafton, T. F., Young, J. F., and Nelson, C. J. (1984). Blood flow changes
and conceptual development in pregnant rats in response to caffeine. Fundam. Appl. Toxicol. 4, 240-247.
67. Knight, C. H. and Peaker, M. (1982). Mammary cell proliferation in mice during pregnancy and lactation
in relation to milk yield. Q. J. Exp. Physiol. 67, 165-177.
68. Koo, W. W. K., Walters, J. C. and
Hockman, E. M. (2000). Body composition in human infants at
birth and postnatally. J. Nutr. 130, 2188-2194.
69. Lipshitz, J., Ahokas, R. A., Broyles, K.
and Anderson, G.D. (1986). Effect of hexoprenaline on
uteroplacental blood flow in the pregnant rat. Am. J. Obstet. Gynecol.
154, 310-314.
70. Lundgren, Y., Karlsson, K. and Ljungbled,
U. (1979). Circulatory changes during pregnancy in spontaneously and renal
hypertensive rats. Clin. Sci. 57, 337-339.
71. Malik, A. B., Kaplan, J. E. and
72. Mattison, D. R., Blann, E. and Malek, A.
(1991). Physiological alterations during pregnancy: Impact on toxicokinetics. Fund. Appl. Toxicol. 16, 215-218.
73. McCarthy, J. C. (1967). Effects of litter size and maternal weight on foetal and placental
weight in mice. J. Reprod. Fertil. 14, 507-510.
74. Middle Atlantic Reproduction and
Teratology Association (MARTA). (1993). Historical control data for development
and reproductive toxicity studies using the Crl:CD® BR
Rat, (P. L. Lang, Ed.), Published by Charles River Laboratories,
75. Naismith, D. J., Richardson, D. P. and
Ritchard, A. E. (1982). The utilization of protein and energy during lactation
in the rat, with particular regard to the use of fat accumulated in pregnancy.
Br. J. Nutr. 48, 433-441.
76. Nanbo, T. (1988). Pharmacokinetics
of clearance in the maternal-fetal amniotic fluid system of the rat. Toxicol. Appl. Pharmacol. 92, 381-389.
77.
78. O'Flaherty, E. J. (1994). Physiologic
changes during growth and development. Environ. Health
Perspect. 102 Suppl 11, 103-106.
79. Popovic, V. P. and Kent, K. M. (1964). 120-Day study of cardiac output in unanesthetized rats. Am.
J. Physiol. 207, 767-770.
80. Ramasastry, P., Downing, D. T., Pochi, P.
E. and Strauss, J. S. (1970). Chemical composition of human
skin surface lipids from birth to puberty. J. Invest. Dermatol.
54, 139-144.
81. Robinson, J. J. (1986). Changes in body composition during pregnancy and lactation.
Proc. Nutr. Soc. 45, 71-80.
82. Sikov, M. R. and Thomas, J. M. (1970). Prenatal growth of the rat. Growth 34, 1-14.
83. Sikov, M. R., Thomas, J. M. and Mahlum,
84. Stanek, K. A., Smith, T. L., Murphy, W.
L. and Coleman, T. G. (1983). Hemodynamic disturbances in the rat as a function
of the number of microspheres injected. Am. J. Physiol. 245,
H920-H923.
85. Trieb, G., Pappriz, G. and Lutzen, L.
(1976). Allometric analysis of organ weights. I. Rats.
Toxicol. Appl. Pharmacol. 35, 531-542.
86. Trotter, M. and Peterson, R. R. (1968). Weight of bone in the fetus-a preliminary report. Growth 32,
83-90.
87. Tsuchiya, M., Ferrone, R. A., Walsh, G.
M. and Frohlich, E. D. (1978). Regional blood flows measured in conscious rats
by combined Fick and microsphere methods. Am. J. Physiol.
250, H357-H30.
88. Tuma, R. F.,
89. West, D. P., Worobec, S. and Solomon, L.
M. (1981). Pharmacology and toxicology of infant skin.
J. Invest. Dermatol. 76, 147-150.
90. White, L., Haines, H. and Adams, T.
(1968). Cardiac output related to body weight in small mammals. Comp. Biochem.
Physiol. 27, 559-565.
91. Wu, P. Y. K., Wong, W. H., Guerra, G.,
Miranda, R., Godoy, R. R., Preston, B., Schoentgen, S. and Levan, N. E. (1980).
Peripheral blood flow in the neonate. 1. Changes in
total, skin and muscle blood flow with gestational and postnatal age. Pediatr. Res. 14, 1374-1378.
92. Yosipovitch, G., Maayan-Metzger, A.,
Merlob, P. and Sirota, L. (2000). Skin barrier properties in
different body areas in neonates. Pediatrics 106, 105-108.
93.
94. Crom, W. R. (1994). Pharmacokinetics
in the child. Environ. Health Perspect. 102
Suppl 11, 111-117.
95. Dijkstra, J., France, J., Dhanoa, M. S.,
Maas, J. A., Hanigan, M. D., Rook, A. J., and Beever, D. E. (1997). A model to
describe growth patterns of the mammary gland during pregnancy and lactation.
J. Dairy Sci. 80, 2340-2354.
96. Gaylor, D. W. and Razzaghi, M. (1992).
Process of building biologically based dose-response models for developmental
defects. Teratology 46, 573-581.
97. Gaylor, D. W. and Chen, J. J. (1993).
Dose-response models for developmental malformations. Teratology 47, 291-297.
98. Krauer, B., Krauer, F. and Hytten, F. E.
(1980). Drug disposition and pharmacokinetics in the
maternal-placental-fetal unit. Pharmac. Ther. 10, 301-328.
99. Luecke, R. H., Wosilait, W. D., Pearce,
B. A., and Young, J. F. (1994). A physiologically based
pharmacokinetic computer model for human pregnancy. Teratology 49,
90-103.
100. Luecke, R. H., Wosilait, W. D., and
Young, J. F. (1995). Mathematical representation of organ
growth in the human embryo/fetus. Int. J. Biomed.
Comput. 39, 337-347.
101. Luecke, R. H., Wosilait, W. D., and
Young, J. F. (1997). Mathematical analysis for teratogenic
sensitivity. Teratology 55, 373-380.
102. Luecke, R. H., Wosilait, W. D., Pearce,
B. A., and Young, J. F. (1997). A computer model and program
for xenobiotic disposition during pregnancy. Comput. Methods
Programs Biomed. 53, 201-224.
103. Luecke, R. H., Wosilait, W. D., and
Young, J. F. (1999). Mathematical modeling of human embryonic
and fetal growth rates. Growth Dev. Aging 63, 49-59.
104. Milsap, R. L. and Jusko, W. J. (1994). Pharmacokinetics in the infant. Environ. Health
Perspect. 102 Suppl 11, 107-110.
105. O'Flaherty, E. J. (1994). Physiologically based pharmacokinetic models in developmental toxicology.
Risk Anal. 14, 605-611.
106. O’Flaherty, E. J., Polak, J. and
Andriot, M. D. (1995). Incorporation of temporal factors into physiologically
based kinetic models for risk assessment. Inhal. Toxicol. 7, 917-925.
107. Welsch, F. (1982). Placental
transfer and fetal uptake of drugs. J. Vet. Pharmacol.
Ther. 5, 91-104.
108. Wilson, J. T., Brown, R. D., Cherek, D.
R., Dailey, J. W., Hilman, B., Jobe, P. C., Manno, B. R., Manno, J. E.,
Redetzki, H. M., and Stewart, J. J. (1980). Drug excretion in human breast
milk: principles, pharmacokinetics and projected consequences. Clin. Pharmacokinet. 5, 1-66.
109. Wosilait, W. D., Luecke, R. H., and
Young, J. F. (1992). A mathematical analysis of human
embryonic and fetal growth data. Growth Dev. Aging 56, 249-257.
110. West, D. P., Worobec, S. and Soloman, L.
M. (1981). Pharmacology and toxicology of infant skin.
J. Invest. Dermatol. 76, 147-150.
111. Young, J. F., Branham, W. S., Sheehan,
D. M., Baker, M. E., Wosilait, W. D., and Luecke, R. H. (1997). Physiological
"constants" for PBPK models for pregnancy. J. Toxicol. Environ.
Health 52, 385-401.
112. Gabrielsson, J. L. (1991). Utilization of physiologically based models in extrapolating
pharmacokinetic data among species. Fundam.
Appl. Toxicol. 16, 230-232.
113. Gabrielsson, J. L. and Larsson, K. S.
(1990). Proposals for improving risk assessment in reproductive toxicology. Pharmacol. Toxicol. 66, 10-17.
114. Kavlock, R. J. and Setzer, R. W. (1996).
The road to embryologically based dose-response models.
Environ. Health Perspect. 104 Suppl 1, 107-121.
115. Kavlock, R. J. (1997). Recent advances
in mathematical modeling of developmental abnormalities using mechanistic
information. Reprod. Toxicol. 11, 423-434.
116. Kavlock, R. J. (1991). Symposium on application of pharmacokinetics in developmental
toxicity risk assessments. Fund. Appl. Toxicol.
16, 213-232.
117. Kimmel, C. A., Wellington, D. G.,
Farland, W., Ross, P., Manson, J. M., Chernoff, N., Young, J. F., Selevan, S.
G., Kaplan, N., and Chen, C. (1989). Overview of a workshop
on quantitative models for developmental toxicity risk assessment.
Environ. Health Perspect. 79, 209-215.
118. Kimmel, C. A. (1996). Developmental
toxicity risk assessment: consensus building, hypothesis formulation, and
focused research. Drug Metab Rev. 28, 85-103.
119. Kimmel, C. A. and Schwetz, B. A. (1997).
Evolution and progress in safety assessment for developmental
and reproductive toxicity [editorial]. Toxicol.
Pathol. 25, 664-667.
120. O'Flaherty, E. J. (1997). Pharmacokinetics, pharmacodynamics, and prediction of developmental
abnormalities. Reprod. Toxicol. 11, 413-416.
121. Sheehan, D. M., Young, J. F., Slikker,
W., Jr., Gaylor, D. W., and Mattison, D. R. (1989). Workshop on risk assessment
in reproductive and developmental toxicology: addressing the assumptions and
identifying the research needs. Regul. Toxicol. Pharmacol. 10, 110-122.
122. Young, J. F. and Holson, J. F. (1978). Utility of pharmacokinetics in designing toxicological protocols
and improving interspecies extrapolation. J. Environ. Pathol.
Toxicol. 2, 169-186.
123. Young, J. F. (1991). Correlation
of pharmacokinetic data with endpoints of developmental toxicity. Fundam. Appl. Toxicol. 16, 222-224.
124. Young, J. F. (1998). Physiologically-based
pharmacokinetic model for pregnancy as a tool for investigation of
developmental mechanisms. Comput. Biol. Med.
28, 359-364.
125. Bruckner, J. V. (2000). Differences in
sensitivity of children and adults to chemical toxicity: the NAS panel report. Regul. Toxicol. Pharmacol.
31, 280-285.
126. Faustman, E. M., Silbernagel, S. M.,
Fenske, R. A., Burbacher, T. M., and
127. Selevan, S. G., Kimmel, C. A., and
Mendola, P. (2000). Identifying critical windows of exposure
for Children's health [In Process Citation]. Environ. Health
Perspect. 108 Suppl 3, 451-455.
128. Andersen, M. E., Clewell, H. J., III,
Gearhart, J., Allen, B. C., and Barton, H. A. (1997). Pharmacodynamic
model of the rat estrus cycle in relation to endocrine disruptors. J.
Toxicol. Environ. Health 52, 189-209.
129. Andersen, M. E. and Barton, H. A. (1999).
Biological regulation of receptor-hormone complex
concentrations in relation to dose-response assessments for endocrine-active
compounds. Toxicol. Sci. 48, 38-50.
130. Barton, H. A. and Andersen, M. E.
(1998). A model for pharmacokinetics and physiological feedback among hormones
of the testicular-pituitary axis in adult male rats: a framework for evaluating
effects of endocrine active compounds. Toxicol. Sci.
45, 174-187.
131. Schlosser, P. M. and Selgrade, J. F.
(2000). A Model of Gonadotropin Regulation during the Menstrual Cycle in Women:
Qualitative Features. Environ. Health Perspect. 108
Suppl 5, 873-881.
132.
133. Andersen, M. E., Conolly, R. B.,
Faustman, E. M., Kavlock, R. J., Portier, C. J., Sheehan, D. M., Wier, P. J.,
and Ziese, L. (1999). Quantitative mechanistically based dose-response modeling
with endocrine-active compounds. Environ. Health Perspect.
107 Suppl 4, 631-638.
134. Barton, H. A. and Andersen, M. E.
(1997). Dose-response assessment strategies for
endocrine-active compounds. Regul. Toxicol. Pharmacol. 25, 292-305.
135. Barton, H. A., Andersen, M. E., and
Allen, B. C. (1998). Dose-response characteristics of uterine responses in rats
exposed to estrogen agonists. Regul. Toxicol. Pharmacol. 28, 133-149.
136. Barton, H. A. and Andersen, M. E.
(1998). Endocrine active compounds: from biology to dose response assessment.
Crit. Rev. Toxicol. 28, 363-423.
137. Ben Jonathan, N., Cooper, R. L., Foster,
P., Hughes, C. L., Hoyer, P. B., Klotz, D., Kohn, M., Lamb, D. J., and Stancel,
G. M. (1999). An approach to the development of quantitative
models to assess the effects of exposure to environmentally relevant levels of
endocrine disruptors on homeostasis in adults. Environ. Health Perspect. 107 Suppl 4, 605-611.
138. Besunder, J. B., Reed, M. D., and
Blumer, J. L. (1988). Principles of drug biodisposition in
the neonate. A critical evaluation of the
pharmacokinetic-pharmacodynamic interface (Part I). Clin.
Pharmacokinet. 14, 189-216.
139. Crom, W. R. (1994). Pharmacokinetics
in the child. Environ. Health Perspect. 102
Suppl 11, 111-117.
140. Dvorchik, B. H., Stenger, B. G., and
Quattropani, S. L. (1974). Fetal hepatic drug metabolism in
the nonhuman primate, Macaca arctoides. Drug Metab.
Dispos. 2, 539-544.
141. Giachelli, C. M. and Omiecinski, C. J.
(1987). Developmental regulation of cytochrome P-450 genes in
the rat. Mol. Pharmacol. 31, 477-484.
142. Glockner, R. and Karge, E. (1993). Postnatal development of body mass and of hepatic xenobiotics
biotransformation of male rats with low body mass at birth. Exp.
Toxicol. Pathol. 45, 145-148.
143. Gu, J., Su, T.,
Chen, Y., Zhang, Q. Y., and Ding, X. (2000). Expression of biotransformation
enzymes in human fetal olfactory mucosa: potential roles in developmental
toxicity. Toxicol. Appl. Pharmacol. 165, 158-162.
144. Iwasaki, K., Shiraga, T., Takeshita, K.,
Katashima, M., Nagase, K., Tada, K., Noda, K., and Noguchi, H. (1993). Perinatal development of amine, alcohol and phenol
sulfoconjugations in the rat. Res. Commun. Chem. Pathol. Pharmacol. 81, 183-190.
145. Juchau, M. R., Chao, S. T., and
Omiecinski, C. J. (1980). Drug metabolism by the human fetus.
Clin. Pharmacokinet. 5,
320-339.
146. Juchau, M. R., Lee, Q. P. and Fantel, A.
G. (1992). Xenobiotic biotransformation/bioactivation in organogenesis-stage
conceptual tissues: Implications for embryotoxicity and teratogenesis. Drug Metab. Rev. 24(2), 195-238.
147.
148. Klinger, W. and Muller, D. (1976). Developmental aspects of xenobiotic transformation. Environ.
Health Perspect. 18, 13-23.
149.
150. Mannering, G. J. (1985). Drug metabolism in the newborn. Fed.
Proc. 44, 2302-2308.
151. Miller, M. S., Juchau, M. R.,
Guengerich, F. P., Nebert, D. W., and Raucy, J. L. (1996). Drug
metabolic enzymes in developmental toxicology. Fundam.
Appl. Toxicol. 34, 165-175.
152. Omiecinski, C. J., Hassett, C., and
Costa, P. (1990). Developmental expression and in situ
localization of the phenobarbital- inducible rat hepatic mRNAs for cytochromes
CYP2B1, CYP2B2, CYP2C6, and CYP3A1. Mol. Pharmacol. 38, 462-470.
153. Omiecinski, C. J., Redlich, C. A., and
Costa, P. (1990). Induction and developmental expression of cytochrome P450IA1
messenger RNA in rat and human tissues: detection by the polymerase chain
reaction. Cancer Res. 50, 4315-4321.
154. Omiecinski, C. J., Aicher, L., and
Swenson, L. (1994). Developmental expression of human
microsomal epoxide hydrolase. J. Pharmacol. Exp. Ther. 269, 417-423.
155. Pasanen, M. (1999). The
expression and regulation of drug metabolism in human placenta. Adv. Drug Deliv. Rev. 38, 81-97.
156. Pedersen, R. A., Meneses, J., Spindle,
A., Wu, K., and Galloway, S. M. (1985). Cytochrome P-450
metabolic activity in embryonic and extraembryonic tissue lineages of mouse
embryos. Proc. Natl. Acad. Sci. U.S.A 82, 3311-3315.
157. Rane, A. and Tomson, G. (1980). Prenatal and neonatal drug metabolism in man. Eur. J. Clin. Pharmacol. 18, 9-15.
158. Raucy, J. L. and Carpenter, S. J.
(1993). The expression of xenobiotic-metabolizing cytochromes
P450 in fetal tissues. J. Pharmacol. Toxicol.
Methods 29, 121-128.
159. Renwick, A. G. (1998). Toxicokinetics in infants and children in relation to the ADI and
TDI. Food Addit. Contam 15 Suppl, 17-35.
160. Rich, K. J. and Boobis, A. R. (1997). Expression and inducibility of P450 enzymes during liver ontogeny.
Microsc. Res. Tech. 39, 424-435.
161. Vogel-Bindel, U., Bentley, P., and
Oesch, F. (1982). Endogenous role of microsomal epoxide
hydrolase. Ontogenesis, induction inhibition, tissue distribution,
immunological behavior and purification of microsomal epoxide hydrolase with 16
alpha, 17 alpha- epoxyandrostene-3-one as substrate.
Eur. J. Biochem. 126, 425-431.
162. Bazare, J. J., Jr., Nelson, C. J., and
Young, J. F. (1990). Pharmacokinetics of 2,4,5-T in
mice as a function of dose and gestational status. Reprod. Toxicol.
4, 137-144.
163. Byrd, R. A., Young, J. F., Kimmel, C.
A., Morris, M. D., and Holson, J. F. (1982). Computer
simulation of mirex pharmacokinetics in the rat. Toxicol.
Appl. Pharmacol. 66, 182-192.
164. Clarke, D. O., Mebus, C. A., Miller, F.
J., and Welsch, F. (1991). Protection against 2-methoxyethanol-induced
teratogenesis by serine enantiomers: studies of potential alteration of
2-methoxyethanol pharmacokinetics. Toxicol. Appl.
Pharmacol. 110, 514-526.
165. Clarke, D. O., Duignan, J. M., and
Welsch, F. (1992). 2-Methoxyacetic acid dosimetry-teratogenicity relationships in
CD-1 mice exposed to 2-methoxyethanol. Toxicol. Appl.
Pharmacol. 114, 77-87.
166. Cristofol, C., Carretero, A., Fernandez,
M., Navarro, M., Sautet, J., Ruberte, J., and Arboix, M. (1995). Transplacental transport of netobimin metabolites in ewes. Eur. J. Drug Metab Pharmacokinet. 20, 167-171.
167. Cristofol, C., Navarro, M., Franquelo,
C., Valladares, J. E., Carretero, A., Ruberte, J., and Arboix, M. (1997).
Disposition of netobimin, albendazole, and its metabolites in the pregnant rat:
developmental toxicity. Toxicol. Appl. Pharmacol. 144,
56-61.
168. Cristofol, C., Franquelo, C., Navarro,
M., Carretero, A., Ruberte, J., and Arboix, M. (1997). Comparative
pharmacokinetics of netobimin metabolites in pregnant ewes. Res. Vet.
Sci. 62, 117-120.
169. Franklin, C. A., Inskip, M. J.,
Baccanale, C. L., Edwards, C. M., Manton, W. I., Edwards, E., and O'Flaherty,
E. J. (1997). Use of sequentially administered stable lead
isotopes to investigate changes in blood lead during pregnancy in a nonhuman
primate (Macaca fascicularis). Fundam. Appl.
Toxicol. 39, 109-119.
170. Gabrielsson, J., Paalzow, L., Larsson,
S., and Blomquist,
171. Gaylor, D. W., Sheehan, D. M., Young, J.
F., and Mattison, D. R. (1988). The threshold dose question
in teratogenesis [letter]. Teratology 38, 389-391.
172. Hansen, D. K., LaBorde, J. B., Wall, K.
S., Holson, R. R., and Young, J. F. (1999). Pharmacokinetic
considerations of dexamethasone-induced developmental toxicity in rats. Toxicol. Sci. 48, 230-239.
173. Kim, C. S. and O'Tuama, L. A. (1981).
Choroid plexus transport of 2,4-dichlorophenoxyacetic
acid: interaction with the organic acid carrier. Brain Res. 224, 209-212.
174. Kim, C. S., O'Tuama, L. A., Mann, J. D.,
and Roe, C. R. (1983). Saturable accumulation of the anionic herbicide, 2,4- dichlorophenoxyacetic acid (2,4-D), by rabbit choroid
plexus: early developmental origin and interaction with salicylates. J. Pharmacol. Exp. Ther.
225, 699-704.
175. Kimmel, C. A., Wilson, J. G., and
Schumacher, H. J. (1971). Studies on metabolism and
identification of the causative agent in aspirin teratogenesis in rats.
Teratology 4, 15-24.
176. Kimmel, C. A. and Young, J. F. (1983). Correlating pharmacokinetics and teratogenic endpoints. Fundam. Appl. Toxicol. 3, 250-255.
177. Li, X., Lutz, W. D. and Rozman, K. K.
(1995). Toxicokinetics of 2,3,7,8-tetrachlorodibenzo-p-dioxin
in female Sprague-Dawley rats including placental and lactational transfer to
fetuses and neonates. Fund. Appl. Toxicol. 27, 70-76.
178. Marciniak, M., Chas, J., and
Baltrukiewicz, Z. (1996). Transport of lanthanides in milk
into suckling rats. Q. J. Nucl. Med. 40, 351-358.
179. Mason, H. J. (2000). A
biokinetic model for lead metabolism with a view to its extension to pregnancy
and lactation; (1). Further validation of the original
model for non-pregnant adults. Sci. Total Environ. 246, 69-78.
180. Mebus, C. A., Clarke,
D. O., Stedman, D. B., and Welsch, F. (1992). 2-Methoxyethanol metabolism in
pregnant CD-1 mice and embryos. Toxicol. Appl.
Pharmacol. 112, 87-94.
181. Nau, H., Welsch, F., Ulbrich, B., Bass,
R., and Lange, J. (1981). Thiamphenicol during the first trimester of human
pregnancy: placental transfer in vivo, placental uptake in vitro, and
inhibition of mitochondrial function. Toxicol. Appl.
Pharmacol. 60, 131-141.
182. Perkins, R. A., Ward, K. W., and
Pollack, G. M. (1995). Comparative toxicokinetics of inhaled
methanol in the female CD-1 mouse and Sprague-Dawley rat. Fundam. Appl. Toxicol. 28, 245-254.
183. Perkins, R. A., Ward, K. W., and
Pollack, G. M. (1995). A pharmacokinetic model of inhaled
methanol in humans and comparison to methanol disposition in mice and rats.
Environ. Health Perspect. 103, 726-733.
184. Perkins, R. A., Ward, K. W., and
Pollack, G. M. (1996). Methanol inhalation: site and other factors influencing
absorption, and an inhalation toxicokinetic model for the rat. Pharm. Res. 13,
749-755.
185. Rogan, W. J. and Ragan, N. E. (1994). Chemical contaminants, pharmacokinetics and the lactating mother.
Environ. Health Perspect.
102 Suppl 11, 89-95.
186. Rowland, J. M., Slikker, W., Jr.,
Holder, C. L., Denton, R., Prahalada, S., Young, J. F., and Hendrickx, A. G.
(1989). Pharmacokinetics of doxylamine given as Bendectin in
the pregnant monkey and baboon. Reprod. Toxicol.
3, 197-202.
187. Sandberg, J. A., Duhart, H. M., Lipe,
G., Binienda, Z., Slikker, W., Jr., and Kim, C. S. (1996). Distribution of 2,4-dichlorophenoxyacetic acid (2,4-D) in maternal and fetal
rabbits. J. Toxicol. Environ. Health 49, 497-509.
188. Scott, W. J., Collins, M. D. and Nau, H.
(1994). Pharmacokinetic determinants of embryotoxicity in rats associated with
organic acids. Environ. Health Perspect. 102 Suppl 11,
97-101.
189. Scott, W. J., Fradkin, R., Wittfoht, W.
and Nau, H. (1989). Teratologic potential of 2-methoxyethanol
and transplacental distribution of its metabolite, 2-methoxyacetic acid, in
non-human primates. Teratol. 39, 363-373.
190. Sikov, M. R. and Kelman, B. J. (1989). Factors affecting the placental transfer of actinides. Hlth. Physics. 57, 109-114.
191. Sleet, R. B., John-Greene, J. A., and
Welsch, F. (1986). Localization of radioactivity from 2-methoxy[1,2-14C]ethanol
in maternal and conceptus compartments of CD-1 mice. Toxicol.
Appl. Pharmacol. 84, 25-35.
192. Sleet, R. B., Greene, J. A., and Welsch,
F. (1988). The relationship of embryotoxicity to disposition
of 2-methoxyethanol in mice. Toxicol. Appl.
Pharmacol. 93, 195-207.
193. Sleet, R. B., Welsch, F., Myers, C. B.,
and Marr, M. C. (1996). Developmental phase specificity and
dose-response effects of 2-methoxyethanol in rats. Fundam. Appl.
Toxicol. 29, 131-139.
194. Slikker, W., Jr., Holder, C. L., Lipe,
G. W., Bailey, J. R., and Young, J. F. (1989). Pharmacokinetics
of doxylamine, a component of Bendectin, in the rhesus monkey. Reprod. Toxicol. 3, 187-196.
195. Sumner, S. J., Stedman, D. B., Cheng, S.
Y., Welsch, F., and Fennell, T. R. (1995). Dose effects on the excretion of
urinary metabolites of 2-[1,2,methoxy-
13C]methoxyethanol in rats and mice. Toxicol. Appl.
Pharmacol. 134, 139-147.
196. Terry, K. K., Elswick,
B. A., Stedman, D. B., and Welsch, F. (1994). Developmental phase alters dosimetry-teratogenicity
relationship for 2- methoxyethanol in CD-1 mice. Teratology 49, 218-227.
197. Ward, K. W. and Pollack, G. M. (1996). Comparative toxicokinetics of methanol in pregnant and nonpregnant
rodents. Drug Metab Dispos. 24, 1062-1070.
198. Ward, K. W. and Pollack, G. M. (1996). Use of intrauterine microdialysis to investigate methanol-induced
alterations in uteroplacental blood flow. Toxicol.
Appl. Pharmacol. 140, 203-210.
Air Resources Board (1998). Research Notes for 1989 through 1998, http://www.arb.ca.gov/research/resnotes/toc.htm.
This section provides information on the objectives,
approach, requirements, design, and data exchange protocol of the
Comprehensive Chemical Exposure Framework (CCEF) and the generic Model Flow
Diagrams developed for the framework.
The objectives state the aims and goals of the Framework, the approach
briefly describes the proposed work, the requirements define
"what" the framework should do, the design describes
"how" the framework should do it, and the data exchange protocol
provides the file types, structure, and format for models and databases to
communicate.
The purpose of this research is twofold: design an
overarching framework that is comprehensive, logical, and useful for industrial
needs; and within that framework identify research needs based on gap and
sensitivity analyses. The overarching framework, called the Comprehensive
Chemical Exposure Framework, will be used to house models, algorithms, and
databases associated with micro-environmental exposure modeling. The research
needs are defined for representative high volume compounds that could be
involved in exposure scenarios.
The Attributes for the CCEF are listed here with a detailed description in
the Requirements section.
· Comprehensive
· Modular
· User-friendly
· Multi-route, Multi-pathway, & Multi-source for Varying durations
· Accurate
· Open code
· Probabilistic
· Dose-responsive
· Mass-conservative
The American Chemistry Council's Human Health Exposure
Assessment Technical Implementation Panel identified four example exposure
scenarios associated with different high-volume compounds that the design of
the Comprehensive Chemical Exposure Framework (CCEF) should address. Using the
design of the CCEF and its applicability to the four example exposure
scenarios, research gaps and needs were identified. The filling of these
research gaps and needs would increase the confidence and reduce the
uncertainty of the human health exposure assessment results.
The four example exposure scenarios were used to guide the types of models,
algorithms, and databases required to evaluate each scenario. Model and Process
Flow Diagrams were developed for each exposure scenario and research gaps were
identified based on publicly available information. Once the gap analysis was
completed for the source, transport, exposure, and health impact components of
each scenario, a qualitative sensitivity of the entire system was conducted.
The Gap Analysis focused on reviewing the Process Flow Diagrams that had
been developed to properly evaluate each of the four example exposure scenarios
to identify models, algorithms, and databases that were missing or unknown.
This was done for the Source, Transport, Exposure, and Impacts components of
the exposure scenarios. In some cases, models existed but they were determined
to be too simplistic or conservative and were considered a research gap. In
these cases, alternative paths were explored to determine the type of model or
algorithm required to fill the research gap.
A qualitative Sensitivity Analysis was also performed on the models and
algorithms identified for the various compounds and exposure scenarios. This
was conducted on the four modeling components using the following guidelines:
The Gap and Sensitivity Analyses provide guidance on what research should be
conducted in the near future to improve the risk estimates from exposure to
high volume compounds.
The design of the CCEF leverages the concepts associated with multiple
existing framework system software and exposure modeling methods that are at
the forefront of the scientific community, as well as new innovative concepts.
The key to the CCEF will be its flexibility, its usability, and its ability to
integrate and accommodate the different exposure models (existing and future)
required to meet American Chemistry Council and industry needs.
The CCEF design links models and databases together so they can
communicate transparently with each other. The CCEF is the overarching
framework that houses the models and databases as "separate"
components and provides the data file protocols for communication between
components. A model is represented by a specific set of algorithms that perform
a specific function (e.g., a drinking water ingestion model). A module represents
a general set of model types, defined by their "real world"
functions, and includes the model, its user interface, and any pre- and
post-processors that facilitate linkages and communication with other
components (e.g., models and databases). This effort focuses on the design of
an overarching framework and not on the models that are housed within the
framework.
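As a purely illustrative sketch (not CCEF code), the distinction between a model, a module, and the framework might be expressed as follows; the class and method names are assumptions made for this example only.

    class Module:
        """A module = a model plus its user interface and pre-/post-processors."""
        def __init__(self, name, model_func, preprocessor=None, postprocessor=None):
            self.name = name
            self.model_func = model_func        # the specific set of algorithms (the model)
            self.preprocessor = preprocessor    # maps framework data files to model input
            self.postprocessor = postprocessor  # maps model output back to framework files

        def run(self, framework_data):
            model_input = self.preprocessor(framework_data) if self.preprocessor else framework_data
            model_output = self.model_func(model_input)
            return self.postprocessor(model_output) if self.postprocessor else model_output

    # The framework itself would only pass data files between Module.run calls;
    # it takes no responsibility for what happens inside model_func.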
To adequately assess the risks associated with chemicals released into the
environment, researchers require software tools for accurately estimating human
exposure under a variety of exposure scenarios. The design of a software
technology system begins with a set of requirements. Requirements are
characteristics and behaviors that a piece of software must possess to function
adequately for its intended purpose. A requirement is sometimes called an
attribute, and a good requirement is testable.
The purpose of these requirements is to state those conditions that define
the design associated with the CCEF. To help define and develop requirements
that will provide some legacy as the state-of-the-art in software technology
advances, the requirements for the design of the CCEF shall initially consider
those that have been suggested by experts, including those recently documented
and outlined by Whelan and Nicholson (2001). The following requirements are
grouped by the categories outlined by the American Chemistry Council for the
CCEF.
The detailed requirements are presented in the Design section.
The design of a Framework is a strategy for meeting
requirements and determines how a requirement is implemented. The purpose of
the design is to state those conditions upon which software specifications can
be developed to support development of the Comprehensive Chemical Exposure
Framework (CCEF).
Figure 3.4.1 below depicts the following aspects of the Task and Sequence Manager:
Figure 3.4.1 Task and Sequence Manager Diagram
A Module consists of the following components:
Output consists of the following:
Note: Some data sources may feed more than one module. When multiple data
sources fill the same data need of a module but differ on what the value
should be, a hierarchy must be established to select which data to use. The
assumptions of the scenarios (chemicals, ages, exposure situations, etc.) will
always be the primary data source.
The client (local user) side of the CCEF refers to the host system.
In this case, as noted by the requirements, the host system will be a personal
computer operating within the Windows environment. Therefore, the System User
Interface will be personal computer compatible, but operation of models and
databases is not limited to a personal computer and can exist on and operate
from remote personal computer and non-personal computer locations. The client
side (local user) of the design focuses on the data transfer protocol, data
structure, and system structure. The server side (remote user) of the design
focuses on the linkage protocol to remotely access and utilize databases and
models. The design of the CCEF is divided into the following subject areas:
1. Concepts of a Framework - Architectural differences between a
framework, a model, and a module are discussed
2. System User Interface - The system interface proposed for the CCEF, which
represents the forum for visually describing the conceptual site model, is
presented. The conceptual site model is a mechanism to convey the problem to
the user in graphical form. The System User Interface directly interacts with
the user, so it is very important to ensure it is user-friendly and relatively
intuitive
3. Data Exchange Protocols - The data transfer protocols describe the
foundation upon which data is universally transferred throughout the system.
These protocols represent the heart of the CCEF
4. Sensitivity/Uncertainty Considerations - Utilizing the data transfer
protocols, this section discusses how sensitivity/uncertainty models and
parameter estimation models can be incorporated into the system
5. Model Space and Time Considerations - Protocols for linking disparate
models in both space and time are discussed
6. System Integration Tools - To help facilitate the ability to link
disparate models, databases, and frameworks into the Comprehensive Chemical
Exposure Framework, a series of system software helpers (i.e., "editors") are
required to step the model or database developer through the integration (and
application) process
7. Server Side of the CCEF - Software tools, which allow the user to link to
remote models and databases, are presented
8. Lock and Key - Software tools, which allow a user to fix the conceptual
site model by locking icon types and connections between icons, and to lock
the models available under each icon, are presented
9. Summary of the CCEF Design - A summary of the design,
requirement-by-requirement, is presented.
The concept of developing a Framework, as opposed to a model, is
critical to the design philosophy of a technology software system that meets
the needs outlined by the requirements. Models are designed to perform a
specific set of calculations. Although models exist that describe a given
real-world phenomenon (e.g., fate and transport in an aquifer, exposure to
contaminated sediments, etc.), a model can also be composed of many parts
(e.g., LifeLine, CARES, TRIM, HWIR, etc.), which does not lend itself as a
mechanism to link disparate components together to more accurately describe a
solution to the problem. The Framework takes on the responsibility for
connectivity, while the model takes on the responsibility of consuming and
producing information as part of the assessment.
For example, Figure 3.4.2.1 presents an
example of a standard U.S. Environmental Protection Agency risk paradigm for
performing a multiple media risk assessment, linking source, fate &
transport, exposure, and risk/hazard (http://www.epa.gov/epaoswer/hazwaste/id/hwirwste/risk.htm).
The ellipsoids provide examples of models (e.g., "modules" in the figure). The
essence of the Framework is represented by the arrows connecting the models
(i.e., modules). The protocol for transferring information from one model or
database to the next is the responsibility of the Framework. The choice
of the model that represents the best solution to the problem is the
responsibility of the analyst or the user. The Framework should take no
responsibility for what happens inside a model but takes full responsibility
for ensuring that data consumed and produced are appropriately provided to the
next model in line and that models fire in the correct sequence. The Framework
controls the flow of information but not how the information is used.
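A minimal sketch, assuming a framework that controls only connectivity and firing order, is shown below; the dictionary-keyed data store and module callables are hypothetical stand-ins for the file-based protocol described later in this section.

    def run_assessment(modules, connections, initial_data):
        """Fire modules in the user-defined sequence, passing produced data along.

        modules:     {name: callable that takes and returns a dict of named datasets}
        connections: ordered list of (producer_name, consumer_name) pairs
        """
        data_store = dict(initial_data)
        fired = set()
        for producer, consumer in connections:
            for name in (producer, consumer):
                if name not in fired:
                    # The framework ignores each module's internals; it only moves data.
                    data_store.update(modules[name](data_store))
                    fired.add(name)
        return data_store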
By utilizing this approach in defining the difference between models and
frameworks, solutions to problems can be designed to best meet the needs of the
client. For example, Figure 3.4.2.2 illustrates how the
features of micro-environmental exposure modeling can best be combined with the
U.S. Environmental Protection Agency's standard fate and transport and exposure
modeling. In this example, the analyst is interested in using a
micro-environmental model to track the ramifications of chemical exposures,
including a discrete (i.e., short period of time) exposure from a nearby waste
site. Because fate and transport models and their results are considerably more
accurate, using standard risk assessment approaches, than those inherently
included in the micro-environmental model, they can be linked to the
micro-environmental models. A similar design could be established when a more
accurate lung model is needed to produce/consume information for/from the
micro-environmental model. The user could also set up a comparison
between two different micro-environmental models, using the same input data,
to determine the differences between them and which one would best meet the
needs of the problem.
Figure 3.4.2.2 also illustrates that
databases can be viewed in a similar manner as models. Using this design, any
database can be linked to any model, and the user is free to input
user-specific information. As with linking models, any database can be linked
to any other database. For example, if a site-specific database lacks the
necessary information to fulfill all of the needs of a model, regional and
national databases could be linked to the site-specific database to fill the
gaps that might exist, prioritizing the data where appropriate.
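The database-chaining idea can be illustrated with a short, hedged sketch: the more site-specific source is preferred, and gaps are filled from the upstream regional and national sources. The dictionaries below stand in for linked database icons, and the parameter names and values are purely hypothetical.

    def fill_data_gaps(*sources):
        """Merge data sources in priority order (the first argument wins)."""
        merged = {}
        for source in sources:
            for key, value in source.items():
                merged.setdefault(key, value)   # keep the higher-priority value
        return merged

    site_db     = {"soil_pH": 6.8}
    regional_db = {"soil_pH": 6.5, "annual_rainfall_cm": 90}
    national_db = {"soil_pH": 6.2, "annual_rainfall_cm": 75, "population_density": 30}

    module_input = fill_data_gaps(site_db, regional_db, national_db)
    # -> {'soil_pH': 6.8, 'annual_rainfall_cm': 90, 'population_density': 30}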
As an illustrative example of the design, four exposure scenarios have been
identified for conceptual analysis; definitions of each are presented in the
scenarios section. Typical information flow diagrams, used to describe these
four exposure scenarios, are presented in Figure 3.4.1, which illustrates the
connectivity from source term through risk, with boxes representing models and
ellipsoids representing the information flow between models. The Framework
concept can easily be expanded to the four exposure scenarios, outlined in the
scenarios section, with each information flow diagram described therein.
Using Scenario 3 as an illustrative example,
a process flow diagram, as illustrated in the Scenario Model
3 Flow Diagram, can be developed to describe the process of
implementing occupational exposure of VOC compound Group D (benzene, toluene,
and n-hexane) to an adult male. In the process flow diagram, algorithms and
models can be used to describe each of the boxes, and these process boxes can
be matched to the information flow diagram to provide the substance behind the
source, fate and transport, and effects/impacts modeling.
The following design corresponds with the Requirements.
An Application Programming Interface, using a
series of Dynamic Link Libraries, will be utilized to transfer information
between modules and between modules and databases. Dictionary (DIC) files will
describe the characteristics of the data to be transferred and the boundary
conditions that link modules together. Data transfer will be through Dynamic
Link Libraries, where the quality of the data can be inspected. Four universal
data Application Programming Interfaces have been identified to support the
data exchange protocol (a sketch of a possible dictionary entry and
input/output check follows the list):
A. Input/Output - The input/output Application
Programming Interface shall consider
• range checking of parameters with storage of information
• data retrieval
• data storage
• units checking
• open/close data sources
• monotonic checks (e.g., time in a time series)
• meta data functions [cardinality, units, definitions, type (e.g., real,
integer, etc.), number of elements, etc.]
• automatic call for conversions (e.g., units conversion)
B. System (Read Only) - The System Read Only Application
Programming Interface shall consider
• error handling functions (e.g., assertions, errors, and warnings)
• Command line functions (not standard in FORTRAN)
• producer/consumer relationships
• conceptual site meta data (e.g., type, qualifiers, counts, available models,
description files, Domain, Class, Group, SubGroup)
• Read note
C. System (Write) - The System Write Application
Programming Interface shall consider
• setting arguments
• setting producer/consumer relationships
• creating a new module
• locking/unlocking conceptual site models
• model selection
• calling and running interfaces and models
• Run calls between models
• move icon placement routines
• Call-back when error occurs
• adding notes
D. Conversion - The Conversion Application
Programming Interface shall consider
• a list of available units
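The sketch below suggests what a dictionary entry and the associated Input/Output checks might look like; the field names, unit table, and function are assumptions for illustration only, since the actual CCEF dictionary file format would be fixed by the data exchange protocol.

    DICTIONARY_ENTRY = {
        "parameter": "air_concentration",
        "definition": "Time-varying chemical concentration in indoor air",
        "type": "real",
        "units": "mg/m3",
        "minimum": 0.0,
        "maximum": 1.0e3,
        "monotonic_index": "time",   # e.g., time values in a time series must increase
    }

    UNIT_CONVERSIONS = {("ug/m3", "mg/m3"): 1.0e-3}   # automatic units conversion

    def check_and_store(entry, values, units):
        """Range- and unit-check a data set before it is passed to the next module."""
        factor = 1.0 if units == entry["units"] else UNIT_CONVERSIONS[(units, entry["units"])]
        converted = [v * factor for v in values]
        if any(v < entry["minimum"] or v > entry["maximum"] for v in converted):
            raise ValueError(entry["parameter"] + " outside allowed range")
        return converted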
The explanation of Figure 3.4.3,
as it relates to the components comprising the server side of the CCEF design,
is presented in the Appendix. The various editors represent
software for accessing remote databases. The Model Owner Tool, Module Execution
Tool, and Remote Module Client represent software for accessing and running
remote models (i.e., remote computing).
The tiered icon system allows the CCEF to link to outside frameworks,
especially those at remote locations. The icon palette includes a Class for
Frameworks, both Closed and Linkable, which therefore represents the icon
choice under the Class type. If these Frameworks have the correct linkage
specifications to communicate with other icons, they can be pictorially
connected directly to those icons. It is anticipated that Frameworks could be
located in different directories, as a "remote" application. The server side
of the CCEF design accounts for linkages to remote locations, where models,
databases, or other frameworks might be housed. Frameworks would be linked with
the system in the same manner in which all components are linked, using the
Application Programming Interface with the consuming and producing
Dictionaries.
If the system provides a database, it has the
responsibility to "support" that database. This means the system
developers (i.e., the system) are responsible for the content of the database
and for ensuring that the database is consistent and conforms to the
description files, dictionary files, and Data Client Editor specifications
and linkage protocols; therefore, the Data Client Editor, dictionary files, and
description files are the system's responsibility.
A system database (i.e., an original database) can be
located on the local server or at a remote location. As with models, anyone can
"access" a new database that is not supported by the system (i.e., the system
developers) but is supported by someone else; they can "add"
functionality to access the new database. Note that this procedure is similar
to someone adding a new model to the system, a model that is only
applicable to their needs. To add a new model, the developer addresses the
protocols for a dictionary file and a description file. The main difference
between model-based icons and database icons is that the linkage information
between the model icons must be complete, but the linkage information between a
database icon and other icons can be incomplete, because the database may not
be able to service all of the data needs of the model.
System databases are supported by the system
(e.g., chemical or lifeform lists). To a certain extent, the system also
services and supports the content of databases. For example, if the analyst is
implementing a chemical assessment, all of the modules need to know what the
chemical is; otherwise, all the modules will expect a different name and
identifier, and none of the models will understand which chemical is being
passed. This information is global and needs to reside in a system database.
Currently envisioned system databases will be separated out from module
databases. Because system databases are global, they do not need to be
physically linked into the conceptual site model as they are already available
to all modules in the system.
Module databases are module specific, and
those databases are linked directly to the models to which they supply
information. These databases are identified through the traditional Drag & Drop
features of the conceptual site model work space. These Module databases
(which constitute linkages to databases outside the system) are not supported
by the system but link into the system through editors, which are
system-supported, universal software that works for most well-structured and
well-designed databases. These "other" databases support specific models
or model types. The Data Client Editor is database-specific and has a specific
dictionary file associated with it. These databases are akin to models in the
system: the system takes no responsibility for the models; it only ensures that,
when the specifications are met, the information is transferred properly. The
same is true for the databases. In effect, the module-linkage philosophy is
being reused for database linkage.
For example, when linking national and regional
databases, the national database would be associated with the upstream database
icon, and the regional database would be associated with the downstream
database icon. Under this situation, the regional database would populate its
data gaps with data from the national database. The conceptual site model icon
linkage sequence would have a national database icon linked to a regional
database icon linked to a module icon. Because the regional database and module
icons are directly linked, the user is being informed that the most detailed
and appropriate data are being used in the assessment; that is, the regional
database is the primary database supplying information to the module. The
design for this requirement will be enforced for each system-supported Data
Client Editor. This will also be strongly recommended for each Data Client
Editor not supported by the system (i.e., developed by those not representing
the system).
The Application Programming Interface and
Dictionary file design allows for the "Plug & Play" feature,
which is the most important feature of the design. By ensuring Plug & Play,
the CCEF inherently includes the ability to
• link any type of model, database, or framework into the system to communicate
with any other component.
• allow model developers, government organizations, private companies, etc. to
incorporate their own models and databases into the system without the
necessity of going through the system developer as a middle-man.
• ensure backward compatibility between legacy models and databases and new
models and databases.
• allow linkage specifications to change with time and company.
• link to remote databases or models.
• link to remote frameworks, without having to
integrate the remote frameworks into the CCEF.
• link to remote databases and only download the information necessary for the
assessment.
• integrate change into the system without having to redesign components that
are already included in the system.
A. The CCEF design allows
for time-varying concentrations or doses to be expressed in fractions of seconds,
minutes, hours, days, years, or millennia. Each model, especially a numerical
model, calculates its own time stepping and has a nodal-spacing mesh
consistent with its own formulation, while ensuring numerical convergence and
stability. Temporal considerations for linking models with disparate
time-stepping requirements are independent of any one model. To keep
model-specific time stepping from dictating the manner in which models
communicate, each model's time-stepping requirements are accounted for when
passing information between models. Each producing model will provide
time-varying output corresponding with that producing model's area-based
polygons, if polygons describe the boundary interface between models. The
time-varying output from a producing model is a function of the time steps
used in generating the time-varying curve. The system does not know whether
the time information between data points on a time-varying curve is linear,
constant, nonlinear, etc.; as such, each producing model's time-varying curves
will have their own time-stepping protocol for each parameter for each
polygon. When the consuming model maps the producing model's polygon outputs
and transfers information from the producing model's gridding system to its
own, the consuming model is responsible for accounting for the time-stepping
scheme the producing model provides.
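The following sketch illustrates, under the assumption of linear interpolation (one possible choice a consuming model might make, since the framework itself does not prescribe one), how a consuming model could re-sample a producing model's time-varying curve onto its own time steps.

```python
# Sketch of a consuming model re-sampling a producing model's time-varying
# output onto its own time steps. The interpolation rule (linear here) is an
# assumption made by the consuming model, because the framework does not know
# how values behave between the producing model's data points.
def resample(producer_times, producer_values, consumer_times):
    """Linearly interpolate producer output at the consumer's time steps."""
    resampled = []
    for t in consumer_times:
        if t <= producer_times[0]:
            resampled.append(producer_values[0])
        elif t >= producer_times[-1]:
            resampled.append(producer_values[-1])
        else:
            # find the bracketing producer time steps
            for (t0, v0), (t1, v1) in zip(
                zip(producer_times, producer_values),
                zip(producer_times[1:], producer_values[1:]),
            ):
                if t0 <= t <= t1:
                    frac = (t - t0) / (t1 - t0)
                    resampled.append(v0 + frac * (v1 - v0))
                    break
    return resampled

# Producing model reports daily concentrations; consuming model steps hourly.
daily_t = [0.0, 24.0, 48.0]          # hours
daily_c = [0.0, 1.2, 0.8]            # mg/m3
hourly_t = [0.0, 6.0, 12.0, 30.0]
print(resample(daily_t, daily_c, hourly_t))
```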
B. Models are inherently designed with respect
to a particular reference time. For example, the standard Environmental
Protection Agency risk paradigm assumes long-term, chronic exposures in its
assessments. Acute exposures traditionally are not part of SUPERFUND
calculations or the Risk Assessment Guidance for Superfund (RAGS). Models that
are designed for acute exposures would typically expect time-varying data from
producing models or databases that are consistent with the design of the
model. For example, an upstream model that produces time-varying output in
thousand-year increments would not be a good candidate to link to an acute
exposure model that expects variations in data on the order of hours. The CCEF
design allows Dictionary files to be designed to specifically filter acute
versus chronic forms of information. As an example, the air pathway
traditionally has Dictionary files that are specific to Gaussian,
sector-averaged models and a different set associated with puff models. A
similar solution can be implemented for other components of the CCEF.
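As one hypothetical illustration of Dictionary-file filtering, the sketch below tags each parameter with a temporal resolution and allows a linkage only when the producing output satisfies the consuming model's acute or chronic expectation; the field names and the 24-hour threshold are assumptions, not part of the CCEF specification.

```python
# Hypothetical sketch of a Dictionary-file style filter: each entry carries a
# temporal resolution tag, and a linkage is allowed only when the producing
# output satisfies the consuming model's acute/chronic expectation.
ACUTE_MAX_STEP_HOURS = 24.0          # assumed threshold for "acute" data

producer_dictionary = {
    "benzene_air_concentration": {"units": "mg/m3", "time_step_hours": 1.0},
}
consumer_dictionary = {
    "benzene_air_concentration": {"units": "mg/m3", "form": "acute"},
}

def linkage_allowed(parameter: str) -> bool:
    produced = producer_dictionary[parameter]
    consumed = consumer_dictionary[parameter]
    if consumed["form"] == "acute":
        return produced["time_step_hours"] <= ACUTE_MAX_STEP_HOURS
    return True                      # chronic consumers accept any resolution

print(linkage_allowed("benzene_air_concentration"))   # True: hourly data fits
```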
The user should be allowed to choose either option
or both options together. The first option implies that the conceptual site
model picture (i.e., user-defined icons and connections) is locked and the
user cannot make new connections between icons. The second option restricts
the models that are available to the user, greying out those models that are
not allowed as a choice. Lock and Key software shall
a. allow for a user option to have the conceptual site model locked.
b. allow for a user option to have the models beneath the icons locked.
c. allow for a user option to have both the conceptual site model and the
models beneath the icons locked.
d. allow for a user-defined password.
e. allow for use of no password.
f. include algorithms that will alert users when source codes have been
modified while the system is locked (see the sketch following this list).
g. include algorithms that will prevent users from using modified source codes
when the system is locked.
h. design the CCEF so the framework works the same,
regardless of whether the lock-and-key option is implemented.
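Requirements (f) and (g) could be met in several ways; the sketch below assumes a simple checksum-based approach (with hypothetical function and file names) in which the source of each model is fingerprinted at lock time and any model whose fingerprint later changes is flagged and blocked.

```python
# Minimal sketch, assuming a checksum-based approach, of how requirements
# (f) and (g) above could be met: record a hash of each model's source when
# the system is locked, then refuse to run any model whose hash has changed.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of a model's source file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def lock_models(paths):
    """Snapshot fingerprints at lock time."""
    return {str(p): fingerprint(p) for p in paths}

def verify_locked(locked_fingerprints):
    """Return the list of models modified since the system was locked."""
    return [
        p for p, digest in locked_fingerprints.items()
        if fingerprint(Path(p)) != digest
    ]

# Usage (illustrative file names):
# locked = lock_models([Path("wpem_model.py"), Path("iaqx_model.py")])
# ... later ...
# modified = verify_locked(locked)   # alert the user and block these models
```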
Figure 3.4.3 CCEF Design Relationships
between Linkage and Model Servers, Host Client, and Remote Databases and Models
In the past, the traditional approach for linking models was
to directly connect (i.e., “hard-wire”) module types (e.g., aquifer models to
river models). Each connection specifically reflected the data needs of the
consuming model, resulting in an efficient transfer of data but also in a
confusing model structure that is difficult to manage and update. Under the
traditional approach, if a user wanted the ability to link several models of a
certain type (e.g., 1-dimensional, 2-dimensional, and 3-dimensional aquifer
models) to several models of a different type (e.g., 1-dimensional,
2-dimensional, and 3-dimensional river models), each model was directly linked
(i.e., “hard-wired”) to each other through a processor. With this approach, a
user could choose to link any one of the aquifer models to any one of the river
models. The following figure illustrates the traditional approach for linking
models (i.e., module types in the figure) and data transfer. This linkage
scheme is very efficient and exact, allowing for direct communication between
models and providing an environment for dynamic feedback between models.
Unfortunately, as illustrated by the figure, this scheme can become extremely
complicated and unmanageable and does not allow the user to add new models,
parameters, data requirements, databases, etc. to the system without having to
modify the entire system and revamp legacy models.
Figure 3.5.1 Traditional and Nontraditional
approach for Linking Models and Transferring Data
With the advent of "object-oriented" modeling concepts, each of
the models, which enters into the system, agrees on a
data exchange protocol. The non-traditional approach identifies system data
specifications to which models adhere when passing information between model
types and databases, as illustrated in the above figure. Pre- and post-data
processors allow legacy models to remain unaffected and facilitate the ability
to "plug" legacy models directly into the system, enhance quality
control, and simplify management of and modifications to multiple models. This
Plug & Play attribute is an extremely powerful and desirable feature, as it
allows users and model developers to insert the most appropriate models to meet
specific needs.
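The contrast with hard-wiring can be sketched as follows: with one shared data specification, N producing models and M consuming models need only N + M adapters rather than N × M direct connections. The class names, units, and conversion below are illustrative assumptions, not the CCEF's actual exchange format.

```python
# Illustrative sketch (hypothetical names) of the nontraditional approach:
# every producer writes to, and every consumer reads from, one shared data
# specification, so legacy models are wrapped by pre- and post-processors
# instead of being hard-wired to each other.
class StandardDataset:
    """The agreed-upon exchange format: times (hours) and fluxes (g/hr)."""
    def __init__(self, times, fluxes):
        self.times = times
        self.fluxes = fluxes

class LegacyAquiferModelAdapter:
    """Post-processor: converts a legacy model's native output
    (here, daily fluxes in kg/day) into the standard specification."""
    def to_standard(self, daily_kg) -> StandardDataset:
        times = [24.0 * i for i in range(len(daily_kg))]
        fluxes = [kg * 1000.0 / 24.0 for kg in daily_kg]   # kg/day -> g/hr
        return StandardDataset(times, fluxes)

class RiverModelAdapter:
    """Pre-processor: any consumer that reads StandardDataset can be linked
    to any producer that writes it, without touching either legacy code."""
    def consume(self, dataset: StandardDataset) -> float:
        return sum(dataset.fluxes)    # placeholder for the real river model

aquifer_output = LegacyAquiferModelAdapter().to_standard([2.4, 3.6])
print(RiverModelAdapter().consume(aquifer_output))
```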
Data specifications between communicating models represent the contract
between what the upstream model produces (e.g., aquifer model) and what the
downstream model consumes (e.g., river model). The data specification is
similar to the telephone numbers in a telephone book. Both parties agree to the
telephone numbers, and, when each wants to communicate, they do so using the
appropriate telephone number (see the following figure). Each year the
telephone books are updated (see the following figure); as such, the design of
the CCEF must accommodate new models, data requirements, parameters, and
linkages but still maintain backward compatibility with existing models in the
system.
As noted earlier, the fundamental responsibility of the Framework is
associated with the arrows that connect the models and databases, as
illustrated in the figures below and in the model flow diagrams for scenario 1, scenario 2,
scenario 3 and scenario 4.
Those arrows represent the transfer of information from one component to
another. The system is charged with managing model and database linkages, the
firing sequence of the models, and data transfer between components. The system
is not responsible for what happens within the models. The key to a successful
transfer is to ensure that the producing component provides information that
meets the needs of the consuming component and that the information is in a
form that is recognizable by both components. This shared responsibility for
managing data transfer is based on datasets whose metadata are described by
the Data Dictionary File.
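A minimal sketch of that shared responsibility, assuming hypothetical field names for a Data Dictionary File entry, is shown below: before the consuming component receives a dataset, its units and required fields are checked against the dictionary metadata.

```python
# Sketch, under assumed field names, of checking a transferred dataset against
# the metadata in its Data Dictionary File before handing it to the consumer.
dictionary_entry = {
    "parameter": "ethylene_glycol_air_concentration",
    "units": "mg/m3",
    "required_fields": ["time_hours", "value"],
}

def validate_transfer(dataset: dict, entry: dict) -> list:
    """Return a list of problems; an empty list means the transfer is valid."""
    problems = []
    if dataset.get("units") != entry["units"]:
        problems.append(f"expected units {entry['units']}")
    for field in entry["required_fields"]:
        if field not in dataset.get("records", [{}])[0]:
            problems.append(f"missing field {field}")
    return problems

dataset = {
    "units": "mg/m3",
    "records": [{"time_hours": 0.0, "value": 0.15}],
}
print(validate_transfer(dataset, dictionary_entry))   # [] -> OK to consume
```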
This section provides the scenario-specific Model and
Process Flow Diagrams based on the CCEF design, as well as the research gaps
identified in the process of developing the flow diagrams.
These diagrams were developed for this study based on the compounds and
exposure scenarios that the American Chemistry Council Human Health Exposure
Assessment Technical Implementation Panel identified.
The gap analysis was conducted based on the four example exposure scenarios
associated with the CCEF. A gap analysis provides a list of research needs
associated with the specific topic of interest.
In this case, the gap analysis will provide the research needs for the
framework, models, algorithms, and databases associated with the four exposure
scenarios developed to define the design of the CCEF. The gap analysis consists
of a three-step process in evaluating the Modeling and Process Flow Diagrams
developed for the design of the CCEF. The three steps of the gap analysis are:
1) define what exists, 2) define what is needed, and 3) define the process to
achieve the identified needs.
It is critical that the CCEF be able to accommodate the
various life stages of a human to develop the aggregate and cumulative health
effects. To effectively model the aggregate and cumulative health effects of
high volume compounds, a number of different life stages must be taken into
account for each scenario. The age definitions and terminology for each life
stage may vary, but the information presented here is a good guideline to
follow.
This section introduces these life stages. Figure 4.1.1 summarizes the life
stages considered; the specific life stages involved in each Exposure Scenario
are described in the corresponding scenario subsections below.
Figure 4.1.1
Life Stages For Exposure Scenarios
Figure 4.2.1 Life Stages For Exposure Scenario 1
This section provides the conceptual model for the exposure pathways
associated with Exposure Scenario 1. The conceptual model and associated
assumptions are critical in determining the type of the models, algorithms, and
databases required to satisfy the needs of the modeling scenario. Figure 4.2.2
provides an illustration of the conceptual model for Exposure Scenario 1.
Residential Indoor Description
- 1-story Townhouse with 1700 ft2 in MA
- Electric heat pump, forced air and AC
- 3 bedrooms, 1.5 baths, kitchen, living & dining rooms
Exposure in Home to SVOC Compounds A & B
- Compound A: 2-butoxyethanol applied in mineral spirits solution by
immersion of wooden strip ceiling at factory
- Infant born 12 months after moving into home
- Mother stays at home and nurses baby
- Compound B: ethylene glycol contained in latex wall paint
- Entire living space painted when moved into home (-1 year) and repainted 1 wk
after baby arrives
Figure 4.2.2 Illustration Depicting The Setting For Exposure
Scenario 1 In Study
Assumptions about Exposure and Activities
- Embryo embedded at day 11 after fertilization
- Normal length of pregnancy (9 months)
- Nursing from birth until 0.5 yrs (6 months)
- Repaint home when child is age 4
Child Age Timeline
- Fetus (-0.75 years until birth)
- Newborn (birth to 0.25 yrs) (3 months)
- Infant (0.25-2 years)
- Toddler (2-3 years)
- Early Childhood (3-6 years)
Notes
- Health effects will be calculated for the fetus through child. Depending on
model outputs, this can be a cumulative time distribution or a time-weighted
value.
- The infant is born 12 months after moving into home.
- The mother works at home and nurses baby.
- There will be a spiked increase in ethylene glycol at week 1 when
fresh paint is applied over the entire living space.
Figure 4.2.3 CCEF Model Flow Diagram Scenario 1
Diagram Legend
- Solid boxes: Models with known codes available
- Dashed boxes: Models with codes not available
- Ovals: Input/output files with specific data specifications
- Checkered Ovals: Output from separate scenario/input to this scenario
- Cylinders: Model-Specific databases
- Arrows: Indicates the flow of the data
- PBPK: Physiologically-based Pharmacokinetics
- PBPD: Physiologically-based Pharmacodynamics
Figure 4.3.1 Life Stages For Exposure Scenario 2
This section provides the conceptual model for the exposure pathways
associated with Exposure Scenario 2. The conceptual model and associated
assumptions are critical in determining the type of the models, algorithms, and
databases required to satisfy the needs of the modeling scenario. Figure 4.3.2
provides an illustration of the conceptual model for Exposure Scenario 2.
Residential Indoor Description
- 1-story Townhouse with 1700 ft2 in MA
- Electric heat pump, forced air and AC
- 3 bedrooms, 1.5 baths, kitchen, living & dining rooms
Exposure in Home to SVOC Compound Group C
- Compound Group C: phthalate esters (DEHP, BBP, & DINP) in plastic
DEHP: diethyl hexyl phthalate
BBP: butyl benzyl phthalate
DINP: diisononyl phthalate
- Exposure to phthalates in plastic furniture covers, carpet
backing, and shower curtains
- Exposure to phthalates in plastic food wrap and containers
- Exposure to phthalates in plastic toys
Figure 4.3.2 Illustration Depicting The Setting For Exposure
Scenario 2 In Study
Assumptions about Exposures and Activities
- Nursing from birth until 0.5 yrs (6 months)
- Standard growth patterns
- Standard lactation and nursing patterns
Child Age Timeline
- Newborn (birth to 0.25 yrs) (3 months)
- Infant (0.25-2 years)
- Toddler (2-3 years)
- Early Childhood (3-6 years)
- Late Childhood (6-12 years)
- Early Teenager (12-15 years)
- Late Teenager (15-18 years)
Figure 4.3.3 CCEF Model Flow Diagram Scenario 2
Diagram Legend
- Solid boxes: Models with known codes available
- Dashed boxes: Models with codes not available
- Ovals: Input/output files with specific data specifications
- Checkered Ovals: Output from separate scenario/input to this scenario
- Cylinders: Model-Specific databases
- Arrows: Indicates the flow of the data
- PBPK: Physiologically-based Pharmacokinetics
- PBPD: Physiologically-based Pharmacodynamics
Figure 4.4.1 Life Stages For Exposure Scenario 3
This section provides the conceptual model for the exposure pathways
associated with Exposure Scenario 3. The conceptual model and associated
assumptions are critical in determining the type of the models, algorithms, and
databases required to satisfy the needs of the modeling scenario. Figure 4.4.2
provides an illustration of the conceptual model for Exposure Scenario 3.
Occupational Indoor Smoking Description
- Smokes 15-20 cigarettes per day since age 18, including 60% indoors and 10%
outdoors at home and 30% indoors at work
Adhesive Compounding Facility Description
- Blending of solvent-based adhesive in 700,000 ft3 room
Occupational Exposure to VOC Compound Group D
- Compound Group D solvent mixture: benzene, toluene, n-hexane
- Exposure from routine compounding and hourly product sampling
- Exposure from cleaning agitator blades 2-3 times/week with one of the solvents
- Exposure during cleaning of spills (2-3 times/year) of cleaning solvent (1 gal),
compounding solvent (30 gal), or product (50 gal)
Figure 4.4.2 Illustration Depicting The Setting For Exposure
Scenario 3 In Study
Assumptions about Working Conditions
- Work facility has separate smoking room
- Separate lunch room and no food or tobacco allowed in work area
- Attached shower/locker room with same heating/ventilation system as
manufacturing room, where worker stores lunch and cigarettes
- Worker wears protective coveralls, neoprene gloves & apron, goggles, and
full-face respirator with organic vapor cartridge to fill drums
- Worker adds protective full-face, positive-pressure SCBA and Tyvek coveralls
for blade cleaning & spills
- Works typical schedule and hours
Adult Age Timeline
- Early Adult (18-35 years)
- Middle Adult (35-50 years)
- Senior Adult (50-65 years)
Notes
- Question: Can exposure to the solvent mixture at different intervals of time
cause harmful health effects? The longer the exposure, the greater the
potential for toxicity.
- Health effects will be calculated for the adult age 18 through 65. Depending
on model outputs, this can be a cumulative time distribution or a time-weighted
value.
---- Exposure from routine compounding and hourly product sampling.
---- Exposure from cleaning agitator blades 2-3 times/week with one of
solvents.
---- Exposure during cleaning of spill 2-3 times/year of cleaning solvent (1
gal), compounding solvent (30 gal), or product (50 gal).
Figure 4.4.3 CCEF Model Flow Diagram Scenario 3
Diagram Legend
- Solid boxes: Models with known codes available
- Dashed boxes: Models with codes not available
- Ovals: Input/output files with specific data specifications
- Cylinders: Model-Specific databases
- Arrows: Indicates the flow of the data
- PBPK: Physiologically-based Pharmacokinetics
- PBPD: Physiologically-based Pharmacodynamics
Figure 4.5.1 Life Stages For Exposure Scenario 4
This section provides the conceptual model for the exposure pathways
associated with Exposure Scenario 4. The conceptual model and associated
assumptions are critical in determining the type of the models, algorithms, and
databases required to satisfy the needs of the modeling scenario. Figure 4.5.2
provides an illustration of the conceptual model for Exposure Scenario 4.
Residential Outdoor Description
- 1-story Townhouse in MA with small grass lawn and shrubbery
Smoking Description
- Smokes 15-20 cigarettes per day since age 18
- Assume man smokes indoors and outdoors at home
Exposure at Home to SVOC Compound Groups A
& B (Scenario 1)
Exposure at Home and in Car to VOC Compound
Group E
- Compound Group E: MTBE in gasoline
- Exposure during car repair (4 times/year)
- Exposure during lawn mowing (20 times/year), lawn trimming (20 times/year),
and shrubbery trimming (4 times/year)
- Exposure during auto fueling (52 times/year) & driving (1 hour/day)
- Background exposure indoors
Figure 4.5.2 Illustration Depicting The Setting For Exposure
Scenario 4 In Study
Assumptions about Working Conditions, Home
Conditions, Exposures & Activities
- Workplace exposures unknown so considered negligible and part of background
- No protective gear when working on car, lawnmower, and other at-home jobs
- Paints the home interior every 5 years
- Exterior is brick so no painting needed
Adult Age Timeline
- Working
- - Adult (25-35 years)
- - Middle Adult (35-50 years)
- - Senior Adult (50-65 years)
- Senescence (65-75 years)
Notes
- Health effects will be calculated for the adult from 25 years of age through
retirement. Depending on model outputs, this can be a cumulative time
distribution or a time-weighted value.
---- Exposure during car repair (4 times/year); assume no protective gear
---- Exposure during lawn mowing (20 times/year), lawn trimming (20
times/year), and shrubbery trimming (4 times/year)
---- Exposure during auto fueling (52 times/year) & driving (1 hour/day)
Figure 4.5.3 CCEF Model Flow Diagram Scenario 4
Diagram Legend
- Solid boxes: Models with known codes available
- Dashed boxes: Models with codes not available
- Ovals: Input/output files with specific data specifications
- Cylinders: Model-Specific databases
- Arrows: Indicates the flow of the data
- PBPK: Physiologically-based Pharmacokinetics
- PBPD: Physiologically-based Pharmacodynamics
The purpose of the Process Flow Diagrams is to show the flow of information
through the key information and data processing steps, describe the four
scenarios as a framework of connected processes, and identify gaps in knowledge.
The Process Flow Diagrams illustrate the
mapping between the models and the process numbers. The accompanying model
tables (ordered alphabetically) identify, for each model or model description,
the corresponding processes on each diagram.
Figure 4.6.1
Scenario 1 Process Framework: Exposure of Compound A (2-BE on ceiling strips)
to fetus via mother
Figure 4.6.2
Scenario 1 Process Framework: Exposure of Compound A (2-BE on ceiling strips)
to newborn (or nursing infant) directly and via mother through nursing
Figure 4.6.3
Scenario 1 Process Framework: Exposure of Compound A (2-BE on ceiling strips)
to non-nursing infant/toddler/child
Figure 4.6.4 Scenario
1 Process Framework: Exposure of Compound B (ethylene glycol) to fetus via
mother
Figure 4.6.5
Scenario 1 Process Framework: Exposure of Compound B (ethylene glycol) to
newborn (or nursing infant) directly and via mother through milk
Figure 4.6.6
Scenario 1 Process Framework: Exposure of Compound B (ethylene glycol in latex
paint)
Figure 4.6.7
Scenario 2 Process Framework: Exposure of Compound C (DEHP or BBP or DINP) to
newborn (or nursing infant) directly and via mother through milk
Figure 4.6.8
Scenario 2 Process Framework: Exposure of Compound C (BBP) to non-nursing
infant/toddler/child
Figure 4.6.9 Scenario
2 Process Framework: Exposure of Compound C (DEHP) to non-nursing
infant/toddler/child
Figure 4.6.10
Scenario 2 Process Framework: Exposure of Compound C (DINP) to non-nursing
infant/toddler/child
Figure 4.6.11
Scenario 3 Process Framework: Occupational exposure of VOC Compound Group D
(benzene, toluene, n-hexane) to adult male compounding
solvent-based adhesive
Figure 4.6.12
Scenario 4 Process Framework: Exposure of Father to Compound A (2-BE on
ceiling strips), B (ethylene glycol in paint) & E (MTBE) Metabolite:
Butoxyacetic Acid
Figure 4.6.13
Scenario 4 Process Framework: Exposure of Father to Compound A (2-BE on ceiling
strips), B (ethylene glycol in paint) & E (MTBE) Metabolites: Glycolic Acid and
Oxalic Acid
Figure 4.6.14
Scenario 4 Process Framework: Exposure of Father to Compound A (2-BE on ceiling
strips), B (ethylene glycol in paint) & E (MTBE) Metabolite: Tertiary Butyl
Alcohol
This section of the report describes the gap analysis that was conducted based
on the four example exposure scenarios associated with the CCEF. A gap analysis
provides a list of research needs associated with the specific topic of
interest. In this case, the gap analysis will provide the research needs for
the framework, models, algorithms, and databases associated with the four
exposure scenarios developed to define the design of the CCEF. The gap analysis
consists of a three-step process in evaluating the Modeling and Process Flow Diagrams
developed for the design of the CCEF. The three steps of the gap analysis are
to define: 1) what exists, 2) what is needed, and 3) what process is required
to achieve the identified needs.
The gap analysis is focused on the main components of the CCEF and will be
discussed based on these components. These four components are: Source,
Transport, Exposure, and Impacts. The Exposure and Impacts components were
combined in this analysis because they are so closely linked. The gap analysis
will be provided for each component based on each compound and scenario of
interest. The results of this analysis will be input to the Qualitative
Sensitivity analysis and prioritization of research needs that will be
discussed in the following sections.
The source component of the CCEF involves the release of a
contaminant from its initial matrix. The primary mechanisms for release to the
air are diffusion/volatilization or combustion. We have listed the gaps in
models for relevant scenarios and indicated potential sources or research
studies needed to fill these gaps for the source component of the CCEF.
1. Fugitive VOC Emissions from Mixing Vessel: Scenario 3 (Sources or
Research Needs: We expect that algorithms already exist and may be available
from sources such as handbooks by the American Petroleum Institute or EPA.)
2. Aerosolization of Contaminant During Auto Fueling: Scenario 4
(Sources or Research Needs: We expect that algorithms, or data to develop
algorithms, for personal exposure may be available from the South Coast Air
Quality Management District, California Air Resources Board, or EPA.)
3. Release of Contaminant in Breathing Zone from Internal Combustion Engine
(auto, lawn mower, and trimmer): Scenario 4 (Sources or Research Needs: We
expect that algorithms, or data to develop algorithms, for personal exposure
may be available from the South Coast Air Quality Management District,
California Air Resources Board, EPA, lawn mower manufacturers, or auto
manufacturers.)
4. Source Emissions During Auto Servicing: Scenario 4 (Sources or
Research Needs: Filling this gap requires data on frequency of servicing
personal automobiles at home and observation or personal logs of typical
activities and use of protective gear during auto servicing by individuals.)
5. Splash Frequency and Volume to Skin During Fueling: Scenario 4 (Sources
or Research Needs: Filling this gap requires laboratory studies of splash
characteristics and frequency logs for fueling automobiles, lawn mowers, and
trimmers.)
6. Splash Frequency and Volume to Skin During Painting: Scenario 1 & 4 (Sources
or Research Needs: Filling this gap requires laboratory studies of splash
characteristics and frequency logs for number of splashes during painting of a
typical room by a nonprofessional.)
7. Spill Frequency and Volume During Fueling:
Scenario 4 (Sources or Research Needs: Filling this gap requires spill
frequency and volume logs for fueling automobiles, lawn mowers, and trimmers.)
8. Spill Frequency and Volume During Painting:
Scenario 1 & 4 (Sources or Research Needs: Filling this gap requires
spill frequency and volume logs for painting of a typical room by a
nonprofessional.)
For the CCEF, we focused on micro-environmental models for
transport indoors or near source outdoors. Indoor models evaluated transport
within and between rooms or other small spaces and fate and partitioning of
vapor on to aerosols or particles (dust), walls, floors, and sinks (e.g.,
furniture, carpet, clothing, blankets). Transport for this component of the
CCEF ends when it reaches the human body and does not include movement within
the body. Ingestion and hand-to-mouth transfer were considered part of the
exposure component of the CCEF. We have listed the gaps in models for relevant
scenarios and indicated potential sources or research studies needed to fill
these gaps for the transport component of the CCEF.
1. Partitioning Between Vapor and Particle
(Aerosol) Phases in Air: Scenarios 1-4 (Sources or Research Needs: We
expect there may be some algorithms for partitioning of vapor to particulates.
Controlled laboratory partitioning studies are needed for a wide variety of
combinations of contaminants and types of particles.)
2. Particle Resuspension from Floors: Scenarios 1, 3, & 4 (Sources
or Research Needs: There is a major gap in models or algorithms for particle
resuspension from floors. Controlled laboratory studies are needed to determine
resuspension of different types of particulates with different indoor air
currents and simulated human activity.)
3. Contaminant Inhalation and Release by Mainstream Cigarette Smoking:
Scenarios 3 & 4 (Sources or Research Needs: Models or algorithms
already exist for mainstream cigarette smoking, which can be located by a literature
search or from review articles on smoking release, such as the article at http://ehpnet1.niehs.nih.gov/docs/1999/Suppl-2/375-381ott/abstract.html.
The reference for this review article is Ott, W.R. 1999. Mathematical
Models for Predicting Indoor Air Quality from Smoking Activity.
Environmental Health Perspectives Volume 107, Supplement 2, May 1999)
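For orientation, the sketch below shows the kind of calculation the indoor transport component performs: a two-zone, well-mixed mass balance with outdoor air exchange, interzonal flow, and a constant emission source. It is a generic illustration with assumed flows and volumes, not a stand-in for CONTAM, IAQX, or any other model named in this report.

```python
# Minimal two-zone, well-mixed indoor mass-balance sketch. All volumes, flows,
# and the emission rate are assumed values used only for illustration.
def simulate_two_zone(hours=24.0, dt=0.01):
    V1, V2 = 30.0, 60.0      # room volumes (m3), assumed
    Q_out = 15.0             # outdoor air exchange flow for each room (m3/hr)
    Q_12 = 40.0              # interzonal flow between rooms (m3/hr)
    E1 = 5.0                 # emission rate in room 1 (mg/hr), assumed
    C1, C2 = 0.0, 0.0        # starting concentrations (mg/m3)
    t = 0.0
    while t < hours:
        # explicit Euler step of dC/dt for each well-mixed zone
        dC1 = (E1 + Q_12 * (C2 - C1) - Q_out * C1) / V1
        dC2 = (Q_12 * (C1 - C2) - Q_out * C2) / V2
        C1 += dC1 * dt
        C2 += dC2 * dt
        t += dt
    return C1, C2

print(simulate_two_zone())   # approximate near-steady-state concentrations
```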
There are several approaches that can be taken when choosing
exposure models, algorithms, and databases for estimating human health exposure
and impacts using microenvironmental modeling scenarios; however, the choice is
most frequently made based on the available data and/or models. A gap analysis
of the approaches to modeling human exposures and health impacts from chemical
concentrations in the environment follows:
4.7.3.1. Physiologically Based Pharmacokinetic and Pharmacodynamic
Models: The best approach to predicting blood and tissue concentrations as
a function of time following an administered dose (exposure) as well as
interaction of the bioactive form of the compound with the target tissue(s) is
to use a combination of a Physiologically Based Pharmacokinetic (PBPK) model
and a Physiologically Based Pharmacodynamic (PBPD) model. PBPK modeling refers
to the development of mathematical descriptions of the uptake and disposition
(absorption, distribution, metabolism and excretion) of chemicals based on quantitative
interrelationships among the critical biological determinants of these
processes (Krishnan and Andersen, 1994). PBPD modeling refers to developing
mathematical descriptions of interactions of the actual toxicant with its
receptor to produce the observed toxic effect. These models are specific to
species, compound, exposure route, and life stage. Because their bases lie in
the use of physiological parameters such as blood flow, respiration rate,
kidney filtration rates, etc., in addition to chemical-specific parameters,
i.e. binding constants, solubilities, etc., they aid in extrapolation of data
from a laboratory animal model such as a rat to the human.
Unfortunately, a full suite of these types of models applicable to the
conditions described in the four exposure scenarios implemented in this study
does not exist. In lieu of a well-defined PBPK and PBPD model for each
scenario, a hierarchy of alternatives may be employed.
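For readers unfamiliar with the mass-balance form these models take, the sketch below shows a deliberately oversimplified single-compartment pharmacokinetic calculation with assumed first-order rates; an actual PBPK model resolves many tissues with blood-flow-limited uptake and chemical-specific partition coefficients, which is far beyond this illustration.

```python
# Highly simplified, single-compartment pharmacokinetic sketch. A real PBPK
# model resolves multiple tissues (liver, fat, kidney, ...) with blood-flow-
# limited uptake and chemical-specific partition coefficients; this sketch
# only illustrates the basic mass-balance form dA/dt = uptake - k_elim * A.
def one_compartment_dose(dose_mg, k_abs=1.0, k_elim=0.2, hours=24.0, dt=0.001):
    """Return peak and final amounts (mg) in the central compartment after a
    dose absorbed with first-order rate k_abs (1/hr) and eliminated with
    first-order rate k_elim (1/hr). Both rates are assumed values."""
    gut = dose_mg        # amount still at the absorption site
    body = 0.0           # amount in the central compartment
    peak = 0.0
    t = 0.0
    while t < hours:
        absorbed = k_abs * gut * dt
        eliminated = k_elim * body * dt
        gut -= absorbed
        body += absorbed - eliminated
        peak = max(peak, body)
        t += dt
    return peak, body

print(one_compartment_dose(10.0))   # (peak amount, amount remaining at 24 h)
```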
4.7.3.2. PBPK model, no PBPD model: If PBPD models are not available,
compound- and exposure-specific bioavailability data obtained from
toxicokinetic studies can be used to estimate body, organ, and tissue
concentrations. For instance, a PBPK model may be used to provide target tissue
concentrations of a toxicant for a given dose, which can then be plotted
against experimental data to extrapolate dose to effect
relationships without specific knowledge of how the toxicant interacts with the
receptor (i.e. PBPK model without a PBPD model). There is research being
conducted to generate compound-specific bioavailability data for specific
exposure pathways and routes, but there are many gaps that need to be filled to
complete the suite of exposure scenarios being evaluated for this study.
4.7.3.3. Neither PBPK nor PBPD models available:
If compound- and exposure-specific bioavailability data are not available,
bioavailability models for surrogate compounds and/or alternate exposure routes
can be used to estimate body, organ, or tissue concentrations. Reference doses
(RfDs) or cancer slope factors may also be used to
predict health effects.
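The screening-level calculations referred to in this fallback are straightforward; the sketch below shows a hazard quotient (dose divided by the RfD) and an incremental lifetime cancer risk (dose multiplied by the slope factor), using placeholder numbers rather than values for any compound in this study.

```python
# Sketch of the screening-level calculations referred to above: a hazard
# quotient for noncancer effects (dose / RfD) and an incremental lifetime
# cancer risk (dose x slope factor). The numeric inputs are placeholders.
def hazard_quotient(average_daily_dose, rfd):
    """Both arguments in mg/kg-day; HQ > 1 flags potential concern."""
    return average_daily_dose / rfd

def cancer_risk(lifetime_average_daily_dose, slope_factor):
    """Dose in mg/kg-day, slope factor in (mg/kg-day)^-1."""
    return lifetime_average_daily_dose * slope_factor

print(hazard_quotient(0.005, 0.1))   # 0.05 -> below the level of concern
print(cancer_risk(0.0002, 0.03))     # 6e-06 incremental lifetime risk
```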
4.7.3.4. Generic bioavailability models: If no appropriate surrogate
compounds or exposure-specific bioavailability models are available, default
generic bioavailability models can be used to provide a very rough estimate of
body, organ, or tissue concentrations. These are very generic and conservative
models but can be used if no other models or chemical-specific data exist.
The tiered approach for addressing the exposure component of the framework
ensures that the best available information and models are used while filling
as many data gaps as possible when completing the exposure scenarios. Below is
the list of 18 specific research gaps identified in the exposure scenarios.
Scenario 1
1. PBPK/PBPD Model for 2-Butoxyethanol for
Residential Exposure of Pregnant Mother. Source or Research Needs:
PBPK/PBPD model for pregnant mother needs to be developed and validated. See
American Chemistry Council Exposure Technical Implementation Panel
Developmental Dosimetry/Lactation Model Review for existing models. Adopt
initial parameters from a related chemical (2-methoxyethanol). Refine and
include information from 2-butoxyethanol-specific studies on developmental
dosimetry that already exist for the adult male and rats/mice as a starting point.
2. PBPK/PBPD Model for 2-Butoxyethanol for Residential Exposure of Fetus
(-0.75 to 0.0 years old). Source or Research Needs: See above for exposure
of pregnant mother. PBPK/PBPD model for fetus also needs to be developed and
validated. Existing PBPK/PBPD model for male adult rats/mice may be used as a
starting point to develop fetus model with subsequent extrapolation to the
human.
3. PBPK/PBPD Model for 2-Butoxyethanol for Residential Exposure of Child (2
to 6 years old). Source or Research Needs: PBPK/PBPD model for child needs
to be developed and validated. Existing PBPK/PBPD model for male adult rats/mice
may be used as a starting point to develop model for the child with subsequent
extrapolation to the human.
4. PBPK/PBPD Model for 2-Butoxyethanol for Residential Exposure of Lactation
Child (0.0 to 2 years old). Source or Research Needs: PBPK/PBPD model for
nursing child needs to be developed and validated. Existing PBPK/PBPD model for
male adult rats/mice may be used as a starting point to develop model for the
nursing offspring with subsequent extrapolation to the human.
5. PBPK/PBPD Model for Ethylene Glycol for Residential Exposure of Fetus
(-0.75 to 0.0 years old). (Source or Research Needs: PBPK/PBPD model for
fetus based on rat embryo data has been developed but has not yet been
published. Existing information for male adult (human) and rats/mice may be
compared with new model. Also compare to adult human male controlled inhalation
and dermal study conducted in
6. PBPK/PBPD Model for Ethylene Glycol for Residential Exposure of Lactation
Child (0.0 to 2 years old). Source or Research Needs: PBPK/PBPD model for
nursing child needs to be developed and validated. Information from the already
existing PBPK/PBPD model for male adult (human) and rats/mice may be used as a
starting point to develop the nursing child model.
7. PBPK/PBPD Model for Ethylene Glycol for Residential Exposure of Child (2
to 6 years old). Source or Research Needs: PBPK/PBPD model for child needs
to be developed and validated. Scaling from existing male adult (human) and
rats/mice may be used but must be validated.
8. PBPK/PBPD Model for Ethylene Glycol for Residential Exposure of Pregnant
Mother. Source or Research Needs: PBPK/PBPD model for pregnant mother needs
to be developed and validated. See American Chemistry Council Exposure
Technical Implementation Panel Developmental Dosimetry/Lactation Model Review
for existing structures. Initial parameters could be adopted from model for related
chemical (2-methoxyethanol), then refine and include data from 2-butoxyethanol
specific studies on developmental dosimetry. Existing information for male
adult (human) and rats/mice may be used as a starting point.
Scenario 2
9. PBPK/PBPD Model for Three Phthalates:
DEHP, BBP, DINP for Residential Exposure of Lactation Child (0.0 to 2 years
old). Source or Research Needs: PBPK/PBPD model for nursing child
needs to be developed and validated. Existing information for male adult
(human) and rats/mice may be used as a starting point to develop model for
nursing child.
10. PBPK/PBPD Model for Three Phthalates: DEHP, BBP, DINP for Residential
Exposure of Child (2 to 6 years old). Source or Research Needs: PBPK/PBPD
model for child needs to be developed and validated. Scaling with existing male
adult (human) and rats/mice data may be used but must be validated.
11. PBPK/PBPD Model for Three Phthalates: DEHP, BBP, DINP for Residential
Exposure of Adolescent (6 to 16 years old). Source or Research Needs: PBPK/PBPD
model for adolescent needs to be developed and validated. Scaling with existing
male adult (human) and rats/mice may be used but must be validated. Some
phthalates have been shown to be endocrine disruptors; however, no models exist
specifically for laboratory animals or humans as they go through puberty.
Scenario 3
12. PBPK/PBPD Model for Benzene, Toluene, or
n-Hexane for Occupational Exposure of Adult Male (18 to 65 years old).
Source or Research Needs: PBPK models exist for the male adult (human) and
rats/mice but need to be extended to include a PBPD model. Ideally a model should
incorporate interactions between the three solvents, as well as with cigarette
smoke. No models at this level of sophistication were identified in the
literature. No accommodation is made for advancing age in this scenario or in
published models. Such an accommodation is important as metabolic capacity and
other physiological functions may change with advancing age.
13. Influence of Smoking on PBPK/PBPD Model for Mixtures of Benzene,
Toluene, N-Hexane for Occupational Exposure of Adult Male (18 to 65 years old).
Research is needed to understand the impacts of smoking on this exposure
scenario based on 1) addition to the source, 2) alternate behavior, and 3)
change in behavior.
Scenario 4
14. PBPK/PBPD Model for 2-Butoxyethanol for Backyard Exposure
of Adult Male (25 to 50 years old): Source or Research Needs: Need to
understand potential interactions between smoking and 2-butoxyethanol and
effect these may have on PBPK/PBPD model that already exists for male adult
(human) and rats/mice.
15. PBPK/PBPD Model for 2-Butoxyethanol for Backyard Exposure of Adult Male
(50 to 75 years old). Source or Research Needs: Need research on PBPK/PBPD
model for aging male. Possibly the existing human male adult model can be
scaled appropriately.
16. PBPK/PBPD Model for Ethylene Glycol for Backyard Exposure of Adult Male
(25 to 50 years old). Source or Research Needs: Need to understand
potential interactions between smoking and ethylene glycol and effect these may
have on PBPK/PBPD model that already exists for male adult (human) and
rats/mice.
17. PBPK/PBPD Model for Ethylene Glycol for Backyard Exposure of Adult Male
(50 to 75 years old). Source or Research Needs: Need research on PBPK/PBPD
model for aging male. Possibly the existing human male adult model can be
scaled appropriately.
18. PBPK/PBPD Model for MTBE for Backyard Exposure of Adult Male (25 to 50
years old). Source or Research Needs: Need to understand potential
interactions between smoking and MTBE and the effect these may have on the
PBPK model that already exists for the male adult (human) and rats/mice.
19. PBPK/PBPD Model for MTBE for Backyard Exposure of Adult Male (50 to 75
years old). Source or Research Needs: Need research on PBPK/PBPD model for
aging male. Possibly the existing human male adult model can be scaled
appropriately.
20. PBPK/PBPD Model for 2-butoxyethanol for Backyard Exposure of Adult Male
(50 to 75 years old): Scenario 4 (Source or Research Needs: Need research
on PBPK/PBPD model for geriatric male human. The existing human male adult
model can be used to scale to the geriatric model)
21. PBPK/PBPD Model for Ethylene Glycol for Backyard Exposure of Adult Male
(25 to 50 years old): Scenario 4 (Source or Research Needs: Need to
understand the effects that mixtures of the compound and smoking have on the
PBPK/PBPD model that already exists for male adult and rats/mice)
22. PBPK/PBPD Model for Ethylene Glycol for Backyard Exposure of Adult Male
(50 to 75 years old): Scenario 4 (Source or Research Needs: PBPK/PBPD model
already exists for male adult and rats/mice)
23. PBPK/PBPD Model for MTBE for Backyard Exposure of Adult Male (25 to 50
years old): Scenario 4 (Source or Research Needs: Need to understand how
smoking affects parameters of the PBPK/PBPD model that already exists for male
adult and rats/mice)
24. PBPK/PBPD Model for MTBE for Backyard Exposure of Adult Male (50 to 75
years old): Scenario 4 (Source or Research Needs: Need research on
PBPK/PBPD model for the aging male human. The existing human male adult model
could be used to scale to the geriatric model)
A number of research gaps were identified in the models or
frequency/usage logs needed for the source and transport components of the four
exposure scenarios used as examples in the CCEF. Three of the research gaps in
models for the source component (i.e., fugitive VOC emissions from mixing
vessel, aerosolization during auto fueling, and release of contaminant in the
breathing zone from an internal combustion engine) may already exist as
algorithms, but they were not identified as part of this study because they are
not widely published. Five research gaps in frequency/usage logs needed for the
source component were also not readily available (i.e., frequency of auto
servicing and associated protective gear used at home, splash frequency/volume
during fueling, splash frequency/volume during painting, spill frequency/volume
during fueling, and spill frequency/volume during painting). Two major research
gaps in models needed for the transport component of most of the example
scenarios include: (1) partitioning between vapor and particle (aerosol) phases
in air (Scenarios 1-4) and (2) particle resuspension from floors (Scenarios 1,
3, & 4).
Currently, considerable research is underway, funded by other parts of the
American Chemistry Council (i.e., Endocrine Disruptor Technical Implementation
Panel and CHEMSTAR Program), as well as government agencies and universities,
with the goal of generating data necessary to construct PBPK and PBPD models
for many chemicals to which humans of all life stages are exposed. Due to the
complexity and expense of this research, it will be several years before enough
new information is available to fill in the data gaps that have been identified
when designing the preceding CCEF components. Further difficulties exist in
that the quality of the physiological data available for many of the parameters
needed to develop the models is inadequate. This is especially true when
attempting to model maternal-placental-fetal transfer and metabolism of
chemical substances as well as lactational transfer. The lack of accurate
physiological parameters during pregnancy and lactation not only prevents
construction of a substantive model in laboratory animals, it also precludes
useful extrapolation to the human. Another data gap lies in the lack of
knowledge regarding the effects of exogenous compounds on physiological
parameters during the onset of puberty in both laboratory animal models and
humans.
The purpose of this analysis was to identify the model
elements (estimation algorithms and parameter databases) likely to produce the
greatest uncertainty in exposure estimates obtained from the model. Given the
list of existing computer models and the gaps in models needed, the next step
was to rank the model elements according to their relative contribution to the
resulting output.
The primary computer models selected for source or transport in one or more of
the four scenarios evaluated for the CCEF were CONTAM, PROMISE, IAQX, WPEM, and
CONSEXPO. It was also assumed that a good model or algorithm already exists for
mainstream and sidestream cigarette smoke. Data gaps have already been
discussed in the gap analysis for the source and transport models. The
following rules of prioritization were used in ranking the model elements:
1. Models & Databases on primary exposure pathway were
believed to introduce more sensitivity, and therefore be of higher priority,
than those on secondary exposure pathways;
2. Missing models and databases were considered a higher priority research need
than known models;
3. Models earlier in chain were believed to have potential to introduce more
uncertainty than models later in chain;
4. Inaccurate models were ranked as a higher priority research need than
accurate models; and
5. Models were of lesser compatibility (e.g., differing time and spatial scale)
with upstream and downstream models were considered a higher priority.
Ranking of the models considered a number of factors, which are
listed here roughly in decreasing level of importance: (1) whether the model
evaluated is on the primary exposure pathway (e.g., inhalation, ingestion,
dermal) for the scenario of interest, (2) whether the model recommended for a
particular scenario needs to be developed or already exists, (3) how close the
model is to the beginning of the chain of analyses from source through effects
(i.e., "garbage in, garbage out"), (4) the accuracy of the model, and (5)
whether the model provides the output and time units needed for downstream
components of the CCEF.
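Purely as an illustration of how these qualitative factors could be combined (this scoring scheme is not part of the study's method, and the weights are assumptions that simply follow the stated order of importance), a model element could be scored as follows:

```python
# Illustrative scoring sketch (not the study's method) that turns the five
# qualitative ranking factors above into a single priority score; the weights
# follow the stated order of importance and are assumptions.
WEIGHTS = {
    "on_primary_pathway": 5,
    "model_is_missing": 4,
    "early_in_chain": 3,
    "inaccurate": 2,
    "incompatible_units_or_time": 1,
}

def priority_score(element: dict) -> int:
    """Sum the weights of the factors that apply to a model element."""
    return sum(w for factor, w in WEIGHTS.items() if element.get(factor))

elements = [
    {"name": "Partitioning vapor/particle (gap)", "on_primary_pathway": True,
     "model_is_missing": True, "early_in_chain": True},
    {"name": "CONTAMW", "on_primary_pathway": True},
]
for e in sorted(elements, key=priority_score, reverse=True):
    print(e["name"], priority_score(e))
```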
Determination of the primary exposure pathway depends on the physical/chemical
characteristics (e.g., volatility) of the contaminant and the nature of the
matrix containing the contaminant (e.g., paint or plastic). Exposure pathway
determination also depends on the specifics of the scenario, such as the age,
sex, and habits of the exposed individual; the physical characteristics of the
exposure location; or the frequency and length of exposure. For some scenarios,
the primary exposure pathway is fairly obvious, i.e., inhalation for Scenario 1
and ingestion for Scenario 2. For Scenario 3, the primary exposure pathway is
inhalation, but it is not clear whether more benzene and toluene are inhaled
from fugitive emissions or as natural constituents of cigarette smoke. For
Scenario 4, the primary exposure pathway is not obvious without evaluating
actual modeling results, since there are multiple chemicals with different
exposure routes.
Table 5.1 Qualitative Ranking of Model Elements for Source and Transport Based
on Their Relative Contribution to Total Uncertainty (highest uncertainty listed
first) and Research Need

Scenario 1
- Source*: 1. WPEM (walls largest area painted); 2. IAQX (smaller area wood ceiling)
- Transport*: 1. Partitioning between vapor and particle phases (gap); 2. Dust particle resuspension (gap); 3. CONTAMW
Scenario 2
- Source*: 1. CONSEXPO
- Transport*: 1. CONSEXPO (primary exposure pathway); 2. Partitioning between vapor and particle phases (gap); 3. CONTAMW
Scenario 3
- Source*: 1. Fugitive emissions (gap); 2. PROMISE (less accurate); 3. Cigarette model** (most accurate)
- Transport*: 1. Partitioning between vapor and particle phases (gap); 2. Dust particle resuspension (gap); 3. CONTAMW
Scenario 4
- Source*: 1. Combustion model (gap); 2. Fueling model (gap); 3. Fuel spill (gap; higher frequency); 4. Paint spill (gap; higher frequency); 5. PROMISE (input less well known); 6. WPEM (input more well known); 7. IAQX (less accurate); 8. Cigarette model (most accurate)
- Transport*: 1. Partitioning between vapor and particle phases (gap); 2. Dust particle resuspension (gap); 3. CONTAMW
* - Primary basis for ranking indicated in parentheses. Gap = no model or
algorithm currently identified.
** - Two of the contaminants of interest for Scenario 3 are benzene and
toluene, which are natural constituents of cigarette smoke. It is not obvious
whether the primary exposure pathway to the worker compounding adhesives is
inhalation of fugitive emissions or cigarette smoke.
Table 5.1 ranks the models or algorithms
identified for each scenario. Models are listed in order of those expected to
contribute most to least uncertainty in the exposure estimates. Model gaps
were included in the prioritization and given especially high priority as a
research need when they were believed to be a component of the primary
exposure pathway. The explicit rules of prioritization used in ranking against
each of the five factors are listed above.
Conducting detailed qualitative sensitivity analyses of
existing models for the exposure and impact components of the four scenarios is
not possible at this time due to the significant lack of models available for
use in these processes. This lack of adequate and/or appropriate models for use
in estimating exposure and impact processes clearly points out the need for a
broad range of research in this area.
However, a few generalizations can be made that would apply to any
physiologically based pharmacokinetic or pharmacodynamic model. Any model using
physiological parameters as its foundation will be sensitive to changes in the
values of those parameters; thus, it is essential that the values for the
parameters used in the model be as accurate as possible. The accuracy of the
physiological parameters will also aid in extrapolating from one species to
another, or from one physiological state to another, i.e., non-pregnant to
pregnant female. At this time it is not possible to develop defendable values
for many physiological parameters. For instance, it is known that blood flow
into and through the liver increases during pregnancy in mammalian species;
however, accurate data for liver blood flow during pregnancy are not available
for most species. Exposure and impact models are also sensitive to
chemical-specific parameters such as absorption coefficients and partition
coefficients. An imprecise absorption coefficient for a compound entering the
bloodstream will generate inaccurate values for all calculations downstream in
the model, inaccuracies which may be further amplified when extrapolating from
one species to the next.
This section of the report provides recommendations for
future research priorities based on the design of the CCEF and the current
state of the models, databases, and algorithms associated with
micro-environmental modeling. These recommendations are specifically based on
the design developed in this report for the CCEF and the exposure scenarios and
representative compounds defined by the American Chemistry Council for this
research. The design of the CCEF allows for detailed computational toxicology,
linkages to appropriate databases, and access to sensitivity/uncertainty
analyses. The CCEF also facilitates the transformation of the basic science
into information that is understandable and directly useable by the
decision-maker to scientifically support policy issues.
There is a growing awareness in recent years that a person's exposure to
particular chemicals may occur via multiple routes from multiple sources. To
adequately evaluate such exposures, the scientific community requires models
that can predict the occurrence of exposures for each potential combination of
pathway and source and then accumulate these exposures over time. Ideally, the
models will account for variations in people's activity patterns that are
influenced by age, gender, occupation, and other demographic factors. These
activity patterns should realistically simulate the movements of representative
people through zones defined by geographic location and microenvironment.
The recommendations for the development of the CCEF are based on the
micro-environmental modeling needs of the American Chemistry Council and
companies associated with high volume chemicals. They are also based on
Battelle's experience in designing, developing, and applying modeling
Frameworks for the U.S. Environmental Protection Agency, the U.S. Department of
Energy, the U.S. Department of Defense, and the U.S. Nuclear Regulatory
Commission. In many cases, Battelle has coordinated development of Frameworks
among these different governmental agencies to create a "merged
system" that meets the needs of each individual agency but also supports
the needs of various modeling types, scopes, scales (both time and space), and
resources.
The priority of research conducted by this project is based on the Gap and
Qualitative Sensitivity Analyses for each of the four components of the CCEF,
which are: Source, Transport, Exposure, and Impacts. A Gap Analysis was
conducted for each of these components to identify areas where one or more
pieces of the modeling puzzle might be missing. This could be a process that
requires a model, database, or algorithm that currently does not exist or is in
development. A Qualitative Sensitivity Analysis was also conducted on the four
components of the CCEF to identify inadequacies in existing models, databases,
or algorithms and determine their impact on the overall Framework.
This study developed the requirements and design of the CCEF
for the American Chemistry Council. In addition, the Lifeline Team has
independently developed requirements and design for the CCEF. An obvious next
step would be to merge the two sets of requirements and design to include the
best from both studies. This should be a combined effort of the American
Chemistry Council, the Lifeline Team, and Battelle.
As for the CCEF development in general, this Framework should satisfy the
current and future needs of the American Chemistry Council and chemical
companies. It is a difficult task to satisfy future needs, but if the CCEF is
developed to be a flexible system that can accept new models, databases, and
algorithms of various scales and purposes, then the CCEF can be a tool that can
accommodate future research needs and changes in policies and regulations. The
CCEF that is presented in this report is designed to be flexible and to support
the micro-environmental modeling scale as well as to span various scales of
modeling (i.e. meso- and macro-environmental modeling).
There is an interrelationship between basic science, information, and
decisions. The basic science provides the foundation upon which decisions are
made, but direct use of these results is generally cumbersome, confusing,
highly technical, and not in a format that is readily comprehensible. Linkages
between meso-scale (i.e., first-order) modeling and visualization, which
provide information for decision-makers, and the science-supporting basic
research and modeling, which provides the foundation for expressing the
information upon which decisions are made, are sorely needed. By formalizing the
interrelationships between basic science, information, and decisions, one
integrates modern computing and information technology with the technology of
molecular biology and chemistry to improve the prioritization of data
requirements and risk assessments for toxic chemicals. The overarching goal is
for science-based quantitative risk assessments to manage chemicals in the
environment without overly burdening the chemical industry.
The design of a CCEF provides an overarching software framework that links
basic science, information, and decisions by informing and advising the
American Chemistry Council. The CCEF supports the American Chemistry Council in
its effort to identify, facilitate, and communicate generic research that will
characterize people's exposure to chemicals, especially nonagricultural
chemicals, and raise the confidence and lower the uncertainty of quantitative
estimates of exposure associated with potential human health effects from
chemicals.
The CCEF facilitates key elements of science-based decisions including risk
analysis (human, ecological, financial, and programmatic), hazard assessment,
exposure characterization, micro-environmental modeling, computational
toxicology, cost analysis, and decision analysis. The CCEF provides scientific
information needed to establish and defend science-informed policy and
assistance in setting programmatic direction and research agendas.
If the American Chemistry Council decides to develop the CCEF into a full
system, a critical piece of the development is documentation and testing. If
full sets of documents are developed that include the unit and system testing,
any modification and upgrades can be made easily and efficiently.
Well-documented test plans that are based on the system requirements and design
not only ensure proper operation but also make modifications and upgrades to
the components and the overall framework easier. A fine balance between the
development and the documentation of the framework is critical. Unfortunately,
many well-designed and already developed frameworks and models are never used
because of a lack of testing and adequate documentation. Battelle has learned
this lesson and has found that testing and documentation require a significant
investment of the budget to create a defensible and useful framework and set of
models and databases.
Based on the design of the CCEF, the following recommendations are made:
The table below summarizes the priority analysis for the components of each
scenario (Source, Transport, and Exposure & Impact). The priority assigned to
each component (high, medium, or low) is discussed in detail in Sections 6.2.2,
6.2.3, and 6.3.4.
Table 6.2.1 Priority analysis producing a high, medium, or low ranking for each
model, database, and algorithm identified in the Gap and Sensitivity Analyses.
The table lists the priority for the Source, Transport, and Exposure/Impact
components of Scenarios 1 through 4; the individual rankings are discussed in
the sections below.
(Key: High = missing data critical to the model.)
The research recommendations for the Source Component of the
analysis will be defined by scenario because these components are very
sensitive to the type of activities associated with each scenario. These
recommendations for the Source Component are associated with the Gap and
Sensitivity Analyses conducted in this study.
Ranking of the models considered a number of factors, which are listed here
roughly in decreasing level of importance: (1) whether the model evaluated is
on the primary exposure pathway (e.g., inhalation, ingestion, dermal) for the
scenario of interest, (2) whether the model recommended for a particular
scenario needs to be developed or already exists, (3) how close the model is to
the beginning of the chain of analyses from source through effects (i.e., “garbage
in, garbage out”), (4) the accuracy of the model, and (5) whether the model
provides the output and time units needed for downstream components of the
CCEF.
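To make these ranking criteria concrete, the sketch below shows one way such factors could be folded into a numerical priority score. The factor names, weights, and scoring scale are hypothetical illustrations and are not the values used to produce the rankings in Table 6.2.1.

```python
# Minimal sketch of combining the five ranking factors into a priority score.
# The factor names, weights, and scale are hypothetical, not study values.

RANKING_WEIGHTS = {
    "on_primary_pathway": 5,   # factor 1: model lies on the primary exposure pathway
    "must_be_developed":  4,   # factor 2: model does not yet exist
    "early_in_chain":     3,   # factor 3: closeness to the start of the source-to-effects chain
    "low_accuracy":       2,   # factor 4: accuracy of the existing model
    "wrong_output_units": 1,   # factor 5: output/time units mismatch with downstream components
}

def priority_score(flags: dict) -> int:
    """Sum the weights of the factors that apply to a candidate model."""
    return sum(w for name, w in RANKING_WEIGHTS.items() if flags.get(name, False))

# Example: a model on the primary pathway that still needs to be developed.
print(priority_score({"on_primary_pathway": True, "must_be_developed": True}))  # 9
```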
It is critical to characterize the important processes associated with the
source of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to humans of different sexes
and ages. Fortunately, many models and algorithms have been developed to
support the micro-environmental modeling area. Based on the Gap and Sensitivity
Analyses, the following research recommendations are given a low priority for
the Source Component in Scenario 1:
It is critical to characterize the important processes associated with the
source of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to humans of different sexes
and ages. Fortunately, several models and algorithms have been developed to
support the micro-environmental modeling area for Scenario 2. Based on the Gap
and Sensitivity Analyses, the following research recommendations are given a
low priority for the Source Component in Scenario 2:
It is critical to characterize the important processes associated with the
source of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to an adult male throughout
his life. Fortunately, many models and algorithms have been developed to
support the micro-environmental modeling area. Based on the Gap and Sensitivity
Analyses, the following research recommendations are given a high priority for
the Source Component in Scenario 3:
It is critical to characterize the important processes associated with the
source of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to an adult male throughout
his life. Fortunately, many models and algorithms have been developed to
support the micro-environmental modeling area. Based on the Gap and Sensitivity
Analyses, the following research recommendations are given a high priority for
the Source Component in Scenario 4:
The research recommendations for the Transport Component of
the analysis will be defined by scenario because these components are very
sensitive to the type of activities associated with each scenario. These
recommendations for the Transport Component are associated with the Gap and
Sensitivity Analyses conducted in this study.
Ranking of the models considered a number of factors, which are listed here
roughly in decreasing level of importance: (1) whether the model evaluated is
on the primary exposure pathway (e.g., inhalation, ingestion, dermal) for the
scenario of interest, (2) whether the model recommended for a particular
scenario needs to be developed or already exists, (3) how close the model is to
the beginning of the chain of analyses from source through effects (i.e.,
"garbage in, garbage out"), (4) the accuracy of the model, and (5)
whether the model provides the output and time units needed for downstream
components of the CCEF.
It is critical to characterize the important processes associated with the
transport of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to humans of different sexes
and ages. Fortunately, many models and algorithms have been developed to
support the micro-environmental modeling area. Based on the Gap and Sensitivity
Analyses, the following research recommendations are given a medium priority
for the Transport Component in Scenario 1:
It is critical to characterize the important processes associated with the
transport of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to humans of different sexes
and ages. Fortunately, several models and algorithms have been developed to
support the micro-environmental modeling area for Scenario 2. Based on the Gap
and Sensitivity Analyses, the following research recommendations are given a
low priority for the Transport Component in Scenario 2:
It is critical to characterize the important processes associated with the
transport of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to an adult male throughout
his life. Fortunately, many models and algorithms have been developed to
support the micro-environmental modeling area. Based on the Gap and Sensitivity
Analyses, the following research recommendations are given a high priority for
the Transport Component in Scenario 3:
It is critical to characterize the important processes associated with the
transport of high-volume chemicals in order to estimate the environmental
concentration that may be available for exposure to an adult male throughout
his life. Fortunately, many models and algorithms have been developed to
support the micro-environmental modeling area. Based on the Gap and Sensitivity
Analyses, the following research recommendations are given a high priority for
the Transport Component in Scenario 4:
The research recommendations for the Exposure and Impact Components of the
analysis will be defined by scenario because these components are very
sensitive to the type and age of the human involved. All of the recommendations
for the Exposure and Impact Components are associated with the Gap Analysis
because too few data were available to conduct a qualitative sensitivity
analysis on these components.
Physiologic and chemical-specific data are needed for both humans and
appropriate laboratory animal species in order to construct useful PBPK and
PBPD models. Based on the Gap Analysis and the inadequate data for the models
associated with pregnant mothers and fetuses, relative to the Source and
Transport Components, the following research recommendations are given a high
priority for Scenario 1:
Specific processes are identified in the Process Flow
Diagram for Scenario 1, Compounds A & B in Section
4 of this document.
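As a rough illustration of the kinds of physiologic and chemical-specific data such models consume, a minimal one-compartment kinetic sketch is given below. The parameter names, values, and model structure are placeholders and do not represent any PBPK/PBPD model evaluated in this study.

```python
# Hypothetical one-compartment kinetic sketch showing the kinds of physiologic
# (body weight, ventilation rate) and chemical-specific (blood:air partition
# coefficient, clearance) data a PBPK-style model needs. All values and the
# model structure are placeholders, not results of this study.

physiologic = {"body_weight_kg": 70.0, "ventilation_L_per_hr": 450.0}
chemical    = {"blood_air_partition": 10.0, "clearance_L_per_hr": 5.0}

def blood_concentration(air_conc_mg_per_L, hours, dt=0.01):
    """Explicit-Euler integration of a single well-mixed compartment."""
    volume_L = 0.8 * physiologic["body_weight_kg"]   # crude distribution volume
    amount_mg = 0.0
    for _ in range(round(hours / dt)):
        conc = amount_mg / volume_L
        uptake = physiologic["ventilation_L_per_hr"] * (
            air_conc_mg_per_L - conc / chemical["blood_air_partition"])
        elimination = chemical["clearance_L_per_hr"] * conc
        amount_mg += (uptake - elimination) * dt
    return amount_mg / volume_L

print(f"{blood_concentration(0.05, 8.0):.3f} mg/L after an 8-hour exposure")
```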
Physiologic and chemical-specific data are needed for both humans and
appropriate laboratory animal species in order to construct useful PBPK and
PBPD models. Based on the Gap Analysis and the inadequate data for the models
associated with pregnant mothers and fetuses, relative to the Source and
Transport Components, the following research recommendations are given a high
priority for Scenario 2:
Specific processes are identified in the Process Flow
Diagram for Scenario 2, Compound C in Section 4 of this document.
Based on the Gap Analysis and the adequate data for the models associated with
adult males, relative to the Source and Transport Components, the following
research recommendations are given a medium priority for Scenario 3:
Specific processes are identified in the Process Flow
Diagram for Scenario 3, Compound D in Section 4 of this document.
Based on the Gap Analysis and the adequate data for the models associated with
adult males, relative to the Source and Transport Components, the following
research recommendations are given a medium priority for Scenario 4:
Specific processes are identified in the Process Flow
Diagram for Scenario 4, Compounds A, B, & E in Section 4 of this document.
Andersen, M.E. and K. Krishnan. 1994. Environ Health Perspect 102(Suppl 1):103-108.

Barnhart, C.L. (ed.). 1970. The

Cox, S.S., J.C. Little, and A.T. Hodgson. 2002. "Predicting the Emission Rate of Volatile Organic Compounds from Vinyl Flooring." Environmental Science and Technology, Vol. 36, No. 4, pp. 709-714.

Gold, C. 1999. "The Voronoi Web Site." Université Laval. Last updated 1999-11-09. http://www.voronoi.com/Indexframes.htm.

Guralnik, D.B. (ed.). 1976. Webster's

Icking, C., R. Klein, P. Köllner, and L. Ma. May 2001. "VoroGlide, Interactive Voronoi Diagrams." Praktische Informatik VI, FernUniversität Hagen. http://wwwpi6.fernuni-hagen.de/Geometrie-Labor/VoroGlide/

Ott, W.R. May 1999. "Mathematical Models for Predicting Indoor Air Quality from Smoking Activity." Environmental Health Perspectives, Vol. 107, Supplement 2.

RTI. 1998. "Data Collection Plan for the Hazardous Waste Identification Rule Multimedia, Multipathway, Risk Assessment Model (HWIR98)." EPA Contract No. 68-W6-0053. Research Triangle Institute.

Sippl, C.J. and R.J. Sippl. 1980. Computer Dictionary and Handbook. Howard W. Sams & Co.

Whelan, G. and G.F. Laniak. 1998. "A Risk-Based Approach for a National Assessment." In: Risk-Based Corrective Action and Brownfields Restorations, C.H. Benson, J.N. Meegoda, R.B. Gilbert, and S.P. Clemence (eds.). Geotechnical Special Publication Number 82. American Society of Civil Engineers.

Whelan, G. and T. Nicholson (eds.). 2001. Proceedings of the Environmental Software Systems Compatibility and Linkage Workshop. PNNL-13654.
Appendix A
• Comprehensive: applicable to exposure scenarios of interest to the chemical
industry
• Modular: consisting of modules (algorithms and databases), which can be
easily updated and exchanged without affecting other parts of the framework
• User-friendly: IBM or compatible personal computer application with a
menu-driven interface
• Multi-route: applicable to exposures via inhalation, oral, and dermal contact
with consumer products
• Multi-pathway: inhalation (air-to-lungs); dermal (liquid-to-skin,
solid-to-skin, air-to-skin); oral (ingestion in food, hand-to-mouth,
inhalation-to-ingestion, air-to-food-to-ingestion)
• Multi-source: single or multiple compounds with the same target organ
• Varying duration: applicable to acute, intermediate, and long-term exposures
• Accurate: integrates state-of-the-art estimation methods and databases to
estimate or reasonably overestimate the "ground-truth" of the actual
exposure
• Open code: accessible for inspection and review by users and stakeholders (no
proprietary or "black box" code)
• Probabilistic: provides realistic distribution of exposures within the
exposed population based on probabilistic modeling of key exposure factors
• Dose-response: converts exposure estimates to corresponding dose and risk
values whenever appropriate
• Mass-conservative: uses a mass balance approach whenever feasible to account
for fate and transport of pollutant mass.
The design of the CCEF leverages concepts from multiple existing software
framework systems and exposure modeling methods that are at the forefront of
the scientific community, as well as new, innovative concepts. The key to the
Comprehensive Chemical Exposure Framework will be its flexibility of use and
its ability to integrate and accommodate the different exposure models
(existing and future) required to meet American Chemistry Council and industry
needs.
The CCEF
design links models and databases together so they can transparently
communicate with each other. The CCEF is the overarching framework that houses
the models and databases as "separate" objects and provides the data
file protocols for communication between objects. A model is represented by a
specific set of algorithms that perform a specific function (e.g., drinking
water ingestion model). A module represents a general set of model types,
defined by their "real world" functions, and includes the model, its
user interface, and any pre- and post-processors that facilitate linkages and
communication with/to other components (e.g., models and databases). This
effort focuses on the design of an overarching framework and not on the models
that are housed within the framework.
The
name, Comprehensive Chemical Exposure Framework, evokes the nature of the
software system, whose design is described herein.
• Comprehensive refers to the ability to capture a wide scope of
problems, issues, perspectives, and exposure scenarios of interest to the
chemical industry.
• Chemical refers most importantly to non-agricultural compounds, but when
combined with Comprehensive, the design should allow all types of chemicals
(e.g., organic, inorganic, radioactive) to be addressed should a future
requirement arise. In other words, non-agricultural chemicals will be
addressed, but the system should not necessarily be structured to exclude other
chemicals.
• Exposure refers to the manner in which people come in contact with chemicals,
which could include exposure routes (e.g., inhalation, oral, and dermal
contact) and exposure pathways [ingestion in food, hand-to-mouth, etc. (i.e.,
oral); liquid-/solid-/air-to-skin (i.e., dermal contact); and air-to-lungs
(i.e., inhalation)].
• Framework refers to the software structure of the CCEF, which allows for the
incorporation and linkages of a confederation of models and databases. Note
that the Framework houses models and databases, but the Framework itself does
not represent a model in the system, although, technically speaking, it can and
probably will be called a model. Within this document, the term “model” will
refer to the software codes inside the Framework, and the Framework will
represent the overall structure linking the models and databases to allow for a
seamless transfer of data between components.
Data
Dictionary Files provide the metadata describing attributes of the actual data
in datasets. The datasets contain the actual numbers that are consumed and
produced by each model and database using the Data Dictionary File metadata
formats, as illustrated by Tables A1.1 through A1.3. Table A1.1 presents a definition of the data fields
associated with a typical Data Dictionary File. Tables A1.2 and A1.3
illustrate the application of Table
A1.1 as they relate to the parameters describing the chemical list,
including degradation/decay products, and chemical toxicity information. If a
parameter is indexed to a parameter in another Data Dictionary File, then the
index contains the name of the other Data Dictionary File and an extension
containing the other parameter. For example, the Inhalation Cancer Potency
Factor in Table A1.3 is a
function of chemical; therefore, it is indexed to the chemical CAS ID in the
ChemList Data Dictionary File (i.e., ChemList.CAS). By providing indices
and referenced parameters, the information only has to be stored once and an
understandable mapping is provided for the user. Also identified in the Data
Dictionary File tables are those parameters that exhibit statistical variation
and can be represented by a distribution in a sensitivity/uncertainty analysis
(e.g., Monte Carlo simulation) (see stochastic column in Tables A1.2 and A1.3).
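To make the indexing convention concrete, the sketch below shows one way a Data Dictionary File entry and its index could be represented. The field names are illustrative stand-ins for the fields defined in Table A1.1 and do not reproduce the actual CCEF file format.

```python
# Illustrative sketch of Data Dictionary File metadata for two hypothetical
# parameters. Field names are stand-ins for the fields of Table A1.1; the
# actual CCEF file format is not reproduced here.

chem_list_ddf = {
    "CAS": {"description": "Chemical Abstracts Service registry number",
            "units": "none", "stochastic": False, "index": None},
}

toxicity_ddf = {
    "InhalationCancerPotencyFactor": {
        "description": "Inhalation cancer potency factor",
        "units": "(mg/kg/day)^-1",
        "stochastic": True,              # may be represented by a distribution
        "index": "ChemList.CAS",         # indexed to the CAS ID in the ChemList file
    },
}

def resolve_index(entry, dictionaries):
    """Follow an index of the form 'File.Parameter' to the referenced metadata."""
    if entry["index"] is None:
        return None
    file_name, parameter = entry["index"].split(".")
    return dictionaries[file_name][parameter]

dictionaries = {"ChemList": chem_list_ddf, "Toxicity": toxicity_ddf}
print(resolve_index(toxicity_ddf["InhalationCancerPotencyFactor"], dictionaries)["description"])
```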
The Data
Dictionary Files are an effective mechanism for transferring information
between components. The Data Dictionary Files can be cataloged into five
categories, although the design allows for expansion:
With the advent of standardized Data Dictionary Files, an Application
Programming Interface can be developed for the CCEF to provide: coordination
and management of the inputs and outputs between components [e.g., range
checking of parameters, data retrieval, data storage, units checking,
opening/closing of data sources, and metadata functions (cardinality, units,
definitions, etc.)]; Read and Write functionality (e.g., error handling,
command-line functions, producer-consumer relationships, conceptual site model
security, model selection, run calls between models, and documentation of user
comments); units conversion, so each model can work with its own unique units
without concern for unit conversion errors; and the conceptual site model
graphical user interface (e.g., drag & drop functionality, tiered icon palette,
etc.).
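The sketch below illustrates the kinds of services such an Application Programming Interface could expose, using range checking and units conversion as examples. The class and method names are hypothetical and are not part of the CCEF specification.

```python
# Hypothetical sketch of services a CCEF Application Programming Interface
# could expose; class and method names are illustrative only.

class CCEFApi:
    # simple multiplicative conversion factors to an internal base unit
    _unit_factors = {"mg/L": 1.0, "ug/L": 1e-3, "g/L": 1e3}

    def check_range(self, name, value, low, high):
        """Range checking of an input parameter."""
        if not (low <= value <= high):
            raise ValueError(f"{name}={value} outside allowed range [{low}, {high}]")
        return value

    def convert_units(self, value, from_units, to_units):
        """Units conversion so each model can work in its own units."""
        return value * self._unit_factors[from_units] / self._unit_factors[to_units]

api = CCEFApi()
api.check_range("porosity", 0.35, 0.0, 1.0)
print(api.convert_units(250.0, "ug/L", "mg/L"))   # 0.25
```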
As noted
earlier, if the producing component’s output Data Dictionary Files match the
consuming component’s input Data Dictionary File requirements, then the two components can communicate. Figure A1.1 illustrates the linkage of two models with
each model receiving input from a database (i.e., Database Data Dictionary
File) and user (i.e., Input Data Dictionary File), and Model 2 receiving
upstream boundary conditions from the upstream model (i.e., Model Data
Dictionary File). In this case the models can communicate because all upstream
model boundary condition requirements of Model 2 are met by the information
produced by Model 1.
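A minimal sketch of this matching rule is shown below; the Data Dictionary File names are invented for illustration.

```python
# Minimal sketch of the "can these two components communicate?" test implied by
# the Data Dictionary File matching rule; file names are illustrative.

def can_communicate(producer_outputs: set, consumer_inputs: set) -> bool:
    """True if every boundary-condition DDF the consumer requires is produced upstream."""
    return consumer_inputs.issubset(producer_outputs)

model1_outputs = {"WaterConcentration.ddf", "MassFlux.ddf"}
model2_inputs  = {"MassFlux.ddf"}
print(can_communicate(model1_outputs, model2_inputs))  # True
```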
The
Application Programming Interface and Data Dictionary File design allows for
the Plug & Play feature, which is the most important feature of the design.
By ensuring Plug & Play, the CCEF inherently includes the ability to
Figures
A2.1 through A2.3 illustrate how these three components fit into the overall
design of the system. Figure A2.1 presents a schematic illustrating
the linkages between a Sensitivity/Uncertainty Module and a database supplying
statistical data, the CCEF conceptual site model, with linked modules, and a
module that analyzes the Sensitivity/Uncertainty output. The Database Data
Client Editor Module represents a database supplying statistical information on
the model input parameters. The R-Squared module represents an illustrative
example of a model that statistically analyzes the probabilistic output results
(i.e., generating r2 values, which identify the parameters having
the greatest influence on variations in the output results). The Discrete
conceptual site model Simulation represents the linkage picture of the
conceptual site model, developed by the user. The Sensitivity/Uncertainty
module and its components are examined in more detail on Figure A2.3 (below).
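The sketch below illustrates the general sensitivity/uncertainty idea: sample uncertain inputs, run a model, and rank the inputs by the r2 of a simple linear fit of the output against each input. The stand-in model and the input distributions are hypothetical and do not represent the CCEF Sensitivity/Uncertainty Module itself.

```python
# Illustrative sketch: sample uncertain inputs, run a stand-in model, and rank
# inputs by the r-squared of a simple linear fit. All values are hypothetical.
import random

def stand_in_model(x1, x2):
    return 3.0 * x1 + 0.5 * x2          # placeholder for a linked CCEF simulation

def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

random.seed(1)
samples = [(random.gauss(1.0, 0.2), random.gauss(1.0, 0.2)) for _ in range(500)]
outputs = [stand_in_model(x1, x2) for x1, x2 in samples]
for i, name in enumerate(("x1", "x2")):
    print(name, round(r_squared([s[i] for s in samples], outputs), 3))
```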
Figure
A2.2 presents a schematic highlighting the components of a "hard-wired"
Sensitivity/Uncertainty module and its relationships to the system. Also highlighted
in this figure is the sampling method, as different sampling techniques exist
(e.g., straight
In
addition to a
The
ultimate intent is to propose a universal design for meeting protocols for
linking disparate models and ensuring communication between environmental
software products. The design is not intended to be parochial or inflexible but
is intended to set the standard for allowing a number of different approaches
to communicate within a framework. For example, if the requirement is to allow
two models to seamlessly communicate, then the interface design between these
two models should be such that data should seamlessly pass from one model to
the next, irrespective of scale or resolution (within reason), and should not
be model-dependent. These protocols are applicable, regardless of whether the
models are on local or remote systems.
The
responsibilities of the consuming module would include processing, mapping, and
transforming the producing model’s output, as correlated to the Geo-Referencing
system, to a standard format described by a Model Dictionary File. The
consuming model would be charged with:
The
responsibilities of the producing module would include processing, mapping, and
transforming the output to a standard format described by a Model Dictionary
File. The producing model would be charged with:
The
boundary-condition data passing from one module to another will be handled
through the Application Programming Interface Input/Output Dynamic Link
Library. The linkage protocols and design will address the mapping of
information between modules, specifically passing or calculating geo-spatial
data, Geo-Reference coordinates, area-based polygons, and fractions of areas
that overlap between producing and consuming model coordinate systems.
At the
linkage or communication boundary, boundary information provided by the
producing module would include the following:
The
responsibilities of the consuming module would include processing, mapping, and
transforming the producing model's output, as described by the producing
module's coordinate system, to the consuming model's coordinate system, which
would involve mapping the producing model's spatially dependent, Geo-Referenced
coordinates onto the consuming model's spatially dependent, Geo-Referenced
coordinates and determining the fractions of areas that overlap. The mapping
process will use the producing model's areas at the boundary that are projected
onto the consuming model's boundary surface.
It is
recognized that the consuming model may not have programs developed to perform
the processing, mapping, and transformation associated with the producing
model's output. Although not a responsibility of the system, the Application
Programming Interface Input/Output Dynamic Link Library will provide a
routine that:
In other
words, if a consuming model needs assistance in transforming a producing
model's output for consumption, a system program will be made available, which
maps and assigns a producing model's output to all of the consuming model's
boundary-condition polygons. Based on this mapping, the consuming model would
be charged with completing the mapping of the information associated with its
polygons to its nodes, if required.
For an
analytical model, a rectangular plane traditionally defines the interface area
with the vertices being defined at the four corners of the rectangle. For a
numerical model containing a gridding system, multiple nodes and/or areas will
be defined at the boundary interface. The producing model will be responsible
for transforming its output to meet the boundary condition metadata
requirements of the Model Dictionary Files, as illustrated in Tables A3.2.1 and A3.2.2. If the producing model provides its output by node,
it will have to convert its node-based output to correspond to an area (i.e.,
polygon) representing each node. Either the model can do the conversion or the
model can request help from the system, and the system will provide software to
help in this conversion.
The CCEF
will assume that the boundary condition information is 1) associated with each
polygon area, representing each boundary condition node, and 2) uniformly
distributed across each polygon. Figure A3.3.2 illustrates two areas
associated with the boundary of a producing model (i.e., polygons #1 and #2),
overlapping with one area associated with the boundary of a consuming model
(i.e., polygon #1). When the producing model defines the output at a node, the
producing model is responsible for transforming that nodal output into a form
representing the area-based polygon. The consuming model has the responsibility
to map the output results of each producing model’s polygons to the area-based
polygons and corresponding nodal locations associated with the consuming model’s
grid system. All interface mapping will be assumed to occur across a flat
plane. If the gridding system boundary for one of the models is curvilinear,
the producing model’s area, projected onto the consuming model’s surface, will
be used in the mapping exercise.
If a
consuming model does not have a mapping routine that transfers the producing
model’s polygon-based output (e.g., time-varying mass flux and polygon
vertices, which describe the area) to the appropriate and corresponding
consuming model’s polygons, then the system will provide software to assist the
consuming model in the mapping exercise. The consuming model can use this
mapping software through the Application Programming Interface, or it can
utilize its own mapping software. If system software is used, the Application
Programming Interface will compute polygons around each node, defining the
polygons by their Geo-Referenced vertices. The system assumes that the output
at each node is associated with the area surrounding the node and that the
output is uniformly distributed across the node’s assigned area. This
assumption would be applicable for both producing and consuming models.
By
utilizing the CCEF Application Programming Interface, a consistent procedure
exists to develop the areas (i.e., polygons) associated with each producing or
consuming model’s nodes. There are a number of methods for generating polygons
(i.e., multiple-connected planar domains) to describe irregular computational
grids (e.g., Thiessen Polygon Method, Voronoi Diagrams); the CCEF will use
Voronoi Diagrams (Icking, 2001). Either module can invoke the Application
Programming Interface to help it with the development of polygons. The extent
of the areal planes associated with the producing and consuming models does not
have to be identical. In other words, the sizes of the overlapping consuming
and producing model planes could be different, as illustrated in Figures A3.3.1
and A3.3.2 (above). The CCEF would then calculate the
fraction of each producing polygon that overlaps with a consuming polygon.
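A minimal sketch of the overlap-fraction calculation is shown below. The shapely geometry library is used purely for illustration; the CCEF design does not prescribe a particular geometry package, and the generation of Voronoi polygons around nodes is not shown.

```python
# Minimal sketch of computing the fraction of each producing-model polygon that
# overlaps a consuming-model polygon; shapely is used only for illustration.
from shapely.geometry import Polygon

producing = [Polygon([(0, 0), (2, 0), (2, 1), (0, 1)]),
             Polygon([(2, 0), (4, 0), (4, 1), (2, 1)])]
consuming = Polygon([(1, 0), (3, 0), (3, 1), (1, 1)])

fractions = [p.intersection(consuming).area / p.area for p in producing]
print(fractions)   # [0.5, 0.5] -> half of each producing polygon overlaps
```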
When the
producing and consuming model areas do not exactly overlap, the output, being
transferred from the producing module to the consuming module, will be defined
by the weighted-average associated with the overlapping areas (i.e., fraction
of the producing model polygon overlapping with the consuming model polygon).
In other words, the output assigned to the consuming module’s nodal area will
be defined by the output of the producing module’s nodal areas, whose nodal
areas overlap, weighted by areal extent. The polygons for both the producing
and consuming models are assumed to be convex; that is, no interior angle at
any vertex is greater than 180 degrees.
The
transformation (i.e., mapping) of a producing model’s grid system onto a
consuming model’s grid system area is illustrated in Figure A3.3.2 (above). In
this illustrative example, the producing model has two polygons associated with
it, while the consuming model (shaded area) has one. Approximately 20% of
producing model polygon #1 and 35% of producing model polygon #2 overlap with
consuming model polygon #1. In effect, the information passing from the
producing model to the consuming model's nodal area will be represented by 20%
and 35% of the output supplied by producing model polygons #1 and #2,
respectively.
Each
producing model will provide time-varying output corresponding with each
producing model’s area-based polygons, which describe the boundary interface
between models. The time-varying output from the producing model is a function
of the time steps used in generating the time-varying curve. The system does
not know if the time information between data-points on the mass flux rate
curve is linear, constant, nonlinear, etc.; as such, each producing model’s
time-varying curves will have their own time-stepping protocol for each
parameter for each polygon. When the consuming model maps the producing model’s
polygon output and transfers the information from the producing model’s
gridding system to its own, the consuming model is responsible for accounting
for the time-stepping system that the producing model provides. For example, if
the producing model provides uneven time intervals with its output, but the
consuming model requires even-incremented time steps, then the consuming model
is responsible for ensuring the proper conversion.
For
those consuming models that do not have a protocol for mapping producing model
results into a form that they can recognize, the system Application Programming
Interface provides software that can help in this mapping process. If the
consuming model utilizes the system mapping protocol, then it is assumed that
the consuming model’s time-varying input, which corresponds to a nodal polygon,
is defined by the producing model's area-weighted output, whose producing model’s
polygons overlap with the consuming model’s polygon. By passing area-weighted
information, the system can transform the data from the producing model’s
gridding system into a format that is consistent with the consuming model’s
gridding system. The mapping procedure, using the system Application
Programming Interface, is as follows:
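One plausible form of Equations (1) and (2), consistent with the definitions that follow and with the area-weighted summation described later in this section, is sketched here. The symbols Q_i(t) and C_j(t), denoting the producing-model polygon output and the consuming-model polygon input, are introduced only for this sketch.

```latex
% Sketch of Equations (1) and (2); Q_i(t) and C_j(t) are illustrative symbols
% for the producing-model polygon output and consuming-model polygon input.
\begin{align*}
  f_i &= \frac{A_i}{A_{Ti}} \tag{1} \\
  C_j(t) &= \sum_{i=1}^{n} f_i \, Q_i(t) \tag{2}
\end{align*}
```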
where f_i represents the fraction of the producing model's area that
overlaps with the consuming model's area, A_i represents the i-th
producing model's area that overlaps with the consuming model's area, A_Ti
represents the area in the i-th producing model's polygon, "i" is the
index on the i-th polygon in the producing model's gridding system that
overlaps with the area associated with the consuming model's polygon, n is the
total number of producing model polygons whose areas overlap the area
associated with each corresponding consuming-model polygon, and j is the index
on the j-th polygon in the consuming model's gridding system.
Figures
A3.3.2 and A3.4.1 (above) illustrate how a producing model’s output for two
polygons can be transformed to produce input to the consuming model’s polygon.
Figure A3.4.1 illustrates the mechanics of implementing Equations (1) and (2),
as they relate to Figure A3.3.2. Five curves are presented in Figure A3.4.1: 1)
two time-varying mass flux rate curves for the producing model’s polygons #1
and #2 (represented by solid squares and triangles, respectively), 2) two
time-varying producing model curves, adjusted for the fraction of overlap
between the producing and consuming model polygons (e.g., 20% and 35%) (represented by open squares and triangles, respectively),
and 3) consuming model’s curve associated with its polygon (represented by open
circles). Multiplying the areal fractions by the corresponding polygon
output and summing the results produces the input curve (i.e., the open-circle
curve in Figure A3.4.1) for the consuming model. The consuming model can then
manipulate and transform this input curve to meet its model-specific input
requirements.
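The arithmetic of the example above can be sketched as follows; the flux curves, time points, and common time grid are invented for illustration, and numpy.interp is used only as one way to handle the uneven time-stepping.

```python
# Illustrative sketch of building the consuming model's input curve from two
# producing-model polygon flux curves, using the 20% and 35% overlap fractions
# from the example above. All times and fluxes are invented.
import numpy as np

t1, q1 = np.array([0.0, 1.0, 3.0]), np.array([0.0, 10.0, 4.0])   # polygon #1 flux
t2, q2 = np.array([0.0, 2.0, 3.0]), np.array([0.0,  6.0, 6.0])   # polygon #2 flux
f1, f2 = 0.20, 0.35                                              # overlap fractions

t_common = np.linspace(0.0, 3.0, 7)                # consuming model's time grid
consumed = f1 * np.interp(t_common, t1, q1) + f2 * np.interp(t_common, t2, q2)
print(np.round(consumed, 2))
```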
Based on
the CCEF design describing data transfer protocol (i.e., use of Data Dictionary
Files), the linkage of databases to other components in the system can be
addressed in a similar manner to the linkage protocol associated with models.
The model owner defines input requirements for the model through a series of
Data Dictionary Files: Input and boundary condition Data Dictionary Files, of
which the boundary condition Data Dictionary Files could be defined by upstream
models and/or upstream databases. The boundary condition Data Dictionary Files
provide the database owner with a template of the data needs of the models in
the system and allow the database owner to map the data to those needs.
Because
the database owner understands the database structure better than anyone else,
it is most appropriate for the database owner to perform the mapping. To avoid
a "data dump," the initial mapping would be based on "Primary
Keys" that only provide the most appropriate information for selection by
the model. Examples of Primary Keys include chemical and organism. The CCEF
system would provide the database owner with the necessary tools to map the
database contents to the input requirements of the CCEF models, as identified
by Database Data Dictionary Files. Therefore, by knowing the boundary condition requirements of
the models, Database Data Dictionary Files can be developed and the models
would then have the option to consume only that information from the database
that met the model’s needs. A database would populate a dataset meeting the
format specifications of a Database Data Dictionary File. To consume that
information, the model would reference the same datasets (based on the Database
Data Dictionary Files). Data would then be directly transferred from the
database to the model.
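A minimal sketch of a Primary-Key mapping is shown below; the field names, records, and values are hypothetical and do not come from any database reviewed in this study.

```python
# Sketch of a database owner's Primary-Key mapping: only records for the
# chemicals requested by the model are extracted into the dataset.
# Records and values are hypothetical placeholders.

database_records = [
    {"CAS": "71-43-2", "InhalationCancerPotencyFactor": 1.0e-3},  # placeholder value
    {"CAS": "50-00-0", "InhalationCancerPotencyFactor": 2.0e-3},  # placeholder value
    {"CAS": "67-64-1", "InhalationCancerPotencyFactor": None},
]

def extract_by_primary_key(records, requested_cas):
    """Populate a dataset with only the chemicals the model asked for."""
    return [r for r in records if r["CAS"] in requested_cas]

print(extract_by_primary_key(database_records, {"71-43-2"}))
```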
In the
instance where the boundary condition Data Dictionary File is describing a
calculated result being transferred from one model to the next model (i.e.,
Model Dictionary File), there is an expectation that the dataset for that Data
Dictionary File or set of Data Dictionary Files will be complete. In the
instance where the boundary condition Data Dictionary File is describing
information coming from a database, the expectation of completeness is not
imposed. It is unrealistic to require a database to understand and meet the
input requirements of every possible model that might want to access its data
repository. The module receiving information from a database must have
provisions to accept incomplete datasets. This means that a default procedure for
completing an incomplete dataset must be performed by the receiving model, or a
user-interface option (i.e., user intervention) to fully populate the dataset
must be provided.
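The sketch below shows one way a consuming module could accept an incomplete dataset, filling gaps from model defaults or prompting the user. Parameter names and default values are illustrative.

```python
# Sketch of a consuming module accepting an incomplete dataset from a database:
# missing parameters are filled from model defaults, or the user is asked to
# supply them. Parameter names and defaults are illustrative.

MODEL_DEFAULTS = {"bulk_density_g_cm3": 1.5, "porosity": 0.4}

def complete_dataset(incoming: dict, interactive=False) -> dict:
    completed = dict(incoming)
    for name, default in MODEL_DEFAULTS.items():
        if completed.get(name) is None:
            if interactive:
                completed[name] = float(input(f"Enter a value for {name}: "))
            else:
                completed[name] = default        # default procedure
    return completed

print(complete_dataset({"bulk_density_g_cm3": 1.62, "porosity": None}))
```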
Like
models, databases could be linked to databases,
thereby providing a simple mechanism to prioritize the same type of data (e.g.,
bulk density) from multiple databases. Figure A5.1 illustrates the linkage of
three databases to a model: National, Regional, and Site-Specific. In this
example, the databases closest to the model take precedence over those
databases farther from the model. If a Site-Specific database did not fulfill
the boundary condition requirements of the model, then data gaps would be
filled by the Regional database, then the National database. For conflicting information
(e.g., different cancer potency factors provided by each database), the closest
database could take precedence.
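The precedence rule described above can be sketched with a simple chained lookup; the databases, parameter names, and values below are invented for illustration.

```python
# Sketch of the Site-Specific > Regional > National precedence described above;
# collections.ChainMap looks up each parameter in the nearest database first.
# Parameter names and values are illustrative.
from collections import ChainMap

site_specific = {"bulk_density_g_cm3": 1.62}
regional      = {"bulk_density_g_cm3": 1.50, "annual_precip_cm": 95.0}
national      = {"bulk_density_g_cm3": 1.40, "annual_precip_cm": 76.0,
                 "cancer_potency_factor": 1.0e-3}

lookup = ChainMap(site_specific, regional, national)
print(lookup["bulk_density_g_cm3"])     # 1.62 - from the Site-Specific database
print(lookup["annual_precip_cm"])       # 95.0 - gap filled by the Regional database
print(lookup["cancer_potency_factor"])  # 1.0e-3 - gap filled by the National database
```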
Figure
A5.2.1 CCEF Design Relationships between Linkage and Model Servers, Host
Client, and Remote Databases and Models
An explanation of Figure A5.2.1 as it relates to the
components comprising the server side of the CCEF design is as follows.
The Data
Owner Tool, Data Client Editor, and Data Extraction Tool represent software for
accessing remote databases. The Model Owner Tool, Model Execution Tool, and
Remote Module Client represent software for accessing and running remote
models.
Under
the design of the CCEF, a graphical user interface will allow the user to:
The
complete tool will allow decision makers and modelers to easily modify the
conceptual site model for a modeling scenario. When the modeling has been
completed, the environmental and risk results can then be analyzed to assess
the risk implications of changing conceptual site models. The result will be to
make this capability more directly accessible to client users and more useful
for problems where decision makers disagree or are uncertain about important
site parameters. The essential spatial information required to define the
conceptual site model is passed via the Data Dictionary File specifications,
which contain all spatial modeling data. The Data Dictionary Files have been
structured to allow for the model developer to specify those parameters that
are spatially aware.
Data are
spatially aware when they have been designated as having a Location index,
constituting the coordinates of the vertices associated with a polygon (see Table A3.2.1). Clear distinctions
have to be made between spatial system data (i.e., spatial layout of
components, such as sources and receptors) and non-spatial model data (i.e.,
porosity, temperature, Kds, toxicity, age, etc.). The spatial data entered
through the conceptual site model graphical user interface are divided into
three object categories: points, lines, and polygons, all requiring coordinates
of their vertices.
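A minimal sketch of the three spatial object categories is given below; the class names and fields are illustrative and are not the CCEF data model.

```python
# Sketch of the three spatial object categories (points, lines, polygons), each
# carrying the vertex coordinates that make a parameter "spatially aware".
# Class names and fields are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple

Coordinate = Tuple[float, float]   # e.g., Geo-Referenced easting/northing

@dataclass
class SpatialObject:
    kind: str                      # "point", "line", or "polygon"
    vertices: List[Coordinate]     # a point has one vertex; a polygon is closed

source_area = SpatialObject("polygon", [(0.0, 0.0), (50.0, 0.0), (50.0, 30.0),
                                        (0.0, 30.0), (0.0, 0.0)])
receptor    = SpatialObject("point", [(120.0, 45.0)])
print(source_area.kind, len(source_area.vertices))
```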
Figure
A5.3.2 presents an example illustrating the linkage
of an existing Geographical Information System to the CCEF. Data are
traditionally collected, and the Geographical Information System database is
populated with these data. This may be done outside of the assessment process
(Figure A5.3.2), as was done for the Hazardous Waste Identification Rule
assessment, or the user may designate the spatial data requirements of the
models through a conceptual site model Geographical Information System
graphical user interface delivered as part of the conceptual site model
(Figures A5.3.3 through A5.3.6, below). Each Geographical Information System
has its own special file formats [e.g., ESRI Shape files (*.shp), AutoCAD
(*.dwg), Drawing Interchange Files (*.dxf), Windows Bitmap (*.bmp), and Tag
Image Files (*.tif and *.tiff)]; as such, a program will link the Application
Programming Interface of the Geographical Information System to that of the
CCEF, thereby allowing for the conversion and transfer of data from the
Geographical Information System to the CCEF. The CCEF uses the data and may
produce spatially aware output results, which could be converted back to the
Geographical Information System file formats for visualization in a
Geographical Information System viewer (see Figure A5.3.2).
The user
does not see the mechanics behind the linkage of the Geographical Information
System to the CCEF conceptual site model, as illustrated in Figure A5.3.2
(above). In the user’s work space (see Figure A5.3.7 below) and from a set of
system icons, the user would choose the Geographical Information System icon
and connect it to all of those models that require spatial information and
expect to receive the spatial data from the Geographical Information System.
The user would then link the Geographical Information System icon to those
models requiring spatial data, as illustrated by Figure A5.3.7 (below). If the
data come from a standard database, then a Data Client Editor (coupled with a
Data Owner Tool and Data Extraction Tool) is used to transfer the data to each
model. If the user wants to develop a real-time spatial picture of the
conceptual site model, then software would be available to identify the
polygons, lines, and points associated with the conceptual site model, as
illustrated in Figures A5.3.3 and A5.3.4 (below). Figure A5.3.3 pictorially
illustrates a user-defined Geographical Information System-based conceptual
site model, identifying well and air population-usage locations, sources, water
bodies, farms, aquifers, watersheds, and ecological habitats. Figure A5.3.4
illustrates the use of a background map to facilitate the identification of
polygons, lines, and points. Figures A5.3.5 and A5.3.6 illustrate how the
Geographical Information System could be used as a visualization tool to
inspect time-varying data at specific locations or spatially varying data at a
point in time, respectively.