PhD Thesis

  Development of Improved Gaussian Dispersion Models for Cases of Downwash Past Wide Buildings Using Three-Dimensional Fluid Modeling, by Anita Coulter Flowe. (1997)

 The objectives of this work were to show that a well-tested three-dimensional turbulent kinetic energy/dissipation (k-ε) computational model, FLUENT, can be used to model the fluid flow fields and the dispersion effects in the flow fields generated by a variety of building shapes, and to use the resulting data sets to develop parameterizations useful to air quality modeling. Once the appropriateness of the computational model was established through comparisons with experimental results, and data were generated for several ratios of building width to building height, the flow field was examined to determine the length of the recirculation cavity, both in front of and behind the building, as a function of the width-to-height ratio. The dimensions of the recirculation cavity in front of the building have not previously been included in regulatory models, so both the height and the length of this front cavity were parameterized as functions of the width-to-height ratio. The maximum downdraft was also parameterized as a function of the building width to building height ratio.

 The dispersive effects were then examined to determine useful parameters. The average concentration in the recirculation cavity was calculated and modeled as a function of the ratio of building width to building height. Finally, because Gaussian models are generally used for regulatory modeling of dispersion, the dispersive field was analyzed to find improved dispersion coefficients for use in Gaussian models. The vertical and horizontal dispersion coefficients were computed as functions of distance from the dispersive source for each width-to-height ratio, and these functions were in turn expressed as functions of the width-to-height ratio. The resulting dispersion coefficients, which depend on both the distance from the stack and the width-to-height ratio, were then used in a Gaussian model to compute mass fractions. The computed Gaussian mass fractions agreed with the 3-D k-ε mass fractions from the FLUENT model to within 10%.
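To illustrate how such parameterized coefficients plug into a Gaussian model, the sketch below (Python) evaluates the standard ground-reflected Gaussian plume equation; the coefficient functions here are hypothetical placeholders standing in for the thesis's fitted functions of downwind distance and width-to-height ratio.

```python
import numpy as np

def sigma_y(x, wh_ratio):
    """Hypothetical horizontal dispersion coefficient (m) as a function of
    downwind distance x (m) and building width-to-height ratio; the thesis
    fits its own functions, which are not reproduced here."""
    return 0.08 * x**0.9 * (1.0 + 0.1 * wh_ratio)

def sigma_z(x, wh_ratio):
    """Hypothetical vertical dispersion coefficient (m)."""
    return 0.06 * x**0.85 * (1.0 + 0.05 * wh_ratio)

def gaussian_plume(Q, u, x, y, z, H, wh_ratio):
    """Ground-reflected Gaussian plume concentration (kg/m^3) for emission
    rate Q (kg/s), wind speed u (m/s), and effective release height H (m)."""
    sy, sz = sigma_y(x, wh_ratio), sigma_z(x, wh_ratio)
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                + np.exp(-(z + H)**2 / (2 * sz**2)))
    return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Example: centerline ground-level concentration 500 m downwind of a
# 20 m stack behind a building with W/H = 4 (all values illustrative).
print(gaussian_plume(Q=1.0, u=5.0, x=500.0, y=0.0, z=0.0, H=20.0, wh_ratio=4.0))
```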



 Master's Theses and Projects

 

Evaluation of Ozone Forecasting Models Using MM5 Real-Time Forecasts and Development of Online Calculator for Cleveland and Akron, OH, Ashwini Tandale (May, 2004)

 Air quality forecasts for more than 250 cities in the United States are made daily by state and local agencies to caution the public about potentially harmful conditions. It is important that real-time and forecasted air quality information be accurate so that necessary measures can be taken. In this study, forecasting models were developed to predict the daily maximum ozone concentrations and the air quality index (AQI) for the Cleveland and Akron areas in Ohio. Ozone data for the years 1996-2002 obtained from the U.S. Environmental Protection Agency (EPA) and meteorological data extracted from the National Climatic Data Center (NCDC) for the same period were used. The data were divided into three groups, namely pre-summer (April to May), summer (June to July), and post-summer (August to October), based on the seasonal variations of ozone during these periods. The popular Kolmogorov-Zurbenko (KZ) filter technique and regression analysis were adopted for developing the models using the time series for the years 1996-2001. The proposed models defined the natural log of the daily maximum ozone concentration as a function of daily maximum temperature and daily average wind speed. A total of twelve models were developed to predict ozone concentrations for the pre-summer, summer, and post-summer periods. Six models considered temperature and wind speed as the independent variables, and the other six considered temperature alone. The performance of the models was evaluated in three ways: a) initial evaluation using 2002 data and parameters used in air quality model evaluation studies; b) comparison with an earlier model developed for the entire state of Ohio; and c) further evaluation using available MM5 (a mesoscale meteorological forecasting model) real-time forecasts from the Ohio State University for the months of August-October 2003. The study shows that the ability of models based on KZ filters to forecast daily maximum ozone concentration is limited, and that the models are less reliable in predicting the high concentrations observed in both the Cleveland and Akron areas when observed values of the independent parameters are used. However, the models performed well in predicting the AQI reported by the USEPA for both areas. It was also found that using temperature and wind speed together increased the accuracy of predictions compared to the models based on temperature alone. Based on these models, an online calculator was developed that computes the ozone concentration when the temperature, wind speed, and season are provided. This tool can be accessed through the link aprg.utoledo.edu/Forecasting_Templates/Forecast_Home.htm.
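For reference, the KZ filter itself is simple to state: KZ(m, p) is p repeated passes of a centered m-point moving average. A minimal sketch (Python), with window and iteration counts chosen for illustration and synthetic data standing in for the ozone record:

```python
import numpy as np

def kz_filter(series, m, p):
    """Kolmogorov-Zurbenko filter: p passes of a centered moving average
    of window length m (m odd). Near the edges, the average is taken over
    whatever points fall inside the window."""
    x = np.asarray(series, dtype=float)
    half = m // 2
    for _ in range(p):
        out = np.empty_like(x)
        for i in range(len(x)):
            out[i] = np.nanmean(x[max(0, i - half): i + half + 1])
        x = out
    return x

# KZ(15, 5) on daily maxima is a common choice in the ozone literature for
# separating the seasonal baseline from short-term weather-driven variation.
daily_max_o3 = np.random.default_rng(0).gamma(4.0, 12.0, size=365)
baseline = kz_filter(np.log(daily_max_o3), m=15, p=5)
short_term = np.log(daily_max_o3) - baseline   # regressed on weather variables
```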

 

Explore the Link of PM10 with Meteorological Factors and Ambient Air Concentrations of Ozone, CO and NO2 using Time Series for Cleveland, Ohio, Charanya Varadarajan (May 2004)

Major urban areas face air quality problems such as high concentrations of particulate matter. Particles less than 10 microns in diameter (PM10) are targeted because these small particles can easily penetrate into the deepest regions of the lungs. Epidemiological studies strongly confirm the hazards of breathing fine particles at concentrations typically found in ambient air in U.S. cities. The stringent standards and the rigorous measures taken to reduce particulate levels since 1996 have resulted in a considerable decrease in pollutant concentrations. Ambient concentrations of PM10 collected in Cleveland, Ohio, are analyzed to study the behavior of PM10 and the factors that affect its variation. Two time series modeling approaches, the KZ filter and the Box-Jenkins transfer function, are used to analyze the data. The trend models developed from the KZ filter process and the cross-correlations from the Box-Jenkins procedure are used to establish the relationships of PM10 with meteorological variables as well as the ambient concentrations of the photochemical contaminants ozone (O3), nitrogen dioxide (NO2), and carbon monoxide (CO). Models were developed for both the hourly and daily average values of PM10. The robustness of the KZ models was determined using three evaluation parameters, while the transfer function models were tested for convergence using their residual plots. The typical inputs for the hourly KZ filter models included hourly temperature, wind speed, and ozone concentration. The KZ analysis showed that hourly PM10 has a positive association with hourly temperature for most periods in Cleveland; hourly PM10 showed comparatively weaker relations to ozone and wind speed. It was found that for different periods of the year, different types of models were appropriate for explaining the behavior of the data. The transfer function procedure for PM10 and the ambient concentrations of ozone, carbon monoxide, and nitrogen dioxide revealed the important lags of each variable that need to be considered for forecasting PM10 in Cleveland, and showed significant correlation between PM10 and the other gas concentrations. The analysis showed that NO2 in Cleveland is strongly correlated with PM10, and that CO was more significantly correlated with PM10 during the period of April to October. A significant relation between next-day ozone and current PM10 concentrations was also revealed by this study.
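A sketch of the lag-identification step (Python): the sample cross-correlation between PM10 and a candidate input indicates which lags matter. This omits the prewhitening step of a full Box-Jenkins analysis, and the data below are synthetic:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Sample cross-correlation r_xy(k) for lags 0..max_lag; a significant
    value at lag k suggests y_{t-k} helps explain or forecast x_t."""
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    n = len(x)
    return [np.sum(x[k:] * y[:n - k]) / n for k in range(max_lag + 1)]

# Illustrative: which lags of NO2 co-vary with PM10?
rng = np.random.default_rng(1)
no2 = rng.normal(20, 5, 500)
pm10 = 0.8 * np.roll(no2, 2) + rng.normal(0, 2, 500)   # PM10 lags NO2 by 2 steps
r = cross_correlation(pm10, no2, max_lag=5)            # peak expected at k = 2
```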

 

Incorporation of Natural Ventilation in a Commercial HVAC System Using Temperature as a Comfort Parameter, Rahul S. Pendse (May, 2004)

 Researchers are assessing the possibility of using natural ventilation for part of the day to reduce the energy consumed by HVAC systems. There is a need to develop hybrid systems involving both conventional HVAC systems and natural ventilation as a viable solution to energy consumption issues. This thesis is an effort to develop a hybrid HVAC system with natural night ventilation serving as an energy-efficient component of the system. The research is targeted at commercial buildings, specifically a manufacturing plant (e.g., a metal fabrication plant) and a small office located at the plant. Night cooling times are calculated on a monthly basis for the manufacturing plant and for the office separately. The first step is the development of a model for calculating the indoor temperature after night cooling, with the aim of using internal room temperature as a comfort index for the occupants. Accordingly, models are developed for calculating the final indoor temperature by incorporating the heat gains and losses observed in a plant and in an office building unit separately. The effect of changes in moisture content on indoor temperature at the design relative humidity has been taken into consideration, and the design relative humidity is assumed to remain constant throughout the cooling period. Both models involve a complex equation for the final indoor temperature, which makes the calculation time-consuming. To facilitate the calculations, a spreadsheet tool composed of five sheets is used; inputs concerning wall details, window details, lighting details, motor details, heat emitted during different human activities, and outdoor temperature calculations are obtained from the ASHRAE Handbook of Fundamentals 2001, and moisture contents at different temperatures at the design relative humidity are obtained from Marks' Standard Handbook for Mechanical Engineers. The variation of temperature with respect to internal mass, air change rate, start time, change in relative humidity, duration, and change in load and shifts is studied for five cities, each belonging to one of the five climatic zones into which the Department of Energy divides the United States. Based on the pattern of variation of indoor temperature and on comfort ranges of indoor temperature developed from an ASHRAE study, recommendations are made for night cooling in these cities on a monthly basis. The recommendations come in two sets: a conservative set assuming a low tolerance for temperature fluctuation, and a liberal set for building occupants having a higher degree of adaptability to varying indoor temperatures. In warm regions like Texas, night cooling can be carried out for almost all months of the year. In a city like Raleigh, NC, it is possible to reduce usage of conventional systems in a manufacturing plant by approximately 1500 hours per year and in an office building by approximately 1400 hours per year. In a comparatively colder city like Minneapolis, MN, the corresponding reductions are approximately 1300 hours per year for a manufacturing plant and 1200 hours per year for an office building.
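As a rough illustration of the kind of heat balance such a model steps through (the thesis's version also tracks humidity, solar gains, and equipment loads), a lumped-capacitance sketch in Python; every parameter value below is hypothetical:

```python
import numpy as np

def night_cooling_temperature(t_in0, t_out, ach, volume, ua, q_internal_w,
                              thermal_mass_j_per_k, dt_s=600.0):
    """March the indoor temperature through the night with a lumped heat
    balance: thermal_mass * dT/dt = UA*(Tout - Tin) + vent*(Tout - Tin)
    + internal gains. `t_out` is a sequence of outdoor temperatures
    (deg C) at dt_s spacing."""
    rho_cp = 1.2 * 1005.0                  # air density * specific heat, J/(m^3 K)
    vent = ach / 3600.0 * volume * rho_cp  # ventilation conductance, W/K
    t_in, history = t_in0, [t_in0]
    for to in t_out:
        q = (ua + vent) * (to - t_in) + q_internal_w
        t_in += q * dt_s / thermal_mass_j_per_k
        history.append(t_in)
    return np.array(history)

# Illustrative 8-hour night (10 p.m. to 6 a.m.), 10-minute steps.
hours = np.linspace(0, 8, 48)
t_out = 24.0 - 4.0 * hours / 8.0           # outdoor air cooling from 24 to 20 deg C
profile = night_cooling_temperature(t_in0=28.0, t_out=t_out, ach=6.0,
                                    volume=3000.0, ua=800.0,
                                    q_internal_w=2000.0,
                                    thermal_mass_j_per_k=5.0e8)
```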

 

Comparison, Evaluation and Use of AERMOD Model for Estimating Ambient Air Concentrations of Sulfur Dioxide, Nitrogen Dioxide and Particulate Matter for Lucas County, Siva Sailaja Jampana (May 2004)

The AERMOD model is evaluated using data from an emission inventory of Lucas County, Ohio, for the year 1990, which included actual air pollutant emissions of sulfur dioxide, oxides of nitrogen, and particulates. AERMOD is further used to predict 3-hr, monthly, quarterly, and annual averages of sulfur dioxide concentrations. For the 3-hr averaging period evaluation for SO2, the data have been classified into stable and convective groups based on the Monin-Obukhov length, a stability parameter. Uncertainties associated with the model predictions are estimated using the bootstrap resampling method. Confidence intervals on the fractional bias (FB), normalized mean square error (NMSE), geometric mean bias, and geometric variance are calculated for each model and for the differences between models. NMSE-versus-FB plots are drawn to determine whether the model overpredicts or underpredicts. AERMOD did not perform well in predicting the 3-hr and 24-hr average concentrations for the datasets used; the model tended to underpredict in both the stable and convective cases, although there were equal numbers of overpredictions in the convective case.
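A minimal sketch of the bootstrap step (Python), assuming the common Chang-and-Hanna-style definition of fractional bias; the thesis's exact definitions and resampling details may differ:

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2*(mean_obs - mean_pred) / (mean_obs + mean_pred); sign
    conventions vary between studies, so check before comparing."""
    return 2.0 * (np.mean(obs) - np.mean(pred)) / (np.mean(obs) + np.mean(pred))

def bootstrap_ci(obs, pred, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a paired performance
    statistic: resample (obs, pred) pairs with replacement."""
    rng = np.random.default_rng(seed)
    obs, pred = np.asarray(obs), np.asarray(pred)
    n = len(obs)
    vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # indices sampled with replacement
        vals.append(stat(obs[idx], pred[idx]))
    return np.percentile(vals, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Illustrative 95% interval on FB for a hypothetical set of 3-hr averages.
obs = np.random.default_rng(2).lognormal(3.0, 0.5, 120)
pred = 0.8 * obs * np.random.default_rng(3).lognormal(0.0, 0.3, 120)
lo, hi = bootstrap_ci(obs, pred, fractional_bias)
```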

 

Evaluation of Regression Models for Forecasting Daily Maximum Carbon Monoxide Concentrations in Hamilton County, OH and Development of an Online, Web-Based Forecasting Tool, Gopi Krishna Manne (December, 2002)

Many state and local authorities have started issuing high pollutant level alerts to protect the public from adverse health effects. It is essential that accurate predictions of pollutant levels be made so that the authorities, industry, and the public can be cautioned to take appropriate measures. In order to make such predictions, accurate forecasting models need to be developed.

This study involved the development and statistical evaluation of regression-type forecasting models to predict CO concentrations in Hamilton County, Ohio. Daily maximum CO concentration data collected between 1998 and 2001 at a monitoring station in Cincinnati, Ohio, were examined. Five different regression methods were used for model development. The data were split into four seasons, and ten forecasting models were developed for each season. The models were then ranked using R-squared, statistical parameters (FB, NMSE, and FA2), and forecasting skill (FC) computed on 2001 data. Based on these rankings, the four best-performing models were chosen for each season and further evaluated using confidence limits analysis.
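For reference, the three statistical parameters are straightforward to compute. A sketch (Python) using the usual definitions, which may differ in detail from those in the thesis (its forecasting-skill measure FC is not reproduced here):

```python
import numpy as np

def evaluate(obs, pred):
    """Common air-quality model evaluation statistics in their usual
    (Chang-and-Hanna-style) forms."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    fa2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))  # assumes obs > 0
    return {"FB": fb, "NMSE": nmse, "FA2": fa2}

# Example with hypothetical observed/predicted daily maxima (ppm):
obs = [3.1, 4.0, 2.2, 5.6, 3.8]
pred = [2.8, 4.4, 2.0, 4.9, 3.5]
print(evaluate(obs, pred))
```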

It was found that the models developed using the backward elimination method and the robust regression method predicted daily maximum CO concentrations better than those developed using stepwise regression, forward selection, and multiple regression. An interesting finding is that the predictions of the four best-performing summer-season models are not statistically significantly different from each other. The forecasting results of this study support the claim that R-squared is not a reliable indicator of model performance.

Once the best-performing model for each season was chosen, an online, Web-based forecasting tool was built around it. This tool can be easily accessed through the World Wide Web (aprg.utoledo.edu).

 

Development of Graphical User Interface for Vehicle Emission Modeling Software - MOBILE5a, Shashi Makkapaty (August 2002)

Clean air is an important part of a healthy environment. With the passage of the Clean Air Act Amendments (CAAA) of 1990, a renewed effort to properly account for the emissions characteristics of on-road motor vehicles was initiated. As part of this effort, the U.S. Environmental Protection Agency's (EPA's) motor vehicle emission factor models, MOBILE4 and subsequently MOBILE5 (programmed in FORTRAN by EPA), were developed. The updated version of MOBILE5, MOBILE5a, was released on March 26, 1993. Using national inventory data and local data provided by the user, the software estimates hydrocarbon (HC), carbon monoxide (CO), and oxides of nitrogen (NOx) emission factors for gasoline-fueled and diesel highway motor vehicles of eight individual types in two regions (low- and high-altitude) of the country.

Through this project, an attempt has been made to develop a Graphical User Interface (GUI) for the MOBILE5a model, to make executing it easier, faster, and more interactive, and to overcome difficulties in data input, formatting, and interpretation.

Generating the emission factors requires a formatted input file, and the input process has been divided into three sections. The GUI developed in this project simulates the processes of creating the formatted input file and generating the output file (emission factors): it reads input data from the user, generates the formatted input file, and produces the corresponding output file. The GUI was created using Visual Basic 6.0 and acts as an interface between the user and MOBILE5a. The user is prompted and guided throughout the data input process, and the GUI creates the input and output files without the user ever having to run the MOBILE5a.exe file directly. Interactive input screens and prompts for input data greatly increase the usability of the model.
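The back-end pattern such a GUI follows can be sketched in a few lines (Python here for brevity; the thesis used Visual Basic). The file names and invocation below are assumptions for illustration, not MOBILE5a's documented interface:

```python
import subprocess
from pathlib import Path

def run_mobile5a(input_lines, workdir="."):
    """Sketch of what a GUI back end must do: write the formatted input
    file the FORTRAN model expects, invoke the executable, and return the
    output text. Names and paths here are hypothetical."""
    workdir = Path(workdir)
    (workdir / "M5INPUT.TXT").write_text("\n".join(input_lines))
    subprocess.run(["MOBILE5A.EXE"], cwd=workdir, check=True)
    return (workdir / "M5OUTPUT.TXT").read_text()
```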

 The reliability and performance of the GUI were evaluated by running the model with two comprehensive input files; the results matched, in both format and content, the output generated through the conventional procedure of running the model in DOS mode.


Determination of Night Time Cooling Hours for an Energy Efficient Hybrid HVAC System, Sandys Thomas (August 2002)

    Contemporary research points to hybrid systems involving both conventional HVAC systems and natural ventilation as a viable solution to energy consumption issues.  This thesis is an effort to develop a hybrid HVAC system with natural night ventilation serving as the natural component of the system. Based on the research, night cooling times have been developed on a monthly basis.

    The first step is the development of a model for the calculation of indoor temperature after night cooling. This is done with the aim of using internal room temperature as an index of the occupants' thermal comfort. Accordingly, a model is developed for the final temperature by incorporating the heat gains and losses observed in a building unit. The model requires extensive, time-consuming calculations for solar radiance and, consequently, for the resulting heat gain components.

     To facilitate the calculations, a spreadsheet tool composed of four sheets is used, wherein inputs concerning wall details, window details, and outdoor temperature calculations are obtained from the ASHRAE Handbook of Fundamentals 2001.

    The variation of temperature with respect to internal mass, air change rate, start time, end time, and duration is studied for five cities, each belonging to one of the five climatic zones into which the Department of Energy divides the United States.

    Based on the pattern of variation of indoor temperature and on comfort ranges of indoor temperature developed from an ASHRAE study, recommendations are made for night cooling in these cities on a monthly basis. The recommendations come in two sets: a conservative set assuming a low tolerance for temperature fluctuation, and a liberal set for building occupants having a higher degree of adaptability to varying indoor temperatures. During peak summer months, natural ventilation can be used for periods of 12 hours or more, depending on the climate of the region in question. In warm regions like Texas, night cooling can be carried out from 7 p.m. to 8 a.m. for the months of May, June, and July. In a city like Denver, CO, it is possible to reduce usage of conventional systems by 1110 hours per year.


Development of Forecasting Models for Predicting Maximum Daily PM10 Concentration Near Cincinnati, Ohio, John Coutinho (August, 2002)

Ambient monitored PM10 (particulate matter larger than 2.5 micrometers but less than or equal to 10 micrometers) data at Cincinnati, Ohio, are analyzed to forecast critical atmospheric pollution events caused by particulate matter (specifically PM10) through the application of Artificial Neural Networks (ANN) and linear regression models. The two types of Artificial Neural Network models developed are the multi-layer perceptron (MLP) model and the radial basis function network (RBFN) model. The models were developed and validated using data from 1995-1999, and the evaluations of the models were carried out using the year 2000 data.

The typical inputs for the developed models are maximum daily temperature and maximum daily ozone concentration for the months of April to October. Linear regression analyses showed that PM10 has a positive association with daily maximum temperatures and a negative, less significant association with daily maximum ozone concentration.

Overall, it was found that the ANN models performed better than the linear regression models. All the developed models met the normalized mean square error (NMSE), factor of two (Fa2), and fractional bias (FB) criteria discussed in model evaluation studies. The RBFN models have a positive value of FB and hence a tendency to underpredict; the linear regression models mainly have a negative value of FB and thus show a tendency to overpredict. The MLP models have NMSE equal to zero in all cases considered, whereas the linear regression and RBFN approaches each have one model for which the NMSE is not equal to zero. Model development resulted in Fa2 equal to one except for the linear regression model in which the temperature is less than or equal to 40°F. The forecasting skills of the developed models vary from 57% to 96%.
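A minimal illustration of the MLP approach (Python with scikit-learn); the architecture, software, and data below are illustrative, not those of the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Predict daily maximum PM10 from daily maximum temperature and ozone,
# in the spirit of the thesis; the training data here are synthetic.
rng = np.random.default_rng(4)
X = np.column_stack([rng.uniform(40, 95, 400),      # max temperature, deg F
                     rng.uniform(20, 120, 400)])    # max ozone, ppb
pm10 = 0.5 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 5, 400)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:300], pm10[:300])          # train on the earlier-years analogue
predictions = model.predict(X[300:])    # evaluate on a held-out year analogue
```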


A Pilot Study to Develop Forecasting Models for Predicting Hourly Ozone Concentration near Cincinnati, Ohio, Sunil Ojha (May 2002)

    High concentrations of ground-level ozone are among the air quality problems faced by major urban areas around the world. In the Northern Hemisphere, these high concentrations occur during the summer months, from May through September. Many urban areas have now started proclaiming episodes of high ozone concentration as an "Ozone Action Day" (OAD) to protect the public from adverse health effects. It is essential that accurate predictions of such an OAD be made so that the authorities, industry, and the public can be alerted to take appropriate measures. To make such predictions, accurate forecasting systems need to be developed.

    This research involved the development of empirical forecasting models for predicting hourly ozone concentrations near Cincinnati, Ohio. To this end, the study examined the hourly ozone data collected between 1995 and 1999 at an ozone monitoring station near Cincinnati, Ohio. Two different models, the KZ model and the NN model, were developed. The KZ model was developed by statistical/time series analysis of the existing data, whereas the NN model was developed using an artificial neural network. The performance of the models was determined using four evaluation parameters, and the evaluation results are presented for each month, for the whole ozone season, and for the high ozone concentration period.

    It was found that the NN model performed better than the KZ model over the entire ozone season. However, a detailed analysis indicated that the KZ model was better for high ozone concentration periods and peak predictions, while the NN model's predictions were better for periods of lower ozone concentrations. The findings indicate that a single model built with either method may not predict hourly ozone concentrations well across all conditions. Therefore, a composite forecasting system combining the NN and KZ models for different times of the day is suggested for reliable hourly ozone predictions.


Development of Web Screening Interface for Estimation of Aerosol Deposition in Human Respiratory Tract and Online Training Modules, Vijay Cinnakonda (December, 2001)

Particulate matter is ubiquitous in our world. We are constantly exposed to particulates, fibers, and other anthropogenic substances day in and day out, and this threat is even more pronounced considering the amount of time a person spends indoors: in the U.S., the average time spent indoors is 85-90% of the day. Considerable time is also spent in environments that involve overexposure to particulate matter (e.g., vehicles in traffic).

A large percentage of the particulates that we breathe in is deposited in various parts of the respiratory tract, leading to pulmonary and respiratory tract diseases such as chronic obstructive pulmonary disease (COPD) and asthma.

Awareness of this issue is low: few of us are conscious of the amount of particulates we breathe in or of their consequences. A web screening interface to estimate particulate deposition in the human respiratory tract was therefore developed, primarily to identify and quantify the risk resulting from such deposits.

The screening interface evaluates the deposition fraction of the inhaled aerosol based on empirical formulae describing aerosol deposition in humans. To establish the input parameters required by the model, an interface to calculate the statistical diameters of the aerosols based on the lognormal distribution was also developed.
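For a lognormal aerosol, those statistical diameters follow from the standard Hatch-Choate relations. A sketch (Python); the interface's own parameter set may differ:

```python
import numpy as np

def hatch_choate(cmd, gsd):
    """Hatch-Choate conversions for a lognormal aerosol: from the count
    median diameter (cmd) and geometric standard deviation (gsd), other
    statistical diameters follow as d = cmd * exp(b * ln(gsd)^2), e.g.
    b = 2 for the surface median and b = 3 for the mass median diameter."""
    ln2 = np.log(gsd) ** 2
    return {
        "count_median": cmd,
        "diameter_of_average_mass": cmd * np.exp(1.5 * ln2),
        "surface_median": cmd * np.exp(2.0 * ln2),
        "mass_median": cmd * np.exp(3.0 * ln2),
    }

# Example: CMD = 0.5 um, GSD = 2.0 gives a mass median diameter of ~2.1 um.
print(hatch_choate(0.5, 2.0))
```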

Although the model employed in the predictions forgoes some important factors, such as the body's clearance mechanisms, the asymmetric shape of the lungs, and the time of exposure, it is hoped that the web interface will serve as an educational tool for understanding and emphasizing the risk to human health from particulate matter deposition. To make it accessible as such a tool, the screening interface is available over the Internet on the University of Toledo's Air Pollution Research site.

The project also includes improvements to and restructuring of the web contents of the Air Pollution Engineering I site, adding an online tutorial section with an easily navigable layout in addition to the quiz applet.


Uncertainty Analysis for Various Fire and Explosion Models and the Calculation of Flammable Mass for Dense Gases Under Downwash Conditions, by Vivek Bhat (May, 2001)

    Accidents involving fire have occurred since the inception of fire and its subsequent use by mankind. Though utmost care is taken in designing industrial facilities, accidents may still occur, sometimes leading to death, serious injury, damage to facilities, loss of production, and damage to reputation in the community. An uncertainty analysis was performed on the input parameters of models available for BLEVEs, flash fires, vapor cloud explosions, and pool fires. The sensitivity analysis was carried out with the help of Crystal Ball® 2000. In the point source model and the solid flame model for BLEVEs, the atmospheric transmissivity, τa, is observed to be significant. The mass of fuel, mf, is significant in the point source model but not in the solid flame model.
    The second part of the thesis deals with flammable mass calculations for dense gases under building downwash conditions, which are pivotal to quantitative risk analysis (QRA), risk management plans (RMPs), and emergency response planning. The flammable mass in the cloud is the total mass released during the interval in which the cloud profile has diluted below the upper flammability limit but remains above the lower flammability limit. A simple model was suggested and applied to releases of three chemicals (ethane, butane, and isobutane) for square, squat, and tall buildings. It was observed that the stack height, wind speed, effluent strength, and type of chemical play an important role in computing the flammable mass and building downwash concentration for heavy gases. The model should be further refined to reflect the effect of other variables.
    The third section deals with developing a database for 172 chemicals listed as hazardous air pollutants by the EPA. The EPA lists 189 hazardous chemicals for which various physical and thermodynamic data are required by the user. To support this need, a database of 111 chemicals had been developed previously; it has now been extended to 172 chemicals. The user can choose the chemical of concern by searching for its CAS number in the database, which is maintained in MS Access 2000. The detailed database is given in Appendix A along with the equations used in the calculations.


Sensitivity Analysis, Evaluation and Design of Graphical User Interface of CAL3QHC Road Intersection Model, by Irfan Patel (May, 2001)

    In most developed cities, traffic is the most important source of air pollution, and pollution from major roads is also important in suburban and rural areas. Vehicular dispersion models are therefore essential computational tools in modern municipal and urban planning. Owing to their simplicity and direct applicability to estimates on a local scale, various versions of the Gaussian line source model have been used for dispersion evaluations from roads. The CAL3QHC (Version 2.0) model is the recommended CO intersection model in the Guideline on Air Quality Models (Revised). CAL3QHC enhances the CALINE3 model by calculating pollutant concentrations emitted by both idling and free-flow vehicles. CAL3QHC requires a considerable amount of input, which the user must provide through an input file; preparing this file is tedious and prone to manual errors. This thesis provides a systematic approach to developing the input file and executing the model, using the graphical user interface (GUI) that was developed. The user enters values in text boxes or selects from command prompts, thus avoiding mistakes in preparing the input file. The output can be displayed in report format, which can be routed to a printer or saved in a text file, and the results are also displayed in chart format. An online user manual is provided for quick reference and help. The software was tested on different personal computers with different processors and found to work well in all configurations.
    In 1992, the EPA evaluated eight intersection modeling techniques using a New York City database and emissions from the MOBILE 4.0 model. The CAL3QHC and CALINE4 models have been reevaluated here using the same database but the MOBILE 5 emissions model. The results indicate little difference in the performance of the two models. Both CAL3QHC and CALINE4 are found to be conservative models, as the average predicted values are less than the average observed values.
    Sensitivity analysis of the CAL3QHC model was carried out for a simple roadway intersection with two traffic lanes and two receptor locations, one at the corner of the intersection and the other at the midblock location, using the least rigorous approach and the American Society for Testing and Materials (ASTM) method. Using the results of the ASTM method, sensitivity indices were calculated for signal timing, traffic volume, number of traffic lanes, and wind speed; wind speed showed the maximum sensitivity index. The model needs no calibration, as none of these parameters showed type IV sensitivity in the ASTM results.
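One common dimensionless form of such an index is the ratio of the fractional output change to the fractional input change; a sketch (Python), noting that the ASTM procedure used in the thesis prescribes its own ranges and classifications:

```python
def sensitivity_index(output_base, output_perturbed, input_base, input_perturbed):
    """Dimensionless sensitivity index: fractional change in model output
    per fractional change in input (one common form, not necessarily the
    ASTM definition)."""
    d_out = (output_perturbed - output_base) / output_base
    d_in = (input_perturbed - input_base) / input_base
    return d_out / d_in

# E.g., halving wind speed (5 -> 2.5 m/s) doubling CO (4 -> 8 ppm) gives SI = -2.
print(sensitivity_index(4.0, 8.0, 5.0, 2.5))
```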


Development of a Graphical User Interface for and Dynamic Sensitivity Analysis of EPA’s PART5 Particulate Emission Factor Model, by Dhananjay Thakrey (May, 2001)

    PART5 is a model (programmed in FORTRAN) for calculating PM10 and PM2.5 emissions from vehicles. It calculates particulate emission factors for primary and secondary emissions from mobile sources (including exhaust), vehicle wear emissions (brake, tire, and engine), and fugitive dust emissions (pavement deterioration, windblown dust).
    The algorithms reflect the low-sulfur diesel regulation and the lower particulate standards, and incorporate the expansion of the heavy-duty diesel vehicle classification into five sub-categories and of light-duty cars and trucks into gasoline- and diesel-fueled categories. Mileage accumulation rates, vehicle travel mixes, diesel sale fractions, registration distributions, and catalyst fractions have been accounted for. Additionally, options to print gaseous SO2 and to calculate fugitive dust for paved and unpaved roads have been provided.
    The program contains default values for most of the data required to calculate all the emission factors. It can be run interactively, but that requires each line of input to be provided sequentially, which is cumbersome and makes this method of input the less preferred one.
Part of this thesis is the creation of associated software serving as a data input tool for the model. It provides a Graphical User Interface (GUI) to replace the less preferred method of data input (modifying input files using text editors and then running programs in DOS mode). The format of the generated output has also been systematically modified and presented.
    This thesis also conducts and presents the results of a dynamic sensitivity analysis of the PART5 model. The equations in PART5 that calculate emission factors for lead, sulfate, soluble organic fraction, remaining carbon portion, and fugitive dust were analyzed for sensitivity, as were the categories of brake wear, tire wear, fugitive dust, indirect sulfate, and gaseous sulfur dioxide.
    The results revealed that for gasoline vehicles, the lead particulate emission is primarily governed by the lead content of the fuel. The gaseous sulfur emission factors depend on the fraction of fuel sulfur directly converted to sulfate and on the fuel economy; indirect sulfate depends on the fraction of SO2 converted to sulfate. For diesel-fueled vehicles, fuel economy also plays a significant role in the sulfur emission factors, and indirect sulfate again depends on the fraction of SO2 converted to sulfate. The model shows negligible dependence on speed for these emission factors; however, speed determines the fugitive dust emission factors for unpaved roads, while the paved road emission factors are sensitive to road surface silt loading.


Development and Evaluation of a Software (APCD 1.0) for the Design of Air Pollution Control Equipment & Development of a User’s Guide for Ohio Radon Information Systems, by Nirav Shah (May, 2001) (Masters Project)

This project comprises two parts. Part A deals with the development and evaluation of software (APCD 1.0) for the design of air pollution control equipment, and Part B deals with the development of a user's guide for the Ohio Radon Information Systems.
Part A: Air pollution control equipment such as electrostatic precipitators (ESPs), venturi scrubbers, and cyclone separators is widely used in industry to reduce air pollution to acceptable levels.
The prime objective of this part of the project is to create software (APCD 1.0) for the design of air pollution control equipment. The latest innovations in the field have been incorporated into the design of the systems. The user selects the appropriate device from the options available; the input parameters required for the design vary from device to device.
For cyclone separators, the design is carried out using Lapple's equation. The user has the option of calculating the collection efficiency and/or the particle size; accordingly, the output generated by the software includes the number of vortex turns and the collection efficiency and/or particle size, depending on the user's choice.
For venturi scrubbers, the design can be carried out by using the Calvert model and/or the Hesketh model. The output generated in both the cases includes pressure drop, particle penetration and collection efficiency. The user can thus compare the design values generated in both the cases.
For electrostatic precipitators, the design is carried out using the Deutsch-Anderson equation. The user has the option of obtaining the efficiency and/or the design parameters for the device. For the first option, the output includes drift velocity, particle penetration, and collection efficiency; for the latter, it includes drift velocity, specific collecting area, length of the collecting electrode, and particle penetration.
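The two closed-form design relations named above (Lapple for cyclones, Deutsch-Anderson for ESPs) are compact enough to state directly. A sketch (Python), with illustrative numbers only:

```python
import numpy as np

def esp_efficiency(drift_velocity, plate_area, flow_rate):
    """Deutsch-Anderson equation for ESP collection efficiency:
    eta = 1 - exp(-w*A/Q), with drift velocity w (m/s), collecting
    area A (m^2), and volumetric flow rate Q (m^3/s)."""
    return 1.0 - np.exp(-drift_velocity * plate_area / flow_rate)

def cyclone_efficiency(dp, d50):
    """Lapple's empirical grade-efficiency curve for a cyclone:
    eta = 1 / (1 + (d50/dp)^2), where d50 is the cut diameter
    (the particle size collected with 50% efficiency)."""
    return 1.0 / (1.0 + (d50 / dp) ** 2)

# Illustrative numbers only: a 5000 m^2 ESP at 50 m^3/s with w = 0.1 m/s
# collects ~99.995%; a 10 um particle in a cyclone with d50 = 5 um, ~80%.
print(esp_efficiency(0.1, 5000.0, 50.0))
print(cyclone_efficiency(dp=10.0, d50=5.0))
```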
The software has been evaluated against various case studies and illustrative examples, and the results were compared with manual calculations to check its accuracy.
An extensive user’s guide, which explains the different features of the software, has been included in the appendix. The user’s guide shows the snapshots of the various forms developed in the software and gives a step-by-step procedure of using them. The entire Visual Basic code used for the software has been included in the appendix as well.
Part B: The objective of this part of the project is to develop a user's guide for the Ohio Radon Information Systems. The Ohio radon database helps in obtaining a statistical representation of the indoor radon distribution in individual zip code areas and counties of the state of Ohio. The user's guide explains the importance of radon gas, its occurrence, and different methods for measuring radon concentration. It includes a detailed description of topics such as the creation of the radon database, data collection, and loading the radon data. The SQL queries required for obtaining the radon statistics are also included in the guide, along with sample results.


Development of a Model to Study Transient Indoor Aerosol Dynamics under Displacement Ventilation, by Aachi Venkat Naveen (Dec, 2000)

Deposition is an important aspect of airborne particle behavior in indoor environments. An analytical model was developed to compute indoor concentrations, incorporating the effects of the particle sizes present in outdoor air and the velocity distribution of outdoor particles introduced into an indoor environment under displacement ventilation. A software package, IADYN 1.0, was developed using Java 2 to calculate the deposition velocity, distance traveled, and indoor concentration for different particle diameters.
Four different cases were considered to study the variation of total indoor concentration due to outdoor particles. In the first case, a variable deposition velocity with no indoor source was considered. The total indoor concentration obtained from this case was then compared with that obtained by assuming an average outdoor concentration of 13.4 μg/m³ and constant average deposition velocities of 0.0001 m/s, 0.0002 m/s, and 0.0003 m/s, again with no indoor source. Limited tests show that indoor concentrations are underestimated when a constant deposition velocity is used; the underestimate was in the range of 17% to 33%, depending on the chosen value of the deposition velocity.
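The constant-deposition-velocity comparison case can be sketched with a well-mixed mass balance (Python); this simplified form, with hypothetical room parameters, omits the penetration and size-resolved effects the thesis models:

```python
import numpy as np

def indoor_concentration(c_out, ach, dep_velocity, area, volume,
                         c0=0.0, dt_s=60.0):
    """Well-mixed indoor mass balance for particles of one size:
    dC/dt = ach*Cout - (ach + vd*A/V)*C, stepped explicitly. `c_out` is a
    sequence of outdoor concentrations at dt_s spacing; penetration and
    indoor sources are omitted for brevity."""
    lam = ach / 3600.0                    # air exchange rate, 1/s
    loss = lam + dep_velocity * area / volume
    c, history = c0, [c0]
    for co in c_out:
        c += (lam * co - loss * c) * dt_s
        history.append(c)
    return np.array(history)

# Constant outdoor level of 13.4 ug/m^3 (the average used in the thesis),
# deposition velocity 0.0002 m/s, 0.5 ACH, 50 m^2 of surfaces in 120 m^3.
c = indoor_concentration(np.full(720, 13.4), ach=0.5,
                         dep_velocity=0.0002, area=50.0, volume=120.0)
```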

The variation of total indoor concentration with time for different particles (mercury, oil, silica, and borosilicate glass) released from a hypothetical source was also studied. It was observed that the oil particles have the highest total indoor concentration at a given instant, and the mercury particles the lowest.

The study analyzed the variation of deposition velocity with time as well. It was observed that for particles with diameters in the range of 0.01 μm to 2 μm, the deposition velocity first decreases and then increases, whereas for particles with diameters in the range of 3 μm to 9 μm, the deposition velocity increases with time. No change was observed in the variation of deposition velocity with time even when the value of S was changed.


Development of a Database for Indoor Emission Factors, by Vijay Kumar Nangia (Aug, 2000) (Project)

Most of the materials we live around discharge gases, particles, or other substances that may be harmful to our health. In an indoor environment, humans are subjected to more harmful substances than outdoors, as indoor air quality is generally worse than outdoor air quality. Since we spend more than eighty percent of our lives in indoor environments, it is essential to know the kinds of emissions coming from the various substances and processes that are present or occur in our surroundings. For this reason, a database has been developed for the different chemicals emitted from different sources in an indoor environment.

The database, IAQ Emission Factors 1.0.0, covers indoor emission factors for different sources and chemicals. Thirty major sources have been considered, including cosmetics, ovens, furnaces, and toiletries. The thirty major sources are divided into one hundred and seventeen sub-sources; for example, under ovens, the sub-sources include new ovens, old ovens, and ovens at 350 degrees F. A total of fifty-four pollutants are considered, for which the emission factors have been collected. Note that not all of the one hundred and seventeen sub-sources emit all fifty-four pollutants.



Development and Evaluation of Software to Design the Industrial Ventilation System, by Amulya Duvvuru (Aug, 2000)

Computing the required ventilation volumetric flowrates is perhaps the critical step in the design of a ventilation system. These computations can be made effectively using a design sheet following the Velocity Pressure Method. Prior to the development of this software, the various design parameters were computed manually. This project develops software for designing industrial ventilation systems: the Velocity Pressure Method is used for the design sheet, and the required equations are incorporated in the code written to implement the flex grid (design sheet) in Visual Basic 6.0.

The software package performs the calculations in the following manner. The user first collects the available data on the various components of the ventilation system as input for the design sheet, then runs the program. A welcome screen prompts the user to enter the details of the plant, after which the program returns to the design sheet for the input data. The program then calculates design parameters such as flowrate, total pressure, and brake horsepower (BHP).

 The package, I V Design 1.0.0, was developed in Visual Basic 6.0, with the data and information for the calculations stored in a database created in MS Access. The results obtained from I V Design 1.0.0 were compared with the example cases in the Industrial Ventilation Manual, and the evaluation results are in good agreement with the expected values.
 
 



An Analysis of Mercury Data in the Great Lakes Area, by Kalyanakrishnan Venkataraman (2000).

A database on mercury concentrations in fish and wet mercury deposition has been developed for the Great Lakes area. Analysis of mercury concentrations in fish suggests that the predator fishes are more contaminated with mercury than the bottom feeders: for the period 1968 to 1998, the mean mercury concentration for bottom feeders was found to be in the range of 0.095 ppm to 0.266 ppm, while the predator fishes showed a range of 0.137 ppm to 0.429 ppm. The results further show that mercury concentrations in fish have decreased considerably since 1990, which may be due to the decline in mercury releases from anthropogenic sources: the mean mercury concentration in Great Lakes fish was 0.183 ppm from 1968 to 1990 and 0.156 ppm for the period 1991 to 1998.

The mercury concentration in fish per unit wet deposition (mg/m²/y) of mercury was analyzed for the predator and bottom feeder species. The predator species showed a range of 0.016 ppm to 0.047 ppm per unit wet deposition of mercury, with Walleye and Smallmouth Bass showing higher mercury concentrations than the other predator species. For bottom feeders, the mercury concentration was in the range of 0.005 ppm to 0.052 ppm per unit wet deposition (mg/m²/y). Although Lake Michigan received a higher wet deposition of mercury, some fish species in Lake Michigan show a lower mercury concentration than in other lakes. This indicates that the mercury concentration in fish depends on the bio-aquatic factors of each lake, irrespective of the deposition of mercury.

An attempt was made to estimate the mercury emission responsible for mercury deposition over the Great Lakes using a box model. It is tentatively estimated that the average mercury release, responsible for deposition over the Great Lakes at any given time is in the range of 0.001 to 0.006 tons depending on the type of mercury release assumed.

A further study estimated the bio-concentration factor (BCF) of mercury in fish from the mercury concentration in rain. A dilution factor of 0.025 was used to account for the mixing of mercury from rainfall into lake water. The bio-concentration factors for predator fishes were in the range of 400,000 to 1,000,000 l/kg, generally higher than the range of 100,000 to 1,000,000 l/kg for bottom feeders.
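The arithmetic behind such a factor is a single ratio; a sketch (Python) with hypothetical input values chosen to land within the ranges quoted above:

```python
# Bio-concentration factor: fish concentration divided by the lake-water
# concentration inferred from rain, using the 0.025 dilution factor above.
# The rain and fish values here are illustrative, not data from the thesis.
c_fish_ppm = 0.2                    # mercury in fish, ppm = mg/kg
c_rain_mg_per_l = 2.0e-5            # mercury in rain, mg/l (hypothetical)
c_lake = c_rain_mg_per_l * 0.025    # diluted lake-water concentration, mg/l
bcf = c_fish_ppm / c_lake           # l/kg
print(bcf)                          # -> 400,000 l/kg, low end of predator range
```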

The mercury emission and bio-concentration factor estimates in this thesis should be used with caution, as the parameters used are subject to assumptions and may vary.



Estimation of Concentrations within the Human Body due to Exposure to Fluctuating Atmospheric Pollutant Concentrations, by Taramati Shenoy (1999).

Air pollution and pharmacokinetics were explored in conjunction with environmental health to develop a single-compartment pharmacokinetic software model. The model calculates the effective pollutant concentration in a human body, as well as the external and internal peak concentrations, for every instant at which the external atmospheric pollutant concentration fluctuates. Each calculation is based on the assumption that the user-entered concentration value is the average over the user-specified time interval. The model thus accounts for short-term fluctuations, as opposed to using mean concentrations over time intervals of one hour or more. Three human response parameters are applied: the uptake time constant, the elimination rate constant, and the saturation concentration.
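A minimal sketch of such a compartment model (Python), assuming linear uptake and first-order elimination with a saturation ceiling; the thesis's equations and parameter definitions may differ in detail:

```python
import numpy as np

def body_concentration(c_ext, k_uptake, k_elim, c_sat, dt_h=1.0):
    """Single-compartment sketch: uptake proportional to the external
    concentration, first-order elimination, and a saturation ceiling.
    `c_ext` is a sequence of interval-average external concentrations."""
    c, history = 0.0, [0.0]
    for ce in c_ext:
        c += (k_uptake * ce - k_elim * c) * dt_h
        c = min(c, c_sat)            # body burden cannot exceed saturation
        history.append(c)
    return np.array(history)

# Fluctuating 24-hour exposure pattern with hourly averages (hypothetical).
rng = np.random.default_rng(5)
exposure = rng.uniform(10, 80, 24)   # external concentration, ug/m^3
internal = body_concentration(exposure, k_uptake=0.2, k_elim=0.3, c_sat=40.0)
```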
The software, BODYCON 1.0, was developed in Visual Basic 5.0, and the supporting database was developed in MS Access. The tables and graphs are generated by the Visual Basic application itself, eliminating any dependency on MS Excel and making BODYCON 1.0 a stand-alone application.
Four case studies covering different pollutant scenarios were considered, and the results for total sampling periods of 24 hours with varying individual time intervals were tabulated and plotted. The effective body concentrations were found to be less than the external pollutant concentrations, as expected.
A sensitivity analysis was performed by maintaining a fixed external atmospheric concentration pattern and observing the effect on the effective body concentration and on the external and internal peak concentration values as each of the human response parameters assumed different values. The analysis showed that the model was insensitive to the uptake time constant, the elimination rate constant, and higher values of the saturation concentration; it was sensitive, however, to lower values of the saturation concentration.
These results indicate that a full-fledged pharmacokinetic model, incorporating all possible conditions, catering to all population groups, and considering all exposure pathways, could be developed with further research using the model developed in this thesis as a base.


Development and Evaluation of Software for Industrial Accidents Involving Toxic Air Pollutants, by Shashank Sharma (1999)

This thesis deals with the development of software for calculating blast effects and emission rates for toxic chemicals in commonly occurring industrial accidents. The two main causes of loss of life and damage to property in industry are accidental releases of toxic chemicals and fires involving flammable liquids and gases. The most widespread and common scenarios are the Boiling Liquid Expanding Vapor Explosion (BLEVE), flash fires, vapor cloud explosions, and accidental releases of toxic chemicals.

 The software gives the user the flexibility to choose from the four accident scenarios above. For BLEVEs, two methods (the Solid Flame Method and the Point Source Method) are used to obtain the results, with the solid flame model being the more realistic of the two. The user inputs the mass of the chemical and its CAS number in both models, and the software reports the radiation received over particular ranges of distance from the source. The software also calculates the fireball diameter and the duration of the fireball.

 For flash fires, the user inputs the mass of the chemical and the wind speed. The software then calculates the flame height and the radiative heat flux over particular ranges of distance from the source point.

For vapor cloud explosions, the user can choose between two methods: the conventional TNT Equivalency method and the Multienergy method. Here the user inputs the required values, such as the chemical type and the mass of the chemical. The software calculates the blast effects, i.e., peak overpressures, at a range of distances from the point source.
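The TNT equivalency calculation reduces to an equivalent charge mass and a scaled distance. A sketch (Python), with typical literature values that may differ from those coded in the thesis:

```python
def tnt_equivalent_mass(fuel_mass_kg, heat_of_combustion_j_per_kg,
                        yield_factor=0.03):
    """Conventional TNT equivalency: W = eta * m * dHc / E_TNT, with the
    TNT blast energy taken as ~4.68 MJ/kg and an explosion yield factor
    (often a few percent for vapor clouds; values are case-specific)."""
    E_TNT = 4.68e6                    # J per kg of TNT (commonly used value)
    return yield_factor * fuel_mass_kg * heat_of_combustion_j_per_kg / E_TNT

def scaled_distance(r_m, w_tnt_kg):
    """Hopkinson-scaled distance Z = R / W^(1/3) (m/kg^(1/3)); the peak
    overpressure at Z is then read from a standard TNT blast curve."""
    return r_m / w_tnt_kg ** (1.0 / 3.0)

# Example: 1000 kg of propane (dHc ~ 46.3 MJ/kg) at 3% yield.
w = tnt_equivalent_mass(1000.0, 46.3e6)          # ~297 kg TNT equivalent
z = scaled_distance(200.0, w)                    # Z at 200 m from the source
```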

 For accidental releases, the user inputs the storage conditions. The software then calculates the initial release conditions and asks the user for the additional data required for any one of the eight release types. The software then finds the various release parameters, such as emission rate, emitted phase, release density, release diameter, and release temperature, and also checks the dense gas criteria. These release parameters are useful in the selection and calculation of dispersion models and give a fair idea of on-site post-accident conditions.

 One important feature of this software is that it maintains a database of around 111 toxic chemicals from which the user can choose for calculations. The physical and thermodynamic properties included are molecular weight, boiling point, critical temperature, latent heat of vaporization, liquid density, heat capacity at constant pressure, heat capacity at constant volume, liquid heat capacity, ratio of the heat capacities, vapor pressure, and heat of combustion.

 The software was verified against the real-world problems given in Chapter 5, which it handled well. Another set of examples was applied to the software, and the results, given in Chapter 6, were quite satisfactory.


  Development of a Software for Estimation of Pesticide Concentrations within the Human Body due to Exposure to Varying Atmospheric Concentrations using Pharmacokinetic Modeling with Complete Absorption, by Aniket Kulkarni (1999)

Single- and two-compartment pharmacokinetic models were developed to estimate pesticide concentrations within the human body, considering only the inhalation route of exposure. The effect of variations in the outdoor pesticide concentration pattern was then incorporated into both models. In both models the pesticide was considered to be completely absorbed into the body after inhalation; therefore, the results should be used cautiously. A software package, PESTCON 1.0, was developed using Visual Basic 5.0.

A sensitivity analysis was performed on the models. The effect of pesticides having different half-lives on the single-compartment model was studied using PESTCON 1.0, keeping the concentration pattern constant; the effect of concentration fluctuation was then studied keeping everything except the input concentration pattern constant. A similar approach was adopted for the two-compartment model, for which the effects of the concentration fraction, x, and the hybrid rate constants α and β were studied. The results for the single-compartment model were influenced by the half-lives of the different pesticides; however, the two-compartment model was insensitive to the hybrid rate constants and the concentration fraction.

Seven case studies were considered to illustrate the use of the software. Different hypothetical scenarios were constructed to simulate realistic conditions, and the models were run for each. The results of these case studies did not vary significantly from the expected outcomes.


  Development of Internet Based ISO 14000 Environmental Management System, by Ronak Desai (1999)

This thesis aims at the development of an Internet-based ISO 14000 environmental management system (EMS). ISO 14000 is a set of standards established by the International Organization for Standardization (ISO) to help organizations meet their corporate responsibilities. These standards integrate environmental considerations into management and decision-making structures in a systematic and organized way. They consist of a set of documents that define the key elements of a management system that will help an organization address the environmental issues it faces. The section ISO 14001 of the ISO 14000 series details the core requirements of an EMS that, when implemented, will allow an organization to identify and manage its environmental responsibilities.

In this thesis, the use of the Internet is advocated as a common platform to access and maintain the documents required for implementing ISO 14001. Using the Internet avoids the paper-intensive accumulation of documents that typifies many management initiatives. A software tool is also developed to carry out a gap analysis for an organization before implementing the EMS, and three new performance indicators are introduced to monitor the performance of the EMS.
Prior to the development of the EMS, the requirements of the standards were examined to understand the structure of an EMS as defined in the standards. After this initial examination, a method for implementing an EMS was developed that should be adopted for setting up a fully functional EMS. To help in implementing the EMS, a software package called the "Gap Assessment Tool for ISO 14001" was developed using MS Excel and Visual Basic for Applications (VBA). This software provides a checklist for carrying out the initial review of the existing system before implementing the ISO 14000-based EMS.

 A manufacturing facility was then studied, and the various components of an EMS were developed for it. A web module was developed using HTML for the case under study, providing an instructional overview of the facility's EMS and making all of its procedures accessible over the Internet or an intranet; links to MS Word documents and MS Excel spreadsheets are provided on the web pages for detailed review. Another area of study is the introduction of three new performance indicators for evaluating an EMS over time: the waste generation rate (WGR), the waste generation rate index (WGRI), and the estimated total risk (P) from releases at the facility under study. These indicators are evaluated for the case under study.

 The highlights of this thesis are the development of the user-friendly gap assessment tool for the ISO 14001 EMS, the Internet-based paperless ISO 14001 EMS, and the performance indicators to evaluate the operation and suitability of the developed EMS.



Design and Development of a Software for Conventional Filters and Clean Rooms, by Balaji Ramaswamy (1999)

The objective of this thesis is to provide a single source of comprehensive design for both conventional filters and clean rooms. Conventional filters such as electrostatic precipitators (ESPs), wet scrubbers, and venturi scrubbers are widely used in industry to reduce air pollution to acceptable levels. Clean rooms are used to prevent the ingress of contaminants into a work area in order to ensure that the end product meets specific standards.

This software deals with the design of conventional filters and clean rooms based on the user's requirements. The latest innovations in both fields have been incorporated into the design of the systems. The user can select the appropriate unit process from the available menu. The input parameters for the conventional filters are:
     Total collecting surface area (As)
     Particle drift velocity (W)
     Volumetric flow rate (Q)

The user also has the option of selecting different conventional filters. For the design of electrostatic precipitators, the Deutsch equation (Heinsohn and Kabel, 1999) is used. The minimum efficiency of fabric filters is calculated by Lee and Liu's equation (1980). The Kuwabara hydrodynamic factor, which accounts for the alteration of the flow field as air flows around the fiber, is calculated using Kuwabara's equation. For the design of venturi scrubbers, the pressure drop is calculated using Hesketh's equation and the efficiency is calculated by Calvert's equation. For wet cyclonic scrubbers, the particle cut size at 50% efficiency is calculated from Lapple's empirical equation, and the maximum efficiency is also calculated using Lapple's equation.
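
As an illustration of the first of these relations, the Deutsch equation combines the three input parameters listed above; the sketch below is a minimal example, with numerical inputs chosen for illustration rather than taken from the thesis.

    import math

    def esp_efficiency(w_drift_m_s, area_m2, flow_m3_s):
        # Deutsch equation for ESP collection efficiency:
        # eta = 1 - exp(-W * As / Q)
        return 1.0 - math.exp(-w_drift_m_s * area_m2 / flow_m3_s)

    # Example: As = 5000 m^2, W = 0.1 m/s, Q = 50 m^3/s
    print(esp_efficiency(0.1, 5000.0, 50.0))   # -> about 0.99995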

For the design of clean rooms, Russel's mathematical model is used. The equation for calculating the number of HEPA filters was developed by the U.S. Air Force Technical Order 00-25-203 (Standards and Guidelines for the Design and Operation of Clean Rooms).
The user has to select the type of industry for which a clean room is to be designed, as well as the cleanliness condition, specified by the Class of the room (Class 100, 1,000, 10,000, 100,000, and above). Based on the Class the user selects, the software assigns the air changes per hour (ACH) from the standard values available for that particular class. The user also has the option of specifying an ACH as required. The length (L), breadth (B), and height (H) of the room (height is normally taken up to the false ceiling) are the basic inputs for the design calculations of a clean room. Based on the volumetric flow rate and the ACH, the number of HEPA filters is calculated.
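
A minimal sketch of this sizing calculation is shown below; the per-filter capacity used here is an assumed rating for illustration, since the actual values come from Technical Order 00-25-203 and manufacturer data.

    import math

    def hepa_filter_count(length_m, breadth_m, height_m, ach,
                          filter_capacity_m3_h=3000.0):
        # Required supply air flow from room volume and air changes per hour
        volume_m3 = length_m * breadth_m * height_m
        flow_m3_h = volume_m3 * ach
        # filter_capacity_m3_h is an assumed per-filter rating
        return math.ceil(flow_m3_h / filter_capacity_m3_h)

    # The class-dependent ACH would come from standard tables; 60 is illustrative
    print(hepa_filter_count(10.0, 8.0, 3.0, ach=60))   # -> 5 filters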

The user has the option of viewing the layout of the room with the filters in position and the positions of the return air risers, which ensure recirculation of air. The software also handles the design of the air handling unit and the pressure loss calculations at various points along the duct. This enables a complete design of clean rooms as well as conventional filters.


Net Truncation Error for Convective-Diffusion Equation due to Finite Difference Discretization, by Pavankumar Pakala (1999)

In the past, many efforts were made to find solutions to complex multi-dimensional problems using numerical methods. In the process of finding such solutions, the derivatives are discretized or approximate expressions are used. The accuracy of a numerical solution is influenced by a number of errors, two of which are the truncation error and the round-off error.

In this thesis, an attempt has been made to determine the net truncation error (T.E.) of the non-linear convective-diffusion (C-D) equation that is used in the field of air pollution for calculating the concentration of pollutants. The net truncation error was determined for three finite difference schemes. Expressions for the T.E. were developed for the 1-dimensional, 2-dimensional, and 3-dimensional steady-state C-D equations. Stable, neutral, and unstable atmospheric conditions were considered in calculating the error. For the 1-D C-D equation, two cases of velocity profiles were considered in calculating the T.E. The T.E. was expressed in mg/cc/sec.
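
For reference, such T.E. expressions arise from Taylor expansion of the difference formulas. For the 1-D steady equation $u\,dC/dx = K\,d^2C/dx^2$ with central differencing (a standard case; the thesis's three schemes may differ), the leading terms are

    $$\frac{C_{i+1}-C_{i-1}}{2\,\Delta x} = \left.\frac{dC}{dx}\right|_i + \frac{\Delta x^2}{6}\left.\frac{d^3C}{dx^3}\right|_i + O(\Delta x^4)$$

    $$\frac{C_{i+1}-2C_i+C_{i-1}}{\Delta x^2} = \left.\frac{d^2C}{dx^2}\right|_i + \frac{\Delta x^2}{12}\left.\frac{d^4C}{dx^4}\right|_i + O(\Delta x^4)$$

so the net truncation error of the discretized equation is

    $$\mathrm{T.E.} = \frac{u\,\Delta x^2}{6}\,\frac{d^3C}{dx^3} - \frac{K\,\Delta x^2}{12}\,\frac{d^4C}{dx^4} + O(\Delta x^4).$$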

Graphs of the T.E. were plotted against the downwind distance for the 1-D C-D equation, and against the crosswind distance for the 2-D and 3-D C-D equations. The following was observed from the graphs:

Results indicate that the equation used to calculate the concentration values for the 3-D C-D equation might be erroneous at shorter downwind and crosswind distances because of the high values of T.E. there.

From this it can be concluded that it would be better to develop a new set of finite difference equations for the 3-D non-linear C-D equation, which would give more accurate concentration values.


Update and Analysis of a Residential Radon Database for the State of Ohio, by Anupma Sud (1998)

The objective of this study was to update an existing indoor radon information system for Ohio with the intention of obtaining meaningful results concerning the indoor radon problem in the state. The database contains 80,436 indoor radon measurements from 1,440 zip codes in 88 counties, taken by commercial testing services, university researchers, and government agencies. The data include building construction parameters, epidemiological information for Ohio, and radon mitigation data, all arranged in separate tables.

A user-friendly, interactive graphical user interface, known as the Ohio Radon Information System (ORIS), was developed and incorporated to edit and update the database and generate statistics files. The statistics files provided by ORIS were used to produce maps showing the variations in radon concentrations across Ohio. A list of 28 "hot counties" was then prepared to target areas that need further investigation and public education on radon remedial measures.

An in-depth analysis of the radon data found that 44.8% of the 80,436 measurements had radon values greater than the EPA action level of 4 pCi/L. Basement radon measurements were found to be higher than first-floor concentrations by a factor of 1.4. In addition, the radon readings taken in crawl spaces were 1.6 times the basement readings. A comparison of the pre-1990 and post-1990 readings in Ohio counties indicated a similar distribution but a decreasing trend in radon concentrations.
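
Statistics of this kind are straightforward to reproduce from such a database; the sketch below uses hypothetical file and column names, since the actual ORIS schema is not documented here.

    import pandas as pd

    # Hypothetical file and column names, for illustration only
    df = pd.read_csv("ohio_radon.csv")   # columns: zip, county, floor, radon_pci_l

    pct_above = (df["radon_pci_l"] > 4.0).mean() * 100.0
    print(f"{pct_above:.1f}% of measurements exceed the 4 pCi/L action level")

    mean_by_floor = df.groupby("floor")["radon_pci_l"].mean()
    print("basement/first-floor ratio:",
          mean_by_floor["basement"] / mean_by_floor["first"])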

An attempt was also made to estimate the cancer risk from indoor radon exposure in the state of Ohio using currently available risk factors. Results obtained using five different models were compared with the observed radon-related lung cancer mortality in Ohio counties. The models produced varying results, which were attributed to the uncertainties associated with developing them and to differences in their underlying assumptions and the data used in their development.


Software development to estimate pressure effects due to vapor cloud explosions, Rajesh Anand, (1998)

This thesis deals with the development of software to estimate the pressure effects due to vapor cloud explosions. Two AIChE models, namely the conventional TNT Equivalency method and the Multienergy method, were used to develop the software. Prior to the development of the software, equations to estimate the pressure effects at various distances from the point of explosion were developed. These equations were then incorporated into the code, written in Visual Basic 5.0, behind the screens used to estimate the pressure effects. A number of real-life accident scenarios were considered to evaluate the software. The values of the side-on overpressures predicted by the software were compared to the values observed at the accident sites and also to the values obtained by manual calculation using the models in consideration. For the Flixborough accident site, the damage patterns observed at the site were in close agreement with the damage patterns predicted by the software. Similarly, the damage pattern values obtained by the software run for the Nypro accident case were in close agreement with the observed and the manually predicted damage patterns.
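
A minimal sketch of the TNT Equivalency step is given below; the yield factor and the input values are assumptions for illustration, not values from the thesis.

    def tnt_equivalent_mass(fuel_mass_kg, heat_of_combustion_kj_kg,
                            yield_factor=0.03):
        # W_TNT = eta * m_fuel * dHc / E_TNT; 4680 kJ/kg is a commonly
        # used blast energy for TNT, and the 3% yield is an assumed value
        return yield_factor * fuel_mass_kg * heat_of_combustion_kj_kg / 4680.0

    def scaled_distance(distance_m, w_tnt_kg):
        # Hopkinson-scaled distance, used to read side-on overpressure
        # from published TNT blast curves
        return distance_m / w_tnt_kg ** (1.0 / 3.0)

    w = tnt_equivalent_mass(30000.0, 46000.0)   # e.g., ~30 t of hydrocarbon vapor
    print(w, scaled_distance(500.0, w))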

The TNT Equivalency and Multienergy models gave considerably different results when applied to the hypothetical liquid hydrocarbon storage site case study. The TNT Equivalency method systematically predicts heavier blast effects than the Multienergy method. On the other hand, the outcomes of the two methods for the Flixborough vapor cloud explosion case study show relatively good agreement, particularly in the intermediate field. In both the near and far fields, the side-on peak overpressure results diverge. This divergence is indicative of the difference between the decay characteristics of TNT blasts and fuel-air blasts.

The models were also applied to three real-life accidents and four hypothetical cases. It was found that the damage produced by the blast in the three accidents was comparable to the observed damage patterns.


Uncertainty Analysis in Predicting Concentrations in Downwash Conditions for Passive and Heavy Gases, by Kumar Mantripragada (1998).

The uncertainty involved in predicting concentrations is a major factor that influences the choice of equations. In this study, an attempt has been made to determine the amount of uncertainty involved in predicting downwind centerline concentrations in building downwash cases for passive and heavy gases. Independent data sets were collected for passive and heavy gas conditions, and the concentrations estimated from the EPA equations were compared to the observed values. Uncertainty and sensitivity analyses were performed using Monte Carlo simulation to quantify the uncertainty in the estimated concentrations.

In the model development phase, a new model was developed, by extending the available EPA downwash models, to estimate the concentrations of heavy gases in building downwash cases. This new model was tested by comparing the estimated concentrations with observed values obtained from an American Petroleum Institute study. Uncertainty and sensitivity analyses were performed on this new model to determine its uncertainty in predicting concentrations. From the results of the uncertainty analysis on the Gaussian model, it can be inferred that the concentrations predicted by the Gaussian model were accurate in the case of passive gas dispersion in building downwash cases. From the results of the sensitivity analysis on the Gaussian model, the coefficient 'c' was found to have the greatest effect on the concentration values, whereas the coefficient 'b' was found to have the least effect.

From the results of the uncertainty analysis on the new model developed to predict heavy gas concentrations in building downwash cases, it was observed that the predicted concentrations were close to the observed concentrations from the American Petroleum Institute's data set; thus it can be inferred that this new model can be successfully used to predict heavy gas concentrations in building downwash cases. From the results of the sensitivity analysis on this new model, the coefficient 'c' was again found to have the greatest effect on the concentration values and the coefficient 'b' the least.
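
The sketch below illustrates the general Monte Carlo procedure behind such an uncertainty and sensitivity analysis; the coefficient form and the sampling distributions are assumptions for illustration, not the EPA downwash equations themselves.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(42)
    n = 10_000

    # Illustrative coefficient form sigma_z = a * x**b + c
    a = rng.normal(0.08, 0.008, n)
    b = rng.normal(0.90, 0.05, n)
    c = rng.normal(2.0, 0.3, n)

    Q, u, x = 10.0, 4.0, 500.0                  # g/s, m/s, m (assumed)
    sigma_y = 0.12 * x ** 0.9                   # held fixed for simplicity
    sigma_z = a * x ** b + c
    conc = Q / (np.pi * u * sigma_y * sigma_z)  # ground-level centerline form

    print(np.percentile(conc, [5, 50, 95]))     # uncertainty band
    # Spearman rank correlations as a crude sensitivity ranking
    for name, arr in (("a", a), ("b", b), ("c", c)):
        print(name, spearmanr(arr, conc)[0])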


Estimation of Risk Through Fish Pathway Due to Deposition of Mercury From an Urban Area on Water Bodies, Satish Sundaresan, (1997)

A literature review indicates that the risk posed by mercury to humans is primarily due to the consumption of fish. Therefore, exposure to methyl mercury through the fish pathway is incorporated into the traditional risk models for mercury from anthropogenic sources in an urban area. A distribution factor is used to divide the total elemental emission of mercury into particulate and gaseous forms. The Industrial Source Complex (ISCST3) model has been used to model the gaseous and particulate forms of mercury and obtain the mercury concentrations on the surface of the water. Both gaseous and particulate forms of mercury have been treated for dry and wet deposition. Meteorological data for 1991 are used for the concentration modeling. The modeling is performed for eight receptor points located in three important lakes near an urban area (Lake Erie, Lake St. Clair, and Lake Huron). The results obtained from the ISCST3 model are input into a modified California Air Pollution Control Officers Association (CAPCOA) methodology to calculate human exposure levels due to consumption of fish. The modification is based on a proposed linear model to determine the concentration of methyl mercury in fish from the mercury deposited on the lakes.

Gaseous deposition is found to predominate over particulate deposition. The results indicate that both wet and dry deposition of the gaseous and particulate forms of mercury decreased as the distance from the source increased. The exposure due to inhalation of mercury in the atmosphere over the lakes is small when compared with the AEL (Accessible Exposure Limits). The human exposure due to consumption of fish in varying amounts is also calculated; this value is compared with the reported levels of methyl mercury in humans from cause-and-effect studies. From this comparison one may speculate that the risk due to consumption of fish from certain parts of Lake Erie, Lake St. Clair, and Lake Huron is negligible, but that it may pose a human health hazard if more than one urban area is considered. This speculative result should be verified through a detailed study of the deposition of mercury from an urban area on water bodies and the subsequent fish pathway.


Development and Evaluation of Software for Emission Rate Modeling of Accidental Toxic Releases, by S. Vashisth (1997)

Modeling the source phenomenon for an accidental rupture or spill of hazardous material is perhaps the most critical step in accurately estimating the downwind air concentration resulting from an accidental release. This thesis provides a logical and systematic approach for determining the toxic release scenario and describes the subsequent development of a software package (ERate 1.0) to calculate the emission rate to be used as input to non-source-term dispersion models. The software package performs the analysis in the following manner. The user first determines the chemical of concern and provides some information on the containment and on the observable release criteria. The next step is to run the computer program. The program prompts the user for information on the state of the containment (e.g., temperature, pressure, or phase) and then performs calculations to identify the release class. A general approach obtained from documents published by the U.S. Environmental Protection Agency is followed in characterizing the release of hazardous air pollutants. The release class is defined based on the information provided by the user and the data available for the specific chemical. The program then calculates the input required to run hazardous release models. The parameters describing the release class are calculated based on the equations provided in the EPA's contingency and guidance documents.
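
The general shape of such release-class logic can be sketched as follows; the branch conditions are illustrative assumptions, not the actual ERate 1.0 rules, which follow the EPA guidance documents.

    def release_class(phase, storage_temp_k, boiling_point_k, pressurized):
        # Toy decision logic for classifying an accidental release
        # (illustrative assumptions only)
        if phase == "gas":
            return "gaseous release"
        if pressurized and storage_temp_k > boiling_point_k:
            return "two-phase flashing release"
        if storage_temp_k > boiling_point_k:
            return "boiling liquid pool"
        return "evaporating liquid pool"

    # A pressurized liquid stored above its normal boiling point
    # flashes on release
    print(release_class("liquid", 300.0, 240.0, pressurized=True))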

The emission rate computer model ERate 1.0 was created in Visual Basic, and the data and information for the analysis were stored in a database created in MS Access. The results obtained from ERate 1.0 were compared with the example cases covered in the EPA guidance document. The evaluation indicated that the emission rates obtained for the seven example cases using the software closely follow the expected results from the guidance document. The model was then applied to 16 different cases drawn from the literature, and the results of the model application are in good agreement with the expected results.


A Dynamic Sensitivity Analysis of the Probit Dose-Response Model, by F. D'Souza, (1997).

Quantification of the risk from a municipal solid waste incinerator often requires knowledge of the dispersion of the emitted toxic pollutants and of the effects of these pollutants on the exposed population. The Industrial Source Complex Short Term 3 model (ISCST3), a regulatory air model, is often used to predict the dispersion of these toxic pollutants. Probit analysis has advanced our ability to predict cancer risk as a function of the received dose.

The thesis combines both techniques, i.e., ISCST3 and probit analysis, in a theoretical study of the cancer risk due to a Municipal Solid Waste Incinerator (MSWI). The fictitious MSWI is assumed to exist in the Raleigh, NC area. The air contaminants of concern are metals (e.g., Ag, As, Be, Cd, Cr, Hg, Ni, Pb, Sb) and organic compounds (e.g., PCBs, polychlorinated dioxins/furans, PAHs). The MSWI risk assessment process includes both direct exposure pathways (air inhalation, incidental ingestion of soil) and indirect pathways (food chain exposures such as human consumption of produce, beef, fish, and milk). This thesis uses the proposed approach (ISCST3 and probit analysis) as well as the existing CAPCOA approach to estimate the cancer risk.

A dynamic sensitivity analysis of the probit dose-response model, using Crystal Ball®, is done for the two most dominant pathways for each pollutant, to study the effect of the various input parameters on the estimated cancer risk. The cancer risk estimate was found to be most sensitive to the median dose (ln D50).
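
For reference, the generic probit form can be evaluated as sketched below; the constants k1 and k2 are substance-specific, and the values shown are illustrative assumptions rather than those used in the thesis.

    from math import log
    from statistics import NormalDist

    def probit_response(dose, k1, k2):
        # Generic probit form: Y = k1 + k2 * ln(D), P = Phi(Y - 5);
        # at the median dose D50, Y = 5, so k1 = 5 - k2 * ln(D50)
        y = k1 + k2 * log(dose)
        return NormalDist().cdf(y - 5.0)

    # k1, k2 values below are illustrative only
    print(probit_response(100.0, k1=-8.0, k2=2.5))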

The thesis also contains a comparison between the ISCST3 model and the AERMIC model (AERMOD). The two models were tested for an MSWI situated in the Albany, NY area. The study showed that AERMOD predicts higher 1-hr and 3-hr ground level concentrations than ISCST3. In addition, AERMOD was analyzed for its sensitivity to various input parameters.


Evaluation, Sensitivity and Uncertainty Analysis of VISCREEN Model using a newly developed Graphical User Interface, by Madhumita Bhor, (1997)

Air quality simulation models have been the subject of extensive analysis to determine their performance under a variety of environmental and meteorological conditions. Many software packages are available to perform individual analyses of these models. The aim of this thesis is to develop a comprehensive software package that includes all three performance measures, i.e., evaluation, sensitivity, and uncertainty analysis of a model. A relational database in Oracle 7.1 was used to store the data and information from the analyses carried out on the model, and a graphical user interface tool was used to provide the model with user-friendly working screens. To test the performance of this application, the Visual Impact Screening Model (VISCREEN) was analyzed.

The VISCREEN model was evaluated for accuracy by comparing its predicted output with measured values. The model outputs are perceptibility and contrast; to compare them with the measured visibility, they were converted into a visibility range. The analysis was conducted for Toledo, Ohio, using 1990 data for the model input and the observed visibility. The model was evaluated using eight statistical parameters and various evaluation methods, with and without normalizing. Results indicated that the VISCREEN model overpredicts the severity of the optical parameters of the atmosphere, but the variation is acceptable, as VISCREEN is a conservative model.

Sensitivity analysis was performed to assess the impact of the variability of the inputs on the calibrated results. A thorough analysis was carried out using the ASTM standards. The input parameters analyzed were the particulate emission rate, the nitrogen emission rate, and the meteorological inputs wind speed and ozone concentration, for atmospheric stability conditions A, C, and E. The model did not exhibit Type II or Type IV sensitivity to any parameter for any stability; most variables showed Type III sensitivity. Based on these results, it was determined that the particulate emission rate and wind speed strongly influence the VISCREEN model's outputs. To verify the results of the ASTM method, another technique was applied to perform the sensitivity analysis, which uses a sensitivity index to determine the influence of changes in the input on the model results. Particulate emission showed the maximum positive sensitivity index, whereas wind speed showed the maximum negative index in most of the cases. The results of both methods were consistent.

The final analysis was the uncertainty analysis, which uses the most sensitive parameters to determine the uncertainty in the results of a model. Since particulate emissions and wind speed both proved to have a strong influence on the model's output, these two parameters were chosen for the analysis. Sky perceptibility was chosen from among the four model outputs, as it is the most consistent output. Observations for stability class C were used for the analysis. The model showed considerable uncertainty, which could arise from the fact that VISCREEN is a conservative model and hence makes several assumptions and takes default values to create a worst-case weather scenario.


Development of an Ozone Forecasting Model for Non-Attainment Areas in the State of Ohio, by S. Vedula (1997).

Ground level ozone is one of the six common air pollutants identified by the Environmental Protection Agency (EPA) as causing adverse effects on public health and the environment. High concentrations of ground level ozone occur during the summer months. The Ohio EPA spends a considerable amount of time and resources monitoring ozone concentrations and increasing public awareness in Ohio. The Ohio EPA declares ozone action days on very hot summer days, encouraging people to use public transportation, carpool, and avoid using lawn mowers, in order to reduce the buildup of ambient ozone.

This research involved the development of a model to forecast the natural log of the daily maximum ozone concentration as a function of the maximum surface temperature, for ozone non-attainment regions in Ohio. To this end, this research examined the hourly ozone data collected between 1990 and 1995 at six ozone monitors in three non-attainment regions in Ohio. The model was developed by statistical analysis of the existing data. Site-specific models were developed initially. The verification and evaluation of the performance of the model at each site was carried out by comparing the model with an independent data set collected from the site.
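
The form of such a site-specific model can be sketched as follows, using placeholder data rather than the 1990-1995 monitor records.

    import numpy as np

    # Placeholder observations (illustrative, not the monitor records)
    t_max = np.array([24.0, 27.0, 30.0, 33.0, 36.0])    # daily max temp, deg C
    o3max = np.array([55.0, 68.0, 82.0, 100.0, 121.0])  # daily max ozone, ppb

    b1, b0 = np.polyfit(t_max, np.log(o3max), 1)        # ln(O3) = b0 + b1 * T

    def forecast_o3(t):
        return float(np.exp(b0 + b1 * t))

    print(forecast_o3(34.0))          # forecast daily maximum ozone, ppb
    print(forecast_o3(34.0) > 100.0)  # simple exceedance check at 100 ppb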

These site-specific models were then investigated and a generalized statewide model was developed from them. The verification and evaluation of the performance of the statewide model was done by employing the same independent data sets used for the site-specific models. An exceedance model to predict the occurrence of ozone exceedances over 100 ppb has also been presented.

In four of the six data sets examined, the percentage error in the predictions of the site-specific models at the 10% level of confidence was within -15.35% and +15.86%. The percentage error in the predictions of the statewide model at the same level of confidence was between -14.04% and +15.75%. There were 25 cases of exceedances greater than 100 ppb in five of the six data sets studied, and the exceedance model successfully predicted 24 of these 25 ozone exceedances.


Determination of the Angle and Length of Flare using a Remote Sensing Technique, by Asmina Ali Khan, (1997), (Advisors: A. Kumar and B. McDonald).

The angle and length of a flare are affected by the meteorological conditions and the stability of the atmosphere, which in turn affect the ground level pollutant concentration. In this study, the angle and the length of a flare are evaluated under neutral and unstable atmospheric conditions using a remote sensing technique. Photographs of a flare stack at the Sun Oil refinery in downtown Toledo were taken, and the angle and length of the flare were computed from the photographs. The technique involves large magnification of the image compared to the original; the pictures are digitized by dividing them into frames to obtain the angle and length of the flare. It was found that the angle, or deflection, of the flame differed between the two atmospheric stability conditions and also differed from the standard of 45 degrees given by the U.S. Environmental Protection Agency (EPA) in the SCREEN 3 guideline model. The observed angle of the flare also differed from the values calculated by the Olavo Cunha-Leite procedure and the API Brzustowski technique. The length of the flare likewise differed between the two stability classes, and the observed lengths differed from the lengths calculated using the API and EPA equations. The flare shape for the two atmospheric stability classes was also obtained. In conclusion, the photographic remote sensing technique proves to be a useful tool for evaluating flares, although the results of this study may be tentative due to the limited field experiments.


Development of a software for modeling VOC emissions from elevated point and area sources, by Arvind Purushothaman, (1997)

This thesis aims at developing software for modeling volatile organic compound (VOC) emissions from point and area sources. Prior to the development of the software, the physical and chemical properties of volatile organic compounds were examined to determine the important factors that could affect VOC concentrations; VOCs are gaining importance as a separate class of pollutants primarily due to the increasing consumption of gasoline. The regulatory ISCST3 model uses numerical integration to calculate the concentration due to area sources. In this thesis, an attempt has been made to develop an analytical solution for area sources under the assumption that the concentration at a receptor is mainly dependent on the immediately upwind source. The decay term has been studied in detail and incorporated for point and area source emissions. A database of the VOCs has been created using MS Access and MS Excel to facilitate the concentration calculations. This database acts as the back end of the software, and the front-end tool used is Visual Basic 4.0. The results obtained from this model are then compared with the ISCST3 model and analyzed using four statistical parameters: fractional bias (FB), normalized mean square error (NMSE), factor of two (Fa2), and fractional variance (FS).
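
Three of these evaluation statistics can be computed as sketched below; sign conventions for FB vary between studies, and the forms shown are one common choice, not necessarily those adopted in the thesis.

    import numpy as np

    def evaluation_stats(co, cp):
        # Common air-quality model evaluation statistics for observed (co)
        # and predicted (cp) concentrations
        co, cp = np.asarray(co, float), np.asarray(cp, float)
        fb = (co.mean() - cp.mean()) / (0.5 * (co.mean() + cp.mean()))
        nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())
        fa2 = np.mean((cp / co >= 0.5) & (cp / co <= 2.0))
        return {"FB": fb, "NMSE": nmse, "Fa2": fa2}

    print(evaluation_stats([10, 12, 9, 15], [8, 14, 10, 11]))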

The highlights of this thesis are the development of a database of VOCs, an analytical solution for calculating VOC emissions from area sources, and the implementation of user-friendly software for calculating VOC concentrations.


Evaluation of Complex Terrain Algorithm in ISC3 For A Rural Area, by Praveen Mungara, (1997)

The Industrial Source Complex model (ISC3) was evaluated using data from the 1991 emission inventory of Dearborn County, Indiana, which included actual air pollutant emissions of SO2. ISC3 was used to predict 1-hr, 3-hr, 24-hr, and 1-month averages of SO2 concentrations at three receptors. Uncertainties associated with the model predictions were estimated using bootstrap resampling methods. Confidence intervals on the fractional mean bias, normalized mean square error, geometric mean bias, and geometric variance were calculated for each model. At Monitor 1, ISC3 overpredicted for all averaging periods, though less so for the 3-hr and 1-month averaging periods. NMSE values were low for these averaging periods, indicating good model performance. The factor-of-two values for these averaging periods were greater than 0.8, which indicated that at Monitor 1 the predicted results were not significantly overestimated by more than a factor of two. At Monitor 2, ISC3 underpredicted the 1-hr averages for some months in the first two quarters and overpredicted for later months; for the 24-hr averaging period it overpredicted for the whole year, and for the 1-month and 3-hr averaging periods it drastically overpredicted. Confidence limits for these averaging periods were centered around zero, suggesting that the predicted concentrations were not in one-to-one correspondence with the observed concentrations. Performance measures at Monitor 4 showed a positive bias for all averaging periods, which indicated that ISC3 underpredicted. The coefficient of correlation was consistently low and varied between positive and negative values, indicating inverse correlation. An interesting point is that Monitors 1 and 2 were located at higher elevations (750 ft and 800 ft) while Monitor 4 was located in a valley and not associated with any hill. Hence it is suggested that ISC3 should be used carefully as a screening procedure for complex terrain, and that the user must be careful when selecting individual terrain features from an array of hills in a complex mountain range.


Development of Software for Modeling Peak Odor Concentration, Vanitha Venugopal, (1997)

In recent years, increasing attention has been directed towards an obvious indicator of contaminants in the air: odor. Odors that were once tolerated as a sign of industrial prosperity are now perceived as evidence of environmental problems. Because of what a community may conclude about odors in their cities, industry must address this odor issue for the safety of residents. Odor is characterized by properties that make it unique with respect to measurement requirements: its sole effect is the subjective response of humans to various intensities, and, with a few exceptions, it is not feasible to evaluate its intensity by chemical analysis because of its varied chemical composition. This is not to be confused with monitoring air quality, which is directly measurable.

As a result, a need has developed to quantify the damage caused by the release of toxic odor. The existing regulatory models omit the effect of peak concentration, and hence the estimated values obtained using these models are not accurate. This thesis involves the development and analysis of an odor concentration model that incorporates the peak concentration. The resulting model calculates the peak odor concentration, for a given mass emission rate, in terms of odor units per cubic meter (O.U./m3). The highlight of this work is the calculation of the peak concentration in terms of odor units. The Humanistic model, which is used to predict the degree of offensiveness and the potential level of source annoyance, helps in outlining an effective odor control strategy.

This thesis demonstrates the use of the "Odor Concentration Model" (OCM) with acetone, benzene, and petrol (light). The behavior of acetone under varying sampling time, stability class, wind velocity, stack height, terrain, and stack exit velocity is studied extensively. The effect of sampling time is very prominent: the 1-hour peak concentration predicted by the OCM is approximately 6 times the concentration from the USEPA's SCREEN 2 model. The OCM shows an increase in odor concentration with a decrease in stack height, and an increase in wind velocity increases the peak concentration value, which is a key factor in determining the peak odor concentration. The behavior of benzene and petrol (light) was analyzed for a set of selected cases; these tests indicate that acetone has a higher odor concentration value than the others.
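
The sampling-time effect follows the familiar power-law peak-to-mean correction; the sketch below uses a commonly quoted exponent of 0.2, which is an assumption here and not necessarily the value adopted in the OCM.

    def peak_concentration(c_mean, t_mean_s, t_peak_s, exponent=0.2):
        # Power-law sampling-time correction: C_p = C_m * (t_m / t_p)**q;
        # q ~ 0.2 is a commonly quoted value, assumed for illustration
        return c_mean * (t_mean_s / t_peak_s) ** exponent

    # Scaling a 1-hour mean to a 5-second peak gives a factor of about 3.7
    print(peak_concentration(1.0, 3600.0, 5.0))
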
In addition, a Graphical User Interface (GUI) application was developed, which provides the model with a user-friendly screen. The application package permits the user to create, save, and edit input files, and also provides the capability of printing the output for further analysis.

The results in this thesis are tentative and subject to change. Based on the analysis, it is concluded that the model needs to be tested further with appropriate field data and more chemicals at a later date.


Numerical Solution of Flash Fire Plume Dynamics Under Cross Wind Conditions, Raminder Saluja, (1997)

In recent times, there have been growing instances of large quantities of flammable fuels, such as liquid methane, liquefied natural gas (LNG), propane, and ethane, being handled and transported over long distances. This increases the possibility of an accidental spill of fuel causing a flash fire. As a result, a need has developed to quantify the damage potential of a flash fire. Toward this end, a thorough understanding of the dynamics of a flash fire plume through a flammable fuel vapor cloud is required. A single analytical solution to simulate flash fire plume movement is reported in the literature. This research involves the development and analysis of a two-dimensional numerical model, based on that analytical solution, to study the plume dynamics of a flash fire. The numerical model incorporates variable wind speed, temperature, density, and different atmospheric stability classes. Methane, propane, and ethane were selected to analyze the performance of the numerical model. Based on the flash fire plume behavior reported in the literature, the predicted results of the numerical model are acceptable. The analysis showed that the numerical model predicted higher values for the maximum flash fire plume height, the maximum vertical plume velocity, and the maximum downwind distance traveled by the plume under unstable and low wind velocity conditions. The flash fire plume width attained its maximum value at ground level. It was determined that the plume density does not greatly influence the flash fire plume dynamics.

The results of the numerical model are compared to the results of the AIChE model. It is observed that the maximum plume height values predicted by the AIChE model are much higher than those predicted by the numerical model under similar conditions.

Sensitivity analysis of the numerical model was carried out to evaluate its sensitivity to the input parameters wind velocity, temperature, and stability class. It was determined that the model outputs maximum plume height and maximum plume velocity are sensitive to variations in these input parameters, whereas the output maximum plume width did not show any sensitivity to them.

Based on the analysis, it is concluded that the numerical model needs to be tested further with appropriate field data at a later date.


Incorporation of Surface Heat Transfer in an Instantaneous Heavy Gas Release Model: Development and Solution, Mahurkar Abhijeet, (1997)

Air quality models help in developing relationships between the amount of pollutant released into the ambient atmosphere by a source and the corresponding incremental contribution to the atmospheric concentration. Among the various dispersion models, heavy gas models help in predicting the concentrations due to releases of gases heavier than air and the risks associated with the increased concentrations. Several differences exist among the various models developed to study the heavy gas dispersion phenomenon. These differences mainly arise from the varied treatment given to the physical processes involved in the dispersion mechanism. One of the processes that has not been fully considered in many of the existing models is the effect of ground heating on the movement of a cloud. In this study an improved box model was developed which incorporates unsteady heating effects on the dispersion of a heavy gas cloud in windy conditions. The model considers the instantaneous release of heavy gases in the presence of wind. The model also varies in its approach to the phenomenon of air entrainment, which was assumed proportional to the cloud frontal velocity. A quasi-analytical solution for the equations describing the model was formulated based on standard box model assumptions. The analytical solution helps identify the influence of various parameters on the cloud dispersion process.

The semi-analytical equations developed were then solved by numerical methods to provide a heavy gas cloud dispersion profile. The popular fourth-order Runge-Kutta technique was adopted to solve the differential equations numerically. A Graphical User Interface (GUI) application was also developed, which provides the model with a user-friendly working screen. The application package permits the user to create, save, and edit input files, and also provides the capability of exporting the generated output files to various common formats (e.g., MS Excel, MS Word) for further analysis.
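
A minimal sketch of fourth-order Runge-Kutta integration applied to a box model is shown below; the governing forms used here (constant cloud volume and a frontal velocity proportional to the square root of g'H) are standard box-model assumptions for illustration, not the thesis equations.

    import numpy as np

    def rk4_step(f, t, y, h):
        # One classical fourth-order Runge-Kutta step
        k1 = f(t, y)
        k2 = f(t + h / 2.0, y + h / 2.0 * k1)
        k3 = f(t + h / 2.0, y + h / 2.0 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    # Assumed forms: constant volume V = pi R^2 H, dR/dt = Ce * sqrt(g' H)
    g_prime, ce = 0.5, 1.0          # reduced gravity, front constant (assumed)

    def rhs(t, y):
        radius, height = y
        d_radius = ce * np.sqrt(g_prime * height)
        d_height = -2.0 * height * d_radius / radius   # volume conservation
        return np.array([d_radius, d_height])

    y = np.array([10.0, 1000.0 / (np.pi * 10.0 ** 2)])  # R0 = 10 m, V = 1000 m^3
    t, h = 0.0, 0.1
    for _ in range(100):             # integrate to t = 10 s
        y = rk4_step(rhs, t, y, h)
        t += h
    print(t, y)                      # cloud radius and height at t = 10 s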

The model evaluation results indicated that the model behavior closely follows the expected dispersion trends and observed cloud characteristics. The trial run carried out to model the scenario of no heat transfer, by equating the source temperature to the ambient temperature, followed the variations observed in the field. The analysis of cloud behavior indicated that the cloud length is strongly influenced by the source density and the initial cloud temperature; other cloud parameters showed little or no effect of varying input conditions. Sensitivity analysis, performed to assess the adequacy of the model, did not yield Type IV sensitivity to any particular input parameter; most variables exhibited Type III behavior toward the input parameters. The Type I sensitivity displayed by wind velocity toward most output variables indicated a negligible influence of ambient wind conditions on the dispersing cloud. However, wind speed strongly influenced the advection velocity, which gradually increased and stabilized to a fraction of the surrounding wind velocity.


Estimation of Emission Rates and Risk Due to Paint Spray Booths, by A. Shrivastava, (1996)

Pollution from painting operations at automobile plants has been addressed in the Clean Air Act Amendments (1990). The estimation of pollutants from automobile painting has mostly been done by approximate procedures rather than by actual calculations. The primary purpose of this study was to develop a methodology for calculating the emissions of pollutants from painting operations in an automobile plant. The major pollutants from painting operations are volatile organic compounds and particulate matter. The amount emitted depends upon the composition of the paint, the air spray rate, the efficiencies of the control equipment, and the number of hours of operation of the painting line. Once the rate of pollutant emissions was determined, the second step was to study their dispersion. Finally, the study focused on the risk posed by the emitted pollutants.
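
The mass-balance form of such an emission estimate can be sketched as follows; the functional form and the values are illustrative assumptions, not the thesis methodology.

    def voc_emission_rate(paint_use_l_h, voc_g_l, capture_eff, control_eff):
        # Simple mass balance: emissions = usage * VOC content * fraction
        # that is not both captured and destroyed (illustrative form only)
        return paint_use_l_h * voc_g_l * (1.0 - capture_eff * control_eff)

    # 20 L/h of paint at 400 g VOC/L, 90% capture to a 95% efficient oxidizer
    print(voc_emission_rate(20.0, 400.0, 0.90, 0.95), "g VOC per hour")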

Five scenarios involving an automobile painting operation located in Columbus, Ohio, were studied for pollutant emissions and the concomitant risk. In the risk study, a sensitivity analysis of the parameters involved in the risk was performed using Crystal Ball®, which works on the Monte Carlo principle. The pollution risk impact was evaluated via the respiratory pathway. The most sensitive factor in the risk analysis was the ground level concentration of the pollutants.

The second step in the risk analysis included an uncertainty analysis of the calculated risk. The final step in the process involved estimating the confidence in meeting the safety goal.

All scenarios studied met the safety goal (a risk value of 1 x 10^-6) with different confidence levels. The highest level of confidence in meeting the safety goal was displayed by Scenario I (Alpha Industries).

Another topic studied was the use of sensitivity analysis to determine the importance of different factors, viz. the spray rate, the efficiencies of the control equipment, and the number of hours of operation of the painting line. The simulations were performed using Crystal Ball®; for this part of the study, five automobile painting operations were considered.

The results from the scenarios suggest that the risk is associated with the quantity of toxic pollutants released. The sensitivity analysis of the various parameters shows that the average spray rate of paint is the most important parameter in estimating pollutants from the painting operations.

The entire study is a complete module which can be used by environmental pollution control agencies for the estimation of pollution and of the associated risk. The confidence level estimation for meeting the safety requirement can help in designing control equipment and in selecting the paint and the size of the operation. The study can be further extended to other operations in the automobile industry or to different industries.


Analysis and Evaluation of Techniques for Computing Concentrations due to Emissions from Horizontal, Tilted and Capped Sources, by C. Tanna, (1996)

In spite of global efforts to reduce the emissions of specific pollutants from certain source categories, researchers are faced with the challenges posed by ever-increasing industrialization, which results in substantially higher emission rates. In an effort to gain perspective on the extent of pollution, researchers resort to air pollution modeling. Among the various models that have been developed, the Industrial Source Complex (ISC) model has the most widespread application for compliance with U.S. Environmental Protection Agency (EPA) requirements.

However, the Industrial Source Complex Short Term model (ISCST3), which is a part of the ISC model, does not consider the effect of horizontal, tilted, and capped sources. The main purpose of this research was to evaluate the available techniques for computing the concentrations of pollutants released from these types of sources.

Emission data from a hypothetical plant were modified using methods summarized by Westbrook and Tarde (1995) to incorporate the effects of special sources such as horizontal, tilted, and capped sources. The same data were also modified to use an alternative method, the Kumar, Sahore, and Tanna (KST) approach, for capped sources. A comparison was then made between the 3-hr maximum concentrations obtained from the baseline (original) data and those resulting from the modified data. The results were further evaluated by calculating statistical parameters such as model bias, fractional bias, fractional variance, normalized mean square error, and fractional difference.

The results indicated that for all the approaches the Point of Maximum Impact (PMI) was located within 200 meters of the source, and that adjusting the input parameters did not have any significant effect on receptors located relatively far from the source.

The results obtained in this work should help strengthen the proposal put forth by the EPA and the State of New Jersey Department of Environmental Protection to the Model Clearinghouse to incorporate these changes in the currently used regulatory model.

A sensitivity analysis was also carried out to check the validity of the proposed KST model; the results indicate Type III sensitivity. Since no field data were available, the results obtained were checked for accordance with scientific deductions.


Development of a Software for Dispersion of Mercury Emission, by V. Chigullapalli, (1996), (MS Project).

A computer-based model has been developed to simulate the release of mercury from anthropogenic sources and to perform risk assessment. A distribution factor has been developed for anthropogenic emissions based on the sector speciation percentages, the percent of total emissions from each source type, and the percent of total sources considered for this study. This "distribution factor" has been utilized in dividing the total elemental emission into particulate and gaseous forms. TSCREEN model algorithms have been used to model the gaseous and particulate forms, and both forms of mercury have been treated for wet and dry deposition. All meteorological conditions have been used, with the worst-case meteorological scenario being selected automatically for conservative results. The dispersion characteristics and the mercury concentrations thus obtained have been processed by the risk assessment module to evaluate the risk parameters. Oracle Forms, a graphical user interface (GUI) development tool, has been used to design the interface, perform the model calculations, and integrate the dispersion and risk assessment modules. The data for testing this model were obtained for Southeast Michigan for the years 1990 and 1991 from the Wayne County Department of Environment, Air Quality Management Division, Detroit. The data cover eighteen stacks considered to be the major source (92%) of mercury emissions for Southeast Michigan. Eight monitoring stations provided the annual observed data, which were used to compare the results from the integrated model. The concentrations obtained from the model calculations were used for risk assessment, with the CAPCOA methodology used to calculate the exposure levels. The results obtained from the developed computer model have been compared to TSCREEN results and observed values. The results showed that the model-predicted values are closer to the observed values and that the TSCREEN model produced very conservative results. Gaseous wet deposition is found to be predominant, while particulate deposition is smaller but not negligible. The plots showed that both wet and dry deposition of the gaseous and particulate forms of mercury decreased as the distance from the source increased; the deposition seemed to increase initially and then drop over a long distance, which reinforces the fact that mercury impacts can be experienced over long distances. Risk evaluation showed that the major impact on humans was through inhalation, followed by ingestion of food. The study showed that the food ingestion pathway depends on wet deposition, which in turn depends on the vicinity of industries, geographical location, meteorological conditions, and the food habits of the population. The worst situation is predicted to be a location close to mercury sources with high precipitation, where the population eats locally grown food and fish from nearby water bodies. The study also showed that recreation in a local place with high wet deposition of mercury could potentially increase the intake of mercury through indirect ingestion via dermal absorption from soil.

Measures have to be taken to control the intake of contaminated fish from local water bodies and of food grown in the vicinity of mercury sources. Control measures also have to be taken to reduce mercury emissions so that human exposure to this toxic substance is minimal and within safe limits.
 


Comparison, Evaluation, and Use of Industrial Source Complex Models for Estimating Long Term and Short Term Ambient Air Concentrations of Sulfur Dioxide, Nitrogen Dioxide, and Particulate Matter, by N.K. Bellam (1996)

The Short Term and Long Term versions of the Industrial Source Complex model (ISCST3 and ISCLT3) are evaluated using data from the 1990 emission inventory of Lucas County, Ohio, which included actual air pollutant emissions of sulfur dioxide, oxides of nitrogen, and particulates. ISCST3 is used to predict 24-hr averages for sulfur dioxide, oxides of nitrogen, and particulates, and is further used to predict 3-hr, monthly, quarterly, and annual averages of sulfur dioxide concentrations. ISCLT3 is used to predict monthly, quarterly, and annual averages of sulfur dioxide concentrations. Uncertainties associated with the model predictions are estimated using the bootstrap resampling method. Confidence intervals on the fractional mean bias, normalized mean square error, geometric mean bias, and geometric variance are calculated for each model and for the differences between the models. ISCST3 did not yield good performance in its prediction of 3-hr and 24-hr average concentrations for the data sets used, with mean biases of 85% or less and normalized mean square error values greater than 50% of the mean. Intercomparison of ISCST3 and ISCLT3 indicated that these models yielded relatively good performance in their prediction of monthly, quarterly, and annual average concentrations, with relative mean biases of 29% to 55% and normalized mean square error values of about 13% to 44% of the mean. Both ISCST3 and ISCLT3 predicted concentrations lower than the observed concentrations, but the ISCST3 predictions were closer to the observed values than those of ISCLT3. Hence it is suggested that ISCST3 be used to estimate long term concentrations of sulfur dioxide.
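
The percentile bootstrap used for such confidence intervals can be sketched as follows, with NMSE as the example statistic and illustrative data rather than the Lucas County records.

    import numpy as np

    def bootstrap_ci(co, cp, stat, n_boot=2000, alpha=0.05, seed=1):
        # Percentile bootstrap confidence interval for a paired statistic
        rng = np.random.default_rng(seed)
        co, cp = np.asarray(co, float), np.asarray(cp, float)
        vals = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(co), len(co))   # resample pairs
            vals.append(stat(co[idx], cp[idx]))
        return tuple(np.percentile(vals, [100 * alpha / 2,
                                          100 * (1 - alpha / 2)]))

    nmse = lambda co, cp: np.mean((co - cp) ** 2) / (co.mean() * cp.mean())
    observed  = np.array([30.0, 42.0, 25.0, 55.0, 38.0, 47.0])
    predicted = np.array([22.0, 35.0, 30.0, 40.0, 33.0, 52.0])
    print(bootstrap_ci(observed, predicted, nmse))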


Evaluation of Six Dispersion Models for Complex Terrain Dispersion Modeling, by V. Kandi, (1996).

Most state and federal air quality regulations require that pollutant concentrations be calculated at surface terrain obstacles where the receptor height is greater than the stack height. Complex terrain is defined as terrain whose elevations exceed the stack height. For calculating pollutant concentrations in a complex terrain region, dispersion models which incorporate terrain features should be used.

In this study, six dispersion models (COMPLEX1, CTSCREEN, RTDM, VALLEY, SHORTZ, and ISCST2) are evaluated for their performance in a scenario involving complex terrain. COMPLEX1, CTSCREEN, RTDM, VALLEY, and SHORTZ are screening models, whereas ISCST2 is a regulatory model.

Two point sources near Walker Hill in Bakersfield, California, were considered for the air quality study. Monitoring data from 20 sites around the two sources were acquired from the California Air Resources Board (CARB). The concentrations of the pollutant (SO2) predicted by the different models were compared with the actual monitoring data, and the models were evaluated for their performance in complex terrain scenarios using statistical measures, following USEPA guidelines.

The results of the study indicated that the CTSCREEN model would be the best of the screening models to use in complex terrain scenarios.


Evaluation of a k-e Concentration Model, by S. Sahore, (1996)

This study concentrates on the development of an atmospheric dispersion model for an elevated point source release based on the k-e (algebraic stress/flux) equation model of turbulence using the second-order closure hypothesis. The model utilizes state-of-the-art procedures for estimating surface fluxes of heat and momentum based on simple wind and radiation measurements. The model is developed specifically for buoyant plumes from tall stacks in relatively flat terrain, and it assumes that no downwash occurs. The turbulence model chosen is based on an exhaustive literature review, supplemented by state-of-the-art research techniques in turbulence. The convection-diffusion (C-D) equation is derived and written in numerical form using the Alternating Direction Implicit (ADI) method, after appropriate assumptions are made regarding the nature of the flow field.
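
For reference, one common steady-state form of the C-D equation solved in such models, neglecting along-wind diffusion, is

    $$U\,\frac{\partial C}{\partial x} = \frac{\partial}{\partial y}\!\left(K_y\,\frac{\partial C}{\partial y}\right) + \frac{\partial}{\partial z}\!\left(K_z\,\frac{\partial C}{\partial z}\right),$$

where the eddy diffusivities K_y and K_z are obtained from the turbulent kinetic energy k and its dissipation rate e, for example through a relation of the form K proportional to k^2/e; the exact closure used in the thesis may differ.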

After deriving the form of the C-D equation relevant to this work, the numerical implementation of the C-D equation is carried out using the computationally intensive ADI technique. The FORTRAN programming language was then used to code the numerical formulation in order to obtain pollutant concentrations downwind of the elevated smokestack.

The model is evaluated according to the guidelines laid down by the USEPA in its model evaluation protocol. This involved the use of various statistical tools such as model bias, fractional bias (FB), fractional variance (FS), normalized mean square error (NMSE), coefficient of correlation (r), factor of two (Fa2), geometric mean bias (MG), and geometric mean variance (VG); however, the primary statistics used for estimating model performance were FB and NMSE. The confidence limits between the observed and predicted values were obtained using bootstrap resampling techniques. Three different comparison methods were used: Co and Cp comparison, Co/Cp and Cp/Co comparison, and ln(Co) and ln(Cp) comparison. The output of the k-e model was compared with actual field data, such as those from studies conducted at the Bull Run and Kincaid sites. Sensitivity analysis was also performed on the model by varying the parameters the model is most sensitive to (wind speed, mixing height, and friction velocity); these parameters were chosen because they govern the values of k and e, which are calculated within the model program.

The model is tested for predictions under the three different stability classes: stable, unstable, and neutral. Evaluation and comparison of the model suggest that the model performs well during unstable conditions but has a tendency to overpredict concentrations during neutral and stable atmospheres. From the tables and figures presented, the model performs best for short term averages during unstable conditions and is expected to be least successful during stable nighttime conditions. This suggests that artificial turbulence is being generated in the grid, which tends to distort the model behavior. Improved model performance may be achieved by applying modifications to the physics on which the model is based. Alternatively, a new improved numerical scheme, combined with the latest developments in planetary boundary layer modeling, could be used to test the validity of the model predictions.


Determination of Acid Deposition in the Great Lakes Area, by S. Ahuja, (1995)

The MESOPUFF II modeling package for long range transport (LRT) was evaluated in a region-wide analysis of the five Great Lakes states of Illinois, Michigan, Indiana, Ohio, and Pennsylvania. The emission inventories for these states were reduced to eighteen sources according to the concept of mixing rules for multi-component systems. Six receptor locations on Lake Erie, Ohio, were chosen because they coincide with monitoring stations. The twelve-hourly ground level concentration estimates at the six monitoring stations, which are not influenced by wind patterns from Canada and regions south of Ohio, were used for a statistical comparison with the actual monitored data. Six statistical parameters were computed using four different methods, and the confidence limits on these parameters were determined using the bootstrap, jackknife, seductive, and robust resampling techniques. The model predictions were found to be in close agreement with the monitored data. The statistical parameters generated in the analysis were evaluated against criteria set in previous studies and criteria set for the purpose of this study, and the model was found to satisfy these criteria.

Subsequently, the model was also studied and found to be accurate in predicting pollutant removal, i.e., wet and dry deposition. An attempt was also made to analyze the impact of local sources versus distant sources; the impact of distant sources (greater than 50 km) was found to be very significant.


Sensitivity and Uncertainty Analysis of the Health Risk Due to Chemical Emissions from a Glass Manufacturing Facility and Analytical Determination of the Area of the Zone of Impact, by A. Manocha, (1995)

Health risk assessments (HRAs) are increasingly being used in the environmental decision-making process. A key issue of concern regarding the results of these risk assessments is the uncertainty associated with them. Previous studies have associated this uncertainty with highly conservative estimates of the risk assessment parameters. The primary purpose of this study was to investigate error propagation through a regulatory risk assessment model, ACE2588. A sensitivity analysis identified the five parameters with the greatest impact on the calculated risk: the mixing depth for human consumption, the deposition velocity, the weathering constant, the interception fraction for vine crops, and the average leaf vegetable consumption. A Monte Carlo analysis using these five parameters resulted in a distribution with a smaller percentage standard deviation than that of the input parameters.

Another topic studied was the impact of building downwash on maximum ground level concentrations. While the previous portion of the study focused on the risk posed after the chemicals had been transported to the site, this portion studied the effect of building downwash on the transport of the chemicals. The height of the stack was seen to be the primary factor affecting the ground level concentrations of the pollutant. For stacks downwind of the building, the ISCST2 model did not predict any change in maximum ground level concentrations with a change in the stack-building distance.

In the third part of this thesis, an analytical solution for determining the area of the Zone of Impact was developed.


A Theoretical Model for an Elevated Volume Source, by Ravindranath Madasu, (1996)


Application, Comparison, and Evaluation of Three Air Dispersion Models: ISCST2, ISCLT2, and SCREEN2 for Mercury Emissions, by V. Patel, (1995)

The goal of this study was to compare and evaluate the performance of three air quality models for mercury releases. The models include the Industrial Source Complex Short Term (ISCST2, 93109), Industrial Source Complex Long Term (ISCLT2, 93109), and SCREEN2 (92245) models. The ISCST2 and ISCLT2 models are refined regulatory models and SCREEN2 is a screening model. The evaluation was conducted in a multiple point source urban environment using three years (1990 to 1992) of meteorological data (except for SCREEN2), the emission inventory for the same three years (1990 to 1992) for Southeast Michigan (Livingston, Macomb, Monroe, Oakland, St. Clair, Washtenaw, and Wayne Counties), and monitoring data from six nearby stations for the years 1990 to 1992. Only major sources were taken into consideration, and a gaseous form of mercury was assumed. This study is representative of a real-world situation in which the ISCST2, ISCLT2, and SCREEN2 are generally applied for regulatory work in the State of Michigan. The models were evaluated using eight statistical parameters: model bias, fractional bias (FB), fractional variance (FS), normalized mean square error (NMSE), coefficient of correlation (r), factor of two (Fa2), geometric mean bias (MG), and geometric mean variance (VG). To compare the differences, if any, between the observed and predicted values, confidence limits on these parameters were obtained using Bootstrap resampling techniques. The study concentrated on four different methods of calculating the model performance measures using observed concentration (Co) and predicted concentration (Cp): by straight Co and Cp comparison; by considering Co/Co and Cp/Co; by considering Co/Cp and Co/Cp; and by considering ln(Co) and ln(Cp). This procedure has not been used in any earlier evaluation studies on the ISC models.
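
For reference, the sketch below implements these eight measures in their commonly used forms from the model-evaluation literature; the exact formulations used in the thesis may differ.

```python
import numpy as np

def evaluation_stats(co, cp):
    """Common air-quality model evaluation measures.

    co: observed concentrations, cp: predicted concentrations
    (both positive arrays; MG and VG are defined on log-transformed data).
    """
    return {
        "bias": (cp - co).mean(),
        "FB":   2 * (co.mean() - cp.mean()) / (co.mean() + cp.mean()),
        "FS":   2 * (co.std() - cp.std()) / (co.std() + cp.std()),
        "NMSE": ((co - cp) ** 2).mean() / (co.mean() * cp.mean()),
        "r":    np.corrcoef(co, cp)[0, 1],
        "Fa2":  np.mean((cp / co >= 0.5) & (cp / co <= 2.0)),
        "MG":   np.exp(np.log(co).mean() - np.log(cp).mean()),
        "VG":   np.exp(((np.log(co) - np.log(cp)) ** 2).mean()),
    }

rng = np.random.default_rng(3)
co = rng.lognormal(0.5, 0.8, 200)        # hypothetical observations
cp = co * rng.lognormal(0.2, 0.5, 200)   # a model with multiplicative error
print(evaluation_stats(co, cp))
```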

The comparison of model results for both quarterly and annual averaging periods shows that the ISCST2 predictions qualitatively match the observed concentrations, whereas SCREEN2 predicts the highest concentrations and ISCLT2 the lowest. The summary of the statistical analysis obtained using the four methods of Co and Cp comparison shows that the ISCST2 has a better overall performance than the ISCLT2 and SCREEN2 models. However, none of the models met the criteria for a reasonable model. Summaries of 95% confidence limits on NMSE, VG, and MG for each model, and among models, indicate that of the three models, ISCST2 has the best overall performance indicators. Improved model performance may be achieved by applying modifications to the physics on which the models are based.


Development and Evaluation of a Plume Rise Model for Heavy Gases, by K. Srinivas, (1995)

Air quality models are extensively used to predict the concentrations and the plume behavior for both regulatory and risk assessment purposes. Most regulatory models in use for elevated releases are based on the virtual origin concept, which is not a true representation of near-stack dispersion from an actual elevated source. Furthermore, no model is available for simulating heavy gas releases from an elevated source. This research involves the development and analysis of a model for the plume dynamics of an elevated toxic release based on the trajectory of the plume rather than the virtual origin concept, so that the near-stack dispersion phenomenon can be simulated more closely to an actual stack dispersion. Further, the model development incorporates variable wind speed, temperature, density, different atmospheric stability classes, and the type of release (i.e., passive or dense).
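
The trajectory-based approach can be illustrated with a generic top-hat integral plume model; the two-coefficient entrainment law, ambient profile, and initial conditions below are illustrative assumptions, not the model developed in this thesis.

```python
import numpy as np

# Generic top-hat integral plume trajectory (Hoult-Fay-Forney type entrainment).
alpha, beta = 0.11, 0.6     # along-axis and cross-axis entrainment coefficients
ua = 4.0                    # ambient wind speed (m/s), uniform for simplicity

R0, w0 = 1.0, 10.0          # stack radius (m) and exit velocity (m/s)
gp = -0.5                   # reduced gravity g' (m/s^2); g' < 0 => dense plume

Q  = w0 * R0**2             # volume flux / pi
Mx = 0.0                    # horizontal momentum flux / pi (vertical release)
Mz = Q * w0                 # vertical momentum flux / pi
B  = Q * gp                 # buoyancy flux / pi (conserved, no stratification)

x, z, ds = 0.0, 50.0, 0.5   # start at a 50 m stack top; 0.5 m step along the axis
for _ in range(4000):
    ucos, usin = Mx / Q, Mz / Q              # axial velocity components
    U = np.hypot(ucos, usin)
    cos_t, sin_t = ucos / U, usin / U
    R = np.sqrt(Q / U)                       # top-hat plume radius
    E = alpha * abs(U - ua * cos_t) + beta * ua * abs(sin_t)
    Q  += 2.0 * R * E * ds                   # entrainment dilutes the plume
    Mx += ua * 2.0 * R * E * ds              # entrained air adds wind momentum
    Mz += (B / U) * ds                       # negative buoyancy bends it downward
    x  += cos_t * ds
    z  += sin_t * ds
    if z <= 0.0:
        break

print(f"centerline {'touches down' if z <= 0 else 'still aloft'} at x = {x:.0f} m")
```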

This research also covers a literature review on nighttime boundary layer heights and available model equations under calm conditions. This review was conducted to assist in incorporating nighttime boundary layer and calm condition concepts into the model at a later date.

Butane, Freon-114, and hydrocyanic acid (HCN) were selected to represent dense and passive elevated releases in analyzing the performance of the model. Based on the selected test cases, the model performance was acceptable. The analysis showed that the model was able to predict the plume behavior for both heavy and passive releases from an elevated source. However, the analysis also revealed that the model appears to predict excessive entrainment for heavy gas releases, causing the plume centerline to rise like a passive plume instead of dropping toward the ground.

Based on the analysis and personal discussions with various researchers and scientists in the field, it is concluded that the model needs to be tested with more appropriate field data at a later date, and that the initial conditions used in formulating the plume path problem should be examined more closely.


Identification of Indoor Air Quality Factors Relative to Characteristics of Residential Dwellings, by D.A. Isley, (1994), with MCO


Evaluation of Four Plume Rise Models Under Wind Shear Condition, by J.P. Liebrecht, (1992), with Dr. G. Bennett


Modifications to the ISCST Model, its Subsequent Evaluation and Development of an Inclined Plume Model, by R. Riswadker, (1992)

The ISC is one of the most comprehensive regulatory models available, addressing a multitude of dispersion parameters simultaneously. However, the ISC is not applicable to flaring stacks and, among other shortcomings, does not account for penetration of an inversion layer, mixed rural/urban modes, or plume rise due to momentum, especially for point sources. This research attempts to modify the ISCST to account for these phenomena.

The concept of an inclined plume model to compute ground level concentrations is further refined. An inclined plume model, which incorporates the initial phase of the plume rise in determining maximum concentrations and the point of impact, is developed for different atmospheric conditions (stable, neutral, unstable).

Validation of these modifications to the ISCST is achieved through evaluation studies using six statistical parameters and the seductive bootstrap and jackknife resampling techniques. The original ISC, its latest version (the ISC2), and the modified ISC are evaluated for the 1-hour and 24-hour averaging periods in all the stability categories, using one year of meteorological data, the emission inventory for Lucas County (Toledo), Ohio, and monitoring data from two nearby stations for the year 1987.

Sensitivity analysis shows that the ISC is a poorly performing model for the 1-hour and 24-hour averaging periods in the neutral and stable categories, and modifications are necessary to improve its performance. The use of such a large emission inventory suppressed the effect of the modifications. The stability analysis is influenced by the absence of diurnal scaling of emission rates.

The accuracy of prediction of the inclined plume model is tested using the Moore (1967) data from the Tilbury Power Plant. In neutral and stable atmospheric conditions, the inclined plume model does not give real values for the maxima. The unstable category also gives unrealistic solutions in certain conditions and in most cases fails to yield a solution. A thorough analysis of dispersion conditions is necessary to determine when solutions exist.


Evaluation of Four Hazardous Release Models, by J. Luo, (1992)

The performance of six atmospheric dispersion models for heavy gases was studied. These six models are the DEGADIS model (version 2.1, 1991), the SLAB model (1989), the OME Simple Gas model, the OME Heavy Gas model (1983), the RVD model (1989), and the SCREEN model (1988). Five independent data sets, from the Large Scale Propane Release Experiments in former West Germany, the RWDI Wind Tunnel #2 Tests, the American Petroleum Institute Wind Tunnel Tests, and the Wind Tunnel Tests in the Netherlands and Norway, were used to test the performance of each model on lower flammability distance (LFD) or concentration. A procedure based on the USEPA guidelines on air quality models was followed to evaluate the models. Six statistical measures were used to compare the output of each model with the observations. The confidence limits on the various statistical measures were computed using the bootstrap resampling procedure. Three different resampling procedures (seductive, robust, and jackknife) were used for the confidence limits.

For LFD predictions, the OME Simple Gas model is significantly better than the DEGADIS and SLAB models for cyclone type (instantaneous) releases under unstable and neutral atmospheric conditions, and the DEGADIS model is significantly better than the SLAB model for nozzle type (horizontal jet) releases under stable atmospheric conditions.

For concentration predictions, the SLAB model is much better than the DEGADIS model for horizontal jet releases under homogeneous terrain conditions, and the DEGADIS model is much better than the SLAB model for horizontal jet releases under heterogeneous terrain conditions; the DEGADIS and SLAB models are significantly better than the RVD model for vertical jet releases under both homogeneous and heterogeneous conditions; and the SLAB model is slightly better than the DEGADIS model for instantaneous releases. However, the models used for concentration prediction did not meet the criteria chosen for a reasonable model (normalized mean square error <= 0.5, -0.5 <= fractional bias <= +0.5, and fraction of predictions within a factor of two of the observations >= 0.85).

The results are tentative because the data sets used are not large enough to draw conclusions on confidence limits. Future work should concentrate on a large data set (n > 100).


An Analysis of Indoor Radon Levels and Its Associated Risk in the State of Ohio, by S. Agarwal, (1991)

Although there have been many studies on indoor radon in Ohio, their results are scattered and there has been no centralized source from which the results can be obtained. Such a compilation was accomplished in this study, in which 50,626 indoor radon measurements from 1,255 zip code areas and 88 counties were compiled from data supplied by commercial testing services, university researchers, and government agencies. The data include building construction parameters, geological parameters, epidemiological information for Ohio, and radon mitigation data, all of which were arranged in separate files.

The compiled data was then analyzed with respect to relationships between indoor radon concentrations, geological parameters, building construction parameters, and other indoor radon behavior. About 49.6 percent of the 50,626 measurements had radon values greater than 4 pCi/L, and 6.6 percent were above 20 pCi/L. The data was lognormally distributed with an arithmetic mean of 7.49 pCi/L. Basement radon concentrations were in general higher than those measured on the first floor. Also, radon concentrations measured in winter were found to be higher than the summer measurements.
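
A minimal sketch of this distributional analysis, assuming a lognormal fit on log-transformed values and hypothetical data in place of the 50,626 measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical sample standing in for the indoor radon measurements (pCi/L).
radon = rng.lognormal(mean=1.2, sigma=1.0, size=5000)

mu, sigma = np.log(radon).mean(), np.log(radon).std()   # moment fit on log-values
for threshold in (4.0, 20.0):
    frac_above = 1.0 - stats.norm.cdf((np.log(threshold) - mu) / sigma)
    print(f"fitted fraction above {threshold:>4} pCi/L: {frac_above:.1%}")
```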

Amongst building construction parameters, basement area correlated best with high indoor radon levels (20-200 pCi/L), with r = 0.36. Amongst geological parameters, the best statistically significant correlation was seen between the presence or absence of sand and gravel deposits and indoor radon levels between 20-200 pCi/L, with r = 0.37. The next best correlation was seen between indoor radon levels and surficial uranium concentrations (r = 0.21). Stepwise multivariate analysis was done in an attempt to predict indoor radon levels from the available geological and building construction information. No satisfactory relationships were seen between indoor radon and building construction parameters. Good correlations were seen for the multivariate relationship between individual radon values and zip-averaged geological parameters. The important predictive variables were surficial uranium levels, sand and gravel deposits and, to some extent, overburden thickness. Principal Component Analysis also showed similar relationships.

Risk assessment studies were done to quantify lung cancer mortality due to indoor radon exposure in Ohio. Based on the currently available risk factors, which involve many uncertainties, the mortality estimates were not satisfactory. An attempt was made to estimate the costs of mitigating elevated indoor radon concentrations to safe levels. Based on currently available figures, it was found that these mitigation efforts may cost in the region of $500 million in Ohio.


A Comparison of Three Plume Rise Models Using Field Data in Near Neutral Atmosphere, by M. M. El Salahat, (1990)

In this study, three plume rise models have been tested to see how well they predict maximum plume rise compared to the observed rise. The three models are the U.S. EPA equations used in screening models, a numerical model (the WSK model) by Wigley, Slawson and Kumar using a simple entrainment assumption, and the model by Schatzmann and Policastro (the SP model).

The comparison was carried out using two databases: the Bringfelt (1968) database and the Ontario Hydro plant database. The Bringfelt database was divided into two sets based on the difference between the ambient temperature and the effluent temperature: a) data with a temperature difference less than or equal to 50°C, and b) data with a temperature difference greater than 50°C. The Ontario Hydro database was divided into three sets based on the change in wind speed with height (i.e., wind shear): a) data with wind shear greater than 2×10⁻³ m/s per meter, b) data with wind shear between zero and 2×10⁻³ m/s per meter, and c) data with negative wind shear. The maximum predicted plume rise obtained from the three models under each of the above five conditions was compared with the corresponding maximum observed plume rise using statistical measures suggested in the U.S. EPA guidelines on model evaluations.

The SP model performed better for negative wind shear cases. The WSK model performed well for small positive wind shear and for negative wind shear situations. None of the three models did well for wind shear greater than 2×10⁻³ m/s per meter. The use of the average wind speed was also examined; the results showed that the Basic model and the SP model performed better than the WSK model.
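
The U.S. EPA screening equations referred to above are the Briggs plume rise formulas; the sketch below gives their standard ISC-era forms, with illustrative stack conditions.

```python
def buoyancy_flux(vs, d, Ts, Ta, g=9.81):
    """Briggs buoyancy flux parameter F (m^4/s^3) from stack-exit conditions."""
    return g * vs * d**2 * (Ts - Ta) / (4.0 * Ts)

def briggs_final_rise(F, u, stable=False, s=8.7e-4):
    """Final plume rise (m); s is the stability parameter (g/Ta)*(dtheta/dz)."""
    if stable:
        return 2.6 * (F / (u * s)) ** (1.0 / 3.0)
    if F < 55.0:
        return 21.425 * F**0.75 / u      # neutral/unstable, small buoyancy flux
    return 38.71 * F**0.6 / u            # neutral/unstable, large buoyancy flux

# Example: 2 m stack, 5 m/s exit, stack gas 30 K above a 283 K ambient, 4 m/s wind
F = buoyancy_flux(vs=5.0, d=2.0, Ts=313.0, Ta=283.0)
print(f"F = {F:.1f} m^4/s^3, neutral final rise = {briggs_final_rise(F, u=4.0):.1f} m")
```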


Evaluation of Four Box Models for Instantaneous Dense-Gas Releases, by G. Venkata, (1989)

One of the disagreements between many simple box models applied in industrial work and experimental data could be attributed to erroneous modeling of the motion of the cloud, leading to a much faster cloud travel (i.e., a shorter time to reach a given distance) than is observed. An improved box model has been formulated in this study by incorporating newly proposed equations for the translational speed of a dense gas cloud (Wheatley and Prince, 1987). The calibration of the basic box model has been done using a nonlinear least squares method, for which a Levenberg-Marquardt algorithm was used.

Data from the Thorney Island trials was used for the calibration process.
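
The calibration step can be sketched as follows, fitting the slumping coefficient of a simple gravity-spreading box model with SciPy's Levenberg-Marquardt solver; the box-model form and the "observed" radii are hypothetical stand-ins for the Thorney Island data.

```python
import numpy as np
from scipy.optimize import least_squares

# Gravity-slumping box model: dR/dt = k * sqrt(g' * H), with the cloud volume
# V = pi * R^2 * H conserved, giving R(t)^2 = R0^2 + 2*k*sqrt(g'*V/pi)*t.
gprime, V, R0 = 1.5, 2000.0, 7.0       # reduced gravity (m/s^2), volume (m^3), m

def radius(t, k):
    return np.sqrt(R0**2 + 2.0 * k * np.sqrt(gprime * V / np.pi) * t)

t_obs = np.array([2.0, 5.0, 10.0, 20.0, 40.0])      # s
R_obs = np.array([16.0, 25.0, 35.0, 49.0, 68.0])    # m (hypothetical)

result = least_squares(lambda k: radius(t_obs, k[0]) - R_obs,
                       x0=[1.0], method="lm")       # Levenberg-Marquardt
print(f"calibrated slumping coefficient k = {result.x[0]:.2f}")
```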

The model has been evaluated with data from the Burro field scale tests and also laboratory scale experiments performed by Havens and Spicer at the University of Arkansas. The results have been compared with those obtained from three other existing models: the OME model, the DENSI model, and the SLAB model. Statistical measures similar to the U.S. EPA model evaluation protocol (EPA, 1984) were used in the evaluation process. The nonparametric bootstrap resampling procedure, which is a relatively new method, was used to obtain confidence limits on the various statistical measures. The proposed model performs quite well in near neutral and unstable atmospheric conditions, but performs poorly in calm-wind and stable conditions. The SLAB model performed well in the calm-wind and stable cases. The DENSI model performed well in the calm-wind releases, and the OME model did not do well in any of the conditions under which the models were evaluated.


Indoor Radon Measurements for Houses Associated with Outcrops of the Ohio Black Shale, by J. Akkari, (1988)

188 houses in Ashtabula, Cuyahoga, Erie, Huron, Franklin, Pike, and Logan counties were tested for radon concentration. Based on the results obtained, regression, statistical, and dimensionless analyses were performed to understand the trends and relationships between the various house parameters considered and the indoor radon concentration. It was found that the indoor radon concentration increases from east to west. The results also indicated that radon levels in basements are higher than on first floors. Moreover, the reproducibility tests performed proved to be remarkably reliable; the difference in the results varied from 3 to 17%. From the regression analysis, two parameters were found to have a significant effect on indoor radon concentration: the air exchange rate, which is inversely proportional to indoor radon levels, and the penetration factor, which is directly proportional to them.

A screening model to estimate or, at best, predict indoor radon levels in the study areas was also developed. The penetration factor and/or the air exchange rate appeared in every individual study-area model as well as in the overall model. Furthermore, a dimensionless analysis of the various parameters affecting indoor radon levels was carried out, and several groups of related parameters were obtained.

Finally, an attempt to predict the source term in the study areas was made using these results and a mass balance model (Kusuda, 1980). The source terms obtained followed the same trend as the indoor radon concentrations; they too increased from east to west.
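
A minimal sketch of this source-term back-calculation, assuming a generic single-compartment steady-state mass balance (the exact Kusuda (1980) formulation may differ, and the house parameters are illustrative):

```python
# Steady-state single-compartment mass balance for indoor radon:
#   V * dC/dt = S - (n + lambda_rn) * V * C  =>  S = C * (n + lambda_rn) * V
LAMBDA_RN = 7.6e-3      # radon decay constant, 1/h

def source_term(C, n, V):
    """Whole-house radon source term (pCi/h).

    C: indoor concentration (pCi/L), n: air exchange rate (1/h), V: volume (L).
    """
    return C * (n + LAMBDA_RN) * V

# Illustrative house: 7.5 pCi/L indoors, 0.5 air changes per hour, 300 m^3 volume
print(f"S = {source_term(C=7.5, n=0.5, V=3.0e5):.2e} pCi/h")
```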


A Comparison of Dispersion Models for Ground Level Releases, by D. Shroder, (1985)

Many large industrial plants, especially in the chemical and electrical power industries, need to predict or estimate off-site ground level concentrations of various contaminants. Often these plants are located in coastal regions, since this is where a great number of pollution centers have developed. Pollutant dispersion during sea-breeze conditions is of main concern because high ground level concentrations are associated with this case. This project was limited in scope to discussions of dispersion during a coastal fumigation situation, which occurs when a thermal internal boundary layer (TIBL) develops.

There have been many studies evaluating different aspects of sea-breeze conditions. Tall stack models have been described by Lyons and Cole (1973), Meroney et al. (1975), van Dop et al. (1979), and Misra (1980). Kumar and Thomas (1984) have combined various physical aspects of the air flows near a coastal plant and presented a comprehensive concentration model. All of these consider the formation of the thermal internal boundary layer in one way or another when calculating ground level concentrations. One of the important parameters in these concentration calculations is the atmospheric stability, which affects the dispersion of the plume and enters the concentration equations through its influence on the dispersion parameters.

The methods for determining atmospheric stability, as well as dispersion coefficients, are not generally agreed upon by atmospheric scientists, and the proper method to use is a continuing point of discussion. There have been many papers comparing ways to determine stability classification. These include Gifford (1976), Weber (1976), Pasquill (1976), Weber et al. (1976), Hanna et al. (1977), and Scott-Waslik and Kumar (1982). These studies, however, were not conducted under coastal fumigation conditions.

This report compared ground level concentrations from a low level release (called a ground level release) using four methods to determine the horizontal and vertical diffusion parameters (sigma y and sigma z, respectively) and running them in the same dispersion model. The C/Q (concentration divided by emission rate) versus x (downwind distance) curves for an elevated source with buoyant and momentum plumes were also compared. The general model, as well as the two submodels discussed, were developed by Kumar and Thomas (1984).
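
For illustration, the sketch below generates a C/Q versus x curve from a standard Gaussian plume with Briggs open-country dispersion parameters; it is a generic example of one sigma scheme, not the Kumar and Thomas (1984) TIBL model.

```python
import numpy as np

def briggs_sigmas(x):
    """Briggs open-country dispersion parameters for stability class C (m)."""
    sy = 0.11 * x / np.sqrt(1.0 + 0.0001 * x)
    sz = 0.08 * x / np.sqrt(1.0 + 0.0002 * x)
    return sy, sz

def c_over_q(x, u, H=0.0):
    """Ground-level centerline C/Q (s/m^3) for a continuous Gaussian plume."""
    sy, sz = briggs_sigmas(x)
    return np.exp(-H**2 / (2.0 * sz**2)) / (np.pi * sy * sz * u)

x = np.array([100.0, 300.0, 1000.0, 3000.0])   # downwind distances, m
print(c_over_q(x, u=3.0))                      # ground level release (H = 0)
print(c_over_q(x, u=3.0, H=50.0))              # elevated release for comparison
```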


Sensitivity Analysis of Four Air Quality Models, by W. Kang, (1985)

Five air quality models have been developed for implementation on an IBM-PC microcomputer. Three are models used by the U.S. EPA for initial screening of industry (the PTxxx series). The other two are research models used to compute plume penetration potential (PENPOT) and acid deposition due to multiple sources (WDLRT). The usefulness of microcomputers for air quality modeling is tested.

A sensitivity test to determine the nature and extent of changes in model output due to variations in model input was also performed with these models. Graphical displays are used to show the variation in model output. Based on these results, sensitivity indices are compared to rank the importance of each input parameter.
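
A normalized sensitivity index of this kind is commonly computed as the fractional change in output per fractional change in input; the sketch below applies it to a toy screening-type expression, which is an assumption for illustration, not one of the five models.

```python
def sensitivity_index(model, base, param, delta=0.10):
    """Normalized sensitivity index: (dOutput/Output) / (dInput/Input)."""
    inputs = dict(base)
    y0 = model(**inputs)
    inputs[param] *= (1.0 + delta)      # perturb one input by 10%
    y1 = model(**inputs)
    return ((y1 - y0) / y0) / delta

# Toy screening-type expression: ground-level impact ~ Q / (u * h^2)
toy = lambda Q, u, h: Q / (u * h**2)
base = dict(Q=100.0, u=4.0, h=50.0)     # emission rate, wind speed, stack height
for p in base:
    print(f"SI({p}) = {sensitivity_index(toy, base, p):+.2f}")
```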


Evaluation of a Technique to Fill Missing Data, by J. Lee, (1985)

The missing data problem, which may be encountered during time series analysis of acid rain monitoring data, is the subject of this study. A method using linear interpolation, through averaging of the two nearest bracketing values, is proposed to fill missing data. Any attempt to fill missing data through an interpolation scheme may affect the general character of the data and its ability to forecast; it might produce different dynamic properties, such as trends, frequencies, and amplitudes of cyclical variations, compared to the original data. Therefore, the stability of the proposed method is examined statistically using historical acid rain data collected by the Electric Power Research Institute. The three-parameter gamma distribution was utilized in generating computer-simulated data representing missing value episodes. It was found that the proposed method for filling missing values does not affect the character of the data or the forecasting abilities of a time series model. Time series models for the sulfate data were also derived.
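
The proposed filling method can be sketched directly: for a single missing value it reduces to the average of the two bracketing observations, and for runs of missing values it is linear interpolation between them. The sample series below is hypothetical.

```python
import numpy as np

def fill_missing(series):
    """Fill NaNs by linear interpolation between the nearest valid values
    (for an isolated gap this is the average of the two bracketing values;
    gaps at either end are held at the nearest valid observation)."""
    s = np.asarray(series, dtype=float)
    valid = np.flatnonzero(~np.isnan(s))
    return np.interp(np.arange(len(s)), valid, s[valid])

sulfate = [2.1, np.nan, np.nan, 3.0, 2.7, np.nan, 2.2]   # hypothetical mg/L
print(fill_missing(sulfate))   # -> [2.1, 2.4, 2.7, 3.0, 2.7, 2.45, 2.2]
```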


Analysis of Acid Deposition due to Sulfur Dioxide Emissions from Ohio Sources from July 3 through July 8, 1978, by S. Mermall, (1983)

Acid rain is a public concern and a political issue in North America and Europe. Efforts are being made to develop computer models to estimate the effects of emissions on various receptor points. An improved event-type long range transport model has been formulated and programmed. Improvements in the model include: (i) a variable dry deposition velocity as a function of land use, season, and atmospheric stability, and (ii) wet deposition as a function of a weighted scavenging velocity.
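
Improvement (i) amounts to a lookup of deposition velocity by surface and stability category; a minimal sketch follows, with season omitted for brevity. The table values and category names are illustrative assumptions, not the model's actual table.

```python
# Illustrative dry deposition velocities for SO2 (cm/s) by land use and stability.
VD_TABLE = {
    ("forest",   "unstable"): 1.0, ("forest",   "stable"): 0.3,
    ("cropland", "unstable"): 0.8, ("cropland", "stable"): 0.2,
    ("water",    "unstable"): 0.5, ("water",    "stable"): 0.4,
}

def dry_deposition_flux(conc, land_use, stability):
    """Dry deposition flux F = vd * C (conc in ug/m^3, flux in ug/m^2/s)."""
    vd = VD_TABLE[(land_use, stability)] / 100.0   # cm/s -> m/s
    return vd * conc

print(dry_deposition_flux(20.0, "cropland", "stable"))   # -> 0.04 ug/m^2/s
```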

The period of July 3 to July 8, 1978 was studied because during this interval the eastern half of the United States was dominated by a stagnant high pressure system, and thus the potential for a long range transport episode existed. During this period the following were studied: (i) the relationships between synoptic conditions and continental concentrations of sulfur pollution, (ii) the sensitivity of ground level concentrations to dry deposition, and (iii) the effect that Ohio and its major sulfur dioxide sources had on acid deposition in North America.

The results of the study evaluate the sensitivity of ground level concentrations to the dry deposition velocity and the impact that Ohio's major sulfur dioxide sources had on acid deposition in sensitive regions during the period studied.

