Purkayastha, S., Milligan, J. R. and Bernhard, W. A. The Role of Hydration in the Distribution of Free Radical Trapping in Directly Ionized DNA. Radiat. Res. 166, 1–8 (2006).
The purpose of this study was to elucidate the role of hydration (Γ) in the distribution of free radical trapping in directly ionized DNA. Solid-state films of pUC18 (2686 bp) plasmids were hydrated to Γ in the range 2.5 ≤ Γ ≤ 22.5 mol water/mol nucleotide. Free radical yields, G(Σfr), measured by EPR at 4 K increased from 0.28 ± 0.01 μmol/J at Γ = 2.5 to 0.63 ± 0.01 μmol/J at Γ = 22.5. Based on a semi-empirical model of the free radical trapping events that follow the initial ionizations of the DNA components, we conclude that two-thirds of the holes formed on the inner solvation shell (Γ < 10) transfer to the sugar-phosphate backbone. Likewise, of the holes produced by direct ionization of the sugar-phosphate, about one-third are trapped by deprotonation as neutral sugar-phosphate radical species, while the remaining two-thirds transfer to the bases. This analysis provides the best measure to date for the probability of hole transfer (∼67%) into the base stack. It can thus be predicted that the distribution of holes formed in fully hydrated DNA at 4 K will be 78% on the bases and 22% on the sugar-phosphate. Adding the radicals due to electron attachment (confined to the pyrimidine bases), the distribution of all trapped radicals will be 89% on the bases and 11% on the sugar-phosphate backbone. This prediction is supported by partitioning results obtained from the high dose–response curves fitted to the two-component model. These results not only add to our understanding of how the holes redistribute after ionization but are also central to predicting the yield and location of strand breaks in DNA exposed to the direct effects of ionizing radiation.
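The partitioning arithmetic behind these predictions can be reproduced in a few lines. Note that the 34%/66% initial split of DNA-reaching holes used below is an illustrative assumption chosen to be consistent with the abstract's stated prediction, not a figure taken from the paper; the 2/3 transfer probability is the one quoted in the abstract.

```python
# Hole-transfer bookkeeping sketch for directly ionized, fully hydrated DNA.
# Assumed (illustrative) initial split of holes that reach the DNA:
# 34% created directly on the bases, 66% on the sugar-phosphate backbone
# (the latter including the two-thirds of inner-shell water holes that
# transfer to it, per the abstract).
base_initial = 0.34      # hypothetical fraction, chosen for illustration
sugar_phosphate = 0.66   # hypothetical fraction

P_TRANSFER = 2.0 / 3.0   # abstract: ~67% of sugar-phosphate holes move to the bases

holes_on_bases = base_initial + P_TRANSFER * sugar_phosphate
holes_on_sp = (1.0 - P_TRANSFER) * sugar_phosphate
print(f"holes: {holes_on_bases:.0%} bases, {holes_on_sp:.0%} sugar-phosphate")

# Electron attachment is confined to the (pyrimidine) bases; assuming equal
# numbers of trapped holes and electrons, average the two populations.
all_on_bases = (holes_on_bases + 1.0) / 2.0
all_on_sp = holes_on_sp / 2.0
print(f"all radicals: {all_on_bases:.0%} bases, {all_on_sp:.0%} sugar-phosphate")
```

With these inputs the sketch reproduces the abstract's 78%/22% hole distribution and 89%/11% all-radical distribution.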
Roginskaya, M., Bernhard, W. A. and Razskazovskiy, Y. Protection of DNA against Direct Radiation Damage by Complex Formation with Positively Charged Polypeptides. Radiat. Res. 166, 9–18 (2006).
Radioprotection of DNA from direct-type radiation damage by histones has been studied in model systems using complexes of positively charged polypeptides (PCPs) with DNA. PCPs bind to DNA via ionic interactions mimicking the mode of DNA-histone binding. Direct radiation damage to DNA in films of DNA-PCP complexes was quantified as unaltered base release, which correlates closely with DNA strand breaks. All types of PCPs tested protected DNA from radiation, with the maximum radioprotection being approximately 2.5-fold compared with non-complexed DNA. Conformational changes of the DNA induced by PCPs or repair of free radical damage on the DNA sugar moiety by PCPs are considered the most feasible mechanisms of radioprotection of DNA. The degree of radioprotection of DNA by polylysine (PL) increased dramatically on going from pure DNA to a molar ratio of PL monomer:DNA nucleotide ∼1:2, while a further increase in the PL:DNA ratio did not offer more radioprotection. This concentration dependence is in agreement with the model of PCP binding to DNA that assumes preferential binding of positively charged side groups to DNA phosphates in the minor groove, so that the maximum occupancy of all minor-groove PCP binding sites is at a molar ratio of PCP:DNA = 1:2.
Liu, Z., Mothersill, C. E., McNeill, F. E., Lyng, F. M., Byun, S. H., Seymour, C. B. and Prestwich, W. V. A Dose Threshold for a Medium Transfer Bystander Effect for a Human Skin Cell Line. Radiat. Res. 166, 19–23 (2006).
The existence of radiation-induced bystander effects mediated by diffusible factors is now accepted, but the mechanisms and precise behavior at low doses remain unclear. We exposed cells to γ-ray doses in the range 0.04 mGy–5 Gy, harvested the culture medium, and transferred it to unirradiated reporter cells. Calcium fluxes and clonogenic survival were measured in the recipients. We show evidence for a dose threshold around 2 mGy for the human skin cell line used with a suggestion of increased survival below that dose. Similar experiments using direct γ irradiation showed no reduction in survival until the dose exceeded 7 mGy. Preliminary data for neutrons where the γ-ray dose was kept below the bystander threshold do not show a significant bystander effect in the dose range 1–33 mGy. Even at around 1 Gy, where significant cell killing from direct irradiation was observed, neutrons produced no bystander response. The result may have implications for understanding the role of bystander effects at low doses.
Hamada, N., Funayama, T., Wada, S., Sakashita, T., Kakizaki, T., Ni, M. and Kobayashi, Y. LET-Dependent Survival of Irradiated Normal Human Fibroblasts and Their Descendents. Radiat. Res. 166, 24–30 (2006).
Evidence has accumulated showing that ionizing radiations persistently perturb genomic stability and induce delayed reproductive death in the progeny of survivors; however, the linear energy transfer (LET) dependence of these inductions has not been fully characterized. We have investigated the cell killing effectiveness of γ rays (0.2 keV/μm) and six different beams of heavy-ion particles with LETs ranging from 16.2 to 1610 keV/μm in normal human fibroblasts. First, irradiated confluent density-inhibited cultures were plated for primary colony formation, revealing that the relative biological effectiveness (RBE) based on the primary 10% survival dose peaked at 108 keV/μm and that the inactivation cross section increased proportionally up to 437 keV/μm. Second, cells harvested from primary colonies were plated for secondary colony formation, showing that delayed reproductive death occurred in a dose-dependent fashion. While the RBE based on the secondary 80% survival dose peaked at 108 keV/μm, very little LET dependence was observed in the RBE based on secondary survival at the primary 10% survival dose. Our present results indicate that delayed reproductive death arising only during secondary colony formation is independent of LET and is more likely to depend on initial damage fixed during primary colony formation.
Hamada, N., Schettino, G., Kashino, G., Vaid, M., Suzuki, K., Kodama, S., Vojnovic, B., Folkard, M., Watanabe, M., Michael, B. D. and Prise, K. M. Histone H2AX Phosphorylation in Normal Human Cells Irradiated with Focused Ultrasoft X Rays: Evidence for Chromatin Movement during Repair. Radiat. Res. 166, 31–38 (2006).
DNA repair within the cell nucleus is a dynamic process involving a close interaction between repair proteins and chromatin structure. Recent studies have indicated a quantitative relationship between DNA double-strand break induction and histone H2AX phosphorylation. The dynamics of this process within individual cell nuclei is unknown. To address this, we have used a novel focused ultrasoft X-ray microprobe that is capable of inducing localized DNA damage within a subnuclear area of intact cells with a 2.5-μm-diameter beam spot. The present investigation was undertaken to explore the influence of focused irradiation of individual nuclei with 1.49 keV characteristic aluminum K-shell (AlK) X rays on H2AX phosphorylation in normal human cells. Immunofluorescence analyses revealed that significant diffusion of the initial spots of clustered foci of phosphorylated H2AX occurred in a time-dependent fashion after exposure to AlK X rays. Irradiation under cooled conditions resulted in a reduction in the size of spots of clustered foci of phosphorylated H2AX as well as of individual phosphorylated H2AX foci. These findings strongly suggest that diffusion of the chromatin microenvironment occurs during the repair of DNA damage. We also found that AlK ultrasoft X rays (71 foci per gray) were 2.2-fold more effective at the initial formation of phosphorylated H2AX foci than conventional X rays (32 foci per gray), and that the time required to eliminate 50% of the initial number of foci was 3.4-fold longer in AlK-irradiated cells than in cells exposed to conventional X rays. For conventional X rays, we also report significant accumulation of larger-sized foci at longer times after irradiation.
Connolly, L., Lasarev, M., Jordan, R., Schwartz, J. L. and Turker, M. S. Atm Haploinsufficiency does not Affect Ionizing Radiation Mutagenesis in Solid Mouse Tissues. Radiat. Res. 166, 39–46 (2006).
Ataxia telangiectasia (AT) is a hereditary disease with autosomal recessive inheritance of ATM (ataxia telangiectasia mutation) alleles. AT is associated with severe sensitivity to ionizing radiation and a strong predisposition to develop cancer. A modest increase in cancer, particularly for the breast, has been shown for ATM carriers (i.e. heterozygotes), and a modest increase in radiation sensitivity has also been shown for those patients and their cells. However, the extent of these effects is unclear. Based on the well-established relationship between cancer and mutation, we used a mouse model for Atm haploinsufficiency to ask whether partial loss of Atm function could lead to an increased mutagenic response for solid tissues of mice exposed to radiation. The autosomal mouse Aprt gene was used as the mutational target and kidney and ear as the target tissues in B6D2F1 hybrids. Although induction of autosomal mutations was readily demonstrated in both tissues, a comparison of these data with those from an identical study performed with B6D2F1 mice that were wild-type for Atm (Cancer Res. 62, 1518–1523, 2002) revealed that Atm haploinsufficiency did not alter the radiation mutagenic response for the cells of either tissue. Moreover, no effect of Atm haploinsufficiency on reduced cellular viability due to radiation exposure was observed. The results demonstrate that Atm haploinsufficiency does not alter the radiation mutagenic response or decrease viability for normally quiescent cells in solid tissues of the mouse.
Kato, T. A., Nagasawa, H., Weil, M. M., Genik, P. C., Little, J. B. and Bedford, J. S. γ-H2AX Foci after Low-Dose-Rate Irradiation Reveal Atm Haploinsufficiency in Mice. Radiat. Res. 166, 47–54 (2006).
We have investigated the use of the γ-H2AX assay, reflecting the presence of DNA double-strand breaks (DSBs), as a possible means for identifying individuals who may be intermediate with respect to the extremes of hyper-radiosensitivity phenotypes. In this case, cells were studied from mice that were normal (Atm+/+), heterozygous (Atm+/−), or homozygous recessive (Atm−/−) for a truncating mutation in the Atm gene. After single acute (high-dose-rate) exposures, differences in mean numbers of γ-H2AX foci per cell between samples from Atm+/+ and Atm−/− mice were clear at nearly all sampling times, but at no sampling time was there a clear distinction for cells from Atm+/+ and Atm+/− mice. In contrast, under conditions of low-dose-rate irradiation at 10 cGy/h, appreciable differences in the levels of γ-H2AX foci per cell were observed in synchronized G1 cells derived from Atm+/− mice relative to cells from Atm+/+ mice. The levels were intermediate between those for cells from Atm+/+ and Atm−/− mice. After 24 h exposure at this dose rate, measurements in cells from four different mice for each genotype yielded mean frequencies of foci per cell of 1.77 ± 0.13 (SEM) for Atm+/+ cells, 4.75 ± 0.20 for the Atm+/− cells, and 11.10 ± 0.33 for the Atm−/− cells. The distributions of foci per G1 cell were not significantly different from Poisson. To the extent that variations in sensitivity with respect to γ-H2AX focus formation reflect variations in radiosensitivity for biological effects of concern, such as carcinogenesis, and that similar differences are seen for other genetic DNA DSB processing defects in general, this assay may provide a relatively straightforward means for distinguishing individuals who may be mildly hypersensitive to radiation such as we observed for Atm heterozygous mice.
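The claim that foci-per-cell distributions were consistent with Poisson is typically checked with a variance-to-mean (index of dispersion) test. A minimal sketch, using hypothetical counts rather than data from the study:

```python
from statistics import mean, variance

def dispersion_chi2(counts):
    """Index-of-dispersion test statistic against a Poisson model.

    Under the Poisson hypothesis, (n - 1) * s^2 / xbar is approximately
    chi-square distributed with n - 1 degrees of freedom.
    """
    n = len(counts)
    return (n - 1) * variance(counts) / mean(counts), n - 1

# Hypothetical foci-per-cell counts for illustration (not data from the study).
foci = [2, 1, 0, 3, 2, 1, 2, 0, 1, 2, 3, 1]
chi2, dof = dispersion_chi2(foci)
print(f"chi2 = {chi2:.2f} on {dof} df")
```

A statistic far above its degrees of freedom indicates overdispersion (clustered foci); a value near the degrees of freedom is consistent with Poisson.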
Igari, K., Igari, Y., Okazaki, R., Kato, F., Ootsuyama, A. and Norimura, T. The Delayed Manifestation of T-Cell Receptor (TCR) Variants in X-Irradiated Mice Depends on Trp53 Status. Radiat. Res. 166, 55–60 (2006).
The influence of Trp53 on the radiation-induced elevation of T-cell receptor (TCR) variant fractions was examined in splenic T lymphocytes of Trp53-proficient and -deficient mice. Wild-type Trp53+/+, heterozygous Trp53+/− and null Trp53−/− mice were exposed to 3 Gy of X rays at 8 weeks of age. The fraction of TCR-defective variants was measured at various times after irradiation. Initially, the TCR variant fraction increased rapidly and reached its maximum level at 9 days after irradiation before decreasing gradually. In Trp53+/+ and Trp53+/− mice, the TCR variant fraction fell to normal background levels at 16 and 20 weeks of age, respectively. In contrast, the TCR variant fraction of Trp53−/− mice failed to decrease to background levels during the observation period. Baseline levels were then maintained for approximately 60 weeks in the Trp53+/+ mice and approximately 40 weeks in the Trp53+/− mice. After the long flat period, a significant re-increase in the fraction of TCR variants was found after 72 weeks of age in the irradiated Trp53+/+ mice and after 44 weeks of age in the irradiated Trp53+/− mice. Measurement of the fraction of apoptotic cells in the spleen and thymus 4 h after X irradiation at these ages in Trp53+/+ and Trp53+/− mice demonstrated a reduction in apoptosis in the irradiated mice compared to the nonirradiated mice. This suggests that the delayed increase in TCR variants after irradiation is due to a reduction in Trp53-dependent apoptosis.
Takabatake, T., Fujikawa, K., Tanaka, S., Hirouchi, T., Nakamura, M., Nakamura, S., Tanaka, I. B., III, Ichinohe, K., Saitou, M., Kakinuma, S., Nishimura, M., Shimada, Y., Oghiso, Y. and Tanaka, K. Array-CGH Analyses of Murine Malignant Lymphomas: Genomic Clues to Understanding the Effects of Chronic Exposure to Low-Dose-Rate Gamma Rays on Lymphomagenesis. Radiat. Res. 166, 61–72 (2006).
We previously reported that mice chronically irradiated with low-dose-rate γ rays had significantly shorter mean life spans than nonirradiated controls. This life shortening appeared to be due primarily to earlier death due to malignant lymphomas in the irradiated groups (Tanaka et al., Radiat. Res. 160, 376–379, 2003). To elucidate the molecular pathogenesis of murine lymphomas after low-dose-rate irradiation, chromosomal aberrations in 82 malignant lymphomas from mice irradiated at a dose rate of 21 mGy/day and from nonirradiated mice were compared precisely by microarray-based comparative genomic hybridization (array-CGH) analysis. The array carried 667 BAC clones densely selected for the genomic regions not only of lymphoma-related loci but also of surface antigen receptors, enabling immunogenotyping. Frequent detection of the apparent loss of the Igh region on chromosome 12 suggested that most lymphomas in both groups were of B-cell origin. Array-CGH profiles showed a frequent gain of whole chromosome 15 in lymphomas predominantly from the irradiated group. The profiles also demonstrated copy-number imbalances of partial chromosomal regions. Partial gains on chromosomes 12, 14 and X were found in tumors from nonirradiated mice, whereas losses on chromosomes 4 and 14 were significantly associated with the irradiated group. These findings suggest that lymphomagenesis under the effects of continuous low-dose-rate irradiation is accelerated by a mechanism different from spontaneous lymphomagenesis that is characterized by the unique spectrum of chromosomal aberrations.
Wang, Y., Raffoul, J. J., Che, M., Doerge, D. R., Joiner, M. C., Kucuk, O., Sarkar, F. H. and Hillman, G. G. Prostate Cancer Treatment is Enhanced by Genistein In Vitro and In Vivo in a Syngeneic Orthotopic Tumor Model. Radiat. Res. 166, 73–80 (2006).
Pretreatment with genistein, a bioactive component of soy isoflavones, potentiated cell killing induced by radiation in human PC-3 prostate cancer cells in vitro. Using an orthotopic xenograft in nude mice, we demonstrated that genistein combined with prostate tumor irradiation caused greater inhibition of primary tumor growth and increased control of spontaneous metastasis to para-aortic lymph nodes, increasing mouse survival. Paradoxically, treatment with genistein alone increased metastasis to lymph nodes. This observation is of concern in relation to soy-based clinical trials for cancer patients. To address whether this observation is because nude mice have an impaired immune system, these studies were repeated in orthotopic RM-9 prostate tumors in syngeneic C57BL/6 mice. The combination of genistein with radiation in this model also caused a greater inhibition of primary tumor growth and spontaneous metastasis to regional para-aortic lymph nodes, whereas treatment with genistein alone showed a trend to increased lymph node metastasis. Data from the syngeneic and xenograft models are comparable and indicate that the combination of genistein with radiotherapy is more effective and safer for prostate cancer treatment than genistein alone, which promotes metastatic spread to regional lymph nodes.
Nievaart, V. A., Moss, R. L., Kloosterman, J. L., van der Hagen, T. H. J. J., van Dam, H., Wittig, A., Malago, M. and Sauerwein, W. Design of a Rotating Facility for Extracorporal Treatment of an Explanted Liver with Disseminated Metastases by Boron Neutron Capture Therapy with an Epithermal Neutron Beam. Radiat. Res. 166, 81–88 (2006).
In 2001, at the TRIGA reactor of the University of Pavia (Italy), a patient suffering from diffuse liver metastases from an adenocarcinoma of the sigmoid was successfully treated by boron neutron capture therapy (BNCT). The procedure involved boron infusion prior to hepatectomy, irradiation of the explanted liver at the thermal column of the reactor, and subsequent reimplantation. A complete response was observed. This encouraging outcome stimulated the Essen/Petten BNCT group to investigate whether such an extracorporal irradiation could be performed at the BNCT irradiation facility at the HFR Petten (The Netherlands), which has irradiation characteristics very different from those of the Pavia facility. A computational study has been carried out. A rotating PMMA container holding the liver, surrounded by PMMA and graphite, is simulated using the Monte Carlo code MCNP. Due to the rotation and neutron moderation of the PMMA container, the initial epithermal neutron beam provides a nearly homogeneous thermal neutron field in the liver. The main conditions for treatment as reported from the Pavia experiment, i.e. a thermal neutron fluence of 4 × 1012 ± 20% cm−2, can be closely met at the HFR in an acceptable time, which, depending on the defined conditions, is between 140 and 180 min.
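The quoted treatment time follows directly from dividing the target fluence by the thermal flux delivered in the liver. A rough consistency check, with the flux value being an illustrative assumption rather than a figure from the paper:

```python
# Treatment time = target thermal neutron fluence / assumed thermal flux.
TARGET_FLUENCE = 4.0e12   # thermal neutrons per cm^2 (from the abstract)
ASSUMED_FLUX = 4.2e8      # n cm^-2 s^-1 -- an illustrative assumption,
                          # not a figure from the paper

minutes = TARGET_FLUENCE / ASSUMED_FLUX / 60.0
print(f"treatment time ~ {minutes:.0f} min")
```

A flux of a few times 10^8 n cm^-2 s^-1 lands the irradiation time inside the reported 140–180 min window.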
Livingston, G. K., Falk, R. B. and Schmid, E. Effect of Occupational Radiation Exposures on Chromosome Aberration Rates in Former Plutonium Workers. Radiat. Res. 166, 89–97 (2006).
A fluorescence in situ hybridization (FISH) method was used to measure chromosome aberration rates in lymphocytes of 30 retired plutonium workers with combined internal and external radiation doses greater than 0.5 Sv along with 17 additional workers with predominantly external doses below 0.1 Sv. The former group was defined as high-dose and the latter as low-dose with respect to occupational radiation exposure. The two groups were compared to each other and also to 21 control subjects having no history of occupational radiation exposure. Radiation exposures to the high-dose group were primarily the result of internal depositions of plutonium and its radioactive decay products resulting from various work-related activities and accidents. The median external dose for the high-dose group was 280 mSv (range 10–730) compared to a median of 22 mSv (range 10–76) for the low-dose group. The median internal dose to the bone marrow for the high-dose group was 168 mSv (range 29–20,904) while that of the low-dose group was considered negligible. Over 200,000 metaphase cells were analyzed for chromosome aberrations by painting pairs 1, 4 and 12 in combination with a pancentromeric probe. Additionally, 136,000 binucleated lymphocytes were analyzed for micronuclei in parallel cultures to assess mitotic abnormalities arising from damaged chromosomes. The results showed that the frequency of structural aberrations affecting any of the painted chromosomes in the high-dose group correlated with the bone marrow dose but not with the external dose. In contrast, the frequency of micronuclei did not vary significantly between the study groups. The total translocation frequency per genome equivalent × 10−3 ± SE was 4.0 ± 0.6, 9.0 ± 1.1 and 17.0 ± 2.1 for the control, low-dose and high-dose groups, respectively. 
Statistical analysis of the data showed that the frequency of total translocations and S-cells correlated with the bone marrow dose, with P values of 0.005 and 0.004, respectively. In contrast, these two end points did not correlate with the external dose, with P values of 0.45 and 0.39, respectively. In conclusion, elevated rates of stable chromosome aberrations were found in lymphocytes of former workers decades after plutonium intakes, providing evidence that chronic irradiation of hematopoietic precursor cells in the bone marrow induces cytogenetically altered cells that persist in peripheral blood.
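Translocation frequencies observed with a subset of painted chromosomes, as in this study, are conventionally scaled to whole-genome equivalents with the Lucas formula, F_G = F_p / [2.05 fp(1 − fp)], where fp is the painted fraction of the genome. A sketch with illustrative inputs (the 18% coverage for chromosomes 1, 4 and 12 is an approximation, not a value from the paper):

```python
def genome_equivalent(f_painted, fp):
    """Scale a painted-chromosome translocation frequency to a
    whole-genome equivalent (Lucas formula).

    f_painted : translocations per cell detected within the painted pairs
    fp        : fraction of the genome covered by the paints
    """
    return f_painted / (2.05 * fp * (1.0 - fp))

# Illustrative numbers only: chromosomes 1, 4 and 12 cover roughly 18%
# of the genome (an approximation, not a figure from the paper).
fg = genome_equivalent(0.003, 0.18)
print(f"genome-equivalent frequency: {fg:.4f} per cell")
```

The correction roughly triples a painted-fraction frequency at this coverage, which is why published rates are quoted "per genome equivalent".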
Boice, Jr., J. D., Cohen, S. S., Mumma, M. T., Ellis, E. D., Eckerman, K. F., Leggett, R. W., Boecker, B. B., Brill, A. B. and Henderson, B. E. Mortality among Radiation Workers at Rocketdyne (Atomics International), 1948–1999. Radiat. Res. 166, 98–115 (2006).
A retrospective cohort mortality study was conducted of workers engaged in nuclear technology development and employed for at least 6 months at Rocketdyne (Atomics International) facilities in California, 1948–1999. Lifetime occupational doses were derived from company records and linkages with national dosimetry data sets. International Commission on Radiation Protection (ICRP) biokinetic models were used to estimate radiation doses to 16 organs or tissues after the intake of radionuclides. Standardized mortality ratios (SMRs) compared the observed numbers of deaths with those expected in the general population of California. Cox proportional hazards models were used to evaluate dose–response trends over categories of cumulative radiation dose, combining external and internal organ-specific doses. There were 5,801 radiation workers, including 2,232 monitored for radionuclide intakes. The mean dose from external radiation was 13.5 mSv (maximum 1 Sv); the mean lung dose from external and internal radiation combined was 19.0 mSv (maximum 3.6 Sv). Vital status was determined for 97.6% of the workers of whom 25.3% (n = 1,468) had died. The average period of observation was 27.9 years. All cancers taken together (SMR 0.93; 95% CI 0.84–1.02) and all leukemia excluding chronic lymphocytic leukemia (CLL) (SMR 1.21; 95% CI 0.69–1.97) were not significantly elevated. No SMR was significantly increased for any cancer or for any other cause of death. The Cox regression analyses revealed no significant dose–response trends for any cancer. For all cancers excluding leukemia, the RR at 100 mSv was estimated as 1.00 (95% CI 0.81–1.24), and for all leukemia excluding CLL it was 1.34 (95% CI 0.73–2.45). The nonsignificant increase in leukemia (excluding CLL) was in accord with expectation from other radiation studies, but a similar nonsignificant increase in CLL (a malignancy not found to be associated with radiation) tempers a causal interpretation. 
Radiation exposure has not caused a detectable increase in cancer deaths in this population, but results are limited by small numbers and relatively low career doses.
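The SMRs quoted throughout this abstract are observed-to-expected death ratios; the confidence intervals are typically computed with Byar's approximation to the Poisson exact limits. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio with Byar's approximate 95% CI.

    observed : number of deaths seen in the cohort (assumed Poisson)
    expected : deaths expected from reference-population rates
    """
    smr = observed / expected
    lower = observed * (1 - 1/(9*observed)
                        - z/(3*math.sqrt(observed)))**3 / expected
    upper = (observed + 1) * (1 - 1/(9*(observed + 1))
                              + z/(3*math.sqrt(observed + 1)))**3 / expected
    return smr, lower, upper

# Hypothetical counts for illustration.
print(smr_with_ci(93, 100))
```

An SMR whose interval spans 1.0, as for all cancers combined here, is not significantly different from the general-population expectation.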
Schüz, J., Böhler, E., Schlehofer, B., Berg, G., Schlaefer, K., Hettinger, I., Kunna-Grass, K., Wahrendorf, J. and Blettner, M. Radiofrequency Electromagnetic Fields Emitted from Base Stations of DECT Cordless Phones and the Risk of Glioma and Meningioma (Interphone Study Group, Germany). Radiat. Res. 166, 116–119 (2006).
The objective of this study was to test the hypothesis that exposure to continuous low-level radiofrequency electromagnetic fields (RF EMFs) increases the risk of glioma and meningioma. Participants in a population-based case-control study in Germany on the risk of brain tumors in relation to cellular phone use were 747 incident brain tumor cases between the ages of 30 and 69 years and 1494 matched controls. The exposure measure of this analysis was the location of a base station of a DECT (Digital Enhanced Cordless Telecommunications) cordless phone close to the bed, which was used as a proxy for continuous low-level exposure to RF EMFs during the night. Estimated odds ratios were 0.82 (95% confidence interval: 0.29–2.33) for glioma and 0.83 (0.29–2.36) for meningioma. There was also no increasing risk observed with duration of exposure to DECT cordless phone base stations. Although the study was limited due to the small number of exposed subjects, it is still a first indication that residential low-level exposure to RF EMFs may not pose a higher risk of brain tumors.
Brill, A. B., Stabin, M., Bouville, A. and Ron, E. Normal Organ Radiation Dosimetry and Associated Uncertainties in Nuclear Medicine, with Emphasis on Iodine-131. Radiat. Res. 166, 128–140 (2006).
In many medical applications involving the administration of iodine-131 (131I) in the form of iodide (I−), most of the dose is delivered to the thyroid gland. To reliably estimate the thyroid absorbed dose, the following data are required: the thyroid gland size (i.e. mass), the fractional uptake of 131I by the thyroid, the spatial distribution of 131I within the thyroid, and the length of time 131I is retained in the thyroid before it is released back to blood, distributed in other organs and tissues, and excreted from the body. Estimation of absorbed dose to nonthyroid tissues likewise requires knowledge of the time course of activity in each organ. Such data are rarely available, however, and therefore dose calculations are generally based on reference models. The MIRD and ICRP have published metabolic models and have calculated absorbed doses per unit intake for many nuclides and radioactive pharmaceuticals. Given the activity taken into the body, one can use such models and make reasonable calculations for average organ doses. When normal retention and excretion pathways are altered, the baseline models need to be modified, and the resulting organ dose estimates are subject to larger errors. This paper describes the historical evolution of radioactive isotopes in medical diagnosis and therapy. We nonmathematically summarize the methods used in current practice to estimate absorbed dose and summarize some of the risk data that have emerged from medical studies of patients with special attention to dose and effects observed in those who received 131I-iodide in diagnosis and/or therapy.
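The reference-model calculation described here reduces, for a single compartment, to cumulated activity times energy absorbed per decay divided by organ mass: D = Ã · Δ / m, with Ã = A0 · f · T_eff / ln 2 for mono-exponential retention. A simplified MIRD-style sketch; all parameter values below are illustrative assumptions, not ICRP or MIRD reference values:

```python
import math

def thyroid_dose_gy(a0_bq, uptake_frac, t_eff_days, mass_g, energy_j_per_decay):
    """Single-compartment absorbed-dose estimate (MIRD-style).

    Cumulated activity for mono-exponential retention:
      A_tilde = A0 * f * T_eff / ln(2)            [Bq * s]
    Absorbed dose:
      D = A_tilde * (energy absorbed per decay) / mass   [Gy]
    """
    t_eff_s = t_eff_days * 86400.0
    a_tilde = a0_bq * uptake_frac * t_eff_s / math.log(2)
    return a_tilde * energy_j_per_decay / (mass_g / 1000.0)

# Illustrative inputs: 1 MBq administered, 20% thyroid uptake, 7-day
# effective half-life, 20-g gland, ~3e-14 J absorbed per decay (assumed
# average for the 131I beta plus locally absorbed photon energy).
dose = thyroid_dose_gy(1e6, 0.20, 7.0, 20.0, 3.0e-14)
print(f"{dose:.2f} Gy")
```

The gland mass, uptake fraction and effective half-life are exactly the patient-specific quantities the paper notes are rarely available, which is why reference models are used in practice.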
Stovall, M., Weathers, R., Kasper, C., Smith, S. A., Travis, L., Ron, E. and Kleinerman, R. Dose Reconstruction for Therapeutic and Diagnostic Radiation Exposures: Use in Epidemiological Studies. Radiat. Res. 166, 141–157 (2006).
This paper describes methods developed specifically for reconstructing individual organ- and tissue-absorbed dose of radiation from past exposures from medical treatments and procedures for use in epidemiological studies. These methods have evolved over the past three decades and have been applied to a variety of medical exposures including external-beam radiation therapy and brachytherapy for malignant and benign diseases as well as diagnostic examinations. The methods used for estimating absorbed dose to organs in and outside the defined treatment volume generally require archival data collection, abstraction and review, and phantom measurements to simulate past exposure conditions. Three techniques are used to estimate doses from radiation therapy: (1) calculation in three-dimensional mathematical computer models using an extensive database of out-of-beam doses measured in tissue-equivalent materials, (2) measurement in anthropomorphic phantoms constructed of tissue-equivalent material, and (3) calculation using a three-dimensional treatment-planning computer. For diagnostic exposures, doses are estimated from published data and software based on Monte Carlo techniques. We describe and compare these methods of dose estimation and discuss uncertainties in estimated organ doses and potential for future improvement. Seven epidemiological studies are discussed to illustrate the methods.
Bouville, A., Chumak, V. V., Inskip, P. D., Kryuchkov, V. and Luckyanov, N. The Chornobyl Accident: Estimation of Radiation Doses Received by the Baltic and Ukrainian Cleanup Workers. Radiat. Res. 166, 158–167 (2006).
The Chornobyl accident of April 26, 1986 exposed a few hundred emergency workers to high doses, up to 16 Gy, during the first day after the explosion, resulting in acute radiation syndrome. Subsequently, several hundred thousand cleanup workers were sent to the Chornobyl power plant to mitigate the consequences of the accident. Depending on the nature of the work to be carried out, the cleanup workers were sent for periods ranging from several minutes to several months. The average dose from external radiation exposure that was received by the cleanup workers was about 170 mGy in 1986 and decreased from year to year. The radiation exposure was mainly due to external irradiation from γ-ray-emitting radionuclides and was relatively homogeneous over all organs and tissues of the body. To assess the possible health consequences of external irradiation at relatively low dose rates, the U.S. National Cancer Institute is involved in two studies of Chornobyl cleanup workers: (1) a study of cancer incidence and thyroid disease among Estonian, Latvian and Lithuanian workers, and (2) a study of leukemia and other related blood diseases among Ukrainian workers. After an overview of the sources of exposure and of the radiation doses received by the cleanup workers, a description of the efforts made to estimate individual doses in the Baltic and Ukrainian studies is presented.
Gilbert, E. S., Thierry-Chef, I., Cardis, E., Fix, J. J. and Marshall, M. External Dose Estimation for Nuclear Worker Studies. Radiat. Res. 166, 168–173 (2006).
Epidemiological studies of nuclear workers are an important source of direct information on the health effects of exposure to radiation at low doses and low dose rates. These studies have the important advantage of doses that have been measured objectively through the use of personal dosimeters. However, to make valid comparisons of worker-based estimates with those obtained from data on A-bomb survivors or persons exposed for medical reasons, attention must be given to potential biases and uncertainties in dose estimates. This paper discusses sources of error in worker dose estimates and describes efforts that have been made to quantify these errors. Of particular importance is the extensive study of errors in dosimetry that was conducted as part of a large collaborative study of nuclear workers in 15 countries being coordinated by the International Agency for Research on Cancer. The study, which focused on workers whose dose was primarily from penetrating γ radiation in the range 100 keV to 3 MeV, included (1) obtaining information on dosimetry practices and radiation characteristics through the use of questionnaires; (2) two detailed studies of exposure conditions, one of nuclear power plants and the other of mixed activity facilities; and (3) a study of dosimeter response characteristics that included laboratory testing of 10 dosimeter designs commonly used historically. Based on these efforts, facility- and calendar year-specific adjustment factors have been developed, which will allow risks to be expressed as functions of organ doses with reasonable confidence.
Simon, S. L., Weinstock, R. M., Doody, M. M., Neton, J., Wenzl, T., Stewart, P., Mohan, A. K., Yoder, C., Freedman, M., Hauptmann, M., Bouville, A., Cardarelli, J., Feng, H. A. and Linet, M. Estimating Historical Radiation Doses to a Cohort of U.S. Radiologic Technologists. Radiat. Res. 166, 174–192 (2006).
Data have been collected and physical and statistical models have been constructed to estimate unknown occupational radiation doses among 90,000 members of the U.S. Radiologic Technologists cohort who responded to a baseline questionnaire during the mid-1980s. Since the availability of radiation dose data differed by calendar period, different models were developed and applied for years worked before 1960, 1960– 1976 and 1977–1984. The dose estimation used available film-badge measurements (approximately 350,000) for individual cohort members, information provided by the technologists on their work history and protection practices, and measurement and other data derived from the literature. The dosimetry model estimates annual and cumulative occupational badge doses (personal dose equivalent) for each technologist for each year worked from 1916 through 1984 as well as absorbed doses to organs and tissues including bone marrow, female breast, thyroid, ovary, testes, lung and skin. Assumptions have been made about critical variables including average energy of X rays, use of protective aprons, position of film badges, and minimum detectable doses. Uncertainty of badge and organ doses was characterized for each year of each technologist's working career. Monte Carlo methods were used to generate estimates of cumulative organ doses for preliminary cancer risk analyses. The models and predictions presented here, while continuing to be modified and improved, represent one of the most comprehensive dose reconstructions undertaken to date for a large cohort of medical radiation workers.
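The Monte Carlo generation of cumulative organ doses can be sketched as follows. The lognormal badge-dose parameters, career length, and organ conversion factor below are illustrative assumptions, not values from the cohort dosimetry.

```python
import math
import random

def cumulative_organ_dose(annual_gm_mGy, gsd, organ_factor, n_trials=5000, seed=1):
    """Monte Carlo sketch: for each trial, sample every year's badge dose from
    a lognormal distribution (per-year geometric mean, common geometric SD),
    convert badge dose to organ dose with a multiplicative factor, and sum
    over the career. Returns the mean and 95th percentile over trials."""
    rng = random.Random(seed)
    mus = [math.log(g) for g in annual_gm_mGy]
    sigma = math.log(gsd)
    totals = sorted(
        sum(math.exp(rng.gauss(mu, sigma)) for mu in mus) * organ_factor
        for _ in range(n_trials)
    )
    mean = sum(totals) / n_trials
    p95 = totals[int(0.95 * n_trials)]
    return mean, p95

# Hypothetical career: 10 years at a geometric-mean badge dose of 5 mGy/year,
# GSD 1.8, and an assumed badge-to-breast-dose conversion factor of 0.7.
mean, p95 = cumulative_organ_dose([5.0] * 10, 1.8, 0.7)
```

The real dosimetry system conditions these distributions on each technologist's work history, apron use, and badge position, and characterizes uncertainty year by year.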
Puskin, J. S. and James, A. C. Radon Exposure Assessment and Dosimetry Applied to Epidemiology and Risk Estimation. Radiat. Res. 166, 193–208 (2006).
Epidemiological studies of underground miners provide the primary basis for radon risk estimates for indoor exposures as well as mine exposures. A major source of uncertainty in these risk estimates is the uncertainty in radon progeny exposure estimates for the miners. Often the exposure information is very incomplete, and exposure estimation must rely on interpolations, extrapolations and reconstruction of mining conditions decades before, which might differ markedly from those in more recent times. Many of the measurements that were carried out—commonly for health protection purposes—are not likely to be representative of actual exposures. Early monitoring was often of radon gas rather than of the progeny, so that quantifying exposure requires an estimate of the equilibrium fraction under the conditions existing at the time of the reported measurement. In addition to the uncertainty in radon progeny exposure, doses from γ radiation, inhaled radioactive dust, and thoron progeny have historically been neglected. These may induce a systematic bias in risk estimates and add to the overall uncertainty in risk estimates derived from the miner studies. Unlike other radiogenic cancer risk estimates, numerical risk estimates derived for radon from epidemiology are usually expressed as a risk per unit exposure rather than as a risk per unit dose to a target tissue. Nevertheless, dosimetric considerations are important when trying to compare risks under different exposure conditions, e.g. in mines and homes. A recent comparative assessment of exposure conditions indicates that, for equal radon progeny exposures, the dose in homes is about the same as in mines. Thus, neglecting other possible differences, such as the presence in mines of other potential airborne carcinogens, the risk per unit progeny exposure should be about the same for indoor exposures as observed in miners. 
Results of case–control studies of lung cancer incidence in homes monitored for radon are reasonably consistent with what would be projected from miner studies. Measurements of exposure in these indoor case–control studies rely on different types of detectors than those used in mines, and the estimates of exposure are again a major source of uncertainty in these studies.
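Converting a radon gas measurement to a progeny exposure, as discussed above, requires an assumed equilibrium fraction. The sketch below uses the standard definitions (100 pCi/L of radon in full equilibrium with its short-lived progeny corresponds to 1 working level; one working month is 170 h); the concentration and occupancy values are illustrative.

```python
def working_level(radon_pCi_per_L, equilibrium_fraction):
    """Progeny concentration in working levels (WL): by definition,
    100 pCi/L of radon at 100% equilibrium corresponds to 1 WL."""
    return radon_pCi_per_L * equilibrium_fraction / 100.0

def working_level_months(wl, hours_exposed):
    """Cumulative exposure in WLM; one working month is taken as 170 h."""
    return wl * hours_exposed / 170.0

# Example: mine air at 300 pCi/L radon with an assumed equilibrium
# fraction of 0.4, breathed over 2,000 working hours.
wl = working_level(300, 0.4)                       # 1.2 WL
print(round(working_level_months(wl, 2000), 1))    # 14.1 WLM
```

The sensitivity of the result to the equilibrium fraction is exactly the uncertainty the abstract describes: the same gas measurement at an assumed fraction of 0.2 would yield half the exposure estimate.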
Beck, H. L., Anspaugh, L. R., Bouville, A. and Simon, S. L. Review of Methods of Dose Estimation for Epidemiological Studies of the Radiological Impact of Nevada Test Site and Global Fallout. Radiat. Res. 166, 209–218 (2006).
Methods to assess radiation doses from nuclear weapons test fallout have been used to estimate doses to populations and individuals in a number of studies. However, only a few epidemiology studies have relied on fallout dose estimates. Though the methods for assessing doses from local and regional fallout are similar to those for global fallout, there are significant differences in predicted doses and contributing radionuclides depending on the source of the fallout, e.g. whether the nuclear debris originated in Nevada at the U.S. nuclear test site or at other locations worldwide. The sparse historical measurement data available are generally sufficient to estimate external exposure doses reasonably well. However, reconstruction of doses to body organs from ingestion and inhalation of radionuclides is significantly more complex and is almost always more uncertain than external dose estimates. Internal dose estimates are generally based on estimates of the ground deposition per unit area of specific radionuclides and subsequent transport of radionuclides through the food chain. A number of technical challenges to correctly modeling deposition of fallout under wet and dry atmospheric conditions still remain, particularly at close-in locations where sizes of deposited particles vary significantly over modest changes in distance. This paper summarizes the various methods of dose estimation from weapons test fallout and the most important dose assessment and epidemiology studies that have relied on those methods.
Cullings, H. M., Fujita, S., Funamoto, S., Grant, E. J., Kerr, G. D. and Preston, D. L. Dose Estimation for Atomic Bomb Survivor Studies: Its Evolution and Present Status. Radiat. Res. 166, 219–254 (2006).
In the decade after the bombings of Hiroshima and Nagasaki, several large cohorts of survivors were organized for studies of radiation health effects. The U.S. Atomic Bomb Casualty Commission (ABCC) and its U.S./Japan successor, the Radiation Effects Research Foundation (RERF), have performed continuous studies since then, with extensive efforts to collect data on survivor locations and shielding and to create systems to estimate individual doses from the bombs' neutrons and γ rays. Several successive systems have been developed by extramural working groups and collaboratively implemented by ABCC and RERF investigators. We describe the cohorts and the history and evolution of dose estimation from early efforts through the newest system, DS02, emphasizing the technical development and use of DS02. We describe procedures and data developed at RERF to implement successive systems, including revised rosters of survivors, development of methods to calculate doses for some classes of persons not fitting criteria of the basic systems, and methods to correct for bias arising from errors in calculated doses. We summarize calculated doses and illustrate their change and elaboration through the various systems for a hypothetical example case in each city. We conclude with a description of current efforts and plans for further improvements.
Degteva, M. O., Vorobiova, M. I., Tolstykh, E. I., Shagina, N. B., Shishkina, E. A., Anspaugh, L. R., Napier, B. A., Bougrov, N. G., Shved, V. A. and Tokareva, E. E. Development of an Improved Dose Reconstruction System for the Techa River Population Affected by the Operation of the Mayak Production Association. Radiat. Res. 166, 255–270 (2006).
The Techa River Dosimetry System (TRDS) has been developed to provide estimates of dose received by approximately 30,000 members of the Extended Techa River Cohort (ETRC). Members of the ETRC were exposed beginning in 1949 to significant levels of external and internal (mainly from 90Sr) dose but at low to moderate dose rates. Members of this cohort are being studied in an effort to test the hypothesis that exposure at low to moderate dose rates has the same ability to produce stochastic health effects as exposure at high dose rates. The current version of the TRDS is known as TRDS-2000 and is the subject of this paper. The estimated doses from 90Sr are supported strongly by ∼30,000 measurements made with a tooth β-particle counter, measurements of bones collected at autopsy, and ∼38,000 measurements made with a special whole-body counter that detects the bremsstrahlung from 90Y. The median doses to the red bone marrow and the bone surface are 0.21 and 0.37 Gy, respectively. The maximum doses to the red bone marrow and bone surface are 2.0 and 5.2 Gy, respectively. Distributions of dose to other organs are provided and are lower than the values given above. Directions for future work are discussed.
Likhtarev, I., Bouville, A., Kovgan, L., Luckyanov, N., Voillequé, P. and Chepurny, M. Questionnaire- and Measurement-Based Individual Thyroid Doses in Ukraine Resulting from the Chornobyl Nuclear Reactor Accident. Radiat. Res. 166, 271–286 (2006).
The U.S. National Cancer Institute (NCI), in cooperation with the Ministries of Health of Belarus and of Ukraine, is involved in epidemiological studies of thyroid diseases presumably related to the Chornobyl accident, which occurred in Ukraine on 26 April 1986. Within the framework of these studies, individual thyroid absorbed doses, as well as uncertainties, have been estimated for all members of the cohorts (13,215 Ukrainians and 11,918 Belarusians), who were selected from the large group of children aged 0 to 18 whose thyroids were monitored for γ radiation within a few weeks after the accident. Information on the residence history and dietary habits of each cohort member was obtained during personal interviews. The methodology used to estimate the thyroid absorbed doses resulting from intakes of 131I by the Ukrainian cohort subjects is described. The model of thyroid dose estimation is run in two modes: deterministic and stochastic. In the stochastic mode, the model is run 1,000 times for each subject using a Monte Carlo procedure. The geometric means of the individual thyroid absorbed doses obtained in the stochastic mode range from 0.0006 to 42 Gy. The arithmetic and geometric means of these individual thyroid absorbed doses over the entire cohort are 0.68 and 0.23 Gy, respectively. On average, the individual thyroid dose estimates obtained in the deterministic mode are about the same as the geometric mean doses obtained in the stochastic mode, while the arithmetic mean thyroid absorbed doses obtained in the stochastic mode are about 20% higher than those obtained in the deterministic mode. The distributions of the 1000 values of the individual thyroid absorbed dose estimates are found to be approximately lognormal, with geometric standard deviations ranging from 1.6 to 5.0 for most cohort subjects. For the time being, only the thyroid doses resulting from intakes of 131I have been estimated for all subjects. 
Future work will include the estimation of the contributions to the thyroid doses resulting from external irradiation and from intakes of short-lived (133I and 132Te) and long-lived (134Cs and 137Cs) radionuclides, as well as efforts to reduce the uncertainties.
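The relation reported above between the deterministic estimates, the stochastic geometric means, and the roughly 20%-higher stochastic arithmetic means is a property of the lognormal dose distributions. A minimal version of the stochastic mode, with illustrative parameter values, makes this concrete:

```python
import math
import random

def stochastic_mode(gm_Gy, gsd, n=1000, seed=7):
    """Sketch of the stochastic mode: draw n lognormal realizations of one
    subject's thyroid dose (geometric mean gm_Gy, geometric SD gsd), as in
    the paper's 1,000-run Monte Carlo procedure, and summarize them."""
    rng = random.Random(seed)
    doses = [math.exp(rng.gauss(math.log(gm_Gy), math.log(gsd)))
             for _ in range(n)]
    arithmetic_mean = sum(doses) / n
    geometric_mean = math.exp(sum(math.log(d) for d in doses) / n)
    return arithmetic_mean, geometric_mean

# For a lognormal, the arithmetic mean exceeds the geometric mean by a
# factor exp(0.5 * ln(GSD)^2), which is why the cohort arithmetic mean
# (0.68 Gy) sits well above the geometric mean (0.23 Gy).
am, gm = stochastic_mode(0.23, 2.5)
```

The illustrative GSD of 2.5 lies within the 1.6-5.0 range reported for most cohort subjects.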
Kleinerman, R. A., Romanyukha, A. A., Schauer, D. A. and Tucker, J. D. Retrospective Assessment of Radiation Exposure Using Biological Dosimetry: Chromosome Painting, Electron Paramagnetic Resonance and the Glycophorin A Mutation Assay. Radiat. Res. 166, 287–302 (2006).
Biological monitoring of dose can contribute important, independent estimates of cumulative radiation exposure in epidemiological studies, especially in studies in which the physical dosimetry is lacking. Three biodosimeters that have been used in epidemiological studies to estimate past radiation exposure from external sources will be highlighted: chromosome painting or FISH (fluorescence in situ hybridization), the glycophorin A somatic mutation assay (GPA), and electron paramagnetic resonance (EPR) with teeth. All three biodosimeters have been applied to A-bomb survivors, Chornobyl cleanup workers, and radiation workers. Each biodosimeter has unique advantages and limitations depending upon the level and type of radiation exposure. Chromosome painting has been the most widely applied biodosimeter in epidemiological studies of past radiation exposure, and results of these studies provide evidence that dose-related translocations persist for decades. EPR tooth dosimetry has been used to validate dose models of acute and chronic radiation exposure, although the present requirement of extracted teeth has been a disadvantage. GPA has been correlated with physically based radiation dose after high-dose, acute exposures but not after low-dose, chronic exposures. Interindividual variability appears to be a limitation for both chromosome painting and GPA. Both of these techniques can be used to estimate the level of past radiation exposure to a population, whereas EPR can provide individual dose estimates of past exposure. This paper will review each of these three biodosimeters and compare their application in selected epidemiological studies.
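Translocation-based dose estimation of the kind described above typically inverts a linear-quadratic calibration curve, Y(D) = c + αD + βD², fitted to in vitro irradiation data. The coefficients below are hypothetical placeholders, not a published calibration:

```python
import math

# Hypothetical calibration coefficients for translocation yield per cell:
# Y(D) = C + ALPHA*D + BETA*D**2, with D in Gy. A real calibration would be
# fitted to in vitro dose-response data for the relevant radiation quality.
C, ALPHA, BETA = 0.001, 0.02, 0.06

def expected_yield(dose_Gy):
    """Translocations per cell predicted by the calibration curve."""
    return C + ALPHA * dose_Gy + BETA * dose_Gy ** 2

def dose_from_yield(y):
    """Invert the calibration: solve BETA*D**2 + ALPHA*D + (C - y) = 0
    for the positive root, giving the dose estimate in Gy."""
    disc = ALPHA ** 2 - 4 * BETA * (C - y)
    return (-ALPHA + math.sqrt(disc)) / (2 * BETA)

print(round(dose_from_yield(expected_yield(1.5)), 3))  # 1.5
```

For retrospective studies the observed yield would also be corrected for background translocation frequency, which rises with age and varies between individuals, the interindividual variability noted in the abstract.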
Schafer, D. W. and Gilbert, E. S. Some Statistical Implications of Dose Uncertainty in Radiation Dose–Response Analyses. Radiat. Res. 166, 303–312 (2006).
Statistical dose–response analyses in radiation epidemiology can produce misleading results if they fail to account for radiation dose uncertainties. While dosimetries may differ substantially depending on the ways in which the subjects were exposed, the statistical problems typically involve a predominantly linear dose–response curve, multiple sources of uncertainty, and uncertainty magnitudes that are best characterized as proportional rather than additive. We discuss some basic statistical issues in this setting, including the bias and shape distortion induced by classical and Berkson uncertainties, the effect of uncertain dose-prediction model parameters on estimated dose–response curves, and some notes on statistical methods for dose–response estimation in the presence of radiation dose uncertainties.
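The bias behavior summarized above, attenuation of a linear slope under classical error but not under Berkson error, can be demonstrated with a toy simulation. This is an illustrative sketch, not the paper's analysis: both error types are modeled as multiplicative lognormal with mean 1, and the dose-response is linear through the origin.

```python
import math
import random

def ols_slope(x, y):
    """Least-squares slope for a no-intercept linear model."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def simulate(beta=1.0, sigma=0.5, n=50_000, seed=3):
    """Compare fitted slopes under classical and Berkson multiplicative
    dose errors (illustrative parameter values)."""
    rng = random.Random(seed)
    mu = -0.5 * sigma ** 2  # makes the lognormal error mean exactly 1
    err = lambda: math.exp(rng.gauss(mu, sigma))

    # Classical error: the recorded dose scatters around the true dose;
    # regressing response on recorded dose attenuates the slope.
    true = [rng.uniform(0.5, 2.0) for _ in range(n)]
    y = [beta * d + rng.gauss(0.0, 0.05) for d in true]
    recorded = [d * err() for d in true]
    classical = ols_slope(recorded, y)

    # Berkson error: the true dose scatters around the assigned dose;
    # a linear slope on the assigned dose remains approximately unbiased.
    assigned = [rng.uniform(0.5, 2.0) for _ in range(n)]
    y2 = [beta * z * err() + rng.gauss(0.0, 0.05) for z in assigned]
    berkson = ols_slope(assigned, y2)
    return classical, berkson
```

Under this multiplicative model the classical-error slope is attenuated by roughly exp(-σ²), here about 0.78 of the true slope, while the Berkson-error slope stays near 1, which is why distinguishing the two error structures matters for worker dose-response analyses.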