Stories of scientific breakthroughs reveal the ways that profound new insights emerge from research efforts over time:
Ability To Predict Type 1 Diabetes Offers Hope for Disease Prevention
Type 1 diabetes is a devastating disease that most often strikes during childhood and invariably lasts for the rest of one’s life. Every day, the millions of people worldwide who live with this disease must maintain constant attention and vigilance to ward off devastating diabetic complications that shorten lives and reduce quality of life. Therefore, a key goal of NIDDK research is to develop ways to prevent type 1 diabetes from occurring in the first place. Toward realizing that goal, scientists have cleared a critical hurdle by learning how to identify people who are likely to develop the disease.
Being able to predict who will get type 1 diabetes is of obvious importance in identifying people who would benefit from prevention strategies once they are developed. But in fact it is also a key step in the development of interventions to prevent the disease. With the ability to predict type 1 diabetes risk, it becomes feasible to conduct multiple trials in those at risk, so as to increase the possibility of finding the best prevention approach. This is precisely what is being accomplished today through programs like Type 1 Diabetes TrialNet, led by NIDDK, and TRIGR (Trial to Reduce IDDM in the Genetically at Risk), led by the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD). Both programs are supported in part by the Special Statutory Funding Program for Type 1 Diabetes Research.
The scientific achievement of predicting type 1 diabetes was developed through decades of efforts by scientists in several disciplines—immunology, genetics, and epidemiology—working in several countries. Although the clinical appearance of type 1 diabetes is often sudden, with symptoms developing over weeks or days, researchers now know that the disease frequently develops gradually and silently over many years. A key advance was the ability to detect the autoimmune hallmarks of the disease prior to the actual development of type 1 diabetes. Researchers in the 1960s recognized that people with diabetes often make antibodies to insulin, a hormone produced by pancreatic beta cells that are aberrantly destroyed in type 1 diabetes, necessitating treatment with exogenously-supplied insulin. Because these antibodies often arose prior to insulin treatment, the scientists correctly surmised that the people were actually developing antibodies to the insulin being made by their own bodies. Antibodies against one’s own proteins are termed “autoantibodies,” and are a hallmark of autoimmune diseases like type 1 diabetes.
Indeed, researchers later discovered that people with type 1 diabetes often produce antibodies not only to insulin, but also to several other proteins produced by pancreatic beta cells. Significantly, the appearance of autoantibodies nearly always precedes the onset of overt symptoms of type 1 diabetes, when a person still has an adequate number of insulin-producing beta cells to control blood glucose. Testing for the presence of beta cell autoantibodies therefore became a promising approach to predicting the disease before its clinical appearance.
Several scientists, including NIDDK-supported researchers, worked to turn the discovery of autoantibodies into a useful tool by developing robust, standardized autoantibody tests. Importantly, they recognized that the simple presence or absence of an autoantibody does not provide as much information as accurate measurement of the levels or titer of antibody in the blood. Assays to measure antibodies can now be performed such that each test has a very low false-positive rate. Although onset of the disease is usually preceded by creation of antibodies to at least one of these proteins, at any given time a person destined to develop type 1 diabetes only makes autoantibodies to a variable subset of them. The presence of any one of these autoantibodies signals substantially elevated risk, and risk increases as the number of autoantibodies rises.
But type 1 diabetes is such a complex disease that, to be accurately interpreted, an autoantibody test needs to be viewed in the context of more information about the patient. It has long been known that people with a parent, brother, or sister with the disease are more likely to get type 1 diabetes than the population at large. However, most people with such relatives will not get the disease, and many people who develop it have no known affected relative. The reasons for this are complex, but an important part of the answer stems from the fact that several genes turn out to predispose a person to type 1 diabetes, while several others actually have a protective effect.
The first major breakthrough in the genetic part of the puzzle came in the 1970s, with the discovery that two particular versions of a gene called HLA, which makes a key immune recognition protein, are much more common in people who have type 1 diabetes, suggesting they increase the likelihood of the disease. It was later discovered that certain other versions of this highly variable gene can help protect against the disease. Still other versions of HLA are more neutral in their impact. A person can acquire a high-risk version from one parent, and a protecting version from the other. The overall effect of the HLA variants accounts for a very large proportion of the genetic risk for type 1 diabetes.
With the knowledge of HLA and autoantibody associations with type 1 diabetes, NIDDK-supported scientists designed a prevention trial, the NIDDK-supported Diabetes Prevention Trial Type 1 (DPT-1), which successfully used genetic and autoantibody tests to predict risk for developing type 1 diabetes. To identify “those at risk,” the researchers first selected thousands of people who have a close relative with type 1 diabetes, and then screened them for autoantibodies. Those with autoantibodies were then tested for the protective version of HLA. People who had autoantibodies and no protective HLA were tested for their response to glucose, to see whether they were already displaying signs of diabetes. Indeed, some already had the disease, and simply did not know it. The rest fell into two categories: those with a normal response to a glucose challenge were considered to have a “moderate” (26-50 percent) chance of developing type 1 diabetes within 5 years; those with a response to glucose that was weaker (but did not meet the definition of overt diabetes) were considered to have a greater than 50 percent chance of developing the disease within 5 years. Although the specific prevention strategies tested in this trial did not turn out to have a broadly protective effect, the researchers’ estimates of risk for type 1 diabetes, based on their screening, proved to be remarkably accurate. Thus, the DPT-1 trial was enormously valuable in demonstrating that it is possible to identify those at high risk for type 1 diabetes—enabling researchers to conduct further studies to test new prevention strategies.
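The DPT-1 screening cascade described above amounts to a simple decision procedure. The sketch below is purely illustrative and not part of the actual DPT-1 protocol; the function name, the boolean inputs, and the glucose-response labels are hypothetical placeholders for the clinical tests involved.

```python
def dpt1_risk_category(has_autoantibodies: bool,
                       has_protective_hla: bool,
                       glucose_response: str) -> str:
    """Illustrative sketch of the DPT-1 screening logic.

    Assumes the participant is a close relative of someone with
    type 1 diabetes (the first screening step). glucose_response is
    a hypothetical label: 'normal', 'impaired', or 'diabetic'.
    """
    if not has_autoantibodies:
        return "not enrolled: no autoantibodies detected"
    if has_protective_hla:
        return "not enrolled: protective HLA variant present"
    if glucose_response == "diabetic":
        return "already has diabetes"
    if glucose_response == "impaired":
        # Weaker-than-normal glucose response, short of overt diabetes
        return "high risk: >50% chance of type 1 diabetes within 5 years"
    # Autoantibody-positive, no protective HLA, normal glucose response
    return "moderate risk: 26-50% chance of type 1 diabetes within 5 years"
```

As in the trial itself, each successive test narrows the pool to people at progressively higher and better-quantified risk.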
NIDDK-supported scientists are continuing to discover potential new ways to improve prediction of diabetes risk. For example, researchers recently identified another autoantibody that, when combined with tests for the previously-known autoantibodies, improves the predictive power of this approach. (Please see the advance on a new autoantibody for type 1 diabetes described earlier in this chapter.) Researchers are continuing to discover other genes that impact the probability of developing type 1 diabetes. At current count there are over 40 such genes, largely discovered through the efforts of the Type 1 Diabetes Genetics Consortium, made possible by support from NIDDK and the Special Statutory Funding Program for Type 1 Diabetes Research. Individually, none of these new genes has as large an impact as HLA, but collectively their effect is significant.
With further genetic, autoantibody, or other predictive markers and tools, it may be possible to define risk for type 1 diabetes even more precisely, and to extend such predictive tests to the population as a whole. Such predictive markers may also help scientists identify potential environmental triggers of the disease. Improved tests to assess risk not only would facilitate additional research on prevention strategies, but could also advance research on ways to reverse the disease in its earliest stages and, importantly, enable the resulting interventions to benefit more people.
Diabetes and Cardiovascular Disease
Seminal clinical trials have revealed the power of good control of blood glucose (sugar) early during the course of type 1 and type 2 diabetes to reduce later risk for eye, kidney, and nerve complications. Now, clinical trials are examining the more complex relationship between blood glucose control and cardiovascular disease (CVD) in type 2 diabetes. One recent study showed that more intensive control than currently recommended, targeting near normal blood glucose levels, can be dangerous in those with long-duration type 2 diabetes with established CVD or at high risk of developing CVD. Two other recent trials found neither cardiovascular harm nor benefit of moving from “good” to near-normal glucose levels. However, another study found that targeting good glucose control early in the course of disease can reduce cardiovascular risks decades later for many patients with type 2 diabetes. Similar cardiovascular benefits emerging long after a finite period of intensive glucose control were reported previously for individuals with type 1 diabetes. Because CVD is the leading cause of death in people with type 2 diabetes, identification of ways to reduce this risk is particularly important. There is very strong evidence that blood pressure and cholesterol control can markedly reduce CVD, but the effects of glucose control on CVD in type 2 diabetes remained an open question. Taken together, the new results refine the approach to treating diabetes and demonstrate the importance of tailoring therapy to individual patient characteristics.
Diabetes Increases the Risk of Death from Cardiovascular Disease
An estimated 23.6 million Americans have diabetes, about 5.7 million of whom have not been diagnosed.1 Type 1 diabetes, which accounts for 5-10 percent of diagnosed diabetes cases, is an autoimmune disease that often begins in childhood or early adulthood, although it can strike at any age. The majority of people with diabetes have type 2 diabetes—a form of the disease that is typically associated with excess body weight and older age. In part due to the increase in childhood obesity, however, children increasingly are being diagnosed with type 2 diabetes. Both type 1 and type 2 diabetes are also influenced by genetic susceptibility. While both forms of diabetes are characterized by excessively high levels of glucose in the blood, type 1 diabetes and type 2 diabetes have different causes and are treated differently, particularly at disease onset. From the moment of diagnosis, because their insulin-producing cells have been destroyed, people with type 1 diabetes must depend on exogenous insulin, provided by injections or an insulin pump, for survival. Type 2 diabetes, in contrast, is often managed with changes in diet and exercise in its early stages. Insulin-producing cells may still be functioning in type 2 diabetes, but not sufficiently to overcome the insulin resistance that characterizes this form of the disease. A wide variety of prescription medications have been developed to help lower blood glucose in people with type 2 diabetes. (See inset box.) Because these drugs act in various ways to lower blood glucose levels, some may be used in combination with others. Many people with type 2 diabetes also need to take insulin to optimally control their blood glucose levels, especially after having the disease for many years.
Despite markedly different causes and treatment options, type 1 diabetes and type 2 diabetes share a common outcome: excess glucose in the blood gradually leads to damaged blood vessels in organs throughout the body. Injury to small blood vessels, known as microvascular disease, increases the risk of blindness, kidney failure, nerve damage, and lower limb amputation. Injury to larger blood vessels, known as macrovascular disease, leads to elevated rates of heart attack, stroke, and other cardiovascular complications in people with diabetes. In general, two out of three adults with diabetes will die of cardiovascular disease or stroke—a risk that is two to four times higher than that for people without diabetes.1 For people with type 1 diabetes, the risk of death from CVD may be as much as 10-fold greater than that of the general population of the same age.2,3 This elevated risk of cardiovascular death shortens the expected life span of people with diabetes by several years.
Long-Term Benefits of Intensive Glucose Control Established for Microvascular Complications
Diabetic complications result from many years of gradual glucose-mediated damage to blood vessels. Thus, clinical trials of new therapies for preventing complications are designed to follow participants’ health outcomes over long periods of time following initial treatment.
In 1983, the NIDDK’s Diabetes Control and Complications Trial (DCCT) was launched with 1,441 volunteers with type 1 diabetes randomly divided into two groups. One group received what was standard insulin therapy at the time—one or two insulin injections per day. The other group was taught to manage their blood glucose intensively with frequent monitoring of glucose levels and multiple insulin injections daily or use of an insulin pump. The study was designed to test the ability of intensive glucose control to reduce eye damage and other microvascular complications. The study relied on a blood test (HbA1c), which gauges the average blood glucose over the previous 2 to 3 months. A normal HbA1c is below 6 percent. Throughout the study the average HbA1c value in the standard therapy group was 9.1 percent, whereas in the intensive therapy group the value was 7.3 percent—a significant difference in glucose control.
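As a rough illustration of what these HbA1c values mean in everyday terms, a widely cited linear approximation (published after the DCCT, from the ADAG study) converts HbA1c to an estimated average glucose in mg/dL. The snippet below applies it to the two DCCT group averages; the formula is an approximation, not a measurement used in the trial itself.

```python
def estimated_average_glucose(hba1c_percent: float) -> float:
    """Estimated average glucose (eAG, in mg/dL) from HbA1c (%).

    Uses the widely cited ADAG linear approximation:
        eAG = 28.7 * HbA1c - 46.7
    """
    return 28.7 * hba1c_percent - 46.7

# Approximate average glucose levels implied by the DCCT group HbA1c values:
standard_eag = estimated_average_glucose(9.1)   # ~214 mg/dL (standard therapy)
intensive_eag = estimated_average_glucose(7.3)  # ~163 mg/dL (intensive therapy)
```

The roughly 50 mg/dL gap between the two groups gives a concrete sense of how different their day-to-day glucose exposure was over the course of the trial.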
This difference, when maintained over an average of 6.5 years, yielded multiple health benefits: participants in the intensive therapy group exhibited lower rates of eye disease (76 percent reduction in risk), kidney disease (50 percent reduction), and nerve damage (60 percent reduction). Thus, the intervention to improve glucose control was clearly an effective means to lower the risk of microvascular complications in type 1 diabetes. However, because the DCCT participants were relatively young and healthy at the start of the trial, and because CVD typically takes a longer time to develop than other diabetes complications in patients with type 1 diabetes, it was not possible for researchers to assess the effect of intensive glucose control on cardiovascular risks during the 10 years of the trial.
Longer follow-up demonstrated additional benefits. At the conclusion of the DCCT, participants returned to the care of their regular health care providers. However, researchers continued to observe the health of more than 90 percent of the DCCT participants in an ongoing follow-up NIDDK effort called the Epidemiology of Diabetes Interventions and Complications Study (EDIC). By continuing to observe these well-characterized patient groups, the investigators hoped to determine whether the interventions that had worked so well to reduce microvascular disease risk might also yield a long-term benefit of reducing CVD.
In the EDIC study, the HbA1c levels of the study groups gradually equalized over time as glucose control in the original conventional therapy group improved, while that of the intensive therapy group worsened. Intriguingly, EDIC investigators initially found that differences in risk for microvascular complications between the original study groups persisted for at least 8 to 10 years, even though the difference in HbA1c levels disappeared. Then, in 2005, EDIC investigators reported for the first time that intensive glucose control during the DCCT trial period could also reduce long-term CVD risks in type 1 diabetes. Twelve years after the DCCT had ended, members of the original intensive therapy group had a 42 percent lower risk for heart disease and a 57 percent lower risk for non-fatal heart attacks, strokes, or death from a cardiovascular event compared with those who had been in the standard treatment group.
A similar trial for type 2 diabetes was conducted in the United Kingdom (U.K.) from 1977 to 1997. In the U.K. Prospective Diabetes Study (UKPDS), which was supported in part by NIDDK, more than 4,000 newly-diagnosed type 2 diabetes patients were stratified by body weight and randomly assigned to one of four treatment groups: conventional therapy, primarily through dietary changes, or intensive therapy to lower blood glucose levels to close to normal using one of the following three diabetes medications: (1) insulin; (2) a sulfonylurea drug; or (3) metformin. (Only participants who met the trial definition of overweight could be randomly assigned to primary metformin treatment in the UKPDS.) Like the DCCT, the UKPDS demonstrated that intensive therapy to control blood glucose and lower HbA1c levels could reduce the risk of microvascular disease in people with diabetes. UKPDS results suggested that intensive therapy might also confer a benefit with respect to CVD, but, at the conclusion of the intervention—patients were followed for an average of 10 years—the differences were not statistically significant. Therefore, an important question remained unanswered as to whether intensive control could protect people with type 2 diabetes from CVD.
Long-Sought Information Emerges on Glucose Control and Cardiovascular Disease
Because of its substantial impact on the health and lives of people with diabetes, researchers have long sought effective strategies to prevent or manage diabetic CVD. Several clinical trials had proven that carefully controlling blood pressure and cholesterol levels—both of which contribute to CVD risk—substantially reduces cardiovascular events in people with type 2 diabetes. However, the UKPDS, the first major clinical trial to examine the effects of intensive glucose control in type 2 diabetes, fell short at the conclusion of its intervention of proving that improved control of blood glucose levels reduced CVD.
Because the DCCT and the UKPDS trials had proven that good glucose control reduced microvascular complications in both type 1 and type 2 diabetes, subsequent expert guidelines for blood glucose management recommended an HbA1c target of 7 percent, the level of control targeted in UKPDS and proven to reduce eye, kidney, and nerve complications. Widespread acceptance of those recommendations meant that any subsequent attempt to prove that glucose control could lessen CVD had to study even more stringent control, so that participants would not be put at increased risk of microvascular disease.
During the past decade, several studies were begun to answer this key question, most notably the Action to Control Cardiovascular Risk in Diabetes Study (ACCORD), which is led by the National Heart, Lung, and Blood Institute with NIDDK support. ACCORD was designed to test three treatment approaches to decrease the high rate of CVD among adults with established type 2 diabetes who are at especially high risk for heart attack and stroke. More than 10,000 patients with type 2 diabetes were assigned to one of two regimens for blood glucose control: now-standard therapy designed to attain an HbA1c value of 7.0-7.9 percent, or intensive therapy with the intent of lowering HbA1c levels to below 6.0 percent. After patients had been treated for an average of 3.5 years, the intensive therapy arm was halted 18 months ahead of schedule due to a higher rate of deaths and no significant reduction in cardiovascular events in this treatment group.
Two other studies, an industry-sponsored trial (ADVANCE) and the Veterans Administration Diabetes Trial (VADT), also compared the effects of standard and intensive blood glucose control on CVD in participants with longstanding type 2 diabetes similar to the ACCORD participants. Although neither of these studies found increased mortality with intensive therapy, they both failed to find any significant reduction in cardiovascular events.
The results of the three recent trials generated huge interest in the medical community and their full implications are still being explored. Further analyses over the next year may help to clarify some factors, such as patient characteristics and treatment regimens, contributing to the differences, but may not identify the cause of the excess deaths in the ACCORD trial.
While the ACCORD trial demonstrated the danger of intensive glucose management to near normal glucose levels in patients with longstanding type 2 diabetes who were at especially high risk of CVD, it did not address the question of cardiovascular benefit of good glucose control instituted shortly after diagnosis when good control can be achieved with simpler diabetes control regimens. The best evidence of the benefits of early treatment comes from the recently reported long-term follow-up of the UKPDS participants. There were no early adverse effects of intensive glucose control in the newly-diagnosed type 2 diabetes patients studied in the UKPDS. Three-quarters of UKPDS participants were observed for 10 years after the end of the original intervention trial. In 2008, the UKPDS follow-up study reported similar benefits for type 2 diabetes patients as had been seen in EDIC for type 1 diabetes. The intensive therapy groups had persistent reductions in microvascular complications and substantial reductions in risk for heart attack compared to those assigned to standard therapy. Intensive therapy participants also had a lower overall risk of death during the course of the study. In the UKPDS follow-up study, as in EDIC, the HbA1c levels between groups became equal for most of the follow-up period. Thus, a period of intensive diabetes management to control glucose levels appears to confer enduring benefits in terms of reducing diabetic complications—including CVD—even if an individual’s glucose control subsequently becomes less stringent. This phenomenon, which has been termed “metabolic memory” or the “legacy effect,” provides a powerful motivation for most diabetes patients to maintain their glucose levels as close to normal as possible early in the disease.
One Treatment Approach Is Not Suitable for All People with Diabetes
The results of the DCCT/EDIC and UKPDS represent landmark advances in validating intensive glucose management as a strategy to prevent microvascular and cardiovascular complications in both type 1 and type 2 diabetes. But ACCORD and other large clinical trials of blood glucose control and cardiovascular risk in type 2 diabetes arrived at a seemingly conflicting conclusion. On the surface, the ACCORD outcome seems at odds with the UKPDS finding that intensive glucose control is protective in terms of reducing cardiovascular risks, including death, in people with type 2 diabetes. However, there are important differences between the studies. UKPDS participants had a median age of 53 years and were newly diagnosed with diabetes at the time of enrollment. In contrast, the ACCORD cohort was older, with an average age of 62 years, and had been living with diabetes for a median duration of 10 years. ACCORD participants were also at especially high risk of CVD, and more than a third had already experienced at least one cardiovascular event before the trial began. Moreover, the ACCORD “intensive” therapy protocol attempted to reduce HbA1c values to “near normal” (i.e., non-diabetic) levels, a considerably more aggressive approach to glucose control than the “intensive” therapy regimens of the UKPDS and DCCT. Viewed together, the results of ACCORD and UKPDS suggest that a personalized approach to glucose control in type 2 diabetes might be needed—one that takes into account a person’s duration of diabetes, the presence or absence of diabetes complications, risk of low blood glucose, other complicating illnesses and life expectancy, as well as other health, behavioral, and social factors.
The recent results of long-term clinical trials to reduce diabetes complications are expanding our knowledge of the best ways to manage diabetes. Despite some challenges, progress is being made in improving glucose control and reducing both micro- and macro-vascular complications related to both type 1 and type 2 diabetes. Further investigation is needed, since no current treatment regimens fully replicate the tightly regulated control of glucose levels found in people without diabetes.
Type 1 and type 2 diabetes are complex chronic diseases that have multiple clinical presentations, variability in their rates of progression, and variability in susceptibility to development of chronic micro- and macrovascular complications. Strategies for controlling blood glucose to prevent complications may need to be modified for different groups of patients or even for a single patient as their disease progresses. Such strategies must also take into account other therapies to manage CVD risks, such as drugs that normalize blood pressure, reduce blood lipid levels, or alter blood coagulation.
As the number of people with diabetes in the U.S. continues to climb, the NIDDK investment in long-term clinical trials to optimize diabetes management will help reduce the burden of CVD and premature death in this large segment of the population. In addition, basic research to understand the phenomenon of metabolic memory will shed light on the way intensive glucose control early in the course of diabetes can pay off in terms of fewer complications years later. In time, it may be possible to reproduce the effects of metabolic memory even in patients with poorly controlled diabetes and, thereby, help all people with diabetes achieve better health and longer lives.
Drug Therapies for Type 2 Diabetes
There are many medications available to help people with type 2 diabetes lower their blood glucose. These medications fall into several classes:
- Insulin: moves glucose from blood into cells
- Metformin: reduces output of glucose from the liver and reduces insulin resistance
- Thiazolidinediones: reduce insulin resistance by a different mechanism than metformin
- Sulfonylureas: promote release of insulin by the pancreas
- Meglitinides: promote release of insulin by the pancreas (shorter and faster acting than sulfonylureas)
- D-phenylalanine derivative: promotes release of insulin
- GLP-1 analogs: stimulate production of insulin and slow gastric (stomach) emptying
- DPP-4 inhibitors: slow destruction of GLP-1 and stimulate production of insulin
- Amylin analogs: slow glucose absorption from the intestine and reduce glucose production by the liver
- Alpha-glucosidase inhibitors: interfere with digestion and utilization of carbohydrates like starch and table sugar
Other promising therapeutic approaches are currently in development.
2 Krolewski AS, et al: Am J Cardiol 59:750-755, 1987.
3 Dorman JS, et al: Diabetes 33:271-276, 1984.
Leptin as a Treatment for Lipodystrophy: A Translational Success Story
This story begins with an obese mouse and ends with a medical treatment for people who may lack fat tissue altogether. The common link that ties together these two very different entities is a hormone called leptin. Identifying this link was a result of the collaboration among many investigators over several years, including NIDDK-supported scientists at universities, scientists in the NIDDK Intramural Research Program, industry researchers, and many others. This translational success story is a demonstration of how exciting discoveries in the laboratory are used to improve the health of people.
The Obese Mouse and the Discovery of Leptin
In 1950, scientists identified a new mouse model that was extremely obese. They called the unknown gene causing the obesity “ob.” By the 1980s, the identity of the ob gene was still unknown, but it was becoming more and more apparent that research on genetic contributors to obesity was critically important to pursue. Therefore, the NIDDK sought to support research to identify obesity-related genes in rodents, including the ob gene. The Institute sponsored a workshop on this topic and developed an initiative to solicit research applications. In 1989, the NIDDK awarded a grant to Dr. Jeffrey Friedman through this initiative. Dr. Friedman’s subsequent pioneering research led to the 1994 discovery of the mouse ob gene. The hormone produced by this gene was named “leptin,” a term derived from the Greek word leptos, meaning thin. Because the ob mutant mouse was obese, the scientists realized that the normal ob gene—and the hormone it encodes—must contribute to leanness.
The landmark discovery of leptin unleashed a wave of new research advances in fat biology and metabolism. Researchers found that leptin is secreted by fat cells and released in proportion to the amount of fat. These observations drastically altered the former view of normal fat tissue as simply a passive “fat storehouse.” Research fueled by this 1994 discovery also led to the identification of a number of other substances that, like leptin, are secreted by fat cells and influence appetite and metabolism.
Studies demonstrated that obese animals deficient in leptin, including mice carrying the mutant form of the ob gene, lost weight when given the hormone. Therefore, researchers postulated that leptin treatment might also be useful for human obesity. There are, in fact, very rare instances of complete deficiency of leptin in humans that result in morbid obesity from infancy. Leptin treatment in these individuals caused substantial weight loss, providing hope for improved quality of life and longevity.
Unfortunately, in clinical studies done at that time, leptin administration was not effective in treating the vast majority of cases of human obesity, which are not due to leptin deficiency. In most cases, obesity results from a complex interaction among genetic variation (potentially involving many genes not yet identified) and the environment. Obese individuals, in fact, usually have very high levels of leptin, probably a consequence of the many fat cells secreting it. The inability of the high levels of leptin to decrease body weight suggests that the more common forms of obesity are associated with a resistance to leptin’s actions. Although these results were disappointing, scientists did not give up in their quest to use this new knowledge to benefit people.
Testing Leptin as a Treatment for Lipodystrophy
Scientists in the NIDDK’s Intramural Research Program had broad experience with respect to studying people with various forms of insulin resistance. Using this experience and knowledge, they identified a patient population—people with lipodystrophy—who could potentially benefit from leptin treatment.
Lipodystrophy is actually a group of disorders with disparate origins but with a common set of characteristics. Individuals with lipodystrophy lack fatty tissue in the face, neck, or extremities. They sometimes have central obesity and sometimes lack fat tissue altogether. While lipodystrophy is characterized by the loss of fatty tissue in certain areas of the body, tissues such as liver and muscle exhibit significant abnormal accumulation of fat, which impairs metabolic activity. These patients also exhibit resistance to the effects of insulin and are thus at high risk of developing diabetes. They may also have a range of lipid abnormalities. Treatment of lipodystrophy has included the administration of insulin, oral hypoglycemic (blood glucose lowering) agents, and lipid-lowering drugs. In spite of treatment, patients with lipodystrophy continue to have severely high levels of triglycerides, leading to recurrent attacks of acute inflammation of the pancreas; severe problems controlling blood glucose levels, posing risks of diabetic eye and kidney disease; and fat accumulation in the liver, which can result in cirrhosis and liver failure.
Because many people with lipodystrophy have low leptin levels, and because research had demonstrated beneficial effects of leptin on insulin sensitivity and fat metabolism in a number of tissues, researchers in the NIDDK Intramural Research Program and their collaborators investigated whether leptin treatment could ameliorate conditions associated with lipodystrophy. In two small clinical studies of individuals with lipodystrophy treated for short periods of time (3-8 months), leptin therapy had dramatic benefits. In one study of female patients with different forms of lipodystrophy, most of whom also had type 2 diabetes, leptin therapy improved blood glucose levels, lowered triglyceride levels, and decreased liver fat content. In another study, leptin therapy markedly improved insulin sensitivity, lowered lipid levels, and decreased liver fat content in individuals with severe lipodystrophy who also suffered from poorly controlled type 2 diabetes. Patients in these studies were able to reduce or discontinue their diabetes medications.
Seeing such dramatic results, the researchers next examined the effect of long-term leptin therapy (12 months) in patients with severe forms of lipodystrophy and poorly-controlled diabetes. Long-term leptin therapy had similarly remarkable results. Patients had improved blood glucose and blood lipid levels, and decreased fat in their livers. The patients also reported a dramatic reduction in their appetite, which led to moderate reductions in their weight. In addition, patients were able to discontinue or reduce their diabetes medications. These exciting results suggested that leptin was an effective treatment for severe lipodystrophy.
The scientists also examined the effect of leptin on other metabolic abnormalities associated with lipodystrophy. For example, female patients often have irregular or absent menstrual cycles. Leptin treatment corrected this condition: eight of eight female patients achieved normal menstrual function following leptin therapy. In a study of 10 patients, leptin effectively improved liver function and reduced liver fat content in people with lipodystrophy and nonalcoholic steatohepatitis, a progressive metabolic liver disease. In a study of 25 patients with lipodystrophy, researchers found that a surprisingly high number had some form of kidney disease; leptin treatment improved their kidney function. Thus, leptin corrected a broad range of metabolic defects associated with lipodystrophy.
Lipodystrophy can either be inherited or acquired, and can be complete (near total lack of fat) or partial (fat loss in certain parts of the body). Clinical trials conducted by scientists in the NIDDK Intramural Research Program and their collaborators examined leptin treatment for various forms of lipodystrophy and found that leptin effectively treated all forms tested. These results suggest that leptin is generally effective for treating lipodystrophy, independent of the underlying cause.
Testing Leptin for Treating Lipodystrophy: A Team Effort
The clinical trials testing leptin therapy for lipodystrophy conducted by the NIDDK Intramural Research Program required numerous collaborators, and spawned new collaborations. Leading this effort was Dr. Phillip Gorden, a former NIDDK Director who returned to the laboratory to continue his research. Because leptin was manufactured by industry, the Intramural Research Program and the NIDDK Office of Technology Transfer and Development worked with industry to obtain the leptin needed for the studies. In addition, because lipodystrophy affects the liver and kidneys, scientists in the Intramural Research Program with expertise studying those organs were valuable contributors to the studies. Furthermore, collaborators external to the NIDDK have studied the genetic underpinnings of different forms of inherited lipodystrophy; several genes have now been identified. Finally, many of the patients were evaluated and treated at the NIDDK’s Metabolic Clinical Research Unit, a new facility in the NIH Clinical Center that enables scientists to make precise metabolic measurements. It was only through the contributions of all of these collaborators that this translational success story came to fruition.
Looking to the Future
Looking to the future, scientists are continuing research on leptin and exploring approaches for its use in treating other diseases and disorders. As described in this story, knowledge gained from studying a common condition, obesity, led to the discovery of leptin and a treatment for a very rare disease, lipodystrophy. Scientists are now coming full circle by building on the successful clinical studies with leptin in lipodystrophy and applying it to research on common diseases. For example, the NIDDK Intramural Research Program is conducting studies to examine leptin’s effects on treating people with other forms of severe insulin resistance and other common metabolic conditions. If leptin proves effective in these cases, these studies would be an example of how research on rare diseases may additionally benefit people with more common diseases and syndromes. The discovery of leptin has led—and continues to lead—to a cascade of exciting and unexpected findings with broad implications for improving health.
Hepatitis B Research Progress: A Series of Fortunate Events
Over the span of a few decades in the U.S., hepatitis B has been transformed from a disease newly infecting 200,000-300,000 individuals annually to one infecting approximately 46,000 individuals in 2006, the most recent year surveyed.1 This impressive public health achievement can be attributed largely to immunization programs using a safe and effective hepatitis B vaccine, and screening of the blood supply for the virus. Therapy for chronic hepatitis B has similarly improved from a point in time when no effective treatment was available, to the current armamentarium of seven FDA-approved treatment options. These gains in hepatitis B control and care are based on years of careful research, marked by a confluence of serendipity and concerted effort by U.S. and international scientists into understanding the cause and course of hepatitis B, effectively treating it, and preventing its spread. NIH-sponsored research has contributed greatly to advancing knowledge of hepatitis B over the years. This Story of Discovery highlights some of the landmark accomplishments to date in hepatitis B research and their far-reaching impact through translation into improved medical care and public health in this country and around the world.
Silent Disease with a Global Reach
Hepatitis B is an inflammation of the liver caused by infection with the hepatitis B virus (HBV), which results from exposure to an infected person or their blood or blood products. Infection with HBV can result in acute or chronic forms of hepatitis. Symptoms can include fatigue, nausea, fever, loss of appetite, stomach pain, diarrhea, dark urine, light stools, and jaundice (yellowing of the eyes and skin). However, hepatitis B often is a “silent disease,” quietly inhabiting the body for several decades before provoking symptoms or progressing to cirrhosis (scarring of the liver that prevents normal function) and/or hepatocellular carcinoma (liver cancer). This delayed appearance of symptoms can hinder efforts to detect the disease at an early stage and to prevent further transmission. Common ways in which HBV is passed on include: from mother to baby at birth; sex without use of a condom; use of tainted needles or tools for injection drug use, tattoos, or body piercing; accidental needle-stick; or sharing a toothbrush or razor with an infected person. Receiving a blood transfusion in the U.S. used to be another common mode of transmission, during the 1980s and earlier, prior to effective screening of donor blood for the virus.
Chronic hepatitis B currently affects an estimated 1.25 million people in the U.S., resulting in approximately 5,000 deaths each year.1 Recent estimates from the World Health Organization indicate that more than 350 million people worldwide have chronic hepatitis B, out of 2 billion infected with the virus.2 In particular, individuals from parts of the world where hepatitis B is endemic, such as parts of Asia and sub-Saharan Africa, are at increased risk of developing chronic hepatitis B, which is the leading cause of cirrhosis and hepatocellular carcinoma worldwide. People infected with the human immunodeficiency virus (HIV) are also at high risk of being co-infected with HBV, due to common, bloodborne transmission routes.
Discovery of a Bloodborne Threat to Liver Health
Hepatitis epidemics, which likely included hepatitis B as one cause, have spanned the course of human history, dating back to antiquity and observations on an epidemic of jaundice by Hippocrates. Yet it wasn’t until 1883 that a German scientist first described what was later thought to be this particular form of viral hepatitis in a group of people who had developed jaundice after receiving a smallpox vaccine prepared from human blood. Similarly, an outbreak of hepatitis-related jaundice affecting approximately 50,000 U.S. Army personnel during World War II was later attributed to HBV infection transmitted through a contaminated yellow-fever vaccine, based on research performed in the late 1980s with support from the Veterans Affairs Medical Center, the NIDDK and the National Cancer Institute (NCI) within the NIH, and the National Research Council.
The infectious agent responsible for these hepatitis outbreaks, the hepatitis B virus, was identified by Dr. Baruch Blumberg while working at the NIH in the 1960s—a discovery that later earned him the Nobel Prize in Physiology or Medicine in 1976. Strangely enough, Dr. Blumberg and his laboratory did not set out to find the virus causing hepatitis B. As part of their research to identify forms of blood proteins that differ across populations or ethnic groups, they were testing blood from hemophilia patients who had received multiple blood transfusions. When proteins in a donor’s blood are slightly different from those in a recipient’s own blood, the body may mount an immune reaction, including the production of antibodies that stick to the foreign proteins. Thus, the scientists were looking for antibodies in the hemophilia patients as potential markers of differences between their blood proteins and the donors’. In this case, however, the scientists would soon learn that some of the antibodies reflected the presence of a bloodborne infectious agent.
In 1963, Dr. Blumberg and Dr. Harvey Alter identified an antibody in the blood of a patient in New York with hemophilia that reacted against a protein in blood collected from an Australian aborigine. This protein was named the “Australia antigen” or “Au.” This finding piqued their curiosity as to why a patient in New York would produce an antibody against a protein found in the blood of an individual so geographically, ethnically, and culturally distinct as an aborigine living in Australia. They went on to test samples from individuals around the globe and found the antigen in some of these as well, more commonly in people who had received multiple blood transfusions or were from Asian or tropical regions.
A clue that the Australia antigen might be linked to liver disease came in early 1966 when Dr. Blumberg’s group noted that one patient’s blood first tested negative and then positive for the antigen—a shift associated with clinical signs of chronic hepatitis in the form of elevated liver enzymes. Also around this time, one of the technicians in Dr. Blumberg’s laboratory developed a case of acute hepatitis, which was accompanied by a positive test for the antigen. Additional clinical studies during the late 1960s in the U.S. and Japan also found hepatitis associated with the Australia antigen. Soon after, blood banks in the U.S. and abroad started screening donors to ensure that this apparent bloodborne cause of hepatitis was not passed on to transfusion recipients.
The Australia antigen was confirmed to be part of a virus causing hepatitis B (now known as HBV) in 1970 by a research group in London that visualized viral particles in blood from patients with hepatitis who had tested positive for the antigen. Once the protein heretofore known as the Australia antigen was revealed to be part of HBV, scientists realized how the hemophilia patient in New York could harbor antibodies to a protein found in the blood of an Australian aborigine; presumably, both individuals were infected at some point with HBV. In the following years, research would continue to yield illuminating details about HBV. For example, further investigations of the Australia antigen identified it as a protein on the surface of HBV. NIH-supported studies also revealed the distinctive circular shape and other characteristics of the HBV genome. Valuable knowledge of the mechanisms HBV uses for infection and replication, and its overall life cycle, was gained from research in unique animal models that could simulate human HBV infection, such as ducks, woodchucks, and ground squirrels, as well as from cells grown in the laboratory.
Basic research sponsored by the NIH into understanding HBV components, infection strategy, and resulting disease processes would later prove to be essential as a basis for additional prevention strategies, as well as effective diagnostic and treatment approaches.
Investigations of the disease resulting from HBV infection also showed that chronic hepatitis B could also lead to a form of liver cancer known as hepatocellular carcinoma. For example, in 1981, a study sponsored in part by the NCI of over 20,000 Chinese government workers showed that chronic hepatitis B was strongly associated with development of and death from hepatocellular carcinoma after 5 years. Later, in 2005, the National Institute of Environmental Health Sciences’ National Toxicology Program would list the hepatitis B virus as a known human carcinogen in its annual Report on Carcinogens.
Medical Success Story: Effective Prevention of Hepatitis B
Soon after discovery of the Australia antigen, researchers developed tests to detect HBV in blood that could be applied to diagnosis and screening of populations for the infection. Screening of the donor blood supply for the virus had an important impact on reducing disease transmission in patients requiring transfusions. But for prevention in the general population, a vaccine was needed.
Basic research on the natural history of HBV infection led to the preparation in the 1970s and 1980s of the first hepatitis B vaccines based on heat-inactivated and blood plasma-derived viruses. Clinical research on the plasma-derived vaccine conducted with NIH support showed that it was effective at protecting against HBV infection. Researchers later developed an improved, “recombinant” version of the vaccine by inserting the hepatitis B surface protein gene into yeast or mammalian cells, which facilitated its purification and preparation for the vaccine. These vaccines also protect against infection by the hepatitis D virus, which requires HBV in order to replicate.
Since the establishment in the U.S. in the 1980s of vaccination programs and donor blood screening for hepatitis B, new cases of acute hepatitis B have declined by more than 80 percent.3 The immunization strategy initially recommended by the Centers for Disease Control and Prevention (CDC) in 1991 entailed universal vaccination of children. In 1992, Federal programs began routine hepatitis B vaccination of infants, and vaccination of adolescents was added in 1995. The vaccine is also currently recommended for individuals in high-risk groups, such as family members of patients with chronic hepatitis B and individuals who live in or emigrate from parts of the world with high rates of infection. Worldwide, beginning in the 1990s, the World Health Organization has called for all countries to add the hepatitis B vaccine to their national immunization programs, which presents a challenge in many parts of the developing world. Multi-national public-private partnerships, such as the Global Alliance for Vaccines and Immunization, are working to improve vaccination rates in these areas.
Many Treatment Options for Hepatitis B
Early trials of therapy for hepatitis B focused on the immune-cell chemical interferon. Studies conducted during the 1970s through the 1990s, in the U.S. with NIH support and also abroad, demonstrated the efficacy of treating hepatitis B with interferon, which decreases the stability of HBV genetic material and interferes with viral assembly. However, interferon carries potential side effects, including fever, fatigue, headache and muscle aches, and depression. More recently, advances in understanding the viral life cycle and pathogenesis of hepatitis B have paved the way for identifying new therapeutic agents known as nucleoside/nucleotide analogues, some of which were originally developed to treat HIV infection. These drugs protect against hepatitis B by directly inhibiting replication of HBV through targeting its polymerase enzyme. Currently, seven antiviral drugs are approved by the FDA to treat hepatitis B: interferon-alpha, peginterferon, lamivudine, adefovir, entecavir, telbivudine, and tenofovir. However, no definitive guidance yet exists on the most effective use of these drugs, either alone or in combination. Other issues that remain to be resolved concerning use of these drugs against hepatitis B include how to ensure a lasting response once treatment is stopped and how to avoid the development of viral resistance with long-term treatment, in which the virus mutates over time to escape suppression by the antiviral drug. The drugs also differ in terms of efficacy, safety, likelihood of viral resistance development, method and frequency of administration, and cost. The many HBV types (genotypes) in existence also affect response to therapy and disease progression. New antiviral therapies are currently being tested against hepatitis B in clinical trials.
In addition to pharmaceutical agents, liver transplantation is an effective treatment for individuals with hepatitis B who develop cirrhosis and end-stage liver disease. However, organs for transplant remain in short supply.
To resolve the many issues concerning optimal use of available therapies against hepatitis B, the NIDDK has sponsored several consensus-building conferences that bring together experts in the field to make recommendations based on available evidence. For example, in September 2000 and in April 2006, the NIH sponsored workshops to review current health care practices and develop recommendations for optimal management of hepatitis B. Proceedings from these meetings were published in scientific journals. In October 2008, the NIDDK convened an NIH Consensus Development Conference on Management of Hepatitis B together with the NIH Office of Medical Applications of Research, The Johns Hopkins University School of Medicine, and other entities within the NIH and the Department of Health and Human Services.
The purpose of this 3-day conference was to examine important issues in hepatitis B therapy, including which groups of patients benefit from treatment and at what point during treatment, as well as which groups do not show a benefit. The external experts serving on the conference panel addressed major questions regarding hepatitis B management related to current burden, natural history, benefits and risks of current treatment options, who should or should not be treated, appropriate measures to monitor treatment, and the greatest needs and opportunities for future research on hepatitis B. Additional information on this conference is provided in the accompanying feature on “NIH Consensus Development Conference on Management of Hepatitis B.”
More To Discover Through Research
Despite the impressive scientific gains made over the past decades toward preventing and treating hepatitis B, much remains to be learned about this disease, including details of the disease processes associated with HBV infection, as well as ways to optimize approaches to treatment and control. To further advance knowledge of hepatitis B, the NIDDK is funding the Hepatitis B Clinical Research Network. Established in fall 2008, the Network consists of 12 clinical centers, a data coordinating center, a virology center, and an immunology center. The Network is conducting translational research on chronic hepatitis B, focusing on understanding disease processes and applying this knowledge to more effective strategies to treat and control the disease. Its focus has been informed by the research recommendations of recent NIH-sponsored meetings and planning activities on this topic.
Through scientific endeavors such as the Hepatitis B Clinical Research Network and investigator-initiated research, conferences, and research planning efforts including the trans-NIH Action Plan for Liver Disease Research and the new National Commission on Digestive Diseases’ research plan, the NIH is now building upon the extraordinary legacy of past research advances to make additional contributions toward alleviating the burden of hepatitis B, in the U.S. and throughout the world.
Additional information on hepatitis B and research progress on this disease is available through NIH resources.
Newly-identified Genetic Variations Account for Much of the Increased Burden of Kidney Disease among African Americans
For the first time, researchers have identified variations near a single genetic locus that are strongly associated with kidney diseases disproportionately affecting African Americans. Two research teams independently studied kidney diseases arising from causes other than diabetes. Kidney disease can lead to kidney failure, requiring long-term dialysis or a kidney transplant to sustain life. Using a type of genome-wide association technique that relies on differences in the frequency of genetic variations between populations, the researchers identified several variations in the region of the MYH9 gene on chromosome 22 as major contributors to excess risk of non-diabetic kidney disease among African Americans. Somewhat surprisingly, both research teams found no association between the MYH9-area variants and diabetes-related kidney failure in this population, a finding that suggests the mechanisms leading to chronic kidney disease and then to kidney failure may be different depending on the underlying cause. This insight may have important implications for the treatment of the very large number of individuals with kidney disease.
Kidney Disease: A Heavy Burden for Some Populations
Early-stage kidney disease often has no symptoms. Left unchecked, however, it can silently progress to kidney failure, a condition in which the kidneys are no longer able to filter waste and excess fluids from the blood. As many as 26 million U.S. adults over the age of 20 are estimated to have some degree of impaired kidney function,1 and over a half million Americans were receiving life-sustaining kidney dialysis or were living with a kidney transplant at the end of 20062 (the most recent year for which complete data are available). Despite recent advances in preserving kidney function in individuals with early-stage kidney disease, serious health complications are common. In fact, roughly half of the people with kidney disease will die from cardiovascular disease before their kidney function further deteriorates and they progress to full-blown kidney failure.3
The two most common causes of kidney failure are diabetes and hypertension (high blood pressure), which together account for about 70 percent of all new cases.2 Both conditions are seen more frequently in members of ethnic minorities, and African Americans bear an especially heavy burden of kidney disease. African Americans are nearly 3 times as likely as whites to develop kidney failure from any cause.4 One such cause is a form of kidney disease called focal segmental glomerulosclerosis (FSGS), in which the glomeruli—the tiny filtering units of the kidneys—are damaged and scarred.5 Most FSGS arises from unknown causes and is termed “idiopathic” FSGS. African Americans are approximately 5 times more likely to develop idiopathic FSGS compared to individuals of other racial backgrounds. The health disparity increases with HIV infection: African Americans are 18 to 50 times more likely than whites to develop FSGS related to infection with HIV, the virus that causes AIDS.6,7 These rather striking disparities represent a serious public health problem, not only because of the kidney disease itself, but also because people who have even mild- to moderately-severe kidney disease typically have high blood pressure and other risk factors for serious complications such as cardiovascular disease.2
What accounts for this dramatically increased risk of severe kidney disease in African Americans? Scientists and physicians have long known that kidney disease tends to run in families and cluster in ethnic groups. These observations indicate that kidney disease is likely to have a genetic component, although environmental factors almost certainly play a role as well. However, studies that have attempted to identify genes that confer susceptibility to kidney disease and kidney failure have generally not been successful.
Furthermore, it is not clear that all forms of kidney disease originate from a common starting point or progress through a shared pathway. For example, while patients with diabetes or those with hypertension are at increased risk of developing kidney disease and kidney failure, not all patients at risk go on to develop kidney disease. In addition, it is not clear that the underlying disease mechanisms which initiate injury and facilitate progression in diabetic and hypertensive kidney disease are the same. If, in fact, these two conditions cause kidney disease through different pathways, then treatment strategies for people whose kidney disease is a consequence of diabetes could be very different from those for people whose kidney disease is attributed to hypertension. Because of these considerations, it is especially important to identify the genetic contribution to disease development and progression and characterize the biological pathways that lead to diminished kidney function.
New Techniques Allow Researchers To Ask New Questions
For some conditions, mutations in a single gene are sufficient to cause disease, and careful analysis of inheritance patterns in families can often readily identify the gene responsible. These diseases are termed “simple” genetic diseases, because their underlying cause, while not always easy to uncover, tends to lead to disease in a straightforward way.
However, many diseases likely arise not from mutations in a single gene but from the interplay of complex genetic susceptibility—resulting potentially from multiple genes, each of which may have only modest effects—and environmental influences. In the case of these “complex” genetic diseases, identifying the genetic contribution of multiple, widely-spaced chromosomal regions to disease development and progression can be quite difficult.
Recently, a new technique, termed admixture mapping, has been developed to search for genes that cause complex genetic diseases. Admixture mapping is particularly useful for examining the underlying genetic causes of complex diseases whose frequency differs greatly between two populations. Using admixture mapping, scientists examine haplotypes—groups of genes spanning multiple chromosomal loci that are transmitted together. Because haplotypes are inherited as units, they tend to be similar among members of the same population but to differ between members of different populations. Admixture mapping takes advantage of the fact that genetic variants that are not linked to one another tend to dissociate from one another rather rapidly—within a few generations—while those that are linked tend to stay together longer. The relatively recent (anthropologically speaking) mixing of European and African populations is referred to as “admixture”: the formation of a new population with a heterogeneous mixture of African- and European-derived haplotypes.
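The rate at which variants dissociate can be illustrated with a back-of-the-envelope calculation: the chance that two loci are still co-inherited after g generations is roughly (1 − r)^g, where r is the per-generation recombination fraction between them (0.5 for unlinked loci, near 0 for tightly linked ones). A minimal Python sketch, with illustrative recombination fractions that are assumptions rather than values from the studies described here:

```python
def cotransmission_prob(r, generations):
    """Approximate probability that two loci remain co-inherited after a
    given number of generations, where r is the per-generation
    recombination fraction between them (0.5 means unlinked)."""
    return (1 - r) ** generations

# Unlinked loci (r = 0.5) dissociate within a few generations,
# while tightly linked loci (illustrative r = 0.001) stay together.
unlinked = cotransmission_prob(0.5, 10)    # roughly 0.001
linked = cotransmission_prob(0.001, 10)    # roughly 0.99

print(f"unlinked loci after 10 generations: {unlinked:.4f}")
print(f"linked loci after 10 generations:   {linked:.4f}")
```

This is why, a handful of generations after admixture, only variants physically close to a disease-associated locus still travel with it, letting researchers narrow the search to a small chromosomal region.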
A New Window into the Genetics of Kidney Disease
Because of the striking difference in kidney disease and kidney failure rates between whites, who are largely of European ancestry, and African Americans, researchers had speculated that admixture mapping might be an effective way to try to identify which chromosomal regions are associated with the development of kidney disease. The rationale behind these experiments was that chromosomal regions that confer an increased risk of kidney disease would be more common in individuals of African ancestry than in those of European ancestry. At least two groups of scientists hypothesized that, by using admixture mapping, they could identify genetic variants that tracked closely with disease development.
In the fall of 2008, the two research teams reported the identification of genetic variants more common in African Americans that seemed to explain a large proportion of the excess burden of FSGS and HIV-associated and other non-diabetic kidney disease in African Americans. In addition, the contribution of this genetic variation to an individual’s risk of developing kidney disease is higher than that observed for nearly all previously described genetic factors discovered by genome-wide scans, including those for prostate cancer, diabetes, cardiovascular disease, breast cancer, and hypertension.
One research team, which included members of the NIDDK Intramural Research Program’s Kidney Disease Branch and other researchers, studied individuals with FSGS, HIV-associated FSGS, and hypertensive end-stage kidney disease. The other team, consisting of researchers working as part of the NIDDK-funded Family Investigation of Nephropathy and Diabetes Consortium, was led by researchers at The Johns Hopkins University and included collaborating scientists at other institutions. They examined patients with kidney failure arising from multiple causes, including diabetes, hypertension, FSGS, and HIV infection. Using admixture mapping, both groups of scientists identified a genetic variant in a region of chromosome 22 that correlated strongly with susceptibility to certain kidney diseases.
Fine mapping of this chromosomal region revealed that the gene MYH9 was located in the identified area. MYH9 encodes the protein “non-muscle myosin heavy chain 9,” which is part of non-muscle myosin IIA. Myosin is a protein made up of several subunits and serves as a cellular motor, providing the force for cell movement, cell tension, and cell division. The most common form of myosin is found in skeletal muscle and is involved in muscle contraction. Non-muscle myosin IIA is a form of myosin found in many tissues, including—despite its name—muscle. The MYH9 gene is expressed in podocytes, specialized cells within kidney glomeruli that play an important role in the filtering of waste and excess fluid. Podocyte damage is a hallmark of FSGS and other kidney diseases that can lead to reduced kidney function and/or kidney failure. However, it is not known how variations in the MYH9 region might impact podocyte function.
The degree to which these genetic variants increase the risk of certain kidney diseases in African Americans is truly striking. MYH9 risk variants account for nearly all of the increased risk for idiopathic FSGS and HIV-associated FSGS among African Americans compared to European Americans, and for a portion of the increased risk for hypertensive kidney disease. Surprisingly, however, these variants were not associated with kidney failure arising from diabetes.
The risk of developing kidney disease is strongest when an individual has two copies of the risk variant. Nonetheless, even among individuals with two risk variants, kidney disease is uncommon: 36 percent of African Americans carry two copies of the risk variant, but only approximately 1 in 50 of these individuals will develop FSGS during the course of a lifetime. It is likely that other factors, possibly additional genes or environmental influences, are important in triggering FSGS. Future research efforts will focus on the identification and characterization of these additional factors.
It is important to note that it is the presence of the variant, not African ancestry per se, that confers the increased risk of kidney failure. However, the variants are far more common among people of African ancestry than among those of European ancestry: 60 percent of alleles among African Americans are the risk variant (84 percent of African Americans carry one or two copies of the risk allele), while only 4 percent of alleles among European Americans are the risk variant (8 percent of European Americans carry one or two copies of the risk allele).
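The carrier percentages quoted above follow directly from the reported allele frequencies under Hardy-Weinberg equilibrium, a standard population-genetics assumption. A quick arithmetic check:

```python
def genotype_fractions(p):
    """Given a risk-allele frequency p, return the Hardy-Weinberg genotype
    fractions: (two copies, exactly one copy, at least one copy)."""
    two = p * p               # homozygous for the risk allele
    one = 2 * p * (1 - p)     # heterozygous carriers
    return two, one, two + one

# Risk-allele frequency of 60 percent among African Americans
two, one, carrier = genotype_fractions(0.60)
print(f"two copies: {two:.0%}, one or two copies: {carrier:.0%}")
# two copies: 36%, one or two copies: 84%

# Risk-allele frequency of 4 percent among European Americans
two, one, carrier = genotype_fractions(0.04)
print(f"one or two copies: {carrier:.0%}")
# one or two copies: 8%
```

The 36 percent figure for two copies and the 84 and 8 percent carrier figures in the text are thus mutually consistent with the stated allele frequencies.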
Implications and Future Directions
Although both studies described here implicate variations in the chromosomal region surrounding MYH9 as important risk factors for kidney disease, scientists have not identified specific mutations in the MYH9 gene that might suggest a causal mechanism. One possibility is that the critical genetic variations lie not within the coding sequence of the MYH9 gene, but in the surrounding chromosomal regions. The exact nature of these hypothetical variations, and the ways they might alter cellular metabolism or function so as to confer greater risk of non-diabetic kidney disease, are the subject of ongoing investigations. Additional future studies will examine the pattern of MYH9 expression across tissues, the role played by MYH9 in podocyte function, and how that function might be disrupted in individuals carrying the risk variant.
One of the central questions facing researchers who study kidney disease is whether all kidney disease is created equal. Many different conditions—diabetes, hypertension, and FSGS were among those studied by these investigators—put people at increased risk for chronic kidney disease and kidney failure, but it is not known whether these conditions share a common disease pathway or whether each has unique characteristics. This distinction is important, because current approaches to therapy are aimed at preserving kidney function and addressing the underlying health problem, not at addressing specific processes that may damage the kidneys. The discovery that a particular genetic variation confers susceptibility to kidney failure arising from some causes—such as hypertension and FSGS—but not others—such as diabetes—indicates that there are likely at least two pathways to kidney failure.
These findings also validate the use of admixture mapping to perform genome-wide scans to identify susceptibility genes for complex diseases. Insights gained from the studies have important implications for improved patient care and for understanding the basic biology of kidney disease and kidney failure.
Finally, this story highlights the importance of collaborations between scientists at the NIH and NIH-funded investigators at outside research institutions. Government-academic collaborations of this kind are one way to move translational research forward, from the bench to the bedside and beyond, and provide the knowledge base for developing new therapies for chronic health disorders such as kidney disease and kidney failure.
The investigators in the NIDDK Intramural Research Program, who first identified the MYH9 gene as contributing to kidney disease, have been conducting basic and clinical research studies of kidney disease, focusing on focal segmental glomerulosclerosis, at the NIDDK since 1995. Scientists at the National Cancer Institute’s Center for Cancer Research also contributed to this study. The Johns Hopkins-led research team, which confirmed and extended the MYH9 findings, is part of the NIDDK-funded Family Investigation of Nephropathy and Diabetes (FIND) Consortium. First funded in 1999, the Consortium was established to identify genetic pathways that may be critical for the development of diabetic kidney disease, as well as to identify candidate genes and/or pathways that may be amenable to therapeutic strategies to prevent the onset or progression of kidney disease. Though originally conceived as an effort to identify genes associated with diabetes-related kidney disease, FIND investigators discovered an important clue regarding non-diabetic kidney disease. The two studies were published in the journal Nature Genetics in October 2008; the citations are Nat Genet 40: 1175-1184, 2008 and Nat Genet 40: 1185-1192, 2008.