Clin Transplant Res 2024; 38(4): 354-376
Published online December 31, 2024
https://doi.org/10.4285/ctr.24.0029
© The Korean Society for Transplantation
Minseok Kang1, Hwon Kyum Park1, Kyeong Sik Kim1, Dongho Choi1,2,3,4
1Department of Surgery, Hanyang University College of Medicine, Seoul, Korea
2Hanyang Institute of Bioscience and Biotechnology, Hanyang University, Seoul, Korea
3Research Institute of Regenerative Medicine and Stem Cells, Hanyang University, Seoul, Korea
4Department of HY-KIST Bio-convergence, Hanyang University, Seoul, Korea
Correspondence to: Kyeong Sik Kim
Department of Surgery, Hanyang University College of Medicine, 222-1 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
E-mail: toopjoo12@gmail.com
Dongho Choi
Department of Surgery, Hanyang University College of Medicine, 222-1 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
E-mail: crane87@hanyang.ac.kr
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
The progress of transplantation has been propelled forward by animal experiments. Animal models have not only provided opportunities to understand complex immune mechanisms in transplantation but also served as a platform to assess therapeutic interventions. While small animals have been instrumental in uncovering new therapeutic concepts related to immunosuppression and immune tolerance, the progression to human trials has largely been driven by studies in large animals. Recent research has begun to explore the potential of porcine organs to address the shortage of available organs. The consistent progress in transplant immunology research can be attributed to a thorough understanding of animal models. This review provides a comprehensive overview of the available animal models, detailing their modifications, strengths, and weaknesses, as well as their historical applications, to aid researchers in selecting the most suitable model for their specific research needs.
Keywords: Animal models, Graft rejection, Immunosuppression therapy, Immune tolerance, Xenotransplantation
In modern medicine, thousands of patients with end-stage diseases receive lifesaving transplants. Advances in transplantation have been driven by collaborative efforts across multiple disciplines. Over the past century, researchers have made strides in immunosuppression [1–3], performed increasingly complex transplant operations [4,5], and started introducing immune tolerance induction therapies [6] into clinical practice (Fig. 1). Behind the scenes, the development of reliable animal models has underpinned these advances.
Transplantation research has evolved using diverse animal models, each presenting unique advantages and disadvantages (Table 1). Most animal experiments utilize small animal models, predominantly mice and rats. Initial studies on alloimmunity were primarily conducted using these rodent models. By the mid-20th century, mice had become an increasingly significant model for studying the human immune system [7]. Mice have been preferred not only for their cost-effectiveness but also for their genetic similarities to humans and the availability of options for immunological modification [8]. Techniques such as manipulating the donor and recipient genome by deleting specific components of the alloimmune response, or creating humanized mice by engrafting a functional human immune system into an immunodeficient mouse, have provided valuable insights into transplant immunology [9,10]. These models make it possible, respectively, to dissect the requirements of the alloimmune response and to evaluate drugs without risking patient health. Additionally, mouse models offer a wider variety of specific reagents, including monoclonal antibodies, compared to other animal models [11]. However, mouse models also have several limitations. The use of exclusively inbred mice, which have low genetic diversity, may obscure the rejection or tolerance pathways present in humans [12]. Furthermore, their small size poses technical challenges in transplantation procedures, making the outcomes more dependent on the surgeon’s skill.
Table 1. Key advantages and limitations of common animal models in transplant immunology research
| Trade-off | Mice (small animal model) | Rats (small animal model) | Nonhuman primates (large animal model) | Pigs (large animal model) |
|---|---|---|---|---|
| Pros | • Cost-effective • Genetically similar to humans • Availability of genetic modification | • Reduced surgical complexity • Robust transplant outcomes | • High homology with humans • Ideal for studying biologic agents and therapies | • Extensive knowledge of porcine major histocompatibility complex • Availability of genetic modification • Physiological similarities |
| Cons | • Limited genetic diversity may not fully represent human alloimmune responses • Proficient surgical technique is required | • Less genetic manipulation capability than mice • May have limited genetic diversity | • High cost • Difficulties in breeding • Ethical concerns • Endangered status limits accessibility and use in research | • High cost • Difficulties in breeding |
Rat models offer distinct advantages over mouse models in certain areas. Their larger size simplifies surgical interventions by reducing technical complexities [13]. Additionally, transplant outcomes in rat models are generally more robust and predictable than those in mouse models [14]. Currently, rats are considered the gold-standard animal model for kidney and liver transplantation, demonstrating reliable technical adaptations (Fig. 2) [15,16]. Rat models also exhibit metabolic and physiological similarities to humans, making them preferable for studies in physiology and pharmacology [17,18]. Although rat oocytes are more sensitive to activation and tolerate genetic manipulation poorly, recent advances in transgenic (Tg) techniques for rats have shown promise [19,20].
While small animal models are invaluable for characterizing novel biological and therapeutic concepts, large animal models are crucial for translating these concepts into clinical applications [21]. Large animal models possess greater genetic diversity, which adds complexity and makes them suitable for assessing practicality, safety, and overall efficacy [22]. However, despite their potential clinical relevance, the high costs and limited accessibility of these models restrict their widespread use.
Currently, pigs and nonhuman primates (NHPs) are the most commonly used large animal models for transplantation. Pigs offer a significant advantage due to the well-documented understanding of their major histocompatibility complex (MHC) and the possibilities for genetic modification [23]. They also share several anatomical and physiological traits with humans, particularly in the cardiovascular, urinary, integumentary, and gastrointestinal systems [24]. NHPs, in contrast, are ideal for studying highly targeted biologic agents and antibody-based therapies due to their high degree of homology with humans [21]. However, ethical issues and restrictions related to their endangered status have curtailed their use, leading to the National Institutes of Health ceasing funding for chimpanzee research [25].
Diagnostic tools for assessing graft rejection or therapeutic efficacy in animal models are particularly important. Several key measurement tools are used to characterize both acute and chronic rejection models, as well as to evaluate therapeutic interventions aimed at mitigating graft rejection (Table 2).
Table 2. Measurements for assessing acute and chronic rejection in animal models
| Category | Acute rejection models | Chronic rejection models |
|---|---|---|
| Histopathology | • Tissue biopsies for cellular infiltration and damage | • Longitudinal analysis for fibrosis, vascular changes, and tissue remodeling |
| Serum biomarkers | • Monitoring cytokines (e.g., IL-2, IFN-γ) and chemokines (e.g., MCP-1) • Assessment of donor-specific alloantibodies | • Assessment of donor-specific alloantibodies |
| Functional tests | • Elevation of serum creatinine in renal transplant models • Cessation of cardiac pulse in cardiac transplant models • Hyperglycemia in islet transplant models | • Tracking organ-specific function (e.g., serum creatinine levels) |
| Graft survival (both model types) | • Graft failure is examined using the above measurements • Survival analysis is employed to compare grafts under different conditions or treatments | |
IL, interleukin; IFN, interferon; MCP, monocyte chemoattractant protein.
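The graft survival comparison listed in Table 2 is typically performed with Kaplan-Meier curves and a log-rank test. As a rough illustration of how two study arms might be compared, the following is a minimal pure-Python log-rank test; all survival times, group sizes, and treatment labels below are hypothetical and are not data from any study cited in this review.

```python
# Minimal log-rank test comparing graft survival between two arms (a sketch;
# the data below are invented for illustration). Each observation is
# (time_in_days, event), where event=1 means graft failure and event=0 means
# the graft was still functioning at last follow-up (censored).

def logrank_statistic(group_a, group_b):
    """Return the log-rank chi-square statistic for two (time, event) samples."""
    event_times = sorted({t for t, e in group_a + group_b if e == 1})
    observed_minus_expected = 0.0
    variance = 0.0
    for t in event_times:
        n_a = sum(1 for ti, _ in group_a if ti >= t)  # at risk in group A
        n_b = sum(1 for ti, _ in group_b if ti >= t)  # at risk in group B
        d_a = sum(1 for ti, e in group_a if ti == t and e == 1)  # failures in A
        d_b = sum(1 for ti, e in group_b if ti == t and e == 1)  # failures in B
        n, d = n_a + n_b, d_a + d_b
        if n < 2:
            continue
        observed_minus_expected += d_a - d * n_a / n   # O - E for group A
        variance += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    if variance == 0:
        return 0.0
    return observed_minus_expected ** 2 / variance

# Hypothetical example: untreated allografts fail early, while a treated arm
# shows prolonged survival with some grafts censored at study end.
control = [(7, 1), (8, 1), (9, 1), (10, 1), (12, 1), (14, 1)]
treated = [(35, 1), (60, 1), (90, 0), (90, 0), (100, 0), (120, 0)]

chi2 = logrank_statistic(control, treated)
# Compare against the chi-square (1 df) 5% critical value of 3.84.
print(f"log-rank chi-square = {chi2:.2f}")  # → log-rank chi-square = 12.09
```

This is the same statistic reported by standard survival packages (e.g., `survdiff` in R or `lifelines` in Python), which would be preferred in practice over a hand-rolled version.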
Understanding the mechanisms underlying the host's alloimmune response and subsequent allograft injury is crucial for developing successful clinical interventions in transplantation [26]. Animal models have been instrumental in identifying contributors and potential therapeutic targets for complex alloimmune injuries.
The initial damage that occurs in solid organ transplantation is ischemia-reperfusion injury (IRI) [27]. Ischemia is caused by reduced blood flow during transplantation and leads to adenosine triphosphate depletion and cell death. By administering a small-molecule inhibitor of necroptosis in an ischemic injury mouse model, researchers have identified necroptosis as a key immunological aspect of cell death [28,29]. Necroptotic cells release intracellular contents and danger-associated molecular patterns (DAMPs), which promote inflammation [30]. DAMP-associated immune responses have been investigated through necroptosis-inhibitory gene knockout mouse models [31–33].
Reperfusion leads to a sudden increase in reactive oxygen species, causing further damage and simultaneously activating the immune system. Studies using animal models have suggested that T cells may play a role in mediating reperfusion injuries [34]. Increases in natural killer (NK) and natural killer T (NKT) cells have been observed in the kidneys of mice with IRI [35]. Mouse models lacking NKT cells showed protection against renal IRI, accompanied by a decrease in interferon (IFN)-gamma-producing neutrophils [36]. The opposing roles of NKT cell subsets in hepatic IRI were demonstrated in mouse models deficient in either type I or type II NKT cells, revealing that type I NKT cells exacerbate injury [37]. T cell-deficient nu/nu mice and CD4-depleted mice were also found to be protected from hepatic IRI [38]. Furthermore, a study using a CD4/CD8 double knockout mouse model for renal IRI showed improved renal function compared to that in wild-type mice [39].
Large animal models provide several advantages over small animal models, including greater anatomical and physiological similarities to human systems. Although a comprehensive review of large animal models of cardiac IRI is available [40], these models are primarily used as platforms for evaluating preventive therapeutics for IRI. Da Silva et al. [41] developed an NHP renal IRI model by subjecting donor kidneys to cold ischemia and explored the effects of odulimomab. Additionally, a porcine renal IRI model has been documented, which demonstrated the effectiveness of CD47 monoclonal antibody (mAb) blockade in preventing IRI [42]. In clinical settings, organ preservation techniques are utilized to mitigate this undesirable immune response [43–45], with machine perfusion being the most effective method currently available [46].
The inevitable sequence of IRI both promotes and exacerbates the alloimmune response [47,48]. Acute cellular rejection involves the activation of T lymphocytes in response to alloantigens presented by both donor and host antigen-presenting cells (APCs) [49]. Rodent models have provided comprehensive insights into this field.
The role of secondary lymphoid organs as recognition sites for donor antigens in transplantation settings has been extensively studied yet remains a subject of debate. Aly/aly mice, characterized by the absence of lymph nodes and Peyer’s patches and by structurally altered thymuses and spleens, were unable to reject allogeneic skin grafts. Furthermore, splenectomized aly/aly mice failed to reject cardiac allografts [50,51]. In contrast, Hox11–/– knockout mice, which lack spleens, successfully rejected allogeneic skin grafts [51]. These results suggest that skin allograft rejection depends on lymph nodes, whereas the rejection of vascularized allografts can occur in the presence of either lymph nodes or a spleen. The transplantation of intestines in immunodeficient mice further underscored the importance of secondary lymphoid organs in the rejection of vascularized allografts [52,53]. However, a study using mice that lacked both lymph nodes (LTα–/–) and Peyer’s patches (LTβR–/–), and that had undergone splenectomy, showed rejection of both skin and cardiac allografts [54]. Additionally, the rejection of lung allografts in the absence of secondary lymphoid organs has been reported, suggesting that priming occurs within the lung allografts themselves [55].
The role of APCs in priming alloreactive T cells has been explored through animal model studies. Depletion and subsequent restoration of passenger leukocytes in rat models revealed that dendritic cells (DCs) play a key role in alloantigen presentation [56]. Subsequent research using lymphocyte-deficient RAG–/– mouse models investigated how the innate immune system recognizes allografts and activates DCs. Zecher et al. [57] found that injecting allogeneic RAG−/− splenocytes into the ear pinnae of RAG−/− recipients induced more hypersensitivity reactions in host myeloid cells compared to injections of syngeneic splenocytes. Additional depletion and cell transfer experiments indicated that these reactions were mediated by monocytes. Oberbarnscheidt et al. [58] observed that in RAG−/−γc−/− mice, allografts attracted interleukin (IL)-12 producing monocyte-derived DCs, whereas syngeneic grafts resulted in less differentiation of monocytes into DCs. These findings highlight the critical role of monocytes in recognizing allogeneic non-self, which acts as the initial trigger for acute rejection.
Initially, it was believed that allorecognition through APCs occurred exclusively via the direct presentation of donor antigens. Subsequently, an indirect pathway involving processed donor antigen peptides was identified in rat transplant models [56,59]. To further investigate this, Tg mouse models were developed, wherein CD4 T cells are engineered to recognize only allogeneic peptides. Studies using these Tg models demonstrated that activation of CD4 T cells via the indirect pathway persists longer than activation through the direct pathway, potentially leading to chronic rejection [60,61].
The differentiation of naïve T cells upon allorecognition has been investigated. Bolton et al. [62] demonstrated that when naïve CD4 T cells are adoptively transferred at the time of transplantation, they trigger acute rejection in kidney allografts in Piebald Virol Glaxo (PVG) athymic nude rats, whereas naïve CD8 T cells do not induce rejection. A study of CD4 and CD8 knockout mice showed that CD4 cells were required to initiate heart and skin allograft rejection [63]. However, orthotopic lung transplantation into CD4 knockout mice led to allograft rejection, showing that mouse lung rejection occurs independently of CD4 T cells [64]. Despite this, CD4 T cells remain a critical factor in acute rejection. The cytokine gene knockout mouse model has emerged as a tool to further explore CD4 T cell subsets and their roles in acute rejection and the induction of tolerance [65,66].
Antibody-mediated rejection (AMR) occurs when alloantibodies target human leukocyte antigen (HLA) on the graft endothelium, activating the complement cascade [67]. Historically, the generation of antibodies during the acute phase of transplantation was considered incidental, as the passive administration of immune serum to rodent allograft recipients did not trigger acute rejection [68,69]. However, as animal models have become better defined, questions regarding AMR have been reignited.
Nozaki et al. [70] demonstrated that transferring alloantibodies from wild-type mice with MHC-incompatible A/J cardiac transplants to RAG–/– mice did not sufficiently induce AMR. However, these researchers found that CCR5 knockout mice (CCR5–/–) with MHC-incompatible A/J transplants produced a higher titer of alloantibodies than wild-type mice, accompanied by intense C3d deposition in the endothelium, indicative of AMR [70,71]. Notably, transferring alloantibodies to RAG–/– mice effectively induced AMR. These findings support the use of CCR5–/– mice as an effective model for studying AMR, demonstrating the critical role of antibodies in mediating graft rejection.
While the cardiac transplant model is preferred for studying AMR due to its surgical simplicity and the adequate intensity of rejection [72], renal and lung transplant models have also been utilized. Renal AMR models have been demonstrated in the A/J to CCR5–/– mouse model [71], in immunodeficient mice through the passive transfer of donor-specific alloantibodies [73], and in a C57BL/6 skin allograft presensitized Balb/c renal transplant model [74]. A similar approach to AMR development was reported in a C57BL/6 lung transplant model, involving presensitization with a Balb/c skin allograft [75].
The clinical link between graft rejection and the complement fragment C4d was established by Feucht et al. [76,77] and later validated as a diagnostic biomarker for AMR [78]. Subsequently, animal models were used to explore complement inhibition as a strategy to prevent AMR. Initial insights were provided by xenotransplantation models. Rats, depleted of C3 through the administration of cobra venom factor and transplanted with guinea pig cardiac xenografts, exhibited significantly prolonged graft survival [79]. Additionally, Balb/c mice, presensitized with rat splenocytes and treated with anti-C5 mAb to block C5 cleavage, received cardiac xenografts from Lewis rats [80]. The findings indicated that complement inactivation can effectively prevent AMR. Further studies using mouse allograft transplant models showed that blocking C5a cleavage or C5a receptors can reduce allograft rejection [81,82].
Despite significant advancements in modern immunosuppression that effectively mitigate acute graft injury, the development of chronic graft rejection continues to pose a critical challenge in achieving long-term allograft survival. Chronic allograft rejection presents variably across different organs, influenced by their unique anatomical features and interactions with the environment [83]. Animal models have proven effective in identifying the mechanisms underlying chronic rejection in various organs. A major advantage of using animal models is their ability to produce lesions similar to those observed in human allograft rejection, but within a much shorter timeframe.
Kidney
Chronic rejection of renal transplants is characterized by chronic allograft nephropathy (CAN), which is often associated with hypertension and increased urinary protein loss [84]. CAN is defined as a histopathological condition marked by chronic interstitial fibrosis and tubular atrophy within the renal allograft [85]. The most widely used model for studying CAN in kidney transplants is the rat Fischer 344 to Lewis (Fischer-Lewis) model, which was first described by White et al. [86] in 1969 [87]. This model has provided extensive pathological and functional insights, demonstrating a gradual deterioration of renal function, progressive destruction of renal parenchyma, and continuous production of alloantibodies, culminating in complete graft rejection after 48 weeks. Diamond et al. [88] adapted the Fischer-Lewis model by incorporating cyclosporin to prevent episodes of acute rejection. This modification has proven the model’s effectiveness in mimicking human renal allograft events, particularly in linking functional abnormalities with chronic allograft injuries.
Building on insights from the Fischer-Lewis model of chronic rejection, various interventions have been developed to prevent this condition. Blocking the CD28-B7 T cell costimulation with CTLA4Ig or anti-CD28 mAb has been effective in preventing both acute and chronic rejection, underscoring the critical role of the T cell costimulatory pathway in alloantigen recognition [89,90]. Additionally, manipulating alloantigen-independent factors by administering agents such as antihypertensive drugs, angiotensin antagonists, and growth factors in the Fischer-Lewis model has shown promising results in reducing chronic injury [87].
Despite the unpredictable long-term outcomes associated with mouse models, Jabs et al. [91] reported utilizing these models with variations in the MHC to study general histopathological changes over a 21-day period. Using this model, early antibody-independent and late antibody-mediated injuries were observed. Similarly, Torrealba et al. [92] proposed the use of an NHP model for chronic renal rejection. Biopsies from renal allografts of CD3 T cell-depleted rhesus monkeys, transplanted with MHC-mismatched allografts at day 84, showed results comparable to those seen in human CAN.
Heart
In heart transplants, chronic rejection primarily manifests as cardiac allograft vasculopathy (CAV), which is likely due to the heart’s tendency toward atherosclerosis [93]. CAV represents an accelerated form of coronary artery disease (CAD) that occurs despite adequate immunosuppression and is a significant long-term complication following heart transplantation [94,95]. In rat studies, Lewis to Fischer 344 transplants have shown diffuse concentric intimal proliferation by terminal rejection or by day 120, which mirrors the characteristics of CAV observed in human cardiac allografts [96,97]. Similar outcomes have been observed in cardiac allografts from Dark Agouti to Wistar-Furth rat models, though these effects were mitigated by cyclosporine A (CsA) in a dose-dependent manner [98]. Poston et al. [99] developed an MHC class II fully mismatched PVG to August Copenhagen Irish rat model specifically to replicate the conditions of CAV. This model displayed a more distinct chronic rejection compared to other rat models and showed minimal response to CsA. Notably, daily administration of rapamycin, targeting either smooth muscle or B cell inhibition, effectively prevented CAD in this model, offering a promising approach for CAD prevention.
Mouse models of CAV have also been studied extensively. Heterotopic heart transplantation from B10.A-strain to B10.BR recipients resulted in intimal proliferative vascular lesions over a period of 30–50 days, serving as a model for CAV [100]. Additional strains in isolated MHC class II mismatched mouse transplant models (bm12 into B6 or H-2(b)) have also demonstrated the development of CAV [101–103]. These models have provided insights into the mechanisms underlying CAV. Nagano et al. [103] demonstrated that IFN-gamma is essential for CAV by comparing bm12 heart transplants in wild-type and IFN-gamma-deficient B6 mice. Kimura et al. [104] proposed IL-16 neutralization as a potential therapy for CAV, based on studies using bm12 to wild-type versus IL-16 deficient mice. Other studies using these cardiac models have elucidated the roles of CD4 Th17 cells [102], chemokine receptors (CCR1, CCR5, Met-RANTES) [105], and effector-memory T cell trafficking [106] in mediating CAV.
Lung
Lung transplants are susceptible to chronic rejection, manifesting as obliterative bronchiolitis (OB), which may be associated with exposure to environmental antigens or infections [107]. Various animal models have been developed to study OB, including orthotopic lung transplants in rodents and larger animals, as well as heterotopic tracheal transplantation at subcutaneous, intraomental, and intrapulmonary sites.
The orthotopic lung transplantation model not only simulates physiological ventilation and perfusion during transplantation but also closely mirrors the surgical procedures performed in humans [108]. Compared to other large animal models, orthotopic lung transplantation in miniature swine provides the benefit of well-defined MHC loci, facilitating detailed studies of alloimmune responses. Transplantation from MHC-matched donors with minor antigen mismatches, coupled with short-term immunosuppression, results in obliterative airway lesions [109]. However, due to the high costs and challenges in postoperative care, researchers often prefer small animals as an OB model [110]. A widely used rat lung transplantation model to study OB is the Fischer 344 to Wistar Kyoto (Fischer-WKY) model, which shows OB patterns from day 49 to day 98 posttransplant [111]. The orthotopic lung transplant mouse model (Balb/c to C57BL/6N) with a major MHC mismatch, receiving daily immunosuppression, has been reported to display various pathological features of OB. While previous studies in mouse lung transplantation identified chronic lung allograft rejection as an airway-centered rejection termed “OB,” this study pinpointed the initial site of chronic rejection at the arteriole [112].
Tracheal transplants offer advantages over lung transplants due to their technical simplicity and the reduced observation time required. Heterotopic tracheal transplantation serves as a reproducible model for studying alloantigen-associated fibrosis. For instance, murine models have elucidated one of the fundamental mechanisms of fibroproliferative tissue remodeling. In experiments involving subcutaneous tracheal transplants in Smad3 knockout mice, the critical role of transforming growth factor beta in promoting fibroproliferation and matrix deposition in OB was demonstrated [113]. Despite the widespread use of heterotopic tracheal transplant models in OB research, a major critique is that they replicate fibrotic obliteration in a large cartilaginous airway, which differs histologically from the small airways where OB typically occurs [108].
Liver
In liver transplants, both bile duct loss and intrahepatic vessel damage contribute to allograft failure [114]. With modern immunosuppressive regimens, chronic rejection occurs in only 2%–3% of cases and typically manifests years after the transplant [115]. Furthermore, the etiology of chronic rejection is complex and only partially understood [116]. Given these circumstances, reports of animal models of chronic liver rejection are rare.
By elucidating the dysregulated immune mechanisms that drive alloimmune injury, researchers have made significant strides in identifying therapeutic interventions to mitigate its deleterious effects. Immunosuppression is used therapeutically to manage this undesired immune response and, ultimately, to achieve tolerance of donor antigens. Immunosuppressive agents, each stemming from unique concepts and introduced at different times, have advanced in parallel (Fig. 1).
From bench to bedside, the use of large animal models (e.g., canine, pig, and NHP models) has played a significant role in advancing therapeutic immunosuppression. Unlike conventional laboratory murine models, which are bred in specific-pathogen-free environments, large animal models more accurately mimic the complexities of human environments. This similarity is attributed to their relatively mature immune systems and increased exposure to alloantigens, establishing them as effective platforms for drug testing [117].
The modern era of immunosuppressive therapy began with the introduction of the antiproliferative agent, 6-mercaptopurine (6-MP) [118]. In the 1950s, oncologists began exploring drugs such as nitrogen mustard and 6-MP for treating cancer. A pivotal study by Schwartz and Dameshek [3] in 1959 demonstrated that 6-MP could suppress the immune response in rabbits, sparking interest in its potential for use in transplantation. Subsequently, Calne [119–121] confirmed the effectiveness of the antimetabolites 6-MP and azathioprine in prolonging kidney allograft survival in a canine model. Although azathioprine was initially considered a cornerstone agent, its tolerizing effect in human transplants has proven to be less robust. This limitation has led to a shift toward the development of mycophenolate mofetil (MMF), an antiproliferative agent that offers improved efficacy in preventing graft rejection [122].
MMF, a selective inhibitor of purine synthesis, was developed as an immunosuppressant following observations of selective T and B cell reduction in children with a purine metabolism disorder [123]. It proved effective in rat, NHP, and canine models for reversing acute renal allograft rejection [124,125]. Building on these promising results from animal experiments, MMF was successfully translated from animal models to human use and received approval from the U.S. Food and Drug Administration (FDA) in 1995.
The next paradigm in immunosuppression involved the introduction of T cell depletion strategies. In 1962, Gowans et al. [126] reported that draining the thoracic duct in rats not only depleted their lymphocytes but also induced immunosuppression. Subsequent studies involved creating a thoracic duct fistula in canine renal transplant models, which resulted in prolonged graft survival [127]. Dicke et al. [128] reported that irradiated mice that received transplants with fractionated spleen cells, depleted of small lymphocytes, experienced significant survival benefits and showed no evidence of graft-versus-host disease. In contrast, those that received nondepleted spleen cell fractions all succumbed to severe graft-versus-host disease [128]. To achieve lymphocyte depletion through less invasive and more practical methods, polyclonal antilymphocyte sera were introduced using rat models [129]. This approach effectively suppressed graft rejection in large animal transplant models, including dogs, swine, and NHPs [130–132], and paved the way for confirmatory human studies [133].
The development of mAbs that selectively target T lymphocyte markers has addressed the issues of cross-reactivity and associated toxicity commonly seen with polyclonal antibodies. Kung et al. [134] identified human T cell subpopulations using a panel of mAbs and described muromonab-CD3 (OKT3), a murine antibody that targets the CD3 epsilon component of T cells. Although OKT3 has proven superior to conventional steroid treatments in preventing acute rejection and enhancing allograft survival [135,136], it has been withdrawn from clinical use due to adverse effects, including the antimurine antibody response and T cell activation [137]. Lessons learned from the use of animal-derived mAbs have spurred the development of less immunogenic variants, such as anti-IL-2 receptor antibodies (e.g., daclizumab, basiliximab), which inactivate T cells during the induction phase [138]. This progress marks a significant step forward in creating safer and more effective immunological treatments.
In recent decades, the focus of immunosuppressive strategies has shifted from suppressing T cells to exploring the roles of B cells, plasma cells, and antibodies. This shift was initially driven by clinical investigations into the feasibility of donor-specific antibody-positive transplantation [139]. A breakthrough in mAb technology has propelled progress in this area. Boulianne et al. [140] developed a mouse hybridoma cell line that produces mouse/human chimeric antibodies. This was achieved by replacing the murine Fc region with a human one, which enhanced therapeutic efficacy and reduced side effects [140]. This chimeric approach has facilitated the safe introduction of anti-CD20 mAb (rituximab) and anti-CD52 mAb (alemtuzumab) into clinical practice.
Even after the clinical introduction of B cell depletion strategies, further evaluations are still being conducted. One study found that B cell-deficient μMT mice are unable to develop donor-specific memory T cell responses after transplantation [141]. Additionally, pretransplantation B cell depletion using rituximab and CsA has been shown to improve islet allograft survival in NHPs [142].
The third landmark in the pharmacological field of transplantation was the groundbreaking discovery of CsA. Identified in 1976 by Jean-Francois Borel, CsA is a fungal metabolite recognized for its selective immunosuppressive effects on T cells [2]. Borel et al. [2] demonstrated that CsA significantly delays the rejection of skin grafts in mice and also postpones the onset of graft-versus-host disease in both mice and rats. Following these findings, Borel et al. [143] further explored the immunosuppressive properties of CsA in small animal models, including mice, rats, and guinea pigs. In the late 1970s, Calne and colleagues published the first results from large animal studies, which showed prolonged survival and reduced myelotoxicity in canine kidney [144] and porcine cardiac models [145].
Based on
Despite advances in immunosuppressive treatments, the problem of generalized immunosuppression and its associated toxicity remains the Achilles’ heel of transplantation. Moreover, current immunosuppressive regimens often fail to prevent chronic rejection. Consequently, there is a growing interest in achieving selective immunological tolerance to transplanted donor antigens while maintaining overall immune competence.
Research on transplantation tolerance relies heavily on both small and large animal models, and integrating observations across species has significantly enhanced our understanding. However, caution is necessary when interpreting studies using murine models in the context of immunosuppression. These models are inbred under specific-pathogen-free conditions, which do not adequately replicate the immune systems of aging, outbred large animals and humans. Additionally, strain-specific phenomena in murine models prevent the generalization of tolerance induction protocols [151]. A notable example is the considerable variability among different strains in their response to costimulation blockade and antibody treatments [152–154]. Furthermore, studies have identified minor histocompatibility antigen mismatch as a major obstacle to bone marrow engraftment [155,156]. This highlights the importance of validating tolerance strategies in donor-recipient models that address both major and minor compatibility barriers.
The concept of immunological tolerance originated with the discovery of chimerism. In 1945, Owen [157] was studying red cell antigens when he observed that dizygotic twin freemartin cattle possessed a mix of their own cells and those of their twin. He attributed this to placental vascular anastomosis between the twins during their embryonic period, which allowed hematopoietic progenitors to be exchanged and established lifelong chimerism. Recognizing the significance of Owen’s findings, the group led by Medawar demonstrated in 1951 that skin grafts exchanged between chimeric bovine twins were accepted [158]. Further research achieved chimerism in mice through the inoculation of embryos or the intravenous injection of newborns with allogeneic cells [1]. Main and Prehn [159] demonstrated that inoculating bone marrow cells into myeloablated adult mice induced chimerism. Skin grafts from the bone marrow donor strain were subsequently accepted, demonstrating tolerance [159]. These foundational observations were later termed “mixed chimerism,” a condition where both recipient and donor hematopoietic cells coexist following donor bone marrow transplantation [160].
The applicability of mixed chimerism has been explored in large animal settings. In dog leukocyte antigen-identical canine models, mixed chimerism was achieved through nonmyeloablative conditioning, either by administering CTLA4-Ig or by directing irradiation to the lymphatic chains. These models successfully accepted renal grafts from their donors [161–163]. Researchers in Boston developed tolerance protocols for miniature swine and cynomolgus monkeys. In the swine models, a nonmyeloablative protocol that included T cell depletion facilitated the acceptance of skin grafts from their donors. However, this profound T cell depletion increased the risk of developing posttransplant lymphoproliferative disorder (PTLD) [164,165]. A similar approach in cynomolgus monkeys led to transient mixed chimerism and successful renal graft acceptance [166,167], although PTLD was observed [168]. In rhesus macaques, mixed chimerism was established using nonmyeloablative conditioning with busulfan, combined with basiliximab induction and ongoing immunosuppression (belatacept, sirolimus, and H106). This approach resulted in high and persistent chimerism (approximately 80% for 145 days). However, challenges such as graft rejection following the cessation of immunosuppression and viral infections were noted [169,170].
Clinical studies on tolerance in HLA-mismatched or matched patients with mixed chimerism are thoroughly described in other articles [154,171]. The current challenge in clinical mixed chimerism is that intense myeloablative or nonmyeloablative conditioning may be intolerable for transplant recipients, while omitting conditioning therapy could compromise the effectiveness of bone marrow transplantation. However, advancements in experimental mixed chimerism regimens have reduced toxic side effects, and nearly nontoxic bone marrow transplantation protocols are now being investigated in mice [172,173]. Hence, although clinical testing of mixed chimerism is relatively lagging, the gap is gradually narrowing [174].
Transplantation research is currently centered on developing maintenance immunosuppressive regimens that enhance long-term outcomes. These regimens aim to prevent acute rejection while minimizing toxicities. One innovative approach involves the use of a tolerance-inducing immunosuppressant that inhibits T cell activation by disrupting costimulatory receptor-ligand interactions. Anergy, defined as the functional inactivation of antigen-reactive cells, may result when an antigenic signal is presented without the requisite costimulatory signals from specific cell surface markers or humoral stimuli [175].
The CD28-B7 axis was the first defined costimulatory pathway and remains the most thoroughly characterized. CD28 on T cells interacts with B7-1 (CD80) and B7-2 (CD86), providing costimulatory signals that activate the cell and induce IL-2 production [176]. Studies involving CD28−/− mice have shown that CD28 signals are essential for the
The investigation of CTLA-4Ig and anti-B7 mAb has been extended to large animals and humans. Kirk et al. [182] noted a modest survival benefit in renal allografts of rhesus macaques using anti-B7 mAb alone. When these antibodies were combined, they significantly increased survival time, although they did not induce tolerance or prevent the development of donor-specific alloantibodies [182]. In a cynomolgus monkey renal transplant model, the combination of anti-B7 mAb with sirolimus extended graft survival but did not achieve tolerance [183]. Continued interest in the CD28 pathway led to the development of a modified version of CTLA4-Ig, known as belatacept, which exhibits significantly higher
Another approach involves inducing anergy by using receptor conjugates of anti-CD40 ligand (CD154) mAb to prevent CD154 from receiving the CD40 signal from APCs [186]. In primates, hu5C8, a mAb targeting CD154, has proven effective in rhesus renal [187], islet [188], and heart [189] allografts, although alloantibody development was not prevented. Other antibodies targeting the CD40/CD154 pathway, such as IDEC-131 [190,191], ABI793 [192], and H106 [193], have also demonstrated long-term allograft acceptance in NHP models but failed to prevent alloantibody development, eventually leading to graft rejection. Disappointingly, the clinical development of anti-CD154 mAb therapies has been halted due to significant thrombotic adverse events, an outcome that was unexpected from preclinical studies conducted in NHPs [194]. Further analysis revealed that both hu5C8 and ABI793 exhibited prothrombotic effects in NHPs [194–196]. The combined blockade of CD28/B7 and CD40/CD154 has shown promise in preventing graft rejection, although it did not lead to immune tolerance [197,198]. Overall, despite occasional unexpected toxicities, progress with costimulation blockade in NHP studies has led to promising clinical trials, and the potential for inducing immunological tolerance remains.
Cellular therapy represents a cutting-edge approach that achieves tolerance while avoiding the systemic side effects associated with immunosuppressants. This strategy is grounded in the mixed chimerism approach, which has shown promising outcomes through donor bone marrow transplantation. Further research into the mechanisms of mixed chimerism has prompted investigators to explore various cell populations responsible for immune tolerance. Consequently, efforts to induce tolerance using these cells have been examined in animal models.
Regulatory T (Treg) cells are a naturally occurring, specialized subset of CD4+ T cells dedicated to immune suppression [199]. Given their immunosuppressive nature, the use of Treg cells has been widely examined as a strategy for inducing tolerance. The role of Treg cells in transplantation tolerance was first proposed when it was found that their depletion in mouse models evoked both autoimmunity and accelerated skin graft rejection [200]. Further experiments demonstrated that T cell-deficient mice with allogeneic skin grafts achieved stable graft tolerance after the administration of Treg cells and naïve T cells. Nonetheless, rejection was observed with a secondary graft, indicating that alloreactive T cells persist but are controlled by Treg cells [201]. With evidence from preclinical studies, Treg therapy is moving to the clinic, where recent studies have shown encouraging results [202–204].
The ability of mesenchymal stromal cells (MSCs) to induce graft tolerance in kidney and heart transplantation has been demonstrated in preclinical studies [205,206]. For example, infusing MSCs into C57BL/6-to-BALB/c cardiac transplant models significantly increased mean graft survival compared to the control group. Additionally, a combined regimen of MSCs and rapamycin led to long-term graft survival of over 100 days [206]. Another study highlighted the immunosuppressive properties of MSCs, showing that their presence in a tumor-bearing mouse model facilitated a shift from proinflammatory Th17 cell dominance to anti-inflammatory Treg cell dominance [207]. These promising observations in animal models have paved the way for pioneering clinical studies. In the context of living donor kidney transplantation, the use of MSCs has been associated with a reduction in maintenance immunosuppressants while maintaining long-term graft function [208].
The shortage of available organs remains a critical challenge in clinical transplantation [209]. Xenotransplantation provides an alternative source of organs, offering the same benefits as transplants from healthy living donors. Initially, efforts focused on obtaining concordant xenografts from species closely related to humans, such as Old World monkeys and apes. For example, in 1984, a baboon heart was transplanted into a newborn diagnosed with hypoplastic left heart syndrome, and the infant survived for 20 days before succumbing to cellular rejection [210]. However, due to issues related to limited supply, ethical concerns, and the risk of viral transmission, the focus on concordant xenografts has waned [211,212]. Currently, pigs have emerged as the preferred source for xenografts because of their appropriate size, abundant availability, favorable breeding characteristics, and physiological similarities to humans [213]. Despite these advantages, the significant genetic differences between pigs and humans present a substantial immune barrier. Advances in genetic engineering, including the use of CRISPR-Cas9 and other techniques, have shown promising potential to address these immune challenges in xenotransplantation.
Transplant compatibility among pigs is primarily determined by the expression of the galactose-α-1,3-galactose (α-Gal) epitope, which is synthesized by α-1,3-galactosyltransferase. The binding of preexisting human xenoantibodies to α-Gal is a major cause of hyperacute rejection [214]. Several laboratories have successfully created lines of α-Gal knockout (GTKO) pigs through genetic engineering, disrupting the GGTA1 gene that encodes α-1,3-galactosyltransferase [215]. Since then, genetic manipulations targeting various immune components have been carried out to enhance the compatibility of pig organs with the human immune system. Major porcine models include the GTKO-CD46-thrombomodulin Tg model [216], the GGTA1/CMAH/β4GalNT2 triple knockout model [217], and the 10-gene modified model [218]. Another theoretical concern in pig-to-human transplantation is the zoonotic transmission of porcine endogenous retrovirus (PERV), which has been shown to infect human-derived cell lines
The pig-to-NHP model is frequently chosen for
The first xenotransplantations to reach clinical trials were those involving islets, kidneys, and hearts, reflecting the progress made in pig-to-NHP models. Reviews published in 2023 and 2024 summarize the current status of xenotransplantation, including its clinical applications [227–230]. Notably, the initial case reports of pig-to-living-human xenotransplantation are expected to be landmark events in the field of transplantation [231–234].
The present review outlines the most relevant animal models used to explore the diverse categories of transplant immunology. Animal models have retained significance owing to their contributions in elucidating immune mechanisms, the potential for genetic engineering, and their historical role in safety and efficacy assessment. Exciting attempts to create an alternative organ source through porcine xenografts are underway (Table 3).
Table 3. Selected outstanding animal studies representing milestones in the progress of transplantation
Study | Animal model | Description |
---|---|---|
Immune-mediated allograft injury | ||
Miyawaki et al. (1994) [50] | aly/aly mouse model | Recognition of the role of secondary lymphoid organs in acute cellular rejection |
Lechler and Batchelor (1982) [56] | (AS × AUG)F1 to AS rat model | Dendritic cells play a crucial role in presenting alloantigens |
Nozaki et al. (2007) [70] | CCR5(–/–) mouse model | Antibodies generated in CCR5(–/–) mouse models directly trigger graft rejection |
White et al. (1969) [86] | Fischer-Lewis rat model | Chronic allograft nephropathy was effectively demonstrated, enabling further investigation into therapeutic strategies |
Immunosuppression and tolerance induction | ||
Owen (1945) [157] | Dizygotic twin freemartin cattle model | Discovery of lifelong hematopoietic chimerism |
Calne (1960) [121] | Canine model | Prevention of renal allograft rejection by 6-mercaptopurine |
Borel et al. (1976) [2] | Mouse and rat models | The immunosuppressive properties of cyclosporine A were examined |
Kung et al. (1979) [134] | Mouse model | Monoclonal murine antibodies against human T cell surface antigens were generated and screened |
Shahinian et al. (1993) [178] | CD28(–/–) mouse model | Impaired T cell activation was observed in mice deficient in B7/CD28 costimulation |
Xenotransplantation | ||
Lai et al. (2002) [215] | GTKO porcine model | GTKO pigs were produced to evade hyperacute rejection in pig-to-human xenotransplantation |
GTKO, α-(1,3)-galactosyltransferase knockout pigs.
Despite the significant contributions of animal models to transplant immunology, public awareness of the ethical and safety concerns associated with these models is increasing. There are growing questions about the reliability of applying results from animal experiments to clinical settings [235]. Consequently, modern translational research has shifted toward employing advanced
While the complete replacement of animal models with more human-relevant
Conflict of Interest
Kyeong Sik Kim is an associate editor, and Dongho Choi is the editor-in-chief of the journal. They were not involved in the peer reviewer selection, evaluation, or decision process of this article. No other potential conflict of interest relevant to this article was reported.
Author Contributions
Conceptualization: KSK, DC. Investigation: MK. Resources: HKP. Project administration: KSK, HKP, DC. Writing–original draft: MK. Writing–review & editing: all authors. All authors read and approved the final manuscript.