Clin Transplant Res 2024; 38(4): 273-293
Published online December 31, 2024
https://doi.org/10.4285/ctr.24.0056
© The Korean Society for Transplantation
1The Research Institute for Transplantation, Yonsei University College of Medicine, Seoul, Korea
2Division of Nephrology, Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
Correspondence to: Jaeseok Yang
Division of Nephrology, Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
E-mail: jcyjs@yuhs.ac
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Foreign antigen recognition is the ability of immune cells to distinguish self from nonself, which is crucial for immune responses in both invertebrates and vertebrates. In vertebrates, T cells play a pivotal role in graft rejection by recognizing alloantigens presented by antigen-presenting cells through direct, indirect, or semidirect pathways. B cells also contribute significantly to the indirect presentation of antigens to T cells. Innate immune cells, such as dendritic cells, identify pathogen- or danger-associated molecular patterns through pattern recognition receptors, thereby facilitating effective antigen presentation to T cells. Recent studies have shown that innate immune cells, including macrophages and natural killer (NK) cells, can recognize allogeneic or xenogeneic antigens using immune receptors such as CD47 or activating NK receptors, rather than pattern recognition receptors. Additionally, macrophages and NK cells are capable of exhibiting memory responses to alloantigens, although these responses are shorter-lived than those of adaptive immune memory. T cells also recognize xenoantigens through either direct or indirect presentation. Notably, macrophages and NK cells can directly recognize xenoantigens via surface immune receptors in an antibody-independent manner, or they can be activated in an antibody-dependent manner. Advances in our understanding of the recognition mechanisms of adaptive and innate immunity against allogeneic and xenogeneic antigens may improve our understanding of graft rejection.
Keywords: Alloimmunity, Nonself recognition, Transplantation rejection, Transplantation, Xenotransplantation
Immune responses to allografts or xenografts, in other words rejection of allografts or xenografts, begin with the recipient’s recognition of allogeneic or xenogeneic donor antigens. Allorecognition refers to the capacity to distinguish between an individual’s own molecules (self) and those of donors within the same species (nonself). This capability is crucial for self–nonself discrimination in both invertebrates and vertebrates. Cells from the innate and adaptive immune systems can recognize donors as nonself and participate in rejection responses.
The primary origin of alloimmune injury stems from the adaptive immune system, in which T cells play a crucial role in facilitating graft rejection. In the absence of T cells, neither mice nor humans reject allografts [1,2]. T cells are responsible for activating various mechanisms that lead to both acute and chronic damage to grafts, including the production of alloantibodies by B cells. Importantly, the alloimmune response, which is predominantly driven by T and B cells, depends on the initial recognition of alloantigens provided by antigen-presenting cells (APCs) from the innate immune system, such as dendritic cells (DCs) and their monocyte precursors. Microbial nonself elements—in other words, pathogen-associated molecular patterns (PAMPs)—are recognized by innate immune cells via pattern recognition receptors (PRRs), which activate APCs and subsequently induce T cell activation [3]. Furthermore, innate immune cells can also be activated by PRRs’ recognition of danger-associated molecular patterns (DAMPs) during acute injury after transplantation [4]. However, the danger theory does not explain why rejection can occur long after the initial injuries have healed. T cell-deficient recipients of healed allografts also demonstrate rapid rejection upon T cell reconstitution, irrespective of inflammation and danger signals. These findings have prompted further research on the ability of innate immune cells to differentiate between self and nonself in allogeneic contexts [5]. Based on recent research as well as seminal older works in the field, we provide an overview of the various mechanisms of allorecognition and xenorecognition in both the adaptive and innate immune systems.
Allorecognition arose early in evolutionary history and is not exclusive to vertebrates or organisms with adaptive immune systems. Numerous colonial invertebrates possess the ability to recognize and reject genetically different individuals upon contact, even without an adaptive immune system [6]. For instance, in
The effectiveness of the antidonor immune response to an allograft is determined by the number of host T cells that recognize alloantigens, their differentiation status, and the activation level of innate immune cells presenting these alloantigens. APCs stimulate CD4+ and CD8+ T cells by displaying peptides derived from donors through major histocompatibility complex (MHC) molecules via three distinct pathways: direct, indirect, and semidirect. In the direct pathway, recipient T cells recognize donor MHC-peptide complexes found on donor APCs, with CD4+ T cells assisting CD8+ T cells [11–13]. In contrast, in the indirect pathway, CD4+ T cells interact with recipient MHC II molecules displaying peptides derived from processed donor MHCs. In this context, CD8+ T cells engage with recipient MHC I on recipient APCs via cross-presentation and also receive support from CD4+ T cells [14–17]. CD8+ T cells may engage with donor MHC I on donor APCs; however, this hypothetical four-cell model raises concerns about unlinked assistance from CD4+ T cells that engage with recipient MHC II on recipient APCs [18]. The semidirect pathway involves CD8+ T cells detecting intact donor MHC class I molecules on recipient APCs while simultaneously obtaining assistance from CD4+ T cells that recognize recipient MHC II with processed peptides on recipient APCs [18–22]. Through this interaction, activated CD4+ T cells differentiate into various subsets, such as Th1, Th2, or Th17, and activated CD8+ T cells differentiate into cytotoxic T lymphocytes (CTLs) (Fig. 1). T cells can also respond to minor histocompatibility antigens (mHAs), which are polymorphic peptides often derived from housekeeping proteins [23]. These mHA peptides can be presented by donor MHC molecules on donor APCs (direct pathway) or by self-MHC molecules on recipient APCs (indirect pathway).
The contribution of these pathways and their associated alloantigens to acute and chronic allograft rejection is shaped by both intrinsic and extrinsic factors related to the transplant. The direct pathway is typically polyclonal and short-lived due to the limited number of donor passenger leukocytes, whereas the indirect pathway is oligoclonal and long-lasting. Both pathways can independently trigger acute rejection, and the indirect pathway facilitates alloantibody production and chronic rejection. The importance of the semidirect pathway has been increasingly recognized, particularly in cases where donor leukocytes are scarce or unable to migrate postsurgery, which could lead to the uptake of soluble MHC molecules by recipient APCs through cross-dressing.
The frequency of indirectly alloreactive T cells that recognize specific allopeptides is lower than 1 in 100,000, whereas the frequency of directly alloreactive T cells against foreign intact MHC molecules is 100–1,000 times higher, because thymic selection does not eliminate T cells reactive with unfamiliar MHC alleles. Therefore, the number of unique T cell clones available for an immune response depends on the genetic disparities between the donor and recipient, particularly in MHC alleles. Following transplantation, both donor and recipient APCs are capable of activating alloreactive T cells, with their maturation state affecting the quality and magnitude of the T cell response.
DCs are professional APCs that effectively present donor alloantigens to naive T cells. Several subtypes have been identified, including conventional, monocyte-derived, and plasmacytoid DCs. Tissue-resident conventional DCs in transplanted organs initiate alloreactive T cell activation, both directly and indirectly, through donor alloantigen transfer to recipient conventional DCs in lymphoid organs. Monocyte-derived DCs facilitate the presentation of donor alloantigens and the expansion of effector T cells within the allograft, whereas plasmacytoid DCs promote tolerance in transplantation.
Foreign tissues and organs from a donor activate a strong immune response, leading to acute rejection, where the transplanted material is rapidly eliminated. While the innate immune system is important in this process, the activation of alloreactive T cells from the host is crucial for facilitating acute rejection. Studies have indicated that mice without T cells or those treated with immunosuppressive drugs do not reject allografts. Studies involving skin transplants in mice have shown that after transplantation, donor-derived MHC class II+ cells, known as passenger leukocytes (mostly dendritic cells), migrate to the host lymph nodes to present donor MHC antigens to T cells [24,25]. This recognition of intact allogeneic MHC proteins (direct allorecognition) engages a significant portion (1%–10%) of the T cell repertoire, challenging traditional concepts of self-MHC restriction.
Interestingly, many T cells are activated by allogeneic MHC molecules, a response not seen to the same degree with other protein antigens. Two primary theories—the high-determinant density model and the multiple binary complex model—have been proposed to explain the extensive activation of T cells during direct allorecognition [26–28]. The high-determinant density model posits that T cells can recognize specific motifs on allogeneic MHC molecules independently of the peptides they carry, leading to the activation of various T cell clones, including those with low-affinity T cell receptors (TCRs), through cross-reactivity [29]. In contrast, the multiple binary complex model suggests that alloreactive T cell clones interact with allogeneic MHC linked to specific peptides. This interaction enables T cells to recognize complexes formed by allogeneic MHC paired with endogenous peptides that can mimic self-MHC and foreign peptide combinations, including microbial peptides (molecular mimicry) [30–33]. This perspective clarifies why certain alloreactive T cell clones may be peptide-specific while others are not [34,35].
The predominance of molecular mimicry over cross-reactivity in T cell allorecognition is influenced by the adaptability of TCRs in recognizing peptides and variable residues on both self- and allogeneic MHC molecules [36,37]. For example, in mutant C57BL/6 bm12 mice, a few mutations in MHC class II molecules can trigger a strong CD4+ T cell response, causing acute rejection of bm12 skin grafts in normal mice. In contrast, xenogeneic MHC molecules produce weaker T cell responses [38,39]. Therefore, T cell allorecognition can be seen as a combination of “mistaken identity” and “déjà vu,” as noted by Archbold et al. [40].
In direct allorecognition, recipient CD4+ and CD8+ T cells recognize donor MHC class II and class I molecules, respectively, on donor APCs [11–13]. Furthermore, the peptides bound to the donor MHC also play an important role in direct presentation. Three hypotheses have been proposed to explain how endogenous peptide cargo can influence the TCR specificity of directly alloreactive T cells [41]: (a) the TCR recognizes a foreign MHC molecule independently of the endogenous peptide present (peptide independence), (b) the TCR identifies one of several endogenous peptides associated with the same foreign MHC molecule (peptide degeneracy), or (c) the TCR specifically recognizes a distinct complex formed by a foreign MHC and an endogenous peptide (peptide dependence) [3,42,43]. Son et al. [44] developed a sophisticated mouse model of transplantation tolerance that provided strong
In 1982, Lechler and Batchelor [45] published evidence that recipient APCs presenting alloantigens could contribute to allograft rejection, showing that kidneys parked in an intermediate host could still sensitize alloreactive T cells upon retransplantation. This suggested that allorecognition by CD4+ T cells could occur through recipient APCs, as activation by nonprofessional APCs in the graft was considered unlikely. A decade later, three studies confirmed the role of indirect help in T cell allorecognition and subsequent T cell activation against allogeneic MHC proteins [14,15,46]. These studies indicated that T cells from mice with skin grafts were activated by recipient APCs presenting donor MHC peptides [14] and that immunization with allogeneic MHC peptides could increase kidney rejection rates in rats [15,47]. Moreover, Liu et al. [48] showed that allogeneic human leukocyte antigen (HLA)-DR peptides presented by autologous APCs could activate human CD4+ T cells. Unlike direct alloimmunity, the indirect T cell response follows the rules of immunodominance, mediated by specific T cell clones recognizing a limited set of dominant determinants of donor MHC proteins [48,49]. The mechanisms through which recipient APCs process donor antigens posttransplantation remain unclear; nonetheless, they may involve various processes, such as pinocytosis, intercellular transfer, engulfment of extracellular vesicles, or phagocytosis of donor material [50,51].
Additionally, indirect alloreactivity involves CD8+ T cells, as MHC class I molecules present viral and tumor-derived peptides to them. This process is called cross-priming and was first observed by Matzinger and Bevan in 1977 [52–55]. Golding and Singer [56] demonstrated that self-MHC class I molecules on APCs could present allogeneic MHC class I antigens, activating CD8+ CTL
The passenger leukocyte theory, proposed by Snell in 1957, holds that donor leukocytes carried within the graft initiate rejection; accordingly, their removal or replacement can significantly delay rejection [24,25,62–65]. This theory was supported by Billingham and Silvers [66], who showed that skin allografts survive longer in immune-privileged sites such as the brain or cheek pouch. Further studies by Barker and Billingham [67], along with Tilney and Gowans [68], found that skin allografts on pedicled flaps connected to the recipient’s blood supply survived longer. In a recent study, Celli et al. [50] used two-photon microscopy to demonstrate that donor dermal DCs were rapidly eliminated after ear skin transplantation and that all detected donor DCs in the lymph nodes appeared inactive. This raises questions regarding the role of passenger leukocytes in priming alloreactive T cells, as lymphatic connections are not fully established until 5–7 days posttransplant, whereas antidonor T cell responses can be detected within 2 days [69,70]. This suggests that different antigen presentation mechanisms may trigger T cell alloimmunity early on. Marino et al. [19] investigated this possibility using imaging flow cytometry and found no donor cells in the lymph nodes of skin-grafted mice, confirming the absence of passenger leukocytes. Instead, recipient cells were coated with vesicles displaying donor MHC molecules shortly after grafting, and by day 7, they expressed both donor and their own MHC proteins. Smyth et al. [71] also observed that recipient APCs continued to acquire donor MHC molecules after skin transplantation, illustrating their significance in CD8+ T cell responses. An alternative mechanism, known as “cross-dressing,” proposes that recipient APCs can present antigens through the transfer of ready-made peptide–MHC complexes from the surface of graft DCs to host DCs, bypassing the need for further processing.
Allo-MHC cross-dressing of APCs after transplantation may occur through both direct cell–cell contact and the secretion of extracellular vesicles. Research by Smyth et al. [72] has shown that DCs and endothelial cells can acquire MHC complexes via cell interactions, a process dependent on temperature and energy, and that these cross-dressed cells can induce the proliferation of antigen-specific T cells
It is well known that a range of proteins known as mHAs can initiate alloreactive T cell responses following transplantation [75]. Three main categories of mHAs have been identified: those arising from allelic variations of polymorphic proteins, those encoded by genes on the Y chromosome (relevant in sex-mismatched transplants), and those present in the donor but absent in the recipient [76]. Various peptides derived from mHA proteins can act as determinants of recognition by alloreactive T cells posttransplantation. These proteins provide a significant source of antigens for both endogenous and exogenous processing by donor and recipient APCs, respectively, leading to direct or indirect presentation to CD4+ and CD8+ T cells in the recipient [77,78].
B cells are believed to play a significant role in antigen presentation related to the indirect activation of donor-specific T cells (Fig. 2). For instance, the presence of CD20+ cells in renal allografts is linked to poor outcomes and acute cellular rejection, though not necessarily associated with antibody-mediated rejection (AMR) [79]. B cells in these grafts likely facilitate their effects through alloantigen presentation and by providing costimulation (ICOS/CD28) to activate T cells [80].
B cells are necessary for indirect allorecognition, which aligns with the traditional immunological concepts of developing an adaptive response to protein antigens. In the indirect pathway, recipient T cells recognize processed allopeptide–self-MHC-II complexes presented by recipient APCs, primarily engaging CD4+ T cells owing to the involvement of self-MHC-II molecules [14,15,17,81,82]. Upon recognizing the antigen on DCs, CD4+ T cells upregulate specific markers (BCL6, CXCR5, and CD40L) and migrate to follicles, becoming follicular T helper cells. These cells interact with follicular B cells that have internalized donor antigens, facilitating germinal center formation through the CD40L/CD40 axis and secretion of interleukin (IL)-21, which promotes the differentiation of B cells [83]. B cells then undergo somatic hypermutation to produce high-affinity donor-specific antibodies (DSAs) and can class switch, differentiating into long-lived plasma cells or memory B cells, depending on the intensity of B cell receptor signaling [84–86]. Consequently, the presence of DSAs serves as a proxy measure of the activity of the indirect allorecognition pathway.
B cells can also recognize soluble carbohydrate alloantigens, usually conjugated to peptides, via surface immunoglobulin. An important example is the blood group ABO antigens, which can induce AMR mediated by anti-ABO antibodies in ABO-incompatible transplantation. Innate B cells, especially B1a cells, recognize B cell epitopes consisting of the polysaccharide structure of ABO antigens, and the internalized ABO antigens conjugated to peptides are processed within B cells. The processed T cell epitopes, derived from the peptide moiety of the conjugated ABO antigens, may then be presented on HLA class II to innate T cells, such as NKT cells and CD49dhighCD4+ T cells, allowing innate B cells to receive help from innate T cells [87–91]. In this sense, the responses of so-called “T-independent” innate B cells toward carbohydrate alloantigens might result from interactions between innate B and innate T cells.
The concept of self/nonself discrimination by innate immune cells, introduced by Charles Janeway over 30 years ago, revolves around the detection of conserved structures called PAMPs by PRRs, such as Toll-like receptors (TLRs), which are present on APCs [92–94]. This recognition triggers activation signals that promote APC maturation, allowing them to initiate an adaptive immune response by presenting antigens to T cells. Early studies on allorecognition sought to assess the role of PRRs in the innate alloimmune response. For example, research involving the deletion of MyD88, a key adaptor protein in TLR signaling, demonstrated that this disruption could prevent the rejection of skin allografts with minor antigen mismatches [95]. This was attributed to impaired maturation of DCs in the draining lymph nodes, highlighting the importance of TLRs in activating DCs posttransplant. However, later studies indicated that the absence of TLR signaling did not prevent the rejection of allografts mismatched at MHC or other minor antigens [96]. Other experiments involving islet, skin, or heart transplants showed that acute rejection occurred even without TLR signaling [97,98].
Signal regulatory protein alpha (SIRPα) is expressed primarily on myeloid cells but is also found on parenchymal cells. It binds CD47, a ubiquitously expressed, nonpolymorphic cell surface protein [99]. The affinity of monocyte CD47 for SIRPα is influenced by polymorphisms in SIRPα, with higher binding affinity typically producing a stronger downstream activating signal. Normally, this activating signal is counterbalanced by an inhibitory signal generated when monocyte SIRPα binds CD47. However, in recipient-derived monocytes, the stronger activating signal from higher-affinity binding to donor SIRPα variants can override the weaker inhibitory signal that results from binding to the donor’s monomorphic CD47, which closely resembles the recipient’s own CD47 [100]. This interaction leads to monocyte activation and initiates an innate immune response [101,102]. Conversely, in a situation resembling the steady state of a nontransplanted mouse, when a graft is placed from a mouse of the same strain, the activating signal from monocyte CD47 binding to donor-derived SIRPα—identical to the recipient’s own SIRPα—remains countered by the inhibitory signal from monocyte SIRPα binding to CD47. As a result, monocytes are not activated.
Monocytes and macrophages can mount a vigorous response to previously encountered allogeneic cells [102,103], as demonstrated in Rag–/–γc–/– mice immunized with irradiated allogeneic splenocytes and subsequently exposed to allogeneic bone marrow plugs from the same strain. If the bone marrow originates from a different strain, the monocyte response matches the primary response to any allogeneic source. However, when both exposures are from the same strain and the recipient is Rag–/–γc–/–Pir-a–/–, there is no innate immune memory response. This suggests that innate immune memory is mediated by paired immunoglobulin-like receptor (PIR)-A, which binds nonself-MHC-I molecules and is also expressed on macrophages [104], facilitating memory formation and contributing to increased cytotoxicity and fibrosis (Fig. 3A) [100,105]. The human homologue of murine PIR-A is leukocyte immunoglobulin-like receptor (LILR)-A (Fig. 3B) [106]. Disruption of PIR-A, either through genetic knockout or PIR-A/Fc blocking agents, diminishes the memory response and reduces chronic allograft rejection in murine models. The allospecific memory of monocytes relies on PIR-A and requires the binding of CD47 to the SIRPα variant, which is essential for initiating the innate alloimmune response and forming monocyte memory [102]. Monocyte memory facilitates the generation of more monocyte-derived DCs that infiltrate grafts and present alloantigens to T cells [107]. This innate immune memory can last up to 49 days, whereas the typical lifespan of monocytes is approximately 3 days [108,109]. In contrast, the memory responses of macrophages and natural killer (NK) cells last weeks and months, respectively, while adaptive immune cells, such as T and B cells, can retain immunologic memory for years [110]. Transcriptomic analyses from single-cell RNA sequencing of splenic monocytes in Rag–/–γc–/– mice revealed diverse PIR-A expression after exposure to syngeneic or allogeneic stimuli.
Following allostimulation, monocytes with specific PIR-A receptor profiles expand, indicating clonal expansion as a potential mechanism of monocyte memory. This is supported by the observation that splenic monocytes from B6.Rag–/–γc–/– mice exhibited increased binding to MHC class I H2-Dd tetramers after stimulation with irradiated Balb/c splenocytes, whereas no increase was observed after stimulation with splenocytes from C3H or B6 mice. Variations in PIR-A expression and the likely clonal expansion of monocytes following allostimulation resemble NK cell memory responses to microbial nonself [111–113]. As a result, the understanding of immunological memory is evolving to encompass both the adaptive and innate immune systems.
Clinical evidence for the role of innate allogeneic recognition comes from studies of HLA-identical living donor kidney transplant patients, in which SIRPα mismatches were associated with trends toward increased graft failure, interstitial inflammation, and significantly more peritubular capillaritis [107]. Data from 375 patients indicated that these mismatches correlated with increased acute rejection, premature interstitial fibrosis and tubular atrophy, and greater graft loss, highlighting their importance for transplant success independent of HLA mismatch. Preliminary findings suggest that human SIRPα is polymorphic, whereas human CD47 is monomorphic [93]. Researchers identified two common SIRPα allelic variants that differ in their affinity for CD47. Human LILR subfamily A (LILRA) receptors are comparable to mouse PIR-A, with six stimulatory receptors binding various HLA molecules [108]. Except for LILRA4, all LILRA members are expressed on monocytes, and most human immune cells express at least one LILRA member. Single-cell RNA sequencing of infiltrating immune cells from kidney transplant biopsies showed activation of monocytes overexpressing genes related to LILRA and CD47 [109].
NK cells were the first identified components of what we now refer to as the innate lymphoid cell (ILC) population. They earned their name from their innate, or “natural,” ability to eliminate cells without prior sensitization. Self-recognition of MHC class I through inhibitory killer cell immunoglobulin-like receptors (KIRs) prevents the destruction of the body’s own cells (Fig. 4). However, NK cells have a low threshold for detecting insufficient or abnormal MHC I expression, which can occur in certain tumors, allowing them to identify mutated cells when necessary. In cardiac allograft models that lack specific T cell reactivity but maintain NK cell responses, NK cells from (B6×BALB/c)F1 recipients, activated by the absence of self-MHC class I molecules on donor endothelium of the BALB/c background, participate in the pathogenesis of cardiac allograft vasculopathy (CAV) through the missing-self mechanism [114]. Thus, NK cells contribute to chronic alloresponses such as CAV formation, as well as to acute rejection, as shown by Maier et al. [115].
According to the missing-self hypothesis, NK cells are expected to target transplanted organs, making them significant contributors to alloresponses. Inhibitory KIRs are located on chromosome 19q13.4 and are characterized by a high level of polymorphism. This genetic variation includes differences in nucleotide sequences, as well as the presence or absence of specific genes. Such variability results in a wide range of KIR expression, generating various subpopulations of NK cells with distinct combinations of inhibitory KIRs. Inhibitory KIRs consist of either two or three extracellular immunoglobulin domains, a transmembrane domain, and a long cytoplasmic domain containing immunoreceptor tyrosine-based inhibitory motifs [116]. In addition to inhibitory KIRs, NK cells possess various other inhibitory receptors that influence their activity [117]. For example, NKG2A binds to HLA-E, a nonclassical HLA class I molecule with limited polymorphism [118]. While HLA-E mRNA is present in all nucleated cells, its protein expression is primarily found in endothelial cells, T and B lymphocytes, monocytes, and macrophages [119,120]. The ability of HLA-E to be expressed on the cell surface relies on its binding to peptides derived from classical HLA class I leader sequences [121,122]. Many HLA-B allotypes do not bind effectively to HLA-E, resulting in lower levels of HLA-E-peptide complexes and reduced NKG2A-mediated inhibitory signaling in NK cells [117,122].
In organ transplantation, variations in donor HLA-B allotypes may lead to inadequate HLA-E inhibitory signals from graft endothelial cells, potentially activating recipient NK cells. Moreover, HLA-E may fail to inhibit NK cell activation if it presents peptides from stress proteins instead of classical leader sequences. For instance, during cellular stress events such as ischemia/reperfusion injury, HLA-E may display fragments of heat shock protein 60 (HSP60), which do not engage NKG2A, thereby activating NK cells that can damage graft endothelial cells [123].
NK cell activation relies on a balance between inhibitory and activating signals from various receptors. In addition to responding to the absence of inhibitory signals, NK cells can also be activated by excessive activating signals, a concept known as “induced self” [124]. One crucial activating receptor is NKG2D, whose ligands, such as MICA, MICB, and ULBP-2 and -3, are expressed even under nonstressed conditions on endothelial cells, particularly renal microvascular cells [125,126]. Donor genetic polymorphisms and stress conditions after transplantation can increase MICA expression in graft endothelial cells, heightening their susceptibility to NK cell-mediated lysis [125,126].
Another important activating receptor is NKp44, which belongs to the natural cytotoxicity receptor family [127]. Recent research has demonstrated that NKp44 can bind certain HLA-DP molecules, such as DP401, which are frequently found in White populations [128], thereby inducing functional NK cell responses [129]. Unlike large-vessel endothelial cells, renal microvascular endothelial cells also express HLA class II molecules, such as HLA-DP, with expression potentially increasing under stress conditions, particularly in response to interferon-γ [130–132]. If graft endothelial cells express HLA-DP molecules that interact with NKp44, this could activate recipient NK cells against the graft.
Studies have demonstrated that NK cells exhibit characteristics associated with adaptive immunity, including clonal expansion, durability, and robust recall responses [133]. NK cell memory can generally be classified into antigen-specific and antigen-independent types [134,135]. In antigen-specific memory, NK cells develop immune memory specific to an antigen after encountering certain viral particles or haptens, similar to the memory observed in T and B cells. In antigen-independent memory, NK cells undergo persistent changes in their effector functions after being exposed to certain cytokine environments, such as IL-12, IL-15, and IL-18, leading to memory-like NK cells that do not rely on specific antigens [136].
The Ly49H receptor on some NK cells displays antigen specificity by detecting the murine cytomegalovirus (MCMV)-encoded glycoprotein m157. This interaction stimulates the activation and expansion of Ly49H+ NK cells [137]. When transferred into mice lacking the receptor, these cells expand significantly in response to MCMV infection and subsequently contract to establish a pool of memory cells that persist in both lymphoid and nonlymphoid tissues, remaining detectable months after infection [138]. Questions remain regarding the identity of the antigen receptor, the precise nature of the ligand, and whether these NK cells are best considered mature NK cells or a distinct ILC1 subset.
Furthermore, NK cells exhibit memory responses to alloantigens. NK cells in C57BL/6J mice express Ly49D, an activating receptor, and Ly49A, an inhibitory receptor, both recognizing the H-2Dd alloantigen. The balance between activating and inhibitory receptors on NK cells determines their activation in response to alloantigens. Rechallenging sensitized C57BL/6J mice with the same BALB/c alloantigen induces a memory response of NK cells [139].
Clinical studies have demonstrated that NK cells play a role in allorecognition and allograft rejection. NK cells in graft biopsies are activated when donor endothelial cells fail to provide an inhibitory signal because they lack self HLA, a situation referred to as "missing-self" [140]. A study of 1,682 kidney transplant recipients determined that assessing for missing-self (a lack of self-MHC on grafts combined with the presence of the corresponding inhibitory KIR in recipients) can help stratify patient risk and identify those at higher risk for graft failure, particularly when diagnosed with chronic AMR [141]. In a follow-up study involving 924 kidney transplantations, missing-self was linked to an increased risk of transplant glomerulopathy, although no association with graft failure was found among HLA-mismatched recipients [142]. Earlier research by van Bergen et al. [143] indicated that KIR-ligand mismatches (missing-self) were associated with reduced long-term survival of kidney grafts from HLA-compatible donors, suggesting that poor outcomes in these mismatched grafts may also stem from changes in KIR-expressing T cell activity [144].
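The missing-self rule described above can be restated as a simple predicate: an inhibitory KIR on recipient NK cells is left unengaged when the donor graft expresses none of that receptor's HLA ligands. The sketch below is purely illustrative and is not derived from the cited studies; the KIR-ligand pairings are deliberately simplified (real KIR-ligand assignment depends on allotype-level HLA typing).

```python
# Illustrative, simplified sketch of the "missing-self" check.
# KIR-ligand pairs below are schematic, not a clinical reference table.
KIR_LIGANDS = {
    "KIR2DL1": {"HLA-C2"},   # C2-group HLA-C allotypes
    "KIR2DL2": {"HLA-C1"},   # C1-group HLA-C allotypes
    "KIR3DL1": {"HLA-Bw4"},  # Bw4 epitope-bearing HLA-B allotypes
}

def missing_self(recipient_kirs, donor_hla_ligands):
    """Return the recipient inhibitory KIRs for which the donor graft
    expresses no ligand, i.e., the receptors left without an
    inhibitory signal (a proxy for NK activation risk)."""
    donor = set(donor_hla_ligands)
    return {kir for kir in recipient_kirs
            if kir in KIR_LIGANDS and not (KIR_LIGANDS[kir] & donor)}

# Hypothetical example: the recipient carries KIR2DL1 and KIR3DL1;
# the donor graft expresses C1 and Bw4 ligands but no C2 allotype,
# so KIR2DL1 receives no inhibitory signal.
unengaged = missing_self({"KIR2DL1", "KIR3DL1"}, {"HLA-C1", "HLA-Bw4"})
```

In practice, risk stratification as in the cited studies combines this genotype-level mismatch assessment with histologic findings rather than relying on the mismatch alone.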
In recent years, multiomics technologies have enabled a comprehensive exploration of alloimmune responses by integrating various biological data types, leading to insights into underlying mechanisms, biomarker identification, supplementation of pathologic diagnosis, and personalized treatment strategies. In particular, the combination of single-cell RNA sequencing and spatial transcriptomics deepens our understanding of immune cell dynamics and the mechanisms of graft rejection and tolerance [145]. For example, this approach revealed that higher frequencies of FCGR3A+ myeloid cells and both FCGR3A+ and FCGR3A– NK cells were associated with increased levels of inflammation in 16 kidney allograft biopsy samples, suggesting roles of myeloid cells and NK cells in alloimmune responses [146].
In xenorecognition, T cell responses can involve both direct and indirect presentations of graft determinants (Fig. 5). Research has primarily focused on two-species combinations: human responses to mouse cells and human (or nonhuman primate) responses to pig cells.
The human antimurine xenogeneic response is primarily based on indirect recognition [147,148]. When human APCs are removed from mixed lymphocyte reactions with murine cells, human antimurine CTLs are not generated, indicating that murine APCs cannot directly present antigens to human T cells. This inability of human T cells to interact directly with murine APCs appears to result from faulty interactions between human CD4+ T cells and murine MHC class II molecules, which can be corrected through transfection. Despite this limitation, a xenogeneic response can still occur through murine antigens presented by responder APCs. Murphy et al. [38] demonstrated the indirect presentation of mouse lymphocytes after immunization with synthetic rat class II MHC peptides, revealing varying recognition of RT1B and RT1D peptides during rejection across different mouse strains.
Regarding the indirect human antipig T cell xenoresponse, Yamada et al. [149] demonstrated that human CD4+ T cells could respond indirectly to pig peripheral blood lymphocytes (PBLs) that were significantly depleted of swine leukocyte antigen (SLA) class II+ APCs in the presence of human APCs. This indirect response achieved 25% of the maximum response observed when human and pig PBLs were mixed [149]. Additionally, pig “immature” bronchoalveolar lavage (BAL) cells did not trigger a direct xenoresponse in human APC-depleted peripheral blood mononuclear cells (PBMCs). Although these BAL cells express SLA class II molecules, they lack the B7 costimulatory molecule [150,151]. However, immature pig BAL cells stimulated an indirect response when human PBMCs were used, and this response was inhibited by an anti-HLA-DR monoclonal antibody.
One study assessing precursor frequency through limiting dilution assays found that IL-2-producing T cells recognized pig antigens presented by human APCs at a frequency of one in 20,000 to 150,000, indicating a high frequency of self-MHC-restricted T cells in the indirect xenoresponse. The same research group also examined the indirect human antipig T cell xenoresponse using an immortalized SLA class II-negative porcine endothelial cell line. An anti-HLA class II monoclonal antibody partially inhibited the response; however, it was unclear whether the endothelial cell line remained SLA class II-negative throughout the culture. Similar experiments showed that SLA class II-negative primary porcine aortic endothelial cells became strongly class II+ after two days in mixed culture, leading to a sustained human T cell response that was predominantly direct. While a significant self-restricted indirect response induced by xenogeneic cells is expected, given the extensive diversity of sequences and peptides that has evolved over millions of years, further quantification is needed. Unlike in the direct recognition pathway, alterations in accessory signals do not affect the normal presentation process by self-APCs [152].
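Precursor frequencies of the kind quoted above are classically derived from limiting dilution assays using the Poisson single-hit model: if responding precursors are randomly distributed across replicate wells, the fraction of negative wells F0 satisfies F0 = exp(−f·n), where n is the number of responder cells plated per well, giving f = −ln(F0)/n. The sketch below is a generic illustration of that calculation; the well counts and cell numbers are hypothetical and are not the cited study's data.

```python
import math

def precursor_frequency(cells_per_well, negative_wells, total_wells):
    """Estimate precursor frequency f from the fraction of negative
    wells under the Poisson single-hit model: F0 = exp(-f * n),
    so f = -ln(F0) / n."""
    f0 = negative_wells / total_wells
    return -math.log(f0) / cells_per_well

# Hypothetical example: 24 replicate wells each seeded with 20,000
# responder cells; 18 wells show no IL-2 response.
f = precursor_frequency(20_000, 18, 24)
# 1/f expresses the result in the "one precursor per N cells" form
# used in the text.
```

For these illustrative numbers, 1/f falls in the tens of thousands, the same order of magnitude as the indirect xenoresponse frequencies quoted above; in a real assay, f is usually fitted across a titration of cell doses rather than from a single dose.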
In contrast, in certain species combinations, such as humans or primates and pigs, there is significant direct presentation of antigens, suggesting that TCR cross-reactivity may occur even with evolutionarily distant MHC molecules [153]. However, the potential limitations of accessory costimulatory signals that influence the immune response have not been confirmed.
In Yamada’s model, human T cell proliferation in response to irradiated pig PBLs reached 75% of the response observed when human and pig PBLs were mixed, indicating that human CD4+ T cells can directly interact with pig MHC class II molecules. This was confirmed by reduced proliferation in the presence of an anti-SLA-DR antibody. This direct xenoresponse is comparable to, or even stronger than, the alloreactive response [149]. Murray’s study demonstrated that porcine aortic endothelial cells (PAECs) could directly stimulate both CD4+ and CD8+ T cells, although only CD8+ T cells responded to resting SLA class II-negative PAECs. CD4+ T cell proliferation was observed only when SLA class II+ PAECs were stimulated by swine interferon-γ. This direct xenogeneic response appeared to be stronger than the allogeneic anti-endothelial cell response, a finding supported by similar results using primary PAECs treated with human tumor necrosis factor-α (TNF-α) [154]. Bravery et al. [155] demonstrated that human CD4+ and CD8+ T cells proliferated in response to untreated SLA class II+ PAECs. Mature pig BAL cells also elicit a direct xenoresponse in human T cells, as confirmed by the inhibitory effect of an anti-SLA-DR antibody, primarily involving CD4+ T cells [156]. These mature BAL cells express SLA class II molecules and the B7 costimulatory molecule, functioning similarly to DCs [150]. However, the direct xenoresponse from pig DCs was significantly weaker than that from allogeneic human DCs and PBLs [151]. Caution is warranted when evaluating the strength of xenogeneic direct responses, as different pig and human APCs are utilized in experiments. Comparisons of human antipig T-lymphocyte frequency with mature pig BAL cells or human DCs showed low direct xenorecognition values, ranging from one in 3,500 to one in 14,000 cells, leading to uncertainty in cross-system comparisons.
While both direct and indirect pathways of xenorecognition operate in pig-to-human interactions, the extent of T cell xenoresponses is influenced by the specific species combination used [149,150].
Macrophages are activated in xenogeneic conditions through several mechanisms. One involves antibodies: immune complexes formed between porcine cells and xenoreactive antibodies, mainly those targeting carbohydrate xenoantigens such as α1-3Gal, bind to Fc gamma receptors and produce activation signals [158,159]. An antibody-independent pathway has also been identified, in which galectin-3 binds to carbohydrate xenoantigens such as α1-3Gal on porcine cells, serving as an activation signal (Fig. 6) [160–163]. Additional activation can occur through interactions with neutrophils, NK cells, and Th1 cells, as well as by DAMPs released from injured porcine cells [164]. Once activated, macrophages contribute to tissue damage in grafts by releasing proinflammatory cytokines, reactive oxygen and nitrogen species, and complement factors [165].
NK cells rapidly infiltrate porcine xenografts perfused with human blood and reject xenogeneic cells through direct contact or antibody-dependent cellular cytotoxicity (ADCC). In the antibody-dependent manner, NK cells recognize natural and elicited antibodies (e.g., anti-αGal antibodies) bound to graft endothelial cells via Fc receptors (FcRs). This interaction triggers the release of perforin and granzymes from NK cells, inducing apoptosis in target cells, while anti-SLA class I antibodies further promote ADCC (Fig. 7A). NK cells can also mediate cytotoxic effects independently of antibodies, primarily through the secretion of perforin and granzymes [166–168]. The cytotoxic activity of NK cells is regulated by a balance between activating and inhibitory signals mediated by various NK receptors [169]. Activating receptors such as NKG2D/pULBP-1 [170,171], CD2/pCD58, and NKp44 (whose porcine ligand remains unknown) [172] engage porcine ligands to release lytic granules. However, inhibitory receptors such as KIR, ILT2, and CD94/NKG2A do not effectively recognize SLA-I, which prevents the inhibitory signaling that would otherwise suppress NK cell activation (Fig. 7B) [172].
T cells recognize donor antigens through three mechanisms—the direct, indirect, and semidirect pathways—and play a pivotal role in graft rejection. Recent research suggests that semidirect alloresponses, in which T cells recognize recipient APCs bearing donor MHC, play an important role in allograft rejection, although this has not yet been confirmed. Traditionally, it was thought that innate immune cells do not directly recognize donor antigens, instead presenting donor antigens to T cells after the recognition of PAMPs or DAMPs and subsequent maturation. However, recent studies have revealed that innate immune cells can directly recognize alloantigens and show a short-lasting memory response. In xenotransplantation, T cells can recognize xenogeneic antigens through both direct and indirect presentation. Innate immune cells can recognize xenoantigens in both antibody-dependent and antibody-independent manners and thereby contribute to xenograft rejection. Recent progress in research on the recognition mechanisms of adaptive and innate immunity against allogeneic and xenogeneic antigens can contribute to a better understanding of graft rejection.
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Author Contributions
All the work was done by Il Hee Yun and Jaeseok Yang. All authors read and approved the final manuscript.