<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="review-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">59455</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2024.059455</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A Review of Joint Extraction Techniques for Relational Triples Based on NYT and WebNLG Datasets</article-title>
<alt-title alt-title-type="left-running-head">A Review of Joint Extraction Techniques for Relational Triples Based on NYT and WebNLG Datasets</alt-title>
<alt-title alt-title-type="right-running-head">A Review of Joint Extraction Techniques for Relational Triples Based on NYT and WebNLG Datasets</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author">
<name name-style="western"><surname>Mi</surname><given-names>Chenglong</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-2" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Qin</surname><given-names>Huaibin</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>qhb_inf@shzu.edu.cn</email></contrib>
<contrib id="author-3" contrib-type="author">
<name name-style="western"><surname>Qi</surname><given-names>Quan</given-names></name><xref ref-type="aff" rid="aff-1">1</xref></contrib>
<contrib id="author-4" contrib-type="author">
<name name-style="western"><surname>Zuo</surname><given-names>Pengxiang</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<aff id="aff-1"><label>1</label><institution>School of Information Science and Technology, Shihezi University</institution>, <addr-line>Shihezi, 832000</addr-line>, <country>China</country></aff>
<aff id="aff-2"><label>2</label><institution>School of Medicine, Shihezi University</institution>, <addr-line>Shihezi, 832000</addr-line>, <country>China</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Huaibin Qin. Email: <email>qhb_inf@shzu.edu.cn</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2025</year></pub-date>
<pub-date date-type="pub" publication-format="electronic">
<day>06</day>
<month>03</month>
<year>2025</year></pub-date>
<volume>82</volume>
<issue>3</issue>
<fpage>3773</fpage>
<lpage>3796</lpage>
<history>
<date date-type="received">
<day>08</day>
<month>10</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>11</day>
<month>12</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2025 The Authors.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Published by Tech Science Press.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_59455.pdf"></self-uri>
<abstract>
<p>In recent years, with the rapid development of deep learning technology, relational triplet extraction techniques have achieved groundbreaking progress. Traditional pipeline models have certain limitations due to error propagation. To overcome these limitations, recent research has focused on jointly modeling the two key subtasks, named entity recognition and relation extraction, within a unified framework. To support future research, this paper provides a comprehensive review of recently published studies in the field of relational triplet extraction. The review examines commonly used public datasets for relational triplet extraction and systematically surveys current mainstream joint extraction methods, including joint decoding methods and parameter sharing methods, with joint decoding methods further divided into table filling, tagging, and sequence-to-sequence approaches. In addition, this paper conducts small-scale replication experiments on well-performing recent models for each method, both to verify the reproducibility of the published code and to compare the performance of different models under uniform conditions. Each method has its own advantages in terms of model design, task handling, and application scenarios, but each also faces challenges such as processing complex sentence structures, cross-sentence relation extraction, and adaptability in low-resource environments. Finally, this paper systematically summarizes each method and discusses future development prospects for the joint extraction of relational triples.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Relation triplet extraction</kwd>
<kwd>joint extraction methods</kwd>
<kwd>named entity recognition</kwd>
<kwd>relation extraction</kwd>
</kwd-group>
<funding-group>
<award-group id="awg1">
<funding-source>Science and Technology Research Plan of Xinjiang Production And Construction Corps Financial Science and Technology</funding-source>
<award-id>2023AB048</award-id>
</award-group>
</funding-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<p>The relationship triplet is regarded as a crucial component in knowledge graphs, typically represented in the structure (h, r, t), where h denotes the head entity, t signifies the tail entity, and r describes the relationship between the two entities. For instance, the triplet (Paris, capital_of, France) conveys that &#x201C;Paris is the capital of France.&#x201D; To construct knowledge graphs effectively, the extraction of relationship triplets is identified as a vital step, which involves the task of extracting entities and their interrelations from unstructured natural language text. This extraction process is generally divided into two key steps. The first step is named entity recognition [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>], which involves the identification of all specific entities within the text, including names of people, locations, and organizations. The accuracy of this step is critical, as it relies on the ability to extract entities from diverse and complex text data, providing the foundational elements necessary for subsequent relationship extraction. Following this, the second step is relationship extraction [<xref ref-type="bibr" rid="ref-3">3</xref>,<xref ref-type="bibr" rid="ref-4">4</xref>], wherein the relationships existing between the identified entities are analyzed and categorized. The challenge inherent in this phase is the accurate identification of the semantics of relationships, particularly when relationships are implied or expressed in various forms. Consequently, by integrating the recognized entities with their corresponding relationships, a complete triplet is constructed, which is further utilized for the expansion and refinement of the knowledge graph.</p>
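<p>As an illustration, the (h, r, t) structure described above maps directly onto a small typed record. The following Python sketch encodes the Paris example; the <code>Triple</code> class name is our own, not taken from any cited system:</p>

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """A relational triple in (h, r, t) form."""
    head: str      # head entity h
    relation: str  # relation r between the two entities
    tail: str      # tail entity t

# The example from the text: "Paris is the capital of France."
t = Triple(head="Paris", relation="capital_of", tail="France")
print(t)  # Triple(head='Paris', relation='capital_of', tail='France')
```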
<p>This comprehensive process not only enhances the quality of structured information within knowledge graphs but also establishes a solid foundation for automated information inference and knowledge discovery. Overall, relationship triplet extraction has consistently been a prominent research topic in the field of natural language processing. <xref ref-type="fig" rid="fig-1">Fig. 1</xref> illustrates the timeline of the development of the relation triplet extraction task. In earlier studies, pipeline models [<xref ref-type="bibr" rid="ref-5">5</xref>] were predominantly employed to accomplish the triplet extraction task. Within this model framework, named entity recognition and relationship extraction were systematically decomposed into two independent subtasks. Initially, in the named entity recognition phase, potential entity pairs were identified from the text; subsequently, in the relationship extraction phase, attempts were made to determine and classify the relationships between the recognized entity pairs. Although this sequential processing approach simplifies the complexity of the task, it simultaneously introduces several challenges and limitations, such as the following:</p>
<p>1. <bold>Error Propagation:</bold> The main flaw of the pipeline model lies in the information loss resulting from the independence of the subtasks. The lack of effective context and interaction information transfer between named entity recognition and relationship extraction can lead to the accumulation of errors. For instance, if named entity recognition incorrectly labels an entity, this error will prevent the relationship extraction phase from accurately identifying or classifying the relationship between that entity and others. As the tasks are processed sequentially, errors gradually amplify, negatively impacting the effectiveness of the final relationship triplet extraction.</p>
<p>2. <bold>Lack of Interaction:</bold> Because the two subtasks work independently, the model cannot share information or leverage the potential synergies between entity recognition and relationship classification. For instance, the existence of certain relationships could provide valuable clues for identifying specific entity types, but this information cannot be effectively utilized due to the independent nature of the tasks.</p>
<p>3. <bold>Information Loss:</bold> In each independent subtask, some useful contextual information might be ignored or lost. This loss of information across tasks can negatively impact the overall extraction performance.</p>
<p>4. <bold>Independent Optimization:</bold> The pipeline model usually optimizes each subtask independently, meaning that each step&#x2019;s model is trained and optimized separately without considering the global optimal solution. This independent optimization may result in a decrease in the model&#x2019;s overall performance.</p>
<p>5. <bold>Limited Ability to Handle Complex Relationships:</bold> Since each step can only handle local information, it is difficult to capture the global syntactic and semantic structure. As a result, entities and relationships in complex sentences may not be accurately extracted or parsed.</p>
<p>6. <bold>High Time and Computational Costs:</bold> Each independent subtask requires separate model training and execution, leading to higher time and computational costs for the pipeline model, making it relatively less efficient overall.</p>

<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Timeline of the relation triplet extraction task</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_59455-fig-1.tif"/>
</fig>
<p>In recent years, with a deepening understanding of the limitations of the pipeline model [<xref ref-type="bibr" rid="ref-6">6</xref>], joint modeling of entity recognition and relationship extraction has gradually become a focal point of research. This strategy of joint modeling aims to achieve the extraction of relationship triplets through a single model framework, thereby enhancing overall processing efficiency and accuracy while reducing errors in information transfer. The joint extraction model emerges as a transformative approach, markedly enhancing extraction accuracy and efficiency through the simultaneous recognition of entities and extraction of relationships. This method, while adeptly minimizing computational resource consumption, facilitates a more nuanced understanding of contextual information. By fostering synergistic effects between tasks, it enables the model to exhibit greater flexibility and precision throughout the information extraction process.</p>
<p>In particular, the ability to share information across multiple tasks plays a crucial role in reducing error rates during extraction [<xref ref-type="bibr" rid="ref-7">7</xref>]. Furthermore, the incorporation of rich feature representations not only amplifies the model&#x2019;s generalization capabilities but also allows it to adapt seamlessly to diverse data scenarios. Thus, the joint extraction strategy not only streamlines the data processing workflow but also presents a compelling solution for applications in natural language processing and knowledge graph construction [<xref ref-type="bibr" rid="ref-8">8</xref>]. Ultimately, this approach underscores the profound insights and practical significance inherent in cutting-edge research, paving the way for future advancements in the field.</p>
<p>In summary, the contributions of this review can be articulated as follows:</p>
<p>1. The paper systematically categorizes joint extraction models for relationship triplets into two distinct types based on their architectural frameworks: joint decoding methods and parameter sharing methods. Additionally, it provides a succinct overview of commonly utilized datasets and evaluation criteria relevant to the task of relationship triplet extraction, thereby establishing a foundational understanding for researchers.</p>
<p>2. A thorough analysis and comparative study of relationship triplet extraction models that employ joint decoding methods is presented, which are further classified into three categories: table filling methods, tagging methods, and sequence-to-sequence methods. This classification not only highlights the diversity of approaches but also elucidates their respective strengths and applications.</p>
<p>3. Furthermore, the paper delves into a comparative analysis of models utilizing parameter sharing methods, engaging in a comprehensive discussion of related work. This exploration serves to illuminate the nuances and advancements in this area.</p>
<p>4. Finally, a brief yet insightful overview of the strengths and weaknesses of existing joint extraction models for relationship triplets is provided, alongside an exploration of their application domains. This culminates in a forward-looking perspective on future developments in the field, aiming to inspire further research and innovation.</p>
</sec>
<sec id="s2">
<label>2</label>
<title>Task Description</title>
<p>Due to the possibility of relationship triplets sharing one or two entities, this overlap phenomenon complicates the extraction task. Based on the characteristics of entity overlap [<xref ref-type="bibr" rid="ref-9">9</xref>], sentences can be categorized into three types: (i) <bold>No Entity Overlap (NEO):</bold> This type of sentence contains one or more triplets, but there are no shared entities among them. (ii) <bold>Entity Pair Overlap (EPO):</bold> Sentences of this kind feature multiple triplets, where at least two triplets share the same entities, which may be in the same order or in reverse order. (iii) <bold>Single Entity Overlap (SEO):</bold> In this case, a sentence contains multiple triplets, with at least two triplets sharing one common entity. The primary objective of relationship triplet extraction is to identify all existing relationship triplets present in the sentence.</p>
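<p>The three overlap categories can be made concrete with a short classification routine. The sketch below is illustrative only (the function name and example triples are our own); it assumes each triple is a (head, relation, tail) tuple:</p>

```python
def overlap_types(triples):
    """Classify a sentence's triples into the NEO / EPO / SEO categories.

    EPO: at least two triples share the same entity pair (same or reverse order).
    SEO: at least two triples share exactly one entity.
    NEO: no entities are shared among any triples.
    A sentence can exhibit both EPO and SEO at once.
    """
    labels = set()
    for i, (h1, _, t1) in enumerate(triples):
        for h2, _, t2 in triples[i + 1:]:
            if {h1, t1} == {h2, t2}:
                labels.add("EPO")       # both entities shared
            elif {h1, t1} & {h2, t2}:
                labels.add("SEO")       # exactly one entity shared
    return labels or {"NEO"}

# "Paris is the capital and largest city of France." -> two triples, same pair
triples = [("Paris", "capital_of", "France"),
           ("Paris", "largest_city_of", "France")]
print(overlap_types(triples))  # {'EPO'}
```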
<p>Traditional relationship triplet extraction techniques typically divide this task into two independent phases: first, entity recognition is performed, followed by the analysis of the relationships between these entities based on contextual relationships. However, employing a joint modeling approach allows for the simultaneous execution of entity recognition and relationship extraction, thereby forming an integrated model that simplifies this process. This joint modeling strategy enhances the synergy between entity recognition and relationship extraction by sharing contextual information. Consequently, this information-sharing mechanism significantly improves the accuracy and overall efficiency of the extraction process.</p>
</sec>
<sec id="s3">
<label>3</label>
<title>Datasets</title>
<p>There are many datasets available for relational triplet extraction. For instance, WikiEvents offers a wealth of event descriptions along with their associated entities and relationships, making it particularly suitable for understanding event-driven relations. Meanwhile, TACRED serves as a comprehensive benchmark dataset that encompasses various types of texts and relationships, thus becoming a crucial reference for evaluating model performance. Additionally, the SemEval [<xref ref-type="bibr" rid="ref-10">10</xref>] series of competitions presents multilingual datasets that broaden the application scenarios for relation extraction. Lastly, the ACE [<xref ref-type="bibr" rid="ref-11">11</xref>] dataset is dedicated to extracting events and relationships from news articles and other texts, thereby enriching the research materials available in this domain. However, this paper will focus on detailing the two most widely used datasets in recent years: NYT and WebNLG.</p>
<sec id="s3_1">
<label>3.1</label>
<title>NYT</title>
<p>The NYT [<xref ref-type="bibr" rid="ref-12">12</xref>] dataset is a textual resource primarily sourced from news articles published in The New York Times, encompassing a rich collection of natural language text. This dataset contains information from various domains, including text content, entities, and relationships, making it widely applicable in natural language processing, particularly in the extraction of relationship triplets.</p>
<p>The dataset includes examples with multiple relationships and overlapping entities, annotated with information from the Freebase knowledge graph to ensure accuracy and high quality of the annotations. The NYT dataset comprises a total of 66,194 sentences, covering 24 types of relationship categories. Specifically, 56,195 sentences are allocated to the training set, 4999 sentences are designated as the validation set, while the remaining 5000 sentences are used for testing.</p>
<p>When extracting relationship triplets from the NYT dataset, a variety of challenges regarding annotation quality arises, primarily concerning consistency, noise, and ambiguity. Moreover, contextual dependency and domain specificity significantly influence annotation accuracy, further contributing to the complexity of the extraction process and thereby limiting model performance.</p>
<p>Furthermore, the difficulty intensifies with certain relationship types, such as implicit relationships, multiple relationships, temporal and spatial relationships, and sentiment or attitude relationships. These complexities not only demand advanced reasoning capabilities but also require precise judgment within diverse contexts.</p>
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>WebNLG</title>
<p>The WebNLG [<xref ref-type="bibr" rid="ref-13">13</xref>] dataset was originally developed by INRIA (the French National Institute for Research in Computer Science and Automation) and contains a variety of triplets along with their corresponding human-written natural language descriptions. It is primarily used to study how to generate fluent natural language descriptions from knowledge graphs. The data in the WebNLG dataset is mainly sourced from DBpedia, a knowledge base that extracts structured information from Wikipedia.</p>
<p>The WebNLG dataset comprises a total of 6222 sentences and 246 relationship categories. Each instance in this dataset consists of a set of triplets and several human-written reference sentences, with each reference sentence containing all the triplets for that instance. For detailed statistics on these two datasets, please refer to <xref ref-type="table" rid="table-1">Table 1</xref>.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Statistics of the datasets. <italic>N</italic> is the number of triples in a sentence</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Dataset</th>
<th>NYT</th>
<th>WebNLG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Training set</td>
<td>56195</td>
<td>5019</td>
</tr>
<tr>
<td>Test set</td>
<td>5000</td>
<td>703</td>
</tr>
<tr>
<td>Dev set</td>
<td>4999</td>
<td>500</td>
</tr>
<tr>
<td>Relations</td>
<td>24</td>
<td>246</td>
</tr>
<tr>
<td>NEO</td>
<td>3222</td>
<td>239</td>
</tr>
<tr>
<td>EPO</td>
<td>969</td>
<td>6</td>
</tr>
<tr>
<td>SEO</td>
<td>1273</td>
<td>448</td>
</tr>
<tr>
<td>N &#x003D; 1</td>
<td>3240</td>
<td>256</td>
</tr>
<tr>
<td>N &#x003D; 2</td>
<td>1047</td>
<td>175</td>
</tr>
<tr>
<td>N &#x003D; 3</td>
<td>314</td>
<td>138</td>
</tr>
<tr>
<td>N &#x003D; 4</td>
<td>290</td>
<td>93</td>
</tr>
<tr>
<td>N &#x2265; 5</td>
<td>109</td>
<td>41</td>
</tr>
<tr>
<td>Triples</td>
<td>8120</td>
<td>1607</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In the WebNLG dataset, the challenges surrounding annotation quality primarily manifest in the diversity and consistency between generated natural language descriptions and the triplets. While the overarching goal is to transform structured data into fluent text, a single triplet can be expressed in multiple valid linguistic forms, leading to significant complexity in evaluating the generated text. Moreover, annotations frequently lack essential contextual information, resulting in generated texts that may be semantically inaccurate or incomplete. Furthermore, the generated texts sometimes contain grammatical errors or unclear expressions, which adversely affects the overall quality and usability of the data.</p>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Evaluation Metrics</title>
<p>In the process of relationship triplet extraction, it is crucial to accurately identify the boundaries and types of entities, as well as the relationship between the head entity and the tail entity. Only when these elements are correctly identified can the extracted relationship triplets be considered valid. For this task, several commonly used evaluation metrics [<xref ref-type="bibr" rid="ref-14">14</xref>] are as follows:</p>
<p><bold>Precision</bold> (Prec.): This refers to the ratio of correctly extracted relationship triplets to the total number of triplets extracted by the model. The calculation formula is as follows:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>P</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>where <italic>TP</italic> (true positive) denotes an extracted triplet that exactly matches a gold-standard triplet, and <italic>FP</italic> (false positive) denotes an extracted triplet that does not match any gold-standard triplet.</p>
<p><bold>Recall</bold> (Rec.): This measures the proportion of relationship triplets successfully extracted by the model among all the relationship triplets that actually exist. The calculation formula is as follows:
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>N</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>where <italic>FN</italic> (false negative) denotes a gold-standard triplet that the model fails to extract.</p>
<p><bold>F1 score</bold> (F1): This metric is the harmonic mean of precision and recall, providing a comprehensive assessment of performance in both aspects. A higher F1 score reflects the overall effectiveness of the model.
<disp-formula id="eqn-3"><label>(3)</label><mml:math id="mml-eqn-3" display="block"><mml:msub><mml:mi>F</mml:mi><mml:mn>1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mn>2</mml:mn><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>&#x00D7;</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>+</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula></p>
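<p>Under the exact-match criterion used for these metrics, precision, recall, and F1 reduce to set operations over the predicted and gold triples. The following Python sketch is illustrative only (the function name and example triples are our own assumptions):</p>

```python
def prf1(pred, gold):
    """Precision, recall and F1 for exact-match triple extraction.

    pred, gold: sets of (head, relation, tail) tuples.
    """
    tp = len(pred & gold)  # extracted triples that match a gold triple exactly
    fp = len(pred - gold)  # extracted triples with no gold match
    fn = len(gold - pred)  # gold triples the model missed
    prec = tp / (tp + fp) if pred else 0.0
    rec = tp / (tp + fn) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = {("Paris", "capital_of", "France"), ("Lyon", "located_in", "France")}
pred = {("Paris", "capital_of", "France"), ("Paris", "located_in", "Lyon")}
print(prf1(pred, gold))  # (0.5, 0.5, 0.5)
```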
<p><bold>Micro-average</bold>: An overall evaluation over all relational triplets, calculating global <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:math></inline-formula>, <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:msub><mml:mi>F</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> from the summed counts. Because the contributions of all categories are pooled into a single set of counts, every instance carries equal weight, so categories with many instances dominate the result.
<disp-formula id="eqn-4"><label>(4)</label><mml:math id="mml-eqn-4" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>F</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-5"><label>(5)</label><mml:math id="mml-eqn-5" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>T</mml:mi><mml:msub><mml:mi>P</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:msubsup><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msubsup><mml:mi>F</mml:mi><mml:msub><mml:mi>N</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-6"><label>(6)</label><mml:math id="mml-eqn-6" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
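<p>Micro-averaging, as in Eqs. (4) to (6), sums the TP, FP, and FN counts over all sentences before taking the ratios. A minimal illustrative sketch (the function name and example data are our own):</p>

```python
def micro_prf1(pairs):
    """Micro-averaged P/R/F1 over (pred, gold) set pairs, one pair per sentence."""
    tp = fp = fn = 0
    for pred, gold in pairs:
        tp += len(pred & gold)  # sum counts globally before dividing
        fp += len(pred - gold)
        fn += len(gold - pred)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

pairs = [({("a", "r1", "b")}, {("a", "r1", "b")}),  # sentence 1: correct
         ({("c", "r2", "d")}, {("c", "r2", "e")})]  # sentence 2: wrong tail
print(micro_prf1(pairs))  # (0.5, 0.5, 0.5)
```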
<p><bold>Macro-average:</bold> As an evaluation method, this computes the <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi></mml:math></inline-formula>, <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:math></inline-formula>, and <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:msub><mml:mi>F</mml:mi><mml:mn>1</mml:mn></mml:msub></mml:math></inline-formula> for each category separately, and then averages these results. In this process, all categories are treated equally, regardless of the number of samples in each category, ensuring that the performance of smaller categories receives appropriate attention and is not overshadowed by larger categories.
<disp-formula id="eqn-7"><label>(7)</label><mml:math id="mml-eqn-7" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-8"><label>(8)</label><mml:math id="mml-eqn-8" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:mi>n</mml:mi></mml:mfrac><mml:munderover><mml:mo movablelimits="false">&#x2211;</mml:mo><mml:mrow><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:munderover><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<disp-formula id="eqn-9"><label>(9)</label><mml:math id="mml-eqn-9" display="block"><mml:mtable columnalign="right left right left right left right left right left right left" rowspacing="3pt" columnspacing="0em 2em 0em 2em 0em 2em 0em 2em 0em 2em 0em" displaystyle="true"><mml:mtr><mml:mtd /><mml:mtd><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mo>&#x2212;</mml:mo><mml:msub><mml:mi>F</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x00D7;</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>&#x00D7;</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:msub><mml:mi>n</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub><mml:mo>+</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:msub><mml:mi>l</mml:mi><mml:mrow><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>c</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi></mml:mrow></mml:msub></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula></p>
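<p>The distinction between Eqs. (4)&#x2013;(6) and Eqs. (7)&#x2013;(9) can be made concrete with a minimal sketch, assuming per-relation true-positive, false-positive, and false-negative counts are available; the counts below are illustrative only.</p>

```python
# Minimal sketch: micro- vs. macro-averaged precision, recall, and F1
# from per-relation TP/FP/FN counts (illustrative data, not from any dataset).
from collections import namedtuple

Counts = namedtuple("Counts", ["tp", "fp", "fn"])

def prf(tp, fp, fn):
    """Precision, recall, and F1 from raw counts (0 when undefined)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def micro_f1(per_class):
    # Pool counts over all classes, then compute one P/R/F1 (Eq. (6)).
    tp = sum(c.tp for c in per_class)
    fp = sum(c.fp for c in per_class)
    fn = sum(c.fn for c in per_class)
    return prf(tp, fp, fn)

def macro_f1(per_class):
    # Average per-class precision and recall first (Eqs. (7)-(9)).
    ps, rs = zip(*[prf(c.tp, c.fp, c.fn)[:2] for c in per_class])
    p = sum(ps) / len(ps)
    r = sum(rs) / len(rs)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# One frequent relation and one rare relation: micro-F1 is dominated by
# the frequent class, while macro-F1 weights both classes equally.
classes = [Counts(tp=90, fp=10, fn=10), Counts(tp=1, fp=4, fn=4)]
print(micro_f1(classes))
print(macro_f1(classes))
```

<p>With these counts the frequent class scores F1 = 0.9 and the rare class 0.2; micro-F1 stays near 0.87 while macro-F1 drops to 0.55, showing why the macro variant gives smaller categories proportionally more influence.</p>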
<p><bold>Top-k Evaluation Metrics:</bold> When a system can output multiple candidate triplets, Top-k evaluation metrics such as Top-1, Top-5, Top-10, etc., are used to assess the model&#x2019;s performance. Specifically, the essence of the Top-k metric lies in its ability to assess the proportion of correct results among the top k outcomes returned by a model. This metric enjoys widespread popularity due to its simplicity and ease of understanding; however, it is crucial to acknowledge its limitations. Notably, the Top-k metric focuses solely on the top k results, which may lead to the oversight of the overall quality of the results. This characteristic becomes particularly pronounced in the context of relationship triplet extraction tasks. Consequently, the Top-k metric is often best employed in conjunction with other evaluation metrics to achieve a more comprehensive assessment of model performance.</p>
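<p>A minimal sketch of a Top-k metric over ranked candidate triplets follows; the separator-free tuple representation, the sample sentence facts, and the single-gold-triple-per-instance simplification are illustrative assumptions, not a standardized protocol.</p>

```python
# Minimal sketch of Top-k accuracy for ranked candidate triples: each
# prediction is a list of (subject, relation, object) tuples ordered by
# model confidence. Data here is hand-written for illustration.
def top_k_accuracy(ranked_candidates, gold, k):
    """Fraction of instances whose gold triple appears among the top-k candidates."""
    hits = sum(1 for cands, g in zip(ranked_candidates, gold) if g in cands[:k])
    return hits / len(gold)

gold = [("Obama", "born_in", "Hawaii"), ("Paris", "capital_of", "France")]
preds = [
    [("Obama", "born_in", "Kenya"), ("Obama", "born_in", "Hawaii")],  # correct at rank 2
    [("Paris", "capital_of", "France")],                              # correct at rank 1
]
print(top_k_accuracy(preds, gold, k=1))  # only the second instance hits at k=1
print(top_k_accuracy(preds, gold, k=2))  # both instances hit at k=2
```

<p>The sketch also makes the limitation noted above visible: a model that always ranks the gold triple exactly k-th scores the same as one that ranks it first, so Top-k says nothing about the quality of the remaining candidates.</p>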
<p>By integrating multiple evaluation dimensions, researchers are better positioned to gain a nuanced understanding of how models perform on complex tasks, thereby avoiding the pitfalls of relying on a singular metric that could yield a skewed perspective. Therefore, adopting a diversified set of evaluation standards in the assessment of relationship triplet extraction performance is instrumental in uncovering both the strengths and weaknesses of models, ultimately fostering advancement within this field.</p>
</sec>
<sec id="s5">
<label>5</label>
<title>Joint Extraction</title>
<sec id="s5_1">
<label>5.1</label>
<title>Joint Decoding</title>
<p>To overcome certain limitations of traditional pipeline models, researchers have proposed a joint decoding strategy. Currently, research on joint decoding in the field of relationship triplet extraction primarily focuses on three directions: (i) <bold>Table Filling:</bold> This method creates a table or matrix where the rows and columns represent different entities in the sentence, and each cell in the matrix indicates the relationship between the two entities. (ii) <bold>Tagging:</bold> This method treats relationship triplet extraction as a sequence labeling problem, introducing relationship information into the labeling system. (iii) <bold>Sequence-to-Sequence:</bold> This method takes unstructured text as input and directly generates relationship triplets, presenting them in a sequential output format.</p>
<sec id="s5_1_1">
<label>5.1.1</label>
<title>Table Filling</title>
<p>The table filling approach typically maintains a separate table for each relationship, and the triplets are extracted based on the populated relationship tables. TPLinker [<xref ref-type="bibr" rid="ref-15">15</xref>] and UniRE [<xref ref-type="bibr" rid="ref-16">16</xref>] are among the most effective table filling models. TPLinker treats the joint extraction problem as a token-pair linking task, constructs a single-step model that avoids interdependent steps, and introduces a handshake tagging scheme that efficiently and accurately extracts relational triplets through a linking mechanism, addressing exposure bias and error accumulation issues. However, the model has high tagging complexity, resulting in redundant operations and information. UniRE, on the other hand, proposes an innovative unified label space approach, treating relational triplet extraction as a table filling problem. It uses a unified classifier to predict the labels for each cell in the table, simplifying task learning and enhancing the interdependence between tasks. Hu et al. [<xref ref-type="bibr" rid="ref-17">17</xref>] introduce a hybrid deep relational matrix bidirectional method, combining relationship matrix representation and bidirectional processing to achieve joint extraction of relational triplets. GRTE [<xref ref-type="bibr" rid="ref-18">18</xref>] enhances model performance by incorporating a global feature-oriented strategy and an improved matrix-filling strategy, effectively capturing complex dependencies in sentences. Yan et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] propose an innovative partition filtering mechanism that effectively captures fine-grained information and enhances overall task performance through joint learning. Liu et al. 
[<xref ref-type="bibr" rid="ref-20">20</xref>] view relationships as attention distributions, using attention weights to represent relationships between entities, enabling the model to simultaneously identify multiple entities and capture the relationships between them. <xref ref-type="fig" rid="fig-2">Fig. 2</xref> shows an example of a table filling model.</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Example of a table filling method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_59455-fig-2.tif"/>
</fig>
<p>OneRel [<xref ref-type="bibr" rid="ref-21">21</xref>] introduced the Rel-Spec Horns tagging strategy, reducing the required number of label matrices from <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:mn>2</mml:mn><mml:mi>N</mml:mi><mml:mo>+</mml:mo><mml:mn>1</mml:mn></mml:math></inline-formula> to <italic>N</italic>, with each relationship having a single matrix. This reduction in redundancy allowed for the efficient extraction of relational triplets using a single module. Meanwhile, Wang et al. [<xref ref-type="bibr" rid="ref-22">22</xref>] combined the strengths of table filling and sequence encoding in an end-to-end approach for joint extraction. Similarly, Ning et al. [<xref ref-type="bibr" rid="ref-23">23</xref>] borrowed the object detection concept from computer vision, treating entities and relationships in text as &#x201C;objects&#x201D; to be detected through a single-stage framework, simultaneously identifying and extracting entities and their relationships. Additionally, Zhang et al. [<xref ref-type="bibr" rid="ref-24">24</xref>] used relationship prompts as part of the input to guide the model in accurately recognizing and extracting entities and relationships within the text. Wang et al. [<xref ref-type="bibr" rid="ref-25">25</xref>] further improved joint extraction by locating potential entity spans and using a relational graph to capture relationships between these entities. To better handle complex sentence structures and overlapping relationships, Han et al. [<xref ref-type="bibr" rid="ref-26">26</xref>] adopted span representations, while Tian et al. [<xref ref-type="bibr" rid="ref-27">27</xref>] enhanced the understanding of entities and relationships through a multi-view information fusion strategy. Finally, Tang et al. 
[<xref ref-type="bibr" rid="ref-28">28</xref>] proposed the UniRel model, which utilizes the Transformer&#x2019;s self-attention mechanism to model interactions between entity-entity and entity-relationship pairs within a single interaction map. In contrast, Wang et al. [<xref ref-type="bibr" rid="ref-29">29</xref>] employed tensor decomposition techniques to effectively model complex dependencies between entities and relationships, demonstrating excellent performance in processing intricate texts. <xref ref-type="table" rid="table-2">Table 2</xref> shows the current state-of-the-art in table filling models.</p>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Current state-of-the-art in table filling models</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th align="center" colspan="3">NYT</th>
<th align="center" colspan="3">WebNLG</th>
</tr>
<tr>
<th></th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
</tr>
</thead>
<tbody>
<tr>
<td><inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:msub><mml:mtext>TPLinker</mml:mtext><mml:mrow><mml:mi>L</mml:mi><mml:mi>S</mml:mi><mml:mi>T</mml:mi><mml:mi>M</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>86.0</td>
<td>82.0</td>
<td>84.0</td>
<td>91.9</td>
<td>81.6</td>
<td>86.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:msub><mml:mtext>TPLinker</mml:mtext><mml:mrow><mml:mi>B</mml:mi><mml:mi>E</mml:mi><mml:mi>R</mml:mi><mml:mi>T</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>91.4</td>
<td>92.6</td>
<td>92.0</td>
<td>88.9</td>
<td>84.5</td>
<td>86.7</td>
</tr>
<tr>
<td><inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:mtext>OneRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td>93.2</td>
<td>92.6</td>
<td>92.9</td>
<td>91.8</td>
<td>90.3</td>
<td>91.0</td>
</tr>
<tr>
<td><inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:msub><mml:mtext>GRTE</mml:mtext><mml:mrow><mml:mi>L</mml:mi><mml:mi>S</mml:mi><mml:mi>T</mml:mi><mml:mi>M</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-18">18</xref>]</td>
<td>86.2</td>
<td>87.1</td>
<td>86.6</td>
<td>88.0</td>
<td>86.3</td>
<td>87.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:msub><mml:mtext>GRTE</mml:mtext><mml:mrow><mml:mi>B</mml:mi><mml:mi>E</mml:mi><mml:mi>R</mml:mi><mml:mi>T</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-18">18</xref>]</td>
<td>93.4</td>
<td>93.5</td>
<td>93.4</td>
<td>92.3</td>
<td>87.9</td>
<td>90.0</td>
</tr>
<tr>
<td><inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:mtext>PFN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-19">19</xref>]</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>92.4</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>93.6</td>
</tr>
<tr>
<td><inline-formula id="ieqn-14"><mml:math id="mml-ieqn-14"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-20">20</xref>]</td>
<td>88.1</td>
<td>78.5</td>
<td>83.0</td>
<td>89.5</td>
<td>86.0</td>
<td>87.7</td>
</tr>
<tr>
<td><inline-formula id="ieqn-15"><mml:math id="mml-ieqn-15"><mml:mtext>Span-RG</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-22">22</xref>]</td>
<td>83.4</td>
<td>68.1</td>
<td>74.9</td>
<td>66.5</td>
<td>62.8</td>
<td>64.6</td>
</tr>
<tr>
<td><inline-formula id="ieqn-16"><mml:math id="mml-ieqn-16"><mml:mtext>DMBRE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-17">17</xref>]</td>
<td>92.1</td>
<td>93.5</td>
<td>92.8</td>
<td>89.0</td>
<td>88.8</td>
<td>88.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-17"><mml:math id="mml-ieqn-17"><mml:mtext>OD-RTE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-23">23</xref>]</td>
<td>94.2</td>
<td>93.6</td>
<td>93.9</td>
<td>92.8</td>
<td>92.1</td>
<td>92.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-18"><mml:math id="mml-ieqn-18"><mml:mtext>RPSS</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-24">24</xref>]</td>
<td>93.5</td>
<td>93.2</td>
<td>93.3</td>
<td>94.7</td>
<td>95.1</td>
<td>94.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-19"><mml:math id="mml-ieqn-19"><mml:mtext>SMHS</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-26">26</xref>]</td>
<td>97.5</td>
<td>82.0</td>
<td>89.1</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-20"><mml:math id="mml-ieqn-20"><mml:mtext>StereoRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-27">27</xref>]</td>
<td>92.0</td>
<td>92.3</td>
<td>92.2</td>
<td>91.6</td>
<td>92.6</td>
<td>92.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-21"><mml:math id="mml-ieqn-21"><mml:mtext>UniRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-28">28</xref>]</td>
<td>93.5</td>
<td>94.0</td>
<td>93.7</td>
<td>94.8</td>
<td>94.6</td>
<td>94.7</td>
</tr>
<tr>
<td><inline-formula id="ieqn-22"><mml:math id="mml-ieqn-22"><mml:mtext>TLRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-29">29</xref>]</td>
<td>88.5</td>
<td>85.2</td>
<td>86.8</td>
<td>91.8</td>
<td>92.7</td>
<td>92.2</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The table filling approach necessitates that the model enumerate all potential entity pairs. While this process demonstrates commendable performance in managing local entities and relationships, it encounters significant challenges when the distance between entities and relationships increases. In such cases, the model&#x2019;s ability to learn the corresponding positions within the matrix becomes compromised. Moreover, as longer sentences are processed, the size of the matrix expands rapidly, leading to a marked increase in the demand for computational resources, including memory and storage. Consequently, this computational complexity becomes particularly pronounced when handling long texts, often resulting in diminished model efficiency. It is essential to underscore that in the pursuit of both accuracy and comprehensiveness, optimizing the use of computational resources remains a critical challenge for this method. This consideration not only highlights the intricacies involved in algorithm design but also points the way toward future research endeavors aimed at developing more efficient model architectures and methodologies.</p>
</sec>
<sec id="s5_1_2">
<label>5.1.2</label>
<title>Tagging</title>
<p>The tagging approach processes input text as a linear sequence by assigning a corresponding label to each word and gradually extracting the required triples by processing each element in the sequence. PRGC [<xref ref-type="bibr" rid="ref-30">30</xref>] is a more advanced tagging model that breaks down the joint extraction of triples into three subtasks: relation judgment, entity extraction, and head-tail entity alignment. It enhances overall extraction accuracy by predicting potential relationships in the text through a latent relationship module combined with a global alignment mechanism. However, it still suffers from issues such as error propagation and exposure bias. Additionally, Cheng et al. [<xref ref-type="bibr" rid="ref-31">31</xref>] propose an innovative cascaded double-decoder architecture, which effectively reduces the negative impact of entity recognition errors on the relationship classification task, thereby better capturing the dependencies between entities and relationships. Moreover, Qiao et al. [<xref ref-type="bibr" rid="ref-32">32</xref>] leverage the powerful representation capabilities of BERT to construct a model that jointly learns entity recognition and relation extraction tasks. Furthermore, CasRel [<xref ref-type="bibr" rid="ref-33">33</xref>] adopts a cascaded binary labeling strategy, breaking down the complex relation extraction task into simpler subtasks by first extracting the subject entity, and then simultaneously extracting the relation and its corresponding object entity, thereby reducing the model&#x2019;s learning difficulty. Lastly, Yuan et al. [<xref ref-type="bibr" rid="ref-34">34</xref>] assign different attention weights to each type of relationship, allowing the model to accurately capture entities related to specific relationships. <xref ref-type="fig" rid="fig-3">Fig. 3</xref> shows an example of a Tagging model.</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>Example of a tagging method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_59455-fig-3.tif"/>
</fig>
<p>Ren et al. [<xref ref-type="bibr" rid="ref-35">35</xref>] proposed a bidirectional extraction framework that enhances the overall performance and accuracy of the model by processing the text from left to right and right to left. Similarly, Xu et al. [<xref ref-type="bibr" rid="ref-36">36</xref>] innovatively employed a joint entity labeling approach, significantly improving the effectiveness of information extraction. Furthermore, Chen et al. [<xref ref-type="bibr" rid="ref-37">37</xref>] enhanced the model&#x2019;s ability to identify entities and relationships by introducing a position-aware attention mechanism and relationship embedding techniques. Hang et al. [<xref ref-type="bibr" rid="ref-38">38</xref>] identified entities in the text using a multi-label annotation method and employed a relationship alignment mechanism to clarify the relationships between these entities. Meanwhile, Zhang et al. [<xref ref-type="bibr" rid="ref-39">39</xref>] utilized the characteristics of capsule networks to capture information in both forward and backward directions and processed complex entity and relationship extraction tasks through a cascading structure. Finally, Wu et al. [<xref ref-type="bibr" rid="ref-40">40</xref>] achieved synchronous extraction of entities and relationships while employing a cross-type attention mechanism to capture the interdependencies between entities and relationships. <xref ref-type="table" rid="table-3">Table 3</xref> shows the current state-of-the-art in Tagging models.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Current state-of-the-art in Tagging models</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th align="center" colspan="3">NYT</th>
<th align="center" colspan="3">WebNLG</th>
</tr>
<tr>
<th></th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
</tr>
</thead>
<tbody>
<tr>
<td><inline-formula id="ieqn-23"><mml:math id="mml-ieqn-23"><mml:mtext>PRGC</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-30">30</xref>]</td>
<td>93.5</td>
<td>91.9</td>
<td>92.7</td>
<td>89.9</td>
<td>87.2</td>
<td>88.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-24"><mml:math id="mml-ieqn-24"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-31">31</xref>]</td>
<td>89.9</td>
<td>91.4</td>
<td>90.6</td>
<td>88.0</td>
<td>88.9</td>
<td>88.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-25"><mml:math id="mml-ieqn-25"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-32">32</xref>]</td>
<td>61.0</td>
<td>51.3</td>
<td>55.7</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-26"><mml:math id="mml-ieqn-26"><mml:mtext>CASREL</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-33">33</xref>]</td>
<td>89.7</td>
<td>89.5</td>
<td>89.6</td>
<td>93.4</td>
<td>90.1</td>
<td>91.8</td>
</tr>
<tr>
<td><inline-formula id="ieqn-27"><mml:math id="mml-ieqn-27"><mml:mtext>RSAN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-34">34</xref>]</td>
<td>85.7</td>
<td>83.6</td>
<td>84.6</td>
<td>80.5</td>
<td>83.8</td>
<td>82.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-28"><mml:math id="mml-ieqn-28"><mml:mtext>BiRTE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-35">35</xref>]</td>
<td>91.9</td>
<td>93.7</td>
<td>92.8</td>
<td>89.0</td>
<td>89.5</td>
<td>89.3</td>
</tr>
<tr>
<td><inline-formula id="ieqn-29"><mml:math id="mml-ieqn-29"><mml:mtext>GraphJoint</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-36">36</xref>]</td>
<td>88.7</td>
<td>83.8</td>
<td>86.2</td>
<td>88.3</td>
<td>87.7</td>
<td>87.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-30"><mml:math id="mml-ieqn-30"><mml:mtext>PARE-Joint</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-37">37</xref>]</td>
<td>92.9</td>
<td>91.4</td>
<td>92.1</td>
<td>93.4</td>
<td>90.8</td>
<td>92.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-31"><mml:math id="mml-ieqn-31"><mml:mtext>MLRA-LSTM-CRF</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-38">38</xref>]</td>
<td>74.9</td>
<td>45.3</td>
<td>56.5</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-32"><mml:math id="mml-ieqn-32"><mml:mtext>CBCapsule</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-39">39</xref>]</td>
<td>88.4</td>
<td>87.2</td>
<td>87.8</td>
<td>90.9</td>
<td>89.2</td>
<td>90.0</td>
</tr>
<tr>
<td><inline-formula id="ieqn-33"><mml:math id="mml-ieqn-33"><mml:mtext>SDN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-40">40</xref>]</td>
<td>94.2</td>
<td>91.5</td>
<td>92.8</td>
<td>92.7</td>
<td>89.6</td>
<td>91.1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The tagging approach serves as a straightforward and efficient method for extracting relational triplets, which renders it particularly well-suited for handling short texts and tasks characterized by relatively simple structures. However, when confronted with overlapping entities, nested entities, multiple relationships, or long-distance dependencies, this method may encounter significant challenges. Given that it cannot assign multiple distinct labels to the same entity, there arises a need to design more complex labeling structures or to introduce more sophisticated model architectures to effectively address these issues. This reality not only highlights the limitations of the tagging method in specific contexts but also underscores the necessity of exploring more flexible strategies and solutions in the processing of more complex texts.</p>
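<p>The cascaded binary tagging idea described for CasRel [<xref ref-type="bibr" rid="ref-33">33</xref>] can be sketched minimally as follows; the 0/1 start and end tag sequences are hand-written stand-ins for a tagger&#x2019;s thresholded outputs, and the sentence and relation names are illustrative.</p>

```python
# Minimal sketch of cascaded binary tagging in the spirit of CasRel:
# step 1 decodes subject spans from start/end tag sequences; step 2 runs
# relation-specific object taggers conditioned on each subject.
tokens = ["Obama", "was", "born", "in", "Honolulu", "Hawaii"]

def decode_spans(starts, ends, tokens):
    """Pair each start tag with the nearest following end tag."""
    spans = []
    for i, s in enumerate(starts):
        if s:
            for j in range(i, len(ends)):
                if ends[j]:
                    spans.append(" ".join(tokens[i:j + 1]))
                    break
    return spans

# Step 1: subject tagger output (marks "Obama").
subjects = decode_spans([1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], tokens)

# Step 2: per-(subject, relation) object tagger outputs
# (marks the multi-token span "Honolulu Hawaii").
object_tags = {("Obama", "born_in"): ([0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1])}
triples = []
for subj in subjects:
    for (s, rel), (starts, ends) in object_tags.items():
        if s == subj:
            for obj in decode_spans(starts, ends, tokens):
                triples.append((subj, rel, obj))
print(triples)
```

<p>Because each relation gets its own object tagger, one subject can participate in several triples; the single-label-per-token constraint discussed above reappears, however, whenever one token must open or close spans for overlapping entities within the same tag sequence.</p>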
</sec>
<sec id="s5_1_3">
<label>5.1.3</label>
<title>Sequence-to-Sequence</title>
<p>The basic concept of the sequence-to-sequence method is to transform the input sequence into a fixed-length vector representation, after which a decoder is used to generate a new sequence from that representation. Nayak et al. [<xref ref-type="bibr" rid="ref-41">41</xref>] effectively addressed the interdependence between entity recognition and relationship extraction by leveraging the advantages of the encoder-decoder structure, providing a viable joint extraction strategy for both tasks. Zeng et al. [<xref ref-type="bibr" rid="ref-42">42</xref>] proposed an end-to-end neural network model that combines generation and copying mechanisms, significantly improving accuracy when handling complex entities and relationships. Sui et al. [<xref ref-type="bibr" rid="ref-43">43</xref>] employed a set prediction network to jointly model entity recognition and relationship extraction, enabling the model to simultaneously handle multiple entities and relationships. Zhao et al. [<xref ref-type="bibr" rid="ref-44">44</xref>] designed a joint extraction model based on heterogeneous graph neural networks, which iteratively integrated global information to enhance the precision of the extraction task. Zhang et al. [<xref ref-type="bibr" rid="ref-45">45</xref>] introduced a bias minimization strategy aimed at improving the accuracy of the joint extraction model by reducing exposure bias. Additionally, the BTDM model [<xref ref-type="bibr" rid="ref-46">46</xref>] utilized a bidirectional translation decoding mechanism, allowing the model to better leverage contextual information to ensure the accuracy of entity and relationship extraction. Chang et al. [<xref ref-type="bibr" rid="ref-47">47</xref>] incorporated techniques such as constrained decoding, representation reuse, and fusion to ensure that the generated entity and relationship triples adhere to syntactic and semantic rules. Finally, Ye et al. 
[<xref ref-type="bibr" rid="ref-48">48</xref>] combined generative Transformers with contrastive learning to enhance the model&#x2019;s discriminative capability, enabling it to handle more complex sentence structures. <xref ref-type="fig" rid="fig-4">Fig. 4</xref> shows an example of a sequence-to-sequence model.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Example of a sequence-to-sequence method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_59455-fig-4.tif"/>
</fig>
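<p>The generative formulation can be sketched minimally as a linearization of triples into a flat target string that a decoder would be trained to emit; the separator tokens and sample facts here are illustrative choices, not any published model&#x2019;s scheme.</p>

```python
# Minimal sketch of the sequence-to-sequence view: triples are linearized
# into a target string with special separator tokens, and the generated
# string is parsed back into triples at inference time.
def linearize(triples):
    """Triples -> target sequence the decoder learns to generate."""
    return " [SEP] ".join(f"{s} [REL] {r} [OBJ] {o}" for s, r, o in triples)

def delinearize(seq):
    """Parse a generated sequence back into (subject, relation, object) triples."""
    triples = []
    for chunk in seq.split(" [SEP] "):
        subj, rest = chunk.split(" [REL] ")
        rel, obj = rest.split(" [OBJ] ")
        triples.append((subj, rel, obj))
    return triples

gold = [("Obama", "born_in", "Hawaii"), ("Hawaii", "part_of", "USA")]
target = linearize(gold)
print(target)
assert delinearize(target) == gold  # round-trip check
```

<p>This framing handles overlapping triples naturally, since the same entity string may recur anywhere in the output sequence, but it also explains the exposure-bias concern addressed by Zhang et al. [<xref ref-type="bibr" rid="ref-45">45</xref>]: at inference the decoder conditions on its own (possibly malformed) previous tokens rather than the gold sequence.</p>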
<p>Wang et al. [<xref ref-type="bibr" rid="ref-49">49</xref>] enhanced the model&#x2019;s capability to recognize entities and relationships by integrating various semantic features, enabling effective handling of texts that contain complex structures and rich semantic information. Chen et al. [<xref ref-type="bibr" rid="ref-50">50</xref>] successfully addressed the joint extraction problem of entities and relationships using an enhanced binary pointer network, while identifying implicit relationships in the text through reasoning patterns. Simultaneously, Tan et al. [<xref ref-type="bibr" rid="ref-51">51</xref>] introduced a query mechanism and instance differentiation method to better distinguish complex contexts and similar instances in relationship extraction tasks, thereby improving the accuracy of entity and relationship extraction. Zhang et al. [<xref ref-type="bibr" rid="ref-52">52</xref>] combined external knowledge resources with contextual information to enhance the accuracy and robustness of entity and relationship extraction tasks. Additionally, Shang et al. [<xref ref-type="bibr" rid="ref-53">53</xref>] simplified the extraction process by simultaneously executing all extraction operations within a unified model. Li et al. [<xref ref-type="bibr" rid="ref-54">54</xref>] emphasized the approach of first identifying relationships within the text and then inferring and filling in the corresponding entities to complete the extraction of triples. Lai et al. [<xref ref-type="bibr" rid="ref-55">55</xref>] proposed a neural network model that incorporates a multi-head attention mechanism, effectively capturing the dependencies between entities and relationships in the text by introducing relationship-aware multi-head attention mechanisms. Furthermore, Liang et al. 
[<xref ref-type="bibr" rid="ref-56">56</xref>] employed a standard sequence-to-sequence architecture, treating the input text as a sequence and generating an output sequence that represents entity and relationship triples. Finally, Li et al. [<xref ref-type="bibr" rid="ref-57">57</xref>] proposed a decoding scheme called TDEER, which views relationships as transformations from the subject to the object, effectively addressing the issue of overlapping relationship triples. <xref ref-type="table" rid="table-4">Table 4</xref> shows the current state-of-the-art in sequence-to-sequence models.</p>
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Current state-of-the-art in sequence-to-sequence models</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th align="center" colspan="3">NYT</th>
<th align="center" colspan="3">WebNLG</th>
</tr>
<tr>
<th></th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
</tr>
</thead>
<tbody>
<tr>
<td><inline-formula id="ieqn-34"><mml:math id="mml-ieqn-34"><mml:mtext>PNDec</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-41">41</xref>]</td>
<td>89.3</td>
<td>78.8</td>
<td>83.8</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-35"><mml:math id="mml-ieqn-35"><mml:mtext>CopyRE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-42">42</xref>]</td>
<td>61.0</td>
<td>56.6</td>
<td>58.7</td>
<td>37.7</td>
<td>36.4</td>
<td>37.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-36"><mml:math id="mml-ieqn-36"><mml:mtext>SPN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-43">43</xref>]</td>
<td>92.5</td>
<td>92.2</td>
<td>92.3</td>
<td>93.1</td>
<td>93.6</td>
<td>93.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-37"><mml:math id="mml-ieqn-37"><mml:mtext>RIFRE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-44">44</xref>]</td>
<td>93.6</td>
<td>90.5</td>
<td>92.0</td>
<td>93.3</td>
<td>92.0</td>
<td>92.6</td>
</tr>
<tr>
<td><inline-formula id="ieqn-38"><mml:math id="mml-ieqn-38"><mml:mtext>Seq2UMTree</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-45">45</xref>]</td>
<td>79.1</td>
<td>75.1</td>
<td>77.1</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-39"><mml:math id="mml-ieqn-39"><mml:mtext>BTDM</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-46">46</xref>]</td>
<td>93.1</td>
<td>92.4</td>
<td>92.7</td>
<td>90.9</td>
<td>90.1</td>
<td>90.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-40"><mml:math id="mml-ieqn-40"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-47">47</xref>]</td>
<td>92.8</td>
<td>93.1</td>
<td>92.9</td>
<td>90.4</td>
<td>92.4</td>
<td>91.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-41"><mml:math id="mml-ieqn-41"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-49">49</xref>]</td>
<td>93.6</td>
<td>91.7</td>
<td>92.6</td>
<td>94.9</td>
<td>92.3</td>
<td>93.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-42"><mml:math id="mml-ieqn-42"><mml:mtext>CGT</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-48">48</xref>]</td>
<td>94.7</td>
<td>84.2</td>
<td>89.1</td>
<td>92.9</td>
<td>75.6</td>
<td>83.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-43"><mml:math id="mml-ieqn-43"><mml:mtext>R-BPtrNet</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-50">50</xref>]</td>
<td>94.0</td>
<td>92.9</td>
<td>93.5</td>
<td>94.3</td>
<td>93.3</td>
<td>93.8</td>
</tr>
<tr>
<td><inline-formula id="ieqn-44"><mml:math id="mml-ieqn-44"><mml:mtext>QIDN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-51">51</xref>]</td>
<td>93.4</td>
<td>92.6</td>
<td>93.0</td>
<td>94.1</td>
<td>93.7</td>
<td>93.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-45"><mml:math id="mml-ieqn-45"><mml:mtext>REKnow</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-52">52</xref>]</td>
<td>93.1</td>
<td>94.1</td>
<td>93.6</td>
<td>90.4</td>
<td>87.9</td>
<td>89.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-46"><mml:math id="mml-ieqn-46"><mml:mtext>DirectRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-53">53</xref>]</td>
<td>93.6</td>
<td>92.2</td>
<td>92.9</td>
<td>91.0</td>
<td>89.0</td>
<td>90.0</td>
</tr>
<tr>
<td><inline-formula id="ieqn-47"><mml:math id="mml-ieqn-47"><mml:mtext>RFBFN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-54">54</xref>]</td>
<td>93.7</td>
<td>93.6</td>
<td>93.6</td>
<td>91.5</td>
<td>89.4</td>
<td>90.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-48"><mml:math id="mml-ieqn-48"><mml:mtext>RMAN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-55">55</xref>]</td>
<td>87.1</td>
<td>83.8</td>
<td>85.4</td>
<td>83.6</td>
<td>85.3</td>
<td>84.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-49"><mml:math id="mml-ieqn-49"><mml:mtext>Seq2Seq-RE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-56">56</xref>]</td>
<td>88.3</td>
<td>77.3</td>
<td>82.5</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-50"><mml:math id="mml-ieqn-50"><mml:mtext>TDEER</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-57">57</xref>]</td>
<td>93.0</td>
<td>92.1</td>
<td>92.5</td>
<td>93.8</td>
<td>92.4</td>
<td>93.1</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The sequence-to-sequence approach naturally excels at handling nested entities, overlapping entities, and complex multiple relationships, as its decoding process allows for output generation at any position rather than being constrained by a fixed set of labels. However, a notable mismatch arises between training, where the decoder typically conditions on the ground-truth label of the preceding word, and inference, where it must condition on the labels it has generated itself. This train-inference discrepancy, commonly known as exposure bias, can lead to a decline in model performance during inference.</p>
<p>Moreover, the model may still encounter challenges when generating long sequences, particularly when tasked with extracting relationships that span multiple sentences, where accuracy is likely to diminish. Consequently, although the sequence-to-sequence method possesses significant potential in addressing complex structures, the critical difference between training and inference phases, along with the difficulties associated with long sequence generation, highlights the limitations of the model in practical applications.</p>
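<p>The train-inference mismatch described above can be made concrete with a toy decoder. In the following minimal Python sketch, the transition table, token names, and the deliberately wrong first prediction are all illustrative assumptions rather than any cited model; the point is that a single early error stays isolated under teacher forcing but cascades when the decoder must feed back its own outputs:</p>

```python
# Toy next-token table standing in for a trained decoder; the entry for
# "<s>" is deliberately wrong to simulate a model error at the first step.
NEXT = {"<s>": "Barack", "Barack": "president_of",
        "Obama": "born_in", "born_in": "Hawaii", "Hawaii": "</s>"}

def decode(gold, teacher_forcing):
    """Generate len(gold) tokens, conditioning each step on the previous
    gold token (training-style) or on the model's own output (inference)."""
    out, prev = [], "<s>"
    for step in range(len(gold)):
        pred = NEXT.get(prev, "<unk>")
        out.append(pred)
        # Teacher forcing resets the context to the ground truth, so later
        # steps never see the earlier mistake; free-running decoding does.
        prev = gold[step] if teacher_forcing else pred
    return out

gold = ["Obama", "born_in", "Hawaii", "</s>"]
print(decode(gold, teacher_forcing=True))   # one isolated error
print(decode(gold, teacher_forcing=False))  # the error cascades
```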
</sec>
</sec>
<sec id="s5_2">
<label>5.2</label>
<title>Parameter Sharing</title>
<p>Unlike the joint decoding approach, the parameter-sharing method adopts a multi-module, multi-step process to extract relational triplets. Essentially, parameter sharing still divides the relational triplet extraction task into two subtasks: named entity recognition and relation extraction. By sharing the parameters of the encoding layer in a joint model, it enables joint learning, thus facilitating interdependency between entity recognition and relation extraction. Existing parameter-sharing methods mainly rely on task decomposition or multi-task learning techniques, enhancing the interaction between these two subtasks by sharing underlying parameters, ultimately achieving joint extraction of relational triplets.</p>
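<p>The shared-encoder layout just described can be sketched in a few lines of plain Python. This is an illustrative toy with invented dimensions and random, untrained weights, not any cited architecture; the point is only that one encoding pass feeds both subtask heads, so both subtasks read (and, during training, would update) the same underlying parameters:</p>

```python
import math
import random

random.seed(0)
D_IN, D_HID, N_TAGS, N_RELS = 4, 4, 3, 2  # illustrative toy dimensions

def linear(d_in, d_out):
    # Random weight matrix for a toy linear layer (no training here).
    return [[random.uniform(-1, 1) for _ in range(d_out)] for _ in range(d_in)]

def matvec(W, v):
    # v @ W for a (d_in x d_out) weight matrix stored as a list of rows.
    return [sum(row[j] * x for row, x in zip(W, v)) for j in range(len(W[0]))]

W_shared = linear(D_IN, D_HID)  # shared encoding layer: serves BOTH subtasks
W_ner = linear(D_HID, N_TAGS)   # private head: per-token entity tags
W_re = linear(D_HID, N_RELS)    # private head: sentence-level relation scores

def forward(tokens):
    # Encode once; both heads consume the same hidden states, which is how
    # the two subtasks interact through the shared parameters.
    hidden = [[math.tanh(h) for h in matvec(W_shared, t)] for t in tokens]
    tag_scores = [matvec(W_ner, h) for h in hidden]            # NER subtask
    pooled = [sum(col) / len(hidden) for col in zip(*hidden)]  # mean-pool
    rel_scores = matvec(W_re, pooled)                          # RE subtask
    return tag_scores, rel_scores

tags, rels = forward([[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]])
```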
<p>Geng et al. [<xref ref-type="bibr" rid="ref-58">58</xref>] enhanced the model&#x2019;s ability to understand complex relationships within texts by integrating various semantic features, resulting in improved performance when handling multiple entities and longer sentences. Similarly, Gao et al. [<xref ref-type="bibr" rid="ref-59">59</xref>] proposed a relation decomposition-based triple extraction model. This model generates sentence-level vector representations by merging the word vectors of the input text, classifying the relationships within the text first, and then identifying the corresponding entities. Building on CopyRE, Zeng et al. [<xref ref-type="bibr" rid="ref-60">60</xref>] employed a multi-task learning strategy to generate entity and relationship labels for extraction, improving the accuracy of the generated results through a copying mechanism. Li et al. [<xref ref-type="bibr" rid="ref-61">61</xref>] introduced a joint extraction model based on a decomposition strategy, enhancing overall extraction performance by sharing information between tasks. Yu et al. [<xref ref-type="bibr" rid="ref-62">62</xref>] simplified the complex entity-relationship extraction task into several sub-tasks and achieved end-to-end joint extraction. Sun et al. [<xref ref-type="bibr" rid="ref-63">63</xref>] utilized a recursive mechanism to enable continuous information exchange, optimizing entity recognition and relationship extraction to improve the accuracy of the extraction process. Zhang et al. [<xref ref-type="bibr" rid="ref-64">64</xref>] proposed a coarse-to-fine extraction framework, initially performing rough extraction followed by detailed refinement. Finally, Wang et al. [<xref ref-type="bibr" rid="ref-65">65</xref>] adopted a decoupling and aggregation strategy, processing entity recognition and relationship extraction tasks separately and then integrating the results in subsequent stages, thereby enhancing the flexibility of the extraction process. 
<xref ref-type="fig" rid="fig-5">Fig. 5</xref> shows an example of a parameter-sharing model.</p>
<fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>Example of a parameter sharing method</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_59455-fig-5.tif"/>
</fig>
<p>Yang et al. [<xref ref-type="bibr" rid="ref-66">66</xref>] incorporated relationship information into the attention mechanism, enabling the model to more effectively focus on entities and their contexts related to specific relationships. Meanwhile, Duan et al. [<xref ref-type="bibr" rid="ref-67">67</xref>] developed an adaptive mechanism that flexibly adjusts processing strategies based on the features of entities and relationships within the context, thereby enhancing extraction accuracy. The triple relationship network mechanism proposed by Wang et al. [<xref ref-type="bibr" rid="ref-68">68</xref>] significantly improved the accuracy and efficiency of joint extraction by directly modeling the complex interactions between entities and relationship triples. Furthermore, Wang et al. [<xref ref-type="bibr" rid="ref-69">69</xref>] leveraged BERT&#x2019;s powerful pre-training capabilities in conjunction with a decomposition strategy to comprehensively enhance the performance of entity and relationship extraction. Yang et al. [<xref ref-type="bibr" rid="ref-70">70</xref>] adopted a bidirectional relationship-guided attention mechanism that integrates semantics and knowledge, enabling the model to better tackle the challenges posed by complex texts. Gao et al. [<xref ref-type="bibr" rid="ref-71">71</xref>] enhanced the model&#x2019;s ability to handle long sentences by combining stage-wise processing with a global entity docking mechanism. In addition, Sun et al. [<xref ref-type="bibr" rid="ref-72">72</xref>] implemented progressive multi-task learning, achieving more precise entity and relationship extraction by gradually learning each sub-task and effectively controlling the flow of information. Huang et al. 
[<xref ref-type="bibr" rid="ref-73">73</xref>] utilized transformation rules in knowledge graph embeddings to learn relationship representations, proposing a special relationship called NA and dynamically selecting suitable relationships through a bias loss function. Zhuang et al. [<xref ref-type="bibr" rid="ref-74">74</xref>] designed a priority-driven joint extraction model that prioritizes relationship extraction to guide entity recognition and triple generation. Finally, Chen et al. [<xref ref-type="bibr" rid="ref-75">75</xref>] improved the joint extraction capabilities of entities and relationships by transforming and reinforcing the relationships between text and knowledge graphs. <xref ref-type="table" rid="table-5">Table 5</xref> shows the current state-of-the-art in parameter-sharing models.</p>
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Current state-of-the-art in parameter-sharing models</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th align="center" colspan="3">NYT</th>
<th align="center" colspan="3">WebNLG</th>
</tr>
<tr>
<th></th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
</tr>
</thead>
<tbody>
<tr>
<td><inline-formula id="ieqn-51"><mml:math id="mml-ieqn-51"><mml:mtext>RS-Joint</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-58">58</xref>]</td>
<td>80.3</td>
<td>57.2</td>
<td>66.8</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-52"><mml:math id="mml-ieqn-52"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-59">59</xref>]</td>
<td>91.5</td>
<td>90.0</td>
<td>90.7</td>
<td>91.4</td>
<td>92.2</td>
<td>91.8</td>
</tr>
<tr>
<td><inline-formula id="ieqn-53"><mml:math id="mml-ieqn-53"><mml:mtext>CopyMTL</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-60">60</xref>]</td>
<td>75.7</td>
<td>68.7</td>
<td>72.0</td>
<td>58.0</td>
<td>54.9</td>
<td>56.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-54"><mml:math id="mml-ieqn-54"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-61">61</xref>]</td>
<td>86.5</td>
<td>73.2</td>
<td>79.3</td>
<td>85.3</td>
<td>83.1</td>
<td>84.2</td>
</tr>
<tr>
<td><inline-formula id="ieqn-55"><mml:math id="mml-ieqn-55"><mml:mtext>ETL-Span</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-62">62</xref>]</td>
<td>85.5</td>
<td>71.7</td>
<td>78.0</td>
<td>84.3</td>
<td>82.0</td>
<td>83.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-56"><mml:math id="mml-ieqn-56"><mml:mtext>RIN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-63">63</xref>]</td>
<td>83.9</td>
<td>85.5</td>
<td>84.7</td>
<td>77.3</td>
<td>76.8</td>
<td>77.0</td>
</tr>
<tr>
<td><inline-formula id="ieqn-57"><mml:math id="mml-ieqn-57"><mml:mtext>C2FERE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-64">64</xref>]</td>
<td>93.7</td>
<td>92.6</td>
<td>93.1</td>
<td>91.5</td>
<td>89.2</td>
<td>90.3</td>
</tr>
<tr>
<td><inline-formula id="ieqn-58"><mml:math id="mml-ieqn-58"><mml:mtext>BiDArtER</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-65">65</xref>]</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>92.6</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>94.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-59"><mml:math id="mml-ieqn-59"><mml:mtext>RGAM</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-66">66</xref>]</td>
<td>90.6</td>
<td>92.0</td>
<td>91.3</td>
<td>93.5</td>
<td>91.9</td>
<td>92.6</td>
</tr>
<tr>
<td><inline-formula id="ieqn-60"><mml:math id="mml-ieqn-60"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-67">67</xref>]</td>
<td>81.3</td>
<td>76.7</td>
<td>79.4</td>
<td>67.4</td>
<td>65.1</td>
<td>66.3</td>
</tr>
<tr>
<td><inline-formula id="ieqn-61"><mml:math id="mml-ieqn-61"><mml:mtext>TRN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-68">68</xref>]</td>
<td>93.0</td>
<td>92.3</td>
<td>92.6</td>
<td>93.5</td>
<td>92.7</td>
<td>93.1</td>
</tr>
<tr>
<td><inline-formula id="ieqn-62"><mml:math id="mml-ieqn-62"><mml:mtext>Document</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-69">69</xref>]</td>
<td>87.0</td>
<td>85.1</td>
<td>86.0</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
<td>&#x2013;</td>
</tr>
<tr>
<td><inline-formula id="ieqn-63"><mml:math id="mml-ieqn-63"><mml:mtext>BRASK</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-70">70</xref>]</td>
<td>93.0</td>
<td>91.5</td>
<td>92.2</td>
<td>94.8</td>
<td>92.2</td>
<td>93.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-64"><mml:math id="mml-ieqn-64"><mml:mtext>ERGM</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-71">71</xref>]</td>
<td>93.3</td>
<td>91.5</td>
<td>92.4</td>
<td>94.2</td>
<td>91.2</td>
<td>92.7</td>
</tr>
<tr>
<td><inline-formula id="ieqn-65"><mml:math id="mml-ieqn-65"><mml:mtext>PMEI</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-72">72</xref>]</td>
<td>88.4</td>
<td>88.9</td>
<td>88.7</td>
<td>80.8</td>
<td>82.8</td>
<td>81.8</td>
</tr>
<tr>
<td><inline-formula id="ieqn-66"><mml:math id="mml-ieqn-66"><mml:mtext>TransRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-73">73</xref>]</td>
<td>90.1</td>
<td>93.9</td>
<td>92.0</td>
<td>92.7</td>
<td>94.9</td>
<td>93.8</td>
</tr>
<tr>
<td><inline-formula id="ieqn-67"><mml:math id="mml-ieqn-67"><mml:mtext>RFTE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-74">74</xref>]</td>
<td>88.9</td>
<td>90.5</td>
<td>89.7</td>
<td>91.3</td>
<td>92.5</td>
<td>91.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-68"><mml:math id="mml-ieqn-68"><mml:mtext>MTG</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-75">75</xref>]</td>
<td>95.6</td>
<td>93.1</td>
<td>94.3</td>
<td>94.8</td>
<td>95.1</td>
<td>94.9</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In joint extraction models, parameter sharing techniques effectively reduce the number of parameters by allowing certain parameters to be shared between the tasks of entity recognition and relationship extraction. This approach enhances training efficiency and improves the model&#x2019;s generalization capability. However, when significant differences exist between these tasks, parameter sharing can have detrimental effects, ultimately weakening overall performance. The adjustment of shared parameters affects all tasks simultaneously, which often leads to difficulties in fine-tuning specific tasks.</p>
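<p>The parameter-reduction claim is easy to quantify with a back-of-the-envelope sketch. The dimensions below are illustrative assumptions (a BERT-base-like hidden size and small label sets), not figures from any cited model; the point is that sharing one encoder between the two subtasks removes a full duplicate encoder:</p>

```python
def linear_params(d_in, d_out):
    # Parameters of one dense layer: weight matrix plus bias vector.
    return d_in * d_out + d_out

HIDDEN, N_TAGS, N_RELS = 768, 9, 24  # illustrative, BERT-base-like sizes

encoder = linear_params(HIDDEN, HIDDEN)
heads = linear_params(HIDDEN, N_TAGS) + linear_params(HIDDEN, N_RELS)

shared = encoder + heads        # one encoder serves both subtasks
separate = 2 * encoder + heads  # each subtask keeps a private encoder

print(shared, separate, separate - shared)  # savings = one full encoder
```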
<p>Consequently, this limitation not only restricts the flexibility and adaptability of the parameter-sharing model but also raises concerns regarding its performance in diverse task settings. Therefore, exploring how to balance the benefits of parameter sharing with the unique requirements of individual tasks becomes essential. This inquiry not only offers potential avenues for enhancing the performance of joint extraction models but also lays the groundwork for future research in this area.</p>
</sec>
</sec>
<sec id="s6">
<label>6</label>
<title>Experiments</title>
<p>In order to validate the reproducibility of current state-of-the-art relational triplet extraction techniques and to address the lack of performance comparisons across different models under uniform conditions, this paper conducts small-scale comparative experiments under consistent hardware settings. These experiments utilize the same datasets and evaluation metrics to ensure fairness and reliability. Moreover, additional small-scale experiments are carried out, focusing on the number of relational triplets present in sentences and the types of overlapping triplets. Through these analyses, the paper aims to evaluate and compare the performance of various extraction models across different datasets, providing a deeper understanding of their strengths and limitations in diverse scenarios.</p>
<sec id="s6_1">
<label>6.1</label>
<title>Datasets and Evaluation Metrics</title>
<p>To provide a clear and intuitive comparison of the strengths and weaknesses of various methods in practical tasks, this paper employs the NYT and WebNLG datasets, as introduced in <xref ref-type="sec" rid="s3">Section 3</xref>, for the comparative experiments. In order to further investigate how different approaches handle complex scenarios, sentences containing varying types of overlapping triplets and sentences with different numbers of triplets are used to test all models.</p>
<p>For the evaluation, precision (Prec.), recall (Rec.), and F1 score (F1) are adopted as the primary metrics, allowing for a comprehensive assessment of the models&#x2019; performance through the combined analysis of these various indicators. This approach ensures a robust and nuanced evaluation of the methods under different conditions.</p>
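<p>Under the standard exact-match protocol, a predicted triple counts as correct only if its subject, relation, and object all match a gold triple. The three metrics can be sketched as follows (the example triples are invented for illustration):</p>

```python
def prf1(pred, gold):
    # Exact-match evaluation over (subject, relation, object) triples.
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)                       # correctly extracted triples
    prec = tp / len(pred) if pred else 0.0      # Prec. = TP / predicted
    rec = tp / len(gold) if gold else 0.0       # Rec.  = TP / gold
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = [("Obama", "born_in", "Hawaii"), ("Obama", "president_of", "USA")]
pred = [("Obama", "born_in", "Hawaii"), ("Hawaii", "part_of", "USA")]
print(prf1(pred, gold))  # -> (0.5, 0.5, 0.5)
```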
</sec>
<sec id="s6_2">
<label>6.2</label>
<title>Implementation Details</title>
<p>To facilitate a systematic comparison of the discussed models under uniform conditions, all training processes in our experiments are conducted on a workstation equipped with an Intel Xeon Gold 6133 CPU, 256 GB memory, an RTX 4090 GPU, and running Ubuntu 20.04. The model parameters used are directly derived from those provided in the respective papers, ensuring consistency with the original experimental settings.</p>
</sec>
<sec id="s6_3">
<label>6.3</label>
<title>Baselines</title>
<p>To validate the reproducibility of the models and compare the strengths and weaknesses across different types, we have selected a number of advanced models from each category for the comparative experiments. The models chosen include: (1) <bold>Table filling:</bold> OneRel [<xref ref-type="bibr" rid="ref-21">21</xref>], TPLinker [<xref ref-type="bibr" rid="ref-15">15</xref>], and GRTE [<xref ref-type="bibr" rid="ref-18">18</xref>]; (2) <bold>Tagging:</bold> PRGC [<xref ref-type="bibr" rid="ref-30">30</xref>], CASREL [<xref ref-type="bibr" rid="ref-33">33</xref>], and BiRTE [<xref ref-type="bibr" rid="ref-35">35</xref>]; (3) <bold>Sequence-to-sequence:</bold> BTDM [<xref ref-type="bibr" rid="ref-46">46</xref>], R-BPtrNet [<xref ref-type="bibr" rid="ref-50">50</xref>], and RFBFN [<xref ref-type="bibr" rid="ref-54">54</xref>]; (4) <bold>Parameter sharing:</bold> C2FERE [<xref ref-type="bibr" rid="ref-64">64</xref>], ERGM [<xref ref-type="bibr" rid="ref-71">71</xref>], and MTG [<xref ref-type="bibr" rid="ref-75">75</xref>]. The code for all these models is sourced from the publicly available implementations provided in the respective papers. This selection ensures a representative and comprehensive evaluation of the models&#x2019; performance across various extraction tasks.</p>
</sec>
<sec id="s6_4">
<label>6.4</label>
<title>Results and Analysis</title>
<p>From <xref ref-type="table" rid="table-6">Table 6</xref>, it can be observed that the best-performing model is MTG, which demonstrates exceptional handling of overlapping relational triplets in sentences, particularly excelling with an F1 score of 98.0% on the EPO-type data in the WebNLG dataset. In addition, the table filling model shows generally strong performance across both datasets; however, its performance on the WebNLG dataset, particularly with SPO-type data, is slightly lower than its performance on the NYT dataset. This discrepancy highlights a limitation in the table filling model&#x2019;s ability to effectively handle the more complex sentence structures and natural language expressions present in WebNLG. The tagging model, while performing at an average level across all models, effectively leverages contextual information, enabling it to exhibit stable performance across various sentence types. Moreover, it demonstrates good generalization ability, consistently performing well on different datasets.</p>
<table-wrap id="table-6">
<label>Table 6</label>
<caption>
<title><inline-formula id="ieqn-69"><mml:math id="mml-ieqn-69"><mml:mrow><mml:mi mathvariant="normal">F</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula> scores on sentences with different overlapping patterns</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th align="center" colspan="3">NYT</th>
<th align="center" colspan="3">WebNLG</th>
</tr>
<tr>
<th></th>
<th>NEO</th>
<th>EPO</th>
<th>SPO</th>
<th>NEO</th>
<th>EPO</th>
<th>SPO</th>
</tr>
</thead>
<tbody>
<tr>
<td><inline-formula id="ieqn-70"><mml:math id="mml-ieqn-70"><mml:mtext>OneRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td>90.6</td>
<td>95.1</td>
<td>94.8</td>
<td><bold>91.9</bold></td>
<td>95.4</td>
<td>94.7</td>
</tr>
<tr>
<td><inline-formula id="ieqn-71"><mml:math id="mml-ieqn-71"><mml:mtext>TPLinker</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>90.1</td>
<td>94.0</td>
<td>93.4</td>
<td>87.9</td>
<td>95.3</td>
<td>92.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-72"><mml:math id="mml-ieqn-72"><mml:mtext>GRTE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-18">18</xref>]</td>
<td>91.1</td>
<td>95.0</td>
<td>94.4</td>
<td>90.6</td>
<td>94.5</td>
<td><bold>96.0</bold></td>
</tr>
<tr>
<td><inline-formula id="ieqn-73"><mml:math id="mml-ieqn-73"><mml:mtext>PRGC</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-30">30</xref>]</td>
<td>91.0</td>
<td>94.5</td>
<td>94.0</td>
<td>90.4</td>
<td>95.9</td>
<td>93.6</td>
</tr>
<tr>
<td><inline-formula id="ieqn-74"><mml:math id="mml-ieqn-74"><mml:mtext>CASREL</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-33">33</xref>]</td>
<td>87.3</td>
<td>92.0</td>
<td>91.4</td>
<td>89.4</td>
<td>94.7</td>
<td>92.2</td>
</tr>
<tr>
<td><inline-formula id="ieqn-75"><mml:math id="mml-ieqn-75"><mml:mtext>BiRTE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-35">35</xref>]</td>
<td>91.4</td>
<td>94.2</td>
<td>94.7</td>
<td>90.1</td>
<td>94.3</td>
<td>95.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-76"><mml:math id="mml-ieqn-76"><mml:mtext>BTDM</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-46">46</xref>]</td>
<td>90.8</td>
<td>94.7</td>
<td>94.9</td>
<td>91.0</td>
<td>94.3</td>
<td>93.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-77"><mml:math id="mml-ieqn-77"><mml:mtext>R-BPtrNet</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-50">50</xref>]</td>
<td>90.4</td>
<td>95.2</td>
<td>94.4</td>
<td>89.5</td>
<td>96.1</td>
<td>93.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-78"><mml:math id="mml-ieqn-78"><mml:mtext>RFBFN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-54">54</xref>]</td>
<td>91.2</td>
<td>95.6</td>
<td>95.2</td>
<td>91.0</td>
<td>96.5</td>
<td>94.6</td>
</tr>
<tr>
<td><inline-formula id="ieqn-79"><mml:math id="mml-ieqn-79"><mml:mtext>C2FERE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-64">64</xref>]</td>
<td><bold>91.9</bold></td>
<td>95.1</td>
<td>94.5</td>
<td>91.8</td>
<td>96.3</td>
<td>94.7</td>
</tr>
<tr>
<td><inline-formula id="ieqn-80"><mml:math id="mml-ieqn-80"><mml:mtext>ERGM</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-71">71</xref>]</td>
<td>90.9</td>
<td>94.1</td>
<td>93.6</td>
<td>90.3</td>
<td>96.0</td>
<td>93.8</td>
</tr>
<tr>
<td><inline-formula id="ieqn-81"><mml:math id="mml-ieqn-81"><mml:mtext>MTG</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-75">75</xref>]</td>
<td>91.1</td>
<td><bold>96.7</bold></td>
<td><bold>95.7</bold></td>
<td>90.0</td>
<td><bold>98.0</bold></td>
<td>94.5</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>As for the sequence-to-sequence model, its overall performance on EPO-type triplets is relatively stable across both datasets, showcasing sensitivity to entities and relations in complex sentences. This is particularly important, as errors in entity recognition have a much larger impact than errors in relation classification for EPO-type triplets. However, the sequence-to-sequence model&#x2019;s performance on the SPO-type data in the WebNLG dataset lags behind its performance on the NYT dataset, possibly due to suboptimal utilization of contextual information. Finally, the parameter-sharing model shows superior performance on the EPO-type data in the WebNLG dataset compared to the NYT dataset, reflecting the model&#x2019;s strong capabilities under low-resource conditions. This suggests that, in scenarios with limited resources, the parameter-sharing model can still perform remarkably well.</p>
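<p>How sentences are bucketed into these overlap categories can be sketched as follows. The naming mirrors the tables above (NEO for no entity overlap, EPO for entity-pair overlap, and SPO for triples sharing a single entity); the exact definitions vary slightly across the cited papers, so this is an assumed reading rather than a canonical one:</p>

```python
from itertools import combinations

def overlap_type(triples):
    """Classify one sentence's (subject, relation, object) triples."""
    # Entity-pair overlap takes priority: two triples over the same pair.
    for (s1, _, o1), (s2, _, o2) in combinations(triples, 2):
        if {s1, o1} == {s2, o2}:
            return "EPO"  # same entity pair, different relations
    # Otherwise check whether any two triples share a single entity.
    for (s1, _, o1), (s2, _, o2) in combinations(triples, 2):
        if {s1, o1} & {s2, o2}:
            return "SPO"  # exactly one shared entity
    return "NEO"          # no shared entities at all

print(overlap_type([("A", "r1", "B"), ("A", "r2", "B")]))  # EPO
print(overlap_type([("A", "r1", "B"), ("A", "r2", "C")]))  # SPO
print(overlap_type([("A", "r1", "B"), ("C", "r2", "D")]))  # NEO
```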
<p><xref ref-type="table" rid="table-7">Table 7</xref> presents the extraction performance of different models when handling sentences with varying numbers of triplets. When <inline-formula id="ieqn-82"><mml:math id="mml-ieqn-82"><mml:mi>N</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula>, C2FERE demonstrates the best overall performance, indicating its strong ability to handle simpler sentence structures. In contrast, when <inline-formula id="ieqn-83"><mml:math id="mml-ieqn-83"><mml:mi>N</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula>, MTG outperforms the other models in most settings, highlighting its superior understanding of more complex sentence structures. Overall, the parameter-sharing model excels in handling sentence structures with varying numbers of triplets, which can be attributed to its unique construction approach. By employing a multi-task learning framework that combines named entity recognition and relation extraction, the model enhances extraction performance.</p>
<table-wrap id="table-7">
<label>Table 7</label>
<caption>
<title><inline-formula id="ieqn-84"><mml:math id="mml-ieqn-84"><mml:mrow><mml:mi mathvariant="normal">F</mml:mi><mml:mn>1</mml:mn></mml:mrow></mml:math></inline-formula> scores on sentences with different triple numbers</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Model</th>
<th align="center" colspan="5">NYT</th>
<th align="center" colspan="5">WebNLG</th>
</tr>
<tr>
<th></th>
<th>N &#x003D; 1</th>
<th>N &#x003D; 2</th>
<th>N &#x003D; 3</th>
<th>N &#x003D; 4</th>
<th>N &#x003E; 5</th>
<th>N &#x003D; 1</th>
<th>N &#x003D; 2</th>
<th>N &#x003D; 3</th>
<th>N &#x003D; 4</th>
<th>N &#x003E; 5</th>
</tr>
</thead>
<tbody>
<tr>
<td><inline-formula id="ieqn-85"><mml:math id="mml-ieqn-85"><mml:mtext>OneRel</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-21">21</xref>]</td>
<td>90.5</td>
<td>93.4</td>
<td>93.9</td>
<td>96.5</td>
<td><bold>94.2</bold></td>
<td>91.4</td>
<td><bold>93.0</bold></td>
<td>95.9</td>
<td>95.7</td>
<td>94.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-86"><mml:math id="mml-ieqn-86"><mml:mtext>TPLinker</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-15">15</xref>]</td>
<td>90.0</td>
<td>92.8</td>
<td>93.1</td>
<td>96.1</td>
<td>90.0</td>
<td>88.0</td>
<td>90.1</td>
<td>94.6</td>
<td>93.3</td>
<td>91.6</td>
</tr>
<tr>
<td><inline-formula id="ieqn-87"><mml:math id="mml-ieqn-87"><mml:mtext>GRTE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-18">18</xref>]</td>
<td>90.8</td>
<td>93.7</td>
<td>94.4</td>
<td>96.2</td>
<td>93.4</td>
<td>90.6</td>
<td>92.5</td>
<td>96.5</td>
<td>95.5</td>
<td>94.4</td>
</tr>
<tr>
<td><inline-formula id="ieqn-88"><mml:math id="mml-ieqn-88"><mml:mtext>PRGC</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-30">30</xref>]</td>
<td>91.1</td>
<td>93.0</td>
<td>93.5</td>
<td>95.5</td>
<td>93.0</td>
<td>89.9</td>
<td>91.6</td>
<td>95.0</td>
<td>94.8</td>
<td>92.8</td>
</tr>
<tr>
<td><inline-formula id="ieqn-89"><mml:math id="mml-ieqn-89"><mml:mtext>CASREL</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-33">33</xref>]</td>
<td>88.2</td>
<td>90.3</td>
<td>91.9</td>
<td>94.2</td>
<td>83.7</td>
<td>89.3</td>
<td>90.8</td>
<td>94.2</td>
<td>92.4</td>
<td>90.9</td>
</tr>
<tr>
<td><inline-formula id="ieqn-90"><mml:math id="mml-ieqn-90"><mml:mtext>BiRTE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-35">35</xref>]</td>
<td>91.5</td>
<td>93.7</td>
<td>93.9</td>
<td>95.8</td>
<td>92.1</td>
<td>90.2</td>
<td>92.9</td>
<td>95.7</td>
<td>94.6</td>
<td>92.0</td>
</tr>
<tr>
<td><inline-formula id="ieqn-91"><mml:math id="mml-ieqn-91"><mml:mtext>BTDM</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-46">46</xref>]</td>
<td>90.7</td>
<td>93.4</td>
<td>94.2</td>
<td>96.2</td>
<td>94.0</td>
<td>90.8</td>
<td>92.5</td>
<td>96.1</td>
<td>95.4</td>
<td>92.7</td>
</tr>
<tr>
<td><inline-formula id="ieqn-92"><mml:math id="mml-ieqn-92"><mml:mtext>R-BPtrNet</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-50">50</xref>]</td>
<td>89.5</td>
<td>93.1</td>
<td>93.5</td>
<td>96.7</td>
<td>91.3</td>
<td>88.5</td>
<td>91.4</td>
<td>96.2</td>
<td>94.9</td>
<td>94.2</td>
</tr>
<tr>
<td><inline-formula id="ieqn-93"><mml:math id="mml-ieqn-93"><mml:mtext>RFBFN</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-54">54</xref>]</td>
<td>91.4</td>
<td>93.8</td>
<td><bold>94.8</bold></td>
<td>96.4</td>
<td>93.9</td>
<td>90.8</td>
<td>92.6</td>
<td><bold>96.6</bold></td>
<td>94.7</td>
<td>94.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-94"><mml:math id="mml-ieqn-94"><mml:mtext>C2FERE</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-64">64</xref>]</td>
<td><bold>91.7</bold></td>
<td><bold>94.2</bold></td>
<td>94.2</td>
<td>96.1</td>
<td>93.6</td>
<td><bold>91.7</bold></td>
<td>92.4</td>
<td>96.2</td>
<td>95.3</td>
<td>94.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-95"><mml:math id="mml-ieqn-95"><mml:mtext>ERGM</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-71">71</xref>]</td>
<td>90.9</td>
<td>93.4</td>
<td>93.1</td>
<td>95.7</td>
<td>90.1</td>
<td>90.3</td>
<td>91.5</td>
<td>95.1</td>
<td>94.0</td>
<td>92.5</td>
</tr>
<tr>
<td><inline-formula id="ieqn-96"><mml:math id="mml-ieqn-96"><mml:mtext>MTG</mml:mtext></mml:math></inline-formula> [<xref ref-type="bibr" rid="ref-75">75</xref>]</td>
<td>90.6</td>
<td>93.6</td>
<td>94.4</td>
<td><bold>97.8</bold></td>
<td>92.4</td>
<td>89.2</td>
<td>92.0</td>
<td>96.5</td>
<td><bold>95.9</bold></td>
<td><bold>95.4</bold></td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Additionally, the table filling model produces results just below those of the parameter-sharing model, which can be explained by its need to enumerate all possible entity-relation pairs and construct a relation matrix for each triplet. While this approach performs well in more complex environments, it inevitably increases computational demand and thus reduces efficiency. For sentences with fewer triplets, the tagging model demonstrates relatively stable performance; however, when <inline-formula id="ieqn-97"><mml:math id="mml-ieqn-97"><mml:mi>N</mml:mi><mml:mo>&#x003E;</mml:mo><mml:mn>5</mml:mn></mml:math></inline-formula>, its performance declines significantly. This suggests that, in scenarios involving multiple triplets, the tagging model struggles to fully exploit contextual information, revealing its limitations on complex sentences. The sequence-to-sequence models generally perform well, with RFBFN standing out in particular when <inline-formula id="ieqn-98"><mml:math id="mml-ieqn-98"><mml:mi>N</mml:mi><mml:mo>&#x003C;</mml:mo><mml:mn>3</mml:mn></mml:math></inline-formula>, where it outperforms the other models.</p>
</sec>
</sec>
<sec id="s7">
<label>7</label>
<title>Current State-of-the-Art &#x0026; Trends</title>
<p>Despite the significant advancements achieved in accuracy and efficiency by triplet joint extraction models, their limitations remain noteworthy. Primarily, the issue of computational complexity often restricts their practical application. <xref ref-type="table" rid="table-8">Table 8</xref> shows the computational complexity and applicable scenarios of different relation triplet extraction techniques; particularly when handling large-scale data, the training and inference processes of these models can consume substantial computational resources, resulting in extended response times. Additionally, concerns regarding scalability arise, as many existing methods struggle to maintain consistent performance in the face of an ever-increasing variety of entities and relation types. More critically, in resource-constrained environments, the performance of these models may decline markedly, given that their updates typically necessitate extensive annotated data and computational resources.</p>
<table-wrap id="table-8">
<label>Table 8</label>
<caption>
<title>Computational complexity and applicable scenarios of relation triplet extraction techniques, where <inline-formula id="ieqn-99"><mml:math id="mml-ieqn-99"><mml:mi>n</mml:mi></mml:math></inline-formula> is the number of entities in the text</title>
</caption>
<table frame="hsides">
<colgroup>
<col/>
<col/>
<col width="75mm"/>
</colgroup>
<thead>
<tr>
<th>Methods</th>
<th>Computational complexity</th>
<th>Applicable scenarios</th>
</tr>
</thead>
<tbody>
<tr>
<td>Table filling</td>
<td>O<inline-formula id="ieqn-100"><mml:math id="mml-ieqn-100"><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
<td>Structured output, suitable for text data with fixed formats or structures.</td>
</tr>
<tr>
<td>Tagging</td>
<td>O<inline-formula id="ieqn-101"><mml:math id="mml-ieqn-101"><mml:mo stretchy="false">(</mml:mo><mml:mi>n</mml:mi><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x223C;</mml:mo></mml:math></inline-formula>O<inline-formula id="ieqn-102"><mml:math id="mml-ieqn-102"><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
<td>Extracts relationships step by step; well suited to simple relation extraction tasks.</td>
</tr>
<tr>
<td>Sequence-to-sequence</td>
<td>O<inline-formula id="ieqn-103"><mml:math id="mml-ieqn-103"><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy="false">)</mml:mo><mml:mo>&#x223C;</mml:mo></mml:math></inline-formula>O<inline-formula id="ieqn-104"><mml:math id="mml-ieqn-104"><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mn>3</mml:mn></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
<td>Applicable to joint extraction of long texts and complex relationships.</td>
</tr>
<tr>
<td>Parameter sharing</td>
<td>O<inline-formula id="ieqn-105"><mml:math id="mml-ieqn-105"><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mn>2</mml:mn></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula><inline-formula id="ieqn-106"><mml:math id="mml-ieqn-106"><mml:mo>&#x223C;</mml:mo></mml:math></inline-formula>O<inline-formula id="ieqn-107"><mml:math id="mml-ieqn-107"><mml:mo stretchy="false">(</mml:mo><mml:msup><mml:mi>n</mml:mi><mml:mn>3</mml:mn></mml:msup><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula></td>
<td>Shares features across subtasks in multi-task learning to enhance overall model performance.</td>
</tr>
</tbody>
</table>
</table-wrap>
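<p>As an illustrative sketch (not taken from any of the surveyed models), the complexity gap in the table above can be made concrete by counting decision points: a table-filling decoder scores one cell per (relation, head token, tail token) triple, giving the quadratic term, while a single tagging pass makes one label decision per token. The function and variable names below are hypothetical.</p>

```python
# Illustrative sketch: why table filling is O(n^2) per relation
# while a single tagging pass is O(n). Names are hypothetical.

def table_filling_cells(tokens, relations):
    """Enumerate every candidate cell a table-filling model must score:
    one per (relation, head token, tail token), i.e. |R| * n * n cells."""
    cells = []
    for rel in relations:
        for i, _head in enumerate(tokens):
            for j, _tail in enumerate(tokens):
                cells.append((rel, i, j))  # one score per cell
    return cells

def tagging_steps(tokens):
    """A single sequence-tagging pass: one label decision per token."""
    return [(i, tok) for i, tok in enumerate(tokens)]

tokens = ["Obama", "was", "born", "in", "Hawaii"]
relations = ["birthplace", "employer"]
print(len(table_filling_cells(tokens, relations)))  # 2 * 5 * 5 = 50 cells
print(len(tagging_steps(tokens)))                   # 5 decisions
```

<p>The contrast explains the trade-off noted above: the quadratic (or cubic, with relation-specific tables) enumeration gives table filling its robustness on overlapping triplets, at the cost of compute that grows rapidly with sentence length.</p>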
<p>In addition, as the complexity of relation triplet extraction models continues to increase, the computational resources required for training these models, along with their environmental impact, have become increasingly critical concerns. The training of large-scale deep learning models typically relies on high-performance computing hardware, such as GPUs or TPUs, which consume substantial amounts of electricity and contribute significantly to carbon emissions. Particularly when dealing with vast amounts of textual data, the training process can extend over several days or even weeks, further escalating energy consumption and the environmental burden [<xref ref-type="bibr" rid="ref-76">76</xref>]. Moreover, in order to achieve high model performance, researchers often depend on large datasets and distributed computing frameworks, which not only place greater demands on computational power but also increase energy consumption associated with data storage and transmission. Given the growing importance of sustainability, finding ways to optimize the use of computational resources while reducing environmental impact, without compromising model performance, has emerged as a key research direction in the field.</p>
<p>In recent years, Transformer-based architectures such as BERT [<xref ref-type="bibr" rid="ref-77">77</xref>] and GPT-3 have demonstrated immense potential in the field of relational triplet extraction. BERT, with its bidirectional encoder and pretraining-finetuning framework, effectively captures contextual information, thereby enhancing the accuracy and generalization capability of relation extraction tasks. On the other hand, GPT-3 leverages its powerful generative capacity and few-shot learning advantage, allowing it to excel in complex and low-resource environments. Both models combine robust language modeling abilities and exhibit unique strengths when dealing with intricate sentence structures and multiple relations. Furthermore, the integration of technologies like Graph Neural Networks (GNN) can further enhance model performance, thereby positioning Transformer-based architectures as highly promising and valuable in the context of relational triplet extraction tasks, with significant potential for both application and research advancement.</p>
<p>On the application front, the potential of joint relation triplet extraction technology is vast. It significantly improves answer accuracy in intelligent question-answering systems, thereby serving as a crucial foundation for information extraction and knowledge graph construction. In personalized recommendation systems, it enhances user experience, enabling more precise recommendations. Furthermore, in dialogue systems, it bolsters context comprehension, resulting in more natural and fluid exchanges. In the realm of social network analysis, this technology aids in identifying interactions among users, providing valuable data support for marketing strategies [<xref ref-type="bibr" rid="ref-78">78</xref>]. In biomedical fields, it contributes to drug development by extracting key relations that drive scientific discoveries. Collectively, these applications not only accelerate the rapid advancement of relation triplet extraction technology but also lay a vital technological foundation for the intelligent transformation of various industries. As the field evolves, an increasing array of innovative applications is anticipated, highlighting its profound societal value and academic significance.</p>
</sec>
<sec id="s8">
<label>8</label>
<title>Outlook of Future Works</title>
<p>The future development prospects of joint relation triplet extraction undoubtedly brim with vast potential. On the technical front, the growing emphasis on interpretability and transparency is increasingly vital; this focus not only fosters user trust in the decision-making processes of models but also deepens the understanding of deep learning mechanisms. Simultaneously, the emergence of multimodal learning enables models to integrate diverse data sources such as text, images, and videos, thus enhancing their performance in navigating complex scenarios [<xref ref-type="bibr" rid="ref-79">79</xref>]. Furthermore, the incorporation of continuous learning capabilities is poised to become a crucial factor, allowing models to self-update and adapt flexibly to the rapidly changing landscape of knowledge and environments. In addition, combining few-shot learning and self-supervised learning with relation triplet joint extraction techniques has the potential to significantly enhance the generalization capabilities of relation extraction systems. This is particularly true in low-resource scenarios, where such approaches can improve the scalability of models, enabling them to perform effectively even when labeled data is scarce or difficult to obtain. Additionally, the adaptation to cross-domain and cross-language contexts is likely to emerge as a focal point of research, thereby broadening the applicability of these models across various fields and languages [<xref ref-type="bibr" rid="ref-80">80</xref>].</p>
<p>In terms of applications, the technology of relation triplet extraction is set to intertwine more closely with the construction of knowledge graphs, facilitating ongoing advancements in intelligent question-answering, personalized recommendations, social network analysis, and biomedical information extraction. This synergy is expected not only to optimize user experiences and enhance the efficiency of information retrieval and decision support but also to drive the intelligent transformation of sectors such as corporate decision-making, marketing, and scientific research. As technology continues to evolve and application scenarios diversify, relation triplet extraction is positioned to reveal its profound social value while igniting new avenues for exploration and innovation within academic research, thus underscoring its pivotal role in advancing the construction of a smart society.</p>
</sec>
<sec id="s9">
<label>9</label>
<title>Conclusion</title>
<p>This paper provides a brief review of joint extraction technologies for relational triples, categorizing them into two major approaches: joint decoding and parameter sharing. Joint decoding methods can be further divided into three types: table filling, tagging, and sequence-to-sequence. While joint decoding is effective for extracting triples from short sequence texts and allows for easier overall task model adjustments, its performance diminishes when dealing with long sequence texts and complex texts. On the other hand, parameter sharing methods enable the two subtasks of relational triple extraction to share some or all of the parameters, thereby reducing redundancy in model parameters. However, modifying shared parameters affects all tasks, thus limiting the model&#x2019;s flexibility. With the continuous advancement of deep learning and natural language processing technologies, future research directions primarily focus on optimizing both the performance of joint decoding and parameter sharing methods across different text scenarios. Additionally, enhancing the extraction of cross-document relational triples and exploring few-shot learning and transfer learning will also be crucial for improving adaptability in low-resource languages and domains.</p>
</sec>
</body>
<back>
<ack>
<p>The authors are grateful to all the editors and anonymous reviewers for their comments and suggestions and thank all the members who have contributed to this work with us.</p>
</ack>
<sec><title>Funding Statement</title>
<p>The research leading to these results received funding from Key Areas Science and Technology Research Plan of Xinjiang Production And Construction Corps Financial Science and Technology Plan Project under Grant Agreement No. 2023AB048 for the project: Research and Application Demonstration of Data-driven Elderly Care System.</p>
</sec>
<sec><title>Author Contributions</title>
<p>The authors confirm contribution to the paper as follows: study conception and design: Chenglong Mi; data collection: Chenglong Mi; analysis and interpretation of results: Chenglong Mi; draft manuscript preparation: Chenglong Mi; manuscript guidance and revision: Huaibin Qin, Quan Qi, Pengxiang Zuo. All authors reviewed the results and approved the final version of the manuscript.</p>
</sec>
<sec sec-type="data-availability"><title>Availability of Data and Materials</title>
<p>Not applicable.</p>
</sec>
<sec><title>Ethics Approval</title>
<p>Not applicable.</p>
</sec>
<sec sec-type="COI-statement"><title>Conflicts of Interest</title>
<p>The authors declare no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Tjong Kim Sang</surname> <given-names>EF</given-names></string-name>, <string-name><surname>De Meulder</surname> <given-names>F</given-names></string-name></person-group>. <article-title>Introduction to the CoNLL-2003 shared task: language-independent named entity recognition</article-title>. In: <conf-name>Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003</conf-name>; <year>2003</year>. p. <fpage>142</fpage>&#x2013;<lpage>7</lpage>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Ratinov</surname> <given-names>L</given-names></string-name>, <string-name><surname>Roth</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Design challenges and misconceptions in named entity recognition</article-title>. In: <conf-name>Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)</conf-name>; <year>2009 Jun</year>; <publisher-loc>Boulder, CO, USA</publisher-loc>. p. <fpage>147</fpage>&#x2013;<lpage>55</lpage>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zelenko</surname> <given-names>D</given-names></string-name>, <string-name><surname>Aone</surname> <given-names>C</given-names></string-name>, <string-name><surname>Richardella</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Kernel methods for relation extraction</article-title>. <source>J Mach Learn Res</source>. <year>2003 Feb</year>;<volume>3</volume>(<issue>3</issue>):<fpage>1083</fpage>&#x2013;<lpage>106</lpage>. doi:<pub-id pub-id-type="doi">10.3115/1118693.1118703</pub-id>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Bunescu</surname> <given-names>R</given-names></string-name>, <string-name><surname>Mooney</surname> <given-names>R</given-names></string-name></person-group>. <article-title>A shortest path dependency kernel for relation extraction</article-title>. In: <conf-name>Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing</conf-name>; <year>2005 Oct</year>; <publisher-loc>Vancouver, BC, Canada</publisher-loc>. p. <fpage>724</fpage>&#x2013;<lpage>31</lpage>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Nadeau</surname> <given-names>D</given-names></string-name>, <string-name><surname>Sekine</surname> <given-names>S</given-names></string-name></person-group>. <article-title>A survey of named entity recognition and classification</article-title>. <source>Lingvist Investig</source>. <year>2007</year>;<volume>30</volume>(<issue>1</issue>):<fpage>3</fpage>&#x2013;<lpage>26</lpage>. doi:<pub-id pub-id-type="doi">10.1075/li.30.1.03nad</pub-id>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Chan</surname> <given-names>YS</given-names></string-name>, <string-name><surname>Roth</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Exploiting syntactico-semantic structures for relation extraction</article-title>. In: <conf-name>Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies</conf-name>; <year>2011 Jun</year>; <publisher-loc>Portland, OR, USA</publisher-loc>. p. <fpage>551</fpage>&#x2013;<lpage>60</lpage>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Ji</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Incremental joint extraction of entity mentions and relations</article-title>. In: <conf-name>Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL 2014)</conf-name>; <year>2014</year>; <publisher-loc>Baltimore, MD, USA</publisher-loc>. p. <fpage>402</fpage>&#x2013;<lpage>12</lpage>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Yin</surname> <given-names>B</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Entity relation extraction method based on fusion of multiple information and attention mechanism</article-title>. In: <conf-name>2020 IEEE 6th International Conference on Computer and Communications (ICCC)</conf-name>; <year>2020</year>. p. <fpage>2485</fpage>&#x2013;<lpage>90</lpage>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Nayak</surname> <given-names>T</given-names></string-name>, <string-name><surname>Majumder</surname> <given-names>N</given-names></string-name>, <string-name><surname>Goyal</surname> <given-names>P</given-names></string-name>, <string-name><surname>Poria</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Deep neural approaches to relation triplets extraction: a comprehensive survey</article-title>. <source>Cognit Comput</source>. <year>2021</year>;<volume>13</volume>(<issue>5</issue>):<fpage>1215</fpage>&#x2013;<lpage>32</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s12559-021-09917-7</pub-id>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Hendrickx</surname> <given-names>I</given-names></string-name>, <string-name><surname>Kim</surname> <given-names>SN</given-names></string-name>, <string-name><surname>Kozareva</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Nakov</surname> <given-names>P</given-names></string-name>, <string-name><surname>S&#x00E9;aghdha</surname> <given-names>D&#x00D3;</given-names></string-name>, <string-name><surname>Pad&#x00F3;</surname> <given-names>S</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>SemEval-2010 Task 8: multi-way classification of semantic relations between pairs of nominals</article-title>. In: <conf-name>Proceedings of the 5th International Workshop on Semantic Evaluation: Recent Achievements and Future Directions (SEW-2009)</conf-name>; <year>2009</year>; <publisher-loc>Boulder, CO, USA</publisher-loc>. p. <fpage>94</fpage>&#x2013;<lpage>9</lpage>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Walker</surname> <given-names>C</given-names></string-name>, <string-name><surname>Strassel</surname> <given-names>S</given-names></string-name>, <string-name><surname>Medero</surname> <given-names>J</given-names></string-name>, <string-name><surname>Maeda</surname> <given-names>K</given-names></string-name></person-group>. <source>ACE 2005 multilingual training corpus</source>. <publisher-loc>Philadelphia, PA, USA</publisher-loc>: <publisher-name>Linguistic Data Consortium</publisher-name>; <year>2006</year>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Riedel</surname> <given-names>S</given-names></string-name>, <string-name><surname>Yao</surname> <given-names>L</given-names></string-name>, <string-name><surname>McCallum</surname> <given-names>A</given-names></string-name></person-group>. <article-title>Modeling relations and their mentions without labeled text</article-title>. In: <conf-name>Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD 2010</conf-name>; <year>2010</year>; <publisher-loc>Barcelona, Spain</publisher-loc>. p. <fpage>148</fpage>&#x2013;<lpage>63</lpage>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Gardent</surname> <given-names>C</given-names></string-name>, <string-name><surname>Shimorina</surname> <given-names>A</given-names></string-name>, <string-name><surname>Narayan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Perez-Beltrachini</surname> <given-names>L</given-names></string-name></person-group>. <article-title>Creating training corpora for nlg micro-planners</article-title>. In: <conf-name>Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</conf-name>; <year>2017</year>. p. <fpage>179</fpage>&#x2013;<lpage>88</lpage>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>P</given-names></string-name>, <string-name><surname>Guo</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>F</given-names></string-name>, <string-name><surname>Li</surname> <given-names>G</given-names></string-name></person-group>. <article-title>Chinese named entity recognition: the state of the art</article-title>. <source>Neurocomputing</source>. <year>2022</year>;<volume>473</volume>(<issue>1</issue>):<fpage>37</fpage>&#x2013;<lpage>53</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.neucom.2021.10.101</pub-id>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>B</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>T</given-names></string-name>, <string-name><surname>Zhu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>L</given-names></string-name></person-group>. <article-title>TPLinker: single-stage joint extraction of entities and relations through token pair linking</article-title>. In: <conf-name>Proceedings of the 28th International Conference on Computational Linguistics</conf-name>; <year>2020 Dec</year>; <publisher-loc>Barcelona, Spain</publisher-loc>. p. <fpage>1572</fpage>&#x2013;<lpage>82</lpage>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>C</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>H</given-names></string-name>, <string-name><surname>Li</surname> <given-names>L</given-names></string-name>, <string-name><surname>Yan</surname> <given-names>J</given-names></string-name></person-group>. <article-title>UniRE: a unified label space for entity relation extraction</article-title>. In: <conf-name>Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)</conf-name>; <year>2021 Aug</year>; <publisher-loc>Online</publisher-loc>. p. <fpage>220</fpage>&#x2013;<lpage>31</lpage>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Hu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>J</given-names></string-name>, <string-name><surname>Li</surname> <given-names>M</given-names></string-name>, <string-name><surname>Yuan</surname> <given-names>L</given-names></string-name>, <string-name><surname>Zou</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Hybrid deep relation matrix bidirectional approach for relational triple extraction</article-title>. In: <conf-name>2024 27th International Conference on Computer Supported Cooperative Work in Design (CSCWD)</conf-name>; <year>2024</year>. p. <fpage>1728</fpage>&#x2013;<lpage>33</lpage>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Ren</surname> <given-names>F</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Yin</surname> <given-names>S</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>X</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>S</given-names></string-name>, <string-name><surname>Li</surname> <given-names>B</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>A novel global feature-oriented relational triple extraction model based on table filling</article-title>. In: <conf-name>Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</conf-name>; <year>2021 Nov</year>; <publisher-loc>Punta Cana, Dominican Republic</publisher-loc>. p. <fpage>2646</fpage>&#x2013;<lpage>56</lpage>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Yan</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>C</given-names></string-name>, <string-name><surname>Fu</surname> <given-names>J</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Wei</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>A partition filter network for joint entity and relation extraction</article-title>. In: <conf-name>Proceedings of the 30th International Conference on Information and Knowledge Management (CIKM)</conf-name>; <year>2021</year>; <publisher-loc>Punta Cana, Dominican Republic</publisher-loc>. p. <fpage>185</fpage>&#x2013;<lpage>97</lpage>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>J</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>S</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>B</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Li</surname> <given-names>N</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>T</given-names></string-name></person-group>. <article-title>Attention as relation: learning supervised multi-head self-attention for relation extraction</article-title>. In: <conf-name>Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence</conf-name>; <year>2021</year>. p. <fpage>3787</fpage>&#x2013;<lpage>93</lpage>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Shang</surname> <given-names>Y-M</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>X</given-names></string-name></person-group>. <article-title>OneRel: joint entity and relation extraction with one module in one step</article-title>. <source>Proc AAAI Conf Artif Intell</source>. <year>2022</year>;<volume>36</volume>(<issue>10</issue>):<fpage>11285</fpage>&#x2013;<lpage>93</lpage>. doi:<pub-id pub-id-type="doi">10.1609/aaai.v36i10.21379</pub-id>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Lu</surname> <given-names>W</given-names></string-name></person-group>. <article-title>Two are better than one: joint entity and relation extraction with table-sequence encoders</article-title>. In: <conf-name>Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</conf-name>; <year>2020 Nov</year>. p. <fpage>1706</fpage>&#x2013;<lpage>21</lpage>.</mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Ning</surname> <given-names>J</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>H</given-names></string-name></person-group>. <article-title>OD-RTE: a one-stage object detection framework for relational triple extraction</article-title>. In: <conf-name>Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</conf-name>; <year>2023 Jul</year>; <publisher-loc>Toronto, ON, Canada</publisher-loc>. p. <fpage>11120</fpage>&#x2013;<lpage>35</lpage>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Li</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Relational prompt-based single-module single-step model for relational triple extraction</article-title>. <source>J King Saud Univ-Comput Inf Sci</source>. <year>2023</year>;<volume>35</volume>:<fpage>101748</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.jksuci.2023.101748</pub-id>.</mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>D</given-names></string-name>, <string-name><surname>Ji</surname> <given-names>F</given-names></string-name></person-group>. <article-title>A span-based model for joint entity and relation extraction with relational graphs</article-title>. In: <conf-name>2020 IEEE International Conference on Parallel &#x0026; Distributed Processing with Applications, Big Data &#x0026; Cloud Computing, Sustainable Computing &#x0026; Communications, Social Computing &#x0026; Networking (ISPA/BDCloud/SocialCom/SustainCom)</conf-name>; <year>2020</year>; <publisher-loc>Exeter, UK</publisher-loc>. p. <fpage>513</fpage>&#x2013;<lpage>20</lpage>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Han</surname> <given-names>D</given-names></string-name>, <string-name><surname>Zheng</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>H</given-names></string-name>, <string-name><surname>Feng</surname> <given-names>S</given-names></string-name>, <string-name><surname>Pang</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Span-based single-stage joint entity-relation extraction model</article-title>. <source>PLoS One</source>. <year>2023 Feb</year>;<volume>18</volume>(<issue>2</issue>):<fpage>1</fpage>&#x2013;<lpage>14</lpage>. doi:<pub-id pub-id-type="doi">10.1371/journal.pone.0281055</pub-id>; <pub-id pub-id-type="pmid">36749758</pub-id></mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Tian</surname> <given-names>X</given-names></string-name>, <string-name><surname>Jing</surname> <given-names>L</given-names></string-name>, <string-name><surname>He</surname> <given-names>L</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>F</given-names></string-name></person-group>. <article-title>StereoRel: relational triple extraction from a stereoscopic perspective</article-title>. In: <conf-name>Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)</conf-name>; <year>2021 Aug</year>. p. <fpage>4851</fpage>&#x2013;<lpage>61</lpage>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Tang</surname> <given-names>W</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>B</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liao</surname> <given-names>Y</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Unified representation and interaction for joint relational triple extraction</article-title>. In: <conf-name>Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing</conf-name>; <year>2022 Dec</year>; <publisher-loc>Abu Dhabi, United Arab Emirates</publisher-loc>. p. <fpage>7087</fpage>&#x2013;<lpage>99</lpage>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Nie</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zheng</surname> <given-names>W</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Li</surname> <given-names>X</given-names></string-name></person-group>. <article-title>A novel tensor learning model for joint relational triplet extraction</article-title>. <source>IEEE Trans Cybern</source>. <year>2024 Apr</year>;<volume>54</volume>(<issue>4</issue>):<fpage>2483</fpage>&#x2013;<lpage>94</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TCYB.2023.3265851</pub-id>; <pub-id pub-id-type="pmid">37099469</pub-id></mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zheng</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wen</surname> <given-names>R</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>X</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Z</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Potential relation and global correspondence based joint relational triple extraction</article-title>. In: <conf-name>Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)</conf-name>; <year>2021 Aug</year>. p. <fpage>6225</fpage>&#x2013;<lpage>35</lpage>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cheng</surname> <given-names>J</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>T</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Ren</surname> <given-names>H</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>G</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>X</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>A cascade dual-decoder model for joint entity and relation extraction</article-title>. <source>IEEE Trans Emerg Top Comput Intell</source>. <year>2024 Jun</year>;<fpage>1</fpage>&#x2013;<lpage>13</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TETCI.2024.3406440</pub-id>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Qiao</surname> <given-names>B</given-names></string-name>, <string-name><surname>Zou</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Fang</surname> <given-names>K</given-names></string-name>, <string-name><surname>Zhu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>A joint model for entity and relation extraction based on BERT</article-title>. <source>Neural Comput Appl</source>. <year>2022 Mar</year>;<volume>34</volume>(<issue>5</issue>):<fpage>3471</fpage>&#x2013;<lpage>81</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s00521-021-05815-z</pub-id>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wei</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Su</surname> <given-names>J</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Tian</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Chang</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>A novel cascade binary tagging framework for relational triple extraction</article-title>. In: <conf-name>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</conf-name>; <year>2020 Jul</year>. p. <fpage>1476</fpage>&#x2013;<lpage>88</lpage>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Yuan</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>X</given-names></string-name>, <string-name><surname>Pan</surname> <given-names>S</given-names></string-name>, <string-name><surname>Zhu</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Song</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Guo</surname> <given-names>L</given-names></string-name></person-group>. <article-title>A relation-specific attention network for joint entity and relation extraction</article-title>. In: <conf-name>Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence</conf-name>; <year>2021</year>. p. <fpage>4054</fpage>&#x2013;<lpage>60</lpage>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Ren</surname> <given-names>F</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>X</given-names></string-name>, <string-name><surname>Yin</surname> <given-names>S</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>S</given-names></string-name>, <string-name><surname>Li</surname> <given-names>B</given-names></string-name></person-group>. <article-title>A simple but effective bidirectional extraction framework for relational triple extraction</article-title>. <comment>arXiv:2112.04940v2. 2021</comment>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Xu</surname> <given-names>M</given-names></string-name>, <string-name><surname>Pi</surname> <given-names>D</given-names></string-name>, <string-name><surname>Cao</surname> <given-names>J</given-names></string-name>, <string-name><surname>Yuan</surname> <given-names>S</given-names></string-name></person-group>. <article-title>A novel entity joint annotation relation extraction model</article-title>. <source>Appl Intell</source>. <year>2022</year>;<volume>52</volume>(<issue>11</issue>):<fpage>12754</fpage>&#x2013;<lpage>70</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10489-021-03002-0</pub-id>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chen</surname> <given-names>T</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>N</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Joint entity and relation extraction with position-aware attention and relation embedding</article-title>. <source>Appl Soft Comput</source>. <year>2022</year>;<volume>119</volume>:<fpage>108604</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.asoc.2022.108604</pub-id>.</mixed-citation></ref>
<ref id="ref-38"><label>[38]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hang</surname> <given-names>T</given-names></string-name>, <string-name><surname>Feng</surname> <given-names>J</given-names></string-name>, <string-name><surname>Yan</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Lu</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Joint extraction of entities and relations using multilabel tagging and relational alignment</article-title>. <source>Neural Comput Appl</source>. <year>2022</year>;<volume>34</volume>(<issue>8</issue>):<fpage>1</fpage>&#x2013;<lpage>16</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s00521-021-06685-1</pub-id>.</mixed-citation></ref>
<ref id="ref-39"><label>[39]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>N</given-names></string-name>, <string-name><surname>Deng</surname> <given-names>S</given-names></string-name>, <string-name><surname>Ye</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>W</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Robust triple extraction with cascade bidirectional capsule network</article-title>. <source>Expert Syst Appl</source>. <year>2022</year>;<volume>187</volume>:<fpage>115806</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.eswa.2021.115806</pub-id>.</mixed-citation></ref>
<ref id="ref-40"><label>[40]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Shi</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Synchronous dual network with cross-type attention for joint entity and relation extraction</article-title>. In: <conf-name>Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</conf-name>; <year>2021 Nov</year>; <publisher-loc>Punta Cana, Dominican Republic</publisher-loc>. p. <fpage>2769</fpage>&#x2013;<lpage>79</lpage>.</mixed-citation></ref>
<ref id="ref-41"><label>[41]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Nayak</surname> <given-names>T</given-names></string-name>, <string-name><surname>Ng</surname> <given-names>HT</given-names></string-name></person-group>. <article-title>Effective modeling of encoder-decoder architecture for joint entity and relation extraction</article-title>. <source>Proc AAAI Conf Artif Intell</source>. <year>2020 Apr</year>;<volume>34</volume>(<issue>5</issue>):<fpage>8528</fpage>&#x2013;<lpage>35</lpage>. doi:<pub-id pub-id-type="doi">10.1609/aaai.v34i05.6374</pub-id>.</mixed-citation></ref>
<ref id="ref-42"><label>[42]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zeng</surname> <given-names>X</given-names></string-name>, <string-name><surname>Zeng</surname> <given-names>D</given-names></string-name>, <string-name><surname>He</surname> <given-names>S</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>K</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Extracting relational facts by an end-to-end neural model with copy mechanism</article-title>. In: <conf-name>Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</conf-name>; <year>2018 Jul</year>; <publisher-loc>Melbourne, Australia</publisher-loc>. p. <fpage>506</fpage>&#x2013;<lpage>14</lpage>.</mixed-citation></ref>
<ref id="ref-43"><label>[43]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sui</surname> <given-names>D</given-names></string-name>, <string-name><surname>Zeng</surname> <given-names>X</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>K</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Joint entity and relation extraction with set prediction networks</article-title>. <source>IEEE Trans Neural Netw Learn Syst</source>. <year>2024 Sep</year>;<volume>35</volume>(<issue>9</issue>):<fpage>12784</fpage>&#x2013;<lpage>95</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TNNLS.2023.3264735</pub-id>; <pub-id pub-id-type="pmid">37067968</pub-id></mixed-citation></ref>
<ref id="ref-44"><label>[44]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhao</surname> <given-names>K</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Cheng</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Li</surname> <given-names>X</given-names></string-name>, <string-name><surname>Gao</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction</article-title>. <source>Knowl Based Syst</source>. <year>2021</year>;<volume>219</volume>:<fpage>106888</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.knosys.2021.106888</pub-id>.</mixed-citation></ref>
<ref id="ref-45"><label>[45]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>RH</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Fan</surname> <given-names>AX</given-names></string-name>, <string-name><surname>Ji</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zeng</surname> <given-names>D</given-names></string-name>, <string-name><surname>Cheng</surname> <given-names>F</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Minimize exposure bias of Seq2Seq models in joint entity and relation extraction</article-title>. In: <conf-name>Findings of the Association for Computational Linguistics: EMNLP 2020</conf-name>; <year>2020 Nov</year>. p. <fpage>236</fpage>&#x2013;<lpage>46</lpage>.</mixed-citation></ref>
<ref id="ref-46"><label>[46]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Hu</surname> <given-names>P</given-names></string-name></person-group>. <article-title>BTDM: a bi-directional translating decoding model-based relational triple extraction</article-title>. <source>Appl Sci</source>. <year>2023</year>;<volume>13</volume>(<issue>7</issue>):<fpage>4447</fpage>. doi:<pub-id pub-id-type="doi">10.3390/app13074447</pub-id>.</mixed-citation></ref>
<ref id="ref-47"><label>[47]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>H</given-names></string-name>, <string-name><surname>van Genabith</surname> <given-names>J</given-names></string-name>, <string-name><surname>Xiong</surname> <given-names>D</given-names></string-name>, <string-name><surname>Zan</surname> <given-names>H</given-names></string-name></person-group>. <article-title>JoinER-BART: joint entity and relation extraction with constrained decoding, representation reuse and fusion</article-title>. <source>IEEE/ACM Trans Audio Speech Lang Process</source>. <year>2023</year>;<volume>31</volume>:<fpage>3603</fpage>&#x2013;<lpage>16</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TASLP.2023.3310879</pub-id>.</mixed-citation></ref>
<ref id="ref-48"><label>[48]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Ye</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>N</given-names></string-name>, <string-name><surname>Deng</surname> <given-names>S</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>M</given-names></string-name>, <string-name><surname>Tan</surname> <given-names>C</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>F</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Contrastive triple extraction with generative transformer</article-title>. <source>Proc AAAI Conf Artif Intell</source>. <year>2021 May</year>;<volume>35</volume>(<issue>16</issue>):<fpage>14257</fpage>&#x2013;<lpage>65</lpage>. doi:<pub-id pub-id-type="doi">10.1609/aaai.v35i16.17677</pub-id>.</mixed-citation></ref>
<ref id="ref-49"><label>[49]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>T</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>W</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>T</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>C</given-names></string-name>, <string-name><surname>Liang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Joint entity and relation extraction with fusion of multi-feature semantics</article-title>. <source>J Intell Inform Syst</source>. <year>2024</year>;<volume>61</volume>(<issue>5</issue>):<fpage>1</fpage>&#x2013;<lpage>22</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10844-024-00871-y</pub-id>.</mixed-citation></ref>
<ref id="ref-50"><label>[50]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Chen</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Hu</surname> <given-names>C</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Jointly extracting explicit and implicit relational triples with reasoning pattern enhanced binary pointer network</article-title>. In: <conf-name>Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</conf-name>; <year>2021 Jun</year>. p. <fpage>5694</fpage>&#x2013;<lpage>703</lpage>.</mixed-citation></ref>
<ref id="ref-51"><label>[51]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Tan</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Shen</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Hu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>W</given-names></string-name>, <string-name><surname>Cheng</surname> <given-names>X</given-names></string-name>, <string-name><surname>Lu</surname> <given-names>W</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Query-based instance discrimination network for relational triple extraction</article-title>. In: <conf-name>Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing</conf-name>; <year>2022 Dec</year>; <publisher-loc>Abu Dhabi, United Arab Emirates</publisher-loc>. p. <fpage>7677</fpage>&#x2013;<lpage>90</lpage>.</mixed-citation></ref>
<ref id="ref-52"><label>[52]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Ng</surname> <given-names>P</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Xiang</surname> <given-names>B</given-names></string-name></person-group>. <article-title>REKnow: enhanced knowledge for joint entity and relation extraction</article-title>. <comment>arXiv:2206.05123. 2022</comment>.</mixed-citation></ref>
<ref id="ref-53"><label>[53]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Shang</surname> <given-names>Y-M</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>X</given-names></string-name>, <string-name><surname>Wei</surname> <given-names>W</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>X-L</given-names></string-name></person-group>. <article-title>Relational triple extraction: one step is enough</article-title>. In: <conf-name>Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22</conf-name>; <year>2022 Jul</year>. p. <fpage>4360</fpage>&#x2013;<lpage>6</lpage>.</mixed-citation></ref>
<ref id="ref-54"><label>[54]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Fu</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>C</given-names></string-name></person-group>. <article-title>RFBFN: a relation-first blank filling network for joint relational triple extraction</article-title>. In: <conf-name>Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop</conf-name>; <year>2022 May</year>; <publisher-loc>Dublin, Ireland</publisher-loc>. p. <fpage>10</fpage>&#x2013;<lpage>20</lpage>.</mixed-citation></ref>
<ref id="ref-55"><label>[55]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Lai</surname> <given-names>T</given-names></string-name>, <string-name><surname>Cheng</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>D</given-names></string-name>, <string-name><surname>Ye</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>W</given-names></string-name></person-group>. <article-title>RMAN: relational multi-head attention neural network for joint extraction of entities and relations</article-title>. <source>Appl Intell</source>. <year>2022</year>;<volume>52</volume>(<issue>3</issue>):<fpage>3132</fpage>&#x2013;<lpage>42</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10489-021-02600-2</pub-id>.</mixed-citation></ref>
<ref id="ref-56"><label>[56]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Du</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Sequence to sequence learning for joint extraction of entities and relations</article-title>. <source>Neurocomputing</source>. <year>2022</year>;<volume>501</volume>(<issue>1</issue>):<fpage>480</fpage>&#x2013;<lpage>8</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.neucom.2022.05.074</pub-id>.</mixed-citation></ref>
<ref id="ref-57"><label>[57]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>X</given-names></string-name>, <string-name><surname>Luo</surname> <given-names>X</given-names></string-name>, <string-name><surname>Dong</surname> <given-names>C</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>D</given-names></string-name>, <string-name><surname>Luan</surname> <given-names>B</given-names></string-name>, <string-name><surname>He</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>TDEER: an efficient translating decoding schema for joint extraction of entities and relations</article-title>. In: <conf-name>Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</conf-name>; <year>2021 Nov</year>; <publisher-loc>Punta Cana, Dominican Republic</publisher-loc>. p. <fpage>8055</fpage>&#x2013;<lpage>64</lpage>.</mixed-citation></ref>
<ref id="ref-58"><label>[58]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Geng</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Han</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Joint entity and relation extraction model based on rich semantics</article-title>. <source>Neurocomputing</source>. <year>2021</year>;<volume>429</volume>(<issue>10</issue>):<fpage>132</fpage>&#x2013;<lpage>40</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.neucom.2020.12.037</pub-id>.</mixed-citation></ref>
<ref id="ref-59"><label>[59]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gao</surname> <given-names>C</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Yun</surname> <given-names>W</given-names></string-name>, <string-name><surname>Jiang</surname> <given-names>J</given-names></string-name></person-group>. <article-title>A joint extraction model of entities and relations based on relation decomposition</article-title>. <source>Int J Mach Learn Cybern</source>. <year>2022</year>;<volume>13</volume>(<issue>7</issue>):<fpage>1833</fpage>&#x2013;<lpage>45</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s13042-021-01491-6</pub-id>.</mixed-citation></ref>
<ref id="ref-60"><label>[60]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zeng</surname> <given-names>D</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Q</given-names></string-name></person-group>. <article-title>CopyMTL: copy mechanism for joint extraction of entities and relations with multi-task learning</article-title>. <source>Proc AAAI Conf Artif Intell</source>. <year>2020 Apr</year>;<volume>34</volume>(<issue>5</issue>):<fpage>9507</fpage>&#x2013;<lpage>14</lpage>. doi:<pub-id pub-id-type="doi">10.1609/aaai.v34i05.6495</pub-id>.</mixed-citation></ref>
<ref id="ref-61"><label>[61]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>R</given-names></string-name>, <string-name><surname>La</surname> <given-names>K</given-names></string-name>, <string-name><surname>Lei</surname> <given-names>J</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Ouyang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Shu</surname> <given-names>Y</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Joint extraction model of entity relations based on decomposition strategy</article-title>. <source>Sci Rep</source>. <year>2024</year>;<volume>14</volume>(<issue>1</issue>):<fpage>59</fpage>. doi:<pub-id pub-id-type="doi">10.1038/s41598-024-51559-w</pub-id>; <pub-id pub-id-type="pmid">38245548</pub-id>.</mixed-citation></ref>
<ref id="ref-62"><label>[62]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Yu</surname> <given-names>B</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Shu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>T</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>T</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>B</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>Joint extraction of entities and relations based on a novel decomposition strategy</article-title>. In: <conf-name>ECAI 2020</conf-name>; <year>2020</year>. p. <fpage>2282</fpage>&#x2013;<lpage>9</lpage>.</mixed-citation></ref>
<ref id="ref-63"><label>[63]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Sun</surname> <given-names>K</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>R</given-names></string-name>, <string-name><surname>Mensah</surname> <given-names>S</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Recurrent interaction network for jointly extracting entities and classifying relations</article-title>. In: <conf-name>Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</conf-name>; <year>2020 Nov</year>. p. <fpage>3722</fpage>&#x2013;<lpage>32</lpage>.</mixed-citation></ref>
<ref id="ref-64"><label>[64]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>M</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Qu</surname> <given-names>J</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>A</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>L</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>A coarse-to-fine framework for entity-relation joint extraction</article-title>. In: <conf-name>2024 IEEE 40th International Conference on Data Engineering (ICDE)</conf-name>; <year>2024</year>. p. <fpage>1009</fpage>&#x2013;<lpage>22</lpage>.</mixed-citation></ref>
<ref id="ref-65"><label>[65]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>X</given-names></string-name>, <string-name><surname>Kong</surname> <given-names>W</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>H-T</given-names></string-name>, <string-name><surname>Racharak</surname> <given-names>T</given-names></string-name>, <string-name><surname>Kim</surname> <given-names>K-S</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>A decoupling and aggregating framework for joint extraction of entities and relations</article-title>. <source>IEEE Access</source>. <year>2024</year>;<volume>12</volume>:<fpage>103313</fpage>&#x2013;<lpage>28</lpage>. doi:<pub-id pub-id-type="doi">10.1109/ACCESS.2024.3420877</pub-id>.</mixed-citation></ref>
<ref id="ref-66"><label>[66]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Yang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Li</surname> <given-names>X</given-names></string-name>, <string-name><surname>Li</surname> <given-names>X</given-names></string-name></person-group>. <article-title>A relation-guided attention mechanism for relational triple extraction</article-title>. In: <conf-name>2021 International Joint Conference on Neural Networks (IJCNN)</conf-name>; <year>2021</year>. p. <fpage>1</fpage>&#x2013;<lpage>8</lpage>.</mixed-citation></ref>
<ref id="ref-67"><label>[67]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Duan</surname> <given-names>G</given-names></string-name>, <string-name><surname>Miao</surname> <given-names>J</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>T</given-names></string-name>, <string-name><surname>Luo</surname> <given-names>W</given-names></string-name>, <string-name><surname>Hu</surname> <given-names>D</given-names></string-name></person-group>. <article-title>A relational adaptive neural model for joint entity and relation extraction</article-title>. <source>Front Neurorobot</source>. <year>2021</year>;<volume>15</volume>:<fpage>635492</fpage>. doi:<pub-id pub-id-type="doi">10.3389/fnbot.2021.635492</pub-id>; <pub-id pub-id-type="pmid">33796016</pub-id>.</mixed-citation></ref>
<ref id="ref-68"><label>[68]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Li</surname> <given-names>T</given-names></string-name>, <string-name><surname>He</surname> <given-names>L</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>A triple relation network for joint entity and relation extraction</article-title>. <source>Electronics</source>. <year>2022</year>;<volume>11</volume>(<issue>10</issue>):<fpage>1535</fpage>. doi:<pub-id pub-id-type="doi">10.3390/electronics11101535</pub-id>.</mixed-citation></ref>
<ref id="ref-69"><label>[69]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>C</given-names></string-name>, <string-name><surname>Li</surname> <given-names>A</given-names></string-name>, <string-name><surname>Tu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Li</surname> <given-names>C</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>X</given-names></string-name></person-group>. <article-title>An advanced BERT-based decomposition method for joint extraction of entities and relations</article-title>. In: <conf-name>2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC)</conf-name>; <year>2020</year>. p. <fpage>82</fpage>&#x2013;<lpage>8</lpage>.</mixed-citation></ref>
<ref id="ref-70"><label>[70]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>S</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Bidirectional relation-guided attention network with semantics and knowledge for relational triple extraction</article-title>. <source>Expert Syst Appl</source>. <year>2023</year>;<volume>224</volume>:<fpage>119905</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.eswa.2023.119905</pub-id>.</mixed-citation></ref>
<ref id="ref-71"><label>[71]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gao</surname> <given-names>C</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Li</surname> <given-names>LY</given-names></string-name>, <string-name><surname>Li</surname> <given-names>JH</given-names></string-name>, <string-name><surname>Zhu</surname> <given-names>R</given-names></string-name>, <string-name><surname>Du</surname> <given-names>KP</given-names></string-name>, <etal>et al.</etal></person-group> <article-title>ERGM: a multi-stage joint entity and relation extraction with global entity match</article-title>. <source>Knowl Based Syst</source>. <year>2023</year>;<volume>271</volume>:<fpage>110550</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.knosys.2023.110550</pub-id>.</mixed-citation></ref>
<ref id="ref-72"><label>[72]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sun</surname> <given-names>K</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>R</given-names></string-name>, <string-name><surname>Mensah</surname> <given-names>S</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Progressive multi-task learning with controlled information flow for joint entity and relation extraction</article-title>. <source>Proc AAAI Conf Artif Intell</source>. <year>2021 May</year>;<volume>35</volume>(<issue>15</issue>):<fpage>13851</fpage>&#x2013;<lpage>9</lpage>. doi:<pub-id pub-id-type="doi">10.1609/aaai.v35i15.17632</pub-id>.</mixed-citation></ref>
<ref id="ref-73"><label>[73]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Huang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Shang</surname> <given-names>Y-M</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>X</given-names></string-name>, <string-name><surname>Wei</surname> <given-names>W</given-names></string-name>, <string-name><surname>Mao</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Three birds, one stone: a novel translation based framework for joint entity and relation extraction</article-title>. <source>Knowl Based Syst</source>. <year>2022</year>;<volume>236</volume>:<fpage>107677</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.knosys.2021.107677</pub-id>.</mixed-citation></ref>
<ref id="ref-74"><label>[74]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zhuang</surname> <given-names>C</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>N</given-names></string-name>, <string-name><surname>Jin</surname> <given-names>X</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Deng</surname> <given-names>S</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Joint extraction of triple knowledge based on relation priority</article-title>. In: <conf-name>2020 IEEE International Conference on Parallel &#x0026; Distributed Processing with Applications, Big Data &#x0026; Cloud Computing, Sustainable Computing &#x0026; Communications, Social Computing &#x0026; Networking (ISPA/BDCloud/SocialCom/SustainCom)</conf-name>; <year>2020</year>. p. <fpage>562</fpage>&#x2013;<lpage>9</lpage>.</mixed-citation></ref>
<ref id="ref-75"><label>[75]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Chen</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Learning reasoning patterns for relational triple extraction with mutual generation of text and graph</article-title>. In: <conf-name>Findings of the Association for Computational Linguistics: ACL 2022</conf-name>; <year>2022 May</year>; <publisher-loc>Dublin, Ireland</publisher-loc>. p. <fpage>1638</fpage>&#x2013;<lpage>47</lpage>.</mixed-citation></ref>
<ref id="ref-76"><label>[76]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Lin</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Ji</surname> <given-names>H</given-names></string-name>, <string-name><surname>Huang</surname> <given-names>F</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>L</given-names></string-name></person-group>. <article-title>A joint neural model for information extraction with global features</article-title>. In: <conf-name>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</conf-name>; <year>2020 Jul</year>. p. <fpage>7999</fpage>&#x2013;<lpage>8009</lpage>.</mixed-citation></ref>
<ref id="ref-77"><label>[77]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Devlin</surname> <given-names>J</given-names></string-name>, <string-name><surname>Chang</surname> <given-names>M-W</given-names></string-name>, <string-name><surname>Lee</surname> <given-names>K</given-names></string-name>, <string-name><surname>Toutanova</surname> <given-names>K</given-names></string-name></person-group>. <article-title>BERT: pre-training of deep bidirectional transformers for language understanding</article-title>. In: <conf-name>Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</conf-name>; <year>2019 Jun</year>; <publisher-loc>Minneapolis, MN, USA</publisher-loc>. Vol. <volume>1</volume>, p. <fpage>4171</fpage>&#x2013;<lpage>86</lpage>.</mixed-citation></ref>
<ref id="ref-78"><label>[78]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Shen</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Ma</surname> <given-names>X</given-names></string-name>, <string-name><surname>Tang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Lu</surname> <given-names>W</given-names></string-name></person-group>. <article-title>A trigger-sense memory flow framework for joint entity and relation extraction</article-title>. In: <conf-name>Proceedings of the Web Conference 2021</conf-name>; <year>2021</year>; <publisher-loc>New York, NY, USA</publisher-loc>. p. <fpage>1704</fpage>&#x2013;<lpage>15</lpage>.</mixed-citation></ref>
<ref id="ref-79"><label>[79]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zhao</surname> <given-names>S</given-names></string-name>, <string-name><surname>Hu</surname> <given-names>M</given-names></string-name>, <string-name><surname>Cai</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>F</given-names></string-name></person-group>. <article-title>Modeling dense cross-modal interactions for joint entity-relation extraction</article-title>. In: <conf-name>Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20</conf-name>; <year>2020 Jul</year>. p. <fpage>4032</fpage>&#x2013;<lpage>8</lpage>.</mixed-citation></ref>
<ref id="ref-80"><label>[80]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zhong</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>D</given-names></string-name></person-group>. <article-title>A frustratingly easy approach for entity and relation extraction</article-title>. In: <conf-name>Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</conf-name>; <year>2021 Jun</year>. p. <fpage>50</fpage>&#x2013;<lpage>61</lpage>.</mixed-citation></ref>
</ref-list>
</back></article>