<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 3.0.2) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-cui-nmrg-auto-test-01" category="info" consensus="true" submissionType="IRTF" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="FALANPT">Framework and Automation Levels for AI-Assisted Network Protocol Testing</title>

    <author fullname="Yong Cui">
      <organization>Tsinghua University</organization>
      <address>
        <email>cuiyong@tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Yunze Wei">
      <organization>Tsinghua University</organization>
      <address>
        <email>wyz23@mails.tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Kaiwen Chi">
      <organization>Tsinghua University</organization>
      <address>
        <email>ckw24@mails.tsinghua.edu.cn</email>
      </address>
    </author>
    <author fullname="Xiaohui Xie">
      <organization>Tsinghua University</organization>
      <address>
        <email>xiexiaohui@tsinghua.edu.cn</email>
      </address>
    </author>

    <date year="2026" month="February" day="22"/>

    <area>Operations and Management</area>
    <workgroup>Network Management Research Group</workgroup>
    <keyword>protocol testing</keyword> <keyword>automation</keyword> <keyword>network verification</keyword>

    <abstract>



<t>This document presents an AI-assisted framework for automating the testing of network protocol implementations. The proposed framework consists of components such as protocol formalization, test case generation, test script and configuration generation, and iterative refinement through feedback mechanisms.
In addition, the document defines a set of Automation Maturity Levels for network protocol testing, ranging from fully manual procedures (Level 0) to fully autonomous and adaptive systems (Level 5), providing a structured approach to evaluating and advancing automation capabilities.
Leveraging recent advancements in artificial intelligence, particularly large language models (LLMs), the framework illustrates how AI technologies can be applied to enhance the efficiency, scalability, and consistency of protocol testing.
This document serves both as a reference architecture and as a roadmap to guide the evolution of protocol testing practices in light of emerging AI capabilities.</t>



    </abstract>

    <note title="About This Document" removeInRFC="true">
      <t>
        The latest revision of this draft can be found at <eref target="https://example.com/LATEST"/>.
        Status information for this document may be found at <eref target="https://datatracker.ietf.org/doc/draft-cui-nmrg-auto-test/"/>.
      </t>
      <t>
        Discussion of this document takes place on the
        NMRG Research Group mailing list (<eref target="mailto:nmrg@ietf.org"/>),
        which is archived at <eref target="https://datatracker.ietf.org/rg/nmrg/"/>.
        Subscribe at <eref target="https://www.ietf.org/mailman/listinfo/nmrg/"/>.
      </t>
      <t>Source for this draft and an issue tracker can be found at
        <eref target="https://github.com/USER/REPO"/>.</t>
    </note>


  </front>

  <middle>



<section anchor="introduction"><name>Introduction</name>

<t>As protocol specifications evolve at an increasing pace, traditional testing approaches that rely heavily on manual effort or protocol-specific models struggle to keep up. Protocol testing aims to validate whether a device's behavior conforms to the semantics defined by the protocol, which are typically specified in RFC documents. In recent years, emerging application domains, including the industrial Internet, low-altitude economy, modern datacenter networks, and satellite Internet, have further accelerated the emergence of proprietary or rapidly evolving protocols. This trend significantly exacerbates the difficulty of achieving comprehensive and timely protocol testing.</t>

<t>This document proposes an automated network protocol testing framework designed to reduce manual effort, improve test coverage, and adapt efficiently to evolving specifications. The framework consists of four key modules: protocol formalization, test case generation, test script and configuration generation, and feedback-based refinement. It emphasizes modularity, reuse of existing protocol knowledge, and AI-assisted processes to enable accurate, scalable, and maintainable protocol testing.</t>

<t>In addition, this document introduces six Automation Maturity Levels (Levels 0-5) to characterize the maturity of automation in network protocol testing.
These levels serve as a technology roadmap that helps researchers and practitioners assess the current capabilities of their testing systems and identify directions for future improvement. Each level captures progressively stronger capabilities in protocol formalization, orchestration, analysis, and independence from human intervention.</t>

</section>
<section anchor="definition-and-acronyms"><name>Definitions and Acronyms</name>

<t>DUT: Device Under Test</t>

<t>Tester: A network device implementing multiple network protocols to support protocol conformance and performance testing. It generates test-specific packets or traffic, emulates target network behaviors, and analyzes received packets to evaluate protocol compliance and performance.</t>

<t>LLM: Large Language Model</t>

<t>FSM: Finite State Machine</t>

<t>API: Application Programming Interface</t>

<t>CLI: Command Line Interface</t>

<t>Test Case: A specification of conditions and inputs to evaluate a protocol behavior.</t>

<t>Tester Script: An executable program or sequence of instructions that controls a protocol tester to generate test traffic, interact with the DUT according to a specified test case, and collect relevant observations for result evaluation.</t>


</section>
<section anchor="network-protocol-testing-scenarios"><name>Network Protocol Testing Scenarios</name>

<t>Network protocol testing is required in many scenarios. This document outlines two common phases where protocol testing plays a critical role:</t>

<t><list style="numbers" type="1">
  <t>Device Development Phase:
During the development of network equipment, vendors must ensure that their devices conform to protocol specifications. This requires the construction of a large number of test cases. Testing during this phase may involve both protocol testers and the DUT, or it may be performed solely through interconnection among DUTs.</t>
  <t>Procurement Evaluation Phase:
In the context of equipment acquisition by network operators or enterprises, candidate equipment suppliers need to demonstrate compliance with specified requirements. In this phase, third-party organizations typically perform the testing to ensure neutrality. This type of testing is usually conducted as black-box testing, requiring the use of protocol testers interconnected with the DUT. The test cases are executed while observing whether the DUT behaves in accordance with expected protocol specifications.</t>
</list></t>

</section>
<section anchor="key-elements-of-network-protocol-testing"><name>Key Elements of Network Protocol Testing</name>

<t>Network protocol testing is a complex and comprehensive process that typically involves multiple parties and various necessary components. The following entities are generally involved in protocol testing:</t>

<t><list style="numbers" type="1">
  <t>DUT:
The DUT can be a physical network device (e.g., a switch, router, or firewall) or a virtual network device (e.g., an FRRouting (FRR) software router).</t>
  <t>Tester:
A protocol tester is a specialized network device that usually implements a standard and comprehensive protocol stack. It can generate test traffic, collect and analyze incoming traffic, and produce test results. Protocol testers can typically be controlled via scripts, allowing automated interaction with the DUT to carry out protocol tests.</t>
<t>Test Cases:
Protocol test cases may cover various categories, including conformance, functional, and performance tests. Each test case typically includes essential elements such as the test topology, step-by-step procedures, and expected results. A well-defined test case also specifies detailed configuration parameters.</t>
  <t>Test Topology: Each test case must specify the network topology it requires. Before executing a test case, the corresponding topology must be established accordingly. In a batch testing scenario, frequent changes in topology can be time-consuming and inefficient. To mitigate this overhead, it is common practice to construct a minimal common topology that satisfies the requirements of all test cases in a given batch. This minimizes the number of devices and links needed while ensuring that each test case can be executed within the shared topology.</t>
  <t>DUT Configuration: Before executing a test case, the DUT must be initialized with specific configurations according to the test case requirements (setup). Throughout the test, the DUT configuration may undergo multiple modifications as dictated by the test scenario. Upon test completion, appropriate configurations are usually applied to restore the DUT to its initial state (teardown).</t>
  <t>Tester Configuration and Scripts: In test scenarios involving protocol testers, the tester often plays the active role by generating test traffic and orchestrating the test process. This requires the preparation of both tester-specific configurations and execution scripts. Tester scripts are typically designed in coordination with the DUT configurations to ensure proper interaction during the test.</t>
</list></t>
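
<t>As an illustration of the minimal common topology practice described above, the following Python sketch merges the per-case topology requirements of a batch by taking the union of required nodes and links. The node names and data layout are hypothetical, and a production system would additionally minimize the merged graph; this is a sketch, not a normative algorithm.</t>

<figure><artwork><![CDATA[
# Illustrative sketch (hypothetical data layout): merge the topology
# requirements of a batch of test cases into one shared topology by
# taking the union of required nodes and links.

def merge_topologies(test_cases):
    """Return (nodes, links) covering every test case in the batch."""
    nodes, links = set(), set()
    for case in test_cases:
        nodes.update(case["nodes"])
        # store links as order-independent pairs
        links.update(frozenset(link) for link in case["links"])
    return nodes, links

cases = [
    {"nodes": {"tester", "dut1"}, "links": [("tester", "dut1")]},
    {"nodes": {"tester", "dut1", "dut2"},
     "links": [("tester", "dut1"), ("dut1", "dut2")]},
]
nodes, links = merge_topologies(cases)
# three nodes and two distinct links suffice for both cases
]]></artwork></figure>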

</section>
<section anchor="automated-network-protocol-test-framework"><name>Automated Network Protocol Test Framework</name>

<t>A typical network protocol test automation framework is illustrated as follows.</t>

<figure><artwork><![CDATA[
                                                       +----------+  
               +-------------+   +-----------------+   | Test Env.|
  +--------+   |  Protocol   |   |  Tester Script  |   | +------+ |
  |  RFC   |   |Formalization|   |       and       |   | |Tester| |
  |Document|-->|-------------|-->|DUT Configuration|-->| +-^----+ |
  +--------+   |  Test Case  |   |   Generation    |   |   |  |   |
               | Generation  |   +-----------------+   | +----v-+ |
               +-------------+           ^             | | DUT  | |
                       ^                 |             | +------+ |
                       |                 |             +----------+
                 +-----+-----------------+-----+           |  Test
                 |   Feedback and Refinement   |<----------+ Report  
                 +-----------------------------+               
]]></artwork></figure>

<section anchor="protocol-formalization"><name>Protocol Formalization</name>

<t>Protocol formalization forms the foundation for automated test case generation. Since protocol specifications are typically written in natural language, this step transforms unstructured text into a structured, machine-interpretable representation that can be traversed, queried, and validated by downstream tasks.</t>

<t>To enable a multi-dimensional characterization of protocol semantics, we formalize protocol content along two complementary dimensions: <em>Basic</em> and <em>Logic</em>. Basic formalization captures the static structure and execution context of a protocol, including message formats (e.g., fields and constraints), local data structures (e.g., timers and variables), and state machines that define legal state spaces and transitions. Logic formalization captures operational semantics and behavioral constraints, including event-action rules, protocol algorithms, and error handling behaviors. In practice, effective formalization also needs to explicitly encode relationships across these basic and logical elements, such as which messages trigger particular state transitions or processing rules, so that test generation can reason across modules consistently.</t>
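
<t>The two formalization dimensions described above can be sketched as a small machine-readable model. The following Python fragment is illustrative only (all class and field names are hypothetical); it shows how basic elements such as message fields and state transitions can be linked so that downstream tasks can traverse and query them:</t>

<figure><artwork><![CDATA[
# Illustrative sketch of a machine-interpretable protocol model
# (hypothetical names): "basic" elements (message fields, state
# machine) linked so that cross-module queries can traverse them.
from dataclasses import dataclass, field

@dataclass
class MessageField:
    name: str
    bits: int
    constraint: str = ""        # e.g. "value == 2"

@dataclass
class Transition:
    src: str                    # source state
    event: str                  # triggering message type
    dst: str                    # destination state

@dataclass
class ProtocolModel:
    messages: dict = field(default_factory=dict)   # name -> [MessageField]
    transitions: list = field(default_factory=list)

    def transitions_triggered_by(self, msg_name):
        """Cross-module query: which transitions does a message trigger?"""
        return [t for t in self.transitions if t.event == msg_name]

model = ProtocolModel()
model.messages["HELLO"] = [MessageField("version", 8, "== 2")]
model.transitions.append(Transition("Idle", "HELLO", "Established"))
]]></artwork></figure>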

</section>
<section anchor="test-case-generation"><name>Test Case Generation</name>

<t>Once a machine-readable protocol representation is available, the next step is to identify test points from protocol requirements and behavioral constraints, and extend them into concrete test cases. Test points can be derived from normative statements, message format constraints, packet processing logic, and valid/invalid protocol state transitions. Each test case elaborates on a specific test point and includes detailed test procedures and expected outcomes. It may also include a representative set of test parameters (e.g., boundary values and invalid values) to improve coverage of edge conditions. Conformance test cases are generally categorized into positive and negative types. Positive test cases verify that the protocol implementation correctly handles valid inputs, while negative test cases examine how the system responds to malformed or unexpected inputs.</t>

<t>The quality of generated test cases is typically evaluated along two primary dimensions: correctness and coverage. Correctness assesses whether a test case accurately reflects the intended semantics of the protocol. Coverage evaluates whether the test suite exercises protocol definitions and constraints across multiple testing dimensions (e.g., conformance, robustness, performance, and security) and explores representative parameter spaces. However, as test cases are often represented using a mix of natural language, topology diagrams, and configuration snippets, their inherent ambiguity makes systematic quality evaluation difficult. Effective metrics for test case quality assessment are still lacking, which remains an open research challenge.
</t>
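
<t>The derivation of positive and negative test cases from a single test point can be sketched as follows. The field name and value range are hypothetical examples, not parameters of any specific protocol:</t>

<figure><artwork><![CDATA[
# Illustrative sketch (hypothetical field and range): expand one test
# point into positive cases (boundary values inside the valid range)
# and negative cases (values just outside it).

def cases_for_range_field(field_name, lo, hi):
    positive = [lo, hi]              # valid boundary values
    negative = [lo - 1, hi + 1]      # just outside the valid range
    cases = []
    for value, kind in ([(v, "positive") for v in positive]
                        + [(v, "negative") for v in negative]):
        cases.append({
            "test_point": f"{field_name} range check",
            "kind": kind,
            "input": {field_name: value},
            "expected": "accept" if kind == "positive" else "reject",
        })
    return cases

generated = cases_for_range_field("hold_time", 3, 65535)
# two positive and two negative cases around the boundaries
]]></artwork></figure>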

</section>
<section anchor="tester-script-and-dut-configuration-generation"><name>Tester Script and DUT Configuration Generation</name>

<t>Test cases are often translated into executable scripts using available API documentation and runtime environments. This process requires mapping test steps described in natural language to specific function calls and configurations of the tester and the DUT.</t>

<t>Since tester scripts and DUT configuration files are typically used together, they must be generated in a coordinated manner rather than in isolation. The generated configurations must ensure mutual interoperability within the test topology and align with the step-by-step actions defined in the test case. This includes setting compatible protocol parameters, interface bindings, and execution triggers to facilitate correct protocol interactions and achieve the intended test objectives.</t>

<t>Before deploying the tester scripts and corresponding DUT configurations, it is essential to validate both their syntactic and semantic correctness. Although the protocol testing environment is isolated from production networks and thus inherently more tolerant to failure, invalid scripts or misconfigured devices can still render test executions ineffective or misleading. Therefore, a verification step is necessary to ensure that the generated artifacts conform to the expected syntax of the execution environment and accurately implement the intended test logic as defined by the test case specification.</t>
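
<t>The validation step described above can be approximated as a two-stage check, one syntactic and one semantic. The command vocabulary and the parameter check below are deliberately simplified, hypothetical stand-ins for what a real execution environment would enforce:</t>

<figure><artwork><![CDATA[
# Illustrative two-stage validation of a generated DUT configuration
# (hypothetical command vocabulary): (a) a syntactic check against
# commands the environment accepts, and (b) a semantic check that
# test case parameters actually appear in the configuration.
KNOWN_COMMANDS = {"interface", "ip", "router", "exit"}

def validate(config_lines, required_params):
    errors = []
    text = "\n".join(config_lines)
    for line in config_lines:                       # syntactic check
        words = line.split()
        if not words or words[0] not in KNOWN_COMMANDS:
            errors.append(f"unknown command: {line!r}")
    for param in required_params:                   # semantic check
        if param not in text:
            errors.append(f"test case parameter missing: {param}")
    return errors

cfg = ["interface eth0", "ip address 10.0.0.1/24", "exit"]
errors = validate(cfg, ["10.0.0.1"])   # empty list: both checks pass
]]></artwork></figure>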

</section>
<section anchor="test-case-execution"><name>Test Case Execution</name>

<t>The execution of test cases involves the automated deployment of configurations to the DUT as well as the automated execution of test scripts on the tester. This process is typically carried out in batches and requires a test case management system to coordinate the workflow. Additionally, intermediate configuration updates during the execution phase may be necessary and should be handled accordingly.</t>
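
<t>The batch execution workflow can be sketched as a setup/execute/teardown cycle per test case. The runner callbacks below are hypothetical stand-ins for the interfaces of a test case management system:</t>

<figure><artwork><![CDATA[
# Illustrative batch runner (hypothetical interfaces): each test case
# is executed as setup -> execute -> teardown, and verdicts are
# collected for the test report.

def run_batch(test_cases, apply_config, run_script, restore):
    results = []
    for case in test_cases:
        apply_config(case["dut_config"])          # setup
        try:
            verdict = run_script(case["script"])  # execute on tester
        finally:
            restore()                             # teardown, even on error
        results.append({"id": case["id"], "verdict": verdict})
    return results

log = []
results = run_batch(
    [{"id": "tc1", "dut_config": "cfgA", "script": "s1"}],
    apply_config=lambda cfg: log.append(("setup", cfg)),
    run_script=lambda script: "pass",
    restore=lambda: log.append(("teardown",)),
)
]]></artwork></figure>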

</section>
<section anchor="report-analysis-feedback-and-refinement"><name>Report Analysis, Feedback and Refinement</name>

<t>Test reports represent the most critical output of a network protocol testing workflow. They typically indicate whether each test case has passed or failed and, in the event of failure, include detailed error information specifying which expected behaviors were not satisfied. These reports serve as an essential reference for device improvement, standard compliance assessment, or procurement decision-making.</t>

<t>However, due to potential inaccuracies in test case descriptions, generated scripts, or device configurations, a test failure does not always indicate a protocol implementation defect. Therefore, failed test cases require further inspection using execution logs, diagnostic outputs, and relevant runtime context. This motivates the integration of a feedback and refinement mechanism into the framework.
The feedback loop analyzes runtime behaviors to detect discrepancies that are difficult to identify through static inspection alone. This iterative refinement process is necessary to improve the reliability of the automated testing system.</t>
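
<t>A minimal sketch of the triage logic in such a feedback loop is shown below; the log patterns are hypothetical examples of how a failure might be attributed to the generated artifacts, to the test environment, or to the implementation under test:</t>

<figure><artwork><![CDATA[
# Illustrative triage of a failed test case (hypothetical log
# patterns): decide whether the failure points at the generated
# artifacts, the test environment, or the implementation itself.

def triage(verdict, logs):
    if verdict == "pass":
        return "report-pass"
    if "syntax error" in logs:
        return "regenerate-artifacts"   # script/config bug, not a DUT bug
    if "connection refused" in logs:
        return "retry-environment"      # transient environment issue
    return "report-defect"              # likely implementation defect

# a failure caused by a bad generated script is fed back for refinement
action = triage("fail", "line 3: syntax error near 'inteface'")
]]></artwork></figure>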

</section>
</section>
<section anchor="automation-maturity-levels-in-network-protocol-testing"><name>Automation Maturity Levels in Network Protocol Testing</name>


<t>To describe the varying degrees of automation adopted in protocol testing practices, we define a set of Automation Maturity Levels.
These levels reflect technical progress from fully manual testing to self-optimizing, autonomous systems. 
These Automation Maturity Levels are intended as a reference model, not as a fixed pipeline structure.
</t>

<texttable title="Automation Maturity Matrix for Network Protocol Testing" anchor="auto-level">
      <ttcol align='left'>Level</ttcol>
      <ttcol align='left'>RFC Interpretation</ttcol>
      <ttcol align='left'>Test Asset Generation &amp; Execution</ttcol>
      <ttcol align='left'>Result Analysis &amp; Feedback</ttcol>
      <ttcol align='left'>Human Involvement</ttcol>
      <c>0</c>
      <c>Manual reading</c>
      <c>Fully manual scripting and CLI-based execution</c>
      <c>Manual observation and logging</c>
      <c>Full-time intervention</c>
      <c>1</c>
      <c>Human-guided parsing tools</c>
      <c>Script templates with tool-assisted execution</c>
      <c>Manual review with basic tools</c>
      <c>High (per test run)</c>
      <c>2</c>
      <c>Template-based extraction</c>
      <c>Basic autogen of config &amp; scripts for standard cases</c>
      <c>Rule-based validation with human triage</c>
      <c>Moderate (Manual correction and tuning)</c>
      <c>3</c>
      <c>Rule-based semantic parsing</c>
      <c>Parameterized generation and batch orchestration</c>
      <c>ML-assisted anomaly detection</c>
      <c>Supervisory confirmation</c>
      <c>4</c>
      <c>Structured model interpretation</c>
      <c>Objective-driven synthesis with end-to-end automation</c>
      <c>Correlated failure analysis and report generation</c>
      <c>Minimal (strategic input)</c>
      <c>5</c>
      <c>Adaptive protocol modeling</c>
      <c>Self-adaptive generation and self-optimizing execution</c>
      <c>Predictive diagnostics and remediation proposals</c>
      <c>None (optional audit)</c>
</texttable>

<t>As shown in <xref target="auto-level"/>, the Automation Maturity Levels are characterized along four dimensions:
RFC interpretation, test asset generation and execution, result analysis, and human involvement. Each level reflects an increasing degree of system autonomy and decreasing human involvement.</t>

<section anchor="level-0-manual-testing"><name>Level 0: Manual Testing</name>

<t>Description: All testing tasks are performed manually by test engineers.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Protocol understanding, test case design, topology setup, scripting, execution, and result analysis all rely on manual work.</t>
  <t>Tools are only used for basic assistance (e.g., packet capture via Wireshark).</t>
</list></t>

<t>Example: Manually reading RFCs, configuring routers and testers by hand, and verifying protocol behavior line by line.</t>

</section>
<section anchor="level-1-tool-assisted-testing"><name>Level 1: Tool-Assisted Testing</name>

<t>Description: Tools are used to assist in some testing steps, but the core logic is still human-driven.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes test script execution and automated result comparison.</t>
  <t>Manual effort is still required for test case design, topology setup, and exception analysis.</t>
</list></t>

<t>Example: Using Ixia or Spirent testers to record traffic, with manual analysis of protocol behavior against the RFC.</t>

</section>
<section anchor="level-2-partial-automation"><name>Level 2: Partial Automation</name>

<t>Description: Basic test case generation and execution are automated, but critical decisions still require human input.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:
  <list style="symbols">
      <t>A framework that performs basic protocol formalization (e.g., extracting fields, message formats, and FSM fragments) and generates baseline test cases and corresponding tester scripts and DUT configurations for standard cases.</t>
      <t>Topology generation for a single test case.</t>
    </list></t>
  <t>Manual effort includes:  <list style="symbols">
      <t>Designing complex or edge case scenarios.</t>
      <t>Root cause analysis when tests fail.</t>
    </list></t>
</list></t>

<t>Example: Robot Framework automatically runs BGP neighbor establishment tests, while route policy verification remains manually defined.</t>

</section>
<section anchor="level-3-conditional-automation"><name>Level 3: Conditional Automation</name>

<t>Description: The system can autonomously complete the test loop, but relies on human-defined rules and constraints.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:  <list style="symbols">
      <t>Complex test case and parameter generation based on semantic understanding and formalization of RFCs (e.g., structured protocol modules and behavioral constraints).</t>
      <t>Minimal common topology synthesis for a set of test cases.</t>
      <t>Automated result analysis with anomaly detection and iterative refinement driven by execution feedback.</t>
    </list></t>
  <t>Manual effort includes:  <list style="symbols">
      <t>Reviewing the test plan and confirming whether flagged anomalies represent real protocol violations.</t>
    </list></t>
</list></t>

<t>Example: The system autonomously verifies OSPF LSA flooding behavior, but a human confirms whether a vendor-specific implementation is RFC-compliant.</t>

</section>
<section anchor="level-4-high-automation"><name>Level 4: High Automation</name>

<t>Description: Full automation of the testing pipeline, with minimal human involvement limited to high-level adjustments.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:  <list style="symbols">
      <t>End-to-end automation from RFC parsing to test report generation.</t>
      <t>Automated result analysis with root cause analysis.</t>
      <t>Automated recovery from environment issues.</t>
    </list></t>
  <t>Manual effort includes:  <list style="symbols">
      <t>Defining high-level test objectives, with the system decomposing tasks accordingly.</t>
    </list></t>
</list></t>

<t>Example: The system autonomously builds a data center topology and validates end-to-end EVPN protocol compliance.</t>

</section>
<section anchor="level-5-full-automation"><name>Level 5: Full Automation</name>

<t>Description: Adaptive testing, where the system independently determines testing strategies and continuously optimizes coverage.</t>

<t>Key Characteristics:</t>

<t><list style="symbols">
  <t>Automation includes:  <list style="symbols">
      <t>Learning protocol implementation specifics (e.g., proprietary extensions) and generating targeted test cases.</t>
<t>Leveraging historical data to predict potential defects (e.g., detecting that a vendor's OSPF implementation often omits optional fields).</t>
      <t>Iterative self-optimization to improve efficiency.</t>
    </list></t>
  <t>Manual effort: None. The system autonomously outputs a final compliance report along with remediation suggestions.</t>
</list></t>

<t>Example: An AI-powered testing platform identifies that a switch's IPv6 ND protocol implementation deviates from RFC 4861.</t>

</section>
</section>
<section anchor="an-example-of-llm-based-automated-network-protocol-test-framework-from-level-2-to-level-3"><name>An Example of LLM-based Automated Network Protocol Test Framework (From Level 2 to Level 3)</name>
<t>The emergence of LLMs has significantly advanced the degree of automation achievable in network protocol testing. Within the proposed framework, LLMs can serve as core components in multiple stages of the testing pipeline, enabling a transition from Level 2 (Partial Automation) to Level 3 (Conditional Automation). A key enabler is to introduce an explicit protocol formalization step that transforms unstructured RFC text into a structured, machine-interpretable intermediate representation (e.g., a protocol description spanning message formats, state machines, and normative behavioral constraints). With such a representation, downstream generation becomes more systematic and less dependent on ad-hoc, protocol-specific parsers.</t>

<t>At the protocol formalization stage, LLMs can enrich RFCs with structured signals, such as section-level summaries, cross-references across documents, and normative requirement statements (e.g., "must" and "SHOULD"). The agent can further induce protocol modules (e.g., message formats, state machines, event-action rules, and algorithms) and formalize them into a unified representation that supports traversal and query. This representation serves as the semantic backbone for test generation, and it also helps in update scenarios by localizing changes and propagating them to the corresponding formal modules.</t>

<t>Based on the formalized protocol representation, LLMs can generate test cases in a more structured manner by decoupling test case templates from test parameters. The template generation step expands extracted test points into parameterized templates that define test objectives, topology, execution steps, oracles, and static testbed configurations. The parameter instantiation step then populates template placeholders with concrete values, including representative boundary values and invalid values for robustness testing. When oracle values require computation, the system can synthesize small helper programs (e.g., Python scripts) to compute expected outcomes, and can apply equivalence partitioning to reduce redundant parameter combinations without sacrificing meaningful coverage.</t>
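
<t>The decoupling of templates from parameters, together with equivalence partitioning, can be sketched as follows. The partition boundaries and template fields are hypothetical:</t>

<figure><artwork><![CDATA[
# Illustrative sketch (hypothetical boundaries and template fields):
# one representative value per equivalence class of an integer range
# field, instantiated into a parameterized test case template.

def partition_representatives(lo, hi):
    return {"below": lo - 1, "min": lo,
            "mid": (lo + hi) // 2,
            "max": hi, "above": hi + 1}

template = {
    "objective": "DUT rejects out-of-range {field}",
    "steps": ["send message with {field}={value}", "observe response"],
}

def instantiate(template, field_name, value):
    out = {}
    for key, val in template.items():
        if isinstance(val, list):
            out[key] = [s.format(field=field_name, value=value) for s in val]
        else:
            out[key] = val.format(field=field_name, value=value)
    return out

reps = partition_representatives(1, 255)
case = instantiate(template, "ttl", reps["above"])
# one concrete negative case per partition instead of every raw value
]]></artwork></figure>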

<t>For test execution, LLMs can assist in translating abstract test procedures into executable artifacts for both the tester and the DUT. In practice, this translation is often a multi-step workflow that benefits from a structured agent architecture. For example, a core agent can orchestrate the artifact generation process, while specialized sub-agents handle documentation summarization, intent rewriting (turning high-level test objectives into API-aligned actions), and recurring fault fixing based on an experience pool. During execution, feedback from logs and device outputs can be used to iteratively refine generated artifacts, and an adaptive pruning mechanism can decide whether to stop exploring additional parameter instances for a given template when definitive failures are found or sufficient coverage has been achieved.</t>
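
<t>The adaptive pruning decision can be sketched as a simple predicate over the results accumulated for one template; the coverage budget below is a hypothetical threshold:</t>

<figure><artwork><![CDATA[
# Illustrative pruning predicate (hypothetical threshold): stop
# exploring further parameter instances of a template once a
# confirmed failure is found or a coverage budget is reached.

def should_stop(results, budget=5):
    definitive = any(r["verdict"] == "fail" and r["confirmed"]
                     for r in results)
    return definitive or len(results) >= budget

history = [{"verdict": "pass", "confirmed": False},
           {"verdict": "fail", "confirmed": True}]
stop = should_stop(history)   # confirmed failure ends exploration early
]]></artwork></figure>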

<t>Despite these capabilities, it is important to note that LLMs are fundamentally probabilistic and cannot guarantee determinism or correctness. Therefore, even when the framework can complete an automated loop with reduced human effort, human oversight remains valuable for validating critical intermediate artifacts (e.g., formalized protocol modules, test oracles, and high-impact configuration changes) and for handling ambiguous or novel protocol behaviors. Nevertheless, integrating LLMs with explicit protocol formalization, systematic template/parameter generation, and execution-time feedback provides a practical path for elevating protocol testing practices toward Level 3 maturity.</t>

</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t><list style="numbers" type="1">
  <t>Execution of Unverified Generated Code: Automatically generated test scripts or configurations (e.g., CLI commands, tester control scripts) may include incorrect or harmful instructions that misconfigure devices or disrupt test environments. Mitigation: All generated artifacts should undergo validation, including syntax checking, semantic verification against protocol constraints, and dry-run execution in sandboxed environments.</t>
  <t>AI-Assisted Component Risks: LLMs may produce incorrect or insecure outputs due to their probabilistic nature or prompt manipulation. Mitigation: Apply input sanitization, prompt hardening, and human-in-the-loop validation for critical operations.</t>
</list></t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>








<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>This work is supported by the National Key R&amp;D Program of China.</t>

</section>
<section numbered="false" anchor="contributors"><name>Contributors</name>

<t>Zhen Li<br />
Beijing Xinertel Technology Co., Ltd.<br />
Email: lizhen_fz@xinertel.com</t>

<t>Zhanyou Li<br />
Beijing Xinertel Technology Co., Ltd.<br />
Email: lizy@xinertel.com</t>

</section>


  </back>


</rfc>

