
Polkadot: Vision for a Heterogeneous Multi-Chain Framework

By Dr. Gavin Wood, Founder, Ethereum & Parity · Draft 1 · 2016

Abstract

Present-day blockchain architectures all suffer from a number of issues, not least practical means of extensibility and scalability. We believe this stems from tying two very important parts of the consensus architecture, namely canonicality and validity, too closely together. This paper introduces an architecture, the heterogeneous multi-chain, which fundamentally sets the two apart.

In compartmentalising these two parts, and by keeping the overall functionality provided to an absolute minimum of security and transport, we introduce practical means of core extensibility in situ. Scalability is addressed through a divide-and-conquer approach to these two functions, scaling out of its bonded core through the incentivisation of untrusted public nodes. The heterogeneous nature of this architecture enables many highly divergent types of consensus system to interoperate in a trustless, fully decentralised "federation", allowing open and closed networks to have trust-free access to each other. We put forward a means of providing backwards compatibility with one or more pre-existing networks such as Ethereum. We believe that such a system provides a useful base-level component in the overall search for a practically implementable system capable of achieving global-commerce levels of scalability and privacy.

1. Preface

This is intended to be a technical "vision" summary of one possible direction that may be taken in further developing the blockchain paradigm, together with some rationale as to why this direction is sensible. It lays out, in as much detail as is possible at this stage of development, a system which may give a concrete improvement on a number of aspects of blockchain technology.

It is not intended to be a specification, formal or otherwise. It is not intended to be comprehensive nor to be a final design. It is not intended to cover non-core aspects of the framework such as APIs, bindings, languages and usage. This is notably experimental; where parameters are specified, they are likely to change. Mechanisms will be added, refined and removed in response to community ideas and critiques. Large portions of this paper will likely be revised as experimental evidence and prototyping give us information about what will work and what will not.

This document includes a core description of the protocol together with ideas for directions that may be taken to improve various aspects. It is envisioned that the core description will be used as the starting point for an initial series of proofs-of-concept. A final "version 1.0" would be based around this refined protocol together with the additional ideas that become proven and are determined to be required for the project to reach its goals.

1.1. History.

• 09/10/2016: 0.1.0-proof1
• 20/10/2016: 0.1.0-proof2
• 01/11/2016: 0.1.0-proof3
• 10/11/2016: 0.1.0

2. Introduction

Blockchains have demonstrated great promise of utility over several fields including "Internet of Things" (IoT), finance, governance, identity management, web decentralisation and asset-tracking. However, despite the technological promise and grand talk, we have yet to see significant real-world deployment of present technology. We believe that this is down to five key failures of present technology stacks:

Scalability: How many resources are spent globally on processing, bandwidth and storage for the system to process a single transaction, and how many transactions can be reasonably processed under peak conditions?

Isolatability: Can the divergent needs of multiple parties and applications be addressed to a near-optimal degree under the same framework?

Developability: How well do the tools work? Do the APIs address the developers' needs? Are educational materials available? Are the right integrations there?

Governance: Can the network remain flexible to evolve and adapt over time? Can decisions be made with sufficient inclusivity, legitimacy and transparency to provide effective leadership of a decentralised system?

Applicability: Does the technology actually address a burning need on its own? Is other "middleware" required in order to bridge the gap to actual applications?

In the present work, we aim to address the first two issues: scalability and isolatability. That said, we believe the Polkadot framework can provide meaningful improvements in each of these classes of problems.

Modern, efficient blockchain implementations such as the Parity Ethereum client [17] can process in excess of 3,000 transactions per second when running on performant consumer hardware. However, current real-world blockchain networks are practically limited to around 30 transactions per second. This limitation mainly originates from the fact that the current synchronous consensus mechanisms require wide timing margins of safety on the expected processing time, which is exacerbated by the



desire to support slower implementations. This is due to the underlying consensus architecture: the state-transition mechanism, or the means by which parties collate and execute transactions, has its logic fundamentally tied into the consensus "canonicalisation" mechanism, or the means by which parties agree upon one of a number of possible, valid histories. This applies equally to both proof-of-work (PoW) systems such as Bitcoin [15] and Ethereum [5, 23] and proof-of-stake (PoS) systems such as NXT [8] and Bitshares [12]: all ultimately suffer from the same handicap. It is a simple strategy that helped make blockchains a success. However, by tightly coupling these two mechanisms into a single unit of the protocol, we also bundle together multiple different actors and applications with different risk profiles, different scalability requirements and different privacy needs. One size does not fit all. Too often it is the case that, in a desire for broad appeal, a network adopts a degree of conservatism which results in a lowest common denominator optimally serving few, ultimately leading to a failing in the ability to innovate, perform and adapt, sometimes dramatically so.

Some systems, such as Factom [21], drop the state-transition mechanism altogether. However, much of the utility that we desire requires the ability to transition state according to a shared state-machine. Dropping it solves an alternative problem; it does not provide an alternative solution.

It seems clear, therefore, that one reasonable direction to explore as a route to a scalable decentralised compute platform is to decouple the consensus architecture from the state-transition mechanism. And, perhaps unsurprisingly, this is the strategy that Polkadot adopts as a solution to scalability.

2.1. Protocol, Implementation and Network.

Like Bitcoin and Ethereum, Polkadot refers at once to a network protocol and the (hitherto presupposed) primary public network that runs this protocol. Polkadot is intended to be a free and open project, the protocol specification being under a Creative Commons licence and the code being placed under a FLOSS licence. The project is developed in an open manner and accepts contributions wherever they are useful. A system of RFCs, not unlike the Python Enhancement Proposals, will allow a means of publicly collaborating over protocol changes and upgrades.

Our initial implementation of the Polkadot protocol will be known as the Parity Polkadot Platform (PPP) and will include a full protocol implementation together with API bindings. Like other Parity blockchain implementations, PPP is designed to be a general-purpose blockchain technology stack, neither uniquely for a public network nor for private/consortium operation. Its development thus far has been funded by several parties, including through a grant from the British government.

This paper nonetheless describes Polkadot under the context of a public network. The functionality we envision in a public network is a superset of that required in alternative (e.g. private and/or consortium) settings. Furthermore, in this context, the full scope of Polkadot can be more clearly described and discussed. This does mean the reader should be aware that certain mechanisms may be described (for example, interoperation with other public networks) which are not directly relevant to Polkadot when deployed under non-public ("permissioned") situations.

2.2. Previous work.

Decoupling the underlying consensus from the state-transition has been informally proposed in private for at least two years; Max Kaye was a proponent of such a strategy during the very early days of Ethereum.
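The separation argued for above can be sketched in a few lines. This is a hypothetical illustration, not Polkadot's actual design: `StateTransition` and `Canonicaliser` are invented names, and a longest-chain rule stands in for whatever fork-choice mechanism a real system would use.

```python
from dataclasses import dataclass

@dataclass
class Block:
    parent: str
    ident: str
    txs: tuple  # sequence of (account, balance delta) pairs

class StateTransition:
    """Validity: deterministically apply a block's transactions to a state."""
    def apply(self, state: dict, txs: tuple) -> dict:
        new = dict(state)
        for account, delta in txs:
            new[account] = new.get(account, 0) + delta
        return new

class Canonicaliser:
    """Canonicality: choose one of several valid histories (here: the longest)."""
    def choose(self, forks: list) -> list:
        return max(forks, key=len)

# Two valid histories exist; only the canonicalisation rule decides between them.
fork_a = [Block("genesis", "a1", (("alice", 5),))]
fork_b = [Block("genesis", "b1", (("bob", 3),)),
          Block("b1", "b2", (("bob", 2),))]

canonical = Canonicaliser().choose([fork_a, fork_b])
state = {}
for block in canonical:
    state = StateTransition().apply(state, block.txs)
```

In a coupled design both classes are hard-wired into one protocol; the paper's proposal is that a relay-chain supplies canonicality while each parachain supplies its own validity rules.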
A more complex scalable solution known as Chain Fibers, dating back to June 2014 and first published later that year¹, made the case for a single relay-chain and multiple homogeneous chains providing a transparent interchain execution mechanism. Decoherence was paid for through transaction latency: transactions requiring the coordination of disparate portions of the system would take longer to process. Polkadot takes much of its architecture from that and the follow-up conversations with various people, though it differs greatly in much of its design and provisions.

While there are no systems comparable to Polkadot actually in production, several systems of some relevance have been proposed, though few in any substantial level of detail. These proposals can be broken down into systems which drop or reduce the notion of a globally coherent state machine, those which attempt to provide a globally coherent singleton machine through homogeneous shards, and those which target only heterogeneity.

2.2.1. Systems without Global State.

Factom [21] is a system that demonstrates canonicality without the according validity, effectively allowing the chronicling of data. Because of the avoidance of global state and the difficulties with scaling which this brings, it can be considered a scalable solution. However, as mentioned previously, the set of problems it solves is strictly and substantially smaller.

Tangle [18] is a novel approach to consensus systems. Rather than arranging transactions into blocks and forming consensus over a strictly linked list to give a globally canonical ordering of state-changes, it largely abandons the idea of a heavily structured ordering and instead pushes for a directed acyclic graph of dependent transactions, with later items helping canonicalise earlier items through explicit referencing. For arbitrary state-changes, this dependency graph would quickly become intractable; however, for the much simpler UTXO model², it becomes quite reasonable. Because the system is only loosely coherent and transactions are generally independent of each other, a large amount of global parallelism becomes quite natural. Using the UTXO model does have the effect of limiting Tangle to a purely value-transfer "currency" system rather than anything more general or extensible. Furthermore, without hard global coherency, interaction with other systems, which tend to need an absolute degree of knowledge over the system state, becomes impractical.

¹ https://github.com/ethereum/wiki/wiki/Chain-Fibers-Redux
² Unspent transaction output, the model that Bitcoin uses, whereby the state is effectively the set of addresses associated with some value; transactions collate such addresses and reform them into a new set of addresses whose sum total is equivalent.
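The UTXO model described in footnote 2 can be sketched as a conservation rule over a set of outputs. This is a minimal illustration with invented names; real systems additionally key each output by the transaction that created it, so that two outputs of equal owner and value remain distinct.

```python
# State is a set of (owner, value) outputs. A transaction consumes some
# existing outputs and creates new ones whose total value is the same.

def apply_utxo_tx(utxos: set, consumed: set, created: set) -> set:
    assert consumed <= utxos, "inputs must be currently unspent outputs"
    assert sum(v for _, v in consumed) == sum(v for _, v in created), \
        "total value must be conserved"
    return (utxos - consumed) | created

state = {("alice", 7), ("bob", 3)}
# Alice pays Bob 5, keeping 2 as change:
state = apply_utxo_tx(state,
                      consumed={("alice", 7)},
                      created={("bob", 5), ("alice", 2)})
```

Because each transaction touches only the outputs it names, unrelated transactions commute, which is the source of the "global parallelism" noted above.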

2.2.2. Heterogeneous Chain Systems.

Side-chains [3] is a proposed addition to the Bitcoin protocol which would allow trustless interaction between the main Bitcoin chain and additional side-chains. There is no provision for any degree of 'rich' interaction between side-chains: the interaction would be limited to allowing side-chains to be custodians of each other's assets, effecting, in the local jargon, a two-way peg³. The end vision is a framework whereby the Bitcoin currency could be provided with additional, if peripheral, functionality through pegging it onto some other chains with more exotic state-transition systems than the Bitcoin protocol allows. In this sense, side-chains addresses extensibility rather than scalability.

Indeed, there is fundamentally no provision for the validity of side-chains; tokens from one chain (e.g. Bitcoin) held on behalf of a side-chain are secured only by the side-chain's ability to incentivise miners to canonicalise valid transitions. The security of the Bitcoin network cannot easily be transitioned to work on behalf of other blockchains. Furthermore, a protocol for ensuring Bitcoin miners merge-mine (that is, duplicate their canonicalisation power onto that of the side-chain) and, more importantly, validate the side-chain's transitions is outside the scope of this proposal.

Cosmos [10] is a proposed multi-chain system in the same vein as side-chains, swapping the Nakamoto PoW consensus method for Jae Kwon's Tendermint algorithm. Essentially, it describes multiple chains (operating in "zones"), each using individual instances of Tendermint, together with a means for trust-free communication via a master hub chain. This interchain communication is limited to the transfer of digital assets ("specifically about tokens") rather than arbitrary information; however, such interchain communication does have a return path for data, e.g. to report to the sender on the status of the transfer.

Validator sets for the zoned chains, and in particular the means of incentivising them, are, like side-chains, left as an unsolved problem. The general assumption is that each zoned chain will itself hold a token of value whose inflation is used to pay for validators. Still in the early stages of design, at present the proposal lacks comprehensive details over the economic means of achieving scalable certainty over global validity. However, the loose coherence required between the zones and the hub will allow for additional flexibility over the parameters of the zoned chains compared to that of a system enforcing stronger coherence.

2.2.3. Casper.

As yet, no comprehensive review or side-by-side comparison between Casper [6] and Polkadot has been made, though one can make a fairly sweeping (and accordingly inaccurate) characterisation of the two. Casper is a reimagining of how a PoS consensus algorithm could be based around participants betting on which fork would ultimately become canonical. Substantial consideration was given to ensuring that it be robust to network forks, even when prolonged, and have some additional degree of scalability on top of the basic Ethereum model. As such, Casper to date has tended to be a substantially more complex protocol than Polkadot and its forebears, and a substantial deviation from the basic blockchain format. It remains unseen how Casper will iterate in the future and what it will look like should it finally be deployed.

While Casper and Polkadot both represent interesting new protocols and, in some sense, augmentations of Ethereum, there are substantial differences between their ultimate goals and paths to deployment. Casper is an Ethereum Foundation-centred project originally designed to be a PoS alteration to the protocol with no desire to create a fundamentally scalable blockchain. Crucially, it is designed to be a hard fork, rather than anything more expansive, and thus all Ethereum clients and users would be required to upgrade or remain on a fork of uncertain adoption. As such, deployment is made substantially more difficult, as is inherent in a decentralised project where tight coordination is necessary.

Polkadot differs in several ways; first and foremost, Polkadot is designed to be a fully extensible and scalable blockchain development, deployment and interaction test bed. It is built to be a largely future-proof harness able to assimilate new blockchain technology as it becomes available without over-complicated decentralised coordination or hard forks. We already envision several use cases, such as encrypted consortium chains and high-frequency chains with very low block times, that are unrealistic in any currently envisioned future version of Ethereum. Finally, the coupling between it and Ethereum is extremely loose; no action on the part of Ethereum is necessary to enable trustless transaction forwarding between the two networks.

In short, while Casper/Ethereum 2.0 and Polkadot share some fleeting similarities, we believe their end goals are substantially different and that, rather than competing, the two protocols are likely to ultimately co-exist under a mutually beneficial relationship for the foreseeable future.
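The two-way peg discussed under 2.2.2 (and contrasted with a one-way peg in footnote 3) can be sketched as a custody ledger. This is an illustrative toy, not the side-chains proposal itself: a two-way peg locks main-chain tokens so an equal amount can circulate on the side-chain, and unlocks them on return, whereas a one-way peg simply burns tokens with no way back.

```python
class TwoWayPeg:
    """Toy custody ledger for a main chain pegged to one side-chain."""
    def __init__(self, main_supply: int):
        self.free = main_supply  # spendable on the main chain
        self.locked = 0          # held in custody for the side-chain
        self.side = 0            # circulating on the side-chain

    def transfer_out(self, amount: int) -> None:
        """Lock main-chain tokens; mint matching side-chain tokens."""
        assert 0 < amount <= self.free
        self.free -= amount
        self.locked += amount
        self.side += amount

    def transfer_back(self, amount: int) -> None:
        """Burn side-chain tokens; release the custodied originals."""
        assert 0 < amount <= self.side
        self.side -= amount
        self.locked -= amount
        self.free += amount

peg = TwoWayPeg(main_supply=100)
peg.transfer_out(40)
peg.transfer_back(15)
# Invariant: free + side == original supply, and locked always backs side.
```

The security caveat in the text is exactly that nothing here enforces `transfer_back` honestly: the custodied tokens are only as safe as the side-chain's own validation.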


3. Summary

Polkadot is a scalable heterogeneous multi-chain. This means that, unlike previous blockchain implementations which have focused on providing a single chain of varying degrees of generality over potential applications, Polkadot itself is designed to provide no inherent application functionality at all. Rather, Polkadot provides the bedrock "relay-chain" upon which a large number of validatable, globally-coherent dynamic data-structures may be hosted side-by-side. We call these data-structures "parallelised" chains or parachains, though there is no specific need for them to be blockchain in nature.

In other words, Polkadot may be considered equivalent to a set of independent chains (e.g. the set containing Ethereum, Ethereum Classic, Namecoin and Bitcoin) except for two very important points:

• Pooled security;
• trust-free interchain transactability.

These points are why we consider Polkadot to be "scalable". In principle, a problem to be deployed on Polkadot may be substantially parallelised (scaled out) over a large number of parachains. Since all aspects of each parachain may be conducted in parallel by a different segment of the Polkadot network, the system has some ability to scale. Polkadot provides a rather bare-bones piece of infrastructure, leaving much of the complexity to be addressed at the middleware level. This is a conscious decision intended to reduce development risk, enabling the requisite software to be developed within a short time span and with a good level of confidence over its security and robustness.

3.1. The Philosophy of Polkadot.

Polkadot should provide an absolute rock-solid foundation on which to build the next wave of consensus systems, right through the risk spectrum from production-capable mature designs to nascent ideas. By providing strong guarantees over security, isolation and communication, Polkadot can allow parachains to select from a range of properties themselves. Indeed, we foresee various experimental blockchains pushing the properties of what could be considered sensible today.

We see conservative, high-value chains similar to Bitcoin or Zcash [20] co-existing alongside lower-value "theme-chains" (such marketing, so fun) and test-nets with zero or near-zero fees. We see fully-encrypted, "dark", consortium chains operating alongside, and even providing services to, highly functional and open chains such as Ethereum. We see experimental new VM-based chains, such as a subjective time-charged Wasm chain, being used as a means of outsourcing difficult compute problems from a more mature Ethereum-like chain or a more restricted Bitcoin-like chain.

To manage chain upgrades, Polkadot will inherently support some sort of governance structure, likely based on existing stable political systems and having a bicameral aspect similar to the Yellow Paper Council [24]. As the ultimate authority, the underlying stakable token holders would have "referendum" control. To reflect the users' need for development but the developers' need for legitimacy, we expect a reasonable direction would be to form the two chambers from a "user" committee (made up of bonded validators) and a "technical" committee made up of major client developers and ecosystem players. The body of token holders would maintain the ultimate legitimacy and form a supermajority to augment, reparameterise, replace or dissolve this structure, something we don't doubt the eventual need for: in the words of Twain, "Governments and diapers must be changed often, and for the same reason".

Whereas reparameterisation is typically trivial to arrange within a larger consensus mechanism, more qualitative changes such as replacement and augmentation would likely need to be either non-automated "soft-decrees" (e.g. through the canonicalisation of a block number and the hash of a document formally specifying the new protocol) or necessitate the core consensus mechanism to contain a sufficiently rich language to describe any aspect of itself which may need to change. The latter is an eventual aim; however, the former is more likely to be chosen in order to facilitate a reasonable development timeline.

Polkadot's primary tenets, and the rules within which we evaluate all design decisions, are:

Minimal: Polkadot should have as little functionality as possible.

Simple: no additional complexity should be present in the base protocol than can reasonably be offloaded into middleware, placed through a parachain or introduced in a later optimisation.

General: no unnecessary requirement, constraint or limitation should be placed on parachains; Polkadot should be a test bed for consensus-system development which can be optimised through making the model into which extensions fit as abstract as possible.

Robust: Polkadot should provide a fundamentally stable base-layer. In addition to economic soundness, this also means decentralising to minimise the vectors for high-reward attacks.

³ As opposed to a one-way peg, which is essentially the action of destroying tokens in one chain to create tokens in another, without the mechanism to do the converse in order to recover the original tokens.
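The scale-out claim of section 3, that each parachain is conducted in parallel by a different segment of the network, can be sketched as partitioning validators into subgroups and checking parachain candidates concurrently. Everything here is an assumed illustration (names, group sizes, the trivial validity check), not the protocol's actual assignment scheme.

```python
import concurrent.futures

def assign_subgroups(validators: list, parachains: list) -> dict:
    """Deterministically partition the validator set across parachains."""
    k = len(validators) // len(parachains)
    return {p: validators[i * k:(i + 1) * k]
            for i, p in enumerate(parachains)}

def validate(parachain: str, subgroup: list) -> tuple:
    # Stand-in for running the parachain's own state-transition checks;
    # here we only require that a non-empty subgroup was assigned.
    return parachain, len(subgroup) > 0

validators = [f"v{i}" for i in range(8)]
parachains = ["asset-chain", "privacy-chain", "iot-chain", "test-net"]
groups = assign_subgroups(validators, parachains)

# Each subgroup works on its own parachain concurrently.
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda g: validate(*g), groups.items()))
```

The point of the sketch is that total work per node stays roughly constant as parachains are added, because no node re-validates every chain.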


Participation in Polkadot


There are four basic roles in the upkeep of a Polkadot network: collator, fisherman, nominator and validator. In one possible implementation of Polkadot, the latter role may actually be broken down into two roles: basic validator and availability guarantor; this is discussed in section 6.5.3.

Figure 1. The interaction between the four roles of Polkadot.

4.1. Validators. A validator is the highest charge and helps seal new blocks on the Polkadot network. The validator’s role is contingent upon a sufficiently high bond being deposited, though we allow other bonded parties to nominate one or more validators to act for them and as such some portion of the validator’s bond may not necessarily be owned by the validator itself but rather by these nominators.

A validator must run a relay-chain client implementation with high availability and bandwidth. At each block the node must be ready to accept the role of ratifying a new block on a nominated parachain. This process involves receiving, validating and republishing candidate blocks. The nomination is deterministic but virtually unpredictable much in advance. Since the validator cannot reasonably be expected to maintain a fully-synchronised database of all parachains, it is expected that the validator will nominate the task of devising a suggested new parachain block to a third party, known as a collator.

Once all new parachain blocks have been properly ratified by their appointed validator subgroups, validators must then ratify the relay-chain block itself. This involves updating the state of the transaction queues (essentially moving data from a parachain’s output queue to another parachain’s input queue), processing the transactions of the ratified relay-chain transaction set and ratifying the final block, including the final parachain changes.

A validator not fulfilling their duty to find consensus under the rules of our chosen consensus algorithm is punished. For initial, unintentional failures, this is through withholding the validator’s reward. Repeated failures result in the reduction of their security bond (through burning). Provably malicious actions such as double-signing or conspiring to provide an invalid block result in the loss of the entire bond (which is partially burnt but mostly given to the informant and the honest actors). In some sense, validators are similar to the mining pools of current PoW blockchains.

4.2. Nominators. A nominator is a stake-holding party who contributes to the security bond of a validator. They have no additional role except to place risk capital and as such to signal that they trust a particular validator (or set thereof) to act responsibly in their maintenance of the network. They receive a pro-rata increase or reduction in their deposit according to the growth of the bond to which they contribute. Together with collators (next), nominators are in some sense similar to the miners of present-day PoW networks.

4.3. Collators. Transaction collators (collators for short) are parties who assist validators in producing valid parachain blocks. They maintain a “full-node” for a particular parachain, meaning that they retain all necessary information to be able to author new blocks and execute transactions in much the same way as miners do on current PoW blockchains. Under normal circumstances, they will collate and execute transactions to create an unsealed block, and provide it, together with a zero-knowledge proof, to one or more validators presently responsible for proposing a parachain block. The precise nature of the relationship between collators, nominators and validators will likely change over time.
Initially, we expect collators to work very closely with validators, since there will be only a few (perhaps only one) parachain(s) with little transaction volume. The initial client implementation will include RPCs to allow a parachain collator node to unconditionally supply a (relay-chain) validator node with a provably valid parachain block. As the cost of maintaining a synced version of all such parachains increases, we expect to see additional infrastructure in place which will help separate out the duties to independent, economically-motivated parties.

Eventually, we expect to see collator pools who vie to collect the most transaction fees. Such collators may become contracted to serve particular validators over a period of time for an on-going share in the reward proceeds. Alternatively, “freelance” collators may simply create a market offering valid parachain blocks in return for a competitive share of the reward payable immediately. Similarly, decentralised nominator pools would allow multiple bonded participants to coordinate and share the duty of a validator. This ability to pool ensures open participation, leading to a more decentralised system.

4.4. Fishermen. Unlike the other two active parties, fishermen are not directly related to the block-authoring process. Rather, they are independent “bounty hunters” motivated by a large one-off reward. Precisely due to the existence of fishermen, we expect events of misbehaviour to happen seldom, and when they do, only because the bonded party was careless with its secret-key security rather than through malicious intent. The name comes from the expected frequency of reward, the minimal requirements to take part and the eventual reward size.

Fishermen get their reward through a timely proof that at least one bonded party acted illegally. Illegal actions include signing two blocks each with the same ratified parent or, in the case of parachains, helping ratify an invalid block.
To prevent over-rewarding or the compromise and illicit use of a session’s secret key, the base reward for providing a single validator’s illegally signed message is minimal. This reward increases asymptotically as more corroborating illegal signatures from other validators are provided, implying a genuine attack. The asymptote is set at 66%, following our base security assertion that at least two-thirds of the validators act benevolently.

Fishermen are somewhat similar to “full nodes” in present-day blockchain systems in that the resources needed are relatively small and a commitment of stable uptime and bandwidth is not necessary. Fishermen differ in so much as they must post a small bond. This bond prevents sybil attacks from wasting validators’ time and compute resources. It is immediately withdrawable, probably no more than the equivalent of a few dollars, and may lead to reaping a hefty reward from spotting a misbehaving validator.
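The reward curve described above, minimal for a lone report and rising towards an asymptote as corroborating signatures approach two-thirds of the validator set, might be sketched as follows. The specific curve shape and the `base` and `maximum` values are illustrative assumptions; the text fixes only the 66% asymptote:

```python
# Hypothetical fisherman reward curve: a single illegally-signed message
# earns a minimal base reward; the payout grows towards a maximum as
# corroborating signatures approach two-thirds of the validator set,
# beyond which additional signatures add nothing.

def fisherman_reward(corroborating: int, validators: int,
                     base: float = 1.0, maximum: float = 1000.0) -> float:
    """Reward for reporting `corroborating` illegal signatures out of
    `validators` total; the asymptote sits at two-thirds of the set."""
    asymptote = (2 / 3) * validators                   # the 66% point
    x = min(corroborating, asymptote) / asymptote      # fraction of the way there
    # Convex interpolation from `base` to `maximum`: early, lone reports
    # earn little; evidence of a coordinated attack earns the full bounty.
    return base + (maximum - base) * x ** 2

# With 144 validators, the curve tops out at 96 corroborating signatures.
assert fisherman_reward(1, 144) < fisherman_reward(48, 144) < fisherman_reward(96, 144)
```

The quadratic shape is arbitrary; any monotonic curve that is flat near zero and saturates at the two-thirds mark would serve the same incentive purpose.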


Design Overview


This section is intended to give a brief overview of the system as a whole. A more thorough exploration of the system is given in the section following it.

5.1. Consensus. On the relay-chain, Polkadot achieves low-level consensus over a set of mutually agreed valid blocks through a modern asynchronous Byzantine fault-tolerant (BFT) algorithm. The algorithm will be inspired by the simple Tendermint [11] and the substantially more involved HoneyBadgerBFT [14]. The latter provides an efficient and fault-tolerant consensus over an arbitrarily defective network infrastructure, given a set of mostly benign authorities or validators.

For a proof-of-authority (PoA) style network, this alone would be sufficient; however, Polkadot is imagined to be also deployable as a network in a fully open and public situation, without any particular organisation or trusted authority required to maintain it. As such, we need a means of determining a set of validators and incentivising them to be honest. For this we utilise PoS-based selection criteria.

5.2. Proving the Stake. We assume that the network will have some means of measuring how much “stake” any particular account has. For ease of comparison to pre-existing systems, we will call the unit of measurement “tokens”. Unfortunately the term is less than ideal for a number of reasons, not least that, being simply a scalar value associated with an account, there is no notion of individuality.

We imagine validators to be elected, infrequently (at most once per day but perhaps as seldom as once per quarter), through a Nominated Proof-of-Stake (NPoS) scheme. Incentivisation can happen through a pro-rata allocation of

funds coming from a token base expansion (up to 100% per year, though more likely around 10%) together with any transaction fees collected. While monetary base expansion typically leads to inflation, since all token owners would have a fair opportunity at participation, no token holder would need to suffer a reduction in the value of their holdings over time provided they were happy to take a role in the consensus mechanism. A particular proportion of tokens would be targeted for the staking process; the effective token base expansion would be adjusted through a market-based mechanism to reach this target.

Validators are bonded heavily by their stakes; exiting validators’ bonds remain in place long after the validators’ duties cease (perhaps around 3 months). This long bond-liquidation period allows future misbehaviour to be punished up until the periodic checkpointing of the chain.

Figure 2. A summary schematic of the Polkadot system. This shows collators collecting and propagating user-transactions, as well as propagating block candidates to fishermen and validators. It also shows how an account can post a transaction which is carried out of its parachain, via the relay-chain and on into another parachain where it can be interpreted as a transaction to an account there.
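The market-based adjustment of token base expansion towards a staking target described in 5.2 could, as one possibility, take the form of a simple controller: pay more when the network is under-staked, less when over-staked. The target fraction, gain and bounds below are illustrative assumptions, not protocol constants:

```python
# Illustrative sketch of a market-based issuance adjustment: the annual
# token-base expansion rate moves up when less than the target proportion
# of tokens is staked (to attract stakers) and down when more is staked.

def adjusted_expansion(staked_fraction: float,
                       target_fraction: float = 0.5,
                       base_rate: float = 0.10,   # ~10% p.a., per the text
                       max_rate: float = 1.00,    # 100% p.a. upper bound
                       gain: float = 0.5) -> float:
    """Annual token-base expansion rate given the fraction of tokens staked."""
    # Linear controller: deviation below target pushes the rate up,
    # deviation above pushes it down; the result is clamped to [0, max_rate].
    rate = base_rate + gain * (target_fraction - staked_fraction)
    return min(max(rate, 0.0), max_rate)

# An under-staked network pays more than one at target, which pays more
# than an over-staked one.
assert adjusted_expansion(0.2) > adjusted_expansion(0.5) > adjusted_expansion(0.8)
```

A real mechanism would likely be richer (e.g. non-linear, or derived from an auction among would-be stakers), but the feedback direction is the essential point.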
Misbehaviour results in punishment, such as reduction of reward or, in cases which intentionally compromise the network’s integrity, the validator losing some or all of its stake to other validators, informants or the stakeholders as a whole (through burning). For example, a validator who attempts to ratify both branches of a fork (sometimes known as a “short-range” attack) may be identified and punished in the latter way.

Long-range “nothing-at-stake” attacks4 are circumvented through a simple “checkpoint” latch which prevents a dangerous chain-reorganisation of more than a particular chain-depth. To ensure that newly-syncing clients cannot be fooled onto the wrong chain, regular “hard forks” will occur (of at most the same period as the validators’ bond liquidation) that hard-code recent checkpoint block hashes into clients. This plays well with a further footprint-reducing measure of “finite chain length”, or periodic resetting of the genesis block.

5.3. Parachains and Collators. Each parachain gets similar security affordances to the relay-chain: the parachains’ headers are sealed within the relay-chain block, ensuring no reorganisation, or “double-spending”, is possible following confirmation. This is a similar security guarantee to that offered by Bitcoin’s side-chains and merge-mining. Polkadot, however, also provides strong guarantees that the parachains’ state transitions are valid. This happens through the set of validators being cryptographically randomly segmented into subsets; one subset per parachain, the subsets potentially differing per block. This setup generally implies that parachains’ block times will be at least as long as that of the relay-chain. The specific means of determining the partitioning is outside the scope of this document.

4 Such an attack is where the adversary forges an entirely new chain of history from the genesis block onwards. Through controlling a relatively insignificant portion of stake at the outset, they are able to incrementally increase their portion of the stake relative to all other stakeholders, as they are the only active participants in their alternative history. Since no intrinsic physical limitation exists on the creation of blocks (unlike PoW, where quite real computational energy must be spent), they are able to craft a chain longer than the real chain in a relatively short timespan, potentially making it the longest and best and so taking over the canonical state of the network.
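The “checkpoint” latch mentioned above can be sketched as a client-side rule that rejects any reorganisation rewinding deeper than a fixed depth, regardless of how long the competing chain is. The depth of 120 blocks and the list-of-hashes chain representation are illustrative assumptions:

```python
# Minimal sketch of a checkpoint latch: a client refuses any chain switch
# that would rewind more than MAX_REORG_DEPTH blocks past its best block.

MAX_REORG_DEPTH = 120  # assumed latch depth; the paper leaves this open

def common_ancestor_height(chain_a: list, chain_b: list) -> int:
    """Height of the last block shared by two chains (lists of block
    hashes, genesis first); -1 if they share nothing."""
    height = -1
    for a, b in zip(chain_a, chain_b):
        if a != b:
            break
        height += 1
    return height

def reorg_allowed(current: list, candidate: list) -> bool:
    """Accept `candidate` only if switching rewinds at most MAX_REORG_DEPTH."""
    fork = common_ancestor_height(current, candidate)
    rewind = (len(current) - 1) - fork
    return rewind <= MAX_REORG_DEPTH

# A forged history diverging 200 blocks back is latched out, however long
# it is, while a one-block reorganisation near the tip is accepted.
current = ["g"] + [f"a{i}" for i in range(300)]
attack = current[:100] + [f"b{i}" for i in range(500)]
assert not reorg_allowed(current, attack)
assert reorg_allowed(current, current[:-1] + ["a_alt"])
```

This is exactly the property that defeats the nothing-at-stake adversary of footnote 4: a longer alternative history is useless if honest clients refuse to rewind far enough to adopt it.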

It would likely be based either around a commit-reveal framework similar to the RanDAO [19], or use data combined from previous blocks of each parachain under a cryptographically secure hash.

Such subsets of validators are required to provide a parachain block candidate which is guaranteed valid (on pain of bond confiscation). Validity revolves around two important points: firstly, that it is intrinsically valid, i.e. that all state transitions were executed faithfully and that all external data referenced (i.e. transactions) is valid for inclusion; secondly, that any data which is extrinsic to its candidate, such as those external transactions, has sufficiently high availability so that participants are able to download it and execute the block manually.5 Validators may provide only a “null” block containing no external “transactions” data, but may run the risk of getting a reduced reward if they do. They work alongside a parachain gossip protocol with collators: individuals who collate transactions into blocks and provide a non-interactive, zero-knowledge proof that the block constitutes a valid child of its parent (taking any transaction fees for their trouble).

It is left to parachain protocols to specify their own means of spam-prevention: there is no fundamental notion of “compute-resource metering” or “transaction fee” imposed by the relay-chain. There is also no direct enforcement of this by the relay-chain protocol (though it is unlikely that the stakeholders would choose to adopt a parachain which didn’t provide a decent mechanism). This is an explicit nod to the possibility of chains unlike Ethereum, e.g. a Bitcoin-like chain which has a much simpler fee model, or some other, yet-to-be-proposed spam-prevention model.

Polkadot’s relay-chain itself will probably exist as an Ethereum-like accounts-and-state chain, possibly an EVM derivative.
Since the relay-chain nodes will be required to do substantial other processing, transaction throughput will be minimised, partly through large transaction fees and, should our research models require it, a block size limit.

5.4. Interchain Communication. The critical final ingredient of Polkadot is interchain communication. Since parachains can have some sort of information channel between them, we allow ourselves to consider Polkadot a scalable multi-chain. In the case of Polkadot, the communication is as simple as can be: transactions executing in a parachain are (according to the logic of that chain) able to effect the dispatch of a transaction into a second parachain or, potentially, the relay-chain. Like external transactions on production blockchains, they are fully asynchronous and there is no intrinsic ability for them to return any kind of information back to their origin.

Figure 3. A basic schematic showing the main parts of routing for posted transactions (“posts”): an account sends a post by placing an entry in the egress Merkle tree for the destination parachain; the proof-of-post is stored in the source parachain’s egress Merkle tree, a routed reference is placed in the destination parachain’s ingress Merkle tree, and the destination gets the data from the prior block’s validators, with the entry removed from the ingress Merkle tree when the receiving account takes delivery.

To ensure minimal implementation complexity, minimal risk and minimal straight-jacketing of future parachain architectures, these interchain transactions are effectively indistinguishable from standard externally-signed transactions. The transaction has an origin segment, providing the ability to identify a parachain, and an address which may be of arbitrary size. Unlike common current systems such as Bitcoin and Ethereum, interchain transactions do not come with any kind of associated “payment” of fee; any such payment must be managed through negotiation logic on the source and destination parachains.
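The post routing summarised in Figure 3, combined with the queue-saturation rule described in this section, might be modelled as follows. The saturation limit and the plain-queue representation are illustrative assumptions; the Merkle-tree commitments over each queue are elided:

```python
# Toy model of relay-chain post routing: maintainers drain each
# parachain's egress queue into the destination's ingress queue, but a
# destination whose ingress queue exceeded the limit at the end of the
# previous block is "saturated" and accepts nothing until it drains.

from collections import deque

SATURATION_LIMIT = 4  # assumed per-chain ingress limit

def route_posts(egress: dict, ingress: dict) -> list:
    """Move posts {destination: deque of posts} into the corresponding
    ingress queues; return the posts refused due to saturation."""
    # Saturation is judged on state at the end of the previous block, so
    # it is fixed before any routing in this block takes place.
    saturated = {c for c, q in ingress.items() if len(q) > SATURATION_LIMIT}
    refused = []
    for dest, queue in egress.items():
        while queue:
            post = queue.popleft()
            if dest in saturated:
                refused.append((dest, post))  # reportable synchronously at source
            else:
                ingress[dest].append(post)
    return refused

ingress = {"A": deque(), "B": deque(range(10))}   # B is already saturated
egress = {"A": deque(["p1", "p2"]), "B": deque(["p3"])}
refused = route_posts(egress, ingress)
assert list(ingress["A"]) == ["p1", "p2"]
assert refused == [("B", "p3")]
```

Because saturation is determined from the previous block’s state, every relay-chain participant computes the same routing outcome, which is what allows a refused post to be reported synchronously to its sender.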
A system such as that proposed for Ethereum’s Serenity release [7] would be a simple means of managing such a cross-chain resource payment, though we assume others may come to the fore in due course.

Interchain transactions are resolved using a simple queuing mechanism based around a Merkle tree to ensure fidelity. It is the task of the relay-chain maintainers to move transactions on the output queue of one parachain into the input queue of the destination parachain. The passed transactions get referenced on the relay-chain; however, they are not relay-chain transactions themselves. To prevent a parachain from spamming another parachain with transactions, for a transaction to be sent it is required that the destination’s input queue be not too large at the time of the end of the previous block. If the input queue is too large after block processing, then it is considered “saturated” and no transactions may be routed to it within subsequent blocks until it is reduced back below the limit. These queues are administered on the relay-chain, allowing parachains to determine each other’s saturation status; this way a failed attempt to post a transaction to a stalled destination may be reported synchronously. (Though since no return path exists, if a secondary transaction failed for that reason, it could not be reported back to the original caller and some other means of recovery would have to take place.)

5 Such a task might be shared between validators or could become the designated task of a set of heavily bonded validators known as availability guarantors.

5.5. Polkadot and Ethereum. Due to Ethereum’s Turing completeness, we expect there is ample opportunity for Polkadot and Ethereum to be interoperable with each other, at least within some easily deducible security bounds. In short, we envision that transactions from Polkadot can be signed by validators and then fed into

Ethereum, where they can be interpreted and enacted by a transaction-forwarding contract. In the other direction, we foresee the usage of specially formatted logs (events) coming from a “break-out contract” to allow a swift verification that a particular message should be forwarded.

5.5.1. Polkadot to Ethereum. Through the choice of a BFT consensus mechanism with validators formed from a set of stakeholders determined through an approval-voting mechanism, we are able to get a secure consensus with an infrequently changing and modest number of validators. In a system with a total of 144 validators, a block time of 4 seconds and a 900-block finality (allowing for malicious behaviour such as double-votes to be reported, punished and repaired), the validity of a block can reasonably be considered proven through as little as 97 signatures (two-thirds of 144, plus one) and a following 60-minute verification period in which no challenges are deposited.

Ethereum is able to host a “break-in contract” which can maintain the 144 signatories and be controlled by them. Since elliptic curve digital signature algorithm (ECDSA) recovery takes only 3,000 gas under the EVM, and since we would likely only want the validation to happen on a super-majority of validators (rather than full unanimity), the base cost of Ethereum confirming that an instruction was properly validated as coming from the Polkadot network would be no more than 300,000 gas, a mere 6% of the total block gas limit at 5.5M. Increasing the number of validators (as would be necessary for dealing with dozens of chains) inevitably increases this cost; however, it is broadly expected that Ethereum’s transaction bandwidth will grow over time as the technology matures and infrastructure improves. Together with the fact that not all validators need to be involved (e.g.
only the highest-staked validators may be called upon for such a task), the limits of this mechanism extend reasonably well. Assuming a daily rotation of such validators (which is fairly conservative; weekly or even monthly may be acceptable), the cost to the network of maintaining this Ethereum-forwarding bridge would be around 540,000 gas per day or, at present gas prices, $45 per year. A basic transaction forwarded alone over the bridge would cost around $0.11; additional contract computation would cost more, of course. By buffering and bundling transactions together, the break-in authorisation costs can easily be shared, reducing the cost per transaction substantially; if 20 transactions were required before forwarding, then the cost of forwarding a basic transaction would fall to around $0.01.

One interesting, and cheaper, alternative to this multi-signature contract model would be to use threshold signatures in order to achieve the multi-lateral ownership semantics. While threshold signature schemes for ECDSA are computationally expensive, those for other schemes such as Schnorr signatures are very reasonable. Ethereum plans to introduce primitives which would make such schemes cheap to use in the upcoming Metropolis hard-fork. If such a means were able to be utilised, the gas costs for forwarding a Polkadot transaction into the Ethereum network would be dramatically reduced, to a near-zero overhead over and above the basic costs of validating the signature and executing the underlying transaction.

In this model, Polkadot’s validator nodes would have to do little other than sign messages. To get the transactions actually routed onto the Ethereum network, we assume either that validators themselves would also reside on the Ethereum network or, more likely, that small bounties be offered to the first actor who forwards the message on to the network (the bounty could trivially be paid to the transaction originator).

5.5.2. Ethereum to Polkadot.
Getting transactions to be forwarded from Ethereum to Polkadot uses the simple notion of logs. When an Ethereum contract wishes to dispatch a transaction to a particular parachain of Polkadot, it need simply call into a special “break-out contract”. The break-out contract would take any payment that may be required and issue a logging instruction so that its existence may be proven through a Merkle proof and an assertion that the corresponding block’s header is valid and canonical.

Of the latter two conditions, validity is perhaps the most straightforward to prove. In principle, the only requirement is for each Polkadot node needing the proof (i.e. appointed validator nodes) to be running a fully synchronised instance of a standard Ethereum node. Unfortunately, this is itself a rather heavy dependency. A more lightweight method would be to use a simple proof that the header was evaluated correctly, supplying only the part of Ethereum’s state trie needed to properly execute the transactions in the block and check that the logs (contained in the block receipt) are valid. Such “SPV-like”6 proofs may yet require a substantial amount of information; conveniently, they would typically not be needed at all: a bond system inside Polkadot would allow bonded third parties to submit headers at the risk of losing their bond should some other third party (such as a “fisherman”, see 6.2.3) provide a proof that the header is invalid (specifically, that the state root or receipt roots were impostors).

On a non-finalising PoW network like Ethereum, canonicality is impossible to prove conclusively. To address this, applications that attempt to rely on any kind of chain-dependent cause-effect wait for a number of “confirmations”, or until the dependent transaction is at some particular depth within the chain.
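The Merkle-proof check a Polkadot-side validator would perform on a forwarded break-out log might look as follows. Note that Ethereum’s receipts actually live in a Merkle-Patricia trie keyed by transaction index; a plain binary Merkle tree over SHA-256 is used here purely to illustrate the inclusion-proof mechanics:

```python
# Simplified inclusion proof: commit to a list of logs with a binary
# Merkle tree, then prove one log's membership against the root alone.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])            # duplicate an odd tail
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves: list, index: int) -> list:
    """Sibling hashes from leaf `index` up to the root, each tagged with
    whether the proven node sits on the right of its pair."""
    layer = [h(leaf) for leaf in leaves]
    proof = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        proof.append((index % 2, layer[index ^ 1]))
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list, root: bytes) -> bool:
    node = h(leaf)
    for node_was_right, sibling in proof:
        node = h(sibling + node) if node_was_right else h(node + sibling)
    return node == root

# The header commits only to `root`; the proof convinces a light verifier
# that the break-out log was emitted, without the full block.
logs = [b"log0", b"breakout: send to parachain 3", b"log2", b"log3"]
root = merkle_root(logs)
assert verify(logs[1], merkle_proof(logs, 1), root)
assert not verify(b"forged", merkle_proof(logs, 1), root)
```

The proof is logarithmic in the number of logs, which is what makes verification cheap for a Polkadot validator that holds only Ethereum headers.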
On Ethereum, this depth varies from 1 block for the least valuable transactions with no known network issues, to 1200 blocks as was the case during the initial Frontier release for exchanges. On the stable “Homestead” network, this figure sits at 120 blocks for most exchanges, and we would likely take a similar parameter.

So we can imagine our Polkadot-side Ethereum-interface to have some simple functions: to be able to accept a new header from the Ethereum network and validate the PoW; to be able to accept some proof that a particular log was emitted by the Ethereum-side break-out contract for a header of sufficient depth (and forward the corresponding message within Polkadot); and finally to be able to accept proofs that a previously accepted but not-yet-enacted header contains an invalid receipt root.

6 SPV refers to Simplified Payment Verification in Bitcoin, and describes a method for clients to verify transactions while keeping only a copy of all block headers of the longest PoW chain.

To actually get the Ethereum header data itself (and any SPV proofs or validity/canonicality refutations) into the Polkadot network, an incentivisation for forwarding

data is needed. This could be as simple as a payment (funded from fees collected on the Ethereum side) paid to anyone able to forward a useful block whose header is valid. Validators would be called upon to retain information relating to the last few thousand blocks in order to be able to manage forks, either through some protocol-intrinsic means or through a contract maintained on the relay-chain.

5.6. Polkadot and Bitcoin. Bitcoin interoperation presents an interesting challenge for Polkadot: a so-called “two-way peg” would be a useful piece of infrastructure to have on the side of both networks. However, due to the limitations of Bitcoin, providing such a peg securely is a non-trivial undertaking. Delivering a transaction from Bitcoin to Polkadot can in principle be done with a process similar to that for Ethereum: a “break-out address” controlled in some way by the Polkadot validators could receive transferred tokens (and data sent alongside them). SPV proofs could be provided by incentivised oracles and, together with a confirmation period, a bounty given for identifying non-canonical blocks implying the transaction has been “double-spent”. Any tokens then owned in the “break-out address” would, in principle, be controlled by those same validators for later dispersal.

The problem, however, is how the deposits can be securely controlled by a rotating validator set. Unlike Ethereum, which is able to make arbitrary decisions based upon combinations of signatures, Bitcoin is substantially more limited, with most clients accepting only multi-signature transactions with a maximum of 3 parties. Extending this to 36, or indeed thousands as might ultimately be desired, is impossible under the current protocol. One option is to alter the Bitcoin protocol to enable such functionality; however, so-called “hard forks” in the Bitcoin world are difficult to arrange, judging by recent attempts.
One possibility is the use of threshold signatures: cryptographic schemes that allow a singly identifiable public key to be effectively controlled by multiple secret “parts”, some or all of which must be utilised to create a valid signature. Unfortunately, threshold signatures compatible with Bitcoin’s ECDSA are computationally expensive to create and of polynomial complexity. Other schemes such as Schnorr signatures provide far lower costs; however, the timeline on which they may be introduced into the Bitcoin protocol is uncertain.

Since the ultimate security of the deposits rests with a number of bonded validators, one other option is to reduce the multi-signature key-holders to only a heavily bonded subset of the total validators, such that threshold signatures become feasible (or, at worst, Bitcoin’s native multi-signature is possible). This of course reduces the total amount of bonds that could be deducted in reparations should the validators behave illegally; however, this is a graceful degradation, simply setting an upper limit on the amount of funds that can securely run between the two networks (or indeed, on the percentage losses should an attack from the validators succeed). As such, we believe it not unrealistic to place a reasonably secure Bitcoin interoperability “virtual parachain” between the two networks, though this is nonetheless a substantial effort with an uncertain timeline, quite possibly requiring the cooperation of the stakeholders within that network.
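The graceful-degradation argument above can be made concrete: choosing the k most heavily bonded validators as Bitcoin key-holders bounds the reparations available, and hence the value that can safely sit in the break-out address. The subset size and bond values below are illustrative, not proposed parameters:

```python
# Sketch of the bonded-subset trade-off: a smaller key-holder set makes
# (threshold or native) multi-signature feasible, at the price of a lower
# ceiling on the value that is fully covered by slashable bonds.

def keyholder_subset(bonds: dict, k: int):
    """Return (chosen validators, secure value limit) when only the k
    most heavily bonded validators hold the Bitcoin keys."""
    chosen = sorted(bonds, key=bonds.get, reverse=True)[:k]
    # Reparations cannot exceed the chosen validators' combined bonds, so
    # this sum is the upper limit on securely pegged funds.
    secure_limit = sum(bonds[v] for v in chosen)
    return chosen, secure_limit

bonds = {"v1": 900, "v2": 50, "v3": 400, "v4": 120, "v5": 700}
chosen, limit = keyholder_subset(bonds, 3)
assert set(chosen) == {"v1", "v3", "v5"}
assert limit == 2000
```

Funds beyond `secure_limit` are not irretrievably lost in an attack, but losses above it would be uncovered by slashing, which is the sense in which the degradation is graceful.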

设计概述

本节旨在简要概述 系统作为一个整体。更彻底的探索 系统在后面的部分中给出。 5.1.共识。在中继链上,Polkadot实现了 就一组共同商定的有效规则达成低级别共识 通过现代异步拜占庭容错 (BFT) 算法进行阻止。算法将受到启发 通过简单的 Tendermint [11] 和更多 涉及蜜獾BFT [14]。后者提供了一个 对任意的问题达成有效且容错的共识 有缺陷的网络基础设施,给定一组大多良性的权威或 validators。 对于权威证明(PoA)风格的网络来说,仅此一点 就足够了,但是 Polkadot 被认为是 也可以作为完全开放和公共的网络进行部署 没有任何特定组织或信任的情况 维护它所需的权限。 因此我们需要一个 确定一组 validator 并进行激励的方法 他们说实话。为此,我们利用基于 PoS 的选择 标准。 5.2.证明赌注。我们假设网络 将有一些方法来衡量“赌注”的程度 任何特定帐户都有。 为了便于比较 预先存在的系统,我们称之为测量单位 “tokens”。不幸的是,这个词对于 有很多原因,尤其是简单的标量 与账户相关的价值,没有概念 个性。 我们想象 validator 很少被选举(最多 每天一次,但可能少至每季度一次), 通过指定股权证明(NPoS)计划。激励可以通过按比例分配来实现Polkadot:异构多链框架的愿景 草案1 6 继电器 链条 验证者群体 (每个颜色由其 指定平行链) 交易 (提交者: 外部演员) 平行链 桥 虚拟平行链 (例如 Ethereum) 平行链 平行链 队列和 I/O 传播交易 阻止候选人提交 二阶 中继链 平行链社区 账户 入境交易 出境交易 链间交易 (由 validators 管理) 校订者 传播块 渔夫 图 2. Polkadot 系统的概要示意图。这显示了整理者收集和传播用户交易,以及向渔民和 validator 传播候选区块。它还 显示账户如何通过中继链发布在其平行链中执行的交易 然后进入另一个平行链,可以将其解释为那里账户的交易。 来自 token 基地扩张的资金(最多 100% 每年,尽管更有可能在 10% 左右)以及 收取的任何交易费用。虽然基础货币扩张通常会导致通货膨胀,但由于所有 token 所有者 将有公平的参与机会,任何token持有者都不需要遭受其价值的减少 随着时间的推移,只要他们乐意接受 在共识机制中的作用。特定比例 token 的目标将是 staking 进程;的 有效的 token 碱基扩展将通过以下方式进行调整 以市场为基础的机制来实现这一目标。 验证者的权益与他们紧密相连;退出 在 validator 的职责终止后很长一段时间(可能大约 3 个月),validator 的债券仍然有效。这么长 债券清算期允许未来的不当行为 受到惩罚,直到链的定期检查点为止。 不当行为会导致惩罚,例如减少 奖励,或者在故意损害的情况下 网络的完整性,validator 失去部分或全部 向其他validator、线人或利益相关者提供股份 作为一个整体(通过燃烧)。例如,validator 谁试图批准分叉的两个分支(有时 被称为“短程”攻击)可以被识别并且 按后一种方式处罚。 远程“无利害关系”攻击4可以通过一个简单的“检查点”闩锁来规避,该闩锁可以防止超过一个的危险链重组。 特定的链深度。 确保新同步的客户端 不能被骗到错误的链上,常规的 “硬分叉”将会发生(最多在同一时期) validators 的债券清算)将最近的检查点块 hashes 硬编码到客户端中。这与进一步减少足迹的“有限链长”措施或 创世块的定期重置。 5.3.平行链和收集者。每个平行链都会获得 与中继链类似的安全功能: 的 平行链的标头被密封在中继链区块内 确保确认后不可能进行重组或“双重支出”。这与 Bitcoin 的侧链和合并挖矿提供的安全保证类似。然而,Polkadot 也提供了平行链状态转换有效的有力保证。这个 通过将 validator 集合以加密方式随机分割成子集而发生;每一个子集 平行链,每个块的子集可能不同。这个 设置通常意味着平行链的区块时间将 至少与中继链一样长。具体的 确定分区的方法超出了范围 4这种攻击是对手从创世区块开始打造一条全新的历史链的地方。通过控制一个 尽管他们的股权比例相对较小,但他们能够相对于所有其他人逐步增加自己的股权比例 利益相关者,因为他们是另类历史中唯一的积极参与者。由于创作不存在内在的物理限制 区块(与必须花费相当真实的计算能量的 PoW 不同),他们能够在 相对较短的时间跨度,并有可能使其成为最长和最好的,接管网络的规范状态。Polkadot:异构多链框架的愿景 草案1 7 本文件的但可能基于 类似于 RanDAO [19] 
Such subsets of validators are required to provide a parachain block candidate which is guaranteed valid (on pain of bond confiscation). Validity revolves around two important points: first, that the block is intrinsically valid—that all state transitions were executed faithfully and that all external data referenced (i.e. transactions) is valid for inclusion. Second, that any data which is extrinsic to the candidate, such as those external transactions, has sufficiently high availability that participants are able to download it and execute the block manually.5 Validators may provide only a “null” block containing no external “transactions” data, but run the risk of a reduced reward if they do. They work alongside a parachain gossip protocol with collators—individuals who collate transactions into blocks and provide a non-interactive, zero-knowledge proof that the block constitutes a valid child of its parent (and take any transaction fees for their trouble).

It is left to parachain protocols to specify their own means of spam prevention: there is no fundamental notion of “compute-resource metering” or “transaction fee” imposed by the relay-chain. Nor is there any direct enforcement of this by the relay-chain protocol (though it is unlikely that stakeholders would choose to adopt a parachain which did not provide a decent mechanism). This is an explicit recognition of the possibility of chains unlike Ethereum—e.g. a Bitcoin-like chain with a far simpler fee model, or some other spam-prevention model yet to be proposed.

Polkadot’s relay-chain itself will probably exist as an Ethereum-like accounts-and-state chain, possibly an EVM derivative. Since relay-chain nodes will be required to do substantial other processing, transaction throughput will be minimised partly through large transaction fees and, should our research models require it, a block-size limit.

5.4. Interchain Communication. The final critical ingredient of Polkadot is interchain communication. Since parachains can have some sort of information channel between them, we allow ourselves to consider Polkadot a scalable multi-chain. In the case of Polkadot, the communication is as simple as can be: transactions executing in a parachain are (according to the logic of that chain) able to effect the dispatch of a transaction into a second parachain or, potentially, the relay-chain. Like external transactions on production blockchains, they are fully asynchronous and have no intrinsic ability to return any kind of information back to their origin.

Figure 3. A basic schematic showing the main parts of the routing for posted transactions (“posts”): an account sends a post by placing an entry in its parachain’s egress Merkle tree, where proof of the post is stored; a routing reference is placed in the destination parachain’s ingress Merkle tree, from which the entry is removed when the receiving account takes delivery.

To ensure minimal implementation complexity, minimal risk and minimal straight-jacketing of future parachain architectures, these interchain transactions are effectively indistinguishable from standard externally-signed transactions. The transaction has an origin segment, providing the ability to identify a parachain, and an address, which may be of arbitrary size. Unlike common current systems such as Bitcoin and Ethereum, interchain transactions do not come with any kind of associated fee “payment”; any such payment must be managed through negotiation logic on the source and destination parachains. A system such as that proposed for Ethereum’s Serenity release [7] would be one simple means of managing such cross-chain resource payment, though we assume others may come to the fore in due course.

Interchain transactions are resolved using a simple queuing mechanism based around a Merkle tree to ensure fidelity. It is the task of the relay-chain maintainers to move transactions from the output queue of one parachain into the input queue of the destination parachain. The passed transactions are referenced on the relay-chain but are not relay-chain transactions themselves. To prevent one parachain from spamming another with transactions, a transaction may be sent only if the destination’s input queue is not too large at the end of the previous block. If the input queue is too large after block processing, it is considered “saturated” and no transactions may be routed to it in subsequent blocks until it drops back below the limit. These queues are administered on the relay-chain, allowing parachains to determine each other’s saturation status; this way a failed attempt to post a transaction to a stalled destination may be reported synchronously. (Though since no return path exists, if a secondary transaction failed for that reason it could not be reported back to the original caller, and some other means of recovery would have to take place.)

5.5. Polkadot and Ethereum. Due to Ethereum’s Turing completeness, we expect there is ample opportunity for Polkadot and Ethereum to interoperate with each other, at least within some easily deducible security bounds. In short, we envision that transactions from Polkadot can be signed by validators and then fed into Ethereum, where they can be interpreted and enacted by a transaction-forwarding contract.

5 Such a task may be shared between validators, or may become the designated task of a set of heavily bonded validators known as availability guarantors.

In the other direction, we foresee the use of specially formatted logs (events) coming from a “break-out contract” to allow a swift verification that a particular message should be forwarded.

5.5.1. Polkadot to Ethereum. Through the choice of a BFT consensus mechanism whose validators are formed from a set of stakeholders determined through an approval-voting mechanism, we are able to achieve consensus with an infrequently changing and modest number of validators. In a system with a total of 144 validators, a block time of 4 seconds and a 900-block finality (allowing for malicious behaviour such as double-voting to be reported, punished and repaired), the validity of a block can reasonably be considered proven through as few as 97 signatures (two-thirds of 144, plus one) and a subsequent 60-minute verification period in which no challenges are deposited.

Ethereum is able to host a “break-in contract” which can maintain the 144 signatories and be controlled by them. Since elliptic-curve digital signature (ECDSA) recovery takes only 3,000 gas under the EVM, and since we would probably only want validation to happen on a super-majority of validators (rather than full unanimity), the base cost to Ethereum of confirming that an instruction was properly validated as coming from the Polkadot network would be no more than 300,000 gas—a mere 6% of the total block gas limit of 5.5M. Increasing the number of validators (as would be necessary for dealing with dozens of chains) inevitably increases this cost; however, Ethereum’s transaction bandwidth is broadly expected to grow as the technology matures and infrastructure improves. Together with the fact that not all validators need be involved (e.g. only the highest-staked validators may be called upon for such a task), the limits of this mechanism extend reasonably well.

Assuming a daily rotation of such validators (which is fairly conservative—weekly or even monthly may be acceptable), the cost to the network of maintaining this Ethereum-forwarding bridge would be around 540,000 gas per day or, at present gas prices, $45 per year. A basic transaction forwarded alone over the bridge would cost around $0.11; additional contract computation would, of course, cost more. By buffering and bundling transactions together, the break-in authorisation costs can easily be shared, reducing the cost per transaction substantially; if 20 transactions were required before forwarding, the cost of forwarding a basic transaction would fall to around $0.01.

One interesting, and cheaper, alternative to this multi-signature contract model is to use threshold signatures in order to achieve multi-lateral ownership semantics. While threshold signature schemes for ECDSA are computationally expensive, those for other schemes such as Schnorr signatures are very reasonable. Ethereum plans to introduce primitives in the upcoming Metropolis hard-fork which would make such schemes cheap to use. If such a means were able to be utilised, the gas costs for forwarding a Polkadot transaction into the Ethereum network would be reduced dramatically, to a near-zero overhead above the basic costs of validating the signature and executing the underlying transaction.

In this model, Polkadot’s validator nodes would have little to do other than sign messages. To get the transactions actually routed onto the Ethereum network, we assume either that validators themselves would also reside on the Ethereum network or, more likely, that small bounties be offered to the first actor who forwards the message onto the network (the bounty could trivially be paid to the transaction originator).

5.5.2.
Ethereum to Polkadot. Getting transactions forwarded from Ethereum to Polkadot uses the simple notion of logs. When an Ethereum contract wishes to dispatch a transaction to a particular parachain of Polkadot, it need simply call into a special “break-out contract”. The break-out contract would take any payment that may be required and issue a logging instruction, so that its existence may be proven through a Merkle proof together with an assertion that the corresponding block’s header is valid and canonical.

Of the latter two conditions, validity is perhaps the more straightforward to prove. In principle, the only requirement is for each Polkadot node needing the proof (i.e. the appointed validator nodes) to run a fully synchronised instance of a standard Ethereum node. Unfortunately, this is itself a rather heavy dependency. A more lightweight method would be to use a simple proof that the header was evaluated correctly, supplying only the portion of Ethereum’s state trie needed to execute the block’s transactions properly, and to check that the logs (contained in the block receipts) are valid. Such “SPV-like”6 proofs may still require a substantial amount of information; conveniently, they would typically not be needed at all: a bond system within Polkadot would allow bonded third parties to submit headers at the risk of losing their bond should some other party (e.g. a “fisherman”, see 6.2.3) provide a proof that the header is invalid (specifically, that the state root or receipt root is an imposter).

On a non-finalising PoW network like Ethereum, canonicality can never be proven conclusively. To address this, applications that attempt to rely on any kind of chain-dependent causality wait for a number of “confirmations”, or until the dependent transaction is at some particular depth within the chain. On Ethereum, this depth varies from 1 block, for the least valuable transactions with no known network issues, to 1,200 blocks, as was the case for exchanges during the initial Frontier release. On the stable “Homestead” network, this figure sits at 120 blocks for most exchanges, and we would likely take a similar parameter.

So we can imagine our Polkadot-side Ethereum interface to have some simple functions: to be able to accept a new header from the Ethereum network and validate the PoW; to be able to accept some proof that a particular log was emitted by the Ethereum-side break-out contract for a header of sufficient depth (and to forward the corresponding message within Polkadot); and, finally, to be able to accept proofs that a previously accepted but not-yet-enacted header contains an invalid receipt root.

To actually get the Ethereum header data itself (and any SPV proofs or validity/canonicality refutations) into the Polkadot network, an incentivisation for forwarding data is needed.

6 SPV refers to Simplified Payment Verification in Bitcoin, and describes a method for clients to verify transactions while keeping only a copy of all blocks’ headers of the longest PoW chain.
This could be as simple as a payment (funded from fees collected on the Ethereum side) paid to anyone able to forward a useful block whose header is valid. Validators would be required to retain information relating to the last few thousand blocks in order to be able to manage forks, either through some protocol-intrinsic means or through a contract maintained on the relay-chain.

5.6. Polkadot and Bitcoin. Bitcoin interoperation presents an interesting challenge for Polkadot: a so-called “two-way peg” would be a useful piece of infrastructure for both networks. However, due to the limitations of Bitcoin, providing such a peg securely is a non-trivial undertaking. Delivering a transaction from Bitcoin to Polkadot could in principle be done with a process similar to that for Ethereum; a “break-out address” controlled in some way by the Polkadot validators could receive transferred tokens (and any data sent alongside them). SPV proofs could be provided by incentivised oracles and, together with a confirmation period, a bounty given for identifying non-canonical blocks implying that the transaction has been “double-spent”. Any tokens then owned by the break-out address would, in principle, be controlled by those same validators for later dispersal.

The problem, however, is how the deposits can be securely controlled by a rotating validator set. Unlike Ethereum, which is able to make arbitrary decisions based upon combinations of signatures, Bitcoin is substantially more limited, with most clients accepting only multi-signature transactions of at most 3 parties. Extending this to 36, or indeed thousands as might ultimately be desired, is impossible under the current protocol. One option is to alter the Bitcoin protocol to enable such functionality; however, so-called “hard forks” are difficult to arrange in the Bitcoin world, judging by recent attempts. One possibility is the use of threshold signatures: cryptographic schemes which allow a single identifiable public key to be effectively controlled by multiple secret “parts”, some or all of which must be used to create a valid signature. Unfortunately, threshold signatures compatible with Bitcoin’s ECDSA are computationally expensive to create, with polynomial complexity. Other schemes, such as Schnorr signatures, carry far lower costs, but the timeline on which they may be introduced into the Bitcoin protocol is uncertain.

Since the ultimate security of the deposits rests with a number of bonded validators, another option is to reduce the multi-signature key-holders to only a heavily bonded subset of the total validators, such that threshold signatures become feasible (or, at worst, Bitcoin’s native multi-signature is possible). This of course reduces the total bond that could be deducted in reparations should the validators act illegally, but it is a graceful degradation, simply placing a limit on the amount of funds that can securely run between the two networks (or indeed, on the percentage losses should an attack from the validators succeed).

As such, we believe it not unrealistic to place a reasonably secure Bitcoin-interoperability “virtual parachain” between the two networks, though it would nonetheless be a substantial effort with an uncertain timeline, quite possibly requiring the cooperation of stakeholders within that network.
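The threshold-signature idea above—a single identifiable key effectively controlled by multiple secret “parts”—can be illustrated with a plain Shamir secret-sharing sketch. This is a hypothetical stand-in, not a Bitcoin-compatible threshold-ECDSA scheme (real threshold schemes never reconstruct the secret in one place); it only demonstrates the k-of-n control semantics:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic takes place in GF(PRIME)

def split_secret(secret, threshold, shares):
    """Split `secret` into `shares` parts; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover_secret(parts):
    """Lagrange interpolation at x = 0 recovers the polynomial's constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(parts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(parts):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With a 3-of-5 split, any three parts reconstruct the secret while any two reveal nothing; the degradation discussed above (restricting key-holders to a heavily bonded subset) corresponds to choosing a smaller n for which the scheme stays practical.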

Protocol in Detail


The protocol can be roughly broken down into three parts: the consensus mechanism, the parachain interface and interchain transaction routing. 6.1. Relay-chain Operation. The relay-chain will likely be a chain broadly similar to Ethereum in that it is state-based with the state mapping address to account information, mainly balances and (to prevent replays) a transaction counter. Placing accounts here fulfils one purpose: to provide accounting for which identity possesses what amount of stake in the system.7 There will be notable differences, though:
• Contracts cannot be deployed through transactions; following from the desire to avoid application functionality on the relay-chain, it will not support public deployment of contracts.
• Compute resource usage (“gas”) is not accounted; since the only functions available for public usage will be fixed, the rationale behind gas accounting no longer holds. As such, a flat fee will apply in all cases, allowing for more performance from any dynamic code execution that may need to be done and a simpler transaction format.
• Special functionality is supported for listed contracts that allows for auto-execution and network-message outputs.
In the event that the relay-chain has a VM and it be based around the EVM, it would have a number of modifications to ensure maximal simplicity. It would likely have a number of built-in contracts (similar to those at addresses 1-4 in Ethereum) to allow for platform-specific duties to be managed including a consensus contract, a validator contract and a parachain contract. If not the EVM, then a WebAssembly [2] (wasm) backend is the most likely alternative; in this case the overall structure would be similar, but there would be no need for the built-in contracts with Wasm being a viable target for general purpose languages rather than the immature and limited languages for the EVM.
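The relay-chain account model described above—balances, a replay-preventing transaction counter, and a flat fee instead of gas metering—can be sketched as follows. This is a minimal toy model; the names, the fee value, and the `apply_transfer` helper are assumptions for illustration, not part of the specification:

```python
from dataclasses import dataclass

FLAT_FEE = 10  # a flat per-transaction fee; gas is not metered on the relay-chain

@dataclass
class Account:
    balance: int = 0
    nonce: int = 0  # transaction counter, preventing replays

def apply_transfer(state, sender, recipient, value, nonce):
    """Apply one externally-signed transfer to relay-chain state.

    Returns True if applied; a wrong nonce (replay) or insufficient funds
    rejects the transaction without touching the state."""
    src = state.setdefault(sender, Account())
    if nonce != src.nonce or src.balance < value + FLAT_FEE:
        return False
    src.balance -= value + FLAT_FEE
    src.nonce += 1
    state.setdefault(recipient, Account()).balance += value
    return True
```

Replaying the same signed transfer fails on the nonce check, which is the entire purpose of keeping the counter in account state.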
Other likely deviations from the present Ethereum protocol are quite possible, for example a simplification of the transaction-receipt format allowing for the parallel execution of non-conflicting transactions within the same block, as proposed for the Serenity series of changes. It is possible, though unlikely, that a Serenity-like “pure” chain be deployed as the relay-chain, allowing for a particular contract to manage things like the staking token balances rather than making that a fundamental part of the chain’s protocol. At present, we feel it is unlikely this will offer a sufficiently great protocol simplification to be worth the additional complexity and uncertainty involved in developing it. 7As a means of representing the amount a given holder is responsible for the overall security of the system, these stake accounts will inevitably encode some economic value. However, it should be understood that since there is no intention that such values be used in any way for the purpose of exchanging for real-world goods and services, it should be accordingly noted that the tokens not be likened to currency and as such the relay-chain retain its nihilistic philosophy regarding applications.

There are a number of small pieces of functionality required for administrating the consensus mechanism, validator set, validation mechanism and parachains. These could be implemented together under a monolithic protocol. However, for reasons of auguring modularity, we describe these as “contracts” of the relay-chain. This should be taken to mean that they are objects (in the sense of object-orientated programming) managed by the relay-chain’s consensus mechanism, but not necessarily that they are defined as programs in EVM-like opcodes, nor even that they be individually addressable through the account-system. 6.2. Staking Contract. This contract maintains the validator set. It manages:
• which accounts are currently validators;
• which are available to become validators at short notice;
• which accounts have placed stake nominating to a validator;
• properties of each including staking volume, acceptable payout-rates and addresses and short-term (session) identities.
It allows an account to register a desire to become a bonded validator (along with its requirements), to nominate to some identity, and for preexisting bonded validators to register their desire to exit this status. It also includes the machinery itself for the validation and canonicalisation mechanism. 6.2.1. Stake-token Liquidity. It is generally desirable to have as much of the total staking tokens as possible to be staked within the network maintenance operations since this directly ties the network security to the overall “market capitalisation” of the staking token. This can easily be incentivised through inflating the currency and handing out the proceeds to those who participate as validators. However, to do so presents a problem: if the token is locked in the Staking Contract under punishment of reduction, how can a substantial portion remain sufficiently liquid in order to allow price discovery?
One answer to this is allowing a straightforward derivative contract, securing fungible tokens on an underlying staked token. This is difficult to arrange in a trust-free manner. Furthermore, these derivative tokens cannot be treated equally for the same reason that different Eurozone governments’ bonds are not fungible: there is a chance of the underlying asset failing and becoming worthless. With Eurozone governments, there could be a default. With validator-staked tokens, the validator may act maliciously and be punished. Keeping with our tenets, we elect for the simplest solution: not all tokens be staked. This would mean that some proportion (perhaps 20%) of tokens will forcibly remain liquid. Though this is imperfect from a security perspective, it is unlikely to make a fundamental difference in the security of the network; 80% of the reparations possible from bond-confiscations would still be able to be made compared to the “perfect case” of 100% staking. The ratio between staked and liquid tokens can be targeted fairly simply through a reverse auction mechanism. Essentially, token holders interested in being a validator would each post an offer to the staking contract stating the minimum payout-rate that they would require to take part. At the beginning of each session (sessions would happen regularly, perhaps as often as once per hour) the validator slots would be filled according to each would-be validator’s stake and payout rate. One possible algorithm for this would be to take those with the lowest offers who represent a stake no higher than the total stake targeted divided by the number of slots and no lower than a lower bound of half that amount. If the slots cannot be filled, the lower bound could be repeatedly reduced by some factor in order to satisfy. 6.2.2. Nominating. It is possible to trustlessly nominate one’s staking tokens to an active validator, giving them the responsibility of validators’ duties.
Nominating works through an approval-voting system. Each would-be nominator is able to post an instruction to the staking contract expressing one or more validator identities under whose responsibility they are prepared to entrust their bond. Each session, nominators’ bonds are dispersed to be represented by one or more validators. The dispersal algorithm optimises for a set of validators of equivalent total bonds. Nominators’ bonds become under the effective responsibility of the validator and gain interest or suffer a punishment-reduction accordingly. 6.2.3. Bond Confiscation/Burning. Certain validator behaviour results in a punitive reduction of their bond. If the bond is reduced below the allowable minimum, the session is prematurely ended and another started. A non-exhaustive list of punishable validator misbehaviour includes:
• being part of a parachain group unable to provide consensus over the validity of a parachain block;
• actively signing for the validity of an invalid parachain block;
• inability to supply egress payloads previously voted as available;
• inactivity during the consensus process;
• validating relay-chain blocks on competing forks.
Some cases of misbehaviour threaten the network’s integrity (such as signing invalid parachain blocks and validating multiple sides of a fork) and as such result in effective exile through the total reduction of the bond. In other, less serious cases (e.g. inactivity in the consensus process) or cases where blame cannot be precisely allotted (being part of an ineffective group), a small portion of the bond may instead be fined. In the latter case, this works well with sub-group churn to ensure that malicious nodes suffer substantially more loss than the collaterally-damaged benevolent nodes. In some cases (e.g. multi-fork validation and invalid sub-block signing) validators cannot themselves easily detect each others’ misbehaviour since constant verification of each parachain block would be too arduous a task.
Here it is necessary to enlist the support of parties external to the validation process to verify and report such misbehaviour. The parties get a reward for reporting such activity; their term, “fishermen” stems from the unlikeliness of such a reward. Since these cases are typically very serious, we envision that any rewards can easily be paid from the confiscated bond. In general we prefer to balance burning (i.e. reduction to nothing) with reallocation, rather than attempting wholesale reallocation. This has the effect of

increasing the overall value of the token, compensating the network in general to some degree rather than the specific party involved in discovery. This is mainly as a safety mechanism: the large amounts involved could lead to extreme and acute behaviour incentivisation were they all bestowed on a single target. In general, it is important that the reward is sufficiently large to make verification worthwhile for the network, yet not so large as to offset the costs of fronting a well-financed, well-orchestrated “industrial-level” criminal hacking attack on some unlucky validator to force misbehaviour. In this way, the amount claimed should generally be no greater than the direct bond of the errant validator, lest a perverse incentive arise of misbehaving and reporting oneself for the bounty. This can be combated either explicitly through a minimum direct bond requirement for being a validator or implicitly by educating nominators that validators with little bond deposited have no great incentive to behave well. 6.3. Parachain Registry. Each parachain is defined in this registry. It is a relatively simple database-like construct and holds both static and dynamic information on each chain. Static information includes the chain index (a simple integer), along with the validation protocol identity, a means of distinguishing between the different classes of parachain so that the correct validation algorithm can be run by validators consigned to putting forward a valid candidate. An initial proof-of-concept would focus on placing the new validation algorithms into clients themselves, effectively requiring a hard fork of the protocol each time an additional class of chain were added. Ultimately, though, it may be possible to specify the validation algorithm in a way both rigorous and efficient enough that clients are able to effectively work with new parachains without a hard-fork.
One possible avenue to this would be to specify the parachain validation algorithm in a well-established, natively-compiled, platform-neutral language such as WebAssembly. Additional research is necessary to determine whether this is truly feasible, however if so, it could bring with it the tremendous advantage of banishing hard-forks for good. Dynamic information includes aspects of the transaction routing system that must have global agreement such as the parachain’s ingress queue (described in section 6.6). The registry is able to have parachains added only through full referendum voting; this could be managed internally but would more likely be placed in an external referendum contract in order to facilitate re-usage under more general governance components. The parameters to voting requirements (e.g. any quorum required, majority required) for registration of additional chains and other, less formal system upgrades will be set out in a “master constitution” but are likely to follow a fairly traditional path, at least initially. The precise formulation is out of scope for the present work, but e.g. a two-thirds supermajority to pass with more than one-third of total system stake voting positively may be a sensible starting point. Additional operations include the suspension and removal of parachains. Suspension would hopefully never happen, however it is designed to be a safeguard lest there be some intractable problem in a parachain’s validation system. The most obvious instance where it might be needed is a consensus-critical difference between implementations leading validators to be unable to agree on the validity of blocks. Validators would be encouraged to use multiple client implementations in order that they are able to spot such a problem prior to bond confiscation. Since suspension is an emergency measure, it would be under the auspices of the dynamic validator-voting rather than a referendum.
Re-instating would be possible either by the validators or by referendum. The removal of parachains altogether would come only after a referendum, and would require a substantial grace period to allow an orderly transition to either a standalone chain or to become part of some other consensus-system. The grace period would likely be of the order of months and is likely to be set out on a per-chain basis in the parachain registry in order that different parachains can enjoy different grace periods according to their need. 6.4. Sealing Relay Blocks. Sealing refers, in essence, to the process of canonicalisation; that is, a basic data transform which maps the original into something fundamentally singular and meaningful. Under a PoW chain, sealing is effectively a synonym for mining. In our case, it involves the collection of signed statements from validators over the validity, availability and canonicality of a particular relay-chain block and the parachain blocks that it represents. The mechanics of the underlying BFT consensus algorithm are out of scope for the present work. We will instead describe it using a primitive which assumes a consensus-creating state-machine. Ultimately we expect to be inspired by a number of promising BFT consensus algorithms in the core; Tangaroa [9] (a BFT variant of Raft [16]), Tendermint [11] and HoneyBadgerBFT [14]. The algorithm will have to reach an agreement on multiple parachains in parallel, thus differing from the usual blockchain consensus mechanisms. We assume that once consensus is reached, we are able to record the consensus in an irrefutable proof which can be provided by any of the participants to it.
We also assume that misbehaviour within the protocol can be generally reduced to a small group containing misbehaving participants to minimise the collateral damage when dealing out punishment.8 The proof, which takes the form of our signed statements, is placed in the relay-chain block’s header together with certain other fields, not least the relay-chain’s state-trie root and transaction-trie root. The sealing process takes place under a single consensus-generating mechanism addressing both the relay-chain’s block and the parachains’ blocks which make up part of the relay’s content: parachains are not separately “committed” by their sub-groups and then collated later. This results in a more complex process for the relay-chain, but allows us to complete the entire system’s consensus in a single stage, minimising latency and allowing for quite complex data-availability requirements which are helpful for the routing process below. 8 Existing PoS-based BFT consensus schemes such as Tendermint BFT and the original Slasher fulfill these assertions.

The state of each participant’s consensus machine may be modelled as a simple (2-dimensional) table. Each participant (validator) has a set of information, in the form of signed statements (“votes”) from other participants, regarding each parachain block candidate as well as the relay-chain block candidate. The set of information is two pieces of data:
Availability: does this validator have egress transaction-post information from this block so they are able to properly validate parachain candidates on the following block? They may vote either 1 (known) or 0 (not yet known). Once they vote 1, they are committed to voting similarly for the rest of this process. Later votes that do not respect this are grounds for punishment.
Validity: is the parachain block valid and is all externally-referenced data (e.g. transactions) available? This is only relevant for validators assigned to the parachain on which they are voting. They may vote either 1 (valid), -1 (invalid) or 0 (not yet known). Once they vote non-zero, they are committed to voting this way for the rest of the process. Later votes that do not respect this are grounds for punishment.
All validators must submit votes; votes may be resubmitted, qualified by the rules above. The progression of consensus may be modelled as multiple standard BFT consensus algorithms over each parachain happening in parallel. Since these are potentially thwarted by a relatively small minority of malicious actors being concentrated in a single parachain group, the overall consensus exists to establish a backstop, limiting the worst-case scenario from deadlock to merely one or more void parachain blocks (and a round of punishment for those responsible).
The basic rules for validity of the individual blocks (that allow the total set of validators as a whole to come to consensus on it becoming the unique parachain candidate to be referenced from the canonical relay):
• must have at least two-thirds of its validators voting positively and none voting negatively;
• must have over one-third of validators voting positively to the availability of egress queue information.
If there is at least one positive and at least one negative vote on validity, an exceptional condition is created and the whole set of validators must vote to determine if there are malicious parties or if there is an accidental fork. Aside from valid and invalid, a third kind of vote is allowed, equivalent to voting for both, meaning that the node has conflicting opinions. This could be due to the node’s owner running multiple implementations which do not agree, indicating a possible ambiguity in the protocol. After all votes are counted from the full validator set, if the losing opinion has at least some small proportion (to be parameterised; at most half, perhaps significantly less) of the votes of the winning opinion, then it is assumed to be an accidental parachain fork and the parachain is automatically suspended from the consensus process. Otherwise, we assume it is a malicious act and punish the minority who were voting for the dissenting opinion. The conclusion is a set of signatures demonstrating canonicality. The relay-chain block may then be sealed and the process of sealing the next block begun. 6.5. Improvements for Sealing Relay Blocks. While this sealing method gives strong guarantees over the system’s operation, it does not scale out particularly well since every parachain’s key information must have its availability guaranteed by over one-third of all validators. This means that every validator’s responsibility footprint grows as more chains are added.
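The voting table and the inclusion thresholds above can be sketched as follows. The class and method names are assumptions for illustration; the commitment rules (an availability vote of 1, or any non-zero validity vote, may not later change) and the two-thirds/one-third thresholds come directly from the text:

```python
class ParachainVotes:
    """Tracks validators' signed statements for one parachain block candidate."""

    def __init__(self, assigned, total_validators):
        self.assigned = set(assigned)  # validators assigned to this parachain
        self.total = total_validators  # size of the full validator set
        self.avail = {}                # validator -> 0 or 1
        self.valid = {}                # validator -> -1, 0 or 1

    def vote_availability(self, v, vote):
        if self.avail.get(v) == 1 and vote != 1:
            raise ValueError("availability vote is committed; grounds for punishment")
        self.avail[v] = vote

    def vote_validity(self, v, vote):
        if v not in self.assigned:
            raise ValueError("validity votes come only from assigned validators")
        prior = self.valid.get(v, 0)
        if prior != 0 and vote != prior:
            raise ValueError("validity vote is committed; grounds for punishment")
        self.valid[v] = vote

    def includable(self):
        """May the candidate be referenced from the canonical relay block?
        Requires >= 2/3 of assigned validators voting valid with none voting
        invalid, and > 1/3 of all validators attesting egress availability."""
        pos = sum(1 for x in self.valid.values() if x == 1)
        neg = sum(1 for x in self.valid.values() if x == -1)
        have = sum(1 for x in self.avail.values() if x == 1)
        return (neg == 0 and 3 * pos >= 2 * len(self.assigned)
                and 3 * have > self.total)
```

A single negative validity vote vetoes inclusion and (per the text) triggers the exceptional full-set vote, which is outside this sketch.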
While data availability within open consensus networks is essentially an unsolved problem, there are ways of mitigating the overhead placed on validator nodes. One simple solution is to realise that while validators must shoulder the responsibility for data availability, they need not actually store, communicate or replicate the data themselves. Secondary data silos, possibly related to (or even the very same) collators who compile this data, may manage the task of guaranteeing availability with the validators providing a portion of their interest/income in payment. However, while this might buy some intermediate scalability, it still doesn’t help the underlying problem; since adding more chains will in general require additional validators, the ongoing network resource consumption (particularly in terms of bandwidth) grows with the square of the chains, an untenable property in the long-term. Ultimately, we are likely to keep bashing our heads against the fundamental limitation which states that for a consensus network to be considered available and safe, the ongoing bandwidth requirements are of the order of total validators times total input information. This is due to the inability of an untrusted network to properly distribute the task of data storage across many nodes, which sits apart from the eminently distributable task of processing. 6.5.1. Introducing Latency. One means of softening this rule is to relax the notion of immediacy. By requiring 33%+1 validators voting for availability only eventually, and not immediately, we can better utilise exponential data propagation and help even out peaks in data interchange.
A reasonable equality (though unproven) may be:

(1) latency = participants × chains

Under the current model, the size of the system scales with the number of chains to ensure that processing is distributed; since each chain will require at least one validator and we fix the availability attestation to a constant proportion of validators, then participants similarly grows with the number of chains. We end up with:

(2) latency = size²

Meaning that as the system grows, the bandwidth required and the latency until availability is known across the network, which might also be characterised as the number of blocks before finality, increase with its square. This is a substantial growth factor and may turn out to be a notable road-blocker, forcing us into “non-flat” paradigms such as composing several “Polkadots” into a hierarchy for multi-level routing of posts through a tree of relay-chains.
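The quadratic growth in (2) can be made concrete with a toy model; the function name, the unit of "work" ((validator, chain) pairs) and the constant factors are assumptions, not part of the paper's analysis:

```python
def availability_work(chains, validators_per_chain=1, attester_fraction=1/3):
    """Ongoing availability-attestation work, in (validator, chain) pairs.

    Participants grow with the number of chains, and each attesting
    participant must see every chain's egress data, so total work grows
    with the square of the system size."""
    participants = chains * validators_per_chain
    attesters = max(1, int(participants * attester_fraction))
    return attesters * chains
```

Doubling the number of chains quadruples the attestation work, which is exactly the `size²` behaviour that motivates the latency-relaxation and availability-guarantor variants discussed next.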

6.5.2. Public Participation. One more possible direction is to enlist public participation in the process through a micro-complaints system. Similar to the fishermen, there could be external parties to police the validators who claim availability. Their task is to find one who appears unable to demonstrate such availability. In doing so they can lodge a micro-complaint to other validators. PoW or a staked bond may be used to mitigate the sybil attack which would render the system largely useless. 6.5.3. Availability Guarantors. A final route would be to nominate a second set of bonded validators as “availability guarantors”. These would be bonded just as with the normal validators, and may even be taken from the same set (though if so, they would be chosen over a long-term period, at least per session). Unlike normal validators, they would not switch between parachains but rather would form a single group to attest to the availability of all important interchain data. This has the advantage of relaxing the equivalence between participants and chains. Essentially, chains can grow (along with the original chain validator set), whereas the participants, and specifically those taking part in data-availability testament, can remain at the least sub-linear and quite possibly constant. 6.5.4. Collator Preferences. One important aspect of this system is to ensure that there is a healthy selection of collators creating the blocks in any given parachain. If a single collator dominated a parachain then some attacks become more feasible since the likelihood of the lack of availability of external data would be less obvious. One option is to artificially weight parachain blocks in a pseudo-random mechanism in order to favour a wide variety of collators. In the first instance, we would require as part of the consensus mechanism that validators favour parachain block candidates determined to be “heavier”.
Similarly, we must incentivise validators to attempt to suggest the weightiest block they can find—this could be done through making a portion of their reward proportional to the weight of their candidate. To ensure that collators are given a reasonably fair chance of their candidate being chosen as the winning candidate in consensus, we make the specific weight of a parachain block candidate determinate on a random function connected with each collator. For example, taking the XOR distance measure between the collator’s address and some cryptographically-secure pseudo-random number determined close to the point of the block being created (a notional “winning ticket”). This effectively gives each collator (or, more specifically, each collator’s address) a random chance of their candidate block “winning” over all others. To mitigate the sybil attack of a single collator “mining” an address close to the winning ticket and thus being a favourite each block, we would add some inertia to a collator’s address. This may be as simple as requiring them to have a baseline amount of funds in the address. A more elegant approach would be to weight the proximity to the winning ticket with the amount of funds parked at the address in question. While modelling has yet to be done, it is quite possible that this mechanism enables even very small stakeholders to contribute as a collator. 6.5.5. Overweight Blocks. If a validator set is compromised, they may create and propose a block which, though valid, takes an inordinate amount of time to execute and validate. This is a problem since a validator group could reasonably form a block which takes a very long time to execute unless some particular piece of information is already known allowing a short cut, e.g. factoring a large prime. If a single collator knew that information, then they would have a clear advantage in getting their own candidates accepted as long as the others were busy processing the old block.
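The collator “winning ticket” weighting of 6.5.4 can be sketched as follows. Addresses are modelled as 64-bit integers, the ticket is assumed (hypothetically) to be derived from the relay-chain parent hash, and the cap value is illustrative; only the XOR-distance measure and the funds-based inertia come from the text:

```python
import hashlib

def winning_ticket(relay_parent_hash):
    """A notional per-block 'winning ticket' (assumed here to derive from the
    relay-chain parent hash; the paper leaves the exact source open)."""
    return int.from_bytes(hashlib.sha256(relay_parent_hash).digest()[:8], "big")

def candidate_weight(collator_address, funds, ticket, cap=1_000_000):
    """Weight a collator's candidate: proximity of the collator's address to
    the ticket (XOR distance), scaled by funds parked at the address—the
    'inertia' that blunts address-grinding sybils. Funds are capped."""
    distance = (collator_address ^ ticket) & 0xFFFFFFFFFFFFFFFF
    return min(funds, cap) / (distance + 1)

def pick_winner(candidates, ticket):
    """Validators favour the heaviest candidate; `candidates` maps
    collator address -> funds parked at that address."""
    return max(candidates, key=lambda a: candidate_weight(a, candidates[a], ticket))
```

At equal funds the nearer address wins, while at comparable distances a larger parked balance dominates, which is the funds-weighted-proximity behaviour the text proposes.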
We call these blocks overweight. Protection against validators submitting and validating these blocks largely falls under the same guise as for invalid blocks, though with an additional caveat: since the time taken to execute a block (and thus its status as overweight) is subjective, the final outcome of a vote on misbehaviour will fall into essentially three camps. One possibility is that the block is definitely not overweight—in this case more than two-thirds declare that they could execute the block within some limit (e.g. 50% of the total time allowed between blocks). Another is that the block is definitely overweight—this would be if more than two-thirds declare that they could not execute the block within said limit. One final possibility is a fairly equal split of opinion between validators. In this case, we may choose to apply some proportionate punishment.

To ensure validators can predict when they may be proposing an overweight block, it may be sensible to require them to publish information on their own performance for each block. Over a sufficient period of time, this should allow them to profile their processing speed relative to the peers that would be judging them.

6.5.6. Collator Insurance. One issue remains for validators: unlike with PoW networks, to check a collator’s block for validity, they must actually execute the transactions in it. Malicious collators can feed invalid or overweight blocks to validators, causing them grief (wasting their resources) and exacting a potentially substantial opportunity cost. To mitigate this, we propose a simple strategy on the part of validators. Firstly, a parachain block candidate sent to a validator must be signed from a relay-chain account with funds; if it is not, the validator should drop it immediately. Secondly, such candidates should be ordered in priority by a combination (e.g.
multiplication) of the amount of funds in the account up to some cap, the number of previous blocks that the collator has successfully proposed in the past (not to mention any previous punishments), and the proximity factor to the winning ticket as discussed previously. The cap should be the same as the punitive damages paid to the validator in the case of their sending an invalid block. To disincentivise collators from sending invalid or overweight block candidates to validators, any validator may place in the next block a transaction including the offending block and alleging misbehaviour, with the effect of transferring some or all of the funds in the misbehaving collator’s account to the aggrieved validator. This type of transaction front-runs any others to ensure the collator cannot remove the funds prior to the punishment. The amount of funds transferred as damages is a dynamic parameter yet

to be modelled, but will likely be a proportion of the validator block reward to reflect the level of grief caused. To prevent malicious validators arbitrarily confiscating collators’ funds, the collator may appeal the validator’s decision to a jury of randomly chosen validators in return for placing a small deposit. If they find in the validator’s favour, the deposit is forfeited to the validator. If not, the deposit is returned and the validator is fined (since the validator is in a much more privileged position, the fine will likely be rather hefty).

6.6. Interchain Transaction Routing. Interchain transaction routing is one of the essential maintenance tasks of the relay-chain and its validators. This is the logic which governs how a posted transaction (often shortened to simply “post”) gets from being a desired output of one source parachain to being a non-negotiable input of another destination parachain, without any trust requirements. We choose the wording above carefully; notably, we don’t require there to have been a transaction in the source parachain to have explicitly sanctioned this post. The only constraint we place upon our model is that parachains must provide, packaged as a part of their overall block-processing output, the posts which are the result of the block’s execution. These posts are structured as several FIFO queues; the number of queues is known as the routing base and may be around 16. Notably, this number represents the quantity of parachains we can support without having to resort to multi-phase routing. Initially, Polkadot will support this kind of direct routing; however, we outline one possible multi-phase routing process (“hyper-routing”) as a means of scaling out well past the initial set of parachains. We assume that all participants know the subgroupings for the next two blocks, n and n + 1.
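As a rough illustration of the binned FIFO post structure a parachain must emit (the class and identifiers below are our own sketch, not part of any specification):

```python
from collections import deque

ROUTING_BASE = 16  # number of egress bins; per the text, roughly the number
                   # of parachains supportable without multi-phase routing


class EgressQueues:
    """Per-block egress posts of one parachain, binned by destination chain."""

    def __init__(self, base=ROUTING_BASE):
        self.bins = [deque() for _ in range(base)]

    def post(self, dest, payload):
        """Append a post to the FIFO bin for destination parachain `dest`."""
        self.bins[dest].append(payload)

    def drain(self, dest):
        """Yield posts bound for `dest` in arrival (FIFO) order."""
        while self.bins[dest]:
            yield self.bins[dest].popleft()


q = EgressQueues()
q.post(3, "post-A")
q.post(3, "post-B")
assert list(q.drain(3)) == ["post-A", "post-B"]  # FIFO order preserved
```

The fixed number of bins is what makes direct routing simple: a destination’s queue is found by index, at the cost of capping the parachain count at the routing base.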
In summary, the routing system follows these stages:
• Collator-S: Contact members of Validators[n][S]
• Collator-S: FOR EACH subgroup s: ensure at least 1 member of Validators[n][s] is in contact
• Collator-S: FOR EACH subgroup s: assume egress[n-1][s][S] is available (all incoming post data to S from the last block)
• Collator-S: Compose block candidate b for S: (b.header, b.ext, b.proof, b.receipt, b.egress)
• Collator-S: Send proof information proof[S] = (b.header, b.ext, b.proof, b.receipt) to Validators[n][S]
• Collator-S: Ensure external transaction data b.ext is made available to other collators and validators
• Collator-S: FOR EACH subgroup s: Send egress information egress[n][S][s] = (b.header, b.receipt, b.egress[s]) to the receiving sub-group’s members of the next block, Validators[n+1][s]
• Validator-V: Pre-connect all same-set members for the next block: let N = Chain[n+1][V]; connect all validators v such that Chain[n+1][v] = N
• Validator-V: Collate all data ingress for this block: FOR EACH subgroup s: retrieve egress[n-1][s][Chain[n][V]], getting it from other validators v such that Chain[n][v] = Chain[n][V], possibly going via randomly selected other validators for proof of attempt
• Validator-V: Accept candidate proofs for this block, proof[Chain[n][V]]; vote on block validity
• Validator-V: Accept candidate egress data for the next block: FOR EACH subgroup s: accept egress[n][s][N]; vote on block egress availability; republish among interested validators v such that Chain[n+1][v] = Chain[n+1][V]
• Validator-V: UNTIL CONSENSUS
Where: egress[n][from][to] is the current egress queue information for posts going from parachain from, to parachain to, in block number n. Collator-S is a collator for parachain S. Validators[n][s] is the set of validators for parachain s at block number n. Conversely, Chain[n][v] is the parachain to which validator v is assigned on block number n.
block.egress[to] is the egress queue of posts from some parachain block block whose destination parachain is to.

Since collators collect (transaction) fees based upon their blocks becoming canonical, they are incentivised to ensure that, for each next-block destination, the subgroup’s members are informed of the egress queue from the present block. Validators are incentivised only to form a consensus on a (parachain) block; as such, they care little about which collator’s block ultimately becomes canonical. In principle, a validator could form an allegiance with a collator and conspire to reduce the chances of other collators’ blocks becoming canonical; however, this is both difficult to arrange, due to the random selection of validators for parachains, and can be defended against with a reduction in fees payable for parachain blocks which hold up the consensus process.

6.6.1. External Data Availability. Ensuring a parachain’s external data is actually available is a perennial issue with decentralised systems aiming to distribute workload across the network. At the heart of the issue is the availability problem: since it is possible to make neither a non-interactive proof of availability nor any sort of proof of non-availability, for a BFT system to properly validate any transition whose correctness relies upon the availability of some external data, the maximum number of acceptably Byzantine nodes, plus one, must attest to the data being available. For a system that, like Polkadot, aims to scale out, this invites a problem: if a constant proportion of validators must attest to the availability of the data, and assuming that validators will want to actually store the data before asserting it is available, how do we avoid the bandwidth/storage requirements increasing with the system size (and therefore the number of validators)?
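To make the attestation threshold concrete, under the usual BFT assumption that n validators tolerate f Byzantine nodes only while n ≥ 3f + 1, “the maximum number of acceptably Byzantine nodes, plus one” works out as below (a sketch of the arithmetic only; the function names are ours):

```python
def max_byzantine(n: int) -> int:
    """Classical BFT bound: safety requires n >= 3f + 1, so f = (n - 1) // 3."""
    return (n - 1) // 3


def availability_quorum(n: int) -> int:
    """f + 1 attestations guarantee at least one honest node truly holds the data."""
    return max_byzantine(n) + 1


assert availability_quorum(4) == 2     # 4 validators tolerate 1 Byzantine node
assert availability_quorum(100) == 34  # 100 validators tolerate 33
```

The point of the f + 1 figure is that any set of f + 1 attestations must contain at least one honest validator, and an honest attestation implies the data can actually be retrieved.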
One possible answer would be to have a separate set of validators (availability guarantors), whose order grows sub-linearly with the size of Polkadot as a whole. This is described in 6.5.3. We also have a secondary trick. As a group, collators have an intrinsic incentive to ensure that all data is available for their chosen parachain, since without it they are unable to author further blocks from which they can collect transaction fees. Collators also form a group, membership of which is varied (due to the random nature of parachain validator groups), non-trivial to enter and easy

to prove. Recent collators (perhaps of the last few thousand blocks) are therefore allowed to issue challenges to validators over the availability of external data for a particular parachain block, for a small bond. Validators must contact those of the apparently offending validator sub-group who testified, and either acquire and return the data to the collator or escalate the matter by testifying to the lack of availability (a direct refusal to provide the data counts as a bond-confiscating offence, so the misbehaving validator will likely just drop the connection) and contacting additional validators to run the same test. In the latter case, the collator’s bond is returned. Once a quorum of validators who can make such non-availability testimonials is reached, their testimonials are released, the misbehaving sub-group is punished, and the block reverted.

6.6.2. Posts Routing. Each parachain header includes an egress-trie-root; this is the root of a trie containing the routing-base bins, each bin being a concatenated list of egress posts. Merkle proofs may be provided across parachain validators to prove that a particular parachain’s block had a particular egress queue for a particular destination parachain. At the beginning of processing a parachain block, each other parachain’s egress queue bound for said block is merged into our block’s ingress queue. We assume strong, probably CSPR (cryptographically-secure pseudo-random), sub-block ordering to achieve a deterministic operation that offers no favouritism between any parachain block pairing. Collators calculate the new queue and drain the egress queues according to the parachain’s logic. The contents of the ingress queue are written explicitly into the parachain block. This has two main purposes: firstly, it means that the parachain can be trustlessly synchronised in isolation from the other parachains.
Secondly, it simplifies the data logistics should the entire ingress queue be too large to process in a single block; validators and collators are able to process following blocks without having to source the queue’s data specially. If the parachain’s ingress queue is above a threshold amount at the end of block processing, then it is marked saturated on the relay-chain and no further messages may be delivered to it until it is cleared. Merkle proofs are used to demonstrate the fidelity of the collator’s operation in the parachain block’s proof.

6.6.3. Critique. One minor flaw relating to this basic mechanism is the post-bomb attack. This is where all parachains send the maximum amount of posts possible to a particular parachain. While this ties up the target’s ingress queue at once, no damage is done over and above a standard transaction DoS attack.

Operating normally, with a set of well-synchronised and non-malicious collators and validators, for N parachains, N × M total validators and L collators per parachain, we can break down the total data pathways per block to:
• Validator: M − 1 + L + L: M − 1 for the other validators in the parachain set, L for each collator providing a candidate parachain block, and a second L for each collator of the next block requiring the egress payloads of the previous block. (The latter is actually more like worst-case operation, since it is likely that collators will share such data.)
• Collator: M + kN: M for a connection to each relevant parachain block validator, kN for seeding the egress payloads to some subset of each parachain validator group for the next block (and possibly some favoured collator(s)).
As such, the data pathways per node grow linearly with the overall complexity of the system. While this is reasonable, as the system scales into hundreds or thousands of parachains, some communication latency may be absorbed in exchange for a lower complexity growth rate.
In this case, a multi-phase routing algorithm may be used in order to reduce the number of instantaneous pathways at the cost of introducing storage buffers and latency.

6.6.4. Hyper-cube Routing. Hyper-cube routing is a mechanism which can mostly be built as an extension to the basic routing mechanism described above. Essentially, rather than growing the node connectivity with the number of parachains and sub-group nodes, we grow it only with the logarithm of the number of parachains. Posts may transit between several parachains’ queues on their way to final delivery.

Routing itself is deterministic and simple. We begin by limiting the number of bins in the ingress/egress queues; rather than being the total number of parachains, it is the routing-base (b). This will be fixed as the number of parachains changes, with the routing-exponent (e) instead being raised. Under this model, our message volume grows with O(b^e), with the pathways remaining constant and the latency (or number of blocks required for delivery) growing with O(e). Our model of routing is a hyper-cube of e dimensions, with each side of the cube having b possible locations. Each block, we route messages along a single axis. We alternate the axis in a round-robin fashion, thus guaranteeing a worst-case delivery time of e blocks.

As part of the parachain processing, foreign-bound messages found in the ingress queue are routed immediately to the appropriate egress queue’s bin, given the current block number (and thus routing dimension). This process necessitates additional data transfer for each hop on the delivery route; however, this is a problem which may itself be mitigated by using some alternative means of data payload delivery and including only a reference, rather than the full payload of the post, in the post-trie.
An example of such hyper-cube routing for a system with four parachains, b = 2 and e = 2, might be:
Phase 0, on each message M:
• sub0: if M.dest in {2, 3} then sendTo(2) else keep
• sub1: if M.dest in {2, 3} then sendTo(3) else keep
• sub2: if M.dest in {0, 1} then sendTo(0) else keep
• sub3: if M.dest in {0, 1} then sendTo(1) else keep
Phase 1, on each message M:
• sub0: if M.dest in {1, 3} then sendTo(1) else keep
• sub1: if M.dest in {0, 2} then sendTo(0) else keep
• sub2: if M.dest in {1, 3} then sendTo(3) else keep
• sub3: if M.dest in {0, 2} then sendTo(2) else keep
The two dimensions here are easy to see as the first two bits of the destination index; for the first block, the higher-order bit alone is used. The second block deals with the low-order bit. Once both phases have happened (in either order), the post will have been routed.
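The axis-by-axis routing rule above generalises to any b and e; a minimal sketch (the function name is ours), which reproduces the b = 2, e = 2 table, where “keep” corresponds to the function returning the parachain’s own index:

```python
def next_hop(own: int, dest: int, phase: int, b: int = 2, e: int = 2) -> int:
    """Where parachain `own` forwards a message bound for `dest` during `phase`.

    Each block handles one base-b digit of the destination index, most
    significant first, wrapping round-robin; a message therefore reaches
    its destination within e blocks.
    """
    d = e - 1 - (phase % e)        # digit position handled this phase (MSB first)
    shift = b ** d
    own_digit = (own // shift) % b
    dest_digit = (dest // shift) % b
    # Replace our own digit at position d with the destination's digit.
    return own + (dest_digit - own_digit) * shift


# Reproduce the four-parachain, b = 2, e = 2 example from the text:
assert next_hop(0, 2, phase=0) == 2  # sub0: dest in {2, 3} -> sendTo(2)
assert next_hop(0, 1, phase=0) == 0  # sub0: keep (high bit already matches)
assert next_hop(0, 3, phase=1) == 1  # sub0: dest in {1, 3} -> sendTo(1)
assert next_hop(2, 1, phase=1) == 3  # sub2: dest in {1, 3} -> sendTo(3)
```

Since each phase fixes one more digit of the destination index, after e phases (in any order) every digit matches and the post has arrived.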

6.6.5. Maximising Serendipity. One alteration of the basic proposal would see a fixed total of c² − c validators, with c − 1 validators in each sub-group. Each block, rather than there being an unstructured repartitioning of validators among parachains, for each parachain sub-group, each validator would be assigned to a unique and different parachain sub-group on the following block. This would lead to the invariant that between any two blocks, for any pairing of parachains, there exist two validators who have swapped parachain responsibilities. While this cannot be used to gain absolute guarantees on availability (a single validator will occasionally drop offline, even if benevolent), it can nonetheless optimise the general case.

This approach is not without complications. The addition of a parachain would also necessitate a reorganisation of the validator set. Furthermore the number of validators, being tied to the square of the number of parachains, would start very small and eventually grow far too fast, becoming untenable after around 50 parachains. None of these are fundamental problems. In the first case, reorganisation of validator sets is something that must be done regularly anyway. Regarding the size of the validator set, when too small, multiple validators may be assigned to the same parachain, applying an integer factor to the overall total of validators. A multi-phase routing mechanism such as the hyper-cube routing discussed in 6.6.4 would alleviate the requirement for a large number of validators when there is a large number of chains.

6.7. Parachain Validation. A validator’s main purpose is to testify, as a well-bonded actor, that a parachain’s block is valid, including but not limited to any state transition, any external transactions included, the execution of any waiting posts in the ingress queue and the final state of the egress queue.
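In rough pseudo-Python, this testimony amounts to a predicate over the candidate’s data. The sketch below is illustrative only: every field, function and the toy transition function are our own inventions, and the real per-class validation function would be far richer:

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    parent_hash: str   # must match the hash agreed on the relay chain
    ext: list          # external input data ("transactions")
    ingress: list      # waiting posts this block claims to consume
    egress: dict       # claimed per-destination egress queues
    state_root: str    # claimed post-transition state


def validate(candidate, relay_parent_hash, execute):
    """Return True iff the candidate checks out locally.

    `execute` stands in for the parachain-class-specific transition
    function: it takes (ext, ingress) and returns the (state_root,
    egress) that the validator derives for itself.
    """
    if candidate.parent_hash != relay_parent_hash:
        return False  # parent is not the one agreed on the relay chain
    derived = execute(candidate.ext, candidate.ingress)
    # The collator's claimed outputs must match what we derive ourselves.
    return derived == (candidate.state_root, candidate.egress)


# Toy parachain class: "state root" is just a hash of the inputs, no egress.
toy = lambda ext, ingress: (str(hash((tuple(ext), tuple(ingress)))), {})

c = Candidate("0xabc", ["tx1"], ["post1"], {}, toy(["tx1"], ["post1"])[0])
assert validate(c, "0xabc", toy) is True   # matches our own derivation
assert validate(c, "0xdef", toy) is False  # wrong relay-chain parent
```

The essential point is that the validator never trusts the collator’s claimed outputs: it re-derives the transition and compares.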
The process itself is fairly simple. Once the validator has sealed the previous block, they are free to begin working to provide a parachain block candidate for the next round of consensus. Initially, the validator finds a parachain block candidate through a parachain collator (described next) or one of its co-validators. The parachain block candidate data includes the block’s header, the previous block’s header, any external input data included (for Ethereum and Bitcoin, such data would be referred to as transactions, however in principle it may comprise arbitrary data structures for arbitrary purposes), egress queue data and internal data to prove state-transition validity (for Ethereum this would be the various state/storage trie nodes required to execute each transaction). Experimental evidence shows this full dataset for a recent Ethereum block to be at most a few hundred KiB. Simultaneously, if not yet done, the validator will be attempting to retrieve information pertaining to the previous block’s transition, initially from the previous block’s validators and later from all validators signing for the availability of the data.

Once the validator has received such a candidate block, they then validate it locally. The validation process is contained within the parachain class’s validator module, a consensus-sensitive software module that must be written for any implementation of Polkadot (though in principle a library with a C ABI could enable a single library to be shared between implementations, with the appropriate reduction in safety coming from having only a single “reference” implementation). The process takes the previous block’s header and verifies its identity through the recently agreed relay-chain block in which its hash should be recorded. Once the parent header’s validity is ascertained, the specific parachain class’s validation function may be called.
This is a single function accepting a number of data fields (roughly those given previously) and returning a simple Boolean proclaiming the validity of the block. Most such validation functions will first check the header fields which can be derived directly from the parent block (e.g. parent hash, number). Following this, they will populate any internal data structures as necessary in order to process transactions and/or posts. For an Ethereum-like chain this amounts to populating a trie database with the nodes that will be needed for the full execution of transactions. Other chain types may have other preparatory mechanisms. Once done, the ingress posts and external transactions (or whatever the external data represents) will be enacted, balanced according to the chain’s specification. (A sensible default might be to require that all ingress posts be processed before external transactions are serviced; however, this should be for the parachain’s logic to decide.) Through this enactment, a series of egress posts will be created, and it will be verified that these do indeed match the collator’s candidate. Finally, the properly populated header will be checked against the candidate’s header. With a fully validated candidate block, the validator can then vote for the hash of its header and send all requisite validation information to the co-validators in its sub-group.

6.7.1. Parachain Collators. Parachain collators are unbonded operators who fulfil much of the task of miners on present-day blockchain networks. They are specific to a particular parachain. In order to operate they must maintain both the relay-chain and the fully synchronised parachain. The precise meaning of “fully synchronised” will depend on the class of parachain, though it will always include the present state of the parachain’s ingress queue.
In Ethereum’s case it also involves at least maintaining a Merkle-tree database of the last few blocks, but might also include various other data structures including Bloom filters for account existence, familial information, logging outputs and reverse lookup tables for block number. In addition to keeping the two chains synchronised, a collator must also “fish” for transactions by maintaining a transaction queue and accepting properly validated transactions from the public network. With the queue and chain, it is able to create new candidate blocks for the validators chosen at each block (whose identity is known since the relay-chain is synchronised) and submit them, together with various ancillary information such as proof-of-validity, via the peer network. For its trouble, it collects all fees relating to the transactions it includes.

Various economics float around this arrangement. In a heavily competitive market where there is a surplus of collators, it is possible that transaction fees will be shared with the parachain validators to incentivise the inclusion of a particular collator’s block. Similarly,

some collators may even raise the fees that need to be paid in order to make their block more attractive to validators. In this case, a natural market should form, with transactions paying higher fees skipping the queue and having faster inclusion in the chain.

6.8. Networking. Networking on traditional blockchains like Ethereum and Bitcoin has rather simple requirements. All transactions and blocks are broadcast in a simple undirected gossip. Synchronisation is more involved, especially with Ethereum, but in reality this logic was contained in the peer strategy rather than the protocol itself, which revolved around a few request-and-answer message types. While Ethereum improved on previous protocol offerings with its devp2p protocol, which allowed many sub-protocols to be multiplexed over a single peer connection and thus have the same peer overlay support many p2p protocols simultaneously, the Ethereum portion of the protocol still remained relatively simple, and the p2p protocol as a whole remains unfinished, with important functionality missing such as QoS support. Sadly, the desire to create a more ubiquitous “web 3” protocol largely failed, with the only projects using it being those explicitly funded from the Ethereum crowd-sale.

The requirements for Polkadot are rather more substantial. Rather than a wholly uniform network, Polkadot has several types of participants, each with different requirements over their peer makeup, and several network “avenues” whose participants will tend to converse about particular data. This means a substantially more structured network overlay—and a protocol supporting that—will likely be necessary. Furthermore, extensibility to facilitate future additions such as new kinds of “chain” may itself require a novel overlay structure.
While an in-depth discussion of how the networking protocol may look is outside of the scope of this document, some requirements analysis is reasonable. We can roughly break down our network participants into two sets (relay-chain, parachains), each of several subsets. We can also state that each of the parachain participants is interested only in conversing between themselves, as opposed to with participants of other parachains:
• Relay-chain participants:
  • Validators: P, split into subsets P[s] for each parachain
  • Availability Guarantors: A (this may be represented by Validators in the basic form of the protocol)
  • Relay-chain clients: M (note that members of each parachain set will also tend to be members of M)
• Parachain participants:
  • Parachain Collators: C[0], C[1], ...
  • Parachain Fishermen: F[0], F[1], ...
  • Parachain clients: S[0], S[1], ...
  • Parachain light-clients: L[0], L[1], ...
In general, particular classes of communication will tend to take place between members of these sets:
• P | A <-> P | A: The full set of validators/guarantors must be well-connected to achieve consensus.
• P[s] <-> C[s] | P[s]: Each validator, as a member of a given parachain group, will tend to gossip with other such members as well as with the collators of that parachain to discover and share block candidates.
• A <-> P[s] | C | A: Each availability guarantor will need to collect consensus-sensitive cross-chain data from the validators assigned to it; collators may also optimise the chance of consensus on their block by advertising it to availability guarantors. Once they have it, the data will be disbursed to other such guarantors to facilitate consensus.
• P[s] <-> A | P[s']: Parachain validators will need to collect additional input data from the previous set of validators or the availability guarantors.
• F[s] <-> P: When reporting, fishermen may place a claim with any participant.
• M <-> M | P | A: General relay-chain clients disburse data from validators and guarantors.
• S[s] <-> S[s] | P[s] | A: Parachain clients disburse data from the validators/guarantors.
• L[s] <-> L[s] | S[s]: Parachain light-clients disburse data from the full clients.
To ensure an efficient transport mechanism, a “flat” overlay network—like Ethereum’s devp2p—where each node does not (non-arbitrarily) differentiate the fitness of its peers is unlikely to be suitable. A reasonably extensible peer selection and discovery mechanism will likely need to be included within the protocol, as well as aggressive planning and lookahead to ensure the right sort of peers are “serendipitously” connected at the right time.

The precise strategy of peer make-up will be different for each class of participant: for a properly scaled-out multi-chain, collators will either need to be continuously reconnecting to the accordingly elected validators, or will need on-going agreements with a subset of the validators to ensure they are not disconnected during the vast majority of the time in which they are useless to those validators. Collators will also naturally attempt to maintain one or more stable connections into the availability guarantor set to ensure swift propagation of their consensus-sensitive data.

Availability guarantors will mostly aim to maintain a stable connection to each other and to validators (for consensus and the consensus-critical parachain data to which they attest), as well as to some collators (for the parachain data) and some fishermen and full clients (for dispersing information). Validators will tend to look for other validators, especially those in the same sub-group, and any collators that can supply them with parachain block candidates. Fishermen, as well as general relay-chain and parachain clients, will generally aim to keep a connection open to a validator or guarantor, but otherwise to plenty of other nodes similar to themselves.
Parachain light clients will similarly aim to be connected to a full client of the parachain, if not just to other parachain light clients.

6.8.1. The Problem of Peer Churn. In the basic protocol proposal, each of these subsets alters randomly with each block as the validators assigned to verify the parachain transitions are randomly elected. This can be a problem should disparate (non-peer) nodes need to pass data between each other. One must either rely on a fairly-distributed and well-connected peer network to

ensure that the hop-distance (and therefore worst-case latency) grows only with the logarithm of the network size (a Kademlia-like protocol [13] may help here), or one must introduce longer block times to allow the necessary connection negotiation to take place to keep a peer-set that reflects the node’s current communication needs. Neither of these is a great solution: long block times forced upon the network may render it useless for particular applications and chains, and even a perfectly fair and connected network will result in substantial wastage of bandwidth as it scales, due to uninterested nodes having to forward data useless to them.

While both directions may form part of the solution, a reasonable optimisation to help minimise latency would be to restrict the volatility of these parachain validator sets, either by reassigning membership only between series of blocks (e.g. in groups of 15, which at a 4-second block time would mean altering connections only once per minute) or by rotating membership in an incremental fashion, e.g. changing by one member at a time (e.g. if there are 15 validators assigned to each parachain, then on average a full minute would pass between completely unique sets). By limiting the amount of peer churn, and by ensuring that advantageous peer connections are made well in advance through the partial predictability of parachain sets, we can help ensure that each node keeps a permanently serendipitous selection of peers.

6.8.2. Path to an Effective Network Protocol. Likely the most effective and reasonable development effort will focus on utilising a pre-existing protocol rather than rolling our own. Several peer-to-peer base protocols exist that we may use or augment, including Ethereum’s own devp2p [22], IPFS’s libp2p [1] and GNU’s GNUnet [4].
A full review of these protocols and their relevance for building a modular peer network supporting certain structural guarantees, dynamic peer steering and extensible sub-protocols is well beyond the scope of this document, but will be an important step in the implementation of Polkadot.

7. Practicalities of the Protocol

7.1. Interchain Transaction Payment. While a great amount of freedom and simplicity is gained through dropping the need for a holistic computation-resource accounting framework like Ethereum’s gas, this does raise an important question: without gas, how does one parachain prevent another parachain from forcing it to do computation? While we can rely on transaction-post ingress queue buffers to prevent one chain from spamming another with transaction data, there is no equivalent mechanism provided by the protocol to prevent the spamming of transaction processing.

This is a problem left to a higher level. Since chains are free to attach arbitrary semantics to the incoming transaction-post data, we can ensure that computation must be paid for before it is started. In a similar vein to the model espoused by Ethereum Serenity, we can imagine a “break-in” contract within a parachain which allows a validator to be guaranteed payment in exchange for the provision of a particular volume of processing resources. These resources may be measured in something like gas, but could also follow some entirely novel model such as subjective time-to-execute or a Bitcoin-like flat-fee model.

On its own this isn’t so useful, since we cannot readily assume that the off-chain caller has available to them whatever value mechanism is recognised by the break-in contract. However, we can imagine a secondary “break-out” contract in the source chain. The two contracts together would form a bridge, recognising each other and providing value-equivalence. (Staking-tokens, available to each, could be used to settle up the balance-of-payments.)
Calling into another such chain would mean proxying through this bridge, which would provide the means of negotiating the value transfer between chains in order to pay for the computation resources required on the destination parachain. 7.2. Additional Chains. While the addition of a parachain is a relatively cheap operation, it is not free. More parachains means fewer validators per parachain and, eventually, a larger number of validators each with a reduced average bond. While the issue of a smaller coercion cost for attacking a parachain is mitigated through fishermen, the growing validator set essentially forces a higher degree of latency due to the mechanics of the underlying consensus method. Furthermore each parachain brings with it the potential to grief validators with an over-burdensome validation algorithm. As such, there will be some “price” that validators and/or the stake-holding community will extract for the addition of a new parachain. This market for chains will possibly see the addition of either: • Chains that likely have zero net contribution paying (in terms of locking up or burning staking tokens) to be made a part (e.g. consortium chains, Doge-chains, app-specific chains); • chains that deliver intrinsic value to the network through adding particular functionality difficult to get elsewhere (e.g. confidentiality, internal scalability, service tie-ins). Essentially, the community of stakeholders will need to be incentivized to add child chains—either financially or through the desire to add featureful chains to the relay. It is envisioned that new chains added will have a very short notice period for removal, allowing for new chains to be experimented with without any risk of compromising the medium or long-term value proposition. 8. Conclusion We have outlined a direction one may take to author a scalable, heterogeneous multi-chain protocol with the potential to be backwards compatible to certain, pre-existing blockchain networks. 
Under such a protocol, participants work in enlightened self-interest to create an overall system which can be extended in an exceptionally free manner and without the typical cost for existing users that comes from a standard blockchain design. We have given a rough outline of the architecture it would take including the nature of the participants, their economic incentives and the processes under which they must engage. We have identified a basic design and discussed its strengths and limitations; accordingly we have further directions which may ease those limitations and yield further ground towards a fully scalable blockchain solution.

POLKADOT: VISION FOR A HETEROGENEOUS MULTI-CHAIN FRAMEWORK DRAFT 1 19 8.1. Missing Material and Open Questions. Network forking is always a possibility from divergent implementations of the protocol. The recovery from such an exceptional condition was not discussed. Given the network will necessarily have a non-zero period of finalisation, it should not be a large issue to recover from the relaychain forking, however will require careful integration into the consensus protocol. Bond-confiscation and conversely reward provision has not been deeply explored. At present we assume rewards are provided under a winner-takes-all basis: this may not give the best incentivisation model for fishermen. A shortperiod commit-reveal process would allow many fishermen to claim the prize giving a fairer distribution of rewards, however the process could lead to additional latency in the discovery of misbehaviour. 8.2. Acknowledgments. Many thanks to all of the proof-readers who have helped get this in to a vaguely presentable shape. In particular, Peter Czaban, Bj¨orn Wagner, Ken Kappler, Robert Habermeier, Vitalik Buterin, Reto Trinkler and Jack Petersson. Thanks to all the people who have contributed ideas or the beginnings thereof, Marek Kotewicz and Aeron Buchanan deserve especial mention. And thanks to everyone else for their help along the way. All errors are my own. Portions of this work, including initial research into consensus algorithms, was funded in part by the British Government under the Innovate UK programme.

6. Protocol in Depth

The protocol can be roughly broken down into three parts: the consensus mechanism, the parachain interface and interchain transaction routing.

6.1. Relay-chain Operation. The relay-chain will likely be a chain broadly similar to Ethereum in that it is state-based, with the state mapping addresses to account information, primarily balances and (to prevent replays) a transaction counter. Placing accounts here fulfils one purpose: to provide accounting for which identity possesses what amount of stake in the system.7 There will be notable differences, though:

• Contracts cannot be deployed through transactions; in line with the desire to avoid application functionality on the relay-chain, the public deployment of contracts will not be supported.
• Compute-resource usage ("gas") is not accounted; since the only functions available for public usage will be fixed, the rationale behind gas accounting no longer holds. As such, a flat fee will be charged in all cases, allowing more performance from whatever dynamic code execution may need to be done as well as a simpler transaction format.
• Special functionality is supported for listed contracts, allowing auto-execution and network-message outputs.

In the case that the relay-chain has a virtual machine and that it is based around the EVM, it would carry a number of modifications to ensure maximal simplicity. It would likely have a number of built-in contracts (similar to those at addresses 1-4 in Ethereum) to allow platform-specific duties to be managed, including a consensus contract, a validator contract and a parachain contract. If not EVM-based, then a WebAssembly [2] (wasm) back-end is the most likely alternative; in this case the overall structure would be similar, but there would be no need for the built-in contracts, with wasm being a viable target for general-purpose languages rather than the immature and limited languages targeting the EVM.

Other deviations from the present Ethereum protocol are quite possible, for example a simplification of the transaction-receipt format to allow for the parallel execution of non-conflicting transactions within the same block, as has been proposed for the Serenity series of changes.

It is possible, though unlikely, that a Serenity-like "pure" chain be deployed as the relay-chain, allowing a particular contract to manage things like the staking-token balances rather than making them a fundamental part of the chain's protocol. At present we feel it is unlikely that this would offer a sufficiently great protocol simplification to be worth the additional complexity and uncertainty involved in developing it.

7 As a means of representing the amount for which a particular holder is responsible in the overall security of the system, these stake accounts will inevitably encode some economic value. However, it should be understood that, since there is no intention that such value be used in any way for exchanging real-world goods and services, the token should not be likened to a currency, and the relay-chain retains its nihilistic philosophy regarding applications.

A number of small pieces of functionality are required for administering the consensus mechanism, the validator set, the validation mechanism and the parachains. These could be implemented together under a single protocol. However, for reasons of modularity we describe them as "contracts" of the relay-chain. This should be taken to mean that they are objects (in the sense of object-oriented programming) managed by the relay-chain's consensus mechanism, but not necessarily that they are defined as programs in EVM-like opcodes, nor even that they be individually addressable through the account system.

6.2. Staking Contract. This contract maintains the validator set. It manages:

• which accounts are currently validators;
• which accounts can become validators at short notice;
• which accounts have placed stake nominating a validator;
• the properties of each, including staking volume, acceptable payout rates and addresses, and short-term (session) identities.

It allows accounts to register a desire to become a bonded validator (along with their requirements), to nominate one or more identities, and for pre-existing bonded validators to register their desire to exit that status. It also contains the machinery for the validation and canonicalisation mechanisms themselves.

6.2.1. Stake-token Liquidity. It is generally desirable to have as much of the total staking tokens as possible staked within the network-maintenance operations, since this ties the network's security directly to the overall "market capitalisation" of the staking token. This can easily be incentivised by inflating the currency and distributing the proceeds to those who participate as validators. However, doing so presents a problem: if the tokens are locked in the staking contract under punishment of reduction, how can a sufficiently large portion remain liquid to allow price discovery?
One answer to this is to allow a straightforward derivative contract, securing a fungible token against the underlying staked tokens. This is difficult to arrange in a trust-free manner. Furthermore, such derivative tokens could not be treated equally, for the same reason that the bonds of different Eurozone governments are not fungible: there is a chance of the underlying asset failing and becoming worthless. With Eurozone governments, there could be a default; with validator-staked tokens, the validator may act maliciously and be punished.

Keeping with our principles, we elect for the simplest solution: not all tokens need be staked. This means that some proportion (perhaps 20%) of the tokens will forcibly remain liquid. Though imperfect from a security perspective, it is unlikely to make a fundamental difference to the security of the network; 80% of the reparations possible from bond confiscation would still be available compared with the "perfect case" of 100% staking.

The ratio between staked and liquid tokens can be targeted fairly simply through a reverse-auction mechanism. Essentially, token holders interested in becoming validators would each post an offer to the staking contract stating the minimum payout rate that they would require in order to take part. At the beginning of each session (sessions would happen regularly, perhaps as often as once per hour), the validator slots would be filled according to each would-be validator's stake and payout rate. One possible algorithm would be to accept those with the lowest offers whose stake is no higher than the total stake targeted divided by the number of slots, and no lower than a lower bound of half that amount. If the slots cannot be filled, the lower bound could be repeatedly reduced by some factor until they can.

6.2.2. Nominating. It is possible to trustlessly nominate one's staking tokens to an active validator, conferring upon them the responsibility of a validator's duties. Nominating works through an approval-voting system: each would-be nominator is able to post an instruction to the staking contract expressing one or more validator identities under whose responsibility they are prepared to entrust their bond.

Each session, nominators' bonds are dispersed so as to be represented by one or more validators. The dispersal algorithm optimises for a set of validators of equivalent total bonds. A nominator's bond comes under the effective responsibility of a validator and earns interest, or suffers a punitive reduction, accordingly.

6.2.3. Bond Confiscation/Burning. Certain validator behaviour results in a punitive reduction of the bond. If the bond is reduced below the allowable minimum, the session is prematurely ended and another begun. A non-exhaustive list of punishable validator misbehaviour includes:

• being part of a parachain group unable to provide consensus over the validity of a parachain block;
• actively signing for the validity of an invalid parachain block;
• being unable to supply egress payloads previously voted as available;
• inactivity during the consensus process;
• validating relay-chain blocks on competing forks.

Some misbehaviours threaten the network's integrity (such as signing invalid parachain blocks or validating multiple sides of a fork) and result in effective exile through the total reduction of the bond. In other, less serious cases (e.g. inactivity in the consensus process), or in cases where blame cannot be precisely apportioned (being part of an ineffective group), a small portion of the bond may instead be fined. In the latter case, this works well with sub-group churn to ensure that malicious nodes suffer substantially more loss than collaterally-damaged benevolent nodes.

In some cases (multi-fork validation and invalid sub-block signing), validators cannot themselves easily detect each other's misbehaviour, since constant verification of every parachain block would be too arduous a task. Here it is necessary to enlist the support of parties external to the validation process to verify and report such misbehaviour. Such parties receive a reward for reporting the activity; their name, "fishermen", stems from the rarity of such a reward.

Since these cases are generally very serious, we expect any rewards to be easily paid out of the confiscated bond. In general we prefer to balance burning (i.e. reduction to nothing) with reallocation, rather than attempting wholesale reallocation. This has the effect of increasing the overall value of the token, compensating the network in general to some degree rather than only the specific party involved in the discovery. This is primarily a security mechanism: were the amounts involved large, extreme and acrobatic incentives could result from granting them to a single target. In general, it is important that the reward be large enough to make policing the network worthwhile, but not so large as to offset the cost of a well-financed, well-orchestrated "industrial-grade" criminal hacking attack on some unlucky validator in order to force misbehaviour.

In this vein, the amount claimable should generally be no greater than the direct bond of the misbehaving validator, lest a perverse incentive arise of misbehaving and reporting oneself for the bounty. This can be addressed explicitly through a minimum direct-bond requirement for becoming a validator, or implicitly by educating nominators that validators with little direct bond deposited have no great incentive to behave well.

6.3. Parachain Registry. Each parachain is defined in this registry. It is a relatively simple database-like construct holding both static and dynamic information on each chain.

Static information includes the chain index (a simple integer) and the validation-protocol identity, a means of distinguishing between the different classes of parachain so that the correct validation algorithm can be run by the validators consigned to putting forward valid candidates. An initial proof-of-concept would focus on placing new validation algorithms into the client itself, effectively requiring a hard fork of the protocol each time an additional class of chain were added. Ultimately, though, it may be possible to specify the validation algorithm in a way both rigorous and efficient enough that clients are able to work effectively with new parachains without a hard fork. One possible avenue would be to specify parachain validation algorithms in a well-established, natively-compiled, platform-neutral language such as WebAssembly. Additional research is needed to determine whether this is truly feasible, but if so, it could bring with it the sizeable advantage of banishing hard forks for good.

Dynamic information includes aspects of the transaction-routing system over which global agreement is required, such as the parachain's ingress queue (described in section 6.6).

The registry can have parachains added only through referendum voting; this could be managed internally, but would more likely be placed in an external referendum contract in order to facilitate re-use as a more general governance component. The parameters of the voting requirements (e.g. any quorum required, the majority needed) for registering additional chains and for other, less formal system upgrades will be set out in a "master constitution", but are likely to follow a fairly traditional path, at least initially. The precise formulation is beyond the scope of the current work, but, for example, a two-thirds supermajority to pass with more than one third of the entire system's stake voting positively may be a sensible starting point.

Other operations include the suspension and removal of parachains. Suspension would, it is hoped, never happen; it is designed as a last-resort safeguard against some intractable problem in a parachain's validation system. The most obvious example would be a consensus-critical difference between implementations leading validators to be unable to agree on validity or on blocks. Validators are encouraged to run multiple client implementations so that they are able to spot such problems prior to bond confiscation. Since suspension is an emergency measure, it would be under the auspices of a dynamic validator vote rather than a referendum. Reinstatement would be possible either from the validators or via referendum.

The removal of a parachain altogether would come only after a referendum, and would require a substantial grace period to allow an orderly transition either to a standalone chain or to becoming part of some other consensus system. The grace period would likely be of the order of months, and would probably be set on a per-chain basis in the parachain registry, so that different parachains can enjoy different grace periods according to their need.

6.4. Sealing Relay Blocks. Sealing refers, in essence, to the process of canonicalisation; that is, a basic data transform which maps the original into something fundamentally singular and meaningful. Under a PoW chain, sealing is effectively synonymous with mining. In our case, it involves the collection of signed statements from validators over the validity, availability and canonicality of a particular relay-chain block and the parachain blocks that it represents.

The mechanics of the underlying BFT consensus algorithm are beyond the scope of the current work. We will instead describe it using a primitive which assumes a consensus-creating state machine. Ultimately we expect to draw inspiration from a number of promising BFT consensus algorithms at the core: Tangaroa [9] (a BFT variant of Raft [16]), Tendermint [11] and HoneyBadgerBFT [14]. The algorithm must reach agreement over multiple parachains in parallel, and thus differs from the usual blockchain consensus mechanisms. We assume that once consensus is reached, we are able to record it in an irrefutable proof which can be provided by any of its participants. We also assume that misbehaviour within the protocol can generally be reduced to a small group containing the misbehaving participants, in order to minimise collateral damage when meting out punishment.8

The proof takes the form of our signed statements, placed together in the relay-chain block's header alongside certain other fields, not least the relay-chain's state-trie root and transaction-trie root.

The sealing process takes place under a single consensus-generating mechanism addressing both the relay-chain's blocks and the parachains' blocks which form part of the relay's content: parachains are not separately "committed" by their sub-groups and collated later. This results in a more complex process for the relay-chain, but allows us to complete the consensus over the entire system in a single stage, minimising latency and allowing for the fairly complex data-availability requirements helpful for the routing process described below.

8 Existing PoS-based BFT consensus schemes such as Tendermint and the original Slasher fulfil these assertions.
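As a concrete reading of the "sensible starting point" suggested for the registration referendum in 6.3, the following sketch checks the two-thirds supermajority together with the one-third positive-turnout floor. The function and its exact parameterisation are illustrative assumptions only:

```python
def referendum_passes(stake_for, stake_against, total_stake):
    """Illustrative thresholds: a two-thirds supermajority of the stake
    that voted, with more than one third of the entire system's stake
    voting positively."""
    voted = stake_for + stake_against
    if voted == 0:
        return False
    supermajority = 3 * stake_for >= 2 * voted   # two thirds of votes cast
    turnout_floor = 3 * stake_for > total_stake  # over a third of all stake
    return supermajority and turnout_floor
```

With 900 units of total stake, 400 for versus 100 against passes, whereas 250 for versus 50 against fails the turnout floor despite its larger majority.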

The state of each participant's consensus machine may be modelled as a simple (2-dimensional) table. Each participant (validator) has a set of information, in the form of signed statements ("votes") from the other participants, regarding each parachain block candidate as well as the relay-chain block candidate. The information comes in two pieces:

Availability: does this validator have the egress transaction-post information from this block, such that they would be able to validate parachain candidates properly on the following block? They may vote either 1 (known) or 0 (not yet known). Once they vote 1, they are committed to voting similarly for the rest of the process; later votes that do not respect this are grounds for punishment.

Validity: is the parachain block valid, and is all externally-referenced data (e.g. transactions) available? This is relevant only to validators assigned to the parachain on which they are voting. They may vote 1 (valid), -1 (invalid) or 0 (not yet known). Once they vote non-zero, they are committed to voting that way for the rest of the process; later votes that do not respect this are grounds for punishment.

All validators must submit votes; votes may be resubmitted provided they follow the rules above. The progression of consensus may be modelled as multiple standard BFT consensus algorithms happening in parallel, one over each parachain. Since these could be subverted by a relatively small set of malicious actors concentrated on a single parachain group, there is an overall consensus backstop which limits the worst-case deadlock to the invalidation of one or more parachain blocks (together with a round of punishment for those responsible).

The basic rules for validity of the individual blocks (which allow the total set of validators as a whole to come to consensus on a block becoming the unique parachain candidate to be referenced from the canonical relay):

• there must be at least two thirds of the sub-group's validators voting positively for validity and none voting negatively;
• there must be more than one third of all validators voting positively for the availability of the egress-queue information.

If there is at least one positive and at least one negative vote on validity, a special condition is created and the entire validator set must vote to determine whether there are malicious parties or whether an accidental fork has occurred. In addition to the valid and invalid votes, a third kind of vote is allowed, equivalent to voting for both, meaning that the node has conflicting opinions. This could be due to the node's owner running multiple implementations which do not agree, indicating a possible ambiguity in the protocol.

After all votes from the full validator set are counted, if the losing opinion polls at least some portion (to be parameterised; at most one half, perhaps much less) of the votes of the winning opinion, then it is assumed to be an accidental parachain fork and the parachain is automatically suspended from the consensus process. Otherwise, we assume it is a malicious act and punish the minority who voted for the dissenting opinion.

The conclusion is a set of signatures demonstrating canonicality. The relay-chain block may then be sealed and the process of sealing the next block begun.

6.5. Improvements for Sealing Relay Blocks. While this sealing method gives strong guarantees over the system's operation, it does not scale out particularly well, since each parachain's key information must have its availability guaranteed by over one third of all validators. This means that every validator's responsibility footprint grows as more chains are added.

While data availability within open consensus networks is essentially an unsolved problem, there are ways of mitigating the overhead placed on validator nodes. One simple solution is to realise that while validators must shoulder the responsibility for data availability, they need not actually store, communicate or replicate the data themselves.
Secondary data silos, possibly related to (or even the very same) collators who compile this data, may manage the task of guaranteeing availability, with the validators providing a portion of their interest/income in payment. However, while this might buy some intermediate scalability, it still doesn't help the underlying problem: since adding more chains will typically require additional validators, the ongoing network-resource consumption (particularly in terms of bandwidth) grows with the number of chains, an untenable property in the long term.

Ultimately, we are likely to keep rubbing up against the fundamental limitation that, for a consensus network to be considered availably secure, the ongoing bandwidth requirements are of the order of the total number of validators multiplied by the total amount of input information. This is due to the inability of an untrusted network to distribute the task of data storage properly across many nodes, a task which sits apart from the eminently distributable task of processing.

6.5.1. Introducing Latency. One means of softening this rule is to relax the notion of immediacy. By requiring the 33%+1 of validators to vote for availability only eventually, rather than immediately, we can better utilise exponential data propagation and help even out peaks in data interchange. A reasonable (though unproven) equality may be:

(1) latency = participants × chains

Under the current model, the system scales with the number of chains in order to ensure that processing is distributed; since each chain requires at least one validator, and we fix the availability attestation to a constant proportion of the validators, the number of participants likewise grows with the number of chains. We end up with:

(2) latency = size^2

This means that as the system grows, the bandwidth required and the latency until the whole network can assume availability (which can also be characterised as the number of blocks before finalisation) increase with its square. This is a substantial growth factor, and may turn out to be a notable roadblock, forcing us into "non-flat" paradigms such as composing several "Polkadotes" into a hierarchy for multi-level routing of posts through a tree of relay-chains.
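Stepping back to the per-block voting rules set out in 6.4, they can be captured in a short tallying routine. The status labels and function shape here are illustrative, not drawn from any Polkadot implementation:

```python
def parachain_block_status(validity_votes, availability_votes, total_validators):
    """Tally sub-group validity votes (+1 valid, -1 invalid, 0 unknown)
    and whole-set availability votes (1/0) following the rules in the text."""
    pos = validity_votes.count(1)
    neg = validity_votes.count(-1)
    if pos and neg:
        # Conflicting votes: the entire validator set must now decide
        # between a malicious act and an accidental fork.
        return "disputed"
    subgroup = len(validity_votes)
    valid = neg == 0 and 3 * pos >= 2 * subgroup         # >= 2/3, none against
    available = 3 * sum(availability_votes) > total_validators  # > 1/3
    return "candidate" if valid and available else "pending"
```

Four of five sub-group members voting valid with six of fifteen validators attesting availability yields a canonical candidate; a single opposing validity vote escalates instead of resolving locally.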

6.5.2. Public Participation. One more possible direction is to enlist public participation in the process through a micro-complaints system. Similarly to the fishermen, external parties could police the validators who claim availability, their task being to find a validator who appears unable to demonstrate such availability. In doing so, they may lodge a micro-complaint with other validators. Proof-of-work or a staked bond may be used to mitigate the sybil attack which would otherwise render such a system essentially useless.

6.5.3. Availability Guarantors. A final route would be to nominate a second set of bonded validators as "availability guarantors". These would be bonded just as normal validators are, and may even be drawn from the same set (though if so, they would be chosen for a longer term, at least per session). Unlike normal validators, they would not switch between parachains but would instead form a single group attesting to the availability of all important interchain data.

This has the advantage of relaxing the equivalence between participants and chains. In essence, the chains can grow (along with the original chain-validator set), while the participants, specifically those taking part in data-availability testing, can remain at least sub-linear and quite possibly constant.

6.5.4. Collator Preferences. One important aspect of this system is ensuring a healthy selection of collators creating the blocks in any given parachain. If a single collator dominated a parachain, then certain attacks would become more feasible, since a lack of availability of external data would be less likely to be noticed.

One option is to weight parachain blocks artificially through a pseudo-random mechanism in order to favour a wide variety of collators. In the first instance, we would require, as part of the consensus mechanism, that validators favour the parachain block candidates determined to be "heavier". Equally, we must incentivise validators to attempt to propose the heaviest block they can find; this could be done by making a portion of their reward proportional to the weight of their candidate.

To ensure that collators are given a reasonably fair chance of their candidate being selected as the winner among the consensus candidates, we make a parachain block candidate's weight determined by a random function connected with each collator. For example, one might take the XOR distance measure between the collator's address and some cryptographically-secure pseudo-random number determined near to the point of the block's creation (a notional "winning ticket"). This effectively gives each collator (or, more specifically, each collator's address) a random chance of their candidate block "winning" over all others.

To mitigate the sybil attack of a single collator "mining" addresses close to the winning ticket and thereby being favoured for every block, we would add some inertia to the collator's address. This may be as simple as requiring a base amount of funds to be held in the address. A more elegant approach would be to weight the proximity to the winning ticket by the amount of funds parked at the address in question. While the modelling has yet to be done, it is quite possible that such a mechanism would enable even very small stakeholders to contribute as collators.

6.5.5. Overweight Blocks. If the validator set were compromised, it could create and propose a block which, though valid, takes an inordinate amount of time to execute and validate. This is a problem, since a validator group could plausibly construct a block that takes a very long time to execute unless some particular piece of information is already known which allows a shortcut, such as the factorisation of a large prime.

If a single collator knew that information, then they would have a clear advantage in getting their own candidates accepted for as long as the others were kept busy processing the old block. We call such blocks overweight.

Protection against validators proposing and validating such blocks happens largely in the same way as for invalid blocks, with one additional caveat: since the time taken to execute a block (and hence its status as overweight) is subjective, the ultimate outcome of a vote on misbehaviour falls into essentially three camps. One possibility is that the block is uncontroversially not overweight; in this case, more than two thirds declare that they could execute the block within some limit (e.g. 50% of the total time allowed between blocks). Another is that the block is uncontroversially overweight; this is the case when more than two thirds declare that they could not execute the block within said limit. The final possibility is a fairly even split of opinion between the validators, in which case we may choose to apply some commensurately partial punishment.

To ensure that validators can predict when they may be proposing an overweight block, it may be sensible to require them to publish information on their own performance for each block. Over a sufficiently long period, this should allow them to profile their processing speed relative to the peers that would judge them.

6.5.6. Collator Insurance. One problem remains for validators: unlike in PoW networks, to check a collator's block for validity they must actually execute the transactions within it. Malicious collators can feed invalid or overweight blocks to validators, causing them grief (wasting their resources) and exacting a potentially substantial opportunity cost.

To mitigate this, we propose a simple strategy on the part of validators. Firstly, parachain block candidates sent to validators must be signed from a relay-chain account holding funds; if they are not, then the validator should drop them immediately. Secondly, such candidates should be prioritised by a combination (e.g. the product) of the amount of funds in the account, up to some cap; the number of previous blocks that the collator has successfully proposed (discounting any that were subsequently punished); and a proximity factor to the winning ticket, as discussed previously. The cap should be the same as the punitive damages paid to the validator in the case that an invalid block is sent.

To disincentivise collators from sending invalid or overweight block candidates to validators, any validator may place a transaction into the next block containing the offending block as an allegation of misbehaviour, with the effect of transferring some or all of the misbehaving collator's funds to the aggrieved validator. Such a transaction takes priority over all others, ensuring that the collator cannot move the funds away before the punishment. The amount of funds transferred in damages is a dynamic parameter yet to be modelled, but will likely be a proportion of the validator's block reward, reflecting the level of grief caused. To prevent malicious validators from arbitrarily confiscating collators' funds, a collator may appeal the validator's decision to a jury of randomly chosen validators in return for placing a small deposit. If they find in the validator's favour, the deposit is consumed; if not, the deposit is returned and the validator is fined (since the validator is in a much more vaulted position, the fine will likely be rather hefty).
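The prioritisation rule of 6.5.6 might be sketched as follows. Treating missing funds as an immediate drop, and applying a floor of one to the block count so that new collators are not zeroed out entirely, are illustrative assumptions of this sketch:

```python
def candidate_priority(funds, past_blocks, proximity, damages_cap):
    """Score a parachain block candidate as the product of the collator
    account's funds (capped at the punitive-damages amount), the count of
    previously successful blocks, and the winning-ticket proximity factor
    from 6.5.4.  Returns None for unfunded candidates, which are dropped."""
    if funds <= 0:
        return None                      # unsigned/unfunded: drop outright
    return min(funds, damages_cap) * max(past_blocks, 1) * proximity
```

A well-funded collator gains nothing past the damages cap, so the cap bounds how much priority can simply be bought.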

6.6. Interchain Transaction Routing. Interchain transaction routing is one of the essential maintenance tasks of the relay-chain and its validators. This is the logic which governs how a posted transaction (often shortened simply to "post") gets from being a desired output of one source parachain to being a non-negotiable input of another destination parachain, without any trust requirements.

We choose the wording above carefully; notably, we do not require that a transaction on the source parachain explicitly sanctioned the post. The only constraint we place upon our model is that parachains must provide, packaged as part of their overall block-processing output, the posts which result from the block's execution.

These posts are structured as several FIFO queues; the number of lists is known as the routing base and might be around 16. Notably, this number represents the number of parachains we can support without resorting to multi-phase routing. Initially, Polkadot will support this kind of direct routing, though we will outline a possible multi-phase routing process ("hyper-routing") as a means of scaling out well beyond the initial set of parachains.

We assume that all participants know the sub-groupings for the next two blocks n, n + 1. In summary, the routing system follows these stages:

• CollatorS: contact members of Validators[n][S]
• CollatorS: for each sub-group s: ensure that at least one member of Validators[n][s] is in contact
• CollatorS: for each sub-group s: assume egress[n-1][s][S] is available (all incoming post data to S from the last block)
• CollatorS: compose the candidate block b for S: (b.header, b.ext, b.proof, b.receipt, b.egress)
• CollatorS: send the proof information proof[S] = (b.header, b.ext, b.proof, b.receipt) to Validators[n][S]
• CollatorS: ensure that the external transaction data b.ext is made available to other collators and validators
• CollatorS: for each sub-group s: send the egress information egress[n][S][s] = (b.header, b.receipt, b.egress[s]) to the receiving sub-group's members for the next block, Validators[n+1][s]
• ValidatorV: pre-connect all same-set members for the next block: let N = Chain[n+1][V]; connect to all validators v such that Chain[n+1][v] = N
• ValidatorV: collate all data ingress for this block: for each sub-group s: retrieve egress[n-1][s][Chain[n][V]], fetching from other validators v such that Chain[n][v] = Chain[n][V], possibly going via randomly selected other validators for proof of the attempt
• ValidatorV: accept the candidate proof for this block, proof[Chain[n][V]]; vote on block validity
• ValidatorV: accept candidate egress data for the next block: for each sub-group s, accept egress[n][s][N]; vote on block egress availability; republish among interested validators v such that Chain[n+1][v] = Chain[n+1][V]
• ValidatorV: repeat until consensus is reached

Here, egress[n][from][to] is the current egress queue of post information from parachain "from" to parachain "to" at block number n. CollatorS is a collator for parachain S. Validators[n][s] is the set of validators for parachain s at block number n. Conversely, Chain[n][v] is the parachain to which validator v is assigned at block number n. block.egress[to] is the egress queue of posts from some parachain block whose destination parachain is "to".

Since collators collect (transaction) fees based upon their block becoming canonical, they are incentivised to ensure that, for each next-block destination, the sub-group's members are informed of the egress queue from the current block. Validators are incentivised only to reach consensus on a (parachain) block, and as such care little about which collator's block ultimately becomes canonical. In principle, a validator could form an allegiance with a collator and conspire to reduce the chances of other collators' blocks becoming canonical; however, this is difficult to arrange, owing to the random selection of validators for parachains, and can be defended against by reducing the fees payable for parachain blocks which hold up the consensus process.

6.6.1. External Data Availability. Ensuring that a parachain's external data is actually available is a perennial issue for decentralised systems which aim to distribute workload across the network. At the heart of it is the availability problem, which states that, since it is impossible to make a non-interactive proof of availability (or any kind of proof of unavailability), in order for a BFT system to properly validate any transition whose correctness relies upon the availability of some external data, the maximum number of acceptably Byzantine nodes in the system, plus one, must attest to the data being available.

For a properly scaling system such as Polkadot, this presents a problem: if a constant proportion of validators must attest to the availability of the data, and assuming that validators will want actually to store the data before asserting that it is available, then how do we avoid bandwidth/storage requirements growing with the size of the system (and thus the number of validators)? One possible answer would be to have a separate set of validators (the availability guarantors) whose number grows sub-linearly with the size of Polkadot as a whole, as described in 6.5.3.

We also have a second trick. As a group, collators have an intrinsic incentive to ensure that all data is available for their chosen parachain, since without it they are unable to author further blocks from which they may collect transaction fees. Collators also form a group whose membership is diverse (by virtue of the random assignment of parachain validator groups) and whose claims are neither trivial nor straightforward to prove.

Polkadot:异构多链框架的愿景 草案1 15 来证明。因此,最近的整理者(也许是最后几千个区块)被允许向 特定平行链的外部数据的可用性 阻止 validators 以获得少量债券。 验证者必须联系那些来自明显违规的 validator 小组的作证者,要么获取数据并将其返回给整理者,要么升级 通过证明缺乏可用性来解决问题(直接拒绝提供数据将被视为没收债券的罪行,因此行为不当的 validator 可能只是 断开连接)并联系其他 validators 运行相同的测试。在后一种情况下,整理人的保证金 被返回。 一旦达到可以做出此类不可用性证明的 validator 的法定人数,他们就会被释放, 行为不当的子组会受到惩罚,并且区块会被恢复。 6.6.2.帖子路由。每个平行链标头都包含一个 出口特里树根;这是包含以下内容的 trie 的根 路由基础 bin,每个 bin 都是一个串联列表 出口职位。 Merkle 证明可以跨 平行链 validators 来证明特定平行链的 区块对于特定的目标平行链有一个特定的出口队列。 在处理平行链区块开始时,每个 其他平行链的出口队列绑定到该块是 合并到我们块的入口队列中。我们假设强, 可能是 CSPR9,子块排序以实现确定性操作,在任何操作之间都没有偏袒 平行链区块配对。整理者计算新队列 并根据平行链排出出口队列 逻辑。 显式写入入口队列的内容 进入平行链区块。 这样做有两个主要目的: 首先,这意味着平行链可以与其他平行链隔离地进行无需信任的同步。其次, 它简化了整个入口的数据逻辑 队列无法在单个块中处理; validators 和整理者能够处理以下块 无需专门获取队列的数据。 如果平行链的入口队列高于阈值 块处理结束时的金额,然后对其进行标记 中继链饱和,无法再发送任何消息 交付给它,直到它被清除为止。 默克尔证明是 用于证明整理者操作的保真度 平行链区块的证明。 6.6.3.批判。与此基本相关的一个小缺陷 机制是炸弹后攻击。 这就是所有 平行链发送尽可能多的帖子 到特定的平行链。虽然这会限制目标的 立即进入队列,不会造成任何损坏 标准事务 DoS 攻击。 运行正常,具有一组良好同步和 非恶意收集者和 validators,对于 N 个平行链, 每个平行链共有 N × M validator 和 L 个整理者,我们 可以将每个块的总数据路径分解为: 验证者:M −1+L+L:其他 validator 为 M −1 在平行链集合中,L 代表每个提供候选平行链区块的收集者,第二个 L 代表每个收集者 下一个块需要前一个块的出口有效负载。 (后者实际上更像是最坏情况 操作,因为整理者很可能会共享此类 数据。) Collator: M +kN: M 用于连接到每个相关的 平行链块 validator,kN 用于将出口有效负载播种到每个平行链 validator 组的某个子集 下一个区块(可能还有一些受青睐的整理者)。 因此,每个节点的数据路径呈线性增长 与系统的整体复杂性。虽然这是 合理的,当系统扩展到数百或数千条平行链时,可能会出现一些通信延迟 吸收以换取较低的复杂性增长率。 在这种情况下,可以使用多阶段路由算法 为了减少瞬时路径的数量 以引入存储缓冲区和延迟为代价。 6.6.4.超立方体路由。超立方体路由是一种机制,主要可以作为对 基本路由机制如上所述。 本质上, 我们不是通过平行链和子组节点的数量来增加节点连接性,而是仅通过 平行链的对数。帖子可能会在以下之间传输 几个平行链的队列正在等待最终交付。 路由本身是确定性的且简单的。我们从 限制入口/出口队列中的垃圾箱数量; 它们不是平行链的总数,而是 是路由基础 (b) 。这将被固定为数字 平行链的数量发生了变化,路由指数 (e) 反而被提高。在这个模型下,我们的消息量 随着 O(be) 增长,路径保持不变 和延迟(或交付所需的块数) 与 O(e)。 我们的路由模型是 e 维的超立方体, 立方体的每一面都有 b 个可能的位置。 每个块,我们沿着单个轴路由消息。我们 以循环方式交替轴,从而保证 e 块在最坏情况下的交付时间。 作为平行链处理的一部分,国外绑定 在入口队列中找到的消息将立即路由到适当的出口队列的容器,给定 当前块号(以及路由尺寸)。这个 过程需要为每一跳进行额外的数据传输 在送货路线上,但这本身就是一个问题 可以通过使用一些替代方法来缓解 数据有效负载传输并且仅包括参考, 而不是 post-trie 中帖子的完整有效负载。 此类系统超立方体路由的示例 对于 4 个平行链,b = 2 且 e = 2 可能是: 阶段 0,在每条消息 M 上: • sub0: 如果 Mdest ∈{2, 3} 则 sendTo(2) 否则保留 • 
sub1: 如果 Mdest ∈{2, 3} 则 sendTo(3) 否则保留 • sub2:如果 Mdest ∈{0, 1} 则 sendTo(0),否则保留 • sub3:如果 Mdest ∈{0, 1} 则 sendTo(1),否则保留 第 1 阶段,在每条消息 M 上: • sub0:如果 Mdest ∈{1, 3} 则 sendTo(1),否则保留 • sub1:如果 Mdest ∈{0, 2} 则 sendTo(0),否则保留 • sub2:如果 Mdest ∈{1, 3} 则 sendTo(3),否则保留 • sub3:如果 Mdest ∈{0, 2} 则 sendTo(2),否则保留 这里的两个维度很容易看出,就像第一个维度一样 目标索引的两位;对于第一个块, 单独使用高阶位。 第二块交易 与低位。一旦两者都发生(任意 order)然后帖子将被路由。 9加密安全的伪随机
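A minimal sketch of the per-phase routing step, checked against the four-parachain example above; routing the most significant base-b digit first matches the example's phase 0, though the text notes that the axis order is arbitrary:

```python
def next_hop(current, dest, phase, b=2, e=2):
    """One hyper-cube routing hop: phase p (mod e) corrects a single
    base-b digit of the current parachain index, most significant digit
    first, so any post arrives within e phases."""
    axis = phase % e
    shift = b ** (e - 1 - axis)            # weight of the digit routed now
    cur_digit = (current // shift) % b
    dest_digit = (dest // shift) % b
    return current + (dest_digit - cur_digit) * shift
```

Routing a post from parachain 0 to parachain 3 takes the two hops 0 → 2 → 3, matching sub0's sendTo(2) in phase 0 and sub2's sendTo(3) in phase 1.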

6.6.5. Maximising Serendipity. One alteration to the basic proposal would see a fixed total of c^2 - c validators, with c - 1 validators in each sub-group. Each block, rather than the validators being repartitioned among parachains in an unstructured fashion, each validator of each parachain sub-group would instead be assigned to a unique and different parachain sub-group on the following block. This leads to the invariant that between any two blocks, and for any two pairings of parachain, there exist two validators who have swapped parachain responsibilities. While this cannot be used to gain absolute guarantees on availability (a single validator will occasionally drop offline, even if benevolent), it can nonetheless optimise the general case.

This approach is not without its complications. The addition of a parachain would also require a reorganisation of the validator groups. Moreover, the number of validators, being tied to the square of the number of parachains, would start out very small and eventually grow far too fast, becoming unsustainable after around 50 parachains. None of these are fundamental problems. In the first case, validator groups must be reorganised periodically anyway. Regarding the size of the validator set, when it is too small, multiple validators may be assigned to the same parachain, applying an integer factor to the total number of validators. A multi-phase routing mechanism such as the hyper-cube routing discussed in 6.6.4 would alleviate the requirement for large numbers of validators when the number of chains is high.

6.7. Parachain Validation. A validator's chief purpose is to attest, as a well-bonded actor, that a parachain's block is valid, including but not limited to any state transition, any external transactions included, the execution of any posts waiting in the ingress queue and the final state of the egress queue. The process itself is fairly simple: once the validators have sealed the previous block, they are free to begin working on providing a candidate parachain block for the next round of consensus.

Initially, a validator sources a parachain block candidate either from a parachain collator (described below) or from one of its co-validators. The parachain block candidate data includes the block's header, the previous block's header, any external input data included (for Ethereum and Bitcoin such data would be termed transactions, though in principle it may comprise arbitrary data structures for arbitrary purposes), egress-queue data, and internal data needed to prove the validity of the state transition (for Ethereum this would be the various state/storage trie nodes required to execute each transaction). Experimental evidence shows this full dataset for a recent Ethereum block to be at most a few hundred KiB.

Simultaneously, if not already done, the validator will attempt to retrieve information pertaining to the previous block's transition, initially from the previous block's validators and later from all validators who signed for the availability of the data.

Once a validator has received such a candidate block, they validate it locally. The validation process is contained within the parachain class's validator module, a consensus-sensitive software module which must be written for any implementation of Polkadot (though in principle a library with a C ABI could enable a single library to be shared between implementations, with the appropriate reduction in safety coming from there being only a single "reference" implementation).

The process takes the previous block's header and verifies its identity against the recently agreed relay-chain block in which its hash should be recorded. Once the parent header's validity is ascertained, the validation function of the specific parachain class can be called. This is a single function accepting a number of data fields (roughly those given above) and returning a simple Boolean proclaiming the block's validity.

Most such validation functions will first check the header fields which can be derived directly from the parent block (e.g. parent hash, number). Following this, they will populate whatever internal data structures are necessary for processing the transactions and/or posts. For an Ethereum-like chain, this amounts to populating a trie database with the nodes necessary for the full execution of the transactions. Other chain classes may have other preparatory mechanisms.

Once done, the ingress posts and external transactions (or whatever the external data represents) are enacted, balanced according to the chain's specification. (A sensible default might be to require that all ingress posts be processed before external transactions are serviced, but this should be for the parachain's logic to decide.) Through this enactment, a series of egress posts is created, and it is verified that these do indeed match the collator's candidate. Finally, the properly populated header is checked against the candidate's header.

With a fully validated candidate block, the validator can then vote for the hash of its header and send all requisite validation information to the co-validators in its sub-group.

6.7.1. Parachain Collators. Parachain collators are unbonded operators who fulfil much of the task that miners perform on present-day blockchain networks. They are specific to a particular parachain. In order to operate, they must maintain both the relay-chain and the fully synchronised parachain. The precise meaning of "fully synchronised" will depend on the class of parachain, though it will always include the current state of the parachain's ingress queue. In Ethereum's case it also involves at least maintaining a Merkle-tree database of the last few blocks, but might additionally include various other data structures such as Bloom filters for account existence, familial information, logging outputs and reverse lookup tables for block numbers.

In addition to keeping the two chains synchronised, a collator must also "fish" for transactions by maintaining a transaction queue and accepting properly validated transactions from the public network. With the queue and the chain, it is able to create new candidate blocks for the validators chosen at each block (whose identities are known, since the relay-chain is synchronised) and provide them, along with various ancillary information such as proofs of validity, via the peer network.

For its trouble, it collects all of the fees relating to the transactions it includes. Various economics float around this arrangement. In a heavily competitive market where there is a surplus of collators, it is possible that the transaction fees be shared with the parachain validators in order to incentivise the inclusion of a particular collator's block. Equally, some collators may raise the fees that need to be paid in order to make their blocks more attractive to validators. In such cases, a natural market should form, with transactions paying higher fees skipping the queue and gaining faster inclusion in the chain.
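The validation flow of 6.7 can be sketched as a single Boolean function. The dictionary field names and the toy `execute` transition below are illustrative assumptions, not a Polkadot API:

```python
def validate_candidate(candidate, recorded_parent_hash, execute):
    """Return True only if the candidate's parentage matches what the
    relay-chain recorded, and re-enacting its inputs reproduces its
    claimed egress queue and header."""
    # 1. Parent identity must match the hash agreed on the relay-chain.
    if candidate["parent_hash"] != recorded_parent_hash:
        return False
    # 2. Header fields derivable from the parent must be consistent.
    if candidate["header"]["parent_hash"] != candidate["parent_hash"]:
        return False
    # 3. Re-enact ingress posts and external transactions, then compare
    #    the resulting egress queue and header with the candidate's.
    egress, header = execute(candidate["ingress"], candidate["ext"],
                             candidate["proof"])
    return egress == candidate["egress"] and header == candidate["header"]

def toy_execute(ingress, ext, proof):
    """Toy class-specific transition: sum everything into a counter
    (ingress posts first) and emit the total as the sole egress post."""
    total = proof + sum(ingress) + sum(ext)
    return [total], {"parent_hash": 0xAA, "state": total}
```

A candidate whose claimed egress matches the re-execution validates; tampering with the egress queue, or presenting a parent the relay-chain never recorded, fails.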

6.8. Networking. Networking on traditional blockchains such as Ethereum and Bitcoin has rather simple requirements: all transactions and blocks are broadcast in a simple undirected gossip. Synchronisation is more involved, especially with Ethereum, but in reality this logic is contained in the peer strategy rather than in the protocol itself, which resolves around a few request-and-answer message types.

While Ethereum progressed on current protocol offerings with the devp2p protocol, which allows many sub-protocols to be multiplexed over a single peer connection and thus lets the same peer overlay support many p2p protocols simultaneously, the Ethereum portion of the protocol remains relatively simple, and the p2p protocol as a whole is yet to be completed, with important features such as QoS support still missing. Sadly, the desire to create a more ubiquitous "web 3" protocol largely failed, the only projects using it being those explicitly funded by the Ethereum crowd-sale.

Polkadot's requirements are rather more exacting. Rather than a wholly uniform network, Polkadot has several types of participant, each with different requirements over their peer make-up, and several "avenues" of communication along which particular data tend to travel between particular participants. This means that a substantially more structured network overlay, and a protocol supporting it, will likely be necessary. Furthermore, extensibility is desirable in order to facilitate future additions, such as new kinds of "chain", which may themselves require a novel overlay structure.

While an in-depth discussion of what the networking protocol might look like is beyond the scope of this document, some requirements analysis is reasonable. We can roughly break down our network participants into two sets (relay-chain and parachain participants), each of several subsets. We can also state that the participants of any given parachain are interested only in talking among themselves, rather than with the participants of other parachains:

• Relay-chain participants:
• Validators: P, split into subsets P[s] for each parachain
• Availability guarantors: A (these may be represented by validators in the basic form of the protocol)
• Relay-chain clients: M (note that each member of each parachain set will also tend to be a member of M)
• Parachain participants:
• Parachain collators: C[0], C[1], ...
• Parachain fishermen: F[0], F[1], ...
• Parachain clients: S[0], S[1], ...
• Parachain light clients: L[0], L[1], ...

In general, we name the particular classes of communication that will tend to take place between the members of these sets:

• P | A <-> P | A: the full set of validators/guarantors must be well-connected in order to achieve consensus.
• P[s] <-> C[s] | P[s]: each validator, as a member of a given parachain group, will tend to gossip with other such members, as well as with the collators of that parachain, to discover and share candidate blocks.
• A <-> P[s] | C | A: each availability guarantor will need to collect consensus-sensitive cross-chain data from the validators assigned to it; collators may also optimise the chance of consensus on their block by advertising it to availability guarantors. Once a guarantor has the data, it will be disbursed to other such guarantors to facilitate consensus.
• P[s] <-> A | P[s']: parachain validators will need to collect additional input data from the previous set of validators or from the availability guarantors.
• F[s] <-> P: when reporting, fishermen may place a claim with any participant.
• M <-> M | P | A: general relay-chain clients disburse data from validators and guarantors.
• S[s] <-> S[s] | P[s] | A: parachain clients disburse data from the validators/guarantors.
• L[s] <-> L[s] | S[s]: parachain light clients disburse data from the full clients.

To ensure an efficient transport mechanism, a "flat" overlay network, like Ethereum's devp2p, in which each node does not (non-arbitrarily) differentiate the fitness of its peers, is unlikely to be suitable. A reasonably extensible peer-selection and discovery mechanism will likely need to be included within the protocol, together with aggressive forward planning to ensure that the right sorts of peers are "serendipitously" connected at the right time.

The precise strategy of peer make-up will differ for each class of participant. To scale out properly to a multi-chain, collators will either need to be continuously reconnecting to the accordingly selected validators, or will need ongoing agreements with a subset of validators to ensure that they are not disconnected during the large majority of the time for which they are useless to those validators. Collators will also naturally attempt to maintain one or more stable connections into the availability-guarantor set, to ensure the swift propagation of their consensus-sensitive data.

Availability guarantors will mostly aim to maintain stable connections to each other and to validators (for consensus, and for the consensus-critical parachain data to which they attest), as well as to some collators (for the parachain data) and to some fishermen and full clients (for the dispersal of information). Validators will tend to seek out other validators, particularly those in the same sub-group, along with any collators able to supply them with parachain block candidates.

Fishermen, as well as general relay-chain and parachain clients, will generally aim to keep a connection open to a validator or guarantor, but otherwise to plenty of other nodes similar to themselves. Parachain light clients will similarly aim to be connected to a full client of the parachain, if not just to other parachain light clients.

6.8.1. The Problem of Peer Churn. In the basic protocol proposal, the validator sub-group assigned to validate each parachain is randomly drawn anew each block. This can present the problem that, for every block, different (non-peer) nodes are required to pass data between one another.

One must either rely upon a fairly-distributed and well-connected peer network to ensure that the hop-distance (and therefore worst-case latency) only grows with the logarithm of the network size (a Kademlia-like protocol [13] may help here), or one must introduce longer block times to allow the necessary connection negotiation to take place in order to keep a peer-set that reflects the node's current communication needs. Neither of these is a great solution: long block times forced upon the network may render it useless for particular applications and chains, while even a perfectly fair and connected network will suffer substantial bandwidth wastage as it scales, owing to uninterested nodes having to forward data useless to them.

While both directions may form part of the solution, a reasonable optimisation to help minimise latency would be to restrict the volatility of these parachain validator sets, either by reassigning membership only between series of blocks (e.g. in groups of 15, which at a four-second block time would mean altering connections only once per minute), or by rotating membership incrementally, e.g. changing one member at a time (so that, with 15 validators assigned to each parachain, a full minute would pass on average between completely distinct sets). By limiting the amount of peer churn, and by ensuring through the partial predictability of parachain sets that advantageous peer connections are made well in advance, we can help ensure that each node keeps a permanently serendipitous selection of peers.

6.8.2. Path to an Effective Network Protocol. The most effective and reasonable development effort will likely focus on utilising a pre-existing protocol rather than rolling our own. Several peer-to-peer base protocols exist that we might use or augment, including Ethereum's own devp2p [22], IPFS's libp2p [1] and GNU's GNUnet [4]. A full review of these protocols and their suitability for building a modular peer network supporting certain structural guarantees, dynamic peer steering and extensible sub-protocols is well beyond the scope of this document, but will be an important step in the implementation of Polkadot.

7. Practicalities of the Protocol

7.1. Interchain Transaction Payment. While a great amount of freedom and simplicity is gained by dropping the need for a holistic computation-resource accounting framework like Ethereum's gas, this does raise an important question: without gas, how does one parachain prevent another from forcing it to do computation? While we can rely on transaction-post ingress-queue buffers to prevent one chain from spamming another with transaction data, there is no equivalent mechanism provided by the protocol to prevent the spamming of transaction processing.

This is a problem left to a higher level. Since chains are free to attach arbitrary semantics to the incoming transaction-post data, we can ensure that computation must be paid for before it is begun. In a similar vein to the model espoused by Ethereum Serenity, we can imagine a "break-in" contract within a parachain which allows a validator to be guaranteed payment in exchange for the provision of a particular volume of processing resources. These resources may be measured in something like gas, but could also follow some entirely novel model such as subjective time-to-execute or a Bitcoin-like flat-fee model.

On its own this is not so useful, since we cannot readily assume that the off-chain caller has available to them whatever value mechanism is recognised by the break-in contract. However, we can imagine a secondary "break-out" contract in the source chain. The two contracts together would form a bridge, recognising each other and providing value-equivalence. (Staking tokens, available to each, could be used to settle the balance of payments.) Calling into another such chain would mean proxying through this bridge, which would provide the means of negotiating the value transfer between chains in order to pay for the computation resources required on the destination parachain.

7.2. Additional Chains. While the addition of a parachain is a relatively cheap operation, it is not free. More parachains means fewer validators per parachain and, eventually, a larger number of validators each with a smaller average bond. While the issue of a lower coercion cost for attacking a parachain is mitigated by the fishermen, the growing validator set essentially forces a higher degree of latency, owing to the mechanics of the underlying consensus method. Furthermore, each parachain brings with it the potential to grief validators with an over-burdensome validation algorithm. As such, there will be some "price" that the validators and/or the stake-holding community will extract for the addition of a new parachain. This market for chains will likely see the addition of either:

• chains that probably make zero net contribution, paying (in terms of locking up or burning staking tokens) to be made a part of the network (e.g. consortium chains, Doge-chains, app-specific chains);
• chains that deliver intrinsic value to the network by adding particular functionality difficult to get elsewhere (e.g. confidentiality, internal scalability, service tie-ins).

Essentially, the community of stakeholders will need to be incentivised to add child chains, whether financially or through the desire to add featureful chains to the relay. It is envisioned that newly added chains will have a very short notice period for removal, allowing new chains to be experimented with without any risk of compromising the medium- or long-term value proposition.

8. Conclusion

We have outlined a direction one may take to author a scalable, heterogeneous multi-chain protocol with the potential to be backwards-compatible with certain pre-existing blockchain networks. Under such a protocol, participants work in enlightened self-interest to create an overall system which can be extended in an exceptionally free manner, and without the typical cost to existing users that comes with a standard blockchain design. We have given a rough outline of the architecture it would take, including the nature of the participants, their economic incentives and the processes under which they must engage. We have identified a basic design and discussed its strengths and limitations; accordingly, we have noted further directions which may ease those limitations and yield further ground towards a fully scalable blockchain solution.

8.1. Missing Material and Open Questions. Network forking is always a possibility from divergent implementations of the protocol. Recovery from such an exceptional condition was not discussed. Given that the network will necessarily have a non-zero period of finalisation, it should not be a great issue to recover from a relay-chain fork, though doing so will require careful integration into the consensus protocol.

Bond confiscation and, conversely, reward provision have not been deeply explored. At present we assume rewards are provided on a winner-takes-all basis; this may not give the best incentivisation model for fishermen. A short-period commit-reveal process would allow many fishermen to claim the prize, giving a fairer distribution of rewards; however, the process could lead to additional latency in the discovery of misbehaviour.

8.2. Acknowledgments. Many thanks to all of the proof-readers who have helped get this into a vaguely presentable shape. In particular, Peter Czaban, Björn Wagner, Ken Kappler, Robert Habermeier, Vitalik Buterin, Reto Trinkler and Jack Petersson. Thanks to all the people who have contributed ideas or the beginnings thereof; Marek Kotewicz and Aeron Buchanan deserve especial mention. And thanks to everyone else for their help along the way. All errors are my own.

Portions of this work, including the initial research into consensus algorithms, were funded in part by the British Government under the Innovate UK programme.