Optimism Technical Documentation
Optimism has no traditional whitepaper. As an optimistic rollup Layer 2 for Ethereum, its design and specifications are documented in its technical documentation, the OP Stack specification, and research publications rather than in a single formal academic document.
Abstract
The paper addresses the problem of scalability in decentralized blockchains by analyzing the trade-off between transaction throughput and the hardware requirements to run a node. Rollups, i.e. technologies for on-chain verification of blocks executed off-chain, are presented in the form of fault or validity proofs. We compare Optimistic Rollups and Validity Rollups with respect to withdrawal time, transaction costs, optimization techniques, and compatibility with the Ethereum ecosystem. Our analysis reveals that Optimism Bedrock currently has a gas compression rate of approximately 20:1, while StarkNet achieves a storage write cost compression rate of around 24:1. We also discuss techniques to further optimize these rates, such as the use of cache contracts and Bloom filters. Ultimately, our conclusions highlight the trade-offs between complexity and agility in the choice between Optimistic and Validity Rollups.

Keywords: Blockchain, Scalability, Rollup
DLT 2023: 5th Distributed Ledger Technology Workshop, May 25-26, 2023, Bologna, Italy. [email protected] (L. Donno), https://lucadonnoh.github.io/, ORCID 0000-0001-9221-3529. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.
Introduction
Blockchain technology has gained significant attention due to its potential to revolutionize various industries. However, scalability remains a major challenge, as most blockchains face a trade-off between scalability, decentralization, and security, commonly referred to as the Scalability Trilemma [1, 2]. To increase the throughput of a blockchain, a trivial solution is to increase its block size. In the context of Ethereum, this means increasing the maximum amount of gas a block can hold. Since each full node must validate every transaction of every block, hardware requirements grow with throughput, leading to greater centralization of the network. Some blockchains, such as Bitcoin and Ethereum, optimize their design to maximize architectural decentralization, while others, such as the Binance Smart Chain and Solana, are designed to be as fast and cheap as possible. Decentralized networks artificially limit the throughput of the blockchain to lower the hardware requirements to participate in the network.

Over the years, attempts have been made to find a solution to the Trilemma, such as state channels [3] and Plasma [4, 5]. These solutions have the characteristic of moving some activity off-chain, linking on-chain activity to off-chain activity using smart contracts, and verifying
on-chain what is happening off-chain. However, both Plasma and state channels are limited in their support of general smart contracts.

Rollups are blockchains (called Layer 2 or L2) that publish their blocks on another blockchain (Layer 1 or L1) and therefore inherit its consensus, data availability and security properties. Unlike other solutions, they support arbitrary computation. Rollups have three main components:

• Sequencers: nodes that receive Rollup transactions from users and combine them into a block that is sent to Layer 1. The block consists of at least the state root (e.g. a Merkle root) and the data needed to reconstruct and validate the state. The Layer 1 defines the canonical blockchain of the L2 by establishing the ordering of the published data.
• Rollup full nodes: nodes that obtain, process and validate Rollup blocks from Layer 1 by verifying that the root is correct. If a block contains invalid transactions it is discarded, which prevents Sequencers from creating valid blocks that include invalid transactions.
• Rollup light nodes: nodes that obtain Rollup blocks from Layer 1 but do not compute the new state themselves. They verify that the new state root is valid using techniques such as fault or validity proofs.

Rollups achieve scalability by decreasing the amortized cost of transactions as the number of users increases. This is because the cost of ensuring blockchain validity grows sub-linearly with respect to the cost of verifying transactions individually. Rollups differ according to the mechanism by which they ensure the validity of transaction execution at light nodes: in Optimistic Rollups it is ensured by an economic model and by fault proofs, while in Validity Rollups it is cryptographically ensured using validity proofs. Light nodes can be implemented as smart contracts on Layer 1. They accept the root of the new state and verify validity or fault proofs: these Rollups are therefore called Smart Contract Rollups.
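The role of the state root that Sequencers publish can be illustrated with a toy Merkle tree. This is only a sketch: it uses SHA-256 instead of Ethereum's keccak256, a plain binary tree instead of a Merkle-Patricia trie, and all helper names (`merkle_root`, `merkle_proof`, `verify`) are illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree (duplicating the last node on odd levels)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes proving inclusion of leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(root, leaf, proof):
    """A light node recomputes the root from a leaf and its proof."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
assert verify(root, b"tx2", merkle_proof(txs, 2))
```

A light node that trusts the root thus only needs a logarithmic number of hashes, not the full transaction list, to check that a given transaction is part of the committed state.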
If light nodes are independent, they are called Sovereign Rollups [6]. The advantage of using a Smart Contract Rollup is the ability to build a trust-minimized bridge between the two blockchains: since the validity of the L2 state is proven to L1, a system of transactions from L2 to L1 can be implemented, allowing withdrawals. The disadvantage is that the cost of transactions depends on the cost of verifying the state on L1: if the base layer is saturated by other activities, the cost of transactions on the Rollup also increases. The data and consensus layers are the ones that determine the security of the system, as they define the ordering of transactions, prevent attacks and make data available to prove state validity.

Paper contribution
In this paper, we study Optimistic and Validity Rollups, two innovative solutions to the Scalability Trilemma, with a focus on notable implementations such as Optimism Bedrock and StarkNet. Our contributions include a comprehensive comparison of these solutions, an analysis of withdrawal times, and a discussion of a possible attack on Optimism Bedrock. Additionally, we calculate their gas compression ratios, provide application-specific optimizations, and present the advantages and disadvantages of moving away from the Ethereum Virtual Machine (EVM).
Paper structure
The paper is organized as follows. Section 2 introduces Optimistic Rollups by analyzing Optimism Bedrock. Section 3 introduces Validity Rollups by analyzing StarkNet. Section 4 compares the two solutions. Finally, section 5 draws some conclusions.
Optimistic Rollups
The idea of optimistically accepting the output of blocks without verifying their execution is already present in the Bitcoin whitepaper [7], in its discussion of light nodes. These nodes only follow the header chain by verifying the consensus rules, making them vulnerable to accepting blocks containing invalid transactions in the event of a 51% attack. Nakamoto proposes to solve this problem by using an "alert" system to warn light nodes that a block contains invalid transactions. This mechanism is first implemented by Al-Bassam, Sonnino and Buterin [8], in which a fault proof system based on error correction codes [9] is used. In order to enable the creation of fault proofs, the data from all blocks, including invalid blocks, must be available to the network: this is the Data Availability Problem, which is solved using a probabilistic data sampling mechanism. The first Optimistic Rollup design was presented by John Adler and Mikerah Quintyne-Collins in 2019 [10], in which blocks are published on another blockchain that defines their consensus on ordering.

2.1. Optimism Bedrock
Bedrock [11] is the latest version of Optimism, a Smart Contract Rollup. The previous version, the Optimistic Virtual Machine (OVM), required an ad hoc compiler to compile Solidity into its own bytecode; in contrast, Bedrock is fully equivalent to the EVM in that the execution engine follows the Ethereum Yellow Paper specification [12].

2.1.1. Deposits
Users can deposit transactions through a contract on Ethereum, the Optimism Portal, by calling the depositTransaction function. When a transaction is executed, a TransactionDeposited event is emitted, which each node in the Rollup listens for to process deposits. A deposited transaction is an L2 transaction that is derived from L1.
If the caller of the function is a contract, the address is transformed by adding a constant value to it: this prevents attacks in which a contract on L1 has the same address as a contract on L2 but different code. The inclusion on L2 of a deposited transaction is guaranteed by the specification within a sequencing window. Deposited transactions are a new EIP-2718 compatible transaction type [13] with prefix 0x7E, where the rlp-encoded fields are:

• bytes32 sourceHash: hash that uniquely identifies the source of the transaction.
• address from: the address of the sender.
• address to: the receiver address, or the zero address if the deposited transaction is a contract creation.
• uint256 mint: the value to be created on L2.
• uint256 value: the value to be sent to the recipient.
• bytes data: the input data.
• bytes gasLimit: the gas limit of the transaction.

The sourceHash is computed as the keccak256 hash of the L1 block hash and the L1 log index, uniquely identifying an event in a block.

Since deposited transactions are initiated on L1 but executed on L2, the system needs a mechanism to pay on L1 for the gas spent on L2. One solution is to send ETH through the Portal, but this implies that every caller (even indirect callers) must be marked as payable, and this is not possible for many existing projects. The alternative is to burn the corresponding gas on L1. The gas g allocated to a deposited transaction is called guaranteed gas. The L2 gas price on L1 is not automatically synchronized but is estimated using a mechanism similar to EIP-1559 [14]. The maximum amount of gas guaranteed per Ethereum block is 8 million, with a target of 2 million. The quantity c of ETH required to pay for gas on L2 is c = g · b_L2, where b_L2 is the basefee on L2. The contract on L1 burns an amount of gas equal to c / b_L1, where b_L1 is the basefee on L1. The gas spent to call depositTransaction is reimbursed on L2: if this amount is greater than the guaranteed gas, no gas is burned.

The first transaction of a Rollup block is an L1 attributes deposited transaction, used to register on an L2 predeploy the attributes of Ethereum blocks. The attributes that the predeploy gives access to are the block number, the timestamp, the basefee, the block hash and the sequence number, which is the block number of L2 relative to the associated L1 block (also called epoch); this number is reset when a new epoch starts.

2.1.2. Sequencing
The Rollup nodes derive the Optimism chain entirely from Ethereum. This chain is extended each time new transactions are published on L1, and its blocks are reorganized each time Ethereum blocks are reorganized. The Rollup blockchain is divided into epochs.
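The guaranteed-gas accounting described above can be sketched numerically. This is only an illustration: the function names are invented, and converting the ETH value c back into L1 gas units at the L1 basefee is an assumption about how the burn is priced.

```python
# Sketch of the guaranteed-gas accounting for deposited transactions.
# Names (deposit_cost_wei, l1_gas_to_burn) are illustrative, not the
# contract's actual functions.

GWEI = 10**9

def deposit_cost_wei(guaranteed_gas: int, basefee_l2: int) -> int:
    """c = g * b_L2: ETH (in wei) needed to pay for the guaranteed L2 gas."""
    return guaranteed_gas * basefee_l2

def l1_gas_to_burn(cost_wei: int, basefee_l1: int) -> int:
    """Convert the ETH value back into L1 gas units to be burned."""
    return cost_wei // basefee_l1

g = 100_000                            # guaranteed gas for the deposit
c = deposit_cost_wei(g, 1 * GWEI)      # L2 basefee of 1 gwei
burned = l1_gas_to_burn(c, 20 * GWEI)  # L1 basefee of 20 gwei
print(c, burned)  # 100000000000000 5000
```

Note how a higher L1 basefee means fewer L1 gas units need to be burned to destroy the same ETH value c.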
For each block number n of Ethereum, there is a corresponding epoch n. Each epoch contains at least one block, and each block in an epoch contains an L1 attributes deposited transaction. The first block in an epoch contains all transactions deposited through the Portal. Layer 2 blocks may also contain sequenced transactions, i.e. transactions sent directly to the Sequencer.

The Sequencer accepts transactions from users and builds blocks. For each block, it constructs a batch to be published on Ethereum. Several batches can be published together in a compressed form called a channel. A channel can be divided into several frames, in case it is too large for a single transaction. A channel is defined as the ZLIB [15] compression of rlp-encoded batches. The fields of a batch are the epoch number, the epoch hash, the parent hash, the timestamp and the transaction list.

A sequencing window, identified by an epoch, contains a fixed number w of consecutive L1 blocks that a derivation step takes as input to construct a variable number of L2 blocks. For epoch n, the sequencing window n includes the blocks [n, n+w). This implies that the ordering of L2 transactions and blocks within a sequencing window is not fixed until the window ends. A Rollup transaction is called safe if the batch containing it has been confirmed on L1. Frames
are read from L1 blocks to reconstruct batches. The current implementation does not allow the decompression of a channel to begin until all corresponding frames have been received. Invalid batches are ignored. Individual block transactions are obtained from the batches, which are used by the execution engine to apply state transitions and obtain the Rollup state.

2.1.3. Withdrawals
In order to process withdrawals, an L2-to-L1 messaging system is implemented. Ethereum needs to know the state of L2 in order to accept withdrawals, and this is done by publishing the state root of each L2 block on the L2 Output Oracle smart contract on L1. These roots are optimistically accepted as valid (or finalized) if no fault proof is performed during the dispute period. Only addresses designated as Proposers can publish output roots. The validity of output roots is incentivized by having Proposers deposit a stake that is slashed if they are shown to have proposed an invalid root. Withdrawals are initiated by calling the function initiateWithdrawal on a predeploy on L2 and then finalized on L1 by calling the function finalizeWithdrawalTransaction on the previously mentioned Optimism Portal. Finalization proceeds as follows: the output root corresponding to the L2 block is obtained from the L2 Output Oracle; it is verified that it is finalized, i.e. that the dispute period has passed; it is verified that the Output Root Proof matches the Oracle Proof; it is verified, using a Withdrawal Proof, that the hash of the withdrawal is included in it; it is verified that the withdrawal has not already been finalized; and then the call to the target address is executed, with the specified gas limit, amount of Ether and data.

2.1.4. Cannon: the fault proof system
If a Rollup full node, by locally executing batches and deposited transactions, discovers that the Layer 2 state does not match the state root published on-chain by a Proposer, it can execute a fault proof on L1 to prove that the result of the block transition is incorrect.
Because of the overhead, processing an entire Rollup block on L1 is too expensive. The solution implemented by Bedrock is to execute on-chain only the first instruction of disagreement of minigeth, compiling it into the MIPS architecture so that it can be executed on an on-chain interpreter published on L1. minigeth is a simplified version of geth (https://geth.ethereum.org/docs) in which the consensus, RPC and database components have been removed.

To find the first instruction of disagreement, an interactive binary search is conducted between the party who initiated the fault proof and the party who published the output root. When the proof starts, both parties publish on the Challenge contract the root of the MIPS memory state halfway through the execution of the block: if the hashes match, it means that both parties agree on the first half of the execution, so the root at the midpoint of the second half is published next; otherwise, the root at the midpoint of the first half is published, and so on. Doing so reaches the first instruction of disagreement in a number of steps logarithmic in the length of the original execution. If one of the two parties stops interacting, at the end of the dispute period the other participant automatically wins.

To process the instruction, the MIPS interpreter needs access to its memory: since the root is available, the necessary memory cells can be published by proving their inclusion. To access the state of the EVM, use is made of the Preimage Oracle: given the hash of a block it returns
the block header, from which one can get the hash of the previous block and walk back along the chain, or get the hash of the state and logs, from which one can obtain the preimages. The oracle is implemented by minigeth and replaces the database. Queries are made to other nodes to obtain the preimages.
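The interactive bisection described above can be sketched as follows. This is a toy model: commit() stands in for the MIPS memory-state root each party would publish on the Challenge contract, and the traces are synthetic per-step states rather than real execution.

```python
import hashlib

def commit(trace, step):
    """Stand-in for the memory root after `step` instructions."""
    return hashlib.sha256(repr(trace[step]).encode()).hexdigest()

def first_disagreement(defender_trace, challenger_trace):
    """Binary search: O(log n) commitments instead of replaying the block.
    Precondition: the parties agree at step 0 and disagree at the last step."""
    lo, hi = 0, len(defender_trace) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if commit(defender_trace, mid) == commit(challenger_trace, mid):
            lo = mid   # agreement on the first half: bisect the second half
        else:
            hi = mid   # disagreement already at mid: bisect the first half
    return hi          # the single instruction to execute on-chain

honest = list(range(1024))
faulty = honest[:700] + [x + 1 for x in honest[700:]]  # diverges at step 700
assert first_disagreement(honest, faulty) == 700
```

Only the single step returned by the search needs to be executed by the on-chain MIPS interpreter, which is what keeps the fault proof affordable.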
Validity Rollups
The goal of a Validity Rollup is to cryptographically prove the validity of the state transition, given the sequence of transactions, with a short proof that can be verified in sub-linear time compared to the original computation. These kinds of certificates are called computational integrity proofs and are practically implemented with SNARKs (Succinct Non-interactive ARguments of Knowledge), which use arithmetic circuits as their computational model. Different SNARK implementations differ in proving time, verification time, the need for a trusted setup and quantum resistance [16, 17]. STARKs (Scalable Transparent ARguments of Knowledge) [18] are a type of SNARK that does not require a trusted setup and is quantum resistant, while giving up some efficiency in proving and verification compared to other solutions.

3.1. StarkNet
StarkNet is a Smart Contract Validity Rollup developed by StarkWare that uses the STARK proof system to prove its state to Ethereum. To facilitate the construction of validity proofs, a virtual machine different from the EVM is used, whose high-level language is Cairo.

3.1.1. Deposits
Users can deposit transactions via a contract on Ethereum by calling the sendMessageToL2 function. The message is recorded by computing its hash and increasing a counter. Sequencers listen for the LogMessageToL2 event and encode the information in a StarkNet transaction that calls a function of a contract that has the l1_handler decorator. At the end of execution, when the proof of the state transition is produced, the consumption of the message is attached to it and the message is deleted by decreasing its counter. The inclusion of deposited transactions is not required by the StarkNet specification, so a gas market is needed to incentivize Sequencers to publish them on L2. In the current version, because the Sequencer is centralized and managed by StarkWare, the cost of deposited transactions is determined only by the cost of executing the deposit.
This cost is paid by sending ETH to sendMessageToL2. This Ether remains locked on L1 and is transferred to the Sequencer on L1 when the deposited transaction is included in a state transition. The amount of ETH sent is fully spent if the deposited transaction is included, regardless of the amount of gas consumed on L2. StarkNet does not have a system that makes L1 block attributes available automatically. Alternatively, Fossil is a protocol developed by Oiler Network (https://www.oiler.network/) that allows, given the hash of a block, any information to be obtained from Ethereum by publishing preimages.
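The hash-and-counter bookkeeping behind sendMessageToL2 and message consumption can be sketched as follows. The dict stands in for the L1 contract's storage, and the tuple hash and field names are illustrative, not StarkNet's actual message encoding.

```python
import hashlib

class L1MessageRegistry:
    """Toy model of the L1 side of L1-to-L2 messaging."""

    def __init__(self):
        self.pending = {}  # message hash -> counter

    @staticmethod
    def _hash(to_l2: str, selector: str, payload: tuple) -> str:
        # Illustrative hash; the real contract hashes a canonical encoding.
        return hashlib.sha256(repr((to_l2, selector, payload)).encode()).hexdigest()

    def send_message_to_l2(self, to_l2, selector, payload):
        """Record the message by hashing it and increasing its counter."""
        key = self._hash(to_l2, selector, payload)
        self.pending[key] = self.pending.get(key, 0) + 1
        return key

    def consume(self, to_l2, selector, payload):
        """Called when a state-transition proof attests the message was handled."""
        key = self._hash(to_l2, selector, payload)
        if self.pending.get(key, 0) == 0:
            raise ValueError("message was never sent")
        self.pending[key] -= 1

registry = L1MessageRegistry()
key = registry.send_message_to_l2("0xL2contract", "deposit", (42,))
assert registry.pending[key] == 1
registry.consume("0xL2contract", "deposit", (42,))
assert registry.pending[key] == 0
```

The counter (rather than a boolean flag) is what allows the same logical message to be sent, and consumed, multiple times.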
3.1.2. Sequencing
The current state of StarkNet can be derived entirely from Ethereum. Any state difference between transitions is published on L1 as calldata. Differences are published for each contract and are saved as uint256[] with the following encoding:

• The number of fields concerning contract deployments.
• For each deployed contract:
  – The address of the deployed contract.
  – The hash of the deployed contract.
  – The number of arguments of the contract constructor.
  – The list of constructor arguments.
• The number of contracts whose storage has been modified.
• For each modified contract:
  – The address of the modified contract.
  – The number of storage updates.
  – The key-value pairs of the storage addresses with the new values.

The state differences are published in order, so it is sufficient to read them sequentially to reconstruct the state.

3.1.3. Withdrawals
To send a message from L2 to L1, the syscall send_message_to_L1 is used. The message is published to L1 along with the proof by increasing its hash counter, and is finalized by calling the function consumeMessageFromL2 on the StarkGate smart contract on L1, which decrements the counter. Anyone can finalize any withdrawal.

3.1.4. Validity proofs
The Cairo Virtual Machine [19] is designed to facilitate the construction of STARK proofs. The Cairo language allows the computation to be described with a high-level programming language, and not directly as a circuit. This is accomplished by a system of polynomial equations³ representing a single computation: the fetch-decode-execute (FDE) cycle of a von Neumann architecture. The number of constraints is thus fixed and independent of the type of computation, allowing for a single Verifier program for every program whose computation needs to be proved. StarkNet aggregates multiple transactions into a single STARK proof using a shared prover named SHARP.
The proofs are sent to a smart contract on Ethereum, which verifies their validity and updates the Merkle root corresponding to the new state. The sub-linear cost of verifying a validity proof allows its cost to be amortized over multiple transactions.

³ Called Algebraic Intermediate Representation (AIR).
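A decoder for the uint256[] state-diff encoding described in the sequencing subsection can be sketched as follows. This assumes, as an illustration, that the first value counts the words of the deployment section; the field and variable names are hypothetical.

```python
# Sketch of a sequential decoder for the flat uint256[] state-diff array.

def decode_state_diff(words):
    """Read deployments, then per-contract storage diffs, left to right."""
    i = 0
    n_deploy_words = words[i]; i += 1         # size of the deployment section
    deployments = []
    end = i + n_deploy_words
    while i < end:
        address, class_hash, n_args = words[i], words[i + 1], words[i + 2]
        i += 3
        args = words[i:i + n_args]; i += n_args
        deployments.append({"address": address, "hash": class_hash,
                            "constructor_args": args})
    n_contracts = words[i]; i += 1            # contracts with modified storage
    storage_diffs = []
    for _ in range(n_contracts):
        address, n_updates = words[i], words[i + 1]
        i += 2
        updates = {words[i + 2 * k]: words[i + 2 * k + 1]
                   for k in range(n_updates)}
        i += 2 * n_updates
        storage_diffs.append({"address": address, "updates": updates})
    return deployments, storage_diffs

# One deployment (address 0xA, class hash 0xB, one constructor arg 7) and
# one contract (0xC) with a single storage write: key 5 -> value 9.
words = [4, 0xA, 0xB, 1, 7, 1, 0xC, 1, 5, 9]
deployments, diffs = decode_state_diff(words)
assert deployments[0]["constructor_args"] == [7]
assert diffs[0]["updates"] == {5: 9}
```

Because every length is embedded in the array itself, a single left-to-right pass suffices, which is what makes sequential state reconstruction from calldata possible.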
4. Comparison
4.1. Withdrawal time

The most important aspect that distinguishes Optimistic Rollups from Validity Rollups is the time that elapses between the initialization of a withdrawal and its finalization. In both cases, withdrawals are initialized on L2 and finalized on L1.

On StarkNet, finalization is possible as soon as the validity proof of the new state root is accepted on Ethereum: theoretically, it is possible to withdraw funds in the first L1 block following initialization. In practice, the frequency of sending validity proofs to Ethereum is a trade-off between the speed of block finalization and proof aggregation. Currently StarkNet submits validity proofs for verification every 10 hours 4, but this interval is intended to decrease as transaction activity increases.

On Optimism Bedrock it is possible to finalize a withdrawal only at the end of the dispute period (currently 7 days), after which a root is automatically considered valid. The length of this period is mainly determined by the fact that fault proofs can be censored on Ethereum until its end. The success probability of this type of attack decreases exponentially with time:

E[subtracted value] = V · p^n

where n is the number of blocks in an interval, V is the amount of funds that can be subtracted by publishing an invalid root, and p is the probability of successfully performing a censorship attack in a single block. Suppose that this probability is 99%, that the value locked in the Rollup is one million Ether, and that an interval contains 1800 blocks (6 hours of blocks at a 12-second interval): the expected value is about 0.01391 Ether. The system is made secure by asking Proposers to stake a much larger amount of Ether than this expected value. Winzer et al. showed how to carry out a censorship attack using a simple smart contract that ensures that certain areas of memory in the state do not change [20].
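The expected-value figure quoted above can be checked directly:

```python
# Check the expected-value figure from the text: E = V * p^n with
# V = 1,000,000 ETH locked, p = 0.99 per-block censorship probability,
# and n = 1800 blocks (6 hours at one block every 12 seconds).
V = 1_000_000
p = 0.99
n = 1800

expected_value = V * p ** n
print(round(expected_value, 5))  # 0.01391 ETH
```

Even with a 99% per-block censorship probability, the chance of censoring all 1800 consecutive blocks is tiny, which is why the staked amount can comfortably exceed the attacker's expected gain.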
Modeling the attack as a Markov game, the paper shows that censoring is the dominant strategy for a rational block producer if the compensation received is greater than the fee for including the transaction that changes the memory. The value p discussed above can be viewed as the percentage of rational block producers in the network, where "rational" does not take into account possibly penalizing externalities, such as reduced trust in the blockchain that decreases the value of its cryptocurrency.

The following code presents a smart contract that can be used to perform a censorship attack on Bedrock. The attack exploits the incentives of block producers by offering them a bribe to censor the transactions that would modify specific parts of the state. The contract's main function, claimBribe, allows block producers to claim the bribe if they successfully censor the targeted transaction, by checking that the invalid output root has not been touched.

function claimBribe(bytes memory storageProof) external {
    require(!claimed[block.number], "bribe already claimed");
    OutputProposal memory current = storageOracle.getStorage(L2_ORACLE, block.number, SLOT, storageProof);
    require(invalidOutputRoot == current.outputRoot, "attack failed");
    claimed[block.number] = true;
    (bool sent, ) = block.coinbase.call{value: bribeAmount}("");
    require(sent, "failed to send ether");
}

Listing 1: Example of a contract that incentivizes a censorship attack on Bedrock.

4: https://etherscan.io/address/0xc662c410c0ecf747543f5ba90660f6abebd9c8c4

The length of the dispute period must also take into account that the fault proof is an interactive proof: enough time must be provided for participants to interact, and any interaction could be censored. If the last move occurs very close to the end of the dispute period, the cost of censoring is significantly lower. Although censoring is the dominant strategy, the likelihood of success is reduced because censoring nodes are vulnerable to Denial of Service attacks: an attacker can generate very complex transactions that end with the publication of a fault proof at no cost, since no fees would be paid. In extreme cases, a long dispute period allows coordination, in the event of a successful censorship attack, to organize a fork and exclude the attacking block producers. Another possible attack consists in publishing more state root proposals than disputants can verify, which can be avoided using a frequency limit.

4.1.1. Fast optimistic withdrawals

Since the validity of an Optimistic Rollup can be verified at any time by any Full Node, a trusted oracle can be used to know on L1 whether a withdrawal can be finalized safely. This mechanism was first proposed by Maker [21]: an oracle verifies the withdrawal and publishes the result on L1, upon which an interest-bearing loan is assigned to the user; the loan is automatically closed at the end of 7 days, i.e. when the withdrawal can actually be finalized. This solution introduces a trust assumption, but in the case of Maker it is minimized since the oracle is operated by the same organization that assumes the risk by providing the loan.

4.2. Transaction costs

The cost of L2 transactions is mostly determined by the interaction with L1. In both solutions the computational cost of transactions is very low, as execution happens entirely off-chain.
Optimism publishes L2 transactions as calldata and rarely (or never) executes fault proofs; calldata is therefore its most expensive resource. On January 12, 2022 a Bedrock network was launched on Ethereum's Goerli testnet. A gas compression rate can be calculated by tracking the amount of gas used on Bedrock in a certain period and comparing it to the amount of gas spent on L1 for the corresponding blocks. Using this method a gas compression rate of ~20:1 is found, but this figure may differ with real activity on mainnet.

StarkNet publishes every change in L2 state on Ethereum as calldata; storage is therefore its most expensive resource. Since the network does not use the EVM, the transaction cost compression cannot be trivially estimated. By assuming the cost of execution and calldata to be negligible, it is possible to calculate the compression ratio of storage writes compared to L1. Assuming no contract is deployed and 10 cells not previously accessed on StarkNet are modified, a storage write cost compression rate of ~24:1 is found. If a cell is overwritten n times between data publications, the cost of each write will be 1/n of the cost of a single write, since only the last one is published. The cost can be further minimized by compressing frequently used values.

The cost of validity proof verification is divided among the transactions it refers to: for example, StarkNet block 4779 contains 200 transactions and its validity proof consumes 267830 units of gas, or 1339.15 gas per transaction.

4.2.1. Optimizing calldata: cache contract

Presented below is a smart contract that implements a cache for frequently used addresses, taking advantage of the fact that storage and execution are much less expensive resources, along with a Friends contract that demonstrates its use. The latter keeps track of the "friends" of an address, which can be registered by calling the addFriend function. If an address has already been used at least once, it can be added by calling the addFriendWithCache function: the cache indices are 4-byte integers while addresses are 20 bytes, so there is a 5:1 saving on the function argument. The same logic can be used for other data types such as integers or, more generally, bytes.

contract AddressCache {
    mapping(address => uint32) public address2key;
    address[] public key2address;

    function cacheWrite(address _address) internal returns (uint32) {
        require(key2address.length < type(uint32).max, "AddressCache: cache is full");
        require(address2key[_address] == 0, "AddressCache: address already cached");
        // keys must start from 1 because 0 means "not found"
        uint32 key = uint32(key2address.length + 1);
        address2key[_address] = key;
        key2address.push(_address);
        return key;
    }

    function cacheRead(uint32 _key) public view returns (address) {
        require(_key <= key2address.length && _key > 0, "AddressCache: key not found");
        return key2address[_key - 1];
    }
}

Listing 2: Address cache contract.
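The 5:1 figure counts raw argument bytes (a 20-byte address versus a 4-byte key). The corresponding per-argument gas saving can be sketched as follows, assuming tightly packed calldata and EIP-2028 pricing of 16 gas per non-zero byte; note that standard ABI encoding pads both types to 32 bytes, which reduces the realized saving.

```python
# Estimate the calldata saving from replacing a 20-byte address
# argument with a 4-byte cache key. Assumes tightly packed calldata
# and EIP-2028 pricing (16 gas per non-zero calldata byte); standard
# ABI encoding would pad both arguments to 32 bytes.
NONZERO_BYTE_GAS = 16

address_gas = 20 * NONZERO_BYTE_GAS  # 320 gas per address argument
key_gas = 4 * NONZERO_BYTE_GAS       # 64 gas per cache key

print(address_gas // key_gas)  # 5 -> the 5:1 ratio
print(address_gas - key_gas)   # 256 gas saved per argument
```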
contract Friends is AddressCache {
    mapping(address => address[]) public friends;

    function addFriend(address _friend) public {
        friends[msg.sender].push(_friend);
        cacheWrite(_friend);
    }

    function addFriendWithCache(uint32 _friendKey) public {
        friends[msg.sender].push(cacheRead(_friendKey));
    }

    function getFriends() public view returns (address[] memory) {
        return friends[msg.sender];
    }
}

Listing 3: Example of a contract that inherits the address cache.

The contract supports about 4 billion (2^32) cached addresses, and adding one byte to the key raises this to about 1 trillion (2^40).

4.2.2. Optimizing storage: Bloom filters

On StarkNet there are several techniques for minimizing storage usage. If it is not necessary to guarantee the availability of the original data, it is sufficient to save its hash on-chain: this is the mechanism used to save the data of an ERC-721 (NFT) [22], i.e., an IPFS link that resolves the hash of the data if available. For data that is stored multiple times, it is possible to use a look-up table similar to the caching system introduced for Optimism, which requires all values to be saved at least once. For some applications, saving all the values can be avoided by using a Bloom filter [23, 24, 25], i.e., a probabilistic data structure that allows one to know with certainty whether an element does not belong to a set, but admits a small, non-negligible probability of false positives.

A Bloom filter is initialized as an array of m bits set to zero. To add an element, k hash functions with a uniform random distribution are used, each one mapping to a bit of the array that is set to 1. To check whether an element belongs to the set, we run the k hash functions and verify that the k bits are set to 1. In a simple Bloom filter there is no way to distinguish whether an element actually belongs to the set or is a false positive, a probability that grows as the number of entries increases. After inserting n elements:

P[false positive] = (1 − (1 − 1/m)^(kn))^k ≈ (1 − e^(−kn/m))^k

assuming independence of the probability of each bit being set. If n elements (of arbitrary size!) are expected to be included and the tolerated false positive probability is p, the size of the array can be calculated as:

m = −n ln p / (ln 2)^2

while the optimal number of hash functions is:

k = (m/n) ln 2

If we assume 1000 insertions with a tolerance of 1%, the size of the array is 9585 bits with k = 6, while for a tolerance of 0.1% it becomes 14377 bits with k = 9. If a million elements are expected, the size of the array becomes about 1170 kB for 1% and 1775 kB for 0.1%, with the same values of k, since k depends only on p [26].

In a game where players must not be assigned to an opponent they have already challenged, instead of saving in storage the list of past opponents of each player, one can use a Bloom filter. The risk of not challenging some players is often acceptable, and the filter can be reset periodically.
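The sizing formulas above can be sketched and checked against the 1000-element, 1% example:

```python
import math

# Bloom filter sizing from the text: m = -n ln(p) / (ln 2)^2 bits
# and k = (m / n) ln 2 hash functions.
def bloom_parameters(n, p):
    m = -n * math.log(p) / math.log(2) ** 2
    k = (m / n) * math.log(2)
    return m, k

m, k = bloom_parameters(1000, 0.01)
print(round(m))  # 9585 bits, matching the figure in the text
print(k)         # ~6.64 hash functions (the text rounds to 6)
```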
4.3. Ethereum compatibility

The main advantage of being compatible with the EVM and Ethereum is the reuse of all the available tools. Ethereum smart contracts can be published on Optimism without any modification or new audits. Wallets, development and static analysis tools, general analysis tools, indexing tools, and oracles all remain compatible. Ethereum and Solidity have a long history of well-studied vulnerabilities, such as reentrancy attacks, overflows and underflows, flash loans, and oracle manipulations. Because of this, Optimism was able to capture a large amount of value in a short time.

Choosing to adopt a different virtual machine implies having to rebuild an entire ecosystem, with the advantage of greater implementation freedom. StarkNet natively implements account abstraction, a mechanism whereby each account is a smart contract that can implement arbitrary logic as long as it complies with an interface (hence the term abstraction): this allows the use of different digital signature schemes, the ability to change the private key while keeping the same address, or the use of a multisig. The Ethereum community proposed the introduction of this mechanism with EIP-2938 in 2020, but the proposal has remained stale for more than a year as other updates have been given more priority [27].

Another important benefit gained from compatibility is the reuse of existing clients: Optimism uses a version of geth, developed, tested, and maintained since 2014, for its own node, with only ~800 lines of difference. Having a robust client is crucial, as it defines what is accepted as valid in the network. A bug in the implementation of the fault proof system could cause an invalid proof to be accepted as correct, or a correct proof for an invalid block to be rejected, compromising the system. The likelihood of this type of attack can be limited with wider client diversity: in addition to geth, Optimism can reuse the other Ethereum clients already maintained, and the development of another Erigon-based client is already underway. In 2016 a problem in the memory management of geth was exploited for a DoS attack, and the first line of defense was to recommend the use of Parity, the second most used client at the time 5. StarkNet faces the same problem with validity proofs, but its clients have to be written from scratch and the proof system is much more complex; consequently it is also much more complex to ensure correctness.
5. Conclusion
Rollups are the most promising solution available today for solving the scalability problem of decentralized blockchains, paving the way for the era of modular blockchains as opposed to monolithic ones. The choice between developing an Optimistic Rollup or a Validity Rollup is mainly a trade-off between complexity and agility. StarkNet has numerous advantages, such as fast withdrawals, the structural inability to have invalid state transitions, and lower transaction costs, at the expense of a longer development period and incompatibility with the EVM, while Optimism has leveraged the network economy to quickly gain a major share of the market.

Optimism Bedrock, however, possesses a modular design that allows it to become a Validity Rollup in the future: Cannon currently uses minigeth compiled to MIPS for its fault proof system, but the same architecture can be used to obtain a circuit and produce validity proofs. Compiling a complex machine such as the EVM for a microarchitecture results in a simpler circuit that does not need to be modified and re-verified in case of upgrades. RISC Zero is a verifiable microarchitecture with STARK proofs, already in development and based on RISC-V, that can be used for this purpose as an alternative to MIPS [28].

One aspect that should not be underestimated is the complexity of understanding how the technology works. A strength of traditional blockchains is the ability to verify the state of the blockchain without trusting any third party. In the case of StarkNet, however, it is necessary to trust the implementation when it is not possible to verify the various components based on cryptography and advanced mathematics. This may initially create friction for the adoption of the technology, but as the tools and the usage of integrity proofs advance, even outside the blockchain field, this problem will hopefully be solved.

5: https://blog.ethereum.org/2016/09/22/ethereum-network-currently-undergoing-dos-attack